WorldWideScience

Sample records for effort estimation models

  1. Estimation of inspection effort

    International Nuclear Information System (INIS)

    Mullen, M.F.; Wincek, M.A.

    1979-06-01

    An overview of IAEA inspection activities is presented, and the problem of evaluating the effectiveness of an inspection is discussed. Two models are described - an effort model and an effectiveness model. The effort model breaks the IAEA's inspection effort into components; the amount of effort required for each component is estimated; and the total effort is determined by summing the effort for each component. The effectiveness model quantifies the effectiveness of inspections in terms of probabilities of detection and quantities of material to be detected, if diverted over a specific period. The method is applied to a 200 metric ton per year low-enriched uranium fuel fabrication facility. A description of the model plant is presented, a safeguards approach is outlined, and sampling plans are calculated. The required inspection effort is estimated and the results are compared to IAEA estimates. Some other applications of the method are discussed briefly. Examples are presented which demonstrate how the method might be useful in formulating guidelines for inspection planning and in establishing technical criteria for safeguards implementation.
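    A minimal numeric sketch of the two-model structure this record describes; component names and all numbers are illustrative, not from the paper:

    ```python
    # Effort model: total inspection effort is the sum of per-component efforts.
    effort_components = {
        "records audit": 12.0,       # person-days, illustrative
        "item counting": 8.0,
        "NDA measurements": 20.0,
        "report preparation": 5.0,
    }
    total_effort = sum(effort_components.values())

    # Effectiveness model: probability of detecting a diversion, here
    # approximated by independent verification of a fraction f of items
    # when d items would be affected by the postulated diversion.
    f = 0.25  # fraction of items verified (assumed)
    d = 6     # items affected by the diversion (assumed)
    p_detect = 1 - (1 - f) ** d

    print(f"total effort: {total_effort} person-days, P(detect) = {p_detect:.2f}")
    ```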

  2. Estimation of total Effort and Effort Elapsed in Each Step of Software Development Using Optimal Bayesian Belief Network

    Directory of Open Access Journals (Sweden)

    Fatemeh Zare Baghiabad

    2017-09-01

    The need for accuracy in estimating the effort required for software development makes software effort estimation a challenging issue. Besides estimating total effort, determining the effort elapsed in each software development step is very important, because mistakes in enterprise resource planning can lead to project failure. In this paper, a Bayesian belief network is proposed based on effective components and the software development process. In this model, feedback loops are considered between development steps, with return rates that differ from project to project. The different return rates make it possible to determine the percentage of effort elapsed in each software development step distinctly. Moreover, the error measure resulting from optimized effort estimation and the optimal coefficients for modifying the model are sought. A comparison between the proposed model and other models showed that the model can estimate the total effort with high accuracy (a marginal error of about 0.114) and can estimate the effort elapsed in each software development step.
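    The core arithmetic of feedback loops with per-step return rates can be sketched in a few lines; the steps and rates below are hypothetical, not the paper's optimized coefficients:

    ```python
    import numpy as np

    # Hypothetical 4-step process: requirements, design, coding, testing.
    base_effort = np.array([10.0, 20.0, 40.0, 30.0])  # person-days per single pass
    # return_rate[i]: probability that step i is repeated via a feedback loop,
    # so the expected number of passes through step i is 1 / (1 - r_i).
    return_rate = np.array([0.10, 0.20, 0.30, 0.25])

    expected_passes = 1.0 / (1.0 - return_rate)
    elapsed = base_effort * expected_passes   # expected effort per step
    share = elapsed / elapsed.sum()           # fraction of total effort per step

    for name, s in zip(["requirements", "design", "coding", "testing"], share):
        print(f"{name}: {s:.1%}")
    ```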

  3. AN ENHANCED MODEL TO ESTIMATE EFFORT, PERFORMANCE AND COST OF THE SOFTWARE PROJECTS

    Directory of Open Access Journals (Sweden)

    M. Pauline

    2013-04-01

    The authors propose a model that first captures the fundamentals of software metrics in phase 1, consisting of three primitive primary software engineering metrics: person-months (PM), function points (FP), and lines of code (LOC). Phase 2 consists of the proposed function point, which is obtained by grouping the adjustment factors to simplify the adjustment process and to ensure more consistency in the adjustments. In the proposed method, fuzzy logic is used to quantify the quality of requirements, which is added as one of the adjustment factors; thus a fuzzy-based approach for the Enhanced General System Characteristics to estimate effort of software projects using productivity has been obtained. Phase 3 takes the calculated function point from this work and gives it as input to the static single-variable model (i.e., to Intermediate COCOMO and COCOMO II) for cost estimation. The authors have tailored the cost factors in Intermediate COCOMO, and both cost and scale factors in COCOMO II, to suit the individual development environment, which is very important for the accuracy of the cost estimates. The software performance indicators (project duration, schedule predictability, requirements completion ratio and post-release defect density) are also measured for the software projects in this work. A comparative study for effort, performance measurement and cost estimation of the software project is done between the existing model and the proposed work. Thus this work analyzes the interactional process through which the estimation tasks were collectively accomplished.
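    A hedged sketch of the record's three-phase pipeline (adjusted function points, backfired LOC, Intermediate-COCOMO-style effort), using textbook constants rather than the authors' tailored ones; all inputs are illustrative:

    ```python
    # Phase 2: adjusted function points from general system characteristics.
    ufp = 320                                          # unadjusted FP (assumed)
    gsc = [3, 4, 2, 5, 3, 4, 1, 2, 3, 4, 2, 3, 4, 5]   # 14 GSC ratings, 0-5 (assumed)
    vaf = 0.65 + 0.01 * sum(gsc)                       # value adjustment factor
    afp = ufp * vaf                                    # adjusted function points

    # Phase 3: backfire to LOC and feed an Intermediate-COCOMO-style model.
    loc_per_fp = 53                   # illustrative backfiring ratio (e.g. Java)
    kloc = afp * loc_per_fp / 1000.0
    a, b = 3.0, 1.12                  # Intermediate COCOMO, semi-detached mode
    eaf = 1.10                        # product of cost-driver multipliers (assumed)
    effort_pm = a * kloc ** b * eaf   # effort in person-months
    print(f"AFP={afp:.0f}, KLOC={kloc:.1f}, effort = {effort_pm:.0f} PM")
    ```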

  4. APPLYING TEACHING-LEARNING TO ARTIFICIAL BEE COLONY FOR PARAMETER OPTIMIZATION OF SOFTWARE EFFORT ESTIMATION MODEL

    Directory of Open Access Journals (Sweden)

    THANH TUNG KHUAT

    2017-05-01

    Artificial Bee Colony, inspired by the foraging behaviour of honey bees, is a novel meta-heuristic optimization algorithm in the community of swarm intelligence algorithms. Nevertheless, it is still insufficient in speed of convergence and quality of solutions. This paper proposes an approach to tackle these downsides by combining the positive aspects of Teaching-Learning-based optimization and Artificial Bee Colony. The performance of the proposed method is assessed on the software effort estimation problem, a complex and important issue in project management. Software developers often carry out effort estimation in the early stages of the software development life cycle to derive the required cost and schedule for a project. There are a large number of methods for effort estimation, of which COCOMO II is one of the most widely used models. However, this model has some restrictions because its parameters have not been optimized. In this work, therefore, we present an approach to overcome this limitation of the COCOMO II model. Experiments were conducted on a NASA software project dataset, and the obtained results indicate that the improved parameters provide better estimation capabilities compared to the original COCOMO II model.
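    The optimization target can be made concrete with a small sketch: a population-based optimizer (here a crude random search standing in for the ABC/teaching-learning hybrid) tunes the COCOMO II constants A and B to minimize the mean magnitude of relative error (MMRE) on historical projects. All data values are invented:

    ```python
    import numpy as np

    def mmre(params, size_kloc, sf_sum, em_prod, actual_pm):
        """MMRE of COCOMO II post-architecture estimates for constants A, B."""
        A, B = params
        E = B + 0.01 * sf_sum                     # scale-factor-driven exponent
        predicted = A * size_kloc ** E * em_prod
        return np.mean(np.abs(actual_pm - predicted) / actual_pm)

    # Toy stand-in for the NASA project data (illustrative values only).
    size = np.array([10.0, 25.0, 60.0])       # KLOC
    sf = np.array([18.0, 20.0, 16.0])         # sum of scale factors
    em = np.array([1.0, 1.2, 0.9])            # product of effort multipliers
    actual = np.array([40.0, 140.0, 300.0])   # person-months

    rng = np.random.default_rng(0)
    candidates = rng.uniform([1.0, 0.7], [4.0, 1.1], size=(500, 2))
    best_err, best_ab = min((mmre(c, size, sf, em, actual), tuple(c)) for c in candidates)
    print(f"best MMRE {best_err:.3f} at A={best_ab[0]:.2f}, B={best_ab[1]:.2f}")
    ```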

  5. An Accurate FFPA-PSR Estimator Algorithm and Tool for Software Effort Estimation

    Directory of Open Access Journals (Sweden)

    Senthil Kumar Murugesan

    2015-01-01

    Software companies are now keen to provide secure software with accuracy and reliability in their products, especially with respect to software effort estimation. There is therefore a need to develop a hybrid tool which provides all the necessary features. This paper proposes a hybrid estimator algorithm and model which incorporates quality metrics, a reliability factor, and a security factor within a fuzzy-based function point analysis. Initially, the method uses a fuzzy-based estimate to control the uncertainty in software size with the help of a triangular fuzzy set at the early development stage. Secondly, the function point analysis is extended by the security and reliability factors in the calculation. Finally, the performance metrics are added to the effort estimation for accuracy. The experimentation is done with different project datasets on the hybrid tool, and the results are compared with existing models. It shows that the proposed method not only improves accuracy but also increases the reliability, as well as the security, of the product.
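    The triangular-fuzzy size idea reduces to a few lines: early size is carried as a (low, mode, high) triple and defuzzified before the function-point calculation. Values below are illustrative:

    ```python
    # Centroid defuzzification of a triangular fuzzy size estimate.
    low, mode, high = 280.0, 320.0, 390.0    # pessimistic/likely/optimistic FP
    crisp_size = (low + mode + high) / 3.0   # centroid of a triangular number
    print(f"crisp size estimate: {crisp_size:.0f} FP")
    ```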

  6. Practitioner's knowledge representation a pathway to improve software effort estimation

    CERN Document Server

    Mendes, Emilia

    2014-01-01

    The main goal of this book is to help organizations improve their effort estimates and effort estimation processes by providing a step-by-step methodology that takes them through the creation and validation of models that are based on their own knowledge and experience. Such models, once validated, can then be used to obtain predictions, carry out risk analyses, enhance their estimation processes for new projects and generally advance them as learning organizations. Emilia Mendes presents the Expert-Based Knowledge Engineering of Bayesian Networks (EKEBNs) methodology, which she has used and adapted during the course of several industry collaborations with different companies world-wide over more than 6 years. The book itself consists of two major parts: first, the methodology's foundations in knowledge management, effort estimation (with special emphasis on the intricacies of software and Web development) and Bayesian networks are detailed; then six industry case studies are presented which illustrate the pra...

  7. Software project effort estimation foundations and best practice guidelines for success

    CERN Document Server

    Trendowicz, Adam

    2014-01-01

    Software effort estimation is one of the oldest and most important problems in software project management, and thus today there are a large number of models, each with its own unique strengths and weaknesses in general, and even more importantly, in relation to the environment and context in which it is to be applied. Trendowicz and Jeffery present a comprehensive look at the principles of software effort estimation and support software practitioners in systematically selecting and applying the most suitable effort estimation approach. Their book not only presents what approach to take and how

  8. A model to estimate cost-savings in diabetic foot ulcer prevention efforts.

    Science.gov (United States)

    Barshes, Neal R; Saedi, Samira; Wrobel, James; Kougias, Panos; Kundakcioglu, O Erhun; Armstrong, David G

    2017-04-01

    Sustained efforts at preventing diabetic foot ulcers (DFUs) and subsequent leg amputations are sporadic in most health care systems despite the high costs associated with such complications. We sought to estimate effectiveness targets at which cost-savings (i.e. improved health outcomes at decreased total costs) might occur. A Markov model with probabilistic sensitivity analyses was used to simulate the five-year survival, incidence of foot complications, and total health care costs in a hypothetical population of 100,000 people with diabetes. Clinical event and cost estimates were obtained from previously published trials and studies. A population without previous DFU but with 17% neuropathy and 11% peripheral artery disease (PAD) prevalence was assumed. Primary prevention (PP) was defined as reducing initial DFU incidence. PP was more than 90% likely to provide cost-savings when annual prevention costs are less than $50/person and/or annual DFU incidence is reduced by at least 25%. Efforts directed at patients with diabetes who were at moderate or high risk for DFUs were very likely to provide cost-savings if DFU incidence was decreased by at least 10% and/or the cost was less than $150 per person per year. Low-cost DFU primary prevention efforts producing even small decreases in DFU incidence may provide the best opportunity for cost-savings, especially if focused on patients with neuropathy and/or PAD. Mobile phone-based reminders, self-identification of risk factors (e.g., the Ipswich touch test), and written brochures may be among such low-cost interventions that should be investigated for cost-savings potential.

  9. The Influence of Mark-Recapture Sampling Effort on Estimates of Rock Lobster Survival.

    Directory of Open Access Journals (Sweden)

    Ziya Kordjazi

    Five annual capture-mark-recapture surveys on Jasus edwardsii were used to evaluate the effect of sample size and fishing effort on the precision of estimated survival probability. Datasets of different numbers of individual lobsters (ranging from 200 to 1,000 lobsters) were created by random subsampling from each annual survey. This process of random subsampling was also used to create 12 datasets of different levels of effort based on three levels of the number of traps (15, 30 and 50 traps per day) and four levels of the number of sampling-days (2, 4, 6 and 7 days). The most parsimonious Cormack-Jolly-Seber (CJS) model for estimating survival probability shifted from a constant model towards sex-dependent models with increasing sample size and effort. A sample of 500 lobsters or 50 traps used on four consecutive sampling-days was required for obtaining precise survival estimations for males and females, separately. Reduced sampling effort of 30 traps over four sampling days was sufficient if a survival estimate for both sexes combined was sufficient for management of the fishery.

  10. The Influence of Mark-Recapture Sampling Effort on Estimates of Rock Lobster Survival

    Science.gov (United States)

    Kordjazi, Ziya; Frusher, Stewart; Buxton, Colin; Gardner, Caleb; Bird, Tomas

    2016-01-01

    Five annual capture-mark-recapture surveys on Jasus edwardsii were used to evaluate the effect of sample size and fishing effort on the precision of estimated survival probability. Datasets of different numbers of individual lobsters (ranging from 200 to 1,000 lobsters) were created by random subsampling from each annual survey. This process of random subsampling was also used to create 12 datasets of different levels of effort based on three levels of the number of traps (15, 30 and 50 traps per day) and four levels of the number of sampling-days (2, 4, 6 and 7 days). The most parsimonious Cormack-Jolly-Seber (CJS) model for estimating survival probability shifted from a constant model towards sex-dependent models with increasing sample size and effort. A sample of 500 lobsters or 50 traps used on four consecutive sampling-days was required for obtaining precise survival estimations for males and females, separately. Reduced sampling effort of 30 traps over four sampling days was sufficient if a survival estimate for both sexes combined was sufficient for management of the fishery. PMID:26990561

  11. Effort Estimation in BPMS Migration

    OpenAIRE

    Drews, Christopher; Lantow, Birger

    2018-01-01

    Usually Business Process Management Systems (BPMS) are highly integrated in the IT of organizations and are at the core of their business. Thus, migrating from one BPMS solution to another is not a common task. However, there are forces that are pushing organizations to perform this step, e.g. maintenance costs of legacy BPMS or the need for additional functionality. Before the actual migration, the risk and the effort must be evaluated. This work provides a framework for effort estimation re...

  12. Effort Estimation in BPMS Migration

    Directory of Open Access Journals (Sweden)

    Christopher Drews

    2018-04-01

    Usually Business Process Management Systems (BPMS) are highly integrated in the IT of organizations and are at the core of their business. Thus, migrating from one BPMS solution to another is not a common task. However, there are forces that are pushing organizations to perform this step, e.g. maintenance costs of legacy BPMS or the need for additional functionality. Before the actual migration, the risk and the effort must be evaluated. This work provides a framework for effort estimation regarding the technical aspects of BPMS migration. The framework provides questions for BPMS comparison and an effort evaluation schema. The applicability of the framework is evaluated based on a simplified BPMS migration scenario.

  13. Supporting Analogy-based Effort Estimation with the Use of Ontologies

    Directory of Open Access Journals (Sweden)

    Joanna Kowalska

    2014-06-01

    The paper concerns effort estimation of software development projects, in particular at the level of product delivery stages. It proposes a new approach to modeling project data to support expert-supervised analogy-based effort estimation. The data is modeled using Semantic Web technologies, such as the Resource Description Framework (RDF) and the Ontology Language for the Web (OWL). Moreover, in the paper, we define a method of supervised case-based reasoning. The method enables searching for similar project tasks at different levels of abstraction. For instance, instead of searching for a task performed by a specific person, one could look for tasks performed by people with similar capabilities. The proposed method relies on an ontology that defines the core concepts and relationships. However, it is possible to introduce new classes and relationships without the need of altering the search mechanisms. Finally, we implemented a prototype tool that was used to preliminarily validate the proposed approach. We observed that the proposed approach could potentially help experts in estimating non-trivial tasks that are often underestimated.

  14. Impact of Base Functional Component Types on Software Functional Size based Effort Estimation

    OpenAIRE

    Gencel, Cigdem; Buglione, Luigi

    2008-01-01

    Software effort estimation is still a significant challenge for software management. Although Functional Size Measurement (FSM) methods have been standardized and have become widely used by the software organizations, the relationship between functional size and development effort still needs further investigation. Most of the studies focus on the project cost drivers and consider total software functional size as the primary input to estimation models. In this study, we investigate whether u...

  15. NASA Software Cost Estimation Model: An Analogy Based Estimation Model

    Science.gov (United States)

    Hihn, Jairus; Juster, Leora; Menzies, Tim; Mathew, George; Johnson, James

    2015-01-01

    The cost estimation of software development activities is increasingly critical for large-scale integrated projects such as those at DOD and NASA, especially as the software systems become larger and more complex. As an example, MSL (Mars Science Laboratory), developed at the Jet Propulsion Laboratory, launched with over 2 million lines of code, making it the largest robotic spacecraft ever flown (based on the size of its software). Software development activities are also notorious for their cost growth, with NASA flight software averaging over 50% cost growth. All across the agency, estimators and analysts are increasingly being tasked to develop reliable cost estimates in support of program planning and execution. While there has been extensive work on improving parametric methods, there is very little focus on the use of models based on analogy and clustering algorithms. In this paper we summarize our findings on effort/cost model estimation and model development based on ten years of software effort estimation research using data mining and machine learning methods to develop estimation models based on analogy and clustering. The NASA Software Cost Model's performance is evaluated by comparing it to COCOMO II, linear regression, and K-nearest neighbor prediction model performance on the same data set.
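    The analogy side of such a model can be illustrated with a minimal k-nearest-neighbor estimator; the feature set and data are hypothetical, not the NASA records:

    ```python
    import numpy as np

    def knn_effort(query, projects, efforts, k=3):
        """Analogy-based estimate: mean effort of the k most similar past
        projects, with features standardized so no attribute dominates."""
        X = np.asarray(projects, dtype=float)
        mu, sigma = X.mean(axis=0), X.std(axis=0)
        q = (np.asarray(query, dtype=float) - mu) / sigma
        dist = np.linalg.norm((X - mu) / sigma - q, axis=1)
        nearest = np.argsort(dist)[:k]
        return float(np.mean(np.asarray(efforts, dtype=float)[nearest]))

    # Illustrative feature vectors: [KLOC, team size, complexity rating]
    past = [[12, 4, 2], [30, 8, 3], [7, 3, 1], [55, 12, 4], [18, 5, 2]]
    effort_pm = [35, 110, 16, 260, 60]
    print(knn_effort([20, 6, 2], past, effort_pm, k=3))  # person-months
    ```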

  16. Towards an Early Software Effort Estimation Based on Functional and Non-Functional Requirements

    Science.gov (United States)

    Kassab, Mohamed; Daneva, Maya; Ormandjieva, Olga

    The increased awareness of the non-functional requirements as a key to software project and product success makes explicit the need to include them in any software project effort estimation activity. However, the existing approaches to defining size-based effort relationships still pay insufficient attention to this need. This paper presents a flexible, yet systematic approach to the early requirements-based effort estimation, based on Non-Functional Requirements ontology. It complementarily uses one standard functional size measurement model and a linear regression technique. We report on a case study which illustrates the application of our solution approach in context and also helps evaluate our experiences in using it.

  17. Glass Property Models and Constraints for Estimating the Glass to be Produced at Hanford by Implementing Current Advanced Glass Formulation Efforts

    Energy Technology Data Exchange (ETDEWEB)

    Vienna, John D. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Kim, Dong-Sang [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Skorski, Daniel C. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Matyas, Josef [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2013-07-01

    Recent glass formulation and melter testing data have suggested that significant increases in waste loading in HLW and LAW glasses are possible over current system planning estimates. The data (although limited in some cases) were evaluated to determine a set of constraints and models that could be used to estimate the maximum loading of specific waste compositions in glass. It is recommended that these models and constraints be used to estimate the likely HLW and LAW glass volumes that would result if the current glass formulation studies are successfully completed. It is recognized that some of the models are preliminary in nature and will change in the coming years. Moreover, the models do not currently address the prediction uncertainties that would be needed before they could be used in plant operations. The models and constraints are only meant to give an indication of rough glass volumes and are not intended to be used in plant operation or waste form qualification activities. A current research program is in place to develop the data, models, and uncertainty descriptions for that purpose. A fundamental tenet underlying the research reported in this document is to be less conservative than previous studies when developing constraints for estimating the glass to be produced by implementing current advanced glass formulation efforts. The less conservative approach documented herein should allow for estimates of the glass masses that may be realized if the current efforts in advanced glass formulations are completed over the coming years and are as successful as early indications suggest. Because of this approach, there is an unquantifiable uncertainty in the ultimate glass volume projections due to model prediction uncertainties, which has to be considered along with other system uncertainties such as the waste compositions and amounts to be immobilized, split factors between LAW and HLW, etc.

  18. Amplitude Models for Discrimination and Yield Estimation

    Energy Technology Data Exchange (ETDEWEB)

    Phillips, William Scott [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-09-01

    This seminar presentation describes amplitude models and yield estimations that look at the data in order to inform legislation. The following points were brought forth in the summary: global models that will predict three-component amplitudes (R-T-Z) were produced; Q models match regional geology; corrected source spectra can be used for discrimination and yield estimation; three-component data increase coverage and reduce scatter in source spectral estimates; three-component efforts must include distance-dependent effects; a community effort on instrument calibration is needed.

  19. Effort estimation for enterprise resource planning implementation projects using social choice - a comparative study

    Science.gov (United States)

    Koch, Stefan; Mitlöhner, Johann

    2010-08-01

    ERP implementation projects have received enormous attention in recent years, due to their importance for organisations, as well as the costs and risks involved. The estimation of effort and costs associated with new projects is therefore an important topic. Unfortunately, there is still a lack of models that can cope with the special characteristics of these projects. As the main focus lies in adapting and customising a complex system, and even changing the organisation, traditional models like COCOMO cannot easily be applied. In this article, we apply effort estimation based on social choice in this context. Social choice deals with aggregating the preferences of a number of voters into a collective preference, and we apply this idea by substituting the voters with project attributes. Therefore, instead of supplying numeric values for various project attributes, a new project only needs to be placed into rankings per attribute, necessitating only ordinal values, and the resulting aggregate ranking can be used to derive an estimation. We describe the estimation process using a data set of 39 projects, and compare the results to other approaches proposed in the literature.
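    The voting analogy can be sketched with a Borda count, where each project attribute "votes" by ranking the projects and the new project's position in the aggregate ranking anchors its estimate; the attributes and rankings below are invented:

    ```python
    # Each attribute ranks all projects, best (smallest value) first;
    # only ordinal information is needed, never numeric attribute values.
    projects = ["A", "B", "C", "NEW"]
    rankings = {
        "users":      ["A", "NEW", "B", "C"],
        "interfaces": ["NEW", "A", "C", "B"],
        "sites":      ["A", "B", "NEW", "C"],
    }

    borda = {p: 0 for p in projects}
    for ranked in rankings.values():
        for position, p in enumerate(ranked):       # position 0 = best
            borda[p] += len(ranked) - 1 - position  # Borda points

    aggregate = sorted(projects, key=lambda p: -borda[p])
    print(borda, aggregate)  # NEW's neighbours in the aggregate anchor its effort
    ```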

  20. SEffEst: Effort estimation in software projects using fuzzy logic and neural networks

    Directory of Open Access Journals (Sweden)

    Israel

    2012-08-01

    Academia and practitioners confirm that software project effort prediction is crucial for accurate software project management. However, software development effort estimation is uncertain by nature. The literature has developed methods to improve estimation correctness, in many cases using artificial intelligence techniques. Following this path, this paper presents SEffEst, a framework based on fuzzy logic and neural networks designed to increase effort estimation accuracy in software development projects. Trained on ISBSG data, SEffEst presents remarkable results in terms of prediction accuracy.

  1. Use of probabilistic methods for estimating failure probabilities and directing ISI-efforts

    Energy Technology Data Exchange (ETDEWEB)

    Nilsson, F; Brickstad, B [University of Uppsala (Sweden)]

    1988-12-31

    Some general aspects of the role of Non-Destructive Testing (NDT) efforts on the resulting probability of core damage are discussed. A simple model for the estimation of the pipe break probability due to IGSCC is presented. It is based partly on analytical procedures and partly on service experience from the Swedish BWR program. Estimates of the break probabilities indicate that further studies are urgently needed. It is found that the uncertainties about the initial crack configuration are large contributors to the total uncertainty. Some effects of the in-service inspection are studied, and it is found that the detection probabilities influence the failure probabilities. (authors).

  2. Eksperimen Seleksi Fitur Pada Parameter Proyek Untuk Software Effort Estimation dengan K-Nearest Neighbor

    Directory of Open Access Journals (Sweden)

    Fachruddin Fachruddin

    2017-07-01

    Software effort estimation is the process of estimating software cost, an important activity in carrying out software projects. Various previous studies have estimated software effort with a range of methods, both machine learning and non-machine-learning. This study conducts a set of attribute selection experiments on project parameters, using the k-nearest neighbours technique for the estimation, performing attribute selection with information gain and mutual information, and examining how to find the most representative project parameters for software effort estimation. The software effort estimation datasets used in the experiments are albrecht, china, kemerer and miyazaki94, which can be obtained from the dedicated Software Effort Estimation data repository at http://openscience.us/repo/effort/. The researchers then built an attribute selection application to select the project parameters. This system produces arff datasets containing the selected attributes. The application was built in Java using the NetBeans IDE. The generated datasets, containing the parameters retained by the selection, were then compared when performing software effort estimation with the WEKA tool. Feature selection succeeded in lowering the estimation error (represented by the RAE and RMSE values); the lower the error (RAE and RMSE), the more accurate the resulting estimate. Estimates improved after feature selection with both information gain and mutual information. From the resulting error values, it can be concluded that the datasets produced by feature selection with information gain are better than those produced with mutual information, although the difference between the two is not significant.
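    A minimal sketch of the information-gain scoring used for the attribute selection, with toy discretized data rather than the albrecht/china/kemerer/miyazaki94 sets:

    ```python
    import numpy as np
    from collections import Counter

    def entropy(labels):
        counts = np.array(list(Counter(labels).values()), dtype=float)
        p = counts / counts.sum()
        return float(-np.sum(p * np.log2(p)))

    def information_gain(feature, labels):
        """Entropy reduction of the effort class given one project attribute."""
        gain = entropy(labels)
        for value in set(feature):
            subset = [l for f, l in zip(feature, labels) if f == value]
            gain -= len(subset) / len(labels) * entropy(subset)
        return gain

    # Toy discretized data: attribute bins vs. a low/high effort class.
    attr = ["s", "s", "m", "m", "l", "l"]
    effort_class = ["lo", "lo", "lo", "hi", "hi", "hi"]
    print(information_gain(attr, effort_class))  # ~0.667 bits
    ```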

  3. A technique for estimating maximum harvesting effort in a stochastic ...

    Indian Academy of Sciences (India)

    Unknown

    Estimation of maximum harvesting effort has a great impact on the ... fluctuating environment has been developed in a two-species competitive system, which shows that under realistic .... The existence and local stability properties of the equi-.

  4. Creel survey sampling designs for estimating effort in short-duration Chinook salmon fisheries

    Science.gov (United States)

    McCormick, Joshua L.; Quist, Michael C.; Schill, Daniel J.

    2013-01-01

    Chinook Salmon Oncorhynchus tshawytscha sport fisheries in the Columbia River basin are commonly monitored using roving creel survey designs and require precise, unbiased catch estimates. The objective of this study was to examine the relative bias and precision of total catch estimates using various sampling designs to estimate angling effort under the assumption that mean catch rate was known. We obtained information on angling populations based on direct visual observations of portions of Chinook Salmon fisheries in three Idaho river systems over a 23-d period. Based on the angling population, Monte Carlo simulations were used to evaluate the properties of effort and catch estimates for each sampling design. All sampling designs evaluated were relatively unbiased. Systematic random sampling (SYS) resulted in the most precise estimates. The SYS and simple random sampling designs had mean square error (MSE) estimates that were generally half of those observed with cluster sampling designs. The SYS design was more efficient (i.e., higher accuracy per unit cost) than a two-cluster design. Increasing the number of clusters available for sampling within a day decreased the MSE of estimates of daily angling effort, but the MSE of total catch estimates was variable depending on the fishery. The results of our simulations provide guidelines on the relative influence of sample sizes and sampling designs on parameters of interest in short-duration Chinook Salmon fisheries.
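    The simulation logic can be sketched as follows: treat a day's complete effort counts as truth, then compare the bias and mean squared error of simple random (SRS) and systematic random (SYS) sampling estimators over many replicates. The population below is synthetic, not the Idaho data, so it only demonstrates the estimator mechanics:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    # Synthetic "true" angler-hours in 48 half-hour count periods of one day.
    effort_true = np.clip(rng.normal(20, 8, size=48), 0, None)
    total = effort_true.sum()
    n = 8  # count periods sampled per day

    def expand(idx):
        """Expand the sample mean to a daily effort estimate."""
        return effort_true[idx].mean() * effort_true.size

    srs, sys_ = [], []
    step = effort_true.size // n
    for _ in range(10_000):
        srs.append(expand(rng.choice(effort_true.size, size=n, replace=False)))
        start = rng.integers(0, step)  # random start, then every step-th period
        sys_.append(expand(np.arange(start, effort_true.size, step)))

    for name, est in [("SRS", np.array(srs)), ("SYS", np.array(sys_))]:
        print(f"{name}: bias={est.mean() - total:+.2f}, "
              f"MSE={np.mean((est - total) ** 2):.0f}")
    ```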

  5. Relative contributions of sampling effort, measuring, and weighing to precision of larval sea lamprey biomass estimates

    Science.gov (United States)

    Slade, Jeffrey W.; Adams, Jean V.; Cuddy, Douglas W.; Neave, Fraser B.; Sullivan, W. Paul; Young, Robert J.; Fodale, Michael F.; Jones, Michael L.

    2003-01-01

    We developed two weight-length models from 231 populations of larval sea lampreys (Petromyzon marinus) collected from tributaries of the Great Lakes: Lake Ontario (21), Lake Erie (6), Lake Huron (67), Lake Michigan (76), and Lake Superior (61). Both models were mixed models, which used population as a random effect and additional environmental factors as fixed effects. We resampled weights and lengths 1,000 times from data collected in each of 14 other populations not used to develop the models, obtaining a weight and length distribution from each resampling. To test model performance, we applied the two weight-length models to the resampled length distributions and calculated the predicted mean weights. We also calculated the observed mean weight for each resampling and for each of the original 14 data sets. When the average of predicted means was compared to means from the original data in each stream, inclusion of environmental factors did not consistently improve the performance of the weight-length model. We estimated the variance associated with measures of abundance and mean weight for each of the 14 selected populations and determined that a conservative estimate of the proportional contribution to variance associated with estimating abundance accounted for 32% to 95% of the variance (mean = 66%). Variability in the biomass estimate appears more affected by variability in estimating abundance than in converting length to weight. Hence, efforts to improve the precision of biomass estimates would be aided most by reducing the variability associated with estimating abundance.

  6. Parameter Estimates in Differential Equation Models for Chemical Kinetics

    Science.gov (United States)

    Winkel, Brian

    2011-01-01

    We discuss the need for devoting time in differential equations courses to modelling and the completion of the modelling process with efforts to estimate the parameters in the models using data. We estimate the parameters present in several differential equation models of chemical reactions of order n, where n = 0, 1, 2, and apply more general…
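    A compact example of the workflow the abstract advocates: fit the rate constant of a first-order reaction (n = 1) to noisy data; the same pattern applies for n = 0 or 2. All values are synthetic:

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import curve_fit

    def model(t, k, c0=1.0):
        """Numerically integrate dC/dt = -k*C and sample it at times t."""
        sol = solve_ivp(lambda _, c: -k * c, (t[0], t[-1]), [c0], t_eval=t)
        return sol.y[0]

    t = np.linspace(0, 5, 20)
    true_k = 0.8
    data = np.exp(-true_k * t) + np.random.default_rng(2).normal(0, 0.02, t.size)

    k_hat, cov = curve_fit(model, t, data, p0=[0.5])  # fits k only
    print(f"estimated k = {k_hat[0]:.3f} +/- {np.sqrt(cov[0, 0]):.3f}")
    ```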

  7. Investigating the Nature of Relationship between Software Size and Development Effort

    OpenAIRE

    Bajwa, Sohaib-Shahid

    2008-01-01

    Software effort estimation still remains a challenging and debatable research area. Most of the software effort estimation models take software size as the base input. Among the others, Constructive Cost Model (COCOMO II) is a widely known effort estimation model. It uses Source Lines of Code (SLOC) as the software size to estimate effort. However, many problems arise while using SLOC as a size measure due to its late availability in the software life cycle. Therefore, a lot of research has b...

  8. A Method for A Priori Implementation Effort Estimation for Hardware Design

    DEFF Research Database (Denmark)

    Abildgren, Rasmus; Diguet, Jean-Philippe; Gogniat, Guy

    2008-01-01

    This paper presents a metric-based approach for estimating the hardware implementation effort (in terms of time) for an application in relation to the number of independent paths of its algorithms. We define a metric which exploits the relation between the number of independent paths in an algori... facilitating designers' and managers' needs for estimating the time-to-market schedule....

  9. Machine Learning Approach for Software Reliability Growth Modeling with Infinite Testing Effort Function

    Directory of Open Access Journals (Sweden)

    Subburaj Ramasamy

    2017-01-01

    Reliability is one of the quantifiable software quality attributes. Software Reliability Growth Models (SRGMs) are used to assess the reliability achieved at different times of testing. Traditional time-based SRGMs may not be accurate enough in all situations where test effort varies with time. To overcome this lacuna, test effort was used instead of time in SRGMs. In the past, finite test effort functions were proposed, which may not be realistic since, at infinite testing time, test effort will be infinite. Hence in this paper, we propose an infinite test effort function in conjunction with a classical Nonhomogeneous Poisson Process (NHPP) model. We use an Artificial Neural Network (ANN) for training the proposed model with software failure data. Here it is possible to get a large set of weights for the same model that describe the past failure data equally well. We use a machine learning approach to select the appropriate set of weights for the model which will describe both the past and the future data well. We compare the performance of the proposed model with an existing model using practical software failure data sets. The proposed log-power TEF-based SRGM describes all types of failure data equally well, improves the accuracy of parameter estimation over existing TEFs, and can be used for software release time determination as well.
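    One common log-power form makes the proposal concrete. The sketch below assumes W(t) = alpha * ln(1+t)^beta as the infinite test-effort function inside a classical NHPP mean value function; the constants are illustrative, not fitted:

    ```python
    import numpy as np

    def log_power_tef(t, alpha, beta):
        """Infinite test-effort function: grows without bound as t -> infinity."""
        return alpha * np.log1p(t) ** beta

    def expected_failures(t, a, b, alpha, beta):
        """NHPP mean value function with effort in place of raw time:
        m(t) = a * (1 - exp(-b * W(t)))."""
        return a * (1.0 - np.exp(-b * log_power_tef(t, alpha, beta)))

    t = np.linspace(0, 100, 5)
    print(expected_failures(t, a=120, b=0.05, alpha=10, beta=1.5))
    ```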

  10. Towards an Early Software Effort Estimation Based on Functional and Non-Functional Requirements

    NARCIS (Netherlands)

    Kassab, M.; Daneva, Maia; Ormanjieva, Olga; Abran, A.; Braungarten, R.; Dumke, R.; Cuadrado-Gallego, J.; Brunekreef, J.

    2009-01-01

    The increased awareness of the non-functional requirements as a key to software project and product success makes explicit the need to include them in any software project effort estimation activity. However, the existing approaches to defining size-based effort relationships still pay insufficient attention to this need.

  11. A Priori Implementation Effort Estimation for HW Design Based on Independent-Path Analysis

    DEFF Research Database (Denmark)

    Abildgren, Rasmus; Diguet, Jean-Philippe; Bomel, Pierre

    2008-01-01

    This paper presents a metric-based approach for estimating the hardware implementation effort (in terms of time) for an application in relation to the number of linear-independent paths of its algorithms. We exploit the relation between the number of edges and linear-independent paths in an algorithm and the corresponding implementation effort. We propose an adaptation of the concept of cyclomatic complexity, complemented with a correction function to take designers' learning curve and experience into account. Our experimental results, composed of a training and a validation phase, show that with the proposed approach it is possible to estimate the hardware implementation effort. This approach, part of our light design space exploration concept, is implemented in our framework "Design-Trotter" and offers a new type of tool that can help designers and managers to reduce the time-to-market factor.
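    The metric underlying both this record and record 8 above is cyclomatic complexity. A hedged sketch of how it could feed an effort figure follows; the paper's actual correction function is fitted to designer data, and the constants here are invented:

    ```python
    def cyclomatic_complexity(edges: int, nodes: int) -> int:
        """Number of linear-independent paths: V(G) = E - N + 2 for a
        connected control-flow graph."""
        return edges - nodes + 2

    def estimated_effort_hours(edges, nodes, hours_per_path=6.0, experience=0.8):
        """Hypothetical linear mapping with an experience correction factor."""
        return cyclomatic_complexity(edges, nodes) * hours_per_path * experience

    print(estimated_effort_hours(edges=14, nodes=11))  # V(G) = 5 -> 24.0 hours
    ```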

  12. An experience report on ERP effort estimation driven by quality requirements

    NARCIS (Netherlands)

    Erasmus, Pierre; Daneva, Maya; Schockert, Sixten

    2015-01-01

    Producing useful and accurate project effort estimates is highly dependent on the proper definition of the project scope. In the ERP service industry, the scope of an ERP service project is determined by desired needs which are driven by certain quality attributes that the client expects to be

  13. SOFTWARE EFFORT ESTIMATION FRAMEWORK TO IMPROVE ORGANIZATION PRODUCTIVITY USING EMOTION RECOGNITION OF SOFTWARE ENGINEERS IN SPONTANEOUS SPEECH

    Directory of Open Access Journals (Sweden)

    B.V.A.N.S.S. Prabhakar Rao

    2015-10-01

    Productivity is a very important part of any organisation in general and the software industry in particular. Nowadays, software effort estimation is a challenging task, and effort and productivity are inter-related to each other. Both depend on the employees of the organization, and every organisation requires emotionally stable employees for seamless and progressive working. In other industries this may be achievable without manpower, but software project development is a labour-intensive activity: each line of code must be delivered by a software engineer, and tools and techniques can act only as aids or supplements. Whatever the reason, the software industry has been struggling with its success rate, facing many problems in delivering projects on time and within the estimated budget. If we want to estimate the required effort of a project, it is important to know the emotional state of the team members. The responsibility of ensuring emotional contentment falls on the human resource department, which can deploy a series of systems to carry out its survey. This analysis can be done using a variety of tools, one of which is the study of emotion recognition. The data needed for this is readily available and collectable and can be an excellent source for feedback systems. The challenge of recognizing emotion in speech is convoluted primarily due to noisy recording conditions, variations in sentiment across the sample space, and the exhibition of multiple emotions in a single sentence. Ambiguity in the labels of the training set also increases the complexity of the problem addressed. Existing models using probabilistic methods have dominated the study but present a flaw in scalability due to statistical inefficiency. The problem of sentiment prediction in spontaneous speech can thus be addressed using a hybrid system comprising a Convolutional Neural Network and

  14. An improved COCOMO software cost estimation model | Duke ...

    African Journals Online (AJOL)

    In this paper, we discuss the methodologies adopted previously in software cost estimation using the COnstructive COst MOdels (COCOMOs). From our analysis, COCOMOs produce very high software development efforts, which eventually produce high software development costs. Consequently, we propose its extension, ...

  15. Influence of Sampling Effort on the Estimated Richness of Road-Killed Vertebrate Wildlife

    Science.gov (United States)

    Bager, Alex; da Rosa, Clarissa A.

    2011-05-01

    Road-killed mammals, birds, and reptiles were collected weekly from highways in southern Brazil in 2002 and 2005. The objective was to assess variation in estimates of road-kill impacts on species richness produced by different sampling efforts, and to provide information to aid in the experimental design of future sampling. Richness observed in weekly samples was compared with sampling over different periods. In each period, the list of road-killed species was evaluated based on estimates of the community structure derived from weekly samplings, and on the presence of the ten species most subject to road mortality, as well as of threatened species. Weekly samples were sufficient only for reptiles and mammals, considered separately. Richness estimated from the biweekly samples was equal to that found in the weekly samples, and gave satisfactory results for sampling the most abundant and threatened species. The ten most affected species showed constant road-mortality rates, independent of sampling interval, and also maintained their dominance structure. Birds required greater sampling effort. When the composition of road-killed species varies seasonally, it is necessary to take biweekly samples for a minimum of one year. Weekly or more-frequent sampling for periods longer than two years is necessary to provide a reliable estimate of total species richness.

  16. Resource-estimation models and predicted discovery

    International Nuclear Information System (INIS)

    Hill, G.W.

    1982-01-01

    Resources have been estimated by predictive extrapolation from past discovery experience, by analogy with better explored regions, or by inference from evidence of depletion of targets for exploration. Changes in technology and new insights into geological mechanisms have occurred sufficiently often in the long run to form part of the pattern of mature discovery experience. The criterion that a meaningful resource estimate needs an objective measure of its precision or degree of uncertainty excludes 'estimates' based solely on expert opinion. This is illustrated by development of error measures for several persuasive models of discovery and production of oil and gas in the USA, both annually and in terms of increasing exploration effort. Appropriate generalizations of the models resolve many points of controversy. This is illustrated using two USA data sets describing discovery of oil and of U3O8; the latter set highlights an inadequacy of available official data. Review of the oil-discovery data set provides a warrant for adjusting the time-series prediction to a higher resource figure for USA petroleum. (author)

  17. Competing probabilistic models for catch-effort relationships in wildlife censuses

    Energy Technology Data Exchange (ETDEWEB)

    Skalski, J.R.; Robson, D.S.; Matsuzaki, C.L.

    1983-01-01

    Two probabilistic models are presented for describing the chance that an animal is captured during a wildlife census, as a function of trapping effort. The models in turn are used to propose relationships between sampling intensity and catch-per-unit-effort (C.P.U.E.) that were field tested on small mammal populations. Capture data suggest a model of diminishing C.P.U.E. with increasing levels of trapping intensity. The catch-effort model is used to illustrate optimization procedures in the design of mark-recapture experiments for censusing wild populations. 14 references, 2 tables.
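    One of the simplest catch-effort forms reproduces the diminishing-C.P.U.E. behaviour the abstract reports; a sketch with invented constants:

    ```python
    import numpy as np

    # Each unit of trapping effort f captures an animal with rate c, so
    # p(capture) = 1 - exp(-c * f) and C.P.U.E. = N * p / f, which
    # diminishes as effort grows.
    N, c = 400, 0.015                      # population size, capture coefficient
    f = np.array([10, 25, 50, 100, 200])   # trap-nights of effort
    p = 1 - np.exp(-c * f)
    cpue = N * p / f
    for fi, ci in zip(f, cpue):
        print(f"effort={fi:4d}  C.P.U.E.={ci:.2f}")
    ```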

  18. Development on electromagnetic impedance function modeling and its estimation

    Energy Technology Data Exchange (ETDEWEB)

    Sutarno, D., E-mail: Sutarno@fi.itb.ac.id [Earth Physics and Complex System Division Faculty of Mathematics and Natural Sciences Institut Teknologi Bandung (Indonesia)

    2015-09-30

    Today, electromagnetic methods such as magnetotellurics (MT) and controlled-source audio MT (CSAMT) are used in a broad variety of applications. Their usefulness in poor seismic areas and their negligible environmental impact are integral parts of effective exploration at minimum cost. As exploration was forced into more difficult areas, the importance of MT and CSAMT, in conjunction with other techniques, has tended to grow continuously. However, there are obviously important and difficult problems remaining to be solved concerning our ability to collect, process and interpret MT as well as CSAMT data in complex 3D structural environments. This talk aims at reviewing and discussing recent developments in MT and CSAMT impedance function modeling, and also some improvements in estimation procedures for the corresponding impedance functions. In MT impedance modeling, research efforts focus on developing numerical methods for computing the impedance functions of three-dimensional (3-D) earth resistivity models. For that reason, 3-D finite-element numerical modeling for the impedances is developed based on the edge element method. In the CSAMT case, the efforts were focused on accommodating the non-plane-wave problem in the corresponding impedance functions. Concerning estimation of MT and CSAMT impedance functions, research was focused on improving the quality of the estimates. To that end, a non-linear regression approach based on robust M-estimators and the Hilbert transform, operating on the causal transfer functions, was used to deal with outliers (abnormal data) which are frequently superimposed on normal ambient MT and CSAMT noise fields. As validated, the proposed MT impedance modeling method gives acceptable results for standard three-dimensional resistivity models, while the full-solution-based modeling that accommodates the non-plane-wave effect for CSAMT impedances is applied for all measurement zones, including near-, transition

  19. Los Alamos Waste Management Cost Estimation Model

    International Nuclear Information System (INIS)

    Matysiak, L.M.; Burns, M.L.

    1994-03-01

    This final report completes the Los Alamos Waste Management Cost Estimation Project, and includes the documentation of the waste management processes at Los Alamos National Laboratory (LANL) for hazardous, mixed, low-level radioactive solid and transuranic waste, development of the cost estimation model and a user reference manual. The ultimate goal of this effort was to develop an estimate of the life cycle costs for the aforementioned waste types. The Cost Estimation Model is a tool that can be used to calculate the costs of waste management at LANL for the aforementioned waste types, under several different scenarios. Each waste category at LANL is managed in a separate fashion, according to Department of Energy requirements and state and federal regulations. The cost of the waste management process for each waste category has not previously been well documented. In particular, the costs associated with the handling, treatment and storage of the waste have not been well understood. It is anticipated that greater knowledge of these costs will encourage waste generators at the Laboratory to apply waste minimization techniques to current operations. Expected benefits of waste minimization are a reduction in waste volume, decrease in liability and lower waste management costs

  20. A practical approach to parameter estimation applied to model predicting heart rate regulation

    DEFF Research Database (Denmark)

    Olufsen, Mette; Ottesen, Johnny T.

    2013-01-01

    Mathematical models have long been used for prediction of dynamics in biological systems. Recently, several efforts have been made to render these models patient specific. One way to do so is to employ techniques to estimate parameters that enable model-based prediction of observed quantities. Knowledge of variation in parameters within and between groups of subjects has the potential to provide insight into biological function. Often it is not possible to estimate all parameters in a given model, in particular if the model is complex and the data is sparse. However, it may be possible to estimate a subset of model parameters, reducing the complexity of the problem. In this study, we compare three methods that allow identification of parameter subsets that can be estimated given a model and a set of data. These methods will be used to estimate patient-specific parameters in a model predicting...

  1. Modeling to Mars: a NASA Model Based Systems Engineering Pathfinder Effort

    Science.gov (United States)

    Phojanamongkolkij, Nipa; Lee, Kristopher A.; Miller, Scott T.; Vorndran, Kenneth A.; Vaden, Karl R.; Ross, Eric P.; Powell, Bobby C.; Moses, Robert W.

    2017-01-01

    The NASA Engineering Safety Center (NESC) Systems Engineering (SE) Technical Discipline Team (TDT) initiated the Model Based Systems Engineering (MBSE) Pathfinder effort in FY16. The goals and objectives of the MBSE Pathfinder include developing and advancing MBSE capability across NASA, applying MBSE to real NASA issues, and capturing issues and opportunities surrounding MBSE. The Pathfinder effort consisted of four teams, with each team addressing a particular focus area. This paper focuses on Pathfinder team 1 with the focus area of architectures and mission campaigns. These efforts covered the timeframe of February 2016 through September 2016. The team was comprised of eight team members from seven NASA Centers (Glenn Research Center, Langley Research Center, Ames Research Center, Goddard Space Flight Center IV&V Facility, Johnson Space Center, Marshall Space Flight Center, and Stennis Space Center). Collectively, the team had varying levels of knowledge, skills and expertise in systems engineering and MBSE. The team applied their existing and newly acquired system modeling knowledge and expertise to develop modeling products for a campaign (Program) of crew and cargo missions (Projects) to establish a human presence on Mars utilizing In-Situ Resource Utilization (ISRU). Pathfinder team 1 developed a subset of modeling products that are required for a Program System Requirement Review (SRR)/System Design Review (SDR) and Project Mission Concept Review (MCR)/SRR as defined in NASA Procedural Requirements. Additionally, Team 1 was able to perform and demonstrate some trades and constraint analyses. At the end of these efforts, over twenty lessons learned and recommended next steps have been identified.

  2. Quantitative Analysis of the Security of Software-Defined Network Controller Using Threat/Effort Model

    Directory of Open Access Journals (Sweden)

    Zehui Wu

    2017-01-01

    The SDN-based controller, which is responsible for the configuration and management of the network, is the core of Software-Defined Networks. Current methods, which focus on the security mechanism, use qualitative analysis to estimate the security of controllers, frequently leading to inaccurate results. In this paper, we employ a quantitative approach to overcome this shortcoming. Based on an analysis of the controller threat model, we give formal model results for the APIs, the protocol interfaces, and the data items of the controller, and further provide our Threat/Effort quantitative calculation model. With the help of the Threat/Effort model, we are able to compare not only the security of different versions of the same controller but also different kinds of controllers, providing a basis for controller selection and secure development. We evaluated our approach on four widely used SDN-based controllers: POX, OpenDaylight, Floodlight, and Ryu. The tests, whose outcomes agree with traditional qualitative analysis, demonstrate that with our approach we can obtain specific security values for different controllers and produce more accurate results.

  3. Supercomputer and cluster performance modeling and analysis efforts:2004-2006.

    Energy Technology Data Exchange (ETDEWEB)

    Sturtevant, Judith E.; Ganti, Anand; Meyer, Harold (Hal) Edward; Stevenson, Joel O.; Benner, Robert E., Jr.; Goudy, Susan Phelps; Doerfler, Douglas W.; Domino, Stefan Paul; Taylor, Mark A.; Malins, Robert Joseph; Scott, Ryan T.; Barnette, Daniel Wayne; Rajan, Mahesh; Ang, James Alfred; Black, Amalia Rebecca; Laub, Thomas William; Vaughan, Courtenay Thomas; Franke, Brian Claude

    2007-02-01

    This report describes efforts by the Performance Modeling and Analysis Team to investigate performance characteristics of Sandia's engineering and scientific applications on the ASC capability and advanced architecture supercomputers, and Sandia's capacity Linux clusters. Efforts to model various aspects of these computers are also discussed. The goals of these efforts are to quantify and compare Sandia's supercomputer and cluster performance characteristics; to reveal strengths and weaknesses in such systems; and to predict performance characteristics of, and provide guidelines for, future acquisitions and follow-on systems. Described herein are the results obtained from running benchmarks and applications to extract performance characteristics and comparisons, as well as modeling efforts, obtained during the time period 2004-2006. The format of the report, with hypertext links to numerous additional documents, purposefully minimizes the document size needed to disseminate the extensive results from our research.

  4. Towards estimation of respiratory muscle effort with respiratory inductance plethysmography signals and complementary ensemble empirical mode decomposition.

    Science.gov (United States)

    Chen, Ya-Chen; Hsiao, Tzu-Chien

    2018-07-01

    The respiratory inductance plethysmography (RIP) sensor is an inexpensive, non-invasive, easy-to-use transducer for collecting respiratory movement data. Studies have reported that the RIP signal's amplitude and frequency can be used to discriminate respiratory diseases. However, with the conventional approach to RIP data analysis, respiratory muscle effort cannot be estimated. In this paper, the estimation of respiratory muscle effort from RIP signals is proposed. A complementary ensemble empirical mode decomposition method was used to extract hidden signals from the RIP signals based on the frequency bands of the activities of different respiratory muscles. To validate the proposed method, an experiment was conducted to collect subjects' RIP signals under thoracic breathing (TB) and abdominal breathing (AB). The experimental results for both TB and AB indicate that the proposed method can be used to loosely estimate the activities of the thoracic muscles, abdominal muscles, and diaphragm.
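    CEEMD itself needs a dedicated implementation, but the band-separation idea can be approximated with zero-phase band-pass filters as a hedged stand-in; the band edges and sampling rate below are assumptions for illustration, not the paper's values:

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt

    fs = 50.0                                # RIP sampling rate in Hz (assumed)
    t = np.arange(0, 30, 1 / fs)
    # Synthetic RIP trace: slow breathing rhythm plus faster muscle activity.
    rip = np.sin(2 * np.pi * 0.25 * t) + 0.3 * np.sin(2 * np.pi * 1.2 * t)

    def band(signal, lo, hi, fs, order=4):
        """Zero-phase Butterworth band-pass between lo and hi Hz."""
        b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        return filtfilt(b, a, signal)

    slow = band(rip, 0.1, 0.5, fs)  # band hypothetically tied to the diaphragm
    fast = band(rip, 0.8, 2.0, fs)  # band hypothetically tied to accessory muscles
    print(slow[:3], fast[:3])
    ```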

  5. Brain waves-based index for workload estimation and mental effort engagement recognition

    Science.gov (United States)

    Zammouri, A.; Chraa-Mesbahi, S.; Ait Moussa, A.; Zerouali, S.; Sahnoun, M.; Tairi, H.; Mahraz, A. M.

    2017-10-01

    With the advent of communication systems, and considering the complexity that some impose in their use, it is necessary to equip these systems with a certain intelligence that takes into account the cognitive and mental capacities of the human operator. In this work, we address the issue of estimating the mental effort of an operator according to the difficulty levels of cognitive tasks. Based on electroencephalogram (EEG) measurements, the proposed approach analyzes the user's brain activity from different brain regions while performing cognitive tasks with several levels of difficulty. First, we propose a variances-comparison-based classifier (VCC) that makes use of the power spectral density (PSD) of the EEG signal. The aim of using such a classifier is to highlight the brain regions that interact according to the cognitive task difficulty. Second, we present and describe a new EEG-based index for the estimation of mental effort. The designed index is based on information recorded from two EEG channels. Results from the VCC demonstrate that the power of the theta (θ) [4-7 Hz] and alpha (α) [8-12 Hz] oscillations decreases with increasing cognitive task difficulty. These decreases are mainly located in the parietal and temporal brain regions. Based on Kappa coefficients, decisions of the introduced index are compared to those obtained from an existing index; this performance assessment revealed strong agreement, demonstrating the efficiency of the introduced index.
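    Since the reported effect is a drop in theta and alpha power with task difficulty, a minimal index can be computed from Welch power spectral densities. The channel choice, weighting, and inverse form below are assumptions, not the paper's exact index:

    ```python
    import numpy as np
    from scipy.signal import welch

    def band_power(eeg, fs, lo, hi):
        """Average PSD in the [lo, hi] Hz band using Welch's method."""
        freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs)
        mask = (freqs >= lo) & (freqs <= hi)
        return float(psd[mask].mean())

    def effort_index(eeg, fs=256):
        """Toy index: theta (4-7 Hz) and alpha (8-12 Hz) power fall as
        difficulty rises, so their inverse loosely tracks mental effort."""
        return 1.0 / (band_power(eeg, fs, 4, 7) + band_power(eeg, fs, 8, 12))

    eeg = np.random.default_rng(3).normal(size=256 * 10)  # 10 s synthetic EEG
    print(effort_index(eeg))
    ```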

  6. Estimating the complexity of 3D structural models using machine learning methods

    Science.gov (United States)

    Mejía-Herrera, Pablo; Kakurina, Maria; Royer, Jean-Jacques

    2016-04-01

    Quantifying the complexity of 3D geological structural models can play a major role in natural resources exploration surveys, in predicting environmental hazards, and in forecasting fossil resources. This paper proposes a structural complexity index which can be used to help define the degree of effort necessary to build a 3D model for a given degree of confidence, and also to identify locations where additional effort is required to meet a given acceptable risk of uncertainty. In this work, it is considered that the structural complexity index can be estimated using machine learning methods on raw geo-data. More precisely, the metrics for measuring the complexity can be approximated as the degree of difficulty associated with predicting the distribution of geological objects from partial information on the actual structural distribution of materials. The proposed methodology is tested on a set of 3D synthetic structural models for which the degree of effort during their building is assessed using various parameters (such as the number of faults, the number of parts in a surface object, the number of borders, ...), the rank of geological elements contained in each model, and, finally, their level of deformation (folding and faulting). The results show how the estimated complexity of a 3D model can be approximated by the quantity of partial data necessary to simulate, at a given precision and without error, the actual 3D model using machine learning algorithms.
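
    A minimal sketch of the underlying idea, prediction difficulty from partial data as a complexity proxy, is given below using a Random Forest on synthetic points; the features, labels, and data fractions are illustrative assumptions.

```python
# Hedged sketch: approximate structural complexity by how well a classifier
# trained on a small known fraction predicts the geological unit elsewhere.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
xyz = rng.uniform(0, 1, size=(5000, 3))                       # sample locations
units = (np.sin(6 * xyz[:, 0]) + xyz[:, 2] > 1).astype(int)   # two toy "units"

for frac in (0.01, 0.05, 0.2):                    # share of locations observed
    X_tr, X_te, y_tr, y_te = train_test_split(
        xyz, units, train_size=frac, random_state=0)
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
    # One possible complexity reading: error remaining at this data fraction.
    print(f"known fraction {frac:.0%}: prediction error {1 - clf.score(X_te, y_te):.3f}")
```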

  7. Global parameter estimation for thermodynamic models of transcriptional regulation.

    Science.gov (United States)

    Suleimenov, Yerzhan; Ay, Ahmet; Samee, Md Abul Hassan; Dresch, Jacqueline M; Sinha, Saurabh; Arnosti, David N

    2013-07-15

    Deciphering the mechanisms involved in gene regulation holds the key to understanding the control of central biological processes, including human disease, population variation, and the evolution of morphological innovations. New experimental techniques including whole genome sequencing and transcriptome analysis have enabled comprehensive modeling approaches to study gene regulation. In many cases, it is useful to be able to assign biological significance to the inferred model parameters, but such interpretation should take into account features that affect these parameters, including model construction and sensitivity, the type of fitness calculation, and the effectiveness of parameter estimation. This last point is often neglected, as estimation methods are often selected for historical reasons or for computational ease. Here, we compare the performance of two parameter estimation techniques broadly representative of local and global approaches, namely, a quasi-Newton/Nelder-Mead simplex (QN/NMS) method and a covariance matrix adaptation-evolutionary strategy (CMA-ES) method. The estimation methods were applied to a set of thermodynamic models of gene transcription applied to regulatory elements active in the Drosophila embryo. Measuring overall fit, the global CMA-ES method performed significantly better than the local QN/NMS method on high quality data sets, but this difference was negligible on lower quality data sets with increased noise or on data sets simplified by stringent thresholding. Our results suggest that the choice of parameter estimation technique for evaluation of gene expression models depends on the quality of the data, the nature of the models, and the aims of the modeling effort. Copyright © 2013 Elsevier Inc. All rights reserved.
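
    The local-versus-global contrast can be sketched as follows, assuming the pycma package for CMA-ES and using the Rosenbrock function as a stand-in for a transcription model's fitness; nothing here reproduces the paper's models or data.

```python
# Hedged sketch: a local simplex search versus the global CMA-ES on the
# same objective, starting from the same point.
import numpy as np
import cma                                   # pip install cma (pycma)
from scipy.optimize import minimize, rosen

x0 = np.zeros(5)

local = minimize(rosen, x0, method="Nelder-Mead",
                 options={"maxiter": 5000, "xatol": 1e-8, "fatol": 1e-8})
print("Nelder-Mead best fitness:", local.fun)

xbest, es = cma.fmin2(rosen, x0, 0.5, options={"verbose": -9})
print("CMA-ES best fitness:", rosen(xbest))
```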

  8. Incorporating Responsiveness to Marketing Efforts in Brand Choice Modeling

    Directory of Open Access Journals (Sweden)

    Dennis Fok

    2014-02-01

    Full Text Available We put forward a brand choice model with unobserved heterogeneity that concerns responsiveness to marketing efforts. We introduce two latent segments of households. The first segment is assumed to respond to marketing efforts, while households in the second segment do not. Whether a specific household is a member of the first or the second segment at a specific purchase occasion is described by household-specific characteristics and characteristics concerning buying behavior. Households may switch between the two responsiveness states over time. When comparing the performance of our model with alternative choice models that account for various forms of heterogeneity on three different datasets, we find better face validity for our parameters. Our model also produces better forecasts.

  9. The Estimated Health and Economic Benefits of Three Decades of Polio Elimination Efforts in India.

    Science.gov (United States)

    Nandi, Arindam; Barter, Devra M; Prinja, Shankar; John, T Jacob

    2016-08-07

    In March 2014, India, the country with historically the highest burden of polio, was declared polio free, with no reported cases since January 2011. We estimate the health and economic benefits of polio elimination in India with the oral polio vaccine (OPV) during 1982-2012. Based on a pre-vaccine incidence rate, we estimate the counterfactual burden of polio in the hypothetical absence of the national polio elimination program in India. We attribute differences in outcomes between the actual (adjusted for under-reporting) and hypothetical counterfactual scenarios in our model to the national polio program. We measure health benefits as averted polio incidence, deaths, and disability adjusted life years (DALYs). We consider two methods to measure economic benefits: the value of statistical life approach, and equating one DALY to the Gross National Income (GNI) per capita. We estimate that the National Program against Polio averted 3.94 million (95% confidence interval [CI]: 3.89-3.99 million) paralytic polio cases, 393,918 polio deaths (95% CI: 388,897-398,939), and 1.48 billion DALYs (95% CI: 1.46-1.50 billion). We also estimate that the program contributed to a $1.71 trillion (INR 76.91 trillion) gain (95% CI: $1.69-$1.73 trillion [INR 75.93-77.89 trillion]) in economic productivity between 1982 and 2012 in our base case analysis. Using the GNI and DALY method, the economic gain from the program is estimated to be $1.11 trillion (INR 50.13 trillion) (95% CI: $1.10-$1.13 trillion [INR 49.50-50.76 trillion]) over the same period. India accrued large health and economic benefits from investing in polio elimination efforts. Other programs to control/eliminate more vaccine-preventable diseases are likely to contribute to large health and economic benefits in India.

  10. Modeling Student Motivation and Students’ Ability Estimates From a Large-Scale Assessment of Mathematics

    Directory of Open Access Journals (Sweden)

    Carlos Zerpa

    2011-09-01

    Full Text Available When large-scale assessments (LSA) do not hold personal stakes for students, students may not put forth their best effort. Low-effort examinee behaviors (e.g., guessing, omitting items) result in an underestimate of examinee abilities, which is a concern when using results of LSA to inform educational policy and planning. The purpose of this study was to explore the relationship between examinee motivation as defined by expectancy-value theory, student effort, and examinee mathematics abilities. A principal components analysis was used to examine the data from Grade 9 students (n = 43,562) who responded to a self-report questionnaire on their attitudes and practices related to mathematics. The results suggested a two-component model in which the components were interpreted as task values in mathematics and student effort. Next, a hierarchical linear model was implemented to examine the relationship between examinee component scores and their estimated ability on a LSA. The results of this study provide evidence that motivation, as defined by expectancy-value theory, and student effort partially explain student ability estimates, and may have implications for the information that gets transferred to testing organizations, school boards, and teachers when assessing students' Grade 9 mathematics learning.
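
    A hedged sketch of the two-step analysis, PCA components followed by a hierarchical (mixed) model with a school-level grouping, is shown below; all data and variable names are invented for the example.

```python
# Hedged sketch: extract two motivation components with PCA, then relate
# them to ability in a random-intercept mixed model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
n = 1000
df = pd.DataFrame(rng.standard_normal((n, 6)),
                  columns=[f"item{i}" for i in range(6)])   # questionnaire items
df["school"] = rng.integers(0, 40, n)                       # grouping level

scores = PCA(n_components=2).fit_transform(df.filter(like="item"))
df["task_value"], df["effort"] = scores[:, 0], scores[:, 1]
df["ability"] = 0.3 * df["effort"] + rng.standard_normal(n) # synthetic outcome

fit = smf.mixedlm("ability ~ task_value + effort", df,
                  groups=df["school"]).fit()
print(fit.summary())
```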

  11. The U.S. Federal Government's Efforts to Estimate an Economic Value for Reduced Carbon Emissions (Invited)

    Science.gov (United States)

    Wolverton, A.

    2010-12-01

    This presentation will summarize the technical process and results from recent U.S. Federal government efforts to estimate the “social cost of carbon” (SCC): the monetized damages associated with an incremental increase in carbon dioxide emissions in a given year. The purpose of the SCC estimates is to make it possible for Federal agencies to incorporate the social benefits of reducing CO2 emissions into cost-benefit analyses of regulatory actions that have relatively small impacts on cumulative global emissions. An interagency working group initiated a comprehensive analysis using three of the most widely recognized peer-reviewed integrated assessment models (DICE, PAGE, and FUND), chosen to fairly represent differences in the way economic impacts from climate change are modeled. The main objective of this process was to develop a range of SCC values using a defensible set of input assumptions grounded in the existing scientific and economic literatures. In this way, key uncertainties and model differences transparently and consistently inform the range of SCC estimates used in the rulemaking process. This proved challenging, since the literature did not always agree on the best path forward. In some cases the group agreed to a range of assumptions to allow for uncertainty analysis (e.g., five different socioeconomic scenarios were included in the Monte Carlo analysis to reflect uncertainty about how future economic and population growth and energy systems will develop over the next 100 years). The four values selected for regulatory analysis included three estimates based on the average SCC from the three integrated assessment models over a range of discount rates, since there is wide disagreement on which rate to apply in an inter-generational context. The fourth value represents the 95th percentile SCC estimate across all three models at a 3 percent discount rate and is included to represent higher-than-expected impacts from climate change.
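
    The construction of the four values can be sketched as follows; the Monte Carlo draws below are synthetic placeholders, not actual DICE/PAGE/FUND output.

```python
# Hedged sketch: pool model draws per discount rate, take means, and add a
# 95th-percentile value at the 3% rate.
import numpy as np

rng = np.random.default_rng(3)
models = ["DICE", "PAGE", "FUND"]
rates = [0.025, 0.03, 0.05]

# scc[model][rate] -> Monte Carlo draws ($/ton CO2), synthetic here
scc = {m: {r: rng.lognormal(mean=3.5 - 20 * r, sigma=0.8, size=10000)
           for r in rates} for m in models}

for r in rates:
    pooled = np.concatenate([scc[m][r] for m in models])
    print(f"discount rate {r:.1%}: mean SCC = ${pooled.mean():,.0f}")

pooled_3pct = np.concatenate([scc[m][0.03] for m in models])
print(f"95th percentile at 3%: ${np.percentile(pooled_3pct, 95):,.0f}")
```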

  12. The Effort Paradox: Effort Is Both Costly and Valued.

    Science.gov (United States)

    Inzlicht, Michael; Shenhav, Amitai; Olivola, Christopher Y

    2018-04-01

    According to prominent models in cognitive psychology, neuroscience, and economics, effort (be it physical or mental) is costly: when given a choice, humans and non-human animals alike tend to avoid it. Here, we suggest that the opposite is also true and review extensive evidence that effort can also add value. Not only can the same outcomes be more rewarding if we apply more (not less) effort; sometimes we select options precisely because they require effort. Given the increasing recognition of effort's role in motivation, cognitive control, and value-based decision-making, considering this neglected side of effort will not only improve formal computational models, but also provide clues about how to promote sustained mental effort across time. Copyright © 2018 Elsevier Ltd. All rights reserved.

  13. A Planning Tool for Estimating Waste Generated by a Radiological Incident and Subsequent Decontamination Efforts - 13569

    International Nuclear Information System (INIS)

    Boe, Timothy; Lemieux, Paul; Schultheisz, Daniel; Peake, Tom; Hayes, Colin

    2013-01-01

    Management of debris and waste from a wide-area radiological incident would probably constitute a significant percentage of the total remediation cost and effort. The U.S. Environmental Protection Agency's (EPA's) Waste Estimation Support Tool (WEST) is a unique planning tool for estimating the potential volume and radioactivity levels of waste generated by a radiological incident and subsequent decontamination efforts. The WEST was developed to support planners and decision makers by generating a first-order estimate of the quantity and characteristics of waste resulting from a radiological incident. The tool then allows the user to evaluate the impact of various decontamination/demolition strategies on the waste types and volumes generated. WEST consists of a suite of standalone applications and Esri® ArcGIS® scripts for rapidly estimating waste inventories and levels of radioactivity generated from a radiological contamination incident as a function of user-defined decontamination and demolition approaches. WEST accepts Geographic Information System (GIS) shape-files defining contaminated areas and extent of contamination. Building stock information, including square footage, building counts, and building composition estimates are then generated using the Federal Emergency Management Agency's (FEMA's) Hazus®-MH software. WEST then identifies outdoor surfaces based on the application of pattern recognition to overhead aerial imagery. The results from the GIS calculations are then fed into a Microsoft Excel® 2007 spreadsheet with a custom graphical user interface where the user can examine the impact of various decontamination/demolition scenarios on the quantity, characteristics, and residual radioactivity of the resulting waste streams. (authors)
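
    A first-order waste estimate of the kind WEST produces can be sketched with simple arithmetic; every coefficient below is a hypothetical assumption, not a WEST default.

```python
# Hedged sketch: waste volume scales with contaminated building stock and
# the chosen decontamination/demolition strategy. All values are invented.
contaminated_area_km2 = 4.0
buildings_per_km2 = 350              # assumed building stock density
avg_footprint_m2 = 400               # assumed average building footprint
demolition_fraction = 0.15           # share of buildings demolished outright
decon_waste_m3_per_m2 = 0.02         # surface-strip waste per decontaminated m2
debris_m3_per_m2 = 1.1               # demolition debris per m2 of footprint

n_buildings = contaminated_area_km2 * buildings_per_km2
demolished = n_buildings * demolition_fraction
decontaminated = n_buildings - demolished

waste_m3 = (demolished * avg_footprint_m2 * debris_m3_per_m2
            + decontaminated * avg_footprint_m2 * decon_waste_m3_per_m2)
print(f"first-order waste estimate: {waste_m3:,.0f} m^3")
```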

  14. LPS Catch and Effort Estimation

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Data collected from the LPS dockside (LPIS) and the LPS telephone (LPTS) surveys are combined to produce estimates of total recreational catch, landings, and fishing...

  15. Aircraft ground damage and the use of predictive models to estimate costs

    Science.gov (United States)

    Kromphardt, Benjamin D.

    Aircraft are frequently involved in ground damage incidents, and repair costs are often accepted as a cost of doing business. The Flight Safety Foundation (FSF) estimates ground damage to cost operators $5-10 billion annually. Incident reports, documents from manufacturers and regulatory agencies, and other resources were examined to better understand the problem of ground damage in aviation. Major contributing factors were explained, and two versions of a computer-based model were developed to project costs and show what is possible. One objective was to determine whether the models could match the FSF's estimate. Another objective was to better understand the cost savings that could be realized by efforts to further mitigate the occurrence of ground incidents. Model effectiveness was limited by access to official data, and assumptions were used where data were not available. Even so, the models were determined to estimate the costs of ground incidents sufficiently well.

  16. A Planning Tool for Estimating Waste Generated by a Radiological Incident and Subsequent Decontamination Efforts - 13569

    Energy Technology Data Exchange (ETDEWEB)

    Boe, Timothy [Oak Ridge Institute for Science and Education, Research Triangle Park, NC 27711 (United States); Lemieux, Paul [U.S. Environmental Protection Agency, Research Triangle Park, NC 27711 (United States); Schultheisz, Daniel; Peake, Tom [U.S. Environmental Protection Agency, Washington, DC 20460 (United States); Hayes, Colin [Eastern Research Group, Inc, Morrisville, NC 26560 (United States)

    2013-07-01

    Management of debris and waste from a wide-area radiological incident would probably constitute a significant percentage of the total remediation cost and effort. The U.S. Environmental Protection Agency's (EPA's) Waste Estimation Support Tool (WEST) is a unique planning tool for estimating the potential volume and radioactivity levels of waste generated by a radiological incident and subsequent decontamination efforts. The WEST was developed to support planners and decision makers by generating a first-order estimate of the quantity and characteristics of waste resulting from a radiological incident. The tool then allows the user to evaluate the impact of various decontamination/demolition strategies on the waste types and volumes generated. WEST consists of a suite of standalone applications and Esri® ArcGIS® scripts for rapidly estimating waste inventories and levels of radioactivity generated from a radiological contamination incident as a function of user-defined decontamination and demolition approaches. WEST accepts Geographic Information System (GIS) shape-files defining contaminated areas and extent of contamination. Building stock information, including square footage, building counts, and building composition estimates are then generated using the Federal Emergency Management Agency's (FEMA's) Hazus®-MH software. WEST then identifies outdoor surfaces based on the application of pattern recognition to overhead aerial imagery. The results from the GIS calculations are then fed into a Microsoft Excel® 2007 spreadsheet with a custom graphical user interface where the user can examine the impact of various decontamination/demolition scenarios on the quantity, characteristics, and residual radioactivity of the resulting waste streams. (authors)

  17. City Logistics Modeling Efforts : Trends and Gaps - A Review

    NARCIS (Netherlands)

    Anand, N.R.; Quak, H.J.; Van Duin, J.H.R.; Tavasszy, L.A.

    2012-01-01

    In this paper, we present a review of city logistics modeling efforts reported in the literature for urban freight analysis. The review framework takes into account the diversity and complexity found in the present-day city logistics practice. Next, it covers the different aspects in the modeling

  18. Status of efforts to evaluate LOCA frequency estimates using combined PRA and PFM approaches

    International Nuclear Information System (INIS)

    Wilkowski, G.; Rudland, D.; Tregoning, R.; Scott, P.

    2002-01-01

    The risk-informed reevaluation of 10 CFR 50.46 (along with Appendix K and GDC 35), the emergency core cooling system (ECCS) requirements, utilizes loss-of-coolant accident (LOCA) initiating event frequencies to evaluate the technical basis for potential related rule changes. A longer-term effort is considering redefining the maximum design-basis pipe break size for sizing the ECCS. In the past few years, the U.S. Nuclear Regulatory Commission (NRC) has utilized the NUREG/CR-5750 pipe-break LOCA estimates for initiating event frequencies. However, several failure mechanisms have recently emerged at plants that were not evident within the service period covered by the NUREG/CR-5750 estimates. The concern is that these and other potential aging-related mechanisms may not be adequately represented within the NUREG/CR-5750 LOCA estimates. Additionally, LOCAs can occur from failure of active components (e.g. safety relief valves, reactor coolant pump seals, etc.) and other non-pipe-break passive failures (e.g. steam generator tubes). The LOCA contributions from these additional sources must also be considered in deciding the design-basis break size. The LOCA estimates must also attempt to capture expected future changes in the LOCA frequencies so that the estimates remain pertinent through the end of the license renewal period. (orig.)

  19. Effects of sample size on estimates of population growth rates calculated with matrix models.

    Directory of Open Access Journals (Sweden)

    Ian J Fiske

    Full Text Available BACKGROUND: Matrix models are widely used to study the dynamics and demography of populations. An important but overlooked issue is how the number of individuals sampled influences estimates of the population growth rate (lambda) calculated with matrix models. Even unbiased estimates of vital rates do not ensure unbiased estimates of lambda: Jensen's Inequality implies that even when the estimates of the vital rates are accurate, small sample sizes lead to biased estimates of lambda due to increased sampling variance. We investigated whether sampling variability and the distribution of sampling effort among size classes lead to biases in estimates of lambda. METHODOLOGY/PRINCIPAL FINDINGS: Using data from a long-term field study of plant demography, we simulated the effects of sampling variance by drawing vital rates and calculating lambda for increasingly larger populations drawn from a total population of 3842 plants. We then compared these estimates of lambda with those based on the entire population and calculated the resulting bias. Finally, we conducted a review of the literature to determine the sample sizes typically used when parameterizing matrix models used to study plant demography. CONCLUSIONS/SIGNIFICANCE: We found significant bias at small sample sizes when survival was low (survival = 0.5), and that sampling with a more-realistic inverse J-shaped population structure exacerbated this bias. However, our simulations also demonstrate that these biases rapidly become negligible with increasing sample sizes or as survival increases. For many of the sample sizes used in demographic studies, matrix models are probably robust to the biases resulting from sampling variance of vital rates. However, this conclusion may depend on the structure of populations or the distribution of sampling effort in ways that are unexplored. We suggest more intensive sampling of populations when individual survival is low and greater sampling of stages with high elasticities.

  20. Effects of sample size on estimates of population growth rates calculated with matrix models.

    Science.gov (United States)

    Fiske, Ian J; Bruna, Emilio M; Bolker, Benjamin M

    2008-08-28

    Matrix models are widely used to study the dynamics and demography of populations. An important but overlooked issue is how the number of individuals sampled influences estimates of the population growth rate (lambda) calculated with matrix models. Even unbiased estimates of vital rates do not ensure unbiased estimates of lambda-Jensen's Inequality implies that even when the estimates of the vital rates are accurate, small sample sizes lead to biased estimates of lambda due to increased sampling variance. We investigated if sampling variability and the distribution of sampling effort among size classes lead to biases in estimates of lambda. Using data from a long-term field study of plant demography, we simulated the effects of sampling variance by drawing vital rates and calculating lambda for increasingly larger populations drawn from a total population of 3842 plants. We then compared these estimates of lambda with those based on the entire population and calculated the resulting bias. Finally, we conducted a review of the literature to determine the sample sizes typically used when parameterizing matrix models used to study plant demography. We found significant bias at small sample sizes when survival was low (survival = 0.5), and that sampling with a more-realistic inverse J-shaped population structure exacerbated this bias. However our simulations also demonstrate that these biases rapidly become negligible with increasing sample sizes or as survival increases. For many of the sample sizes used in demographic studies, matrix models are probably robust to the biases resulting from sampling variance of vital rates. However, this conclusion may depend on the structure of populations or the distribution of sampling effort in ways that are unexplored. We suggest more intensive sampling of populations when individual survival is low and greater sampling of stages with high elasticities.
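
    The sampling-variance experiment described in these two records can be sketched as follows with a toy two-stage matrix; the vital rates and matrix structure are invented, not the study's plant data.

```python
# Hedged sketch: draw vital rates from binomial samples of increasing size,
# build a juvenile/adult stage matrix, and track the bias in the dominant
# eigenvalue (lambda) that sampling variance induces via Jensen's Inequality.
import numpy as np

rng = np.random.default_rng(4)
juv_surv, growth, fecundity, adult_surv = 0.5, 0.3, 1.2, 0.5  # true rates

def lam(s, g):
    """Dominant eigenvalue of a toy two-stage matrix."""
    A = np.array([[s * (1 - g), fecundity],
                  [s * g,       adult_surv]])
    return max(np.real(np.linalg.eigvals(A)))

true_lambda = lam(juv_surv, growth)
for n in (10, 50, 250, 1000):               # number of individuals sampled
    draws = [lam(rng.binomial(n, juv_surv) / n,
                 rng.binomial(n, growth) / n) for _ in range(2000)]
    print(f"n={n:5d}: mean bias in lambda = {np.mean(draws) - true_lambda:+.4f}")
```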

  1. Summary of the DREAM8 Parameter Estimation Challenge: Toward Parameter Identification for Whole-Cell Models.

    Directory of Open Access Journals (Sweden)

    Jonathan R Karr

    2015-05-01

    Full Text Available Whole-cell models that explicitly represent all cellular components at the molecular level have the potential to predict phenotype from genotype. However, even for simple bacteria, whole-cell models will contain thousands of parameters, many of which are poorly characterized or unknown. New algorithms are needed to estimate these parameters and enable researchers to build increasingly comprehensive models. We organized the Dialogue for Reverse Engineering Assessments and Methods (DREAM) 8 Whole-Cell Parameter Estimation Challenge to develop new parameter estimation algorithms for whole-cell models. We asked participants to identify a subset of parameters of a whole-cell model given the model's structure and in silico "experimental" data. Here we describe the challenge, the best-performing methods, and new insights into the identifiability of whole-cell models. We also describe several valuable lessons we learned toward improving future challenges. Going forward, we believe that collaborative efforts supported by inexpensive cloud computing have the potential to solve the whole-cell model parameter estimation problem.

  2. Approaches in highly parameterized inversion - PEST++, a Parameter ESTimation code optimized for large environmental models

    Science.gov (United States)

    Welter, David E.; Doherty, John E.; Hunt, Randall J.; Muffels, Christopher T.; Tonkin, Matthew J.; Schreuder, Willem A.

    2012-01-01

    An object-oriented parameter estimation code was developed to incorporate benefits of object-oriented programming techniques for solving large parameter estimation modeling problems. The code is written in C++ and is a formulation and expansion of the algorithms included in PEST, a widely used parameter estimation code written in Fortran. The new code is called PEST++ and is designed to lower the barriers of entry for users and developers while providing efficient algorithms that can accommodate large, highly parameterized problems. This effort has focused on (1) implementing the most popular features of PEST in a fashion that is easy for novice or experienced modelers to use and (2) creating a software design that is easy to extend; that is, this effort provides a documented object-oriented framework designed from the ground up to be modular and extensible. In addition, all PEST++ source code and its associated libraries, as well as the general run manager source code, have been integrated in the Microsoft Visual Studio® 2010 integrated development environment. The PEST++ code is designed to provide a foundation for an open-source development environment capable of producing robust and efficient parameter estimation tools for the environmental modeling community into the future.

  3. Mammalian Cell Culture Process for Monoclonal Antibody Production: Nonlinear Modelling and Parameter Estimation

    Directory of Open Access Journals (Sweden)

    Dan Selişteanu

    2015-01-01

    Full Text Available Monoclonal antibodies (mAbs) are at present one of the fastest growing products of the pharmaceutical industry, with widespread applications in biochemistry, biology, and medicine. The operation of mAb production processes is predominantly based on empirical knowledge, with improvements achieved through trial-and-error experiments and precedent practices. The nonlinearity of these processes and the absence of suitable instrumentation require an enhanced modelling effort and modern kinetic parameter estimation strategies. The present work is dedicated to nonlinear dynamic modelling and parameter estimation for a mammalian cell culture process used for mAb production. Using a dynamical model of this class of processes, an optimization-based technique for the estimation of kinetic parameters in the model of the mammalian cell culture process is developed. The estimation is achieved by minimizing an error function with a particle swarm optimization (PSO) algorithm. The proposed estimation approach is analyzed in this work using a particular model of mammalian cell culture as a case study, but it is generic for this class of bioprocesses. The presented case study shows that the proposed parameter estimation technique provides a more accurate simulation of the experimentally observed process behaviour than reported in previous studies.
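
    A hand-rolled particle swarm minimizing a squared-error function is sketched below on a toy Monod-type kinetic model; the model, bounds, and PSO constants are illustrative assumptions, not the paper's mAb culture model.

```python
# Hedged sketch: PSO-based kinetic parameter estimation against noisy
# synthetic "measurements" of a Monod growth curve.
import numpy as np

rng = np.random.default_rng(5)
S = np.linspace(0.1, 10, 30)                       # substrate levels
true = (0.9, 1.5)                                  # (mu_max, K_s)
y_obs = true[0] * S / (true[1] + S) + 0.02 * rng.standard_normal(S.size)

def sse(p):
    mu_max, Ks = p
    return np.sum((mu_max * S / (Ks + S) - y_obs) ** 2)

# Minimal PSO: standard inertia / cognitive / social update terms.
n, d, w, c1, c2 = 30, 2, 0.7, 1.5, 1.5
pos = rng.uniform(0.01, 5.0, (n, d))
vel = np.zeros((n, d))
pbest, pbest_f = pos.copy(), np.array([sse(p) for p in pos])
gbest = pbest[np.argmin(pbest_f)]

for _ in range(200):
    r1, r2 = rng.random((n, d)), rng.random((n, d))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 1e-6, 10.0)
    f = np.array([sse(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[np.argmin(pbest_f)]

print("estimated (mu_max, K_s):", np.round(gbest, 3))
```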

  4. Set-base dynamical parameter estimation and model invalidation for biochemical reaction networks.

    Science.gov (United States)

    Rumschinski, Philipp; Borchers, Steffen; Bosio, Sandro; Weismantel, Robert; Findeisen, Rolf

    2010-05-25

    Mathematical modeling and analysis have become, for the study of biological and cellular processes, an important complement to experimental research. However, the structural and quantitative knowledge available for such processes is frequently limited, and measurements are often subject to inherent and possibly large uncertainties. This results in competing model hypotheses, whose kinetic parameters may not be experimentally determinable. Discriminating among these alternatives and estimating their kinetic parameters is crucial to improve the understanding of the considered process and to benefit from the analytical tools at hand. In this work we present a set-based framework that allows us to discriminate between competing model hypotheses and to provide guaranteed outer estimates on the model parameters that are consistent with the (possibly sparse and uncertain) experimental measurements. This is obtained by means of exact proofs of model invalidity that exploit the polynomial/rational structure of biochemical reaction networks, and by making use of an efficient strategy to balance solution accuracy and computational effort. The practicability of our approach is illustrated with two case studies. The first study shows that our approach allows wrong model hypotheses to be conclusively ruled out. The second study focuses on parameter estimation, and shows that the proposed method allows us to evaluate the global influence of measurement sparsity, uncertainty, and prior knowledge on the parameter estimates. This can help in designing further experiments leading to improved parameter estimates.

  5. Spatial Distribution of Hydrologic Ecosystem Service Estimates: Comparing Two Models

    Science.gov (United States)

    Dennedy-Frank, P. J.; Ghile, Y.; Gorelick, S.; Logsdon, R. A.; Chaubey, I.; Ziv, G.

    2014-12-01

    We compare estimates of the spatial distribution of water quantity provided (annual water yield) from two ecohydrologic models: the widely-used Soil and Water Assessment Tool (SWAT) and the much simpler water models from the Integrated Valuation of Ecosystem Services and Tradeoffs (InVEST) toolbox. These two models differ significantly in terms of complexity, timescale of operation, effort, and data required for calibration, and so are often used in different management contexts. We compare two study sites in the US: the Wildcat Creek Watershed (2083 km2) in Indiana, a largely agricultural watershed in a cold aseasonal climate, and the Upper Upatoi Creek Watershed (876 km2) in Georgia, a mostly forested watershed in a temperate aseasonal climate. We evaluate (1) quantitative estimates of water yield to explore how well each model represents this process, and (2) ranked estimates of water yield to indicate how useful the models are for management purposes where other social and financial factors may play significant roles. The SWAT and InVEST models provide very similar estimates of the water yield of individual subbasins in the Wildcat Creek Watershed (Pearson r = 0.92, slope = 0.89), and a similar ranking of the relative water yield of those subbasins (Spearman r = 0.86). However, the two models provide relatively different estimates of the water yield of individual subbasins in the Upper Upatoi Watershed (Pearson r = 0.25, slope = 0.14), and very different rankings of the relative water yield of those subbasins (Spearman r = -0.10). The Upper Upatoi watershed has a significant baseflow contribution due to its sandy, well-drained soils. InVEST's simple seasonality terms, which assume no change in storage over the time of the model run, may not accurately estimate water yield processes when baseflow provides such a strong contribution. Our results suggest that InVEST users should take care in situations where storage changes are significant.
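
    The comparison itself reduces to two correlations per watershed, as sketched below; the per-subbasin yields are placeholders for SWAT and InVEST output, not the study's values.

```python
# Hedged sketch: Pearson r on quantitative yields, Spearman r on rankings.
import numpy as np
from scipy.stats import pearsonr, spearmanr

swat_yield = np.array([210.0, 180.0, 260.0, 150.0, 300.0])    # mm/yr, assumed
invest_yield = np.array([195.0, 170.0, 240.0, 165.0, 280.0])  # mm/yr, assumed

r_quant, _ = pearsonr(swat_yield, invest_yield)
r_rank, _ = spearmanr(swat_yield, invest_yield)
print(f"Pearson r = {r_quant:.2f}, Spearman r = {r_rank:.2f}")
```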

  6. Allometric Models Based on Bayesian Frameworks Give Better Estimates of Aboveground Biomass in the Miombo Woodlands

    Directory of Open Access Journals (Sweden)

    Shem Kuyah

    2016-02-01

    Full Text Available The miombo woodland is the most extensive dry forest in the world, with the potential to store substantial amounts of biomass carbon. Efforts to obtain accurate estimates of carbon stocks in the miombo woodlands are limited by a general lack of biomass estimation models (BEMs). This study aimed to evaluate the accuracy of the most commonly employed allometric models for estimating aboveground biomass (AGB) in miombo woodlands, and to develop new models that enable more accurate estimation of biomass in the miombo woodlands. A generalizable mixed-species allometric model was developed from 88 trees belonging to 33 species, ranging in diameter at breast height (DBH) from 5 to 105 cm, using Bayesian estimation. A power-law model with DBH alone performed better than both a polynomial model with DBH and the square of DBH, and models including height and crown area as additional variables along with DBH. The accuracy of estimates from published models varied across different sites and trees of different diameter classes, and was lower than the accuracy of estimates from our model. The model developed in this study can be used to establish the conservative carbon stocks required to determine avoided emissions in performance-based payment schemes, for example in afforestation and reforestation activities.
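
    The power-law form AGB = a · DBH^b can be fitted on the log scale in a few lines; the sketch below uses ordinary least squares on synthetic data, whereas the study used Bayesian estimation with priors on the coefficients.

```python
# Hedged sketch: log-log fit of a power-law allometry on synthetic trees.
import numpy as np

rng = np.random.default_rng(6)
dbh = rng.uniform(5, 105, 88)          # cm, matching the study's 88 trees
agb = 0.1 * dbh ** 2.4 * np.exp(0.2 * rng.standard_normal(dbh.size))

# ln(AGB) = ln(a) + b * ln(DBH), solved by least squares
X = np.column_stack([np.ones_like(dbh), np.log(dbh)])
(log_a, b), *_ = np.linalg.lstsq(X, np.log(agb), rcond=None)
print(f"AGB = {np.exp(log_a):.3f} * DBH^{b:.2f}")
```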

  7. Enzymatic Synthesis of Ampicillin: Nonlinear Modeling, Kinetics Estimation, and Adaptive Control

    Directory of Open Access Journals (Sweden)

    Monica Roman

    2012-01-01

    Full Text Available Nowadays, the use of advanced control strategies in biotechnology is quite low. A main reason is the lack of quality of the data, and the fact that more sophisticated control strategies must be based on a model of the dynamics of bioprocesses. The nonlinearity of the bioprocesses and the absence of cheap and reliable instrumentation require an enhanced modeling effort and identification strategies for the kinetics. The present work approaches modeling and control strategies for the enzymatic synthesis of ampicillin that is carried out inside a fed-batch bioreactor. First, a nonlinear dynamical model of this bioprocess is obtained by using a novel modeling procedure for biotechnology: the bond graph methodology. Second, a high gain observer is designed for the estimation of the imprecisely known kinetics of the synthesis process. Third, by combining an exact linearizing control law with the on-line estimation kinetics algorithm, a nonlinear adaptive control law is designed. The case study discussed shows that a nonlinear feedback control strategy applied to the ampicillin synthesis bioprocess can cope with disturbances, noisy measurements, and parametric uncertainties. Numerical simulations performed with MATLAB environment are included in order to test the behavior and the performances of the proposed estimation and control strategies.

  8. V and V Efforts of Auroral Precipitation Models: Preliminary Results

    Science.gov (United States)

    Zheng, Yihua; Kuznetsova, Masha; Rastaetter, Lutz; Hesse, Michael

    2011-01-01

    Auroral precipitation models have been valuable both for space weather applications and for space science research. Yet very limited testing has been performed regarding model performance. A variety of auroral models are available, including empirical models that are parameterized by geomagnetic indices or upstream solar wind conditions, nowcasting models that are based on satellite observations, and models derived from physics-based, coupled global models. In this presentation, we will show our preliminary results regarding V&V efforts for some of these models.

  9. Hybrid discrete choice models: Gained insights versus increasing effort

    Energy Technology Data Exchange (ETDEWEB)

    Mariel, Petr, E-mail: petr.mariel@ehu.es [UPV/EHU, Economía Aplicada III, Avda. Lehendakari Aguire, 83, 48015 Bilbao (Spain); Meyerhoff, Jürgen [Institute for Landscape Architecture and Environmental Planning, Technical University of Berlin, D-10623 Berlin, Germany and The Kiel Institute for the World Economy, Duesternbrooker Weg 120, 24105 Kiel (Germany)

    2016-10-15

    Hybrid choice models expand the standard models in discrete choice modelling by incorporating psychological factors as latent variables. They could therefore provide further insights into choice processes and underlying taste heterogeneity but the costs of estimating these models often significantly increase. This paper aims at comparing the results from a hybrid choice model and a classical random parameter logit. Point of departure for this analysis is whether researchers and practitioners should add hybrid choice models to their suite of models routinely estimated. Our comparison reveals, in line with the few prior studies, that hybrid models gain in efficiency by the inclusion of additional information. The use of one of the two proposed approaches, however, depends on the objective of the analysis. If disentangling preference heterogeneity is most important, hybrid model seems to be preferable. If the focus is on predictive power, a standard random parameter logit model might be the better choice. Finally, we give recommendations for an adequate use of hybrid choice models based on known principles of elementary scientific inference. - Highlights: • The paper compares performance of a Hybrid Choice Model (HCM) and a classical Random Parameter Logit (RPL) model. • The HCM indeed provides insights regarding preference heterogeneity not gained from the RPL. • The RPL has similar predictive power as the HCM in our data. • The costs of estimating HCM seem to be justified when learning more on taste heterogeneity is a major study objective.

  10. Hybrid discrete choice models: Gained insights versus increasing effort

    International Nuclear Information System (INIS)

    Mariel, Petr; Meyerhoff, Jürgen

    2016-01-01

    Hybrid choice models expand the standard models in discrete choice modelling by incorporating psychological factors as latent variables. They could therefore provide further insights into choice processes and underlying taste heterogeneity but the costs of estimating these models often significantly increase. This paper aims at comparing the results from a hybrid choice model and a classical random parameter logit. Point of departure for this analysis is whether researchers and practitioners should add hybrid choice models to their suite of models routinely estimated. Our comparison reveals, in line with the few prior studies, that hybrid models gain in efficiency by the inclusion of additional information. The use of one of the two proposed approaches, however, depends on the objective of the analysis. If disentangling preference heterogeneity is most important, hybrid model seems to be preferable. If the focus is on predictive power, a standard random parameter logit model might be the better choice. Finally, we give recommendations for an adequate use of hybrid choice models based on known principles of elementary scientific inference. - Highlights: • The paper compares performance of a Hybrid Choice Model (HCM) and a classical Random Parameter Logit (RPL) model. • The HCM indeed provides insights regarding preference heterogeneity not gained from the RPL. • The RPL has similar predictive power as the HCM in our data. • The costs of estimating HCM seem to be justified when learning more on taste heterogeneity is a major study objective.

  11. [Psychosocial factors at work and cardiovascular diseases: contribution of the Effort-Reward Imbalance model].

    Science.gov (United States)

    Niedhammer, I; Siegrist, J

    1998-11-01

    The effect of psychosocial factors at work on health, especially cardiovascular health, has given rise to growing concern in occupational epidemiology over the last few years. Two theoretical models, Karasek's model and the Effort-Reward Imbalance model, have been developed to evaluate psychosocial factors at work within specific conceptual frameworks, in an attempt to take into account the serious methodological difficulties inherent in the evaluation of such factors. Karasek's model, the most widely used, measures three factors: psychological demands, decision latitude, and social support at work. Many studies have shown the predictive effects of these factors on cardiovascular diseases, independently of well-known cardiovascular risk factors. The more recent Effort-Reward Imbalance model takes into account the role of individual coping characteristics, which was neglected in the Karasek model. The Effort-Reward Imbalance model focuses on the reciprocity of exchange in occupational life, where high-cost/low-gain conditions are considered particularly stressful. Three dimensions of rewards are distinguished: money, esteem, and gratification in terms of promotion prospects and job security. Some studies already indicate that high-effort/low-reward conditions are predictive of cardiovascular diseases.

  12. Parameter estimation techniques and uncertainty in ground water flow model predictions

    International Nuclear Information System (INIS)

    Zimmerman, D.A.; Davis, P.A.

    1990-01-01

    Quantification of uncertainty in predictions of nuclear waste repository performance is a requirement of Nuclear Regulatory Commission regulations governing the licensing of proposed geologic repositories for high-level radioactive waste disposal. One of the major uncertainties in these predictions is in estimating the ground-water travel time of radionuclides migrating from the repository to the accessible environment. The cause of much of this uncertainty has been attributed to a lack of knowledge about the hydrogeologic properties that control the movement of radionuclides through the aquifers. A major reason for this lack of knowledge is the paucity of data that is typically available for characterizing complex ground-water flow systems. Because of this, considerable effort has been put into developing parameter estimation techniques that infer property values in regions where no measurements exist. Currently, no single technique has been shown to be superior or even consistently conservative with respect to predictions of ground-water travel time. This work was undertaken to compare a number of parameter estimation techniques and to evaluate how differences in the parameter estimates and the estimation errors are reflected in the behavior of the flow model predictions. That is, we wished to determine to what degree uncertainties in flow model predictions may be affected simply by the choice of parameter estimation technique used. 3 refs., 2 figs

  13. A source term estimation method for a nuclear accident using atmospheric dispersion models

    DEFF Research Database (Denmark)

    Kim, Minsik; Ohba, Ryohji; Oura, Masamichi

    2015-01-01

    The objective of this study is to develop an operational source term estimation (STE) method applicable to a nuclear accident like the one that occurred at the Fukushima Dai-ichi nuclear power station in 2011. The new STE method presented here is based on data from atmospheric dispersion models and short-range observational data around the nuclear power plants. The accuracy of this method is validated with data from a wind tunnel study that involved a tracer gas release from a scaled model experiment at the Tokai Daini nuclear power station in Japan. We then use the methodology developed and validated through the effort described in this manuscript to estimate the release rate of radioactive material from the Fukushima Dai-ichi nuclear power station.

  14. How long is enough to detect terrestrial animals? Estimating the minimum trapping effort on camera traps

    Directory of Open Access Journals (Sweden)

    Xingfeng Si

    2014-05-01

    Full Text Available Camera trapping is an important wildlife inventory tool for estimating species diversity at a site. Knowing what minimum trapping effort is needed to detect target species is also important for designing efficient studies, considering both the number of camera locations and the survey length. Here, we take advantage of a two-year camera trapping dataset from a small (24-ha) study plot in Gutianshan National Nature Reserve, eastern China, to estimate the minimum trapping effort actually needed to sample the wildlife community. We also evaluated the relative value of adding new camera sites versus running cameras for a longer period at one site. The full dataset includes 1727 independent photographs captured during 13,824 camera-days, documenting 10 resident terrestrial species of birds and mammals. Our rarefaction analysis shows that a minimum of 931 camera-days would be needed to detect the resident species sufficiently in the plot, and c. 8700 camera-days to detect all 10 resident species. In terms of detecting a diversity of species, the optimal sampling period for one camera site was c. 40 days, or long enough to record about 20 independent photographs. Our analysis of the effect of adding camera sites shows that rotating cameras to new sites would be more efficient for measuring species richness than leaving cameras at fewer sites for a longer period.
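
    The rarefaction logic can be sketched as follows; the detection matrix and per-species detection rates are synthetic, chosen only to mimic a 10-species community over the reported number of camera-days.

```python
# Hedged sketch: resample camera-days from a day-by-species detection
# matrix and track how observed richness accumulates with effort.
import numpy as np

rng = np.random.default_rng(7)
n_days, n_species = 13824, 10
rates = np.array([0.05, 0.03, 0.02, 0.01, 0.008,
                  0.005, 0.003, 0.002, 0.001, 0.0005])  # detections per day
detections = rng.random((n_days, n_species)) < rates    # day x species matrix

def mean_richness(effort, reps=200):
    """Mean species count across random subsamples of `effort` camera-days."""
    samples = [rng.choice(n_days, effort, replace=False) for _ in range(reps)]
    return np.mean([detections[idx].any(axis=0).sum() for idx in samples])

for effort in (500, 1000, 2000, 5000, 10000):
    print(f"{effort:6d} camera-days -> {mean_richness(effort):.1f} species")
```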

  15. Effects of model complexity and priors on estimation using sequential importance sampling/resampling for species conservation

    Science.gov (United States)

    Dunham, Kylee; Grand, James B.

    2016-01-01

    We examined the effects of complexity and priors on the accuracy of models used to estimate ecological and observational processes, and to make predictions regarding population size and structure. State-space models are useful for estimating complex, unobservable population processes and making predictions about future populations based on limited data. To better understand the utility of state space models in evaluating population dynamics, we used them in a Bayesian framework and compared the accuracy of models with differing complexity, with and without informative priors using sequential importance sampling/resampling (SISR). Count data were simulated for 25 years using known parameters and observation process for each model. We used kernel smoothing to reduce the effect of particle depletion, which is common when estimating both states and parameters with SISR. Models using informative priors estimated parameter values and population size with greater accuracy than their non-informative counterparts. While the estimates of population size and trend did not suffer greatly in models using non-informative priors, the algorithm was unable to accurately estimate demographic parameters. This model framework provides reasonable estimates of population size when little to no information is available; however, when information on some vital rates is available, SISR can be used to obtain more precise estimates of population size and process. Incorporating model complexity such as that required by structured populations with stage-specific vital rates affects precision and accuracy when estimating latent population variables and predicting population dynamics. These results are important to consider when designing monitoring programs and conservation efforts requiring management of specific population segments.
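
    A minimal sequential importance sampling/resampling filter for a toy stochastic-growth model with Poisson counts is sketched below; the kernel smoothing step used in the study to counter particle depletion is deliberately omitted, and all priors are invented.

```python
# Hedged sketch: bootstrap SISR for a state-space model with a growth-rate
# parameter estimated jointly with the latent population size.
import numpy as np

rng = np.random.default_rng(8)
T, r_true, sigma = 25, 0.05, 0.1
N = np.empty(T); N[0] = 50.0
for t in range(1, T):                                # simulate the truth
    N[t] = N[t - 1] * np.exp(r_true + sigma * rng.standard_normal())
y = rng.poisson(N)                                   # observed counts

n_part = 5000
particles = rng.uniform(20, 100, n_part)             # prior on N_0
r = rng.normal(0.0, 0.2, n_part)                     # parameter particles

for t in range(T):
    if t > 0:                                        # propagate state equation
        particles *= np.exp(r + sigma * rng.standard_normal(n_part))
    # importance weights from the Poisson observation model (up to a constant)
    logw = y[t] * np.log(particles) - particles
    w = np.exp(logw - logw.max()); w /= w.sum()
    keep = rng.choice(n_part, n_part, p=w)           # multinomial resampling
    particles, r = particles[keep], r[keep]          # note: r degenerates
                                                     # without kernel smoothing

print(f"final N estimate {particles.mean():.1f} (true {N[-1]:.1f}), "
      f"r estimate {r.mean():.3f} (true {r_true})")
```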

  16. A financial planning model for estimating hospital debt capacity.

    Science.gov (United States)

    Hopkins, D S; Heath, D; Levin, P J

    1982-01-01

    A computer-based financial planning model was formulated to measure the impact of a major capital improvement project on the fiscal health of Stanford University Hospital. The model had to be responsive to many variables and easy to use, so as to allow for the testing of numerous alternatives. Special efforts were made to identify the key variables that needed to be presented in the model and to include all known links between capital investment, debt, and hospital operating expenses. Growth in the number of patient days of care was singled out as a major source of uncertainty that would have profound effects on the hospital's finances. Therefore this variable was subjected to special scrutiny in terms of efforts to gauge expected demographic trends and market forces. In addition, alternative base runs of the model were made under three distinct patient-demand assumptions. Use of the model enabled planners at the Stanford University Hospital (a) to determine that a proposed modernization plan was financially feasible under a reasonable (that is, not unduly optimistic) set of assumptions and (b) to examine the major sources of risk. Other than patient demand, these sources were found to be gross revenues per patient, operating costs, and future limitations on government reimbursement programs. When the likely financial consequences of these risks were estimated, both separately and in combination, it was determined that even if two or more assumptions took a somewhat more negative turn than was expected, the hospital would be able to offset adverse consequences by a relatively minor reduction in operating costs. PMID:7111658

  17. Fuel Burn Estimation Model

    Science.gov (United States)

    Chatterji, Gano

    2011-01-01

    Conclusions: Validated the fuel estimation procedure using flight test data. A good fuel model can be created if weight and fuel data are available. Error in the assumed takeoff weight results in a similar amount of error in the fuel estimate. Fuel estimation error bounds can be determined.

  18. Utilising temperature differences as constraints for estimating parameters in a simple climate model

    International Nuclear Information System (INIS)

    Bodman, Roger W; Karoly, David J; Enting, Ian G

    2010-01-01

    Simple climate models can be used to estimate the global temperature response to increasing greenhouse gases. Changes in the energy balance of the global climate system are represented by equations that necessitate the use of uncertain parameters. The values of these parameters can be estimated from historical observations, model testing, and tuning to more complex models. Efforts have been made at estimating the possible ranges for these parameters. This study continues this process, but demonstrates two new constraints. Previous studies have shown that land-ocean temperature differences are only weakly correlated with global mean temperature for natural internal climate variations. Hence, these temperature differences provide additional information that can be used to help constrain model parameters. In addition, an ocean heat content ratio can also provide a further constraint. A pulse response technique was used to identify relative parameter sensitivity which confirmed the importance of climate sensitivity and ocean vertical diffusivity, but the land-ocean warming ratio and the land-ocean heat exchange coefficient were also found to be important. Experiments demonstrate the utility of the land-ocean temperature difference and ocean heat content ratio for setting parameter values. This work is based on investigations with MAGICC (Model for the Assessment of Greenhouse-gas Induced Climate Change) as the simple climate model.

  19. High-dimensional model estimation and model selection

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    I will review concepts and algorithms from high-dimensional statistics for linear model estimation and model selection. I will particularly focus on the so-called p>>n setting where the number of variables p is much larger than the number of samples n. I will focus mostly on regularized statistical estimators that produce sparse models. Important examples include the LASSO and its matrix extension, the Graphical LASSO, and more recent non-convex methods such as the TREX. I will show the applicability of these estimators in a diverse range of scientific applications, such as sparse interaction graph recovery and high-dimensional classification and regression problems in genomics.
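
    A standard p >> n example with the LASSO, on synthetic data with a small set of truly nonzero coefficients, might look as follows; the dimensions and regularization strength are arbitrary choices.

```python
# Hedged sketch: sparse linear regression where p is much larger than n.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(9)
n, p, k = 100, 2000, 20                     # samples, variables, true nonzeros
X = rng.standard_normal((n, p))
beta = np.zeros(p); beta[:k] = rng.uniform(1, 3, k)
y = X @ beta + 0.5 * rng.standard_normal(n)

model = Lasso(alpha=0.2).fit(X, y)
selected = np.flatnonzero(model.coef_)
print(f"{selected.size} coefficients selected; "
      f"{np.sum(selected < k)} of the {k} true ones recovered")
```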

  20. Correlation between the model accuracy and model-based SOC estimation

    International Nuclear Information System (INIS)

    Wang, Qianqian; Wang, Jiao; Zhao, Pengju; Kang, Jianqiang; Yan, Few; Du, Changqing

    2017-01-01

    State-of-charge (SOC) estimation is a core technology for battery management systems. Considerable progress has been achieved in the study of SOC estimation algorithms, especially algorithms based on the Kalman filter, to meet the increasing demands of model-based battery management systems. The Kalman filter weakens the influence of white noise and initial error during SOC estimation but cannot eliminate the inherent error of the battery model itself. As such, the accuracy of SOC estimation is directly related to the accuracy of the battery model. Thus far, the quantitative relationship between model accuracy and model-based SOC estimation has remained unknown. This study summarizes three equivalent-circuit lithium-ion battery models, namely, the Thevenin, PNGV, and DP models. The model parameters are identified through hybrid pulse power characterization tests. The three models are evaluated, and SOC estimation conducted by the EKF-Ah method under three operating conditions is quantitatively studied. The regression and correlation of the standard deviation and normalized RMSE are studied and compared between the model error and the SOC estimation error. These quantities exhibit a strong linear relationship. Results indicate that model accuracy affects SOC estimation accuracy mainly in two ways: the dispersion of the frequency distribution of the error and the overall level of the error. On the basis of the relationship between model error and SOC estimation error, our study provides a strategy for selecting a suitable cell model to meet the requirements of SOC precision using a Kalman filter.
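
    A one-state extended Kalman filter for SOC, combining a coulomb-counting prediction with a voltage-measurement update, can be sketched as below; the cell parameters and the linearized OCV curve are illustrative assumptions, not values identified from the tests described here.

```python
# Hedged sketch: EKF SOC estimation for a toy cell with linear OCV(SOC)
# and ohmic resistance only; deliberately started from a wrong initial SOC.
import numpy as np

rng = np.random.default_rng(10)
Q = 2.0 * 3600                     # capacity (A*s), assumed 2 Ah cell
R0 = 0.05                          # ohmic resistance (ohm), assumed
ocv = lambda soc: 3.0 + 1.2 * soc  # linearized OCV curve (V), assumed
dt, n = 1.0, 3600

i = np.full(n, 1.0)                                # 1 A constant discharge
soc_true = 1.0 - np.cumsum(i) * dt / Q
v_meas = ocv(soc_true) - R0 * i + 0.01 * rng.standard_normal(n)

soc_hat, P = 0.9, 0.01                             # wrong initial SOC, variance
Qk, Rk, H = 1e-7, 1e-4, 1.2                        # process/meas. noise, dOCV/dSOC
for k in range(n):
    soc_hat -= i[k] * dt / Q                       # predict (coulomb counting)
    P += Qk
    K = P * H / (H * P * H + Rk)                   # Kalman gain
    soc_hat += K * (v_meas[k] - (ocv(soc_hat) - R0 * i[k]))
    P *= (1 - K * H)

print(f"EKF SOC estimate {soc_hat:.3f} vs true {soc_true[-1]:.3f}")
```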

  1. Effort-reward imbalance and organisational injustice among aged nurses: a moderated mediation model.

    Science.gov (United States)

    Topa, Gabriela; Guglielmi, Dina; Depolo, Marco

    2016-09-01

    To test the effort-reward imbalance model among older nurses, expanding it to include the moderation of overcommitment and age in the stress-health complaints relationship, mediated by organisational injustice. The theoretical framework included the effort-reward imbalance, uncertainty management, and socio-emotional selectivity models. A two-wave design was employed with 255 nurses aged 45 years and over, recruited from four large hospitals in Spain (Madrid and the Basque Country). The direct effect of imbalance on health complaints was supported: it was significant when overcommitment was low but not when it was high. Organisational injustice mediated the influence of effort-reward imbalance on health complaints. The conditional effect of the mediation of organisational injustice was significant in three of the overcommitment/age conditions, but it weakened, becoming non-significant, when the level of overcommitment was low and age was high. The study tested the model in nursing populations and expanded it to the settings of occupational health and safety at work. The results of this study highlight the importance of effort-reward imbalance and organisational justice for creating healthy work environments. © 2016 John Wiley & Sons Ltd.

  2. Synergies Between Grace and Regional Atmospheric Modeling Efforts

    Science.gov (United States)

    Kusche, J.; Springer, A.; Ohlwein, C.; Hartung, K.; Longuevergne, L.; Kollet, S. J.; Keune, J.; Dobslaw, H.; Forootan, E.; Eicker, A.

    2014-12-01

    In the meteorological community, efforts converge towards the implementation of high-resolution regional modeling systems. Comparisons with precipitation, evapotranspiration, and runoff data confirm that the model does favorably at representing observations. We show that, after GRACE-derived bias correction, basin-average hydrological conditions prior to 2002 can be reconstructed better than before. Next, comparing GRACE with CLM forced by EURO-CORDEX simulations allows us to identify processes needing improvement in the model. Finally, we compare COSMO-EU atmospheric pressure, a proxy for mass corrections in satellite gravimetry, with ERA-Interim over Europe at timescales shorter and longer than 1 month, and at spatial scales below and above the ERA resolution. We find that differences between the regional and global models are more pronounced at high frequencies, with magnitudes at the sub-grid scale and larger scales corresponding to 1-3 hPa (1-3 cm EWH), which is relevant for the assessment of post-GRACE concepts.

  3. Integrating multiple distribution models to guide conservation efforts of an endangered toad

    Science.gov (United States)

    Treglia, Michael L.; Fisher, Robert N.; Fitzgerald, Lee A.

    2015-01-01

    Species distribution models are used for numerous purposes such as predicting changes in species’ ranges and identifying biodiversity hotspots. Although implications of distribution models for conservation are often implicit, few studies use these tools explicitly to inform conservation efforts. Herein, we illustrate how multiple distribution models developed using distinct sets of environmental variables can be integrated to aid in identifying sites for use in conservation. We focus on the endangered arroyo toad (Anaxyrus californicus), which relies on open, sandy streams and surrounding floodplains in southern California, USA, and northern Baja California, Mexico. Declines of the species are largely attributed to habitat degradation associated with vegetation encroachment, invasive predators, and altered hydrologic regimes. We had three main goals: 1) develop a model of potential habitat for arroyo toads, based on long-term environmental variables and all available locality data; 2) develop a model of the species’ current habitat by incorporating recent remotely-sensed variables and only using recent locality data; and 3) integrate the results of both models to identify sites that may be employed in conservation efforts. We used a machine learning technique, Random Forests, to develop the models, focused on riparian zones in southern California. We identified 14.37% and 10.50% of our study area as potential and current habitat for the arroyo toad, respectively. Generally, inclusion of remotely-sensed variables reduced the modeled suitability of sites, thus many areas modeled as potential habitat were not modeled as current habitat. We propose such sites could be made suitable for arroyo toads through active management, increasing current habitat by up to 67.02%. Our general approach can be employed to guide conservation efforts of virtually any species with the data necessary to develop appropriate distribution models.
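
    A schematic of the integration step under stated assumptions: two Random Forests stand in for the "potential" (long-term variables) and "current" (adds remotely-sensed variables) habitat models, and cells suitable under the first but not the second are flagged as candidates for active management. The data, variable counts, and the 0.5 threshold are all illustrative.

    ```python
    # Illustrative sketch: intersect "potential" and "current" habitat models.
    # Data here are synthetic stand-ins for environmental predictor layers.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    X_longterm = rng.normal(size=(500, 4))   # long-term climate/terrain variables
    X_recent = rng.normal(size=(500, 6))     # adds recent remotely-sensed layers
    y = (X_longterm[:, 0] + rng.normal(scale=0.5, size=500) > 0).astype(int)

    potential = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_longterm, y)
    current = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_recent, y)

    p_pot = potential.predict_proba(X_longterm)[:, 1]
    p_cur = current.predict_proba(X_recent)[:, 1]

    suit_pot = p_pot >= 0.5                  # threshold choice is illustrative
    suit_cur = p_cur >= 0.5
    restorable = suit_pot & ~suit_cur        # potential but not current habitat
    print(f"restorable cells: {restorable.sum()} of {len(y)}")
    ```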

  4. The effort-reward imbalance work-stress model and daytime salivary cortisol and dehydroepiandrosterone (DHEA) among Japanese women.

    Science.gov (United States)

    Ota, Atsuhiko; Mase, Junji; Howteerakul, Nopporn; Rajatanun, Thitipat; Suwannapong, Nawarat; Yatsuya, Hiroshi; Ono, Yuichiro

    2014-09-17

    We examined the influence of work-related effort-reward imbalance and overcommitment to work (OC), as derived from Siegrist's Effort-Reward Imbalance (ERI) model, on the hypothalamic-pituitary-adrenocortical (HPA) axis. We hypothesized that, among healthy workers, both cortisol and dehydroepiandrosterone (DHEA) secretion would be increased by effort-reward imbalance and OC and, as a result, cortisol-to-DHEA ratio (C/D ratio) would not differ by effort-reward imbalance or OC. The subjects were 115 healthy female nursery school teachers. Salivary cortisol, DHEA, and C/D ratio were used as indexes of HPA activity. Mixed-model analyses of variance revealed that neither the interaction between the ERI model indicators (i.e., effort, reward, effort-to-reward ratio, and OC) and the series of measurement times (9:00, 12:00, and 15:00) nor the main effect of the ERI model indicators was significant for daytime salivary cortisol, DHEA, or C/D ratio. Multiple linear regression analyses indicated that none of the ERI model indicators was significantly associated with area under the curve of daytime salivary cortisol, DHEA, or C/D ratio. We found that effort, reward, effort-reward imbalance, and OC had little influence on daytime variation patterns, levels, or amounts of salivary HPA-axis-related hormones. Thus, our hypotheses were not supported.

  5. Efforts - Final technical report on task 4. Physical modelling validation

    DEFF Research Database (Denmark)

    Andreasen, Jan Lasson; Olsson, David Dam; Christensen, T. W.

    The present report documents the work carried out at DTU in Task 4, Physical modelling validation, of the Brite/Euram project No. BE96-3340, contract No. BRPR-CT97-0398, with the title Enhanced Framework for forging design using reliable three-dimensional simulation (EFFORTS). The report...

  6. An opportunity cost model of subjective effort and task performance

    Science.gov (United States)

    Kurzban, Robert; Duckworth, Angela; Kable, Joseph W.; Myers, Justus

    2013-01-01

    Why does performing certain tasks cause the aversive experience of mental effort and concomitant deterioration in task performance? One explanation posits a physical resource that is depleted over time. We propose an alternate explanation that centers on mental representations of the costs and benefits associated with task performance. Specifically, certain computational mechanisms, especially those associated with executive function, can be deployed for only a limited number of simultaneous tasks at any given moment. Consequently, the deployment of these computational mechanisms carries an opportunity cost – that is, the next-best use to which these systems might be put. We argue that the phenomenology of effort can be understood as the felt output of these cost/benefit computations. In turn, the subjective experience of effort motivates reduced deployment of these computational mechanisms in the service of the present task. These opportunity cost representations, then, together with other cost/benefit calculations, determine effort expended and, everything else equal, result in performance reductions. In making our case for this position, we review alternate explanations both for the phenomenology of effort associated with these tasks and for performance reductions over time. Likewise, we review the broad range of relevant empirical results from across subdisciplines, especially psychology and neuroscience. We hope that our proposal will help to build links among the diverse fields that have been addressing similar questions from different perspectives, and we emphasize ways in which alternate models might be empirically distinguished. PMID:24304775

  7. Single-Step BLUP with Varying Genotyping Effort in Open-Pollinated Picea glauca

    Directory of Open Access Journals (Sweden)

    Blaise Ratcliffe

    2017-03-01

    Maximization of genetic gain in forest tree breeding programs is contingent on the accuracy of the predicted breeding values and precision of the estimated genetic parameters. We investigated the effect of the combined use of contemporary pedigree information and genomic relatedness estimates on the accuracy of predicted breeding values and precision of estimated genetic parameters, as well as rankings of selection candidates, using single-step genomic evaluation (HBLUP). In this study, two traits with diverse heritabilities [tree height (HT) and wood density (WD)] were assessed at various levels of family genotyping effort (0, 25, 50, 75, and 100%) in a population of white spruce (Picea glauca) consisting of 1694 trees from 214 open-pollinated families, representing 43 provenances in Québec, Canada. The results revealed that HBLUP bivariate analysis is effective in reducing the known bias in heritability estimates of open-pollinated populations, as it exposes hidden relatedness, potential pedigree errors, and inbreeding. The addition of genomic information in the analysis considerably improved the accuracy of breeding value estimates by accounting for both Mendelian sampling and historical coancestry that were not captured by the contemporary pedigree alone. Increasing family genotyping effort was associated with continuous improvement in model fit, precision of genetic parameters, and breeding value accuracy. Yet, improvements were observed even at minimal genotyping effort, indicating that even modest genotyping effort is effective in improving genetic evaluation. The combined utilization of both pedigree and genomic information may be a cost-effective approach to increase the accuracy of breeding values in forest tree breeding programs where shallow pedigrees and large testing populations are the norm.

  8. Single-Step BLUP with Varying Genotyping Effort in Open-Pollinated Picea glauca.

    Science.gov (United States)

    Ratcliffe, Blaise; El-Dien, Omnia Gamal; Cappa, Eduardo P; Porth, Ilga; Klápště, Jaroslav; Chen, Charles; El-Kassaby, Yousry A

    2017-03-10

    Maximization of genetic gain in forest tree breeding programs is contingent on the accuracy of the predicted breeding values and precision of the estimated genetic parameters. We investigated the effect of the combined use of contemporary pedigree information and genomic relatedness estimates on the accuracy of predicted breeding values and precision of estimated genetic parameters, as well as rankings of selection candidates, using single-step genomic evaluation (HBLUP). In this study, two traits with diverse heritabilities [tree height (HT) and wood density (WD)] were assessed at various levels of family genotyping efforts (0, 25, 50, 75, and 100%) from a population of white spruce (Picea glauca) consisting of 1694 trees from 214 open-pollinated families, representing 43 provenances in Québec, Canada. The results revealed that HBLUP bivariate analysis is effective in reducing the known bias in heritability estimates of open-pollinated populations, as it exposes hidden relatedness, potential pedigree errors, and inbreeding. The addition of genomic information in the analysis considerably improved the accuracy in breeding value estimates by accounting for both Mendelian sampling and historical coancestry that were not captured by the contemporary pedigree alone. Increasing family genotyping efforts were associated with continuous improvement in model fit, precision of genetic parameters, and breeding value accuracy. Yet, improvements were observed even at minimal genotyping effort, indicating that even modest genotyping effort is effective in improving genetic evaluation. The combined utilization of both pedigree and genomic information may be a cost-effective approach to increase the accuracy of breeding values in forest tree breeding programs where shallow pedigrees and large testing populations are the norm. Copyright © 2017 Ratcliffe et al.
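
    For reference, the "single step" in HBLUP consists of solving the usual BLUP mixed-model equations with a combined relationship matrix H in place of the pedigree matrix A. In the standard formulation (stated here as background, not as a detail from the paper), the inverse of H blends pedigree and genomic relationships:

    \[
    \mathbf{H}^{-1} \;=\; \mathbf{A}^{-1} + \begin{bmatrix} \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{G}^{-1} - \mathbf{A}_{22}^{-1} \end{bmatrix},
    \]

    where G is the genomic relationship matrix of the genotyped subset and A22 holds the pedigree relationships among those same individuals; ungenotyped trees enter only through A, which is how partial (25-75%) genotyping effort can still propagate genomic information through the whole evaluation.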

  9. Comparison of different models for non-invasive FFR estimation

    Science.gov (United States)

    Mirramezani, Mehran; Shadden, Shawn

    2017-11-01

    Coronary artery disease is a leading cause of death worldwide. Fractional flow reserve (FFR), derived from invasively measuring the pressure drop across a stenosis, is considered the gold standard to diagnose disease severity and need for treatment. Non-invasive estimation of FFR has gained recent attention for its potential to reduce patient risk and procedural cost versus invasive FFR measurement. Non-invasive FFR can be obtained by using image-based computational fluid dynamics to simulate blood flow and pressure in a patient-specific coronary model. However, 3D simulations require extensive effort for model construction and numerical computation, which limits their routine use. In this study we compare (ordered by increasing computational cost/complexity): reduced-order algebraic models of pressure drop across a stenosis; 1D, 2D (multiring) and 3D CFD models; as well as 3D FSI for the computation of FFR in idealized and patient-specific stenosis geometries. We demonstrate the ability of an appropriate reduced order algebraic model to closely predict FFR when compared to FFR from a full 3D simulation. This work was supported by the NIH, Grant No. R01-HL103419.
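
    To make the "reduced-order algebraic model" idea concrete, here is a hedged sketch of one classical form: the trans-stenotic pressure drop as a viscous (Poiseuille) term plus a turbulent expansion (Borda-Carnot) term, with FFR as the ratio of distal to aortic pressure. The geometry, hyperemic flow, and fluid properties below are illustrative assumptions, not values or the calibrated model from the study.

    ```python
    # Sketch of an algebraic FFR estimate: viscous + expansion pressure losses.
    import numpy as np

    mu, rho = 3.5e-3, 1060.0        # blood viscosity [Pa s], density [kg/m^3]

    def pressure_drop(q, d0, ds, length):
        """q: flow [m^3/s]; d0/ds: normal/stenotic diameters [m]; length: stenosis [m]."""
        a0, a_s = np.pi * d0 ** 2 / 4, np.pi * ds ** 2 / 4
        dp_visc = 128.0 * mu * length * q / (np.pi * ds ** 4)   # Poiseuille term
        dp_exp = 0.5 * rho * (q / a_s - q / a0) ** 2            # Borda-Carnot expansion loss
        return dp_visc + dp_exp

    p_aorta = 100 * 133.322          # 100 mmHg in Pa
    q_hyper = 3.0e-6                 # assumed hyperemic flow, 3 mL/s
    dp = pressure_drop(q_hyper, d0=3e-3, ds=1.5e-3, length=10e-3)
    ffr = (p_aorta - dp) / p_aorta
    print(f"dP = {dp / 133.322:.1f} mmHg, FFR = {ffr:.2f}")   # ~0.87 here
    ```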

  10. Low-rank Kalman filtering for efficient state estimation of subsurface advective contaminant transport models

    KAUST Repository

    El Gharamti, Mohamad

    2012-04-01

    Accurate knowledge of the movement of contaminants in porous media is essential to track their trajectory and later extract them from the aquifer. A two-dimensional flow model is implemented and then applied to a linear contaminant transport model in the same porous medium. Because of different sources of uncertainties, this coupled model might not be able to accurately track the contaminant state. Incorporating observations through the process of data assimilation can guide the model toward the true trajectory of the system. The Kalman filter (KF), or its nonlinear variants, can be used to tackle this problem. To overcome the prohibitive computational cost of the KF, the singular evolutive Kalman filter (SEKF) and the singular fixed Kalman filter (SFKF) are used, which are variants of the KF operating with low-rank covariance matrices. Experimental results suggest that, under both perfect and imperfect model setups, the low-rank filters can provide estimates as accurate as the full KF but at much lower computational cost, reducing the computational effort of the KF to almost 3%. © 2012 American Society of Civil Engineers.
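
    The mechanics behind such low-rank filters can be illustrated generically: keep the covariance as a factor P = L Lᵀ of rank r << n, propagate only the factor, and re-truncate. This is a hedged sketch of the general idea, not the SEKF/SFKF algorithms themselves; all matrices are placeholders.

    ```python
    # Generic low-rank Kalman filtering: store P ~ L @ L.T with L of shape (n, r).
    import numpy as np

    def truncate(L, r):
        """Keep the r dominant directions of P = L @ L.T via thin SVD."""
        U, s, _ = np.linalg.svd(L, full_matrices=False)
        return U[:, :r] * s[:r]

    def lowrank_predict(M, L, r):
        # Propagate the covariance factor through the model operator M only.
        return truncate(M @ L, r)

    def lowrank_update(x, L, H, Rinv, y):
        """Kalman update done in the cheap r-dimensional factor space."""
        HL = H @ L
        S = np.eye(L.shape[1]) + HL.T @ Rinv @ HL      # r x r, cheap to invert
        K = L @ np.linalg.solve(S, HL.T @ Rinv)        # gain, (n, m)
        x_new = x + K @ (y - H @ x)
        L_new = L @ np.linalg.cholesky(np.linalg.inv(S))   # posterior P = L S^-1 L.T
        return x_new, L_new
    ```

    The update follows from the matrix inversion lemma: with P = L Lᵀ, the posterior covariance is L (I + LᵀHᵀR⁻¹HL)⁻¹ Lᵀ, so only an r-by-r system is ever solved, which is the source of the cost reduction the abstract reports.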

  11. Development of collision dynamics models to estimate the results of full-scale rail vehicle impact tests : Tufts University Master's Thesis

    Science.gov (United States)

    2000-11-01

    In an effort to study occupant survivability in train collisions, analyses and tests were conducted to understand and improve the crashworthiness of rail vehicles. A collision dynamics model was developed in order to estimate the rigid body motion of...

  12. Estimating surface fluxes using eddy covariance and numerical ogive optimization

    DEFF Research Database (Denmark)

    Sievers, J.; Papakyriakou, T.; Larsen, Søren Ejling

    2015-01-01

    Estimating representative surface fluxes using eddy covariance leads invariably to questions concerning inclusion or exclusion of low-frequency flux contributions. For studies where fluxes are linked to local physical parameters and up-scaled through numerical modelling efforts, low-frequency contributions …

  13. A coupled modelling effort to study the fate of contaminated sediments downstream of the Coles Hill deposit, Virginia, USA

    Directory of Open Access Journals (Sweden)

    C. F. Castro-Bolinaga

    2015-03-01

    This paper presents the preliminary results of a coupled modelling effort to study the fate of tailings (radioactive waste by-product) downstream of the Coles Hill uranium deposit located in Virginia, USA. The implementation of the overall modelling process includes a one-dimensional hydraulic model to qualitatively characterize the sediment transport process under severe flooding conditions downstream of the potential mining site, a two-dimensional ANSYS Fluent model to simulate the release of tailings from a containment cell located partially above the local ground surface into the nearby streams, and a one-dimensional finite-volume sediment transport model to examine the propagation of a tailings sediment pulse in the river network located downstream. The findings of this investigation aim to assist in estimating the potential impacts that tailings would have if they were transported into rivers and reservoirs located downstream of the Coles Hill deposit that serve as municipal drinking water supplies.

  14. Bayesian Model Averaging of Artificial Intelligence Models for Hydraulic Conductivity Estimation

    Science.gov (United States)

    Nadiri, A.; Chitsazan, N.; Tsai, F. T.; Asghari Moghaddam, A.

    2012-12-01

    This research presents a Bayesian artificial intelligence model averaging (BAIMA) method that incorporates multiple artificial intelligence (AI) models to estimate hydraulic conductivity and evaluate estimation uncertainties. Uncertainty in the AI model outputs stems from error in model input as well as non-uniqueness in selecting different AI methods. Using one single AI model tends to bias the estimation and underestimate uncertainty. BAIMA employs the Bayesian model averaging (BMA) technique to address the issue of using one single AI model for estimation. BAIMA estimates hydraulic conductivity by averaging the outputs of AI models according to their model weights. In this study, the model weights were determined using the Bayesian information criterion (BIC), which follows the parsimony principle. BAIMA calculates the within-model variances to account for uncertainty propagation from input data to AI model output. Between-model variances are evaluated to account for uncertainty due to model non-uniqueness. We employed Takagi-Sugeno fuzzy logic (TS-FL), artificial neural network (ANN) and neurofuzzy (NF) models to estimate hydraulic conductivity for the Tasuj plain aquifer, Iran. BAIMA combined the three AI models and produced a better fit than the individual models. While NF was expected to be the best AI model owing to its utilization of both TS-FL and ANN models, it was nearly discarded by the parsimony principle. The TS-FL model and the ANN model showed equal importance although their hydraulic conductivity estimates were quite different. This resulted in significant between-model variances that are normally ignored when a single AI model is used.
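
    A minimal sketch of the BIC-weighted averaging step, under stated assumptions: each model supplies point estimates and within-model variances, BIC differences are converted to weights, and the total variance sums within-model and between-model parts. All numbers below are illustrative.

    ```python
    # BIC-based Bayesian model averaging of several models' estimates.
    import numpy as np

    def bma_combine(preds, variances, bics):
        """preds/variances: per-model estimates at each point; bics: one BIC per model."""
        d = np.asarray(bics) - np.min(bics)
        w = np.exp(-0.5 * d)
        w /= w.sum()                                   # model weights from BIC
        preds, variances = np.asarray(preds), np.asarray(variances)
        mean = w @ preds                               # weighted average estimate
        within = w @ variances                         # propagated model variance
        between = w @ (preds - mean) ** 2              # spread due to non-uniqueness
        return mean, within + between

    mean, var = bma_combine(preds=[[2.1, 3.0], [1.8, 2.6], [2.4, 3.3]],
                            variances=[[0.2, 0.3], [0.25, 0.2], [0.3, 0.4]],
                            bics=[102.3, 104.1, 108.9])
    print(mean, var)
    ```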

  15. MCMC estimation of multidimensional IRT models

    NARCIS (Netherlands)

    Beguin, Anton; Glas, Cornelis A.W.

    1998-01-01

    A Bayesian procedure to estimate the three-parameter normal ogive model and a generalization to a model with multidimensional ability parameters are discussed. The procedure is a generalization of a procedure by J. Albert (1992) for estimating the two-parameter normal ogive model. The procedure will

  16. Software Cost-Estimation Model

    Science.gov (United States)

    Tausworthe, R. C.

    1985-01-01

    Software Cost Estimation Model SOFTCOST provides automated resource and schedule model for software development. Combines several cost models found in open literature into one comprehensive set of algorithms. Compensates for nearly fifty implementation factors relative to size of task, inherited baseline, organizational and system environment and difficulty of task.

  17. Overview 2004 of NASA Stirling-Convertor CFD-Model Development and Regenerator R&D Efforts

    Science.gov (United States)

    Tew, Roy C.; Dyson, Rodger W.; Wilson, Scott D.; Demko, Rikako

    2005-01-01

    This paper reports on 2004 accomplishments in three areas: development of a Stirling-convertor CFD model at NASA GRC and via a NASA grant, a Stirling regenerator-research effort conducted via a NASA grant (a follow-on to an earlier DOE contract), and a regenerator-microfabrication contract for development of a "next-generation Stirling regenerator." Cleveland State University is the lead organization for all three grant/contractual efforts, with the University of Minnesota and Gedeor Associates as subcontractors. Also, the Stirling Technology Co. and Sunpower, Inc. are both involved in all three efforts, either as funded or unfunded participants. International Mezzo Technologies of Baton Rouge, LA is the regenerator fabricator for the regenerator-microfabrication contract. Results of the efforts in these three areas are summarized.

  18. Magnitude of Neck-Surface Vibration as an Estimate of Subglottal Pressure during Modulations of Vocal Effort and Intensity in Healthy Speakers

    Science.gov (United States)

    McKenna, Victoria S.; Llico, Andres F.; Mehta, Daryush D.; Perkell, Joseph S.; Stepp, Cara E.

    2017-01-01

    Purpose: This study examined the relationship between the magnitude of neck-surface vibration (NSV_mag; transduced with an accelerometer) and intraoral estimates of subglottal pressure (P'_sg) during variations in vocal effort at 3 intensity levels. Method: Twelve vocally healthy adults produced strings of /pɑ/ syllables in 3…

  19. Glass Property Data and Models for Estimating High-Level Waste Glass Volume

    Energy Technology Data Exchange (ETDEWEB)

    Vienna, John D.; Fluegel, Alexander; Kim, Dong-Sang; Hrma, Pavel R.

    2009-10-05

    This report describes recent efforts to develop glass property models that can be used to help estimate the volume of high-level waste (HLW) glass that will result from vitrification of Hanford tank waste. The compositions of acceptable and processable HLW glasses need to be optimized to minimize the waste-form volume and, hence, to save cost. A database of properties and associated compositions for simulated waste glasses was collected for developing property-composition models. This database, although not comprehensive, represents a large fraction of data on waste-glass compositions and properties that were available at the time of this report. Glass property-composition models were fit to subsets of the database for several key glass properties. These models apply to a significantly broader composition space than those previously published. These models should be considered for interim use in calculating properties of Hanford waste glasses.

  20. Glass Property Data and Models for Estimating High-Level Waste Glass Volume

    International Nuclear Information System (INIS)

    Vienna, John D.; Fluegel, Alexander; Kim, Dong-Sang; Hrma, Pavel R.

    2009-01-01

    This report describes recent efforts to develop glass property models that can be used to help estimate the volume of high-level waste (HLW) glass that will result from vitrification of Hanford tank waste. The compositions of acceptable and processable HLW glasses need to be optimized to minimize the waste-form volume and, hence, to save cost. A database of properties and associated compositions for simulated waste glasses was collected for developing property-composition models. This database, although not comprehensive, represents a large fraction of data on waste-glass compositions and properties that were available at the time of this report. Glass property-composition models were fit to subsets of the database for several key glass properties. These models apply to a significantly broader composition space than those previously published. These models should be considered for interim use in calculating properties of Hanford waste glasses.

  1. A simplified model for the estimation of energy production of PV systems

    International Nuclear Information System (INIS)

    Aste, Niccolò; Del Pero, Claudio; Leonforte, Fabrizio; Manfren, Massimiliano

    2013-01-01

    The potential of solar energy is far higher than that of any other renewable source, although several limits exist. In particular, the fundamental factors that must be analyzed by investors and policy makers are the cost-effectiveness and the energy production of PV power plants, for decisions on investment schemes and energy policy strategies, respectively. Tools suitable for use even by non-specialists are therefore becoming increasingly important, and much research and development effort has been devoted to this goal in recent years. This study presents a simplified model for estimating annual PV production that can provide results with a level of accuracy comparable to that of the more sophisticated simulation tools from which it derives its fundamental data. The main advantage of the presented model is that it can be used by virtually anyone, without requiring specific field expertise. The inherent limits of the model are related to its empirical basis, but the methodology presented can be reproduced in the future with a different spectrum of data in order to assess, for example, the effect of technological evolution on the overall performance of PV power generation, or to establish performance benchmarks for a much larger variety of PV plants and technologies. - Highlights: • We analyze the main methods for estimating the electricity production of photovoltaic systems. • We simulate the same system with two different software packages in different European locations and estimate the electricity production. • We study the main losses of a PV plant. • We provide a simplified model to estimate the electrical production of any well-designed PV system. • We validate the data obtained by the proposed model against experimental data from three PV systems
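
    A minimal sketch of the kind of simplified yield estimate such models target: annual energy from nominal power, site irradiation, and a lumped performance ratio. The performance-ratio value is an illustrative assumption, not the paper's fitted coefficient.

    ```python
    # Simplified annual PV yield: E = P_nom * H * PR.
    def pv_annual_energy(p_nom_kw, irradiation_kwh_m2, pr=0.78):
        """E [kWh/yr] = P_nom [kWp] * H [kWh/m^2/yr, relative to 1 kW/m^2 STC] * PR."""
        return p_nom_kw * irradiation_kwh_m2 * pr

    # e.g. a 3 kWp system at a site with 1450 kWh/m^2 annual global irradiation
    print(pv_annual_energy(3.0, 1450))   # ~3.4 MWh/yr
    ```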

  2. ERP services effort estimation strategies based on early requirements

    NARCIS (Netherlands)

    Erasmus, I.P.; Daneva, Maia; Kalenborg, Axel; Trapp, Marcus

    2015-01-01

    ERP clients and vendors necessarily estimate their project interventions at a very early stage, before the full requirements for an ERP solution are known and often before a contract is finalized between a vendor/consulting company and a client. ERP project estimation at the stage of early

  3. Modeling SMAP Spacecraft Attitude Control Estimation Error Using Signal Generation Model

    Science.gov (United States)

    Rizvi, Farheen

    2016-01-01

    Two ground simulation software packages are used to model the SMAP spacecraft dynamics. The CAST software uses a higher fidelity model than the ADAMS software. The ADAMS software models the spacecraft plant, controller and actuator models, and assumes a perfect sensor and estimator model. In this simulation study, the spacecraft dynamics results from the ADAMS software are used because the CAST software is unavailable. The main source of spacecraft dynamics error in the higher fidelity CAST software is the estimation error. A signal generation model is developed to capture the effect of this estimation error in the overall spacecraft dynamics. This signal generation model is then included in the ADAMS software spacecraft dynamics estimate so that the results are similar to CAST. The signal generation model has characteristics (mean, variance and power spectral density) similar to those of the true CAST estimation error. In this way, the ADAMS software can still be used while capturing the higher fidelity spacecraft dynamics modeling of the CAST software.
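
    A hedged sketch of the general technique behind such a signal generation model: shape white noise so that its mean, variance, and power spectral density resemble a measured error signal. The target spectrum below is a placeholder low-pass shape, not the SMAP error spectrum.

    ```python
    # Generate a noise signal with a prescribed mean, variance, and PSD shape.
    import numpy as np
    from scipy import signal

    def generate_error(n, fs, target_psd, mean, var, seed=0):
        """target_psd: function of frequency [Hz] returning a relative PSD shape."""
        rng = np.random.default_rng(seed)
        white = rng.standard_normal(n)
        freqs = np.fft.rfftfreq(n, d=1.0 / fs)
        shaped = np.fft.irfft(np.fft.rfft(white) * np.sqrt(target_psd(freqs)), n)
        shaped *= np.sqrt(var / shaped.var())      # match the target variance
        return shaped - shaped.mean() + mean       # match the target mean

    err = generate_error(n=4096, fs=10.0, mean=0.001, var=1e-6,
                         target_psd=lambda f: 1.0 / (1.0 + (f / 0.5) ** 2))
    f, pxx = signal.welch(err, fs=10.0)            # check the realized spectrum
    ```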

  4. Maintenance personnel performance simulation (MAPPS) model: overview and evaluation efforts

    International Nuclear Information System (INIS)

    Knee, H.E.; Haas, P.M.; Siegel, A.I.; Bartter, W.D.; Wolf, J.J.; Ryan, T.G.

    1984-01-01

    The development of the MAPPS model has been completed and the model is currently undergoing evaluation. These efforts are addressing a number of identified issues concerning practicality, acceptability, usefulness, and validity. Preliminary analysis of the evaluation data that has been collected indicates that MAPPS will provide comprehensive and reliable data for PRA purposes and for a number of other applications. The MAPPS computer simulation model provides the user with a sophisticated tool for gaining insights into tasks performed by NPP maintenance personnel. Its wide variety of input parameters and output data makes it extremely flexible for application to a number of diverse problems. With the demonstration of favorable model evaluation results, the MAPPS model will represent a valuable source of NPP maintainer reliability data and provide PRA studies with a source of data on maintainers that previously did not exist

  5. Evidence towards improved estimation of respiratory muscle effort from diaphragm mechanomyographic signals with cardiac vibration interference using sample entropy with fixed tolerance values.

    Directory of Open Access Journals (Sweden)

    Leonardo Sarlabous

    The analysis of amplitude parameters of the diaphragm mechanomyographic (MMGdi) signal is a non-invasive technique to assess respiratory muscle effort and to detect and quantify the severity of respiratory muscle weakness. The amplitude of the MMGdi signal is usually evaluated using the average rectified value or the root mean square of the signal. However, these estimates are greatly affected by the presence of cardiac vibration or mechanocardiographic (MCG) noise. In this study, we present a method for improving the estimation of respiratory muscle effort from MMGdi signals that is robust to the presence of MCG. This method is based on the calculation of the sample entropy using fixed tolerance values (fSampEn), that is, with tolerance values that are not normalized by the local standard deviation of the window analyzed. The behavior of the fSampEn parameter was tested in synthesized mechanomyographic signals with different ratios between the amplitude of the MCG and clean mechanomyographic components. As an example of application of this technique, the use of fSampEn was also explored in recorded MMGdi signals with different inspiratory loads. The results with both synthetic and recorded signals indicate that the entropy parameter is less affected by MCG noise, especially at low signal-to-noise ratios. Therefore, we believe that the proposed fSampEn parameter could improve estimates of respiratory muscle effort from MMGdi signals in the presence of MCG interference.
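
    A minimal reference implementation of sample entropy with a fixed (non-normalized) tolerance, the fSampEn idea described above; the embedding dimension m and tolerance r are illustrative defaults, not the study's settings.

    ```python
    # Sample entropy with a FIXED tolerance r (not scaled by local std, unlike
    # classical SampEn), computed over a 1-D signal window x.
    import numpy as np

    def fsampen(x, m=1, r=0.1):
        x = np.asarray(x, dtype=float)
        n = len(x)

        def count_matches(dim):
            # N - m templates of length `dim`, compared with Chebyshev distance.
            templates = np.array([x[i:i + dim] for i in range(n - m)])
            count = 0
            for i in range(len(templates) - 1):
                dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
                count += np.sum(dist <= r)
            return count

        b, a = count_matches(m), count_matches(m + 1)
        return -np.log(a / b) if a > 0 and b > 0 else np.inf
    ```

    In an amplitude-estimation setting the statistic would be evaluated over a sliding window of the MMGdi signal, with the fixed r making the output track signal amplitude rather than being normalized away.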

  6. Estimation of mental effort in learning visual search by measuring pupil response.

    Directory of Open Access Journals (Sweden)

    Tatsuto Takeuchi

    Perceptual learning refers to the improvement of perceptual sensitivity and performance with training. In this study, we examined whether learning is accompanied by a release from mental effort on the task, leading to automatization of the learned task. For this purpose, we had subjects conduct a visual search for a target, defined by a combination of orientation and spatial frequency, while we monitored their pupil size. It is well known that pupil size reflects the strength of mental effort invested in a task. We found that pupil size increased rapidly as the learning proceeded in the early phase of training and decreased at the later phase to a level half of its maximum value. This result does not support the simple automatization hypothesis. Instead, it suggests that mental effort and behavioral performance reflect different aspects of perceptual learning. Further, mental effort would continue to be invested to maintain good performance at a later stage of training.

  7. A diagnostic model to estimate winds and small-scale drag from Mars Observer PMIRR data

    Science.gov (United States)

    Barnes, J. R.

    1993-01-01

    Theoretical and modeling studies indicate that small-scale drag due to breaking gravity waves is likely to be of considerable importance for the circulation in the middle atmospheric region (approximately 40-100 km altitude) on Mars. Recent earth-based spectroscopic observations have provided evidence for the existence of circulation features, in particular, a warm winter polar region, associated with gravity wave drag. Since the Mars Observer PMIRR experiment will obtain temperature profiles extending from the surface up to about 80 km altitude, it will be extensively sampling middle atmospheric regions in which gravity wave drag may play a dominant role. Estimating the drag then becomes crucial to the estimation of the atmospheric winds from the PMIRR-observed temperatures. An iterative diagnostic model based upon one previously developed and tested with earth satellite temperature data will be applied to the PMIRR measurements to produce estimates of the small-scale zonal drag and three-dimensional wind fields in the Mars middle atmosphere. This model is based on the primitive equations, and can allow for time dependence (the time tendencies used may be based upon those computed in a Fast Fourier Mapping procedure). The small-scale zonal drag is estimated as the residual in the zonal momentum equation; the horizontal winds having first been estimated from the meridional momentum equation and the continuity equation. The scheme estimates the vertical motions from the thermodynamic equation, and thus needs estimates of the diabatic heating based upon the observed temperatures. The latter will be generated using a radiative model. It is hoped that the diagnostic scheme will be able to produce good estimates of the zonal gravity wave drag in the Mars middle atmosphere, estimates that can then be used in other diagnostic or assimilation efforts, as well as more theoretical studies.
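
    Schematically, and with heavy simplification (zonal-mean advective form, metric and eddy terms omitted; the actual scheme retains the full primitive-equation budget), the drag X is the term that closes the zonal momentum budget once every other term has been evaluated from the observation-derived fields:

    \[
    X \;=\; \frac{\partial \bar{u}}{\partial t} + \bar{v}\,\frac{\partial \bar{u}}{\partial y} + \bar{w}\,\frac{\partial \bar{u}}{\partial z} - f\,\bar{v},
    \]

    with \(\bar{u},\bar{v},\bar{w}\) the mean winds and \(f\) the Coriolis parameter.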

  8. The Effect of the Demand Control and Effort Reward Imbalance Models on the Academic Burnout of Korean Adolescents

    Science.gov (United States)

    Lee, Jayoung; Puig, Ana; Lee, Sang Min

    2012-01-01

    The purpose of this study was to examine the effects of the Demand Control Model (DCM) and the Effort Reward Imbalance Model (ERIM) on academic burnout for Korean students. Specifically, this study identified the effects of the predictor variables based on DCM and ERIM (i.e., demand, control, effort, reward, Demand Control Ratio, Effort Reward…

  9. Exploiting magnetic resonance angiography imaging improves model estimation of BOLD signal.

    Directory of Open Access Journals (Sweden)

    Zhenghui Hu

    The change of the BOLD signal relies heavily upon the resting blood volume fraction (V0) associated with regional vasculature. However, existing hemodynamic data assimilation studies pretermit such concern. They simply assign the value in a physiologically plausible range to get over ill-conditioning of the assimilation problem and fail to explore the actual V0. Such practice might lead to unreliable model estimation. In this work, we present the first exploration of the influence of V0 on fMRI data assimilation, where the actual V0 within a given cortical area was calibrated by an MR angiography experiment and then augmented into the assimilation scheme. We have investigated the impact of V0 on single-region data assimilation and multi-region data assimilation (dynamic causal modeling, DCM) in a classical flashing checkerboard experiment. Results show that the employment of an assumed V0 in fMRI data assimilation is only suitable for fMRI signal reconstruction and activation detection grounded on this signal, and not suitable for estimation of unobserved states and effective connectivity study. We thereby argue that introducing a physically realistic V0 in the assimilation process may provide more reliable estimation of physiological information, which contributes to a better understanding of the underlying hemodynamic processes. Such an effort is valuable and should be well appreciated.

  10. On parameter estimation in deformable models

    DEFF Research Database (Denmark)

    Fisker, Rune; Carstensen, Jens Michael

    1998-01-01

    Deformable templates have been intensively studied in image analysis through the last decade, but despite their significance the estimation of model parameters has received little attention. We present a method for supervised and unsupervised model parameter estimation using a general Bayesian form...

  11. Determining the Uncertainties in Prescribed Burn Emissions Through Comparison of Satellite Estimates to Ground-based Estimates and Air Quality Model Evaluations in Southeastern US

    Science.gov (United States)

    Odman, M. T.; Hu, Y.; Russell, A. G.

    2016-12-01

    Prescribed burning is practiced throughout the US, and most widely in the Southeast, for the purpose of maintaining and improving the ecosystem, and reducing the wildfire risk. However, prescribed burn emissions contribute significantly to the trace gas and particulate matter loads in the atmosphere. In places where air quality is already stressed by other anthropogenic emissions, prescribed burns can lead to major health and environmental problems. Air quality modeling efforts are under way to assess the impacts of prescribed burn emissions. Operational forecasts of the impacts are also emerging for use in dynamic management of air quality as well as the burns. Unfortunately, large uncertainties exist in the process of estimating prescribed burn emissions, and these uncertainties limit the accuracy of the burn impact predictions. Prescribed burn emissions are estimated by using either ground-based information or satellite observations. When there is sufficient local information about the burn area, the types of fuels, their consumption amounts, and the progression of the fire, ground-based estimates are more accurate. In the absence of such information satellites remain the only reliable source for emission estimation. To determine the level of uncertainty in prescribed burn emissions, we compared estimates derived from a burn permit database and other ground-based information to the estimates of the Biomass Burning Emissions Product derived from a constellation of NOAA and NASA satellites. Using these emission estimates we conducted simulations with the Community Multiscale Air Quality (CMAQ) model and predicted trace gas and particulate matter concentrations throughout the Southeast for two consecutive burn seasons (2015 and 2016). In this presentation, we will compare model-predicted concentrations to measurements at monitoring stations and evaluate if the differences are commensurate with our emission uncertainty estimates. We will also investigate if

  12. Relationship between anaerobic capacity estimated using a single effort and 30-s tethered running outcomes.

    Directory of Open Access Journals (Sweden)

    Alessandro Moura Zagatto

    The purpose of the current study was to investigate the relationship between the alternative anaerobic capacity method (MAODALT) and a 30-s all-out tethered running test. Fourteen male recreational endurance runners underwent a graded exercise test, a supramaximal exhaustive effort and a 30-s all-out test on different days, separated by 48 h. After verification of data normality (Shapiro-Wilk test), Pearson's correlation test was used to verify the association between the anaerobic estimates from the MAODALT and the 30-s all-out tethered running outputs. Absolute MAODALT was correlated with mean power (r = 0.58; P = 0.03), total work (r = 0.57; P = 0.03), and mean force (r = 0.79; P = 0.001). In addition, energy from the glycolytic pathway (E[La-]) was correlated with mean power (r = 0.58; P = 0.03). Significant correlations were also found at each 5-s interval between absolute MAODALT and force values (r between 0.75 and 0.84), and between force values and E[La-] (r between 0.73 and 0.80). In conclusion, the associations between absolute MAODALT and the mechanical outputs from the 30-s all-out tethered running test evidence the importance of anaerobic capacity for maintaining force over the course of time in short efforts.
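
    For orientation, a hedged sketch of the arithmetic behind a MAODALT-style estimate: anaerobic capacity as the sum of a phosphagen term (the fast component of excess post-exercise oxygen consumption) and a glycolytic term converted from the net blood-lactate rise, using the commonly assumed equivalence of 3 mL O2 per kg body mass per mmol/L of lactate. All input values are illustrative.

    ```python
    # MAOD_ALT-style anaerobic capacity from a single supramaximal effort.
    def maod_alt(epoc_fast_l, delta_lactate_mmol, body_mass_kg):
        e_pcr = epoc_fast_l                                        # alactic component [L O2]
        e_la = 3.0 * delta_lactate_mmol * body_mass_kg / 1000.0    # glycolytic component [L O2]
        return e_pcr + e_la

    print(maod_alt(epoc_fast_l=1.2, delta_lactate_mmol=8.5, body_mass_kg=70))  # ~3.0 L O2
    ```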

  13. Computational Models of Anterior Cingulate Cortex: At the Crossroads between Prediction and Effort

    Directory of Open Access Journals (Sweden)

    Eliana Vassena

    2017-06-01

    In the last two decades the anterior cingulate cortex (ACC) has become one of the most investigated areas of the brain. Extensive neuroimaging evidence suggests countless functions for this region, ranging from conflict and error coding, to social cognition, pain and effortful control. In response to this burgeoning amount of data, a proliferation of computational models has tried to characterize the neurocognitive architecture of ACC. Early seminal models provided a computational explanation for a relatively circumscribed set of empirical findings, mainly accounting for EEG and fMRI evidence. More recent models have focused on ACC's contribution to effortful control. In parallel to these developments, several proposals attempted to explain within a single computational framework a wider variety of empirical findings that span different cognitive processes and experimental modalities. Here we critically evaluate these modeling attempts, highlighting the continued need to reconcile the array of disparate ACC observations within a coherent, unifying framework.

  14. Modeling and Evaluating Pilot Performance in NextGen: Review of and Recommendations Regarding Pilot Modeling Efforts, Architectures, and Validation Studies

    Science.gov (United States)

    Wickens, Christopher; Sebok, Angelia; Keller, John; Peters, Steve; Small, Ronald; Hutchins, Shaun; Algarin, Liana; Gore, Brian Francis; Hooey, Becky Lee; Foyle, David C.

    2013-01-01

    NextGen operations are associated with a variety of changes to the national airspace system (NAS) including changes to the allocation of roles and responsibilities among operators and automation, the use of new technologies and automation, additional information presented on the flight deck, and the entire concept of operations (ConOps). In the transition to NextGen airspace, aviation and air operations designers need to consider the implications of design or system changes on human performance and the potential for error. To ensure continued safety of the NAS, it will be necessary for researchers to evaluate design concepts and potential NextGen scenarios well before implementation. One approach for such evaluations is through human performance modeling. Human performance models (HPMs) provide effective tools for predicting and evaluating operator performance in systems. HPMs offer significant advantages over empirical, human-in-the-loop testing in that (1) they allow detailed analyses of systems that have not yet been built, (2) they offer great flexibility for extensive data collection, (3) they do not require experimental participants, and thus can offer cost and time savings. HPMs differ in their ability to predict performance and safety with NextGen procedures, equipment and ConOps. Models also vary in terms of how they approach human performance (e.g., some focus on cognitive processing, others focus on discrete tasks performed by a human, while others consider perceptual processes), and in terms of their associated validation efforts. The objectives of this research effort were to support the Federal Aviation Administration (FAA) in identifying HPMs that are appropriate for predicting pilot performance in NextGen operations, to provide guidance on how to evaluate the quality of different models, and to identify gaps in pilot performance modeling research, that could guide future research opportunities. This research effort is intended to help the FAA

  15. Parameter Estimation of Partial Differential Equation Models.

    Science.gov (United States)

    Xun, Xiaolei; Cao, Jiguo; Mallick, Bani; Carroll, Raymond J; Maity, Arnab

    2013-01-01

    Partial differential equation (PDE) models are commonly used to model complex dynamic systems in applied sciences such as biology and finance. The forms of these PDE models are usually proposed by experts based on their prior knowledge and understanding of the dynamic system. Parameters in PDE models often have interesting scientific interpretations, but their values are often unknown and need to be estimated from measurements of the dynamic system in the presence of measurement errors. Most PDEs used in practice have no analytic solutions, and can only be solved with numerical methods. Currently, methods for estimating PDE parameters require repeatedly solving PDEs numerically under thousands of candidate parameter values, and thus the computational load is high. In this article, we propose two methods to estimate parameters in PDE models: a parameter cascading method and a Bayesian approach. In both methods, the underlying dynamic process modeled with the PDE model is represented via basis function expansion. For the parameter cascading method, we develop two nested levels of optimization to estimate the PDE parameters. For the Bayesian method, we develop a joint model for data and the PDE, and develop a novel hierarchical model allowing us to employ Markov chain Monte Carlo (MCMC) techniques to make posterior inference. Simulation studies show that the Bayesian method and parameter cascading method are comparable, and both outperform other available methods in terms of estimation accuracy. The two methods are demonstrated by estimating parameters in a PDE model from LIDAR data.
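
    A toy illustration of the motivating idea (avoid repeatedly solving the PDE): for the heat equation u_t = θ·u_xx, approximate both derivatives directly from gridded, lightly noisy data and recover θ by least squares. The grid, noise level, and smoothing-free differencing are illustrative simplifications of the basis-expansion machinery the paper actually develops.

    ```python
    # Recover the diffusion parameter of u_t = theta * u_xx from gridded data.
    import numpy as np

    theta_true, dx, dt = 0.1, 0.02, 0.001
    x = np.arange(0, 1 + dx, dx)
    u = np.sin(np.pi * x)                       # initial condition, zero boundaries
    snapshots = [u.copy()]
    for _ in range(200):                        # explicit finite-difference solver
        u[1:-1] += theta_true * dt / dx ** 2 * (u[2:] - 2 * u[1:-1] + u[:-2])
        snapshots.append(u.copy())
    U = np.array(snapshots)
    U += np.random.default_rng(1).normal(scale=1e-4, size=U.shape)  # measurement noise

    u_t = np.gradient(U, dt, axis=0)[:, 1:-1]   # time derivative from data
    u_xx = (U[:, 2:] - 2 * U[:, 1:-1] + U[:, :-2]) / dx ** 2        # space derivative
    theta_hat = (u_xx.ravel() @ u_t.ravel()) / (u_xx.ravel() @ u_xx.ravel())
    print(theta_hat)                            # close to 0.1
    ```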

  16. INTEGRATED SPEED ESTIMATION MODEL FOR MULTILANE EXPREESSWAYS

    Science.gov (United States)

    Hong, Sungjoon; Oguchi, Takashi

    In this paper, an integrated speed-estimation model is developed based on empirical analyses for the basic sections of intercity multilane expressways under the uncongested condition. This model enables speed estimation for each lane at any site under arbitrary highway-alignment, traffic (traffic flow and truck percentage), and rainfall conditions. By combining this model with a lane-use model which estimates traffic distribution on the lanes by each vehicle type, it is also possible to estimate an average speed across all the lanes of one direction from a traffic demand by vehicle type under specific highway-alignment and rainfall conditions. This model is expected to be a tool for the evaluation of traffic performance for expressways when the performance measure is travel speed, which is necessary for Performance-Oriented Highway Planning and Design. Regarding the highway-alignment condition, two new estimators, called effective horizontal curvature and effective vertical grade, are proposed in this paper which take into account the influence of upstream and downstream alignment conditions. They are applied to the speed-estimation model, and this shows increased accuracy of the estimation.

  17. The estimation of future surface water bodies at Olkiluoto area based on statistical terrain and land uplift models

    Energy Technology Data Exchange (ETDEWEB)

    Pohjola, J.; Turunen, J.; Lipping, T. [Tampere Univ. of Technology (Finland); Ikonen, A.

    2014-03-15

    In this working report the modelling effort of future landscape development and surface water body formation in the modelling area in the vicinity of the Olkiluoto Island is presented. Estimation of the features of future surface water bodies is based on the probabilistic terrain and land uplift models presented in previous working reports. The estimation is done using a GIS-based toolbox called UNTAMO. The future surface water bodies are estimated over a 10 000-year time span at 1000-year intervals for the safety assessment of disposal of spent nuclear fuel at the Olkiluoto site. In the report a brief overview of the techniques used for probabilistic terrain modelling, land uplift modelling and hydrological modelling is presented first. The latter part of the report describes the results of the modelling effort. The main features of the future landscape - the four lakes forming in the vicinity of the Olkiluoto Island - are identified and the probabilistic model of the shoreline displacement is presented. The area and volume of the four lakes are modelled in a probabilistic manner. All the simulations have been performed for three scenarios, two of which are based on 10 realizations of the probabilistic digital terrain model (DTM) and 10 realizations of the probabilistic land uplift model. These two scenarios differ from each other by the eustatic curve used in the land uplift model. The third scenario employs 50 realizations of the probabilistic DTM, while a deterministic land uplift model, derived solely from the current land uplift rate, is used. The results indicate that the two scenarios based on the probabilistic land uplift model behave in a similar manner while the third model overestimates past and future land uplift rates. The main features of the landscape are nevertheless similar also for the third scenario. Prediction results for the volumes of the future lakes indicate that a couple of highly probable lake-formation scenarios can be identified

  18. The estimation of future surface water bodies at Olkiluoto area based on statistical terrain and land uplift models

    International Nuclear Information System (INIS)

    Pohjola, J.; Turunen, J.; Lipping, T.; Ikonen, A.

    2014-03-01

    In this working report the modelling effort of future landscape development and surface water body formation in the modelling area in the vicinity of the Olkiluoto Island is presented. Estimation of the features of future surface water bodies is based on the probabilistic terrain and land uplift models presented in previous working reports. The estimation is done using a GIS-based toolbox called UNTAMO. The future surface water bodies are estimated over a 10 000-year time span at 1000-year intervals for the safety assessment of disposal of spent nuclear fuel at the Olkiluoto site. In the report a brief overview of the techniques used for probabilistic terrain modelling, land uplift modelling and hydrological modelling is presented first. The latter part of the report describes the results of the modelling effort. The main features of the future landscape - the four lakes forming in the vicinity of the Olkiluoto Island - are identified and the probabilistic model of the shoreline displacement is presented. The area and volume of the four lakes are modelled in a probabilistic manner. All the simulations have been performed for three scenarios, two of which are based on 10 realizations of the probabilistic digital terrain model (DTM) and 10 realizations of the probabilistic land uplift model. These two scenarios differ from each other by the eustatic curve used in the land uplift model. The third scenario employs 50 realizations of the probabilistic DTM, while a deterministic land uplift model, derived solely from the current land uplift rate, is used. The results indicate that the two scenarios based on the probabilistic land uplift model behave in a similar manner while the third model overestimates past and future land uplift rates. The main features of the landscape are nevertheless similar also for the third scenario. Prediction results for the volumes of the future lakes indicate that a couple of highly probable lake-formation scenarios can be identified with other

  19. Improved diagnostic model for estimating wind energy

    Energy Technology Data Exchange (ETDEWEB)

    Endlich, R.M.; Lee, J.D.

    1983-03-01

    Because wind data are available only at scattered locations, a quantitative method is needed to estimate the wind resource at specific sites where wind energy generation may be economically feasible. This report describes a computer model that makes such estimates. The model uses standard weather reports and terrain heights in deriving wind estimates; the method of computation has been changed from what has been used previously. The performance of the current model is compared with that of the earlier version at three sites; estimates of wind energy at four new sites are also presented.

  20. Fundamental Drop Dynamics and Mass Transfer Experiments to Support Solvent Extraction Modeling Efforts

    International Nuclear Information System (INIS)

    Christensen, Kristi; Rutledge, Veronica; Garn, Troy

    2011-01-01

    In support of the Nuclear Energy Advanced Modeling Simulation Safeguards and Separations (NEAMS SafeSep) program, the Idaho National Laboratory (INL) worked in collaboration with Los Alamos National Laboratory (LANL) to further a modeling effort designed to predict mass transfer behavior for selected metal species between individual dispersed drops and a continuous phase in a two phase liquid-liquid extraction (LLE) system. The purpose of the model is to understand the fundamental processes of mass transfer that occur at the drop interface. This fundamental understanding can be extended to support modeling of larger LLE equipment such as mixer settlers, pulse columns, and centrifugal contactors. The work performed at the INL involved gathering the necessary experimental data to support the modeling effort. A custom experimental apparatus was designed and built for performing drop contact experiments to measure mass transfer coefficients as a function of contact time. A high speed digital camera was used in conjunction with the apparatus to measure size, shape, and velocity of the drops. In addition to drop data, the physical properties of the experimental fluids were measured to be used as input data for the model. Physical properties measurements included density, viscosity, surface tension and interfacial tension. Additionally, self diffusion coefficients for the selected metal species in each experimental solution were measured, and the distribution coefficient for the metal partitioning between phases was determined. At the completion of this work, the INL has determined the mass transfer coefficient and a velocity profile for drops rising by buoyancy through a continuous medium under a specific set of experimental conditions. Additionally, a complete set of experimentally determined fluid properties has been obtained. All data will be provided to LANL to support the modeling effort.

  1. Investigation of Psychological Health and Migraine Headaches Among Personnel According to Effort-Reward Imbalance Model

    Directory of Open Access Journals (Sweden)

    Z. Darami

    2012-05-01

    Background and aims: The relationship between physical-mental health and migraine headaches and stress, especially job stress, is known. Many factors can create job stress in work settings. The factor that has gained much attention recently is inequality (imbalance) between employees' effort and the reward they gain. The aim of the current study was to investigate the validity of the effort-reward imbalance model and to examine the relation of this model with migraine headaches and psychological well-being among subjects in balance and imbalance groups. Methods: Participants were 180 personnel of an oil distribution company located in Isfahan city, and the instruments used were the General Health Questionnaire (Goldberg & Hillier), the Social Re-adjustment Rating Scale (Holmes & Rahe), the Ahvaz Migraine Questionnaire (Najariyan) and the Effort-Reward Imbalance Scale (Van Vegchel et al.). Results: The results of the exploratory and confirmatory factor analyses investigating the construct validity of the effort-reward imbalance model showed that in both analyses the two-factor model was confirmed. Moreover, findings indicate that the balance group was in better psychological (p < 0.01) and physical (migraine; p < 0.05) status compared with the imbalance group. These findings indicate the importance of justice, i.e., providing appropriate reward relative to personnel performance, for their health. Conclusion: Applying these findings can improve Iranian industrial personnel health from both physical and psychological aspects.

  2. Estimating Stochastic Volatility Models using Prediction-based Estimating Functions

    DEFF Research Database (Denmark)

    Lunde, Asger; Brix, Anne Floor

    In this paper prediction-based estimating functions (PBEFs), introduced in Sørensen (2000), are reviewed and PBEFs for the Heston (1993) stochastic volatility model are derived. The finite sample performance of the PBEF-based estimator is investigated in a Monte Carlo study and compared to the performance of the GMM estimator based on conditional moments of integrated volatility from Bollerslev and Zhou (2002). The case where the observed log-price process is contaminated by i.i.d. market microstructure (MMS) noise is also investigated. First, the impact of MMS noise on the parameter estimates from the two estimation methods is studied; second, ways to correctly account for the noise are investigated. Our Monte Carlo study shows that the estimator based on PBEFs outperforms the GMM estimator, both in the setting with and without MMS noise. Finally, an empirical application investigates the possible challenges and general performance of applying the PBEF-based estimator.

  3. Incentive Design and Mis-Allocated Effort

    OpenAIRE

    Schnedler, Wendelin

    2013-01-01

    Incentives often distort behavior: they induce agents to exert effort but this effort is not employed optimally. This paper proposes a theory of incentive design allowing for such distorted behavior. At the heart of the theory is a trade-off between getting the agent to exert effort and ensuring that this effort is used well. The theory covers various moral-hazard models, ranging from traditional single-task to multi-task models. It also provides - for the first time - a formalization and proof...

  4. Semi-parametric estimation for ARCH models

    Directory of Open Access Journals (Sweden)

    Raed Alzghool

    2018-03-01

    In this paper, we conduct semi-parametric estimation for the autoregressive conditional heteroscedasticity (ARCH) model with quasi-likelihood (QL) and asymptotic quasi-likelihood (AQL) estimation methods. The QL approach relaxes the distributional assumptions of ARCH processes. The AQL technique is obtained from the QL method when the process conditional variance is unknown. We present an application of the methods to a daily exchange rate series. Keywords: ARCH model, Quasi likelihood (QL), Asymptotic Quasi-likelihood (AQL), Martingale difference, Kernel estimator
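
    A minimal sketch of the QL idea in the Gaussian case: for an ARCH(1) model, maximize the quasi-likelihood built from the conditional variance recursion without assuming the true innovation distribution. The simulated series and starting values are illustrative; this is not the paper's AQL kernel machinery.

    ```python
    # Gaussian quasi-likelihood estimation of ARCH(1): y_t = sigma_t * e_t,
    # sigma_t^2 = w + a * y_{t-1}^2.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(42)
    w_true, a_true, n = 0.1, 0.4, 2000
    y = np.zeros(n)
    for t in range(1, n):
        y[t] = np.sqrt(w_true + a_true * y[t - 1] ** 2) * rng.standard_normal()

    def neg_quasi_loglik(params):
        w, a = params
        if w <= 0 or a < 0:
            return np.inf
        s2 = w + a * y[:-1] ** 2                 # conditional variances
        return 0.5 * np.sum(np.log(s2) + y[1:] ** 2 / s2)

    res = minimize(neg_quasi_loglik, x0=[0.05, 0.2], method="Nelder-Mead")
    print(res.x)    # close to (0.1, 0.4)
    ```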

  5. Estimating local atmosphere-surface fluxes using eddy covariance and numerical Ogive optimization

    DEFF Research Database (Denmark)

    Sievers, Jakob; Papakyriakou, Tim; Larsen, Søren

    2014-01-01

    Estimating representative surface-fluxes using eddy covariance leads invariably to questions concerning inclusion or exclusion of low-frequency flux contributions. For studies where fluxes are linked to local physical parameters and up-scaled through numerical modeling efforts, low-frequency contributions...

  6. Two models at work : A study of interactions and specificity in relation to the Demand-Control Model and the Effort-Reward Imbalance Model

    NARCIS (Netherlands)

    Vegchel, N.

    2005-01-01

    To investigate the relation between work and employee health, several work stress models, e.g., the Demand-Control (DC) Model and the Effort-Reward Imbalance (ERI) Model, have been developed. Although these models focus on job demands and job resources, relatively little attention has been devoted...

  7. Money Laundering and International Efforts to Fight It

    OpenAIRE

    David Scott

    1996-01-01

    According to one estimate, US$300 billion to US$500 billion in proceeds from serious crime is laundered each year. Left unchecked, money laundering could criminalize the financial system and undermine development efforts in emerging markets. The author reviews efforts by international bodies to fight it.

  8. Validation of abundance estimates from mark–recapture and removal techniques for rainbow trout captured by electrofishing in small streams

    Science.gov (United States)

    Rosenberger, Amanda E.; Dunham, Jason B.

    2005-01-01

    Estimation of fish abundance in streams using the removal model or the Lincoln-Petersen mark-recapture model is a common practice in fisheries. These models produce misleading results if their assumptions are violated. We evaluated the assumptions of these two models via electrofishing of rainbow trout Oncorhynchus mykiss in central Idaho streams. For one-, two-, three-, and four-pass sampling effort in closed sites, we evaluated the influences of fish size and habitat characteristics on sampling efficiency and the accuracy of removal abundance estimates. We also examined the use of models to generate unbiased estimates of fish abundance through adjustment of total catch or biased removal estimates. Our results suggested that the assumptions of the mark-recapture model were satisfied and that abundance estimates based on this approach were unbiased. In contrast, the removal model assumptions were not met. Decreasing sampling efficiencies over removal passes resulted in underestimated population sizes and overestimates of sampling efficiency. This bias decreased, but was not eliminated, with increased sampling effort. Biased removal estimates based on different levels of effort were highly correlated with each other but were less correlated with unbiased mark-recapture estimates. Stream size decreased sampling efficiency, and stream size and instream wood increased the negative bias of removal estimates. We found that reliable estimates of population abundance could be obtained from models of sampling efficiency for different levels of effort. Validation of abundance estimates requires extra attention to routine sampling considerations but can help fisheries biologists avoid pitfalls associated with biased data and facilitate standardized comparisons among studies that employ different sampling methods.
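
    The two abundance estimators compared in the study have simple closed forms, shown below with invented counts. The Chapman correction is one standard bias-corrected variant of the Lincoln-Petersen estimator; the two-pass removal formula is the Seber-LeCren special case.

```python
# Standard fisheries abundance estimators (textbook formulas, not the paper's code);
# the example counts are hypothetical.

def lincoln_petersen_chapman(marked, caught, recaptured):
    """Chapman bias-corrected Lincoln-Petersen mark-recapture estimate."""
    return (marked + 1) * (caught + 1) / (recaptured + 1) - 1

def two_pass_removal(c1, c2):
    """Seber-LeCren two-pass removal estimate; requires c1 > c2."""
    if c1 <= c2:
        raise ValueError("removal estimator undefined when c1 <= c2")
    return c1 ** 2 / (c1 - c2)

print(lincoln_petersen_chapman(marked=60, caught=55, recaptured=20))  # ~161.7
print(two_pass_removal(c1=70, c2=30))                                 # 122.5
```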

  9. Early efforts in modeling the incubation period of infectious diseases with an acute course of illness

    Directory of Open Access Journals (Sweden)

    Nishiura Hiroshi

    2007-05-01

    Full Text Available The incubation period of infectious diseases, the time from infection with a microorganism to onset of disease, is directly relevant to prevention and control. Since explicit models of the incubation period enhance our understanding of the spread of disease, previous classic studies were revisited, focusing on the modeling methods employed and paying particular attention to relatively unknown historical efforts. The earliest study on the incubation period of pandemic influenza was published in 1919, providing estimates of the incubation period of Spanish flu using the daily incidence on ships departing from several ports in Australia. Although the study explicitly dealt with an unknown time of exposure, the assumed periods of exposure, which had an equal probability of infection, were too long, and thus, likely resulted in slight underestimates of the incubation period. After the suggestion that the incubation period follows a lognormal distribution, Japanese epidemiologists extended this assumption to estimates of the time of exposure during a point source outbreak. Although the reason why the incubation period of acute infectious diseases tends to reveal a right-skewed distribution has been explored several times, the validity of the lognormal assumption is yet to be fully clarified. At present, various different distributions are assumed, and the lack of validity in assuming lognormal distribution is particularly apparent in the case of slowly progressing diseases. The present paper indicates that (1) analysis using well-defined short periods of exposure with appropriate statistical methods is critical when the exact time of exposure is unknown, and (2) when assuming a specific distribution for the incubation period, comparisons using different distributions are needed in addition to estimations using different datasets, analyses of the determinants of incubation period, and an understanding of the underlying disease mechanisms.

  10. Early efforts in modeling the incubation period of infectious diseases with an acute course of illness.

    Science.gov (United States)

    Nishiura, Hiroshi

    2007-05-11

    The incubation period of infectious diseases, the time from infection with a microorganism to onset of disease, is directly relevant to prevention and control. Since explicit models of the incubation period enhance our understanding of the spread of disease, previous classic studies were revisited, focusing on the modeling methods employed and paying particular attention to relatively unknown historical efforts. The earliest study on the incubation period of pandemic influenza was published in 1919, providing estimates of the incubation period of Spanish flu using the daily incidence on ships departing from several ports in Australia. Although the study explicitly dealt with an unknown time of exposure, the assumed periods of exposure, which had an equal probability of infection, were too long, and thus, likely resulted in slight underestimates of the incubation period. After the suggestion that the incubation period follows a lognormal distribution, Japanese epidemiologists extended this assumption to estimates of the time of exposure during a point source outbreak. Although the reason why the incubation period of acute infectious diseases tends to reveal a right-skewed distribution has been explored several times, the validity of the lognormal assumption is yet to be fully clarified. At present, various different distributions are assumed, and the lack of validity in assuming lognormal distribution is particularly apparent in the case of slowly progressing diseases. The present paper indicates that (1) analysis using well-defined short periods of exposure with appropriate statistical methods is critical when the exact time of exposure is unknown, and (2) when assuming a specific distribution for the incubation period, comparisons using different distributions are needed in addition to estimations using different datasets, analyses of the determinants of incubation period, and an understanding of the underlying disease mechanisms.
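
    The lognormal assumption discussed in both records is easy to check in practice: fit a two-parameter lognormal to observed incubation times and inspect the skew. A minimal sketch with invented data (not from the paper):

```python
# Fit a lognormal distribution to hypothetical incubation periods (days).
import numpy as np
from scipy import stats

incubation_days = np.array([1.5, 2.0, 2.0, 2.5, 3.0, 3.0, 3.5, 4.0, 5.0, 7.0])

# Fix the location at zero so the fit is a two-parameter lognormal.
shape, loc, scale = stats.lognorm.fit(incubation_days, floc=0)
print(f"median = {scale:.2f} days, log-sd (dispersion) = {shape:.2f}")

# Right-skewness check: for a lognormal, the mean exceeds the median.
print(f"fitted mean = {stats.lognorm.mean(shape, loc, scale):.2f} days")
```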

  11. Examining the utility of satellite-based wind sheltering estimates for lake hydrodynamic modeling

    Science.gov (United States)

    Van Den Hoek, Jamon; Read, Jordan S.; Winslow, Luke A.; Montesano, Paul; Markfort, Corey D.

    2015-01-01

    Satellite-based measurements of vegetation canopy structure have been in common use for the last decade but have never been used to estimate canopy's impact on wind sheltering of individual lakes. Wind sheltering is caused by slower winds in the wake of topography and shoreline obstacles (e.g. forest canopy) and influences heat loss and the flux of wind-driven mixing energy into lakes, which control lake temperatures and indirectly structure lake ecosystem processes, including carbon cycling and thermal habitat partitioning. Lakeshore wind sheltering has often been parameterized by lake surface area but such empirical relationships are only based on forested lakeshores and overlook the contributions of local land cover and terrain to wind sheltering. This study is the first to examine the utility of satellite imagery-derived broad-scale estimates of wind sheltering across a diversity of land covers. Using 30 m spatial resolution ASTER GDEM2 elevation data, the mean sheltering height, hs, being the combination of local topographic rise and canopy height above the lake surface, is calculated within 100 m-wide buffers surrounding 76,000 lakes in the U.S. state of Wisconsin. Uncertainty of GDEM2-derived hs was compared to SRTM-, high-resolution G-LiHT lidar-, and ICESat-derived estimates of hs; the respective influences of land cover type and buffer width on hs were examined; and the effect of including satellite-based hs on the accuracy of a statewide lake hydrodynamic model was discussed. Though GDEM2 hs uncertainty was comparable to or better than other satellite-based measures of hs, its higher spatial resolution and broader spatial coverage allowed more lakes to be included in modeling efforts. GDEM2 was shown to offer superior utility for estimating hs compared to other satellite-derived data, but was limited by its consistent underestimation of hs, inability to detect within-buffer hs variability, and differing accuracy across land cover types. Nonetheless...
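
    The core computation, the mean sheltering height within a shoreline buffer, reduces to masked raster arithmetic. A hypothetical miniature example (all array values invented, not derived from GDEM2):

```python
# Mean sheltering height h_s above a lake surface within a buffer mask.
import numpy as np

dem = np.array([[312.0, 318.0, 325.0],
                [310.0, 305.0, 321.0],
                [309.0, 304.0, 315.0]])        # elevation + canopy, metres
lake_surface = 304.0                            # lake water level, metres
buffer_mask = np.array([[True,  True,  True],
                        [True,  False, True],
                        [False, False, True]])  # cells inside the 100 m buffer

# Clip negative heights to zero: cells below the water level do not shelter.
h_s = np.mean(np.clip(dem[buffer_mask] - lake_surface, 0.0, None))
print(f"mean sheltering height h_s = {h_s:.1f} m")
```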

  12. Assimilation of Remotely Sensed Soil Moisture Profiles into a Crop Modeling Framework for Reliable Yield Estimations

    Science.gov (United States)

    Mishra, V.; Cruise, J.; Mecikalski, J. R.

    2017-12-01

    Much effort has been expended recently on the assimilation of remotely sensed soil moisture into operational land surface models (LSM). These efforts have normally been focused on the use of data derived from the microwave bands and results have often shown that improvements to model simulations have been limited due to the fact that microwave signals only penetrate the top 2-5 cm of the soil surface. It is possible that model simulations could be further improved through the introduction of geostationary satellite thermal infrared (TIR) based root zone soil moisture in addition to the microwave deduced surface estimates. In this study, root zone soil moisture estimates from the TIR based Atmospheric Land Exchange Inverse (ALEXI) model were merged with NASA Soil Moisture Active Passive (SMAP) based surface estimates through the application of informational entropy. Entropy can be used to characterize the movement of moisture within the vadose zone and accounts for both advection and diffusion processes. The Principle of Maximum Entropy (POME) can be used to derive complete soil moisture profiles and, fortuitously, only requires a surface boundary condition as well as the overall mean moisture content of the soil column. A lower boundary can be considered a soil parameter or obtained from the LSM itself. In this study, SMAP provided the surface boundary while ALEXI supplied the mean and the entropy integral was used to tie the two together and produce the vertical profile. However, prior to the merging, the coarse resolution (9 km) SMAP data were downscaled to the finer resolution (4.7 km) ALEXI grid. The disaggregation scheme followed the Soil Evaporative Efficiency approach and again, all necessary inputs were available from the TIR model. The profiles were then assimilated into a standard agricultural crop model (Decision Support System for Agrotechnology, DSSAT) via the ensemble Kalman Filter. The study was conducted over the Southeastern United States for the
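
    The assimilation step named above uses the ensemble Kalman filter, whose update has a compact generic form. The sketch below is a textbook stochastic EnKF update for a two-layer soil moisture state with a surface-only observation; it is not the study's DSSAT/SMAP implementation, and all numbers are invented.

```python
# Generic stochastic ensemble Kalman filter update for a scalar observation.
import numpy as np

rng = np.random.default_rng(2)

def enkf_update(X, y, H, r):
    """X: state ensemble (n_state x n_ens); y: observation; H: (1 x n_state); r: obs variance."""
    n_ens = X.shape[1]
    P = np.cov(X)                                     # ensemble covariance
    K = P @ H.T / (H @ P @ H.T + r)                   # Kalman gain, (n_state x 1)
    y_pert = y + rng.normal(0.0, np.sqrt(r), n_ens)   # perturbed observations
    innov = y_pert - (H @ X).ravel()                  # innovations, (n_ens,)
    return X + K @ innov[None, :]

# Two-layer soil moisture ensemble; only the surface layer is observed.
X = np.array([[0.20, 0.25, 0.22, 0.28],
              [0.30, 0.33, 0.29, 0.35]])
H = np.array([[1.0, 0.0]])
X_a = enkf_update(X, y=0.24, H=H, r=0.0004)
print("analysis ensemble mean:", X_a.mean(axis=1))
```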

  13. Parameter Estimation of Partial Differential Equation Models

    KAUST Repository

    Xun, Xiaolei

    2013-09-01

    Partial differential equation (PDE) models are commonly used to model complex dynamic systems in applied sciences such as biology and finance. The forms of these PDE models are usually proposed by experts based on their prior knowledge and understanding of the dynamic system. Parameters in PDE models often have interesting scientific interpretations, but their values are often unknown and need to be estimated from the measurements of the dynamic system in the presence of measurement errors. Most PDEs used in practice have no analytic solutions, and can only be solved with numerical methods. Currently, methods for estimating PDE parameters require repeatedly solving PDEs numerically under thousands of candidate parameter values, and thus the computational load is high. In this article, we propose two methods to estimate parameters in PDE models: a parameter cascading method and a Bayesian approach. In both methods, the underlying dynamic process modeled with the PDE model is represented via basis function expansion. For the parameter cascading method, we develop two nested levels of optimization to estimate the PDE parameters. For the Bayesian method, we develop a joint model for data and the PDE and develop a novel hierarchical model allowing us to employ Markov chain Monte Carlo (MCMC) techniques to make posterior inference. Simulation studies show that the Bayesian method and parameter cascading method are comparable, and both outperform other available methods in terms of estimation accuracy. The two methods are demonstrated by estimating parameters in a PDE model from long-range infrared light detection and ranging data. Supplementary materials for this article are available online. © 2013 American Statistical Association.
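
    The computational burden the article addresses is visible even in a toy version of the problem: each candidate parameter value requires a full numerical solve. The sketch below recovers a diffusion coefficient in the heat equation by least squares against noisy synthetic data; it illustrates the repeated-solve baseline, not the authors' parameter cascading or Bayesian methods, and all grid sizes and the true value are assumptions.

```python
# Estimate D in u_t = D * u_xx by least squares over finite-difference solves.
import numpy as np
from scipy.optimize import minimize_scalar

nx, nt, dx, dt = 21, 200, 0.05, 1e-4
x = np.linspace(0, 1, nx)
u0 = np.sin(np.pi * x)                    # initial condition, zero at boundaries

def solve_heat(D):
    u = u0.copy()
    for _ in range(nt):                   # explicit scheme; D*dt/dx^2 < 0.5 here
        u[1:-1] = u[1:-1] + D * dt / dx ** 2 * (u[2:] - 2 * u[1:-1] + u[:-2])
    return u

rng = np.random.default_rng(3)
data = solve_heat(D=0.8) + rng.normal(0, 0.01, nx)   # synthetic noisy observations

res = minimize_scalar(lambda D: np.sum((solve_heat(D) - data) ** 2),
                      bounds=(0.1, 2.0), method="bounded")
print(f"estimated D = {res.x:.3f} (true value 0.8)")
```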

  14. Parameter Estimation for Thurstone Choice Models

    Energy Technology Data Exchange (ETDEWEB)

    Vojnovic, Milan [London School of Economics (United Kingdom); Yun, Seyoung [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-04-24

    We consider the estimation accuracy of individual strength parameters of a Thurstone choice model when each input observation consists of a choice of one item from a set of two or more items (so called top-1 lists). This model accommodates the well-known choice models such as the Luce choice model for comparison sets of two or more items and the Bradley-Terry model for pair comparisons. We provide a tight characterization of the mean squared error of the maximum likelihood parameter estimator. We also provide similar characterizations for parameter estimators defined by a rank-breaking method, which amounts to deducing one or more pair comparisons from a comparison of two or more items, assuming independence of these pair comparisons, and maximizing a likelihood function derived under these assumptions. We also consider a related binary classification problem where each individual parameter takes value from a set of two possible values and the goal is to correctly classify all items within a prescribed classification error. The results of this paper shed light on how the parameter estimation accuracy depends on given Thurstone choice model and the structure of comparison sets. In particular, we found that for unbiased input comparison sets of a given cardinality, when in expectation each comparison set of given cardinality occurs the same number of times, for a broad class of Thurstone choice models, the mean squared error decreases with the cardinality of comparison sets, but only marginally according to a diminishing returns relation. On the other hand, we found that there exist Thurstone choice models for which the mean squared error of the maximum likelihood parameter estimator can decrease much faster with the cardinality of comparison sets. We report empirical evaluation of some claims and key parameters revealed by theory using both synthetic and real-world input data from some popular sport competitions and online labor platforms.
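
    For the pair-comparison special case mentioned above (the Bradley-Terry model), maximum likelihood estimation fits strength parameters to a win matrix. A minimal sketch with invented counts, fixing one strength at zero for identifiability:

```python
# Maximum likelihood for the Bradley-Terry model; wins[i, j] = wins of i over j.
import numpy as np
from scipy.optimize import minimize

wins = np.array([[0, 7, 9],
                 [3, 0, 6],
                 [1, 4, 0]], dtype=float)

def neg_loglik(theta_free):
    theta = np.append(theta_free, 0.0)             # last item's strength fixed at 0
    diff = theta[:, None] - theta[None, :]         # theta_i - theta_j
    logp = -np.logaddexp(0.0, -diff)               # log sigmoid(theta_i - theta_j)
    return -np.sum(wins * logp)                    # diagonal contributes 0 (wins=0)

res = minimize(neg_loglik, x0=np.zeros(2), method="BFGS")
print("estimated strengths:", np.append(res.x, 0.0))
```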

  15. Competition for marine space: modelling the Baltic Sea fisheries and effort displacement under spatial restrictions

    DEFF Research Database (Denmark)

    Bastardie, Francois; Nielsen, J. Rasmus; Eigaard, Ole Ritzau

    2015-01-01

    A spatially explicit model (the DISPLACE model) is applied to combine stochastic variations in spatial fishing activities with harvested resource dynamics in scenario projections. The assessment computes economic and stock status indicators by modelling the activity of Danish, Swedish, and German vessels (≥12 m) in the international western Baltic Sea commercial fishery, together with the underlying size-based distribution dynamics of the main fishery resources of sprat, herring, and cod. The outcomes of alternative scenarios for spatial effort displacement are exemplified by evaluating the fishers' abilities to adapt to spatial plans under various constraints. Interlinked spatial, technical, and biological dynamics of vessels and stocks in the scenarios result in stable profits, which compensate for the additional costs from effort displacement and release pressure on the fish stocks. The effort is further redirected away from sensitive...

  16. Nonparametric estimation in models for unobservable heterogeneity

    OpenAIRE

    Hohmann, Daniel

    2014-01-01

    Nonparametric models which allow for data with unobservable heterogeneity are studied. The first publication introduces new estimators and their asymptotic properties for conditional mixture models. The second publication considers estimation of a function from noisy observations of its Radon transform in a Gaussian white noise model.

  17. Estimation of Model's Marginal likelihood Using Adaptive Sparse Grid Surrogates in Bayesian Model Averaging

    Science.gov (United States)

    Zeng, X.

    2015-12-01

    A large number of model executions are required to obtain alternative conceptual models' predictions and their posterior probabilities in Bayesian model averaging (BMA). The posterior model probability is estimated through the models' marginal likelihoods and prior probabilities. The heavy computational burden hinders the implementation of BMA prediction, especially for elaborate marginal likelihood estimators. To overcome the computational burden of BMA, an adaptive sparse grid (SG) stochastic collocation method is used to build surrogates for alternative conceptual models through a numerical experiment on a synthetic groundwater model. BMA predictions depend on model posterior weights (or marginal likelihoods), and this study also evaluated four marginal likelihood estimators: the arithmetic mean estimator (AME), harmonic mean estimator (HME), stabilized harmonic mean estimator (SHME), and thermodynamic integration estimator (TIE). The results demonstrate that TIE is accurate in estimating conceptual models' marginal likelihoods, and BMA-TIE has better predictive performance than the other BMA predictions. TIE is also highly stable: marginal likelihoods repeatedly estimated by TIE show significantly less variability than those estimated by the other estimators. In addition, the SG surrogates are efficient in facilitating BMA predictions, especially for BMA-TIE. The number of model executions needed for building surrogates is 4.13%, 6.89%, 3.44%, and 0.43% of the model executions required by BMA-AME, BMA-HME, BMA-SHME, and BMA-TIE, respectively.
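
    Two of the estimators compared above, the arithmetic and harmonic mean estimators, can be written in a few lines. The sketch below uses a conjugate normal-normal toy model (an assumption for illustration only, not the groundwater model of the study) so that posterior draws are exact:

```python
# AME and HME estimators of a marginal likelihood on a normal-normal toy model.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
y = rng.normal(1.0, 1.0, 20)                 # data; sigma = 1 treated as known

def loglik(mu):
    """Log-likelihood of the sample for each candidate mean in `mu`."""
    return stats.norm.logpdf(y[:, None], mu, 1.0).sum(axis=0)

# AME: average the likelihood over draws from the N(0, 2^2) prior.
mu_prior = rng.normal(0.0, 2.0, 50_000)
ame = np.exp(loglik(mu_prior)).mean()

# HME: harmonic mean of likelihoods over posterior draws (exact here thanks to
# conjugacy; in general these would be MCMC draws).
s2 = 1.0 / (1.0 / 2.0 ** 2 + len(y))         # posterior variance
m = s2 * y.sum()                             # posterior mean (prior mean 0)
mu_post = rng.normal(m, np.sqrt(s2), 50_000)
hme = 1.0 / np.mean(np.exp(-loglik(mu_post)))

print(f"AME = {ame:.3e}, HME = {hme:.3e}")   # HME is the less stable of the two
```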

  18. State of Charge and State of Health Estimation of AGM VRLA Batteries by Employing a Dual Extended Kalman Filter and an ARX Model for Online Parameter Estimation

    Directory of Open Access Journals (Sweden)

    Ngoc-Tham Tran

    2017-01-01

    Full Text Available State of charge (SOC) and state of health (SOH) are key issues for the application of batteries, especially the absorbent glass mat valve-regulated lead-acid (AGM VRLA) type batteries used in the idle stop start systems (ISSs) that are popularly integrated into conventional engine-based vehicles. This is due to the fact that SOC and SOH estimation accuracy is crucial for optimizing battery energy utilization, ensuring safety and extending battery life cycles. The dual extended Kalman filter (DEKF), which provides an elegant and powerful solution, is widely applied in SOC and SOH estimation based on a battery parameter model. However, the battery parameters are strongly dependent on operation conditions such as the SOC, current rate and temperature. In addition, battery parameters change significantly over the life cycle of a battery. As a result, many experimental pretests investigating the effects of the internal and external conditions of a battery on its parameters are required, since the accuracy of state estimation depends on the quality of the information regarding battery parameter changes. In this paper, a novel method for SOC and SOH estimation that combines a DEKF algorithm, which considers hysteresis and diffusion effects, and an auto-regressive exogenous (ARX) model for online parameter estimation is proposed. The DEKF provides precise information concerning the battery open circuit voltage (OCV) to the ARX model. Meanwhile, the ARX model continues monitoring parameter variations and supplies information on them to the DEKF. In this way, the estimation accuracy can be maintained despite the changing parameters of a battery. Moreover, online parameter estimation from the ARX model can save the time and effort used for parameter pretests. The validation of the proposed algorithm is given by simulation and experimental results.

  19. Estimation of uncertainties in predictions of environmental transfer models: evaluation of methods and application to CHERPAC

    International Nuclear Information System (INIS)

    Koch, J.; Peterson, S-R.

    1995-10-01

    Models used to simulate environmental transfer of radionuclides typically include many parameters, the values of which are uncertain. An estimation of the uncertainty associated with the predictions is therefore essential. Different methods to quantify the uncertainty in the predictions due to parameter uncertainties are reviewed. A statistical approach using random sampling techniques is recommended for complex models with many uncertain parameters. In this approach, the probability density function of the model output is obtained from multiple realizations of the model according to a multivariate random sample of the different input parameters. Sampling efficiency can be improved by using a stratified scheme (Latin Hypercube Sampling). Sample size can also be restricted when statistical tolerance limits need to be estimated. Methods to rank parameters according to their contribution to uncertainty in the model prediction are also reviewed. Recommended are measures of sensitivity, correlation and regression coefficients that can be calculated on values of input and output variables generated during the propagation of uncertainties through the model. A parameter uncertainty analysis is performed for the CHERPAC food chain model, which estimates subjective confidence limits and intervals on the predictions at a 95% confidence level. A sensitivity analysis is also carried out using partial rank correlation coefficients. This identifies and ranks the parameters which are the main contributors to uncertainty in the predictions, thereby guiding further research efforts. (author). 44 refs., 2 tabs., 4 figs

  20. Estimation of uncertainties in predictions of environmental transfer models: evaluation of methods and application to CHERPAC

    Energy Technology Data Exchange (ETDEWEB)

    Koch, J. [Israel Atomic Energy Commission, Yavne (Israel). Soreq Nuclear Research Center; Peterson, S-R.

    1995-10-01

    Models used to simulate environmental transfer of radionuclides typically include many parameters, the values of which are uncertain. An estimation of the uncertainty associated with the predictions is therefore essential. Different methods to quantify the uncertainty in the predictions due to parameter uncertainties are reviewed. A statistical approach using random sampling techniques is recommended for complex models with many uncertain parameters. In this approach, the probability density function of the model output is obtained from multiple realizations of the model according to a multivariate random sample of the different input parameters. Sampling efficiency can be improved by using a stratified scheme (Latin Hypercube Sampling). Sample size can also be restricted when statistical tolerance limits need to be estimated. Methods to rank parameters according to their contribution to uncertainty in the model prediction are also reviewed. Recommended are measures of sensitivity, correlation and regression coefficients that can be calculated on values of input and output variables generated during the propagation of uncertainties through the model. A parameter uncertainty analysis is performed for the CHERPAC food chain model, which estimates subjective confidence limits and intervals on the predictions at a 95% confidence level. A sensitivity analysis is also carried out using partial rank correlation coefficients. This identifies and ranks the parameters which are the main contributors to uncertainty in the predictions, thereby guiding further research efforts. (author). 44 refs., 2 tabs., 4 figs.
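
    The recommended workflow (Latin Hypercube Sampling of inputs, propagation through the model, then rank-correlation-based sensitivity ranking) can be sketched generically. The two-parameter "transfer model" below is a stand-in with assumed ranges, not CHERPAC:

```python
# LHS propagation and rank-correlation sensitivity ranking for a toy model.
import numpy as np
from scipy.stats import qmc, spearmanr

sampler = qmc.LatinHypercube(d=2, seed=5)
u = sampler.random(n=500)                               # uniform LHS in [0,1]^2
# Scale to assumed parameter ranges: transfer factor and removal rate.
params = qmc.scale(u, l_bounds=[0.1, 0.01], u_bounds=[2.0, 0.5])

def transfer_model(tf, k):
    return tf * np.exp(-k * 30.0)                       # hypothetical dose after 30 days

output = transfer_model(params[:, 0], params[:, 1])
for name, col in zip(["transfer factor", "removal rate"], params.T):
    rho, _ = spearmanr(col, output)
    print(f"{name}: rank correlation with output = {rho:+.2f}")
```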

  1. Use of Annual Phosphorus Loss Estimator (APLE) Model to Evaluate a Phosphorus Index.

    Science.gov (United States)

    Fiorellino, Nicole M; McGrath, Joshua M; Vadas, Peter A; Bolster, Carl H; Coale, Frank J

    2017-11-01

    The Phosphorus (P) Index was developed to provide a relative ranking of agricultural fields according to their potential for P loss to surface water. Recent efforts have focused on updating and evaluating P Indices against measured or modeled P loss data to ensure agreement in magnitude and direction. Following a recently published method, we modified the Maryland P Site Index (MD-PSI) from a multiplicative to a component index structure and evaluated the MD-PSI outputs against P loss data estimated by the Annual P Loss Estimator (APLE) model, a validated, field-scale, annual P loss model. We created a theoretical dataset of fields to represent Maryland conditions and scenarios and created an empirical dataset of soil samples and management characteristics from across the state. Through the evaluation process, we modified a number of variables within the MD-PSI and calculated weighting coefficients for each P loss component. We have demonstrated that our methods can be used to modify a P Index and increase correlation between P Index output and modeled P loss data. The methods presented here can be easily applied in other states where there is motivation to update an existing P Index.

  2. EFFORTS AND INEQUALITY OF OPPORTUNITIES IN THE BOLIVIAN LABOR MARKET

    Directory of Open Access Journals (Sweden)

    Fátima Rico Encinas

    2016-01-01

    Full Text Available The equitable distribution of income, along with human development indices, is among the factors that differentiate developed from developing countries. In this paper, efforts and other variables related to the circumstances of individuals were quantified and analyzed together with traditional determinants in order to explain inequality in the working population of Bolivia. We estimated econometric models by merging the extended Mincer equation with John Roemer's theory of inequality of opportunity. We find that efforts are important determinants of the levels of wage inequality in the country, as are regional development, labor informality, gender and ethnicity. In this sense, the paper separates the part of wage inequality that may be attributed to circumstances beyond the control of individuals from the part that can be attributed to conscious decisions. Micro-simulations determined that it would be possible to reduce inequality by as much as 21% if people were given the chance to make similar efforts to improve their wages.
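
    The extended Mincer equation the authors build on has a standard textbook form; a generic version augmented with circumstance and effort terms (the symbols here are illustrative, not the authors' exact specification) is:

```latex
\ln w_i = \beta_0 + \beta_1 S_i + \beta_2 X_i + \beta_3 X_i^2
          + \gamma' C_i + \delta' E_i + \varepsilon_i
```

    where w_i is the wage, S_i years of schooling, X_i labor-market experience, C_i a vector of circumstances (e.g. gender, ethnicity, region) and E_i a vector of effort variables. Inequality of opportunity is then the part of wage dispersion explained by C_i rather than by E_i.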

  3. Daily Discharge Estimation in Talar River Using Lazy Learning Model

    Directory of Open Access Journals (Sweden)

    Zahra Abdollahi

    2017-03-01

    Full Text Available Introduction: River discharge, as one of the most important hydrological factors, has a vital role in physical, ecological, social and economic processes. Accurate and reliable prediction and estimation of river discharge have therefore been widely considered by many researchers in fields such as surface water management, design of hydraulic structures, flood control and ecological studies, at spatial and temporal scales. In recent decades, different techniques for short-term and long-term estimation of hourly, daily, monthly and annual discharge have been developed. However, short-term estimation models are less sophisticated and more accurate. Various global and local algorithms have been widely used to estimate hydrologic variables. The current study uses the Lazy Learning approach to evaluate the adequacy of input data in order to follow the variation of discharge and to simulate next-day discharge of the Talar River in the Kasilian Basin, located in northern Iran, with an area of 66.75 km2. Lazy Learning is a local linear modelling approach in which generalization beyond the training data is delayed until a query is made to the system, as opposed to eager learning, where the system tries to generalize from the training data before receiving queries. Materials and Methods: The study was conducted in the Kasilian Basin, located in northern Iran, with an area of 66.75 km2. The main river of this basin joins the Talar River near Valicbon village and then exits the watershed. The hydrometric station located near Valicbon village is equipped with a Parshall flume and a limnograph which can record river discharges of up to about 20 cubic meters per second. In this study, daily discharge data recorded at Valicbon station from 2002 to 2012 were used to estimate the discharge of 19 September 2012. The mean annual discharge of the river, calculated from the available data, is about 0.441 cubic meters per second. To...
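
    The lazy-learning idea described above (defer all modelling until a query arrives, then fit locally around the query's nearest neighbours) can be sketched in its simplest k-nearest-neighbour form. The series below is simulated; the real study used daily Talar River records.

```python
# Minimal lazy-learning prediction of next-day discharge from today's state.
import numpy as np

rng = np.random.default_rng(6)
n = 300
precip = rng.gamma(2.0, 2.0, n)                 # synthetic daily precipitation
q = np.empty(n)
q[0] = 1.0
for t in range(1, n):                            # synthetic discharge dynamics
    q[t] = 0.7 * q[t - 1] + 0.1 * precip[t] + rng.normal(0, 0.05)

X = np.column_stack([q[:-1], precip[:-1]])       # today's (discharge, precip) state
y = q[1:]                                        # tomorrow's discharge

def lazy_predict(x_query, X, y, k=10):
    """Local (lazy) estimate: average target over the k nearest stored cases."""
    d = np.linalg.norm(X - x_query, axis=1)
    return y[np.argsort(d)[:k]].mean()

print(f"next-day discharge estimate: {lazy_predict(X[-1], X[:-1], y[:-1]):.2f} m^3/s")
```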

  4. Shark fishing effort and catch of the ragged-tooth shark Carcharias ...

    African Journals Online (AJOL)

    An integrated telephone and on-site questionnaire survey was used to estimate total shark fishing effort and the specific catch of the ragged-tooth shark Carcharias taurus by coastal club-affiliated shore-anglers, primarily along the east coast of South Africa. Mean total shark fishing effort was estimated to be 37 820 fisher-days ...

  5. State-of-charge inconsistency estimation of lithium-ion battery pack using mean-difference model and extended Kalman filter

    Science.gov (United States)

    Zheng, Yuejiu; Gao, Wenkai; Ouyang, Minggao; Lu, Languang; Zhou, Long; Han, Xuebing

    2018-04-01

    State-of-charge (SOC) inconsistency impacts the power, durability and safety of the battery pack. Therefore, it is necessary to measure the SOC inconsistency of the battery pack with good accuracy. We explore a novel method for modeling and estimating the SOC inconsistency of lithium-ion (Li-ion) battery pack with low computation effort. In this method, a second-order RC model is selected as the cell mean model (CMM) to represent the overall performance of the battery pack. A hypothetical Rint model is employed as the cell difference model (CDM) to evaluate the SOC difference. The parameters of mean-difference model (MDM) are identified with particle swarm optimization (PSO). Subsequently, the mean SOC and the cell SOC differences are estimated by using extended Kalman filter (EKF). Finally, we conduct an experiment on a small Li-ion battery pack with twelve cells connected in series. The results show that the evaluated SOC difference is capable of tracking the changing of actual value after a quick convergence.

  6. Mathematical model of transmission network static state estimation

    Directory of Open Access Journals (Sweden)

    Ivanov Aleksandar

    2012-01-01

    Full Text Available In this paper the characteristics and capabilities of the power transmission network static state estimator are presented. The solution process for the mathematical model, including the treatment of measurement errors, is developed. To evaluate the difference between the general state estimation model and the fast decoupled state estimation model, both models are applied to an example and the derived results are compared.

  7. Flood extent and water level estimation from SAR using data-model integration

    Science.gov (United States)

    Ajadi, O. A.; Meyer, F. J.

    2017-12-01

    Synthetic Aperture Radar (SAR) images have long been recognized as a valuable data source for flood mapping. Compared to other sources, SAR's weather and illumination independence and large area coverage at high spatial resolution support reliable, frequent, and detailed observations of developing flood events. Accordingly, SAR has the potential to greatly aid in the near real-time monitoring of natural hazards, such as flood detection, if combined with automated image processing. This research works towards increasing the reliability and temporal sampling of SAR-derived flood hazard information by integrating information from multiple SAR sensors and SAR modalities (images and Interferometric SAR (InSAR) coherence) and by combining SAR-derived change detection information with hydrologic and hydraulic flood forecast models. First, the combination of multi-temporal SAR intensity images and coherence information for generating flood extent maps is introduced. The application of least-squares estimation integrates flood information from multiple SAR sensors, thus increasing the temporal sampling. SAR-based flood extent information will be combined with a Digital Elevation Model (DEM) to reduce false alarms and to estimate water depth and flood volume. The SAR-based flood extent map is assimilated into the Hydrologic Engineering Center River Analysis System (HEC-RAS) model to aid in hydraulic model calibration. The developed technology improves the accuracy of flood information by exploiting information from both data and models. It also provides enhanced flood information to decision-makers, supporting the response to flooding and improving emergency relief efforts.

  8. Robust estimation for ordinary differential equation models.

    Science.gov (United States)

    Cao, J; Wang, L; Xu, J

    2011-12-01

    Applied scientists often like to use ordinary differential equations (ODEs) to model complex dynamic processes that arise in biology, engineering, medicine, and many other areas. It is interesting but challenging to estimate ODE parameters from noisy data, especially when the data have some outliers. We propose a robust method to address this problem. The dynamic process is represented with a nonparametric function, which is a linear combination of basis functions. The nonparametric function is estimated by a robust penalized smoothing method. The penalty term is defined with the parametric ODE model, which controls the roughness of the nonparametric function and maintains the fidelity of the nonparametric function to the ODE model. The basis coefficients and ODE parameters are estimated in two nested levels of optimization. The coefficient estimates are treated as an implicit function of ODE parameters, which enables one to derive the analytic gradients for optimization using the implicit function theorem. Simulation studies show that the robust method gives satisfactory estimates for the ODE parameters from noisy data with outliers. The robust method is demonstrated by estimating a predator-prey ODE model from real ecological data. © 2011, The International Biometric Society.

  9. Catch-per-unit-effort: which estimator is best? Captura por unidade de esforço: qual estimador é melhor?

    Directory of Open Access Journals (Sweden)

    M. Petrere Jr.

    2010-08-01

    Full Text Available In this paper we examine the accuracy and precision of three indices of catch-per-unit-effort (CPUE). We carried out simulations, generating catch data according to six probability distributions (normal, Poisson, lognormal, gamma, delta and negative binomial), three variance structures (constant, proportional to effort and proportional to the squared effort) and their magnitudes (tail weight). The jackknife approach to the index is recommended whenever catch is proportional to effort, or even under small deviations from the proportionality assumption, when a ratio estimator is to be applied and little is known about the underlying behaviour of variables, as is the case for most fishery studies.
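
    The jackknife ratio estimator recommended above has a simple recipe: leave one trip out at a time, recompute the ratio of total catch to total effort, and combine into a bias-corrected estimate with a standard error. A sketch with hypothetical trip data:

```python
# Jackknife version of the CPUE ratio estimator (total catch / total effort).
import numpy as np

catch = np.array([12.0, 30.0, 9.0, 25.0, 18.0, 40.0])    # catch per trip
effort = np.array([4.0, 10.0, 3.0, 8.0, 6.0, 12.0])      # effort per trip

n = len(catch)
r_all = catch.sum() / effort.sum()                        # plain ratio estimator
r_loo = np.array([(catch.sum() - catch[i]) / (effort.sum() - effort[i])
                  for i in range(n)])                     # leave-one-out ratios
r_jack = n * r_all - (n - 1) * r_loo.mean()               # bias-corrected estimate
se_jack = np.sqrt((n - 1) / n * np.sum((r_loo - r_loo.mean()) ** 2))
print(f"CPUE: ratio = {r_all:.3f}, jackknife = {r_jack:.3f} +/- {se_jack:.3f}")
```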

  10. The association of perceived organizational justice and organizational expectations with nurses' efforts.

    Science.gov (United States)

    Motlagh, Farhad Shafiepour; Yarmohammadian, Mohammad Hossein; Yaghoubi, Maryam

    2012-03-01

    One important factor in the growth, progress, and work efficiency of employees of any enterprise is for them to make considerable effort. The supreme leader of the Islamic Republic of Iran has also addressed the need for greater effort. The goal of this study was to determine the association of perceived organizational justice and organizational expectations with the efforts of nurses, and to provide a suitable model. The current study was a descriptive study. The study group consisted of all nurses who worked in hospitals of Isfahan. Due to some limitations, all nurses of the special unit, surgery wards and operating room were questioned. The data collection tools were the Organizational Justice Questionnaire, the Organizational Expectations Questionnaire, and the Double Effort Questionnaire. Content validity of the mentioned questionnaires was confirmed after considering the experts' comments. The reliability of these questionnaires, using Cronbach's alpha, was 0.79, 0.83 and 0.92, respectively. The Pearson correlation and the structural equation model were used for the analysis of data. There was a significant correlation between perceived organizational justice and the double effort of nurses during the surgery of patients. Correlations of the expectations from the job, usefulness of the job, and its attractiveness with the double effort of nurses before surgery were also statistically significant. Moreover, it was shown that the root mean square error of approximation (RMSEA) was 0.087, the goodness-of-fit index (GFI) was 0.953, the chi-square value was 268.5, and the model was statistically significant. Justice is an essential need for human life and its importance in organizations and the social life of individuals is evident.

  11. Model-Based Estimation of Ankle Joint Stiffness.

    Science.gov (United States)

    Misgeld, Berno J E; Zhang, Tony; Lüken, Markus J; Leonhardt, Steffen

    2017-03-29

    We address the estimation of biomechanical parameters with wearable measurement technologies. In particular, we focus on the estimation of sagittal plane ankle joint stiffness in dorsiflexion/plantar flexion. For this estimation, a novel nonlinear biomechanical model of the lower leg was formulated that is driven by electromyographic signals. The model incorporates a two-dimensional kinematic description in the sagittal plane for the calculation of muscle lever arms and torques. To reduce estimation errors due to model uncertainties, a filtering algorithm is necessary that employs segmental orientation sensor measurements. Because of the model's inherent nonlinearities and nonsmooth dynamics, a square-root cubature Kalman filter was developed. The performance of the novel estimation approach was evaluated in silico and in an experimental procedure. The experimental study was conducted with body-worn sensors and a test-bench that was specifically designed to obtain reference angle and torque measurements for a single joint. Results show that the filter is able to reconstruct joint angle positions, velocities and torque, as well as, joint stiffness during experimental test bench movements.

  12. Routine inspection effort required for verification of a nuclear material production cutoff convention

    International Nuclear Information System (INIS)

    Fishbone, L.G.; Sanborn, J.

    1994-12-01

    Preliminary estimates of the inspection effort to verify a Nuclear Material Cutoff Convention are presented. The estimates are based on (1) a database of about 650 facilities in a total of eight states, i.e., the five nuclear-weapons states and three "threshold" states; (2) typical figures for inspection requirements for specific facility types derived from IAEA experience, where applicable; and (3) alternative estimates of inspection effort in cutoff options where full IAEA safeguards are not stipulated. Considerable uncertainty must be attached to the effort estimates. About 50-60% of the effort for each option is attributable to 16 large-scale reprocessing plants assumed to be in operation in the eight states; it is likely that some of these will be shut down by the time the convention enters into force. Another important question involving about one third of the overall effort is whether Euratom inspections in France and the U.K. could obviate the need for full-scale IAEA inspections at these facilities. Finally, the database does not yet contain many small-scale and military-related facilities. The results are therefore not presented as predictions but as the consequences of alternative assumptions. Despite the preliminary nature of the estimates, it is clear that a broad application of NPT-like safeguards to the eight states would require dramatic increases in the IAEA's safeguards budget. It is also clear that the major component of the increased inspection effort would occur at large reprocessing plants (and associated plutonium facilities). Therefore, significantly bounding the increased effort requires a limitation on the inspection effort at these facility types.

  13. Obligatory Effort [Hishtadlut] as an Explanatory Model: A Critique of Reproductive Choice and Control.

    Science.gov (United States)

    Teman, Elly; Ivry, Tsipy; Goren, Heela

    2016-06-01

    Studies on reproductive technologies often examine women's reproductive lives in terms of choice and control. Drawing on 48 accounts of procreative experiences of religiously devout Jewish women in Israel and the US, we examine their attitudes, understandings and experiences of pregnancy, reproductive technologies and prenatal testing. We suggest that the concept of hishtadlut ("obligatory effort") works as an explanatory model that organizes Haredi women's reproductive careers and their negotiations of reproductive technologies. As an elastic category with negotiable and dynamic boundaries, hishtadlut gives ultra-orthodox Jewish women room for effort without the assumption of control; it allows them to exercise discretion in relation to medical issues without framing their efforts in terms of individual choice. Haredi women hold themselves responsible for making their obligatory effort and not for pregnancy outcomes. We suggest that an alternative paradigm to autonomous choice and control emerges from cosmological orders where reproductive duties constitute "obligatory choices."

  14. Status of Los Alamos efforts related to Hiroshima and Nagasaki dose estimates

    International Nuclear Information System (INIS)

    Whalen, P.P.

    1981-09-01

    The Los Alamos efforts related to resolution of the Hiroshima, Nagasaki doses are described as follows: (1) Using recently located replicas of the Hiroshima bomb, measurements will be made which will define the upper limit of the Hiroshima yield. (2) Two-dimensional calculations of the neutron and gamma-ray outputs of the Hiroshima and Nagasaki weapons are in progress. Neutron and gamma-ray leakage spectra measurements will be made. Similar measurements on the Mark 9 weapon and on the Ichiban assembly are proposed. These measurements will provide a check for present day cross sections and calculations. (3) Calculations of several air transport experiments are in progress. A comparison of calculated results with experimental results is given. (4) The neutron and gamma-ray output spectra of several devices tested in the atmosphere at the Nevada Test Site are being calculated. The results of these calculations will allow models of the debris cloud contribution to the total dose to be tested

  15. Estimation of Stochastic Volatility Models by Nonparametric Filtering

    DEFF Research Database (Denmark)

    Kanaya, Shin; Kristensen, Dennis

    2016-01-01

    The models are estimated with the filtered/estimated volatility process replacing the latent process. Our estimation strategy is applicable to both parametric and nonparametric stochastic volatility models, and can handle both jumps and market microstructure noise. The resulting estimators of the stochastic volatility model will carry additional biases and variances due to the first-step estimation, but under regularity conditions we show that these vanish asymptotically and our estimators inherit the asymptotic properties of the infeasible estimators based on observations of the volatility process. A simulation study examines the finite-sample properties...

  16. Parameter estimation in stochastic rainfall-runoff models

    DEFF Research Database (Denmark)

    Jonsdottir, Harpa; Madsen, Henrik; Palsson, Olafur Petur

    2006-01-01

    A parameter estimation method for stochastic rainfall-runoff models is presented. The model considered in the paper is a conceptual stochastic model, formulated in continuous-discrete state space form. The model is small and a fully automatic optimization is, therefore, possible for estimating all parameters, and it is discussed whether the parameter values are optimal for simulation or prediction. The data originate from Iceland and the model is designed for Icelandic conditions, including a snow routine for mountainous areas. The model demands only two input data series, precipitation and temperature, and one output data series...

  17. Effort dynamics in a fisheries bioeconomic model: A vessel level approach through Game Theory

    Directory of Open Access Journals (Sweden)

    Gorka Merino

    2007-09-01

    Full Text Available Red shrimp, Aristeus antennatus (Risso, 1816), is one of the most important resources for the bottom-trawl fleets in the northwestern Mediterranean, in terms of both landings and economic value. A simple bioeconomic model introducing Game Theory for the prediction of effort dynamics at the vessel level is proposed. The game is played by the twelve vessels exploiting red shrimp in Blanes. Within the game, two solutions are computed: non-cooperation and cooperation. The first is proposed as a realistic method for the prediction of individual effort strategies and the second is used to illustrate the potential profitability of the analysed fishery. The effort strategy for each vessel is the number of fishing days per year, and the objective is profit maximisation: individual profits in the non-cooperative solution and total profits in the cooperative one. In the present analysis, strategic conflicts arise from the differences between vessels in technical efficiency (catchability coefficient) and economic efficiency (defined here). The ten-year, 1000-iteration stochastic simulations performed for the two effort solutions show that the best strategy, from both an economic and a conservationist perspective, is homogeneous effort cooperation. However, the results under non-cooperation are closer to the observed data on effort strategies and landings.

  18. Off-Highway Gasoline Consumption Estimation Models Used in the Federal Highway Administration Attribution Process: 2008 Updates

    Energy Technology Data Exchange (ETDEWEB)

    Hwang, Ho-Ling [ORNL; Davis, Stacy Cagle [ORNL

    2009-12-01

    This report is designed to document the analysis process and estimation models currently used by the Federal Highway Administration (FHWA) to estimate off-highway gasoline consumption and public sector fuel consumption. An overview of the entire FHWA attribution process is provided along with specifics related to the latest update (2008) of the Off-Highway Gasoline Use Model and the Public Use of Gasoline Model. The Off-Highway Gasoline Use Model is made up of five individual modules, one for each of the off-highway categories: agricultural, industrial and commercial, construction, aviation, and marine. This 2008 update of the off-highway models was the second major update (the first model update was conducted during 2002-2003) after they were originally developed in the mid-1990s. The agricultural model methodology, specifically, underwent a significant revision because of changes in data availability since 2003. Some revision to the model was necessary due to the removal of certain data elements used in the original estimation method. The revised agricultural model also made use of some newly available information, published by the data source agency in recent years. The other model methodologies were not drastically changed, though many data elements were updated to improve the accuracy of these models. Note that components in the Public Use of Gasoline Model were not updated in 2008. A major challenge in updating the estimation methods applied by the public-use model is that they would have to rely on significant new data collection efforts. In addition, due to resource limitations, several components of the models (both off-highway and public-use models) that utilized regression modeling approaches were not recalibrated under the 2008 study. An investigation of the Environmental Protection Agency's NONROAD2005 model was also carried out under the 2008 model update. Results generated from the NONROAD2005 model were analyzed, examined, and compared, to the extent that...

  19. Linking effort and fishing mortality in a mixed fisheries model

    DEFF Research Database (Denmark)

    Thøgersen, Thomas Talund; Hoff, Ayoe; Frost, Hans Staby

    2012-01-01

    Declines in fish stocks have led to overcapacity in many fisheries, leading to incentives for overfishing. Recent research has shown that the allocation of effort among fleets can play an important role in mitigating overfishing when the targeting covers a range of species (multi-species—i.e., so-called mixed fisheries), while simultaneously optimising the overall economic performance of the fleets. The so-called FcubEcon model, in particular, has elucidated both the biologically and economically optimal method for allocating catches—and thus effort—between fishing fleets, while ensuring that the quotas...

  20. A guide for estimating dynamic panel models: the macroeconomics models specifiness

    International Nuclear Information System (INIS)

    Coletta, Gaetano

    2005-10-01

    The aim of this paper is to review estimators for dynamic panel data models, a topic in which interest has grown recently. As a consequence of this late interest, different estimation techniques have been proposed in the last few years and, given the recent development of the subject, there is still a lack of a comprehensive guide for panel data applications, and for macroeconomic panel data models in particular. Finally, we also provide some indications about the Stata software commands to estimate dynamic panel data models with the techniques illustrated in the paper. [it]

  1. Assessing angler effort, catch, and harvest and the efficacy of a use-estimation system on a multi-lake fishery in middle Georgia

    Science.gov (United States)

    Roop, Hunter J.; Poudyal, Neelam C.; Jennings, Cecil A.

    2018-01-01

    Creel surveys are valuable tools in recreational fisheries management. However, multiple‐impoundment fisheries of complex spatial structure can complicate survey designs and pose logistical challenges for management agencies. Marben Public Fishing Area in Mansfield, GA is a multi‐impoundment fishery with many access points, and these features prevent or complicate use of traditional on‐site contact methods such as standard roving‐ or access‐point designs because many anglers may be missed during the survey process. Therefore, adaptation of a traditional survey method is often required for sampling this special case of multi‐lake fisheries to develop an accurate fishery profile. Accordingly, a modified non‐uniform probability roving creel survey was conducted at the Marben PFA during 2013 to estimate fishery characteristics relating to fishing effort, catch, and fish harvest. Monthly fishing effort averaged 7,523 angler‐hours (h) (SD = 5,956) and ranged from 1,301 h (SD = 562) in December to 21,856 h (SD = 5909) in May. A generalized linear mixed model was used to determine that angler catch and harvest rates were significantly higher in the spring and summer (all p < 0.05) than in the other seasons, but did not vary by fishing location. Our results demonstrate the utility of modifying existing creel methodology for monitoring small, spatially complex, intensely managed impoundments that support quality recreational fisheries and provide a template for the assessment and management of similar regional fisheries.

  2. Testing effort dependent software reliability model for imperfect debugging process considering both detection and correction

    International Nuclear Information System (INIS)

    Peng, R.; Li, Y.F.; Zhang, W.J.; Hu, Q.P.

    2014-01-01

    This paper studies the fault detection process (FDP) and fault correction process (FCP) with the incorporation of testing effort function and imperfect debugging. In order to ensure high reliability, it is essential for software to undergo a testing phase, during which faults can be detected and corrected by debuggers. The testing resource allocation during this phase, which is usually depicted by the testing effort function, considerably influences not only the fault detection rate but also the time to correct a detected fault. In addition, testing is usually far from perfect such that new faults may be introduced. In this paper, we first show how to incorporate testing effort function and fault introduction into FDP and then develop FCP as delayed FDP with a correction effort. Various specific paired FDP and FCP models are obtained based on different assumptions of fault introduction and correction effort. An illustrative example is presented. The optimal release policy under different criteria is also discussed

  3. A simulation of water pollution model parameter estimation

    Science.gov (United States)

    Kibler, J. F.

    1976-01-01

    A parameter estimation procedure for a water pollution transport model is elaborated. A two-dimensional instantaneous-release shear-diffusion model serves as representative of a simple transport process. Pollution concentration levels are arrived at via modeling of a remote-sensing system. The remote-sensed data are simulated by adding Gaussian noise to the concentration level values generated via the transport model. Model parameters are estimated from the simulated data using a least-squares batch processor. Resolution, sensor array size, and number and location of sensor readings can be found from the accuracies of the parameter estimates.

  4. Economic effort management in multispecies fisheries: the FcubEcon model

    DEFF Research Database (Denmark)

    Hoff, Ayoe; Frost, Hans; Ulrich, Clara

    2010-01-01

    Over the past decade, increased focus on this issue has resulted in the development of management tools based on fleets, fisheries, and areas, rather than on unit fish stocks. A natural consequence of this has been to consider effort rather than quota management, a final effort decision being based on fleet-harvest potential and fish-stock-preservation considerations. Effort allocation between fleets should not be based on biological considerations alone, but also on the economic behaviour of fishers, because fisheries management has a significant impact on human behaviour as well as on ecosystem development. The FcubEcon management framework for effort allocation between fleets …

  5. How do farm models compare when estimating greenhouse gas emissions from dairy cattle production?

    DEFF Research Database (Denmark)

    Hutchings, Nicholas John; Özkan, Şeyda; de Haan, M

    2018-01-01

    The European Union Effort Sharing Regulation (ESR) will require a 30% reduction in greenhouse gas (GHG) emissions by 2030 compared with 2005 from the sectors not included in the European Emissions Trading Scheme, including agriculture. This will require the estimation of current and future … GHG emissions from four farm-scale models (DairyWise, FarmAC, HolosNor and SFARMMOD) were calculated for eight dairy farming scenarios within a factorial design consisting of two climates (cool/dry and warm/wet) × two soil types (sandy and clayey) × two feeding systems (grass only and grass/maize). The milk yield per …

  6. Estimating bat and bird mortality occurring at wind energy turbines from covariates and carcass searches using mixture models.

    Science.gov (United States)

    Korner-Nievergelt, Fränzi; Brinkmann, Robert; Niermann, Ivo; Behr, Oliver

    2013-01-01

    Environmental impacts of wind energy facilities increasingly cause concern, a central issue being bats and birds killed by rotor blades. Two approaches have been employed to assess collision rates: carcass searches and surveys of animals prone to collisions. Carcass searches can provide an estimate for the actual number of animals being killed but they offer little information on the relation between collision rates and, for example, weather parameters due to the time of death not being precisely known. In contrast, a density index of animals exposed to collision is sufficient to analyse the parameters influencing the collision rate. However, quantification of the collision rate from animal density indices (e.g. acoustic bat activity or bird migration traffic rates) remains difficult. We combine carcass search data with animal density indices in a mixture model to investigate collision rates. In a simulation study we show that the collision rates estimated by our model were at least as precise as conventional estimates based solely on carcass search data. Furthermore, if certain conditions are met, the model can be used to predict the collision rate from density indices alone, without data from carcass searches. This can reduce the time and effort required to estimate collision rates. We applied the model to bat carcass search data obtained at 30 wind turbines in 15 wind facilities in Germany. We used acoustic bat activity and wind speed as predictors for the collision rate. The model estimates correlated well with conventional estimators. Our model can be used to predict the average collision rate. It enables an analysis of the effect of parameters such as rotor diameter or turbine type on the collision rate. The model can also be used in turbine-specific curtailment algorithms that predict the collision rate and reduce this rate with a minimal loss of energy production.
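
    The identity such a model can exploit is sketched compactly below: binomially thinning a Poisson collision count leaves a Poisson, so carcass counts can be regressed on a density index once detection probability is known. This is a minimal sketch with assumed names and values; the authors' mixture model is richer (for example, detection is estimated jointly rather than fixed).

```python
# Sketch: collisions at turbine i follow N_i ~ Poisson(exp(b0 + b1*activity_i));
# searches find each carcass with probability p, so observed counts are
# C_i ~ Poisson(p * exp(b0 + b1*activity_i)) by Poisson thinning.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import poisson

rng = np.random.default_rng(1)
activity = rng.uniform(0, 3, size=30)        # acoustic bat activity index
p_detect = 0.4                               # from persistence/searcher trials
collisions = rng.poisson(np.exp(-0.5 + 0.8 * activity))
counts = rng.binomial(collisions, p_detect)  # carcasses actually found

def nll(beta):
    lam = p_detect * np.exp(beta[0] + beta[1] * activity)
    return -poisson.logpmf(counts, lam).sum()

fit = minimize(nll, x0=[0.0, 0.0], method="Nelder-Mead")
print("estimated b0, b1:", fit.x)
# exp(b0 + b1*activity) then predicts collision rates from the density
# index alone, without further carcass searches.
```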

  7. Estimating bat and bird mortality occurring at wind energy turbines from covariates and carcass searches using mixture models.

    Directory of Open Access Journals (Sweden)

    Fränzi Korner-Nievergelt

    Full Text Available Environmental impacts of wind energy facilities increasingly cause concern, a central issue being bats and birds killed by rotor blades. Two approaches have been employed to assess collision rates: carcass searches and surveys of animals prone to collisions. Carcass searches can provide an estimate for the actual number of animals being killed but they offer little information on the relation between collision rates and, for example, weather parameters due to the time of death not being precisely known. In contrast, a density index of animals exposed to collision is sufficient to analyse the parameters influencing the collision rate. However, quantification of the collision rate from animal density indices (e.g. acoustic bat activity or bird migration traffic rates) remains difficult. We combine carcass search data with animal density indices in a mixture model to investigate collision rates. In a simulation study we show that the collision rates estimated by our model were at least as precise as conventional estimates based solely on carcass search data. Furthermore, if certain conditions are met, the model can be used to predict the collision rate from density indices alone, without data from carcass searches. This can reduce the time and effort required to estimate collision rates. We applied the model to bat carcass search data obtained at 30 wind turbines in 15 wind facilities in Germany. We used acoustic bat activity and wind speed as predictors for the collision rate. The model estimates correlated well with conventional estimators. Our model can be used to predict the average collision rate. It enables an analysis of the effect of parameters such as rotor diameter or turbine type on the collision rate. The model can also be used in turbine-specific curtailment algorithms that predict the collision rate and reduce this rate with a minimal loss of energy production.

  8. Estimating shaking-induced casualties and building damage for global earthquake events: a proposed modelling approach

    Science.gov (United States)

    So, Emily; Spence, Robin

    2013-01-01

    Recent earthquakes such as the Haiti earthquake of 12 January 2010 and the Qinghai earthquake of 14 April 2010 have highlighted the importance of rapid post-event casualty estimation for humanitarian response. Both events resulted in surprisingly high numbers of deaths, casualties and survivors made homeless. In the Mw = 7.0 Haiti earthquake, over 200,000 people perished, with more than 300,000 reported injuries and 2 million made homeless. The Mw = 6.9 earthquake in Qinghai resulted in over 2,000 deaths, a further 11,000 people with serious or moderate injuries, and 100,000 people left homeless in this mountainous region of China. In such events, relief efforts benefit significantly from the rapid estimation and mapping of expected casualties. This paper contributes to ongoing global efforts to estimate probable earthquake casualties very rapidly after an earthquake has taken place. The analysis uses the assembled empirical damage and casualty data in the Cambridge Earthquake Impacts Database (CEQID) and explores data by event and across events to test the relationships of building and fatality distributions to the main explanatory variables of building type, building damage level and earthquake intensity. The prototype global casualty estimation model described here uses a semi-empirical approach that estimates damage rates for the different classes of buildings present in the local building stock and then relates fatality rates to the damage rates of each class of buildings. This approach accounts for the effect on casualties of the very different types of buildings (by climatic zone, urban or rural location, culture, income level, etc.). The resulting casualty parameters were tested against the overall casualty data from several historical earthquakes in CEQID; a reasonable fit was found.
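
    Once damage probabilities and fatality rates are tabulated, the semi-empirical chain reduces to simple arithmetic. A toy example with invented numbers:

```python
# Toy version of the semi-empirical chain: expected deaths =
#   sum over building classes of
#   stock * P(collapse | class, intensity) * occupants * fatality rate.
# All values below are invented for illustration.
building_stock = {"adobe": 10_000, "rc_frame": 4_000}  # buildings exposed
p_collapse = {"adobe": 0.15, "rc_frame": 0.03}         # at a given intensity
occupants = {"adobe": 5, "rc_frame": 12}               # mean occupants present
fatality_rate = {"adobe": 0.30, "rc_frame": 0.15}      # deaths per occupant of a collapsed building

deaths = sum(building_stock[b] * p_collapse[b] * occupants[b] * fatality_rate[b]
             for b in building_stock)
print(f"expected fatalities: {deaths:.0f}")   # -> 2466
```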

  9. Modeling the environmental suitability of anthrax in Ghana and estimating populations at risk: Implications for vaccination and control.

    Science.gov (United States)

    Kracalik, Ian T; Kenu, Ernest; Ayamdooh, Evans Nsoh; Allegye-Cudjoe, Emmanuel; Polkuu, Paul Nokuma; Frimpong, Joseph Asamoah; Nyarko, Kofi Mensah; Bower, William A; Traxler, Rita; Blackburn, Jason K

    2017-10-01

    Anthrax is hyper-endemic in West Africa. Despite the effectiveness of livestock vaccines in controlling anthrax, underreporting, logistics, and limited resources make implementing vaccination campaigns difficult. To better understand the geographic limits of anthrax, elucidate environmental factors related to its occurrence, and identify human and livestock populations at risk, we developed predictive models of the environmental suitability of anthrax in Ghana. We obtained data on the location and date of livestock anthrax from veterinary and outbreak response records in Ghana during 2005-2016, as well as livestock vaccination registers and population estimates of characteristically high-risk groups. To predict the environmental suitability of anthrax, we used an ensemble of random forest (RF) models built using a combination of climatic and environmental factors. From 2005 through the first six months of 2016, there were 67 anthrax outbreaks (851 cases) in livestock; outbreaks showed a seasonal peak during February through April and primarily involved cattle. There was a median of 19,709 vaccine doses [range: 0-175 thousand] administered annually. Results from the RF model suggest a marked ecological divide separating the broad areas of environmental suitability in northern Ghana from the southern part of the country. Increasing alkaline soil pH was associated with a higher probability of anthrax occurrence. We estimated 2.2 (95% CI: 2.0, 2.5) million livestock and 805 (95% CI: 519, 890) thousand low income rural livestock keepers were located in anthrax risk areas. Based on our estimates, the current anthrax vaccination efforts in Ghana cover a fraction of the livestock potentially at risk, so control efforts should be focused on improving vaccine coverage among high-risk groups.
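
    A minimal sketch of the ensemble idea with scikit-learn on synthetic data (covariate names, values, and the bootstrap-ensemble design are stand-ins for the paper's climatic/environmental layers and tuning):

```python
# Sketch: an ensemble of random forests predicting environmental suitability;
# averaging over bootstrap refits gives a mean suitability surface and a
# simple measure of spread.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
n = 500
X = np.column_stack([
    rng.uniform(5.0, 8.5, n),    # soil pH
    rng.uniform(200, 1200, n),   # annual precipitation (mm)
    rng.uniform(20, 35, n),      # mean temperature (deg C)
])
# Synthetic presence/absence: more alkaline soil pH -> more suitable.
y = (rng.random(n) < 1 / (1 + np.exp(-2 * (X[:, 0] - 7.0)))).astype(int)

preds = []
for seed in range(25):
    idx = rng.integers(0, n, n)                 # bootstrap resample
    rf = RandomForestClassifier(n_estimators=200, random_state=seed)
    rf.fit(X[idx], y[idx])
    preds.append(rf.predict_proba(X)[:, 1])
suitability = np.mean(preds, axis=0)
print("mean predicted suitability:", suitability.mean().round(3))
```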

  10. Modeling the environmental suitability of anthrax in Ghana and estimating populations at risk: Implications for vaccination and control.

    Directory of Open Access Journals (Sweden)

    Ian T Kracalik

    2017-10-01

    Full Text Available Anthrax is hyper-endemic in West Africa. Despite the effectiveness of livestock vaccines in controlling anthrax, underreporting, logistics, and limited resources make implementing vaccination campaigns difficult. To better understand the geographic limits of anthrax, elucidate environmental factors related to its occurrence, and identify human and livestock populations at risk, we developed predictive models of the environmental suitability of anthrax in Ghana. We obtained data on the location and date of livestock anthrax from veterinary and outbreak response records in Ghana during 2005-2016, as well as livestock vaccination registers and population estimates of characteristically high-risk groups. To predict the environmental suitability of anthrax, we used an ensemble of random forest (RF) models built using a combination of climatic and environmental factors. From 2005 through the first six months of 2016, there were 67 anthrax outbreaks (851 cases) in livestock; outbreaks showed a seasonal peak during February through April and primarily involved cattle. There was a median of 19,709 vaccine doses [range: 0-175 thousand] administered annually. Results from the RF model suggest a marked ecological divide separating the broad areas of environmental suitability in northern Ghana from the southern part of the country. Increasing alkaline soil pH was associated with a higher probability of anthrax occurrence. We estimated 2.2 (95% CI: 2.0, 2.5) million livestock and 805 (95% CI: 519, 890) thousand low income rural livestock keepers were located in anthrax risk areas. Based on our estimates, the current anthrax vaccination efforts in Ghana cover a fraction of the livestock potentially at risk, so control efforts should be focused on improving vaccine coverage among high-risk groups.

  11. Model-Based Estimation of Ankle Joint Stiffness

    Directory of Open Access Journals (Sweden)

    Berno J. E. Misgeld

    2017-03-01

    Full Text Available We address the estimation of biomechanical parameters with wearable measurement technologies. In particular, we focus on the estimation of sagittal plane ankle joint stiffness in dorsiflexion/plantar flexion. For this estimation, a novel nonlinear biomechanical model of the lower leg was formulated that is driven by electromyographic signals. The model incorporates a two-dimensional kinematic description in the sagittal plane for the calculation of muscle lever arms and torques. To reduce estimation errors due to model uncertainties, a filtering algorithm is necessary that employs segmental orientation sensor measurements. Because of the model’s inherent nonlinearities and nonsmooth dynamics, a square-root cubature Kalman filter was developed. The performance of the novel estimation approach was evaluated in silico and in an experimental procedure. The experimental study was conducted with body-worn sensors and a test-bench that was specifically designed to obtain reference angle and torque measurements for a single joint. Results show that the filter is able to reconstruct joint angle positions, velocities and torque, as well as, joint stiffness during experimental test bench movements.

  12. Model-Based Estimation of Ankle Joint Stiffness

    Science.gov (United States)

    Misgeld, Berno J. E.; Zhang, Tony; Lüken, Markus J.; Leonhardt, Steffen

    2017-01-01

    We address the estimation of biomechanical parameters with wearable measurement technologies. In particular, we focus on the estimation of sagittal plane ankle joint stiffness in dorsiflexion/plantar flexion. For this estimation, a novel nonlinear biomechanical model of the lower leg was formulated that is driven by electromyographic signals. The model incorporates a two-dimensional kinematic description in the sagittal plane for the calculation of muscle lever arms and torques. To reduce estimation errors due to model uncertainties, a filtering algorithm is necessary that employs segmental orientation sensor measurements. Because of the model’s inherent nonlinearities and nonsmooth dynamics, a square-root cubature Kalman filter was developed. The performance of the novel estimation approach was evaluated in silico and in an experimental procedure. The experimental study was conducted with body-worn sensors and a test-bench that was specifically designed to obtain reference angle and torque measurements for a single joint. Results show that the filter is able to reconstruct joint angle positions, velocities and torque, as well as, joint stiffness during experimental test bench movements. PMID:28353683

  13. Multinomial N-mixture models improve the applicability of electrofishing for developing population estimates of stream-dwelling Smallmouth Bass

    Science.gov (United States)

    Mollenhauer, Robert; Brewer, Shannon K.

    2017-01-01

    Failure to account for variable detection across survey conditions constrains progressive stream ecology and can lead to erroneous stream fish management and conservation decisions. In addition to variable detection's confounding long-term stream fish population trends, reliable abundance estimates across a wide range of survey conditions are fundamental to establishing species–environment relationships. Despite major advancements in accounting for variable detection when surveying animal populations, these approaches remain largely ignored by stream fish scientists, and CPUE remains the most common metric used by researchers and managers. One notable advancement for addressing the challenges of variable detection is the multinomial N-mixture model. Multinomial N-mixture models use a flexible hierarchical framework to model the detection process across sites as a function of covariates; they also accommodate common fisheries survey methods, such as removal and capture–recapture. Effective monitoring of stream-dwelling Smallmouth Bass Micropterus dolomieu populations has long been challenging; therefore, our objective was to examine the use of multinomial N-mixture models to improve the applicability of electrofishing for estimating absolute abundance. We sampled Smallmouth Bass populations by using tow-barge electrofishing across a range of environmental conditions in streams of the Ozark Highlands ecoregion. Using an information-theoretic approach, we identified effort, water clarity, wetted channel width, and water depth as covariates that were related to variable Smallmouth Bass electrofishing detection. Smallmouth Bass abundance estimates derived from our top model consistently agreed with baseline estimates obtained via snorkel surveys. Additionally, confidence intervals from the multinomial N-mixture models were consistently more precise than those of unbiased Petersen capture–recapture estimates due to the dependency among data sets in the …
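
    A compressed sketch of the removal-design likelihood that multinomial N-mixture models generalize: under a Poisson abundance prior, the per-pass counts are independent Poissons, which keeps the marginal likelihood to one line. Names and values are assumed; covariates such as effort, clarity, width, and depth would enter through the detection term.

```python
# Sketch of a 3-pass removal (multinomial N-mixture) fit. With
# N_i ~ Poisson(lambda), the pass-j counts are independent Poisson with
# mean lambda * p * (1-p)**(j-1).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import poisson

rng = np.random.default_rng(3)
n_sites, lam_true, p_true = 40, 12.0, 0.45
j = np.arange(3)                                   # passes 0, 1, 2
y = rng.poisson(lam_true * p_true * (1 - p_true) ** j, size=(n_sites, 3))

def nll(theta):
    lam = np.exp(theta[0])                         # abundance, log scale
    p = 1 / (1 + np.exp(-theta[1]))                # detection, logit scale
    return -poisson.logpmf(y, lam * p * (1 - p) ** j).sum()

fit = minimize(nll, x0=[np.log(5.0), 0.0])
lam_hat, p_hat = np.exp(fit.x[0]), 1 / (1 + np.exp(-fit.x[1]))
print(f"abundance/site ~ {lam_hat:.1f}, per-pass detection ~ {p_hat:.2f}")
```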

  14. AMEM-ADL Polymer Migration Estimation Model User's Guide

    Science.gov (United States)

    The user's guide for the Arthur D. Little Polymer Migration Estimation Model (AMEM) provides information on how the model estimates the fraction of a chemical additive that diffuses through polymeric matrices.

  15. A nonparametric mixture model for cure rate estimation.

    Science.gov (United States)

    Peng, Y; Dear, K B

    2000-03-01

    Nonparametric methods have attracted less attention than their parametric counterparts for cure rate analysis. In this paper, we study a general nonparametric mixture model. The proportional hazards assumption is employed in modeling the effect of covariates on the failure time of patients who are not cured. The EM algorithm, the marginal likelihood approach, and multiple imputations are employed to estimate parameters of interest in the model. This model extends models and improves estimation methods proposed by other researchers. It also extends Cox's proportional hazards regression model by allowing a proportion of event-free patients and investigating covariate effects on that proportion. The model and its estimation method are investigated by simulations. An application to breast cancer data, including comparisons with previous analyses using a parametric model and an existing nonparametric model by other researchers, confirms the conclusions from the parametric model but not those from the existing nonparametric model.
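
    The mixture structure underlying the model can be summarized as follows (standard notation assumed here, with π(x) the cured fraction and proportional hazards acting on the failure time of the uncured):

```latex
% Mixture cure model: cured fraction pi(x) plus PH latency for the uncured.
S_{\mathrm{pop}}(t \mid x) = \pi(x) + \bigl(1 - \pi(x)\bigr)\, S_u(t \mid x),
\qquad
S_u(t \mid x) = S_0(t)^{\exp(x^{\top}\beta)}
```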

  16. Statistical Model-Based Face Pose Estimation

    Institute of Scientific and Technical Information of China (English)

    GE Xinliang; YANG Jie; LI Feng; WANG Huahua

    2007-01-01

    A robust face pose estimation approach is proposed that uses a statistical model of face shape, with pose parameters represented by trigonometric functions. The face shape statistical model is first built by analyzing face shapes from different people under varying poses; shape alignment is vital in building this statistical model. Six trigonometric functions are then employed to represent the face pose parameters. Lastly, a mapping function between face image and face pose is constructed by linearly relating the different parameters. The proposed approach is able to estimate different face poses using only a few face training samples. Experimental results demonstrate its efficiency and accuracy.

  17. Predicting Motivation: Computational Models of PFC Can Explain Neural Coding of Motivation and Effort-based Decision-making in Health and Disease.

    Science.gov (United States)

    Vassena, Eliana; Deraeve, James; Alexander, William H

    2017-10-01

    Human behavior is strongly driven by the pursuit of rewards. In daily life, however, benefits mostly come at a cost, often requiring that effort be exerted to obtain potential benefits. Medial PFC (MPFC) and dorsolateral PFC (DLPFC) are frequently implicated in the expectation of effortful control, showing increased activity as a function of predicted task difficulty. Such activity partially overlaps with expectation of reward and has been observed both during decision-making and during task preparation. Recently, novel computational frameworks have been developed to explain activity in these regions during cognitive control, based on the principle of prediction and prediction error (predicted response-outcome [PRO] model [Alexander, W. H., & Brown, J. W. Medial prefrontal cortex as an action-outcome predictor. Nature Neuroscience, 14, 1338-1344, 2011], hierarchical error representation [HER] model [Alexander, W. H., & Brown, J. W. Hierarchical error representation: A computational model of anterior cingulate and dorsolateral prefrontal cortex. Neural Computation, 27, 2354-2410, 2015]). Despite the broad explanatory power of these models, it is not clear whether they can also accommodate effects related to the expectation of effort observed in MPFC and DLPFC. Here, we propose a translation of these computational frameworks to the domain of effort-based behavior. First, we discuss how the PRO model, based on prediction error, can explain effort-related activity in MPFC, by reframing effort-based behavior in a predictive context. We propose that MPFC activity reflects monitoring of motivationally relevant variables (such as effort and reward), by coding expectations and discrepancies from such expectations. Moreover, we derive behavioral and neural model-based predictions for healthy controls and clinical populations with impairments of motivation. Second, we illustrate the possible translation to effort-based behavior of the HER model, an extended version of PRO …

  18. Acoustic correlate of vocal effort in spasmodic dysphonia.

    Science.gov (United States)

    Eadie, Tanya L; Stepp, Cara E

    2013-03-01

    This study characterized the relationship between relative fundamental frequency (RFF) and listeners' perceptions of vocal effort and overall spasmodic dysphonia severity in the voices of 19 individuals with adductor spasmodic dysphonia. Twenty inexperienced listeners evaluated the vocal effort and overall severity of voices using visual analog scales. The squared correlation coefficients (R2) between average vocal effort and overall severity and RFF measures were calculated as a function of the number of acoustic instances used for the RFF estimate (from 1 to 9, of a total of 9 voiced-voiceless-voiced instances). Increases in the number of acoustic instances used for the RFF average led to increases in the variance predicted by the RFF at the first cycle of voicing onset (onset RFF) in the perceptual measures; the use of 6 or more instances resulted in a stable estimate. The variance predicted by the onset RFF for vocal effort (R2 range, 0.06 to 0.43) was higher than that for overall severity (R2 range, 0.06 to 0.35). The offset RFF was not related to the perceptual measures, irrespective of the sample size. This study indicates that onset RFF measures are related to perceived vocal effort in patients with adductor spasmodic dysphonia. These results have implications for measuring outcomes in this population.

  19. Efficiently adapting graphical models for selectivity estimation

    DEFF Research Database (Denmark)

    Tzoumas, Kostas; Deshpande, Amol; Jensen, Christian S.

    2013-01-01

    … cardinality estimation without making the independence assumption. By carefully using concepts from the field of graphical models, we are able to factor the joint probability distribution over all the attributes in the database into small, usually two-dimensional distributions, without a significant loss in estimation accuracy. We show how to efficiently construct such a graphical model from the database using only two-way join queries, and we show how to perform selectivity estimation in a highly efficient manner. We integrate our algorithms into the PostgreSQL DBMS. Experimental results indicate …

  20. Radiation risk estimation based on measurement error models

    CERN Document Server

    Masiuk, Sergii; Shklyar, Sergiy; Chepurny, Mykola; Likhtarov, Illya

    2017-01-01

    This monograph discusses statistics and risk estimates applied to radiation damage in the presence of measurement errors. The first part covers nonlinear measurement error models, with a particular emphasis on the efficiency of regression parameter estimators. In the second part, risk estimation in models with measurement errors is considered. The efficiency of the methods presented is verified using data from radio-epidemiological studies.

  1. Model-based estimation for dynamic cardiac studies using ECT.

    Science.gov (United States)

    Chiao, P C; Rogers, W L; Clinthorne, N H; Fessler, J A; Hero, A O

    1994-01-01

    The authors develop a strategy for joint estimation of physiological parameters and myocardial boundaries using ECT (emission computed tomography). They construct an observation model to relate parameters of interest to the projection data and to account for limited ECT system resolution and measurement noise. The authors then use a maximum likelihood (ML) estimator to jointly estimate all the parameters directly from the projection data without reconstruction of intermediate images. They also simulate myocardial perfusion studies based on a simplified heart model to evaluate the performance of the model-based joint ML estimator and compare this performance to the Cramer-Rao lower bound. Finally, the authors discuss model assumptions and potential uses of the joint estimation strategy.

  2. Estimation of expected number of accidents and workforce unavailability through Bayesian population variability analysis and Markov-based model

    International Nuclear Information System (INIS)

    Chagas Moura, Márcio das; Azevedo, Rafael Valença; Droguett, Enrique López; Chaves, Leandro Rego; Lins, Isis Didier

    2016-01-01

    Occupational accidents pose several negative consequences to employees, employers, the environment and people surrounding the locale where the accident takes place. Some types of accidents correspond to low frequency-high consequence (long sick leave) events, and classical statistical approaches are ineffective in these cases because the available dataset is generally sparse and contains censored recordings. In this context, we propose a Bayesian population variability method for the estimation of the distributions of the rates of accident and recovery. Given these distributions, a Markov-based model is then used to estimate the uncertainty over the expected number of accidents and the work time loss. Thus, the use of Bayesian analysis along with the Markov approach aims at investigating future trends regarding occupational accidents in a workplace as well as enabling a better management of the labor force and prevention efforts. An application example is presented in order to validate the proposed approach; this case uses available data gathered from a hydropower company in Brazil. - Highlights: • This paper proposes a Bayesian method to estimate rates of accident and recovery. • The model requires simple data likely to be available in the company database. • These results show the proposed model is not too sensitive to the prior estimates.
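
    The Markov layer can be sketched as a two-state (working / on sick leave) chain whose daily transition probabilities are drawn from the posterior distributions of the accident and recovery rates. All distributions and values below are invented placeholders:

```python
# Sketch: propagate Bayesian uncertainty in accident/recovery rates through
# a two-state Markov simulation of a workforce over one year.
import numpy as np

rng = np.random.default_rng(4)
horizon, n_draws, n_workers = 365, 200, 200
accidents, days_lost = [], []
for _ in range(n_draws):
    p_acc = rng.beta(2, 998)     # daily accident probability (posterior draw)
    p_rec = rng.beta(50, 450)    # daily recovery probability (posterior draw)
    on_leave = np.zeros(n_workers, dtype=bool)
    acc = lost = 0
    for _ in range(horizon):
        new_acc = (~on_leave) & (rng.random(n_workers) < p_acc)
        recovered = on_leave & (rng.random(n_workers) < p_rec)
        on_leave = (on_leave | new_acc) & ~recovered
        acc += new_acc.sum()
        lost += on_leave.sum()
    accidents.append(acc)
    days_lost.append(lost)
print("expected accidents/yr:", round(float(np.mean(accidents)), 1),
      "| expected worker-days lost:", round(float(np.mean(days_lost))))
```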

  3. Adaptive Estimation of Heteroscedastic Money Demand Model of Pakistan

    Directory of Open Access Journals (Sweden)

    Muhammad Aslam

    2007-07-01

    Full Text Available For the problem of estimating the money demand model of Pakistan, the money supply (M1) shows heteroscedasticity of unknown form. For the estimation of such a model we compare two adaptive estimators with the ordinary least squares estimator and show the attractive performance of the adaptive estimators, namely, the nonparametric kernel estimator and the nearest neighbour regression estimator. These comparisons are made on the basis of the standard errors of the estimated coefficients, the standard error of regression, the Akaike Information Criterion (AIC) value, and the Durbin–Watson statistic for autocorrelation. We further show that the nearest neighbour regression estimator performs better than the nonparametric kernel estimator.

  4. Cokriging model for estimation of water table elevation

    International Nuclear Information System (INIS)

    Hoeksema, R.J.; Clapp, R.B.; Thomas, A.L.; Hunley, A.E.; Farrow, N.D.; Dearstone, K.C.

    1989-01-01

    In geological settings where the water table is a subdued replica of the ground surface, cokriging can be used to estimate the water table elevation at unsampled locations on the basis of values of water table elevation and ground surface elevation measured at wells and at points along flowing streams. The ground surface elevation at the estimation point must also be determined. In the proposed method, separate models are generated for the spatial variability of the water table and ground surface elevation and for the dependence between these variables. After the models have been validated, cokriging or minimum variance unbiased estimation is used to obtain the estimated water table elevations and their estimation variances. For the Pits and Trenches area (formerly a liquid radioactive waste disposal facility) near Oak Ridge National Laboratory, water table estimates along a linear section, both with and without the inclusion of ground surface elevation as a statistical predictor, illustrate the advantages of the cokriging model.

  5. Terry Turbopump Analytical Modeling Efforts in Fiscal Year 2016 – Progress Report.

    Energy Technology Data Exchange (ETDEWEB)

    Osborn, Douglas; Ross, Kyle; Cardoni, Jeffrey N

    2018-04-01

    This document details the Fiscal Year 2016 modeling efforts to define the true operating limitations (margins) of the Terry turbopump systems used in the nuclear industry, in support of Milestone 3 (full-scale component experiments) and Milestone 4 (Terry turbopump basic science experiments). The overall multinational-sponsored program creates the technical basis to: (1) reduce and defer additional utility costs, (2) simplify plant operations, and (3) provide a better understanding of the true margins, which could reduce the overall risk of operations.

  6. Modelling maximum likelihood estimation of availability

    International Nuclear Information System (INIS)

    Waller, R.A.; Tietjen, G.L.; Rock, G.W.

    1975-01-01

    Suppose the performance of a nuclear-powered electrical generating plant is continuously monitored to record the sequence of failures and repairs during sustained operation. The purpose of this study is to assess one method of estimating the performance of the power plant when the measure of performance is availability, that is, the probability that the plant is operational at time t. To study the availability of a power plant, we first assume statistical models for the variables X and Y, which denote the time-to-failure and the time-to-repair variables, respectively. Once those statistical models are specified, the availability A(t) can be expressed as a function of some or all of their parameters. Usually those parameters are unknown in practice and so A(t) is unknown. This paper discusses the maximum likelihood estimator of A(t) when the time-to-failure model for X is an exponential density with parameter λ and the time-to-repair model for Y is an exponential density with parameter θ. Under the assumption of exponential models for X and Y, it follows that the instantaneous availability at time t is A(t) = λ/(λ+θ) + [θ/(λ+θ)] exp{−[(1/λ) + (1/θ)]t} for t > 0, and the steady-state availability is A(∞) = λ/(λ+θ). We use the observations from n failure-repair cycles of the power plant, say X_1, X_2, ..., X_n, Y_1, Y_2, ..., Y_n, to present the maximum likelihood estimators of A(t) and A(∞). The exact sampling distributions of those estimators and some statistical properties are discussed before a simulation model is used to determine 95% simulation intervals for A(t). The methodology is applied to two examples which approximate the operating history of two nuclear power plants. (author)
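
    Because maximum likelihood estimators are invariant under transformation, the estimator reduces to plugging the sample means into A(t). A short sketch with illustrative data:

```python
# Sketch: plug-in MLE of availability for exponential time-to-failure
# (mean lambda) and time-to-repair (mean theta); the MLEs of the
# exponential means are simply the sample means.
import numpy as np

def availability(t, lam, theta):
    steady = lam / (lam + theta)
    return steady + (theta / (lam + theta)) * np.exp(-(1/lam + 1/theta) * t)

X = np.array([120.0, 95.0, 210.0, 160.0, 75.0])  # times to failure (h), illustrative
Y = np.array([8.0, 14.0, 6.0, 11.0, 9.0])        # times to repair (h), illustrative
lam_hat, theta_hat = X.mean(), Y.mean()          # MLEs of the exponential means

print("A(24 h) =", availability(24.0, lam_hat, theta_hat).round(4))
print("A(inf)  =", round(lam_hat / (lam_hat + theta_hat), 4))
```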

  7. Re-evaluating neonatal-age models for ungulates: does model choice affect survival estimates?

    Directory of Open Access Journals (Sweden)

    Troy W Grovenburg

    Full Text Available New-hoof growth is regarded as the most reliable metric for predicting age of newborn ungulates, but variation in estimated age among hoof-growth equations that have been developed may affect estimates of survival in staggered-entry models. We used known-age newborns to evaluate variation in age estimates among existing hoof-growth equations and to determine the consequences of that variation on survival estimates. During 2001-2009, we captured and radiocollared 174 newborn (≤24 hrs old) ungulates: 76 white-tailed deer (Odocoileus virginianus) in Minnesota and South Dakota, 61 mule deer (O. hemionus) in California, and 37 pronghorn (Antilocapra americana) in South Dakota. Estimated age of known-age newborns differed among hoof-growth models and varied by >15 days for white-tailed deer, >20 days for mule deer, and >10 days for pronghorn. Accuracy (i.e., the proportion of neonates assigned to the correct age) in aging newborns using published equations ranged from 0.0% to 39.4% in white-tailed deer, 0.0% to 3.3% in mule deer, and was 0.0% for pronghorns. Results of survival modeling indicated that variability in estimates of age-at-capture affected short-term estimates of survival (i.e., 30 days) for white-tailed deer and mule deer, and survival estimates over a longer time frame (i.e., 120 days) for mule deer. Conversely, survival estimates for pronghorn were not affected by estimates of age. Our analyses indicate that modeling survival in daily intervals is too fine a temporal scale when age-at-capture is unknown given the potential inaccuracies among equations used to estimate age of neonates. Instead, weekly survival intervals are more appropriate because most models accurately predicted ages within 1 week of the known age. Variation among results of neonatal-age models on short- and long-term estimates of survival for known-age young emphasizes the importance of selecting an appropriate hoof-growth equation and appropriately defining intervals (i.e., …

  8. Model-based estimation for dynamic cardiac studies using ECT

    International Nuclear Information System (INIS)

    Chiao, P.C.; Rogers, W.L.; Clinthorne, N.H.; Fessler, J.A.; Hero, A.O.

    1994-01-01

    In this paper, the authors develop a strategy for joint estimation of physiological parameters and myocardial boundaries using ECT (Emission Computed Tomography). The authors construct an observation model to relate parameters of interest to the projection data and to account for limited ECT system resolution and measurement noise. The authors then use a maximum likelihood (ML) estimator to jointly estimate all the parameters directly from the projection data without reconstruction of intermediate images. The authors also simulate myocardial perfusion studies based on a simplified heart model to evaluate the performance of the model-based joint ML estimator and compare this performance to the Cramer-Rao lower bound. Finally, model assumptions and potential uses of the joint estimation strategy are discussed

  9. Estimation of some stochastic models used in reliability engineering

    International Nuclear Information System (INIS)

    Huovinen, T.

    1989-04-01

    The work aims to study the estimation of some stochastic models used in reliability engineering. In reliability engineering, continuous probability distributions have been used as models for the lifetime of technical components. We consider here the following distributions: exponential, 2-mixture exponential, conditional exponential, Weibull, lognormal and gamma. The maximum likelihood method is used to estimate distributions from observed data, which may be either complete or censored. We consider models based on homogeneous Poisson processes, such as the gamma-Poisson and lognormal-Poisson models, for the analysis of failure intensity. We also study a beta-binomial model for the analysis of failure probability. The parameters of three models are estimated by the matching moments method and, in the case of the gamma-Poisson and beta-binomial models, also by the maximum likelihood method. A great deal of the mathematical and statistical problems that arise in reliability engineering can be solved by utilizing point processes. Here we consider the statistical analysis of non-homogeneous Poisson processes to describe the failure phenomena of a set of components with a Weibull intensity function. We use the method of maximum likelihood to estimate the parameters of the Weibull model. A common cause failure can seriously reduce the reliability of a system. We consider a binomial failure rate (BFR) model, as an application of marked point processes, for modelling common cause failures in a system. The parameters of the binomial failure rate model are estimated with the maximum likelihood method.
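
    For one of the lifetime models listed above, censored maximum likelihood is short to sketch (Weibull with right censoring; data simulated for illustration):

```python
# Sketch: Weibull MLE with right-censored lifetimes. Observed failures
# contribute the log density; censored units contribute the log survival.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
t = 100.0 * rng.weibull(1.8, 300)       # latent lifetimes (true shape 1.8, scale 100)
c = rng.uniform(50, 250, 300)           # censoring times
obs = np.minimum(t, c)                  # recorded times
event = t <= c                          # True where failure was observed

def nll(theta):
    k, lam = np.exp(theta)              # optimize on the log scale
    z = obs / lam
    log_f = np.log(k / lam) + (k - 1) * np.log(z) - z**k   # Weibull log pdf
    log_S = -z**k                                          # Weibull log survival
    return -(log_f[event].sum() + log_S[~event].sum())

fit = minimize(nll, x0=np.log([1.0, 50.0]))
print("estimated shape, scale:", np.exp(fit.x).round(2))
```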

  10. Efficient estimation of semiparametric copula models for bivariate survival data

    KAUST Repository

    Cheng, Guang

    2014-01-01

    A semiparametric copula model for bivariate survival data is characterized by a parametric copula model of dependence and nonparametric models of two marginal survival functions. Efficient estimation for the semiparametric copula model has been recently studied for the complete data case. When the survival data are censored, semiparametric efficient estimation has only been considered for some specific copula models such as the Gaussian copulas. In this paper, we obtain the semiparametric efficiency bound and efficient estimation for general semiparametric copula models for possibly censored data. We construct an approximate maximum likelihood estimator by approximating the log baseline hazard functions with spline functions. We show that our estimates of the copula dependence parameter and the survival functions are asymptotically normal and efficient. Simple consistent covariance estimators are also provided. Numerical results are used to illustrate the finite sample performance of the proposed estimators. © 2013 Elsevier Inc.
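
    The model class has a compact form: a parametric copula ties together two nonparametrically modelled margins. A sketch with the Clayton family chosen purely for concreteness (the paper treats general copulas):

```latex
% Semiparametric copula model for bivariate survival, Clayton as an example.
S(t_1, t_2) = C_{\theta}\bigl(S_1(t_1),\, S_2(t_2)\bigr),
\qquad
C_{\theta}(u, v) = \bigl(u^{-\theta} + v^{-\theta} - 1\bigr)^{-1/\theta},
\quad \theta > 0
```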

  11. Direct Importance Estimation with Gaussian Mixture Models

    Science.gov (United States)

    Yamada, Makoto; Sugiyama, Masashi

    The ratio of two probability densities is called the importance, and its estimation has gathered a great deal of attention these days since the importance can be used for various data processing purposes. In this paper, we propose a new importance estimation method using Gaussian mixture models (GMMs). Our method is an extension of the Kullback-Leibler importance estimation procedure (KLIEP), an importance estimation method using linear or kernel models. An advantage of GMMs is that covariance matrices can also be learned through an expectation-maximization procedure, so the proposed method — which we call the Gaussian mixture KLIEP (GM-KLIEP) — is expected to work well when the true importance function has high correlation. Through experiments, we show the validity of the proposed approach.
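
    In outline (notation assumed from the KLIEP literature, with x^nu and x^de the numerator and denominator samples), GM-KLIEP models the importance directly as a GMM and fits it by the KLIEP criterion:

```latex
% GM-KLIEP sketch: GMM importance model fitted by the KLIEP criterion.
w(x) = \sum_{k=1}^{K} \pi_k\, \mathcal{N}(x;\, \mu_k, \Sigma_k),
\qquad
\max_{\{\pi_k, \mu_k, \Sigma_k\}} \sum_{i=1}^{n_{\mathrm{nu}}} \log w\bigl(x_i^{\mathrm{nu}}\bigr)
\quad \text{s.t.} \quad
\frac{1}{n_{\mathrm{de}}} \sum_{j=1}^{n_{\mathrm{de}}} w\bigl(x_j^{\mathrm{de}}\bigr) = 1,
\;\; w \ge 0
```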

  12. Estimating Fallout Building Attributes from Architectural Features and Global Earthquake Model (GEM) Building Descriptions

    Energy Technology Data Exchange (ETDEWEB)

    Dillon, Michael B. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Kane, Staci R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-03-01

    A nuclear explosion has the potential to injure or kill tens to hundreds of thousands (or more) of people through exposure to fallout (external gamma) radiation. Existing buildings can protect their occupants (reducing fallout radiation exposures) by placing material and distance between fallout particles and individuals indoors. Prior efforts have determined an initial set of building attributes suitable to reasonably assess a given building’s protection against fallout radiation. The current work provides methods to determine the quantitative values for these attributes from (a) common architectural features and data and (b) buildings described using the Global Earthquake Model (GEM) taxonomy. These methods will be used to improve estimates of fallout protection for operational US Department of Defense (DoD) and US Department of Energy (DOE) consequence assessment models.

  13. Modeling air pollutant emissions from Indian auto-rickshaws: Model development and implications for fleet emission rate estimates

    Science.gov (United States)

    Grieshop, Andrew P.; Boland, Daniel; Reynolds, Conor C. O.; Gouge, Brian; Apte, Joshua S.; Rogak, Steven N.; Kandlikar, Milind

    2012-04-01

    Chassis dynamometer tests were conducted on 40 Indian auto-rickshaws with 3 different fuel-engine combinations operating on the Indian Drive Cycle (IDC). Second-by-second (1 Hz) data were collected and used to develop velocity-acceleration look-up table models for fuel consumption and emissions of CO2, CO, total hydrocarbons (THC), oxides of nitrogen (NOx) and fine particulate matter (PM2.5) for each fuel-engine combination. Models were constructed based on group-average vehicle activity and emissions data in order to represent the performance of a 'typical' vehicle. The models accurately estimated full-cycle emissions for most species, though pollutants with more variable emission rates (e.g., PM2.5) were associated with larger errors. Vehicle emissions data showed large variability for single vehicles ('intra-vehicle variability') and within the test group ('inter-vehicle variability'), complicating the development of a single model to represent a vehicle population. To evaluate the impact of this variability, sensitivity analyses were conducted using vehicle activity data other than the IDC as model input. Inter-vehicle variability dominated the uncertainty in vehicle emission modeling. 'Leave-one-out' analyses indicated that the model outputs were relatively insensitive to the specific sample of vehicles and that the vehicle samples were likely a reasonable representation of the Delhi fleet. Intra-vehicle variability in emissions was also substantial, though had a relatively minor impact on model performance. The models were used to assess whether the IDC, used for emission factor development in India, accurately represents emissions from on-road driving. Modeling based on Global Positioning System (GPS) activity data from real-world auto-rickshaws suggests that, relative to on-road vehicles in Delhi, the IDC systematically under-estimates fuel use and emissions; real-world auto-rickshaws consume 15% more fuel and emit 49% more THC and 16% more PM2.5. The models …
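
    A velocity-acceleration look-up model is essentially a two-dimensional bin-and-average of the 1 Hz test data. A compact sketch with synthetic data and assumed column names:

```python
# Sketch: build a velocity-acceleration look-up table of mean emission
# rates from 1 Hz data, then predict a new drive trace by cell look-up.
import numpy as np
import pandas as pd

rng = np.random.default_rng(6)
df = pd.DataFrame({
    "v": rng.uniform(0, 15, 2000),          # velocity (m/s)
    "a": rng.uniform(-2, 2, 2000),          # acceleration (m/s^2)
    "co2_gps": rng.gamma(2.0, 1.0, 2000),   # CO2 emission rate (g/s)
})

v_edges = np.arange(0, 16, 1.0)
a_edges = np.arange(-2, 2.5, 0.5)
df["vb"] = np.digitize(df["v"], v_edges)
df["ab"] = np.digitize(df["a"], a_edges)
table = df.groupby(["vb", "ab"])["co2_gps"].mean()

# Predict a short 1 Hz trace; unseen cells fall back to the overall mean.
trace = pd.DataFrame({"v": [3.2, 7.8, 12.1], "a": [0.4, -0.7, 1.1]})
vb = np.digitize(trace["v"], v_edges)
ab = np.digitize(trace["a"], a_edges)
rates = [table.get((i, j), table.mean()) for i, j in zip(vb, ab)]
print("cycle CO2 (g):", round(sum(rates), 2))  # 1 s rows, so g/s sums to grams
```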

  14. Estimators for longitudinal latent exposure models: examining measurement model assumptions.

    Science.gov (United States)

    Sánchez, Brisa N; Kim, Sehee; Sammel, Mary D

    2017-06-15

    Latent variable (LV) models are increasingly being used in environmental epidemiology as a way to summarize multiple environmental exposures and thus minimize statistical concerns that arise in multiple regression. LV models may be especially useful when multivariate exposures are collected repeatedly over time. LV models can accommodate a variety of assumptions but, at the same time, present the user with many choices for model specification particularly in the case of exposure data collected repeatedly over time. For instance, the user could assume conditional independence of observed exposure biomarkers given the latent exposure and, in the case of longitudinal latent exposure variables, time invariance of the measurement model. Choosing which assumptions to relax is not always straightforward. We were motivated by a study of prenatal lead exposure and mental development, where assumptions of the measurement model for the time-changing longitudinal exposure have appreciable impact on (maximum-likelihood) inferences about the health effects of lead exposure. Although we were not particularly interested in characterizing the change of the LV itself, imposing a longitudinal LV structure on the repeated multivariate exposure measures could result in high efficiency gains for the exposure-disease association. We examine the biases of maximum likelihood estimators when assumptions about the measurement model for the longitudinal latent exposure variable are violated. We adapt existing instrumental variable estimators to the case of longitudinal exposures and propose them as an alternative to estimate the health effects of a time-changing latent predictor. We show that instrumental variable estimators remain unbiased for a wide range of data generating models and have advantages in terms of mean squared error. Copyright © 2017 John Wiley & Sons, Ltd.

  15. Estimation of a multivariate mean under model selection uncertainty

    Directory of Open Access Journals (Sweden)

    Georges Nguefack-Tsague

    2014-05-01

    Full Text Available Model selection uncertainty would occur if we selected a model based on one data set and subsequently applied it for statistical inferences, because the "correct" model would not be selected with certainty. When the selection and inference are based on the same dataset, some additional problems arise due to the correlation of the two stages (selection and inference). In this paper model selection uncertainty is considered and model averaging is proposed. The proposal is related to the theory of James and Stein on estimating more than three parameters from independent normal observations. We suggest that a model averaging scheme taking into account the selection procedure could be more appropriate than model selection alone. Some properties of this model averaging estimator are investigated; in particular, we show using Stein's results that it is a minimax estimator and can outperform Stein-type estimators.

  16. Parameter Estimation of Nonlinear Models in Forestry.

    OpenAIRE

    Fekedulegn, Desta; Mac Siúrtáin, Máirtín Pádraig; Colbert, Jim J.

    1999-01-01

    Partial derivatives of the negative exponential, monomolecular, Mitscherlich, Gompertz, logistic, Chapman-Richards, von Bertalanffy, Weibull and Richards nonlinear growth models are presented. The application of these partial derivatives in estimating the model parameters is illustrated. The parameters are estimated using the Marquardt iterative method of nonlinear regression relating top height to age of Norway spruce (Picea abies L.) from the Bowmont Norway Spruce Thinning …
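
    A sketch of one such fit (Chapman-Richards, Levenberg-Marquardt via SciPy, invented height-age data). Note that curve_fit approximates the Jacobian numerically by default, whereas the paper supplies the analytic partial derivatives:

```python
# Sketch: fit the Chapman-Richards model H(t) = a * (1 - exp(-b*t))**c
# to top height-age data with the Marquardt (Levenberg-Marquardt) method.
import numpy as np
from scipy.optimize import curve_fit

def chapman_richards(t, a, b, c):
    return a * (1.0 - np.exp(-b * t)) ** c

age = np.array([10, 15, 20, 25, 30, 40, 50, 60], dtype=float)      # years
height = np.array([4.1, 7.8, 11.2, 14.0, 16.3, 19.8, 22.1, 23.5])  # metres

popt, pcov = curve_fit(chapman_richards, age, height,
                       p0=[30.0, 0.05, 1.5], method="lm")
a, b, c = popt
print(f"a = {a:.1f} m (asymptote), b = {b:.3f} /yr, c = {c:.2f}")
```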

  17. Near infrared spectroscopy to estimate the temperature reached on burned soils: strategies to develop robust models.

    Science.gov (United States)

    Guerrero, César; Pedrosa, Elisabete T.; Pérez-Bejarano, Andrea; Keizer, Jan Jacob

    2014-05-01

    The temperature reached on soils is an important parameter for describing wildfire effects. However, methods to measure the temperature reached on burned soils have been poorly developed. Recently, near-infrared (NIR) spectroscopy has been identified as a valuable tool for this purpose. The NIR spectrum of a soil sample contains information on the organic matter (quantity and quality), clay (quantity and quality), minerals (such as carbonates and iron oxides) and water contents. Some of these components are modified by heat, and each temperature causes a particular group of changes, leaving a characteristic fingerprint on the NIR spectrum. This technique requires a model (or calibration) in which the changes in the NIR spectra are related to the temperature reached. To develop the model, several aliquots are heated at known temperatures and used as standards in the calibration set. This model makes it possible to estimate the temperature reached by a burned sample from its NIR spectrum. However, the estimation of the temperature reached using NIR spectroscopy reflects changes in several components and cannot be attributed to changes in a single soil component; we therefore estimate the temperature reached through the interaction between temperature and the thermo-sensitive soil components. In addition, we cannot expect a uniform distribution of these components, even at small scales, so their proportions can vary spatially across a site. This variation will be present in the samples used to construct the model and also in the samples affected by a wildfire. Therefore, strategies for developing robust models should focus on managing this expected variation. In this work we compared the prediction accuracy of models constructed with different approaches, designed to provide insights into how to distribute the efforts needed for the development of robust …
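
    The calibration step can be sketched with partial least squares, a common choice for NIR spectra (the paper's actual regression method and spectra differ; everything here is synthetic):

```python
# Sketch: regress known heating temperatures of lab-heated aliquots on
# their NIR spectra and assess the calibration by cross-validation.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(7)
n_samples, n_wavelengths = 120, 300
temps = rng.choice([25, 100, 200, 300, 400, 500], size=n_samples).astype(float)
# Synthetic spectra: one temperature-dependent absorption band plus noise.
wl = np.linspace(0, 1, n_wavelengths)
spectra = (np.outer(temps / 500.0, np.exp(-(wl - 0.4) ** 2 / 0.01))
           + rng.normal(0, 0.02, (n_samples, n_wavelengths)))

pls = PLSRegression(n_components=8)
pred = cross_val_predict(pls, spectra, temps, cv=10).ravel()
rmse = float(np.sqrt(np.mean((pred - temps) ** 2)))
print(f"cross-validated RMSE: {rmse:.1f} deg C")
```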

  18. Efficient Estimation of Non-Linear Dynamic Panel Data Models with Application to Smooth Transition Models

    DEFF Research Database (Denmark)

    Gørgens, Tue; Skeels, Christopher L.; Wurtz, Allan

    This paper explores estimation of a class of non-linear dynamic panel data models with additive unobserved individual-specific effects. The models are specified by moment restrictions. The class includes the panel data AR(p) model and panel smooth transition models. We derive an efficient set of moment restrictions for estimation and apply the results to estimation of panel smooth transition models with fixed effects, where the transition may be determined endogenously. The performance of the GMM estimator, both in terms of estimation precision and forecasting performance, is examined in a Monte Carlo …

  19. Estimating abundance of mountain lions from unstructured spatial sampling

    Science.gov (United States)

    Russell, Robin E.; Royle, J. Andrew; Desimone, Richard; Schwartz, Michael K.; Edwards, Victoria L.; Pilgrim, Kristy P.; Mckelvey, Kevin S.

    2012-01-01

    Mountain lions (Puma concolor) are often difficult to monitor because of their low capture probabilities, extensive movements, and large territories. Methods for estimating the abundance of this species are needed to assess population status, determine harvest levels, evaluate the impacts of management actions on populations, and derive conservation and management strategies. Traditional mark–recapture methods do not explicitly account for differences in individual capture probabilities due to the spatial distribution of individuals in relation to survey effort (or trap locations). However, recent advances in the analysis of capture–recapture data have produced methods estimating abundance and density of animals from spatially explicit capture–recapture data that account for heterogeneity in capture probabilities due to the spatial organization of individuals and traps. We adapt recently developed spatial capture–recapture models to estimate density and abundance of mountain lions in western Montana. Volunteers and state agency personnel collected mountain lion DNA samples in portions of the Blackfoot drainage (7,908 km2) in west-central Montana using 2 methods: snow back-tracking mountain lion tracks to collect hair samples and biopsy darting treed mountain lions to obtain tissue samples. Overall, we recorded 72 individual capture events, including captures both with and without tissue sample collection and hair samples resulting in the identification of 50 individual mountain lions (30 females, 19 males, and 1 unknown sex individual). We estimated lion densities from 8 models containing effects of distance, sex, and survey effort on detection probability. Our population density estimates ranged from a minimum of 3.7 mountain lions/100 km2 (95% CI 2.3–5.7) under the distance only model (including only an effect of distance on detection probability) to 6.7 (95% CI 3.1–11.0) under the full model (including effects of distance, sex, survey effort, and …
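
    One standard spatial capture-recapture detection specification (a half-normal form; not necessarily the authors' exact parameterization) shows how distance, sex, and survey effort can enter detection probability:

```latex
% Half-normal SCR detection: s_i = activity centre of lion i,
% x_j = detector (search) location j, E_j = survey effort at j.
p_{ij} = p_0 \exp\!\left(-\frac{d(s_i, x_j)^2}{2\sigma^2}\right),
\qquad
\operatorname{logit} p_0 = \alpha_0 + \alpha_1\, \mathrm{sex}_i + \alpha_2 \log E_j
```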

  20. Modeling and Parameter Estimation of a Small Wind Generation System

    Directory of Open Access Journals (Sweden)

    Carlos A. Ramírez Gómez

    2013-11-01

    Full Text Available The modeling and parameter estimation of a small wind generation system is presented in this paper. The system consists of a wind turbine, a permanent magnet synchronous generator, a three-phase rectifier, and a direct current load. To estimate the parameters, wind speed data were recorded at a weather station located on the Fraternidad Campus at ITM. The wind speed data were applied to a reference model programmed in PSIM software, and variables from that simulation were recorded to estimate the parameters. The wind generation system model, together with the estimated parameters, is an excellent representation of the detailed model, but the estimated model offers greater flexibility than the model programmed in PSIM.

  1. Estimating Lead (Pb) Bioavailability In A Mouse Model

    Science.gov (United States)

    Children are exposed to Pb through ingestion of Pb-contaminated soil. Soil Pb bioavailability is estimated using animal models or with chemically defined in vitro assays that measure bioaccessibility. However, bioavailability estimates in a large animal model (e.g., swine) can be...

  2. Information matrix estimation procedures for cognitive diagnostic models.

    Science.gov (United States)

    Liu, Yanlou; Xin, Tao; Andersson, Björn; Tian, Wei

    2018-03-06

    Two new methods to estimate the asymptotic covariance matrix for marginal maximum likelihood estimation of cognitive diagnosis models (CDMs), the inverse of the observed information matrix and the sandwich-type estimator, are introduced. Unlike several previous covariance matrix estimators, the new methods take into account both the item and structural parameters. The relationships between the observed information matrix, the empirical cross-product information matrix, the sandwich-type covariance matrix and the two approaches proposed by de la Torre (2009, J. Educ. Behav. Stat., 34, 115) are discussed. Simulation results show that, for a correctly specified CDM and Q-matrix or with a slightly misspecified probability model, the observed information matrix and the sandwich-type covariance matrix exhibit good performance with respect to providing consistent standard errors of item parameter estimates. However, with substantial model misspecification only the sandwich-type covariance matrix exhibits robust performance. © 2018 The British Psychological Society.
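
    For reference, the sandwich form discussed above is standard (notation assumed: ℓ_i is the marginal log-likelihood contribution of respondent i, and ψ collects both item and structural parameters):

```latex
% Sandwich-type covariance: A = observed information, B = cross-product matrix.
\widehat{\operatorname{Cov}}(\hat{\psi}) = A^{-1} B\, A^{-1},
\qquad
A = -\sum_{i=1}^{N} \frac{\partial^{2} \ell_i(\hat{\psi})}{\partial \psi\, \partial \psi^{\top}},
\qquad
B = \sum_{i=1}^{N} \frac{\partial \ell_i(\hat{\psi})}{\partial \psi}\,
    \frac{\partial \ell_i(\hat{\psi})}{\partial \psi^{\top}}
```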

  3. Estimating rates of local species extinction, colonization and turnover in animal communities

    Science.gov (United States)

    Nichols, James D.; Boulinier, T.; Hines, J.E.; Pollock, K.H.; Sauer, J.R.

    1998-01-01

    Species richness has been identified as a useful state variable for conservation and management purposes. Changes in richness over time provide a basis for predicting and evaluating community responses to management, to natural disturbance, and to changes in factors such as community composition (e.g., the removal of a keystone species). Probabilistic capture-recapture models have been used recently to estimate species richness from species count and presence-absence data. These models do not require the common assumption that all species are detected in sampling efforts. We extend this approach to the development of estimators useful for studying the vital rates responsible for changes in animal communities over time; rates of local species extinction, turnover, and colonization. Our approach to estimation is based on capture-recapture models for closed animal populations that permit heterogeneity in detection probabilities among the different species in the sampled community. We have developed a computer program, COMDYN, to compute many of these estimators and associated bootstrap variances. Analyses using data from the North American Breeding Bird Survey (BBS) suggested that the estimators performed reasonably well. We recommend estimators based on probabilistic modeling for future work on community responses to management efforts as well as on basic questions about community dynamics.

  4. Efficient estimation of an additive quantile regression model

    NARCIS (Netherlands)

    Cheng, Y.; de Gooijer, J.G.; Zerom, D.

    2011-01-01

    In this paper, two non-parametric estimators are proposed for estimating the components of an additive quantile regression model. The first estimator is a computationally convenient approach which can be viewed as a more viable alternative to existing kernel-based approaches. The second estimator

  5. Perspectives on Modelling BIM-enabled Estimating Practices

    Directory of Open Access Journals (Sweden)

    Willy Sher

    2014-12-01

    Full Text Available BIM-enabled estimating processes do not replace or provide a substitute for the traditional approaches used in the architecture, engineering and construction industries. This paper explores the impact of BIM on these traditional processes.  It identifies differences between the approaches used with BIM and other conventional methods, and between the various construction professionals that prepare estimates. We interviewed 17 construction professionals from client organizations, contracting organizations, consulting practices and specialist-project firms. Our analyses highlight several logical relationships between estimating processes and BIM attributes. Estimators need to respond to the challenges BIM poses to traditional estimating practices. BIM-enabled estimating circumvents long-established conventions and traditional approaches, and focuses on data management.  Consideration needs to be given to the model data required for estimating, to the means by which these data may be harnessed when exported, to the means by which the integrity of model data are protected, to the creation and management of tools that work effectively and efficiently in multi-disciplinary settings, and to approaches that narrow the gap between virtual reality and actual reality.  Areas for future research are also identified in the paper.

  6. Comparing estimates of genetic variance across different relationship models.

    Science.gov (United States)

    Legarra, Andres

    2016-02-01

    Use of relationships between individuals to estimate genetic variances and heritabilities via mixed models is standard practice in human, plant and livestock genetics. Different models or information for relationships may give different estimates of genetic variances. However, comparing these estimates across different relationship models is not straightforward as the implied base populations differ between relationship models. In this work, I present a method to compare estimates of variance components across different relationship models. I suggest referring genetic variances obtained using different relationship models to the same reference population, usually a set of individuals in the population. Expected genetic variance of this population is the estimated variance component from the mixed model times a statistic, Dk, which is the average self-relationship minus the average (self- and across-) relationship. For most typical models of relationships, Dk is close to 1. However, this is not true for very deep pedigrees, for identity-by-state relationships, or for non-parametric kernels, which tend to overestimate the genetic variance and the heritability. Using mice data, I show that heritabilities from identity-by-state and kernel-based relationships are overestimated. Weighting these estimates by Dk scales them to a base comparable to genomic or pedigree relationships, avoiding wrong comparisons, for instance, "missing heritabilities". Copyright © 2015 Elsevier Inc. All rights reserved.
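
    The Dk statistic defined above is simple to compute from any relationship matrix; a minimal numpy sketch (variable names are placeholders):

      import numpy as np

      def dk_statistic(K):
          """Dk = average self-relationship minus the average (self- and
          across-) relationship, for a relationship matrix K (pedigree,
          genomic, identity-by-state, or kernel based)."""
          return np.mean(np.diag(K)) - np.mean(K)

      # Referring an estimated genetic variance to the reference population:
      #   var_ref = sigma2_hat * dk_statistic(K)
      # For typical pedigree or genomic K, Dk is close to 1; IBS or kernel
      # matrices can give Dk far from 1, flagging inflated heritabilities.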

  7. Advanced fuel cycle cost estimation model and its cost estimation results for three nuclear fuel cycles using a dynamic model in Korea

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Sungki, E-mail: sgkim1@kaeri.re.kr [Korea Atomic Energy Research Institute, 1045 Daedeokdaero, Yuseong-gu, Daejeon 305-353 (Korea, Republic of); Ko, Wonil [Korea Atomic Energy Research Institute, 1045 Daedeokdaero, Yuseong-gu, Daejeon 305-353 (Korea, Republic of); Youn, Saerom; Gao, Ruxing [University of Science and Technology, 217 Gajungro, Yuseong-gu, Daejeon 305-350 (Korea, Republic of); Bang, Sungsig, E-mail: ssbang@kaist.ac.kr [Korea Advanced Institute of Science and Technology, Department of Business and Technology Management, 291 Deahak-ro, Yuseong-gu, Daejeon 305-701 (Korea, Republic of)

    2015-11-15

    Highlights: • The nuclear fuel cycle cost using a new cost estimation model was analyzed. • The material flows of three nuclear fuel cycle options were calculated. • The generation cost of once-through was estimated to be 66.88 mills/kW h. • The generation cost of pyro-SFR recycling was estimated to be 78.06 mills/kW h. • The reactor cost was identified as the main cost driver of pyro-SFR recycling. - Abstract: The present study analyzes advanced nuclear fuel cycle cost estimation models such as the different discount rate model and its cost estimation results. To do so, an analysis of the nuclear fuel cycle cost of three options (direct disposal (once through), PWR–MOX (Mixed OXide fuel), and Pyro-SFR (Sodium-cooled Fast Reactor)) from the viewpoint of economic sense, focusing on the cost estimation model, was conducted using a dynamic model. From an analysis of the fuel cycle cost estimation results, it was found that some cost gap exists between the traditional same discount rate model and the advanced different discount rate model. However, this gap does not change the priority of the nuclear fuel cycle option from the viewpoint of economics. In addition, the fuel cycle costs of OT (Once-Through) and Pyro-SFR recycling based on the most likely value using a probabilistic cost estimation except for reactor costs were calculated to be 8.75 mills/kW h and 8.30 mills/kW h, respectively. Namely, the Pyro-SFR recycling option was more economical than the direct disposal option. However, if the reactor cost is considered, the economic sense in the generation cost between the two options (direct disposal vs. Pyro-SFR recycling) can be changed because of the high reactor cost of an SFR.
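
    The generation costs quoted above are levelized figures. A minimal sketch of the underlying discounting arithmetic, assuming a single discount rate (the paper's "different discount rate model" instead applies distinct rates to different cost components):

      def levelized_cost(costs, energy, rate):
          """Levelized generation cost: present value of the cost stream
          divided by the present value of the electricity output. With
          costs in mills and energy in kWh the result is in mills/kWh."""
          pv_cost = sum(c / (1 + rate) ** t for t, c in enumerate(costs))
          pv_energy = sum(e / (1 + rate) ** t for t, e in enumerate(energy))
          return pv_cost / pv_energy

      # Illustrative numbers only, not the study's cost data:
      print(levelized_cost([100.0, 20.0, 20.0], [0.0, 400.0, 400.0], 0.05))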

  8. Linear regressive model structures for estimation and prediction of compartmental diffusive systems

    NARCIS (Netherlands)

    Vries, D; Keesman, K.J.; Zwart, Heiko J.

    In input-output relations of (compartmental) diffusive systems, physical parameters appear non-linearly, resulting in the use of (constrained) non-linear parameter estimation techniques with their shortcomings regarding global optimality and computational effort. Given an LTI system in state space

  9. Linear regressive model structures for estimation and prediction of compartmental diffusive systems

    NARCIS (Netherlands)

    Vries, D.; Keesman, K.J.; Zwart, H.

    2006-01-01

    In input-output relations of (compartmental) diffusive systems, physical parameters appear non-linearly, resulting in the use of (constrained) non-linear parameter estimation techniques with their shortcomings regarding global optimality and computational effort. Given an LTI system in state

  10. Test models for improving filtering with model errors through stochastic parameter estimation

    International Nuclear Information System (INIS)

    Gershgorin, B.; Harlim, J.; Majda, A.J.

    2010-01-01

    The filtering skill for turbulent signals from nature is often limited by model errors created by utilizing an imperfect model for filtering. Updating the parameters in the imperfect model through stochastic parameter estimation is one way to increase filtering skill and model performance. Here a suite of stringent test models for filtering with stochastic parameter estimation is developed based on the Stochastic Parameterization Extended Kalman Filter (SPEKF). These new SPEKF-algorithms systematically correct both multiplicative and additive biases and involve exact formulas for propagating the mean and covariance including the parameters in the test model. A comprehensive study is presented of robust parameter regimes for increasing filtering skill through stochastic parameter estimation for turbulent signals as the observation time and observation noise are varied and even when the forcing is incorrectly specified. The results here provide useful guidelines for filtering turbulent signals in more complex systems with significant model errors.
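
    The idea of stochastic parameter estimation can be illustrated with a plain augmented-state Kalman filter, in which an additive model-error bias is appended to the state and estimated from the observations. This is only in the spirit of SPEKF, which propagates the augmented mean and covariance with exact formulas rather than a generic filter; all values below are assumptions.

      import numpy as np

      # Scalar linear test model x_{k+1} = a*x_k + b + noise, with the bias b
      # unknown and estimated jointly with x via state augmentation.
      a, q, r = 0.9, 0.1, 0.5                     # dynamics, process/obs. variances
      F = np.array([[a, 1.0], [0.0, 1.0]])        # augmented transition for (x, b)
      H = np.array([[1.0, 0.0]])                  # only x is observed
      Q = np.diag([q, 1e-4])                      # slow random walk on the bias
      z = np.array([0.5, 0.7, 0.6, 0.9])          # synthetic observations

      m, P = np.zeros(2), np.eye(2)
      for zk in z:
          m, P = F @ m, F @ P @ F.T + Q           # predict
          S = H @ P @ H.T + r                     # innovation variance
          K = P @ H.T / S                         # Kalman gain
          m = m + (K * (zk - H @ m)).ravel()      # update mean
          P = P - K @ H @ P                       # update covariance
      print("filtered state and bias:", m)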

  11. Optimal covariance selection for estimation using graphical models

    OpenAIRE

    Vichik, Sergey; Oshman, Yaakov

    2011-01-01

    We consider a problem encountered when trying to estimate a Gaussian random field using a distributed estimation approach based on Gaussian graphical models. Because of constraints imposed by estimation tools used in Gaussian graphical models, the a priori covariance of the random field is constrained to embed conditional independence constraints among a significant number of variables. The problem is, then: given the (unconstrained) a priori covariance of the random field, and the conditiona...

  12. Estimating Canopy Dark Respiration for Crop Models

    Science.gov (United States)

    Monje Mejia, Oscar Alberto

    2014-01-01

    Crop production is obtained from accurate estimates of daily carbon gain. Canopy gross photosynthesis (Pgross) can be estimated from biochemical models of photosynthesis using sun and shaded leaf portions and the amount of intercepted photosynthetically active radiation (PAR). In turn, canopy daily net carbon gain can be estimated from canopy daily gross photosynthesis when canopy dark respiration (Rd) is known.

  13. Estimation methods for nonlinear state-space models in ecology

    DEFF Research Database (Denmark)

    Pedersen, Martin Wæver; Berg, Casper Willestofte; Thygesen, Uffe Høgsbro

    2011-01-01

    The use of nonlinear state-space models for analyzing ecological systems is increasing. A wide range of estimation methods for such models are available to ecologists; however, it is not always clear which is the appropriate method to choose. To this end, three approaches to estimation in the theta logistic model for population dynamics were benchmarked by Wang (2007). Similarly, we examine and compare the estimation performance of three alternative methods using simulated data. The first approach is to partition the state-space into a finite number of states and formulate the problem as a hidden Markov model (HMM). The second method uses the mixed effects modeling and fast numerical integration framework of the AD Model Builder (ADMB) open-source software. The third alternative is to use the popular Bayesian framework of BUGS. The study showed that state and parameter estimation performance...
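
    The first approach above, partitioning the state space and running standard HMM machinery, can be sketched compactly. The theta-logistic parameters, noise levels, grid, and observations below are all assumptions for illustration.

      import numpy as np

      r, K_cap, theta, s_proc, s_obs = 0.5, 100.0, 1.0, 5.0, 4.0
      grid = np.linspace(1.0, 200.0, 200)              # discretized abundance bins

      def step_mean(n):
          return n + r * n * (1.0 - (n / K_cap) ** theta)   # theta-logistic map

      # Row i: Gaussian process noise around the deterministic step from bin i.
      T = np.exp(-0.5 * ((grid[None, :] - step_mean(grid)[:, None]) / s_proc) ** 2)
      T /= T.sum(axis=1, keepdims=True)

      alpha = np.ones(grid.size) / grid.size           # uniform initial state
      for y in [60.0, 75.0, 90.0]:                     # noisy abundance counts
          lik = np.exp(-0.5 * ((y - grid) / s_obs) ** 2)   # Gaussian observation model
          alpha = lik * (alpha @ T)                    # forward (filtering) recursion
          alpha /= alpha.sum()
      print("filtered mean abundance:", (alpha * grid).sum())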

  14. Estimation and prediction under local volatility jump-diffusion model

    Science.gov (United States)

    Kim, Namhyoung; Lee, Younhee

    2018-02-01

    Volatility is an important factor in operating a company and managing risk. In the portfolio optimization and risk hedging using the option, the value of the option is evaluated using the volatility model. Various attempts have been made to predict option value. Recent studies have shown that stochastic volatility models and jump-diffusion models reflect stock price movements accurately. However, these models have practical limitations. Combining them with the local volatility model, which is widely used among practitioners, may lead to better performance. In this study, we propose a more effective and efficient method of estimating option prices by combining the local volatility model with the jump-diffusion model and apply it using both artificial and actual market data to evaluate its performance. The calibration process for estimating the jump parameters and local volatility surfaces is divided into three stages. We apply the local volatility model, stochastic volatility model, and local volatility jump-diffusion model estimated by the proposed method to KOSPI 200 index option pricing. The proposed method displays good estimation and prediction performance.

  15. Functional Mixed Effects Model for Small Area Estimation.

    Science.gov (United States)

    Maiti, Tapabrata; Sinha, Samiran; Zhong, Ping-Shou

    2016-09-01

    Functional data analysis has become an important area of research due to its ability of handling high dimensional and complex data structures. However, the development is limited in the context of linear mixed effect models, and in particular, for small area estimation. The linear mixed effect models are the backbone of small area estimation. In this article, we consider area level data, and fit a varying coefficient linear mixed effect model where the varying coefficients are semi-parametrically modeled via B-splines. We propose a method of estimating the fixed effect parameters and consider prediction of random effects that can be implemented using a standard software. For measuring prediction uncertainties, we derive an analytical expression for the mean squared errors, and propose a method of estimating the mean squared errors. The procedure is illustrated via a real data example, and operating characteristics of the method are judged using finite sample simulation studies.

  16. How do marital status, work effort, and wage rates interact?

    Science.gov (United States)

    Ahituv, Avner; Lerman, Robert I

    2007-08-01

    How marital status interacts with men's earnings is an important analytic and policy issue, especially in the context of debates in the United States over programs that encourage healthy marriage. This paper generates new findings about the earnings-marriage relationship by estimating the linkages among flows into and out of marriage, work effort, and wage rates. The estimates are based on National Longitudinal Survey of Youth panel data, covering 23 years of marital and labor market outcomes, and control for unobserved heterogeneity. We estimate marriage effects on hours worked (our proxy for work effort) and on wage rates for all men and for black and low-skilled men separately. The estimates reveal that entering marriage raises hours worked quickly and substantially but that marriage's effect on wage rates takes place more slowly while men continue in marriage. Together, the stimulus to hours worked and wage rates generates an 18%-19% increase in earnings, with about one-third to one-half of the marriage earnings premium attributable to higher work effort. At the same time, higher wage rates and hours worked encourage men to marry and to stay married. Thus, being married and having high earnings reinforce each other over time.

  17. Limitations to estimating bacterial cross-species transmission using genetic and genomic markers: inferences from simulation modeling

    Science.gov (United States)

    Benavides, Julio Andre; Cross, Paul C.; Luikart, Gordon; Creel, Scott

    2014-01-01

    Cross-species transmission (CST) of bacterial pathogens has major implications for human health, livestock, and wildlife management because it determines whether control actions in one species may have subsequent effects on other potential host species. The study of bacterial transmission has benefitted from methods measuring two types of genetic variation: variable number of tandem repeats (VNTRs) and single nucleotide polymorphisms (SNPs). However, it is unclear whether these data can distinguish between different epidemiological scenarios. We used a simulation model with two host species and known transmission rates (within and between species) to evaluate the utility of these markers for inferring CST. We found that CST estimates are biased for a wide range of parameters when based on VNTRs and a most parsimonious reconstructed phylogeny. However, estimation of CST rates lower than 5% can be achieved with relatively low bias using as few as 250 SNPs. CST estimates are sensitive to several parameters, including the number of mutations accumulated since introduction, stochasticity, the genetic difference of strains introduced, and the sampling effort. Our results suggest that, even with whole-genome sequences, unbiased estimates of CST will be difficult when sampling is limited, mutation rates are low, or for pathogens that were recently introduced.

  18. Estimating actigraphy from motion artifacts in ECG and respiratory effort signals.

    Science.gov (United States)

    Fonseca, Pedro; Aarts, Ronald M; Long, Xi; Rolink, Jérôme; Leonhardt, Steffen

    2016-01-01

    Recent work in unobtrusive sleep/wake classification has shown that cardiac and respiratory features can help improve classification performance. Nevertheless, actigraphy remains the single most discriminative modality for this task. Unfortunately, it requires the use of dedicated devices in addition to the sensors used to measure electrocardiogram (ECG) or respiratory effort. This paper proposes a method to estimate actigraphy from the body movement artifacts present in the ECG and respiratory inductance plethysmography (RIP) based on the time-frequency analysis of those signals. Using a continuous wavelet transform to analyze RIP, and ECG and RIP combined, it provides a surrogate measure of actigraphy with moderate correlation (for ECG+RIP, ρ = 0.74, p < 0.001) and agreement (mean bias ratio of 0.94 and 95% agreement ratios of 0.11 and 8.45) with reference actigraphy. More importantly, it can be used as a replacement for actigraphy in sleep/wake classification: after cross-validation with a data set comprising polysomnographic (PSG) recordings of 15 healthy subjects and 25 insomniacs annotated by an external sleep technician, it achieves a statistically non-inferior classification performance when used together with respiratory features (average κ of 0.64 for 15 healthy subjects, and 0.50 for a dataset with 40 healthy and insomniac subjects), and when used together with respiratory and cardiac features (average κ of 0.66 for 15 healthy subjects, and 0.56 for 40 healthy and insomniac subjects). Since this method eliminates the need for a dedicated actigraphy device, it reduces the number of sensors needed for sleep/wake classification to a single sensor when using respiratory features, and to two sensors when using respiratory and cardiac features without any loss in performance. It offers a major benefit in terms of comfort for long-term home monitoring and is immediately applicable for legacy ECG and RIP monitoring devices already used in clinical
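
    A rough sketch of the time-frequency idea described above: take a continuous wavelet transform of the respiratory effort trace and sum spectral power per sample, so broadband bursts caused by body movement stand out against the narrow-band breathing. It uses the PyWavelets package; the wavelet choice, scales, and synthetic signal are assumptions, not the paper's calibrated pipeline.

      import numpy as np
      import pywt  # PyWavelets

      def movement_surrogate(signal, fs, scales=None):
          """Surrogate actigraphy from a RIP trace: total wavelet power
          per sample; movement artifacts appear as broadband bursts."""
          if scales is None:
              scales = np.arange(1, 31)
          coef, _ = pywt.cwt(signal, scales, 'morl', sampling_period=1.0 / fs)
          return (np.abs(coef) ** 2).sum(axis=0)

      fs = 10.0                                   # Hz, assumed sampling rate
      rng = np.random.default_rng(0)
      t = np.arange(0.0, 60.0, 1.0 / fs)
      rip = np.sin(2 * np.pi * 0.3 * t)           # ~0.3 Hz breathing signal
      rip[300:320] += rng.normal(0.0, 1.0, 20)    # simulated movement artifact
      activity = movement_surrogate(rip, fs)      # peaks around the artifact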

  19. Aerosol-Radiation-Cloud Interactions in the South-East Atlantic: Model-Relevant Observations and the Beneficiary Modeling Efforts in the Realm of the EVS-2 Project ORACLES

    Science.gov (United States)

    Redemann, Jens

    2018-01-01

    Globally, aerosols remain a major contributor to uncertainties in assessments of anthropogenically-induced changes to the Earth climate system, despite concerted efforts using satellite and suborbital observations and increasingly sophisticated models. The quantification of direct and indirect aerosol radiative effects, as well as cloud adjustments thereto, even at regional scales, continues to elude our capabilities. Some of our limitations are due to insufficient sampling and accuracy of the relevant observables, under an appropriate range of conditions to provide useful constraints for modeling efforts at various climate scales. In this talk, I will describe (1) the efforts of our group at NASA Ames to develop new airborne instrumentation to address some of the data insufficiencies mentioned above; (2) the efforts by the EVS-2 ORACLES project to address aerosol-cloud-climate interactions in the SE Atlantic and (3) time permitting, recent results from a synergistic use of A-Train aerosol data to test climate model simulations of present-day direct radiative effects in some of the AEROCOM phase II global climate models.

  20. Consistency in Estimation and Model Selection of Dynamic Panel Data Models with Fixed Effects

    Directory of Open Access Journals (Sweden)

    Guangjie Li

    2015-07-01

    Full Text Available We examine the relationship between consistent parameter estimation and model selection for autoregressive panel data models with fixed effects. We find that the transformation of fixed effects proposed by Lancaster (2002) does not necessarily lead to consistent estimation of common parameters when some true exogenous regressors are excluded. We propose a data-dependent way to specify the prior of the autoregressive coefficient and argue for comparing different model specifications before parameter estimation. Model selection properties of Bayes factors and the Bayesian information criterion (BIC) are investigated. When model uncertainty is substantial, we recommend the use of Bayesian Model Averaging to obtain point estimators with lower root mean squared errors (RMSE). We also study the implications of different levels of inclusion probabilities by simulations.

  1. Weibull Parameters Estimation Based on Physics of Failure Model

    DEFF Research Database (Denmark)

    Kostandyan, Erik; Sørensen, John Dalsgaard

    2012-01-01

    Reliability estimation procedures are discussed for the example of fatigue development in solder joints using a physics of failure model. The accumulated damage is estimated based on a physics of failure model, the Rainflow counting algorithm and the Miner's rule. A threshold model is used for degradation modeling and failure criteria determination. The time dependent accumulated damage is assumed linearly proportional to the time dependent degradation level. It is observed that the deterministic accumulated damage at the level of unity closely estimates the characteristic fatigue life of Weibull...
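
    The damage-accumulation step described above is easy to sketch: combine rainflow-counted cycles with Miner's rule under an assumed power-law S-N curve. The constants below are placeholders, not the solder-joint values used in the paper.

      import numpy as np

      def miner_damage(stress_ranges, counts, C, m):
          """Accumulated damage by Miner's rule, D = sum(n_i / N_i), with an
          assumed S-N relation N_i = C * S_i**(-m); C and m are placeholder
          material constants."""
          N_allow = C * np.asarray(stress_ranges, dtype=float) ** (-m)
          return float(np.sum(np.asarray(counts, dtype=float) / N_allow))

      # Stress ranges and cycle counts as a rainflow count would produce them:
      D = miner_damage([40.0, 80.0], [1.0e4, 2.0e3], C=1.0e12, m=3.0)
      print("accumulated damage:", D)   # failure when D reaches the threshold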

  2. Estimating temporal trend in the presence of spatial complexity: a Bayesian hierarchical model for a wetland plant population undergoing restoration.

    Directory of Open Access Journals (Sweden)

    Thomas J Rodhouse

    Full Text Available Monitoring programs that evaluate restoration and inform adaptive management are important for addressing environmental degradation. These efforts may be well served by spatially explicit hierarchical approaches to modeling because of unavoidable spatial structure inherited from past land use patterns and other factors. We developed Bayesian hierarchical models to estimate trends from annual density counts observed in a spatially structured wetland forb (Camassia quamash [camas]) population following the cessation of grazing and mowing on the study area, and in a separate reference population of camas. The restoration site was bisected by roads and drainage ditches, resulting in distinct subpopulations ("zones") with different land use histories. We modeled this spatial structure by fitting zone-specific intercepts and slopes. We allowed spatial covariance parameters in the model to vary by zone, as in stratified kriging, accommodating anisotropy and improving computation and biological interpretation. Trend estimates provided evidence of a positive effect of passive restoration, and the strength of evidence was influenced by the amount of spatial structure in the model. Allowing trends to vary among zones and accounting for topographic heterogeneity increased precision of trend estimates. Accounting for spatial autocorrelation shifted parameter coefficients in ways that varied among zones depending on strength of statistical shrinkage, autocorrelation and topographic heterogeneity--a phenomenon not widely described. Spatially explicit estimates of trend from hierarchical models will generally be more useful to land managers than pooled regional estimates and provide more realistic assessments of uncertainty. The ability to grapple with historical contingency is an appealing benefit of this approach.

  3. Estimating varying coefficients for partial differential equation models.

    Science.gov (United States)

    Zhang, Xinyu; Cao, Jiguo; Carroll, Raymond J

    2017-09-01

    Partial differential equations (PDEs) are used to model complex dynamical systems in multiple dimensions, and their parameters often have important scientific interpretations. In some applications, PDE parameters are not constant but can change depending on the values of covariates, a feature that we call varying coefficients. We propose a parameter cascading method to estimate varying coefficients in PDE models from noisy data. Our estimates of the varying coefficients are shown to be consistent and asymptotically normally distributed. The performance of our method is evaluated by a simulation study and by an empirical study estimating three varying coefficients in a PDE model arising from LIDAR data. © 2017, The International Biometric Society.

  4. Coupling Hydrologic and Hydrodynamic Models to Estimate PMF

    Science.gov (United States)

    Felder, G.; Weingartner, R.

    2015-12-01

    Most sophisticated probable maximum flood (PMF) estimations derive the PMF from the probable maximum precipitation (PMP) by applying deterministic hydrologic models calibrated with observed data. This method is based on the assumption that the hydrological system is stationary, meaning that the system behaviour during the calibration period or the calibration event is presumed to be the same as it is during the PMF. However, as soon as a catchment-specific threshold is reached, the system is no longer stationary. At or beyond this threshold, retention areas, new flow paths, and changing runoff processes can strongly affect downstream peak discharge. These effects can be accounted for by coupling hydrologic and hydrodynamic models, a technique that is particularly promising when the expected peak discharge may considerably exceed the observed maximum discharge. In such cases, the coupling of hydrologic and hydraulic models has the potential to significantly increase the physical plausibility of PMF estimations. This procedure ensures both that the estimated extreme peak discharge does not exceed the physical limit based on riverbed capacity and that the dampening effect of inundation processes on peak discharge is considered. Our study discusses the prospect of considering retention effects on PMF estimations by coupling hydrologic and hydrodynamic models. This method is tested by forcing PREVAH, a semi-distributed deterministic hydrological model, with randomly generated, physically plausible extreme precipitation patterns. The resulting hydrographs are then used to externally force the hydraulic model BASEMENT-ETH (riverbed in 1D, potential inundation areas in 2D). Finally, the PMF estimation results obtained using the coupled modelling approach are compared to the results obtained using ordinary hydrologic modelling.

  5. Effort, anhedonia, and function in schizophrenia: reduced effort allocation predicts amotivation and functional impairment.

    Science.gov (United States)

    Barch, Deanna M; Treadway, Michael T; Schoen, Nathan

    2014-05-01

    One of the most debilitating aspects of schizophrenia is an apparent lack of interest in or ability to exert effort for rewards. Such "negative symptoms" may prevent individuals from obtaining potentially beneficial outcomes in educational, occupational, or social domains. In animal models, dopamine abnormalities decrease willingness to work for rewards, implicating dopamine (DA) function as a candidate substrate for negative symptoms given that schizophrenia involves dysregulation of the dopamine system. We used the effort-expenditure for rewards task (EEfRT) to assess the degree to which individuals with schizophrenia were willing to exert increased effort for either larger magnitude rewards or for rewards that were more probable. Fifty-nine individuals with schizophrenia and 39 demographically similar controls performed the EEfRT task, which involves making choices between "easy" and "hard" tasks to earn potential rewards. Individuals with schizophrenia showed less of an increase in effort allocation as either reward magnitude or probability increased. In controls, the frequency of choosing the hard task in high reward magnitude and probability conditions was negatively correlated with depression severity and anhedonia. In schizophrenia, fewer hard task choices were associated with more severe negative symptoms and worse community and work function as assessed by a caretaker. Consistent with patterns of disrupted dopamine functioning observed in animal models of schizophrenia, these results suggest that one mechanism contributing to impaired function and motivational drive in schizophrenia may be a reduced allocation of greater effort for higher magnitude or higher probability rewards.

  6. Reasonable limits to radiation protection efforts

    International Nuclear Information System (INIS)

    Gonen, Y.G.

    1982-01-01

    It is shown that change in life expectancy (ΔLE) is an improved estimate for risks and safety efforts, reflecting the relevant social goal. A cost-effectiveness index, safety investment/ΔLE, is defined. The harm from low level radiation is seen as a reduction of life expectancy instead of an increased probability of contracting cancer. (author)

  7. Neurodynamic evaluation of hearing aid features using EEG correlates of listening effort.

    Science.gov (United States)

    Bernarding, Corinna; Strauss, Daniel J; Hannemann, Ronny; Seidler, Harald; Corona-Strauss, Farah I

    2017-06-01

    In this study, we propose a novel estimate of listening effort using electroencephalographic data. This method is a translation of our past findings, gained from the evoked electroencephalographic activity, to the oscillatory EEG activity. To test this technique, electroencephalographic data from experienced hearing aid users with moderate hearing loss were recorded, wearing hearing aids. The investigated hearing aid settings were: a directional microphone combined with a noise reduction algorithm in a medium and a strong setting, the noise reduction setting turned off, and a setting using omnidirectional microphones without any noise reduction. The results suggest that the electroencephalographic estimate of listening effort seems to be a useful tool to map the exerted effort of the participants. In addition, the results indicate that a directional processing mode can reduce the listening effort in multitalker listening situations.

  8. Improving PAGER's real-time earthquake casualty and loss estimation toolkit: a challenge

    Science.gov (United States)

    Jaiswal, K.S.; Wald, D.J.

    2012-01-01

    We describe the on-going developments of PAGER’s loss estimation models, and discuss value-added web content that can be generated related to exposure, damage and loss outputs for a variety of PAGER users. These developments include identifying vulnerable building types in any given area, estimating earthquake-induced damage and loss statistics by building type, and developing visualization aids that help locate areas of concern for improving post-earthquake response efforts. While detailed exposure and damage information is highly useful and desirable, significant improvements are still necessary in order to improve underlying building stock and vulnerability data at a global scale. Existing efforts with the GEM’s GED4GEM and GVC consortia will help achieve some of these objectives. This will benefit PAGER especially in regions where PAGER’s empirical model is less-well constrained; there, the semi-empirical and analytical models will provide robust estimates of damage and losses. Finally, we outline some of the challenges associated with rapid casualty and loss estimation that we experienced while responding to recent large earthquakes worldwide.

  9. Applicability of models to estimate traffic noise for urban roads.

    Science.gov (United States)

    Melo, Ricardo A; Pimentel, Roberto L; Lacerda, Diego M; Silva, Wekisley M

    2015-01-01

    Traffic noise is a highly relevant environmental impact in cities. Models to estimate traffic noise, in turn, can be useful tools to guide mitigation measures. In this paper, the applicability of models to estimate noise levels produced by a continuous flow of vehicles on urban roads is investigated. The aim is to identify which models are more appropriate to estimate traffic noise in urban areas since several models available were conceived to estimate noise from highway traffic. First, measurements of traffic noise, vehicle count and speed were carried out on five arterial urban roads of a Brazilian city. Together with geometric measurements of width of lanes and distance from noise meter to lanes, these data were input in several models to estimate traffic noise. The predicted noise levels were then compared to the respective measured counterparts for each road investigated. In addition, a chart showing mean differences in noise between estimations and measurements is presented, to evaluate the overall performance of the models. Measured Leq values varied from 69 to 79 dB(A) for traffic flows varying from 1618 to 5220 vehicles/h. Mean noise level differences between estimations and measurements for all urban roads investigated ranged from -3.5 to 5.5 dB(A). According to the results, deficiencies of some models are discussed while other models are identified as applicable to noise estimations on urban roads in a condition of continuous flow. Key issues to apply such models to urban roads are highlighted.
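
    One point worth making explicit when comparing measured and estimated levels, as above, is that Leq values combine energetically (logarithmically) rather than arithmetically. A small sketch with illustrative numbers, not the study's data:

      import numpy as np

      def leq(levels_db):
          """Equivalent continuous level from short-term levels: energetic
          averaging, not an arithmetic mean of dB values."""
          levels_db = np.asarray(levels_db, dtype=float)
          return 10.0 * np.log10(np.mean(10.0 ** (levels_db / 10.0)))

      measured = [69.0, 72.0, 75.0, 79.0]          # illustrative Leq values, dB(A)
      predicted = [71.0, 70.0, 77.0, 77.0]         # from some estimation model
      print("measured Leq:", leq(measured))
      print("mean difference:", np.mean(np.subtract(predicted, measured)), "dB(A)")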

  10. Temporal rainfall estimation using input data reduction and model inversion

    Science.gov (United States)

    Wright, A. J.; Vrugt, J. A.; Walker, J. P.; Pauwels, V. R. N.

    2016-12-01

    Floods are devastating natural hazards. To provide accurate, precise and timely flood forecasts there is a need to understand the uncertainties associated with temporal rainfall and model parameters. The estimation of temporal rainfall and model parameter distributions from streamflow observations in complex dynamic catchments adds skill to current areal rainfall estimation methods, allows for the uncertainty of rainfall input to be considered when estimating model parameters and provides the ability to estimate rainfall from poorly gauged catchments. Current methods to estimate temporal rainfall distributions from streamflow are unable to adequately explain and invert complex non-linear hydrologic systems. This study uses the Discrete Wavelet Transform (DWT) to reduce rainfall dimensionality for the catchment of Warwick, Queensland, Australia. The reduction of rainfall to DWT coefficients allows the input rainfall time series to be simultaneously estimated along with model parameters. The estimation process is conducted using multi-chain Markov chain Monte Carlo simulation with the DREAMZS algorithm. The use of a likelihood function that considers both rainfall and streamflow error allows for model parameter and temporal rainfall distributions to be estimated. Estimation of the wavelet approximation coefficients of lower order decomposition structures was able to estimate the most realistic temporal rainfall distributions. These rainfall estimates were all able to simulate streamflow that was superior to the results of a traditional calibration approach. It is shown that the choice of wavelet has a considerable impact on the robustness of the inversion. The results demonstrate that streamflow data contains sufficient information to estimate temporal rainfall and model parameter distributions. The extent and variance of rainfall time series that are able to simulate streamflow that is superior to that simulated by a traditional calibration approach is a
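
    The dimension-reduction step described above can be sketched with the PyWavelets package: a rainfall series is reduced to its low-order DWT approximation coefficients, the kind of quantities that are then estimated jointly with the model parameters. The wavelet family, decomposition level, and synthetic series are assumptions, not the study's choices.

      import numpy as np
      import pywt  # PyWavelets

      rng = np.random.default_rng(0)
      rain = rng.gamma(0.3, 2.0, size=256)            # synthetic hourly rainfall
      coeffs = pywt.wavedec(rain, 'db4', level=4)     # [cA4, cD4, cD3, cD2, cD1]
      approx = coeffs[0]                              # retained coefficients
      reduced = [approx] + [np.zeros_like(c) for c in coeffs[1:]]
      rain_hat = pywt.waverec(reduced, 'db4')         # back-transformed series
      print(len(rain), "->", approx.size, "coefficients to estimate")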

  11. The Effort-reward Imbalance work-stress model and daytime salivary cortisol and dehydroepiandrosterone (DHEA) among Japanese women

    Science.gov (United States)

    Ota, Atsuhiko; Mase, Junji; Howteerakul, Nopporn; Rajatanun, Thitipat; Suwannapong, Nawarat; Yatsuya, Hiroshi; Ono, Yuichiro

    2014-01-01

    We examined the influence of work-related effort–reward imbalance and overcommitment to work (OC), as derived from Siegrist's Effort–Reward Imbalance (ERI) model, on the hypothalamic–pituitary–adrenocortical (HPA) axis. We hypothesized that, among healthy workers, both cortisol and dehydroepiandrosterone (DHEA) secretion would be increased by effort–reward imbalance and OC and, as a result, cortisol-to-DHEA ratio (C/D ratio) would not differ by effort–reward imbalance or OC. The subjects were 115 healthy female nursery school teachers. Salivary cortisol, DHEA, and C/D ratio were used as indexes of HPA activity. Mixed-model analyses of variance revealed that neither the interaction between the ERI model indicators (i.e., effort, reward, effort-to-reward ratio, and OC) and the series of measurement times (9:00, 12:00, and 15:00) nor the main effect of the ERI model indicators was significant for daytime salivary cortisol, DHEA, or C/D ratio. Multiple linear regression analyses indicated that none of the ERI model indicators was significantly associated with area under the curve of daytime salivary cortisol, DHEA, or C/D ratio. We found that effort, reward, effort–reward imbalance, and OC had little influence on daytime variation patterns, levels, or amounts of salivary HPA-axis-related hormones. Thus, our hypotheses were not supported. PMID:25228138

  12. GLUE Based Uncertainty Estimation of Urban Drainage Modeling Using Weather Radar Precipitation Estimates

    DEFF Research Database (Denmark)

    Nielsen, Jesper Ellerbæk; Thorndahl, Søren Liedtke; Rasmussen, Michael R.

    2011-01-01

    Distributed weather radar precipitation measurements are used as rainfall input for an urban drainage model, to simulate the runoff from a small catchment in Denmark. It is demonstrated how the Generalized Likelihood Uncertainty Estimation (GLUE) methodology can be implemented and used to estimate
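
    The GLUE methodology itself is compact enough to sketch: sample parameters Monte Carlo style, score each run with an informal likelihood, keep the behavioral runs, and derive prediction bounds. The toy one-parameter runoff model, threshold, and numbers below are placeholders, not the study's urban drainage setup.

      import numpy as np

      def nse(sim, obs):
          """Nash-Sutcliffe efficiency, used here as the informal likelihood."""
          return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

      obs = np.array([1.0, 2.5, 4.0, 2.0, 1.2])         # observed runoff
      base = np.array([0.9, 2.4, 3.8, 2.1, 1.1])        # unit-parameter response
      rng = np.random.default_rng(1)
      behavioral = []
      for _ in range(2000):
          theta = rng.uniform(0.5, 2.0)                 # sampled runoff scaling
          sim = theta * base
          if nse(sim, obs) > 0.5:                       # behavioral threshold (assumed)
              behavioral.append(sim)
      behavioral = np.array(behavioral)
      lower, upper = np.percentile(behavioral, [5, 95], axis=0)  # 90% band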

  13. A multi-timescale estimator for battery state of charge and capacity dual estimation based on an online identified model

    International Nuclear Information System (INIS)

    Wei, Zhongbao; Zhao, Jiyun; Ji, Dongxu; Tseng, King Jet

    2017-01-01

    Highlights: • SOC and capacity are dually estimated with an online adapted battery model. • Model identification and state dual estimation are fully decoupled. • Multiple timescales are used to improve estimation accuracy and stability. • The proposed method is verified with lab-scale experiments. • The proposed method is applicable to different battery chemistries. -- Abstract: Reliable online estimation of state of charge (SOC) and capacity is critically important for the battery management system (BMS). This paper presents a multi-timescale method for dual estimation of SOC and capacity with an online identified battery model. The model parameter estimator and the dual estimator are fully decoupled and executed with different timescales to improve the model accuracy and stability. Specifically, the model parameters are online adapted with the vector-type recursive least squares (VRLS) to address their different variation rates. Based on the online adapted battery model, the Kalman filter (KF)-based SOC estimator and RLS-based capacity estimator are formulated and integrated in the form of dual estimation. Experimental results suggest that the proposed method estimates the model parameters, SOC, and capacity in real time with fast convergence and high accuracy. Experiments on both lithium-ion battery and vanadium redox flow battery (VRB) verify the generality of the proposed method on multiple battery chemistries. The proposed method is also compared with other existing methods on the computational cost to reveal its superiority for practical application.
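
    The model-identification layer described above rests on recursive least squares; a generic single-step RLS update with a forgetting factor is sketched below. The paper's vector-type RLS additionally handles parameters varying at different rates, which this sketch does not reproduce, and the regressor layout is a placeholder.

      import numpy as np

      def rls_update(theta, P, phi, y, lam=0.999):
          """One recursive least squares step with forgetting factor lam:
          theta - parameter estimate, P - covariance, phi - regressor,
          y - new measurement."""
          phi = phi.reshape(-1, 1)
          k = P @ phi / (lam + (phi.T @ P @ phi).item())    # gain vector
          theta = theta + (k * (y - (phi.T @ theta).item())).ravel()
          P = (P - k @ phi.T @ P) / lam
          return theta, P

      # Hypothetical usage with a two-parameter battery-model regression:
      theta, P = np.zeros(2), np.eye(2) * 1e3
      theta, P = rls_update(theta, P, np.array([0.1, 1.2]), y=0.05)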

  14. Performances of some estimators of linear model with ...

    African Journals Online (AJOL)

    The estimators are compared by examining the finite properties of the estimators, namely: the sum of biases, sum of absolute biases, sum of variances and sum of the mean squared error of the estimated parameter of the model. Results show that when the autocorrelation level is small (ρ=0.4), the MLGD estimator is best except when ...

  15. Estimation of unemployment rates using small area estimation model by combining time series and cross-sectional data

    Science.gov (United States)

    Muchlisoh, Siti; Kurnia, Anang; Notodiputro, Khairil Anwar; Mangku, I. Wayan

    2016-02-01

    Labor force surveys conducted over time by the rotating panel design have been carried out in many countries, including Indonesia. The labor force survey in Indonesia is regularly conducted by Statistics Indonesia (Badan Pusat Statistik-BPS) and is known as the National Labor Force Survey (Sakernas). The main purpose of Sakernas is to obtain information about unemployment rates and their changes over time. Sakernas is a quarterly survey designed only for estimating the parameters at the provincial level. The quarterly unemployment rate published by BPS (official statistics) is calculated by cross-sectional methods only, despite the fact that the data are collected under a rotating panel design. The purpose of this study was to estimate the quarterly unemployment rate at the district level using a small area estimation (SAE) model that combines time series and cross-sectional data. The study focused on the application and comparison of the Rao-Yu model and the dynamic model in the context of estimating the unemployment rate based on a rotating panel survey. The goodness of fit of the two models was very similar. Both models produced similar estimates that were better than the direct estimates, but the dynamic model was more capable than the Rao-Yu model of capturing heterogeneity across areas, although this capability was reduced over time.

  16. Hydrological model uncertainty due to spatial evapotranspiration estimation methods

    Science.gov (United States)

    Yu, Xuan; Lamačová, Anna; Duffy, Christopher; Krám, Pavel; Hruška, Jakub

    2016-05-01

    Evapotranspiration (ET) continues to be a difficult process to estimate in seasonal and long-term water balances in catchment models. Approaches to estimate ET typically use vegetation parameters (e.g., leaf area index [LAI], interception capacity) obtained from field observation, remote sensing data, national or global land cover products, and/or simulated by ecosystem models. In this study we attempt to quantify the uncertainty that spatial evapotranspiration estimation introduces into hydrological simulations when the age of the forest is not precisely known. The Penn State Integrated Hydrologic Model (PIHM) was implemented for the Lysina headwater catchment, located at 50°03′N, 12°40′E in the western part of the Czech Republic. The spatial forest patterns were digitized from forest age maps made available by the Czech Forest Administration. Two ET methods were implemented in the catchment model: the Biome-BGC forest growth sub-model (1-way coupled to PIHM) and the fixed-seasonal LAI method. From these two approaches simulation scenarios were developed. We combined the estimated spatial forest age maps and two ET estimation methods to drive PIHM. A set of spatial hydrologic regime and streamflow regime indices was calculated from the modeling results for each method. Intercomparison of the hydrological responses to the spatial vegetation patterns suggested considerable variation in soil moisture and recharge and a small uncertainty in the groundwater table elevation and streamflow. The hydrologic modeling with ET estimated by Biome-BGC generated less uncertainty due to the plant physiology-based method. The implication of this research is that overall hydrologic variability induced by uncertain management practices was reduced by implementing vegetation models in the catchment models.

  17. A General Model for Estimating Macroevolutionary Landscapes.

    Science.gov (United States)

    Boucher, Florian C; Démery, Vincent; Conti, Elena; Harmon, Luke J; Uyeda, Josef

    2018-03-01

    The evolution of quantitative characters over long timescales is often studied using stochastic diffusion models. The current toolbox available to students of macroevolution is however limited to two main models: Brownian motion and the Ornstein-Uhlenbeck process, plus some of their extensions. Here, we present a very general model for inferring the dynamics of quantitative characters evolving under both random diffusion and deterministic forces of any possible shape and strength, which can accommodate interesting evolutionary scenarios like directional trends, disruptive selection, or macroevolutionary landscapes with multiple peaks. This model is based on a general partial differential equation widely used in statistical mechanics: the Fokker-Planck equation, also known in population genetics as the Kolmogorov forward equation. We thus call the model FPK, for Fokker-Planck-Kolmogorov. We first explain how this model can be used to describe macroevolutionary landscapes over which quantitative traits evolve and, more importantly, we detail how it can be fitted to empirical data. Using simulations, we show that the model has good behavior both in terms of discrimination from alternative models and in terms of parameter inference. We provide R code to fit the model to empirical data using either maximum-likelihood or Bayesian estimation, and illustrate the use of this code with two empirical examples of body mass evolution in mammals. FPK should greatly expand the set of macroevolutionary scenarios that can be studied since it opens the way to estimating macroevolutionary landscapes of any conceivable shape. [Adaptation; bounds; diffusion; FPK model; macroevolution; maximum-likelihood estimation; MCMC methods; phylogenetic comparative data; selection.].
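
    A small numerical sketch in the spirit of the FPK model described above: on a bounded trait grid, the stationary trait density is proportional to exp(V(x)) under the model's parametrization of the landscape. The quartic two-peak potential and all coefficients below are illustrative assumptions, not a fitted landscape.

      import numpy as np

      x = np.linspace(-2.0, 2.0, 401)              # bounded trait space
      a, b, c = -1.0, 2.0, 0.0                     # assumed landscape coefficients
      V = a * x**4 + b * x**2 + c * x              # macroevolutionary potential
      density = np.exp(V)
      density /= density.sum() * (x[1] - x[0])     # normalize to integrate to 1
      idx = np.where((density[1:-1] > density[:-2]) &
                     (density[1:-1] > density[2:]))[0] + 1
      print("adaptive peaks near x =", x[idx])     # two peaks, roughly +/- 1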

  18. Hindsight Bias Doesn't Always Come Easy: Causal Models, Cognitive Effort, and Creeping Determinism

    Science.gov (United States)

    Nestler, Steffen; Blank, Hartmut; von Collani, Gernot

    2008-01-01

    Creeping determinism, a form of hindsight bias, refers to people's hindsight perceptions of events as being determined or inevitable. This article proposes, on the basis of a causal-model theory of creeping determinism, that the underlying processes are effortful, and hence creeping determinism should disappear when individuals lack the cognitive…

  19. Estimation of rates-across-sites distributions in phylogenetic substitution models.

    Science.gov (United States)

    Susko, Edward; Field, Chris; Blouin, Christian; Roger, Andrew J

    2003-10-01

    Previous work has shown that it is often essential to account for the variation in rates at different sites in phylogenetic models in order to avoid phylogenetic artifacts such as long branch attraction. In most current models, the gamma distribution is used for the rates-across-sites distributions and is implemented as an equal-probability discrete gamma. In this article, we introduce discrete distribution estimates with large numbers of equally spaced rate categories allowing us to investigate the appropriateness of the gamma model. With large numbers of rate categories, these discrete estimates are flexible enough to approximate the shape of almost any distribution. Likelihood ratio statistical tests and a nonparametric bootstrap confidence-bound estimation procedure based on the discrete estimates are presented that can be used to test the fit of a parametric family. We applied the methodology to several different protein data sets, and found that although the gamma model often provides a good parametric model for this type of data, rate estimates from an equal-probability discrete gamma model with a small number of categories will tend to underestimate the largest rates. In cases when the gamma model assumption is in doubt, rate estimates coming from the discrete rate distribution estimate with a large number of rate categories provide a robust alternative to gamma estimates. An alternative implementation of the gamma distribution is proposed that, for equal numbers of rate categories, is computationally more efficient during optimization than the standard gamma implementation and can provide more accurate estimates of site rates.
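
    For concreteness, the equal-probability discrete gamma implementation discussed above can be sketched as follows, taking the median of each of k quantile bins and rescaling so the mean rate is one. This is a common implementation choice, not necessarily the authors' exact code.

      from scipy.stats import gamma

      def discrete_gamma_rates(alpha, k):
          """Equal-probability discrete gamma: each of the k category rates
          is the median of its quantile bin, rescaled to mean rate 1."""
          quantiles = [(2 * i + 1) / (2.0 * k) for i in range(k)]
          rates = gamma.ppf(quantiles, a=alpha, scale=1.0 / alpha)
          return rates * k / rates.sum()

      print(discrete_gamma_rates(0.5, 4))   # note how large the top rate is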

  20. Work more, then feel more: the influence of effort on affective predictions.

    Directory of Open Access Journals (Sweden)

    Gabriela M Jiga-Boy

    Full Text Available Two studies examined how effort invested in a task shapes the affective predictions related to potential success in that task, and the mechanism underlying this relationship. In Study 1, PhD students awaiting an editorial decision about a submitted manuscript estimated the effort they had invested in preparing that manuscript for submission and how happy they would feel if it were accepted. Subjective estimates of effort were positively related to participants' anticipated happiness, an effect mediated by the higher perceived quality of one's work. In other words, the more effort one thought one had invested, the happier one expected to feel if the manuscript were accepted, because one expected it to be of higher quality. We replicated this effect and its underlying mediation in Study 2, this time using an experimental manipulation of effort in the context of creating an advertising slogan. Study 2 further showed that participants mistakenly thought their extra efforts invested in the task had improved the quality of their work, while independent judges had found no objective differences in quality between the outcomes of the high- and low-effort groups. We discuss the implications of the relationship between effort and anticipated emotions and the conditions under which such a relationship might be functional.

  1. ADMIT: a toolbox for guaranteed model invalidation, estimation and qualitative-quantitative modeling.

    Science.gov (United States)

    Streif, Stefan; Savchenko, Anton; Rumschinski, Philipp; Borchers, Steffen; Findeisen, Rolf

    2012-05-01

    Often competing hypotheses for biochemical networks exist in the form of different mathematical models with unknown parameters. Considering available experimental data, it is then desired to reject model hypotheses that are inconsistent with the data, or to estimate the unknown parameters. However, these tasks are complicated because experimental data are typically sparse, uncertain, and frequently only available in the form of qualitative if-then observations. ADMIT (Analysis, Design and Model Invalidation Toolbox) is a MatLab(TM)-based tool for guaranteed model invalidation, state and parameter estimation. The toolbox allows the integration of quantitative measurement data, a priori knowledge of parameters and states, and qualitative information on the dynamic or steady-state behavior. A constraint satisfaction problem is automatically generated and algorithms are implemented for solving the desired estimation, invalidation or analysis tasks. The implemented methods build on convex relaxation and optimization and therefore provide guaranteed estimation results and certificates for invalidity. ADMIT, tutorials and illustrative examples are available free of charge for non-commercial use at http://ifatwww.et.uni-magdeburg.de/syst/ADMIT/

  2. Using satellite-based rainfall estimates for streamflow modelling: Bagmati Basin

    Science.gov (United States)

    Shrestha, M.S.; Artan, Guleid A.; Bajracharya, S.R.; Sharma, R. R.

    2008-01-01

    In this study, we have described a hydrologic modelling system that uses satellite-based rainfall estimates and weather forecast data for the Bagmati River Basin of Nepal. The hydrologic model described is the US Geological Survey (USGS) Geospatial Stream Flow Model (GeoSFM). The GeoSFM is a spatially semidistributed, physically based hydrologic model. We have used the GeoSFM to estimate the streamflow of the Bagmati Basin at Pandhera Dovan hydrometric station. To determine the hydrologic connectivity, we have used the USGS Hydro1k DEM dataset. The model was forced by daily estimates of rainfall and evapotranspiration derived from weather model data. The rainfall estimates used for the modelling are those produced by the National Oceanic and Atmospheric Administration Climate Prediction Center and observed at ground rain gauge stations. The model parameters were estimated from globally available soil and land cover datasets – the Digital Soil Map of the World by FAO and the USGS Global Land Cover dataset. The model predicted the daily streamflow at Pandhera Dovan gauging station. The comparison of the simulated and observed flows at Pandhera Dovan showed that the GeoSFM model performed well in simulating the flows of the Bagmati Basin.

  3. Estimation and uncertainty of reversible Markov models.

    Science.gov (United States)

    Trendelkamp-Schroer, Benjamin; Wu, Hao; Paul, Fabian; Noé, Frank

    2015-11-07

    Reversibility is a key concept in Markov models and master-equation models of molecular kinetics. The analysis and interpretation of the transition matrix encoding the kinetic properties of the model rely heavily on the reversibility property. The estimation of a reversible transition matrix from simulation data is, therefore, crucial to the successful application of the previously developed theory. In this work, we discuss methods for the maximum likelihood estimation of transition matrices from finite simulation data and present a new algorithm for the estimation when reversibility with respect to a given stationary vector is desired. We also develop new methods for the Bayesian posterior inference of reversible transition matrices with and without a given stationary vector, taking into account the need for a suitable prior distribution preserving the meta-stable features of the observed process during posterior inference. All algorithms here are implemented in the PyEMMA software--http://pyemma.org--as of version 2.0.
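
    A crude way to obtain a reversible transition matrix from count data is to symmetrize the counts and row-normalize. This guarantees detailed balance but is not the constrained maximum likelihood or Bayesian machinery developed in the paper (PyEMMA implements those); the count matrix below is invented.

      import numpy as np

      def reversible_transition_matrix(C):
          """Simple reversible estimator from a transition count matrix C:
          symmetrize the counts, then row-normalize. Detailed balance holds
          by construction, but this is not the constrained MLE."""
          C_sym = C + C.T
          T = C_sym / C_sym.sum(axis=1, keepdims=True)
          pi = C_sym.sum(axis=1) / C_sym.sum()        # stationary distribution
          return T, pi

      C = np.array([[90, 10, 0], [8, 80, 12], [0, 15, 85]], dtype=float)
      T, pi = reversible_transition_matrix(C)
      assert np.allclose(pi[:, None] * T, (pi[:, None] * T).T)   # detailed balance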

  4. Employee effort - reward balance and first-level manager transformational leadership within elderly care.

    Science.gov (United States)

    Keisu, Britt-Inger; Öhman, Ann; Enberg, Birgit

    2018-03-01

    Negative aspects, staff dissatisfaction and problems related to internal organisational factors of working in elderly care are well-known and documented. Much less is known about positive aspects of working in elderly care, and therefore, this study focuses on such positive factors in Swedish elderly care. We combined two theoretical models, the effort-reward imbalance model and the Transformational Leadership Style model. The aim was to estimate the potential associations between employee-perceived transformational leadership style of their managers, and employees' ratings of effort and reward within elderly care work. The article is based on questionnaires distributed at on-site visits to registered nurses, occupational therapists, physiotherapists (high-level education) and assistant nurses (low-level education) in nine Swedish elderly care facilities. In order to grasp the positive factors of work in elderly care, we focused on balance at work, rather than imbalance. We found a significant association between employees' effort-reward balance at work and a transformational leadership style among managers. An association was also found between employees' level of education and their assessments of the first-level managers. We conclude that the first-level manager is an important actor for achieving a good workplace within elderly care, since she/he influences employees' psychosocial working environment. We also conclude that there are differences and inequalities, in terms of well-being, effort and reward at the workplace, between those with academic training and those without, in that the former group more often rated their first-level manager as performing a transformational leadership style, which in turn is beneficial for their psychosocial work environment. Consequently, this (re)produces inequalities in terms of well-being, effort and reward among the employees at the workplace. © 2017 Nordic College of Caring Science.

  5. The problem of multicollinearity in horizontal solar radiation estimation models and a new model for Turkey

    International Nuclear Information System (INIS)

    Demirhan, Haydar

    2014-01-01

    Highlights: • Impacts of multicollinearity on solar radiation estimation models are discussed. • Accuracy of existing empirical models for Turkey is evaluated. • A new non-linear model for the estimation of average daily horizontal global solar radiation is proposed. • Estimation and prediction performance of the proposed and existing models are compared. - Abstract: Due to the considerable decrease in energy resources and increasing energy demand, solar energy is an appealing field of investment and research. There are various modelling strategies and particular models for the estimation of the amount of solar radiation reaching at a particular point over the Earth. In this article, global solar radiation estimation models are taken into account. To emphasize severity of multicollinearity problem in solar radiation estimation models, some of the models developed for Turkey are revisited. It is observed that these models have been identified as accurate under certain multicollinearity structures, and when the multicollinearity is eliminated, the accuracy of these models is controversial. Thus, a reliable model that does not suffer from multicollinearity and gives precise estimates of global solar radiation for the whole region of Turkey is necessary. A new nonlinear model for the estimation of average daily horizontal solar radiation is proposed making use of the genetic programming technique. There is no multicollinearity problem in the new model, and its estimation accuracy is better than the revisited models in terms of numerous statistical performance measures. According to the proposed model, temperature, precipitation, altitude, longitude, and monthly average daily extraterrestrial horizontal solar radiation have significant effect on the average daily global horizontal solar radiation. Relative humidity and soil temperature are not included in the model due to their high correlation with precipitation and temperature, respectively. While altitude has
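
    Multicollinearity of the kind described above is commonly screened with variance inflation factors before model fitting. The sketch below uses synthetic data that mimics the abstract's humidity-precipitation correlation; the VIF threshold of about 10 is a rule of thumb, not a value from the paper.

      import numpy as np
      import statsmodels.api as sm
      from statsmodels.stats.outliers_influence import variance_inflation_factor

      rng = np.random.default_rng(0)
      n = 200
      temp = rng.normal(20.0, 5.0, n)
      precip = rng.normal(50.0, 10.0, n)
      humidity = 0.9 * precip + rng.normal(0.0, 2.0, n)   # strongly collinear
      X = sm.add_constant(np.column_stack([temp, precip, humidity]))
      for i, name in enumerate(["const", "temp", "precip", "humidity"]):
          print(name, variance_inflation_factor(X, i))    # VIF >> 10 flags trouble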

  6. Effects of fishing effort allocation scenarios on energy efficiency and profitability: an individual-based model applied to Danish fisheries

    DEFF Research Database (Denmark)

    Bastardie, Francois; Nielsen, J. Rasmus; Andersen, Bo Sølgaard

    2010-01-01

    to the harbour, and (C) allocating effort towards optimising the expected area-specific profit per trip. The model is informed by data from each Danish fishing vessel >15 m after coupling its high resolution spatial and temporal effort data (VMS) with data from logbook landing declarations, sales slips, vessel...... effort allocation has actually been sub-optimal because increased profits from decreased fuel consumption and larger landings could have been obtained by applying a different spatial effort allocation. Based on recent advances in VMS and logbooks data analyses, this paper contributes to improve...

  7. Efficient estimation of an additive quantile regression model

    NARCIS (Netherlands)

    Cheng, Y.; de Gooijer, J.G.; Zerom, D.

    2009-01-01

    In this paper two kernel-based nonparametric estimators are proposed for estimating the components of an additive quantile regression model. The first estimator is a computationally convenient approach which can be viewed as a viable alternative to the method of De Gooijer and Zerom (2003). By

  8. Efficient estimation of an additive quantile regression model

    NARCIS (Netherlands)

    Cheng, Y.; de Gooijer, J.G.; Zerom, D.

    2010-01-01

    In this paper two kernel-based nonparametric estimators are proposed for estimating the components of an additive quantile regression model. The first estimator is a computationally convenient approach which can be viewed as a viable alternative to the method of De Gooijer and Zerom (2003). By

  9. [Using log-binomial model for estimating the prevalence ratio].

    Science.gov (United States)

    Ye, Rong; Gao, Yan-hui; Yang, Yi; Chen, Yue

    2010-05-01

    To estimate prevalence ratios using a log-binomial model with or without continuous covariates. Prevalence ratios for individuals' attitude towards smoking-ban legislation associated with smoking status, estimated by using a log-binomial model, were compared with odds ratios estimated by a logistic regression model. In the log-binomial modeling, the maximum likelihood method was used when there were no continuous covariates, and the COPY approach was used if the model did not converge, for example due to the existence of continuous covariates. We examined the association between individuals' attitude towards smoking-ban legislation and smoking status in men and women. Prevalence ratio and odds ratio estimation provided similar results for the association in women, since smoking was not common. In men, however, the odds ratio estimates were markedly larger than the prevalence ratios due to a higher prevalence of the outcome. The log-binomial model did not converge when age was included as a continuous covariate, and the COPY method was used to deal with this situation. All analyses were performed in SAS. The prevalence ratio seemed to measure the association better than the odds ratio when the prevalence is high. SAS programs were provided to calculate the prevalence ratios with or without continuous covariates in the log-binomial regression analysis.
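
    The abstract's SAS programs are not reproduced here, but the core model is easy to sketch in Python. A toy illustration with simulated data, assuming a recent statsmodels release in which the log link class is spelled links.Log; the exponentiated slope is the prevalence ratio.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(1)
        n = 2000
        smoker = rng.integers(0, 2, n)
        p = 0.40 * 1.3 ** smoker            # true prevalence 40%, PR of 1.3 for smokers
        y = rng.binomial(1, p)

        X = sm.add_constant(smoker)
        glm = sm.GLM(y, X, family=sm.families.Binomial(link=sm.families.links.Log()))
        res = glm.fit()
        print(np.exp(res.params[1]))        # exp(slope) estimates the prevalence ratio (~1.3)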

  10. Parameter estimation of component reliability models in PSA model of Krsko NPP

    International Nuclear Information System (INIS)

    Jordan Cizelj, R.; Vrbanic, I.

    2001-01-01

    In the paper, the uncertainty analysis of component reliability models for independent failures is shown. The present approach to parameter estimation of component reliability models in NPP Krsko is presented. Mathematical approaches for different types of uncertainty analyses are introduced and used in accordance with some predisposed requirements. Results of the uncertainty analyses are shown in an example for time-related components. The most appropriate uncertainty analysis proved to be Bayesian estimation with numerical estimation of the posterior, which can be approximated with an appropriate probability distribution, in this paper the lognormal distribution. (author)
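
    As an illustration only (not the Krsko PSA implementation), the kind of Bayesian estimation described above can be sketched with a conjugate gamma prior on a failure rate and a moment-matched lognormal approximation of the posterior; all numbers below are invented.

        import numpy as np

        a0, b0 = 0.5, 1.0e3       # gamma prior on failure rate (shape, rate in 1/h), assumed
        k, T = 3, 2.6e4           # observed failures and cumulative operating time (h), invented
        a1, b1 = a0 + k, b0 + T   # conjugate gamma posterior for a Poisson failure process

        mean, var = a1 / b1, a1 / b1**2
        sigma2 = np.log(1.0 + var / mean**2)   # moment-matched lognormal approximation
        mu = np.log(mean) - sigma2 / 2.0
        print(f"posterior mean = {mean:.3e} 1/h, lognormal(mu={mu:.2f}, sigma={np.sqrt(sigma2):.2f})")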

  11. Estimation of p,p'-DDT degradation in soil by modeling and constraining hydrological and biogeochemical controls.

    Science.gov (United States)

    Sanka, Ondrej; Kalina, Jiri; Lin, Yan; Deutscher, Jan; Futter, Martyn; Butterfield, Dan; Melymuk, Lisa; Brabec, Karel; Nizzetto, Luca

    2018-08-01

    Despite not being used for decades in most countries, DDT remains ubiquitous in soils due to its persistence and intense past usage. Because of this it is still a pollutant of high global concern. Assessing long-term dissipation of DDT from this reservoir is fundamental to understand future environmental and human exposure. Despite a large research effort, key properties controlling fate in soil (in particular, the degradation half-life, τ_soil) are far from being fully quantified. This paper describes a case study in a large central European catchment where hundreds of measurements of p,p'-DDT concentrations in air, soil, river water and sediment are available for the last two decades. The goal was to deliver an integrated estimation of τ_soil by constraining a state-of-the-art hydrobiogeochemical-multimedia fate model of the catchment against the full body of empirical data available for this area. The INCA-Contaminants model was used for this scope. Good predictive performance against an (external) dataset of water and sediment concentrations was achieved with partitioning properties taken from the literature and τ_soil estimates obtained from forcing the model against empirical historical data of p,p'-DDT in the catchment multicompartments. This approach allowed estimation of p,p'-DDT degradation in soil after taking adequate consideration of losses due to runoff and volatilization. Estimated τ_soil ranged over 3000-3800 days. Degradation was the most important loss process, accounting on a yearly basis for more than 90% of the total dissipation. The total dissipation flux from the catchment soils was one order of magnitude higher than the total current atmospheric input estimated from atmospheric concentrations, suggesting that the bulk of p,p'-DDT currently being remobilized or lost is essentially that accumulated over two decades ago. Copyright © 2018 Elsevier Ltd. All rights reserved.

  12. A harmonized calculation model for transforming EU bottom-up energy efficiency indicators into empirical estimates of policy impacts

    International Nuclear Information System (INIS)

    Horowitz, Marvin J.; Bertoldi, Paolo

    2015-01-01

    This study is an impact analysis of European Union (EU) energy efficiency policy that employs both top-down energy consumption data and bottom-up energy efficiency statistics or indicators. As such, it may be considered a contribution to the effort called for in the EU's 2006 Energy Services Directive (ESD) to develop a harmonized calculation model. Although this study does not estimate the realized savings from individual policy measures, it does provide estimates of realized energy savings for energy efficiency policy measures in aggregate. Using fixed effects panel models, the annual cumulative savings in 2011 of combined household and manufacturing sector electricity and natural gas usage attributed to EU energy efficiency policies since 2000 are estimated to be 1136 PJ; the savings attributed to energy efficiency policies since 2006 are estimated to be 807 PJ, or the equivalent of 5.6% of 2011 EU energy consumption. As well as its contribution to energy efficiency policy analysis, this study adds to the development of methods that can improve the quality of information provided by standardized energy efficiency and sustainable resource indexes. - Highlights: • Impact analysis of European Union energy efficiency policy. • Harmonization of top-down energy consumption and bottom-up energy efficiency indicators. • Fixed effects models for Member States for household and manufacturing sectors and combined electricity and natural gas usage. • EU energy efficiency policies since 2000 are estimated to have saved 1136 Petajoules. • Energy savings attributed to energy efficiency policies since 2006 are 5.6 percent of 2011 combined electricity and natural gas usage.

  13. Maximum likelihood estimation of finite mixture model for economic data

    Science.gov (United States)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir

    2014-06-01

    A finite mixture model is a mixture model with finite dimension. These models provide a natural representation of heterogeneity in a finite number of latent classes. Finite mixture models are also known as latent class models or unsupervised learning models. Recently, maximum likelihood estimation of finite mixture models has drawn great attention from statisticians, mainly because maximum likelihood estimation is a powerful statistical method which provides consistent estimates as the sample size increases to infinity. Thus, maximum likelihood estimation is applied in the present paper to fit a finite mixture model in order to explore the relationship between nonlinear economic data. A two-component normal mixture model is fitted by maximum likelihood estimation in order to investigate the relationship between stock market prices and rubber prices for the sampled countries. The results show a negative relationship between the rubber price and the stock market price for Malaysia, Thailand, the Philippines and Indonesia.
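
    A minimal EM sketch of maximum likelihood fitting for a two-component normal mixture, the model class used above; the data here are simulated rather than the stock and rubber price series.

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(2)
        x = np.concatenate([rng.normal(-1, 0.5, 300), rng.normal(2, 1.0, 700)])

        w, mu1, mu2, s1, s2 = 0.5, x.min(), x.max(), x.std(), x.std()
        for _ in range(200):
            # E-step: responsibility of component 1 for each observation
            p1 = w * norm.pdf(x, mu1, s1)
            p2 = (1 - w) * norm.pdf(x, mu2, s2)
            r = p1 / (p1 + p2)
            # M-step: weighted maximum likelihood updates
            w = r.mean()
            mu1, mu2 = np.average(x, weights=r), np.average(x, weights=1 - r)
            s1 = np.sqrt(np.average((x - mu1) ** 2, weights=r))
            s2 = np.sqrt(np.average((x - mu2) ** 2, weights=1 - r))
        print(w, mu1, mu2, s1, s2)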

  14. Benefits of multidisciplinary collaboration for earthquake casualty estimation models: recent case studies

    Science.gov (United States)

    So, E.

    2010-12-01

    Earthquake casualty loss estimation, which depends primarily on building-specific casualty rates, has long suffered from a lack of cross-disciplinary collaboration in post-earthquake data gathering. An increase in our understanding of what contributes to casualties in earthquakes requires coordinated data-gathering efforts amongst disciplines; these are essential for improved global casualty estimation models. It is evident from examining past casualty loss models and reviewing field data collected from recent events that generalized casualty rates cannot be applied globally for different building types, even within individual countries. For a particular structure type, regional and topographic building design effects, combined with variable material and workmanship quality, all contribute to this multi-variant outcome. In addition, social factors affect building-specific casualty rates, including social status and education levels, and human behaviors in general, in that they modify egress and survivability rates. Without considering complex physical pathways, loss models purely based on historic casualty data, or even worse, rates derived from other countries, will be of very limited value. What's more, as the world's population, housing stock, and living and cultural environments change, methods of loss modeling must accommodate these variables, especially when considering casualties. To truly take advantage of observed earthquake losses, not only do damage surveys need better coordination of international and national reconnaissance teams, but these teams must integrate different areas of expertise including engineering, public health and medicine. Research is needed to find methods to achieve consistent and practical ways of collecting and modeling casualties in earthquakes. International collaboration will also be necessary to transfer such expertise and resources to the communities in the cities which most need it. Coupling the theories and findings from

  15. A shell-neutral modeling approach yields sustainable oyster harvest estimates: a retrospective analysis of the Louisiana state primary seed grounds

    Science.gov (United States)

    Soniat, Thomas M.; Klinck, John M.; Powell, Eric N.; Cooper, Nathan; Abdelguerfi, Mahdi; Hofmann, Eileen E.; Dahal, Janak; Tu, Shengru; Finigan, John; Eberline, Benjamin S.; La Peyre, Jerome F.; LaPeyre, Megan K.; Qaddoura, Fareed

    2012-01-01

    A numerical model is presented that defines a sustainability criterion as no net loss of shell, and calculates a sustainable harvest of seed (<75 mm) and sack or market oysters (≥75 mm). Stock assessments of the Primary State Seed Grounds conducted east of the Mississippi from 2009 to 2011 show a general trend toward decreasing abundance of sack and seed oysters. Retrospective simulations provide estimates of annual sustainable harvests. Comparisons of simulated sustainable harvests with actual harvests show a trend toward unsustainable harvests toward the end of the time series. Stock assessments combined with shell-neutral models can be used to estimate sustainable harvest and manage cultch through shell planting when actual harvest exceeds sustainable harvest. For exclusive restoration efforts (no fishing allowed), the model provides a metric for restoration success, namely shell accretion. Oyster fisheries that remove shell versus reef restorations that promote shell accretion, although divergent in their goals, are convergent in their management; both require vigilant attention to shell budgets.

  16. Fundamental Frequency and Model Order Estimation Using Spatial Filtering

    DEFF Research Database (Denmark)

    Karimian-Azari, Sam; Jensen, Jesper Rindom; Christensen, Mads Græsbøll

    2014-01-01

    In signal processing applications of harmonic-structured signals, estimates of the fundamental frequency and number of harmonics are often necessary. In real scenarios, a desired signal is contaminated by different levels of noise and interferers, which complicate the estimation of the signal parameters. In this paper, we present an estimation procedure for harmonic-structured signals in situations with strong interference using spatial filtering, or beamforming. We jointly estimate the fundamental frequency and the constrained model order through the output of the beamformers. Besides that, we extend this procedure to account for inharmonicity using unconstrained model order estimation. The simulations show that beamforming improves the performance of the joint estimates of fundamental frequency and the number of harmonics in low signal to interference (SIR) levels, and an experiment ...

  17. Maximum profile likelihood estimation of differential equation parameters through model based smoothing state estimates.

    Science.gov (United States)

    Campbell, D A; Chkrebtii, O

    2013-12-01

    Statistical inference for biochemical models often faces a variety of characteristic challenges. In this paper we examine state and parameter estimation for the JAK-STAT intracellular signalling mechanism, which exemplifies the implementation intricacies common in many biochemical inference problems. We introduce an extension to the Generalized Smoothing approach for estimating delay differential equation models, addressing selection of complexity parameters, choice of the basis system, and appropriate optimization strategies. Motivated by the JAK-STAT system, we further extend the generalized smoothing approach to consider a nonlinear observation process with additional unknown parameters, and highlight how the approach handles unobserved states and unevenly spaced observations. The methodology developed is generally applicable to problems of estimation for differential equation models with delays, unobserved states, nonlinear observation processes, and partially observed histories. Crown Copyright © 2013. Published by Elsevier Inc. All rights reserved.

  18. Remaining lifetime modeling using State-of-Health estimation

    Science.gov (United States)

    Beganovic, Nejra; Söffker, Dirk

    2017-08-01

    Technical systems and system components undergo gradual degradation over time. Continuous degradation occurring in a system is reflected in decreased reliability and unavoidably leads to system failure. Therefore, continuous evaluation of State-of-Health (SoH) is indispensable to provide at least the predefined lifetime of the system specified by the manufacturer or, even better, to extend it. However, a precondition for lifetime extension is accurate estimation of SoH as well as estimation and prediction of the Remaining Useful Lifetime (RUL). For this purpose, lifetime models describing the relation between system/component degradation and consumed lifetime have to be established. In this contribution, modeling and selection of suitable lifetime models from a database based on current SoH conditions are discussed. The main contribution of this paper is the development of new modeling strategies capable of describing complex relations between measurable system variables, related system degradation, and RUL. Two approaches with accompanying advantages and disadvantages are introduced and compared. Both approaches are capable of modeling stochastic aging processes of a system by simultaneous adaptation of RUL models to the current SoH. The first approach requires a priori knowledge about aging processes in the system and accurate estimation of SoH. The estimation of SoH here is conditioned on tracking the actual damage accumulated in the system, so that particular model parameters are defined according to a priori known assumptions about the system's aging. Prediction accuracy in this case is highly dependent on accurate estimation of SoH, but the model includes a high number of degrees of freedom. The second approach in this contribution does not require a priori knowledge about the system's aging, as particular model parameters are defined in accordance with a multi-objective optimization procedure. Prediction accuracy of this model does not highly depend on the estimated SoH. This model

  19. Performances Of Estimators Of Linear Models With Autocorrelated ...

    African Journals Online (AJOL)

    The performances of five estimators of linear models with autocorrelated error terms are compared when the independent variable is autoregressive. The results reveal that the properties of the estimators when the sample size is finite are quite similar to the properties of the estimators when the sample size is infinite although ...

  20. Federal Regulations: Efforts to Estimate Total Costs and Benefits of Rules

    Science.gov (United States)

    2004-04-07

    the Chamber of Commerce, academicians, the media, and others, and is sometimes cited with a high degree of certainty." For example, some articles...House of Representatives, Feb. 25, 2004; and testimony of William P. Kovacs, Vice President, U.S. Chamber of Commerce, before the Subcommittee on Energy...estimated the annual cost to employers of the Family and Medical Leave Act at $825 million, but that the Chamber of Commerce estimated the cost at between $3

  1. Censored rainfall modelling for estimation of fine-scale extremes

    Science.gov (United States)

    Cross, David; Onof, Christian; Winter, Hugo; Bernardara, Pietro

    2018-01-01

    Reliable estimation of rainfall extremes is essential for drainage system design, flood mitigation, and risk quantification. However, traditional techniques lack physical realism and extrapolation can be highly uncertain. In this study, we improve the physical basis for short-duration extreme rainfall estimation by simulating the heavy portion of the rainfall record mechanistically using the Bartlett-Lewis rectangular pulse (BLRP) model. Mechanistic rainfall models have had a tendency to underestimate rainfall extremes at fine temporal scales. Despite this, the simple process representation of rectangular pulse models is appealing in the context of extreme rainfall estimation because it emulates the known phenomenology of rainfall generation. A censored approach to Bartlett-Lewis model calibration is proposed and performed for single-site rainfall from two gauges in the UK and Germany. Extreme rainfall estimation is performed for each gauge at the 5, 15, and 60 min resolutions, and considerations for censor selection are discussed.
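
    The Bartlett-Lewis model itself involves clustered cell processes; the heavily simplified sketch below only illustrates the rectangular-pulse idea (a single Poisson process of rain cells with exponential durations and intensities, parameters invented), not the paper's censored calibration.

        import numpy as np

        rng = np.random.default_rng(3)
        T = 24 * 60.0                                    # one day in minutes
        starts = np.cumsum(rng.exponential(180.0, 50))   # Poisson cell arrival times
        starts = starts[starts < T]
        durations = rng.exponential(30.0, starts.size)   # cell durations (min)
        intensities = rng.exponential(0.1, starts.size)  # cell intensities (mm/min)

        minutes = np.arange(int(T))
        rain = np.zeros(minutes.size)
        for s, d, a in zip(starts, durations, intensities):
            rain[(minutes >= s) & (minutes < s + d)] += a   # overlapping pulses add
        print("daily total (mm):", rain.sum())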

  2. ADMIT: a toolbox for guaranteed model invalidation, estimation and qualitative–quantitative modeling

    Science.gov (United States)

    Streif, Stefan; Savchenko, Anton; Rumschinski, Philipp; Borchers, Steffen; Findeisen, Rolf

    2012-01-01

    Summary: Often competing hypotheses for biochemical networks exist in the form of different mathematical models with unknown parameters. Considering available experimental data, it is then desired to reject model hypotheses that are inconsistent with the data, or to estimate the unknown parameters. However, these tasks are complicated because experimental data are typically sparse, uncertain, and are frequently only available in form of qualitative if–then observations. ADMIT (Analysis, Design and Model Invalidation Toolbox) is a MATLAB-based tool for guaranteed model invalidation, state and parameter estimation. The toolbox allows the integration of quantitative measurement data, a priori knowledge of parameters and states, and qualitative information on the dynamic or steady-state behavior. A constraint satisfaction problem is automatically generated and algorithms are implemented for solving the desired estimation, invalidation or analysis tasks. The implemented methods build on convex relaxation and optimization and therefore provide guaranteed estimation results and certificates for invalidity. Availability: ADMIT, tutorials and illustrative examples are available free of charge for non-commercial use at http://ifatwww.et.uni-magdeburg.de/syst/ADMIT/ Contact: stefan.streif@ovgu.de PMID:22451270

  3. A new geometric-based model to accurately estimate arm and leg inertial estimates.

    Science.gov (United States)

    Wicke, Jason; Dumas, Geneviève A

    2014-06-03

    Segment estimates of mass, center of mass and moment of inertia are required input parameters to analyze the forces and moments acting across the joints. The objectives of this study were to propose a new geometric model for limb segments, to evaluate it against criterion values obtained from DXA, and to compare its performance to five other popular models. Twenty-five female and 24 male college students participated in the study. For the criterion measures, the participants underwent a whole body DXA scan, and estimates for segment mass, center of mass location, and moment of inertia (frontal plane) were directly computed from the DXA mass units. For the new model, the volume was determined from two standing frontal and sagittal photographs. Each segment was modeled as a stack of slices, the sections of which were ellipses if they were not adjoining another segment and sectioned ellipses if they were adjoining another segment (e.g. upper arm and trunk). The lengths of the axes of the ellipses were obtained from the photographs. In addition, a sex-specific, non-uniform density function was developed for each segment. A series of anthropometric measurements were also taken by directly following the definitions provided for the different body segment models tested, and the same parameters were determined for each model. Comparison of models showed that estimates from the new model were consistently closer to the DXA criterion than those from the other models, with an error of less than 5% for mass and moment of inertia and less than about 6% for center of mass location. Copyright © 2014. Published by Elsevier Ltd.
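
    The stacked-elliptical-slice idea can be illustrated in a few lines of code; the slice axes, segment length and density below are invented stand-ins for the photograph-derived measurements, and a uniform density is used for simplicity where the paper uses sex-specific non-uniform density functions.

        import numpy as np

        n, length = 40, 0.30                      # slices and segment length (m), assumed
        dz = length / n
        a = np.linspace(0.045, 0.030, n)          # frontal semi-axes per slice (m)
        b = np.linspace(0.040, 0.025, n)          # sagittal semi-axes per slice (m)

        volume = np.sum(np.pi * a * b * dz)       # sum of elliptical slice volumes
        z = (np.arange(n) + 0.5) * dz
        com = np.sum(z * a * b) / np.sum(a * b)   # center of mass along the segment axis
        mass = 1050.0 * volume                    # uniform density (kg/m^3), assumed
        print(f"volume = {volume*1e3:.2f} L, mass = {mass:.2f} kg, com = {com:.3f} m")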

  4. Genomic breeding value estimation using nonparametric additive regression models

    Directory of Open Access Journals (Sweden)

    Solberg Trygve

    2009-01-01

    Full Text Available Abstract Genomic selection refers to the use of genomewide dense markers for breeding value estimation and subsequently for selection. The main challenge of genomic breeding value estimation is the estimation of many effects from a limited number of observations. Bayesian methods have been proposed to successfully cope with these challenges. As an alternative class of models, non- and semiparametric models were recently introduced. The present study investigated the ability of nonparametric additive regression models to predict genomic breeding values. The genotypes were modelled for each marker or pair of flanking markers (i.e. the predictors) separately. The nonparametric functions for the predictors were estimated simultaneously using additive model theory, applying a binomial kernel. The optimal degree of smoothing was determined by bootstrapping. A mutation-drift-balance simulation was carried out. The breeding values of the last generation (genotyped) were predicted using data from the next-to-last generation (genotyped and phenotyped). The results show moderate to high accuracies of the predicted breeding values. A determination of predictor-specific degrees of smoothing increased the accuracy.

  5. Impact of transport model errors on the global and regional methane emissions estimated by inverse modelling

    Directory of Open Access Journals (Sweden)

    R. Locatelli

    2013-10-01

    Full Text Available A modelling experiment has been conceived to assess the impact of transport model errors on methane emissions estimated in an atmospheric inversion system. Synthetic methane observations, obtained from 10 different model outputs from the international TransCom-CH4 model inter-comparison exercise, are combined with a prior scenario of methane emissions and sinks, and integrated into the three-component PYVAR-LMDZ-SACS (PYthon VARiational-Laboratoire de Météorologie Dynamique model with Zooming capability-Simplified Atmospheric Chemistry System) inversion system to produce 10 different methane emission estimates at the global scale for the year 2005. The same methane sinks, emissions and initial conditions have been applied to produce the 10 synthetic observation datasets. The same inversion set-up (statistical errors, prior emissions, inverse procedure) is then applied to derive flux estimates by inverse modelling. Consequently, only differences in the modelling of atmospheric transport may cause differences in the estimated fluxes. In our framework, we show that transport model errors lead to a discrepancy of 27 Tg yr−1 at the global scale, representing 5% of total methane emissions. At continental and annual scales, transport model errors are proportionally larger than at the global scale, with errors ranging from 36 Tg yr−1 in North America to 7 Tg yr−1 in Boreal Eurasia (from 23 to 48%, respectively). At the model grid-scale, the spread of inverse estimates can reach 150% of the prior flux. Therefore, transport model errors contribute significantly to overall uncertainties in emission estimates by inverse modelling, especially when small spatial scales are examined. Sensitivity tests have been carried out to estimate the impact of the measurement network and the advantage of higher horizontal resolution in transport models. The large differences found between methane flux estimates inferred in these different configurations highly

  6. Bayes estimation of the general hazard rate model

    International Nuclear Information System (INIS)

    Sarhan, A.

    1999-01-01

    In reliability theory and life testing models, the lifetime distributions are often specified by choosing a relevant hazard rate function. Here a general hazard rate function h(t) = a + bt^(c-1), where c, a, b are constants greater than zero, is considered. The parameter c is assumed to be known. The Bayes estimators of (a,b) based on data from type II/item-censored testing without replacement are obtained. A large simulation study using the Monte Carlo method is done to compare the performance of the Bayes estimators with regression estimators of (a,b). The criterion for comparison is the Bayes risk associated with the respective estimator. Also, the influence of the number of failed items on the accuracy of the estimators (Bayes and regression) is investigated. Estimates of the parameters (a,b) of the linearly increasing hazard rate model h(t) = a + bt, where a and b are greater than zero, can be obtained as a special case by letting c = 2
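
    A rough sketch of a Bayes estimate for (a,b) in this model with c = 2, a flat prior and complete (uncensored) data; the grid posterior below is illustrative, not the paper's censored-sampling scheme. With c = 2, the cumulative hazard is H(t) = at + bt^2/2, so S(t) = exp(-H(t)) and f(t) = h(t)S(t).

        import numpy as np

        rng = np.random.default_rng(4)
        a_true, b_true = 0.5, 0.2
        u = rng.uniform(size=400)
        H = -np.log(u)                       # simulate by inverting the cumulative hazard
        t = (-a_true + np.sqrt(a_true**2 + 2 * b_true * H)) / b_true

        a_grid = np.linspace(0.01, 2.0, 120)
        b_grid = np.linspace(0.01, 1.0, 120)
        A, B = np.meshgrid(a_grid, b_grid, indexing="ij")
        loglik = (np.log(A[..., None] + B[..., None] * t).sum(axis=-1)
                  - (A * t.sum() + B * (t**2).sum() / 2))
        post = np.exp(loglik - loglik.max())
        post /= post.sum()                   # flat prior: posterior is the normalized likelihood
        print("posterior means:", (A * post).sum(), (B * post).sum())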

  7. Working covariance model selection for generalized estimating equations.

    Science.gov (United States)

    Carey, Vincent J; Wang, You-Gan

    2011-11-20

    We investigate methods for data-based selection of working covariance models in the analysis of correlated data with generalized estimating equations. We study two selection criteria: Gaussian pseudolikelihood and a geodesic distance based on discrepancy between model-sensitive and model-robust regression parameter covariance estimators. The Gaussian pseudolikelihood is found in simulation to be reasonably sensitive for several response distributions and noncanonical mean-variance relations for longitudinal data. Application is also made to a clinical dataset. Assessment of adequacy of both correlation and variance models for longitudinal data should be routine in applications, and we describe open-source software supporting this practice. Copyright © 2011 John Wiley & Sons, Ltd.

  8. Parameter Estimation in Stochastic Grey-Box Models

    DEFF Research Database (Denmark)

    Kristensen, Niels Rode; Madsen, Henrik; Jørgensen, Sten Bay

    2004-01-01

    An efficient and flexible parameter estimation scheme for grey-box models in the sense of discretely, partially observed Ito stochastic differential equations with measurement noise is presented along with a corresponding software implementation. The estimation scheme is based on the extended Kalman filter and features maximum likelihood as well as maximum a posteriori estimation on multiple independent data sets, including irregularly sampled data sets and data sets with occasional outliers and missing observations. The software implementation is compared to an existing software tool and proves to have better performance both in terms of quality of estimates for nonlinear systems with significant diffusion and in terms of reproducibility. In particular, the new tool provides more accurate and more consistent estimates of the parameters of the diffusion term.

  9. Job characteristics and safety climate: the role of effort-reward and demand-control-support models.

    Science.gov (United States)

    Phipps, Denham L; Malley, Christine; Ashcroft, Darren M

    2012-07-01

    While safety climate is widely recognized as a key influence on organizational safety, there remain questions about the nature of its antecedents. One potential influence on safety climate is job characteristics (that is, psychosocial features of the work environment). This study investigated the relationship between two job characteristics models--demand-control-support (Karasek & Theorell, 1990) and effort-reward imbalance (Siegrist, 1996)--and safety climate. A survey was conducted with a random sample of 860 British retail pharmacists, using the job contents questionnaire (JCQ), effort-reward imbalance indicator (ERI) and a measure of safety climate in pharmacies. Multivariate data analyses found that: (a) both models contributed to the prediction of safety climate ratings, with the demand-control-support model making the largest contribution; (b) there were some interactions between demand, control and support from the JCQ in the prediction of safety climate scores. The latter finding suggests the presence of "active learning" with respect to safety improvement in high demand, high control settings. The findings provide further insight into the ways in which job characteristics relate to safety, both individually and at an aggregated level.

  10. Systematic Identification of Stakeholders for Engagement with Systems Modeling Efforts in the Snohomish Basin, Washington, USA

    Science.gov (United States)

    Even as stakeholder engagement in systems dynamic modeling efforts is increasingly promoted, the mechanisms for identifying which stakeholders should be included are rarely documented. Accordingly, for an Environmental Protection Agency’s Triple Value Simulation (3VS) mode...

  11. Parameters Estimation of Geographically Weighted Ordinal Logistic Regression (GWOLR) Model

    Science.gov (United States)

    Zuhdi, Shaifudin; Retno Sari Saputro, Dewi; Widyaningsih, Purnami

    2017-06-01

    A regression model is a representation of the relationship between independent variables and a dependent variable. When the dependent variable is categorical, a logistic regression model is used to calculate the odds of its categories. When the dependent variable has ordered levels, the appropriate model is ordinal logistic regression. The GWOLR model is an ordinal logistic regression model influenced by the geographical location of the observation site. Parameter estimation in the model is needed to determine population values based on a sample. The purpose of this research is parameter estimation of the GWOLR model using R software. The parameter estimation uses data on the number of dengue fever patients in Semarang City. The observation units are 144 villages in Semarang City. The research yields a local GWOLR model for each village, as well as the probability of each category of the number of dengue fever patients.

  12. Work more, then feel more : The influence of effort on affective predictions

    NARCIS (Netherlands)

    Jiga-Boy, G.M.; Toma, C.; Corneille, O.

    2014-01-01

    Two studies examined how effort invested in a task shapes the affective predictions related to potential success in that task, and the mechanism underlying this relationship. In Study 1, PhD students awaiting an editorial decision about a submitted manuscript estimated the effort they had invested

  13. Combining Empirical and Stochastic Models for Extreme Floods Estimation

    Science.gov (United States)

    Zemzami, M.; Benaabidate, L.

    2013-12-01

    Hydrological models can be classified as physical, mathematical or empirical. The latter class uses mathematical equations independent of the physical processes involved in the hydrological system. Linear regression and Gradex (Gradient of Extreme values) are classic examples of empirical models. However, conventional empirical models are still used as tools for hydrological analysis through probabilistic approaches. In many regions of the world, watersheds are not gauged. This is true even in developed countries, where the gauging network has continued to decline as a result of the lack of human and financial resources. The obvious lack of data in these watersheds makes it impossible to apply some basic empirical models for daily forecasting, so a combination of rainfall-runoff models had to be found in which it would be possible to create the necessary data and use them to estimate the flow. The estimated design floods are a good illustration of the difficulties facing the hydrologist in constructing a standard empirical model in basins where hydrological information is rare. A climate-hydrological model based on frequency analysis was constructed to estimate the design flood in the Anseghmir catchment, Morocco. This complex model was chosen for its ability to be applied in watersheds where hydrological information is not sufficient. It was found that this method is a powerful tool for estimating the design flood of the watershed as well as other hydrological elements (runoff, water volumes...). The hydrographic characteristics and climatic parameters were used to estimate the runoff, water volumes and design flood for different return periods.

  14. A model of reward- and effort-based optimal decision making and motor control.

    Directory of Open Access Journals (Sweden)

    Lionel Rigoux

    Full Text Available Costs (e.g. energetic expenditure) and benefits (e.g. food) are central determinants of behavior. In ecology and economics, they are combined to form a utility function which is maximized to guide choices. This principle is widely used in neuroscience as a normative model of decision and action, but current versions of this model fail to consider how decisions are actually converted into actions (i.e. the formation of trajectories). Here, we describe an approach where decision making and motor control are optimal, iterative processes derived from the maximization of the discounted, weighted difference between expected rewards and foreseeable motor efforts. The model accounts for decision making in cost/benefit situations, and detailed characteristics of control and goal tracking in realistic motor tasks. As a normative construction, the model is relevant to address the neural bases and pathological aspects of decision making and motor control.
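
    The core principle, choosing the option (here, a movement duration) that maximizes discounted expected reward minus motor effort, can be shown in a toy computation; the functional forms and constants below are invented for illustration and are not the paper's model.

        import numpy as np

        T = np.linspace(0.1, 3.0, 500)      # candidate movement durations (s)
        reward, gamma = 10.0, 1.0           # reward size and temporal discount rate
        effort = 2.0 / T                    # faster movements cost more effort
        utility = reward * np.exp(-gamma * T) - effort
        print("optimal duration (s):", T[np.argmax(utility)])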

  15. Surface Runoff Estimation Using SMOS Observations, Rain-gauge Measurements and Satellite Precipitation Estimations. Comparison with Model Predictions

    Science.gov (United States)

    Garcia Leal, Julio A.; Lopez-Baeza, Ernesto; Khodayar, Samiro; Estrela, Teodoro; Fidalgo, Arancha; Gabaldo, Onofre; Kuligowski, Robert; Herrera, Eddy

    Surface runoff is defined as the amount of water that originates from precipitation and does not infiltrate due to soil saturation, therefore circulating over the surface. A good estimation of runoff is useful for the design of drainage systems, structures for flood control and soil utilisation. For runoff estimation there exist different methods such as (i) the rational method, (ii) the isochrone method, (iii) the triangular hydrograph, (iv) the non-dimensional SCS hydrograph, (v) the Temez hydrograph, (vi) the kinematic wave model, represented by the dynamic and kinematic equations for a uniform precipitation regime, and (vii) the SCS-CN (Soil Conservation Service Curve Number) model. This work presents a way of estimating precipitation runoff through the SCS-CN model, using SMOS (Soil Moisture and Ocean Salinity) mission soil moisture observations and rain-gauge measurements, as well as satellite precipitation estimates. The area of application is the Jucar River Basin Authority area, where one of the objectives is to develop the SCS-CN model in a spatially distributed way. The results were compared to simulations performed with the 7-km COSMO-CLM (COnsortium for Small-scale MOdelling, COSMO model in CLimate Mode) model. The use of SMOS soil moisture as input to the COSMO-CLM model will certainly improve model simulations.
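
    The SCS-CN relation at the heart of this approach is compact enough to state directly: with potential retention S = 25400/CN - 254 (mm) and initial abstraction Ia = 0.2 S, runoff is Q = (P - Ia)^2 / (P - Ia + S) for P > Ia. A small sketch with illustrative numbers (not the Jucar basin inputs):

        def scs_cn_runoff(p_mm, cn):
            """Direct runoff depth (mm) from storm rainfall p_mm for curve number cn."""
            s = 25400.0 / cn - 254.0      # potential maximum retention (mm)
            ia = 0.2 * s                  # initial abstraction
            if p_mm <= ia:
                return 0.0
            return (p_mm - ia) ** 2 / (p_mm - ia + s)

        print(scs_cn_runoff(60.0, 75))    # runoff (mm) for a 60 mm storm, CN = 75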

  16. Input-output model for MACCS nuclear accident impacts estimation

    Energy Technology Data Exchange (ETDEWEB)

    Outkin, Alexander V. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bixler, Nathan E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Vargas, Vanessa N [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-01-27

    Since the original economic model for MACCS was developed, better quality economic data (as well as the tools to gather and process it) and better computational capabilities have become available. The update of the economic impacts component of the MACCS legacy model will provide improved estimates of business disruptions through the use of Input-Output based economic impact estimation. This paper presents an updated MACCS model, based on Input-Output methodology, in which economic impacts are calculated using the Regional Economic Accounting analysis tool (REAcct) created at Sandia National Laboratories. This new GDP-based model allows quick and consistent estimation of gross domestic product (GDP) losses due to nuclear power plant accidents. This paper outlines the steps taken to combine the REAcct Input-Output-based model with the MACCS code, describes the GDP loss calculation, and discusses the parameters and modeling assumptions necessary for the estimation of long-term effects of nuclear power plant accidents.
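
    Input-Output impact estimation rests on the Leontief relation x = (I - A)^(-1) d, so a final-demand shock Δd propagates to a total output change Δx. An illustrative three-sector calculation (the coefficient matrix and shock are invented, not REAcct data):

        import numpy as np

        A = np.array([[0.10, 0.20, 0.05],      # technical coefficients (assumed)
                      [0.15, 0.05, 0.10],
                      [0.05, 0.10, 0.15]])
        delta_d = np.array([-10.0, 0.0, 0.0])  # final-demand loss in sector 1 ($M)

        delta_x = np.linalg.solve(np.eye(3) - A, delta_d)  # Leontief inverse applied to shock
        print("output change by sector ($M):", delta_x)
        print("total output change ($M):", delta_x.sum())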

  17. Use of models in large-area forest surveys: comparing model-assisted, model-based and hybrid estimation

    Science.gov (United States)

    Goran Stahl; Svetlana Saarela; Sebastian Schnell; Soren Holm; Johannes Breidenbach; Sean P. Healey; Paul L. Patterson; Steen Magnussen; Erik Naesset; Ronald E. McRoberts; Timothy G. Gregoire

    2016-01-01

    This paper focuses on the use of models for increasing the precision of estimators in large-area forest surveys. It is motivated by the increasing availability of remotely sensed data, which facilitates the development of models predicting the variables of interest in forest surveys. We present, review and compare three different estimation frameworks where...

  18. A single model procedure for estimating tank calibration equations

    International Nuclear Information System (INIS)

    Liebetrau, A.M.

    1997-10-01

    A fundamental component of any accountability system for nuclear materials is a tank calibration equation that relates the height of liquid in a tank to its volume. Tank volume calibration equations are typically determined from pairs of height and volume measurements taken in a series of calibration runs. After raw calibration data are standardized to a fixed set of reference conditions, the calibration equation is typically fit by dividing the data into several segments--corresponding to regions in the tank--and independently fitting the data for each segment. The estimates obtained for individual segments must then be combined to obtain an estimate of the entire calibration function. This process is tedious and time-consuming. Moreover, uncertainty estimates may be misleading because it is difficult to properly model run-to-run variability and between-segment correlation. In this paper, the authors describe a model whose parameters can be estimated simultaneously for all segments of the calibration data, thereby eliminating the need for segment-by-segment estimation. The essence of the proposed model is to define a suitable polynomial to fit to each segment and then extend its definition to the domain of the entire calibration function, so that it (the entire calibration function) can be expressed as the sum of these extended polynomials. The model provides defensible estimates of between-run variability and yields a proper treatment of between-segment correlations. A portable software package, called TANCS, has been developed to facilitate the acquisition, standardization, and analysis of tank calibration data. The TANCS package was used for the calculations in an example presented to illustrate the unified modeling approach described in this paper. With TANCS, a trial calibration function can be estimated and evaluated in a matter of minutes
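
    The single-model idea, estimating all segments simultaneously by extending each segment's polynomial over the whole domain, is close in spirit to a truncated-power spline fit. A sketch with synthetic data and assumed breakpoints (this is not the TANCS implementation):

        import numpy as np

        rng = np.random.default_rng(5)
        h = np.sort(rng.uniform(0.0, 10.0, 200))       # liquid heights
        v = 5 * h + 0.8 * np.clip(h - 4, 0, None) ** 2 + rng.normal(0, 0.5, 200)

        knots = [4.0, 7.0]                              # segment boundaries (assumed)
        X = np.column_stack([np.ones_like(h), h, h ** 2] +
                            [np.clip(h - k, 0, None) ** 2 for k in knots])
        beta, *_ = np.linalg.lstsq(X, v, rcond=None)    # one simultaneous least-squares fit
        print(beta)                                     # coefficients of the whole calibration curve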

  19. A Covariance Structure Model Test of Antecedents of Adolescent Alcohol Misuse and a Prevention Effort.

    Science.gov (United States)

    Dielman, T. E.; And Others

    1989-01-01

    Questionnaires were administered to 4,157 junior high school students to determine levels of alcohol misuse, exposure to peer use and misuse of alcohol, susceptibility to peer pressure, internal health locus of control, and self-esteem. Conceptual model of antecendents of adolescent alcohol misuse and effectiveness of a prevention effort was…

  20. Parameter Estimation for a Computable General Equilibrium Model

    DEFF Research Database (Denmark)

    Arndt, Channing; Robinson, Sherman; Tarp, Finn

    We introduce a maximum entropy approach to parameter estimation for computable general equilibrium (CGE) models. The approach applies information theory to estimating a system of nonlinear simultaneous equations. It has a number of advantages. First, it imposes all general equilibrium constraints...

  1. Phase transitions in least-effort communications

    International Nuclear Information System (INIS)

    Prokopenko, Mikhail; Ay, Nihat; Obst, Oliver; Polani, Daniel

    2010-01-01

    We critically examine a model that attempts to explain the emergence of power laws (e.g., Zipf's law) in human language. The model is based on the principle of least effort in communications—specifically, the overall effort is balanced between the speaker effort and listener effort, with some trade-off. It has been shown that an information-theoretic interpretation of this principle is sufficiently rich to explain the emergence of Zipf's law in the vicinity of the transition between referentially useless systems (one signal for all referable objects) and indexical reference systems (one signal per object). The phase transition is defined in the space of communication accuracy (information content) expressed in terms of the trade-off parameter. Our study explicitly solves the continuous optimization problem, subsuming a recent, more specific result obtained within a discrete space. The obtained results contrast Zipf's law found by heuristic search (that attained only local minima) in the vicinity of the transition between referentially useless systems and indexical reference systems, with an inverse-factorial (sub-logarithmic) law found at the transition that corresponds to global minima. The inverse-factorial law is observed to be the most representative frequency distribution among optimal solutions

  2. [Evaluation of estimation of prevalence ratio using bayesian log-binomial regression model].

    Science.gov (United States)

    Gao, W L; Lin, H; Liu, X N; Ren, X W; Li, J S; Shen, X P; Zhu, S L

    2017-03-10

    To evaluate the estimation of the prevalence ratio (PR) using a Bayesian log-binomial regression model and its application, we estimated the PR of medical care-seeking prevalence in relation to caregivers' recognition of risk signs of diarrhea in their infants, using a Bayesian log-binomial regression model in OpenBUGS software. The results showed that caregivers' recognition of infants' risk signs of diarrhea was associated significantly with a 13% increase in medical care-seeking. Meanwhile, we compared the differences in the PR's point estimate and interval estimate of medical care-seeking prevalence in relation to caregivers' recognition of risk signs of diarrhea, and the convergence of three models (model 1: not adjusting for covariates; model 2: adjusting for duration of caregivers' education; model 3: adjusting for distance between village and township and child month-age, based on model 2) between the Bayesian log-binomial regression model and the conventional log-binomial regression model. The results showed that all three Bayesian log-binomial regression models converged and the estimated PRs were 1.130 (95% CI: 1.005-1.265), 1.128 (95% CI: 1.001-1.264) and 1.132 (95% CI: 1.004-1.267), respectively. Conventional log-binomial regression models 1 and 2 converged, with PRs of 1.130 (95% CI: 1.055-1.206) and 1.126 (95% CI: 1.051-1.203), respectively, but model 3 failed to converge, so the COPY method was used to estimate the PR, which was 1.125 (95% CI: 1.051-1.200). In addition, the point and interval estimates of the PRs from the three Bayesian log-binomial regression models differed slightly from those from the conventional log-binomial regression model, but they had good consistency in estimating the PR. Therefore, the Bayesian log-binomial regression model can effectively estimate the PR with less non-convergence and has more advantages in application compared with the conventional log-binomial regression model.

  3. Motion estimation by data assimilation in reduced dynamic models

    International Nuclear Information System (INIS)

    Drifi, Karim

    2013-01-01

    Motion estimation is a major challenge in the field of image sequence analysis. This thesis is a study of the dynamics of geophysical flows visualized by satellite imagery. Satellite image sequences are currently underused for the task of motion estimation. A good understanding of geophysical flows allows a better analysis and forecast of phenomena in domains such as oceanography and meteorology. Data assimilation provides an excellent framework for achieving a compromise between heterogeneous data, especially numerical models and observations. Hence, in this thesis we set out to apply variational data assimilation methods to estimate motion in image sequences. As one of the major drawbacks of applying these assimilation techniques is the considerable computation time and memory required, we define and use a model reduction method in order to significantly decrease the necessary computation time and memory. We then explore the possibilities that reduced models provide for motion estimation, particularly the possibility of strictly imposing some known constraints on the computed solutions. In particular, we show how to estimate a divergence-free motion with boundary conditions on a complex spatial domain [fr]

  4. Tax effort and oil royalties in the Brazilian municipalities

    Directory of Open Access Journals (Sweden)

    Fernando Antonio Slaibe Postali

    2015-09-01

    Full Text Available This paper estimates a stochastic production frontier to investigate whether municipalities covered by oil royalties in the last decade have reduced their tax effort in Brazil. The issue is relevant given the prospect of a substantial increase in these revenues and the new rules for distribution of the funds established by Law No. 12.734/2012. The inputs were personnel and capital expenditures, whereas the product was defined as the municipal tax collection. With the purpose of overcoming endogeneity problems due to reverse causality of output on inputs, we used the lagged independent variables as instruments in the inefficiency equation. The data set is a panel of Brazilian municipalities from 2002 to 2011. The results indicate that oil revenues have a negative impact on the estimated efficiencies, signaling reduced fiscal effort by the benefiting municipalities.

  5. Bodily Effort Enhances Learning and Metacognition: Investigating the Relation Between Physical Effort and Cognition Using Dual-Process Models of Embodiment.

    Science.gov (United States)

    Skulmowski, Alexander; Rey, Günter Daniel

    2017-01-01

    Recent embodiment research revealed that cognitive processes can be influenced by bodily cues. Some of these cues were found to elicit disparate effects on cognition. For instance, weight sensations can inhibit problem-solving performance, but were shown to increase judgments regarding recall probability (judgments of learning; JOLs) in memory tasks. We investigated the effects of physical effort on learning and metacognition by conducting two studies in which we varied whether a backpack was worn or not while 20 nouns were to be learned. Participants entered a JOL for each word and completed a recall test. Experiment 1 (N = 18) revealed that exerting physical effort by wearing a backpack led to higher JOLs for easy nouns, without a notable effect on difficult nouns. Participants who wore a backpack reached higher recall scores. Therefore, physical effort may act as a form of desirable difficulty during learning. In Experiment 2 (N = 30), the influence of physical effort on JOLs and learning disappeared when more difficult nouns were to be learned, implying that a high cognitive load may diminish bodily effects. These findings suggest that physical effort mainly influences superficial modes of thought and raise doubts concerning the explanatory power of metaphor-centered accounts of embodiment for higher-level cognition.

  6. Optimal Effort in Consumer Choice : Theory and Experimental Evidence for Binary Choice

    NARCIS (Netherlands)

    Conlon, B.J.; Dellaert, B.G.C.; van Soest, A.H.O.

    2001-01-01

    This paper develops a theoretical model of optimal effort in consumer choice. The model extends previous consumer choice models in that the consumer not only chooses a product, but also decides how much effort to apply to a given choice problem. The model yields a unique optimal level of effort, which

  7. Parameter Estimation for a Computable General Equilibrium Model

    DEFF Research Database (Denmark)

    Arndt, Channing; Robinson, Sherman; Tarp, Finn

    2002-01-01

    We introduce a maximum entropy approach to parameter estimation for computable general equilibrium (CGE) models. The approach applies information theory to estimating a system of non-linear simultaneous equations. It has a number of advantages. First, it imposes all general equilibrium constraints...

  8. Asymptotics for Estimating Equations in Hidden Markov Models

    DEFF Research Database (Denmark)

    Hansen, Jørgen Vinsløv; Jensen, Jens Ledet

    Results on asymptotic normality for the maximum likelihood estimate in hidden Markov models are extended in two directions. The stationarity assumption is relaxed, which allows for a covariate process influencing the hidden Markov process. Furthermore a class of estimating equations is considered...

  9. Index of Effort: An Analytical Model for Evaluating and Re-Directing Student Recruitment Activities for a Local Community College.

    Science.gov (United States)

    Landini, Albert J.

    This index of effort is proposed as a means by which those in charge of student recruitment activities at community colleges can be sure that their efforts are being directed toward all of the appropriate population. The index is an analytical model based on the concept of socio-economic profiles, using small area 1970 census data, and is the…

  10. Estimation of pump operational state with model-based methods

    International Nuclear Information System (INIS)

    Ahonen, Tero; Tamminen, Jussi; Ahola, Jero; Viholainen, Juha; Aranto, Niina; Kestilae, Juha

    2010-01-01

    Pumps are widely used in industry, and they account for 20% of the industrial electricity consumption. Since the speed variation is often the most energy-efficient method to control the head and flow rate of a centrifugal pump, frequency converters are used with induction motor-driven pumps. Although a frequency converter can estimate the operational state of an induction motor without external measurements, the state of a centrifugal pump or other load machine is not typically considered. The pump is, however, usually controlled on the basis of the required flow rate or output pressure. As the pump operational state can be estimated with a general model having adjustable parameters, external flow rate or pressure measurements are not necessary to determine the pump flow rate or output pressure. Hence, external measurements could be replaced with an adjustable model for the pump that uses estimates of the motor operational state. Besides control purposes, modelling the pump operation can provide useful information for energy auditing and optimization purposes. In this paper, two model-based methods for pump operation estimation are presented. Factors affecting the accuracy of the estimation methods are analyzed. The applicability of the methods is verified by laboratory measurements and tests in two pilot installations. Test results indicate that the estimation methods can be applied to the analysis and control of pump operation. The accuracy of the methods is sufficient for auditing purposes, and the methods can inform the user if the pump is driven inefficiently.

  11. Estimating Dynamic Equilibrium Models using Macro and Financial Data

    DEFF Research Database (Denmark)

    Christensen, Bent Jesper; Posch, Olaf; van der Wel, Michel

    We show that including financial market data at daily frequency, along with macro series at standard lower frequency, facilitates statistical inference on structural parameters in dynamic equilibrium models. Our continuous-time formulation conveniently accounts for the difference in observation...... of the estimators and estimate the model using 20 years of U.S. macro and financial data....

  12. Improving Frozen Precipitation Density Estimation in Land Surface Modeling

    Science.gov (United States)

    Sparrow, K.; Fall, G. M.

    2017-12-01

    The Office of Water Prediction (OWP) produces high-value water supply and flood risk planning information through the use of operational land surface modeling. Improvements in diagnosing frozen precipitation density will benefit the NWS's meteorological and hydrological services by refining estimates of a significant and vital input into land surface models. A current common practice for handling the density of snow accumulation in a land surface model is to use a standard 10:1 snow-to-liquid-equivalent ratio (SLR). Our research findings suggest the possibility of a more skillful approach for assessing the spatial variability of precipitation density. We developed a 30-year SLR climatology for the coterminous US from version 3.22 of the Global Historical Climatology Network - Daily (GHCN-D) dataset. Our methods followed the approach described by Baxter (2005) to estimate mean climatological SLR values at GHCN-D sites in the US, Canada, and Mexico for the years 1986-2015. In addition to the Baxter criteria, the following refinements were made: tests were performed to eliminate SLR outliers and frequent reports of SLR = 10, a linear SLR vs. elevation trend was fitted to station SLR mean values to remove the elevation trend from the data, and detrended SLR residuals were interpolated using ordinary kriging with a spherical semivariogram model. The elevation values of each station were based on the GMTED 2010 digital elevation model, and the elevation trend in the data was established via linear least squares approximation. The ordinary kriging procedure was used to interpolate the data into gridded climatological SLR estimates for each calendar month at a 0.125 degree resolution. To assess the skill of this climatology, we compared estimates from our SLR climatology with observations from the GHCN-D dataset to consider the potential use of this climatology as a first guess of frozen precipitation density in an operational land surface model. The difference in
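
    The elevation-detrending step reads directly as a least-squares fit followed by residual interpolation; the sketch below covers only the trend removal, with synthetic station values standing in for GHCN-D records.

        import numpy as np

        rng = np.random.default_rng(6)
        elev = rng.uniform(0.0, 3000.0, 100)                 # station elevations (m)
        slr = 10 + 0.002 * elev + rng.normal(0, 1.0, 100)    # station mean SLR values

        G = np.column_stack([np.ones_like(elev), elev])
        coef, *_ = np.linalg.lstsq(G, slr, rcond=None)       # linear SLR-vs-elevation trend
        resid = slr - G @ coef                               # detrended residuals, ready for kriging
        print(f"fitted trend: SLR = {coef[0]:.2f} + {coef[1]:.4f} * elevation")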

  13. Estimating the timing and location of shallow rainfall-induced landslides using a model for transient, unsaturated infiltration

    Science.gov (United States)

    Baum, Rex L.; Godt, Jonathan W.; Savage, William Z.

    2010-01-01

    Shallow rainfall-induced landslides commonly occur under conditions of transient infiltration into initially unsaturated soils. In an effort to predict the timing and location of such landslides, we developed a model of the infiltration process using a two-layer system that consists of an unsaturated zone above a saturated zone and implemented this model in a geographic information system (GIS) framework. The model links analytical solutions for transient, unsaturated, vertical infiltration above the water table to pressure-diffusion solutions for pressure changes below the water table. The solutions are coupled through a transient water table that rises as water accumulates at the base of the unsaturated zone. This scheme, though limited to simplified soil-water characteristics and moist initial conditions, greatly improves computational efficiency over numerical models in spatially distributed modeling applications. Pore pressures computed by these coupled models are subsequently used in one-dimensional slope-stability computations to estimate the timing and locations of slope failures. Applied over a digital landscape near Seattle, Washington, for an hourly rainfall history known to trigger shallow landslides, the model computes a factor of safety for each grid cell at any time during a rainstorm. The unsaturated layer attenuates and delays the rainfall-induced pore-pressure response of the model at depth, consistent with observations at an instrumented hillside near Edmonds, Washington. This attenuation results in realistic estimates of timing for the onset of slope instability (7 h earlier than observed landslides, on average). By considering the spatial distribution of physical properties, the model predicts the primary source areas of landslides.
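
    The slope-stability step can be illustrated with the standard infinite-slope factor of safety driven by the transient pressure head psi(Z, t) supplied by the infiltration model; this is a minimal sketch with illustrative soil parameters, not the paper's full implementation.

      # Minimal sketch of the one-dimensional (infinite-slope) factor of safety
      # used with transient pressure heads; parameter values are illustrative.
      import numpy as np

      def factor_of_safety(psi, z, slope_deg, c=4.0e3, phi_deg=32.0,
                           gamma_s=20.0e3, gamma_w=9.81e3):
          """FS at depth z (m) for pressure head psi (m):

          FS = tan(phi)/tan(slope)
               + (c - psi*gamma_w*tan(phi)) / (gamma_s*z*sin(slope)*cos(slope))
          """
          slope = np.radians(slope_deg)
          phi = np.radians(phi_deg)
          return (np.tan(phi) / np.tan(slope)
                  + (c - psi * gamma_w * np.tan(phi))
                  / (gamma_s * z * np.sin(slope) * np.cos(slope)))

      # A rising pressure head during a storm lowers FS toward instability (FS < 1)
      for psi in (0.0, 0.5, 1.0):
          print(psi, factor_of_safety(psi, z=2.0, slope_deg=35.0))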

  14. Estimation of Parameters in Mean-Reverting Stochastic Systems

    Directory of Open Access Journals (Sweden)

    Tianhai Tian

    2014-01-01

    Stochastic differential equations (SDEs) are a very important mathematical tool for describing complex systems in which noise plays an important role. SDE models have been widely used to study the dynamic properties of various nonlinear systems in biology, engineering, finance, and economics, as well as the physical sciences. Since an SDE can generate an unlimited number of trajectories, it is difficult to estimate model parameters from experimental observations that may represent only one trajectory of the stochastic model. Although substantial research efforts have been made to develop effective methods, it is still a challenge to infer unknown parameters in SDE models from observations that may have large variations. Using an interest rate model as a test problem, in this work we use Bayesian inference and the Markov Chain Monte Carlo method to estimate unknown parameters in SDE models.
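
    A minimal sketch of the approach, assuming a Vasicek-type mean-reverting model dr = a(b - r)dt + sigma dW, a random-walk Metropolis sampler with the Euler transition density, and flat priors; the paper's interest rate model, priors, and sampler details may differ.

      import numpy as np

      rng = np.random.default_rng(1)

      # Simulate one observed trajectory (stands in for experimental data)
      a_true, b_true, sig_true, dt, n = 0.5, 0.05, 0.01, 1/252, 2000
      r = np.empty(n); r[0] = 0.03
      for t in range(n - 1):
          r[t+1] = r[t] + a_true*(b_true - r[t])*dt + sig_true*np.sqrt(dt)*rng.normal()

      def log_lik(theta):
          """Euler-approximation log-likelihood of the observed trajectory."""
          a, b, sig = theta
          if a <= 0 or sig <= 0:
              return -np.inf
          mu = r[:-1] + a*(b - r[:-1])*dt          # Euler transition mean
          var = sig**2 * dt                        # Euler transition variance
          return -0.5*np.sum((r[1:] - mu)**2/var + np.log(2*np.pi*var))

      theta = np.array([1.0, 0.0, 0.02])           # initial guess
      step = np.array([0.1, 0.005, 0.001])         # proposal step sizes
      ll = log_lik(theta)
      samples = []
      for _ in range(20000):
          prop = theta + step*rng.normal(size=3)
          ll_prop = log_lik(prop)
          if np.log(rng.uniform()) < ll_prop - ll:  # accept/reject (flat priors)
              theta, ll = prop, ll_prop
          samples.append(theta)
      print(np.mean(samples[10000:], axis=0))       # posterior means after burn-in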

  15. Model validation and error estimation of tsunami runup using high resolution data in Sadeng Port, Gunungkidul, Yogyakarta

    Science.gov (United States)

    Basith, Abdul; Prakoso, Yudhono; Kongko, Widjo

    2017-07-01

    A tsunami model using high resolution geometric data is indispensable in tsunami mitigation efforts, especially in tsunami-prone areas, since such data are one of the factors that affect the accuracy of numerical tsunami modeling. Sadeng Port is a new infrastructure on the Southern Coast of Java which could potentially be hit by a massive tsunami originating from the seismic gap. This paper discusses the validation and error estimation of a tsunami model created using high resolution geometric data in Sadeng Port. The tsunami model is validated against the wave height of the 2006 Pangandaran tsunami recorded by the Tide Gauge of Sadeng. The model is used for tsunami numerical modeling involving earthquake-tsunami parameters derived from the seismic gap. The validation results using a Student's t-test show that the modeled and observed tsunami heights at the Tide Gauge of Sadeng are statistically equal at the 95% confidence level, with RMSE and NRMSE values of 0.428 m and 22.12%, while the difference in tsunami wave travel time is 12 minutes.
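
    The reported statistics are straightforward to reproduce; a sketch with hypothetical stand-in arrays follows (note that NRMSE normalizations vary; the observed range is used here).

      import numpy as np
      from scipy import stats

      modelled = np.array([0.8, 1.4, 1.1, 0.6, 1.9])   # model output, m
      observed = np.array([0.9, 1.2, 1.5, 0.7, 1.6])   # tide gauge record, m

      rmse = np.sqrt(np.mean((modelled - observed)**2))
      nrmse = rmse / (observed.max() - observed.min())  # one common normalization

      # Paired t-test: are the model and observation means statistically equal?
      t, p = stats.ttest_rel(modelled, observed)
      print(rmse, nrmse, p > 0.05)   # p > 0.05 -> equal at the 95% level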

  16. A single model procedure for tank calibration function estimation

    International Nuclear Information System (INIS)

    York, J.C.; Liebetrau, A.M.

    1995-01-01

    Reliable tank calibrations are a vital component of any measurement control and accountability program for bulk materials in a nuclear reprocessing facility. Tank volume calibration functions used in nuclear materials safeguards and accountability programs are typically constructed from several segments, each of which is estimated independently. Ideally, the segments correspond to structural features in the tank. In this paper the authors use an extension of the Thomas-Liebetrau model to estimate the entire calibration function in a single step. This procedure automatically takes significant run-to-run differences into account and yields an estimate of the entire calibration function in one operation. As with other procedures, the first step is to define suitable calibration segments. Next, a polynomial of low degree is specified for each segment. In contrast with the conventional practice of constructing a separate model for each segment, this information is used to set up the design matrix for a single model that encompasses all of the calibration data. Estimation of the model parameters is then done using conventional statistical methods. The method described here has several advantages over traditional methods. First, modeled run-to-run differences can be taken into account automatically at the estimation step. Second, no interpolation is required between successive segments. Third, variance estimates are based on all the data, rather than that from a single segment, with the result that discontinuities in confidence intervals at segment boundaries are eliminated. Fourth, the restrictive assumption of the Thomas-Liebetrau method, that the measured volumes be the same for all runs, is not required. Finally, the proposed methods are readily implemented using standard statistical procedures and widely-used software packages
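
    A simplified sketch of the single-model idea: per-segment polynomial columns and a run-effect column are assembled into one design matrix and solved by ordinary least squares. Real tank calibration adds continuity constraints and a proper covariance model; all values below are synthetic.

      import numpy as np

      rng = np.random.default_rng(2)
      h = rng.uniform(0.0, 2.0, 120)               # liquid heights (m), two runs
      run = np.repeat([0.0, 1.0], 60)              # run indicator
      vol = 3.0*h + 0.4*h**2 + 0.02*run + rng.normal(0, 0.01, h.size)

      breaks = [0.0, 0.8, 2.0]                     # calibration segment boundaries (m)
      cols = [np.ones_like(h), run]                # intercept + run-to-run offset
      for s in range(len(breaks) - 1):
          in_seg = (h >= breaks[s]) & (h < breaks[s + 1])
          for deg in (1, 2):                       # low-degree polynomial per segment
              cols.append(np.where(in_seg, h**deg, 0.0))
      X = np.column_stack(cols)                    # one design matrix, all segments

      beta, *_ = np.linalg.lstsq(X, vol, rcond=None)
      resid = vol - X @ beta
      print(beta.round(3), resid.std().round(4))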

  17. Effort levels of the partners in networked manufacturing

    Science.gov (United States)

    Chai, G. R.; Cai, Z.; Su, Y. N.; Zong, S. L.; Zhai, G. Y.; Jia, J. H.

    2017-08-01

    Compared with the traditional manufacturing mode, can networked manufacturing improve the effort levels of the partners? What factors affect the partners' effort levels? How can the partners be encouraged to improve their effort levels? To answer these questions, we introduce a network effect coefficient to build an effort level model of the partners in networked manufacturing. The results show that (1) as the network effect in networked manufacturing increases, the actual effort level can exceed the ideal level of traditional manufacturing; (2) profit allocation based on the marginal contribution rate helps improve the effort levels of the partners in networked manufacturing; and (3) a partner in networked manufacturing who wishes to receive a larger share of the distribution must exert a higher effort level, and enterprises with insufficient effort should be removed from the networked manufacturing arrangement.

  18. NONLINEAR PLANT PIECEWISE-CONTINUOUS MODEL MATRIX PARAMETERS ESTIMATION

    Directory of Open Access Journals (Sweden)

    Roman L. Leibov

    2017-09-01

    This paper presents a technique for estimating the matrix parameters of a nonlinear plant piecewise-continuous model using the nonlinear model's time responses and a random search method. One application area of piecewise-continuous models is identified. The results of applying the proposed approach to form a piecewise-continuous model of an aircraft turbofan engine are presented.
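
    A minimal illustration of matrix parameter estimation by random search, fitting the state matrix of a small linear model so that its step response matches a reference response (a synthetic stand-in for the nonlinear plant response); the engine model, dimensions, and search schedule of the paper are not reproduced.

      import numpy as np

      rng = np.random.default_rng(3)
      dt, steps = 0.01, 500
      t = np.arange(steps) * dt
      target = 1.0 - np.exp(-2.0 * t) * np.cos(3.0 * t)   # reference time response

      def step_response(A):
          """Step response y = x[0] of x' = A x + B u, u = 1, via Euler integration."""
          x = np.zeros(2)
          B = np.array([0.0, 1.0])
          out = np.empty(steps)
          for k in range(steps):
              out[k] = x[0]
              x = x + dt * (A @ x + B)
          return out

      A = np.array([[0.0, 1.0], [-1.0, -1.0]])            # initial stable guess
      best = np.mean((step_response(A) - target) ** 2)
      for _ in range(2000):
          cand = A + 0.05 * rng.normal(size=A.shape)      # random perturbation
          err = np.mean((step_response(cand) - target) ** 2)
          if err < best:                                  # keep improvements only
              A, best = cand, err
      print(A.round(3), best)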

  19. Effort testing in children: can cognitive and symptom validity measures differentiate malingered performances?

    Science.gov (United States)

    Rambo, Philip L; Callahan, Jennifer L; Hogan, Lindsey R; Hullmann, Stephanie; Wrape, Elizabeth

    2015-01-01

    Recent efforts have contributed to significant advances in the detection of malingered performances by adults during cognitive assessment. However, children's ability to purposefully underperform has received relatively little attention. The purpose of the present investigation was to examine children's performances on common intellectual measures, as well as on two symptom validity measures: the Test of Memory Malingering and the Dot-Counting Test. This was accomplished through the administration of measures to children ages 6 to 12 in randomly assigned full-effort (control) and poor-effort (treatment) conditions. Prior to randomization, children's general intellectual functioning (i.e., IQ) was estimated via administration of the Kaufman Brief Intelligence Test, Second Edition (KBIT-2). Multivariate analyses revealed that the conditions significantly differed on some but not all administered measures. Specifically, children's estimated IQ in the treatment condition significantly differed from the full-effort IQ initially obtained from the same children on the KBIT-2, as well as from the IQs obtained in the full-effort control condition. These findings suggest that children are fully capable of willfully underperforming during cognitive testing; however, consistent with prior investigations, some measures show greater sensitivity than others in evaluating effort.

  20. Effort assessment in the development of information systems projects

    Directory of Open Access Journals (Sweden)

    Živadinović Jovan

    2015-01-01

    There is a great lack of methods and techniques in the software development process itself, as well as a lack of appropriate tools that would make it more efficient. The significance of the problem is repeatedly emphasized by the need to ensure a high quality of software and software-based systems. The main objective of this work is to develop and systematize an original formal procedure for assessing the development of information systems in the early stages of the software life cycle, through metrics of the data model. We calculate the data model metrics using quantities that can be read off a base data model, represented by an Entity-Relationship (ER) diagram and defined by four basic concepts: entities, relationships, attributes of entities or relationships, and values. The idea is to express the complexity of the process as a function of the number of these concepts and the number of attributes of entity types. Assessment techniques are the basis for planning and successfully executing software projects. A statistical method is used in this paper, and these assessment processes fall into the category of empirical parametric methods, although they have some characteristics of expert estimation methods. The developed assessment process is a step in the effort to reach suitable measures for assessing the size and complexity of the data model and for estimating the costs and resources necessary for the development of information systems. Likewise, certain metrics are developed; knowing the data model, these metrics can be used to quantify characteristics of an information system as a whole in the logical design phase. The suggested metrics were tested on specific models and the results are shown here.
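
    A hypothetical illustration of such a data-model metric: a weighted count of ER concepts feeding a parametric effort relation. The weights and coefficients are placeholders, not the calibrated values of the paper.

      from dataclasses import dataclass

      @dataclass
      class ERModel:
          entities: int
          relationships: int
          attributes: int    # attributes of entities and relationships

      def data_model_size(m: ERModel, w_e=1.0, w_r=1.5, w_a=0.25):
          """Size/complexity score as a weighted function of ER concept counts."""
          return w_e*m.entities + w_r*m.relationships + w_a*m.attributes

      def estimated_effort(size, a=2.9, b=0.91):
          """Parametric effort estimate, effort = a * size**b (person-hours);
          a and b would be fitted empirically to historical projects."""
          return a * size**b

      m = ERModel(entities=24, relationships=31, attributes=187)
      s = data_model_size(m)
      print(s, estimated_effort(s))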

  1. A matlab framework for estimation of NLME models using stochastic differential equations: applications for estimation of insulin secretion rates.

    Science.gov (United States)

    Mortensen, Stig B; Klim, Søren; Dammann, Bernd; Kristensen, Niels R; Madsen, Henrik; Overgaard, Rune V

    2007-10-01

    The non-linear mixed-effects model based on stochastic differential equations (SDEs) provides an attractive residual error model that is able to handle serially correlated residuals, which typically arise from structural mis-specification of the true underlying model. The use of SDEs also opens up new tools for model development and easily allows for tracking of unknown inputs and parameters over time. An algorithm for maximum likelihood estimation of the model has been proposed earlier, and the present paper presents the first general implementation of this algorithm. The implementation is done in Matlab and also demonstrates the use of parallel computing for improved estimation times. The use of the implementation is illustrated by two applications which focus on the ability of the model to estimate unknown inputs, facilitated by the extension to SDEs. The first application is a deconvolution-type estimation of the insulin secretion rate based on a linear two-compartment model for C-peptide measurements. In the second application the model is extended to also give an estimate of the time-varying liver extraction based on both C-peptide and insulin measurements.

  2. Conditional shape models for cardiac motion estimation

    DEFF Research Database (Denmark)

    Metz, Coert; Baka, Nora; Kirisli, Hortense

    2010-01-01

    We propose a conditional statistical shape model to predict patient specific cardiac motion from the 3D end-diastolic CTA scan. The model is built from 4D CTA sequences by combining atlas based segmentation and 4D registration. Cardiac motion estimation is, for example, relevant in the dynamic...

  3. A Dynamic Travel Time Estimation Model Based on Connected Vehicles

    Directory of Open Access Journals (Sweden)

    Daxin Tian

    2015-01-01

    With advances in connected vehicle technology, dynamic vehicle route guidance models are gradually becoming indispensable tools for drivers. Traditional route guidance models are designed to direct a vehicle along the shortest path from the origin to the destination without considering dynamic traffic information. In this paper a dynamic travel time estimation model is presented which can collect and distribute traffic data based on connected vehicles. To estimate the real-time travel time more accurately, a dynamic road-link dividing algorithm is proposed. The efficiency of the model is confirmed by simulations, and the experimental results prove the effectiveness of the travel time estimation method.
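
    A minimal sketch of the underlying computation: the link is divided into sub-links, the mean speed of connected-vehicle reports in each sub-link is computed, and per-sub-link travel times are summed. The paper's dynamic dividing algorithm chooses the splits adaptively; fixed splits and made-up reports are used here.

      import numpy as np

      bounds = np.array([0.0, 400.0, 800.0, 1200.0])  # sub-link boundaries (m)

      # (position on link in m, spot speed in m/s) reports from equipped vehicles
      reports = np.array([[120, 14.2], [350, 13.1], [520, 6.0],
                          [690, 5.4], [940, 11.8], [1100, 12.5]])

      travel_time = 0.0
      for i in range(len(bounds) - 1):
          in_seg = (reports[:, 0] >= bounds[i]) & (reports[:, 0] < bounds[i + 1])
          v = reports[in_seg, 1].mean()               # mean speed on this sub-link
          travel_time += (bounds[i + 1] - bounds[i]) / v
      print(travel_time)                              # estimated link travel time (s)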

  4. PARAMETER ESTIMATION AND MODEL SELECTION FOR INDOOR ENVIRONMENTS BASED ON SPARSE OBSERVATIONS

    Directory of Open Access Journals (Sweden)

    Y. Dehbi

    2017-09-01

    This paper presents a novel method for the parameter estimation and model selection for the reconstruction of indoor environments based on sparse observations. While most approaches for the reconstruction of indoor models rely on dense observations, we predict scenes of the interior with high accuracy in the absence of indoor measurements. We use a model-based top-down approach and incorporate strong but profound prior knowledge. The latter includes probability density functions for model parameters and sparse observations such as room areas and the building footprint. The floorplan model is characterized by linear and bi-linear relations with discrete and continuous parameters. We focus on the stochastic estimation of model parameters based on a topological model derived by combinatorial reasoning in a first step. A Gauss-Markov model is applied for estimation and simulation of the model parameters. Symmetries are represented and exploited during the estimation process. Background knowledge as well as observations are incorporated in a maximum likelihood estimation and model selection is performed with AIC/BIC. The likelihood is also used for the detection and correction of potential errors in the topological model. Estimation results are presented and discussed.

  5. Parameter Estimation and Model Selection for Indoor Environments Based on Sparse Observations

    Science.gov (United States)

    Dehbi, Y.; Loch-Dehbi, S.; Plümer, L.

    2017-09-01

    This paper presents a novel method for the parameter estimation and model selection for the reconstruction of indoor environments based on sparse observations. While most approaches for the reconstruction of indoor models rely on dense observations, we predict scenes of the interior with high accuracy in the absence of indoor measurements. We use a model-based top-down approach and incorporate strong but profound prior knowledge. The latter includes probability density functions for model parameters and sparse observations such as room areas and the building footprint. The floorplan model is characterized by linear and bi-linear relations with discrete and continuous parameters. We focus on the stochastic estimation of model parameters based on a topological model derived by combinatorial reasoning in a first step. A Gauss-Markov model is applied for estimation and simulation of the model parameters. Symmetries are represented and exploited during the estimation process. Background knowledge as well as observations are incorporated in a maximum likelihood estimation and model selection is performed with AIC/BIC. The likelihood is also used for the detection and correction of potential errors in the topological model. Estimation results are presented and discussed.

  6. Evaluation of an ARPS-based canopy flow modeling system for use in future operational smoke prediction efforts

    Science.gov (United States)

    M. T. Kiefer; S. Zhong; W. E. Heilman; J. J. Charney; X. Bian

    2013-01-01

    Efforts to develop a canopy flow modeling system based on the Advanced Regional Prediction System (ARPS) model are discussed. The standard version of ARPS is modified to account for the effect of drag forces on mean and turbulent flow through a vegetation canopy, via production and sink terms in the momentum and subgrid-scale turbulent kinetic energy (TKE) equations....

  7. Estimation development cost, study case: Quality Management System Reactor TRIGA Mark III

    International Nuclear Information System (INIS)

    Antúnez Barbosa, Tereso Antonio; Valdovinos Rosas, Rosa María; Marcial Romero, José Raymundo; Ramos Corchado, Marco Antonio; Edgar Herrera Arriaga

    2016-01-01

    The process of estimating costs in software engineering is not a simple task; it must be addressed carefully to obtain an efficient strategy for solving problems associated with the effort, cost, and time of the activities performed in the development of an information system project. In this context, cost is the main concern for both developers and customers: developers worry about the effort involved, while customers worry about the price of the product. In other fields, however, the cost of goods depends on the activity or process performed, from which one can deduce that the main cost driver of the final product of a software development project is undoubtedly its size. In this paper a comparative study of common cost estimation models is developed. These models are used today to create a structured analysis that provides the necessary information about cost, time, and effort for decision making in a software development project. Finally, the models are applied to a case study, a system called Monitorizacion Automatica del Sistema de Gestion de Calidad del Reactor TRIGA Mark III (automatic monitoring of the quality management system of the TRIGA Mark III reactor). (author)

  8. Nonparametric volatility density estimation for discrete time models

    NARCIS (Netherlands)

    Es, van Bert; Spreij, P.J.C.; Zanten, van J.H.

    2005-01-01

    We consider discrete time models for asset prices with a stationary volatility process. We aim at estimating the multivariate density of this process at a set of consecutive time instants. A Fourier-type deconvolution kernel density estimator based on the logarithm of the squared process is proposed

  9. A Hierarchical Linear Model for Estimating Gender-Based Earnings Differentials.

    Science.gov (United States)

    Haberfield, Yitchak; Semyonov, Moshe; Addi, Audrey

    1998-01-01

    Estimates of gender earnings inequality in data from 116,431 Jewish workers were compared using a hierarchical linear model (HLM) and ordinary least squares model. The HLM allows estimation of the extent to which earnings inequality depends on occupational characteristics. (SK)

  10. Efficient and robust estimation for longitudinal mixed models for binary data

    DEFF Research Database (Denmark)

    Holst, René

    2009-01-01

    This paper proposes a longitudinal mixed model for binary data. The model extends the classical Poisson trick, in which a binomial regression is fitted by switching to a Poisson framework. A recent estimating equations method for generalized linear longitudinal mixed models, called GEEP, is used...... as a vehicle for fitting the conditional Poisson regressions, given a latent process of serial correlated Tweedie variables. The regression parameters are estimated using a quasi-score method, whereas the dispersion and correlation parameters are estimated by use of bias-corrected Pearson-type estimating...... equations, using second moments only. Random effects are predicted by BLUPs. The method provides a computationally efficient and robust approach to the estimation of longitudinal clustered binary data and accommodates linear and non-linear models. A simulation study is used for validation and finally...

  11. Estimating Predictive Variance for Statistical Gas Distribution Modelling

    International Nuclear Information System (INIS)

    Lilienthal, Achim J.; Asadi, Sahar; Reggente, Matteo

    2009-01-01

    Recent publications in statistical gas distribution modelling have proposed algorithms that model the mean and variance of a distribution. This paper argues that estimating the predictive concentration variance entails not only a gradual improvement but is rather a significant step to advance the field. This is, first, because such models much better fit the particular structure of gas distributions, which exhibit strong fluctuations with considerable spatial variations as a result of the intermittent character of gas dispersal. Second, because estimating the predictive variance makes it possible to evaluate the model quality in terms of the data likelihood. This offers a solution to the problem of ground truth evaluation, which has always been a critical issue for gas distribution modelling. It also enables solid comparisons of different modelling approaches, and provides the means to learn meta parameters of the model, to determine when the model should be updated or re-initialised, or to suggest new measurement locations based on the current model. We also point out directions of related ongoing or potential future research work.

  12. House thermal model parameter estimation method for Model Predictive Control applications

    NARCIS (Netherlands)

    van Leeuwen, Richard Pieter; de Wit, J.B.; Fink, J.; Smit, Gerardus Johannes Maria

    In this paper we investigate thermal network models with different model orders applied to various Dutch low-energy house types with high and low interior thermal mass and containing floor heating. Parameter estimations are performed by using data from TRNSYS simulations. The paper discusses results

  13. Bayesian Nonparametric Model for Estimating Multistate Travel Time Distribution

    Directory of Open Access Journals (Sweden)

    Emmanuel Kidando

    2017-01-01

    Multistate models, that is, models with more than two distributions, are preferred over single-state probability models in modeling the distribution of travel time. A literature review indicated that finite multistate modeling of travel time using the lognormal distribution is superior to other probability functions. In this study, we extend the finite multistate lognormal model of the travel time distribution to a model with an unbounded number of lognormal components. In particular, a nonparametric Dirichlet Process Mixture Model (DPMM) with a stick-breaking process representation was used. The strength of the DPMM is that it can choose the number of components dynamically as part of the algorithm during parameter estimation. To reduce computational complexity, the modeling process was limited to a maximum of six components. Then, the Markov Chain Monte Carlo (MCMC) sampling technique was employed to estimate the parameters' posterior distribution. Speed data from nine links of a freeway corridor, aggregated on a 5-minute basis, were used to calculate the corridor travel time. The results demonstrated that this model offers significant flexibility in accounting for complex mixture distributions of the travel time without specifying the number of components. The DPMM modeling further revealed that freeway travel time is characterized by multistate or single-state models depending on the inclusion of the onset and offset of congestion periods.
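
    The component-selection behaviour can be illustrated with scikit-learn's BayesianGaussianMixture, which uses a truncated stick-breaking representation fitted by variational inference rather than the paper's MCMC sampler; the data are synthetic log travel times.

      import numpy as np
      from sklearn.mixture import BayesianGaussianMixture

      rng = np.random.default_rng(4)
      # Synthetic corridor travel times (minutes): free-flow and congested states
      tt = np.concatenate([rng.lognormal(2.0, 0.10, 700),
                           rng.lognormal(2.6, 0.25, 300)])

      dpmm = BayesianGaussianMixture(
          n_components=6,                                   # truncation level
          weight_concentration_prior_type="dirichlet_process",
          max_iter=500, random_state=0)
      dpmm.fit(np.log(tt).reshape(-1, 1))   # lognormal -> normal in log space

      active = dpmm.weights_ > 0.01         # effectively used components
      print(dpmm.weights_.round(3), active.sum())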

  14. E-Model MOS Estimate Precision Improvement and Modelling of Jitter Effects

    Directory of Open Access Journals (Sweden)

    Adrian Kovac

    2012-01-01

    This paper deals with the ITU-T E-model, which is used for non-intrusive MOS VoIP call quality estimation on IP networks. The pros of the E-model are its computational simplicity and applicability to real-time traffic. The cons, as shown in our previous work, are the inability of the E-model to reflect the effects of network jitter present in real traffic flows and of jitter-buffer behavior on end user devices. These effects are visible mostly in traffic over WANs, the internet and radio networks, and cause the E-model MOS call quality estimate to be noticeably too optimistic. In this paper, we propose a modification to the E-model using the previously proposed Pplef (effective packet loss) metric, based on a jitter and jitter-buffer model built on a Pareto/D/1/K system. We subsequently optimize the newly added parameters reflecting jitter effects in the E-model by using the intrusive PESQ measurement method as a reference for selected audio codecs. Function fitting and parameter optimization are performed under varying delay, packet loss, jitter, and jitter-buffer sizes for both correlated and uncorrelated long-tailed network traffic.
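
    For reference, the standard ITU-T G.107 mapping from the E-model rating factor R to MOS, into which a jitter-aware correction of R would feed; the computation of R itself (delay, equipment, and packet-loss impairments) is omitted here.

      def r_to_mos(r: float) -> float:
          """ITU-T G.107 conversion from rating factor R to MOS."""
          if r <= 0:
              return 1.0
          if r >= 100:
              return 4.5
          return 1.0 + 0.035*r + 7.0e-6 * r * (r - 60.0) * (100.0 - r)

      for r in (50, 70, 90):
          print(r, round(r_to_mos(r), 2))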

  15. Estimation of group means when adjusting for covariates in generalized linear models.

    Science.gov (United States)

    Qu, Yongming; Luo, Junxiang

    2015-01-01

    Generalized linear models are commonly used to analyze categorical data such as binary, count, and ordinal outcomes. Adjusting for important prognostic factors or baseline covariates in generalized linear models may improve the estimation efficiency. The model-based mean for a treatment group produced by most software packages estimates the response at the mean covariate, not the mean response for this treatment group for the studied population. Although this is not an issue for linear models, the model-based group mean estimates in generalized linear models could be seriously biased for the true group means. We propose a new method to estimate the group mean consistently with the corresponding variance estimation. Simulation showed the proposed method produces an unbiased estimator for the group means and provided the correct coverage probability. The proposed method was applied to analyze hypoglycemia data from clinical trials in diabetes. Copyright © 2014 John Wiley & Sons, Ltd.
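
    The distinction is easy to demonstrate for a logistic model: the response evaluated at the mean covariate differs from the mean predicted response over the population (marginal standardization). A sketch with simulated data and statsmodels follows; it illustrates the problem rather than reproducing the paper's proposed estimator.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(5)
      n = 2000
      x = rng.normal(0.0, 2.0, n)                           # baseline covariate
      y = (rng.uniform(size=n) < 1/(1 + np.exp(0.5 - 1.2*x))).astype(float)

      X = sm.add_constant(x)
      fit = sm.GLM(y, X, family=sm.families.Binomial()).fit()

      # Response at the mean covariate (what software often reports) ...
      at_mean = fit.predict(np.array([[1.0, x.mean()]]))[0]
      # ... versus the mean response over the covariate distribution
      marginal = fit.predict(X).mean()
      print(at_mean, marginal)   # these differ whenever the link is nonlinear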

  16. System health monitoring using multiple-model adaptive estimation techniques

    Science.gov (United States)

    Sifford, Stanley Ryan

    Monitoring system health for fault detection and diagnosis by tracking system parameters concurrently with state estimates is approached using a new multiple-model adaptive estimation (MMAE) method. This novel method is called GRid-based Adaptive Parameter Estimation (GRAPE). GRAPE expands existing MMAE methods by using new techniques to sample the parameter space. GRAPE expands on MMAE with the hypothesis that sample models can be applied and resampled without relying on a predefined set of models. GRAPE is initially implemented in a linear framework using Kalman filter models. A more generalized GRAPE formulation is presented using extended Kalman filter (EKF) models to represent nonlinear systems. GRAPE can handle both time invariant and time varying systems as it is designed to track parameter changes. Two techniques are presented to generate parameter samples for the parallel filter models. The first approach is called selected grid-based stratification (SGBS). SGBS divides the parameter space into equally spaced strata. The second approach uses Latin Hypercube Sampling (LHS) to determine the parameter locations and minimize the total number of required models. LHS is particularly useful when the parameter dimensions grow. Adding more parameters does not require the model count to increase for LHS. Each resample is independent of the prior sample set other than the location of the parameter estimate. SGBS and LHS can be used for both the initial sample and subsequent resamples. Furthermore, resamples are not required to use the same technique. Both techniques are demonstrated for both linear and nonlinear frameworks. The GRAPE framework further formalizes the parameter tracking process through a general approach for nonlinear systems. These additional methods allow GRAPE to either narrow the focus to converged values within a parameter range or expand the range in the appropriate direction to track the parameters outside the current parameter range boundary

  17. On population size estimators in the Poisson mixture model.

    Science.gov (United States)

    Mao, Chang Xuan; Yang, Nan; Zhong, Jinhua

    2013-09-01

    Estimating population sizes via capture-recapture experiments has numerous important applications. The Poisson mixture model can be adopted for those applications with a single list in which individuals appear one or more times. We compare several nonparametric estimators, including the Chao estimator, the Zelterman estimator, two jackknife estimators and the bootstrap estimator. The target parameter of the Chao estimator is a lower bound on the population size. Those of the other four estimators are not lower bounds, and they may produce lower confidence limits for the population size with poor coverage probabilities. A simulation study is reported and two examples are investigated. © 2013, The International Biometric Society.
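
    Two of the compared estimators have simple closed forms in terms of the frequency counts f1 (individuals seen once) and f2 (individuals seen twice); the counts below are made up.

      import numpy as np

      def chao(S, f1, f2):
          """Chao lower-bound estimator of the population size."""
          return S + f1**2 / (2.0 * f2)

      def zelterman(S, f1, f2):
          """Zelterman estimator via a truncated-Poisson rate lambda = 2*f2/f1."""
          lam = 2.0 * f2 / f1
          return S / (1.0 - np.exp(-lam))

      S, f1, f2 = 255, 176, 55   # S = number of distinct individuals observed
      print(chao(S, f1, f2), zelterman(S, f1, f2))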

  18. Hybrid Simulation Modeling to Estimate U.S. Energy Elasticities

    Science.gov (United States)

    Baylin-Stern, Adam C.

    This paper demonstrates how a U.S. application of CIMS, a technologically explicit and behaviourally realistic energy-economy simulation model which includes macro-economic feedbacks, can be used to derive estimates of elasticity of substitution (ESUB) and autonomous energy efficiency index (AEEI) parameters. The ability of economies to reduce greenhouse gas emissions depends on the potential for households and industry to decrease overall energy usage, and move from higher to lower emissions fuels. Energy economists commonly refer to ESUB estimates to understand the degree of responsiveness of various sectors of an economy, and use estimates to inform computable general equilibrium models used to study climate policies. Using CIMS, I have generated a set of future, 'pseudo-data' based on a series of simulations in which I vary energy and capital input prices over a wide range. I then used this data set to estimate the parameters for transcendental logarithmic production functions using regression techniques. From the production function parameter estimates, I calculated an array of elasticity of substitution values between input pairs. Additionally, this paper demonstrates how CIMS can be used to calculate price-independent changes in energy-efficiency in the form of the AEEI, by comparing energy consumption between technologically frozen and 'business as usual' simulations. The paper concludes with some ideas for model and methodological improvement, and how these might figure into future work in the estimation of ESUBs from CIMS. Keywords: Elasticity of substitution; hybrid energy-economy model; translog; autonomous energy efficiency index; rebound effect; fuel switching.

  19. Dissociating variability and effort as determinants of coordination.

    Directory of Open Access Journals (Sweden)

    Ian O'Sullivan

    2009-04-01

    When coordinating movements, the nervous system often has to decide how to distribute work across a number of redundant effectors. Here, we show that humans solve this problem by trying to minimize both the variability of motor output and the effort involved. In previous studies that investigated the temporal shape of movements, these two selective pressures, despite having very different theoretical implications, could not be distinguished; because noise in the motor system increases with the motor commands, minimization of effort or variability leads to very similar predictions. When multiple effectors with different noise and effort characteristics have to be combined, however, these two cost terms can be dissociated. Here, we measure the importance of variability and effort in coordination by studying how humans share force production between two fingers. To capture variability, we identified the coefficient of variation of the index and little fingers. For effort, we used the sum of squared forces and the sum of squared forces normalized by the maximum strength of each effector. These terms were then used to predict the optimal force distribution for a task in which participants had to produce a target total force of 4-16 N by pressing onto two isometric transducers using different combinations of fingers. By comparing the predicted distribution across fingers to the actual distribution chosen by participants, we estimated the relative importance of variability and effort at 1:7, with the unnormalized effort being most important. Our results indicate that the nervous system uses multi-effector redundancy to minimize both the variability of the produced output and effort, although effort costs clearly outweighed variability costs.

  20. Simulation and modeling efforts to support decision making in healthcare supply chain management.

    Science.gov (United States)

    AbuKhousa, Eman; Al-Jaroodi, Jameela; Lazarova-Molnar, Sanja; Mohamed, Nader

    2014-01-01

    Recently, most healthcare organizations have focused their attention on reducing the cost of their supply chain management (SCM) by improving the efficiencies of the processes pertaining to decision making. The availability of products through healthcare SCM is often a matter of life or death to the patient; therefore, trial and error approaches are not an option in this environment. Simulation and modeling (SM) has been presented as an alternative approach for supply chain managers in healthcare organizations to test solutions and to support decision making processes associated with various SCM problems. This paper presents and analyzes past SM efforts to support decision making in healthcare SCM and identifies the key challenges associated with healthcare SCM modeling. We also present and discuss emerging technologies to meet these challenges.

  1. Simulation and Modeling Efforts to Support Decision Making in Healthcare Supply Chain Management

    Directory of Open Access Journals (Sweden)

    Eman AbuKhousa

    2014-01-01

    Recently, most healthcare organizations have focused their attention on reducing the cost of their supply chain management (SCM) by improving the efficiencies of the processes pertaining to decision making. The availability of products through healthcare SCM is often a matter of life or death to the patient; therefore, trial and error approaches are not an option in this environment. Simulation and modeling (SM) has been presented as an alternative approach for supply chain managers in healthcare organizations to test solutions and to support decision making processes associated with various SCM problems. This paper presents and analyzes past SM efforts to support decision making in healthcare SCM and identifies the key challenges associated with healthcare SCM modeling. We also present and discuss emerging technologies to meet these challenges.

  2. Estimating Lion Abundance using N-mixture Models for Social Species.

    Science.gov (United States)

    Belant, Jerrold L; Bled, Florent; Wilton, Clay M; Fyumagwa, Robert; Mwampeta, Stanslaus B; Beyer, Dean E

    2016-10-27

    Declining populations of large carnivores worldwide, and the complexities of managing human-carnivore conflicts, require accurate population estimates of large carnivores to promote their long-term persistence through well-informed management. We used N-mixture models to estimate lion (Panthera leo) abundance from call-in and track surveys in southeastern Serengeti National Park, Tanzania. Because of potential habituation to broadcasted calls and social behavior, we developed a hierarchical observation process within the N-mixture model, conditioning lion detectability on the group response to call-ins and on individual detection probabilities. We estimated 270 lions (95% credible interval = 170-551) using call-ins but were unable to estimate lion abundance from track data. We found a weak negative relationship between predicted track density and predicted lion abundance from the call-in surveys. Luminosity was negatively correlated with individual detection probability during call-in surveys. Lion abundance and track density were influenced by landcover, but the directions of the corresponding effects were undetermined. N-mixture models allowed us to incorporate multiple parameters (e.g., landcover, luminosity, observer effect) influencing lion abundance and probability of detection directly into abundance estimates. We suggest that N-mixture models employing a hierarchical observation process can be used to estimate the abundance of other social, herding, and grouping species.
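
    The core binomial-Poisson N-mixture likelihood (for repeated counts at multiple sites) can be written compactly; this sketch omits the paper's hierarchical group-response layer and covariates, and the counts are synthetic.

      import numpy as np
      from scipy import stats, optimize

      counts = np.array([[3, 2, 4], [0, 1, 0], [2, 2, 1], [5, 3, 4]])  # sites x visits
      N_MAX = 50   # truncation of the sum over possible site abundances

      def neg_log_lik(params):
          lam = np.exp(params[0])                 # Poisson mean abundance
          p = 1 / (1 + np.exp(-params[1]))        # individual detection probability
          Ns = np.arange(N_MAX + 1)
          prior = stats.poisson.pmf(Ns, lam)      # P(N = k)
          ll = 0.0
          for site in counts:
              # P(counts | N = k) for each k, product over repeat visits
              lik_k = np.prod(stats.binom.pmf(site[:, None], Ns[None, :], p), axis=0)
              ll += np.log(np.sum(lik_k * prior))
          return -ll

      res = optimize.minimize(neg_log_lik, x0=[np.log(5.0), 0.0])
      lam_hat = np.exp(res.x[0]); p_hat = 1 / (1 + np.exp(-res.x[1]))
      print(lam_hat, p_hat)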

  3. Estimating Coastal Digital Elevation Model (DEM) Uncertainty

    Science.gov (United States)

    Amante, C.; Mesick, S.

    2017-12-01

    Integrated bathymetric-topographic digital elevation models (DEMs) are representations of the Earth's solid surface and are fundamental to the modeling of coastal processes, including tsunami, storm surge, and sea-level rise inundation. Deviations in elevation values from the actual seabed or land surface constitute errors in DEMs, which originate from numerous sources, including: (i) the source elevation measurements (e.g., multibeam sonar, lidar), (ii) the interpolative gridding technique (e.g., spline, kriging) used to estimate elevations in areas unconstrained by source measurements, and (iii) the datum transformation used to convert bathymetric and topographic data to common vertical reference systems. The magnitude and spatial distribution of the errors from these sources are typically unknown, and the lack of knowledge regarding these errors represents the vertical uncertainty in the DEM. The National Oceanic and Atmospheric Administration (NOAA) National Centers for Environmental Information (NCEI) has developed DEMs for more than 200 coastal communities. This study presents a methodology developed at NOAA NCEI to derive accompanying uncertainty surfaces that estimate DEM errors at the individual cell-level. The development of high-resolution (1/9th arc-second), integrated bathymetric-topographic DEMs along the southwest coast of Florida serves as the case study for deriving uncertainty surfaces. The estimated uncertainty can then be propagated into the modeling of coastal processes that utilize DEMs. Incorporating the uncertainty produces more reliable modeling results, and in turn, better-informed coastal management decisions.

  4. Online state of charge and model parameter co-estimation based on a novel multi-timescale estimator for vanadium redox flow battery

    International Nuclear Information System (INIS)

    Wei, Zhongbao; Lim, Tuti Mariana; Skyllas-Kazacos, Maria; Wai, Nyunt; Tseng, King Jet

    2016-01-01

    Highlights:
    • Battery model parameters and SOC co-estimation is investigated.
    • The model parameters and OCV are decoupled and estimated independently.
    • Multiple timescales are adopted to improve precision and stability.
    • SOC is estimated online without using the open-circuit cell.
    • The method is robust to aging levels, flow rates, and battery chemistries.

    Abstract: A key function of a battery management system (BMS) is to provide accurate information on the state of charge (SOC) in real time, and this depends directly on precise model parameterization. In this paper, a novel multi-timescale estimator is proposed to estimate the model parameters and SOC for a vanadium redox flow battery (VRB) in real time. The model parameters and OCV are decoupled and estimated independently, effectively avoiding the possibility of cross interference between them. The analysis of model sensitivity, stability, and precision suggests the necessity of adopting a different timescale for each estimator. Experiments are conducted to assess the performance of the proposed method. Results reveal that the model parameters are adapted online accurately, so that periodic calibration can be avoided. The online estimated terminal voltage and SOC are both benchmarked against the reference values. The proposed multi-timescale estimator has the merits of fast convergence, high precision, and good robustness against initialization uncertainty, aging states, flow rates, and battery chemistries.

  5. A hierarchical model for estimating density in camera-trap studies

    Science.gov (United States)

    Royle, J. Andrew; Nichols, James D.; Karanth, K.Ullas; Gopalaswamy, Arjun M.

    2009-01-01

    Estimating animal density using capture–recapture data from arrays of detection devices such as camera traps has been problematic due to the movement of individuals and heterogeneity in capture probability among them induced by differential exposure to trapping. We develop a spatial capture–recapture model for estimating density from camera-trapping data which contains explicit models for the spatial point process governing the distribution of individuals and their exposure to and detection by traps. We adopt a Bayesian approach to analysis of the hierarchical model using the technique of data augmentation. The model is applied to photographic capture–recapture data on tigers (Panthera tigris) in Nagarahole reserve, India. Using this model, we estimate the density of tigers to be 14.3 animals per 100 km2 during 2004. Synthesis and applications: our modelling framework largely overcomes several weaknesses in conventional approaches to the estimation of animal density from trap arrays. It effectively deals with key problems such as individual heterogeneity in capture probabilities, movement of traps, presence of potential ‘holes’ in the array and ad hoc estimation of sample area. The formulation, thus, greatly enhances flexibility in the conduct of field surveys as well as in the analysis of data, from studies that may involve physical, photographic or DNA-based ‘captures’ of individual animals.

  6. Methods to estimate irrigated reference crop evapotranspiration - a review.

    Science.gov (United States)

    Kumar, R; Jat, M K; Shankar, V

    2012-01-01

    Efficient water management of crops requires accurate irrigation scheduling, which, in turn, requires accurate measurement of crop water requirements. Irrigation is applied to replenish depleted moisture for optimum plant growth. Reference evapotranspiration plays an important role in determining crop water requirements and irrigation scheduling. Various models/approaches, ranging from empirical to physically based distributed models, are available for the estimation of reference evapotranspiration. Mathematical models are useful tools to estimate the evapotranspiration and water requirement of crops, which is essential information required to design or choose the best water management practices. In this paper the most commonly used models/approaches suitable for estimating the daily water requirement of agricultural crops grown in different agro-climatic regions are reviewed. Further, an effort has been made to compare the accuracy of various widely used methods under different climatic conditions.
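
    As an example of the empirical end of the spectrum, the Hargreaves-Samani equation estimates reference evapotranspiration from temperature and extraterrestrial radiation alone; it is one of the commonly reviewed methods, shown here with illustrative inputs.

      # Hargreaves-Samani: ET0 = 0.0023 * Ra * (Tmean + 17.8) * sqrt(Tmax - Tmin),
      # with Ra expressed as equivalent evaporation (mm/day).
      import math

      def hargreaves_et0(tmax_c, tmin_c, ra_mm_day):
          tmean = (tmax_c + tmin_c) / 2.0
          return 0.0023 * ra_mm_day * (tmean + 17.8) * math.sqrt(tmax_c - tmin_c)

      # Example: warm summer day, Ra ~ 16.5 mm/day equivalent evaporation
      print(hargreaves_et0(tmax_c=33.0, tmin_c=19.0, ra_mm_day=16.5))  # mm/day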

  7. Working from Home - What is the Effect on Employees' Effort?

    OpenAIRE

    Rupietta, Kira; Beckmann, Michael

    2016-01-01

    This paper investigates how working from home affects employees' work effort. Employees, who have the possibility to work from home, have a high autonomy in scheduling their work and therefore are assumed to have a higher intrinsic motivation. Thus, we expect working from home to positively influence work effort of employees. For the empirical analysis we use the German Socio-Economic Panel (SOEP). To account for self-selection into working locations we use an instrumental variable (IV) estim...

  8. A Probabilistic Cost Estimation Model for Unexploded Ordnance Removal

    National Research Council Canada - National Science Library

    Poppe, Peter

    1999-01-01

    ...) contaminated sites that the services must decontaminate. Existing models for estimating the cost of UXO removal often require a high level of expertise and provide only a point estimate for the costs...

  9. A service based estimation method for MPSoC performance modelling

    DEFF Research Database (Denmark)

    Tranberg-Hansen, Anders Sejer; Madsen, Jan; Jensen, Bjørn Sand

    2008-01-01

    This paper presents an abstract service based estimation method for MPSoC performance modelling which allows fast, cycle accurate design space exploration of complex architectures including multi processor configurations at a very early stage in the design phase. The modelling method uses a service...... oriented model of computation based on Hierarchical Colored Petri Nets and allows the modelling of both software and hardware in one unified model. To illustrate the potential of the method, a small MPSoC system, developed at Bang & Olufsen ICEpower a/s, is modelled and performance estimates are produced...

  10. Studies in Software Cost Model Behavior: Do We Really Understand Cost Model Performance?

    Science.gov (United States)

    Lum, Karen; Hihn, Jairus; Menzies, Tim

    2006-01-01

    While there exists extensive literature on software cost estimation techniques, industry practice continues to rely upon standard regression-based algorithms. These software effort models are typically calibrated or tuned to local conditions using local data. This paper cautions that current approaches to model calibration often produce sub-optimal models, both because of the large variance problem inherent in cost data and because they include far more effort multipliers than the data support. Building optimal models requires that a wider range of models be considered, while correctly calibrating these models requires rejection rules that prune variables and records, together with multiple criteria for evaluating model performance. The main contribution of this paper is to document a standard method that integrates formal model identification, estimation, and validation. It also documents what we call the large variance problem, a leading cause of cost model brittleness or instability.

  11. Thresholding projection estimators in functional linear models

    OpenAIRE

    Cardot, Hervé; Johannes, Jan

    2010-01-01

    We consider the problem of estimating the regression function in functional linear regression models by proposing a new type of projection estimator which combines dimension reduction and thresholding. The introduction of a threshold rule makes it possible to obtain consistency under broad assumptions as well as minimax rates of convergence under additional regularity hypotheses. We also consider the particular case of Sobolev spaces generated by the trigonometric basis, which permits to get easily mean squ...

  12. FUZZY MODELING BY SUCCESSIVE ESTIMATION OF RULES ...

    African Journals Online (AJOL)

    This paper presents an algorithm for automatically deriving fuzzy rules directly from a set of input-output data of a process for the purpose of modeling. The rules are extracted by a method termed successive estimation. This method is used to generate a model without truncating the number of fired rules, to within user ...

  13. Explicit estimating equations for semiparametric generalized linear latent variable models

    KAUST Repository

    Ma, Yanyuan

    2010-07-05

    We study generalized linear latent variable models without requiring a distributional assumption of the latent variables. Using a geometric approach, we derive consistent semiparametric estimators. We demonstrate that these models have a property which is similar to that of a sufficient complete statistic, which enables us to simplify the estimating procedure and explicitly to formulate the semiparametric estimating equations. We further show that the explicit estimators have the usual root n consistency and asymptotic normality. We explain the computational implementation of our method and illustrate the numerical performance of the estimators in finite sample situations via extensive simulation studies. The advantage of our estimators over the existing likelihood approach is also shown via numerical comparison. We employ the method to analyse a real data example from economics. © 2010 Royal Statistical Society.

  14. Optimal difference-based estimation for partially linear models

    KAUST Repository

    Zhou, Yuejin; Cheng, Yebin; Dai, Wenlin; Tong, Tiejun

    2017-01-01

    Difference-based methods have attracted increasing attention for analyzing partially linear models in the recent literature. In this paper, we first propose to solve the optimal sequence selection problem in difference-based estimation for the linear component. To achieve the goal, a family of new sequences and a cross-validation method for selecting the adaptive sequence are proposed. We demonstrate that the existing sequences are only extreme cases in the proposed family. Secondly, we propose a new estimator for the residual variance by fitting a linear regression method to some difference-based estimators. Our proposed estimator achieves the asymptotic optimal rate of mean squared error. Simulation studies also demonstrate that our proposed estimator performs better than the existing estimator, especially when the sample size is small and the nonparametric function is rough.
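
    The classical first-order difference-based variance estimator that this line of work refines: differencing removes the smooth regression function, so squared differences estimate (twice) the noise variance. The sketch below shows the basic estimator only, not the paper's improved regression-based version.

      import numpy as np

      rng = np.random.default_rng(6)
      x = np.sort(rng.uniform(0, 1, 200))
      y = np.sin(4*np.pi*x) + rng.normal(0, 0.3, x.size)   # true sigma^2 = 0.09

      # E[(y_{i+1} - y_i)^2] ~ 2*sigma^2 when f is smooth and the design is dense
      sigma2_hat = np.sum(np.diff(y)**2) / (2 * (y.size - 1))
      print(sigma2_hat)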

  15. Optimal difference-based estimation for partially linear models

    KAUST Repository

    Zhou, Yuejin

    2017-12-16

    Difference-based methods have attracted increasing attention for analyzing partially linear models in the recent literature. In this paper, we first propose to solve the optimal sequence selection problem in difference-based estimation for the linear component. To achieve the goal, a family of new sequences and a cross-validation method for selecting the adaptive sequence are proposed. We demonstrate that the existing sequences are only extreme cases in the proposed family. Secondly, we propose a new estimator for the residual variance by fitting a linear regression method to some difference-based estimators. Our proposed estimator achieves the asymptotic optimal rate of mean squared error. Simulation studies also demonstrate that our proposed estimator performs better than the existing estimator, especially when the sample size is small and the nonparametric function is rough.

  16. Estimation of Nonlinear Dynamic Panel Data Models with Individual Effects

    Directory of Open Access Journals (Sweden)

    Yi Hu

    2014-01-01

    This paper suggests a generalized method of moments (GMM) based estimation for dynamic panel data models with individual-specific fixed effects and threshold effects simultaneously. We extend Hansen's (1999) original setup to models including endogenous regressors, specifically, lagged dependent variables. To address the endogeneity problem of these nonlinear dynamic panel data models, we prove that the orthogonality conditions proposed by Arellano and Bond (1991) are valid. The threshold and slope parameters are estimated by GMM, and the asymptotic distribution of the slope parameters is derived. The finite sample performance of the estimation is investigated through Monte Carlo simulations, which show that the threshold and slope parameters can be estimated accurately and that the finite sample distribution of the slope parameters is well approximated by the asymptotic distribution.

  17. The problematic estimation of "imitation effects" in multilevel models

    Directory of Open Access Journals (Sweden)

    2003-09-01

    It seems plausible that a person's demographic behaviour may be influenced by that among other people in the community, for example because of an inclination to imitate. When estimating multilevel models from clustered individual data, some investigators might perhaps feel tempted to try to capture this effect by simply including on the right-hand side the average of the dependent variable, constructed by aggregation within the clusters. However, such modelling must be avoided. According to simulation experiments based on real fertility data from India, the estimated effect of this obviously endogenous variable can be very different from the true effect. Also the other community effect estimates can be strongly biased. An "imitation effect" can only be estimated under very special assumptions that in practice will be hard to defend.

  18. Bayesian estimation of parameters in a regional hydrological model

    Directory of Open Access Journals (Sweden)

    K. Engeland

    2002-01-01

    This study evaluates the applicability of the distributed, process-oriented Ecomag model for prediction of daily streamflow in ungauged basins. The Ecomag model is applied as a regional model to nine catchments in the NOPEX area, using Bayesian statistics to estimate the posterior distribution of the model parameters conditioned on the observed streamflow. The distribution is calculated by Markov Chain Monte Carlo (MCMC) analysis. The Bayesian method requires the formulation of a likelihood function for the parameters, and three alternative formulations are used. The first is a subjectively chosen objective function that describes the goodness of fit between the simulated and observed streamflow, as defined in the GLUE framework. The second and third formulations are statistically more rigorous likelihood models that describe the simulation errors. The full statistical likelihood model describes the simulation errors as an AR(1) process, whereas the simple model excludes the auto-regressive part. The statistical parameters depend on the catchments and the hydrological processes, and the statistical and hydrological parameters are estimated simultaneously. The results show that the simple likelihood model gives the most robust parameter estimates. The simulation error can be explained to a large extent by the catchment characteristics and climatic conditions, so it is possible to transfer knowledge about them to ungauged catchments. The statistical models for the simulation errors indicate that structural errors in the model are more important than parameter uncertainties. Keywords: regional hydrological model, model uncertainty, Bayesian analysis, Markov Chain Monte Carlo analysis
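
    The full statistical likelihood described above treats simulation errors as an AR(1) process; conditioning on the first residual, its Gaussian log-likelihood has a simple form. A sketch with made-up residuals follows; in the MCMC the AR(1) parameters are sampled jointly with the hydrological parameters.

      import numpy as np

      def ar1_log_lik(resid, rho, sigma_eta):
          """Log-likelihood of residuals e_t = rho*e_{t-1} + eta_t, eta ~ N(0, s^2),
          conditioning on the first residual."""
          innov = resid[1:] - rho * resid[:-1]
          n = innov.size
          return (-0.5 * n * np.log(2*np.pi*sigma_eta**2)
                  - 0.5 * np.sum(innov**2) / sigma_eta**2)

      resid = np.array([0.4, 0.3, 0.1, -0.2, -0.1, 0.2])   # observed minus simulated flow
      print(ar1_log_lik(resid, rho=0.6, sigma_eta=0.2))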

  19. COSTMODL - AN AUTOMATED SOFTWARE DEVELOPMENT COST ESTIMATION TOOL

    Science.gov (United States)

    Roush, G. B.

    1994-01-01

    The cost of developing computer software consumes an increasing portion of many organizations' budgets. As this trend continues, the capability to estimate the effort and schedule required to develop a candidate software product becomes increasingly important. COSTMODL is an automated software development estimation tool which fulfills this need. Assimilating COSTMODL to any organization's particular environment can yield significant reduction in the risk of cost overruns and failed projects. This user-customization capability is unmatched by any other available estimation tool. COSTMODL accepts a description of a software product to be developed and computes estimates of the effort required to produce it, the calendar schedule required, and the distribution of effort and staffing as a function of the defined set of development life-cycle phases. This is accomplished by the five cost estimation algorithms incorporated into COSTMODL: the NASA-developed KISS model; the Basic, Intermediate, and Ada COCOMO models; and the Incremental Development model. This choice affords the user the ability to handle project complexities ranging from small, relatively simple projects to very large projects. Unique to COSTMODL is the ability to redefine the life-cycle phases of development and the capability to display a graphic representation of the optimum organizational structure required to develop the subject project, along with required staffing levels and skills. The program is menu-driven and mouse sensitive with an extensive context-sensitive help system that makes it possible for a new user to easily install and operate the program and to learn the fundamentals of cost estimation without having prior training or separate documentation. The implementation of these functions, along with the customization feature, into one program makes COSTMODL unique within the industry. COSTMODL was written for IBM PC compatibles, and it requires Turbo Pascal 5.0 or later and Turbo
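
    Of the five algorithms named above, Basic COCOMO has the simplest form: effort and schedule as power laws of size in KLOC. The sketch uses Boehm's published coefficients; COSTMODL's user-calibrated values may differ.

      # Basic COCOMO: effort = a * KLOC**b (person-months), schedule = c * effort**d
      COEFF = {                    # (a, b, c, d), Boehm (1981)
          "organic":      (2.4, 1.05, 2.5, 0.38),
          "semidetached": (3.0, 1.12, 2.5, 0.35),
          "embedded":     (3.6, 1.20, 2.5, 0.32),
      }

      def basic_cocomo(kloc: float, mode: str = "organic"):
          a, b, c, d = COEFF[mode]
          effort = a * kloc**b            # person-months
          schedule = c * effort**d        # calendar months
          return effort, schedule

      effort, months = basic_cocomo(32.0, "semidetached")
      print(round(effort, 1), "person-months over", round(months, 1), "months")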

  20. Correcting the bias of empirical frequency parameter estimators in codon models.

    Directory of Open Access Journals (Sweden)

    Sergei Kosakovsky Pond

    2010-07-01

    Full Text Available Markov models of codon substitution are powerful inferential tools for studying biological processes such as natural selection and preferences in amino acid substitution. The equilibrium character distributions of these models are almost always estimated using nucleotide frequencies observed in a sequence alignment, primarily as a matter of historical convention. In this note, we demonstrate that a popular class of such estimators is biased, and that this bias has an adverse effect on goodness of fit and estimates of substitution rates. We propose a "corrected" empirical estimator that begins with observed nucleotide counts, but accounts for the nucleotide composition of stop codons. We show via simulation that the corrected estimates outperform the de facto standard estimates not just by providing better estimates of the frequencies themselves, but also by leading to improved estimation of other parameters in the evolutionary models. On a curated collection of sequence alignments, our estimators show a significant improvement in goodness of fit compared to the F3×4 approach. Maximum likelihood estimation of the frequency parameters appears to be warranted in many cases, albeit at a greater computational cost. Our results demonstrate that there is little justification, either statistical or computational, for continued use of the F3×4-style estimators.
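
    To make the role of stop codons concrete, the sketch below assembles codon frequencies as products of position-specific nucleotide frequencies and renormalizes over the 61 sense codons. This is a simplified illustration of the frequency construction only; the authors' corrected estimator goes further and solves for the underlying nucleotide frequencies, which this sketch does not attempt. The input frequency dictionaries are assumed to be supplied by the caller.

    ```python
    from itertools import product

    STOP = {"TAA", "TAG", "TGA"}  # stop codons, universal genetic code

    def codon_freqs_positional(pos_freqs):
        """pos_freqs: three dicts mapping A/C/G/T to a frequency at codon
        positions 1-3. Returns a dict of frequencies over sense codons."""
        freqs = {}
        for c in product("ACGT", repeat=3):
            codon = "".join(c)
            if codon not in STOP:
                freqs[codon] = (pos_freqs[0][c[0]] * pos_freqs[1][c[1]]
                                * pos_freqs[2][c[2]])
        total = sum(freqs.values())  # mass that fell on stop codons is re-spread
        return {k: v / total for k, v in freqs.items()}

    f = {"A": 0.3, "C": 0.2, "G": 0.2, "T": 0.3}
    print(len(codon_freqs_positional([f, f, f])), "sense codons, sum =",
          round(sum(codon_freqs_positional([f, f, f]).values()), 6))
    ```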

  1. Stochastic linear hybrid systems: Modeling, estimation, and application

    Science.gov (United States)

    Seah, Chze Eng

    Hybrid systems are dynamical systems which have interacting continuous state and discrete state (or mode). Accurate modeling and state estimation of hybrid systems are important in many applications. We propose a hybrid system model, known as the Stochastic Linear Hybrid System (SLHS), to describe hybrid systems with stochastic linear system dynamics in each mode and stochastic continuous-state-dependent mode transitions. We then develop a hybrid estimation algorithm, called the State-Dependent-Transition Hybrid Estimation (SDTHE) algorithm, to estimate the continuous state and discrete state of the SLHS from noisy measurements. It is shown that the SDTHE algorithm is more accurate or more computationally efficient than existing hybrid estimation algorithms. Next, we develop a performance analysis algorithm to evaluate the performance of the SDTHE algorithm in a given operating scenario. We also investigate sufficient conditions for the stability of the SDTHE algorithm. The proposed SLHS model and SDTHE algorithm are illustrated to be useful in several applications. In Air Traffic Control (ATC), to facilitate implementation of new efficient operational concepts, accurate modeling and estimation of aircraft trajectories are needed. In ATC, an aircraft's trajectory can be divided into a number of flight modes. Furthermore, as the aircraft is required to follow a given flight plan or clearance, its flight mode transitions are dependent on its continuous state. However, the flight mode transitions are also stochastic due to navigation uncertainties or unknown pilot intents. Thus, we develop an aircraft dynamics model in ATC based on the SLHS. The SDTHE algorithm is then used in aircraft tracking applications to estimate the positions/velocities of aircraft and their flight modes accurately. Next, we develop an aircraft conformance monitoring algorithm to detect any deviations of aircraft trajectories in ATC that might compromise safety. In this application, the SLHS

  2. Errors and parameter estimation in precipitation-runoff modeling: 1. Theory

    Science.gov (United States)

    Troutman, Brent M.

    1985-01-01

    Errors in complex conceptual precipitation-runoff models may be analyzed by placing them into a statistical framework. This amounts to treating the errors as random variables and defining the probabilistic structure of the errors. By using such a framework, a large array of techniques, many of which have been presented in the statistical literature, becomes available to the modeler for quantifying and analyzing the various sources of error. A number of these techniques are reviewed in this paper, with special attention to the peculiarities of hydrologic models. Known methodologies for parameter estimation (calibration) are particularly applicable for obtaining physically meaningful estimates and for explaining how bias in runoff prediction caused by model error and input error may contribute to bias in parameter estimation.

  3. NEW MODEL FOR SOLAR RADIATION ESTIMATION FROM ...

    African Journals Online (AJOL)

    NEW MODEL FOR SOLAR RADIATION ESTIMATION FROM MEASURED AIR TEMPERATURE AND ... Nigerian Journal of Technology ... Solar radiation measurement is not sufficient in Nigeria for various reasons such as maintenance and ...

  4. Sparse estimation of polynomial dynamical models

    NARCIS (Netherlands)

    Toth, R.; Hjalmarsson, H.; Rojas, C.R.; Kinnaert, M.

    2012-01-01

    In many practical situations, it is highly desirable to estimate an accurate mathematical model of a real system using as few parameters as possible. This can be motivated either from appealing to a parsimony principle (Occam's razor) or from the viewpoint of the utilization complexity in terms of
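
    A common computational route to such sparse estimates is an ℓ1 (LASSO) penalty, which drives the coefficients of superfluous regressors to zero. The sketch below applies it to a static polynomial regression with synthetic data; it illustrates the parsimony idea only and is not the authors' method for dynamical models.

    ```python
    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)
    u = rng.uniform(-1.0, 1.0, 200)
    X = np.column_stack([u**k for k in range(1, 6)])  # candidate terms u..u^5
    y = 2.0 * u - 0.5 * u**3 + 0.05 * rng.standard_normal(200)  # sparse truth

    model = Lasso(alpha=0.01).fit(X, y)
    print(model.coef_)  # only the u and u^3 coefficients stay clearly nonzero
    ```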

  5. Parameter estimation in nonlinear models for pesticide degradation

    International Nuclear Information System (INIS)

    Richter, O.; Pestemer, W.; Bunte, D.; Diekkrueger, B.

    1991-01-01

    A wide class of environmental transfer models is formulated as ordinary or partial differential equations. With the availability of fast computers, the numerical solution of large systems became feasible. The main difficulty in performing a realistic and convincing simulation of the fate of a substance in the biosphere is not the implementation of numerical techniques but rather the incomplete data base for parameter estimation. Parameter estimation is a synonym for statistical and numerical procedures to derive reasonable numerical values for model parameters from data. The classical method is the familiar linear regression technique which dates back to the 18th century. Because it is easy to handle, linear regression has long been established as a convenient tool for analysing relationships. However, the wide use of linear regression has led to an overemphasis of linear relationships. In nature, most relationships are nonlinear and linearization often gives a poor approximation of reality. Furthermore, pure regression models are not capable of mapping the dynamics of a process. Therefore, realistic models involve the evolution in time (and space). This leads in a natural way to the formulation of differential equations. To establish the link between data and dynamical models, advanced numerical parameter identification methods have been developed in recent years. This paper demonstrates the application of these techniques to estimation problems in the field of pesticide dynamics. (7 refs., 5 figs., 2 tabs.)
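
    The simplest nonlinear example in this setting, first-order degradation kinetics, already shows the step from linear regression to fitting a dynamic model directly. A sketch using nonlinear least squares on invented residue data (days, mg/kg):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def decay(t, c0, k):
        return c0 * np.exp(-k * t)  # first-order degradation kinetics

    t = np.array([0., 3., 7., 14., 28., 56.])
    c = np.array([10.2, 8.1, 6.0, 3.9, 1.6, 0.3])  # illustrative residues

    (c0, k), cov = curve_fit(decay, t, c, p0=(10.0, 0.05))
    print(f"DT50 = {np.log(2)/k:.1f} days, "
          f"std errors = {np.sqrt(np.diag(cov))}")
    ```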

  6. Contributions in Radio Channel Sounding, Modeling, and Estimation

    DEFF Research Database (Denmark)

    Pedersen, Troels

    2009-01-01

    This thesis spans three strongly related topics in wireless communication: channel sounding, modeling, and estimation. Three main problems are addressed: optimization of spatio-temporal apertures for channel sounding; estimation of per-path power spectral densities (psds); and modeling of the radio channel. The channel model relies on a ``propagation graph'' where vertices represent scatterers and edges represent the wave propagation conditions between scatterers. The graph has a recursive structure, which permits modeling of the transfer function of the graph. We derive a closed-form expression of the infinite-bounce impulse response. This expression is used for simulation of the impulse response of randomly generated propagation graphs. The obtained realizations exhibit the well-observed exponential power decay versus delay and specular-to-diffuse transition.

  7. Internal combustion engines - Modelling, estimation and control issues

    Energy Technology Data Exchange (ETDEWEB)

    Vigild, C.W.

    2001-12-01

    Alternative power-trains have become buzz words in the automotive industry in the recent past. New technologies like Lithium-Ion batteries or fuel cells combined with highly efficient electrical motors show promising results. However, both technologies are extremely expensive and important questions like 'How are we going to supply fuel-cells with hydrogen in an environmentally friendly way?', 'How are we going to improve the range - and recharging speed - of electrical vehicles?' and 'How will our existing infrastructure cope with such changes?' are still left unanswered. Hence, the internal combustion engine with all its shortcomings is going to stay with us for many years to come. What the future will really bring in this area is uncertain, but one thing can be said for sure; the time of the pipe in - pipe out engine concept is over. Modern engines, Diesel or gasoline, have in the recent past been provided with many new technologies to improve both performance and handling and to cope with the tightening emission legislation. However, as new devices are included, the number of control inputs is also gradually increased. Hence, the control matrix dimension has grown to a considerable size, and the typical table- and regression-based engine calibration procedures currently in use today contain both challenging and time-consuming tasks. One way to improve understanding of engines and provide a more comprehensive picture of the control problem is by use of simplified physical modelling - one of the main thrusts of this dissertation. The application of simplified physical modelling as a foundation for engine estimation and control design is first motivated by two control applications. The control problem concerns Air/Fuel ratio control of Spark Ignition engines. Two different ways of control are presented; one based on a model-based Extended Kalman Filter updated predictor, and one based on robust H∞ techniques. Both controllers are

  8. Optimal Model-Based Fault Estimation and Correction for Particle Accelerators and Industrial Plants Using Combined Support Vector Machines and First Principles Models

    International Nuclear Information System (INIS)

    2010-01-01

    Timely estimation of deviations from optimal performance in complex systems and the ability to identify corrective measures in response to the estimated parameter deviations have been the subject of extensive research over the past four decades. The implications in terms of lost revenue from costly industrial processes, operation of large-scale public works projects and the volume of the published literature on this topic clearly indicate the significance of the problem. Applications range from manufacturing industries (integrated circuits, automotive, etc.), to large-scale chemical plants, pharmaceutical production, power distribution grids, and avionics. In this project we investigated a new framework for building parsimonious models that are suited for diagnosis and fault estimation of complex technical systems. We used Support Vector Machines (SVMs) to model potentially time-varying parameters of a First-Principles (FP) description of the process. The combined SVM and FP model was built (i.e. model parameters were trained) using constrained optimization techniques. We used the trained models to estimate faults affecting simulated beam lifetime. In the case where a large number of process inputs are required for model-based fault estimation, the proposed framework performs an optimal nonlinear principal component analysis of the large-scale input space, and creates a lower dimension feature space in which fault estimation results can be effectively presented to the operation personnel. To fulfill the main technical objectives of the Phase I research, our Phase I efforts have focused on: (1) SVM Training in a Combined Model Structure - We developed the software for the constrained training of the SVMs in a combined model structure, and successfully modeled the parameters of a first-principles model for beam lifetime with support vectors. (2) Higher-order Fidelity of the Combined Model - We used constrained training to ensure that the output of the SVM (i.e. the

  9. Optimal Model-Based Fault Estimation and Correction for Particle Accelerators and Industrial Plants Using Combined Support Vector Machines and First Principles Models

    Energy Technology Data Exchange (ETDEWEB)

    Sayyar-Rodsari, Bijan; Schweiger, Carl; /SLAC /Pavilion Technologies, Inc., Austin, TX

    2010-08-25

    Timely estimation of deviations from optimal performance in complex systems and the ability to identify corrective measures in response to the estimated parameter deviations have been the subject of extensive research over the past four decades. The implications in terms of lost revenue from costly industrial processes, operation of large-scale public works projects and the volume of the published literature on this topic clearly indicate the significance of the problem. Applications range from manufacturing industries (integrated circuits, automotive, etc.), to large-scale chemical plants, pharmaceutical production, power distribution grids, and avionics. In this project we investigated a new framework for building parsimonious models that are suited for diagnosis and fault estimation of complex technical systems. We used Support Vector Machines (SVMs) to model potentially time-varying parameters of a First-Principles (FP) description of the process. The combined SVM & FP model was built (i.e. model parameters were trained) using constrained optimization techniques. We used the trained models to estimate faults affecting simulated beam lifetime. In the case where a large number of process inputs are required for model-based fault estimation, the proposed framework performs an optimal nonlinear principal component analysis of the large-scale input space, and creates a lower dimension feature space in which fault estimation results can be effectively presented to the operation personnel. To fulfill the main technical objectives of the Phase I research, our Phase I efforts have focused on: (1) SVM Training in a Combined Model Structure - We developed the software for the constrained training of the SVMs in a combined model structure, and successfully modeled the parameters of a first-principles model for beam lifetime with support vectors. (2) Higher-order Fidelity of the Combined Model - We used constrained training to ensure that the output of the SVM (i.e. the

  10. Reserves' potential of sedimentary basin: modeling and estimation; Potentiel de reserves d'un bassin petrolier: modelisation et estimation

    Energy Technology Data Exchange (ETDEWEB)

    Lepez, V

    2002-12-01

    The aim of this thesis is to build a statistical model of oil and gas fields' sizes distribution in a given sedimentary basin, for both the fields that exist in the subsoil and those which have already been discovered. The estimation of all the parameters of the model, via estimation of the density of the observations by model selection of piecewise polynomials by penalized maximum likelihood techniques, makes it possible to provide estimates of the total number of fields which are yet to be discovered, by class of size. We assume that the set of underground fields' sizes is an i.i.d. sample of an unknown population with Levy-Pareto law with unknown parameter. The set of already discovered fields is a sub-sample without replacement from the previous one which is 'size-biased'. The associated inclusion probabilities are to be estimated. We prove that the probability density of the observations is the product of the underlying density and of an unknown weighting function representing the sampling bias. An arbitrary partition of the size interval being set (called a model), the analytical solutions of likelihood maximization enable estimation of both the parameter of the underlying Levy-Pareto law and the weighting function, which is assumed to be piecewise constant and based upon the partition. We add a monotonicity constraint over the latter, taking into account the fact that the bigger a field, the higher its probability of being discovered. Horvitz-Thompson-like estimators finally give the conclusion. We then allow our partitions to vary inside several classes of models and prove a model selection theorem which aims at selecting the best partition within a class, in terms of both Kullback and Hellinger risk of the associated estimator. We conclude with simulations and various applications to real data from sedimentary basins of four continents, in order to illustrate theoretical as well as practical aspects of our model. (author)
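
    For orientation, the Pareto tail index at the heart of such a size law has a closed-form maximum likelihood estimator once a lower size threshold is fixed. The sketch below uses synthetic data and ignores the size-biased discovery sampling that the thesis explicitly corrects for:

    ```python
    import numpy as np

    def pareto_alpha_mle(sizes, x_min):
        """ML estimate of the Pareto tail index from field sizes >= x_min."""
        x = np.asarray(sizes, dtype=float)
        x = x[x >= x_min]
        return len(x) / np.log(x / x_min).sum()

    rng = np.random.default_rng(1)
    x = 5.0 * (1.0 + rng.pareto(1.4, 500))  # synthetic sizes, true index 1.4
    print(pareto_alpha_mle(x, x_min=5.0))   # estimate should be close to 1.4
    ```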

  11. Adaptive Disturbance Estimation for Offset-Free SISO Model Predictive Control

    DEFF Research Database (Denmark)

    Huusom, Jakob Kjøbsted; Poulsen, Niels Kjølstad; Jørgensen, Sten Bay

    2011-01-01

    Offset-free tracking in Model Predictive Control requires estimation of unmeasured disturbances or the inclusion of an integrator. An algorithm for estimation of an unknown disturbance based on adaptive estimation with time-varying forgetting is introduced and benchmarked against the classical...
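
    For context, the sketch below shows a scalar recursive least-squares estimator with a fixed forgetting factor tracking a step disturbance; the algorithm in the paper differs in that its forgetting is time-varying. The data are synthetic.

    ```python
    import numpy as np

    def rls_forgetting(y, lam=0.95):
        """Recursive least-squares estimate of a slowly varying disturbance
        with forgetting factor lam (smaller lam = faster tracking)."""
        d_hat, p = 0.0, 1.0
        out = []
        for yk in y:
            k = p / (lam + p)           # gain (unit regressor)
            d_hat += k * (yk - d_hat)   # innovation update
            p = (1.0 - k) * p / lam     # covariance update with forgetting
            out.append(d_hat)
        return np.array(out)

    rng = np.random.default_rng(2)
    y = np.r_[np.zeros(50), 2.0 * np.ones(50)] + 0.1 * rng.standard_normal(100)
    print(rls_forgetting(y)[-1])  # tracks the step disturbance near 2.0
    ```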

  12. Model Effects on GLAS-Based Regional Estimates of Forest Biomass and Carbon

    Science.gov (United States)

    Nelson, Ross

    2008-01-01

    ICESat/GLAS waveform data are used to estimate biomass and carbon on a 1.27 million sq km study area, the Province of Quebec, Canada, below treeline. The same input data sets and sampling design are used in conjunction with four different predictive models to estimate total aboveground dry forest biomass and forest carbon. The four models include nonstratified and stratified versions of a multiple linear model where either biomass or (square root of) biomass serves as the dependent variable. The use of different models in Quebec introduces differences in Provincial biomass estimates of up to 0.35 Gt (range 4.94±0.28 Gt to 5.29±0.36 Gt). The results suggest that if different predictive models are used to estimate regional carbon stocks in different epochs, e.g., y2005, y2015, one might mistakenly infer an apparent aboveground carbon "change" of, in this case, 0.18 Gt, or approximately 7% of the aboveground carbon in Quebec, due solely to the use of different predictive models. These findings argue for model consistency in future, LiDAR-based carbon monitoring programs. Regional biomass estimates from the four GLAS models are compared to ground estimates derived from an extensive network of 16,814 ground plots located in southern Quebec. Stratified models proved to be more accurate and precise than either of the two nonstratified models tested.

  13. Model independent foreground power spectrum estimation using WMAP 5-year data

    International Nuclear Information System (INIS)

    Ghosh, Tuhin; Souradeep, Tarun; Saha, Rajib; Jain, Pankaj

    2009-01-01

    In this paper, we propose and implement on WMAP 5 yr data a model independent approach to foreground power spectrum estimation for multifrequency observations of the CMB experiments. Recently, a model independent approach to CMB power spectrum estimation was proposed by Saha et al. 2006. This methodology demonstrates that the CMB power spectrum can be reliably estimated solely from WMAP data without assuming any template models for the foreground components. In the current paper, we extend this work to estimate the galactic foreground power spectrum using the WMAP 5 yr maps following a self-contained analysis. We apply the model independent method in harmonic basis to estimate the foreground power spectrum and frequency dependence of combined foregrounds. We also study the behavior of synchrotron spectral index variation over different regions of the sky. We use the full sky Haslam map as an external template to increase the degrees of freedom, while computing the synchrotron spectral index over the frequency range from 408 MHz to 94 GHz. We compare our results with those obtained from maximum entropy method foreground maps, which are formed in pixel space. We find that relative to our model independent estimates the maximum entropy method maps overestimate the foreground power close to the galactic plane and underestimate it at high latitudes.

  14. Estimation Parameters And Modelling Zero Inflated Negative Binomial

    Directory of Open Access Journals (Sweden)

    Cindy Cahyaning Astuti

    2016-11-01

    Full Text Available Regression analysis is used to determine the relationship between one or several response variables (Y) and one or several predictor variables (X). A regression model between predictor variables and a Poisson-distributed response variable is called a Poisson regression model. Since Poisson regression requires equality between mean and variance, it is not appropriate to apply this model to overdispersed data (variance higher than mean). The Poisson regression model is commonly used to analyze count data. With count data, it is common to encounter observations with zero values in a large proportion on the response variable (zero inflation). Poisson regression can be used to analyze count data, but it cannot solve the problem of excess zero values in the response variable. An alternative model which is more suitable for overdispersed data and can solve the problem of excess zero values in the response variable is the Zero Inflated Negative Binomial (ZINB) model. In this research, ZINB is applied to the case of Tetanus Neonatorum in East Java. The aim of this research is to examine the likelihood function, to form an algorithm to estimate the parameters of ZINB, and to apply the ZINB model to the case of Tetanus Neonatorum in East Java. The Maximum Likelihood Estimation (MLE) method is used to estimate the parameters of ZINB, and the likelihood function is maximized using the Expectation Maximization (EM) algorithm. Test results of the ZINB regression model showed that the predictor variables with a partially significant effect in the negative binomial model are the percentage of pregnant women's visits and the percentage of births assisted by maternal health personnel, while the predictor variable with a partially significant effect in the zero-inflation model is the percentage of neonatus visits.
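
    As a minimal illustration of the ZINB likelihood (intercept-only, with no covariates, unlike the study's regression), the sketch below maximizes the mixture log-likelihood numerically rather than via EM; the count data are invented.

    ```python
    import numpy as np
    from scipy.special import gammaln
    from scipy.optimize import minimize

    def zinb_negloglik(params, y):
        """params = (logit pi, log mu, log alpha); NB2 parameterization."""
        pi = 1.0 / (1.0 + np.exp(-params[0]))
        mu, alpha = np.exp(params[1]), np.exp(params[2])
        r = 1.0 / alpha
        p0 = (r / (r + mu)) ** r  # negative binomial probability of zero
        nb_logpmf = (gammaln(y + r) - gammaln(r) - gammaln(y + 1)
                     + r * np.log(r / (r + mu)) + y * np.log(mu / (r + mu)))
        ll = np.where(y == 0,
                      np.log(pi + (1.0 - pi) * p0),     # structural + NB zeros
                      np.log(1.0 - pi) + nb_logpmf)     # positive counts
        return -ll.sum()

    y = np.array([0] * 60 + [0, 1, 1, 2, 3, 3, 4, 5, 7, 9] * 4)  # zero-heavy
    fit = minimize(zinb_negloglik, x0=np.zeros(3), args=(y,),
                   method="Nelder-Mead")
    print(fit.x)  # estimated logit(pi), log(mu), log(alpha)
    ```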

  15. Methodological Framework for Analysis of Buildings-Related Programs: The GPRA Metrics Effort

    Energy Technology Data Exchange (ETDEWEB)

    Elliott, Douglas B.; Anderson, Dave M.; Belzer, David B.; Cort, Katherine A.; Dirks, James A.; Hostick, Donna J.

    2004-06-18

    The requirements of the Government Performance and Results Act (GPRA) of 1993 mandate the reporting of outcomes expected to result from programs of the Federal government. The U.S. Department of Energy’s (DOE’s) Office of Energy Efficiency and Renewable Energy (EERE) develops official metrics for its 11 major programs using its Office of Planning, Budget Formulation, and Analysis (OPBFA). OPBFA conducts an annual integrated modeling analysis to produce estimates of the energy, environmental, and financial benefits expected from EERE’s budget request. Two of EERE’s major programs include the Building Technologies Program (BT) and Office of Weatherization and Intergovernmental Program (WIP). Pacific Northwest National Laboratory (PNNL) supports the OPBFA effort by developing the program characterizations and other market information affecting these programs that is necessary to provide input to the EERE integrated modeling analysis. Throughout the report we refer to these programs as “buildings-related” programs, because the approach is not limited in application to BT or WIP. To adequately support OPBFA in the development of official GPRA metrics, PNNL communicates with the various activities and projects in BT and WIP to determine how best to characterize their activities planned for the upcoming budget request. PNNL then analyzes these projects to determine what the results of the characterizations would imply for energy markets, technology markets, and consumer behavior. This is accomplished by developing nonintegrated estimates of energy, environmental, and financial benefits (i.e., outcomes) of the technologies and practices expected to result from the budget request. These characterizations and nonintegrated modeling results are provided to OPBFA as inputs to the official benefits estimates developed for the Federal Budget. This report documents the approach and methodology used to estimate future energy, environmental, and financial benefits

  16. Parameter Estimation in Probit Model for Multivariate Multinomial Response Using SMLE

    Directory of Open Access Journals (Sweden)

    Jaka Nugraha

    2012-02-01

    Full Text Available In the research fields of transportation, market research and politics, multivariate multinomial responses are often involved. In this paper, we discuss the modeling of multivariate multinomial responses using a probit model. The parameters are estimated by Maximum Likelihood Estimation (MLE) based on the GHK simulation, a method known as Simulated Maximum Likelihood Estimation (SMLE). The likelihood function of the probit model contains probability values that must be resolved by simulation. Using the GHK simulation algorithm, the estimator equations are obtained for the parameters of the probit model. Keywords: probit model, Newton-Raphson iteration, GHK simulator, MLE, simulated log-likelihood
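
    The GHK step is what makes the probit likelihood computable: it estimates multivariate normal rectangle probabilities by sequentially drawing truncated normals along a Cholesky factorization. A minimal sketch for P(X < b) with X ~ N(0, Σ), using an invented bivariate example:

    ```python
    import numpy as np
    from scipy.stats import norm

    def ghk_prob(b, sigma, n_draws=10000, seed=0):
        """GHK estimate of P(X_1<b_1,...,X_d<b_d) for X ~ N(0, sigma)."""
        rng = np.random.default_rng(seed)
        L = np.linalg.cholesky(sigma)
        d = len(b)
        u = rng.uniform(size=(n_draws, d))
        eta = np.zeros((n_draws, d))
        w = np.ones(n_draws)
        for j in range(d):
            ub = (b[j] - eta[:, :j] @ L[j, :j]) / L[j, j]
            pj = norm.cdf(ub)
            w *= pj                                # sequential probability
            eta[:, j] = norm.ppf(u[:, j] * pj)     # truncated draw below ub
        return w.mean()

    sigma = np.array([[1.0, 0.5], [0.5, 1.0]])
    print(ghk_prob(np.array([0.0, 0.0]), sigma))   # ~1/3 for rho = 0.5
    ```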

  17. HDDM: Hierarchical Bayesian estimation of the Drift-Diffusion Model in Python.

    Science.gov (United States)

    Wiecki, Thomas V; Sofer, Imri; Frank, Michael J

    2013-01-01

    The diffusion model is a commonly used tool to infer latent psychological processes underlying decision-making, and to link them to neural mechanisms based on response times. Although efficient open source software has been made available to quantitatively fit the model to data, current estimation methods require an abundance of response time measurements to recover meaningful parameters, and only provide point estimates of each parameter. In contrast, hierarchical Bayesian parameter estimation methods are useful for enhancing statistical power, allowing for simultaneous estimation of individual subject parameters and the group distribution that they are drawn from, while also providing measures of uncertainty in these parameters in the posterior distribution. Here, we present a novel Python-based toolbox called HDDM (hierarchical drift diffusion model), which allows fast and flexible estimation of the drift-diffusion model and the related linear ballistic accumulator model. HDDM requires fewer data per subject/condition than non-hierarchical methods, allows for full Bayesian data analysis, and can handle outliers in the data. Finally, HDDM supports the estimation of how trial-by-trial measurements (e.g., fMRI) influence decision-making parameters. This paper will first describe the theoretical background of the drift diffusion model and Bayesian inference. We then illustrate usage of the toolbox on a real-world data set from our lab. Finally, parameter recovery studies show that HDDM beats alternative fitting methods like the χ²-quantile method as well as maximum likelihood estimation. The software and documentation can be downloaded at: http://ski.clps.brown.edu/hddm_docs/
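
    A typical fitting session, following the usage described in the paper (the file name and condition column here are hypothetical), looks roughly like this:

    ```python
    import hddm

    # Response-time data: columns rt, response, plus condition columns
    data = hddm.load_csv('mydata.csv')              # hypothetical file

    # Hierarchical model; drift rate v allowed to vary by stimulus condition
    model = hddm.HDDM(data, depends_on={'v': 'stim'})
    model.find_starting_values()   # optimize a starting point for MCMC
    model.sample(2000, burn=200)   # draw posterior samples
    model.print_stats()            # posterior summaries per parameter
    ```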

  18. HDDM: Hierarchical Bayesian estimation of the Drift-Diffusion Model in Python

    Directory of Open Access Journals (Sweden)

    Thomas V Wiecki

    2013-08-01

    Full Text Available The diffusion model is a commonly used tool to infer latent psychological processes underlying decision making, and to link them to neural mechanisms based on reaction times. Although efficient open source software has been made available to quantitatively fit the model to data, current estimation methods require an abundance of reaction time measurements to recover meaningful parameters, and only provide point estimates of each parameter. In contrast, hierarchical Bayesian parameter estimation methods are useful for enhancing statistical power, allowing for simultaneous estimation of individual subject parameters and the group distribution that they are drawn from, while also providing measures of uncertainty in these parameters in the posterior distribution. Here, we present a novel Python-based toolbox called HDDM (hierarchical drift diffusion model), which allows fast and flexible estimation of the drift-diffusion model and the related linear ballistic accumulator model. HDDM requires fewer data per subject/condition than non-hierarchical methods, allows for full Bayesian data analysis, and can handle outliers in the data. Finally, HDDM supports the estimation of how trial-by-trial measurements (e.g., fMRI) influence decision-making parameters. This paper will first describe the theoretical background of the drift-diffusion model and Bayesian inference. We then illustrate usage of the toolbox on a real-world data set from our lab. Finally, parameter recovery studies show that HDDM beats alternative fitting methods like the χ²-quantile method as well as maximum likelihood estimation. The software and documentation can be downloaded at: http://ski.clps.brown.edu/hddm_docs

  19. Methodology for characterizing modeling and discretization uncertainties in computational simulation

    Energy Technology Data Exchange (ETDEWEB)

    ALVIN,KENNETH F.; OBERKAMPF,WILLIAM L.; RUTHERFORD,BRIAN M.; DIEGERT,KATHLEEN V.

    2000-03-01

    This research effort focuses on methodology for quantifying the effects of model uncertainty and discretization error on computational modeling and simulation. The work is directed towards developing methodologies which treat model form assumptions within an overall framework for uncertainty quantification, for the purpose of developing estimates of total prediction uncertainty. The present effort consists of work in three areas: framework development for sources of uncertainty and error in the modeling and simulation process which impact model structure; model uncertainty assessment and propagation through Bayesian inference methods; and discretization error estimation within the context of non-deterministic analysis.

  20. Clock error models for simulation and estimation

    International Nuclear Information System (INIS)

    Meditch, J.S.

    1981-10-01

    Mathematical models for the simulation and estimation of errors in precision oscillators used as time references in satellite navigation systems are developed. The results, based on all currently known oscillator error sources, are directly implementable on a digital computer. The simulation formulation is sufficiently flexible to allow for the inclusion or exclusion of individual error sources as desired. The estimation algorithms, following from Kalman filter theory, provide directly for the error analysis of clock errors in both filtering and prediction
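
    One standard concrete instance of such a model is the two-state clock (phase and frequency offsets) driven by white-frequency and random-walk-frequency noise; whether this matches the memo's full error catalog is an assumption here. Its discrete-time form is directly usable for both simulation and Kalman filtering:

    ```python
    import numpy as np

    def simulate_clock(n, dt, q1, q2, seed=0):
        """Two-state clock error model: x = [phase offset, frequency offset].
        q1 = white-frequency noise intensity (phase random walk),
        q2 = random-walk-frequency noise intensity. Returns phase history."""
        rng = np.random.default_rng(seed)
        F = np.array([[1.0, dt], [0.0, 1.0]])
        Q = np.array([[q1 * dt + q2 * dt**3 / 3.0, q2 * dt**2 / 2.0],
                      [q2 * dt**2 / 2.0,           q2 * dt]])
        L = np.linalg.cholesky(Q)   # shape process noise to covariance Q
        x = np.zeros(2)
        phase = np.empty(n)
        for k in range(n):
            x = F @ x + L @ rng.standard_normal(2)
            phase[k] = x[0]
        return phase

    print(simulate_clock(5, 1.0, 1e-22, 1e-32))  # illustrative noise levels
    ```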

  1. Computational Intelligence in Software Cost Estimation: Evolving Conditional Sets of Effort Value Ranges

    OpenAIRE

    Papatheocharous, Efi; Andreou, Andreas S.

    2008-01-01

    In this approach we aimed at addressing the problem of large variances found in available historical data that are used in software cost estimation. Project data is expensive to collect, manage and maintain. Therefore, if we wish to lower the dependence of the estimation to

  2. An estimation of the accident sequence of the LOCA groups for the PSA model of the KSNP

    International Nuclear Information System (INIS)

    Han, Seok Jung; Yang, Joon Eon

    2004-01-01

    A new trend of the probabilistic safety assessment (PSA) technology is to improve and enhance the current PSA model to be adequate for risk-informed applications (RIA). Requirements of a PSA model for the RIA are summarized as (1) reduction of the conservatism in the model utilizing all available information and (2) consideration of the specific features of a plant as designed, as operated. This is because a PSA based on conservatism and insufficient consideration of the plant-specific features resulted in a shadow effect on the assessment results. When a PSA model is used in a risk-informed application, more precise risk information is more helpful to the decision making process, so the reduction of the conservatism and the consideration of the plant-specific features in a PSA model are the most essential elements. Recently, an effort has been made to modify the current PSA model for the Korea Standard Nuclear Power plant (KSNP) to be used in risk-informed applications. A re-estimation of the accident sequence of the loss of coolant accident (LOCA) groups for the PSA model of the KSNP has been performed

  3. Robust Estimation and Forecasting of the Capital Asset Pricing Model

    NARCIS (Netherlands)

    G. Bian (Guorui); M.J. McAleer (Michael); W.-K. Wong (Wing-Keung)

    2013-01-01

    textabstractIn this paper, we develop a modified maximum likelihood (MML) estimator for the multiple linear regression model with underlying student t distribution. We obtain the closed form of the estimators, derive the asymptotic properties, and demonstrate that the MML estimator is more

  4. Robust Estimation and Forecasting of the Capital Asset Pricing Model

    NARCIS (Netherlands)

    G. Bian (Guorui); M.J. McAleer (Michael); W.-K. Wong (Wing-Keung)

    2010-01-01

    textabstractIn this paper, we develop a modified maximum likelihood (MML) estimator for the multiple linear regression model with underlying student t distribution. We obtain the closed form of the estimators, derive the asymptotic properties, and demonstrate that the MML estimator is more

  5. Vision-based stress estimation model for steel frame structures with rigid links

    Science.gov (United States)

    Park, Hyo Seon; Park, Jun Su; Oh, Byung Kwan

    2017-07-01

    This paper presents a stress estimation model for the safety evaluation of steel frame structures with rigid links using a vision-based monitoring system. In this model, the deformed shape of a structure under external loads is estimated via displacements measured by a motion capture system (MCS), which is a non-contact displacement measurement device. During the estimation of the deformed shape, the effective lengths of the rigid link ranges in the frame structure are identified. The radius of curvature of the structural member to be monitored is calculated using the estimated deformed shape and is employed to estimate stress. Using MCS in the presented model, the safety of a structure can be assessed without strain gauges. In addition, because the stress is directly extracted from the radius of curvature obtained from the measured deformed shape, information on the loadings and boundary conditions of the structure is not required. Furthermore, the model, which includes the identification of the effective lengths of the rigid links, can consider the influences of the stiffness of the connection and support on the deformation in the stress estimation. To verify the applicability of the presented model, static loading tests for a steel frame specimen were conducted. By comparing the stress estimated by the model with the measured stress, the validity of the model was confirmed.
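
    The core conversion, from a measured deformed shape to stress, follows beam bending theory: extreme-fiber stress equals E·c·κ, where κ is the curvature (the reciprocal of the radius of curvature). A small-slope sketch on invented displacement data; it omits the paper's rigid-link identification step:

    ```python
    import numpy as np

    def bending_stress_from_displacements(x, v, E, c):
        """Extreme-fiber bending stress from a measured deformed shape.
        x: positions along the member (m); v: lateral displacements (m);
        E: elastic modulus (Pa); c: neutral axis to extreme fiber (m).
        Fits a parabola and uses sigma = E * c * kappa with kappa ~ v''."""
        a, _, _ = np.polyfit(x, v, 2)
        kappa = 2.0 * a               # curvature for small slopes
        return E * c * kappa

    x = np.linspace(0.0, 2.0, 9)
    v = 1e-3 * (0.8 * x * (2.0 - x))  # illustrative measured deflections
    print(bending_stress_from_displacements(x, v, E=210e9, c=0.1) / 1e6, "MPa")
    ```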

  6. Context Tree Estimation in Variable Length Hidden Markov Models

    OpenAIRE

    Dumont, Thierry

    2011-01-01

    We address the issue of context tree estimation in variable length hidden Markov models. We propose an estimator of the context tree of the hidden Markov process which needs no prior upper bound on the depth of the context tree. We prove that the estimator is strongly consistent. This uses information-theoretic mixture inequalities in the spirit of Finesso and Lorenzo (Consistent estimation of the order for Markov and hidden Markov chains, 1990) and E. Gassiat and S. Boucheron (Optimal error exp...

  7. Estimation of stature from sternum - Exploring the quadratic models.

    Science.gov (United States)

    Saraf, Ashish; Kanchan, Tanuj; Krishan, Kewal; Ateriya, Navneet; Setia, Puneet

    2018-04-14

    Identification of the dead is significant in examination of unknown, decomposed and mutilated human remains. Establishing the biological profile is the central issue in such a scenario, and stature estimation remains one of the important criteria in this regard. The present study was undertaken to estimate stature from different parts of the sternum. A sample of 100 sterna was obtained from individuals during medicolegal autopsies. The body length of the deceased and various measurements of the sternum were recorded. Student's t-test was performed to find the sex differences in stature and sternal measurements included in the study. Correlations between stature and sternal measurements were analysed using Karl Pearson's correlation, and linear and quadratic regression models were derived. All the measurements were found to be significantly larger in males than females. Stature correlated best with the combined length of sternum, among males (R = 0.894), females (R = 0.859), and for the total sample (R = 0.891). The study showed that the models derived for stature estimation from combined length of sternum are likely to give the most accurate estimates of stature in forensic case work when compared to manubrium and mesosternum. Accuracy of stature estimation further increased with quadratic models derived for the mesosternum among males and combined length of sternum among males and females when compared to linear regression models. Future studies in different geographical locations and a larger sample size are proposed to confirm the study observations. Copyright © 2018 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
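
    The comparison between linear and quadratic models reduces to fitting two polynomials and comparing their errors. A sketch on invented measurements (not the study's data):

    ```python
    import numpy as np

    # Invented illustrative data: combined sternum length (mm) vs stature (cm)
    length = np.array([138., 142., 150., 155., 161., 168., 172., 180.])
    stature = np.array([158., 160., 165., 168., 171., 175., 177., 182.])

    lin = np.polyfit(length, stature, 1)    # linear regression model
    quad = np.polyfit(length, stature, 2)   # quadratic regression model

    def sse(coef):
        return float(np.sum((np.polyval(coef, length) - stature) ** 2))

    print(sse(lin), sse(quad))  # quadratic can only lower in-sample error
    ```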

  8. Reproductive effort in viscous populations

    NARCIS (Netherlands)

    Pen, Ido

    Here I study a kin selection model of reproductive effort, the allocation of resources to fecundity versus survival, in a patch-structured population. Breeding females remain in the same patch for life. Offspring have costly, partial long-distance dispersal and compete for breeding sites, which

  9. Review Genetic prediction models and heritability estimates for ...

    African Journals Online (AJOL)

    edward

    2015-05-09

    May 9, 2015 ... Instead, through stepwise inclusion of type traits in the PH model, the .... Great Britain uses a bivariate animal model for all breeds, ... Štípková, 2012) and then applying linear models to the combined datasets with the ..... multivariate analyses, it is difficult to use indicator traits to estimate longevity early in life ...

  10. Application of isotopic information for estimating parameters in Philip infiltration model

    Directory of Open Access Journals (Sweden)

    Tao Wang

    2016-10-01

    Full Text Available Minimizing parameter uncertainty is crucial in the application of hydrologic models. Isotopic information in various hydrologic components of the water cycle can expand our knowledge of the dynamics of water flow in the system, provide additional information for parameter estimation, and improve parameter identifiability. This study combined the Philip infiltration model with an isotopic mixing model using an isotopic mass balance approach for estimating parameters in the Philip infiltration model. Two approaches to parameter estimation were compared: (a) using isotopic information to determine the soil water transmission and then hydrologic information to estimate the soil sorptivity, and (b) using hydrologic information to determine the soil water transmission and the soil sorptivity. Results of parameter estimation were verified through a rainfall infiltration experiment in a laboratory under rainfall with constant isotopic compositions and uniform initial soil water content conditions. Experimental results showed that approach (a), using isotopic and hydrologic information, estimated the soil water transmission in the Philip infiltration model in a manner that matched measured values well. The results of parameter estimation of approach (a) were better than those of approach (b). It was also found that the analytical precision of hydrogen and oxygen stable isotopes had a significant effect on parameter estimation using isotopic information.
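
    Approach (b), the purely hydrologic route, amounts to a least-squares fit of the two-term Philip equation I(t) = S·t^(1/2) + A·t, which is linear in its parameters S (sorptivity) and A (transmission). A sketch with synthetic data:

    ```python
    import numpy as np

    def fit_philip(t, I):
        """Least-squares fit of cumulative infiltration I(t) = S*sqrt(t) + A*t.
        Returns sorptivity S and transmission parameter A."""
        X = np.column_stack([np.sqrt(t), t])
        (S, A), *_ = np.linalg.lstsq(X, I, rcond=None)
        return S, A

    t = np.array([1., 2., 5., 10., 20., 40., 60.])  # minutes
    rng = np.random.default_rng(3)
    I = 0.8 * np.sqrt(t) + 0.05 * t + 0.02 * rng.standard_normal(7)
    print(fit_philip(t, I))  # close to the true values (0.8, 0.05)
    ```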

  11. Comparison of modeling approaches to prioritize chemicals based on estimates of exposure and exposure potential.

    Science.gov (United States)

    Mitchell, Jade; Arnot, Jon A; Jolliet, Olivier; Georgopoulos, Panos G; Isukapalli, Sastry; Dasgupta, Surajit; Pandian, Muhilan; Wambaugh, John; Egeghy, Peter; Cohen Hubal, Elaine A; Vallero, Daniel A

    2013-08-01

    While only limited data are available to characterize the potential toxicity of over 8 million commercially available chemical substances, there is even less information available on the exposure and use-scenarios that are required to link potential toxicity to human and ecological health outcomes. Recent improvements and advances such as high throughput data gathering, high performance computational capabilities, and predictive chemical inherency methodology make this an opportune time to develop an exposure-based prioritization approach that can systematically utilize and link the asymmetrical bodies of knowledge for hazard and exposure. In response to the US EPA's need to develop novel approaches and tools for rapidly prioritizing chemicals, a "Challenge" was issued to several exposure model developers to aid the understanding of current systems in a broader sense and to assist the US EPA's effort to develop an approach comparable to other international efforts. A common set of chemicals was prioritized under each current approach. The results are presented herein along with a comparative analysis of the rankings of the chemicals based on metrics of exposure potential or actual exposure estimates. The analysis illustrates the similarities and differences across the domains of information incorporated in each modeling approach. The overall findings indicate a need to reconcile exposures from diffuse, indirect sources (far-field) with exposures from directly applied chemicals in consumer products or resulting from the presence of a chemical in a microenvironment like a home or vehicle. Additionally, the exposure scenario, including the mode of entry into the environment (i.e. through air, water or sediment) appears to be an important determinant of the level of agreement between modeling approaches. Copyright © 2013 Elsevier B.V. All rights reserved.

  12. Comparison of modeling approaches to prioritize chemicals based on estimates of exposure and exposure potential

    Science.gov (United States)

    Mitchell, Jade; Arnot, Jon A.; Jolliet, Olivier; Georgopoulos, Panos G.; Isukapalli, Sastry; Dasgupta, Surajit; Pandian, Muhilan; Wambaugh, John; Egeghy, Peter; Cohen Hubal, Elaine A.; Vallero, Daniel A.

    2014-01-01

    While only limited data are available to characterize the potential toxicity of over 8 million commercially available chemical substances, there is even less information available on the exposure and use-scenarios that are required to link potential toxicity to human and ecological health outcomes. Recent improvements and advances such as high throughput data gathering, high performance computational capabilities, and predictive chemical inherency methodology make this an opportune time to develop an exposure-based prioritization approach that can systematically utilize and link the asymmetrical bodies of knowledge for hazard and exposure. In response to the US EPA’s need to develop novel approaches and tools for rapidly prioritizing chemicals, a “Challenge” was issued to several exposure model developers to aid the understanding of current systems in a broader sense and to assist the US EPA’s effort to develop an approach comparable to other international efforts. A common set of chemicals was prioritized under each current approach. The results are presented herein along with a comparative analysis of the rankings of the chemicals based on metrics of exposure potential or actual exposure estimates. The analysis illustrates the similarities and differences across the domains of information incorporated in each modeling approach. The overall findings indicate a need to reconcile exposures from diffuse, indirect sources (far-field) with exposures from directly applied chemicals in consumer products or resulting from the presence of a chemical in a microenvironment like a home or vehicle. Additionally, the exposure scenario, including the mode of entry into the environment (i.e. through air, water or sediment) appears to be an important determinant of the level of agreement between modeling approaches. PMID:23707726

  13. Unemployment estimation: Spatial point referenced methods and models

    KAUST Repository

    Pereira, Soraia

    2017-06-26

    The Portuguese Labor Force Survey, from the 4th quarter of 2014 onwards, started geo-referencing the sampling units, namely the dwellings in which the surveys are carried out. This opens new possibilities in analysing and estimating unemployment and its spatial distribution across any region. The labor force survey chooses, according to a preestablished sampling criterion, a certain number of dwellings across the nation and surveys the number of unemployed in these dwellings. Based on this survey, the National Statistical Institute of Portugal presently uses direct estimation methods to estimate the national unemployment figures. Recently, there has been increased interest in estimating these figures in smaller areas. Direct estimation methods, due to reduced sampling sizes in small areas, tend to produce fairly large sampling variations therefore model based methods, which tend to

  14. Line impedance estimation using model based identification technique

    DEFF Research Database (Denmark)

    Ciobotaru, Mihai; Agelidis, Vassilios; Teodorescu, Remus

    2011-01-01

    The estimation of the line impedance can be used by the control of numerous grid-connected systems, such as active filters, islanding detection techniques, non-linear current controllers, detection of the on/off grid operation mode. Therefore, estimating the line impedance can add extra functions into the operation of the grid-connected power converters. This paper describes a quasi passive method for estimating the line impedance of the distribution electricity network. The method uses the model based identification technique to obtain the resistive and inductive parts of the line impedance. The quasi

  15. A spatial method to calculate small-scale fisheries effort in data poor scenarios.

    Science.gov (United States)

    Johnson, Andrew Frederick; Moreno-Báez, Marcia; Giron-Nava, Alfredo; Corominas, Julia; Erisman, Brad; Ezcurra, Exequiel; Aburto-Oropeza, Octavio

    2017-01-01

    To gauge the collateral impacts of fishing we must know where fishing boats operate and how much they fish. Although small-scale fisheries land approximately the same amount of fish for human consumption as industrial fleets globally, methods of estimating their fishing effort are comparatively poor. We present an accessible, spatial method of calculating the effort of small-scale fisheries based on two simple measures that are available, or at least easily estimated, in even the most data-poor fisheries: the number of boats and the local coastal human population. We illustrate the method using a small-scale fisheries case study from the Gulf of California, Mexico, and show that our measure of Predicted Fishing Effort (PFE), measured as the number of boats operating in a given area per day adjusted by the number of people in local coastal populations, can accurately predict fisheries landings in the Gulf. Comparing our values of PFE to commercial fishery landings throughout the Gulf also indicates that the current number of small-scale fishing boats in the Gulf is approximately double what is required to land theoretical maximum fish biomass. Our method is fishery-type independent and can be used to quantitatively evaluate the efficacy of growth in small-scale fisheries. This new method provides an important first step towards estimating the fishing effort of small-scale fleets globally.

  16. Semiparametric Gaussian copula models : Geometry and efficient rank-based estimation

    NARCIS (Netherlands)

    Segers, J.; van den Akker, R.; Werker, B.J.M.

    2014-01-01

    We propose, for multivariate Gaussian copula models with unknown margins and structured correlation matrices, a rank-based, semiparametrically efficient estimator for the Euclidean copula parameter. This estimator is defined as a one-step update of a rank-based pilot estimator in the direction of

  17. Sensitivity and uncertainty analysis for the annual phosphorus loss estimator model.

    Science.gov (United States)

    Bolster, Carl H; Vadas, Peter A

    2013-07-01

    Models are often used to predict phosphorus (P) loss from agricultural fields. Although it is commonly recognized that model predictions are inherently uncertain, few studies have addressed prediction uncertainties using P loss models. In this study we assessed the effect of model input error on predictions of annual P loss by the Annual P Loss Estimator (APLE) model. Our objectives were (i) to conduct a sensitivity analysis for all APLE input variables to determine which variables the model is most sensitive to, (ii) to determine whether the relatively easy-to-implement first-order approximation (FOA) method provides accurate estimates of model prediction uncertainties by comparing results with the more accurate Monte Carlo simulation (MCS) method, and (iii) to evaluate the performance of the APLE model against measured P loss data when uncertainties in model predictions and measured data are included. Our results showed that for low to moderate uncertainties in APLE input variables, the FOA method yields reasonable estimates of model prediction uncertainties, although for cases where manure solid content is between 14 and 17%, the FOA method may not be as accurate as the MCS method due to a discontinuity in the manure P loss component of APLE at a manure solid content of 15%. The estimated uncertainties in APLE predictions based on assumed errors in the input variables ranged from ±2 to 64% of the predicted value. Results from this study highlight the importance of including reasonable estimates of model uncertainty when using models to predict P loss. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
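
    The FOA method propagates input variances through first-order sensitivities, Var[f(X)] ≈ Σ (∂f/∂xᵢ)² Var[xᵢ]. The sketch below compares that approximation against Monte Carlo simulation on a stand-in function; APLE's actual equations are not reproduced here.

    ```python
    import numpy as np

    def foa_var(f, x0, var_x, eps=1e-6):
        """First-order (Taylor) approximation of Var[f(X)] from input
        variances, using central-difference sensitivities at x0."""
        x0 = np.asarray(x0, float)
        g = np.empty_like(x0)
        for i in range(x0.size):
            dx = np.zeros_like(x0)
            dx[i] = eps * max(1.0, abs(x0[i]))
            g[i] = (f(x0 + dx) - f(x0 - dx)) / (2.0 * dx[i])
        return float(np.sum(g**2 * np.asarray(var_x)))

    def mcs_var(f, x0, var_x, n=100_000, seed=4):
        """Monte Carlo reference: sample inputs, evaluate, take variance."""
        rng = np.random.default_rng(seed)
        X = np.asarray(x0) + rng.standard_normal((n, len(x0))) * np.sqrt(var_x)
        return float(np.var([f(x) for x in X]))

    f = lambda x: x[0] * x[1] ** 1.5   # stand-in for a P-loss component
    print(foa_var(f, [2.0, 3.0], [0.04, 0.09]),
          mcs_var(f, [2.0, 3.0], [0.04, 0.09]))
    ```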

  18. Estimated damage from the Cascadia Subduction Zone tsunami: A model comparison using fragility curves

    Science.gov (United States)

    Wiebe, D. M.; Cox, D. T.; Chen, Y.; Weber, B. A.; Chen, Y.

    2012-12-01

    Building damage from a hypothetical Cascadia Subduction Zone tsunami was estimated using two methods and applied at the community scale. The first method applies proposed guidelines for a new ASCE 7 standard to calculate the flow depth, flow velocity, and momentum flux from a known runup limit and estimate of the total tsunami energy at the shoreline. This procedure is based on a potential energy budget, uses the energy grade line, and accounts for frictional losses. The second method utilized numerical model results from previous studies to determine maximum flow depth, velocity, and momentum flux throughout the inundation zone. The towns of Seaside and Cannon Beach, Oregon, were selected for analysis due to the availability of existing data from previously published works. Fragility curves, based on the hydrodynamic features of the tsunami flow (inundation depth, flow velocity, and momentum flux) and proposed design standards from ASCE 7 were used to estimate the probability of damage to structures located within the inundation zone. The analysis proceeded at the parcel level, using tax-lot data to identify construction type (wood, steel, and reinforced-concrete) and age, which was used as a performance measure when applying the fragility curves and design standards. The overall probability of damage to civil buildings was integrated for comparison between the two methods, and also analyzed spatially for damage patterns, which could be controlled by local bathymetric features. The two methods were compared to assess the sensitivity of the results to the uncertainty in the input hydrodynamic conditions and fragility curves, and the potential advantages of each method were discussed. On-going work includes coupling the results of building damage and vulnerability to an economic input output model. This model assesses trade between business sectors located inside and outside the inundation zone, and is used to measure the impact to the regional economy. Results highlight
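
    Fragility curves of the kind used here are commonly parameterized as lognormal CDFs of an intensity measure such as flow depth or momentum flux. A sketch with hypothetical parameters (not those of the study):

    ```python
    import numpy as np
    from scipy.stats import norm

    def fragility(im, median, beta):
        """Lognormal fragility curve: probability of reaching a damage state
        given an intensity measure im (e.g. flow depth in meters)."""
        return norm.cdf(np.log(np.asarray(im) / median) / beta)

    depth = np.array([0.5, 1.0, 2.0, 4.0])  # illustrative flow depths, m
    # hypothetical parameters for heavy damage to wood-frame construction
    print(fragility(depth, median=2.0, beta=0.5))
    ```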

  19. The Rayleigh curve as a model for effort distribution over the life of medium scale software systems. M.S. Thesis - Maryland Univ.

    Science.gov (United States)

    Picasso, G. O.; Basili, V. R.

    1982-01-01

    It is noted that previous investigations into the applicability of the Rayleigh curve model to medium scale software development efforts have met with mixed results. The results of these investigations are confirmed by analyses of runs and smoothing. The reasons for the model's failure are found in the subcycle effort data. There are four contributing factors: uniqueness of the environment studied, the influence of holidays, varying management techniques and differences in the data studied.
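
    For reference, the Rayleigh staffing curve in question (the Norden/Putnam form) gives the instantaneous effort rate m(t) = 2Kat·exp(-at²), where K is the total life-cycle effort and the peak falls at t_peak = 1/sqrt(2a). A sketch with illustrative numbers:

    ```python
    import numpy as np

    def rayleigh_effort(t, K, t_peak):
        """Norden/Putnam Rayleigh staffing curve. K = total life-cycle
        effort, t_peak = time of peak manpower. Returns effort rate."""
        a = 1.0 / (2.0 * t_peak**2)
        return 2.0 * K * a * t * np.exp(-a * t**2)

    t = np.linspace(0, 48, 49)                        # months
    rate = rayleigh_effort(t, K=200.0, t_peak=12.0)   # person-months/month
    print(rate.argmax())                              # peaks near month 12
    ```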

  20. Model calibration and parameter estimation for environmental and water resource systems

    CERN Document Server

    Sun, Ne-Zheng

    2015-01-01

    This three-part book provides a comprehensive and systematic introduction to the development of useful models for complex systems. Part 1 covers the classical inverse problem for parameter estimation in both deterministic and statistical frameworks, Part 2 is dedicated to system identification, hyperparameter estimation, and model dimension reduction, and Part 3 considers how to collect data and construct reliable models for prediction and decision-making. For the first time, topics such as multiscale inversion, stochastic field parameterization, level set method, machine learning, global sensitivity analysis, data assimilation, model uncertainty quantification, robust design, and goal-oriented modeling, are systematically described and summarized in a single book from the perspective of model inversion, and elucidated with numerical examples from environmental and water resources modeling. Readers of this book will not only learn basic concepts and methods for simple parameter estimation, but also get famili...

  1. Model-based estimation of finite population total in stratified sampling

    African Journals Online (AJOL)

    The work presented in this paper concerns the estimation of the finite population total under a model-based framework. A nonparametric regression approach as a method of estimating the finite population total is explored. The asymptotic properties of the estimators based on nonparametric regression are also developed under ...

  2. Disturbance estimation of nuclear power plant by using reduced-order model

    International Nuclear Information System (INIS)

    Tashima, Shin-ichi; Wakabayashi, Jiro

    1983-01-01

    A method is proposed for estimating the multiplex disturbances which occur in a nuclear power plant. The method is composed of two parts: (i) the identification of a simplified multi-input, multi-output model to describe the related system response, and (ii) the design of a Kalman filter to estimate the multiplex disturbance. Concerning the simplified model, several observed signals are first selected as output variables which can well represent the system response caused by the disturbances. A reduced-order model is utilized for designing the disturbance estimator. This is based on the following two considerations. The first is that the disturbance is assumed to be of a quasistatic nature. The other is based on the intuition that there exist a few dominant modes between the disturbances and the selected observed signals and that most of the non-dominant modes which remain may not affect the accuracy of the disturbance estimator. The reduced-order model is further transformed to a single-output model using a linear combination of the output signals, whereby the standard procedure of structural identification is avoided. The parameters of the model thus transformed are calculated by the generalized least squares method. As for the multiplex disturbance estimator, the Kalman filtering method is applied by trading off the following three items: (a) quick response to disturbance, (b) reduction of estimation error in the presence of observation noises, and (c) the elimination of cross-interference between the disturbances to the plant and the estimates from the Kalman filter. The effectiveness of the proposed method is verified through some computer experiments using a BWR plant simulator. (author)

  3. Quantifying commercial catch and effort of monkfish Lophius ...

    African Journals Online (AJOL)

    Catch-per-unit-effort (cpue) data of vessels targeting monkfish and sole (the two ... analysed using two different methods to construct indices of abundance. ... in Namibia to all tail-weight classes is not appropriate for the current fishery and needs ... Keywords: catch per unit effort, Generalized Linear Model, Lophius vaillanti, ...

  4. Online State Space Model Parameter Estimation in Synchronous Machines

    Directory of Open Access Journals (Sweden)

    Z. Gallehdari

    2014-06-01

    The suggested approach is evaluated for a sample synchronous machine model. Estimated parameters are tested for different inputs at different operating conditions. The effect of noise is also considered in this study. Simulation results show that the proposed approach provides good accuracy for parameter estimation.

  5. Augmented GNSS Differential Corrections Minimum Mean Square Error Estimation Sensitivity to Spatial Correlation Modeling Errors

    Directory of Open Access Journals (Sweden)

    Nazelie Kassabian

    2014-06-01

    Railway signaling is a safety system that has evolved over the last couple of centuries towards autonomous functionality. Recently, great effort has been devoted in this field towards the use and exploitation of Global Navigation Satellite System (GNSS) signals and GNSS augmentation systems, in view of lower railway track equipment and maintenance costs; this is a priority for sustaining the investments needed to modernize the local and regional lines, most of which lack automatic train protection systems and are still manually operated. The objective of this paper is to assess the sensitivity of the Linear Minimum Mean Square Error (LMMSE) algorithm to modeling errors in the spatial correlation function that characterizes true pseudorange Differential Corrections (DCs). This study is inspired by the railway application; however, it applies to all transportation systems, including the road sector, that need to be complemented by an augmentation system in order to deliver accurate and reliable positioning with integrity specifications. A vector of noisy pseudorange DC measurements is simulated, assuming a Gauss-Markov model with a decay rate parameter inversely proportional to the correlation distance that exists between two points of a certain environment. The LMMSE algorithm is applied to this vector to estimate the true DC, and the estimation error is compared to the noise added during simulation. The results show that for large enough values of the ratio of correlation distance to Reference Station (RS) separation distance, the LMMSE brings considerable advantage in terms of estimation error accuracy and precision. Conversely, the LMMSE algorithm may deteriorate the quality of the DC measurements whenever the ratio falls below a certain threshold.
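
    A small sketch of the setup described above, with assumed values for the correlation distance, variances and station geometry: a Gauss-Markov covariance is built, a noisy DC vector is drawn, and the LMMSE estimator is applied with both a matched and a mismatched correlation distance to expose the sensitivity the paper studies:

        import numpy as np

        rng = np.random.default_rng(1)
        x = np.linspace(0.0, 100.0, 40)        # station positions along a line (km), assumed
        d_c = 30.0                             # true correlation distance (km), assumed
        sigma2, noise2 = 1.0, 0.2
        D = np.abs(x[:, None] - x[None, :])
        C = sigma2 * np.exp(-D / d_c)          # Gauss-Markov (exponential) DC covariance

        dc_true = rng.multivariate_normal(np.zeros(len(x)), C)
        y = dc_true + rng.normal(0.0, np.sqrt(noise2), size=len(x))

        def lmmse(y, d_c_model):
            # LMMSE: dc_hat = C_model (C_model + noise2*I)^{-1} y
            Cm = sigma2 * np.exp(-D / d_c_model)
            return Cm @ np.linalg.solve(Cm + noise2 * np.eye(len(y)), y)

        for d_c_model in (30.0, 5.0):          # matched vs. badly mismatched model
            err = lmmse(y, d_c_model) - dc_true
            print(d_c_model, np.sqrt(np.mean(err**2)))   # RMSE grows under mismatch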

  6. Groundwater Modelling For Recharge Estimation Using Satellite Based Evapotranspiration

    Science.gov (United States)

    Soheili, Mahmoud; (Tom) Rientjes, T. H. M.; (Christiaan) van der Tol, C.

    2017-04-01

    Groundwater movement is influenced by several factors and processes in the hydrological cycle, among which recharge is of high relevance. Since the amount of aquifer extractable water directly relates to the recharge amount, estimation of recharge is a prerequisite of groundwater resources management. Recharge is highly affected by water loss mechanisms, the largest of which is actual evapotranspiration (ETa). It is, therefore, essential to have a detailed assessment of the ETa impact on groundwater recharge. The objective of this study was to evaluate how recharge was affected when satellite-based evapotranspiration was used instead of in-situ based ETa in the Salland area, the Netherlands. The Methodology for Interactive Planning for Water Management (MIPWA) model setup, which includes a groundwater model for the northern part of the Netherlands, was used for recharge estimation. The Surface Energy Balance Algorithm for Land (SEBAL) based actual evapotranspiration maps from Waterschap Groot Salland were also used. Comparison of SEBAL-based ETa estimates with in-situ based estimates in the Netherlands showed that these SEBAL estimates were not reliable, so the results could not serve to calibrate root-zone parameters in the CAPSIM model. The annual cumulative ETa map produced by the model showed that the maximum amount of evapotranspiration occurs in mixed forest areas in the northeast and a portion of the central parts. Estimates ranged from 579 mm to a minimum of 0 mm in the highest elevated areas with woody vegetation in the southeast of the region. Variations in mean seasonal hydraulic head and groundwater level for each layer showed that the hydraulic gradient follows elevation in the Salland area from southeast (maximum) to northwest (minimum) of the region, which depicts the groundwater flow direction. The mean seasonal water balance in the CAPSIM part was evaluated to represent recharge estimation in the first layer. The highest recharge estimated flux was for autumn
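
    As a back-of-envelope illustration of why the ETa source matters for recharge (not the MIPWA/CAPSIM computation itself), a seasonal water balance R = P - ETa - Q - dS can be evaluated per grid cell with either ETa field; every number below is assumed:

        import numpy as np

        P  = np.full((4, 4), 420.0)     # precipitation (mm/season), assumed
        Q  = np.full((4, 4), 60.0)      # runoff (mm/season), assumed
        dS = np.zeros((4, 4))           # storage change (mm/season), assumed
        ETa_insitu = np.full((4, 4), 280.0)
        ETa_sat = ETa_insitu + np.random.default_rng(2).normal(0, 40, (4, 4))

        for name, ETa in (("in-situ", ETa_insitu), ("satellite", ETa_sat)):
            R = P - ETa - Q - dS        # cell-by-cell recharge
            print(name, "mean recharge:", round(R.mean(), 1), "mm/season")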

  7. Evaluation of Model Based State of Charge Estimation Methods for Lithium-Ion Batteries

    Directory of Open Access Journals (Sweden)

    Zhongyue Zou

    2014-08-01

    Four model-based State of Charge (SOC) estimation methods for lithium-ion (Li-ion) batteries are studied and evaluated in this paper. Unlike the existing literature, this work evaluates different aspects of the SOC estimation, such as the estimation error distribution, the estimation rise time, the estimation time consumption, etc. The equivalent model of the battery is introduced and the state function of the model is deduced. The four model-based SOC estimation methods are analyzed first. Simulations and experiments are then established to evaluate the four methods. The urban dynamometer driving schedule (UDDS) current profiles are applied to simulate the drive situations of an electrified vehicle, and a genetic algorithm is utilized to identify the optimal parameters of the Li-ion battery model. The simulations with and without disturbance are carried out and the results are analyzed. A battery test workbench is established and a Li-ion battery is applied in a hardware-in-the-loop experiment. Experimental results are plotted and analyzed according to the four aspects to evaluate the four model-based SOC estimation methods.
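
    One common member of this family of methods is an extended Kalman filter on an equivalent-circuit model; the sketch below uses a one-state model with a linearized OCV curve, and every parameter value is illustrative rather than taken from the paper:

        import numpy as np

        dt, Qcap, R0 = 1.0, 3600.0 * 2.2, 0.05      # s, As (2.2 Ah), ohm; all assumed
        ocv  = lambda s: 3.0 + 1.2 * s              # linearized OCV(SOC), assumed
        docv = lambda s: 1.2                        # its derivative

        rng = np.random.default_rng(3)
        T = 1800
        i = 2.0 + 0.5 * np.sin(np.arange(T) / 60.0) # discharge current (A), UDDS-like stand-in

        soc_true, soc_hat, P = 0.9, 0.5, 1.0        # deliberately wrong initial estimate
        q, r = 1e-7, 1e-3                           # process / measurement noise variances
        for k in range(T):
            soc_true -= i[k] * dt / Qcap
            v = ocv(soc_true) - R0 * i[k] + rng.normal(0, np.sqrt(r))  # terminal voltage
            # EKF predict
            soc_hat -= i[k] * dt / Qcap
            P += q
            # EKF update (measurement Jacobian is the OCV slope)
            Hk = docv(soc_hat)
            K = P * Hk / (Hk * P * Hk + r)
            soc_hat += K * (v - (ocv(soc_hat) - R0 * i[k]))
            P *= (1.0 - K * Hk)

        print(round(soc_true, 3), round(soc_hat, 3))  # estimate converges toward truth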

  8. A distributed approach for parameters estimation in System Biology models

    International Nuclear Information System (INIS)

    Mosca, E.; Merelli, I.; Alfieri, R.; Milanesi, L.

    2009-01-01

    Due to the lack of experimental measurements, biological variability and experimental errors, the values of many parameters of systems biology mathematical models are still unknown or uncertain. A possible computational solution is parameter estimation, that is, the identification of the parameter values that give the best model fit with respect to experimental data. We have developed an environment to distribute each run of the parameter estimation algorithm on a different computational resource. The key feature of the implementation is a relational database that allows the user to swap the candidate solutions among the working nodes during the computations. The comparison of the distributed implementation with the parallel one showed that the presented approach enables a faster and better parameter estimation of systems biology models.
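
    A toy sketch of the distribute-each-run idea, using local worker processes in place of grid resources and a plain in-memory collection in place of the paper's relational database; the model and all values are invented for illustration:

        import numpy as np
        from multiprocessing import Pool
        from scipy.optimize import least_squares

        t = np.linspace(0, 10, 50)
        model = lambda p: p[0] * np.exp(-p[1] * t)          # toy two-parameter model
        data = model((0.8, 0.3)) + np.random.default_rng(4).normal(0, 0.02, t.size)

        def one_run(x0):
            # One independent estimation run from a candidate starting point.
            res = least_squares(lambda p: model(p) - data, x0, bounds=(0, 5))
            return res.cost, res.x

        if __name__ == "__main__":
            starts = np.random.default_rng(5).uniform(0, 5, size=(8, 2))
            with Pool(4) as pool:                           # one run per worker
                results = pool.map(one_run, list(starts))
            best_cost, best_p = min(results, key=lambda r: r[0])
            print(np.round(best_p, 3))                      # near (0.8, 0.3)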

  9. A Bayesian framework for parameter estimation in dynamical models.

    Directory of Open Access Journals (Sweden)

    Flávio Codeço Coelho

    Mathematical models in biology are powerful tools for the study and exploration of complex dynamics. Nevertheless, bringing theoretical results to an agreement with experimental observations involves acknowledging a great deal of uncertainty intrinsic to our theoretical representation of a real system. Proper handling of such uncertainties is key to the successful usage of models to predict experimental or field observations. This problem has been addressed over the years by many tools for model calibration and parameter estimation. In this article we present a general framework for uncertainty analysis and parameter estimation that is designed to handle uncertainties associated with the modeling of dynamic biological systems while remaining agnostic as to the type of model used. We apply the framework to fit an SIR-like influenza transmission model to 7 years of incidence data in three European countries: Belgium, the Netherlands and Portugal.
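
    As a compact illustration of the general Bayesian recipe (not the paper's own framework), the sketch below fits the two parameters of a discrete-time SIR-like model to synthetic incidence data with a random-walk Metropolis sampler; priors, error model and all values are assumptions:

        import numpy as np

        rng = np.random.default_rng(6)

        def incidence(beta, gamma, n_weeks=30, N=1e6, I0=10.0):
            # Discrete-time SIR-like recursion; returns weekly new infections.
            S, I = N - I0, I0
            out = []
            for _ in range(n_weeks):
                new = beta * S * I / N
                S, I = S - new, I + new - gamma * I
                out.append(new)
            return np.array(out)

        obs = incidence(0.9, 0.5) * rng.lognormal(0.0, 0.1, 30)  # synthetic data

        def log_post(p):
            b, g = p
            if not (0.0 < b < 3.0 and 0.0 < g < 2.0):
                return -np.inf                        # flat prior on a box
            resid = np.log(obs) - np.log(incidence(b, g) + 1e-9)
            return -0.5 * np.sum(resid**2) / 0.1**2   # lognormal error model

        p = np.array([1.0, 0.6])
        lp = log_post(p)
        chain = []
        for _ in range(5000):
            q = p + rng.normal(0.0, 0.03, 2)          # random-walk proposal
            lq = log_post(q)
            if np.log(rng.uniform()) < lq - lp:       # Metropolis accept/reject
                p, lp = q, lq
            chain.append(p)
        print(np.round(np.mean(chain[2000:], axis=0), 2))  # near (0.9, 0.5)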

  10. Comparing interval estimates for small sample ordinal CFA models.

    Science.gov (United States)

    Natesan, Prathiba

    2015-01-01

    Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased. This can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis (CFA) models for small sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied. Two Bayesian prior specifications, informative and relatively less informative, were studied. Undercoverage of confidence intervals and underestimation of standard errors was common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more positively biased than negatively biased; that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected, with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the nature of statistical uncertainty that comes with the data (e.g., small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates. The results illustrate the importance of analyzing coverage and bias of interval estimates, and how ignoring interval estimates can be misleading.

  11. Nonlinear estimation and control of automotive drivetrains

    CERN Document Server

    Chen, Hong

    2014-01-01

    Nonlinear Estimation and Control of Automotive Drivetrains discusses the control problems involved in automotive drivetrains, particularly in hydraulic Automatic Transmission (AT), Dual Clutch Transmission (DCT) and Automated Manual Transmission (AMT). Challenging estimation and control problems, such as driveline torque estimation and gear shift control, are addressed by applying the latest nonlinear control theories, including constructive nonlinear control (Backstepping, Input-to-State Stable) and Model Predictive Control (MPC). The estimation and control performance is improved while the calibration effort is reduced significantly. The book presents many detailed examples of design processes and thus enables the readers to understand how to successfully combine purely theoretical methodologies with actual applications in vehicles. The book is intended for researchers, PhD students, control engineers and automotive engineers. Hong Chen is a professor at the State Key Laboratory of Automotive Simulation and...

  12. Exploring Spatiotemporal Trends in Commercial Fishing Effort of an Abalone Fishing Zone: A GIS-Based Hotspot Model

    Science.gov (United States)

    Jalali, M. Ali; Ierodiaconou, Daniel; Gorfine, Harry; Monk, Jacquomo; Rattray, Alex

    2015-01-01

    Assessing patterns of fisheries activity at a scale related to resource exploitation has received particular attention in recent times. However, acquiring data about the distribution and spatiotemporal allocation of catch and fishing effort in small scale benthic fisheries remains challenging. Here, we used GIS-based spatio-statistical models to investigate the footprint of commercial diving events on blacklip abalone (Haliotis rubra) stocks along the south-west coast of Victoria, Australia from 2008 to 2011. Using abalone catch data matched with GPS location we found catch per unit of fishing effort (CPUE) was not uniformly spatially and temporally distributed across the study area. Spatial autocorrelation and hotspot analysis revealed significant spatiotemporal clusters of CPUE (with distance thresholds of hundreds of meters) among years, indicating the presence of CPUE hotspots focused on specific reefs. Cumulative hotspot maps indicated that certain reef complexes were consistently targeted across years but with varying intensity; however, often a relatively small proportion of the full reef extent was targeted. Integrating CPUE with remotely-sensed light detection and ranging (LiDAR) derived bathymetry data using a generalized additive mixed model corroborated that fishing pressure primarily coincided with shallow, rugose and complex components of reef structures. This study demonstrates that a geospatial approach is efficient in detecting patterns and trends in commercial fishing effort and its association with seafloor characteristics. PMID:25992800

  13. Exploring Spatiotemporal Trends in Commercial Fishing Effort of an Abalone Fishing Zone: A GIS-Based Hotspot Model.

    Directory of Open Access Journals (Sweden)

    M Ali Jalali

    Assessing patterns of fisheries activity at a scale related to resource exploitation has received particular attention in recent times. However, acquiring data about the distribution and spatiotemporal allocation of catch and fishing effort in small scale benthic fisheries remains challenging. Here, we used GIS-based spatio-statistical models to investigate the footprint of commercial diving events on blacklip abalone (Haliotis rubra) stocks along the south-west coast of Victoria, Australia from 2008 to 2011. Using abalone catch data matched with GPS location we found catch per unit of fishing effort (CPUE) was not uniformly spatially and temporally distributed across the study area. Spatial autocorrelation and hotspot analysis revealed significant spatiotemporal clusters of CPUE (with distance thresholds of hundreds of meters) among years, indicating the presence of CPUE hotspots focused on specific reefs. Cumulative hotspot maps indicated that certain reef complexes were consistently targeted across years but with varying intensity; however, often a relatively small proportion of the full reef extent was targeted. Integrating CPUE with remotely-sensed light detection and ranging (LiDAR) derived bathymetry data using a generalized additive mixed model corroborated that fishing pressure primarily coincided with shallow, rugose and complex components of reef structures. This study demonstrates that a geospatial approach is efficient in detecting patterns and trends in commercial fishing effort and its association with seafloor characteristics.
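
    The hotspot analysis referred to in these two records is typically a local Getis-Ord Gi* statistic; a self-contained sketch on synthetic CPUE points, with coordinates, band width and effect size all assumed, is:

        import numpy as np

        rng = np.random.default_rng(7)
        xy = rng.uniform(0, 10_000, size=(300, 2))        # event locations (m), assumed
        cpue = rng.gamma(2.0, 10.0, size=300)
        hot = np.linalg.norm(xy - (2500, 2500), axis=1) < 1200
        cpue[hot] += 30.0                                 # plant a hotspot "reef"

        d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
        W = (d < 800).astype(float)                       # fixed distance band, incl. self (Gi*)

        n = len(cpue)
        xbar, s = cpue.mean(), cpue.std(ddof=0)
        wsum = W.sum(axis=1)
        num = W @ cpue - wsum * xbar
        den = s * np.sqrt((n * (W**2).sum(axis=1) - wsum**2) / (n - 1))
        gi_star = num / den                               # z-scores; > ~2 flags hotspots

        print((gi_star > 2.0).sum(), "points in significant hotspots")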

  14. Performances of estimators of linear auto-correlated error model ...

    African Journals Online (AJOL)

    The performances of five estimators of linear models with autocorrelated disturbance terms are compared when the independent variable is exponential. The results reveal that for both small and large samples, the Ordinary Least Squares (OLS) compares favourably with the Generalized Least Squares (GLS) estimators in ...

  15. Person Appearance Modeling and Orientation Estimation using Spherical Harmonics

    NARCIS (Netherlands)

    Liem, M.C.; Gavrila, D.M.

    2013-01-01

    We present a novel approach for the joint estimation of a person's overall body orientation, 3D shape and texture, from overlapping cameras. Overall body orientation (i.e. rotation around torso major axis) is estimated by minimizing the difference between a learned texture model in a canonical

  16. Marginal Maximum Likelihood Estimation of Item Response Models in R

    Directory of Open Access Journals (Sweden)

    Matthew S. Johnson

    2007-02-01

    Item response theory (IRT) models are a class of statistical models used by researchers to describe the response behaviors of individuals to a set of categorically scored items. The most common IRT models can be classified as generalized linear fixed- and/or mixed-effect models. Although IRT models appear most often in the psychological testing literature, researchers in other fields have successfully utilized IRT-like models in a wide variety of applications. This paper discusses the three major methods of estimation in IRT and develops R functions utilizing the built-in capabilities of the R environment to find the marginal maximum likelihood estimates of the generalized partial credit model. The currently available R package ltm is also discussed.
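
    To make the marginal-likelihood idea concrete, here is a sketch in Python (the paper itself works in R) for a simpler Rasch model: the latent ability is integrated out with Gauss-Hermite quadrature under a standard normal prior, and item difficulties are found by maximizing the marginal likelihood; data and sizes are synthetic:

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(8)
        n_persons, n_items = 500, 5
        b_true = np.linspace(-1.0, 1.0, n_items)              # item difficulties
        theta = rng.normal(size=(n_persons, 1))
        X = (rng.uniform(size=(n_persons, n_items)) <
             1 / (1 + np.exp(-(theta - b_true)))).astype(float)  # 0/1 responses

        nodes, weights = np.polynomial.hermite.hermgauss(21)
        nodes, weights = nodes * np.sqrt(2), weights / np.sqrt(np.pi)  # N(0,1) rule

        def neg_marginal_loglik(b):
            p = 1 / (1 + np.exp(-(nodes[:, None] - b[None, :])))      # (nodes, items)
            # Likelihood of each response vector at each node, then integrate.
            like = np.exp(X @ np.log(p).T + (1 - X) @ np.log(1 - p).T)
            return -np.sum(np.log(like @ weights))

        b_hat = minimize(neg_marginal_loglik, np.zeros(n_items), method="BFGS").x
        print(np.round(b_hat, 2))                             # close to b_true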

  17. How many dinosaur species were there? Fossil bias and true richness estimated using a Poisson sampling model.

    Science.gov (United States)

    Starrfelt, Jostein; Liow, Lee Hsiang

    2016-04-05

    The fossil record is a rich source of information about biological diversity in the past. However, the fossil record is not only incomplete but has also inherent biases due to geological, physical, chemical and biological factors. Our knowledge of past life is also biased because of differences in academic and amateur interests and sampling efforts. As a result, not all individuals or species that lived in the past are equally likely to be discovered at any point in time or space. To reconstruct temporal dynamics of diversity using the fossil record, biased sampling must be explicitly taken into account. Here, we introduce an approach that uses the variation in the number of times each species is observed in the fossil record to estimate both sampling bias and true richness. We term our technique TRiPS (True Richness estimated using a Poisson Sampling model) and explore its robustness to violation of its assumptions via simulations. We then venture to estimate sampling bias and absolute species richness of dinosaurs in the geological stages of the Mesozoic. Using TRiPS, we estimate that 1936 (1543-2468) species of dinosaurs roamed the Earth during the Mesozoic. We also present improved estimates of species richness trajectories of the three major dinosaur clades: the sauropodomorphs, ornithischians and theropods, casting doubt on the Jurassic-Cretaceous extinction event and demonstrating that all dinosaur groups are subject to considerable sampling bias throughout the Mesozoic. © 2016 The Authors.
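
    A minimal sketch of the Poisson-sampling logic behind TRiPS (not the authors' implementation): occurrence counts of observed species follow a zero-truncated Poisson, the sampling rate is recovered from its mean, and observed richness is inflated by the implied detection probability; the true richness and rate below are invented:

        import numpy as np
        from scipy.optimize import brentq

        rng = np.random.default_rng(9)
        N_true, lam = 1900, 1.1                    # true richness and sampling rate, assumed
        counts = rng.poisson(lam, N_true)
        counts = counts[counts > 0]                # species with zero finds are unobserved

        m = counts.mean()                          # truncated mean = lam / (1 - exp(-lam))
        lam_hat = brentq(lambda L: L / (1 - np.exp(-L)) - m, 1e-6, 50)
        p_detect = 1 - np.exp(-lam_hat)            # prob. a species is found at least once
        N_hat = len(counts) / p_detect

        print(len(counts), round(lam_hat, 2), round(N_hat))   # N_hat near 1900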

  18. Hydrological model performance and parameter estimation in the wavelet-domain

    Directory of Open Access Journals (Sweden)

    B. Schaefli

    2009-10-01

    This paper proposes a method for rainfall-runoff model calibration and performance analysis in the wavelet-domain by fitting the estimated wavelet-power spectrum (a representation of the time-varying frequency content of a time series) of a simulated discharge series to that of the corresponding observed time series. As discussed in this paper, calibrating hydrological models so as to reproduce the time-varying frequency content of the observed signal can lead to different results than parameter estimation in the time-domain. Therefore, wavelet-domain parameter estimation has the potential to give new insights into model performance and to reveal model structural deficiencies. We apply the proposed method to synthetic case studies and a real-world discharge modeling case study and discuss how model diagnosis can benefit from an analysis in the wavelet-domain. The results show that for the real-world case study of precipitation-runoff modeling for a high alpine catchment, the calibrated discharge simulation captures the dynamics of the observed time series better than the results obtained through calibration in the time-domain. In addition, the wavelet-domain performance assessment of this case study highlights the frequencies that are not well reproduced by the model, which gives specific indications about how to improve the model structure.
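
    A sketch of such a wavelet-domain objective, assuming the PyWavelets package and a Morlet wavelet; the two toy series stand in for observed and simulated discharge, and a calibration would minimize the printed misfit over the model parameters:

        import numpy as np
        import pywt

        t = np.arange(1024)
        obs = np.sin(2 * np.pi * t / 90) + 0.5 * np.sin(2 * np.pi * t / 7)  # "observed"
        sim = np.sin(2 * np.pi * t / 90)            # simulation missing the fast mode

        scales = np.arange(2, 64)

        def wavelet_power(x):
            coef, _ = pywt.cwt(x, scales, "morl")   # continuous Morlet transform
            return np.abs(coef) ** 2

        misfit = np.mean((wavelet_power(obs) - wavelet_power(sim)) ** 2)
        print(misfit)   # objective value; large here because the fast mode is missed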

  19. Estimating Parameters in Physical Models through Bayesian Inversion: A Complete Example

    KAUST Repository

    Allmaras, Moritz

    2013-02-07

    All mathematical models of real-world phenomena contain parameters that need to be estimated from measurements, either for realistic predictions or simply to understand the characteristics of the model. Bayesian statistics provides a framework for parameter estimation in which uncertainties about models and measurements are translated into uncertainties in estimates of parameters. This paper provides a simple, step-by-step example, starting from a physical experiment and going through all of the mathematics, to explain the use of Bayesian techniques for estimating the coefficients of gravity and air friction in the equations describing a falling body. In the experiment we dropped an object from a known height and recorded the free fall using a video camera. The video recording was analyzed frame by frame to obtain the distance the body had fallen as a function of time, including measures of uncertainty in our data that we describe as probability densities. We explain the decisions behind the various choices of probability distributions and relate them to observed phenomena. Our measured data are then combined with a mathematical model of a falling body to obtain probability densities on the space of parameters we seek to estimate. We interpret these results and discuss sources of errors in our estimation procedure. © 2013 Society for Industrial and Applied Mathematics.
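
    A condensed sketch of the same program on synthetic data: with a linear-drag fall model, a grid posterior over gravity g and drag coefficient k is computed from noisy frame-by-frame distances (noise level, grids and frame times all assumed):

        import numpy as np

        def fall(t, g, k):
            # Distance fallen under dv/dt = g - k*v, starting from rest.
            return (g / k) * t - (g / k**2) * (1 - np.exp(-k * t))

        t = np.linspace(0.1, 1.2, 12)                        # video frame times (s)
        rng = np.random.default_rng(10)
        d_obs = fall(t, 9.81, 0.3) + rng.normal(0, 0.01, t.size)   # ~1 cm noise

        g_grid = np.linspace(8.5, 11.0, 120)
        k_grid = np.linspace(0.01, 1.0, 120)
        G, K = np.meshgrid(g_grid, k_grid, indexing="ij")

        # Flat priors on the grid; Gaussian likelihood with sigma = 1 cm.
        resid = d_obs[None, None, :] - fall(t[None, None, :], G[..., None], K[..., None])
        log_post = -0.5 * np.sum(resid**2, axis=-1) / 0.01**2
        post = np.exp(log_post - log_post.max())
        post /= post.sum()

        # Marginal posterior modes for g and k.
        print(g_grid[post.sum(axis=1).argmax()], k_grid[post.sum(axis=0).argmax()])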

  20. Parameter estimation in fractional diffusion models

    CERN Document Server

    Kubilius, Kęstutis; Ralchenko, Kostiantyn

    2017-01-01

    This book is devoted to parameter estimation in diffusion models involving fractional Brownian motion and related processes. For many years now, standard Brownian motion has been (and still remains) a popular model of randomness used to investigate processes in the natural sciences, financial markets, and the economy. The substantial limitation in the use of stochastic diffusion models with Brownian motion is due to the fact that the motion has independent increments, and, therefore, the random noise it generates is “white,” i.e., uncorrelated. However, many processes in the natural sciences, computer networks and financial markets have long-term or short-term dependences, i.e., the correlations of random noise in these processes are non-zero, and slowly or rapidly decrease with time. In particular, models of financial markets demonstrate various kinds of memory and usually this memory is modeled by fractional Brownian diffusion. Therefore, the book constructs diffusion models with memory and provides s...

  1. Software cost estimation

    NARCIS (Netherlands)

    Heemstra, F.J.

    1992-01-01

    The paper gives an overview of the state of the art of software cost estimation (SCE). The main questions to be answered in the paper are: (1) What are the reasons for overruns of budgets and planned durations? (2) What are the prerequisites for estimating? (3) How can software development effort be

  2. Software cost estimation

    NARCIS (Netherlands)

    Heemstra, F.J.; Heemstra, F.J.

    1993-01-01

    The paper gives an overview of the state of the art of software cost estimation (SCE). The main questions to be answered in the paper are: (1) What are the reasons for overruns of budgets and planned durations? (2) What are the prerequisites for estimating? (3) How can software development effort be
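
    The full texts of these two records are not shown, but the kind of effort model they survey is often summarized by a size-based power law; a sketch that calibrates effort = a * KLOC**b on hypothetical historical projects by log-log regression is:

        import numpy as np

        kloc   = np.array([ 6, 12, 25, 48, 90, 160], dtype=float)   # project sizes, assumed
        effort = np.array([18, 41, 96, 205, 410, 800], dtype=float) # person-months, assumed

        b, log_a = np.polyfit(np.log(kloc), np.log(effort), 1)
        a = np.exp(log_a)
        print(round(a, 2), round(b, 2))              # fitted productivity constants

        new_size = 70.0                              # KLOC of a planned project
        print(round(a * new_size**b), "person-months estimated")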

  3. Cumulative effects of restoration efforts on ecological characteristics of an open water area within the Upper Mississippi River

    Science.gov (United States)

    Gray, B.R.; Shi, W.; Houser, J.N.; Rogala, J.T.; Guan, Z.; Cochran-Biederman, J. L.

    2011-01-01

    Ecological restoration efforts in large rivers generally aim to ameliorate ecological effects associated with large-scale modification of those rivers. This study examined whether the effects of restoration efforts, specifically those of island construction, within a largely open water restoration area of the Upper Mississippi River (UMR) might be seen at the spatial scale of that 3476 ha area. The cumulative effects of island construction, when observed over multiple years, were postulated to have made the restoration area increasingly similar to a positive reference area (a proximate area comprising contiguous backwater areas) and increasingly different from two negative reference areas. The negative reference areas represented the Mississippi River main channel in an area proximate to the restoration area and an open water area in a related Mississippi River reach that has seen relatively little restoration effort. Inferences on the effects of restoration were made by comparing constrained and unconstrained models of summer chlorophyll a (CHL), summer inorganic suspended solids (ISS) and counts of benthic mayfly larvae. Constrained models forced trends in means, or in both means and sampling variances, to become, over time, increasingly similar to those in the positive reference area and increasingly dissimilar to those in the negative reference areas. Trends were estimated over 12-year (mayflies) or 14-year sampling periods, and were evaluated using model information criteria. Based on these methods, restoration effects were observed for CHL and mayflies, while evidence in favour of restoration effects on ISS was equivocal. These findings suggest that the cumulative effects of island building at relatively large spatial scales within large rivers may be estimated using data from large-scale surveillance monitoring programs. Published in 2010 by John Wiley & Sons, Ltd.

  4. Small area estimation (SAE) model: Case study of poverty in West Java Province

    Science.gov (United States)

    Suhartini, Titin; Sadik, Kusman; Indahwati

    2016-02-01

    This paper compares direct estimation with the indirect/Small Area Estimation (SAE) model. Model selection included resolving the multicollinearity problem in the auxiliary variables, either by choosing only non-multicollinear variables or by implementing principal components (PC). The parameters of concern were the proportions of agricultural-venture poor households and agricultural poor households at the area level in West Java Province. These parameters could be estimated based on direct estimation or SAE. The problem with direct estimation was that three areas had sample sizes as low as zero, so they could not be estimated directly because of the small sample size. The proportion of agricultural-venture poor households was 19.22% and that of agricultural poor households was 46.79%. The best model for agricultural-venture poor households was obtained by choosing only non-multicollinear variables, and the best model for agricultural poor households by implementing PC. SAE proved a better estimator than direct estimation for both proportions at the area level in West Java Province. Implementing the small area estimation method overcame the small sample sizes and produced estimates for small areas with higher accuracy and better precision than the direct estimator.

  5. Adverse health effects of high-effort/low-reward conditions.

    Science.gov (United States)

    Siegrist, J

    1996-01-01

    In addition to the person-environment fit model (J. R. French, R. D. Caplan, & R. V. Harrison, 1982) and the demand-control model (R. A. Karasek & T. Theorell, 1990), a third theoretical concept is proposed to assess adverse health effects of stressful experience at work: the effort-reward imbalance model. The focus of this model is on reciprocity of exchange in occupational life where high-cost/low-gain conditions are considered particularly stressful. Variables measuring low reward in terms of low status control (e.g., lack of promotion prospects, job insecurity) in association with high extrinsic (e.g., work pressure) or intrinsic (personal coping pattern, e.g., high need for control) effort independently predict new cardiovascular events in a prospective study on blue-collar men. Furthermore, these variables partly explain prevalence of cardiovascular risk factors (hypertension, atherogenic lipids) in 2 independent studies. Studying adverse health effects of high-effort/low-reward conditions seems well justified, especially in view of recent developments of the labor market.

  6. Model Effects on GLAS-Based Regional Estimates of Forest Biomass and Carbon

    Science.gov (United States)

    Nelson, Ross F.

    2010-01-01

    Ice, Cloud, and land Elevation Satellite (ICESat) / Geosciences Laser Altimeter System (GLAS) waveform data are used to estimate biomass and carbon on a 1.27 × 10^6 square km study area in the Province of Quebec, Canada, below the tree line. The same input datasets and sampling design are used in conjunction with four different predictive models to estimate total aboveground dry forest biomass and forest carbon. The four models include non-stratified and stratified versions of a multiple linear model where either biomass or (biomass)^0.5 serves as the dependent variable. The use of different models in Quebec introduces differences in Provincial dry biomass estimates of up to 0.35 Gt, with a range of 4.94 +/- 0.28 Gt to 5.29 +/- 0.36 Gt. The differences among model estimates are statistically non-significant, however, and the results demonstrate the degree to which carbon estimates vary strictly as a function of the model used to estimate regional biomass. Results also indicate that GLAS measurements become problematic with respect to height and biomass retrievals in the boreal forest when biomass values fall below 20 t/ha and when GLAS 75th percentile heights fall below 7 m.

  7. Estimating structural equation models with non-normal variables by using transformations

    NARCIS (Netherlands)

    Montfort, van K.; Mooijaart, A.; Meijerink, F.

    2009-01-01

    We discuss structural equation models for non-normal variables. In this situation the maximum likelihood and the generalized least-squares estimates of the model parameters can give incorrect estimates of the standard errors and the associated goodness-of-fit chi-squared statistics. If the sample

  8. A least-effort principle based model for heterogeneous pedestrian flow considering overtaking behavior

    Science.gov (United States)

    Liu, Chi; Ye, Rui; Lian, Liping; Song, Weiguo; Zhang, Jun; Lo, Siuming

    2018-05-01

    In the context of global aging, how to design traffic facilities for a population with a different age composition is of high importance. For this purpose, we propose a model based on the least effort principle to simulate heterogeneous pedestrian flow. In the model, the pedestrian is represented by a three-disc shaped agent. We add a new parameter to realize pedestrians' preference to avoid changing their direction of movement too quickly. The model is validated with numerous experimental data on unidirectional pedestrian flow. In addition, we investigate the influence of corridor width and velocity distribution of crowds on unidirectional heterogeneous pedestrian flow. The simulation results reflect that widening corridors could increase the specific flow for the crowd composed of two kinds of pedestrians with significantly different free velocities. Moreover, compared with a unified crowd, the crowd composed of pedestrians with great mobility differences requires a wider corridor to attain the same traffic efficiency. This study could be beneficial in providing a better understanding of heterogeneous pedestrian flow, and quantified outcomes could be applied in traffic facility design.

  9. Estimation of mean-reverting oil prices: a laboratory approach

    International Nuclear Information System (INIS)

    Bjerksund, P.; Stensland, G.

    1993-12-01

    Many economic decision support tools developed for the oil industry are based on the future oil price dynamics being represented by some specified stochastic process. To meet the demand for necessary data, much effort is allocated to parameter estimation based on historical oil price time series. The approach in this paper is to implement a complex future oil market model, and to condense the information from the model to parameter estimates for the future oil price. In particular, we use the Lensberg and Rasmussen stochastic dynamic oil market model to generate a large set of possible future oil price paths. Given the hypothesis that the future oil price is generated by a mean-reverting Ornstein-Uhlenbeck process, we obtain parameter estimates by a maximum likelihood procedure. We find a substantial degree of mean-reversion in the future oil price, which in some of our decision examples leads to an almost negligible value of flexibility. 12 refs., 2 figs., 3 tabs
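
    The record's estimation step can be reproduced in miniature: the exact discretization of a mean-reverting Ornstein-Uhlenbeck process is an AR(1), so conditional maximum likelihood reduces to a linear regression of each observation on its predecessor; all parameter values below are illustrative, not those of the Lensberg-Rasmussen model:

        import numpy as np

        rng = np.random.default_rng(11)
        theta, mu, sigma, dt, n = 1.5, np.log(20.0), 0.3, 1 / 52, 3000  # weekly, assumed
        a = np.exp(-theta * dt)
        sd = sigma * np.sqrt((1 - a**2) / (2 * theta))
        x = np.empty(n)
        x[0] = mu
        for t in range(n - 1):
            x[t + 1] = mu + a * (x[t] - mu) + sd * rng.normal()  # log-price path

        slope, intercept = np.polyfit(x[:-1], x[1:], 1)          # AR(1) regression
        theta_hat = -np.log(slope) / dt
        mu_hat = intercept / (1 - slope)
        resid = x[1:] - (slope * x[:-1] + intercept)
        sigma_hat = resid.std(ddof=2) * np.sqrt(2 * theta_hat / (1 - slope**2))
        print(round(theta_hat, 2), round(np.exp(mu_hat), 2), round(sigma_hat, 2))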

  10. Confidence interval of intrinsic optimum temperature estimated using thermodynamic SSI model

    Institute of Scientific and Technical Information of China (English)

    Takaya Ikemoto; Issei Kurahashi; Pei-Jian Shi

    2013-01-01

    The intrinsic optimum temperature for the development of ectotherms is one of the most important factors not only for their physiological processes but also for ecological and evolutional processes. The Sharpe-Schoolfield-Ikemoto (SSI) model succeeded in defining the temperature that can thermodynamically meet the condition that at a particular temperature the probability of an active enzyme reaching its maximum activity is realized. Previously, an algorithm was developed by Ikemoto (Tropical malaria does not mean hot environments. Journal of Medical Entomology, 45, 963-969) to estimate model parameters, but that program was computationally very time consuming. Now, investigators can use the SSI model more easily because a fully automatic computer program was designed by Shi et al. (A modified program for estimating the parameters of the SSI model. Environmental Entomology, 40, 462-469). However, the statistical significance of the point estimate of the intrinsic optimum temperature for each ectotherm has not yet been determined. Here, we provided a new method for calculating the confidence interval of the estimated intrinsic optimum temperature by modifying the approximate bootstrap confidence intervals method. For this purpose, it was necessary to develop a new program for a faster estimation of the parameters in the SSI model, which we have also done.

  11. Estimation of real-time runway surface contamination using flight data recorder parameters

    Science.gov (United States)

    Curry, Donovan

    Within this research effort, the development of an analytic process for friction coefficient estimation is presented. Under static equilibrium, the sum of forces and moments acting on the aircraft, in the aircraft body coordinate system, while on the ground at any instant is equal to zero. Under this premise the longitudinal, lateral and normal forces due to landing are calculated, along with the individual deceleration components existent when an aircraft comes to a rest during ground roll. In order to validate this hypothesis, a six degree of freedom aircraft model had to be created and landing tests had to be simulated on different surfaces. The simulated aircraft model includes a high fidelity aerodynamic model, thrust model, landing gear model, friction model and antiskid model. Three main surfaces were defined in the friction model: dry, wet and snow/ice. Only the parameters recorded by an FDR are used directly from the aircraft model; all others are estimated or known a priori. The estimation of unknown parameters is also presented in the research effort. With all needed parameters, a comparison and validation with simulated and estimated data, under different runway conditions, is performed. Finally, this report presents results of a sensitivity analysis in order to provide a measure of reliability of the analytic estimation process. Linear and non-linear sensitivity analyses have been performed in order to quantify the level of uncertainty implicit in modeling estimated parameters and how they can affect the calculation of the instantaneous coefficient of friction. Reconstructing the instantaneous coefficient of friction from force and moment equilibrium about the CG at landing appears to yield a reasonably accurate estimate when compared to the simulated friction coefficient. This is also true when the FDR and estimated parameters are introduced to white noise and when crosswind is introduced to the simulation. After the linear analysis the
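
    A single-instant sketch of the longitudinal force balance described above (no lateral or moment terms, and every numerical value is hypothetical):

        m, g = 60_000.0, 9.81       # aircraft mass (kg), gravity (m/s^2); assumed
        V = 55.0                    # ground speed (m/s), as if read from the FDR
        ax = -2.8                   # longitudinal acceleration (m/s^2), from the FDR
        T_idle = 8_000.0            # residual idle thrust (N), assumed
        rho, S = 1.225, 120.0       # air density, wing area; assumed
        CD, CL = 0.08, 0.6          # ground-roll drag and lift coefficients; assumed

        q = 0.5 * rho * V**2
        drag, lift = q * S * CD, q * S * CL

        # Longitudinal equilibrium: m*ax = T - D - F_brake  =>  F_brake = T - D - m*ax
        F_brake = T_idle - drag - m * ax
        N = m * g - lift            # normal force carried by the landing gear
        mu = F_brake / N
        print(round(mu, 3))         # instantaneous friction coefficient estimate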

  12. Testing the sensitivity of terrestrial carbon models using remotely sensed biomass estimates

    Science.gov (United States)

    Hashimoto, H.; Saatchi, S. S.; Meyer, V.; Milesi, C.; Wang, W.; Ganguly, S.; Zhang, G.; Nemani, R. R.

    2010-12-01

    There is a large uncertainty in carbon allocation and biomass accumulation in forest ecosystems. With the recent availability of remotely sensed biomass estimates, we now can test some of the hypotheses commonly implemented in various ecosystem models. We used biomass estimates derived by integrating MODIS, GLAS and PALSAR data to verify above-ground biomass estimates simulated by a number of ecosystem models (CASA, BIOME-BGC, BEAMS, LPJ). This study extends the hierarchical framework (Wang et al., 2010) for diagnosing ecosystem models by incorporating independent estimates of biomass for testing and calibrating respiration, carbon allocation, turn-over algorithms or parameters.

  13. Simplification of an MCNP model designed for dose rate estimation

    Science.gov (United States)

    Laptev, Alexander; Perry, Robert

    2017-09-01

    A study was made to investigate the methods of building a simplified MCNP model for radiological dose estimation. The research was done using an example of a complicated glovebox with extra shielding. The paper presents several different calculations for neutron and photon dose evaluations where glovebox elements were consecutively excluded from the MCNP model. The analysis indicated that to obtain a fast and reasonable estimation of dose, the model should be realistic in details that are close to the tally. Other details may be omitted.

  14. Estimation in the positive stable shared frailty Cox proportional hazards model

    DEFF Research Database (Denmark)

    Martinussen, Torben; Pipper, Christian Bressen

    2005-01-01

    model in situations where the correlated survival data show a decreasing association with time. In this paper, we devise a likelihood based estimation procedure for the positive stable shared frailty Cox model, which is expected to obtain high efficiency. The proposed estimator is provided with large...

  15. Efficient semiparametric estimation in generalized partially linear additive models for longitudinal/clustered data

    KAUST Repository

    Cheng, Guang

    2014-02-01

    We consider efficient estimation of the Euclidean parameters in generalized partially linear additive models for longitudinal/clustered data when multiple covariates need to be modeled nonparametrically, and propose an estimation procedure based on a spline approximation of the nonparametric part of the model and the generalized estimating equations (GEE). Although the model in consideration is natural and useful in many practical applications, the literature on this model is very limited because of challenges in dealing with dependent data for nonparametric additive models. We show that the proposed estimators are consistent and asymptotically normal even if the covariance structure is misspecified. An explicit consistent estimate of the asymptotic variance is also provided. Moreover, we derive the semiparametric efficiency score and information bound under general moment conditions. By showing that our estimators achieve the semiparametric information bound, we effectively establish their efficiency in a stronger sense than what is typically considered for GEE. The derivation of our asymptotic results relies heavily on the empirical processes tools that we develop for the longitudinal/clustered data. Numerical results are used to illustrate the finite sample performance of the proposed estimators. © 2014 ISI/BS.

  16. Cutting Edge PBPK Models and Analyses: Providing the Basis for Future Modeling Efforts and Bridges to Emerging Toxicology Paradigms

    Directory of Open Access Journals (Sweden)

    Jane C. Caldwell

    2012-01-01

    Physiologically based Pharmacokinetic (PBPK) models are used for predictions of internal or target dose from environmental and pharmacologic chemical exposures. Their use in human risk assessment is dependent on the nature of the databases (animal or human) used to develop and test them, and includes extrapolations across species and experimental paradigms, and determination of variability of response within human populations. Integration of state-of-the-science PBPK modeling with emerging computational toxicology models is critical for extrapolation between in vitro exposures, in vivo physiologic exposure, whole organism responses, and long-term health outcomes. This special issue contains papers that can provide the basis for future modeling efforts and provide bridges to emerging toxicology paradigms. In this overview paper, we present an overview of the field and an introduction to these papers that includes discussions of model development, best practices, risk-assessment applications of PBPK models, and limitations and bridges of modeling approaches for future applications. Specifically, issues addressed include: (a) increased understanding of human variability of pharmacokinetics and pharmacodynamics in the population, (b) exploration of mode of action hypotheses (MOA), (c) application of biological modeling in the risk assessment of individual chemicals and chemical mixtures, and (d) identification and discussion of uncertainties in the modeling process.

  17. ANFIS-Based Modeling for Photovoltaic Characteristics Estimation

    Directory of Open Access Journals (Sweden)

    Ziqiang Bi

    2016-09-01

    Due to the high cost of photovoltaic (PV) modules, an accurate performance estimation method is significantly valuable for studying the electrical characteristics of PV generation systems. Conventional analytical PV models are usually composed of nonlinear exponential functions, and a good number of unknown parameters must be identified before use. In this paper, an adaptive-network-based fuzzy inference system (ANFIS) based modeling method is proposed to predict the current-voltage characteristics of PV modules. The effectiveness of the proposed modeling method is evaluated through comparison with Villalva's model, a radial basis function neural networks (RBFNN) based model and a support vector regression (SVR) based model. Simulation and experimental results confirm both the feasibility and the effectiveness of the proposed method.

  18. Estimation of DSGE Models under Diffuse Priors and Data-Driven Identification Constraints

    DEFF Research Database (Denmark)

    Lanne, Markku; Luoto, Jani

    We propose a sequential Monte Carlo (SMC) method augmented with an importance sampling step for estimation of DSGE models. In addition to being theoretically well motivated, the new method facilitates the assessment of estimation accuracy. Furthermore, in order to alleviate the problem of multimodal posterior distributions caused by parameter redundancy, we impose data-driven identification constraints. An empirical application illustrates the properties of the estimation method, and shows how the problem of multimodal posterior distributions caused by parameter redundancy is eliminated by identification constraints. Out-of-sample forecast comparisons as well as Bayes factors lend support to the constrained model.

  19. On the real-time estimation of the wheel-rail contact force by means of a new nonlinear estimator design model

    Science.gov (United States)

    Strano, Salvatore; Terzo, Mario

    2018-05-01

    The dynamics of railway vehicles is strongly influenced by the interaction between the wheel and the rail. This kind of contact is affected by several conditioning factors such as vehicle speed, wear and adhesion level and, moreover, it is nonlinear. As a consequence, the modelling and the observation of this kind of phenomenon are complex tasks but, at the same time, they constitute a fundamental step for the estimation of the adhesion level or for vehicle condition monitoring. This paper presents a novel technique for the real-time estimation of the wheel-rail contact forces based on an estimator design model that takes the nonlinearities of the interaction into account by means of a fitting model intended to reproduce the contact mechanics over a wide range of slip and to be easily integrated into a complete model-based estimator for railway vehicles.

  20. Synchronous Generator Model Parameter Estimation Based on Noisy Dynamic Waveforms

    Science.gov (United States)

    Berhausen, Sebastian; Paszek, Stefan

    2016-01-01

    In recent years, system failures have occurred in many power systems all over the world. They have resulted in a lack of power supply to a large number of recipients. To minimize the risk of power failures, it is necessary to perform multivariate investigations, including simulations, of power system operating conditions. To conduct reliable simulations, an up-to-date base of parameters for the models of generating units, including the models of synchronous generators, is necessary. This paper presents a method for parameter estimation of a synchronous generator nonlinear model based on the analysis of selected transient waveforms caused by introducing a disturbance (in the form of a pseudorandom signal) in the generator voltage regulation channel. The parameter estimation was performed by minimizing an objective function defined as the mean square error of the deviations between the measured waveforms and the waveforms calculated from the generator mathematical model. A hybrid algorithm was used for the minimization of the objective function. The paper also describes a filter system used for filtering the noisy measurement waveforms. The calculation results for the model of a 44 kW synchronous generator installed on a laboratory stand of the Institute of Electrical Engineering and Computer Science of the Silesian University of Technology are also given. The presented estimation method can be successfully applied to parameter estimation of different models of high-power synchronous generators operating in a power system.

  1. Advanced empirical estimate of information value for credit scoring models

    Directory of Open Access Journals (Sweden)

    Martin Řezáč

    2011-01-01

    Credit scoring is a term for a wide spectrum of predictive models and their underlying techniques that aid financial institutions in granting credit. These methods decide who will get credit, how much credit they should get, and what further strategies will enhance the profitability of the borrowers to the lenders. Many statistical tools are available for measuring the quality, in the sense of predictive power, of credit scoring models. Because it is impossible to use a scoring model effectively without knowing how good it is, quality indexes like the Gini coefficient, Kolmogorov-Smirnov statistic and Information value are used to assess the quality of a given credit scoring model. The paper deals primarily with the Information value, sometimes called divergence. Commonly it is computed by discretisation of data into bins using deciles, in which case one constraint is required to be met: the number of cases has to be nonzero for all bins. If this constraint is not fulfilled, there are some practical procedures for preserving finite results. As an alternative to the empirical estimates, one can use kernel smoothing theory, which allows unknown densities to be estimated and consequently, using some numerical method for integration, the Information value itself. The main contribution of this paper is a proposal and description of the empirical estimate with supervised interval selection. This advanced estimate is based on the requirement to have at least k observations of scores of both good and bad clients in each considered interval, where k is a positive integer. A simulation study shows that this estimate outperforms both the empirical estimate using deciles and the kernel estimate. Furthermore, it shows high dependency on the choice of the parameter k: if we choose too small a value, we get an overestimated Information value, and vice versa. The adjusted square root of the number of bad clients seems to be a reasonable compromise.
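
    A sketch of the common decile-based empirical estimate discussed above, on synthetic scores (all distributions assumed); note the guard implementing the nonzero-bin constraint:

        import numpy as np
        import pandas as pd

        rng = np.random.default_rng(12)
        n = 10_000
        bad = rng.uniform(size=n) < 0.1
        score = rng.normal(loc=np.where(bad, 0.4, 0.6), scale=0.1)  # synthetic scores

        df = pd.DataFrame({"score": score, "bad": bad})
        df["bin"] = pd.qcut(df["score"], 10)                        # decile bins

        iv = 0.0
        for _, grp in df.groupby("bin", observed=True):
            g = (~grp["bad"]).sum() / (~df["bad"]).sum()            # share of goods in bin
            b = grp["bad"].sum() / df["bad"].sum()                  # share of bads in bin
            if g > 0 and b > 0:                                     # nonzero-bin constraint
                iv += (g - b) * np.log(g / b)
        print(round(iv, 3))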

  2. M-Estimators of Roughness and Scale for GA0-Modelled SAR Imagery

    Directory of Open Access Journals (Sweden)

    Frery Alejandro C

    2002-01-01

    The GA0 distribution is assumed as the universal model for multilook amplitude SAR imagery data under the multiplicative model. This distribution has two unknown parameters, related to the roughness and the scale of the signal, that can be used in image analysis and processing. It can be seen that maximum likelihood and moment estimators for its parameters can be influenced by small percentages of "outliers"; hence, it is of utmost importance to find robust estimators for these parameters. One of the best-known classes of robust techniques is that of M-estimators, which are an extension of the maximum likelihood estimation method. In this work we derive the M-estimators for the parameters of the GA0 distribution, and compare them with maximum likelihood estimators in a Monte Carlo experiment. It is checked that this robust technique is superior to the classical approach under the presence of corner reflectors, a common source of contamination in SAR images. Numerical issues are addressed, and a practical example is provided.

  3. MODELS TO ESTIMATE BRAZILIAN INDIRECT TENSILE STRENGTH OF LIMESTONE IN SATURATED STATE

    Directory of Open Access Journals (Sweden)

    Zlatko Briševac

    2016-06-01

    There are a number of methods for estimating physical and mechanical characteristics. The most widely used is regression, but recently more sophisticated methods such as neural networks have frequently been applied as well. This paper presents models based on simple and multiple regression and on neural networks of the Radial Basis Function and Multiple Layer Perceptron types, which can be used to estimate the Brazilian indirect tensile strength in saturated conditions. The paper covers the collection of the data for the analysis and modelling, and gives an overview of the analysis performed to assess the efficacy of the estimate of each model. After the assessment, the model which provides the best estimate was selected, including the model which could have the most widespread application in engineering practice.

  4. Improved air ventilation rate estimation based on a statistical model

    International Nuclear Information System (INIS)

    Brabec, M.; Jilek, K.

    2004-01-01

    A new approach to air ventilation rate estimation from CO measurement data is presented. The approach is based on a state-space dynamic statistical model, allowing for quick and efficient estimation. Underlying computations are based on Kalman filtering, whose practical software implementation is rather easy. The key property is the flexibility of the model, allowing various artificial regimens of CO level manipulation to be treated. The model is semi-parametric in nature and can efficiently handle time-varying ventilation rate. This is a major advantage, compared to some of the methods which are currently in practical use. After a formal introduction of the statistical model, its performance is demonstrated on real data from routine measurements. It is shown how the approach can be utilized in a more complex situation of major practical relevance, when time-varying air ventilation rate and radon entry rate are to be estimated simultaneously from concurrent radon and CO measurements

  5. Parameter estimation of electricity spot models from futures prices

    NARCIS (Netherlands)

    Aihara, ShinIchi; Bagchi, Arunabha; Imreizeeq, E.S.N.; Walter, E.

    We consider a slight perturbation of the Schwartz-Smith model for the electricity futures prices and the resulting modified spot model. Using the martingale property of the modified price under the risk neutral measure, we derive the arbitrage free model for the spot and futures prices. We estimate

  6. Simplification of an MCNP model designed for dose rate estimation

    Directory of Open Access Journals (Sweden)

    Laptev Alexander

    2017-01-01

    A study was made to investigate the methods of building a simplified MCNP model for radiological dose estimation. The research was done using an example of a complicated glovebox with extra shielding. The paper presents several different calculations for neutron and photon dose evaluations where glovebox elements were consecutively excluded from the MCNP model. The analysis indicated that to obtain a fast and reasonable estimation of dose, the model should be realistic in details that are close to the tally. Other details may be omitted.

  7. Estimation of potential solar radiation using 50m grid digital terrain model

    International Nuclear Information System (INIS)

    Kurose, Y.; Nagata, K.; Ohba, K.; Maruyama, A.

    1999-01-01

    To clarify the spatial distribution of solar radiation, a model to estimate the potential incoming solar radiation with 50 m grid size was developed. The model is based on individual calculation of direct and diffuse solar radiation, accounting for the effect of topographic shading. Using the elevation data within a 25 km radius, offered by the Digital Map 50m Grid, the effect of topographic shading is estimated as the elevation angle of the surrounding terrain in 72 directions. The estimated sunshine duration under clear sky conditions agreed well with observed values at AMeDAS points of the Kyushu and Shikoku regions. Similarly, there is a significant agreement between estimated and observed variation of solar radiation for monthly mean conditions over complex terrain. These results suggest that the potential incoming solar radiation can be estimated well over complex terrain using the model. Locations of large fields over complex terrain agreed well with the areas of abundant insolation defined by the model. The model is available for the investigation of agrometeorological resources over complex terrain. (author)
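
    The shading test at the core of such a model reduces to comparing the solar elevation with the terrain horizon angle per azimuth; a toy sketch on a random DEM, scanning a single azimuth instead of the 72 used above, is:

        import numpy as np

        rng = np.random.default_rng(13)
        dem = rng.uniform(0, 300, size=(80, 80))   # elevations on a 50 m grid (m), assumed
        cell = 50.0

        def horizon_angle(dem, i, j, di, dj, max_steps=25):
            # Max elevation angle of terrain seen from (i, j) along direction (di, dj).
            best = 0.0
            for s in range(1, max_steps):
                ii, jj = i + s * di, j + s * dj
                if not (0 <= ii < dem.shape[0] and 0 <= jj < dem.shape[1]):
                    break
                ang = np.degrees(np.arctan2(dem[ii, jj] - dem[i, j], s * cell))
                best = max(best, ang)
            return best

        sun_elev = 25.0                            # solar elevation (deg), assumed
        shaded = horizon_angle(dem, 40, 40, 1, 0) > sun_elev
        print("cell (40, 40) shaded:", shaded)     # direct beam blocked if True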

  8. Estimation of stochastic volatility by using Ornstein-Uhlenbeck type models

    Science.gov (United States)

    Mariani, Maria C.; Bhuiyan, Md Al Masum; Tweneboah, Osei K.

    2018-02-01

    In this study, we develop a technique for estimating the stochastic volatility (SV) of a financial time series by using Ornstein-Uhlenbeck type models. Using the daily closing prices from developed and emergent stock markets, we conclude that the incorporation of stochastic volatility into the time-varying parameter estimation significantly improves the forecasting performance via Maximum Likelihood Estimation. Furthermore, our estimation algorithm is feasible with large data sets and has good convergence properties.

  9. Multi-Model Estimation Based Moving Object Detection for Aerial Video

    Directory of Open Access Journals (Sweden)

    Yanning Zhang

    2015-04-01

    With the rapid development of UAV (Unmanned Aerial Vehicle) technology, moving target detection for aerial video has become a popular research topic in computer vision. Most existing methods work under a registration-detection framework and can only deal with simple background scenes. They tend to fail in complex multi-background scenarios, such as viaducts, buildings and trees. In this paper, we break through the single-background constraint and perceive complex scenes accurately by automatically estimating multiple background models. First, we segment the scene into several color blocks and estimate the dense optical flow. Then, we calculate an affine transformation model for each large-area block and merge the consistent models. Finally, for all small-area blocks, we compute each pixel's degree of membership to the multiple background models. Moving objects are segmented by means of an energy optimization method solved via Graph Cuts. Extensive experimental results on public aerial videos show that, owing to the multiple background model estimation and the pixel-wise analysis of membership to the models by energy minimization, our method can effectively remove buildings, trees and other false alarms and detect moving objects correctly.

  10. Estimating Model Prediction Error: Should You Treat Predictions as Fixed or Random?

    Science.gov (United States)

    Wallach, Daniel; Thorburn, Peter; Asseng, Senthold; Challinor, Andrew J.; Ewert, Frank; Jones, James W.; Rotter, Reimund; Ruane, Alexander

    2016-01-01

    Crop models are important tools for impact assessment of climate change, as well as for exploring management options under current climate. It is essential to evaluate the uncertainty associated with predictions of these models. We compare two criteria of prediction error: MSEP_fixed, which evaluates mean squared error of prediction for a model with fixed structure, parameters and inputs, and MSEP_uncertain(X), which evaluates mean squared error averaged over the distributions of model structure, inputs and parameters. Comparison of model outputs with data can be used to estimate the former. The latter has a squared bias term, which can be estimated using hindcasts, and a model variance term, which can be estimated from a simulation experiment. The separate contributions to MSEP_uncertain(X) can be estimated using a random effects ANOVA. It is argued that MSEP_uncertain(X) is the more informative uncertainty criterion, because it is specific to each prediction situation.

  11. Soil Characterization at the Linde FUSRAP Site and the Impact on Soil Volume Estimates

    International Nuclear Information System (INIS)

    Boyle, J.; Kenna, T.; Pilon, R.

    2002-01-01

    The former Linde site in Tonawanda, New York is currently undergoing active remediation of Manhattan Engineering District's radiological contamination. This remediation is authorized under the Formerly Utilized Sites Remedial Action Program (FUSRAP). The focus of this paper will be to describe the impact of soil characterization efforts as they relate to soil volume estimates and project cost estimates. An additional objective is to stimulate discussion about other characterization and modeling technologies, and to provide a "Lessons Learned" scenario to assist in future volume estimating at other FUSRAP sites. Initial soil characterization efforts at the Linde FUSRAP site in areas known or suspected to be contaminated were presented in the Remedial Investigation Report for the Tonawanda Site, dated February 1993. Results of those initial characterization efforts were the basis for soil volume estimates that were used to estimate and negotiate the current remediation contract. During the course of remediation, previously unidentified areas of contamination were discovered, and additional characterization was initiated. Additional test pit and geoprobe samples were obtained at over 500 locations, bringing the total to over 800 sample locations at the 135-acre site. New data continue to be collected on a routine basis during ongoing remedial actions.

  12. A generic model for estimating biomass accumulation and greenhouse gas emissions from perennial crops

    Science.gov (United States)

    Ledo, Alicia; Heathcote, Richard; Hastings, Astley; Smith, Pete; Hillier, Jonathan

    2017-04-01

    Agriculture is essential to maintain humankind but is, at the same time, a substantial emitter of greenhouse gas (GHG) emissions. With a rising global population, the need for agriculture to provide a secure food and energy supply is one of the main human challenges. At the same time, it is the only sector which has significant potential for negative emissions through the sequestration of carbon and offsetting via supply of feedstock for energy production. Perennial crops accumulate carbon during their lifetime and enhance organic soil carbon increase via root senescence and decomposition. However, inconsistency in accounting for this stored biomass undermines efforts to assess the benefits of such cropping systems when applied at scale. A consequence of this exclusion is that efforts to manage this important carbon stock are neglected. Detailed information on carbon balance is crucial to identify the main processes responsible for greenhouse gas emissions in order to develop strategic mitigation programs. Perennial crop systems represent 30% of the total global crop area, too considerable an amount to be ignored. Furthermore, they have a major standing in both the bioenergy and global food industries. In this study, we first present a generic model to calculate the carbon balance and GHG emissions from perennial crops, covering both food and bioenergy crops. The model is composed of two simple process-based sub-models, to cover perennial grasses and other perennial woody plants. The first is a generic individual based sub-model (IBM) covering crops in which the yield is the fruit and the plant biomass is an unharvested residue. Trees, shrubs and climbers fall into this category. The second is a generic area based sub-model (ABM) covering perennial grasses, in which the harvested part includes some of the plant parts in which the carbon storage is accounted. Most second generation perennial bioenergy crops fall into this category. Both generic sub-models

  13. Negative binomial models for abundance estimation of multiple closed populations

    Science.gov (United States)

    Boyce, Mark S.; MacKenzie, Darry I.; Manly, Bryan F.J.; Haroldson, Mark A.; Moody, David W.

    2001-01-01

    Counts of uniquely identified individuals in a population offer opportunities to estimate abundance. However, for various reasons such counts may be burdened by heterogeneity in the probability of being detected. Theoretical arguments and empirical evidence demonstrate that the negative binomial distribution (NBD) is a useful characterization for counts from biological populations with heterogeneity. We propose a method that focuses on estimating multiple populations by simultaneously using a suite of models derived from the NBD. We used this approach to estimate the number of female grizzly bears (Ursus arctos) with cubs-of-the-year in the Yellowstone ecosystem, for each year, 1986-1998. Akaike's Information Criterion (AIC) indicated that a negative binomial model with a constant level of heterogeneity across all years was best for characterizing the sighting frequencies of female grizzly bears. A lack-of-fit test indicated the model adequately described the collected data. Bootstrap techniques were used to estimate standard errors and 95% confidence intervals. We provide a Monte Carlo technique, which confirms that the Yellowstone ecosystem grizzly bear population increased during the period 1986-1998.
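
    A minimal sketch of the core computation on assumed toy data: a maximum-likelihood fit of a negative binomial to sighting frequencies, reporting AIC. The paper additionally shares the heterogeneity parameter across years and bootstraps confidence intervals, which is not reproduced here.

      import numpy as np
      from scipy.optimize import minimize
      from scipy.stats import nbinom

      def fit_nb(counts):
          # Negative binomial MLE for per-individual sighting counts.
          def nll(params):
              r, p = params
              if r <= 0 or not 0 < p < 1:
                  return np.inf
              return -nbinom.logpmf(counts, r, p).sum()
          res = minimize(nll, x0=[1.0, 0.5], method="Nelder-Mead")
          r, p = res.x
          aic = 2 * 2 + 2 * res.fun          # two free parameters
          return r, p, aic

      counts = np.array([0, 1, 1, 2, 3, 0, 4, 2, 1, 6, 0, 2])  # toy data
      print(fit_nb(counts))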

  14. Consistent Estimation of Partition Markov Models

    Directory of Open Access Journals (Sweden)

    Jesús E. García

    2017-04-01

    The Partition Markov Model characterizes the process by a partition L of the state space, where the elements in each part of L share the same transition probability to an arbitrary element in the alphabet. This model aims to answer the following questions: what is the minimal number of parameters needed to specify a Markov chain, and how can these parameters be estimated? In order to answer these questions, we build a consistent strategy for model selection which consists of the following: given a size-n realization of the process, find a model within the Partition Markov class, with a minimal number of parts, to represent the process law. From the strategy, we derive a measure that establishes a metric in the state space. In addition, we show that if the law of the process is Markovian, then, as n goes to infinity, L will eventually be retrieved. We show an application to model internet navigation patterns.

  15. Estimating cardiovascular disease incidence from prevalence: a spreadsheet based model

    Directory of Open Access Journals (Sweden)

    Xue Feng Hu

    2017-01-01

    Abstract Background Disease incidence and prevalence are both core indicators of population health. Incidence is generally not as readily accessible as prevalence. Cohort studies and electronic health record systems are two major ways to estimate disease incidence. The former is time-consuming and expensive; the latter is not available in most developing countries. Alternatively, mathematical models could be used to estimate disease incidence from prevalence. Methods We proposed and validated a method to estimate the age-standardized incidence of cardiovascular disease (CVD), with prevalence data from successive surveys and mortality data from empirical studies. Hallett's method, designed for estimating HIV infections in Africa, was modified to estimate the incidence of myocardial infarction (MI) in the U.S. population and the incidence of heart disease in the Canadian population. Results Model-derived estimates were in close agreement with observed incidence from cohort studies and population surveillance systems. This method correctly captured the trend in incidence given sufficient waves of cross-sectional surveys. The estimated MI declining rate in the U.S. population was in accordance with the literature. This method was superior to a closed-cohort approach in estimating the trend of population cardiovascular disease incidence. Conclusion It is possible to estimate CVD incidence accurately at the population level from cross-sectional prevalence data. This method has the potential to be used for age- and sex-specific incidence estimates, or to be expanded to other chronic conditions.
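
    To make the back-calculation idea concrete, the sketch below solves for the constant annual incidence that reproduces the prevalence observed at a later survey in a one-compartment population with differential mortality. The function and all rates are hypothetical; the paper's modified Hallett method works on age- and sex-specific strata.

      def incidence_from_prevalence(p1, p2, years, mort_cases, mort_free):
          # Find the annual risk i of disease onset that carries the
          # population from prevalence p1 to p2 over `years` years.
          def prevalence_after(i):
              p = p1
              for _ in range(years):
                  sick = p * (1 - mort_cases) + (1 - p) * (1 - mort_free) * i
                  alive = p * (1 - mort_cases) + (1 - p) * (1 - mort_free)
                  p = sick / alive
              return p
          lo, hi = 0.0, 1.0
          for _ in range(60):                # bisection on i
              i = (lo + hi) / 2
              lo, hi = (i, hi) if prevalence_after(i) < p2 else (lo, i)
          return i

      print(incidence_from_prevalence(0.06, 0.075, 5, 0.05, 0.01))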

  16. Evapotranspiration Estimates for a Stochastic Soil-Moisture Model

    Science.gov (United States)

    Chaleeraktrakoon, Chavalit; Somsakun, Somrit

    2009-03-01

    Potential evapotranspiration is information that is necessary for applying a widely used stochastic model of soil moisture (I. Rodriguez Iturbe, A. Porporato, L. Ridolfi, V. Isham and D. R. Cox, Probabilistic modelling of water balance at a point: The role of climate, soil and vegetation, Proc. Roy. Soc. London A455 (1999) 3789-3805). An objective of the present paper is thus to find a proper estimate of the evapotranspiration for the stochastic model. This estimate is obtained by comparing the calculated soil-moisture distributions resulting from various techniques, such as Thornthwaite, Makkink, Jensen-Haise, FAO Modified Penman, and Blaney-Criddle, with an observed one. The comparison results, using five sequences of daily soil moisture for a dry season from November 2003 to April 2004 (Udornthani Province, Thailand), indicate that all methods can be used if the required weather information is available, because their soil-moisture distributions are alike. In addition, the model is shown to describe the phenomenon approximately at a weekly or biweekly time scale, which is desirable for agricultural engineering applications.

  17. truncSP: An R Package for Estimation of Semi-Parametric Truncated Linear Regression Models

    Directory of Open Access Journals (Sweden)

    Maria Karlsson

    2014-05-01

    Problems with truncated data occur in many areas, complicating estimation and inference. Regarding linear regression models, the ordinary least squares estimator is inconsistent and biased for these types of data and is therefore unsuitable for use. Alternative estimators, designed for the estimation of truncated regression models, have been developed. This paper presents the R package truncSP. The package contains functions for the estimation of semi-parametric truncated linear regression models using three different estimators: the symmetrically trimmed least squares, quadratic mode, and left truncated estimators, all of which have been shown to have good asymptotic and finite sample properties. The package also provides functions for the analysis of the estimated models. Data from the environmental sciences are used to illustrate the functions in the package.

  18. Initial and final estimates of the Bilinear seasonal time series model ...

    African Journals Online (AJOL)

    In estimating the parameters of this model, special attention was paid to the problem of obtaining good initial estimates, as it is proposed that with good initial values of the parameters the estimates obtained by the Newton-Raphson iterative technique usually not only converge but are also good estimates.

  19. Parameter Estimates in Differential Equation Models for Population Growth

    Science.gov (United States)

    Winkel, Brian J.

    2011-01-01

    We estimate the parameters present in several differential equation models of population growth, specifically logistic growth models and two-species competition models. We discuss student-evolved strategies and offer "Mathematica" code for a gradient search approach. We use historical (1930s) data from microbial studies of the Russian biologist,…
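
    The article provides Mathematica code for a gradient search; an analogous Python sketch, fitting the closed-form logistic solution to assumed toy data by nonlinear least squares, looks like this:

      import numpy as np
      from scipy.optimize import curve_fit

      def logistic(t, K, r, P0):
          # Closed-form solution of dP/dt = r*P*(1 - P/K), P(0) = P0.
          return K / (1 + (K / P0 - 1) * np.exp(-r * t))

      t = np.array([0, 8, 16, 24, 32, 40, 48.0])       # hours (toy)
      P = np.array([2, 10, 40, 110, 190, 240, 255.0])  # population (toy)
      (K, r, P0), _ = curve_fit(logistic, t, P, p0=[260, 0.1, 2])
      print(f"K={K:.1f}, r={r:.3f}, P0={P0:.2f}")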

  20. Developing a new solar radiation estimation model based on Buckingham theorem

    Science.gov (United States)

    Ekici, Can; Teke, Ismail

    2018-06-01

    While solar radiation can be expressed physically on cloudless days, this becomes difficult under cloudy and complicated weather conditions. In addition, solar radiation measurements are often not taken in developing countries. In such cases, solar radiation estimation models are used, which estimate solar radiation from other meteorological parameters measured at the stations. In this study, a solar radiation estimation model was obtained using the Buckingham theorem, by deriving dimensionless pi parameters to express solar radiation; the theorem proved useful for this purpose. The derived model is compared with temperature-based models in the literature: the Allen, Hargreaves, Chen and Bristow-Campbell models. MPE, RMSE, MBE and NSE error analysis methods are used in this comparison, based on data obtained from North Dakota's agricultural climate network. In these applications, the model obtained within the scope of the study gives better accuracy than the other models, as the RMSE analysis in particular shows. The model was found to give satisfactory short-term performance and good results in terms of long-term performance and percentage errors.

  1. Alternative Approaches to Technical Efficiency Estimation in the Stochastic Frontier Model

    OpenAIRE

    Acquah, H. de-Graft; Onumah, E. E.

    2014-01-01

    Estimating the stochastic frontier model and calculating the technical efficiency of decision-making units are of great importance in applied production economics. This paper estimates technical efficiency from the stochastic frontier model using the Jondrow et al. and the Battese and Coelli approaches. In order to compare the alternative methods, simulated data with sample sizes of 60 and 200 are generated from a stochastic frontier model commonly applied to agricultural firms. Simulated data is employed to co...

  2. Estimating Small-Body Gravity Field from Shape Model and Navigation Data

    Science.gov (United States)

    Park, Ryan S.; Werner, Robert A.; Bhaskaran, Shyam

    2008-01-01

    This paper presents a method to model the external gravity field and to estimate the internal density variation of a small body. We first discuss the modeling problem, where we assume the polyhedral shape and internal density distribution are given, and model the body interior using finite element definitions such as cubes and spheres. The gravitational attractions computed from these approaches are compared with the true uniform-density polyhedral attraction, and the levels of accuracy are presented. We then discuss the inverse problem, where we assume the body shape, radiometric measurements, and a priori density constraints are given, and estimate the internal density variation by estimating the density of each finite element. The result shows that the accuracy of the estimated density variation can be significantly improved depending on the orbit altitude, finite-element resolution, and measurement accuracy.

  3. Variation in the diversity and richness of parasitoid wasps based on sampling effort.

    Science.gov (United States)

    Saunders, Thomas E; Ward, Darren F

    2018-01-01

    Parasitoid wasps are a mega-diverse, ecologically dominant, but poorly studied component of global biodiversity. In order to maximise the efficiency and reduce the cost of their collection, the application of optimal sampling techniques is necessary. Two sites in Auckland, New Zealand were sampled intensively to determine the relationship between sampling effort and observed species richness of parasitoid wasps from the family Ichneumonidae. Twenty traps were deployed at each site at three different times over the austral summer period, resulting in a total sampling effort of 840 Malaise-trap-days. Rarefaction techniques and non-parametric estimators were used to predict species richness and to evaluate the variation and completeness of sampling. Despite an intensive Malaise-trapping regime over the summer period, no asymptote of species richness was reached. At best, sampling captured two-thirds of parasitoid wasp species present. The estimated total number of species present depended on the month of sampling and the statistical estimator used. Consequently, the use of fewer traps would have caught only a small proportion of all species (one trap 7-21%; two traps 13-32%), and many traps contributed little to the overall number of individuals caught. However, variation in the catch of individual Malaise traps was not explained by seasonal turnover of species, vegetation or environmental conditions surrounding the trap, or distance of traps to one another. Overall the results demonstrate that even with an intense sampling effort the community is incompletely sampled. The use of only a few traps and/or for very short periods severely limits the estimates of richness because (i) fewer individuals are caught leading to a greater number of singletons; and (ii) the considerable variation of individual traps means some traps will contribute few or no individuals. Understanding how sampling effort affects the richness and diversity of parasitoid wasps is a useful
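
    One standard nonparametric richness estimator of the kind used in such studies is Chao1, which extrapolates total richness from the numbers of species caught exactly once or twice; the record does not name the estimators applied, so the sketch below is purely illustrative.

      import numpy as np

      def chao1(abundances):
          # S_est = S_obs + F1^2 / (2*F2); bias-corrected form if F2 = 0.
          a = np.asarray(abundances)
          s_obs = (a > 0).sum()
          f1 = (a == 1).sum()                 # singletons
          f2 = (a == 2).sum()                 # doubletons
          if f2 > 0:
              return s_obs + f1 * f1 / (2 * f2)
          return s_obs + f1 * (f1 - 1) / 2.0

      print(chao1([5, 1, 1, 2, 8, 1, 2, 1, 1, 3]))  # toy trap counts -> 16.25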

  4. Comparison of modeled estimates of inhalation exposure to aerosols during use of consumer spray products.

    Science.gov (United States)

    Park, Jihoon; Yoon, Chungsik; Lee, Kiyoung

    2018-05-30

    In the field of exposure science, various exposure assessment models have been developed to complement experimental measurements; however, few studies have been published on their validity. This study compares the estimated inhaled aerosol doses of several inhalation exposure models to experimental measurements of aerosols released from consumer spray products, and then compares deposited doses within different parts of the human respiratory tract according to deposition models. Exposure models, including the European Center for Ecotoxicology of Chemicals Targeted Risk Assessment (ECETOC TRA), the Consumer Exposure Model (CEM), SprayExpo, ConsExpo Web and ConsExpo Nano, were used to estimate the inhaled dose under various exposure scenarios, and modeled and experimental estimates were compared. The deposited dose in different respiratory regions was estimated using the International Commission on Radiological Protection model and multiple-path particle dosimetry models under the assumption of polydispersed particles. The modeled estimates of the inhaled doses were accurate in the short term, i.e., within 10 min of the initial spraying, with differences from experimental estimates ranging from 0 to 73% among the models. However, the estimates for long-term exposure, i.e., exposure times of several hours, deviated significantly from the experimental estimates in the absence of ventilation. The differences between the experimental and modeled estimates of particle number and surface area were constant over time under ventilated conditions. ConsExpo Nano, as a nano-scale model, showed stable estimates of short-term exposure, with a difference from the experimental estimates of less than 60% for all metrics. The deposited particle estimates were similar among the deposition models, particularly in the nanoparticle range for the head airway and alveolar regions. In conclusion, the results showed that the inhalation exposure models tested in this study are suitable

  5. A neuronal model of a global workspace in effortful cognitive tasks.

    Science.gov (United States)

    Dehaene, S; Kerszberg, M; Changeux, J P

    1998-11-24

    A minimal hypothesis is proposed concerning the brain processes underlying effortful tasks. It distinguishes two main computational spaces: a unique global workspace composed of distributed and heavily interconnected neurons with long-range axons, and a set of specialized and modular perceptual, motor, memory, evaluative, and attentional processors. Workspace neurons are mobilized in effortful tasks for which the specialized processors do not suffice. They selectively mobilize or suppress, through descending connections, the contribution of specific processor neurons. In the course of task performance, workspace neurons become spontaneously coactivated, forming discrete though variable spatio-temporal patterns subject to modulation by vigilance signals and to selection by reward signals. A computer simulation of the Stroop task shows workspace activation to increase during acquisition of a novel task, effortful execution, and after errors. We outline predictions for spatio-temporal activation patterns during brain imaging, particularly about the contribution of dorsolateral prefrontal cortex and anterior cingulate to the workspace.

  6. Evaporation estimation of rift valley lakes: comparison of models.

    Science.gov (United States)

    Melesse, Assefa M; Abtew, Wossenu; Dessalegne, Tibebe

    2009-01-01

    Evapotranspiration (ET) accounts for a substantial amount of the water flux in the arid and semi-arid regions of the World. Accurate estimation of ET has been a challenge for hydrologists, mainly because of the spatiotemporal variability of the environmental and physical parameters governing the latent heat flux. In addition, most available ET models depend on intensive meteorological information for ET estimation. Such data are not available at the desired spatial and temporal scales in less developed and remote parts of the world. This limitation has necessitated the development of simple models that are less data intensive and provide ET estimates with an acceptable level of accuracy. A remote sensing approach can also be applied to large areas where meteorological data are not available and field scale data collection is costly, time consuming and difficult. In areas like the Rift Valley regions of Ethiopia, the applicability of the Simple Method (Abtew Method) of lake evaporation estimation and of a surface energy balance approach using remote sensing was studied. The Simple Method and remote sensing-based lake evaporation estimates were compared to the Penman, Energy balance, Pan, Radiation and Complementary Relationship Lake Evaporation (CRLE) methods applied in the region. Results indicate good correspondence of the model outputs with those of the above methods. Comparison of the 1986 and 2000 monthly lake ET from the Landsat images to the Simple and Penman Methods shows that the remote sensing and surface energy balance approach is promising for large scale applications to understand the spatial variation of the latent heat flux.

  7. Evaporation Estimation of Rift Valley Lakes: Comparison of Models

    Directory of Open Access Journals (Sweden)

    Tibebe Dessalegne

    2009-12-01

    Evapotranspiration (ET) accounts for a substantial amount of the water flux in the arid and semi-arid regions of the World. Accurate estimation of ET has been a challenge for hydrologists, mainly because of the spatiotemporal variability of the environmental and physical parameters governing the latent heat flux. In addition, most available ET models depend on intensive meteorological information for ET estimation. Such data are not available at the desired spatial and temporal scales in less developed and remote parts of the world. This limitation has necessitated the development of simple models that are less data intensive and provide ET estimates with an acceptable level of accuracy. A remote sensing approach can also be applied to large areas where meteorological data are not available and field scale data collection is costly, time consuming and difficult. In areas like the Rift Valley regions of Ethiopia, the applicability of the Simple Method (Abtew Method) of lake evaporation estimation and of a surface energy balance approach using remote sensing was studied. The Simple Method and remote sensing-based lake evaporation estimates were compared to the Penman, Energy balance, Pan, Radiation and Complementary Relationship Lake Evaporation (CRLE) methods applied in the region. Results indicate good correspondence of the model outputs with those of the above methods. Comparison of the 1986 and 2000 monthly lake ET from the Landsat images to the Simple and Penman Methods shows that the remote sensing and surface energy balance approach is promising for large scale applications to understand the spatial variation of the latent heat flux.
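
    The Simple (Abtew) Method reduces to a one-line calculation from solar radiation alone; the coefficient below is the commonly cited South Florida calibration and is an assumption here.

      def abtew_evaporation(rs, k=0.53, lam=2.45):
          # E (mm/day) = K * Rs / lambda, with Rs in MJ m-2 day-1 and
          # lambda the latent heat of vaporisation (MJ per kg of water).
          return k * rs / lam

      print(abtew_evaporation(22.0))   # ~4.8 mm/day on a sunny day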

  8. Estimating Drilling Cost and Duration Using Copulas Dependencies Models

    Directory of Open Access Journals (Sweden)

    M. Al Kindi

    2017-03-01

    Estimation of drilling budget and duration is a high-level challenge for the oil and gas industry, due to the many uncertain activities in the drilling procedure, such as material prices, overhead cost, inflation, oil prices, well type, and depth of drilling. It is therefore essential to consider all these uncertain variables and the nature of the relationships between them. This eventually leads to the minimization of the level of uncertainty while still providing "good" estimation points for budget and duration given the well type. In this paper, copula probability theory is used in order to model the dependencies between cost/duration and the MRI (mechanical risk index). The MRI is a mathematical computation which relates various drilling factors such as water depth, measured depth, and true vertical depth, in addition to mud weight and horizontal displacement. In general, the value of MRI is utilized as an input for the drilling cost and duration estimations; modeling the uncertain dependencies between MRI and both cost and duration using copulas is therefore important. The cost and duration estimates for each well were extracted from the copula dependency model, where the study simulated over 10,000 scenarios. These new estimates were later compared to the actual data in order to validate the performance of the procedure. Most of the wells show moderate-to-weak dependence on MRI, which means that the variation in these wells can be related to MRI, but to the extent that it is not the primary source.
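
    A minimal Gaussian-copula sketch of the dependence-modelling step (the record does not name the copula family, so this choice is an assumption): estimate the correlation on normal scores of the ranks, simulate scenarios, and map them back through the empirical marginals.

      import numpy as np
      from scipy.stats import norm, rankdata

      def gaussian_copula_sample(mri, cost, n=10000, seed=1):
          u = rankdata(mri) / (len(mri) + 1)     # pseudo-observations
          v = rankdata(cost) / (len(cost) + 1)
          z = np.column_stack([norm.ppf(u), norm.ppf(v)])
          rho = np.corrcoef(z.T)[0, 1]           # copula correlation
          rng = np.random.default_rng(seed)
          sim = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], n)
          uu = norm.cdf(sim)
          # re-impose the empirical marginals on the simulated ranks
          return np.quantile(mri, uu[:, 0]), np.quantile(cost, uu[:, 1])

      mri = np.array([2.1, 3.4, 4.0, 5.2, 6.8, 7.5, 8.9, 9.3])    # toy
      cost = np.array([1.1, 1.4, 1.3, 2.0, 2.6, 2.4, 3.5, 3.2])   # $M, toy
      m, c = gaussian_copula_sample(mri, cost)
      print(np.corrcoef(m, c)[0, 1])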

  9. A New Form of Nondestructive Strength-Estimating Statistical Models Accounting for Uncertainty of Model and Aging Effect of Concrete

    International Nuclear Information System (INIS)

    Hong, Kee Jeung; Kim, Jee Sang

    2009-01-01

    As concrete ages, the surrounding environment is expected to have growing influence on the concrete. As all the impacts of the environment cannot be considered in the strength-estimating model of a nondestructive concrete test, the increase in concrete age leads to growing uncertainty in the strength-estimating model; therefore, the variation of the model error increases. It is necessary to include those impacts in the probability model of concrete strength obtained from nondestructive tests so as to build a more accurate reliability model for structural performance evaluation. This paper reviews and categorizes the existing strength-estimating statistical models for nondestructive concrete tests, and suggests a new form of strength-estimating statistical model to properly reflect the model uncertainty due to aging of the concrete. This new form of statistical model will lay the foundation for more accurate structural performance evaluation.

  10. Estimation of parameters of constant elasticity of substitution production functional model

    Science.gov (United States)

    Mahaboob, B.; Venkateswarlu, B.; Sankar, J. Ravi

    2017-11-01

    Nonlinear model building has become an increasingly important and powerful tool in mathematical economics, and the popularity of applications of nonlinear models has risen dramatically in recent years. Researchers in econometrics are often interested in the inferential aspects of nonlinear regression models [6]. The present research study gives a distinct method of estimation for a more complicated and highly nonlinear model, viz. the Constant Elasticity of Substitution (CES) production functional model. In 2012, Henningen et al. [5] proposed three solutions to avoid serious problems when estimating CES functions: i) removing discontinuities by using the limits of the CES function and its derivative; ii) circumventing large rounding errors by local linear approximations; and iii) handling ill-behaved objective functions by a multi-dimensional grid search. Joel Chongeh et al. [7] discussed the estimation of the impact of capital and labour inputs on the gross output of agri-food products using the constant elasticity of substitution production function in the Tanzanian context. Pol Antras [8] presented new estimates of the elasticity of substitution between capital and labour using data from the private sector of the U.S. economy for the period 1948-1998.
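
    For illustration, a naive nonlinear least-squares fit of a constant-returns CES function on simulated data; this direct approach is exactly what the remedies of Henningen et al. [5] improve upon when the substitution parameter approaches zero.

      import numpy as np
      from scipy.optimize import curve_fit

      def ces(X, gamma, delta, rho):
          # Q = gamma * (delta*K^-rho + (1-delta)*L^-rho)^(-1/rho)
          K, L = X
          return gamma * (delta * K**-rho + (1 - delta) * L**-rho) ** (-1 / rho)

      rng = np.random.default_rng(3)
      K = rng.uniform(5, 50, 200)
      L = rng.uniform(5, 50, 200)
      Q = ces((K, L), 2.0, 0.4, 0.6) * np.exp(rng.normal(0, 0.05, 200))
      p, _ = curve_fit(ces, (K, L), Q, p0=[1.0, 0.5, 0.3],
                       bounds=([0.1, 0.01, 0.01], [10, 0.99, 5]))
      print(p)   # recovers gamma, delta, rho approximately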

  11. HIGH DIMENSIONAL COVARIANCE MATRIX ESTIMATION IN APPROXIMATE FACTOR MODELS.

    Science.gov (United States)

    Fan, Jianqing; Liao, Yuan; Mincheva, Martina

    2011-01-01

    The variance-covariance matrix plays a central role in the inferential theories of high dimensional factor models in finance and economics. Popular regularization methods that directly exploit sparsity are not applicable to many financial problems. Classical methods of estimating the covariance matrices are based on strict factor models, assuming independent idiosyncratic components. This assumption, however, is restrictive in practical applications. By assuming a sparse error covariance matrix, we allow for the presence of cross-sectional correlation even after taking out common factors, and this enables us to combine the merits of both methods. We estimate the sparse covariance using the adaptive thresholding technique as in Cai and Liu (2011), taking into account the fact that direct observations of the idiosyncratic components are unavailable. The impact of high dimensionality on the covariance matrix estimation based on the factor structure is then studied.

  12. An Estimation of Construction and Demolition Debris in Seoul, Korea: Waste Amount, Type, and Estimating Model.

    Science.gov (United States)

    Seo, Seongwon; Hwang, Yongwoo

    1999-08-01

    Construction and demolition (C&D) debris is generated at the site of various construction activities. The amount of such debris is usually so large that it is necessary to estimate it as accurately as possible for effective waste management and control in urban areas. In this paper, an estimation method using a statistical model is proposed. The estimation process comprises the following steps: estimation of the life span of buildings; estimation of the floor area of buildings to be constructed and demolished; calculation of individual intensity units of C&D debris; and estimation of future C&D debris production. The method was applied to the city of Seoul as an actual case, and the estimated amount of C&D debris in Seoul in 2021 was approximately 24 million tons. Of this total amount, 98% was generated by demolition, and the main components of the debris were concrete and brick.
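
    The intensity-unit arithmetic at the heart of such an estimate is straightforward; the floor area and coefficients below are placeholders, not the Seoul values.

      def cd_debris(floor_area_m2, intensity_t_per_m2):
          # debris per waste type = floor area * intensity unit (t/m2)
          return {w: floor_area_m2 * i for w, i in intensity_t_per_m2.items()}

      demolition = cd_debris(1.8e7, {"concrete": 0.83, "brick": 0.20,
                                     "wood": 0.03, "other": 0.06})
      print(demolition, sum(demolition.values()))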

  13. Are labour-intensive efforts to prevent pressure ulcers cost-effective?

    Science.gov (United States)

    Mathiesen, Anne Sofie Mølbak; Nørgaard, Kamilla; Andersen, Marie Frederikke Bruun; Møller, Klaus Meyer; Ehlers, Lars Holger

    2013-10-01

    Pressure ulcers are a major problem in Danish healthcare, with a prevalence of 13-43% among hospitalized patients. The associated costs to the Danish Health Care Sector are estimated to be €174.5 million annually. In 2010, the Danish Society for Patient Safety introduced the Pressure Ulcer Bundle (PUB) in order to reduce hospital-acquired pressure ulcers by a minimum of 50% in five hospitals. The PUB consists of evidence-based preventive initiatives implemented by ward staff using the Model for Improvement. The aim was to investigate the cost-effectiveness of labour-intensive efforts to reduce pressure ulcers in the Danish Health Care Sector, comparing the PUB with standard care. A decision analytic model was constructed to assess the costs and consequences of hospital-acquired pressure ulcers during an average hospital admission in Denmark. The model inputs were based on a systematic review of clinical efficacy data combined with local cost and effectiveness data from Thy-Mors Hospital, Denmark. A probabilistic sensitivity analysis (PSA) was conducted to assess the uncertainty. Prevention of hospital-acquired pressure ulcers by implementing labour-intensive efforts according to the PUB was cost-saving and resulted in an improved effect compared to standard care. The incremental cost of the PUB was -€38.62. The incremental effects were a 9.3% reduction in pressure ulcers and a 0.47% reduction in deaths. The PSA confirmed the dominance of the PUB, in terms of the incremental cost-effectiveness ratio (ICER), for both prevented pressure ulcers and lives saved. This study shows that labour-intensive efforts to reduce pressure ulcers on hospital wards can be cost-effective and lead to savings in the total costs of hospital and social care. The data included in the study regarding the costs and effects of the PUB in Denmark were based on preliminary findings from a pilot study at Thy-Mors Hospital and on the literature.

  14. Model-based estimation with boundary side information or boundary regularization

    International Nuclear Information System (INIS)

    Chiao, P.C.; Rogers, W.L.; Fessler, J.A.; Clinthorne, N.H.; Hero, A.O.

    1994-01-01

    The authors have previously developed a model-based strategy for joint estimation of myocardial perfusion and boundaries using ECT (Emission Computed Tomography). The authors have also reported difficulties with boundary estimation in low contrast and low count rate situations. In this paper, the authors propose using boundary side information (obtainable from high resolution MRI and CT images) or boundary regularization to improve both perfusion and boundary estimation in these situations. To fuse boundary side information into the emission measurements, the authors formulate a joint log-likelihood function to include auxiliary boundary measurements as well as ECT projection measurements. In addition, the authors introduce registration parameters to align auxiliary boundary measurements with ECT measurements and jointly estimate these parameters with other parameters of interest from the composite measurements. In simulated PET O-15 water myocardial perfusion studies using a simplified model, the authors show that the joint estimation improves perfusion estimation performance and gives boundary alignment accuracy of <0.5 mm even at 0.2 million counts. The authors implement boundary regularization by formulating a penalized log-likelihood function. The authors also demonstrate in simulations that simultaneous regularization of the epicardial boundary and myocardial thickness gives perfusion estimation accuracy comparable to that obtained with boundary side information.

  15. Robust-BD Estimation and Inference for General Partially Linear Models

    Directory of Open Access Journals (Sweden)

    Chunming Zhang

    2017-11-01

    The classical quadratic loss for the partially linear model (PLM) and the likelihood function for the generalized PLM are not resistant to outliers. This inspires us to propose a class of "robust-Bregman divergence (BD)" estimators of both the parametric and nonparametric components in the general partially linear model (GPLM), which allows the distribution of the response variable to be partially specified, without being fully known. Using the local-polynomial function estimation method, we propose a computationally efficient procedure for obtaining "robust-BD" estimators and establish the consistency and asymptotic normality of the "robust-BD" estimator of the parametric component β₀. For inference procedures on β₀ in the GPLM, we show that the Wald-type test statistic W_n constructed from the "robust-BD" estimators is asymptotically distribution free under the null, whereas the likelihood ratio-type test statistic Λ_n is not. This provides an insight into the distinction from the asymptotic equivalence (Fan and Huang 2005) between W_n and Λ_n in the PLM constructed from profile least-squares estimators using the non-robust quadratic loss. Numerical examples illustrate the computational effectiveness of the proposed "robust-BD" estimators and the robust Wald-type test in the presence of outlying observations.

  16. A practical model for pressure probe system response estimation (with review of existing models)

    Science.gov (United States)

    Hall, B. F.; Povey, T.

    2018-04-01

    The accurate estimation of the unsteady response (bandwidth) of pneumatic pressure probe systems (probe, line and transducer volume) is a common practical problem encountered in the design of aerodynamic experiments. Understanding the bandwidth of the probe system is necessary to capture unsteady flow features accurately. Where traversing probes are used, the desired traverse speed and spatial gradients in the flow dictate the minimum probe system bandwidth required to resolve the flow. Existing approaches for bandwidth estimation are either complex or inaccurate in implementation, so probes are often designed based on experience. Where probe system bandwidth is characterized, it is often done experimentally, requiring careful experimental set-up and analysis. There is a need for a relatively simple but accurate model for estimation of probe system bandwidth. A new model is presented for the accurate estimation of pressure probe bandwidth for simple probes commonly used in wind tunnel environments; experimental validation is provided. An additional, simple graphical method for air is included for convenience.

  17. The irradiance and temperature dependent mathematical model for estimation of photovoltaic panel performances

    International Nuclear Information System (INIS)

    Barukčić, M.; Ćorluka, V.; Miklošević, K.

    2015-01-01

    Highlights: • The temperature and irradiance dependent model for the I–V curve estimation is presented. • The purely mathematical model based on the analysis of the I–V curve shape is presented. • The model includes the Gompertz function with temperature and irradiance dependent parameters. • The input data are extracted from the data sheet I–V curves. - Abstract: A temperature and irradiance dependent mathematical model for estimating photovoltaic panel performance is proposed in the paper. The base of the model is the mathematical function of the photovoltaic panel current–voltage curve, which is modelled by a sigmoid function with temperature and irradiance dependent parameters. The temperature and irradiance dependencies of the parameters are proposed in the form of analytic functions involving constant parameters, which need to be estimated to obtain the temperature and irradiance dependent current–voltage curve. The mathematical model contains 12 constant parameters, and they are estimated with an evolutionary algorithm by solving an optimization problem defined for this purpose. The optimization problem objective function is based on estimated and extracted (measured) current and voltage values, where the current and voltage values are extracted from the current–voltage curves given in the datasheets of the photovoltaic panels. A new procedure for the estimation of the open circuit voltage value at any temperature and irradiance is proposed in the model. The performance of the proposed mathematical model is presented for three different photovoltaic panel technologies. The simulation results indicate that the proposed mathematical model is acceptable for estimating the temperature and irradiance dependent current–voltage curve and photovoltaic panel performance within the temperature and irradiance ranges.
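
    A sketch of the idea with an assumed Gompertz-type parameterisation fitted to datasheet-style I–V points; the paper's exact functional form and the temperature/irradiance dependencies of its parameters are not reproduced here.

      import numpy as np
      from scipy.optimize import curve_fit

      def iv_curve(v, i_sc, b, c):
          # Sigmoid I-V shape: current stays near I_sc at low voltage
          # and drops steeply towards zero near open circuit (v ~ c).
          return i_sc * (1.0 - np.exp(-np.exp(-b * (v - c))))

      v = np.array([0, 5, 10, 15, 20, 25, 28, 30, 31, 32.0])        # V
      i = np.array([8.2, 8.2, 8.15, 8.1, 7.9, 7.0, 5.0, 2.4, 1.1, 0.3])
      p, _ = curve_fit(iv_curve, v, i, p0=[8.2, 0.5, 29])
      print(np.round(iv_curve(np.linspace(0, 33, 7), *p), 2))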

  18. Approaches in highly parameterized inversion—PEST++ Version 3, a Parameter ESTimation and uncertainty analysis software suite optimized for large environmental models

    Science.gov (United States)

    Welter, David E.; White, Jeremy T.; Hunt, Randall J.; Doherty, John E.

    2015-09-18

    The PEST++ Version 1 object-oriented parameter estimation code is here extended to Version 3 to incorporate additional algorithms and tools to further improve support for large and complex environmental modeling problems. PEST++ Version 3 includes the Gauss-Marquardt-Levenberg (GML) algorithm for nonlinear parameter estimation, Tikhonov regularization, integrated linear-based uncertainty quantification, options of integrated TCP/IP based parallel run management or external independent run management by use of a Version 2 update of the GENIE Version 1 software code, and utilities for global sensitivity analyses. The Version 3 code design is consistent with PEST++ Version 1 and continues to be designed to lower the barriers of entry for users as well as developers while providing efficient and optimized algorithms capable of accommodating large, highly parameterized inverse problems. As such, this effort continues the original focus of (1) implementing the most popular and powerful features of the PEST software suite in a fashion that is easy for novice or experienced modelers to use and (2) developing a software framework that is easy to extend.

  19. Performance of monitoring networks estimated from a Gaussian plume model

    International Nuclear Information System (INIS)

    Seebregts, A.J.; Hienen, J.F.A.

    1990-10-01

    In support of the ECN study on monitoring strategies after nuclear accidents, the present report describes the analysis of the performance of a monitoring network in a square grid. This network is used to estimate the distribution of the deposition pattern after a release of radioactivity into the atmosphere. The analysis is based upon a single release, a constant wind direction and atmospheric dispersion according to a simplified Gaussian plume model. A technique is introduced to estimate the parameters in this Gaussian model based upon measurements at specific monitoring locations and linear regression, although this model is intrinsically non-linear. With these estimated parameters and the Gaussian model, the distribution of the contamination due to deposition can be estimated. To investigate the relation between the network and the accuracy of the estimates for the deposition, deposition data have been generated by the Gaussian model, including a measurement error, by a Monte Carlo simulation, and this procedure has been repeated for several grid sizes, dispersion conditions, numbers of measurements per location, and errors per single measurement. The present technique has also been applied for the mesh sizes of two networks in the Netherlands, viz. the Landelijk Meetnet Radioactiviteit (National Measurement Network on Radioactivity, mesh size approx. 35 km) and the proposed Landelijk Meetnet Nucleaire Incidenten (National Measurement Network on Nuclear Incidents, mesh size approx. 15 km). The results show accuracies of 11 and 7 percent, respectively, if monitoring locations are used more than 10 km away from the postulated accident site. These figures are based upon 3 measurements per location and dispersion during neutral weather with a wind velocity of 4 m/s. For stable weather conditions and low wind velocities, i.e. a small plume, the calculated accuracies are at least a factor of 1.5 worse. The present type of analysis makes a cost-benefit approach to the
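
    The linearisation trick can be shown in a few lines: at a fixed downwind distance the crosswind deposition profile is Gaussian, so its logarithm is linear in the squared offset and ordinary least squares recovers the plume parameters (toy numbers, assumed setup).

      import numpy as np

      rng = np.random.default_rng(7)
      y = np.linspace(-3000, 3000, 13)            # monitor offsets, m
      A_true, sig_true = 5e3, 900.0
      d = A_true * np.exp(-y**2 / (2 * sig_true**2))
      d *= np.exp(rng.normal(0, 0.1, y.size))     # lognormal meas. error
      # ln d = ln A - y^2 / (2 sigma^2): a straight line in y^2
      slope, intercept = np.polyfit(y**2, np.log(d), 1)
      print(np.sqrt(-1 / (2 * slope)), np.exp(intercept))  # ~900, ~5e3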

  20. Cost-Sensitive Estimation of ARMA Models for Financial Asset Return Data

    Directory of Open Access Journals (Sweden)

    Minyoung Kim

    2015-01-01

    The autoregressive moving average (ARMA) model is a simple but powerful model in financial engineering to represent time-series with long-range statistical dependency. However, the traditional maximum likelihood (ML) estimator aims to minimize a loss function that is inherently symmetric due to Gaussianity. The consequence is that when the data of interest are asset returns, and the main goal is to maximize profit by accurate forecasting, the ML objective may be less appropriate, potentially leading to a suboptimal solution. Rather, it is more reasonable to adopt an asymmetric loss where the model's prediction, as long as it is in the same direction as the true return, is penalized less than a prediction in the opposite direction. We propose a sensible asymmetric cost-sensitive loss function and incorporate it into the ARMA model estimation. On the online portfolio selection problem with real stock return data, we demonstrate that the investment strategy based on predictions by the proposed estimator can be significantly more profitable than the traditional ML estimator.
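
    A minimal stand-in for the proposed estimator, using an AR(1) model and an assumed asymmetric loss that discounts the squared error whenever the predicted direction matches the realised return; the paper's exact loss and full ARMA treatment differ.

      import numpy as np
      from scipy.optimize import minimize

      def fit_ar1_asymmetric(r, k=0.5):
          # Fit r[t] = c + phi*r[t-1]; errors are down-weighted by k
          # when sign(prediction) == sign(realised return).
          def obj(theta):
              c, phi = theta
              pred = c + phi * r[:-1]
              err = r[1:] - pred
              w = np.where(np.sign(pred) == np.sign(r[1:]), k, 1.0)
              return np.mean(w * err**2)
          return minimize(obj, x0=[0.0, 0.1], method="Nelder-Mead").x

      rng = np.random.default_rng(11)
      r = rng.standard_normal(500) * 0.01          # toy daily returns
      r[1:] += 0.2 * r[:-1]                        # induce dependence
      print(fit_ar1_asymmetric(r))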

  1. Estimation of Dynamic Panel Data Models with Stochastic Volatility Using Particle Filters

    Directory of Open Access Journals (Sweden)

    Wen Xu

    2016-10-01

    Time-varying volatility is common in macroeconomic data and has been incorporated into macroeconomic models in recent work. Dynamic panel data models have become increasingly popular in macroeconomics to study common relationships across countries or regions. This paper estimates dynamic panel data models with stochastic volatility by maximizing an approximate likelihood obtained via Rao-Blackwellized particle filters. Monte Carlo studies reveal the good and stable performance of our particle filter-based estimator. When the volatility of volatility is high, or when regressors are absent but stochastic volatility exists, our approach can be better than the maximum likelihood estimator, which neglects stochastic volatility, and generalized method of moments (GMM) estimators.

  2. Comparison of prospective risk estimates for postoperative complications: human vs computer model.

    Science.gov (United States)

    Glasgow, Robert E; Hawn, Mary T; Hosokawa, Patrick W; Henderson, William G; Min, Sung-Joon; Richman, Joshua S; Tomeh, Majed G; Campbell, Darrell; Neumayer, Leigh A

    2014-02-01

    Surgical quality improvement tools such as NSQIP are limited in their ability to prospectively affect individual patient care by the retrospective audit and feedback nature of their design. We hypothesized that statistical models using patient preoperative characteristics could prospectively provide risk estimates of postoperative adverse events comparable to risk estimates provided by experienced surgeons, and could be useful for stratifying preoperative assessment of patient risk. This was a prospective observational cohort. Using previously developed models for 30-day postoperative mortality, overall morbidity, cardiac, thromboembolic, pulmonary, renal, and surgical site infection (SSI) complications, model and surgeon estimates of risk were compared with each other and with actual 30-day outcomes. The study cohort included 1,791 general surgery patients operated on between June 2010 and January 2012. Observed outcomes were mortality (0.2%), overall morbidity (8.2%), and pulmonary (1.3%), cardiac (0.3%), thromboembolism (0.2%), renal (0.4%), and SSI (3.8%) complications. Model and surgeon risk estimates showed significant correlation. Where surgeons perceived the risk for overall morbidity to be low, the model-predicted risk and observed morbidity rates were 2.8% and 4.1%, respectively, compared with 10% and 18% in perceived high-risk patients. Patients in the highest quartile of model-predicted risk accounted for 75% of observed mortality and 52% of morbidity. Across a broad range of general surgical operations, we confirmed that the model risk estimates are in fairly good agreement with the risk estimates of experienced surgeons. Using these models prospectively can identify patients at high risk for morbidity and mortality, who could then be targeted for intervention to reduce postoperative complications. Published by Elsevier Inc.

  3. Model for Estimation of Fuel Consumption of Cruise Ships

    Directory of Open Access Journals (Sweden)

    Morten Simonsen

    2018-04-01

    This article presents a model to estimate the energy use and fuel consumption of cruise ships that sail Norwegian waters. Automatic identification system (AIS) data and technical information about cruise ships provided input to the model, including service speed, total power, and number of engines. The model was tested against real-world data obtained from a small cruise vessel and both a medium and a large cruise ship. It is sensitive to speed and the corresponding engine load profile of the ship. A crucial determinant of total fuel consumption is also associated with hotel functions, which can make a large contribution to the overall energy use of cruise ships. Real-world data fit the model best when ship speed is 70–75% of service speed. With decreased or increased speed, the model tends to diverge from real-world observations. The model gives a proxy for the calculation of fuel consumption associated with cruise ships that sail Norwegian waters and can be used to estimate greenhouse gas emissions and to evaluate energy reduction strategies for cruise ships.

  4. Non-destructive linear model for leaf area estimation in Vernonia ferruginea Less

    Directory of Open Access Journals (Sweden)

    MC. Souza

    Leaf area estimation is an important biometrical trait for evaluating leaf development and plant growth in field and pot experiments. We developed a non-destructive model to estimate the leaf area (LA) of Vernonia ferruginea using the length (L) and width (W) leaf dimensions. Different combinations of linear equations were obtained from L, L², W, W², LW and L²W². The linear regressions using the product of the L and W dimensions were more efficient for estimating the LA of V. ferruginea than models based on a single dimension (L, W, L² or W²). Therefore, the linear regression "LA = 0.463 + 0.676WL" provided the most accurate estimate of V. ferruginea leaf area. Validation of the selected model showed that the correlation between the real measured leaf area and the estimated leaf area was very high.
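
    The fitted model is directly usable as a one-liner (dimensions in cm, area in cm²):

      def leaf_area_vf(length_cm, width_cm):
          # LA = 0.463 + 0.676*W*L, the paper's selected regression
          return 0.463 + 0.676 * width_cm * length_cm

      print(leaf_area_vf(12.0, 4.5))   # ~37.0 cm^2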

  5. Simultaneous Parameters Identifiability and Estimation of an E. coli Metabolic Network Model

    Directory of Open Access Journals (Sweden)

    Kese Pontes Freitas Alberton

    2015-01-01

    This work proposes a procedure for simultaneous parameter identifiability and estimation in metabolic networks, in order to overcome difficulties associated with the lack of experimental data and the large number of parameters, a common scenario in the modeling of such systems. As a case study, the complex real-world problem of parameter identifiability of the Escherichia coli K-12 W3110 dynamic model was investigated. The model is composed of 18 ordinary differential equations and 35 kinetic rates, containing 125 parameters. With the procedure, the model fit was improved for most of the measured metabolites, with 58 parameters estimated, including 5 unknown initial conditions. The results indicate that the simultaneous parameter identifiability and estimation approach in metabolic networks is appealing, since the model could be fitted to most of the measured metabolites even when important measurements of intracellular metabolites and good initial parameter estimates are not available.

  6. Reserves' potential of sedimentary basin: modeling and estimation; Potentiel de reserves d'un bassin petrolier: modelisation et estimation

    Energy Technology Data Exchange (ETDEWEB)

    Lepez, V.

    2002-12-01

    The aim of this thesis is to build a statistical model of the distribution of oil and gas field sizes in a given sedimentary basin, covering both the fields that exist in the subsoil and those which have already been discovered. Estimating all the parameters of the model, via estimation of the density of the observations by model selection of piecewise polynomials using penalized maximum likelihood techniques, provides estimates of the total number of fields yet to be discovered, by size class. We assume that the set of underground field sizes is an i.i.d. sample from an unknown population following a Levy-Pareto law with unknown parameter. The set of already discovered fields is a sub-sample without replacement from the previous one which is 'size-biased'; the associated inclusion probabilities are to be estimated. We prove that the probability density of the observations is the product of the underlying density and of an unknown weighting function representing the sampling bias. For an arbitrary partition of the size interval (called a model), the analytical solutions of likelihood maximization enable estimation of both the parameter of the underlying Levy-Pareto law and the weighting function, which is assumed to be piecewise constant and based upon the partition. We add a monotonicity constraint over the latter, taking into account the fact that the bigger a field, the higher its probability of being discovered. Horvitz-Thompson-like estimators finally give the conclusion. We then allow our partitions to vary inside several classes of models and prove a model selection theorem which aims at selecting the best partition within a class, in terms of both the Kullback and Hellinger risks of the associated estimator. We conclude with simulations and various applications to real data from sedimentary basins of four continents, in order to illustrate theoretical as well as practical aspects of our model. (author)

  7. PockDrug: A Model for Predicting Pocket Druggability That Overcomes Pocket Estimation Uncertainties.

    Science.gov (United States)

    Borrel, Alexandre; Regad, Leslie; Xhaard, Henri; Petitjean, Michel; Camproux, Anne-Claude

    2015-04-27

    Predicting protein druggability is a key interest in the target identification phase of drug discovery. Here, we assess the pocket estimation methods' influence on druggability predictions by comparing statistical models constructed from pockets estimated using different pocket estimation methods: a proximity of either 4 or 5.5 Å to a cocrystallized ligand or DoGSite and fpocket estimation methods. We developed PockDrug, a robust pocket druggability model that copes with uncertainties in pocket boundaries. It is based on a linear discriminant analysis from a pool of 52 descriptors combined with a selection of the most stable and efficient models using different pocket estimation methods. PockDrug retains the best combinations of three pocket properties which impact druggability: geometry, hydrophobicity, and aromaticity. It results in an average accuracy of 87.9% ± 4.7% using a test set and exhibits higher accuracy (∼5-10%) than previous studies that used an identical apo set. In conclusion, this study confirms the influence of pocket estimation on pocket druggability prediction and proposes PockDrug as a new model that overcomes pocket estimation variability.

  8. Estimation of MIMO channel capacity from phase-noise impaired measurements

    DEFF Research Database (Denmark)

    Pedersen, Troels; Yin, Xuefeng; Fleury, Bernard Henri

    2008-01-01

    Due to the significantly reduced cost and effort for system calibration, time-division multiplexing (TDM) is a commonly used technique to switch between the transmit and receive antennas in multiple-input multiple-output (MIMO) radio channel sounding. Nonetheless, Baum et al. [1], [2] have shown that phase noise of the transmitter and receiver local oscillators, when it is assumed to be a white Gaussian random process, can cause large errors in the estimated channel capacity of a low-rank MIMO channel when the standard channel matrix estimator is used. It is shown by means of Monte Carlo simulations, assuming a measurement-based phase noise model, that the MIMO channel capacity can be estimated accurately for signal-to-noise ratios up to about 35 dB.
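
    A Monte Carlo sketch of the effect being described: TDM sounding visits each antenna pair at a different time, so each entry of the estimated channel matrix picks up an independent phase rotation, which inflates the apparent rank and capacity of a low-rank channel. This uses a simplified white-phase-noise stand-in, not the measurement-based phase noise model of the paper.

      # Capacity of a rank-1 MIMO channel, with and without phase-noise errors.
      import numpy as np

      rng = np.random.default_rng(2)
      nt = nr = 4
      snr = 10 ** (20.0 / 10)               # 20 dB SNR
      sigma_phase = 0.1                     # rad, std of white phase noise
      n_runs = 2000

      def capacity(H):
          # C = log2 det(I + (SNR/nt) H H^H), equal power allocation at the TX.
          G = H @ H.conj().T
          return float(np.real(np.log2(np.linalg.det(np.eye(nr) + (snr / nt) * G))))

      cap_true, cap_est = [], []
      for _ in range(n_runs):
          # Rank-1 (low-rank) channel, the worst case identified by Baum et al.
          H = np.outer(rng.standard_normal(nr) + 1j * rng.standard_normal(nr),
                       rng.standard_normal(nt) + 1j * rng.standard_normal(nt)) / 2
          # Independent phase-noise rotation per TDM-sounded entry.
          H_est = H * np.exp(1j * rng.normal(0, sigma_phase, (nr, nt)))
          cap_true.append(capacity(H))
          cap_est.append(capacity(H_est))

      print(f"mean capacity: true={np.mean(cap_true):.2f}, "
            f"estimated={np.mean(cap_est):.2f} bit/s/Hz")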

  9. Health Promotion Efforts as Predictors of Physical Activity in Schools: An Application of the Diffusion of Innovations Model

    Science.gov (United States)

    Glowacki, Elizabeth M.; Centeio, Erin E.; Van Dongen, Daniel J.; Carson, Russell L.; Castelli, Darla M.

    2016-01-01

    Background: Implementing a comprehensive school physical activity program (CSPAP) effectively addresses public health issues by providing opportunities for physical activity (PA). Grounded in the Diffusion of Innovations model, the purpose of this study was to identify how health promotion efforts facilitate opportunities for PA. Methods: Physical…

  10. Parameter estimation of variable-parameter nonlinear Muskingum model using excel solver

    Science.gov (United States)

    Kang, Ling; Zhou, Liwei

    2018-02-01

    The Muskingum model is an effective flood routing technique in hydrology and water resources engineering. With the development of optimization technology, more and more variable-parameter Muskingum models have been presented in recent decades to improve the effectiveness of the Muskingum model. A variable-parameter nonlinear Muskingum model (NVPNLMM) is proposed in this paper. In two real and frequently used case studies, the NVPNLMM obtained better values of the evaluation criteria used to compare the accuracy of flood routing across models, and its optimal estimated outflows were closer to the observed outflows than those of the other models.
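
    For orientation, a minimal fixed-parameter nonlinear Muskingum routing sketch, S = K*[x*I + (1-x)*O]**m; the paper's NVPNLMM additionally lets the parameters vary during the flood, which is omitted here. Parameter values are illustrative, not calibrated to the paper's case studies, and the inflow series resembles the classic textbook data often used in Muskingum studies.

      # Nonlinear Muskingum routing of an inflow hydrograph.
      import numpy as np

      def muskingum_route(inflow, K=0.8, x=0.25, m=1.2, dt=1.0):
          """Route an inflow hydrograph; returns the estimated outflows."""
          S = K * inflow[0] ** m            # start from steady state (I = O)
          out = []
          for I in inflow:
              # Invert S = K*[x*I + (1-x)*O]**m for the outflow O.
              O = max(((S / K) ** (1.0 / m) - x * I) / (1.0 - x), 0.0)
              out.append(O)
              S += dt * (I - O)             # storage continuity dS/dt = I - O
          return np.array(out)

      inflow = np.array([22, 23, 35, 71, 103, 111, 109, 100,
                         86, 71, 59, 47, 39, 32, 28, 24, 22, 21], float)
      print(muskingum_route(inflow).round(1))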

  11. Urban scale air quality modelling using detailed traffic emissions estimates

    Science.gov (United States)

    Borrego, C.; Amorim, J. H.; Tchepel, O.; Dias, D.; Rafael, S.; Sá, E.; Pimentel, C.; Fontes, T.; Fernandes, P.; Pereira, S. R.; Bandeira, J. M.; Coelho, M. C.

    2016-04-01

    The atmospheric dispersion of NOx and PM10 was simulated with a second generation Gaussian model over a medium-size south-European city. Microscopic traffic models calibrated with GPS data were used to derive typical driving cycles for each road link, while instantaneous emissions were estimated applying a combined Vehicle Specific Power/Co-operative Programme for Monitoring and Evaluation of the Long-range Transmission of Air Pollutants in Europe (VSP/EMEP) methodology. Site-specific background concentrations were estimated using time series analysis and a low-pass filter applied to local observations. Air quality modelling results are compared against measurements at two locations for a 1 week period. 78% of the results are within a factor of two of the observations for 1-h average concentrations, increasing to 94% for daily averages. Correlation significantly improves when background is added, with an average of 0.89 for the 24 h record. The results highlight the potential of detailed traffic and instantaneous exhaust emissions estimates, together with filtered urban background, to provide accurate input data to Gaussian models applied at the urban scale.
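
    A short sketch of the "factor of two" skill score quoted above (often called FAC2): the fraction of paired model predictions within a factor of two of the observations. The example values are placeholders.

      # FAC2 metric for paired observations and model results.
      import numpy as np

      def fac2(obs, mod):
          obs, mod = np.asarray(obs, float), np.asarray(mod, float)
          ok = (obs > 0) & (mod > 0)            # ratio defined for positive pairs
          ratio = mod[ok] / obs[ok]
          return np.mean((ratio >= 0.5) & (ratio <= 2.0))

      obs = [40, 55, 32, 80, 12, 60]    # e.g. hourly NOx observations (ug/m3)
      mod = [35, 70, 45, 50, 30, 58]    # paired model results
      print(f"FAC2 = {fac2(obs, mod):.2f}")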

  12. Adaptive Parameter Estimation of Person Recognition Model in a Stochastic Human Tracking Process

    Science.gov (United States)

    Nakanishi, W.; Fuse, T.; Ishikawa, T.

    2015-05-01

    This paper aims at the estimation of the parameters of person recognition models using a sequential Bayesian filtering method. In many human tracking methods, the parameters of the models used to recognize the same person in successive frames are set in advance of the tracking process. In real situations, however, these parameters may change with the observation conditions and the difficulty of predicting a person's position. In this paper we therefore formulate an adaptive parameter estimation using a general state space model. First we explain how to formulate human tracking in a general state space model with its components. Then, referring to previous research, we use the Bhattacharyya coefficient to formulate the observation model of the general state space model, which corresponds to the person recognition model. The observation model in this paper is a function of the Bhattacharyya coefficient with one unknown parameter. Finally, we sequentially estimate this parameter on a real dataset under several settings. Results show that the sequential parameter estimation succeeded and was consistent with observation conditions such as occlusions.
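
    The Bhattacharyya coefficient between appearance histograms is the core quantity here; a minimal sketch with toy colour histograms follows. The exponential likelihood form and its bandwidth h are assumptions for illustration, standing in for the paper's one-parameter observation model.

      # Bhattacharyya coefficient and a histogram-similarity likelihood.
      import numpy as np

      def bhattacharyya(p, q):
          """BC(p, q) = sum_i sqrt(p_i * q_i) for normalized histograms, in [0, 1]."""
          p = np.asarray(p, float); q = np.asarray(q, float)
          p, q = p / p.sum(), q / q.sum()
          return float(np.sum(np.sqrt(p * q)))

      template = np.array([10, 40, 30, 20])     # histogram of the tracked person
      candidate = np.array([12, 35, 33, 20])    # histogram at the predicted position
      bc = bhattacharyya(template, candidate)

      # One common likelihood form (an assumption here, with unknown parameter h):
      h = 0.1
      likelihood = np.exp(-(1.0 - bc) / h)
      print(f"BC = {bc:.3f}, observation likelihood = {likelihood:.3f}")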

  13. A PSO–GA optimal model to estimate primary energy demand of China

    International Nuclear Information System (INIS)

    Yu Shiwei; Wei Yiming; Wang Ke

    2012-01-01

    To improve estimation efficiency for future projections, the present study has proposed a hybrid algorithm, Particle Swarm Optimization and Genetic Algorithm optimal Energy Demand Estimating (PSO–GA EDE) model, for China. The coefficients of the three forms of the model (linear, exponential, and quadratic) are optimized by PSO–GA using factors, such as GDP, population, economic structure, urbanization rate, and energy consumption structure, that affect demand. Based on 20-year historical data between 1990 and 2009, the simulation results of the proposed model have greater accuracy and reliability than other single optimization methods. Moreover, it can be used with optimal coefficients for the energy demand projections of China. The departure coefficient method is applied to get the weights of the three forms of the model to obtain a combinational prediction. The energy demand of China is going to be 4.79, 4.04, and 4.48 billion tce in 2015, and 6.91, 5.03, and 6.11 billion tce (“standard” tons coal equivalent) in 2020 under three different scenarios. Further, the projection results are compared with other estimating methods. - Highlights: ► A hybrid algorithm PSO–GA optimal energy demands estimating model for China. ► Energy demand of China is estimated by 2020 in three different scenarios. ► The projection results are compared with other estimating methods.
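
    A compact particle swarm sketch of fitting the quadratic form of an energy-demand model, E = w0 + sum_i (a_i*x_i + b_i*x_i**2); the paper couples PSO with GA operators, which is omitted here, and the data below are synthetic placeholders, not China's historical series.

      # PSO fit of quadratic energy-demand coefficients to synthetic data.
      import numpy as np

      rng = np.random.default_rng(3)
      T, n_fac = 20, 3
      X = rng.normal(1.0, 0.3, (T, n_fac))        # e.g. GDP, population, urbanization
      true_w = rng.normal(0, 1, 1 + 2 * n_fac)

      def demand(w, X):
          return w[0] + X @ w[1:1 + n_fac] + (X ** 2) @ w[1 + n_fac:]

      y = demand(true_w, X) + rng.normal(0, 0.05, T)

      def sse(w):                                  # objective: squared error
          return np.sum((demand(w, X) - y) ** 2)

      n_part, dim, iters = 40, 1 + 2 * n_fac, 300
      pos = rng.normal(0, 1, (n_part, dim))
      vel = np.zeros_like(pos)
      pbest, pbest_f = pos.copy(), np.array([sse(p) for p in pos])
      gbest = pbest[pbest_f.argmin()].copy()
      for _ in range(iters):
          r1, r2 = rng.random((2, n_part, dim))
          vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
          pos += vel
          f = np.array([sse(p) for p in pos])
          better = f < pbest_f
          pbest[better], pbest_f[better] = pos[better], f[better]
          gbest = pbest[pbest_f.argmin()].copy()
      print("best SSE:", pbest_f.min())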

  14. Bayesian analysis for uncertainty estimation of a canopy transpiration model

    Science.gov (United States)

    Samanta, S.; Mackay, D. S.; Clayton, M. K.; Kruger, E. L.; Ewers, B. E.

    2007-04-01

    A Bayesian approach was used to fit a conceptual transpiration model to half-hourly transpiration rates for a sugar maple (Acer saccharum) stand collected over a 5-month period and probabilistically estimate its parameter and prediction uncertainties. The model used the Penman-Monteith equation with the Jarvis model for canopy conductance. This deterministic model was extended by adding a normally distributed error term. This extension enabled using Markov chain Monte Carlo simulations to sample the posterior parameter distributions. The residuals revealed approximate conformance to the assumption of normally distributed errors. However, minor systematic structures in the residuals at fine timescales suggested model changes that would potentially improve the modeling of transpiration. Results also indicated considerable uncertainties in the parameter and transpiration estimates. This simple methodology of uncertainty analysis would facilitate the deductive step during the development cycle of deterministic conceptual models by accounting for these uncertainties while drawing inferences from data.
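
    A minimal Metropolis sketch of the approach: a deterministic model extended with a normal error term, with MCMC sampling of the posterior. The one-parameter linear "model" below is a stand-in for illustration, not Penman-Monteith with Jarvis conductance.

      # Random-walk Metropolis sampling for a model with Gaussian errors.
      import numpy as np

      rng = np.random.default_rng(4)
      x = np.linspace(0, 10, 50)
      y_obs = 2.0 * x + rng.normal(0, 1.0, x.size)    # synthetic "transpiration" data

      def log_post(theta, sigma=1.0):
          resid = y_obs - theta * x                   # model: y = theta * x + eps
          return -0.5 * np.sum(resid ** 2) / sigma ** 2   # flat prior on theta

      chain, theta = [], 1.0
      lp = log_post(theta)
      for _ in range(5000):
          prop = theta + rng.normal(0, 0.05)          # random-walk proposal
          lp_prop = log_post(prop)
          if np.log(rng.random()) < lp_prop - lp:     # Metropolis accept/reject
              theta, lp = prop, lp_prop
          chain.append(theta)

      post = np.array(chain[1000:])                   # drop burn-in
      print(f"posterior mean = {post.mean():.3f}, 95% interval ~ "
            f"({np.quantile(post, 0.025):.3f}, {np.quantile(post, 0.975):.3f})")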

  15. Modeling, estimation and optimal filtration in signal processing

    CERN Document Server

    Najim, Mohamed

    2010-01-01

    The purpose of this book is to provide graduate students and practitioners with traditional methods and more recent results for model-based approaches in signal processing. Firstly, discrete-time linear models such as AR, MA and ARMA models, their properties and their limitations are introduced. In addition, sinusoidal models are addressed. Secondly, estimation approaches based on least squares methods and instrumental variable techniques are presented. Finally, the book deals with optimal filters, i.e. Wiener and Kalman filtering, and adaptive filters such as the RLS, the LMS and the
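
    As a taste of the adaptive-filtering material mentioned above, here is the LMS algorithm in a few lines: identifying an unknown FIR system from input/output samples. The system taps and step size are illustrative choices.

      # LMS adaptive filter for system identification.
      import numpy as np

      rng = np.random.default_rng(5)
      h_true = np.array([0.8, -0.4, 0.2])            # unknown system to identify
      x = rng.standard_normal(5000)                  # input signal
      d = np.convolve(x, h_true, mode="full")[:x.size] \
          + 0.01 * rng.standard_normal(x.size)       # noisy desired signal

      mu, L = 0.05, 3                                # step size and filter length
      w = np.zeros(L)
      for n in range(L, x.size):
          u = x[n - L + 1:n + 1][::-1]               # most recent L input samples
          e = d[n] - w @ u                           # a priori error
          w += mu * e * u                            # LMS weight update
      print("estimated taps:", w.round(3), "true taps:", h_true)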

  16. Model-based Small Area Estimates of Cancer Risk Factors and Screening Behaviors - Small Area Estimates

    Science.gov (United States)

    These model-based estimates use two surveys, the Behavioral Risk Factor Surveillance System (BRFSS) and the National Health Interview Survey (NHIS). The two surveys are combined using novel statistical methodology.

  17. Markov models for digraph panel data : Monte Carlo-based derivative estimation

    NARCIS (Netherlands)

    Schweinberger, Michael; Snijders, Tom A. B.

    2007-01-01

    A parametric, continuous-time Markov model for digraph panel data is considered. The parameter is estimated by the method of moments. A convenient method for estimating the variance-covariance matrix of the moment estimator relies on the delta method, requiring the Jacobian matrix-that is, the

  18. Estimating the Competitive Storage Model with Trending Commodity Prices

    OpenAIRE

    Gouel , Christophe; LEGRAND , Nicolas

    2017-01-01

    We present a method to estimate jointly the parameters of a standard commodity storage model and the parameters characterizing the trend in commodity prices. This procedure allows the influence of a possible trend to be removed without restricting the model specification, and allows model and trend selection based on statistical criteria. The trend is modeled deterministically using linear or cubic spline functions of time. The results show that storage models with trend are always preferred ...

  19. Interpolation of Missing Precipitation Data Using Kernel Estimations for Hydrologic Modeling

    Directory of Open Access Journals (Sweden)

    Hyojin Lee

    2015-01-01

    Full Text Available Precipitation is the main factor that drives hydrologic modeling; therefore, missing precipitation data can cause malfunctions in hydrologic modeling. Although interpolation of missing precipitation data is recognized as an important research topic, only a few methods follow a regression approach. In this study, daily precipitation data were interpolated using five different kernel functions, namely, Epanechnikov, Quartic, Triweight, Tricube, and Cosine, to estimate missing precipitation data. This study also presents an assessment that compares estimation of missing precipitation data through K-nearest neighbor (KNN) regression to the five different kernel estimations, and their performance in simulating streamflow using the Soil Water Assessment Tool (SWAT) hydrologic model. The results show that the kernel approaches provide higher-quality interpolation of precipitation data compared with the KNN regression approach, in terms of both statistical data assessment and hydrologic modeling performance.
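
    A sketch of kernel-weighted estimation of one missing daily value from neighbouring gauges, using Epanechnikov weights over normalized station distance. The bandwidth, distances, and totals are illustrative choices, not those of the study.

      # Epanechnikov-kernel interpolation of a missing precipitation value.
      import numpy as np

      def epanechnikov(u):
          u = np.asarray(u, float)
          return np.where(np.abs(u) <= 1, 0.75 * (1 - u ** 2), 0.0)

      # Distances (km) from the gauge with the missing record, and same-day totals:
      dist = np.array([4.0, 9.0, 15.0, 22.0])
      precip = np.array([12.0, 10.5, 7.0, 3.0])

      bandwidth = 20.0                            # km; controls which gauges contribute
      w = epanechnikov(dist / bandwidth)
      estimate = np.sum(w * precip) / np.sum(w)   # Nadaraya-Watson form
      print(f"interpolated precipitation ~ {estimate:.1f} mm")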

  20. Incorporating remote sensing-based ET estimates into the Community Land Model version 4.5

    Directory of Open Access Journals (Sweden)

    D. Wang

    2017-07-01

    Full Text Available Land surface models bear substantial biases in simulating surface water and energy budgets despite the continuous development and improvement of model parameterizations. To reduce model biases, Parr et al. (2015) proposed a method incorporating satellite-based evapotranspiration (ET) products into land surface models. Here we apply this bias correction method to the Community Land Model version 4.5 (CLM4.5) and test its performance over the conterminous US (CONUS). We first calibrate a relationship between the observational ET from the Global Land Evaporation Amsterdam Model (GLEAM) product and the model ET from CLM4.5, and assume that this relationship holds beyond the calibration period. During the validation or application period, a simulation using the default CLM4.5 (CLM) is conducted first, and its output is combined with the calibrated observational-vs.-model ET relationship to derive a corrected ET; an experiment (CLMET) is then conducted in which the model-generated ET is overwritten with the corrected ET. Using observations of ET, runoff, and soil moisture content as benchmarks, we demonstrate that CLMET greatly improves the hydrological simulations over most of the CONUS; the improvement is stronger in the eastern CONUS than in the western CONUS and is strongest over the Southeast CONUS. For any specific region, the degree of improvement depends on whether the relationship between observational and model ET remains time-invariant (a fundamental hypothesis of the Parr et al. (2015) method) and whether water is the limiting factor in places where ET is underestimated. While the bias correction method improves hydrological estimates without improving the physical parameterization of land surface models, results from this study do provide guidance for physically based model development efforts.
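
    The essence of the bias-correction step, sketched below: calibrate a relationship between observational (GLEAM) and model (CLM) ET over a calibration period, then use it to correct model ET afterwards. A simple linear fit is assumed here for illustration; the exact functional form follows Parr et al. (2015), and all values are synthetic.

      # Calibrate an obs-vs-model ET relation, then correct later model ET.
      import numpy as np

      rng = np.random.default_rng(6)
      # Calibration period: ET (mm/day) from model and observations.
      et_clm = rng.uniform(1.0, 5.0, 120)
      et_gleam = 0.8 * et_clm + 0.6 + rng.normal(0, 0.2, 120)  # synthetic "truth"

      a, b = np.polyfit(et_clm, et_gleam, 1)       # calibrated relation

      # Application period: overwrite model ET with the corrected ET (as in CLMET).
      et_model_new = np.array([2.4, 3.8, 4.6])
      et_corrected = a * et_model_new + b
      print("corrected ET:", et_corrected.round(2))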

  1. Targeting estimation of CCC-GARCH models with infinite fourth moments

    DEFF Research Database (Denmark)

    Pedersen, Rasmus Søndergaard

    In this paper we consider the large-sample properties of the variance targeting estimator for the multivariate extended constant conditional correlation GARCH model when the distribution of the data generating process has infinite fourth moments. Using non-standard limit theory we derive new results for the estimator, stating that its limiting distribution is multivariate stable. The rate of consistency of the estimator is slower than √T (as obtained by the quasi-maximum likelihood estimator) and depends on the tails of the data generating process.
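
    Variance targeting, sketched in the univariate GARCH(1,1) case for clarity (the paper treats the multivariate extended CCC model): the intercept is tied to the sample variance, so only (alpha, beta) are optimized by quasi-maximum likelihood. All settings are illustrative.

      # Variance targeting estimation of a GARCH(1,1) on simulated data.
      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(7)

      def simulate_garch(T, omega=0.1, alpha=0.1, beta=0.8):
          r, s2 = np.zeros(T), omega / (1 - alpha - beta)
          for t in range(T):
              r[t] = np.sqrt(s2) * rng.standard_normal()
              s2 = omega + alpha * r[t] ** 2 + beta * s2
          return r

      r = simulate_garch(3000)
      target = r.var()                                 # the variance target

      def neg_qll(params):
          alpha, beta = params
          if alpha < 0 or beta < 0 or alpha + beta >= 0.999:
              return np.inf
          omega = target * (1 - alpha - beta)          # targeting constraint
          s2, ll = target, 0.0
          for x in r:
              ll += -0.5 * (np.log(s2) + x ** 2 / s2)  # Gaussian quasi-likelihood
              s2 = omega + alpha * x ** 2 + beta * s2
          return -ll

      res = minimize(neg_qll, x0=[0.05, 0.9], method="Nelder-Mead")
      print("alpha, beta =", res.x.round(3))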

  2. Finite mixture model: A maximum likelihood estimation approach on time series data

    Science.gov (United States)

    Yen, Phoong Seuk; Ismail, Mohd Tahir; Hamzah, Firdaus Mohamad

    2014-09-01

    Recently, statisticians have emphasized fitting finite mixture models by maximum likelihood estimation, as it provides desirable asymptotic properties. In particular, it is consistent as the sample size increases to infinity, so that maximum likelihood estimation yields an asymptotically unbiased estimator. Moreover, as the sample size increases, the parameter estimates obtained by maximum likelihood estimation have the smallest variance compared with other statistical methods. Thus, maximum likelihood estimation is adopted in this paper to fit a two-component mixture model in order to explore the relationship between rubber price and exchange rate for Malaysia, Thailand, the Philippines and Indonesia. The results indicate a negative relationship between rubber price and exchange rate for all selected countries.
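
    A compact EM fit of a two-component Gaussian mixture, the kind of model the paper estimates by maximum likelihood; the data here are synthetic, not the rubber-price/exchange-rate series.

      # EM algorithm for a two-component Gaussian mixture.
      import numpy as np
      from scipy.stats import norm

      rng = np.random.default_rng(8)
      x = np.concatenate([rng.normal(-1.0, 0.5, 400), rng.normal(1.5, 0.8, 600)])

      # Initialize: mixing weight, means, standard deviations.
      w, mu, sd = 0.5, np.array([-0.5, 0.5]), np.array([1.0, 1.0])
      for _ in range(200):
          # E-step: posterior responsibility of component 1 for each point.
          p1 = w * norm.pdf(x, mu[0], sd[0])
          p2 = (1 - w) * norm.pdf(x, mu[1], sd[1])
          g = p1 / (p1 + p2)
          # M-step: update parameters by weighted moments.
          w = g.mean()
          mu = np.array([np.average(x, weights=g), np.average(x, weights=1 - g)])
          sd = np.sqrt([np.average((x - mu[0]) ** 2, weights=g),
                        np.average((x - mu[1]) ** 2, weights=1 - g)])
      print(f"w={w:.2f}, mu={mu.round(2)}, sd={sd.round(2)}")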

  3. A rapid estimation of tsunami run-up based on finite fault models

    Science.gov (United States)

    Campos, J.; Fuentes, M. A.; Hayes, G. P.; Barrientos, S. E.; Riquelme, S.

    2014-12-01

    Many efforts have been made to estimate the maximum run-up height of tsunamis associated with large earthquakes. This is a difficult task because of the time it takes to construct a tsunami model using real-time data from the source. It is possible to construct a database of potential seismic sources and their corresponding tsunamis a priori; however, such models are generally based on uniform slip distributions and thus oversimplify our knowledge of the earthquake source. Instead, we can use finite fault models of earthquakes to give a more accurate prediction of the tsunami run-up. Here we show how to accurately predict tsunami run-up from any seismic source model using an analytic solution found by Fuentes et al. (2013) that was specially derived for zones with a very well defined strike, e.g., Chile, Japan, Alaska, etc. The main idea of this work is to produce a tool for emergency response, trading off accuracy for speed. Our solutions for three large earthquakes are promising. Here we compute run-up models for the 2010 Mw 8.8 Maule earthquake, the 2011 Mw 9.0 Tohoku earthquake, and the recent 2014 Mw 8.2 Iquique earthquake. Our maximum run-up predictions are consistent with measurements made inland after each event, with a peak of 15 to 20 m for Maule, 40 m for Tohoku, and 2.1 m for Iquique. Considering recent advances in the analysis of real-time GPS data and the ability to rapidly resolve the finiteness of a large earthquake close to existing GPS networks, it will be possible in the near future to perform these calculations within the first five minutes after the occurrence of any such event. Such calculations will thus provide more accurate run-up information than is otherwise available from existing uniform-slip seismic source databases.

  4. Exploratory Long-Range Models to Estimate Summer Climate Variability over Southern Africa.

    Science.gov (United States)

    Jury, Mark R.; Mulenga, Henry M.; Mason, Simon J.

    1999-07-01

    Teleconnection predictors are explored using multivariate regression models in an effort to estimate southern African summer rainfall and climate impacts one season in advance. The preliminary statistical formulations include many variables influenced by the El Niño-Southern Oscillation (ENSO) such as tropical sea surface temperatures (SST) in the Indian and Atlantic Oceans. Atmospheric circulation responses to ENSO include the alternation of tropical zonal winds over Africa and changes in convective activity within oceanic monsoon troughs. Numerous hemispheric-scale datasets are employed to extract predictors and include global indexes (Southern Oscillation index and quasi-biennial oscillation), SST principal component scores for the global oceans, indexes of tropical convection (outgoing longwave radiation), air pressure, and surface and upper winds over the Indian and Atlantic Oceans. Climatic targets include subseasonal, area-averaged rainfall over South Africa and the Zambezi river basin, and South Africa's annual maize yield. Predictors and targets overlap in the years 1971-93, the defined training period. Each target time series is fitted by an optimum group of predictors from the preceding spring, in a linear multivariate formulation. To limit artificial skill, predictors are restricted to three, providing 17 degrees of freedom. Models with colinear predictors are screened out, and persistence of the target time series is considered. The late summer rainfall models achieve a mean r2 fit of 72%, contributed largely through ENSO modulation. Early summer rainfall cross validation correlations are lower (61%). A conceptual understanding of the climate dynamics and ocean-atmosphere coupling processes inherent in the exploratory models is outlined.Seasonal outlooks based on the exploratory models could help mitigate the impacts of southern Africa's fluctuating climate. It is believed that an advance warning of drought risk and seasonal rainfall prospects will

  5. Detectability of migrating raptors and its effect on bias and precision of trend estimates

    Directory of Open Access Journals (Sweden)

    Eric G. Nolte

    2016-12-01

    Full Text Available Annual counts of migrating raptors at fixed observation points are a widespread practice, and changes in numbers counted over time, adjusted for survey effort, are commonly used as indices of trends in population size. Unmodeled year-to-year variation in detectability may introduce bias, reduce precision of trend estimates, and reduce power to detect trends. We conducted dependent double-observer surveys at the annual fall raptor migration count at Lucky Peak, Idaho, in 2009 and 2010 and applied Huggins closed-capture removal models and information-theoretic model selection to determine the relative importance of factors affecting detectability. The most parsimonious model included effects of observer team identity, distance, species, and day of the season. We then simulated 30 years of counts with heterogeneous individual detectability, a population decline (λ = 0.964), and unexplained random variation in the number of available birds. Imperfect detectability did not bias trend estimation, and increased the time required to achieve 80% power by less than 11%. Results suggested that availability is a greater source of variance in annual counts than detectability; thus, efforts to account for availability would improve the monitoring value of migration counts. According to our models, long-term trends in observer efficiency or migratory flight distance may introduce substantial bias to trend estimates. Estimating detectability with a novel count protocol like our double-observer method is just one potential means of controlling such effects. The traditional approach of modeling the effects of covariates and adjusting the index may also be effective if ancillary data are collected consistently.
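
    A stripped-down version of the simulation design described above: 30 years of counts with a true decline (λ = 0.964), random availability, and imperfect detection, with the trend recovered from a log-linear fit. The Huggins removal-model step is replaced by a known detection probability for brevity, and all other settings are illustrative.

      # Power simulation for detecting a population decline from counts.
      import numpy as np

      rng = np.random.default_rng(9)
      years = np.arange(30)
      lam, n0, p_detect = 0.964, 2000, 0.7

      def simulate_power(n_sims=500):
          rejections = 0
          for _ in range(n_sims):
              expected = n0 * lam ** years
              available = rng.poisson(expected)          # availability noise
              counts = rng.binomial(available, p_detect) # detectability
              # Log-linear trend fit: log(count) ~ a + b*year.
              b, a = np.polyfit(years, np.log(counts + 1), 1)
              resid = np.log(counts + 1) - (a + b * years)
              se = np.sqrt(resid.var(ddof=2) / np.sum((years - years.mean()) ** 2))
              if b + 1.96 * se < 0:                      # decline detected
                  rejections += 1
          return rejections / n_sims

      print(f"power to detect the decline ~ {simulate_power():.2f}")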

  6. Inverse modeling for seawater intrusion in coastal aquifers: Insights about parameter sensitivities, variances, correlations and estimation procedures derived from the Henry problem

    Science.gov (United States)

    Sanz, E.; Voss, C.I.

    2006-01-01

    Inverse modeling studies employing data collected from the classic Henry seawater intrusion problem give insight into several important aspects of inverse modeling of seawater intrusion problems and effective measurement strategies for estimation of parameters for seawater intrusion. Despite the simplicity of the Henry problem, it embodies the behavior of a typical seawater intrusion situation in a single aquifer. Data collected from the numerical problem solution are employed without added noise in order to focus on the aspects of inverse modeling strategies dictated by the physics of variable-density flow and solute transport during seawater intrusion. Covariances of model parameters that can be estimated are strongly dependent on the physics. The insights gained from this type of analysis may be directly applied to field problems in the presence of data errors, using standard inverse modeling approaches to deal with uncertainty in data. Covariance analysis of the Henry problem indicates that in order to generally reduce variance of parameter estimates, the ideal places to measure pressure are as far away from the coast as possible, at any depth, and the ideal places to measure concentration are near the bottom of the aquifer between the center of the transition zone and its inland fringe. These observations are located in and near high-sensitivity regions of system parameters, which may be identified in a sensitivity analysis with respect to several parameters. However, both the form of error distribution in the observations and the observation weights impact the spatial sensitivity distributions, and different choices for error distributions or weights can result in significantly different regions of high sensitivity. Thus, in order to design effective sampling networks, the error form and weights must be carefully considered. For the Henry problem, permeability and freshwater inflow can be estimated with low estimation variance from only pressure or only

  7. Limited information estimation of the diffusion-based item response theory model for responses and response times.

    Science.gov (United States)

    Ranger, Jochen; Kuhn, Jörg-Tobias; Szardenings, Carsten

    2016-05-01

    Psychological tests are usually analysed with item response models. Recently, some alternative measurement models have been proposed that were derived from cognitive process models developed in experimental psychology. These models consider the responses but also the response times of the test takers. Two such models are the Q-diffusion model and the D-diffusion model. Both models can be calibrated with the diffIRT package of the R statistical environment via marginal maximum likelihood (MML) estimation. In this manuscript, an alternative approach to model calibration is proposed. The approach is based on weighted least squares estimation and parallels the standard estimation approach in structural equation modelling. Estimates are determined by minimizing the discrepancy between the observed and the implied covariance matrix. The estimator is simple to implement, consistent, and asymptotically normally distributed. Least squares estimation also provides a test of model fit by comparing the observed and implied covariance matrix. The estimator and the test of model fit are evaluated in a simulation study. Although parameter recovery is good, the estimator is less efficient than the MML estimator. © 2016 The British Psychological Society.
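
    The structural-equation flavour of this estimator in miniature: minimize the (here, unweighted) least squares discrepancy between the observed covariance matrix and a model-implied one. The implied-covariance function below is a toy one-factor model standing in for the diffusion-based IRT model, and the data are simulated.

      # Least squares fit of a model-implied covariance to a sample covariance.
      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(10)
      # Synthetic data from a one-factor model: x_j = l_j * f + e_j.
      loads_true, n = np.array([0.8, 0.6, 0.7]), 1000
      f = rng.standard_normal(n)
      X = f[:, None] * loads_true + rng.normal(0, 0.5, (n, 3))
      S = np.cov(X, rowvar=False)                      # observed covariance

      def implied(theta):
          l, v = theta[:3], theta[3]                   # loadings, error variance
          return np.outer(l, l) + v * np.eye(3)

      def discrepancy(theta):
          D = S - implied(theta)
          return np.sum(D ** 2)                        # least squares fit function

      res = minimize(discrepancy, x0=[0.5, 0.5, 0.5, 0.3], method="Nelder-Mead")
      print("loadings:", res.x[:3].round(2), "error var:", round(res.x[3], 2))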

  8. Transient Inverse Calibration of Site-Wide Groundwater Model to Hanford Operational Impacts from 1943 to 1996-Alternative Conceptual Model Considering Interaction with Uppermost Basalt Confined Aquifer; FINAL

    International Nuclear Information System (INIS)

    Vermeul, Vince R; Cole, Charles R; Bergeron, Marcel P; Thorne, Paul D; Wurstner, Signe K

    2001-01-01

    The baseline three-dimensional transient inverse model for the estimation of site-wide scale flow parameters, including their uncertainties, using data on the transient behavior of the unconfined aquifer system over the entire historical period of Hanford operations, has been modified to account for the effects of basalt intercommunication between the Hanford unconfined aquifer and the underlying upper basalt confined aquifer. Both the baseline and alternative conceptual models (ACM-1) considered only the groundwater flow component and corresponding observational data in the 3-D transient inverse calibration efforts. Subsequent efforts will examine both groundwater flow and transport. Comparisons of goodness-of-fit measures and parameter estimation results for the ACM-1 transient inverse calibrated model with those from previous site-wide groundwater modeling efforts illustrate that the new 3-D transient inverse modeling approach will strengthen the technical defensibility of the final model(s) and provide the ability to incorporate uncertainty in predictions related to both conceptual model and parameter uncertainty.

  9. Bias-Corrected Estimation of Noncentrality Parameters of Covariance Structure Models

    Science.gov (United States)

    Raykov, Tenko

    2005-01-01

    A bias-corrected estimator of noncentrality parameters of covariance structure models is discussed. The approach represents an application of the bootstrap methodology for purposes of bias correction, and utilizes the relation between average of resample conventional noncentrality parameter estimates and their sample counterpart. The…

  10. A novel Gaussian model based battery state estimation approach: State-of-Energy

    International Nuclear Information System (INIS)

    He, HongWen; Zhang, YongZhi; Xiong, Rui; Wang, Chun

    2015-01-01

    Highlights: • The Gaussian model is employed to construct a novel battery model. • The genetic algorithm is used to implement model parameter identification. • The AIC is used to decide the best hysteresis order of the battery model. • A novel battery SoE estimator is proposed and verified with two kinds of batteries. - Abstract: State-of-energy (SoE) is a very important index for the battery management system (BMS) used in electric vehicles (EVs); it is indispensable for ensuring safe and reliable operation of batteries. To estimate battery SoE accurately, the main work can be summarized in three aspects. (1) Considering that different kinds of batteries show different open circuit voltage behaviors, the Gaussian model is employed to construct the battery model. Moreover, the genetic algorithm is employed to locate the optimal parameters of the selected battery model. (2) To determine an optimal tradeoff between battery model complexity and prediction precision, the Akaike information criterion (AIC) is used to determine the best hysteresis order of the combined battery model. Results from a comparative analysis show that the first-order hysteresis battery model is judged the best based on the AIC values. (3) The central difference Kalman filter (CDKF) is used to estimate the real-time SoE, and an erroneous initial SoE is considered to evaluate the robustness of the SoE estimator. Lastly, two kinds of lithium-ion batteries are used to verify the proposed SoE estimation approach. The results show that the maximum SoE estimation error is within 1% for both LiFePO4 and LiMn2O4 battery datasets.

  11. Teach it Yourself - Fast Modeling of Industrial Objects for 6D Pose Estimation

    DEFF Research Database (Denmark)

    Sølund, Thomas; Rajeeth Savarimuthu, Thiusius; Glent Buch, Anders

    2015-01-01

    In this paper, we present a vision system that allows a human to create new 3D models of novel industrial parts by placing the part in two different positions in the scene. The two-shot modeling framework generates models with a precision that allows them to be used for 6D pose estimation. In addition, the models are applied in a pose estimation application, evaluated with 37 different scenes containing 61 unique object poses. The pose estimation results show a mean translation error of 4.97 mm and a mean rotation error of 3.38 degrees.

  12. Model for traffic emissions estimation

    Science.gov (United States)

    Alexopoulos, A.; Assimacopoulos, D.; Mitsoulis, E.

    A model is developed for the spatial and temporal evaluation of traffic emissions in metropolitan areas based on sparse measurements. All traffic data available are fully employed and the pollutant emissions are determined with the highest precision possible. The main roads are regarded as line sources of constant traffic parameters in the time interval considered. The method is flexible and allows for the estimation of distributed small traffic sources (non-line/area sources). The emissions from the latter are assumed to be proportional to the local population density as well as to the traffic density leading to local main arteries. The contribution of moving vehicles to air pollution in the Greater Athens Area for the period 1986-1988 is analyzed using the proposed model. Emissions and other related parameters are evaluated. Emissions from area sources were found to have a noticeable share of the overall air pollution.

  13. A model-based approach to estimating forest area

    Science.gov (United States)

    Ronald E. McRoberts

    2006-01-01

    A logistic regression model based on forest inventory plot data and transformations of Landsat Thematic Mapper satellite imagery was used to predict the probability of forest for 15 study areas in Indiana, USA, and 15 in Minnesota, USA. Within each study area, model-based estimates of forest area were obtained for circular areas with radii of 5 km, 10 km, and 15 km and...
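
    The model-based area estimate in miniature, as sketched below: fit a logistic model of forest presence on spectral predictors from plot data, then sum predicted probabilities times the per-pixel area over a region of interest. The data, band count, and pixel size are simulated placeholders.

      # Model-based forest area estimate from a logistic regression.
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(11)
      n_plots = 300
      bands = rng.normal(0, 1, (n_plots, 3))           # TM band transformations
      logit = 1.5 * bands[:, 0] - 1.0 * bands[:, 1]
      forest = (rng.random(n_plots) < 1 / (1 + np.exp(-logit))).astype(int)

      model = LogisticRegression().fit(bands, forest)

      # "Map" pixels inside a circular study area (here just simulated pixels):
      pixels = rng.normal(0, 1, (10000, 3))
      p_forest = model.predict_proba(pixels)[:, 1]
      pixel_area_ha = 0.09                             # 30 m x 30 m Landsat pixel
      print(f"estimated forest area ~ {np.sum(p_forest) * pixel_area_ha:.0f} ha")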

  14. The effects of savings on reservation wages and search effort

    NARCIS (Netherlands)

    Lammers, M.

    2014-01-01

    This paper discusses the interrelations among wealth, reservation wages and search effort. A theoretical job search model predicts wealth to affect reservation wages positively, and search effort negatively. Subsequently, reduced form equations for reservation wages and search intensity take these

  15. Model parameters estimation and sensitivity by genetic algorithms

    International Nuclear Information System (INIS)

    Marseguerra, Marzio; Zio, Enrico; Podofillini, Luca

    2003-01-01

    In this paper we illustrate the possibility of extracting qualitative information on the importance of the parameters of a model in the course of a Genetic Algorithm (GA) optimization procedure for the estimation of such parameters. The Genetic Algorithm's search for the optimal solution is performed according to procedures that resemble those of natural selection and genetics: an initial population of alternative solutions evolves within the search space through the four fundamental operations of parent selection, crossover, replacement, and mutation. During the search, the algorithm examines a large number of solution points which possibly carry relevant information on the underlying model characteristics. One possible use of this information is to create and update an archive of the best solutions found at each generation and then to analyze the evolution of the statistics of the archive along the successive generations. From this analysis one can retrieve information regarding the speed of convergence and stabilization of the different control (decision) variables of the optimization problem. In this work we analyze the evolution strategy followed by a GA in its search for the optimal solution, with the aim of extracting information on the importance of the control (decision) variables of the optimization with respect to the sensitivity of the objective function. The study refers to a GA search for optimal estimates of the effective parameters in a lumped nuclear reactor model from the literature. The supporting observation is that, as most optimization procedures do, the GA search evolves towards convergence in such a way as to stabilize first the most important parameters of the model and later those which influence the model outputs little. In this sense, besides estimating the parameter values efficiently, the optimization approach also allows us to provide a qualitative ranking of their importance in contributing to the model output.
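
    The four GA operations named above (parent selection, crossover, replacement, mutation) in a minimal real-coded GA, with a best-solution archive per generation of the kind the passage analyzes. The quadratic objective is a toy stand-in for the reactor-model parameter estimation, and all settings are illustrative.

      # Minimal real-coded GA with a per-generation best-solution archive.
      import numpy as np

      rng = np.random.default_rng(12)
      target = np.array([1.0, -2.0, 0.5])              # "true" model parameters

      def fitness(p):                                  # higher is better
          return -np.sum((p - target) ** 2)

      pop = rng.normal(0, 2, (40, 3))
      archive = []
      for gen in range(100):
          f = np.array([fitness(p) for p in pop])
          archive.append(pop[f.argmax()].copy())       # archive the generation best
          # Parent selection: binary tournament.
          idx = rng.integers(0, len(pop), (len(pop), 2))
          parents = pop[np.where(f[idx[:, 0]] > f[idx[:, 1]], idx[:, 0], idx[:, 1])]
          # Crossover: arithmetic blend of shuffled parent pairs.
          mates = parents[rng.permutation(len(parents))]
          children = 0.5 * (parents + mates)
          # Mutation: small Gaussian perturbation.
          children += rng.normal(0, 0.1, children.shape)
          # Replacement: elitism (keep the current best) plus the new children.
          children[0] = pop[f.argmax()]
          pop = children

      archive = np.array(archive)
      # Stabilization speed per parameter, as in the importance analysis above:
      print("std of archived bests, first vs last 20 generations:")
      print(archive[:20].std(axis=0).round(3), archive[-20:].std(axis=0).round(3))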

  16. Estimation of the Thurstonian model for the 2-AC protocol

    DEFF Research Database (Denmark)

    Christensen, Rune Haubo Bojesen; Lee, Hye-Seong; Brockhoff, Per B.

    2012-01-01

    The 2-AC protocol is a 2-AFC protocol with a “no-difference” option and is technically identical to the paired preference test with a “no-preference” option. The Thurstonian model for the 2-AC protocol is parameterized by δ and a decision parameter τ, the estimates of which can be obtained by fairly simple well-known methods. In this paper we describe how standard errors of the parameters can be obtained and how exact power computations can be performed. We also show how the Thurstonian model for the 2-AC protocol is closely related to a statistical model known as a cumulative probit model. This relationship makes it possible to extract estimates and standard errors of δ and τ from general statistical software, and furthermore, it makes it possible to combine standard regression modelling with the Thurstonian model for the 2-AC protocol. A model for replicated 2-AC data is proposed using cumulative probit models.

  17. Nonparametric Estimation of Distributions in Random Effects Models

    KAUST Repository

    Hart, Jeffrey D.

    2011-01-01

    We propose using minimum distance to obtain nonparametric estimates of the distributions of components in random effects models. A main setting considered is equivalent to having a large number of small datasets whose locations, and perhaps scales, vary randomly, but which otherwise have a common distribution. Interest focuses on estimating the distribution that is common to all datasets, knowledge of which is crucial in multiple testing problems where a location/scale invariant test is applied to every small dataset. A detailed algorithm for computing minimum distance estimates is proposed, and the usefulness of our methodology is illustrated by a simulation study and an analysis of microarray data. Supplemental materials for the article, including R-code and a dataset, are available online. © 2011 American Statistical Association.

  18. Dual states estimation of a subsurface flow-transport coupled model using ensemble Kalman filtering

    KAUST Repository

    El Gharamti, Mohamad

    2013-10-01

    Modeling the spread of subsurface contaminants requires coupling a groundwater flow model with a contaminant transport model. Such coupling may provide accurate estimates of future subsurface hydrologic states if essential flow and contaminant data are assimilated in the model. Assuming perfect flow, an ensemble Kalman filter (EnKF) can be used for direct data assimilation into the transport model. This is, however, a crude assumption as flow models can be subject to many sources of uncertainty. If the flow is not accurately simulated, contaminant predictions will likely be inaccurate even after successive Kalman updates of the contaminant model with the data. The problem is better handled when both flow and contaminant states are concurrently estimated using the traditional joint state augmentation approach. In this paper, we introduce a dual estimation strategy for data assimilation into a one-way coupled system by treating the flow and the contaminant models separately while intertwining a pair of distinct EnKFs, one for each model. The presented strategy only deals with the estimation of state variables but it can also be used for state and parameter estimation problems. This EnKF-based dual state-state estimation procedure presents a number of novel features: (i) it allows for simultaneous estimation of both flow and contaminant states in parallel; (ii) it provides a time consistent sequential updating scheme between the two models (first flow, then transport); (iii) it simplifies the implementation of the filtering system; and (iv) it yields more stable and accurate solutions than does the standard joint approach. We conducted synthetic numerical experiments based on various time stepping and observation strategies to evaluate the dual EnKF approach and compare its performance with the joint state augmentation approach. Experimental results show that on average, the dual strategy could reduce the estimation error of the coupled states by 15% compared with the
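
    The EnKF analysis step used in each of the two intertwined filters, shown below in its basic stochastic (perturbed-observation) form; this is a generic sketch, not the full coupled flow/transport configuration of the paper.

      # Stochastic EnKF analysis (update) step.
      import numpy as np

      rng = np.random.default_rng(13)

      def enkf_update(ensemble, H, y_obs, obs_var):
          """ensemble: (n_ens, n_state); H: (n_obs, n_state) observation operator."""
          n_ens = ensemble.shape[0]
          X = ensemble - ensemble.mean(axis=0)             # state anomalies
          Y = X @ H.T                                      # predicted-obs anomalies
          # Kalman gain from ensemble covariances: K = Pxy (Pyy + R)^-1
          Pxy = X.T @ Y / (n_ens - 1)
          Pyy = Y.T @ Y / (n_ens - 1) + obs_var * np.eye(len(y_obs))
          K = Pxy @ np.linalg.inv(Pyy)
          # Perturbed observations keep the analysis spread statistically consistent.
          y_pert = y_obs + rng.normal(0, np.sqrt(obs_var), (n_ens, len(y_obs)))
          innovations = y_pert - ensemble @ H.T
          return ensemble + innovations @ K.T

      # Tiny demo: 3-variable state (e.g. heads or concentrations), one observed.
      ens = rng.normal(5.0, 1.0, (50, 3))
      H = np.array([[1.0, 0.0, 0.0]])
      updated = enkf_update(ens, H, y_obs=np.array([4.2]), obs_var=0.1)
      print("prior mean:", ens.mean(axis=0).round(2),
            "posterior mean:", updated.mean(axis=0).round(2))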

  19. The effect of coupling hydrologic and hydrodynamic models on probable maximum flood estimation

    Science.gov (United States)

    Felder, Guido; Zischg, Andreas; Weingartner, Rolf

    2017-07-01

    Deterministic rainfall-runoff modelling usually assumes a stationary hydrological system, as model parameters are calibrated with, and are therefore dependent on, observed data. However, runoff processes are probably not stationary in the case of a probable maximum flood (PMF), where discharge greatly exceeds observed flood peaks. Developing hydrodynamic models and using them to build coupled hydrologic-hydrodynamic models can potentially improve the plausibility of PMF estimations. This study aims to assess the potential benefits and constraints of coupled modelling compared to standard deterministic hydrologic modelling for PMF estimation. The two modelling approaches are applied using a set of 100 spatio-temporal probable maximum precipitation (PMP) distribution scenarios. The resulting hydrographs, the resulting peak discharges, and the reliability and plausibility of the estimates are evaluated. The discussion of the results shows that coupling hydrologic and hydrodynamic models substantially improves the physical plausibility of PMF modelling, although both modelling approaches lead to PMF estimations for the catchment outlet that fall within a similar range. Using a coupled model is particularly suggested in cases where considerable flood-prone areas are situated within a catchment.

  20. Estimation of spatial uncertainties of tomographic velocity models

    Energy Technology Data Exchange (ETDEWEB)

    Jordan, M.; Du, Z.; Querendez, E. [SINTEF Petroleum Research, Trondheim (Norway)

    2012-12-15

    This research project aims to evaluate the possibility of assessing the spatial uncertainties in tomographic velocity model building in a quantitative way. The project is intended to serve as a test of whether accurate and specific uncertainty estimates (e.g., in meters) can be obtained. The project is based on Monte Carlo-type perturbations of the velocity model as obtained from the tomographic inversion guided by diagonal and off-diagonal elements of the resolution and the covariance matrices. The implementation and testing of this method was based on the SINTEF in-house stereotomography code, using small synthetic 2D data sets. To test the method the calculation and output of the covariance and resolution matrices was implemented, and software to perform the error estimation was created. The work included the creation of 2D synthetic data sets, the implementation and testing of the software to conduct the tests (output of the covariance and resolution matrices, which are not implicitly provided by stereotomography), application to synthetic data sets, analysis of the test results, and creation of the final report. The results show that this method can be used to estimate the spatial errors in tomographic images quantitatively. The results agree with the known errors for our synthetic models. However, the method can only be applied to structures in the model where the change of seismic velocity is larger than the predicted error of the velocity parameter amplitudes. In addition, the analysis is dependent on the tomographic method, e.g., regularization and parameterization. The conducted tests were very successful and we believe that this method could be developed further to be applied to third-party tomographic images.