WorldWideScience

Sample records for constructing predictive estimates

  1. Estimating construction and demolition debris generation using a materials flow analysis approach.

    Science.gov (United States)

    Cochran, K M; Townsend, T G

    2010-11-01

    The magnitude and composition of a region's construction and demolition (C&D) debris should be understood when developing rules, policies and strategies for managing this segment of the solid waste stream. In the US, several national estimates have been conducted using a weight-per-construction-area approximation; national estimates using alternative procedures such as those used for other segments of the solid waste stream have not been reported for C&D debris. This paper presents an evaluation of a materials flow analysis (MFA) approach for estimating C&D debris generation and composition for a large region (the US). The consumption of construction materials in the US and typical waste factors used for construction materials purchasing were used to estimate the mass of solid waste generated as a result of construction activities. Debris from demolition activities was predicted from various historical construction materials consumption data and estimates of average service lives of the materials. The MFA approach estimated that approximately 610-780 × 10⁶ Mg of C&D debris was generated in 2002. This predicted mass exceeds previous estimates using other C&D debris predictive methodologies and reflects the large waste stream that exists. Copyright © 2010 Elsevier Ltd. All rights reserved.
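
    A minimal sketch of the materials-flow arithmetic described above: construction debris as current consumption times purchasing waste factors, and demolition debris as the consumption of one average service life ago. The material names and all figures are illustrative assumptions, not the paper's data.

      # Construction debris: material consumed this year x purchasing waste factor.
      # Demolition debris: consumption from one service life ago, assumed to reach
      # end of life now. All figures are illustrative, not the paper's.
      consumption_2002 = {"concrete": 400e6, "wood": 60e6, "drywall": 25e6}    # Mg
      consumption_lagged = {"concrete": 150e6, "wood": 40e6, "drywall": 10e6}  # Mg, one service life ago
      waste_factor = {"concrete": 0.04, "wood": 0.10, "drywall": 0.12}         # fraction wasted at purchase

      construction_debris = sum(consumption_2002[m] * waste_factor[m] for m in consumption_2002)
      demolition_debris = sum(consumption_lagged.values())  # assume the full mass becomes debris
      total = construction_debris + demolition_debris
      print(f"Estimated C&D debris: {total / 1e6:.0f} x 10^6 Mg")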

  2. A novel methodology to estimate the evolution of construction waste in construction sites.

    Science.gov (United States)

    Katz, Amnon; Baum, Hadassa

    2011-02-01

    This paper focuses on the accumulation of construction waste generated throughout the erection of new residential buildings. A special methodology was developed in order to provide a model that will predict the flow of construction waste. The amount of waste and its constituents, produced on 10 relatively large construction sites (7000-32,000 m² of built area) was monitored periodically for a limited time. A model that predicts the accumulation of construction waste was developed based on these field observations. According to the model, waste accumulates in an exponential manner, i.e. smaller amounts are generated during the early stages of construction and increasing amounts are generated towards the end of the project. The total amount of waste from these sites was estimated at 0.2 m³ per 1 m² of floor area. A good correlation was found between the model predictions and actual data from the field survey. Copyright © 2010 Elsevier Ltd. All rights reserved.
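
    The reported exponential accumulation can be illustrated by fitting a simple growth curve w(t) = a(e^(bt) - 1) to periodic site measurements; both the functional form and the data below are assumptions for the sketch, not the authors' fitted model.

      import numpy as np
      from scipy.optimize import curve_fit

      def waste(t, a, b):
          # Cumulative waste at construction progress t (0..1): grows slowly at
          # first and faster towards project completion.
          return a * (np.exp(b * t) - 1.0)

      progress = np.array([0.1, 0.25, 0.4, 0.55, 0.7, 0.85, 1.0])   # monitored stages
      observed = np.array([8, 25, 50, 90, 150, 240, 380.0])         # m^3, illustrative site data

      (a, b), _ = curve_fit(waste, progress, observed, p0=(10.0, 3.0))
      print(f"fitted model: w(t) = {a:.1f} * (exp({b:.2f} t) - 1)")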

  3. Predictable grammatical constructions

    DEFF Research Database (Denmark)

    Lucas, Sandra

    2015-01-01

    My aim in this paper is to provide evidence from diachronic linguistics for the view that some predictable units are entrenched in grammar and consequently in human cognition, in a way that makes them functionally and structurally equal to nonpredictable grammatical units, suggesting that these predictable units should be considered grammatical constructions on a par with the nonpredictable constructions. Frequency has usually been seen as the only possible argument speaking in favor of viewing some formally and semantically fully predictable units as grammatical constructions. However, this paper … semantically and formally predictable. Despite this difference, [méllo INF], like the other future periphrases, seems to be highly entrenched in the cognition (and grammar) of Early Medieval Greek language users, and consequently a grammatical construction. The syntactic evidence speaking in favor of [méllo …]

  4. Three procedures for estimating erosion from construction areas

    International Nuclear Information System (INIS)

    Abt, S.R.; Ruff, J.F.

    1978-01-01

    Erosion from many mining and construction sites can lead to serious environmental pollution problems. Therefore, erosion management plans must be developed so that the engineer may implement measures to control or eliminate excessive soil losses. To properly implement a management program, it is necessary to estimate potential soil losses from the time the project begins to beyond project completion. Three methodologies are presented which project the estimated soil losses due to sheet or rill erosion by water and are applicable to mining and construction areas. Furthermore, the three methods described are intended as indicators of the state of the art in water erosion prediction. The procedures herein do not account for gully erosion, snowmelt erosion, wind erosion, freeze-thaw erosion or extensive flooding.

  5. ANN Based Approach for Estimation of Construction Costs of Sports Fields

    Directory of Open Access Journals (Sweden)

    Michał Juszczyk

    2018-01-01

    Full Text Available Cost estimates are essential for the success of construction projects. Neural networks, as tools of artificial intelligence, offer significant potential in this field. Applying neural networks, however, requires dedicated studies due to the specifics of different kinds of facilities. This paper proposes an approach to the estimation of construction costs of sports fields based on neural networks. The general applicability of artificial neural networks to the formulated cost estimation problem is investigated. The applicability of multilayer perceptron networks is confirmed by the results of the initial training of a set of various artificial neural networks. Moreover, one network was tailored for mapping a relationship between the total cost of construction works and the selected cost predictors which are characteristic of sports fields. Its prediction quality and accuracy were assessed positively. The research results legitimize the proposed approach.
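
    A minimal sketch of a multilayer-perceptron cost model of the kind described, in scikit-learn; the sports-field predictors and cost data are synthetic assumptions, since the record does not list the selected cost predictors.

      import numpy as np
      from sklearn.neural_network import MLPRegressor
      from sklearn.preprocessing import StandardScaler
      from sklearn.pipeline import make_pipeline

      rng = np.random.default_rng(0)
      # Hypothetical predictors: field area (m^2), drainage class, lighting masts.
      X = rng.uniform([1000, 0, 0], [10000, 3, 8], size=(200, 3))
      # Synthetic cost in thousands of currency units, with noise.
      y = 0.15 * X[:, 0] + 40 * X[:, 1] + 25 * X[:, 2] + rng.normal(0, 50, 200)

      model = make_pipeline(StandardScaler(),
                            MLPRegressor(hidden_layer_sizes=(8,), solver="lbfgs",
                                         max_iter=5000, random_state=0))
      model.fit(X, y)
      print("predicted cost (thousands):", model.predict([[6000, 2, 4]])[0].round(0))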

  6. Constructing Predictive Estimates for Worker Exposure to Radioactivity During Decommissioning: Analysis of Completed Decommissioning Projects - Master Thesis

    Energy Technology Data Exchange (ETDEWEB)

    Dettmers, Dana Lee; Eide, Steven Arvid

    2002-10-01

    An analysis of completed decommissioning projects is used to construct predictive estimates for worker exposure to radioactivity during decommissioning activities. The preferred organizational method for the completed decommissioning project data is to divide the data by type of facility, whether decommissioning was performed on part of the facility or the complete facility, and the level of radiation within the facility prior to decommissioning (low, medium, or high). Additional data analysis shows that there is not a downward trend in worker exposure data over time. Also, the use of a standard estimate for worker exposure to radioactivity may be a best estimate for low complete storage, high partial storage, and medium reactor facilities; a conservative estimate for some facilities with a low level of radiation (reactor complete, research complete, pits/ponds, other), medium partial process facilities, and high complete research facilities; and an underestimate for the remaining facilities. Limited data are available to compare different decommissioning alternatives, so the available data are reported and no conclusions can be drawn. It is recommended that all DOE sites and the NRC use a similar method to document worker hours, worker exposure to radiation (person-rem), and standard industrial accidents, injuries, and deaths for all completed decommissioning activities.

  7. CONSTRUCTION COST PREDICTION USING NEURAL NETWORKS

    Directory of Open Access Journals (Sweden)

    Smita K Magdum

    2017-10-01

    Full Text Available Construction cost prediction is important for construction firms to compete and grow in the industry. Accurate construction cost prediction in the early stage of a project is important for project feasibility studies and successful completion. There are many factors that affect cost prediction. This paper presents construction cost prediction as a multiple regression model with the cost of six materials as independent variables. The objective of this paper is to develop neural network and multilayer perceptron based models for construction cost prediction. Different models of NN and MLP are developed with varying hidden layer sizes and numbers of hidden nodes. Four artificial neural network models and twelve multilayer perceptron models are compared. MLP and NN give better results than the statistical regression method. As compared to NN, MLP works better on the training dataset but fails on the testing dataset. Five activation functions are tested to identify a suitable function for the problem. The 'elu' transfer function gives better results than the other transfer functions.

  8. CONSTRUCTING ACCOUNTING UNCERTAINTY ESTIMATES VARIABLE

    Directory of Open Access Journals (Sweden)

    Nino Serdarevic

    2012-10-01

    Full Text Available This paper presents research results on the financial reporting quality of BIH firms, utilizing the empirical relation between accounting conservatism, generated in created critical accounting policy choices, and management abilities in estimates and the prediction power of domestic private-sector accounting. Primary research was conducted on firms' financial statements, constructing the CAPCBIH (Critical Accounting Policy Choices relevant in B&H) variable, which represents a particular internal control system and risk assessment and influences financial reporting positions in accordance with the specific business environment. I argue that firms' management possesses no relevant capacity to determine risks and the true consumption of economic benefits, leading to the creation of hidden reserves in inventories and accounts payable, and latent losses for bad debt and asset revaluations. I draw special attention to recent IFRS convergences to US GAAP, especially in harmonizing with FAS 130 Reporting comprehensive income (in revised IAS 1) and FAS 157 Fair value measurement. The CAPCBIH variable, which showed very poor performance, reveals a considerable failure to recognize environment specifics. Furthermore, I underline the importance of the revised ISAE and the re-enforced role of auditors in assessing the relevance of management estimates.

  9. Construction of the Calibration Set through Multivariate Analysis in Visible and Near-Infrared Prediction Model for Estimating Soil Organic Matter

    Directory of Open Access Journals (Sweden)

    Xiaomi Wang

    2017-02-01

    Full Text Available The visible and near-infrared (VNIR) spectroscopy prediction model is an effective tool for the prediction of soil organic matter (SOM) content. The predictive accuracy of the VNIR model is highly dependent on the selection of the calibration set. However, conventional methods for selecting the calibration set for constructing the VNIR prediction model merely consider either the gradients of SOM or the soil VNIR spectra and neglect the influence of environmental variables. Yet soil samples generally present a strong spatial variability, and, thus, the relationship between the SOM content and VNIR spectra may vary with respect to locations and surrounding environments. Hence, VNIR prediction models based on conventional calibration set selection methods would be biased, especially for estimating highly spatially variable soil content (e.g., SOM). To equip the calibration set selection method with the ability to consider SOM spatial variation and environmental influence, this paper proposes an improved method for selecting the calibration set. The proposed method combines the improved multi-variable association relationship clustering mining (MVARC) method and the Rank–Kennard–Stone (Rank-KS) method in order to synthetically consider the SOM gradient, spectral information, and environmental variables. In the proposed MVARC-R-KS method, MVARC integrates the Apriori algorithm, a density-based clustering algorithm, and the Delaunay triangulation. The MVARC method is first utilized to adaptively mine clustering distribution zones in which environmental variables exert a similar influence on soil samples. The feasibility of the MVARC method is proven by conducting an experiment on a simulated dataset. The calibration set is evenly selected from the clustering zones and the remaining zone by using the Rank-KS algorithm in order to avoid a single property in the selected calibration set. The proposed MVARC-R-KS approach is applied to select a

  10. Early cost estimating for road construction projects using multiple regression techniques

    Directory of Open Access Journals (Sweden)

    Ibrahim Mahamid

    2011-12-01

    Full Text Available The objective of this study is to develop early cost estimating models for road construction projects using multiple regression techniques, based on 131 sets of data collected in the West Bank in Palestine. As the cost estimates are required at early stages of a project, considerations were given to the fact that the input data for the required regression model could be easily extracted from sketches or the scope definition of the project. 11 regression models are developed to estimate the total cost of road construction projects in US dollars; 5 of them include bid quantities as input variables and 6 include road length and road width. The coefficient of determination r² for the developed models ranges from 0.92 to 0.98, which indicates that the values predicted by the models fit the real-life data well. The values of the mean absolute percentage error (MAPE) of the developed regression models range from 13% to 31%; the results compare favorably with past research, which has shown that estimate accuracy in the early stages of a project is between ±25% and ±50%.
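
    A minimal sketch of such a length-and-width regression model with the two reported accuracy measures, r² and MAPE, on synthetic stand-in data (the 131 West Bank observations are not reproduced).

      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.metrics import r2_score, mean_absolute_percentage_error

      rng = np.random.default_rng(1)
      length = rng.uniform(0.5, 10.0, 131)   # km, road length (synthetic stand-ins)
      width = rng.uniform(4.0, 12.0, 131)    # m, road width
      cost = 90_000 * length * width / 8 + rng.normal(0, 40_000, 131)  # USD, synthetic

      X = np.column_stack([length, width])
      model = LinearRegression().fit(X, cost)
      pred = model.predict(X)
      print(f"r2 = {r2_score(cost, pred):.2f}, "
            f"MAPE = {100 * mean_absolute_percentage_error(cost, pred):.0f}%")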

  11. A Duration Prediction Using a Material-Based Progress Management Methodology for Construction Operation Plans

    Directory of Open Access Journals (Sweden)

    Yongho Ko

    2017-04-01

    Full Text Available Precise and accurate prediction models for duration and cost enable contractors to improve their decision making for effective resource management in terms of sustainability in construction. Previous studies have been limited to cost-based estimations, but this study focuses on a material-based progress management method. Cost-based estimations typically used in construction, such as the earned value method, rely on comparing the planned budget with the actual cost. However, accurately planning budgets requires analysis of many factors, such as the financial status of the sectors involved. Furthermore, there is a higher possibility of changes in the budget than in the total amount of material used during construction, which is deduced from the quantity take-off from drawings and specifications. Accordingly, this study proposes a material-based progress management methodology, which was developed using different predictive analysis models (regression, neural network, and auto-regressive moving average as well as datasets on material and labor, which can be extracted from daily work reports from contractors. A case study on actual datasets was conducted, and the results show that the proposed methodology can be efficiently used for progress management in construction.

  12. PockDrug: A Model for Predicting Pocket Druggability That Overcomes Pocket Estimation Uncertainties.

    Science.gov (United States)

    Borrel, Alexandre; Regad, Leslie; Xhaard, Henri; Petitjean, Michel; Camproux, Anne-Claude

    2015-04-27

    Predicting protein druggability is a key interest in the target identification phase of drug discovery. Here, we assess the pocket estimation methods' influence on druggability predictions by comparing statistical models constructed from pockets estimated using different pocket estimation methods: a proximity of either 4 or 5.5 Å to a cocrystallized ligand or DoGSite and fpocket estimation methods. We developed PockDrug, a robust pocket druggability model that copes with uncertainties in pocket boundaries. It is based on a linear discriminant analysis from a pool of 52 descriptors combined with a selection of the most stable and efficient models using different pocket estimation methods. PockDrug retains the best combinations of three pocket properties which impact druggability: geometry, hydrophobicity, and aromaticity. It results in an average accuracy of 87.9% ± 4.7% using a test set and exhibits higher accuracy (∼5-10%) than previous studies that used an identical apo set. In conclusion, this study confirms the influence of pocket estimation on pocket druggability prediction and proposes PockDrug as a new model that overcomes pocket estimation variability.

  13. Probabilistic prediction of expected ground condition and construction time and costs in road tunnels

    Directory of Open Access Journals (Sweden)

    A. Mahmoodzadeh

    2016-10-01

    Full Text Available Ground condition and construction (excavation and support) time and costs are the key factors in decision-making during the planning and design phases of a tunnel project. An innovative methodology for probabilistic estimation of ground condition and construction time and costs is proposed, which is an integration of the ground prediction approach based on a Markov process, and the time and cost variance analysis based on Monte-Carlo (MC) simulation. The former provides the probabilistic description of ground classification along the tunnel alignment according to the geological information revealed from the geological profile and boreholes. The latter provides the probabilistic description of the expected construction time and costs for each operation according to survey feedback from experts. Then an engineering application to the Hamro tunnel is presented to demonstrate how the ground condition and the construction time and costs are estimated in a probabilistic way. For most items, the data needed for this methodology were estimated by distributing questionnaires among tunneling experts and applying the mean values of the responses. These estimates make both the owners and the contractors aware of the risk they carry before construction, and are useful for both tendering and bidding.
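
    A minimal sketch of the two-stage idea: a Markov chain generates ground classes along the alignment, and Monte Carlo sampling of per-class advance times and costs yields distributions of total duration and cost. The transition matrix and triangular ranges are illustrative assumptions, not the Hamro tunnel data.

      import numpy as np

      rng = np.random.default_rng(2)
      # Hypothetical three-class ground model (good/fair/poor) over 100 sections.
      P = np.array([[0.70, 0.25, 0.05],    # transition probabilities between
                    [0.20, 0.60, 0.20],    # adjacent 10 m sections (illustrative)
                    [0.05, 0.35, 0.60]])
      advance_days = [(1, 2, 3), (2, 4, 6), (4, 7, 12)]     # triangular (min, mode, max) per class
      cost_per_m = [(2e3, 3e3, 4e3), (4e3, 6e3, 9e3), (8e3, 12e3, 20e3)]

      totals = []
      for _ in range(5000):                                 # Monte Carlo runs
          state, days, cost = 0, 0.0, 0.0
          for _section in range(100):                       # 100 sections of 10 m
              state = rng.choice(3, p=P[state])
              days += rng.triangular(*advance_days[state])
              cost += 10 * rng.triangular(*cost_per_m[state])
          totals.append((days, cost))
      days, cost = np.array(totals).T
      print(f"P80 duration: {np.percentile(days, 80):.0f} d, "
            f"P80 cost: {np.percentile(cost, 80) / 1e6:.1f} M")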

  14. Development of a simple estimation tool for LMFBR construction cost

    International Nuclear Information System (INIS)

    Yoshida, Kazuo; Kinoshita, Izumi

    1999-01-01

    A simple tool for estimating the construction costs of liquid-metal-cooled fast breeder reactors (LMFBRs), 'Simple Cost', was developed in this study. Simple Cost is based on a new estimation formula that reduces the amount of design data required to estimate construction costs. Consequently, Simple Cost can be used to estimate the construction costs of innovative LMFBR concepts for which detailed design has not been carried out. The results of test calculations show that Simple Cost provides cost estimates equivalent to those obtained with conventional methods within the range of plant power from 325 to 1500 MWe. Sensitivity analyses for typical design parameters were conducted using Simple Cost. The effects of four major parameters - reactor vessel diameter, core outlet temperature, sodium handling area and number of secondary loops - on the construction costs of LMFBRs were evaluated quantitatively. The results show that reducing the sodium handling area is particularly effective in reducing construction costs. (author)

  15. Simulation-Based Prediction of Equivalent Continuous Noises during Construction Processes.

    Science.gov (United States)

    Zhang, Hong; Pei, Yun

    2016-08-12

    Quantitative prediction of construction noise is crucial for evaluating construction plans and making decisions to address noise levels. Considering the limitations of existing methods for measuring or predicting construction noise, particularly the equivalent continuous noise level over a period of time, this paper presents a discrete-event simulation method for predicting construction noise in terms of the equivalent continuous level. The noise-calculating models regarding synchronization, propagation and the equivalent continuous level are presented. A simulation framework for modeling the noise-affecting factors and calculating the equivalent continuous noise by incorporating the noise-calculating models into the simulation strategy is proposed. An application study is presented to demonstrate and justify the proposed simulation method in predicting the equivalent continuous noise during construction. The study contributes a simulation methodology to quantitatively predict the equivalent continuous noise of construction while considering the relevant uncertainties, dynamics and interactions.
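
    As an illustration of the quantity being predicted, the standard equivalent continuous level Leq = 10 log10((1/T) Σ t_i 10^(L_i/10)) can be computed directly; this sketch shows only the formula, not the paper's discrete-event simulation.

      import math

      def leq(levels_db, durations_s):
          """Equivalent continuous sound level over the combined period:
          Leq = 10 * log10( (1/T) * sum(t_i * 10**(L_i/10)) )."""
          total = sum(durations_s)
          energy = sum(t * 10 ** (L / 10) for L, t in zip(levels_db, durations_s))
          return 10 * math.log10(energy / total)

      # e.g. an excavator at 85 dB(A) for 2 h and idle site noise at 60 dB(A) for 6 h:
      print(f"Leq = {leq([85, 60], [2 * 3600, 6 * 3600]):.1f} dB(A)")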

  16. Construction cost prediction model for conventional and sustainable college buildings in North America

    Directory of Open Access Journals (Sweden)

    Othman Subhi Alshamrani

    2017-03-01

    Full Text Available The literature lacks initial cost prediction models for college buildings, especially models comparing the costs of sustainable and conventional buildings. A multi-regression model was developed for conceptual initial cost estimation of conventional and sustainable college buildings in North America. RS Means was used to estimate the national average of construction costs for 2014, which was subsequently utilized to develop the model. The model can predict the initial cost per square foot for two structure types, steel and concrete. The other predictor variables were building area, number of floors and floor height. The model was developed in three major stages: preliminary diagnostics of data quality, model development, and validation. The developed model was successfully tested and validated with real-time data.

  17. Using Intelligent Techniques in Construction Project Cost Estimation: 10-Year Survey

    Directory of Open Access Journals (Sweden)

    Abdelrahman Osman Elfaki

    2014-01-01

    Full Text Available Cost estimation is the most important preliminary process in any construction project. Therefore, construction cost estimation has the lion’s share of the research effort in construction management. In this paper, we have analysed and studied proposals for construction cost estimation for the last 10 years. To implement this survey, we have proposed and applied a methodology that consists of two parts. The first part concerns data collection, for which we have chosen special journals as sources for the surveyed proposals. The second part concerns the analysis of the proposals. To analyse each proposal, the following four questions have been set. Which intelligent technique is used? How have data been collected? How are the results validated? And which construction cost estimation factors have been used? From the results of this survey, two main contributions have been produced. The first contribution is the defining of the research gap in this area, which has not been fully covered by previous proposals of construction cost estimation. The second contribution of this survey is the proposal and highlighting of future directions for forthcoming proposals, aimed ultimately at finding the optimal construction cost estimation. Moreover, we consider the second part of our methodology as one of our contributions in this paper. This methodology has been proposed as a standard benchmark for construction cost estimation proposals.

  18. Adjusting estimative prediction limits

    OpenAIRE

    Masao Ueki; Kaoru Fueda

    2007-01-01

    This note presents a direct adjustment of the estimative prediction limit to reduce the coverage error from a target value to third-order accuracy. The adjustment is asymptotically equivalent to those of Barndorff-Nielsen & Cox (1994, 1996) and Vidoni (1998). It has a simpler form with a plug-in estimator of the coverage probability of the estimative limit at the target value. Copyright 2007, Oxford University Press.

  19. A Semantics-Based Approach to Construction Cost Estimating

    Science.gov (United States)

    Niknam, Mehrdad

    2015-01-01

    A construction project requires collaboration of different organizations such as owner, designer, contractor, and resource suppliers. These organizations need to exchange information to improve their teamwork. Understanding the information created in other organizations requires specialized human resources. Construction cost estimating is one of…

  20. An Estimation of Construction and Demolition Debris in Seoul, Korea: Waste Amount, Type, and Estimating Model.

    Science.gov (United States)

    Seo, Seongwon; Hwang, Yongwoo

    1999-08-01

    Construction and demolition (C&D) debris is generated at the site of various construction activities. However, the amount of the debris is usually so large that it is necessary to estimate the amount of C&D debris as accurately as possible for effective waste management and control in urban areas. In this paper, an effective estimation method using a statistical model was proposed. The estimation process was composed of five steps: estimation of the life span of buildings; estimation of the floor area of buildings to be constructed; estimation of the floor area of buildings to be demolished; calculation of individual intensity units of C&D debris; and estimation of future C&D debris production. This method was also applied in the city of Seoul as an actual case, and the estimated amount of C&D debris in Seoul in 2021 was approximately 24 million tons. Of this total amount, 98% was generated by demolition, and the main components of debris were concrete and brick.
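
    A minimal sketch of the intensity-unit arithmetic at the core of such estimates: debris equals floor area times a per-area intensity unit, summed over building types. The areas and unit rates below are hypothetical, not Seoul's actual figures.

      # Illustrative intensity-unit arithmetic; all figures are hypothetical.
      demolition_area = {"residential": 4.0e6, "commercial": 1.5e6}   # m^2 demolished
      intensity = {"residential": 1.0, "commercial": 1.3}             # tons of debris per m^2

      debris_tons = sum(demolition_area[k] * intensity[k] for k in demolition_area)
      print(f"estimated demolition debris: {debris_tons / 1e6:.1f} million tons")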

  1. Construction Worker Fatigue Prediction Model Based on System Dynamic

    Directory of Open Access Journals (Sweden)

    Wahyu Adi Tri Joko

    2017-01-01

    Full Text Available Construction accidents can be caused by internal and external factors such as worker fatigue and an unsafe project environment. The tight schedules of construction projects force construction workers to work overtime for long periods. This situation leads to worker fatigue. This paper proposes a model to predict construction worker fatigue based on system dynamics (SD). System dynamics is used to represent the correlation among internal and external factors and to simulate the level of worker fatigue. To validate the model, 93 construction workers who worked in high-rise building construction projects were used as a case study. The result shows that excessive workload, working elevation and age are the main factors leading to construction worker fatigue. The simulation result also shows that these factors can increase the worker fatigue level by 21.2% compared to the normal condition. Besides predicting the worker fatigue level, this model can also be used as an early warning system to prevent construction worker accidents.
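
    A minimal stock-and-flow sketch of the system-dynamics idea: fatigue accumulates from workload-related inflows and drains through recovery. The factors mirror those named above (workload, elevation, age), but every coefficient is an illustrative assumption, not the paper's calibrated model.

      import numpy as np

      # Fatigue as a stock: inflow from workload, outflow from recovery.
      # Coefficients are illustrative placeholders only.
      def simulate_fatigue(hours, workload=1.2, elevation_factor=1.1, age_factor=1.05,
                           recovery_rate=0.08, dt=0.5):
          fatigue = 0.0
          for _ in np.arange(0, hours, dt):
              inflow = workload * elevation_factor * age_factor   # fatigue build-up rate
              outflow = recovery_rate * fatigue                   # recovery proportional to level
              fatigue += (inflow - outflow) * dt
          return fatigue

      print(f"fatigue after a 10 h overtime shift: {simulate_fatigue(10):.1f}")
      print(f"fatigue after a normal 8 h shift:    {simulate_fatigue(8, workload=1.0):.1f}")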

  2. A comparative analysis of methods to represent uncertainty in estimating the cost of constructing wastewater treatment plants.

    Science.gov (United States)

    Chen, Ho-Wen; Chang, Ni-Bin

    2002-08-01

    Prediction of the construction cost of wastewater treatment facilities can be influential for the economic feasibility of various levels of water pollution control programs. However, construction costs are difficult to evaluate precisely in an uncertain environment, and measured quantities are always burdened with different types of cost structures. Therefore, an understanding of the previous development of wastewater treatment plants and of the related construction cost structures of those facilities becomes essential for dealing with an effective regional water pollution control program. But in conventional regression models, deviations between the observed values and the estimated values are assumed to be due to measurement errors only. The inherent uncertainties of the underlying cost structure, where human estimation is influential, are rarely explored. This paper recasts a well-known problem of construction cost estimation for both domestic and industrial wastewater treatment plants via a comparative framework. Comparisons were made for three regression techniques: the conventional least squares regression method, the fuzzy linear regression method, and the newly derived fuzzy goal regression method. The case study, incorporating a complete database of 48 domestic wastewater treatment plants and 29 industrial wastewater treatment plants collected in Taiwan, implements such a cost estimation procedure in an uncertain environment. Given that the fuzzy structure in regression estimation may account for the inherent human complexity in cost estimation, the fuzzy goal regression method does exhibit more robust results in terms of some criteria. Moderate economy of scale exists in constructing both domestic and industrial wastewater treatment plants. Findings indicate that the optimal size of a domestic wastewater treatment plant is approximately 15,000 m³/day (CMD) and higher in Taiwan.

  3. Estimation of construction waste generation and management in Thailand.

    Science.gov (United States)

    Kofoworola, Oyeshola Femi; Gheewala, Shabbir H

    2009-02-01

    This study examines construction waste generation and management in Thailand. It is estimated that between 2002 and 2005, an average of 1.1 million tons of construction waste was generated per year in Thailand. This constitutes about 7.7% of the total amount of waste disposed in both landfills and open dumpsites annually during the same period. Although construction waste constitutes a major source of waste in terms of volume and weight, its management and recycling are yet to be effectively practiced in Thailand. Recently, the management of construction waste is being given attention due to its rapidly increasing unregulated dumping in undesignated areas, and recycling is being promoted as a method of managing this waste. If effectively implemented, its potential economic and social benefits are immense. It was estimated that between 70 and 4,000 jobs would have been created between 2002 and 2005, if all construction wastes in Thailand had been recycled. Additionally it would have contributed an average savings of about 3.0 × 10⁵ GJ per year in the final energy consumed by the construction sector of the nation within the same period based on the recycling scenario analyzed. The current national integrated waste management plan could enhance the effective recycling of construction and demolition waste in Thailand when enforced. It is recommended that an inventory of all construction waste generated in the country be carried out in order to assess the feasibility of large scale recycling of construction and demolition waste.

  4. Quantifying and estimating the predictive accuracy for censored time-to-event data with competing risks.

    Science.gov (United States)

    Wu, Cai; Li, Liang

    2018-05-15

    This paper focuses on quantifying and estimating the predictive accuracy of prognostic models for time-to-event outcomes with competing events. We consider the time-dependent discrimination and calibration metrics, including the receiver operating characteristics curve and the Brier score, in the context of competing risks. To address censoring, we propose a unified nonparametric estimation framework for both discrimination and calibration measures, by weighting the censored subjects with the conditional probability of the event of interest given the observed data. The proposed method can be extended to time-dependent predictive accuracy metrics constructed from a general class of loss functions. We apply the methodology to a data set from the African American Study of Kidney Disease and Hypertension to evaluate the predictive accuracy of a prognostic risk score in predicting end-stage renal disease, accounting for the competing risk of pre-end-stage renal disease death, and evaluate its numerical performance in extensive simulation studies. Copyright © 2018 John Wiley & Sons, Ltd.

  5. Construction man-hour estimation for nuclear power plants

    International Nuclear Information System (INIS)

    Paek, J.H.

    1987-01-01

    This study centers on a statistical analysis of the preliminary construction time, main construction time, and total construction man-hours of nuclear power plants. The use of these econometric techniques allows the major man-hour driving variables to be identified through multivariate analysis of time-series data on over 80 United States nuclear power plants. The analysis made in this study provides a clearer picture of the dynamic changes that have occurred in the man-hours of these plants when compared to engineering estimates of man-hours, and produces a tool that can be used to project nuclear power plant man-hours.

  6. Estimation of construction and demolition waste using waste generation rates in Chennai, India.

    Science.gov (United States)

    Ram, V G; Kalidindi, Satyanarayana N

    2017-06-01

    A large amount of construction and demolition waste is being generated owing to rapid urbanisation in Indian cities. A reliable estimate of construction and demolition waste generation is essential to create awareness about this stream of solid waste among the government bodies in India. However, the required data to estimate construction and demolition waste generation in India are unavailable or not explicitly documented. This study proposed an approach to estimate construction and demolition waste generation using waste generation rates and demonstrated it by estimating construction and demolition waste generation in Chennai city. The demolition waste generation rates of primary materials were determined through regression analysis using waste generation data from 45 case studies. Materials, such as wood, electrical wires, doors, windows and reinforcement steel, were found to be salvaged and sold on the secondary market. Concrete and masonry debris were dumped in either landfills or unauthorised places. The total quantity of construction and demolition debris generated in Chennai city in 2013 was estimated to be 1.14 million tonnes. The proportion of masonry debris was found to be 76% of the total quantity of demolition debris. Construction and demolition debris forms about 36% of the total solid waste generated in Chennai city. A gross underestimation of construction and demolition waste generation in some earlier studies in India has also been shown. The methodology proposed could be utilised by government bodies, policymakers and researchers to generate reliable estimates of construction and demolition waste in other developing countries facing similar challenges of limited data availability.

  7. Construction Worker Fatigue Prediction Model Based on System Dynamic

    OpenAIRE

    Wahyu Adi Tri Joko; Ayu Ratnawinanda Lila

    2017-01-01

    Construction accidents can be caused by internal and external factors such as worker fatigue and an unsafe project environment. The tight schedules of construction projects force construction workers to work overtime for long periods. This situation leads to worker fatigue. This paper proposes a model to predict construction worker fatigue based on system dynamics (SD). System dynamics is used to represent the correlation among internal and external factors and to simulate the level of worker fatigue. To validate...

  8. Probabilistic cost estimating of nuclear power plant construction projects

    International Nuclear Information System (INIS)

    Finch, W.C.; Perry, L.W.; Postula, F.D.

    1978-01-01

    This paper shows how to identify and isolate cost accounts by developing probability trees down to component levels as justified by value and cost uncertainty. Examples are given of the procedure for assessing uncertainty in all areas contributing to cost: design, factory equipment pricing, and field labor and materials. The method of combining these individual uncertainties is presented so that the cost risk can be developed for components, systems and the total plant construction project. Formats which enable management to use the probabilistic cost estimate information for business planning and risk control are illustrated. Topics considered include code estimate performance, cost allocation, uncertainty encoding, probabilistic cost distributions, and interpretation. Effective cost control of nuclear power plant construction projects requires insight into areas of greatest cost uncertainty and a knowledge of the factors which can cause costs to vary from the single value estimates. It is concluded that probabilistic cost estimating can provide the necessary assessment of uncertainties, both as to their causes and their consequences.

  9. A consensus approach for estimating the predictive accuracy of dynamic models in biology.

    Science.gov (United States)

    Villaverde, Alejandro F; Bongard, Sophia; Mauch, Klaus; Müller, Dirk; Balsa-Canto, Eva; Schmid, Joachim; Banga, Julio R

    2015-04-01

    Mathematical models that predict the complex dynamic behaviour of cellular networks are fundamental in systems biology, and provide an important basis for biomedical and biotechnological applications. However, obtaining reliable predictions from large-scale dynamic models is commonly a challenging task due to lack of identifiability. The present work addresses this challenge by presenting a methodology for obtaining high-confidence predictions from dynamic models using time-series data. First, to preserve the complex behaviour of the network while reducing the number of estimated parameters, model parameters are combined in sets of meta-parameters, which are obtained from correlations between biochemical reaction rates and between concentrations of the chemical species. Next, an ensemble of models with different parameterizations is constructed and calibrated. Finally, the ensemble is used for assessing the reliability of model predictions by defining a measure of convergence of model outputs (consensus) that is used as an indicator of confidence. We report results of computational tests carried out on a metabolic model of Chinese Hamster Ovary (CHO) cells, which are used for recombinant protein production. Using noisy simulated data, we find that the aggregated ensemble predictions are on average more accurate than the predictions of individual ensemble models. Furthermore, ensemble predictions with high consensus are statistically more accurate than ensemble predictions with large variance. The procedure provides quantitative estimates of the confidence in model predictions and enables the analysis of sufficiently complex networks as required for practical applications. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
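
    A minimal sketch of the ensemble-consensus idea: run many parameterizations and treat low spread across members as high confidence. The convergence measure below is one plausible choice for the sketch, not the paper's exact definition.

      import numpy as np

      rng = np.random.default_rng(3)
      # Rows: ensemble members with different parameterizations; columns: time points.
      ensemble_predictions = rng.normal(loc=1.0, scale=0.2, size=(50, 30)).cumsum(axis=1)

      mean_pred = ensemble_predictions.mean(axis=0)   # aggregated ensemble prediction
      spread = ensemble_predictions.std(axis=0)       # disagreement between members
      consensus = 1.0 / (1.0 + spread)                # high when members agree (illustrative measure)
      trusted = consensus > 0.5                       # flag high-confidence time points
      print(f"{trusted.sum()} of {trusted.size} time points pass the consensus threshold")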

  10. Estimating Model Prediction Error: Should You Treat Predictions as Fixed or Random?

    Science.gov (United States)

    Wallach, Daniel; Thorburn, Peter; Asseng, Senthold; Challinor, Andrew J.; Ewert, Frank; Jones, James W.; Rotter, Reimund; Ruane, Alexander

    2016-01-01

    Crop models are important tools for impact assessment of climate change, as well as for exploring management options under current climate. It is essential to evaluate the uncertainty associated with predictions of these models. We compare two criteria of prediction error: MSEP_fixed, which evaluates the mean squared error of prediction for a model with fixed structure, parameters and inputs, and MSEP_uncertain(X), which evaluates the mean squared error averaged over the distributions of model structure, inputs and parameters. Comparison of model outputs with data can be used to estimate the former. The latter has a squared bias term, which can be estimated using hindcasts, and a model variance term, which can be estimated from a simulation experiment. The separate contributions to MSEP_uncertain(X) can be estimated using a random effects ANOVA. It is argued that MSEP_uncertain(X) is the more informative uncertainty criterion, because it is specific to each prediction situation.
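
    A minimal numerical sketch of the two criteria under the decomposition stated above, on synthetic data: MSEP_fixed compares one fixed model against observations, while MSEP_uncertain(X) combines a squared-bias term with a model-variance term estimated from simulated model variants.

      import numpy as np

      rng = np.random.default_rng(4)
      observed = rng.normal(5.0, 1.0, 40)                      # field observations
      fixed_model_pred = observed + rng.normal(0.3, 0.8, 40)   # one model, fixed structure/inputs
      msep_fixed = np.mean((fixed_model_pred - observed) ** 2)

      # MSEP_uncertain(X): draw model variants (parameters/inputs/structure) per situation.
      draws = fixed_model_pred + rng.normal(0.0, 0.5, size=(200, 40))
      squared_bias = np.mean((draws.mean(axis=0) - observed) ** 2)   # estimable from hindcasts
      model_variance = np.mean(draws.var(axis=0))                    # from a simulation experiment
      print(f"MSEP_fixed = {msep_fixed:.2f}; "
            f"MSEP_uncertain ~= {squared_bias + model_variance:.2f}")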

  11. Application of Boosting Regression Trees to Preliminary Cost Estimation in Building Construction Projects

    Directory of Open Access Journals (Sweden)

    Yoonseok Shin

    2015-01-01

    Full Text Available Among the recent data mining techniques available, the boosting approach has attracted a great deal of attention because of its effective learning algorithm and strong boundaries in terms of its generalization performance. However, the boosting approach has yet to be used in regression problems within the construction domain, including cost estimation, although it has been actively utilized in other domains. Therefore, a boosting regression tree (BRT) is applied to cost estimation at the early stage of a construction project to examine the applicability of the boosting approach to a regression problem within the construction domain. To evaluate the performance of the BRT model, its performance was compared with that of a neural network (NN) model, which has been proven to perform well in cost estimation domains. The BRT model showed results similar to those of the NN model using 234 actual cost datasets of a building construction project. In addition, the BRT model can provide additional information such as the importance plot and structure model, which can support estimators in comprehending the decision-making process. Consequently, the boosting approach has potential applicability in preliminary cost estimation in a building construction project.
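
    A minimal sketch of a boosted-regression-tree cost model in scikit-learn, including the variable importances behind the "importance plot" mentioned above; the predictors and data are synthetic assumptions, not the study's 234 cost datasets.

      import numpy as np
      from sklearn.ensemble import GradientBoostingRegressor
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(5)
      # Hypothetical early-stage predictors: gross floor area, storeys, structure type code.
      X = rng.uniform([500, 1, 0], [30000, 40, 2], size=(234, 3))
      y = 1200 * X[:, 0] + 5e5 * X[:, 1] + 2e6 * X[:, 2] + rng.normal(0, 1e6, 234)

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
      brt = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
      print("R^2 on held-out projects:", round(brt.score(X_te, y_te), 2))
      print("variable importances:", brt.feature_importances_.round(2))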

  12. Estimating Stochastic Volatility Models using Prediction-based Estimating Functions

    DEFF Research Database (Denmark)

    Lunde, Asger; Brix, Anne Floor

    In this paper prediction-based estimating functions (PBEFs), introduced in Sørensen (2000), are reviewed and PBEFs for the Heston (1993) stochastic volatility model are derived. The finite sample performance of the PBEF based estimator is investigated in a Monte Carlo study, and compared to the performance of the GMM estimator based on conditional moments of integrated volatility from Bollerslev and Zhou (2002). The case where the observed log-price process is contaminated by i.i.d. market microstructure (MMS) noise is also investigated. First, the impact of MMS noise on the parameter estimates from … to correctly account for the noise are investigated. Our Monte Carlo study shows that the estimator based on PBEFs outperforms the GMM estimator, both in the setting with and without MMS noise. Finally, an empirical application investigates the possible challenges and general performance of applying the PBEF …

  13. Prediction equation for estimating total daily energy requirements of special operations personnel.

    Science.gov (United States)

    Barringer, N D; Pasiakos, S M; McClung, H L; Crombie, A P; Margolis, L M

    2018-01-01

    Special Operations Forces (SOF) engage in a variety of military tasks, many of which produce high energy expenditures, leading to undesired energy deficits and loss of body mass. Therefore, the ability to accurately estimate daily energy requirements would be useful for accurate logistical planning. The objective was to generate a predictive equation estimating the energy requirements of SOF. A retrospective analysis was performed on data collected from SOF personnel engaged in 12 different SOF training scenarios. Energy expenditure and total body water were determined using the doubly labeled water technique. Physical activity level was determined as daily energy expenditure divided by resting metabolic rate. Physical activity level was broken into quartiles (0 = mission prep, 1 = common warrior tasks, 2 = battle drills, 3 = specialized intense activity) to generate a physical activity factor (PAF). Regression analysis was used to construct two predictive equations (Model A: body mass and PAF; Model B: fat-free mass and PAF) estimating daily energy expenditures. Average measured energy expenditure during SOF training was 4468 (range: 3700 to 6300) kcal·d⁻¹. Regression analysis revealed that physical activity level (r = 0.91; P < …) … These equations can be used to plan appropriate feeding regimens to meet SOF nutritional requirements across their mission profile.
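
    A sketch of the structure of Model A (body mass plus PAF); the coefficients below are invented placeholders for illustration, since the fitted values are not given in this record.

      def estimate_tee_kcal(body_mass_kg, paf):
          """Illustrative Model A structure: total daily energy expenditure from
          body mass and a physical activity factor (0..3). The coefficients are
          made-up placeholders, not the paper's fitted values."""
          intercept, b_mass, b_paf = 1500.0, 20.0, 600.0   # hypothetical coefficients
          return intercept + b_mass * body_mass_kg + b_paf * paf

      print(f"battle drills, 85 kg: {estimate_tee_kcal(85, 2):.0f} kcal/day")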

  14. Star-sensor-based predictive Kalman filter for satellite attitude estimation

    Institute of Scientific and Technical Information of China (English)

    林玉荣; 邓正隆

    2002-01-01

    A real-time attitude estimation algorithm, namely the predictive Kalman filter, is presented. This algorithm can accurately estimate the three-axis attitude of a satellite using only star sensor measurements. The implementation of the filter includes two steps: first, predicting the torque modeling error, and then estimating the attitude. Simulation results indicate that the predictive Kalman filter provides robust performance in the presence of both significant errors in the assumed model and in the initial conditions.
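
    A minimal sketch of the generic linear Kalman predict/update cycle underlying such a filter; the state model, noise levels and readings are illustrative, and the paper's torque-modeling-error prediction step is not reproduced here.

      import numpy as np

      # Generic linear Kalman filter (state: angle and angular rate).
      dt = 0.1
      F = np.array([[1, dt], [0, 1]])       # state transition
      H = np.array([[1.0, 0.0]])            # star sensor measures the angle only
      Q = 1e-5 * np.eye(2)                  # process noise (model error)
      R = np.array([[1e-3]])                # measurement noise

      x, P = np.zeros(2), np.eye(2)
      for z in [0.02, 0.025, 0.031, 0.04, 0.046]:       # star-sensor angle readings (rad)
          x, P = F @ x, F @ P @ F.T + Q                  # predict
          K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
          x = x + K @ (np.array([z]) - H @ x)            # update with measurement
          P = (np.eye(2) - K @ H) @ P
      print("estimated angle and rate:", x.round(4))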

  15. Adaptive vehicle motion estimation and prediction

    Science.gov (United States)

    Zhao, Liang; Thorpe, Chuck E.

    1999-01-01

    Accurate motion estimation and reliable maneuver prediction enable an automated car to react quickly and correctly to the rapid maneuvers of the other vehicles, and so allow safe and efficient navigation. In this paper, we present a car tracking system which provides motion estimation, maneuver prediction and detection of the tracked car. The three strategies employed - adaptive motion modeling, adaptive data sampling, and adaptive model switching probabilities - result in an adaptive interacting multiple model algorithm (AIMM). The experimental results on simulated and real data demonstrate that our tracking system is reliable, flexible, and robust. The adaptive tracking makes the system intelligent and useful in various autonomous driving tasks.

  16. Bragg peak prediction from quantitative proton computed tomography using different path estimates

    International Nuclear Information System (INIS)

    Wang Dongxu; Mackie, T Rockwell; Tome, Wolfgang A

    2011-01-01

    This paper characterizes the performance of the straight-line path (SLP) and cubic spline path (CSP) as path estimates used in reconstruction of proton computed tomography (pCT). The GEANT4 Monte Carlo simulation toolkit is employed to simulate the imaging phantom and proton projections. SLP, CSP and the most-probable path (MPP) are constructed based on the entrance and exit information of each proton. The physical deviations of SLP, CSP and MPP from the real path are calculated. Using a conditional proton path probability map, the relative probability of SLP, CSP and MPP are calculated and compared. The depth dose and Bragg peak are predicted on the pCT images reconstructed using SLP, CSP, and MPP and compared with the simulation result. The root-mean-square physical deviations and the cumulative distribution of the physical deviations show that the performance of CSP is comparable to MPP while SLP is slightly inferior. About 90% of the SLP pixels and 99% of the CSP pixels lie in the 99% relative probability envelope of the MPP. Even at an imaging dose of ∼0.1 mGy the proton Bragg peak for a given incoming energy can be predicted on the pCT image reconstructed using SLP, CSP, or MPP with 1 mm accuracy. This study shows that SLP and CSP, like MPP, are adequate path estimates for pCT reconstruction, and therefore can be chosen as the path estimation method for pCT reconstruction, which can aid the treatment planning and range prediction of proton radiation therapy.
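
    A sketch of the two simpler path estimates in a 2D simplification (lateral position versus depth), using SciPy's cubic Hermite spline to match the measured entry/exit positions and directions; all values are illustrative, not GEANT4 output.

      import numpy as np
      from scipy.interpolate import CubicHermiteSpline

      # Entry/exit positions and direction slopes from the pCT tracking planes
      # (illustrative values).
      z_in, z_out = 0.0, 200.0          # mm, phantom entry/exit depth
      y_in, y_out = 0.0, 3.5            # mm, lateral positions
      dy_in, dy_out = 0.02, -0.01       # lateral slopes from tracker directions

      z = np.linspace(z_in, z_out, 201)

      # Straight-line path (SLP): ignores the direction information.
      y_slp = np.interp(z, [z_in, z_out], [y_in, y_out])

      # Cubic spline path (CSP): matches positions and directions at both ends.
      csp = CubicHermiteSpline([z_in, z_out], [y_in, y_out], [dy_in, dy_out])
      y_csp = csp(z)
      print(f"max SLP-CSP separation: {np.abs(y_csp - y_slp).max():.2f} mm")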

  17. Bragg peak prediction from quantitative proton computed tomography using different path estimates

    Energy Technology Data Exchange (ETDEWEB)

    Wang Dongxu; Mackie, T Rockwell; Tome, Wolfgang A, E-mail: tome@humonc.wisc.edu [Department of Medical Physics, University of Wisconsin School of Medicine and Public Health, Madison, WI 53705 (United States)

    2011-02-07

    This paper characterizes the performance of the straight-line path (SLP) and cubic spline path (CSP) as path estimates used in reconstruction of proton computed tomography (pCT). The GEANT4 Monte Carlo simulation toolkit is employed to simulate the imaging phantom and proton projections. SLP, CSP and the most-probable path (MPP) are constructed based on the entrance and exit information of each proton. The physical deviations of SLP, CSP and MPP from the real path are calculated. Using a conditional proton path probability map, the relative probability of SLP, CSP and MPP are calculated and compared. The depth dose and Bragg peak are predicted on the pCT images reconstructed using SLP, CSP, and MPP and compared with the simulation result. The root-mean-square physical deviations and the cumulative distribution of the physical deviations show that the performance of CSP is comparable to MPP while SLP is slightly inferior. About 90% of the SLP pixels and 99% of the CSP pixels lie in the 99% relative probability envelope of the MPP. Even at an imaging dose of ~0.1 mGy the proton Bragg peak for a given incoming energy can be predicted on the pCT image reconstructed using SLP, CSP, or MPP with 1 mm accuracy. This study shows that SLP and CSP, like MPP, are adequate path estimates for pCT reconstruction, and therefore can be chosen as the path estimation method for pCT reconstruction, which can aid the treatment planning and range prediction of proton radiation therapy.

  18. Bragg peak prediction from quantitative proton computed tomography using different path estimates

    Science.gov (United States)

    Wang, Dongxu; Mackie, T Rockwell

    2015-01-01

    This paper characterizes the performance of the straight-line path (SLP) and cubic spline path (CSP) as path estimates used in reconstruction of proton computed tomography (pCT). The GEANT4 Monte Carlo simulation toolkit is employed to simulate the imaging phantom and proton projections. SLP, CSP and the most-probable path (MPP) are constructed based on the entrance and exit information of each proton. The physical deviations of SLP, CSP and MPP from the real path are calculated. Using a conditional proton path probability map, the relative probability of SLP, CSP and MPP are calculated and compared. The depth dose and Bragg peak are predicted on the pCT images reconstructed using SLP, CSP, and MPP and compared with the simulation result. The root-mean-square physical deviations and the cumulative distribution of the physical deviations show that the performance of CSP is comparable to MPP while SLP is slightly inferior. About 90% of the SLP pixels and 99% of the CSP pixels lie in the 99% relative probability envelope of the MPP. Even at an imaging dose of ~0.1 mGy the proton Bragg peak for a given incoming energy can be predicted on the pCT image reconstructed using SLP, CSP, or MPP with 1 mm accuracy. This study shows that SLP and CSP, like MPP, are adequate path estimates for pCT reconstruction, and therefore can be chosen as the path estimation method for pCT reconstruction, which can aid the treatment planning and range prediction of proton radiation therapy. PMID:21212472

  19. Risk Consideration and Cost Estimation in Construction Projects Using Monte Carlo Simulation

    Directory of Open Access Journals (Sweden)

    Claudius A. Peleskei

    2015-06-01

    Full Text Available Construction projects usually involve high investments. It is, therefore, a risky venture for companies, as the actual costs of construction projects nearly always exceed the planned ones. This is due to the various risks and the large uncertainty existing within this industry. Determination and quantification of risks and their impact on project costs is described as one of the most difficult areas within the construction industry. This paper analyses how the cost of construction projects can be estimated using Monte Carlo Simulation. It investigates whether the different cost elements in a construction project follow a specific probability distribution. The research examines the effect of correlation between different project costs on the result of the Monte Carlo Simulation. The paper finds that Monte Carlo Simulation can be a helpful tool for risk managers and can be used for cost estimation of construction projects. The research has shown that cost distributions are positively skewed and cost elements seem to have some interdependent relationships.
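
    A minimal sketch of such a cost Monte Carlo: right-skewed (lognormal) cost elements, a positive correlation between two of them induced through a shared factor, and percentile readouts of the total. The distributions and parameters are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(6)
      n = 20_000
      # Three cost elements with right-skewed (lognormal) distributions; labour is
      # positively correlated with structure through a shared factor.
      structure = rng.lognormal(mean=np.log(5e6), sigma=0.25, size=n)
      labour = 0.6 * structure + rng.lognormal(np.log(2e6), 0.30, n)
      equipment = rng.lognormal(np.log(1e6), 0.40, n)   # independent element

      total = structure + labour + equipment
      print(f"mean total: {total.mean() / 1e6:.1f} M")
      print(f"P50 / P90: {np.percentile(total, 50) / 1e6:.1f} M / "
            f"{np.percentile(total, 90) / 1e6:.1f} M")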

  20. Prediction of RNA secondary structure using generalized centroid estimators.

    Science.gov (United States)

    Hamada, Michiaki; Kiryu, Hisanori; Sato, Kengo; Mituyama, Toutai; Asai, Kiyoshi

    2009-02-15

    Recent studies have shown that methods for predicting secondary structures of RNAs on the basis of posterior decoding of the base-pairing probabilities have an advantage with respect to prediction accuracy over the conventionally utilized minimum free energy methods. However, there is room for improvement in the objective functions presented in previous studies, which are maximized in the posterior decoding with respect to the accuracy measures for secondary structures. We propose novel estimators which improve the accuracy of secondary structure prediction of RNAs. The proposed estimators maximize an objective function which is the weighted sum of the expected number of the true positives and that of the true negatives of the base pairs. The proposed estimators are also improved versions of the ones used in previous works, namely CONTRAfold for secondary structure prediction from a single RNA sequence and McCaskill-MEA for common secondary structure prediction from multiple alignments of RNA sequences. We clarify the relations between the proposed estimators and the estimators presented in previous works, and theoretically show that the previous estimators include additional unnecessary terms in the evaluation measures with respect to the accuracy. Furthermore, computational experiments confirm the theoretical analysis by indicating improvement in the empirical accuracy. The proposed estimators represent extensions of the centroid estimators proposed in Ding et al. and Carvalho and Lawrence, and are applicable to a wide variety of problems in bioinformatics. Supporting information and the CentroidFold software are available online at: http://www.ncrna.org/software/centroidfold/.
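
    A minimal sketch of a γ-centroid-style decision rule of the kind this family of estimators uses: a base pair (i, j) contributes (γ+1)p_ij − 1 to the objective and is therefore only worth forming when p_ij > 1/(γ+1). The pairing-probability matrix P, which would normally come from McCaskill's algorithm, is a toy input here, and the traceback is omitted.

      import numpy as np

      def gamma_centroid_score(P, gamma=2.0, min_loop=3):
          # Nussinov-style DP maximizing sum over chosen pairs of (gamma+1)*P[i,j] - 1.
          n = P.shape[0]
          M = np.zeros((n, n))                       # M[i,j]: best score for subsequence i..j
          for span in range(1, n):
              for i in range(0, n - span):
                  j = i + span
                  best = max(M[i + 1, j], M[i, j - 1])        # i or j unpaired
                  if j - i > min_loop:                        # pair (i, j) if it gains score
                      best = max(best, M[i + 1, j - 1] + (gamma + 1.0) * P[i, j] - 1.0)
                  for k in range(i + 1, j):                   # bifurcation
                      best = max(best, M[i, k] + M[k + 1, j])
                  M[i, j] = best
          return M[0, n - 1]

      # Toy 8-nt example with one strong stem (a real P comes from McCaskill's algorithm).
      P = np.zeros((8, 8))
      P[0, 7], P[1, 6], P[2, 5] = 0.9, 0.8, 0.3
      print("expected-accuracy gain:", round(gamma_centroid_score(P, gamma=2.0), 2))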

  1. Predictive value and construct validity of the work functioning screener-healthcare (WFS-H)

    Science.gov (United States)

    Boezeman, Edwin J.; Nieuwenhuijsen, Karen; Sluiter, Judith K.

    2016-01-01

    Objectives: To test the predictive value and convergent construct validity of a 6-item work functioning screener (WFS-H). Methods: Healthcare workers (249 nurses) completed a questionnaire containing the work functioning screener (WFS-H) and a work functioning instrument (NWFQ) measuring the following: cognitive aspects of task execution and general incidents, avoidance behavior, conflicts and irritation with colleagues, impaired contact with patients and their family, and level of energy and motivation. Productivity and mental health were also measured. Negative and positive predictive values, AUC values, and sensitivity and specificity were calculated to examine the predictive value of the screener. Correlation analysis was used to examine the construct validity. Results: The screener had good predictive value, since the results showed that a negative screener score is a strong indicator of work functioning not hindered by mental health problems (negative predictive values: 94%-98%; positive predictive values: 21%-36%; AUC: 0.64-0.82; sensitivity: 42%-76%; and specificity: 85%-87%). The screener has good construct validity due to moderate, but significant (p < …) correlations. … The screener has good predictive value and good construct validity. Its score offers occupational health professionals a helpful preliminary insight into the work functioning of healthcare workers. PMID:27010085

  2. Predictive value and construct validity of the work functioning screener-healthcare (WFS-H).

    Science.gov (United States)

    Boezeman, Edwin J; Nieuwenhuijsen, Karen; Sluiter, Judith K

    2016-05-25

    To test the predictive value and convergent construct validity of a 6-item work functioning screener (WFS-H). Healthcare workers (249 nurses) completed a questionnaire containing the work functioning screener (WFS-H) and a work functioning instrument (NWFQ) measuring the following: cognitive aspects of task execution and general incidents, avoidance behavior, conflicts and irritation with colleagues, impaired contact with patients and their family, and level of energy and motivation. Productivity and mental health were also measured. Negative and positive predictive values, AUC values, and sensitivity and specificity were calculated to examine the predictive value of the screener. Correlation analysis was used to examine the construct validity. The screener had good predictive value, since the results showed that a negative screener score is a strong indicator of work functioning not hindered by mental health problems (negative predictive values: 94%-98%; positive predictive values: 21%-36%; AUC: 0.64-0.82; sensitivity: 42%-76%; and specificity: 85%-87%). The screener has good construct validity due to moderate, but significant (p < …) correlations. … The screener has good predictive value and good construct validity. Its score offers occupational health professionals a helpful preliminary insight into the work functioning of healthcare workers.
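
    The screener's reported metrics all derive from a 2×2 table of screener outcome versus actual work-functioning status; a minimal sketch with illustrative counts (chosen to fall inside the reported ranges, not the study's raw data).

      def screener_metrics(tp, fp, fn, tn):
          """Standard 2x2 diagnostic metrics used above."""
          return {
              "sensitivity": tp / (tp + fn),
              "specificity": tn / (tn + fp),
              "PPV": tp / (tp + fp),    # positive predictive value
              "NPV": tn / (tn + fn),    # negative predictive value
          }

      # e.g. 20 impaired workers of whom 14 screen positive, and 229 others
      # of whom 35 screen positive (illustrative counts):
      print(screener_metrics(tp=14, fp=35, fn=6, tn=194))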

  3. Estimating diesel fuel consumption and carbon dioxide emissions from forest road construction

    Science.gov (United States)

    Dan Loeffler; Greg Jones; Nikolaus Vonessen; Sean Healey; Woodam Chung

    2009-01-01

    Forest access road construction is a necessary component of many on-the-ground forest vegetation treatment projects. However, the fuel energy requirements and associated carbon dioxide emissions from forest road construction are unknown. We present a method for estimating diesel fuel consumed and related carbon dioxide emissions from constructing forest roads using...

  4. Resource-estimation models and predicted discovery

    International Nuclear Information System (INIS)

    Hill, G.W.

    1982-01-01

    Resources have been estimated by predictive extrapolation from past discovery experience, by analogy with better explored regions, or by inference from evidence of depletion of targets for exploration. Changes in technology and new insights into geological mechanisms have occurred sufficiently often in the long run to form part of the pattern of mature discovery experience. The criterion that a meaningful resource estimate needs an objective measure of its precision or degree of uncertainty excludes 'estimates' based solely on expert opinion. This is illustrated by the development of error measures for several persuasive models of discovery and production of oil and gas in the USA, both annually and in terms of increasing exploration effort. Appropriate generalizations of the models resolve many points of controversy. This is illustrated using two USA data sets describing the discovery of oil and of U3O8; the latter set highlights an inadequacy of available official data. (author)

  5. Multinomial Logistic Regression & Bootstrapping for Bayesian Estimation of Vertical Facies Prediction in Heterogeneous Sandstone Reservoirs

    Science.gov (United States)

    Al-Mudhafar, W. J.

    2013-12-01

    Precise prediction of rock facies leads to adequate reservoir characterization by improving the porosity-permeability relationships needed to estimate properties in non-cored intervals. It also helps to accurately identify the spatial facies distribution in order to build an accurate reservoir model for optimal future reservoir performance. In this paper, facies estimation has been done through multinomial logistic regression (MLR) with respect to the well logs and core data in a well in the upper sandstone formation of the South Rumaila oil field. The independent variables are gamma ray, formation density, water saturation, shale volume, log porosity, core porosity, and core permeability. First, a robust sequential imputation algorithm was used to impute the missing data. This algorithm starts from a complete subset of the dataset and sequentially estimates the missing values in an incomplete observation by minimizing the determinant of the covariance of the augmented data matrix; the observation is then added to the complete data matrix and the algorithm continues with the next observation with missing values. The MLR was chosen to estimate the maximum likelihood and minimize the standard error for the nonlinear relationships between facies and the core and log data. The MLR predicts the probabilities of the different possible facies given each independent variable by constructing a linear predictor function with a set of weights that are linearly combined with the independent variables using a dot product. A beta distribution of facies was considered as prior knowledge, and the predicted probability (posterior) was estimated from the MLR based on Bayes' theorem, which relates the predicted probability (posterior) to the conditional probability and the prior knowledge. To assess the statistical accuracy of the model, the bootstrap should be carried out to estimate extra-sample prediction error by randomly
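
    The core of the workflow, a multinomial logit that turns well-log responses into posterior facies probabilities, can be sketched in a few lines. This is a generic illustration with purely synthetic stand-in data, not the author's dataset or code; note that the multi_class argument is accepted by scikit-learn's LogisticRegression, though it is deprecated in recent releases where multinomial handling is automatic.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Hypothetical training set: five standardized log responses (e.g.,
    # gamma ray, density, porosity, ...) for cored intervals, each labeled
    # with one of three facies codes.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(300, 5))
    y = rng.integers(0, 3, size=300)

    # Multinomial logit: one linear predictor per facies, softmax posterior.
    mlr = LogisticRegression(multi_class="multinomial", max_iter=1000)
    mlr.fit(X, y)

    posterior = mlr.predict_proba(X[:3])   # P(facies | logs) per interval
    print(posterior.round(3))
    ```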

  6. Uncertainty Estimation using Bootstrapped Kriging Predictions for Precipitation Isoscapes

    Science.gov (United States)

    Ma, C.; Bowen, G. J.; Vander Zanden, H.; Wunder, M.

    2017-12-01

    Isoscapes are spatial models representing the distribution of stable isotope values across landscapes. Isoscapes of hydrogen and oxygen in precipitation are now widely used in a diversity of fields, including geology, biology, hydrology, and atmospheric science. To generate isoscapes, geostatistical methods are typically applied to extend predictions from limited data measurements. Kriging is a popular method in isoscape modeling, but quantifying the uncertainty associated with the resulting isoscapes is challenging. Applications that use precipitation isoscapes to determine sample origin require estimation of uncertainty. Here we present a simple bootstrap method (SBM) to estimate the mean and uncertainty of the kriged isoscape and compare these results with a generalized bootstrap method (GBM) applied in previous studies. We used hydrogen isotopic data from IsoMAP to explore these two approaches for estimating uncertainty. We conducted 10 simulations for each bootstrap method and found that SBM yielded successful kriging predictions in more simulations (9/10) than GBM (4/10). Predictions from SBM were closer to the original prediction generated without bootstrapping and had less variance than those from GBM. SBM was also tested on IsoMAP datasets with different numbers of observation sites; predictions from datasets with fewer than 40 observation sites were more variable than the original prediction. The approaches we used for estimating uncertainty will be compiled in an R package that is under development. We expect that these robust estimates of precipitation isoscape uncertainty can be applied in diagnosing the origin of samples ranging from various types of water to migratory animals, food products, and humans.
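
    The simple bootstrap idea is to resample observation sites with replacement, refit the spatial model each time, and summarize the spread of the resulting surfaces. A toy sketch, with a Gaussian-process regressor standing in for kriging and synthetic sites and values (not IsoMAP data); the jitter term guards against singular kernel matrices caused by duplicated bootstrap sites.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    rng = np.random.default_rng(0)
    sites = rng.uniform(0, 10, size=(50, 2))             # lon/lat-like coords
    d2h = np.sin(sites[:, 0]) + 0.1 * rng.normal(size=50)
    grid = rng.uniform(0, 10, size=(200, 2))             # prediction locations

    preds = []
    for _ in range(100):
        idx = rng.integers(0, len(sites), size=len(sites))  # resample sites
        gp = GaussianProcessRegressor(kernel=RBF(length_scale=2.0), alpha=1e-6)
        gp.fit(sites[idx], d2h[idx])
        preds.append(gp.predict(grid))

    preds = np.asarray(preds)
    isoscape_mean = preds.mean(axis=0)    # bootstrap mean surface
    isoscape_sd = preds.std(axis=0)       # per-location uncertainty
    ```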

  7. Budget estimates: Fiscal year 1994. Volume 2: Construction of facilities

    Science.gov (United States)

    1994-01-01

    The Construction of Facilities (CoF) appropriation provides contractual services for the repair, rehabilitation, and modification of existing facilities; the construction of new facilities and the acquisition of related collateral equipment; the acquisition or condemnation of real property; environmental compliance and restoration activities; the design of facilities projects; and advanced planning related to future facilities needs. Fiscal year 1994 budget estimates are broken down according to facility location of project and by purpose.

  8. Construction of ontology augmented networks for protein complex prediction.

    Science.gov (United States)

    Zhang, Yijia; Lin, Hongfei; Yang, Zhihao; Wang, Jian

    2013-01-01

    Protein complexes are of great importance in understanding the principles of cellular organization and function. The increase in available protein-protein interaction data, gene ontology and other resources makes it possible to develop computational methods for protein complex prediction. Most existing methods focus mainly on the topological structure of protein-protein interaction networks, and largely ignore the gene ontology annotation information. In this article, we constructed ontology augmented networks with protein-protein interaction data and gene ontology, which combine the topological structure of protein-protein interaction networks and the similarity of gene ontology annotations into unified distance measures. After constructing ontology augmented networks, a novel method (clustering based on ontology augmented networks) was proposed to predict protein complexes; it takes into account the topological structure of the protein-protein interaction network as well as the similarity of gene ontology annotations. Our method was applied to two different yeast protein-protein interaction datasets and predicted many well-known complexes. The experimental results showed that (i) ontology augmented networks and the unified distance measure can effectively combine structural closeness and gene ontology annotation similarity; (ii) our method is valuable in predicting protein complexes and has higher F1 and accuracy compared with other competing methods.

  9. Kernel density estimation-based real-time prediction for respiratory motion

    International Nuclear Information System (INIS)

    Ruan, Dan

    2010-01-01

    Effective delivery of adaptive radiotherapy requires locating the target with high precision in real time. System latency caused by data acquisition, streaming, processing and delivery control necessitates prediction. Prediction is particularly challenging for highly mobile targets such as thoracic and abdominal tumors undergoing respiration-induced motion. The complexity of respiratory motion makes it difficult to build and justify explicit models. In this study, we honor the intrinsic uncertainties in respiratory motion and propose a statistical treatment of the prediction problem. Instead of asking for a deterministic covariate-response map and a unique estimate of the future target position, we aim to obtain a distribution of the future target position (response variable) conditioned on the observed historical sample values (covariate variable). The key idea is to estimate the joint probability density function (pdf) of the covariate and response variables using an efficient kernel density estimation method. The problem of identifying the distribution of the future target position then reduces to identifying the section of the joint pdf corresponding to the observed covariate, and estimators are derived from this estimated conditional distribution. This probabilistic perspective has some distinctive advantages over existing deterministic schemes: (1) it is compatible with potentially inconsistent training samples, i.e., when close covariate variables correspond to dramatically different response values; (2) it is not restricted by any prior structural assumption on the map between the covariate and the response; (3) the two-stage setup allows much freedom in choosing statistical estimates and provides a full nonparametric description of the uncertainty for the resulting estimate. We evaluated the prediction performance on ten patient RPM traces, using the root mean squared difference between the prediction and the observed value normalized by the
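
    One concrete consequence of the kernel-density view: with a product Gaussian kernel on (covariate, response), the conditional mean of the response given the covariate collapses to a Nadaraya-Watson weighted average of observed responses. A minimal sketch on a synthetic breathing-like trace (all names and values hypothetical, not the paper's estimator or data):

    ```python
    import numpy as np

    def conditional_mean(X_hist, y_next, x_query, h=0.1):
        """Conditional mean of the response given the covariate under a
        product Gaussian kernel density estimate of the joint pdf."""
        d2 = np.sum((X_hist - x_query) ** 2, axis=1)
        w = np.exp(-0.5 * d2 / h**2)          # kernel weights on covariates
        return float(np.sum(w * y_next) / np.sum(w))

    # Toy trace; covariate = 3 preceding samples, response = next sample.
    t = np.arange(500) * 0.1
    trace = np.sin(t) + 0.05 * np.random.default_rng(0).normal(size=500)
    X = np.stack([trace[i:i + 3] for i in range(len(trace) - 3)])
    y = trace[3:]
    print(conditional_mean(X[:-1], y[:-1], X[-1], h=0.2), y[-1])
    ```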

  10. Re-constructing historical Adélie penguin abundance estimates by retrospectively accounting for detection bias.

    Science.gov (United States)

    Southwell, Colin; Emmerson, Louise; Newbery, Kym; McKinlay, John; Kerry, Knowles; Woehler, Eric; Ensor, Paul

    2015-01-01

    Seabirds and other land-breeding marine predators are considered to be useful and practical indicators of the state of marine ecosystems because of their dependence on marine prey and the accessibility of their populations at breeding colonies. Historical counts of breeding populations of these higher-order marine predators are one of few data sources available for inferring past change in marine ecosystems. However, historical abundance estimates derived from these population counts may be subject to unrecognised bias and uncertainty because of variable attendance of birds at breeding colonies and variable timing of past population surveys. We retrospectively accounted for detection bias in historical abundance estimates of the colonial, land-breeding Adélie penguin through an analysis of 222 historical abundance estimates from 81 breeding sites in east Antarctica. The published abundance estimates were de-constructed to retrieve the raw count data and then re-constructed by applying contemporary adjustment factors obtained from remotely operating time-lapse cameras. The re-construction process incorporated spatial and temporal variation in phenology and attendance by using data from cameras deployed at multiple sites over multiple years and propagating this uncertainty through to the final revised abundance estimates. Our re-constructed abundance estimates were consistently higher and more uncertain than published estimates. The re-constructed estimates alter the conclusions reached for some sites in east Antarctica in recent assessments of long-term Adélie penguin population change. Our approach is applicable to abundance data for a wide range of colonial, land-breeding marine species including other penguin species, flying seabirds and marine mammals.

  11. Re-constructing historical Adélie penguin abundance estimates by retrospectively accounting for detection bias.

    Directory of Open Access Journals (Sweden)

    Colin Southwell

    Full Text Available Seabirds and other land-breeding marine predators are considered to be useful and practical indicators of the state of marine ecosystems because of their dependence on marine prey and the accessibility of their populations at breeding colonies. Historical counts of breeding populations of these higher-order marine predators are one of few data sources available for inferring past change in marine ecosystems. However, historical abundance estimates derived from these population counts may be subject to unrecognised bias and uncertainty because of variable attendance of birds at breeding colonies and variable timing of past population surveys. We retrospectively accounted for detection bias in historical abundance estimates of the colonial, land-breeding Adélie penguin through an analysis of 222 historical abundance estimates from 81 breeding sites in east Antarctica. The published abundance estimates were de-constructed to retrieve the raw count data and then re-constructed by applying contemporary adjustment factors obtained from remotely operating time-lapse cameras. The re-construction process incorporated spatial and temporal variation in phenology and attendance by using data from cameras deployed at multiple sites over multiple years and propagating this uncertainty through to the final revised abundance estimates. Our re-constructed abundance estimates were consistently higher and more uncertain than published estimates. The re-constructed estimates alter the conclusions reached for some sites in east Antarctica in recent assessments of long-term Adélie penguin population change. Our approach is applicable to abundance data for a wide range of colonial, land-breeding marine species including other penguin species, flying seabirds and marine mammals.

  12. Predictive Power Estimation Algorithm (PPEA--a new algorithm to reduce overfitting for genomic biomarker discovery.

    Directory of Open Access Journals (Sweden)

    Jiangang Liu

    Full Text Available Toxicogenomics promises to aid in predicting adverse effects, understanding the mechanisms of drug action or toxicity, and uncovering unexpected or secondary pharmacology. However, modeling adverse effects using high-dimensional and high-noise genomic data is prone to over-fitting. Models constructed from such data sets often consist of a large number of genes with no obvious functional relevance to the biological effect the model intends to predict, which can make it challenging to interpret the modeling results. To address these issues, we developed a novel algorithm, the Predictive Power Estimation Algorithm (PPEA), which estimates the predictive power of each individual transcript through an iterative two-way bootstrapping procedure. By repeatedly enforcing that the sample number is larger than the transcript number in each iteration of modeling and testing, PPEA reduces the potential risk of overfitting. We show with three different case studies that: (1) PPEA can quickly derive a reliable rank order of the predictive power of individual transcripts in a relatively small number of iterations, (2) the top-ranked transcripts tend to be functionally related to the phenotype they are intended to predict, (3) using only the most predictive top-ranked transcripts greatly facilitates development of multiplex assays such as qRT-PCR as biomarkers, and (4) more importantly, we were able to demonstrate that a small number of genes identified from the top-ranked transcripts are highly predictive of phenotype, as their expression changes distinguished adverse from non-adverse effects of compounds in completely independent tests. Thus, we believe that the PPEA model effectively addresses the over-fitting problem and can be used to facilitate genomic biomarker discovery for predictive toxicology and drug responses.
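
    The two-way bootstrapping idea (always fewer transcripts than samples per iteration, credit each transcript with out-of-bag performance) can be illustrated compactly. This is a sketch of the idea only, not the published PPEA procedure; data, sizes and the inner classifier are all hypothetical choices.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def ppea_like_ranking(X, y, n_iter=300, n_feat=5, seed=0):
        """Each iteration draws a small random transcript subset and a
        bootstrap sample of observations, fits a simple classifier, and
        credits the drawn transcripts with the out-of-bag accuracy."""
        rng = np.random.default_rng(seed)
        n, p = X.shape
        credit, drawn = np.zeros(p), np.zeros(p)
        for _ in range(n_iter):
            feats = rng.choice(p, size=n_feat, replace=False)
            boot = rng.integers(0, n, size=n)
            oob = np.setdiff1d(np.arange(n), boot)
            clf = LogisticRegression(max_iter=1000)
            clf.fit(X[boot][:, feats], y[boot])
            credit[feats] += clf.score(X[oob][:, feats], y[oob])
            drawn[feats] += 1
        return credit / np.maximum(drawn, 1)   # mean OOB accuracy per transcript

    # Toy data: 60 samples, 40 "transcripts", only the first two informative.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(60, 40))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)
    print(np.argsort(ppea_like_ranking(X, y))[::-1][:5])   # top-ranked indices
    ```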

  13. Evaluation of approaches for estimating the accuracy of genomic prediction in plant breeding.

    Science.gov (United States)

    Ould Estaghvirou, Sidi Boubacar; Ogutu, Joseph O; Schulz-Streeck, Torben; Knaak, Carsten; Ouzunova, Milena; Gordillo, Andres; Piepho, Hans-Peter

    2013-12-06

    In genomic prediction, an important measure of accuracy is the correlation between the predicted and the true breeding values. Direct computation of this quantity for real datasets is not possible, because the true breeding value is unknown. Instead, the correlation between the predicted breeding values and the observed phenotypic values, called predictive ability, is often computed. In order to indirectly estimate predictive accuracy, this latter correlation is usually divided by an estimate of the square root of heritability. In this study we use simulation to evaluate estimates of predictive accuracy for seven methods, four of which (1 to 4) use an estimate of heritability to divide predictive ability computed by cross-validation. Between them the seven methods cover balanced and unbalanced datasets as well as correlated and uncorrelated genotypes. We propose one new indirect method (4) and two direct methods (5 and 6) for estimating predictive accuracy and compare their performances and those of four other existing approaches (three indirect (1 to 3) and one direct (7)) with simulated true predictive accuracy as the benchmark and with each other. The size of the estimated genetic variance and hence heritability exerted the strongest influence on the variation in the estimated predictive accuracy. Increasing the number of genotypes considerably increases the time required to compute predictive accuracy by all seven methods, most notably for the five methods that require cross-validation (Methods 1, 2, 3, 4 and 6). A new method that we propose (Method 5) and an existing method (Method 7) used in animal breeding programs were the fastest and gave the least biased, most precise and stable estimates of predictive accuracy. Of the methods that use cross-validation, Methods 4 and 6 were often the best. The estimated genetic variance and the number of genotypes had the greatest influence on predictive accuracy. Methods 5 and 7 were the fastest and produced the least biased estimates.
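
    The basic indirect estimator described above is a one-liner: predictive ability divided by the square root of heritability. A minimal numeric sketch with simulated values (not the study's data or its specific Methods 1 to 4):

    ```python
    import numpy as np

    def predictive_accuracy(pred_bv, phenotype, h2):
        """Indirect estimate: predictive ability (correlation of predicted
        breeding values with phenotypes) divided by sqrt(heritability)."""
        ability = np.corrcoef(pred_bv, phenotype)[0, 1]
        return ability / np.sqrt(h2)

    # Toy check: phenotype = genetic value + noise of equal variance (h2 = 0.5).
    rng = np.random.default_rng(2)
    g = rng.normal(size=500)                   # "predicted" breeding values
    pheno = g + rng.normal(size=500)           # phenotypes
    print(predictive_accuracy(g, pheno, 0.5))  # close to 1: g is the truth here
    ```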

  14. Optimal design criteria - prediction vs. parameter estimation

    Science.gov (United States)

    Waldl, Helmut

    2014-05-01

    G-optimality is a popular design criterion for optimal prediction: a G-optimal design minimizes the maximum kriging variance, and hence the maximum variance of all predicted values, over the whole design region. If kriging is used for prediction, it is natural to use the kriging variance as a measure of uncertainty of the estimates. However, computing the kriging variance, and even more so the empirical kriging variance, is computationally costly, and locating the maximum kriging variance in a high-dimensional region is so time-demanding that the G-optimal design can rarely be found in practice with currently available computing equipment. This problem cannot always be avoided by using space-filling designs, because small designs that minimize the empirical kriging variance are often non-space-filling. D-optimality is the design criterion related to parameter estimation: a D-optimal design maximizes the determinant of the information matrix of the estimates. D-optimality in terms of trend parameter estimation and D-optimality in terms of covariance parameter estimation yield fundamentally different designs. The Pareto frontier of these two competing determinant criteria corresponds to designs that perform well under both criteria. Under certain conditions, searching for the G-optimal design on this Pareto frontier yields almost as good results as searching the whole design region, while the maximum of the empirical kriging variance has to be computed only a few times. The method is demonstrated by means of a computer simulation experiment based on data provided by the Belgian institute Management Unit of the North Sea Mathematical Models (MUMM) that describe the evolution of inorganic and organic carbon and nutrients, phytoplankton, bacteria and zooplankton in the Southern Bight of the North Sea.

  15. Estimating additive and non-additive genetic variances and predicting genetic merits using genome-wide dense single nucleotide polymorphism markers.

    Directory of Open Access Journals (Sweden)

    Guosheng Su

    Full Text Available Non-additive genetic variation is usually ignored when genome-wide markers are used to study the genetic architecture and genomic prediction of complex traits in humans, wildlife, model organisms or farm animals. However, non-additive genetic effects may make an important contribution to the total genetic variation of complex traits. This study presents a genomic BLUP model including additive and non-additive genetic effects, in which additive and non-additive genetic relationship matrices were constructed from genome-wide dense single nucleotide polymorphism (SNP) markers. In addition, this study proposes for the first time a method to construct a dominance relationship matrix using SNP markers and demonstrates it in detail. The proposed model was implemented to investigate the amounts of additive genetic, dominance and epistatic variation, and to assess the accuracy and unbiasedness of genomic predictions for daily gain in pigs. In the analysis of daily gain, four linear models were used: (1) a simple additive genetic model (MA), (2) a model including both additive and additive-by-additive epistatic genetic effects (MAE), (3) a model including both additive and dominance genetic effects (MAD), and (4) a full model including all three genetic components (MAED). Estimates of narrow-sense heritability were 0.397, 0.373, 0.379 and 0.357 for models MA, MAE, MAD and MAED, respectively. Estimated dominance variance and additive-by-additive epistatic variance accounted for 5.6% and 9.5% of the total phenotypic variance, respectively. Based on model MAED, the estimate of broad-sense heritability was 0.506. Reliabilities of genomic predicted breeding values for the animals without performance records were 28.5%, 28.8%, 29.2% and 29.5% for models MA, MAE, MAD and MAED, respectively. In addition, models including non-additive genetic effects improved the unbiasedness of genomic predictions.
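
    For orientation, here is how SNP-based additive and dominance relationship matrices are commonly built. The additive part follows the familiar VanRaden construction; the dominance coding below uses the (-2q^2, 2pq, -2p^2) scheme described by Vitezica et al. The paper above derives its own dominance construction, so treat this as a generic sketch rather than its exact method.

    ```python
    import numpy as np

    def genomic_relationship_matrices(M):
        """Additive (G) and dominance (D) relationship matrices from an
        individuals x SNPs genotype matrix coded 0/1/2 (allele counts)."""
        p = M.mean(axis=0) / 2.0                 # allele frequencies
        q = 1.0 - p
        Z = M - 2.0 * p                          # centered additive codes
        G = Z @ Z.T / (2.0 * np.sum(p * q))
        W = np.where(M == 2, -2.0 * q**2,        # dominance codes per genotype
            np.where(M == 1, 2.0 * p * q, -2.0 * p**2))
        D = W @ W.T / np.sum((2.0 * p * q) ** 2)
        return G, D

    M = np.random.default_rng(3).integers(0, 3, size=(10, 200))  # toy genotypes
    G, D = genomic_relationship_matrices(M)
    ```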

  16. Unit Price and Cost Estimation Equations through Items Percentage of Construction Works in a Desert Area

    Directory of Open Access Journals (Sweden)

    Kadhim Raheem

    2015-02-01

    Full Text Available This research covers different aspects of the cost estimating process for construction work in a desert area. The inherent difficulties that accompany cost estimating of construction works in a desert environment in a developing country stem from the limited information available, resource scarcity, the low level of skilled workers, the prevailing severe weather conditions and many other factors, which together prevent a fair, reliable and accurate estimate. This study presents unit prices for estimating costs in the preliminary phase of a project. The estimates are supported by mathematical equations developed from historical data on maintenance and on new construction of managerial and school projects. The research has also determined the percentage shares of project items in such a remote environment. Estimation equations suitable for remote areas have been formulated, and a procedure for unit price calculation is presented.

  17. Cost estimation using ministerial regulation of public work no. 11/2013 in construction projects

    Science.gov (United States)

    Arumsari, Putri; Juliastuti; Khalifah Al'farisi, Muhammad

    2017-12-01

    One of the first tasks in starting a construction project is to estimate its total cost. In Indonesia there are several standards used to calculate the cost estimate of a project, one of which is based on the Ministerial Regulation of Public Work No. 11/2013. In practice, however, contractors often have their own cost estimates based on their own calculations. This research compares the total construction project cost calculated according to the Ministerial Regulation of Public Work No. 11/2013 against the contractors' calculations. Two projects were used as case studies: a 4-storey building located in the Pantai Indah Kapuk area (West Jakarta) and a warehouse located in Sentul (West Java), built by two different contractors. The cost estimates from both contractors' calculations were compared with the one based on the Ministerial Regulation of Public Work No. 11/2013. The two calculations differed by about 1.80%-3.03% in total cost, with the estimate based on the Ministerial Regulation being higher than the contractors' calculations.

  18. Parameter estimation techniques and uncertainty in ground water flow model predictions

    International Nuclear Information System (INIS)

    Zimmerman, D.A.; Davis, P.A.

    1990-01-01

    Quantification of uncertainty in predictions of nuclear waste repository performance is a requirement of Nuclear Regulatory Commission regulations governing the licensing of proposed geologic repositories for high-level radioactive waste disposal. One of the major uncertainties in these predictions is in estimating the ground-water travel time of radionuclides migrating from the repository to the accessible environment. The cause of much of this uncertainty has been attributed to a lack of knowledge about the hydrogeologic properties that control the movement of radionuclides through the aquifers. A major reason for this lack of knowledge is the paucity of data that is typically available for characterizing complex ground-water flow systems. Because of this, considerable effort has been put into developing parameter estimation techniques that infer property values in regions where no measurements exist. Currently, no single technique has been shown to be superior or even consistently conservative with respect to predictions of ground-water travel time. This work was undertaken to compare a number of parameter estimation techniques and to evaluate how differences in the parameter estimates and the estimation errors are reflected in the behavior of the flow model predictions. That is, we wished to determine to what degree uncertainties in flow model predictions may be affected simply by the choice of parameter estimation technique used. 3 refs., 2 figs

  19. Power system dynamic state estimation using prediction based evolutionary technique

    International Nuclear Information System (INIS)

    Basetti, Vedik; Chandel, Ashwani K.; Chandel, Rajeevan

    2016-01-01

    In this paper, a new robust LWS (least winsorized square) estimator is proposed for dynamic state estimation of a power system. One of the main advantages of this estimator is that it has an inbuilt bad data rejection property and is less sensitive to bad data measurements. In the proposed approach, Brown's double exponential smoothing technique has been utilised for its reliable performance at the prediction step. The state estimation problem is solved as an optimisation problem using a new jDE-self adaptive differential evolution with prediction based population re-initialisation technique at the filtering step. This new stochastic search technique has been embedded with different state scenarios using the predicted state. The effectiveness of the proposed LWS technique is validated under different conditions, namely normal operation, bad data, sudden load change, and loss of transmission line conditions on three different IEEE test bus systems. The performance of the proposed approach is compared with the conventional extended Kalman filter. On the basis of various performance indices, the results thus obtained show that the proposed technique increases the accuracy and robustness of power system dynamic state estimation performance. - Highlights: • To estimate the states of the power system under dynamic environment. • The performance of the EKF method is degraded during anomaly conditions. • The proposed method remains robust towards anomalies. • The proposed method provides precise state estimates even in the presence of anomalies. • The results show that prediction accuracy is enhanced by using the proposed model.
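
    Brown's double exponential smoothing, used above at the prediction step, is compact enough to sketch. A minimal, generic implementation (not tied to the paper's code): two coupled exponential smoothers yield a local level and trend, from which an m-step-ahead prediction is formed.

    ```python
    def browns_forecast(x, alpha=0.3, horizon=1):
        """Brown's double exponential smoothing forecast of x at
        `horizon` steps ahead, with smoothing constant `alpha`."""
        s1 = s2 = x[0]
        for v in x[1:]:
            s1 = alpha * v + (1.0 - alpha) * s1    # first smoother
            s2 = alpha * s1 + (1.0 - alpha) * s2   # smoother of the smoother
        level = 2.0 * s1 - s2
        trend = alpha / (1.0 - alpha) * (s1 - s2)
        return level + horizon * trend

    print(browns_forecast([1.0, 1.2, 1.5, 1.9, 2.4]))  # extrapolates the rise
    ```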

  20. Estimation of construction and demolition waste volume generation in new residential buildings in Spain.

    Science.gov (United States)

    Villoria Sáez, Paola; del Río Merino, Mercedes; Porras-Amores, César

    2012-02-01

    The management planning of construction and demolition (C&D) waste uses a single indicator, which does not provide enough detailed information; other, more precise indicators should therefore be determined and implemented. The aim of this research work is to improve existing C&D waste quantification tools for the construction of new residential buildings in Spain. For this purpose, several housing projects were studied to estimate the C&D waste generated during their construction process. This paper determines the values of three indicators to estimate the generation of C&D waste in new residential buildings in Spain, itemizing types of waste and construction stages. The inclusion of two more accurate indicators, in addition to the global one commonly in use, provides a significant improvement in C&D waste quantification tools and management planning.

  1. Adaptive Disturbance Estimation for Offset-Free SISO Model Predictive Control

    DEFF Research Database (Denmark)

    Huusom, Jakob Kjøbsted; Poulsen, Niels Kjølstad; Jørgensen, Sten Bay

    2011-01-01

    Offset free tracking in Model Predictive Control requires estimation of unmeasured disturbances or the inclusion of an integrator. An algorithm for estimation of an unknown disturbance based on adaptive estimation with time varying forgetting is introduced and benchmarked against the classical...

  2. Tsunami Prediction and Earthquake Parameters Estimation in the Red Sea

    KAUST Repository

    Sawlan, Zaid A

    2012-12-01

    Tsunami concerns have increased in the world after the 2004 Indian Ocean tsunami and the 2011 Tohoku tsunami. Consequently, tsunami models have been developed rapidly in the last few years. One of the advanced tsunami models is the GeoClaw tsunami model introduced by LeVeque (2011), which is adaptive and consistent. Because of different sources of uncertainty in the model, observations are needed to improve model prediction through a data assimilation framework. Model inputs are earthquake parameters and topography. This thesis introduces a real-time tsunami forecasting method that combines a tsunami model with observations using a hybrid ensemble Kalman filter and ensemble Kalman smoother. The filter is used for state prediction, while the smoother estimates the earthquake parameters. This method reduces the error produced by uncertain inputs. In addition, a state-parameter EnKF is implemented to estimate the earthquake parameters. Although the number of observations is small, the estimated parameters generate a better tsunami prediction than the model alone. Methods and results of prediction experiments in the Red Sea are presented, and the prospect of developing an operational tsunami prediction system in the Red Sea is discussed.

  3. Wind gust estimation by combining numerical weather prediction model and statistical post-processing

    Science.gov (United States)

    Patlakas, Platon; Drakaki, Eleni; Galanis, George; Spyrou, Christos; Kallos, George

    2017-04-01

    The continuous rise of off-shore and near-shore activities, as well as the development of structures such as wind farms and various offshore platforms, requires the employment of state-of-the-art risk assessment techniques. Such analysis is used to set safety standards and can be characterized as a climatologically oriented approach. Nevertheless, reliable operational support is also needed in order to minimize cost drawbacks and human danger during the construction and operation stages as well as during maintenance activities. One of the most important parameters for this kind of analysis is wind speed intensity and variability. A critical measure associated with this variability is the presence and magnitude of wind gusts, estimated at the reference height of 10 m. Gusts can be attributed to different processes ranging from boundary-layer turbulence and convective activity to mountain waves and wake phenomena. The purpose of this work is the development of a wind gust forecasting methodology combining a numerical weather prediction model and a dynamical statistical tool based on Kalman filtering. To this end, the Wind Gust Estimate parameterization was implemented within the framework of the atmospheric model SKIRON/Dust. The new modeling tool combines the atmospheric model with a statistical local adaptation methodology based on Kalman filters and has been tested over the offshore west coastline of the United States. The main purpose is to provide a useful tool for wind analysis and prediction and for applications related to offshore wind energy (power prediction, operation and maintenance). The results have been evaluated against observational data from NOAA's buoy network; the predictions behave well and are further improved after the local adjustment post-process.
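
    The Kalman-filter post-processing step can be illustrated in its simplest scalar form: track the slowly varying systematic error of the raw NWP gust forecast as a random-walk state and subtract the current estimate from each new forecast. A sketch in the spirit of the paper's local adaptation, not its actual implementation; the noise variances q and r are assumed values.

    ```python
    import numpy as np

    def kf_bias_correct(forecasts, observations, q=1e-4, r=1e-2):
        """Scalar Kalman filter on the forecast bias (random-walk state):
        each raw forecast is corrected by the current bias estimate, then
        the state is updated once the matching observation arrives."""
        bias, p = 0.0, 1.0
        corrected = []
        for f, y in zip(forecasts, observations):
            corrected.append(f - bias)           # apply current bias estimate
            p = p + q                            # predict (random walk)
            k = p / (p + r)                      # Kalman gain
            bias = bias + k * ((f - y) - bias)   # update with observed error
            p = (1.0 - k) * p
        return np.array(corrected)
    ```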

  4. On the estimation and testing of predictive panel regressions

    NARCIS (Netherlands)

    Karabiyik, H.; Westerlund, Joakim; Narayan, Paresh

    2016-01-01

    Hjalmarsson (2010) considers an OLS-based estimator of predictive panel regressions that is argued to be mixed normal under very general conditions. In a recent paper, Westerlund et al. (2016) show that while consistent, the estimator is generally not mixed normal, which invalidates standard normal

  5. Testing the Predictive Validity and Construct of Pathological Video Game Use

    Science.gov (United States)

    Groves, Christopher L.; Gentile, Douglas; Tapscott, Ryan L.; Lynch, Paul J.

    2015-01-01

    Three studies assessed the construct of pathological video game use and tested its predictive validity. Replicating previous research, Study 1 produced evidence of convergent validity in 8th and 9th graders (N = 607) classified as pathological gamers. Study 2 replicated and extended the findings of Study 1 with college undergraduates (N = 504). Predictive validity was established in Study 3 by measuring cue reactivity to video games in college undergraduates (N = 254), such that pathological gamers were more emotionally reactive to and provided higher subjective appraisals of video games than non-pathological gamers and non-gamers. The three studies converged to show that pathological video game use seems similar to other addictions in its patterns of correlations with other constructs. Conceptual and definitional aspects of Internet Gaming Disorder are discussed. PMID:26694472

  6. Testing the Predictive Validity and Construct of Pathological Video Game Use

    Directory of Open Access Journals (Sweden)

    Christopher L. Groves

    2015-12-01

    Full Text Available Three studies assessed the construct of pathological video game use and tested its predictive validity. Replicating previous research, Study 1 produced evidence of convergent validity in 8th and 9th graders (N = 607) classified as pathological gamers. Study 2 replicated and extended the findings of Study 1 with college undergraduates (N = 504). Predictive validity was established in Study 3 by measuring cue reactivity to video games in college undergraduates (N = 254), such that pathological gamers were more emotionally reactive to and provided higher subjective appraisals of video games than non-pathological gamers and non-gamers. The three studies converged to show that pathological video game use seems similar to other addictions in its patterns of correlations with other constructs. Conceptual and definitional aspects of Internet Gaming Disorder are discussed.

  7. Construction cost estimation of spherical storage tanks: artificial neural networks and hybrid regression—GA algorithms

    Science.gov (United States)

    Arabzadeh, Vida; Niaki, S. T. A.; Arabzadeh, Vahid

    2017-10-01

    One of the most important processes in the early stages of construction projects is to estimate the cost involved. This process involves a wide range of uncertainties, which make it a challenging task. Because of unknown issues, using the experience of experts or looking for similar cases are the conventional ways to deal with cost estimation. The current study presents data-driven methods for cost estimation based on artificial neural network (ANN) and regression models. The ANN learning algorithms are Levenberg-Marquardt and Bayesian regularization. Moreover, the regression models are hybridized with a genetic algorithm to obtain better estimates of the coefficients. The methods are applied in a real case, where the input parameters of the models are assigned based on the key issues involved in constructing a spherical tank. The results reveal a high correlation between the estimated cost and the real cost, and both ANNs perform better than the hybridized regression models. In addition, the ANN with the Levenberg-Marquardt learning algorithm (LMNN) obtains a better estimation than the ANN with the Bayesian-regularized learning algorithm (BRNN). The correlation between real data and estimated values is over 90%, while the mean square error is around 0.4. The proposed LMNN model can be effective in reducing uncertainty and complexity in the early stages of a construction project.

  8. Distributed estimation based on observations prediction in wireless sensor networks

    KAUST Repository

    Bouchoucha, Taha

    2015-03-19

    We consider wireless sensor networks (WSNs) used for distributed estimation of unknown parameters. Due to the limited bandwidth, sensor nodes quantize their noisy observations before transmission to a fusion center (FC) for the estimation process. In this letter, the correlation between observations is exploited to reduce the mean-square error (MSE) of the distributed estimation. Specifically, sensor nodes generate local predictions of their observations and then transmit the quantized prediction errors (innovations) to the FC rather than the quantized observations. The analytic and numerical results show that transmitting the innovations rather than the observations mitigates the effect of quantization noise and hence reduces the MSE. © 2015 IEEE.
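
    A toy illustration of why transmitting innovations helps (a sketch only, not the letter's scheme or parameters): with a fixed number of quantizer levels, the innovations occupy a much smaller dynamic range than the raw observations, so the effective quantization step, and hence the quantization noise, is smaller. The predictor here is a running mean of reconstructed values, which the fusion center can replicate exactly.

    ```python
    import numpy as np

    def quantize(v, lo, hi, levels=8):
        """Uniform quantizer: `levels` bins over [lo, hi], mapped to centers."""
        step = (hi - lo) / levels
        idx = np.minimum(np.floor((np.clip(v, lo, hi) - lo) / step), levels - 1)
        return lo + step * (idx + 0.5)

    rng = np.random.default_rng(0)
    theta = 2.9                                   # unknown parameter
    x = theta + 0.3 * rng.normal(size=2000)       # noisy sensor observations

    # Scheme A: quantize raw observations over their full dynamic range.
    xa = quantize(x, 0.0, 6.0)

    # Scheme B: send the first sample at full range, then quantize only the
    # innovations w.r.t. a predictor the fusion center can replicate.
    recon = np.empty_like(x)
    recon[0] = quantize(x[0], 0.0, 6.0)
    for i in range(1, len(x)):
        pred = recon[:i].mean()                   # shared predictor
        recon[i] = pred + quantize(x[i] - pred, -1.0, 1.0)

    # Same 8 levels, but the innovation scheme has a much finer step.
    print(np.mean((xa - x) ** 2), np.mean((recon - x) ** 2))
    ```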

  9. Estimation of genetic connectedness diagnostics based on prediction errors without the prediction error variance-covariance matrix.

    Science.gov (United States)

    Holmes, John B; Dodds, Ken G; Lee, Michael A

    2017-03-02

    An important issue in genetic evaluation is the comparability of random effects (breeding values), particularly between pairs of animals in different contemporary groups. This is usually referred to as genetic connectedness. While various measures of connectedness have been proposed in the literature, there is general agreement that the most appropriate measure is some function of the prediction error variance-covariance matrix. However, obtaining the prediction error variance-covariance matrix is computationally demanding for large-scale genetic evaluations. Many alternative statistics have been proposed that avoid the computational cost of obtaining the prediction error variance-covariance matrix, such as counts of genetic links between contemporary groups, gene flow matrices, and functions of the variance-covariance matrix of estimated contemporary group fixed effects. In this paper, we show that a correction to the variance-covariance matrix of estimated contemporary group fixed effects will produce the exact prediction error variance-covariance matrix averaged by contemporary group for univariate models in the presence of single or multiple fixed effects and one random effect. We demonstrate the correction for a series of models and show that approximations to the prediction error matrix based solely on the variance-covariance matrix of estimated contemporary group fixed effects are inappropriate in certain circumstances. Our method allows for the calculation of a connectedness measure based on the prediction error variance-covariance matrix by calculating only the variance-covariance matrix of estimated fixed effects. Since the number of fixed effects in genetic evaluation is usually orders of magnitude smaller than the number of random effect levels, the computational requirements for our method should be reduced.

  10. Prediction of indoor radon concentration based on residence location and construction

    International Nuclear Information System (INIS)

    Maekelaeinen, I.; Voutilainen, A.; Castren, O.

    1992-01-01

    We have constructed a model for assessing indoor radon concentrations in houses where measurements cannot be performed. It has been used in an epidemiological study and to determine the radon potential of new building sites. The model is based on data from about 10,000 buildings. Integrated radon measurements were made during the cold season in all the houses; their geographic coordinates were also known. The 2-mo measurement results were corrected to annual average concentrations. Construction data were collected from questionnaires completed by residents; geological data were determined from geological maps. Data were classified according to geographical, geological, and construction factors. In order to describe different radon production levels, the country was divided into four zones. We assumed that the factors were multiplicative, and a linear concentration-prediction model was used. The most significant factor in determining radon concentration was the geographical region, followed by soil type, year of construction, and type of foundation. The predicted indoor radon concentrations given by the model varied from 50 to 440 Bq m⁻³. The lower figure represents a house with a basement, built in the 1950s on clay soil, in the region with the lowest radon concentration levels. The higher value represents a house with a concrete slab in contact with the ground, built in the 1980s, on gravel, in the region with the highest average radon concentration.

  11. Predictive framework for estimating exposure of birds to pharmaceuticals

    Science.gov (United States)

    Bean, Thomas G.; Arnold, Kathryn E.; Lane, Julie M.; Bergström, Ed; Thomas-Oates, Jane; Rattner, Barnett A.; Boxall, Allistair B.A.

    2017-01-01

    We present and evaluate a framework for estimating concentrations of pharmaceuticals over time in wildlife feeding at wastewater treatment plants (WWTPs). The framework is composed of a series of predictive steps involving the estimation of pharmaceutical concentration in wastewater, accumulation into wildlife food items, and uptake by wildlife with subsequent distribution into, and elimination from, tissues. Because many pharmacokinetic parameters for wildlife are unavailable for the majority of drugs in use, a read-across approach was employed using either rodent or human data on absorption, distribution, metabolism, and excretion. Comparison of the different steps in the framework against experimental data for the scenario where birds are feeding on a WWTP contaminated with fluoxetine showed that estimated concentrations in wastewater treatment works were lower than measured concentrations; concentrations in food could be reasonably estimated if experimental bioaccumulation data are available; and read-across from rodent data worked better than human to bird read-across. The framework provides adequate predictions of plasma concentrations and of elimination behavior in birds but yields poor predictions of distribution in tissues. The approach holds promise, but it is important that we improve our understanding of the physiological similarities and differences between wild birds and domesticated laboratory mammals used in pharmaceutical efficacy/safety trials, so that the wealth of data available can be applied more effectively in ecological risk assessments.

  12. Uncertainty estimation and risk prediction in air quality

    International Nuclear Information System (INIS)

    Garaud, Damien

    2011-01-01

    This work is about uncertainty estimation and risk prediction in air quality. First, we build a multi-model ensemble of air quality simulations that can take into account all uncertainty sources related to air quality modeling. Ensembles of photochemical simulations at continental and regional scales are automatically generated. These ensembles are then calibrated with a combinatorial optimization method that selects a sub-ensemble which is representative of the uncertainty or shows good resolution and reliability for probabilistic forecasting. This work shows that it is possible to estimate and forecast uncertainty fields related to ozone and nitrogen dioxide concentrations, and to improve the reliability of threshold exceedance predictions. The approach is compared with Monte Carlo simulations, calibrated or not; the Monte Carlo approach appears to be less representative of the uncertainties than the multi-model approach. Finally, we quantify the observational error, the representativeness error and the modeling errors. The work is applied to the impact of thermal power plants in order to quantify the uncertainty on the impact estimates. (author) [fr]

  13. Uncertainty analysis of neural network based flood forecasting models: An ensemble based approach for constructing prediction interval

    Science.gov (United States)

    Kasiviswanathan, K.; Sudheer, K.

    2013-05-01

    Artificial neural network (ANN) based hydrologic models have gained a lot of attention among water resources engineers and scientists owing to their potential for accurate prediction of flood flows compared with conceptual or physics-based hydrologic models. The ANN approximates the non-linear functional relationship between complex hydrologic variables in arriving at river flow forecast values. Despite a large number of applications, there is still criticism that ANN point predictions lack reliability because the uncertainty of the predictions is not quantified, which limits their use in practical applications. A major concern in applying traditional uncertainty analysis techniques to a neural network framework is its parallel computing architecture with large degrees of freedom, which makes uncertainty assessment a challenging task. Very few studies have considered assessment of the predictive uncertainty of ANN-based hydrologic models. In this study, a novel method is proposed that helps construct the prediction interval of an ANN flood forecasting model during calibration itself. The method is designed with two stages of optimization during calibration: in stage 1, the ANN model is trained with a genetic algorithm (GA) to obtain the optimal set of weights and biases; in stage 2, the optimal variability of the ANN parameters (obtained in stage 1) is identified so as to create an ensemble of predictions. The second-stage optimization has multiple objectives: (i) minimum residual variance for the ensemble mean, (ii) maximum number of measured data points falling within the estimated prediction interval, and (iii) minimum width of the prediction interval. The method is illustrated using a real-world case study of an Indian basin. The method was able to produce an ensemble with an average prediction interval width of 23.03 m³/s, with 97.17% of the total validation data points (measured) lying within the interval. The derived
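
    The two quantities the second-stage objectives trade off, coverage and width of the prediction interval, are straightforward to compute from an ensemble. A minimal sketch with a hypothetical ensemble (not the study's model or data):

    ```python
    import numpy as np

    def interval_diagnostics(lower, upper, observed):
        """Percentage of observations inside the interval (coverage)
        and the average interval width."""
        inside = (observed >= lower) & (observed <= upper)
        return 100.0 * inside.mean(), float(np.mean(upper - lower))

    # Hypothetical ensemble of daily flow forecasts (members x days).
    rng = np.random.default_rng(4)
    ensemble = rng.normal(100.0, 12.0, size=(50, 365))
    lo, hi = np.percentile(ensemble, [2.5, 97.5], axis=0)
    observed = rng.normal(100.0, 12.0, size=365)
    print(interval_diagnostics(lo, hi, observed))   # (coverage %, width m^3/s)
    ```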

  14. Predicting microRNA precursors with a generalized Gaussian components based density estimation algorithm

    Directory of Open Access Journals (Sweden)

    Wu Chi-Yeh

    2010-01-01

    Full Text Available Abstract Background MicroRNAs (miRNAs) are short non-coding RNA molecules, which play an important role in post-transcriptional regulation of gene expression. There have been many efforts to discover miRNA precursors (pre-miRNAs) over the years. Recently, ab initio approaches have attracted more attention because they do not depend on homology information and provide broader applications than comparative approaches. Kernel-based classifiers such as the support vector machine (SVM) are extensively adopted in these ab initio approaches due to the prediction performance they achieve. On the other hand, logic-based classifiers such as decision trees, whose constructed models are interpretable, have attracted less attention. Results This article reports the design of a predictor of pre-miRNAs with a novel kernel-based classifier named the generalized Gaussian density estimator (G2DE) based classifier. The G2DE is a kernel-based algorithm designed to provide interpretability by utilizing a few but representative kernels for constructing the classification model. The performance of the proposed predictor has been evaluated with 692 human pre-miRNAs and compared with two kernel-based and two logic-based classifiers. The experimental results show that the proposed predictor is capable of achieving prediction performance comparable to that delivered by the prevailing kernel-based classification algorithms, while providing the user with an overall picture of the distribution of the data set. Conclusion Software predictors that identify pre-miRNAs in genomic sequences have been exploited by biologists to facilitate molecular biology research in recent years. The G2DE employed in this study can deliver prediction accuracy comparable with the state-of-the-art kernel-based machine learning algorithms. Furthermore, biologists can obtain valuable insights about the different characteristics of the sequences of pre-miRNAs with the models generated by the G2DE.

  15. Estimation and prediction under local volatility jump-diffusion model

    Science.gov (United States)

    Kim, Namhyoung; Lee, Younhee

    2018-02-01

    Volatility is an important factor in operating a company and managing risk. In portfolio optimization and in risk hedging with options, option values are evaluated using a volatility model, and various attempts have been made to predict them. Recent studies have shown that stochastic volatility models and jump-diffusion models reflect stock price movements accurately. However, these models have practical limitations; combining them with the local volatility model, which is widely used among practitioners, may lead to better performance. In this study, we propose a more effective and efficient method of estimating option prices by combining the local volatility model with the jump-diffusion model, and we apply it to both artificial and actual market data to evaluate its performance. The calibration process for estimating the jump parameters and local volatility surfaces is divided into three stages. We apply the local volatility model, the stochastic volatility model, and the local volatility jump-diffusion model estimated by the proposed method to KOSPI 200 index option pricing. The proposed method displays good estimation and prediction performance.

  16. Predictive framework for estimating exposure of birds to pharmaceuticals.

    Science.gov (United States)

    Bean, Thomas G; Arnold, Kathryn E; Lane, Julie M; Bergström, Ed; Thomas-Oates, Jane; Rattner, Barnett A; Boxall, Alistair B A

    2017-09-01

    We present and evaluate a framework for estimating concentrations of pharmaceuticals over time in wildlife feeding at wastewater treatment plants (WWTPs). The framework is composed of a series of predictive steps involving the estimation of pharmaceutical concentration in wastewater, accumulation into wildlife food items, and uptake by wildlife with subsequent distribution into, and elimination from, tissues. Because many pharmacokinetic parameters for wildlife are unavailable for the majority of drugs in use, a read-across approach was employed using either rodent or human data on absorption, distribution, metabolism, and excretion. Comparison of the different steps in the framework against experimental data for the scenario where birds are feeding on a WWTP contaminated with fluoxetine showed that estimated concentrations in wastewater treatment works were lower than measured concentrations; concentrations in food could be reasonably estimated if experimental bioaccumulation data are available; and read-across from rodent data worked better than human to bird read-across. The framework provides adequate predictions of plasma concentrations and of elimination behavior in birds but yields poor predictions of distribution in tissues. The approach holds promise, but it is important that we improve our understanding of the physiological similarities and differences between wild birds and domesticated laboratory mammals used in pharmaceutical efficacy/safety trials, so that the wealth of data available can be applied more effectively in ecological risk assessments. Environ Toxicol Chem 2017;36:2335-2344. © 2017 SETAC.

  17. Construction of prediction intervals for Palmer Drought Severity Index using bootstrap

    Science.gov (United States)

    Beyaztas, Ufuk; Bickici Arikan, Bugrayhan; Beyaztas, Beste Hamiye; Kahya, Ercan

    2018-04-01

    In this study, we propose an approach based on the residual-based bootstrap method to obtain valid prediction intervals using monthly, short-term (three-month) and mid-term (six-month) drought observations. The effects of the North Atlantic and Arctic Oscillation indexes on the constructed prediction intervals are also examined. Performance of the proposed approach is evaluated for the Palmer Drought Severity Index (PDSI) obtained from the Konya closed basin located in Central Anatolia, Turkey. The finite sample properties of the proposed method are further illustrated by an extensive simulation study. Our results revealed that the proposed approach is capable of producing valid prediction intervals for future PDSI values.
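
    The residual-based bootstrap for prediction intervals works by fitting a time-series model, resampling its residuals, and simulating forward paths whose quantiles form the interval. A simplified stand-in using an AR(1) fitted by least squares (the paper's setting may use a richer model and the exogenous oscillation indexes); the series here is synthetic.

    ```python
    import numpy as np

    def ar1_bootstrap_interval(x, horizon=6, n_boot=2000, alpha=0.05, seed=0):
        """Residual-based bootstrap prediction interval for an AR(1)."""
        rng = np.random.default_rng(seed)
        y, z = x[1:], x[:-1]
        zc, yc = z - z.mean(), y - y.mean()
        b = (zc * yc).sum() / (zc * zc).sum()    # AR(1) slope (least squares)
        a = y.mean() - b * z.mean()              # intercept
        resid = y - (a + b * z)                  # residuals to resample
        paths = np.empty((n_boot, horizon))
        for m in range(n_boot):
            cur = x[-1]
            for h in range(horizon):
                cur = a + b * cur + rng.choice(resid)
                paths[m, h] = cur
        qs = [100 * alpha / 2, 100 * (1 - alpha / 2)]
        return np.percentile(paths, qs, axis=0)  # lower and upper bounds

    pdsi = np.cumsum(np.random.default_rng(5).normal(0, 0.5, size=240)) * 0.1
    lo, hi = ar1_bootstrap_interval(pdsi)        # toy monthly series, 6-step PI
    ```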

  18. Construction and Demolition Debris 2014 US Final Disposition Estimates Using the CDDPath Method

    Data.gov (United States)

    U.S. Environmental Protection Agency — Estimates of the final amount and final disposition of materials generated in the Construction and Demolition waste stream measured in total mass of each material....

  19. Method for estimating capacity and predicting remaining useful life of lithium-ion battery

    International Nuclear Information System (INIS)

    Hu, Chao; Jain, Gaurav; Tamirisa, Prabhakar; Gorka, Tom

    2014-01-01

    Highlights: • We develop an integrated method for capacity estimation and RUL prediction. • A state projection scheme is derived for capacity estimation. • The Gauss–Hermite particle filter technique is used for the RUL prediction. • Results with 10 years' continuous cycling data verify the effectiveness of the method. - Abstract: Reliability of lithium-ion (Li-ion) rechargeable batteries used in implantable medical devices has been recognized as of high importance by a broad range of stakeholders, including medical device manufacturers, regulatory agencies, physicians, and patients. To ensure that Li-ion batteries in these devices operate reliably, it is important to be able to assess the capacity of a Li-ion battery and predict its remaining useful life (RUL) throughout the whole lifetime. This paper presents an integrated method for the capacity estimation and RUL prediction of Li-ion batteries used in implantable medical devices. A state projection scheme from the authors' previous study is used for the capacity estimation. Then, based on the capacity estimates, the Gauss–Hermite particle filter technique is used to project the capacity fade to the end-of-service (EOS) value (or the failure limit) for the RUL prediction. Results of a 10-year continuous cycling test on Li-ion prismatic cells in the lab suggest that the proposed method achieves good accuracy in the capacity estimation and captures the uncertainty in the RUL prediction. Post-explant weekly cycling data obtained from field cells with 4-7 implant years further verify the effectiveness of the proposed method in the capacity estimation.

  20. Forecasting Construction Cost Index based on visibility graph: A network approach

    Science.gov (United States)

    Zhang, Rong; Ashuri, Baabak; Shyr, Yu; Deng, Yong

    2018-03-01

    Engineering News-Record (ENR), a professional magazine in the field of global construction engineering, publishes the Construction Cost Index (CCI) every month. Cost estimators and contractors assess projects, arrange budgets and prepare bids by forecasting CCI. However, fluctuations and uncertainties in CCI cause irrational estimations now and then. This paper aims at achieving more accurate predictions of CCI with a network approach in which the time series is first converted into a visibility graph and future values are then forecast by link prediction. According to the experimental results, the proposed method shows satisfactory performance, with acceptable error measures. Compared with other methods, the proposed method is easier to implement and forecasts CCI with smaller errors. The results suggest that the proposed method can provide considerably accurate CCI predictions, contributing to construction engineering by assisting individuals and organizations in reducing costs and making project schedules.
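
    The first stage, converting a series into a natural visibility graph (in the sense of Lacasa et al.), is easy to sketch; the link-prediction forecasting stage is not shown, and the toy data below are assumptions:

        import numpy as np

        def natural_visibility_graph(series):
            """Natural visibility graph of a series: nodes are time indices,
            and (i, j) is an edge when every sample between them lies
            strictly below the straight line joining (i, y_i) and (j, y_j)."""
            y = np.asarray(series, dtype=float)
            n = len(y)
            edges = set()
            for i in range(n - 1):
                for j in range(i + 1, n):
                    if all(y[k] < y[j] + (y[i] - y[j]) * (j - k) / (j - i)
                           for k in range(i + 1, j)):
                        edges.add((i, j))
            return edges

        cci = [100.0, 102.3, 101.8, 104.1, 103.5, 106.0]   # toy index values
        print(sorted(natural_visibility_graph(cci)))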

  1. Body composition in elderly people: effect of criterion estimates on predictive equations

    International Nuclear Information System (INIS)

    Baumgartner, R.N.; Heymsfield, S.B.; Lichtman, S.; Wang, J.; Pierson, R.N. Jr.

    1991-01-01

    The purposes of this study were to determine whether there are significant differences between two- and four-compartment model estimates of body composition, whether these differences are associated with the aqueous and mineral fractions of the fat-free mass (FFM), and whether the differences are retained in equations for predicting body composition from anthropometry and bioelectric resistance. Body composition was estimated in 98 men and women aged 65-94 y by using a four-compartment model based on hydrodensitometry, ³H₂O dilution, and dual-photon absorptiometry. These estimates were significantly different from those obtained by using Siri's two-compartment model. The differences were associated significantly (P less than 0.0001) with variation in the aqueous fraction of FFM. Equations for predicting body composition from anthropometry and resistance, when calibrated against two-compartment model estimates, retained these systematic errors. Equations predicting body composition in elderly people should be calibrated against estimates from multicompartment models that consider variability in FFM composition.

  2. An estimator-based distributed voltage-predictive control strategy for ac islanded microgrids

    DEFF Research Database (Denmark)

    Wang, Yanbo; Chen, Zhe; Wang, Xiongfei

    2015-01-01

    This paper presents an estimator-based voltage predictive control strategy for AC islanded microgrids, which is able to perform voltage control without any communication facilities. The proposed control strategy is composed of a network voltage estimator and a voltage predictive controller for each...... and has a good capability to reject uncertain perturbations of islanded microgrids....

  3. Probability estimate of confirmability of the value of predicted oil and gas reserves of the Chechen-Ingushetiya. Veroyatnostnaya otsenka podtverzhdaemosti velichiny prognoznykh zapasov nefti is gaza Checheno-Ingushetii

    Energy Technology Data Exchange (ETDEWEB)

    Merkulov, N.E.; Lysenkov, P.P.

    1981-01-01

    The reliability of the predicted oil and gas reserves of Chechen-Ingushetia is estimated by probability methods. Calculations were made separately for each oil-bearing lithologic-stratigraphic horizon. The computation results are summarized in a table, and graphs are constructed.

  4. Penalized regression techniques for prediction: a case study for predicting tree mortality using remotely sensed vegetation indices

    NARCIS (Netherlands)

    Lazaridis, D.C.; Verbesselt, J.; Robinson, A.P.

    2011-01-01

    Constructing models can be complicated when the available fitting data are highly correlated and of high dimension. However, the complications depend on whether the goal is prediction instead of estimation. We focus on predicting tree mortality (measured as the number of dead trees) from change
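
    As a generic sketch of penalized regression on highly correlated predictors (synthetic data; the study's response is a count of dead trees, for which a Poisson variant would be more natural):

        import numpy as np
        from sklearn.linear_model import LassoCV, RidgeCV
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(0)

        # Synthetic stand-in for correlated, high-dimensional vegetation
        # indices: 200 plots, 50 indices driven by 5 latent factors.
        n, p = 200, 50
        latent = rng.normal(size=(n, 5))
        X = latent @ rng.normal(size=(5, p)) + 0.1 * rng.normal(size=(n, p))
        y = X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n)

        # Ridge shrinks correlated coefficients together; lasso also selects.
        ridge = make_pipeline(StandardScaler(), RidgeCV(alphas=np.logspace(-3, 3, 25)))
        lasso = make_pipeline(StandardScaler(), LassoCV(cv=5, random_state=0))
        for name, model in [("ridge", ridge), ("lasso", lasso)]:
            model.fit(X, y)
            print(name, "in-sample R^2 =", round(model.score(X, y), 3))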

  5. Oil and gas pipeline construction cost analysis and developing regression models for cost estimation

    Science.gov (United States)

    Thaduri, Ravi Kiran

    In this study, cost data for 180 pipelines and 136 compressor stations have been analyzed. On the basis of the distribution analysis, regression models have been developed. Material, labor, right-of-way (ROW) and miscellaneous costs make up the total cost of a pipeline construction. The pipelines are analyzed based on different pipeline lengths, diameters, locations, pipeline volumes and years of completion. In a pipeline construction, labor costs dominate the total costs with a share of about 40%. Multiple non-linear regression models are developed to estimate the component costs of pipelines for various cross-sectional areas, lengths and locations. The compressor stations are analyzed based on capacity, year of completion and location. Unlike the pipeline costs, material costs dominate the total costs in the construction of a compressor station, with an average share of about 50.6%. Land costs have very little influence on the total costs. Similar regression models are developed to estimate the component costs of compressor stations for various capacities and locations.

  6. Exploring the Predictive Validity of the Susceptibility to Smoking Construct for Tobacco Cigarettes, Alternative Tobacco Products, and E-Cigarettes.

    Science.gov (United States)

    Cole, Adam G; Kennedy, Ryan David; Chaurasia, Ashok; Leatherdale, Scott T

    2017-12-06

    Within tobacco prevention programming, it is useful to identify youth who are at risk of experimenting with various tobacco products and e-cigarettes. The susceptibility to smoking construct is a simple method to identify never-smoking students who are less committed to remaining smoke-free. However, the predictive validity of this construct has not been tested within the Canadian context or for the use of other tobacco products and e-cigarettes. This study used a large, longitudinal sample of secondary school students in Ontario, Canada, who reported never using tobacco cigarettes and no current use of alternative tobacco products or e-cigarettes at baseline. The sensitivity, specificity, and positive and negative predictive values of the susceptibility construct for predicting tobacco cigarette, e-cigarette, cigarillo or little cigar, cigar, hookah, and smokeless tobacco use one and two years after baseline measurement were calculated. At baseline, 29.4% of the sample was susceptible to future tobacco product or e-cigarette use. The sensitivity of the construct ranged from 43.2% (smokeless tobacco) to 59.5% (tobacco cigarettes), the specificity ranged from 70.9% (smokeless tobacco) to 75.9% (tobacco cigarettes), and the positive predictive value ranged from 2.6% (smokeless tobacco) to 32.2% (tobacco cigarettes). Similar values were calculated for each measure of the susceptibility construct. A significant number of youth who did not currently use tobacco products or e-cigarettes at baseline reported using tobacco products and e-cigarettes over the two-year follow-up period. The predictive validity of the susceptibility construct was high, and the construct can be used to predict other tobacco product and e-cigarette use among youth. This study presents the predictive validity of the susceptibility construct for the use of tobacco cigarettes among secondary school students in Ontario, Canada. It also presents a novel use of the susceptibility construct for
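
    The four reported screening metrics come straight from a 2x2 table of baseline susceptibility against follow-up use; a minimal sketch with invented counts (not the study's data):

        def screening_metrics(tp, fp, fn, tn):
            """Sensitivity, specificity and predictive values from a 2x2 table
            of baseline susceptibility vs. product use at follow-up."""
            sensitivity = tp / (tp + fn)   # susceptible among future users
            specificity = tn / (tn + fp)   # non-susceptible among non-users
            ppv = tp / (tp + fp)           # future users among the susceptible
            npv = tn / (tn + fn)           # non-users among the non-susceptible
            return sensitivity, specificity, ppv, npv

        # Invented counts for 1000 students, 60 of whom start smoking:
        sens, spec, ppv, npv = screening_metrics(tp=36, fp=265, fn=24, tn=675)
        print(f"sensitivity={sens:.1%} specificity={spec:.1%} "
              f"PPV={ppv:.1%} NPV={npv:.1%}")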

  7. A practical approach to parameter estimation applied to model predicting heart rate regulation

    DEFF Research Database (Denmark)

    Olufsen, Mette; Ottesen, Johnny T.

    2013-01-01

    Mathematical models have long been used for prediction of dynamics in biological systems. Recently, several efforts have been made to render these models patient specific. One way to do so is to employ techniques to estimate parameters that enable model-based prediction of observed quantities.... Knowledge of variation in parameters within and between groups of subjects has the potential to provide insight into biological function. Often it is not possible to estimate all parameters in a given model, in particular if the model is complex and the data are sparse. However, it may be possible to estimate... a subset of model parameters, reducing the complexity of the problem. In this study, we compare three methods that allow identification of parameter subsets that can be estimated given a model and a set of data. These methods will be used to estimate patient-specific parameters in a model predicting...

  8. AES, Automated Construction Cost Estimation System

    International Nuclear Information System (INIS)

    Holder, D.A.

    1995-01-01

    A - Description of program or function: AES (Automated Estimating System) enters and updates the detailed cost, schedule, contingency, and escalation information contained in a typical construction or other project cost estimate. It combines this information to calculate both un-escalated and escalated costs and cash flow values for the project. These costs can be reported at varying levels of detail. AES differs from previous versions in at least the following ways: the schedule is entered at the WBS-Participant, Activity level - multiple activities can be assigned to each WBS-Participant combination; the spending curve is defined at the schedule activity level, and a weighting factor determines the percentage of cost for the WBS-Participant applied to the schedule activity; scheduling is by days instead of fiscal year/quarter; sales tax is applied at the line item level - a sales tax code is selected to indicate Material, Large Single Item, or Professional Services; a 'data filter' has been added to allow the user to define the data for which a report is to be generated. B - Method of solution: Average Escalation Rate: The average escalation for a Bill of Material is calculated in three steps. 1. A table of quarterly escalation factors is calculated based on the base fiscal year and quarter of the project entered in the estimate record and the annual escalation rates entered in the Standard Value File. 2. The percentage distribution of costs by quarter for the Bill of Material is calculated based on the schedule entered and the curve type. 3. The percentage in each fiscal year and quarter of the distribution is multiplied by the escalation factor for that fiscal year and quarter. The sum of these results is the average escalation rate for that Bill of Material. Schedule by curve: The allocation of costs to specific time periods is dependent on three inputs: starting schedule date, ending schedule date, and the percentage of costs allocated to each quarter. Contingency Analysis: The
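
    The three-step average escalation calculation maps directly to code; the escalation rate and spending-curve weights below are assumed for illustration:

        import numpy as np

        def average_escalation_rate(quarterly_factors, cost_distribution):
            """Three-step scheme from the description above: per-quarter
            escalation factors, a per-quarter cost distribution from the
            spending curve, and their weighted sum."""
            f = np.asarray(quarterly_factors, dtype=float)
            w = np.asarray(cost_distribution, dtype=float)
            assert abs(w.sum() - 1.0) < 1e-9, "distribution must total 100%"
            return float(w @ f)

        # Assumed example: 6% annual escalation prorated by quarter, and a
        # Bill of Material whose spend is spread over four quarters.
        factors = [1.06 ** (q / 4.0) for q in range(1, 5)]
        spend = [0.10, 0.30, 0.40, 0.20]        # spending-curve weights
        print(round(average_escalation_rate(factors, spend), 4))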

  9. Estimating Predictive Variance for Statistical Gas Distribution Modelling

    International Nuclear Information System (INIS)

    Lilienthal, Achim J.; Asadi, Sahar; Reggente, Matteo

    2009-01-01

    Recent publications in statistical gas distribution modelling have proposed algorithms that model the mean and variance of a distribution. This paper argues that estimating the predictive concentration variance entails not only a gradual improvement but is rather a significant step to advance the field. This is, first, because such models much better fit the particular structure of gas distributions, which exhibit strong fluctuations with considerable spatial variations as a result of the intermittent character of gas dispersal. Second, because estimating the predictive variance makes it possible to evaluate the model quality in terms of the data likelihood. This offers a solution to the problem of ground truth evaluation, which has always been a critical issue for gas distribution modelling. It also enables solid comparisons of different modelling approaches, and provides the means to learn meta parameters of the model, to determine when the model should be updated or re-initialised, or to suggest new measurement locations based on the current model. We also point out directions of related ongoing or potential future research work.
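
    A generic kernel sketch of jointly estimating the predictive mean and variance from intermittent concentration samples (not the authors' specific algorithms; data and bandwidth are assumptions):

        import numpy as np

        def kernel_mean_variance(train_xy, train_c, query_xy, bandwidth=1.0):
            """Nadaraya-Watson estimates of the predictive mean AND variance
            of gas concentration at query locations."""
            X, c, Q = map(np.asarray, (train_xy, train_c, query_xy))
            d2 = ((Q[:, None, :] - X[None, :, :]) ** 2).sum(-1)
            w = np.exp(-0.5 * d2 / bandwidth**2)
            w /= w.sum(axis=1, keepdims=True)
            mean = w @ c
            # Kernel-weighted squared deviation from the local mean
            var = (w * (c[None, :] - mean[:, None]) ** 2).sum(axis=1)
            return mean, var

        rng = np.random.default_rng(0)
        pts = rng.uniform(0, 10, (300, 2))               # sensor positions
        conc = np.exp(-((pts - 5.0) ** 2).sum(1) / 4.0)  # smooth plume shape
        conc *= rng.lognormal(0.0, 0.5, 300)             # intermittent bursts
        grid = np.array([[2.0, 2.0], [5.0, 5.0], [8.0, 3.0]])
        m, v = kernel_mean_variance(pts, conc, grid, bandwidth=0.8)
        print(np.round(m, 3), np.round(v, 3))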

  10. Construction of Models for Nondestructive Prediction of Ingredient Contents in Blueberries by Near-infrared Spectroscopy Based on HPLC Measurements.

    Science.gov (United States)

    Bai, Wenming; Yoshimura, Norio; Takayanagi, Masao; Che, Jingai; Horiuchi, Naomi; Ogiwara, Isao

    2016-06-28

    Nondestructive prediction of the ingredient contents of farm products is useful for shipping and selling the products with guaranteed qualities. Here, near-infrared spectroscopy is used to nondestructively predict the total sugar, total organic acid, and total anthocyanin contents of each blueberry. The technique is expected to enable the selection of only delicious blueberries from all harvested ones. The near-infrared absorption spectra of blueberries are measured in diffuse reflectance mode at positions away from the calyx. The ingredient contents of a blueberry determined by high-performance liquid chromatography are used to construct models that predict the ingredient contents from observed spectra. Partial least squares regression is used for the construction of the models. It is necessary to properly select the pretreatments for the observed spectra and the wavelength regions of the spectra used for the analyses. Validation is necessary to confirm that the constructed models predict the ingredient contents with practical accuracy. Here we present a protocol to construct and validate models for nondestructive prediction of ingredient contents in blueberries by near-infrared spectroscopy.
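
    A minimal PLS sketch on synthetic spectra, assuming sklearn and a single mean-centring pretreatment (real work would compare pretreatments and wavelength regions, and calibrate against HPLC reference values):

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)

        # Synthetic stand-in for NIR diffuse-reflectance spectra: 120 berries
        # x 400 wavelengths, with sugar content driving one broad feature.
        n, p = 120, 400
        sugar = rng.uniform(8.0, 16.0, n)                 # reference values
        band = np.exp(-0.5 * ((np.arange(p) - 150) / 20.0) ** 2)
        X = sugar[:, None] * band[None, :] + rng.normal(0, 0.05, (n, p))

        # One simple pretreatment: mean-centre each spectrum
        X = X - X.mean(axis=1, keepdims=True)

        pls = PLSRegression(n_components=5)
        r2 = cross_val_score(pls, X, sugar, cv=5, scoring="r2")
        print("cross-validated R^2:", np.round(r2, 3))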

  11. Prediction of cardiovascular outcome by estimated glomerular filtration rate and estimated creatinine clearance in the high-risk hypertension population of the VALUE trial.

    Science.gov (United States)

    Ruilope, Luis M; Zanchetti, Alberto; Julius, Stevo; McInnes, Gordon T; Segura, Julian; Stolt, Pelle; Hua, Tsushung A; Weber, Michael A; Jamerson, Ken

    2007-07-01

    Reduced renal function is predictive of poor cardiovascular outcomes, but the predictive value of different measures of renal function is uncertain. We compared the value of estimated creatinine clearance, using the Cockcroft-Gault formula, with that of estimated glomerular filtration rate (GFR), using the Modification of Diet in Renal Disease (MDRD) formula, as predictors of cardiovascular outcome in 15 245 high-risk hypertensive participants in the Valsartan Antihypertensive Long-term Use Evaluation (VALUE) trial. For the primary end-point, the three secondary end-points and all-cause death, outcomes were compared for individuals with baseline estimated creatinine clearance and estimated GFR below versus at or above 60 ml/min, using hazard ratios and 95% confidence intervals. Coronary heart disease, left ventricular hypertrophy, age, sex and treatment effects were included as covariates in the model. For each end-point considered, the risk in individuals with poor renal function at baseline was greater than in those with better renal function. Estimated creatinine clearance (Cockcroft-Gault) was significantly predictive only of all-cause death [hazard ratio = 1.223, 95% confidence interval (CI) = 1.076-1.390; P = 0.0021], whereas estimated GFR was predictive of all outcomes except stroke. Hazard ratios (95% CIs) for estimated GFR were: primary cardiac end-point, 1.497 (1.332-1.682), P < 0.0001; all-cause death, 1.231 (1.098-1.380), P = 0.0004. These results indicate that the estimated glomerular filtration rate calculated with the MDRD formula is more informative than estimated creatinine clearance (Cockcroft-Gault) in the prediction of cardiovascular outcomes.
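
    The two renal-function measures can be computed as follows (standard published formulas; the MDRD constant 175 is the IDMS-traceable variant, and the trial may have used the original 186 coefficient):

        def cockcroft_gault(age, weight_kg, scr_mg_dl, female):
            """Estimated creatinine clearance (ml/min), Cockcroft-Gault."""
            ccl = (140.0 - age) * weight_kg / (72.0 * scr_mg_dl)
            return ccl * 0.85 if female else ccl

        def mdrd_gfr(age, scr_mg_dl, female, black=False):
            """Estimated GFR (ml/min/1.73 m^2), 4-variable MDRD."""
            gfr = 175.0 * scr_mg_dl ** -1.154 * age ** -0.203
            if female:
                gfr *= 0.742
            if black:
                gfr *= 1.212
            return gfr

        # The same patient under the two renal-function measures:
        print(round(cockcroft_gault(70, 80, 1.4, female=False), 1))  # ml/min
        print(round(mdrd_gfr(70, 1.4, female=False), 1))  # ml/min/1.73 m^2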

  12. Predictive Uncertainty Estimation in Water Demand Forecasting Using the Model Conditional Processor

    Directory of Open Access Journals (Sweden)

    Amos O. Anele

    2018-04-01

    In a previous paper, a number of potential models for short-term water demand (STWD) prediction were analysed to find the ones with the best fit. The results obtained in Anele et al. (2017) showed that hybrid models may be considered accurate and appropriate forecasting models for STWD prediction. However, such a best single-valued forecast does not guarantee reliable and robust decisions, which can be properly obtained via model uncertainty processors (MUPs). MUPs provide an estimate of the full predictive densities and not only the single-valued expected prediction. Amongst other MUPs, the purpose of this paper is to use the multi-variate version of the model conditional processor (MCP), proposed by Todini (2008), to demonstrate how the estimation of the predictive probability conditional on a number of relatively good predictive models may improve our knowledge, thus reducing the predictive uncertainty (PU) when forecasting into the unknown future. Through the MCP approach, the probability distribution of the future water demand can be assessed depending on the forecast provided by one or more deterministic forecasting models. Based on average weekly data of 168 h, the probability density of the future demand is built conditional on three models’ predictions, namely the autoregressive-moving average (ARMA), feed-forward back propagation neural network (FFBP-NN) and a hybrid model (i.e., the combined forecast from ARMA and FFBP-NN). The results obtained show that MCP may be effectively used for real-time STWD prediction since it brings out the PU connected to its forecast, and such information could help water utilities estimate the risk connected to a decision.

  13. Construction of risk prediction model of type 2 diabetes mellitus based on logistic regression

    Directory of Open Access Journals (Sweden)

    Li Jian

    2017-01-01

    Objective: to construct a multi-factor prediction model for the individual risk of T2DM, and to explore new ideas for early warning, prevention and personalized health services for T2DM. Methods: logistic regression techniques were used to screen the risk factors for T2DM and construct the risk prediction model. Results: the male risk prediction model was the logistic regression equation logit(P) = BMI × 0.735 + vegetables × (−0.671) + age × 0.838 + diastolic pressure × 0.296 + physical activity × (−2.287) + sleep × (−0.009) + smoking × 0.214; the female risk prediction model was the logistic regression equation logit(P) = BMI × 1.979 + vegetables × (−0.292) + age × 1.355 + diastolic pressure × 0.522 + physical activity × (−2.287) + sleep × (−0.010). The area under the ROC curve for males was 0.83, with sensitivity 0.72 and specificity 0.86; the area under the ROC curve for females was 0.84, with sensitivity 0.75 and specificity 0.90. Conclusion: the model data come from a nested case-control study; the risk prediction model has been established using mature logistic regression techniques, and the model has high predictive sensitivity, specificity and stability.
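
    Turning such an equation into a risk probability is a one-liner through the logistic function; the abstract gives neither the intercept nor the variable coding, so the inputs below are purely illustrative:

        import math

        # Coefficients transcribed from the male equation above.
        MALE_COEFS = {"BMI": 0.735, "vegetables": -0.671, "age": 0.838,
                      "diastolic": 0.296, "activity": -2.287,
                      "sleep": -0.009, "smoking": 0.214}

        def t2dm_risk(features, coefs, intercept=0.0):
            """Map the linear predictor logit(P) to a probability."""
            z = intercept + sum(coefs[k] * v for k, v in features.items())
            return 1.0 / (1.0 + math.exp(-z))

        person = {"BMI": 1, "vegetables": 0, "age": 1, "diastolic": 1,
                  "activity": 0, "sleep": 1, "smoking": 1}  # e.g. 0/1 coding
        print(f"predicted risk: {t2dm_risk(person, MALE_COEFS):.2f}")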

  14. Predicting maintenance of attendance at walking groups: testing constructs from three leading maintenance theories.

    Science.gov (United States)

    Kassavou, Aikaterini; Turner, Andrew; Hamborg, Thomas; French, David P

    2014-07-01

    Little is known about the processes and factors that account for maintenance, and the several existing theories have not been subject to many empirical tests. The aim of this study was to test how well theoretical constructs derived from the Health Action Process Approach, Rothman's theory of maintenance, and Verplanken's approach to habitual behavior predicted maintenance of attendance at walking groups. 114 participants, who had already attended walking groups in the community for at least 3 months, completed a questionnaire assessing theoretical constructs regarding maintenance. An objective assessment of attendance over the subsequent 3 months was gained. Multilevel modeling was used to predict maintenance, controlling for clustering within walking groups. Recovery self-efficacy predicted maintenance, even after accounting for clustering. Satisfaction with social outcomes, satisfaction with health outcomes, and overall satisfaction predicted maintenance, but only satisfaction with health outcomes significantly predicted maintenance after accounting for clustering. Self-reported habitual behavior did not predict maintenance despite mean previous attendance being 20.7 months. Recovery self-efficacy and satisfaction with the health outcomes of walking group attendance appeared to be important for objectively measured maintenance, whereas self-reported habit appeared not to be important for maintenance at walking groups. The findings suggest that there is a need for intervention studies to boost recovery self-efficacy and satisfaction with outcomes of walking group attendance, to assess impact on maintenance.

  15. Utilization of BIM for automation of quantity takeoffs and cost estimation in transport infrastructure construction projects in the Czech Republic

    Science.gov (United States)

    Vitásek, Stanislav; Matějka, Petr

    2017-09-01

    The article deals with problematic parts of the automated processing of quantity takeoffs (QTO) from data generated in a BIM model. It focuses on models of road construction, and uses volumes and dimensions of excavation work to create an estimate of construction costs. The article uses a case study and explorative methods to discuss the possibilities and problems of data transfer from a model to a price system of construction production when such a transfer is used for price estimates of construction works. Current QTOs and price tenders are made with 2D documents. This process is becoming obsolete because more modern tools can be used. The BIM phenomenon enables partial automation in processing volumes and dimensions of construction units and matching the data to units in a given price scheme. Therefore the price of construction can be estimated and structured without lengthy and often imprecise manual calculations. The use of BIM for QTO is highly dependent on local market budgeting systems; therefore a proper push/pull strategy is required. It also requires a proper requirements specification, a compatible pricing database and suitable software.

  16. Effects of uncertainty in model predictions of individual tree volume on large area volume estimates

    Science.gov (United States)

    Ronald E. McRoberts; James A. Westfall

    2014-01-01

    Forest inventory estimates of tree volume for large areas are typically calculated by adding model predictions of volumes for individual trees. However, the uncertainty in the model predictions is generally ignored with the result that the precision of the large area volume estimates is overestimated. The primary study objective was to estimate the effects of model...

  17. Accounting for the inaccuracies in demand forecasts and construction cost estimations in transport project evaluation

    DEFF Research Database (Denmark)

    Salling, Kim Bang; Leleur, Steen

    2014-01-01

    For decades researchers have claimed that demand forecasts and construction cost estimations in particular are affected by a large degree of uncertainty. Numerous articles, research documents and reports agree that there exists a tendency towards underestimating the costs... in demand and cost estimations and hence the evaluation of transport infrastructure projects. Currently, research within this area is scarce and scattered, with no common agreement on how to embed and operationalise the huge amount of empirical data that exists within the frame of Optimism Bias. Therefore... converting deterministic benefit-cost ratios (BCRs) into stochastic interval results. A new data collection (2009–2013) forms the empirical basis for the risk simulation embedded within the so-called UP database (UNITE project database), revealing the inaccuracy of both construction costs and demand forecasts. Accordingly...
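
    A toy Monte Carlo version of converting a deterministic BCR into a stochastic interval result (the distribution families and parameters are assumptions, not figures from the UP database):

        import numpy as np

        rng = np.random.default_rng(42)
        n = 100_000

        # Optimism-Bias-style uncertainty around deterministic point values.
        base_cost, base_benefit = 1_000.0, 1_400.0     # same monetary units
        cost = base_cost * rng.lognormal(np.log(1.2), 0.25, n)      # overruns
        benefit = base_benefit * rng.normal(0.9, 0.15, n).clip(0.3) # shortfalls

        bcr = benefit / cost
        lo, med, hi = np.percentile(bcr, [5, 50, 95])
        print(f"BCR 90% interval: [{lo:.2f}, {hi:.2f}], median {med:.2f}")
        print(f"P(BCR < 1) = {np.mean(bcr < 1):.1%}")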

  18. Estimating the decomposition of predictive information in multivariate systems

    Science.gov (United States)

    Faes, Luca; Kugiumtzis, Dimitris; Nollo, Giandomenico; Jurysta, Fabrice; Marinazzo, Daniele

    2015-03-01

    In the study of complex systems from observed multivariate time series, the evolution of the system under investigation can be explained by the information storage of the system and the information transfer from other interacting systems. We present a framework for the model-free estimation of information storage and information transfer, computed as the terms composing the predictive information about the target of a multivariate dynamical process. The approach tackles the curse of dimensionality by employing a nonuniform embedding scheme that selects progressively, among the past components of the multivariate process, only those that contribute most, in terms of conditional mutual information, to the present target process. Moreover, it computes all information-theoretic quantities using a nearest-neighbor technique designed to compensate for the bias due to the different dimensionality of individual entropy terms. The resulting estimators of prediction entropy, storage entropy, transfer entropy, and partial transfer entropy are tested on simulations of coupled linear stochastic and nonlinear deterministic dynamic processes, demonstrating the superiority of the proposed approach over traditional estimators based on uniform embedding. The framework is then applied to multivariate physiologic time series, resulting in physiologically well-interpretable information decompositions of cardiovascular and cardiorespiratory interactions during head-up tilt and of joint brain-heart dynamics during sleep.

  19. Estimation of Mechanical Signals in Induction Motors using the Recursive Prediction Error Method

    DEFF Research Database (Denmark)

    Børsting, H.; Knudsen, Morten; Rasmussen, Henrik

    1993-01-01

    Sensor feedback of mechanical quantities for control applications in induction motors is troublesome and relatively expensive. In this paper a recursive prediction error (RPE) method has successfully been used to estimate the angular rotor speed .....

  20. The quality estimation of exterior wall’s and window filling’s construction design

    Science.gov (United States)

    Saltykov, Ivan; Bovsunovskaya, Maria

    2017-10-01

    The article introduces the term "artificial envelope" in dwelling construction. The authors offer a complex multifactorial approach to estimating the design quality of external fencing structures, based on the impact of several parameters: functional, operational, cost, and environmental. A design quality index Qk is introduced as the complex characteristic of these parameters; its mathematical relation to the parameters is the target function for design quality estimation. As an example, the article shows the search for the optimal variant of wall and window designs in small, medium and large dwelling premises of economy-class buildings. Graphs of the target function's individual parameters are given for the three sizes of residential rooms. From this example, window opening dimensions are chosen that make the wall and window constructions properly correspond to the stated complex requirements. The authors compare the window filling area recommended by building standards with the area found from the optimal variant of the design quality index. The multifactorial approach to searching for optimal designs presented in this article can be applied to various construction elements of dwelling buildings, taking into account the climatic, social and economic features of the construction area.

  1. Predictive analysis and mapping of indoor radon concentrations in a complex environment using kernel estimation: An application to Switzerland

    Energy Technology Data Exchange (ETDEWEB)

    Kropat, Georg, E-mail: georg.kropat@chuv.ch [Institute of Radiation Physics, Lausanne University Hospital, Rue du Grand-Pré 1, 1007 Lausanne (Switzerland); Bochud, Francois [Institute of Radiation Physics, Lausanne University Hospital, Rue du Grand-Pré 1, 1007 Lausanne (Switzerland); Jaboyedoff, Michel [Faculty of Geosciences and Environment, University of Lausanne, GEOPOLIS — 3793, 1015 Lausanne (Switzerland); Laedermann, Jean-Pascal [Institute of Radiation Physics, Lausanne University Hospital, Rue du Grand-Pré 1, 1007 Lausanne (Switzerland); Murith, Christophe; Palacios, Martha [Swiss Federal Office of Public Health, Schwarzenburgstrasse 165, 3003 Berne (Switzerland); Baechler, Sébastien [Institute of Radiation Physics, Lausanne University Hospital, Rue du Grand-Pré 1, 1007 Lausanne (Switzerland); Swiss Federal Office of Public Health, Schwarzenburgstrasse 165, 3003 Berne (Switzerland)

    2015-02-01

    Purpose: The aim of this study was to develop models based on kernel regression and probability estimation in order to predict and map IRC in Switzerland by taking into account all of the following: architectural factors, spatial relationships between the measurements, and geological information. Methods: We looked at about 240 000 IRC measurements carried out in about 150 000 houses. As predictor variables we included building type, foundation type, year of construction, detector type, geographical coordinates, altitude, temperature and lithology in the kernel estimation models. We developed predictive maps as well as a map of the local probability of exceeding 300 Bq/m³. Additionally, we developed a map of a confidence index in order to estimate the reliability of the probability map. Results: Our models were able to explain 28% of the variation in the IRC data. All variables added information to the model. The model estimation revealed a bandwidth for each variable, making it possible to characterize the influence of each variable on the IRC estimation. Furthermore, we assessed the mapping characteristics of kernel estimation overall as well as by municipality. Overall, our model reproduces spatial IRC patterns obtained earlier. On the municipal level, we could show that our model accounts well for IRC trends within municipal boundaries. Finally, we found that different building characteristics result in different IRC maps. Maps corresponding to detached houses with concrete foundations indicate systematically smaller IRC than maps corresponding to farms with earth foundations. Conclusions: IRC mapping based on kernel estimation is a powerful tool to predict and analyze IRC on a large scale as well as on a local level. This approach enables the development of tailor-made maps for different architectural elements and measurement conditions while accounting for geological information and spatial relations between IRC measurements.
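
    The probability-to-exceed map can be sketched as a kernel-weighted exceedance frequency with one bandwidth per variable; here two spatial coordinates and synthetic lognormal IRC data stand in for the study's richer predictor set:

        import numpy as np

        def exceedance_probability(train_x, train_y, query_x, threshold=300.0,
                                   bandwidths=(5.0, 5.0)):
            """Kernel-weighted local frequency of IRC values above `threshold`
            (Bq/m3), with one bandwidth per predictor variable."""
            X, y, Q = map(np.asarray, (train_x, train_y, query_x))
            h = np.asarray(bandwidths, dtype=float)
            d2 = (((Q[:, None, :] - X[None, :, :]) / h) ** 2).sum(-1)
            w = np.exp(-0.5 * d2)
            w /= w.sum(axis=1, keepdims=True)
            return w @ (y > threshold).astype(float)

        rng = np.random.default_rng(0)
        coords = rng.uniform(0, 100, (2000, 2))        # synthetic site grid, km
        irc = rng.lognormal(mean=4.3 + 0.01 * coords[:, 0], sigma=0.8)
        queries = np.array([[10.0, 50.0], [90.0, 50.0]])
        print(np.round(exceedance_probability(coords, irc, queries), 3))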

  2. Rock mass evaluation for predicting tunnel constructability in the preliminary investigation stage. Phenomena causing difficult tunneling and rockburst prediction

    International Nuclear Information System (INIS)

    Shin, Koichi; Sawada, Masataka; Inohara, Yoshiki; Shidahara, Takumi; Hatano, Teruyoshi

    2011-01-01

    For the selection of the Detailed Investigation Areas for HLW disposal, predicting tunnel constructability is one of the requirements, together with assessing long-term safety. This report is the first of three papers dealing with the evaluation of tunnel constructability. This paper deals with the geological factors related to difficult tunneling, such as squeezing, rockburst, and others, and with the prediction of rockburst. The second paper will deal with the prediction of squeezing, and the third with the engineering characteristics of rock mass through rock mass classification. The analysis of difficult tunneling is based on more than 500 tunneling reports covering about 280 tunnel constructions. The causes of difficult tunneling are related to (1) underground water, (2) mechanical properties of the rock, or (3) others such as gas. The geological factors for excessive water inflow are porous volcanic products of the Quaternary, fault crush zones and hydrothermally altered zones of the Green Tuff area, and degenerated mixed rock in accretionary complexes. The geological factors for squeezing are solfataric clay in Quaternary volcanic zones, fault crush zones and hydrothermally altered zones of the Green Tuff area, and mudstone and fault crush zones of sedimentary rock of the Neogene and later. Information useful for predicting rockburst has been gathered from previous reports. In the preliminary investigation stage, geological survey, geophysical survey and borehole survey from the surface are the sources of information; therefore rock type, P-wave velocity from seismic exploration and in-situ rock stress from hydrofracturing have been considered. The majority of rockburst events occurred in granitic rock, excluding coal mines, where a different kind of rockburst occurred at pillars. The P-wave velocity was around 5 km/s in the rocks where rockburst events occurred. The horizontal maximum and minimum stresses SH and Sh have been tested as a criterion for rockburst. It has been

  3. Predicting Software Projects Cost Estimation Based on Mining Historical Data

    OpenAIRE

    Najadat, Hassan; Alsmadi, Izzat; Shboul, Yazan

    2012-01-01

    In this research, a hybrid cost estimation model is proposed to produce a realistic prediction model that takes into consideration software project, product, process, and environmental elements. A cost estimation dataset is built from a large number of open source projects. Those projects are divided into three domains: communication, finance, and game projects. Several data mining techniques are used to classify software projects in terms of their development complexity. Data mining techniqu...

  4. Construct-level predictive validity of educational attainment and intellectual aptitude tests in medical student selection: meta-regression of six UK longitudinal studies

    Science.gov (United States)

    2013-01-01

    Background: Measures used for medical student selection should predict future performance during training. A problem for any selection study is that predictor-outcome correlations are known only in those who have been selected, whereas selectors need to know how measures would predict in the entire pool of applicants. That problem of interpretation can be solved by calculating construct-level predictive validity, an estimate of the true predictor-outcome correlation across the range of applicant abilities. Methods: Construct-level predictive validities were calculated in six cohort studies of medical student selection and training (student entry, 1972 to 2009) for a range of predictors, including A-levels, General Certificates of Secondary Education (GCSEs)/O-levels, and aptitude tests (AH5 and UK Clinical Aptitude Test (UKCAT)). Outcomes included undergraduate basic medical science and finals assessments, as well as postgraduate measures of Membership of the Royal Colleges of Physicians of the United Kingdom (MRCP(UK)) performance and entry in the Specialist Register. Construct-level predictive validity was calculated with the method of Hunter, Schmidt and Le (2006), adapted to correct for right-censorship of examination results due to grade inflation. Results: Meta-regression analyzed 57 separate predictor-outcome correlations (POCs) and construct-level predictive validities (CLPVs). Mean CLPVs are substantially higher (.450) than mean POCs (.171). Mean CLPVs for first-year examinations were high for A-levels (.809; CI: .501 to .935), and lower for GCSEs/O-levels (.332; CI: .024 to .583) and UKCAT (mean = .245; CI: .207 to .276). A-levels had higher CLPVs for all undergraduate and postgraduate assessments than did GCSEs/O-levels and intellectual aptitude tests. CLPVs of educational attainment measures decline somewhat during training, but continue to predict postgraduate performance. Intellectual aptitude tests have lower CLPVs than A-levels or GCSEs

  5. Validation of generic cost estimates for construction-related activities at nuclear power plants: Final report

    International Nuclear Information System (INIS)

    Simion, G.; Sciacca, F.; Claiborne, E.; Watlington, B.; Riordan, B.; McLaughlin, M.

    1988-05-01

    This report represents a validation study of the cost methodologies and quantitative factors derived in Labor Productivity Adjustment Factors and Generic Methodology for Estimating the Labor Cost Associated with the Removal of Hardware, Materials, and Structures From Nuclear Power Plants. This cost methodology was developed to support NRC analysts in determining generic estimates of removal, installation, and total labor costs for construction-related activities at nuclear generating stations. In addition to the validation discussion, this report reviews the generic cost analysis methodology employed. It also discusses each of the individual cost factors used in estimating the costs of physical modifications at nuclear power plants. The generic estimating approach presented uses the "greenfield" or new plant construction installation costs compiled in the Energy Economic Data Base (EEDB) as a baseline. These baseline costs are then adjusted to account for labor productivity, radiation fields, learning curve effects, and impacts on ancillary systems or components. For comparisons of estimated vs actual labor costs, approximately four dozen actual cost data points (as reported by 14 nuclear utilities) were obtained. Detailed background information was collected on each individual data point to give the best understanding possible so that the labor productivity factors, removal factors, etc., could judiciously be chosen. This study concludes that cost estimates that are typically within 40% of the actual values can be generated by prudently using the methodologies and cost factors investigated herein.

  6. A model predictive control approach combined unscented Kalman filter vehicle state estimation in intelligent vehicle trajectory tracking

    Directory of Open Access Journals (Sweden)

    Hongxiao Yu

    2015-05-01

    Trajectory tracking and state estimation are significant in motion planning and intelligent vehicle control. This article focuses on the model predictive control approach for the trajectory tracking of intelligent vehicles and on state estimation for the nonlinear vehicle system. The constraints on the system states are considered when applying the model predictive control method to the practical problem, while a 4-degree-of-freedom vehicle model and an unscented Kalman filter are proposed to estimate the vehicle states. The estimated states of the vehicle are used to provide the model predictive controller with real-time state feedback and to judge vehicle stability. Furthermore, in order to decrease the cost of solving the nonlinear optimization, a linear time-varying model predictive control is used at each time step. The effectiveness of the proposed vehicle state estimation and model predictive control method is tested in a driving simulator. The results of simulations and experiments show that good and robust performance is achieved for trajectory tracking and state estimation in different scenarios.
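
    The sigma-point propagation at the heart of an unscented Kalman filter can be sketched compactly (a generic unscented transform; the article's 4-DOF vehicle model and full filter are not reproduced):

        import numpy as np

        def unscented_transform(mean, cov, f, alpha=1.0, beta=2.0, kappa=1.0):
            """Propagate a Gaussian (mean, cov) through a nonlinear map f with
            the standard sigma-point construction, the core step a UKF repeats
            for both its prediction and update stages."""
            n = len(mean)
            lam = alpha**2 * (n + kappa) - n
            S = np.linalg.cholesky((n + lam) * cov)   # matrix square root
            sigma = np.vstack([mean, mean + S.T, mean - S.T])   # 2n+1 points
            wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
            wc = wm.copy()
            wm[0] = lam / (n + lam)
            wc[0] = lam / (n + lam) + (1.0 - alpha**2 + beta)
            Y = np.array([f(s) for s in sigma])
            y_mean = wm @ Y
            d = Y - y_mean
            y_cov = (wc[:, None] * d).T @ d
            return y_mean, y_cov

        # Toy use: push a Gaussian estimate of (yaw, speed) through one
        # kinematic step to get the predicted displacement uncertainty.
        def step(x, dt=0.1):
            yaw, v = x
            return np.array([v * np.cos(yaw) * dt, v * np.sin(yaw) * dt])

        m, P = unscented_transform(np.array([0.3, 10.0]),
                                   np.diag([0.05**2, 0.5**2]), step)
        print(m, np.diag(P))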

  7. Estimation of wind erosion from construction of a railway in arid Northwest China

    Directory of Open Access Journals (Sweden)

    Benli Liu

    2017-06-01

    A state-of-the-art wind erosion simulation model, the Wind Erosion Prediction System (WEPS), and the United States Environmental Protection Agency's AP-42 emission factors formula were combined to evaluate wind-blown dust emissions from the various construction units of a railway construction project in the dry Gobi land of Northwest China. The influences of climatic factors (temperature, precipitation, wind speed and direction), soil condition, protective measures, and construction disturbance were taken into account. Driven by daily and sub-daily climate data and using detailed management files, the process-based WEPS model was able to express the beginning, active, and ending phases of construction, as well as the degree of disturbance, for the entire scope of a construction project. The Lanzhou-Xinjiang High-speed Railway was selected as a representative case study because of the diversity of climate, soil, and working-schedule conditions that could be analyzed. Wind erosion from different working units, including the building of roadbeds, bridges, plants and temporary houses, earth spoil and borrow pit areas, and vehicle transportation, was calculated. Total wind erosion emissions of 7406 t were obtained for the first construction area of section LXS-15, which is 14.877 km long, for quantitative analysis. The method used is applicable for evaluating wind erosion from other complex surface-disturbance projects.

  8. Construction of ground-state preserving sparse lattice models for predictive materials simulations

    Science.gov (United States)

    Huang, Wenxuan; Urban, Alexander; Rong, Ziqin; Ding, Zhiwei; Luo, Chuan; Ceder, Gerbrand

    2017-08-01

    First-principles based cluster expansion models are the dominant approach in ab initio thermodynamics of crystalline mixtures enabling the prediction of phase diagrams and novel ground states. However, despite recent advances, the construction of accurate models still requires a careful and time-consuming manual parameter tuning process for ground-state preservation, since this property is not guaranteed by default. In this paper, we present a systematic and mathematically sound method to obtain cluster expansion models that are guaranteed to preserve the ground states of their reference data. The method builds on the recently introduced compressive sensing paradigm for cluster expansion and employs quadratic programming to impose constraints on the model parameters. The robustness of our methodology is illustrated for two lithium transition metal oxides with relevance for Li-ion battery cathodes, i.e., Li2xFe2(1-x)O2 and Li2xTi2(1-x)O2, for which the construction of cluster expansion models with compressive sensing alone has proven to be challenging. We demonstrate that our method not only guarantees ground-state preservation on the set of reference structures used for the model construction, but also show that out-of-sample ground-state preservation up to relatively large supercell size is achievable through a rapidly converging iterative refinement. This method provides a general tool for building robust, compressed and constrained physical models with predictive power.
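
    The constrained fit can be sketched as an l1-regularized least squares with linear ground-state inequalities, here via the cvxpy modelling package (assumed available; the sizes, data and the single assumed ground state are synthetic):

        import cvxpy as cp
        import numpy as np

        rng = np.random.default_rng(0)

        # Toy cluster-expansion fit E ~ Pi @ J, where Pi holds cluster
        # correlation functions; all sizes and data are synthetic.
        n_struct, n_clusters = 40, 25
        Pi = rng.uniform(-1, 1, (n_struct, n_clusters))
        J_true = np.zeros(n_clusters)
        J_true[:4] = [-1.0, 0.4, -0.2, 0.1]
        E = Pi @ J_true + rng.normal(0, 0.01, n_struct)

        # Ground-state preservation as linear inequalities: require the
        # assumed ground state (structure 0) to stay below all competitors.
        margin = 1e-3
        G = Pi[0:1, :] - Pi[1:, :]           # rows: Pi_gs - Pi_competitor

        J = cp.Variable(n_clusters)
        obj = cp.Minimize(cp.sum_squares(Pi @ J - E) + 0.05 * cp.norm1(J))
        cp.Problem(obj, [G @ J <= -margin]).solve()
        print(np.round(J.value[:6], 3))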

  9. SAS-macros for estimation and prediction in a model of the electricity consumption

    DEFF Research Database (Denmark)

    1998-01-01

    ``SAS-macros for estimation and prediction in a model of the electricity consumption'' is a large collection of SAS-macros for handling a model of the electricity consumption in Eastern Denmark. The macros are installed at Elkraft, Ballerup.

  10. Estimation of resource savings due to fly ash utilization in road construction

    Energy Technology Data Exchange (ETDEWEB)

    Kumar, Subodh; Patil, C.B. [Centre for Energy Studies, Indian Institute of Technology, New Delhi 110016 (India)

    2006-08-15

    A methodology for estimating the natural resource savings due to fly ash utilization in road construction in India is presented. Analytical expressions have been developed for the savings of various resources, namely soil, stone aggregate, stone chips, sand and cement, in the embankment, granular sub-base (GSB), water bound macadam (WBM) and pavement quality concrete (PQC) layers of a fly ash based road formation with flexible and rigid pavements of a given geometry. The quantity of fly ash utilized in these layers of different pavements has also been quantified. In the present study, the maximum amount of resource savings is found in the GSB layer, followed by WBM and the other layers of the pavement. The quantity of soil saved increases asymptotically with the rise in embankment height. The results of a financial analysis based on Indian fly ash based road construction cost data indicate that the savings in construction cost decrease with the lead (haul distance), and the investment in this alternative is found to be financially attractive only for leads less than 60 and 90 km for flexible and rigid pavements, respectively. (author)

  11. Lévy matters VI Lévy-type processes moments, construction and heat kernel estimates

    CERN Document Server

    Kühn, Franziska

    2017-01-01

    Presenting some recent results on the construction and the moments of Lévy-type processes, the focus of this volume is on a new existence theorem, which is proved using a parametrix construction. Applications range from heat kernel estimates for a class of Lévy-type processes to existence and uniqueness theorems for Lévy-driven stochastic differential equations with Hölder continuous coefficients. Moreover, necessary and sufficient conditions for the existence of moments of Lévy-type processes are studied and some estimates on moments are derived. Lévy-type processes behave locally like Lévy processes but, in contrast to Lévy processes, they are not homogeneous in space. Typical examples are processes with varying index of stability and solutions of Lévy-driven stochastic differential equations. This is the sixth volume in a subseries of the Lecture Notes in Mathematics called Lévy Matters. Each volume describes a number of important topics in the theory or applications of Lévy processes and pays ...

  12. ConStruct: Improved construction of RNA consensus structures

    Directory of Open Access Journals (Sweden)

    Steger Gerhard

    2008-04-01

    Background: Aligning homologous non-coding RNAs (ncRNAs) correctly in terms of sequence and structure is an unresolved problem, due to both mathematical complexity and imperfect scoring functions. High quality alignments, however, are a prerequisite for most consensus structure prediction approaches, homology searches, and tools for phylogeny inference. Automatically created ncRNA alignments often need manual corrections, yet this manual refinement is tedious and error-prone. Results: We present an extended version of CONSTRUCT, a semi-automatic, graphical tool suitable for creating RNA alignments correct in terms of both consensus sequence and consensus structure. For this purpose CONSTRUCT combines sequence alignment, thermodynamic data and various measures of covariation. One important feature is that the user is guided during the alignment correction step by a consensus dotplot, which displays all thermodynamically optimal base pairs and the corresponding covariation. Once the initial alignment is corrected, optimal and suboptimal secondary structures as well as tertiary interactions can be predicted. We demonstrate CONSTRUCT's ability to guide the user in correcting an initial alignment, and show an example of optimal secondary consensus structure prediction on very hard-to-align SECIS elements. Moreover we use CONSTRUCT to predict tertiary interactions from sequences of the internal ribosome entry site of CrPV-like viruses. In addition we show that alignments specifically designed for benchmarking can easily be optimized using CONSTRUCT, although they share very little sequence identity. Conclusion: CONSTRUCT's graphical interface allows for easy alignment correction based on and guided by predicted and known structural constraints. It combines several algorithms for prediction of secondary consensus structure and even tertiary interactions. The CONSTRUCT package can be downloaded from the URL listed in the Availability and

  13. Estimating the state of a geophysical system with sparse observations: time delay methods to achieve accurate initial states for prediction

    Science.gov (United States)

    An, Zhe; Rey, Daniel; Ye, Jingxin; Abarbanel, Henry D. I.

    2017-01-01

    The problem of forecasting the behavior of a complex dynamical system through analysis of observational time-series data becomes difficult when the system expresses chaotic behavior and the measurements are sparse, in space and/or time. Despite the fact that this situation is quite typical across many fields, including numerical weather prediction, the issue of whether the available observations are "sufficient" for generating successful forecasts is still not well understood. An analysis by Whartenby et al. (2013) found that in the context of the nonlinear shallow water equations on a β plane, standard nudging techniques require observing approximately 70% of the full set of state variables. Here we examine the same system using a method introduced by Rey et al. (2014a), which generalizes standard nudging methods to utilize time-delayed measurements. We show that in certain circumstances, it provides a sizable reduction in the number of observations required to construct accurate estimates and high-quality predictions. In particular, we find that this estimate of 70% can be reduced to about 33% using time delays, and even further if Lagrangian drifter locations are also used as measurements.
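
    Standard nudging, the baseline that the time-delay method generalizes, fits in a few lines; here a Lorenz-63 toy system with only x observed (the paper's shallow water setting and the delay-augmented coupling of Rey et al. are not reproduced):

        import numpy as np

        def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
            x, y, z = s
            return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

        # "Truth" run of Lorenz-63; only the x component is observed.
        dt, n = 0.005, 4000
        truth = np.empty((n + 1, 3))
        truth[0] = [1.0, 1.0, 1.0]
        for i in range(n):
            truth[i + 1] = truth[i] + dt * lorenz(truth[i])  # Euler, fine for a sketch
        obs_x = truth[:, 0]

        # Standard nudging: relax the model x toward the observed x while
        # y and z evolve freely; k sets the coupling strength.
        k = 20.0
        est = np.array([5.0, -5.0, 20.0])                    # wrong initial state
        for i in range(n):
            nudge = np.array([k * (obs_x[i] - est[0]), 0.0, 0.0])
            est = est + dt * (lorenz(est) + nudge)
        print("final state error:", np.linalg.norm(est - truth[-1]))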

  14. Prediction of gamma exposure rates in large nuclear craters

    Energy Technology Data Exchange (ETDEWEB)

    Tami, Thomas M; Day, Walter C [U.S. Army Engineer Nuclear Cratering Group, Lawrence Radiation Laboratory, Livermore, CA (United States)

    1970-05-15

    In many civil engineering applications of nuclear explosives there is the need to reenter the crater and lip area as soon as possible after the detonation to carry out conventional construction activities. These construction activities, however, must be delayed until the gamma dose rate, or exposure rate, in and around the crater decays to acceptable levels. To estimate the time of reentry for post-detonation construction activities, the exposure rate in the crater and lip areas must be predicted as a function of time after detonation. An accurate prediction permits a project planner to effectively schedule post-detonation activities.

  15. [Application of predictive model to estimate concentrations of chemical substances in the work environment].

    Science.gov (United States)

    Kupczewska-Dobecka, Małgorzata; Czerczak, Sławomir; Jakubowski, Marek; Maciaszek, Piotr; Janasik, Beata

    2010-01-01

    Based on the Estimation and Assessment of Substance Exposure (EASE) predictive model implemented in the European Union System for the Evaluation of Substances (EUSES 2.1), exposure to three chosen organic solvents (toluene, ethyl acetate and acetone) was estimated and compared with the results of measurements in workplaces. Prior to validation, the EASE model was pretested using three exposure scenarios. The scenarios differed in the pattern-of-use branch of the decision tree. Five substances were chosen for the test: 1,4-dioxane, methyl tert-butyl ether, diethylamine, 1,1,1-trichloroethane and bisphenol A. After testing the EASE model, the next step was validation, by estimating the exposure level and comparing it with the results of measurements in the workplace. We used the results of measurements of toluene, ethyl acetate and acetone concentrations in the work environments of a paint and lacquer factory, a shoe factory and a refinery. Three types of exposure scenarios, adaptable to the description of working conditions, were chosen to estimate inhalation exposure. Comparison of the calculated exposure to toluene, ethyl acetate and acetone with measurements in workplaces showed that the model predictions are comparable with the measurement results. Only for low concentration ranges were the measured concentrations higher than those predicted. EASE is a clear, consistent system, which can be successfully used as an additional component of inhalation exposure estimation. If measurement data are available, they should be preferred to values estimated from models. In addition to inhalation exposure estimation, the EASE model makes it possible not only to assess exposure-related risk but also to predict workers' dermal exposure.

  16. Calculation of solar irradiation prediction intervals combining volatility and kernel density estimates

    International Nuclear Information System (INIS)

    Trapero, Juan R.

    2016-01-01

    In order to integrate solar energy into the grid it is important to predict the solar radiation accurately, as forecast errors can lead to significant costs. Recently, the growing number of statistical approaches coping with this problem has yielded a prolific literature. In general terms, the main research discussion is centred on selecting the "best" forecasting technique in terms of accuracy. However, users of such forecasts require, apart from point forecasts, information about the variability of such forecasts in order to compute prediction intervals. In this work, we analyze kernel density estimation approaches, volatility forecasting models and combinations of the two in order to improve prediction interval performance. The results show that an optimal combination, in terms of prediction interval statistical tests, can achieve the desired confidence level with a lower average interval width. Data from a facility located in Spain are used to illustrate our methodology. - Highlights: • This work explores uncertainty forecasting models to build prediction intervals. • Kernel density estimators, exponential smoothing and GARCH models are compared. • An optimal combination of methods provides the best results. • A good compromise between coverage and average interval width is shown.
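
    One way to combine a volatility model with a distributional estimate of the errors: scale standardized-residual quantiles by an EWMA volatility (empirical quantiles stand in for the kernel density estimate here; all data and parameters are synthetic):

        import numpy as np

        def ewma_volatility(resid, lam=0.94):
            """Exponentially weighted volatility (RiskMetrics-style), a
            simple stand-in for the GARCH-type models discussed above."""
            var = np.empty_like(resid)
            var[0] = resid.var()
            for t in range(1, len(resid)):
                var[t] = lam * var[t - 1] + (1.0 - lam) * resid[t - 1] ** 2
            return np.sqrt(var)

        rng = np.random.default_rng(0)
        n = 1000
        point_fc = 500 + 200 * np.sin(np.linspace(0, 20, n))  # toy forecasts
        resid = rng.standard_t(5, n) * 30.0                   # heavy-tailed errors

        sigma = ewma_volatility(resid)
        z = resid / sigma                                     # standardized errors
        # Empirical quantiles of z; a kernel density estimate of z would
        # smooth this step in the combined approach.
        q_lo, q_hi = np.quantile(z, [0.05, 0.95])
        lower, upper = point_fc + q_lo * sigma, point_fc + q_hi * sigma
        cover = np.mean((resid >= q_lo * sigma) & (resid <= q_hi * sigma))
        print(f"empirical coverage of the 90% interval: {cover:.1%}")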

  17. SNP-based heritability estimates of the personality dimensions and polygenic prediction of both neuroticism and major depression: findings from CONVERGE.

    Science.gov (United States)

    Docherty, A R; Moscati, A; Peterson, R; Edwards, A C; Adkins, D E; Bacanu, S A; Bigdeli, T B; Webb, B T; Flint, J; Kendler, K S

    2016-10-25

    Biometrical genetic studies suggest that the personality dimensions, including neuroticism, are moderately heritable (~0.4 to 0.6). Quantitative analyses that aggregate the effects of many common variants have recently further informed genetic research on European samples. However, there has been limited research to date on non-European populations. This study examined the personality dimensions in a large sample of Han Chinese descent (N = 10 064) from the China, Oxford and VCU Experimental Research on Genetic Epidemiology (CONVERGE) study, aimed at identifying genetic risk factors for recurrent major depression among a rigorously ascertained cohort. The heritability of neuroticism as measured by the Eysenck Personality Questionnaire (EPQ) was estimated to be low but statistically significant at 10% (s.e. = 0.03, P = 0.0001). In addition to EPQ neuroticism, based on a three-factor model, data for the Big Five (BF) personality dimensions (neuroticism, openness, conscientiousness, extraversion and agreeableness) measured by the Big Five Inventory were available for controls (n = 5596). Heritability estimates of the BF were not statistically significant despite high power (>0.85) to detect heritabilities of 0.10. Polygenic risk scores constructed by best linear unbiased prediction weights applied to split-half samples failed to significantly predict any of the personality traits, but polygenic risk for neuroticism, calculated with LDpred and based on predictive variants previously identified from European populations (N = 171 911), significantly predicted major depressive disorder case-control status (P = 0.0004) after false discovery rate correction. The scores also significantly predicted EPQ neuroticism (P = 6.3 × 10⁻⁶). Factor analytic results for the measures indicated that any differences in heritabilities across samples may be due to genetic variation or variation in haplotype structure between samples, rather than measurement non-invariance. Findings demonstrate that neuroticism

  18. Revealing life-history traits by contrasting genetic estimations with predictions of effective population size.

    Science.gov (United States)

    Greenbaum, Gili; Renan, Sharon; Templeton, Alan R; Bouskila, Amos; Saltz, David; Rubenstein, Daniel I; Bar-David, Shirli

    2017-12-22

    Effective population size, a central concept in conservation biology, is now routinely estimated from genetic surveys and can also be theoretically predicted from demographic, life-history, and mating-system data. By evaluating the consistency of theoretical predictions with the empirically estimated effective size, insights can be gained regarding life-history characteristics and the relative impact of different life-history traits on genetic drift. These insights can be used to design and inform management strategies aimed at increasing effective population size. We demonstrated this approach by addressing the conservation of a reintroduced population of Asiatic wild ass (Equus hemionus). We estimated the variance effective size (Nev) from genetic data (Nev=24.3) and formulated predictions for the impacts on Nev of demography, polygyny, female variance in lifetime reproductive success (RS), and heritability of female RS. By contrasting the genetic estimation with theoretical predictions, we found that polygyny was the strongest factor affecting genetic drift, because only when accounting for polygyny were predictions consistent with the genetically measured Nev. The comparison of effective-size estimation and predictions indicated that 10.6% of the males mated per generation when heritability of female RS was unaccounted for (polygyny responsible for an 81% decrease in Nev) and 19.5% mated when it was accounted for (polygyny responsible for a 67% decrease in Nev). Heritability of female RS also affected Nev (h²f=0.91; heritability responsible for a 41% decrease in Nev). The low effective size is of concern, and we suggest that management actions focus on factors identified as strongly affecting Nev, namely, increasing the availability of artificial water sources to increase the number of dominant males contributing to the gene pool. This approach, evaluating life-history hypotheses in light of their impact on effective population size, and contrasting
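
    As background for the polygyny effect, the classical Wright correction below shows how an uneven number of breeding males and females depresses effective size. This textbook formula is offered as context only; it is not the authors' full prediction model.

```python
# Classical effective-size correction for unequal breeding sex ratio
# (Wright): Ne = 4 * Nm * Nf / (Nm + Nf). Illustrative numbers only.
def effective_size_sex_ratio(n_males, n_females):
    return 4 * n_males * n_females / (n_males + n_females)

# With strong polygyny (few breeding males), Ne drops well below N:
print(effective_size_sex_ratio(10, 100))   # ~36.4, despite 110 breeders
```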

  19. Method for calculating the variance and prediction intervals for biomass estimates obtained from allometric equations

    CSIR Research Space (South Africa)

    Kirton, A

    2010-08-01

    Full Text Available This report describes a method for calculating the variance and prediction intervals for biomass estimates obtained from allometric equations (A. Kirton, B. Scholes, S. Archibald; CSIR Ecosystem Processes and Dynamics, Natural Resources and the Environment, P.O. Box 395, Pretoria, 0001, South Africa). It shows how prediction intervals (confidence intervals for predicted values) for allometric estimates can be obtained, using an example of estimating tree biomass from stem diameter. It explains how to deal with relationships which are in the power function form - a common form...
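
    The sketch below illustrates the general recipe for such intervals, under the common convention (not necessarily the report's exact procedure) of fitting the power function as a linear regression on the log-log scale and back-transforming; the diameter and biomass numbers are made up.

```python
# Minimal sketch: 95% prediction interval for tree biomass from stem
# diameter via log-log OLS; data values are illustrative only.
import numpy as np
import statsmodels.api as sm

diameter = np.array([5., 8., 12., 15., 20., 25., 30.])     # cm (made up)
biomass = np.array([4., 14., 45., 80., 180., 330., 550.])  # kg (made up)

X = sm.add_constant(np.log(diameter))
fit = sm.OLS(np.log(biomass), X).fit()           # log(B) = a + b*log(D)

x_new = np.column_stack([np.ones(1), np.log([18.0])])
pred = fit.get_prediction(x_new)
lo, hi = pred.conf_int(obs=True, alpha=0.05).T   # obs=True -> prediction interval
# Back-transforming gives an interval on the original scale (note that
# exp of the fitted mean is median-, not mean-unbiased).
print(np.exp(lo), np.exp(hi))
```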

  20. Estimating cross-validatory predictive p-values with integrated importance sampling for disease mapping models.

    Science.gov (United States)

    Li, Longhai; Feng, Cindy X; Qiu, Shi

    2017-06-30

    An important statistical task in disease mapping problems is to identify divergent regions with unusually high or low risk of disease. Leave-one-out cross-validatory (LOOCV) model assessment is the gold standard for estimating predictive p-values that can flag such divergent regions. However, actual LOOCV is time-consuming because one needs to rerun a Markov chain Monte Carlo analysis for each posterior distribution in which an observation is held out as a test case. This paper introduces a new method, called integrated importance sampling (iIS), for estimating LOOCV predictive p-values with only Markov chain samples drawn from the posterior based on the full data set. The key step in iIS is that we integrate away the latent variables associated with the test observation with respect to their conditional distribution, without reference to the actual observation. By following the general theory for importance sampling, the formula used by iIS can be proved to be equivalent to the LOOCV predictive p-value. We compare iIS with three other existing methods in the literature on two disease mapping datasets. Our empirical results show that the predictive p-values estimated with iIS are almost identical to the predictive p-values estimated with actual LOOCV and outperform those given by the three existing methods, namely, the posterior predictive checking, the ordinary importance sampling, and the ghosting method by Marshall and Spiegelhalter (2003). Copyright © 2017 John Wiley & Sons, Ltd.
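
    As context, the ordinary importance sampling baseline that iIS is compared against can be written in a few lines. The sketch below estimates a leave-one-out predictive p-value from full-data posterior draws, with hypothetical inputs, and does not include the latent-variable integration that distinguishes iIS.

```python
# Minimal sketch: ordinary importance sampling for a LOO predictive
# p-value from full-data posterior draws (a baseline method, not iIS).
import numpy as np

def loo_pvalue_ois(loglik_i, stat_rep, stat_obs):
    """loglik_i: log p(y_i | theta_s) per posterior draw s;
    stat_rep: replicated test statistic per draw; stat_obs: observed."""
    w = np.exp(-(loglik_i - loglik_i.min()))   # 1/p(y_i|theta_s), stabilized
    w /= w.sum()                               # self-normalized weights
    return np.sum(w * (stat_rep >= stat_obs))
```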

  1. Mortality table construction

    Science.gov (United States)

    Sutawanir

    2015-12-01

    Mortality tables play an important role in actuarial studies such as life annuities, premium determination, premium reserves, pension plan valuation, and pension funding. Some well-known mortality tables are the CSO mortality table, the Indonesian Mortality Table, the Bowers mortality table, and the Japan Mortality Table. For actuarial applications, tables are constructed for different environments such as single decrement, double decrement, and multiple decrement. There are two approaches to mortality table construction: a mathematical approach and a statistical approach. Distribution models and estimation theory are the statistical concepts used in mortality table construction. This article discusses the statistical approach to mortality table construction. The distributional assumptions are the uniform death distribution (UDD) and constant force (exponential). Moment estimation and maximum likelihood are used to estimate the mortality parameter. Moment estimation methods are easier to manipulate than maximum likelihood estimation (MLE); however, the moment estimation method does not use the complete mortality data. Maximum likelihood exploits all available information in mortality estimation, though some MLE equations are complicated and must be solved numerically. The article focuses on single decrement estimation using moment and maximum likelihood estimation, and some extensions to double decrement are introduced. A simple dataset is used to illustrate mortality estimation and the resulting mortality table.
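
    As a small worked example of the constant-force assumption mentioned above: under it, the maximum likelihood estimate of the force of mortality is deaths divided by central exposure, and the one-year death probability follows by exponentiation. The numbers below are illustrative.

```python
# Minimal sketch: one-year death probability under the constant-force
# (exponential) assumption; mu-hat = deaths / central exposure is the MLE.
import math

def qx_constant_force(deaths, central_exposure):
    mu = deaths / central_exposure      # MLE of the force of mortality
    return 1.0 - math.exp(-mu)

print(qx_constant_force(12, 1000.0))    # ~0.0119 for 12 deaths / 1000 years
```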

  2. Constructing and predicting solitary pattern solutions for nonlinear time-fractional dispersive partial differential equations

    Science.gov (United States)

    Arqub, Omar Abu; El-Ajou, Ahmad; Momani, Shaher

    2015-07-01

    Building fractional mathematical models for specific phenomena and developing numerical or analytical solutions for these fractional mathematical models are crucial issues in mathematics, physics, and engineering. In this work, a new analytical technique for constructing and predicting solitary pattern solutions of time-fractional dispersive partial differential equations is proposed, based on the generalized Taylor series formula and the residual error function. The new approach provides solutions in the form of a rapidly convergent series with easily computable components using symbolic computation software. For method evaluation and validation, the proposed technique was applied to three different models and compared with some well-known methods. The resulting simulations clearly demonstrate the superiority and potential of the proposed technique in terms of performance, the accuracy with which substructures are preserved in the constructed solutions, and the prediction of solitary pattern solutions for time-fractional dispersive partial differential equations.

  3. Prediction of embankment settlement over soft soils.

    Science.gov (United States)

    2009-06-01

    The objective of this project was to review and verify the current design procedures used by TxDOT to estimate the total consolidation settlement, and its rate, in embankments constructed on soft soils. Methods to improve the settlement predictions ...

  4. Comparison of several measure-correlate-predict models using support vector regression techniques to estimate wind power densities. A case study

    International Nuclear Information System (INIS)

    Díaz, Santiago; Carta, José A.; Matías, José M.

    2017-01-01

    Highlights: • Eight measure-correlate-predict (MCP) models used to estimate the wind power densities (WPDs) at a target site are compared. • Support vector regressions are used as the main prediction techniques in the proposed MCPs. • The most precise MCP uses two sub-models which predict wind speed and air density in an unlinked manner. • The most precise model allows the construction of a bivariable (wind speed and air density) WPD probability density function. • MCP models trained to minimise wind speed prediction error do not minimise WPD prediction error. - Abstract: The long-term annual mean wind power density (WPD) is an important indicator of wind as a power source which is usually included in regional wind resource maps as useful prior information to identify potentially attractive sites for the installation of wind projects. In this paper, a comparison is made of eight proposed Measure-Correlate-Predict (MCP) models to estimate the WPDs at a target site. Seven of these models use the Support Vector Regression (SVR) and the eighth the Multiple Linear Regression (MLR) technique, which serves as a basis to compare the performance of the other models. In addition, a wrapper technique with 10-fold cross-validation has been used to select the optimal set of input features for the SVR and MLR models. Some of the eight models were trained to directly estimate the mean hourly WPDs at a target site. Others, however, were firstly trained to estimate the parameters on which the WPD depends (i.e. wind speed and air density) and then, using these parameters, to estimate the target-site mean hourly WPDs. The explanatory features considered are different combinations of the mean hourly wind speeds, wind directions and air densities recorded in 2014 at ten weather stations in the Canary Archipelago (Spain). The conclusions that can be drawn from the study undertaken include the argument that the most accurate method for the long-term estimation of WPDs requires the execution of a
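
    A minimal sketch of the SVR step in such an MCP chain is given below, with synthetic reference-station data standing in for real measurements; the WPD then follows from the standard relation WPD = 0.5·ρ·v³. This illustrates the general technique, not the paper's tuned models.

```python
# Minimal sketch: SVR-based measure-correlate-predict on synthetic data,
# then wind power density as 0.5 * rho * v^3.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
ref_speeds = rng.uniform(0, 15, size=(500, 3))    # three reference sites
target_speed = ref_speeds.mean(axis=1) + rng.normal(0, 1, 500)  # synthetic

mcp = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
mcp.fit(ref_speeds, target_speed)

v_hat = mcp.predict(ref_speeds[:5])
rho = 1.18                                        # assumed air density, kg/m3
print(0.5 * rho * v_hat ** 3)                     # WPD in W/m2
```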

  5. Uncertainty estimation of predictions of peptides' chromatographic retention times in shotgun proteomics.

    Science.gov (United States)

    Maboudi Afkham, Heydar; Qiu, Xuanbin; The, Matthew; Käll, Lukas

    2017-02-15

    Liquid chromatography is frequently used as a means to reduce the complexity of peptide mixtures in shotgun proteomics. For such systems, the time when a peptide is released from a chromatography column and registered in the mass spectrometer is referred to as the peptide's retention time. Using heuristics or machine learning techniques, previous studies have demonstrated that it is possible to predict the retention time of a peptide from its amino acid sequence. In this paper, we apply Gaussian Process Regression to the feature representation of a previously described predictor, Elude. Using this framework, we demonstrate that it is possible to estimate the uncertainty of the prediction made by the model, and we show how this uncertainty relates to the actual error of the prediction. In our experiments, we observe a strong correlation between the estimated uncertainty provided by Gaussian Process Regression and the actual prediction error. This relation provides us with new means for assessment of the predictions. We demonstrate how a subset of the peptides can be selected with lower prediction error compared to the whole set. We also demonstrate how such predicted standard deviations can be used for designing adaptive windowing strategies. Contact: lukas.kall@scilifelab.se. Our software and the data used in our experiments are publicly available and can be downloaded from https://github.com/statisticalbiotechnology/GPTime. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
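
    The sketch below shows the general pattern of obtaining a per-prediction standard deviation from Gaussian process regression, using scikit-learn and synthetic features rather than the Elude feature representation or the authors' GPTime code.

```python
# Minimal sketch: GP regression returning a mean and a standard deviation
# per prediction; features and targets are synthetic stand-ins.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
X = rng.random((100, 5))                        # stand-in peptide features
y = X @ np.array([2., -1., .5, 0., 1.]) + 0.1 * rng.normal(size=100)

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(X, y)
mean, std = gp.predict(X[:3], return_std=True)  # std = per-point uncertainty
print(mean, std)
```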

  6. Estimation of Separation Buffers for Wind-Prediction Error in an Airborne Separation Assistance System

    Science.gov (United States)

    Consiglio, Maria C.; Hoadley, Sherwood T.; Allen, B. Danette

    2009-01-01

    Wind prediction errors are known to affect the performance of automated air traffic management tools that rely on aircraft trajectory predictions. In particular, automated separation assurance tools, planned as part of the NextGen concept of operations, must be designed to account for and compensate for the impact of wind prediction errors and other system uncertainties. In this paper we describe a high-fidelity batch simulation study designed to estimate the separation distance required to compensate for the effects of wind-prediction errors at increasing traffic densities on an airborne separation assistance system. These experimental runs are part of the Safety Performance of Airborne Separation experiment suite that examines the safety implications of prediction errors and system uncertainties on airborne separation assurance systems. In this experiment, wind-prediction errors were varied between zero and forty knots while traffic density was increased to several times current traffic levels. In order to accurately measure the full unmitigated impact of wind-prediction errors, no uncertainty buffers were added to the separation minima. The goal of the study was to measure the impact of wind-prediction errors in order to estimate the additional separation buffers necessary to preserve separation and to provide a baseline for future analyses. Buffer estimations from this study will be used and verified in upcoming safety evaluation experiments under similar simulation conditions. Results suggest that the strategic airborne separation functions exercised in this experiment can sustain wind-prediction errors of up to 40 kts at current-day air traffic density with no additional separation distance buffer, and at eight times current-day density with no more than a 60% increase in separation distance buffer.

  7. Simultaneous discovery, estimation and prediction analysis of complex traits using a bayesian mixture model.

    Directory of Open Access Journals (Sweden)

    Gerhard Moser

    2015-04-01

    Full Text Available Gene discovery, estimation of heritability captured by SNP arrays, inference on genetic architecture and prediction analyses of complex traits are usually performed using different statistical models and methods, leading to inefficiency and loss of power. Here we use a Bayesian mixture model that simultaneously allows variant discovery, estimation of genetic variance explained by all variants and prediction of unobserved phenotypes in new samples. We apply the method to simulated data of quantitative traits and Wellcome Trust Case Control Consortium (WTCCC) data on disease and show that it provides accurate estimates of SNP-based heritability, produces unbiased estimators of risk in new samples, and that it can estimate genetic architecture by partitioning variation across hundreds to thousands of SNPs. We estimated that, depending on the trait, 2,633 to 9,411 SNPs explain all of the SNP-based heritability in the WTCCC diseases. The majority of those SNPs (>96%) had small effects, confirming a substantial polygenic component to common diseases. The proportion of the SNP-based variance explained by large effects (each SNP explaining 1% of the variance) varied markedly between diseases, ranging from almost zero for bipolar disorder to 72% for type 1 diabetes. Prediction analyses demonstrate that for diseases with major loci, such as type 1 diabetes and rheumatoid arthritis, Bayesian methods outperform profile scoring or mixed model approaches.

  8. Adaptive Model Predictive Vibration Control of a Cantilever Beam with Real-Time Parameter Estimation

    Directory of Open Access Journals (Sweden)

    Gergely Takács

    2014-01-01

    Full Text Available This paper presents an adaptive-predictive vibration control system using extended Kalman filtering for the joint estimation of system states and model parameters. A fixed-free cantilever beam equipped with piezoceramic actuators serves as a test platform to validate the proposed control strategy. Deflection readings taken at the end of the beam have been used to reconstruct the position and velocity information for a second-order state-space model. In addition to the states, the dynamic system has been augmented by the unknown model parameters: stiffness, damping constant, and a voltage/force conversion constant, characterizing the actuating effect of the piezoceramic transducers. The states and parameters of this augmented system have been estimated in real time, using the hybrid extended Kalman filter. The estimated model parameters have been applied to define the continuous state-space model of the vibrating system, which in turn is discretized for the predictive controller. The model predictive control algorithm generates state predictions and dual-mode quadratic cost prediction matrices based on the updated discrete state-space models. The resulting cost function is then minimized using quadratic programming to find the sequence of optimal but constrained control inputs. The proposed active vibration control system is implemented and evaluated experimentally to investigate the viability of the control method.
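
    For orientation, one predict/update cycle of an extended Kalman filter over an augmented state (states plus unknown parameters) has the generic form sketched below; the function handles are placeholders, and this is not the paper's specific beam model.

```python
# Minimal sketch: one generic EKF predict/update step; f/h are the state
# transition and measurement functions, F_jac/H_jac their Jacobians.
import numpy as np

def ekf_step(x, P, z, f, F_jac, h, H_jac, Q, R):
    # Predict: propagate the (augmented) state and its covariance.
    x_pred = f(x)
    F = F_jac(x)
    P_pred = F @ P @ F.T + Q
    # Update: correct the prediction with the measurement z.
    H = H_jac(x_pred)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)        # Kalman gain
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```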

  9. Estimation of building-related construction and demolition waste in Shanghai.

    Science.gov (United States)

    Ding, Tao; Xiao, Jianzhuang

    2014-11-01

    A methodology is proposed to estimate the quantity and composition of building-related construction and demolition (C&D) waste in a fast-developing region like Shanghai, PR China. The variety of structure types and building waste intensities arising from the evolution of building design and structural codes over different decades is considered in this regional C&D waste estimation study. It is concluded that approximately 13.71 million tons of C&D waste was generated in 2012 in Shanghai, of which more than 80% was concrete, bricks and blocks. The analysis from this study can help C&D waste regulators and researchers formulate precise policies and specifications. In fact, at least half of this enormous amount of C&D waste could be recycled if proper recycling technologies and measures were implemented. Appropriate management would be economically and environmentally beneficial to Shanghai, where the per capita annual output of C&D waste reached 842 kg in 2010. Copyright © 2014 Elsevier Ltd. All rights reserved.
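
    The arithmetic core of such regional estimates is an area-times-intensity account per structure type, as in the toy sketch below; all numbers are illustrative and are not the paper's Shanghai data.

```python
# Minimal sketch: regional C&D waste as demolished floor area multiplied
# by a per-type waste intensity; all numbers are hypothetical.
demolished_area = {"masonry": 2.1e6, "concrete": 3.4e6}   # m2 (hypothetical)
waste_intensity = {"masonry": 1.1, "concrete": 1.4}       # t/m2 (hypothetical)

total_tons = sum(demolished_area[k] * waste_intensity[k]
                 for k in demolished_area)
print(f"{total_tons / 1e6:.2f} million tons")
```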

  10. Base pair probability estimates improve the prediction accuracy of RNA non-canonical base pairs.

    Directory of Open Access Journals (Sweden)

    Michael F Sloma

    2017-11-01

    Full Text Available Prediction of RNA tertiary structure from sequence is an important problem, but generating accurate structure models for even short sequences remains difficult. Predictions of RNA tertiary structure tend to be least accurate in loop regions, where non-canonical pairs are important for determining the details of structure. Non-canonical pairs can be predicted using a knowledge-based model of structure that scores nucleotide cyclic motifs, or NCMs. In this work, a partition function algorithm is introduced that allows the estimation of base pairing probabilities for both canonical and non-canonical interactions. Pairs that are predicted to be probable are more likely to be found in the true structure than pairs of lower probability. Pair probability estimates can be further improved by predicting the structure conserved across multiple homologous sequences using the TurboFold algorithm. These pairing probabilities, used in concert with prior knowledge of the canonical secondary structure, allow accurate inference of non-canonical pairs, an important step towards accurate prediction of the full tertiary structure. Software to predict non-canonical base pairs and pairing probabilities is now provided as part of the RNAstructure software package.

  11. Base pair probability estimates improve the prediction accuracy of RNA non-canonical base pairs.

    Science.gov (United States)

    Sloma, Michael F; Mathews, David H

    2017-11-01

    Prediction of RNA tertiary structure from sequence is an important problem, but generating accurate structure models for even short sequences remains difficult. Predictions of RNA tertiary structure tend to be least accurate in loop regions, where non-canonical pairs are important for determining the details of structure. Non-canonical pairs can be predicted using a knowledge-based model of structure that scores nucleotide cyclic motifs, or NCMs. In this work, a partition function algorithm is introduced that allows the estimation of base pairing probabilities for both canonical and non-canonical interactions. Pairs that are predicted to be probable are more likely to be found in the true structure than pairs of lower probability. Pair probability estimates can be further improved by predicting the structure conserved across multiple homologous sequences using the TurboFold algorithm. These pairing probabilities, used in concert with prior knowledge of the canonical secondary structure, allow accurate inference of non-canonical pairs, an important step towards accurate prediction of the full tertiary structure. Software to predict non-canonical base pairs and pairing probabilities is now provided as part of the RNAstructure software package.

  12. Estimation and prediction of convection-diffusion-reaction systems from point measurement

    NARCIS (Netherlands)

    Vries, D.

    2008-01-01

    Different procedures for the estimation and prediction of systems characterized by convection, diffusion, and reactions, on the basis of point measurement data, have been studied. Two applications of these convection-diffusion-reaction (CDR) systems have been used as a case study of the

  13. Partial correlation matrix estimation using ridge penalty followed by thresholding and re-estimation.

    Science.gov (United States)

    Ha, Min Jin; Sun, Wei

    2014-09-01

    Motivated by the problem of constructing gene co-expression networks, we propose a statistical framework for estimating a high-dimensional partial correlation matrix by a three-step approach. We first obtain a penalized estimate of the partial correlation matrix using a ridge penalty. Next we select the non-zero entries of the partial correlation matrix by hypothesis testing. Finally we re-estimate the partial correlation coefficients at these non-zero entries. In the second step, the null distribution of the test statistics derived from penalized partial correlation estimates has not been established; we address this challenge by estimating the null distribution from the empirical distribution of the test statistics of all the penalized partial correlation estimates. Extensive simulation studies demonstrate the good performance of our method. Application to a yeast cell cycle gene expression dataset shows that our method delivers better predictions of the protein-protein interactions than the Graphical Lasso. © 2014, The International Biometric Society.
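
    The first of the three steps has a compact closed form: with a ridge-penalized precision matrix, partial correlations follow by standardizing its off-diagonal entries. The sketch below shows that step only (thresholding and re-estimation omitted), with a hypothetical data matrix.

```python
# Minimal sketch: ridge-penalized partial correlations (step one of the
# three-step approach); X is a hypothetical samples-by-genes matrix.
import numpy as np

def ridge_partial_corr(X, lam=0.1):
    S = np.cov(X, rowvar=False)
    p = S.shape[0]
    Theta = np.linalg.inv(S + lam * np.eye(p))   # penalized precision matrix
    d = np.sqrt(np.diag(Theta))
    pcor = -Theta / np.outer(d, d)               # rho_ij = -theta_ij / sqrt(theta_ii*theta_jj)
    np.fill_diagonal(pcor, 1.0)
    return pcor

X = np.random.default_rng(0).normal(size=(200, 10))
print(ridge_partial_corr(X).round(2))
```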

  14. Using prediction markets to estimate the reproducibility of scientific research

    Science.gov (United States)

    Dreber, Anna; Pfeiffer, Thomas; Almenberg, Johan; Isaksson, Siri; Wilson, Brad; Chen, Yiling; Nosek, Brian A.; Johannesson, Magnus

    2015-01-01

    Concerns about a lack of reproducibility of statistically significant results have recently been raised in many fields, and it has been argued that this lack comes at substantial economic costs. We here report the results from prediction markets set up to quantify the reproducibility of 44 studies published in prominent psychology journals and replicated in the Reproducibility Project: Psychology. The prediction markets predict the outcomes of the replications well and outperform a survey of market participants’ individual forecasts. This shows that prediction markets are a promising tool for assessing the reproducibility of published scientific results. The prediction markets also allow us to estimate probabilities for the hypotheses being true at different testing stages, which provides valuable information regarding the temporal dynamics of scientific discovery. We find that the hypotheses being tested in psychology typically have low prior probabilities of being true (median, 9%) and that a “statistically significant” finding needs to be confirmed in a well-powered replication to have a high probability of being true. We argue that prediction markets could be used to obtain speedy information about reproducibility at low cost and could potentially even be used to determine which studies to replicate to optimally allocate limited resources into replications. PMID:26553988

  15. Using prediction markets to estimate the reproducibility of scientific research.

    Science.gov (United States)

    Dreber, Anna; Pfeiffer, Thomas; Almenberg, Johan; Isaksson, Siri; Wilson, Brad; Chen, Yiling; Nosek, Brian A; Johannesson, Magnus

    2015-12-15

    Concerns about a lack of reproducibility of statistically significant results have recently been raised in many fields, and it has been argued that this lack comes at substantial economic costs. We here report the results from prediction markets set up to quantify the reproducibility of 44 studies published in prominent psychology journals and replicated in the Reproducibility Project: Psychology. The prediction markets predict the outcomes of the replications well and outperform a survey of market participants' individual forecasts. This shows that prediction markets are a promising tool for assessing the reproducibility of published scientific results. The prediction markets also allow us to estimate probabilities for the hypotheses being true at different testing stages, which provides valuable information regarding the temporal dynamics of scientific discovery. We find that the hypotheses being tested in psychology typically have low prior probabilities of being true (median, 9%) and that a "statistically significant" finding needs to be confirmed in a well-powered replication to have a high probability of being true. We argue that prediction markets could be used to obtain speedy information about reproducibility at low cost and could potentially even be used to determine which studies to replicate to optimally allocate limited resources into replications.

  16. Estimating uncertainty of inference for validation

    Energy Technology Data Exchange (ETDEWEB)

    Booker, Jane M [Los Alamos National Laboratory; Langenbrunner, James R [Los Alamos National Laboratory; Hemez, Francois M [Los Alamos National Laboratory; Ross, Timothy J [UNM

    2010-09-30

    We present a validation process based upon the concept that validation is an inference-making activity. This has always been true, but the association has not been as important before as it is now. Previously, theory had been confirmed by more data, and predictions were possible based on data. The process today is to infer from theory to code and from code to prediction, making the role of prediction somewhat automatic, and a machine function. Validation is defined as determining the degree to which a model and code is an accurate representation of experimental test data. Embedded in validation is the intention to use the computer code to predict. To predict is to accept the conclusion that an observable final state will manifest; therefore, prediction is an inference whose goodness relies on the validity of the code. Quantifying the uncertainty of a prediction amounts to quantifying the uncertainty of validation, and this involves the characterization of uncertainties inherent in theory/models/codes and the corresponding data. An introduction to inference making and its associated uncertainty is provided as a foundation for the validation problem. A mathematical construction for estimating the uncertainty in the validation inference is then presented, including a possibility distribution constructed to represent the inference uncertainty for validation under uncertainty. The estimation of inference uncertainty for validation is illustrated using data and calculations from Inertial Confinement Fusion (ICF). The ICF measurements of neutron yield and ion temperature were obtained for direct-drive inertial fusion capsules at the Omega laser facility. The glass capsules, containing the fusion gas, were systematically selected with the intent of establishing a reproducible baseline of high-yield 10^13-10^14 neutron output. The deuterium-tritium ratio in these experiments was varied to study its influence upon yield. This paper on validation inference is the

  17. How personal resources predict work engagement and self-rated performance among construction workers: a social cognitive perspective.

    Science.gov (United States)

    Lorente, Laura; Salanova, Marisa; Martínez, Isabel M; Vera, María

    2014-06-01

    Traditionally, research on psychosocial factors in the construction industry has focused mainly on the negative aspects of health and on outcomes such as occupational accidents. This study, however, focuses on the specific relationships among the different positive psychosocial factors shared by construction workers that could be responsible for occupational well-being and outcomes such as performance. The main objective of this study was to test whether personal resources predict self-rated job performance through job resources and work engagement. Following the predictions of Bandura's Social Cognitive Theory and the motivational process of the Job Demands-Resources Model, we expected the relationship between personal resources and performance to be fully mediated by job resources and work engagement. The sample consists of 228 construction workers. Structural equation modelling supports the research model. Personal resources (i.e. self-efficacy, mental and emotional competences) play a predicting role in the perception of job resources (i.e. job control and supervisor social support), which in turn leads to work engagement and self-rated performance. This study emphasises the crucial role that personal resources play in determining how people perceive job resources, thereby determining the levels of work engagement and, hence, their self-rated job performance. Theoretical and practical implications are discussed. © 2014 International Union of Psychological Science.

  18. Construction and operation costs of constructed wetlands treating wastewater.

    Science.gov (United States)

    Gkika, Dimitra; Gikas, Georgios D; Tsihrintzis, Vassilios A

    2014-01-01

    Design data from nine constructed wetlands (CW) facilities of various capacities (population equivalent (PE)) are used to estimate construction and operation costs, and then to derive empirical equations relating the required facility land area and the construction cost to PE. In addition, comparisons between the costs of CW facilities based on various alternative construction materials, i.e., reinforced concrete and earth structures (covered with either high density polyethylene or clay), are presented in relation to the required area. The results show that earth structures are economically advantageous. The derived equations can be used for providing a preliminary cost estimate of CW facilities for domestic wastewater treatment.

  19. Dynamic state estimation and prediction for real-time control and operation

    NARCIS (Netherlands)

    Nguyen, P.H.; Venayagamoorthy, G.K.; Kling, W.L.; Ribeiro, P.F.

    2013-01-01

    Real-time control and operation are crucial for dealing with the increasing complexity of modern power systems. To enable those functions effectively, a Dynamic State Estimation (DSE) function is required to provide accurate network state variables at the right moment and to predict their trends ahead. This

  20. Evaluating prediction uncertainty

    International Nuclear Information System (INIS)

    McKay, M.D.

    1995-03-01

    The probability distribution of a model prediction is presented as a proper basis for evaluating the uncertainty in a model prediction that arises from uncertainty in input values. Determination of important model inputs and subsets of inputs is made through comparison of the prediction distribution with conditional prediction probability distributions. Replicated Latin hypercube sampling and variance ratios are used in estimation of the distributions and in construction of importance indicators. The assumption of a linear relation between model output and inputs is not necessary for the indicators to be effective. A sequential methodology which includes an independent validation step is applied in two analysis applications to select subsets of input variables which are the dominant causes of uncertainty in the model predictions. Comparison with results from methods which assume linearity shows how those methods may fail. Finally, suggestions for treating structural uncertainty for submodels are presented
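
    To make the sampling step concrete, the sketch below propagates input uncertainty through a toy model with Latin hypercube sampling; the model and input ranges are invented for illustration, and the report's replication and variance-ratio machinery is not reproduced.

```python
# Minimal sketch: Latin hypercube sampling of three inputs pushed through
# a toy model to estimate the prediction variance.
import numpy as np
from scipy.stats import qmc

sampler = qmc.LatinHypercube(d=3, seed=0)
u = sampler.random(n=1000)                          # design on [0, 1)^3
x = qmc.scale(u, l_bounds=[0, 0, 0], u_bounds=[1, 2, 3])

y = x[:, 0] ** 2 + 0.5 * x[:, 1] + 0.1 * x[:, 2]    # toy model output
print(y.var())                                      # prediction variance
```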

  1. Interior noise analysis of a construction equipment cabin based on airborne and structure-borne noise predictions

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Sung Hee; Hong, Suk Yoon [Seoul National University, Seoul (Korea, Republic of); Song, Jee Hun [Chonnam National University, Gwangju (Korea, Republic of); Joo, Won Ho [Hyundai Heavy Industries Co. Ltd, Ulsan (Korea, Republic of)

    2012-04-15

    Noise from construction equipment affects not only surrounding residents, but also the operators of the machines. Noise that affects drivers must be evaluated during the preliminary design stage. This paper suggests an interior noise analysis procedure for construction equipment cabins. The analysis procedure, which can be used in the preliminary design stage, was investigated for airborne and structure-borne noise. The total interior noise of a cabin was predicted from the airborne noise analysis and structure-borne noise analysis. The analysis procedure consists of four steps: modeling, vibration analysis, acoustic analysis and total interior noise analysis. A mesh model of a cabin for numerical analysis was made at the modeling step. At the vibration analysis step, the mesh model was verified and modal analysis and frequency response analysis are performed. At the acoustic analysis step, the vibration results from the vibration analysis step were used as initial values for radiated noise analysis and noise reduction analysis. Finally, the total cabin interior noise was predicted using the acoustic results from the acoustic analysis step. Each step was applied to a cabin of a middle-sized excavator and verified by comparison with measured data. The cabin interior noise of a middle-sized wheel loader and a large-sized forklift were predicted using the analysis procedure of the four steps and were compared with measured data. The interior noise analysis procedure of construction equipment cabins is expected to be used during the preliminary design stage.

  2. Interior noise analysis of a construction equipment cabin based on airborne and structure-borne noise predictions

    International Nuclear Information System (INIS)

    Kim, Sung Hee; Hong, Suk Yoon; Song, Jee Hun; Joo, Won Ho

    2012-01-01

    Noise from construction equipment affects not only surrounding residents, but also the operators of the machines. Noise that affects drivers must be evaluated during the preliminary design stage. This paper suggests an interior noise analysis procedure for construction equipment cabins. The analysis procedure, which can be used in the preliminary design stage, was investigated for airborne and structure-borne noise. The total interior noise of a cabin was predicted from the airborne noise analysis and structure-borne noise analysis. The analysis procedure consists of four steps: modeling, vibration analysis, acoustic analysis and total interior noise analysis. A mesh model of a cabin for numerical analysis was made at the modeling step. At the vibration analysis step, the mesh model was verified and modal analysis and frequency response analysis are performed. At the acoustic analysis step, the vibration results from the vibration analysis step were used as initial values for radiated noise analysis and noise reduction analysis. Finally, the total cabin interior noise was predicted using the acoustic results from the acoustic analysis step. Each step was applied to a cabin of a middle-sized excavator and verified by comparison with measured data. The cabin interior noise of a middle-sized wheel loader and a large-sized forklift were predicted using the analysis procedure of the four steps and were compared with measured data. The interior noise analysis procedure of construction equipment cabins is expected to be used during the preliminary design stage.

  3. Dengue prediction by the web: Tweets are a useful tool for estimating and forecasting Dengue at country and city level.

    Directory of Open Access Journals (Sweden)

    Cecilia de Almeida Marques-Toledo

    2017-07-01

    Full Text Available Infectious diseases are a leading threat to public health. Accurate and timely monitoring of disease risk and progress can reduce their impact. Mentioning a disease in social networks is correlated with physician visits by patients, and can be used to estimate disease activity. Dengue is the fastest growing mosquito-borne viral disease, with an estimated annual incidence of 390 million infections, of which 96 million manifest clinically. Dengue burden is likely to increase in the future owing to trends toward increased urbanization, scarce water supplies and, possibly, environmental change. The epidemiological dynamic of Dengue is complex and difficult to predict, partly due to costly and slow surveillance systems. In this study, we aimed to quantitatively assess the usefulness of data acquired by Twitter for the early detection and monitoring of Dengue epidemics, both at country and city level, on a weekly basis. Here, we evaluated and demonstrated the potential of tweet-based models for Dengue estimation and forecasting, in comparison with other available web-based data, Google Trends and Wikipedia access logs. Also, we studied the factors that might influence the goodness-of-fit of the model. We built a simple model based on tweets that was able to 'nowcast', i.e. estimate disease numbers in the same week, but also 'forecast' disease in future weeks. At the country level, tweets are strongly associated with Dengue cases, and can estimate present and future Dengue cases until 8 weeks in advance. At city level, tweets are also useful for estimating Dengue activity. Our model can be applied successfully to small and less developed cities, suggesting a robust construction, even though it may be influenced by the incidence of the disease, the activity of Twitter locally, and social factors, including human development index and internet access. Tweets association with Dengue cases is valuable to assist traditional Dengue surveillance at real-time and low

  4. Dengue prediction by the web: Tweets are a useful tool for estimating and forecasting Dengue at country and city level.

    Science.gov (United States)

    Marques-Toledo, Cecilia de Almeida; Degener, Carolin Marlen; Vinhal, Livia; Coelho, Giovanini; Meira, Wagner; Codeço, Claudia Torres; Teixeira, Mauro Martins

    2017-07-01

    Infectious diseases are a leading threat to public health. Accurate and timely monitoring of disease risk and progress can reduce their impact. Mentioning a disease in social networks is correlated with physician visits by patients, and can be used to estimate disease activity. Dengue is the fastest growing mosquito-borne viral disease, with an estimated annual incidence of 390 million infections, of which 96 million manifest clinically. Dengue burden is likely to increase in the future owing to trends toward increased urbanization, scarce water supplies and, possibly, environmental change. The epidemiological dynamic of Dengue is complex and difficult to predict, partly due to costly and slow surveillance systems. In this study, we aimed to quantitatively assess the usefulness of data acquired by Twitter for the early detection and monitoring of Dengue epidemics, both at country and city level, on a weekly basis. Here, we evaluated and demonstrated the potential of tweet-based models for Dengue estimation and forecasting, in comparison with other available web-based data, Google Trends and Wikipedia access logs. Also, we studied the factors that might influence the goodness-of-fit of the model. We built a simple model based on tweets that was able to 'nowcast', i.e. estimate disease numbers in the same week, but also 'forecast' disease in future weeks. At the country level, tweets are strongly associated with Dengue cases, and can estimate present and future Dengue cases until 8 weeks in advance. At city level, tweets are also useful for estimating Dengue activity. Our model can be applied successfully to small and less developed cities, suggesting a robust construction, even though it may be influenced by the incidence of the disease, the activity of Twitter locally, and social factors, including human development index and internet access. The association of tweets with Dengue cases is valuable for assisting traditional Dengue surveillance in real time and at low cost. Tweets are
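
    The modeling core of such a nowcast can be as simple as a lagged regression from weekly tweet counts to weekly case counts; the sketch below uses synthetic numbers and scikit-learn, and illustrates the idea rather than the paper's fitted model.

```python
# Minimal sketch: nowcasting weekly Dengue cases from tweet counts with a
# lagged linear regression; all numbers are synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
tweets = rng.poisson(200, 104).astype(float)      # two years of weekly counts
cases = 0.4 * tweets + rng.normal(0, 10, 104)     # synthetic case counts

X = np.column_stack([tweets[:-1], tweets[1:]])    # last week + current week
y = cases[1:]
model = LinearRegression().fit(X, y)
print(model.predict(X[-1:]))                      # nowcast for latest week
```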

  5. Exploring the motivation jungle: Predicting performance on a novel task by investigating constructs from different motivation perspectives in tandem

    NARCIS (Netherlands)

    Nuland, H.J.C. van; Dusseldorp, E.; Martens, R.L.; Boekaerts, M.

    2010-01-01

    Different theoretical viewpoints on motivation make it hard to decide which model has the best potential to provide valid predictions of classroom performance. This study was designed to explore motivation constructs derived from different motivation perspectives that predict performance on a novel

  6. How personal resources predict work engagement and self-rated performance among construction workers: A social cognitive perspective

    OpenAIRE

    Lorente Prieto, Laura; Salanova Soria, Marisa; Martínez Martínez, Isabel M.; Vera Perea, María

    2014-01-01

    Traditionally, research focussing on psychosocial factors in the construction industry has focused mainly on the negative aspects of health and on results such as occupational accidents. This study, however, focuses on the specific relationships among the different positive psychosocial factors shared by construction workers that could be responsible for occupational well-being and outcomes such as performance. The main objective of this study was to test whether personal resources predict se...

  7. Comparisons of Crosswind Velocity Profile Estimates Used in Fast-Time Wake Vortex Prediction Models

    Science.gov (United States)

    Pruis, Mathew J.; Delisi, Donald P.; Ahmad, Nashat N.

    2011-01-01

    Five methods for estimating crosswind profiles used in fast-time wake vortex prediction models are compared in this study. Previous investigations have shown that temporal and spatial variations in the crosswind vertical profile have a large impact on the transport and time evolution of the trailing vortex pair. The most important crosswind parameters are the magnitude of the crosswind and the gradient in the crosswind shear. It is known that pulsed and continuous wave lidar measurements can provide good estimates of the wind profile in the vicinity of airports. In this study comparisons are made between estimates of the crosswind profiles from a priori information on the trajectory of the vortex pair as well as crosswind profiles derived from different sensors and a regional numerical weather prediction model.

  8. Construction and evaluation of FiND, a fall risk prediction model of inpatients from nursing data.

    Science.gov (United States)

    Yokota, Shinichiroh; Ohe, Kazuhiko

    2016-04-01

    To construct and evaluate an easy-to-use fall risk prediction model based on the daily condition of inpatients, from the secondary use of electronic medical record system data. The present authors scrutinized electronic medical record system data and created a dataset for analysis by including inpatient fall report data and Intensity of Nursing Care Needs data. The authors divided the analysis dataset into training data and testing data, then constructed the fall risk prediction model FiND from the training data, and tested the model using the testing data. The dataset for analysis contained 1,230,604 records from 46,241 patients. The sensitivity of the model constructed from the training data was 71.3% and the specificity was 66.0%. The verification result from the testing dataset was almost equivalent to the theoretical value. Although the model's accuracy did not surpass that of models developed in previous research, the authors believe FiND will be useful in medical institutions all over Japan because it is composed of few variables (only age, sex, and the Intensity of Nursing Care Needs items) and its accuracy on unseen data has been demonstrated. © 2016 Japan Academy of Nursing Science.
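
    A model of this shape, built from only a few routinely collected variables, can be sketched as a logistic regression on synthetic data; the variables mirror those named above, but the data and threshold are entirely hypothetical, and this is not the FiND model itself.

```python
# Minimal sketch: a few-variable fall-risk classifier on synthetic data
# (age, sex, care-needs score), echoing the design described above.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = np.column_stack([rng.integers(20, 95, 2000),    # age
                     rng.integers(0, 2, 2000),      # sex
                     rng.integers(0, 30, 2000)])    # care-needs score
y = (X[:, 0] + 2 * X[:, 2] + rng.normal(0, 15, 2000) > 110).astype(int)

clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict_proba(X[:3])[:, 1])               # per-patient fall risk
```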

  9. IN-CYLINDER MASS FLOW ESTIMATION AND MANIFOLD PRESSURE DYNAMICS FOR STATE PREDICTION IN SI ENGINES

    Directory of Open Access Journals (Sweden)

    Wojnar Sławomir

    2014-06-01

    Full Text Available The aim of this paper is to present a simple model of the intake manifold dynamics of a spark ignition (SI) engine and its possible application for estimation and control purposes. We focus on pressure dynamics, which may be regarded as the foundation for estimating future states and for designing model predictive control strategies suitable for maintaining the desired air fuel ratio (AFR). The flow rate measured at the inlet of the intake manifold and the in-cylinder flow estimation are considered as parts of the proposed model. In-cylinder flow estimation is crucial for engine control, where an accurate amount of aspirated air forms the basis for computing the manipulated variables. The solutions presented here are based on the mean value engine model (MVEM) approach, using the speed-density method. The proposed in-cylinder flow estimation method is compared to measured values in an experimental setting, while one-step-ahead prediction is illustrated using simulation results.
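
    The speed-density calculation at the heart of such estimators is compact enough to show directly; the sketch below uses the ideal-gas law for manifold air density and an assumed constant volumetric efficiency, whereas a real MVEM would map volumetric efficiency against speed and load.

```python
# Minimal sketch: speed-density in-cylinder air mass flow for a four-stroke
# engine; the volumetric efficiency value is an assumption.
def air_mass_flow(rpm, displacement_m3, map_pa, temp_k, vol_eff=0.85):
    R = 287.0                                   # J/(kg*K), dry air
    rho = map_pa / (R * temp_k)                 # ideal-gas manifold density
    # One intake event per cylinder every two revolutions (four-stroke).
    return vol_eff * rho * displacement_m3 * rpm / (2 * 60.0)   # kg/s

print(air_mass_flow(3000, 2.0e-3, 95e3, 310))   # ~0.045 kg/s, illustrative
```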

  10. Computer model for estimating electric utility environmental noise

    International Nuclear Information System (INIS)

    Teplitzky, A.M.; Hahn, K.J.

    1991-01-01

    This paper reports on an algorithm-based computer code for estimating environmental noise emissions from the operation and construction of electric power plants. The computer code (Model) is used to predict octave-band sound power levels for power plant operation and construction activities on the basis of the equipment operating characteristics, and it calculates off-site sound levels for each noise source and for an entire plant. Estimated noise levels are presented either as A-weighted sound level contours around the power plant or as octave-band levels at user-defined receptor locations. Calculated sound levels can be compared with user-designated noise criteria, and the program can assist the user in analyzing alternative noise control strategies.

  11. Evaluating the predictive performance of empirical estimators of natural mortality rate using information on over 200 fish species

    Science.gov (United States)

    Then, Amy Y.; Hoenig, John M; Hall, Norman G.; Hewitt, David A.

    2015-01-01

    Many methods have been developed in the last 70 years to predict the natural mortality rate, M, of a stock based on empirical evidence from comparative life history studies. These indirect or empirical methods are used in most stock assessments to (i) obtain estimates of M in the absence of direct information, (ii) check on the reasonableness of a direct estimate of M, (iii) examine the range of plausible M estimates for the stock under consideration, and (iv) define prior distributions for Bayesian analyses. The two most cited empirical methods have appeared in the literature over 2500 times to date. Despite the importance of these methods, there is no consensus in the literature on how well these methods work in terms of prediction error or how their performance may be ranked. We evaluate estimators based on various combinations of maximum age (tmax), growth parameters, and water temperature by seeing how well they reproduce >200 independent, direct estimates of M. We use tenfold cross-validation to estimate the prediction error of the estimators and to rank their performance. With updated and carefully reviewed data, we conclude that a tmax-based estimator performs the best among all estimators evaluated. The tmax-based estimators in turn perform better than the Alverson–Carney method based on tmax and the von Bertalanffy K coefficient, Pauly's method based on growth parameters and water temperature, and methods based just on K. It is possible to combine two independent methods by computing a weighted mean, but the improvement over the tmax-based methods is slight. Based on cross-validation prediction error, model residual patterns, model parsimony, and biological considerations, we recommend the use of a tmax-based estimator (M = 4.899 tmax^-0.916, prediction error = 0.32) when possible and a growth-based method (M = 4.118 K^0.73 L∞^-0.33, prediction error
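
    Because the abstract quotes the recommended estimators explicitly, they can be written down directly; units follow the usual conventions (tmax in years, K per year, L∞ in cm), the example inputs are illustrative, and the growth-based prediction error is cut off in the record above.

```python
# The two recommended empirical natural-mortality estimators, with the
# coefficients quoted in the abstract above.
def m_from_tmax(tmax):
    return 4.899 * tmax ** -0.916

def m_from_growth(K, L_inf):
    return 4.118 * K ** 0.73 * L_inf ** -0.33

print(m_from_tmax(20.0), m_from_growth(0.2, 60.0))   # illustrative inputs
```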

  12. The effect of using genealogy-based haplotypes for genomic prediction.

    Science.gov (United States)

    Edriss, Vahid; Fernando, Rohan L; Su, Guosheng; Lund, Mogens S; Guldbrandtsen, Bernt

    2013-03-06

    Genomic prediction uses two sources of information: linkage disequilibrium between markers and quantitative trait loci, and additive genetic relationships between individuals. One way to increase the accuracy of genomic prediction is to capture more linkage disequilibrium by regression on haplotypes instead of regression on individual markers. The aim of this study was to investigate the accuracy of genomic prediction using haplotypes based on local genealogy information. A total of 4429 Danish Holstein bulls were genotyped with the 50K SNP chip. Haplotypes were constructed using local genealogical trees. Effects of haplotype covariates were estimated with two types of prediction models: (1) assuming that effects had the same distribution for all haplotype covariates, i.e. the GBLUP method, and (2) assuming that a large proportion (π) of the haplotype covariates had zero effect, i.e. a Bayesian mixture method. About 7.5 times more covariate effects were estimated when fitting haplotypes based on local genealogical trees compared to fitting individual markers. Genealogy-based haplotype clustering slightly increased the accuracy of genomic prediction and, in some cases, decreased the bias of prediction. With the Bayesian method, the accuracy of prediction was less sensitive to the parameter π when fitting haplotypes compared to fitting markers. Use of haplotypes based on genealogy can slightly increase the accuracy of genomic prediction. Improved methods to cluster the haplotypes constructed from local genealogy could lead to additional gains in accuracy.

  13. Estimation of Costs and Durations of Construction of Urban Roads Using ANN and SVM

    Directory of Open Access Journals (Sweden)

    Igor Peško

    2017-01-01

    Full Text Available Offer preparation has always been a specific part of the building process with a significant impact on company business. Since income greatly depends on the offer's precision and on the balance between planned costs, both direct and overhead, and the desired profit, it is necessary to prepare a precise offer within the required time and with the available resources, which are always insufficient. The paper presents research on the precision that can be achieved when using artificial intelligence to estimate cost and duration in construction projects. Artificial neural networks (ANNs) and support vector machines (SVMs) are analysed and compared. The best SVM showed higher precision when estimating costs, with a mean absolute percentage error (MAPE) of 7.06%, compared to the most precise ANN, which achieved a MAPE of 25.38%. Estimation of works duration proved to be more difficult: the best MAPEs were 22.77% and 26.26% for the SVM and ANN, respectively.

  14. Chapter 16 - Predictive Analytics for Comprehensive Energy Systems State Estimation

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Yingchen [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Yang, Rui [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Hodge, Brian S [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Zhang, Jie [University of Texas at Dallas; Weng, Yang [Arizona State University

    2017-12-01

    Energy sustainability is a subject of concern to many nations in the modern world. It is critical for electric power systems to diversify energy supply to include systems with different physical characteristics, such as wind energy, solar energy, electrochemical energy storage, thermal storage, bio-energy systems, geothermal, and ocean energy. Each system has its own range of control variables and targets. To be able to operate such a complex energy system, big-data analytics become critical to achieve the goal of predicting energy supplies and consumption patterns, assessing system operation conditions, and estimating system states - all providing situational awareness to power system operators. This chapter presents data analytics and machine learning-based approaches to enable predictive situational awareness of the power systems.

  15. Atmospheric dispersion prediction and source estimation of hazardous gas using artificial neural network, particle swarm optimization and expectation maximization

    Science.gov (United States)

    Qiu, Sihang; Chen, Bin; Wang, Rongxiao; Zhu, Zhengqiu; Wang, Yuan; Qiu, Xiaogang

    2018-04-01

    Hazardous gas leak accidents pose a potential threat to human beings. Predicting atmospheric dispersion and estimating its source are becoming increasingly important in emergency management. Current dispersion prediction and source estimation models cannot satisfy the requirements of emergency management because they do not provide both high efficiency and high accuracy at the same time. In this paper, we develop a fast and accurate dispersion prediction and source estimation method based on an artificial neural network (ANN), particle swarm optimization (PSO) and expectation maximization (EM). The novel method uses a large number of pre-determined scenarios to train the ANN for dispersion prediction, so that the ANN can predict concentration distributions accurately and efficiently. PSO and EM are applied to estimate the source parameters, which effectively accelerates convergence. The method is verified with the Indianapolis field study with an SF6 release source. The results demonstrate the effectiveness of the method.
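
    For a sense of how training scenarios for such an ANN can be generated, the sketch below implements a basic Gaussian plume forward model; the dispersion-coefficient growth laws and all numbers are toy values, not those of the paper or the Indianapolis study.

```python
# Minimal sketch: a Gaussian plume forward model for generating synthetic
# dispersion scenarios; dispersion-coefficient laws are toy values.
import numpy as np

def gaussian_plume(q, u, x, y, z, h):
    """Concentration (g/m3) at (x, y, z) downwind of a point source of
    strength q (g/s) at height h, with wind speed u (m/s) along x."""
    sigma_y = 0.08 * x ** 0.9          # toy lateral spread growth
    sigma_z = 0.06 * x ** 0.9          # toy vertical spread growth
    lateral = np.exp(-y ** 2 / (2 * sigma_y ** 2))
    vertical = (np.exp(-(z - h) ** 2 / (2 * sigma_z ** 2))
                + np.exp(-(z + h) ** 2 / (2 * sigma_z ** 2)))  # ground reflection
    return q / (2 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

print(gaussian_plume(100.0, 3.0, x=500.0, y=10.0, z=1.5, h=20.0))
```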

  16. Predicting railway wheel wear under uncertainty of wear coefficient, using universal kriging

    International Nuclear Information System (INIS)

    Cremona, Marzia A.; Liu, Binbin; Hu, Yang; Bruni, Stefano; Lewis, Roger

    2016-01-01

    Railway wheel wear prediction is essential for the reliability and optimal maintenance strategies of railway systems. Indeed, an accurate wear prediction can have both economic and safety implications. In this paper we propose a novel methodology, based on Archard's equation and a local contact model, to forecast the volume of material worn and the corresponding wheel remaining useful life (RUL). A universal kriging estimate of the wear coefficient is embedded in our method. Exploiting the dependence of wear coefficient measurements with similar contact pressure and sliding speed, we construct a continuous wear coefficient map that proves to be more informative than the ones currently available in the literature. Moreover, this approach leads to an uncertainty analysis on the wear coefficient. As a consequence, we are able to construct wear prediction intervals that provide reasonable guidelines in practice. - Highlights: • Wear prediction is of utmost importance for the reliability of railway systems. • The wear coefficient is essential for prediction through Archard's equation. • A novel methodology is developed to predict wear and RUL. • Universal kriging is used for wear coefficient and uncertainty estimation. • A simulation study and a real case application are provided.

  17. Construction of estimated flow- and load-duration curves for Kentucky using the Water Availability Tool for Environmental Resources (WATER)

    Science.gov (United States)

    Unthank, Michael D.; Newson, Jeremy K.; Williamson, Tanja N.; Nelson, Hugh L.

    2012-01-01

    Flow- and load-duration curves were constructed from the model outputs of the U.S. Geological Survey's Water Availability Tool for Environmental Resources (WATER) application for streams in Kentucky. The WATER application was designed to access multiple geospatial datasets to generate more than 60 years of statistically based streamflow data for Kentucky. The WATER application enables a user to graphically select a site on a stream and generate an estimated hydrograph and flow-duration curve for the watershed upstream of that point. The flow-duration curves are constructed by calculating the exceedance probability of the modeled daily streamflows; the flow-duration intervals are expressed as a percentage, with zero corresponding to the highest stream discharge in the streamflow record. User-defined water-quality criteria and (or) sampling results can be loaded into the WATER application to construct load-duration curves that are based on the modeled streamflow results; load-duration curves are constructed by applying the loading equation (Load = Flow*Water-quality criterion) at each flow interval. Estimates of flow and streamflow statistics were derived from TOPographically Based Hydrological MODEL (TOPMODEL) simulations in the WATER application. A modified TOPMODEL code, SDP-TOPMODEL (Sinkhole Drainage Process-TOPMODEL), was used to simulate daily mean discharges over the period of record for 5 karst and 5 non-karst watersheds in Kentucky in order to verify the calibrated model. A statistical evaluation of the model's verification simulations shows that calibration criteria, established by previous WATER application reports, were met, thus ensuring the model's ability to provide acceptably accurate estimates of discharge at gaged and ungaged sites throughout Kentucky.
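    Both curve constructions reduce to a few lines of array arithmetic. The sketch below builds a flow-duration curve from exceedance probabilities of daily flows and applies Load = Flow*criterion at each interval; the Weibull plotting position and the unit-conversion factor are assumptions, not details taken from the WATER application.

```python
import numpy as np

def flow_duration_curve(daily_q):
    """Exceedance probability (%) for each modelled daily streamflow."""
    q = np.sort(np.asarray(daily_q))[::-1]              # highest flow first
    n = q.size
    exceed_pct = 100.0 * np.arange(1, n + 1) / (n + 1)  # Weibull plotting position
    return exceed_pct, q

def load_duration_curve(daily_q, criterion_mg_per_l):
    """Load-duration curve: Load = Flow * criterion at each flow interval.

    86.4 converts (m3/s * mg/L) to kg/day; adjust for other unit systems.
    """
    exceed_pct, q = flow_duration_curve(daily_q)
    load_kg_day = q * criterion_mg_per_l * 86.4
    return exceed_pct, load_kg_day

# Synthetic stand-in for ~60 years of modelled daily flows (m3/s).
rng = np.random.default_rng(1)
q_sim = rng.lognormal(mean=1.0, sigma=1.2, size=60 * 365)
pct, load = load_duration_curve(q_sim, criterion_mg_per_l=0.5)
```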

  18. Random Forests (RFs) for Estimation, Uncertainty Prediction and Interpretation of Monthly Solar Potential

    Science.gov (United States)

    Assouline, Dan; Mohajeri, Nahid; Scartezzini, Jean-Louis

    2017-04-01

    Solar energy is clean, widely available, and arguably the most promising renewable energy resource. Taking full advantage of solar power, however, requires a deep understanding of its patterns and dependencies in space and time. Recent advances in machine learning have brought powerful algorithms to estimate the spatio-temporal variations of solar irradiance (the power per unit area received from the Sun, W/m2), using local weather and terrain information. Such algorithms include Deep Learning (e.g. Artificial Neural Networks) and kernel methods (e.g. Support Vector Machines). However, most of these methods have some disadvantages, as they: (i) are complex to tune, (ii) are mainly used as a black box, offering no interpretation of the variables' contributions, and (iii) often do not provide uncertainty predictions (Assouline et al., 2016). To provide a reasonable solar mapping with good accuracy, these gaps would ideally need to be filled. We present here simple steps using one ensemble learning algorithm, Random Forests (Breiman, 2001), to (i) estimate monthly solar potential with good accuracy, (ii) provide information on the contribution of each feature in the estimation, and (iii) offer prediction intervals for each point estimate. We have selected Switzerland as an example. Using a Digital Elevation Model (DEM) along with monthly solar irradiance time series and weather data, we build monthly solar maps for Global Horizontal Irradiance (GHI), Diffuse Horizontal Irradiance (DHI), and Extraterrestrial Irradiance (EI). The weather data include monthly values for temperature, precipitation, sunshine duration, and cloud cover. In order to explain the impact of each feature on the solar irradiance of each point estimate, we extend the contribution method (Kuz'min et al., 2011) to a regression setting. Contribution maps for all features can then be computed for each solar map. This provides precious information on the spatial variation of the features' impact.
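    A minimal sketch of points (ii) and (iii) with scikit-learn follows. Impurity-based feature importances stand in for the extended contribution method, and the spread of individual tree predictions stands in for the paper's prediction intervals; the data and feature names are synthetic.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical training data: rows are grid points, columns are features such as
# elevation, temperature, precipitation, sunshine duration and cloud cover.
rng = np.random.default_rng(0)
X = rng.random((2000, 5))
y = 150 + 80 * X[:, 3] - 60 * X[:, 4] + 10 * rng.standard_normal(2000)  # GHI, W/m2

rf = RandomForestRegressor(n_estimators=500, oob_score=True, random_state=0).fit(X, y)

# (ii) feature contributions: impurity-based importances as a first-order proxy.
print(dict(zip(["elev", "temp", "precip", "sunshine", "cloud"],
               rf.feature_importances_.round(3))))

# (iii) prediction intervals: the spread of individual tree predictions gives an
# empirical distribution per point estimate (a simple stand-in for the method).
X_new = rng.random((3, 5))
per_tree = np.stack([t.predict(X_new) for t in rf.estimators_])  # (n_trees, n_pts)
lo, hi = np.percentile(per_tree, [2.5, 97.5], axis=0)
mean = per_tree.mean(axis=0)
```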

  19. Reexamining financial and economic predictability with new estimators of realized variance and variance risk premium

    DEFF Research Database (Denmark)

    Casas, Isabel; Mao, Xiuping; Veiga, Helena

    This study explores the predictive power of new estimators of the equity variance risk premium and conditional variance for future excess stock market returns, economic activity, and financial instability, both during and after the last global financial crisis. These estimators are obtained from...... time-varying coefficient models are the ones showing considerably higher predictive power for stock market returns and financial instability during the financial crisis, suggesting that an extreme volatility period requires models that can adapt quickly to turmoil........ Moreover, a comparison of the overall results reveals that the conditional variance gains predictive power during the global financial crisis period. Furthermore, both the variance risk premium and conditional variance are determined to be predictors of future financial instability, whereas conditional...

  20. Estimating the value of public construction works in Poland and Czech Republic

    Directory of Open Access Journals (Sweden)

    Edyta Plebankiewicz

    2016-07-01

    Full Text Available The article outlines the legislation concerning the methodology of estimating the value of construction works in Poland and the Czech Republic. In both countries the public investor must respect the law governing public procurement, which defines the structure of the compulsory documents needed for the tender documentation but not directly how they are to be prepared. Both countries, however, provide model procedures for calculating the value of a public procurement for construction works. To illustrate and compare the calculation methods, a sample calculation of the procurement value is presented for a selected thermal efficiency improvement project.

  1. Disturbance estimator based predictive current control of grid-connected inverters

    OpenAIRE

    Al-Khafaji, Ahmed Samawi Ghthwan

    2013-01-01

    ABSTRACT: The work presented in my thesis considers one of the modern discrete-time control approaches based on digital signal processing methods that have been developed to improve the control performance of grid-connected three-phase inverters. Disturbance estimator based predictive current control of grid-connected inverters is proposed. For inverter modeling with respect to the design of current controllers, we choose the d-q synchronous reference frame to make it easier to understand an...

  2. An Evaluation of Growth Models as Predictive Tools for Estimates at Completion (EAC)

    National Research Council Canada - National Science Library

    Trahan, Elizabeth N

    2009-01-01

    ...) as the Estimates at Completion (EAC). Our research evaluates the prospect of nonlinear growth modeling as an alternative to the current predictive tools used for calculating EAC, such as the Cost Performance Index (CPI...

  3. The construction of a decision tool to analyse local demand and local supply for GP care using a synthetic estimation model.

    Science.gov (United States)

    de Graaf-Ruizendaal, Willemijn A; de Bakker, Dinny H

    2013-10-27

    This study addresses the growing academic and policy interest in appropriately matching the provision of local healthcare services to the healthcare needs of local populations, to increase health status and decrease healthcare costs. However, for most local areas information on the demand for primary care and its supply is missing. The research goal is to examine the construction of a decision tool which enables healthcare planners to analyse local supply and demand in order to arrive at a better match. National sample-based medical record data of general practitioners (GPs) were used to predict the local demand for GP care based on local populations using a synthetic estimation technique. Next, the surplus or deficit in local GP supply was calculated using the national GP registry. Subsequently, a dynamic internet tool was built to present demand, supply and the confrontation between supply and demand regarding GP care for local areas and their surroundings in the Netherlands. Regression analysis showed a significant relationship between sociodemographic predictors of postcode areas and GP consultation time (F [14, 269,467] = 2,852.24; P < 0.001). Demand estimates were produced for all postcode areas with >1,000 inhabitants in the Netherlands, covering 97% of the total population. Confronting these estimated demand figures with the actual GP supply yielded the average GP workload and the number of full-time equivalent (FTE) GPs too many/too few for local areas to cover the demand for GP care. An estimated shortage of one FTE GP or more was prevalent in about 19% of the postcode areas with >1,000 inhabitants if the surrounding postcode areas were taken into consideration. Underserved areas were mainly found in rural regions. The constructed decision tool is freely accessible on the Internet and can be used as a starting point in the discussion on primary care service provision in local communities, and it can make a considerable contribution to a primary care system which provides care when and where people need it.

  4. ARA and ARI imperfect repair models: Estimation, goodness-of-fit and reliability prediction

    International Nuclear Information System (INIS)

    Toledo, Maria Luíza Guerra de; Freitas, Marta A.; Colosimo, Enrico A.; Gilardoni, Gustavo L.

    2015-01-01

    An appropriate maintenance policy is essential to reduce expenses and risks related to equipment failures. A fundamental aspect to be considered when specifying such policies is being able to predict the reliability of the systems under study, based on a well-fitted model. In this paper, the classes of models Arithmetic Reduction of Age and Arithmetic Reduction of Intensity are explored. Likelihood functions for such models are derived, and a graphical method is proposed for model selection. A real data set involving failures in trucks used by a Brazilian mining company is analyzed considering models with different memories. The parameters, namely the shape and scale of the Power Law Process and the repair efficiency, were estimated for the best-fitted model. Estimation of the model parameters allowed us to derive reliability estimators to predict the behavior of the failure process. These results are valuable information for the mining company and can be used to support decision making regarding preventive maintenance policy. - Highlights: • Likelihood functions for imperfect repair models are derived. • A goodness-of-fit technique is proposed as a tool for model selection. • Failures in trucks owned by a Brazilian mining company are modeled. • Estimation allowed deriving reliability predictors to forecast the future failure process of the trucks.
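    As a simplified illustration of the estimation step, the snippet below computes the closed-form maximum-likelihood estimates for a Power Law Process observed up to time T, which corresponds to the minimal-repair boundary case of the ARA/ARI families; fitting models with general repair efficiency requires numerical maximization of the likelihoods derived in the paper. The failure times are invented.

```python
import numpy as np

def plp_mle(failure_times, T):
    """Closed-form MLEs for a Power Law Process observed on (0, T].

    Under minimal repair the failures form an NHPP with intensity
    lambda(t) = (beta/theta) * (t/theta)**(beta - 1).
    """
    t = np.asarray(failure_times, dtype=float)
    n = t.size
    beta = n / np.sum(np.log(T / t))        # shape: > 1 means a deteriorating system
    theta = T / n ** (1.0 / beta)           # scale
    return beta, theta

def expected_failures(beta, theta, t):
    """Cumulative intensity N(t) = (t/theta)**beta, usable as a reliability predictor."""
    return (t / theta) ** beta

# Hypothetical truck failure ages (in 1,000 km) observed up to T = 100:
times = [12.0, 27.5, 41.0, 52.3, 60.1, 71.8, 80.4, 88.9, 95.5]
beta, theta = plp_mle(times, T=100.0)
print(beta, theta, expected_failures(beta, theta, 120.0))
```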

  5. Methodology to Estimate the Quantity, Composition, and Management of Construction and Demolition Debris in the United States

    Science.gov (United States)

    This report, Methodology to Estimate the Quantity, Composition and Management of Construction and Demolition Debris in the US, was developed to expand access to data on CDD in the US and to support research on CDD and sustainable materials management. Since past US EPA CDD estima...

  6. Suitability of faecal near-infrared reflectance spectroscopy (NIRS) predictions for estimating gross calorific value

    Energy Technology Data Exchange (ETDEWEB)

    De la Roza-Delgado, B.; Modroño, S.; Vicente, F.; Martínez-Fernández, A.; Soldado, A.

    2015-07-01

    A total of 220 faecal pig and poultry samples, collected from different experimental trials, were used to demonstrate the suitability of near-infrared reflectance spectroscopy (NIRS) for estimating the gross calorific value of faeces as output products in energy balance studies. NIR spectra from dried and ground faecal samples were acquired with a Foss NIRSystem 6500 instrument, scanning over the wavelength range 400-2500 nm. Validation studies for the quantitative analytical models were carried out to assess method performance against the reference values in terms of accuracy and precision. NIRS calibrations for the prediction of gross calorific value (GCV) in individual species showed high correlation coefficients between chemical analysis and NIRS predictions, ranging from 0.92 to 0.97 for poultry and pig. For external validation, the ratio between the standard error of cross-validation (SECV) and the standard error of prediction (SEP) varied between 0.73 and 0.86 for poultry and pig, respectively, indicating sufficient precision of the calibrations. In addition, a global model to estimate GCV in both species was developed and externally validated; it showed correlation coefficients of 0.99 for calibration, 0.98 for cross-validation and 0.97 for external validation. Finally, the relative uncertainty of the developed NIRS prediction models was 1.3% when applying the individual species models and 1.5% for the global model. This study suggests that NIRS is a suitable and accurate method for determining GCV in faeces, reducing cost and time and allowing convenient handling of unpleasant samples. (Author)

  7. Predicting Tunnel Squeezing Using Multiclass Support Vector Machines

    Directory of Open Access Journals (Sweden)

    Yang Sun

    2018-01-01

    Full Text Available Tunnel squeezing is one of the major geological disasters that often occur during the construction of tunnels in weak rock masses subjected to high in situ stresses. It can cause shield jamming, budget overruns, and construction delays and can even lead to tunnel instability and casualties. Therefore, accurate prediction or identification of tunnel squeezing is extremely important in the design and construction of tunnels. This study presents a modified application of a multiclass support vector machine (SVM) to predict tunnel squeezing based on four parameters: diameter (D), buried depth (H), support stiffness (K), and rock tunneling quality index (Q). We compiled a database from the literature, including 117 case histories obtained from different countries such as India, Nepal, and Bhutan, to train the multiclass SVM model. The proposed model was validated using 8-fold cross validation, and the average error percentage was approximately 11.87%. Compared with existing approaches, the proposed multiclass SVM model yields better predictive accuracy. More importantly, one can estimate the severity of potential squeezing problems based on the predicted squeezing categories/classes.
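    A minimal reconstruction of the modelling pipeline with scikit-learn is sketched below. The 117-case database is not reproduced here, so the features are synthetic and the labels are generated from a rough empirical squeezing rule (H > 350 Q^(1/3)) purely for illustration; only the model structure (standardized features, RBF-kernel multiclass SVM, 8-fold cross validation) mirrors the abstract.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical stand-in for the 117 case histories: features are diameter D (m),
# depth H (m), support stiffness K (MPa) and rock quality index Q; the label is
# the squeezing class (0 = none, 1 = minor, 2 = severe).
rng = np.random.default_rng(42)
X = np.column_stack([
    rng.uniform(4, 12, 117),        # D
    rng.uniform(100, 2000, 117),    # H
    rng.uniform(50, 500, 117),      # K
    rng.lognormal(0.0, 1.0, 117),   # Q
])
y = (X[:, 1] / (350 * X[:, 3] ** 0.33) > np.array([1.0, 2.0])[:, None]).sum(0)

# RBF-kernel SVM; SVC handles the multiclass case internally (one-vs-one).
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
scores = cross_val_score(clf, X, y, cv=8)           # 8-fold cross validation
print("mean accuracy:", scores.mean().round(3))
```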

  8. Fetal size monitoring and birth-weight prediction: a new population-based approach.

    Science.gov (United States)

    Gjessing, H K; Grøttum, P; Økland, I; Eik-Nes, S H

    2017-04-01

    To develop a complete, population-based system for ultrasound-based fetal size monitoring and birth-weight prediction for use in the second and third trimesters of pregnancy. Using 31 516 ultrasound examinations from a population-based Norwegian clinical database, we constructed fetal size charts for biparietal diameter, femur length and abdominal circumference from 24 to 42 weeks' gestation. A reference curve of median birth weight for gestational age was estimated using 45 037 birth weights. We determined how individual deviations from the expected ultrasound measures predicted individual percentage deviations from expected birth weight. The predictive quality was assessed by explained variance of birth weight and receiver-operating characteristics curves for prediction of small-for-gestational age. A curve for intrauterine estimated fetal weight was constructed. Charts were smoothed using the gamlss non-linear regression method. The population-based approach, using bias-free ultrasound gestational age, produces stable estimates of size-for-age and weight-for-age curves in the range 24-42 weeks' gestation. There is a close correspondence between percentage deviations and percentiles of birth weight by gestational age, making it easy to convert between the two. The variance of birth weight that can be 'explained' by ultrasound increases from 8% at 20 weeks up to 67% around term. Intrauterine estimated fetal weight is 0-106 g higher than median birth weight in the preterm period. The new population-based birth-weight prediction model provides a simple summary measure, the 'percentage birth-weight deviation', to be used for fetal size monitoring throughout the third trimester. Predictive quality of the model can be measured directly from the population data. The model computes both median observed birth weight and intrauterine estimated fetal weight. Copyright © 2016 ISUOG. Published by John Wiley & Sons Ltd.

  9. Evaluation of a real-time travel time prediction system in a freeway construction work zone : executive summary.

    Science.gov (United States)

    2001-03-01

    A real-time travel time prediction system (TIPS) was evaluated in a construction work : zone. TIPS includes changeable message signs (CMSs) displaying the travel time and : distance to the end of the work zone to motorists. The travel times displayed...

  10. PWR system simulation and parameter estimation with neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Akkurt, Hatice; Colak, Uener E-mail: uc@nuke.hacettepe.edu.tr

    2002-11-01

    A detailed nonlinear model for a typical PWR system has been considered for the development of simulation software. Each component in the system has been represented by appropriate differential equations. The SCILAB software was used for solving nonlinear equations to simulate steady-state and transient operational conditions. The overall system has been constructed by connecting the individual components to each other. The validity of the models for the individual components and the overall system has been verified. The system response to given transients has been analyzed. A neural network has been utilized to estimate system parameters during transients. Different transients have been imposed in the training and prediction stages with neural networks. Reactor power and system reactivity during the transient event have been predicted by the neural network. Results show that the neural network estimations are in good agreement with the calculated response of the reactor system. The maximum errors are within ±0.254% for power and between -0.146 and 0.353% for reactivity prediction cases. Steam generator parameters, pressure and water level, are also successfully predicted by the neural network employed in this study. The noise imposed on the input parameters of the neural network deteriorates the power estimation capability, whereas the reactivity estimation capability is not significantly affected.

  11. PWR system simulation and parameter estimation with neural networks

    International Nuclear Information System (INIS)

    Akkurt, Hatice; Colak, Uener

    2002-01-01

    A detailed nonlinear model for a typical PWR system has been considered for the development of simulation software. Each component in the system has been represented by appropriate differential equations. The SCILAB software was used for solving nonlinear equations to simulate steady-state and transient operational conditions. The overall system has been constructed by connecting the individual components to each other. The validity of the models for the individual components and the overall system has been verified. The system response to given transients has been analyzed. A neural network has been utilized to estimate system parameters during transients. Different transients have been imposed in the training and prediction stages with neural networks. Reactor power and system reactivity during the transient event have been predicted by the neural network. Results show that the neural network estimations are in good agreement with the calculated response of the reactor system. The maximum errors are within ±0.254% for power and between -0.146 and 0.353% for reactivity prediction cases. Steam generator parameters, pressure and water level, are also successfully predicted by the neural network employed in this study. The noise imposed on the input parameters of the neural network deteriorates the power estimation capability, whereas the reactivity estimation capability is not significantly affected.

  12. A method for estimation of fatigue properties from hardness of materials through construction of expert system

    International Nuclear Information System (INIS)

    Jeon, Woo Soo; Song, Ji Ho

    2001-01-01

    An expert system for the estimation of fatigue properties from simple tensile data of materials is developed, considering nearly all important estimation methods proposed so far (seven methods in total). The expert system is designed to be usable when only hardness data are available. The knowledge base is constructed with production rules and frames using an expert system shell, UNIK. Forward chaining is employed as the reasoning method. The expert system has three functions, including a function to update the knowledge base. The performance of the expert system is tested using 54 ε-N curves consisting of 381 ε-N data points obtained for 22 materials. It is found that the expert system has excellent performance for steel materials and reasonably good performance for aluminum alloys.

  13. A predictive estimation method for carbon dioxide transport by data-driven modeling with a physically-based data model

    Science.gov (United States)

    Jeong, Jina; Park, Eungyu; Han, Weon Shik; Kim, Kue-Young; Jun, Seong-Chun; Choung, Sungwook; Yun, Seong-Taek; Oh, Junho; Kim, Hyun-Jun

    2017-11-01

    In this study, a data-driven method for predicting CO2 leaks and associated concentrations from geological CO2 sequestration is developed. Several candidate models are compared based on their reproducibility and predictive capability for CO2 concentration measurements from the Environment Impact Evaluation Test (EIT) site in Korea. Based on the data mining results, a one-dimensional solution of the advective-dispersive equation for steady flow (i.e., Ogata-Banks solution) is found to be most representative for the test data, and this model is adopted as the data model for the developed method. In the validation step, the method is applied to estimate future CO2 concentrations with the reference estimation by the Ogata-Banks solution, where a part of earlier data is used as the training dataset. From the analysis, it is found that the ensemble mean of multiple estimations based on the developed method shows high prediction accuracy relative to the reference estimation. In addition, the majority of the data to be predicted are included in the proposed quantile interval, which suggests adequate representation of the uncertainty by the developed method. Therefore, the incorporation of a reasonable physically-based data model enhances the prediction capability of the data-driven model. The proposed method is not confined to estimations of CO2 concentration and may be applied to various real-time monitoring data from subsurface sites to develop automated control, management or decision-making systems.
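    The physically-based data model at the core of the method is compact enough to state directly. Below is the Ogata-Banks solution for one-dimensional advection-dispersion from a continuous source, with illustrative (not site-derived) velocity and dispersion values.

```python
import numpy as np
from scipy.special import erfc

def ogata_banks(x, t, v, D, c0):
    """1-D advection-dispersion solution for a continuous source at x = 0.

    c(x, t) = c0/2 * [erfc((x - v t)/(2 sqrt(D t)))
                      + exp(v x / D) * erfc((x + v t)/(2 sqrt(D t)))]
    """
    s = 2.0 * np.sqrt(D * t)
    return 0.5 * c0 * (erfc((x - v * t) / s)
                       + np.exp(v * x / D) * erfc((x + v * t) / s))

# Hypothetical monitoring setting: CO2 concentration 5 m downstream of a leak,
# with assumed velocity v (m/day) and dispersion coefficient D (m2/day).
t = np.linspace(0.1, 60.0, 200)                 # days since leak onset
c = ogata_banks(x=5.0, t=t, v=0.5, D=0.8, c0=1.0)
```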

  14. A predictive estimation method for carbon dioxide transport by data-driven modeling with a physically-based data model.

    Science.gov (United States)

    Jeong, Jina; Park, Eungyu; Han, Weon Shik; Kim, Kue-Young; Jun, Seong-Chun; Choung, Sungwook; Yun, Seong-Taek; Oh, Junho; Kim, Hyun-Jun

    2017-11-01

    In this study, a data-driven method for predicting CO2 leaks and associated concentrations from geological CO2 sequestration is developed. Several candidate models are compared based on their reproducibility and predictive capability for CO2 concentration measurements from the Environment Impact Evaluation Test (EIT) site in Korea. Based on the data mining results, a one-dimensional solution of the advective-dispersive equation for steady flow (i.e., Ogata-Banks solution) is found to be most representative for the test data, and this model is adopted as the data model for the developed method. In the validation step, the method is applied to estimate future CO2 concentrations with the reference estimation by the Ogata-Banks solution, where a part of earlier data is used as the training dataset. From the analysis, it is found that the ensemble mean of multiple estimations based on the developed method shows high prediction accuracy relative to the reference estimation. In addition, the majority of the data to be predicted are included in the proposed quantile interval, which suggests adequate representation of the uncertainty by the developed method. Therefore, the incorporation of a reasonable physically-based data model enhances the prediction capability of the data-driven model. The proposed method is not confined to estimations of CO2 concentration and may be applied to various real-time monitoring data from subsurface sites to develop automated control, management or decision-making systems. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. Prediction models in building acoustics : introduction to the special session at Forum Acusticum 1999 in Berlin

    NARCIS (Netherlands)

    Gerretsen, E.; Nightingale, T.R.T.

    1999-01-01

    There is strong interest in being able to predict the apparent sound insulation in completed constructions so that the suitability of the construction details and materials may be assessed at the design stage. Methods do exist that provide estimates of the apparent sound insulation. An example of

  16. Estimation of uncertainties in predictions of environmental transfer models: evaluation of methods and application to CHERPAC

    International Nuclear Information System (INIS)

    Koch, J.; Peterson, S-R.

    1995-10-01

    Models used to simulate environmental transfer of radionuclides typically include many parameters, the values of which are uncertain. An estimation of the uncertainty associated with the predictions is therefore essential. Different methods to quantify the uncertainty in the predictions due to parameter uncertainties are reviewed. A statistical approach using random sampling techniques is recommended for complex models with many uncertain parameters. In this approach, the probability density function of the model output is obtained from multiple realizations of the model according to a multivariate random sample of the different input parameters. Sampling efficiency can be improved by using a stratified scheme (Latin Hypercube Sampling). Sample size can also be restricted when statistical tolerance limits need to be estimated. Methods to rank parameters according to their contribution to uncertainty in the model prediction are also reviewed. Recommended are measures of sensitivity, correlation and regression coefficients that can be calculated on values of input and output variables generated during the propagation of uncertainties through the model. A parameter uncertainty analysis is performed for the CHERPAC food chain model, which estimates subjective confidence limits and intervals on the predictions at a 95% confidence level. A sensitivity analysis is also carried out using partial rank correlation coefficients. This identifies and ranks the parameters which are the main contributors to uncertainty in the predictions, thereby guiding further research efforts. (author). 44 refs., 2 tabs., 4 figs
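    The recommended workflow (Latin Hypercube Sampling, subjective confidence limits from the output sample, and parameter ranking by partial rank correlation coefficients) can be sketched generically, as below. The three-parameter transfer model and its bounds are placeholders, not CHERPAC quantities.

```python
import numpy as np
from scipy.stats import qmc, rankdata

def model(p):
    """Stand-in transfer model: predicted dose from three uncertain parameters."""
    k_soil, k_plant, intake = p.T
    return k_soil * k_plant * intake / (1.0 + k_soil)

# Latin Hypercube Sample of the uncertain parameters (bounds are hypothetical).
sampler = qmc.LatinHypercube(d=3, seed=7)
u = sampler.random(n=1000)
params = qmc.scale(u, l_bounds=[0.1, 0.5, 1.0], u_bounds=[2.0, 3.0, 5.0])
y = model(params)

# Subjective 95% confidence interval on the prediction from the output sample.
lo, hi = np.percentile(y, [2.5, 97.5])

# Partial rank correlation coefficient of each parameter with the output:
# rank-transform, then correlate residuals after regressing out the other inputs.
def prcc(X, y):
    Xr, yr = rankdata(X, axis=0).astype(float), rankdata(y).astype(float)
    out = []
    for j in range(Xr.shape[1]):
        others = np.column_stack([np.ones(len(yr)), np.delete(Xr, j, axis=1)])
        rx = Xr[:, j] - others @ np.linalg.lstsq(others, Xr[:, j], rcond=None)[0]
        ry = yr - others @ np.linalg.lstsq(others, yr, rcond=None)[0]
        out.append(np.corrcoef(rx, ry)[0, 1])
    return np.array(out)

print(prcc(params, y).round(2))   # ranks contributors to prediction uncertainty
```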

  17. Estimation of uncertainties in predictions of environmental transfer models: evaluation of methods and application to CHERPAC

    Energy Technology Data Exchange (ETDEWEB)

    Koch, J. [Israel Atomic Energy Commission, Yavne (Israel). Soreq Nuclear Research Center; Peterson, S-R.

    1995-10-01

    Models used to simulate environmental transfer of radionuclides typically include many parameters, the values of which are uncertain. An estimation of the uncertainty associated with the predictions is therefore essential. Different methods to quantify the uncertainty in the predictions due to parameter uncertainties are reviewed. A statistical approach using random sampling techniques is recommended for complex models with many uncertain parameters. In this approach, the probability density function of the model output is obtained from multiple realizations of the model according to a multivariate random sample of the different input parameters. Sampling efficiency can be improved by using a stratified scheme (Latin Hypercube Sampling). Sample size can also be restricted when statistical tolerance limits need to be estimated. Methods to rank parameters according to their contribution to uncertainty in the model prediction are also reviewed. Recommended are measures of sensitivity, correlation and regression coefficients that can be calculated on values of input and output variables generated during the propagation of uncertainties through the model. A parameter uncertainty analysis is performed for the CHERPAC food chain model, which estimates subjective confidence limits and intervals on the predictions at a 95% confidence level. A sensitivity analysis is also carried out using partial rank correlation coefficients. This identifies and ranks the parameters which are the main contributors to uncertainty in the predictions, thereby guiding further research efforts. (author). 44 refs., 2 tabs., 4 figs.

  18. A NEW METHOD FOR PREDICTING SURVIVAL AND ESTIMATING UNCERTAINTY IN TRAUMA PATIENTS

    Directory of Open Access Journals (Sweden)

    V. G. Schetinin

    2017-01-01

    Full Text Available The Trauma and Injury Severity Score (TRISS) is the current “gold” standard for screening a patient’s condition for purposes of predicting survival probability. More than 40 years of TRISS practice have revealed a number of problems, particularly (1) unexplained fluctuation of predicted values caused by aggregation of screening tests, and (2) low accuracy of uncertainty interval estimations. We developed a new method, made available to practitioners as a web calculator, to reduce the negative effect of the factors given above. The method involves Bayesian methodology of statistical inference which, being computationally expensive, in theory provides the most accurate predictions. We implemented and tested this approach on a data set including 571,148 patients registered in the US National Trauma Data Bank (NTDB) with 1–20 injuries. These patients were distributed over the following categories: (1) 174,647 with 1 injury, (2) 381,137 with 2–10 injuries, and (3) 15,364 with 11–20 injuries. Survival rates in each category were 0.977, 0.953, and 0.831, respectively. The proposed method improved prediction accuracy by 0.04%, 0.36%, and 3.64% (p-value <0.05) in categories 1, 2, and 3, respectively. Hosmer-Lemeshow statistics showed a significant improvement in the calibration of the new model. The uncertainty 2σ intervals were reduced from 0.628 to 0.569 for patients of the second category and from 1.227 to 0.930 for patients of the third category, both with p-value <0.005. The new method shows a statistically significant improvement (p-value <0.05) in the accuracy of predicting survival and estimating uncertainty intervals. The largest improvement was achieved for patients with 11–20 injuries. The method is available for practitioners as a web calculator at http://www.traumacalc.org.

  19. Defeat and entrapment: more than meets the eye? Applying network analysis to estimate dimensions of highly correlated constructs.

    Science.gov (United States)

    Forkmann, Thomas; Teismann, Tobias; Stenzel, Jana-Sophie; Glaesmer, Heide; de Beurs, Derek

    2018-01-25

    Defeat and entrapment have been shown to be of central relevance to the development of different disorders. However, it remains unclear whether they represent two distinct constructs or one overall latent variable. One reason for this lack of clarity is that traditional factor analytic techniques have trouble estimating the right number of clusters in highly correlated data. In this study, we applied a novel approach based on network analysis that can deal with correlated data to establish whether defeat and entrapment are best thought of as one or multiple constructs. Exploratory graph analysis was used to estimate the number of dimensions within the 32 items that make up the defeat and entrapment scales in two samples: an online community sample of 480 participants, and a clinical sample of 147 inpatients admitted to a psychiatric hospital after a suicide attempt or severe suicidal crisis. Confirmatory factor analysis (CFA) was used to test whether the proposed structure fits the data. In both samples, bootstrapped exploratory graph analysis suggested that the defeat and entrapment items belonged to different dimensions. Within the entrapment items, two separate dimensions were detected, labelled internal and external entrapment. Defeat appeared to be multifaceted only in the online sample. When comparing the CFA outcomes of the one-, two-, three- and four-factor models, the one-factor model was preferred. Defeat and entrapment can be viewed as distinct, yet highly associated, constructs. Thus, although replication is needed, the results are in line with theories differentiating between these two constructs.

  20. EEG Estimates of Cognitive Workload and Engagement Predict Math Problem Solving Outcomes

    Science.gov (United States)

    Beal, Carole R.; Galan, Federico Cirett

    2012-01-01

    In the present study, the authors focused on the use of electroencephalography (EEG) data about cognitive workload and sustained attention to predict math problem solving outcomes. EEG data were recorded as students solved a series of easy and difficult math problems. Sequences of attention and cognitive workload estimates derived from the EEG…

  1. 874 CONSTRUCTION COST MODELS FOR HIGHRISE OFFICE ...

    African Journals Online (AJOL)

    USER

    2015-10-28

    Oct 28, 2015 ... will not only reduce the stress on estimators but also enhance the accuracy of cost estimates. The resulting 11 ... cost estimating process to work together to ... construction estimators in Hong Kong. Construction Management.

  2. Interrelation and independence of positive and negative psychological constructs in predicting general treatment adherence in coronary artery patients - Results from the THORESCI study.

    Science.gov (United States)

    van Montfort, Eveline; Denollet, Johan; Widdershoven, Jos; Kupper, Nina

    2016-09-01

    In cardiac patients, positive psychological factors have been associated with improved medical and psychological outcomes. The current study examined the interrelation between, and independence of, multiple positive and negative psychological constructs. Furthermore, the potential added predictive value of positive psychological functioning for patients' treatment adherence and participation in cardiac rehabilitation (CR) was investigated. 409 percutaneous coronary intervention (PCI) patients were included (mean age = 65.6 ± 9.5; 78% male). Self-report questionnaires were administered one month post-PCI. Positive psychological constructs included positive affect (GMS) and optimism (LOT-R); negative constructs were depression (PHQ-9, BDI), anxiety (GAD-7) and negative affect (GMS). Six months post-PCI self-reported general adherence (MOS) and CR participation were determined. Factor analysis (Oblimin rotation) revealed two components (r = −0.56), reflecting positive and negative psychological constructs. Linear regression analyses showed that in unadjusted analyses both optimism and positive affect were associated with better general treatment adherence at six months (p < 0.05). Positive psychological constructs (i.e. optimism) may be of incremental value to negative psychological constructs in predicting patients' treatment adherence. A more complete view of a patient's psychological functioning will open new avenues for treatment. Additional research is needed to investigate the relationship between positive psychological factors and other cardiac outcomes, such as cardiac events and mortality.

  3. Online available capacity prediction and state of charge estimation based on advanced data-driven algorithms for lithium iron phosphate battery

    International Nuclear Information System (INIS)

    Deng, Zhongwei; Yang, Lin; Cai, Yishan; Deng, Hao; Sun, Liu

    2016-01-01

    The key technology of a battery management system is to estimate the battery states online, accurately and robustly. For the lithium iron phosphate battery, the relationship between state of charge and open circuit voltage has a plateau region which limits the estimation accuracy of voltage-based algorithms. The open circuit voltage hysteresis requires advanced online identification algorithms to cope with the strongly nonlinear battery model. The available capacity, as a crucial parameter, contributes to the state of charge and state of health estimation of the battery, but it is difficult to predict because it is comprehensively influenced by temperature, aging and current rates. To address these problems, ampere-hour counting with current correction and the dual adaptive extended Kalman filter algorithms are combined to estimate model parameters and state of charge. This combination offers lower computational burden and greater robustness. Considering the influence of temperature and degradation, a data-driven algorithm, the least squares support vector machine, is implemented to predict the available capacity. The state estimation and capacity prediction methods are coupled to improve the estimation accuracy at different temperatures over the lifetime of the battery. The experimental results verify that the proposed methods achieve excellent state and available capacity estimation accuracy. - Highlights: • A dual adaptive extended Kalman filter is used to estimate parameters and states. • A correction term is introduced to consider the effect of current rates. • The least square support vector machine is used to predict the available capacity. • The experiment results verify the proposed state and capacity prediction methods.
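    For orientation, a single-state extended Kalman filter for SOC is sketched below; the paper's dual adaptive EKF additionally estimates model parameters and adapts the noise covariances, and the LSSVM capacity predictor is omitted. The OCV curve, resistance and capacity values are illustrative only.

```python
import numpy as np

# Minimal one-state EKF for SOC, assuming a simple R-int battery model;
# all parameter values below are illustrative, not identified from data.
Q_AH = 60.0 * 3600.0      # cell capacity in coulombs (60 Ah)
R0 = 1.5e-3               # ohmic resistance (ohm)
DT = 1.0                  # sample time (s)

def ocv(soc):             # hypothetical OCV(SOC) with a flattened LFP-like plateau
    return 2.8 + 0.8 * soc + 0.1 * np.tanh(8 * (soc - 0.5))

def docv_dsoc(soc):       # Jacobian of the measurement equation
    return 0.8 + 0.8 / np.cosh(8 * (soc - 0.5)) ** 2

def ekf_step(soc, P, i_k, v_meas, q=1e-10, r=1e-4):
    """One predict/update cycle; i_k > 0 on discharge, v_meas is terminal voltage."""
    soc_pred = soc - i_k * DT / Q_AH          # coulomb-counting process model
    P_pred = P + q
    H = docv_dsoc(soc_pred)
    v_pred = ocv(soc_pred) - i_k * R0
    K = P_pred * H / (H * P_pred * H + r)     # Kalman gain (scalar state)
    soc_new = soc_pred + K * (v_meas - v_pred)
    return soc_new, (1.0 - K * H) * P_pred

# Usage on a synthetic 1C discharge trace:
soc, P = 0.9, 1e-2
for k in range(3600):
    true_soc = 0.9 - 60.0 * k * DT / Q_AH
    v = ocv(true_soc) - 60.0 * R0 + 1e-3 * np.random.randn()
    soc, P = ekf_step(soc, P, 60.0, v)
```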

  4. Improved model predictive control of resistive wall modes by error field estimator in EXTRAP T2R

    Science.gov (United States)

    Setiadi, A. C.; Brunsell, P. R.; Frassinetti, L.

    2016-12-01

    Many implementations of model-based control for toroidal plasmas have shown better performance than conventional feedback controllers. One prerequisite of model-based control is the availability of a control-oriented model. This model can be obtained empirically through a systematic procedure called system identification. Such a model is used in this work to design a model predictive controller to stabilize multiple resistive wall modes in the EXTRAP T2R reversed-field pinch. Model predictive control is an advanced control method that can optimize the future behaviour of a system. Furthermore, this paper discusses an additional use of the empirical model: estimating the error field in EXTRAP T2R. Two potential methods that can estimate the error field are discussed. The error field estimator is then combined with the model predictive controller and yields better radial magnetic field suppression.

  5. Gambling score in earthquake prediction analysis

    Science.gov (United States)

    Molchan, G.; Romashkova, L.

    2011-03-01

    The number of successes and the space-time alarm rate are commonly used to characterize the strength of an earthquake prediction method and the significance of prediction results. It has recently been suggested to use a new characteristic to evaluate the forecaster's skill, the gambling score (GS), which incorporates the difficulty of guessing each target event by using different weights for different alarms. We expand the parametrization of the GS and use the M8 prediction algorithm to illustrate difficulties of the new approach in the analysis of prediction significance. We show that the level of significance strongly depends (1) on the choice of alarm weights, (2) on the partitioning of the entire alarm volume into component parts and (3) on the accuracy of the spatial rate measure of target events. These tools are at the disposal of the researcher and can affect the significance estimate. Formally, all reasonable GSs discussed here corroborate that the M8 method is non-trivial in the prediction of 8.0 ≤ M < 8.5 events because the point estimates of the significance are in the range 0.5-5 per cent. However, the conservative estimate of 3.7 per cent based on the number of successes seems preferable owing to two circumstances: (1) it is based on relative values of the spatial rate and hence is more stable, and (2) the statistic of successes enables us to construct analytically an upper estimate of the significance taking into account the uncertainty of the spatial rate measure.
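    The conservative significance estimate based on the number of successes can be illustrated with a binomial tail probability: under random guessing, each target event is predicted with probability equal to the space-time alarm rate. The numbers below are illustrative and are not the M8 statistics.

```python
from math import comb

def success_significance(n_targets, n_hits, alarm_rate):
    """P-value of observing >= n_hits successes among n_targets events when a
    random alarm strategy succeeds with probability `alarm_rate` per event."""
    return sum(comb(n_targets, k) * alarm_rate**k * (1 - alarm_rate)**(n_targets - k)
               for k in range(n_hits, n_targets + 1))

# Illustrative numbers only: 10 target events, 7 predicted, with alarms
# covering 30% of the space-time volume.
print(round(success_significance(10, 7, 0.30), 4))
```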

  6. Construction Safety Forecast for ITER

    Energy Technology Data Exchange (ETDEWEB)

    Cadwallader, Lee Charles

    2006-11-01

    The International Thermonuclear Experimental Reactor (ITER) project is poised to begin its construction activity. This paper gives an estimate of construction safety as if the experiment were being built in the United States. This estimate of construction injuries and potential fatalities serves as a useful forecast of what can be expected for construction of such a major facility in any country. These data should be considered by the ITER International Team as it plans for safety during the construction phase. Based on average U.S. construction rates, ITER may expect a lost workday case rate of < 4.0 and a fatality count of 0.5 to 0.9 persons per year.

  7. A photogrammetric methodology for estimating construction and demolition waste composition

    International Nuclear Information System (INIS)

    Heck, H.H.; Reinhart, D.R.; Townsend, T.; Seibert, S.; Medeiros, S.; Cochran, K.; Chakrabarti, S.

    2002-01-01

    Manual sorting of construction, demolition, and renovation (C and D) waste is difficult and costly. A photogrammetric method has been developed to analyze the composition of C and D waste that eliminates the need for physical contact with the waste. The only field data collected are the weight and volume of the solid waste in the storage container and a photograph of each side of the waste pile after it is dumped on the tipping floor. The methodology was developed and calibrated based on manual sorting studies at three different landfills in Florida, where the contents of twenty roll-off containers filled with C and D waste were sorted. The component classifications used were wood, concrete, paper products, drywall, metals, insulation, roofing, plastic, flooring, municipal solid waste, land-clearing waste, and other waste. Photographs of each side of the waste pile were taken with a digital camera and the pictures were analyzed on a computer using Photoshop software. Photoshop was used to divide each picture into eighty cells composed of ten columns and eight rows. The component distribution of each cell was estimated and the results were summed to get a component distribution for the pile. Two types of distribution factors were developed that allow the component volumes and weights to be estimated: one set of distribution factors corrects the volume distributions and the second set corrects the weight distributions. The bulk density of each of the waste components was determined and used to convert waste volumes to weights. (author)
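    The arithmetic that turns per-cell visual estimates into pile-level volumes and weights is easy to sketch. The bulk densities and calibration factors below are placeholders; in the study they come from the manual sorting calibration.

```python
import numpy as np

# Hypothetical bulk densities (kg/m3) for a few components; the paper's values
# are not reproduced here.
BULK_DENSITY = {"wood": 240.0, "concrete": 1600.0, "drywall": 500.0, "metal": 350.0}

def pile_composition(cell_fractions, pile_volume_m3, volume_factors, weight_factors):
    """Estimate component volumes and weights from per-cell visual estimates.

    cell_fractions: (80, n_components) array; each row is the estimated volume
    fraction of every component in one of the 8 x 10 photo grid cells.
    volume_factors / weight_factors: the two calibration factor sets the study
    derived from manual sorting (placeholder values are used below).
    """
    raw = cell_fractions.mean(axis=0)                    # pile-level distribution
    vol_dist = raw * volume_factors
    vol_dist /= vol_dist.sum()                           # corrected volume shares
    volumes = vol_dist * pile_volume_m3
    densities = np.array([BULK_DENSITY[c] for c in BULK_DENSITY])
    weights = volumes * densities * weight_factors       # volume -> weight
    return volumes, weights

rng = np.random.default_rng(3)
cells = rng.dirichlet(np.ones(4), size=80)               # 80 grid cells, 4 components
vols, wts = pile_composition(cells, pile_volume_m3=20.0,
                             volume_factors=np.ones(4), weight_factors=np.ones(4))
```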

  8. A photogrammetric methodology for estimating construction and demolition waste composition

    Energy Technology Data Exchange (ETDEWEB)

    Heck, H.H. [Florida Inst. of Technology, Dept. of Civil Engineering, Melbourne, Florida (United States); Reinhart, D.R.; Townsend, T.; Seibert, S.; Medeiros, S.; Cochran, K.; Chakrabarti, S

    2002-06-15

    Manual sorting of construction, demolition, and renovation (C and D) waste is difficult and costly. A photogrammetric method has been developed to analyze the composition of C and D waste that eliminates the need for physical contact with the waste. The only field data collected are the weight and volume of the solid waste in the storage container and a photograph of each side of the waste pile after it is dumped on the tipping floor. The methodology was developed and calibrated based on manual sorting studies at three different landfills in Florida, where the contents of twenty roll-off containers filled with C and D waste were sorted. The component classifications used were wood, concrete, paper products, drywall, metals, insulation, roofing, plastic, flooring, municipal solid waste, land-clearing waste, and other waste. Photographs of each side of the waste pile were taken with a digital camera and the pictures were analyzed on a computer using Photoshop software. Photoshop was used to divide each picture into eighty cells composed of ten columns and eight rows. The component distribution of each cell was estimated and the results were summed to get a component distribution for the pile. Two types of distribution factors were developed that allow the component volumes and weights to be estimated: one set of distribution factors corrects the volume distributions and the second set corrects the weight distributions. The bulk density of each of the waste components was determined and used to convert waste volumes to weights. (author)

  9. New measure of insulin sensitivity predicts cardiovascular disease better than HOMA estimated insulin resistance.

    Directory of Open Access Journals (Sweden)

    Kavita Venkataraman

    Full Text Available CONTEXT: Accurate assessment of insulin sensitivity may better identify individuals at increased risk of cardio-metabolic diseases. OBJECTIVES: To examine whether a combination of anthropometric, biochemical and imaging measures can better estimate the insulin sensitivity index (ISI) and provide improved prediction of cardio-metabolic risk, in comparison to HOMA-IR. DESIGN AND PARTICIPANTS: Healthy male volunteers (96 Chinese, 80 Malay, 77 Indian; 21 to 40 years; body mass index 18-30 kg/m2). Predicted ISI (ISI-cal) was generated using 45 randomly selected Chinese through stepwise multiple linear regression, and validated in the rest using non-parametric correlation (Kendall's tau, τ). In an independent longitudinal cohort, ISI-cal and HOMA-IR were compared for prediction of diabetes and cardiovascular disease (CVD), using ROC curves. SETTING: The study was conducted in a university academic medical centre. OUTCOME MEASURES: ISI measured by hyperinsulinemic euglycemic glucose clamp, along with anthropometric measurements, biochemical assessment and imaging; incident diabetes and CVD. RESULTS: A combination of fasting insulin, serum triglycerides and waist-to-hip ratio (WHR) provided the best estimate of clamp-derived ISI (adjusted R2 0.58 versus 0.32 for HOMA-IR). In an independent cohort, ROC areas under the curve were 0.77±0.02 for ISI-cal versus 0.76±0.02 for HOMA-IR (p>0.05) for incident diabetes, and 0.74±0.03 for ISI-cal versus 0.61±0.03 for HOMA-IR (p<0.001) for incident CVD. ISI-cal also had greater sensitivity than defined metabolic syndrome in predicting CVD, with a four-fold increase in the risk of CVD independent of metabolic syndrome. CONCLUSIONS: Triglycerides and WHR, combined with fasting insulin levels, provide a better estimate of the current insulin resistance state and improved identification of individuals with future risk of CVD, compared to HOMA-IR. This may be useful for estimating insulin sensitivity and cardio-metabolic risk in clinical and

  10. Construction labor productivity during nuclear power plant construction

    International Nuclear Information System (INIS)

    Murray, W.B.

    1980-01-01

    This paper discusses the three different types of productivity programs used at the Wm. H. Zimmer Nuclear Power Station construction site. The Standard Cost Estimate as Productivity Measurement compares actual units installed to estimated units. The Manpower and Equipment Utilization Study measures the present utilization level of the construction work force, identifies opportunities for productivity improvement, and establishes a data base against which future improvements could be made. The special productivity program is a specialized and detailed study of first line supervision. Productivity is defined as the degree of efficiency attained in the use of labor, professional and management skills and knowledge, materials and equipment, and time and money to produce an end result. It is concluded that a more consistent system of productivity measurements needs to be developed and promoted for general use in the construction industry

  11. Regression methodology in groundwater composition estimation with composition predictions for Romuvaara borehole KR10

    Energy Technology Data Exchange (ETDEWEB)

    Luukkonen, A.; Korkealaakso, J.; Pitkaenen, P. [VTT Communities and Infrastructure, Espoo (Finland)

    1997-11-01

    Teollisuuden Voima Oy selected five investigation areas for preliminary site studies (1987-1992). The more detailed site investigation project, launched at the beginning of 1993 and presently supervised by Posiva Oy, is concentrated on three investigation areas. Romuvaara at Kuhmo is one of the present target areas, and the geochemical, structural and hydrological data used in this study are extracted from there. The aim of the study is to develop suitable methods for groundwater composition estimation based on a group of known hydrogeological variables. The input variables used are related to the host type of groundwater, hydrological conditions around the host location, mixing potentials between different types of groundwater, and minerals equilibrated with the groundwater. The output variables are electrical conductivity, Ca, Mg, Mn, Na, K, Fe, Cl, S, HS, SO₄, alkalinity, ³H, ¹⁴C, ¹³C, Al, Sr, F, Br and I concentrations, and pH of the groundwater. The methodology is to associate the known hydrogeological conditions (i.e. input variables) with the known water compositions (output variables), and to evaluate mathematical relations between these groups. Output estimations are done with two separate procedures: partial least squares regressions on the principal components of input variables, and training neural networks with input-output pairs. Coefficients of linear equations and trained networks are optional methods for actual predictions. The quality of the output predictions is monitored with confidence limit estimations, evaluated from input variable covariances and output variances, and with charge balance calculations. Groundwater compositions in Romuvaara borehole KR10 are predicted at 10 metre intervals with both prediction methods. 46 refs.

  12. Meta-analysis of choice set generation effects on route choice model estimates and predictions

    DEFF Research Database (Denmark)

    Prato, Carlo Giacomo

    2012-01-01

    are applied for model estimation and results are compared to the ‘true model estimates’. Last, predictions from the simulation of models estimated with objective choice sets are compared to the ‘postulated predicted routes’. A meta-analytical approach allows synthesizing the effect of judgments......Large scale applications of behaviorally realistic transport models pose several challenges to transport modelers on both the demand and the supply sides. On the supply side, path-based solutions to the user assignment equilibrium problem help modelers in enhancing the route choice behavior...... modeling, but require them to generate choice sets by selecting a path generation technique and its parameters according to personal judgments. This paper proposes a methodology and an experimental setting to provide general indications about objective judgments for an effective route choice set generation...

  13. GNSS global real-time augmentation positioning: Real-time precise satellite clock estimation, prototype system construction and performance analysis

    Science.gov (United States)

    Chen, Liang; Zhao, Qile; Hu, Zhigang; Jiang, Xinyuan; Geng, Changjiang; Ge, Maorong; Shi, Chuang

    2018-01-01

    A large number of ambiguities in the un-differenced (UD) model leads to low computational efficiency, which is not suitable for high-frequency real-time GNSS clock estimation, such as 1 Hz. A mixed differenced model fusing UD pseudo-range and epoch-differenced (ED) phase observations has been introduced into real-time clock estimation. In this contribution, we extend the mixed differenced model to realize multi-GNSS real-time high-frequency clock updating, and a rigorous comparison and analysis under the same conditions is performed to achieve the best real-time clock estimation performance, taking efficiency, accuracy, consistency and reliability into consideration. Based on the multi-GNSS real-time data streams provided by the multi-GNSS Experiment (MGEX) and Wuhan University, a GPS + BeiDou + Galileo global real-time augmentation positioning prototype system is designed and constructed, including real-time precise orbit determination, real-time precise clock estimation, real-time Precise Point Positioning (RT-PPP) and real-time Standard Point Positioning (RT-SPP). The statistical analysis of the 6 h-predicted real-time orbits shows that the root mean square (RMS) in the radial direction is about 1-5 cm for GPS, BeiDou MEO and Galileo satellites and about 10 cm for BeiDou GEO and IGSO satellites. Using the mixed differenced estimation model, the prototype system can realize highly efficient real-time satellite absolute clock estimation with no constant clock bias and can be used for high-frequency augmentation message updating (such as 1 Hz). The real-time augmentation message signal-in-space ranging error (SISRE), a comprehensive accuracy measure of orbit and clock that affects the users' actual positioning performance, is introduced to evaluate and analyze the performance of the GPS + BeiDou + Galileo global real-time augmentation positioning system. The statistical analysis of the real-time augmentation message SISRE is about 4-7 cm for GPS, while 10 cm for BeiDou IGSO/MEO, Galileo and about 30 cm
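    For readers unfamiliar with SISRE, a common global-average approximation combines the orbit errors and the clock error with constellation-dependent weights; the sketch below uses the weights usually quoted for GPS-like MEO satellites (w_R about 0.98, along/cross-track contributions divided by about 49), which should be treated as assumptions rather than the paper's exact formulation.

```python
import numpy as np

def sisre(d_radial, d_along, d_cross, d_clock, w_r=0.98, w_ac2=49.0):
    """Approximate signal-in-space ranging error (metres) from orbit and clock errors.

    Default weights follow the common global-average rule for GPS-like MEO
    satellites; other constellations and altitudes use different weights.
    """
    return np.sqrt((w_r * d_radial - d_clock) ** 2
                   + (d_along ** 2 + d_cross ** 2) / w_ac2)

# Illustrative errors in metres: radial 3 cm, along 10 cm, cross 8 cm, clock 2 cm.
print(sisre(0.03, 0.10, 0.08, 0.02))
```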

  14. Fuzzy Sliding Mode Observer with Grey Prediction for the Estimation of the State-of-Charge of a Lithium-Ion Battery

    Directory of Open Access Journals (Sweden)

    Daehyun Kim

    2015-11-01

    Full Text Available We propose a state-of-charge (SOC estimation method for Li-ion batteries that combines a fuzzy sliding mode observer (FSMO with grey prediction. Unlike the existing methods based on a conventional first-order sliding mode observer (SMO and an adaptive gain SMO, the proposed method eliminates chattering in SOC estimation. In this method, which uses a fuzzy inference system, the gains of the SMO are adjusted according to the predicted future error and present estimation error of the terminal voltage. To forecast the future error value, a one-step-ahead terminal voltage prediction is obtained using a grey predictor. The proposed estimation method is validated through two types of discharge tests (a pulse discharge test and a random discharge test. The SOC estimation results are compared to the results of the conventional first-order SMO-based and the adaptive gain SMO-based methods. The experimental results show that the proposed method not only reduces chattering, but also improves estimation accuracy.
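    The grey predictor at the heart of the method is the classic GM(1,1) model, which is compact enough to sketch in full; the voltage samples below are invented.

```python
import numpy as np

def gm11_one_step(x0):
    """One-step-ahead forecast with the classic GM(1,1) grey model.

    x0: recent sequence of (positive) terminal-voltage samples.
    Returns the predicted next value.
    """
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                          # accumulated generating operation
    z1 = 0.5 * (x1[1:] + x1[:-1])               # background values
    B = np.column_stack([-z1, np.ones(len(z1))])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]   # development/control params
    n = len(x0)
    x1_hat = lambda k: (x0[0] - b / a) * np.exp(-a * k) + b / a
    return x1_hat(n) - x1_hat(n - 1)            # inverse AGO gives the x0 forecast

# Usage: predict the next terminal-voltage sample from the last few readings.
v = [3.292, 3.290, 3.287, 3.285, 3.284]
print(gm11_one_step(v))
```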

  15. Labor productivity adjustment factors. A method for estimating labor construction costs associated with physical modifications to nuclear power plants

    International Nuclear Information System (INIS)

    Riordan, B.J.

    1986-03-01

    This report develops quantitative labor productivity adjustment factors for the performance of regulatory impact analyses (RIAs). These factors will allow analysts to modify ''new construction'' labor costs to account for changes in labor productivity due to differing work environments at operating reactors and at reactors with construction in progress. The technique developed in this paper relies on the Energy Economic Data Base (EEDB) for baseline estimates of the direct labor hours and/or labor costs required to perform specific tasks in a new construction environment. The labor productivity cost factors adjust for constraining conditions such as working in a radiation environment, poor access, congestion and interference, etc., which typically occur on construction tasks at operating reactors and can occur under certain circumstances at reactors under construction. While the results do not portray all aspects of labor productivity, they encompass the major work place conditions generally discernible by the NRC analysts and assign values that appear to be reasonable within the context of industry experience. 18 refs

  16. Location Prediction Based on Transition Probability Matrices Constructing from Sequential Rules for Spatial-Temporal K-Anonymity Dataset

    Science.gov (United States)

    Liu, Zhao; Zhu, Yunhong; Wu, Chenxue

    2016-01-01

    Spatial-temporal k-anonymity has become a mainstream approach among techniques for protecting users' privacy in location-based services (LBS) applications, and has been applied to several variants such as LBS snapshot queries and continuous queries. Analyzing large-scale spatial-temporal anonymity sets may benefit several LBS applications. In this paper, we propose two location prediction methods based on transition probability matrices constructed from sequential rules for spatial-temporal k-anonymity datasets. First, we define single-step sequential rules mined from sequential spatial-temporal k-anonymity datasets generated from continuous LBS queries for multiple users. We then construct transition probability matrices from the mined single-step sequential rules, and normalize the transition probabilities in the transition matrices. Next, we regard a mobility model for an LBS requester as a stationary stochastic process and compute the n-step transition probability matrices by raising the normalized transition probability matrices to the power n. Furthermore, we propose two location prediction methods: rough prediction and accurate prediction. The former obtains the probabilities of arriving at target locations along simple paths that include only current locations, target locations and transition steps. By iteratively combining the probabilities for simple paths with n steps and the probabilities for detailed paths with n-1 steps, the latter method calculates transition probabilities for detailed paths with n steps from current locations to target locations. Finally, we conduct extensive experiments, and the correctness and flexibility of our proposed algorithms are verified. PMID:27508502
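
    The core computation, raising a row-normalised transition matrix to the power n to obtain n-step transition probabilities, takes only a few lines; the rule counts below are invented for illustration.

        import numpy as np

        # Single-step rule counts between 3 locations (rows: current, cols: next)
        counts = np.array([[0, 8, 2],
                           [5, 0, 5],
                           [1, 9, 0]], dtype=float)
        P = counts / counts.sum(axis=1, keepdims=True)   # normalised matrix

        n = 4
        Pn = np.linalg.matrix_power(P, n)   # n-step transition probabilities

        # Rough prediction: probability of each target location n steps
        # after starting from location 0.
        print(Pn[0])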

  17. Construction and evaluation of yeast expression networks by database-guided predictions

    Directory of Open Access Journals (Sweden)

    Katharina Papsdorf

    2016-05-01

    Full Text Available DNA-Microarrays are powerful tools to obtain expression data on the genome-wide scale. We performed microarray experiments to elucidate the transcriptional networks, which are up- or down-regulated in response to the expression of toxic polyglutamine proteins in yeast. Such experiments initially generate hit lists containing differentially expressed genes. To look into transcriptional responses, we constructed networks from these genes. We therefore developed an algorithm, which is capable of dealing with very small numbers of microarrays by clustering the hits based on co-regulatory relationships obtained from the SPELL database. Here, we evaluate this algorithm according to several criteria and further develop its statistical capabilities. Initially, we define how the number of SPELL-derived co-regulated genes and the number of input hits influence the quality of the networks. We then show the ability of our networks to accurately predict further differentially expressed genes. Including these predicted genes in the networks improves the network quality and allows quantifying the predictive strength of the networks based on a newly implemented scoring method. We find that this approach is useful for our own experimental data sets and also for many other data sets which we tested from the SPELL microarray database. Furthermore, the clusters obtained by the described algorithm greatly improve the assignment to biological processes and transcription factors for the individual clusters. Thus, the described clustering approach, which will be available through the ClusterEx web interface, and the evaluation parameters derived from it represent valuable tools for the fast and informative analysis of yeast microarray data.

  18. Nonparametric bootstrap procedures for predictive inference based on recursive estimation schemes

    OpenAIRE

    Corradi, Valentina; Swanson, Norman R.

    2005-01-01

    Our objectives in this paper are twofold. First, we introduce block bootstrap techniques that are (first order) valid in recursive estimation frameworks. Thereafter, we present two examples where predictive accuracy tests are made operational using our new bootstrap procedures. In one application, we outline a consistent test for out-of-sample nonlinear Granger causality, and in the other we outline a test for selecting amongst multiple alternative forecasting models, all of which are possibl...

  19. Development of technique for estimating primary cooling system break diameter in predicting nuclear emergency event sequence

    International Nuclear Information System (INIS)

    Tatebe, Yasumasa; Yoshida, Yoshitaka

    2012-01-01

    If an emergency event occurs in a nuclear power plant, appropriate action is selected and taken in accordance with the plant status, which changes from time to time, in order to prevent escalation and mitigate the event consequences. It is thus important to predict the event sequence and identify the plant behavior resulting from the action taken. In predicting the event sequence during a loss-of-coolant accident (LOCA), it is necessary to estimate break diameter. The conventional method for this estimation is time-consuming, since it involves multiple sensitivity analyses to determine the break diameter that is consistent with the plant behavior. To speed up the process of predicting the nuclear emergency event sequence, a new break diameter estimation technique that is applicable to pressurized water reactors was developed in this study. This technique enables the estimation of break diameter using the plant data sent from the safety parameter display system (SPDS), with focus on the depressurization rate in the reactor cooling system (RCS) during LOCA. The results of LOCA analysis, performed by varying the break diameter using the MAAP4 and RELAP5/MOD3.2 codes, confirmed that the RCS depressurization rate could be expressed as a log-linear function of break diameter, except in the case of a small leak, in which RCS depressurization is affected by the coolant charging system and the high-pressure injection system. A correlation equation for break diameter estimation was developed from this function and tested for accuracy. Testing verified that the correlation equation could estimate break diameter accurately within an error of approximately 16%, even if the leak increases gradually, changing the plant status. (author)
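
    A log-linear correlation of this kind is easy to fit and invert; the sketch below uses invented sensitivity-analysis values, not the paper's MAAP4/RELAP5 results.

        import numpy as np

        # Hypothetical results: break diameter (m) vs depressurization rate
        diameters = np.array([0.02, 0.05, 0.10, 0.20, 0.40])
        rates = np.array([0.6, 2.8, 9.5, 31.0, 95.0])   # e.g. MPa/min

        # Log-linear relation: log(rate) = c0 + c1 * log(diameter)
        c1, c0 = np.polyfit(np.log(diameters), np.log(rates), 1)

        def estimate_break_diameter(observed_rate):
            """Invert the fitted correlation to estimate break diameter
            from a depressurization rate observed in SPDS plant data."""
            return np.exp((np.log(observed_rate) - c0) / c1)

        print(estimate_break_diameter(12.0))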

  20. Liver stiffness value-based risk estimation of late recurrence after curative resection of hepatocellular carcinoma: development and validation of a predictive model.

    Directory of Open Access Journals (Sweden)

    Kyu Sik Jung

    Full Text Available Preoperative liver stiffness (LS) measurement using transient elastography (TE) is useful for predicting late recurrence after curative resection of hepatocellular carcinoma (HCC). We developed and validated a novel LS value-based predictive model for late recurrence of HCC. Patients who were due to undergo curative resection of HCC between August 2006 and January 2010 were prospectively enrolled and TE was performed prior to operations by study protocol. The predictive model of late recurrence was constructed based on a multiple logistic regression model. Discrimination and calibration were used to validate the model. Among a total of 139 patients who were finally analyzed, late recurrence occurred in 44 patients, with a median follow-up of 24.5 months (range, 12.4-68.1). We developed a predictive model for late recurrence of HCC using LS value, activity grade II-III, presence of multiple tumors, and indocyanine green retention rate at 15 min (ICG R15), which showed fairly good discrimination capability with an area under the receiver operating characteristic curve (AUROC) of 0.724 (95% confidence intervals [CIs], 0.632-0.816). In the validation, using a bootstrap method to assess discrimination, the AUROC remained largely unchanged between iterations, with an average AUROC of 0.722 (95% CIs, 0.718-0.724). When we plotted a calibration chart for predicted and observed risk of late recurrence, the predicted risk of late recurrence correlated well with observed risk, with a correlation coefficient of 0.873 (P<0.001). A simple LS value-based predictive model could estimate the risk of late recurrence in patients who underwent curative resection of HCC.
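
    The validation strategy described, a multiple logistic regression followed by bootstrap assessment of the AUROC, can be sketched as follows; the data are random stand-ins for the cohort, and scikit-learn is used purely for convenience.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(0)
        # Stand-in predictors (~ LS value, activity grade, multiple tumors,
        # ICG R15) and outcome (late recurrence yes/no).
        X = rng.normal(size=(139, 4))
        y = rng.integers(0, 2, size=139)

        model = LogisticRegression().fit(X, y)
        auc = roc_auc_score(y, model.predict_proba(X)[:, 1])

        # Simple bootstrap of the AUROC to assess discrimination stability
        boot = []
        for _ in range(200):
            idx = rng.integers(0, len(y), len(y))
            if len(np.unique(y[idx])) < 2:
                continue                      # refit needs both classes
            m = LogisticRegression().fit(X[idx], y[idx])
            boot.append(roc_auc_score(y, m.predict_proba(X)[:, 1]))

        print(auc, np.percentile(boot, [2.5, 97.5]))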

  1. Number Line Estimation Predicts Mathematical Skills: Difference in Grades 2 and 4.

    Science.gov (United States)

    Zhu, Meixia; Cai, Dan; Leung, Ada W S

    2017-01-01

    Studies have shown that number line estimation is important for learning. However, it is yet unclear if number line estimation predicts different mathematical skills in different grades after controlling for age, non-verbal cognitive ability, attention, and working memory. The purpose of this study was to examine the role of number line estimation on two mathematical skills (calculation fluency and math problem-solving) in grade 2 and grade 4. One hundred and forty-eight children from Shanghai, China were assessed on measures of number line estimation, non-verbal cognitive ability (non-verbal matrices), working memory (N-back), attention (expressive attention), and mathematical skills (calculation fluency and math problem-solving). The results showed that in grade 2, number line estimation correlated significantly with calculation fluency (r = -0.27, p < 0.05) and math problem-solving (r = -0.52, p < 0.01). In grade 4, number line estimation correlated significantly with math problem-solving (r = -0.38, p < 0.01), but not with calculation fluency. Regression analyses indicated that in grade 2, number line estimation accounted for unique variance in math problem-solving (12.0%) and calculation fluency (4.0%) after controlling for the effects of age, non-verbal cognitive ability, attention, and working memory. In grade 4, number line estimation accounted for unique variance in math problem-solving (9.0%) but not in calculation fluency. These findings suggested that number line estimation had an important role in math problem-solving for both grades 2 and 4 children and in calculation fluency for grade 2 children. We concluded that number line estimation could be a useful indicator for teachers to identify and improve children's mathematical skills.

  2. The Dangers of Estimating V˙O2max Using Linear, Nonexercise Prediction Models.

    Science.gov (United States)

    Nevill, Alan M; Cooke, Carlton B

    2017-05-01

    This study aimed to compare the accuracy and goodness of fit of two competing models (linear vs allometric) when estimating V˙O2max (mL·kg⁻¹·min⁻¹) using nonexercise prediction models. The two competing models were fitted to the V˙O2max (mL·kg⁻¹·min⁻¹) data taken from two previously published studies. Study 1 (the Allied Dunbar National Fitness Survey) recruited 1732 randomly selected healthy participants, 16 yr and older, from 30 English parliamentary constituencies. Estimates of V˙O2max were obtained using a progressive incremental test on a motorized treadmill. In study 2, maximal oxygen uptake was measured directly during a fatigue-limited treadmill test in older men (n = 152) and women (n = 146) 55 to 86 yr old. In both studies, the quality of fit associated with estimating V˙O2max (mL·kg⁻¹·min⁻¹) was superior using allometric rather than linear (additive) models based on all criteria (R², maximum log-likelihood, and Akaike information criteria). Results suggest that linear models will systematically overestimate V˙O2max for participants in their 20s and underestimate V˙O2max for participants in their 60s and older. The residuals saved from the linear models were neither normally distributed nor independent of the predicted values or age. This probably explains the absence of a key quadratic age term in the linear models, crucially identified using allometric models. Not only does the curvilinear age decline within an exponential function follow a more realistic age decline (the right-hand side of a bell-shaped curve), but the allometric models also identified either a stature-to-body mass ratio (study 1) or a fat-free mass-to-body mass ratio (study 2), both associated with leanness when estimating V˙O2max. Adopting allometric models will provide more accurate predictions of V˙O2max (mL·kg⁻¹·min⁻¹) using plausible, biologically sound, and interpretable models.
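
    The contrast between the two model families amounts to fitting the same predictors additively on the raw scale versus multiplicatively on the log scale; below is a minimal sketch with synthetic data (ages, masses and coefficients are invented, not taken from either study).

        import numpy as np

        rng = np.random.default_rng(1)
        age = rng.uniform(20, 80, 300)
        mass = rng.uniform(50, 110, 300)
        vo2max = 80 * np.exp(-0.015 * age) * mass**-0.1 \
                 * rng.lognormal(0.0, 0.08, 300)

        # Linear (additive) model: vo2max = b0 + b1*age + b2*mass
        Xlin = np.column_stack([np.ones_like(age), age, mass])
        b_lin, *_ = np.linalg.lstsq(Xlin, vo2max, rcond=None)

        # Allometric model: vo2max = exp(b0) * mass**b1 * exp(b2*age),
        # fitted as an ordinary linear model on the log scale
        Xall = np.column_stack([np.ones_like(age), np.log(mass), age])
        b_all, *_ = np.linalg.lstsq(Xall, np.log(vo2max), rcond=None)

        for name, pred in [("linear", Xlin @ b_lin),
                           ("allometric", np.exp(Xall @ b_all))]:
            print(name, np.sqrt(np.mean((vo2max - pred) ** 2)))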

  3. New measure of insulin sensitivity predicts cardiovascular disease better than HOMA estimated insulin resistance.

    Science.gov (United States)

    Venkataraman, Kavita; Khoo, Chin Meng; Leow, Melvin K S; Khoo, Eric Y H; Isaac, Anburaj V; Zagorodnov, Vitali; Sadananthan, Suresh A; Velan, Sendhil S; Chong, Yap Seng; Gluckman, Peter; Lee, Jeannette; Salim, Agus; Tai, E Shyong; Lee, Yung Seng

    2013-01-01

    Accurate assessment of insulin sensitivity may better identify individuals at increased risk of cardio-metabolic diseases. The aim was to examine whether a combination of anthropometric, biochemical and imaging measures can better estimate the insulin sensitivity index (ISI) and provide improved prediction of cardio-metabolic risk, in comparison to HOMA-IR. Participants were healthy male volunteers (96 Chinese, 80 Malay, 77 Indian), 21 to 40 years old, with body mass index 18-30 kg/m². Predicted ISI (ISI-cal) was generated using 45 randomly selected Chinese participants through stepwise multiple linear regression, and validated in the rest using non-parametric correlation (Kendall's tau). In an independent longitudinal cohort, ISI-cal and HOMA-IR were compared for prediction of diabetes and cardiovascular disease (CVD), using ROC curves. The study was conducted in a university academic medical centre. Measures were ISI from the hyperinsulinemic euglycemic glucose clamp, along with anthropometric measurements, biochemical assessment and imaging; outcomes were incident diabetes and CVD. A combination of fasting insulin, serum triglycerides and waist-to-hip ratio (WHR) provided the best estimate of clamp-derived ISI (adjusted R² 0.58 versus 0.32 for HOMA-IR). In an independent cohort, ROC areas under the curve were 0.77±0.02 for ISI-cal versus 0.76±0.02 for HOMA-IR (p>0.05) for incident diabetes, and 0.74±0.03 for ISI-cal versus 0.61±0.03 for HOMA-IR for incident CVD, with ISI-cal predicting CVD significantly better than HOMA-IR. This may be useful for estimating insulin sensitivity and cardio-metabolic risk in clinical and epidemiological settings.

  4. Nonlinear estimation and control of automotive drivetrains

    CERN Document Server

    Chen, Hong

    2014-01-01

    Nonlinear Estimation and Control of Automotive Drivetrains discusses the control problems involved in automotive drivetrains, particularly in hydraulic Automatic Transmission (AT), Dual Clutch Transmission (DCT) and Automated Manual Transmission (AMT). Challenging estimation and control problems, such as driveline torque estimation and gear shift control, are addressed by applying the latest nonlinear control theories, including constructive nonlinear control (Backstepping, Input-to-State Stable) and Model Predictive Control (MPC). The estimation and control performance is improved while the calibration effort is reduced significantly. The book presents many detailed examples of design processes and thus enables the readers to understand how to successfully combine purely theoretical methodologies with actual applications in vehicles. The book is intended for researchers, PhD students, control engineers and automotive engineers. Hong Chen is a professor at the State Key Laboratory of Automotive Simulation and...

  5. GRIP: A web-based system for constructing Gold Standard datasets for protein-protein interaction prediction

    Directory of Open Access Journals (Sweden)

    Zheng Huiru

    2009-01-01

    Full Text Available Abstract Background Information about protein interaction networks is fundamental to understanding protein function and cellular processes. Interaction patterns among proteins can suggest new drug targets and aid in the design of new therapeutic interventions. Efforts have been made to map interactions on a proteome-wide scale using both experimental and computational techniques. Reference datasets that contain known interacting proteins (positive cases) and non-interacting proteins (negative cases) are essential to support computational prediction and validation of protein-protein interactions. Information on known interacting and non-interacting proteins is usually stored within databases. Extraction of these data can be both complex and time consuming. Although the automatic construction of reference datasets for classification is a useful resource for researchers, no public resource currently exists to perform this task. Results GRIP (Gold Reference dataset constructor from Information on Protein complexes) is a web-based system that provides researchers with the functionality to create reference datasets for protein-protein interaction prediction in Saccharomyces cerevisiae. Both positive and negative cases for a reference dataset can be extracted, organised and downloaded by the user. GRIP also provides an upload facility whereby users can submit proteins to determine protein complex membership. A search facility is provided where a user can search for protein complex information in Saccharomyces cerevisiae. Conclusion GRIP is developed to retrieve information on protein complexes, cellular localisation, and physical and genetic interactions in Saccharomyces cerevisiae. Manual construction of reference datasets can be a time-consuming process requiring programming knowledge. GRIP simplifies and speeds up this process by allowing users to automatically construct reference datasets. GRIP is free to access at http://rosalind.infj.ulst.ac.uk/GRIP/.

  6. Bayesian Methods for Predicting the Shape of Chinese Yam in Terms of Key Diameters

    Directory of Open Access Journals (Sweden)

    Mitsunori Kayano

    2017-01-01

    Full Text Available This paper proposes Bayesian methods for the shape estimation of Chinese yam (Dioscorea opposita) using a few key diameters of the yam. Shape prediction of yam is applicable to determining optimal cutoff positions of a yam for producing seed yams. Our Bayesian method, which is a combination of a Bayesian estimation model and a predictive model, enables automatic, rapid, and low-cost processing of yam. After the construction of the proposed models using a sample data set in Japan, the models provide whole-shape prediction of a yam based on only a few key diameters. The Bayesian method performed well on the shape prediction in terms of minimizing the mean squared error between the measured shape and the prediction. In particular, a multiple regression method with key diameters at two fixed positions attained the highest performance for shape prediction. We have developed automatic, rapid, and low-cost yam-processing machines based on the Bayesian estimation model and predictive model. Development of such shape prediction approaches, including our Bayesian method, can be a valuable aid in reducing the cost and time in food processing.

  7. Economic Optimization of Spray Dryer Operation using Nonlinear Model Predictive Control with State Estimation

    DEFF Research Database (Denmark)

    Petersen, Lars Norbert; Jørgensen, John Bagterp; Rawlings, James B.

    2015-01-01

    In this paper, we develop an economically optimizing Nonlinear Model Predictive Controller (E-NMPC) for a complete spray drying plant with multiple stages. In the E-NMPC the initial state is estimated by an extended Kalman Filter (EKF) with noise covariances estimated by an autocovariance least squares method (ALS). We present a model for the spray drying plant and use this model for simulation as well as for prediction in the E-NMPC. The open-loop optimal control problem in the E-NMPC is solved using the single-shooting method combined with a quasi-Newton Sequential Quadratic Programming (SQP) algorithm and the adjoint method for computation of gradients. We evaluate the economic performance when unmeasured disturbances are present. By simulation, we demonstrate that the E-NMPC improves the profit of spray drying by 17% compared to conventional PI control.

  8. Better estimation of protein-DNA interaction parameters improve prediction of functional sites

    Directory of Open Access Journals (Sweden)

    O'Flanagan Ruadhan A

    2008-12-01

    Full Text Available Abstract Background Characterizing transcription factor binding motifs is a common bioinformatics task. For transcription factors with variable binding sites, we need to get many suboptimal binding sites in our training dataset to get accurate estimates of free energy penalties for deviating from the consensus DNA sequence. One procedure to do that involves a modified SELEX (Systematic Evolution of Ligands by Exponential Enrichment) method designed to produce many such sequences. Results We analyzed low-stringency SELEX data for the E. coli Catabolite Activator Protein (CAP), and we show here that appropriate quantitative analysis improves our ability to predict in vitro affinity. To obtain the large number of sequences required for this analysis we used a SELEX SAGE protocol developed by Roulet et al. The sequences obtained were subjected to bioinformatic analysis. The resulting bioinformatic model characterizes the sequence specificity of the protein more accurately than the sequence specificities predicted from previous analyses that used just a few known binding sites available in the literature. The consequences of this increase in accuracy for the prediction of in vivo binding sites (and especially functional ones) in the E. coli genome are also discussed. We measured the dissociation constants of several putative CAP binding sites by EMSA (Electrophoretic Mobility Shift Assay) and compared the affinities to the bioinformatics scores provided by methods like the weight matrix method and QPMEME (Quadratic Programming Method of Energy Matrix Estimation) trained on known binding sites as well as on the new sites from the SELEX SAGE data. We also checked predicted genome sites for conservation in the related species S. typhimurium. We found that bioinformatics scores based on SELEX SAGE data do better in terms of predicting physical binding energies as well as in detecting functional sites. Conclusion We think that training binding site detection

  9. Determination of Constructs and Dimensions of Employability Skills Based Work Performance Prediction: A Triangular Approach

    OpenAIRE

    Rahmat, Normala; Buntat, Yahya; Ayub, Abdul Rahman

    2015-01-01

    The level of the employability skills of the graduates as determined by job role and mapped to the employability skills, which correspond to the requirement of employers, will have significant impact on the graduates’ job performance. The main objective of this study was to identify the constructs and dimensions of employability skills, which can predict the work performance of electronic polytechnic graduate in electrical and electronics industry. A triangular qualitative approach was used i...

  10. Prediction of the period of psychotic episode in individual schizophrenics by simulation-data construction approach.

    Science.gov (United States)

    Huang, Chun-Jung; Wang, Hsiao-Fan; Chiu, Hsien-Jane; Lan, Tsuo-Hung; Hu, Tsung-Ming; Loh, El-Wui

    2010-10-01

    Although schizophrenia can be treated, most patients still experience inevitable psychotic episodes from time to time. Precautionary actions can be taken if the next onset can be predicted. However, sufficient information is always lacking in the clinical scenario. A possible solution is to use virtual data generated from a limited amount of original data. The data construction method (DCM) has been shown to generate virtual felt-earthquake data effectively and has been used in the prediction of further events. Here we investigated the performance of DCM in deriving the membership functions and discrete-event simulations (DES) for predicting the period embracing the initiation and termination time-points of the next psychotic episode of 35 individual schizophrenic patients. The results showed that 21 subjects had a rate of successful simulations (RSS) ≥70%. Further analysis demonstrated that the co-morbidity of coronary heart disease (CHD), risks of CHD, and the frequency of previous psychotic episodes increased the RSS.

  11. Estimating Body Related Soft Biometric Traits in Video Frames

    Directory of Open Access Journals (Sweden)

    Olasimbo Ayodeji Arigbabu

    2014-01-01

    Full Text Available Soft biometrics can be used as a prescreening filter, either by using a single trait or by combining several traits, to aid the performance of recognition systems in an unobtrusive way. In many practical visual surveillance scenarios, facial information is difficult to construct effectively due to several varying challenges. However, from a distance the visual appearance of an object can be efficiently inferred, thereby providing the possibility of estimating body-related information. This paper presents an approach for estimating body-related soft biometrics; specifically, we propose a new approach based on body measurement and an artificial neural network for predicting the body weight of subjects, and incorporate an existing single-view metrology technique for height estimation in videos with low frame rates. Our evaluation on 1120 frame sets of 80 subjects from a newly compiled dataset shows that the mentioned soft biometric information of human subjects can be adequately predicted from sets of frames.

  12. The effect of genealogy-based haplotypes on genomic prediction

    DEFF Research Database (Denmark)

    Edriss, Vahid; Fernando, Rohan L.; Su, Guosheng

    2013-01-01

    on haplotypes instead of regression on individual markers. The aim of this study was to investigate the accuracy of genomic prediction using haplotypes based on local genealogy information. Methods A total of 4429 Danish Holstein bulls were genotyped with the 50K SNP chip. Haplotypes were constructed using local genealogical trees. Effects of haplotype covariates were estimated with two types of prediction models: (1) assuming that effects had the same distribution for all haplotype covariates, i.e. the GBLUP method, and (2) assuming that a large proportion (pi) of the haplotype covariates had zero effect, i.e. a Bayesian mixture method. Results About 7.5 times more covariate effects were estimated when fitting haplotypes based on local genealogical trees compared to fitting individual markers. Genealogy-based haplotype clustering slightly increased the accuracy of genomic prediction and, in some...

  13. Molecular Dynamics Simulations and Kinetic Measurements to Estimate and Predict Protein-Ligand Residence Times.

    Science.gov (United States)

    Mollica, Luca; Theret, Isabelle; Antoine, Mathias; Perron-Sierra, Françoise; Charton, Yves; Fourquez, Jean-Marie; Wierzbicki, Michel; Boutin, Jean A; Ferry, Gilles; Decherchi, Sergio; Bottegoni, Giovanni; Ducrot, Pierre; Cavalli, Andrea

    2016-08-11

    Ligand-target residence time is emerging as a key drug discovery parameter because it can reliably predict drug efficacy in vivo. Experimental approaches to binding and unbinding kinetics are nowadays available, but we still lack reliable computational tools for predicting kinetics and residence time. Most attempts have been based on brute-force molecular dynamics (MD) simulations, which are CPU-demanding and not yet particularly accurate. We recently reported a new scaled-MD-based protocol, which showed potential for residence time prediction in drug discovery. Here, we further challenged our procedure's predictive ability by applying our methodology to a series of glucokinase activators that could be useful for treating type 2 diabetes mellitus. We combined scaled MD with experimental kinetics measurements and X-ray crystallography, promptly checking the protocol's reliability by directly comparing computational predictions and experimental measures. The good agreement highlights the potential of our scaled-MD-based approach as an innovative method for computationally estimating and predicting drug residence times.

  14. "RESURS" - The Russian scientific-technical programme for NPP equipment lifetime monitoring, estimation, prediction and management

    International Nuclear Information System (INIS)

    Emelyanov, V.

    1994-01-01

    The RESURS programme is described. Its implementation will allow the development of a regulatory and methodological basis providing legal and technical solutions to the problems of NPP equipment lifetime management, prediction, monitoring and estimation.

  15. Number Line Estimation Predicts Mathematical Skills: Difference in Grades 2 and 4

    Directory of Open Access Journals (Sweden)

    Meixia Zhu

    2017-09-01

    Full Text Available Studies have shown that number line estimation is important for learning. However, it is yet unclear if number line estimation predicts different mathematical skills in different grades after controlling for age, non-verbal cognitive ability, attention, and working memory. The purpose of this study was to examine the role of number line estimation on two mathematical skills (calculation fluency and math problem-solving in grade 2 and grade 4. One hundred and forty-eight children from Shanghai, China were assessed on measures of number line estimation, non-verbal cognitive ability (non-verbal matrices, working memory (N-back, attention (expressive attention, and mathematical skills (calculation fluency and math problem-solving. The results showed that in grade 2, number line estimation correlated significantly with calculation fluency (r = -0.27, p < 0.05 and math problem-solving (r = -0.52, p < 0.01. In grade 4, number line estimation correlated significantly with math problem-solving (r = -0.38, p < 0.01, but not with calculation fluency. Regression analyses indicated that in grade 2, number line estimation accounted for unique variance in math problem-solving (12.0% and calculation fluency (4.0% after controlling for the effects of age, non-verbal cognitive ability, attention, and working memory. In grade 4, number line estimation accounted for unique variance in math problem-solving (9.0% but not in calculation fluency. These findings suggested that number line estimation had an important role in math problem-solving for both grades 2 and 4 children and in calculation fluency for grade 2 children. We concluded that number line estimation could be a useful indicator for teachers to identify and improve children’s mathematical skills.

  16. Linear Interaction Energy Based Prediction of Cytochrome P450 1A2 Binding Affinities with Reliability Estimation.

    Directory of Open Access Journals (Sweden)

    Luigi Capoferri

    Full Text Available Prediction of human Cytochrome P450 (CYP) binding affinities of small ligands, i.e., substrates and inhibitors, represents an important task for predicting drug-drug interactions. A quantitative assessment of the ligand binding affinity towards different CYPs can provide an estimate of inhibitory activity or an indication of isoforms prone to interact with the substrate of inhibitors. However, the accuracy of global quantitative models for CYP substrate binding or inhibition based on traditional molecular descriptors can be limited, because of the lack of information on the structure and flexibility of the catalytic site of CYPs. Here we describe the application of a method that combines protein-ligand docking, Molecular Dynamics (MD) simulations and Linear Interaction Energy (LIE) theory, to allow for quantitative CYP affinity prediction. Using this combined approach, a LIE model for human CYP 1A2 was developed and evaluated, based on a structurally diverse dataset for which the estimated experimental uncertainty was 3.3 kJ mol⁻¹. For the computed CYP 1A2 binding affinities, the model showed a root mean square error (RMSE) of 4.1 kJ mol⁻¹ and a standard error in prediction (SDEP) in cross-validation of 4.3 kJ mol⁻¹. A novel approach that includes information on both structural ligand description and protein-ligand interaction was developed for estimating the reliability of predictions, and was able to identify compounds from an external test set with an SDEP for the predicted affinities of 4.6 kJ mol⁻¹ (corresponding to 0.8 pKi units).
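
    At its core, LIE estimates a binding free energy as a weighted combination of differences in MD ensemble-average interaction energies between the bound and free states; the coefficients below are common literature starting values, not the calibrated values of this CYP 1A2 model.

        # dG_bind ~ alpha*dV_vdw + beta*dV_el + gamma, with dV taken as
        # differences of MD ensemble averages (bound minus free state).
        def lie_binding_free_energy(vdw_bound, vdw_free, el_bound, el_free,
                                    alpha=0.18, beta=0.5, gamma=0.0):
            """Linear Interaction Energy estimate in kJ/mol; alpha, beta and
            gamma are empirical coefficients fitted per model."""
            return (alpha * (vdw_bound - vdw_free)
                    + beta * (el_bound - el_free)
                    + gamma)

        print(lie_binding_free_energy(-110.0, -85.0, -40.0, -32.0))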

  17. Offset-Free Model Predictive Control of Open Water Channel Based on Moving Horizon Estimation

    Science.gov (United States)

    Ekin Aydin, Boran; Rutten, Martine

    2016-04-01

    Model predictive control (MPC) is a powerful control option which is increasingly used by operational water managers for managing water systems. The explicit consideration of constraints and multi-objective management are important features of MPC. However, due to water loss in open water systems by seepage, leakage and evaporation, a mismatch between the model and the real system will be created. This mismatch affects the performance of MPC and creates an offset from the reference set point of the water level. We present model predictive control based on moving horizon estimation (MHE-MPC) to achieve offset-free control of the water level in open water canals. MHE-MPC uses the past predictions of the model and the past measurements of the system to estimate unknown disturbances, and the offset in the controlled water level is systematically removed. We numerically tested MHE-MPC on an accurate hydro-dynamic model of the laboratory canal UPC-PAC located in Barcelona. In addition, we also used the well-known disturbance-modeling offset-free control scheme for the same test case. Simulation experiments on a single canal reach show that MHE-MPC outperforms the disturbance-modeling offset-free control scheme.

  18. Neural network versus activity-specific prediction equations for energy expenditure estimation in children.

    Science.gov (United States)

    Ruch, Nicole; Joss, Franziska; Jimmy, Gerda; Melzer, Katarina; Hänggi, Johanna; Mäder, Urs

    2013-11-01

    The aim of this study was to compare the energy expenditure (EE) estimations of activity-specific prediction equations (ASPE) and of an artificial neural network (ANNEE) based on accelerometry with measured EE. Forty-three children (age: 9.8 ± 2.4 yr) performed eight different activities. They were equipped with one tri-axial accelerometer that collected data in 1-s epochs and a portable gas analyzer. The ASPE and the ANNEE were trained to estimate the EE by including accelerometry, age, gender, and weight of the participants. To provide the activity-specific information, a decision tree was trained to recognize the type of activity through accelerometer data. The ASPE were applied to the activity-type-specific data recognized by the tree (Tree-ASPE). The Tree-ASPE precisely estimated the EE of all activities except cycling [bias: -1.13 ± 1.33 metabolic equivalent (MET)] and walking (bias: 0.29 ± 0.64 MET; P < 0.05). The ANNEE overestimated the EE of walking (bias: 0.61 ± 0.72 MET) and underestimated the EE of cycling (bias: -0.90 ± 1.18 MET; P < 0.05). The biases for walking (ANNEE: 0.61 ± 0.72 MET, Tree-ASPE: 0.29 ± 0.64 MET) and for the remaining activities (Tree-ASPE: 0.08 ± 0.21 MET) were significantly smaller in the Tree-ASPE than in the ANNEE (P < 0.05). The Tree-ASPE was more precise in estimating the EE than the ANNEE. The use of activity-type-specific information for subsequent EE prediction equations might be a promising approach for future studies.

  19. Correlational analysis and predictive validity of psychological constructs related with pain in fibromyalgia

    Directory of Open Access Journals (Sweden)

    Roca Miquel

    2011-01-01

    Full Text Available Abstract Background Fibromyalgia (FM) is a prevalent and disabling disorder characterized by a history of widespread pain for at least three months. Pain is considered a complex experience in which affective and cognitive aspects are crucial for prognosis. The aim of this study is to assess the importance of pain-related psychological constructs on function and pain in patients with FM. Methods Design Multicentric, naturalistic, one-year follow-up study. Setting and study sample. Patients will be recruited from primary care health centres in the region of Aragon, Spain. Patients considered for inclusion are those aged 18-65 years, able to understand Spanish, who fulfil criteria for primary FM according to the American College of Rheumatology, with no previous psychological treatment. Measurements The variables measured will be the following: main variables (pain, assessed with a visual analogue scale and with a sphygmomanometer, and general function, assessed with the Fibromyalgia Impact Questionnaire); psychological constructs (pain catastrophizing, pain acceptance, mental defeat, psychological inflexibility, perceived injustice, mindfulness, and positive and negative affect); and secondary variables (sociodemographic variables, anxiety and depression assessed with the Hospital Anxiety and Depression Scale, and psychiatric interview assessed with the MINI). Assessments will be carried out at baseline and at one-year follow-up. Main outcome Pain Visual Analogue Scale. Analysis The existence of differences in socio-demographic, main outcome and other variables regarding pain-related psychological constructs will be analysed using the Chi Square test for qualitative variables, or Student t test or variance analysis, respectively, for variables fulfilling the normality hypothesis. To assess the predictive value of pain-related psychological constructs on main outcome variables at one-year follow-up, use will be made of a logistic regression analysis adjusted for socio

  20. Adaptive Nonparametric Variance Estimation for a Ratio Estimator ...

    African Journals Online (AJOL)

    Kernel estimators for smooth curves require modifications when estimating near end points of the support, both for practical and asymptotic reasons. The construction of such boundary kernels as solutions of a variational problem is a difficult exercise. For estimating the error variance of a ratio estimator, we suggest an ...

  1. Predicting future forestland area: a comparison of econometric approaches.

    Science.gov (United States)

    SoEun Ahn; Andrew J. Plantinga; Ralph J. Alig

    2000-01-01

    Predictions of future forestland area are an important component of forest policy analyses. In this article, we test the ability of econometric land use models to accurately forecast forest area. We construct a panel data set for Alabama consisting of county and time-series observations for the period 1964 to 1992. We estimate models using restricted data sets, namely,...

  2. Extending Theory-Based Quantitative Predictions to New Health Behaviors.

    Science.gov (United States)

    Brick, Leslie Ann D; Velicer, Wayne F; Redding, Colleen A; Rossi, Joseph S; Prochaska, James O

    2016-04-01

    Traditional null hypothesis significance testing suffers many limitations and is poorly adapted to theory testing. A proposed alternative approach, called Testing Theory-based Quantitative Predictions, uses effect size estimates and confidence intervals to directly test predictions based on theory. This paper replicates findings from previous smoking studies and extends the approach to diet and sun protection behaviors using baseline data from a Transtheoretical Model behavioral intervention (N = 5407). Effect size predictions were developed using two methods: (1) applying refined effect size estimates from previous smoking research or (2) using predictions developed by an expert panel. Thirteen of 15 predictions were confirmed for smoking. For diet, 7 of 14 predictions were confirmed using smoking predictions and 6 of 16 using expert panel predictions. For sun protection, 3 of 11 predictions were confirmed using smoking predictions and 5 of 19 using expert panel predictions. Expert panel predictions and smoking-based predictions poorly predicted effect sizes for diet and sun protection constructs. Future studies should aim to use previous empirical data to generate predictions whenever possible. The best results occur when there have been several iterations of predictions for a behavior, such as with smoking, demonstrating that expected values begin to converge on the population effect size. Overall, the study supports necessity in strengthening and revising theory with empirical data.

  3. Application of the predicted heat strain model in development of localized, threshold-based heat stress management guidelines for the construction industry.

    Science.gov (United States)

    Rowlinson, Steve; Jia, Yunyan Andrea

    2014-04-01

    Existing heat stress risk management guidelines recommended by international standards are not practical for the construction industry, which needs site supervision staff to make instant managerial decisions to mitigate heat risks. The ability of the predicted heat strain (PHS) model [ISO 7933 (2004). Ergonomics of the thermal environment: analytical determination and interpretation of heat stress using calculation of the predicted heat strain. Geneva: International Organization for Standardization] to predict maximum allowable exposure time (Dlim) has now enabled development of localized, action-triggering and threshold-based guidelines for implementation by lay frontline staff on construction sites. This article presents a protocol for development of two heat stress management tools by applying the PHS model to its full potential. One of the tools is developed to facilitate managerial decisions on an optimized work-rest regimen for paced work. The other tool is developed to enable workers' self-regulation during self-paced work.

  4. Evaluation of a real-time travel time prediction system in a freeway construction work zone : final report, March 2001.

    Science.gov (United States)

    2001-03-01

    A real-time travel time prediction system (TIPS) was evaluated in a construction work zone. TIPS includes changeable message signs (CMSs) displaying the travel time and distance to the end of the work zone to motorists. The travel times displayed by ...

  5. Conductor Temperature Estimation and Prediction at Thermal Transient State in Dynamic Line Rating Application

    DEFF Research Database (Denmark)

    Alvarez, David L.; Silva, Filipe Miguel Faria da; Mombello, Enrique Esteban

    2018-01-01

    This paper presents an algorithm to estimate and predict the temperature in overhead line conductors using an Extended Kalman Filter. The proposed algorithm assumes both actual weather and current intensity flowing along the conductor as control variables. The temperature of the conductor, mechanical tension
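
    A scalar extended Kalman filter over a simplified conductor heat balance illustrates the predict/update cycle such an algorithm performs; all parameters and measurements below are invented for illustration.

        import numpy as np

        # Simplified thermal balance: heat capacity C, joule heating I^2*R(T),
        # convective cooling h*(T - T_amb); Euler step of length DT seconds.
        C, R0, ALPHA, H, DT = 1500.0, 1e-4, 0.004, 20.0, 10.0

        def f(T, I, T_amb):
            R = R0 * (1.0 + ALPHA * (T - 20.0))   # temperature-dependent resistance
            return T + DT / C * (I**2 * R - H * (T - T_amb))

        def jac(T, I):
            return 1.0 + DT / C * (I**2 * R0 * ALPHA - H)   # df/dT

        T_est, P = 40.0, 4.0     # state estimate and its variance
        Q, Rm = 0.05, 1.0        # process / measurement noise variances
        for I, T_amb, z in [(600, 25, 47.2), (650, 25, 49.1), (700, 26, 52.0)]:
            T_pred = f(T_est, I, T_amb)           # predict
            F = jac(T_est, I)
            P = F * P * F + Q
            K = P / (P + Rm)                      # update with measurement z
            T_est = T_pred + K * (z - T_pred)
            P = (1.0 - K) * P
            print(round(T_est, 2))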

  6. Simultaneously estimation for surface heat fluxes of steel slab in a reheating furnace based on DMC predictive control

    International Nuclear Information System (INIS)

    Li, Yanhao; Wang, Guangjun; Chen, Hong

    2015-01-01

    Predictive control theory is utilized for the simultaneous estimation of the heat fluxes through the upper, side and lower surfaces of a steel slab in a walking beam type rolling steel reheating furnace. An inverse algorithm based on dynamic matrix control (DMC) is established. That is, each surface heat flux of the slab is simultaneously estimated through rolling optimization on the basis of temperature measurements at selected points in its interior, utilizing the step response function as the predictive model of the slab's temperature. The reliability of the DMC results is enhanced because no specific functional forms of the heat fluxes over a period of future time have to be assumed a priori. The inverse algorithm applies separate regularization to each flux, which effectively improves the stability of the estimated results given the obvious differences in magnitude between the upper as well as lower and the side surface heat fluxes of the slab. - Highlights: • Predictive control theory is adopted. • An inversion scheme based on DMC is established. • Upper, side and lower surface heat fluxes of the slab are estimated based on DMC. • Separate regularization is proposed to improve the stability of results
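
    The rolling optimisation at the heart of DMC can be illustrated by the unconstrained solution du = (G'G + lam*I)^-1 G'(w - f), built from a step-response model; the coefficients and horizons below are invented for illustration.

        import numpy as np

        s = np.array([0.0, 0.2, 0.5, 0.8, 0.95, 1.0])  # step-response coefficients
        P_h, M, lam = 5, 2, 0.1                        # horizons, move penalty

        # Dynamic matrix G (P_h x M) assembled from the step response
        G = np.zeros((P_h, M))
        for i in range(P_h):
            for j in range(M):
                if i >= j:
                    G[i, j] = s[min(i - j + 1, len(s) - 1)]

        w = np.ones(P_h)        # reference trajectory over the horizon
        f = np.full(P_h, 0.3)   # free response (no further control action)

        # Rolling optimisation; only the first move is applied each step
        du = np.linalg.solve(G.T @ G + lam * np.eye(M), G.T @ (w - f))
        print(du[0])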

  7. Analysis of nuclear-power construction costs

    International Nuclear Information System (INIS)

    Jansma, G.L.; Borcherding, J.D.

    1988-01-01

    This paper discusses the use of regression analysis for estimating construction costs. The estimate is based on an historical data base and quantification of key factors considered external to project management. This technique is not intended as a replacement for detailed cost estimates but can provide information useful to the cost-estimating process and to top management interested in evaluating project management. The focus of this paper is the nuclear-power construction industry but the technique is applicable beyond this example. The approach and critical assumptions are also useful in a public-policy situation where utility commissions are evaluating construction in prudence reviews and making comparisons to other nuclear projects. 13 references, 2 figures

  8. eMolTox: prediction of molecular toxicity with confidence.

    Science.gov (United States)

    Ji, Changge; Svensson, Fredrik; Zoufir, Azedine; Bender, Andreas

    2018-03-07

    In this work we present eMolTox, a web server for the prediction of potential toxicity associated with a given molecule. 174 toxicology-related in vitro/in vivo experimental datasets were used for model construction, and Mondrian conformal prediction was used to estimate the confidence of the resulting predictions. Toxic substructure analysis is also implemented in eMolTox. eMolTox predicts and displays a wealth of information on potential molecular toxicities for safety analysis in drug development. The eMolTox server is freely available for use on the web at http://xundrug.cn/moltox. Contact: chicago.ji@gmail.com or ab454@cam.ac.uk. Supplementary data are available at Bioinformatics online.
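
    Mondrian conformal prediction computes class-conditional p-values so confidence is calibrated per class (e.g. toxic vs. non-toxic); below is a minimal sketch with random stand-in nonconformity scores.

        import numpy as np

        def mondrian_p(cal_scores, cal_labels, test_score, label):
            """Mondrian (class-conditional) conformal p-value: compare the
            test nonconformity score only against calibration examples of
            the same class."""
            s = cal_scores[cal_labels == label]
            return (np.sum(s >= test_score) + 1) / (len(s) + 1)

        rng = np.random.default_rng(2)
        cal_scores = rng.normal(size=200)           # nonconformity scores
        cal_labels = rng.integers(0, 2, size=200)   # 0 = non-toxic, 1 = toxic

        # Prediction set at significance level 0.05: keep every label whose
        # p-value exceeds the significance level.
        print([c for c in (0, 1)
               if mondrian_p(cal_scores, cal_labels, 0.4, c) > 0.05])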

  9. Heart Failure: Diagnosis, Severity Estimation and Prediction of Adverse Events Through Machine Learning Techniques

    Directory of Open Access Journals (Sweden)

    Evanthia E. Tripoliti

    Full Text Available Heart failure is a serious condition with high prevalence (about 2% in the adult population in developed countries, and more than 8% in patients older than 75 years). About 3-5% of hospital admissions are linked with heart failure incidents. Heart failure is the leading cause of hospital admission that healthcare professionals encounter in their clinical practice. The costs are very high, reaching up to 2% of the total health costs in developed countries. Building an effective disease management strategy requires analysis of a large amount of data, early detection of the disease, assessment of severity and early prediction of adverse events. This will inhibit the progression of the disease, improve the quality of life of the patients and reduce the associated medical costs. Toward this direction, machine learning techniques have been employed. The aim of this paper is to present the state-of-the-art of the machine learning methodologies applied for the assessment of heart failure. More specifically, models predicting the presence, estimating the subtype, assessing the severity of heart failure and predicting the presence of adverse events, such as destabilizations, re-hospitalizations, and mortality are presented. To the authors' knowledge, it is the first time that such a comprehensive review, focusing on all aspects of the management of heart failure, is presented. Keywords: Heart failure, Diagnosis, Prediction, Severity estimation, Classification, Data mining

  10. Analyses of computer programs for the probabilistic estimation of design earthquake and seismological characteristics of the Korean Peninsula

    International Nuclear Information System (INIS)

    Lee, Gi Hwa

    1997-11-01

    The purpose of the present study is to develop predictive equations, from simulated motions, that are adequate for the Korean Peninsula, and to analyze and utilize the computer programs for the probabilistic estimation of design earthquakes. In part I of the report, computer programs for the probabilistic estimation of design earthquakes are analyzed and applied to seismic hazard characterization in the Korean Peninsula. In part II of the report, available instrumental earthquake records are analyzed to estimate earthquake source characteristics and medium properties, which are incorporated into the simulation process. Earthquake records are then simulated by using the estimated parameters. Finally, predictive equations constructed from the simulation are given in terms of magnitude and hypocentral distances

  11. Using A Priori Information to Improve Atmospheric Duct Estimation

    Science.gov (United States)

    Zhao, X.

    2017-12-01

    Knowledge of the refractivity condition in the marine atmospheric boundary layer (MABL) is crucial for predicting the performance of radar and communication systems at frequencies above 1 GHz on low-altitude paths. Since early this century, the 'refractivity from clutter' (RFC) technique has proved to be an effective way to estimate the MABL refractivity structure. The refractivity model is very important for RFC techniques. If prior knowledge of the local refractivity is available (e.g., from numerical weather prediction models, atmospheric soundings, etc.), a more accurate parameterized refractivity model can be constructed by statistical methods, e.g. principal component analysis, which in turn can be used to improve the quality of the local refractivity retrievals. This work extends the adjoint parabolic equation approach to range-varying atmospheric duct structure inversions, in which a linear empirical reduced-dimension refractivity model constructed from the a priori refractive information is used.
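
    A reduced-dimension refractivity model of the kind described can be built by principal component analysis of prior profiles, so the inversion searches a few mode coefficients instead of every height sample; the profiles below are synthetic.

        import numpy as np

        rng = np.random.default_rng(3)
        heights = np.linspace(0.0, 300.0, 61)            # m
        base = 330.0 + 0.118 * heights                   # standard M-profile
        profiles = base + 0.1 * rng.normal(0, 2, (100, 61)).cumsum(axis=1)

        mean = profiles.mean(axis=0)
        _, S, Vt = np.linalg.svd(profiles - mean, full_matrices=False)
        modes = Vt[:3]                                   # 3 leading modes

        # A candidate profile is now parameterised by just 3 coefficients
        coeffs = np.array([1.0, -0.5, 0.2]) * S[:3] / np.sqrt(len(profiles))
        candidate = mean + coeffs @ modes
        print(candidate[:5])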

  12. Estimation of genomic prediction accuracy from reference populations with varying degrees of relationship.

    Directory of Open Access Journals (Sweden)

    S Hong Lee

    Full Text Available Genomic prediction is emerging in a wide range of fields including animal and plant breeding, risk prediction in human precision medicine and forensics. It is desirable to establish a theoretical framework for genomic prediction accuracy when the reference data consist of information sources with varying degrees of relationship to the target individuals. A reference set can contain both close and distant relatives as well as 'unrelated' individuals from the wider population in the genomic prediction. The various sources of information were modeled as different populations with different effective population sizes (Ne). Both the effective number of chromosome segments (Me) and Ne are considered to be a function of the data used for prediction. We validate our theory with analyses of simulated as well as real data, and illustrate that the variation in genomic relationships with the target is a predictor of the information content of the reference set. With a similar amount of data available for each source, we show that close relatives can have a substantially larger effect on genomic prediction accuracy than lesser related individuals. We also illustrate that when prediction relies on closer relatives, there is less improvement in prediction accuracy with an increase in training data or marker panel density. We release software that can estimate the expected prediction accuracy and power when combining different reference sources with various degrees of relationship to the target, which is useful when planning genomic prediction (before or after collecting data) in animal, plant and human genetics.
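
    A common deterministic approximation links reference size, heritability and the effective number of chromosome segments to expected accuracy; the sketch below implements the widely used expression r = sqrt(N*h2 / (N*h2 + Me)) with illustrative numbers (a standard approximation, not necessarily the exact theory of this paper).

        import math

        def expected_accuracy(n_ref, h2, me):
            """Expected genomic prediction accuracy under the deterministic
            approximation r = sqrt(N*h2 / (N*h2 + Me))."""
            return math.sqrt(n_ref * h2 / (n_ref * h2 + me))

        # Close relatives imply a smaller Me, hence higher accuracy for the
        # same amount of reference data:
        print(expected_accuracy(5000, 0.4, 500))    # closer relatives
        print(expected_accuracy(5000, 0.4, 5000))   # distant relatives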

  13. Reliability of mobile systems in construction

    Science.gov (United States)

    Narezhnaya, Tamara; Prykina, Larisa

    2017-10-01

    The purpose of the article is to analyze the influence of the mobility of construction production, taking into account the properties of reliability and readiness. Based on the studied systems, effectiveness and efficiency are estimated. The construction system is considered to be the complete organizational structure providing the creation or updating of construction facilities. At the same time, the production sphere of these systems joins the production on the building site itself, the material and technical resources of construction production, and live labour in these spheres within the dynamics of construction. The author concludes that estimating the degree of mobility of construction production systems has a great positive effect on the project.

  14. Modeling Of Construction Noise For Environmental Impact Assessment

    Directory of Open Access Journals (Sweden)

    Mohamed F. Hamoda

    2008-06-01

    Full Text Available This study measured the noise levels generated at different construction sites in reference to the stage of construction and the equipment used, and examined methods to predict such noise in order to assess the environmental impact of noise. It included 33 construction sites in Kuwait and used artificial neural networks (ANNs) for the prediction of noise. A back-propagation neural network (BPNN) model was compared with a general regression neural network (GRNN) model. The results obtained indicated that the mean equivalent noise level was 78.7 dBA, which exceeds the threshold limit. The GRNN model was superior to the BPNN model in its accuracy of predicting construction noise due to its ability to train quickly on sparse data sets. Over 93% of the predictions were within 5% of the observed values. The mean absolute error between the predicted and observed data was only 2 dBA. ANN modeling proved to be a useful technique for the noise predictions required in the assessment of the environmental impact of construction activities.
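
    A GRNN is essentially Nadaraya-Watson kernel regression, which 'trains' quickly because it only stores the data; the sketch below uses invented site features and noise levels.

        import numpy as np

        def grnn_predict(X_train, y_train, x, sigma=1.0):
            """General regression neural network prediction: a Gaussian-
            weighted average of the training targets."""
            d2 = np.sum((X_train - x) ** 2, axis=1)
            w = np.exp(-d2 / (2.0 * sigma ** 2))
            return np.sum(w * y_train) / np.sum(w)

        # Invented features (e.g. number of machines, distance in m) and
        # measured equivalent noise levels in dBA
        X = np.array([[2, 10], [5, 15], [8, 30], [3, 50]], dtype=float)
        y = np.array([72.0, 80.0, 85.0, 68.0])
        print(grnn_predict(X, y, np.array([4.0, 20.0]), sigma=10.0))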

  15. Features of constructing an individual trajectory for computer science education based on a dynamic integrated estimation of the level of knowledge

    Directory of Open Access Journals (Sweden)

    Ольга Юрьевна Заславская

    2010-12-01

    Full Text Available The article considers features of implementing a mechanism for constructing an optimal trajectory of computer science education on the basis of a dynamic integrated estimation of the level of knowledge.

  16. Planning level assessment of greenhouse gas emissions for alternative transportation construction projects : carbon footprint estimator, phase II, volume I - GASCAP model.

    Science.gov (United States)

    2014-03-01

    The GASCAP model was developed to provide a software tool for analysis of the life-cycle GHG : emissions associated with the construction and maintenance of transportation projects. This phase : of development included techniques for estimating emiss...

  17. Predictive, Construct, and Convergent Validity of General and Domain-Specific Measures of Hope for College Student Academic Achievement

    Science.gov (United States)

    Robinson, Cecil; Rose, Sage

    2010-01-01

    One leading version of hope theory posits hope to be a general disposition for goal-directed agency and pathways thinking. Domain-specific hope theory suggests that hope operates within context and measures of hope should reflect that context. This study examined three measures of hope to test the predictive, construct, and convergent validity…

  18. Estimating the Accuracy of the Chedoke-McMaster Stroke Assessment Predictive Equations for Stroke Rehabilitation.

    Science.gov (United States)

    Dang, Mia; Ramsaran, Kalinda D; Street, Melissa E; Syed, S Noreen; Barclay-Goddard, Ruth; Stratford, Paul W; Miller, Patricia A

    2011-01-01

    To estimate the predictive accuracy and clinical usefulness of the Chedoke-McMaster Stroke Assessment (CMSA) predictive equations. A longitudinal prognostic study using historical data obtained from 104 patients admitted post cerebrovascular accident was undertaken. Data were abstracted for all patients undergoing rehabilitation post stroke who also had documented admission and discharge CMSA scores. Published predictive equations were used to determine predicted outcomes. To determine the accuracy and clinical usefulness of the predictive model, shrinkage coefficients and predictions with 95% confidence bands were calculated. Complete data were available for 74 patients with a mean age of 65.3±12.4 years. The shrinkage values for the six Impairment Inventory (II) dimensions varied from -0.05 to 0.09; the shrinkage value for the Activity Inventory (AI) was 0.21. The error associated with predictive values was greater than ±1.5 stages for the II dimensions and greater than ±24 points for the AI. This study shows that the large error associated with the predictions (as defined by the confidence band) for the CMSA II and AI limits their clinical usefulness as a predictive measure. Further research to establish predictive models using alternative statistical procedures is warranted.

  19. An examination of fuel consumption trends in construction projects

    International Nuclear Information System (INIS)

    Peters, Valerie A.; Manley, Dawn K.

    2012-01-01

    Recent estimates of fuel consumption in construction projects are highly variable. The lack of standards for reporting at both the equipment and project levels makes it difficult to quantify the magnitude of fuel consumption and the associated opportunities for efficiency improvements in construction projects. In this study, we examined clusters of Environmental Impact Reports for seemingly similar construction projects in California. We observed that construction projects are not characterized consistently by task or equipment. We found wide variations in estimates of fuel use in terms of tasks, equipment, and overall projects, which may be attributed in part to inconsistencies in methodology and parameter ranges. Our analysis suggests that standardizing fuel consumption reporting and estimation methodologies for construction projects would enable quantification of opportunities for efficiency improvements at both the equipment and project levels. With increasing emphasis on reducing fossil fuel consumption, it will be important to quantify opportunities to increase fuel efficiency, including across the construction sector. - Highlights: ► An analysis of construction projects reveals inconsistencies in fuel use estimates. ► Fuel consumption estimates for similar construction equipment can vary greatly. ► Standards would help to quantify efficiency opportunities in construction.

  20. Predicting Young Adults' Binge Drinking in Nightlife Scenes: An Evaluation of the D-ARIANNA Risk Estimation Model.

    Science.gov (United States)

    Crocamo, Cristina; Bartoli, Francesco; Montomoli, Cristina; Carrà, Giuseppe

    2018-05-25

    Binge drinking (BD) among young people has significant public health implications, so there is a need to target the users most at risk. We estimated the discriminative accuracy of an innovative model nested in a recently developed e-Health app (Digital-Alcohol RIsk Alertness Notifying Network for Adolescents and young adults [D-ARIANNA]) for BD in young people, examining its performance in predicting short-term BD episodes. We consecutively recruited young adults at pubs, discos, or live music events. Participants self-administered the app D-ARIANNA, which incorporates an evidence-based risk estimation model for the dependent variable BD. They were re-evaluated after 2 weeks using a single-item BD behavior measure as the reference. We estimated D-ARIANNA's discriminative ability through measures of sensitivity and specificity, as well as likelihood ratios. ROC curve analyses were carried out, exploring the variability of discriminative ability across subgroups. The analyses included 507 subjects, of whom 18% reported at least 1 BD episode at follow-up. The majority of these (65%) had been identified as at moderate-high or high risk at induction. Higher scores from the D-ARIANNA risk estimation model reflected an increase in the likelihood of BD. Additional risk factors, such as high pocket money availability and alcohol expectancies, influence the predictive ability of the model. The D-ARIANNA model showed an appreciable, though modest, predictive ability for subsequent BD episodes; a post-hoc model showed slightly better predictive properties. Using up-to-date technology, D-ARIANNA appears to be an innovative and promising screening tool for BD among young people. Its long-term impact remains to be established, as does the role of additional social and environmental factors.

  1. Development of a hybrid model to predict construction and demolition waste: China as a case study.

    Science.gov (United States)

    Song, Yiliao; Wang, Yong; Liu, Feng; Zhang, Yixin

    2017-01-01

    Construction and demolition waste (C&DW) is currently a worldwide issue, and the situation is worst in China due to the rapid growth of its construction industry and the short life span of China's buildings. To create an opportunity out of this problem, comprehensive prevention measures and effective management strategies are urgently needed. One major gap in the waste management literature is the lack of estimates of future C&DW generation. This paper therefore presents a forecasting procedure for C&DW in China that can predict the quantity of each component of such waste. The proposed approach is based on a GM-SVR model, which improves the forecasting effectiveness of the gray model (GM) by adjusting the residual series with a support vector regression (SVR) method, combined with a transition matrix that estimates the discharge of each component of the C&DW. Using the proposed method, future C&DW volumes are listed and analyzed, including their potential components and their distribution across the provinces of China. In addition, the model-testing process provides mathematical evidence that the proposed model is an effective way to give policy makers forward-looking information on C&DW. Copyright © 2016 Elsevier Ltd. All rights reserved.
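
    The abstract describes the GM-SVR idea only in outline. As a rough sketch of the two-stage logic, the following fits a classical GM(1,1) gray model and then corrects its residual series with an SVR; the waste series is invented for illustration, and the component transition matrix is omitted:

```python
import numpy as np
from sklearn.svm import SVR

def gm11_fit(x0):
    """Fit a GM(1,1) gray model to a series and return a forecast function."""
    x1 = np.cumsum(x0)
    z1 = 0.5 * (x1[1:] + x1[:-1])                    # background (mean-generated) values
    B = np.column_stack([-z1, np.ones(z1.size)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0] # developing coefficient, gray input

    def predict(k):                                  # k = 0, 1, 2, ...
        k = np.asarray(k, dtype=float)
        x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
        x1_prev = (x0[0] - b / a) * np.exp(-a * (k - 1)) + b / a
        return np.where(k == 0, x0[0], x1_hat - x1_prev)

    return predict

waste = np.array([3.2, 3.6, 4.1, 4.9, 5.6, 6.6, 7.9])  # hypothetical annual C&DW series
gm = gm11_fit(waste)
k = np.arange(waste.size)
resid = waste - gm(k)                                # residual series of the gray model

svr = SVR(kernel="rbf", C=10.0).fit(k[:, None], resid)  # SVR learns the residual pattern
k_new = np.arange(waste.size + 3)                    # three steps beyond the data
print((gm(k_new) + svr.predict(k_new[:, None])).round(2))
```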

  2. A Bayesian approach for parameter estimation and prediction using a computationally intensive model

    International Nuclear Information System (INIS)

    Higdon, Dave; McDonnell, Jordan D; Schunck, Nicolas; Sarich, Jason; Wild, Stefan M

    2015-01-01

    Bayesian methods have been successful in quantifying uncertainty in physics-based problems in parameter estimation and prediction. In these cases, physical measurements y are modeled as the best fit of a physics-based model η(θ), where θ denotes the uncertain, best input setting. Hence the statistical model is of the form y=η(θ)+ϵ, where ϵ accounts for measurement, and possibly other, error sources. When nonlinearity is present in η(⋅), the resulting posterior distribution for the unknown parameters in the Bayesian formulation is typically complex and nonstandard, requiring computationally demanding approaches such as Markov chain Monte Carlo (MCMC) to produce multivariate draws from the posterior. Although generally applicable, MCMC requires thousands (or even millions) of evaluations of the physics model η(⋅). This requirement is problematic if the model takes hours or days to evaluate. To overcome this computational bottleneck, we present an approach adapted from Bayesian model calibration. This approach combines output from an ensemble of computational model runs with physical measurements, within a statistical formulation, to carry out inference. A key component of this approach is a statistical response surface, or emulator, estimated from the ensemble of model runs. We demonstrate this approach with a case study in estimating parameters for a density functional theory model, using experimental mass/binding energy measurements from a collection of atomic nuclei. We also demonstrate how this approach produces uncertainties in predictions for recent mass measurements obtained at Argonne National Laboratory. (paper)
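
    To make the emulator-based workflow concrete, here is a toy end-to-end sketch: a small ensemble of runs of a stand-in "physics model" is used to fit a cheap emulator, and random-walk Metropolis then samples the posterior using only emulator evaluations. The model, prior, and measurement are all invented; a Gaussian-process emulator would typically replace the polynomial used here:

```python
import numpy as np

def eta(theta):
    """Stand-in for an expensive physics model (e.g., hours per evaluation)."""
    return np.sin(theta) + 0.5 * theta

rng = np.random.default_rng(1)
theta_design = np.linspace(0.0, 3.0, 15)             # small ensemble of model runs
runs = eta(theta_design)

# Cheap emulator fitted to the ensemble; a Gaussian process is typical in practice
coeffs = np.polyfit(theta_design, runs, deg=3)
emulator = lambda t: np.polyval(coeffs, t)

y_obs, sigma = 1.9, 0.1                              # physical measurement and its error
def log_post(t):                                     # flat prior on [0, 3]
    if not 0.0 <= t <= 3.0:
        return -np.inf
    return -0.5 * ((y_obs - emulator(t)) / sigma) ** 2

# Random-walk Metropolis draws use only emulator evaluations, never eta itself
t, chain = 1.5, []
for _ in range(20000):
    prop = t + 0.2 * rng.normal()
    if np.log(rng.uniform()) < log_post(prop) - log_post(t):
        t = prop
    chain.append(t)
print(np.mean(chain[5000:]), np.std(chain[5000:]))   # posterior mean and spread
```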

  3. Application of an estimation model to predict future transients at US nuclear power plants

    International Nuclear Information System (INIS)

    Hallbert, B.P.; Blackman, H.S.

    1987-01-01

    A model developed by R.A. Fisher was applied to a set of Licensee Event Reports (LERs) summarizing transient initiating events at US commercial nuclear power plants. The empirical Bayes model was examined to study the feasibility of estimating the number of categories of transients which have not yet occurred at nuclear power plants. An examination of the model's predictive ability using an existing sample of data provided support for use of the model to estimate future transients. The estimate indicates that an approximately fifteen percent increase in the number of categories of transient initiating events may be expected during the period 1983-1993, assuming a stable process of transients. Limitations of the model and other possible applications are discussed. 10 refs., 1 fig., 3 tabs
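
    The Fisher-style empirical Bayes estimate addresses the classic "unseen species" problem: how many transient categories exist that have not yet been observed? A simpler nonparametric relative, the Chao1 estimator, illustrates the idea; the LER tallies below are hypothetical, and this is not the paper's exact model:

```python
from collections import Counter

def chao1(category_counts):
    """Chao1 lower-bound estimate of total category richness.

    Singletons (f1) and doubletons (f2) carry the information about how
    many categories remain unseen, much as in the Fisher-type approach."""
    freq = Counter(category_counts.values())          # f_k = categories seen exactly k times
    s_obs, f1, f2 = len(category_counts), freq.get(1, 0), freq.get(2, 0)
    if f2 == 0:
        return s_obs + f1 * (f1 - 1) / 2.0            # bias-corrected form
    return s_obs + f1 ** 2 / (2.0 * f2)

# Hypothetical LER tallies: transient category -> number of occurrences
lers = {"loss_of_feedwater": 9, "turbine_trip": 14, "msiv_closure": 2,
        "loss_of_offsite_power": 1, "rcp_trip": 1, "inadvertent_si": 2}
print(chao1(lers))   # estimated total number of categories, seen plus unseen
```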

  4. Dataset size and composition impact the reliability of performance benchmarks for peptide-MHC binding predictions

    DEFF Research Database (Denmark)

    Kim, Yohan; Sidney, John; Buus, Søren

    2014-01-01

    Background: It is important to accurately determine the performance of peptide:MHC binding predictions, as this enables users to compare and choose between different prediction methods and provides estimates of the expected error rate. Two common approaches to determine prediction performance are cross-validation, in which all available data are iteratively split into training and testing data, and the use of blind sets generated separately from the data used to construct the predictive method. In the present study, we have compared cross-validated prediction performances generated on our last...

  5. Reliance on and Reliability of the Engineer’s Estimate in Heavy Civil Projects

    Directory of Open Access Journals (Sweden)

    George Okere

    2017-06-01

    Full Text Available To the contractor, the engineer's estimate is the target number to aim for and the basis for evaluating the accuracy of the contractor's own estimate. To the owner, the engineer's estimate is the basis for funding, for the evaluation of bids, and for predicting project costs. As such, the engineer's estimate is the benchmark. This research investigated the reliance on, and the reliability of, the engineer's estimate in heavy civil cost estimating. The objective was to characterize the engineer's estimate and allow owners and contractors to re-evaluate or affirm their reliance on it. A literature review was conducted to understand the reliance on the engineer's estimate, and secondary data from the Washington State Department of Transportation were used to investigate its reliability. The findings show the need for practitioners to re-evaluate their reliance on the engineer's estimate: within various contexts, the empirical data showed that the engineer's estimate fell outside the expected accuracy range of the low bids or of the cost to complete projects. The study recommends direct tracking of costs by project owners while projects are under construction, the use of a second estimate to improve accuracy, and the adoption of the cost estimating practices found in highly reputable construction companies.

  6. Statistical methods of estimating mining costs

    Science.gov (United States)

    Long, K.R.

    2011-01-01

    Until it was defunded in 1995, the U.S. Bureau of Mines maintained a Cost Estimating System (CES) for prefeasibility-type economic evaluations of mineral deposits and for estimating costs at producing and non-producing mines. This system had a significant role in mineral resource assessments, estimating the costs of developing and operating known mineral deposits and predicted undiscovered deposits. For legal reasons, the U.S. Geological Survey cannot update and maintain CES. Instead, statistical tools are under development to estimate mining costs from basic properties of mineral deposits such as tonnage, grade, mineralogy, depth, strip ratio, distance from infrastructure, rock strength, and work index. The first step was to re-estimate "Taylor's Rule," which relates operating rate to available ore tonnage. The second step was to estimate statistical models of capital and operating costs for open pit porphyry copper mines with flotation concentrators. For a sample of 27 proposed porphyry copper projects, capital costs can be estimated from three variables: mineral processing rate, strip ratio, and distance from the nearest railroad before mine construction began. Of all the variables tested, operating costs were found to be significantly correlated only with strip ratio.
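
    Cost models of this kind are commonly fit as log-linear regressions, so that effects are multiplicative and scale economies show up as exponents. A sketch using the abstract's three capital-cost variables; all project figures below are fabricated for illustration, not the study's 27-project sample:

```python
import numpy as np

# Hypothetical porphyry-copper projects:
# processing rate (t/d), strip ratio, distance to railroad (km), capital cost (M$)
data = np.array([
    [20000, 2.0,  15,  310],
    [45000, 2.8,  60,  640],
    [90000, 3.5, 120, 1150],
    [30000, 1.5,   5,  420],
    [60000, 4.0, 200,  930],
    [15000, 2.2,  40,  260],
])
X = np.column_stack([np.ones(len(data)),
                     np.log(data[:, 0]), np.log(data[:, 1]), np.log(data[:, 2])])
y = np.log(data[:, 3])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)        # log-linear OLS fit

rate, strip, rail = 50000, 3.0, 80                  # a new hypothetical project
log_cost = beta @ np.array([1.0, np.log(rate), np.log(strip), np.log(rail)])
print(f"predicted capital cost: {np.exp(log_cost):.0f} M$")
```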

  7. Estimating NOA Health Risks from Selected Construction Activities at the Calaveras Dam Replacement Project (CDRP)

    Science.gov (United States)

    Hernandez, D. W.

    2012-12-01

    The CDRP is a major construction project involving up to 400 workers using heavy earth-moving equipment, blasting, drilling, rock crushing, and other techniques designed to move 7 million yards of earth. Much of this material is composed of serpentinite, blueschist, and other rocks that contain chrysotile, crocidolite, actinolite, tremolite, and Libby-class amphiboles. To date, over 1,000 personal, work area, and emission-inventory-related samples have been collected and analyzed by NIOSH 7400, NIOSH 7402, and CARB-AHERA methodology. The data indicate that various CDRP construction activities have the potential to generate significant levels of mineral fibers and structures that could represent elevated on-site and off-site health risks. This presentation will review the contractor's air monitoring program for this major project, followed by a discussion of predictive methods for evaluating potential on-site and off-site risks. Ultimately, the data are used for planning control strategies designed to achieve a Project Action Level of 0.01 f/cc (one tenth the Cal/OSHA PEL) and risk-based off-site target levels.

  8. A Hierarchical Approach to Persistent Scatterer Network Construction and Deformation Time Series Estimation

    Directory of Open Access Journals (Sweden)

    Rui Zhang

    2014-12-01

    Full Text Available This paper presents a hierarchical approach to network construction and time series estimation in persistent scatterer interferometry (PSI) for deformation analysis using time series of high-resolution satellite SAR images. To balance computational efficiency and solution accuracy, a divide-and-conquer algorithm (i.e., two levels of PS networking and solution) is proposed for extracting the deformation rates of a study area. The algorithm was tested using 40 high-resolution TerraSAR-X images collected between 2009 and 2010 over Tianjin, China, for subsidence analysis, and validated using ground-based leveling measurements. The experimental results indicate that the hierarchical approach can remarkably reduce computing time and memory requirements, and that the subsidence measurements derived from the hierarchical solution are in good agreement with the leveling data.

  9. Predicting the effectiveness of road safety campaigns through alternative research designs.

    Science.gov (United States)

    Adamos, Giannis; Nathanail, Eftihia

    2016-12-01

    A large number of road safety communication campaigns have been designed and implemented in recent years; however, their explicit impact on driving behavior and road accident rates has been estimated for only a small proportion of them. Based on the findings of the evaluation of three road safety communication campaigns addressing drinking and driving, seat belt usage, and driving fatigue, this paper applies different types of research designs (i.e., experimental, quasi-experimental, and non-experimental designs) to estimating the effectiveness of road safety campaigns, implements a cross-design assessment, and conducts a cross-campaign evaluation. An integrated evaluation plan was developed, taking into account the structure of the evaluation questions, the definition of measurable variables, the separation of the target audience into intervention (exposed to the campaign) and control (not exposed to the campaign) groups, the selection of alternative research designs, and the appropriate data collection methods and techniques. When the different research designs were evaluated for estimating campaign effectiveness, the separate pre-post samples design demonstrated better predictability than the other designs, especially for data obtained from the intervention group after the realization of the campaign. The more constructs were added to the independent variables, the higher the predictability. The construct that most affects behavior is intention, whereas the remaining constructs have a lower impact on behavior; this is particularly significant in the Health Belief Model (HBM). On the other hand, behavioral beliefs, normative beliefs, and descriptive norms are significant parameters for predicting intention according to the Theory of Planned Behavior (TPB). The theoretical and applied implications of alternative research designs and their applicability in the evaluation of road safety

  10. Estimation of Resting Energy Expenditure: Validation of Previous and New Predictive Equations in Obese Children and Adolescents.

    Science.gov (United States)

    Acar-Tek, Nilüfer; Ağagündüz, Duygu; Çelik, Bülent; Bozbulut, Rukiye

    2017-08-01

    Accurate estimation of resting energy expenditure (REE) in children and adolescents is important for establishing estimated energy requirements. The aim of the present study was to measure REE in obese children and adolescents by the indirect calorimetry method, compare these values with REE values estimated by equations, and develop the most appropriate equation for this group. One hundred and three obese children and adolescents (57 males, 46 females) between 7 and 17 years of age (10.6 ± 2.19 years) were recruited for the study. REE measurements were made with indirect calorimetry (COSMED, FitMatePro, Rome, Italy) and body compositions were analyzed. In females, the percentage of accurate predictions varied from 32.6 (World Health Organization [WHO]) to 43.5 (Molnar and Lazzer). The bias for the equations was -0.2% (Kim), 3.7% (Molnar), and 22.6% (Derumeaux-Burel). Kim's, Schmelzle's, and Henry's equations had the lowest root mean square errors (RMSE: 266, 267, and 268 kcal/d, respectively), while the Derumeaux-Burel equation had the highest RMSE among female subjects (394 kcal/d). In males, while the Institute of Medicine (IOM) equation had the lowest rate of accurate prediction (12.3%), the highest rates were found using Schmelzle's (42.1%), Henry's (43.9%), and Müller's (fat-free mass, FFM; 45.6%) equations. While Kim's and Müller's equations had the smallest bias (-0.6%, 9.9%), Schmelzle's equation had the smallest RMSE (331 kcal/d). The new FFM-based equation was generated as follows: REE = 451.722 + (23.202 * FFM). According to Bland-Altman plots, the new equation's errors are randomly distributed in both males and females. Previously developed predictive equations mostly provided inaccurate and biased estimates of REE; the new predictive equations allow clinicians to estimate REE in obese children and adolescents with sufficient and acceptable accuracy.
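
    The study's new equation is simple enough to apply directly; a one-line Python helper (the FFM value below is a hypothetical patient, not study data):

```python
def ree_kcal_per_day(ffm_kg):
    """Resting energy expenditure from the study's new FFM-based equation."""
    return 451.722 + 23.202 * ffm_kg

# Hypothetical obese adolescent with 42 kg of fat-free mass
print(round(ree_kcal_per_day(42.0)))   # -> 1426 kcal/d
```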

  11. Estimation of National Colorectal-Cancer Incidence Using Claims Databases

    International Nuclear Information System (INIS)

    Quantin, C.; Benzenine, E.; Hagi, M.; Auverlot, B.; Cottenet, J.; Binquet, M.; Compain, D.

    2012-01-01

    The aim of the study was to assess the accuracy of the colorectal-cancer incidence estimated from administrative data. Methods. We selected potential incident colorectal-cancer cases in 2004-2005 French administrative data, using two alternative algorithms. The first was based only on diagnostic and procedure codes, whereas the second considered the past history of the patient. Results of both methods were assessed against two corresponding local cancer registries, acting as “gold standards.” We then constructed a multivariable regression model to estimate the corrected total number of incident colorectal-cancer cases from the whole national administrative database. Results. The first algorithm provided an estimated local incidence very close to that given by the regional registries (646 versus 645 incident cases) and had good sensitivity and positive predictive values (about 75% for both). The second algorithm overestimated the incidence by about 50% and had a poor positive predictive value of about 60%. The estimation of national incidence obtained by the first algorithm differed from that observed in 14 registries by only 2.34%. Conclusion. This study shows the usefulness of administrative databases for countries with no national cancer registry and suggests a method for correcting the estimates provided by these data.

  12. Predictive ability of genomic selection models for breeding value estimation on growth traits of Pacific white shrimp Litopenaeus vannamei

    Science.gov (United States)

    Wang, Quanchao; Yu, Yang; Li, Fuhua; Zhang, Xiaojun; Xiang, Jianhai

    2017-09-01

    Genomic selection (GS) can be used to accelerate genetic improvement by shortening the selection interval. The successful application of GS depends largely on the accuracy of the prediction of the genomic estimated breeding value (GEBV). This study is a first attempt to assess the practicality of GS in Litopenaeus vannamei and aims to evaluate models for GS on growth traits. The performance of GS models in L. vannamei was evaluated in a population consisting of 205 individuals, which were genotyped for 6359 single nucleotide polymorphism (SNP) markers by specific length amplified fragment sequencing (SLAF-seq) and phenotyped for body length and body weight. Three GS models (RR-BLUP, BayesA, and Bayesian LASSO) were used to obtain the GEBVs, and their predictive ability was assessed by the reliability of the GEBVs and the bias of the predicted phenotypes. The mean reliability of the GEBVs for body length and body weight predicted by the different models was 0.296 and 0.411, respectively. For each trait, the performances of the three models were very similar to each other with respect to predictability. The regression coefficients estimated by the three models were close to one, suggesting near-zero bias in the predictions. Therefore, when GS was applied in an L. vannamei population under the studied scenarios, all three models appeared practicable. Further analyses suggested that the genomic predictions could be improved by increasing the size of the training population as well as the density of SNPs.
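
    Of the three models compared, RR-BLUP is the most compact to write down: marker effects are shrunk jointly by ridge regression, and GEBVs are the genotype matrix times the estimated effects. A simulated-data sketch (the population size matches the study's 205 animals, but the genotypes, marker count, and phenotypes are synthetic):

```python
import numpy as np

rng = np.random.default_rng(7)
n, m = 205, 1000                         # animals and markers (study: 205 animals, 6359 SNPs)
Z = rng.integers(0, 3, size=(n, m)).astype(float)   # genotypes coded 0/1/2
u_true = rng.normal(0.0, 0.05, m)                   # small additive marker effects
y = Z @ u_true + rng.normal(0.0, 1.0, n)            # simulated body-weight phenotype

Zc = Z - Z.mean(axis=0)                             # center genotype columns
lam = 50.0                                          # ridge parameter ~ sigma_e^2 / sigma_u^2
u_hat = np.linalg.solve(Zc.T @ Zc + lam * np.eye(m), Zc.T @ (y - y.mean()))
gebv = Zc @ u_hat                                   # genomic estimated breeding values

# Correlation with the (known, simulated) true genetic values as an accuracy proxy
print(np.corrcoef(gebv, Z @ u_true)[0, 1])
```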

  13. Maximum a posteriori Bayesian estimation of mycophenolic acid area under the concentration-time curve: is this clinically useful for dosage prediction yet?

    Science.gov (United States)

    Staatz, Christine E; Tett, Susan E

    2011-12-01

    This review seeks to summarize the available data on Bayesian estimation of the area under the plasma concentration-time curve (AUC) and dosage prediction for mycophenolic acid (MPA), and to evaluate whether sufficient evidence is available for the routine use of Bayesian dosage prediction in clinical practice. A literature search identified 14 studies that assessed the predictive performance of maximum a posteriori Bayesian estimation of MPA AUC and one report that retrospectively evaluated how closely dosage recommendations based on Bayesian forecasting achieved targeted MPA exposure. Studies to date have mostly been undertaken in renal transplant recipients, with limited investigation in patients treated with MPA for autoimmune disease or haematopoietic stem cell transplantation. All of these studies have involved the mycophenolate mofetil (MMF) formulation of MPA, rather than the enteric-coated mycophenolate sodium (EC-MPS) formulation. The bias associated with estimation of MPA AUC using Bayesian forecasting was generally less than 10%, but some difficulty with imprecision was evident, with values ranging from 4% to 34% (based on estimation involving two or more concentration measurements). Evaluation of whether MPA dosing decisions based on Bayesian forecasting (by the free website service https://pharmaco.chu-limoges.fr) achieved target drug exposure has only been undertaken once. When MMF dosage recommendations were applied by clinicians, a higher proportion (72-80%) of subsequent estimated MPA AUC values were within the 30-60 mg · h/L target range, compared with when dosage recommendations were not followed (only 39-57% within the target range). Such findings provide evidence that Bayesian dosage prediction is clinically useful for achieving target MPA AUC. This study, however, was retrospective and focussed only on adult renal transplant recipients. Furthermore, in this study, Bayesian-generated AUC estimations and dosage predictions were not compared

  14. Worldwide construction

    International Nuclear Information System (INIS)

    Williamson, M.

    1994-01-01

    The paper lists major construction projects in worldwide processing and pipelining, showing capacities, contractors, estimated costs, and time of construction. The lists are divided into refineries, petrochemical plants, sulfur recovery units, gas processing plants, pipelines, and related fuel facilities. This last classification includes cogeneration plants, coal liquefaction and gasification plants, biomass power plants, geothermal power plants, integrated coal gasification combined-cycle power plants, and a coal briquetting plant

  15. A Comparison of Machine Learning Approaches for Corn Yield Estimation

    Science.gov (United States)

    Kim, N.; Lee, Y. W.

    2017-12-01

    Machine learning is an efficient empirical method for classification and prediction, and it offers another approach to crop yield estimation. The objective of this study is to estimate corn yield in the Midwestern United States by employing machine learning approaches such as the support vector machine (SVM), random forest (RF), and deep neural networks (DNN), and to perform a comprehensive comparison of their results. We constructed the database using satellite images from MODIS, climate data from the PRISM climate group, and GLDAS soil moisture data. In addition, to examine the seasonal sensitivities of corn yields, two period groups were set up: May to September (MJJAS) and July and August (JA). Overall, the DNN showed the highest accuracy in terms of the correlation coefficient for the two period groups. The differences between our predictions and USDA yield statistics were about 10-11%.
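
    A comparison like this is straightforward to set up with scikit-learn; the sketch below cross-validates the three model families on synthetic predictors standing in for the MODIS/PRISM/GLDAS features (all data invented):

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(3)
n = 300                                             # hypothetical county-year samples
X = rng.normal(size=(n, 5))                         # stand-ins for NDVI, climate, soil moisture
y = 10 + X @ np.array([1.5, -0.8, 0.6, 1.0, 0.3]) + rng.normal(0, 1, n)

models = {
    "SVM": SVR(C=10.0),
    "RF": RandomForestRegressor(n_estimators=200, random_state=0),
    "DNN": MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0),
}
for name, model in models.items():
    pred = cross_val_predict(model, X, y, cv=5)     # out-of-sample predictions
    print(f"{name}: r = {np.corrcoef(y, pred)[0, 1]:.3f}")
```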

  16. Prediction of Monte Carlo errors by a theory generalized to treat track-length estimators

    International Nuclear Information System (INIS)

    Booth, T.E.; Amster, H.J.

    1978-01-01

    Present theories for predicting expected Monte Carlo errors in neutron transport calculations apply to estimates of flux-weighted integrals sampled directly by scoring individual collisions. To treat track-length estimators, the recent theory of Amster and Djomehri is generalized to allow the score distribution functions to depend on the coordinates of two successive collisions. It has long been known that the expected track length in a region of phase space equals the expected flux integrated over that region, but that the expected statistical error of the Monte Carlo estimate of the track length is different from that of the flux integral obtained by sampling the sum of the reciprocals of the cross sections for all collisions in the region. These conclusions are shown to be implied by the generalized theory, which provides explicit equations for the expected values and errors of both types of estimators. Sampling expected contributions to the track-length estimator is also treated. Other general properties of the errors for both estimators are derived from the equations and physically interpreted. The actual values of these errors are then obtained and interpreted for a simple specific example

  17. Improving filtering and prediction of spatially extended turbulent systems with model errors through stochastic parameter estimation

    International Nuclear Information System (INIS)

    Gershgorin, B.; Harlim, J.; Majda, A.J.

    2010-01-01

    The filtering and predictive skill for turbulent signals is often limited by the lack of information about the true dynamics of the system and by our inability to resolve the assumed dynamics with sufficiently high resolution using the current computing power. The standard approach is to use a simple yet rich family of constant parameters to account for model errors through parameterization. This approach can have significant skill by fitting the parameters to some statistical feature of the true signal; however in the context of real-time prediction, such a strategy performs poorly when intermittent transitions to instability occur. Alternatively, we need a set of dynamic parameters. One strategy for estimating parameters on the fly is a stochastic parameter estimation through partial observations of the true signal. In this paper, we extend our newly developed stochastic parameter estimation strategy, the Stochastic Parameterization Extended Kalman Filter (SPEKF), to filtering sparsely observed spatially extended turbulent systems which exhibit abrupt stability transition from time to time despite a stable average behavior. For our primary numerical example, we consider a turbulent system of externally forced barotropic Rossby waves with instability introduced through intermittent negative damping. We find high filtering skill of SPEKF applied to this toy model even in the case of very sparse observations (with only 15 out of the 105 grid points observed) and with unspecified external forcing and damping. Additive and multiplicative bias corrections are used to learn the unknown features of the true dynamics from observations. We also present a comprehensive study of predictive skill in the one-mode context including the robustness toward variation of stochastic parameters, imperfect initial conditions and finite ensemble effect. Furthermore, the proposed stochastic parameter estimation scheme applied to the same spatially extended Rossby wave system demonstrates

  18. Estimating the Accuracy of the Chedoke–McMaster Stroke Assessment Predictive Equations for Stroke Rehabilitation

    Science.gov (United States)

    Dang, Mia; Ramsaran, Kalinda D.; Street, Melissa E.; Syed, S. Noreen; Barclay-Goddard, Ruth; Miller, Patricia A.

    2011-01-01

    Purpose: To estimate the predictive accuracy and clinical usefulness of the Chedoke–McMaster Stroke Assessment (CMSA) predictive equations. Method: A longitudinal prognostic study using historical data obtained from 104 patients admitted post cerebrovascular accident was undertaken. Data were abstracted for all patients undergoing rehabilitation post stroke who also had documented admission and discharge CMSA scores. Published predictive equations were used to determine predicted outcomes. To determine the accuracy and clinical usefulness of the predictive model, shrinkage coefficients and predictions with 95% confidence bands were calculated. Results: Complete data were available for 74 patients with a mean age of 65.3±12.4 years. The shrinkage values for the six Impairment Inventory (II) dimensions varied from −0.05 to 0.09; the shrinkage value for the Activity Inventory (AI) was 0.21. The error associated with predictive values was greater than ±1.5 stages for the II dimensions and greater than ±24 points for the AI. Conclusions: This study shows that the large error associated with the predictions (as defined by the confidence band) for the CMSA II and AI limits their clinical usefulness as a predictive measure. Further research to establish predictive models using alternative statistical procedures is warranted.

  19. Quantifying predictability through information theory: small sample estimation in a non-Gaussian framework

    International Nuclear Information System (INIS)

    Haven, Kyle; Majda, Andrew; Abramov, Rafail

    2005-01-01

    Many situations in complex systems require quantitative estimates of the lack of information in one probability distribution relative to another. In short-term climate and weather prediction, examples of these issues might involve the lack of information in the historical climate record compared with an ensemble prediction, or the lack of information in a particular Gaussian ensemble prediction strategy involving the first and second moments compared with the non-Gaussian ensemble itself. The relative entropy is a natural way to quantify the predictive utility in this information, and recently a systematic, computationally feasible hierarchical framework has been developed. In practical systems with many degrees of freedom, computational overhead limits ensemble predictions to relatively small sample sizes. Here the notion of predictive utility, in a relative entropy framework, is extended to small random samples by the definition of a sample utility, a measure of the unlikeliness that a random sample was produced by a given prediction strategy. The sample utility is the minimum predictability, with a statistical level of confidence, which is implied by the data. Two practical algorithms for measuring such a sample utility are developed here. The first technique is based on the statistical method of null-hypothesis testing, while the second is based upon a central limit theorem for the relative entropy of moment-based probability densities. These techniques are tested on known probability densities with parameterized bimodality and skewness, and then applied to the Lorenz '96 model, a recently developed 'toy' climate model with chaotic dynamics mimicking the atmosphere. The results show a detection of non-Gaussian tendencies of prediction densities at small ensemble sizes with between 50 and 100 members, with a 95% confidence level
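
    At its lowest level, the hierarchical framework described here reduces to the relative entropy between Gaussian fits of a forecast ensemble and the climatological record. A moment-based sketch (synthetic samples; the paper's non-Gaussian corrections and significance tests are omitted):

```python
import numpy as np

def gaussian_relative_entropy(sample_p, sample_q):
    """KL divergence P||Q between Gaussians fitted to two samples.

    This is the first-two-moments version of the predictive utility measure;
    the full framework adds non-Gaussian corrections and confidence tests."""
    mp, vp = np.mean(sample_p), np.var(sample_p)
    mq, vq = np.mean(sample_q), np.var(sample_q)
    return 0.5 * (np.log(vq / vp) + (vp + (mp - mq) ** 2) / vq - 1.0)

rng = np.random.default_rng(5)
climatology = rng.normal(0.0, 1.0, 10000)   # stand-in for the historical record
ensemble = rng.normal(0.6, 0.7, 75)         # small forecast ensemble (50-100 members)
print(gaussian_relative_entropy(ensemble, climatology))
```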

  20. Genomic prediction when some animals are not genotyped

    Directory of Open Access Journals (Sweden)

    Lund Mogens S

    2010-01-01

    Full Text Available Abstract Background The use of genomic selection in breeding programs may increase the rate of genetic improvement, reduce the generation time, and provide higher accuracy of estimated breeding values (EBVs. A number of different methods have been developed for genomic prediction of breeding values, but many of them assume that all animals have been genotyped. In practice, not all animals are genotyped, and the methods have to be adapted to this situation. Results In this paper we provide an extension of a linear mixed model method for genomic prediction to the situation with non-genotyped animals. The model specifies that a breeding value is the sum of a genomic and a polygenic genetic random effect, where genomic genetic random effects are correlated with a genomic relationship matrix constructed from markers and the polygenic genetic random effects are correlated with the usual relationship matrix. The extension of the model to non-genotyped animals is made by using the pedigree to derive an extension of the genomic relationship matrix to non-genotyped animals. As a result, in the extended model the estimated breeding values are obtained by blending the information used to compute traditional EBVs and the information used to compute purely genomic EBVs. Parameters in the model are estimated using average information REML and estimated breeding values are best linear unbiased predictions (BLUPs. The method is illustrated using a simulated data set. Conclusions The extension of the method to non-genotyped animals presented in this paper makes it possible to integrate all the genomic, pedigree and phenotype information into a one-step procedure for genomic prediction. Such a one-step procedure results in more accurate estimated breeding values and has the potential to become the standard tool for genomic prediction of breeding values in future practical evaluations in pig and cattle breeding.

  1. Social network models predict movement and connectivity in ecological landscapes

    Science.gov (United States)

    Fletcher, Robert J.; Acevedo, M.A.; Reichert, Brian E.; Pias, Kyle E.; Kitchens, Wiley M.

    2011-01-01

    Network analysis is on the rise across scientific disciplines because of its ability to reveal complex, and often emergent, patterns and dynamics. Nonetheless, a growing concern in network analysis is the use of limited data for constructing networks. This concern is strikingly relevant to ecology and conservation biology, where network analysis is used to infer connectivity across landscapes. In this context, movement among patches is the crucial parameter for interpreting connectivity but because of the difficulty of collecting reliable movement data, most network analysis proceeds with only indirect information on movement across landscapes rather than using observed movement to construct networks. Statistical models developed for social networks provide promising alternatives for landscape network construction because they can leverage limited movement information to predict linkages. Using two mark-recapture datasets on individual movement and connectivity across landscapes, we test whether commonly used network constructions for interpreting connectivity can predict actual linkages and network structure, and we contrast these approaches to social network models. We find that currently applied network constructions for assessing connectivity consistently, and substantially, overpredict actual connectivity, resulting in considerable overestimation of metapopulation lifetime. Furthermore, social network models provide accurate predictions of network structure, and can do so with remarkably limited data on movement. Social network models offer a flexible and powerful way for not only understanding the factors influencing connectivity but also for providing more reliable estimates of connectivity and metapopulation persistence in the face of limited data.

  2. Social network models predict movement and connectivity in ecological landscapes.

    Science.gov (United States)

    Fletcher, Robert J; Acevedo, Miguel A; Reichert, Brian E; Pias, Kyle E; Kitchens, Wiley M

    2011-11-29

    Network analysis is on the rise across scientific disciplines because of its ability to reveal complex, and often emergent, patterns and dynamics. Nonetheless, a growing concern in network analysis is the use of limited data for constructing networks. This concern is strikingly relevant to ecology and conservation biology, where network analysis is used to infer connectivity across landscapes. In this context, movement among patches is the crucial parameter for interpreting connectivity but because of the difficulty of collecting reliable movement data, most network analysis proceeds with only indirect information on movement across landscapes rather than using observed movement to construct networks. Statistical models developed for social networks provide promising alternatives for landscape network construction because they can leverage limited movement information to predict linkages. Using two mark-recapture datasets on individual movement and connectivity across landscapes, we test whether commonly used network constructions for interpreting connectivity can predict actual linkages and network structure, and we contrast these approaches to social network models. We find that currently applied network constructions for assessing connectivity consistently, and substantially, overpredict actual connectivity, resulting in considerable overestimation of metapopulation lifetime. Furthermore, social network models provide accurate predictions of network structure, and can do so with remarkably limited data on movement. Social network models offer a flexible and powerful way for not only understanding the factors influencing connectivity but also for providing more reliable estimates of connectivity and metapopulation persistence in the face of limited data.

  3. Asymptotically Constant-Risk Predictive Densities When the Distributions of Data and Target Variables Are Different

    Directory of Open Access Journals (Sweden)

    Keisuke Yano

    2014-05-01

    Full Text Available We investigate the asymptotic construction of constant-risk Bayesian predictive densities under the Kullback–Leibler risk when the distributions of data and target variables are different and have a common unknown parameter. It is known that the Kullback–Leibler risk is asymptotically equal to a trace of the product of two matrices: the inverse of the Fisher information matrix for the data and the Fisher information matrix for the target variables. We assume that the trace has a unique maximum point with respect to the parameter. We construct asymptotically constant-risk Bayesian predictive densities using a prior depending on the sample size. Further, we apply the theory to the subminimax estimator problem and the prediction based on the binary regression model.

  4. Spark ignition engine control: estimation and prediction of the in-cylinder mass and chemical species

    Energy Technology Data Exchange (ETDEWEB)

    Giansetti, P.

    2005-09-15

    Spark ignition engine control has become a major issue with regard to compliance with emissions legislation while ensuring driving comfort. The objective of this thesis was to estimate the mass and composition of the gases inside the cylinder of an engine, based on physics, in order to ensure better control of transient phases while taking into account residual gases as well as exhaust gas recirculation. The residual gas fraction was characterized using two experiments and one CFD code. A model was validated experimentally and integrated into an observer that predicts the pressure and temperature inside the manifold, from which the different gas flows and the chemical species inside the cylinder are deduced. A closed-loop observer was validated experimentally and in simulation. Moreover, an algorithm estimating the fresh and burned gas mass from the cylinder pressure is proposed in order to obtain this information cycle by cycle and cylinder by cylinder. (author)

  5. Construction, internal validation and implementation in a mobile application of a scoring system to predict nonadherence to proton pump inhibitors

    Directory of Open Access Journals (Sweden)

    Emma Mares-García

    2017-06-01

    Full Text Available Background Other studies have assessed nonadherence to proton pump inhibitors (PPIs), but none has developed a screening test for its detection. Objectives To construct and internally validate a predictive model for nonadherence to PPIs. Methods This prospective observational study with a one-month follow-up was carried out in 2013 in Spain and included 302 patients with a prescription for PPIs. The primary variable was nonadherence to PPIs (pill count). Secondary variables were gender, age, antidepressants, type of PPI, non-guideline-recommended prescription (NGRP) of PPIs, and total number of drugs. With the secondary variables, a binary logistic regression model to predict nonadherence was constructed and adapted to a points system. The ROC curve, with its area (AUC), was calculated and the optimal cut-off point was established. The points system was internally validated through 1,000 bootstrap samples and implemented in a mobile application (Android). Results The points system had three prognostic variables: total number of drugs, NGRP of PPIs, and antidepressants. The AUC was 0.87 (95% CI [0.83–0.91], p < 0.001). The test yielded a sensitivity of 0.80 (95% CI [0.70–0.87]) and a specificity of 0.82 (95% CI [0.76–0.87]). The three parameters were very similar in the bootstrap validation. Conclusions A points system to predict nonadherence to PPIs has been constructed, internally validated and implemented in a mobile application. Provided similar results are obtained in external validation studies, we will have a screening tool to detect nonadherence to PPIs.
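
    Adapting a logistic regression to a points system usually means dividing each coefficient by a reference effect size and rounding to integers. A sketch with the study's three prognostic variables; the patient data and coefficients are simulated, not the published model:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(11)
n = 302                                              # the study recruited 302 patients
X = np.column_stack([rng.poisson(4, n),              # total number of drugs
                     rng.integers(0, 2, n),          # non-guideline-recommended PPI (NGRP)
                     rng.integers(0, 2, n)]).astype(float)
logit = -3.0 + 0.35 * X[:, 0] + 1.2 * X[:, 1] + 0.9 * X[:, 2]
y = rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-logit))  # simulated nonadherence

model = LogisticRegression().fit(X, y)
base = np.abs(model.coef_).min()                     # smallest effect defines 1 point
points = np.round(model.coef_ / base).astype(int).ravel()
print(dict(zip(["n_drugs", "NGRP", "antidepressant"], points)))
scores = X @ points                                  # each patient's integer risk score
```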

  6. Improvement of Bragg peak shift estimation using dimensionality reduction techniques and predictive linear modeling

    Science.gov (United States)

    Xing, Yafei; Macq, Benoit

    2017-11-01

    With the emergence of clinical prototypes and first patient acquisitions for proton therapy, research on prompt gamma imaging aims to make the most of prompt gamma data for in vivo estimation of any shift from the expected Bragg peak (BP). The seemingly simple problem of matching the measured prompt gamma profile of each pencil beam with a reference simulation from the treatment plan is made complex by uncertainties that can translate into distortions during treatment. We illustrate this challenge and demonstrate the robustness of a predictive linear model we proposed for BP shift estimation based on the principal component analysis (PCA) method. It considered the first clinical knife-edge slit camera design in use with anthropomorphic phantom CT data. In particular, 4115 error scenarios were simulated for the learning model. PCA was applied to the training input, randomly chosen from 500 scenarios, to eliminate data collinearity. A total variance of 99.95% was used to represent the testing input from 3615 scenarios. This model improved the BP shift estimation by an average of 63+/-19%, in a range between -2.5% and 86%, compared with our previous profile shift (PS) method. The robustness of our method was demonstrated by a comparative study in which Poisson noise was applied 1000 times to each profile: 67% of the cases obtained by the learning model had lower prediction errors than those obtained by the PS method. The estimation accuracy ranged between 0.31 +/- 0.22 mm and 1.84 +/- 8.98 mm for the learning model, while for the PS method it ranged between 0.3 +/- 0.25 mm and 20.71 +/- 8.38 mm.
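
    The PCA-plus-linear-regression pipeline is easy to reproduce in outline: project the highly collinear profile bins onto components explaining 99.95% of the variance, then regress the shift on the scores. The sketch below uses synthetic Gaussian-bump "profiles" in place of simulated prompt gamma data:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(2)
n_train, n_bins = 500, 120                           # training scenarios, profile bins
shifts = rng.uniform(-10, 10, n_train)               # simulated BP shifts (mm)
bins = np.arange(n_bins)
profiles = np.exp(-0.5 * ((bins - (60 + shifts[:, None])) / 8.0) ** 2)
profiles += rng.normal(0, 0.01, profiles.shape)      # detector noise

# PCA removes collinearity between neighbouring bins; keep 99.95% of variance
model = make_pipeline(PCA(n_components=0.9995), LinearRegression())
model.fit(profiles, shifts)

true_shift = 3.7
test = np.exp(-0.5 * ((bins - (60 + true_shift)) / 8.0) ** 2)
print(model.predict(test[None, :]))                  # should be close to 3.7
```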

  7. PREVAIL: Predicting Recovery through Estimation and Visualization of Active and Incident Lesions.

    Science.gov (United States)

    Dworkin, Jordan D; Sweeney, Elizabeth M; Schindler, Matthew K; Chahin, Salim; Reich, Daniel S; Shinohara, Russell T

    2016-01-01

    The goal of this study was to develop a model that integrates imaging and clinical information observed at lesion incidence for predicting the recovery of white matter lesions in multiple sclerosis (MS) patients. Demographic, clinical, and magnetic resonance imaging (MRI) data were obtained from 60 subjects with MS as part of a natural history study at the National Institute of Neurological Disorders and Stroke. A total of 401 lesions met the inclusion criteria and were used in the study. Imaging features were extracted from the intensity-normalized T1-weighted (T1w) and T2-weighted sequences as well as the magnetization transfer ratio (MTR) sequence acquired at lesion incidence. T1w and MTR signatures were also extracted from images acquired one year post-incidence. Imaging features were integrated with clinical and demographic data observed at lesion incidence to create statistical prediction models for long-term damage within the lesion. The performance of the T1w and MTR predictions was assessed in two ways: first, the predictive accuracy was measured quantitatively using leave-one-lesion-out cross-validated (CV) mean-squared predictive error. Then, to assess the prediction performance from the perspective of expert clinicians, three board-certified MS clinicians were asked to individually score how similar the CV model-predicted one-year appearance was to the true one-year appearance for a random sample of 100 lesions. The cross-validated root-mean-square predictive error was 0.95 for normalized T1w and 0.064 for MTR, compared to the estimated measurement errors of 0.48 and 0.078, respectively. The three expert raters agreed that the T1w and MTR predictions closely resembled the true one-year follow-up appearance of the lesions in both degree and pattern of recovery within lesions. This study demonstrates that by using only information from a single visit at incidence, we can predict how a new lesion will recover using relatively simple statistical techniques.

  8. Prediction of concrete compressive strength considering humidity and temperature in the construction of nuclear power plants

    Energy Technology Data Exchange (ETDEWEB)

    Kwon, Seung Hee; Jang, Kyung Pil [Department of Civil and Environmental Engineering, Myongji University, Yongin (Korea, Republic of); Bang, Jin-Wook [Department of Civil Engineering, Chungnam National University, Daejeon (Korea, Republic of); Lee, Jang Hwa [Structural Engineering Research Division, Korea Institute of Construction Technology (Korea, Republic of); Kim, Yun Yong, E-mail: yunkim@cnu.ac.kr [Structural Engineering Research Division, Korea Institute of Construction Technology (Korea, Republic of)

    2014-08-15

    Highlights: • Compressive strength tests for three concrete mixes were performed. • The parameters of the humidity-adjusted maturity function were determined. • Strength can be predicted considering temperature and relative humidity. - Abstract: This study proposes a method for predicting the early-age compressive strength development of concretes used in the construction of nuclear power plants. Three representative mixes with strengths of 6000 psi (41.4 MPa), 4500 psi (31.0 MPa), and 4000 psi (27.6 MPa) were selected and tested under various curing conditions; the temperature ranged from 10 to 40 °C, and the relative humidity from 40 to 100%. In order to consider the effect of humidity as well as that of temperature, an existing model, the humidity-adjusted maturity function, was adopted, and the parameters used in the function were determined from the test results. A series of tests was also performed under curing conditions of variable temperature and constant humidity, and a comparison between the measured and predicted strengths was made for verification.
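
    A minimal sketch of what a humidity-adjusted maturity calculation can look like, assuming a Nurse-Saul-type temperature term scaled by a humidity reduction factor and a hyperbolic strength-maturity relation; the functional form of the humidity factor and all constants below are placeholders, not the parameters calibrated in this study:

```python
import numpy as np

def humidity_factor(rh):
    """Placeholder reduction factor: full maturity gain at RH >= 80%, scaled
    down linearly at lower humidity (illustrative, not the paper's calibration)."""
    return np.clip((rh - 40.0) / 40.0, 0.0, 1.0)

def maturity(temps_c, rhs, dt_hours, datum_c=-10.0):
    """Humidity-adjusted Nurse-Saul maturity index in degC-hours."""
    temps_c, rhs = np.asarray(temps_c), np.asarray(rhs)
    return float(np.sum(humidity_factor(rhs) * np.maximum(temps_c - datum_c, 0.0) * dt_hours))

def strength_mpa(M, s_ult=41.4, k=8e-4, M0=150.0):
    """Hyperbolic strength-maturity relation (placeholder constants; s_ult
    matches the 6000 psi = 41.4 MPa mix only for illustration)."""
    x = k * max(M - M0, 0.0)
    return s_ult * x / (1.0 + x)

hours = 7 * 24   # 7 days cured at 20 degC and 60% relative humidity, logged hourly
M = maturity(np.full(hours, 20.0), np.full(hours, 60.0), dt_hours=1.0)
print(f"maturity {M:.0f} degC-h -> predicted strength {strength_mpa(M):.1f} MPa")
```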

  9. Predicting sugar-sweetened behaviours with theory of planned behaviour constructs: Outcome and process results from the SIPsmartER behavioural intervention.

    Science.gov (United States)

    Zoellner, Jamie M; Porter, Kathleen J; Chen, Yvonnes; Hedrick, Valisa E; You, Wen; Hickman, Maja; Estabrooks, Paul A

    2017-05-01

    Guided by the theory of planned behaviour (TPB) and health literacy concepts, SIPsmartER is a six-month multicomponent intervention effective at improving sugar-sweetened beverage (SSB) behaviours. Using SIPsmartER data, this study explores the prediction of SSB behavioural intention (BI) and behaviour from TPB constructs using: (1) cross-sectional and prospective models and (2) 11 single-item assessments from interactive voice response (IVR) technology. Quasi-experimental design, including pre- and post-outcome data and repeated-measures process data from 155 intervention participants. Validated multi-item TPB measures, single-item TPB measures, and self-reported SSB behaviours. Hypothesised relationships were investigated using correlation and multiple regression models. TPB constructs explained 32% of the variance in BI cross-sectionally and 20% prospectively, and explained 13-20% of the variance in behaviour cross-sectionally and 6% prospectively. Single-item scale models were significant, yet explained less variance. All IVR models predicting BI (average 21%, range 6-38%) and behaviour (average 30%, range 6-55%) were significant. Findings are interpreted in the context of other cross-sectional, prospective and experimental TPB health and dietary studies. Findings advance the experimental application of the TPB, including understanding constructs at outcome and process time points and applying theory in all intervention development, implementation and evaluation phases.

  10. Predicting sugar-sweetened behaviours with theory of planned behaviour constructs: Outcome and process results from the SIPsmartER behavioural intervention

    Science.gov (United States)

    Zoellner, Jamie M.; Porter, Kathleen J.; Chen, Yvonnes; Hedrick, Valisa E.; You, Wen; Hickman, Maja; Estabrooks, Paul A.

    2017-01-01

    Objective Guided by the theory of planned behaviour (TPB) and health literacy concepts, SIPsmartER is a six-month multicomponent intervention effective at improving sugar-sweetened beverage (SSB) behaviours. Using SIPsmartER data, this study explores the prediction of SSB behavioural intention (BI) and behaviour from TPB constructs using: (1) cross-sectional and prospective models and (2) 11 single-item assessments from interactive voice response (IVR) technology. Design Quasi-experimental design, including pre- and post-outcome data and repeated-measures process data from 155 intervention participants. Main Outcome Measures Validated multi-item TPB measures, single-item TPB measures, and self-reported SSB behaviours. Hypothesised relationships were investigated using correlation and multiple regression models. Results TPB constructs explained 32% of the variance in BI cross-sectionally and 20% prospectively, and explained 13–20% of the variance in behaviour cross-sectionally and 6% prospectively. Single-item scale models were significant, yet explained less variance. All IVR models predicting BI (average 21%, range 6–38%) and behaviour (average 30%, range 6–55%) were significant. Conclusion Findings are interpreted in the context of other cross-sectional, prospective and experimental TPB health and dietary studies. Findings advance the experimental application of the TPB, including understanding constructs at outcome and process time points and applying theory in all intervention development, implementation and evaluation phases.

  11. Analysis of the earthquake data and estimation of source parameters in the Kyungsang basin

    Energy Technology Data Exchange (ETDEWEB)

    Seo, Jeong-Moon; Lee, Jun-Hee [Korea Atomic Energy Research Institute, Taejeon (Korea)

    2000-04-01

    The purpose of the present study is to determine the response spectrum for the Korean Peninsula, estimate the seismic source parameters, simulate ground motion appropriate to the seismic characteristics of the peninsula, and compare the simulations with recorded data. The estimated seismic source parameters, such as the apparent seismic stress drop, are somewhat unstable because the data are insufficient; as instrumental earthquake data accumulate in the future, these parameters may be refined. Although the equations presented in this report are derived from limited data, they can be utilized both in seismology and in earthquake engineering. Finally, predictive equations may be given in terms of magnitude and hypocentral distance using these parameters. The estimation of the predictive equation constructed from the simulation is the object of further study. 34 refs., 27 figs., 10 tabs. (Author)

  12. A BIM-based system for demolition and renovation waste estimation and planning.

    Science.gov (United States)

    Cheng, Jack C P; Ma, Lauren Y H

    2013-06-01

    Due to rising worldwide awareness of the green environment, both governments and contractors have to consider effective construction and demolition (C&D) waste management practices. The last two decades have witnessed the growing importance of demolition and renovation (D&R) works and a growing amount of D&R waste disposed of in landfills every day, especially in developed cities like Hong Kong. Quantitative waste prediction is crucial for waste management: it can enable contractors to pinpoint critical waste generation processes and to plan waste control strategies. In addition, waste estimation can also facilitate government waste management policies, such as the waste disposal charging scheme in Hong Kong. Currently, tools that can accurately and conveniently estimate the amount of waste from construction, renovation, and demolition projects are lacking. In light of this research gap, this paper presents a building information modeling (BIM) based system that we have developed for the estimation and planning of D&R waste. BIM allows multi-disciplinary information to be superimposed within one digital building model. Our system can extract material and volume information from the BIM model and integrate it for detailed waste estimation and planning. Waste recycling and reuse are also considered: extracted material information can be provided to recyclers before demolition or renovation to make the recycling stage more cooperative and efficient. Pick-up truck requirements and the waste disposal charging fees for different waste facilities are also predicted by the system, and the results can alert contractors ahead of time at the project planning stage. This paper also presents an example scenario with a 47-floor residential building in Hong Kong to demonstrate our D&R waste estimation and planning system. As the BIM technology has been increasingly adopted in the architectural, engineering and construction industry

  13. Unchained Melody: Revisiting the Estimation of SF-6D Values

    Science.gov (United States)

    Craig, Benjamin M.

    2015-01-01

    Purpose In the original SF-6D valuation study, the analytical design inherited conventions that detrimentally affected its ability to predict values on a quality-adjusted life year (QALY) scale. Our objective is to estimate UK values for SF-6D states using the original data and multi-attribute utility (MAU) regression after addressing its limitations and to compare the revised SF-6D and EQ-5D value predictions. Methods Using the unaltered data (611 respondents, 3503 SG responses), the parameters of the original MAU model were re-estimated under 3 alternative error specifications, known as the instant, episodic, and angular random utility models. Value predictions on a QALY scale were compared to EQ-5D-3L predictions using the 1996 Health Survey for England. Results Contrary to the original results, the revised SF-6D value predictions range below 0 QALYs (i.e., worse than death) and agree largely with EQ-5D predictions after adjusting for scale. Although a QALY is defined as a year in optimal health, the SF-6D sets a higher standard for optimal health than the EQ-5D-3L; therefore, it has larger units on a QALY scale by construction (20.9% more). Conclusions Much of the debate in health valuation has focused on differences between preference elicitation tasks, sampling, and instruments. After correcting errant econometric practices and adjusting for differences in QALY scale between the EQ-5D and SF-6D values, the revised predictions demonstrate convergent validity, making them more suitable for UK economic evaluations compared to original estimates. PMID:26359242

  14. A Well-Designed Parameter Estimation Method for Lifetime Prediction of Deteriorating Systems with Both Smooth Degradation and Abrupt Damage

    Directory of Open Access Journals (Sweden)

    Chuanqiang Yu

    2015-01-01

    Full Text Available Deteriorating systems, which are subject to both continuous smooth degradation and additional abrupt damage due to a shock process, are often encountered in engineering. Modeling the degradation evolution and predicting the lifetime of such systems are both interesting and challenging in practice. In this paper, we model the degradation trajectory of the deteriorating system by a random coefficient regression (RCR) model with positive jumps, where the RCR part models the continuous smooth degradation of the system and the jump part characterizes the abrupt damage due to random shocks. Based on a specified threshold level, the probability density function (PDF) and cumulative distribution function (CDF) of the lifetime can be derived analytically. The unknown parameters associated with the derived lifetime distributions can be estimated via a well-designed parameter estimation procedure on the basis of the available degradation recordings of the deteriorating systems. An illustrative example is finally provided to demonstrate the implementation and superiority of the newly proposed lifetime prediction method. The experimental results reveal that the proposed lifetime prediction method with its dedicated parameter estimation strategy yields more accurate lifetime predictions than the rival model in the literature.
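
    The lifetime distribution described above can also be approximated numerically. Below is a Monte Carlo sketch of the same model class, a linear random-coefficient trend plus compound-Poisson positive jumps; the paper derives the PDF and CDF analytically, and every parameter here is illustrative.

      import numpy as np

      rng = np.random.default_rng(0)
      THRESHOLD, T_MAX, DT = 10.0, 50.0, 0.05
      SHOCK_RATE, JUMP_MEAN = 0.2, 0.8     # shock arrival rate, mean jump size
      lifetimes = []
      for _ in range(5000):
          a = rng.normal(0.15, 0.03)       # random coefficient: degradation rate
          jumps, t, life = 0.0, 0.0, np.inf
          while t < T_MAX:
              t += DT
              n = rng.poisson(SHOCK_RATE * DT)                 # shocks this step
              if n:
                  jumps += rng.exponential(JUMP_MEAN, n).sum() # positive jumps
              if a * t + jumps >= THRESHOLD:                   # first passage
                  life = t
                  break
          lifetimes.append(life)
      finite = np.array([x for x in lifetimes if np.isfinite(x)])
      print(f"median lifetime = {np.median(finite):.1f}")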

  15. On-Line Flutter Prediction Tool for Wind Tunnel Flutter Testing using Parameter Varying Estimation Methodology, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — ZONA Technology, Inc. (ZONA) proposes to develop an on-line flutter prediction tool for wind tunnel model using the parameter varying estimation (PVE) technique to...

  16. Modelling and prediction of commercial property attendance based on the estimation of its attraction for consumers (by the example of shopping malls)

    Directory of Open Access Journals (Sweden)

    Varvara Sergeyevna Spirina

    2015-03-01

    Full Text Available Objective: to research and elaborate an economic-mathematical model for predicting commercial property attendance, by the example of shopping malls, based on the estimation of its attraction for consumers. Methods: the methodological and theoretical basis of the work comprises the rules and techniques of qualimetry and matrix mechanisms of complex estimation, necessary for estimating and aggregating the factors influencing a consumer group's choice among many alternative property venues. Results: two mechanisms were elaborated for the complex estimation of commercial property, which is necessary to evaluate its attraction for consumers and to predict attendance. By the example of two large shopping malls in Perm, Russia, it is shown that using both mechanisms in the economic-mathematical model of commercial property attendance increases the accuracy of its predictions compared to the traditional Huff model. The reliability of the results is confirmed by the agreement between the calculated results and actual poll data on shopping mall attendance. Scientific novelty: a multifactor model of commercial property attraction for consumers was elaborated by the example of shopping malls; the parameters of the complex estimation mechanisms are defined, namely eight parameters influencing consumers' choice of a shopping mall. The model differs from the traditional Huff model in the number of factors influencing the choice of a shopping mall and in the higher accuracy of attendance prediction. Practical significance: the economic-mathematical models able to predict commercial property attendance can be used for efficient planning of measures to attract consumers, and to preserve and develop the competitive advantages of commercial property.
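
    The Huff model used as the benchmark above is a simple gravity formulation: the probability that a consumer at zone i visits mall j is P_ij = A_j^alpha * d_ij^(-beta) / sum_k(A_k^alpha * d_ik^(-beta)), where A is attractiveness and d is distance. A minimal sketch, with illustrative alpha, beta and data:

      import numpy as np

      def huff_probabilities(attractiveness, distances, alpha=1.0, beta=2.0):
          utility = attractiveness ** alpha * distances ** (-beta)
          return utility / utility.sum()

      A = np.array([50000.0, 30000.0])   # e.g., retail floor area of two malls
      d = np.array([3.0, 1.5])           # distances from one consumer zone (km)
      print(huff_probabilities(A, d))    # visit probabilities for that zone

    The authors' multifactor attraction model replaces the single attractiveness term A_j with a complex estimate aggregated from eight factors.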

  17. The prediction of engineering cost for green buildings based on information entropy

    Science.gov (United States)

    Liang, Guoqiang; Huang, Jinglian

    2018-03-01

    Green building is the developing trend in the world building industry, and construction costs are an essential consideration in building construction. Therefore, it is necessary to investigate the problem of cost prediction in green building. On the basis of analyzing green building costs, this paper proposes a forecasting method for the actual cost of green building based on information entropy and provides the forecasting working procedure. Using probability densities obtained from statistical data on labor costs, material costs, machinery costs, administration costs, profits, risk costs, unit project quotations and so on, situations that lead to variations between budgeted cost and actual cost in construction can be predicted by estimating the information entropy of the budgeted and actual costs. The research results of this article have practical significance for cost control in green building, and the method proposed here can be generalized and applied to a variety of other aspects of building management.
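
    A minimal sketch of the entropy calculation implied above: discretise a cost component's historical distribution and compute its Shannon entropy H = -sum(p * log p). A wider gap between the entropies of budgeted and actual cost signals greater unpredictability. The sample data are made up.

      import numpy as np

      def shannon_entropy(samples, bins=10):
          counts, _ = np.histogram(samples, bins=bins)
          p = counts / counts.sum()
          p = p[p > 0]                         # drop empty bins
          return float(-(p * np.log(p)).sum())

      rng = np.random.default_rng(1)
      budgeted = rng.normal(100.0, 5.0, 500)   # e.g., budgeted labor cost
      actual = rng.normal(104.0, 9.0, 500)     # actual cost varies more widely
      print(shannon_entropy(budgeted), shannon_entropy(actual))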

  18. Ensemble of data-driven prognostic algorithms for robust prediction of remaining useful life

    International Nuclear Information System (INIS)

    Hu Chao; Youn, Byeng D.; Wang Pingfeng; Taek Yoon, Joung

    2012-01-01

    Prognostics aims at determining whether a failure of an engineered system (e.g., a nuclear power plant) is impending and estimating the remaining useful life (RUL) before the failure occurs. The traditional data-driven prognostic approach is to construct multiple candidate algorithms using a training data set, evaluate their respective performance using a testing data set, and select the one with the best performance while discarding all the others. This approach has three shortcomings: (i) the selected standalone algorithm may not be robust; (ii) it wastes the resources for constructing the algorithms that are discarded; (iii) it requires the testing data in addition to the training data. To overcome these drawbacks, this paper proposes an ensemble data-driven prognostic approach which combines multiple member algorithms with a weighted-sum formulation. Three weighting schemes, namely the accuracy-based weighting, diversity-based weighting and optimization-based weighting, are proposed to determine the weights of member algorithms. The k-fold cross validation (CV) is employed to estimate the prediction error required by the weighting schemes. The results obtained from three case studies suggest that the ensemble approach with any weighting scheme gives more accurate RUL predictions compared to any sole algorithm when member algorithms producing diverse RUL predictions have comparable prediction accuracy and that the optimization-based weighting scheme gives the best overall performance among the three weighting schemes.
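
    A minimal sketch of the weighted-sum ensemble with accuracy-based weighting: each member's weight is inversely proportional to its cross-validation error, and the ensemble RUL is the weighted sum of member predictions. The error and prediction values are illustrative.

      import numpy as np

      cv_errors = np.array([12.0, 8.0, 20.0])       # k-fold CV error per member
      inv = 1.0 / cv_errors
      weights = inv / inv.sum()                     # accuracy-based weights
      member_rul = np.array([310.0, 295.0, 340.0])  # member RUL predictions (h)
      print(float(weights @ member_rul))            # ensemble RUL prediction

    The optimization-based scheme would instead search for the weight vector (summing to one) that minimises the cross-validation error of the combined prediction itself.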

  19. The estimation of soil parameters using observations on crop biophysical variables and the crop model STICS improve the predictions of agro environmental variables.

    Science.gov (United States)

    Varella, H.-V.

    2009-04-01

    Dynamic crop models are very useful for predicting the behavior of crops in their environment and are widely used in agro-environmental work. These models have many parameters, and their spatial application requires good knowledge of these parameters, especially the soil parameters. The soil parameters can be estimated from soil analysis at different points, but this is very costly and requires a lot of experimental work. Nevertheless, observations on crops provided by new techniques like remote sensing or yield monitoring offer a possibility for estimating soil parameters through the inversion of crop models. In this work, the STICS crop model, which includes more than 200 parameters, is studied for wheat and sugar beet. After previous work based on a large experimental database for calibrating parameters related to the characteristics of the crop, a global sensitivity analysis of the observed variables (leaf area index LAI and absorbed nitrogen QN provided by remote sensing data, and yield at harvest provided by yield monitoring) to the soil parameters is performed, in order to determine which of them have to be estimated. This study was made under different climatic and agronomic conditions and reveals that 7 soil parameters (4 related to water and 3 related to nitrogen) have a clear influence on the variance of the observed variables and therefore have to be estimated. For estimating these 7 soil parameters, a Bayesian data assimilation method named Importance Sampling is chosen (because prior information on these parameters is available), using observations, on wheat and sugar beet crops, of LAI and QN at various dates and yield at harvest acquired under different climatic and agronomic conditions. The quality of parameter estimation is then determined by comparing the result of parameter estimation with only prior information to the result with the posterior information provided by the Bayesian data assimilation method. The result of the
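
    A minimal sketch of the Importance Sampling step, with a stand-in one-parameter "crop model" in place of STICS and invented numbers: candidates drawn from the prior are re-weighted by the likelihood of the observations, giving a posterior estimate of the soil parameter.

      import numpy as np

      rng = np.random.default_rng(2)

      def crop_model(theta):          # stand-in for STICS: predicts, e.g., LAI
          return 2.0 * theta

      obs, sigma = 3.1, 0.2           # observed LAI and measurement noise
      theta = rng.normal(1.5, 0.5, 10000)          # draws from the prior
      w = np.exp(-0.5 * ((crop_model(theta) - obs) / sigma) ** 2)
      w /= w.sum()                                 # importance weights
      print(float(w @ theta))                      # posterior mean estimate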

  20. Implementation of Chaotic Gaussian Particle Swarm Optimization for Optimize Learning-to-Rank Software Defect Prediction Model Construction

    Science.gov (United States)

    Buchari, M. A.; Mardiyanto, S.; Hendradjaya, B.

    2018-03-01

    Finding the existence of software defects as early as possible is the purpose of research on software defect prediction. Software defect prediction activity is required not only to state the existence of defects, but also to give a list of priorities indicating which modules require more intensive testing, so that the allocation of test resources can be managed efficiently. Learning to rank is one approach that can provide defect module ranking data for the purposes of software testing. In this study, we propose a meta-heuristic chaotic Gaussian particle swarm optimization to improve the accuracy of the learning-to-rank software defect prediction approach. We have used 11 public benchmark data sets as experimental data. Our overall results demonstrate that the prediction models constructed using chaotic Gaussian particle swarm optimization achieve better accuracy on 5 data sets, tie on 5 data sets and perform worse on 1 data set. Thus, we conclude that applying chaotic Gaussian particle swarm optimization in the learning-to-rank approach can improve the accuracy of defect module ranking in data sets that have high-dimensional features.

  1. A generic method for estimating system reliability using Bayesian networks

    International Nuclear Information System (INIS)

    Doguc, Ozge; Ramirez-Marquez, Jose Emmanuel

    2009-01-01

    This study presents a holistic method for constructing a Bayesian network (BN) model for estimating system reliability. BN is a probabilistic approach that is used to model and predict the behavior of a system based on observed stochastic events. The BN model is a directed acyclic graph (DAG) where the nodes represent system components and arcs represent relationships among them. Although recent studies on using BN for estimating system reliability have been proposed, they are based on the assumption that a pre-built BN has been designed to represent the system. In these studies, the task of building the BN is typically left to a group of specialists who are BN and domain experts. The BN experts should learn about the domain before building the BN, which is generally very time consuming and may lead to incorrect deductions. As there are no existing studies to eliminate the need for a human expert in the process of system reliability estimation, this paper introduces a method that uses historical data about the system to be modeled as a BN and provides efficient techniques for automated construction of the BN model, and hence estimation of the system reliability. In this respect K2, a data mining algorithm, is used for finding associations between system components, and thus building the BN model. This algorithm uses a heuristic to provide efficient and accurate results while searching for associations. Moreover, no human intervention is necessary during the process of BN construction and reliability estimation. The paper provides a step-by-step illustration of the method and evaluation of the approach with literature case examples

  2. A generic method for estimating system reliability using Bayesian networks

    Energy Technology Data Exchange (ETDEWEB)

    Doguc, Ozge [Stevens Institute of Technology, Hoboken, NJ 07030 (United States); Ramirez-Marquez, Jose Emmanuel [Stevens Institute of Technology, Hoboken, NJ 07030 (United States)], E-mail: jmarquez@stevens.edu

    2009-02-15

    This study presents a holistic method for constructing a Bayesian network (BN) model for estimating system reliability. BN is a probabilistic approach that is used to model and predict the behavior of a system based on observed stochastic events. The BN model is a directed acyclic graph (DAG) where the nodes represent system components and arcs represent relationships among them. Although recent studies on using BN for estimating system reliability have been proposed, they are based on the assumption that a pre-built BN has been designed to represent the system. In these studies, the task of building the BN is typically left to a group of specialists who are BN and domain experts. The BN experts should learn about the domain before building the BN, which is generally very time consuming and may lead to incorrect deductions. As there are no existing studies to eliminate the need for a human expert in the process of system reliability estimation, this paper introduces a method that uses historical data about the system to be modeled as a BN and provides efficient techniques for automated construction of the BN model, and hence estimation of the system reliability. In this respect K2, a data mining algorithm, is used for finding associations between system components, and thus building the BN model. This algorithm uses a heuristic to provide efficient and accurate results while searching for associations. Moreover, no human intervention is necessary during the process of BN construction and reliability estimation. The paper provides a step-by-step illustration of the method and evaluation of the approach with literature case examples.
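
    Once the BN structure and conditional probability tables are in hand (learned by the K2 algorithm in the studies above), reliability estimation reduces to summing over component states. A toy illustration with two components feeding one system node; all probabilities are invented.

      p_a, p_b = 0.95, 0.90                    # component reliabilities
      # P(system up | A, B): tolerates one failure but not both (assumed CPT)
      cpt = {(1, 1): 0.99, (1, 0): 0.80, (0, 1): 0.75, (0, 0): 0.0}
      p_system = sum(cpt[(a, b)]
                     * (p_a if a else 1 - p_a)
                     * (p_b if b else 1 - p_b)
                     for a in (0, 1) for b in (0, 1))
      print(f"estimated system reliability: {p_system:.4f}")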

  3. A study of the planned value estimation method for developing earned value management system in the nuclear power plant construction project

    Energy Technology Data Exchange (ETDEWEB)

    Lee, S.H.; Moon, B.S., E-mail: gustblast@khnp.co.kr, E-mail: moonbs@khnp.co.kr [Korea Hydro & Nuclear power co.,Ltd., Central Research Inst., Daejeon (Korea, Republic of); Lee, J.H., E-mail: ljh@kkprotech.com [Kong Kwan Protech Co.,Ltd., Seoul (Korea, Republic of)

    2014-07-01

    The Earned Value Management System (EVMS) is a project management technique for measuring project performance and progress, and for forward projection, through the integrated management and control of cost and schedule. This research reviews the concept of the EVMS method and proposes two Planned Value estimation methods for potential application to succeeding NPP construction projects, using historical data from preceding NPP projects. This paper introduces a solution to the problems caused by the absence of a management system integrating schedule and cost, which has been a recurring issue in NPP construction project management. (author)

  4. A study of the planned value estimation method for developing earned value management system in the nuclear power plant construction project

    International Nuclear Information System (INIS)

    Lee, S.H.; Moon, B.S.; Lee, J.H.

    2014-01-01

    The Earned Value Management System (EVMS) is a project management technique for measuring project performance and progress, and for forward projection, through the integrated management and control of cost and schedule. This research reviews the concept of the EVMS method and proposes two Planned Value estimation methods for potential application to succeeding NPP construction projects, using historical data from preceding NPP projects. This paper introduces a solution to the problems caused by the absence of a management system integrating schedule and cost, which has been a recurring issue in NPP construction project management. (author)
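
    The quantities an EVMS tracks are standard; with the planned value estimated by the proposed methods, performance indices and a completion forecast follow directly. A minimal sketch with illustrative figures:

      pv, ev, ac = 120.0, 105.0, 110.0   # planned value, earned value, actual cost
      spi, cpi = ev / pv, ev / ac        # schedule / cost performance indices
      bac = 500.0                        # budget at completion (assumed)
      eac = bac / cpi                    # a common estimate at completion
      print(f"SPI={spi:.2f}, CPI={cpi:.2f}, SV={ev - pv:.1f}, "
            f"CV={ev - ac:.1f}, EAC={eac:.1f}")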

  5. Estimating Required Contingency Funds for Construction Projects using Multiple Linear Regression

    National Research Council Canada - National Science Library

    Cook, Jason J

    2006-01-01

    Cost overruns are a critical problem for construction projects. The common practice for dealing with cost overruns is the assignment of an arbitrary flat percentage of the construction budget as a contingency fund...

  6. Reliability of CKD-EPI predictive equation in estimating chronic kidney disease prevalence in the Croatian endemic nephropathy area.

    Science.gov (United States)

    Fuček, Mirjana; Dika, Živka; Karanović, Sandra; Vuković Brinar, Ivana; Premužić, Vedran; Kos, Jelena; Cvitković, Ante; Mišić, Maja; Samardžić, Josip; Rogić, Dunja; Jelaković, Bojan

    2018-02-15

    Chronic kidney disease (CKD) is a significant public health problem, and it is not possible to precisely predict its progression to terminal renal failure. According to current guidelines, CKD stages are classified based on the estimated glomerular filtration rate (eGFR) and albuminuria. The aims of this study were to determine the reliability of a predictive equation in estimating CKD prevalence in Croatian areas with endemic nephropathy (EN), to compare the results with non-endemic areas, and to determine whether the prevalence of CKD stages 3-5 was increased in subjects with EN. A total of 1573 inhabitants of the Croatian Posavina rural area from 6 endemic and 3 non-endemic villages were enrolled. Participants were classified according to the modified criteria of the World Health Organization for EN. Estimated GFR was calculated using the Chronic Kidney Disease Epidemiology Collaboration equation (CKD-EPI). The results showed a very high CKD prevalence in the Croatian rural area (19%). CKD prevalence was significantly higher in EN than in non-EN villages, with the lowest eGFR value in the diseased subgroup. eGFR correlated significantly with the diagnosis of EN. Kidney function assessment using the CKD-EPI predictive equation proved to be a good marker in differentiating the study subgroups, and remains one of the diagnostic criteria for EN.
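
    For reference, the CKD-EPI (2009) creatinine equation used in the study has a closed form that is straightforward to implement; the constants below are the published ones (serum creatinine in mg/dL).

      def ckd_epi_egfr(scr_mg_dl, age, female, black=False):
          # eGFR = 141 * min(Scr/k, 1)**alpha * max(Scr/k, 1)**-1.209
          #        * 0.993**age * 1.018 [if female] * 1.159 [if black]
          k = 0.7 if female else 0.9
          alpha = -0.329 if female else -0.411
          egfr = (141.0
                  * min(scr_mg_dl / k, 1.0) ** alpha
                  * max(scr_mg_dl / k, 1.0) ** -1.209
                  * 0.993 ** age)
          if female:
              egfr *= 1.018
          if black:
              egfr *= 1.159
          return egfr                    # mL/min/1.73 m^2

      print(round(ckd_epi_egfr(1.1, 60, female=True), 1))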

  7. Estimation of brachial artery volume flow by duplex ultrasound imaging predicts dialysis access maturation.

    Science.gov (United States)

    Ko, Sae Hee; Bandyk, Dennis F; Hodgkiss-Harlow, Kelley D; Barleben, Andrew; Lane, John

    2015-06-01

    This study validated duplex ultrasound measurement of brachial artery volume flow (VF) as a predictor of dialysis access flow maturation and successful hemodialysis. Duplex ultrasound was used to image upper extremity dialysis access anatomy and estimate access VF within 1 to 2 weeks of the procedure. Correlation of brachial artery VF with dialysis access conduit VF was performed using a standardized duplex testing protocol in 75 patients. The hemodynamic data were used to develop brachial artery flow velocity criteria (peak systolic velocity and end-diastolic velocity) predictive of three VF categories: low (<600 mL/min), intermediate (600-800 mL/min), and high (>800 mL/min). Brachial artery VF was then measured in 148 patients after a primary (n = 86) or revised (n = 62) upper extremity dialysis access procedure, and the VF category was correlated with access maturation or need for revision before hemodialysis usage. Access maturation was conferred when brachial artery VF was >600 mL/min and conduit imaging indicated successful cannulation based on anatomic criteria of conduit diameter >5 mm and sufficiently shallow skin depth. A VF >800 mL/min was predicted when the brachial artery lumen diameter was >4.5 mm, peak systolic velocity was >150 cm/s, and the diastolic-to-systolic velocity ratio was >0.4. Brachial artery velocity spectra thus distinguished low-VF from high-VF accesses. Duplex testing to estimate brachial artery VF and assess the conduit for ease of cannulation can be performed in 5 minutes during the initial postoperative vascular clinic evaluation. Estimation of brachial artery VF using duplex ultrasound, termed the "Fast, 5-min Dialysis Duplex Scan," facilitates patient evaluation after new or revised upper extremity dialysis access procedures. Brachial artery VF correlates with access VF measurements and has the advantage of being easier to perform and applicable for forearm as well as arm dialysis access. When brachial artery velocity spectra criteria confirm a VF >800 mL/min, flow maturation and successful hemodialysis are predicted if anatomic criteria

  8. Uncertainty estimates for predictions of the impact of breeder-reactor radionuclide releases

    International Nuclear Information System (INIS)

    Miller, C.W.; Little, C.A.

    1982-01-01

    This paper summarizes estimates, compiled in a larger report, of the uncertainty associated with models and parameters used to assess the impact on man of radionuclide releases to the environment by breeder reactor facilities. These estimates indicate that, for many sites, generic models and representative parameter values may reasonably be used to calculate doses from annual average radionuclide releases when the calculated doses are on the order of one-tenth or less of a relevant dose limit. For short-term, accidental releases, the uncertainty in the dose calculations may be much larger than an order of magnitude. As a result, it may be necessary to incorporate site-specific information into the dose calculation under such circumstances. However, even using site-specific information, inherent natural variability among human receptors and the uncertainties in the dose conversion factor will likely result in an overall uncertainty of greater than an order of magnitude for predictions of dose following short-term releases

  9. Prediction of toxicity and comparison of alternatives using WebTEST (Web-services Toxicity Estimation Software Tool)

    Science.gov (United States)

    A Java-based web service is being developed within the US EPA’s Chemistry Dashboard to provide real time estimates of toxicity values and physical properties. WebTEST can generate toxicity predictions directly from a simple URL which includes the endpoint, QSAR method, and ...

  10. Daily river flow prediction based on Two-Phase Constructive Fuzzy Systems Modeling: A case of hydrological - meteorological measurements asymmetry

    Science.gov (United States)

    Bou-Fakhreddine, Bassam; Mougharbel, Imad; Faye, Alain; Abou Chakra, Sara; Pollet, Yann

    2018-03-01

    Accurate daily river flow forecasting is essential in many applications of water resources such as hydropower operation, agricultural planning and flood control. This paper presents a forecasting approach to deal with a newly addressed situation where hydrological data exist for a period longer than that of meteorological data (measurements asymmetry). One of the potential solutions to the measurements asymmetry issue is data re-sampling: either considering only the hydrological data, or only the balanced part of the hydro-meteorological data set, during the forecasting process. The main disadvantage, however, is that potentially relevant information in the left-out data may be lost. In this research, the key output is a Two-Phase Constructive Fuzzy inference hybrid model implemented over the non-re-sampled data. The introduced modeling approach must be capable of exploiting the available data efficiently, with higher prediction efficiency relative to a Constructive Fuzzy model trained over the re-sampled data set. The study was applied to the Litani River in the Bekaa Valley, Lebanon, using 4 years of rainfall and 24 years of river flow daily measurements. A Constructive Fuzzy System Model (C-FSM) and a Two-Phase Constructive Fuzzy System Model (TPC-FSM) were trained. Upon validation, the second model showed competitive performance and accuracy, with the ability to preserve a higher day-to-day variability for 1, 3 and 6 days ahead. In fact, for the longest lead period, the C-FSM and TPC-FSM were able to explain 84.6% and 86.5% of the actual river flow variation, respectively. Overall, the results indicate that the TPC-FSM model provides a better tool to capture extreme flows in the process of streamflow prediction.
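
    "Explaining 86.5% of the actual river flow variation" corresponds to a goodness-of-fit score of 0.865; in hydrology this is typically the Nash-Sutcliffe efficiency (equivalent to R^2 against observations). A minimal implementation with made-up flows:

      import numpy as np

      def nash_sutcliffe(observed, simulated):
          obs, sim = np.asarray(observed), np.asarray(simulated)
          return 1.0 - ((obs - sim) ** 2).sum() / ((obs - obs.mean()) ** 2).sum()

      obs = [12.0, 15.0, 30.0, 22.0, 18.0]   # daily flows (m^3/s)
      sim = [11.0, 16.0, 27.0, 23.0, 17.5]
      print(nash_sutcliffe(obs, sim))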

  11. Dynamic Output Feedback Robust Model Predictive Control via Zonotopic Set-Membership Estimation for Constrained Quasi-LPV Systems

    Directory of Open Access Journals (Sweden)

    Xubin Ping

    2015-01-01

    Full Text Available For the quasi-linear parameter varying (quasi-LPV) system with bounded disturbance, a synthesis approach of dynamic output feedback robust model predictive control (OFRMPC) is investigated. The estimation error set is represented by a zonotope and refreshed by the zonotopic set-membership estimation method. By properly refreshing the estimation error set online, bounds on the true state at the next sampling time can be obtained. Furthermore, the feasibility of the main optimization problem at the next sampling time can be determined at the current time. A numerical example is given to illustrate the effectiveness of the approach.
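
    A zonotope is the set {c + G @ xi : xi in [-1, 1]^m}; set-membership estimation propagates it through the dynamics and inflates it by the disturbance set (a Minkowski sum, which simply concatenates generator matrices). A one-step propagation sketch with illustrative matrices; the paper's method additionally refreshes the set online using the measurements.

      import numpy as np

      A = np.array([[1.0, 0.1], [0.0, 0.9]])    # state matrix (assumed)
      c = np.array([0.0, 0.0])                  # zonotope centre
      G = np.array([[0.2, 0.0], [0.0, 0.2]])    # state-uncertainty generators
      Gw = np.array([[0.05, 0.0], [0.0, 0.05]]) # disturbance generators

      c_next = A @ c                            # propagate the centre
      G_next = np.hstack([A @ G, Gw])           # propagate and Minkowski-sum
      radius = np.abs(G_next).sum(axis=1)       # interval hull of the zonotope
      print(c_next - radius, c_next + radius)   # guaranteed state bounds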

  12. Site characterization and modeling to estimate movement of hazardous materials in groundwater

    International Nuclear Information System (INIS)

    Ditmars, J.D.

    1988-01-01

    A quantitative approach for evaluating the effectiveness of site characterization measurement activities is developed and illustrated with an example application to hypothetical measurement schemes at a potential geologic repository site for radioactive waste. The method is a general one and could also be applied at sites for underground disposal of hazardous chemicals. The approach presumes that measurements will be undertaken to support predictions of the performance of some aspect of a constructed facility or natural system. It requires a quantitative performance objective, such as groundwater travel time or contaminant concentration, against which to compare predictions of performance. The approach recognizes that such predictions are uncertain because the measurements upon which they are based are uncertain. The effectiveness of measurement activities is quantified by a confidence index, β, that reflects the number of standard deviations separating the best estimate of performance from the predetermined performance objective. Measurements that reduce the uncertainty in predictions lead to increased values of β. The link between measurement and prediction uncertainties, required for the evaluation of β for a particular measurement scheme, identifies the measured quantities that significantly affect prediction uncertainty. The components of uncertainty in those key measurements are spatial variation, noise, estimation error, and measurement bias. 7 refs., 4 figs
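
    The confidence index is simply the margin between the best estimate and the objective, measured in standard deviations of the prediction. A one-line illustration with invented values:

      mu = 25000.0         # best-estimate groundwater travel time (years)
      objective = 10000.0  # performance objective (years)
      sigma = 6000.0       # standard deviation of the prediction
      beta = (mu - objective) / sigma
      print(beta)          # larger beta = more effective measurement scheme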

  13. Quasi-closed phase forward-backward linear prediction analysis of speech for accurate formant detection and estimation.

    Science.gov (United States)

    Gowda, Dhananjaya; Airaksinen, Manu; Alku, Paavo

    2017-09-01

    Recently, a quasi-closed phase (QCP) analysis of speech signals for accurate glottal inverse filtering was proposed. However, the QCP analysis, which belongs to the family of temporally weighted linear prediction (WLP) methods, uses the conventional forward type of sample prediction. This may not be the best choice, especially when computing WLP models with a hard-limiting weighting function: a sample-selective minimization of the prediction error in WLP reduces the effective number of samples available within a given window frame. To counter this problem, a modified quasi-closed phase forward-backward (QCP-FB) analysis is proposed, wherein each sample is predicted based on its past as well as future samples, thereby utilizing the available number of samples more effectively. Formant detection and estimation experiments on synthetic vowels generated using a physical modeling approach, as well as natural speech utterances, show that the proposed QCP-FB method yields statistically significant improvements over the conventional linear prediction and QCP methods.
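
    A simplified sketch of the forward-backward idea in a weighted linear prediction setting: each sample is regressed both on its p past samples and on its p future samples, doubling the usable equations, and the weighted squared errors are minimised jointly. This is a generic least-squares rendering under those assumptions, not the authors' exact QCP-FB formulation.

      import numpy as np

      def wlp_fb(x, p, w):
          rows, targets, weights = [], [], []
          for n in range(p, len(x) - p):
              rows.append(x[n - p:n][::-1])        # forward: predict from past
              targets.append(x[n]); weights.append(w[n])
              rows.append(x[n + 1:n + p + 1])      # backward: predict from future
              targets.append(x[n]); weights.append(w[n])
          A, b = np.array(rows), np.array(targets)
          sw = np.sqrt(np.array(weights))          # apply per-sample weights
          coeffs, *_ = np.linalg.lstsq(A * sw[:, None], b * sw, rcond=None)
          return coeffs

      rng = np.random.default_rng(3)
      sig = np.sin(0.3 * np.arange(200)) + 0.05 * rng.standard_normal(200)
      print(wlp_fb(sig, p=2, w=np.ones(200)))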

  14. High Sensitivity TSS Prediction: Estimates of Locations Where TSS Cannot Occur

    KAUST Repository

    Schaefer, Ulf

    2013-10-10

    Background Although transcription in mammalian genomes can initiate from various genomic positions (e.g., 3′UTR, coding exons, etc.), most locations on genomes are not prone to transcription initiation. It is of practical and theoretical interest to be able to estimate such collections of non-TSS locations (NTLs). The identification of large portions of NTLs can contribute to better focusing the search for TSS locations and thus contribute to promoter and gene finding. It can help in the assessment of 5′ completeness of expressed sequences, contribute to more successful experimental designs, as well as more accurate gene annotation. Methodology Using comprehensive collections of Cap Analysis of Gene Expression (CAGE) and other transcript data from mouse and human genomes, we developed a methodology that allows us, by performing computational TSS prediction with very high sensitivity, to annotate, with a high accuracy in a strand specific manner, locations of mammalian genomes that are highly unlikely to harbor transcription start sites (TSSs). The properties of the immediate genomic neighborhood of 98,682 accurately determined mouse and 113,814 human TSSs are used to determine features that distinguish genomic transcription initiation locations from those that are not likely to initiate transcription. In our algorithm we utilize various constraining properties of features identified in the upstream and downstream regions around TSSs, as well as statistical analyses of these surrounding regions. Conclusions Our analysis of human chromosomes 4, 21 and 22 estimates ~46%, ~41% and ~27% of these chromosomes, respectively, as being NTLs. This suggests that on average more than 40% of the human genome can be expected to be highly unlikely to initiate transcription. Our method represents the first one that utilizes high-sensitivity TSS prediction to identify, with high accuracy, large portions of mammalian genomes as NTLs. The server with our algorithm implemented is

  15. Predictive modelling of Lactobacillus casei KN291 survival in fermented soy beverage.

    Science.gov (United States)

    Zielińska, Dorota; Kołożyn-Krajewska, Danuta; Goryl, Antoni; Motyl, Ilona

    2014-02-01

    The aim of the study was to construct and verify predictive growth and survival models of a potentially probiotic bacterium in fermented soy beverage. The research material included natural soy beverage (Polgrunt, Poland) and a strain of lactic acid bacteria (LAB), Lactobacillus casei KN291. To construct predictive models for the growth and survival of L. casei KN291 bacteria in the fermented soy beverage, we designed an experiment which allowed the collection of CFU data. Fermented soy beverage samples were stored at various temperatures (5, 10, 15, and 20°C) for 28 days. On the basis of the obtained data concerning the survival of L. casei KN291 bacteria in soy beverage at different temperature and time conditions, two non-linear models (r(2) = 0.68-0.93) and two surface models (r(2) = 0.76-0.79) were constructed; these models described the behaviour of the bacteria in the product to a satisfactory extent. Verification of the surface models was carried out using validation data collected at 7°C over 28 days. The applied models were found to be well fitted and subject to only small systematic errors, as evidenced by the accuracy factor Af, bias factor Bf and mean squared error MSE. The constructed microbiological growth and survival models of L. casei KN291 in fermented soy beverage enable estimation of the product's shelf-life period, which in this case is defined by the requirement that the level of the bacteria remain above 10(6) CFU/cm(3). The constructed models may be useful as a tool in the manufacture of probiotic foods to estimate their shelf-life period.
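
    A minimal sketch of the shelf-life logic: fit a log-linear survival model log10 N(t) = log10 N0 - k*t to the counts, then solve for the time at which the population reaches the 10^6 CFU/cm^3 threshold. The paper fits richer non-linear and surface models; the counts below are invented.

      import numpy as np

      days = np.array([0, 7, 14, 21, 28])
      log_counts = np.array([8.9, 8.5, 8.0, 7.6, 7.1])   # log10 CFU/cm^3
      slope, intercept = np.polyfit(days, log_counts, 1)
      shelf_life = (6.0 - intercept) / slope             # log10 N reaches 6
      print(round(float(shelf_life), 1), "days")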

  16. Contractor-style tunnel cost estimating

    International Nuclear Information System (INIS)

    Scapuzzi, D.

    1990-06-01

    Keeping pace with recent advances in construction technology is a challenge for the cost estimating engineer. Using an estimating style that simulates the actual construction process and is similar in style to the contractor's estimate gives a realistic view of underground construction costs. For a contractor-style estimate, a mining method is chosen; labor crews, plant and equipment are selected; and advance rates are calculated for the various phases of work, which are used to determine the length of time necessary to complete each phase. The durations are multiplied by the cost of labor and equipment per unit of time and, along with the costs for materials and supplies, combine to complete the estimate. Variations in advance rates, ground support, labor crew size, or other areas are more easily analyzed for their overall effect on the cost and schedule of a project. 14 figs
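
    The contractor-style arithmetic described above, in miniature: choose an advance rate per phase, derive a duration, and price each phase at a crew-plus-equipment rate plus materials. All figures are invented.

      phases = [
          # (name, length_m, advance_m_per_shift, crew_cost_per_shift, materials)
          ("excavation", 800.0, 4.0, 9000.0, 150000.0),
          ("ground support", 800.0, 6.0, 7000.0, 220000.0),
          ("final lining", 800.0, 5.0, 8000.0, 300000.0),
      ]
      total = 0.0
      for name, length, rate, crew, materials in phases:
          shifts = length / rate               # duration of this phase
          cost = shifts * crew + materials
          total += cost
          print(f"{name}: {shifts:.0f} shifts, {cost:,.0f}")
      print(f"total: {total:,.0f}")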

  17. ESTIMATING INJURIOUS IMPACT IN CONSTRUCTION LIFE CYCLE ASSESSMENTS: A PROSPECTIVE STUDY

    Directory of Open Access Journals (Sweden)

    McDevitt, James E.

    2012-04-01

    Full Text Available This paper is the result of a desire to include social factors alongside environmental and economic considerations in Life Cycle Assessment studies for the construction sector. We describe a specific search for a method to include injurious impact in construction Life Cycle Assessment studies, by evaluating a range of methods and data sources. A simple case study using selected Accident Compensation Corporation information illustrates that data relating to injury could provide compelling evidence to cause changes in construction supply chains, and could provide an economic motive to pursue further research in this area. The paper concludes that, limitations notwithstanding, the suggested approach could be useful as a fast and cheap high-level tool that can accelerate the discussions and research agenda that will bring about the inclusion of social metrics in construction sector supply chain management and declarations.

  18. A model for estimating pathogen variability in shellfish and predicting minimum depuration times.

    Science.gov (United States)

    McMenemy, Paul; Kleczkowski, Adam; Lees, David N; Lowther, James; Taylor, Nick

    2018-01-01

    Norovirus is a major cause of viral gastroenteritis, with shellfish consumption being identified as one potential norovirus entry point into the human population. Minimising shellfish norovirus levels is therefore important for both the consumer's protection and the shellfish industry's reputation. One method used to reduce microbiological risks in shellfish is depuration; however, this process also presents additional costs to industry. Providing a mechanism to estimate norovirus levels during depuration would therefore be useful to stakeholders. This paper presents a mathematical model of the depuration process and its impact on norovirus levels found in shellfish. Two fundamental stages of norovirus depuration are considered: (i) the initial distribution of norovirus loads within a shellfish population and (ii) the way in which the initial norovirus loads evolve during depuration. Realistic assumptions are made about the dynamics of norovirus during depuration, and mathematical descriptions of both stages are derived and combined into a single model. Parameters to describe the depuration effect and norovirus load values are derived from existing norovirus data obtained from U.K. harvest sites. However, obtaining population estimates of norovirus variability is time-consuming and expensive; this model addresses the issue by assuming a 'worst case scenario' for variability of pathogens, which is independent of mean pathogen levels. The model is then used to predict minimum depuration times required to achieve norovirus levels which fall within possible risk management levels, as well as predictions of minimum depuration times for other water-borne pathogens found in shellfish. Times for Escherichia coli predicted by the model all fall within the minimum 42 hours required for class B harvest sites, whereas minimum depuration times for norovirus and FRNA+ bacteriophage are substantially longer. Thus this study provides relevant information and tools to assist
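
    A minimal sketch of the minimum-depuration-time logic: assume a lognormal initial load across the population ('worst case' variability) and first-order decay during depuration, then find the earliest time at which, say, 95% of animals fall below a management limit. All parameters are illustrative, not the paper's estimates.

      import numpy as np

      rng = np.random.default_rng(4)
      loads = rng.lognormal(mean=6.0, sigma=1.0, size=100000)  # copies/g at t=0
      DECAY_PER_H = 0.03                                       # first-order rate
      LIMIT, QUANTILE = 200.0, 0.95

      t = 0.0
      while np.quantile(loads * np.exp(-DECAY_PER_H * t), QUANTILE) > LIMIT:
          t += 1.0
      print(f"minimum depuration time = {t:.0f} h")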

  19. Model-based mean square error estimators for k-nearest neighbour predictions and applications using remotely sensed data for forest inventories

    Science.gov (United States)

    Steen Magnussen; Ronald E. McRoberts; Erkki O. Tomppo

    2009-01-01

    New model-based estimators of the uncertainty of pixel-level and areal k-nearest neighbour (knn) predictions of attribute Y from remotely-sensed ancillary data X are presented. Non-parametric functions predict Y from scalar 'Single Index Model' transformations of X. Variance functions generated...
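
    The knn predictor itself is compact: a pixel's attribute prediction is the mean of the k reference plots nearest in the ancillary-data feature space. A minimal sketch with invented reference data; the paper's contribution is the model-based mean square error of such predictions, not the predictor itself.

      import numpy as np

      def knn_predict(X_ref, y_ref, x_new, k=3):
          d = np.linalg.norm(X_ref - x_new, axis=1)   # distances in feature space
          return y_ref[np.argsort(d)[:k]].mean()

      X_ref = np.array([[0.2, 0.5], [0.3, 0.4], [0.8, 0.1], [0.7, 0.2]])  # bands
      y_ref = np.array([120.0, 110.0, 40.0, 55.0])    # plot volumes (m^3/ha)
      print(knn_predict(X_ref, y_ref, np.array([0.25, 0.45])))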

  20. Constructing an everywhere and locally relevant predictive model of the West-African critical zone

    Science.gov (United States)

    Hector, B.; Cohard, J. M.; Pellarin, T.; Maxwell, R. M.; Cappelaere, B.; Demarty, J.; Grippa, M.; Kergoat, L.; Lebel, T.; Mamadou, O.; Mougin, E.; Panthou, G.; Peugeot, C.; Vandervaere, J. P.; Vischel, T.; Vouillamoz, J. M.

    2017-12-01

    Considering water resources and hydrologic hazards, West Africa is among the regions most vulnerable to both climatic changes (e.g., the observed intensification of precipitation) and anthropogenic changes. With a demographic growth rate of about +3% per year, the region experiences rapid land use changes and increased pressure on surface water and groundwater resources, with observed consequences for the hydrological cycle (water table rise as a result of the Sahelian paradox, increase in flood occurrence, etc.). Managing large hydrosystems (such as transboundary aquifers or river basins like the Niger river) requires anticipation of such changes. However, the region significantly lacks observations for constructing and validating critical zone (CZ) models able to predict the future hydrologic regime, and it comprises hydrosystems which encompass strong environmental gradients (e.g., geological, climatic, ecological) with very different dominating hydrological processes. We address these issues by constructing a high-resolution (1 km²) regional-scale physically-based model using ParFlow-CLM, which allows modeling a wide range of processes without prior knowledge of their relative dominance. Our approach combines modeling at multiple scales, from local to meso and regional scales, within the same theoretical framework. Local and meso-scale models are evaluated thanks to the rich AMMA-CATCH CZ observation database, which covers 3 supersites with contrasted environments in Benin (Lat.: 9.8°N), Niger (Lat.: 13.3°N) and Mali (Lat.: 15.3°N). At the regional scale, the lack of a relevant map of soil hydrodynamic parameters is addressed using remote sensing data assimilation. Our first results show the model's ability to reproduce the known dominant hydrological processes (runoff generation, ET, groundwater recharge…) across the major West-African regions and allow us to conduct virtual experiments to explore the impact of global changes on the hydrosystems. This approach is a first step toward the construction of

  1. The future of forests and orangutans (Pongo abelii) in Sumatra: predicting impacts of oil palm plantations, road construction, and mechanisms for reducing carbon emissions from deforestation

    Science.gov (United States)

    Gaveau, David L. A.; Wich, Serge; Epting, Justin; Juhn, Daniel; Kanninen, Markku; Leader-Williams, Nigel

    2009-09-01

    Payments for reduced carbon emissions from deforestation (RED) are now attracting attention as a way to halt tropical deforestation. Northern Sumatra comprises an area of 65 000 km2 that is both the site of Indonesia's first planned RED initiative, and the stronghold of 92% of remaining Sumatran orangutans. Under current plans, this RED initiative will be implemented in a defined geographic area, essentially a newly established, 7500 km2 protected area (PA) comprising mostly upland forest, where guards will be recruited to enforce forest protection. Meanwhile, new roads are currently under construction, while companies are converting lowland forests into oil palm plantations. This case study predicts the effectiveness of RED in reducing deforestation and conserving orangutans for two distinct scenarios: the current plan of implementing RED within the specific boundary of a new upland PA, and an alternative scenario of implementing RED across landscapes outside PAs. Our satellite-based spatially explicit deforestation model predicts that 1313 km2 of forest would be saved from deforestation by 2030, while forest cover present in 2006 would shrink by 22% (7913 km2) across landscapes outside PAs if RED were only to be implemented in the upland PA. Meanwhile, orangutan habitat would reduce by 16% (1137 km2), resulting in the conservative loss of 1384 orangutans, or 25% of the current total population with or without RED intervention. By contrast, an estimated 7824 km2 of forest could be saved from deforestation, with maximum benefit for orangutan conservation, if RED were to be implemented across all remaining forest landscapes outside PAs. Here, RED payments would compensate land users for their opportunity costs in not converting unprotected forests into oil palm, while the construction of new roads to service the marketing of oil palm would be halted. Our predictions suggest that Indonesia's first RED initiative in an upland PA may not significantly reduce

  2. The future of forests and orangutans (Pongo abelii) in Sumatra: predicting impacts of oil palm plantations, road construction, and mechanisms for reducing carbon emissions from deforestation

    International Nuclear Information System (INIS)

    Gaveau, David L A; Leader-Williams, Nigel; Wich, Serge; Epting, Justin; Juhn, Daniel; Kanninen, Markku

    2009-01-01

    Payments for reduced carbon emissions from deforestation (RED) are now attracting attention as a way to halt tropical deforestation. Northern Sumatra comprises an area of 65 000 km2 that is both the site of Indonesia's first planned RED initiative, and the stronghold of 92% of remaining Sumatran orangutans. Under current plans, this RED initiative will be implemented in a defined geographic area, essentially a newly established, 7500 km2 protected area (PA) comprising mostly upland forest, where guards will be recruited to enforce forest protection. Meanwhile, new roads are currently under construction, while companies are converting lowland forests into oil palm plantations. This case study predicts the effectiveness of RED in reducing deforestation and conserving orangutans for two distinct scenarios: the current plan of implementing RED within the specific boundary of a new upland PA, and an alternative scenario of implementing RED across landscapes outside PAs. Our satellite-based spatially explicit deforestation model predicts that 1313 km2 of forest would be saved from deforestation by 2030, while forest cover present in 2006 would shrink by 22% (7913 km2) across landscapes outside PAs if RED were only to be implemented in the upland PA. Meanwhile, orangutan habitat would reduce by 16% (1137 km2), resulting in the conservative loss of 1384 orangutans, or 25% of the current total population with or without RED intervention. By contrast, an estimated 7824 km2 of forest could be saved from deforestation, with maximum benefit for orangutan conservation, if RED were to be implemented across all remaining forest landscapes outside PAs. Here, RED payments would compensate land users for their opportunity costs in not converting unprotected forests into oil palm, while the construction of new roads to service the marketing of oil palm would be halted. Our predictions suggest that Indonesia's first RED initiative in an upland PA may not significantly reduce

  3. Seismic prediction ahead of tunnel construction using Rayleigh-waves

    OpenAIRE

    Jetschny, Stefan; De Nil, Denise; Bohlen, Thomas

    2008-01-01

    To increase the safety and efficiency of tunnel construction, online seismic exploration ahead of a tunnel can become a valuable tool. We developed a new forward-looking seismic imaging technique, e.g., to determine weak and water-bearing zones ahead of the construction. Our approach is based on the excitation and registration of tunnel surface-waves. These waves are excited at the tunnel face behind the cutter head of a tunnel boring machine and travel in the drilling direction. Arriving at the fr...

  4. Nuclear power plant construction activity, 1986

    International Nuclear Information System (INIS)

    1987-01-01

    Cost estimates, chronological data on construction progress, and the physical characteristics of nuclear units in commercial operation and units in the construction pipeline as of December 31, 1986, are presented. This report, which is updated annually, was prepared to provide an overview of the nuclear power plant construction industry. The report contains information on the status of nuclear generating units, average construction costs and lead-times, and construction milestones for individual reactors

  5. Recursive prediction error methods for online estimation in nonlinear state-space models

    Directory of Open Access Journals (Sweden)

    Dag Ljungquist

    1994-04-01

    Full Text Available Several recursive algorithms for online, combined state and parameter estimation in nonlinear state-space models are discussed in this paper. Well-known algorithms such as the extended Kalman filter and alternative formulations of the recursive prediction error method are included, as well as a new method based on a line-search strategy. A comparison of the algorithms illustrates that they are very similar although the differences can be important for the online tracking capabilities and robustness. Simulation experiments on a simple nonlinear process show that the performance under certain conditions can be improved by including a line-search strategy.
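
    The recursion these methods share updates the parameter estimate from the one-step prediction error. A recursive least-squares flavour for a linear-in-parameters model, with invented data, gives the gist; the extended Kalman filter and the line-search variant discussed in the paper differ in how the gain is computed and in how nonlinearity is handled.

      import numpy as np

      rng = np.random.default_rng(5)
      theta_true = np.array([0.8, -0.3])
      theta, P = np.zeros(2), np.eye(2) * 100.0    # estimate and its covariance
      for _ in range(200):
          phi = rng.standard_normal(2)             # regressor
          y = phi @ theta_true + 0.05 * rng.standard_normal()
          err = y - phi @ theta                    # prediction error
          K = P @ phi / (1.0 + phi @ P @ phi)      # gain
          theta = theta + K * err
          P = P - np.outer(K, phi @ P)
      print(theta)                                 # approaches theta_true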

  6. 48 CFR 36.203 - Government estimate of construction costs.

    Science.gov (United States)

    2010-10-01

    ... personnel whose official duties require knowledge of the estimate. An exception to this rule may be made... necessary to arrive at a fair and reasonable price. The overall amount of the Government's estimate shall...

  7. Perfect and Periphrastic Passive Constructions in Danish

    DEFF Research Database (Denmark)

    Bjerre, Tavs; Bjerre, Anne

    2007-01-01

    This paper gives an account of the event and argument structure of past participles and the linking between argument structure and valence structure. It further accounts for how participles form perfect and passive constructions with auxiliaries. We assume that the same participle form is used... in both types of construction. Our claim is that the valence structure of a past participle is predictable from its semantic type, and that the valence structure predicts with which auxiliary a past participle combines in perfect constructions and whether the past participle may occur in passive...

  8. Earthquake prediction analysis based on empirical seismic rate: the M8 algorithm

    Science.gov (United States)

    Molchan, G.; Romashkova, L.

    2010-12-01

    The quality of space-time earthquake prediction is usually characterized by a 2-D error diagram (n, τ), where n is the fraction of failures-to-predict and τ is the local rate of alarm averaged in space. The most reasonable averaging measure for analysis of a prediction strategy is the normalized rate of target events λ(dg) in a subarea dg. In that case the quantity H = 1 - (n + τ) determines the prediction capability of the strategy. The uncertainty of λ(dg) causes difficulties in estimating H and the statistical significance, α, of prediction results. We investigate this problem theoretically and show how the uncertainty of the measure can be taken into account in two situations, viz., the estimation of α and the construction of a confidence zone for the (n, τ)-parameters of the random strategies. We use our approach to analyse the results from prediction of M >= 8.0 events by the M8 method for the period 1985-2009 (the M8.0+ test). The model of λ(dg) based on the events Mw >= 5.5, 1977-2004, and the magnitude range of target events 8.0 <= M < 8.5 are considered as basic to this M8 analysis. We find the point and upper estimates of α and show that they are still unstable because the number of target events in the experiment is small. However, our results argue in favour of non-triviality of the M8 prediction algorithm.
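
    The prediction-capability score is direct to compute once n and tau are in hand; an illustrative example:

      missed, total_targets = 2, 10
      n = missed / total_targets   # fraction of failures-to-predict
      tau = 0.30                   # alarm rate, weighted by the event measure
      H = 1.0 - (n + tau)
      print(H)                     # H > 0 means better than random guessing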

  9. Vertebral body spread in thoracolumbar burst fractures can predict posterior construct failure.

    Science.gov (United States)

    De Iure, Federico; Lofrese, Giorgio; De Bonis, Pasquale; Cultrera, Francesco; Cappuccio, Michele; Battisti, Sofia

    2018-06-01

    The load sharing classification (LSC) laid the foundations for a scoring system able to indicate which thoracolumbar fractures, after short-segment posterior-only fixation, would need longer instrumentation or additional anterior support. We analyzed surgically treated thoracolumbar fractures, quantifying the vertebral body's fragment displacement, with the aim of identifying a new parameter that could predict posterior-only construct failure. This is a retrospective cohort study from a single institution. One hundred twenty-one consecutive patients were surgically treated for thoracolumbar burst fractures. The grade of kyphosis correction (GKC) expressed the radiological outcome; the Oswestry Disability Index and visual analog scale were also considered. The 121 consecutive patients who underwent posterior fixation for unstable thoracolumbar burst fractures were retrospectively evaluated clinically and radiologically. Supplementary anterior fixation was performed in 34 cases with posterior instrumentation failure, determined on clinico-radiological evidence or symptomatic loss of kyphosis correction. The segmental kyphosis angle and GKC were calculated according to the Cobb method. The displacement of fracture fragments was obtained by subtracting the mean of the adjacent endplate areas from the area enclosed by the maximum contour of vertebral fragmentation. The "spread" was derived from the ratio between this difference and the mean of the adjacent endplate areas. Analysis of variance, Mann-Whitney tests, and receiver operating characteristic analysis were performed for statistical analysis. The authors report no conflict of interest concerning the materials or methods used in the present study or the findings specified in this paper. No funds or grants have been received for the present study. The spread proved to be a helpful quantitative measurement of vertebral body fragment displacement, easily reproducible with current computed tomography (CT) imaging technologies
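
    The spread measure follows directly from its definition above: the fragment-contour area minus the mean adjacent endplate area, normalised by that mean. A minimal sketch with invented areas (mm^2):

      def vertebral_spread(fragment_area, upper_endplate, lower_endplate):
          mean_endplate = (upper_endplate + lower_endplate) / 2.0
          return (fragment_area - mean_endplate) / mean_endplate

      print(round(vertebral_spread(1450.0, 1100.0, 1060.0), 3))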

  10. Parent- and Self-Reported Dimensions of Oppositionality in Youth: Construct Validity, Concurrent Validity, and the Prediction of Criminal Outcomes in Adulthood

    Science.gov (United States)

    Aebi, Marcel; Plattner, Belinda; Metzke, Christa Winkler; Bessler, Cornelia; Steinhausen, Hans-Christoph

    2013-01-01

    Background: Different dimensions of oppositional defiant disorder (ODD) have been found to be valid predictors of further mental health problems and antisocial behaviors in youth. The present study aimed at testing the construct, concurrent, and predictive validity of ODD dimensions derived from parent- and self-report measures. Method: Confirmatory…

  11. Can administrative health utilisation data provide an accurate diabetes prevalence estimate for a geographical region?

    Science.gov (United States)

    Chan, Wing Cheuk; Papaconstantinou, Dean; Lee, Mildred; Telfer, Kendra; Jo, Emmanuel; Drury, Paul L; Tobias, Martin

    2018-05-01

    To validate the New Zealand Ministry of Health (MoH) Virtual Diabetes Register (VDR) using longitudinal laboratory results and to develop an improved algorithm for estimating diabetes prevalence at a population level. The assigned diabetes status of individuals based on the 2014 version of the MoH VDR is compared to the diabetes status based on the laboratory results stored in the Auckland regional laboratory result repository (TestSafe) using the New Zealand diabetes diagnostic criteria. The existing VDR algorithm is refined by reviewing the sensitivity and positive predictive value of the each of the VDR algorithm rules individually and as a combination. The diabetes prevalence estimate based on the original 2014 MoH VDR was 17% higher (n = 108,505) than the corresponding TestSafe prevalence estimate (n = 92,707). Compared to the diabetes prevalence based on TestSafe, the original VDR has a sensitivity of 89%, specificity of 96%, positive predictive value of 76% and negative predictive value of 98%. The modified VDR algorithm has improved the positive predictive value by 6.1% and the specificity by 1.4% with modest reductions in sensitivity of 2.2% and negative predictive value of 0.3%. At an aggregated level the overall diabetes prevalence estimated by the modified VDR is 5.7% higher than the corresponding estimate based on TestSafe. The Ministry of Health Virtual Diabetes Register algorithm has been refined to provide a more accurate diabetes prevalence estimate at a population level. The comparison highlights the potential value of a national population long term condition register constructed from both laboratory results and administrative data. Copyright © 2018 Elsevier B.V. All rights reserved.

  12. Airborne sound transmission loss characteristics of wood-frame construction

    Science.gov (United States)

    Rudder, F. F., Jr.

    1985-03-01

    This report summarizes the available data on the airborne sound transmission loss properties of wood-frame construction and evaluates the methods for predicting the airborne sound transmission loss. The first part of the report comprises a summary of sound transmission loss data for wood-frame interior walls and floor-ceiling construction. Data bases describing the sound transmission loss characteristics of other building components, such as windows and doors, are discussed. The second part of the report presents the prediction of the sound transmission loss of wood-frame construction. Appropriate calculation methods are described both for single-panel and for double-panel construction with sound absorption material in the cavity. With available methods, single-panel construction and double-panel construction with the panels connected by studs may be adequately characterized. Technical appendices are included that summarize laboratory measurements, compare measurement with theory, describe details of the prediction methods, and present sound transmission loss data for common building materials.

  13. Hardness prediction of HAZ in temper bead welding by non-consistent layer technique

    International Nuclear Information System (INIS)

    Yu, Lina; Saida, Kazuyoshi; Mochizuki, Masahito; Kameyama, Masashi; Chigusa, Naoki; Nishimoto, Kazutoshi

    2014-01-01

    Based on an experimentally obtained hardness database, a neural network-based system for predicting the hardness of the heat-affected zone (HAZ) in temper bead welding by the Consistent Layer (CSL) technique was previously constructed by the authors. In practical operation, however, the CSL technique is sometimes difficult to perform because of the difficulty of controlling heat input precisely, and in such cases non-CSL techniques are mainly used in the actual repair process. Therefore, in the present study, a neural network-based hardness prediction system for the HAZ in temper bead welding by non-CSL techniques has been constructed through thermal cycle simplification, from an engineering point of view. The hardness distribution in the HAZ with non-CSL techniques was calculated based on thermal cycles obtained numerically by the finite element method. The experimental results show that the predicted hardness is in good accordance with the measured values. It follows that the proposed method is effective for estimating the tempering effect during temper bead welding by non-CSL techniques. (author)
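
    A minimal sketch of a neural-network hardness model of this general kind, assuming invented thermal-cycle features (peak temperature, t8/5 cooling time, tempering-cycle peak) and a synthetic hardness response; the paper's actual inputs, architecture and database are not reproduced here.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
# Synthetic cycles: peak temp (degC), t8/5 (s), tempering peak (degC)
X = rng.uniform([900, 5, 400], [1400, 60, 750], size=(200, 3))
hv = 450 - 0.15 * X[:, 2] + 0.5 * X[:, 1] + rng.normal(0, 5, 200)  # toy HV

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16, 16),
                                   max_iter=5000, random_state=0))
model.fit(X, hv)
print(model.predict([[1200.0, 20.0, 600.0]]))  # predicted HAZ hardness (HV)
```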

  14. Genomic prediction of complex human traits: relatedness, trait architecture and predictive meta-models

    Science.gov (United States)

    Spiliopoulou, Athina; Nagy, Reka; Bermingham, Mairead L.; Huffman, Jennifer E.; Hayward, Caroline; Vitart, Veronique; Rudan, Igor; Campbell, Harry; Wright, Alan F.; Wilson, James F.; Pong-Wong, Ricardo; Agakov, Felix; Navarro, Pau; Haley, Chris S.

    2015-01-01

    We explore the prediction of individuals' phenotypes for complex traits using genomic data. We compare several widely used prediction models, including Ridge Regression, LASSO and Elastic Nets estimated from cohort data, and polygenic risk scores constructed using published summary statistics from genome-wide association meta-analyses (GWAMA). We evaluate the interplay between relatedness, trait architecture and optimal marker density, by predicting height, body mass index (BMI) and high-density lipoprotein level (HDL) in two data cohorts, originating from Croatia and Scotland. We empirically demonstrate that dense models are better when all genetic effects are small (height and BMI) and target individuals are related to the training samples, while sparse models predict better in unrelated individuals and when some effects have moderate size (HDL). For HDL sparse models achieved good across-cohort prediction, performing similarly to the GWAMA risk score and to models trained within the same cohort, which indicates that, for predicting traits with moderately sized effects, large sample sizes and familial structure become less important, though still potentially useful. Finally, we propose a novel ensemble of whole-genome predictors with GWAMA risk scores and demonstrate that the resulting meta-model achieves higher prediction accuracy than either model on its own. We conclude that although current genomic predictors are not accurate enough for diagnostic purposes, performance can be improved without requiring access to large-scale individual-level data. Our methodologically simple meta-model is a means of performing predictive meta-analysis for optimizing genomic predictions and can be easily extended to incorporate multiple population-level summary statistics or other domain knowledge. PMID:25918167
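
    The ensemble idea reduces to treating the whole-genome prediction and the GWAMA risk score as two features and learning their combination on held-out individuals. A minimal sketch with simulated placeholder data (not the cohort data or the paper's exact meta-model) follows.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 1000
genomic_pred = rng.normal(size=n)                          # e.g. ridge/LASSO output
prs = 0.6 * genomic_pred + rng.normal(scale=0.8, size=n)   # summary-statistic score
phenotype = 0.5 * genomic_pred + 0.3 * prs + rng.normal(size=n)

# Meta-model: regress the phenotype on the two component predictors
meta = LinearRegression().fit(np.column_stack([genomic_pred, prs]), phenotype)
print("meta-model weights:", meta.coef_)
```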

  15. Estimating building energy consumption using extreme learning machine method

    International Nuclear Information System (INIS)

    Naji, Sareh; Keivani, Afram; Shamshirband, Shahaboddin; Alengaram, U. Johnson; Jumaat, Mohd Zamin; Mansor, Zulkefli; Lee, Malrey

    2016-01-01

    The current energy requirements of buildings comprise a large percentage of the total energy consumed around the world. The demand for energy, as well as for the construction materials used in buildings, is becoming increasingly problematic for the earth's sustainable future and has thus become a source of serious concern. The energy efficiency of buildings can be improved, and in order to do so their operational energy usage should be estimated early in the design phase, so that buildings are as sustainable as possible. An early energy estimate can greatly help architects and engineers create sustainable structures. This study proposes a novel method to estimate building energy consumption based on the ELM (Extreme Learning Machine) method. The method is applied to building material thicknesses and their thermal insulation capability (K-value). For this purpose, up to 180 simulations are carried out for different material thicknesses and insulation properties, using the EnergyPlus software application. The estimations and predictions obtained by the ELM model are compared with GP (genetic programming) and ANN (artificial neural network) models for accuracy. The simulation results indicate that an improvement in predictive accuracy is achievable with the ELM approach in comparison with GP and ANN. - Highlights: • Buildings consume huge amounts of energy for operation. • Envelope materials and insulation influence building energy consumption. • Extreme learning machine is used to estimate energy usage of a sample building. • The key effective factors in this study are insulation thickness and K-value.
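
    The ELM itself is simple enough to state in a few lines: a random, untrained hidden layer followed by a closed-form least-squares solve for the output weights. The sketch below mimics the study's two predictors (insulation thickness and K-value) with synthetic targets; all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.uniform([0.02, 0.02], [0.30, 2.0], size=(180, 2))  # thickness (m), K-value
y = 120 - 200 * X[:, 0] + 35 * X[:, 1] + rng.normal(0, 2, 180)  # toy kWh/m^2

n_hidden = 50
W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights (never trained)
b = rng.normal(size=n_hidden)
H = np.tanh(X @ W + b)                        # hidden-layer activations
beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # output weights in one solve

y_hat = H @ beta
print("training RMSE:", np.sqrt(np.mean((y - y_hat) ** 2)))
```

    The absence of iterative training of the hidden layer is what makes the ELM fast relative to back-propagated ANNs, at the cost of needing enough random hidden units.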

  16. Automatic performance estimation of conceptual temperature control system design for rapid development of real system

    International Nuclear Information System (INIS)

    Jang, Yu Jin

    2013-01-01

    This paper presents an automatic performance estimation scheme for a conceptual temperature control system with a multi-heater configuration, applied prior to constructing the physical system in order to achieve rapid validation of the conceptual design. An appropriate low-order discrete-time model, which will be used in the controller design, is constructed after determining several basic factors, including the geometric shape of the controlled object and heaters, material properties, heater arrangement, etc. The proposed temperature controller, which adopts the multivariable GPC (generalized predictive control) scheme with scale factors, is then constructed automatically based on the above model. The performance of the conceptual temperature control system is evaluated by using an FEM (finite element method) simulation combined with the controller.
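
    The first step, identifying a low-order discrete-time model, can be illustrated with a least-squares ARX fit on step-response data; the plant coefficients and noise below are invented, and the paper's multivariable GPC layer is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 300
u = np.ones(n); u[:20] = 0.0                   # heater power step input
y = np.zeros(n)
for k in range(1, n):                          # "true" first-order thermal plant
    y[k] = 0.95 * y[k - 1] + 0.08 * u[k - 1] + rng.normal(0, 0.01)

# Fit the low-order model y[k] = a*y[k-1] + b*u[k-1] by least squares
Phi = np.column_stack([y[:-1], u[:-1]])
a, b = np.linalg.lstsq(Phi, y[1:], rcond=None)[0]
print(f"identified model: y[k] = {a:.3f}*y[k-1] + {b:.3f}*u[k-1]")
```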

  17. Automatic performance estimation of conceptual temperature control system design for rapid development of real system

    Energy Technology Data Exchange (ETDEWEB)

    Jang, Yu Jin [Dongguk University, GyeongJu (Korea, Republic of)

    2013-07-15

    This paper presents an automatic performance estimation scheme for a conceptual temperature control system with a multi-heater configuration, applied prior to constructing the physical system in order to achieve rapid validation of the conceptual design. An appropriate low-order discrete-time model, which will be used in the controller design, is constructed after determining several basic factors, including the geometric shape of the controlled object and heaters, material properties, heater arrangement, etc. The proposed temperature controller, which adopts the multivariable GPC (generalized predictive control) scheme with scale factors, is then constructed automatically based on the above model. The performance of the conceptual temperature control system is evaluated by using an FEM (finite element method) simulation combined with the controller.

  18. Direction for the Estimation of Required Resources for Nuclear Power Plant Decommissioning based on BIM via Case Study

    Energy Technology Data Exchange (ETDEWEB)

    Jung, Insu [Korea Institute of Construction Technology, Goyang (Korea, Republic of); Kim, Woojung [KHNP-Central Research Institute, Daejeon (Korea, Republic of)

    2014-05-15

    Past approaches to estimating the resources required for decommissioning have carried great uncertainty, since they analyze required resources at the construction stage or rely on analyses of the decommissioning resource requirements of overseas nuclear power plants. As demand for efficient management and use of complex construction information has grown, so has demand for the introduction of Building Information Modeling (hereinafter BIM) technology. In cost estimation, considerable gains in the accuracy and reliability of predicted construction costs are expected from BIM's ability to take off quantities automatically using the attribute information in the BIM model. BIM-based estimation of required resources is more accurate than existing 2D-based estimation and offers further advantages, such as reviews of constructability and interference. It is therefore desirable to estimate the resources required for nuclear power plant decommissioning using BIM, together with tools compatible with the usual international and industrial standards. A review of Korean and overseas cases in which required resources were estimated using BIM shows that they dealt broadly with estimation of required resources, estimation of construction cost, and process management. Across these areas, various methodologies, classification systems, BIM tools, and feasibility tests have been used. Nonetheless, several problems have been reported; notably, although a BIM standard classification system exists, no case was found that actually used it. This means that no interlinking among the OBS (Object Breakdown Structure), WBS (Work Breakdown Structure) and CBS (Cost Breakdown Structure) was possible. Thus, for nuclear power plant decommissioning, the decommissioning method, process, and related elements must be defined clearly at the decommissioning strategy establishment stage, so that classification systems can be set up

  19. Direction for the Estimation of Required Resources for Nuclear Power Plant Decommissioning based on BIM via Case Study

    International Nuclear Information System (INIS)

    Jung, Insu; Kim, Woojung

    2014-01-01

    Past approaches to estimating the resources required for decommissioning have carried great uncertainty, since they analyze required resources at the construction stage or rely on analyses of the decommissioning resource requirements of overseas nuclear power plants. As demand for efficient management and use of complex construction information has grown, so has demand for the introduction of Building Information Modeling (hereinafter BIM) technology. In cost estimation, considerable gains in the accuracy and reliability of predicted construction costs are expected from BIM's ability to take off quantities automatically using the attribute information in the BIM model. BIM-based estimation of required resources is more accurate than existing 2D-based estimation and offers further advantages, such as reviews of constructability and interference. It is therefore desirable to estimate the resources required for nuclear power plant decommissioning using BIM, together with tools compatible with the usual international and industrial standards. A review of Korean and overseas cases in which required resources were estimated using BIM shows that they dealt broadly with estimation of required resources, estimation of construction cost, and process management. Across these areas, various methodologies, classification systems, BIM tools, and feasibility tests have been used. Nonetheless, several problems have been reported; notably, although a BIM standard classification system exists, no case was found that actually used it. This means that no interlinking among the OBS (Object Breakdown Structure), WBS (Work Breakdown Structure) and CBS (Cost Breakdown Structure) was possible. Thus, for nuclear power plant decommissioning, the decommissioning method, process, and related elements must be defined clearly at the decommissioning strategy establishment stage, so that classification systems can be set up

  20. How Do Different Aspects of Spatial Skills Relate to Early Arithmetic and Number Line Estimation?

    Directory of Open Access Journals (Sweden)

    Véronique Cornu

    2017-12-01

    The present study investigated the predictive role of spatial skills for arithmetic and number line estimation in kindergarten children (N = 125). Spatial skills are known to be related to mathematical development, but given the non-unitary nature of the construct, different aspects of spatial skills need to be differentiated. In the present study, a spatial orientation task, a spatial visualization task and a visuo-motor integration task were administered to assess three different aspects of spatial skills. Furthermore, we assessed counting abilities, knowledge of Arabic numerals, quantitative knowledge, as well as verbal working memory and verbal intelligence in kindergarten. Four months later, the same children performed an arithmetic and a number line estimation task to evaluate how the abilities measured at Time 1 predicted early mathematics outcomes. Hierarchical regression analysis revealed that children's performance in arithmetic was predicted by their performance on the spatial orientation and visuo-motor integration tasks, as well as by their knowledge of Arabic numerals. Performance in number line estimation was significantly predicted by the children's spatial orientation performance. Our findings emphasize the role of spatial skills, notably spatial orientation, in mathematical development. The relation between spatial orientation and arithmetic was partially mediated by the number line estimation task. Our results further show that some aspects of spatial skills might be more predictive of mathematical development than others, underlining the importance of differentiating within the construct of spatial skills when it comes to understanding numerical development.

  1. Auditing of suppliers as the requirement of quality management systems in construction

    Science.gov (United States)

    Harasymiuk, Jolanta; Barski, Janusz

    2017-07-01

    The choice of a supplier of construction materials can be an important factor in increasing or reducing the cost of building works. Construction materials account for 40 to 70% of an investment task, depending on the kind of works to be carried out. Suppliers must be assessed both from the point of view of the effectiveness of the construction undertaking and for conformity with the quality management systems implemented in the contractors' organizations. The assessment of suppliers of construction materials and subcontractors of specialist works is a formal requirement in quality management systems complying with the ISO 9001 standard. The aim of this paper is to show the possibilities of using an audit to assess the credibility and reliability of a supplier of construction materials. The article describes the kinds of audits carried out in quality management systems, with particular attention to so-called second-party audits. The criteria for assessing a supplier's quality capability and the method of selecting a supplier of construction materials are characterized. The paper also proposes exemplary questions to be assessed in the audit process, the way this assessment is conducted, and the conditions it depends on.

  2. A first look at roadheader construction and estimating techniques for site characterization at Yucca Mountain, Nevada

    International Nuclear Information System (INIS)

    Neil, D.M.; Taylor, D.L.

    1991-01-01

    The Yucca Mountain site characterization program will be based on mechanical excavation techniques for the mined repository construction and development. Tunnel Boring Machines (TBMs), Mobile Miners (MM), Raiseborers (RB), Blind Hole Shaft Boring Machines (BHSB), and Roadheaders (RH) have been selected as the mechanical excavation machines best suited to mine the densely welded and non-welded tuffs of the Topopah Springs and Calico Hills members. Heavy-duty RHs in the 70 to 100 ton class with 300 kW cutter motors have been evaluated, and formulas developed to predict machine performance based on the rock physical properties and the results of Linear Cutting Machine (LCM) tests done at the Colorado School of Mines (CSM) for Sandia National Labs (SNL).

  3. Seismic prediction ahead of tunnel constructions

    Science.gov (United States)

    Jetschny, S.; Bohlen, T.; Nil, D. D.; Giese, R.

    2007-12-01

    To increase the safety and efficiency of tunnel construction, online seismic exploration ahead of a tunnel can become a valuable tool. Within the OnSite project, funded by the BMBF (German Ministry of Education and Research) within GeoTechnologien, a new forward-looking seismic imaging technique is being developed to determine, for example, weak and water-bearing zones ahead of the construction. Our approach is based on the excitation and registration of tunnel surface waves. These waves are excited at the tunnel face behind the cutter head of a tunnel boring machine and travel in the drilling direction. Arriving at the front face, they generate body waves (mainly S-waves) propagating further ahead. Reflected S-waves are back-converted into tunnel surface waves. For a theoretical description of the conversion process, and for finding optimal acquisition geometries, it is important to study the propagation characteristics of tunnel surface waves. 3D seismic finite-difference modeling and analytic solutions of the wave equation in cylindrical coordinates revealed that at higher frequencies, i.e. if the tunnel diameter is significantly larger than the S-wavelength, these surface waves can be regarded as Rayleigh waves circulating the tunnel. At lower frequencies, i.e. when the S-wavelength approaches the tunnel diameter, the propagation characteristics of these surface waves are similar to those of S-waves. Field measurements performed by the GeoForschungsZentrum Potsdam, Germany at the Gotthard Base Tunnel (Switzerland) show both effects, i.e. the propagation of Rayleigh-wave-like and body-wave-like waves along the tunnel. To enhance our understanding of the excitation and propagation characteristics of tunnel surface waves, the transition from Rayleigh waves to tube waves is investigated both analytically and by numerical simulation.

  4. Online peak power prediction based on a parameter and state estimator for lithium-ion batteries in electric vehicles

    International Nuclear Information System (INIS)

    Pei, Lei; Zhu, Chunbo; Wang, Tiansi; Lu, Rengui; Chan, C.C.

    2014-01-01

    The goal of this study is to realize real-time predictions of the peak power/state of power (SOP) for lithium-ion batteries in electric vehicles (EVs). To allow the proposed method to be applicable to different temperature and aging conditions, a training-free battery parameter/state estimator is presented based on an equivalent circuit model using a dual extended Kalman filter (DEKF). In this estimator, the model parameters are no longer taken as functions of factors such as SOC (state of charge), temperature, and aging; instead, all parameters will be directly estimated under the present conditions, and the impact of the temperature and aging on the battery model will be included in the parameter identification results. Then, the peak power/SOP will be calculated using the estimated results under the given limits. As an improvement to the calculation method, a combined limit of current and voltage is proposed to obtain results that are more reasonable. Additionally, novel verification experiments are designed to provide the true values of the cells' peak power under various operating conditions. The proposed methods are implemented in experiments with LiFePO₄/graphite cells. The validating results demonstrate that the proposed methods have good accuracy and high adaptability. - Highlights: • A real-time peak power/SOP prediction method for lithium-ion batteries is proposed. • A training-free method based on DEKF is presented for parameter identification. • The proposed method can be applied to different temperature and aging conditions. • The calculation of peak power under the current and voltage limits is improved. • Validation experiments are designed to verify the accuracy of prediction results
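
    The combined current/voltage limit can be sketched with a simple Rint-type model (open-circuit voltage in series with a resistance): the discharge peak is bounded either by the maximum current or by the minimum terminal voltage, whichever binds first. Parameter values below are invented; in the paper they would come from the DEKF's online estimates.

```python
def peak_discharge_power(ocv: float, r: float, i_max: float, v_min: float) -> float:
    """Peak discharge power (W) under a combined current and voltage limit."""
    i_v_limit = (ocv - v_min) / r        # current at which V hits its floor
    i_peak = min(i_max, i_v_limit)       # the tighter of the two limits binds
    v_at_peak = ocv - r * i_peak
    return v_at_peak * i_peak

# Hypothetical cell: OCV 3.3 V, 2 mOhm, 200 A max, 2.5 V cutoff
print(peak_discharge_power(ocv=3.3, r=0.002, i_max=200.0, v_min=2.5))  # ~580 W
```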

  5. Collective estimation of multiple bivariate density functions with application to angular-sampling-based protein loop modeling

    KAUST Repository

    Maadooliat, Mehdi

    2015-10-21

    This paper develops a method for simultaneous estimation of density functions for a collection of populations of protein backbone angle pairs using a data-driven, shared basis that is constructed by bivariate spline functions defined on a triangulation of the bivariate domain. The circular nature of angular data is taken into account by imposing appropriate smoothness constraints across boundaries of the triangles. Maximum penalized likelihood is used to fit the model and an alternating blockwise Newton-type algorithm is developed for computation. A simulation study shows that the collective estimation approach is statistically more efficient than estimating the densities individually. The proposed method was used to estimate neighbor-dependent distributions of protein backbone dihedral angles (i.e., Ramachandran distributions). The estimated distributions were applied to protein loop modeling, one of the most challenging open problems in protein structure prediction, by feeding them into an angular-sampling-based loop structure prediction framework. Our estimated distributions compared favorably to the Ramachandran distributions estimated by fitting a hierarchical Dirichlet process model; and in particular, our distributions showed significant improvements on the hard cases where existing methods do not work well.

  6. Collective estimation of multiple bivariate density functions with application to angular-sampling-based protein loop modeling

    KAUST Repository

    Maadooliat, Mehdi; Zhou, Lan; Najibi, Seyed Morteza; Gao, Xin; Huang, Jianhua Z.

    2015-01-01

    This paper develops a method for simultaneous estimation of density functions for a collection of populations of protein backbone angle pairs using a data-driven, shared basis that is constructed by bivariate spline functions defined on a triangulation of the bivariate domain. The circular nature of angular data is taken into account by imposing appropriate smoothness constraints across boundaries of the triangles. Maximum penalized likelihood is used to fit the model and an alternating blockwise Newton-type algorithm is developed for computation. A simulation study shows that the collective estimation approach is statistically more efficient than estimating the densities individually. The proposed method was used to estimate neighbor-dependent distributions of protein backbone dihedral angles (i.e., Ramachandran distributions). The estimated distributions were applied to protein loop modeling, one of the most challenging open problems in protein structure prediction, by feeding them into an angular-sampling-based loop structure prediction framework. Our estimated distributions compared favorably to the Ramachandran distributions estimated by fitting a hierarchical Dirichlet process model; and in particular, our distributions showed significant improvements on the hard cases where existing methods do not work well.

  7. Quality assessment for recycling aggregates from construction and demolition waste: An image-based approach for particle size estimation.

    Science.gov (United States)

    Di Maria, Francesco; Bianconi, Francesco; Micale, Caterina; Baglioni, Stefano; Marionni, Moreno

    2016-02-01

    The size distribution of aggregates has direct and important effects on fundamental properties of construction materials such as workability, strength and durability. The size distribution of aggregates from construction and demolition waste (C&D) is one of the parameters which determine the degree of recyclability and therefore the quality of such materials. Unfortunately, standard methods like sieving or laser diffraction can be either very time consuming (sieving) or possible only in laboratory conditions (laser diffraction). As an alternative we propose and evaluate the use of image analysis to estimate the size distribution of aggregates from C&D in a fast yet accurate manner. The effectiveness of the procedure was tested on aggregates generated by an existing C&D mechanical treatment plant. Experimental comparison with manual sieving showed agreement in the range 81-85%. The proposed technique demonstrated potential for being used on on-line systems within mechanical treatment plants of C&D. Copyright © 2015 Elsevier Ltd. All rights reserved.
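
    A minimal sketch of the image-analysis idea, assuming a placeholder file name, threshold and pixel calibration: segment particles and report equivalent-circle diameters. The published procedure is more elaborate (lighting, calibration, separation of touching particles), so this is illustrative only.

```python
import cv2
import numpy as np

img = cv2.imread("aggregates.png", cv2.IMREAD_GRAYSCALE)  # placeholder image
_, mask = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

mm_per_px = 0.1  # hypothetical spatial calibration
diameters = [2.0 * np.sqrt(cv2.contourArea(c) / np.pi) * mm_per_px
             for c in contours if cv2.contourArea(c) > 20]  # drop specks
print(f"{len(diameters)} particles, median size {np.median(diameters):.1f} mm")
```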

  8. A Collision Risk Model to Predict Avian Fatalities at Wind Facilities: An Example Using Golden Eagles, Aquila chrysaetos.

    Science.gov (United States)

    New, Leslie; Bjerre, Emily; Millsap, Brian; Otto, Mark C; Runge, Michael C

    2015-01-01

    Wind power is a major candidate in the search for clean, renewable energy. Beyond the technical and economic challenges of wind energy development are environmental issues that may restrict its growth. Avian fatalities due to collisions with rotating turbine blades are a leading concern and there is considerable uncertainty surrounding avian collision risk at wind facilities. This uncertainty is not reflected in many models currently used to predict the avian fatalities that would result from proposed wind developments. We introduce a method to predict fatalities at wind facilities, based on pre-construction monitoring. Our method can directly incorporate uncertainty into the estimates of avian fatalities and can be updated if information on the true number of fatalities becomes available from post-construction carcass monitoring. Our model considers only three parameters: hazardous footprint, bird exposure to turbines and collision probability. By using a Bayesian analytical framework we account for uncertainties in these values, which are then reflected in our predictions and can be reduced through subsequent data collection. The simplicity of our approach makes it accessible to ecologists concerned with the impact of wind development, as well as to managers, policy makers and industry interested in its implementation in real-world decision contexts. We demonstrate the utility of our method by predicting golden eagle (Aquila chrysaetos) fatalities at a wind installation in the United States. Using pre-construction data, we predicted 7.48 eagle fatalities year⁻¹ (95% CI: (1.1, 19.81)). The U.S. Fish and Wildlife Service uses the 80th quantile (11.0 eagle fatalities year⁻¹) in their permitting process to ensure there is only a 20% chance a wind facility exceeds the authorized fatalities. Once data were available from two-years of post-construction monitoring, we updated the fatality estimate to 4.8 eagle fatalities year⁻¹ (95% CI: (1.76, 9.4); 80th quantile, 6
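
    A Monte Carlo rendering of the three-parameter structure (fatalities = exposure × hazardous footprint × collision probability) makes the propagation of uncertainty concrete. The priors below are invented for illustration; the paper fits these quantities in a full Bayesian framework and updates them with post-construction carcass counts.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
exposure = rng.gamma(shape=4.0, scale=25.0, size=n)        # bird exposure, toy units
footprint = rng.normal(0.5, 0.1, size=n).clip(min=0.0)     # hazardous footprint, toy
p_collision = rng.beta(2, 18, size=n)                      # collision given exposure

fatalities = exposure * footprint * p_collision            # draws of annual fatalities
print("median:", np.median(fatalities))
print("80th percentile (permitting quantile):", np.quantile(fatalities, 0.8))
```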

  9. Partial Correlation Matrix Estimation using Ridge Penalty Followed by Thresholding and Reestimation

    Science.gov (United States)

    2014-01-01

    Motivated by the problem of constructing gene co-expression networks, we propose a statistical framework for estimating a high-dimensional partial correlation matrix by a three-step approach. We first obtain a penalized estimate of the partial correlation matrix using a ridge penalty. Next we select the non-zero entries of the partial correlation matrix by hypothesis testing. Finally we re-estimate the partial correlation coefficients at these non-zero entries. In the second step, the null distribution of the test statistics derived from penalized partial correlation estimates has not been established. We address this challenge by estimating the null distribution from the empirical distribution of the test statistics of all the penalized partial correlation estimates. Extensive simulation studies demonstrate the good performance of our method. Application to a yeast cell cycle gene expression data set shows that our method delivers better predictions of the protein-protein interactions than the Graphical Lasso. PMID:24845967
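
    A condensed sketch of the first step plus a selection step: a ridge-type regularized precision matrix converted to partial correlations, then thresholded. The hard threshold below stands in for the paper's hypothesis-testing step against an empirically estimated null, and the final re-estimation step is omitted; data are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(100, 30))                        # n samples, p variables
S = np.cov(X, rowvar=False)
lam = 0.5
omega = np.linalg.inv(S + lam * np.eye(S.shape[0]))   # ridge-regularized precision

d = np.sqrt(np.diag(omega))
pcor = -omega / np.outer(d, d)                        # partial correlations
np.fill_diagonal(pcor, 1.0)

edges = np.abs(pcor) > 0.2                            # placeholder for the test step
print("selected edges:", (edges.sum() - edges.shape[0]) // 2)
```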

  10. Estimation of available global solar radiation using sunshine duration over South Korea

    Science.gov (United States)

    Das, Amrita; Park, Jin-ki; Park, Jong-hwa

    2015-11-01

    Accurate insolation data are a key input not only for designing solar energy systems but also for many biological and atmospheric studies. However, solar radiation stations are not widely available due to financial and technical limitations, and this insufficient number limits the spatial resolution whenever an attempt is made to construct a solar radiation map. Several models in the literature estimate incoming solar radiation from the sunshine fraction. Seventeen such models, 6 linear and 11 non-linear, were chosen for studying and estimating solar radiation on a horizontal surface over South Korea. The better performance of a non-linear model reflects the fact that the relationship between sunshine duration and clearness index does not follow a straight line. With such a model, solar radiation over 79 stations measuring sunshine duration is computed and used as input for spatial interpolation. Finally, monthly solar radiation maps are constructed using the Ordinary Kriging method. The cross-validation results show good agreement between observed and predicted data.
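
    The linear family of such models is the classic Angström-Prescott form, H/H0 = a + b(S/S0); non-linear variants add polynomial or other terms in the sunshine fraction S/S0. The coefficients in the sketch below are the common textbook defaults, not the values fitted for South Korea.

```python
def global_radiation(h0_extraterrestrial: float, s_over_s0: float,
                     a: float = 0.25, b: float = 0.50) -> float:
    """Angstrom-Prescott estimate of daily global radiation on a horizontal
    surface (same units as H0), from the sunshine fraction S/S0."""
    return h0_extraterrestrial * (a + b * s_over_s0)

# Hypothetical inputs: H0 = 30 MJ m^-2 day^-1, 60% of possible sunshine hours
print(global_radiation(30.0, 0.6))  # -> 16.5 MJ m^-2 day^-1
```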

  11. NUMERICAL AND ANALYTIC METHODS OF ESTIMATION BRIDGES’ CONSTRUCTIONS

    Directory of Open Access Journals (Sweden)

    Y. Y. Luchko

    2010-03-01

    In this article, numerical and analytical methods for calculating the stress-strain state of bridge structures are considered. The task of increasing the reliability and accuracy of the numerical method, and its solution by means of calculations in two bases, is formulated. The analytical solution of the differential equation describing the deformation of a reinforced-concrete slab under local loads is also obtained.

  12. A Model of Contextual Motivation in Physical Education: Using Constructs from Self-Determination and Achievement Goal Theories To Predict Physical Activity Intentions.

    Science.gov (United States)

    Standage, Martyn; Duda, Joan L.; Ntoumanis, Nikos

    2003-01-01

    Examines a study of student motivation in physical education that incorporated constructs from achievement goal and self-determination theories. Self-determined motivation was found to be a positive predictor, and amotivation a negative predictor, of leisure-time physical activity intentions. (Contains 86 references and 3 tables.) (GCP)

  13. A Model Suggestion to Predict Leverage Ratio for Construction Projects

    Directory of Open Access Journals (Sweden)

    Özlem Tüz

    2013-12-01

    By its nature, construction is an industry with high uncertainty and risk, and it operates with high leverage ratios. Firms with little equity take on large projects through the progress-payment system, but in that case even a small shortfall in the planned cash flows constitutes a major risk for the company. Leverage makes it possible to pursue large-scale profit targets with a small investment, but it also carries high risk: investors may lose all or part of their money. This study aims to monitor and measure the leverage ratio in the face of displacements in the cash inflows of construction projects, which do business in the sector with high leverage and little cash. The model reveals the cash need that arises when cash inflows drift: little capital is required in the early stages of a project, but a rapidly growing capital need emerges in the later stages. The values obtained from the model may be used to supply the required capital at the right time, by anticipating the risks caused by delays in the cash flow of construction projects that use high leverage ratios.

  14. Prediction of elastic and acoustic behaviors of calcarenite used for construction of historical monuments of Rabat, Morocco

    Directory of Open Access Journals (Sweden)

    Abdelaali Rahmouni

    2017-02-01

    Natural materials (e.g. rocks and soils) are porous media whose microstructures present a wide diversity. They generally consist of a heterogeneous solid phase and a porous phase which may be fully or partially saturated with one or more fluids. The prediction of the elastic and acoustic properties of porous materials is very important in many fields, such as rock physics, reservoir geophysics, civil engineering, construction, and the study of the behavior of historical monuments. The aim of this work is to predict the elastic and acoustic behaviors of isotropic porous materials consisting of a solid matrix containing dry, saturated and partially saturated spherical pores. To this end, a homogenization technique based on the Mori–Tanaka model is presented to connect the elastic and acoustic properties to porosity and degree of water saturation. A non-destructive ultrasonic technique is used to determine the elastic properties from measurements of P-wave velocities. The results obtained show the influence of porosity and degree of water saturation on the effective properties. The various predictions of the Mori–Tanaka model are then compared with experimental results for the elastic and acoustic properties of calcarenite.
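
    For the dry-pore case, the standard Mori–Tanaka result for spherical voids in an isotropic matrix has a closed form, from which a P-wave velocity follows. The sketch below uses illustrative calcite-like matrix moduli and density, not the calcarenite data of the paper.

```python
import math

def mori_tanaka_dry(k_m: float, mu_m: float, phi: float) -> tuple[float, float]:
    """Effective bulk and shear moduli (same units as inputs) at porosity phi,
    Mori-Tanaka scheme with spherical dry pores in an isotropic matrix."""
    k_eff = 4.0 * k_m * mu_m * (1.0 - phi) / (4.0 * mu_m + 3.0 * k_m * phi)
    beta = 6.0 * (k_m + 2.0 * mu_m) / (5.0 * (3.0 * k_m + 4.0 * mu_m))
    mu_eff = mu_m * (1.0 - phi) * (1.0 - beta) / ((1.0 - phi) * (1.0 - beta) + phi)
    return k_eff, mu_eff

k, mu, rho, phi = 70e9, 30e9, 2710.0, 0.25     # Pa, Pa, kg/m^3, porosity (toy values)
k_e, mu_e = mori_tanaka_dry(k, mu, phi)
rho_e = rho * (1.0 - phi)                      # effective density of the dry rock
vp = math.sqrt((k_e + 4.0 * mu_e / 3.0) / rho_e)
print(f"Vp ~ {vp:.0f} m/s at {phi:.0%} porosity")
```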

  15. Economic assessment of the construction industry: A construction-economics nexus

    Science.gov (United States)

    Barber, Herbert Marion, Jr.

    The purpose of this study was to conduct an economic assessment of the construction industry. More specifically, this study addresses ambiguities within the literature that are associated with the construction-economics nexus. The researcher 1) investigated the relationships between economic indicators and stock prices of U.S. construction equipment manufacturers, 2) investigated the relationships between energy production, consumption, and corruption, and 3) determined the economic effect electricity generation and electricity consumption have on economies of scale. The researcher used descriptive and inferential statistics in this study and determined that economists, researchers, policy-makers, and others should have predicted the 2007-08 world economic collapse 5-6 years prior to realization of the event, given that construction indices and GDP grossly regressed from statistically acceptable trends as early as 2002 and perhaps 2000. Substantiating this claim, the effect of the cost of construction materials and labor, i.e. the construction index, on GDP was significant for the years leading up to the collapse (1970-2007). Additionally, it was determined that energy production and consumption are predictors of governmental corruption in some countries. In the Republic of Botswana, for example, the researcher determined that energy production and consumption jointly had a statistically significant effect on governmental corruption. In addition to determining statistical effect, a model for predicting governmental corruption was developed based on energy production and consumption volumes. The researcher also found that electricity generation in the 25 largest world economies had a statistically significant effect on GDP. Electricity consumption had an effect on GDP as well, but not on other economic indicators. More importantly than the quantitative findings, the researcher concluded that the construction-economics nexus is far more complex than most policy-makers realize. As such

  16. Waterworks are not fully utilized, more are to be constructed

    International Nuclear Information System (INIS)

    Zackova, K.

    2003-01-01

    The author discusses the intention of the Ministry of Land Management to build the Slatinka water dam and the Garajky, Hroncek and Tichy Potok water basins. The report on the construction of water basins forecasts that in 2015 there will be a shortfall in water capacity of 1910 dm³/s in Eastern Slovakia. The conception assumes an average specific water demand of at least 330 dm³ per inhabitant per day; plans from the 1980s predicted a consumption of 430 dm³ in 1990. However, demand for drinking water has been falling since the beginning of the 1990s and has dropped by 30-40 per cent over the last decade. According to the water management office, approximately 15 billion Slovak crowns are needed for the construction of the four new water dams. Cost estimates for the structures already built proved very inaccurate: the Zilina water dam, for example, was supposed to cost 4.2 billion Slovak crowns but has so far cost 13.2 billion and is still not complete. The author notes that the water basins already built are used at only 15-40% of capacity. Support for the construction of water basins from European Union funds is under consideration. Consumption and production of drinking water in the Slovak Republic are presented

  17. Addendum to the article: Misuse of null hypothesis significance testing: Would estimation of positive and negative predictive values improve certainty of chemical risk assessment?

    Science.gov (United States)

    Bundschuh, Mirco; Newman, Michael C; Zubrod, Jochen P; Seitz, Frank; Rosenfeldt, Ricki R; Schulz, Ralf

    2015-03-01

    We argued recently that the positive predictive value (PPV) and the negative predictive value (NPV) are valuable metrics to include during null hypothesis significance testing: they inform the researcher about the probability of statistically significant and non-significant test outcomes actually being true. Although commonly misunderstood, a reported p value estimates only the probability of obtaining the results, or more extreme results, if the null hypothesis of no effect were true. Calculations of the more informative PPV and NPV require an a priori estimate of the probability (R). The present document discusses the challenges of estimating R.
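
    For reference, if R is read as the a priori probability that the tested effect is real (a simplifying assumption; the article discusses the difficulty of estimating R), with significance level α and power 1 − β, both predictive values follow directly from Bayes' rule:

```latex
\mathrm{PPV} = \frac{(1-\beta)\,R}{(1-\beta)\,R + \alpha\,(1-R)},
\qquad
\mathrm{NPV} = \frac{(1-\alpha)\,(1-R)}{(1-\alpha)\,(1-R) + \beta\,R}.
```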

  18. Is The Ca + K + Mg/Al Ratio in the Soil Solution a Predictive Tool for Estimating Forest Damage?

    International Nuclear Information System (INIS)

    Goeransson, A.; Eldhuset, T. D.

    2001-01-01

    The ratio between (Ca + K + Mg) and Al in nutrient solution has been suggested as a predictive tool for estimating tree growth disturbance. However, the ratio is unspecific in the sense that it is based on several elements which are all essential for plant growth; each of these may be growth-limiting. Furthermore, aluminium retards growth at higher concentrations. It is therefore difficult to give causal and objective biological explanations for possible growth disturbances. The importance of the proportion of base cations to N, at a fixed base-cation/Al ratio, is evaluated with regard to growth of Picea abies. The uptake of elements was found to be selective; nutrients were taken up while most Al remained in solution. Biomass partitioning to the roots increased after aluminium addition with low proportions of base cations to nitrogen. We conclude that the low growth rates depend on nutrient limitation in these treatments. Low growth rates in the high-proportion experiments may be explained by high internal Al concentrations. The results strongly suggest that growth rate is not correlated with the ratio in the rooting medium and question the validity of using ratios as predictive tools for estimating forest damage. We suggest that growth limitation of Picea abies in the field may depend on low proportions of base cations to nitrate. It is therefore important to know the nutritional status of the plant material in relation to the growth potential and environmental limitation to be able to predict and estimate forest damage

  19. Earthquake Prediction Analysis Based on Empirical Seismic Rate: The M8 Algorithm

    International Nuclear Information System (INIS)

    Molchan, G.; Romashkova, L.

    2010-07-01

    The quality of space-time earthquake prediction is usually characterized by a two-dimensional error diagram (n,τ), where n is the rate of failures-to-predict and τ is the normalized measure of space-time alarm. The most reasonable space measure for analysis of a prediction strategy is the rate of target events λ(dg) in a sub-area dg. In that case the quantity H = 1-(n +τ) determines the prediction capability of the strategy. The uncertainty of λ(dg) causes difficulties in estimating H and the statistical significance, α, of prediction results. We investigate this problem theoretically and show how the uncertainty of the measure can be taken into account in two situations, viz., the estimation of α and the construction of a confidence zone for the (n,τ)-parameters of the random strategies. We use our approach to analyse the results from prediction of M ≥ 8.0 events by the M8 method for the period 1985-2009 (the M8.0+ test). The model of λ(dg) based on the events Mw ≥ 5.5, 1977-2004, and the magnitude range of target events 8.0 ≤ M < 8.5 are considered as basic to this M8 analysis. We find the point and upper estimates of α and show that they are still unstable because the number of target events in the experiment is small. However, our results argue in favour of non-triviality of the M8 prediction algorithm. (author)

  20. Electron band theory predictions and the construction of phase diagrams

    International Nuclear Information System (INIS)

    Watson, R.E.; Bennett, L.H.; Davenport, J.W.; Weinert, M.

    1985-01-01

    The a priori theory of metals is yielding energy results which are relevant to the construction of phase diagrams - to the solution phases as well as to line compounds. There is a wide range in the rigor of the calculations currently being done and this is discussed. Calculations for the structural stabilities (fcc vs bcc vs hcp) of the elemental metals, quantities which are employed in the constructs of the terminal phases, are reviewed and shown to be inconsistent with the values currently employed in such constructs (also see Miodownik elsewhere in this volume). Finally, as an example, the calculated heats of formation are compared with experiment for PtHf, IrTa and OsW, three compounds with the same electron to atom ratio but different bonding properties

  1. Reliability of different mark-recapture methods for population size estimation tested against reference population sizes constructed from field data.

    Directory of Open Access Journals (Sweden)

    Annegret Grimm

    Reliable estimates of population size are fundamental in many ecological studies and biodiversity conservation. Selecting appropriate methods to estimate abundance is often very difficult, especially if data are scarce. Most studies concerning the reliability of different estimators used simulation data based on assumptions about capture variability that do not necessarily reflect conditions in natural populations. Here, we used data from an intensively studied closed population of the arboreal gecko Gehyra variegata to construct reference population sizes for assessing twelve different population size estimators in terms of bias, precision, accuracy, and their 95%-confidence intervals. Two of the reference populations reflect natural biological entities, whereas the other reference populations reflect artificial subsets of the population. Since individual heterogeneity was assumed, we tested modifications of the Lincoln-Petersen estimator, a set of models in programs MARK and CARE-2, and a truncated geometric distribution. Ranking of methods was similar across criteria. Models accounting for individual heterogeneity performed best in all assessment criteria. For populations from heterogeneous habitats without obvious covariates explaining individual heterogeneity, we recommend using the moment estimator or the interpolated jackknife estimator (both implemented in CAPTURE/MARK). If data for capture frequencies are substantial, we recommend the sample coverage or the estimating equation (both models implemented in CARE-2). Depending on the distribution of catchabilities, our proposed multiple Lincoln-Petersen and a truncated geometric distribution obtained comparably good results. The former usually resulted in a minimum population size and the latter can be recommended when there is a long tail of low capture probabilities. Models with covariates and mixture models performed poorly. Our approach identified suitable methods and extended options to
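
    For orientation, the baseline two-sample estimator that the tested modifications build on is usually applied in its bias-corrected (Chapman) form; the counts below are invented.

```python
def chapman_estimate(marked_first: int, caught_second: int, recaptured: int) -> float:
    """Closed-population size estimate from a two-sample mark-recapture study
    (Chapman's bias-corrected Lincoln-Petersen estimator)."""
    return ((marked_first + 1) * (caught_second + 1)) / (recaptured + 1) - 1

# 60 animals marked, 55 caught in the second sample, 20 of them recaptured
print(chapman_estimate(marked_first=60, caught_second=55, recaptured=20))  # ~161.7
```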

  2. Estimating Remineralized Phosphate and Its Remineralization Rate in the Northern East China Sea During Summer 1997: A Snapshot Study Before Three-Gorges Dam Construction

    Directory of Open Access Journals (Sweden)

    Hyun-Cheol Kim

    2016-01-01

    The northern East China Sea (a.k.a. "The South Sea") is a dynamic zone in which the marine ecosystem experiences a variety of effects due to Three-Gorges Dam construction. As the northern East China Sea region is vulnerable to climate forcing and anthropogenic impacts, it is important to investigate how the remineralization rate in the northern East China Sea has changed in response to such external forcing. We used an historical hydrographic dataset from August 1997 to obtain a baseline for future comparison. We estimate the amount of remineralized phosphate by decomposing the effects of physical mixing and biogeochemical processes using water column measurements (temperature, salinity, and phosphate). The estimated remineralized phosphate column inventory ranged from 0.8 to 42.4 mmol P m⁻² (mean value of 15.2 ± 12.0 mmol P m⁻²). Our results suggest that the Tsushima Warm Current was a strong contributor to primary production during the summer of 1997 in the study area. The estimated summer (June - August) remineralization rate in the region before Three-Gorges Dam construction was 18 ± 14 mmol C m⁻² d⁻¹.

  3. A BIM-based system for demolition and renovation waste estimation and planning

    International Nuclear Information System (INIS)

    Cheng, Jack C.P.; Ma, Lauren Y.H.

    2013-01-01

    Highlights: ► We developed a waste estimation system leveraging the BIM technology. ► The system can calculate waste disposal charging fees and pick-up truck demand. ► We presented an example scenario demonstrating this system. ► Automatic, time-saving and wide applicability are the features of the system. - Abstract: Due to the rising worldwide awareness of the green environment, both governments and contractors have to consider effective construction and demolition (C&D) waste management practices. The last two decades have witnessed the growing importance of demolition and renovation (D&R) works and the growing amount of D&R waste disposed to landfills every day, especially in developed cities like Hong Kong. Quantitative waste prediction is crucial for waste management. It can enable contractors to pinpoint critical waste generation processes and to plan waste control strategies. In addition, waste estimation could also facilitate some government waste management policies, such as the waste disposal charging scheme in Hong Kong. Currently, tools that can accurately and conveniently estimate the amount of waste from construction, renovation, and demolition projects are lacking. In the light of this research gap, this paper presents a building information modeling (BIM) based system that we have developed for estimation and planning of D&R waste. BIM allows multi-disciplinary information to be superimposed within one digital building model. Our system can extract material and volume information through the BIM model and integrate the information for detailed waste estimation and planning. Waste recycling and reuse are also considered in our system. Extracted material information can be provided to recyclers before demolition or renovation to make the recycling stage more cooperative and more efficient. Pick-up truck requirements and waste disposal charging fees for different waste facilities will also be predicted through our system. The results
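
    A back-of-envelope version of the planning outputs described above: given material volumes extracted from a BIM model, compute disposal charges and truck trips. The densities, fee rates and truck capacity below are placeholders, not Hong Kong's actual charging scheme.

```python
# Material volumes as they might be extracted from a BIM model (m^3)
volumes_m3 = {"concrete": 120.0, "brick": 35.0, "timber": 12.0}
density_t_per_m3 = {"concrete": 2.4, "brick": 1.9, "timber": 0.6}   # assumed
fee_per_tonne = {"concrete": 71.0, "brick": 71.0, "timber": 200.0}  # toy rates
truck_capacity_m3 = 9.0                                             # assumed

total_fee = sum(volumes_m3[m] * density_t_per_m3[m] * fee_per_tonne[m]
                for m in volumes_m3)
trips = -(-sum(volumes_m3.values()) // truck_capacity_m3)  # ceiling division
print(f"disposal fee ~ {total_fee:,.0f}, truck trips ~ {int(trips)}")
```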

  4. A BIM-based system for demolition and renovation waste estimation and planning

    Energy Technology Data Exchange (ETDEWEB)

    Cheng, Jack C.P., E-mail: cejcheng@ust.hk [Department of Civil and Environmental Engineering, The Hong Kong University of Science and Technology (Hong Kong); Ma, Lauren Y.H., E-mail: yingzi@ust.hk [Department of Civil and Environmental Engineering, The Hong Kong University of Science and Technology (Hong Kong)

    2013-06-15

    Highlights: ► We developed a waste estimation system leveraging the BIM technology. ► The system can calculate waste disposal charging fees and pick-up truck demand. ► We presented an example scenario demonstrating this system. ► Automatic, time-saving and wide applicability are the features of the system. - Abstract: Due to the rising worldwide awareness of the green environment, both governments and contractors have to consider effective construction and demolition (C&D) waste management practices. The last two decades have witnessed the growing importance of demolition and renovation (D&R) works and the growing amount of D&R waste disposed to landfills every day, especially in developed cities like Hong Kong. Quantitative waste prediction is crucial for waste management. It can enable contractors to pinpoint critical waste generation processes and to plan waste control strategies. In addition, waste estimation could also facilitate some government waste management policies, such as the waste disposal charging scheme in Hong Kong. Currently, tools that can accurately and conveniently estimate the amount of waste from construction, renovation, and demolition projects are lacking. In the light of this research gap, this paper presents a building information modeling (BIM) based system that we have developed for estimation and planning of D&R waste. BIM allows multi-disciplinary information to be superimposed within one digital building model. Our system can extract material and volume information through the BIM model and integrate the information for detailed waste estimation and planning. Waste recycling and reuse are also considered in our system. Extracted material information can be provided to recyclers before demolition or renovation to make the recycling stage more cooperative and more efficient. Pick-up truck requirements and waste disposal charging fees for different waste facilities will also be predicted through our system. The results

  5. A Prediction on the Unit Cost Estimation for Decommissioning Activities Using the Experienced Data from DECOMMIS

    Energy Technology Data Exchange (ETDEWEB)

    Park, Seung Kook; Park, Hee Seong; Choi, Yoon Dong; Song, Chan Ho; Moon, Jei Kwon [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2014-05-15

    The KAERI (Korea Atomic Energy Research Institute) has developed the DECOMMIS (Decommissioning Information Management System) and applied it to the decommissioning project of the KRR (Korea Research Reactor)-1 and 2 and the UCP (Uranium Conversion Plant), the first decommissioning project in Korea. All information and data from the decommissioning activities are entered, stored, output and managed in DECOMMIS. The system consists of a web server and a database server. Users access it through a web page for input, processing and output, and permissions for such activities can be modified once the initial system-wide data generated by the decommissioning activities have been stored. When the experience data from DECOMMIS are used, cost estimation for the decommissioning planning of new facilities can be established within the basic frame of the WBS structures and their codes. In this paper, the prediction of cost estimates using the experience data stored in DECOMMIS was studied. For future decommissioning projects of nuclear facilities, this paper proposes cost estimation based on the experience data, namely the WBS codes, unit-work productivity factors and annual governmental unit labor costs, gathered from the KRR and UCP decommissioning projects. The differences in WBS code sectors and facility characteristics between newly targeted components and previously dismantled components are reduced by means of scaling factors. A study on establishing the scaling factors and on cost prediction algorithms based on the productivity data is currently under way.

  6. Estimating and Predicting Metal Concentration Using Online Turbidity Values and Water Quality Models in Two Rivers of the Taihu Basin, Eastern China.

    Science.gov (United States)

    Yao, Hong; Zhuang, Wei; Qian, Yu; Xia, Bisheng; Yang, Yang; Qian, Xin

    2016-01-01

    Turbidity (T) has been widely used to detect the occurrence of pollutants in surface water. Using data collected from January 2013 to June 2014 at eleven sites along two rivers feeding the Taihu Basin, China, the relationship between turbidity and the concentrations of five metals (aluminum (Al), titanium (Ti), nickel (Ni), vanadium (V), lead (Pb)) was investigated. Metal concentrations were determined using inductively coupled plasma mass spectrometry (ICP-MS). The linear regression of metal concentration on turbidity provided a good fit, with R² = 0.86-0.93 for 72 data sets collected in the industrial river and R² = 0.60-0.85 for 60 data sets collected in the cleaner river. All the regressions showed good linear relationships, leading to the conclusion that the occurrence of the five metals is directly related to suspended solids and that their concentrations can be approximated using these regression equations. Thus, the linear regression equations were applied to estimate the metal concentrations using online turbidity data from January 1 to June 30, 2014. In the prediction, the WASP 7.5.2 (Water Quality Analysis Simulation Program) model was introduced to interpret the transport and fate of total suspended solids; in addition, metal concentrations downstream of the two rivers were predicted. All the relative errors between the estimated and measured metal concentrations were within 30%, and those between the predicted and measured values were within 40%. The estimation and prediction of metal concentrations indicate that exploiting the relationship between metals and turbidity can be an effective technique for efficient estimation and prediction of metal concentrations, facilitating long-term monitoring with high temporal and spatial density.
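
    A sketch of the per-metal calibration step: ordinary least squares of metal concentration on turbidity, then prediction from new online turbidity readings. The data are synthetic stand-ins for the ICP-MS measurements, and the slope/intercept values are not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(5)
turbidity = rng.uniform(5, 200, size=72)                  # NTU, toy sample
pb = 0.04 * turbidity + 1.2 + rng.normal(0, 0.6, 72)      # toy Pb, ug/L

slope, intercept = np.polyfit(turbidity, pb, deg=1)       # OLS calibration line
r2 = np.corrcoef(turbidity, pb)[0, 1] ** 2
print(f"Pb ~ {slope:.3f}*T + {intercept:.2f}, R^2 = {r2:.2f}")

online_t = np.array([30.0, 85.0, 140.0])                  # online sensor readings
print("predicted Pb:", slope * online_t + intercept)
```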

  7. Effect of heteroscedasticity treatment in residual error models on model calibration and prediction uncertainty estimation

    Science.gov (United States)

    Sun, Ruochen; Yuan, Huiling; Liu, Xiaoli

    2017-11-01

    The heteroscedasticity treatment in residual error models directly impacts model calibration and prediction uncertainty estimation. This study compares three methods of dealing with heteroscedasticity: the explicit linear modeling (LM) method, the nonlinear modeling (NL) method using a hyperbolic tangent function, and the implicit Box-Cox transformation (BC). A combined approach (CA), combining the advantages of both the LM and BC methods, is then proposed. In conjunction with a first-order autoregressive model and the skew exponential power (SEP) distribution, four residual error models are generated, namely LM-SEP, NL-SEP, BC-SEP and CA-SEP, and their corresponding likelihood functions are applied to the Variable Infiltration Capacity (VIC) hydrologic model over the Huaihe River basin, China. Results show that LM-SEP yields the poorest streamflow predictions, with the widest uncertainty band and unrealistic negative flows. The NL and BC methods better handle the heteroscedasticity and hence improve the corresponding predictive performance, yet the negative flows cannot be avoided. The CA-SEP produces the most accurate predictions with the highest reliability and effectively avoids negative flows, because the CA approach is capable of addressing the complicated heteroscedasticity over the study basin.
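
    The LM treatment in miniature: the residual standard deviation is modeled as growing linearly with the simulated flow, sigma_t = a + b * y_sim_t, and the likelihood is evaluated under that varying sigma. For simplicity this sketch uses a Gaussian density and independent errors, whereas the paper uses a skew exponential power density with an AR(1) error; all numbers are illustrative.

```python
import numpy as np

def heteroscedastic_loglik(obs, sim, a=0.1, b=0.2):
    """Gaussian log-likelihood with flow-dependent residual spread (LM idea)."""
    sigma = a + b * sim                       # spread grows with simulated flow
    resid = obs - sim
    return np.sum(-0.5 * np.log(2 * np.pi * sigma**2) - resid**2 / (2 * sigma**2))

obs = np.array([10.2, 14.8, 30.5, 55.1, 41.9])   # toy observed flows
sim = np.array([11.0, 15.5, 28.0, 52.0, 44.0])   # toy simulated flows
print(heteroscedastic_loglik(obs, sim))
```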

  8. Error estimation for CFD aeroheating prediction under rarefied flow condition

    Science.gov (United States)

    Jiang, Yazhong; Gao, Zhenxun; Jiang, Chongwen; Lee, Chunhian

    2014-12-01

    Both direct simulation Monte Carlo (DSMC) and computational fluid dynamics (CFD) methods have become widely used for aerodynamic prediction when reentry vehicles experience different flow regimes during flight. Implementing slip boundary conditions in the traditional CFD method under the Navier-Stokes-Fourier (NSF) framework can extend the validity of this approach further into the transitional regime, with the benefit that much less computational cost is demanded than for DSMC simulation. Correspondingly, an increasing error arises in aeroheating calculations as the flow becomes more rarefied. To estimate the relative error of heat flux when applying this method to a rarefied flow in the transitional regime, a theoretical derivation is conducted and a dimensionless parameter ε is proposed by approximately analyzing the ratio of the second-order term to the first-order term in the heat flux expression of the Burnett equations. A DSMC simulation of hypersonic flow over a cylinder in the transitional regime is performed to test the performance of the parameter ε against two other parameters, Knρ and Ma·Knρ.

  9. Cost and schedule estimate to construct the tunnel and shaft remedial shielding concept, Los Alamos Meson Physics Facility, Los Alamos National Laboratory, Los Alamos, New Mexico. Final report

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1993-11-30

    The report provides an estimate of the cost and associated schedule to construct the tunnel and shaft remedial shielding concept. The cost and schedule estimate is based on a preliminary concept intended to address the potential radiation effects on Line D and Line D Facilities in the event of a beam spill. The construction approach utilizes careful tunneling methods based on available excavation and ground support technology. The tunneling rates and overall productivity on which the cost and project schedule are estimated rest on conservative assumptions, with appropriate contingencies to address the uncertainty associated with geological conditions. The report is intended to provide supplemental information to assist in assessing the feasibility of the tunnel and shaft concept and justification for future development of this particular aspect of remedial shielding for Line D and Line D Facilities.

  10. Estimation of respiratory heat flows in prediction of heat strain among Taiwanese steel workers.

    Science.gov (United States)

    Chen, Wang-Yi; Juang, Yow-Jer; Hsieh, Jung-Yu; Tsai, Perng-Jy; Chen, Chen-Peng

    2017-01-01

    The International Organization for Standardization (ISO) 7933 standard provides evaluation of the required sweat rate (RSR) and predicted heat strain (PHS). This study examined and validated the approximations in these models estimating respiratory heat flows (RHFs) via convection (C_res) and evaporation (E_res) for application to Taiwanese foundry workers. The influence of changes in the RHF approximations on the validity of heat strain prediction in these models was also evaluated. The metabolic energy consumption and physiological quantities of these workers performing at different workloads under elevated wet-bulb globe temperature (WBGT; 30.3 ± 2.5 °C) were measured on-site and used in the calculation of RHFs and indices of heat strain. As the results show, the RSR model overestimated C_res for Taiwanese workers by approximately 3% and underestimated E_res by 8%. The C_res approximation in the PHS model closely predicted the convective RHF, while the E_res approximation over-predicted by 11%. Linear regressions provided a better fit for the C_res approximation (R² = 0.96) than for the E_res approximation (R² ≤ 0.85) in both models. The predicted C_res deviated increasingly from the observed value as the WBGT reached 35 °C. The deviations of the workers' RHFs from those predicted by the RSR or PHS models did not significantly alter heat loss via the skin, as the RHFs were generally less than 5% of the metabolic heat consumption. Validation of these approximations against the thermo-physiological responses of local workers is necessary for application in scenarios of significant heat exposure.

  11. Estimating confidence intervals in predicted responses for oscillatory biological models.

    Science.gov (United States)

    St John, Peter C; Doyle, Francis J

    2013-07-29

    The dynamics of gene regulation play a crucial role in cellular control: they allow the cell to express the right proteins to meet changing needs. Some needs, such as correctly anticipating the day-night cycle, require complicated oscillatory features. In the analysis of gene regulatory networks, mathematical models are frequently used to understand how a network's structure enables it to respond appropriately to external inputs. These models typically consist of a set of ordinary differential equations, describing a network of biochemical reactions, and unknown kinetic parameters, chosen such that the model best captures experimental data. However, since a model's parameter values are uncertain, and since dynamic responses to inputs are highly parameter-dependent, it is difficult to assess the confidence associated with these in silico predictions. In particular, models with complex dynamics - such as oscillations - must be fitted with computationally expensive global optimization routines and cannot take advantage of existing measures of identifiability. Despite their difficulty to model mathematically, limit cycle oscillations play a key role in many biological processes, including cell cycling, metabolism, neuron firing, and circadian rhythms. In this study, we employ an efficient parameter estimation technique to enable a bootstrap uncertainty analysis for limit cycle models. Since the primary role of systems biology models is the insight they provide on responses to rate perturbations, we extend our uncertainty analysis to include first-order sensitivity coefficients. Using a literature model of circadian rhythms, we show how predictive precision is degraded with decreasing sample points and increasing relative error. Additionally, we show how this method can be used for model discrimination by comparing the output identifiability of two candidate model structures to published literature data. Our method permits modellers of oscillatory systems to confidently
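
    A minimal sketch of a residual-bootstrap confidence interval for an oscillatory model fit. A plain sine model stands in for the circadian limit-cycle ODE system; data and noise levels are synthetic.

        import numpy as np
        from scipy.optimize import curve_fit

        def oscillator(t, amp, period, phase):
            # Simple stand-in for a limit-cycle model output
            return amp * np.sin(2 * np.pi * t / period + phase)

        rng = np.random.default_rng(2)
        t = np.linspace(0, 48, 49)                                # hours
        data = oscillator(t, 2.0, 24.0, 0.5) + rng.normal(0, 0.2, t.size)

        p_hat, _ = curve_fit(oscillator, t, data, p0=(1.0, 20.0, 0.0))
        resid = data - oscillator(t, *p_hat)

        # Refit on residual-resampled datasets to collect parameter draws
        draws = np.array([
            curve_fit(oscillator, t,
                      oscillator(t, *p_hat) + rng.choice(resid, resid.size),
                      p0=p_hat)[0]
            for _ in range(500)
        ])

        lo, hi = np.percentile(draws, [2.5, 97.5], axis=0)        # 95% CIs
        for name, l, h in zip(("amp", "period", "phase"), lo, hi):
            print(f"{name}: [{l:.3f}, {h:.3f}]")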

  12. Histogram Estimators of Bivariate Densities

    National Research Council Canada - National Science Library

    Husemann, Joyce A

    1986-01-01

    One-dimensional fixed-interval histogram estimators of univariate probability density functions are less efficient than the analogous variable-interval estimators which are constructed from intervals...

  13. Bounding estimate of DWPF mercury emissions

    International Nuclear Information System (INIS)

    Jacobs, R.A.

    1993-01-01

    Two factors which have a substantial impact on predicted mercury emissions are the air flows in the Chemical Process Cell (CPC) and the exit temperature of the Formic Acid Vent Condenser (FAVC). The discovery in the IDMS (Integrated DWPF Melter System) of H2 generation by noble-metal-catalyzed formic acid decomposition, and the resultant required dilution air flow, has increased the expected instantaneous CPC air flow by as much as a factor of four. In addition, IDMS has experienced higher than design (10 degrees C) FAVC exit temperatures during certain portions of the operating cycle. These temperatures were subsequently attributed to the exothermic reaction of NO to NO2. Moreover, evaluation of the DWPF FAVC indicated it was undersized and that, unless it was modified or replaced, routine exit temperatures would be in excess of design. Purges required for H2 flammability control and verification of elevated FAVC exit temperatures due to NOx reactions have led to significant changes in CPC operating conditions. Accordingly, mercury emission estimates have been updated based upon the new operating requirements, IDMS experience, and development of an NOx/FAVC model which predicts FAVC exit temperatures. Using very conservative assumptions and maximum purge rates, the maximum calculated Hg emission is approximately 130 lbs/yr. A range of 100 to 120 lbs/yr is conservatively predicted for other operating conditions. The peak emission rate calculated is 0.027 lbs/hr. The estimated DWPF Hg emissions for the construction permit are 175 lbs/yr (0.02 lbs/hr annual average)

  14. The future of forests and orangutans (Pongo abelii) in Sumatra: predicting impacts of oil palm plantations, road construction, and mechanisms for reducing carbon emissions from deforestation

    Energy Technology Data Exchange (ETDEWEB)

    Gaveau, David L A; Leader-Williams, Nigel [Durrell Institute of Conservation and Ecology, University of Kent, Canterbury, Kent CT2 7NR (United Kingdom); Wich, Serge [Great Apes Trust of Iowa, 4200 SE 44th Avenue, Des Moines, IA 50320 (United States); Epting, Justin; Juhn, Daniel [Center for Applied Biodiversity Science, Conservation International, 2011 Crystal Drive, Suite 500, Arlington, VA 22202 (United States); Kanninen, Markku, E-mail: dgaveau@yahoo.co.u, E-mail: swich@greatapetrust.or, E-mail: justep22@myfastmail.co, E-mail: d.juhn@conservation.or, E-mail: m.kanninen@cgiar.or, E-mail: n.leader-williams@kent.ac.u [Center for International Forestry Research, Jalan CIFOR, Situ Gede, Sidang Barang, Bogor, West Java (Indonesia)

    2009-09-15

    Payments for reduced carbon emissions from deforestation (RED) are now attracting attention as a way to halt tropical deforestation. Northern Sumatra comprises an area of 65 000 km² that is both the site of Indonesia's first planned RED initiative, and the stronghold of 92% of remaining Sumatran orangutans. Under current plans, this RED initiative will be implemented in a defined geographic area, essentially a newly established, 7500 km² protected area (PA) comprising mostly upland forest, where guards will be recruited to enforce forest protection. Meanwhile, new roads are currently under construction, while companies are converting lowland forests into oil palm plantations. This case study predicts the effectiveness of RED in reducing deforestation and conserving orangutans for two distinct scenarios: the current plan of implementing RED within the specific boundary of a new upland PA, and an alternative scenario of implementing RED across landscapes outside PAs. Our satellite-based spatially explicit deforestation model predicts that 1313 km² of forest would be saved from deforestation by 2030, while forest cover present in 2006 would shrink by 22% (7913 km²) across landscapes outside PAs if RED were only to be implemented in the upland PA. Meanwhile, orangutan habitat would reduce by 16% (1137 km²), resulting in the conservative loss of 1384 orangutans, or 25% of the current total population, with or without RED intervention. By contrast, an estimated 7824 km² of forest could be saved from deforestation, with maximum benefit for orangutan conservation, if RED were to be implemented across all remaining forest landscapes outside PAs. Here, RED payments would compensate land users for their opportunity costs in not converting unprotected forests into oil palm, while the construction of new roads to service the marketing of oil palm would be halted. Our predictions suggest that Indonesia's first RED initiative in an

  15. Construction of a BAC library and identification of Dmrt1 gene of the rice field eel, Monopterus albus

    International Nuclear Information System (INIS)

    Jang Songhun; Zhou Fang; Xia Laixin; Zhao Wei; Cheng Hanhua; Zhou Rongjia

    2006-01-01

    A bacterial artificial chromosome (BAC) library was constructed using nuclear DNA from the rice field eel (Monopterus albus). The BAC library consists of a total of 33,000 clones with an average insert size of 115 kb. Based on the rice field eel haploid genome size of 600 Mb, the BAC library is estimated to contain approximately 6.3 genome equivalents and to represent 99.8% of the genome of the rice field eel. This is the first BAC library constructed from this species. To estimate the possibility of isolating a specific clone, high-density colony hybridization-based library screening was performed using Dmrt1 cDNA of the rice field eel as a probe. Both library screening and PCR identification revealed three positive BAC clones, which overlapped and formed a 195-kb contig covering the Dmrt1 gene. By sequence comparison with the Dmrt1 cDNA and sequencing of the first four intron-exon junctions, the Dmrt1 gene of the rice field eel was predicted to contain four introns and five exons. The sizes of the first and second introns are 1.5 and 2.6 kb, respectively, and the sizes of the last two introns were predicted to be about 20 kb. The Dmrt1 gene structure was conserved in evolution. These results also indicate that the BAC library is a useful resource for BAC contig construction and molecular isolation of functional genes

  16. Effectiveness of prediction equations in estimating energy expenditure sample of Brazilian and Spanish women with excess body weight

    OpenAIRE

    Lopes Rosado, Eliane; Santiago de Brito, Roberta; Bressan, Josefina; Martínez Hernández, José Alfredo

    2014-01-01

    Objective: To assess the adequacy of predictive equations for estimation of energy expenditure (EE), compared with EE measured by indirect calorimetry, in a sample of Brazilian and Spanish women with excess body weight. Methods: It is a cross-sectional study with 92 obese adult women [26 Brazilian (G1) and 66 Spanish (G2), aged 20-50]. Weight and height were evaluated during fasting for the calculation of body mass index and predictive equations. EE was evaluated using the open-circuit indirect...

  17. Model calibration and parameter estimation for environmental and water resource systems

    CERN Document Server

    Sun, Ne-Zheng

    2015-01-01

    This three-part book provides a comprehensive and systematic introduction to the development of useful models for complex systems. Part 1 covers the classical inverse problem for parameter estimation in both deterministic and statistical frameworks, Part 2 is dedicated to system identification, hyperparameter estimation, and model dimension reduction, and Part 3 considers how to collect data and construct reliable models for prediction and decision-making. For the first time, topics such as multiscale inversion, stochastic field parameterization, level set method, machine learning, global sensitivity analysis, data assimilation, model uncertainty quantification, robust design, and goal-oriented modeling, are systematically described and summarized in a single book from the perspective of model inversion, and elucidated with numerical examples from environmental and water resources modeling. Readers of this book will not only learn basic concepts and methods for simple parameter estimation, but also get famili...

  18. Cost deviation in road construction projects: The case of Palestine

    Directory of Open Access Journals (Sweden)

    Ibrahim Mahamid

    2012-02-01

    Full Text Available This paper investigates the statistical relationship between the actual and estimated costs of road construction projects, using data from road construction projects awarded in the West Bank in Palestine over the years 2004–2008. The study is based on a sample of 169 road construction projects. Based on these data, regression models are developed. The findings reveal that 100% of projects suffer from cost divergence: 76% of projects were cost-underestimated and 24% cost-overestimated. The discrepancy between estimated and actual cost has an average of 14.6%, ranging from -39% to 98%. The relation between project size (length and width) and the cost divergence is discussed.
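
    A minimal sketch of the kind of regression reported above, using synthetic data in place of the 169-project dataset; the coefficients below are illustrative, not the paper's fitted values.

        import numpy as np

        rng = np.random.default_rng(3)
        length_km = rng.uniform(0.5, 15, 169)          # project size (length)
        deviation_pct = 20 - 1.1 * length_km + rng.normal(0, 12, 169)

        slope, intercept = np.polyfit(length_km, deviation_pct, 1)
        print(f"deviation% = {intercept:.1f} + {slope:.2f} * length_km")
        # Positive deviation here means actual cost exceeded the estimate
        print(f"share underestimated: {np.mean(deviation_pct > 0):.0%}")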

  19. A novel body circumferences-based estimation of percentage body fat.

    Science.gov (United States)

    Lahav, Yair; Epstein, Yoram; Kedem, Ron; Schermann, Haggai

    2018-03-01

    Anthropometric measures of body composition are often used for rapid and cost-effective estimation of percentage body fat (%BF) in field research, serial measurements and screening. Our aim was to develop a validated estimate of %BF for the general population, based on simple body circumference measures. The study cohort consisted of two consecutive samples of health club members, designated as 'development' (n = 476, 61% men, 39% women) and 'validation' (n = 224, 50% men, 50% women) groups. All subjects underwent anthropometric measurements as part of their registration to a health club. A dual-energy X-ray absorptiometry (DEXA) scan was used as the 'gold standard' estimate of %BF. Linear regressions were used to construct the predictive equation (%BFcal). Bland-Altman statistics, Lin concordance coefficients and the percentage of subjects falling within 5% of the %BF estimate by DEXA were used to evaluate accuracy and precision of the equation. The variance inflation factor was used to check multicollinearity. Two distinct equations were developed for men and women: %BFcal (men) = 10.1 - 0.239H + 0.8A - 0.5N; %BFcal (women) = 19.2 - 0.239H + 0.8A - 0.5N (H, height; A, abdomen; N, neck, all in cm). Bland-Altman differences were randomly distributed and showed no fixed bias. Lin concordance coefficients of %BFcal were 0.89 in men and 0.86 in women. About 79.5% of %BF predictions in both sexes were within ±5% of the DEXA value. The Durnin-Womersley skinfolds equation was less accurate than %BFcal for prediction of %BF in our study group. We conclude that %BFcal offers the advantage of obtaining a reliable estimate of %BF from simple measurements that require no sophisticated tools and only minimal prior training and experience.
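
    The published equations translate directly into code; a sketch follows, with illustrative (not study-cohort) measurements in the example call.

        def percent_body_fat(height_cm: float, abdomen_cm: float,
                             neck_cm: float, is_male: bool) -> float:
            """%BFcal from the abstract above: only the intercept differs
            between the sexes (10.1 for men, 19.2 for women)."""
            intercept = 10.1 if is_male else 19.2
            return intercept - 0.239 * height_cm + 0.8 * abdomen_cm - 0.5 * neck_cm

        # Example: a man 178 cm tall, abdomen 92 cm, neck 38 cm -> ~22.2 %BF
        print(percent_body_fat(178, 92, 38, is_male=True))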

  20. Case Study: A Real-Time Flood Forecasting System with Predictive Uncertainty Estimation for the Godavari River, India

    Directory of Open Access Journals (Sweden)

    Silvia Barbetta

    2016-10-01

    Full Text Available This work presents the application of the multi-temporal approach of the Model Conditional Processor (MCP-MT) for predictive uncertainty (PU) estimation in the Godavari River basin, India. MCP-MT is developed for making probabilistic Bayesian decisions. It is the most appropriate approach if the uncertainty of future outcomes is to be considered. It yields the best predictive density of future events and allows determining the probability that a critical warning threshold may be exceeded within a given forecast time. In Bayesian decision-making, the predictive density represents the best available knowledge on a future event for addressing a rational decision-making process. MCP-MT has already been tested on case studies selected in Italian river basins, showing evidence of improvement of the effectiveness of operational real-time flood forecasting systems. The application of MCP-MT to two river reaches selected in the Godavari River basin, India, is here presented and discussed, considering the stage forecasts provided by a deterministic model, STAFOM-RCM, and an hourly dataset covering seven monsoon seasons in the period 2001–2010. The results show that the PU estimate is useful for finding the exceedance probability of a given hydrometric threshold as a function of the forecast time up to 24 h, demonstrating its potential usefulness for supporting real-time decision-making. Moreover, the expected value provided by MCP-MT yields better results than the deterministic model predictions, with higher Nash–Sutcliffe coefficients and lower errors on stage forecasts, both in terms of mean error and standard deviation and in terms of root mean square error.

  1. HIV infection in the South African construction industry.

    Science.gov (United States)

    Bowen, Paul; Govender, Rajen; Edwards, Peter; Lake, Antony

    2018-06-01

    South Africa has one of the highest HIV prevalences in the world, and compared with other sectors of the national economy, the construction industry is disproportionately adversely affected. Using data collected nationally from more than 57,000 construction workers, HIV prevalence among South African construction workers was estimated, together with an assessment of the association between worker HIV serostatus and the worker characteristics of gender, age, nature of employment, occupation, and HIV testing history. HIV prevalence among construction workers was estimated to be lower than that found in a smaller 2008 sample. All worker characteristics are significantly associated with HIV serostatus. In terms of the most at-risk categories: females are more at risk of HIV infection than males; workers in the 30-49 year old age group are more at risk than other age groups; workers employed on a less permanent basis are more at risk; as are workers who have not recently tested for HIV. Among occupations in the construction industry, general workers, artisans, and operator/drivers are those most at risk. Besides yielding more up-to-date infection estimates, this research also identifies vulnerable sub-groups as valuable pointers for more targeted workplace interventions by construction firms.

  2. 23 CFR 635.115 - Agreement estimate.

    Science.gov (United States)

    2010-04-01

    ... CONSTRUCTION AND MAINTENANCE Contract Procedures § 635.115 Agreement estimate. (a) Following the award of contract, an agreement estimate based on the contract unit prices and estimated quantities shall be...

  3. Estimation of erosion-accumulative processes at the Inia River's mouth near high-rise construction zones.

    Science.gov (United States)

    Sineeva, Natalya

    2018-03-01

    The relevance of our study stems from the increasing man-made impact on water bodies and the associated land resources within urban areas and, as a consequence, from changes in the morphology and dynamics of river channels. This creates the need to predict the development of erosion-accumulation processes, especially within built-up urban areas. The purpose of the study is to develop a program for assessing erosion-accumulation processes at a water body, the mouth area of the Inia River, in a prospective high-rise construction zone of a residential microdistrict, where the floodplain-channel complex is expected to develop intensively. Results of the study: comparing water flow velocities measured in the field with those calculated from the model revealed only a slight discrepancy, which allows us to say that the numerical model reliably describes the physical processes developing in the River. The calculations carried out to assess the direction and intensity of channel re-formation made it possible to conclude that erosion processes slightly predominate over accumulative ones on the undeveloped part of the Inia River, with noticeable activity only in certain areas (near the banks and the island). Importance of the study: the assessment of erosion-accumulation processes can be used in design decisions for the future high-rise construction of this territory, which will increase their economic efficiency.

  4. Estimated carotid-femoral pulse wave velocity has similar predictive value as measured carotid-femoral pulse wave velocity

    DEFF Research Database (Denmark)

    Olsen, Michael; Greve, Sara; Blicher, Marie

    2016-01-01

    OBJECTIVE: Carotid-femoral pulse wave velocity (cfPWV) adds significantly to traditional cardiovascular (CV) risk prediction, but is not widely available. Therefore, it would be helpful if cfPWV could be replaced by an estimated carotid-femoral pulse wave velocity (ePWV) using age and mean blood...... pressure and previously published equations. The aim of this study was to investigate whether ePWV could predict CV events independently of traditional cardiovascular risk factors and/or cfPWV. DESIGN AND METHOD: cfPWV was measured and ePWV calculated in 2366 apparently healthy subjects from four age...

  5. Complex data modeling and computationally intensive methods for estimation and prediction

    CERN Document Server

    Secchi, Piercesare; Advances in Complex Data Modeling and Computational Methods in Statistics

    2015-01-01

    The book is addressed to statisticians working at the forefront of the statistical analysis of complex and high dimensional data and offers a wide variety of statistical models, computer intensive methods and applications: network inference from the analysis of high dimensional data; new developments for bootstrapping complex data; regression analysis for measuring the downsize reputational risk; statistical methods for research on the human genome dynamics; inference in non-euclidean settings and for shape data; Bayesian methods for reliability and the analysis of complex data; methodological issues in using administrative data for clinical and epidemiological research; regression models with differential regularization; geostatistical methods for mobility analysis through mobile phone data exploration. This volume is the result of a careful selection among the contributions presented at the conference "S.Co.2013: Complex data modeling and computationally intensive methods for estimation and prediction" held...

  6. Prediction of toxicity and comparison of alternatives using WebTEST (Web-services Toxicity Estimation Software Tool)(Bled Slovenia)

    Science.gov (United States)

    A Java-based web service is being developed within the US EPA’s Chemistry Dashboard to provide real time estimates of toxicity values and physical properties. WebTEST can generate toxicity predictions directly from a simple URL which includes the endpoint, QSAR method, and ...

  7. Validation and Refinement of Prediction Models to Estimate Exercise Capacity in Cancer Survivors Using the Steep Ramp Test

    NARCIS (Netherlands)

    Stuiver, Martijn M.; Kampshoff, Caroline S.; Persoon, Saskia; Groen, Wim; van Mechelen, Willem; Chinapaw, Mai J. M.; Brug, Johannes; Nollet, Frans; Kersten, Marie-José; Schep, Goof; Buffart, Laurien M.

    2017-01-01

    Objective: To further test the validity and clinical usefulness of the steep ramp test (SRT) in estimating exercise tolerance in cancer survivors by external validation and extension of previously published prediction models for peak oxygen consumption (VO2peak) and peak power output (Wpeak).

  8. 10-Year prospective study of noise exposure and hearing damage among construction workers.

    Science.gov (United States)

    Seixas, Noah S; Neitzel, Rick; Stover, Bert; Sheppard, Lianne; Feeney, Patrick; Mills, David; Kujawa, Sharon

    2012-09-01

    To characterise the effects of noise exposure, including intermittent and peaky exposure, on hearing damage as assessed by standard pure-tone thresholds and otoacoustic emissions, a longitudinal study was conducted on newly hired construction apprentices and controls over a 10-year period. Among the 456 subjects recruited at baseline, 316 had at least two (mean 4.6) examinations and were included in this analysis. Annual examinations included hearing threshold levels (HTLs) for air conducted pure tones and distortion product otoacoustic emission (DPOAE) amplitudes. Task-based occupational noise exposure levels and recreational exposures were estimated. Linear mixed models were fit for HTLs and DPOAEs at 3, 4 and 6 kHz in relation to time since baseline and average noise level since baseline, while controlling for hearing level at baseline and other risk factors. Estimated LEQ noise exposures were 87±3.6 dBA among the construction workers. Linear mixed modelling demonstrated significant exposure-related elevations in HTL of about 2-3 dB over a projected 10-year period at 3, 4 or 6 kHz for a 10 dB increase in exposure. The DPOAE models (using L1=40) predicted about 1 dB decrease in emission amplitude over 10 years for a 10 dB increase in exposure. The study provides evidence of noise-induced damage at an average exposure level around the 85 dBA level. The predicted change in HTLs was somewhat higher than would be predicted by standard hearing loss models, after accounting for hearing loss at baseline. Limited evidence for an enhanced effect of high peak component noise was observed, and DPOAEs, although similarly affected, showed no advantage over standard hearing threshold evaluation in detecting effects of noise on the ear and hearing.

  9. 10-Year prospective study of noise exposure and hearing damage among construction workers

    Science.gov (United States)

    Seixas, Noah S; Neitzel, Rick; Stover, Bert; Sheppard, Lianne; Feeney, Patrick; Mills, David; Kujawa, Sharon

    2015-01-01

    Objectives To characterise the effects of noise exposure, including intermittent and peaky exposure, on hearing damage as assessed by standard pure-tone thresholds and otoacoustic emissions, a longitudinal study was conducted on newly hired construction apprentices and controls over a 10-year period. Methods Among the 456 subjects recruited at baseline, 316 had at least two (mean 4.6) examinations and were included in this analysis. Annual examinations included hearing threshold levels (HTLs) for air conducted pure tones and distortion product otoacoustic emission (DPOAE) amplitudes. Task-based occupational noise exposure levels and recreational exposures were estimated. Linear mixed models were fit for HTLs and DPOAEs at 3, 4 and 6 kHz in relation to time since baseline and average noise level since baseline, while controlling for hearing level at baseline and other risk factors. Results Estimated LEQ noise exposures were 87±3.6 dBA among the construction workers. Linear mixed modelling demonstrated significant exposure-related elevations in HTL of about 2–3 dB over a projected 10-year period at 3, 4 or 6 kHz for a 10 dB increase in exposure. The DPOAE models (using L1=40) predicted about 1 dB decrease in emission amplitude over 10 years for a 10 dB increase in exposure. Conclusions The study provides evidence of noise-induced damage at an average exposure level around the 85 dBA level. The predicted change in HTLs was somewhat higher than would be predicted by standard hearing loss models, after accounting for hearing loss at baseline. Limited evidence for an enhanced effect of high peak component noise was observed, and DPOAEs, although similarly affected, showed no advantage over standard hearing threshold evaluation in detecting effects of noise on the ear and hearing. PMID:22693267

  10. Remaining useful life prediction based on variation coefficient consistency test of a Wiener process

    Directory of Open Access Journals (Sweden)

    Juan LI

    2018-01-01

    Full Text Available High-cost equipment is often reused after maintenance, and whether information from before the maintenance can be used for Remaining Useful Life (RUL) prediction after the maintenance is directly determined by the consistency of the degradation pattern before and after the maintenance. Aiming at this problem, an RUL prediction method based on a consistency test of a Wiener process is proposed. Firstly, the parameters of the Wiener process estimated by Maximum Likelihood Estimation (MLE) are proved to be biased, and a modified unbiased estimation method is proposed and verified by derivation and simulations. Then, the h statistic is constructed from the reciprocal of the variation coefficient of the Wiener process, and its sampling distribution is derived. Meanwhile, a universal method for the consistency test is proposed based on the sampling distribution theorem, and is verified using simulation data and classical crack degradation data. Finally, based on the consistency test of the degradation model, a weighted-fusion RUL prediction method is presented for the fuel pump of an airplane, and the validity of the presented method is verified by accurate computation results on real data, providing theoretical and practical guidance for engineers predicting the RUL of equipment after maintenance.
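
    A minimal sketch of drift/diffusion estimation for a Wiener degradation process X(t) = mu*t + sigma*B(t) from equally spaced increments. The (n-1) correction shown is the textbook bias adjustment; the paper's modified unbiased estimator should be consulted for its exact form.

        import numpy as np

        def wiener_estimates(x, dt):
            """MLE of drift and diffusion from a degradation path x (x[0]=0)."""
            dx = np.diff(x)
            n = dx.size
            mu_hat = dx.sum() / (n * dt)                       # drift MLE
            resid = dx - mu_hat * dt
            sigma2_mle = (resid ** 2).sum() / (n * dt)         # biased MLE
            sigma2_corr = (resid ** 2).sum() / ((n - 1) * dt)  # bias-corrected
            return mu_hat, sigma2_mle, sigma2_corr

        rng = np.random.default_rng(4)
        dt, n = 1.0, 200                  # true mu = 0.5, true sigma^2 = 0.09
        increments = 0.5 * dt + 0.3 * rng.normal(0, np.sqrt(dt), n)
        path = np.concatenate(([0.0], np.cumsum(increments)))
        print(wiener_estimates(path, dt))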

  11. Estimation of typhoon rainfall in GaoPing River: A Multivariate Maximum Entropy Method

    Science.gov (United States)

    Pei-Jui, Wu; Hwa-Lung, Yu

    2016-04-01

    Heavy rainfall from typhoons is the main cause of natural disasters in Taiwan, producing significant loss of human lives and property. On average, 3.5 typhoons strike Taiwan every year; the most serious on record, Morakot in 2009, had a severe impact on the island. Because the duration, path and intensity of a typhoon affect the temporal and spatial rainfall pattern in a specific region, identifying the characteristics of typhoon rainfall types is advantageous when estimating rainfall quantities. This study developed a rainfall prediction model in three parts. First, the EEOF (extended empirical orthogonal function) method is used to classify typhoon events, decomposing the standardized rainfall of all stations for each event into EOFs and PCs (principal components), so that events that vary similarly in time and space can be grouped into similar typhoon types. Next, according to this classification, PDFs (probability density functions) are constructed in space and time by means of multivariate maximum entropy using the first to fourth statistical moments, yielding a probability for each station at each time. Finally, the BME (Bayesian Maximum Entropy) method is used to construct the typhoon rainfall prediction model and to estimate rainfall for the case of the GaoPing River, located in southern Taiwan. This study could be useful for future typhoon rainfall prediction and suitable for government typhoon disaster prevention.
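
    A minimal sketch of the EOF/PC decomposition step: a standardized space-time rainfall matrix is factored by SVD so each event can be summarized by a few leading patterns. The rainfall matrix here is synthetic.

        import numpy as np

        rng = np.random.default_rng(5)
        rain = rng.gamma(2.0, 5.0, (120, 25))     # hours x stations, synthetic

        anom = (rain - rain.mean(axis=0)) / rain.std(axis=0)  # standardize
        u, s, vt = np.linalg.svd(anom, full_matrices=False)

        eofs = vt                                 # spatial patterns (rows)
        pcs = u * s                               # temporal principal components
        explained = s ** 2 / np.sum(s ** 2)
        print("variance explained by first 3 EOFs:", explained[:3].round(3))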

  12. A Model Suggestion to Predict Leverage Ratio for Construction Projects

    OpenAIRE

    Özlem Tüz; Şafak Ebesek

    2013-01-01

    By its nature, construction is an industry with high uncertainty and risk, and construction firms carry high leverage ratios. Firms with low equity work on big projects through the progress payment system, but in this case even a small shortfall in the planned cash flows constitutes a major risk for the company. The use of leverage allows a small investment to achieve large-scale, high-profit targets, but it also brings high risk with it. Investors may lose all or a portion of th...

  13. Model-free prediction and regression a transformation-based approach to inference

    CERN Document Server

    Politis, Dimitris N

    2015-01-01

    The Model-Free Prediction Principle expounded upon in this monograph is based on the simple notion of transforming a complex dataset to one that is easier to work with, e.g., i.i.d. or Gaussian. As such, it restores the emphasis on observable quantities, i.e., current and future data, as opposed to unobservable model parameters and estimates thereof, and yields optimal predictors in diverse settings such as regression and time series. Furthermore, the Model-Free Bootstrap takes us beyond point prediction in order to construct frequentist prediction intervals without resort to unrealistic assumptions such as normality. Prediction has been traditionally approached via a model-based paradigm, i.e., (a) fit a model to the data at hand, and (b) use the fitted model to extrapolate/predict future data. Due to both mathematical and computational constraints, 20th century statistical practice focused mostly on parametric models. Fortunately, with the advent of widely accessible powerful computing in the late 1970s, co...

  14. Information resources for nuclear power construction

    International Nuclear Information System (INIS)

    Rozin, V.M.

    1987-01-01

    Problems of designing and introducing data resources (in the form of data on magnetic carriers) for the construction of NPPs with WWER-1000 type reactors are considered. It is suggested that data resources should be supplied to the construction site as an independent product, alongside preliminary estimates and material resources. It is pointed out that the introduction of the data resource will make it possible to increase the efficiency of NPP construction management

  15. Passive-solar construction handbook

    Energy Technology Data Exchange (ETDEWEB)

    Levy, E.; Evans, D.; Gardstein, C.

    1981-02-01

    Many of the basic elements of passive solar design are reviewed. Passive solar construction is covered according to system type, each system type discussion including a general discussion of the important design and construction issues which apply to the particular system and case studies illustrating designed and built examples of the system type. The three basic types of passive solar systems discussed are direct gain, thermal storage wall, and attached sunspace. Thermal performance and construction information is presented for typical materials used in passive solar collector components, storage components, and control components. Appended are an overview of analysis methods and a technique for estimating performance. (LEW)

  16. Assessing the reliability, predictive and construct validity of historical, clinical and risk management-20 (HCR-20) in Mexican psychiatric inpatients.

    Science.gov (United States)

    Sada, Andrea; Robles-García, Rebeca; Martínez-López, Nicolás; Hernández-Ramírez, Rafael; Tovilla-Zarate, Carlos-Alfonso; López-Munguía, Fernando; Suárez-Alvarez, Enrique; Ayala, Xochitl; Fresán, Ana

    2016-08-01

    Assessing dangerousness to gauge the likelihood of future violent behaviour has become an integral part of clinical mental health practice in forensic and non-forensic psychiatric settings, one of the most effective instruments for this being the Historical, Clinical and Risk Management-20 (HCR-20). The aims were to examine the HCR-20 factor structure in Mexican psychiatric inpatients and to obtain its predictive validity and reliability for use in this population. In total, 225 patients diagnosed with psychotic, affective or personality disorders were included. The HCR-20 was applied at hospital admission, and violent behaviours were assessed during psychiatric hospitalization using the Overt Aggression Scale (OAS). Construct validity, predictive validity and internal consistency were determined. Violent behaviour remained more severe during hospitalization in patients classified in the high-risk group. Fifteen items displayed adequate communalities in the originally designated domains of the HCR-20, and the internal consistency of the instrument was high. The HCR-20 is a suitable instrument for predicting violence risk in Mexican psychiatric inpatients.

  17. Development of a prediction model for the cost saving potentials in implementing the building energy efficiency rating certification

    International Nuclear Information System (INIS)

    Jeong, Jaewook; Hong, Taehoon; Ji, Changyoon; Kim, Jimin; Lee, Minhyun; Jeong, Kwangbok; Koo, Choongwan

    2017-01-01

    Highlights: • This study evaluates the building energy efficiency rating (BEER) certification. • A prediction model was developed for the cost saving potentials of the BEER certification. • The prediction model was developed using LCC analysis, ROV, and Monte Carlo simulation. • The cost saving potential was predicted to be 2.78–3.77% of the construction cost. • The cost saving potential can be used for estimating the investment value of BEER. - Abstract: Building energy efficiency rating (BEER) certification is an energy performance certificate (EPC) in South Korea. It is critical to examine the cost saving potentials of the BEER certification in advance. This study aimed to develop a prediction model for the cost saving potentials of implementing the BEER certification, in which the cost saving potentials include the energy cost savings of the BEER certification and the associated CO2 emission reductions, as well as the additional construction cost for the BEER certification. The prediction model was developed using data mining, life cycle cost analysis, real option valuation, and Monte Carlo simulation. The database was established with 437 multi-family housing complexes (MFHCs), including 116 BEER-certified MFHCs and 321 non-certified MFHCs. A case study was conducted to validate the developed prediction model using the 321 non-certified MFHCs, considering a 20-year life cycle. As a result, compared to the additional construction cost, the average cost saving potentials of the 1st-BEER-certified MFHCs in Groups 1, 2, and 3 were predicted to be 3.77%, 2.78%, and 2.87%, respectively. The cost saving potentials can be used as a guideline for the additional construction cost of the BEER certification in the early design phase.
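
    A minimal sketch of the Monte Carlo step of such a model: draw uncertain annual energy savings, discount them over a 20-year life cycle, and express the result against the construction cost. All distributions and monetary values are illustrative assumptions, not the study's data.

        import numpy as np

        rng = np.random.default_rng(6)
        n_sims, years, discount = 10_000, 20, 0.03
        construction_cost = 10_000_000            # illustrative base cost
        extra_cost = 0.01 * construction_cost     # certification premium

        annual_saving = rng.normal(25_000, 5_000, (n_sims, years))
        t = np.arange(1, years + 1)
        npv_saving = (annual_saving / (1 + discount) ** t).sum(axis=1)

        saving_potential = npv_saving / construction_cost
        print(f"mean saving potential: {saving_potential.mean():.2%}")
        print(f"P(savings exceed extra cost): {(npv_saving > extra_cost).mean():.1%}")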

  18. Estimation of stream conditions in tributaries of the Klamath River, northern California

    Science.gov (United States)

    Manhard, Christopher V.; Som, Nicholas A.; Jones, Edward C.; Perry, Russell W.

    2018-01-01

    Because of their critical ecological role, stream temperature and discharge are requisite inputs for models of salmonid population dynamics. Coho Salmon inhabiting the Klamath Basin spend much of their freshwater life cycle in tributaries, but environmental data are often absent or only seasonally available at these locations. To address this information gap, we constructed daily averaged water temperature models that used simulated meteorological data to estimate daily tributary temperatures, and we used flow differentials recorded on the mainstem Klamath River to estimate daily tributary discharge. Observed temperature data were available for fourteen of the major salmon-bearing tributaries, which enabled estimation of tributary-specific model parameters at those locations. Water temperature data from six mid-Klamath Basin tributaries were used to estimate a global set of parameters for predicting water temperatures in the remaining tributaries. The resulting parameter sets were used to simulate water temperatures for each of 75 tributaries from 1980-2015. Goodness-of-fit statistics computed from a cross-validation analysis demonstrated the high precision of the tributary-specific models in predicting temperature in unobserved years, and of the global model in predicting temperatures in unobserved streams. Klamath River discharge has been monitored by four gages spaced broadly along the 292 kilometers from the Iron Gate Dam to the Klamath River mouth. These gages defined the upstream and downstream margins of three reaches. Daily discharge of tributaries within a reach was estimated for 1980-2015 based on drainage-area proportionate allocations of the discharge differential between the upstream and downstream margins. Comparisons with measured discharge on Indian Creek, a moderate-sized tributary with naturally regulated flows, revealed that the estimates effectively approximated both the variability and magnitude of discharge.
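
    A minimal sketch of the drainage-area proportionate allocation: the discharge gained between the upstream and downstream gages of a reach is split among its ungaged tributaries in proportion to their drainage areas. Gage values and areas are illustrative.

        import numpy as np

        def allocate_tributary_flow(q_upstream, q_downstream, areas_km2):
            """Split the reach discharge differential by drainage area."""
            differential = q_downstream - q_upstream       # m^3/s gained in reach
            return differential * areas_km2 / areas_km2.sum()

        areas = np.array([310.0, 85.0, 540.0])             # tributary areas, km^2
        print(allocate_tributary_flow(42.0, 61.0, areas))  # per-tributary discharge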

  19. Prediction of the optimum hybridization conditions of dot-blot-SNP analysis using estimated melting temperature of oligonucleotide probes.

    Science.gov (United States)

    Shiokai, Sachiko; Kitashiba, Hiroyasu; Nishio, Takeshi

    2010-08-01

    Although the dot-blot-SNP technique is a simple, cost-saving technique suitable for genotyping many plant individuals, optimization of hybridization and washing conditions for each SNP marker requires much time and labor. To predict the optimum hybridization conditions for each probe, we compared Tm values estimated from nucleotide sequences using the DINAMelt web server, measured Tm values, and the hybridization conditions yielding allele-specific signals. The estimated Tm values were comparable to the measured Tm values, with differences of less than 3 degrees C for most of the probes. There were differences of approximately 14 degrees C between the specific signal detection conditions and the estimated Tm values. A change of one step in SSC concentration among 0.1, 0.2, 0.5, and 1.0x SSC corresponded to a difference of approximately 5 degrees C in optimum signal detection temperature. Increasing the sensitivity of signal detection by shortening the exposure time to X-ray film changed the optimum hybridization condition for specific signal detection. Addition of competitive oligonucleotides to the hybridization mixture increased the suitable hybridization conditions by 1.8. Based on these results, optimum hybridization conditions for newly produced dot-blot-SNP markers will become predictable.
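
    For orientation, two classical rule-of-thumb Tm estimators are sketched below; they ignore the salt and sequence-context effects that a full nearest-neighbor model such as DINAMelt accounts for, so they are stand-ins only.

        def tm_wallace(seq: str) -> float:
            """Wallace rule for short probes: Tm = 2(A+T) + 4(G+C)."""
            s = seq.upper()
            return 2 * (s.count("A") + s.count("T")) + 4 * (s.count("G") + s.count("C"))

        def tm_gc(seq: str) -> float:
            """GC-content formula for longer oligonucleotides."""
            s = seq.upper()
            gc = s.count("G") + s.count("C")
            return 64.9 + 41.0 * (gc - 16.4) / len(s)

        probe = "ACGTGCTAGCTAGGCTAACG"   # hypothetical SNP probe
        print(tm_wallace(probe), round(tm_gc(probe), 1))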

  20. DEVELOPMENT OF ESTIMATION METHODS OF ORGANIZATIONAL AND TECHNOLOGICAL RELIABILITY LEVEL OF BUILDINGS AND STRUCTURES IN PROJECTS OF BIOSPHERE-SUPPORTING CONSTRUCTION

    Directory of Open Access Journals (Sweden)

    CHERNYSHEV D. О.

    2017-03-01

    Full Text Available Summary. The article is devoted to the search for advanced analytical tools and methodical-algorithmic techniques for organizational-technological and stochastic evaluation and for overcoming risks and threats during the implementation of biosphere construction projects. The expediency of applying the theory and methods of wavelet analysis to the study of non-stationary stochastic oscillations of complex spatial structures is substantiated by the need for more accurate prediction of their dynamic behavior and identification of the structures' characteristics in the frequency-time space.

  1. Prediction of hospital mortality by changes in the estimated glomerular filtration rate (eGFR).

    LENUS (Irish Health Repository)

    Berzan, E

    2015-03-01

    Deterioration of physiological or laboratory variables may provide important prognostic information. We have studied whether a change in the estimated glomerular filtration rate (eGFR), calculated using the Modification of Diet in Renal Disease (MDRD) formula, over the hospital admission would have predictive value. An analysis was performed on all emergency medical hospital episodes (N = 61964) admitted between 1 January 2002 and 31 December 2011. A stepwise logistic regression model examined the relationship between mortality and the change in renal function from admission to discharge. The fully adjusted odds ratios (ORs) for 5 classes of GFR deterioration showed a stepwise increased risk of 30-day death, with ORs of 1.42 (95% CI: 1.20, 1.68), 1.59 (1.27, 1.99), 2.71 (2.24, 3.27), 5.56 (4.54, 6.81) and 11.9 (9.0, 15.6), respectively. The change in eGFR during a clinical episode, following an emergency medical admission, powerfully predicts the outcome.
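
    A minimal sketch of the eGFR calculation named above, using the four-variable MDRD study equation in its IDMS-traceable form (coefficients as commonly published; verify against the study before reuse). The patient values are illustrative.

        def egfr_mdrd(creatinine_mg_dl, age, is_female, is_black):
            """Four-variable MDRD eGFR, mL/min/1.73 m^2."""
            egfr = 175.0 * creatinine_mg_dl ** -1.154 * age ** -0.203
            if is_female:
                egfr *= 0.742
            if is_black:
                egfr *= 1.212
            return egfr

        # Change in eGFR over an admission, the quantity the study's
        # deterioration classes are built from
        admission = egfr_mdrd(1.4, 72, is_female=False, is_black=False)
        discharge = egfr_mdrd(2.1, 72, is_female=False, is_black=False)
        print(round(admission, 1), round(discharge, 1), round(discharge - admission, 1))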

  2. Precision Interval Estimation of the Response Surface by Means of an Integrated Algorithm of Neural Network and Linear Regression

    Science.gov (United States)

    Lo, Ching F.

    1999-01-01

    Radial basis function networks and backpropagation neural networks have been integrated with multiple linear regression to map nonlinear response surfaces over a wide range of independent variables in the process of the Modern Design of Experiments. The integrated method is capable of estimating precision intervals, including confidence and prediction intervals. The power of the innovative method has been demonstrated by application to a set of wind tunnel test data in the construction of a response surface and the estimation of precision intervals.

  3. Using Monte Carlo/Gaussian Based Small Area Estimates to Predict Where Medicaid Patients Reside.

    Science.gov (United States)

    Behrens, Jess J; Wen, Xuejin; Goel, Satyender; Zhou, Jing; Fu, Lina; Kho, Abel N

    2016-01-01

    Electronic Health Records (EHR) are rapidly becoming accepted as tools for planning and population health [1,2]. With the national dialogue around Medicaid expansion [12], the role of EHR data has become even more important. For their potential to be fully realized and to contribute to these discussions, techniques for creating accurate small area estimates are vital. As such, we examined the efficacy of developing small area estimates for Medicaid patients in two locations, Albuquerque and Chicago, using a Monte Carlo/Gaussian technique that has worked in accurately locating registered voters in North Carolina [11]. The Albuquerque data, which include patient addresses, will first be used to assess the accuracy of the methodology. Subsequently, they will be combined with the EHR data from Chicago to develop a regression that predicts Medicaid patients by US Block Group. We seek to create a tool that is effective in translating EHR data's potential for population health studies.

  4. Genetic Algorithms for Estimating Effective Parameters in a Lumped Reactor Model for Reactivity Predictions

    International Nuclear Information System (INIS)

    Marseguerra, Marzio; Zio, Enrico

    2001-01-01

    The control system of a reactor should be able to predict, in real time, the amount of reactivity to be inserted (e.g., by control rod movements and boron injection and dilution) to respond to a given electrical load demand or to undesired, accidental transients. The real-time constraint renders impractical the use of a large, detailed dynamic reactor code. One has, then, to resort to simplified analytical models with lumped effective parameters suitably estimated from the reactor data. The simple and well-known Chernick model for describing the reactor power evolution in the presence of xenon is considered, and the feasibility of using genetic algorithms for estimating the effective nuclear parameters involved and the initial non-measurable xenon and iodine conditions is investigated. This approach has the advantage of counterbalancing the inherent model simplicity with the periodic re-estimation of the effective parameter values pertaining to each reactor on the basis of its recent history. By so doing, other effects, such as burnup, are automatically taken into account
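
    A minimal sketch of genetic-algorithm parameter estimation of the kind explored above; a toy exponential response stands in for the Chernick xenon model, and all GA settings are illustrative.

        import numpy as np

        rng = np.random.default_rng(7)

        def model(params, t):
            a, b = params
            return a * np.exp(-b * t)    # toy stand-in for the reactor model

        t_obs = np.linspace(0, 10, 50)
        y_obs = model((3.0, 0.4), t_obs) + rng.normal(0, 0.05, t_obs.size)

        def fitness(pop):
            # Negative mean squared error: higher is better
            return np.array([-np.mean((model(p, t_obs) - y_obs) ** 2) for p in pop])

        lo, hi = np.array([0.1, 0.01]), np.array([10.0, 2.0])
        pop = rng.uniform(lo, hi, size=(60, 2))
        for _ in range(100):
            parents = pop[np.argsort(fitness(pop))[-30:]]      # truncation selection
            mates = parents[rng.integers(0, 30, size=(60, 2))]
            children = mates.mean(axis=1)                      # arithmetic crossover
            children += rng.normal(0, 0.05, children.shape)    # Gaussian mutation
            pop = np.clip(children, lo, hi)

        print("estimated (a, b):", pop[np.argmax(fitness(pop))].round(3))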

  5. Estimating Human Predictability From Mobile Sensor Data

    DEFF Research Database (Denmark)

    Jensen, Bjørn Sand; Larsen, Jakob Eg; Jensen, Kristian

    2010-01-01

    Quantification of human behavior is of prime interest in many applications, ranging from behavioral science to practical applications like GSM resource planning and context-aware services. As proxies for humans, we apply multiple mobile phone sensors, all conveying information about human behavior. Using a recent, information-theoretic approach, it is demonstrated that the trajectories of individual sensors are highly predictable given complete knowledge of the infinite past. We suggest a new approach to time scale selection, which demonstrates that participants have even higher predictability

  6. A probabilistic model for US nuclear power construction times

    International Nuclear Information System (INIS)

    Shash, A.A.H.

    1988-01-01

    Construction time for nuclear power plants is an important element in planning for the resources to meet future load demands. Analysis of actual versus estimated construction times for past US nuclear power plants indicates that utilities have continuously underestimated their power plants' construction durations. The analysis also indicates that the actual average construction time has been trending upward, and that the actual durations of power plants permitted for construction in the same year varied substantially. This study presents two probabilistic models of nuclear power construction time for use by the nuclear industry as estimating tools. The study also presents a detailed explanation of the factors responsible for the increase in, and variation of, nuclear power construction times. Observations on 91 completed nuclear units were involved in three interdependent analyses in the process of explaining and deriving the probabilistic models. The historical data were first utilized in data envelopment analysis (DEA) for the purpose of obtaining frontier index measures of project management achievement in building nuclear power plants

  7. Online prediction of respiratory motion: multidimensional processing with low-dimensional feature learning

    International Nuclear Information System (INIS)

    Ruan, Dan; Keall, Paul

    2010-01-01

    Accurate real-time prediction of respiratory motion is desirable for effective motion management in radiotherapy for lung tumor targets. Recently, nonparametric methods have been developed and their efficacy in predicting one-dimensional respiratory-type motion has been demonstrated. To exploit the correlation among various coordinates of the moving target, it is natural to extend the 1D method to multidimensional processing. However, the amount of learning data required for such extension grows exponentially with the dimensionality of the problem, a phenomenon known as the 'curse of dimensionality'. In this study, we investigate a multidimensional prediction scheme based on kernel density estimation (KDE) in an augmented covariate-response space. To alleviate the 'curse of dimensionality', we explore the intrinsic lower dimensional manifold structure and utilize principal component analysis (PCA) to construct a proper low-dimensional feature space, where kernel density estimation is feasible with the limited training data. Interestingly, the construction of this lower dimensional representation reveals a useful decomposition of the variations in respiratory motion into the contribution from semiperiodic dynamics and that from the random noise, as it is only sensible to perform prediction with respect to the former. The dimension reduction idea proposed in this work is closely related to feature extraction used in machine learning, particularly support vector machines. This work points out a pathway in processing high-dimensional data with limited training instances, and this principle applies well beyond the problem of target-coordinate-based respiratory-based prediction. A natural extension is prediction based on image intensity directly, which we will investigate in the continuation of this work. We used 159 lung target motion traces obtained with a Synchrony respiratory tracking system. Prediction performance of the low-dimensional feature learning
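
    A minimal sketch of the low-dimensional feature idea: embed lagged samples of a (synthetic) respiratory trace with PCA, then form a kernel-weighted estimate of the future sample in that feature space. Lag length, horizon and bandwidth are illustrative.

        import numpy as np

        rng = np.random.default_rng(8)
        t = np.arange(2000) * 0.1                   # 10 Hz sampling
        trace = np.sin(2 * np.pi * t / 4.0) + 0.1 * rng.normal(size=t.size)

        lag, horizon = 20, 5                        # 2 s history, 0.5 s lookahead
        n = t.size - lag - horizon + 1
        X = np.stack([trace[i:i + lag] for i in range(n)])              # covariates
        y = np.array([trace[i + lag + horizon - 1] for i in range(n)])  # responses

        # PCA: project lag vectors onto the top 3 principal directions
        mean = X.mean(axis=0)
        _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
        W = vt[:3].T
        Z = (X - mean) @ W

        def predict(history, bandwidth=0.5):
            """Gaussian-kernel (Nadaraya-Watson) mean of responses, with
            weights computed in the low-dimensional feature space."""
            z = (history - mean) @ W
            w = np.exp(-np.sum((Z - z) ** 2, axis=1) / (2 * bandwidth ** 2))
            return np.sum(w * y) / np.sum(w)

        print(predict(trace[-lag:]))                # 0.5 s-ahead prediction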

  8. Modeling and state-of-charge prediction of lithium-ion battery and ultracapacitor hybrids with a co-estimator

    International Nuclear Information System (INIS)

    Wang, Yujie; Liu, Chang; Pan, Rui; Chen, Zonghai

    2017-01-01

    The modeling and state-of-charge estimation of batteries and ultracapacitors are crucial to battery/ultracapacitor hybrid energy storage systems. In recent years, model-based state estimators have been widely adopted, since they can adjust their gain in a timely manner according to the error between model predictions and measurements. In most existing algorithms, however, the model parameters are either configured with theoretical values or identified off-line without adaptation. In fact, the model parameters change continuously with load variation and self-aging, and the lack of adaptation significantly reduces estimation accuracy. To overcome this drawback, a novel co-estimator is proposed to estimate the model parameters and state-of-charge simultaneously. The extended Kalman filter is employed for parameter updating. To reduce the convergence time, the recursive least squares algorithm and an off-line identification method are used to provide initial values with small deviation. The unscented Kalman filter is employed for the state-of-charge estimation. Because the unscented Kalman filter takes not only the measurement uncertainties but also the process uncertainties into account, it is robust to noise. Experiments were executed to explore the robustness, stability and precision of the proposed method. - Highlights: • A co-estimator is proposed to estimate the model parameters and state-of-charge. • The extended Kalman filter is used for model parameter adaptation. • The unscented Kalman filter is designed for state estimation with strong robustness. • Dynamic profiles are employed to verify the proposed co-estimator.

  9. Electrical estimating methods

    CERN Document Server

    Del Pico, Wayne J

    2014-01-01

    Simplify the estimating process with the latest data, materials, and practices Electrical Estimating Methods, Fourth Edition is a comprehensive guide to estimating electrical costs, with data provided by leading construction database RS Means. The book covers the materials and processes encountered by the modern contractor, and provides all the information professionals need to make the most precise estimate. The fourth edition has been updated to reflect the changing materials, techniques, and practices in the field, and provides the most recent Means cost data available. The complexity of el

  10. Using a detailed inventory of a large wastewater treatment plant to estimate the relative importance of construction to the overall environmental impacts.

    Science.gov (United States)

    Morera, Serni; Corominas, Lluís; Rigola, Miquel; Poch, Manel; Comas, Joaquim

    2017-10-01

    The aim of this work is to quantify the relative contribution to the overall environmental impact of the construction phase compared to the operational phase for a large conventional activated sludge wastewater treatment plant (WWTP). To estimate these environmental impacts, a systematic procedure was designed to obtain detailed Life Cycle Inventories (LCI) for civil works and equipment, taking as the starting point the construction project budget and the list of equipment installed at the Girona WWTP, which are the most reliable information sources on materials and resources used during the construction phase. A detailed inventory is conducted by including 45 materials for civil works and 1,240 devices for the equipment. For most of the impact categories and different life spans of the WWTP, the contribution of the construction phase to the overall burden is higher than 5%, and for metal depletion in particular the impact of construction reaches 63%. When compared to the WWTP inventories available in Ecoinvent, the share of construction obtained in this work is about three times smaller for climate change and about twice as high for metal depletion. Concrete and reinforcing steel are the materials with the highest contribution to the civil works phase, and motors, pumps and mobile and transport equipment are also key equipment to consider during life cycle inventories of WWTPs. Additional robust inventories for similar WWTPs can leverage this work by applying the factors (kg of materials and energy per m(3) of treated water) and guidance provided. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. Perspectives on Modelling BIM-enabled Estimating Practices

    Directory of Open Access Journals (Sweden)

    Willy Sher

    2014-12-01

    Full Text Available BIM-enabled estimating processes do not replace or provide a substitute for the traditional approaches used in the architecture, engineering and construction industries. This paper explores the impact of BIM on these traditional processes.  It identifies differences between the approaches used with BIM and other conventional methods, and between the various construction professionals that prepare estimates. We interviewed 17 construction professionals from client organizations, contracting organizations, consulting practices and specialist-project firms. Our analyses highlight several logical relationships between estimating processes and BIM attributes. Estimators need to respond to the challenges BIM poses to traditional estimating practices. BIM-enabled estimating circumvents long-established conventions and traditional approaches, and focuses on data management.  Consideration needs to be given to the model data required for estimating, to the means by which these data may be harnessed when exported, to the means by which the integrity of model data are protected, to the creation and management of tools that work effectively and efficiently in multi-disciplinary settings, and to approaches that narrow the gap between virtual reality and actual reality.  Areas for future research are also identified in the paper.

  12. Effect of Trait Heritability, Training Population Size and Marker Density on Genomic Prediction Accuracy Estimation in 22 bi-parental Tropical Maize Populations.

    Science.gov (United States)

    Zhang, Ao; Wang, Hongwu; Beyene, Yoseph; Semagn, Kassa; Liu, Yubo; Cao, Shiliang; Cui, Zhenhai; Ruan, Yanye; Burgueño, Juan; San Vicente, Felix; Olsen, Michael; Prasanna, Boddupalli M; Crossa, José; Yu, Haiqiu; Zhang, Xuecai

    2017-01-01

    Genomic selection is being used increasingly in plant breeding to accelerate genetic gain per unit time. One of the most important applications of genomic selection in maize breeding is to predict and select the best un-phenotyped lines in bi-parental populations based on genomic estimated breeding values. In the present study, 22 bi-parental tropical maize populations genotyped with low density SNPs were used to evaluate the genomic prediction accuracy (rMG) of the six trait-environment combinations under various levels of training population size (TPS) and marker density (MD), and to assess the effect of trait heritability (h2), TPS and MD on rMG estimation. Our results showed that: (1) moderate rMG values were obtained for different trait-environment combinations when 50% of the total genotypes was used as the training population and ~200 SNPs were used for prediction; (2) rMG increased with an increase in h2, TPS and MD; both correlation and variance analyses showed that h2 is the most important factor and MD is the least important factor for rMG estimation for most of the trait-environment combinations; (3) predictions between pairwise half-sib populations showed that the rMG values for all six trait-environment combinations were centered around zero, with 49% of predictions having rMG values above zero; (4) the trend observed in rMG differed from the trend observed in rMG/h, where h is the square root of the heritability of the predicted trait, indicating that both rMG and rMG/h values should be presented in a GS study to show the accuracy of genomic selection and the relative accuracy of genomic selection compared with phenotypic selection, respectively. This study provides useful information to maize breeders to design genomic selection workflows in their breeding programs.
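
    As a hedged sketch of the computation underlying such studies (not the authors' pipeline), genomic prediction in a bi-parental population can be cast as ridge regression of phenotypes on SNP genotypes, with rMG taken as the correlation between GEBV and phenotype in the validation half; population size, marker count and heritability below are arbitrary:

        import numpy as np

        rng = np.random.default_rng(42)
        n, m, h2 = 200, 200, 0.5                     # lines, SNPs, heritability (assumed)
        M = rng.integers(0, 2, size=(n, m)) * 2 - 1  # biallelic genotypes coded -1/+1
        g = M @ rng.normal(0, 1, m)                  # true genetic values
        g = (g - g.mean()) / g.std()
        y = np.sqrt(h2) * g + np.sqrt(1 - h2) * rng.standard_normal(n)

        train = rng.permutation(n) < n // 2          # 50% training population
        lam = m * (1 - h2) / h2                      # rrBLUP-style shrinkage
        A = M[train]
        u_hat = np.linalg.solve(A.T @ A + lam * np.eye(m), A.T @ y[train])
        gebv = M[~train] @ u_hat                     # GEBV of "un-phenotyped" lines
        r_mg = np.corrcoef(gebv, y[~train])[0, 1]
        print(f"rMG = {r_mg:.2f}, rMG/h = {r_mg / np.sqrt(h2):.2f}")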

  13. Effect of Trait Heritability, Training Population Size and Marker Density on Genomic Prediction Accuracy Estimation in 22 bi-parental Tropical Maize Populations

    Directory of Open Access Journals (Sweden)

    Ao Zhang

    2017-11-01

    Full Text Available Genomic selection is being used increasingly in plant breeding to accelerate genetic gain per unit time. One of the most important applications of genomic selection in maize breeding is to predict and select the best un-phenotyped lines in bi-parental populations based on genomic estimated breeding values. In the present study, 22 bi-parental tropical maize populations genotyped with low density SNPs were used to evaluate the genomic prediction accuracy (rMG) of the six trait-environment combinations under various levels of training population size (TPS) and marker density (MD), and to assess the effect of trait heritability (h2), TPS and MD on rMG estimation. Our results showed that: (1) moderate rMG values were obtained for different trait-environment combinations when 50% of the total genotypes was used as the training population and ~200 SNPs were used for prediction; (2) rMG increased with an increase in h2, TPS and MD; both correlation and variance analyses showed that h2 is the most important factor and MD is the least important factor for rMG estimation for most of the trait-environment combinations; (3) predictions between pairwise half-sib populations showed that the rMG values for all six trait-environment combinations were centered around zero, with 49% of predictions having rMG values above zero; (4) the trend observed in rMG differed from the trend observed in rMG/h, where h is the square root of the heritability of the predicted trait, indicating that both rMG and rMG/h values should be presented in a GS study to show the accuracy of genomic selection and the relative accuracy of genomic selection compared with phenotypic selection, respectively. This study provides useful information to maize breeders to design genomic selection workflows in their breeding programs.

  14. Exploring the motivation jungle: predicting performance on a novel task by investigating constructs from different motivation perspectives in tandem.

    Science.gov (United States)

    Van Nuland, Hanneke J C; Dusseldorp, Elise; Martens, Rob L; Boekaerts, Monique

    2010-08-01

    Different theoretical viewpoints on motivation make it hard to decide which model has the best potential to provide valid predictions on classroom performance. This study was designed to explore motivation constructs derived from different motivation perspectives that predict performance on a novel task best. Motivation constructs from self-determination theory, self-regulation theory, and achievement goal theory were investigated in tandem. Performance was measured by systematicity (i.e. how systematically students worked on a problem-solving task) and test score (i.e. score on a multiple-choice test). Hierarchical regression analyses on data from 259 secondary school students showed a quadratic relation between a performance avoidance orientation and both performance outcomes, indicating that extreme high and low performance avoidance resulted in the lowest performance. Furthermore, two three-way interaction effects were found. Intrinsic motivation seemed to play a key role in test score and systematicity performance, provided that effort regulation and metacognitive skills were both high. Results indicate that intrinsic motivation in itself is not enough to attain a good performance. Instead, a moderate score on performance avoidance, together with the ability to remain motivated and effectively regulate and control task behavior, is needed to attain a good performance. High time management skills also contributed to higher test score and systematicity performance and a low performance approach orientation contributed to higher systematicity performance. We concluded that self-regulatory skills should be trained in order to have intrinsically motivated students perform well on novel tasks in the classroom.

  15. The development of U. S. soil erosion prediction and modeling

    Directory of Open Access Journals (Sweden)

    John M. Laflen

    2013-09-01

    Full Text Available Soil erosion prediction technology began over 70 years ago when Austin Zingg published a relationship between soil erosion (by water) and land slope and length, followed shortly by a relationship by Dwight Smith that expanded this equation to include conservation practices. But it was nearly 20 years before this work's expansion resulted in the Universal Soil Loss Equation (USLE), perhaps the foremost achievement in soil erosion prediction in the last century. The USLE has increased in application and complexity, and its usefulness and limitations have led to the development of additional technologies and new science in soil erosion research and prediction. Chief among these new technologies is the Water Erosion Prediction Project (WEPP) model, which has helped to overcome many of the shortcomings of the USLE and increased the scale over which erosion by water can be predicted. Areas of application of erosion prediction include almost all land types: urban, rural, cropland, forests, rangeland, and construction sites. Specialty applications of WEPP include prediction of radioactive material movement with soils at a superfund cleanup site, and near real-time daily estimation of soil erosion for the entire state of Iowa.
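
    For context, the USLE referred to above estimates average annual soil loss as a simple product of empirical factors; a minimal sketch with purely illustrative, non-site-specific values:

        def usle_soil_loss(R, K, LS, C, P):
            """Universal Soil Loss Equation: A = R * K * LS * C * P.
            R: rainfall erosivity, K: soil erodibility, LS: slope length-steepness,
            C: cover-management factor, P: support practice factor."""
            return R * K * LS * C * P

        # purely illustrative factor values for a cultivated sloping field
        print(usle_soil_loss(R=1200.0, K=0.03, LS=1.8, C=0.25, P=0.6), "t/ha/yr")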

  16. Predicted Interval Plots (PIPS): A Graphical Tool for Data Monitoring of Clinical Trials.

    Science.gov (United States)

    Li, Lingling; Evans, Scott R; Uno, Hajime; Wei, L J

    2009-11-01

    Group sequential designs are often used in clinical trials to evaluate efficacy and/or futility. Many methods have been developed for different types of endpoints and scenarios. However, few of these methods convey information regarding effect sizes (e.g., treatment differences) and none uses prediction to convey information regarding potential effect size estimates and associated precision, with trial continuation. To address these limitations, Evans et al. (2007) proposed to use prediction and predicted intervals as a flexible and practical tool for quantitative monitoring of clinical trials. In this article, we reaffirm the importance and usefulness of this innovative approach and introduce a graphical summary, predicted interval plots (PIPS), to display the information obtained in the prediction process in a straightforward yet comprehensive manner. We outline the construction of PIPS and apply this method in two examples. The results and the interpretations of the PIPS are discussed.

  17. Passive solar construction handbook

    Energy Technology Data Exchange (ETDEWEB)

    Levy, E.; Evans, D.; Gardstein, C.

    1981-08-01

    Many of the basic elements of passive solar design are reviewed. The unique design constraints presented in passive homes are introduced and many of the salient issues influencing design decisions are described briefly. Passive solar construction is described for each passive system type: direct gain, thermal storage wall, attached sunspace, thermal storage roof, and convective loop. For each system type, important design and construction issues are discussed and case studies illustrating designed and built examples of the system type are presented. Construction details are given and construction and thermal performance information is given for the materials used in collector components, storage components, and control components. Included are glazing materials, framing systems, caulking and sealants, concrete masonry, concrete, brick, shading, reflectors, and insulators. The Load Collector Ratio method for estimating passive system performance is appended, and other analysis methods are briefly summarized. (LEW)

  18. Modified linear predictive coding approach for moving target tracking by Doppler radar

    Science.gov (United States)

    Ding, Yipeng; Lin, Xiaoyi; Sun, Ke-Hui; Xu, Xue-Mei; Liu, Xi-Yao

    2016-07-01

    Doppler radar is a cost-effective tool for moving target tracking, which can support a large range of civilian and military applications. A modified linear predictive coding (LPC) approach is proposed to increase the target localization accuracy of the Doppler radar. Based on the time-frequency analysis of the received echo, the proposed approach first estimates the noise statistical parameters in real time and constructs an adaptive filter to intelligently suppress the noise interference. Then, a linear predictive model is applied to extend the available data, which can help improve the resolution of the target localization result. Compared with the traditional LPC method, which decides the extension data length empirically, the proposed approach develops an error array to evaluate the prediction accuracy and thus adjust the optimum extension data length intelligently. Finally, the prediction error array is superimposed on the predictor output to correct the prediction error. A series of experiments are conducted to illustrate the validity and performance of the proposed techniques.
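
    The classical LPC data-extension step at the core of such approaches can be sketched as follows: fit an order-p autoregressive predictor via the Yule-Walker (autocorrelation) equations and extend the record by recursive prediction. The order and extension length are illustrative, and the paper's adaptive filter and error-array correction are not reproduced:

        import numpy as np

        def lpc_coefficients(x, p):
            # solve the Yule-Walker (autocorrelation) equations for order-p LPC
            r = np.correlate(x, x, mode="full")[x.size - 1:x.size + p]
            R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
            return np.linalg.solve(R, r[1:p + 1])

        def lpc_extend(x, p, n_ext):
            # extend the record by recursive prediction from the last p samples
            a = lpc_coefficients(x, p)
            y = list(x)
            for _ in range(n_ext):
                y.append(np.dot(a, y[-1:-p - 1:-1]))
            return np.asarray(y)

        rng = np.random.default_rng(0)
        sig = np.sin(0.2 * np.arange(256)) + 0.01 * rng.standard_normal(256)
        ext = lpc_extend(sig, p=12, n_ext=64)
        print(ext.shape)              # (320,): original data plus 64 predicted samples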

  19. A new method for the construction of a mutant library with a predictable occurrence rate using Poisson distribution.

    Science.gov (United States)

    Seong, Ki Moon; Park, Hweon; Kim, Seong Jung; Ha, Hyo Nam; Lee, Jae Yung; Kim, Joon

    2007-06-01

    A yeast transcriptional activator, Gcn4p, induces the expression of genes that are involved in amino acid and purine biosynthetic pathways under amino acid starvation. Gcn4p has an acidic activation domain in the central region and a bZIP domain in the C-terminus that is divided into the DNA-binding motif and the dimerization leucine zipper motif. In order to identify amino acids in the DNA-binding motif of Gcn4p which are involved in transcriptional activation, we constructed mutant libraries in the DNA-binding motif through an innovative application of random mutagenesis. A mutant library made with oligonucleotides that were randomly mutated according to the Poisson distribution showed that the actual mutation frequency was in good agreement with the expected values. This method could save the time and effort required to create a mutant library with a predictable mutation frequency. Based on the studies using the mutant libraries constructed by the new method, specific residues of the DNA-binding domain in Gcn4p appear to be involved in the transcriptional activities on a conserved binding site.
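
    The design rests on elementary Poisson arithmetic: if each oligonucleotide carries on average lambda mutations, the fraction of clones with exactly k mutations is exp(-lambda) * lambda^k / k!. A brief sketch with an illustrative doping level:

        import math

        lam = 1.5     # average mutations per oligonucleotide (illustrative doping level)
        for k in range(5):
            p = math.exp(-lam) * lam ** k / math.factorial(k)
            print(f"P(exactly {k} mutations) = {p:.3f}")
        print(f"P(at least one mutation) = {1 - math.exp(-lam):.3f}")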

  20. Application of a Predictive Growth Model of Pseudomonas spp. for Estimating Shelf Life of Fresh Agaricus bisporus.

    Science.gov (United States)

    Wang, Jianming; Chen, Junran; Hu, Yunfeng; Hu, Hanyan; Liu, Guohua; Yan, Ruixiang

    2017-10-01

    For prediction of the shelf life of the mushroom Agaricus bisporus, the growth curve of the main spoilage microorganisms was studied under isothermal conditions at 2 to 22°C with a modified Gompertz model. The effect of temperature on the growth parameters of the main spoilage microorganisms was quantified and modeled using the square root model. Pseudomonas spp. were the main microorganisms causing A. bisporus decay, and the modified Gompertz model was useful for modeling the growth curve of Pseudomonas spp. All the bias factor values of the model were close to 1. By combining the modified Gompertz model with the square root model, a prediction model to estimate the shelf life of A. bisporus as a function of storage temperature was developed. The model was validated for A. bisporus stored at 6, 12, and 18°C, and adequate agreement was found between the experimental and predicted data.
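
    A sketch of the two-level model structure described (with invented parameter values): the modified Gompertz equation gives log counts over time at a fixed temperature, and a Ratkowsky-type square root model makes the maximum growth rate a function of storage temperature:

        import numpy as np

        def gompertz(t, y0, C, mu_max, lag):
            # modified Gompertz primary model: log10 count vs. time at one temperature
            return y0 + C * np.exp(-np.exp(mu_max * np.e / C * (lag - t) + 1))

        def mu_from_temperature(T, b, T_min):
            # square root (Ratkowsky-type) secondary model: sqrt(mu_max) = b (T - T_min)
            return (b * (T - T_min)) ** 2

        T = 12.0                                      # storage temperature, deg C
        mu = mu_from_temperature(T, b=0.03, T_min=-3.0)
        t = np.linspace(0, 200, 9)                    # storage time, h
        print(np.round(gompertz(t, y0=3.0, C=5.0, mu_max=mu, lag=20.0), 2))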

  1. Morphometry Predicts Early GFR Change in Primary Proteinuric Glomerulopathies: A Longitudinal Cohort Study Using Generalized Estimating Equations.

    Directory of Open Access Journals (Sweden)

    Kevin V Lemley

    Full Text Available Most predictive models of kidney disease progression have not incorporated structural data. If structural variables have been used in models, they have generally been only semi-quantitative. We examined the predictive utility of quantitative structural parameters measured on the digital images of baseline kidney biopsies from the NEPTUNE study of primary proteinuric glomerulopathies. These variables were included in longitudinal statistical models predicting the change in estimated glomerular filtration rate (eGFR) over up to 55 months of follow-up. The participants were fifty-six pediatric and adult subjects from the NEPTUNE longitudinal cohort study who had measurements made on their digital biopsy images; 25% were African-American, 70% were male and 39% were children; 25 had focal segmental glomerular sclerosis, 19 had minimal change disease, and 12 had membranous nephropathy. We considered four different sets of candidate predictors, each including four quantitative structural variables (for example, mean glomerular tuft area, cortical density of patent glomeruli, and two of the principal components from the correlation matrix of six fractional cortical areas: interstitium, atrophic tubule, intact tubule, blood vessel, sclerotic glomerulus, and patent glomerulus), along with 13 potentially confounding demographic and clinical variables (such as race, age, diagnosis, baseline eGFR, quantitative proteinuria and BMI). We used longitudinal linear models based on these 17 variables to predict the change in eGFR over up to 55 months. All 4 models had a leave-one-out cross-validated R2 of about 62%. Several combinations of quantitative structural variables were significantly and strongly associated with changes in eGFR. The structural variables were generally stronger than any of the confounding variables, other than baseline eGFR. Our findings suggest that quantitative assessment of diagnostic renal biopsies may play a role in estimating the baseline

  2. Power flow prediction in vibrating systems via model reduction

    Science.gov (United States)

    Li, Xianhui

    This dissertation focuses on power flow prediction in vibrating systems. Reduced order models (ROMs) are built using rational Krylov model reduction, which preserves power flow information in the original systems over a specified frequency band. Stiffness and mass matrices of the ROMs are obtained by projecting the original system matrices onto the subspaces spanned by forced responses. A matrix-free algorithm is designed to construct ROMs directly from the power quantities at selected interpolation frequencies. Strategies for parallel implementation of the algorithm via message passing interface are proposed. The quality of ROMs is iteratively refined according to the error estimate based on residual norms. Band capacity is proposed to provide an a priori estimate of the sizes of good quality ROMs. Frequency averaging is recast as ensemble averaging, and the Cauchy distribution is used to simplify the computation. Besides model reduction for deterministic systems, details of constructing ROMs for parametric and nonparametric random systems are also presented. Case studies have been conducted on testbeds from the Harwell-Boeing collections. Input and coupling power flow are computed for the original systems and the ROMs. Good agreement is observed in all cases.
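
    The projection idea can be illustrated on a toy system: solve for forced responses at a few interpolation frequencies, orthonormalize them into a basis V, and reduce the stiffness and mass matrices to V^T K V and V^T M V; at an interpolation frequency the ROM reproduces the full response. Sizes and frequencies are arbitrary, and the matrix-free, parallel machinery of the dissertation is not reproduced:

        import numpy as np

        n = 200
        K = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
             - np.diag(np.ones(n - 1), -1)) * 1e4    # toy stiffness (spring chain)
        M = np.eye(n)                                # toy mass matrix
        f = np.zeros(n); f[0] = 1.0                  # point force

        freqs = [2.0, 5.0, 9.0]                      # interpolation frequencies [rad/s]
        X = np.column_stack([np.linalg.solve(K - w ** 2 * M, f) for w in freqs])
        V, _ = np.linalg.qr(X)                       # orthonormal forced-response basis

        Kr, Mr, fr = V.T @ K @ V, V.T @ M @ V, V.T @ f
        w = 5.0                                      # evaluate at an interpolation point
        x_full = np.linalg.solve(K - w ** 2 * M, f)
        x_rom = V @ np.linalg.solve(Kr - w ** 2 * Mr, fr)
        print(np.linalg.norm(x_full - x_rom) / np.linalg.norm(x_full))  # ~ machine eps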

  3. Copula based prediction models: an application to an aortic regurgitation study

    Directory of Open Access Journals (Sweden)

    Shoukri Mohamed M

    2007-06-01

    Full Text Available Abstract Background: An important issue in prediction modeling of multivariate data is the measure of dependence structure. The use of Pearson's correlation as a dependence measure has several pitfalls, and hence application of regression prediction models based on this correlation may not be an appropriate methodology. As an alternative, a copula-based methodology for prediction modeling and an algorithm to simulate data are proposed. Methods: The method consists of introducing copulas as an alternative to the correlation coefficient commonly used as a measure of dependence. An algorithm based on the marginal distributions of random variables is applied to construct the Archimedean copulas. Monte Carlo simulations are carried out to replicate datasets, estimate prediction model parameters and validate them using Lin's concordance measure. Results: We have carried out a correlation-based regression analysis on data from 20 patients aged 17–82 years on pre-operative and post-operative ejection fractions after surgery and estimated the prediction model: Post-operative ejection fraction = -0.0658 + 0.8403 (Pre-operative ejection fraction); p = 0.0008; 95% confidence interval of the slope coefficient (0.3998, 1.2808). From the exploratory data analysis, it is noted that both the pre-operative and post-operative ejection fraction measurements have slight departures from symmetry and are skewed to the left. It is also noted that the measurements tend to be widely spread and have shorter tails compared to the normal distribution. Therefore predictions made from the correlation-based model corresponding to pre-operative ejection fraction measurements in the lower range may not be accurate. Further, it is found that the best approximating marginal distributions of pre-operative and post-operative ejection fractions (using q-q plots) are gamma distributions. The copula-based prediction model is estimated as: Post-operative ejection fraction = -0.0933 + 0
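
    A hedged sketch of the type of simulation algorithm described: sample dependent uniforms from an Archimedean copula (here Clayton, by conditional inversion) and push them through gamma marginals; the dependence parameter and gamma parameters are invented, not fitted to the study's data:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(7)
        theta, n = 2.0, 1000                 # Clayton dependence parameter (illustrative)

        # conditional inversion sampler for the Clayton copula
        u = rng.uniform(size=n)
        w = rng.uniform(size=n)
        v = ((w ** (-theta / (1.0 + theta)) - 1.0) * u ** (-theta) + 1.0) ** (-1.0 / theta)

        # push the dependent uniforms through assumed gamma marginals
        pre = stats.gamma.ppf(u, a=30.0, scale=0.02)    # pre-operative EF
        post = stats.gamma.ppf(v, a=25.0, scale=0.025)  # post-operative EF
        print(np.corrcoef(pre, post)[0, 1])             # induced dependence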

  4. Estimation of erosion-accumulative processes at the Inia River’s mouth near high-rise construction zones.

    Directory of Open Access Journals (Sweden)

    Sineeva Natalya

    2018-01-01

    Full Text Available The relevance of our study stems from the increasing man-made impact on water bodies and the associated land resources within urban areas and, as a consequence, from changes in the morphology and dynamics of river channels. This leads to the need to predict the development of erosion-accumulation processes, especially within built-up urban areas. The purpose of the study is to develop a program for assessing erosion-accumulation processes at a water body, the mouth area of the Inia River, in a zone of prospective high-rise construction of a residential microdistrict, where the floodplain-channel complex is expected to develop intensively. Results of the study: In comparing the water flow velocities measured in the field with those calculated from the model, only a slight discrepancy was recorded. This allows us to say that the numerical model reliably describes the physical processes developing in the river. The calculations carried out to assess the direction and intensity of the channel re-formations allowed us to conclude that there is an insignificant predominance of erosion processes over accumulative ones on the undeveloped part of the Inia River (the activity of these processes is noticeable only in certain areas, by the coasts and the island). Importance of the study: The evaluation of erosion-accumulation processes can be used in design decisions for the future high-rise construction of this territory, which will increase their economic efficiency.

  5. TBM performance prediction in Yucca Mountain welded tuff from linear cutter tests

    International Nuclear Information System (INIS)

    Gertsch, R.; Ozdemir, L.; Gertsch, L.

    1992-01-01

    This paper discusses performance predictions which were developed for tunnel boring machines operating in welded tuff for the construction of the experimental study facility and the potential nuclear waste repository at Yucca Mountain. The predictions were based on test data obtained from an extensive series of linear cutting tests performed on samples of Topopah Spring welded tuff from the Yucca Mountain Project site. Using the cutter force, spacing, and penetration data from the experimental program, the thrust, torque, power, and rate of penetration were estimated for a 25 ft diameter tunnel boring machine (TBM) operating in welded tuff. The results show that the Topopah Spring welded tuff (TSw2) can be excavated at relatively high rates of advance with state-of-the-art TBMs. The results also show, however, that the TBM torque and power requirements will be higher than estimated based on rock physical properties and past tunneling experience in rock formations of similar strength.

  6. Predicting the risk of suicide by analyzing the text of clinical notes.

    Science.gov (United States)

    Poulin, Chris; Shiner, Brian; Thompson, Paul; Vepstas, Linas; Young-Xu, Yinong; Goertzel, Benjamin; Watts, Bradley; Flashman, Laura; McAllister, Thomas

    2014-01-01

    We developed linguistics-driven prediction models to estimate the risk of suicide. These models were generated from unstructured clinical notes taken from a national sample of U.S. Veterans Administration (VA) medical records. We created three matched cohorts: veterans who committed suicide, veterans who used mental health services and did not commit suicide, and veterans who did not use mental health services and did not commit suicide during the observation period (n = 70 in each group). From the clinical notes, we generated datasets of single keywords and multi-word phrases, and constructed prediction models using a machine-learning algorithm based on a genetic programming framework. The resulting inference accuracy was consistently 65% or more. Our data therefore suggests that computerized text analytics can be applied to unstructured medical records to estimate the risk of suicide. The resulting system could allow clinicians to potentially screen seemingly healthy patients at the primary care level, and to continuously evaluate the suicide risk among psychiatric patients.

  7. Predicting the Risk of Suicide by Analyzing the Text of Clinical Notes

    Science.gov (United States)

    Thompson, Paul; Vepstas, Linas; Young-Xu, Yinong; Goertzel, Benjamin; Watts, Bradley; Flashman, Laura; McAllister, Thomas

    2014-01-01

    We developed linguistics-driven prediction models to estimate the risk of suicide. These models were generated from unstructured clinical notes taken from a national sample of U.S. Veterans Administration (VA) medical records. We created three matched cohorts: veterans who committed suicide, veterans who used mental health services and did not commit suicide, and veterans who did not use mental health services and did not commit suicide during the observation period (n = 70 in each group). From the clinical notes, we generated datasets of single keywords and multi-word phrases, and constructed prediction models using a machine-learning algorithm based on a genetic programming framework. The resulting inference accuracy was consistently 65% or more. Our data therefore suggests that computerized text analytics can be applied to unstructured medical records to estimate the risk of suicide. The resulting system could allow clinicians to potentially screen seemingly healthy patients at the primary care level, and to continuously evaluate the suicide risk among psychiatric patients. PMID:24489669

  8. Construction Management: Planning Ahead.

    Science.gov (United States)

    Arsht, Steven

    2003-01-01

    Explains that preconstruction planning is essential when undertaking the challenges of a school building renovation or expansion, focusing on developing a detailed estimate, creating an effective construction strategy, conducting reviews and value-engineering workshops, and realizing savings through effective risk analysis and contingency…

  9. Experience in using a multilevel model of the boundary layer for estimating changes in microclimatic characteristics in the region of construction of the Adychan and middle Enisei hydroelectric stations

    International Nuclear Information System (INIS)

    Kondratyuk, T.P.; Shklyarevich, O.B.

    1993-01-01

    The results of estimating the impact of artificial water bodies in regions of construction of hydroelectric stations on the micro- and mesoclimatic characteristics of the surrounding territory are given

  10. BIM – New rules of measurement ontology for construction cost estimation

    Directory of Open Access Journals (Sweden)

    F.H. Abanda

    2017-04-01

    Full Text Available For generations, the process of cost estimation has been manual, time-consuming and error-prone. Emerging Building Information Modelling (BIM) can exploit standard measurement methods to automate the cost estimation process and reduce inaccuracies. Structuring standard measurement methods in an ontological and machine-readable format for BIM software can greatly facilitate the reduction of inaccuracies in cost estimation. This study explores the development of an ontology based on the New Rules of Measurement (NRM) for cost estimation during the tendering stages. The methodology adopted is Methontology, one of the most widely used ontology engineering methodologies. To ensure the ontology is fit for purpose, cost estimation experts are employed to check the semantics, descriptive logic-based reasoners are used to syntactically check the ontology, and a leading 4D BIM modelling software is used on a case study building to test and validate the proposed ontology.

  11. Examination of the site amplification factor of OBS and their application to magnitude estimation and ground-motion prediction for EEW

    Science.gov (United States)

    Hayashimoto, N.; Hoshiba, M.

    2013-12-01

    1. Introduction: Ocean bottom seismographs (OBSs) are useful for making Earthquake Early Warning (EEW) earlier. However, careful handling of their data is required because the installation environment of OBSs may differ from that of land stations. The site amplification factor is an important factor for estimating magnitudes and for predicting ground motions (e.g. seismic intensity) in EEW. In this presentation, we discuss the site amplification factor of OBSs in the Tonankai area of Japan from these two points of view. 2. Examination of magnitude correction of OBSs: In the EEW of JMA, the magnitude is estimated from the maximum amplitude of the displacement in real time. To provide fast magnitude estimation, the magnitude-estimation algorithm switches from the P to the S formula (Meew(P) to Meew(S)) depending on the expected S-phase arrival (Kamigaichi, 2004). To estimate the magnitude correction for OBSs, we determine Meew(P) and Meew(S) at OBSs and compare them with the JMA magnitude (Mjma). We find that Meew(S) at OBSs is generally larger than Mjma by approximately 0.6. Slight differences in the spatial distribution of the Meew(S) amplification are also found among the OBSs. From numerical simulations, Nakamura et al. (MGR, submitted) pointed out that the oceanic layer and the low-velocity sediment layers cause the large amplifications in the low frequency range (0.1-0.2 Hz) at OBSs. We conclude that the site effect of OBSs, characterized by such low-velocity sediment layers, causes this amplification of magnitude. 3. The frequency-dependent site factor of OBSs estimated from Fourier spectral ratios and their application to prediction of seismic intensity at land stations: We compare Fourier spectra of the S-wave portion at OBSs with those at adjacent land stations. Station pairs whose distance is smaller than 50 km are analyzed, and we obtain that the spectral ratio of a land station (MIEH05 of the KiK-net/NIED) to an OBS (KMA01 of the DONET/JAMSTEC) is 5-20 for frequencies 10-20 Hz for both

  12. Predicting fundamental and realized distributions based on thermal niche: A case study of a freshwater turtle

    Science.gov (United States)

    Rodrigues, João Fabrício Mota; Coelho, Marco Túlio Pacheco; Ribeiro, Bruno R.

    2018-04-01

    Species distribution models (SDM) have been broadly used in ecology to address theoretical and practical problems. Currently, there are two main approaches to generating SDMs: (i) correlative, which is based on species occurrences and environmental predictor layers, and (ii) process-based models, which are constructed from species' functional traits and physiological tolerances. The distributions estimated by each approach are based on different components of the species niche. Predictions of correlative models approach species' realized niches, while predictions of process-based models are more akin to the species' fundamental niche. Here, we integrated the predictions of fundamental and realized distributions of the freshwater turtle Trachemys dorbigni. The fundamental distribution was estimated using data on T. dorbigni's egg incubation temperature, and the realized distribution was estimated using species occurrence records. Both types of distributions were estimated using the same regression approaches (logistic regression and support vector machines), considering both macroclimatic and microclimatic temperatures. The realized distribution of T. dorbigni was generally nested in its fundamental distribution, reinforcing theoretical assumptions that the species' realized niche is a subset of its fundamental niche. Both modelling algorithms produced similar results, but microtemperature generated better results than macrotemperature for the incubation model. Finally, our results reinforce the conclusion that species' realized distributions are constrained by factors other than just thermal tolerances.
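
    The contrast between the two approaches can be sketched on synthetic data: a process-based rule thresholds temperature by an assumed tolerance window, while a correlative model fits occurrences statistically; the nesting check at the end mirrors the paper's conclusion. All values are invented, not T. dorbigni's:

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(5)
        temp = rng.uniform(10, 35, size=500)            # site temperatures, deg C

        # process-based ("fundamental") rule: an assumed incubation tolerance window
        fundamental = (temp > 22) & (temp < 33)

        # occurrences: a subset of the fundamental range (dispersal, biotic limits)
        occurrence = fundamental & (temp < 30) & (rng.uniform(size=temp.size) < 0.8)

        # correlative ("realized") model: logistic regression on temperature
        X = np.column_stack([temp, temp ** 2])          # quadratic response curve
        realized = LogisticRegression(max_iter=1000).fit(X, occurrence).predict(X)

        # realized predictions should fall largely inside the fundamental envelope
        nested = fundamental[realized.astype(bool)].mean()
        print(f"realized sites inside fundamental envelope: {nested:.2%}")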

  13. Predictive models of biomass for poplar and willow. Short rotation coppice in the United Kingdom

    Energy Technology Data Exchange (ETDEWEB)

    Brewer, A.C.; Morgan, G.W.; Poole, E.J.; Baldwin, M.E.; Tubby, I. (Biometrics, Surveys and Statistics Division, Forest Research, Farnham (United Kingdom))

    2007-07-01

    A series of forty-nine experimental trials on short rotation coppice (SRC) were conducted throughout the United Kingdom using a selection of varieties of poplar and willow, with the aim of evaluating their performance for wood fuel production under a representative range of UK conditions. Observations on the crops and on a range of site and climatic conditions during the growth of the crops were taken over two three-year cutting cycles. These observations were used to develop a suite of empirical models for poplar and willow SRC growth and yield, from which systems were constructed to provide a priori predictions of biomass yield for any site in the UK with known characteristics (predictive yield models), and estimates of biomass yield from a standing crop (standing biomass models). The structure of the series of field trials and the consequent approach and methodology used in the construction of the suite of empirical models are described, and their use in predicting biomass yields of poplar and willow SRC is discussed. (orig.)

  14. Four hundred or more participants needed for stable contingency table estimates of clinical prediction rule performance

    DEFF Research Database (Denmark)

    Kent, Peter; Boyle, Eleanor; Keating, Jennifer L

    2017-01-01

    OBJECTIVE: To quantify variability in the results of statistical analyses based on contingency tables and discuss the implications for the choice of sample size for studies that derive clinical prediction rules. STUDY DESIGN AND SETTING: An analysis of three pre-existing sets of large cohort data......, odds ratios and risk/prevalence ratios, for each sample size was calculated. RESULTS: There were very wide, and statistically significant, differences in estimates derived from contingency tables from the same dataset when calculated in sample sizes below 400 people, and typically this variability...... stabilized in samples of 400 to 600 people. Although estimates of prevalence also varied significantly in samples below 600 people, that relationship only explains a small component of the variability in these statistical parameters. CONCLUSION: To reduce sample-specific variability, contingency tables...
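
    The sample-size effect is easy to reproduce in simulation: resample cohorts of increasing size from fixed event rates, compute the odds ratio from each 2x2 contingency table, and watch the spread of estimates narrow as n reaches several hundred (rates below are invented):

        import numpy as np

        rng = np.random.default_rng(11)
        p_pos, p_neg = 0.4, 0.2          # assumed event rates in rule-positive/negative
        for n in (100, 200, 400, 800):
            ors = []
            for _ in range(2000):
                pos = rng.binomial(1, p_pos, n // 2).sum()
                neg = rng.binomial(1, p_neg, n // 2).sum()
                a, b = pos + 0.5, n // 2 - pos + 0.5     # 0.5 continuity correction
                c, d = neg + 0.5, n // 2 - neg + 0.5
                ors.append(a * d / (b * c))
            lo, hi = np.percentile(ors, [2.5, 97.5])
            print(f"n={n:4d}: 95% of odds ratio estimates fall in {lo:.2f}-{hi:.2f}")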

  15. Construction time of PWRs

    International Nuclear Information System (INIS)

    Moreira, João M.L.; Gallinaro, Bruno; Carajilescov, Pedro

    2013-01-01

    The construction time of PWRs is studied considering published data about nuclear power plants in the world. For the 268 PWRs in operation in 2010, the mode of the construction time distribution is around 5–6 years, and 80% of the plants were built in less than 120 months. To circumvent the problem of comparing plants of different sizes, we normalized the construction time to that of a 1 GW plant. We restricted the analysis to 201 PWRs, built from 1965 to 2010, which suffered less from external factors that were beyond the control of management. The results showed that the normalized construction time did not increase over the years, nor with the plants' gross power level. The learning rate of the industry regarding normalized construction times showed a reduction, with 95% confidence level, of about 0.56±0.07 months for each 10 GW of installed capacity. Over the years the normalized construction time decreased and became more predictable. The data showed that countries with more centralized regulatory, construction and operation environments were able to build PWRs in shorter times. Countries less experienced with nuclear technology built PWRs in longer times. - Highlights: ► The construction time of PWRs is analyzed based on historical data. ► Different factors affecting construction time are considered in the analyses. ► The normalized construction time of PWRs decreased with time and gross power level. ► Countries with more centralized institutions built PWRs more quickly

  16. 36 CFR 223.62 - Timber purchaser road construction credit.

    Science.gov (United States)

    2010-07-01

    ... § 223.62 Timber purchaser road construction credit. Appraisal may also establish stumpage value as if... timber is appraised and sold on such basis, purchaser credit for road construction, not to exceed the estimated construction cost of such roads or other developments specified in the timber sale contract, shall...

  17. Prediction of bull fertility.

    Science.gov (United States)

    Utt, Matthew D

    2016-06-01

    Prediction of male fertility is an often sought-after endeavor for many species of domestic animals. This review will primarily focus on providing some examples of dependent and independent variables to stimulate thought about the approach and methodology of identifying the most appropriate of those variables to predict bull (bovine) fertility. Although the list of variables will continue to grow with advancements in science, the principles behind making predictions will likely not change significantly. The basic principle of prediction requires identifying a dependent variable that is an estimate of fertility and an independent variable or variables that may be useful in predicting the fertility estimate. Fertility estimates vary in which parts of the process leading to conception that they infer about and the amount of variation that influences the estimate and the uncertainty thereof. The list of potential independent variables can be divided into competence of sperm based on their performance in bioassays or direct measurement of sperm attributes. A good prediction will use a sample population of bulls that is representative of the population to which an inference will be made. Both dependent and independent variables should have a dynamic range in their values. Careful selection of independent variables includes reasonable measurement repeatability and minimal correlation among variables. Proper estimation and having an appreciation of the degree of uncertainty of dependent and independent variables are crucial for using predictions to make decisions regarding bull fertility. Copyright © 2016 Elsevier B.V. All rights reserved.

  18. Copula-based prediction of economic movements

    Science.gov (United States)

    García, J. E.; González-López, V. A.; Hirsh, I. D.

    2016-06-01

    In this paper we model the discretized returns of two paired time series, the BM&FBOVESPA Dividend Index and the BM&FBOVESPA Public Utilities Index, using multivariate Markov models. The discretization corresponds to three categories: high losses, high profits and the complementary periods of the series. In technical terms, the maximal memory that can be considered for a Markov model can be derived from the size of the alphabet and the dataset. The number of parameters needed to specify a discrete multivariate Markov chain grows exponentially with the order and dimension of the chain. In this case the size of the database is not large enough for a consistent estimation of the model. We apply a strategy to estimate a multivariate process with an order greater than the order achievable using standard procedures. The new strategy consists of obtaining a partition of the state space which is constructed from a combination of the partitions corresponding to the two marginal processes and the partition corresponding to the multivariate Markov chain. In order to estimate the transition probabilities, all the partitions are linked using a copula. In our application this strategy provides a significant improvement in the movement predictions.

  19. Predicting waist circumference from body mass index.

    Science.gov (United States)

    Bozeman, Samuel R; Hoaglin, David C; Burton, Tanya M; Pashos, Chris L; Ben-Joseph, Rami H; Hollenbeak, Christopher S

    2012-08-03

    Being overweight or obese increases risk for cardiometabolic disorders. Although both body mass index (BMI) and waist circumference (WC) measure the level of overweight and obesity, WC may be more important because of its closer relationship to total body fat. Because WC is typically not assessed in clinical practice, this study sought to develop and verify a model to predict WC from BMI and demographic data, and to use the predicted WC to assess cardiometabolic risk. Data were obtained from the Third National Health and Nutrition Examination Survey (NHANES) and the Atherosclerosis Risk in Communities Study (ARIC). We developed linear regression models for men and women using NHANES data, fitting waist circumference as a function of BMI. For validation, those regressions were applied to ARIC data, assigning a predicted WC to each individual. We used the predicted WC to assess abdominal obesity and cardiometabolic risk. The model correctly classified 88.4% of NHANES subjects with respect to abdominal obesity. Median differences between actual and predicted WC were -0.07 cm for men and 0.11 cm for women. In ARIC, the model closely estimated the observed WC (median difference: -0.34 cm for men, +3.94 cm for women), correctly classifying 86.1% of ARIC subjects with respect to abdominal obesity and 91.5% to 99.5% with respect to cardiometabolic risk. The model is generalizable to Caucasian and African-American adult populations because it was constructed from data on a large, population-based sample of men and women in the United States, and then validated in a population with a larger representation of African-Americans. The model accurately estimates WC and identifies cardiometabolic risk. It should be useful for health care practitioners and public health officials who wish to identify individuals and populations at risk for cardiometabolic disease when WC data are unavailable.
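
    A sketch of the modeling step on synthetic data (the published coefficients are deliberately not reproduced): ordinary least squares fits WC on BMI and age for one sex, and the fitted model then assigns a predicted WC, and an abdominal-obesity flag, to a new individual:

        import numpy as np

        rng = np.random.default_rng(8)
        n = 1000
        bmi = rng.normal(28, 5, n)
        age = rng.uniform(20, 80, n)
        # synthetic "true" relationship for one sex (coefficients invented)
        wc = 12.0 + 2.6 * bmi + 0.10 * age + rng.normal(0, 4, n)

        X = np.column_stack([np.ones(n), bmi, age])
        beta, *_ = np.linalg.lstsq(X, wc, rcond=None)    # OLS fit: WC ~ BMI + age
        print("intercept, BMI, age coefficients:", np.round(beta, 3))

        # predicted WC for a new individual and a male abdominal-obesity flag (>102 cm)
        wc_hat = beta @ [1.0, 31.0, 55.0]
        print(f"predicted WC = {wc_hat:.1f} cm, abdominal obesity: {wc_hat > 102}")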

  20. Construction and demolition waste indicators.

    Science.gov (United States)

    Mália, Miguel; de Brito, Jorge; Pinheiro, Manuel Duarte; Bravo, Miguel

    2013-03-01

    The construction industry is one of the biggest and most active sectors of the European Union (EU), consuming more raw materials and energy than any other economic activity. Furthermore, construction waste is the commonest waste produced in the EU. Current EU legislation sets out to implement construction and demolition waste (CDW) prevention and recycling measures. However it lacks tools to accelerate the development of a sector as bound by tradition as the building industry. The main objective of the present study was to determine indicators to estimate the amount of CDW generated on site both globally and by waste stream. CDW generation was estimated for six specific sectors: new residential construction, new non-residential construction, residential demolition, non-residential demolition, residential refurbishment, and non-residential refurbishment. The data needed to develop the indicators was collected through an exhaustive survey of previous international studies. The indicators determined suggest that the average composition of waste generated on site is mostly concrete and ceramic materials. Specifically, for new residential and new non-residential construction the production of concrete waste in buildings with a reinforced concrete structure lies between 17.8 and 32.9 kg m(-2) and between 18.3 and 40.1 kg m(-2), respectively. For the residential and non-residential demolition sectors the production of this waste stream in buildings with a reinforced concrete structure varies from 492 to 840 kg m(-2) and from 401 to 768 kg m(-2), respectively. For the residential and non-residential refurbishment sectors the production of concrete waste in buildings lies between 18.9 and 45.9 kg m(-2) and between 18.9 and 191.2 kg m(-2), respectively.

  1. Mass estimation of loose parts in nuclear power plant based on multiple regression

    International Nuclear Information System (INIS)

    He, Yuanfeng; Cao, Yanlong; Yang, Jiangxin; Gan, Chunbiao

    2012-01-01

    According to the application of the Hilbert–Huang transform to non-stationary signals and the relation between the mass of loose parts in a nuclear power plant and the corresponding frequency content, a new method for loose part mass estimation based on the marginal Hilbert–Huang spectrum (MHS) and multiple regression is proposed in this paper. The frequency spectrum of a loose part in a nuclear power plant can be expressed by the MHS. A multiple regression model constructed from the MHS features of the impact signals is used to predict the unknown mass of a loose part. A simulated experiment verified that the method is feasible and that the errors of the results are acceptable. (paper)
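
    A toy version of the feature-plus-regression chain, using a plain Hilbert transform of the raw signal as a crude stand-in for the full Hilbert-Huang decomposition; the assumed mass-frequency relation and all parameters are invented:

        import numpy as np
        from scipy.signal import hilbert

        rng = np.random.default_rng(9)
        fs, n = 5000.0, 2048
        t = np.arange(n) / fs

        def impact_signal(mass):
            # toy impact: assumed ringing frequency falls with the square root of mass
            f0 = 800.0 / np.sqrt(mass)
            return np.exp(-20 * t) * np.sin(2 * np.pi * f0 * t) + 0.002 * rng.standard_normal(n)

        def mean_inst_freq(x):
            # amplitude-weighted mean instantaneous frequency of the analytic signal,
            # a crude stand-in for a feature read off the marginal Hilbert spectrum
            z = hilbert(x)
            inst_f = np.diff(np.unwrap(np.angle(z))) * fs / (2 * np.pi)
            return np.sum(np.abs(z)[:-1] * inst_f) / np.sum(np.abs(z)[:-1])

        masses = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
        feats = np.log([mean_inst_freq(impact_signal(m)) for m in masses])

        # log-log regression; with f0 proportional to 1/sqrt(mass) the slope is near -2
        A = np.column_stack([np.ones(feats.size), feats])
        beta, *_ = np.linalg.lstsq(A, np.log(masses), rcond=None)
        m_hat = np.exp(beta @ [1.0, np.log(mean_inst_freq(impact_signal(3.0)))])
        print(f"estimated mass of the unknown part: {m_hat:.2f} (true 3.0)")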

  2. Estimated carotid-femoral pulse wave velocity has similar predictive value as measured carotid-femoral pulse wave velocity

    DEFF Research Database (Denmark)

    Greve, Sara V; Blicher, Marie K; Kruger, Ruan

    2016-01-01

    BACKGROUND: Carotid-femoral pulse wave velocity (cfPWV) adds significantly to traditional cardiovascular risk prediction, but is not widely available. Therefore, it would be helpful if cfPWV could be replaced by an estimated carotid-femoral pulse wave velocity (ePWV) using age and mean blood pres...... that these traditional risk scores have underestimated the complicated impact of age and blood pressure on arterial stiffness and cardiovascular risk....

  3. The Role of a PMI-Prediction Model in Evaluating Forensic Entomology Experimental Design, the Importance of Covariates, and the Utility of Response Variables for Estimating Time Since Death

    Directory of Open Access Journals (Sweden)

    Jeffrey Wells

    2017-05-01

    Full Text Available The most common forensic entomological application is the estimation of some portion of the time since death, or postmortem interval (PMI. To our knowledge, a PMI estimate is almost never accompanied by an associated probability. Statistical methods are now available for calculating confidence limits for an insect-based prediction of PMI for both succession and development data. In addition to it now being possible to employ these approaches in validation experiments and casework, it is also now possible to use the criterion of prediction performance to guide training experiments, i.e., to modify carrion insect development or succession experiment design in ways likely to improve the performance of PMI predictions using the resulting data. In this paper, we provide examples, derived from our research program on calculating PMI estimate probabilities, of how training data experiment design can influence the performance of a statistical model for PMI prediction.

  4. 48 CFR 36.214 - Special procedures for price negotiation in construction contracting.

    Science.gov (United States)

    2010-10-01

    ... price negotiation in construction contracting. 36.214 Section 36.214 Federal Acquisition Regulations... negotiation in construction contracting. (a) Agencies shall follow the policies and procedures in part 15 when... scope of the work. If negotiations reveal errors in the Government estimate, the estimate shall be...

  5. Prediction Model of Collapse Risk Based on Information Entropy and Distance Discriminant Analysis Method

    Directory of Open Access Journals (Sweden)

    Hujun He

    2017-01-01

    Full Text Available The prediction and risk classification of collapse is an important issue in the process of highway construction in mountainous regions. Based on the principles of information entropy and Mahalanobis distance discriminant analysis, we have produced a collapse hazard prediction model. We used the entropy measure method to reduce the number of influence indexes of collapse activity and extracted the nine main indexes affecting collapse activity as the discriminant factors of the distance discriminant analysis model (i.e., slope shape, aspect, gradient, and height, along with exposure of the structural face, stratum lithology, relationship between weakness face and free face, vegetation cover rate, and degree of rock weathering). We employ post-earthquake collapse data from the construction of the Yingxiu-Wolong highway, Hanchuan County, China, as training samples for analysis. The results were analyzed using the back substitution estimation method, showing high accuracy and no errors, and were the same as the prediction results of the uncertainty measure method. Results show that the classification model based on information entropy and distance discriminant analysis achieves the purpose of index optimization and has excellent performance, high prediction accuracy, and a zero false-positive rate. The model can be used as a tool for future evaluation of collapse risk.
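
    The entropy measure step used for index screening can be sketched as the standard entropy weight method: normalize the indicator matrix, compute each indicator's information entropy, and convert low entropy (high discriminating information) into high weight. Data are invented:

        import numpy as np

        rng = np.random.default_rng(2)
        X = rng.uniform(0.1, 1.0, size=(30, 9))    # 30 slope samples x 9 collapse indexes

        P = X / X.sum(axis=0)                      # column-normalize to proportions
        k = 1.0 / np.log(X.shape[0])
        entropy = -k * np.sum(P * np.log(P), axis=0)
        weights = (1 - entropy) / np.sum(1 - entropy)
        print(np.round(weights, 3))                # low-entropy indexes get higher weight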

  6. IPMP Global Fit - A one-step direct data analysis tool for predictive microbiology.

    Science.gov (United States)

    Huang, Lihan

    2017-12-04

    The objective of this work is to develop and validate a unified optimization algorithm for performing one-step global regression analysis of isothermal growth and survival curves for determination of kinetic parameters in predictive microbiology. The algorithm is incorporated with user-friendly graphical user interfaces (GUIs) to develop a data analysis tool, the USDA IPMP-Global Fit. The GUIs are designed to guide the users to easily navigate through the data analysis process and properly select the initial parameters for different combinations of mathematical models. The software is developed for one-step kinetic analysis to directly construct tertiary models by minimizing the global error between the experimental observations and the mathematical models. The current version of the software is specifically designed for constructing tertiary models with time and temperature as the independent model parameters. The software is tested with a total of 9 different combinations of primary and secondary models for growth and survival of various microorganisms. The results of data analysis show that this software provides accurate estimates of kinetic parameters. In addition, it can be used to improve the experimental design and data collection for more accurate estimation of kinetic parameters. IPMP-Global Fit can be used in combination with the regular USDA-IPMP for solving the inverse problems and developing tertiary models in predictive microbiology. Published by Elsevier B.V.
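
    A hedged sketch of one-step global fitting (not the IPMP code): rather than fitting each isothermal curve separately and regressing the rates on temperature afterwards, pool all observations and minimize a single global residual over the primary and secondary model parameters; a logistic primary and square-root secondary model are assumed here:

        import numpy as np
        from scipy.optimize import least_squares

        rng = np.random.default_rng(4)

        def logistic(t, mu, y0=2.0, ymax=9.0):
            # primary model: log10 count vs. time with specific rate mu (lag omitted)
            return ymax / (1 + (ymax / y0 - 1) * np.exp(-mu * t))

        def mu_of_T(T, b, Tmin):
            # secondary square-root model linking growth rate to temperature
            return (b * (T - Tmin)) ** 2

        # simulated isothermal datasets (true b = 0.04, Tmin = -4)
        temps = [4.0, 10.0, 16.0, 22.0]
        t = np.linspace(0, 150, 12)
        data = [(T, logistic(t, mu_of_T(T, 0.04, -4.0)) + 0.15 * rng.standard_normal(t.size))
                for T in temps]

        def global_residuals(p):
            # one global error vector across all temperatures and time points
            b, Tmin = p
            return np.concatenate([y - logistic(t, mu_of_T(T, b, Tmin)) for T, y in data])

        fit = least_squares(global_residuals, x0=[0.02, 0.0])
        print("one-step estimates (b, Tmin):", np.round(fit.x, 3))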

  7. 36 CFR 223.41 - Payment when purchaser elects government road construction.

    Science.gov (United States)

    2010-07-01

    ... government road construction. 223.41 Section 223.41 Parks, Forests, and Public Property FOREST SERVICE... Conditions and Provisions § 223.41 Payment when purchaser elects government road construction. Each contract having a provision for construction of specified roads with total estimated construction costs of $50,000...

  8. Comparison of Ultrasound Attenuation and Backscatter Estimates in Layered Tissue-Mimicking Phantoms among Three Clinical Scanners

    Science.gov (United States)

    Nam, Kibo; Rosado-Mendez, Ivan M.; Wirtzfeld, Lauren A.; Ghoshal, Goutam; Pawlicki, Alexander D.; Madsen, Ernest L.; Lavarello, Roberto J.; Oelze, Michael L.; Zagzebski, James A.; O’Brien, William D.; Hall, Timothy J.

    2013-01-01

    Backscatter and attenuation coefficient estimates are needed in many quantitative ultrasound strategies. In clinical applications, these parameters may not be easily obtained because of variations in scattering by tissues overlying a region of interest (ROI). The goal of this study is to assess the accuracy of backscatter and attenuation estimates for regions distal to nonuniform layers of tissue-mimicking materials. In addition, this work compares results of these estimates for “layered” phantoms scanned using different clinical ultrasound machines. Two tissue-mimicking phantoms were constructed, each exhibiting depth-dependent variations in attenuation or backscatter. The phantoms were scanned with three ultrasound imaging systems, acquiring radio frequency echo data for offline analysis. The attenuation coefficient and the backscatter coefficient (BSC) for sections of the phantoms were estimated using the reference phantom method. Properties of each layer were also measured with laboratory techniques on test samples manufactured during the construction of the phantom. Estimates of the attenuation coefficient versus frequency slope, α0, using backscatter data from the different systems agreed to within 0.24 dB/cm-MHz. Bias in the α0 estimates varied with the location of the ROI. BSC estimates for phantom sections whose locations ranged from 0 to 7 cm from the transducer agreed among the different systems and with theoretical predictions, with a mean bias error of 1.01 dB over the used bandwidths. This study demonstrates that attenuation and BSCs can be accurately estimated in layered inhomogeneous media using pulse-echo data from clinical imaging systems. PMID:23160474

  9. Handbook of energy use for building construction

    Energy Technology Data Exchange (ETDEWEB)

    Stein, R.G.; Stein, C.; Buckley, M.; Green, M.

    1980-03-01

    The construction industry accounts for over 11.14% of the total energy consumed in the US annually. This represents the equivalent energy value of 1 1/4 billion barrels of oil. Within the construction industry, new building construction accounts for 5.19% of national annual energy consumption. The remaining 5.95% is distributed among new nonbuilding construction (highways, railroads, dams, bridges, etc.), building maintenance construction, and nonbuilding maintenance construction. The handbook focuses on new building construction; however, some information for the other parts of the construction industry is also included. The handbook provides building designers with information to determine the energy required for buildings construction and evaluates the energy required for alternative materials, assemblies, and methods. The handbook is also applicable to large-scale planning and policy determination in that it provides the means to estimate the energy required to carry out major building programs.

  10. Handbook of energy use for building construction

    Science.gov (United States)

    Stein, R. G.; Stein, C.; Buckley, M.; Green, M.

    1980-03-01

    The construction industry accounts for over 11.14% of the total energy consumed in the US annually. This represents the equivalent energy value of 1 1/4 billion barrels of oil. Within the construction industry, new building construction accounts for 5.19% of national annual energy consumption. The remaining 5.95% is distributed among new nonbuilding construction (highways, railroads, dams, bridges, etc.), building maintenance construction, and nonbuilding maintenance construction. Emphasis is given to new building construction; however, some information for the other parts of the construction industry is also included. Building designers are provided with information to determine the energy required for building construction and to evaluate the energy required for alternative materials, assemblies, and methods. It is also applicable to large-scale planning and policy determination in that it provides the means to estimate the energy required to carry out major building programs.

  11. Accuracy of Igenity genomically estimated breeding values for predicting Australian Angus BREEDPLAN traits.

    Science.gov (United States)

    Boerner, V; Johnston, D; Wu, X-L; Bauck, S

    2015-02-01

    Genomically estimated breeding values (GEBV) for Angus beef cattle are available from at least 2 commercial suppliers (Igenity [http://www.igenity.com] and Zoetis [http://www.zoetis.com]). The utility of these GEBV for improving genetic evaluation depends on their accuracies, which can be estimated by the genetic correlation with phenotypic target traits. Genomically estimated breeding values of 1,032 Angus bulls calculated from prediction equations (PE) derived by 2 different procedures in the U.S. Angus population were supplied by Igenity. Both procedures were based on Illumina BovineSNP50 BeadChip genotypes. In procedure sg, GEBV were calculated from PE that used subsets of only 392 SNP, where these subsets were individually selected for each trait by BayesCπ. In procedure rg, GEBV were calculated from PE derived in a ridge regression approach using all available SNP. Because the total set of 1,032 bulls with GEBV contained 732 individuals used in the Igenity training population, GEBV subsets were formed characterized by a decreasing average relationship between individuals in the subsets and individuals in the training population. Accuracies of GEBV were estimated as genetic correlations between GEBV and their phenotypic target traits, modeling GEBV as trait observations in a bivariate REML approach, in which phenotypic observations were those recorded in the commercial Australian Angus seed stock sector. Using results from the GEBV subset excluding all training individuals as a reference, estimated accuracies were generally in agreement with those already published, with both types of GEBV (sg and rg) yielding similar results. Accuracies for growth traits ranged from 0.29 to 0.45, for reproductive traits from 0.11 to 0.53, and for carcass traits from 0.3 to 0.75. Accuracies generally decreased with an increasing genetic distance between the training and the validation population. However, for some carcass traits characterized by a low number of phenotypic

  12. Predictive event modelling in multicenter clinical trials with waiting time to response.

    Science.gov (United States)

    Anisimov, Vladimir V

    2011-01-01

    A new analytic statistical technique for predictive event modeling in ongoing multicenter clinical trials with waiting time to response is developed. It allows for the predictive mean and predictive bounds for the number of events to be constructed over time, accounting for the newly recruited patients and patients already at risk in the trial, and for different recruitment scenarios. For modeling patient recruitment, an advanced Poisson-gamma model is used, which accounts for the variation in recruitment over time, the variation in recruitment rates between different centers and the opening or closing of some centers in the future. A few models for event appearance allowing for 'recurrence', 'death' and 'lost-to-follow-up' events and using finite Markov chains in continuous time are considered. To predict the number of future events over time for an ongoing trial at some interim time, the parameters of the recruitment and event models are estimated using current data and then the predictive recruitment rates in each center are adjusted using individual data and Bayesian re-estimation. For a typical scenario (continue to recruit during some time interval, then stop recruitment and wait until a particular number of events happens), the closed-form expressions for the predictive mean and predictive bounds of the number of events at any future time point are derived under the assumptions of Markovian behavior of the event progression. The technique is efficiently applied to modeling different scenarios for some ongoing oncology trials. Case studies are considered. Copyright © 2011 John Wiley & Sons, Ltd.
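
    The predictive mean and bounds described above admit a simple Monte Carlo illustration of the underlying Poisson-gamma idea: centre-specific recruitment rates are gamma-distributed, recruitment counts are Poisson, and events accrue from both new recruits and patients already at risk. The sketch below is a hedged simplification (a Bernoulli event within the horizon stands in for the paper's Markov event models; all parameter values are invented):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def predict_events(n_centers, a, b, horizon, p_event, n_at_risk, n_sim=10_000):
        """Monte Carlo predictive mean and bounds for the number of events.
        Recruitment per centre follows a Poisson-gamma model (rate ~ Gamma(a, b));
        each recruited patient and each patient already at risk experiences an
        event within the horizon with probability p_event (illustrative)."""
        totals = np.empty(n_sim)
        for i in range(n_sim):
            rates = rng.gamma(a, 1.0 / b, size=n_centers)   # centre-specific rates
            recruits = rng.poisson(rates * horizon).sum()   # newly recruited patients
            totals[i] = rng.binomial(recruits + n_at_risk, p_event)
        return totals.mean(), np.percentile(totals, [2.5, 97.5])

    mean, (lo, hi) = predict_events(n_centers=50, a=2.0, b=1.5, horizon=6.0,
                                    p_event=0.3, n_at_risk=400)
    print(f"predicted events: {mean:.0f} (95% bounds {lo:.0f}-{hi:.0f})")
    ```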

  13. A robust method for estimating motorbike count based on visual information learning

    Science.gov (United States)

    Huynh, Kien C.; Thai, Dung N.; Le, Sach T.; Thoai, Nam; Hamamoto, Kazuhiko

    2015-03-01

    Estimating the number of vehicles in traffic videos is an important and challenging task in traffic surveillance, especially with a high level of occlusion between vehicles, e.g., in crowded urban areas with people and/or motorbikes. Under such conditions, the problem of separating individual vehicles from foreground silhouettes often requires complicated computation [1][2][3]. Thus, the counting problem has gradually shifted towards drawing statistical inferences about target object density from shape [4], local features [5], etc. Those studies indicate a correlation between local features and the number of target objects, but they are inadequate for constructing an accurate model of vehicle density estimation. In this paper, we present a reliable method that is robust to illumination changes and partial affine transformations and can achieve high accuracy in the presence of occlusions. First, local features are extracted from images of the scene using the Speeded-Up Robust Features (SURF) method. For each image, a global feature vector is computed using a Bag-of-Words model constructed from these local features. Finally, a mapping between the extracted global feature vectors and their labels (the number of motorbikes) is learned. That mapping provides a strong prediction model for estimating the number of motorbikes in new images. The experimental results show that our proposed method achieves better accuracy than comparable approaches.
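
    The pipeline described above — local descriptors, a visual-word codebook, a per-image histogram, and a learned mapping to a count — can be sketched with scikit-learn. The random arrays below are stand-ins for SURF descriptors, which in practice would come from a keypoint detector; all sizes and values are illustrative:

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.svm import SVR

    rng = np.random.default_rng(0)

    # Placeholder local descriptors (e.g. 64-D SURF vectors per image); in
    # practice these would be extracted from each labelled traffic image.
    train_desc = [rng.normal(size=(rng.integers(50, 200), 64)) for _ in range(30)]
    train_counts = rng.integers(0, 15, size=30)       # labelled motorbike counts

    codebook = KMeans(n_clusters=32, n_init=10, random_state=0)
    codebook.fit(np.vstack(train_desc))               # learn the visual vocabulary

    def bow_histogram(desc):
        """Global feature: normalised histogram of visual-word assignments."""
        words = codebook.predict(desc)
        hist = np.bincount(words, minlength=32).astype(float)
        return hist / hist.sum()

    X = np.array([bow_histogram(d) for d in train_desc])
    model = SVR(kernel="rbf").fit(X, train_counts)    # histogram -> count mapping
    print(model.predict(bow_histogram(train_desc[0])[None, :]))
    ```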

  14. A Real-Time Joint Estimator for Model Parameters and State of Charge of Lithium-Ion Batteries in Electric Vehicles

    Directory of Open Access Journals (Sweden)

    Jianping Gao

    2015-08-01

    Full Text Available Accurate state of charge (SoC) estimation of batteries plays an important role in promoting the commercialization of electric vehicles. The main work to be done in accurately determining battery SoC can be summarized in three parts. (1) In view of the model-based SoC estimation flow diagram, the n-order resistance-capacitance (RC) battery model is proposed and expected to accurately simulate the battery’s major time-variable, nonlinear characteristics. Then, the mathematical equations for model parameter identification and SoC estimation of this model are constructed. (2) The Akaike information criterion is used to determine an optimal tradeoff between battery model complexity and prediction precision for the n-order RC battery model. Results from a comparative analysis show that the first-order RC battery model is the best choice based on the Akaike information criterion (AIC) values. (3) The real-time joint estimator for the model parameters and SoC is constructed, and the application to two battery types indicates that the proposed SoC estimator is a closed-loop identification system in which the model parameter identification and SoC estimation are corrected mutually, adaptively and simultaneously according to the observed values. The maximum SoC estimation error is less than 1% for both battery types, even against an inaccurate initial SoC.
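
    The first-order RC model selected by the AIC has a simple discrete-time form: a series resistance plus one RC branch whose polarization voltage decays with time constant R1·C1, with SoC tracked by coulomb counting. A hedged simulation sketch (illustrative parameter values and a toy linear OCV curve, not the paper's identified model):

    ```python
    import numpy as np

    def simulate_first_order_rc(current, dt, q_ah, soc0, r0, r1, c1, ocv):
        """Terminal-voltage prediction with a first-order RC battery model.
        current: discharge current in A (positive = discharge); ocv: callable
        mapping SoC to open-circuit voltage. All values are illustrative."""
        tau = r1 * c1
        soc, v1 = soc0, 0.0
        v_t = []
        for i_k in current:
            soc -= i_k * dt / (3600.0 * q_ah)                 # coulomb counting
            v1 = np.exp(-dt / tau) * v1 + r1 * (1 - np.exp(-dt / tau)) * i_k
            v_t.append(ocv(soc) - r0 * i_k - v1)              # terminal voltage
        return np.array(v_t)

    ocv = lambda s: 3.0 + 1.2 * s                             # toy linear OCV curve
    v = simulate_first_order_rc(np.full(3600, 2.0), dt=1.0, q_ah=2.5,
                                soc0=1.0, r0=0.05, r1=0.02, c1=2000.0, ocv=ocv)
    ```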

  15. New unitary affine-Virasoro constructions

    International Nuclear Information System (INIS)

    Halpern, M.B.; Kiritsis, E.; Obers, N.A.; Porrati, M.; Yamron, J.P.

    1990-01-01

    This paper reports on a quasi-systematic investigation of the Virasoro master equation. The space of all affine-Virasoro constructions is organized by K-conjugation into affine-Virasoro nests, and an estimate of the dimension of the space shows that most solutions await discovery. With consistent ansatze for the master equation, large classes of new unitary nests are constructed, including quadratic deformation nests with continuous conformal weights, and unitary irrational central charge nests, which may dominate unitary rational central charge on compact g

  16. Cost estimates for nuclear power in the UK

    International Nuclear Information System (INIS)

    Harris, Grant; Heptonstall, Phil; Gross, Robert; Handley, David

    2013-01-01

    Current UK Government support for nuclear power has in part been informed by cost estimates that suggest that electricity from new nuclear power stations will be competitive with alternative low carbon generation options. The evidence and analysis presented in this paper suggests that the capital cost estimates for nuclear power that are being used to inform these projections rely on costs escalating over the pre-construction and construction phase of the new build programme at a level significantly below those that have been experienced by past US and European programmes. This paper applies observed construction time and cost escalation rates to the published estimates of capital costs for new nuclear plant in the UK and calculates the potential impact on levelised cost per unit of electricity produced. The results suggest that levelised cost may turn out to be significantly higher than expected, which in turn has important implications for policy, both in general terms of the potential costs to consumers and more specifically for negotiations around the level of policy support and contractual arrangements offered to individual projects through the proposed contract for difference strike price. -- Highlights: •Nuclear power project costs can rise substantially during the construction period. •Pre-construction and construction times can be much longer than anticipated. •Adjusting estimates for observed experience increases levelised costs significantly. •Higher costs suggest that more policy support than envisaged may be required
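
    The paper's central adjustment — letting capital costs escalate over the build period and recomputing the levelised cost — can be illustrated with a toy discounted-cash-flow calculation. A hedged Python sketch with entirely invented figures (not the paper's data):

    ```python
    def levelised_cost(capex, escalation, build_years, output_mwh, opex, life, rate):
        """Discounted levelised cost (per MWh) with construction-cost escalation.
        capex is the overnight estimate spent evenly over the build period;
        escalation is the annual escalation rate observed during construction."""
        spend = capex / build_years
        cost_pv = sum(spend * (1 + escalation) ** t / (1 + rate) ** t
                      for t in range(build_years))
        for t in range(build_years, build_years + life):
            cost_pv += opex / (1 + rate) ** t                 # operating costs
        energy_pv = sum(output_mwh / (1 + rate) ** t
                        for t in range(build_years, build_years + life))
        return cost_pv / energy_pv

    print(levelised_cost(capex=5e9, escalation=0.00, build_years=6,
                         output_mwh=8.0e6, opex=1.5e8, life=40, rate=0.07))
    print(levelised_cost(capex=5e9, escalation=0.10, build_years=6,
                         output_mwh=8.0e6, opex=1.5e8, life=40, rate=0.07))
    ```

    Raising the escalation rate from 0 to 10% per year in this toy setup visibly raises the levelised cost, which is the qualitative effect the paper reports.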

  17. Estimating large carnivore populations at global scale based on spatial predictions of density and distribution – Application to the jaguar (Panthera onca)

    Science.gov (United States)

    Robinson, Hugh S.; Abarca, Maria; Zeller, Katherine A.; Velasquez, Grisel; Paemelaere, Evi A. D.; Goldberg, Joshua F.; Payan, Esteban; Hoogesteijn, Rafael; Boede, Ernesto O.; Schmidt, Krzysztof; Lampo, Margarita; Viloria, Ángel L.; Carreño, Rafael; Robinson, Nathaniel; Lukacs, Paul M.; Nowak, J. Joshua; Salom-Pérez, Roberto; Castañeda, Franklin; Boron, Valeria; Quigley, Howard

    2018-01-01

    Broad scale population estimates of declining species are desired for conservation efforts. However, for many secretive species including large carnivores, such estimates are often difficult. Based on published density estimates obtained through camera trapping, presence/absence data, and globally available predictive variables derived from satellite imagery, we modelled density and occurrence of a large carnivore, the jaguar, across the species’ entire range. We then combined these models in a hierarchical framework to estimate the total population. Our models indicate that potential jaguar density is best predicted by measures of primary productivity, with the highest densities in the most productive tropical habitats and a clear declining gradient with distance from the equator. Jaguar distribution, in contrast, is determined by the combined effects of human impacts and environmental factors: probability of jaguar occurrence increased with forest cover, mean temperature, and annual precipitation and declined with increases in human footprint index and human density. Probability of occurrence was also significantly higher for protected areas than outside of them. We estimated the world’s jaguar population at 173,000 (95% CI: 138,000–208,000) individuals, mostly concentrated in the Amazon Basin; elsewhere, populations tend to be small and fragmented. The high number of jaguars results from the large total area still occupied (almost 9 million km2) and low human densities … conservation actions. PMID:29579129
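
    At prediction time, the hierarchical combination of the two models reduces to summing density × occurrence probability × area over grid cells. A minimal sketch with synthetic per-cell model outputs (all numbers invented, not the study's estimates):

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Hypothetical per-cell model outputs: predicted density (individuals per
    # 100 km2), probability of occurrence, and cell area (km2) on a coarse grid.
    density = rng.gamma(2.0, 1.0, size=10_000)      # individuals per 100 km2
    p_occ = rng.beta(2.0, 5.0, size=10_000)         # occurrence probability
    area_km2 = np.full(10_000, 900.0)               # 30 km x 30 km cells

    # Hierarchical combination: expected abundance = density x P(occupied) x area
    total = np.sum(density / 100.0 * p_occ * area_km2)
    print(f"estimated total population: {total:,.0f}")
    ```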

  18. An improved method for predicting the evolution of the characteristic parameters of an information system

    Science.gov (United States)

    Dushkin, A. V.; Kasatkina, T. I.; Novoseltsev, V. I.; Ivanov, S. V.

    2018-03-01

    The article proposes a forecasting method that, given values of the entropy and of type I and type II error levels, determines the allowable horizon for forecasting the development of the characteristic parameters of a complex information system. The main feature of the method under consideration is that changes in the characteristic parameters of the developing information system are expressed as increments in its entropy ratios. When the prediction error ratio, i.e. the entropy of the system, reaches a predetermined value, the characteristic parameters of the system and the prediction depth in time are estimated. The resulting values of the characteristics will be optimal, since at that moment the system possesses the best entropy ratio as a measure of the degree of organization and orderliness of its structure. To construct a method for estimating the depth of prediction, it is expedient to use the maximum entropy principle.

  19. Construction Cost Growth for New Department of Energy Nuclear Facilities

    Energy Technology Data Exchange (ETDEWEB)

    Kubic, Jr., William L. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2014-05-25

    Cost growth and construction delays are problems that plague many large construction projects, including the construction of new Department of Energy (DOE) nuclear facilities. A study was conducted to evaluate cost growth of large DOE construction projects. The purpose of the study was to compile relevant data, consider the possible causes of cost growth, and recommend measures that could be used to avoid extreme cost growth in the future. Both large DOE and non-DOE construction projects were considered in this study. With the exception of the Chemical and Metallurgical Research Building Replacement Project (CMRR) and the Mixed Oxide Fuel Fabrication Facility (MFFF), cost growth for DOE nuclear facilities is comparable to the growth experienced in other mega construction projects. The largest increase in estimated cost was found to occur between early cost estimates and establishing the project baseline during detailed design. Once the project baseline was established, cost growth for DOE nuclear facilities was modest compared to non-DOE mega projects.

  20. The supply chain of civil construction industries to support nuclear power plant construction in Indonesia

    International Nuclear Information System (INIS)

    Dharu Dewi; Sriyana; Moch-Djoko Birmano; Sahala Lumbanraja; Nurlaila

    2013-01-01

    The use of domestic products for electricity infrastructure has been set out in Ministerial Decree number 54/M-IND/PER/3/2012, but the infrastructure for nuclear power plant (NPP) construction has not been included. Therefore, the potential of local industries needs to be mapped, especially the supply chain of the civil construction industries, in order to estimate the achievable domestic component level (DCL) for a nuclear power plant project in Indonesia. NPPs are high-technology facilities, so if an NPP is to be constructed it is necessary to involve national capabilities as a medium for technology transfer, especially for EPC (Engineering, Procurement and Construction) services. Civil construction (the civil part) plays a very large role, about 21% of the project, so strengthening the national civil construction industry is necessary to increase local content. The preparation of civil construction infrastructure depends on the supply chain of raw materials. The aim of the research was to map the supply chain of the civil construction industries. The methodology of this study is a survey of national industries, a literature review, and a web search. The result of the study is a map of the civil construction industries with their raw material supply chains. (author)

  1. Prediction intervals for future BMI values of individual children - a non-parametric approach by quantile boosting

    Directory of Open Access Journals (Sweden)

    Mayr Andreas

    2012-01-01

    Full Text Available Abstract Background The construction of prediction intervals (PIs) for future body mass index (BMI) values of individual children, based on a recent German birth cohort study with n = 2007 children, is problematic for standard parametric approaches, as the BMI distribution in childhood is typically skewed depending on age. Methods We avoid distributional assumptions by directly modelling the borders of PIs by additive quantile regression, estimated by boosting. We use the concept of conditional coverage to assess the accuracy of PIs. As conditional coverage can hardly be evaluated in practical applications, we conduct a simulation study before fitting child- and covariate-specific PIs for future BMI values and BMI patterns for the present data. Results The results of our simulation study suggest that PIs fitted by quantile boosting cover future observations with the predefined coverage probability and outperform the benchmark approach. For the prediction of future BMI values, quantile boosting automatically selects informative covariates and adapts to the age-specific skewness of the BMI distribution. The lengths of the estimated PIs are child-specific and increase, as expected, with the age of the child. Conclusions Quantile boosting is a promising approach to construct PIs with correct conditional coverage in a non-parametric way. It is particularly suitable for the prediction of BMI patterns depending on covariates, since it provides an interpretable predictor structure, inherent variable selection properties, and can even account for longitudinal data structures.
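
    Although the study fits additive quantile regression by boosting (typically via R packages such as mboost), the same idea — separately boosting the lower and upper conditional quantiles to form a distribution-free PI — can be sketched with scikit-learn's quantile loss. A hedged illustration on synthetic, age-skewed data (not the cohort data):

    ```python
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    rng = np.random.default_rng(0)
    age = rng.uniform(0, 10, 500)                       # child age in years
    bmi = 15 + 0.3 * age + rng.gamma(2.0, 0.5, 500)     # skewed, age-dependent BMI

    X = age.reshape(-1, 1)
    # Boost the 5th and 95th conditional quantiles to obtain a 90% PI
    lower = GradientBoostingRegressor(loss="quantile", alpha=0.05).fit(X, bmi)
    upper = GradientBoostingRegressor(loss="quantile", alpha=0.95).fit(X, bmi)

    x_new = np.array([[6.0]])
    print(f"90% PI at age 6: [{lower.predict(x_new)[0]:.1f}, "
          f"{upper.predict(x_new)[0]:.1f}] kg/m^2")
    ```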

  2. Shallow groundwater intrusion to deeper depths caused by construction and drainage of a large underground facility. Estimation using 3H, CFCs and SF6 as trace materials

    International Nuclear Information System (INIS)

    Hagiwara, Hiroki; Iwatsuki, Teruki; Hasegawa, Takuma; Nakata, Kotaro; Tomioka, Yuichi

    2015-01-01

    This study evaluates a method to estimate shallow groundwater intrusion in and around a large underground research facility (Mizunami Underground Research Laboratory, MIU). Water chemistry, stable isotopes (δD and δ18O), tritium (3H), chlorofluorocarbons (CFCs) and sulfur hexafluoride (SF6) in groundwater were monitored around the facility (from 20 m down to a depth of 500 m) for a period of 5 years. The results show that shallow groundwater flows into deeper groundwater at depths between 200 and 400 m. In addition, the fraction of shallow groundwater estimated using 3H and CFC-12 concentrations is up to a maximum of about 50%. This is interpreted as the impact on the groundwater environment caused by construction and operation of a large facility over several years. The concomitant use of 3H and CFCs is an effective method to determine the extent of shallow groundwater inflow caused by construction of an underground facility. (author)
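
    With a conservative tracer, the shallow-water fraction quoted above follows from simple two-end-member mixing. A minimal sketch (the tracer values are illustrative, not the study's data):

    ```python
    def shallow_fraction(c_obs, c_shallow, c_deep):
        """Two-end-member mixing: fraction of shallow groundwater in a sample,
        from a conservative tracer such as 3H or CFC-12 (illustrative values)."""
        return (c_obs - c_deep) / (c_shallow - c_deep)

    # e.g. if deep groundwater is tracer-free and modern shallow water has 6 TU
    print(shallow_fraction(c_obs=3.0, c_shallow=6.0, c_deep=0.0))  # -> 0.5
    ```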

  3. Motor cognitive processing speed estimation among the primary schoolchildren by deriving prediction formula: A cross-sectional study

    Directory of Open Access Journals (Sweden)

    Vencita Priyanka Aranha

    2017-01-01

    Full Text Available Objectives: Motor cognitive processing speed (MCPS) is often reported in terms of reaction time. In spite of being a significant indicator of function, behavior, and performance, MCPS is rarely used in clinics and schools to identify kids with slowed motor cognitive processing. The reason behind this is the lack of a convenient formula to estimate MCPS. Therefore, the aim of this study is to estimate the MCPS of primary schoolchildren. Materials and Methods: Two hundred and four primary schoolchildren, aged 6–12 years, were recruited by the cluster sampling method for this cross-sectional study. MCPS was estimated by the ruler drop method (RDM). In this method, a stainless steel ruler was suspended vertically such that its 5 cm graduation was aligned with the web space of the child's hand, and the child was asked to catch the falling ruler as quickly as possible once it was released from the examiner's hand. The distance the ruler traveled was recorded and converted into time, which is the MCPS. Multiple regression analysis of variables was performed to determine the influence of independent variables on MCPS. Results: The mean MCPS of the entire sample of 204 primary schoolchildren was 230.01 ms ± 26.5 standard deviation (95% confidence interval, 226.4–233.7 ms), with values ranging from 162.9 to 321.6 ms. By stepwise regression analysis, we derived the regression equation MCPS (ms) = 279.625 − 5.495 × age, with 41.3% predictability (R = 0.413) and 17.1% variability (R2 = 0.171, adjusted R2 = 0.166). Conclusion: An MCPS prediction formula through the RDM in primary schoolchildren has been established.
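
    In the RDM, catch distance converts to reaction time through the free-fall relation d = g·t²/2, and the reported regression then predicts MCPS from age. A small sketch of both steps (the 26 cm example is chosen because it reproduces the sample mean, not taken from the paper):

    ```python
    import math

    def ruler_drop_time_ms(distance_cm):
        """Convert catch distance to reaction time via free fall: d = g t^2 / 2."""
        return math.sqrt(2 * (distance_cm / 100.0) / 9.81) * 1000.0

    def predicted_mcps_ms(age_years):
        """Regression equation reported by the study: MCPS = 279.625 - 5.495 x age."""
        return 279.625 - 5.495 * age_years

    print(ruler_drop_time_ms(26.0))   # ~230 ms, close to the sample mean
    print(predicted_mcps_ms(9.0))     # ~230 ms for a 9-year-old
    ```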

  4. On the predictivity of pore-scale simulations: estimating uncertainties with multilevel Monte Carlo

    KAUST Repository

    Icardi, Matteo

    2016-02-08

    A fast method with tunable accuracy is proposed to estimate errors and uncertainties in pore-scale and Digital Rock Physics (DRP) problems. The overall predictivity of these studies can be, in fact, hindered by many factors including sample heterogeneity, computational and imaging limitations, model inadequacy and not perfectly known physical parameters. The typical objective of pore-scale studies is the estimation of macroscopic effective parameters such as permeability, effective diffusivity and hydrodynamic dispersion. However, these are often non-deterministic quantities (i.e., results obtained for a specific pore-scale sample and setup are not totally reproducible by another “equivalent” sample and setup). The stochastic nature can arise due to the multi-scale heterogeneity, the computational and experimental limitations in considering large samples, and the complexity of the physical models. These approximations, in fact, introduce an error that, being dependent on a large number of complex factors, can be modeled as random. We propose a general simulation tool, based on multilevel Monte Carlo, that can drastically reduce the computational cost needed for computing accurate statistics of effective parameters and other quantities of interest under any of these random errors. This is, to our knowledge, the first attempt to include Uncertainty Quantification (UQ) in pore-scale physics and simulation. The method can also provide estimates of the discretization error, and it is tested on three-dimensional transport problems in heterogeneous materials, where the sampling procedure is done by generation algorithms able to reproduce realistic consolidated and unconsolidated random sphere and ellipsoid packings and arrangements. A fully automatic workflow is developed in an open-source code [2015. https://bitbucket.org/micardi/porescalemc.], which includes rigid body physics and random packing algorithms, unstructured mesh discretization, and finite volume solvers
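
    The multilevel Monte Carlo idea is a telescoping sum: a few expensive fine-level samples correct many cheap coarse-level ones. A generic, hedged sketch in which a placeholder sampler stands in for the pore-scale solver (the error model is invented purely for illustration):

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def level_sampler(level, n):
        """Placeholder for a pore-scale solver: returns coupled fine/coarse
        estimates of an effective parameter (e.g. permeability) at one level."""
        h = 2.0 ** -level
        fine = 1.0 + rng.normal(0.0, 1.0, n) * h     # discretisation error ~ h
        coarse = fine + rng.normal(0.0, 1.0, n) * 2 * h if level > 0 else 0.0
        return fine, coarse

    def mlmc_estimate(levels, samples_per_level):
        """Multilevel Monte Carlo: telescoping sum of level differences,
        E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}]."""
        total = 0.0
        for lvl, n in zip(levels, samples_per_level):
            fine, coarse = level_sampler(lvl, n)
            total += np.mean(fine - coarse)
        return total

    print(mlmc_estimate(levels=[0, 1, 2, 3],
                        samples_per_level=[4000, 1000, 250, 60]))
    ```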

  5. Construction cost forecast model: model documentation and technical notes.

    Science.gov (United States)

    2013-05-01

    Construction cost indices are generally estimated with Laspeyres, Paasche, or Fisher indices that allow changes in the quantities of construction bid items, as well as changes in price, to change the cost indices of those items. These cost indices...
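
    For reference, the three index types differ only in their quantity weights: Laspeyres uses base-period quantities, Paasche uses current-period quantities, and Fisher is their geometric mean. A small sketch with invented bid-item data:

    ```python
    def laspeyres(p0, pt, q0, qt):
        """Base-period quantity weights."""
        return sum(p * q for p, q in zip(pt, q0)) / sum(p * q for p, q in zip(p0, q0))

    def paasche(p0, pt, q0, qt):
        """Current-period quantity weights."""
        return sum(p * q for p, q in zip(pt, qt)) / sum(p * q for p, q in zip(p0, qt))

    def fisher(p0, pt, q0, qt):
        """Geometric mean of the Laspeyres and Paasche indices."""
        return (laspeyres(p0, pt, q0, qt) * paasche(p0, pt, q0, qt)) ** 0.5

    # Two bid items: unit prices and quantities in the base and current periods
    p0, pt = [100.0, 50.0], [120.0, 55.0]
    q0, qt = [10.0, 40.0], [8.0, 45.0]
    print(laspeyres(p0, pt, q0, qt), paasche(p0, pt, q0, qt), fisher(p0, pt, q0, qt))
    ```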

  6. Nuclear plant construction and investment risk

    International Nuclear Information System (INIS)

    Studness, C.M.

    1984-01-01

    Escalating cost estimates, delays and cancellations in nuclear construction have caused a preoccupation with the risks of nuclear power plant construction that dominates utility stock investment, overshadowing increased earnings per share and recent growth in production. The issue will be resolved when increased power demand requires new construction, but the effect so far has been to erode the economic advantage of nuclear power and threaten the ability of utilities to get rate increases high enough to cover their costs. Projected delays and cost escalations and their effects must therefore go into any economic appraisal of the investment risks

  7. Numerical Weather Prediction and Relative Economic Value framework to improve Integrated Urban Drainage- Wastewater management

    DEFF Research Database (Denmark)

    Courdent, Vianney Augustin Thomas

    domains during which the IUDWS can be coupled with the electrical smart grid to optimise its energy consumption. The REV framework was used to determine which decision threshold of the EPS (i.e. number of ensemble members predicting an event) provides the highest benefit for a given situation... in cities where space is scarce and large-scale construction work a nuisance. This thesis focuses on flow domain predictions of IUDWS from numerical weather prediction (NWP) to select relevant control objectives for the IUDWS, and develops a framework based on the relative economic value (REV) approach... to evaluate when acting on the forecast is beneficial or not. Rainfall forecasts are extremely valuable for estimating near-future storm-water-related impacts on the IUDWS. Therefore, weather radar extrapolation “nowcasts” provide valuable predictions for RTC. However, radar nowcasts are limited...

  8. [Mesothelioma in construction workers: risk estimate, lung content of asbestos fibres, claims for compensation for occupational disease in the Veneto Region mesothelioma register].

    Science.gov (United States)

    Merler, E; Bressan, Vittoria; Somigliana, Anna

    2009-01-01

    Work in the construction industry is causing the highest number of mesotheliomas among residents of the Veneto Region (north-east Italy, 4.5 million inhabitants). We summarize the results on occurrence, asbestos exposure, lung fibre content analyses, and compensation for occupational disease. Case identification and asbestos exposure classification: active search for mesotheliomas diagnosed via histological or cytological examination between 1987 and 2006; a probability of asbestos exposure was attributed to each case, following interviews with the subjects or their relatives and collection of data on the jobs held over their lifetime. Risk estimate among construction workers: the ratio between cases and person-years, the latter derived from the number of construction workers reported by censuses. Lung content of asbestos fibres: examination of lung specimens by scanning electron microscope to determine the number and type of fibres. Claims for compensation and compensation awarded: data obtained from the National Institute for Insurance against Occupational Diseases, available for the period 1999-2006. Of 952 mesothelioma cases classified as due to asbestos exposure, 251 were assigned to work in the construction industry (21 of which were due to domestic or environmental exposures), which gives a rate of 4.1 (95% CI 3.6-4.8) per 100,000 per year among construction workers. The asbestos fibre content detected in the lungs of 11 construction workers showed a mean of 1.7 × 10⁶ fibres/g dry tissue (range 350,000-3 million) for fibres > 1 μm, almost exclusively amphibole fibres. 62% of the claims for compensation were granted, but the percentage fell to less than 40% when claims were submitted by a relative after the death of the subject. The prevalence of mesothelioma occurring among construction workers is high and is associated with asbestos exposure; the risk is underestimated by the subjects and their relatives. All mesotheliomas occurring among

  9. A novel Gaussian model based battery state estimation approach: State-of-Energy

    International Nuclear Information System (INIS)

    He, HongWen; Zhang, YongZhi; Xiong, Rui; Wang, Chun

    2015-01-01

    Highlights: • The Gaussian model is employed to construct a novel battery model. • The genetic algorithm is used to implement model parameter identification. • The AIC is used to decide the best hysteresis order of the battery model. • A novel battery SoE estimator is proposed and verified with two kinds of batteries. - Abstract: State-of-energy (SoE) is a very important index for the battery management system (BMS) used in electric vehicles (EVs); it is indispensable for ensuring the safe and reliable operation of batteries. To estimate battery SoE accurately, the main work can be summarized in three aspects. (1) Considering that different kinds of batteries show different open circuit voltage behaviors, the Gaussian model is employed to construct the battery model. Furthermore, the genetic algorithm is employed to locate the optimal parameters of the selected battery model. (2) To determine an optimal tradeoff between battery model complexity and prediction precision, the Akaike information criterion (AIC) is used to determine the best hysteresis order of the combined battery model. Results from a comparative analysis show that the first-order hysteresis battery model is the best choice based on the AIC values. (3) The central difference Kalman filter (CDKF) is used to estimate the real-time SoE, and an erroneous initial SoE is considered to evaluate the robustness of the SoE estimator. Lastly, two kinds of lithium-ion batteries are used to verify the proposed SoE estimation approach. The results show that the maximum SoE estimation error is within 1% for both LiFePO4 and LiMn2O4 battery datasets
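
    The AIC-based order selection mentioned in both this and the earlier battery record trades goodness of fit against parameter count; under Gaussian errors, AIC = n·ln(RSS/n) + 2k. A hedged sketch on synthetic near-linear data, where the first-order model should win (polynomial fits stand in for the battery models):

    ```python
    import numpy as np

    def aic(k, residuals):
        """Gaussian AIC = n ln(RSS/n) + 2k; lower is better."""
        n = len(residuals)
        return n * np.log(np.sum(residuals ** 2) / n) + 2 * k

    rng = np.random.default_rng(3)
    x = np.linspace(0, 1, 500)
    y = 3.6 + 0.4 * x + rng.normal(0.0, 0.02, x.size)   # near-linear OCV-like data

    for order in (1, 2, 3, 4):                          # candidate model orders
        coef = np.polyfit(x, y, order)
        resid = y - np.polyval(coef, x)
        print(order, round(aic(order + 1, resid), 1))   # order 1 should win
    ```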

  10. A hybrid numerical prediction scheme for solar radiation estimation in un-gauged catchments.

    Science.gov (United States)

    Shamim, M. A.; Bray, M.; Ishak, A. M.; Remesan, R.; Han, D.

    2009-09-01

    The importance of solar radiation on earth's surface is depicted in its wide range of applications in the fields of meteorology, agricultural sciences, engineering, hydrology, crop water requirements, climatic change and energy assessment. It is quite random in nature as it has to go through different processes of assimilation and dispersion while on its way to earth. Compared to other meteorological parameters, solar radiation is quite infrequently measured; for example, the worldwide ratio of stations collecting solar radiation to those collecting temperature is 1:500 (Badescu, 2008). Researchers, therefore, have to rely on indirect techniques of estimation that include nonlinear models, artificial intelligence (e.g. neural networks), remote sensing and numerical weather prediction (NWP). This study proposes a hybrid numerical prediction scheme for solar radiation estimation in un-gauged catchments. It uses the PSU/NCAR Mesoscale Modelling system (MM5) (Grell et al., 1995) to parameterise the cloud effect on extraterrestrial radiation by dividing the atmosphere into four layers of very high (6-12 km), high (3-6 km), medium (1.5-3 km) and low (0-1.5 km) altitude from earth. It is believed that various cloud forms exist within each of these layers. An hourly time series of upper air pressure and relative humidity data sets corresponding to all of these layers is determined for the Brue catchment, southwest UK, using MM5. The cloud index (CI) of each layer was then determined using (Yang and Koike, 2002): $c_i = \frac{1}{p_{bi} - p_{ti}} \int_{p_{ti}}^{p_{bi}} \max\left(0,\; \frac{Rh - Rh_{cri}}{1 - Rh_{cri}}\right) dp$, where $p_{bi}$ and $p_{ti}$ represent the air pressure at the bottom and top of each layer, respectively, and $Rh_{cri}$ is the critical value of relative humidity at which a certain cloud type is formed. Output from a global clear-sky solar radiation model (MRM v-5) (Kambezidis and Psiloglu, 2008) is used along with meteorological datasets of temperature and precipitation and astronomical information. The analysis is aided by the

  11. Estimating Subjective Probabilities

    DEFF Research Database (Denmark)

    Andersen, Steffen; Fountain, John; Harrison, Glenn W.

    2014-01-01

    either construct elicitation mechanisms that control for risk aversion, or construct elicitation mechanisms which undertake 'calibrating adjustments' to elicited reports. We illustrate how the joint estimation of risk attitudes and subjective probabilities can provide the calibration adjustments...... that theory calls for. We illustrate this approach using data from a controlled experiment with real monetary consequences to the subjects. This allows the observer to make inferences about the latent subjective probability, under virtually any well-specified model of choice under subjective risk, while still...

  12. Age prediction formulae from radiographic assessment of skeletal maturation at the knee in an Irish population.

    LENUS (Irish Health Repository)

    O'Connor, Jean E

    2014-01-01

    Age estimation in living subjects is primarily achieved through assessment of a hand-wrist radiograph and comparison with a standard reference atlas. Recently, maturation of other regions of the skeleton has also been assessed in an attempt to refine the age estimates. The current study presents a method to predict bone age directly from the knee in a modern Irish sample. Ten maturity indicators (A-J) at the knee were examined from radiographs of 221 subjects (137 males; 84 females). Each indicator was assigned a maturity score. Scores for indicators A-G, H-J and A-J, respectively, were totalled to provide a cumulative maturity score for change in morphology of the epiphyses (AG), epiphyseal union (HJ) and the combination of both (AJ). Linear regression equations to predict age from the maturity scores (AG, HJ, AJ) were constructed for males and females. For males, equation-AJ demonstrated the greatest predictive capability (R2 = 0.775) while for females equation-HJ had the strongest capacity for prediction (R2 = 0.815). When equation-AJ for males and equation-HJ for females were applied to the current sample, the predicted age of 90% of subjects was within ±1.5 years of actual age for male subjects and within +2.0 to -1.9 years of actual age for female subjects. The regression formulae and associated charts represent the most contemporary method of age prediction currently available for an Irish population, and provide a further technique which can contribute to a multifactorial approach to age estimation in non-adults.

  13. Prediction of resource volumes at untested locations using simple local prediction models

    Science.gov (United States)

    Attanasi, E.D.; Coburn, T.C.; Freeman, P.A.

    2006-01-01

    This paper shows how local spatial nonparametric prediction models can be applied to estimate volumes of recoverable gas resources at individual undrilled sites, at multiple sites on a regional scale, and to compute confidence bounds for regional volumes based on the distribution of those estimates. An approach that combines cross-validation, the jackknife, and bootstrap procedures is used to accomplish this task. Simulation experiments show that cross-validation can be applied beneficially to select an appropriate prediction model. The cross-validation procedure worked well for a wide range of different states of nature and levels of information. Jackknife procedures are used to compute individual prediction estimation errors at undrilled locations. The jackknife replicates also are used with a bootstrap resampling procedure to compute confidence bounds for the total volume. The method was applied to data (partitioned into a training set and target set) from the Devonian Antrim Shale continuous-type gas play in the Michigan Basin in Otsego County, Michigan. The analysis showed that the model estimate of total recoverable volumes at prediction sites is within 4 percent of the total observed volume. The model predictions also provide frequency distributions of the cell volumes at the production unit scale. Such distributions are the basis for subsequent economic analyses. © Springer Science+Business Media, LLC 2007.
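
    The bootstrap step — resampling site-level prediction errors to put bounds on a regional total — can be sketched generically. All arrays below are synthetic stand-ins for the jackknife errors and the undrilled-site predictions (none of the study's data):

    ```python
    import numpy as np

    rng = np.random.default_rng(11)

    # Synthetic jackknife-style prediction errors at drilled sites, and model
    # predictions (arbitrary volume units) at undrilled cells.
    pred_errors = rng.normal(0.0, 0.4, size=120)
    site_predictions = rng.lognormal(mean=0.0, sigma=0.6, size=300)

    def bootstrap_total(preds, errors, n_boot=5000):
        """Resample site-level errors to bound the regional total volume."""
        totals = np.empty(n_boot)
        for b in range(n_boot):
            e = rng.choice(errors, size=preds.size, replace=True)
            totals[b] = np.sum(preds + e)
        return np.percentile(totals, [5, 50, 95])

    print(bootstrap_total(site_predictions, pred_errors))
    ```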

  14. The impact of composite AUC estimates on the prediction of systemic exposure in toxicology experiments.

    Science.gov (United States)

    Sahota, Tarjinder; Danhof, Meindert; Della Pasqua, Oscar

    2015-06-01

    Current toxicity protocols relate measures of systemic exposure (i.e. AUC, Cmax) as obtained by non-compartmental analysis to observed toxicity. A complicating factor in this practice is the potential bias in the estimates defining safe drug exposure. Moreover, it prevents the assessment of variability. The objective of the current investigation was therefore (a) to demonstrate the feasibility of applying nonlinear mixed effects modelling for the evaluation of toxicokinetics and (b) to assess the bias and accuracy in summary measures of systemic exposure for each method. Here, simulation scenarios were evaluated, which mimic toxicology protocols in rodents. To ensure differences in pharmacokinetic properties are accounted for, hypothetical drugs with varying disposition properties were considered. Data analysis was performed using non-compartmental methods and nonlinear mixed effects modelling. Exposure levels were expressed as area under the concentration versus time curve (AUC), peak concentrations (Cmax) and time above a predefined threshold (TAT). Results were then compared with the reference values to assess the bias and precision of parameter estimates. Higher accuracy and precision were observed for model-based estimates (i.e. AUC, Cmax and TAT), irrespective of group or treatment duration, as compared with non-compartmental analysis. Despite the focus of guidelines on establishing safety thresholds for the evaluation of new molecules in humans, current methods neglect uncertainty, lack of precision and bias in parameter estimates. The use of nonlinear mixed effects modelling for the analysis of toxicokinetics provides insight into variability and should be considered for predicting safe exposure in humans.
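
    The non-compartmental summaries at issue — AUC by the trapezoidal rule, Cmax, and time above a threshold (TAT) — are straightforward to compute from a concentration-time profile. A minimal sketch with an invented profile (units and threshold are illustrative):

    ```python
    import numpy as np

    def nca_metrics(t, c, threshold):
        """Non-compartmental exposure summaries from a concentration-time
        profile: AUC (trapezoidal rule), Cmax, and time above a threshold."""
        auc = np.trapz(c, t)
        cmax = c.max()
        above = c >= threshold
        tat = np.trapz(above.astype(float), t)   # crude TAT via indicator integral
        return auc, cmax, tat

    t = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0, 12.0])   # hours
    c = np.array([0.0, 4.2, 6.1, 5.0, 2.8, 0.9, 0.3])    # mg/L (illustrative)
    print(nca_metrics(t, c, threshold=1.0))
    ```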

  15. Estimation of Biomass and Canopy Height in Bermudagrass, Alfalfa, and Wheat Using Ultrasonic, Laser, and Spectral Sensors

    Directory of Open Access Journals (Sweden)

    Jeremy Joshua Pittman

    2015-01-01

    Full Text Available Non-destructive biomass estimation of vegetation has been performed via remote sensing as well as physical measurements. An effective method for estimating biomass must have accuracy comparable to the accepted standard of destructive removal. Estimation or measurement of height is commonly employed to create a relationship between height and mass. This study examined several types of ground-based mobile sensing strategies for forage biomass estimation. Forage production experiments consisting of alfalfa (Medicago sativa L.), bermudagrass [Cynodon dactylon (L.) Pers.], and wheat (Triticum aestivum L.) were employed to compare sensor biomass estimates (laser, ultrasonic, and spectral) with physical measurements (plate meter and meter stick) and the traditional harvest method (clipping). Predictive models were constructed via partial least squares regression, and modeled estimates were compared to the physically measured biomass. Least significant difference separated mean estimates were examined to evaluate differences between the physical measurements and sensor estimates for canopy height and biomass. Differences between methods were minimal (average percent error of 11.2% between predicted values and machine- and quadrat-harvested biomass values of 1.64 and 4.91 t·ha−1, respectively), except at the lowest measured biomass (average percent error of 89% for harvester- and quadrat-harvested biomass < 0.79 t·ha−1) and the greatest measured biomass (average percent error of 18% for harvester- and quadrat-harvested biomass > 6.4 t·ha−1). These data suggest that using mobile sensor-based biomass estimation models could be an effective alternative to the traditional clipping method for rapid, accurate in-field biomass estimation.
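
    Partial least squares regression, as used above, projects correlated sensor features onto a few latent components before regressing biomass on them. A hedged scikit-learn sketch on synthetic sensor data (the feature construction is invented, not the study's measurements):

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_predict

    rng = np.random.default_rng(5)

    # Synthetic per-plot features: stand-ins for ultrasonic height, laser
    # height and a few spectral band ratios.
    X = rng.normal(size=(120, 5))
    biomass = 2.0 + X @ np.array([0.8, 0.7, 0.3, 0.2, 0.1]) \
              + rng.normal(0, 0.3, 120)

    pls = PLSRegression(n_components=3)
    pred = cross_val_predict(pls, X, biomass, cv=10).ravel()
    rmse = np.sqrt(np.mean((pred - biomass) ** 2))
    print(f"cross-validated RMSE: {rmse:.2f} t/ha")
    ```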

  16. Impaired healing of cervical oesophagogastrostomies can be predicted by estimation of gastric serosal blood perfusion by laser Doppler flowmetry.

    Science.gov (United States)

    Pierie, J P; De Graaf, P W; Poen, H; Van der Tweel, I; Obertop, H

    1994-11-01

    To assess the value of relative blood perfusion of the gastric tube in prediction of impaired healing of cervical oesophagogastrostomies. Prospective study. University hospital, The Netherlands. Thirty patients undergoing transhiatal oesophagectomy and partial gastrectomy for cancer of the oesophagus or oesophagogastric junction, with gastric tube reconstruction and cervical oesophagogastrostomy. Operative measurement of gastric blood perfusion at four sites by laser Doppler flowmetry and perfusion of the same sites after construction of the gastric tube expressed as a percentage of preconstruction values. The relative perfusion at the most proximal site of the gastric tube was significantly lower than at the more distal sites (p = 0.001). Nine of 18 patients (50%) in whom the perfusion of the proximal gastric tube was less than 70% of preconstruction values developed an anastomotic stricture, compared with only 1 of 12 patients (8%) with a relative perfusion of 70% or more (p = 0.024). A reduction in perfusion of the gastric tube did not predict leakage. Impaired anastomotic healing is unlikely if relative perfusion is 70% or more of preconstruction values. Perfusion of less than 70% partly predicts the occurrence of anastomotic stricture, but leakage cannot be predicted. Factors other than blood perfusion may have a role in the process of anastomotic healing.

  17. Moving Horizon Estimation and Control

    DEFF Research Database (Denmark)

    Jørgensen, John Bagterp

    successful and applied methodology beyond PID-control for control of industrial processes. The main contribution of this thesis is introduction and definition of the extended linear quadratic optimal control problem for solution of numerical problems arising in moving horizon estimation and control...... problems. Chapter 1 motivates moving horizon estimation and control as a paradigm for control of industrial processes. It introduces the extended linear quadratic control problem and discusses its central role in moving horizon estimation and control. Introduction, application and efficient solution....... It provides an algorithm for computation of the maximal output admissible set for linear model predictive control. Appendix D provides results concerning linear regression. Appendix E discuss prediction error methods for identification of linear models tailored for model predictive control....

  18. Predicting Student Grade Point Average at a Community College from Scholastic Aptitude Tests and from Measures Representing Three Constructs in Vroom's Expectancy Theory Model of Motivation.

    Science.gov (United States)

    Malloch, Douglas C.; Michael, William B.

    1981-01-01

    This study was designed to determine whether an unweighted linear combination of community college students' scores on standardized achievement tests and a measure of motivational constructs derived from Vroom's expectancy theory model of motivation was predictive of academic success (grade point average earned during one quarter of an academic…

  19. Estimating overall exposure effects for the clustered and censored outcome using random effect Tobit regression models.

    Science.gov (United States)

    Wang, Wei; Griswold, Michael E

    2016-11-30

    The random effect Tobit model is a regression model that accommodates both left- and/or right-censoring and within-cluster dependence of the outcome variable. Regression coefficients of random effect Tobit models have conditional interpretations on a constructed latent dependent variable and do not provide inference of overall exposure effects on the original outcome scale. Marginalized random effects model (MREM) permits likelihood-based estimation of marginal mean parameters for the clustered data. For random effect Tobit models, we extend the MREM to marginalize over both the random effects and the normal space and boundary components of the censored response to estimate overall exposure effects at population level. We also extend the 'Average Predicted Value' method to estimate the model-predicted marginal means for each person under different exposure status in a designated reference group by integrating over the random effects and then use the calculated difference to assess the overall exposure effect. The maximum likelihood estimation is proposed utilizing a quasi-Newton optimization algorithm with Gauss-Hermite quadrature to approximate the integration of the random effects. We use these methods to carefully analyze two real datasets. Copyright © 2016 John Wiley & Sons, Ltd.
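
    The marginalization over random effects described above can be carried out with Gauss-Hermite quadrature: the conditional mean of a left-censored-at-zero normal outcome is averaged over the random-intercept distribution. A hedged sketch (model dimensions and parameter values invented; a single scalar linear predictor stands in for a full design matrix):

    ```python
    import numpy as np
    from scipy.stats import norm

    def marginal_tobit_mean(xb, sigma, tau, n_nodes=30):
        """Population-average mean of a left-censored-at-zero outcome under a
        random-intercept Tobit model, via Gauss-Hermite quadrature over b."""
        nodes, weights = np.polynomial.hermite.hermgauss(n_nodes)
        b = np.sqrt(2.0) * tau * nodes                # change of variable
        w = weights / np.sqrt(np.pi)
        mu = xb + b                                   # linear predictor per node
        # E[max(0, Z)] for Z ~ N(mu, sigma^2): mu*Phi(mu/sigma) + sigma*phi(mu/sigma)
        cond_mean = mu * norm.cdf(mu / sigma) + sigma * norm.pdf(mu / sigma)
        return np.sum(w * cond_mean)

    # Overall exposure effect: difference in model-predicted marginal means
    print(marginal_tobit_mean(xb=1.0, sigma=1.0, tau=0.8)
          - marginal_tobit_mean(xb=0.4, sigma=1.0, tau=0.8))
    ```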

  20. Development and validation of risk prediction equations to estimate survival in patients with colorectal cancer: cohort study

    OpenAIRE

    Hippisley-Cox, Julia; Coupland, Carol

    2017-01-01

    Objective: To develop and externally validate risk prediction equations to estimate absolute and conditional survival in patients with colorectal cancer. Design: Cohort study. Setting: General practices in England providing data for the QResearch database linked to the national cancer registry. Participants: 44 145 patients aged 15-99 with colorectal cancer from 947 practices to derive the equations. The equations were validated in 15 214 patients with colorectal cancer ...

  1. Distributed and decentralized state estimation in gas networks as distributed parameter systems.

    Science.gov (United States)

    Ahmadian Behrooz, Hesam; Boozarjomehry, R Bozorgmehry

    2015-09-01

    In this paper, a framework for distributed and decentralized state estimation in high-pressure and long-distance gas transmission networks (GTNs) is proposed. The non-isothermal model of the plant, including mass, momentum and energy balance equations, is used to simulate the dynamic behavior. Due to several disadvantages of implementing a centralized Kalman filter for large-scale systems, the continuous/discrete form of the extended Kalman filter for distributed and decentralized estimation (DDE) has been extended for these systems. Accordingly, the global model is decomposed into several subsystems, called local models. Some heuristic rules are suggested for system decomposition in gas pipeline networks. In the construction of local models, due to the existence of common states and interconnections among the subsystems, the assimilation and prediction steps of the Kalman filter are modified to take the overlapping and external states into account. However, the dynamic Riccati equation for each subsystem is constructed based on the local model, which introduces a maximum error of 5% in the estimated standard deviation of the states in the benchmarks studied in this paper. The performance of the proposed methodology has been shown by comparing its accuracy and computational demands against those of a centralized Kalman filter for two viable benchmarks. In a real-life network, it is shown that while the accuracy is not significantly decreased, the real-time factor of the state estimation is increased by a factor of 10. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.

  2. Main summaries of construction of the Rovno NPP third unit

    International Nuclear Information System (INIS)

    Smoktij, I.P.; Tsetsenko, I.K.

    1987-01-01

    The main technical and economic indices attained at the Rovno and Zaporozhe NPPs are considered. The estimated costs of the main installations of the building site, the organizations participating in construction, the labor forces at different building sections, the duration of different stages of construction, the actual labor costs associated with the reactor unit, the volumes of construction and installation work, and the construction schedule for Rovno Unit-3 are given

  3. Predicting dermal penetration for ToxCast chemicals using in silico estimates for diffusion in combination with physiologically based pharmacokinetic (PBPK) modeling.

    Science.gov (United States)

    Predicting dermal penetration for ToxCast chemicals using in silico estimates for diffusion in combination with physiologically based pharmacokinetic (PBPK) modeling. Evans, M.V., Sawyer, M.E., Isaacs, K.K., and Wambaugh, J. With the development of efficient high-throughput (HT) in ...

  4. A new daily dividend-adjusted index for the Danish stock market, 1985-2002: Construction, statistical properties, and return predictability

    DEFF Research Database (Denmark)

    Belter, Klaus; Engsted, Tom; Tanggaard, Carsten

    2005-01-01

    We present a new dividend-adjusted blue chip index for the Danish stock market covering the period 1985-2002. In contrast to other indices on the Danish stock market, the index is calculated on a daily basis. In the first part of the paper a detailed description of the construction of the index is given. In the second part of the paper we analyze the time-series properties of daily, weekly, and monthly returns, and we present evidence on predictability of multi-period returns. We also compare stock returns with the returns on long-term bonds and short-term money market instruments (that is...
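
    The dividend adjustment itself is simple: each day's return includes any dividend paid that day, and the index chains these total returns from a base value. A minimal sketch (prices and dividends invented):

    ```python
    import numpy as np

    def total_return_index(prices, dividends, base=100.0):
        """Dividend-adjusted index: the daily return includes the dividend paid
        that day, r_t = (P_t + D_t) / P_{t-1} - 1, chained from a base value."""
        prices, dividends = np.asarray(prices), np.asarray(dividends)
        r = (prices[1:] + dividends[1:]) / prices[:-1] - 1.0
        return base * np.concatenate(([1.0], np.cumprod(1.0 + r)))

    print(total_return_index([100, 101, 99, 102], [0, 0, 2.0, 0]))
    ```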

  5. Using High-Resolution Satellite Aerosol Optical Depth To Estimate Daily PM2.5 Geographical Distribution in Mexico City.

    Science.gov (United States)

    Just, Allan C; Wright, Robert O; Schwartz, Joel; Coull, Brent A; Baccarelli, Andrea A; Tellez-Rojo, Martha María; Moody, Emily; Wang, Yujie; Lyapustin, Alexei; Kloog, Itai

    2015-07-21

    Recent advances in estimating fine particle (PM2.5) ambient concentrations use daily satellite measurements of aerosol optical depth (AOD) for spatially and temporally resolved exposure estimates. Mexico City is a dense megacity that differs from other previously modeled regions in several ways: it has bright land surfaces, a distinctive climatological cycle, and an elevated semi-enclosed air basin with a unique planetary boundary layer dynamic. We extend our previous satellite methodology to the Mexico City area, a region with higher PM2.5 than most U.S. and European urban areas. Using a novel 1 km resolution AOD product from the MODIS instrument, we constructed daily predictions across the greater Mexico City area for 2004-2014. We calibrated the association of AOD to PM2.5 daily using municipal ground monitors, land use, and meteorological features. Predictions used spatial and temporal smoothing to estimate AOD when satellite data were missing. Our model performed well, resulting in an out-of-sample cross-validation R2 of 0.724. Cross-validated root-mean-squared prediction error (RMSPE) of the model was 5.55 μg/m(3). This novel model reconstructs long- and short-term spatially resolved exposure to PM2.5 for epidemiological studies in Mexico City.
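
    The core calibration step — regressing monitor PM2.5 on AOD and meteorological covariates with coefficients that vary by day — can be caricatured with per-day regressions. A hedged sketch on synthetic data (the study itself used mixed models plus spatial and temporal smoothing, not this simplification):

    ```python
    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(9)

    # Synthetic monitor-day records: AOD and a boundary-layer covariate as
    # stand-ins for the study's calibration inputs.
    df = pd.DataFrame({
        "day": np.repeat(np.arange(50), 20),
        "aod": rng.gamma(2.0, 0.2, 1000),
        "pbl": rng.normal(1.5, 0.4, 1000),
    })
    df["pm25"] = 10 + (25 + 5 * np.sin(df["day"])) * df["aod"] \
                 - 3 * df["pbl"] + rng.normal(0, 3, 1000)

    # Day-specific calibration: the AOD-PM2.5 slope varies with meteorology,
    # so fit a separate regression per day.
    for day, g in list(df.groupby("day"))[:3]:
        fit = LinearRegression().fit(g[["aod", "pbl"]], g["pm25"])
        print(day, fit.coef_.round(1))
    ```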

  6. Predicting the outcome of chronic kidney disease by the estimated nephron number: The rationale and design of PRONEP, a prospective, multicenter, observational cohort study

    Directory of Open Access Journals (Sweden)

    Imasawa Toshiyuki

    2012-03-01

    Full Text Available Abstract Background The nephron number is thought to be associated with the outcome of chronic kidney disease (CKD). If the nephron number can be estimated in the clinical setting, it could become a strong tool to predict renal outcome. This study was designed to estimate the nephron number in CKD patients and to establish a method to predict the outcome by using the estimated nephron number. Methods/Design The hypothesis of this study is that the estimated nephron number can predict the outcome of a CKD patient. This will be a multicenter, prospective (minimum 3 and maximum 5 years follow-up) study. The subjects will comprise CKD patients aged over 14 years who have undergone a kidney biopsy. From January 2011 to March 2013, we will recruit 600 CKD patients from 10 hospitals belonging to the National Hospital Organization of Japan. The primary parameter for assessment is the composite of total mortality, renal death, cerebro-cardiovascular events, and a 50% reduction in the eGFR. The secondary parameter is the rate of eGFR decline per year. The nephron number will be estimated by the glomerular density in biopsy specimens and the renal cortex volume. This study includes one sub-cohort study to establish the equation to calculate the renal cortex volume. Enrollment will be performed at the time of the kidney biopsy, and the data will consist of a medical interview, ultrasound for measurement of the kidney size, blood or urine tests, and the pathological findings of the kidney biopsy. Patients will continue to have medical consultations and receive examinations and/or treatment as usual. The data from the patients will be collected once a year after the kidney biopsy until March 2016. All data used in this study are easily obtained in routine clinical practice. Discussion This study includes the first trials to estimate the renal cortex volume and nephron number in the general clinical setting. Furthermore, this is the first prospective study to

  7. Predictive models of glucose control: roles for glucose-sensing neurones

    Science.gov (United States)

    Kosse, C.; Gonzalez, A.; Burdakov, D.

    2018-01-01

    The brain can be viewed as a sophisticated control module for stabilizing blood glucose. A review of classical behavioural evidence indicates that central circuits add predictive (feedforward/anticipatory) control to the reactive (feedback/compensatory) control by peripheral organs. The brain/cephalic control is constructed and engaged, via associative learning, by sensory cues predicting energy intake or expenditure (e.g. sight, smell, taste, sound). This allows rapidly measurable sensory information (rather than slowly generated internal feedback signals, e.g. digested nutrients) to control food selection, glucose supply for fight-or-flight responses or preparedness for digestion/absorption. Predictive control is therefore useful for preventing large glucose fluctuations. We review emerging roles in predictive control of two classes of widely projecting hypothalamic neurones, orexin/hypocretin (ORX) and melanin-concentrating hormone (MCH) cells. Evidence is cited that ORX neurones (i) are activated by sensory cues (e.g. taste, sound), (ii) drive hepatic production, and muscle uptake, of glucose, via sympathetic nerves, (iii) stimulate wakefulness and exploration via global brain projections and (iv) are glucose-inhibited. MCH neurones are (i) glucose-excited, (ii) innervate learning and reward centres to promote synaptic plasticity, learning and memory and (iii) are critical for learning associations useful for predictive control (e.g. using taste to predict nutrient value of food). This evidence is unified into a model for predictive glucose control. During associative learning, inputs from some glucose-excited neurones may promote connections between the ‘fast’ senses and reward circuits, constructing neural shortcuts for efficient action selection. In turn, glucose-inhibited neurones may engage locomotion/exploration and coordinate the required fuel supply. Feedback inhibition of the latter neurones by glucose would ensure that glucose fluxes they

  8. Predictive models of glucose control: roles for glucose-sensing neurones.

    Science.gov (United States)

    Kosse, C; Gonzalez, A; Burdakov, D

    2015-01-01

    The brain can be viewed as a sophisticated control module for stabilizing blood glucose. A review of classical behavioural evidence indicates that central circuits add predictive (feedforward/anticipatory) control to the reactive (feedback/compensatory) control by peripheral organs. The brain/cephalic control is constructed and engaged, via associative learning, by sensory cues predicting energy intake or expenditure (e.g. sight, smell, taste, sound). This allows rapidly measurable sensory information (rather than slowly generated internal feedback signals, e.g. digested nutrients) to control food selection, glucose supply for fight-or-flight responses or preparedness for digestion/absorption. Predictive control is therefore useful for preventing large glucose fluctuations. We review emerging roles in predictive control of two classes of widely projecting hypothalamic neurones, orexin/hypocretin (ORX) and melanin-concentrating hormone (MCH) cells. Evidence is cited that ORX neurones (i) are activated by sensory cues (e.g. taste, sound), (ii) drive hepatic production, and muscle uptake, of glucose, via sympathetic nerves, (iii) stimulate wakefulness and exploration via global brain projections and (iv) are glucose-inhibited. MCH neurones are (i) glucose-excited, (ii) innervate learning and reward centres to promote synaptic plasticity, learning and memory and (iii) are critical for learning associations useful for predictive control (e.g. using taste to predict nutrient value of food). This evidence is unified into a model for predictive glucose control. During associative learning, inputs from some glucose-excited neurones may promote connections between the 'fast' senses and reward circuits, constructing neural shortcuts for efficient action selection. In turn, glucose-inhibited neurones may engage locomotion/exploration and coordinate the required fuel supply. Feedback inhibition of the latter neurones by glucose would ensure that glucose fluxes they stimulate

  9. TBM performance prediction in Yucca Mountain welded tuff from linear cutter tests

    International Nuclear Information System (INIS)

    Gertsch, R.; Ozdemir, L.; Gertsch, L.

    1992-01-01

    Performance predictions were developed for tunnel boring machines operating in welded tuff for the construction of the experimental study facility and the potential nuclear waste repository at Yucca Mountain. The predictions were based on test data obtained from an extensive series of linear cutting tests performed on samples of Topopah Spring welded tuff from the Yucca Mountain Project site. Using the cutter force, spacing, and penetration data from the experimental program, the thrust, torque, power, and rate of penetration were estimated for a 25 ft diameter tunnel boring machine (TBM) operating in welded tuff. Guidelines were developed for the optimal design of the TBM cutterhead to achieve high production rates at the lowest possible excavation costs. The results show that the Topopah Spring welded tuff (TSw2) can be excavated at relatively high rates of advance with state-of-the-art TBMs. The results also show, however, that the TBM torque and power requirements will be higher than estimated based on rock physical properties and past tunneling experience in rock formations of similar strength.

  10. Contaminant dispersion prediction and source estimation with integrated Gaussian-machine learning network model for point source emission in atmosphere

    International Nuclear Information System (INIS)

    Ma, Denglong; Zhang, Zaoxiao

    2016-01-01

    Highlights: • Intelligent network models were built to predict contaminant gas concentrations. • Improved network models coupled with the Gaussian dispersion model are presented. • The new models show high efficiency and accuracy for concentration prediction. • The new models were applied to identify the leakage source with satisfactory results. - Abstract: A gas dispersion model is important for predicting gas concentrations when a contaminant gas leakage occurs. Intelligent network models such as radial basis function (RBF), back propagation (BP) neural network and support vector machine (SVM) models can be used for gas dispersion prediction. However, the prediction results from these network models, which take too many inputs based on the original monitoring parameters, are not in good agreement with the experimental data. Therefore, a new series of machine learning algorithm (MLA) models, combining the classic Gaussian model with MLA algorithms, is presented. The prediction results from the new models are greatly improved. Among these models, the Gaussian-SVM model performs best and its computation time is close to that of the classic Gaussian dispersion model. Finally, the Gaussian-MLA models were applied to identifying the emission source parameters with the particle swarm optimization (PSO) method. The estimation performance of PSO with Gaussian-MLA is better than that with the Gaussian model, the Lagrangian stochastic (LS) dispersion model, or network models based on the original monitoring parameters. Hence, the new prediction model based on Gaussian-MLA is potentially a good method for predicting contaminant gas dispersion, as well as a good forward model for the emission source parameter identification problem.

  11. Contaminant dispersion prediction and source estimation with integrated Gaussian-machine learning network model for point source emission in atmosphere

    Energy Technology Data Exchange (ETDEWEB)

    Ma, Denglong [Fuli School of Food Equipment Engineering and Science, Xi’an Jiaotong University, No.28 Xianning West Road, Xi’an 710049 (China); Zhang, Zaoxiao, E-mail: zhangzx@mail.xjtu.edu.cn [State Key Laboratory of Multiphase Flow in Power Engineering, Xi’an Jiaotong University, No.28 Xianning West Road, Xi’an 710049 (China); School of Chemical Engineering and Technology, Xi’an Jiaotong University, No.28 Xianning West Road, Xi’an 710049 (China)

    2016-07-05

    Highlights: • Intelligent network models were built to predict contaminant gas concentrations. • Improved network models coupled with the Gaussian dispersion model are presented. • The new models show high efficiency and accuracy for concentration prediction. • The new models were applied to identify the leakage source with satisfactory results. - Abstract: A gas dispersion model is important for predicting gas concentrations when a contaminant gas leakage occurs. Intelligent network models such as radial basis function (RBF), back propagation (BP) neural network and support vector machine (SVM) models can be used for gas dispersion prediction. However, the prediction results from these network models, which take too many inputs based on the original monitoring parameters, are not in good agreement with the experimental data. Therefore, a new series of machine learning algorithm (MLA) models, combining the classic Gaussian model with MLA algorithms, is presented. The prediction results from the new models are greatly improved. Among these models, the Gaussian-SVM model performs best and its computation time is close to that of the classic Gaussian dispersion model. Finally, the Gaussian-MLA models were applied to identifying the emission source parameters with the particle swarm optimization (PSO) method. The estimation performance of PSO with Gaussian-MLA is better than that with the Gaussian model, the Lagrangian stochastic (LS) dispersion model, or network models based on the original monitoring parameters. Hence, the new prediction model based on Gaussian-MLA is potentially a good method for predicting contaminant gas dispersion, as well as a good forward model for the emission source parameter identification problem.
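
    Both records above couple machine learning with the classic Gaussian dispersion model. For orientation, a minimal sketch of the underlying plume formula follows; the ground-reflection (image source) term and all parameter values are generic textbook assumptions, not the exact configuration used by the authors.

```python
import numpy as np

def gaussian_plume(q, u, y, z, h, sigma_y, sigma_z):
    """Steady-state Gaussian plume concentration (g/m^3) with ground reflection.

    q: emission rate (g/s); u: wind speed (m/s); y, z: crosswind and vertical
    receptor coordinates (m); h: effective release height (m); sigma_y, sigma_z:
    dispersion coefficients (m) evaluated at the receptor's downwind distance.
    """
    lateral = np.exp(-y**2 / (2 * sigma_y**2))
    vertical = (np.exp(-(z - h)**2 / (2 * sigma_z**2))
                + np.exp(-(z + h)**2 / (2 * sigma_z**2)))  # image source term
    return q * lateral * vertical / (2 * np.pi * u * sigma_y * sigma_z)

# Receptor 1.5 m above ground on the centreline; sigma values assumed for
# roughly 500 m downwind in neutral stability
print(gaussian_plume(q=10.0, u=3.0, y=0.0, z=1.5, h=20.0,
                     sigma_y=36.0, sigma_z=18.0))
```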

  12. Estimation of State of Charge for Two Types of Lithium-Ion Batteries by Nonlinear Predictive Filter for Electric Vehicles

    Directory of Open Access Journals (Sweden)

    Yin Hua

    2015-04-01

    Full Text Available Estimation of state of charge (SOC) is of great importance for lithium-ion (Li-ion) batteries used in electric vehicles. This paper presents a state of charge estimation method using a nonlinear predictive filter (NPF) and evaluates the proposed method on lithium-ion batteries with different chemistries. Contrary to most conventional filters, which usually assume a zero-mean white Gaussian process noise, the advantage of the NPF is that the process noise is treated as an unknown model error and determined as a part of the solution without any prior assumption, and it can take any statistical distribution form, which improves the estimation accuracy. In consideration of model accuracy and computational complexity, a first-order equivalent circuit model is applied to characterize the battery behavior. Experimental tests were conducted on LiCoO2 and LiFePO4 battery cells to validate the proposed method. The results show that the NPF method is able to accurately estimate the battery SOC and is robust to different initial states for both cells. Furthermore, a comparison between the NPF and the well-established extended Kalman filter for battery SOC estimation indicates that the proposed NPF method has better estimation accuracy and converges faster.
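
    The first-order equivalent circuit model adopted in the record can be written as coulomb counting plus a single RC pair. The sketch below is a forward simulation only, with an invented open-circuit-voltage curve and invented parameter values; the NPF correction step itself is not reproduced here.

```python
import numpy as np

def simulate_terminal_voltage(soc0, current, dt, capacity_ah,
                              r0, r1, c1, ocv_of_soc):
    """First-order equivalent-circuit battery model (one RC pair).

    current: array of applied currents (A, discharge positive).
    ocv_of_soc: callable mapping SOC in [0, 1] to open-circuit voltage (V).
    Returns arrays of SOC and terminal voltage over time.
    """
    soc, u1 = soc0, 0.0                  # u1: voltage across the RC pair
    tau = r1 * c1
    socs, volts = [], []
    for i in current:
        soc -= i * dt / (capacity_ah * 3600.0)          # coulomb counting
        u1 = u1 * np.exp(-dt / tau) + r1 * (1 - np.exp(-dt / tau)) * i
        volts.append(ocv_of_soc(soc) - u1 - r0 * i)     # terminal voltage
        socs.append(soc)
    return np.array(socs), np.array(volts)

# Toy linear OCV curve and a 1 A discharge for 10 minutes on a 2 Ah cell
ocv = lambda s: 3.4 + 0.8 * s
socs, volts = simulate_terminal_voltage(0.9, np.ones(600), 1.0, 2.0,
                                        0.05, 0.02, 2000.0, ocv)
print(socs[-1], volts[-1])
```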

  13. A Conceptual Grey Analysis Method for Construction Projects

    Directory of Open Access Journals (Sweden)

    Maria Mikela Chatzimichailidou

    2015-05-01

    Full Text Available For engineers, project management is a crucial field of research and development. Projects of high uncertainty and scale are characterized by risk, primarily related to their completion time. Safe duration estimates made during the planning of a project are therefore a key objective for project managers. However, traditional linear approaches fail to capture the dynamic nature of activity durations. On this ground, attention should be paid to designing and implementing methodologies that approximate activity durations during the planning and scheduling phase as well. Grey-analysis mathematical modeling is gaining ground, as it has gradually become a well-adapted and up-to-date technique in numerous scientific sectors. This paper examines the contribution of the logic behind this analysis, aiming to predict possible future divergences of task durations in large construction projects. Based on time observations at critical instances, a conceptual method is developed for making duration estimates and communicating deviations from the original schedule, so that approximations fit reality better. The procedure seeks to reduce uncertainty regarding project completion time and to limit, to a degree, inaccurate estimates by a project manager. The overall aim is to exploit the gained experience and eliminate the “hedgehog syndrome”. This is attainable by designing a reliable, easily updated, and readable information system. An illustrative example is given in the last section.
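
    The record describes its grey-analysis approach conceptually. As a concrete reference point, the standard GM(1,1) grey prediction model can be sketched as follows; the duration series is invented, and the record's own conceptual method may differ in detail.

```python
import numpy as np

def gm11_forecast(x0, steps):
    """GM(1,1) grey model: fit on sequence x0 and forecast `steps` ahead."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                        # accumulated generating operation
    z1 = 0.5 * (x1[1:] + x1[:-1])             # mean generating sequence
    B = np.column_stack((-z1, np.ones(len(z1))))
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    k = np.arange(len(x0) + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x0_hat = np.concatenate(([x1_hat[0]], np.diff(x1_hat)))
    return x0_hat[len(x0):]                   # forecast portion only

# Example: observed task durations (days) over five reporting periods
print(gm11_forecast([12, 13.5, 14.2, 15.1, 16.3], steps=2))
```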

  14. PCCE - A Predictive Code for Calorimetric Estimates in actively cooled components affected by pulsed power loads

    International Nuclear Information System (INIS)

    Agostinetti, P.; Palma, M. Dalla; Fantini, F.; Fellin, F.; Pasqualotto, R.

    2011-01-01

    The analytical interpretative models for calorimetric measurements currently available in the literature can consider closed systems in steady-state and transient conditions, or open systems in steady-state conditions only. The PCCE code (Predictive Code for Calorimetric Estimations), presented here, introduces some novelties. It can simulate, with an analytical approach, both the heated component and the cooling circuit, evaluating the heat fluxes due to conductive and convective processes in both steady-state and transient conditions. The main goal of this code is to model heating and cooling processes in actively cooled components of fusion experiments affected by high pulsed power loads, which are not easily analyzed with purely numerical approaches (like the Finite Element Method or Computational Fluid Dynamics). A dedicated mathematical formulation, based on concentrated parameters, has been developed and is described here in detail. After a comparison and benchmark with the ANSYS commercial code, the PCCE code is applied to predict the calorimetric parameters in simple scenarios of the SPIDER experiment.
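
    The concentrated-parameter formulation referred to above reduces, in its simplest single-node form, to an energy-balance ODE. The sketch below integrates such a node under a pulsed heat load; the thermal mass, heat-transfer coefficient and load profile are illustrative assumptions, not SPIDER values.

```python
import numpy as np

def lumped_temperature(power_w, dt, steps, m_c, ua, t_coolant):
    """Lumped (concentrated-parameter) transient thermal model:
    m*c * dT/dt = P(t) - U*A * (T - T_coolant), integrated explicitly."""
    T = t_coolant
    out = []
    for k in range(steps):
        p = power_w if k * dt < 5.0 else 0.0   # 5 s pulsed heat load
        T += dt * (p - ua * (T - t_coolant)) / m_c
        out.append(T)
    return np.array(out)

# 1 kg copper block: m*c ~ 385 J/K; U*A assumed 20 W/K; 10 kW pulse
temps = lumped_temperature(power_w=1e4, dt=0.01, steps=2000,
                           m_c=385.0, ua=20.0, t_coolant=30.0)
print(temps.max(), temps[-1])   # peak and final temperatures (deg C)
```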

  15. Mortality Predicted Accuracy for Hepatocellular Carcinoma Patients with Hepatic Resection Using Artificial Neural Network

    Directory of Open Access Journals (Sweden)

    Herng-Chia Chiu

    2013-01-01

    Full Text Available The aim of the present study is firstly to compare the significant predictors of mortality for hepatocellular carcinoma (HCC) patients undergoing resection between artificial neural network (ANN) and logistic regression (LR) models, and secondly to evaluate the predictive accuracy of ANN and LR in different survival-year estimation models. We constructed a prognostic model for 434 patients with 21 potential input variables using a Cox regression model. Model performance was measured by the number of significant predictors and by predictive accuracy. The results indicated that the ANN had double to triple the number of significant predictors in the 1-, 3-, and 5-year survival models compared with the LR models. Scores of accuracy, sensitivity, specificity, and area under the receiver operating characteristic curve (AUROC) of the 1-, 3-, and 5-year survival estimation models using the ANN were superior to those of the LR in all the training sets and most of the validation sets. The study demonstrated that the ANN not only identified a greater number of significant mortality predictors but also provided more accurate prediction compared with conventional methods. It is suggested that physicians consider using data mining methods as supplemental tools for clinical decision-making and prognostic evaluation.

  16. Mortality Predicted Accuracy for Hepatocellular Carcinoma Patients with Hepatic Resection Using Artificial Neural Network

    Science.gov (United States)

    Chiu, Herng-Chia; Ho, Te-Wei; Lee, King-Teh; Chen, Hong-Yaw; Ho, Wen-Hsien

    2013-01-01

    The aim of the present study is firstly to compare the significant predictors of mortality for hepatocellular carcinoma (HCC) patients undergoing resection between artificial neural network (ANN) and logistic regression (LR) models, and secondly to evaluate the predictive accuracy of ANN and LR in different survival-year estimation models. We constructed a prognostic model for 434 patients with 21 potential input variables using a Cox regression model. Model performance was measured by the number of significant predictors and by predictive accuracy. The results indicated that the ANN had double to triple the number of significant predictors in the 1-, 3-, and 5-year survival models compared with the LR models. Scores of accuracy, sensitivity, specificity, and area under the receiver operating characteristic curve (AUROC) of the 1-, 3-, and 5-year survival estimation models using the ANN were superior to those of the LR in all the training sets and most of the validation sets. The study demonstrated that the ANN not only identified a greater number of significant mortality predictors but also provided more accurate prediction compared with conventional methods. It is suggested that physicians consider using data mining methods as supplemental tools for clinical decision-making and prognostic evaluation. PMID:23737707
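
    The ANN-versus-LR comparison in these two records can be reproduced in outline with standard tooling. The sketch below uses synthetic data as a stand-in for the 434-patient, 21-variable cohort and scikit-learn defaults; it illustrates the AUROC comparison only, not the authors' Cox-based variable selection.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for the 434-patient, 21-variable cohort in the record
X, y = make_classification(n_samples=434, n_features=21, n_informative=8,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, model in [("LR", LogisticRegression(max_iter=1000)),
                    ("ANN", MLPClassifier(hidden_layer_sizes=(16,),
                                          max_iter=2000, random_state=0))]:
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name} AUROC: {auc:.3f}")
```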

  17. Validation and Refinement of Prediction Models to Estimate Exercise Capacity in Cancer Survivors Using the Steep Ramp Test.

    Science.gov (United States)

    Stuiver, Martijn M; Kampshoff, Caroline S; Persoon, Saskia; Groen, Wim; van Mechelen, Willem; Chinapaw, Mai J M; Brug, Johannes; Nollet, Frans; Kersten, Marie-José; Schep, Goof; Buffart, Laurien M

    2017-11-01

    To further test the validity and clinical usefulness of the steep ramp test (SRT) in estimating exercise tolerance in cancer survivors by external validation and extension of previously published prediction models for peak oxygen consumption (VO2peak) and peak power output (Wpeak). Cross-sectional study. Multicenter. Cancer survivors (N=283) in 2 randomized controlled exercise trials. Not applicable. Prediction model accuracy was assessed by intraclass correlation coefficients (ICCs) and limits of agreement (LOA). Multiple linear regression was used for model extension. Clinical performance was judged by the percentage of accurate endurance exercise prescriptions. ICCs of SRT-predicted VO2peak and Wpeak with these values as obtained by the cardiopulmonary exercise test were .61 and .73, respectively, using the previously published prediction models. 95% LOA were ±705 mL/min with a bias of 190 mL/min for VO2peak and ±59 W with a bias of 5 W for Wpeak. Modest improvements were obtained by adding body weight and sex to the regression equation for the prediction of VO2peak (ICC, .73; 95% LOA, ±608 mL/min) and by adding age, height, and sex for the prediction of Wpeak (ICC, .81; 95% LOA, ±48 W). Accuracy of endurance exercise prescription improved from 57% accurate prescriptions to 68% accurate prescriptions with the new prediction model for Wpeak. Predictions of VO2peak and Wpeak based on the SRT are adequate at the group level, but insufficiently accurate in individual patients. The multivariable prediction model for Wpeak can be used cautiously (eg, supplemented with a Borg score) to aid endurance exercise prescription. Copyright © 2017 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
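
    The bias and 95% limits of agreement reported above follow the usual Bland-Altman computation. A minimal sketch, with invented VO2peak values standing in for CPET-measured and SRT-predicted pairs:

```python
import numpy as np

def limits_of_agreement(measured, predicted):
    """Bland-Altman bias and 95% limits of agreement between two methods."""
    diff = np.asarray(predicted, float) - np.asarray(measured, float)
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, bias - half_width, bias + half_width

# Illustrative VO2peak values (mL/min): CPET-measured vs SRT-predicted
measured = np.array([2100, 1850, 2400, 1600, 2800, 2250])
predicted = np.array([2250, 1900, 2300, 1750, 3050, 2400])
print(limits_of_agreement(measured, predicted))
```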

  18. MCDIRC: A model to estimate creep produced by microcracking around a shaft in intact rock

    International Nuclear Information System (INIS)

    Wilkins, B.J.S.; Rigby, G.L.

    1989-12-01

    Atomic Energy of Canada Limited (AECL) is studying the concept of disposing of nuclear fuel waste in a vault in plutonic rock. Models are being developed to predict the mechanical behaviour of the rock in response to excavation and heat from the waste. The dominant mechanism of deformation at temperatures below 150 degrees C is microcracking, which results in rock creep and a decrease in rock strength. A model has been constructed to consider the perturbation of the stress state of intact rock by a vertical cylindrical opening. Slow crack-growth data are used to estimate time-dependent changes in rock strength, from which the movement (creep) of the opening wall and radial strain in the rock mass can be estimated.

  19. Modeling of Construction Cost of Villas in Oman

    Directory of Open Access Journals (Sweden)

    MA Al-Mohsin

    2014-06-01

    Full Text Available In this research, a model for estimating the construction cost of villas is presented. The model takes into account four major factors affecting a villa's cost, namely: built-up area, number of toilets, number of bedrooms and number of stories. A field survey was conducted to collect the information required for such a model, using a data collection form designed by the researchers. Information about 150 villas was collected from six well-experienced consultants in the field of villa design and supervision in Oman. The collected data were analyzed to develop the suggested model, which consists of two main levels of estimate. The first level is at the conceptual design stage, where the client presents his/her need for space and basic information about the available plot for construction. The second level of cost estimation is carried out after the preliminary design stage, where the client has to decide on the finishes and the type of structure. At the second level of estimation, the client should be able to decide whether to proceed with construction or not, according to his/her budget. The model is general and can be used anywhere; it was validated to an accepted degree of confidence using the actual costs of 112 executed villa projects in Oman. The villas included in this study were owned by clients from both high and low income brackets and had different types of finishing material. The developed equations showed good correlation between the selected variables and the actual cost, with R2 = 0.79 for the conceptual estimate and R2 = 0.601 for the preliminary estimate.
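
    The cost equations in the record are ordinary least-squares fits of cost against the four factors. A minimal sketch of that fitting step follows; the villa rows and costs are invented for illustration, and the record's actual coefficients are not reproduced.

```python
import numpy as np

# Hypothetical rows: [built-up area (m^2), toilets, bedrooms, stories] -> cost
X = np.array([[250, 3, 3, 1], [400, 5, 4, 2], [320, 4, 3, 2],
              [500, 6, 5, 2], [280, 3, 3, 1], [450, 5, 4, 2]], dtype=float)
cost = np.array([45000, 78000, 60000, 95000, 50000, 85000], dtype=float)

A = np.column_stack((np.ones(len(X)), X))     # add intercept column
coef, *_ = np.linalg.lstsq(A, cost, rcond=None)
pred = A @ coef
r2 = 1 - ((cost - pred) ** 2).sum() / ((cost - cost.mean()) ** 2).sum()
print(f"coefficients: {coef}, R^2 = {r2:.3f}")
```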

  20. Emotional Intelligence and Social Interest : are they related constructs?

    OpenAIRE

    Chamarro Lusar, Andrés

    2012-01-01

    In the last 15 years, a new psychological construct has emerged in the field of psychology: Emotional Intelligence. Some models of Emotional Intelligence bear resemblance to aspects of one of the core constructs of Adlerian Psychology: Social Interest. The authors investigated whether the two constructs are also empirically related and what their capacity is to predict psychiatric symptoms and antisocial behavior. Results indicate that Social Interest and Emotional Intelligence are empirically d...

  1. PREDICTIVE ACCURACY OF TRANSCEREBELLAR DIAMETER IN COMPARISON WITH OTHER FOETAL BIOMETRIC PARAMETERS FOR GESTATIONAL AGE ESTIMATION AMONG PREGNANT NIGERIAN WOMEN.

    Science.gov (United States)

    Adeyekun, A A; Orji, M O

    2014-04-01

    To compare the predictive accuracy of foetal trans-cerebellar diameter (TCD) with those of other biometric parameters in the estimation of gestational age (GA). A cross-sectional study. The University of Benin Teaching Hospital, Nigeria. Four hundred and fifty healthy singleton pregnant women, between 14 and 42 weeks of gestation. Trans-cerebellar diameter (TCD), biparietal diameter (BPD), femur length (FL) and abdominal circumference (AC) values across the gestational age range studied. Correlation and predictive values of TCD compared with those of other biometric parameters. The range of values for TCD was 11.9-59.7 mm (mean = 34.2 ± 14.1 mm). TCD correlated more strongly with menstrual age than the other biometric parameters did (r = 0.984, p = 0.000). TCD had a higher predictive accuracy (96.9%, ±12 days) than BPD (93.8%, ±14.1 days) and AC (92.7%, ±15.3 days). TCD has a stronger predictive accuracy for gestational age compared with other routinely used foetal biometric parameters among Nigerian Africans.

  2. Principles for guiding the ONKALO prediction-outcome studies

    International Nuclear Information System (INIS)

    Andersson, J.; Hudson, J.A.; Anttila, P.; Koskinen, L.; Pitkaenen, P.; Hautojaervi, A.; Wikstroem, L.

    2005-09-01

    This document provides the necessary foundation for establishing the strategy for the Prediction-Outcome studies currently being conducted by the ONKALO Modelling Task Force (OMTF) during the construction of the ONKALO ramp. These studies relate to geology, rock mechanics, hydrogeology and hydrogeochemistry. The purpose of the Prediction-Outcome campaign currently underway in the ONKALO ramp tunnel is to optimize Posiva's ability to predict rock conditions ahead of the excavation face. The aims of the work are: to enhance confidence in the ability to predict rock conditions in general, and especially for the repository volumes; (later) to test and verify repository design rules, as it would not be possible to drill many additional boreholes in the repository volume; and to support the ongoing construction work and make possible the application of the CEIC method. The document also presents current plans for the stages of ONKALO construction at which predictions and outcome assessments will be made, as well as for the properties and impacts that will be predicted. These plans will evidently be subject to revision during the course of the work. (orig.)

  3. Predicting Loss-of-Control Boundaries Toward a Piloting Aid

    Science.gov (United States)

    Barlow, Jonathan; Stepanyan, Vahram; Krishnakumar, Kalmanje

    2012-01-01

    This work presents an approach to predicting loss of control, with the goal of providing the pilot with a decision aid focused on keeping the pilot's control action within predicted loss-of-control boundaries. The predictive architecture combines quantitative loss-of-control boundaries, a data-based predictive control boundary estimation algorithm, and an adaptive prediction method that estimates Markov model parameters in real time. The data-based loss-of-control boundary estimation algorithm estimates the boundary of a safe set of control inputs that will keep the aircraft within the loss-of-control boundaries for a specified time horizon. The adaptive prediction model generates estimates of the system Markov parameters, which are used by the data-based loss-of-control boundary estimation algorithm. The combined algorithm is applied to a nonlinear generic transport aircraft to illustrate the features of the architecture.
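
    The Markov parameters estimated in real time by the adaptive prediction method are, in the linear case, the system's impulse-response samples, which can be recovered from input/output data by least squares. A minimal sketch on a toy first-order system (the record's flight-dynamics details are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(1)

def estimate_markov_parameters(u, y, p):
    """Least-squares estimate of the first p+1 Markov parameters (impulse
    response samples) of a linear system from input/output data."""
    Phi = np.array([u[k - p:k + 1][::-1] for k in range(p, len(u))])
    theta, *_ = np.linalg.lstsq(Phi, y[p:], rcond=None)
    return theta

# Simulate a toy first-order system: y(k) = 0.8 y(k-1) + u(k-1)
u = rng.normal(size=500)
y = np.zeros(500)
for k in range(1, 500):
    y[k] = 0.8 * y[k - 1] + u[k - 1]

h = estimate_markov_parameters(u, y, p=10)
print(np.round(h[:5], 3))   # approx [0, 1, 0.8, 0.64, 0.512]
```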

  4. Predictability of Stock Returns

    Directory of Open Access Journals (Sweden)

    Ahmet Sekreter

    2017-06-01

    Full Text Available The predictability of stock returns has been shown by empirical studies over time. This article collects the most important theories on forecasting stock returns and investigates the factors that affect the behavior of stock prices and the market as a whole. Estimating these factors, and choosing how to estimate them, are the key issues in the predictability of stock returns.

  5. Towards the prediction of pre-mining stresses in the European continent. [Estimates of vertical and probable maximum lateral stress in Europe]

    Energy Technology Data Exchange (ETDEWEB)

    Blackwood, R. L.

    1980-05-15

    There are now sufficient data available from in-situ, pre-mining stress measurements to allow a first attempt at predicting the maximum stress magnitudes likely to occur in a given mining context. The sub-horizontal (lateral) stress generally dominates the stress field, becoming critical to stope stability in many cases. For cut-and-fill mining in particular, where developed fill pressures are influenced by lateral displacement of pillars or stope backs, extraction maximization planning by mathematical modelling techniques demands the best available estimate of pre-mining stresses. While field measurements are still essential for this purpose, the present paper suggests that the worst stress case can be predicted for preliminary design or feasibility study purposes. In the European continent the vertical component of pre-mining stress may be estimated by adding 2 MPa to the pressure due to overburden weight. The maximum lateral stress likely to be encountered is about 57 MPa at depths of some 800 m to 1000 m below the surface.
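
    The suggested vertical-stress estimate is simple enough to compute directly: overburden pressure plus a 2 MPa offset. A sketch, assuming an average overburden density of 2700 kg/m3 (the record does not state the density the author used):

```python
RHO = 2700.0   # assumed average overburden density (kg/m^3)
G = 9.81       # gravitational acceleration (m/s^2)

def vertical_stress_mpa(depth_m, rho=RHO):
    """Pre-mining vertical stress: overburden weight plus the 2 MPa offset
    suggested in the record for the European continent."""
    return rho * G * depth_m / 1e6 + 2.0

for depth in (200, 500, 1000):
    print(depth, "m:", round(vertical_stress_mpa(depth), 1), "MPa")
```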

  6. Determination of development factors of the construction market

    Science.gov (United States)

    Kozlova, Olga

    2017-10-01

    The field of housing construction constantly needs measures to improve the business climate. The provision of housing for citizens remains relatively low. Recently, the state has been developing a new set of measures for improving shared-equity construction. This area has particular significance and scale for our country. The number of defrauded shareholders in the past makes it possible to estimate the scale of losses, both in the form of unfinished objects and in reputational losses for this branch of construction. This article proposes measures designed to form an informational base for forecasts of the development of construction and to ensure a positive result from the applied measures.

  7. Management of construction safety at RR site

    International Nuclear Information System (INIS)

    Pathak, B.C.; Khatsuriya, J.R.

    2016-01-01

    Construction is one of the most hazardous industries and, hence, the promotion of safety remains one of its greatest challenges today. According to ILO estimates, each year at least 60,000 fatal accidents occur on construction sites around the world, or one fatal accident every ten minutes. One in six fatal accidents at work occurs on a construction site. In industrialized countries, as many as 25-40 per cent of work-related deaths occur on construction sites, even though the sector employs only 6-10 per cent of the workforce. The number of fatalities arising from construction work in India is also quite disturbing. Though falls of persons from height and through openings are the major causes of fatal and serious accidents, the risk of fatal accidents involving material handling equipment, either during handling or during its maintenance, is also significantly high owing to the large amount of such equipment used during construction work. (author)

  8. The Cognitive Estimation Task Is Nonunitary: Evidence for Multiple Magnitude Representation Mechanisms Among Normative and ADHD College Students

    Directory of Open Access Journals (Sweden)

    Sarit Ashkenazi

    2017-02-01

    Full Text Available There is a current debate on whether the cognitive system has a shared representation for all magnitudes or whether there are unique representations. To investigate this question, we used the Biber cognitive estimation task. In this task, participants were asked to provide estimates for questions such as, “How many sticks of spaghetti are in a package?” The task uses different estimation categories (e.g., time, numerical quantity, distance, and weight) to look at real-life magnitude representations. Experiment 1 (N = 95) assessed a Hebrew version of the Biber Cognitive Estimation Task and found that different estimation categories were related in different ways; for example, weight, time, and distance shared variance, but numerical estimation did not. We suggest that numerical estimation does not require the use of measurement units and hence represents a more “pure” numerical estimation. Experiment 2 found that different factors explain individual abilities in different estimation categories. For example, numerical estimation was predicted by preverbal innate quantity understanding (the approximate number sense) and working memory, whereas time estimation was supported by IQ. These results demonstrate that cognitive estimation is not a unified construct.

  9. Predicting waist circumference from body mass index

    Directory of Open Access Journals (Sweden)

    Bozeman Samuel R

    2012-08-01

    Full Text Available Abstract Background Being overweight or obese increases risk for cardiometabolic disorders. Although both body mass index (BMI) and waist circumference (WC) measure the level of overweight and obesity, WC may be more important because of its closer relationship to total body fat. Because WC is typically not assessed in clinical practice, this study sought to develop and verify a model to predict WC from BMI and demographic data, and to use the predicted WC to assess cardiometabolic risk. Methods Data were obtained from the Third National Health and Nutrition Examination Survey (NHANES) and the Atherosclerosis Risk in Communities Study (ARIC). We developed linear regression models for men and women using NHANES data, fitting waist circumference as a function of BMI. For validation, those regressions were applied to ARIC data, assigning a predicted WC to each individual. We used the predicted WC to assess abdominal obesity and cardiometabolic risk. Results The model correctly classified 88.4% of NHANES subjects with respect to abdominal obesity. Median differences between actual and predicted WC were −0.07 cm for men and 0.11 cm for women. In ARIC, the model closely estimated the observed WC (median difference: −0.34 cm for men, +3.94 cm for women), correctly classifying 86.1% of ARIC subjects with respect to abdominal obesity and 91.5% to 99.5% with respect to cardiometabolic risk. The model is generalizable to Caucasian and African-American adult populations because it was constructed from data on a large, population-based sample of men and women in the United States and then validated in a population with a larger representation of African-Americans. Conclusions The model accurately estimates WC and identifies cardiometabolic risk. It should be useful for health care practitioners and public health officials who wish to identify individuals and populations at risk for cardiometabolic disease when WC data are unavailable.
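
    The sex-specific prediction models are plain linear regressions of WC on BMI and demographic covariates. A minimal sketch of the fitting and prediction steps follows, with invented rows standing in for the NHANES data; the published coefficients are not reproduced.

```python
import numpy as np

# Hypothetical training rows for one sex: [BMI (kg/m^2), age (years)] -> WC (cm)
bmi_age = np.array([[22, 35], [27, 50], [31, 44], [24, 62], [35, 29], [29, 55]],
                   dtype=float)
wc = np.array([78, 94, 104, 86, 112, 99], dtype=float)

A = np.column_stack((np.ones(len(bmi_age)), bmi_age))   # intercept + covariates
coef, *_ = np.linalg.lstsq(A, wc, rcond=None)

def predict_wc(bmi, age):
    """Predicted waist circumference (cm) from the fitted regression."""
    return coef[0] + coef[1] * bmi + coef[2] * age

print(predict_wc(bmi=28.0, age=45.0))
```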

  10. Modified ensemble Kalman filter for nuclear accident atmospheric dispersion: prediction improved and source estimated.

    Science.gov (United States)

    Zhang, X L; Su, G F; Yuan, H Y; Chen, J G; Huang, Q Y

    2014-09-15

    Atmospheric dispersion models play an important role in nuclear power plant accident management. A reliable estimate of the radioactive material distribution at short range (about 50 km) is urgently needed for population sheltering and evacuation planning. However, the meteorological data and the source term, which greatly influence the accuracy of atmospheric dispersion models, are usually poorly known in the early phase of an emergency. In this study, a modified ensemble Kalman filter data assimilation method, in conjunction with a Lagrangian puff model, is proposed to simultaneously improve the model prediction and reconstruct the source term for short-range atmospheric dispersion using off-site environmental monitoring data. Four main uncertain parameters are considered: source release rate, plume rise height, wind speed and wind direction. Twin experiments show that the method effectively improves the predicted concentration distribution, and the temporal profiles of source release rate and plume rise height are also successfully reconstructed. Moreover, the time lag in the response of the ensemble Kalman filter is shortened. The method proposed here can be a useful tool not only in nuclear power plant accident emergency management but also in other situations where hazardous material is released into the atmosphere. Copyright © 2014 Elsevier B.V. All rights reserved.
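
    The core of the method is the ensemble Kalman filter analysis step, which nudges an ensemble of state samples (here including source-term parameters) toward the monitoring data. A minimal sketch with a toy linear observation operator standing in for the Lagrangian puff model:

```python
import numpy as np

rng = np.random.default_rng(0)

def enkf_update(ensemble, obs, obs_operator, obs_std):
    """One ensemble Kalman filter analysis step.

    ensemble: (N, n) array of state samples; obs: (m,) observation vector;
    obs_operator: maps a state vector to observation space; obs_std: noise std.
    """
    N = ensemble.shape[0]
    hx = np.array([obs_operator(x) for x in ensemble])        # (N, m)
    Xp = ensemble - ensemble.mean(0)
    Yp = hx - hx.mean(0)
    pxy = Xp.T @ Yp / (N - 1)                                 # cross-covariance
    pyy = Yp.T @ Yp / (N - 1) + np.eye(len(obs)) * obs_std**2
    K = pxy @ np.linalg.inv(pyy)                              # Kalman gain
    perturbed = obs + rng.normal(0, obs_std, (N, len(obs)))   # perturbed obs
    return ensemble + (perturbed - hx) @ K.T

# Toy example: estimate a scalar release rate from two noisy sensor readings
ensemble = rng.normal(5.0, 2.0, (100, 1))            # prior guesses (g/s)
H = lambda x: np.array([0.8 * x[0], 1.2 * x[0]])     # linear "dispersion" map
print(enkf_update(ensemble, obs=np.array([8.0, 12.0]),
                  obs_operator=H, obs_std=0.5).mean())
```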

  11. State estimation in networked systems

    NARCIS (Netherlands)

    Sijs, J.

    2012-01-01

    This thesis considers state estimation strategies for networked systems. State estimation refers to a method for computing the unknown state of a dynamic process by combining sensor measurements with predictions from a process model. The most well known method for state estimation is the Kalman
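
    The record's (truncated) definition is the standard one: a predict step from the process model followed by an update step from the measurement. A minimal linear Kalman filter sketch on a constant-velocity toy problem:

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict-update cycle of the linear Kalman filter."""
    # Predict: propagate state estimate and covariance through the process model
    x = F @ x
    P = F @ P @ F.T + Q
    # Update: correct the prediction with the sensor measurement z
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Constant-velocity tracking of a scalar position from noisy measurements
dt = 0.1
F = np.array([[1, dt], [0, 1]]); H = np.array([[1.0, 0.0]])
Q = np.eye(2) * 1e-4; R = np.array([[0.25]])
x, P = np.zeros(2), np.eye(2)
for z in [0.1, 0.22, 0.29, 0.41, 0.52]:
    x, P = kalman_step(x, P, np.array([z]), F, H, Q, R)
print(x)   # estimated position and velocity
```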

  12. Using Purchasing Power Parity to Assess Construction Productivity

    Directory of Open Access Journals (Sweden)

    Rick Best

    2010-12-01

    Full Text Available For many reasons, comparing construction productivity between countries is a difficult task. One key problem is that of converting construction costs to a common currency. This problem can be overcome relatively simply by using a basket of construction materials and labour, termed a BLOC (Basket of Locally Obtained Commodities), as the unit of construction cost. Average BLOC costs in each location are calculated from data obtained from a number of sources (quantity surveyors, estimators). Typical building costs obtained from published construction cost data are expressed in BLOC equivalents. Lower BLOC equivalents represent higher productivity, as the other inputs (largely materials) are constant. The method provides a relatively simple and direct way of comparing productivity between different locations.
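
    Expressing a building cost in BLOC equivalents is a single division per location. A sketch with invented local figures:

```python
# Hypothetical inputs: local BLOC cost and a typical building cost per m^2,
# each in the local currency, for two locations
locations = {
    "City A": {"bloc_cost": 1250.0, "building_cost_m2": 900.0},
    "City B": {"bloc_cost": 1475.0, "building_cost_m2": 1300.0},
}

for name, d in locations.items():
    bloc_equiv = d["building_cost_m2"] / d["bloc_cost"]   # BLOCs per m^2 built
    print(f"{name}: {bloc_equiv:.3f} BLOC/m^2")
# The lower BLOC-per-m^2 figure indicates the more productive location, since
# material and labour inputs are held constant by the basket.
```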

  13. Risk Prediction of New Adjacent Vertebral Fractures After PVP for Patients with Vertebral Compression Fractures: Development of a Prediction Model

    Energy Technology Data Exchange (ETDEWEB)

    Zhong, Bin-Yan; He, Shi-Cheng; Zhu, Hai-Dong [Southeast University, Department of Radiology, Medical School, Zhongda Hospital (China); Wu, Chun-Gen [Shanghai Jiao Tong University Affiliated Sixth People’s Hospital, Department of Diagnostic and Interventional Radiology (China); Fang, Wen; Chen, Li; Guo, Jin-He; Deng, Gang; Zhu, Guang-Yu; Teng, Gao-Jun, E-mail: gjteng@vip.sina.com [Southeast University, Department of Radiology, Medical School, Zhongda Hospital (China)

    2017-02-15

    Purpose: We aim to determine the predictors of new adjacent vertebral compression fractures (AVCFs) after percutaneous vertebroplasty (PVP) in patients with osteoporotic vertebral compression fractures (OVCFs) and to construct a risk prediction score to estimate the 2-year risk of new AVCFs by risk-factor condition. Materials and Methods: Patients with OVCFs who underwent their first PVP between December 2006 and December 2013 at Hospital A (training cohort) and Hospital B (validation cohort) were included in this study. In the training cohort, we assessed the independent risk predictors and developed the probability of new adjacent OVCFs (PNAV) score system using Cox proportional hazards regression analysis. The accuracy of this system was then validated in both the training and validation cohorts by the concordance (c) statistic. Results: 421 patients (training cohort: n = 256; validation cohort: n = 165) were included in this study. In the training cohort, new AVCFs after the first PVP treatment occurred in 33 (12.9%) patients. The independent risk factors were intradiscal cement leakage and preexisting old vertebral compression fracture(s). The estimated 2-year absolute risk of new AVCFs ranged from less than 4% in patients with neither independent risk factor to more than 45% in individuals with both factors. Conclusions: The PNAV score is an objective and easy approach to predicting the risk of new AVCFs.

  14. Risk Prediction of New Adjacent Vertebral Fractures After PVP for Patients with Vertebral Compression Fractures: Development of a Prediction Model

    International Nuclear Information System (INIS)

    Zhong, Bin-Yan; He, Shi-Cheng; Zhu, Hai-Dong; Wu, Chun-Gen; Fang, Wen; Chen, Li; Guo, Jin-He; Deng, Gang; Zhu, Guang-Yu; Teng, Gao-Jun

    2017-01-01

    Purpose: We aim to determine the predictors of new adjacent vertebral compression fractures (AVCFs) after percutaneous vertebroplasty (PVP) in patients with osteoporotic vertebral compression fractures (OVCFs) and to construct a risk prediction score to estimate the 2-year risk of new AVCFs by risk-factor condition. Materials and Methods: Patients with OVCFs who underwent their first PVP between December 2006 and December 2013 at Hospital A (training cohort) and Hospital B (validation cohort) were included in this study. In the training cohort, we assessed the independent risk predictors and developed the probability of new adjacent OVCFs (PNAV) score system using Cox proportional hazards regression analysis. The accuracy of this system was then validated in both the training and validation cohorts by the concordance (c) statistic. Results: 421 patients (training cohort: n = 256; validation cohort: n = 165) were included in this study. In the training cohort, new AVCFs after the first PVP treatment occurred in 33 (12.9%) patients. The independent risk factors were intradiscal cement leakage and preexisting old vertebral compression fracture(s). The estimated 2-year absolute risk of new AVCFs ranged from less than 4% in patients with neither independent risk factor to more than 45% in individuals with both factors. Conclusions: The PNAV score is an objective and easy approach to predicting the risk of new AVCFs.
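
    The PNAV score rests on a Cox proportional hazards fit with the two binary predictors named above. A minimal sketch using the lifelines library and an invented toy cohort (a small penalizer is added for stability on so few rows):

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical cohort: follow-up months, new-AVCF event flag, and the two
# binary predictors named in the record
df = pd.DataFrame({
    "months":       [24, 6, 24, 12, 24, 3, 18, 24, 9, 24],
    "new_avcf":     [0, 1, 0, 1, 0, 1, 1, 0, 1, 0],
    "cement_leak":  [0, 1, 1, 1, 0, 1, 0, 0, 1, 0],
    "old_fracture": [1, 1, 0, 0, 0, 1, 1, 0, 1, 1],
})

cph = CoxPHFitter(penalizer=0.1)
cph.fit(df, duration_col="months", event_col="new_avcf")
cph.print_summary()               # hazard ratios for each predictor
print(cph.concordance_index_)     # the c-statistic used in the record
```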

  15. Estimation of the monthly average daily solar radiation using geographic information system and advanced case-based reasoning.

    Science.gov (United States)

    Koo, Choongwan; Hong, Taehoon; Lee, Minhyun; Park, Hyo Seon

    2013-05-07

    The photovoltaic (PV) system is considered an unlimited source of clean energy, whose electricity generation changes according to the monthly average daily solar radiation (MADSR). The MADSR distribution in South Korea shows very diverse patterns due to the country's climatic and geographical characteristics. This study aimed to develop a MADSR estimation model for locations without measured MADSR data, using an advanced case-based reasoning (CBR) model, a hybrid methodology combining CBR with an artificial neural network, multiple regression analysis, and a genetic algorithm. The average prediction accuracy of the advanced CBR model was very high, at 95.69%, and the standard deviation of the prediction accuracy was 3.67%, showing a significant improvement in prediction accuracy and consistency. A case study was conducted to verify the proposed model. The proposed model could be useful for an owner or construction manager in charge of deciding whether to introduce a PV system and where to install it. It would also benefit contractors in a competitive bidding process, allowing them to accurately estimate the electricity generation of the PV system in advance and to conduct economic and environmental feasibility studies from the life-cycle perspective.
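
    Stripped of the neural-network, regression and genetic-algorithm refinements, the CBR core is retrieval of the most similar past cases and reuse of their outcomes. A minimal nearest-neighbour sketch with invented site attributes and MADSR values:

```python
import numpy as np

# Hypothetical case base: [latitude, longitude, elevation (m)] -> MADSR
cases = np.array([[37.5, 127.0, 50], [35.1, 129.0, 20],
                  [33.5, 126.5, 30], [36.3, 127.4, 80]], dtype=float)
madsr = np.array([3.9, 4.2, 4.4, 4.0])   # kWh/m^2/day, invented

def cbr_estimate(query, k=2):
    """Retrieve the k most similar cases and average their MADSR values."""
    scale = cases.std(0)                              # normalise attributes
    dist = np.linalg.norm((cases - query) / scale, axis=1)
    nearest = np.argsort(dist)[:k]
    return madsr[nearest].mean()

print(cbr_estimate(np.array([36.0, 128.0, 40.0])))
```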

  16. Cost function estimation

    DEFF Research Database (Denmark)

    Andersen, C K; Andersen, K; Kragh-Sørensen, P

    2000-01-01

    on these criteria, a two-part model was chosen. In this model, the probability of incurring any costs was estimated using a logistic regression, while the level of the costs was estimated in the second part of the model. The choice of model had a substantial impact on the predicted health care costs, e...

  17. Construction Tender Subcontract Selection using Case-based Reasoning

    Directory of Open Access Journals (Sweden)

    Due Luu

    2012-11-01

    Full Text Available Obtaining competitive quotations from suitably qualified subcontractors at tender time can significantly increase the chance of winning a construction project. Amidst an increasingly growing trend to subcontracting in Australia, selecting appropriate subcontractors for a construction project can be a daunting task requiring the analysis of complex and dynamic criteria such as past performance, suitable experience, track record of competitive pricing, financial stability and so on. Subcontractor selection is plagued with uncertainty and vagueness, and these conditions are difficult to represent in generalised sets of rules. Decisions pertaining to the selection of subcontractors at tender time are usually based on the intuition and past experience of construction estimators. Case-based reasoning (CBR) may be an appropriate method of addressing the challenges of selecting subcontractors because CBR is able to harness the experiential knowledge of practitioners. This paper reviews the practicality and suitability of a CBR approach for subcontractor tender selection through the development of a prototype CBR procurement advisory system. In this system, subcontractor selection cases are represented by a set of attributes elicited from experienced construction estimators. The results indicate that CBR can enhance the appropriateness of the selection of subcontractors for construction projects.

  18. Age estimation in the living

    DEFF Research Database (Denmark)

    Tangmose, Sara; Thevissen, Patrick; Lynnerup, Niels

    2015-01-01

    A radiographic assessment of third molar development is essential for differentiating between juveniles and adolescents in forensic age estimation. As the developmental stages of third molars are highly correlated, age estimates based on a combination of a full set of third molar scores are statistically complicated. Transition analysis (TA) is a statistical method developed for estimating age at death in skeletons, which combines several correlated developmental traits into one age estimate including a 95% prediction interval. The aim of this study was to evaluate the performance of TA in the living on a full set of third molar scores. A cross-sectional sample of 854 panoramic radiographs, homogeneously distributed by sex and age (15.0-24.0 years), was randomly split in two: a reference sample for obtaining age estimates including a 95% prediction interval according to TA, and a validation

  19. Cost Based Value Stream Mapping as a Sustainable Construction Tool for Underground Pipeline Construction Projects

    Directory of Open Access Journals (Sweden)

    Murat Gunduz

    2017-11-01

    Full Text Available This paper deals with the application of Value Stream Mapping (VSM) as a sustainable construction tool on a real construction project involving the installation of underground pipelines. VSM was adapted to reduce the high percentage of non-value-added activities and time waste during each construction stage, and the paper sought an effective way to account for cost in the studied underground pipeline construction. This paper is unique in that it adopts a cost-based implementation of VSM to improve productivity in underground pipeline projects. The data were observed and collected on site during construction, indicating the cycle time and the value-added and non-value-added portions of each construction stage. The current state was built from these details; this was an eye-opening exercise and a process management tool serving as a trigger for improvement. After the current-state assessment, a future state was developed with the Value Stream Mapping tool, balancing the resources using a Line of Balance (LOB) technique. Moreover, a sustainable cost estimation model was developed for the current and future states to calculate the cost of underground pipeline construction. The result shows a cost reduction of 20.8% between the current and future states. This reflects the importance of cost-based Value Stream Mapping in construction as a sustainable measurement tool. This new tool could be used in the construction industry to add sustainability and effective cost management.

  20. Niche construction game cancer cells play.

    Science.gov (United States)

    Bergman, Aviv; Gligorijevic, Bojana

    2015-10-01

    The niche construction concept was originally defined in evolutionary biology as the continuous interplay between natural selection via environmental conditions and the modification of these conditions by the organism itself. The processes unfolding during cancer metastasis include the construction of niches, which cancer cells use for more efficient survival, transport into new environments and preparation of remote sites for their arrival. Many elegant experiments have recently been done illustrating, for example, premetastatic niche construction, but there is practically no mathematical modeling that applies the niche construction framework. To create models useful for understanding the role of niche construction in cancer progression, we argue that a) genetic, b) phenotypic and c) ecological levels are to be included. While the model proposed here is phenomenological in its current form, it can be converted into a predictive outcome model via experimental measurement of the model parameters. Here we give an overview of an experimentally formulated problem in cancer metastasis and propose how the niche construction framework can be utilized and broadened to model it. Other life science disciplines, such as host-parasite coevolution, may also benefit from adapting the niche construction framework, to satisfy the growing need for theoretical treatment of data collected by experimental biology.