Energy Technology Data Exchange (ETDEWEB)
Dettmers, Dana Lee; Eide, Steven Arvid
2002-10-01
An analysis of completed decommissioning projects is used to construct predictive estimates for worker exposure to radioactivity during decommissioning activities. The preferred organizational method for the completed decommissioning project data is to divide the data by type of facility, whether decommissioning was performed on part of the facility or the complete facility, and the level of radiation within the facility prior to decommissioning (low, medium, or high). Additional data analysis shows that there is not a downward trend in worker exposure data over time. Also, the use of a standard estimate for worker exposure to radioactivity may be a best estimate for low complete storage, high partial storage, and medium reactor facilities; a conservative estimate for some low level of facility radiation facilities (reactor complete, research complete, pits/ponds, other), medium partial process facilities, and high complete research facilities; and an underestimate for the remaining facilities. Limited data are available to compare different decommissioning alternatives, so the available data are reported and no conclusions can be drawn. It is recommended that all DOE sites and the NRC use a similar method to document worker hours, worker exposure to radiation (person-rem), and standard industrial accidents, injuries, and deaths for all completed decommissioning activities.
Predictable grammatical constructions
DEFF Research Database (Denmark)
Lucas, Sandra
2015-01-01
My aim in this paper is to provide evidence from diachronic linguistics for the view that some predictable units are entrenched in grammar, and consequently in human cognition, in a way that makes them functionally and structurally equal to nonpredictable grammatical units, suggesting that these predictable units should be considered grammatical constructions on a par with the nonpredictable constructions. Frequency has usually been seen as the only possible argument speaking in favor of viewing some formally and semantically fully predictable units as grammatical constructions. However, this paper … semantically and formally predictable. Despite this difference, [méllo INF], like the other future periphrases, seems to be highly entrenched in the cognition (and grammar) of Early Medieval Greek language users, and consequently a grammatical construction. The syntactic evidence speaking in favor of [méllo …
CONSTRUCTING ACCOUNTING UNCERTAINTY ESTIMATES VARIABLE
Directory of Open Access Journals (Sweden)
Nino Serdarevic
2012-10-01
This paper presents research results on the financial reporting quality of B&H firms, utilizing the empirical relation between accounting conservatism, generated in critical accounting policy choices, and management abilities in estimates and the predictive power of domicile private-sector accounting. Primary research was conducted on firms' financial statements, constructing the CAPCBIH (Critical Accounting Policy Choices relevant in B&H) variable, which represents a particular internal control system and risk assessment and influences financial reporting positions in accordance with the specific business environment. I argue that firms' management possesses no relevant capacity to determine risks and the true consumption of economic benefits, leading to the creation of hidden reserves in inventories and accounts payable, and latent losses for bad debt and asset revaluations. I draw special attention to recent IFRS convergences to US GAAP, especially in harmonizing with FAS 130 Reporting Comprehensive Income (in revised IAS 1) and FAS 157 Fair Value Measurement. The CAPCBIH variable, which showed very poor performance, indicates a considerable failure to recognize environment specifics. Furthermore, I underline the importance of the revised ISAE and the re-enforced role of auditors in assessing the relevance of management estimates.
Adjusting estimative prediction limits
Masao Ueki; Kaoru Fueda
2007-01-01
This note presents a direct adjustment of the estimative prediction limit to reduce the coverage error from a target value to third-order accuracy. The adjustment is asymptotically equivalent to those of Barndorff-Nielsen & Cox (1994, 1996) and Vidoni (1998). It has a simpler form with a plug-in estimator of the coverage probability of the estimative limit at the target value. Copyright 2007, Oxford University Press.
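The coverage error that motivates such an adjustment is easy to demonstrate. Below is a minimal Python sketch, not the paper's third-order correction; the sample size, nominal level, and simulation settings are illustrative. It shows that the naive estimative (plug-in) limit undercovers its 95% target:

```python
import random
import statistics

# Estimative (plug-in) 95% upper prediction limit for a normal sample:
# limit = xbar + z_{0.95} * s.  Plugging in estimates ignores their
# sampling error, so the realized coverage falls below the target.
z95 = statistics.NormalDist().inv_cdf(0.95)

def estimative_limit(sample):
    return statistics.fmean(sample) + z95 * statistics.stdev(sample)

random.seed(42)
n, reps = 8, 20000
covered = 0
for _ in range(reps):
    sample = [random.gauss(0.0, 1.0) for _ in range(n)]
    new_obs = random.gauss(0.0, 1.0)  # future observation to be covered
    covered += new_obs <= estimative_limit(sample)

coverage = covered / reps  # noticeably below the nominal 0.95
```

For n = 8 the realized coverage is roughly 0.92, which is the gap an adjusted limit is designed to close.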
Directory of Open Access Journals (Sweden)
Xiaomi Wang
2017-02-01
The visible and near-infrared (VNIR) spectroscopy prediction model is an effective tool for the prediction of soil organic matter (SOM) content. The predictive accuracy of the VNIR model is highly dependent on the selection of the calibration set. However, conventional methods for selecting the calibration set for constructing the VNIR prediction model merely consider either the gradients of SOM or the soil VNIR spectra and neglect the influence of environmental variables. Moreover, soil samples generally present a strong spatial variability, and, thus, the relationship between the SOM content and VNIR spectra may vary with respect to locations and surrounding environments. Hence, VNIR prediction models based on conventional calibration set selection methods would be biased, especially for estimating highly spatially variable soil content (e.g., SOM). To equip the calibration set selection method with the ability to consider SOM spatial variation and environmental influence, this paper proposes an improved method for selecting the calibration set. The proposed method combines the improved multi-variable association relationship clustering mining (MVARC) method and the Rank–Kennard–Stone (Rank-KS) method in order to synthetically consider the SOM gradient, spectral information, and environmental variables. In the proposed MVARC-R-KS method, MVARC integrates the Apriori algorithm, a density-based clustering algorithm, and the Delaunay triangulation. The MVARC method is first utilized to adaptively mine clustering distribution zones in which environmental variables exert a similar influence on soil samples. The feasibility of the MVARC method is proven by conducting an experiment on a simulated dataset. The calibration set is evenly selected from the clustering zones and the remaining zone by using the Rank-KS algorithm in order to avoid a single property in the selected calibration set. The proposed MVARC-R-KS approach is applied to select a …
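The Rank-KS step builds on the classic Kennard-Stone algorithm: seed the calibration set with the two most distant samples, then repeatedly add the sample farthest from the current selection. A minimal sketch, assuming Euclidean distance on toy two-dimensional "spectra" (the data and dimensionality are hypothetical):

```python
import math

def kennard_stone(X, k):
    """Kennard-Stone selection: start from the two most distant samples,
    then repeatedly add the sample whose minimum distance to the already
    selected samples is largest."""
    n = len(X)
    # seed with the most distant pair
    i0, j0 = max(((i, j) for i in range(n) for j in range(i + 1, n)),
                 key=lambda p: math.dist(X[p[0]], X[p[1]]))
    selected = [i0, j0]
    while len(selected) < k:
        rest = [i for i in range(n) if i not in selected]
        nxt = max(rest,
                  key=lambda i: min(math.dist(X[i], X[j]) for j in selected))
        selected.append(nxt)
    return selected

# toy samples: the extremes are picked first, then the best-spread interior point
X = [[0.0, 0.0], [1.0, 0.2], [2.0, 0.1], [10.0, 0.0]]
picks = kennard_stone(X, 3)  # [0, 3, 2]
```

The Rank-KS variant described above would additionally stratify this selection across the MVARC clustering zones rather than run it on the pooled samples.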
CONSTRUCTION COST PREDICTION USING NEURAL NETWORKS
Directory of Open Access Journals (Sweden)
Smita K Magdum
2017-10-01
Construction cost prediction is important for construction firms to compete and grow in the industry. Accurate construction cost prediction in the early stage of a project is important for project feasibility studies and successful completion. There are many factors that affect cost prediction. This paper presents construction cost prediction as a multiple regression model with the costs of six materials as independent variables. The objective of this paper is to develop neural network and multilayer perceptron based models for construction cost prediction. Different models of NN and MLP are developed with varying hidden layer sizes and numbers of hidden nodes. Four artificial neural network models and twelve multilayer perceptron models are compared. MLP and NN give better results than the statistical regression method. As compared to NN, MLP works better on the training dataset but fails on the testing dataset. Five activation functions are tested to identify a suitable function for the problem. The 'elu' transfer function gives better results than the other transfer functions.
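The statistical regression baseline that the neural models are compared against can be sketched as ordinary least squares on material prices. The sketch below uses three illustrative predictors in place of the paper's six; all prices and coefficients are invented:

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for small linear systems."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def ols(X, y):
    """Fit multiple regression via the normal equations (X'X) beta = X'y."""
    p = len(X[0])
    XtX = [[sum(r[i] * r[j] for r in X) for j in range(p)] for i in range(p)]
    Xty = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(p)]
    return solve(XtX, Xty)

# columns: intercept plus hypothetical unit prices of two materials
X = [[1, 1, 0], [1, 0, 1], [1, 2, 3], [1, 4, 1]]
y = [5 + 2 * a + 3 * b for _, a, b in X]  # noise-free synthetic costs
beta = ols(X, y)  # recovers [5, 2, 3]
```

An MLP replaces the fixed linear form with learned nonlinear combinations of the same predictors, which is where the reported accuracy gains come from.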
A novel methodology to estimate the evolution of construction waste in construction sites.
Katz, Amnon; Baum, Hadassa
2011-02-01
This paper focuses on the accumulation of construction waste generated throughout the erection of new residential buildings. A special methodology was developed in order to provide a model that will predict the flow of construction waste. The amount of waste and its constituents, produced on 10 relatively large construction sites (7000-32,000 m² of built area) was monitored periodically for a limited time. A model that predicts the accumulation of construction waste was developed based on these field observations. According to the model, waste accumulates in an exponential manner, i.e. smaller amounts are generated during the early stages of construction and increasing amounts are generated towards the end of the project. The total amount of waste from these sites was estimated at 0.2 m³ per 1 m² of floor area. A good correlation was found between the model predictions and actual data from the field survey. Copyright © 2010 Elsevier Ltd. All rights reserved.
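The exponential accumulation pattern can be sketched as follows. Only the 0.2 m³ per m² total comes from the abstract; the growth constant and the quarterly breakdown are illustrative:

```python
import math

# Cumulative waste as an exponential ramp over project progress p in [0, 1],
# scaled so W(1) equals the reported 0.2 m^3 of waste per m^2 of floor area.
# The growth constant K is illustrative, not fitted to the field data.
K = 3.0
TOTAL = 0.2  # m^3 per m^2 of floor area

def cum_waste(p, total=TOTAL, k=K):
    return total * math.expm1(k * p) / math.expm1(k)

# waste generated in each quarter of the project timeline
quarters = [cum_waste(p) for p in (0.25, 0.5, 0.75, 1.0)]
increments = [b - a for a, b in zip([0.0] + quarters, quarters)]
```

Because the curve is convex, each successive increment is larger than the last, matching the observation that most waste arrives late in the project.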
Three procedures for estimating erosion from construction areas
International Nuclear Information System (INIS)
Abt, S.R.; Ruff, J.F.
1978-01-01
Erosion from many mining and construction sites can lead to serious environmental pollution problems. Therefore, erosion management plans must be developed in order that the engineer may implement measures to control or eliminate excessive soil losses. To properly implement a management program, it is necessary to estimate potential soil losses from the time the project begins to beyond project completion. Three methodologies are presented which project the estimated soil losses due to sheet or rill erosion by water and are applicable to mining and construction areas. Furthermore, the three methods described are intended as indicators of the state of the art in water erosion prediction. The procedures herein do not account for gully erosion, snowmelt erosion, wind erosion, freeze-thaw erosion or extensive flooding.
AES, Automated Construction Cost Estimation System
International Nuclear Information System (INIS)
Holder, D.A.
1995-01-01
A - Description of program or function: AES (Automated Estimating System) enters and updates the detailed cost, schedule, contingency, and escalation information contained in a typical construction or other project cost estimate. It combines this information to calculate both un-escalated and escalated costs and cash-flow values for the project. These costs can be reported at varying levels of detail. AES differs from previous versions in at least the following ways: the schedule is entered at the WBS-Participant, Activity level (multiple activities can be assigned to each WBS-Participant combination); the spending curve is defined at the schedule activity level, and a weighting factor determines the percentage of cost for the WBS-Participant applied to the schedule activity; scheduling is by days instead of fiscal year/quarter; sales tax is applied at the line-item level (a sales tax code indicates Material, Large Single Item, or Professional Services); and a 'data filter' has been added to allow the user to define the data for which a report is to be generated. B - Method of solution: Average escalation rate: the average escalation for a Bill of Material is calculated in three steps. 1. A table of quarterly escalation factors is calculated based on the base fiscal year and quarter of the project entered in the estimate record and the annual escalation rates entered in the Standard Value File. 2. The percentage distribution of costs by quarter for the Bill of Material is calculated based on the schedule entered and the curve type. 3. The percentage in each fiscal year and quarter of the distribution is multiplied by the escalation factor for that fiscal year and quarter. The sum of these results is the average escalation rate for that Bill of Material. Schedule by curve: the allocation of costs to specific time periods depends on three inputs: starting schedule date, ending schedule date, and the percentage of costs allocated to each quarter. Contingency Analysis: The …
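The three-step average escalation rate calculation can be sketched as follows. The annual rates and the spending curve are hypothetical inputs, not values from AES:

```python
# Step 1: quarterly escalation factors from annual rates (compounded quarterly).
annual_rates = [0.04, 0.05]  # hypothetical rates for two fiscal years
factors = []
f = 1.0
for r in annual_rates:
    q = (1.0 + r) ** 0.25  # quarterly compounding factor
    for _ in range(4):
        f *= q
        factors.append(f)

# Step 2: percentage of the Bill of Material cost spent in each quarter
# (a hypothetical back-loaded spending curve summing to 1).
spend = [0.05, 0.05, 0.10, 0.10, 0.15, 0.15, 0.20, 0.20]

# Step 3: average escalation rate = spend-weighted sum of the factors.
avg_escalation = sum(p * f for p, f in zip(spend, factors))
```

Because the spending curve weights later quarters more heavily, the average lands closer to the later (larger) escalation factors.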
Development of a simple estimation tool for LMFBR construction cost
International Nuclear Information System (INIS)
Yoshida, Kazuo; Kinoshita, Izumi
1999-01-01
A simple tool for estimating the construction costs of liquid-metal-cooled fast breeder reactors (LMFBRs), 'Simple Cost', was developed in this study. Simple Cost is based on a new estimation formula that can reduce the amount of design data required to estimate construction costs. Consequently, Simple Cost can be used to estimate the construction costs of innovative LMFBR concepts for which detailed design has not been carried out. The results of test calculations show that Simple Cost provides cost estimations equivalent to those obtained with conventional methods within the range of plant power from 325 to 1500 MWe. Sensitivity analyses for typical design parameters were conducted using Simple Cost. The effects of four major parameters - reactor vessel diameter, core outlet temperature, sodium handling area and number of secondary loops - on the construction costs of LMFBRs were evaluated quantitatively. The results show that the reduction of sodium handling area is particularly effective in reducing construction costs. (author)
Estimating Stochastic Volatility Models using Prediction-based Estimating Functions
DEFF Research Database (Denmark)
Lunde, Asger; Brix, Anne Floor
In this paper prediction-based estimating functions (PBEFs), introduced in Sørensen (2000), are reviewed and PBEFs for the Heston (1993) stochastic volatility model are derived. The finite sample performance of the PBEF-based estimator is investigated in a Monte Carlo study and compared to the performance of the GMM estimator based on conditional moments of integrated volatility from Bollerslev and Zhou (2002). The case where the observed log-price process is contaminated by i.i.d. market microstructure (MMS) noise is also investigated. First, the impact of MMS noise on the parameter estimates from … to correctly account for the noise are investigated. Our Monte Carlo study shows that the estimator based on PBEFs outperforms the GMM estimator, both in the setting with and without MMS noise. Finally, an empirical application investigates the possible challenges and general performance of applying the PBEF …
Estimating construction and demolition debris generation using a materials flow analysis approach.
Cochran, K M; Townsend, T G
2010-11-01
The magnitude and composition of a region's construction and demolition (C&D) debris should be understood when developing rules, policies and strategies for managing this segment of the solid waste stream. In the US, several national estimates have been conducted using a weight-per-construction-area approximation; national estimates using alternative procedures such as those used for other segments of the solid waste stream have not been reported for C&D debris. This paper presents an evaluation of a materials flow analysis (MFA) approach for estimating C&D debris generation and composition for a large region (the US). The consumption of construction materials in the US and typical waste factors used for construction materials purchasing were used to estimate the mass of solid waste generated as a result of construction activities. Debris from demolition activities was predicted from various historical construction materials consumption data and estimates of average service lives of the materials. The MFA approach estimated that approximately 610-78 × 10⁶ Mg of C&D debris was generated in 2002. This predicted mass exceeds previous estimates using other C&D debris predictive methodologies and reflects the large waste stream that exists. Copyright © 2010 Elsevier Ltd. All rights reserved.
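The MFA bookkeeping described above reduces to two sums: current consumption times a purchasing waste factor for construction waste, and consumption one service life ago for demolition debris. A toy version with invented figures:

```python
# Construction waste: material consumption times a purchasing waste factor.
# Demolition debris: consumption one service life ago, assumed fully
# discarded at end of life.  All figures are hypothetical (Mt/yr).
consumption_2002 = {"concrete": 100.0, "wood": 40.0, "drywall": 15.0}
waste_factor = {"concrete": 0.05, "wood": 0.10, "drywall": 0.12}
historical = {  # consumption in the year (2002 - service life of the material)
    "concrete": 20.0, "wood": 10.0, "drywall": 2.0,
}

construction_waste = sum(consumption_2002[m] * waste_factor[m]
                         for m in consumption_2002)
demolition_debris = sum(historical[m] for m in historical)
total_cd_debris = construction_waste + demolition_debris
```

In the toy numbers, as in the paper's finding, the demolition term dominates because decades of accumulated building stock eventually re-enters the waste stream.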
Adaptive vehicle motion estimation and prediction
Zhao, Liang; Thorpe, Chuck E.
1999-01-01
Accurate motion estimation and reliable maneuver prediction enable an automated car to react quickly and correctly to the rapid maneuvers of the other vehicles, and so allow safe and efficient navigation. In this paper, we present a car tracking system which provides motion estimation, maneuver prediction and detection of the tracked car. The three strategies employed - adaptive motion modeling, adaptive data sampling, and adaptive model switching probabilities - result in an adaptive interacting multiple model algorithm (AIMM). The experimental results on simulated and real data demonstrate that our tracking system is reliable, flexible, and robust. The adaptive tracking makes the system intelligent and useful in various autonomous driving tasks.
A Semantics-Based Approach to Construction Cost Estimating
Niknam, Mehrdad
2015-01-01
A construction project requires collaboration of different organizations such as owner, designer, contractor, and resource suppliers. These organizations need to exchange information to improve their teamwork. Understanding the information created in other organizations requires specialized human resources. Construction cost estimating is one of…
Estimation of construction waste generation and management in Thailand.
Kofoworola, Oyeshola Femi; Gheewala, Shabbir H
2009-02-01
This study examines construction waste generation and management in Thailand. It is estimated that between 2002 and 2005, an average of 1.1 million tons of construction waste was generated per year in Thailand. This constitutes about 7.7% of the total amount of waste disposed in both landfills and open dumpsites annually during the same period. Although construction waste constitutes a major source of waste in terms of volume and weight, its management and recycling are yet to be effectively practiced in Thailand. Recently, the management of construction waste is being given attention due to its rapidly increasing unregulated dumping in undesignated areas, and recycling is being promoted as a method of managing this waste. If effectively implemented, its potential economic and social benefits are immense. It was estimated that between 70 and 4,000 jobs would have been created between 2002 and 2005, if all construction wastes in Thailand had been recycled. Additionally, it would have contributed an average savings of about 3.0 × 10⁵ GJ per year in the final energy consumed by the construction sector of the nation within the same period based on the recycling scenario analyzed. The current national integrated waste management plan could enhance the effective recycling of construction and demolition waste in Thailand when enforced. It is recommended that an inventory of all construction waste generated in the country be carried out in order to assess the feasibility of large scale recycling of construction and demolition waste.
Construction Worker Fatigue Prediction Model Based on System Dynamic
Directory of Open Access Journals (Sweden)
Wahyu Adi Tri Joko
2017-01-01
Construction accidents can be caused by internal and external factors such as worker fatigue and an unsafe project environment. Tight construction project schedules force workers to work overtime for long periods. This situation leads to worker fatigue. This paper proposes a model to predict construction worker fatigue based on system dynamics (SD). System dynamics is used to represent the correlations among internal and external factors and to simulate the level of worker fatigue. To validate the model, 93 construction workers who worked in high-rise building construction projects were used as a case study. The results show that excessive workload, working elevation, and age are the main factors leading to construction worker fatigue. Simulation results also show that these factors can increase the worker fatigue level by 21.2% compared to the normal condition. Besides predicting the worker fatigue level, this model can also be used as an early warning system to prevent construction worker accidents.
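A system dynamics fatigue model of this kind reduces, in its simplest form, to a single stock-and-flow equation: fatigue is a stock fed by workload and drained by recovery. The sketch below is a generic illustration, not the paper's calibrated model; all coefficients are invented:

```python
# Stock-and-flow sketch: fatigue accumulates from workload inflow and is
# drained by recovery proportional to the current fatigue level.
def simulate_fatigue(workload, recovery=0.5, hours=12, dt=0.1):
    fatigue = 0.0
    for _ in range(int(hours / dt)):
        # Euler integration of d(fatigue)/dt = workload - recovery * fatigue
        fatigue += dt * (workload - recovery * fatigue)
    return fatigue

normal = simulate_fatigue(workload=1.0)    # settles near workload/recovery = 2
overtime = simulate_fatigue(workload=1.5)  # excessive workload raises the stock
```

The stock approaches the equilibrium workload/recovery, so raising the workload (overtime) shifts the simulated fatigue level upward, which is the mechanism behind the paper's early-warning use.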
Construction Worker Fatigue Prediction Model Based on System Dynamic
Wahyu Adi Tri Joko; Ayu Ratnawinanda Lila
2017-01-01
Construction accidents can be caused by internal and external factors such as worker fatigue and an unsafe project environment. Tight construction project schedules force workers to work overtime for long periods. This situation leads to worker fatigue. This paper proposes a model to predict construction worker fatigue based on system dynamics (SD). System dynamics is used to represent the correlations among internal and external factors and to simulate the level of worker fatigue. To validate …
Optimal design criteria - prediction vs. parameter estimation
Waldl, Helmut
2014-05-01
G-optimality is a popular design criterion for optimal prediction: it tries to minimize the kriging variance over the whole design region, so a G-optimal design minimizes the maximum variance of all predicted values. If we use kriging methods for prediction, it is self-evident to use the kriging variance as a measure of uncertainty for the estimates. However, computing the kriging variance, and even more so the empirical kriging variance, is very costly, and finding the maximum kriging variance in high-dimensional regions can be so time-demanding that we cannot really find the G-optimal design with the computer equipment available today. We cannot always avoid this problem by using space-filling designs, because small designs that minimize the empirical kriging variance are often non-space-filling. D-optimality is the design criterion related to parameter estimation: a D-optimal design maximizes the determinant of the information matrix of the estimates. D-optimality in terms of trend parameter estimation and D-optimality in terms of covariance parameter estimation yield basically different designs. The Pareto frontier of these two competing determinant criteria corresponds to designs that perform well under both criteria. Under certain conditions, searching for the G-optimal design on the above Pareto frontier yields almost as good results as searching for it in the whole design region, while the maximum of the empirical kriging variance has to be computed only a few times. The method is demonstrated by means of a computer simulation experiment based on data provided by the Belgian institute Management Unit of the North Sea Mathematical Models (MUMM) that describe the evolution of inorganic and organic carbon and nutrients, phytoplankton, bacteria and zooplankton in the Southern Bight of the North Sea.
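The D-criterion itself is cheap to evaluate, which is why it is attractive next to the costly kriging variance. A minimal sketch for a one-dimensional quadratic trend model; the model and the candidate designs are illustrative and unrelated to the MUMM data:

```python
def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def d_criterion(points):
    """det(X'X) for the quadratic trend model 1 + x + x^2 at the design points."""
    X = [[1.0, x, x * x] for x in points]
    XtX = [[sum(r[i] * r[j] for r in X) for j in range(3)] for i in range(3)]
    return det3(XtX)

spread = d_criterion([-1.0, 0.0, 1.0])      # D-optimality favors the extremes
clustered = d_criterion([-0.5, 0.0, 0.5])
```

The spread design on [-1, 1] yields a much larger information determinant than the clustered one, illustrating why D-optimal designs for trend estimation push points outward while G-oriented prediction needs coverage of the interior.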
Budget estimates: Fiscal year 1994. Volume 2: Construction of facilities
1994-01-01
The Construction of Facilities (CoF) appropriation provides contractual services for the repair, rehabilitation, and modification of existing facilities; the construction of new facilities and the acquisition of related collateral equipment; the acquisition or condemnation of real property; environmental compliance and restoration activities; the design of facilities projects; and advanced planning related to future facilities needs. Fiscal year 1994 budget estimates are broken down according to facility location of project and by purpose.
Seo, Seongwon; Hwang, Yongwoo
1999-08-01
Construction and demolition (C&D) debris is generated at the site of various construction activities. However, the amount of the debris is usually so large that it is necessary to estimate the amount of C&D debris as accurately as possible for effective waste management and control in urban areas. In this paper, an effective estimation method using a statistical model was proposed. The estimation process was composed of five steps: estimation of the life span of buildings; estimation of the floor area of buildings to be constructed and demolished; calculation of individual intensity units of C&D debris; and estimation of the future C&D debris production. This method was also applied in the city of Seoul as an actual case, and the estimated amount of C&D debris in Seoul in 2021 was approximately 24 million tons. Of this total amount, 98% was generated by demolition, and the main components of debris were concrete and brick.
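The final steps of the method, multiplying the floor area to be demolished by per-component intensity units, can be sketched as follows. The figures are invented for illustration and do not reproduce the Seoul totals above:

```python
# Hypothetical inputs: floor area to be demolished (derived from building
# life spans) and per-area debris intensity units, by component (t/m^2).
demolished_area_m2 = 1_000_000.0
intensity_t_per_m2 = {"concrete": 0.9, "brick": 0.3, "other": 0.1}

debris_by_component = {c: demolished_area_m2 * u
                       for c, u in intensity_t_per_m2.items()}
total_tons = sum(debris_by_component.values())
concrete_brick_share = (debris_by_component["concrete"]
                        + debris_by_component["brick"]) / total_tons
```

With intensity units dominated by concrete and brick, those two components account for most of the projected debris, consistent with the composition reported for Seoul.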
Construction-man hour estimation for nuclear power plants
International Nuclear Information System (INIS)
Paek, J.H.
1987-01-01
This study centers on a statistical analysis of the preliminary construction time, main construction time, and total construction man hours of nuclear power plants. The use of these econometric techniques allows the major man hour driving variables to be identified through multivariate analysis of time-series data on over 80 United States nuclear power plants. The analysis made in this study provides a clearer picture of the dynamic changes that have occurred in the man hours of these plants when compared to engineering estimates of man hours, and produces a tool that can be used to project nuclear power plant man hours
Seismic prediction ahead of tunnel constructions
Jetschny, S.; Bohlen, T.; Nil, D. D.; Giese, R.
2007-12-01
To increase the safety and efficiency of tunnel constructions, online seismic exploration ahead of a tunnel can become a valuable tool. Within the OnSite project, funded by the BMBF (German Ministry of Education and Research) within the GeoTechnologien program, a new forward-looking seismic imaging technique is developed to, e.g., determine weak and water-bearing zones ahead of the constructions. Our approach is based on the excitation and registration of tunnel surface waves. These waves are excited at the tunnel face behind the cutter head of a tunnel boring machine and travel in the drilling direction. Arriving at the front face they generate body waves (mainly S-waves) propagating further ahead. Reflected S-waves are back-converted into tunnel surface waves. For a theoretical description of the conversion process and for finding optimal acquisition geometries, it is important to study the propagation characteristics of tunnel surface waves. 3D seismic finite-difference modeling and analytic solutions of the wave equation in cylindrical coordinates revealed that at higher frequencies, i.e. if the tunnel diameter is significantly larger than the wavelength of S-waves, these surface waves can be regarded as Rayleigh waves circulating the tunnel. For smaller frequencies, i.e. when the S-wavelength approaches the tunnel diameter, the propagation characteristics of these surface waves are similar to S-waves. Field measurements performed by the GeoForschungsZentrum Potsdam, Germany at the Gotthard Base Tunnel (Switzerland) show both effects, i.e. the propagation of Rayleigh-like and body-wave-like waves along the tunnel. To enhance our understanding of the excitation and propagation characteristics of tunnel surface waves, the transition from Rayleigh waves to tube waves is investigated both analytically and by numerical simulations.
Resource-estimation models and predicted discovery
International Nuclear Information System (INIS)
Hill, G.W.
1982-01-01
Resources have been estimated by predictive extrapolation from past discovery experience, by analogy with better explored regions, or by inference from evidence of depletion of targets for exploration. Changes in technology and new insights into geological mechanisms have occurred sufficiently often in the long run to form part of the pattern of mature discovery experience. The criterion, that a meaningful resource estimate needs an objective measure of its precision or degree of uncertainty, excludes 'estimates' based solely on expert opinion. This is illustrated by development of error measures for several persuasive models of discovery and production of oil and gas in USA, both annually and in terms of increasing exploration effort. Appropriate generalizations of the models resolve many points of controversy. This is illustrated using two USA data sets describing discovery of oil and of U₃O₈; the latter set highlights an inadequacy of available official data. Review of the oil-discovery data set provides a warrant for adjusting the time-series prediction to a higher resource figure for USA petroleum. (author)
ANN Based Approach for Estimation of Construction Costs of Sports Fields
Directory of Open Access Journals (Sweden)
Michał Juszczyk
2018-01-01
Cost estimates are essential for the success of construction projects. Neural networks, as tools of artificial intelligence, offer significant potential in this field. Applying neural networks, however, requires respective studies due to the specifics of different kinds of facilities. This paper presents the proposal of an approach to the estimation of construction costs of sports fields which is based on neural networks. The general applicability of artificial neural networks to the formulated cost estimation problem is investigated. The applicability of multilayer perceptron networks is confirmed by the results of the initial training of a set of various artificial neural networks. Moreover, one network was tailored for mapping the relationship between the total cost of construction works and selected cost predictors which are characteristic of sports fields. Its prediction quality and accuracy were assessed positively. The research results legitimize the proposed approach.
Probabilistic cost estimating of nuclear power plant construction projects
International Nuclear Information System (INIS)
Finch, W.C.; Perry, L.W.; Postula, F.D.
1978-01-01
This paper shows how to identify and isolate cost accounts by developing probability trees down to component levels as justified by value and cost uncertainty. Examples are given of the procedure for assessing uncertainty in all areas contributing to cost: design, factory equipment pricing, and field labor and materials. The method of combining these individual uncertainties is presented so that the cost risk can be developed for components, systems and the total plant construction project. Formats which enable management to use the probabilistic cost estimate information for business planning and risk control are illustrated. Topics considered include code estimate performance, cost allocation, uncertainty encoding, probabilistic cost distributions, and interpretation. Effective cost control of nuclear power plant construction projects requires insight into areas of greatest cost uncertainty and a knowledge of the factors which can cause costs to vary from the single-value estimates. It is concluded that probabilistic cost estimating can provide the necessary assessment of uncertainties, both as to cause and consequence.
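The roll-up of component-level uncertainties into a total-plant cost distribution can be approximated by Monte Carlo sampling. A sketch with hypothetical triangular cost distributions; the account names and figures are invented, not from the paper:

```python
import random

# Triangular (low, mode, high) cost uncertainty per account, in M$.
accounts = {
    "design":    (40, 50, 70),
    "equipment": (200, 240, 320),
    "field":     (100, 130, 200),
}

random.seed(7)
totals = sorted(
    sum(random.triangular(lo, hi, mode) for lo, mode, hi in accounts.values())
    for _ in range(10000)
)
mean_cost = sum(totals) / len(totals)
p90 = totals[int(0.9 * len(totals))]  # 90th-percentile cost for risk control
```

Management can then plan to the mean but hold contingency against the 90th percentile, which is the kind of risk-control format the paper illustrates.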
Star-sensor-based predictive Kalman filter for satellite attitude estimation
Institute of Scientific and Technical Information of China (English)
林玉荣; 邓正隆
2002-01-01
A real-time attitude estimation algorithm, namely the predictive Kalman filter, is presented. This algorithm can accurately estimate the three-axis attitude of a satellite using only star sensor measurements. The implementation of the filter includes two steps: first, predicting the torque modeling error, and then estimating the attitude. Simulation results indicate that the predictive Kalman filter provides robust performance in the presence of both significant errors in the assumed model and in the initial conditions.
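A scalar Kalman filter illustrates the predict-update cycle underlying such an attitude estimator. This is a didactic sketch with a static state model and invented noise levels, not the paper's predictive filter with torque-error prediction:

```python
import random

# Minimal linear Kalman filter: a constant attitude angle observed by a
# noisy star-sensor-like measurement.  Predict, then update.
random.seed(1)
truth = 0.3        # true angle (rad)
R = 0.05 ** 2      # measurement noise variance
x, P = 0.0, 1.0    # initial state estimate and its variance

for _ in range(50):
    # predict: static state with no process noise, so x and P carry over
    z = truth + random.gauss(0.0, 0.05)  # noisy measurement
    K = P / (P + R)                      # Kalman gain
    x = x + K * (z - x)                  # update estimate with the innovation
    P = (1.0 - K) * P                    # update estimate variance
```

After a few dozen updates the posterior variance falls far below the sensor's own noise variance, which is the averaging effect the full three-axis filter exploits.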
Construction of ontology augmented networks for protein complex prediction.
Zhang, Yijia; Lin, Hongfei; Yang, Zhihao; Wang, Jian
2013-01-01
Protein complexes are of great importance in understanding the principles of cellular organization and function. The increase in available protein-protein interaction data, gene ontology and other resources make it possible to develop computational methods for protein complex prediction. Most existing methods focus mainly on the topological structure of protein-protein interaction networks, and largely ignore the gene ontology annotation information. In this article, we constructed ontology augmented networks with protein-protein interaction data and gene ontology, which effectively unified the topological structure of protein-protein interaction networks and the similarity of gene ontology annotations into unified distance measures. After constructing ontology augmented networks, a novel method (clustering based on ontology augmented networks) was proposed to predict protein complexes, which was capable of taking into account the topological structure of the protein-protein interaction network, as well as the similarity of gene ontology annotations. Our method was applied to two different yeast protein-protein interaction datasets and predicted many well-known complexes. The experimental results showed that (i) ontology augmented networks and the unified distance measure can effectively combine the structure closeness and gene ontology annotation similarity; (ii) our method is valuable in predicting protein complexes and has higher F1 and accuracy compared to other competing methods.
Estimating Human Predictability From Mobile Sensor Data
DEFF Research Database (Denmark)
Jensen, Bjørn Sand; Larsen, Jakob Eg; Jensen, Kristian
2010-01-01
Quantification of human behavior is of prime interest in many applications, ranging from behavioral science to practical applications such as GSM resource planning and context-aware services. As proxies for humans, we apply multiple mobile phone sensors, all conveying information about human behavior. Using a recent information-theoretic approach, it is demonstrated that the trajectories of individual sensors are highly predictable given complete knowledge of the infinite past. We suggest a new approach to time scale selection which demonstrates that participants have even higher predictability.
A photogrammetric methodology for estimating construction and demolition waste composition
International Nuclear Information System (INIS)
Heck, H.H.; Reinhart, D.R.; Townsend, T.; Seibert, S.; Medeiros, S.; Cochran, K.; Chakrabarti, S.
2002-01-01
Manual sorting of construction, demolition, and renovation (C and D) waste is difficult and costly. A photogrammetric method has been developed to analyze the composition of C and D waste that eliminates the need for physical contact with the waste. The only field data collected are the weight and volume of the solid waste in the storage container and a photograph of each side of the waste pile after it is dumped on the tipping floor. The methodology was developed and calibrated based on manual sorting studies at three different landfills in Florida, where the contents of twenty roll-off containers filled with C and D waste were sorted. The component classifications used were wood, concrete, paper products, drywall, metals, insulation, roofing, plastic, flooring, municipal solid waste, land-clearing waste, and other waste. Photographs of each side of the waste pile were taken with a digital camera, and the pictures were analyzed on a computer using Photoshop software. Photoshop was used to divide each picture into eighty cells, composed of ten columns and eight rows. The component distribution of each cell was estimated, and the results were summed to obtain a component distribution for the pile. Two types of distribution factors were developed that allow the component volumes and weights to be estimated: one set corrects the volume distributions, and the second set corrects the weight distributions. The bulk density of each of the waste components was determined and used to convert waste volumes to weights. (author)
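The grid arithmetic described in the abstract (80 cells, per-cell composition estimates, volume-to-weight conversion via bulk density) can be sketched as follows; the component list, cell fractions, and bulk densities here are invented placeholders, not the study's calibrated values:

```python
import numpy as np

# Sketch of the grid-based composition estimate: each of the
# 10 x 8 = 80 cells gets a visually estimated component fraction;
# averaging cells gives the pile's volume distribution, which assumed
# bulk densities convert to component weights.
n_rows, n_cols = 8, 10
components = ["wood", "concrete", "drywall"]          # illustrative subset
rng = np.random.default_rng(0)
# Hypothetical per-cell fractions (rows x cols x components); each cell sums to 1.
cell_fracs = rng.dirichlet(np.ones(len(components)), size=(n_rows, n_cols))

pile_volume_m3 = 20.0                  # measured in the roll-off container
# Volume distribution: average the fractions over the 80 cells.
vol_fracs = cell_fracs.mean(axis=(0, 1))
volumes = vol_fracs * pile_volume_m3
# Assumed bulk densities (kg/m3) convert component volumes to weights.
bulk_density = np.array([150.0, 1600.0, 500.0])
weights = volumes * bulk_density
```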
A photogrammetric methodology for estimating construction and demolition waste composition
Energy Technology Data Exchange (ETDEWEB)
Heck, H.H. [Florida Inst. of Technology, Dept. of Civil Engineering, Melbourne, Florida (United States); Reinhart, D.R.; Townsend, T.; Seibert, S.; Medeiros, S.; Cochran, K.; Chakrabarti, S.
2002-06-15
Manual sorting of construction, demolition, and renovation (C and D) waste is difficult and costly. A photogrammetric method has been developed to analyze the composition of C and D waste that eliminates the need for physical contact with the waste. The only field data collected are the weight and volume of the solid waste in the storage container and a photograph of each side of the waste pile after it is dumped on the tipping floor. The methodology was developed and calibrated based on manual sorting studies at three different landfills in Florida, where the contents of twenty roll-off containers filled with C and D waste were sorted. The component classifications used were wood, concrete, paper products, drywall, metals, insulation, roofing, plastic, flooring, municipal solid waste, land-clearing waste, and other waste. Photographs of each side of the waste pile were taken with a digital camera, and the pictures were analyzed on a computer using Photoshop software. Photoshop was used to divide each picture into eighty cells, composed of ten columns and eight rows. The component distribution of each cell was estimated, and the results were summed to obtain a component distribution for the pile. Two types of distribution factors were developed that allow the component volumes and weights to be estimated: one set corrects the volume distributions, and the second set corrects the weight distributions. The bulk density of each of the waste components was determined and used to convert waste volumes to weights. (author)
Directory of Open Access Journals (Sweden)
A. Mahmoodzadeh
2016-10-01
Full Text Available Ground condition and construction (excavation and support) time and costs are the key factors in decision-making during the planning and design phases of a tunnel project. An innovative methodology for probabilistic estimation of ground condition and construction time and costs is proposed, integrating a ground prediction approach based on a Markov process with time and cost variance analysis based on Monte Carlo (MC) simulation. The former provides a probabilistic description of ground classification along the tunnel alignment according to the geological information revealed by the geological profile and boreholes. The latter provides a probabilistic description of the expected construction time and costs for each operation according to survey feedback from experts. An engineering application to the Hamro tunnel is then presented to demonstrate how the ground condition and the construction time and costs are estimated in a probabilistic way. For most items, the data needed for this methodology were obtained from questionnaires distributed among tunneling experts, with the mean values of the responses applied. These results enable both owners and contractors to be aware of the risk they carry before construction, and are useful for both tendering and bidding.
Early cost estimating for road construction projects using multiple regression techniques
Directory of Open Access Journals (Sweden)
Ibrahim Mahamid
2011-12-01
Full Text Available The objective of this study is to develop early cost estimating models for road construction projects using multiple regression techniques, based on 131 sets of data collected in the West Bank in Palestine. As cost estimates are required at the early stages of a project, consideration was given to ensuring that the input data for the regression models could be easily extracted from sketches or the scope definition of the project. Eleven regression models are developed to estimate the total cost of a road construction project in US dollars; 5 of them include bid quantities as input variables and 6 include road length and road width. The coefficient of determination R² for the developed models ranges from 0.92 to 0.98, which indicates that the predicted values from the forecast models fit the real-life data well. The mean absolute percentage error (MAPE) of the developed regression models ranges from 13% to 31%. These results compare favorably with past research, which has shown that estimate accuracy in the early stages of a project is between ±25% and ±50%.
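A sketch of the second model type (total cost regressed on road length and width), with the paper's two accuracy measures, R² and MAPE; the six data points below are fabricated for illustration and are not the West Bank dataset:

```python
import numpy as np

# Hypothetical early-stage cost data: length (km), width (m), cost (USD).
length = np.array([1.2, 2.5, 3.1, 4.0, 5.5, 6.2])
width = np.array([6.0, 7.0, 6.0, 8.0, 7.5, 8.0])
cost = np.array([110, 220, 250, 360, 460, 540]) * 1e3

# Ordinary least squares with an intercept column.
X = np.column_stack([np.ones_like(length), length, width])
beta, *_ = np.linalg.lstsq(X, cost, rcond=None)
pred = X @ beta

# Coefficient of determination R^2 and mean absolute percentage error.
r2 = 1 - np.sum((cost - pred) ** 2) / np.sum((cost - cost.mean()) ** 2)
mape = np.mean(np.abs((cost - pred) / cost)) * 100
```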
Prediction of RNA secondary structure using generalized centroid estimators.
Hamada, Michiaki; Kiryu, Hisanori; Sato, Kengo; Mituyama, Toutai; Asai, Kiyoshi
2009-02-15
Recent studies have shown that methods for predicting secondary structures of RNAs on the basis of posterior decoding of base-pairing probabilities have an advantage in prediction accuracy over the conventionally used minimum free energy methods. However, there is room for improvement in the objective functions presented in previous studies, which are maximized in the posterior decoding with respect to accuracy measures for secondary structures. We propose novel estimators which improve the accuracy of RNA secondary structure prediction. The proposed estimators maximize an objective function that is the weighted sum of the expected number of true positives and true negatives among the base pairs. They are also improved versions of the estimators used in previous works: CONTRAfold for secondary structure prediction from a single RNA sequence, and McCaskill-MEA for common secondary structure prediction from multiple alignments of RNA sequences. We clarify the relations between the proposed estimators and those presented in previous works, and theoretically show that the previous estimators include additional unnecessary terms in the evaluation measures with respect to accuracy. Furthermore, computational experiments confirm the theoretical analysis by indicating improvement in empirical accuracy. The proposed estimators are extensions of the centroid estimators proposed in Ding et al. and Carvalho and Lawrence, and are applicable to a wide variety of problems in bioinformatics. Supporting information and the CentroidFold software are available online at: http://www.ncrna.org/software/centroidfold/.
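The weighted true-positive/true-negative objective implies a probability threshold of 1/(γ+1) for keeping a base pair. A simplified greedy decoder can illustrate the threshold rule (the actual estimators use a dynamic program over the base-pairing probability matrix); the 4-base matrix below is an assumed toy example:

```python
import numpy as np

# Simplified gamma-centroid style decoding: a base pair (i, j) is worth
# keeping when its posterior probability p_ij exceeds 1/(gamma + 1).
# A greedy pass stands in for the dynamic program used in practice.
def decode_pairs(p, gamma):
    n = p.shape[0]
    threshold = 1.0 / (gamma + 1.0)
    # Visit candidate pairs from most to least probable.
    order = sorted(((p[i, j], i, j) for i in range(n) for j in range(i + 1, n)),
                   reverse=True)
    used, pairs = set(), []
    for prob, i, j in order:
        if prob <= threshold:
            break
        if i not in used and j not in used:   # each base pairs at most once
            pairs.append((i, j))
            used.update((i, j))
    return pairs

# Hypothetical 4-base pairing-probability matrix (upper triangle filled).
p = np.zeros((4, 4))
p[0, 3] = 0.8
p[1, 2] = 0.6
pairs = decode_pairs(p, gamma=1.0)   # threshold 0.5 keeps both pairs
```

Raising γ lowers the threshold and favors sensitivity (more true positives); lowering γ favors specificity, which is exactly the weighting the objective controls.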
Uncertainty Estimates in Cold Critical Eigenvalue Predictions
International Nuclear Information System (INIS)
Karve, Atul A.; Moore, Brian R.; Mills, Vernon W.; Marrotte, Gary N.
2005-01-01
A recent cycle of a General Electric boiling water reactor performed two beginning-of-cycle local cold criticals. The eigenvalues estimated by the core simulator were 0.99826 and 1.00610. The large spread between them (0.00784) is a source of concern and is studied here. An analysis process is developed using statistical techniques, where first a transfer function relating the core observable Y (eigenvalue) to various factors (X's) is established. Engineering judgment is used to recognize the best candidates for the X's. They are identified as the power-weighted assembly k∞'s of selected assemblies around the withdrawn rods. These are a small subset of the many X's that could potentially influence Y. However, the intention here is not to do a comprehensive study accounting for all the X's; rather, the scope is to demonstrate that the process developed is reasonable and to show its applicability to performing detailed studies. Variability in the X's is obtained by perturbing nodal k∞'s, since they directly influence the buckling term in the quasi-two-group diffusion equation model of the core simulator. Any perturbations introduced in them are bounded by standard, well-established uncertainties. The resulting perturbations in the X's may not be directly correlated to physical attributes, but they encompass numerous biases and uncertainties credited to input and modeling uncertainties. The 'vital few' are separated from the 'unimportant many' X's, and then subgrouped according to assembly type, location, exposure, and control rod insertion. The goal is to study how the subgroups influence Y in order to better understand the variability observed in it.
Directory of Open Access Journals (Sweden)
Yongho Ko
2017-04-01
Full Text Available Precise and accurate prediction models for duration and cost enable contractors to improve their decision making for effective resource management in terms of sustainability in construction. Previous studies have been limited to cost-based estimations, but this study focuses on a material-based progress management method. Cost-based estimations typically used in construction, such as the earned value method, rely on comparing the planned budget with the actual cost. However, accurately planning budgets requires analysis of many factors, such as the financial status of the sectors involved. Furthermore, the budget is more likely to change than the total amount of material used during construction, which is deduced from the quantity take-off from drawings and specifications. Accordingly, this study proposes a material-based progress management methodology, developed using different predictive analysis models (regression, neural network, and auto-regressive moving average) as well as datasets on material and labor that can be extracted from contractors' daily work reports. A case study on actual datasets was conducted, and the results show that the proposed methodology can be efficiently used for progress management in construction.
48 CFR 36.203 - Government estimate of construction costs.
2010-10-01
... personnel whose official duties require knowledge of the estimate. An exception to this rule may be made... necessary to arrive at a fair and reasonable price. The overall amount of the Government's estimate shall...
On the estimation and testing of predictive panel regressions
Karabiyik, H.; Westerlund, Joakim; Narayan, Paresh
2016-01-01
Hjalmarsson (2010) considers an OLS-based estimator of predictive panel regressions that is argued to be mixed normal under very general conditions. In a recent paper, Westerlund et al. (2016) show that while consistent, the estimator is generally not mixed normal, which invalidates standard normal inference.
Distributed estimation based on observations prediction in wireless sensor networks
Bouchoucha, Taha
2015-03-19
We consider wireless sensor networks (WSNs) used for distributed estimation of unknown parameters. Due to the limited bandwidth, sensor nodes quantize their noisy observations before transmission to a fusion center (FC) for the estimation process. In this letter, the correlation between observations is exploited to reduce the mean-square error (MSE) of the distributed estimation. Specifically, sensor nodes generate local predictions of their observations and then transmit the quantized prediction errors (innovations) to the FC rather than the quantized observations. The analytic and numerical results show that transmitting the innovations rather than the observations mitigates the effect of quantization noise and hence reduces the MSE. © 2015 IEEE.
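The innovation scheme can be sketched with a one-step predictor and a uniform quantizer, both assumptions for illustration (the letter's predictor and quantizer design are more elaborate):

```python
import numpy as np

# Sketch of transmitting quantized innovations instead of observations:
# each node predicts its next observation from the previously
# reconstructed one, quantizes only the small prediction error, and the
# fusion center rebuilds the observation with the same predictor.
def quantize(x, step):
    return step * np.round(x / step)

rng = np.random.default_rng(1)
theta = 5.0                                    # unknown parameter
obs = theta + 0.3 * rng.standard_normal(50)    # noisy sensor observations

step = 0.5          # quantizer step; innovations need fewer bits than raw obs
prev = 0.0
recovered = []
for z in obs:
    innovation = z - prev          # small once the predictor locks on
    q = quantize(innovation, step)
    z_hat = prev + q               # fusion center's reconstruction
    recovered.append(z_hat)
    prev = z_hat                   # sensor and FC update the predictor in sync
estimate = np.mean(recovered)      # FC's parameter estimate
```

Because the innovations have a much smaller dynamic range than the raw observations, the same number of quantization levels covers them with a finer step, which is the MSE advantage the letter analyzes.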
Directory of Open Access Journals (Sweden)
Othman Subhi Alshamrani
2017-03-01
Full Text Available The literature lacks initial cost prediction models for college buildings, especially models comparing the costs of sustainable and conventional buildings. A multi-regression model was developed for conceptual initial cost estimation of conventional and sustainable college buildings in North America. RS Means was used to estimate the national average of construction costs for 2014, which was subsequently utilized to develop the model. The model can predict the initial cost per square foot for two structure types, steel and concrete. The other predictor variables were building area, number of floors and floor height. The model was developed in three major stages: preliminary diagnostics on data quality, model development, and validation. The developed model was successfully tested and validated with real-time data.
Power system dynamic state estimation using prediction based evolutionary technique
International Nuclear Information System (INIS)
Basetti, Vedik; Chandel, Ashwani K.; Chandel, Rajeevan
2016-01-01
In this paper, a new robust LWS (least winsorized square) estimator is proposed for dynamic state estimation of a power system. One of the main advantages of this estimator is that it has an inbuilt bad-data rejection property and is less sensitive to bad data measurements. In the proposed approach, Brown's double exponential smoothing technique is utilised for its reliable performance at the prediction step. The state estimation problem is solved as an optimisation problem using a new jDE (self-adaptive differential evolution) with prediction-based population re-initialisation technique at the filtering step. This new stochastic search technique is embedded with different state scenarios using the predicted state. The effectiveness of the proposed LWS technique is validated under different conditions, namely normal operation, bad data, sudden load change, and loss of a transmission line, on three different IEEE test bus systems. The performance of the proposed approach is compared with the conventional extended Kalman filter. On the basis of various performance indices, the results show that the proposed technique increases the accuracy and robustness of power system dynamic state estimation. - Highlights: • Estimates the states of the power system in a dynamic environment. • The performance of the EKF method is degraded during anomaly conditions. • The proposed method remains robust towards anomalies. • The proposed method provides precise state estimates even in the presence of anomalies. • The results show that prediction accuracy is enhanced by using the proposed model.
NUMERICAL AND ANALYTICAL METHODS FOR THE ESTIMATION OF BRIDGE CONSTRUCTIONS
Directory of Open Access Journals (Sweden)
Y. Y. Luchko
2010-03-01
Full Text Available In this article, numerical and analytical methods for calculating the stressed-and-strained state of bridge constructions are considered. The task of increasing the reliability and accuracy of the numerical method, and its solution by means of calculations in two bases, is formulated. An analytical solution of the differential equation of deformation of a ferro-concrete plate under the action of local loads is also obtained.
Estimating diesel fuel consumption and carbon dioxide emissions from forest road construction
Dan Loeffler; Greg Jones; Nikolaus Vonessen; Sean Healey; Woodam Chung
2009-01-01
Forest access road construction is a necessary component of many on-the-ground forest vegetation treatment projects. However, the fuel energy requirements and associated carbon dioxide emissions from forest road construction are unknown. We present a method for estimating diesel fuel consumed and related carbon dioxide emissions from constructing forest roads using...
Predicting Software Projects Cost Estimation Based on Mining Historical Data
Najadat, Hassan; Alsmadi, Izzat; Shboul, Yazan
2012-01-01
In this research, a hybrid cost estimation model is proposed to produce a realistic prediction model that takes into consideration software project, product, process, and environmental elements. A cost estimation dataset is built from a large number of open source projects. Those projects are divided into three domains: communication, finance, and game projects. Several data mining techniques are used to classify software projects in terms of their development complexity. Data mining techniqu...
Estimation of construction and demolition waste using waste generation rates in Chennai, India.
Ram, V G; Kalidindi, Satyanarayana N
2017-06-01
A large amount of construction and demolition waste is being generated owing to rapid urbanisation in Indian cities. A reliable estimate of construction and demolition waste generation is essential to create awareness about this stream of solid waste among the government bodies in India. However, the required data to estimate construction and demolition waste generation in India are unavailable or not explicitly documented. This study proposed an approach to estimate construction and demolition waste generation using waste generation rates and demonstrated it by estimating construction and demolition waste generation in Chennai city. The demolition waste generation rates of primary materials were determined through regression analysis using waste generation data from 45 case studies. Materials, such as wood, electrical wires, doors, windows and reinforcement steel, were found to be salvaged and sold on the secondary market. Concrete and masonry debris were dumped in either landfills or unauthorised places. The total quantity of construction and demolition debris generated in Chennai city in 2013 was estimated to be 1.14 million tonnes. The proportion of masonry debris was found to be 76% of the total quantity of demolition debris. Construction and demolition debris forms about 36% of the total solid waste generated in Chennai city. A gross underestimation of construction and demolition waste generation in some earlier studies in India has also been shown. The methodology proposed could be utilised by government bodies, policymakers and researchers to generate reliable estimates of construction and demolition waste in other developing countries facing similar challenges of limited data availability.
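The waste-generation-rate arithmetic reduces to multiplying activity quantities by per-unit rates; all numbers below are assumed placeholders, not the study's fitted rates for Chennai, except the 76% masonry share quoted above:

```python
# Sketch of a waste-generation-rate estimate: waste = activity quantity
# times a per-unit generation rate, summed over streams. All rates and
# areas here are hypothetical.
demolition_area_m2 = 250_000
construction_area_m2 = 1_200_000
demolition_rate_t_per_m2 = 1.0      # assumed rate for demolition activity
construction_rate_t_per_m2 = 0.05   # assumed rate for new construction

demolition_waste_t = demolition_area_m2 * demolition_rate_t_per_m2
construction_waste_t = construction_area_m2 * construction_rate_t_per_m2
total_t = demolition_waste_t + construction_waste_t

# The study reports masonry as 76% of demolition debris.
masonry_t = 0.76 * demolition_waste_t
```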
Using Intelligent Techniques in Construction Project Cost Estimation: 10-Year Survey
Directory of Open Access Journals (Sweden)
Abdelrahman Osman Elfaki
2014-01-01
Full Text Available Cost estimation is the most important preliminary process in any construction project. Therefore, construction cost estimation has the lion’s share of the research effort in construction management. In this paper, we have analysed and studied proposals for construction cost estimation for the last 10 years. To implement this survey, we have proposed and applied a methodology that consists of two parts. The first part concerns data collection, for which we have chosen special journals as sources for the surveyed proposals. The second part concerns the analysis of the proposals. To analyse each proposal, the following four questions have been set. Which intelligent technique is used? How have data been collected? How are the results validated? And which construction cost estimation factors have been used? From the results of this survey, two main contributions have been produced. The first contribution is the defining of the research gap in this area, which has not been fully covered by previous proposals of construction cost estimation. The second contribution of this survey is the proposal and highlighting of future directions for forthcoming proposals, aimed ultimately at finding the optimal construction cost estimation. Moreover, we consider the second part of our methodology as one of our contributions in this paper. This methodology has been proposed as a standard benchmark for construction cost estimation proposals.
Uncertainty Estimation using Bootstrapped Kriging Predictions for Precipitation Isoscapes
Ma, C.; Bowen, G. J.; Vander Zanden, H.; Wunder, M.
2017-12-01
Isoscapes are spatial models representing the distribution of stable isotope values across landscapes. Isoscapes of hydrogen and oxygen in precipitation are now widely used in a diversity of fields, including geology, biology, hydrology, and atmospheric science. To generate isoscapes, geostatistical methods are typically applied to extend predictions from limited data measurements. Kriging is a popular method in isoscape modeling, but quantifying the uncertainty associated with the resulting isoscapes is challenging. Applications that use precipitation isoscapes to determine sample origin require estimation of uncertainty. Here we present a simple bootstrap method (SBM) to estimate the mean and uncertainty of the kriged isoscape and compare these results with a generalized bootstrap method (GBM) applied in previous studies. We used hydrogen isotopic data from IsoMAP to explore these two approaches for estimating uncertainty. We conducted 10 simulations for each bootstrap method and found that SBM yielded successful kriging predictions in more simulations (9/10) than GBM (4/10). The prediction from SBM was closer to the original prediction generated without bootstrapping and had less variance than GBM. SBM was also tested on IsoMAP datasets with different numbers of observation sites; predictions from datasets with fewer than 40 observation sites were more variable than the original prediction. The approaches we used for estimating uncertainty will be compiled in an R package that is under development. We expect that these robust estimates of precipitation isoscape uncertainty can be applied in diagnosing the origin of samples ranging from various types of water to migratory animals, food products, and humans.
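The simple bootstrap idea can be sketched by resampling observation sites and re-predicting at a target location; inverse-distance weighting stands in for kriging here (kriging requires a fitted variogram), and the synthetic sites and values are assumptions:

```python
import numpy as np

# Inverse-distance weighting as a stand-in spatial predictor.
def idw_predict(xy_obs, v_obs, xy_new, power=2.0):
    d = np.linalg.norm(xy_obs - xy_new, axis=1)
    w = 1.0 / np.maximum(d, 1e-9) ** power
    return np.sum(w * v_obs) / np.sum(w)

rng = np.random.default_rng(3)
xy = rng.uniform(0, 10, size=(40, 2))               # 40 observation sites
# Synthetic delta-2H-like values: a spatial trend plus noise (assumed).
vals = -8.0 * xy[:, 0] - 60.0 + rng.normal(0, 3, 40)

# Simple bootstrap: resample sites with replacement, re-predict, summarize.
target = np.array([5.0, 5.0])
boot = []
for _ in range(200):
    idx = rng.integers(0, len(xy), len(xy))
    boot.append(idw_predict(xy[idx], vals[idx], target))
boot_mean, boot_sd = np.mean(boot), np.std(boot)    # prediction and uncertainty
```

The spread `boot_sd` is the uncertainty estimate the abstract describes; with a real kriging backend each bootstrap replicate would refit the variogram as well.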
Estimating Model Prediction Error: Should You Treat Predictions as Fixed or Random?
Wallach, Daniel; Thorburn, Peter; Asseng, Senthold; Challinor, Andrew J.; Ewert, Frank; Jones, James W.; Rotter, Reimund; Ruane, Alexander
2016-01-01
Crop models are important tools for impact assessment of climate change, as well as for exploring management options under current climate. It is essential to evaluate the uncertainty associated with predictions of these models. We compare two criteria of prediction error: MSEP_fixed, which evaluates the mean squared error of prediction for a model with fixed structure, parameters and inputs, and MSEP_uncertain(X), which evaluates the mean squared error averaged over the distributions of model structure, inputs and parameters. Comparison of model outputs with data can be used to estimate the former. The latter has a squared bias term, which can be estimated using hindcasts, and a model variance term, which can be estimated from a simulation experiment. The separate contributions to MSEP_uncertain(X) can be estimated using a random effects ANOVA. It is argued that MSEP_uncertain(X) is the more informative uncertainty criterion, because it is specific to each prediction situation.
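The two criteria can be illustrated on synthetic numbers: MSEP_fixed from one model's hindcasts, and a squared-bias-plus-model-variance approximation of MSEP_uncertain(X) from an ensemble of model variants; the observations and ensemble perturbations below are invented:

```python
import numpy as np

# MSEP_fixed: mean squared error of one fixed model against observations.
y_obs = np.array([3.1, 4.0, 5.2, 6.1])     # observations (assumed)
y_fixed = np.array([3.0, 4.2, 5.0, 6.4])   # the fixed model's hindcasts

msep_fixed = np.mean((y_obs - y_fixed) ** 2)

# MSEP_uncertain(X) approximation: an ensemble of model variants (rows)
# predicts the same four situations; squared bias of the ensemble mean
# plus the model variance give the two terms described in the abstract.
rng = np.random.default_rng(2)
ensemble = y_fixed + 0.3 * rng.standard_normal((20, 4))
bias_sq = np.mean((y_obs - ensemble.mean(axis=0)) ** 2)
model_var = np.mean(ensemble.var(axis=0))
msep_uncertain = bias_sq + model_var
```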
Tsunami Prediction and Earthquake Parameters Estimation in the Red Sea
Sawlan, Zaid A
2012-12-01
Tsunami concerns have increased in the world after the 2004 Indian Ocean tsunami and the 2011 Tohoku tsunami. Consequently, tsunami models have developed rapidly in the last few years. One of the advanced tsunami models is the GeoClaw tsunami model introduced by LeVeque (2011), which is adaptive and consistent. Because of different sources of uncertainty in the model, observations are needed to improve model predictions through a data assimilation framework. The model inputs are earthquake parameters and topography. This thesis introduces a real-time tsunami forecasting method that combines a tsunami model with observations using a hybrid ensemble Kalman filter and ensemble Kalman smoother. The filter is used for state prediction while the smoother estimates the earthquake parameters. This method reduces the error produced by uncertain inputs. In addition, a state-parameter EnKF is implemented to estimate earthquake parameters. Although the number of observations is small, the estimated parameters generate a better tsunami prediction than the model alone. Methods and results of prediction experiments in the Red Sea are presented, and the prospect of developing an operational tsunami prediction system in the Red Sea is discussed.
A consensus approach for estimating the predictive accuracy of dynamic models in biology.
Villaverde, Alejandro F; Bongard, Sophia; Mauch, Klaus; Müller, Dirk; Balsa-Canto, Eva; Schmid, Joachim; Banga, Julio R
2015-04-01
Mathematical models that predict the complex dynamic behaviour of cellular networks are fundamental in systems biology, and provide an important basis for biomedical and biotechnological applications. However, obtaining reliable predictions from large-scale dynamic models is commonly a challenging task due to lack of identifiability. The present work addresses this challenge by presenting a methodology for obtaining high-confidence predictions from dynamic models using time-series data. First, to preserve the complex behaviour of the network while reducing the number of estimated parameters, model parameters are combined in sets of meta-parameters, which are obtained from correlations between biochemical reaction rates and between concentrations of the chemical species. Next, an ensemble of models with different parameterizations is constructed and calibrated. Finally, the ensemble is used for assessing the reliability of model predictions by defining a measure of convergence of model outputs (consensus) that is used as an indicator of confidence. We report results of computational tests carried out on a metabolic model of Chinese Hamster Ovary (CHO) cells, which are used for recombinant protein production. Using noisy simulated data, we find that the aggregated ensemble predictions are on average more accurate than the predictions of individual ensemble models. Furthermore, ensemble predictions with high consensus are statistically more accurate than ensemble predictions with large variance. The procedure provides quantitative estimates of the confidence in model predictions and enables the analysis of sufficiently complex networks as required for practical applications. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
International Nuclear Information System (INIS)
Dolgaya, A.A.; Uzdin, A.M.; Indeykin, A.V.
1993-01-01
The object of investigation is the design amplitude of accelerograms, which are used in evaluating the seismic stability of critical structures, first and foremost NPS. The amplitude level is established depending on the degree of responsibility of the structure and on the prevailing period of earthquake action at the construction site. The investigation procedure is based on a statistical analysis of 310 earthquakes. At the first stage of statistical data processing, we established the correlation dependence of both the mathematical expectation and the root-mean-square deviation of the peak acceleration of an earthquake on its prevailing period. At the second stage, the most suitable law for the distribution of acceleration about the mean was chosen. To determine the parameters of this distribution, we specified the maximum conceivable acceleration, the exceedance of which is not allowed; the other parameters of the distribution are determined from statistical data. At the third stage, the dependencies of design amplitude on the prevailing period of seismic effect were established for different structures and equipment. The data obtained made it possible to recommend levels for the safe-shutdown earthquake (SSE) and operating basis earthquake (OBE) for objects of various responsibility categories when designing NPS. (author)
PockDrug: A Model for Predicting Pocket Druggability That Overcomes Pocket Estimation Uncertainties.
Borrel, Alexandre; Regad, Leslie; Xhaard, Henri; Petitjean, Michel; Camproux, Anne-Claude
2015-04-27
Predicting protein druggability is a key interest in the target identification phase of drug discovery. Here, we assess the influence of pocket estimation methods on druggability predictions by comparing statistical models constructed from pockets estimated using different methods: proximity of either 4 or 5.5 Å to a cocrystallized ligand, or the DoGSite and fpocket estimation methods. We developed PockDrug, a robust pocket druggability model that copes with uncertainties in pocket boundaries. It is based on a linear discriminant analysis from a pool of 52 descriptors, combined with a selection of the most stable and efficient models across the different pocket estimation methods. PockDrug retains the best combinations of three pocket properties that impact druggability: geometry, hydrophobicity, and aromaticity. It achieves an average accuracy of 87.9% ± 4.7% on a test set and exhibits higher accuracy (by ∼5-10%) than previous studies that used an identical apo set. In conclusion, this study confirms the influence of pocket estimation on pocket druggability prediction and proposes PockDrug as a new model that overcomes pocket estimation variability.
Chen, Ho-Wen; Chang, Ni-Bin
2002-08-01
Prediction of the construction cost of wastewater treatment facilities can be influential for the economic feasibility of various levels of water pollution control programs. However, construction costs are difficult to evaluate precisely in an uncertain environment, and measured quantities are always burdened with different types of cost structures. An understanding of the previous development of wastewater treatment plants, and of the related construction cost structures of those facilities, therefore becomes essential for an effective regional water pollution control program. In conventional regression models, however, deviations between observed and estimated values are assumed to be due to measurement error only; the inherent uncertainties of the underlying cost structure, where human estimation is influential, are rarely explored. This paper recasts the well-known problem of construction cost estimation for both domestic and industrial wastewater treatment plants in a comparative framework. Comparisons were made among three regression techniques: the conventional least squares regression method, the fuzzy linear regression method, and the newly derived fuzzy goal regression method. The case study, incorporating a complete database of 48 domestic and 29 industrial wastewater treatment plants collected in Taiwan, implements such a cost estimation procedure in an uncertain environment. Given that the fuzzy structure in regression estimation may account for the inherent human complexity in cost estimation, the fuzzy goal regression method exhibits more robust results in terms of several criteria. A moderate economy of scale exists in constructing both domestic and industrial wastewater treatment plants. Findings indicate that the optimal size of a domestic wastewater treatment plant in Taiwan is approximately 15,000 m3/day (CMD) or higher.
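The economy-of-scale finding corresponds, in the conventional least-squares branch of the comparison, to fitting cost = a·Q^b in log-log space and reading off the exponent b < 1; the five plants below are hypothetical, not the Taiwan database:

```python
import numpy as np

# Log-log power-law cost model cost = a * Q^b. An exponent b < 1 means
# unit cost falls with plant size, i.e. an economy of scale.
capacity_cmd = np.array([2000, 5000, 10000, 15000, 30000])   # m3/day (assumed)
cost_million = np.array([1.1, 2.2, 3.6, 4.8, 8.0])           # assumed costs

b, log_a = np.polyfit(np.log(capacity_cmd), np.log(cost_million), 1)
scale_exponent = b                          # < 1 indicates economy of scale
# Cost of a hypothetical 12,000 CMD plant from the fitted model.
predicted = np.exp(log_a) * 12000 ** b
```

The fuzzy regression variants in the paper replace the crisp coefficients `a` and `b` with fuzzy numbers; the scale interpretation of the exponent is the same.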
MEASURING INSTRUMENT CONSTRUCTION AND VALIDATION IN ESTIMATING UNICYCLING SKILL LEVEL
Directory of Open Access Journals (Sweden)
Ivan Granić
2012-09-01
Riding a unicycle presupposes knowledge of the set of elements that describe the motor skill, or at least of a subset with which the level of that knowledge can be measured. Testing and evaluating the individual elements is time consuming. In order to design a single composite measuring instrument that facilitates evaluation of the initial level of unicycling skill, we tested 17 recreational subjects who learned to ride the unicycle over 15 hours of training, without any previous knowledge or experience (verified before the training began). At the beginning and end of the training they were tested on a set of 12 riding elements, recording only successful attempts, followed by a single SLALOM test incorporating the previously tested elements. The SLALOM test was found to have good metric features, and a high regression coefficient showed that it could stand in for the 12 elements of unicycle riding skill as a uniform test of learned or existing knowledge. Because of its simplicity and the possibility of testing several subjects simultaneously, the newly constructed test can be used to evaluate recreational unicycling level, and also to monitor and program training processes for developing the motor skill of unicycle riding. These advantages make it desirable to include unicycling in educational programs for learning new motor skills. The results also indicate that the unicycle should be seriously considered as training equipment to refresh or expand recreational programs, without any fear that it is only for special people: previously learned motor skills (skiing, roller-skating, and cycling) had no effect on the final test results.
Predictive framework for estimating exposure of birds to pharmaceuticals.
Bean, Thomas G; Arnold, Kathryn E; Lane, Julie M; Bergström, Ed; Thomas-Oates, Jane; Rattner, Barnett A; Boxall, Alistair B A
2017-09-01
We present and evaluate a framework for estimating concentrations of pharmaceuticals over time in wildlife feeding at wastewater treatment plants (WWTPs). The framework is composed of a series of predictive steps involving the estimation of pharmaceutical concentration in wastewater, accumulation into wildlife food items, and uptake by wildlife with subsequent distribution into, and elimination from, tissues. Because many pharmacokinetic parameters for wildlife are unavailable for the majority of drugs in use, a read-across approach was employed using either rodent or human data on absorption, distribution, metabolism, and excretion. Comparison of the different steps in the framework against experimental data for the scenario where birds are feeding on a WWTP contaminated with fluoxetine showed that estimated concentrations in wastewater treatment works were lower than measured concentrations; concentrations in food could be reasonably estimated if experimental bioaccumulation data are available; and read-across from rodent data worked better than human to bird read-across. The framework provides adequate predictions of plasma concentrations and of elimination behavior in birds but yields poor predictions of distribution in tissues. The approach holds promise, but it is important that we improve our understanding of the physiological similarities and differences between wild birds and domesticated laboratory mammals used in pharmaceutical efficacy/safety trials, so that the wealth of data available can be applied more effectively in ecological risk assessments. Environ Toxicol Chem 2017;36:2335-2344. © 2017 SETAC.
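The uptake-and-elimination step of such a framework is commonly built from a one-compartment pharmacokinetic model. A minimal sketch using the standard Bateman equation follows; the parameter values are illustrative only, not those used in the paper.

```python
import numpy as np

def bateman(dose, ka, ke, vd, t, f_abs=1.0):
    """One-compartment oral-dosing model (Bateman equation) for plasma
    concentration after a single ingested dose: first-order absorption
    (ka) and elimination (ke) with apparent volume of distribution vd."""
    return (f_abs * dose * ka) / (vd * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

t = np.linspace(0, 24, 241)                          # hours
c = bateman(dose=2.0, ka=1.0, ke=0.2, vd=4.0, t=t)   # mg, 1/h, 1/h, L
t_max = t[np.argmax(c)]                              # time of peak plasma level
print(round(t_max, 1))  # -> 2.0 (analytically ln(ka/ke)/(ka-ke) ~ 2.01 h)
```

Read-across, as described above, would substitute ka, ke, and vd derived from rodent or human data when bird-specific values are missing.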
Directory of Open Access Journals (Sweden)
Kadhim Raheem
2015-02-01
This research covers different aspects of the process of estimating construction work in a desert area. The inherent difficulties that accompany cost estimating of construction works in a desert environment in a developing country stem from the limited information available, scarcity of resources, the low level of skilled workers, the prevailing severe weather conditions, and many other factors, which together prevent a fair, reliable, and accurate estimate. This study presents unit prices for estimating cost in the preliminary phase of a project. The estimates are supported by mathematical equations developed from historical data on maintenance and on new construction of managerial and school projects. The research has also determined the percentage shares of project items in such a remote environment. Estimation equations suitable for remote areas have been formulated, and a procedure for unit-price calculation is presented.
Simulation-Based Prediction of Equivalent Continuous Noises during Construction Processes.
Zhang, Hong; Pei, Yun
2016-08-12
Quantitative prediction of construction noise is crucial for evaluating construction plans and making decisions to address noise levels. Considering the limitations of existing methods for measuring or predicting construction noise, and particularly the equivalent continuous noise level over a period of time, this paper presents a discrete-event simulation method for predicting construction noise in terms of the equivalent continuous level. Noise-calculating models for synchronization, propagation, and the equivalent continuous level are presented. A simulation framework is proposed that models the noise-affecting factors and calculates the equivalent continuous noise by incorporating the noise-calculating models into the simulation strategy. An application study demonstrates and justifies the proposed method for predicting the equivalent continuous noise during construction. The study contributes a simulation methodology for quantitatively predicting the equivalent continuous noise of construction while considering the relevant uncertainties, dynamics, and interactions.
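The equivalent continuous level targeted by the simulation is the energy average of the time-varying levels. A minimal sketch of that calculation, with hypothetical activity levels and durations:

```python
import math

def equivalent_continuous_level(levels_db, durations_s):
    """Energy-average sound levels over time to the equivalent
    continuous level: Leq = 10*log10((1/T) * sum(t_i * 10^(L_i/10)))."""
    total_t = sum(durations_s)
    energy = sum(t * 10 ** (L / 10) for L, t in zip(levels_db, durations_s))
    return 10 * math.log10(energy / total_t)

# Two equal-duration construction activities at 80 dB and 90 dB.
leq = equivalent_continuous_level([80.0, 90.0], [600.0, 600.0])
print(round(leq, 1))  # ~87.4 dB: the louder source dominates the average
```

A discrete-event simulation of the kind described would generate the (level, duration) pairs from the modeled activity sequence rather than supplying them by hand.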
Construction and Demolition Debris 2014 US Final Disposition Estimates Using the CDDPath Method
U.S. Environmental Protection Agency — Estimates of the final amount and final disposition of materials generated in the Construction and Demolition waste stream measured in total mass of each material....
Seismic prediction ahead of tunnel construction using Rayleigh-waves
Jetschny, Stefan; De Nil, Denise; Bohlen, Thomas
2008-01-01
To increase the safety and efficiency of tunnel construction, online seismic exploration ahead of a tunnel can become a valuable tool. We developed a new forward-looking seismic imaging technique to determine, e.g., weak and water-bearing zones ahead of the construction. Our approach is based on the excitation and registration of tunnel surface waves. These waves are excited at the tunnel face behind the cutter head of a tunnel boring machine and travel in the drilling direction. Arriving at the fr...
Using prediction markets to estimate the reproducibility of scientific research
Dreber, Anna; Pfeiffer, Thomas; Almenberg, Johan; Isaksson, Siri; Wilson, Brad; Chen, Yiling; Nosek, Brian A.; Johannesson, Magnus
2015-01-01
Concerns about a lack of reproducibility of statistically significant results have recently been raised in many fields, and it has been argued that this lack comes at substantial economic costs. We here report the results from prediction markets set up to quantify the reproducibility of 44 studies published in prominent psychology journals and replicated in the Reproducibility Project: Psychology. The prediction markets predict the outcomes of the replications well and outperform a survey of market participants’ individual forecasts. This shows that prediction markets are a promising tool for assessing the reproducibility of published scientific results. The prediction markets also allow us to estimate probabilities for the hypotheses being true at different testing stages, which provides valuable information regarding the temporal dynamics of scientific discovery. We find that the hypotheses being tested in psychology typically have low prior probabilities of being true (median, 9%) and that a “statistically significant” finding needs to be confirmed in a well-powered replication to have a high probability of being true. We argue that prediction markets could be used to obtain speedy information about reproducibility at low cost and could potentially even be used to determine which studies to replicate to optimally allocate limited resources into replications. PMID:26553988
Estimation and prediction under local volatility jump-diffusion model
Kim, Namhyoung; Lee, Younhee
2018-02-01
Volatility is an important factor in operating a company and managing risk. In portfolio optimization and risk hedging with options, option values are evaluated using a volatility model, and various attempts have been made to predict option values. Recent studies have shown that stochastic volatility models and jump-diffusion models reflect stock price movements accurately; however, these models have practical limitations. Combining them with the local volatility model, which is widely used among practitioners, may lead to better performance. In this study, we propose a more effective and efficient method of estimating option prices by combining the local volatility model with the jump-diffusion model, and we apply it to both artificial and actual market data to evaluate its performance. The calibration process for estimating the jump parameters and local volatility surfaces is divided into three stages. We apply the local volatility model, the stochastic volatility model, and the local volatility jump-diffusion model estimated by the proposed method to KOSPI 200 index option pricing. The proposed method displays good estimation and prediction performance.
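The jump-diffusion component of such a combined model can be sketched by Monte Carlo simulation. The following uses a constant-volatility Merton-style model with illustrative parameters, not the paper's calibrated local volatility surface (which would replace the constant sigma with a function of price and time):

```python
import numpy as np

def jump_diffusion_paths(s0, r, sigma, lam, mu_j, sigma_j, T, n_steps, n_paths, seed=0):
    """Simulate Merton-style jump-diffusion terminal prices:
    log-price diffuses with drift compensated for the expected jump,
    plus Poisson-arriving lognormal jumps."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    k = np.exp(mu_j + 0.5 * sigma_j**2) - 1   # expected relative jump size
    log_s = np.full(n_paths, np.log(s0))
    for _ in range(n_steps):
        z = rng.standard_normal(n_paths)
        n_jumps = rng.poisson(lam * dt, n_paths)
        jumps = mu_j * n_jumps + sigma_j * np.sqrt(n_jumps) * rng.standard_normal(n_paths)
        log_s += (r - lam * k - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z + jumps
    return np.exp(log_s)

# Price an at-the-money European call by Monte Carlo under the jump model.
s_T = jump_diffusion_paths(100, 0.03, 0.2, 0.5, -0.1, 0.15, 1.0, 252, 50_000)
call = np.exp(-0.03) * np.maximum(s_T - 100, 0).mean()
print(round(call, 2))
```

Calibration, as described in the abstract, would choose the jump parameters (lam, mu_j, sigma_j) and the volatility surface to match observed option prices.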
Electron band theory predictions and the construction of phase diagrams
International Nuclear Information System (INIS)
Watson, R.E.; Bennett, L.H.; Davenport, J.W.; Weinert, M.
1985-01-01
The a priori theory of metals is yielding energy results relevant to the construction of phase diagrams, for the solution phases as well as for line compounds. There is a wide range in the rigor of the calculations currently being done, and this is discussed. Calculations of the structural stabilities (fcc vs bcc vs hcp) of the elemental metals, quantities employed in the construction of the terminal phases, are reviewed and shown to be inconsistent with the values currently employed in such constructions (see also Miodownik elsewhere in this volume). Finally, as an example, calculated heats of formation are compared with experiment for PtHf, IrTa, and OsW, three compounds with the same electron-to-atom ratio but different bonding properties.
A Model Suggestion to Predict Leverage Ratio for Construction Projects
Directory of Open Access Journals (Sweden)
Özlem Tüz
2013-12-01
Due to its nature, construction is an industry with high uncertainty and risk, and it carries high leverage ratios. Firms with low equity work on big projects through the progress payment system, but in this case even a small shortfall in the planned cash inflows constitutes a major risk for the company. The use of leverage allows a small investment to achieve large-scale, high-profit targets, but it also brings high risk: investors may lose all or part of their money. This study targets the monitoring and measurement of the leverage ratio under displacement of the cash inflows of construction projects, which use high leverage and low cash to do business in the sector. The model reveals the cash need caused by drifting cash inflows: work can be done in the early stages of the project with little capital, but in the later stages a rapidly growing capital need arises. The values obtained from the model may be used to supply the required capital at the right time, by anticipating the risks due to delays in the cash flow of construction projects that use high leverage ratios.
Estimating the decomposition of predictive information in multivariate systems
Faes, Luca; Kugiumtzis, Dimitris; Nollo, Giandomenico; Jurysta, Fabrice; Marinazzo, Daniele
2015-03-01
In the study of complex systems from observed multivariate time series, the evolution of the system under investigation can be explained in terms of the information storage of the system itself and the information transfer from other interacting systems. We present a framework for the model-free estimation of information storage and information transfer, computed as the terms composing the predictive information about the target of a multivariate dynamical process. The approach tackles the curse of dimensionality by employing a nonuniform embedding scheme that selects progressively, among the past components of the multivariate process, only those that contribute most, in terms of conditional mutual information, to the present target process. Moreover, it computes all information-theoretic quantities using a nearest-neighbor technique designed to compensate the bias due to the different dimensionality of individual entropy terms. The resulting estimators of prediction entropy, storage entropy, transfer entropy, and partial transfer entropy are tested on simulations of coupled linear stochastic and nonlinear deterministic dynamic processes, demonstrating the superiority of the proposed approach over traditional estimators based on uniform embedding. The framework is then applied to multivariate physiologic time series, resulting in physiologically well-interpretable information decompositions of cardiovascular and cardiorespiratory interactions during head-up tilt and of joint brain-heart dynamics during sleep.
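The paper's estimator is model-free, but for jointly Gaussian processes the information-transfer term has a closed form based on linear-regression residual variances, which illustrates the quantity being estimated. A sketch on synthetic coupled processes:

```python
import numpy as np

def linear_transfer_entropy(x, y, p=2):
    """Linear-Gaussian approximation of transfer entropy X -> Y:
    TE = 0.5 * ln(var(e_restricted) / var(e_full)), where the restricted
    model regresses y_t on its own past only and the full model also
    includes the past of x.  Exact only for Gaussian processes."""
    n = len(y)
    ypast = np.column_stack([y[p - i - 1:n - i - 1] for i in range(p)])
    xpast = np.column_stack([x[p - i - 1:n - i - 1] for i in range(p)])
    target = y[p:]
    def resid_var(design):
        beta, *_ = np.linalg.lstsq(design, target, rcond=None)
        return (target - design @ beta).var()
    return 0.5 * np.log(resid_var(ypast) / resid_var(np.hstack([ypast, xpast])))

rng = np.random.default_rng(1)
x = rng.standard_normal(5000)
y = np.zeros(5000)
for t in range(1, 5000):                 # y is driven by the past of x
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.standard_normal()
print(linear_transfer_entropy(x, y) > linear_transfer_entropy(y, x))  # -> True
```

The nonuniform embedding described above would replace the fixed lag set (here, the last p samples) with a greedy, information-driven selection of past components.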
Uncertainty estimation and risk prediction in air quality
International Nuclear Information System (INIS)
Garaud, Damien
2011-01-01
This work addresses uncertainty estimation and risk prediction in air quality. First, we build a multi-model ensemble of air quality simulations that can take into account all uncertainty sources related to air quality modeling. Ensembles of photochemical simulations at continental and regional scales are automatically generated. These ensembles are then calibrated with a combinatorial optimization method that selects a sub-ensemble representative of the uncertainty, or one showing good resolution and reliability for probabilistic forecasting. This work shows that it is possible to estimate and forecast uncertainty fields related to ozone and nitrogen dioxide concentrations, and to improve the reliability of threshold-exceedance predictions. The approach is compared with Monte Carlo simulations, calibrated or not; the Monte Carlo approach appears to be less representative of the uncertainties than the multi-model approach. Finally, we quantify the observational error, the representativeness error, and the modeling errors. The work is applied to the impact of thermal power plants, in order to quantify the uncertainty of the impact estimates. (author)
Chapter 16 - Predictive Analytics for Comprehensive Energy Systems State Estimation
Energy Technology Data Exchange (ETDEWEB)
Zhang, Yingchen [National Renewable Energy Laboratory (NREL), Golden, CO (United States)]; Yang, Rui [National Renewable Energy Laboratory (NREL), Golden, CO (United States)]; Hodge, Brian S [National Renewable Energy Laboratory (NREL), Golden, CO (United States)]; Zhang, Jie [University of Texas at Dallas]; Weng, Yang [Arizona State University]
2017-12-01
Energy sustainability is a subject of concern to many nations in the modern world. It is critical for electric power systems to diversify energy supply to include systems with different physical characteristics, such as wind energy, solar energy, electrochemical energy storage, thermal storage, bio-energy systems, geothermal, and ocean energy. Each system has its own range of control variables and targets. To be able to operate such a complex energy system, big-data analytics become critical to achieve the goal of predicting energy supplies and consumption patterns, assessing system operation conditions, and estimating system states - all providing situational awareness to power system operators. This chapter presents data analytics and machine learning-based approaches to enable predictive situational awareness of the power systems.
Directory of Open Access Journals (Sweden)
Yoonseok Shin
2015-01-01
Among the data mining techniques available, the boosting approach has attracted a great deal of attention because of its effective learning algorithm and strong bounds on its generalization performance. However, the boosting approach has yet to be used for regression problems within the construction domain, including cost estimation, although it has been actively utilized in other domains. Therefore, a boosting regression tree (BRT) is applied to cost estimation at the early stage of a construction project to examine the applicability of the boosting approach to a regression problem within the construction domain. To evaluate the performance of the BRT model, it was compared with a neural network (NN) model, which has been proven to perform well in cost estimation domains. The BRT model showed results similar to those of the NN model using 234 actual cost datasets of a building construction project. In addition, the BRT model can provide additional information, such as the importance plot and structure model, which can help estimators comprehend the decision-making process. Consequently, the boosting approach has potential applicability to preliminary cost estimation in building construction projects.
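A BRT of the kind described can be sketched with a standard gradient-boosting implementation. The data below are synthetic stand-ins for the 234 cost records (hypothetical features such as floor area or storeys), and the importance ranking illustrates the interpretability benefit mentioned above:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

# Synthetic early-stage cost data: feature 0 dominates by construction.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(234, 5))
cost = 100 * X[:, 0] + 40 * X[:, 1] ** 2 + rng.normal(0, 2, 234)

X_tr, X_te, y_tr, y_te = train_test_split(X, cost, random_state=0)
brt = GradientBoostingRegressor(n_estimators=300, max_depth=3, random_state=0)
brt.fit(X_tr, y_tr)
print(round(brt.score(X_te, y_te), 2))       # R^2 on held-out projects
print(np.argmax(brt.feature_importances_))   # most influential feature: 0
```

The `feature_importances_` vector is the "importance plot" information the abstract refers to; a neural network baseline offers no such direct decomposition.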
Villoria Sáez, Paola; del Río Merino, Mercedes; Porras-Amores, César
2012-02-01
The management planning of construction and demolition (C&D) waste currently relies on a single indicator, which does not provide sufficiently detailed information; other, more innovative and precise indicators are therefore needed. The aim of this research is to improve existing C&D waste quantification tools for the construction of new residential buildings in Spain. For this purpose, several housing projects were studied to estimate the C&D waste generated during their construction. This paper determines the values of three indicators for estimating the generation of C&D waste in new residential buildings in Spain, itemized by type of waste and construction stage. The inclusion of two more accurate indicators, in addition to the global one commonly in use, provides a significant improvement in C&D waste quantification tools and management planning.
Directory of Open Access Journals (Sweden)
Jiangang Liu
Toxicogenomics promises to aid in predicting adverse effects, understanding the mechanisms of drug action or toxicity, and uncovering unexpected or secondary pharmacology. However, modeling adverse effects using high-dimensional, high-noise genomic data is prone to over-fitting. Models constructed from such data sets often consist of a large number of genes with no obvious functional relevance to the biological effect the model intends to predict, which can make it challenging to interpret the modeling results. To address these issues, we developed a novel algorithm, the Predictive Power Estimation Algorithm (PPEA), which estimates the predictive power of each individual transcript through an iterative two-way bootstrapping procedure. By repeatedly enforcing that the sample number is larger than the transcript number in each iteration of modeling and testing, PPEA reduces the potential risk of over-fitting. We show with three different case studies that: (1) PPEA can quickly derive a reliable rank order of the predictive power of individual transcripts in a relatively small number of iterations; (2) the top-ranked transcripts tend to be functionally related to the phenotype they are intended to predict; (3) using only the most predictive top-ranked transcripts greatly facilitates development of multiplex assays, such as qRT-PCR, as biomarkers; and (4) most importantly, a small number of genes identified from the top-ranked transcripts are highly predictive of phenotype, as their expression changes distinguished adverse from non-adverse effects of compounds in completely independent tests. Thus, we believe that the PPEA model effectively addresses the over-fitting problem and can be used to facilitate genomic biomarker discovery for predictive toxicology and drug responses.
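The core idea, ranking individual transcripts by predictive power accumulated over repeated subsampling, can be sketched as follows. This is a simplified toy version with an AUC-style score per transcript, not the published algorithm:

```python
import numpy as np

def predictive_power_rank(X, y, n_iter=200, n_sub=20, seed=0):
    """Toy stand-in for the PPEA idea: repeatedly draw small bootstrap
    subsamples (so samples per fit stay few, as in PPEA's sample>feature
    constraint), score each single transcript by how well it separates
    the two classes, and average the scores across iterations."""
    rng = np.random.default_rng(seed)
    scores = np.zeros(X.shape[1])
    for _ in range(n_iter):
        idx = rng.choice(len(y), size=n_sub, replace=True)
        Xb, yb = X[idx], y[idx]
        for j in range(X.shape[1]):
            # AUC-like score: P(value in class 1 > value in class 0)
            pos, neg = Xb[yb == 1, j], Xb[yb == 0, j]
            if len(pos) and len(neg):
                scores[j] += (pos[:, None] > neg[None, :]).mean()
    return np.argsort(-scores)   # transcripts ranked by predictive power

rng = np.random.default_rng(2)
y = rng.integers(0, 2, 100)
X = rng.standard_normal((100, 50))       # 50 candidate transcripts
X[:, 7] += 2.0 * y                       # transcript 7 carries the signal
print(predictive_power_rank(X, y)[0])    # -> 7
```

The published PPEA additionally alternates modeling and testing partitions ("two-way" bootstrapping); the toy keeps only the subsample-and-accumulate structure.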
Estimating Predictive Variance for Statistical Gas Distribution Modelling
International Nuclear Information System (INIS)
Lilienthal, Achim J.; Asadi, Sahar; Reggente, Matteo
2009-01-01
Recent publications in statistical gas distribution modelling have proposed algorithms that model the mean and variance of a distribution. This paper argues that estimating the predictive concentration variance is not merely a gradual improvement but a significant step forward for the field. First, such models much better fit the particular structure of gas distributions, which exhibit strong fluctuations with considerable spatial variation as a result of the intermittent character of gas dispersal. Second, estimating the predictive variance makes it possible to evaluate model quality in terms of the data likelihood. This offers a solution to the problem of ground-truth evaluation, which has always been a critical issue for gas distribution modelling. It also enables solid comparisons of different modelling approaches, and provides the means to learn meta-parameters of the model, to determine when the model should be updated or re-initialised, or to suggest new measurement locations based on the current model. We also point out directions of related ongoing or potential future research.
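A minimal sketch of a model that outputs a predictive variance map alongside the mean: kernel-weighted first and second moments on synthetic plume data. This is an illustration of the mean-plus-variance idea, not the authors' algorithm:

```python
import numpy as np

def kernel_mean_variance(train_xy, train_c, query_xy, bandwidth=0.5):
    """Kernel-weighted estimates of mean concentration AND predictive
    variance at query locations, treating the variance map as a
    first-class model output."""
    d2 = ((query_xy[:, None, :] - train_xy[None, :, :]) ** 2).sum(-1)
    w = np.exp(-0.5 * d2 / bandwidth**2)
    w /= w.sum(axis=1, keepdims=True)
    mean = w @ train_c
    var = w @ (train_c**2) - mean**2        # weighted second moment
    return mean, var

rng = np.random.default_rng(3)
pts = rng.uniform(0, 10, size=(300, 2))
# Fluctuating plume: high mean AND high fluctuation near x = 5.
conc = np.exp(-((pts[:, 0] - 5) ** 2)) * rng.lognormal(0, 1, 300)
q = np.array([[5.0, 5.0], [0.5, 5.0]])
m, v = kernel_mean_variance(pts, conc, q)
print(m[0] > m[1], v[0] > v[1])  # plume centre: higher mean and variance
```

With both moments available, the Gaussian data likelihood of held-out measurements can score the model, which is the ground-truth-evaluation point made above.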
Arabzadeh, Vida; Niaki, S. T. A.; Arabzadeh, Vahid
2017-10-01
One of the most important processes in the early stages of a construction project is estimating the cost involved. This process entails a wide range of uncertainties, which make it a challenging task. Because of unknown issues, the conventional ways to deal with cost estimation are to draw on the experience of experts or to look for similar cases. The current study presents data-driven methods for cost estimation based on artificial neural network (ANN) and regression models. The learning algorithms of the ANN are Levenberg-Marquardt and Bayesian regularization. Moreover, the regression models are hybridized with a genetic algorithm to obtain better estimates of the coefficients. The methods are applied to a real case, where the input parameters of the models are assigned based on the key issues involved in a spherical tank construction. The results reveal that, while a high correlation between the estimated cost and the real cost exists, both ANNs perform better than the hybridized regression models. In addition, the ANN with the Levenberg-Marquardt learning algorithm (LMNN) obtains a better estimation than the ANN with Bayesian regularization (BRNN). The correlation between real data and estimated values is over 90%, while the mean square error is around 0.4. The proposed LMNN model can be effective in reducing uncertainty and complexity in the early stages of a construction project.
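The Levenberg-Marquardt algorithm that trains the LMNN can be illustrated on a small parametric cost model using SciPy's implementation. The tank-cost data and the model form below are hypothetical; the paper fits neural networks, not this closed-form curve:

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical tank-cost data: cost driven by capacity and wall thickness.
rng = np.random.default_rng(4)
X = rng.uniform(1, 10, size=(60, 2))
y = 3.0 * X[:, 0] ** 1.3 + 0.5 * X[:, 1] + rng.normal(0, 0.3, 60)

def residuals(theta, X, y):
    a, b, c = theta
    return a * X[:, 0] ** b + c * X[:, 1] - y

# Levenberg-Marquardt (the same algorithm behind LMNN's weight updates)
# fitted here to a three-parameter cost model.
fit = least_squares(residuals, x0=[1.0, 1.0, 1.0], args=(X, y), method="lm")
a, b, c = fit.x
print(round(b, 1))  # -> 1.3, the exponent used to generate the data
```

In the ANN setting the residual vector is the network's prediction error and theta is the weight vector; the damped Gauss-Newton updates are identical in structure.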
Protein construct storage: Bayesian variable selection and prediction with mixtures.
Clyde, M A; Parmigiani, G
1998-07-01
Determining optimal conditions for protein storage while maintaining a high level of protein activity is an important question in pharmaceutical research. A designed experiment based on a space-filling design was conducted to understand the effects of factors affecting protein storage and to establish optimal storage conditions. Different model-selection strategies to identify important factors may lead to very different answers about optimal conditions. Uncertainty about which factors are important, or model uncertainty, can be a critical issue in decision-making. We use Bayesian variable selection methods for linear models to identify important variables in the protein storage data, while accounting for model uncertainty. We also use the Bayesian framework to build predictions based on a large family of models, rather than an individual model, and to evaluate the probability that certain candidate storage conditions are optimal.
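Bayesian variable selection with model averaging can be sketched at small scale by enumerating predictor subsets and weighting each model by an approximate marginal likelihood. The BIC-based weights below are a common approximation; the paper's mixture machinery is richer, and the data are synthetic:

```python
import numpy as np
from itertools import combinations

def bma_inclusion_probs(X, y):
    """Enumerate all subsets of predictors, weight each linear model by
    exp(-BIC/2) (an approximation to its posterior probability), and
    return the posterior inclusion probability of each predictor."""
    n, p = X.shape
    bics, subsets = [], []
    for k in range(p + 1):
        for subset in combinations(range(p), k):
            design = np.column_stack([np.ones(n)] + [X[:, j] for j in subset])
            beta, *_ = np.linalg.lstsq(design, y, rcond=None)
            rss = ((y - design @ beta) ** 2).sum()
            bics.append(n * np.log(rss / n) + design.shape[1] * np.log(n))
            subsets.append(subset)
    w = np.exp(-0.5 * (np.array(bics) - min(bics)))   # stable weights
    w /= w.sum()
    probs = np.zeros(p)
    for wi, subset in zip(w, subsets):
        for j in subset:
            probs[j] += wi
    return probs

rng = np.random.default_rng(5)
X = rng.standard_normal((80, 4))            # four candidate storage factors
y = 2.0 * X[:, 0] - 1.5 * X[:, 2] + rng.normal(0, 0.5, 80)
print(np.round(bma_inclusion_probs(X, y), 2))  # near 1 for factors 0 and 2
```

Predictions averaged over all 2^p models with these same weights, rather than taken from the single best model, are the "predictions based on a large family of models" the abstract describes.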
Risk Consideration and Cost Estimation in Construction Projects Using Monte Carlo Simulation
Directory of Open Access Journals (Sweden)
Claudius A. Peleskei
2015-06-01
Construction projects usually involve high investments. They are therefore a risky venture for companies, as the actual costs of construction projects nearly always exceed the planned scenario. This is due to the various risks and the large uncertainty existing within this industry. Determining and quantifying risks and their impact on project costs is described as one of the most difficult areas within the construction industry. This paper analyses how the cost of construction projects can be estimated using Monte Carlo simulation. It investigates whether the different cost elements in a construction project follow a specific probability distribution, and examines the effect of correlation between different project costs on the result of the Monte Carlo simulation. The paper finds that Monte Carlo simulation can be a helpful tool for risk managers and can be used for the cost estimation of construction projects. The research shows that cost distributions are positively skewed and that cost elements appear to have some interdependent relationships.
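The two findings, positively skewed cost elements and interdependence between them, are exactly what a correlated-lognormal Monte Carlo captures. A minimal sketch with illustrative parameters (not the study's data):

```python
import numpy as np

def simulate_total_cost(log_means, log_sds, corr, n_sims=100_000, seed=6):
    """Monte Carlo total-cost estimate with correlated, positively
    skewed (lognormal) cost elements: draw correlated normals on the
    log scale, exponentiate, and sum the elements per simulation."""
    rng = np.random.default_rng(seed)
    cov = np.outer(log_sds, log_sds) * corr        # covariance on log scale
    log_draws = rng.multivariate_normal(log_means, cov, size=n_sims)
    return np.exp(log_draws).sum(axis=1)

# Three cost elements (log-scale parameters), moderately correlated.
corr = np.array([[1.0, 0.5, 0.3],
                 [0.5, 1.0, 0.4],
                 [0.3, 0.4, 1.0]])
totals = simulate_total_cost(np.log([100, 60, 40]), [0.2, 0.3, 0.25], corr)
p80 = np.quantile(totals, 0.8)   # budget with 80% chance of sufficiency
print(p80 > totals.mean())       # right skew: the P80 budget exceeds the mean
```

Ignoring the correlation matrix (setting it to the identity) would understate the spread of the total, which is the effect of interdependence the paper examines.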
Wu, Cai; Li, Liang
2018-05-15
This paper focuses on quantifying and estimating the predictive accuracy of prognostic models for time-to-event outcomes with competing events. We consider the time-dependent discrimination and calibration metrics, including the receiver operating characteristics curve and the Brier score, in the context of competing risks. To address censoring, we propose a unified nonparametric estimation framework for both discrimination and calibration measures, by weighting the censored subjects with the conditional probability of the event of interest given the observed data. The proposed method can be extended to time-dependent predictive accuracy metrics constructed from a general class of loss functions. We apply the methodology to a data set from the African American Study of Kidney Disease and Hypertension to evaluate the predictive accuracy of a prognostic risk score in predicting end-stage renal disease, accounting for the competing risk of pre-end-stage renal disease death, and evaluate its numerical performance in extensive simulation studies. Copyright © 2018 John Wiley & Sons, Ltd.
Cost estimation using ministerial regulation of public work no. 11/2013 in construction projects
Arumsari, Putri; Juliastuti; Khalifah Al'farisi, Muhammad
2017-12-01
One of the first tasks in starting a construction project is to estimate its total cost. In Indonesia, several standards are used to calculate project cost estimates, one of which is based on the Ministerial Regulation of Public Work No. 11/2013. In a construction project, however, contractors often produce their own cost estimates based on their own calculations. This research aimed to compare the total construction project cost calculated according to the Ministerial Regulation of Public Work No. 11/2013 against the contractors' calculations. Two projects were used as case studies: a 4-storey building located in the Pantai Indah Kapuk area (West Jakarta) and a warehouse located in Sentul (West Java), built by two different contractors. The cost estimates from both contractors' calculations were compared to those based on the Ministerial Regulation of Public Work No. 11/2013. The two calculations were found to differ by around 1.80%-3.03% in total cost, with the estimates based on the Ministerial Regulation higher than the contractors' calculations.
Al-Mudhafar, W. J.
2013-12-01
Precise prediction of rock facies leads to adequate reservoir characterization by improving the porosity-permeability relationships used to estimate properties in non-cored intervals. It also helps to identify the spatial facies distribution accurately, supporting an accurate reservoir model for optimal future reservoir performance. In this paper, facies estimation is performed with multinomial logistic regression (MLR) on the well logs and core data from a well in the upper sandstone formation of the South Rumaila oil field. The independent variables are gamma ray, formation density, water saturation, shale volume, log porosity, core porosity, and core permeability. First, a robust sequential imputation algorithm was used to impute the missing data. This algorithm starts from a complete subset of the dataset and sequentially estimates the missing values in an incomplete observation by minimizing the determinant of the covariance of the augmented data matrix; the observation is then added to the complete data matrix and the algorithm continues with the next observation with missing values. MLR was chosen to estimate the maximum likelihood and minimize the standard error for the nonlinear relationships between facies and the core and log data. MLR predicts the probabilities of the different possible facies given each independent variable by constructing a linear predictor function with a set of weights that are linearly combined with the independent variables through a dot product. A beta distribution of facies was taken as prior knowledge, and the predicted (posterior) probability was estimated from MLR via Bayes' theorem, which relates the posterior probability to the conditional probability and the prior knowledge. To assess the statistical accuracy of the model, the bootstrap should be carried out to estimate extra-sample prediction error by randomly
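The linear-predictor-plus-dot-product step described above can be sketched as a softmax over per-facies linear predictors, which is how multinomial logistic regression turns scores into posterior probabilities. The weights and predictor values below are invented for illustration and are not the fitted South Rumaila model.

```python
import numpy as np

def facies_probabilities(x, W):
    """Multinomial-logistic posterior: x is a predictor vector
    (e.g. gamma ray, density), W holds one weight row per candidate
    facies. Returns softmax probabilities over the facies classes."""
    z = W @ x              # linear predictor (dot product) per facies
    z = z - z.max()        # stabilize the exponentials
    p = np.exp(z)
    return p / p.sum()

W = np.array([[0.8, -0.2],     # 3 hypothetical facies classes,
              [0.1,  0.5],     # 2 hypothetical predictors
              [-0.6, 0.3]])
x = np.array([1.2, 0.7])
p = facies_probabilities(x, W)
print(p.argmax(), round(float(p.sum()), 6))  # → 0 1.0
```

The predicted facies is simply the class with the largest posterior probability; the probabilities always sum to one.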
International Nuclear Information System (INIS)
Simion, G.; Sciacca, F.; Claiborne, E.; Watlington, B.; Riordan, B.; McLaughlin, M.
1988-05-01
This report presents a validation study of the cost methodologies and quantitative factors derived in Labor Productivity Adjustment Factors and Generic Methodology for Estimating the Labor Cost Associated with the Removal of Hardware, Materials, and Structures From Nuclear Power Plants. The cost methodology was developed to support NRC analysts in making generic estimates of removal, installation, and total labor costs for construction-related activities at nuclear generating stations. In addition to the validation discussion, this report reviews the generic cost analysis methodology employed and discusses each of the individual cost factors used in estimating the costs of physical modifications at nuclear power plants. The generic estimating approach uses the "greenfield" (new plant) construction installation costs compiled in the Energy Economic Data Base (EEDB) as a baseline. These baseline costs are then adjusted to account for labor productivity, radiation fields, learning curve effects, and impacts on ancillary systems or components. For comparisons of estimated vs. actual labor costs, approximately four dozen actual cost data points (as reported by 14 nuclear utilities) were obtained. Detailed background information was collected on each data point to give the best understanding possible, so that the labor productivity factors, removal factors, etc., could be chosen judiciously. This study concludes that cost estimates typically within 40% of the actual values can be generated by prudent use of the methodologies and cost factors investigated herein.
Estimation of wind erosion from construction of a railway in arid Northwest China
Directory of Open Access Journals (Sweden)
Benli Liu
2017-06-01
A state-of-the-art wind erosion simulation model, the Wind Erosion Prediction System (WEPS), was combined with the United States Environmental Protection Agency's AP 42 emission factors formula to evaluate wind-blown dust emissions from the various construction units of a railway construction project in the dry Gobi land of Northwest China. The influence of climatic factors (temperature, precipitation, wind speed and direction), soil condition, protective measures, and construction disturbance was taken into account. Driven by daily and sub-daily climate data and using detailed management files, the process-based WEPS model was able to express the beginning, active, and ending phases of construction, as well as the degree of disturbance, over the entire scope of a construction project. The Lanzhou-Xinjiang High-speed Railway was selected as a representative case because of the diversity of climate, soil, and working-schedule conditions that could be analyzed. Wind erosion from the different working units, including the building of roadbeds, bridges, plants, and temporary houses, earth-spoil and borrow-pit areas, and vehicle transportation, was calculated. A total wind erosion emission of 7406 t was obtained for the first construction area of section LXS-15, 14.877 km in length, for quantitative analysis. The method is applicable to evaluating wind erosion from other complex surface-disturbance projects.
Estimating confidence intervals in predicted responses for oscillatory biological models.
St John, Peter C; Doyle, Francis J
2013-07-29
The dynamics of gene regulation play a crucial role in cellular control: they allow the cell to express the right proteins to meet changing needs. Some needs, such as correctly anticipating the day-night cycle, require complicated oscillatory features. In the analysis of gene regulatory networks, mathematical models are frequently used to understand how a network's structure enables it to respond appropriately to external inputs. These models typically consist of a set of ordinary differential equations, describing a network of biochemical reactions, and unknown kinetic parameters, chosen such that the model best captures experimental data. However, since a model's parameter values are uncertain, and since dynamic responses to inputs are highly parameter-dependent, it is difficult to assess the confidence associated with these in silico predictions. In particular, models with complex dynamics, such as oscillations, must be fit with computationally expensive global optimization routines and cannot take advantage of existing measures of identifiability. Although difficult to model mathematically, limit cycle oscillations play a key role in many biological processes, including cell cycling, metabolism, neuron firing, and circadian rhythms. In this study, we employ an efficient parameter estimation technique to enable a bootstrap uncertainty analysis for limit cycle models. Since the primary role of systems biology models is the insight they provide on responses to rate perturbations, we extend our uncertainty analysis to include first order sensitivity coefficients. Using a literature model of circadian rhythms, we show how predictive precision is degraded with decreasing sample points and increasing relative error. Additionally, we show how this method can be used for model discrimination by comparing the output identifiability of two candidate model structures to published literature data. Our method permits modellers of oscillatory systems to confidently
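The bootstrap idea in the abstract can be sketched in miniature: fit a model to noisy oscillatory data, then resample the residuals to obtain a confidence interval on the fitted parameter. A one-parameter sinusoid stands in here for the full limit-cycle ODE model, purely to keep the example self-contained; the data and noise level are simulated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated oscillatory data: y = A*sin(t) + noise, true A = 2.
t = np.linspace(0, 4 * np.pi, 80)
y = 2.0 * np.sin(t) + rng.normal(0, 0.3, t.size)

def fit_amplitude(tt, yy):
    """Closed-form least-squares fit of A in y = A*sin(t)."""
    s = np.sin(tt)
    return float(s @ yy / (s @ s))

A_hat = fit_amplitude(t, y)
resid = y - A_hat * np.sin(t)

# Residual bootstrap: refit on synthetic data built from the fitted
# curve plus resampled residuals, then take percentile bounds.
boot = [fit_amplitude(t, A_hat * np.sin(t) + rng.choice(resid, resid.size))
        for _ in range(500)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"A_hat = {A_hat:.3f}, 95% CI = [{lo:.3f}, {hi:.3f}]")
```

The same resample-and-refit loop applies when `fit_amplitude` is replaced by a full ODE parameter estimation, which is where the efficient estimation technique in the paper matters.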
Error estimation for CFD aeroheating prediction under rarefied flow condition
Jiang, Yazhong; Gao, Zhenxun; Jiang, Chongwen; Lee, Chunhian
2014-12-01
Both direct simulation Monte Carlo (DSMC) and Computational Fluid Dynamics (CFD) methods have become widely used for aerodynamic prediction when reentry vehicles experience different flow regimes during flight. The implementation of slip boundary conditions in the traditional CFD method under the Navier-Stokes-Fourier (NSF) framework can extend the validity of this approach further into the transitional regime, with the benefit that much less computational cost is demanded compared to DSMC simulation. Correspondingly, an increasing error arises in aeroheating calculation as the flow becomes more rarefied. To estimate the relative error of heat flux when applying this method to a rarefied flow in the transitional regime, a theoretical derivation is conducted and a dimensionless parameter ɛ is proposed by approximately analyzing the ratio of the second-order term to the first-order term in the heat flux expression of the Burnett equation. DSMC simulation of hypersonic flow over a cylinder in the transitional regime is performed to test the performance of the parameter ɛ, compared with two other parameters, Knρ and Ma·Knρ.
Predictive value and construct validity of the work functioning screener-healthcare (WFS-H)
Boezeman, Edwin J.; Nieuwenhuijsen, Karen; Sluiter, Judith K.
2016-01-01
Objectives: To test the predictive value and convergent construct validity of a 6-item work functioning screener (WFS-H). Methods: Healthcare workers (249 nurses) completed a questionnaire containing the work functioning screener (WFS-H) and a work functioning instrument (NWFQ) measuring the following: cognitive aspects of task execution and general incidents, avoidance behavior, conflicts and irritation with colleagues, impaired contact with patients and their family, and level of energy and motivation. Productivity and mental health were also measured. Negative and positive predictive values, AUC values, and sensitivity and specificity were calculated to examine the predictive value of the screener. Correlation analysis was used to examine the construct validity. Results: The screener had good predictive value, since the results showed that a negative screener score is a strong indicator of work functioning not hindered by mental health problems (negative predictive values: 94%-98%; positive predictive values: 21%-36%; AUC: .64-.82; sensitivity: 42%-76%; and specificity: 85%-87%). The screener showed good construct validity, with moderate but significant correlations. Conclusion: The screener has good predictive value and good construct validity. Its score offers occupational health professionals a helpful preliminary insight into the work functioning of healthcare workers. PMID:27010085
Predictive value and construct validity of the work functioning screener-healthcare (WFS-H).
Boezeman, Edwin J; Nieuwenhuijsen, Karen; Sluiter, Judith K
2016-05-25
To test the predictive value and convergent construct validity of a 6-item work functioning screener (WFS-H). Healthcare workers (249 nurses) completed a questionnaire containing the work functioning screener (WFS-H) and a work functioning instrument (NWFQ) measuring the following: cognitive aspects of task execution and general incidents, avoidance behavior, conflicts and irritation with colleagues, impaired contact with patients and their family, and level of energy and motivation. Productivity and mental health were also measured. Negative and positive predictive values, AUC values, and sensitivity and specificity were calculated to examine the predictive value of the screener. Correlation analysis was used to examine the construct validity. The screener had good predictive value, since the results showed that a negative screener score is a strong indicator of work functioning not hindered by mental health problems (negative predictive values: 94%-98%; positive predictive values: 21%-36%; AUC: .64-.82; sensitivity: 42%-76%; and specificity: 85%-87%). The screener showed good construct validity, with moderate but significant correlations. The screener has good predictive value and good construct validity. Its score offers occupational health professionals a helpful preliminary insight into the work functioning of healthcare workers.
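The four screener statistics reported above (sensitivity, specificity, PPV, NPV) all derive from a single 2×2 table of screener result against actual hindered work functioning. The counts below are invented for the sketch, chosen only so the resulting values fall inside the reported ranges.

```python
def screener_stats(tp, fp, fn, tn):
    """Standard diagnostic-accuracy statistics from a 2x2 table:
    tp/fp = true/false positives, fn/tn = false/true negatives."""
    return {
        "sensitivity": tp / (tp + fn),   # detected among truly hindered
        "specificity": tn / (tn + fp),   # cleared among truly unhindered
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

# Hypothetical counts for 249 workers with low prevalence of hindered
# work functioning -- low prevalence is why NPV is high while PPV is low.
print(screener_stats(tp=15, fp=30, fn=5, tn=199))
```

A high NPV with a modest PPV, as in the study, is the expected pattern for a screener applied in a population where the condition is uncommon.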
Prediction equation for estimating total daily energy requirements of special operations personnel.
Barringer, N D; Pasiakos, S M; McClung, H L; Crombie, A P; Margolis, L M
2018-01-01
Special Operations Forces (SOF) engage in a variety of military tasks, many producing high energy expenditures that lead to undesired energy deficits and loss of body mass. The ability to accurately estimate daily energy requirements would therefore be useful for accurate logistical planning. The aim was to generate a predictive equation estimating the energy requirements of SOF. A retrospective analysis was conducted of data collected from SOF personnel engaged in 12 different SOF training scenarios. Energy expenditure and total body water were determined using the doubly-labeled water technique. Physical activity level was determined as daily energy expenditure divided by resting metabolic rate. Physical activity level was broken into quartiles (0 = mission prep, 1 = common warrior tasks, 2 = battle drills, 3 = specialized intense activity) to generate a physical activity factor (PAF). Regression analysis was used to construct two predictive equations (Model A: body mass and PAF; Model B: fat-free mass and PAF) estimating daily energy expenditures. Average measured energy expenditure during SOF training was 4468 (range: 3700 to 6300) kcal·d-1. Regression analysis revealed that physical activity level (r = 0.91) was a strong predictor of energy expenditure, and the resulting equations can be used to plan appropriate feeding regimens to meet SOF nutritional requirements across their mission profile.
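A Model-A style equation (body mass plus a categorical physical activity factor) might look like the sketch below. The intercept and coefficients are invented placeholders for illustration only; the paper's fitted values differ.

```python
def predict_energy_kcal(body_mass_kg, paf):
    """Hypothetical Model-A form: daily energy expenditure (kcal/d)
    from body mass and a physical activity factor PAF in {0,1,2,3}
    (0 = mission prep ... 3 = specialized intense activity).
    Coefficients are illustrative, NOT the study's fitted values."""
    assert paf in (0, 1, 2, 3), "PAF is a quartile code 0-3"
    return 1500 + 25 * body_mass_kg + 600 * paf

# An 85 kg operator running battle drills (PAF = 2):
print(predict_energy_kcal(85, 2))  # → 4825
```

Planners would look up the PAF for the scheduled mission phase and feed it, with body mass, into the fitted equation to size rations for that phase.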
DEFF Research Database (Denmark)
Salling, Kim Bang; Leleur, Steen
2014-01-01
For decades researchers have claimed that demand forecasts and construction cost estimations in particular are affected by a large degree of uncertainty. Articles, research documents, and reports overwhelmingly agree that there is a tendency toward underestimating the costs... in demand and cost estimations and hence in the evaluation of transport infrastructure projects. Currently, research within this area is scarce and scattered, with no common agreement on how to embed and operationalise the huge amount of empirical data that exists within the frame of Optimism Bias. Therefore... converting deterministic benefit-cost ratios (BCRs) into stochastic interval results. A new data collection (2009-2013) forms the empirical basis for any risk simulation embedded within the so-called UP database (UNITE project database), revealing the inaccuracy of both construction costs and demand forecasts. Accordingly...
Kasiviswanathan, K.; Sudheer, K.
2013-05-01
Artificial neural network (ANN) based hydrologic models have gained a lot of attention among water resources engineers and scientists, owing to their potential for more accurate prediction of flood flows compared to conceptual or physics-based hydrologic models. The ANN approximates the non-linear functional relationship between the complex hydrologic variables in arriving at river flow forecast values. Despite a large number of applications, there is still criticism that ANN point predictions lack reliability, since the uncertainty of the predictions is not quantified, which limits their use in practical applications. A major concern in applying traditional uncertainty analysis techniques to a neural network framework is its parallel computing architecture with large degrees of freedom, which makes uncertainty assessment a challenging task. Very few studies have considered the predictive uncertainty of ANN-based hydrologic models. In this study, a novel method is proposed that helps construct the prediction interval of an ANN flood forecasting model during calibration itself. The method has two stages of optimization during calibration: in stage 1, the ANN model is trained with a genetic algorithm (GA) to obtain the optimal set of weights and biases, and in stage 2, the optimal variability of the ANN parameters (obtained in stage 1) is identified so as to create an ensemble of predictions. The second-stage optimization has multiple objectives: (i) minimum residual variance for the ensemble mean, (ii) maximum number of measured data points falling within the estimated prediction interval, and (iii) minimum width of the prediction interval. The method is illustrated using a real-world case study of an Indian basin. It produced an ensemble with an average prediction interval width of 23.03 m3/s, with 97.17% of the total validation data points (measured) lying within the interval. The derived
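Objectives (ii) and (iii) of the second stage can be scored for any candidate ensemble: coverage is the fraction of observations falling inside the interval, and width is the mean distance between the bounds. The sketch below computes both on made-up flow data; the actual study evaluates these inside a GA loop over ANN parameter perturbations.

```python
import numpy as np

def interval_metrics(obs, lower, upper):
    """Coverage (fraction of observations inside [lower, upper]) and
    mean interval width -- the two competing prediction-interval
    objectives: push coverage up, push width down."""
    obs, lower, upper = map(np.asarray, (obs, lower, upper))
    inside = (obs >= lower) & (obs <= upper)
    return float(inside.mean()), float((upper - lower).mean())

# Invented streamflow observations and candidate interval bounds (m^3/s):
obs = [10.0, 12.0, 9.0, 20.0]
low = [8.0, 11.0, 8.5, 21.0]
up  = [11.0, 14.0, 10.0, 24.0]
cov, width = interval_metrics(obs, low, up)
print(cov, width)  # → 0.75 2.625
```

A multi-objective optimizer trades these off: widening the bounds raises coverage toward 1.0 at the cost of a wider, less informative interval.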
Distributed estimation based on observations prediction in wireless sensor networks
Bouchoucha, Taha; Ahmed, Mohammed F A; Al-Naffouri, Tareq Y.; Alouini, Mohamed-Slim
2015-01-01
We consider wireless sensor networks (WSNs) used for distributed estimation of unknown parameters. Due to the limited bandwidth, sensor nodes quantize their noisy observations before transmission to a fusion center (FC) for the estimation process
International Nuclear Information System (INIS)
Jeon, Woo Soo; Song, Ji Ho
2001-01-01
An expert system for estimating fatigue properties from simple tensile data is developed, considering nearly all important estimation methods proposed so far (seven in total). The expert system can also be applied when only hardness data are available. The knowledge base is constructed with production rules and frames using an expert system shell, UNIK. Forward chaining is employed as the reasoning method. The expert system has three functions, including one to update the knowledge base. Its performance is tested using 54 ε-N curves comprising 381 ε-N data points obtained for 22 materials. The expert system is found to perform excellently for steel materials and reasonably well for aluminum alloys.
Effects of uncertainty in model predictions of individual tree volume on large area volume estimates
Ronald E. McRoberts; James A. Westfall
2014-01-01
Forest inventory estimates of tree volume for large areas are typically calculated by adding model predictions of volumes for individual trees. However, the uncertainty in the model predictions is generally ignored, with the result that the precision of the large-area volume estimates is overestimated. The primary study objective was to estimate the effects of model...
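The effect described here can be illustrated by Monte Carlo: propagate a per-tree model error through the sum of predictions and compare the spread of the resulting totals with the single naive total. The predicted volumes and the error standard deviation below are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-tree volume predictions (m^3) for a large-area tally,
# with a heteroscedastic model error (sd proportional to the prediction).
pred = rng.uniform(0.2, 1.5, size=1000)
model_sd = 0.2 * pred

# Naive total: just add the predictions (uncertainty ignored).
naive_total = pred.sum()

# Monte Carlo: each replicate perturbs every tree by its model error,
# so the spread of the totals reflects the ignored model uncertainty.
totals = [(pred + rng.normal(0, model_sd)).sum() for _ in range(2000)]
print(f"naive total {naive_total:.0f} m^3; "
      f"sd across simulations {np.std(totals):.1f} m^3")
```

The simulated totals center on the naive total but scatter around it, which is exactly the variance component the paper argues is dropped when model predictions are treated as error-free.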
Bragg peak prediction from quantitative proton computed tomography using different path estimates
International Nuclear Information System (INIS)
Wang Dongxu; Mackie, T Rockwell; Tome, Wolfgang A
2011-01-01
This paper characterizes the performance of the straight-line path (SLP) and cubic spline path (CSP) as path estimates used in reconstruction of proton computed tomography (pCT). The GEANT4 Monte Carlo simulation toolkit is employed to simulate the imaging phantom and proton projections. SLP, CSP and the most-probable path (MPP) are constructed based on the entrance and exit information of each proton. The physical deviations of SLP, CSP and MPP from the real path are calculated. Using a conditional proton path probability map, the relative probability of SLP, CSP and MPP are calculated and compared. The depth dose and Bragg peak are predicted on the pCT images reconstructed using SLP, CSP, and MPP and compared with the simulation result. The root-mean-square physical deviations and the cumulative distribution of the physical deviations show that the performance of CSP is comparable to MPP while SLP is slightly inferior. About 90% of the SLP pixels and 99% of the CSP pixels lie in the 99% relative probability envelope of the MPP. Even at an imaging dose of ∼0.1 mGy the proton Bragg peak for a given incoming energy can be predicted on the pCT image reconstructed using SLP, CSP, or MPP with 1 mm accuracy. This study shows that SLP and CSP, like MPP, are adequate path estimates for pCT reconstruction, and therefore can be chosen as the path estimation method for pCT reconstruction, which can aid the treatment planning and range prediction of proton radiation therapy.
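A cubic spline path of the kind described can be built from each proton's entrance and exit information as a cubic Hermite curve: it matches both endpoint positions and both directions, unlike the straight-line path, which uses positions only. This is a one-dimensional sketch (lateral coordinate versus normalized depth t), not the paper's reconstruction code.

```python
def hermite_path(p0, d0, p1, d1):
    """Cubic Hermite curve s(t), t in [0, 1], interpolating entry
    position p0 with tangent d0 to exit position p1 with tangent d1
    -- the defining property of a cubic spline path (CSP)."""
    def s(t):
        h00 = 2 * t**3 - 3 * t**2 + 1   # Hermite basis functions
        h10 = t**3 - 2 * t**2 + t
        h01 = -2 * t**3 + 3 * t**2
        h11 = t**3 - t**2
        return h00 * p0 + h10 * d0 + h01 * p1 + h11 * d1
    return s

# Hypothetical proton: enters at lateral offset 0 heading outward
# (slope +0.5), exits at offset 1 heading back inward (slope -0.5).
s = hermite_path(p0=0.0, d0=0.5, p1=1.0, d1=-0.5)
print(s(0.0), s(1.0))  # → 0.0 1.0
```

The straight-line path would be `s(t) = p0 + t * (p1 - p0)`; the Hermite curve bends between the same endpoints to honor the measured directions, which is why CSP tracks the most-probable path more closely.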
Bragg peak prediction from quantitative proton computed tomography using different path estimates
Energy Technology Data Exchange (ETDEWEB)
Wang Dongxu; Mackie, T Rockwell; Tome, Wolfgang A, E-mail: tome@humonc.wisc.edu [Department of Medical Physics, University of Wisconsin School of Medicine and Public Health, Madison, WI 53705 (United States)
2011-02-07
This paper characterizes the performance of the straight-line path (SLP) and cubic spline path (CSP) as path estimates used in reconstruction of proton computed tomography (pCT). The GEANT4 Monte Carlo simulation toolkit is employed to simulate the imaging phantom and proton projections. SLP, CSP and the most-probable path (MPP) are constructed based on the entrance and exit information of each proton. The physical deviations of SLP, CSP and MPP from the real path are calculated. Using a conditional proton path probability map, the relative probability of SLP, CSP and MPP are calculated and compared. The depth dose and Bragg peak are predicted on the pCT images reconstructed using SLP, CSP, and MPP and compared with the simulation result. The root-mean-square physical deviations and the cumulative distribution of the physical deviations show that the performance of CSP is comparable to MPP while SLP is slightly inferior. About 90% of the SLP pixels and 99% of the CSP pixels lie in the 99% relative probability envelope of the MPP. Even at an imaging dose of {approx}0.1 mGy the proton Bragg peak for a given incoming energy can be predicted on the pCT image reconstructed using SLP, CSP, or MPP with 1 mm accuracy. This study shows that SLP and CSP, like MPP, are adequate path estimates for pCT reconstruction, and therefore can be chosen as the path estimation method for pCT reconstruction, which can aid the treatment planning and range prediction of proton radiation therapy.
Bragg peak prediction from quantitative proton computed tomography using different path estimates
Wang, Dongxu; Mackie, T Rockwell
2015-01-01
This paper characterizes the performance of the straight-line path (SLP) and cubic spline path (CSP) as path estimates used in reconstruction of proton computed tomography (pCT). The GEANT4 Monte Carlo simulation toolkit is employed to simulate the imaging phantom and proton projections. SLP, CSP and the most-probable path (MPP) are constructed based on the entrance and exit information of each proton. The physical deviations of SLP, CSP and MPP from the real path are calculated. Using a conditional proton path probability map, the relative probability of SLP, CSP and MPP are calculated and compared. The depth dose and Bragg peak are predicted on the pCT images reconstructed using SLP, CSP, and MPP and compared with the simulation result. The root-mean-square physical deviations and the cumulative distribution of the physical deviations show that the performance of CSP is comparable to MPP while SLP is slightly inferior. About 90% of the SLP pixels and 99% of the CSP pixels lie in the 99% relative probability envelope of the MPP. Even at an imaging dose of ~0.1 mGy the proton Bragg peak for a given incoming energy can be predicted on the pCT image reconstructed using SLP, CSP, or MPP with 1 mm accuracy. This study shows that SLP and CSP, like MPP, are adequate path estimates for pCT reconstruction, and therefore can be chosen as the path estimation method for pCT reconstruction, which can aid the treatment planning and range prediction of proton radiation therapy. PMID:21212472
Oil and gas pipeline construction cost analysis and developing regression models for cost estimation
Thaduri, Ravi Kiran
In this study, cost data for 180 pipelines and 136 compressor stations have been analyzed, and regression models have been developed on the basis of the distribution analysis. Material, labor, right-of-way (ROW), and miscellaneous costs make up the total cost of a pipeline construction. The pipelines are analyzed by pipeline length, diameter, location, pipeline volume, and year of completion. In pipeline construction, labor costs dominate the total costs with a share of about 40%. Multiple non-linear regression models are developed to estimate the component costs of pipelines for various cross-sectional areas, lengths, and locations. The compressor stations are analyzed by capacity, year of completion, and location. Unlike the pipeline costs, material costs dominate the total costs in the construction of compressor stations, with an average share of about 50.6%. Land costs have very little influence on the total costs. Similar regression models are developed to estimate the component costs of compressor stations for various capacities and locations.
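One common shape for such non-linear cost-versus-size component models is a power law, cost = a·size^b, fitted by ordinary least squares in log-log space. The data points below are invented; the thesis fits separate models per cost component, size class, and location.

```python
import numpy as np

# Invented component-cost data: e.g. labor cost ($M) vs pipeline length (km).
size = np.array([10.0, 20.0, 40.0, 80.0])
cost = np.array([5.0, 9.0, 17.0, 33.0])

# Fit log(cost) = b*log(size) + log(a)  ->  cost = a * size**b
b, log_a = np.polyfit(np.log(size), np.log(cost), 1)
a = np.exp(log_a)

def predict(s):
    """Predicted component cost for a pipeline of size s."""
    return a * s ** b

print(round(b, 2))  # → 0.91
```

An exponent b below 1 (as in this toy fit) encodes economies of scale: doubling the length less than doubles the component cost.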
The quality estimation of exterior wall’s and window filling’s construction design
Saltykov, Ivan; Bovsunovskaya, Maria
2017-10-01
The article introduces the term "artificial envelope" in dwelling construction. The authors offer a complex multifactorial approach to estimating the design quality of external enclosing structures, based on the impact of several parameters: functional, operational, cost, and environmental. A design quality index Qк is introduced as a combined characteristic of these parameters, and the mathematical relation of this index to the parameters serves as the target function for design quality estimation. As an example, the article presents the search for the optimal wall and window designs for small, medium, and large dwelling premises in economy-class buildings. Graphs of the individual parameters of the target function are given for the three room sizes. From this example, window opening dimensions are chosen that make the wall and window constructions properly satisfy the stated complex of requirements. The authors compare the window filling area recommended by building standards with the area found by optimizing the design quality index. The multifactorial approach to optimal design search described in this article can be applied to various structural elements of dwelling buildings, taking into account the climatic, social, and economic features of the construction area.
Wind gust estimation by combining numerical weather prediction model and statistical post-processing
Patlakas, Platon; Drakaki, Eleni; Galanis, George; Spyrou, Christos; Kallos, George
2017-04-01
The continuous rise of off-shore and near-shore activities, as well as the development of structures such as wind farms and various offshore platforms, requires the employment of state-of-the-art risk assessment techniques. Such analysis is used to set safety standards and can be characterized as a climatologically oriented approach. Nevertheless, reliable operational support is also needed in order to minimize cost drawbacks and human danger during the construction and functioning stages as well as during maintenance activities. One of the most important parameters for this kind of analysis is wind speed intensity and variability. A critical measure associated with this variability is the presence and magnitude of wind gusts, estimated at the reference level of 10 m. These can be attributed to different processes ranging from boundary-layer turbulence to convective activity, mountain waves, and wake phenomena. The purpose of this work is the development of a wind gust forecasting methodology combining a numerical weather prediction model and a dynamical statistical tool based on Kalman filtering. To this end, the Wind Gust Estimate parameterization was implemented within the framework of the atmospheric model SKIRON/Dust. The new modeling tool combines the atmospheric model with a statistical local adaptation methodology based on Kalman filters, and has been tested over the offshore west coastline of the United States. The main purpose is to provide a useful tool for wind analysis and prediction and for applications related to offshore wind energy (power prediction, operation and maintenance). The results were evaluated using observational data from NOAA's buoy network; the predicted output shows good behavior that is further improved by the local adjustment post-process.
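The Kalman-filter local adaptation can be sketched as a scalar filter that tracks the slowly varying bias between modeled and observed gusts and subtracts it from each new forecast. The noise variances and the gust values below are invented; the operational system works per station with tuned parameters.

```python
def kalman_bias_filter(forecasts, observations, q=0.01, r=0.25):
    """Scalar Kalman filter on the forecast bias, modeled as a random
    walk (process variance q) observed through noisy forecast-minus-
    observation innovations (variance r). Returns bias-corrected
    forecasts; each correction uses only past observations."""
    bias, p = 0.0, 1.0                 # bias estimate and its variance
    corrected = []
    for f, o in zip(forecasts, observations):
        corrected.append(f - bias)     # correct with the current bias
        p += q                         # predict: bias may have drifted
        k = p / (p + r)                # Kalman gain
        bias += k * ((f - o) - bias)   # update with today's innovation
        p *= (1 - k)
    return corrected

# Invented gusts (m/s): the model runs ~2 m/s too gusty at this buoy.
fc = [12.0, 13.0, 11.0, 14.0, 12.5]
ob = [10.0, 11.2, 9.1, 12.0, 10.4]
out = kalman_bias_filter(fc, ob)
print(out[0] == 12.0, out[-1] < fc[-1])  # → True True
```

The first forecast passes through uncorrected (no bias has been learned yet); as innovations accumulate, the filter converges on the local bias and the corrected gusts track the observations far more closely than the raw model output.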
Construction of prediction intervals for Palmer Drought Severity Index using bootstrap
Beyaztas, Ufuk; Bickici Arikan, Bugrayhan; Beyaztas, Beste Hamiye; Kahya, Ercan
2018-04-01
In this study, we propose an approach based on the residual-based bootstrap method to obtain valid prediction intervals using monthly, short-term (three-month) and mid-term (six-month) drought observations. The effects of the North Atlantic and Arctic Oscillation indexes on the constructed prediction intervals are also examined. Performance of the proposed approach is evaluated for the Palmer Drought Severity Index (PDSI) obtained from the Konya closed basin located in Central Anatolia, Turkey. The finite sample properties of the proposed method are further illustrated by an extensive simulation study. Our results revealed that the proposed approach is capable of producing valid prediction intervals for future PDSI values.
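A residual-based bootstrap prediction interval of the kind proposed can be sketched on a simulated AR(1) stand-in for a monthly drought series (the Konya PDSI data are not reproduced here): fit the autoregression, then regenerate the one-step-ahead forecast many times with resampled residuals and take percentile bounds.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated stand-in for a monthly drought index: AR(1) with phi = 0.7.
x = [0.0]
for _ in range(199):
    x.append(0.7 * x[-1] + rng.normal(0, 1.0))
x = np.array(x)

# Least-squares AR(1) coefficient and its residuals.
phi = float(x[:-1] @ x[1:] / (x[:-1] @ x[:-1]))
resid = x[1:] - phi * x[:-1]

# Residual bootstrap of the one-step-ahead forecast: point forecast
# plus residuals resampled with replacement, then percentile bounds.
boot = phi * x[-1] + rng.choice(resid, 2000)
lo, hi = np.percentile(boot, [2.5, 97.5])
print(lo < phi * x[-1] < hi)  # → True
```

Because the interval is built from the empirical residuals rather than a normality assumption, it stays valid when the forecast errors are skewed, which is the practical appeal of the residual bootstrap for drought indices.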
Estimating the value of public construction works in Poland and Czech Republic
Directory of Open Access Journals (Sweden)
Edyta Plebankiewicz
2016-07-01
Full Text Available The article outlines the legislation concerning the methodology of estimating the value of works in Poland and the Czech Republic. In both countries it is necessary for the public investor to respect the law governing public procurement, which defines the structure of compulsory documents needed for the tender documentation, but not directly the way of their preparation. In both countries, though, there exist model proceeding schedules for the calculation of the value of a public procurement for construction works. To illustrate and compare the calculation methods a sample calculation of the procurement value is presented for a selected thermal efficiency improvement project.
Directory of Open Access Journals (Sweden)
Wu Chi-Yeh
2010-01-01
Full Text Available Abstract Background MicroRNAs (miRNAs are short non-coding RNA molecules, which play an important role in post-transcriptional regulation of gene expression. There have been many efforts to discover miRNA precursors (pre-miRNAs over the years. Recently, ab initio approaches have attracted more attention because they do not depend on homology information and provide broader applications than comparative approaches. Kernel based classifiers such as support vector machine (SVM are extensively adopted in these ab initio approaches due to the prediction performance they achieved. On the other hand, logic based classifiers such as decision tree, of which the constructed model is interpretable, have attracted less attention. Results This article reports the design of a predictor of pre-miRNAs with a novel kernel based classifier named the generalized Gaussian density estimator (G2DE based classifier. The G2DE is a kernel based algorithm designed to provide interpretability by utilizing a few but representative kernels for constructing the classification model. The performance of the proposed predictor has been evaluated with 692 human pre-miRNAs and has been compared with two kernel based and two logic based classifiers. The experimental results show that the proposed predictor is capable of achieving prediction performance comparable to those delivered by the prevailing kernel based classification algorithms, while providing the user with an overall picture of the distribution of the data set. Conclusion Software predictors that identify pre-miRNAs in genomic sequences have been exploited by biologists to facilitate molecular biology research in recent years. The G2DE employed in this study can deliver prediction accuracy comparable with the state-of-the-art kernel based machine learning algorithms. Furthermore, biologists can obtain valuable insights about the different characteristics of the sequences of pre-miRNAs with the models generated by the G
Tsunami Prediction and Earthquake Parameters Estimation in the Red Sea
Sawlan, Zaid A
2012-01-01
parameters and topography. This thesis introduces a real-time tsunami forecasting method that combines a tsunami model with observations using a hybrid ensemble Kalman filter and ensemble Kalman smoother. The filter is used for state prediction while
Construction of risk prediction model of type 2 diabetes mellitus based on logistic regression
Directory of Open Access Journals (Sweden)
Li Jian
2017-01-01
Full Text Available Objective: to construct a multi-factor prediction model for the individual risk of T2DM, and to explore new ideas for early warning, prevention and personalized health services for T2DM. Methods: logistic regression techniques were used to screen the risk factors for T2DM and to construct the risk prediction model of T2DM. Results: Male's risk prediction model logistic regression equation: logit(P) = BMI × 0.735 + vegetables × (−0.671) + age × 0.838 + diastolic pressure × 0.296 + physical activity × (−2.287) + sleep × (−0.009) + smoking × 0.214; Female's risk prediction model logistic regression equation: logit(P) = BMI × 1.979 + vegetables × (−0.292) + age × 1.355 + diastolic pressure × 0.522 + physical activity × (−2.287) + sleep × (−0.010). The area under the ROC curve for males was 0.83, with sensitivity 0.72 and specificity 0.86; the area under the ROC curve for females was 0.84, with sensitivity 0.75 and specificity 0.90. Conclusion: The data for this model come from a nested case-control study; the risk prediction model was established using mature logistic regression techniques, and the model has high predictive sensitivity, specificity and stability.
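As a numerical sketch of how a logistic equation of this form converts risk-factor values into a probability, the following uses the male coefficients quoted above. The intercept is not reported in the abstract and is therefore omitted, and the all-ones input is purely illustrative; inputs are assumed to be coded/standardised scores as in the study.

```python
import math

def t2dm_male_logit(bmi, vegetables, age, diastolic, activity, sleep, smoking):
    """Linear predictor logit(P) from the male equation reported above.

    The intercept is not given in the abstract and is omitted here.
    """
    return (0.735 * bmi - 0.671 * vegetables + 0.838 * age
            + 0.296 * diastolic - 2.287 * activity
            - 0.009 * sleep + 0.214 * smoking)

def risk_probability(logit):
    """Convert logit(P) to a probability via the inverse-logit function."""
    return 1.0 / (1.0 + math.exp(-logit))
```

A logit of 0 corresponds to a probability of exactly 0.5; positive coefficients (e.g. BMI, age) raise the predicted risk and negative ones (e.g. physical activity) lower it.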
Kassavou, Aikaterini; Turner, Andrew; Hamborg, Thomas; French, David P
2014-07-01
Little is known about the processes and factors that account for maintenance, with several theories existing that have not been subject to many empirical tests. The aim of this study was to test how well theoretical constructs derived from the Health Action Process Approach, Rothman's theory of maintenance, and Verplanken's approach to habitual behavior predicted maintenance of attendance at walking groups. 114 participants, who had already attended walking groups in the community for at least 3 months, completed a questionnaire assessing theoretical constructs regarding maintenance. An objective assessment of attendance over the subsequent 3 months was obtained. Multilevel modeling was used to predict maintenance, controlling for clustering within walking groups. Recovery self-efficacy predicted maintenance, even after accounting for clustering. Satisfaction with social outcomes, satisfaction with health outcomes, and overall satisfaction predicted maintenance, but only satisfaction with health outcomes significantly predicted maintenance after accounting for clustering. Self-reported habitual behavior did not predict maintenance despite mean previous attendance being 20.7 months. Recovery self-efficacy and satisfaction with health outcomes of walking group attendance appeared to be important for objectively measured maintenance, whereas self-reported habit appeared not to be important for maintenance at walking groups. The findings suggest that there is a need for intervention studies to boost recovery self-efficacy and satisfaction with outcomes of walking group attendance, to assess impact on maintenance.
Lévy matters VI Lévy-type processes moments, construction and heat kernel estimates
Kühn, Franziska
2017-01-01
Presenting some recent results on the construction and the moments of Lévy-type processes, the focus of this volume is on a new existence theorem, which is proved using a parametrix construction. Applications range from heat kernel estimates for a class of Lévy-type processes to existence and uniqueness theorems for Lévy-driven stochastic differential equations with Hölder continuous coefficients. Moreover, necessary and sufficient conditions for the existence of moments of Lévy-type processes are studied and some estimates on moments are derived. Lévy-type processes behave locally like Lévy processes but, in contrast to Lévy processes, they are not homogeneous in space. Typical examples are processes with varying index of stability and solutions of Lévy-driven stochastic differential equations. This is the sixth volume in a subseries of the Lecture Notes in Mathematics called Lévy Matters. Each volume describes a number of important topics in the theory or applications of Lévy processes and pays ...
Estimation of resource savings due to fly ash utilization in road construction
Energy Technology Data Exchange (ETDEWEB)
Kumar, Subodh; Patil, C.B. [Centre for Energy Studies, Indian Institute of Technology, New Delhi 110016 (India)
2006-08-15
A methodology for estimation of natural resource savings due to fly ash utilization in road construction in India is presented. Analytical expressions for the savings of various resources namely soil, stone aggregate, stone chips, sand and cement in the embankment, granular sub-base (GSB), water bound macadam (WBM) and pavement quality concrete (PQC) layers of fly ash based road formation with flexible and rigid pavements of a given geometry have been developed. The quantity of fly ash utilized in these layers of different pavements has also been quantified. In the present study, the maximum amount of resource savings is found in GSB followed by WBM and other layers of pavement. The soil quantity saved increases asymptotically with the rise in the embankment height. The results of financial analysis based on Indian fly ash based road construction cost data indicate that the savings in construction cost decrease with the lead and the investment on this alternative is found to be financially attractive only for a lead less than 60 and 90km for flexible and rigid pavements, respectively. (author)
An estimator-based distributed voltage-predictive control strategy for ac islanded microgrids
DEFF Research Database (Denmark)
Wang, Yanbo; Chen, Zhe; Wang, Xiongfei
2015-01-01
This paper presents an estimator-based voltage predictive control strategy for AC islanded microgrids, which is able to perform voltage control without any communication facilities. The proposed control strategy is composed of a network voltage estimator and a voltage predictive controller for each...... and has a good capability to reject uncertain perturbations of islanded microgrids....
Testing the Predictive Validity and Construct of Pathological Video Game Use
Groves, Christopher L.; Gentile, Douglas; Tapscott, Ryan L.; Lynch, Paul J.
2015-01-01
Three studies assessed the construct of pathological video game use and tested its predictive validity. Replicating previous research, Study 1 produced evidence of convergent validity in 8th and 9th graders (N = 607) classified as pathological gamers. Study 2 replicated and extended the findings of Study 1 with college undergraduates (N = 504). Predictive validity was established in Study 3 by measuring cue reactivity to video games in college undergraduates (N = 254), such that pathological gamers were more emotionally reactive to and provided higher subjective appraisals of video games than non-pathological gamers and non-gamers. The three studies converged to show that pathological video game use seems similar to other addictions in its patterns of correlations with other constructs. Conceptual and definitional aspects of Internet Gaming Disorder are discussed. PMID:26694472
Arqub, Omar Abu; El-Ajou, Ahmad; Momani, Shaher
2015-07-01
Building fractional mathematical models for specific phenomena and developing numerical or analytical solutions for these fractional mathematical models are crucial issues in mathematics, physics, and engineering. In this work, a new analytical technique for constructing and predicting solitary pattern solutions of time-fractional dispersive partial differential equations is proposed based on the generalized Taylor series formula and residual error function. The new approach provides solutions in the form of a rapidly convergent series with easily computable components using symbolic computation software. For method evaluation and validation, the proposed technique was applied to three different models and compared with some well-known methods. The resultant simulations clearly demonstrate the superiority and potential of the proposed technique in terms of performance quality and the accuracy of substructure preservation in the constructed solutions, as well as the prediction of solitary pattern solutions for time-fractional dispersive partial differential equations.
Climate Prediction Center (CPC) Rainfall Estimator (RFE) for Africa
National Oceanic and Atmospheric Administration, Department of Commerce — As of January 1, 2001, RFE version 2.0 has been implemented by NOAA's Climate Prediction Center. Created by Ping-Ping Xie, this replaces RFE 1.0, the previous...
A general predictive model for estimating monthly ecosystem evapotranspiration
Ge Sun; Karrin Alstad; Jiquan Chen; Shiping Chen; Chelcy R. Ford; et al.
2011-01-01
Accurately quantifying evapotranspiration (ET) is essential for modelling regional-scale ecosystem water balances. This study assembled an ET data set estimated from eddy flux and sapflow measurements for 13 ecosystems across a large climatic and management gradient from the United States, China, and Australia. Our objectives were to determine the relationships among...
Prediction of path loss estimate for a frequency modulation (FM)
African Journals Online (AJOL)
Orinya
Nigeria, FM station, Makurdi which is normally a major component in the ... and can be used to estimate path losses of FM signals in Benue State of ... limited in equipment to measure all the ... path loss while MATLAB R2007b software was.
Parameter estimation and prediction of nonlinear biological systems: some examples
Doeswijk, T.G.; Keesman, K.J.
2006-01-01
Rearranging and reparameterizing a discrete-time nonlinear model with polynomial quotient structure in input, output and parameters (xk = f(Z, p)) leads to a model linear in its (new) parameters. As a result, the parameter estimation problem becomes a so-called errors-in-variables problem for which
Review Genetic prediction models and heritability estimates for ...
African Journals Online (AJOL)
edward
2015-05-09
May 9, 2015 ... Heritability estimates for functional longevity have been expressed on an original or a logarithmic scale with PH models. Ducrocq & Casella (1996) defined heritability on a logarithmic scale and modified under simulation to incorporate the tri-gamma function (γ) as used by Sasaki et al. (2012) and Terawaki ...
Spatial Working Memory Capacity Predicts Bias in Estimates of Location
Crawford, L. Elizabeth; Landy, David; Salthouse, Timothy A.
2016-01-01
Spatial memory research has attributed systematic bias in location estimates to a combination of a noisy memory trace with a prior structure that people impose on the space. Little is known about intraindividual stability and interindividual variation in these patterns of bias. In the current work, we align recent empirical and theoretical work on…
Review Genetic prediction models and heritability estimates for ...
African Journals Online (AJOL)
edward
2015-05-09
May 9, 2015 ... Instead, through stepwise inclusion of type traits in the PH model, the ... Great Britain uses a bivariate animal model for all breeds, ... Štípková, 2012) and then applying linear models to the combined datasets with the ... multivariate analyses, it is difficult to use indicator traits to estimate longevity early in life ...
Rahmat, Normala; Buntat, Yahya; Ayub, Abdul Rahman
2015-01-01
The level of the employability skills of graduates, as determined by job role and mapped to the employability skills that correspond to the requirements of employers, will have a significant impact on the graduates' job performance. The main objective of this study was to identify the constructs and dimensions of employability skills which can predict the work performance of polytechnic electronic graduates in the electrical and electronics industry. A triangular qualitative approach was used i...
Prediction of indoor radon concentration based on residence location and construction
International Nuclear Information System (INIS)
Maekelaeinen, I.; Voutilainen, A.; Castren, O.
1992-01-01
We have constructed a model for assessing indoor radon concentrations in houses where measurements cannot be performed. It has been used in an epidemiological study and to determine the radon potential of new building sites. The model is based on data from about 10,000 buildings. Integrated radon measurements were made during the cold season in all the houses; their geographic coordinates were also known. The 2-mo measurement results were corrected to annual average concentrations. Construction data were collected from questionnaires completed by residents; geological data were determined from geological maps. Data were classified according to geographical, geological, and construction factors. In order to describe different radon production levels, the country was divided into four zones. We assumed that the factors were multiplicative, and a linear concentration-prediction model was used. The most significant factor in determining radon concentration was the geographical region, followed by soil type, year of construction, and type of foundation. The predicted indoor radon concentrations given by the model varied from 50 to 440 Bq m-3. The lower figure represents a house with a basement, built in the 1950s on clay soil, in the region with the lowest radon concentration levels. The higher value represents a house with a concrete slab in contact with the ground, built in the 1980s, on gravel, in the region with the highest average radon concentration.
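The multiplicative structure described above can be sketched as a product of a regional baseline and per-factor multipliers. All zone names, baselines and factor values below are invented for illustration; the study's actual coefficients are not given in the abstract.

```python
# Hypothetical illustration: predicted concentration = regional baseline
# x soil factor x construction-year factor x foundation factor.
REGION_BASELINE = {"zone1": 60.0, "zone4": 180.0}    # Bq/m3, assumed
SOIL = {"clay": 0.8, "gravel": 1.6}                  # assumed multipliers
YEAR = {"1950s": 0.9, "1980s": 1.3}
FOUNDATION = {"basement": 1.0, "slab_on_ground": 1.2}

def predict_radon(zone, soil, year, foundation):
    """Multiplicative indoor radon concentration prediction (Bq/m3)."""
    return (REGION_BASELINE[zone] * SOIL[soil]
            * YEAR[year] * FOUNDATION[foundation])
```

With these illustrative multipliers, a 1950s basement house on clay in the lowest-radon zone and a 1980s slab-on-ground house on gravel in the highest zone span a range similar to the 50-440 Bq m-3 spread quoted in the abstract.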
Estimation of Costs and Durations of Construction of Urban Roads Using ANN and SVM
Directory of Open Access Journals (Sweden)
Igor Peško
2017-01-01
Full Text Available Offer preparation has always been a specific part of the building process with a significant impact on company business. Because income depends greatly on the offer's precision and on the balance between planned costs, both direct and overhead, and the desired profit, it is necessary to prepare a precise offer within the required time and available resources, which are always insufficient. The paper presents research into the precision that can be achieved when using artificial intelligence for the estimation of cost and duration in construction projects. Both artificial neural networks (ANNs) and support vector machines (SVMs) are analysed and compared. The best SVM showed higher precision when estimating costs, with a mean absolute percentage error (MAPE) of 7.06%, compared to the most precise ANN, which achieved a precision of 25.38%. Estimation of works duration proved to be more difficult. The best MAPEs were 22.77% and 26.26% for SVM and ANN, respectively.
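The MAPE figures quoted above are computed from paired actual and predicted values; a minimal sketch of the standard definition (the data in the test are invented, not from the paper):

```python
def mape(actual, predicted):
    """Mean absolute percentage error, in percent.

    MAPE = 100/n * sum(|actual_i - predicted_i| / |actual_i|).
    Assumes no actual value is zero.
    """
    pairs = list(zip(actual, predicted))
    return 100.0 * sum(abs((a - p) / a) for a, p in pairs) / len(pairs)
```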
CSIR Research Space (South Africa)
Kirton, A
2010-08-01
Full Text Available A method for calculating the variance and prediction intervals for biomass estimates obtained from allometric equations (A. Kirton, B. Scholes, S. Archibald; CSIR Ecosystem Processes and Dynamics, Natural Resources and the Environment, P.O. Box 395, Pretoria, 0001, South Africa). The report shows how prediction intervals (confidence intervals for predicted values) for allometric estimates can be obtained, using an example of estimating tree biomass from stem diameter. It explains how to deal with relationships which are in the power function form - a common form...
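The standard recipe for such power-function allometry is to fit ln(B) = a + b·ln(D) by least squares and back-transform the prediction interval. The sketch below uses invented diameter/biomass data and a fixed t-quantile of 2.45 (about right for 6 degrees of freedom at 95%); it illustrates the general technique, not the report's specific equations.

```python
import math

# Illustrative stem diameters (cm) and biomasses (kg); not from the report.
diam = [5.0, 8.0, 12.0, 15.0, 20.0, 25.0, 30.0, 40.0]
biom = [2.1, 6.5, 18.0, 30.0, 62.0, 110.0, 170.0, 340.0]

# Least-squares fit of ln(B) = a + b*ln(D).
x = [math.log(d) for d in diam]
y = [math.log(v) for v in biom]
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
sxx = sum((xi - xbar) ** 2 for xi in x)
b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
a = ybar - b * xbar
resid = [yi - (a + b * xi) for xi, yi in zip(x, y)]
s2 = sum(r ** 2 for r in resid) / (n - 2)   # residual variance

def prediction_interval(d_new, t=2.45):
    """Approximate 95% prediction interval for biomass at diameter d_new,
    back-transformed from the log-log scale."""
    x0 = math.log(d_new)
    se = math.sqrt(s2 * (1 + 1 / n + (x0 - xbar) ** 2 / sxx))
    mid = a + b * x0
    return math.exp(mid - t * se), math.exp(mid + t * se)
```

Note the interval widens for diameters far from the mean of the fitted data, and the back-transform makes it asymmetric around the point estimate.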
Latent degradation indicators estimation and prediction: A Monte Carlo approach
Zhou, Yifan; Sun, Yong; Mathew, Joseph; Wolff, Rodney; Ma, Lin
2011-01-01
Asset health inspections can produce two types of indicators: (1) direct indicators (e.g. the thickness of a brake pad, and the crack depth on a gear) which directly relate to a failure mechanism; and (2) indirect indicators (e.g. the indicators extracted from vibration signals and oil analysis data) which can only partially reveal a failure mechanism. While direct indicators enable more precise references to asset health condition, they are often more difficult to obtain than indirect indicators. The state space model provides an efficient approach to estimating direct indicators by using indirect indicators. However, existing state space models to estimate direct indicators largely depend on assumptions such as discrete time, discrete state, linearity, and Gaussianity. The discrete time assumption requires fixed inspection intervals. The discrete state assumption entails discretising continuous degradation indicators, which often introduces additional errors. The linear and Gaussian assumptions are not consistent with the nonlinear and irreversible degradation processes in most engineering assets. This paper proposes a state space model without these assumptions. Monte Carlo-based algorithms are developed to estimate the model parameters and the remaining useful life. These algorithms are evaluated for performance using numerical simulations in MATLAB. The results show that both the parameters and the remaining useful life are estimated accurately. Finally, the new state space model is used to process vibration and crack depth data from an accelerated test of a gearbox. During this application, the new state space model provides a better fit than the state space model with linear and Gaussian assumptions.
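A common Monte Carlo realisation of such an assumption-free state space model is a bootstrap particle filter. The sketch below tracks a latent degradation indicator (e.g. crack depth) from a noisy indirect indicator; the exponential growth model, noise levels and particle count are invented for illustration and are not the paper's model.

```python
import math
import random

random.seed(42)

N = 2000                                  # number of particles (assumed)
GROWTH, PROC_SD, OBS_SD = 0.05, 0.02, 0.10  # illustrative model parameters

def simulate(steps=30, x0=1.0):
    """Generate a true degradation path and noisy indirect observations."""
    xs, ys = [], []
    x = x0
    for _ in range(steps):
        x *= math.exp(GROWTH + random.gauss(0, PROC_SD))
        xs.append(x)
        ys.append(x + random.gauss(0, OBS_SD))
    return xs, ys

def particle_filter(ys, x0=1.0):
    """Bootstrap particle filter: propagate, weight, resample.

    Returns the filtered mean of the latent indicator at each step.
    No linearity or Gaussianity of the state dynamics is required.
    """
    particles = [x0] * N
    means = []
    for y in ys:
        # Propagate each particle through the (nonlinear) degradation model.
        particles = [p * math.exp(GROWTH + random.gauss(0, PROC_SD))
                     for p in particles]
        # Weight by the observation likelihood, then resample.
        w = [math.exp(-0.5 * ((y - p) / OBS_SD) ** 2) for p in particles]
        total = sum(w)
        particles = random.choices(particles, weights=[wi / total for wi in w], k=N)
        means.append(sum(particles) / N)
    return means
```

Extending the propagation step beyond the last observation gives Monte Carlo samples of the remaining useful life, i.e. the time until particles cross a failure threshold.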
Adaptive Disturbance Estimation for Offset-Free SISO Model Predictive Control
DEFF Research Database (Denmark)
Huusom, Jakob Kjøbsted; Poulsen, Niels Kjølstad; Jørgensen, Sten Bay
2011-01-01
Offset free tracking in Model Predictive Control requires estimation of unmeasured disturbances or the inclusion of an integrator. An algorithm for estimation of an unknown disturbance based on adaptive estimation with time varying forgetting is introduced and benchmarked against the classical...
International Nuclear Information System (INIS)
Shin, Koichi; Sawada, Masataka; Inohara, Yoshiki; Shidahara, Takumi; Hatano, Teruyoshi
2011-01-01
For the selection of the Detailed Investigation Areas for HLW disposal, predicting the tunnel constructability is one of the requirements together with assessing long-term safety. This report is the 1st of three papers dealing with the evaluation of tunnel constructability. This paper deals with the geological factors relating to difficult tunneling, such as squeezing, rockburst, and others, and with the prediction of rockburst. The 2nd paper will deal with the prediction of squeezing. The 3rd paper deals with the engineering characteristics of rock mass through rock mass classification. This paper on difficult tunneling is based upon an analysis of more than 500 tunneling reports covering about 280 tunnel constructions. The causes of difficult tunneling are related to (1) underground water, (2) mechanical properties of the rock, or (3) others such as gas. The geological factors for excessive water inflow are porous volcanic products of the Quaternary, fault crush zones and hydrothermally altered zones of the Green Tuff area, and degenerated mixed rock in accretionary complexes. The geological factors for squeezing are solfataric clay in Quaternary volcanic zones, fault crush zones and hydrothermally altered zones of the Green Tuff area, and mudstone and fault crush zones of sedimentary rock of the Neogene and later. Information useful for predicting rockburst has been gathered from previous reports. In the preliminary investigation stage, geological survey, geophysical survey and borehole survey from the surface are the sources of information. Therefore rock type, P-wave velocity from seismic exploration and in-situ rock stress from hydrofracturing have been considered. The majority of rockburst events occurred in granitic rock, excluding coal mines, where a different kind of rockburst occurred at pillars. P-wave velocity was around 5 km/s in the rock at rockburst events. Horizontal maximum and minimum stresses SH and Sh have been tested as a criterion for rockburst. It has been
International Nuclear Information System (INIS)
Neil, D.M.; Taylor, D.L.
1991-01-01
The Yucca Mountain site characterization program will be based on mechanical excavation techniques for the mined repository construction and development. Tunnel Boring Machines (TBMs), Mobile Miners (MM), Raiseborers (RB), Blind Hole Shaft Boring Machines (BHSB), and Roadheaders (RH) have been selected as the mechanical excavation machines most suited to mine the densely welded and non-welded tuffs of the Topopah Springs and Calico Hills members. Heavy duty RHs in the 70 to 100 ton class with 300 kW cutter motors have been evaluated, and formulas developed to predict machine performance based on the rock physical properties and the results of Linear Cutting Machine (LCM) tests done at the Colorado School of Mines (CSM) for Sandia National Labs (SNL).
Directory of Open Access Journals (Sweden)
Rui Zhang
2014-12-01
Full Text Available This paper presents a hierarchical approach to network construction and time series estimation in persistent scatterer interferometry (PSI) for deformation analysis using time series of high-resolution satellite SAR images. To balance computational efficiency against solution accuracy, a divide-and-conquer algorithm (i.e., two levels of PS networking and solution) is proposed for extracting deformation rates of a study area. The algorithm has been tested using 40 high-resolution TerraSAR-X images collected between 2009 and 2010 over Tianjin in China for subsidence analysis, and validated using ground-based leveling measurements. The experimental results indicate that the hierarchical approach can remarkably reduce computing time and memory requirements, and the subsidence measurements derived from the hierarchical solution are in good agreement with the leveling data.
Huang, Chun-Jung; Wang, Hsiao-Fan; Chiu, Hsien-Jane; Lan, Tsuo-Hung; Hu, Tsung-Ming; Loh, El-Wui
2010-10-01
Although schizophrenia can be treated, most patients still experience inevitable psychotic episodes from time to time. Precautionary actions can be taken if the next onset can be predicted. However, sufficient information is always lacking in the clinical scenario. A possible solution is to use virtual data generated from a limited amount of original data. The data construction method (DCM) has been shown to generate virtual felt-earthquake data effectively, and has been used in the prediction of further events. Here we investigated the performance of DCM in deriving the membership functions, and of discrete-event simulations (DES) in predicting the period embracing the initiation and termination time-points of the next psychotic episode, for 35 individual schizophrenic patients. The results showed that 21 subjects had a rate of successful simulations (RSS) ≥70%. Further analysis demonstrated that co-morbid coronary heart disease (CHD), risk of CHD, and the frequency of previous psychotic episodes increased the RSS.
Hernandez, D. W.
2012-12-01
The CDRP is a major construction project involving up to 400 workers using heavy earth moving equipment, blasting, drilling, rock crushing, and other techniques designed to move 7 million yards of earth. Much of this material is composed of serpentinite, blueschist, and other rocks that contain chrysotile, crocidolite, actinolite, tremolite, and Libby-class amphiboles. To date, over 1,000 personal, work area, and emission inventory related samples have been collected and analyzed by NIOSH 7400, NIOSH 7402, and CARB-AHERA methodology. Data indicate that various CDRP construction activities have the potential to generate significant mineral fibers and structures that could represent elevated on site and off site health risks. This presentation will review the Contractor's air monitoring program for this major project, followed by a discussion of predictive methods to evaluate potential onsite and offsite risks. Ultimately, the data are used for planning control strategies designed to achieve a Project Action Level of 0.01 f/cc (one tenth the Cal/OSHA PEL) and risk-based offsite target levels.
Evaluation of approaches for estimating the accuracy of genomic prediction in plant breeding.
Ould Estaghvirou, Sidi Boubacar; Ogutu, Joseph O; Schulz-Streeck, Torben; Knaak, Carsten; Ouzunova, Milena; Gordillo, Andres; Piepho, Hans-Peter
2013-12-06
In genomic prediction, an important measure of accuracy is the correlation between the predicted and the true breeding values. Direct computation of this quantity for real datasets is not possible, because the true breeding value is unknown. Instead, the correlation between the predicted breeding values and the observed phenotypic values, called predictive ability, is often computed. In order to indirectly estimate predictive accuracy, this latter correlation is usually divided by an estimate of the square root of heritability. In this study we use simulation to evaluate estimates of predictive accuracy for seven methods, four (1 to 4) of which use an estimate of heritability to divide predictive ability computed by cross-validation. Between them, the seven methods cover balanced and unbalanced datasets as well as correlated and uncorrelated genotypes. We propose one new indirect method (4) and two direct methods (5 and 6) for estimating predictive accuracy and compare their performances and those of four other existing approaches (three indirect (1 to 3) and one direct (7)) with simulated true predictive accuracy as the benchmark and with each other. The size of the estimated genetic variance and hence heritability exerted the strongest influence on the variation in the estimated predictive accuracy. Increasing the number of genotypes considerably increases the time required to compute predictive accuracy by all seven methods, most notably for the five methods that require cross-validation (Methods 1, 2, 3, 4 and 6). A new method that we propose (Method 5) and an existing method (Method 7) used in animal breeding programs were the fastest and gave the least biased, most precise and stable estimates of predictive accuracy. Of the methods that use cross-validation, Methods 4 and 6 were often the best. The estimated genetic variance and the number of genotypes had the greatest influence on predictive accuracy. Methods 5 and 7 were the fastest and produced the least biased, most precise and stable estimates of predictive accuracy.
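The indirect estimate described above is a one-line computation: predictive ability (correlation of predicted breeding values with phenotypes) divided by the square root of an estimated heritability. The numbers in the test are invented, not from the paper.

```python
import math

def predictive_accuracy(predictive_ability, heritability):
    """Indirect estimate of genomic prediction accuracy.

    accuracy ~= cor(predicted BV, phenotype) / sqrt(h2),
    the standard correction applied by the indirect methods above.
    Assumes 0 < heritability <= 1.
    """
    return predictive_ability / math.sqrt(heritability)
```

Because the divisor is below 1, the estimated accuracy always exceeds the raw predictive ability, and any bias in the heritability estimate propagates directly into the accuracy estimate, which is why the paper compares methods that differ in how h2 is obtained.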
Estimation of building-related construction and demolition waste in Shanghai.
Ding, Tao; Xiao, Jianzhuang
2014-11-01
One methodology is proposed to estimate the quantity and composition of building-related construction and demolition (C&D) waste in a fast developing region like Shanghai, PR China. The variety of structure types and building waste intensities arising from the evolution of building design and structural codes over different decades is considered in this regional C&D waste estimation study. It is concluded that approximately 13.71 million tons of C&D waste was generated in 2012 in Shanghai, of which more than 80% was concrete, bricks and blocks. The analysis from this study can help C&D waste administrators and researchers formulate precise policies and specifications. In fact, at least half of the enormous amount of C&D waste could be recycled if proper recycling technologies and measures were implemented. Appropriate management would be economically and environmentally beneficial to Shanghai, where the per capita annual output of C&D waste reached 842 kg in 2010. Copyright © 2014 Elsevier Ltd. All rights reserved.
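Regional estimates of this kind typically sum floor area times a per-area waste intensity over structure types and construction eras. The sketch below shows that accounting pattern; the function name and all figures are invented for illustration and are not the paper's coefficients.

```python
def estimate_cd_waste(activities):
    """Total C&D waste in tons.

    activities: iterable of (floor_area_m2, waste_intensity_t_per_m2) pairs,
    one per structure type / era combination (values are assumptions).
    """
    return sum(area * intensity for area, intensity in activities)
```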
Construction of ground-state preserving sparse lattice models for predictive materials simulations
Huang, Wenxuan; Urban, Alexander; Rong, Ziqin; Ding, Zhiwei; Luo, Chuan; Ceder, Gerbrand
2017-08-01
First-principles based cluster expansion models are the dominant approach in ab initio thermodynamics of crystalline mixtures, enabling the prediction of phase diagrams and novel ground states. However, despite recent advances, the construction of accurate models still requires a careful and time-consuming manual parameter tuning process for ground-state preservation, since this property is not guaranteed by default. In this paper, we present a systematic and mathematically sound method to obtain cluster expansion models that are guaranteed to preserve the ground states of their reference data. The method builds on the recently introduced compressive sensing paradigm for cluster expansion and employs quadratic programming to impose constraints on the model parameters. The robustness of our methodology is illustrated for two lithium transition metal oxides with relevance for Li-ion battery cathodes, i.e., Li2xFe2(1-x)O2 and Li2xTi2(1-x)O2, for which the construction of cluster expansion models with compressive sensing alone has proven to be challenging. We demonstrate that our method not only guarantees ground-state preservation on the set of reference structures used for the model construction, but also that out-of-sample ground-state preservation up to relatively large supercell sizes is achievable through a rapidly converging iterative refinement. This method provides a general tool for building robust, compressed and constrained physical models with predictive power.
Chen, Liang; Zhao, Qile; Hu, Zhigang; Jiang, Xinyuan; Geng, Changjiang; Ge, Maorong; Shi, Chuang
2018-01-01
The large number of ambiguities in the un-differenced (UD) model leads to lower computational efficiency, which is not suitable for high-frequency real-time GNSS clock estimation, such as 1 Hz. A mixed differenced model fusing UD pseudo-range and epoch-differenced (ED) phase observations has been introduced into real-time clock estimation. In this contribution, we extend the mixed differenced model to realize multi-GNSS real-time high-frequency clock updating, and a rigorous comparison and analysis under the same conditions is performed to achieve the best real-time clock estimation performance, taking efficiency, accuracy, consistency and reliability into consideration. Based on the multi-GNSS real-time data streams provided by the Multi-GNSS Experiment (MGEX) and Wuhan University, a GPS + BeiDou + Galileo global real-time augmentation positioning prototype system is designed and constructed, including real-time precise orbit determination, real-time precise clock estimation, real-time Precise Point Positioning (RT-PPP) and real-time Standard Point Positioning (RT-SPP). The statistical analysis of the 6 h-predicted real-time orbits shows that the root mean square (RMS) error in the radial direction is about 1-5 cm for GPS, BeiDou MEO and Galileo satellites and about 10 cm for BeiDou GEO and IGSO satellites. Using the mixed differenced estimation model, the prototype system can realize highly efficient real-time satellite absolute clock estimation with no constant clock-bias and can be used for high-frequency augmentation message updating (such as 1 Hz). The real-time augmentation message signal-in-space ranging error (SISRE), a comprehensive measure of orbit and clock accuracy affecting the users' actual positioning performance, is introduced to evaluate and analyze the performance of the GPS + BeiDou + Galileo global real-time augmentation positioning system. The statistical analysis of real-time augmentation message SISRE is about 4-7 cm for GPS, while 10 cm for BeiDou IGSO/MEO, Galileo and about 30 cm
Emerging Tools to Estimate and to Predict Exposures to ...
The timely assessment of the human and ecological risk posed by thousands of existing and emerging commercial chemicals is a critical challenge facing EPA in its mission to protect public health and the environment. The US EPA has been conducting research to enhance methods used to estimate and forecast exposures for tens of thousands of chemicals. This research is aimed at both assessing risks and supporting life cycle analysis by developing new models and tools for high-throughput exposure screening and prioritization, as well as databases that support these and other tools, especially regarding consumer products. The models and data address usage and take advantage of quantitative structure-activity relationships (QSARs) for both inherent chemical properties and function (why the chemical is a product ingredient). To make them more useful and widely available, the new tools, data, and models are designed to be: • Flexible • Interoperable • Modular (useful to more than one, stand-alone application) • Open (publicly available software). Presented at the Society for Risk Analysis Forum: Risk Governance for Key Enabling Technologies, Venice, Italy, March 1-3, 2017.
Nonlinear Model Predictive Control for Cooperative Control and Estimation
Ru, Pengkai
Recent advances in computational power have made it possible to perform expensive online computations for control systems. It is becoming more realistic to run computationally intensive optimization schemes online on systems that are not intrinsically stable and/or have very small time constants. As one of the most important optimization-based control approaches, model predictive control (MPC) has attracted a lot of interest from the research community due to its natural ability to incorporate constraints into its control formulation. Linear MPC has been well researched, and its stability can be guaranteed in the majority of its application scenarios. However, one issue that still remains with linear MPC is that it completely ignores the system's inherent nonlinearities, thus giving a sub-optimal solution. On the other hand, if achievable, nonlinear MPC would naturally yield a globally optimal solution and take into account all the innate nonlinear characteristics. While an exact solution to a nonlinear MPC problem remains extremely computationally intensive, if not impossible, one might wonder whether there is a middle ground between the two. In this dissertation we strike a balance by employing a state representation technique, namely the state-dependent coefficient (SDC) representation. This technique renders improved performance in terms of optimality compared to linear MPC while keeping the problem tractable. In fact, the computational power required is bounded by only a constant factor of that of completely linearized MPC. The purpose of this research is to provide a theoretical framework for the design of this kind of nonlinear MPC controller and its extension into a general cooperative scheme. The controller is designed and implemented on quadcopter systems.
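The SDC idea described above can be sketched in a few lines: factor the nonlinear dynamics as x_dot = A(x)x + Bu, freeze A at the current state, and solve a finite-horizon LQR at every step. The pendulum model, constants, horizon, and weights below are illustrative assumptions, not the dissertation's quadcopter formulation.

```python
import numpy as np

G_OVER_L, DAMP, DT = 9.81, 0.5, 0.02  # illustrative pendulum constants

def sdc_matrices(x):
    """SDC factorization x_dot = A(x)x + B u for a damped pendulum;
    sin(x1) is rewritten as (sin(x1)/x1) * x1 to obtain A(x)."""
    sinc = np.sinc(x[0] / np.pi)          # sin(x1)/x1, safe at x1 = 0
    A = np.array([[0.0, 1.0],
                  [-G_OVER_L * sinc, -DAMP]])
    B = np.array([[0.0], [1.0]])
    # crude forward-Euler discretization
    return np.eye(2) + DT * A, DT * B

def sdc_mpc_step(x, horizon=30, q=1.0, r=0.01):
    """One receding-horizon step: freeze A(x) at the current state and
    solve the finite-horizon LQR by backward Riccati recursion."""
    A, B = sdc_matrices(x)
    Q, R, P = q * np.eye(2), r * np.eye(1), q * np.eye(2)
    for _ in range(horizon):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return -(K @ x).item()                # apply only the first control

# closed-loop simulation from a large initial angle
x = np.array([2.0, 0.0])
for _ in range(500):
    u = sdc_mpc_step(x)
    A, B = sdc_matrices(x)
    x = A @ x + (B * u).ravel()
```

Because A(x) is re-frozen at every sample, each step costs the same as a linear finite-horizon MPC solve, which is the constant-factor bound mentioned in the abstract.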
International Nuclear Information System (INIS)
Emelyanov, V.
1994-01-01
The RESURS programme is described. Its implementation will make it possible to develop a regulatory and methodological basis providing legal and technical solutions to the problems of NPP equipment lifetime management, prediction, monitoring, and estimation.
An Evaluation of Growth Models as Predictive Tools for Estimates at Completion (EAC)
National Research Council Canada - National Science Library
Trahan, Elizabeth N
2009-01-01
...) as the Estimates at Completion (EAC). Our research evaluates the prospect of nonlinear growth modeling as an alternative to the current predictive tools used for calculating EAC, such as the Cost Performance Index (CPI...
A practical approach to parameter estimation applied to model predicting heart rate regulation
DEFF Research Database (Denmark)
Olufsen, Mette; Ottesen, Johnny T.
2013-01-01
Mathematical models have long been used for prediction of dynamics in biological systems. Recently, several efforts have been made to render these models patient specific. One way to do so is to employ techniques to estimate parameters that enable model-based prediction of observed quantities. Knowledge of variation in parameters within and between groups of subjects has the potential to provide insight into biological function. Often it is not possible to estimate all parameters in a given model, in particular if the model is complex and the data are sparse. However, it may be possible to estimate a subset of model parameters, reducing the complexity of the problem. In this study, we compare three methods that allow identification of parameter subsets that can be estimated given a model and a set of data. These methods will be used to estimate patient-specific parameters in a model predicting...
SAS-macros for estimation and prediction in a model of the electricity consumption
DEFF Research Database (Denmark)
1998-01-01
"SAS-macros for estimation and prediction in a model of the electricity consumption" is a large collection of SAS macros for handling a model of the electricity consumption in Eastern Denmark. The macros are installed at Elkraft, Ballerup.
Estimation of Mechanical Signals in Induction Motors using the Recursive Prediction Error Method
DEFF Research Database (Denmark)
Børsting, H.; Knudsen, Morten; Rasmussen, Henrik
1993-01-01
Sensor feedback of mechanical quantities for control applications in induction motors is troublesome and relatively expensive. In this paper a recursive prediction error (RPE) method has successfully been used to estimate the angular rotor speed ...
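The recursive prediction error idea can be illustrated on a simple linear model. The sketch below uses a recursive least-squares form of the update (a special case of RPE) to identify a first-order ARX model from input-output data; the induction-motor speed estimator in the paper is considerably more involved, so all model choices here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# true first-order system y[k+1] = a*y[k] + b*u[k] + noise
a_true, b_true = 0.9, 0.5
u = rng.normal(0.0, 1.0, 400)
y = np.zeros(401)
for k in range(400):
    y[k + 1] = a_true * y[k] + b_true * u[k] + rng.normal(0.0, 0.01)

# recursive prediction error (here: recursive least squares) update
theta = np.zeros(2)                 # estimates of [a, b]
P = 1000.0 * np.eye(2)              # parameter covariance
for k in range(400):
    phi = np.array([y[k], u[k]])    # regressor
    err = y[k + 1] - phi @ theta    # one-step prediction error
    gain = P @ phi / (1.0 + phi @ P @ phi)
    theta = theta + gain * err      # correct estimate along the gain
    P = P - np.outer(gain, phi @ P)
```

Each measurement updates the estimate in proportion to the prediction error, which is what makes the scheme suitable for online use in a drive controller.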
Southwell, Colin; Emmerson, Louise; Newbery, Kym; McKinlay, John; Kerry, Knowles; Woehler, Eric; Ensor, Paul
2015-01-01
Seabirds and other land-breeding marine predators are considered to be useful and practical indicators of the state of marine ecosystems because of their dependence on marine prey and the accessibility of their populations at breeding colonies. Historical counts of breeding populations of these higher-order marine predators are one of few data sources available for inferring past change in marine ecosystems. However, historical abundance estimates derived from these population counts may be subject to unrecognised bias and uncertainty because of variable attendance of birds at breeding colonies and variable timing of past population surveys. We retrospectively accounted for detection bias in historical abundance estimates of the colonial, land-breeding Adélie penguin through an analysis of 222 historical abundance estimates from 81 breeding sites in east Antarctica. The published abundance estimates were de-constructed to retrieve the raw count data and then re-constructed by applying contemporary adjustment factors obtained from remotely operating time-lapse cameras. The re-construction process incorporated spatial and temporal variation in phenology and attendance by using data from cameras deployed at multiple sites over multiple years and propagating this uncertainty through to the final revised abundance estimates. Our re-constructed abundance estimates were consistently higher and more uncertain than published estimates. The re-constructed estimates alter the conclusions reached for some sites in east Antarctica in recent assessments of long-term Adélie penguin population change. Our approach is applicable to abundance data for a wide range of colonial, land-breeding marine species including other penguin species, flying seabirds and marine mammals.
Directory of Open Access Journals (Sweden)
Colin Southwell
Full Text Available Seabirds and other land-breeding marine predators are considered to be useful and practical indicators of the state of marine ecosystems because of their dependence on marine prey and the accessibility of their populations at breeding colonies. Historical counts of breeding populations of these higher-order marine predators are one of few data sources available for inferring past change in marine ecosystems. However, historical abundance estimates derived from these population counts may be subject to unrecognised bias and uncertainty because of variable attendance of birds at breeding colonies and variable timing of past population surveys. We retrospectively accounted for detection bias in historical abundance estimates of the colonial, land-breeding Adélie penguin through an analysis of 222 historical abundance estimates from 81 breeding sites in east Antarctica. The published abundance estimates were de-constructed to retrieve the raw count data and then re-constructed by applying contemporary adjustment factors obtained from remotely operating time-lapse cameras. The re-construction process incorporated spatial and temporal variation in phenology and attendance by using data from cameras deployed at multiple sites over multiple years and propagating this uncertainty through to the final revised abundance estimates. Our re-constructed abundance estimates were consistently higher and more uncertain than published estimates. The re-constructed estimates alter the conclusions reached for some sites in east Antarctica in recent assessments of long-term Adélie penguin population change. Our approach is applicable to abundance data for a wide range of colonial, land-breeding marine species including other penguin species, flying seabirds and marine mammals.
Development of a hybrid model to predict construction and demolition waste: China as a case study.
Song, Yiliao; Wang, Yong; Liu, Feng; Zhang, Yixin
2017-01-01
Construction and demolition waste (C&DW) is currently a worldwide issue, and the situation is worst in China due to the rapid growth of the construction industry and the short life span of China's buildings. To create an opportunity out of this problem, comprehensive prevention measures and effective management strategies are urgently needed. One major gap in the waste-management literature is the lack of estimates of future C&DW generation. Therefore, this paper presents a forecasting procedure for C&DW in China that can forecast the quantity of each component of such waste. The proposed approach is based on a GM-SVR model, which improves the forecasting effectiveness of the gray model (GM) by adjusting the residual series with a support vector regression (SVR) method, and a transition matrix that estimates the discharge of each component of the C&DW. Using the proposed method, future C&DW volumes are listed and analyzed, including their potential components and distribution across provinces in China. In addition, the model-testing process provides mathematical evidence that the proposed model is an effective way to give policy makers forward-looking information on C&DW. Copyright © 2016 Elsevier Ltd. All rights reserved.
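A minimal sketch of the gray-model half of the GM-SVR scheme is given below: a GM(1,1) model is fitted to a short positive series and extrapolated. The SVR correction of the residual series and the transition matrix for waste components are omitted, and the series values are invented for illustration.

```python
import numpy as np

def gm11_fit_predict(x0, n_ahead=1):
    """Fit a GM(1,1) gray model to a positive series x0 and forecast
    n_ahead steps. The paper's GM-SVR scheme additionally corrects the
    residual series with a support vector regression, omitted here."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                       # accumulated series
    z1 = 0.5 * (x1[1:] + x1[:-1])            # background values
    # least-squares estimate of development coefficient a and gray input b
    Bm = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(Bm, x0[1:], rcond=None)[0]
    k = np.arange(len(x0) + n_ahead)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x0_hat = np.concatenate([[x0[0]], np.diff(x1_hat)])
    return x0_hat[len(x0):]                  # the forecast values

# example: a smoothly growing waste series (illustrative numbers)
waste = [100, 108, 117, 126, 136]
forecast = gm11_fit_predict(waste, n_ahead=2)
```

GM(1,1) captures the smooth exponential trend; in the paper's hybrid, the SVR then learns the structure left in the residuals, which is where the forecasting gain over the plain gray model comes from.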
Estimating Required Contingency Funds for Construction Projects using Multiple Linear Regression
National Research Council Canada - National Science Library
Cook, Jason J
2006-01-01
Cost overruns are a critical problem for construction projects. The common practice for dealing with cost overruns is the assignment of an arbitrary flat percentage of the construction budget as a contingency fund...
Holmes, John B; Dodds, Ken G; Lee, Michael A
2017-03-02
An important issue in genetic evaluation is the comparability of random effects (breeding values), particularly between pairs of animals in different contemporary groups. This is usually referred to as genetic connectedness. While various measures of connectedness have been proposed in the literature, there is general agreement that the most appropriate measure is some function of the prediction error variance-covariance matrix. However, obtaining the prediction error variance-covariance matrix is computationally demanding for large-scale genetic evaluations. Many alternative statistics have been proposed that avoid the computational cost of obtaining the prediction error variance-covariance matrix, such as counts of genetic links between contemporary groups, gene flow matrices, and functions of the variance-covariance matrix of estimated contemporary group fixed effects. In this paper, we show that a correction to the variance-covariance matrix of estimated contemporary group fixed effects will produce the exact prediction error variance-covariance matrix averaged by contemporary group for univariate models in the presence of single or multiple fixed effects and one random effect. We demonstrate the correction for a series of models and show that approximations to the prediction error matrix based solely on the variance-covariance matrix of estimated contemporary group fixed effects are inappropriate in certain circumstances. Our method allows for the calculation of a connectedness measure based on the prediction error variance-covariance matrix by calculating only the variance-covariance matrix of estimated fixed effects. Since the number of fixed effects in genetic evaluation is usually orders of magnitude smaller than the number of random effect levels, the computational requirements for our method should be reduced.
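The central object here, the prediction error variance-covariance matrix, can be obtained for a toy univariate model directly from Henderson's mixed model equations, as sketched below. The design, variance components, and identity relationship matrix are illustrative assumptions; the paper's contribution is precisely a correction that avoids forming this full inverse.

```python
import numpy as np

# toy data: 2 contemporary-group fixed effects, 6 random (animal) effects
n, n_fixed, n_rand = 12, 2, 6
X = np.zeros((n, n_fixed)); X[np.arange(n), np.arange(n) % n_fixed] = 1.0
Z = np.zeros((n, n_rand));  Z[np.arange(n), np.arange(n) % n_rand] = 1.0
var_e, var_u = 1.0, 0.5
lam = var_e / var_u

# Henderson's mixed model equations (identity relationship matrix assumed)
C = np.block([[X.T @ X,            X.T @ Z],
              [Z.T @ X, Z.T @ Z + lam * np.eye(n_rand)]])
Cinv = np.linalg.inv(C)

# prediction error variance-covariance matrix of the random effects:
# var_e times the random-effect block of the inverse coefficient matrix
PEV = var_e * Cinv[n_fixed:, n_fixed:]
```

For real evaluations the coefficient matrix has millions of random-effect equations, which is why working only with the (much smaller) fixed-effect block, as the paper proposes, is attractive.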
Vitásek, Stanislav; Matějka, Petr
2017-09-01
The article deals with problematic aspects of automated quantity takeoff (QTO) processing from data generated in a BIM model. It focuses on models of road constructions and uses volumes and dimensions of excavation work to create an estimate of construction costs. The article uses a case study and explorative methods to discuss the possibilities and problems of transferring data from a model to a price system of construction production when such a transfer is used for price estimates of construction works. Current QTOs and price tenders are prepared from 2D documents. This process is becoming obsolete because more modern tools can be used. The BIM phenomenon enables partial automation in processing volumes and dimensions of construction units and matching the data to units in a given price scheme. Therefore, the price of a construction can be estimated and structured without lengthy and often imprecise manual calculations. The use of BIM for QTO is highly dependent on local market budgeting systems, so a proper push/pull strategy is required, along with a proper requirements specification, a compatible pricing database, and suitable software.
Directory of Open Access Journals (Sweden)
Roca Miquel
2011-01-01
Full Text Available Abstract Background Fibromyalgia (FM) is a prevalent and disabling disorder characterized by a history of widespread pain for at least three months. Pain is considered a complex experience in which affective and cognitive aspects are crucial for prognosis. The aim of this study is to assess the importance of pain-related psychological constructs on function and pain in patients with FM. Methods Design Multicentric, naturalistic, one-year follow-up study. Setting and study sample Patients will be recruited from primary care health centres in the region of Aragon, Spain. Patients considered for inclusion are those aged 18-65 years, able to understand Spanish, who fulfil the criteria for primary FM according to the American College of Rheumatology, with no previous psychological treatment. Measurements The variables measured will be the following: main variables (pain, assessed with a visual analogue scale and with a sphygmomanometer, and general function, assessed with the Fibromyalgia Impact Questionnaire); psychological constructs (pain catastrophizing, pain acceptance, mental defeat, psychological inflexibility, perceived injustice, mindfulness, and positive and negative affect); and secondary variables (sociodemographic variables, anxiety and depression assessed with the Hospital Anxiety and Depression Scale, and psychiatric interview assessed with the MINI). Assessments will be carried out at baseline and at one-year follow-up. Main outcome Pain Visual Analogue Scale. Analysis The existence of differences in sociodemographic, main outcome, and other variables regarding pain-related psychological constructs will be analysed using the chi-square test for qualitative variables, or Student's t test or analysis of variance, respectively, for variables fulfilling the normality hypothesis. To assess the predictive value of pain-related psychological constructs on the main outcome variables at one-year follow-up, use will be made of a logistic regression analysis adjusted for socio
Parameter estimation techniques and uncertainty in ground water flow model predictions
International Nuclear Information System (INIS)
Zimmerman, D.A.; Davis, P.A.
1990-01-01
Quantification of uncertainty in predictions of nuclear waste repository performance is a requirement of Nuclear Regulatory Commission regulations governing the licensing of proposed geologic repositories for high-level radioactive waste disposal. One of the major uncertainties in these predictions is in estimating the ground-water travel time of radionuclides migrating from the repository to the accessible environment. The cause of much of this uncertainty has been attributed to a lack of knowledge about the hydrogeologic properties that control the movement of radionuclides through the aquifers. A major reason for this lack of knowledge is the paucity of data that is typically available for characterizing complex ground-water flow systems. Because of this, considerable effort has been put into developing parameter estimation techniques that infer property values in regions where no measurements exist. Currently, no single technique has been shown to be superior or even consistently conservative with respect to predictions of ground-water travel time. This work was undertaken to compare a number of parameter estimation techniques and to evaluate how differences in the parameter estimates and the estimation errors are reflected in the behavior of the flow model predictions. That is, we wished to determine to what degree uncertainties in flow model predictions may be affected simply by the choice of parameter estimation technique used. 3 refs., 2 figs
Energy Technology Data Exchange (ETDEWEB)
Kwon, Seung Hee; Jang, Kyung Pil [Department of Civil and Environmental Engineering, Myongji University, Yongin (Korea, Republic of); Bang, Jin-Wook [Department of Civil Engineering, Chungnam National University, Daejeon (Korea, Republic of); Lee, Jang Hwa [Structural Engineering Research Division, Korea Institute of Construction Technology (Korea, Republic of); Kim, Yun Yong, E-mail: yunkim@cnu.ac.kr [Structural Engineering Research Division, Korea Institute of Construction Technology (Korea, Republic of)
2014-08-15
Highlights: • Compressive strength tests for three concrete mixes were performed. • The parameters of the humidity-adjusted maturity function were determined. • Strength can be predicted considering temperature and relative humidity. - Abstract: This study proposes a method for predicting compressive strength development in the early ages of concretes used in the construction of nuclear power plants. Three representative mixes with strengths of 6000 psi (41.4 MPa), 4500 psi (31.0 MPa), and 4000 psi (27.6 MPa) were selected and tested under various curing conditions; the temperature ranged from 10 to 40 °C, and the relative humidity from 40 to 100%. In order to consider not only the effect of temperature but also that of humidity, an existing model, i.e. the humidity-adjusted maturity function, was adopted, and the parameters used in the function were determined from the test results. A series of tests was also performed under curing conditions of variable temperature and constant humidity, and a comparison between the measured and predicted strengths was made for verification.
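A maturity function of the kind described can be sketched as an Arrhenius equivalent age weighted by a humidity factor. The activation energy, the humidity-factor form, and the beta value below are common textbook choices used for illustration, not the calibrated parameters of the cited study.

```python
import math

R = 8.314           # J/(mol K), universal gas constant
EA = 33500          # J/mol, apparent activation energy (typical assumption)
T_REF = 293.15      # 20 degC reference temperature, in kelvin

def equivalent_age(temps_c, rhs, dt_hours, beta=4.0):
    """Humidity-adjusted equivalent age (hours) via an Arrhenius maturity
    function. The humidity factor 1/(1 + (beta*(1 - rh))**4) follows a
    common Bazant-type form; beta and EA are illustrative assumptions."""
    t_e = 0.0
    for tc, rh in zip(temps_c, rhs):
        arrh = math.exp(-EA / R * (1.0 / (tc + 273.15) - 1.0 / T_REF))
        humid = 1.0 / (1.0 + (beta * (1.0 - rh)) ** 4)
        t_e += arrh * humid * dt_hours
    return t_e

# 48 h of curing at 30 degC / 100% RH matures faster than at 10 degC / 60% RH
warm_wet = equivalent_age([30.0] * 48, [1.0] * 48, 1.0)
cold_dry = equivalent_age([10.0] * 48, [0.6] * 48, 1.0)
```

The equivalent age would then be mapped to compressive strength through a strength-maturity relation calibrated on the mix in question, which is the part the cited study determines from its test results.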
Nuland, H.J.C. van; Dusseldorp, E.; Martens, R.L.; Boekaerts, M.
2010-01-01
Different theoretical viewpoints on motivation make it hard to decide which model has the best potential to provide valid predictions on classroom performance. This study was designed to explore motivation constructs derived from different motivation perspectives that predict performance on a novel
Method for estimating capacity and predicting remaining useful life of lithium-ion battery
International Nuclear Information System (INIS)
Hu, Chao; Jain, Gaurav; Tamirisa, Prabhakar; Gorka, Tom
2014-01-01
Highlights: • We develop an integrated method for the capacity estimation and RUL prediction. • A state projection scheme is derived for capacity estimation. • The Gauss–Hermite particle filter technique is used for the RUL prediction. • Results with 10 years’ continuous cycling data verify the effectiveness of the method. - Abstract: The reliability of lithium-ion (Li-ion) rechargeable batteries used in implantable medical devices has been recognized as highly important by a broad range of stakeholders, including medical device manufacturers, regulatory agencies, physicians, and patients. To ensure that Li-ion batteries in these devices operate reliably, it is important to be able to assess the capacity of a Li-ion battery and predict its remaining useful life (RUL) throughout the whole lifetime. This paper presents an integrated method for the capacity estimation and RUL prediction of Li-ion batteries used in implantable medical devices. A state projection scheme from the author’s previous study is used for the capacity estimation. Then, based on the capacity estimates, the Gauss–Hermite particle filter technique is used to project the capacity fade to the end-of-service (EOS) value (or the failure limit) for the RUL prediction. Results of a 10-year continuous cycling test on Li-ion prismatic cells in the lab suggest that the proposed method achieves good accuracy in the capacity estimation and captures the uncertainty in the RUL prediction. Post-explant weekly cycling data obtained from field cells with 4–7 implant years further verify the effectiveness of the proposed method in the capacity estimation.
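The projection step can be illustrated with a plain bootstrap particle filter, a simpler stand-in for the Gauss–Hermite particle filter used in the paper: track (capacity, fade-rate) particles against synthetic measurements, then propagate each particle to the end-of-service threshold to obtain an RUL distribution. All numbers below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

# synthetic capacity-fade data: linear fade with measurement noise
true_fade, c0, eos = 0.002, 1.0, 0.8       # Ah/step, initial Ah, failure limit
cycles = np.arange(60)
meas = c0 - true_fade * cycles + rng.normal(0.0, 0.004, cycles.size)

# bootstrap particle filter over (capacity, fade rate)
n_p = 2000
cap = rng.normal(1.0, 0.02, n_p)
rate = rng.uniform(0.0005, 0.005, n_p)
for z in meas[1:]:
    cap = cap - rate + rng.normal(0.0, 0.001, n_p)     # propagate state
    w = np.exp(-0.5 * ((z - cap) / 0.004) ** 2)        # likelihood weights
    w /= w.sum()
    idx = rng.choice(n_p, n_p, p=w)                    # resample
    cap, rate = cap[idx], rate[idx]

# project each particle to the end-of-service threshold -> RUL distribution
rul = (cap - eos) / np.maximum(rate, 1e-6)
rul_median = float(np.median(rul))
```

The spread of the `rul` samples is what "captures the uncertainty in the RUL prediction": each particle carries its own capacity and fade rate, so the projected hitting times form a distribution rather than a point estimate.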
This report, Methodology to Estimate the Quantity, Composition and Management of Construction and Demolition Debris in the US, was developed to expand access to data on CDD in the US and to support research on CDD and sustainable materials management. Since past US EPA CDD estima...
Greenbaum, Gili; Renan, Sharon; Templeton, Alan R; Bouskila, Amos; Saltz, David; Rubenstein, Daniel I; Bar-David, Shirli
2017-12-22
Effective population size, a central concept in conservation biology, is now routinely estimated from genetic surveys and can also be theoretically predicted from demographic, life-history, and mating-system data. By evaluating the consistency of theoretical predictions with the empirically estimated effective size, insights can be gained regarding life-history characteristics and the relative impact of different life-history traits on genetic drift. These insights can be used to design and inform management strategies aimed at increasing effective population size. We demonstrated this approach by addressing the conservation of a reintroduced population of Asiatic wild ass (Equus hemionus). We estimated the variance effective size (Nev) from genetic data (Nev = 24.3) and formulated predictions for the impacts on Nev of demography, polygyny, female variance in lifetime reproductive success (RS), and heritability of female RS. By contrasting the genetic estimation with theoretical predictions, we found that polygyny was the strongest factor affecting genetic drift, because only when accounting for polygyny were predictions consistent with the genetically measured Nev. The comparison of effective-size estimation and predictions indicated that 10.6% of the males mated per generation when heritability of female RS was unaccounted for (polygyny responsible for an 81% decrease in Nev) and 19.5% mated when heritability of female RS was accounted for (polygyny responsible for a 67% decrease in Nev). Heritability of female RS also affected Nev (hf2 = 0.91; heritability responsible for a 41% decrease in Nev). The low effective size is of concern, and we suggest that management actions focus on the factors identified as strongly affecting Nev, namely, increasing the availability of artificial water sources to increase the number of dominant males contributing to the gene pool. This approach, evaluating life-history hypotheses in light of their impact on effective population size, and contrasting
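The strong effect of polygyny on effective size reported above can be illustrated with Wright's classic unequal-sex-ratio formula; this is only a minimal stand-in for the fuller demographic predictions used in the study, and the example numbers are illustrative.

```python
def effective_size_polygyny(n_males, n_females):
    """Effective size under an unequal number of breeding males and
    females: Ne = 4*Nm*Nf / (Nm + Nf) (Wright's classic formula)."""
    return 4.0 * n_males * n_females / (n_males + n_females)

# 100 breeding females; strong polygyny means few males actually mate
ne_all_males = effective_size_polygyny(100, 100)
ne_polygyny = effective_size_polygyny(11, 100)   # ~11% of males mating
```

Halving the number of breeding males does far more damage to Ne than halving the census size, which is why the study's management recommendation targets the number of dominant males contributing to the gene pool.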
Constructing an everywhere and locally relevant predictive model of the West-African critical zone
Hector, B.; Cohard, J. M.; Pellarin, T.; Maxwell, R. M.; Cappelaere, B.; Demarty, J.; Grippa, M.; Kergoat, L.; Lebel, T.; Mamadou, O.; Mougin, E.; Panthou, G.; Peugeot, C.; Vandervaere, J. P.; Vischel, T.; Vouillamoz, J. M.
2017-12-01
Considering water resources and hydrologic hazards, West Africa is among the regions most vulnerable to both climatic change (e.g. the observed intensification of precipitation) and anthropogenic change. With a demographic growth rate of about 3% per year, the region experiences rapid land-use changes and increased pressure on surface water and groundwater resources, with observed consequences for the hydrological cycle (water-table rise as a result of the Sahelian paradox, increase in flood occurrence, etc.). Managing large hydrosystems (such as transboundary aquifers or river basins like the Niger) requires anticipating such changes. However, the region significantly lacks observations for constructing and validating critical zone (CZ) models able to predict future hydrologic regimes, and it comprises hydrosystems that span strong environmental gradients (e.g. geological, climatic, ecological) with very different dominant hydrological processes. We address these issues by constructing a high-resolution (1 km²) regional-scale physically based model using ParFlow-CLM, which allows modeling a wide range of processes without prior knowledge of their relative dominance. Our approach combines modeling at multiple scales, from local to meso and regional, within the same theoretical framework. Local- and meso-scale models are evaluated using the rich AMMA-CATCH CZ observation database, which covers three supersites with contrasting environments in Benin (lat. 9.8°N), Niger (lat. 13.3°N), and Mali (lat. 15.3°N). At the regional scale, the lack of a relevant map of soil hydrodynamic parameters is addressed using remote sensing data assimilation. Our first results show the model's ability to reproduce the known dominant hydrological processes (runoff generation, ET, groundwater recharge…) across the major West-African regions and allow us to conduct virtual experiments to explore the impact of global changes on the hydrosystems. This approach is a first step toward the construction of
Construction and evaluation of yeast expression networks by database-guided predictions
Directory of Open Access Journals (Sweden)
Katharina Papsdorf
2016-05-01
Full Text Available DNA-Microarrays are powerful tools to obtain expression data on the genome-wide scale. We performed microarray experiments to elucidate the transcriptional networks which are up- or down-regulated in response to the expression of toxic polyglutamine proteins in yeast. Such experiments initially generate hit lists containing differentially expressed genes. To look into transcriptional responses, we constructed networks from these genes. We therefore developed an algorithm capable of dealing with very small numbers of microarrays by clustering the hits based on co-regulatory relationships obtained from the SPELL database. Here, we evaluate this algorithm according to several criteria and further develop its statistical capabilities. Initially, we define how the number of SPELL-derived co-regulated genes and the number of input hits influence the quality of the networks. We then show the ability of our networks to accurately predict further differentially expressed genes. Including these predicted genes in the networks improves the network quality and allows quantifying the predictive strength of the networks based on a newly implemented scoring method. We find that this approach is useful for our own experimental data sets and also for many other data sets which we tested from the SPELL microarray database. Furthermore, the clusters obtained by the described algorithm greatly improve the assignment to biological processes and transcription factors for the individual clusters. Thus, the described clustering approach, which will be available through the ClusterEx web interface, and the evaluation parameters derived from it represent valuable tools for the fast and informative analysis of yeast microarray data.
Directory of Open Access Journals (Sweden)
Kihara Daisuke
2010-05-01
Full Text Available Abstract Background A new paradigm of biological investigation takes advantage of technologies that produce large high-throughput datasets, including genome sequences, interactions of proteins, and gene expression. The ability of biologists to analyze and interpret such data relies on functional annotation of the included proteins, but even in highly characterized organisms many proteins can lack the functional evidence necessary to infer their biological relevance. Results Here we have applied high-confidence function predictions from our automated prediction system, PFP, to three genome sequences, Escherichia coli, Saccharomyces cerevisiae, and Plasmodium falciparum (malaria). The number of annotated genes is increased by PFP to over 90% for all of the genomes. Using the large coverage of the function annotation, we introduced functional similarity networks which represent the functional space of the proteomes. Four different functional similarity networks are constructed for each proteome, one each by considering similarity in a single Gene Ontology (GO) category, i.e. Biological Process, Cellular Component, and Molecular Function, and another one by considering overall similarity with the funSim score. The functional similarity networks are shown to have higher modularity than the protein-protein interaction network. Moreover, the funSim score network is distinct from the single GO-score networks by showing a higher clustering degree exponent value and thus has a higher tendency to be hierarchical. In addition, examining function assignments to the protein-protein interaction network and local regions of genomes has identified numerous cases where subnetworks or local regions have functionally coherent proteins. These results will help in interpreting interactions of proteins and gene orders in a genome. Several examples of both analyses are highlighted. Conclusion The analyses demonstrate that applying high confidence predictions from PFP
Dang, Mia; Ramsaran, Kalinda D; Street, Melissa E; Syed, S Noreen; Barclay-Goddard, Ruth; Stratford, Paul W; Miller, Patricia A
2011-01-01
To estimate the predictive accuracy and clinical usefulness of the Chedoke-McMaster Stroke Assessment (CMSA) predictive equations. A longitudinal prognostic study using historical data obtained from 104 patients admitted post cerebrovascular accident was undertaken. Data were abstracted for all patients undergoing rehabilitation post stroke who also had documented admission and discharge CMSA scores. Published predictive equations were used to determine predicted outcomes. To determine the accuracy and clinical usefulness of the predictive model, shrinkage coefficients and predictions with 95% confidence bands were calculated. Complete data were available for 74 patients with a mean age of 65.3±12.4 years. The shrinkage values for the six Impairment Inventory (II) dimensions varied from -0.05 to 0.09; the shrinkage value for the Activity Inventory (AI) was 0.21. The error associated with predictive values was greater than ±1.5 stages for the II dimensions and greater than ±24 points for the AI. This study shows that the large error associated with the predictions (as defined by the confidence band) for the CMSA II and AI limits their clinical usefulness as a predictive measure. Further research to establish predictive models using alternative statistical procedures is warranted.
Dang, Mia; Ramsaran, Kalinda D.; Street, Melissa E.; Syed, S. Noreen; Barclay-Goddard, Ruth; Miller, Patricia A.
2011-01-01
ABSTRACT Purpose: To estimate the predictive accuracy and clinical usefulness of the Chedoke–McMaster Stroke Assessment (CMSA) predictive equations. Method: A longitudinal prognostic study using historical data obtained from 104 patients admitted post cerebrovascular accident was undertaken. Data were abstracted for all patients undergoing rehabilitation post stroke who also had documented admission and discharge CMSA scores. Published predictive equations were used to determine predicted outcomes. To determine the accuracy and clinical usefulness of the predictive model, shrinkage coefficients and predictions with 95% confidence bands were calculated. Results: Complete data were available for 74 patients with a mean age of 65.3±12.4 years. The shrinkage values for the six Impairment Inventory (II) dimensions varied from −0.05 to 0.09; the shrinkage value for the Activity Inventory (AI) was 0.21. The error associated with predictive values was greater than ±1.5 stages for the II dimensions and greater than ±24 points for the AI. Conclusions: This study shows that the large error associated with the predictions (as defined by the confidence band) for the CMSA II and AI limits their clinical usefulness as a predictive measure. Further research to establish predictive models using alternative statistical procedures is warranted. PMID:22654239
Directory of Open Access Journals (Sweden)
S Hong Lee
Full Text Available Genomic prediction is emerging in a wide range of fields including animal and plant breeding, risk prediction in human precision medicine and forensics. It is desirable to establish a theoretical framework for genomic prediction accuracy when the reference data consist of information sources with varying degrees of relationship to the target individuals. A reference set can contain both close and distant relatives as well as 'unrelated' individuals from the wider population used in genomic prediction. The various sources of information were modeled as different populations with different effective population sizes (Ne). Both the effective number of chromosome segments (Me) and Ne are considered to be a function of the data used for prediction. We validate our theory with analyses of simulated as well as real data, and illustrate that the variation in genomic relationships with the target is a predictor of the information content of the reference set. With a similar amount of data available for each source, we show that close relatives can have a substantially larger effect on genomic prediction accuracy than less-related individuals. We also illustrate that when prediction relies on closer relatives, there is less improvement in prediction accuracy with an increase in training data or marker panel density. We release software that can estimate the expected prediction accuracy and power when combining different reference sources with various degrees of relationship to the target, which is useful when planning genomic prediction (before or after collecting data) in animal, plant and human genetics.
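The dependence of accuracy on reference size and relatedness described above is often summarized with a deterministic approximation in which the effective number of chromosome segments (Me) shrinks as the reference becomes more closely related to the target. The sketch below uses a widely cited Daetwyler-type formula; it illustrates the general theory, not the exact expressions in the authors' released software, and all numbers are hypothetical.

```python
import math

def expected_accuracy(n_reference, h2, me):
    """Expected accuracy of a genomic predictor.

    Deterministic approximation r = sqrt(theta / (theta + 1)), where
    theta = N * h2 / Me is the information available per independent
    chromosome segment.
    """
    theta = n_reference * h2 / me
    return math.sqrt(theta / (theta + 1.0))

# Close relatives correspond to a small effective number of segments
# (Me), so the same amount of data yields a much higher accuracy.
acc_close = expected_accuracy(n_reference=5000, h2=0.5, me=500)      # ~0.91
acc_distant = expected_accuracy(n_reference=5000, h2=0.5, me=50000)  # ~0.22
```

The formula also reproduces the diminishing-returns effect noted in the abstract: when Me is small (close relatives), theta is already large, so adding reference records or markers moves r very little.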
Directory of Open Access Journals (Sweden)
Gerhard Moser
2015-04-01
Full Text Available Gene discovery, estimation of heritability captured by SNP arrays, inference on genetic architecture and prediction analyses of complex traits are usually performed using different statistical models and methods, leading to inefficiency and loss of power. Here we use a Bayesian mixture model that simultaneously allows variant discovery, estimation of genetic variance explained by all variants and prediction of unobserved phenotypes in new samples. We apply the method to simulated data of quantitative traits and Wellcome Trust Case Control Consortium (WTCCC) data on disease and show that it provides accurate estimates of SNP-based heritability, produces unbiased estimators of risk in new samples, and that it can estimate genetic architecture by partitioning variation across hundreds to thousands of SNPs. We estimated that, depending on the trait, 2,633 to 9,411 SNPs explain all of the SNP-based heritability in the WTCCC diseases. The majority of those SNPs (>96%) had small effects, confirming a substantial polygenic component to common diseases. The proportion of the SNP-based variance explained by large effects (each SNP explaining 1% of the variance) varied markedly between diseases, ranging from almost zero for bipolar disorder to 72% for type 1 diabetes. Prediction analyses demonstrate that for diseases with major loci, such as type 1 diabetes and rheumatoid arthritis, Bayesian methods outperform profile scoring or mixed model approaches.
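The mixture prior at the heart of such models puts most SNP effects at zero and lets a few variants carry large effects. The toy simulation below illustrates only this variance-partitioning idea under assumed mixture proportions and variances; it is not the authors' Bayesian sampler, and every number in it is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(5)
m = 10000  # number of SNPs

# Hypothetical three-component mixture of per-SNP effect variances:
# most SNPs null, some with small effects, a few with large effects.
var_comp = rng.choice([0.0, 1e-4, 1e-2], size=m, p=[0.95, 0.045, 0.005])
beta = rng.standard_normal(m) * np.sqrt(var_comp)  # zero where var_comp is 0

# With standardized genotypes each SNP contributes beta^2 to the genetic
# variance; partition the total across effect-size classes.
var_per_snp = beta ** 2
total_var = var_per_snp.sum()
share_of_large = var_per_snp[var_per_snp > 0.01 * total_var].sum() / total_var
n_causal = int(np.count_nonzero(beta))
```

`n_causal` plays the role of the "2,633 to 9,411 SNPs" figure, and `share_of_large` the role of the proportion of SNP-based variance attributed to SNPs each explaining at least 1% of it.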
ESTIMATING INJURIOUS IMPACT IN CONSTRUCTION LIFE CYCLE ASSESSMENTS: A PROSPECTIVE STUDY
Directory of Open Access Journals (Sweden)
McDevitt, James E.
2012-04-01
Full Text Available This paper is the result of a desire to include social factors alongside environmental and economic considerations in Life Cycle Assessment studies for the construction sector. We describe a specific search for a method to include injurious impact in construction Life Cycle Assessment studies, by evaluating a range of methods and data sources. A simple case study using selected Accident Compensation Corporation information illustrates that data relating to injury could provide compelling evidence to drive changes in construction supply chains, and could provide an economic motive to pursue further research in this area. The paper concludes that, limitations notwithstanding, the suggested approach could be useful as a fast and cheap high-level tool that can accelerate the discussions and research agenda that will bring about the inclusion of social metrics in construction sector supply chain management and declarations.
Predictive Uncertainty Estimation in Water Demand Forecasting Using the Model Conditional Processor
Directory of Open Access Journals (Sweden)
Amos O. Anele
2018-04-01
Full Text Available In a previous paper, a number of potential models for short-term water demand (STWD) prediction have been analysed to find the ones with the best fit. The results obtained in Anele et al. (2017) showed that hybrid models may be considered as accurate and appropriate forecasting models for STWD prediction. However, such a best single-valued forecast does not guarantee reliable and robust decisions, which can be properly obtained via model uncertainty processors (MUPs). MUPs provide an estimate of the full predictive densities and not only the single-valued expected prediction. Amongst other MUPs, the purpose of this paper is to use the multi-variate version of the model conditional processor (MCP), proposed by Todini (2008), to demonstrate how the estimation of the predictive probability conditional to a number of relatively good predictive models may improve our knowledge, thus reducing the predictive uncertainty (PU) when forecasting into the unknown future. Through the MCP approach, the probability distribution of the future water demand can be assessed depending on the forecast provided by one or more deterministic forecasting models. Based on average weekly data of 168 h, the probability density of the future demand is built conditional on three models' predictions, namely the autoregressive-moving average (ARMA), feed-forward back propagation neural network (FFBP-NN) and hybrid model (i.e., combined forecast from ARMA and FFBP-NN). The results obtained show that MCP may be effectively used for real-time STWD prediction since it brings out the PU connected to its forecast, and such information could help water utilities estimate the risk connected to a decision.
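In normal space the MCP reduces to the standard conditional-Gaussian formulas: the observation and the model forecasts are treated as jointly Gaussian, and conditioning on the forecasts shrinks the predictive variance. The sketch below assumes variables that are already Gaussian (the actual MCP first maps data to normal space with a normal quantile transform) and uses two synthetic, hypothetical forecasters:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
truth = rng.normal(size=n)
# Two hypothetical forecast models: the truth plus independent noise.
f1 = truth + rng.normal(scale=0.5, size=n)
f2 = truth + rng.normal(scale=0.8, size=n)

# Joint covariance of (observation, forecast1, forecast2), partitioned
# as in the standard conditional-Gaussian formulas.
S = np.cov(np.vstack([truth, f1, f2]))
S_yy, S_yx, S_xx = S[0, 0], S[0, 1:], S[1:, 1:]
w = np.linalg.solve(S_xx, S_yx)      # regression weights on the forecasts
var_cond = S_yy - S_yx @ w           # predictive (conditional) variance

# Conditional mean given a new pair of forecasts:
mu = np.array([truth.mean(), f1.mean(), f2.mean()])
x_new = np.array([1.2, 0.9])
y_hat = mu[0] + w @ (x_new - mu[1:])
```

Because `var_cond` is strictly smaller than the unconditional variance `S_yy`, conditioning on several reasonably good forecasters reduces the predictive uncertainty, which is the effect the abstract reports for the ARMA, FFBP-NN and hybrid predictions.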
Consiglio, Maria C.; Hoadley, Sherwood T.; Allen, B. Danette
2009-01-01
Wind prediction errors are known to affect the performance of automated air traffic management tools that rely on aircraft trajectory predictions. In particular, automated separation assurance tools, planned as part of the NextGen concept of operations, must be designed to account for and compensate for the impact of wind prediction errors and other system uncertainties. In this paper we describe a high fidelity batch simulation study designed to estimate the separation distance required to compensate for the effects of wind-prediction errors under increasing traffic density on an airborne separation assistance system. These experimental runs are part of the Safety Performance of Airborne Separation experiment suite that examines the safety implications of prediction errors and system uncertainties on airborne separation assurance systems. In this experiment, wind-prediction errors were varied between zero and forty knots while traffic density was increased to several times current traffic levels. In order to accurately measure the full unmitigated impact of wind-prediction errors, no uncertainty buffers were added to the separation minima. The goal of the study was to measure the impact of wind-prediction errors in order to estimate the additional separation buffers necessary to preserve separation and to provide a baseline for future analyses. Buffer estimations from this study will be used and verified in upcoming safety evaluation experiments under similar simulation conditions. Results suggest that the strategic airborne separation functions exercised in this experiment can sustain wind prediction errors of up to 40 kts at current-day air traffic density with no additional separation distance buffer, and at eight times current-day density with no more than a 60% increase in separation distance buffer.
Kernel density estimation-based real-time prediction for respiratory motion
International Nuclear Information System (INIS)
Ruan, Dan
2010-01-01
Effective delivery of adaptive radiotherapy requires locating the target with high precision in real time. System latency caused by data acquisition, streaming, processing and delivery control necessitates prediction. Prediction is particularly challenging for highly mobile targets such as thoracic and abdominal tumors undergoing respiration-induced motion. The complexity of the respiratory motion makes it difficult to build and justify explicit models. In this study, we honor the intrinsic uncertainties in respiratory motion and propose a statistical treatment of the prediction problem. Instead of asking for a deterministic covariate-response map and a unique estimated value for the future target position, we aim to obtain a distribution of the future target position (response variable) conditioned on the observed historical sample values (covariate variable). The key idea is to estimate the joint probability density function (pdf) of the covariate and response variables using an efficient kernel density estimation method. Then, the problem of identifying the distribution of the future target position reduces to identifying the section in the joint pdf based on the observed covariate. Subsequently, estimators are derived based on this estimated conditional distribution. This probabilistic perspective has some distinctive advantages over existing deterministic schemes: (1) it is compatible with potentially inconsistent training samples, i.e., when close covariate variables correspond to dramatically different response values; (2) it is not restricted by any prior structural assumption on the map between the covariate and the response; (3) the two-stage setup allows much freedom in choosing statistical estimates and provides a full nonparametric description of the uncertainty for the resulting estimate. We evaluated the prediction performance on ten patient RPM traces, using the root mean squared difference between the prediction and the observed value normalized by the
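A minimal sketch of the two-stage idea with a one-dimensional covariate: a Gaussian kernel estimate of the joint pdf yields, after conditioning on the observed covariate, a weighted mixture over the training responses whose mean and variance serve as the point prediction and its uncertainty. This illustrates the general approach only, not the authors' implementation; the sinusoidal trace, the lag and the bandwidth are all made up.

```python
import numpy as np

def conditional_kde_predict(x_hist, y_hist, x_query, bandwidth=0.3):
    """Predict the response distribution at x_query from a Gaussian
    kernel estimate of the joint pdf of (covariate, response).

    The conditional density p(y | x) is a mixture of kernels centred
    on the training responses; its mean gives a point prediction and
    its variance quantifies the predictive uncertainty.
    """
    w = np.exp(-0.5 * ((x_hist - x_query) / bandwidth) ** 2)
    w /= w.sum()
    mean = np.sum(w * y_hist)
    var = np.sum(w * (y_hist - mean) ** 2) + bandwidth ** 2
    return mean, var

# Toy respiratory-like trace: predict the value ~0.2 s ahead from the
# current position (covariate = value 4 samples earlier).
t = np.linspace(0, 20, 400)
pos = np.sin(2 * np.pi * t / 4.0)
x_hist, y_hist = pos[:-4], pos[4:]
m, v = conditional_kde_predict(x_hist, y_hist, x_query=0.0)
```

Near a zero crossing the covariate is consistent with both the rising and the falling branch of the trace, so the conditional mean lies between the two branches and the variance stays large: exactly the "inconsistent training samples" situation that a deterministic covariate-response map cannot represent.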
Cole, Adam G; Kennedy, Ryan David; Chaurasia, Ashok; Leatherdale, Scott T
2017-12-06
Within tobacco prevention programming, it is useful to identify youth that are at risk for experimenting with various tobacco products and e-cigarettes. The susceptibility to smoking construct is a simple method to identify never-smoking students that are less committed to remaining smoke-free. However, the predictive validity of this construct has not been tested within the Canadian context or for the use of other tobacco products and e-cigarettes. This study used a large, longitudinal sample of secondary school students that reported never using tobacco cigarettes and non-current use of alternative tobacco products or e-cigarettes at baseline in Ontario, Canada. The sensitivity, specificity, and positive and negative predictive values of the susceptibility construct for predicting tobacco cigarette, e-cigarette, cigarillo or little cigar, cigar, hookah, and smokeless tobacco use one and two years after baseline measurement were calculated. At baseline, 29.4% of the sample was susceptible to future tobacco product or e-cigarette use. The sensitivity of the construct ranged from 43.2% (smokeless tobacco) to 59.5% (tobacco cigarettes), the specificity ranged from 70.9% (smokeless tobacco) to 75.9% (tobacco cigarettes), and the positive predictive value ranged from 2.6% (smokeless tobacco) to 32.2% (tobacco cigarettes). Similar values were calculated for each measure of the susceptibility construct. A significant number of youth that did not currently use tobacco products or e-cigarettes at baseline reported using tobacco products and e-cigarettes over a two-year follow-up period. The predictive validity of the susceptibility construct was high and the construct can be used to predict other tobacco product and e-cigarette use among youth. This study presents the predictive validity of the susceptibility construct for the use of tobacco cigarettes among secondary school students in Ontario, Canada. It also presents a novel use of the susceptibility construct for
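The reported indices come from the usual 2x2 screening table. The helper below shows the computation with made-up counts (not the study's data); note how a rare outcome pushes the positive predictive value down even when sensitivity and specificity look reasonable, mirroring the very low PPV reported for smokeless tobacco.

```python
def predictive_validity(tp, fp, fn, tn):
    """Standard 2x2 screening indices for a construct such as
    susceptibility: tp/fp/fn/tn are counts from the follow-up table."""
    sensitivity = tp / (tp + fn)   # susceptible among eventual users
    specificity = tn / (tn + fp)   # non-susceptible among never-users
    ppv = tp / (tp + fp)           # eventual users among the susceptible
    npv = tn / (tn + fn)           # never-users among the non-susceptible
    return sensitivity, specificity, ppv, npv

# Illustrative counts only: 4,000 students, 100 eventual users.
sens, spec, ppv, npv = predictive_validity(tp=60, fp=940, fn=40, tn=2960)
```

With these hypothetical numbers sensitivity is 60% and specificity about 76%, yet the PPV is only 6%, because most flagged students never take up the product: the same pattern the abstract describes across tobacco products.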
Maboudi Afkham, Heydar; Qiu, Xuanbin; The, Matthew; Käll, Lukas
2017-02-15
Liquid chromatography is frequently used as a means to reduce the complexity of peptide-mixtures in shotgun proteomics. For such systems, the time when a peptide is released from a chromatography column and registered in the mass spectrometer is referred to as the peptide's retention time. Using heuristics or machine learning techniques, previous studies have demonstrated that it is possible to predict the retention time of a peptide from its amino acid sequence. In this paper, we apply Gaussian Process Regression to the feature representation of the previously described predictor Elude. Using this framework, we demonstrate that it is possible to estimate the uncertainty of the prediction made by the model. Here we show how this uncertainty relates to the actual error of the prediction. In our experiments, we observe a strong correlation between the estimated uncertainty provided by Gaussian Process Regression and the actual prediction error. This relation provides us with new means for assessment of the predictions. We demonstrate how a subset of the peptides can be selected with lower prediction error compared to the whole set. We also demonstrate how such predicted standard deviations can be used for designing adaptive windowing strategies. lukas.kall@scilifelab.se. Our software and the data used in our experiments are publicly available and can be downloaded from https://github.com/statisticalbiotechnology/GPTime . © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
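The central claim, that the GP's predictive standard deviation tracks the actual error, can be reproduced with a small numpy-only GP on synthetic data. This is a generic RBF-kernel GP on a toy signal, not the GPTime feature representation, and all hyperparameters are invented.

```python
import numpy as np

def gp_predict(X, y, Xs, ell=1.0, sf2=1.0, sn2=0.01):
    """Gaussian process regression with an RBF kernel; returns the
    predictive mean and standard deviation at the test inputs Xs."""
    def k(a, b):
        d = a[:, None] - b[None, :]
        return sf2 * np.exp(-0.5 * (d / ell) ** 2)
    K = k(X, X) + sn2 * np.eye(len(X))
    Ks = k(Xs, X)
    mean = Ks @ np.linalg.solve(K, y)
    v = np.linalg.solve(K, Ks.T)
    var = sf2 + sn2 - np.sum(Ks * v.T, axis=1)
    return mean, np.sqrt(np.maximum(var, 0.0))

rng = np.random.default_rng(1)
X = np.sort(rng.uniform(0, 10, 40))
y = np.sin(X) + rng.normal(scale=0.1, size=40)
Xs = np.linspace(-2, 12, 200)        # extends beyond the training range
mean, std = gp_predict(X, y, Xs)

err = np.abs(mean - np.sin(Xs))
r = np.corrcoef(std, err)[0, 1]      # uncertainty tracks the actual error
```

Points outside the training range get both a large predictive standard deviation and a large error, so `r` is clearly positive; thresholding on `std` then selects a subset of predictions with lower error, as the abstract describes for peptides.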
Base pair probability estimates improve the prediction accuracy of RNA non-canonical base pairs.
Directory of Open Access Journals (Sweden)
Michael F Sloma
2017-11-01
Full Text Available Prediction of RNA tertiary structure from sequence is an important problem, but generating accurate structure models for even short sequences remains difficult. Predictions of RNA tertiary structure tend to be least accurate in loop regions, where non-canonical pairs are important for determining the details of structure. Non-canonical pairs can be predicted using a knowledge-based model of structure that scores nucleotide cyclic motifs, or NCMs. In this work, a partition function algorithm is introduced that allows the estimation of base pairing probabilities for both canonical and non-canonical interactions. Pairs that are predicted to be probable are more likely to be found in the true structure than pairs of lower probability. Pair probability estimates can be further improved by predicting the structure conserved across multiple homologous sequences using the TurboFold algorithm. These pairing probabilities, used in concert with prior knowledge of the canonical secondary structure, allow accurate inference of non-canonical pairs, an important step towards accurate prediction of the full tertiary structure. Software to predict non-canonical base pairs and pairing probabilities is now provided as part of the RNAstructure software package.
Base pair probability estimates improve the prediction accuracy of RNA non-canonical base pairs.
Sloma, Michael F; Mathews, David H
2017-11-01
Prediction of RNA tertiary structure from sequence is an important problem, but generating accurate structure models for even short sequences remains difficult. Predictions of RNA tertiary structure tend to be least accurate in loop regions, where non-canonical pairs are important for determining the details of structure. Non-canonical pairs can be predicted using a knowledge-based model of structure that scores nucleotide cyclic motifs, or NCMs. In this work, a partition function algorithm is introduced that allows the estimation of base pairing probabilities for both canonical and non-canonical interactions. Pairs that are predicted to be probable are more likely to be found in the true structure than pairs of lower probability. Pair probability estimates can be further improved by predicting the structure conserved across multiple homologous sequences using the TurboFold algorithm. These pairing probabilities, used in concert with prior knowledge of the canonical secondary structure, allow accurate inference of non-canonical pairs, an important step towards accurate prediction of the full tertiary structure. Software to predict non-canonical base pairs and pairing probabilities is now provided as part of the RNAstructure software package.
Kupczewska-Dobecka, Małgorzata; Czerczak, Sławomir; Jakubowski, Marek; Maciaszek, Piotr; Janasik, Beata
2010-01-01
Based on the Estimation and Assessment of Substance Exposure (EASE) predictive model implemented into the European Union System for the Evaluation of Substances (EUSES 2.1), the exposure to three chosen organic solvents: toluene, ethyl acetate and acetone was estimated and compared with the results of measurements in workplaces. Prior to validation, the EASE model was pretested using three exposure scenarios. The scenarios differed in the decision tree of pattern of use. Five substances were chosen for the test: 1,4-dioxane, methyl tert-butyl ether, diethylamine, 1,1,1-trichloroethane and bisphenol A. After testing the EASE model, the next step was validation by estimating the exposure level and comparing it with the results of measurements in the workplace. We used the results of measurements of toluene, ethyl acetate and acetone concentrations in the work environment of a paint and lacquer factory, a shoe factory and a refinery. Three types of exposure scenarios, adaptable to the description of working conditions, were chosen to estimate inhalation exposure. Comparison of calculated exposure to toluene, ethyl acetate and acetone with measurements in workplaces showed that model predictions are comparable with the measurement results. Only in low concentration ranges were the measured concentrations higher than those predicted. EASE is a clear, consistent system, which can be successfully used as an additional component of inhalation exposure estimation. If measurement data are available, they should be preferred to values estimated from models. In addition to inhalation exposure estimation, the EASE model makes it possible not only to assess exposure-related risk but also to predict workers' dermal exposure.
Vertebral body spread in thoracolumbar burst fractures can predict posterior construct failure.
De Iure, Federico; Lofrese, Giorgio; De Bonis, Pasquale; Cultrera, Francesco; Cappuccio, Michele; Battisti, Sofia
2018-06-01
The load sharing classification (LSC) laid the foundations for a scoring system able to indicate which thoracolumbar fractures, after short-segment posterior-only fixations, would need longer instrumentations or additional anterior supports. We analyzed surgically treated thoracolumbar fractures, quantifying the vertebral body's fragment displacement with the aim of identifying a new parameter that could predict posterior-only construct failure. This is a retrospective cohort study from a single institution. One hundred twenty-one consecutive patients were surgically treated for thoracolumbar burst fractures. Grade of kyphosis correction (GKC) expressed the radiological outcome; the Oswestry Disability Index and a visual analog scale were also considered. One hundred twenty-one consecutive patients who underwent posterior fixation for unstable thoracolumbar burst fractures were retrospectively evaluated clinically and radiologically. Supplementary anterior fixations were performed in 34 cases with posterior instrumentation failure, determined on clinico-radiological evidence or symptomatic loss of kyphosis correction. The segmental kyphosis angle and GKC were calculated according to the Cobb method. The displacement of fracture fragments was obtained from the mean of the adjacent end plate areas subtracted from the area enclosed by the maximum contour of vertebral fragmentation. The "spread" was derived from the ratio between this subtraction and the mean of the adjacent end plate areas. Analysis of variance, Mann-Whitney, and receiver operating characteristic analyses were performed for statistical analysis. The authors report no conflict of interest concerning the materials or methods used in the present study or the findings specified in this paper. No funds or grants have been received for the present study. The spread proved to be a helpful quantitative measurement of vertebral body fragment displacement, easily reproducible with current computed tomography (CT) imaging technologies
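Under the definition quoted above, the spread is the relative enlargement of the fractured body's maximal CT contour over the mean of the adjacent end plate areas. A direct transcription of that ratio, with purely hypothetical areas in mm²:

```python
def vertebral_spread(max_contour_area, upper_endplate_area, lower_endplate_area):
    """'Spread' of a burst fracture: (maximal fragment contour area minus
    the mean adjacent end plate area) divided by that mean area."""
    mean_endplate = (upper_endplate_area + lower_endplate_area) / 2.0
    return (max_contour_area - mean_endplate) / mean_endplate

# Hypothetical CT measurements (mm^2), for illustration only.
s = vertebral_spread(max_contour_area=1500.0,
                     upper_endplate_area=1100.0,
                     lower_endplate_area=1140.0)
```

Here the fractured body's contour exceeds the mean end plate area by about 34%; a larger spread indicates greater fragment displacement, the quantity the study relates to posterior construct failure.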
Energy Technology Data Exchange (ETDEWEB)
Kropat, Georg, E-mail: georg.kropat@chuv.ch [Institute of Radiation Physics, Lausanne University Hospital, Rue du Grand-Pré 1, 1007 Lausanne (Switzerland); Bochud, Francois [Institute of Radiation Physics, Lausanne University Hospital, Rue du Grand-Pré 1, 1007 Lausanne (Switzerland); Jaboyedoff, Michel [Faculty of Geosciences and Environment, University of Lausanne, GEOPOLIS — 3793, 1015 Lausanne (Switzerland); Laedermann, Jean-Pascal [Institute of Radiation Physics, Lausanne University Hospital, Rue du Grand-Pré 1, 1007 Lausanne (Switzerland); Murith, Christophe; Palacios, Martha [Swiss Federal Office of Public Health, Schwarzenburgstrasse 165, 3003 Berne (Switzerland); Baechler, Sébastien [Institute of Radiation Physics, Lausanne University Hospital, Rue du Grand-Pré 1, 1007 Lausanne (Switzerland); Swiss Federal Office of Public Health, Schwarzenburgstrasse 165, 3003 Berne (Switzerland)
2015-02-01
Purpose: The aim of this study was to develop models based on kernel regression and probability estimation in order to predict and map indoor radon concentrations (IRC) in Switzerland by taking into account all of the following: architectural factors, spatial relationships between the measurements, as well as geological information. Methods: We looked at about 240 000 IRC measurements carried out in about 150 000 houses. As predictor variables we included: building type, foundation type, year of construction, detector type, geographical coordinates, altitude, temperature and lithology in the kernel estimation models. We developed predictive maps as well as a map of the local probability to exceed 300 Bq/m{sup 3}. Additionally, we developed a map of a confidence index in order to estimate the reliability of the probability map. Results: Our models were able to explain 28% of the variation in IRC data. All variables added information to the model. The model estimation revealed a bandwidth for each variable, making it possible to characterize the influence of each variable on the IRC estimation. Furthermore, we assessed the mapping characteristics of kernel estimation overall as well as by municipality. Overall, our model reproduces spatial IRC patterns which were already obtained earlier. On the municipal level, we could show that our model accounts well for IRC trends within municipal boundaries. Finally, we found that different building characteristics result in different IRC maps. Maps corresponding to detached houses with concrete foundations indicate systematically smaller IRC than maps corresponding to farms with earth foundations. Conclusions: IRC mapping based on kernel estimation is a powerful tool to predict and analyze IRC on a large scale as well as at a local level. This approach enables the development of tailor-made maps for different architectural elements and measurement conditions while accounting at the same time for geological information and spatial relations between IRC measurements
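A kernel-based exceedance map of this kind can be sketched as a locally weighted frequency of measurements above the 300 Bq/m³ reference level, with an effective sample size playing the role of a crude confidence index. Everything below (coordinates, bandwidth, lognormal radon values) is synthetic and only illustrates the mechanics, not the authors' multivariate model:

```python
import numpy as np

def local_exceedance_probability(coords, values, query, threshold=300.0, bw=5.0):
    """Kernel-weighted estimate of P(value > threshold) around a query
    location; n_eff is the effective number of contributing measurements,
    usable as a rough confidence index for the estimate."""
    d2 = np.sum((coords - query) ** 2, axis=1)
    w = np.exp(-0.5 * d2 / bw ** 2)
    p = np.sum(w * (values > threshold)) / np.sum(w)
    n_eff = np.sum(w) ** 2 / np.sum(w ** 2)
    return p, n_eff

rng = np.random.default_rng(2)
coords = rng.uniform(0, 100, size=(1000, 2))            # toy survey grid (km)
values = rng.lognormal(mean=4.5, sigma=0.8, size=1000)  # toy radon, Bq/m3
p, n_eff = local_exceedance_probability(coords, values,
                                        query=np.array([50.0, 50.0]))
```

Evaluating `p` and `n_eff` over a grid of query points yields exactly the pair of maps described in the abstract: a local probability-of-exceedance map and a companion reliability map.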
DEFF Research Database (Denmark)
Alvarez, David L.; Silva, Filipe Miguel Faria da; Mombello, Enrique Esteban
2018-01-01
This paper presents an algorithm to estimate and predict the temperature in overhead line conductors using an Extended Kalman Filter. The proposed algorithm takes both actual weather and the current intensity flowing along the conductor as control variables. The temperature of the conductor, mechanical tension
EEG Estimates of Cognitive Workload and Engagement Predict Math Problem Solving Outcomes
Beal, Carole R.; Galan, Federico Cirett
2012-01-01
In the present study, the authors focused on the use of electroencephalography (EEG) data about cognitive workload and sustained attention to predict math problem solving outcomes. EEG data were recorded as students solved a series of easy and difficult math problems. Sequences of attention and cognitive workload estimates derived from the EEG…
Dynamic state estimation and prediction for real-time control and operation
Nguyen, P.H.; Venayagamoorthy, G.K.; Kling, W.L.; Ribeiro, P.F.
2013-01-01
Real-time control and operation are crucial to deal with the increasing complexity of modern power systems. To effectively enable those functions, a Dynamic State Estimation (DSE) function is required to provide accurate network state variables at the right moment and predict their trends ahead. This
Estimation and prediction of convection-diffusion-reaction systems from point measurement
Vries, D.
2008-01-01
Different procedures with respect to estimation and prediction of systems characterized by convection, diffusion and reactions on the basis of point measurement data, have been studied. Two applications of these convection-diffusion-reaction (CDR) systems have been used as a case study of the
Lorente Prieto, Laura; Salanova Soria, Marisa; Martínez Martínez, Isabel M.; Vera Perea, María
2014-01-01
Traditionally, research on psychosocial factors in the construction industry has focused mainly on the negative aspects of health and on outcomes such as occupational accidents. This study, however, focuses on the specific relationships among the different positive psychosocial factors shared by construction workers that could be responsible for occupational well-being and outcomes such as performance. The main objective of this study was to test whether personal resources predict se...
Adaptive Model Predictive Vibration Control of a Cantilever Beam with Real-Time Parameter Estimation
Directory of Open Access Journals (Sweden)
Gergely Takács
2014-01-01
Full Text Available This paper presents an adaptive-predictive vibration control system using extended Kalman filtering for the joint estimation of system states and model parameters. A fixed-free cantilever beam equipped with piezoceramic actuators serves as a test platform to validate the proposed control strategy. Deflection readings taken at the end of the beam have been used to reconstruct the position and velocity information for a second-order state-space model. In addition to the states, the dynamic system has been augmented by the unknown model parameters: stiffness, damping constant, and a voltage/force conversion constant, characterizing the actuating effect of the piezoceramic transducers. The states and parameters of this augmented system have been estimated in real time, using the hybrid extended Kalman filter. The estimated model parameters have been applied to define the continuous state-space model of the vibrating system, which in turn is discretized for the predictive controller. The model predictive control algorithm generates state predictions and dual-mode quadratic cost prediction matrices based on the updated discrete state-space models. The resulting cost function is then minimized using quadratic programming to find the sequence of optimal but constrained control inputs. The proposed active vibration control system is implemented and evaluated experimentally to investigate the viability of the control method.
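The joint state/parameter estimation step can be illustrated on a minimal second-order oscillator with unknown stiffness: the state is augmented with the parameter and the extended Kalman filter linearizes the resulting bilinear dynamics at every step. This is a stripped-down sketch (a single unknown parameter, no damping/voltage-constant estimation and no predictive controller), and all plant numbers are invented:

```python
import numpy as np

# Cantilever-like toy plant: x'' = -(k/m) x - (c/m) x', stiffness k unknown.
# Augmented state s = [deflection x, velocity v, stiffness k].
dt, m_mass, c_damp, k_true = 1e-3, 0.1, 0.05, 40.0

def f(s):
    """One Euler step of the process model; k follows a random walk."""
    x, v, k = s
    a = -(k / m_mass) * x - (c_damp / m_mass) * v
    return np.array([x + dt * v, v + dt * a, k])

def F_jac(s):
    """Jacobian of f, linearizing the bilinear k*x term."""
    x, v, k = s
    return np.array([
        [1.0, dt, 0.0],
        [-dt * k / m_mass, 1.0 - dt * c_damp / m_mass, -dt * x / m_mass],
        [0.0, 0.0, 1.0],
    ])

H = np.array([[1.0, 0.0, 0.0]])       # only the deflection is measured
Q = np.diag([1e-10, 1e-10, 1e-6])     # process / parameter-walk noise
R = np.array([[1e-8]])                # measurement noise variance

rng = np.random.default_rng(3)
s_true = np.array([0.01, 0.0, k_true])
s_est = np.array([0.01, 0.0, 20.0])   # deliberately wrong initial stiffness
P = np.diag([1e-6, 1e-6, 400.0])

for _ in range(3000):
    s_true = f(s_true)
    z = s_true[0] + rng.normal(scale=1e-4)
    Fk = F_jac(s_est)                 # EKF predict
    s_est = f(s_est)
    P = Fk @ P @ Fk.T + Q
    S = H @ P @ H.T + R               # EKF update
    K = P @ H.T @ np.linalg.inv(S)
    s_est = s_est + (K @ (np.array([z]) - H @ s_est)).ravel()
    P = (np.eye(3) - K @ H) @ P
```

In the paper the same augmented-state idea additionally tracks the damping and the voltage/force conversion constant, and the re-identified model is re-discretized for the model predictive controller at each sampling instant.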
International Nuclear Information System (INIS)
Koch, J.; Peterson, S-R.
1995-10-01
Models used to simulate environmental transfer of radionuclides typically include many parameters, the values of which are uncertain. An estimation of the uncertainty associated with the predictions is therefore essential. Different methods to quantify the uncertainty in the predictions due to parameter uncertainties are reviewed. A statistical approach using random sampling techniques is recommended for complex models with many uncertain parameters. In this approach, the probability density function of the model output is obtained from multiple realizations of the model according to a multivariate random sample of the different input parameters. Sampling efficiency can be improved by using a stratified scheme (Latin Hypercube Sampling). Sample size can also be restricted when statistical tolerance limits need to be estimated. Methods to rank parameters according to their contribution to uncertainty in the model prediction are also reviewed. Recommended are measures of sensitivity, correlation and regression coefficients that can be calculated on values of input and output variables generated during the propagation of uncertainties through the model. A parameter uncertainty analysis is performed for the CHERPAC food chain model, which estimates subjective confidence limits and intervals on the predictions at a 95% confidence level. A sensitivity analysis is also carried out using partial rank correlation coefficients. This identifies and ranks the parameters which are the main contributors to uncertainty in the predictions, thereby guiding further research efforts. (author). 44 refs., 2 tabs., 4 figs
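The two recommended ingredients — stratified (Latin Hypercube) sampling of the inputs and rank-based sensitivity measures on the resulting input/output pairs — can be sketched as follows. The three "parameters" and the toy transfer model are invented, and a full analysis would use partial rank correlation rather than the plain Spearman coefficient shown here:

```python
import numpy as np

rng = np.random.default_rng(4)
n, k = 200, 3

# Latin Hypercube Sample: each parameter gets exactly one draw per
# stratum, so the marginals are evenly covered with only n runs.
strata = rng.permuted(np.tile(np.arange(n), (k, 1)), axis=1).T
u = (strata + rng.uniform(size=(n, k))) / n
lo = np.array([0.1, 1.0, 10.0])
hi = np.array([0.9, 5.0, 1000.0])
params = lo + u * (hi - lo)          # hypothetical transfer parameters

# Toy transfer model: the output is dominated by the third parameter.
output = params[:, 2] * (1 + 0.1 * params[:, 0]) + 0.5 * params[:, 1]

def rank_corr(a, b):
    """Spearman rank correlation, used here to rank parameter importance."""
    ra, rb = a.argsort().argsort(), b.argsort().argsort()
    return np.corrcoef(ra, rb)[0, 1]

importance = [rank_corr(params[:, j], output) for j in range(k)]
```

Running the model on the stratified sample gives the output distribution (from which tolerance limits can be read off), while `importance` reproduces the ranking step: the dominant third parameter would be flagged as the main target for further research.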
Energy Technology Data Exchange (ETDEWEB)
Koch, J. [Israel Atomic Energy Commission, Yavne (Israel). Soreq Nuclear Research Center; Peterson, S-R.
1995-10-01
Models used to simulate environmental transfer of radionuclides typically include many parameters, the values of which are uncertain. An estimation of the uncertainty associated with the predictions is therefore essential. Different methods to quantify the uncertainty in the predictions due to parameter uncertainties are reviewed. A statistical approach using random sampling techniques is recommended for complex models with many uncertain parameters. In this approach, the probability density function of the model output is obtained from multiple realizations of the model according to a multivariate random sample of the different input parameters. Sampling efficiency can be improved by using a stratified scheme (Latin Hypercube Sampling). Sample size can also be restricted when statistical tolerance limits need to be estimated. Methods to rank parameters according to their contribution to uncertainty in the model prediction are also reviewed. Recommended are measures of sensitivity, correlation and regression coefficients that can be calculated on values of input and output variables generated during the propagation of uncertainties through the model. A parameter uncertainty analysis is performed for the CHERPAC food chain model, which estimates subjective confidence limits and intervals on the predictions at a 95% confidence level. A sensitivity analysis is also carried out using partial rank correlation coefficients. This identifies and ranks the parameters which are the main contributors to uncertainty in the predictions, thereby guiding further research efforts. (author). 44 refs., 2 tabs., 4 figs.
Body composition in elderly people: effect of criterion estimates on predictive equations
International Nuclear Information System (INIS)
Baumgartner, R.N.; Heymsfield, S.B.; Lichtman, S.; Wang, J.; Pierson, R.N. Jr.
1991-01-01
The purposes of this study were to determine whether there are significant differences between two- and four-compartment model estimates of body composition; whether these differences are associated with the aqueous and mineral fractions of the fat-free mass (FFM); and whether the differences are retained in equations for predicting body composition from anthropometry and bioelectric resistance. Body composition was estimated in 98 men and women aged 65-94 y by using a four-compartment model based on hydrodensitometry, ³H₂O dilution, and dual-photon absorptiometry. These estimates were significantly different from those obtained by using Siri's two-compartment model. The differences were associated significantly (P less than 0.0001) with variation in the aqueous fraction of FFM. Equations for predicting body composition from anthropometry and resistance, when calibrated against two-compartment model estimates, retained these systematic errors. Equations predicting body composition in elderly people should be calibrated against estimates from multicompartment models that consider variability in FFM composition.
BIM – New rules of measurement ontology for construction cost estimation
Directory of Open Access Journals (Sweden)
F.H. Abanda
2017-04-01
Full Text Available For generations, the process of cost estimation has been manual, time-consuming and error-prone. Emerging Building Information Modelling (BIM) can exploit standard measurement methods to automate the cost estimation process and reduce inaccuracies. Structuring standard measurement methods in an ontological, machine-readable format for BIM software can greatly facilitate the reduction of inaccuracies in cost estimation. This study explores the development of an ontology based on the New Rules of Measurement (NRM) for cost estimation during the tendering stages. The methodology adopted is Methontology, one of the most widely used ontology engineering methodologies. To ensure the ontology is fit for purpose, cost estimation experts are employed to check the semantics, description logic-based reasoners are used to syntactically check the ontology, and a leading 4D BIM modelling software is used on a case study building to test/validate the proposed ontology.
Energy Technology Data Exchange (ETDEWEB)
Elektorowicz, M. [Concordia Univ., Building, Civil and Environmental Engineering, Montreal, Quebec (Canada)]. E-mail: mariae@civil.concordia.ca; Balanzinski, M. [Ecole Polytechnique de Montreal, Mechnical Engineering, Montreal, Quebec (Canada); Qasaimeh, A. [Concordia Univ., Building, Civil and Environmental Engineering, Montreal, Quebec (Canada)
2002-06-15
Current design approaches lack essential parameters necessary to evaluate the removal of metals contained in wastewater discharged to constructed wetlands. As a result, there is no guideline for an accurate design of constructed wetlands. An artificial intelligence approach was used to assess constructed wetland design. For this purpose, concentrations of bioavailable mercury were evaluated under conditions in which the initial concentrations of inorganic mercury, the chloride concentrations and the pH values changed. A fuzzy knowledge base was built from results obtained in previous greenhouse investigations of floating plants and from computations of mercury speciation. The Fuzzy Decision Support System (FDSS) used the knowledge base to find the parameters that yield the highest amount of mercury available to plants. The findings of this research can be applied to wetlands and to other natural processes in which the correlations between parameters are uncertain. (author)
International Nuclear Information System (INIS)
Elektorowicz, M.; Balanzinski, M.; Qasaimeh, A.
2002-01-01
Current design approaches lack essential parameters necessary to evaluate the removal of metals contained in wastewater discharged to constructed wetlands. As a result, there is no guideline for an accurate design of constructed wetlands. An artificial intelligence approach was used to assess constructed wetland design. For this purpose, concentrations of bioavailable mercury were evaluated under conditions in which the initial concentrations of inorganic mercury, the chloride concentrations and the pH values changed. A fuzzy knowledge base was built from results obtained in previous greenhouse investigations of floating plants and from computations of mercury speciation. The Fuzzy Decision Support System (FDSS) used the knowledge base to find the parameters that yield the highest amount of mercury available to plants. The findings of this research can be applied to wetlands and to other natural processes in which the correlations between parameters are uncertain. (author)
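A single Mamdani-style fuzzy inference step of the kind such a decision support system is built on might look as follows. The triangular membership functions and the pH-to-bioavailability rules are entirely hypothetical illustrations, not values from the study.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet at a and c, peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-12),
                                 (c - x) / (c - b + 1e-12)), 0.0)

out = np.linspace(0.0, 1.0, 101)   # universe: bioavailable mercury fraction

def infer(ph_value):
    """Mamdani min-max inference with centroid defuzzification.
    Rules (invented): low pH -> high availability, mid -> medium, high -> low."""
    low_ph  = tri(ph_value, 4.0, 5.0, 6.5)
    mid_ph  = tri(ph_value, 5.5, 6.75, 8.0)
    high_ph = tri(ph_value, 7.0, 8.0, 9.0)
    agg = np.maximum.reduce([
        np.minimum(low_ph,  tri(out, 0.5, 0.8, 1.0)),
        np.minimum(mid_ph,  tri(out, 0.2, 0.5, 0.8)),
        np.minimum(high_ph, tri(out, 0.0, 0.2, 0.5)),
    ])
    return float((agg * out).sum() / (agg.sum() + 1e-12))
```

In a full system the knowledge base would contain many such rules over several inputs (inorganic mercury, chloride, pH), tuned against the greenhouse data.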
DEFF Research Database (Denmark)
Casas, Isabel; Mao, Xiuping; Veiga, Helena
This study explores the predictive power of new estimators of the equity variance risk premium and conditional variance for future excess stock market returns, economic activity, and financial instability, both during and after the last global financial crisis. These estimators are obtained from...... time-varying coefficient models are the ones showing considerably higher predictive power for stock market returns and financial instability during the financial crisis, suggesting that an extreme volatility period requires models that can adapt quickly to turmoil........ Moreover, a comparison of the overall results reveals that the conditional variance gains predictive power during the global financial crisis period. Furthermore, both the variance risk premium and conditional variance are determined to be predictors of future financial instability, whereas conditional...
Comparisons of Crosswind Velocity Profile Estimates Used in Fast-Time Wake Vortex Prediction Models
Pruis, Mathew J.; Delisi, Donald P.; Ahmad, Nashat N.
2011-01-01
Five methods for estimating crosswind profiles used in fast-time wake vortex prediction models are compared in this study. Previous investigations have shown that temporal and spatial variations in the crosswind vertical profile have a large impact on the transport and time evolution of the trailing vortex pair. The most important crosswind parameters are the magnitude of the crosswind and the gradient in the crosswind shear. It is known that pulsed and continuous wave lidar measurements can provide good estimates of the wind profile in the vicinity of airports. In this study comparisons are made between estimates of the crosswind profiles from a priori information on the trajectory of the vortex pair as well as crosswind profiles derived from different sensors and a regional numerical weather prediction model.
Unthank, Michael D.; Newson, Jeremy K.; Williamson, Tanja N.; Nelson, Hugh L.
2012-01-01
Flow- and load-duration curves were constructed from the model outputs of the U.S. Geological Survey's Water Availability Tool for Environmental Resources (WATER) application for streams in Kentucky. The WATER application was designed to access multiple geospatial datasets to generate more than 60 years of statistically based streamflow data for Kentucky. The WATER application enables a user to graphically select a site on a stream and generate an estimated hydrograph and flow-duration curve for the watershed upstream of that point. The flow-duration curves are constructed by calculating the exceedance probability of the modeled daily streamflows. User-defined water-quality criteria and (or) sampling results can be loaded into the WATER application to construct load-duration curves that are based on the modeled streamflow results. Estimates of flow and streamflow statistics were derived from TOPographically Based Hydrological MODEL (TOPMODEL) simulations in the WATER application. A modified TOPMODEL code, SDP-TOPMODEL (Sinkhole Drainage Process-TOPMODEL), was used to simulate daily mean discharges over the period of record for 5 karst and 5 non-karst watersheds in Kentucky in order to verify the calibrated model. A statistical evaluation of the model's verification simulations shows that calibration criteria, established by previous WATER application reports, were met, thus ensuring the model's ability to provide acceptably accurate estimates of discharge at gaged and ungaged sites throughout Kentucky. Flow-duration curves are constructed in the WATER application by calculating the exceedance probability of the modeled daily flow values. The flow-duration intervals are expressed as a percentage, with zero corresponding to the highest stream discharge in the streamflow record. Load-duration curves are constructed by applying the loading equation (Load = Flow*Water-quality criterion) at each flow interval.
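The two constructions described above reduce to a sort and a per-interval multiplication. The sketch below uses synthetic daily discharges in place of the WATER/SDP-TOPMODEL output, and the water-quality criterion is an invented number with any unit conversion folded in.

```python
import numpy as np

# Synthetic daily mean discharges (stand-in for modeled streamflow), cfs
rng = np.random.default_rng(0)
flows = rng.lognormal(mean=3.0, sigma=1.0, size=365 * 5)

# Flow-duration curve: exceedance probability of each daily flow,
# with 0% corresponding to the highest discharge in the record
sorted_flows = np.sort(flows)[::-1]
exceedance = 100.0 * np.arange(1, len(sorted_flows) + 1) / (len(sorted_flows) + 1)

# Load-duration curve: Load = Flow * water-quality criterion at each interval
criterion = 0.25   # hypothetical concentration criterion (units folded in)
loads = sorted_flows * criterion
```

Plotting `loads` against `exceedance` gives the allowable-load curve against which measured loads can be compared at each flow regime.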
Li, Longhai; Feng, Cindy X; Qiu, Shi
2017-06-30
An important statistical task in disease mapping problems is to identify divergent regions with unusually high or low risk of disease. Leave-one-out cross-validatory (LOOCV) model assessment is the gold standard for estimating predictive p-values that can flag such divergent regions. However, actual LOOCV is time-consuming because one needs to rerun a Markov chain Monte Carlo analysis for each posterior distribution in which an observation is held out as a test case. This paper introduces a new method, called integrated importance sampling (iIS), for estimating LOOCV predictive p-values with only Markov chain samples drawn from the posterior based on a full data set. The key step in iIS is that we integrate away the latent variables associated with the test observation with respect to their conditional distribution without reference to the actual observation. By following the general theory for importance sampling, the formula used by iIS can be proved to be equivalent to the LOOCV predictive p-value. We compare iIS and three other existing methods in the literature with two disease mapping datasets. Our empirical results show that the predictive p-values estimated with iIS are almost identical to the predictive p-values estimated with actual LOOCV and outperform those given by the three existing methods, namely, the posterior predictive checking, the ordinary importance sampling, and the ghosting method by Marshall and Spiegelhalter (2003). Copyright © 2017 John Wiley & Sons, Ltd.
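The ordinary importance sampling baseline that iIS improves on is easy to sketch: with full-data posterior samples as the proposal, the weight for leaving observation i out is proportional to 1/p(y_i|θ). The toy normal model below is an illustration only, not the disease mapping setup of the paper (which additionally integrates out latent variables).

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(1)

def norm_sf(z):
    """Upper-tail probability of a standard normal."""
    return 0.5 * (1.0 - erf(z / sqrt(2.0)))

# Toy setting: y_i ~ Normal(theta, sigma); posterior samples of theta from the FULL data.
y = rng.normal(0.0, 1.0, size=50)
y[0] = 3.5                     # a deliberately divergent observation
sigma = 1.0
theta = rng.normal(y.mean(), sigma / np.sqrt(len(y)), size=4000)  # flat-prior posterior sketch

def is_loocv_pvalue(i):
    """Ordinary importance sampling estimate of the LOOCV predictive p-value
    P(y_rep >= y_i | y_{-i}), using only full-data posterior samples."""
    lik = np.exp(-0.5 * ((y[i] - theta) / sigma) ** 2)   # p(y_i | theta), up to a constant
    w = 1.0 / np.maximum(lik, 1e-300)                    # IS weights for leaving y_i out
    tail = np.array([norm_sf((y[i] - t) / sigma) for t in theta])
    return float((w * tail).sum() / w.sum())

p_divergent = is_loocv_pvalue(0)
p_typical = is_loocv_pvalue(1)
```

The divergent observation receives an extreme p-value while a typical one does not; the paper's point is that this ordinary estimator degrades with latent-variable models, which iIS fixes.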
Bai, Wenming; Yoshimura, Norio; Takayanagi, Masao; Che, Jingai; Horiuchi, Naomi; Ogiwara, Isao
2016-06-28
Nondestructive prediction of ingredient contents of farm products is useful to ship and sell the products with guaranteed qualities. Here, near-infrared spectroscopy is used to predict nondestructively total sugar, total organic acid, and total anthocyanin content in each blueberry. The technique is expected to enable the selection of only delicious blueberries from all harvested ones. The near-infrared absorption spectra of blueberries are measured with the diffuse reflectance mode at the positions not on the calyx. The ingredient contents of a blueberry determined by high-performance liquid chromatography are used to construct models to predict the ingredient contents from observed spectra. Partial least squares regression is used for the construction of the models. It is necessary to properly select the pretreatments for the observed spectra and the wavelength regions of the spectra used for analyses. Validations are necessary for the constructed models to confirm that the ingredient contents are predicted with practical accuracies. Here we present a protocol to construct and validate the models for nondestructive prediction of ingredient contents in blueberries by near-infrared spectroscopy.
Forkmann, Thomas; Teismann, Tobias; Stenzel, Jana-Sophie; Glaesmer, Heide; de Beurs, Derek
2018-01-25
Defeat and entrapment have been shown to be of central relevance to the development of different disorders. However, it remains unclear whether they represent two distinct constructs or one overall latent variable. One reason for the unclarity is that traditional factor analytic techniques have trouble estimating the right number of clusters in highly correlated data. In this study, we applied a novel approach based on network analysis that can deal with correlated data to establish whether defeat and entrapment are best thought of as one or multiple constructs. Exploratory graph analysis was used to estimate the number of dimensions within the 32 items that make up the defeat and entrapment scales in two samples: an online community sample of 480 participants, and a clinical sample of 147 inpatients admitted to a psychiatric hospital after a suicide attempt or severe suicidal crisis. Confirmatory factor analysis (CFA) was used to test whether the proposed structure fits the data. In both samples, bootstrapped exploratory graph analysis suggested that the defeat and entrapment items belonged to different dimensions. Within the entrapment items, two separate dimensions were detected, labelled internal and external entrapment. Defeat appeared to be multifaceted only in the online sample. When comparing the CFA outcomes of the one-, two-, three- and four-factor models, the one-factor model was preferred. Defeat and entrapment can be viewed as distinct, yet highly associated constructs. Thus, although replication is needed, results are in line with theories differentiating between these two constructs.
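The intuition behind network-based dimension detection can be illustrated with a crude stand-in for exploratory graph analysis: connect items whose correlation exceeds a threshold and count connected components. Real EGA estimates a regularized partial-correlation network and applies a community detection algorithm; the simulated item data and threshold below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulate two correlated-but-distinct constructs: items load on their own
# latent factor, and the two factors themselves correlate (r = 0.5).
n, items_per = 500, 6
f = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.5], [0.5, 1.0]], size=n)
defeat  = 0.8 * f[:, [0]] + rng.normal(0.0, 0.6, (n, items_per))
entrap  = 0.8 * f[:, [1]] + rng.normal(0.0, 0.6, (n, items_per))
X = np.hstack([defeat, entrap])

# Threshold the item correlation matrix and find connected components
R = np.corrcoef(X, rowvar=False)
adj = (np.abs(R) > 0.5) & ~np.eye(R.shape[0], dtype=bool)

def components(adj):
    m = adj.shape[0]
    seen, comps = set(), []
    for s in range(m):
        if s in seen:
            continue
        stack, comp = [s], set()
        while stack:
            v = stack.pop()
            if v in comp:
                continue
            comp.add(v)
            stack.extend(np.flatnonzero(adj[v]).tolist())
        seen |= comp
        comps.append(sorted(comp))
    return comps

dims = components(adj)
```

With within-construct correlations near 0.64 and cross-construct correlations near 0.32, the threshold separates the item pool into two data-driven dimensions, mirroring the defeat/entrapment split reported above.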
Directory of Open Access Journals (Sweden)
Cecilia de Almeida Marques-Toledo
2017-07-01
Full Text Available Infectious diseases are a leading threat to public health. Accurate and timely monitoring of disease risk and progress can reduce their impact. Mentioning a disease in social networks is correlated with physician visits by patients, and can be used to estimate disease activity. Dengue is the fastest growing mosquito-borne viral disease, with an estimated annual incidence of 390 million infections, of which 96 million manifest clinically. Dengue burden is likely to increase in the future owing to trends toward increased urbanization, scarce water supplies and, possibly, environmental change. The epidemiological dynamic of Dengue is complex and difficult to predict, partly due to costly and slow surveillance systems.In this study, we aimed to quantitatively assess the usefulness of data acquired by Twitter for the early detection and monitoring of Dengue epidemics, at both country and city level on a weekly basis. Here, we evaluated and demonstrated the potential of tweets modeling for Dengue estimation and forecast, in comparison with other available web-based data, Google Trends and Wikipedia access logs. Also, we studied the factors that might influence the goodness-of-fit of the model. We built a simple model based on tweets that was able to 'nowcast', i.e. estimate disease numbers in the same week, but also 'forecast' disease in future weeks. At the country level, tweets are strongly associated with Dengue cases, and can estimate present and future Dengue cases up to 8 weeks in advance. At city level, tweets are also useful for estimating Dengue activity. Our model can be applied successfully to small and less developed cities, suggesting a robust construction, even though it may be influenced by the incidence of the disease, the activity of Twitter locally, and social factors, including human development index and internet access.Tweets association with Dengue cases is valuable to assist traditional Dengue surveillance at real-time and low
Marques-Toledo, Cecilia de Almeida; Degener, Carolin Marlen; Vinhal, Livia; Coelho, Giovanini; Meira, Wagner; Codeço, Claudia Torres; Teixeira, Mauro Martins
2017-07-01
Infectious diseases are a leading threat to public health. Accurate and timely monitoring of disease risk and progress can reduce their impact. Mentioning a disease in social networks is correlated with physician visits by patients, and can be used to estimate disease activity. Dengue is the fastest growing mosquito-borne viral disease, with an estimated annual incidence of 390 million infections, of which 96 million manifest clinically. Dengue burden is likely to increase in the future owing to trends toward increased urbanization, scarce water supplies and, possibly, environmental change. The epidemiological dynamic of Dengue is complex and difficult to predict, partly due to costly and slow surveillance systems. In this study, we aimed to quantitatively assess the usefulness of data acquired by Twitter for the early detection and monitoring of Dengue epidemics, at both country and city level on a weekly basis. Here, we evaluated and demonstrated the potential of tweets modeling for Dengue estimation and forecast, in comparison with other available web-based data, Google Trends and Wikipedia access logs. Also, we studied the factors that might influence the goodness-of-fit of the model. We built a simple model based on tweets that was able to 'nowcast', i.e. estimate disease numbers in the same week, but also 'forecast' disease in future weeks. At the country level, tweets are strongly associated with Dengue cases, and can estimate present and future Dengue cases up to 8 weeks in advance. At city level, tweets are also useful for estimating Dengue activity. Our model can be applied successfully to small and less developed cities, suggesting a robust construction, even though it may be influenced by the incidence of the disease, the activity of Twitter locally, and social factors, including human development index and internet access. Tweets association with Dengue cases is valuable to assist traditional Dengue surveillance at real-time and low-cost. Tweets are
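The "simple model based on tweets" described above is, at its core, a regression of reported cases on lagged tweet counts. The sketch below uses synthetic weekly series with an invented two-week lead, not the Brazilian surveillance data of the study.

```python
import numpy as np

rng = np.random.default_rng(11)

# Synthetic weekly series: tweet counts lead reported Dengue cases by 2 weeks
weeks = 120
tweets = np.maximum(0.0, 50 + 30 * np.sin(np.arange(weeks) / 52 * 2 * np.pi)
                    + rng.normal(0, 5, weeks))
cases = np.roll(tweets, 2) * 3.0 + rng.normal(0, 10, weeks)
cases[:2] = cases[2]                      # patch the roll wrap-around at the start

# Regress cases[t] on tweets[t - lag]; held-out weeks act as a "forecast"
lag = 2
X = np.column_stack([np.ones(weeks - lag), tweets[:-lag]])
y = cases[lag:]
train = slice(0, 100)
beta, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
forecast = X[100:] @ beta
```

In the real application the lag is selected per city from cross-correlation between the two series, and performance is judged on out-of-sample weeks exactly as in the held-out block here.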
Directory of Open Access Journals (Sweden)
E. Yu. Antipenko
2010-03-01
Full Text Available In this article, a structural analysis of the elements of the efficiency indices of organizational and technological solutions for construction project scheduling is carried out, in order to prepare a high-quality basis for supporting the planning processes and the subsequent implementation of the projects.
An, Zhe; Rey, Daniel; Ye, Jingxin; Abarbanel, Henry D. I.
2017-01-01
The problem of forecasting the behavior of a complex dynamical system through analysis of observational time-series data becomes difficult when the system expresses chaotic behavior and the measurements are sparse in space and/or time. Although this situation is quite typical across many fields, including numerical weather prediction, the issue of whether the available observations are "sufficient" for generating successful forecasts is still not well understood. An analysis by Whartenby et al. (2013) found that in the context of the nonlinear shallow water equations on a β plane, standard nudging techniques require observing approximately 70 % of the full set of state variables. Here we examine the same system using a method introduced by Rey et al. (2014a), which generalizes standard nudging methods to utilize time delayed measurements. We show that in certain circumstances, it provides a sizable reduction in the number of observations required to construct accurate estimates and high-quality predictions. In particular, we find that this estimate of 70 % can be reduced to about 33 % using time delays, and even further if Lagrangian drifter locations are also used as measurements.
Energy Technology Data Exchange (ETDEWEB)
Singh, Harpreet; Arvind; Dorai, Kavita, E-mail: kavita@iisermohali.ac.in
2016-09-07
Estimation of quantum states is an important step in any quantum information processing experiment. A naive reconstruction of the density matrix from experimental measurements can often give density matrices which are not positive, and hence not physically acceptable. How do we ensure that at all stages of reconstruction, we keep the density matrix positive? Recently a method has been suggested based on maximum likelihood estimation, wherein the density matrix is guaranteed to be positive definite. We experimentally implement this protocol on an NMR quantum information processor. We discuss several examples and compare with the standard method of state estimation. - Highlights: • State estimation using maximum likelihood method was performed on an NMR quantum information processor. • Physically valid density matrices were obtained every time in contrast to standard quantum state tomography. • Density matrices of several different entangled and separable states were reconstructed for two and three qubits.
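The problem the abstract opens with, a naively reconstructed density matrix with negative eigenvalues, can be illustrated with a simple eigenvalue-clipping projection. Note this is a common quick fix and an illustration only, not the maximum-likelihood protocol the paper implements (which guarantees positivity by construction); the raw matrix below is invented.

```python
import numpy as np

def project_to_physical(rho_raw):
    """Make a raw tomography estimate physical: Hermitize, clip negative
    eigenvalues to zero, and renormalize to unit trace. A simple alternative
    to full maximum-likelihood reconstruction."""
    h = 0.5 * (rho_raw + rho_raw.conj().T)
    vals, vecs = np.linalg.eigh(h)
    vals = np.clip(vals, 0.0, None)
    rho = (vecs * vals) @ vecs.conj().T
    return rho / np.trace(rho).real

# Example: a noisy linear-inversion estimate of a single-qubit state.
# This matrix has unit trace but one negative eigenvalue, so it is unphysical.
rho_raw = np.array([[0.6, 0.55], [0.55, 0.4]], dtype=complex)
rho = project_to_physical(rho_raw)
```

Maximum-likelihood methods instead parametrize the density matrix so that positivity holds at every step of the optimization, which is why they are preferred when measurement noise is significant.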
Number Line Estimation Predicts Mathematical Skills: Difference in Grades 2 and 4.
Zhu, Meixia; Cai, Dan; Leung, Ada W S
2017-01-01
Studies have shown that number line estimation is important for learning. However, it is yet unclear if number line estimation predicts different mathematical skills in different grades after controlling for age, non-verbal cognitive ability, attention, and working memory. The purpose of this study was to examine the role of number line estimation on two mathematical skills (calculation fluency and math problem-solving) in grade 2 and grade 4. One hundred and forty-eight children from Shanghai, China were assessed on measures of number line estimation, non-verbal cognitive ability (non-verbal matrices), working memory (N-back), attention (expressive attention), and mathematical skills (calculation fluency and math problem-solving). The results showed that in grade 2, number line estimation correlated significantly with calculation fluency (r = -0.27, p < 0.05) and math problem-solving (r = -0.52, p < 0.01). In grade 4, number line estimation correlated significantly with math problem-solving (r = -0.38, p < 0.01), but not with calculation fluency. Regression analyses indicated that in grade 2, number line estimation accounted for unique variance in math problem-solving (12.0%) and calculation fluency (4.0%) after controlling for the effects of age, non-verbal cognitive ability, attention, and working memory. In grade 4, number line estimation accounted for unique variance in math problem-solving (9.0%) but not in calculation fluency. These findings suggested that number line estimation had an important role in math problem-solving for both grades 2 and 4 children and in calculation fluency for grade 2 children. We concluded that number line estimation could be a useful indicator for teachers to identify and improve children's mathematical skills.
International Nuclear Information System (INIS)
Riordan, B.J.
1986-03-01
This report develops quantitative labor productivity adjustment factors for the performance of regulatory impact analyses (RIAs). These factors will allow analysts to modify "new construction" labor costs to account for changes in labor productivity due to differing work environments at operating reactors and at reactors with construction in progress. The technique developed in this paper relies on the Energy Economic Data Base (EEDB) for baseline estimates of the direct labor hours and/or labor costs required to perform specific tasks in a new construction environment. The labor productivity cost factors adjust for constraining conditions such as working in a radiation environment, poor access, congestion and interference, etc., which typically occur on construction tasks at operating reactors and can occur under certain circumstances at reactors under construction. While the results do not portray all aspects of labor productivity, they encompass the major work place conditions generally discernible by the NRC analysts and assign values that appear to be reasonable within the context of industry experience. 18 refs.
International Nuclear Information System (INIS)
Trapero, Juan R.
2016-01-01
In order to integrate solar energy into the grid, it is important to predict the solar radiation accurately, since forecast errors can lead to significant costs. Recently, the growing number of statistical approaches that address this problem has yielded a prolific literature. In general terms, the main research discussion is centred on selecting the "best" forecasting technique in terms of accuracy. However, users of such forecasts require, apart from point forecasts, information about the variability of those forecasts in order to compute prediction intervals. In this work, we analyze kernel density estimation approaches, volatility forecasting models, and combinations of the two in order to improve prediction interval performance. The results show that an optimal combination, in terms of prediction interval statistical tests, can achieve the desired confidence level with a lower average interval width. Data from a facility located in Spain are used to illustrate our methodology. - Highlights: • This work explores uncertainty forecasting models to build prediction intervals. • Kernel density estimators, exponential smoothing and GARCH models are compared. • An optimal combination of methods provides the best results. • A good compromise between coverage and average interval width is shown.
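The kernel density estimation step can be sketched directly: fit a KDE to historical forecast errors, turn it into a CDF, and read off interval bounds around the point forecast. The skewed synthetic errors and the bandwidth rule below are illustrative assumptions, not the paper's data or tuned models.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic forecast errors of a solar radiation point forecast (skewed), W/m^2
errors = rng.gamma(2.0, 40.0, size=500) - 80.0

# Gaussian kernel density estimate with Silverman's rule-of-thumb bandwidth
h = 1.06 * errors.std() * len(errors) ** (-1 / 5)
grid = np.linspace(errors.min() - 3 * h, errors.max() + 3 * h, 2000)
dens = (np.exp(-0.5 * ((grid[:, None] - errors[None, :]) / h) ** 2).mean(axis=1)
        / (h * np.sqrt(2 * np.pi)))

# Turn the KDE into a CDF and read off a central 95% prediction interval
cdf = np.cumsum(dens)
cdf /= cdf[-1]
lo = grid[np.searchsorted(cdf, 0.025)]
hi = grid[np.searchsorted(cdf, 0.975)]
coverage = float(np.mean((errors >= lo) & (errors <= hi)))
```

Adding `[lo, hi]` to the point forecast yields the interval; the paper's contribution is combining such KDE intervals with volatility (GARCH-type) forecasts so the width adapts over time.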
International Nuclear Information System (INIS)
Tatebe, Yasumasa; Yoshida, Yoshitaka
2012-01-01
If an emergency event occurs in a nuclear power plant, appropriate action is selected and taken in accordance with the plant status, which changes from time to time, in order to prevent escalation and mitigate the event consequences. It is thus important to predict the event sequence and identify the plant behavior resulting from the action taken. In predicting the event sequence during a loss-of-coolant accident (LOCA), it is necessary to estimate break diameter. The conventional method for this estimation is time-consuming, since it involves multiple sensitivity analyses to determine the break diameter that is consistent with the plant behavior. To speed up the process of predicting the nuclear emergency event sequence, a new break diameter estimation technique that is applicable to pressurized water reactors was developed in this study. This technique enables the estimation of break diameter using the plant data sent from the safety parameter display system (SPDS), with focus on the depressurization rate in the reactor cooling system (RCS) during LOCA. The results of LOCA analysis, performed by varying the break diameter using the MAAP4 and RELAP5/MOD3.2 codes, confirmed that the RCS depressurization rate could be expressed by the log linear function of break diameter, except in the case of a small leak, in which RCS depressurization is affected by the coolant charging system and the high-pressure injection system. A correlation equation for break diameter estimation was developed from this function and tested for accuracy. Testing verified that the correlation equation could estimate break diameter accurately within an error of approximately 16%, even if the leak increases gradually, changing the plant status. (author)
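The correlation described above, depressurization rate as a log-linear function of break diameter, can be sketched as a fit on log-log axes that is then inverted to estimate the break size from observed plant data. The coefficients and "analysis code" results below are invented; the real correlation was built from MAAP4 and RELAP5/MOD3.2 runs.

```python
import numpy as np

rng = np.random.default_rng(9)

# Assumed relation from the abstract: log(rate) = a + b * log(d).
d_known = np.array([1.0, 2.0, 4.0, 8.0, 16.0])   # break diameter, cm (invented)
rate_known = 0.05 * d_known ** 1.8               # synthetic "analysis code" results

# Fit the correlation on log-log axes
b, a = np.polyfit(np.log(d_known), np.log(rate_known), 1)

def estimate_break_diameter(observed_rate):
    """Invert the fitted correlation to estimate break diameter from an
    observed RCS depressurization rate (e.g. from SPDS plant data)."""
    return float(np.exp((np.log(observed_rate) - a) / b))

d_est = estimate_break_diameter(0.05 * 6.0 ** 1.8)   # should recover d = 6 cm
```

The abstract's caveat carries over: such a correlation breaks down for small leaks, where charging and high-pressure injection flows mask the depressurization signal.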
Directory of Open Access Journals (Sweden)
Annegret Grimm
Full Text Available Reliable estimates of population size are fundamental in many ecological studies and biodiversity conservation. Selecting appropriate methods to estimate abundance is often very difficult, especially if data are scarce. Most studies concerning the reliability of different estimators used simulation data based on assumptions about capture variability that do not necessarily reflect conditions in natural populations. Here, we used data from an intensively studied closed population of the arboreal gecko Gehyra variegata to construct reference population sizes for assessing twelve different population size estimators in terms of bias, precision, accuracy, and their 95%-confidence intervals. Two of the reference populations reflect natural biological entities, whereas the other reference populations reflect artificial subsets of the population. Since individual heterogeneity was assumed, we tested modifications of the Lincoln-Petersen estimator, a set of models in programs MARK and CARE-2, and a truncated geometric distribution. Ranking of methods was similar across criteria. Models accounting for individual heterogeneity performed best in all assessment criteria. For populations from heterogeneous habitats without obvious covariates explaining individual heterogeneity, we recommend using the moment estimator or the interpolated jackknife estimator (both implemented in CAPTURE/MARK). If data for capture frequencies are substantial, we recommend the sample coverage or the estimating equation (both models implemented in CARE-2). Depending on the distribution of catchabilities, our proposed multiple Lincoln-Petersen and a truncated geometric distribution obtained comparably good results. The former usually resulted in a minimum population size and the latter can be recommended when there is a long tail of low capture probabilities. Models with covariates and mixture models performed poorly. Our approach identified suitable methods and extended options to
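The baseline the abstract's modified estimators build on is the Lincoln-Petersen two-sample estimator; Chapman's bias-corrected form with its approximate confidence interval is sketched below. The capture counts are hypothetical, not the gecko data.

```python
import numpy as np

def chapman_estimate(n1, n2, m2):
    """Chapman's bias-corrected Lincoln-Petersen estimator of closed population
    size (n1 marked in session 1, n2 caught in session 2, m2 recaptures),
    with an approximate normal 95% confidence interval."""
    n_hat = (n1 + 1) * (n2 + 1) / (m2 + 1) - 1
    var = ((n1 + 1) * (n2 + 1) * (n1 - m2) * (n2 - m2)
           / ((m2 + 1) ** 2 * (m2 + 2)))
    se = np.sqrt(var)
    return n_hat, (n_hat - 1.96 * se, n_hat + 1.96 * se)

# Hypothetical two-session capture data: 60 marked, 55 caught later, 20 recaptures
n_hat, (lo, hi) = chapman_estimate(60, 55, 20)
```

The estimators compared in the study relax this model's equal-catchability assumption, which is exactly where individual heterogeneity makes the plain two-sample estimate biased.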
Robinson, Cecil; Rose, Sage
2010-01-01
One leading version of hope theory posits hope to be a general disposition for goal-directed agency and pathways thinking. Domain-specific hope theory suggests that hope operates within context and measures of hope should reflect that context. This study examined three measures of hope to test the predictive, construct, and convergent validity…
2001-03-01
A real-time travel time prediction system (TIPS) was evaluated in a construction work zone. TIPS includes changeable message signs (CMSs) displaying the travel time and distance to the end of the work zone to motorists. The travel times displayed by ...
IN-CYLINDER MASS FLOW ESTIMATION AND MANIFOLD PRESSURE DYNAMICS FOR STATE PREDICTION IN SI ENGINES
Directory of Open Access Journals (Sweden)
Wojnar Sławomir
2014-06-01
Full Text Available The aim of this paper is to present a simple model of the intake manifold dynamics of a spark ignition (SI) engine and its possible application for estimation and control purposes. We focus on pressure dynamics, which may be regarded as the foundation for estimating future states and for designing model predictive control strategies suitable for maintaining the desired air fuel ratio (AFR). The flow rate measured at the inlet of the intake manifold and the in-cylinder flow estimation are considered as parts of the proposed model. In-cylinder flow estimation is crucial for engine control, where an accurate amount of aspired air forms the basis for computing the manipulated variables. The solutions presented here are based on the mean value engine model (MVEM) approach, using the speed-density method. The proposed in-cylinder flow estimation method is compared to measured values in an experimental setting, while one-step-ahead prediction is illustrated using simulation results.
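The speed-density in-cylinder flow estimate and the manifold pressure dynamics it feeds can be sketched in a few lines. The engine parameters (displacement, volumetric efficiency, manifold volume, throttle flow) are illustrative numbers, not values from the paper.

```python
# Speed-density estimate of in-cylinder air mass flow for an SI engine,
# as used in mean value engine models (MVEM). Numbers are illustrative.
R_AIR = 287.05   # J/(kg*K), specific gas constant of air

def cylinder_air_flow(p_man, t_man, rpm, v_disp, vol_eff):
    """Air mass flow into the cylinders [kg/s] from manifold pressure p_man [Pa],
    manifold temperature t_man [K], engine speed rpm, displacement v_disp [m^3],
    and volumetric efficiency vol_eff (four-stroke: one intake per 2 revolutions)."""
    rho = p_man / (R_AIR * t_man)            # air density in the manifold
    return vol_eff * rho * v_disp * rpm / (2 * 60.0)

m_dot = cylinder_air_flow(p_man=60e3, t_man=310.0, rpm=2500,
                          v_disp=1.6e-3, vol_eff=0.85)

# Manifold filling dynamics: dp/dt = (R*T/V_man) * (m_dot_throttle - m_dot_cyl),
# advanced here by one explicit Euler step for a one-step-ahead prediction.
v_man = 3e-3                 # manifold volume, m^3 (invented)
m_dot_throttle = 0.021       # measured flow at the manifold inlet, kg/s (invented)
dp_dt = R_AIR * 310.0 / v_man * (m_dot_throttle - m_dot)
p_next = 60e3 + dp_dt * 1e-3   # predicted pressure after 1 ms
```

Iterating the Euler step (or a better integrator) over the control horizon is what gives a model predictive AFR controller its future-state estimates.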
Number Line Estimation Predicts Mathematical Skills: Difference in Grades 2 and 4
Directory of Open Access Journals (Sweden)
Meixia Zhu
2017-09-01
Full Text Available Studies have shown that number line estimation is important for learning. However, it is yet unclear if number line estimation predicts different mathematical skills in different grades after controlling for age, non-verbal cognitive ability, attention, and working memory. The purpose of this study was to examine the role of number line estimation on two mathematical skills (calculation fluency and math problem-solving) in grade 2 and grade 4. One hundred and forty-eight children from Shanghai, China were assessed on measures of number line estimation, non-verbal cognitive ability (non-verbal matrices), working memory (N-back), attention (expressive attention), and mathematical skills (calculation fluency and math problem-solving). The results showed that in grade 2, number line estimation correlated significantly with calculation fluency (r = -0.27, p < 0.05) and math problem-solving (r = -0.52, p < 0.01). In grade 4, number line estimation correlated significantly with math problem-solving (r = -0.38, p < 0.01), but not with calculation fluency. Regression analyses indicated that in grade 2, number line estimation accounted for unique variance in math problem-solving (12.0%) and calculation fluency (4.0%) after controlling for the effects of age, non-verbal cognitive ability, attention, and working memory. In grade 4, number line estimation accounted for unique variance in math problem-solving (9.0%) but not in calculation fluency. These findings suggested that number line estimation had an important role in math problem-solving for both grades 2 and 4 children and in calculation fluency for grade 2 children. We concluded that number line estimation could be a useful indicator for teachers to identify and improve children’s mathematical skills.
Designing and Constructing Blood Flow Monitoring System to Predict Pressure Ulcers on Heel
Directory of Open Access Journals (Sweden)
Akbari H.
2014-06-01
Full Text Available Background: A pressure ulcer is a complication related to the need for the care and treatment of primarily disabled and elderly people. As the blood flow decreases under the loaded pressure, ulcers form and the tissue wastes away with the passage of time. Objective: The aim of this study was to construct a blood flow monitoring system for heel tissue under external pressure in order to evaluate the state of the tissue in the ulcer. Methods: To measure the blood flow changes, three infrared optical transmitters were used at distances of 5, 10, and 15 mm from the receiver. Blood flow changes in heels were assessed at pressures of 0, 30, and 60 mmHg. Time-domain features were extracted for analysis from the recorded signal with MATLAB software. Changes of the time features under different pressures were evaluated at the three distances by ANOVA in SPSS software. The level of significance was set at 0.05. Results: In this study, 15 subjects (both male and female) with a mean age of 54±7 participated. The results showed that the signal amplitude, power, and absolute signal decreased significantly when pressure on the tissue increased in different layers (p<0.05). Heart rate decreased significantly only at pressures above 30 mmHg (p=0.02). At pressures above 30 mmHg, in addition to a decrease in the time features, the pattern of the blood flow signal changed and no longer matched the no-load signal. Conclusion: By detecting the time features, we can reach an early diagnosis to prognosticate the degeneration of the tissue under pressure, and this can be recommended as a method to predict bedsores in the heel.
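The group comparison behind the reported p-values is a one-way ANOVA of a time-domain feature across the three loading pressures. A sketch with simulated amplitudes (the group means and spreads are assumptions, not the study's measurements):

```python
# One-way ANOVA of a time-domain feature (e.g. signal amplitude) across the
# three loading pressures; data are simulated, effect sizes are assumptions.
import numpy as np

rng = np.random.default_rng(1)
amp_0  = rng.normal(1.00, 0.10, 15)  # amplitude at  0 mmHg, 15 subjects
amp_30 = rng.normal(0.80, 0.10, 15)  # amplitude at 30 mmHg
amp_60 = rng.normal(0.55, 0.10, 15)  # amplitude at 60 mmHg

groups = [amp_0, amp_30, amp_60]
k = len(groups)
n_tot = sum(len(g) for g in groups)
grand = np.concatenate(groups).mean()

ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
F = (ss_between / (k - 1)) / (ss_within / (n_tot - k))
print(round(F, 1))  # a large F means the feature differs across pressures
```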
Meta-analysis of choice set generation effects on route choice model estimates and predictions
DEFF Research Database (Denmark)
Prato, Carlo Giacomo
2012-01-01
Large scale applications of behaviorally realistic transport models pose several challenges to transport modelers on both the demand and the supply sides. On the supply side, path-based solutions to the user assignment equilibrium problem help modelers in enhancing the route choice behavior modeling, but require them to generate choice sets by selecting a path generation technique and its parameters according to personal judgments. This paper proposes a methodology and an experimental setting to provide general indications about objective judgments for an effective route choice set generation. … are applied for model estimation and results are compared to the ‘true model estimates’. Last, predictions from the simulation of models estimated with objective choice sets are compared to the ‘postulated predicted routes’. A meta-analytical approach allows synthesizing the effect of judgments…
DEFF Research Database (Denmark)
Petersen, Lars Norbert; Jørgensen, John Bagterp; Rawlings, James B.
2015-01-01
In this paper, we develop an economically optimizing Nonlinear Model Predictive Controller (E-NMPC) for a complete spray drying plant with multiple stages. In the E-NMPC the initial state is estimated by an extended Kalman Filter (EKF) with noise covariances estimated by an autocovariance least squares method (ALS). We present a model for the spray drying plant and use this model for simulation as well as for prediction in the E-NMPC. The open-loop optimal control problem in the E-NMPC is solved using the single-shooting method combined with a quasi-Newton Sequential Quadratic Programming (SQP) algorithm and the adjoint method for computation of gradients. We evaluate the economic performance when unmeasured disturbances are present. By simulation, we demonstrate that the E-NMPC improves the profit of spray drying by 17% compared to conventional PI control.
International Nuclear Information System (INIS)
Haven, Kyle; Majda, Andrew; Abramov, Rafail
2005-01-01
Many situations in complex systems require quantitative estimates of the lack of information in one probability distribution relative to another. In short term climate and weather prediction, examples of these issues might involve the lack of information in the historical climate record compared with an ensemble prediction, or the lack of information in a particular Gaussian ensemble prediction strategy involving the first and second moments compared with the non-Gaussian ensemble itself. The relative entropy is a natural way to quantify the predictive utility in this information, and recently a systematic computationally feasible hierarchical framework has been developed. In practical systems with many degrees of freedom, computational overhead limits ensemble predictions to relatively small sample sizes. Here the notion of predictive utility, in a relative entropy framework, is extended to small random samples by the definition of a sample utility, a measure of the unlikeliness that a random sample was produced by a given prediction strategy. The sample utility is the minimum predictability, with a statistical level of confidence, which is implied by the data. Two practical algorithms for measuring such a sample utility are developed here. The first technique is based on the statistical method of null-hypothesis testing, while the second is based upon a central limit theorem for the relative entropy of moment-based probability densities. These techniques are tested on known probability densities with parameterized bimodality and skewness, and then applied to the Lorenz '96 model, a recently developed 'toy' climate model with chaotic dynamics mimicking the atmosphere. The results show a detection of non-Gaussian tendencies of prediction densities at small ensemble sizes with between 50 and 100 members, with a 95% confidence level
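The central quantity above, the relative entropy of one moment-based density relative to another, has a closed form in the Gaussian case. A minimal sketch (the sample moments below are illustrative, not from the paper):

```python
# Relative entropy (Kullback-Leibler divergence) between two moment-based
# Gaussian densities -- the quantity used to measure predictive utility.
import numpy as np

def kl_gaussian(mu_p, var_p, mu_q, var_q):
    """KL(p || q) for one-dimensional Gaussians p and q, in nats."""
    return 0.5 * (np.log(var_q / var_p)
                  + (var_p + (mu_p - mu_q) ** 2) / var_q - 1.0)

# climatology q = N(0, 1) vs. a prediction p estimated from a small sample
rng = np.random.default_rng(2)
sample = rng.normal(0.8, 0.6, size=75)  # e.g. a 75-member ensemble
utility = kl_gaussian(sample.mean(), sample.var(), 0.0, 1.0)
print(round(utility, 3))  # positive: the prediction adds information
```

A sample-utility test then asks whether this value exceeds what random samples drawn from the climatology itself would produce.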
Nonparametric bootstrap procedures for predictive inference based on recursive estimation schemes
Corradi, Valentina; Swanson, Norman R.
2005-01-01
Our objectives in this paper are twofold. First, we introduce block bootstrap techniques that are (first order) valid in recursive estimation frameworks. Thereafter, we present two examples where predictive accuracy tests are made operational using our new bootstrap procedures. In one application, we outline a consistent test for out-of-sample nonlinear Granger causality, and in the other we outline a test for selecting amongst multiple alternative forecasting models, all of which are possibl...
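The building block behind such bootstrap procedures is block resampling, which preserves local dependence in the series. A generic moving-block bootstrap sketch (the paper's recursive-scheme refinements are not reproduced here):

```python
# Minimal moving-block bootstrap for a time series: resample whole blocks
# so that short-range dependence survives the resampling.
import numpy as np

def moving_block_bootstrap(x, block_len, rng):
    """One bootstrap replicate built from randomly chosen contiguous blocks."""
    n = len(x)
    n_blocks = int(np.ceil(n / block_len))
    starts = rng.integers(0, n - block_len + 1, size=n_blocks)
    blocks = [x[s:s + block_len] for s in starts]
    return np.concatenate(blocks)[:n]

rng = np.random.default_rng(3)
series = rng.normal(size=200).cumsum() * 0.1 + rng.normal(size=200)
boot_means = [moving_block_bootstrap(series, 10, rng).mean() for _ in range(500)]
se = np.std(boot_means)  # bootstrap standard error of the sample mean
print(round(se, 3))
```

In a predictive accuracy test, the same resampling would be applied to forecast-loss differentials rather than to the raw series.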
Disturbance estimator based predictive current control of grid-connected inverters
Al-Khafaji, Ahmed Samawi Ghthwan
2013-01-01
ABSTRACT: The work presented in my thesis considers one of the modern discrete-time control approaches based on digital signal processing methods, that have been developed to improve the performance control of grid-connected three-phase inverters. Disturbance estimator based predictive current control of grid-connected inverters is proposed. For inverter modeling with respect to the design of current controllers, we choose the d-q synchronous reference frame to make it easier to understand an...
International Nuclear Information System (INIS)
Gershgorin, B.; Harlim, J.; Majda, A.J.
2010-01-01
The filtering and predictive skill for turbulent signals is often limited by the lack of information about the true dynamics of the system and by our inability to resolve the assumed dynamics with sufficiently high resolution using the current computing power. The standard approach is to use a simple yet rich family of constant parameters to account for model errors through parameterization. This approach can have significant skill by fitting the parameters to some statistical feature of the true signal; however in the context of real-time prediction, such a strategy performs poorly when intermittent transitions to instability occur. Alternatively, we need a set of dynamic parameters. One strategy for estimating parameters on the fly is a stochastic parameter estimation through partial observations of the true signal. In this paper, we extend our newly developed stochastic parameter estimation strategy, the Stochastic Parameterization Extended Kalman Filter (SPEKF), to filtering sparsely observed spatially extended turbulent systems which exhibit abrupt stability transition from time to time despite a stable average behavior. For our primary numerical example, we consider a turbulent system of externally forced barotropic Rossby waves with instability introduced through intermittent negative damping. We find high filtering skill of SPEKF applied to this toy model even in the case of very sparse observations (with only 15 out of the 105 grid points observed) and with unspecified external forcing and damping. Additive and multiplicative bias corrections are used to learn the unknown features of the true dynamics from observations. We also present a comprehensive study of predictive skill in the one-mode context including the robustness toward variation of stochastic parameters, imperfect initial conditions and finite ensemble effect. Furthermore, the proposed stochastic parameter estimation scheme applied to the same spatially extended Rossby wave system demonstrates
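The idea of estimating a parameter "on the fly" by filtering can be sketched with a scalar toy model: augment the state with the uncertain damping coefficient and run an extended Kalman filter. This simplified model is an assumption for illustration, not the paper's Rossby wave system:

```python
# Extended Kalman filter with the state augmented by an uncertain damping
# parameter, in the spirit of stochastic parameter estimation (toy model).
import numpy as np

rng = np.random.default_rng(4)
dt, steps = 0.1, 600
a_true = 0.8  # true (unknown) damping
x_true, obs = 1.0, []
for _ in range(steps):
    x_true = x_true - dt * a_true * x_true + 0.3 * rng.normal()
    obs.append(x_true + 0.05 * rng.normal())

# augmented state z = [x, a]; dynamics x_{k+1} = x - dt*a*x
z = np.array([0.0, 0.3])        # poor initial guess of the damping
P = np.diag([1.0, 1.0])
Q = np.diag([0.3 ** 2, 1e-5])   # tiny parameter noise keeps the estimate adaptive
R = 0.05 ** 2
H = np.array([[1.0, 0.0]])      # only x is observed
for y in obs:
    x, a = z
    z = np.array([x - dt * a * x, a])                    # forecast step
    F = np.array([[1.0 - dt * a, -dt * x], [0.0, 1.0]])  # Jacobian of dynamics
    P = F @ P @ F.T + Q
    S = float(H @ P @ H.T) + R
    K = (P @ H.T) / S                                    # Kalman gain, shape (2, 1)
    z = z + (K * (y - z[0])).ravel()
    P = (np.eye(2) - K @ H) @ P

print(round(z[1], 2))  # filtered damping estimate, near a_true
```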
Energy Technology Data Exchange (ETDEWEB)
De la Roza-Delgado, B.; Modroño, S.; Vicente, F.; Martínez-Fernández, A.; Soldado, A.
2015-07-01
A total of 220 faecal pig and poultry samples, collected from different experimental trials, were employed with the aim of demonstrating the suitability of Near Infrared Reflectance Spectroscopy (NIRS) technology for estimation of gross calorific value of faeces as output products in energy balance studies. NIR spectra from dried and ground faeces samples were analyzed using a Foss NIRSystem 6500 instrument, scanning over the wavelength range 400-2500 nm. Validation studies for the quantitative analytical models were carried out to assess method performance against reference values and to obtain appropriate accuracy and precision. The results for prediction of gross calorific value (GCV) showed high correlation coefficients between chemical analysis and NIRS predictions for the calibrations obtained for individual species, ranging from 0.92 to 0.97 for poultry and pig. For external validation, the ratio between the standard error of cross validation (SECV) and the standard error of prediction (SEP) varied between 0.73 and 0.86 for poultry and pig respectively, indicating sufficient precision of the calibrations. In addition, a global model to estimate GCV in both species was developed and externally validated. It showed correlation coefficients of 0.99 for calibration, 0.98 for cross-validation and 0.97 for external validation. Finally, relative uncertainty was calculated for the developed NIRS prediction models, with a final value of 1.3% when applying the individual NIRS species models and 1.5% for the NIRS global prediction. This study suggests that NIRS is a suitable and accurate method for the determination of GCV in faeces, decreasing cost and time and allowing convenient handling of unpleasant samples. (Author)
Mollica, Luca; Theret, Isabelle; Antoine, Mathias; Perron-Sierra, Françoise; Charton, Yves; Fourquez, Jean-Marie; Wierzbicki, Michel; Boutin, Jean A; Ferry, Gilles; Decherchi, Sergio; Bottegoni, Giovanni; Ducrot, Pierre; Cavalli, Andrea
2016-08-11
Ligand-target residence time is emerging as a key drug discovery parameter because it can reliably predict drug efficacy in vivo. Experimental approaches to binding and unbinding kinetics are nowadays available, but we still lack reliable computational tools for predicting kinetics and residence time. Most attempts have been based on brute-force molecular dynamics (MD) simulations, which are CPU-demanding and not yet particularly accurate. We recently reported a new scaled-MD-based protocol, which showed potential for residence time prediction in drug discovery. Here, we further challenged our procedure's predictive ability by applying our methodology to a series of glucokinase activators that could be useful for treating type 2 diabetes mellitus. We combined scaled MD with experimental kinetics measurements and X-ray crystallography, promptly checking the protocol's reliability by directly comparing computational predictions and experimental measures. The good agreement highlights the potential of our scaled-MD-based approach as an innovative method for computationally estimating and predicting drug residence times.
Prediction of Monte Carlo errors by a theory generalized to treat track-length estimators
International Nuclear Information System (INIS)
Booth, T.E.; Amster, H.J.
1978-01-01
Present theories for predicting expected Monte Carlo errors in neutron transport calculations apply to estimates of flux-weighted integrals sampled directly by scoring individual collisions. To treat track-length estimators, the recent theory of Amster and Djomehri is generalized to allow the score distribution functions to depend on the coordinates of two successive collisions. It has long been known that the expected track length in a region of phase space equals the expected flux integrated over that region, but that the expected statistical error of the Monte Carlo estimate of the track length is different from that of the flux integral obtained by sampling the sum of the reciprocals of the cross sections for all collisions in the region. These conclusions are shown to be implied by the generalized theory, which provides explicit equations for the expected values and errors of both types of estimators. Sampling expected contributions to the track-length estimator is also treated. Other general properties of the errors for both estimators are derived from the equations and physically interpreted. The actual values of these errors are then obtained and interpreted for a simple specific example
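The distinction between the two estimator types can be illustrated in the simplest possible setting, a purely absorbing slab, where both estimators are unbiased for the flux integral but have different variances. Geometry and cross-section values here are assumptions:

```python
# Monte Carlo comparison of the collision estimator (score 1/Sigma per
# collision) and the track-length estimator (score the path length) for the
# flux integral in a purely absorbing slab.
import numpy as np

rng = np.random.default_rng(5)
sigma, thickness, n = 1.0, 2.0, 200_000
# distance travelled before absorption, sampled from an exponential
d = rng.exponential(1.0 / sigma, size=n)

track = np.minimum(d, thickness)                      # path length inside the slab
collide = np.where(d < thickness, 1.0 / sigma, 0.0)   # 1/Sigma if absorbed inside

exact = (1.0 - np.exp(-sigma * thickness)) / sigma    # analytic flux integral
print(round(track.mean(), 3), round(collide.mean(), 3), round(exact, 3))
print(round(track.std(), 3), round(collide.std(), 3))  # the variances differ
```

Both sample means converge to the same analytic value, while the per-history standard deviations differ, which is exactly the situation the generalized error theory addresses.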
Directory of Open Access Journals (Sweden)
Evanthia E. Tripoliti
Full Text Available Heart failure is a serious condition with high prevalence (about 2% in the adult population in developed countries, and more than 8% in patients older than 75 years). About 3–5% of hospital admissions are linked with heart failure incidents. Heart failure is the most frequent cause of admission encountered by healthcare professionals in their clinical practice. The costs are very high, reaching up to 2% of the total health costs in the developed countries. Building an effective disease management strategy requires analysis of large amounts of data, early detection of the disease, assessment of the severity, and early prediction of adverse events. This will inhibit the progression of the disease, improve the quality of life of the patients, and reduce the associated medical costs. Toward this goal, machine learning techniques have been employed. The aim of this paper is to present the state of the art of the machine learning methodologies applied for the assessment of heart failure. More specifically, models predicting the presence, estimating the subtype, assessing the severity of heart failure, and predicting the presence of adverse events, such as destabilizations, re-hospitalizations, and mortality are presented. According to the authors' knowledge, it is the first time that such a comprehensive review, focusing on all aspects of the management of heart failure, is presented. Keywords: Heart failure, Diagnosis, Prediction, Severity estimation, Classification, Data mining
ARA and ARI imperfect repair models: Estimation, goodness-of-fit and reliability prediction
International Nuclear Information System (INIS)
Toledo, Maria Luíza Guerra de; Freitas, Marta A.; Colosimo, Enrico A.; Gilardoni, Gustavo L.
2015-01-01
An appropriate maintenance policy is essential to reduce expenses and risks related to equipment failures. A fundamental aspect to be considered when specifying such policies is the ability to predict the reliability of the systems under study, based on a well fitted model. In this paper, the classes of models Arithmetic Reduction of Age and Arithmetic Reduction of Intensity are explored. Likelihood functions for such models are derived, and a graphical method is proposed for model selection. A real data set involving failures in trucks used by a Brazilian mining company is analyzed considering models with different memories. Parameters, namely, shape and scale for the Power Law Process, and the efficiency of repair were estimated for the best fitted model. Estimation of model parameters allowed us to derive reliability estimators to predict the behavior of the failure process. These results are valuable information for the mining company and can be used to support decision making regarding preventive maintenance policy. - Highlights: • Likelihood functions for imperfect repair models are derived. • A goodness-of-fit technique is proposed as a tool for model selection. • Failures in trucks owned by a Brazilian mining company are modeled. • Estimation allowed deriving reliability predictors to forecast the future failure process of the trucks
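The ARA idea can be made concrete: under Arithmetic Reduction of Age with memory one (ARA1), each repair rewinds the system's virtual age by a fraction of the last failure time, lowering the Power Law Process intensity. The parameter values below are illustrative assumptions:

```python
# Power Law Process intensity under an ARA1 imperfect repair model:
# virtual age A(t) = t - rho * (time of last failure before t).
import math

def plp_intensity(t, alpha, beta):
    """Power Law Process intensity lambda(t) = (beta/alpha) * (t/alpha)^(beta-1)."""
    return (beta / alpha) * (t / alpha) ** (beta - 1)

def ara1_intensity(t, failures, rho, alpha, beta):
    """Intensity at time t given past failure times and repair efficiency rho."""
    last = max((f for f in failures if f <= t), default=0.0)
    virtual_age = t - rho * last  # rho=0: minimal repair; rho=1: perfect repair
    return plp_intensity(virtual_age, alpha, beta)

failures = [100.0, 180.0, 240.0]
after_repair = ara1_intensity(240.001, failures, rho=0.5, alpha=50.0, beta=2.0)
no_repair = plp_intensity(240.001, alpha=50.0, beta=2.0)
print(round(after_repair, 3), round(no_repair, 3))  # repair lowers the intensity
```

Reliability predictions then follow by integrating this intensity forward from the last observed failure.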
Application of an estimation model to predict future transients at US nuclear power plants
International Nuclear Information System (INIS)
Hallbert, B.P.; Blackman, H.S.
1987-01-01
A model developed by R.A. Fisher was applied to a set of Licensee Event Reports (LERs) summarizing transient initiating events at US commercial nuclear power plants. The empirical Bayes model was examined to study the feasibility of estimating the number of categories of transients which have not yet occurred at nuclear power plants. An examination of the model's predictive ability using an existing sample of data provided support for use of the model to estimate future transients. The estimate indicates that an approximately fifteen percent increase in the number of categories of transient initiating events may be expected during the period 1983-1993, assuming a stable process of transients. Limitations of the model and other possible applications are discussed. 10 refs., 1 fig., 3 tabs
Paramonov, V V
2004-01-01
The requirements for cell manufacturing precision and tuning in the construction of multi-cell accelerating structures come from the required accelerating field uniformity, which is based on beam dynamics demands. The standard deviation of the field distribution depends on the deviations of the accelerating and coupling mode frequencies, the stop-band width, and the coupling coefficient. These deviations can be determined from the 3D field distributions for the accelerating and coupling modes and the displacements of the cell surface. With modern software this can be done separately for every specified part of the cell surface. Finally, the cell surface displacements are defined from the deviations of the cell dimensions. This technique allows one both to identify the critical regions qualitatively and to optimize the tolerance definition quantitatively.
Energy Technology Data Exchange (ETDEWEB)
Luukkonen, A.; Korkealaakso, J.; Pitkaenen, P. [VTT Communities and Infrastructure, Espoo (Finland)
1997-11-01
Teollisuuden Voima Oy selected five investigation areas for preliminary site studies (1987-1992). The more detailed site investigation project, launched at the beginning of 1993 and presently supervised by Posiva Oy, is concentrated on three investigation areas. Romuvaara at Kuhmo is one of the present target areas, and the geochemical, structural and hydrological data used in this study are extracted from there. The aim of the study is to develop suitable methods for groundwater composition estimation based on a group of known hydrogeological variables. The input variables used are related to the host type of groundwater, hydrological conditions around the host location, mixing potentials between different types of groundwater, and minerals equilibrated with the groundwater. The output variables are electrical conductivity, Ca, Mg, Mn, Na, K, Fe, Cl, S, HS, SO{sub 4}, alkalinity, {sup 3}H, {sup 14}C, {sup 13}C, Al, Sr, F, Br and I concentrations, and pH of the groundwater. The methodology is to associate the known hydrogeological conditions (i.e. input variables) with the known water compositions (output variables), and to evaluate mathematical relations between these groups. Output estimations are done with two separate procedures: partial least squares regressions on the principal components of the input variables, and training of neural networks with input-output pairs. Coefficients of linear equations and trained networks are alternative methods for actual predictions. The quality of the output predictions is monitored with confidence limit estimations, evaluated from input variable covariances and output variances, and with charge balance calculations. Groundwater compositions in Romuvaara borehole KR10 are predicted at 10 metre intervals with both prediction methods. 46 refs.
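The first estimation procedure, regression on the principal components of the input variables, can be sketched in its simplest form as principal-component regression. The simulated factors and concentrations below are stand-ins for the real hydrogeological data:

```python
# Principal-component regression: project the inputs onto their leading
# principal components, regress the output on the scores, and map the
# coefficients back to the original variables. Data are simulated.
import numpy as np

rng = np.random.default_rng(6)
n = 120
factors = rng.normal(size=(n, 3))                # latent hydrogeological factors
loadings = rng.normal(size=(3, 8))
X = factors @ loadings + 0.1 * rng.normal(size=(n, 8))  # observed input variables
y = factors @ np.array([2.0, -1.0, 0.5]) + 0.2 * rng.normal(size=n)  # e.g. Cl

Xc, yc = X - X.mean(axis=0), y - y.mean()
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 3                                            # retain three components
scores = Xc @ Vt[:k].T                           # projection onto the PCs
gamma, *_ = np.linalg.lstsq(scores, yc, rcond=None)
beta = Vt[:k].T @ gamma                          # coefficients in original space

pred = Xc @ beta + y.mean()
r2 = 1.0 - ((y - pred) ** 2).sum() / ((y - y.mean()) ** 2).sum()
print(round(r2, 2))
```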
de Graaf-Ruizendaal, Willemijn A; de Bakker, Dinny H
2013-10-27
This study addresses the growing academic and policy interest in the appropriate provision of local healthcare services to meet the healthcare needs of local populations, to increase health status and decrease healthcare costs. However, for most local areas information on the demand for primary care and its supply is missing. The research goal is to examine the construction of a decision tool which enables healthcare planners to analyse local supply and demand in order to arrive at a better match. National sample-based medical record data of general practitioners (GPs) were used to predict the local demand for GP care based on local populations using a synthetic estimation technique. Next, the surplus or deficit in local GP supply was calculated using the national GP registry. Subsequently, a dynamic internet tool was built to present demand, supply and the confrontation between supply and demand regarding GP care for local areas and their surroundings in the Netherlands. Regression analysis showed a significant relationship between sociodemographic predictors of postcode areas and GP consultation time (F [14, 269,467] = 2,852.24; P 1,000 inhabitants in the Netherlands covering 97% of the total population. Confronting these estimated demand figures with the actual GP supply resulted in the average GP workload and the number of full-time equivalent (FTE) GPs too many or too few to cover the demand for GP care in local areas. An estimated shortage of one FTE GP or more was prevalent in about 19% of the postcode areas with >1,000 inhabitants if the surrounding postcode areas were taken into consideration. Underserved areas were mainly found in rural regions. The constructed decision tool is freely accessible on the Internet and can be used as a starting point in the discussion on primary care service provision in local communities, and it can make a considerable contribution to a primary care system which provides care when and where people need it.
Estimating greenhouse gas fluxes from constructed wetlands used for water quality improvement
Directory of Open Access Journals (Sweden)
Sukanda Chuersuwan
2014-06-01
Full Text Available Methane (CH4), nitrous oxide (N2O) and carbon dioxide (CO2) fluxes were evaluated from constructed wetlands (CWs) used to improve domestic wastewater quality. Experiments employed subsurface flow (SF) and free water surface flow (FWS) CWs planted with Cyperus spp. Results showed seasonal fluctuations of greenhouse gas fluxes. Greenhouse gas fluxes from SF-CWs and FWS-CWs were significantly different (p<0.05), while the pollutant removal efficiencies of the two CW types were not significantly different. The average CH4, N2O and CO2 fluxes from SF-CWs were 2.9±3.5, 1.0±1.7, and 15.2±12.3 mg/m2/hr, respectively, corresponding to an average global warming potential (GWP) of 392 mg CO2 equivalents/m2/hr. For FWS-CWs, the average CH4, N2O and CO2 fluxes were 5.9±4.8, 1.8±1.0, and 29.6±20.2 mg/m2/hr, respectively, giving an average GWP of 698 mg CO2 equivalents/m2/hr. Thus, FWS-CWs have a higher GWP than SF-CWs when used as a system for domestic wastewater improvement.
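The GWP figures are CO2-equivalent sums of the three gas fluxes. A sketch of the calculation using commonly cited 100-year IPCC weighting factors; since the results land near, but not exactly on, the reported 392 and 698 values, the factors the study actually used may have differed slightly:

```python
# CO2-equivalent flux: each gas flux weighted by its 100-year global
# warming potential. The weights are an assumption about the study's choice.
GWP_CH4, GWP_N2O = 25.0, 298.0

def co2_equivalent(ch4, n2o, co2):
    """Total flux in mg CO2-equivalents/m^2/hr from gas fluxes in mg/m^2/hr."""
    return ch4 * GWP_CH4 + n2o * GWP_N2O + co2

sf_cw = co2_equivalent(ch4=2.9, n2o=1.0, co2=15.2)   # subsurface flow wetland
fws_cw = co2_equivalent(ch4=5.9, n2o=1.8, co2=29.6)  # free water surface wetland
print(round(sf_cw), round(fws_cw))
```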
Role of Target Indicators in Determination of Prognostic Estimates for the Construction Industry
Directory of Open Access Journals (Sweden)
Zalunina Olha M.
2014-03-01
Full Text Available The article considers the interrelation of planning and forecasting in the construction industry. It justifies the need to determine key indicators under the specific conditions of forming a market model of economic development: inconstant volumes of production in industry, absence of the required volumes of investment for technical re-equipment of the branch, absence of sufficient volumes of domestic primary energy carriers, sharp growth of prices for imported energy carriers, absence of a modern system of tariffs for electric energy, and inefficiency of energy saving measures. The article proposes to form key indicators on the basis of a factor analysis, which envisages stage-by-stage transformation of the matrix of original data with the result of “compression” of information. This allows identification of the most significant properties that influence the economic state of the region while using a minimum of original information. The article forms key target indicators of the energy sector for the Poltava oblast and calculates, using the proposed method, prognostic values of key indicators of territorial functioning for the Poltava oblast.
International Nuclear Information System (INIS)
Sun Wenjuan; Xie Tianwu; Liu Qian; Jia Xianghong; Xu Feng
2013-01-01
With the rapid development of China's space industry, the importance of radiation protection is increasingly prominent. To provide relevant dose data, we first developed the Visible Chinese Human adult Female (VCH-F) phantom, and performed further modifications to generate the VCH-F Astronaut (VCH-FA) phantom, incorporating statistical body characteristics data from the first batch of Chinese female astronauts as well as reference organ mass data from the International Commission on Radiological Protection (ICRP; both within 1% relative error). Based on cryosection images, the original phantom was constructed via Non-Uniform Rational B-Spline (NURBS) boundary surfaces to strengthen the deformability for fitting the body parameters of Chinese female astronauts. The VCH-FA phantom was voxelized at a resolution of 2 x 2 x 4 mm3 for particle transport simulations of isotropic protons with energies of 5000-10 000 MeV in the Monte Carlo N-Particle eXtended (MCNPX) code. To investigate discrepancies caused by anatomical variations and other factors, the obtained doses were compared with corresponding values from other phantoms and with sex-averaged doses. Dose differences were observed among phantom calculation results, especially for effective dose with low-energy protons. Local skin thickness shifts the breast dose curve toward high energy, but has little impact on inner organs. Under a shielding layer, organ dose reduction is greater for skin than for other organs. The calculated skin dose per day closely approximates measurement data obtained in low-Earth orbit (LEO). (author)
A NEW METHOD FOR PREDICTING SURVIVAL AND ESTIMATING UNCERTAINTY IN TRAUMA PATIENTS
Directory of Open Access Journals (Sweden)
V. G. Schetinin
2017-01-01
Full Text Available The Trauma and Injury Severity Score (TRISS) is the current “gold” standard for screening a patient’s condition for purposes of predicting survival probability. More than 40 years of TRISS practice revealed a number of problems, particularly (1) unexplained fluctuation of predicted values caused by aggregation of screening tests, and (2) low accuracy of uncertainty interval estimations. We developed a new method, made available to practitioners as a web calculator, to reduce the negative effect of the factors given above. The method involves the Bayesian methodology of statistical inference which, being computationally expensive, in theory provides the most accurate predictions. We implemented and tested this approach on a data set including 571,148 patients registered in the US National Trauma Data Bank (NTDB) with 1–20 injuries. These patients were distributed over the following categories: (1) 174,647 with 1 injury, (2) 381,137 with 2–10 injuries, and (3) 15,364 with 11–20 injuries. Survival rates in each category were 0.977, 0.953, and 0.831, respectively. The proposed method improved prediction accuracy by 0.04%, 0.36%, and 3.64% (p-value <0.05) in categories 1, 2, and 3, respectively. Hosmer-Lemeshow statistics showed a significant improvement of the new model’s calibration. The uncertainty 2σ intervals were reduced from 0.628 to 0.569 for patients of the second category and from 1.227 to 0.930 for patients of the third category, both with p-value <0.005. The new method shows a statistically significant improvement (p-value <0.05) in accuracy of predicting survival and estimating the uncertainty intervals. The largest improvement was achieved for patients with 11–20 injuries. The method is available to practitioners as a web calculator http://www.traumacalc.org.
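TRISS-type models are logistic regressions: a linear score in the physiological and anatomical predictors is passed through the logistic function. A sketch with placeholder coefficients (these are neither the published TRISS coefficients nor the paper's Bayesian estimates):

```python
# TRISS-style survival model: logistic regression on trauma scores.
# The coefficients are hypothetical placeholders for illustration only.
import math

def survival_probability(rts, iss, age_indicator, b=(-1.25, 0.95, -0.08, -1.9)):
    """P(survival) from Revised Trauma Score, Injury Severity Score and age flag."""
    b0, b_rts, b_iss, b_age = b
    score = b0 + b_rts * rts + b_iss * iss + b_age * age_indicator
    return 1.0 / (1.0 + math.exp(-score))

# a physiologically stable patient (RTS near its maximum) with moderate injuries
p = survival_probability(rts=7.84, iss=9, age_indicator=0)
print(round(p, 3))
```

The Bayesian variant of the paper replaces the point coefficients with posterior distributions, which is what yields the uncertainty intervals around each predicted probability.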
Ummin, Okumura; Tian, Han; Zhu, Haiyu; Liu, Fuqiang
2018-03-01
Construction safety is always the first priority in the construction process. A common safety problem is instability of the template support. To address this problem, digital image measurement technology has been applied to build a real-time monitoring system that triggers an alarm if the deformation value exceeds the specified range, so that economic losses can be reduced to the lowest level.
Directory of Open Access Journals (Sweden)
Pouchly V.
2012-01-01
Full Text Available Sintering is a complex thermally activated process, so any prediction of sintering behaviour is very welcome, and not only for industrial purposes. The presented paper shows the possibility of densification prediction based on the concept of the Master Sintering Surface (MSS) for pressure-assisted Spark Plasma Sintering (SPS). User-friendly software for evaluation of the MSS is presented. The concept was used for densification prediction of alumina ceramics sintered by SPS.
Offset-Free Model Predictive Control of Open Water Channel Based on Moving Horizon Estimation
Ekin Aydin, Boran; Rutten, Martine
2016-04-01
Model predictive control (MPC) is a powerful control option which is increasingly used by operational water managers for managing water systems. The explicit consideration of constraints and multi-objective management are important features of MPC. However, due to water loss in open water systems by seepage, leakage and evaporation, a mismatch between the model and the real system will be created. This mismatch affects the performance of MPC and creates an offset from the reference set point of the water level. We present model predictive control based on moving horizon estimation (MHE-MPC) to achieve offset-free control of the water level for open water canals. MHE-MPC uses the past predictions of the model and the past measurements of the system to estimate unknown disturbances, and the offset in the controlled water level is systematically removed. We numerically tested MHE-MPC on an accurate hydrodynamic model of the laboratory canal UPC-PAC located in Barcelona. In addition, we also applied the well-known disturbance-modeling offset-free control scheme to the same test case. Simulation experiments on a single canal reach show that MHE-MPC outperforms the disturbance-modeling offset-free control scheme.
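Why disturbance estimation removes the offset can be seen in a minimal simulation: a canal reach loses water through an unknown constant seepage, the estimator attributes the model's prediction error to that disturbance, and the controller compensates. The model structure and all numbers are assumptions, far simpler than a hydrodynamic canal model:

```python
# Offset-free control by disturbance estimation: the prediction error of the
# (seepage-free) model is attributed to an unknown disturbance d_hat, which
# the controller then feeds forward. Toy mass-balance model, assumed numbers.
dt, target, seepage = 60.0, 2.0, 0.02  # time step [s], set point [m], loss [m^3/s]
area = 500.0                           # canal surface area [m^2]
level, d_hat = 1.5, 0.0                # water level state, disturbance estimate

for _ in range(200):
    # controller: feed forward the disturbance estimate plus level feedback
    inflow = d_hat + area * (target - level) * 0.002
    level_pred = level + dt / area * (inflow - d_hat)  # model prediction
    level = level + dt / area * (inflow - seepage)     # true plant update
    # estimator: blame the prediction error on the unknown disturbance
    d_hat += 0.5 * (level_pred - level) * area / dt

print(round(level, 3), round(d_hat, 3))  # level at set point, d_hat near seepage
```

Without the `d_hat` term the loop settles below the set point; with it, the steady-state offset disappears, which is the mechanism MHE-MPC exploits over a whole horizon of past data.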
A Bayesian approach for parameter estimation and prediction using a computationally intensive model
International Nuclear Information System (INIS)
Higdon, Dave; McDonnell, Jordan D; Schunck, Nicolas; Sarich, Jason; Wild, Stefan M
2015-01-01
Bayesian methods have been successful in quantifying uncertainty in physics-based problems in parameter estimation and prediction. In these cases, physical measurements y are modeled as the best fit of a physics-based model η(θ), where θ denotes the uncertain, best input setting. Hence the statistical model is of the form y=η(θ)+ϵ, where ϵ accounts for measurement, and possibly other, error sources. When nonlinearity is present in η(⋅), the resulting posterior distribution for the unknown parameters in the Bayesian formulation is typically complex and nonstandard, requiring computationally demanding approaches such as Markov chain Monte Carlo (MCMC) to produce multivariate draws from the posterior. Although generally applicable, MCMC requires thousands (or even millions) of evaluations of the physics model η(⋅). This requirement is problematic if the model takes hours or days to evaluate. To overcome this computational bottleneck, we present an approach adapted from Bayesian model calibration. This approach combines output from an ensemble of computational model runs with physical measurements, within a statistical formulation, to carry out inference. A key component of this approach is a statistical response surface, or emulator, estimated from the ensemble of model runs. We demonstrate this approach with a case study in estimating parameters for a density functional theory model, using experimental mass/binding energy measurements from a collection of atomic nuclei. We also demonstrate how this approach produces uncertainties in predictions for recent mass measurements obtained at Argonne National Laboratory. (paper)
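The emulator idea can be sketched end-to-end in a few lines: fit a cheap response surface to a small ensemble of "expensive" model runs, then evaluate the posterior of θ on a grid instead of calling the physics model inside MCMC. The quadratic stand-in model and all numbers are assumptions:

```python
# Emulator-based Bayesian calibration: replace the expensive model eta(theta)
# with a polynomial response surface fitted to a small ensemble of runs.
import numpy as np

def eta(theta):  # stand-in for the expensive physics model
    return theta ** 2 + 0.5 * theta

rng = np.random.default_rng(7)
theta_true, sigma = 1.2, 0.05
y_obs = eta(theta_true) + sigma * rng.normal()  # one noisy measurement

# ensemble of model runs -> polynomial emulator
design = np.linspace(0.0, 2.0, 8)   # 8 "expensive" evaluations
runs = eta(design)
emulator = np.polynomial.Polynomial.fit(design, runs, deg=2)

# posterior of theta on a grid, flat prior over [0, 2]
grid = np.linspace(0.0, 2.0, 2001)
log_like = -0.5 * ((y_obs - emulator(grid)) / sigma) ** 2
post = np.exp(log_like - log_like.max())
post /= post.sum()
theta_map = grid[post.argmax()]
print(round(theta_map, 2))  # posterior mode, near theta_true
```

Here every posterior evaluation costs a polynomial call rather than a model run, which is the bottleneck the paper's approach removes; the paper additionally propagates the emulator's own uncertainty, which this sketch omits.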
Francy, Donna S.; Brady, Amie M.G.; Carvin, Rebecca B.; Corsi, Steven R.; Fuller, Lori M.; Harrison, John H.; Hayhurst, Brett A.; Lant, Jeremiah; Nevers, Meredith B.; Terrio, Paul J.; Zimmerman, Tammy M.
2013-01-01
Predictive models have been used at beaches to improve the timeliness and accuracy of recreational water-quality assessments over the most common current approach to water-quality monitoring, which relies on culturing fecal-indicator bacteria such as Escherichia coli (E. coli). Beach-specific predictive models use environmental and water-quality variables that are easily and quickly measured as surrogates to estimate concentrations of fecal-indicator bacteria or to provide the probability that a State recreational water-quality standard will be exceeded. When predictive models are used for beach closure or advisory decisions, they are referred to as “nowcasts.” During the recreational seasons of 2010-12, the U.S. Geological Survey (USGS), in cooperation with 23 local and State agencies, worked to improve existing nowcasts at 4 beaches, validate predictive models at another 38 beaches, and collect data for predictive-model development at 7 beaches throughout the Great Lakes. This report summarizes efforts to collect data and develop predictive models by multiple agencies and to compile existing information on the beaches and beach-monitoring programs into one comprehensive report. Local agencies measured E. coli concentrations and variables expected to affect E. coli concentrations, such as wave height, turbidity, water temperature, and numbers of birds at the time of sampling. In addition to these field measurements, equipment was installed by the USGS or local agencies at or near several beaches to collect water-quality and meteorological measurements in near real time, including nearshore buoys, weather stations, and tributary staff gages and monitors. The USGS worked with local agencies to retrieve data from existing sources either manually or by use of tools designed specifically to compile and process data for predictive-model development. Predictive models were developed by use of linear regression and (or) partial least squares techniques for 42 beaches.
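A nowcast of the kind described can be sketched as a linear regression on log concentration plus a normal-error exceedance probability. The synthetic data, the two surrogate variables, and the 235 CFU/100 mL threshold (a commonly cited single-sample E. coli criterion) are assumptions of this sketch, not values from the report:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(2)

# Synthetic training data: log10 E. coli driven by turbidity and wave
# height, mimicking the surrogate variables used in beach nowcasts.
n = 200
turb = rng.uniform(0, 50, n)
wave = rng.uniform(0, 2, n)
log_ec = 1.0 + 0.02 * turb + 0.5 * wave + 0.3 * rng.standard_normal(n)

X = np.column_stack([np.ones(n), turb, wave])
beta, *_ = np.linalg.lstsq(X, log_ec, rcond=None)
resid_sd = float(np.std(log_ec - X @ beta))

def nowcast(turbidity, wave_height, standard_log10=np.log10(235)):
    """Predict log10 E. coli and the probability of exceeding the standard."""
    pred = float(beta @ np.array([1.0, turbidity, wave_height]))
    z = (standard_log10 - pred) / resid_sd
    p_exceed = 0.5 * (1 - erf(z / sqrt(2)))  # normal-error exceedance prob.
    return pred, p_exceed

pred_lo, p_lo = nowcast(5, 0.2)    # calm, clear day
pred_hi, p_hi = nowcast(45, 1.8)   # turbid, high-wave day
```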
PREVAIL: Predicting Recovery through Estimation and Visualization of Active and Incident Lesions.
Dworkin, Jordan D; Sweeney, Elizabeth M; Schindler, Matthew K; Chahin, Salim; Reich, Daniel S; Shinohara, Russell T
2016-01-01
The goal of this study was to develop a model that integrates imaging and clinical information observed at lesion incidence for predicting the recovery of white matter lesions in multiple sclerosis (MS) patients. Demographic, clinical, and magnetic resonance imaging (MRI) data were obtained from 60 subjects with MS as part of a natural history study at the National Institute of Neurological Disorders and Stroke. A total of 401 lesions met the inclusion criteria and were used in the study. Imaging features were extracted from the intensity-normalized T1-weighted (T1w) and T2-weighted sequences as well as magnetization transfer ratio (MTR) sequence acquired at lesion incidence. T1w and MTR signatures were also extracted from images acquired one-year post-incidence. Imaging features were integrated with clinical and demographic data observed at lesion incidence to create statistical prediction models for long-term damage within the lesion. The performance of the T1w and MTR predictions was assessed in two ways: first, the predictive accuracy was measured quantitatively using leave-one-lesion-out cross-validated (CV) mean-squared predictive error. Then, to assess the prediction performance from the perspective of expert clinicians, three board-certified MS clinicians were asked to individually score how similar the CV model-predicted one-year appearance was to the true one-year appearance for a random sample of 100 lesions. The cross-validated root-mean-square predictive error was 0.95 for normalized T1w and 0.064 for MTR, compared to the estimated measurement errors of 0.48 and 0.078 respectively. The three expert raters agreed that T1w and MTR predictions closely resembled the true one-year follow-up appearance of the lesions in both degree and pattern of recovery within lesions. This study demonstrates that by using only information from a single visit at incidence, we can predict how a new lesion will recover using relatively simple statistical techniques. The
Sun, Wenjuan; JIA, Xianghong; XIE, Tianwu; XU, Feng; LIU, Qian
2013-01-01
With the rapid development of China's space industry, the importance of radiation protection is increasingly prominent. To provide relevant dose data, we first developed the Visible Chinese Human adult Female (VCH-F) phantom, and performed further modifications to generate the VCH-F Astronaut (VCH-FA) phantom, incorporating statistical body characteristics data from the first batch of Chinese female astronauts as well as reference organ mass data from the International Commission on Radiological Protection (ICRP; both within 1% relative error). Based on cryosection images, the original phantom was constructed via Non-Uniform Rational B-Spline (NURBS) boundary surfaces to strengthen the deformability for fitting the body parameters of Chinese female astronauts. The VCH-FA phantom was voxelized at a resolution of 2 × 2 × 4 mm³ for radioactive particle transport simulations from isotropic protons with energies of 5000–10 000 MeV in Monte Carlo N-Particle eXtended (MCNPX) code. To investigate discrepancies caused by anatomical variations and other factors, the obtained doses were compared with corresponding values from other phantoms and sex-averaged doses. Dose differences were observed among phantom calculation results, especially for effective dose with low-energy protons. Local skin thickness shifts the breast dose curve toward high energy, but has little impact on inner organs. Under a shielding layer, organ dose reduction is greater for skin than for other organs. The calculated skin dose per day closely approximates measurement data obtained in low-Earth orbit (LEO). PMID:23135158
Assouline, Dan; Mohajeri, Nahid; Scartezzini, Jean-Louis
2017-04-01
Solar energy is clean, widely available, and arguably the most promising renewable energy resource. Taking full advantage of solar power, however, requires a deep understanding of its patterns and dependencies in space and time. Recent advances in Machine Learning have brought powerful algorithms to estimate the spatio-temporal variations of solar irradiance (the power per unit area received from the Sun, W/m2), using local weather and terrain information. Such algorithms include Deep Learning (e.g. Artificial Neural Networks) and kernel methods (e.g. Support Vector Machines). However, most of these methods have some disadvantages, as they: (i) are complex to tune, (ii) are mainly used as a black box, offering no interpretation of the variables' contributions, (iii) often do not provide uncertainty predictions (Assouline et al., 2016). To provide a reasonable solar mapping with good accuracy, these gaps would ideally need to be filled. We present here simple steps using one ensemble learning algorithm, namely Random Forests (Breiman, 2001), to (i) estimate monthly solar potential with good accuracy, (ii) provide information on the contribution of each feature in the estimation, and (iii) offer prediction intervals for each point estimate. We have selected Switzerland as an example. Using a Digital Elevation Model (DEM) along with monthly solar irradiance time series and weather data, we build monthly solar maps for Global Horizontal Irradiance (GHI), Diffuse Horizontal Irradiance (DHI), and Extraterrestrial Irradiance (EI). The weather data include monthly values for temperature, precipitation, sunshine duration, and cloud cover. In order to explain the impact of each feature on the solar irradiance of each point estimate, we extend the contribution method (Kuz'min et al., 2011) to a regression setting. Contribution maps for all features can then be computed for each solar map. This provides precious information on the spatial variation of the features impact all
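The ensemble-spread idea behind the per-point prediction intervals can be illustrated without a full Random Forest: below, a bootstrap ensemble of simple regressors stands in for the forest's individual trees, and the spread of member predictions yields an interval for each point estimate. The synthetic "irradiance" data and all coefficients are assumptions of the sketch, not the Swiss dataset:

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative data: "solar irradiance" as a function of elevation and
# sunshine duration (synthetic, not the datasets of the abstract).
n = 300
elev = rng.uniform(200, 3000, n)
sun = rng.uniform(50, 250, n)
ghi = 80 + 0.02 * elev + 0.5 * sun + 10 * rng.standard_normal(n)

X = np.column_stack([np.ones(n), elev, sun])

# Bootstrap ensemble (stands in for the per-tree predictions of a
# Random Forest): each member is fit on a resample of the data.
B = 200
betas = []
for _ in range(B):
    idx = rng.integers(0, n, n)
    b, *_ = np.linalg.lstsq(X[idx], ghi[idx], rcond=None)
    betas.append(b)
betas = np.array(betas)

x_new = np.array([1.0, 1500.0, 150.0])
member_preds = betas @ x_new
point = float(np.mean(member_preds))            # point estimate
lo, hi = np.percentile(member_preds, [5, 95])   # 90% interval from spread
```

In an actual Random Forest the same recipe applies with per-tree predictions (or quantile regression forests) in place of the bootstrap members.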
Lorente, Laura; Salanova, Marisa; Martínez, Isabel M; Vera, María
2014-06-01
Traditionally, research on psychosocial factors in the construction industry has focused mainly on the negative aspects of health and on outcomes such as occupational accidents. This study, however, focuses on the specific relationships among the different positive psychosocial factors shared by construction workers that could be responsible for occupational well-being and outcomes such as performance. The main objective of this study was to test whether personal resources predict self-rated job performance through job resources and work engagement. Following the predictions of Bandura's Social Cognitive Theory and the motivational process of the Job Demands-Resources Model, we expected that the relationship between personal resources and performance would be fully mediated by job resources and work engagement. The sample consists of 228 construction workers. Structural equation modelling supports the research model. Personal resources (i.e. self-efficacy, mental and emotional competences) play a predicting role in the perception of job resources (i.e. job control and supervisor social support), which in turn leads to work engagement and self-rated performance. This study emphasises the crucial role that personal resources play in determining how people perceive job resources by determining the levels of work engagement and, hence, their self-rated job performance. Theoretical and practical implications are discussed. © 2014 International Union of Psychological Science.
International Nuclear Information System (INIS)
Kim, Sung Hee; Hong, Suk Yoon; Song, Jee Hun; Joo, Won Ho
2012-01-01
Noise from construction equipment affects not only surrounding residents, but also the operators of the machines. Noise that affects drivers must be evaluated during the preliminary design stage. This paper suggests an interior noise analysis procedure for construction equipment cabins. The analysis procedure, which can be used in the preliminary design stage, was investigated for airborne and structure borne noise. The total interior noise of a cabin was predicted from the airborne noise analysis and structure-borne noise analysis. The analysis procedure consists of four steps: modeling, vibration analysis, acoustic analysis and total interior noise analysis. A mesh model of a cabin for numerical analysis was made at the modeling step. At the vibration analysis step, the mesh model was verified and modal analysis and frequency response analysis are performed. At the acoustic analysis step, the vibration results from the vibration analysis step were used as initial values for radiated noise analysis and noise reduction analysis. Finally, the total cabin interior noise was predicted using the acoustic results from the acoustic analysis step. Each step was applied to a cabin of a middle-sized excavator and verified by comparison with measured data. The cabin interior noise of a middle-sized wheel loader and a large-sized forklift were predicted using the analysis procedure of the four steps and were compared with measured data. The interior noise analysis procedure of construction equipment cabins is expected to be used during the preliminary design stage.
Energy Technology Data Exchange (ETDEWEB)
Kim, Sung Hee; Hong, Suk Yoon [Seoul National University, Seoul (Korea, Republic of); Song, Jee Hun [Chonnam National University, Gwangju (Korea, Republic of); Joo, Won Ho [Hyundai Heavy Industries Co. Ltd, Ulsan (Korea, Republic of)
2012-04-15
Noise from construction equipment affects not only surrounding residents, but also the operators of the machines. Noise that affects drivers must be evaluated during the preliminary design stage. This paper suggests an interior noise analysis procedure for construction equipment cabins. The analysis procedure, which can be used in the preliminary design stage, was investigated for airborne and structure borne noise. The total interior noise of a cabin was predicted from the airborne noise analysis and structure-borne noise analysis. The analysis procedure consists of four steps: modeling, vibration analysis, acoustic analysis and total interior noise analysis. A mesh model of a cabin for numerical analysis was made at the modeling step. At the vibration analysis step, the mesh model was verified and modal analysis and frequency response analysis are performed. At the acoustic analysis step, the vibration results from the vibration analysis step were used as initial values for radiated noise analysis and noise reduction analysis. Finally, the total cabin interior noise was predicted using the acoustic results from the acoustic analysis step. Each step was applied to a cabin of a middle-sized excavator and verified by comparison with measured data. The cabin interior noise of a middle-sized wheel loader and a large-sized forklift were predicted using the analysis procedure of the four steps and were compared with measured data. The interior noise analysis procedure of construction equipment cabins is expected to be used during the preliminary design stage.
Venkataraman, Kavita; Khoo, Chin Meng; Leow, Melvin K S; Khoo, Eric Y H; Isaac, Anburaj V; Zagorodnov, Vitali; Sadananthan, Suresh A; Velan, Sendhil S; Chong, Yap Seng; Gluckman, Peter; Lee, Jeannette; Salim, Agus; Tai, E Shyong; Lee, Yung Seng
2013-01-01
Accurate assessment of insulin sensitivity may better identify individuals at increased risk of cardio-metabolic diseases. To examine whether a combination of anthropometric, biochemical and imaging measures can better estimate insulin sensitivity index (ISI) and provide improved prediction of cardio-metabolic risk, in comparison to HOMA-IR. Healthy male volunteers (96 Chinese, 80 Malay, 77 Indian), 21 to 40 years, body mass index 18-30 kg/m². Predicted ISI (ISI-cal) was generated using 45 randomly selected Chinese through stepwise multiple linear regression, and validated in the rest using non-parametric correlation (Kendall's tau τ). In an independent longitudinal cohort, ISI-cal and HOMA-IR were compared for prediction of diabetes and cardiovascular disease (CVD), using ROC curves. The study was conducted in a university academic medical centre. ISI measured by hyperinsulinemic euglycemic glucose clamp, along with anthropometric measurements, biochemical assessment and imaging; incident diabetes and CVD. A combination of fasting insulin, serum triglycerides and waist-to-hip ratio (WHR) provided the best estimate of clamp-derived ISI (adjusted R² 0.58 versus 0.32 HOMA-IR). In an independent cohort, ROC areas under the curve were 0.77±0.02 ISI-cal versus 0.76±0.02 HOMA-IR (p>0.05) for incident diabetes, and 0.74±0.03 ISI-cal versus 0.61±0.03 HOMA-IR (p<0.001) for incident CVD. Triglycerides and WHR, combined with fasting insulin levels, provide a better estimate of the current insulin resistance state than HOMA-IR. This may be useful for estimating insulin sensitivity and cardio-metabolic risk in clinical and epidemiological settings.
Estimation of respiratory heat flows in prediction of heat strain among Taiwanese steel workers.
Chen, Wang-Yi; Juang, Yow-Jer; Hsieh, Jung-Yu; Tsai, Perng-Jy; Chen, Chen-Peng
2017-01-01
The International Organization for Standardization (ISO) 7933 standard provides evaluation of the required sweat rate (RSR) and predicted heat strain (PHS). This study examined and validated the approximations in these models estimating respiratory heat flows (RHFs) via convection (Cres) and evaporation (Eres) for application to Taiwanese foundry workers. The influence of changes in the RHF approximations on the validity of heat strain prediction in these models was also evaluated. The metabolic energy consumption and physiological quantities of these workers performing at different workloads under elevated wet-bulb globe temperature (WBGT; 30.3 ± 2.5 °C) were measured on-site and used in the calculation of RHFs and indices of heat strain. As the results show, the RSR model overestimated the Cres for Taiwanese workers by approximately 3% and underestimated the Eres by 8%. The Cres approximation in the PHS model closely predicted the convective RHF, while the Eres approximation over-predicted by 11%. Linear regressions provided a better fit in the Cres approximation (R² = 0.96) than in the Eres approximation (R² ≤ 0.85) in both models. The predicted Cres deviated increasingly from the observed value when the WBGT reached 35 °C. The deviations of the RHFs observed for the workers from those predicted using the RSR or PHS models did not significantly alter the heat loss via the skin, as the RHFs were in general less than 5% of the metabolic heat consumption. Validation of these approximations considering the thermo-physiological responses of local workers is necessary for application in scenarios of significant heat exposure.
Uncertainty estimates for predictions of the impact of breeder-reactor radionuclide releases
International Nuclear Information System (INIS)
Miller, C.W.; Little, C.A.
1982-01-01
This paper summarizes estimates, compiled in a larger report, of the uncertainty associated with models and parameters used to assess the impact on man of radionuclide releases to the environment from breeder reactor facilities. These estimates indicate that, for many sites, generic models and representative parameter values may reasonably be used to calculate doses from annual average radionuclide releases when these calculated doses are on the order of one-tenth or less of a relevant dose limit. For short-term, accidental releases, the uncertainty in the dose calculations may be much larger than an order of magnitude. As a result, it may be necessary to incorporate site-specific information into the dose calculation under such circumstances. However, even using site-specific information, inherent natural variability among human receptors and the uncertainties in the dose conversion factor will likely result in an overall uncertainty of greater than an order of magnitude for predictions of dose following short-term releases.
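Why multiplicative model uncertainties compound to "greater than an order of magnitude" can be shown with a short Monte Carlo propagation. The dose-chain factors and their geometric standard deviations below are assumptions of the sketch, not values from the report:

```python
import numpy as np

rng = np.random.default_rng(4)

# Multiplicative dose model: dose = release * dispersion * usage * DCF.
# Each factor carries a lognormal uncertainty with an assumed geometric
# standard deviation (GSD); the product's uncertainty compounds.
gsd = {"dispersion": 2.0, "usage": 1.5, "dcf": 2.0}
n = 100_000
samples = np.ones(n)
for g in gsd.values():
    samples *= rng.lognormal(mean=0.0, sigma=np.log(g), size=n)

# Ratio of the 97.5th to the 2.5th percentile of the normalized dose:
ratio_95 = float(np.percentile(samples, 97.5) / np.percentile(samples, 2.5))
```

With these GSDs the central 95% range spans well over a factor of ten, which is the qualitative point the abstract makes for short-term releases.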
Using Monte Carlo/Gaussian Based Small Area Estimates to Predict Where Medicaid Patients Reside.
Behrens, Jess J; Wen, Xuejin; Goel, Satyender; Zhou, Jing; Fu, Lina; Kho, Abel N
2016-01-01
Electronic Health Records (EHR) are rapidly becoming accepted as tools for planning and population health 1,2. With the national dialogue around Medicaid expansion 12, the role of EHR data has become even more important. For their potential to be fully realized and contribute to these discussions, techniques for creating accurate small area estimates are vital. As such, we examined the efficacy of developing small area estimates for Medicaid patients in two locations, Albuquerque and Chicago, by using a Monte Carlo/Gaussian technique that has worked in accurately locating registered voters in North Carolina 11. The Albuquerque data, which include patient addresses, will first be used to assess the accuracy of the methodology. Subsequently, they will be combined with the EHR data from Chicago to develop a regression that predicts Medicaid patients by US Block Group. We seek to create a tool that is effective in translating EHR data's potential for population health studies.
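The Monte Carlo/Gaussian allocation idea can be sketched directly: patient counts attached to a known point are scattered as Gaussian draws and tallied by grid cell. The clinic coordinates, the 2 km spread, and the 1 km cells are assumptions of the sketch, not the study's geography:

```python
import numpy as np

rng = np.random.default_rng(5)

# Given only a count of patients attributed to one location, draw
# synthetic residence points from a Gaussian around it and tally them
# by grid cell (standing in for Census block groups).
clinic = np.array([10.0, 10.0])   # km grid coordinates (illustrative)
n_patients = 5000
pts = clinic + 2.0 * rng.standard_normal((n_patients, 2))  # 2 km spread

# Tally by 1 km x 1 km cells
cells = np.floor(pts).astype(int)
uniq, counts = np.unique(cells, axis=0, return_counts=True)
est = dict(zip(map(tuple, uniq), counts))

total = int(sum(est.values()))
home_cell = int(est.get((10, 10), 0))  # count in the cell containing the clinic
```

The per-cell tallies are the small-area estimates; in practice the Gaussian spread would be calibrated against data with known addresses, as the Albuquerque step describes.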
DEFF Research Database (Denmark)
Kent, Peter; Boyle, Eleanor; Keating, Jennifer L
2017-01-01
OBJECTIVE: To quantify variability in the results of statistical analyses based on contingency tables and discuss the implications for the choice of sample size for studies that derive clinical prediction rules. STUDY DESIGN AND SETTING: An analysis of three pre-existing sets of large cohort data......, odds ratios and risk/prevalence ratios, for each sample size was calculated. RESULTS: There were very wide, and statistically significant, differences in estimates derived from contingency tables from the same dataset when calculated in sample sizes below 400 people, and typically this variability...... stabilized in samples of 400 to 600 people. Although estimates of prevalence also varied significantly in samples below 600 people, that relationship only explains a small component of the variability in these statistical parameters. CONCLUSION: To reduce sample-specific variability, contingency tables...
Sun, Ruochen; Yuan, Huiling; Liu, Xiaoli
2017-11-01
The heteroscedasticity treatment in residual error models directly impacts model calibration and prediction uncertainty estimation. This study compares three methods of dealing with heteroscedasticity: the explicit linear modeling (LM) method, the nonlinear modeling (NL) method using a hyperbolic tangent function, and the implicit Box-Cox transformation (BC). A combined approach (CA) that combines the advantages of the LM and BC methods is then proposed. In conjunction with the first-order autoregressive model and the skew exponential power (SEP) distribution, four residual error models are generated, namely LM-SEP, NL-SEP, BC-SEP and CA-SEP, and their corresponding likelihood functions are applied to the Variable Infiltration Capacity (VIC) hydrologic model over the Huaihe River basin, China. Results show that the LM-SEP yields the poorest streamflow predictions, with the widest uncertainty band and unrealistic negative flows. The NL and BC methods better handle the heteroscedasticity and hence improve the corresponding predictive performance, yet the negative flows cannot be avoided. The CA-SEP produces the most accurate predictions with the highest reliability and effectively avoids the negative flows, because the CA approach is capable of addressing the complicated heteroscedasticity over the study basin.
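What the implicit Box-Cox treatment buys can be shown in a few lines: multiplicative streamflow errors grow with flow in the raw residuals but become roughly constant after the transform (the lambda → 0 limit, i.e. log). The synthetic flow series and the 10% error level are assumptions of the sketch:

```python
import numpy as np

rng = np.random.default_rng(6)

# Heteroscedastic "streamflow" errors: observation noise grows with the
# simulated flow (multiplicative, ~10% here).
sim = np.linspace(5, 500, 2000)                          # simulated flow
obs = sim * np.exp(0.1 * rng.standard_normal(sim.size))  # observed flow

raw_resid = obs - sim
bc_resid = np.log(obs) - np.log(sim)   # Box-Cox with lambda -> 0 (log)

# Residual spread on the high-flow half vs the low-flow half:
half = sim.size // 2
raw_ratio = float(np.std(raw_resid[half:]) / np.std(raw_resid[:half]))
bc_ratio = float(np.std(bc_resid[half:]) / np.std(bc_resid[:half]))
```

The raw residual spread grows several-fold with flow, while the transformed residuals are near-homoscedastic, which is the property the SEP likelihoods rely on.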
Recursive prediction error methods for online estimation in nonlinear state-space models
Directory of Open Access Journals (Sweden)
Dag Ljungquist
1994-04-01
Full Text Available Several recursive algorithms for online, combined state and parameter estimation in nonlinear state-space models are discussed in this paper. Well-known algorithms such as the extended Kalman filter and alternative formulations of the recursive prediction error method are included, as well as a new method based on a line-search strategy. A comparison of the algorithms illustrates that they are very similar although the differences can be important for the online tracking capabilities and robustness. Simulation experiments on a simple nonlinear process show that the performance under certain conditions can be improved by including a line-search strategy.
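The combined state and parameter estimation that the abstract compares across algorithms can be illustrated with the extended Kalman filter on a toy model: the unknown parameter is appended to the state and the filter linearizes the bilinear term at each step. The scalar model, noise levels, and excitation are assumptions of the sketch:

```python
import numpy as np

rng = np.random.default_rng(7)

# x[k+1] = a*x[k] + u[k] + w[k],  y[k] = x[k] + v[k],  'a' unknown.
# Augmented state z = [x_hat, a_hat]; EKF estimates both jointly.
a_true, q, r = 0.9, 1e-4, 0.05
u = np.sin(0.1 * np.arange(400))          # known, persistently exciting input

z = np.array([0.0, 0.5])                  # poor initial parameter guess
P = np.diag([1.0, 1.0])
Q = np.diag([q, 1e-6])                    # tiny random walk on the parameter
x = 0.0
for k in range(400):
    # True system and noisy measurement
    x = a_true * x + u[k] + np.sqrt(q) * rng.standard_normal()
    y = x + np.sqrt(r) * rng.standard_normal()
    # EKF predict: f(z) = [a*x + u, a]; Jacobian linearizes a*x
    F = np.array([[z[1], z[0]], [0.0, 1.0]])
    z = np.array([z[1] * z[0] + u[k], z[1]])
    P = F @ P @ F.T + Q
    # EKF update with y = x + v
    H = np.array([1.0, 0.0])
    S = H @ P @ H + r
    K = P @ H / S
    z = z + K * (y - z[0])
    P = P - np.outer(K, H @ P)

a_hat = float(z[1])
```

With persistent excitation the parameter estimate converges toward the true value; the recursive prediction error methods of the paper differ mainly in how this gain sequence is computed.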
Directory of Open Access Journals (Sweden)
Kavita Venkataraman
Full Text Available CONTEXT: Accurate assessment of insulin sensitivity may better identify individuals at increased risk of cardio-metabolic diseases. OBJECTIVES: To examine whether a combination of anthropometric, biochemical and imaging measures can better estimate insulin sensitivity index (ISI) and provide improved prediction of cardio-metabolic risk, in comparison to HOMA-IR. DESIGN AND PARTICIPANTS: Healthy male volunteers (96 Chinese, 80 Malay, 77 Indian), 21 to 40 years, body mass index 18-30 kg/m². Predicted ISI (ISI-cal) was generated using 45 randomly selected Chinese through stepwise multiple linear regression, and validated in the rest using non-parametric correlation (Kendall's tau τ). In an independent longitudinal cohort, ISI-cal and HOMA-IR were compared for prediction of diabetes and cardiovascular disease (CVD), using ROC curves. SETTING: The study was conducted in a university academic medical centre. OUTCOME MEASURES: ISI measured by hyperinsulinemic euglycemic glucose clamp, along with anthropometric measurements, biochemical assessment and imaging; incident diabetes and CVD. RESULTS: A combination of fasting insulin, serum triglycerides and waist-to-hip ratio (WHR) provided the best estimate of clamp-derived ISI (adjusted R² 0.58 versus 0.32 HOMA-IR). In an independent cohort, ROC areas under the curve were 0.77±0.02 ISI-cal versus 0.76±0.02 HOMA-IR (p>0.05) for incident diabetes, and 0.74±0.03 ISI-cal versus 0.61±0.03 HOMA-IR (p<0.001) for incident CVD. ISI-cal also had greater sensitivity than defined metabolic syndrome in predicting CVD, with a four-fold increase in the risk of CVD independent of metabolic syndrome. CONCLUSIONS: Triglycerides and WHR, combined with fasting insulin levels, provide a better estimate of current insulin resistance state and improved identification of individuals with future risk of CVD, compared to HOMA-IR. This may be useful for estimating insulin sensitivity and cardio-metabolic risk in clinical and
Sahota, Tarjinder; Danhof, Meindert; Della Pasqua, Oscar
2015-06-01
Current toxicity protocols relate measures of systemic exposure (i.e. AUC, Cmax) obtained by non-compartmental analysis to observed toxicity. A complicating factor in this practice is the potential bias in the estimates defining safe drug exposure. Moreover, it prevents the assessment of variability. The objectives of the current investigation were therefore (a) to demonstrate the feasibility of applying nonlinear mixed effects modelling for the evaluation of toxicokinetics and (b) to assess the bias and accuracy in summary measures of systemic exposure for each method. Simulation scenarios that mimic toxicology protocols in rodents were evaluated. To ensure differences in pharmacokinetic properties are accounted for, hypothetical drugs with varying disposition properties were considered. Data analysis was performed using non-compartmental methods and nonlinear mixed effects modelling. Exposure levels were expressed as area under the concentration versus time curve (AUC), peak concentration (Cmax) and time above a predefined threshold (TAT). Results were then compared with the reference values to assess the bias and precision of the parameter estimates. Higher accuracy and precision were observed for the model-based estimates (i.e. AUC, Cmax and TAT), irrespective of group or treatment duration, as compared with non-compartmental analysis. Despite the focus of guidelines on establishing safety thresholds for the evaluation of new molecules in humans, current methods neglect uncertainty, lack of precision and bias in parameter estimates. The use of nonlinear mixed effects modelling for the analysis of toxicokinetics provides insight into variability and should be considered for predicting safe exposure in humans.
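The bias the abstract attributes to non-compartmental analysis is easy to demonstrate on a one-compartment example: the trapezoidal AUC over a sparse, truncated sampling schedule misses the terminal tail, whereas a model fit recovers the true AUC. The parameter values and sampling times are illustrative assumptions:

```python
import numpy as np

# One-compartment IV bolus: C(t) = C0 * exp(-k*t); true AUC(0..inf) = C0/k.
C0, k = 10.0, 0.2
t = np.array([0.0, 1.0, 2.0, 4.0, 8.0])   # sparse toxicokinetic sampling
C = C0 * np.exp(-k * t)

auc_true = C0 / k                          # 50.0

# Non-compartmental estimate: trapezoidal rule, truncated at t = 8 h
auc_nca = float(np.sum((C[1:] + C[:-1]) / 2 * np.diff(t)))

# "Model-based" estimate: log-linear fit recovers C0 and k, hence C0/k
slope, intercept = np.polyfit(t, np.log(C), 1)
auc_model = float(np.exp(intercept) / (-slope))
```

Even on noise-free data the truncated trapezoid is biased low; with sparse noisy data and between-animal variability the mixed-effects fit is the natural extension of the log-linear step shown here.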
Qiu, Sihang; Chen, Bin; Wang, Rongxiao; Zhu, Zhengqiu; Wang, Yuan; Qiu, Xiaogang
2018-04-01
Hazardous gas leak accidents pose a potential threat to human beings. Predicting atmospheric dispersion and estimating its source are becoming increasingly important in emergency management. Current dispersion prediction and source estimation models cannot satisfy the requirements of emergency management because they do not provide both high efficiency and accuracy at the same time. In this paper, we develop a fast and accurate dispersion prediction and source estimation method based on an artificial neural network (ANN), particle swarm optimization (PSO) and expectation maximization (EM). The novel method uses a large number of pre-determined scenarios to train the ANN for dispersion prediction, so that the ANN can predict the concentration distribution accurately and efficiently. PSO and EM are applied to estimate the source parameters, which effectively accelerates the process of convergence. The method is verified on the Indianapolis field study with an SF6 release source. The results demonstrate the effectiveness of the method.
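The PSO source-estimation step can be sketched with a minimal swarm recovering a source position and strength from sensor readings. The Gaussian-shaped forward model below stands in for the trained ANN dispersion model of the paper, and all geometry and swarm parameters are assumptions of the sketch:

```python
import numpy as np

rng = np.random.default_rng(8)

# Forward model (illustrative stand-in for the ANN): concentration at a
# sensor falls off as a Gaussian of distance from the source.
def forward(src_x, src_y, q, sx, sy):
    return q * np.exp(-((sx - src_x) ** 2 + (sy - src_y) ** 2) / (2 * 3.0 ** 2))

sensors = rng.uniform(0, 20, size=(25, 2))
true_p = np.array([12.0, 7.0, 5.0])        # x, y, strength
obs = forward(*true_p, sensors[:, 0], sensors[:, 1])

def cost(p):                               # misfit to the observations
    return float(np.sum((forward(p[0], p[1], p[2],
                                 sensors[:, 0], sensors[:, 1]) - obs) ** 2))

# Standard PSO: inertia + cognitive + social terms
n_part = 30
pos = rng.uniform([0, 0, 0.1], [20, 20, 10], size=(n_part, 3))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_cost = np.array([cost(p) for p in pos])
gbest = pbest[np.argmin(pbest_cost)].copy()

for _ in range(200):
    r1, r2 = rng.uniform(size=(2, n_part, 3))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    c = np.array([cost(p) for p in pos])
    better = c < pbest_cost
    pbest[better], pbest_cost[better] = pos[better], c[better]
    gbest = pbest[np.argmin(pbest_cost)].copy()

err = float(np.linalg.norm(gbest[:2] - true_p[:2]))  # position error
```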
Xing, Yafei; Macq, Benoit
2017-11-01
With the emergence of clinical prototypes and first patient acquisitions for proton therapy, research on prompt gamma imaging is aiming at making the most use of the prompt gamma data for in vivo estimation of any shift from the expected Bragg peak (BP). The simple problem of matching the measured prompt gamma profile of each pencil beam with a reference simulation from the treatment plan is actually made complex by uncertainties which can translate into distortions during treatment. We illustrate this challenge and demonstrate the robustness of a predictive linear model we proposed for BP shift estimation based on the principal component analysis (PCA) method. It considered the first clinical knife-edge slit camera design in use with anthropomorphic phantom CT data. In particular, 4115 error scenarios were simulated for the learning model. PCA was applied to the training input, randomly chosen from 500 scenarios, to eliminate data collinearities. A total variance of 99.95% was used for representing the testing input from 3615 scenarios. This model improved the BP shift estimation by an average of 63 ± 19% in a range between -2.5% and 86%, compared to our previous profile shift (PS) method. The robustness of our method was demonstrated by a comparative study conducted by applying Poisson noise 1000 times to each profile. 67% of the cases obtained by the learning model had lower prediction errors than those obtained by the PS method. The estimation accuracy ranged between 0.31 ± 0.22 mm and 1.84 ± 8.98 mm for the learning model, while for the PS method it ranged between 0.3 ± 0.25 mm and 20.71 ± 8.38 mm.
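The PCA + linear-model pipeline can be sketched on simulated profiles: shifted copies of a template are compressed by PCA, and a least-squares model maps the PCA scores to the shift. The sigmoid "falloff edge" template and noise model are illustrative assumptions, not simulated prompt gamma data:

```python
import numpy as np

rng = np.random.default_rng(9)

# Simulated "profiles": a falloff edge whose position s plays the role
# of the Bragg peak shift.
x = np.linspace(-30, 30, 120)
template = lambda s: 1.0 / (1 + np.exp((x - s) / 2.0))

shifts = rng.uniform(-10, 10, 400)
profiles = np.array([template(s) + 0.01 * rng.standard_normal(x.size)
                     for s in shifts])

# PCA via SVD on the centered training profiles
mean_p = profiles.mean(axis=0)
U, S, Vt = np.linalg.svd(profiles - mean_p, full_matrices=False)
n_comp = 5
scores = (profiles - mean_p) @ Vt[:n_comp].T

# Linear model: shift ~ PCA scores (fit by least squares)
A = np.column_stack([np.ones(len(shifts)), scores])
coef, *_ = np.linalg.lstsq(A, shifts, rcond=None)

# Predict the shift of an unseen noisy profile
s_test = 3.7
p_test = template(s_test) + 0.01 * rng.standard_normal(x.size)
sc = (p_test - mean_p) @ Vt[:n_comp].T
s_pred = float(coef[0] + sc @ coef[1:])
```

Projecting onto a few components removes collinearity between neighboring profile bins before the regression, which is the role PCA plays in the paper's learning model.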
Energy Technology Data Exchange (ETDEWEB)
Park, Seung Kook; Park, Hee Seong; Choi, Yoon Dong; Song, Chan Ho; Moon, Jei Kwon [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
2014-05-15
The KAERI (Korea Atomic Energy Research Institute) has developed the DECOMMIS (Decommissioning Information Management System) and applied it to the decommissioning projects of the KRR (Korea Research Reactor)-1 and 2 and the UCP (Uranium Conversion Plant), the first decommissioning projects in Korea. All information and data from the decommissioning activities are input, saved, output and managed in the DECOMMIS. The system consists of a web server and a database server. Users access it through a web page for input, processing and output, and their permissions for these activities can be modified once the initial system-wide data from the decommissioning activities have been created and stored. Using the experience data from DECOMMIS, cost estimates for the decommissioning planning of new facilities can be established within the basic frame of the WBS structures and their codes. In this paper, the prediction of cost estimates using the experience data stored in DECOMMIS was studied. For future decommissioning projects on nuclear facilities, a cost estimation approach is proposed that uses the experience data, namely the WBS codes, unit-work productivity factors and annual governmental unit labor costs obtained from the KRR and UCP decommissioning projects. The differences in WBS code sectors and facility characterization between newly targeted components and previously dismantled components are reduced with scaling factors. A study on establishing the scaling factors and on cost prediction is now being developed with algorithms based on the productivity data.
Ruch, Nicole; Joss, Franziska; Jimmy, Gerda; Melzer, Katarina; Hänggi, Johanna; Mäder, Urs
2013-11-01
The aim of this study was to compare the energy expenditure (EE) estimations of activity-specific prediction equations (ASPE) and of an artificial neural network (ANNEE) based on accelerometry with measured EE. Forty-three children (age: 9.8 ± 2.4 yr) performed eight different activities. They were equipped with one tri-axial accelerometer that collected data in 1-s epochs and a portable gas analyzer. The ASPE and the ANNEE were trained to estimate the EE by including accelerometry, age, gender, and weight of the participants. To provide the activity-specific information, a decision tree was trained to recognize the type of activity through accelerometer data. The ASPE were applied to the activity-type-specific data recognized by the tree (Tree-ASPE). The Tree-ASPE precisely estimated the EE of all activities except cycling [bias: -1.13 ± 1.33 metabolic equivalents (MET)] and walking (bias: 0.29 ± 0.64 MET; P < 0.05). The ANNEE overestimated the EE of walking (bias: 0.61 ± 0.72 MET) and underestimated the EE of cycling (bias: -0.90 ± 1.18 MET; P < 0.05). The biases for the remaining activities (Tree-ASPE: 0.08 ± 0.21 MET) and walking (ANNEE: 0.61 ± 0.72 MET, Tree-ASPE: 0.29 ± 0.64 MET) were significantly smaller in the Tree-ASPE than in the ANNEE (P < 0.05). The Tree-ASPE was more precise in estimating the EE than the ANNEE. The use of activity-type-specific information for subsequent EE prediction equations might be a promising approach for future studies.
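The two-stage Tree-ASPE idea can be illustrated in a few lines: a decision rule (standing in for the trained tree) recognizes the activity type from accelerometer features, and an activity-specific linear equation then estimates energy expenditure. The thresholds and coefficients below are invented for the sketch, not the study's trained values:

```python
# Stage 1: "decision tree" (a pair of hand-set threshold rules here)
# recognizing the activity type from accelerometer-derived features.
def classify(counts_per_s, step_freq):
    if counts_per_s < 50:
        return "sedentary"
    return "walking" if step_freq > 1.0 else "cycling"

# Stage 2: activity-specific prediction equations, MET = a + b * counts.
EQUATIONS = {
    "sedentary": (1.0, 0.002),
    "walking":   (2.0, 0.010),
    "cycling":   (3.0, 0.006),
}

def estimate_met(counts_per_s, step_freq):
    a, b = EQUATIONS[classify(counts_per_s, step_freq)]
    return a + b * counts_per_s

met_sit = estimate_met(10, 0.0)
met_walk = estimate_met(300, 1.8)
met_bike = estimate_met(400, 0.2)
```

Routing each epoch to its own equation is what lets the two-stage approach correct the activity-dependent biases that a single generic model shows.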
Ruilope, Luis M; Zanchetti, Alberto; Julius, Stevo; McInnes, Gordon T; Segura, Julian; Stolt, Pelle; Hua, Tsushung A; Weber, Michael A; Jamerson, Ken
2007-07-01
Reduced renal function is predictive of poor cardiovascular outcomes, but the predictive value of different measures of renal function is uncertain. We compared the value of estimated creatinine clearance, using the Cockcroft-Gault formula, with that of estimated glomerular filtration rate (GFR), using the Modification of Diet in Renal Disease (MDRD) formula, as predictors of cardiovascular outcome in 15 245 high-risk hypertensive participants in the Valsartan Antihypertensive Long-term Use Evaluation (VALUE) trial. For the primary end-point, the three secondary end-points and all-cause death, outcomes were compared for individuals with baseline estimated creatinine clearance and estimated GFR < 60 ml/min versus ≥ 60 ml/min using hazard ratios and 95% confidence intervals. Coronary heart disease, left ventricular hypertrophy, age, sex and treatment effects were included as covariates in the model. For each end-point considered, the risk in individuals with poor renal function at baseline was greater than in those with better renal function. Estimated creatinine clearance (Cockcroft-Gault) was significantly predictive only of all-cause death [hazard ratio = 1.223, 95% confidence interval (CI) = 1.076-1.390; P = 0.0021], whereas estimated GFR was predictive of all outcomes except stroke. Hazard ratios (95% CIs) for estimated GFR were: primary cardiac end-point, 1.497 (1.332-1.682), P < 0.0001; and all-cause death, 1.231 (1.098-1.380), P = 0.0004. These results indicate that estimated glomerular filtration rate calculated with the MDRD formula is more informative than estimated creatinine clearance (Cockcroft-Gault) in the prediction of cardiovascular outcomes.
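Both renal-function estimates compared in this record are closed-form formulas over routine clinical variables. A minimal sketch follows; the 175 constant is the IDMS-traceable revision of the four-variable MDRD equation, an assumption on our part since the abstract does not state which revision the trial used.

```python
def cockcroft_gault(age, weight_kg, scr_mg_dl, sex):
    """Estimated creatinine clearance (mL/min), Cockcroft-Gault formula."""
    crcl = (140 - age) * weight_kg / (72 * scr_mg_dl)
    return crcl * 0.85 if sex == "female" else crcl

def mdrd_egfr(age, scr_mg_dl, sex, black=False):
    """Estimated GFR (mL/min/1.73 m^2), 4-variable MDRD formula.
    The 175 constant is the IDMS-traceable revision (assumption)."""
    egfr = 175 * scr_mg_dl ** -1.154 * age ** -0.203
    if sex == "female":
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr

# e.g. a 40-year-old, 72 kg man with serum creatinine 1.0 mg/dL
crcl = cockcroft_gault(40, 72.0, 1.0, "male")   # -> 100.0 mL/min
gfr = mdrd_egfr(40, 1.0, "male")
```

The two estimates diverge systematically with age and body weight, which is one reason the trial could find different predictive value for the same patients.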
Directory of Open Access Journals (Sweden)
Ольга Юрьевна Заславская
2010-12-01
Full Text Available The article considers features of implementing a mechanism for constructing an optimal trajectory of computer science education on the basis of a dynamic integrated assessment of the level of knowledge.
2014-03-01
The GASCAP model was developed to provide a software tool for analysis of the life-cycle GHG emissions associated with the construction and maintenance of transportation projects. This phase of development included techniques for estimating emissions...
Liu, Zhao; Zhu, Yunhong; Wu, Chenxue
2016-01-01
Spatial-temporal k-anonymity has become a mainstream approach among techniques for protecting users' privacy in location-based services (LBS) applications, and has been applied to several variants such as LBS snapshot queries and continuous queries. Analyzing large-scale spatial-temporal anonymity sets may benefit several LBS applications. In this paper, we propose two location prediction methods based on transition probability matrices constructed from sequential rules for spatial-temporal k-anonymity datasets. First, we define single-step sequential rules mined from sequential spatial-temporal k-anonymity datasets generated from continuous LBS queries for multiple users. We then construct transition probability matrices from the mined single-step sequential rules, and normalize the transition probabilities in the matrices. Next, we regard the mobility model of an LBS requester as a stationary stochastic process and compute the n-step transition probability matrices by raising the normalized transition probability matrices to the power n. Furthermore, we propose two location prediction methods: rough prediction and accurate prediction. The former obtains the probabilities of arriving at target locations along simple paths that include only current locations, target locations and transition steps. By iteratively combining the probabilities for simple paths with n steps and the probabilities for detailed paths with n-1 steps, the latter method calculates transition probabilities for detailed paths with n steps from current locations to target locations. Finally, we conduct extensive experiments that verify the correctness and flexibility of our proposed algorithms. PMID:27508502
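The n-step computation this record describes (normalize a transition-count matrix row-wise, then raise it to the power n) can be sketched directly; the counts below are illustrative, not the paper's data.

```python
import numpy as np

# counts[i][j]: observed single-step transitions from location i to j,
# as would be mined from sequential k-anonymity rules
counts = np.array([[4., 1., 0.],
                   [2., 2., 2.],
                   [0., 1., 3.]])
P = counts / counts.sum(axis=1, keepdims=True)  # row-normalized: 1-step matrix
P3 = np.linalg.matrix_power(P, 3)               # 3-step transition probabilities
# P3[i][j] is the probability of reaching location j from location i in
# exactly 3 steps; each row still sums to 1 (stationary-process assumption).
```

This is the "rough prediction" ingredient; the paper's "accurate prediction" additionally combines simple-path and detailed-path probabilities iteratively.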
Better estimation of protein-DNA interaction parameters improve prediction of functional sites
Directory of Open Access Journals (Sweden)
O'Flanagan Ruadhan A
2008-12-01
Full Text Available Abstract Background Characterizing transcription factor binding motifs is a common bioinformatics task. For transcription factors with variable binding sites, many suboptimal binding sites are needed in the training dataset to obtain accurate estimates of the free energy penalties for deviating from the consensus DNA sequence. One procedure for doing so involves a modified SELEX (Systematic Evolution of Ligands by Exponential Enrichment) method designed to produce many such sequences. Results We analyzed low-stringency SELEX data for the E. coli Catabolic Activator Protein (CAP), and we show here that appropriate quantitative analysis improves our ability to predict in vitro affinity. To obtain the large number of sequences required for this analysis we used the SELEX SAGE protocol developed by Roulet et al. The sequences obtained were subjected to bioinformatic analysis. The resulting bioinformatic model characterizes the sequence specificity of the protein more accurately than specificities predicted from previous analyses using only the few known binding sites available in the literature. The consequences of this increase in accuracy for the prediction of in vivo binding sites (and especially functional ones) in the E. coli genome are also discussed. We measured the dissociation constants of several putative CAP binding sites by EMSA (Electrophoretic Mobility Shift Assay) and compared the affinities to the bioinformatics scores provided by methods such as the weight matrix method and QPMEME (Quadratic Programming Method of Energy Matrix Estimation) trained on known binding sites as well as on the new sites from the SELEX SAGE data. We also checked predicted genomic sites for conservation in the related species S. typhimurium. We found that bioinformatics scores based on SELEX SAGE data perform better both in predicting physical binding energies and in detecting functional sites. Conclusion We think that training binding site detection
Energy Technology Data Exchange (ETDEWEB)
Merkulov, N.E.; Lysenkov, P.P.
1981-01-01
Reliably predicted oil and gas reserves of the Chechen-Ingushetia are estimated by probability-calculation methods. Calculations were made separately for each oil-bearing lithologic-stratigraphic horizon. The computed results are summarized in a table, and graphs are constructed.
The Dangers of Estimating V˙O2max Using Linear, Nonexercise Prediction Models.
Nevill, Alan M; Cooke, Carlton B
2017-05-01
This study aimed to compare the accuracy and goodness of fit of two competing models (linear vs allometric) when estimating V˙O2max (mL·kg⁻¹·min⁻¹) using nonexercise prediction models. The two competing models were fitted to the V˙O2max (mL·kg⁻¹·min⁻¹) data taken from two previously published studies. Study 1 (the Allied Dunbar National Fitness Survey) recruited 1732 randomly selected healthy participants, 16 yr and older, from 30 English parliamentary constituencies. Estimates of V˙O2max were obtained using a progressive incremental test on a motorized treadmill. In study 2, maximal oxygen uptake was measured directly during a fatigue-limited treadmill test in older men (n = 152) and women (n = 146) 55 to 86 yr old. In both studies, the quality of fit associated with estimating V˙O2max (mL·kg⁻¹·min⁻¹) was superior using allometric rather than linear (additive) models based on all criteria (R², maximum log-likelihood, and Akaike information criteria). Results suggest that linear models will systematically overestimate V˙O2max for participants in their 20s and underestimate V˙O2max for participants in their 60s and older. The residuals saved from the linear models were neither normally distributed nor independent of the predicted values or age. This probably explains the absence of a key quadratic age term in the linear models, crucially identified using allometric models. Not only does the curvilinear age decline within an exponential function follow a more realistic age decline (the right-hand side of a bell-shaped curve), but the allometric models identified either a stature-to-body-mass ratio (study 1) or a fat-free-mass-to-body-mass ratio (study 2), both associated with leanness, when estimating V˙O2max. Adopting allometric models will provide more accurate predictions of V˙O2max (mL·kg⁻¹·min⁻¹) using plausible, biologically sound, and interpretable models.
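The linear-versus-allometric contrast can be illustrated on synthetic data. The model forms below (a quadratic age term inside an exponential, and a stature-to-body-mass ratio) follow the abstract's description; every number is invented for illustration, and the fit is plain least squares rather than the studies' exact procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
age = rng.uniform(20, 80, n)
mass = rng.uniform(55, 95, n)          # body mass, kg
stature = rng.uniform(1.5, 1.95, n)    # height, m
# synthetic "true" V'O2max: curvilinear age decline inside an exponential,
# scaled by a leanness-related stature-to-mass ratio (illustrative only)
vo2 = (stature / mass) ** 0.5 * np.exp(4.2 - 1e-4 * age**2) \
      * rng.lognormal(0.0, 0.05, n)

# Linear (additive) model: vo2 ~ a + b*age + c*mass
X_lin = np.column_stack([np.ones(n), age, mass])
beta_lin, *_ = np.linalg.lstsq(X_lin, vo2, rcond=None)
resid_lin = vo2 - X_lin @ beta_lin

# Allometric model, fitted on the log scale:
# log(vo2) ~ a + b*age + c*age^2 + k*log(stature/mass)
X_allo = np.column_stack([np.ones(n), age, age**2, np.log(stature / mass)])
beta_allo, *_ = np.linalg.lstsq(X_allo, np.log(vo2), rcond=None)
# beta_allo[2] recovers the negative quadratic age term the linear
# model cannot represent; beta_allo[3] recovers the ratio exponent.
```

On data generated this way the linear fit leaves age-dependent residual structure, mirroring the over/underestimation by age the abstract reports.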
A model for estimating pathogen variability in shellfish and predicting minimum depuration times.
McMenemy, Paul; Kleczkowski, Adam; Lees, David N; Lowther, James; Taylor, Nick
2018-01-01
Norovirus is a major cause of viral gastroenteritis, with shellfish consumption being identified as one potential norovirus entry point into the human population. Minimising shellfish norovirus levels is therefore important for both the consumer's protection and the shellfish industry's reputation. One method used to reduce microbiological risks in shellfish is depuration; however, this process also presents additional costs to industry. Providing a mechanism to estimate norovirus levels during depuration would therefore be useful to stakeholders. This paper presents a mathematical model of the depuration process and its impact on norovirus levels found in shellfish. Two fundamental stages of norovirus depuration are considered: (i) the initial distribution of norovirus loads within a shellfish population and (ii) the way in which the initial norovirus loads evolve during depuration. Realistic assumptions are made about the dynamics of norovirus during depuration, and mathematical descriptions of both stages are derived and combined into a single model. Parameters to describe the depuration effect and norovirus load values are derived from existing norovirus data obtained from U.K. harvest sites. However, obtaining population estimates of norovirus variability is time-consuming and expensive; this model addresses the issue by assuming a 'worst case scenario' for variability of pathogens, which is independent of mean pathogen levels. The model is then used to predict minimum depuration times required to achieve norovirus levels which fall within possible risk management levels, as well as predictions of minimum depuration times for other water-borne pathogens found in shellfish. Times for Escherichia coli predicted by the model all fall within the minimum 42 hours required for class B harvest sites, whereas minimum depuration times for norovirus and FRNA+ bacteriophage are substantially longer. Thus this study provides relevant information and tools to assist
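Under the simplest first-order-decay reading of the model class this record describes, the minimum depuration time is a one-line calculation. The decay rate and load figures below are invented for illustration, not the paper's fitted parameters.

```python
import math

def min_depuration_time(initial_load, target_load, decay_rate_per_hour):
    """Hours of depuration needed for a pathogen load to fall to the
    target level, assuming first-order (exponential) decay:
    N(t) = N0 * exp(-k*t), solved for t. Illustrative sketch only."""
    if initial_load <= target_load:
        return 0.0
    return math.log(initial_load / target_load) / decay_rate_per_hour

# e.g. a load 100x above the risk-management level, decaying at 5%/hour
hours = min_depuration_time(100.0, 1.0, 0.05)   # ~92 hours
```

A slow decay rate, as the abstract reports for norovirus and FRNA+ bacteriophage relative to E. coli, pushes the required time well past the 42-hour class B minimum.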
Directory of Open Access Journals (Sweden)
Sineeva Natalya
2018-01-01
Full Text Available The relevance of our study stems from the increasing man-made impact on water bodies and the associated land resources within urban areas and, as a consequence, from changes in the morphology and dynamics of river channels. This creates a need to predict the development of erosion-accumulation processes, especially within built-up urban areas. The purpose of the study is to develop a program for assessing erosion-accumulation processes at a water body, the mouth area of the Inia River, in a zone of prospective high-rise construction of a residential microdistrict, where the floodplain-channel complex is expected to develop intensively. Results of the study: comparing the full-scale measured water flow velocities with those calculated from the model revealed only a slight discrepancy, which allows us to say that the numerical model reliably describes the physical processes developing in the river. The calculations carried out to assess the direction and intensity of channel re-formation led to the conclusion that erosion processes insignificantly predominate over accumulative ones on the undeveloped part of the Inia River (the processes are noticeable only in certain areas, by the coasts and the island). Importance of the study: the evaluation of erosion-accumulation processes can be used in design decisions for the future high-rise construction of this territory, which will increase their economic efficiency.
Sineeva, Natalya
2018-03-01
The relevance of our study stems from the increasing man-made impact on water bodies and the associated land resources within urban areas and, as a consequence, from changes in the morphology and dynamics of river channels. This creates a need to predict the development of erosion-accumulation processes, especially within built-up urban areas. The purpose of the study is to develop a program for assessing erosion-accumulation processes at a water body, the mouth area of the Inia River, in a zone of prospective high-rise construction of a residential microdistrict, where the floodplain-channel complex is expected to develop intensively. Results of the study: comparing the full-scale measured water flow velocities with those calculated from the model revealed only a slight discrepancy, which allows us to say that the numerical model reliably describes the physical processes developing in the river. The calculations carried out to assess the direction and intensity of channel re-formation led to the conclusion that erosion processes insignificantly predominate over accumulative ones on the undeveloped part of the Inia River (the processes are noticeable only in certain areas, by the coasts and the island). Importance of the study: the evaluation of erosion-accumulation processes can be used in design decisions for the future high-rise construction of this territory, which will increase their economic efficiency.
Construction and evaluation of FiND, a fall risk prediction model of inpatients from nursing data.
Yokota, Shinichiroh; Ohe, Kazuhiko
2016-04-01
To construct and evaluate an easy-to-use fall risk prediction model based on the daily condition of inpatients, using electronic medical record system data for secondary use. The present authors scrutinized electronic medical record system data and created a dataset for analysis by including inpatient fall report data and Intensity of Nursing Care Needs data. The authors divided the analysis dataset into training data and testing data, constructed the fall risk prediction model FiND from the training data, and tested the model using the testing data. The dataset for analysis contained 1,230,604 records from 46,241 patients. The sensitivity of the model constructed from the training data was 71.3% and the specificity was 66.0%. The verification result on the testing dataset was almost equivalent to the theoretical value. Although the model's accuracy did not surpass that of models developed in previous research, the authors believe FiND will be useful in medical institutions all over Japan because it is composed of few variables (only age, sex, and the Intensity of Nursing Care Needs items) and its accuracy on unknown data was clear. © 2016 Japan Academy of Nursing Science.
Di Maria, Francesco; Bianconi, Francesco; Micale, Caterina; Baglioni, Stefano; Marionni, Moreno
2016-02-01
The size distribution of aggregates has direct and important effects on fundamental properties of construction materials such as workability, strength and durability. The size distribution of aggregates from construction and demolition waste (C&D) is one of the parameters that determine the degree of recyclability and therefore the quality of such materials. Unfortunately, standard methods like sieving or laser diffraction can be either very time consuming (sieving) or possible only in laboratory conditions (laser diffraction). As an alternative we propose and evaluate the use of image analysis to estimate the size distribution of aggregates from C&D in a fast yet accurate manner. The effectiveness of the procedure was tested on aggregates generated by an existing C&D mechanical treatment plant. Experimental comparison with manual sieving showed agreement in the range 81-85%. The proposed technique demonstrated potential for use in on-line systems within mechanical treatment plants for C&D. Copyright © 2015 Elsevier Ltd. All rights reserved.
National Aeronautics and Space Administration — ZONA Technology, Inc. (ZONA) proposes to develop an on-line flutter prediction tool for wind tunnel model using the parameter varying estimation (PVE) technique to...
Glatt, C. R.; Reiners, S. J.; Hague, D. S.
1975-01-01
A computerized method for storing, updating and augmenting experimentally determined overpressure signatures has been developed. A data base of pressure signatures for a shuttle type vehicle has been stored. The data base has been used for the prediction of sonic boom with the program described in Volume I.
Constructing multi-labelled decision trees for junction design using the predicted probabilities
Bezembinder, Erwin M.; Wismans, Luc J. J.; Van Berkum, Eric C.
2017-01-01
In this paper, we evaluate the use of traditional decision tree algorithms CRT, CHAID and QUEST to determine a decision tree which can be used to predict a set of (Pareto optimal) junction design alternatives (e.g. signal or roundabout) for a given traffic demand pattern and available space. This is
Directory of Open Access Journals (Sweden)
Guosheng Su
Full Text Available Non-additive genetic variation is usually ignored when genome-wide markers are used to study the genetic architecture and genomic prediction of complex traits in humans, wildlife, model organisms or farm animals. However, non-additive genetic effects may contribute substantially to the total genetic variation of complex traits. This study presents a genomic BLUP model including additive and non-additive genetic effects, in which the additive and non-additive genetic relationship matrices are constructed from genome-wide dense single nucleotide polymorphism (SNP) markers. In addition, this study proposes for the first time a method to construct a dominance relationship matrix using SNP markers and demonstrates it in detail. The proposed model was implemented to investigate the amounts of additive genetic, dominance and epistatic variation, and to assess the accuracy and unbiasedness of genomic predictions for daily gain in pigs. In the analysis of daily gain, four linear models were used: (1) a simple additive genetic model (MA), (2) a model including both additive and additive-by-additive epistatic genetic effects (MAE), (3) a model including both additive and dominance genetic effects (MAD), and (4) a full model including all three genetic components (MAED). Estimates of narrow-sense heritability were 0.397, 0.373, 0.379 and 0.357 for models MA, MAE, MAD and MAED, respectively. Estimated dominance variance and additive-by-additive epistatic variance accounted for 5.6% and 9.5% of the total phenotypic variance, respectively. Based on model MAED, the estimate of broad-sense heritability was 0.506. Reliabilities of genomic predicted breeding values for animals without performance records were 28.5%, 28.8%, 29.2% and 29.5% for models MA, MAE, MAD and MAED, respectively. In addition, models including non-additive genetic effects improved the unbiasedness of genomic predictions.
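Building marker-based relationship matrices of the kind this record describes can be sketched for genotypes coded 0/1/2. The additive part below follows the common VanRaden-style construction; the dominance part centers a heterozygote indicator by 2pq, following the general idea of the abstract. The scalings and the tiny genotype matrix are illustrative, not necessarily the paper's exact formulation.

```python
import numpy as np

# genotypes for 3 animals x 4 SNPs, coded as counts of one allele (0/1/2)
M = np.array([[0, 1, 2, 1],
              [1, 1, 0, 2],
              [2, 0, 1, 1]], dtype=float)

p = M.mean(axis=0) / 2                   # estimated allele frequencies
Z = M - 2 * p                            # centered additive marker codes
G = Z @ Z.T / np.sum(2 * p * (1 - p))    # additive genomic relationship matrix

H = (M == 1).astype(float)               # heterozygote indicators
W = H - 2 * p * (1 - p)                  # centered dominance codes
D = W @ W.T / np.sum(2 * p * (1 - p) * (1 - 2 * p * (1 - p)))
# G and D would enter the genomic BLUP model as covariance structures for
# the additive and dominance genetic effects, respectively.
```

Both matrices are symmetric by construction, which is what the mixed-model solver requires.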
Directory of Open Access Journals (Sweden)
Plebankiewicz E.
2015-09-01
Full Text Available The article briefly presents several methods of working-time estimation. Three methods of task-duration assessment were selected to investigate working time in a real construction project, using data collected from observing workers laying terrazzo flooring in staircases. The first estimation was done by fitting a normal and a triangular distribution. The next method, which receives the greatest attention here, is PERT. The article presents a way to standardize the results and the algorithm for determining the characteristic values of the method. Times to perform each component sub-task, as well as the whole task, were determined from the collected data at a reliability level of 85%. The completion time of the same works was also calculated using the KNR; the result obtained is much higher than the actual time needed to execute the task as calculated with the previous method. The authors argue that PERT is the best of the three methods, because it takes into account the randomness of the entire task duration and can be based on actual execution times known from research.
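The PERT estimate the record centers on combines three time judgments per task with the textbook beta-distribution weights; the flooring-work figures below are made up for illustration, not the study's observations.

```python
def pert_estimate(optimistic, most_likely, pessimistic):
    """Classic PERT expected duration and standard deviation for one task,
    using the standard 1-4-1 beta-distribution weighting."""
    expected = (optimistic + 4 * most_likely + pessimistic) / 6
    std_dev = (pessimistic - optimistic) / 6
    return expected, std_dev

# one terrazzo-laying sub-task, times in hours (invented figures)
e, s = pert_estimate(2.0, 3.0, 6.0)
# expected ~3.33 h; summing expected values and variances over sub-tasks
# gives the whole-task estimate at a chosen reliability level.
```

Because sub-task variances add, the whole-task duration at, say, 85% reliability follows from the summed mean plus the appropriate normal quantile of the summed standard deviation.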
Ko, Sae Hee; Bandyk, Dennis F; Hodgkiss-Harlow, Kelley D; Barleben, Andrew; Lane, John
2015-06-01
This study validated duplex ultrasound measurement of brachial artery volume flow (VF) as a predictor of dialysis access flow maturation and successful hemodialysis. Duplex ultrasound was used to image upper extremity dialysis access anatomy and estimate access VF within 1 to 2 weeks of the procedure. Correlation of brachial artery VF with dialysis access conduit VF was performed using a standardized duplex testing protocol in 75 patients. The hemodynamic data were used to develop brachial artery flow velocity criteria (peak systolic velocity and end-diastolic velocity) predictive of three VF categories: low, medium, and high (>800 mL/min). Brachial artery VF was then measured in 148 patients after a primary (n = 86) or revised (n = 62) upper extremity dialysis access procedure, and the VF category was correlated with access maturation or the need for revision before hemodialysis usage. Access maturation was conferred when brachial artery VF was >600 mL/min and conduit imaging indicated successful cannulation based on anatomic criteria of conduit diameter >5 mm and adequate skin depth. A VF >800 mL/min was predicted when the brachial artery lumen diameter was >4.5 mm, peak systolic velocity was >150 cm/s, and the diastolic-to-systolic velocity ratio was >0.4. Duplex testing to estimate brachial artery VF and assess the conduit for ease of cannulation can be performed in 5 minutes during the initial postoperative vascular clinic evaluation. Estimation of brachial artery VF using duplex ultrasound, termed the "Fast, 5-min Dialysis Duplex Scan," facilitates patient evaluation after new or revised upper extremity dialysis access procedures. Brachial artery VF correlates with access VF measurements and has the advantage of being easier to perform and applicable for forearm as well as arm dialysis access. When brachial artery velocity spectra criteria confirm a VF >800 mL/min, flow maturation and successful hemodialysis are predicted if anatomic criteria
A comprehensive software suite for protein family construction and functional site prediction.
Directory of Open Access Journals (Sweden)
David Renfrew Haft
Full Text Available In functionally diverse protein families, conservation in short signature regions may outperform full-length sequence comparisons for identifying proteins that belong to a subgroup within which one specific aspect of their function is conserved. The SIMBAL workflow (Sites Inferred by Metabolic Background Assertion Labeling) is a data-mining procedure for finding such signature regions. It begins by using clues from genomic context, such as co-occurrence or conserved gene neighborhoods, to build a useful training set from a large number of uncharacterized but mutually homologous proteins. When training set construction is successful, the YES partition is enriched in proteins that share function with the user's query sequence, while the NO partition is depleted. A selected query sequence is then mined for short signature regions whose closest matches overwhelmingly favor proteins from the YES partition. High-scoring signature regions typically contain key residues critical to functional specificity, so proteins with the highest sequence similarity across these regions tend to share the same function. The SIMBAL algorithm was described previously, but significant manual effort, expertise, and a supporting software infrastructure were required to prepare the requisite training sets. Here, we describe a new, distributable software suite that speeds up and simplifies the process for using SIMBAL, most notably by providing tools that automate training set construction. These tools have broad utility for comparative genomics, allowing for flexible collection of proteins or protein domains based on genomic context as well as homology, a capability that can greatly assist in protein family construction. Armed with this new software suite, SIMBAL can serve as a fast and powerful in silico alternative to direct experimentation for characterizing proteins and their functional interactions.
Using PROGUMBEL to predict extreme external hazards during nuclear power plant construction
International Nuclear Information System (INIS)
Diburg, S.; Hoelscher, N.; Niemann, H.J.; Meiswinkel, R.
2010-01-01
Safety considerations concerning the construction of power plants (supporting structure planning, the safety concept and structural design) require reliable data on external events, their incidence probability and their characteristic parameters. The basis for supporting-structure calculations founded on probabilistic reliability considerations is knowledge of the statistical distribution, or incidence frequency, of specific phenomena and their characteristic basic variables. The extreme value statistics software PROGUMBEL is an extended version of the original GUMBEL software used for seismic assessments. The authors describe the features of the software, which covers seismic events, flooding and extreme storms.
Bou-Fakhreddine, Bassam; Mougharbel, Imad; Faye, Alain; Abou Chakra, Sara; Pollet, Yann
2018-03-01
Accurate daily river flow forecasting is essential in many applications of water resources such as hydropower operation, agricultural planning and flood control. This paper presents a forecasting approach for a newly addressed situation in which hydrological data exist for a period longer than that of meteorological data (measurement asymmetry). One potential solution to the measurement asymmetry issue is data re-sampling: either only the hydrological data, or only the balanced part of the hydro-meteorological data set, is considered during the forecasting process. The main disadvantage is that potentially relevant information from the left-out data may be lost. In this research, the key output is a Two-Phase Constructive Fuzzy inference hybrid model implemented over the non-re-sampled data. The introduced modeling approach must be capable of exploiting the available data efficiently, with higher prediction efficiency relative to a Constructive Fuzzy model trained over the re-sampled data set. The study was applied to the Litani River in the Bekaa Valley, Lebanon, using 4 years of rainfall and 24 years of daily river flow measurements. A Constructive Fuzzy System Model (C-FSM) and a Two-Phase Constructive Fuzzy System Model (TPC-FSM) were trained. Upon validation, the second model showed competitive performance and accuracy, with the ability to preserve a higher day-to-day variability for 1, 3 and 6 days ahead. In fact, for the longest lead period, the C-FSM and TPC-FSM were able to explain 84.6% and 86.5% of the actual river flow variation, respectively. Overall, the results indicate that the TPC-FSM model provides a better tool to capture extreme flows in the process of streamflow prediction.
Directory of Open Access Journals (Sweden)
Emma Mares-García
2017-06-01
Full Text Available Background Other studies have assessed nonadherence to proton pump inhibitors (PPIs), but none has developed a screening test for its detection. Objectives To construct and internally validate a predictive model for nonadherence to PPIs. Methods This prospective observational study with a one-month follow-up was carried out in 2013 in Spain and included 302 patients with a prescription for PPIs. The primary variable was nonadherence to PPIs (pill count). Secondary variables were gender, age, antidepressants, type of PPI, non-guideline-recommended prescription (NGRP) of PPIs, and total number of drugs. With the secondary variables, a binary logistic regression model to predict nonadherence was constructed and adapted to a points system. The ROC curve, with its area (AUC), was calculated and the optimal cut-off point was established. The points system was internally validated through 1,000 bootstrap samples and implemented in a mobile application (Android). Results The points system had three prognostic variables: total number of drugs, NGRP of PPIs, and antidepressants. The AUC was 0.87 (95% CI [0.83-0.91], p < 0.001). The test yielded a sensitivity of 0.80 (95% CI [0.70-0.87]) and a specificity of 0.82 (95% CI [0.76-0.87]). The three parameters were very similar in the bootstrap validation. Conclusions A points system to predict nonadherence to PPIs has been constructed, internally validated and implemented in a mobile application. Provided similar results are obtained in external validation studies, we will have a screening tool to detect nonadherence to PPIs.
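Adapting a logistic regression to a points system, the general technique this record describes, usually divides each coefficient by a base unit and rounds. The sketch below shows that mechanism with invented coefficients, not the study's fitted values.

```python
import math

# hypothetical logistic-regression coefficients for the three predictors
# the abstract names (illustrative values only)
coefs = {"per_drug": 0.30, "ngrp": 1.10, "antidepressant": 0.80}
B = 0.30  # base unit: typically the smallest coefficient of interest

def points(n_drugs, ngrp, antidepressant):
    """Integer risk score: each predictor's contribution in units of B."""
    raw = (coefs["per_drug"] * n_drugs
           + coefs["ngrp"] * ngrp
           + coefs["antidepressant"] * antidepressant)
    return round(raw / B)

def risk(score, intercept=-3.0):
    """Map a score back to a predicted nonadherence probability
    via the logistic function (intercept is also invented)."""
    return 1 / (1 + math.exp(-(intercept + B * score)))

# e.g. 8 drugs, a non-guideline-recommended PPI, on antidepressants
score = points(8, 1, 1)
p = risk(score)
```

A fixed probability cut-off on this score then yields the sensitivity/specificity pair reported for the screening test.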
The construction of life prediction models for the design of Stirling engine heater components
Petrovich, A.; Bright, A.; Cronin, M.; Arnold, S.
1983-01-01
The service life of Stirling-engine heater structures of Fe-based high-temperature alloys is predicted using a numerical model based on a linear-damage approach and published test data (engine test data for a Co-based alloy and tensile-test results for both the Co-based and the Fe-based alloys). The operating principle of the automotive Stirling engine is reviewed; the economic and technical factors affecting the choice of heater material are surveyed; the test results are summarized in tables and graphs; the engine environment and automotive duty cycle are characterized; and the modeling procedure is explained. It is found that the statistical scatter of the fatigue properties of the heater components needs to be reduced (by decreasing the porosity of the cast material or employing wrought material in fatigue-prone locations) before the accuracy of life predictions can be improved.
Eppard, Randy G.
2004-01-01
The purpose of this study was to test a predictive model of several components of organizational and leadership culture in a large sample of municipal employees using three sets of predictors: the demographic/employment status of employees, measures of employees' judgments of their supervisor's transactional leadership style, and measures of employees' judgments of their supervisor's transformational leadership style. To what extent does transformational and transactional leadership (bot...
Complex data modeling and computationally intensive methods for estimation and prediction
Secchi, Piercesare; Advances in Complex Data Modeling and Computational Methods in Statistics
2015-01-01
The book is addressed to statisticians working at the forefront of the statistical analysis of complex and high dimensional data and offers a wide variety of statistical models, computer intensive methods and applications: network inference from the analysis of high dimensional data; new developments for bootstrapping complex data; regression analysis for measuring the downsize reputational risk; statistical methods for research on the human genome dynamics; inference in non-euclidean settings and for shape data; Bayesian methods for reliability and the analysis of complex data; methodological issues in using administrative data for clinical and epidemiological research; regression models with differential regularization; geostatistical methods for mobility analysis through mobile phone data exploration. This volume is the result of a careful selection among the contributions presented at the conference "S.Co.2013: Complex data modeling and computationally intensive methods for estimation and prediction" held...
International Nuclear Information System (INIS)
Hensel, S.J.; Hayes, D.W.
1993-01-01
A simple parameter estimation method has been developed to determine the dispersion and velocity parameters associated with stream/river transport. The unsteady one-dimensional Burgers' equation was chosen as the model equation, and the method has been applied to recent Savannah River dye tracer studies. The computed Savannah River transport coefficients compare favorably with documented values, and the time/concentration curves calculated from these coefficients compare well with the actual tracer data. The coefficients were used as a predictive capability and applied to Savannah River tritium concentration data obtained during the December 1991 accidental tritium discharge from the Savannah River Site. The peak tritium concentration at the intersection of Highway 301 and the Savannah River was underpredicted by only 5% using the coefficients computed from the dye data.
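The fitting step described above can be sketched as a least-squares fit of the classical one-dimensional advection-dispersion solution to a tracer time series. This is only an illustration of the idea, not the paper's own scheme: SciPy's generic curve fitter stands in for the study's estimation method, and the station distance, release mass and "true" coefficients below are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def tracer_curve(t, D, v, x=10_000.0, mass=1.0):
    """Concentration vs. time at a fixed station x (m) for an instantaneous
    unit-mass release at x = 0 (1D advection-dispersion, unit cross-section).
    D: dispersion coefficient (m^2/s), v: mean velocity (m/s)."""
    return mass / np.sqrt(4 * np.pi * D * t) * np.exp(-(x - v * t) ** 2 / (4 * D * t))

# Synthetic "dye study" with hypothetical true values D = 50 m^2/s, v = 0.4 m/s
t = np.linspace(1_000, 60_000, 200)
obs = tracer_curve(t, 50.0, 0.4)

# Recover D and v from the observed time/concentration curve
(D_hat, v_hat), _ = curve_fit(tracer_curve, t, obs, p0=[30.0, 0.3])
print(D_hat, v_hat)
```

The fitted coefficients could then be reused predictively for a different solute, which is essentially what the study does with the tritium release.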
Kaklamanos, James; Baise, Laurie G.; Boore, David M.
2011-01-01
The ground-motion prediction equations (GMPEs) developed as part of the Next Generation Attenuation of Ground Motions (NGA-West) project in 2008 are becoming widely used in seismic hazard analyses. However, these new models are considerably more complicated than previous GMPEs, and they require several more input parameters. When employing the NGA models, users routinely face situations in which some of the required input parameters are unknown. In this paper, we present a framework for estimating the unknown source, path, and site parameters when implementing the NGA models in engineering practice, and we derive geometrically-based equations relating the three distance measures found in the NGA models. Our intent is for the content of this paper not only to make the NGA models more accessible, but also to help with the implementation of other present or future GMPEs.
High Sensitivity TSS Prediction: Estimates of Locations Where TSS Cannot Occur
Schaefer, Ulf
2013-10-10
Background Although transcription in mammalian genomes can initiate from various genomic positions (e.g., 3′UTR, coding exons, etc.), most locations on genomes are not prone to transcription initiation. It is of practical and theoretical interest to be able to estimate such collections of non-TSS locations (NTLs). The identification of large portions of NTLs can contribute to better focusing the search for TSS locations and thus contribute to promoter and gene finding. It can help in the assessment of 5′ completeness of expressed sequences, contribute to more successful experimental designs, as well as more accurate gene annotation. Methodology Using comprehensive collections of Cap Analysis of Gene Expression (CAGE) and other transcript data from mouse and human genomes, we developed a methodology that allows us, by performing computational TSS prediction with very high sensitivity, to annotate, with a high accuracy in a strand specific manner, locations of mammalian genomes that are highly unlikely to harbor transcription start sites (TSSs). The properties of the immediate genomic neighborhood of 98,682 accurately determined mouse and 113,814 human TSSs are used to determine features that distinguish genomic transcription initiation locations from those that are not likely to initiate transcription. In our algorithm we utilize various constraining properties of features identified in the upstream and downstream regions around TSSs, as well as statistical analyses of these surrounding regions. Conclusions Our analysis of human chromosomes 4, 21 and 22 estimates ~46%, ~41% and ~27% of these chromosomes, respectively, as being NTLs. This suggests that on average more than 40% of the human genome can be expected to be highly unlikely to initiate transcription. Our method represents the first one that utilizes high-sensitivity TSS prediction to identify, with high accuracy, large portions of mammalian genomes as NTLs. The server with our algorithm implemented is
Boerner, V; Johnston, D; Wu, X-L; Bauck, S
2015-02-01
Genomically estimated breeding values (GEBV) for Angus beef cattle are available from at least 2 commercial suppliers (Igenity [http://www.igenity.com] and Zoetis [http://www.zoetis.com]). The utility of these GEBV for improving genetic evaluation depends on their accuracies, which can be estimated by the genetic correlation with phenotypic target traits. Genomically estimated breeding values of 1,032 Angus bulls calculated from prediction equations (PE) derived by 2 different procedures in the U.S. Angus population were supplied by Igenity. Both procedures were based on Illumina BovineSNP50 BeadChip genotypes. In procedure sg, GEBV were calculated from PE that used subsets of only 392 SNP, where these subsets were individually selected for each trait by BayesCπ. In procedure rg GEBV were calculated from PE derived in a ridge regression approach using all available SNP. Because the total set of 1,032 bulls with GEBV contained 732 individuals used in the Igenity training population, GEBV subsets were formed characterized by a decreasing average relationship between individuals in the subsets and individuals in the training population. Accuracies of GEBV were estimated as genetic correlations between GEBV and their phenotypic target traits modeling GEBV as trait observations in a bivariate REML approach, in which phenotypic observations were those recorded in the commercial Australian Angus seed stock sector. Using results from the GEBV subset excluding all training individuals as a reference, estimated accuracies were generally in agreement with those already published, with both types of GEBV (sg and rg) yielding similar results. Accuracies for growth traits ranged from 0.29 to 0.45, for reproductive traits from 0.11 to 0.53, and for carcass traits from 0.3 to 0.75. Accuracies generally decreased with an increasing genetic distance between the training and the validation population. However, for some carcass traits characterized by a low number of phenotypic
Zhang, X L; Su, G F; Yuan, H Y; Chen, J G; Huang, Q Y
2014-09-15
Atmospheric dispersion models play an important role in nuclear power plant accident management. A reliable estimation of the radioactive material distribution at short range (about 50 km) is urgently needed for population sheltering and evacuation planning. However, the meteorological data and the source term, which greatly influence the accuracy of atmospheric dispersion models, are usually poorly known in the early phase of an emergency. In this study, a modified ensemble Kalman filter data assimilation method in conjunction with a Lagrangian puff-model is proposed to simultaneously improve the model prediction and reconstruct the source terms for short range atmospheric dispersion using the off-site environmental monitoring data. Four main uncertainty parameters are considered: source release rate, plume rise height, wind speed and wind direction. Twin experiments show that the method effectively improves the predicted concentration distribution, and the temporal profiles of source release rate and plume rise height are also successfully reconstructed. Moreover, the time lag in the response of the ensemble Kalman filter is shortened. The method proposed here can be a useful tool not only in nuclear power plant accident emergency management but also in other similar situations where hazardous material is released into the atmosphere.
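The analysis step at the heart of such assimilation schemes can be sketched in a few lines. The toy below shows a plain stochastic ensemble Kalman filter update, not the paper's modified version: the Lagrangian puff model is replaced by a hypothetical linear observation operator (readings assumed to be twice the release rate), and all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def enkf_update(X, y, H, R):
    """Stochastic EnKF analysis step.
    X: (n_state, n_ens) forecast ensemble; y: (n_obs,) observations;
    H: (n_obs, n_state) observation operator; R: (n_obs, n_obs) obs covariance."""
    n_obs, n_ens = y.size, X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)         # ensemble anomalies
    P = A @ A.T / (n_ens - 1)                     # sample forecast covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)  # Kalman gain
    Y = y[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, n_ens).T
    return X + K @ (Y - H @ X)                    # perturbed-obs update

# Hypothetical toy: estimate an unknown release rate from noisy monitoring
# readings assumed to equal 2x the rate (stand-in for the dispersion model).
truth = 5.0
H = np.array([[2.0]])
R = np.array([[0.01]])
X = rng.normal(1.0, 2.0, size=(1, 500))           # prior ensemble far from truth
Xa = enkf_update(X, np.array([2.0 * truth]), H, R)
print(Xa.mean())                                   # pulled toward the true rate
```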
Prediction of hospital mortality by changes in the estimated glomerular filtration rate (eGFR).
LENUS (Irish Health Repository)
Berzan, E
2015-03-01
Deterioration of physiological or laboratory variables may provide important prognostic information. We have studied whether a change in the estimated glomerular filtration rate (eGFR), calculated using the Modification of Diet in Renal Disease (MDRD) formula, over the hospital admission would have predictive value. An analysis was performed on all emergency medical hospital episodes (N = 61964) admitted between 1 January 2002 and 31 December 2011. A stepwise logistic regression model examined the relationship between mortality and change in renal function from admission to discharge. The fully adjusted odds ratios (ORs) for 5 classes of GFR deterioration showed a stepwise increased risk of 30-day death, with ORs of 1.42 (95% CI: 1.20, 1.68), 1.59 (1.27, 1.99), 2.71 (2.24, 3.27), 5.56 (4.54, 6.81) and 11.9 (9.0, 15.6), respectively. The change in eGFR during a clinical episode, following an emergency medical admission, powerfully predicts the outcome.
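For reference, the 4-variable MDRD equation behind the eGFR values can be sketched as below. This assumes the common IDMS-traceable form with constant 175; the study's abstract does not state which variant was used, and the patient values in the example are hypothetical.

```python
def egfr_mdrd(scr_mg_dl, age_years, female=False, black=False):
    """4-variable MDRD study equation (IDMS-traceable constant 175 assumed).
    scr_mg_dl: serum creatinine in mg/dL. Returns eGFR in mL/min/1.73 m^2."""
    egfr = 175.0 * scr_mg_dl ** -1.154 * age_years ** -0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr

# Hypothetical patient: a creatinine rise over the admission lowers eGFR,
# i.e. the kind of deterioration the study links to 30-day mortality.
admission = egfr_mdrd(1.0, 70)   # on admission
discharge = egfr_mdrd(2.0, 70)   # creatinine doubled by discharge
print(admission, discharge)
```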
International Nuclear Information System (INIS)
Marseguerra, Marzio; Zio, Enrico
2001-01-01
The control system of a reactor should be able to predict, in real time, the amount of reactivity to be inserted (e.g., by control rod movements and boron injection and dilution) to respond to a given electrical load demand or to undesired, accidental transients. The real-time constraint renders impractical the use of a large, detailed dynamic reactor code. One has, then, to resort to simplified analytical models with lumped effective parameters suitably estimated from the reactor data. The simple and well-known Chernick model for describing the reactor power evolution in the presence of xenon is considered, and the feasibility of using genetic algorithms for estimating the effective nuclear parameters involved and the initial nonmeasurable xenon and iodine conditions is investigated. This approach has the advantage of counterbalancing the inherent model simplicity with the periodic reestimation of the effective parameter values pertaining to each reactor on the basis of its recent history. By so doing, other effects, such as burnup, are automatically taken into account.
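A genetic algorithm for this kind of effective-parameter re-estimation can be sketched as follows. This is a generic illustration, not the paper's algorithm: a simple exponential toy model stands in for the Chernick xenon model, and population size, mutation schedule and bounds are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

def ga_fit(loss, bounds, pop_size=60, n_gens=150, elite=10, sigma=0.1):
    """Minimal real-coded genetic algorithm: elitist truncation selection
    plus annealed Gaussian mutation, minimizing a history-match loss."""
    low, high = np.array(bounds, dtype=float).T
    pop = rng.uniform(low, high, (pop_size, len(bounds)))
    for g in range(n_gens):
        fitness = np.array([loss(ind) for ind in pop])
        parents = pop[np.argsort(fitness)[:elite]]             # survivors
        children = parents[rng.integers(0, elite, pop_size - elite)]
        scale = sigma * (high - low) * 0.98 ** g               # annealed step
        children = np.clip(children + rng.normal(0.0, scale, children.shape),
                           low, high)
        pop = np.vstack([parents, children])
    fitness = np.array([loss(ind) for ind in pop])
    return pop[fitness.argmin()]

# Toy stand-in for the reactor history match: recover two "effective
# parameters" (a, b) of y = a*exp(-b*t) from simulated plant data.
t = np.linspace(0.0, 5.0, 50)
data = 3.0 * np.exp(-0.7 * t)
best = ga_fit(lambda p: ((p[0] * np.exp(-p[1] * t) - data) ** 2).sum(),
              bounds=[(0.0, 10.0), (0.0, 2.0)])
print(best)   # near (3.0, 0.7)
```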
Daily House Price Indices: Construction, Modeling, and Longer-Run Predictions
DEFF Research Database (Denmark)
Bollerslev, Tim; Patton, Andrew J.; Wang, Wenjing
We construct daily house price indices for ten major U.S. metropolitan areas. Our calculations are based on a comprehensive database of several million residential property transactions and a standard repeat-sales method that closely mimics the methodology of the popular monthly Case-Shiller house price indices. Our new daily house price indices exhibit dynamic features similar to those of other daily asset prices, with mild autocorrelation and strong conditional heteroskedasticity of the corresponding daily returns. A relatively simple multivariate time series model for the daily house price index returns, explicitly allowing for commonalities across cities and GARCH effects, produces forecasts of monthly house price changes that are superior to various alternative forecast procedures based on lower frequency data.
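The repeat-sales regression underlying such indices (Bailey-Muth-Nourse) can be sketched as below. This is the unweighted textbook version, not the authors' implementation: it omits the Case-Shiller interval weighting, and the toy transaction pairs are hypothetical.

```python
import numpy as np

def repeat_sales_index(pairs, n_periods):
    """Bailey-Muth-Nourse repeat-sales regression (unweighted sketch).
    pairs: iterable of (t_buy, t_sell, price_buy, price_sell).
    Returns the log price index with period 0 fixed at 0."""
    X = np.zeros((len(pairs), n_periods - 1))   # period 0 is the base
    y = np.empty(len(pairs))
    for i, (t1, t2, p1, p2) in enumerate(pairs):
        if t1 > 0:
            X[i, t1 - 1] = -1.0                 # -1 at the first sale period
        if t2 > 0:
            X[i, t2 - 1] = 1.0                  # +1 at the second sale period
        y[i] = np.log(p2 / p1)                  # log price relative
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.concatenate([[0.0], beta])        # log index per period

# Hypothetical market where prices rise 10% each period:
pairs = [(0, 1, 100, 110), (1, 2, 110, 121), (0, 2, 100, 121), (0, 1, 200, 220)]
log_index = repeat_sales_index(pairs, 3)
print(np.exp(log_index))
```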
Directory of Open Access Journals (Sweden)
Wingender Edgar
2005-01-01
Abstract Background Binding of a bacterium to a eukaryotic cell triggers a complex network of interactions in and between both cells. P. aeruginosa is a pathogen that causes acute and chronic lung infections by interacting with the pulmonary epithelial cells. We use this example for examining the ways of triggering the response of the eukaryotic cell(s), leading us to a better understanding of the details of the inflammatory process in general. Results Considering a set of genes co-expressed during the antibacterial response of human lung epithelial cells, we constructed a promoter model for the search of additional target genes potentially involved in the same cell response. The model construction is based on the consideration of pair-wise combinations of transcription factor binding sites (TFBS). It has been shown that the antibacterial response of human epithelial cells is triggered by at least two distinct pathways. We therefore supposed that there are two subsets of promoters activated by each of them. Optimally, they should be "complementary" in the sense of appearing in complementary subsets of the (+)-training set. We developed the concept of complementary pairs, i.e., two mutually exclusive pairs of TFBS, each of which should be found in one of the two complementary subsets. Conclusions We suggest a simple, but exhaustive method for searching for TFBS pairs which characterize the whole (+)-training set, as well as for complementary pairs. Applying this method, we came up with a promoter model of antibacterial response genes that consists of one TFBS pair which should be found in the whole training set and four complementary pairs. We applied this model to screening of 13,000 upstream regions of human genes and identified 430 new target genes which are potentially involved in antibacterial defense mechanisms.
2013-01-01
Background Measures used for medical student selection should predict future performance during training. A problem for any selection study is that predictor-outcome correlations are known only in those who have been selected, whereas selectors need to know how measures would predict in the entire pool of applicants. That problem of interpretation can be solved by calculating construct-level predictive validity, an estimate of the true predictor-outcome correlation across the range of applicant abilities. Methods Construct-level predictive validities were calculated in six cohort studies of medical student selection and training (student entry, 1972 to 2009) for a range of predictors, including A-levels, General Certificates of Secondary Education (GCSEs)/O-levels, and aptitude tests (AH5 and UK Clinical Aptitude Test (UKCAT)). Outcomes included undergraduate basic medical science and finals assessments, as well as postgraduate measures of Membership of the Royal Colleges of Physicians of the United Kingdom (MRCP(UK)) performance and entry in the Specialist Register. Construct-level predictive validity was calculated with the method of Hunter, Schmidt and Le (2006), adapted to correct for right-censorship of examination results due to grade inflation. Results Meta-regression analyzed 57 separate predictor-outcome correlations (POCs) and construct-level predictive validities (CLPVs). Mean CLPVs are substantially higher (.450) than mean POCs (.171). Mean CLPVs for first-year examinations were high for A-levels (.809; CI: .501 to .935), and lower for GCSEs/O-levels (.332; CI: .024 to .583) and UKCAT (mean = .245; CI: .207 to .276). A-levels had higher CLPVs for all undergraduate and postgraduate assessments than did GCSEs/O-levels and intellectual aptitude tests. CLPVs of educational attainment measures decline somewhat during training, but continue to predict postgraduate performance. Intellectual aptitude tests have lower CLPVs than A-levels or GCSEs
Buchari, M. A.; Mardiyanto, S.; Hendradjaya, B.
2018-03-01
Finding software defects as early as possible is the purpose of research on software defect prediction. Software defect prediction is required not only to state the existence of defects, but also to give a prioritized list of which modules require more intensive testing, so that test resources can be allocated efficiently. Learning to rank is one of the approaches that can provide defect module ranking data for the purposes of software testing. In this study, we propose a meta-heuristic Chaotic Gaussian Particle Swarm Optimization to improve the accuracy of the learning-to-rank software defect prediction approach. We have used 11 public benchmark data sets as experimental data. Our overall results demonstrate that the prediction models constructed using Chaotic Gaussian Particle Swarm Optimization achieve better accuracy on 5 data sets, tie on 5 data sets and perform worse on 1 data set. Thus, we conclude that the application of Chaotic Gaussian Particle Swarm Optimization in the learning-to-rank approach can improve the accuracy of the defect module ranking in data sets that have high-dimensional features.
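The baseline that the chaotic Gaussian variant modifies is plain particle swarm optimization, sketched below. The chaotic map and Gaussian sampling of the paper's variant are not reproduced here (they change how the random coefficients r1, r2 are drawn), and the sphere objective is only a stand-in for a learning-to-rank loss.

```python
import numpy as np

rng = np.random.default_rng(7)

def pso(objective, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Plain (global-best) particle swarm optimization, minimizing objective."""
    x = rng.uniform(-5.0, 5.0, (n_particles, dim))   # positions
    v = np.zeros_like(x)                             # velocities
    pbest = x.copy()                                 # personal bests
    pbest_val = np.apply_along_axis(objective, 1, x)
    gbest = pbest[pbest_val.argmin()].copy()         # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))   # the chaotic-Gaussian
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)  # variant
        x = x + v                                    # redraws r1, r2 differently
        vals = np.apply_along_axis(objective, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# Toy objective (sphere centered at 1); a defect-ranking loss would replace it.
best, best_val = pso(lambda p: ((p - 1.0) ** 2).sum(), dim=3)
print(best, best_val)   # near [1, 1, 1]
```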
Directory of Open Access Journals (Sweden)
Hongxiao Yu
2015-05-01
Trajectory tracking and state estimation are significant in motion planning and intelligent vehicle control. This article focuses on the model predictive control approach for trajectory tracking of intelligent vehicles and state estimation of the nonlinear vehicle system. The constraints on the system states are considered when applying the model predictive control method to the practical problem, while a 4-degree-of-freedom vehicle model and an unscented Kalman filter are proposed to estimate the vehicle states. The estimated states of the vehicle are used to provide model predictive control with real-time control and to judge vehicle stability. Furthermore, in order to decrease the cost of solving the nonlinear optimization, linear time-varying model predictive control is used at each time step. The effectiveness of the proposed vehicle state estimation and model predictive control method is tested with a driving simulator. The results of simulations and experiments show that good, robust performance is achieved for trajectory tracking and state estimation in different scenarios.
International Nuclear Information System (INIS)
Lee, Ming-Wei; Chen, Yi-Chun
2014-01-01
In pinhole SPECT applied to small-animal studies, it is essential to have an accurate imaging system matrix, called H matrix, for high-spatial-resolution image reconstructions. Generally, an H matrix can be obtained by various methods, such as measurements, simulations or some combinations of both methods. In this study, a distance-weighted Gaussian interpolation method combined with geometric parameter estimations (DW-GIMGPE) is proposed. It utilizes a simplified grid-scan experiment on selected voxels and parameterizes the measured point response functions (PRFs) into 2D Gaussians. The PRFs of missing voxels are interpolated by the relations between the Gaussian coefficients and the geometric parameters of the imaging system with distance-weighting factors. The weighting factors are related to the projected centroids of voxels on the detector plane. A full H matrix is constructed by combining the measured and interpolated PRFs of all voxels. The PRFs estimated by DW-GIMGPE showed similar profiles as the measured PRFs. OSEM reconstructed images of a hot-rod phantom and normal rat myocardium demonstrated the effectiveness of the proposed method. The detectability of a SKE/BKE task on a synthetic spherical test object verified that the constructed H matrix provided comparable detectability to that of the H matrix acquired by a full 3D grid-scan experiment. The reduction in the acquisition time of a full 1.0-mm grid H matrix was about 15.2 and 62.2 times with the simplified grid pattern on 2.0-mm and 4.0-mm grid, respectively. A finer-grid H matrix down to 0.5-mm spacing interpolated by the proposed method would shorten the acquisition time by 8 times, additionally. -- Highlights: • A rapid interpolation method of system matrices (H) is proposed, named DW-GIMGPE. • Reduce H acquisition time by 15.2× with simplified grid scan and 2× interpolation. • Reconstructions of a hot-rod phantom with measured and DW-GIMGPE H were similar. • The imaging study of normal
Gaussian processes with optimal kernel construction for neuro-degenerative clinical onset prediction
Canas, Liane S.; Yvernault, Benjamin; Cash, David M.; Molteni, Erika; Veale, Tom; Benzinger, Tammie; Ourselin, Sébastien; Mead, Simon; Modat, Marc
2018-02-01
Gaussian Processes (GP) are a powerful tool to capture the complex time-variations of a dataset. In the context of medical imaging analysis, they allow robust modelling even in the case of highly uncertain or incomplete datasets. Predictions from a GP depend on the covariance kernel function selected to explain the data variance. To overcome this limitation, we propose a framework to identify the optimal covariance kernel function to model the data. The optimal kernel is defined as a composition of base kernel functions used to identify correlation patterns between data points. Our approach includes a modified version of the Compositional Kernel Learning (CKL) algorithm, in which we score the kernel families using a new energy function that depends on both the Bayesian Information Criterion (BIC) and the explained variance score. We applied the proposed framework to model the progression of neurodegenerative diseases over time, in particular the progression of autosomal dominantly inherited Alzheimer's disease, and use it to predict the time to clinical onset of subjects carrying the genetic mutation.
Van Nuland, Hanneke J C; Dusseldorp, Elise; Martens, Rob L; Boekaerts, Monique
2010-08-01
Different theoretical viewpoints on motivation make it hard to decide which model has the best potential to provide valid predictions on classroom performance. This study was designed to explore motivation constructs derived from different motivation perspectives that predict performance on a novel task best. Motivation constructs from self-determination theory, self-regulation theory, and achievement goal theory were investigated in tandem. Performance was measured by systematicity (i.e. how systematically students worked on a problem-solving task) and test score (i.e. score on a multiple-choice test). Hierarchical regression analyses on data from 259 secondary school students showed a quadratic relation between a performance avoidance orientation and both performance outcomes, indicating that extreme high and low performance avoidance resulted in the lowest performance. Furthermore, two three-way interaction effects were found. Intrinsic motivation seemed to play a key role in test score and systematicity performance, provided that effort regulation and metacognitive skills were both high. Results indicate that intrinsic motivation in itself is not enough to attain a good performance. Instead, a moderate score on performance avoidance, together with the ability to remain motivated and effectively regulate and control task behavior, is needed to attain a good performance. High time management skills also contributed to higher test score and systematicity performance and a low performance approach orientation contributed to higher systematicity performance. We concluded that self-regulatory skills should be trained in order to have intrinsically motivated students perform well on novel tasks in the classroom.
A hybrid numerical prediction scheme for solar radiation estimation in un-gauged catchments.
Shamim, M. A.; Bray, M.; Ishak, A. M.; Remesan, R.; Han, D.
2009-09-01
The importance of solar radiation on earth's surface is depicted in its wide range of applications in the fields of meteorology, agricultural sciences, engineering, hydrology, crop water requirements, climatic changes and energy assessment. It is quite random in nature as it has to go through different processes of assimilation and dispersion while on its way to earth. Compared to other meteorological parameters, solar radiation is quite infrequently measured; for example, the worldwide ratio of stations collecting solar radiation to those collecting temperature is 1:500 (Badescu, 2008). Researchers, therefore, have to rely on indirect techniques of estimation that include nonlinear models, artificial intelligence (e.g. neural networks), remote sensing and numerical weather predictions (NWP). This study proposes a hybrid numerical prediction scheme for solar radiation estimation in un-gauged catchments. It uses the PSU/NCAR's Mesoscale Modelling system (MM5) (Grell et al., 1995) to parameterise the cloud effect on extraterrestrial radiation by dividing the atmosphere into four layers of very high (6-12 km), high (3-6 km), medium (1.5-3 km) and low (0-1.5 km) altitudes from earth. It is believed that various cloud forms exist within each of these layers. An hourly time series of upper air pressure and relative humidity data sets corresponding to all of these layers is determined for the Brue catchment, southwest UK, using MM5. The Cloud Index (CI) of each layer was then determined using (Yang and Koike, 2002): ci = 1/(pbi - pti) ∫_{pti}^{pbi} max[0.0, (Rh - Rhcri)/(1 - Rhcri)] dp, where pbi and pti represent the air pressure at the bottom and top of each layer, respectively, and Rhcri is the critical value of relative humidity at which a certain cloud type is formed. Output from a global clear sky solar radiation model (MRM v-5) (Kambezidis and Psiloglu, 2008) is used along with meteorological datasets of temperature and precipitation and astronomical information. The analysis is aided by the
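On discrete pressure levels, the Yang and Koike (2002) cloud index reduces to a simple quadrature. A sketch with a trapezoidal rule follows; the pressure levels, humidity profile and critical humidity below are hypothetical, not values from the study.

```python
import numpy as np

def cloud_index(p, rh, rh_cri):
    """Layer cloud index ci = 1/(p_b - p_t) * integral over the layer of
    max(0, (Rh - Rh_cri)/(1 - Rh_cri)) dp, via the trapezoidal rule.
    p: pressure levels from layer top to bottom (hPa); rh: relative
    humidity (0-1) at those levels; rh_cri: critical relative humidity."""
    f = np.maximum(0.0, (rh - rh_cri) / (1.0 - rh_cri))
    segments = 0.5 * (f[1:] + f[:-1]) * np.diff(p)   # trapezoid areas
    return segments.sum() / (p[-1] - p[0])

# Hypothetical partially saturated layer between 700 and 850 hPa:
p = np.array([700.0, 750.0, 800.0, 850.0])
rh = np.array([0.95, 0.90, 0.70, 0.60])
print(cloud_index(p, rh, rh_cri=0.8))
```

Levels with Rh below the critical value contribute nothing, so the index is driven only by the saturated part of the layer.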
Directory of Open Access Journals (Sweden)
Zheng Huiru
2009-01-01
Abstract Background Information about protein interaction networks is fundamental to understanding protein function and cellular processes. Interaction patterns among proteins can suggest new drug targets and aid in the design of new therapeutic interventions. Efforts have been made to map interactions on a proteome-wide scale using both experimental and computational techniques. Reference datasets that contain known interacting proteins (positive cases) and non-interacting proteins (negative cases) are essential to support computational prediction and validation of protein-protein interactions. Information on known interacting and non-interacting proteins is usually stored within databases. Extraction of these data can be both complex and time consuming. Although the automatic construction of reference datasets for classification is a useful resource for researchers, no public resource currently exists to perform this task. Results GRIP (Gold Reference dataset constructor from Information on Protein complexes) is a web-based system that provides researchers with the functionality to create reference datasets for protein-protein interaction prediction in Saccharomyces cerevisiae. Both positive and negative cases for a reference dataset can be extracted, organised and downloaded by the user. GRIP also provides an upload facility whereby users can submit proteins to determine protein complex membership. A search facility is provided where a user can search for protein complex information in Saccharomyces cerevisiae. Conclusion GRIP is developed to retrieve information on protein complexes, cellular localisation, and physical and genetic interactions in Saccharomyces cerevisiae. Manual construction of reference datasets can be a time-consuming process requiring programming knowledge. GRIP simplifies and speeds up this process by allowing users to automatically construct reference datasets.
GRIP is free to access at http://rosalind.infj.ulst.ac.uk/GRIP/.
Xia, Zhen Wei; Cui, Wen Jun; Zhou, Wen Pu; Zhang, Xue Hong; Shen, Qing Xiang; Li, Yun Zhu; Yu, Shan Chang
2004-10-01
Human heme oxygenase-1 (hHO-1) is the rate-limiting enzyme in the catabolism of heme, which directly regulates the concentration of bilirubin in the human body. The mutant structure was simulated with the Swiss-PdbViewer program, which showed that the structure of the active pocket changed distinctly after Ala25 was substituted for His25 in the active domain, but the mutated enzyme still bound heme. On the basis of these results, the expression vectors pBHO-1 and pBHO-1(M) were constructed, induced with IPTG and expressed in the E. coli DH5alpha strain. The expression products were purified with 30%-60% saturation (NH4)2SO4 and Q-Sepharose Fast Flow column chromatography. The concentration of hHO-1 in the 30%-60% saturation (NH4)2SO4 fraction and in fractions after two rounds of column chromatography was 3.6-fold and 30-fold higher, respectively, than that in the initial product. Activity assays of wild-type hHO-1 (whHO-1) and mutant hHO-1 (deltahHO-1) showed that the activity of deltahHO-1 was reduced by 91.21% compared with that of whHO-1. The study shows that His25 is important for the mechanism of hHO-1, and provides the possibility of effectively regulating its activity to exert a biological function.
Oda, Akifumi; Fukuyoshi, Shuichi
2015-06-01
The GADV hypothesis is a form of the protein world hypothesis, which suggests that life originated from proteins (Lacey et al. 1999; Ikehara 2002; Andras 2006). In the GADV hypothesis, life is thought to have originated from primitive proteins constructed of only glycine, alanine, aspartic acid, and valine ([GADV]-proteins). In this study, the three-dimensional (3D) conformations of randomly generated short [GADV]-peptides were computationally investigated using replica-exchange molecular dynamics (REMD) simulations (Sugita and Okamoto 1999). Because the peptides used in this study consisted of only 20 residues each, they could not form certain 3D structures. However, the conformational tendencies of the peptides were elucidated by analyzing the conformational ensembles generated by REMD simulations. The results indicate that secondary structures can be formed in several randomly generated [GADV]-peptides. A long helical structure was found in one of the hydrophobic peptides, supporting the conjecture of the GADV hypothesis that many peptides aggregated to form peptide multimers with enzymatic activity in the primordial soup. In addition, these results indicate that REMD simulations can be used for the structural investigation of short peptides.
On the predictivity of pore-scale simulations: estimating uncertainties with multilevel Monte Carlo
Icardi, Matteo
2016-02-08
A fast method with tunable accuracy is proposed to estimate errors and uncertainties in pore-scale and Digital Rock Physics (DRP) problems. The overall predictivity of these studies can be, in fact, hindered by many factors including sample heterogeneity, computational and imaging limitations, model inadequacy and imperfectly known physical parameters. The typical objective of pore-scale studies is the estimation of macroscopic effective parameters such as permeability, effective diffusivity and hydrodynamic dispersion. However, these are often non-deterministic quantities (i.e., results obtained for a specific pore-scale sample and setup are not totally reproducible by another “equivalent” sample and setup). The stochastic nature can arise due to the multi-scale heterogeneity, the computational and experimental limitations in considering large samples, and the complexity of the physical models. These approximations, in fact, introduce an error that, being dependent on a large number of complex factors, can be modeled as random. We propose a general simulation tool, based on multilevel Monte Carlo, that can drastically reduce the computational cost needed for computing accurate statistics of effective parameters and other quantities of interest under any of these random errors. This is, to our knowledge, the first attempt to include Uncertainty Quantification (UQ) in pore-scale physics and simulation. The method can also provide estimates of the discretization error, and it is tested on three-dimensional transport problems in heterogeneous materials, where the sampling procedure is done by generation algorithms able to reproduce realistic consolidated and unconsolidated random sphere and ellipsoid packings and arrangements. A totally automatic workflow is developed in an open-source code [2015. https://bitbucket.org/micardi/porescalemc.], which includes rigid-body physics and random packing algorithms, unstructured mesh discretization, finite volume solvers
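The core multilevel Monte Carlo telescoping estimator can be sketched on a toy quantity of interest. Here a midpoint-rule integral stands in for a pore-scale solver (levels differ only in resolution); the coupled fine/coarse evaluations per level and the geometric sample allocation are the essential ingredients, while the specific function and sample counts are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

def qoi_level(z, level):
    """Level-l approximation of f(Z) = integral_0^1 cos(x*Z) dx via the
    midpoint rule with 2^level subintervals (higher level = finer/costlier)."""
    n = 2 ** level
    x = (np.arange(n) + 0.5) / n
    return np.cos(np.outer(z, x)).mean(axis=1)

def mlmc(n_samples):
    """MLMC telescoping sum E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}],
    sampling each correction with coupled (same Z) fine/coarse runs."""
    est = 0.0
    for level, n in enumerate(n_samples):
        z = rng.standard_normal(n)
        fine = qoi_level(z, level)
        coarse = qoi_level(z, level - 1) if level > 0 else 0.0
        est += np.mean(fine - coarse)
    return est

# Many cheap samples at the coarse level, few at the expensive fine levels:
est = mlmc([40_000, 10_000, 2_500, 600, 150])
print(est)   # exact value is integral_0^1 exp(-x^2/2) dx ~ 0.8556
```

Because the level corrections have rapidly decaying variance, most of the work is spent on the cheap coarse level, which is the source of the cost savings claimed above.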
Garcia Leal, Julio A.; Lopez-Baeza, Ernesto; Khodayar, Samiro; Estrela, Teodoro; Fidalgo, Arancha; Gabaldo, Onofre; Kuligowski, Robert; Herrera, Eddy
Surface runoff is defined as the amount of water that originates from precipitation, does not infiltrate owing to soil saturation and therefore circulates over the surface. A good estimation of runoff is useful for the design of draining systems, structures for flood control and soil utilisation. For runoff estimation there exist different methods such as (i) the rational method, (ii) the isochrone method, (iii) the triangular hydrograph, (iv) the non-dimensional SCS hydrograph, (v) the Temez hydrograph, (vi) the kinematic wave model, represented by the dynamics and kinematics equations for a uniform precipitation regime, and (vii) the SCS-CN (Soil Conservation Service Curve Number) model. This work presents a way of estimating precipitation runoff through the SCS-CN model, using SMOS (Soil Moisture and Ocean Salinity) mission soil moisture observations and rain-gauge measurements, as well as satellite precipitation estimations. The area of application is the Jucar River Basin Authority area, where one of the objectives is to develop the SCS-CN model in a spatially distributed way. The results were compared to simulations performed with the 7-km COSMO-CLM (COnsortium for Small-scale MOdelling, COSMO model in CLimate Mode) model. The use of SMOS soil moisture as input to the COSMO-CLM model will certainly improve model simulations.
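The SCS-CN relation named above has a standard closed form, sketched below with metric units. The storm depth and curve numbers are hypothetical, and the standard initial-abstraction ratio Ia = 0.2S is assumed (the study may calibrate this differently).

```python
def scs_cn_runoff(p_mm, cn):
    """SCS Curve Number direct runoff (depths in mm).
    S = 25400/CN - 254 (potential retention), Ia = 0.2*S (assumed ratio),
    Q = (P - Ia)^2 / (P - Ia + S) for P > Ia, else 0."""
    s = 25400.0 / cn - 254.0
    ia = 0.2 * s
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

# Wetter antecedent soil moisture (e.g. inferred from SMOS observations)
# maps to a higher CN, producing more runoff for the same 50 mm storm:
print(scs_cn_runoff(50.0, 70))   # drier conditions
print(scs_cn_runoff(50.0, 85))   # wetter conditions
```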
Kovler, Konstantin
2006-01-01
The unique properties of radon as a noble gas are used for monitoring cement hydration and microstructural transformations in cementitious systems. It is found that the radon concentration curve for hydrating cement paste enclosed in the chamber increases from zero (more accurately, background) concentrations, similar to unhydrated cement. However, radon concentrations developed within 3 days in the test chamber containing cement paste were approximately 20 times higher than those of unhydrated cement. This fact demonstrates the importance of the microstructural transformations taking place during cement hydration, in comparison with cement grain, which is a time-stable material. It is concluded that monitoring cement hydration by means of the radon exhalation method makes it possible to distinguish three main stages, which are readily seen in the time dependence of radon concentration: stage I (dormant period), stage II (setting and intensive microstructural transformations) and stage III (densification of the structure and drying). The information presented improves our understanding of the main physical mechanisms behind the characteristic behavior of radon exhalation in the course of cement hydration. The maximum radon exhalation rate observed when cement sets can reach 0.6 mBq kg⁻¹ s⁻¹ and sometimes exceeds 1.0 mBq kg⁻¹ s⁻¹. These values significantly exceed those previously reported for cementitious materials. At the same time, the minimum ventilation rate accepted in design practice (0.5 h⁻¹) guarantees that the concentrations will in most cases not exceed the action level and are not of any radiological concern for construction workers employed in concreting in closed spaces.
Jeong, Jina; Park, Eungyu; Han, Weon Shik; Kim, Kue-Young; Jun, Seong-Chun; Choung, Sungwook; Yun, Seong-Taek; Oh, Junho; Kim, Hyun-Jun
2017-11-01
In this study, a data-driven method for predicting CO2 leaks and associated concentrations from geological CO2 sequestration is developed. Several candidate models are compared based on their reproducibility and predictive capability for CO2 concentration measurements from the Environment Impact Evaluation Test (EIT) site in Korea. Based on the data mining results, a one-dimensional solution of the advection-dispersion equation for steady flow (i.e., the Ogata-Banks solution) is found to be most representative of the test data, and this model is adopted as the data model for the developed method. In the validation step, the method is applied to estimate future CO2 concentrations against the reference estimation by the Ogata-Banks solution, where part of the earlier data is used as the training dataset. From the analysis, it is found that the ensemble mean of multiple estimations based on the developed method shows high prediction accuracy relative to the reference estimation. In addition, the majority of the data to be predicted are included in the proposed quantile interval, which suggests adequate representation of the uncertainty by the developed method. Therefore, the incorporation of a reasonable physically based data model enhances the prediction capability of the data-driven model. The proposed method is not confined to estimation of CO2 concentration and may be applied to various real-time monitoring data from subsurface sites to develop automated control, management or decision-making systems.
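The Ogata-Banks data model adopted above is the classical closed-form solution of the 1-D advection-dispersion equation for a continuous inlet source. A minimal sketch in its generic textbook form (not the authors' calibrated model; velocity `v` and dispersion `D` would be fitted to site data):

```python
import math

def ogata_banks(x, t, v, D, C0=1.0):
    """C(x,t)/C0 = 0.5*[erfc((x - v*t)/(2*sqrt(D*t)))
                      + exp(v*x/D)*erfc((x + v*t)/(2*sqrt(D*t)))]
    for a continuous source of concentration C0 at x=0, t>0."""
    denom = 2.0 * math.sqrt(D * t)
    term1 = math.erfc((x - v * t) / denom)
    term2 = math.exp(v * x / D) * math.erfc((x + v * t) / denom)
    return 0.5 * C0 * (term1 + term2)
```

At the inlet the solution equals C0, at the advective front x = v·t it is slightly above C0/2, and far downstream it decays to zero, which is the breakthrough-curve shape being fitted to the monitoring data.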
International Nuclear Information System (INIS)
Gaveau, David L A; Leader-Williams, Nigel; Wich, Serge; Epting, Justin; Juhn, Daniel; Kanninen, Markku
2009-01-01
Payments for reduced carbon emissions from deforestation (RED) are now attracting attention as a way to halt tropical deforestation. Northern Sumatra comprises an area of 65 000 km² that is both the site of Indonesia's first planned RED initiative, and the stronghold of 92% of remaining Sumatran orangutans. Under current plans, this RED initiative will be implemented in a defined geographic area, essentially a newly established, 7500 km² protected area (PA) comprising mostly upland forest, where guards will be recruited to enforce forest protection. Meanwhile, new roads are currently under construction, while companies are converting lowland forests into oil palm plantations. This case study predicts the effectiveness of RED in reducing deforestation and conserving orangutans for two distinct scenarios: the current plan of implementing RED within the specific boundary of a new upland PA, and an alternative scenario of implementing RED across landscapes outside PAs. Our satellite-based spatially explicit deforestation model predicts that 1313 km² of forest would be saved from deforestation by 2030, while forest cover present in 2006 would shrink by 22% (7913 km²) across landscapes outside PAs if RED were only to be implemented in the upland PA. Meanwhile, orangutan habitat would reduce by 16% (1137 km²), resulting in the conservative loss of 1384 orangutans, or 25% of the current total population with or without RED intervention. By contrast, an estimated 7824 km² of forest could be saved from deforestation, with maximum benefit for orangutan conservation, if RED were to be implemented across all remaining forest landscapes outside PAs. Here, RED payments would compensate land users for their opportunity costs in not converting unprotected forests into oil palm, while the construction of new roads to service the marketing of oil palm would be halted. Our predictions suggest that Indonesia's first RED initiative in an upland PA may not significantly reduce
Gaveau, David L. A.; Wich, Serge; Epting, Justin; Juhn, Daniel; Kanninen, Markku; Leader-Williams, Nigel
2009-09-01
Payments for reduced carbon emissions from deforestation (RED) are now attracting attention as a way to halt tropical deforestation. Northern Sumatra comprises an area of 65 000 km² that is both the site of Indonesia's first planned RED initiative, and the stronghold of 92% of remaining Sumatran orangutans. Under current plans, this RED initiative will be implemented in a defined geographic area, essentially a newly established, 7500 km² protected area (PA) comprising mostly upland forest, where guards will be recruited to enforce forest protection. Meanwhile, new roads are currently under construction, while companies are converting lowland forests into oil palm plantations. This case study predicts the effectiveness of RED in reducing deforestation and conserving orangutans for two distinct scenarios: the current plan of implementing RED within the specific boundary of a new upland PA, and an alternative scenario of implementing RED across landscapes outside PAs. Our satellite-based spatially explicit deforestation model predicts that 1313 km² of forest would be saved from deforestation by 2030, while forest cover present in 2006 would shrink by 22% (7913 km²) across landscapes outside PAs if RED were only to be implemented in the upland PA. Meanwhile, orangutan habitat would reduce by 16% (1137 km²), resulting in the conservative loss of 1384 orangutans, or 25% of the current total population with or without RED intervention. By contrast, an estimated 7824 km² of forest could be saved from deforestation, with maximum benefit for orangutan conservation, if RED were to be implemented across all remaining forest landscapes outside PAs. Here, RED payments would compensate land users for their opportunity costs in not converting unprotected forests into oil palm, while the construction of new roads to service the marketing of oil palm would be halted. Our predictions suggest that Indonesia's first RED initiative in an upland PA may not significantly reduce
Energy Technology Data Exchange (ETDEWEB)
Gaveau, David L A; Leader-Williams, Nigel [Durrell Institute of Conservation and Ecology, University of Kent, Canterbury, Kent CT2 7NR (United Kingdom); Wich, Serge [Great Apes Trust of Iowa, 4200 SE 44th Avenue, Des Moines, IA 50320 (United States); Epting, Justin; Juhn, Daniel [Center for Applied Biodiversity Science, Conservation International, 2011 Crystal Drive, Suite 500, Arlington, VA 22202 (United States); Kanninen, Markku, E-mail: dgaveau@yahoo.co.u, E-mail: swich@greatapetrust.or, E-mail: justep22@myfastmail.co, E-mail: d.juhn@conservation.or, E-mail: m.kanninen@cgiar.or, E-mail: n.leader-williams@kent.ac.u [Center for International Forestry Research, Jalan CIFOR, Situ Gede, Sidang Barang, Bogor, West Java (Indonesia)
2009-09-15
Payments for reduced carbon emissions from deforestation (RED) are now attracting attention as a way to halt tropical deforestation. Northern Sumatra comprises an area of 65 000 km² that is both the site of Indonesia's first planned RED initiative, and the stronghold of 92% of remaining Sumatran orangutans. Under current plans, this RED initiative will be implemented in a defined geographic area, essentially a newly established, 7500 km² protected area (PA) comprising mostly upland forest, where guards will be recruited to enforce forest protection. Meanwhile, new roads are currently under construction, while companies are converting lowland forests into oil palm plantations. This case study predicts the effectiveness of RED in reducing deforestation and conserving orangutans for two distinct scenarios: the current plan of implementing RED within the specific boundary of a new upland PA, and an alternative scenario of implementing RED across landscapes outside PAs. Our satellite-based spatially explicit deforestation model predicts that 1313 km² of forest would be saved from deforestation by 2030, while forest cover present in 2006 would shrink by 22% (7913 km²) across landscapes outside PAs if RED were only to be implemented in the upland PA. Meanwhile, orangutan habitat would reduce by 16% (1137 km²), resulting in the conservative loss of 1384 orangutans, or 25% of the current total population with or without RED intervention. By contrast, an estimated 7824 km² of forest could be saved from deforestation, with maximum benefit for orangutan conservation, if RED were to be implemented across all remaining forest landscapes outside PAs. Here, RED payments would compensate land users for their opportunity costs in not converting unprotected forests into oil palm, while the construction of new roads to service the marketing of oil palm would be halted. Our predictions suggest that Indonesia's first RED initiative in an
How job demands, resources, and burnout predict objective performance: a constructive replication.
Bakker, Arnold B; Van Emmerik, Hetty; Van Riet, Pim
2008-07-01
The present study uses the Job Demands-Resources model (Bakker & Demerouti, 2007) to examine how job characteristics and burnout (exhaustion and cynicism) contribute to explaining variance in objective team performance. A central assumption in the model is that work characteristics evoke two psychologically different processes. In the first process, job demands lead to constant psychological overtaxing and, in the long run, to exhaustion. In the second process, a lack of job resources precludes actual goal accomplishment, leading to cynicism. In the present study these two processes were used to predict objective team performance. A total of 176 employees from a temporary employment agency completed questionnaires on job characteristics and burnout. These self-reports were linked to information from the company's management information system about teams' (N=71) objective sales performance (actual sales divided by the stated objectives) during the 3 months after the questionnaire data collection period. The results of structural equation modeling analyses did not support the hypothesis that exhaustion mediates the relationship between job demands and performance, but confirmed that cynicism mediates the relationship between job resources and performance, suggesting that work conditions influence performance particularly through the attitudinal component of burnout.
Docherty, A R; Moscati, A; Peterson, R; Edwards, A C; Adkins, D E; Bacanu, S A; Bigdeli, T B; Webb, B T; Flint, J; Kendler, K S
2016-10-25
Biometrical genetic studies suggest that the personality dimensions, including neuroticism, are moderately heritable (~0.4 to 0.6). Quantitative analyses that aggregate the effects of many common variants have recently further informed genetic research on European samples. However, there has been limited research to date on non-European populations. This study examined the personality dimensions in a large sample of Han Chinese descent (N=10 064) from the China, Oxford, and VCU Experimental Research on Genetic Epidemiology study, aimed at identifying genetic risk factors for recurrent major depression among a rigorously ascertained cohort. Heritability of neuroticism as measured by the Eysenck Personality Questionnaire (EPQ) was estimated to be low but statistically significant at 10% (s.e.=0.03, P=0.0001). In addition to EPQ neuroticism, which is based on a three-factor model, data for the Big Five (BF) personality dimensions (neuroticism, openness, conscientiousness, extraversion and agreeableness) measured by the Big Five Inventory were available for controls (n=5596). Heritability estimates of the BF were not statistically significant despite high power (>0.85) to detect heritabilities of 0.10. Polygenic risk scores constructed by best linear unbiased prediction weights applied to split-half samples failed to significantly predict any of the personality traits, but polygenic risk for neuroticism, calculated with LDpred and based on predictive variants previously identified from European populations (N=171 911), significantly predicted major depressive disorder case-control status (P=0.0004) after false discovery rate correction. The scores also significantly predicted EPQ neuroticism (P=6.3 × 10⁻⁶). Factor analytic results of the measures indicated that any differences in heritabilities across samples may be due to genetic variation or variation in haplotype structure between samples, rather than measurement non-invariance. Findings demonstrate that neuroticism
Sada, Andrea; Robles-García, Rebeca; Martínez-López, Nicolás; Hernández-Ramírez, Rafael; Tovilla-Zarate, Carlos-Alfonso; López-Munguía, Fernando; Suárez-Alvarez, Enrique; Ayala, Xochitl; Fresán, Ana
2016-08-01
Assessing dangerousness to gauge the likelihood of future violent behaviour has become an integral part of clinical mental health practice in forensic and non-forensic psychiatric settings, and one of the most effective instruments for this purpose is the Historical, Clinical and Risk Management-20 (HCR-20). The aims of this study were to examine the HCR-20 factor structure in Mexican psychiatric inpatients and to obtain its predictive validity and reliability for use in this population. In total, 225 patients diagnosed with psychotic, affective or personality disorders were included. The HCR-20 was applied at hospital admission and violent behaviours were assessed during psychiatric hospitalization using the Overt Aggression Scale (OAS). Construct validity, predictive validity and internal consistency were determined. Violent behaviour remained more severe during hospitalization in patients classified in the high-risk group. Fifteen items displayed adequate communalities in the original designated domains of the HCR-20, and internal consistency of the instrument was high. The HCR-20 is a suitable instrument for predicting violence risk in Mexican psychiatric inpatients.
Directory of Open Access Journals (Sweden)
Abdelaali Rahmouni
2017-02-01
Full Text Available Natural materials (e.g. rocks and soils) are porous media whose microstructures present a wide diversity. They generally consist of a heterogeneous solid phase and a porous phase which may be fully or partially saturated with one or more fluids. The prediction of the elastic and acoustic properties of porous materials is very important in many fields, such as rock physics, reservoir geophysics, civil engineering, construction and the study of the behavior of historical monuments. The aim of this work is to predict the elastic and acoustic behaviors of isotropic porous materials consisting of a solid matrix containing dry, saturated and partially saturated spherical pores. To this end, a homogenization technique based on the Mori–Tanaka model is presented to connect the elastic and acoustic properties to porosity and degree of water saturation. A non-destructive ultrasonic technique is used to determine the elastic properties from measurements of P-wave velocities. The results obtained show the influence of porosity and degree of water saturation on the effective properties. The various predictions of the Mori–Tanaka model are then compared with experimental results for the elastic and acoustic properties of calcarenite.
Karres, Julian; Kieviet, Noera; Eerenberg, Jan-Peter; Vrouenraets, Bart C
2018-01-01
Early mortality after hip fracture surgery is high and preoperative risk assessment for the individual patient is challenging. A risk model could identify patients in need of more intensive perioperative care, provide insight in the prognosis, and allow for risk adjustment in audits. This study aimed to develop and validate a risk prediction model for 30-day mortality after hip fracture surgery: the Hip fracture Estimator of Mortality Amsterdam (HEMA). Data on 1050 consecutive patients undergoing hip fracture surgery between 2004 and 2010 were retrospectively collected and randomly split into a development cohort (746 patients) and validation cohort (304 patients). Logistic regression analysis was performed in the development cohort to determine risk factors for the HEMA. Discrimination and calibration were assessed in both cohorts using the area under the receiver operating characteristic curve (AUC), the Hosmer-Lemeshow goodness-of-fit test, and by stratification into low-, medium- and high-risk groups. Nine predictors for 30-day mortality were identified and used in the final model: age ≥85 years, in-hospital fracture, signs of malnutrition, myocardial infarction, congestive heart failure, current pneumonia, renal failure, malignancy, and serum urea >9 mmol/L. The HEMA showed good discrimination in the development cohort (AUC = 0.81) and the validation cohort (AUC = 0.79). The Hosmer-Lemeshow test indicated no lack of fit in either cohort (P > 0.05). The HEMA is based on preoperative variables and can be used to predict the risk of 30-day mortality after hip fracture surgery for the individual patient. Prognostic Level II. See Instructions for Authors for a complete description of levels of evidence.
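The nine binary predictors lend themselves to a simple logistic risk score. The sketch below is illustrative only: the abstract does not report the fitted HEMA coefficients, so the intercept and weights here are hypothetical placeholders showing the model's form, not the published values.

```python
import math

# Hypothetical placeholder coefficients -- the published HEMA values are not
# given in the abstract; these numbers only illustrate the model structure.
INTERCEPT = -5.0
WEIGHTS = {
    "age_ge_85": 1.0,
    "in_hospital_fracture": 0.9,
    "malnutrition": 0.8,
    "myocardial_infarction": 0.7,
    "congestive_heart_failure": 0.9,
    "pneumonia": 0.8,
    "renal_failure": 0.7,
    "malignancy": 0.6,
    "serum_urea_gt_9": 0.5,
}

def predicted_30day_mortality(patient):
    """Logistic model: p = 1 / (1 + exp(-(b0 + sum_i b_i * x_i))),
    where each x_i is a 0/1 flag for one of the nine predictors."""
    z = INTERCEPT + sum(w * patient.get(k, 0) for k, w in WEIGHTS.items())
    return 1.0 / (1.0 + math.exp(-z))
```

Stratification into low-, medium- and high-risk groups, as used in the calibration assessment, would then amount to thresholding this predicted probability.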
Directory of Open Access Journals (Sweden)
Charles Rocabert
2017-03-01
Full Text Available Metabolic cross-feeding interactions between microbial strains are common in nature and emerge during evolution experiments in the laboratory, even in homogeneous environments providing a single carbon source. In sympatry, when the environment is well-mixed, the reasons why emerging cross-feeding interactions may sometimes become stable and lead to monophyletic genotypic clusters occupying specific niches, named ecotypes, remain unclear. As an alternative to evolution experiments in the laboratory, we developed Evo2Sim, a multi-scale model of in silico experimental evolution, equipped with a full toolbox of experimental setups, competition assays and phylogenetic analysis, and, most importantly, allowing for evolvable ecological interactions. Digital organisms with an evolvable genome structure encoding an evolvable metabolic network evolved for tens of thousands of generations in environments mimicking the dynamics of real controlled environments, including chemostat or batch culture providing a single limiting resource. We show here that the evolution of stable cross-feeding interactions requires seasonal batch conditions. In this case, adaptive diversification events result in two stably co-existing ecotypes, with one feeding on the primary resource and the other on by-products. We show that the regularity of serial transfers is essential for the maintenance of the polymorphism, as it allows for at least two stable seasons and thus two temporal niches. A first season is externally generated by the transfer into fresh medium, while a second one is internally generated by niche construction as the provided nutrient is replaced by secreted by-products derived from bacterial growth. In chemostat conditions, even if cross-feeding interactions emerge, they are not stable in the long term because fitter mutants eventually invade the whole population. We also show that the long-term evolution of the two stable ecotypes leads to character
Wang, Quanchao; Yu, Yang; Li, Fuhua; Zhang, Xiaojun; Xiang, Jianhai
2017-09-01
Genomic selection (GS) can be used to accelerate genetic improvement by shortening the selection interval. The successful application of GS depends largely on the accuracy of the prediction of the genomic estimated breeding value (GEBV). This study is a first attempt to understand the practicality of GS in Litopenaeus vannamei and aims to evaluate models for GS on growth traits. The performance of GS models in L. vannamei was evaluated in a population consisting of 205 individuals, which were genotyped for 6 359 single nucleotide polymorphism (SNP) markers by specific length amplified fragment sequencing (SLAF-seq) and phenotyped for body length and body weight. Three GS models (RR-BLUP, BayesA, and Bayesian LASSO) were used to obtain the GEBV, and their predictive ability was assessed by the reliability of the GEBV and the bias of the predicted phenotypes. The mean reliability of the GEBVs for body length and body weight predicted by the different models was 0.296 and 0.411, respectively. For each trait, the performances of the three models were very similar to each other with respect to predictive ability. The regression coefficients estimated by the three models were close to one, suggesting near-zero bias for the predictions. Therefore, when GS was applied in an L. vannamei population for the studied scenarios, all three models appeared practicable. Further analyses suggested that improved genomic prediction could be realized by increasing the size of the training population as well as the density of SNPs.
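RR-BLUP, one of the three models compared, is equivalent to ridge regression on the marker genotypes. A minimal sketch (a generic formulation, not the authors' implementation; the matrix sizes and the shrinkage parameter `lam` below are illustrative):

```python
import numpy as np

def rr_blup(Z, y, lam):
    """Ridge-regression BLUP of marker effects:
    u = (Z'Z + lam*I)^-1 Z'y, with Z the (individuals x markers) genotype
    matrix, y the phenotypes, and lam the shrinkage parameter (the ratio of
    residual to marker-effect variance). GEBVs of new individuals are
    Z_new @ u."""
    n_markers = Z.shape[1]
    u = np.linalg.solve(Z.T @ Z + lam * np.eye(n_markers), Z.T @ y)
    return u

# Illustrative usage on simulated data: 50 individuals, 10 markers.
rng = np.random.default_rng(1)
Z = rng.standard_normal((50, 10))
u_true = rng.standard_normal(10)
y = Z @ u_true
gebv = Z @ rr_blup(Z, y, 1e-8)
```

In practice lam is estimated (e.g. by REML), and predictive ability is assessed by cross-validation, as in the reliability figures quoted above.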
Directory of Open Access Journals (Sweden)
Lucy L Brown
Full Text Available Four suites of behavioral traits have been associated with four broad neural systems: (1) the dopamine and related norepinephrine system; (2) serotonin; (3) testosterone; and (4) the estrogen and oxytocin system. A 56-item questionnaire, the Fisher Temperament Inventory (FTI), was developed to define four temperament dimensions associated with these behavioral traits and neural systems. The questionnaire has been used to suggest romantic partner compatibility. The dimensions were named: Curious/Energetic; Cautious/Social Norm Compliant; Analytical/Tough-minded; and Prosocial/Empathetic. For the present study, the FTI was administered to participants in two functional magnetic resonance imaging studies that elicited feelings of love and attachment, near-universal human experiences. Scores for the Curious/Energetic dimension co-varied with activation in a region of the substantia nigra, consistent with the prediction that this dimension reflects activity in the dopamine system. Scores for the Cautious/Social Norm Compliant dimension correlated with activation in the ventrolateral prefrontal cortex in regions associated with social norm compliance, a trait linked with the serotonin system. Scores on the Analytical/Tough-minded scale co-varied with activity in regions of the occipital and parietal cortices associated with visual acuity and mathematical thinking, traits linked with testosterone. Also, testosterone contributes to brain architecture in these areas. Scores on the Prosocial/Empathetic scale correlated with activity in regions of the inferior frontal gyrus, anterior insula and fusiform gyrus. These are regions associated with mirror neurons or empathy, a trait linked with the estrogen/oxytocin system, and where estrogen contributes to brain architecture. These findings, replicated across two studies, suggest that the FTI measures influences of four broad neural systems, and that these temperament dimensions and neural systems could constitute
Brown, Lucy L; Acevedo, Bianca; Fisher, Helen E
2013-01-01
Four suites of behavioral traits have been associated with four broad neural systems: 1) the dopamine and related norepinephrine system; 2) serotonin; 3) testosterone; and 4) the estrogen and oxytocin system. A 56-item questionnaire, the Fisher Temperament Inventory (FTI), was developed to define four temperament dimensions associated with these behavioral traits and neural systems. The questionnaire has been used to suggest romantic partner compatibility. The dimensions were named: Curious/Energetic; Cautious/Social Norm Compliant; Analytical/Tough-minded; and Prosocial/Empathetic. For the present study, the FTI was administered to participants in two functional magnetic resonance imaging studies that elicited feelings of love and attachment, near-universal human experiences. Scores for the Curious/Energetic dimension co-varied with activation in a region of the substantia nigra, consistent with the prediction that this dimension reflects activity in the dopamine system. Scores for the Cautious/Social Norm Compliant dimension correlated with activation in the ventrolateral prefrontal cortex in regions associated with social norm compliance, a trait linked with the serotonin system. Scores on the Analytical/Tough-minded scale co-varied with activity in regions of the occipital and parietal cortices associated with visual acuity and mathematical thinking, traits linked with testosterone. Also, testosterone contributes to brain architecture in these areas. Scores on the Prosocial/Empathetic scale correlated with activity in regions of the inferior frontal gyrus, anterior insula and fusiform gyrus. These are regions associated with mirror neurons or empathy, a trait linked with the estrogen/oxytocin system, and where estrogen contributes to brain architecture. These findings, replicated across two studies, suggest that the FTI measures influences of four broad neural systems, and that these temperament dimensions and neural systems could constitute foundational mechanisms
Brown, Adam J; Teng, Zhongzhao; Calvert, Patrick A; Rajani, Nikil K; Hennessy, Orla; Nerlekar, Nitesh; Obaid, Daniel R; Costopoulos, Charis; Huang, Yuan; Hoole, Stephen P; Goddard, Martin; West, Nick E J; Gillard, Jonathan H; Bennett, Martin R
2016-06-01
Although plaque rupture is responsible for most myocardial infarctions, few high-risk plaques identified by intracoronary imaging actually result in future major adverse cardiovascular events (MACE). Nonimaging markers of individual plaque behavior are therefore required. Rupture occurs when plaque structural stress (PSS) exceeds material strength. We therefore assessed whether PSS could predict future MACE in high-risk nonculprit lesions identified on virtual-histology intravascular ultrasound. Baseline nonculprit lesion features associated with MACE during long-term follow-up (median: 1115 days) were determined in 170 patients undergoing 3-vessel virtual-histology intravascular ultrasound. MACE was associated with plaque burden ≥70% (hazard ratio: 8.6; 95% confidence interval, 2.5-30.6; P<0.001) and minimal luminal area ≤4 mm² (hazard ratio: 6.6; 95% confidence interval, 2.1-20.1; P=0.036), although absolute event rates for high-risk lesions remained <10%. PSS derived from virtual-histology intravascular ultrasound was subsequently estimated in nonculprit lesions responsible for MACE (n=22) versus matched control lesions (n=22). PSS showed marked heterogeneity across and between similar lesions but was significantly increased in MACE lesions at high-risk regions, including plaque burden ≥70% (13.9±11.5 versus 10.2±4.7; P<0.001) and thin-cap fibroatheroma (14.0±8.9 versus 11.6±4.5; P=0.02). Furthermore, PSS improved the ability of virtual-histology intravascular ultrasound to predict MACE in plaques with plaque burden ≥70% (adjusted log-rank, P=0.003) and minimal luminal area ≤4 mm² (P=0.002). Plaques responsible for MACE had larger superficial calcium inclusions, which acted to increase PSS (P<0.05). Baseline PSS is increased in plaques responsible for MACE and improves the ability of intracoronary imaging to predict events. Biomechanical modeling may complement plaque imaging for risk stratification of coronary nonculprit lesions. © 2016
Alzraiee, Ayman H.; Bau, Domenico A.; Garcia, Luis A.
2013-06-01
Effective sampling of hydrogeological systems is essential in guiding groundwater management practices. Optimal sampling of groundwater systems has previously been formulated based on the assumption that heterogeneous subsurface properties can be modeled using a geostatistical approach. Therefore, the monitoring schemes have been developed to concurrently minimize the uncertainty in the spatial distribution of systems' states and parameters, such as the hydraulic conductivity K and the hydraulic head H, and the uncertainty in the geostatistical model of system parameters using a single objective function that aggregates all objectives. However, it has been shown that the aggregation of possibly conflicting objective functions is sensitive to the adopted aggregation scheme and may lead to distorted results. In addition, the uncertainties in geostatistical parameters affect the uncertainty in the spatial prediction of K and H according to a complex nonlinear relationship, which has often been ineffectively evaluated using a first-order approximation. In this study, we propose a multiobjective optimization framework to assist the design of monitoring networks of K and H with the goal of optimizing their spatial predictions and estimating the geostatistical parameters of the K field. The framework stems from the combination of a data assimilation (DA) algorithm and a multiobjective evolutionary algorithm (MOEA). The DA algorithm is based on the ensemble Kalman filter, a Monte-Carlo-based Bayesian update scheme for nonlinear systems, which is employed to approximate the posterior uncertainty in K, H, and the geostatistical parameters of K obtained by collecting new measurements. Multiple MOEA experiments are used to investigate the trade-off among design objectives and identify the corresponding monitoring schemes. The methodology is applied to design a sampling network for a shallow unconfined groundwater system located in Rocky Ford, Colorado. Results indicate that
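The ensemble Kalman filter update at the core of the DA algorithm can be sketched as follows. This is a generic stochastic (perturbed-observation) EnKF analysis step, not the authors' code; the dimensions in the usage example are illustrative.

```python
import numpy as np

def enkf_update(X, d, H, R, rng):
    """One stochastic EnKF analysis step.
    X: (n_state, n_ens) forecast ensemble; d: (n_obs,) observation vector;
    H: (n_obs, n_state) observation operator; R: (n_obs, n_obs) obs-error cov.
    Returns the analysis ensemble X + K (D - H X)."""
    n_ens = X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)        # ensemble anomalies
    P = A @ A.T / (n_ens - 1)                    # sample forecast covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R) # Kalman gain
    # Perturbed observations: each member assimilates a noisy copy of d.
    D = d[:, None] + rng.multivariate_normal(np.zeros(len(d)), R, n_ens).T
    return X + K @ (D - H @ X)

# Illustrative usage: one directly observed scalar state, 200 members.
rng = np.random.default_rng(0)
X = rng.standard_normal((1, 200))                # forecast ensemble around 0
Xa = enkf_update(X, np.array([5.0]), np.array([[1.0]]), np.array([[1e-4]]), rng)
```

With a near-exact observation (tiny R), the gain approaches one and the analysis ensemble collapses onto the observed value, which is the Monte-Carlo Bayesian update behavior the abstract describes.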
Improved model predictive control of resistive wall modes by error field estimator in EXTRAP T2R
Setiadi, A. C.; Brunsell, P. R.; Frassinetti, L.
2016-12-01
Many implementations of model-based approaches for toroidal plasmas have shown better control performance than conventional feedback controllers. One prerequisite of model-based control is the availability of a control-oriented model. This model can be obtained empirically through a systematic procedure called system identification. Such a model is used in this work to design a model predictive controller to stabilize multiple resistive wall modes in the EXTRAP T2R reversed-field pinch. Model predictive control is an advanced control method that can optimize the future behaviour of a system. Furthermore, this paper discusses an additional use of the empirical model: estimating the error field in EXTRAP T2R. Two potential methods for estimating the error field are discussed. The error field estimator is then combined with the model predictive controller and yields better radial magnetic field suppression.
Then, Amy Y.; Hoenig, John M; Hall, Norman G.; Hewitt, David A.
2015-01-01
Many methods have been developed in the last 70 years to predict the natural mortality rate, M, of a stock based on empirical evidence from comparative life history studies. These indirect or empirical methods are used in most stock assessments to (i) obtain estimates of M in the absence of direct information, (ii) check on the reasonableness of a direct estimate of M, (iii) examine the range of plausible M estimates for the stock under consideration, and (iv) define prior distributions for Bayesian analyses. The two most cited empirical methods have appeared in the literature over 2500 times to date. Despite the importance of these methods, there is no consensus in the literature on how well these methods work in terms of prediction error or how their performance may be ranked. We evaluate estimators based on various combinations of maximum age (tmax), growth parameters, and water temperature by seeing how well they reproduce >200 independent, direct estimates of M. We use tenfold cross-validation to estimate the prediction error of the estimators and to rank their performance. With updated and carefully reviewed data, we conclude that a tmax-based estimator performs the best among all estimators evaluated. The tmax-based estimators in turn perform better than the Alverson–Carney method based on tmax and the von Bertalanffy K coefficient, Pauly's method based on growth parameters and water temperature, and methods based just on K. It is possible to combine two independent methods by computing a weighted mean, but the improvement over the tmax-based methods is slight. Based on cross-validation prediction error, model residual patterns, model parsimony, and biological considerations, we recommend the use of a tmax-based estimator (M = 4.899·tmax^(−0.916), prediction error = 0.32) when possible and a growth-based method (M = 4.118·K^(0.73)·L∞^(−0.33), prediction error
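The two recommended estimators can be applied directly, since the abstract gives their fitted forms (the sketch below simply encodes those formulas; units follow the usual conventions, with tmax in years, K in 1/year, and L∞ in cm):

```python
def m_from_tmax(tmax):
    """Maximum-age-based natural mortality estimator recommended by the
    authors: M = 4.899 * tmax**(-0.916)."""
    return 4.899 * tmax ** -0.916

def m_from_growth(K, Linf):
    """Growth-based alternative: M = 4.118 * K**0.73 * Linf**(-0.33)."""
    return 4.118 * K ** 0.73 * Linf ** -0.33
```

As expected for an inverse power law in maximum age, longer-lived stocks receive lower predicted natural mortality rates.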
Yeates, E.; Dreaper, G.; Afshari, S.; Tavakoly, A. A.
2017-12-01
Over the past six fiscal years, the United States Army Corps of Engineers (USACE) has contracted an average of about a billion dollars per year for navigation channel dredging. To execute these funds effectively, USACE Districts must determine which navigation channels need to be dredged in a given year. Improving this prioritization process results in more efficient waterway maintenance. This study uses the Streamflow Prediction Tool, a runoff routing model based on global weather forecast ensembles, to estimate dredged volumes. This study establishes regional linear relationships between cumulative flow and dredged volumes over a long-term simulation covering 30 years (1985-2015), using drainage area and shoaling parameters. The study framework integrates the National Hydrography Dataset (NHDPlus Dataset) with parameters from the Corps Shoaling Analysis Tool (CSAT) and dredging record data from USACE District records. Results in the test cases of the Houston Ship Channel and the Sabine and Port Arthur Harbor waterways in Texas indicate positive correlation between the simulated streamflows and actual dredging records.
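A regional linear relation between cumulative flow and dredged volume of the kind described above can be fit by ordinary least squares; a minimal sketch with purely illustrative numbers (not USACE records):

```python
# Illustrative annual data for a hypothetical reach:
cum_flow = [120.0, 150.0, 180.0, 210.0, 260.0]   # simulated cumulative flow (arbitrary units)
dredged = [0.9, 1.1, 1.4, 1.6, 2.0]              # dredged volume (million cubic yards)

n = len(cum_flow)
mean_x = sum(cum_flow) / n
mean_y = sum(dredged) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(cum_flow, dredged))
         / sum((x - mean_x) ** 2 for x in cum_flow))
intercept = mean_y - slope * mean_x

# Predict the dredged volume for next year's simulated cumulative flow:
print(round(intercept + slope * 230.0, 2))
```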
Predicting traffic volumes and estimating the effects of shocks in massive transportation systems.
Silva, Ricardo; Kang, Soong Moon; Airoldi, Edoardo M
2015-05-05
Public transportation systems are an essential component of major cities. The widespread use of smart cards for automated fare collection in these systems offers a unique opportunity to understand passenger behavior at a massive scale. In this study, we use network-wide data obtained from smart cards in the London transport system to predict future traffic volumes, and to estimate the effects of disruptions due to unplanned closures of stations or lines. Disruptions, or shocks, force passengers to make different decisions concerning which stations to enter or exit. We describe how these changes in passenger behavior lead to possible overcrowding and model how stations will be affected by given disruptions. This information can then be used to mitigate the effects of these shocks because transport authorities may prepare in advance alternative solutions such as additional buses near the most affected stations. We describe statistical methods that leverage the large amount of smart-card data collected under the natural state of the system, where no shocks take place, as variables that are indicative of behavior under disruptions. We find that features extracted from the natural regime data can be successfully exploited to describe different disruption regimes, and that our framework can be used as a general tool for any similar complex transportation system.
Directory of Open Access Journals (Sweden)
Tegan ePenton
2014-08-01
Full Text Available The debate on the existence of free will is on-going. Seminal findings by Libet et al. demonstrate that subjective awareness of a voluntary urge to act (the W-judgement) occurs before action execution. Libet’s paradigm requires participants to perform voluntary actions while watching a clock hand rotate. On response trials, participants make a retrospective judgement related to awareness of their urge to act. This research investigates the relationship between individual differences in performance on the Libet task and self-awareness. We examined the relationship between W-judgement, Attributional Style (AS; a measure of perceived control), and interoceptive sensitivity (IS; awareness of stimuli originating from one’s body, e.g. heartbeats). Thirty participants completed the AS questionnaire (ASQ), a heartbeat estimation task (IS), and the Libet paradigm. The ASQ score significantly predicted performance on the Libet task, while IS did not: more negative ASQ scores indicated larger latency between W-judgement and action execution. A significant correlation was also observed between ASQ score and IS. This is the first research to report a relationship between W-judgement and AS and should inform the future use of electroencephalography to investigate the relationship between AS, W-judgement and RP onset. Our findings raise questions surrounding the importance of one’s perceived control in determining the point of conscious intention to act. Furthermore, we demonstrate possible negative implications associated with a longer period between conscious awareness and action execution.
Directory of Open Access Journals (Sweden)
Ruixian Fang
2016-09-01
Full Text Available This work uses the adjoint sensitivity model of the counter-flow cooling tower derived in the accompanying PART I to obtain the expressions and relative numerical rankings of the sensitivities, to all model parameters, of the following model responses: (i) outlet air temperature; (ii) outlet water temperature; (iii) outlet water mass flow rate; and (iv) air outlet relative humidity. These sensitivities are subsequently used within the “predictive modeling for coupled multi-physics systems” (PM_CMPS) methodology to obtain explicit formulas for the predicted optimal nominal values for the model responses and parameters, along with reduced predicted standard deviations for the predicted model parameters and responses. These explicit formulas embody the assimilation of experimental data and the “calibration” of the model’s parameters. The results presented in this work demonstrate that the PM_CMPS methodology reduces the predicted standard deviations to values that are smaller than either the computed or the experimentally measured ones, even for responses (e.g., the outlet water flow rate) for which no measurements are available. These improvements stem from the global characteristics of the PM_CMPS methodology, which combines all of the available information simultaneously in phase-space, as opposed to combining it sequentially, as in current data assimilation procedures.
Directory of Open Access Journals (Sweden)
Lijuan Cui
2016-11-01
Full Text Available We monitored the water quality and hydrological conditions of a horizontal subsurface constructed wetland (HSSF-CW) in Beijing, China, for two years. We simulated the area-based constant and the temperature coefficient with the first-order kinetic model. We examined the relationships between the nitrogen (N) removal rate, N load, seasonal variations in the N removal rate, and environmental factors—such as the area-based constant, temperature, and dissolved oxygen (DO). The effluent ammonia (NH4+-N) and nitrate (NO3−-N) concentrations were significantly lower than the influent concentrations (p < 0.01, n = 38). The NO3−-N load was significantly correlated with the removal rate (R2 = 0.96, p < 0.01), but the NH4+-N load was not correlated with the removal rate (R2 = 0.02, p > 0.01). The area-based constants of NO3−-N and NH4+-N at 20 °C were 27 ± 26 (mean ± SD) and 14 ± 10 m∙year−1, respectively. The temperature coefficients for NO3−-N and NH4+-N were estimated at 1.004 and 0.960, respectively. The area-based constants for NO3−-N and NH4+-N were not correlated with temperature (p > 0.01). The NO3−-N area-based constant was correlated with the corresponding load (R2 = 0.96, p < 0.01). The NH4+-N area rate was correlated with DO (R2 = 0.69, p < 0.01), suggesting that the factors that influenced the N removal rate in this wetland met Liebig’s law of the minimum.
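The first-order areal kinetic model with an Arrhenius-type temperature correction can be sketched as follows; k20 and the temperature coefficient are the NO3−-N estimates quoted above, while the hydraulic loading rate is purely illustrative:

```python
import math

def k_at_temp(k20, theta, temp_c):
    """Area-based first-order constant adjusted for temperature:
    k(T) = k20 * theta**(T - 20)."""
    return k20 * theta ** (temp_c - 20.0)

def effluent_conc(c_in, k, hydraulic_load):
    """First-order areal model for a treatment wetland:
    C_out = C_in * exp(-k / q), with q the hydraulic loading rate (m/yr)."""
    return c_in * math.exp(-k / hydraulic_load)

# NO3-N constants from the abstract; the loading rate q = 30 m/yr is assumed:
k = k_at_temp(27.0, 1.004, 15.0)
print(round(effluent_conc(10.0, k, 30.0), 2))  # → 4.14
```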
Seong, Ki Moon; Park, Hweon; Kim, Seong Jung; Ha, Hyo Nam; Lee, Jae Yung; Kim, Joon
2007-06-01
A yeast transcriptional activator, Gcn4p, induces the expression of genes that are involved in amino acid and purine biosynthetic pathways under amino acid starvation. Gcn4p has an acidic activation domain in the central region and a bZIP domain in the C-terminus that is divided into the DNA-binding motif and the dimerization leucine zipper motif. In order to identify amino acids in the DNA-binding motif of Gcn4p which are involved in transcriptional activation, we constructed mutant libraries in the DNA-binding motif through an innovative application of random mutagenesis. A mutant library made from oligonucleotides mutated randomly according to the Poisson distribution showed actual mutation frequencies in good agreement with expected values. This method saves the time and effort needed to create a mutant library with a predictable mutation frequency. Based on studies using the mutant libraries constructed by the new method, specific residues of the DNA-binding domain in Gcn4p appear to be involved in the transcriptional activities on a conserved binding site.
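The expected mutation frequencies under a Poisson model are straightforward to compute; a minimal sketch (the mean rate of one mutation per oligo is an illustrative assumption):

```python
import math

def poisson_pmf(n, lam):
    """Probability of exactly n mutations per oligonucleotide at mean rate lam."""
    return lam ** n * math.exp(-lam) / math.factorial(n)

# Illustrative mean of 1 mutation per oligo:
lam = 1.0
frac_mutated = 1.0 - poisson_pmf(0, lam)  # oligos carrying at least one mutation
print(round(frac_mutated, 3))  # → 0.632
```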
Directory of Open Access Journals (Sweden)
Kyu Sik Jung
Full Text Available Preoperative liver stiffness (LS) measurement using transient elastography (TE) is useful for predicting late recurrence after curative resection of hepatocellular carcinoma (HCC). We developed and validated a novel LS value-based predictive model for late recurrence of HCC. Patients who were due to undergo curative resection of HCC between August 2006 and January 2010 were prospectively enrolled, and TE was performed prior to operation per study protocol. The predictive model of late recurrence was constructed based on a multiple logistic regression model. Discrimination and calibration were used to validate the model. Among a total of 139 patients who were finally analyzed, late recurrence occurred in 44 patients, with a median follow-up of 24.5 months (range, 12.4-68.1). We developed a predictive model for late recurrence of HCC using LS value, activity grade II-III, presence of multiple tumors, and indocyanine green retention rate at 15 min (ICG R15), which showed fairly good discrimination capability with an area under the receiver operating characteristic curve (AUROC) of 0.724 (95% confidence interval [CI], 0.632-0.816). In the validation, using a bootstrap method to assess discrimination, the AUROC remained largely unchanged between iterations, with an average AUROC of 0.722 (95% CIs, 0.718-0.724). When we plotted a calibration chart for predicted and observed risk of late recurrence, the predicted risk of late recurrence correlated well with observed risk, with a correlation coefficient of 0.873 (P < 0.001). A simple LS value-based predictive model could estimate the risk of late recurrence in patients who underwent curative resection of HCC.
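The bootstrap check of discrimination can be sketched as follows, assuming patients are resampled with replacement and the AUROC recomputed each iteration (a generic sketch, not the authors' code):

```python
import random

def auroc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney rank identity."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def bootstrap_auroc(labels, scores, n_iter=200, seed=0):
    """Resample patients with replacement, recompute the AUROC each time,
    and average over resamples that contain both outcome classes."""
    rng = random.Random(seed)
    n = len(labels)
    aurocs = []
    for _ in range(n_iter):
        sample = [rng.randrange(n) for _ in range(n)]
        ls = [labels[i] for i in sample]
        ss = [scores[i] for i in sample]
        if 0 < sum(ls) < n:  # need both classes present
            aurocs.append(auroc(ls, ss))
    return sum(aurocs) / len(aurocs)
```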
Li, Aihua; Dhakal, Shital; Glenn, Nancy F.; Spaete, Luke P.; Shinneman, Douglas; Pilliod, David S.; Arkle, Robert; McIlroy, Susan
2017-01-01
Our study objectives were to model the aboveground biomass in a xeric shrub-steppe landscape with airborne light detection and ranging (Lidar) and explore the uncertainty associated with the models we created. We incorporated vegetation vertical structure information obtained from Lidar with ground-measured biomass data, allowing us to scale shrub biomass from small field sites (1 m subplots and 1 ha plots) to a larger landscape. A series of airborne Lidar-derived vegetation metrics were trained and linked with the field-measured biomass in Random Forests (RF) regression models. A Stepwise Multiple Regression (SMR) model was also explored as a comparison. Our results demonstrated that the important predictors from Lidar-derived metrics had a strong correlation with field-measured biomass in the RF regression models with a pseudo R2 of 0.76 and RMSE of 125 g/m2 for shrub biomass and a pseudo R2 of 0.74 and RMSE of 141 g/m2 for total biomass, and a weak correlation with field-measured herbaceous biomass. The SMR results were similar but slightly better than RF, explaining 77–79% of the variance, with RMSE ranging from 120 to 129 g/m2 for shrub and total biomass, respectively. We further explored the computational efficiency and relative accuracies of using point cloud and raster Lidar metrics at different resolutions (1 m to 1 ha). Metrics derived from the Lidar point cloud processing led to improved biomass estimates at nearly all resolutions in comparison to raster-derived Lidar metrics. Only at 1 m were the results from the point cloud and raster products nearly equivalent. The best Lidar prediction models of biomass at the plot-level (1 ha) were achieved when Lidar metrics were derived from an average of fine resolution (1 m) metrics to minimize boundary effects and to smooth variability. Overall, both RF and SMR methods explained more than 74% of the variance in biomass, with the most important Lidar variables being associated with vegetation structure
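The two fit statistics used to score the RF and SMR models, pseudo R² and RMSE, can be computed with small helpers; a generic sketch with illustrative numbers:

```python
import math

def pseudo_r2(obs, pred):
    """Pseudo R^2 = 1 - SSE/SST, the score used to compare model fits."""
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - p) ** 2 for o, p in zip(obs, pred))
    sst = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - sse / sst

def rmse(obs, pred):
    """Root-mean-square error, in the biomass units (g/m^2)."""
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

# Illustrative observed vs. predicted shrub biomass (g/m^2):
obs = [100.0, 200.0, 300.0]
pred = [110.0, 190.0, 310.0]
print(round(pseudo_r2(obs, pred), 3), rmse(obs, pred))  # → 0.985 10.0
```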
Directory of Open Access Journals (Sweden)
Xubin Ping
2015-01-01
Full Text Available For the quasi-linear parameter varying (quasi-LPV) system with bounded disturbance, a synthesis approach of dynamic output feedback robust model predictive control (OFRMPC) is investigated. The estimation error set is represented by a zonotope and refreshed by the zonotopic set-membership estimation method. By properly refreshing the estimation error set online, the bounds of the true state at the next sampling time can be obtained. Furthermore, the feasibility of the main optimization problem at the next sampling time can be determined at the current time. A numerical example is given to illustrate the effectiveness of the approach.
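The state bounds obtained from a zonotopic error set can be illustrated via the zonotope's interval hull; a minimal sketch with an illustrative center and generator matrix:

```python
import numpy as np

def zonotope_box(center, generators):
    """Interval hull of the zonotope {c + G @ xi : ||xi||_inf <= 1}:
    the tightest axis-aligned box containing the estimation-error set."""
    radius = np.abs(generators).sum(axis=1)
    return center - radius, center + radius

c = np.array([0.0, 0.0])                # illustrative error-set center
G = np.array([[1.0, 0.5], [0.0, 0.5]])  # illustrative generator matrix
lo, hi = zonotope_box(c, G)
print(lo.tolist(), hi.tolist())  # → [-1.5, -0.5] [1.5, 0.5]
```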
Modeling Input Errors to Improve Uncertainty Estimates for Sediment Transport Model Predictions
Jung, J. Y.; Niemann, J. D.; Greimann, B. P.
2016-12-01
Bayesian methods using Markov chain Monte Carlo algorithms have recently been applied to sediment transport models to assess the uncertainty in the model predictions due to the parameter values. Unfortunately, the existing approaches can only attribute overall uncertainty to the parameters. This limitation is critical because no model can produce accurate forecasts if forced with inaccurate input data, even if the model is well founded in physical theory. In this research, an existing Bayesian method is modified to consider the potential errors in input data during the uncertainty evaluation process. The input error is modeled using Gaussian distributions, and the means and standard deviations are treated as uncertain parameters. The proposed approach is tested by coupling it to the Sedimentation and River Hydraulics - One Dimension (SRH-1D) model and simulating a 23-km reach of the Tachia River in Taiwan. The Wu equation in SRH-1D is used for computing the transport capacity for a bed material load of non-cohesive material. Three types of input data are considered uncertain: (1) the input flowrate at the upstream boundary, (2) the water surface elevation at the downstream boundary, and (3) the water surface elevation at a hydraulic structure in the middle of the reach. The benefits of modeling the input errors in the uncertainty analysis are evaluated by comparing the accuracy of the most likely forecast and the coverage of the observed data by the credible intervals to those of the existing method. The results indicate that the internal boundary condition has the largest uncertainty among those considered. Overall, the uncertainty estimates from the new method are notably different from those of the existing method for both the calibration and forecast periods.
Predictive Sea State Estimation for Automated Ride Control and Handling - PSSEARCH
Huntsberger, Terrance L.; Howard, Andrew B.; Aghazarian, Hrand; Rankin, Arturo L.
2012-01-01
PSSEARCH provides predictive sea state estimation, coupled with closed-loop feedback control for automated ride control. It enables a manned or unmanned watercraft to determine the 3D map and sea state conditions in its vicinity in real time. Adaptive path-planning/ replanning software and a control surface management system will then use this information to choose the best settings and heading relative to the seas for the watercraft. PSSEARCH looks ahead and anticipates potential impact of waves on the boat and is used in a tight control loop to adjust trim tabs, course, and throttle settings. The software uses sensory inputs including IMU (Inertial Measurement Unit), stereo, radar, etc. to determine the sea state and wave conditions (wave height, frequency, wave direction) in the vicinity of a rapidly moving boat. This information can then be used to plot a safe path through the oncoming waves. The main issues in determining a safe path for sea surface navigation are: (1) deriving a 3D map of the surrounding environment, (2) extracting hazards and the sea surface state from the imaging sensors/map, and (3) planning a path and control surface settings that avoid the hazards, accomplish the mission navigation goals, and mitigate crew injuries from excessive heave, pitch, and roll accelerations while taking into account the dynamics of the sea surface state. The first part is solved using a wide baseline stereo system, where 3D structure is determined from two calibrated pairs of visual imagers. Once the 3D map is derived, anything above the sea surface is classified as a potential hazard and a surface analysis gives a static snapshot of the waves. Dynamics of the wave features are obtained from a frequency analysis of motion vectors derived from the orientation of the waves during a sequence of inputs. Fusion of the dynamic wave patterns with the 3D maps and the IMU outputs is used for efficient safe path planning.
Estimating the magnitude of prediction uncertainties for field-scale P loss models
Models are often used to predict phosphorus (P) loss from agricultural fields. While it is commonly recognized that model predictions are inherently uncertain, few studies have addressed prediction uncertainties using P loss models. In this study, an uncertainty analysis for the Annual P Loss Estima...
Energy Technology Data Exchange (ETDEWEB)
NONE
1993-11-30
The report provides an estimate of the cost and associated schedule to construct the tunnel and shaft remedial shielding concept. The cost and schedule estimate is based on a preliminary concept intended to address the potential radiation effects on Line D and Line D Facilities in the event of a beam spill. The construction approach utilizes careful tunneling methods based on available excavation and ground support technology. The tunneling rates and overall productivity on which the cost and project schedule are estimated are based on conservative assumptions with appropriate contingencies to address the uncertainty associated with geological conditions. The report is intended to provide supplemental information which will assist in assessing the feasibility of the tunnel and shaft concept and justification for future development of this particular aspect of remedial shielding for Line D and Line D Facilities.
Stuiver, Martijn M.; Kampshoff, Caroline S.; Persoon, Saskia; Groen, Wim; van Mechelen, Willem; Chinapaw, Mai J. M.; Brug, Johannes; Nollet, Frans; Kersten, Marie-José; Schep, Goof; Buffart, Laurien M.
2017-01-01
Objective: To further test the validity and clinical usefulness of the steep ramp test (SRT) in estimating exercise tolerance in cancer survivors by external validation and extension of previously published prediction models for peak oxygen consumption (Vo2(peak)) and peak power output (W-peak).
A Java-based web service is being developed within the US EPA’s Chemistry Dashboard to provide real time estimates of toxicity values and physical properties. WebTEST can generate toxicity predictions directly from a simple URL which includes the endpoint, QSAR method, and ...
Gusev, E. V.; Mukhametzyanov, Z. R.; Razyapov, R. V.
2017-11-01
The limitations of existing methods for determining which technologically interlinked construction processes and activities can be combined are considered under the modern conditions of constructing various facilities. The need to identify common parameters that characterize the interaction of all technologically related construction and installation processes and activities is shown. The technologies of construction and installation processes for buildings and structures are studied with the goal of determining a common parameter for evaluating the relationship between technologically interconnected processes and construction works. The result of this research is a quantitative evaluation of the interaction of construction and installation processes and activities: the minimum technologically necessary volume of the preceding process that allows one to plan and organize the execution of a subsequent, technologically interconnected process. This quantitative evaluation is used as the basis for calculating the optimum range over which processes and activities can be combined. The calculation method is based on graph theory. The authors applied a generic characterization parameter to reveal the technological links between construction and installation processes, and the proposed technique has adaptive properties that make it suitable for wide use in forming organizational decisions. The practical significance of the developed technique is described.
Jeong, Jina; Park, Eungyu; Han, Weon Shik; Kim, Kue-Young; Jun, Seong-Chun; Choung, Sungwook; Yun, Seong-Taek; Oh, Junho; Kim, Hyun-Jun
2017-11-01
In this study, a data-driven method for predicting CO2 leaks and associated concentrations from geological CO2 sequestration is developed. Several candidate models are compared based on their reproducibility and predictive capability for CO2 concentration measurements from the Environment Impact Evaluation Test (EIT) site in Korea. Based on the data mining results, a one-dimensional solution of the advective-dispersive equation for steady flow (i.e., Ogata-Banks solution) is found to be most representative for the test data, and this model is adopted as the data model for the developed method. In the validation step, the method is applied to estimate future CO2 concentrations with the reference estimation by the Ogata-Banks solution, where a part of earlier data is used as the training dataset. From the analysis, it is found that the ensemble mean of multiple estimations based on the developed method shows high prediction accuracy relative to the reference estimation. In addition, the majority of the data to be predicted are included in the proposed quantile interval, which suggests adequate representation of the uncertainty by the developed method. Therefore, the incorporation of a reasonable physically-based data model enhances the prediction capability of the data-driven model. The proposed method is not confined to estimations of CO2 concentration and may be applied to various real-time monitoring data from subsurface sites to develop automated control, management or decision-making systems. Copyright © 2017 Elsevier B.V. All rights reserved.
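The Ogata-Banks solution used as the data model has a closed form; a minimal sketch (the parameter values below are illustrative, not the EIT site's):

```python
import math

def ogata_banks(x, t, v, D, c0=1.0):
    """Ogata-Banks solution of the 1-D advection-dispersion equation for
    steady flow with a constant-concentration inlet boundary:
    C/C0 = 0.5*[erfc((x-vt)/(2*sqrt(Dt))) + exp(vx/D)*erfc((x+vt)/(2*sqrt(Dt)))]."""
    a = (x - v * t) / (2.0 * math.sqrt(D * t))
    b = (x + v * t) / (2.0 * math.sqrt(D * t))
    return 0.5 * c0 * (math.erfc(a) + math.exp(v * x / D) * math.erfc(b))

# Far behind the advective front, breakthrough is essentially complete:
print(round(ogata_banks(x=1.0, t=100.0, v=0.5, D=0.1), 3))  # → 1.0
```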
Energy Technology Data Exchange (ETDEWEB)
Ma, Denglong [Fuli School of Food Equipment Engineering and Science, Xi’an Jiaotong University, No.28 Xianning West Road, Xi’an 710049 (China); Zhang, Zaoxiao, E-mail: zhangzx@mail.xjtu.edu.cn [State Key Laboratory of Multiphase Flow in Power Engineering, Xi’an Jiaotong University, No.28 Xianning West Road, Xi’an 710049 (China); School of Chemical Engineering and Technology, Xi’an Jiaotong University, No.28 Xianning West Road, Xi’an 710049 (China)
2016-07-05
Highlights: • Intelligent network models were built to predict contaminant gas concentrations. • Improved network models coupled with a Gaussian dispersion model are presented. • The new model has high efficiency and accuracy for concentration prediction. • The new models were applied to identify the leakage source with satisfactory results. - Abstract: A gas dispersion model is important for predicting gas concentrations when a contaminant gas leakage occurs. Intelligent network models such as radial basis function (RBF), back propagation (BP) neural network and support vector machine (SVM) models can be used for gas dispersion prediction. However, the prediction results from these network models with too many inputs based on original monitoring parameters are not in good agreement with the experimental data. Therefore, a new series of machine learning algorithm (MLA) models, combining the classic Gaussian model with MLA algorithms, is presented. The prediction results from the new models are greatly improved. Among these models, the Gaussian-SVM model performs best and its computation time is close to that of the classic Gaussian dispersion model. Finally, the Gaussian-MLA models were applied to identify the emission source parameters with the particle swarm optimization (PSO) method. The estimation performance of PSO with Gaussian-MLA is better than that with the Gaussian model, the Lagrangian stochastic (LS) dispersion model, and network models based on original monitoring parameters. Hence, the new prediction model based on Gaussian-MLA is potentially a good method to predict contaminant gas dispersion as well as a good forward model in the emission source parameter identification problem.
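The classic Gaussian dispersion model underlying the hybrid approach has a closed form; a minimal sketch with ground reflection (here the dispersion coefficients sy, sz are passed in directly rather than computed from downwind distance and stability class):

```python
import math

def gaussian_plume(y, z, q, u, sy, sz, h):
    """Classic Gaussian plume with ground reflection: concentration downwind
    of a continuous point source of strength q, wind speed u, dispersion
    coefficients sy and sz, and effective release height h."""
    lateral = math.exp(-y ** 2 / (2.0 * sy ** 2))
    vertical = (math.exp(-(z - h) ** 2 / (2.0 * sz ** 2)) +
                math.exp(-(z + h) ** 2 / (2.0 * sz ** 2)))
    return q / (2.0 * math.pi * u * sy * sz) * lateral * vertical
```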
International Nuclear Information System (INIS)
Ma, Denglong; Zhang, Zaoxiao
2016-01-01
Highlights: • Intelligent network models were built to predict contaminant gas concentrations. • Improved network models coupled with a Gaussian dispersion model are presented. • The new model has high efficiency and accuracy for concentration prediction. • The new models were applied to identify the leakage source with satisfactory results. - Abstract: A gas dispersion model is important for predicting gas concentrations when a contaminant gas leakage occurs. Intelligent network models such as radial basis function (RBF), back propagation (BP) neural network and support vector machine (SVM) models can be used for gas dispersion prediction. However, the prediction results from these network models with too many inputs based on original monitoring parameters are not in good agreement with the experimental data. Therefore, a new series of machine learning algorithm (MLA) models, combining the classic Gaussian model with MLA algorithms, is presented. The prediction results from the new models are greatly improved. Among these models, the Gaussian-SVM model performs best and its computation time is close to that of the classic Gaussian dispersion model. Finally, the Gaussian-MLA models were applied to identify the emission source parameters with the particle swarm optimization (PSO) method. The estimation performance of PSO with Gaussian-MLA is better than that with the Gaussian model, the Lagrangian stochastic (LS) dispersion model, and network models based on original monitoring parameters. Hence, the new prediction model based on Gaussian-MLA is potentially a good method to predict contaminant gas dispersion as well as a good forward model in the emission source parameter identification problem.
International Nuclear Information System (INIS)
Lee, S.H.; Moon, B.S.; Lee, J.H.
2014-01-01
The Earned Value Management System (EVMS) is a project management technique for measuring project performance and progress, and for forward projection through the integrated management and control of cost and schedule. This research reviewed the concept of the EVMS method and proposes two Planned Value estimation methods for potential application to succeeding NPP construction projects by using historical data from preceding NPP projects. This paper introduces a solution for the problems caused by the absence of a management system incorporating schedule and cost, which have arisen repeatedly in NPP construction project management. (author)
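The basic earned-value indices that such a system tracks can be sketched as follows (the figures are illustrative, not from an actual NPP project):

```python
def evms_metrics(pv, ev, ac):
    """Core EVMS indices from Planned Value, Earned Value, and Actual Cost."""
    return {
        "SV": ev - pv,    # schedule variance
        "CV": ev - ac,    # cost variance
        "SPI": ev / pv,   # schedule performance index (<1: behind schedule)
        "CPI": ev / ac,   # cost performance index (<1: over budget)
    }

# Illustrative monthly figures:
m = evms_metrics(pv=100.0, ev=90.0, ac=95.0)
print(m["SV"], round(m["SPI"], 2), round(m["CPI"], 2))  # → -10.0 0.9 0.95
```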
Energy Technology Data Exchange (ETDEWEB)
Lee, S.H.; Moon, B.S., E-mail: gustblast@khnp.co.kr, E-mail: moonbs@khnp.co.kr [Korea Hydro & Nuclear power co.,Ltd., Central Research Inst., Daejeon (Korea, Republic of); Lee, J.H., E-mail: ljh@kkprotech.com [Kong Kwan Protech Co.,Ltd., Seoul (Korea, Republic of)
2014-07-01
The Earned Value Management System (EVMS) is a project management technique for measuring project performance and progress, and for forward projection through the integrated management and control of cost and schedule. This research reviewed the concept of the EVMS method and proposes two Planned Value estimation methods for potential application to succeeding NPP construction projects by using historical data from preceding NPP projects. This paper introduces a solution for the problems caused by the absence of a management system incorporating schedule and cost, which have arisen repeatedly in NPP construction project management. (author)
Roos, Corey R; Mann, Karl; Witkiewitz, Katie
2017-11-01
Researchers have sought to distinguish between individuals whose alcohol use disorder (AUD) is maintained by drinking to relieve negative affect ('relief drinkers') and those whose AUD is maintained by the rewarding effects of alcohol ('reward drinkers'). As an opioid receptor antagonist, naltrexone may be particularly effective for reward drinkers. Acamprosate, which has been shown to down-regulate the glutamatergic system, may be particularly effective for relief drinkers. This study sought to replicate and extend prior work (PREDICT study; Glöckner-Rist et al.) by examining dimensions of reward and relief temptation to drink and subtypes of individuals with distinct patterns of reward/relief temptation. We utilized data from two randomized clinical trials for AUD (Project MATCH, n = 1726, and COMBINE study, n = 1383). We also tested whether classes of reward/relief temptation would predict differential response to naltrexone and acamprosate in COMBINE. Results replicated prior work by identifying reward and relief temptation factors, which had excellent reliability and construct validity. Using factor mixture modeling, we identified five distinct classes of reward/relief temptation that replicated across studies. In COMBINE, we found a significant class-by-acamprosate interaction effect. Among those most likely classified in the high relief/moderate reward temptation class, individuals had better drinking outcomes if assigned to acamprosate versus placebo. We did not find a significant class-by-naltrexone interaction effect. Our study questions the orthogonal classification of drinkers into only two types (reward or relief drinkers) and adds to the body of research on moderators of acamprosate, which may inform clinical decision making in the treatment of AUD. © 2016 Society for the Study of Addiction.
van Montfort, Eveline; Denollet, Johan; Widdershoven, Jos; Kupper, Nina
2016-09-01
In cardiac patients, positive psychological factors have been associated with improved medical and psychological outcomes. The current study examined the interrelation between and independence of multiple positive and negative psychological constructs. Furthermore, the potential added predictive value of positive psychological functioning regarding the prediction of patients' treatment adherence and participation in cardiac rehabilitation (CR) was investigated. 409 percutaneous coronary intervention (PCI) patients were included (mean age = 65.6 ± 9.5; 78% male). Self-report questionnaires were administered one month post-PCI. Positive psychological constructs included positive affect (GMS) and optimism (LOT-R); negative constructs were depression (PHQ-9, BDI), anxiety (GAD-7) and negative affect (GMS). Six months post-PCI self-reported general adherence (MOS) and CR participation were determined. Factor Analysis (Oblimin rotation) revealed two components (r = − 0.56), reflecting positive and negative psychological constructs. Linear regression analyses showed that in unadjusted analyses both optimism and positive affect were associated with better general treatment adherence at six months. Positive psychological constructs (i.e. optimism) may be of incremental value to negative psychological constructs in predicting patients' treatment adherence. A more complete view of a patient's psychological functioning will open new avenues for treatment. Additional research is needed to investigate the relationship between positive psychological factors and other cardiac outcomes, such as cardiac events and mortality.
Directory of Open Access Journals (Sweden)
Chuanqiang Yu
2015-01-01
Deteriorating systems, which are subject to both continuous smooth degradation and additional abrupt damage due to a shock process, are often encountered in engineering. Modeling the degradation evolution and predicting the lifetime of this kind of system are both interesting and challenging in practice. In this paper, we model the degradation trajectory of the deteriorating system by a random coefficient regression (RCR) model with positive jumps, where the RCR part is used to model the continuous smooth degradation of the system and the jump part is used to characterize the abrupt damage due to random shocks. Based on a specified threshold level, the probability density function (PDF) and cumulative distribution function (CDF) of the lifetime can be derived analytically. The unknown parameters associated with the derived lifetime distributions can be estimated via a well-designed parameter estimation procedure on the basis of the available degradation recordings of the deteriorating systems. An illustrative example is finally provided to demonstrate the implementation and superiority of the newly proposed lifetime prediction method. The experimental results reveal that the proposed lifetime prediction method, with its dedicated parameter estimation strategy, yields more accurate lifetime predictions than the rival model in the literature.
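The lifetime model described above can be sketched numerically. The following is an illustrative Monte Carlo simulation, not the paper's analytical derivation: degradation follows a linear random-coefficient path plus positive compound-Poisson jumps, failure is the first passage of a threshold, and the empirical lifetime CDF is the fraction of paths failed by each time. All parameter values are hypothetical.

```python
import random

def simulate_lifetimes(threshold=10.0, t_max=200.0, dt=1.0, shock_rate=0.05,
                       jump_mean=1.0, n_paths=2000, seed=42):
    """Monte Carlo lifetimes for a linear RCR degradation path with
    positive (exponentially distributed) jumps from a Poisson shock process."""
    rng = random.Random(seed)
    lifetimes = []
    for _ in range(n_paths):
        # Random coefficients: intercept a and positive drift b (illustrative).
        a = rng.gauss(0.0, 0.2)
        b = abs(rng.gauss(0.1, 0.02))
        x, t = a, 0.0
        while t < t_max:
            t += dt
            x += b * dt                                 # smooth degradation
            if rng.random() < shock_rate * dt:          # shock arrival
                x += rng.expovariate(1.0 / jump_mean)   # abrupt damage
            if x >= threshold:                          # first passage = failure
                lifetimes.append(t)
                break
    return lifetimes

def empirical_cdf(lifetimes, t):
    """Fraction of simulated paths that have failed by time t."""
    return sum(lt <= t for lt in lifetimes) / len(lifetimes)

lifetimes = simulate_lifetimes()
```

The empirical CDF obtained this way is nondecreasing in t, mirroring the analytical CDF derived in the paper.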
International Nuclear Information System (INIS)
Li, Yanhao; Wang, Guangjun; Chen, Hong
2015-01-01
The predictive control theory is utilized for the simultaneous estimation of heat fluxes through the upper, side and lower surfaces of a steel slab in a walking beam type rolling steel reheating furnace. An inverse algorithm based on dynamic matrix control (DMC) is established: each surface heat flux of the slab is simultaneously estimated through rolling optimization, on the basis of temperature measurements at selected points in its interior, using the step response function as the predictive model of the slab's temperature. The reliability of the DMC results is enhanced without assuming specific functional forms for the heat fluxes over a future time period. The inverse algorithm applies a separate regularization to each surface to effectively improve the stability of the estimated results, accounting for the obvious strength differences between the upper, lower and side surface heat fluxes of the slab. - Highlights: • The predictive control theory is adopted. • An inversion scheme based on DMC is established. • Upper, side and lower surface heat fluxes of the slab are estimated based on DMC. • A separate regularization is proposed to improve the stability of the results.
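A toy version of the regularized inverse step can make the idea concrete: given measurements generated by a known step/impulse response, the unknown input history (standing in for a surface heat flux) is recovered by Tikhonov-regularized least squares. This is a simplified single-surface sketch of the DMC-style inversion, with made-up response coefficients, not the paper's multi-surface algorithm.

```python
import numpy as np

def invert_input(y_meas, h, lam=1e-3):
    """Estimate input sequence u from measurements y = conv(h, u) + noise
    via Tikhonov-regularized least squares: min ||A u - y||^2 + lam ||u||^2."""
    n = len(y_meas)
    # Lower-triangular convolution (dynamic) matrix built from the
    # impulse-response coefficients h (differences of the step response).
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1):
            if i - j < len(h):
                A[i, j] = h[i - j]
    # Regularized normal equations.
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y_meas)

# Hypothetical first-order step response s_k = 1 - 0.6^k; impulse h = diff(s).
s = 1.0 - 0.6 ** np.arange(1, 11)
h = np.diff(np.concatenate(([0.0], s)))

rng = np.random.default_rng(0)
u_true = np.concatenate((np.full(10, 2.0), np.full(10, 5.0)))  # "heat flux"
y = np.convolve(h, u_true)[: len(u_true)] + rng.normal(0, 0.01, len(u_true))
u_est = invert_input(y, h)
```

The regularization term plays the same stabilizing role as the per-surface regularization described in the abstract: without it, measurement noise is amplified by the inversion.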
F. Mauro; Vicente Monleon; H. Temesgen
2015-01-01
Small area estimation (SAE) techniques have been successfully applied in forest inventories to provide reliable estimates for domains where the sample size is small (i.e. small areas). Previous studies have explored the use of either Area Level or Unit Level Empirical Best Linear Unbiased Predictors (EBLUPs) in a univariate framework, modeling each variable of interest...
Predictive 3D search algorithm for multi-frame motion estimation
Lim, Hong Yin; Kassim, A.A.; With, de P.H.N.
2008-01-01
Multi-frame motion estimation introduced in recent video standards such as H.264/AVC, helps to improve the rate-distortion performance and hence the video quality. This, however, comes at the expense of having a much higher computational complexity. In multi-frame motion estimation, there exists
Morera, Serni; Corominas, Lluís; Rigola, Miquel; Poch, Manel; Comas, Joaquim
2017-10-01
The aim of this work is to quantify the relative contribution of the construction phase to the overall environmental impact, compared with the operational phase, for a large conventional activated sludge wastewater treatment plant (WWTP). To estimate these environmental impacts, a systematic procedure was designed to obtain detailed Life Cycle Inventories (LCI) for civil works and equipment, taking as starting point the construction project budget and the list of equipment installed at the Girona WWTP, which are the most reliable information sources on materials and resources used during the construction phase. A detailed inventory is conducted, including 45 materials for civil works and 1,240 devices for the equipment. For most of the impact categories and different life spans of the WWTP, the contribution of the construction phase to the overall burden is higher than 5%; for metal depletion in particular, the impact of construction reaches 63%. Compared with the WWTP inventories available in Ecoinvent, the share of construction obtained in this work is about 3 times smaller for climate change and twice as high for metal depletion. Concrete and reinforcing steel are the materials with the highest contribution to the civil works phase, and motors, pumps, and mobile and transport equipment are also key equipment to consider during life cycle inventories of WWTPs. Additional robust inventories for similar WWTPs can leverage this work by applying the factors (kg of materials and energy per m3 of treated water) and guidance provided. Copyright © 2017 Elsevier Ltd. All rights reserved.
Energy Technology Data Exchange (ETDEWEB)
Giansetti, P.
2005-09-15
Spark ignition engine control has become a major issue regarding compliance with emissions legislation while ensuring driving comfort. The objective of this thesis was to estimate the mass and composition of gases inside the cylinder of an engine, based on physics, in order to ensure better control of transient phases, taking into account residual gases as well as exhaust gas recirculation. The residual gas fraction has been characterized using two experiments and one CFD code. A model has been validated experimentally and integrated into an observer which predicts pressure and temperature inside the manifold. The different gas flows and the chemical species inside the cylinder are deduced from these predictions. A closed-loop observer has been validated experimentally and in simulation. Moreover, an algorithm estimating the fresh and burned gas mass from the cylinder pressure has been proposed in order to obtain the information cycle by cycle and cylinder by cylinder. (author)
Burkhart, B R; Green, S B; Harrison, W H
1979-04-01
Examined the predictive validity and construct equivalence of the three major procedures used to measure assertive behavior: Self-report, behavioral role-playing, and in-vivo assessment. Seventy-five Ss, who spanned the range of assertiveness, completed two self-report measures of assertiveness, the Rathus Assertiveness Scale (RAS) and the College Self-Expression Scale (CSES); two scales from the Endler S-R Inventory of General Trait Anxiousness, the interpersonal and general anxiety scales; eight role-playing situations that involved the expression of positive and negative assertiveness; and a telephone in-vivo task. In general, the study revealed the following: (1) assertiveness measures are task-dependent in that there was more overlap within task than between tasks; (2) there is a moderate degree of correspondence between self-report and role-playing measures, although this was true only for negative assertion; (3) positive and negative assertion do not appear to have the same topography of responding; and (4) there appears to be no consistent relationship between the in-vivo measure and any other type of assertiveness measure.
Directory of Open Access Journals (Sweden)
Shira Barzilay
2018-04-01
3-factor structure. It demonstrates construct validity for assessing distinct suicide-related countertransference to psychiatric outpatients. Mental health professionals’ emotional responses to their patients are concurrently indicative and prospectively predictive of suicidal thoughts and behaviors. Thus, the TRQ-SF is a useful tool for the study of countertransference in the treatment of suicidal patients and may help clinicians make diagnostic and therapeutic use of their own responses to improve assessment and intervention for individual suicidal patients.
Barzilay, Shira; Yaseen, Zimri S; Hawes, Mariah; Gorman, Bernard; Altman, Rachel; Foster, Adriana; Apter, Alan; Rosenfield, Paul; Galynker, Igor
2018-01-01
construct validity for assessing distinct suicide-related countertransference to psychiatric outpatients. Mental health professionals' emotional responses to their patients are concurrently indicative and prospectively predictive of suicidal thoughts and behaviors. Thus, the TRQ-SF is a useful tool for the study of countertransference in the treatment of suicidal patients and may help clinicians make diagnostic and therapeutic use of their own responses to improve assessment and intervention for individual suicidal patients.
Directory of Open Access Journals (Sweden)
Luigi Capoferri
Prediction of human Cytochrome P450 (CYP) binding affinities of small ligands, i.e., substrates and inhibitors, represents an important task for predicting drug-drug interactions. A quantitative assessment of the ligand binding affinity towards different CYPs can provide an estimate of inhibitory activity or an indication of isoforms prone to interact with a given substrate or inhibitor. However, the accuracy of global quantitative models for CYP substrate binding or inhibition based on traditional molecular descriptors can be limited, because of the lack of information on the structure and flexibility of the catalytic site of CYPs. Here we describe the application of a method that combines protein-ligand docking, Molecular Dynamics (MD) simulations and Linear Interaction Energy (LIE) theory to allow for quantitative CYP affinity prediction. Using this combined approach, a LIE model for human CYP 1A2 was developed and evaluated, based on a structurally diverse dataset for which the estimated experimental uncertainty was 3.3 kJ mol-1. For the computed CYP 1A2 binding affinities, the model showed a root mean square error (RMSE) of 4.1 kJ mol-1 and a standard error in prediction (SDEP) in cross-validation of 4.3 kJ mol-1. A novel approach that includes information on both structural ligand description and protein-ligand interaction was developed for estimating the reliability of predictions, and was able to identify compounds from an external test set with an SDEP for the predicted affinities of 4.6 kJ mol-1 (corresponding to 0.8 pKi units).
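The LIE estimate itself is a simple linear combination of ensemble-averaged interaction-energy differences between the bound and free states of the ligand. The sketch below uses the common functional form ΔG = α·ΔV_vdW + β·ΔV_el + γ; the coefficient values and energies are illustrative placeholders, not the parameters fitted for the CYP 1A2 model in the paper.

```python
import math

def lie_binding_free_energy(dv_vdw, dv_el, alpha=0.18, beta=0.50, gamma=0.0):
    """Linear Interaction Energy estimate of binding free energy (kJ/mol).
    dv_vdw, dv_el: <V>_bound - <V>_free averages from MD (kJ/mol).
    alpha, beta, gamma: empirically fitted coefficients (placeholders here)."""
    return alpha * dv_vdw + beta * dv_el + gamma

def pki_from_dg(dg_kj_mol, temperature=298.15):
    """Convert a binding free energy to pKi via dG = RT ln(Ki),
    so pKi = -dG / (RT ln 10)."""
    R = 8.314462618e-3  # gas constant, kJ mol^-1 K^-1
    return -dg_kj_mol / (R * temperature * math.log(10.0))

# Illustrative inputs: favorable van der Waals and electrostatic shifts.
dg = lie_binding_free_energy(dv_vdw=-10.0, dv_el=-4.0)
```

The pKi conversion shows how an error of about 4.6 kJ/mol, as reported for the external test set, translates to roughly 0.8 pKi units.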
International Nuclear Information System (INIS)
Díaz, Santiago; Carta, José A.; Matías, José M.
2017-01-01
Highlights: • Eight measure-correlate-predict (MCP) models used to estimate the wind power densities (WPDs) at a target site are compared. • Support vector regressions are used as the main prediction techniques in the proposed MCPs. • The most precise MCP uses two sub-models which predict wind speed and air density in an unlinked manner. • The most precise model allows the construction of a bivariable (wind speed and air density) WPD probability density function. • MCP models trained to minimise wind speed prediction error do not minimise WPD prediction error. - Abstract: The long-term annual mean wind power density (WPD) is an important indicator of wind as a power source which is usually included in regional wind resource maps as useful prior information to identify potentially attractive sites for the installation of wind projects. In this paper, a comparison is made of eight proposed Measure-Correlate-Predict (MCP) models to estimate the WPDs at a target site. Seven of these models use the Support Vector Regression (SVR) technique and the eighth the Multiple Linear Regression (MLR) technique, which serves as a basis for comparing the performance of the other models. In addition, a wrapper technique with 10-fold cross-validation has been used to select the optimal set of input features for the SVR and MLR models. Some of the eight models were trained to directly estimate the mean hourly WPDs at a target site. Others, however, were first trained to estimate the parameters on which the WPD depends (i.e. wind speed and air density) and then, using these parameters, the target site mean hourly WPDs. The explanatory features considered are different combinations of the mean hourly wind speeds, wind directions and air densities recorded in 2014 at ten weather stations in the Canary Archipelago (Spain). The conclusions that can be drawn from the study undertaken include the argument that the most accurate method for the long-term estimation of WPDs requires the execution of a
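The quantity being predicted, wind power density, is computed directly from wind speed and air density as WPD = ½ρv³, averaged over the hourly records. A minimal helper, independent of the SVR/MLR machinery of the paper:

```python
def wind_power_density(speeds, densities):
    """Mean wind power density in W/m^2 from paired hourly records:
    WPD = mean(0.5 * rho_i * v_i^3)."""
    if len(speeds) != len(densities):
        raise ValueError("speeds and densities must be paired")
    return sum(0.5 * rho * v ** 3 for v, rho in zip(speeds, densities)) / len(speeds)
```

Because WPD depends on the cube of wind speed, small wind-speed prediction errors are strongly amplified, which is consistent with the highlight that minimising wind speed error does not minimise WPD error.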
Lopes Rosado, Eliane; Santiago de Brito, Roberta; Bressan, Josefina; Martínez Hernández, José Alfredo
2014-01-01
Objective: To assess the adequacy of predictive equations for the estimation of energy expenditure (EE), compared with EE measured by indirect calorimetry, in a sample of Brazilian and Spanish women with excess body weight. Methods: This is a cross-sectional study of 92 obese adult women [26 Brazilian (G1) and 66 Spanish (G2), aged 20-50]. Weight and height were evaluated during fasting for the calculation of body mass index and the predictive equations. EE was evaluated using the open-circuit indirect...
DEFF Research Database (Denmark)
Olsen, Michael; Greve, Sara; Blicher, Marie
2016-01-01
OBJECTIVE: Carotid-femoral pulse wave velocity (cfPWV) adds significantly to traditional cardiovascular (CV) risk prediction, but is not widely available. Therefore, it would be helpful if cfPWV could be replaced by an estimated carotid-femoral pulse wave velocity (ePWV) using age and mean blood pressure and previously published equations. The aim of this study was to investigate whether ePWV could predict CV events independently of traditional cardiovascular risk factors and/or cfPWV. DESIGN AND METHOD: cfPWV was measured and ePWV calculated in 2366 apparently healthy subjects from four age......
International Nuclear Information System (INIS)
Wang, Yujie; Liu, Chang; Pan, Rui; Chen, Zonghai
2017-01-01
The modeling and state-of-charge estimation of batteries and ultracapacitors are crucial to the battery/ultracapacitor hybrid energy storage system. In recent years, model-based state estimators have been widely adopted, since they can adjust the gain in a timely manner according to the error between the model predictions and the measurements. In most existing algorithms, the model parameters are either configured from theoretical values or identified off-line without adaptation. In fact, however, the model parameters change continuously with loading variations and self-aging, and the lack of adaptation reduces the estimation accuracy significantly. To overcome this drawback, a novel co-estimator is proposed to estimate the model parameters and state-of-charge simultaneously. The extended Kalman filter is employed for parameter updating. To reduce the convergence time, the recursive least squares algorithm and the off-line identification method are used to provide initial values with small deviation. The unscented Kalman filter is employed for the state-of-charge estimation. Because the unscented Kalman filter takes not only the measurement uncertainties but also the process uncertainties into account, it is robust to noise. Experiments are executed to explore the robustness, stability and precision of the proposed method. - Highlights: • A co-estimator is proposed to estimate the model parameters and state-of-charge. • The extended Kalman filter is used for model parameter adaptation. • The unscented Kalman filter is designed for state estimation with strong robustness. • Dynamic profiles are employed to verify the proposed co-estimator.
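The role of recursive least squares in providing fast initial parameter values can be illustrated with a scalar example: estimating an equivalent series resistance R from current/voltage pairs of the simplified model V = OCV − I·R. The model, coefficients and data below are synthetic; the actual co-estimator couples this initialization with EKF parameter updating and UKF state estimation.

```python
def rls_estimate_resistance(currents, voltages, ocv, lam=0.99):
    """Scalar recursive least squares with forgetting factor lam.
    Model: ocv - v = R * i, so regressor x = i and target y = ocv - v."""
    theta, P = 0.0, 1e3            # initial estimate and covariance
    for i, v in zip(currents, voltages):
        x, y = i, ocv - v
        k = P * x / (lam + x * P * x)   # gain
        theta += k * (y - x * theta)    # parameter update
        P = (P - k * x * P) / lam       # covariance update
    return theta

# Synthetic data: true R = 0.05 ohm, open-circuit voltage 3.7 V.
true_r, ocv = 0.05, 3.7
currents = [1.0, 2.0, 1.5, 0.5, 2.5, 1.2, 0.8, 1.8] * 5
voltages = [ocv - true_r * i for i in currents]
r_hat = rls_estimate_resistance(currents, voltages, ocv)
```

The forgetting factor lam < 1 lets the estimate track slow parameter drift, which is the adaptation the abstract argues is missing from fixed off-line identification.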
DEFF Research Database (Denmark)
Löwe, Roland; Mikkelsen, Peter Steen; Madsen, Henrik
2014-01-01
Probabilistic runoff forecasts generated by stochastic greybox models can be notably useful for the improvement of the decision-making process in real-time control setups for urban drainage systems because the prediction risk relationships in these systems are often highly nonlinear. To date...... the identification of models for cases with noisy in-sewer observations. For the prediction of the overflow risk, no improvement was demonstrated through the application of stochastic forecasts instead of point predictions, although this result is thought to be caused by the notably simplified setup used...
Merler, E; Bressan, Vittoria; Somigliana, Anna
2009-01-01
Work in the construction industry is causing the highest number of mesotheliomas among residents of the Veneto Region (north-east Italy, 4.5 million inhabitants). To sum up the results on occurrence, asbestos exposure, lung fibre content analyses, and compensation for occupational disease. Case identification and asbestos exposure classification: active search for mesotheliomas diagnosed via histological or cytological examination between 1987 and 2006; a probability of asbestos exposure was attributed to each case, following interviews with the subjects or their relatives and collection of data on the jobs held over their lifetime. Risk estimate among construction workers: the ratio between cases and person-years, the latter derived from the number of construction workers reported by censuses. Lung content of asbestos fibres: examination of lung specimens by scanning electron microscope to determine the number and type of fibres. Claims for compensation and compensation awarded: data obtained from the National Institute for Insurance against Occupational Diseases, available for the period 1999-2006. Of 952 mesothelioma cases classified as due to asbestos exposure, 251 were assigned to work in the construction industry (21 of which were due to domestic or environmental exposures), which gives a rate of 4.1 (95% CI 3.6-4.8) per 100,000 per year among construction workers. The asbestos fibre content detected in the lungs of 11 construction workers showed a mean of 1.7 x 10(6) fibres/g dry tissue (range 350,000-3 million) for fibres > 1 μm, almost exclusively amphibole fibres. 62% of the claims for compensation were granted, but the percentage fell to less than 40% when claims were submitted by a relative after the death of the subject. The prevalence of mesothelioma occurring among construction workers is high and is associated with asbestos exposure; the risk is underestimated by the subjects and their relatives. All mesotheliomas occurring among
International Nuclear Information System (INIS)
Hagiwara, Hiroki; Iwatsuki, Teruki; Hasegawa, Takuma; Nakata, Kotaro; Tomioka, Yuichi
2015-01-01
This study evaluates a method to estimate shallow groundwater intrusion in and around a large underground research facility (Mizunami Underground Research Laboratory, MIU). Water chemistry, stable isotopes (δD and δ18O), tritium (3H), chlorofluorocarbons (CFCs) and sulfur hexafluoride (SF6) in groundwater were monitored around the facility (from 20 m down to a depth of 500 m) for a period of 5 years. The results show that shallow groundwater flows into deeper groundwater at depths between 200 and 400 m. In addition, the content of shallow groundwater estimated using 3H and CFC-12 concentrations reaches a maximum of about 50%. This is interpreted as the impact on the groundwater environment caused by the construction and operation of a large facility over several years. The concomitant use of 3H and CFCs is an effective method to determine the extent of shallow groundwater inflow caused by the construction of an underground facility. (author)
High Sensitivity TSS Prediction: Estimates of Locations Where TSS Cannot Occur
Schaefer, Ulf; Kodzius, Rimantas; Kai, Chikatoshi; Kawai, Jun; Carninci, Piero; Hayashizaki, Yoshihide; Bajic, Vladimir B.
2013-01-01
from mouse and human genomes, we developed a methodology that allows us, by performing computational TSS prediction with very high sensitivity, to annotate, with a high accuracy in a strand specific manner, locations of mammalian genomes that are highly
Development and Validation of a Prediction Model to Estimate Individual Risk of Pancreatic Cancer.
Yu, Ami; Woo, Sang Myung; Joo, Jungnam; Yang, Hye-Ryung; Lee, Woo Jin; Park, Sang-Jae; Nam, Byung-Ho
2016-01-01
There is no reliable screening tool to identify people with high risk of developing pancreatic cancer even though pancreatic cancer represents the fifth-leading cause of cancer-related death in Korea. The goal of this study was to develop an individualized risk prediction model that can be used to screen for asymptomatic pancreatic cancer in Korean men and women. Gender-specific risk prediction models for pancreatic cancer were developed using the Cox proportional hazards model based on an 8-year follow-up of a cohort study of 1,289,933 men and 557,701 women in Korea who had biennial examinations in 1996-1997. The performance of the models was evaluated with respect to their discrimination and calibration ability based on the C-statistic and Hosmer-Lemeshow type χ2 statistic. A total of 1,634 (0.13%) men and 561 (0.10%) women were newly diagnosed with pancreatic cancer. Age, height, BMI, fasting glucose, urine glucose, smoking, and age at smoking initiation were included in the risk prediction model for men. Height, BMI, fasting glucose, urine glucose, smoking, and drinking habit were included in the risk prediction model for women. Smoking was the most significant risk factor for developing pancreatic cancer in both men and women. The risk prediction model exhibited good discrimination and calibration ability, and in external validation it had excellent prediction ability. Gender-specific risk prediction models for pancreatic cancer were developed and validated for the first time. The prediction models will be a useful tool for detecting high-risk individuals who may benefit from increased surveillance for pancreatic cancer.
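The C-statistic used above to evaluate discrimination can be computed for survival data as Harrell's concordance index: among usable pairs (the subject with the shorter follow-up time had the event), count the pairs in which the higher predicted risk belongs to the subject who failed earlier. A small self-contained sketch of that calculation:

```python
def harrells_c(times, events, risks):
    """Harrell's concordance index for right-censored survival data.
    times: follow-up times; events: 1 if event observed, 0 if censored;
    risks: predicted risk scores (higher = expected earlier event)."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # A pair is usable only if subject i had the event strictly first.
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1.0
                elif risks[i] == risks[j]:
                    concordant += 0.5   # risk ties count half
    return concordant / comparable
```

A model whose risk ordering exactly matches the event ordering scores 1.0; a model with no discrimination scores about 0.5.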
On the predictivity of pore-scale simulations: estimating uncertainties with multilevel Monte Carlo
Icardi, Matteo; Boccardo, Gianluca; Tempone, Raul
2016-01-01
heterogeneity, computational and imaging limitations, model inadequacy and not perfectly known physical parameters. The typical objective of pore-scale studies is the estimation of macroscopic effective parameters such as permeability, effective diffusivity
Directory of Open Access Journals (Sweden)
Marjan Čeh
2018-05-01
The goal of this study is to analyse the predictive performance of the random forest machine learning technique in comparison to commonly used hedonic models based on multiple regression for the prediction of apartment prices. A data set that includes 7407 records of apartment transactions referring to real estate sales from 2008-2013 in the city of Ljubljana, the capital of Slovenia, was used to test and compare the predictive performance of both models. Challenges faced during modelling included (1) the non-linear nature of the prediction task; (2) input data based on transactions occurring over a period of great price change in Ljubljana, with a 28% decline noted over six consecutive testing years; and (3) the complex urban form of the case study area. The available explanatory variables, organised as a Geographic Information Systems (GIS)-ready dataset and including the structural and age characteristics of the apartments as well as environmental and neighbourhood information, were considered in the modelling procedure. All performance measures (R2 values, sales ratios, mean absolute percentage error (MAPE), coefficient of dispersion (COD)) revealed significantly better results for predictions obtained by the random forest method, which confirms the potential of this machine learning technique for apartment price prediction.
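Two of the performance measures named above are easy to state precisely. MAPE averages the absolute percentage errors, while the coefficient of dispersion (COD) used in property valuation measures the average absolute deviation of sales ratios (predicted/observed price) from their median, as a percentage of that median. A sketch with illustrative numbers:

```python
import statistics

def mape(predicted, actual):
    """Mean absolute percentage error, in percent."""
    return 100.0 * statistics.mean(
        abs((p - a) / a) for p, a in zip(predicted, actual))

def coefficient_of_dispersion(predicted, actual):
    """COD: average absolute deviation of ratios predicted/actual
    from their median, expressed as a percentage of the median."""
    ratios = [p / a for p, a in zip(predicted, actual)]
    med = statistics.median(ratios)
    return 100.0 * statistics.mean(abs(r - med) for r in ratios) / med
```

For example, predictions of 110 and 90 against actual prices of 100 and 100 give a MAPE of 10% and a COD of 10.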
Forkmann, T.; Teismann, T.; Stenzel, J.S.; Glaesmer, H.; Beurs, D. de
2018-01-01
Background: Defeat and entrapment have been shown to be of central relevance to the development of different disorders. However, it remains unclear whether they represent two distinct constructs or one overall latent variable. One reason for this lack of clarity is that traditional factor analytic
Stuiver, Martijn M; Kampshoff, Caroline S; Persoon, Saskia; Groen, Wim; van Mechelen, Willem; Chinapaw, Mai J M; Brug, Johannes; Nollet, Frans; Kersten, Marie-José; Schep, Goof; Buffart, Laurien M
2017-11-01
To further test the validity and clinical usefulness of the steep ramp test (SRT) in estimating exercise tolerance in cancer survivors, by external validation and extension of previously published prediction models for peak oxygen consumption (VO2peak) and peak power output (Wpeak). Cross-sectional study. Multicenter. Cancer survivors (N=283) in 2 randomized controlled exercise trials. Not applicable. Prediction model accuracy was assessed by intraclass correlation coefficients (ICCs) and limits of agreement (LOA). Multiple linear regression was used for model extension. Clinical performance was judged by the percentage of accurate endurance exercise prescriptions. ICCs of SRT-predicted VO2peak and Wpeak with these values as obtained by the cardiopulmonary exercise test were .61 and .73, respectively, using the previously published prediction models. 95% LOA were ±705 mL/min with a bias of 190 mL/min for VO2peak and ±59 W with a bias of 5 W for Wpeak. Modest improvements were obtained by adding body weight and sex to the regression equation for the prediction of VO2peak (ICC, .73; 95% LOA, ±608 mL/min) and by adding age, height, and sex for the prediction of Wpeak (ICC, .81; 95% LOA, ±48 W). Accuracy of endurance exercise prescription improved from 57% to 68% accurate prescriptions with the new prediction model for Wpeak. Predictions of VO2peak and Wpeak based on the SRT are adequate at the group level, but insufficiently accurate in individual patients. The multivariable prediction model for Wpeak can be used cautiously (eg, supplemented with a Borg score) to aid endurance exercise prescription. Copyright © 2017 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
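The bias and 95% limits of agreement reported above follow the standard Bland-Altman construction: the mean of the paired differences, and that mean plus or minus 1.96 times their standard deviation. A minimal sketch with synthetic values (the numbers below are invented, not the study's data):

```python
import statistics

def limits_of_agreement(measured, predicted):
    """Bland-Altman bias and 95% limits of agreement for paired values."""
    diffs = [p - m for p, m in zip(predicted, measured)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)          # sample standard deviation
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Synthetic paired VO2peak-style values (mL/min), purely illustrative.
cpet = [2500, 2100, 2800, 1900, 2300]   # "gold standard" values
srt = [2600, 2050, 2900, 2000, 2250]    # "SRT-predicted" values
bias, lower, upper = limits_of_agreement(cpet, srt)
```

Wide limits of agreement despite a small bias are exactly the pattern the study reports: good agreement at the group level, poor accuracy for individuals.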
Crocamo, Cristina; Bartoli, Francesco; Montomoli, Cristina; Carrà, Giuseppe
2018-05-25
Binge drinking (BD) among young people has significant public health implications. Thus, there is a need to target the users most at risk. We estimated the discriminative accuracy of an innovative model nested in a recently developed e-Health app (Digital-Alcohol RIsk Alertness Notifying Network for Adolescents and young adults [D-ARIANNA]) for BD in young people, examining its performance in predicting short-term BD episodes. We consecutively recruited young adults in pubs, discos, or live music events. Participants self-administered the app D-ARIANNA, which incorporates an evidence-based risk estimation model for the dependent variable BD. They were re-evaluated after 2 weeks using a single-item BD behavior measure as reference. We estimated the discriminative ability of D-ARIANNA through measures of sensitivity and specificity, as well as likelihood ratios. ROC curve analyses were carried out, exploring the variability of discriminative ability across subgroups. The analyses included 507 subjects, of whom 18% reported at least 1 BD episode at follow-up. The majority of these had been identified as at high/moderate or high risk (65%) at induction. Higher scores from the D-ARIANNA risk estimation model reflected an increase in the likelihood of BD. Additional risk factors such as high pocket money availability and alcohol expectancies influence the predictive ability of the model. The D-ARIANNA model showed an appreciable, though modest, predictive ability for subsequent BD episodes. A post-hoc model showed slightly better predictive properties. Using up-to-date technology, D-ARIANNA appears to be an innovative and promising screening tool for BD among young people. Its long-term impact remains to be established, as does the role of additional social and environmental factors.
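The discriminative-accuracy measures used here derive from a 2x2 table of screening result against observed binge drinking. A small helper computing sensitivity, specificity, and the positive and negative likelihood ratios (the counts below are hypothetical, not the study's):

```python
def screening_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity and likelihood ratios from 2x2 counts."""
    sens = tp / (tp + fn)          # true positive rate
    spec = tn / (tn + fp)          # true negative rate
    lr_pos = sens / (1.0 - spec)   # LR+: how much a positive result raises odds
    lr_neg = (1.0 - sens) / spec   # LR-: how much a negative result lowers odds
    return {"sensitivity": sens, "specificity": spec,
            "LR+": lr_pos, "LR-": lr_neg}

m = screening_metrics(tp=80, fp=10, fn=20, tn=90)
```

With these illustrative counts the tool would have sensitivity 0.80, specificity 0.90, LR+ of 8, and LR- of about 0.22.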
DEFF Research Database (Denmark)
Greve, Sara V; Blicher, Marie K; Kruger, Ruan
2016-01-01
BACKGROUND: Carotid-femoral pulse wave velocity (cfPWV) adds significantly to traditional cardiovascular risk prediction, but is not widely available. Therefore, it would be helpful if cfPWV could be replaced by an estimated carotid-femoral pulse wave velocity (ePWV) using age and mean blood pres...... that these traditional risk scores have underestimated the complicated impact of age and blood pressure on arterial stiffness and cardiovascular risk....
Hippisley-Cox, Julia; Coupland, Carol
2017-01-01
Objective: To develop and externally validate risk prediction equations to estimate absolute and conditional survival in patients with colorectal cancer. Design: Cohort study. Setting: General practices in England providing data for the QResearch database linked to the national cancer registry. Participants: 44 145 patients aged 15-99 with colorectal cancer from 947 practices to derive the equations. The equations were validated in 15 214 patients with colorectal cancer ...
Schneider, Hauke; Huynh, Thien J; Demchuk, Andrew M; Dowlatshahi, Dar; Rodriguez-Luna, David; Silva, Yolanda; Aviv, Richard; Dzialowski, Imanuel
2018-06-01
The intracerebral hemorrhage (ICH) score is the most commonly used grading scale for stratifying functional outcome in patients with acute ICH. We sought to determine whether a combination of the ICH score and the computed tomographic angiography spot sign may improve outcome prediction in the cohort of a prospective multicenter hemorrhage trial. Prospectively collected data from 241 patients from the observational PREDICT study (Prediction of Hematoma Growth and Outcome in Patients With Intracerebral Hemorrhage Using the CT-Angiography Spot Sign) were analyzed. Functional outcome at 3 months was dichotomized using the modified Rankin Scale (0-3 versus 4-6). The performance of (1) the ICH score and (2) the spot sign ICH score, a scoring scale combining the ICH score and the spot sign number, was tested. Multivariable analysis demonstrated that the ICH score (odds ratio, 3.2; 95% confidence interval, 2.2-4.8) and spot sign number (n=1: odds ratio, 2.7; 95% confidence interval, 1.1-7.4; n>1: odds ratio, 3.8; 95% confidence interval, 1.2-17.1) were independently predictive of functional outcome at 3 months, with similar odds ratios. Prediction of functional outcome was not significantly different using the spot sign ICH score compared with the ICH score alone (spot sign ICH score area under curve versus ICH score area under curve: P=0.14). In the PREDICT cohort, a prognostic score adding the computed tomographic angiography-based spot sign to the established ICH score did not improve functional outcome prediction compared with the ICH score alone. © 2018 American Heart Association, Inc.
Fuček, Mirjana; Dika, Živka; Karanović, Sandra; Vuković Brinar, Ivana; Premužić, Vedran; Kos, Jelena; Cvitković, Ante; Mišić, Maja; Samardžić, Josip; Rogić, Dunja; Jelaković, Bojan
2018-02-15
Chronic kidney disease (CKD) is a significant public health problem, and it is not possible to precisely predict its progression to terminal renal failure. According to current guidelines, CKD stages are classified based on the estimated glomerular filtration rate (eGFR) and albuminuria. The aims of this study were to determine the reliability of a predictive equation in estimating CKD prevalence in Croatian areas with endemic nephropathy (EN), to compare the results with non-endemic areas, and to determine whether the prevalence of CKD stages 3-5 was increased in subjects with EN. A total of 1573 inhabitants of the Croatian Posavina rural area from 6 endemic and 3 non-endemic villages were enrolled. Participants were classified according to the modified criteria of the World Health Organization for EN. Estimated GFR was calculated using the Chronic Kidney Disease Epidemiology Collaboration equation (CKD-EPI). The results showed a very high CKD prevalence in the Croatian rural area (19%). CKD prevalence was significantly higher in EN than in non-EN villages, with the lowest eGFR value in the diseased subgroup. eGFR correlated significantly with the diagnosis of EN. Kidney function assessment using the CKD-EPI predictive equation proved to be a good marker for differentiating the study subgroups, and it remains one of the diagnostic criteria for EN.
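The CKD-EPI creatinine equation used for eGFR can be written out directly. The sketch below implements the 2009 CKD-EPI formula (eGFR in mL/min/1.73 m²) from serum creatinine in mg/dL, age, and sex; consult the original publication for the exact coefficient set and any variant used in a given study.

```python
def ckd_epi_egfr(scr_mg_dl, age, female, black=False):
    """2009 CKD-EPI creatinine equation, eGFR in mL/min/1.73 m^2."""
    kappa = 0.7 if female else 0.9       # sex-specific creatinine scale
    alpha = -0.329 if female else -0.411  # sex-specific low-range exponent
    egfr = (141.0
            * min(scr_mg_dl / kappa, 1.0) ** alpha
            * max(scr_mg_dl / kappa, 1.0) ** -1.209
            * 0.993 ** age)
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159
    return egfr
```

For example, a 50-year-old man with serum creatinine 0.9 mg/dL falls just under 100 mL/min/1.73 m², i.e. essentially normal kidney function, while higher creatinine at the same age yields lower eGFR and a higher CKD stage.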
Is The Ca + K + Mg/Al Ratio in the Soil Solution a Predictive Tool for Estimating Forest Damage?
International Nuclear Information System (INIS)
Goeransson, A.; Eldhuset, T. D.
2001-01-01
The ratio between (Ca + K + Mg) and Al in nutrient solution has been suggested as a predictive tool for estimating tree growth disturbance. However, the ratio is unspecific in the sense that it is based on several elements which are all essential for plant growth; each of these may be growth-limiting. Furthermore, aluminium retards growth at higher concentrations. It is therefore difficult to give causal and objective biological explanations for possible growth disturbances. The importance of the proportion of base cations to N, at a fixed base-cation/Al ratio, is evaluated with regard to growth of Picea abies. The uptake of elements was found to be selective; nutrients were taken up while most Al remained in solution. Biomass partitioning to the roots increased after aluminium addition with low proportions of base cations to nitrogen. We conclude that the low growth rates depend on nutrient limitation in these treatments. Low growth rates in the high proportion experiments may be explained by high internal Al concentrations. The results strongly suggest that growth rate is not correlated with the ratio in the rooting medium and question the validity of using ratios as predictive tools for estimating forest damage. We suggest that growth limitation of Picea abies in the field may depend on low proportions of base cations to nitrate. It is therefore important to know the nutritional status of the plant material in relation to the growth potential and environmental limitation to be able to predict and estimate forest damage.
Aebi, Marcel; Plattner, Belinda; Metzke, Christa Winkler; Bessler, Cornelia; Steinhausen, Hans-Christoph
2013-01-01
Background: Different dimensions of oppositional defiant disorder (ODD) have been found as valid predictors of further mental health problems and antisocial behaviors in youth. The present study aimed at testing the construct, concurrent, and predictive validity of ODD dimensions derived from parent- and self-report measures. Method: Confirmatory…
Standage, Martyn; Duda, Joan L.; Ntoumanis, Nikos
2003-01-01
Examines a study of student motivation in physical education that incorporated constructs from achievement goal and self-determination theories. Self-determined motivation positively predicted, whereas amotivation negatively predicted, leisure-time physical activity intentions. (Contains 86 references and 3 tables.) (GCP)
Malloch, Douglas C.; Michael, William B.
1981-01-01
This study was designed to determine whether an unweighted linear combination of community college students' scores on standardized achievement tests and a measure of motivational constructs derived from Vroom's expectancy theory model of motivation was predictive of academic success (grade point average earned during one quarter of an academic…
Directory of Open Access Journals (Sweden)
Yin Hua
2015-04-01
Estimation of state of charge (SOC) is of great importance for lithium-ion (Li-ion) batteries used in electric vehicles. This paper presents a state of charge estimation method using a nonlinear predictive filter (NPF) and evaluates the proposed method on lithium-ion batteries with different chemistries. Contrary to most conventional filters, which usually assume a zero-mean white Gaussian process noise, the advantage of the NPF is that the process noise is treated as an unknown model error and determined as a part of the solution without any prior assumption, and it can take any statistical distribution form, which improves the estimation accuracy. In consideration of model accuracy and computational complexity, a first-order equivalent circuit model is applied to characterize the battery behavior. Experimental tests were conducted on LiCoO2 and LiFePO4 battery cells to validate the proposed method. The results show that the NPF method is able to accurately estimate the battery SOC and is robust to different initial states for both cells. Furthermore, a comparison between the NPF and the well-established extended Kalman filter for battery SOC estimation indicates that the proposed NPF method has better estimation accuracy and converges faster.
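The first-order equivalent circuit model named in the abstract can be illustrated with a small simulation. This sketch covers only the battery model, not the NPF itself, and the resistance/capacitance values are illustrative assumptions rather than the paper's identified parameters:

```python
import numpy as np

def simulate_first_order_ecm(current, dt, ocv_of_soc, capacity_ah,
                             soc0=1.0, r0=0.01, r1=0.015, c1=2000.0):
    """Simulate the terminal voltage of a first-order RC equivalent circuit.

    current: sequence of applied currents in A (positive = discharge).
    ocv_of_soc: callable mapping SOC in [0, 1] to open-circuit voltage (V).
    Returns (terminal-voltage array, SOC trajectory array).
    """
    soc, v1 = soc0, 0.0
    tau = r1 * c1
    decay = np.exp(-dt / tau)
    voltages, socs = [], []
    for i in current:
        soc -= i * dt / (capacity_ah * 3600.0)          # coulomb counting
        v1 = v1 * decay + r1 * (1.0 - decay) * i        # RC polarization voltage
        voltages.append(ocv_of_soc(soc) - r0 * i - v1)  # terminal voltage
        socs.append(soc)
    return np.array(voltages), np.array(socs)
```

Under a constant discharge the terminal voltage sags below the open-circuit voltage while SOC falls linearly, which is the behavior an SOC filter then inverts.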
Wang, Jianming; Chen, Junran; Hu, Yunfeng; Hu, Hanyan; Liu, Guohua; Yan, Ruixiang
2017-10-01
For prediction of the shelf life of the mushroom Agaricus bisporus, the growth curve of the main spoilage microorganisms was studied under isothermal conditions at 2 to 22°C with a modified Gompertz model. The effect of temperature on the growth parameters for the main spoilage microorganisms was quantified and modeled using the square root model. Pseudomonas spp. were the main microorganisms causing A. bisporus decay, and the modified Gompertz model was useful for modelling the growth curve of Pseudomonas spp. All bias factor values of the model were close to 1. By combining the modified Gompertz model with the square root model, a prediction model to estimate the shelf life of A. bisporus as a function of storage temperature was developed. The model was validated for A. bisporus stored at 6, 12, and 18°C, and adequate agreement was found between the experimental and predicted data.
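The combination the abstract describes (a modified Gompertz primary model whose maximum growth rate comes from a Ratkowsky-type square-root secondary model) can be sketched as below. All parameter values and the spoilage threshold are hypothetical placeholders, not the study's fitted values:

```python
import math

def gompertz_log_increase(t, a, mu_max, lag):
    """Modified Gompertz (Zwietering form): log-count increase at time t (h)."""
    return a * math.exp(-math.exp(mu_max * math.e / a * (lag - t) + 1.0))

def sqrt_model_mu(temp_c, b=0.02, t_min_c=-8.0):
    """Ratkowsky square-root secondary model: sqrt(mu_max) = b*(T - Tmin)."""
    return (b * (temp_c - t_min_c)) ** 2

def shelf_life_hours(temp_c, a=6.0, lag=20.0, threshold=3.0):
    """Smallest time (h) at which the predicted log increase crosses threshold."""
    mu = sqrt_model_mu(temp_c)
    t = 0.0
    while gompertz_log_increase(t, a, mu, lag) < threshold and t < 2000.0:
        t += 1.0
    return t
```

As expected from the square-root model, warmer storage shortens the predicted shelf life.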
Jannati, Ali; McDonald, John J; Di Lollo, Vincent
2015-06-01
The capacity of visual short-term memory (VSTM) is commonly estimated by K scores obtained with a change-detection task. Contrary to common belief, K may be influenced not only by capacity but also by the rate at which stimuli are encoded into VSTM. Experiment 1 showed that, contrary to earlier conclusions, estimates of VSTM capacity obtained with a change-detection task are constrained by temporal limitations. In Experiment 2, we used change-detection and backward-masking tasks to obtain separate within-subject estimates of K and of rate of encoding, respectively. A median split based on rate of encoding revealed significantly higher K estimates for fast encoders. Moreover, a significant correlation was found between K and the estimated rate of encoding. The present findings raise the prospect that the reported relationships between K and such cognitive concepts as fluid intelligence may be mediated not only by VSTM capacity but also by rate of encoding. (c) 2015 APA, all rights reserved.
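K scores of the kind discussed here are commonly computed with Cowan's formula for single-probe change detection; the abstract does not state the exact estimator used, so this is a sketch of the standard one:

```python
def cowan_k(set_size, hit_rate, false_alarm_rate):
    """Cowan's K: VSTM capacity estimate from single-probe change detection.

    K = N * (hit rate - false-alarm rate), where N is the memory set size.
    """
    return set_size * (hit_rate - false_alarm_rate)
```

For instance, with four items, 80% hits, and 20% false alarms, the estimate is K = 2.4 items.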
B. K. Punia; Priyanka Yadav
2015-01-01
This article investigates the relationship between employees' emotional and spiritual intelligence. A conversation about spirituality and emotions within the workplace can be a difficult topic. However, emotional intelligence and spiritual intelligence are, at present, more widely acknowledged. Drawing on research connected with these constructs, we suggest that emotional intelligence within the employees in organisations may provide employees with a medium to better understand and mix s...
Aircraft ground damage and the use of predictive models to estimate costs
Kromphardt, Benjamin D.
Aircraft are frequently involved in ground damage incidents, and repair costs are often accepted as part of doing business. The Flight Safety Foundation (FSF) estimates ground damage to cost operators $5-10 billion annually. Incident reports, documents from manufacturers and regulatory agencies, and other resources were examined to better understand the problem of ground damage in aviation. Major contributing factors were explained, and two versions of a computer-based model were developed to project costs and show what is possible. One objective was to determine if the models could match the FSF's estimate. Another objective was to better understand cost savings that could be realized by efforts to further mitigate the occurrence of ground incidents. Model effectiveness was limited by access to official data, and assumptions were used where data were not available. However, the models were determined to sufficiently estimate the costs of ground incidents.
Rowlinson, Steve; Jia, Yunyan Andrea
2014-04-01
Existing heat stress risk management guidelines recommended by international standards are not practical for the construction industry, which needs site supervision staff to make instant managerial decisions to mitigate heat risks. The ability of the predicted heat strain (PHS) model [ISO 7933 (2004). Ergonomics of the thermal environment: analytical determination and interpretation of heat stress using calculation of the predicted heat strain. Geneva: International Standard Organisation] to predict maximum allowable exposure time (Dlim) has now enabled development of localized, action-triggering, threshold-based guidelines for implementation by lay frontline staff on construction sites. This article presents a protocol for development of two heat stress management tools by applying the PHS model to its full potential. One of the tools is developed to facilitate managerial decisions on an optimized work-rest regimen for paced work. The other tool is developed to enable workers' self-regulation during self-paced work.
DEFF Research Database (Denmark)
Krag, Kristian
The composition of bovine milk fat, used for human consumption, is far from the recommendations for human fat nutrition. The aim of this PhD was to describe the variance components and prediction probabilities of individual fatty acids (FA) in bovine milk, and to evaluate the possibilities...
Yield loss prediction models based on early estimation of weed pressure
DEFF Research Database (Denmark)
Asif, Ali; Streibig, Jens Carl; Andreasen, Christian
2013-01-01
thresholds are more relevant for site-specific weed management, because weeds are unevenly distributed in fields. Precision of prediction of yield loss is influenced by various factors such as locations, yield potential at the site, variation in competitive ability of mix stands of weed species and emergence...
van der Spoel, Sjoerd; Amrit, Chintan Amrit; van Hillegersberg, Jos
2017-01-01
Distribution centres (DCs) are the hubs connecting transport streams in the supply chain. The synchronisation of coming and going cargo at a DC requires reliable arrival times. To achieve this, a reliable method to predict arrival times is needed. A literature review was performed to find the
Kibble, Jonathan D.; Johnson, Teresa
2011-01-01
The purpose of this study was to evaluate whether multiple-choice item difficulty could be predicted either by a subjective judgment by the question author or by applying a learning taxonomy to the items. Eight physiology faculty members teaching an upper-level undergraduate human physiology course consented to participate in the study. The…
Improved therapy-success prediction with GSS estimated from clinical HIV-1 sequences.
Pironti, Alejandro; Pfeifer, Nico; Kaiser, Rolf; Walter, Hauke; Lengauer, Thomas
2014-01-01
Rules-based HIV-1 drug-resistance interpretation (DRI) systems disregard many amino-acid positions of the drug's target protein. The aims of this study are (1) the development of a drug-resistance interpretation system that is based on HIV-1 sequences from clinical practice rather than hard-to-get phenotypes, and (2) the assessment of the benefit of taking all available amino-acid positions into account for DRI. A dataset containing 34,934 therapy-naïve and 30,520 drug-exposed HIV-1 pol sequences with treatment history was extracted from the EuResist database and the Los Alamos National Laboratory database. 2,550 therapy-change-episode baseline sequences (TCEB) were assigned to test set A. Test set B contains 1,084 TCEB from the HIVdb TCE repository. Sequences from patients absent in the test sets were used to train three linear support vector machines to produce scores that predict drug exposure pertaining to each of 20 antiretrovirals: the first one uses the full amino-acid sequences (DEfull), the second one only considers IAS drug-resistance positions (DEonlyIAS), and the third one disregards IAS drug-resistance positions (DEnoIAS). For performance comparison, test sets A and B were evaluated with DEfull, DEnoIAS, DEonlyIAS, geno2pheno[resistance], HIVdb, ANRS, HIV-GRADE, and REGA. Clinically-validated cut-offs were used to convert the continuous output of the first four methods into susceptible-intermediate-resistant (SIR) predictions. With each method, a genetic susceptibility score (GSS) was calculated for each therapy episode in each test set by converting the SIR prediction for its compounds to integer: S=2, I=1, and R=0. The GSS were used to predict therapy success as defined by the EuResist standard datum definition. Statistical significance was assessed using a Wilcoxon signed-rank test. A comparison of the therapy-success prediction performances among the different interpretation systems for test set A can be found in Table 1, while those for test set
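The genetic susceptibility score (GSS) conversion described in the abstract (S=2, I=1, R=0, summed over a regimen's compounds) can be sketched directly:

```python
SIR_TO_SCORE = {"S": 2, "I": 1, "R": 0}

def genetic_susceptibility_score(sir_predictions):
    """Sum integer-converted SIR calls (S=2, I=1, R=0) over a regimen's drugs."""
    return sum(SIR_TO_SCORE[call] for call in sir_predictions)
```

A fully susceptible three-drug regimen scores 6, while a fully resistant one scores 0; these per-episode scores are what the study feeds into therapy-success prediction.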
The Masculinity of Money: Automatic Stereotypes Predict Gender Differences in Estimated Salaries
Williams, Melissa J.; Paluck, Elizabeth Levy; Spencer-Rodgers, Julie
2010-01-01
We present the first empirical investigation of why men are assumed to earn higher salaries than women (the "salary estimation effect"). Although this phenomenon is typically attributed to conscious consideration of the national wage gap (i.e., real inequities in salary), we hypothesize instead that it reflects differential, automatic economic…
Akkermans, Simen; Logist, Filip; Van Impe, Jan F
2018-04-01
When building models to describe the effect of environmental conditions on the microbial growth rate, parameter estimations can be performed either with a one-step method, i.e., directly on the cell density measurements, or in a two-step method, i.e., via the estimated growth rates. The two-step method is often preferred due to its simplicity. The current research demonstrates that the two-step method is, however, only valid if the correct data transformation is applied and a strict experimental protocol is followed for all experiments. Based on a simulation study and a mathematical derivation, it was demonstrated that the logarithm of the growth rate should be used as a variance stabilizing transformation. Moreover, the one-step method leads to a more accurate estimation of the model parameters and a better approximation of the confidence intervals on the estimated parameters. Therefore, the one-step method is preferred and the two-step method should be avoided. Copyright © 2017. Published by Elsevier Ltd.
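The variance-stabilizing role of the logarithm in the two-step method can be illustrated with synthetic data: growth rates with multiplicative error become homoscedastic after a log transform, so ordinary least squares on ln(mu) is appropriate. The linear secondary model and noise level below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical secondary model: ln(mu_max) is linear in temperature.
temps = np.linspace(5.0, 30.0, 12)
true_a, true_b = -4.0, 0.12
mu_true = np.exp(true_a + true_b * temps)

# Growth-rate estimates typically carry multiplicative error, so the
# logarithm of the rate (not the raw rate) has roughly constant variance.
mu_obs = mu_true * rng.lognormal(mean=0.0, sigma=0.15, size=temps.size)

# Two-step fit on the variance-stabilized scale: ordinary least squares
# applied to ln(mu_obs) recovers the underlying parameters.
b_hat, a_hat = np.polyfit(temps, np.log(mu_obs), 1)
```

Fitting the raw rates instead would let the warm-temperature points (with the largest absolute errors) dominate the fit, which is exactly the pitfall the paper warns about.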
Parameter Estimation and Prediction of a Nonlinear Storage Model: an algebraic approach
Doeswijk, T.G.; Keesman, K.J.
2005-01-01
Generally, parameters that are nonlinear in system models are estimated by nonlinear least-squares optimization algorithms. In this paper, for a nonlinear discrete-time model with a polynomial quotient structure in input, output, and parameters, a method is proposed to re-parameterize the model such
House thermal model parameter estimation method for Model Predictive Control applications
van Leeuwen, Richard Pieter; de Wit, J.B.; Fink, J.; Smit, Gerardus Johannes Maria
In this paper we investigate thermal network models with different model orders applied to various Dutch low-energy house types with high and low interior thermal mass and containing floor heating. Parameter estimations are performed by using data from TRNSYS simulations. The paper discusses results
Zastrau, David
2017-01-01
Wind drives in combination with weather routing can lower the fuel consumption of cargo ships significantly. For this reason, the author describes a mathematical method based on quantile regression for a probabilistic estimate of the wind propulsion force on a ship route.
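Quantile regression of the kind described minimizes the pinball (quantile) loss; a minimal sketch of that loss function follows (the author's full wind-propulsion model is not reproduced):

```python
def pinball_loss(y_true, y_pred, tau):
    """Pinball (quantile) loss minimized by quantile regression at level tau.

    Under-predictions are weighted by tau, over-predictions by (1 - tau),
    so minimizing the average loss yields the tau-quantile of y.
    """
    diff = y_true - y_pred
    return tau * max(diff, 0.0) + (1.0 - tau) * max(-diff, 0.0)
```

With tau = 0.9, under-predicting by 2 units costs 1.8 while over-predicting by 2 units costs only 0.2, which is why the fitted predictor sits near the 90th percentile of the force distribution.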
Linear regressive model structures for estimation and prediction of compartmental diffusive systems
Vries, D; Keesman, K.J.; Zwart, Heiko J.
In input-output relations of (compartmental) diffusive systems, physical parameters appear non-linearly, resulting in the use of (constrained) non-linear parameter estimation techniques with their shortcomings regarding global optimality and computational effort. Given a LTI system in state space
Yang, Lu; Linder, Mark W
2013-01-01
In this chapter, we use calculation of estimated warfarin maintenance dosage as an example to illustrate how to develop a multiple linear regression model to quantify the relationship between several independent variables (e.g., patients' genotype information) and a dependent variable (e.g., measureable clinical outcome).
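A self-contained sketch of the chapter's approach (multiple linear regression of dose on genotype and clinical covariates) on synthetic data. The predictors and coefficients are invented for illustration and are not clinical values:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical predictors: age plus indicator variables for two genotype
# groups (stand-ins for, e.g., CYP2C9 and VKORC1 variant carriers). The
# coefficients below are made up for illustration, not clinical values.
n = 200
age = rng.uniform(30.0, 80.0, n)
geno1 = rng.integers(0, 2, n).astype(float)
geno2 = rng.integers(0, 2, n).astype(float)
dose = 8.0 - 0.05 * age - 1.5 * geno1 - 2.0 * geno2 + rng.normal(0.0, 0.3, n)

# Ordinary least squares with an intercept column in the design matrix.
design = np.column_stack([np.ones(n), age, geno1, geno2])
beta, *_ = np.linalg.lstsq(design, dose, rcond=None)
```

The fitted coefficients quantify how much each covariate shifts the predicted maintenance dose, which is the relationship the chapter walks through.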
Pierie, J P; De Graaf, P W; Poen, H; Van der Tweel, I; Obertop, H
1994-11-01
To assess the value of relative blood perfusion of the gastric tube in prediction of impaired healing of cervical oesophagogastrostomies. Prospective study. University hospital, The Netherlands. Thirty patients undergoing transhiatal oesophagectomy and partial gastrectomy for cancer of the oesophagus or oesophagogastric junction, with gastric tube reconstruction and cervical oesophagogastrostomy. Operative measurement of gastric blood perfusion at four sites by laser Doppler flowmetry and perfusion of the same sites after construction of the gastric tube expressed as a percentage of preconstruction values. The relative perfusion at the most proximal site of the gastric tube was significantly lower than at the more distal sites (p = 0.001). Nine of 18 patients (50%) in whom the perfusion of the proximal gastric tube was less than 70% of preconstruction values developed an anastomotic stricture, compared with only 1 of 12 patients (8%) with a relative perfusion of 70% or more (p = 0.024). A reduction in perfusion of the gastric tube did not predict leakage. Impaired anastomotic healing is unlikely if relative perfusion is 70% or more of preconstruction values. Perfusion of less than 70% partly predicts the occurrence of anastomotic stricture, but leakage cannot be predicted. Factors other than blood perfusion may have a role in the process of anastomotic healing.
International Nuclear Information System (INIS)
Pei, Lei; Zhu, Chunbo; Wang, Tiansi; Lu, Rengui; Chan, C.C.
2014-01-01
The goal of this study is to realize real-time predictions of the peak power/state of power (SOP) for lithium-ion batteries in electric vehicles (EVs). To allow the proposed method to be applicable to different temperature and aging conditions, a training-free battery parameter/state estimator is presented based on an equivalent circuit model using a dual extended Kalman filter (DEKF). In this estimator, the model parameters are no longer taken as functions of factors such as SOC (state of charge), temperature, and aging; instead, all parameters will be directly estimated under the present conditions, and the impact of the temperature and aging on the battery model will be included in the parameter identification results. Then, the peak power/SOP will be calculated using the estimated results under the given limits. As an improvement to the calculation method, a combined limit of current and voltage is proposed to obtain results that are more reasonable. Additionally, novel verification experiments are designed to provide the true values of the cells' peak power under various operating conditions. The proposed methods are implemented in experiments with LiFePO4/graphite cells. The validating results demonstrate that the proposed methods have good accuracy and high adaptability. - Highlights: • A real-time peak power/SOP prediction method for lithium-ion batteries is proposed. • A training-free method based on DEKF is presented for parameter identification. • The proposed method can be applied to different temperature and aging conditions. • The calculation of peak power under the current and voltage limits is improved. • Validation experiments are designed to verify the accuracy of prediction results.
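The combined current-and-voltage limit described in the highlights can be sketched for a simple resistive battery model; this is an illustration of the limiting logic only, not the paper's DEKF-based estimator, and the numbers in the test are made up:

```python
def peak_discharge_power(ocv, r_total, v_min, i_max):
    """Peak discharge power (W) under a combined current and voltage limit.

    The voltage limit admits at most i_v = (ocv - v_min) / r_total amps;
    the allowed current is the smaller of i_v and the design limit i_max.
    """
    i_v = (ocv - v_min) / r_total
    if i_v <= i_max:
        return v_min * i_v                      # voltage limit binds
    return (ocv - r_total * i_max) * i_max      # current limit binds
```

Taking only one limit at a time would overstate the available power whenever the other constraint binds first, which is the motivation for the combined limit.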
Gowda, Dhananjaya; Airaksinen, Manu; Alku, Paavo
2017-09-01
Recently, a quasi-closed phase (QCP) analysis of speech signals for accurate glottal inverse filtering was proposed. However, the QCP analysis which belongs to the family of temporally weighted linear prediction (WLP) methods uses the conventional forward type of sample prediction. This may not be the best choice especially in computing WLP models with a hard-limiting weighting function. A sample selective minimization of the prediction error in WLP reduces the effective number of samples available within a given window frame. To counter this problem, a modified quasi-closed phase forward-backward (QCP-FB) analysis is proposed, wherein each sample is predicted based on its past as well as future samples thereby utilizing the available number of samples more effectively. Formant detection and estimation experiments on synthetic vowels generated using a physical modeling approach as well as natural speech utterances show that the proposed QCP-FB method yields statistically significant improvements over the conventional linear prediction and QCP methods.
Directory of Open Access Journals (Sweden)
Mini Joseph
2017-01-01
Background: The accuracy of existing predictive equations to determine the resting energy expenditure (REE) of professional weightlifters remains scarcely studied. Our study aimed at assessing the REE of male Asian Indian weightlifters with indirect calorimetry and comparing the measured REE (mREE) with published equations. A new equation using potential anthropometric variables to predict REE was also evaluated. Materials and Methods: REE was measured on 30 male professional weightlifters aged between 17 and 28 years using indirect calorimetry and compared with eight predictive equations (Harris–Benedict, Mifflin-St. Jeor, FAO/WHO/UNU, ICMR, Cunningham, Owen, Katch-McArdle, and Nelson). Pearson correlation coefficients, intraclass correlation coefficients, and multiple linear regression analysis were carried out to study the agreement between the different methods and the association with anthropometric variables, and to formulate a new prediction equation for this population. Results: Pearson correlation coefficients between mREE and the anthropometric variables showed positive significance with suprailiac skinfold thickness, lean body mass (LBM), waist circumference, hip circumference, bone mineral mass, and body mass. All eight predictive equations underestimated the REE of the weightlifters when compared with the mREE. The highest mean difference was 636 kcal/day (Owen, 1986) and the lowest difference was 375 kcal/day (Cunningham, 1980). Stepwise multiple linear regression showed that LBM was the only significant determinant of REE in this group of sportspersons. A new equation using LBM as the independent variable for calculating REE was computed: REE for weightlifters = −164.065 + 0.039 × LBM (confidence interval: −1122.984, 794.854). This new equation reduced the mean difference with mREE by 2.36 ± 369.15 kcal/day (standard error = 67.40). Conclusion: The significant finding of this study was that all the prediction equations

Maulana, Ridwan; Helms-Lorenz, Michelle
2016-01-01
Observations and student perceptions are recognised as important tools for examining teaching behaviour, but little is known about whether both perspectives share similar construct representations and how both perspectives link with student academic outcomes. The present study compared the construct representation of preservice teachers' teaching…
Xu, Leilei; Qin, Xiaodong; Zhang, Wen; Qiao, Jun; Liu, Zhen; Zhu, Zezhang; Qiu, Yong; Qian, Bang-ping
2015-07-01
A prospective, cross-sectional study. To determine the independent variables associated with lumbar lordosis (LL) and to establish a predictive formula for ideal LL in the Chinese population. Several formulas have been established in Caucasians to estimate the ideal LL to be restored in lumbar fusion surgery. However, there is still a lack of knowledge concerning such a predictive formula in the Chinese population. A total of 296 asymptomatic Chinese adults were prospectively recruited. The relationships between LL and variables including pelvic incidence (PI), age, sex, and body mass index were investigated to determine the independent factors that could be used to establish the predictive formula. For validation of the current formula, 4 other reported predictive formulas were included. The absolute value of the gap between the actual LL and the ideal LL yielded by these formulas was calculated and then compared between the 4 reported formulas and the current one to determine its reliability in predicting the ideal LL. The regression analysis showed that there were significant associations of LL with PI and age (R = 0.508, P < 0.001 for PI; R = 0.088, P = 0.03 for age). The formula was, therefore, established as follows: LL = 0.508 × PI - 0.088 × Age + 28.6. When applying our formula to these subjects, the gap between the predicted ideal LL and the actual LL averaged 3.9 ± 2.1°, which was significantly lower than that of the other 4 formulas. The calculation formula derived in this study can provide a more accurate prediction of the LL for the Chinese population, which could be used as a tool for decision making to restore the LL in lumbar corrective surgery. Level of Evidence: 3.
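The reported formula can be applied directly; a one-line sketch using the coefficients stated in the abstract:

```python
def predicted_ideal_ll(pelvic_incidence, age):
    """Ideal lumbar lordosis (degrees) from the study's reported formula:
    LL = 0.508 * PI - 0.088 * Age + 28.6."""
    return 0.508 * pelvic_incidence - 0.088 * age + 28.6
```

For example, a 40-year-old subject with PI = 50° gets a predicted ideal LL of about 50.5°.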
DEFF Research Database (Denmark)
Grandjean, Philippe; Heilmann, Carsten; Weihe, Pal
2017-01-01
Perfluorinated alkylate substances (PFASs) are highly persistent and may cause immunotoxic effects. PFAS-associated attenuated antibody responses to childhood vaccines may be affected by PFAS exposures during infancy, where breastfeeding adds to PFAS exposures. Of 490 members of a Faroese birth...... cohort, 275 and 349 participated in clinical examinations and provided blood samples at ages 18 months and 5 years. PFAS concentrations were measured at birth and at the clinical examinations. Using information on duration of breastfeeding, serum-PFAS concentration profiles during infancy were estimated......, with decreases by up to about 20% for each two-fold higher exposure, while associations for serum concentrations at ages 18 months and 5 years were weaker. Modeling of serum-PFAS concentration showed levels for age 18 months that were similar to those measured. Concentrations estimated for ages 3 and 6 months...
A new lifetime estimation model for a quicker LED reliability prediction
Hamon, B. H.; Mendizabal, L.; Feuillet, G.; Gasse, A.; Bataillou, B.
2014-09-01
LED reliability and lifetime prediction is a key point for Solid State Lighting adoption. For this purpose, one hundred and fifty LEDs have been aged for a reliability analysis. The LEDs were grouped following nine current-temperature stress conditions. Stress driving current was fixed between 350 mA and 1 A and ambient temperature between 85°C and 120°C. Using integrating-sphere and I(V) measurements, a cross study of the evolution of electrical and optical characteristics has been done. Results show two main failure mechanisms regarding lumen maintenance. The first is the typically observed lumen depreciation; the second is a much quicker depreciation related to an increase of the leakage and non-radiative currents. Models of the typical lumen depreciation and of the leakage resistance depreciation have been built using electrical and optical measurements taken during the aging tests. The combination of these models enables a new method toward quicker LED lifetime prediction, and the two models have been used for LED lifetime predictions.
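A common way to model the "typical" lumen depreciation mechanism (as in the IES TM-21 projection method) is an exponential decay Φ(t) = B·exp(−αt), from which the L70 lifetime follows in closed form. This sketch covers only that mechanism, not the paper's additional leakage-resistance model:

```python
import math

def l70_hours(b, alpha):
    """Time (h) to 70% lumen maintenance for Phi(t) = B * exp(-alpha * t).

    Solving B * exp(-alpha * t) = 0.7 gives t = ln(B / 0.7) / alpha.
    """
    return math.log(b / 0.7) / alpha
```

With B = 1 and a depreciation rate alpha = 1e-4 per hour, the projected L70 lifetime is roughly 3567 hours; a second, faster mechanism like the leakage-driven one in the paper would shorten the effective lifetime below this.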
Acar-Tek, Nilüfer; Ağagündüz, Duygu; Çelik, Bülent; Bozbulut, Rukiye
2017-08-01
Accurate estimation of resting energy expenditure (REE) in children and adolescents is important to establish estimated energy requirements. The aim of the present study was to measure REE in obese children and adolescents by indirect calorimetry, compare these values with REE values estimated by equations, and develop the most appropriate equation for this group. One hundred and three obese children and adolescents (57 males, 46 females) between 7 and 17 years (10.6 ± 2.19 years) were recruited for the study. REE measurements of subjects were made with indirect calorimetry (COSMED, FitMatePro, Rome, Italy) and body compositions were analyzed. In females, the percentage of accurate prediction varied from 32.6 (World Health Organization [WHO]) to 43.5 (Molnar and Lazzer). The bias for equations was -0.2% (Kim), 3.7% (Molnar), and 22.6% (Derumeaux-Burel). Kim's (266 kcal/d), Schmelzle's (267 kcal/d), and Henry's (268 kcal/d) equations had the lowest root mean square error (RMSE). The equation with the highest RMSE among female subjects was the Derumeaux-Burel equation (394 kcal/d). In males, while the Institute of Medicine (IOM) equation had the lowest accurate prediction value (12.3%), the highest values were found using Schmelzle's (42.1%), Henry's (43.9%), and Müller's (fat-free mass, FFM; 45.6%) equations. While Kim's and Müller's equations had the smallest bias (-0.6%, 9.9%), Schmelzle's equation had the smallest RMSE (331 kcal/d). The new specific equation based on FFM was generated as follows: REE = 451.722 + (23.202 * FFM). According to Bland-Altman plots, the new equation's estimates are distributed randomly in both males and females. Previously developed predictive equations mostly provided inaccurate and biased estimates of REE. However, the new predictive equations allow clinicians to estimate REE in obese children and adolescents with sufficient and acceptable accuracy.
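The new FFM-based equation reported in the abstract can be applied directly:

```python
def ree_obese_youth(ffm_kg):
    """REE (kcal/day) from the abstract's new FFM-based equation:
    REE = 451.722 + 23.202 * FFM."""
    return 451.722 + 23.202 * ffm_kg
```

For a child with 40 kg of fat-free mass this predicts about 1380 kcal/day.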
China’s primary energy demands in 2020: Predictions from an MPSO–RBF estimation model
International Nuclear Information System (INIS)
Yu Shiwei; Wei Yiming; Wang Ke
2012-01-01
Highlights: ► A Mix-encoding PSO and RBF network-based energy demand forecasting model is proposed. ► The proposed model has a simpler structure and smaller estimated errors than other ANN models. ► China’s energy demand could reach 6.25 billion, 4.16 billion, or 5.29 billion tons of coal equivalent. ► China’s energy efficiency in 2020 will increase by more than 30% compared with 2009. - Abstract: In the present study, a Mix-encoding Particle Swarm Optimization and Radial Basis Function (MPSO–RBF) network-based energy demand forecasting model is proposed and applied to forecast China’s energy consumption until 2020. The energy demand is analyzed for the period from 1980 to 2009 based on GDP, population, proportion of industry in GDP, urbanization rate, and share of coal energy. The results reveal that the proposed MPSO–RBF based model has fewer hidden nodes and smaller estimated errors compared with other ANN-based estimation models. The average annual growth of China’s energy demand will be 6.70%, 2.81%, and 5.08% for the period between 2010 and 2020 in three scenarios and could reach 6.25 billion, 4.16 billion, and 5.29 billion tons of coal equivalent in 2020. Regardless of future scenarios, China’s energy efficiency in 2020 will increase by more than 30% compared with 2009.
Hospital costs estimation and prediction as a function of patient and admission characteristics.
Ramiarina, Robert; Almeida, Renan MVR; Pereira, Wagner CA
2008-01-01
The present work analyzed the association between hospital costs and patient admission characteristics in a general public hospital in the city of Rio de Janeiro, Brazil. The unit costs method was used to estimate inpatient day costs associated with specific hospital clinics. To this end, three "cost centers" were defined in order to group direct and indirect expenses pertaining to the clinics. After the costs were estimated, a standard linear regression model was developed to correlate cost units and their putative predictors (the patient's gender and age, the admission type (urgency/elective), ICU admission (yes/no), blood transfusion (yes/no), the admission outcome (death/no death), the complexity of the medical procedures performed, and a risk-adjustment index). Data were collected for 3100 patients, January 2001-January 2003. Average inpatient costs across clinics ranged from (US$) 1135 [Orthopedics] to 3101 [Cardiology]. Costs increased according to increases in the risk-adjustment index in all clinics, and the index was statistically significant in all clinics except Urology, General surgery, and Clinical medicine. The occupation rate was inversely correlated with costs, and age had no association with costs. The (adjusted) per cent of explained variance varied between 36.3% [Clinical medicine] and 55.1% [Thoracic surgery clinic]. The estimates are an important step towards the standardization of hospital costs calculation, especially for countries that lack formal hospital accounting systems.
Estimated maximal and current brain volume predict cognitive ability in old age
Royle, Natalie A.; Booth, Tom; Valdés Hernández, Maria C.; Penke, Lars; Murray, Catherine; Gow, Alan J.; Maniega, Susana Muñoz; Starr, John; Bastin, Mark E.; Deary, Ian J.; Wardlaw, Joanna M.
2013-01-01
Brain tissue deterioration is a significant contributor to lower cognitive ability in later life; however, few studies have appropriate data to establish how much influence prior brain volume and prior cognitive performance have on this association. We investigated the associations between structural brain imaging biomarkers, including an estimate of maximal brain volume, and detailed measures of cognitive ability at age 73 years in a large (N = 620), generally healthy, community-dwelling population. Cognitive ability data were available from age 11 years. We found positive associations (r) between general cognitive ability and estimated brain volume in youth (males, 0.28; females, 0.12), and measured brain volume in later life (males, 0.27; females, 0.26). Our findings show that cognitive ability in youth is a strong predictor of estimated prior and measured current brain volume in old age, but that these effects were the same for both white and gray matter. As one of the largest studies of associations between brain volume and cognitive ability in normal aging, this work contributes to the wider understanding of how some early-life factors influence cognitive aging. PMID:23850342
Nøst, Therese Haugdahl; Breivik, Knut; Wania, Frank; Rylander, Charlotta; Odland, Jon Øyvind; Sandanger, Torkjel Manning
2015-01-01
Nøst TH, Breivik K, Wania F, Rylander C, Odland JØ, Sandanger TM. 2016. Estimating time-varying PCB exposures using person-specific predictions to supplement measured values: a comparison of observed and predicted values in two cohorts of Norwegian women. Environ Health Perspect 124:299–305; http://dx.doi.org/10.1289/ehp.1409191 PMID:26186800
Shiokai, Sachiko; Kitashiba, Hiroyasu; Nishio, Takeshi
2010-08-01
Although the dot-blot-SNP technique is a simple, cost-saving technique suitable for genotyping many plant individuals, optimization of hybridization and washing conditions for each SNP marker requires much time and labor. To predict the optimum hybridization conditions for each probe, we compared Tm values estimated from nucleotide sequences using the DINAMelt web server, measured Tm values, and the hybridization conditions yielding allele-specific signals. The estimated Tm values were comparable to the measured Tm values, with small differences of less than 3 °C for most of the probes. There were differences of approximately 14 °C between the specific signal detection conditions and the estimated Tm values. A change of one level in SSC concentration among 0.1, 0.2, 0.5, and 1.0× SSC corresponded to a difference of approximately 5 °C in optimum signal detection temperature. Increasing the sensitivity of signal detection by shortening the exposure time to X-ray film changed the optimum hybridization condition for specific signal detection. Addition of competitive oligonucleotides to the hybridization mixture increased the suitable hybridization conditions by 1.8 °C. Based on these results, optimum hybridization conditions for newly produced dot-blot-SNP markers should become predictable.
Directory of Open Access Journals (Sweden)
Hyun-Cheol Kim
2016-01-01
The northern East China Sea (a.k.a. “The South Sea”) is a dynamic zone in which Three-Gorges Dam construction exerts a variety of effects on the marine ecosystem. As the northern East China Sea region is vulnerable to climate forcing and anthropogenic impacts, it is important to investigate how the remineralization rate in the northern East China Sea has changed in response to such external forcing. We used a historical hydrographic dataset from August 1997 to obtain a baseline for future comparison. We estimated the amount of remineralized phosphate by decomposing the effects of physical mixing and biogeochemical processes using water column measurements (temperature, salinity, and phosphate). The estimated remineralized phosphate column inventory ranged from 0.8 to 42.4 mmol P m-2 (mean value of 15.2 ± 12.0 mmol P m-2). Our results suggest that the Tsushima Warm Current was a strong contributor to primary production during the summer of 1997 in the study area. The estimated summer (June - August) remineralization rate in the region before Three-Gorges Dam construction was 18 ± 14 mmol C m-2 d-1.
Residual Stress Estimation and Fatigue Life Prediction of an Autofrettaged Pressure Vessel
Energy Technology Data Exchange (ETDEWEB)
Song, Kyung Jin; Kim, Eun Kyum; Koh, Seung Kee [Kunsan Nat’l Univ., Kunsan (Korea, Republic of)
2017-09-15
Fatigue failure of an autofrettaged pressure vessel with a groove at the outside surface occurs owing to the fatigue crack initiation and propagation at the groove root. In order to predict the fatigue life of the autofrettaged pressure vessel, residual stresses in the autofrettaged pressure vessel were evaluated using the finite element method, and the fatigue properties of the pressure vessel steel were obtained from the fatigue tests. Fatigue life of a pressure vessel obtained through summation of the crack initiation and propagation lives was calculated to be 2,598 cycles for an 80% autofrettaged pressure vessel subjected to a pulsating internal pressure of 424 MPa.
Prasad, A.; Howells, A. E.; Shock, E.
2017-12-01
The biological fate of any metal depends on its chemical form in the environment. Arsenic, for example, is extremely toxic in the form of inorganic As3+ but completely benign in the organic form of arsenobetaine. Thus, given an exhaustive set of reactions and their equilibrium constants (logK), the bioavailability of any metal can be obtained for blood plasma, hydrothermal fluids, or any system of interest. While many data exist for metal-inorganic ligands, logK data covering the temperature range of life for metal-organic complexes are sparse. Hence, we decided to estimate metal-organic logK values from correlations with the commonly available values of ligand pKa. Metal-ion-specific correlations were made with ligands classified according to their electron donor atoms, denticity, and other chemical factors. While this approach has been employed before (Carbonaro et al. 2007, GCA 71, 3958-3968), new correlations were developed that provide estimates even when no metal-organic logK is available. In addition, we have used the same methods to estimate metal-organic entropies of association (ΔaS), which can provide logK for any temperature of biological relevance. Our current correlations employ logK and ΔaS data for 30 metal ions (like the biologically relevant Fe3+ and Zn2+) and 74 ligands (like formate and ethylenediamine), and they can be extended to estimate metal-ligand reaction properties for these 30 metal ions with a possibly limitless number of ligands belonging to our ligand categories. With the help of such data, copper speciation was obtained for a defined growth medium for methanotrophs employed by Morton et al. (2000, AEM 66, 1730-1733); it agrees with experimental measurements showing that the free metal ion may not be the bioavailable form in all conditions. These results encourage us to keep filling the gaps in metal-organic logK data and to continue finding relationships between biological responses (like metal-accumulation ratios).
Model-Based Load Estimation for Predictive Condition Monitoring of Wind Turbines
DEFF Research Database (Denmark)
Perisic, Nevena; Pedersen, Bo Juul; Grunnet, Jacob Deleuran
signal is performed online, and a Load Indicator Signal (LIS) is formulated as the ratio between the currently estimated accumulated fatigue load and its expected value based only on a priori knowledge (WTG dynamics and wind climate). LOT initialisation is based on a priori knowledge and can be obtained using...... programme for pre-maintenance actions. The performance of LOT is demonstrated by applying it to one of the most critical WTG components, the gearbox. A model-based load CMS for the gearbox requires only standard WTG SCADA data. Direct measurement of gearbox fatigue loads requires high-cost, low-reliability...... measurement equipment. Thus, LOT can significantly reduce the price of load monitoring....
Directory of Open Access Journals (Sweden)
Antonio Eduardo Bezerra Cabral
2012-12-01
The aim of this paper is to verify the influence of the composition variability of recycled aggregates (RA) from construction and demolition waste (CDW) on the performance of concretes. Performance was evaluated by building mathematical models for compressive strength, modulus of elasticity, and drying shrinkage. To obtain these models, an experimental program comprising 50 concrete mixtures was carried out. Specimens were cast and tested, and the results for compressive strength, modulus of elasticity, and drying shrinkage were statistically analyzed. The model inputs are the CDW compositions observed in seven Brazilian cities. The results confirm that using RA from CDW for concrete production is quite feasible, independently of its composition, since compressive strength and modulus of elasticity still reached considerable values. We conclude that the variability presented by recycled aggregates from CDW does not compromise their use in concrete production. However, this information must be used with caution, and experimental tests should always be performed to certify concrete properties.
Directory of Open Access Journals (Sweden)
CHERNYSHEV D. О.
2017-03-01
Summary. The article is devoted to the search for advanced analytical tools and methodical-algorithmic techniques for organizational-technological and stochastic evaluation and for overcoming risks and threats during the implementation of biosphere-compatible construction projects. The expediency of applying the theory and methods of wavelet analysis to the study of non-stationary stochastic oscillations of complex spatial structures is substantiated by the need for more accurate prediction of their dynamic behavior and identification of the structures' characteristics in the frequency-time space.
DEFF Research Database (Denmark)
Sinding, Marianne Munk; Peters, David Alberg; Frøkjær, Jens Brøndum
TITLE: Prediction of low birth weight: the placental T2* estimated by MRI versus the uterine artery pulsatility index (CONTROL ID: 2516296; ABSTRACT FINAL ID: P22.05). AUTHORS: Marianne Sinding, David Peters, Jens B. Frøkjær, Ole B. Christiansen, Astrid Petersen...... Background: The magnetic resonance imaging (MRI) variable T2* reflects the placental oxygenation and thereby placental function. Therefore, we aimed to evaluate the performance of placental T2* in the prediction of low birth weight, using the uterine artery (UtA) pulsatility index (PI) as the gold standard. Methods: The study population...... had an EFW...... T2* was measured by MRI at 1.5 T. A gradient recalled echo MRI sequence with readout at 16 echo times was used, and the placental T2* value was obtained by fitting the signal intensity as a function of the echo times...
Ashwin, T. R.; Barai, A.; Uddin, K.; Somerville, L.; McGordon, A.; Marco, J.
2018-05-01
Ageing prediction is often complicated by the interdependency of ageing mechanisms. Research has highlighted that storage ageing is not linear with time. Capacity loss due to storing the battery at constant temperature can shed more light on parametrising the properties of the Solid Electrolyte Interphase (SEI); the identification of these properties using an electrochemical model is systematically addressed in this work. A new methodology is proposed whereby any one of the available storage ageing datasets can be used to find the properties of the SEI layer. A sensitivity study is performed with different molecular masses and densities, which are key parameters in modelling the thickness of the SEI deposit. The conductivity is adjusted to fine-tune the rate of capacity fade to match experimental results. A correlation is fitted for the side-reaction variation to capture storage ageing over the 0%-100% SoC range. The methodology presented in this paper can be used to predict the unknown properties of the SEI layer, which are difficult to measure experimentally. The simulation and experimental results show that the storage ageing model achieves good accuracy at 50% and 90% SoC and acceptable agreement at 20% SoC.
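A common simplification in the SEI literature, offered here only as a hedged illustration (the square-root-of-time growth law and all parameter values are assumptions, not the model identified in this work), combines an Arrhenius temperature dependence with sqrt-time growth of storage capacity loss:

```python
import math

R = 8.314  # universal gas constant, J/(mol K)

def storage_capacity_loss(t_days, temp_c, k0=350.0, ea=30000.0):
    """Fractional capacity loss from SEI growth during storage.

    Uses the common square-root-of-time growth law with an Arrhenius
    temperature dependence; k0 (day^-0.5) and ea (J/mol) are illustrative
    placeholders, not parameters identified in the study.
    """
    k = k0 * math.exp(-ea / (R * (temp_c + 273.15)))
    return k * math.sqrt(t_days)
```

The nonlinearity with time and the acceleration with temperature are the two qualitative features the abstract emphasizes.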
A dual systems account of visual perception: Predicting candy consumption from distance estimates.
Krpan, Dario; Schnall, Simone
2017-04-01
A substantial amount of evidence shows that visual perception is influenced by forces that control human actions, ranging from motivation to physiological potential. However, studies have not yet provided convincing evidence that perception itself is directly involved in everyday behaviors such as eating. We suggest that this issue can be resolved by employing the dual systems account of human behavior. We tested the link between perceived distance to candies and their consumption for participants who were tired or depleted (impulsive system), versus those who were not (reflective system). Perception predicted eating only when participants were tired (Experiment 1) or depleted (Experiments 2 and 3). In contrast, a rational determinant of behavior (eating restraint towards candies) predicted eating for non-depleted individuals (Experiment 2). Finally, Experiment 3 established that perceived distance was correlated with participants' self-reported motivation to consume candies. Overall, these findings suggest that the dynamics between perception and behavior depend on the interplay of the two behavioral systems. Copyright © 2017 Elsevier B.V. All rights reserved.
Graziani, Rebecca; Guindani, Michele; Thall, Peter F.
2015-01-01
Summary The effect of a targeted agent on a cancer patient's clinical outcome putatively is mediated through the agent's effect on one or more early biological events. This is motivated by pre-clinical experiments with cells or animals that identify such events, represented by binary or quantitative biomarkers. When evaluating targeted agents in humans, central questions are whether the distribution of a targeted biomarker changes following treatment, the nature and magnitude of this change, and whether it is associated with clinical outcome. Major difficulties in estimating these effects are that a biomarker's distribution may be complex, vary substantially between patients, and have complicated relationships with clinical outcomes. We present a probabilistically coherent framework for modeling and estimation in this setting, including a hierarchical Bayesian nonparametric mixture model for biomarkers that we use to define a functional profile of pre-versus-post treatment biomarker distribution change. The functional is similar to the receiver operating characteristic used in diagnostic testing. The hierarchical model yields clusters of individual patient biomarker profile functionals, and we use the profile as a covariate in a regression model for clinical outcome. The methodology is illustrated by analysis of a dataset from a clinical trial in prostate cancer using imatinib to target platelet-derived growth factor, with the clinical aim to improve progression-free survival time. PMID:25319212
Directory of Open Access Journals (Sweden)
Gunter Spöck
2015-05-01
Recently, Spöck and Pilz [38] demonstrated that the spatial sampling design problem for the Bayesian linear kriging predictor can be transformed into an equivalent experimental design problem for a linear regression model with stochastic regression coefficients and uncorrelated errors. The stochastic regression coefficients derive from the polar spectral approximation of the residual process. Thus, standard optimal convex experimental design theory can be used to calculate optimal spatial sampling designs. The design functionals considered in Spöck and Pilz [38] did not take into account the fact that kriging is actually a plug-in predictor which uses the estimated covariance function. The resulting optimal designs were close to space-filling configurations, because the design criterion did not consider the uncertainty of the covariance function. In this paper we also assume that the covariance function is estimated, e.g., by restricted maximum likelihood (REML). We then develop a design criterion that fully takes account of the covariance uncertainty. The resulting designs are less regular and space-filling compared to those ignoring covariance uncertainty. The new designs, however, also require some closely spaced samples in order to improve the estimate of the covariance function. We also relax the assumption of Gaussian observations and assume that the data are transformed to Gaussianity by means of the Box-Cox transformation. The resulting prediction method is known as trans-Gaussian kriging. We apply the Smith and Zhu [37] approach to this kriging method and show that the resulting optimal designs also depend on the available data. We illustrate our results with a data set of monthly rainfall measurements from Upper Austria.
Adeyekun, A A; Orji, M O
2014-04-01
Objective: To compare the predictive accuracy of foetal trans-cerebellar diameter (TCD) with those of other biometric parameters in the estimation of gestational age (GA). Design: A cross-sectional study. Setting: The University of Benin Teaching Hospital, Nigeria. Subjects: Four hundred and fifty healthy singleton pregnant women, between 14-42 weeks gestation. Main outcome measures: Trans-cerebellar diameter (TCD), biparietal diameter (BPD), femur length (FL), and abdominal circumference (AC) values across the gestational age range studied; correlation and predictive values of TCD compared to those of other biometric parameters. Results: The range of values for TCD was 11.9-59.7 mm (mean = 34.2 ± 14.1 mm). TCD correlated more significantly with menstrual age than the other biometric parameters (r = 0.984, p = 0.000). TCD had a higher predictive accuracy (96.9%, ± 12 days) than BPD (93.8%, ± 14.1 days) and AC (92.7%, ± 15.3 days). Conclusion: TCD has a stronger predictive accuracy for gestational age compared to other routinely used foetal biometric parameters among Nigerian Africans.
Verkade, J. S.; Brown, J. D.; Davids, F.; Reggiani, P.; Weerts, A. H.
2017-12-01
Two statistical post-processing approaches for estimation of predictive hydrological uncertainty are compared: (i) 'dressing' of a deterministic forecast by adding a single, combined estimate of both hydrological and meteorological uncertainty and (ii) 'dressing' of an ensemble streamflow forecast by adding an estimate of hydrological uncertainty to each individual streamflow ensemble member. Both approaches aim to produce an estimate of the 'total uncertainty' that captures both the meteorological and hydrological uncertainties. They differ in the degree to which they make use of statistical post-processing techniques. In the 'lumped' approach, both sources of uncertainty are lumped by post-processing deterministic forecasts using their verifying observations. In the 'source-specific' approach, the meteorological uncertainties are estimated by an ensemble of weather forecasts. These ensemble members are routed through a hydrological model and a realization of the probability distribution of hydrological uncertainties (only) is then added to each ensemble member to arrive at an estimate of the total uncertainty. The techniques are applied to one location in the Meuse basin and three locations in the Rhine basin. Resulting forecasts are assessed for their reliability and sharpness, as well as compared in terms of multiple verification scores including the relative mean error, Brier Skill Score, Mean Continuous Ranked Probability Skill Score, Relative Operating Characteristic Score and Relative Economic Value. The dressed deterministic forecasts are generally more reliable than the dressed ensemble forecasts, but the latter are sharper. On balance, however, they show similar quality across a range of verification metrics, with the dressed ensembles coming out slightly better. Some additional analyses are suggested. Notably, these include statistical post-processing of the meteorological forecasts in order to increase their reliability, thus increasing the reliability
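As an illustration of one of the verification metrics mentioned above, the Brier score and the Brier Skill Score for probabilistic threshold-exceedance forecasts can be computed as follows; the forecast probabilities and observations are invented for the example, not taken from the study.

```python
import numpy as np

def brier_score(prob_forecasts, outcomes):
    """Mean squared difference between forecast probabilities and binary outcomes."""
    p = np.asarray(prob_forecasts, dtype=float)
    o = np.asarray(outcomes, dtype=float)
    return float(np.mean((p - o) ** 2))

def brier_skill_score(prob_forecasts, outcomes):
    """Skill relative to a climatology forecast (the observed base rate)."""
    o = np.asarray(outcomes, dtype=float)
    climatology = np.full(len(o), o.mean())
    return 1.0 - brier_score(prob_forecasts, o) / brier_score(climatology, o)

# Hypothetical forecast probabilities that streamflow exceeds a flood threshold,
# paired with what was actually observed (1 = exceeded, 0 = not exceeded)
forecasts = [0.9, 0.8, 0.2, 0.1, 0.7, 0.3]
observed = [1, 1, 0, 0, 1, 0]
bs = brier_score(forecasts, observed)         # lower is better; 0 is perfect
bss = brier_skill_score(forecasts, observed)  # 1 is perfect; <= 0 means no skill
```

The same pattern, applied per threshold and per lead time, underlies the reliability and sharpness comparisons the abstract reports.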
International Nuclear Information System (INIS)
Kim, J. Y.; Shin, C. H.; Kim, J. K.; Lee, J. K.; Park, Y. J.
2003-01-01
The variation trends of the inventories of the liquid radwaste system and of the radioactive gases released in the containment, together with their predicted values, were analyzed by linear regression for the operating histories of Yonggwang (YGN) units 3 and 4. The results show that the inventories of these systems increase linearly with operating history, but that the inventories released to the environment are considerably lower than the recommended values based on the FSAR. It is considered that some conservatism was built into the estimation methodology at the FSAR preparation stage.
Wan, Fubin; Tan, Yuanyuan; Jiang, Zhenhua; Chen, Xun; Wu, Yinong; Zhao, Peng
2017-12-01
Lifetime and reliability are the two performance parameters of premium importance for modern space Stirling-type pulse tube refrigerators (SPTRs), which are required to operate in excess of 10 years. Demonstration of these parameters provides a significant challenge. This paper proposes a lifetime prediction and reliability estimation method that utilizes accelerated degradation testing (ADT) for SPTRs related to gaseous contamination failure. The method was experimentally validated via three groups of gaseous contamination ADT. First, a performance degradation model based on the mechanism of contamination failure and the material outgassing characteristics of SPTRs was established. Next, a preliminary test was performed to determine whether the mechanism of contamination failure of the SPTRs during ADT is consistent with normal life testing. Subsequently, the experimental program of ADT was designed for SPTRs. Then, three groups of gaseous contamination ADT were performed at elevated ambient temperatures of 40 °C, 50 °C, and 60 °C, respectively, and the estimated lifetimes of the SPTRs under normal conditions were obtained through an acceleration model (the Arrhenius model). The results show good fitting of the degradation model to the experimental data. Finally, we obtained the reliability estimate of the SPTRs using the Weibull distribution. The proposed methodology makes it possible to estimate, in less than one year, the reliability of SPTRs designed for more than 10 years of operation.
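The acceleration step described above can be sketched generically: an Arrhenius acceleration factor maps failure times observed at an elevated stress temperature to equivalent times at the use temperature, and a Weibull distribution then gives the reliability. The activation energy and temperatures below are hypothetical placeholders, not values identified for the SPTRs.

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def arrhenius_acceleration_factor(ea_ev, t_use_c, t_stress_c):
    """How many times faster degradation proceeds at the stress temperature."""
    t_use_k = t_use_c + 273.15
    t_stress_k = t_stress_c + 273.15
    return math.exp((ea_ev / K_B) * (1.0 / t_use_k - 1.0 / t_stress_k))

def weibull_reliability(t, eta, beta):
    """Weibull reliability R(t) = exp(-(t/eta)^beta); eta = scale, beta = shape."""
    return math.exp(-((t / eta) ** beta))

# Hypothetical: activation energy 0.5 eV, use at 20 degC, ADT stress at 60 degC
af = arrhenius_acceleration_factor(0.5, 20.0, 60.0)
# A lifetime observed at 60 degC then corresponds to roughly af times that
# lifetime at 20 degC, and the mapped failure times feed the Weibull fit.
```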
Directory of Open Access Journals (Sweden)
Kevin V Lemley
Most predictive models of kidney disease progression have not incorporated structural data. When structural variables have been used in models, they have generally been only semi-quantitative. We examined the predictive utility of quantitative structural parameters measured on the digital images of baseline kidney biopsies from the NEPTUNE study of primary proteinuric glomerulopathies. These variables were included in longitudinal statistical models predicting the change in estimated glomerular filtration rate (eGFR) over up to 55 months of follow-up. The participants were fifty-six pediatric and adult subjects from the NEPTUNE longitudinal cohort study who had measurements made on their digital biopsy images; 25% were African-American, 70% were male, and 39% were children; 25 had focal segmental glomerular sclerosis, 19 had minimal change disease, and 12 had membranous nephropathy. We considered four different sets of candidate predictors, each including four quantitative structural variables (for example, mean glomerular tuft area, cortical density of patent glomeruli, and two of the principal components from the correlation matrix of six fractional cortical areas: interstitium, atrophic tubule, intact tubule, blood vessel, sclerotic glomerulus, and patent glomerulus) along with 13 potentially confounding demographic and clinical variables (such as race, age, diagnosis, baseline eGFR, quantitative proteinuria, and BMI). We used longitudinal linear models based on these 17 variables to predict the change in eGFR over up to 55 months. All four models had a leave-one-out cross-validated R2 of about 62%. Several combinations of quantitative structural variables were significantly and strongly associated with changes in eGFR. The structural variables were generally stronger predictors than any of the confounding variables, other than baseline eGFR. Our findings suggest that quantitative assessment of diagnostic renal biopsies may play a role in estimating the baseline
Directory of Open Access Journals (Sweden)
Silvia Barbetta
2016-10-01
This work presents the application of the multi-temporal approach of the Model Conditional Processor (MCP-MT) for predictive uncertainty (PU) estimation in the Godavari River basin, India. MCP-MT is developed for probabilistic Bayesian decision-making. It is the most appropriate approach if the uncertainty of future outcomes is to be considered. It yields the best predictive density of future events and allows determining the probability that a critical warning threshold may be exceeded within a given forecast time. In Bayesian decision-making, the predictive density represents the best available knowledge on a future event to support a rational decision-making process. MCP-MT has already been tested in case studies selected in Italian river basins, showing evidence of improvement in the effectiveness of operative real-time flood forecasting systems. The application of MCP-MT to two river reaches selected in the Godavari River basin, India, is here presented and discussed, considering the stage forecasts provided by a deterministic model, STAFOM-RCM, and an hourly dataset covering seven monsoon seasons in the period 2001–2010. The results show that the PU estimate is useful for finding the exceedance probability for a given hydrometric threshold as a function of the forecast time up to 24 h, demonstrating its potential usefulness for supporting real-time decision-making. Moreover, the expected value provided by MCP-MT yields better results than the deterministic model predictions, with higher Nash–Sutcliffe coefficients and lower errors on stage forecasts, both in terms of mean error and standard deviation and in terms of root mean square error.
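The threshold-exceedance computation at the core of such a PU estimate can be illustrated under a simplifying Gaussian assumption (MCP-MT derives its predictive density from the Bayesian processor; the Gaussian form and all numbers here are only for illustration):

```python
import math

def exceedance_probability(mean, sd, threshold):
    """P(stage > threshold) when the predictive density is N(mean, sd^2)."""
    z = (threshold - mean) / sd
    # Standard normal CDF via the error function (standard library only)
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return 1.0 - cdf

# Hypothetical 12 h-ahead forecast: predicted stage 4.2 m, sd 0.5 m,
# hydrometric warning threshold 5.0 m
p_exceed = exceedance_probability(4.2, 0.5, 5.0)
```

Evaluating this probability for each lead time up to 24 h produces the exceedance-versus-forecast-time curve the abstract describes.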
International Nuclear Information System (INIS)
Sato, Hiroaki
2009-01-01
This report addresses a methodology for deep subsurface structure modeling in the Niigata plain, Japan, to estimate site amplification factors in the broadband frequency range for broadband strong motion prediction. In order to investigate deep S-wave velocity structures, we conducted microtremor array measurements at nine sites in the Niigata plain, which are important for estimating both long- and short-period ground motion. The estimated depths of the top of the basement layer agree well with those of the Green tuff formation as well as with the Bouguer anomaly distribution. Dispersion characteristics derived from the observed long-period ground motion records are well explained by the theoretical dispersion curves of Love wave group velocities calculated from the estimated subsurface structures. These results demonstrate that deep subsurface structures derived from microtremor array measurements make it possible to estimate long-period ground motions in the Niigata plain. Moreover, the applicability of microtremor array exploration to inclined basement structures, such as folded structures, is shown by two-dimensional finite difference numerical simulations. The short-period site amplification factors in the Niigata plain are empirically estimated by spectral inversion analysis of the S-wave portions of strong motion data. The resulting site amplification characteristics are relatively large in the frequency range of about 1.5-5 Hz and decay significantly as the frequency increases above about 5 Hz. However, these features cannot be explained by calculations from the deep subsurface structures alone. The estimation of site amplification factors in the frequency range of about 1.5-5 Hz is improved by introducing a detailed shallow structure down to a depth of GL-20 m at a site. We also propose considering random fluctuation in the modeling of deep S-wave velocity structure for broadband site amplification factor estimation. Site amplification in the frequency range higher than about 5 Hz is filtered
Best estimate prediction for LOFT nuclear experiment L3-2
International Nuclear Information System (INIS)
Kee, E.J.; Shinko, M.S.; Grush, W.H.; Condie, K.G.
1980-02-01
Comprehensive analyses using both the RELAP4 and the RELAP5 computer codes were performed to predict the LOFT transient thermal-hydraulic response for nuclear Loss-of-Coolant Experiment L3-2 to be performed in the Loss-of-Fluid Test (LOFT) facility. The LOFT experiment will simulate a small break in one of the cold legs of a large four-loop pressurized water reactor and will be conducted with the LOFT reactor operating at 50 MW. The break in LOCE L3-2 is sized to cause the break flow to be approximately equal to the high-pressure injection system flow at an intermediate pressure of approximately 7.6 MPa
Mandal, S.; Choudhury, B. U.
2015-07-01
Sagar Island, situated on the continental shelf of the Bay of Bengal, is one of the deltas most vulnerable to extreme rainfall-driven climatic hazards. Information on the probability of occurrence of maximum daily rainfall will be useful in devising risk management for sustaining the rainfed agrarian economy vis-a-vis food and livelihood security. Using six probability distribution models and long-term (1982-2010) daily rainfall data, we studied the probability of occurrence of annual, seasonal, and monthly maximum daily rainfall (MDR) on the island. To select the best-fit distribution models for the annual, seasonal, and monthly time series, based on maximum rank with minimum value of the test statistics, three statistical goodness-of-fit tests, viz. the Kolmogorov-Smirnov test (K-S), the Anderson-Darling test (A2), and the Chi-square test (X2), were employed. The best-fit probability distribution was then identified from the highest overall score obtained across the three goodness-of-fit tests. Results revealed that the normal probability distribution was best fitted for annual, post-monsoon, and summer season MDR, while the Lognormal, Weibull, and Pearson 5 distributions were best fitted for the pre-monsoon, monsoon, and winter seasons, respectively. The estimated annual MDR were 50, 69, 86, 106, and 114 mm for return periods of 2, 5, 10, 20, and 25 years, respectively. The probabilities of an annual MDR of >50, >100, >150, >200, and >250 mm were estimated as 99, 85, 40, 12, and 3% levels of exceedance, respectively. The monsoon, summer, and winter seasons exhibited comparatively higher probabilities (78 to 85%) for MDR of >100 mm and moderate probabilities (37 to 46%) for >150 mm. For different recurrence intervals, the percent probability of MDR varied widely across intra- and inter-annual periods. On the island, rainfall anomalies can pose a climatic threat to the sustainability of agricultural production, and thus adequate adaptation and mitigation measures are needed.
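The return-period calculation described above can be sketched with the standard library: given a fitted normal distribution for annual MDR, the design rainfall for a T-year return period is the (1 - 1/T) quantile. The mean and standard deviation below are illustrative placeholders, not the study's fitted parameters.

```python
from statistics import NormalDist

def design_rainfall(mean_mdr_mm, sd_mdr_mm, return_period_years):
    """Annual maximum daily rainfall (mm) not exceeded with probability 1 - 1/T."""
    non_exceedance = 1.0 - 1.0 / return_period_years
    return NormalDist(mean_mdr_mm, sd_mdr_mm).inv_cdf(non_exceedance)

# Illustrative normal fit for the annual MDR series: mean 50 mm, sd 30 mm
quantiles = {T: design_rainfall(50.0, 30.0, T) for T in (2, 5, 10, 20, 25)}
# For T = 2 the quantile is the median, which for a normal fit equals the mean
```

The exceedance probabilities quoted in the abstract are the complements of the same CDF, evaluated at fixed rainfall amounts instead of fixed return periods.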
Grandjean, Philippe; Heilmann, Carsten; Weihe, Pal; Nielsen, Flemming; Mogensen, Ulla B; Timmermann, Amalie; Budtz-Jørgensen, Esben
2017-12-01
Perfluorinated alkylate substances (PFASs) are highly persistent and may cause immunotoxic effects. PFAS-associated attenuation of antibody responses to childhood vaccines may be affected by PFAS exposures during infancy, where breastfeeding adds to PFAS exposures. Of 490 members of a Faroese birth cohort, 275 and 349 participated in clinical examinations and provided blood samples at ages 18 months and 5 years, respectively. PFAS concentrations were measured at birth and at the clinical examinations. Using information on the duration of breastfeeding, serum-PFAS concentration profiles during infancy were estimated. As outcomes, serum concentrations of antibodies against tetanus and diphtheria vaccines were determined at age 5. Data from a previous cohort born eight years earlier were available for pooled analyses. Pre-natal exposure showed inverse associations with the antibody concentrations five years later, with decreases of up to about 20% for each two-fold higher exposure, while associations for serum concentrations at ages 18 months and 5 years were weaker. Modeled serum-PFAS concentrations for age 18 months were similar to those measured. Concentrations estimated for ages 3 and 6 months showed the strongest inverse associations with antibody concentrations at age 5 years, particularly for tetanus. Joint analyses showed statistically significant decreases in tetanus antibody concentrations of 19-29% at age 5 for each doubling of the PFAS exposure in early infancy. These findings support the notion that the developing adaptive immune system is particularly vulnerable to immunotoxicity during infancy. This vulnerability appears to be greatest during the first 6 months after birth, when PFAS exposures are affected by breastfeeding.
Karakuła-Juchnowicz, Hanna; Stecka, Mariola
2017-08-29
In view of the unavailability in Poland of standardized methods to measure premorbid IQ, the aim of this work was to develop a Polish test to assess the premorbid level of intelligence, the PART (Polish Adult Reading Test), and to measure its psychometric properties, such as validity and reliability, as well as to standardize it in a group of schizophrenia patients. The principles of PART construction were based on the widely used National Adult Reading Test by Hazel Nelson. The research comprised a group of 122 subjects (65 schizophrenia patients and 57 healthy people), aged 18-60 years, matched for age and gender. PART appears to be a method with high internal consistency and reliability, as measured by test-retest and inter-rater reliability, and with acceptable diagnostic and prognostic validity. The standardized procedures of PART have been investigated and described. Considering the psychometric values of PART and the short time needed to administer it, the test may be a useful diagnostic instrument for assessing the premorbid level of intelligence in schizophrenia patients.
Gender, g, gender identity concepts, and self-constructs as predictors of the self-estimated IQ.
Storek, Josephine; Furnham, Adrian
2013-01-01
In all, 102 participants completed two intelligence tests, a self-estimated domain-masculine intelligence (DMIQ) rating (a composite of self-rated mathematical-logical and spatial intelligence), a measure of self-esteem, and a measure of self-control. The aim was to confirm and extend previous findings on the role of general intelligence and gender identity in self-assessed intelligence, and to examine further correlates of the Hubris-Humility Effect, which shows that men believe they are more intelligent than women. The DMIQ scores correlated significantly with gender, psychometrically assessed IQ, and masculinity, but not with self-esteem or self-control. Stepwise regressions indicated that gender and gender role were the strongest predictors of DMIQ, accounting for a third of the variance.
Energy Technology Data Exchange (ETDEWEB)
Barraj, Leila M. [Chemical Regulation and Food Safety, Exponent, Inc., Suite 1100, 1150 Connecticut Ave., NW, Washington, DC 20036 (United States)], E-mail: lbarraj@exponent.com; Scrafford, Carolyn G. [Chemical Regulation and Food Safety, Exponent, Inc., Suite 1100, 1150 Connecticut Ave., NW, Washington, DC 20036 (United States); Eaton, W. Cary [RTI International, 3040 Cornwallis Road, Research Triangle Park, NC 27709 (United States); Rogers, Robert E.; Jeng, Chwen-Jyh [Toxcon Health Sciences Research Centre Inc., 9607 - 41 Avenue, Edmonton, Alberta, T6E 5X7 (Canada)
2009-04-01
Lumber treated with chromated copper arsenate (CCA) has been used in residential outdoor wood structures and playgrounds. The U.S. EPA has conducted a probabilistic assessment of children's exposure to arsenic from CCA-treated structures using the Stochastic Human Exposure and Dose Simulation model for the wood preservative scenario (SHEDS-Wood). The EPA assessment relied on data from an experimental study using adult volunteers and designed to measure arsenic in maximum hand and wipe loadings. Analyses using arsenic handloading data from a study of children playing on CCA-treated play structures in Edmonton, Canada, indicate that the maximum handloading values significantly overestimate the exposure that occurs during actual play. The objective of our paper is to assess whether the dislodgeable arsenic residues from structures in the Edmonton study are comparable to those observed in other studies and whether they support the conclusion that the values derived by EPA using modeled maximum loading values overestimate hand exposures. We compared dislodgeable arsenic residue data from structures in the playgrounds in the Edmonton study to levels observed in studies used in EPA's assessment. Our analysis showed that the dislodgeable arsenic levels in the Edmonton playground structures are similar to those in the studies used by EPA. Hence, the exposure estimates derived using the handloading data from children playing on CCA-treated structures are more representative of children's actual exposures than the overestimates derived by EPA using modeled maximum values. Handloading data from children playing on CCA-treated structures should be used to reduce the uncertainty of modeled estimates derived using the SHEDS-Wood model.
International Nuclear Information System (INIS)
Williamson, M.
1994-01-01
The paper lists major construction projects in worldwide processing and pipelining, showing capacities, contractors, estimated costs, and time of construction. The lists are divided into refineries, petrochemical plants, sulfur recovery units, gas processing plants, pipelines, and related fuel facilities. The last classification includes cogeneration plants, coal liquefaction and gasification plants, biomass power plants, geothermal power plants, integrated coal gasification combined-cycle power plants, and a coal briquetting plant.
Directory of Open Access Journals (Sweden)
Xiaodan Tan
2017-12-01
Full Text Available The auditory steady-state response (ASSR) is one of the main approaches in the clinic for health screening and frequency-specific hearing assessment. However, its generation mechanism is still of much controversy. In the present study, the linear superposition hypothesis for the generation of ASSRs was investigated by comparing the relationships between the classical 40 Hz ASSR and three synthetic ASSRs obtained from three different templates for the transient auditory evoked potential (AEP). These three AEPs are the traditional AEP at 5 Hz and two 40 Hz AEPs derived from two deconvolution algorithms using stimulus sequences, i.e., continuous loop averaging deconvolution (CLAD) and multi-rate steady-state average deconvolution (MSAD). CLAD requires irregular inter-stimulus intervals (ISIs) in the sequence, while MSAD uses the same ISIs but evenly-spaced stimulus sequences, which mimics the classical 40 Hz ASSR. It has been reported that these reconstructed templates show similar patterns but significant differences in morphology, and distinct frequency characteristics in synthetic ASSRs. The prediction accuracies of ASSR using these templates show significant differences (p < 0.05) in 45.95, 36.28, and 10.84% of the total time points within four cycles of ASSR for the traditional, CLAD, and MSAD templates, respectively, as compared with the classical 40 Hz ASSR, and the ASSR synthesized from the MSAD transient AEP shows the best similarity. Such a similarity is also demonstrated at the individual level only in MSAD, showing no statistically significant difference (Hotelling's T2 test, T2 = 6.96, F = 0.80, p = 0.592) as compared with the classical 40 Hz ASSR. The present results indicate that both the stimulation rate and the sequencing factor (ISI variation) affect transient AEP reconstructions from steady-state stimulation protocols. Furthermore, both the auditory brainstem response (ABR) and the middle latency response (MLR) are observed to contribute to the composition of the ASSR but
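The linear superposition hypothesis tested here can be illustrated numerically: a synthetic ASSR is built by summing copies of a transient AEP template shifted by the 25-ms period of 40-Hz stimulation. The template below is a hypothetical damped oscillation standing in for ABR/MLR components, not a recorded AEP, and the sampling rate is an assumption.

```python
import numpy as np

fs = 1000  # sampling rate in Hz (assumed)
t = np.arange(0, 0.1, 1 / fs)  # 100-ms transient AEP template

# Hypothetical transient AEP: a damped oscillation
aep = np.exp(-t / 0.03) * np.sin(2 * np.pi * 30 * t)

# Superpose copies at the 40-Hz stimulation rate (one stimulus every 25 ms)
isi = int(0.025 * fs)          # 25-ms inter-stimulus interval in samples
n_stim, n_out = 40, fs         # 1 s of synthetic ASSR
assr = np.zeros(n_out)
for k in range(n_stim):
    start = k * isi
    stop = min(start + aep.size, n_out)
    assr[start:stop] += aep[: stop - start]

# Once the overlapping templates settle, the response repeats every 25 ms,
# i.e. it is a steady-state response at 40 Hz
print(np.allclose(assr[500:525], assr[525:550]))
```

Comparing such a synthetic trace against a measured 40 Hz ASSR is, in essence, the test the study performs with its three reconstructed templates.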
International Nuclear Information System (INIS)
Blaylock, B.G.; Witherspoon, J.P.
1975-01-01
Aquatic organisms are exposed to radionuclides released to the environment during various steps of the nuclear fuel cycle. Routine releases from these processes are limited in compliance with technical specifications and requirements of federal regulations. These regulations reflect I.C.R.P. recommendations which are designed to provide an environment considered safe for man. It is generally accepted that aquatic organisms will not receive damaging external radiation doses in such environments; however, because of possible bioaccumulation of radionuclides there is concern that aquatic organisms might be adversely affected by internal doses. The objectives of this paper are: to estimate the radiation dose received by aquatic biota from the different processes and determine the major dose-contributing radionuclides, and to assess the impact of estimated doses on aquatic biota. Dose estimates are made by using radionuclide concentration measured in the liquid effluents of representative facilities. This evaluation indicates the potential for the greatest radiation dose to aquatic biota from the nuclear fuel supply facilities (i.e., uranium mining and milling). The effects of chronic low-level radiation on aquatic organisms are discussed from somatic and genetic viewpoints. Based on the body of radiobiological evidence accumulated up to the present time, no significant deleterious effects are predicted for populations of aquatic organisms exposed to the estimated dose rates resulting from routine releases from conversion, enrichment, fabrication, reactors and reprocessing facilities. At the doses estimated for milling and mining operations it would be difficult to detect radiation effects on aquatic populations; however, the significance of such radiation exposures to aquatic populations cannot be fully evaluated without further research on effects of chronic low-level radiation. (U.S.)
Estimates and Predictions of Methane Emissions from Wastewater in China from 2000 to 2020
Du, Mingxi; Zhu, Qiuan; Wang, Xiaoge; Li, Peng; Yang, Bin; Chen, Huai; Wang, Meng; Zhou, Xiaolu; Peng, Changhui
2018-02-01
Methane accounts for 20% of the global warming caused by greenhouse gases, and wastewater is a major anthropogenic source of methane. Based on the Intergovernmental Panel on Climate Change greenhouse gas inventory guidelines and current research findings, we calculated the amount of methane emissions from 2000 to 2014 that originated from wastewater from different provinces in China. Methane emissions from wastewater increased from 1349.01 to 3430.03 Gg from 2000 to 2014, and the mean annual increase was 167.69 Gg. The methane emissions from industrial wastewater treated by wastewater treatment plants (EIt) accounted for the highest proportion of emissions. We also estimated the future trend of industrial wastewater methane emissions using the artificial neural network model. A comparison of the emissions for the years 2020, 2010, and 2000 showed an increasing trend in methane emissions in China and a spatial transition of industrial wastewater emissions from eastern and southern regions to central and southwestern regions and from coastal regions to inland regions. These changes were caused by changes in economics, demographics, and relevant policies.
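The IPCC inventory method referenced here reduces, at first order, to multiplying an organically degradable wastewater load by a maximum methane-producing capacity (B0) and a methane correction factor (MCF) for the treatment pathway. The load and MCF below are made-up numbers for illustration; B0 = 0.25 kg CH4/kg COD is a commonly cited IPCC default.

```python
# First-order IPCC-style estimate of CH4 emissions from industrial wastewater:
# emissions = degradable load (TOW) * B0 * MCF - recovered CH4
B0 = 0.25          # max CH4-producing capacity, kg CH4 / kg COD (IPCC default)
mcf = 0.3          # methane correction factor for the treatment pathway (assumed)
tow = 5.0e8        # total organic wastewater load, kg COD/yr (hypothetical)
recovered = 0.0    # CH4 recovered or flared, kg/yr

ch4_kg = tow * B0 * mcf - recovered
print(f"{ch4_kg / 1e6:.1f} Gg CH4/yr")  # -> 37.5 Gg CH4/yr
```

Summing such terms over provinces and treatment pathways gives national totals like the 1349-3430 Gg range reported above.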
Directory of Open Access Journals (Sweden)
Quentin Noirhomme
2014-01-01
Full Text Available Multivariate classification is used in neuroimaging studies to infer brain activation or in medical applications to infer diagnosis. Their results are often assessed through either a binomial or a permutation test. Here, we simulated classification results of generated random data to assess the influence of the cross-validation scheme on the significance of results. Distributions built from classification of random data with cross-validation did not follow the binomial distribution. The binomial test is therefore not adapted. On the contrary, the permutation test was unaffected by the cross-validation scheme. The influence of the cross-validation was further illustrated on real-data from a brain–computer interface experiment in patients with disorders of consciousness and from an fMRI study on patients with Parkinson disease. Three out of 16 patients with disorders of consciousness had significant accuracy on binomial testing, but only one showed significant accuracy using permutation testing. In the fMRI experiment, the mental imagery of gait could discriminate significantly between idiopathic Parkinson's disease patients and healthy subjects according to the permutation test but not according to the binomial test. Hence, binomial testing could lead to biased estimation of significance and false positive or negative results. In our view, permutation testing is thus recommended for clinical application of classification with cross-validation.
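The contrast between the two tests can be reproduced on random data; a minimal sketch assuming scikit-learn is available. The cross-validated accuracy of a classifier on pure noise is scored once with a binomial test (which treats the n predictions as independent coin flips) and once with a label-permutation test that re-runs the entire cross-validation for each shuffle.

```python
import numpy as np
from scipy import stats
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
n = 40
X = rng.normal(size=(n, 10))      # pure noise features
y = np.repeat([0, 1], n // 2)     # balanced, uninformative labels

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
clf = LogisticRegression()
acc = cross_val_score(clf, X, y, cv=cv).mean()

# Binomial test: assumes n independent Bernoulli(0.5) trials,
# an assumption cross-validation violates
k = int(round(acc * n))
p_binom = stats.binomtest(k, n, 0.5, alternative="greater").pvalue

# Permutation test: repeat the whole cross-validation on shuffled labels
null_accs = []
for _ in range(200):
    y_perm = rng.permutation(y)
    null_accs.append(cross_val_score(clf, X, y_perm, cv=cv).mean())
p_perm = (1 + sum(a >= acc for a in null_accs)) / (1 + len(null_accs))
print(f"CV accuracy {acc:.2f}: binomial p = {p_binom:.3f}, permutation p = {p_perm:.3f}")
```

Because the null distribution of cross-validated accuracy is wider than Binomial(n, 0.5)/n, the binomial p-value tends to be anti-conservative, which is the bias the study documents.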
Directory of Open Access Journals (Sweden)
Vencita Priyanka Aranha
2017-01-01
Full Text Available Objectives: Motor cognitive processing speed (MCPS) is often reported in terms of reaction time. In spite of being a significant indicator of function, behavior, and performance, MCPS is rarely used in clinics and schools to identify children with slowed motor cognitive processing, the reason being the lack of a convenient formula to estimate MCPS. Therefore, the aim of this study was to estimate MCPS in primary schoolchildren. Materials and Methods: Two hundred and four primary schoolchildren, aged 6-12 years, were recruited by the cluster sampling method for this cross-sectional study. MCPS was estimated by the ruler drop method (RDM): a metallic stainless steel ruler was suspended vertically such that the 5 cm graduation of its lower end was aligned with the web space of the child's hand, and the child was asked to catch the moving ruler as quickly as possible once it was released from the examiner's hand. The distance the ruler traveled was recorded and converted into time, which is the MCPS. Multiple regression analysis of variables was performed to determine the influence of independent variables on MCPS. Results: The mean MCPS of the entire sample of 204 primary schoolchildren was 230.01 ms ± 26.5 standard deviation (95% confidence interval 226.4-233.7 ms), with a range of 162.9 to 321.6 ms. By stepwise regression analysis, we derived the regression equation MCPS (ms) = 279.625 - 5.495 × age, with 41.3% (R = 0.413) predictability and 17.1% (R2 = 0.171, adjusted R2 = 0.166) variability. Conclusion: An MCPS prediction formula through the RDM in primary schoolchildren has been established.
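The distance-to-time conversion behind the ruler drop method follows from free fall: d = ½gt², so t = √(2d/g). A minimal helper (the 26 cm input is an illustrative value, not the study's data):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def ruler_drop_time_ms(distance_cm: float) -> float:
    """Convert the distance a free-falling ruler travels into reaction time.

    From d = 0.5 * g * t^2  =>  t = sqrt(2 * d / g).
    """
    d = distance_cm / 100.0             # cm -> m
    return math.sqrt(2 * d / G) * 1000.0  # s -> ms

# A fall of about 26 cm corresponds to the reported mean MCPS of ~230 ms
print(round(ruler_drop_time_ms(26.0)))  # -> 230
```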
Estimated VO2max and its corresponding velocity predict performance of amateur runners
Directory of Open Access Journals (Sweden)
Bruno Ribeiro Ramalho Oliveira
2012-03-01
Full Text Available In recent years, there has been a substantial increase in the number of runners, with a proportional increase in their involvement in amateur street competitions. Identification of the determinants of performance in this population appears necessary to optimize the time devoted to training. The objective of this study was to ascertain the association between estimated maximal oxygen uptake (VO2max), critical velocity (CV) and VO2max velocity (VVO2max) and athletic performance in the 3.6 km (uphill) and 10 and 21.1 km (flatland) events. Twelve amateur runners (nine male, mean age 36 ± 5 years) underwent five tests: 1 and 5 km races on level ground, a 3.6 km race with slope (≈8%), and indirect VO2max measurement. CV was determined from the linear relationship between distance and run time in the first two tests. The subjects then took part in two official races, 10 km and 21.1 km (half marathon). VVO2max was calculated from the VO2max through a metabolic equation. VO2max showed the best association with running performance in the 10 and 21.1 km events; for the uphill race, VVO2max showed a better association. Overall, the variable with the highest average association was VO2max (0.91 ± 0.07), followed by VVO2max (0.90 ± 0.04) and CV (0.87 ± 0.06). This study showed strong associations between physiological variables established by low-cost, user-friendly indirect methods and running performance in the 10 and 21.1 km (flatland) and 3.6 km (uphill) running events.
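Critical velocity is the slope of the linear distance-time relationship across the two time trials (d = ARC + CV·t, where the intercept ARC is the anaerobic running capacity). With two trials the fit reduces to a two-point slope; the race times below are hypothetical, not the study's data.

```python
# Critical velocity from two time trials: d = ARC + CV * t
# Hypothetical (distance m, time s) pairs for the 1 km and 5 km trials
trials = [(1000.0, 240.0), (5000.0, 1500.0)]

(d1, t1), (d2, t2) = trials
cv = (d2 - d1) / (t2 - t1)   # slope: critical velocity, m/s
arc = d1 - cv * t1           # intercept: anaerobic running capacity, m

print(f"CV = {cv:.2f} m/s, ARC = {arc:.0f} m")
```

With more than two trials, the same CV and ARC would come from an ordinary least-squares line through all (t, d) points.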
Directory of Open Access Journals (Sweden)
A. Yakubu
2014-10-01
Full Text Available The study aimed to develop prediction models using stepwise multiple linear regression analysis for estimating the body condition score (BCS) from the body weight (BW), testicular length (TL), testicular diameter (TD) and scrotal circumference (SC) of indigenous Yankasa rams. Data were obtained from 120 randomly selected rams, approximately two and a half years of age, from different extensively managed herds in Nasarawa State, Nigeria. Although pairwise phenotypic correlations indicated strong association (P<0.01) among the measured variables, there was a collinearity problem between BW and SC, as revealed by the variance inflation factors (VIF) and tolerance values (T). The VIF was higher than 10 (VIF = 19.45 and 16.65 for BW and SC, respectively) and the T was smaller than 0.1 (T = 0.05 and 0.06 for BW and SC, respectively). BW was retained among the collinear variables, and singly accounted for 83.7% of the variation in BCS. However, a slight improvement was obtained from the prediction of BCS from BW and TL [coefficient of determination (R2), adjusted R2 and root mean square error (RMSE) were 85.3%, 85.1% and 0.305, respectively]. The prediction of the BCS of Yankasa rams from BW and testicular measurements could therefore be a potential tool for sustainable production and improvement of small ruminants in Nigeria.
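The collinearity screen the authors apply (VIF > 10, tolerance = 1/VIF < 0.1) can be sketched in plain NumPy: regress each predictor on the others and compute VIF = 1/(1 - R²). The ram measurements below are simulated, with SC deliberately constructed to track BW.

```python
import numpy as np

def vif(X: np.ndarray) -> np.ndarray:
    """Variance inflation factor for each column of predictor matrix X."""
    X = np.column_stack([np.ones(len(X)), X])  # add intercept
    out = []
    for j in range(1, X.shape[1]):
        others = np.delete(X, j, axis=1)
        # Regress column j on the remaining predictors
        beta, *_ = np.linalg.lstsq(others, X[:, j], rcond=None)
        resid = X[:, j] - others @ beta
        r2 = 1 - resid.var() / X[:, j].var()
        out.append(1 / (1 - r2))
    return np.array(out)

rng = np.random.default_rng(1)
bw = rng.normal(45, 5, 120)              # hypothetical body weight, kg
sc = 0.6 * bw + rng.normal(0, 0.8, 120)  # scrotal circumference tied to BW
tl = rng.normal(10, 1, 120)              # testicular length, independent
X = np.column_stack([bw, sc, tl])

print(np.round(vif(X), 1))  # BW and SC strongly inflated, TL near 1
```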
Prediction of embankment settlement over soft soils.
2009-06-01
The objective of this project was to review and verify the current design procedures used by TxDOT to estimate the total and rate of consolidation settlement in embankments constructed on soft soils. Methods to improve the settlement predictions ...
Eaton, John E; Vesterhus, Mette; McCauley, Bryan M; Atkinson, Elizabeth J; Schlicht, Erik M; Juran, Brian D; Gossard, Andrea A; LaRusso, Nicholas F; Gores, Gregory J; Karlsen, Tom H; Lazaridis, Konstantinos N
2018-05-09
Improved methods are needed to risk stratify and predict outcomes in patients with primary sclerosing cholangitis (PSC). Therefore, we sought to derive and validate a new prediction model and compare its performance to existing surrogate markers. The model was derived using 509 subjects from a multicenter North American cohort and validated in an international multicenter cohort (n=278). Gradient boosting, a machine learning technique, was used to create the model. The endpoint was hepatic decompensation (ascites, variceal hemorrhage or encephalopathy). Subjects with advanced PSC or cholangiocarcinoma at baseline were excluded. The PSC risk estimate tool (PREsTo) consists of 9 variables: bilirubin, albumin, serum alkaline phosphatase (SAP) times the upper limit of normal (ULN), platelets, AST, hemoglobin, sodium, patient age and the number of years since PSC was diagnosed. Validation in an independent cohort confirms that PREsTo accurately predicts decompensation (C statistic 0.90, 95% confidence interval (CI) 0.84-0.95) and performs well compared to the MELD score (C statistic 0.72, 95% CI 0.57-0.84), the Mayo PSC risk score (C statistic 0.85, 95% CI 0.77-0.92) and SAP (C statistic 0.65, 95% CI 0.55-0.73). PREsTo remained accurate among individuals with a bilirubin below threshold (C statistic 0.90, 95% CI 0.82-0.96) and when the score was re-applied later in the disease course (C statistic 0.82, 95% CI 0.64-0.95). PREsTo accurately predicts hepatic decompensation in PSC and exceeds the performance of other widely available, noninvasive prognostic scoring systems. © 2018 by the American Association for the Study of Liver Diseases.
International Nuclear Information System (INIS)
Kondratyuk, T.P.; Shklyarevich, O.B.
1993-01-01
The results of estimating the impact of the artificial water bodies created in regions of hydroelectric station construction on the micro- and mesoclimatic characteristics of the surrounding territory are given.
Dafonte, C.; Fustes, D.; Manteiga, M.; Garabato, D.; Álvarez, M. A.; Ulla, A.; Allende Prieto, C.
2016-10-01
Aims: We present an innovative artificial neural network (ANN) architecture, called Generative ANN (GANN), that computes the forward model; that is, it learns the function that relates the unknown outputs (stellar atmospheric parameters, in this case) to the given inputs (spectra). Such a model can be integrated in a Bayesian framework to estimate the posterior distribution of the outputs. Methods: The architecture of the GANN follows the same scheme as a normal ANN, but with the inputs and outputs inverted. We train the network with the set of atmospheric parameters (Teff, log g, [Fe/H] and [α/Fe]), obtaining the stellar spectra for such inputs. The residuals between the spectra in the grid and the estimated spectra are minimized using a validation dataset to keep solutions as general as possible. Results: The performance of both conventional ANNs and GANNs in estimating the stellar parameters as a function of star brightness is presented and compared for different Galactic populations. GANNs provide significantly improved parameterizations for early and intermediate spectral types with rich and intermediate metallicities. The behaviour of both algorithms is very similar for our sample of late-type stars, obtaining residuals in the derivation of [Fe/H] and [α/Fe] below 0.1 dex for stars with Gaia magnitude Grvs satellite. Conclusions: Uncertainty estimation of computed astrophysical parameters is crucial for the validation of the parameterization itself and for the subsequent exploitation by the astronomical community. GANNs produce not only the parameters for a given spectrum, but a goodness-of-fit between the observed spectrum and the predicted one for a given set of parameters. Moreover, they allow us to obtain the full posterior distribution over the astrophysical parameter space once a noise model is assumed. This can be used for novelty detection and quality assessment.
Steen Magnussen; Ronald E. McRoberts; Erkki O. Tomppo
2009-01-01
New model-based estimators of the uncertainty of pixel-level and areal k-nearest neighbour (knn) predictions of attribute Y from remotely-sensed ancillary data X are presented. Non-parametric functions predict Y from scalar 'Single Index Model' transformations of X. Variance functions generated...
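A pixel-level knn prediction of an attribute Y from remotely-sensed ancillary data X, the setting these uncertainty estimators address, can be sketched with scikit-learn on simulated data (the covariates and attribute below are invented, not forest inventory data):

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(3)
# Hypothetical ancillary data X (e.g. scaled spectral bands) and attribute Y
X = rng.uniform(0, 1, size=(500, 4))
y = 100 * X[:, 0] + 20 * X[:, 1] + rng.normal(0, 5, 500)

# Fit on 400 "field plot" pixels, predict the remaining 100
knn = KNeighborsRegressor(n_neighbors=5).fit(X[:400], y[:400])
pred = knn.predict(X[400:])

rmse = np.sqrt(np.mean((pred - y[400:]) ** 2))
print(f"pixel-level knn RMSE: {rmse:.1f}")
```

The paper's contribution is model-based variance estimation on top of such predictions, which this sketch does not reproduce.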
Zoellner, Jamie M; Porter, Kathleen J; Chen, Yvonnes; Hedrick, Valisa E; You, Wen; Hickman, Maja; Estabrooks, Paul A
2017-05-01
Guided by the theory of planned behaviour (TPB) and health literacy concepts, SIPsmartER is a six-month multicomponent intervention effective at improving sugar-sweetened beverage (SSB) behaviours. Using SIPsmartER data, this study explores prediction of SSB behavioural intention (BI) and behaviour from TPB constructs using: (1) cross-sectional and prospective models and (2) 11 single-item assessments from interactive voice response (IVR) technology. Quasi-experimental design, including pre- and post-outcome data and repeated-measures process data of 155 intervention participants. Validated multi-item TPB measures, single-item TPB measures, and self-reported SSB behaviours. Hypothesised relationships were investigated using correlation and multiple regression models. TPB constructs explained 32% of the variance in BI cross sectionally and 20% prospectively, and explained 13-20% of the variance in behaviour cross sectionally and 6% prospectively. Single-item scale models were significant, yet explained less variance. All IVR models predicting BI (average 21%, range 6-38%) and behaviour (average 30%, range 6-55%) were significant. Findings are interpreted in the context of other cross-sectional, prospective and experimental TPB health and dietary studies. Findings advance experimental application of the TPB, including understanding constructs at outcome and process time points and applying theory in all intervention development, implementation and evaluation phases.
DEFF Research Database (Denmark)
Andersen, Trine Borup; Jødal, Lars; Bøgsted, Martin
) aged 2-14 years (mean 8.8 years). GFR was 14-147 mL/min/1.73m2 (mean 97 mL/min/1.73m2). BCM was estimated using bioimpedance spectroscopy (Xitron Hydra 4200). Log-transformed data on BCM/CysC, serum creatinine (SCr), body-surface-area (BSA), height x BSA/SCr, serum CysC, weight, sex, age, height, serum....... The present equation also had the highest R2 and the narrowest 95% limits of agreement. CONCLUSION: The new equation predicts GFR with higher accuracy than other equations. Endogenous methods are, however, still not accurate enough to replace exogenous markers when GFR must be determined with high accuracy...
International Nuclear Information System (INIS)
Taleyarkhan, R.; McFarlane, A.F.; Lahey, R.T. Jr.; Podowski, M.Z.
1988-01-01
The work described in this paper is focused on the development, verification and benchmarking of the NUFREQ-NPW code at Westinghouse, USA, for best-estimate prediction of multi-channel core stability margins in US BWRs. The various models incorporated into NUFREQ-NPW are systematically compared against the Westinghouse channel stability analysis code MAZDA, whose mathematical model was developed in an entirely different manner. The NUFREQ-NPW code is extensively benchmarked against experimental stability data with and without nuclear reactivity feedback. Detailed comparisons are next performed against nuclear-coupled core stability data. A physically based algorithm is developed to correct for the effect of flow development on subcooled boiling. Use of this algorithm (to be described in the full paper) captures the peak magnitude as well as the resonance frequency with good accuracy.
Energy Technology Data Exchange (ETDEWEB)
Lee, Jared A.; Hacker, Joshua P.; Delle Monache, Luca; Kosović, Branko; Clifton, Andrew; Vandenberghe, Francois; Rodrigo, Javier Sanz
2016-12-14
A current barrier to greater deployment of offshore wind turbines is the poor quality of numerical weather prediction model wind and turbulence forecasts over open ocean. The bulk of development for atmospheric boundary layer (ABL) parameterization schemes has focused on land, partly due to a scarcity of observations over ocean. The 100-m FINO1 tower in the North Sea is one of the few sources worldwide of atmospheric profile observations from the sea surface to turbine hub height. These observations are crucial to developing a better understanding and modeling of physical processes in the marine ABL. In this study, we use the WRF single column model (SCM), coupled with an ensemble Kalman filter from the Data Assimilation Research Testbed (DART), to create 100-member ensembles at the FINO1 location. The goal of this study is to determine the extent to which model parameter estimation can improve offshore wind forecasts.
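The ensemble Kalman filter update at the heart of the WRF-SCM/DART experiment can be sketched in a few lines of NumPy. This is a generic stochastic (perturbed-observation) analysis step on a toy scalar state with invented numbers, not the DART implementation.

```python
import numpy as np

def enkf_update(ensemble, obs, obs_var, H):
    """One stochastic EnKF analysis step.

    ensemble: (n_members, n_state) prior ensemble
    obs: observed value(s); obs_var: observation error variance
    H: linear observation operator, shape (n_obs, n_state)
    """
    rng = np.random.default_rng(0)
    n, _ = ensemble.shape
    Y = ensemble @ H.T                              # forecast observations
    y_pert = obs + rng.normal(0, np.sqrt(obs_var), size=Y.shape)
    X_a = ensemble - ensemble.mean(0)
    Y_a = Y - Y.mean(0)
    Pxy = X_a.T @ Y_a / (n - 1)                     # state-obs covariance
    Pyy = Y_a.T @ Y_a / (n - 1) + obs_var * np.eye(H.shape[0])
    K = Pxy @ np.linalg.inv(Pyy)                    # Kalman gain
    return ensemble + (y_pert - Y) @ K.T

# Toy example: a 100-member ensemble estimating a scalar wind speed (m/s)
rng = np.random.default_rng(42)
prior = rng.normal(8.0, 2.0, size=(100, 1))  # hypothetical prior ensemble
posterior = enkf_update(prior, obs=10.0, obs_var=0.5, H=np.eye(1))
print(prior.mean(), "->", posterior.mean())  # mean pulled toward the observation
```

In the study, the analogous update adjusts both the SCM state profile and augmented parameter variables against the FINO1 tower observations.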
Directory of Open Access Journals (Sweden)
Enrique Castillo
2015-01-01
Full Text Available A state-of-the-art review of flow observability, estimation, and prediction problems in traffic networks is performed. Since mathematical optimization provides a general framework for all of them, an integrated approach is used to perform the analysis of these problems and consider them as different optimization problems whose data, variables, constraints, and objective functions are the main elements that characterize the problems proposed by different authors. For example, counted, scanned or “a priori” data are the most common data sources; conservation laws, flow nonnegativity, link capacity, flow definition, observation, flow propagation, and specific model requirements form the most common constraints; and least squares, likelihood, possible relative error, mean absolute relative error, and so forth constitute the bases for the objective functions or metrics. The high number of possible combinations of these elements justifies the existence of a wide collection of methods for analyzing static and dynamic situations.
Directory of Open Access Journals (Sweden)
Boulesteix Anne-Laure
2009-12-01
Full Text Available Abstract Background In biometric practice, researchers often apply a large number of different methods in a "trial-and-error" strategy to get as much as possible out of their data and, due to publication pressure or pressure from the consulting customer, present only the most favorable results. This strategy may induce a substantial optimistic bias in prediction error estimation, which is quantitatively assessed in the present manuscript. The focus of our work is on class prediction based on high-dimensional data (e.g. microarray data), since such analyses are particularly exposed to this kind of bias. Methods In our study we consider a total of 124 variants of classifiers (possibly including variable selection or tuning steps) within a cross-validation evaluation scheme. The classifiers are applied to original and modified real microarray data sets, some of which are obtained by randomly permuting the class labels to mimic non-informative predictors while preserving their correlation structure. Results We assess the minimal misclassification rate over the different variants of classifiers in order to quantify the bias arising when the optimal classifier is selected a posteriori in a data-driven manner. The bias resulting from the parameter tuning (including gene selection parameters as a special case) and the bias resulting from the choice of the classification method are examined both separately and jointly. Conclusions The median minimal error rate over the investigated classifiers was as low as 31% and 41% based on permuted uninformative predictors from studies on colon cancer and prostate cancer, respectively. We conclude that the strategy to present only the optimal result is not acceptable because it yields a substantial bias in error rate estimation, and suggest alternative approaches for properly reporting classification accuracy.
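The core effect can be reproduced with a few lines of simulation: on labels that carry no signal (expected misclassification 50%), the minimum error over many candidate classifiers is systematically below 50%. This is a minimal illustrative sketch with random "classifiers", not the paper's 124-variant protocol:

```python
import random

def min_error_over_classifiers(n_samples=50, n_classifiers=100, seed=0):
    """Monte-Carlo sketch of optimistic selection bias: report the best of
    many classifiers evaluated on labels that are pure noise."""
    rng = random.Random(seed)
    labels = [rng.randint(0, 1) for _ in range(n_samples)]
    best = 1.0
    for _ in range(n_classifiers):
        # An uninformative classifier: coin-flip predictions
        preds = [rng.randint(0, 1) for _ in range(n_samples)]
        err = sum(p != y for p, y in zip(preds, labels)) / n_samples
        best = min(best, err)
    return best

best_err = min_error_over_classifiers()
```

Each individual classifier has expected error 0.5, yet the minimum over 100 of them lands well below that, which is precisely the bias the authors quantify.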
Paul, Suman; Ali, Muhammad; Chatterjee, Rima
2018-01-01
Velocity of the compressional wave (Vp) of coal and non-coal lithology is predicted from five wells from the Bokaro coalfield (CF), India. Shear sonic travel time logs were not recorded for all wells in the study area. Shear wave velocity (Vs) is available only for two wells: one from east and the other from west Bokaro CF. The major lithologies of this CF are dominated by coal and shaly coal of the Barakar formation. This paper focuses on (a) the relationship between Vp and Vs, (b) prediction of Vp using regression and neural network modeling and (c) estimation of maximum horizontal stress from image log. Coal is characterized by low acoustic impedance (AI) compared to the overlying and underlying strata. The cross-plot between AI and Vp/Vs is able to identify coal, shaly coal, shale and sandstone from wells in Bokaro CF. The relationship between Vp and Vs is obtained with excellent goodness of fit (R²) ranging from 0.90 to 0.93. Linear multiple regression and multi-layered feed-forward neural network (MLFN) models are developed for prediction of Vp from two wells using four input log parameters: gamma ray, resistivity, bulk density and neutron porosity. The regression model predicted Vp shows poor fit (R² = 0.28) to good fit (R² = 0.79) with the observed velocity. The MLFN model predicted Vp indicates satisfactory to good R² values varying from 0.62 to 0.92 with the observed velocity. Maximum horizontal stress orientation at a well in west Bokaro CF is studied from the Formation Micro-Imager (FMI) log. Breakouts and drilling-induced fractures (DIFs) are identified from the FMI log. A breakout length of 4.5 m is oriented towards N60°W, whereas the orientation of DIFs for a cumulative length of 26.5 m varies from N15°E to N35°E. The mean maximum horizontal stress in this CF is towards N28°E.
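A minimal version of the Vp-Vs regression step, with ordinary least squares and the R² goodness-of-fit measure quoted in the text, can be sketched as follows. The sonic-log pairs are invented for illustration, not taken from the Bokaro wells:

```python
def fit_line(xs, ys):
    """Ordinary least squares fit y = a + b*x, e.g. Vp as a function of Vs,
    returning the intercept, slope and R^2 goodness of fit."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return a, b, 1 - ss_res / ss_tot

# Hypothetical Vs/Vp pairs in km/s (illustrative only)
vs = [1.8, 2.0, 2.2, 2.5, 2.7]
vp = [3.2, 3.5, 3.9, 4.4, 4.7]
a, b, r2 = fit_line(vs, vp)
```

The paper's multiple-regression and MLFN models extend this to four input logs, but the fitting and the R² diagnostic work the same way.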
Roland, Nathalie; Mierop, Adrien; Frenay, Mariane; Corneille, Olivier
2018-01-01
Ajzen and Dasgupta (2015) recently invited complementing Theory of Planned Behavior (TPB) measures with measures borrowed from implicit cognition research. In this study, we examined for the first time such combination, and we did so to predict academic persistence. Specifically, 169 first-year college students answered a TPB questionnaire and…
Bueno, Marta; Camacho, Carlos J; Sancho, Javier
2007-09-01
The bioinformatics revolution of the last decade has been instrumental in the development of empirical potentials to quantitatively estimate protein interactions for modeling and design. Although computationally efficient, these potentials hide most of the relevant thermodynamics in 5 to 40 parameters that are fitted against a large experimental database. Here, we revisit this longstanding problem and show that a careful consideration of the change in hydrophobicity, electrostatics, and configurational entropy between the folded and unfolded state of aliphatic point mutations predicts 20-30% fewer false positives and yields more accurate predictions than any published empirical energy function. This significant improvement is achieved with essentially no free parameters, validating past theoretical and experimental efforts to understand the thermodynamics of protein folding. Our first-principles analysis strongly suggests that both the solute-solute van der Waals interactions in the folded state and the electrostatics free energy change of exposed aliphatic mutations are almost completely compensated by similar interactions operating in the unfolded ensemble. Not surprisingly, the problem of properly accounting for the solvent contribution to the free energy of polar and charged group mutations, as well as of mutations that disrupt the protein backbone, remains open. © 2007 Wiley-Liss, Inc.
Directory of Open Access Journals (Sweden)
Muhammad Z. A. Durrani
2014-01-01
Full Text Available Due to its complex nature, deriving elastic properties from seismic data for the prolific Granite Wash reservoir (Pennsylvanian age) in the western Anadarko Basin, Wheeler County (Texas), is quite a challenge. In this paper, we used a rock physics tool to describe the diagenesis and accurately estimate the seismic velocities of P and S waves in the Granite Wash reservoir. Hertz-Mindlin and cementation (Dvorkin's) theories are applied to analyze the nature of the reservoir rocks (uncemented and cemented). In the implementation of rock physics diagnostics, three classical rock physics models (empirical relations, Kuster-Toksöz, and Berryman) are comparatively analyzed for velocity prediction, taking into account the pore shape geometry. An empirical Vp-Vs relationship, calibrated with core data, is also generated for shear wave velocity prediction. Finally, we discuss the advantages of each rock physics model in detail. In addition, cross-plots of unconventional attributes help us in the clear separation of the anomalous zone and lithologic properties of sand and shale facies over conventional attributes.
International Nuclear Information System (INIS)
Agostinetti, P.; Palma, M. Dalla; Fantini, F.; Fellin, F.; Pasqualotto, R.
2011-01-01
The analytical interpretative models for calorimetric measurements currently available in the literature can consider closed systems in steady-state and transient conditions, or open systems but only in steady-state conditions. The PCCE code (Predictive Code for Calorimetric Estimations), presented here, introduces some novelties. In fact, it can simulate with an analytical approach both the heated component and the cooling circuit, evaluating the heat fluxes due to conductive and convective processes in both steady-state and transient conditions. The main goal of this code is to model heating and cooling processes in actively cooled components of fusion experiments affected by high pulsed power loads, which are not easily analyzed with purely numerical approaches (like the Finite Element Method or Computational Fluid Dynamics). A dedicated mathematical formulation, based on concentrated parameters, has been developed and is described here in detail. After a comparison and benchmark with the ANSYS commercial code, the PCCE code is applied to predict the calorimetric parameters in simple scenarios of the SPIDER experiment.
You, Jongmin; Jeong, Jechang
2010-02-01
The H.264/AVC (advanced video coding) standard is used in a wide variety of applications, including digital broadcasting and mobile applications, because of its high compression efficiency. The variable block mode scheme in H.264/AVC contributes much to this efficiency but causes a selection problem. In general, rate-distortion optimization (RDO) is the optimal mode selection strategy, but it is computationally intensive. For this reason, the H.264/AVC encoder requires a fast mode selection algorithm for use in applications that require low-power and real-time processing. A probable mode prediction algorithm for the H.264/AVC encoder is proposed. To reduce the computational complexity of RDO, the proposed method selects probable modes among all allowed block modes using removable SKIP mode distortion estimation. Removable SKIP mode distortion is used to estimate whether or not a further divided block mode is appropriate for a macroblock. It is calculated using a no-motion reference block with a few computations. Then the proposed method reduces complexity by performing the RDO process only for probable modes. Experimental results show that the proposed algorithm can reduce encoding time by an average of 55.22% without significant visual quality degradation or increased bit rate.
McCoy, Dana Charles; Peet, Evan D; Ezzati, Majid; Danaei, Goodarz; Black, Maureen M; Sudfeld, Christopher R; Fawzi, Wafaie; Fink, Günther
2016-06-01
The development of cognitive and socioemotional skills early in life influences later health and well-being. Existing estimates of unmet developmental potential in low- and middle-income countries (LMICs) are based on either measures of physical growth or proxy measures such as poverty. In this paper we aim to directly estimate the number of children in LMICs who would be reported by their caregivers to show low cognitive and/or socioemotional development. The present paper uses Early Childhood Development Index (ECDI) data collected between 2005 and 2015 from 99,222 3- and 4-y-old children living in 35 LMICs as part of the Multiple Indicator Cluster Survey (MICS) and Demographic and Health Surveys (DHS) programs. First, we estimate the prevalence of low cognitive and/or socioemotional ECDI scores within our MICS/DHS sample. Next, we test a series of ordinary least squares regression models predicting low ECDI scores across our MICS/DHS sample countries based on country-level data from the Human Development Index (HDI) and the Nutrition Impact Model Study. We use cross-validation to select the model with the best predictive validity. We then apply this model to all LMICs to generate country-level estimates of the prevalence of low ECDI scores globally, as well as confidence intervals around these estimates. In the pooled MICS and DHS sample, 14.6% of children had low ECDI scores in the cognitive domain, 26.2% had low socioemotional scores, and 36.8% performed poorly in either or both domains. Country-level prevalence of low cognitive and/or socioemotional scores on the ECDI was best represented by a model using the HDI as a predictor. Applying this model to all LMICs, we estimate that 80.8 million children ages 3 and 4 y (95% CI 48.1 million, 113.6 million) in LMICs experienced low cognitive and/or socioemotional development in 2010, with the largest number of affected children in sub-Saharan Africa (29.4 million; 43.8% of children ages 3 and 4 y), followed by
Bundschuh, Mirco; Newman, Michael C; Zubrod, Jochen P; Seitz, Frank; Rosenfeldt, Ricki R; Schulz, Ralf
2015-03-01
We argued recently that the positive predictive value (PPV) and the negative predictive value (NPV) are valuable metrics to include during null hypothesis significance testing: they inform the researcher about the probability of statistically significant and non-significant test outcomes actually being true. A reported p value, although commonly misunderstood, estimates only the probability of obtaining the results, or more extreme results, if the null hypothesis of no effect were true. Calculations of the more informative PPV and NPV require an a priori estimate of the probability that an effect is present (R). The present document discusses challenges of estimating R.
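Under the standard framing (significance level alpha, power 1 − beta, and prior probability R that an effect is present), the PPV and NPV follow directly from Bayes' rule. A small sketch, with alpha and power values chosen for illustration:

```python
def ppv_npv(r, alpha=0.05, power=0.8):
    """Positive and negative predictive values of a significance test.

    r     : a priori probability that the effect is real (the 'R' of the text)
    alpha : type I error rate
    power : 1 - type II error rate
    """
    ppv = power * r / (power * r + alpha * (1 - r))
    npv = ((1 - alpha) * (1 - r)
           / ((1 - alpha) * (1 - r) + (1 - power) * r))
    return ppv, npv

# With only a 10% prior chance of a real effect, a significant result
# is true in roughly two out of three cases
ppv, npv = ppv_npv(r=0.1)
```

This makes the authors' point concrete: the informativeness of a significant result hinges on R, which is exactly the quantity that is hard to estimate.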
Wirth, Christian; Schumacher, Jens; Schulze, Ernst-Detlef
2004-02-01
To facilitate future carbon and nutrient inventories, we used mixed-effect linear models to develop new generic biomass functions for Norway spruce (Picea abies (L.) Karst.) in Central Europe. We present both the functions and their respective variance-covariance matrices and illustrate their application for biomass prediction and uncertainty estimation for Norway spruce trees ranging widely in size, age, competitive status and site. We collected biomass data for 688 trees sampled in 102 stands by 19 authors. The total number of trees in the "base" model data sets containing the predictor variables diameter at breast height (D), height (H), age (A), site index (SI) and site elevation (HSL) varied according to compartment (roots: n = 114, stem: n = 235, dry branches: n = 207, live branches: n = 429 and needles: n = 551). "Core" data sets with about 40% fewer trees could be extracted containing the additional predictor variables crown length and social class. A set of 43 candidate models representing combinations of lnD, lnH, lnA, SI and HSL, including second-order polynomials and interactions, was established. The categorical variable "author" subsuming mainly methodological differences was included as a random effect in a mixed linear model. The Akaike Information Criterion was used for model selection. The best models for stem, root and branch biomass contained only combinations of D, H and A as predictors. More complex models that included site-related variables resulted for needle biomass. Adding crown length as a predictor for needles, branches and roots reduced both the bias and the confidence interval of predictions substantially. Applying the best models to a test data set of 17 stands ranging in age from 16 to 172 years produced realistic allocation patterns at the tree and stand levels. The 95% confidence intervals (% of mean prediction) were highest for crown compartments (approximately +/- 12%) and lowest for stem biomass (approximately +/- 5%), and
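The back-transformation of such log-log biomass models is easy to get wrong, so a sketch may help. The coefficients and residual standard deviation below are illustrative placeholders, not the fitted values from the paper:

```python
import math

def stem_biomass_kg(d_cm, h_m, b0=-3.8, b1=2.1, b2=0.9, sigma=0.2):
    """Log-log allometric prediction ln(B) = b0 + b1*ln(D) + b2*ln(H).

    D = diameter at breast height (cm), H = tree height (m).  The
    coefficients are hypothetical placeholders.  The exp(sigma^2/2)
    factor corrects the back-transformation bias of a log-linear model
    with residual standard deviation sigma.
    """
    ln_b = b0 + b1 * math.log(d_cm) + b2 * math.log(h_m)
    return math.exp(ln_b) * math.exp(sigma ** 2 / 2)

biomass = stem_biomass_kg(d_cm=30, h_m=25)
```

The mixed-model machinery of the paper adds random author effects and a variance-covariance matrix for uncertainty estimation on top of this basic prediction step.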
Directory of Open Access Journals (Sweden)
Mohammad Masum Alam
2016-07-01
Full Text Available Background: Glomerular filtration rate is an effective tool for diagnosis and staging of chronic kidney disease. The performance of the different estimation methods among patients with CKD is controversial. Objective: The objective of this study was to evaluate the performance of eGFR in staging of CKD compared to gamma camera based GFR. Methods: This cross-sectional analytical study was conducted in the Department of Biochemistry, Bangabandhu Sheikh Mujib Medical University (BSMMU), in collaboration with the National Institute of Nuclear Medicine and Allied Sciences, BSMMU, during the period of January 2011 to December 2012. Gamma camera based GFR was estimated from the DTPA renogram, and eGFR was estimated by three prediction equations. Comparison was done by Bland-Altman agreement test to assess the agreement on the measurement of GFR between the three equation-based eGFR methods and the gamma camera based GFR method. Staging comparison was done by kappa analysis to assess the agreement between the stages identified by those different methods. Results: Bland-Altman agreement analysis between GFR measured by gamma camera, the CG equation, the CG equation corrected by BSA, and the MDRD equation showed statistically significant agreement. CKD stages determined by CG GFR, CG GFR corrected by BSA, and MDRD GFR were compared with gamma camera based GFR by kappa statistical analysis. The kappa values were 0.66, 0.77 and 0.79, respectively. Conclusions: These findings suggest that GFR estimation by the MDRD equation in CKD patients shows good agreement with gamma camera based GFR, and for staging of CKD patients, eGFR by the MDRD formula may be used as a very effective tool in the Bangladeshi population.
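For reference, a commonly cited IDMS-traceable form of the four-variable MDRD study equation (one of the prediction equations of this kind; the abstract does not state which version the study used) can be written as a short function. The example patient values are hypothetical:

```python
def egfr_mdrd(scr_mg_dl, age, female=False, black=False):
    """Four-variable MDRD study equation (IDMS-traceable form).

    eGFR (mL/min/1.73 m^2) = 175 * Scr^-1.154 * age^-0.203
                             * 0.742 if female, * 1.212 if black
    scr_mg_dl : serum creatinine in mg/dL
    """
    egfr = 175.0 * scr_mg_dl ** -1.154 * age ** -0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr

# Hypothetical patient: 50-year-old woman, serum creatinine 1.2 mg/dL
gfr = egfr_mdrd(scr_mg_dl=1.2, age=50, female=True)
```

Equation-based estimates like this are what the study benchmarks against the DTPA renogram measurement.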
DEFF Research Database (Denmark)
Belter, Klaus; Engsted, Tom; Tanggaard, Carsten
2005-01-01
We present a new dividend-adjusted blue chip index for the Danish stock market covering the period 1985-2002. In contrast to other indices on the Danish stock market, the index is calculated on a daily basis. In the first part of the paper a detailed description of the construction of the index is given. In the second part of the paper we analyze the time-series properties of daily, weekly, and monthly returns, and we present evidence on predictability of multi-period returns. We also compare stock returns with the returns on long-term bonds and short-term money market instruments (that is...
Shira Barzilay; Zimri S. Yaseen; Zimri S. Yaseen; Mariah Hawes; Bernard Gorman; Rachel Altman; Adriana Foster; Alan Apter; Paul Rosenfield; Igor Galynker; Igor Galynker
2018-01-01
BackgroundMental health professionals have a pivotal role in suicide prevention. However, they also often have intense emotional responses, or countertransference, during encounters with suicidal patients. Previous studies of the Therapist Response Questionnaire-Suicide Form (TRQ-SF), a brief novel measure aimed at probing a distinct set of suicide-related emotional responses to patients found it to be predictive of near-term suicidal behavior among high suicide-risk inpatients. The purpose o...
Directory of Open Access Journals (Sweden)
Eleonora Carletti
2016-11-01
Full Text Available It is well-known that the reduction of noise levels is not strictly linked to the reduction of noise annoyance. Even earthmoving machine manufacturers are facing the problem of customer complaints concerning the noise quality of their machines with increasing frequency. Unfortunately, all the studies geared to the understanding of the relationship between multidimensional characteristics of noise signals and the auditory perception of annoyance require repeated sessions of jury listening tests, which are time-consuming. In this respect, an annoyance prediction model was developed for compact loaders to assess the annoyance sensation perceived by operators at their workplaces without repeating the full sound quality assessment but using objective parameters only. This paper aims at verifying the feasibility of the developed annoyance prediction model when applied to other kinds of earthmoving machines. For this purpose, an experimental investigation was performed on five earthmoving machines, different in type, dimension, and engine mechanical power, and the annoyance predicted by the numerical model was compared to the annoyance given by subjective listening tests. The results were evaluated by means of the squared value of the correlation coefficient, R2, and they confirm the possible applicability of the model to other kinds of machines.
Directory of Open Access Journals (Sweden)
Sadegh Maleki
2013-11-01
Full Text Available The goal of this study was to present regression models for predicting the withdrawal resistance of joints made with screws and plywood members. Joint members were made of hardwood plywood 19 mm in thickness. Two types of screws were used: coarse- and fine-thread drywall screws 3.5, 4 and 5 mm in diameter, and sheet metal screws 4 and 5 mm in diameter. Results showed that screw withdrawal resistance increased with increasing screw diameter and penetration depth. Withdrawal resistances of joints fabricated with coarse thread drywall screws were higher than those of fine thread drywall screws. Finally, average joint withdrawal resistance could be predicted by means of the expressions Wc=2.127×D1.072×P0.520 for coarse thread drywall screws and Wf=1.377×D1.156×P0.581 for fine thread drywall screws, taking into account the diameter and penetration depth. The difference between the observed and predicted data showed that the developed models have a good correlation with actual experimental measurements.
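The fitted power models quoted above can be applied directly; a sketch follows, with D and P in mm and the resistance in the (unstated) units of the original models:

```python
def withdrawal_resistance(d_mm, p_mm, coarse=True):
    """Withdrawal resistance of a screwed plywood joint from the fitted
    power models reported in the text:

        Wc = 2.127 * D**1.072 * P**0.520   (coarse thread drywall screw)
        Wf = 1.377 * D**1.156 * P**0.581   (fine thread drywall screw)

    D = screw diameter (mm), P = penetration depth (mm).
    """
    if coarse:
        return 2.127 * d_mm ** 1.072 * p_mm ** 0.520
    return 1.377 * d_mm ** 1.156 * p_mm ** 0.581

# Illustrative case: 4 mm screw, 15 mm penetration depth
wc = withdrawal_resistance(4.0, 15.0, coarse=True)
wf = withdrawal_resistance(4.0, 15.0, coarse=False)
```

Consistent with the study's findings, the coarse-thread prediction exceeds the fine-thread one for the same geometry, and both grow with diameter and penetration depth.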
Miranda, J; Rodriguez-Lopez, M; Triunfo, S; Sairanen, M; Kouru, H; Parra-Saavedra, M; Crovetto, F; Figueras, F; Crispi, F; Gratacós, E
2017-11-01
To compare the performance of third-trimester screening, based on estimated fetal weight centile (EFWc) vs a combined model including maternal baseline characteristics, fetoplacental ultrasound and maternal biochemical markers, for the prediction of small-for-gestational-age (SGA) neonates and late-onset fetal growth restriction (FGR). This was a nested case-control study within a prospective cohort of 1590 singleton gestations undergoing third-trimester (32 + 0 to 36 + 6 weeks' gestation) evaluation. Maternal baseline characteristics, mean arterial pressure, fetoplacental ultrasound and circulating biochemical markers (placental growth factor (PlGF), lipocalin-2, unconjugated estriol and inhibin A) were assessed in all women who subsequently delivered a SGA neonate (n = 175), defined as birth weight < 10th centile according to customized standards, and in a control group (n = 875). Among SGA cases, those with birth weight < 3rd centile and/or abnormal uterine artery pulsatility index (UtA-PI) and/or abnormal cerebroplacental ratio (CPR) were classified as FGR. Logistic regression predictive models were developed for SGA and FGR, and their performance was compared with that obtained using EFWc alone. In SGA cases, EFWc, CPR Z-score and maternal serum concentrations of unconjugated estriol and PlGF were significantly lower, while mean UtA-PI Z-score and lipocalin-2 and inhibin A concentrations were significantly higher, compared with controls. Using EFWc alone, 52% (area under receiver-operating characteristics curve (AUC), 0.82 (95% CI, 0.77-0.85)) of SGA and 64% (AUC, 0.86 (95% CI, 0.81-0.91)) of FGR cases were predicted at a 10% false-positive rate. A combined screening model including a priori risk (maternal characteristics), EFWc, UtA-PI, PlGF and estriol (with lipocalin-2 for SGA) achieved a detection rate of 61% (AUC, 0.86 (95% CI, 0.83-0.89)) for SGA cases and 77% (AUC, 0.92 (95% CI, 0.88-0.95)) for FGR. The combined model for the
Lamart, Stephanie; Imran, Rebecca; Simon, Steven L.; Doi, Kazutaka; Morton, Lindsay M.; Curtis, Rochelle E.; Lee, Choonik; Drozdovitch, Vladimir; Maass-Moreno, Roberto; Chen, Clara C.; Whatley, Millie; Miller, Donald L.; Pacak, Karel; Lee, Choonsik
2013-12-01
Following cancer radiotherapy, reconstruction of doses to organs, other than the target organ, is of interest for retrospective health risk studies. Reliable estimation of doses to organs that may be partially within or fully outside the treatment field requires reliable knowledge of the location and size of the organs, e.g., the stomach, which is at risk from abdominal irradiation. The stomach location and size are known to be highly variable between individuals, but have been little studied. Moreover, for treatments conducted years ago, medical images of patients are usually not available in medical records to locate the stomach. In light of the poor information available to locate the stomach in historical dose reconstructions, the purpose of this work was to investigate the variability of stomach location and size among adult male patients and to develop prediction models for the stomach location and size using predictor variables generally available in medical records of radiotherapy patients treated in the past. To collect data on stomach size and position, we segmented the contours of the stomach and of the skeleton on contemporary computed tomography (CT) images for 30 male patients in supine position. The location and size of the stomach was found to depend on body mass index (BMI), ponderal index (PI), and age. For example, the anteroposterior dimension of the stomach was found to increase with increasing BMI (≈0.25 cm kg⁻¹ m²) whereas its craniocaudal dimension decreased with increasing PI (≈-3.3 cm kg⁻¹ m³) and its transverse dimension increased with increasing PI (≈2.5 cm kg⁻¹ m³). Using the prediction models, we generated three-dimensional computational stomach models from a deformable hybrid phantom for three patients of different BMI. Based on a typical radiotherapy treatment, we simulated radiotherapy treatments on the predicted stomach models and on the CT images of the corresponding patients. Those dose calculations demonstrated good
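The quoted regression slopes can be turned into a simple linear predictor for each stomach dimension. The intercepts below are purely hypothetical placeholders (the abstract does not report the fitted intercepts), so only the direction and magnitude of the slope effects are meaningful:

```python
def stomach_dimensions_cm(bmi, pi, ap0=2.0, cc0=65.0, tr0=-15.0):
    """Linear predictions of stomach size from the slopes quoted in the text:
    anteroposterior: +0.25 cm per BMI unit (kg/m^2),
    craniocaudal:    -3.3 cm per PI unit (kg/m^3),
    transverse:      +2.5 cm per PI unit (kg/m^3).

    ap0, cc0, tr0 are illustrative intercepts, NOT values from the study.
    """
    anteroposterior = ap0 + 0.25 * bmi
    craniocaudal = cc0 - 3.3 * pi
    transverse = tr0 + 2.5 * pi
    return anteroposterior, craniocaudal, transverse

# Hypothetical patient: BMI 25 kg/m^2, ponderal index 13 kg/m^3
ap, cc, tr = stomach_dimensions_cm(bmi=25, pi=13)
```

In the study, predictions of this form drive the deformation of the hybrid phantom used for the dose calculations.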
International Nuclear Information System (INIS)
Lee, Jang-Soo; Cho, Sung-Jin; Jung, Hae-Young; Lee, Ki-Bae; Seo, Yong-Chil
2010-01-01
In 2007 the total waste generation rate in Korea was 318,670 ton/day, and in general the waste generation rate has shown a rising trend since 2000. Wastes are composed of municipal waste (14.9%), industrial waste (34.1%) and construction waste (51.0%). Treatment of wastes was 81.1% recycling, 11.1% landfill, 5.3% incineration and 2.4% ocean dumping. National waste energy policies have been influenced by various factors such as environmental problems, the economy, and the technology level (what could be converted to energy). Korea has the world's third highest population density, and its environmental pollution load per unit land area is the highest among OECD countries, owing to the rapid economic development, industrialization and urbanization of recent years. Also, land area per person is just 2,072 m². Landfill capacity is reaching its upper limit, and industrial waste generation is increasing. The search for new and renewable energy is vital to substitute for fossil fuels, considering their increasing prices. Korea is the world's 10th biggest energy-consuming country and imports 97% of its energy. Korea aims to increase the supply of new and renewable energy to 5% by 2011. In this study, we computed the amount of combustible municipal waste by multiple regression analysis. The existing technologies for converting waste to energy were surveyed, and the technologies under development or likely to be utilized in the future were also investigated. Based on technology utilization, the amount of energy obtainable using waste-to-energy technology could be estimated for the future. (author)
O'Connor, Maebh; Dooley, Barbara; Fitzgerald, Amanda
2015-01-01
Suicide is a key concern among young adults. The aim of the study was to (1) construct a suicide risk index (SRI) based on demographic, situational, and behavioral factors known to be linked to suicidal behavior and (2) investigate whether the association between the SRI and suicidal behavior was mediated by proximal processes (personal factors, coping strategies, and emotional states). Participants consisted of 7,558 individuals aged 17-25 years (M = 20.35, SD = 1.91). Nearly 22% (n = 1,542) reported self-harm and 7% (n = 499) had attempted suicide. Mediation analysis revealed both a direct effect (β = .299, 95% CI = [.281, .317], p < .001) and an indirect effect of the SRI on suicidal behavior. The strongest mediators were levels of self-esteem, depression, and avoidant coping. Interventions to increase self-esteem, reduce depression, and encourage adaptive coping strategies may prevent suicidal behavior in young people.
Villalobos Pinto, Enrique; Sánchez-Bayle, Marciano
2017-12-01
Fever is a common cause of paediatric admissions in emergency departments. An aetiological diagnosis is difficult to obtain in those less than 3 months of age, as they tend to have a higher rate of serious bacterial infection (SBI). The aim of this study is to find a predictor index of SBI in children under 3 months old with fever of unknown origin. A study was conducted on all children under 3 months of age with fever admitted to hospital, with additional tests being performed according to the clinical protocol. Rochester criteria for identifying febrile infants at low risk for SBI were also analysed. A predictive model for SBI and positive cultures was designed, including the following variables in the maximum model: C-reactive protein (CRP), procalcitonin (PCT), and meeting not less than four of the Rochester criteria. A total of 702 subjects were included, of which 22.64% had an SBI and 20.65% had positive cultures. Children who had an SBI and a positive culture showed higher values of white cells, total neutrophils, CRP and PCT. Statistical significance was observed for meeting fewer than 4 Rochester criteria and for CRP and PCT levels, both for an SBI (area under the curve [AUC] 0.877) and for positive cultures (AUC 0.888). Using regression analysis, a predictive index was calculated for SBI or a positive culture, with a sensitivity of 87.7 and 91%, a specificity of 70.1 and 87.7%, an LR+ of 2.93 and 3.62, and an LR- of 0.17 and 0.10, respectively. The predictive models are valid and slightly improve the validity of the Rochester criteria for positive culture in children less than 3 months admitted with fever. Copyright © 2016 Asociación Española de Pediatría. Publicado por Elsevier España, S.L.U. All rights reserved.
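The reported likelihood ratios follow from the standard definitions; the sketch below reproduces the first pair of figures (sensitivity 87.7%, specificity 70.1%) from the text:

```python
def likelihood_ratios(sensitivity, specificity):
    """Positive and negative likelihood ratios of a diagnostic index:
    LR+ = sensitivity / (1 - specificity)
    LR- = (1 - sensitivity) / specificity
    """
    lr_pos = sensitivity / (1 - specificity)
    lr_neg = (1 - sensitivity) / specificity
    return lr_pos, lr_neg

# Figures for the SBI index as reported in the text
lr_pos, lr_neg = likelihood_ratios(0.877, 0.701)
```

This gives LR+ ≈ 2.93, matching the reported value, and LR− ≈ 0.175, consistent with the reported 0.17.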
Javanbakht, Mehdi; Jamshidi, Ahmad Reza; Baradaran, Hamid Reza; Mohammadi, Zahra; Mashayekhi, Atefeh; Shokraneh, Farhad; Rezai Hamami, Mohsen; Yazdani Bakhsh, Raziyeh; Shabaninejad, Hossien; Delavari, Sajad; Tehrani, Arash
2018-05-01
Recent evidence from prospective cohort studies shows a relationship between consumption of dairy foods and cardiovascular diseases (CVDs) and type 2 diabetes mellitus (T2DM). This association highlights the importance of dairy foods consumption in the prevention of these diseases and also the reduction of associated healthcare costs. The aim of this study was to estimate the avoidable healthcare costs of CVD and T2DM through adequate dairy foods consumption in Iran. This was a multistage modelling study. We conducted a systematic literature review in PubMed and EMBASE to identify any association between incidence of CVD and T2DM and dairy foods intake, and also the associated relative risks. We obtained age- and sex-specific dairy foods consumption levels and healthcare expenditures from national surveys and studies. Patient-level simulation Markov models were constructed to predict the disease incidence, patient population size and associated healthcare costs for current and optimal dairy foods consumption at different time horizons (1, 5, 10 and 20 years). All parameters including costs and transition probabilities were defined as statistical distributions in the models, and all analyses were conducted by accounting for first- and second-order uncertainty. The systematic review results indicated that dairy foods consumption was inversely associated with incidence of T2DM, coronary heart disease (CHD) and stroke. We estimated that the introduction of a diet containing 3 servings of dairy foods per day may produce a $0.43 saving in annual per capita healthcare costs in Iran in the first year due to savings in the cost of CVD and T2DM treatment. The estimated savings in per capita healthcare costs were $8.42, $39.97 and $190.25 in 5, 10 and 20 years' time, respectively. Corresponding total aggregated avoidable costs for the entire Iranian population within the study time horizons were $33.83, $661.31, $3,138.21 and $14,934.63 million, respectively. Our analysis demonstrated that increasing
International Nuclear Information System (INIS)
Deng, Zhongwei; Yang, Lin; Cai, Yishan; Deng, Hao; Sun, Liu
2016-01-01
The key technology of a battery management system is to estimate the battery states online, accurately and robustly. For the lithium iron phosphate battery, the relationship between state of charge and open circuit voltage has a plateau region which limits the estimation accuracy of voltage-based algorithms. The open circuit voltage hysteresis requires advanced online identification algorithms to cope with the strongly nonlinear battery model. The available capacity, as a crucial parameter, contributes to the state of charge and state of health estimation of the battery, but it is difficult to predict because it is jointly influenced by temperature, aging and current rate. To address these problems, ampere-hour counting with current correction and dual adaptive extended Kalman filter algorithms are combined to estimate model parameters and state of charge. This combination offers the advantages of a lower computational burden and greater robustness. Considering the influence of temperature and degradation, a data-driven algorithm, namely least squares support vector machine, is implemented to predict the available capacity. The state estimation and capacity prediction methods are coupled to improve the estimation accuracy at different temperatures over the lifetime of the battery. The experimental results verify that the proposed methods have excellent state and available capacity estimation accuracy. - Highlights: • A dual adaptive extended Kalman filter is used to estimate parameters and states. • A correction term is introduced to consider the effect of current rates. • The least squares support vector machine is used to predict the available capacity. • The experimental results verify the proposed state and capacity prediction methods.
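The ampere-hour counting baseline with a current correction can be sketched as below. This is a generic illustration, not the paper's formulation: the Peukert-style efficiency factor and all parameter values are assumed, and the dual adaptive extended Kalman filter coupling is omitted.

```python
def soc_ah_counting(soc0, currents_a, dt_s, capacity_ah, eta_0=0.99, k=0.002):
    """Integrate current over time to track state of charge (discharge > 0).

    The correction term reduces coulombic efficiency as the C-rate rises,
    approximating the rate effect the paper's correction addresses.
    """
    soc = soc0
    for i in currents_a:
        c_rate = abs(i) / capacity_ah        # current rate relative to capacity
        eta = eta_0 - k * c_rate             # assumed efficiency model
        soc -= eta * i * dt_s / (3600.0 * capacity_ah)
    return soc

# One hour of 2 A discharge on a 10 Ah cell starting from 90% SOC.
soc = soc_ah_counting(0.9, [2.0] * 3600, 1.0, 10.0)
```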
Tice, Jeffrey A.; Cummings, Steven R.; Smith-Bindman, Rebecca; Ichikawa, Laura; Barlow, William E.; Kerlikowske, Karla
2009-01-01
Background Current models for assessing breast cancer risk are complex and do not include breast density, a strong risk factor for breast cancer that is routinely reported with mammography. Objective To develop and validate an easy-to-use breast cancer risk prediction model that includes breast density. Design Empirical model based on Surveillance, Epidemiology, and End Results incidence, and relative hazards from a prospective cohort. Setting Screening mammography sites participating in the Breast Cancer Surveillance Consortium. Patients 1 095 484 women undergoing mammography who had no previous diagnosis of breast cancer. Measurements Self-reported age, race or ethnicity, family history of breast cancer, and history of breast biopsy. Community radiologists rated breast density by using 4 Breast Imaging Reporting and Data System categories. Results During 5.3 years of follow-up, invasive breast cancer was diagnosed in 14 766 women. The breast density model was well calibrated overall (expected–observed ratio, 1.03 [95% CI, 0.99 to 1.06]) and in racial and ethnic subgroups. It had modest discriminatory accuracy (concordance index, 0.66 [CI, 0.65 to 0.67]). Women with low-density mammograms had 5-year risks less than 1.67% unless they had a family history of breast cancer and were older than age 65 years. Limitation The model has only modest ability to discriminate between women who will develop breast cancer and those who will not. Conclusion A breast cancer prediction model that incorporates routinely reported measures of breast density can estimate 5-year risk for invasive breast cancer. Its accuracy needs to be further evaluated in independent populations before it can be recommended for clinical use. PMID:18316752
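The concordance index reported above (0.66) measures discrimination as the fraction of case/control pairs in which the case received the higher predicted risk, with ties counting as half. A minimal sketch on invented data:

```python
from itertools import combinations

def concordance_index(risks, outcomes):
    """risks: predicted 5-year risks; outcomes: 1 = developed cancer, 0 = did not."""
    concordant = ties = pairs = 0
    for (r1, y1), (r2, y2) in combinations(zip(risks, outcomes), 2):
        if y1 == y2:
            continue  # only case/control pairs are informative
        pairs += 1
        case_risk, control_risk = (r1, r2) if y1 == 1 else (r2, r1)
        if case_risk > control_risk:
            concordant += 1
        elif case_risk == control_risk:
            ties += 1
    return (concordant + 0.5 * ties) / pairs

c = concordance_index([0.05, 0.02, 0.08, 0.01], [1, 0, 1, 0])
```

A value of 0.5 is chance-level discrimination and 1.0 is perfect ranking; the model's 0.66 sits between the two, hence "modest discriminatory accuracy".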
Hatch, B B; Wood-Wentz, C M; Therneau, T M; Walker, M G; Payne, J M; Reeves, R K
2017-06-01
Retrospective chart review. To identify factors predictive of survival after spinal cord injury (SCI). Tertiary care institution. Multiple-variable Cox proportional hazards regression analysis for 759 patients with SCI (535 nontraumatic and 221 traumatic) included age, sex, completeness of injury, level of injury, functional independence measure (FIM) scores, rehabilitation length of stay and SCI cause. Estimated years of life lost in the decade after injury was calculated for patients vs uninjured controls. Median follow-up was 11.4 years. Population characteristics included paraplegia, 58%; complete injury, 11%; male sex, 64%; and median rehabilitation length of stay, 16 days. Factors independently predictive of decreased survival were increased age (+10 years; hazard ratio (HR (95% CI)), 1.6 (1.4-1.7)), male sex (1.3 (1.0-1.6)), lower dismissal FIM score (-10 points; 1.3 (1.2-1.3)) and all nontraumatic causes. Metastatic cancer had the largest decrease in survival (HR (95% CI), 13.3 (8.7-20.2)). Primary tumors (HR (95% CI), 2.5 (1.7-3.8)), vascular (2.5 (1.6-3.8)), musculoskeletal/stenosis (1.7 (1.2-2.5)) and other nontraumatic SCI (2.3 (1.5-3.6)) were associated with decreased survival. Ten-year survival was decreased in nontraumatic SCI (mean (s.d.), 1.8 (0.3) years lost), with largest decreases in survival for metastatic cancer and spinal cord ischemia. Age, male sex and lower dismissal FIM score were associated with decreased survival, but neither injury severity nor level was associated with it. Survival after SCI varies depending on SCI cause, with survival better after traumatic SCI than after nontraumatic SCI. Metastatic cancer and vascular ischemia were associated with the greatest survival reduction.
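The "years of life lost in the decade after injury" quantity can be sketched as the area between a control survival curve and an SCI survival curve over 10 years. The exponential curves and hazard values below are illustrative assumptions, not the study's fitted estimates.

```python
import math

def years_of_life_lost(hazard_injured, hazard_control, horizon_years=10, steps=1000):
    """Trapezoidal integral of S_control(t) - S_injured(t) over the horizon."""
    dt = horizon_years / steps
    area = 0.0
    for j in range(steps):
        t0, t1 = j * dt, (j + 1) * dt
        diff0 = math.exp(-hazard_control * t0) - math.exp(-hazard_injured * t0)
        diff1 = math.exp(-hazard_control * t1) - math.exp(-hazard_injured * t1)
        area += 0.5 * (diff0 + diff1) * dt
    return area  # expected extra years lived by controls within the decade

yll = years_of_life_lost(hazard_injured=0.035, hazard_control=0.015)
```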
Gurdak, Jason J.; Qi, Sharon L.; Geisler, Michael L.
2009-01-01
The U.S. Geological Survey Raster Error Propagation Tool (REPTool) is a custom tool for use with the Environmental System Research Institute (ESRI) ArcGIS Desktop application to estimate error propagation and prediction uncertainty in raster processing operations and geospatial modeling. REPTool is designed to introduce concepts of error and uncertainty in geospatial data and modeling and provide users of ArcGIS Desktop a geoprocessing tool and methodology to consider how error affects geospatial model output. Similar to other geoprocessing tools available in ArcGIS Desktop, REPTool can be run from a dialog window, from the ArcMap command line, or from a Python script. REPTool consists of public-domain, Python-based packages that implement Latin Hypercube Sampling within a probabilistic framework to track error propagation in geospatial models and quantitatively estimate the uncertainty of the model output. Users may specify error for each input raster or model coefficient represented in the geospatial model. The error for the input rasters may be specified as either spatially invariant or spatially variable across the spatial domain. Users may specify model output as a distribution of uncertainty for each raster cell. REPTool uses the Relative Variance Contribution method to quantify the relative error contribution from the two primary components in the geospatial model - errors in the model input data and coefficients of the model variables. REPTool is appropriate for many types of geospatial processing operations, modeling applications, and related research questions, including applications that consider spatially invariant or spatially variable error in geospatial data.
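The Latin Hypercube Sampling at the core of REPTool's probabilistic framework can be illustrated in a few lines (this is a generic sketch, not REPTool's implementation): each variable's [0, 1) range is split into n equiprobable strata, one point is drawn per stratum, and the strata are shuffled independently per variable so the n joint samples cover the space evenly.

```python
import random

def latin_hypercube(n_samples, n_vars, seed=0):
    """Return n_samples points in [0,1)^n_vars, stratified per dimension."""
    rng = random.Random(seed)
    columns = []
    for _ in range(n_vars):
        # One point in each stratum [k/n, (k+1)/n), then shuffle the order.
        col = [(k + rng.random()) / n_samples for k in range(n_samples)]
        rng.shuffle(col)
        columns.append(col)
    return [tuple(col[k] for col in columns) for k in range(n_samples)]

pts = latin_hypercube(10, 2, seed=42)
```

Each sample would then be mapped through the inverse CDF of the user-specified error distribution for the corresponding raster or coefficient before being propagated through the geospatial model.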
Stone, S. H.; Harvey, J.; Packman, A.; Worman, A.
2005-12-01
source of uncertainty in predicting hyporheic exchange resulted from the lack of detailed information on streambed hydraulic conductivity. We are currently conducting additional fieldwork to improve characterization of hydraulic conductivity and evaluate temporal changes in local stream morphology, and will relate these new measurements to the results of multiple prior solute injection experiments. These methods can potentially be used to provide both a priori, order-of-magnitude prediction of hyporheic exchange and much higher-quality estimates of long-term average behavior when used in conjunction with direct observations of solute transport. In the future we intend to test the generality of our method by applying the technique in other streams with varying geomorphology and flow conditions.
Costa, Dorcas Lamounier; Rocha, Regina Lunardi; Chaves, Eldo de Brito Ferreira; Batista, Vivianny Gonçalves de Vasconcelos; Costa, Henrique Lamounier; Costa, Carlos Henrique Nery
2016-01-01
Early identification of patients at higher risk of progressing to severe disease and death is crucial for implementing therapeutic and preventive measures; this could reduce the morbidity and mortality from kala-azar. We describe a score set composed of four scales, in addition to software for quick assessment of the probability of death from kala-azar at the point of care. Data from 883 patients diagnosed between September 2005 and August 2008 were used to derive the score set, and data from 1,031 patients diagnosed between September 2008 and November 2013 were used to validate the models. Stepwise logistic regression analyses were used to derive the optimal multivariate prediction models. Model performance was assessed by its discriminatory accuracy. A computational specialist system (Kala-Cal®) was developed to speed up the calculation of the probability of death based on clinical scores. The clinical prediction score showed high discrimination (area under the curve [AUC] 0.90) for distinguishing death from survival for children ≤2 years old. Performance improved after adding laboratory variables (AUC 0.93). The clinical score showed equivalent discrimination (AUC 0.89) for older children and adults, which also improved after including laboratory data (AUC 0.92). The score set also showed high, although somewhat lower, discrimination when applied to the validation cohort. This score set and the Kala-Cal® software may help identify individuals with the greatest probability of death, speed up the calculation of that probability from clinical scores, and assist physicians in decision-making.
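A point-of-care tool of this kind typically converts a clinical score into a probability of death through the fitted logistic model. The sketch below shows the general form; the intercept and slope are invented for illustration and are not the published Kala-Cal coefficients.

```python
import math

def death_probability(score, intercept=-5.0, slope=0.45):
    """Logistic transform: P(death) = 1 / (1 + exp(-(intercept + slope*score))).

    intercept and slope are hypothetical placeholders, not fitted values.
    """
    return 1.0 / (1.0 + math.exp(-(intercept + slope * score)))

p_low = death_probability(2)    # low clinical score
p_high = death_probability(12)  # high clinical score
```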
Chen, Jian; Chen, Jie; Ding, Hong-Yan; Pan, Qin-Shi; Hong, Wan-Dong; Xu, Gang; Yu, Fang-You; Wang, Yu-Min
2015-01-01
Several statistical methods, such as logistic regression analysis, meta-analysis, multivariate Cox proportional hazards model analysis and retrospective analysis, have been used to identify risk factors for deep fungal infection in lung cancer patients, but their results are inconsistent. A total of 696 patients with lung cancer were enrolled. Factors were compared employing Student's t-test, the Mann-Whitney test or the Chi-square test, and variables that were significantly related to the presence of deep fungal infection were selected as candidates for input into the final artificial neural network (ANN) model. The receiver operating characteristic (ROC) curve and area under the curve (AUC) were used to evaluate the performance of the ANN model and the logistic regression (LR) model. The prevalence of deep fungal infection in this study population was 32.04% (223/696), with infections detected in sputum specimens in 44.05% (200/454) of cases. Candida albicans accounted for 86.99% (194/223) of the isolated fungi. Older age (≥65 years), use of antibiotics, low serum albumin concentration (≤37.18 g/L), radiotherapy, surgery, low hemoglobin (≤93.67 g/L) and long hospitalization (≥14 days) were associated with deep fungal infection, and the ANN model consisted of these seven factors. The AUC of the ANN model (0.829±0.019) was higher than that of the LR model (0.756±0.021). The ANN model, with variables consisting of age, use of antibiotics, serum albumin concentration, radiotherapy, surgery, hemoglobin and length of hospitalization, should be useful for predicting deep fungal infection in lung cancer.
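The AUC values used above to compare the ANN and LR models can be computed from predicted scores via the Mann-Whitney interpretation: the AUC is the probability that a randomly chosen infected patient is scored higher than a randomly chosen non-infected one. A minimal sketch on invented scores:

```python
def roc_auc(scores, labels):
    """AUC via pairwise comparison; labels: 1 = infected, 0 = not infected."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    # Each (positive, negative) pair contributes 1 if ranked correctly,
    # 0.5 for a tie, and 0 otherwise.
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

auc = roc_auc([0.9, 0.4, 0.5, 0.6, 0.2], [1, 1, 0, 1, 0])
```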
International Nuclear Information System (INIS)
Foote, P.S.
1999-01-01
The City of Harrisburg, Pennsylvania, proposed in 1986 to construct a new low-head hydroelectric project on the Susquehanna River in the central part of the state, about 108 km upstream of the river mouth. As part of the licensing process, the city was required by the Federal Energy Regulatory Commission to carry out studies that would forecast the impacts on riverine aquatic habitat of constructing the proposed 13 km long by 1.5 km wide reservoir. The methodology selected by the city and its consultants was to use the Instream Flow Incremental Methodology (IFIM) to model the habitat conditions in the project reach both before and after construction of the proposed reservoir. The IFIM is usually used to model instream flow releases downstream of dams and diversions, and had not been used before to model habitat conditions within a proposed reservoir area. The study team hydraulically modelled the project reach using existing hydraulic data and a HEC-2 backwater analysis to determine post-project water surface elevations. The IFG-4 model was used to simulate both pre- and post-project water velocities, by distributing velocities across transects based on known discharges and cell depths. Effects on aquatic habitat were determined using the IFIM PHABSIM program, in which criteria for several evaluation species and life stages were used to yield estimates of Weighted Usable Area (WUA). The analysis showed, based on trends in WUA from pre- to post-project conditions, that habitat conditions would improve for several species and life stages and would be negatively affected for fewer. Agency concerns that construction of the proposed reservoir would have significant adverse effects on the resident and anadromous fish populations were addressed using these results.
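The Weighted Usable Area computation at the heart of PHABSIM can be sketched as follows: each transect cell's area is weighted by the product of habitat suitability indices (here for depth, velocity, and substrate) for a given species and life stage. Cell areas and suitability values below are invented; real runs draw them from the hydraulic model and species criteria curves.

```python
def weighted_usable_area(cells):
    """cells: list of (area_m2, depth_suit, velocity_suit, substrate_suit),
    each suitability in [0, 1]. Returns WUA in m^2 of equivalent ideal habitat."""
    return sum(a * sd * sv * ss for a, sd, sv, ss in cells)

# Illustrative pre- and post-project conditions for one species/life stage.
pre_project = [(100.0, 0.8, 0.9, 1.0), (150.0, 0.5, 0.4, 0.7)]
post_project = [(100.0, 0.9, 0.6, 1.0), (150.0, 0.7, 0.5, 0.7)]
wua_pre = weighted_usable_area(pre_project)
wua_post = weighted_usable_area(post_project)
```

Comparing `wua_pre` and `wua_post` across species and life stages is the trend analysis the study used to judge habitat gains and losses.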
DEFF Research Database (Denmark)
Bulti, Assaye; Briend, André; Dale, Nancy M
2017-01-01
Background: The burden of severe acute malnutrition (SAM) is estimated using unadjusted prevalence estimates. SAM is an acute condition and many children with SAM will either recover or die within a few weeks. Estimating SAM burden using unadjusted prevalence estimates results in significant...
International Nuclear Information System (INIS)
Christophe Poinssot; Etienne Tevissen; Jacques Ly; Michael Descostes; Virginie Blin; Catherine Beaucaire; Florence Goutelard; Christelle Latrille; Philippe Jacquier
2006-01-01
Full text of publication follows: Deep geological storage is studied in France as one of the three potential options for managing long-lived nuclear waste in the framework of the 1991 Law. One of the key topics of research is the behaviour of radionuclides (RN) in the geological environment, focusing in particular on retention at the solid/water interfaces (in the engineered barriers or within the host rock), the diffusion process within the rock, and the coupling between chemistry and transport processes. The final aim is to develop validated and reliable long-term predictive migration models. This research mainly concerns Callovo-Oxfordian argillites and near-field materials such as cement and bentonite. It deals both with the near-field environment, which is characterised by its evolution with time in terms of temperature, Eh balance and water composition, and with the far-field environment, the chemistry of which is assumed to be roughly constant. Modelling the global RN migration in a geological disposal requires models that are intrinsically able to account for the evolution of the physical and chemical conditions of the environment. From the standpoint of performance assessment it is then necessary to acquire thermodynamic descriptions of the retention processes in order to perform calculations of the reactive transport of radionuclides. For more than 15 years, CEA has been developing experiments and modelling to derive reliable predictive models for RN migration in the geological disposal. For this purpose, a specific approach, the Ion-Exchangers Theory (IXT), was developed. It first allows characterisation of the intrinsic retention properties of the pure minerals, i.e. establishing the mono- or multi-site character of the surface, quantifying the site concentration(s) and studying the relative affinities of the major solutes generally present in natural waters. This work provided a broad data
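For a one-site exchanger and a trace monovalent cation, the thermodynamic retention description reduces to a simple mass-action relation, which gives a feel for how site concentrations and selectivity combine into a distribution coefficient. This is a generic textbook sketch in the spirit of ion-exchange theory, not the multi-site IXT formalism; all values are invented.

```python
def kd_trace_exchange(k_sel, cec_mol_per_kg, major_cation_mol_per_l):
    """Distribution coefficient (L/kg) of a trace cation M+ on a one-site exchanger.

    For M+ + N-X <=> M-X + N+ with M+ at trace level, nearly all sites stay
    occupied by the major cation N+, so Kd = [M-X]/[M+] = K_sel * CEC / [N+].
    """
    return k_sel * cec_mol_per_kg / major_cation_mol_per_l

# Hypothetical trace Cs+ competing with 0.1 mol/L Na+ on a clay, CEC 0.2 mol/kg.
kd = kd_trace_exchange(k_sel=50.0, cec_mol_per_kg=0.2, major_cation_mol_per_l=0.1)
```

The relation makes explicit why retention predictions must track the evolving water chemistry: Kd falls as the major-cation concentration rises.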
Duda, D. P.; Khlopenkov, K. V.; Palikonda, R.; Khaiyer, M. M.; Minnis, P.; Su, W.; Sun-Mack, S.
2016-12-01
With the launch of the Deep Space Climate Observatory (DSCOVR), new estimates of the daytime Earth radiation budget can be computed from a combination of measurements from the two Earth-observing sensors onboard the spacecraft, the Earth Polychromatic Imaging Camera (EPIC) and the National Institute of Standards and Technology Advanced Radiometer (NISTAR). Although these instruments can provide accurate top-of-atmosphere (TOA) radiance measurements, they lack sufficient resolution to provide details on small-scale surface and cloud properties. Previous studies have shown that these properties have a strong influence on the anisotropy of the radiation at the TOA, and ignoring such effects can result in large TOA-flux errors. To overcome these effects, high-resolution scene identification is needed for accurate Earth radiation budget estimation. Selected radiance and cloud property data measured and derived from several low earth orbit (LEO, including NASA Terra and Aqua MODIS, NOAA AVHRR) and geosynchronous (GEO, including GOES (east and west), METEOSAT, INSAT-3D, MTSAT-2, and HIMAWARI-8) satellite imagers were collected to create hourly 5-km resolution global composites of data necessary to compute angular distribution models (ADM) for reflected shortwave (SW) and longwave (LW) radiation. The satellite data provide an independent source of radiance measurements and scene identification information necessary to construct ADMs that are used to determine the daytime Earth radiation budget. To optimize spatial matching between EPIC measurements and the high-resolution composite cloud properties, LEO/GEO retrievals within the EPIC fields of view (FOV) are convolved to the EPIC point spread function (PSF) in a similar manner to the Clouds and the Earth's Radiant Energy System (CERES) Single Scanner Footprint TOA/Surface Fluxes and Clouds (SSF) product. Examples of the merged LEO/GEO/EPIC product will be presented, describing the chosen radiance and cloud properties and
Duda, David P.; Khlopenkov, Konstantin V.; Thiemann, Mandana; Palikonda, Rabindra; Sun-Mack, Sunny; Minnis, Patrick; Su, Wenying
2016-01-01
With the launch of the Deep Space Climate Observatory (DSCOVR), new estimates of the daytime Earth radiation budget can be computed from a combination of measurements from the two Earth-observing sensors onboard the spacecraft, the Earth Polychromatic Imaging Camera (EPIC) and the National Institute of Standards and Technology Advanced Radiometer (NISTAR). Although these instruments can provide accurate top-of-atmosphere (TOA) radiance measurements, they lack sufficient resolution to provide details on small-scale surface and cloud properties. Previous studies have shown that these properties have a strong influence on the anisotropy of the radiation at the TOA, and ignoring such effects can result in large TOA-flux errors. To overcome these effects, high-resolution scene identification is needed for accurate Earth radiation budget estimation. Selected radiance and cloud property data measured and derived from several low earth orbit (LEO, including NASA Terra and Aqua MODIS, NOAA AVHRR) and geosynchronous (GEO, including GOES (east and west), METEOSAT, INSAT-3D, MTSAT-2, and HIMAWARI-8) satellite imagers were collected to create hourly 5-km resolution global composites of data necessary to compute angular distribution models (ADM) for reflected shortwave (SW) and longwave (LW) radiation. The satellite data provide an independent source of radiance measurements and scene identification information necessary to construct ADMs that are used to determine the daytime Earth radiation budget. To optimize spatial matching between EPIC measurements and the high-resolution composite cloud properties, LEO/GEO retrievals within the EPIC fields of view (FOV) are convolved to the EPIC point spread function (PSF) in a similar manner to the Clouds and the Earth's Radiant Energy System (CERES) Single Scanner Footprint TOA/Surface Fluxes and Clouds (SSF) product. Examples of the merged LEO/GEO/EPIC product will be presented, describing the chosen radiance and cloud properties and
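The PSF convolution step described in the two records above amounts to averaging the high-resolution composite retrievals inside a coarse footprint with weights that follow the instrument's point spread function. The toy sketch below shows the idea with invented values and weights; it is not the CERES SSF or EPIC algorithm.

```python
def psf_weighted_mean(values, psf_weights):
    """PSF-weighted average of high-resolution retrievals within one footprint."""
    total_w = sum(psf_weights)
    return sum(v * w for v, w in zip(values, psf_weights)) / total_w

# Cloud-fraction retrievals for five 5-km composite pixels in one EPIC FOV,
# weighted most heavily at the footprint center (hypothetical PSF weights).
cloud_fraction = psf_weighted_mean([0.2, 0.5, 0.9, 0.5, 0.3],
                                   [0.5, 1.0, 2.0, 1.0, 0.5])
```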
Barbetta, Silvia; Coccia, Gabriele; Moramarco, Tommaso; Todini, Ezio
2015-04-01
-Curve Model in Real Time (RCM-RT) (Barbetta and Moramarco, 2014) are used to this end. Both models, without explicitly considering rainfall information, account at each time of forecast for an estimate of the lateral contribution along the river reach for which the stage forecast is produced at the downstream end. The analysis is performed for several reaches using different lead times according to the channel length. Barbetta, S., Moramarco, T., Brocca, L., Franchini, M. and Melone, F. 2014. Confidence interval of real-time forecast stages provided by the STAFOM-RCM model: the case study of the Tiber River (Italy). Hydrological Processes, 28(3), 729-743. Barbetta, S. and Moramarco, T. 2014. Real-time flood forecasting by relating local stage and remote discharge. Hydrological Sciences Journal, 59(9), 1656-1674. Coccia, G. and Todini, E. 2011. Recent developments in predictive uncertainty assessment based on the Model Conditional Processor approach. Hydrology and Earth System Sciences, 15, 3253-3274. doi:10.5194/hess-15-3253-2011. Krzysztofowicz, R. 1999. Bayesian theory of probabilistic forecasting via deterministic hydrologic model. Water Resources Research, 35, 2739-2750. Todini, E. 2004. Role and treatment of uncertainty in real-time flood forecasting. Hydrological Processes, 18(14), 2743-2746. Todini, E. 2008. A model conditional processor to assess predictive uncertainty in flood forecasting. Intl. J. River Basin Management, 6(2), 123-137.
Directory of Open Access Journals (Sweden)
Daehyun Kim
2015-11-01
We propose a state-of-charge (SOC) estimation method for Li-ion batteries that combines a fuzzy sliding mode observer (FSMO) with grey prediction. Unlike the existing methods based on a conventional first-order sliding mode observer (SMO) and an adaptive gain SMO, the proposed method eliminates chattering in SOC estimation. In this method, which uses a fuzzy inference system, the gains of the SMO are adjusted according to the predicted future error and present estimation error of the terminal voltage. To forecast the future error value, a one-step-ahead terminal voltage prediction is obtained using a grey predictor. The proposed estimation method is validated through two types of discharge tests (a pulse discharge test and a random discharge test). The SOC estimation results are compared to the results of the conventional first-order SMO-based and the adaptive gain SMO-based methods. The experimental results show that the proposed method not only reduces chattering, but also improves estimation accuracy.
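The one-step-ahead grey predictor used to forecast the terminal voltage is typically a GM(1,1) model: the raw series is accumulated, a first-order grey differential equation is fitted by least squares, and the next accumulated value is differenced back to the original scale. The sketch below is a generic GM(1,1) on invented voltage samples, not the authors' implementation.

```python
import math

def gm11_predict(x0):
    """One-step-ahead GM(1,1) forecast of the next value of series x0."""
    n = len(x0)
    x1 = [sum(x0[:k + 1]) for k in range(n)]               # accumulated series
    z1 = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, n)]  # background values
    # Least-squares fit of x0[k] = -a * z1[k] + b, solved as a 2x2 system.
    m = n - 1
    sz = sum(z1)
    szz = sum(z * z for z in z1)
    sy = sum(x0[1:])
    szy = sum(z * y for z, y in zip(z1, x0[1:]))
    det = m * szz - sz * sz
    a = (sz * sy - m * szy) / det
    b = (szz * sy - sz * szy) / det
    # Time response: x1_hat(k) = (x0[0] - b/a) * exp(-a*(k-1)) + b/a;
    # the original-scale forecast is x1_hat(n+1) - x1_hat(n).
    c = x0[0] - b / a
    return c * math.exp(-a * n) - c * math.exp(-a * (n - 1))

# Hypothetical terminal-voltage samples (V) during a slow discharge.
next_voltage = gm11_predict([3.95, 3.93, 3.92, 3.90])
```

The forecast error between `next_voltage` and the next measured sample is what the fuzzy inference system would feed on to adjust the observer gains.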
Tsunashima, Ryo; Naoi, Yasuto; Shimazu, Kenzo; Kagara, Naofumi; Shimoda, Masashi; Tanei, Tomonori; Miyake, Tomohiro; Kim, Seung Jin; Noguchi, Shinzaburo
2018-05-04
Prediction models for late (> 5 years) recurrence in ER-positive breast cancer need to be developed for the accurate selection of patients for extended hormonal therapy. We attempted to develop such a prediction model focusing on the differences in gene expression between breast cancers with early and late recurrence. For the training set, 779 ER-positive breast cancers treated with tamoxifen alone for 5 years were selected from the databases (GSE6532, GSE12093, GSE17705, and GSE26971). For the validation set, 221 ER-positive breast cancers treated with adjuvant hormonal therapy for 5 years with or without chemotherapy at our hospital were included. Gene expression was assayed by DNA microarray analysis (Affymetrix U133 plus 2.0). With the 42 genes differentially expressed in early and late recurrence breast cancers in the training set, a prediction model (42GC) for late recurrence was constructed. The patients classified by 42GC into the late recurrence-like group showed a significantly (P = 0.006) higher late recurrence rate as expected but a significantly (P = 1.62 × 10^-13) lower rate for early recurrence than the non-late recurrence-like group. These observations were confirmed for the validation set, i.e., P = 0.020 for late recurrence and P = 5.70 × 10^-5 for early recurrence. We developed a unique prediction model (42GC) for late recurrence by focusing on the biological differences between breast cancers with early and late recurrence. Interestingly, patients in the late recurrence-like group by 42GC were at low risk for early recurrence.