NASA Software Cost Estimation Model: An Analogy Based Estimation Model
Hihn, Jairus; Juster, Leora; Menzies, Tim; Mathew, George; Johnson, James
2015-01-01
The cost estimation of software development activities is increasingly critical for large-scale integrated projects such as those at DOD and NASA, especially as software systems become larger and more complex. For example, MSL (Mars Science Laboratory), developed at the Jet Propulsion Laboratory, launched with over 2 million lines of code, making it the largest robotic spacecraft ever flown as measured by software size. Software development activities are also notorious for cost growth, with NASA flight software averaging over 50% cost growth. Across the agency, estimators and analysts are increasingly tasked with developing reliable cost estimates in support of program planning and execution. While there has been extensive work on improving parametric methods, there has been very little focus on models based on analogy and clustering algorithms. In this paper we summarize our findings on effort/cost estimation and model development based on ten years of software effort estimation research using data mining and machine learning methods to develop estimation models based on analogy and clustering. The NASA Software Cost Model's performance is evaluated by comparing it to the performance of COCOMO II, linear regression, and k-nearest neighbor prediction models on the same data set.
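The analogy-based approach summarized above estimates effort from the most similar completed projects. A minimal sketch of k-nearest-neighbor effort estimation follows; the feature names and effort values are hypothetical illustrations, and the NASA model itself uses clustering and a much richer data set:

```python
import math

def knn_effort_estimate(projects, query, k=3):
    """Estimate effort for `query` as the mean effort of its k nearest
    completed projects in min-max normalized feature space."""
    features = list(query.keys())
    lo = {f: min(p["features"][f] for p in projects) for f in features}
    hi = {f: max(p["features"][f] for p in projects) for f in features}

    def dist(p):
        # Euclidean distance over features scaled to [0, 1].
        s = 0.0
        for f in features:
            span = (hi[f] - lo[f]) or 1.0
            s += ((p["features"][f] - query[f]) / span) ** 2
        return math.sqrt(s)

    nearest = sorted(projects, key=dist)[:k]
    return sum(p["effort"] for p in nearest) / k

# Hypothetical history: size in KSLOC, team experience in years,
# effort in person-months.
history = [
    {"features": {"ksloc": 10, "exp": 2}, "effort": 30},
    {"features": {"ksloc": 50, "exp": 5}, "effort": 120},
    {"features": {"ksloc": 12, "exp": 3}, "effort": 35},
    {"features": {"ksloc": 90, "exp": 4}, "effort": 250},
]
estimate = knn_effort_estimate(history, {"ksloc": 15, "exp": 3}, k=2)
print(estimate)  # mean effort of the two most similar past projects
```

The comparison baselines named in the abstract (COCOMO II, linear regression) would replace the averaging step with a calibrated parametric or regression model over the same features.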
Estimating Stochastic Volatility Models using Prediction-based Estimating Functions
DEFF Research Database (Denmark)
Lunde, Asger; Brix, Anne Floor
In this paper prediction-based estimating functions (PBEFs), introduced in Sørensen (2000), are reviewed and PBEFs for the Heston (1993) stochastic volatility model are derived. The finite sample performance of the PBEF-based estimator is investigated in a Monte Carlo study and compared to the performance of the GMM estimator based on conditional moments of integrated volatility from Bollerslev and Zhou (2002). The case where the observed log-price process is contaminated by i.i.d. market microstructure (MMS) noise is also investigated. First, the impact of MMS noise on the parameter estimates from … to correctly account for the noise are investigated. Our Monte Carlo study shows that the estimator based on PBEFs outperforms the GMM estimator, both in the setting with and without MMS noise. Finally, an empirical application investigates the possible challenges and general performance of applying the PBEF …
Statistical Model-Based Face Pose Estimation
Institute of Scientific and Technical Information of China (English)
GE Xinliang; YANG Jie; LI Feng; WANG Huahua
2007-01-01
A robust face pose estimation approach is proposed that uses a face shape statistical model and represents pose parameters by trigonometric functions. The face shape statistical model is first built by analyzing face shapes from different people under varying poses; shape alignment is vital in building this model. Six trigonometric functions are then employed to represent the face pose parameters. Finally, the mapping function between face image and face pose is constructed by linearly relating the different parameters. The proposed approach can estimate different face poses using only a few face training samples. Experimental results demonstrate its efficiency and accuracy.
Model-based estimation for dynamic cardiac studies using ECT
International Nuclear Information System (INIS)
Chiao, P.C.; Rogers, W.L.; Clinthorne, N.H.; Fessler, J.A.; Hero, A.O.
1994-01-01
In this paper, the authors develop a strategy for joint estimation of physiological parameters and myocardial boundaries using ECT (Emission Computed Tomography). The authors construct an observation model to relate parameters of interest to the projection data and to account for limited ECT system resolution and measurement noise. The authors then use a maximum likelihood (ML) estimator to jointly estimate all the parameters directly from the projection data without reconstruction of intermediate images. The authors also simulate myocardial perfusion studies based on a simplified heart model to evaluate the performance of the model-based joint ML estimator and compare this performance to the Cramer-Rao lower bound. Finally, model assumptions and potential uses of the joint estimation strategy are discussed
Model-based estimation for dynamic cardiac studies using ECT.
Chiao, P C; Rogers, W L; Clinthorne, N H; Fessler, J A; Hero, A O
1994-01-01
The authors develop a strategy for joint estimation of physiological parameters and myocardial boundaries using ECT (emission computed tomography). They construct an observation model to relate parameters of interest to the projection data and to account for limited ECT system resolution and measurement noise. The authors then use a maximum likelihood (ML) estimator to jointly estimate all the parameters directly from the projection data without reconstruction of intermediate images. They also simulate myocardial perfusion studies based on a simplified heart model to evaluate the performance of the model-based joint ML estimator and compare this performance to the Cramer-Rao lower bound. Finally, the authors discuss model assumptions and potential uses of the joint estimation strategy.
Weibull Parameters Estimation Based on Physics of Failure Model
DEFF Research Database (Denmark)
Kostandyan, Erik; Sørensen, John Dalsgaard
2012-01-01
Reliability estimation procedures are discussed for the example of fatigue development in solder joints using a physics of failure model. The accumulated damage is estimated based on a physics of failure model, the Rainflow counting algorithm, and Miner's rule. A threshold model is used for degradation modeling and failure criteria determination. The time-dependent accumulated damage is assumed linearly proportional to the time-dependent degradation level. It is observed that the deterministic accumulated damage at the level of unity closely estimates the characteristic fatigue life of Weibull …
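The damage-accumulation step described above, Rainflow-counted cycles combined via Miner's rule, can be sketched as follows; the stress ranges and S-N capacities are illustrative values, not from the paper:

```python
def miner_damage(cycle_counts, sn_capacity):
    """Accumulate fatigue damage with Miner's rule: D = sum(n_i / N_i).
    `cycle_counts` maps a stress range to the cycles counted at that range
    (e.g. by Rainflow counting); `sn_capacity` maps the same range to the
    cycles-to-failure from the S-N curve. Failure criterion: D >= 1."""
    return sum(n / sn_capacity[s] for s, n in cycle_counts.items())

# Hypothetical counted cycles and S-N capacities for three stress ranges.
counts = {"low": 1e6, "mid": 5e4, "high": 1e3}
capacity = {"low": 1e7, "mid": 5e5, "high": 1e4}
damage = miner_damage(counts, capacity)
print(damage)  # well below the failure threshold of 1.0
```

The threshold model in the abstract then maps this accumulated damage, assumed proportional to the degradation level, onto a failure criterion.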
A Dynamic Travel Time Estimation Model Based on Connected Vehicles
Directory of Open Access Journals (Sweden)
Daxin Tian
2015-01-01
With advances in connected vehicle technology, dynamic vehicle route guidance models are gradually becoming indispensable for drivers. Traditional route guidance models direct a vehicle along the shortest path from origin to destination without considering dynamic traffic information. In this paper a dynamic travel time estimation model is presented that can collect and distribute traffic data based on connected vehicles. To estimate real-time travel time more accurately, a road link dynamic dividing algorithm is proposed. The efficiency of the model is confirmed by simulations, and the experimental results prove the effectiveness of the travel time estimation method.
Optimal difference-based estimation for partially linear models
Zhou, Yuejin; Cheng, Yebin; Dai, Wenlin; Tong, Tiejun
2017-01-01
Difference-based methods have attracted increasing attention for analyzing partially linear models in the recent literature. In this paper, we first propose to solve the optimal sequence selection problem in difference-based estimation for the linear component. To achieve the goal, a family of new sequences and a cross-validation method for selecting the adaptive sequence are proposed. We demonstrate that the existing sequences are only extreme cases in the proposed family. Secondly, we propose a new estimator for the residual variance by fitting a linear regression method to some difference-based estimators. Our proposed estimator achieves the asymptotic optimal rate of mean squared error. Simulation studies also demonstrate that our proposed estimator performs better than the existing estimator, especially when the sample size is small and the nonparametric function is rough.
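As a concrete instance of difference-based estimation, the classical first-order estimator (Rice, 1984) estimates the residual variance from squared successive differences, which cancel the smooth trend; the paper's optimal-sequence estimator generalizes this idea. The signal and noise level below are illustrative:

```python
import math, random

def rice_variance(y):
    """First-order difference-based estimate of residual variance in the
    nonparametric model y_i = f(x_i) + e_i (Rice, 1984):
        sigma^2 ~= sum_i (y_{i+1} - y_i)^2 / (2 * (n - 1))
    Differencing removes the smooth trend f, leaving mostly noise."""
    n = len(y)
    return sum((y[i + 1] - y[i]) ** 2 for i in range(n - 1)) / (2 * (n - 1))

# Smooth signal plus N(0, 0.1^2) noise; true residual variance is 0.01.
random.seed(0)
y = [math.sin(2 * math.pi * i / 500) + random.gauss(0, 0.1)
     for i in range(500)]
print(rice_variance(y))  # close to 0.01
```

Higher-order difference sequences trade a smaller trend bias for a larger variance, which is exactly the trade-off the optimal sequence selection in the paper addresses.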
Temporal validation for landsat-based volume estimation model
Renaldo J. Arroyo; Emily B. Schultz; Thomas G. Matney; David L. Evans; Zhaofei Fan
2015-01-01
Satellite imagery can potentially reduce the costs and time associated with ground-based forest inventories; however, for satellite imagery to provide reliable forest inventory data, it must produce consistent results from one time period to the next. The objective of this study was to temporally validate a Landsat-based volume estimation model in a four county study...
Improved air ventilation rate estimation based on a statistical model
International Nuclear Information System (INIS)
Brabec, M.; Jilek, K.
2004-01-01
A new approach to air ventilation rate estimation from CO measurement data is presented. The approach is based on a state-space dynamic statistical model, allowing for quick and efficient estimation. Underlying computations are based on Kalman filtering, whose practical software implementation is rather easy. The key property is the flexibility of the model, allowing various artificial regimens of CO level manipulation to be treated. The model is semi-parametric in nature and can efficiently handle time-varying ventilation rate. This is a major advantage, compared to some of the methods which are currently in practical use. After a formal introduction of the statistical model, its performance is demonstrated on real data from routine measurements. It is shown how the approach can be utilized in a more complex situation of major practical relevance, when time-varying air ventilation rate and radon entry rate are to be estimated simultaneously from concurrent radon and CO measurements
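For intuition, a minimal constant-rate sketch: with a tracer gas decaying as C(t) = C0·exp(−λt), the air exchange rate λ is the negative slope of ln C(t) against t. The paper's state-space/Kalman approach generalizes this to time-varying rates; the data below are synthetic:

```python
import math

def ventilation_rate(times, conc):
    """Least-squares slope of ln C(t) vs t during a tracer-gas decay
    C(t) = C0 * exp(-lam * t); the air exchange rate is -slope.
    Assumes a constant rate over the fitting window, unlike the
    time-varying state-space model in the paper."""
    logc = [math.log(c) for c in conc]
    n = len(times)
    tbar = sum(times) / n
    ybar = sum(logc) / n
    slope = (sum((t - tbar) * (y - ybar) for t, y in zip(times, logc))
             / sum((t - tbar) ** 2 for t in times))
    return -slope

# Synthetic decay: 1.2 air changes per hour, sampled every 15 minutes.
lam = 1.2
ts = [0.25 * i for i in range(10)]
cs = [1000.0 * math.exp(-lam * t) for t in ts]
print(ventilation_rate(ts, cs))  # recovers 1.2 per hour
```

A Kalman filter replaces this batch fit with a recursive update, which is what makes the artificial CO manipulation regimens and time-varying rates in the paper tractable.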
A model-based approach to estimating forest area
Ronald E. McRoberts
2006-01-01
A logistic regression model based on forest inventory plot data and transformations of Landsat Thematic Mapper satellite imagery was used to predict the probability of forest for 15 study areas in Indiana, USA, and 15 in Minnesota, USA. Within each study area, model-based estimates of forest area were obtained for circular areas with radii of 5 km, 10 km, and 15 km and...
Synchronous Generator Model Parameter Estimation Based on Noisy Dynamic Waveforms
Berhausen, Sebastian; Paszek, Stefan
2016-01-01
In recent years, system failures have occurred in many power systems all over the world, resulting in a loss of power supply to a large number of customers. To minimize the risk of power failures, it is necessary to perform multivariate investigations, including simulations, of power system operating conditions. Reliable simulations require a current base of parameters for the models of generating units, including the models of synchronous generators. This paper presents a method for parameter estimation of a nonlinear synchronous generator model based on the analysis of selected transient waveforms caused by introducing a disturbance (in the form of a pseudorandom signal) into the generator voltage regulation channel. The parameters were estimated by minimizing an objective function defined as the mean square error of the deviations between the measured waveforms and the waveforms calculated from the generator's mathematical model. A hybrid algorithm was used to minimize the objective function. The paper also describes a filter system used to filter the noisy measurement waveforms, and gives calculation results for the model of a 44 kW synchronous generator installed on a laboratory stand of the Institute of Electrical Engineering and Computer Science of the Silesian University of Technology. The presented estimation method can be successfully applied to parameter estimation of different models of high-power synchronous generators operating in a power system.
These model-based estimates use two surveys, the Behavioral Risk Factor Surveillance System (BRFSS) and the National Health Interview Survey (NHIS). The two surveys are combined using novel statistical methodology.
Line impedance estimation using model based identification technique
DEFF Research Database (Denmark)
Ciobotaru, Mihai; Agelidis, Vassilios; Teodorescu, Remus
2011-01-01
The estimation of the line impedance can be used by the control of numerous grid-connected systems, such as active filters, islanding detection techniques, non-linear current controllers, and detection of the on/off grid operation mode. Therefore, estimating the line impedance can add extra functions to the operation of grid-connected power converters. This paper describes a quasi-passive method for estimating the line impedance of the distribution electricity network. The method uses a model-based identification technique to obtain the resistive and inductive parts of the line impedance. The quasi- …
ANFIS-Based Modeling for Photovoltaic Characteristics Estimation
Directory of Open Access Journals (Sweden)
Ziqiang Bi
2016-09-01
Due to the high cost of photovoltaic (PV) modules, an accurate performance estimation method is significantly valuable for studying the electrical characteristics of PV generation systems. Conventional analytical PV models are usually composed of nonlinear exponential functions, and a large number of unknown parameters must be identified before use. In this paper, an adaptive-network-based fuzzy inference system (ANFIS) based modeling method is proposed to predict the current-voltage characteristics of PV modules. The effectiveness of the proposed modeling method is evaluated through comparison with Villalva's model, a radial basis function neural network (RBFNN) based model, and a support vector regression (SVR) based model. Simulation and experimental results confirm both the feasibility and the effectiveness of the proposed method.
An Approach to Quality Estimation in Model-Based Development
DEFF Research Database (Denmark)
Holmegaard, Jens Peter; Koch, Peter; Ravn, Anders Peter
2004-01-01
We present an approach to estimation of parameters for design space exploration in Model-Based Development, where synthesis of a system is done in two stages. Component qualities like space, execution time or power consumption are defined in a repository by platform dependent values. Connectors...
Estimation of pump operational state with model-based methods
International Nuclear Information System (INIS)
Ahonen, Tero; Tamminen, Jussi; Ahola, Jero; Viholainen, Juha; Aranto, Niina; Kestilae, Juha
2010-01-01
Pumps are widely used in industry, and they account for 20% of the industrial electricity consumption. Since the speed variation is often the most energy-efficient method to control the head and flow rate of a centrifugal pump, frequency converters are used with induction motor-driven pumps. Although a frequency converter can estimate the operational state of an induction motor without external measurements, the state of a centrifugal pump or other load machine is not typically considered. The pump is, however, usually controlled on the basis of the required flow rate or output pressure. As the pump operational state can be estimated with a general model having adjustable parameters, external flow rate or pressure measurements are not necessary to determine the pump flow rate or output pressure. Hence, external measurements could be replaced with an adjustable model for the pump that uses estimates of the motor operational state. Besides control purposes, modelling the pump operation can provide useful information for energy auditing and optimization purposes. In this paper, two model-based methods for pump operation estimation are presented. Factors affecting the accuracy of the estimation methods are analyzed. The applicability of the methods is verified by laboratory measurements and tests in two pilot installations. Test results indicate that the estimation methods can be applied to the analysis and control of pump operation. The accuracy of the methods is sufficient for auditing purposes, and the methods can inform the user if the pump is driven inefficiently.
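The estimation methods above rely on a pump model with adjustable parameters driven by motor-state estimates. One standard building block of such models is affinity-law scaling of an operating point with rotational speed; the nominal values below are hypothetical, and the paper's full methods involve more than this single relation:

```python
def scale_pump_point(q_nom, h_nom, n_nom, n):
    """Scale a centrifugal pump's nominal operating point to speed n
    using the affinity laws: flow Q scales with n, head H with n^2
    (and shaft power with n^3)."""
    r = n / n_nom
    return q_nom * r, h_nom * r ** 2

# Hypothetical nominal point: 100 m3/h at 30 m head and 1450 rpm,
# slowed by a frequency converter to 1160 rpm.
q, h = scale_pump_point(100.0, 30.0, 1450.0, 1160.0)
print(q, h)  # reduced flow and head at the lower speed
```

Combining such a speed-scaled QH curve with the motor's estimated speed and torque is what lets the flow rate be inferred without an external flow meter.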
Comparison of physically based catchment models for estimating Phosphorus losses
Nasr, Ahmed Elssidig; Bruen, Michael
2003-01-01
As part of a large EPA-funded research project, coordinated by TEAGASC, the Centre for Water Resources Research at UCD reviewed the available distributed physically based catchment models with potential for estimating phosphorus losses, for use in implementing the Water Framework Directive. Three models, representative of different levels of approach and complexity, were chosen and implemented for a number of Irish catchments. This paper reports on (i) the lessons and experience …
Groundwater Modelling For Recharge Estimation Using Satellite Based Evapotranspiration
Soheili, Mahmoud; (Tom) Rientjes, T. H. M.; (Christiaan) van der Tol, C.
2017-04-01
Groundwater movement is influenced by several factors and processes in the hydrological cycle, of which recharge is of high relevance. Since the amount of extractable aquifer water directly relates to the recharge amount, estimation of recharge is a prerequisite of groundwater resources management. Recharge is strongly affected by water loss mechanisms, the major one being actual evapotranspiration (ETa). It is therefore essential to assess in detail the impact of ETa on groundwater recharge. The objective of this study was to evaluate how recharge estimates were affected when satellite-based evapotranspiration was used instead of in-situ based ETa in the Salland area, the Netherlands. The Methodology for Interactive Planning for Water Management (MIPWA) model setup, which includes a groundwater model for the northern part of the Netherlands, was used for recharge estimation. Actual evapotranspiration maps from Waterschap Groot Salland, based on the Surface Energy Balance Algorithm for Land (SEBAL), were also used. Comparison of the SEBAL-based ETa estimates with in-situ based estimates in the Netherlands showed that these SEBAL estimates were not reliable, so the results could not serve for calibrating root zone parameters in the CAPSIM model. The annual cumulative ETa map produced by the model showed that the maximum amount of evapotranspiration occurs in mixed forest areas in the northeast and parts of the central region; estimates ranged from 579 mm to a minimum of 0 mm in the highest elevated areas with woody vegetation in the southeast of the region. Variations in mean seasonal hydraulic head and groundwater level for each layer showed that the hydraulic gradient follows elevation in the Salland area from southeast (maximum) to northwest (minimum), which indicates the groundwater flow direction. The mean seasonal water balance in the CAPSIM part was evaluated to represent recharge estimation in the first layer. The highest recharge flux was estimated for autumn
Radiation risk estimation based on measurement error models
Masiuk, Sergii; Shklyar, Sergiy; Chepurny, Mykola; Likhtarov, Illya
2017-01-01
This monograph discusses statistics and risk estimates applied to radiation damage under the presence of measurement errors. The first part covers nonlinear measurement error models, with a particular emphasis on efficiency of regression parameter estimators. In the second part, risk estimation in models with measurement errors is considered. Efficiency of the methods presented is verified using data from radio-epidemiological studies.
Small Area Model-Based Estimators Using Big Data Sources
Directory of Open Access Journals (Sweden)
Marchetti Stefano
2015-06-01
The timely, accurate monitoring of social indicators, such as poverty or inequality, on a fine-grained spatial and temporal scale is a crucial tool for understanding social phenomena and policymaking, but poses a great challenge to official statistics. This article argues that an interdisciplinary approach, combining the body of statistical research in small area estimation with the body of research in social data mining based on Big Data, can provide novel means to tackle this problem successfully. Big Data derived from the digital crumbs that humans leave behind in their daily activities are in fact providing ever more accurate proxies of social life. Social data mining from these data, coupled with advanced model-based techniques for fine-grained estimates, has the potential to provide a novel microscope through which to view and understand social complexity. This article suggests three ways to use Big Data together with small area estimation techniques, and shows how Big Data has the potential to mirror aspects of well-being and other socioeconomic phenomena.
The relative pose estimation of aircraft based on contour model
Fu, Tai; Sun, Xiangyi
2017-02-01
This paper proposes a relative pose estimation approach based on an object contour model. The first step is to obtain two-dimensional (2D) projections of the three-dimensional (3D) model-based target, which are divided into 40 forms by clustering and LDA analysis. We then extract the target contour in each image and compute its Pseudo-Zernike Moments (PZM), so that a model library is constructed in an offline mode. Next, using the PZM as reference, we find the projection contour in the model library that most resembles the target silhouette in the current image; similarity transformation parameters are then generated by applying shape context matching to the silhouette sampling locations, from which the identification parameters of the target can be derived. The identification parameters are converted to relative pose parameters, which serve as the initial values for an iterative refinement algorithm, on the premise that they lie in the neighborhood of the actual ones. Finally, Distance Image Iterative Least Squares (DI-ILS) is employed to acquire the final relative pose parameters.
Correlation between the model accuracy and model-based SOC estimation
International Nuclear Information System (INIS)
Wang, Qianqian; Wang, Jiao; Zhao, Pengju; Kang, Jianqiang; Yan, Few; Du, Changqing
2017-01-01
State-of-charge (SOC) estimation is a core technology for battery management systems. Considerable progress has been achieved in the study of SOC estimation algorithms, especially algorithms based on the Kalman filter, to meet the increasing demands of model-based battery management systems. The Kalman filter weakens the influence of white noise and initial error during SOC estimation but cannot eliminate the inherent error of the battery model itself. As such, the accuracy of SOC estimation is directly related to the accuracy of the battery model. Thus far, the quantitative relationship between model accuracy and model-based SOC estimation has remained unknown. This study summarizes three equivalent-circuit lithium-ion battery models, namely the Thevenin, PNGV, and DP models. The model parameters are identified through a hybrid pulse power characterization test. The three models are evaluated, and SOC estimation conducted by the EKF-Ah method under three operating conditions is quantitatively studied. The regression and correlation between the model error and the SOC estimation error are studied in terms of standard deviation and normalized RMSE; these quantities exhibit a strong linear relationship. Results indicate that model accuracy affects SOC estimation accuracy mainly in two ways: the dispersion of the frequency distribution of the error and the overall level of the error. On the basis of the relationship between model error and SOC estimation error, our study provides a strategy for selecting a suitable cell model to meet SOC precision requirements when using a Kalman filter.
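A minimal scalar sketch of Kalman-filter SOC estimation as discussed above, using a linearized OCV curve in place of the Thevenin/PNGV/DP models evaluated in the paper; all numerical values (a, b, r0, noise variances) are illustrative assumptions:

```python
import random

def kf_soc_step(soc, P, i_amp, v_meas, dt, cap_as,
                a=3.0, b=1.0, r0=0.01, Q=1e-7, R=1e-4):
    """One scalar Kalman-filter step for model-based SOC estimation.
    Predict: coulomb counting, soc += i*dt/capacity (i > 0 = charging).
    Correct: terminal-voltage model v = a + b*soc + r0*i, a linearized
    OCV curve. Q and R are process/measurement noise variances."""
    soc += i_amp * dt / cap_as          # prediction (coulomb counting)
    P += Q
    v_pred = a + b * soc + r0 * i_amp   # predicted terminal voltage
    K = P * b / (b * b * P + R)         # Kalman gain (H = b)
    soc += K * (v_meas - v_pred)        # measurement update
    P *= (1 - K * b)
    return soc, P

# Simulate a 1 Ah cell charged at 1 A for 6 minutes; the filter starts
# from a wrong initial SOC of 0.40 and converges to the true trajectory.
random.seed(1)
true_soc, est, P = 0.50, 0.40, 1e-2
for _ in range(360):
    true_soc += 1.0 * 1.0 / 3600.0
    v = 3.0 + 1.0 * true_soc + 0.01 * 1.0 + random.gauss(0, 0.01)
    est, P = kf_soc_step(est, P, 1.0, v, 1.0, 3600.0)
print(true_soc, est)  # the estimate tracks the true SOC
```

The paper's point is visible even in this toy: any systematic error in the voltage model (a, b, r0 here) propagates directly into the corrected SOC, which the Kalman gain cannot remove.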
Estimating cardiovascular disease incidence from prevalence: a spreadsheet based model
Directory of Open Access Journals (Sweden)
Xue Feng Hu
2017-01-01
Background: Disease incidence and prevalence are both core indicators of population health. Incidence is generally not as readily accessible as prevalence. Cohort studies and electronic health record systems are two major ways to estimate disease incidence; the former is time-consuming and expensive, and the latter is not available in most developing countries. Alternatively, mathematical models can be used to estimate disease incidence from prevalence. Methods: We proposed and validated a method to estimate the age-standardized incidence of cardiovascular disease (CVD), with prevalence data from successive surveys and mortality data from empirical studies. Hallett's method, designed for estimating HIV infections in Africa, was modified to estimate the incidence of myocardial infarction (MI) in the U.S. population and the incidence of heart disease in the Canadian population. Results: Model-derived estimates were in close agreement with observed incidence from cohort studies and population surveillance systems. The method correctly captured the trend in incidence given sufficient waves of cross-sectional surveys. The estimated MI decline rate in the U.S. population was in accordance with the literature. The method was superior to a closed cohort for estimating the trend in population cardiovascular disease incidence. Conclusion: It is possible to estimate CVD incidence accurately at the population level from cross-sectional prevalence data. The method has the potential to be used for age- and sex-specific incidence estimates, or to be expanded to other chronic conditions.
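The accounting identity behind estimating incidence from successive prevalence surveys can be sketched as below. This is a simplified closed-population version, not Hallett's full age-standardized method, and all numbers are illustrative:

```python
def incidence_from_prevalence(p1, p2, n, case_deaths, years):
    """Back out incidence from two prevalence surveys in a closed
    population. Cases at t2 = cases at t1 + new cases - deaths among
    cases, so new_cases = n*(p2 - p1) + case_deaths. The annual
    incidence rate divides by person-time at risk, approximated here
    by the initially disease-free population."""
    new_cases = n * (p2 - p1) + case_deaths
    at_risk_person_years = n * (1 - p1) * years
    return new_cases / at_risk_person_years

# Illustrative numbers: prevalence rises from 5% to 6% over 5 years in
# a population of 1,000,000 with 30,000 deaths among existing cases.
rate = incidence_from_prevalence(0.05, 0.06, 1_000_000, 30_000, 5)
print(rate)  # new cases per person-year at risk
```

The method in the paper refines this identity with age structure, mortality data from empirical studies, and multiple survey waves to recover trends.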
Model-Based Estimation of Ankle Joint Stiffness.
Misgeld, Berno J E; Zhang, Tony; Lüken, Markus J; Leonhardt, Steffen
2017-03-29
We address the estimation of biomechanical parameters with wearable measurement technologies. In particular, we focus on the estimation of sagittal plane ankle joint stiffness in dorsiflexion/plantar flexion. For this estimation, a novel nonlinear biomechanical model of the lower leg was formulated that is driven by electromyographic signals. The model incorporates a two-dimensional kinematic description in the sagittal plane for the calculation of muscle lever arms and torques. To reduce estimation errors due to model uncertainties, a filtering algorithm is necessary that employs segmental orientation sensor measurements. Because of the model's inherent nonlinearities and nonsmooth dynamics, a square-root cubature Kalman filter was developed. The performance of the novel estimation approach was evaluated in silico and in an experimental procedure. The experimental study was conducted with body-worn sensors and a test-bench that was specifically designed to obtain reference angle and torque measurements for a single joint. Results show that the filter is able to reconstruct joint angle positions, velocities and torque, as well as, joint stiffness during experimental test bench movements.
Model-based state estimator for an intelligent tire
Goos, J.; Teerhuis, A. P.; Schmeitz, A. J.C.; Besselink, I.; Nijmeijer, H.
2017-01-01
In this work a Tire State Estimator (TSE) is developed and validated using data from a tri-axial accelerometer, installed at the inner liner of the tire. The Flexible Ring Tire (FRT) model is proposed to calculate the tire deformation. For a rolling tire, this deformation is transformed into
Model-based State Estimator for an Intelligent Tire
Goos, J.; Teerhuis, A.P.; Schmeitz, A.J.C.; Besselink, I.J.M.; Nijmeijer, H.
2016-01-01
In this work a Tire State Estimator (TSE) is developed and validated using data from a tri-axial accelerometer, installed at the inner liner of the tire. The Flexible Ring Tire (FRT) model is proposed to calculate the tire deformation. For a rolling tire, this deformation is transformed into
Sparse estimation of model-based diffuse thermal dust emission
Irfan, Melis O.; Bobin, Jérôme
2018-03-01
Component separation for the Planck High Frequency Instrument (HFI) data is primarily concerned with the estimation of thermal dust emission, which requires the separation of thermal dust from the cosmic infrared background (CIB). For that purpose, current estimation methods rely on filtering techniques to decouple thermal dust emission from CIB anisotropies, which tend to yield a smooth, low-resolution, estimation of the dust emission. In this paper, we present a new parameter estimation method, premise: Parameter Recovery Exploiting Model Informed Sparse Estimates. This method exploits the sparse nature of thermal dust emission to calculate all-sky maps of thermal dust temperature, spectral index, and optical depth at 353 GHz. premise is evaluated and validated on full-sky simulated data. We find the percentage difference between the premise results and the true values to be 2.8, 5.7, and 7.2 per cent at the 1σ level across the full sky for thermal dust temperature, spectral index, and optical depth at 353 GHz, respectively. A comparison between premise and a GNILC-like method over selected regions of our sky simulation reveals that both methods perform comparably within high signal-to-noise regions. However, outside of the Galactic plane, premise is seen to outperform the GNILC-like method with increasing success as the signal-to-noise ratio worsens.
International Nuclear Information System (INIS)
Wei, Zhongbao; Zhao, Jiyun; Ji, Dongxu; Tseng, King Jet
2017-01-01
Highlights:
- SOC and capacity are dually estimated with an online adapted battery model.
- Model identification and dual state estimation are fully decoupled.
- Multiple timescales are used to improve estimation accuracy and stability.
- The proposed method is verified with lab-scale experiments.
- The proposed method is applicable to different battery chemistries.
Abstract: Reliable online estimation of state of charge (SOC) and capacity is critically important for the battery management system (BMS). This paper presents a multi-timescale method for dual estimation of SOC and capacity with an online identified battery model. The model parameter estimator and the dual estimator are fully decoupled and executed on different timescales to improve model accuracy and stability. Specifically, the model parameters are adapted online with vector-type recursive least squares (VRLS) to address their different variation rates. Based on the online adapted battery model, a Kalman filter (KF)-based SOC estimator and an RLS-based capacity estimator are formulated and integrated in the form of dual estimation. Experimental results suggest that the proposed method estimates the model parameters, SOC, and capacity in real time with fast convergence and high accuracy. Experiments on both a lithium-ion battery and a vanadium redox flow battery (VRB) verify the generality of the proposed method across battery chemistries. The proposed method is also compared with other existing methods on computational cost to reveal its superiority for practical application.
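As a rough illustration of the KF-based SOC estimation layer described above, the sketch below runs a scalar Kalman filter against voltage samples from an assumed linear-OCV battery model; all parameter values (OCV slope, resistance, capacity) are invented for the example, and the RLS capacity layer is omitted for brevity.

```python
# Assumed linear battery model: V = A_OCV*soc + B_OCV - R0*I.
# All values are illustrative, not identified parameters.
A_OCV, B_OCV, R0 = 0.8, 3.2, 0.05
DT = 1.0  # sample time [s]

def simulate_cell(soc0, capacity_as, current, n):
    """Generate true SOC and terminal-voltage samples (noise-free)."""
    socs, volts = [], []
    soc = soc0
    for _ in range(n):
        soc -= current * DT / capacity_as          # coulomb counting
        socs.append(soc)
        volts.append(A_OCV * soc + B_OCV - R0 * current)
    return socs, volts

def kf_soc(volts, current, capacity_as, soc_init, q=1e-6, r=1e-3):
    """Scalar Kalman filter: predict by coulomb counting, correct by voltage."""
    soc, p = soc_init, 1.0
    for v in volts:
        soc -= current * DT / capacity_as          # predict
        p += q
        h = A_OCV                                  # d(voltage)/d(soc)
        k = p * h / (h * p * h + r)                # Kalman gain
        soc += k * (v - (A_OCV * soc + B_OCV - R0 * current))
        p *= (1.0 - k * h)
    return soc

true_soc, volts = simulate_cell(0.9, 3600.0, 1.0, 600)
est = kf_soc(volts, 1.0, 3600.0, 0.85)   # deliberately wrong initial SOC
```

Even with a wrong initial SOC, the voltage correction pulls the estimate onto the true trajectory, which is the practical appeal of the model-based approach over pure coulomb counting.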
DEFF Research Database (Denmark)
Nielsen, Jesper Ellerbæk; Thorndahl, Søren Liedtke; Rasmussen, Michael R.
2011-01-01
Distributed weather radar precipitation measurements are used as rainfall input for an urban drainage model to simulate the runoff from a small catchment in Denmark. It is demonstrated how the Generalized Likelihood Uncertainty Estimation (GLUE) methodology can be implemented and used to estimate...
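The GLUE methodology mentioned above can be sketched in a few lines: sample parameters from a prior, score each set with a likelihood measure (Nash-Sutcliffe efficiency here), and keep the "behavioral" sets. The one-parameter runoff model, data, and threshold below are invented for illustration.

```python
import random
random.seed(1)

# Hypothetical one-parameter runoff model: runoff = coeff * rain.
rain = [0.0, 2.0, 5.0, 3.0, 0.0, 1.0]
obs = [0.3 * r for r in rain]                     # synthetic observations

def nse(sim, obs):
    """Nash-Sutcliffe efficiency, a common GLUE likelihood measure."""
    mean_obs = sum(obs) / len(obs)
    num = sum((s - o) ** 2 for s, o in zip(sim, obs))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

def glue(n_samples=2000, threshold=0.7):
    """Monte Carlo sampling; keep 'behavioral' parameter sets."""
    behavioral = []
    for _ in range(n_samples):
        coeff = random.uniform(0.0, 1.0)          # prior: uniform
        sim = [coeff * r for r in rain]
        like = nse(sim, obs)
        if like >= threshold:
            behavioral.append((coeff, like))
    total = sum(l for _, l in behavioral)
    # likelihood-weighted mean of the behavioral parameters
    return sum(c * l for c, l in behavioral) / total, behavioral

coeff_hat, behavioral = glue()
```

The spread of the behavioral set, not just the weighted mean, is what GLUE uses to express predictive uncertainty.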
Campbell, D A; Chkrebtii, O
2013-12-01
Statistical inference for biochemical models often faces a variety of characteristic challenges. In this paper we examine state and parameter estimation for the JAK-STAT intracellular signalling mechanism, which exemplifies the implementation intricacies common in many biochemical inference problems. We introduce an extension to the Generalized Smoothing approach for estimating delay differential equation models, addressing selection of complexity parameters, choice of the basis system, and appropriate optimization strategies. Motivated by the JAK-STAT system, we further extend the generalized smoothing approach to consider a nonlinear observation process with additional unknown parameters, and highlight how the approach handles unobserved states and unevenly spaced observations. The methodology developed is generally applicable to problems of estimation for differential equation models with delays, unobserved states, nonlinear observation processes, and partially observed histories. Crown Copyright © 2013. Published by Elsevier Inc. All rights reserved.
Feedback structure based entropy approach for multiple-model estimation
Institute of Scientific and Technical Information of China (English)
Shen-tu Han; Xue Anke; Guo Yunfei
2013-01-01
The variable-structure multiple-model (VSMM) approach, one of the multiple-model (MM) methods, is a popular and effective approach for handling problems with mode uncertainties. Model sequence set adaptation (MSA) is the key to designing a better VSMM. However, MSA methods in the literature leave considerable room for improvement, both theoretically and practically. To this end, we propose a feedback-structure-based entropy approach that can find the model sequence sets with the smallest size under certain conditions. The filtered data are fed back in real time and can be used by the minimum entropy (ME) based VSMM algorithms, i.e., MEVSMM. Firstly, full Markov chains are used to achieve optimal solutions. Secondly, the myopic method, together with a particle filter (PF) and the challenge match algorithm, is used to achieve sub-optimal solutions, a trade-off between practicability and optimality. The numerical results show that the proposed algorithm provides not only refined model sets but also a good robustness margin and very high accuracy.
The model-based estimates of important cancer risk factors and screening behaviors are obtained by combining the responses to the Behavioral Risk Factor Surveillance System (BRFSS) and the National Health Interview Survey (NHIS).
A new geometric-based model to accurately estimate arm and leg inertial estimates.
Wicke, Jason; Dumas, Geneviève A
2014-06-03
Segment estimates of mass, center of mass, and moment of inertia are required input parameters for analyzing the forces and moments acting across the joints. The objectives of this study were to propose a new geometric model for limb segments, to evaluate it against criterion values obtained from DXA, and to compare its performance to five other popular models. Twenty-five female and 24 male college students participated in the study. For the criterion measures, the participants underwent a whole-body DXA scan, and estimates for segment mass, center of mass location, and moment of inertia (frontal plane) were directly computed from the DXA mass units. For the new model, the volume was determined from two standing frontal and sagittal photographs. Each segment was modeled as a stack of slices, the sections of which were ellipses if they were not adjoining another segment and sectioned ellipses if they were adjoining another segment (e.g. upper arm and trunk). The lengths of the axes of the ellipses were obtained from the photographs. In addition, a sex-specific, non-uniform density function was developed for each segment. A series of anthropometric measurements were also taken by directly following the definitions provided for the different body segment models tested, and the same parameters were determined for each model. Comparison of the models showed that estimates from the new model were consistently closer to the DXA criterion than those from the other models, with an error of less than 5% for mass and moment of inertia and less than about 6% for center of mass location. Copyright © 2014. Published by Elsevier Ltd.
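The slice-stacking idea lends itself to a short numerical sketch. The function below integrates mass, center of mass, and a frontal-plane moment of inertia over a stack of elliptical slices, assuming uniform density and treating each slice as a point mass along the long axis; the paper's sex-specific density functions and sectioned ellipses are omitted.

```python
import math

def segment_properties(semi_a, semi_b, dz, density=1050.0):
    """Mass [kg], COM along z [m], and frontal-plane moment of inertia about
    the COM [kg m^2] for a stack of elliptical slices; each slice is treated
    as a point mass on the long axis (a long-segment approximation)."""
    masses, zs = [], []
    for i, (a, b) in enumerate(zip(semi_a, semi_b)):
        masses.append(density * math.pi * a * b * dz)  # ellipse area * dz
        zs.append((i + 0.5) * dz)                      # slice mid-height
    mass = sum(masses)
    com = sum(m * z for m, z in zip(masses, zs)) / mass
    moi = sum(m * (z - com) ** 2 for m, z in zip(masses, zs))
    return mass, com, moi

# sanity check on a 0.4 m cylinder of radius 0.05 m (40 slices of 10 mm)
semi = [0.05] * 40
mass, com, moi = segment_properties(semi, semi, 0.01)
```

For the uniform cylinder the result reproduces the rod formula mL²/12 to within the discretization error, which is a convenient check before feeding in photograph-derived axes.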
Semiparametric Gaussian copula models : Geometry and efficient rank-based estimation
Segers, J.; van den Akker, R.; Werker, B.J.M.
2014-01-01
We propose, for multivariate Gaussian copula models with unknown margins and structured correlation matrices, a rank-based, semiparametrically efficient estimator for the Euclidean copula parameter. This estimator is defined as a one-step update of a rank-based pilot estimator in the direction of
Model-based estimation of finite population total in stratified sampling
African Journals Online (AJOL)
The work presented in this paper concerns the estimation of the finite population total under a model-based framework. A nonparametric regression approach as a method of estimating the finite population total is explored. The asymptotic properties of the estimators based on nonparametric regression are also developed under ...
Numerical Model based Reliability Estimation of Selective Laser Melting Process
DEFF Research Database (Denmark)
Mohanty, Sankhya; Hattel, Jesper Henri
2014-01-01
Selective laser melting is developing into a standard manufacturing technology with applications in various sectors. However, the process is still far from being at par with conventional processes such as welding and casting, the primary reason of which is the unreliability of the process. While...... of the selective laser melting process. A validated 3D finite-volume alternating-direction-implicit numerical technique is used to model the selective laser melting process, and is calibrated against results from single track formation experiments. Correlation coefficients are determined for process input...... parameters such as laser power, speed, beam profile, etc. Subsequently, uncertainties in the processing parameters are utilized to predict a range for the various outputs, using a Monte Carlo method based uncertainty analysis methodology, and the reliability of the process is established....
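The Monte Carlo uncertainty-propagation step can be illustrated with a toy surrogate standing in for the finite-volume model: sample the uncertain process inputs, push them through the surrogate, and report the fraction of outcomes inside a specification window. The surrogate form, coefficient, and spec limits below are all invented for the sketch.

```python
import random
random.seed(7)

K = 0.01  # invented surrogate coefficient

def melt_depth(power_w, speed_mm_s):
    """Toy surrogate: depth grows with power, shrinks with scan speed."""
    return K * power_w / speed_mm_s ** 0.5

def reliability(n=20000, spec=(0.050, 0.090)):
    """Fraction of Monte Carlo samples whose melt depth stays in spec."""
    lo, hi = spec
    ok = 0
    for _ in range(n):
        p = random.gauss(200.0, 10.0)    # laser power [W], uncertain
        v = random.gauss(800.0, 40.0)    # scan speed [mm/s], uncertain
        if lo <= melt_depth(p, v) <= hi:
            ok += 1
    return ok / n

rel = reliability()
```

The same loop applies unchanged when the surrogate is replaced by a calibrated process model; only the per-sample cost changes.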
Galvan, Jose Ramon; Saxena, Abhinav; Goebel, Kai Frank
2012-01-01
This article discusses several aspects of uncertainty representation and management for model-based prognostics methodologies, based on our experience with Kalman filters when applied to prognostics for electronics components. In particular, it explores the implications of modeling remaining useful life prediction as a stochastic process, and how this relates to uncertainty representation, management, and the role of prognostics in decision-making. A distinction between the interpretations of the estimated remaining useful life probability density function is explained, and a cautionary argument is provided against mixing the two interpretations when using prognostics to make critical decisions.
Response Surface Model (RSM)-based Benefit Per Ton Estimates
The tables below are updated versions of the tables appearing in The influence of location, source, and emission type in estimates of the human health benefits of reducing a ton of air pollution (Fann, Fulcher and Hubbell 2009).
A service based estimation method for MPSoC performance modelling
DEFF Research Database (Denmark)
Tranberg-Hansen, Anders Sejer; Madsen, Jan; Jensen, Bjørn Sand
2008-01-01
This paper presents an abstract service based estimation method for MPSoC performance modelling which allows fast, cycle accurate design space exploration of complex architectures including multi processor configurations at a very early stage in the design phase. The modelling method uses a service...... oriented model of computation based on Hierarchical Colored Petri Nets and allows the modelling of both software and hardware in one unified model. To illustrate the potential of the method, a small MPSoC system, developed at Bang & Olufsen ICEpower a/s, is modelled and performance estimates are produced...
This model-based approach uses data from both the Behavioral Risk Factor Surveillance System (BRFSS) and the National Health Interview Survey (NHIS) to produce estimates of the prevalence rates of cancer risk factors and screening behaviors at the state, health service area, and county levels.
Parameter Estimation and Model Selection for Indoor Environments Based on Sparse Observations
Directory of Open Access Journals (Sweden)
Y. Dehbi
2017-09-01
This paper presents a novel method for the parameter estimation and model selection for the reconstruction of indoor environments based on sparse observations. While most approaches for the reconstruction of indoor models rely on dense observations, we predict scenes of the interior with high accuracy in the absence of indoor measurements. We use a model-based top-down approach and incorporate strong but profound prior knowledge. The latter includes probability density functions for model parameters and sparse observations such as room areas and the building footprint. The floorplan model is characterized by linear and bi-linear relations with discrete and continuous parameters. We focus on the stochastic estimation of model parameters based on a topological model derived by combinatorial reasoning in a first step. A Gauss-Markov model is applied for estimation and simulation of the model parameters. Symmetries are represented and exploited during the estimation process. Background knowledge as well as observations are incorporated in a maximum likelihood estimation and model selection is performed with AIC/BIC. The likelihood is also used for the detection and correction of potential errors in the topological model. Estimation results are presented and discussed.
Parameter Estimation and Model Selection for Indoor Environments Based on Sparse Observations
Dehbi, Y.; Loch-Dehbi, S.; Plümer, L.
2017-09-01
This paper presents a novel method for the parameter estimation and model selection for the reconstruction of indoor environments based on sparse observations. While most approaches for the reconstruction of indoor models rely on dense observations, we predict scenes of the interior with high accuracy in the absence of indoor measurements. We use a model-based top-down approach and incorporate strong but profound prior knowledge. The latter includes probability density functions for model parameters and sparse observations such as room areas and the building footprint. The floorplan model is characterized by linear and bi-linear relations with discrete and continuous parameters. We focus on the stochastic estimation of model parameters based on a topological model derived by combinatorial reasoning in a first step. A Gauss-Markov model is applied for estimation and simulation of the model parameters. Symmetries are represented and exploited during the estimation process. Background knowledge as well as observations are incorporated in a maximum likelihood estimation and model selection is performed with AIC/BIC. The likelihood is also used for the detection and correction of potential errors in the topological model. Estimation results are presented and discussed.
Evaluation of Model Based State of Charge Estimation Methods for Lithium-Ion Batteries
Directory of Open Access Journals (Sweden)
Zhongyue Zou
2014-08-01
Four model-based State of Charge (SOC) estimation methods for lithium-ion (Li-ion) batteries are studied and evaluated in this paper. Unlike existing studies, this work evaluates different aspects of the SOC estimation, such as the estimation error distribution, the estimation rise time, and the estimation time consumption. The equivalent model of the battery is introduced and the state function of the model is deduced. The four model-based SOC estimation methods are analyzed first. Simulations and experiments are then established to evaluate the four methods. The urban dynamometer driving schedule (UDDS) current profiles are applied to simulate the driving situations of an electrified vehicle, and a genetic algorithm is utilized to identify the model parameters and find the optimal parameters of the Li-ion battery model. The simulations with and without disturbance are carried out and the results are analyzed. A battery test workbench is established and a Li-ion battery is used in a hardware-in-the-loop experiment. Experimental results are plotted and analyzed according to the four aspects to evaluate the four model-based SOC estimation methods.
Directory of Open Access Journals (Sweden)
Shem Kuyah
2016-02-01
The miombo woodland is the most extensive dry forest in the world, with the potential to store substantial amounts of biomass carbon. Efforts to obtain accurate estimates of carbon stocks in the miombo woodlands are limited by a general lack of biomass estimation models (BEMs). This study aimed to evaluate the accuracy of the most commonly employed allometric models for estimating aboveground biomass (AGB) in miombo woodlands, and to develop new models that enable more accurate estimation of biomass in the miombo woodlands. A generalizable mixed-species allometric model was developed from 88 trees belonging to 33 species, ranging in diameter at breast height (DBH) from 5 to 105 cm, using Bayesian estimation. A power-law model with DBH alone performed better than both a polynomial model with DBH and the square of DBH, and models including height and crown area as additional variables along with DBH. The accuracy of estimates from published models varied across different sites and trees of different diameter classes, and was lower than that of estimates from our model. The model developed in this study can be used to establish the conservative carbon stocks required to determine avoided emissions in performance-based payment schemes, for example in afforestation and reforestation activities.
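A power-law BEM of the form AGB = a·DBH^b is conventionally fit by least squares in log-log space (the paper itself uses Bayesian estimation). A minimal sketch on synthetic trees, with the coefficients chosen only for realistic magnitudes:

```python
import math

def fit_power_law(dbh, agb):
    """OLS fit of log(AGB) = log(a) + b*log(DBH)."""
    x = [math.log(d) for d in dbh]
    y = [math.log(m) for m in agb]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = math.exp(my - b * mx)
    return a, b

# synthetic trees generated from a = 0.1, b = 2.5 (magnitudes only)
dbh = [5.0, 10.0, 20.0, 40.0, 80.0, 105.0]
agb = [0.1 * d ** 2.5 for d in dbh]
a_hat, b_hat = fit_power_law(dbh, agb)
```

Back-transforming from log space introduces a known low bias on real (noisy) data, which is one reason the study's Bayesian treatment is attractive.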
Model Effects on GLAS-Based Regional Estimates of Forest Biomass and Carbon
Nelson, Ross
2008-01-01
ICESat/GLAS waveform data are used to estimate biomass and carbon on a 1.27 million sq km study area, the Province of Quebec, Canada, below treeline. The same input data sets and sampling design are used in conjunction with four different predictive models to estimate total aboveground dry forest biomass and forest carbon. The four models include nonstratified and stratified versions of a multiple linear model where either biomass or (square root of) biomass serves as the dependent variable. The use of different models in Quebec introduces differences in Provincial biomass estimates of up to 0.35 Gt (range 4.94+/-0.28 Gt to 5.29+/-0.36 Gt). The results suggest that if different predictive models are used to estimate regional carbon stocks in different epochs, e.g., y2005, y2015, one might mistakenly infer an apparent aboveground carbon "change" of, in this case, 0.18 Gt, or approximately 7% of the aboveground carbon in Quebec, due solely to the use of different predictive models. These findings argue for model consistency in future LiDAR-based carbon monitoring programs. Regional biomass estimates from the four GLAS models are compared to ground estimates derived from an extensive network of 16,814 ground plots located in southern Quebec. Stratified models proved to be more accurate and precise than either of the two nonstratified models tested.
Using satellite-based rainfall estimates for streamflow modelling: Bagmati Basin
Shrestha, M.S.; Artan, Guleid A.; Bajracharya, S.R.; Sharma, R. R.
2008-01-01
In this study, we have described a hydrologic modelling system that uses satellite-based rainfall estimates and weather forecast data for the Bagmati River Basin of Nepal. The hydrologic model described is the US Geological Survey (USGS) Geospatial Stream Flow Model (GeoSFM). The GeoSFM is a spatially semidistributed, physically based hydrologic model. We have used the GeoSFM to estimate the streamflow of the Bagmati Basin at the Pandhera Dovan hydrometric station. To determine the hydrologic connectivity, we have used the USGS Hydro1k DEM dataset. The model was forced by daily estimates of rainfall and evapotranspiration derived from weather model data. The rainfall estimates used for the modelling are those produced by the National Oceanic and Atmospheric Administration Climate Prediction Center and those observed at ground rain gauge stations. The model parameters were estimated from globally available soil and land cover datasets – the Digital Soil Map of the World by FAO and the USGS Global Land Cover dataset. The model predicted the daily streamflow at the Pandhera Dovan gauging station. The comparison of the simulated and observed flows at Pandhera Dovan showed that the GeoSFM model performed well in simulating the flows of the Bagmati Basin.
Vision-based stress estimation model for steel frame structures with rigid links
Park, Hyo Seon; Park, Jun Su; Oh, Byung Kwan
2017-07-01
This paper presents a stress estimation model for the safety evaluation of steel frame structures with rigid links using a vision-based monitoring system. In this model, the deformed shape of a structure under external loads is estimated via displacements measured by a motion capture system (MCS), which is a non-contact displacement measurement device. During the estimation of the deformed shape, the effective lengths of the rigid link ranges in the frame structure are identified. The radius of curvature of the structural member to be monitored is calculated using the estimated deformed shape and is employed to estimate stress. Using the MCS in the presented model, the safety of a structure can be assessed without attaching gauges. In addition, because the stress is directly extracted from the radius of curvature obtained from the measured deformed shape, information on the loadings and boundary conditions of the structure is not required. Furthermore, the model, which includes the identification of the effective lengths of the rigid links, can consider the influences of the stiffness of the connections and supports on the deformation in the stress estimation. To verify the applicability of the presented model, static loading tests on a steel frame specimen were conducted. By comparing the stress estimated by the model with the measured stress, the validity of the model was confirmed.
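The curvature-to-stress step can be sketched as follows: fit a circle through three measured points on the deformed member to obtain the radius of curvature R, then apply the bending relation σ = E·c/R. The modulus and section half-depth below are assumed illustration values, and the three points are generated from a known circle so the sketch can be checked.

```python
import math

E = 200e9   # assumed Young's modulus for steel [Pa]
C = 0.1     # assumed distance from neutral axis to extreme fibre [m]

def radius_from_points(p1, p2, p3):
    """Circumradius of triangle p1-p2-p3 = local radius of curvature."""
    a = math.dist(p2, p3)
    b = math.dist(p1, p3)
    c = math.dist(p1, p2)
    s = (a + b + c) / 2.0
    area = math.sqrt(s * (s - a) * (s - b) * (s - c))   # Heron's formula
    return a * b * c / (4.0 * area)

def bending_stress(p1, p2, p3):
    """sigma = E*c/R at the extreme fibre, from three measured points."""
    return E * C / radius_from_points(p1, p2, p3)

# three points taken from a circle of radius 500 m (a gently bent member)
R = 500.0
pts = [(R * math.sin(t), R * (1.0 - math.cos(t))) for t in (0.0, 0.001, 0.002)]
sigma = bending_stress(*pts)
```

Note that no load or boundary-condition information enters the computation, which mirrors the paper's point that stress follows directly from the measured deformed shape.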
Evaluation of Clear Sky Models for Satellite-Based Irradiance Estimates
Energy Technology Data Exchange (ETDEWEB)
Sengupta, Manajit [National Renewable Energy Lab. (NREL), Golden, CO (United States); Gotseff, Peter [National Renewable Energy Lab. (NREL), Golden, CO (United States)
2013-12-01
This report describes an intercomparison of three popular broadband clear sky solar irradiance model results with measured data, as well as satellite-based model clear sky results compared to measured clear sky data. The authors conclude that one of the popular clear sky models (the Bird clear sky model developed by Richard Bird and Roland Hulstrom) could serve as a more accurate replacement for current satellite-model clear sky estimations. Additionally, the analysis of the model results with respect to model input parameters indicates that rather than climatological, annual, or monthly mean input data, higher-time-resolution input parameters improve the general clear sky model performance.
International Nuclear Information System (INIS)
Lim, Young-Chul; Lee, Han-Seung; Noguchi, Takafumi
2009-01-01
This study aims to formulate a resistivity model whereby the concrete resistivity characterizing the environment of the steel reinforcement can be directly estimated and evaluated from measurements taken immediately above the reinforcement, as a method of evaluating corrosion deterioration in reinforced concrete structures. It also aims to provide a theoretical basis for the feasibility of durability evaluation by electric non-destructive techniques with no need for chipping of the cover concrete. This Resistivity Estimation Model (REM), a mathematical model using the mirror method, combines conventional four-electrode measurement of resistivity with geometric parameters including cover depth, bar diameter, and electrode intervals. The model was verified by comparing estimates obtained with it at areas directly above the reinforcement against resistivity measurements at areas unaffected by reinforcement. Both results correlated strongly, proving the validity of the model. It is expected to be applicable to laboratory study and field diagnosis of reinforcement corrosion. (author)
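The conventional four-electrode measurement that REM builds on reduces, for a Wenner array with equal electrode spacing a on a uniform half-space, to the textbook formula ρ = 2πaV/I. The sketch below applies that formula with illustrative readings; REM's mirror-method corrections for cover depth and bar diameter are not reproduced here.

```python
import math

def wenner_resistivity(spacing_m, voltage_v, current_a):
    """Apparent resistivity [ohm m] of a half-space, Wenner electrode array."""
    return 2.0 * math.pi * spacing_m * voltage_v / current_a

# illustrative reading: 50 mm spacing, 0.4 V across the inner electrodes, 1 mA
rho = wenner_resistivity(0.05, 0.4, 0.001)
```

Over embedded reinforcement the apparent resistivity from this formula is biased low, which is exactly the effect the paper's geometric correction terms are designed to remove.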
Directory of Open Access Journals (Sweden)
Pengpeng Jiao
2014-01-01
The real-time traffic signal control of an intersection requires dynamic turning movements as the basic input data. It is impossible to detect dynamic turning movements directly through current traffic surveillance systems, but dynamic origin-destination (O-D) estimation can obtain them. However, combined models of dynamic O-D estimation and real-time traffic signal control are rare in the literature. A framework for a multiobjective traffic signal control model for intersections based on dynamic O-D estimation (MSC-DODE) is presented. A state-space model using Kalman filtering is first formulated to estimate the dynamic turning movements; then a revised sequential Kalman filtering algorithm is designed to solve the model, and the root mean square error and mean percentage error are used to evaluate the accuracy of the estimated dynamic turning proportions. Furthermore, a multiobjective traffic signal control model is put forward to yield real-time signal control parameters and evaluation indices. Finally, based on practical survey data, the evaluation indices from MSC-DODE are compared with those from the Webster method. The actual and estimated turning movements are further input into MSC-DODE, respectively, and the results are also compared. Case studies show that the results of MSC-DODE are better than those of the Webster method and are very close to the unavailable actual values.
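The two accuracy measures used above are straightforward to state in code; the turning-proportion data below are made up for illustration.

```python
def rmse(est, obs):
    """Root mean square error."""
    return (sum((e - o) ** 2 for e, o in zip(est, obs)) / len(obs)) ** 0.5

def mpe(est, obs):
    """Mean percentage error (observed values must be non-zero)."""
    return 100.0 * sum((e - o) / o for e, o in zip(est, obs)) / len(obs)

# made-up left / through / right turning proportions
observed = [0.20, 0.50, 0.30]
estimated = [0.22, 0.48, 0.30]
```

RMSE measures the overall magnitude of the error, while MPE is signed, so it reveals systematic over- or under-estimation that RMSE hides.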
Using optical remote sensing model to estimate oil slick thickness based on satellite image
International Nuclear Information System (INIS)
Lu, Y C; Tian, Q J; Lyu, C G; Fu, W X; Han, W C
2014-01-01
An optical remote sensing model has been established based on two-beam interference theory to estimate marine oil slick thickness. The extinction coefficient and the normalized reflectance of oil are two important parts of this model. The extinction coefficient is an important inherent optical property and does not vary as the background reflectance changes. The normalized reflectance can be used to eliminate the background differences between in situ measured spectra and the remotely sensed image. Therefore, marine oil slick thickness and area can be estimated and mapped based on optical remote sensing imagery and the extinction coefficient.
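Under the strong simplifying assumption that absorption dominates (ignoring the interference term), the normalized reflectance of a film of thickness d decays as R = exp(−2αd) over the down-and-back light path, which inverts to d = −ln R / (2α). This Beer-Lambert reduction is only a sketch of the role the extinction coefficient plays in the paper's two-beam model, and the value of α below is assumed.

```python
import math

ALPHA = 0.8  # assumed extinction coefficient [1/mm]

def thickness_mm(norm_reflectance):
    """Invert R = exp(-2*alpha*d) for the film thickness d [mm]."""
    return -math.log(norm_reflectance) / (2.0 * ALPHA)

d = thickness_mm(0.45)   # a film that returns 45% of the reference signal
```

The normalization of the reflectance is what removes the background dependence, leaving the decay governed by the inherent extinction coefficient alone.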
A case study to estimate costs using Neural Networks and regression based models
Directory of Open Access Journals (Sweden)
Nadia Bhuiyan
2012-07-01
Bombardier Aerospace's high-performance aircraft and services set the utmost standard for the aerospace industry. A case study in collaboration with Bombardier Aerospace was conducted in order to estimate the target cost of a landing gear. More precisely, the study uses both parametric and neural network models to estimate the cost of main landing gears, a major aircraft commodity. A comparative analysis between the parametric model and the neural network models is conducted in order to determine the most accurate method for predicting the cost of a main landing gear. Several trials are presented for the design and use of the neural network model. The analysis for the case under study shows the flexibility in the design of the neural network model. Furthermore, the performance of the neural network model is deemed superior to that of the parametric models for this case study.
An estimation framework for building information modeling (BIM)-based demolition waste by type.
Kim, Young-Chan; Hong, Won-Hwa; Park, Jae-Woo; Cha, Gi-Wook
2017-12-01
Most existing studies on demolition waste (DW) quantification do not follow an official standard for estimating the amount and type of DW. Therefore, the existing literature is limited in its ability to estimate DW with a consistent classification system. Building information modeling (BIM) is a technology that can generate and manage all the information required during the life cycle of a building, from design to demolition. Nevertheless, there has been a lack of research regarding its application to the demolition stage of a building. For an effective waste management plan, the estimation of the type and volume of DW should begin at the building design stage. However, the lack of tools hinders such early estimation. This study proposes a BIM-based framework that estimates DW in the early design stages, to achieve effective and streamlined planning, processing, and management. Specifically, the input construction materials in the Korean construction classification system were matched with those in the BIM library. Based on this matching, estimates of DW by type were calculated by applying weight/unit-volume factors and rates of DW volume change. To verify the framework, its operation was demonstrated by means of actual BIM modeling and by comparing its results with those available in the literature. This study is expected to contribute not only to the estimation of DW at the building level, but also to the automated estimation of DW at the district level.
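The core conversion in such a framework can be sketched as: BIM material volumes × weight-per-unit-volume factors for mass, and × volume-change rates for loose (post-demolition) volume. All factor values below are placeholders, not the study's Korean classification values.

```python
# material: (tonnes per m^3, volume-change rate after demolition) -- placeholders
FACTORS = {
    "concrete": (2.4, 1.4),
    "brick": (1.9, 1.3),
    "timber": (0.6, 1.2),
}

def demolition_waste(bim_volumes_m3):
    """Map BIM material volumes to {material: (mass_t, loose_volume_m3)}."""
    out = {}
    for mat, vol in bim_volumes_m3.items():
        density, change = FACTORS[mat]
        out[mat] = (vol * density, vol * change)
    return out

dw = demolition_waste({"concrete": 120.0, "brick": 35.0, "timber": 12.0})
```

Because the volumes come straight from the BIM model, the same calculation runs at design time, which is the framework's main selling point over post-hoc surveys.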
Optimization of Simple Monetary Policy Rules on the Base of Estimated DSGE-model
Shulgin, A.
2015-01-01
Optimization of the coefficients in monetary policy rules is performed on the basis of a DSGE model with two independent monetary policy instruments estimated on Russian data. It was found that welfare-maximizing policy rules lead to inadequate results and pro-cyclical monetary policy. Optimal coefficients in the Taylor rule and the exchange rate rule allow the volatility estimated on Russian data for 2001-2012 to be decreased by about 20%. The degree-of-exchange-rate-flexibility parameter was found to be low...
An adaptive ARX model to estimate the RUL of aluminum plates based on its crack growth
Barraza-Barraza, Diana; Tercero-Gómez, Víctor G.; Beruvides, Mario G.; Limón-Robles, Jorge
2017-01-01
A wide variety of Condition-Based Maintenance (CBM) techniques deal with the problem of predicting the time of an asset fault. Most statistical approaches rely on historical failure data, which might not be available in several practical situations. To address this issue, practitioners might require the use of self-starting approaches that consider only the available knowledge about the current degradation process and the asset operating context to update the prognostic model. Some authors use autoregressive (AR) models for this purpose; these are adequate when the asset operating context is constant, but if it is variable, the accuracy of the models can be affected. In this paper, three autoregressive models with exogenous variables (ARX) were constructed, and their capability to estimate the remaining useful life (RUL) of a process was evaluated following the case of the aluminum crack growth problem. An existing stochastic model of aluminum crack growth was implemented and used to assess the RUL estimation performance of the proposed ARX models through extensive Monte Carlo simulations. Point and interval estimations were made based only on individual history, behavior, operating conditions, and failure thresholds. Both analytic and bootstrapping techniques were used in the estimation process. Finally, by including recursive parameter estimation and a forgetting factor, the ARX methodology adapts to changing operating conditions and maintains the focus on the current degradation level of an asset.
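The recursive-parameter-estimation-with-forgetting idea can be sketched as recursive least squares on a first-order ARX model y_k = a·y_{k−1} + b·u_k. The data are synthetic (not the paper's crack-growth model), and the 2×2 matrix algebra is written out by hand to stay dependency-free.

```python
def rls_arx(ys, us, lam=0.98):
    """RLS with forgetting factor lam for y_k = a*y_{k-1} + b*u_k."""
    a, b = 0.0, 0.0
    p = [[1e3, 0.0], [0.0, 1e3]]              # covariance; large = vague prior
    for k in range(1, len(ys)):
        phi = [ys[k - 1], us[k]]              # regressor
        pphi = [p[0][0] * phi[0] + p[0][1] * phi[1],
                p[1][0] * phi[0] + p[1][1] * phi[1]]
        denom = lam + phi[0] * pphi[0] + phi[1] * pphi[1]
        gain = [pphi[0] / denom, pphi[1] / denom]
        err = ys[k] - (a * phi[0] + b * phi[1])   # one-step prediction error
        a += gain[0] * err
        b += gain[1] * err
        p = [[(p[i][j] - gain[i] * pphi[j]) / lam for j in range(2)]
             for i in range(2)]
    return a, b

# synthetic data from a = 0.9, b = 0.5 with a square-wave input
us = [1.0 if (k // 10) % 2 == 0 else -1.0 for k in range(200)]
ys = [0.0]
for k in range(1, 200):
    ys.append(0.9 * ys[k - 1] + 0.5 * us[k])
a_hat, b_hat = rls_arx(ys, us)
```

With lam < 1 old observations are progressively discounted, which is what lets the model re-converge after the operating context (here the exogenous input) changes.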
Research on bathymetry estimation by Worldview-2 based with the semi-analytical model
Sheng, L.; Bai, J.; Zhou, G.-W.; Zhao, Y.; Li, Y.-C.
2015-04-01
The South Sea Islands of China are far from the mainland; reefs account for more than 95% of the South Sea, and most reefs are scattered over sensitive disputed areas. Thus, methods for accurately obtaining reef bathymetry urgently need to be developed. Commonly used methods, including sonar, airborne laser, and remote sensing estimation, are limited by the long distances, large areas, and sensitive locations. Remote sensing data provide an effective way to estimate bathymetry over large areas without physical contact, via the relationship between spectral information and bathymetry. Aimed at the water quality of the South Sea of China, our paper develops a bathymetry estimation method that requires no measured water depths. First, the semi-analytical optimization model of the theoretical interpretation models is studied, based on a genetic algorithm to optimize the model. Meanwhile, an OpenMP parallel computing algorithm is introduced to greatly increase the speed of the semi-analytical optimization model. One island in the South Sea of China is selected as our study area, and measured water depths are used to evaluate the accuracy of the bathymetry estimated from Worldview-2 multispectral images. The results show that the semi-analytical optimization model based on the genetic algorithm performs well in our study area, and the accuracy of the estimated bathymetry in the 0-20 m shallow-water area is acceptable. The semi-analytical optimization model based on a genetic algorithm solves the problem of bathymetry estimation without water depth measurements. In general, our paper provides a new bathymetry estimation method for sensitive reefs far away from the mainland.
Satellite-based ET estimation using Landsat 8 images and SEBAL model
Directory of Open Access Journals (Sweden)
Bruno Bonemberger da Silva
Full Text Available ABSTRACT Estimation of evapotranspiration is a key factor in achieving sustainable water management in irrigated agriculture, because it represents the water use of crops. Satellite-based estimations provide advantages compared to direct methods such as lysimeters, especially when the objective is to calculate evapotranspiration at a regional scale. The present study aimed to estimate actual evapotranspiration (ET) at a regional scale, using Landsat 8 - OLI/TIRS images and complementary data collected from a weather station. The SEBAL model was used in South-West Paraná, a region composed of irrigated and dry agricultural areas, native vegetation, and urban areas. Five Landsat 8 images, path 223 and row 78, DOY 336/2013, 19/2014, 35/2014, 131/2014 and 195/2014, were used, from which ET at daily scale was estimated as a residual of the surface energy balance to produce ET maps. The steps for obtaining ET using SEBAL include radiometric calibration and the calculation of reflectance, surface albedo, vegetation indexes (NDVI, SAVI, and LAI), and emissivity. These parameters were obtained from the reflective bands of the orbital sensor, with surface temperature estimated from the thermal band. The ET values estimated in agricultural areas, native vegetation, and urban areas using the SEBAL algorithm were compatible with those reported in the literature, and ET errors between the SEBAL estimates and the Penman-Monteith FAO 56 equation were less than or equal to 1.00 mm day-1.
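The core of the residual approach is the surface energy balance LE = Rn - G - H. A minimal sketch follows, with illustrative flux values; note that operational SEBAL scales to daily ET via the evaporative fraction rather than by simply extrapolating an instantaneous rate, as done here for brevity.

```python
def sebal_daily_et(rn, g, h, lambda_v=2.45e6):
    """Latent heat flux as the energy-balance residual LE = Rn - G - H
    (all fluxes in W/m^2), converted to an equivalent evaporation rate
    in mm/day under the simplifying assumption that the instantaneous
    rate holds all day. lambda_v: latent heat of vaporization (J/kg)."""
    le = rn - g - h          # residual of the surface energy balance
    et_mm_s = le / lambda_v  # kg/m^2/s, i.e. mm of water per second
    return et_mm_s * 86400   # mm/day

# Illustrative midday fluxes over an irrigated field
print(round(sebal_daily_et(rn=500.0, g=50.0, h=200.0), 2))  # → 8.82
```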
3D fluoroscopic image estimation using patient-specific 4DCBCT-based motion models
International Nuclear Information System (INIS)
Dhou, S; Hurwitz, M; Cai, W; Rottmann, J; Williams, C; Wagar, M; Berbeco, R; Lewis, J H; Mishra, P; Li, R; Ionascu, D
2015-01-01
3D fluoroscopic images represent volumetric patient anatomy during treatment with high spatial and temporal resolution. 3D fluoroscopic images estimated using motion models built from 4DCT images, taken days or weeks prior to treatment, do not reliably represent patient anatomy during treatment. In this study we developed and performed an initial evaluation of techniques to build patient-specific motion models from 4D cone-beam CT (4DCBCT) images, taken immediately before treatment, and used these models to estimate 3D fluoroscopic images based on 2D kV projections captured during treatment. We evaluate the accuracy of the 3D fluoroscopic images by comparison to ground truth digital and physical phantom images. The performance of 4DCBCT-based and 4DCT-based motion models is compared in simulated clinical situations representing tumor baseline shift or initial patient positioning errors. The results of this study demonstrate the ability of 4DCBCT imaging to generate motion models that can account for changes that cannot be accounted for with 4DCT-based motion models. When simulating tumor baseline shifts and patient positioning errors of up to 5 mm, the average tumor localization error and the 95th percentile error in six datasets were 1.20 and 2.2 mm, respectively, for 4DCBCT-based motion models. 4DCT-based motion models applied to the same six datasets resulted in an average tumor localization error and 95th percentile error of 4.18 and 5.4 mm, respectively. Analysis of voxel-wise intensity differences was also conducted for all experiments. In summary, this study demonstrates the feasibility of 4DCBCT-based 3D fluoroscopic image generation in digital and physical phantoms and shows the potential advantage of 4DCBCT-based 3D fluoroscopic image estimation when there are changes in anatomy between the time of 4DCT imaging and the time of treatment delivery. (paper)
SBML-PET-MPI: a parallel parameter estimation tool for Systems Biology Markup Language based models.
Zi, Zhike
2011-04-01
Parameter estimation is crucial for the modeling and dynamic analysis of biological systems. However, implementing parameter estimation is time consuming and computationally demanding. Here, we introduce a parallel parameter estimation tool for Systems Biology Markup Language (SBML)-based models (SBML-PET-MPI). SBML-PET-MPI allows the user to perform parameter estimation and parameter uncertainty analysis by collectively fitting multiple experimental datasets. The tool is developed and parallelized using the message passing interface (MPI) protocol, which provides good scalability with the number of processors. SBML-PET-MPI is freely available for non-commercial use at http://www.bioss.uni-freiburg.de/cms/sbml-pet-mpi.html or http://sites.google.com/site/sbmlpetmpi/.
Residential air exchange rates (AERs) are a key determinant in the infiltration of ambient air pollution indoors. Population-based human exposure models using probabilistic approaches to estimate personal exposure to air pollutants have relied on input distributions from AER meas...
A Web-Based Model to Estimate the Impact of Best Management Practices
Directory of Open Access Journals (Sweden)
Youn Shik Park
2014-03-01
Full Text Available The Spreadsheet Tool for the Estimation of Pollutant Load (STEPL) can be used for Total Maximum Daily Load (TMDL) processes, since the model is capable of simulating the impacts of various best management practices (BMPs) and low impact development (LID) practices. The model computes average annual direct runoff using the Soil Conservation Service Curve Number (SCS-CN) method with average rainfall per event, which is not a typical use of the SCS-CN method. Five SCS-CN-based approaches to compute average annual direct runoff were investigated to explore estimated differences in average annual direct runoff computations, using daily precipitation data collected from the National Climate Data Center and generated by the CLIGEN model for twelve stations in Indiana. Compared to the average annual direct runoff computed for the typical use of the SCS-CN method, the approaches to estimate average annual direct runoff within EPA STEPL showed large differences. A web-based model (STEPL WEB) was developed with a corrected approach to estimate average annual direct runoff. Moreover, the model was integrated with the Web-based Load Duration Curve Tool, which identifies the least-cost BMPs for each land use and optimizes BMP selection to identify the most cost-effective BMP implementations. The integrated tools provide an easy-to-use approach for performing TMDL analysis and identifying cost-effective approaches for controlling nonpoint source pollution.
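The event-scale SCS-CN computation at the heart of STEPL can be sketched as follows; the curve number and storm depth below are illustrative values, not figures from the study.

```python
def scs_cn_runoff(p_mm, cn):
    """Direct runoff Q (mm) from event rainfall P (mm) by the SCS
    curve-number method:
      S  = 25400/CN - 254   (potential maximum retention, mm)
      Ia = 0.2 * S          (initial abstraction)
      Q  = (P - Ia)^2 / (P - Ia + S)  for P > Ia, else 0."""
    s = 25400.0 / cn - 254.0
    ia = 0.2 * s
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

# A 60 mm storm on a pasture-like surface (CN = 75, illustrative)
print(round(scs_cn_runoff(60.0, 75), 1))  # → 14.5
```

Applying this equation with an *average* rainfall per event, as the original STEPL does, is exactly the atypical use the abstract criticizes, since Q is nonlinear in P.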
Set-based dynamical parameter estimation and model invalidation for biochemical reaction networks.
Rumschinski, Philipp; Borchers, Steffen; Bosio, Sandro; Weismantel, Robert; Findeisen, Rolf
2010-05-25
Mathematical modeling and analysis have become, for the study of biological and cellular processes, an important complement to experimental research. However, the structural and quantitative knowledge available for such processes is frequently limited, and measurements are often subject to inherent and possibly large uncertainties. This results in competing model hypotheses, whose kinetic parameters may not be experimentally determinable. Discriminating among these alternatives and estimating their kinetic parameters is crucial to improve the understanding of the considered process and to benefit from the analytical tools at hand. In this work we present a set-based framework that makes it possible to discriminate between competing model hypotheses and to provide guaranteed outer estimates on the model parameters that are consistent with the (possibly sparse and uncertain) experimental measurements. This is obtained by means of exact proofs of model invalidity that exploit the polynomial/rational structure of biochemical reaction networks, and by making use of an efficient strategy to balance solution accuracy and computational effort. The practicability of our approach is illustrated with two case studies. The first study shows that our approach can conclusively rule out wrong model hypotheses. The second study focuses on parameter estimation, and shows that the proposed method makes it possible to evaluate the global influence of measurement sparsity, uncertainty, and prior knowledge on the parameter estimates. This can help in designing further experiments leading to improved parameter estimates.
Parallel Factor-Based Model for Two-Dimensional Direction Estimation
Directory of Open Access Journals (Sweden)
Nizar Tayem
2017-01-01
Full Text Available Two-dimensional (2D) Direction-of-Arrival (DOA) estimation of elevation and azimuth angles for noncoherent, mixed coherent and noncoherent, and coherent sources using three extended parallel uniform linear arrays (ULAs) is proposed. Most of the existing schemes have drawbacks in estimating 2D DOAs for multiple narrowband incident sources, as follows: use of a large number of snapshots, estimation failure for elevation and azimuth angles in the range typical of mobile communication, and inability to handle coherent sources. Moreover, DOA estimation for multiple sources requires complex pair-matching methods. The algorithm proposed in this paper is based on a first-order data matrix to overcome these problems. The main contributions of the proposed method are as follows: (1) it avoids the estimation failure problem using a new antenna configuration and estimates elevation and azimuth angles for coherent sources; (2) it reduces the estimation complexity by constructing Toeplitz data matrices, which are based on a single or a few snapshots; (3) it derives a parallel factor (PARAFAC) model to avoid pair-matching problems between multiple sources. Simulation results demonstrate the effectiveness of the proposed algorithm.
Fog Density Estimation and Image Defogging Based on Surrogate Modeling for Optical Depth.
Jiang, Yutong; Sun, Changming; Zhao, Yu; Yang, Li
2017-05-03
In order to estimate fog density correctly and to remove fog from foggy images appropriately, a surrogate model for optical depth is presented in this paper. We comprehensively investigate various fog-relevant features and propose a novel feature based on the hue, saturation, and value color space which correlates well with the perception of fog density. We use a surrogate-based method to learn a refined polynomial regression model for optical depth with informative fog-relevant features such as dark-channel, saturation-value, and chroma, which are selected on the basis of sensitivity analysis. Based on the obtained accurate surrogate model for optical depth, an effective method for fog density estimation and image defogging is proposed. The effectiveness of our proposed method is verified quantitatively and qualitatively by experimental results on both synthetic and real-world foggy images.
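The surrogate idea, a low-order polynomial regression from fog-relevant features to optical depth, can be sketched in one dimension. The feature values and the quadratic ground truth below are synthetic, and the paper's actual model uses several features selected by sensitivity analysis.

```python
import numpy as np

# Hypothetical training pairs: a fog-relevant feature x (e.g. a
# saturation-value statistic) against known optical depth d.
x = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6])
d = 2.0 * x**2 + 0.5 * x + 0.1       # synthetic ground truth

coeffs = np.polyfit(x, d, deg=2)     # least-squares quadratic surrogate
surrogate = np.poly1d(coeffs)

# Predict optical depth for an unseen feature value
print(round(float(surrogate(0.35)), 3))  # → 0.52
```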
Directory of Open Access Journals (Sweden)
Peter Scarborough
2016-11-01
Full Text Available Abstract Background The DisMod II model is designed to estimate epidemiological parameters for diseases where measured data are incomplete and has been used to provide estimates of disease incidence for the Global Burden of Disease study. We assessed the external validity of the DisMod II model by comparing modelled estimates of the incidence of first acute myocardial infarction (AMI) in England in 2010 with estimates derived from a linked dataset of hospital records and death certificates. Methods Inputs for DisMod II were prevalence rates of ever having had an AMI taken from a population health survey, total mortality rates, and AMI mortality rates taken from death certificates. By definition, remission rates were zero. We estimated first AMI incidence in an external dataset from England in 2010 using a linked dataset including all hospital admissions and death certificates since 1998. 95 % confidence intervals were derived around estimates from the external dataset and the DisMod II estimates, based on sampling variance and reported uncertainty in prevalence estimates, respectively. Results Estimates of the incidence rate for the whole population were higher in the DisMod II results than in the external dataset (+54 % for men and +26 % for women). Age-specific results showed that the DisMod II results over-estimated incidence for all but the oldest age groups. Confidence intervals for the DisMod II and external dataset estimates did not overlap for most age groups. Conclusion By comparison with AMI incidence rates in England, DisMod II did not achieve external validity for age-specific incidence rates, but did provide global estimates of incidence of similar magnitude to measured estimates. The model should be used with caution when estimating age-specific incidence rates.
Developing a new solar radiation estimation model based on Buckingham theorem
Ekici, Can; Teke, Ismail
2018-06-01
While the value of solar radiation can be expressed physically on cloudless days, this becomes difficult under cloudy and complicated weather conditions. In addition, solar radiation measurements are often not taken in developing countries. In such cases, solar radiation estimation models are used; they estimate solar radiation from other meteorological parameters measured at the stations. In this study, a solar radiation estimation model was obtained using the Buckingham theorem, which expresses solar radiation through derived dimensionless pi parameters. The derived model was compared with temperature-based models from the literature, namely the Allen, Hargreaves, Chen, and Bristow-Campbell models, using the MPE, RMSE, MBE, and NSE error analysis methods and data obtained from North Dakota's agricultural climate network. In these comparisons, the model obtained in this study gave better results; in particular, its short-term performance was found to be satisfactory, and the RMSE analysis shows that it achieves better accuracy than the other models. The model also gave good results in terms of long-term performance and percentage errors, and the Buckingham theorem was found useful for estimating solar radiation.
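A minimal sketch of one of the temperature-based comparison models (a Hargreaves-type formula) together with an RMSE check follows; the coefficient krs, the Ra value, and the sample data are assumptions for illustration, not values from the study.

```python
import math

def hargreaves_rs(tmax_c, tmin_c, ra, krs=0.17):
    """Hargreaves-type daily solar radiation estimate,
    Rs = krs * sqrt(Tmax - Tmin) * Ra (same units as Ra, MJ/m^2/day).
    krs is roughly 0.16 for interior and 0.19 for coastal sites."""
    return krs * math.sqrt(tmax_c - tmin_c) * ra

def rmse(obs, est):
    """Root-mean-square error between observed and estimated series."""
    return math.sqrt(sum((o - e) ** 2 for o, e in zip(obs, est)) / len(obs))

ra = 30.0                                    # extraterrestrial radiation, MJ/m^2/day (assumed)
tmax, tmin = [25.0, 28.0, 22.0], [12.0, 15.0, 10.0]
est = [hargreaves_rs(a, b, ra) for a, b in zip(tmax, tmin)]
obs = [18.0, 18.5, 17.0]                     # hypothetical measurements
print([round(e, 2) for e in est], round(rmse(obs, est), 2))
```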
Model-based estimation with boundary side information or boundary regularization
International Nuclear Information System (INIS)
Chiao, P.C.; Rogers, W.L.; Fessler, J.A.; Clinthorne, N.H.; Hero, A.O.
1994-01-01
The authors have previously developed a model-based strategy for joint estimation of myocardial perfusion and boundaries using ECT (Emission Computed Tomography). The authors have also reported difficulties with boundary estimation in low contrast and low count rate situations. In this paper, the authors propose using boundary side information (obtainable from high resolution MRI and CT images) or boundary regularization to improve both perfusion and boundary estimation in these situations. To fuse boundary side information into the emission measurements, the authors formulate a joint log-likelihood function to include auxiliary boundary measurements as well as ECT projection measurements. In addition, the authors introduce registration parameters to align auxiliary boundary measurements with ECT measurements and jointly estimate these parameters with other parameters of interest from the composite measurements. In simulated PET O-15 water myocardial perfusion studies using a simplified model, the authors show that the joint estimation improves perfusion estimation performance and gives boundary alignment accuracy of <0.5 mm even at 0.2 million counts. The authors implement boundary regularization through formulating a penalized log-likelihood function. The authors also demonstrate in simulations that simultaneous regularization of the epicardial boundary and myocardial thickness gives perfusion estimation accuracy comparable to that obtained with boundary side information.
Chiao, P C; Rogers, W L; Fessler, J A; Clinthorne, N H; Hero, A O
1994-01-01
The authors have previously developed a model-based strategy for joint estimation of myocardial perfusion and boundaries using ECT (emission computed tomography). They have also reported difficulties with boundary estimation in low contrast and low count rate situations. Here they propose using boundary side information (obtainable from high resolution MRI and CT images) or boundary regularization to improve both perfusion and boundary estimation in these situations. To fuse boundary side information into the emission measurements, the authors formulate a joint log-likelihood function to include auxiliary boundary measurements as well as ECT projection measurements. In addition, they introduce registration parameters to align auxiliary boundary measurements with ECT measurements and jointly estimate these parameters with other parameters of interest from the composite measurements. In simulated PET O-15 water myocardial perfusion studies using a simplified model, the authors show that the joint estimation improves perfusion estimation performance and gives boundary alignment accuracy of <0.5 mm even at 0.2 million counts. They implement boundary regularization through formulating a penalized log-likelihood function. They also demonstrate in simulations that simultaneous regularization of the epicardial boundary and myocardial thickness gives perfusion estimation accuracy comparable to that obtained with boundary side information.
A novel Gaussian model based battery state estimation approach: State-of-Energy
International Nuclear Information System (INIS)
He, HongWen; Zhang, YongZhi; Xiong, Rui; Wang, Chun
2015-01-01
Highlights: • The Gaussian model is employed to construct a novel battery model. • The genetic algorithm is used to implement model parameter identification. • The AIC is used to decide the best hysteresis order of the battery model. • A novel battery SoE estimator is proposed and verified by two kinds of batteries. - Abstract: State-of-energy (SoE) is a very important index for the battery management system (BMS) used in electric vehicles (EVs); it is indispensable for ensuring safe and reliable operation of batteries. To estimate battery SoE accurately, the main work can be summarized in three aspects. (1) Considering that different kinds of batteries show different open circuit voltage behaviors, the Gaussian model is employed to construct the battery model. Furthermore, the genetic algorithm is employed to locate the optimal parameters of the selected battery model. (2) To determine an optimal tradeoff between battery model complexity and prediction precision, the Akaike information criterion (AIC) is used to determine the best hysteresis order of the combined battery model. Results from a comparative analysis show that the first-order hysteresis battery model is the best based on the AIC values. (3) The central difference Kalman filter (CDKF) is used to estimate the real-time SoE, and an erroneous initial SoE is considered to evaluate the robustness of the SoE estimator. Lastly, two kinds of lithium-ion batteries are used to verify the proposed SoE estimation approach. The results show that the maximum SoE estimation error is within 1% for both the LiFePO4 and LiMn2O4 battery datasets.
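The AIC-based selection of the hysteresis order can be sketched as below. The residual sums of squares and parameter counts are invented for illustration, and the least-squares form of the AIC is assumed.

```python
import math

def aic(rss, n, k):
    """Akaike information criterion for a least-squares fit:
    AIC = n * ln(rss / n) + 2k, where n is the number of samples
    and k the number of fitted parameters. Lower is better."""
    return n * math.log(rss / n) + 2 * k

# Hypothetical fits for hysteresis orders 1..3: order -> (rss, k).
# Higher order fits slightly better but pays a complexity penalty.
n = 100
candidates = {1: (0.80, 4), 2: (0.78, 6), 3: (0.77, 8)}
scores = {order: aic(rss, n, k) for order, (rss, k) in candidates.items()}
best = min(scores, key=scores.get)
print(best)  # → 1
```

With these illustrative numbers the marginal fit improvement of higher orders does not justify the extra parameters, mirroring the abstract's conclusion that the first-order hysteresis model wins on AIC.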
Chi, Yulang; Zhang, Huanteng; Huang, Qiansheng; Lin, Yi; Ye, Guozhu; Zhu, Huimin; Dong, Sijun
2018-02-01
Environmental risks of organic chemicals are largely determined by their persistence, bioaccumulation, and toxicity (PBT) and by their physicochemical properties. Major regulations in different countries and regions identify chemicals according to their bioconcentration factor (BCF) and octanol-water partition coefficient (Kow), which frequently displays a substantial correlation with the sediment sorption coefficient (Koc). Half-life or degradability is crucial for the persistence evaluation of chemicals. Quantitative structure-activity relationship (QSAR) estimation models are indispensable for predicting environmental fate and health effects in the absence of field- or laboratory-based data. In this study, 39 chemicals of high concern were chosen for half-life testing based on total organic carbon (TOC) degradation, and two widely accepted and highly used QSAR estimation models (i.e., EPI Suite and PBT Profiler) were adopted for environmental risk evaluation. The experimental results, the estimated data, and the two model-based results were compared on the basis of water solubility, Kow, Koc, BCF, and half-life. Environmental risk assessment of the selected compounds was achieved by combining experimental data and estimation models. It was concluded that both EPI Suite and PBT Profiler were fairly accurate in estimating the physicochemical properties and the degradation half-lives for water, soil, and sediment. However, the experimental and estimated half-lives were still not fully consistent. This suggests deficiencies of the prediction models in some respects, and the necessity of combining experimental data and predicted results for the evaluation of environmental fate and risks of pollutants. Copyright © 2016. Published by Elsevier B.V.
Gaze Estimation for Off-Angle Iris Recognition Based on the Biometric Eye Model
Energy Technology Data Exchange (ETDEWEB)
Karakaya, Mahmut [ORNL; Barstow, Del R [ORNL; Santos-Villalobos, Hector J [ORNL; Thompson, Joseph W [ORNL; Bolme, David S [ORNL; Boehnen, Chris Bensing [ORNL
2013-01-01
Iris recognition is among the most accurate biometrics. However, its accuracy relies on controlled, high-quality capture data and is negatively affected by several factors such as angle, occlusion, and dilation. Non-ideal iris recognition is a new research focus in biometrics. In this paper, we present a gaze estimation method designed for use in an off-angle iris recognition framework based on the ANONYMIZED biometric eye model. Gaze estimation is an important prerequisite step to correct off-angle iris images. To achieve an accurate frontal reconstruction of an off-angle iris image, we first need to estimate the eye gaze direction from the elliptical features of the iris image. Typically, additional information such as well-controlled light sources, head-mounted equipment, and multiple cameras is not available. Our approach utilizes only the iris and pupil boundary segmentation, allowing it to be applicable to all iris capture hardware. We compare the boundaries with a look-up-table generated using our biologically inspired biometric eye model and find the closest feature point in the look-up-table to estimate the gaze. Based on results from real images, the proposed method shows effective gaze estimation accuracy for our biometric eye model, with an average error of approximately 3.5 degrees over a 50 degree range.
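The look-up-table matching step can be sketched as follows. This toy version uses a single feature (the projected axis ratio of the iris ellipse, which falls off roughly as the cosine of the gaze angle) rather than the full biometric eye model; the table resolution and all values are illustrative.

```python
import math

def estimate_gaze(observed_ratio, lookup):
    """Nearest-neighbour search in a precomputed lookup table mapping
    gaze angles (degrees) to an elliptical boundary feature, here the
    minor/major axis ratio of the projected iris."""
    return min(lookup, key=lambda angle: abs(lookup[angle] - observed_ratio))

# For a circle viewed off-axis, axis ratio ~ cos(gaze angle);
# build a coarse table over the 0-50 degree range.
table = {a: math.cos(math.radians(a)) for a in range(0, 51, 5)}

print(estimate_gaze(0.87, table))  # → 30
```

A finer table (or interpolation between entries) trades memory for angular resolution, which is the usual design choice in lookup-table matching.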
Karanjekar, Richa V; Bhatt, Arpita; Altouqui, Said; Jangikhatoonabad, Neda; Durai, Vennila; Sattler, Melanie L; Hossain, M D Sahadat; Chen, Victoria
2015-12-01
Accurately estimating landfill methane emissions is important for quantifying a landfill's greenhouse gas emissions and power generation potential. Current models, including LandGEM and IPCC, often greatly simplify the treatment of factors like rainfall and ambient temperature, which can substantially impact gas production. The newly developed Capturing Landfill Emissions for Energy Needs (CLEEN) model aims to improve landfill methane generation estimates while still requiring only inputs that are fairly easy to obtain: waste composition, annual rainfall, and ambient temperature. To develop the model, methane generation was measured from 27 laboratory-scale landfill reactors with varying waste compositions (ranging from 0% to 100%); average rainfall rates of 2, 6, and 12 mm/day; and temperatures of 20, 30, and 37°C, according to a statistical experimental design. Refuse components considered were the major biodegradable wastes (food, paper, yard/wood, and textile) as well as inert inorganic waste. Based on the data collected, a multiple linear regression equation (R(2)=0.75) was developed to predict first-order methane generation rate constant values k as functions of waste composition, annual rainfall, and temperature. Because laboratory methane generation rates exceed field rates, a second scale-up regression equation for k was developed using actual gas-recovery data from 11 landfills in high-income countries with conventional operation. The CLEEN model was developed by incorporating both regression equations into the first-order decay based model for estimating methane generation rates from landfills. CLEEN model values were compared to actual field data from 6 US landfills, and to estimates from LandGEM and IPCC. For 4 of the 6 cases, CLEEN model estimates were the closest to actual. Copyright © 2015 Elsevier Ltd. All rights reserved.
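The first-order-decay structure underlying CLEEN (and LandGEM) can be sketched as below. The decay rate k, the generation potential L0, and the disposal history are illustrative, and CLEEN's regression for k as a function of waste composition, rainfall, and temperature is not reproduced here.

```python
import math

def methane_rate(year, disposals, k=0.05, l0=100.0):
    """First-order-decay methane generation rate (m^3/yr) in a given year.

    disposals : {deposit_year: mass_Mg} waste acceptance history
    k         : first-order decay rate constant (1/yr)
    l0        : methane generation potential (m^3 CH4 per Mg waste)

    Each year's waste contributes k * L0 * M * exp(-k * age)."""
    q = 0.0
    for dep_year, mass in disposals.items():
        age = year - dep_year
        if age >= 0:
            q += k * l0 * mass * math.exp(-k * age)
    return q

waste = {2000: 10000.0, 2001: 12000.0, 2002: 11000.0}  # Mg accepted per year
print(round(methane_rate(2005, waste)))
```

In CLEEN, k would come from the two regression equations (lab-scale fit plus field scale-up) instead of being a fixed constant.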
A Consistent Methodology Based Parameter Estimation for a Lactic Acid Bacteria Fermentation Model
DEFF Research Database (Denmark)
Spann, Robert; Roca, Christophe; Kold, David
2017-01-01
Lactic acid bacteria are used in many industrial applications, e.g. as starter cultures in the dairy industry or as probiotics, and research on their cell production is highly required. A first principles kinetic model was developed to describe and understand the biological, physical, and chemical...... mechanisms in a lactic acid bacteria fermentation. We present here a consistent approach for a methodology based parameter estimation for a lactic acid fermentation. In the beginning, just an initial knowledge based guess of parameters was available and an initial parameter estimation of the complete set...... of parameters was performed in order to get a good model fit to the data. However, not all parameters are identifiable with the given data set and model structure. Sensitivity, identifiability, and uncertainty analysis were completed and a relevant identifiable subset of parameters was determined for a new...
Reliability Estimation of Aero-engine Based on Mixed Weibull Distribution Model
Yuan, Zhongda; Deng, Junxiang; Wang, Dawei
2018-02-01
An aero-engine is a complex mechanical-electronic system, and in the reliability analysis of such systems the Weibull distribution model plays an irreplaceable role. Until now, only the two-parameter and three-parameter Weibull distribution models have been widely used. Due to the diversity of engine failure modes, a single Weibull distribution model carries a large error. By contrast, a mixed Weibull distribution model can take a variety of engine failure modes into account, making it a good statistical analysis model. In addition to the concept of a dynamic weight coefficient, a three-parameter correlation coefficient optimization method is applied to enhance the Weibull distribution model so that reliability estimation results are more accurate, which greatly improves the precision of the mixed-distribution reliability model. All of this is advantageous for popularizing the Weibull distribution model in engineering applications.
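The mixed Weibull reliability function itself is straightforward to write down; the weights and shape/scale parameters below are illustrative, not fitted aero-engine values.

```python
import math

def mixed_weibull_reliability(t, components):
    """Reliability of a mixed Weibull model:
    R(t) = sum_i w_i * exp(-(t/eta_i)^beta_i), with sum_i w_i = 1.
    components: list of (weight, beta, eta) tuples, one per failure mode."""
    return sum(w * math.exp(-((t / eta) ** beta)) for w, beta, eta in components)

# Two hypothetical failure modes: early-life (beta < 1) and wear-out (beta > 1)
modes = [(0.3, 0.8, 500.0), (0.7, 2.5, 2000.0)]
print(round(mixed_weibull_reliability(1000.0, modes), 3))
```

Fitting such a mixture (weights plus per-mode shape and scale) is where the dynamic weight coefficient and correlation-coefficient optimization mentioned in the abstract come in.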
Estimation of the applicability domain of kernel-based machine learning models for virtual screening
Directory of Open Access Journals (Sweden)
Fechner Nikolas
2010-03-01
Full Text Available Abstract Background The virtual screening of large compound databases is an important application of structural-activity relationship models. Due to the high structural diversity of these data sets, it is impossible for machine learning based QSAR models, which rely on a specific training set, to give reliable results for all compounds. Thus, it is important to consider the subset of the chemical space in which the model is applicable. The approaches to this problem that have been published so far mostly use vectorial descriptor representations to define this domain of applicability of the model. Unfortunately, these cannot be extended easily to structured kernel-based machine learning models. For this reason, we propose three approaches to estimate the domain of applicability of a kernel-based QSAR model. Results We evaluated three kernel-based applicability domain estimations using three different structured kernels on three virtual screening tasks. Each experiment consisted of the training of a kernel-based QSAR model using support vector regression and the ranking of a disjoint screening data set according to the predicted activity. For each prediction, the applicability of the model for the respective compound is quantitatively described using a score obtained by an applicability domain formulation. The suitability of the applicability domain estimation is evaluated by comparing the model performance on the subsets of the screening data sets obtained by different thresholds for the applicability scores. This comparison indicates that it is possible to separate the part of the chemspace, in which the model gives reliable predictions, from the part consisting of structures too dissimilar to the training set to apply the model successfully. A closer inspection reveals that the virtual screening performance of the model is considerably improved if half of the molecules, those with the lowest applicability scores, are omitted from the screening
Fechner, Nikolas; Jahn, Andreas; Hinselmann, Georg; Zell, Andreas
2010-03-11
The virtual screening of large compound databases is an important application of structural-activity relationship models. Due to the high structural diversity of these data sets, it is impossible for machine learning based QSAR models, which rely on a specific training set, to give reliable results for all compounds. Thus, it is important to consider the subset of the chemical space in which the model is applicable. The approaches to this problem that have been published so far mostly use vectorial descriptor representations to define this domain of applicability of the model. Unfortunately, these cannot be extended easily to structured kernel-based machine learning models. For this reason, we propose three approaches to estimate the domain of applicability of a kernel-based QSAR model. We evaluated three kernel-based applicability domain estimations using three different structured kernels on three virtual screening tasks. Each experiment consisted of the training of a kernel-based QSAR model using support vector regression and the ranking of a disjoint screening data set according to the predicted activity. For each prediction, the applicability of the model for the respective compound is quantitatively described using a score obtained by an applicability domain formulation. The suitability of the applicability domain estimation is evaluated by comparing the model performance on the subsets of the screening data sets obtained by different thresholds for the applicability scores. This comparison indicates that it is possible to separate the part of the chemspace, in which the model gives reliable predictions, from the part consisting of structures too dissimilar to the training set to apply the model successfully. A closer inspection reveals that the virtual screening performance of the model is considerably improved if half of the molecules, those with the lowest applicability scores, are omitted from the screening. The proposed applicability domain formulations
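The thresholding experiment described above, omitting the compounds with the lowest applicability-domain scores before ranking, can be sketched as follows; the compound ids, scores, and keep fraction are illustrative.

```python
def screen_with_domain(predictions, scores, keep_fraction=0.5):
    """Rank a screening set by predicted activity, after discarding the
    fraction of compounds with the lowest applicability-domain scores.

    predictions : {compound_id: predicted activity}
    scores      : {compound_id: applicability-domain score}"""
    n_keep = int(len(scores) * keep_fraction)
    # keep only the compounds the model is most applicable to
    kept = sorted(scores, key=scores.get, reverse=True)[:n_keep]
    # rank the survivors by predicted activity, best first
    return sorted(kept, key=lambda c: predictions[c], reverse=True)

preds = {"c1": 0.9, "c2": 0.8, "c3": 0.7, "c4": 0.95}
dom = {"c1": 0.9, "c2": 0.2, "c3": 0.8, "c4": 0.1}  # c2, c4 outside domain
print(screen_with_domain(preds, dom))  # → ['c1', 'c3']
```

Note that c4, the top-scoring prediction, is dropped because the model is not applicable to it, which is exactly the behavior the applicability-domain formulation is meant to enforce.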
Jeong, Jina; Park, Eungyu; Han, Weon Shik; Kim, Kue-Young; Jun, Seong-Chun; Choung, Sungwook; Yun, Seong-Taek; Oh, Junho; Kim, Hyun-Jun
2017-11-01
In this study, a data-driven method for predicting CO2 leaks and associated concentrations from geological CO2 sequestration is developed. Several candidate models are compared based on their reproducibility and predictive capability for CO2 concentration measurements from the Environment Impact Evaluation Test (EIT) site in Korea. Based on the data mining results, a one-dimensional solution of the advective-dispersive equation for steady flow (i.e., Ogata-Banks solution) is found to be most representative for the test data, and this model is adopted as the data model for the developed method. In the validation step, the method is applied to estimate future CO2 concentrations with the reference estimation by the Ogata-Banks solution, where a part of earlier data is used as the training dataset. From the analysis, it is found that the ensemble mean of multiple estimations based on the developed method shows high prediction accuracy relative to the reference estimation. In addition, the majority of the data to be predicted are included in the proposed quantile interval, which suggests adequate representation of the uncertainty by the developed method. Therefore, the incorporation of a reasonable physically-based data model enhances the prediction capability of the data-driven model. The proposed method is not confined to estimations of CO2 concentration and may be applied to various real-time monitoring data from subsurface sites to develop automated control, management or decision-making systems.
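The Ogata-Banks data model used as the physical backbone of the method can be written down directly; the velocity, dispersion coefficient, and observation point below are illustrative, not EIT-site values.

```python
import math

def ogata_banks(x, t, v, d, c0=1.0):
    """Ogata-Banks solution of the 1-D advection-dispersion equation for a
    continuous source of concentration c0 at x = 0 under steady flow:
    C/C0 = 0.5 * [erfc((x - v t)/(2 sqrt(D t)))
                  + exp(v x / D) * erfc((x + v t)/(2 sqrt(D t)))]."""
    s = 2.0 * math.sqrt(d * t)
    return c0 * 0.5 * (math.erfc((x - v * t) / s)
                       + math.exp(v * x / d) * math.erfc((x + v * t) / s))

# Breakthrough at x = 10 m for v = 0.5 m/d, D = 1.0 m^2/d (assumed values)
for t in (10.0, 20.0, 40.0):
    print(t, round(ogata_banks(10.0, t, 0.5, 1.0), 3))
```

Calibrating v and D of this curve against early monitoring data, then extrapolating with uncertainty bounds, mirrors the training/prediction split described in the abstract.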
Dimensional Model for Estimating Factors influencing Childhood Obesity: Path Analysis Based Modeling
Directory of Open Access Journals (Sweden)
Maryam Kheirollahpour
2014-01-01
Full Text Available The main objective of this study is to identify and develop a comprehensive model which estimates and evaluates the overall relations among the factors that lead to weight gain in children by using structural equation modeling. The proposed models in this study explore the connection among the socioeconomic status of the family, parental feeding practice, and physical activity. Six structural models were tested to identify the direct and indirect relationships among socioeconomic status, parental feeding practice, general level of physical activity, and weight status of children. Finally, a comprehensive model was devised to show how these factors relate to each other as well as to the body mass index (BMI) of the children simultaneously. Concerning the methodology of the current study, confirmatory factor analysis (CFA) was applied to reveal the hidden (secondary) effect of socioeconomic factors on feeding practice and ultimately on the weight status of the children and also to determine the degree of model fit. The comprehensive structural model tested in this study suggested that there are significant direct and indirect relationships among variables of interest. Moreover, the results suggest that parental feeding practice and physical activity are mediators in the structural model.
Schenkl, Sebastian; Muggenthaler, Holger; Hubig, Michael; Erdmann, Bodo; Weiser, Martin; Zachow, Stefan; Heinrich, Andreas; Güttler, Felix Victor; Teichgräber, Ulf; Mall, Gita
2017-05-01
Temperature-based death time estimation is based either on simple phenomenological models of corpse cooling or on detailed physical heat transfer models. The latter are much more complex but allow a higher accuracy of death time estimation, as in principle, all relevant cooling mechanisms can be taken into account. Here, a complete workflow for finite element-based cooling simulation is presented. The following steps are demonstrated on a CT phantom: (1) computed tomography (CT) scan; (2) segmentation of the CT images for thermodynamically relevant features of individual geometries and compilation in a geometric computer-aided design (CAD) model; (3) conversion of the segmentation result into a finite element (FE) simulation model; (4) computation of the model cooling curve (MOD); (5) calculation of the cooling time (CTE). For the first time in FE-based cooling time estimation, the steps from the CT image over segmentation to FE model generation are performed semi-automatically. The cooling time calculation results are compared to cooling measurements performed on the phantoms under controlled conditions. In this context, the method is validated using a CT phantom. Some of the phantoms' thermodynamic material parameters had to be determined via independent experiments. Moreover, the impact of geometry and material parameter uncertainties on the estimated cooling time is investigated by a sensitivity analysis.
Facial motion parameter estimation and error criteria in model-based image coding
Liu, Yunhai; Yu, Lu; Yao, Qingdong
2000-04-01
Model-based image coding has been given extensive attention due to its high subjective image quality and low bit-rates. But the estimation of object motion parameters is still a difficult problem, and there are no proper error criteria for quality assessment that are consistent with visual properties. This paper presents an algorithm for facial motion parameter estimation based on feature point correspondence and gives motion parameter error criteria. The facial motion model comprises three parts. The first part is the global 3-D rigid motion of the head, the second part is non-rigid translation motion in the jaw area, and the third part consists of local non-rigid expression motion in the eye and mouth areas. The feature points are automatically selected by a function of edges, brightness and end-nodes outside the blocks of eyes and mouth. The number of feature points is adjusted adaptively. The jaw translation motion is tracked by the changes of the feature point positions of the jaw. The areas of non-rigid expression motion can be rebuilt by using a block-pasting method. An approach to estimating motion parameter error based on the quality of the reconstructed image is suggested, and an area error function and the error function of contour transition-turn rate are used as quality criteria. The criteria properly reflect the image geometric distortion caused by the error of the estimated motion parameters.
Goran Stahl; Svetlana Saarela; Sebastian Schnell; Soren Holm; Johannes Breidenbach; Sean P. Healey; Paul L. Patterson; Steen Magnussen; Erik Naesset; Ronald E. McRoberts; Timothy G. Gregoire
2016-01-01
This paper focuses on the use of models for increasing the precision of estimators in large-area forest surveys. It is motivated by the increasing availability of remotely sensed data, which facilitates the development of models predicting the variables of interest in forest surveys. We present, review and compare three different estimation frameworks where...
Model-Based Load Estimation for Predictive Condition Monitoring of Wind Turbines
DEFF Research Database (Denmark)
Perisic, Nevena; Pederen, Bo Juul; Grunnet, Jacob Deleuran
signal is performed online, and a Load Indicator Signal (LIS) is formulated as a ratio between current estimated accumulated fatigue loads and its expected value based only on a priori knowledge (WTG dynamics and wind climate). LOT initialisation is based on a priori knowledge and can be obtained using...... programme for pre-maintenance actions. The performance of LOT is demonstrated by applying it to one of the most critical WTG components, the gearbox. Model-based load CMS for gearbox requires only standard WTG SCADA data. Direct measuring of gearbox fatigue loads requires high cost and low reliability...... measurement equipment. Thus, LOT can significantly reduce the price of load monitoring....
Improved regression models for ventilation estimation based on chest and abdomen movements
International Nuclear Information System (INIS)
Liu, Shaopeng; Gao, Robert; He, Qingbo; Staudenmayer, John; Freedson, Patty
2012-01-01
Non-invasive estimation of minute ventilation is important for quantifying the intensity of physical activity of individuals. In this paper, several improved regression models are presented, based on the measurement of chest and abdomen movements from sensor belts worn by subjects (n = 50) engaged in 14 types of physical activity. Five linear models involving a combination of 11 features were developed, and the effects of different model training approaches and window sizes for computing the features were investigated. The performance of the models was evaluated using experimental data collected during the physical activity protocol. The predicted minute ventilation was compared to the criterion ventilation measured using a bidirectional digital volume transducer housed in a respiratory gas exchange system. The results indicate that the inclusion of breathing frequency and the use of percentile points instead of interdecile ranges over a 60 s window size reduced error by about 43%, when applied to the classical two-degrees-of-freedom model. The mean percentage error of the minute ventilation estimated for all the activities was below 7.5%, verifying reasonably good performance of the models and the applicability of the wearable sensing system for minute ventilation estimation during physical activity. (paper)
Model methodology for estimating pesticide concentration extremes based on sparse monitoring data
Vecchia, Aldo V.
2018-03-22
This report describes a new methodology for using sparse (weekly or less frequent observations) and potentially highly censored pesticide monitoring data to simulate daily pesticide concentrations and associated quantities used for acute and chronic exposure assessments, such as the annual maximum daily concentration. The new methodology is based on a statistical model that expresses log-transformed daily pesticide concentration in terms of a seasonal wave, flow-related variability, long-term trend, and serially correlated errors. Methods are described for estimating the model parameters, generating conditional simulations of daily pesticide concentration given sparse (weekly or less frequent) and potentially highly censored observations, and estimating concentration extremes based on the conditional simulations. The model can be applied to datasets with as few as 3 years of record, as few as 30 total observations, and as few as 10 uncensored observations. The model was applied to atrazine, carbaryl, chlorpyrifos, and fipronil data for U.S. Geological Survey pesticide sampling sites with sufficient data for applying the model. A total of 112 sites were analyzed for atrazine, 38 for carbaryl, 34 for chlorpyrifos, and 33 for fipronil. The results are summarized in this report, and R functions, described in this report and provided in an accompanying model archive, can be used to fit the model parameters and generate conditional simulations of daily concentrations for use in investigations involving pesticide exposure risk and uncertainty.
Infant bone age estimation based on fibular shaft length: model development and clinical validation
International Nuclear Information System (INIS)
Tsai, Andy; Stamoulis, Catherine; Bixby, Sarah D.; Breen, Micheal A.; Connolly, Susan A.; Kleinman, Paul K.
2016-01-01
Bone age in infants (<1 year old) is generally estimated using hand/wrist or knee radiographs, or by counting ossification centers. The accuracy and reproducibility of these techniques are largely unknown. To develop and validate an infant bone age estimation technique using fibular shaft length and compare it to conventional methods. We retrospectively reviewed negative skeletal surveys of 247 term-born low-risk-of-abuse infants (no persistent child protection team concerns) from July 2005 to February 2013, and randomized them into two datasets: (1) model development (n = 123) and (2) model testing (n = 124). Three pediatric radiologists measured all fibular shaft lengths. An ordinary linear regression model was fitted to dataset 1, and the model was evaluated using dataset 2. Readers also estimated infant bone ages in dataset 2 using (1) the hemiskeleton method of Sontag, (2) the hemiskeleton method of Elgenmark, (3) the hand/wrist atlas of Greulich and Pyle, and (4) the knee atlas of Pyle and Hoerr. For validation, we selected lower-extremity radiographs of 114 normal infants with no suspicion of abuse. Readers measured the fibulas and also estimated bone ages using the knee atlas. Bone age estimates from the proposed method were compared to the other methods. The proposed method outperformed all other methods in accuracy and reproducibility. Its accuracy was similar for the testing and validating datasets, with root-mean-square error of 36 days and 37 days; mean absolute error of 28 days and 31 days; and error variability of 22 days and 20 days, respectively. This study provides strong support for an infant bone age estimation technique based on fibular shaft length as a more accurate alternative to conventional methods. (orig.)
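The core of the proposed technique is an ordinary linear regression of bone age on fibular shaft length, evaluated by root-mean-square error on a held-out set. The sketch below uses synthetic data with made-up coefficients, not the study's fitted model.

```python
import numpy as np

# Hypothetical training data: fibular shaft length (mm) vs. age (days).
rng = np.random.default_rng(1)
length_mm = rng.uniform(60, 120, size=123)          # model-development set size
age_days = 3.5 * length_mm - 180 + rng.normal(scale=15, size=123)

# Ordinary least squares fit: age = a * length + b
A = np.column_stack([length_mm, np.ones_like(length_mm)])
coef, *_ = np.linalg.lstsq(A, age_days, rcond=None)

def estimate_bone_age(length):
    return coef[0] * length + coef[1]

# Accuracy on the data, analogous to the reported root-mean-square error in days
pred = estimate_bone_age(length_mm)
rmse = float(np.sqrt(np.mean((pred - age_days) ** 2)))
```

In the study, the same fit would be performed on reader-measured fibular lengths from dataset 1 and evaluated against dataset 2 and the validation set.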
Parameter Estimation of a Delay Time Model of Wearing Parts Based on Objective Data
Directory of Open Access Journals (Sweden)
Y. Tang
2015-01-01
Full Text Available The wearing parts of a system have a very high failure frequency, making it necessary to carry out continual functional inspections and maintenance to protect the system from unscheduled downtime. This allows for the collection of a large amount of maintenance data. Taking the unique characteristics of the wearing parts into consideration, we establish their respective delay time models in ideal inspection cases and nonideal inspection cases. The model parameters are estimated entirely using the collected maintenance data. Then, a likelihood function of all renewal events is derived based on their occurrence probability functions, and the model parameters are calculated with the maximum likelihood function method, which is solved by the CRM. Finally, using two wearing parts from the oil and gas drilling industry as examples—the filter element and the blowout preventer rubber core—the parameters of the distribution function of the initial failure time and the delay time for each example are estimated, and their distribution functions are obtained. Such parameter estimation based on objective data will contribute to the optimization of a reasonable function inspection interval and will also provide some theoretical models to support the integrity management of equipment or systems.
Directory of Open Access Journals (Sweden)
Prashant K. Srivastava
2017-10-01
Full Text Available Reference evapotranspiration (ETo) and soil moisture deficit (SMD) are vital for understanding hydrological processes, particularly in the context of sustainable water use efficiency around the globe. Precise estimation of ETo and SMD is required for developing appropriate forecasting systems, in hydrological modeling and also in precision agriculture. In this study, the surface temperature downscaled from the Weather Research and Forecasting (WRF) model is used to estimate ETo using the boundary conditions that are provided by the European Centre for Medium-Range Weather Forecasts (ECMWF). In order to understand the performance, Hamon's method is employed to estimate the ETo using the temperature from the meteorological station and WRF-derived variables. After estimating the ETo, a range of linear and non-linear models is utilized to retrieve SMD. The performance statistics such as RMSE, %Bias, and Nash-Sutcliffe Efficiency (NSE) indicate that the exponential model (RMSE = 0.226; %Bias = −0.077; NSE = 0.616) is efficient for SMD estimation using the observed ETo, in comparison to the other linear and non-linear models (RMSE range = 0.019–0.667; %Bias range = 2.821–6.894; NSE = 0.013–0.419) used in this study. On the other hand, in the scenario where SMD is estimated using ETo based on WRF-downscaled meteorological variables, the linear model is found promising (RMSE = 0.017; %Bias = 5.280; NSE = 0.448) as compared to the non-linear models (RMSE range = 0.022–0.707; %Bias range = −0.207 to −6.088; NSE range = 0.013–0.149). Our findings also suggest that all the models perform better during the growing season (RMSE range = 0.024–0.025; %Bias range = −4.982 to −3.431; r = 0.245–0.281) than the non-growing season (RMSE range = 0.011–0.12; %Bias range = 33.073–32.701; r = 0.161–0.244) for SMD estimation.
Autoregressive-model-based missing value estimation for DNA microarray time series data.
Choong, Miew Keen; Charbit, Maurice; Yan, Hong
2009-01-01
Missing value estimation is important in DNA microarray data analysis. A number of algorithms have been developed to solve this problem, but they have several limitations. Most existing algorithms are not able to deal with the situation where a particular time point (column) of the data is missing entirely. In this paper, we present an autoregressive-model-based missing value estimation method (ARLSimpute) that takes into account the dynamic property of microarray temporal data and the local similarity structures in the data. ARLSimpute is especially effective for the situation where a particular time point contains many missing values or where the entire time point is missing. Experiment results suggest that our proposed algorithm is an accurate missing value estimator in comparison with other imputation methods on simulated as well as real microarray time series datasets.
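The idea behind an autoregressive imputer can be sketched as follows: fit AR(p) coefficients to the observed part of a series by least squares, then predict the missing time point from the preceding p values. This toy version works on a single series; ARLSimpute additionally exploits local similarity structures across genes, which this sketch does not attempt.

```python
import numpy as np

def fit_ar(series, p):
    # Least-squares fit of AR(p) coefficients to a 1-D series:
    # y_t = coef[0]*y_{t-p} + ... + coef[p-1]*y_{t-1}
    X = np.column_stack([series[i:len(series) - p + i] for i in range(p)])
    y = series[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def ar_predict(series, coef):
    # One-step-ahead prediction from the last p observed values.
    p = len(coef)
    return float(series[-p:] @ coef)

# Expression-like series following a known AR(2) process
rng = np.random.default_rng(7)
x = np.zeros(200)
for i in range(2, 200):
    x[i] = 0.3 * x[i - 1] + 0.6 * x[i - 2] + rng.normal(scale=0.1)

coef = fit_ar(x[:150], p=2)
imputed = ar_predict(x[:150], coef)  # estimate for the "missing" point x[150]
```

Because the fitted coefficients recover the generating dynamics, the imputed value tracks the withheld observation up to the innovation noise.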
Pillow, Jonathan W; Ahmadian, Yashar; Paninski, Liam
2011-01-01
One of the central problems in systems neuroscience is to understand how neural spike trains convey sensory information. Decoding methods, which provide an explicit means for reading out the information contained in neural spike responses, offer a powerful set of tools for studying the neural coding problem. Here we develop several decoding methods based on point-process neural encoding models, or forward models that predict spike responses to stimuli. These models have concave log-likelihood functions, which allow efficient maximum-likelihood model fitting and stimulus decoding. We present several applications of the encoding model framework to the problem of decoding stimulus information from population spike responses: (1) a tractable algorithm for computing the maximum a posteriori (MAP) estimate of the stimulus, the most probable stimulus to have generated an observed single- or multiple-neuron spike train response, given some prior distribution over the stimulus; (2) a gaussian approximation to the posterior stimulus distribution that can be used to quantify the fidelity with which various stimulus features are encoded; (3) an efficient method for estimating the mutual information between the stimulus and the spike trains emitted by a neural population; and (4) a framework for the detection of change-point times (the time at which the stimulus undergoes a change in mean or variance) by marginalizing over the posterior stimulus distribution. We provide several examples illustrating the performance of these estimators with simulated and real neural data.
[Estimation of forest canopy chlorophyll content based on PROSPECT and SAIL models].
Yang, Xi-guang; Fan, Wen-yi; Yu, Ying
2010-11-01
The forest canopy chlorophyll content directly reflects the health and stress of the forest. Accurate estimation of the forest canopy chlorophyll content is a significant foundation for researching forest ecosystem cycle models. In the present paper, the inversion of the forest canopy chlorophyll content was based on the PROSPECT and SAIL models, approaching the problem from the physical mechanism angle. First, leaf spectra and canopy spectra were simulated by the PROSPECT and SAIL models, respectively, and a leaf chlorophyll content look-up table was established for leaf chlorophyll content retrieval. Then leaf chlorophyll content was converted into canopy chlorophyll content using the Leaf Area Index (LAI). Finally, canopy chlorophyll content was estimated from a Hyperion image. The results indicated that the main effective bands for chlorophyll content were 400-900 nm; the leaf and canopy spectra simulated by the PROSPECT and SAIL models fit the measured spectra well, with 7.06% and 16.49% relative error, respectively; the RMSE of the LAI inversion was 0.5426; and the forest canopy chlorophyll content was estimated well by the PROSPECT and SAIL models, with a precision of 77.02%.
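The look-up-table inversion step can be sketched generically: precompute spectra over a grid of chlorophyll contents, pick the grid entry whose spectrum best matches the measurement, then scale leaf content to canopy content by LAI. The toy spectrum function below only mimics the qualitative behavior of a chlorophyll absorption feature; it is not the real PROSPECT radiative transfer model, and all values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
wavelengths = np.linspace(400, 900, 50)  # nm, the main effective band range

def toy_leaf_spectrum(cab):
    # Stand-in for a PROSPECT-simulated leaf reflectance: the absorption
    # feature near 670 nm deepens with chlorophyll content cab (ug/cm^2).
    return 0.5 * np.exp(-cab / 100.0 * np.exp(-((wavelengths - 670) / 80.0) ** 2))

# Build the look-up table over a grid of chlorophyll contents
cab_grid = np.linspace(10, 80, 141)
lut = np.array([toy_leaf_spectrum(c) for c in cab_grid])

def invert_cab(measured):
    # Return the LUT entry whose spectrum is closest (RMSE) to the measurement.
    rmse = np.sqrt(((lut - measured) ** 2).mean(axis=1))
    return cab_grid[np.argmin(rmse)]

measured = toy_leaf_spectrum(42.0) + rng.normal(scale=0.002, size=wavelengths.size)
cab_leaf = invert_cab(measured)
lai = 3.2
canopy_chl = cab_leaf * lai  # leaf content scaled to canopy content via LAI
```

In the paper the table entries come from PROSPECT simulations and the measurements from Hyperion imagery, but the nearest-spectrum search is the same.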
Estimation of the Diesel Particulate Filter Soot Load Based on an Equivalent Circuit Model
Directory of Open Access Journals (Sweden)
Yanting Du
2018-02-01
Full Text Available In order to estimate the diesel particulate filter (DPF) soot load and improve the accuracy of regeneration timing, a novel method based on an equivalent circuit model, derived from the electric-fluid analogy, is proposed. This method can reduce the impact of transient engine operation on the soot load estimate, accurately calculate the flow resistance, and improve the estimation accuracy of the soot load. Firstly, the least squares method is used to identify the flow resistance based on World Harmonized Transient Cycle (WHTC) test data, and the relationship between flow resistance, exhaust temperature and soot load is established. Secondly, online estimation of the soot load is achieved by using a dual extended Kalman filter (DEKF). The results show that this method has good convergence and robustness, with a maximal absolute error of 0.2 g/L at regeneration timing, which can meet engineering requirements. Additionally, this method can estimate the soot load under transient engine operating conditions and avoids the large number of experimental tests, extensive calibration and analysis of complex chemical reactions required by traditional methods.
Reliable Dual Tensor Model Estimation in Single and Crossing Fibers Based on Jeffreys Prior
Yang, Jianfei; Poot, Dirk H. J.; Caan, Matthan W. A.; Su, Tanja; Majoie, Charles B. L. M.; van Vliet, Lucas J.; Vos, Frans M.
2016-01-01
Purpose: This paper presents and studies a framework for reliable modeling of diffusion MRI using a data-acquisition adaptive prior. Methods: Automated relevance determination estimates the mean of the posterior distribution of a rank-2 dual tensor model exploiting Jeffreys prior (JARD). This data-acquisition prior is based on the Fisher information matrix and enables the assessment whether two tensors are mandatory to describe the data. The method is compared to Maximum Likelihood Estimation (MLE) of the dual tensor model and to FSL’s ball-and-stick approach. Results: Monte Carlo experiments demonstrated that JARD’s volume fractions correlated well with the ground truth for single and crossing fiber configurations. In single fiber configurations JARD automatically reduced the volume fraction of one compartment to (almost) zero. The variance in fractional anisotropy (FA) of the main tensor component was thereby reduced compared to MLE. JARD and MLE gave a comparable outcome in data simulating crossing fibers. On brain data, JARD yielded a smaller spread in FA along the corpus callosum compared to MLE. Tract-based spatial statistics demonstrated a higher sensitivity in detecting age-related white matter atrophy using JARD compared to both MLE and the ball-and-stick approach. Conclusions: The proposed framework offers accurate and precise estimation of diffusion properties in single and dual fiber regions. PMID:27760166
Uncertainties in neural network model based on carbon dioxide concentration for occupancy estimation
Energy Technology Data Exchange (ETDEWEB)
Alam, Azimil Gani; Rahman, Haolia; Kim, Jung-Kyung; Han, Hwataik [Kookmin University, Seoul (Korea, Republic of)
2017-05-15
Demand control ventilation is employed to save energy by adjusting the airflow rate according to the ventilation load of a building. This paper investigates a method for occupancy estimation using a dynamic neural network model based on the carbon dioxide concentration in an occupied zone. The method can be applied to most commercial and residential buildings where human effluents are to be ventilated. The indoor simulation program CONTAMW is used to generate indoor CO2 data corresponding to various occupancy schedules and airflow patterns to train the neural network models. Coefficients of variation are obtained depending on the complexities of the physical parameters as well as the system parameters of the neural networks, such as the numbers of hidden neurons and tapped delay lines. We intend to identify the uncertainties caused by the model parameters themselves, by excluding uncertainties in input data inherent in measurement. Our results show that estimation accuracy is highly influenced by the frequency of occupancy variation but not significantly influenced by fluctuation in the airflow rate. Furthermore, we discuss the applicability and validity of the present method based on passive environmental conditions for estimating occupancy in a room from the viewpoint of demand control ventilation applications.
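The tapped-delay-line input structure of such a dynamic model can be sketched as follows. For simplicity a linear least-squares readout stands in for the neural network, and the CO2 dynamics are a toy single-zone balance with illustrative coefficients.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 500
occupancy = rng.integers(0, 5, size=n).astype(float)

# Toy CO2 dynamics: concentration rises with occupants, decays with ventilation
co2 = np.zeros(n)
co2[0] = 400.0
for k in range(1, n):
    co2[k] = co2[k - 1] + 15.0 * occupancy[k] - 0.05 * (co2[k - 1] - 400.0)

def tapped_delay_features(signal, delays=5):
    # Stack the current sample and `delays` past samples, most recent first,
    # as would be fed to the dynamic network's input layer.
    rows = [signal[d : len(signal) - delays + d] for d in range(delays + 1)]
    return np.column_stack(rows[::-1])

X = tapped_delay_features(co2, delays=5)
y = occupancy[5:]
# Linear readout as a stand-in for the neural network in the paper
A = np.column_stack([X, np.ones(len(X))])
w, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ w
cv = float(np.sqrt(np.mean((pred - y) ** 2)) / y.mean())  # error coefficient of variation
```

Because this toy balance is exactly linear in the current and one-step-delayed CO2 samples, the linear readout recovers occupancy almost perfectly; a real zone needs the nonlinear network and yields the coefficients of variation discussed above.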
Design of Model-based Controller with Disturbance Estimation in Steer-by-wire System
Directory of Open Access Journals (Sweden)
Jung Sanghun
2018-01-01
Full Text Available The steer-by-wire system is a next-generation steering control technology that has been actively studied because it has many advantages, such as fast response, space efficiency due to the removal of redundant mechanical elements, and high connectivity with vehicle chassis control functions such as active steering. The steer-by-wire system is subject to disturbances composed of tire friction torque and self-aligning torque. These disturbances vary widely due to changes in vehicle weight or friction coefficient. Therefore, disturbance compensation logic is strongly required to obtain the desired performance. This paper proposes a model-based controller with disturbance compensation to achieve robust control performance. The targeted steer-by-wire system is identified through experiments and a system identification method. Moreover, the model-based controller is designed using the identified plant model. The disturbance of the targeted steer-by-wire system is estimated using a disturbance observer (DOB), and the estimated disturbance is compensated for in the control input. Experiments covering various scenarios are conducted to validate the robust performance of the proposed model-based controller.
Offset-Free Model Predictive Control of Open Water Channel Based on Moving Horizon Estimation
Ekin Aydin, Boran; Rutten, Martine
2016-04-01
Model predictive control (MPC) is a powerful control option which is increasingly used by operational water managers for managing water systems. The explicit consideration of constraints and multi-objective management are important features of MPC. However, due to water loss in open water systems by seepage, leakage and evaporation, a mismatch between the model and the real system will be created. This mismatch affects the performance of MPC and creates an offset from the reference set point of the water level. We present model predictive control based on moving horizon estimation (MHE-MPC) to achieve offset-free control of the water level for open water canals. MHE-MPC uses the past predictions of the model and the past measurements of the system to estimate unknown disturbances, and the offset in the controlled water level is systematically removed. We numerically tested MHE-MPC on an accurate hydrodynamic model of the laboratory canal UPC-PAC located in Barcelona. In addition, we also applied the well-known disturbance-modeling offset-free control scheme to the same test case. Simulation experiments on a single canal reach show that MHE-MPC outperforms the disturbance-modeling offset-free control scheme.
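The offset-free mechanism can be illustrated on a toy single-reach model: the water balance h[k+1] = h[k] + (u[k] - d)Δt has an unknown loss d (seepage, leakage, evaporation); a moving-horizon estimator recovers d from past inputs and level measurements, and feeding the estimate forward removes the steady-state offset. The one-step controller and all numbers below are illustrative, not the UPC-PAC model.

```python
import numpy as np

dt, d_true, ref = 1.0, 0.3, 2.0  # unknown loss d_true, water level set point ref
h = 0.0
hs, us = [h], []
d_hat = 0.0
horizon = 10

for k in range(60):
    # Controller (one-step MPC sketch): track ref, feed the estimated loss forward
    u = 0.5 * (ref - h) + d_hat
    us.append(u)
    h = h + (u - d_true) * dt  # true plant, with the unmodelled loss
    hs.append(h)
    # Moving horizon estimation over the last `horizon` steps:
    # residual of the loss-free model equals -d*dt at every step
    n0 = max(0, len(us) - horizon)
    resid = [hs[i + 1] - hs[i] - us[i] * dt for i in range(n0, len(us))]
    d_hat = -float(np.mean(resid)) / dt

offset = abs(hs[-1] - ref)  # steady-state offset is removed
```

Without the d_hat feedforward the level would settle 0.6 m below the set point here; with the MHE estimate the offset vanishes, which is the behavior the paper demonstrates on the laboratory canal.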
Structural observability analysis and EKF based parameter estimation of building heating models
Directory of Open Access Journals (Sweden)
D.W.U. Perera
2016-07-01
Full Text Available Research on enhanced energy-efficient buildings has been given much recognition in recent years owing to their high energy consumption. Increasing energy needs can be precisely controlled by employing advanced controllers for building Heating, Ventilation, and Air-Conditioning (HVAC) systems. Advanced controllers require a mathematical building heating model to operate, and these models need to be accurate and computationally efficient. One main concern associated with such models is the accurate estimation of the unknown model parameters. This paper presents the feasibility of implementing a simplified building heating model and the computation of physical parameters using an off-line approach. Structural observability analysis is conducted using graph-theoretic techniques to analyze the observability of the developed system model. Then the Extended Kalman Filter (EKF) algorithm is utilized for parameter estimation using real measurements of a single-zone building. The simulation-based results confirm that even with a simple model, the EKF follows the state variables accurately. The predicted parameters vary depending on the inputs and disturbances.
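A minimal sketch of EKF-based parameter estimation for a single zone: let the zone temperature obey dT/dt = a(Tout - T) with a = 1/(RC), and run the EKF on the augmented state [T, a] so the unknown parameter is estimated alongside the state. The model structure and every numerical value here are illustrative, not the paper's building model.

```python
import numpy as np

rng = np.random.default_rng(9)
dt, a_true, T_out = 60.0, 2e-4, 5.0  # a = 1/(RC); time step in seconds
n = 400

# Simulated cooling experiment with noisy temperature measurements
T = np.empty(n)
T[0] = 21.0
for k in range(1, n):
    T[k] = T[k - 1] + dt * a_true * (T_out - T[k - 1])
y = T + rng.normal(scale=0.05, size=n)

# EKF over the augmented state x = [T, a]
x = np.array([y[0], 1e-5])        # poor initial parameter guess
P = np.diag([1.0, 1e-6])
Qn = np.diag([1e-4, 1e-12])       # parameter modeled as near-constant
Rn = np.array([[0.05 ** 2]])
H = np.array([[1.0, 0.0]])

for k in range(1, n):
    # Predict: propagate state and linearize around it
    Tk, a = x
    x = np.array([Tk + dt * a * (T_out - Tk), a])
    F = np.array([[1.0 - dt * a, dt * (T_out - Tk)], [0.0, 1.0]])
    P = F @ P @ F.T + Qn
    # Update with the temperature measurement
    S = H @ P @ H.T + Rn
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K @ (np.array([[y[k]]]) - H @ x.reshape(2, 1))).ravel()
    P = (np.eye(2) - K @ H) @ P

a_hat = x[1]  # converges toward the true 1/(RC)
```

The augmented-state trick is the standard way an EKF turns joint state and parameter estimation into ordinary filtering; the paper's structural observability analysis establishes when such parameters are recoverable at all.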
A Modelling Framework for estimating Road Segment Based On-Board Vehicle Emissions
International Nuclear Information System (INIS)
Lin-Jun, Yu; Ya-Lan, Liu; Yu-Huan, Ren; Zhong-Ren, Peng; Meng, Liu Meng
2014-01-01
Traditional traffic emission inventory models aim to provide overall emissions at the regional level, which cannot meet planners' demand for detailed and accurate traffic emission information at the road segment level. Therefore, a road segment-based emission model for estimating light-duty vehicle emissions is proposed, where floating car data (FCD) are used to collect information on the traffic condition of roads. The employed analysis framework consists of three major modules: the Average Speed and Average Acceleration Module (ASAAM), the Traffic Flow Estimation Module (TFEM) and the Traffic Emission Module (TEM). The ASAAM is used to obtain the average speed and the average acceleration of the fleet on each road segment using FCD. The TFEM is designed to estimate the traffic flow of each road segment in a given period, based on the speed-flow relationship and the spatial distribution of traffic flow. Finally, the TEM estimates emissions from each road segment, based on the results of the previous two modules. Hourly on-road light-duty vehicle emissions for each road segment in Shenzhen's traffic network are obtained using this analysis framework. The temporal-spatial distribution patterns of the pollutant emissions of road segments are also summarized. The results show that high-emission road segments cluster in several important regions of Shenzhen, and that road segments emit more during rush hours than in other periods. The presented case study demonstrates that the proposed approach is feasible and easy to use, helping planners make informed decisions by providing detailed road segment-based emission information.
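The TEM step reduces to multiplying the estimated traffic flow by a speed-dependent per-vehicle emission factor over the segment length. A sketch with an entirely illustrative factor curve (not a calibrated emission model) follows.

```python
def segment_emissions(avg_speed_kmh, flow_veh_per_h, length_km):
    # Hourly emissions (g) for one road segment:
    # flow (veh/h) x segment length (km) x emission factor (g/veh-km).
    # The factor curve is illustrative: it falls with speed out of congestion
    # and rises again at high speed, a shape typical of such curves.
    ef = 25.0 / max(avg_speed_kmh, 5.0) + 0.0002 * avg_speed_kmh ** 2
    return ef * flow_veh_per_h * length_km

# Congested rush hour vs. free flow on the same 0.8 km segment
rush = segment_emissions(avg_speed_kmh=15.0, flow_veh_per_h=1800, length_km=0.8)
free = segment_emissions(avg_speed_kmh=60.0, flow_veh_per_h=900, length_km=0.8)
```

With FCD supplying the average speed (ASAAM) and the speed-flow relationship supplying the flow (TFEM), evaluating this product per segment and hour yields the kind of temporal-spatial emission pattern summarized above, including the higher rush-hour totals.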
CHIRP-Like Signals: Estimation, Detection and Processing A Sequential Model-Based Approach
Energy Technology Data Exchange (ETDEWEB)
Candy, J. V. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2016-08-04
Chirp signals have evolved primarily from radar/sonar signal processing applications, specifically attempting to estimate the location of a target in a surveillance/tracking volume. The chirp, which is essentially a sinusoidal signal whose phase changes instantaneously at each time sample, has an interesting property in that its correlation approximates an impulse function. It is well known that a matched-filter detector in radar/sonar estimates the target range by cross-correlating a replica of the transmitted chirp with the measurement data reflected from the target back to the radar/sonar receiver, yielding a maximum peak corresponding to the echo time and therefore enabling the desired range estimate. In this application, we perform the same operation as a radar or sonar system; that is, we transmit a “chirp-like pulse” into the target medium and attempt to first detect its presence and second estimate its location or range. Our problem is complicated by the presence of disturbance signals from surrounding broadcast stations as well as extraneous sources of interference in our frequency bands and, of course, the ever-present random noise from instrumentation. First, we discuss the chirp signal itself and illustrate its inherent properties, and then develop a model-based processing scheme enabling both the detection and estimation of the signal from noisy measurement data.
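The matched-filter range estimate described above can be sketched directly: cross-correlate a replica of the chirp with the noisy received record and take the lag of the correlation peak as the echo delay. The sample rate, chirp band, and interference terms below are illustrative.

```python
import numpy as np

fs = 1000.0                        # sample rate (Hz)
t = np.arange(0, 0.5, 1 / fs)      # 0.5 s pulse
f0, f1 = 50.0, 200.0               # linear chirp sweeping f0 -> f1
phase = 2 * np.pi * (f0 * t + 0.5 * (f1 - f0) / t[-1] * t ** 2)
pulse = np.sin(phase)

# Received record: the pulse delayed by 300 samples, buried in noise plus
# a narrowband "broadcast station" interferer inside the chirp band
rng = np.random.default_rng(11)
delay = 300
rx = np.zeros(2000)
rx[delay:delay + len(pulse)] += pulse
rx += 0.5 * rng.normal(size=rx.size)
rx += 0.3 * np.sin(2 * np.pi * 60.0 * np.arange(rx.size) / fs)

# Matched filter: cross-correlate the replica with the measurement; the peak
# lag estimates the echo time (and hence the range)
mf = np.correlate(rx, pulse, mode="valid")
est_delay = int(np.argmax(mf))
```

Because the chirp's autocorrelation approximates an impulse, the peak stands well above both the noise floor and the narrowband interference; the sequential model-based scheme in the report addresses the harder cases this simple detector cannot.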
Colaïtis, A.; Chapman, T.; Strozzi, D.; Divol, L.; Michel, P.
2018-03-01
A three-dimensional laser propagation model for computation of laser-plasma interactions is presented. It is focused on indirect-drive geometries in inertial confinement fusion and formulated for use at large temporal and spatial scales. A modified tessellation-based estimator and a relaxation scheme are used to estimate the intensity distribution in plasma from geometrical-optics rays. Comparisons with reference solutions show that this approach is well suited to reproduce realistic 3D intensity field distributions of beams smoothed by phase plates. It is shown that the method requires a reduced number of rays compared to traditional rigid-scale intensity estimation. Using this field estimator, we have implemented laser refraction, inverse-bremsstrahlung absorption, and steady-state crossed-beam energy transfer with a linear kinetic model in the numerical code Vampire. Probe beam amplification and laser spot shapes are compared with experimental results and pf3d paraxial simulations. These results are promising for the efficient and accurate computation of laser intensity distributions in hohlraums, which is of importance for determining the capsule implosion shape and the risks of laser-plasma instabilities such as hot electron generation and backscatter in multi-beam configurations.
An adaptive neuro fuzzy model for estimating the reliability of component-based software systems
Directory of Open Access Journals (Sweden)
Kirti Tyagi
2014-01-01
Full Text Available Although many algorithms and techniques have been developed for estimating the reliability of component-based software systems (CBSSs), much more research is needed. Accurate estimation of the reliability of a CBSS is difficult because it depends on two factors: component reliability and glue code reliability. Moreover, reliability is a real-world phenomenon with many associated real-time problems. Soft computing techniques can help to solve problems whose solutions are uncertain or unpredictable. A number of soft computing approaches for estimating CBSS reliability have been proposed. These techniques learn from the past and capture existing patterns in data. The two basic elements of soft computing are neural networks and fuzzy logic. In this paper, we propose a model for estimating CBSS reliability, known as an adaptive neuro fuzzy inference system (ANFIS), that is based on these two basic elements of soft computing, and we compare its performance with that of a plain FIS (fuzzy inference system) for different data sets.
Examining the utility of satellite-based wind sheltering estimates for lake hydrodynamic modeling
Van Den Hoek, Jamon; Read, Jordan S.; Winslow, Luke A.; Montesano, Paul; Markfort, Corey D.
2015-01-01
Satellite-based measurements of vegetation canopy structure have been in common use for the last decade but have never been used to estimate the canopy's impact on wind sheltering of individual lakes. Wind sheltering is caused by slower winds in the wake of topography and shoreline obstacles (e.g. forest canopy) and influences heat loss and the flux of wind-driven mixing energy into lakes, which control lake temperatures and indirectly structure lake ecosystem processes, including carbon cycling and thermal habitat partitioning. Lakeshore wind sheltering has often been parameterized by lake surface area, but such empirical relationships are only based on forested lakeshores and overlook the contributions of local land cover and terrain to wind sheltering. This study is the first to examine the utility of satellite imagery-derived broad-scale estimates of wind sheltering across a diversity of land covers. Using 30 m spatial resolution ASTER GDEM2 elevation data, the mean sheltering height, hs, being the combination of local topographic rise and canopy height above the lake surface, is calculated within 100 m-wide buffers surrounding 76,000 lakes in the U.S. state of Wisconsin. Uncertainty of GDEM2-derived hs is compared to SRTM-, high-resolution G-LiHT lidar-, and ICESat-derived estimates of hs; the respective influences of land cover type and buffer width on hs are examined; and the effect of including satellite-based hs on the accuracy of a statewide lake hydrodynamic model is discussed. Though GDEM2 hs uncertainty was comparable to or better than other satellite-based measures of hs, its higher spatial resolution and broader spatial coverage allowed more lakes to be included in modeling efforts. GDEM2 was shown to offer superior utility for estimating hs compared to other satellite-derived data, but was limited by its consistent underestimation of hs, inability to detect within-buffer hs variability, and differing accuracy across land cover types. Nonetheless
BRISENT: An Entropy-Based Model for Bridge-Pier Scour Estimation under Complex Hydraulic Scenarios
Directory of Open Access Journals (Sweden)
Alonso Pizarro
2017-11-01
Full Text Available The goal of this paper is to introduce the first clear-water scour model based on both the informational entropy concept and the principle of maximum entropy, showing that a variational approach is ideal for describing erosional processes under complex situations. The proposed bridge-pier scour entropic (BRISENT) model is capable of reproducing the main dynamics of scour depth evolution under steady hydraulic conditions, step-wise hydrographs, and flood waves. For the calibration process, 266 clear-water scour experiments from 20 precedent studies were considered, where the dimensionless parameters varied widely. Simple formulations are proposed to estimate BRISENT's fitting coefficients, in which the ratio between pier diameter and sediment size was the most critical physical characteristic controlling scour model parametrization. A validation process considering highly unsteady and multi-peaked hydrographs was carried out, showing that the proposed BRISENT model reproduces scour evolution with high accuracy.
Biomechanical model-based displacement estimation in micro-sensor motion capture
International Nuclear Information System (INIS)
Meng, X L; Sun, S Y; Wu, J K; Zhang, Z Q; Wong, W C (Department of Electrical and Computer Engineering, National University of Singapore (NUS), 02-02-10 I3 Building, 21 Heng Mui Keng Terrace, Singapore)
2012-01-01
In micro-sensor motion capture systems, the estimation of the body displacement in the global coordinate system remains a challenge due to lack of external references. This paper proposes a self-contained displacement estimation method based on a human biomechanical model to track the position of walking subjects in the global coordinate system without any additional supporting infrastructures. The proposed approach makes use of the biomechanics of the lower body segments and the assumption that during walking there is always at least one foot in contact with the ground. The ground contact joint is detected based on walking gait characteristics and used as the external references of the human body. The relative positions of the other joints are obtained from hierarchical transformations based on the biomechanical model. Anatomical constraints are proposed to apply to some specific joints of the lower body to further improve the accuracy of the algorithm. Performance of the proposed algorithm is compared with an optical motion capture system. The method is also demonstrated in outdoor and indoor long distance walking scenarios. The experimental results demonstrate clearly that the biomechanical model improves the displacement accuracy within the proposed framework. (paper)
Jeong, Jina; Park, Eungyu; Han, Weon Shik; Kim, Kue-Young; Jun, Seong-Chun; Choung, Sungwook; Yun, Seong-Taek; Oh, Junho; Kim, Hyun-Jun
2017-11-01
In this study, a data-driven method for predicting CO2 leaks and associated concentrations from geological CO2 sequestration is developed. Several candidate models are compared based on their reproducibility and predictive capability for CO2 concentration measurements from the Environment Impact Evaluation Test (EIT) site in Korea. Based on the data mining results, a one-dimensional solution of the advective-dispersive equation for steady flow (i.e., the Ogata-Banks solution) is found to be most representative for the test data, and this model is adopted as the data model for the developed method. In the validation step, the method is applied to estimate future CO2 concentrations with the reference estimation by the Ogata-Banks solution, where a part of the earlier data is used as the training dataset. From the analysis, it is found that the ensemble mean of multiple estimations based on the developed method shows high prediction accuracy relative to the reference estimation. In addition, the majority of the data to be predicted are included in the proposed quantile interval, which suggests adequate representation of the uncertainty by the developed method. Therefore, the incorporation of a reasonable physically-based data model enhances the prediction capability of the data-driven model. The proposed method is not confined to estimation of CO2 concentrations and may be applied to various real-time monitoring data from subsurface sites to develop automated control, management or decision-making systems. Copyright © 2017 Elsevier B.V. All rights reserved.
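The Ogata-Banks solution used as the data model above has a compact closed form for a continuous source at x = 0 in steady 1-D flow: C(x,t)/C0 = (1/2)[erfc((x - vt)/(2*sqrt(D*t))) + exp(vx/D) * erfc((x + vt)/(2*sqrt(D*t)))]. A direct implementation (parameter values in the check are arbitrary):

```python
from math import erfc, exp, sqrt

def ogata_banks(x, t, v, D, c0=1.0):
    """Ogata-Banks solution of the 1-D advection-dispersion equation for a
    continuous source of concentration c0 at x=0, velocity v, dispersion D.
    Valid for t > 0; for very large v*x/D the second term needs an
    asymptotic form to avoid overflow (not handled in this sketch)."""
    a = (x - v * t) / (2.0 * sqrt(D * t))
    b = (x + v * t) / (2.0 * sqrt(D * t))
    return 0.5 * c0 * (erfc(a) + exp(v * x / D) * erfc(b))

# breakthrough at x = 1: low concentration early, approaching c0 late
c_early = ogata_banks(x=1.0, t=0.1, v=1.0, D=0.5)
c_late = ogata_banks(x=1.0, t=10.0, v=1.0, D=0.5)
```

Fitting (v, D, c0) of this function to early monitoring data and extrapolating forward is one simple way to produce the kind of reference estimation the record describes.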
Are individual based models a suitable approach to estimate population vulnerability? - a case study
Directory of Open Access Journals (Sweden)
Eva Maria Griebeler
2011-04-01
Full Text Available European populations of the Large Blue Butterfly Maculinea arion have experienced severe declines in the last decades, especially in the northern part of the species range. This endangered lycaenid butterfly needs two resources for development: flower buds of specific plants (Thymus spp., Origanum vulgare), on which young caterpillars briefly feed, and red ants of the genus Myrmica, whose nests support caterpillars during a prolonged final instar. I present an analytically solvable deterministic model to estimate the vulnerability of populations of M. arion. Results obtained from the sensitivity analysis of this mathematical model (MM) are contrasted to the respective results that had been derived from a spatially explicit individual based model (IBM) for this butterfly. I demonstrate that details in landscape configuration which are neglected by the MM but are easily taken into consideration by the IBM result in a different degree of intraspecific competition of caterpillars on flower buds and within host ant nests. The resulting differences in mortalities of caterpillars lead to erroneous estimates of the extinction risk of a butterfly population living in habitat with low food plant coverage and low abundance in host ant nests. This observation favors the use of an individual based modeling approach over the deterministic approach at least for the management of this threatened butterfly.
Incorporating remote sensing-based ET estimates into the Community Land Model version 4.5
Directory of Open Access Journals (Sweden)
D. Wang
2017-07-01
Full Text Available Land surface models bear substantial biases in simulating surface water and energy budgets despite the continuous development and improvement of model parameterizations. To reduce model biases, Parr et al. (2015) proposed a method incorporating satellite-based evapotranspiration (ET) products into land surface models. Here we apply this bias correction method to the Community Land Model version 4.5 (CLM4.5) and test its performance over the conterminous US (CONUS). We first calibrate a relationship between the observational ET from the Global Land Evaporation Amsterdam Model (GLEAM) product and the model ET from CLM4.5, and assume that this relationship holds beyond the calibration period. During the validation or application period, a simulation using the default CLM4.5 (CLM) is conducted first, and its output is combined with the calibrated observational-vs.-model ET relationship to derive a corrected ET; an experiment (CLMET) is then conducted in which the model-generated ET is overwritten with the corrected ET. Using the observations of ET, runoff, and soil moisture content as benchmarks, we demonstrate that CLMET greatly improves the hydrological simulations over most of the CONUS, and the improvement is stronger in the eastern CONUS than the western CONUS and is strongest over the Southeast CONUS. For any specific region, the degree of the improvement depends on whether the relationship between observational and model ET remains time-invariant (a fundamental hypothesis of the Parr et al. (2015) method) and whether water is the limiting factor in places where ET is underestimated. While the bias correction method improves hydrological estimates without improving the physical parameterization of land surface models, results from this study do provide guidance for physically based model development effort.
WALS Estimation and Forecasting in Factor-based Dynamic Models with an Application to Armenia
Poghosyan, Karen; Magnus, Jan R.
2012-01-01
Two model averaging approaches are used and compared in estimating and forecasting dynamic factor models: the well-known Bayesian model averaging (BMA) and the recently developed weighted average least squares (WALS). Both methods propose to combine frequentist estimators using Bayesian weights. We apply our framework to the Armenian economy using quarterly data from 2000-2010, and we estimate and forecast real GDP growth and inflation.
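The core idea shared by BMA and WALS, combining frequentist estimators with Bayesian weights, can be illustrated with BIC-based approximate posterior weights over nested OLS models. This is a generic sketch of the averaging principle, not the WALS estimator itself, and all data and candidate models here are invented:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
X = rng.standard_normal((n, 3))
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.standard_normal(n)  # true model uses 2 regressors

def fit(cols):
    """OLS on a column subset; return zero-padded coefficients and BIC."""
    Xs = X[:, cols]
    beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    rss = np.sum((y - Xs @ beta) ** 2)
    bic = n * np.log(rss / n) + len(cols) * np.log(n)
    full = np.zeros(X.shape[1])
    full[list(cols)] = beta
    return full, bic

models = [(0,), (0, 1), (0, 1, 2)]             # nested candidate models
fits = [fit(m) for m in models]
bics = np.array([b for _, b in fits])
w = np.exp(-0.5 * (bics - bics.min()))
w /= w.sum()                                   # approximate posterior model weights
beta_avg = sum(wi * bi for wi, (bi, _) in zip(w, fits))
```

The averaged coefficient vector `beta_avg` then feeds the forecast, rather than the estimate from any single pre-selected model.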
Remaining useful life estimation based on stochastic deterioration models: A comparative study
International Nuclear Information System (INIS)
Le Son, Khanh; Fouladirad, Mitra; Barros, Anne; Levrat, Eric; Iung, Benoît
2013-01-01
Prognostics of system lifetime is a basic requirement for condition-based maintenance in many application domains where safety, reliability, and availability are considered of first importance. This paper presents a probabilistic method for prognostics applied to the 2008 PHM Conference Challenge data. A stochastic process (a Wiener process) combined with a data analysis method (principal component analysis) is proposed to model the deterioration of the components and to estimate the remaining useful life (RUL) in a case study. The advantages of our probabilistic approach are pointed out, and a comparison with existing results on the same data is made.
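For a Wiener degradation process with drift, the RUL has a well-known closed form: the first-passage time of X(t) = x0 + mu*t + sigma*W(t) to a failure threshold L is inverse-Gaussian distributed, with mean (L - x0)/mu and variance (L - x0)*sigma^2/mu^3. A minimal sketch (the degradation level, threshold, and parameters below are invented):

```python
import numpy as np

def rul_wiener(level, threshold, mu, sigma):
    """Mean and standard deviation of the remaining useful life for a
    Wiener degradation model X(t) = level + mu*t + sigma*W(t), mu > 0.
    First passage to the threshold is inverse-Gaussian distributed."""
    d = threshold - level          # remaining degradation margin
    mean = d / mu
    std = np.sqrt(d * sigma**2 / mu**3)
    return mean, std

# current health indicator 2.0, failure at 10.0, drift 0.5/unit time
rul_mean, rul_std = rul_wiener(2.0, 10.0, mu=0.5, sigma=0.2)
```

In practice the drift and diffusion parameters would be re-estimated online from condition-monitoring data (e.g. on principal-component scores, as in the record above), and the RUL distribution updated at each inspection.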
Model Based Optimal Control, Estimation, and Validation of Lithium-Ion Batteries
Perez, Hector Eduardo
This dissertation focuses on developing and experimentally validating model based control techniques to enhance the operation of lithium ion batteries, safely. An overview of the contributions to address the challenges that arise are provided below. Chapter 1: This chapter provides an introduction to battery fundamentals, models, and control and estimation techniques. Additionally, it provides motivation for the contributions of this dissertation. Chapter 2: This chapter examines reference governor (RG) methods for satisfying state constraints in Li-ion batteries. Mathematically, these constraints are formulated from a first principles electrochemical model. Consequently, the constraints explicitly model specific degradation mechanisms, such as lithium plating, lithium depletion, and overheating. This contrasts with the present paradigm of limiting measured voltage, current, and/or temperature. The critical challenges, however, are that (i) the electrochemical states evolve according to a system of nonlinear partial differential equations, and (ii) the states are not physically measurable. Assuming available state and parameter estimates, this chapter develops RGs for electrochemical battery models. The results demonstrate how electrochemical model state information can be utilized to ensure safe operation, while simultaneously enhancing energy capacity, power, and charge speeds in Li-ion batteries. Chapter 3: Complex multi-partial differential equation (PDE) electrochemical battery models are characterized by parameters that are often difficult to measure or identify. This parametric uncertainty influences the state estimates of electrochemical model-based observers for applications such as state-of-charge (SOC) estimation. This chapter develops two sensitivity-based interval observers that map bounded parameter uncertainty to state estimation intervals, within the context of electrochemical PDE models and SOC estimation. Theoretically, this chapter extends the
Farooqui, Habib; Jit, Mark; Heymann, David L; Zodpey, Sanjay
2015-01-01
The burden of severe pneumonia in terms of morbidity and mortality is unknown in India, especially at the sub-national level. In this context, we aimed to estimate the number of severe pneumonia episodes, pneumococcal pneumonia episodes and pneumonia deaths in children younger than 5 years in 2010. We adapted and parameterized a mathematical model based on the epidemiological concept of the potential impact fraction developed by CHERG for this analysis. The key parameters that determine the distribution of severe pneumonia episodes across Indian states were state-specific under-5 population, state-specific prevalence of selected definite pneumonia risk factors and meta-estimates of relative risks for each of these risk factors. We applied the incidence estimates and attributable fraction of risk factors to population estimates for 2010 of each Indian state. We then estimated the number of pneumococcal pneumonia cases by applying the vaccine probe methodology to an existing trial. We estimated mortality due to severe pneumonia and pneumococcal pneumonia by combining incidence estimates with case fatality ratios from multi-centric hospital-based studies. Our results suggest that in 2010, 3.6 million (3.3-3.9 million) episodes of severe pneumonia and 0.35 million (0.31-0.40 million) all-cause pneumonia deaths occurred in children younger than 5 years in India. The states that merit special mention include Uttar Pradesh, where 18.1% of children reside but contribute 24% of pneumonia cases and 26% of pneumonia deaths, Bihar (11.3% children, 16% cases, 22% deaths), Madhya Pradesh (6.6% children, 9% cases, 12% deaths), and Rajasthan (6.6% children, 8% cases, 11% deaths). Further, we estimated that 0.56 million (0.49-0.64 million) severe episodes of pneumococcal pneumonia and 105 thousand (92-119 thousand) pneumococcal deaths occurred in India. The top contributors to India's pneumococcal pneumonia burden were Uttar Pradesh, Bihar, Madhya Pradesh and Rajasthan, in that order. Our results
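The potential impact fraction underlying the model above compares a risk-factor-weighted burden under current exposure prevalences with a counterfactual distribution: PIF = (sum p_i RR_i - sum p'_i RR_i) / sum p_i RR_i. A minimal numeric sketch (the prevalences and relative risk below are invented, not the study's inputs):

```python
def potential_impact_fraction(p, p_star, rr):
    """PIF over discrete exposure levels.
    p      : current prevalence of each level (sums to 1)
    p_star : counterfactual prevalence of each level
    rr     : relative risk of each level; level 0 is the reference (rr[0]=1)
    """
    current = sum(pi * ri for pi, ri in zip(p, rr))
    counter = sum(pi * ri for pi, ri in zip(p_star, rr))
    return (current - counter) / current

# hypothetical: 30% of children exposed to a risk factor with RR = 2,
# counterfactual scenario halves the exposure
pif = potential_impact_fraction(p=[0.7, 0.3], p_star=[0.85, 0.15],
                                rr=[1.0, 2.0])
```

Multiplying such a fraction by a state's baseline incidence and under-5 population apportions episodes across states by their risk-factor profiles, which is the distributional logic the record describes.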
Markov models for digraph panel data : Monte Carlo-based derivative estimation
Schweinberger, Michael; Snijders, Tom A. B.
2007-01-01
A parametric, continuous-time Markov model for digraph panel data is considered. The parameter is estimated by the method of moments. A convenient method for estimating the variance-covariance matrix of the moment estimator relies on the delta method, requiring the Jacobian matrix-that is, the
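The delta method mentioned above propagates the variance-covariance matrix of an estimator through a transformation via its Jacobian: Cov[g(theta_hat)] is approximately J Cov[theta_hat] J'. A generic numerical sketch (the transformation and covariance in the check are invented examples, not the digraph model's moment conditions):

```python
import numpy as np

def delta_method_cov(g, theta, cov_theta, eps=1e-6):
    """Approximate covariance of g(theta_hat) via the delta method,
    using a forward-difference numerical Jacobian of g at theta."""
    theta = np.asarray(theta, dtype=float)
    g0 = np.atleast_1d(g(theta))
    J = np.zeros((g0.size, theta.size))
    for j in range(theta.size):
        step = np.zeros_like(theta)
        step[j] = eps
        J[:, j] = (np.atleast_1d(g(theta + step)) - g0) / eps
    return J @ cov_theta @ J.T

# example: variance of a ratio g(m, s) = m / s at (2, 4),
# with independent components of variance 0.1 and 0.2
cov_ratio = delta_method_cov(lambda th: th[0] / th[1], [2.0, 4.0],
                             np.diag([0.1, 0.2]))
```

The analytic gradient here is (1/s, -m/s^2) = (0.25, -0.125), so the exact delta-method variance is 0.25^2 * 0.1 + 0.125^2 * 0.2 = 0.009375, which the numerical Jacobian reproduces.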
Ben Slama, Amine; Mouelhi, Aymen; Sahli, Hanene; Manoubi, Sondes; Mbarek, Chiraz; Trabelsi, Hedi; Fnaiech, Farhat; Sayadi, Mounir
2017-07-01
The diagnosis of vestibular neuritis (VN) presents many difficulties to traditional assessment methods. This paper deals with a fully automatic VN diagnostic system based on nystagmus parameter estimation using a pupil detection algorithm. A geodesic active contour model is implemented to find an accurate segmentation region of the pupil. The novelty of the proposed algorithm is to speed up the standard segmentation by using a specific mask located on the region of interest, which allows a drastic reduction in computing time and yields good performance and accuracy in the obtained results. After this fast segmentation step, the estimated parameters are represented in temporal and frequency settings. A principal component analysis (PCA) selection procedure is then applied to obtain a reduced number of estimated parameters, which are used to train a multi neural network (MNN). Experimental results on 90 eye movement videos show the effectiveness and the accuracy of the proposed estimation algorithm versus previous work. Copyright © 2017 Elsevier B.V. All rights reserved.
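The PCA selection step described above projects the estimated parameters onto the few components that explain most of the variance before classifier training. A generic SVD-based sketch (the synthetic feature matrix is invented; real inputs would be the nystagmus parameters):

```python
import numpy as np

def pca_reduce(X, var_kept=0.95):
    """Project rows of X onto the fewest principal components that retain
    `var_kept` of the total variance. Returns (scores, n_components)."""
    Xc = X - X.mean(axis=0)                       # center features
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    explained = s**2 / np.sum(s**2)
    k = int(np.searchsorted(np.cumsum(explained), var_kept) + 1)
    return Xc @ Vt[:k].T, k

# synthetic rank-2 feature matrix: the second column duplicates the first
rng = np.random.default_rng(0)
z = rng.standard_normal((100, 2))
X = np.column_stack([z[:, 0], 2 * z[:, 0], z[:, 1]])
scores, k = pca_reduce(X)
```

The reduced score matrix (here two components instead of three features) is what would be fed to the downstream neural network.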
Modeling and Density Estimation of an Urban Freeway Network Based on Dynamic Graph Hybrid Automata.
Chen, Yangzhou; Guo, Yuqi; Wang, Ying
2017-03-29
In this paper, in order to describe complex network systems, we first propose a general modeling framework by combining a dynamic graph with hybrid automata, naming it Dynamic Graph Hybrid Automata (DGHA). We then apply this framework to model traffic flow over an urban freeway network by embedding the Cell Transmission Model (CTM) into the DGHA. In the modeling procedure, we adopt a dual digraph of the road network structure to describe the road topology, use linear hybrid automata to describe the multiple modes of dynamic densities in road segments, and transform the nonlinear expressions of the traffic flow transmitted between two road segments into piecewise linear functions in terms of multi-mode switchings. This modeling procedure is modularized and rule-based, and thus is easily extensible with the help of a combination algorithm for the dynamics of traffic flow. It can describe the dynamics of traffic flow over an urban freeway network with arbitrary topology structures and sizes. Next we analyze the mode types and their number in the model of the whole freeway network, and deduce a Piecewise Affine Linear System (PWALS) model. Furthermore, based on the PWALS model, a multi-mode switched state observer is designed to estimate the traffic densities of the freeway network, where a set of observer gain matrices are computed by using the Lyapunov function approach. As an example, we apply the PWALS model and the corresponding switched state observer to traffic flow over the Beijing third ring road. In order to clearly interpret the principle of the proposed method and avoid computational complexity, we adopt a simplified version of the Beijing third ring road. Practical application to a large-scale road network will be implemented through a decentralized modeling approach and distributed observer design in future research.
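The Cell Transmission Model embedded above updates cell densities from inter-cell flows, each flow being the minimum of the upstream cell's demand and the downstream cell's supply. A minimal sketch for a single chain of cells (free-flow/backward-wave speeds, capacities, and the inflow are invented; the paper's network version adds the graph structure and mode switching):

```python
import numpy as np

def ctm_step(rho, v, w, rho_jam, q_max, dx, dt, inflow):
    """One CTM update for a chain of road cells.
    rho: densities (veh/km); v: free-flow speed; w: backward wave speed;
    rho_jam: jam density; q_max: capacity (veh/h); dx: cell length (km);
    dt: step (h); inflow: upstream boundary demand (veh/h)."""
    demand = np.minimum(v * rho, q_max)               # what each cell can send
    supply = np.minimum(w * (rho_jam - rho), q_max)   # what each cell can take
    q = np.minimum(demand[:-1], supply[1:])           # inter-cell flows
    q = np.concatenate(([min(inflow, supply[0])], q, [demand[-1]]))
    return rho + (dt / dx) * (q[:-1] - q[1:])         # conservation update

rho = np.full(5, 20.0)                                # initial densities
for _ in range(10):                                   # note v*dt/dx <= 1 (CFL)
    rho = ctm_step(rho, v=80.0, w=20.0, rho_jam=160.0,
                   q_max=2000.0, dx=0.5, dt=0.005, inflow=1500.0)
```

The min-of-demand-and-supply flow is exactly the piecewise-linear nonlinearity that the paper converts into mode switchings of the hybrid automata.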
Directory of Open Access Journals (Sweden)
Konstantinos B. Baltzis
2008-10-01
Full Text Available A common assumption in cellular communications is the circular-cell approximation. In this paper, an alternative analysis based on the hexagonal shape of the cells is presented. A geometry-based stochastic model is proposed to describe the angle of arrival of the interfering signals in the reverse link of a cellular system. Explicit closed-form expressions are derived, and the simulations performed exhibit the characteristics and validate the accuracy of the proposed model. Applications to capacity estimation in WCDMA cellular networks are presented. The dependence of system capacity on cell sectorization and on the base station antenna radiation pattern is explored. Comparisons with data in the literature validate the accuracy of the proposed model. The degree of error of the hexagonal and circular-cell approaches has been investigated, indicating the validity of the proposed model. Results have also shown that, in many cases, the two approaches give similar results when the radius of the circle equals the hexagon inradius. A brief discussion of how the proposed technique may be applied to broadband access networks is finally made.
A Hierarchical Linear Model for Estimating Gender-Based Earnings Differentials.
Haberfield, Yitchak; Semyonov, Moshe; Addi, Audrey
1998-01-01
Estimates of gender earnings inequality in data from 116,431 Jewish workers were compared using a hierarchical linear model (HLM) and ordinary least squares model. The HLM allows estimation of the extent to which earnings inequality depends on occupational characteristics. (SK)
Ranger, Jochen; Kuhn, Jörg-Tobias; Szardenings, Carsten
2016-05-01
Psychological tests are usually analysed with item response models. Recently, some alternative measurement models have been proposed that were derived from cognitive process models developed in experimental psychology. These models consider the responses but also the response times of the test takers. Two such models are the Q-diffusion model and the D-diffusion model. Both models can be calibrated with the diffIRT package of the R statistical environment via marginal maximum likelihood (MML) estimation. In this manuscript, an alternative approach to model calibration is proposed. The approach is based on weighted least squares estimation and parallels the standard estimation approach in structural equation modelling. Estimates are determined by minimizing the discrepancy between the observed and the implied covariance matrix. The estimator is simple to implement, consistent, and asymptotically normally distributed. Least squares estimation also provides a test of model fit by comparing the observed and implied covariance matrix. The estimator and the test of model fit are evaluated in a simulation study. Although parameter recovery is good, the estimator is less efficient than the MML estimator. © 2016 The British Psychological Society.
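The estimation principle above, choosing parameters to minimize the discrepancy between the observed and model-implied covariance matrices, can be illustrated on a toy one-parameter structure. This is a generic covariance-fitting sketch (unweighted, with a grid search over an invented AR(1)-style correlation model), not the diffusion-IRT estimator itself:

```python
import numpy as np

rng = np.random.default_rng(3)
a_true, n, p = 0.6, 2000, 4

# simulate p "items" with implied correlation Corr(i, j) = a^|i-j|
lags = np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
L = np.linalg.cholesky(a_true ** lags)
X = rng.standard_normal((n, p)) @ L.T
S = np.cov(X, rowvar=False)                    # observed covariance matrix

def implied(a):
    """Model-implied covariance for parameter a."""
    return a ** lags

# least-squares discrepancy over a parameter grid; real WLS estimation
# weights the residuals and minimizes with a proper optimizer
grid = np.linspace(0.01, 0.99, 99)
loss = [np.sum((S - implied(a)) ** 2) for a in grid]
a_hat = grid[int(np.argmin(loss))]
```

The minimized discrepancy itself doubles as a fit statistic: a large residual between `S` and `implied(a_hat)` signals model misfit, paralleling the test of model fit described in the record.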
Zimmerling, Clemens; Dörr, Dominik; Henning, Frank; Kärger, Luise
2018-05-01
Due to their high mechanical performance, continuous fibre reinforced plastics (CoFRP) become increasingly important for load bearing structures. In many cases, manufacturing CoFRPs comprises a forming process of textiles. To predict and optimise the forming behaviour of a component, numerical simulations are applied. However, for maximum part quality, both the geometry and the process parameters must match in mutual regard, which in turn requires numerous numerically expensive optimisation iterations. In both textile and metal forming, a lot of research has focused on determining optimum process parameters, whilst regarding the geometry as invariable. In this work, a meta-model based approach on component level is proposed, that provides a rapid estimation of the formability for variable geometries based on pre-sampled, physics-based draping data. Initially, a geometry recognition algorithm scans the geometry and extracts a set of doubly-curved regions with relevant geometry parameters. If the relevant parameter space is not part of an underlying data base, additional samples via Finite-Element draping simulations are drawn according to a suitable design-table for computer experiments. Time saving parallel runs of the physical simulations accelerate the data acquisition. Ultimately, a Gaussian Regression meta-model is built from the data base. The method is demonstrated on a box-shaped generic structure. The predicted results are in good agreement with physics-based draping simulations. Since evaluations of the established meta-model are numerically inexpensive, any further design exploration (e.g. robustness analysis or design optimisation) can be performed in short time. It is expected that the proposed method also offers great potential for future applications along virtual process chains: For each process step along the chain, a meta-model can be set-up to predict the impact of design variations on manufacturability and part performance. Thus, the method is
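The meta-model at the core of the approach above replaces expensive draping simulations with cheap surrogate predictions. A minimal Gaussian-process (kriging) regression sketch with an RBF kernel (the 1-D toy function stands in for simulation outputs; kernel length scale and noise level are invented):

```python
import numpy as np

def gp_fit_predict(Xtr, ytr, Xte, length=1.0, noise=1e-6):
    """Gaussian-process regression with an RBF kernel: interpolates the
    training samples and predicts cheaply at new design points."""
    def k(A, B):
        d2 = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=-1)
        return np.exp(-0.5 * d2 / length**2)
    K = k(Xtr, Xtr) + noise * np.eye(len(Xtr))   # jitter for conditioning
    alpha = np.linalg.solve(K, ytr)
    return k(Xte, Xtr) @ alpha                   # posterior mean

# pretend each training point is one expensive draping simulation result
Xtr = np.linspace(0, 3, 7)[:, None]              # sampled geometry parameters
ytr = np.sin(Xtr).ravel()                        # stand-in formability measure
pred = gp_fit_predict(Xtr, ytr, np.array([[1.5]]))
```

Once fitted, evaluating the surrogate costs a single matrix-vector product, which is what makes downstream robustness analysis or design optimisation affordable.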
Directory of Open Access Journals (Sweden)
Corrie H. Allen
2016-05-01
Background. Preserving connectivity, or the ability of a landscape to support species movement, is among the most commonly recommended strategies to reduce the negative effects of climate change and human land-use development on species. Connectivity analyses have traditionally used a corridor-based approach, relying heavily on least-cost-path modeling and circuit theory to delineate corridors. Individual-based models are gaining popularity as a potentially more ecologically realistic method of estimating landscape connectivity; however, this remains a relatively unexplored approach. We sought to explore the utility of a simple, individual-based model as a land-use management support tool for identifying and implementing landscape connectivity. Methods. We created an individual-based model of bighorn sheep (Ovis canadensis) that simulates a bighorn sheep traversing a landscape by following simple movement rules. The model was calibrated for bighorn sheep in the Okanagan Valley, British Columbia, Canada, a region containing isolated herds that are vital to conservation of the species in its northern range. Simulations were run to determine baseline connectivity between subpopulations in the study area. We then applied the model to explore the effects of two land-management scenarios on simulated connectivity: restoring natural fire regimes and identifying appropriate sites for interventions that would increase road permeability for bighorn sheep. Results. This model suggests there are no continuous areas of good habitat between current subpopulations of sheep in the study area; however, a series of stepping stones or circuitous routes could facilitate movement between subpopulations and into currently unoccupied, yet suitable, bighorn habitat. Restoring natural fire regimes or mimicking fire with prescribed burns and tree removal could considerably increase bighorn connectivity in this area. Moreover, several key road crossing sites that could benefit from
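An individual-based movement model of the kind described above can be illustrated, in heavily simplified form, as an agent on a habitat-quality grid following one movement rule (move to the best neighbouring cell). The grid, the rule, and the values are invented for illustration only; the paper's calibrated bighorn model is far richer.

```python
def hillclimb_walker(quality, start, steps=20):
    """A minimal individual-based movement rule: at each step the walker moves
    to the 4-neighbour cell with the highest habitat quality, stopping when no
    neighbour is better. Purely illustrative, not the calibrated sheep model."""
    rows, cols = len(quality), len(quality[0])
    r, c = start
    path = [(r, c)]
    for _ in range(steps):
        best = (r, c)
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and \
               quality[nr][nc] > quality[best[0]][best[1]]:
                best = (nr, nc)
        if best == (r, c):
            break          # local optimum: the agent settles here
        r, c = best
        path.append(best)
    return path

# Habitat quality rising toward a "stepping stone" in the bottom-right corner
grid = [[0.1, 0.2, 0.3],
        [0.2, 0.4, 0.6],
        [0.3, 0.6, 0.9]]
route = hillclimb_walker(grid, start=(0, 0))
```

Running many such agents from different start cells and counting which patches they reach is one way such models turn simple rules into connectivity estimates.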
Chatterji, Gano
2011-01-01
Conclusions: The fuel estimation procedure was validated using flight test data. A good fuel model can be created if weight and fuel data are available. An error in the assumed takeoff weight produces a similar error in the fuel estimate. Fuel estimation error bounds can be determined.
International Nuclear Information System (INIS)
Wei, Zhongbao; Meng, Shujuan; Xiong, Binyu; Ji, Dongxu; Tseng, King Jet
2016-01-01
Highlights: • Integrated online model identification and SOC estimation is explored. • Noise variances are estimated online in a data-driven way. • Identification bias caused by noise corruption is attenuated. • SOC is estimated online with high accuracy and fast convergence. • Algorithm comparison shows the superiority of the proposed method. - Abstract: State of charge (SOC) estimators with an online-identified battery model have proven accurate and robust thanks to the timely adaptation of time-varying model parameters. In this paper, we show that the common methods for model identification are intrinsically biased if both the current and voltage sensors are corrupted by noise. The resulting uncertainties in the battery model further degrade the accuracy and robustness of the SOC estimate. To address this problem, this paper proposes a novel technique that integrates the Frisch-scheme-based bias-compensating recursive least squares (FBCRLS) with an SOC observer for enhanced model identification and SOC estimation. The proposed method estimates the noise statistics online and compensates for their effect so that the model parameters can be extracted without bias. The SOC is then estimated in real time with the online-updated and unbiased battery model. Simulation and experimental studies show that the proposed FBCRLS-based observer effectively attenuates the bias in model identification caused by noise contamination and consequently provides a more reliable SOC estimate. The proposed method is also compared with other existing methods to highlight its superiority in terms of accuracy and convergence speed.
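The online identification step that the abstract builds on can be sketched with plain recursive least squares fitting a first-order battery model; the Frisch-scheme bias compensation itself is beyond a sketch. Here noise is placed only on the output (the biased case the paper targets has noise on both current and voltage), and the model structure and parameter values are hypothetical.

```python
import random

def rls_update(theta, P, phi, y):
    """One ordinary recursive-least-squares step: update the parameter
    estimate theta and covariance P from regressor phi and measurement y."""
    n = len(phi)
    Pphi = [sum(P[i][j] * phi[j] for j in range(n)) for i in range(n)]
    denom = 1.0 + sum(phi[i] * Pphi[i] for i in range(n))
    K = [p / denom for p in Pphi]                       # gain
    err = y - sum(phi[i] * theta[i] for i in range(n))  # prediction error
    theta = [theta[i] + K[i] * err for i in range(n)]
    P = [[P[i][j] - K[i] * Pphi[j] for j in range(n)] for i in range(n)]
    return theta, P

# Identify a hypothetical first-order model V_{k+1} = a*V_k + b*I_k
# (true a = 0.95, b = 0.2) from simulated, lightly noisy data.
random.seed(0)
a_true, b_true = 0.95, 0.2
theta = [0.0, 0.0]
P = [[1000.0, 0.0], [0.0, 1000.0]]   # large initial covariance
V = 3.7
for _ in range(500):
    I = random.uniform(-1.0, 1.0)
    V_next = a_true * V + b_true * I + random.gauss(0.0, 1e-4)
    theta, P = rls_update(theta, P, [V, I], V_next)
    V = V_next
```

With noise also on the regressor (the current measurement), this plain RLS becomes biased, which is exactly the effect the FBCRLS scheme is designed to compensate.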
Samat, N. A.; Ma'arof, S. H. Mohd Imam
2015-05-01
Disease mapping is a method to display the geographical distribution of disease occurrence, which generally involves the usage and interpretation of a map to show the incidence of certain diseases. Relative risk (RR) estimation is one of the most important issues in disease mapping. This paper begins by providing a brief overview of Chikungunya disease. This is followed by a review of the classical model used in disease mapping, based on the standardized morbidity ratio (SMR), which we then apply to our Chikungunya data. We then fit an extension of the classical model, which we refer to as a Poisson-Gamma model, in which prior distributions for the relative risks are assumed known. Both sets of results are displayed and compared using maps, revealing a smoother map with fewer extreme values of estimated relative risk. Extensions of this work will consider other methods that can overcome the drawbacks of the existing ones, in order to inform and direct government strategy for monitoring and controlling Chikungunya disease.
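The classical SMR estimate described above is simply observed cases divided by the cases expected if the overall rate applied everywhere. A minimal sketch, with invented district counts (not the paper's Chikungunya data):

```python
def smr(observed, population):
    """Standardized morbidity ratio per region: observed cases divided by the
    cases expected under the overall rate. Values > 1 indicate elevated risk."""
    overall_rate = sum(observed) / sum(population)
    expected = [n * overall_rate for n in population]
    return [o / e for o, e in zip(observed, expected)]

cases = [12, 3, 30]            # hypothetical case counts per district
pop   = [10000, 5000, 20000]   # district populations
rr = smr(cases, pop)           # classical relative-risk estimates
```

The Poisson-Gamma extension shrinks these raw ratios toward the prior mean (posterior mean of the form (alpha + O_i) / (beta + E_i) for a Gamma(alpha, beta) prior), which is what produces the smoother map with fewer extreme values.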
U.S. Environmental Protection Agency — Population-based estimates of pesticide intake are needed to characterize exposure for particular demographic groups based on their dietary behaviors. Regression...
Development of Web-Based RECESS Model for Estimating Baseflow Using SWAT
Directory of Open Access Journals (Sweden)
Gwanjae Lee
2014-04-01
Groundwater has received increasing attention as an important strategic water resource for adaptation to climate change. In this regard, the separation of baseflow from streamflow and the analysis of recession curves make a significant contribution to integrated river basin management. The United States Geological Survey (USGS) RECESS model, which adopts the master-recession-curve (MRC) method, can separate baseflow from streamflow more accurately than other baseflow-separation schemes, which are more limited in their ability to reflect various watershed/aquifer characteristics. The RECESS model has been widely used for the analysis of hydrographs, but applications of RECESS were only available through the Microsoft Disk Operating System (MS-DOS). Thus, this study aims to develop a web-based RECESS model for easy separation of baseflow from streamflow, with easy application to ungauged regions. RECESS on the web derives the alpha factor, the baseflow recession constant in the Soil and Water Assessment Tool (SWAT), and this variable is provided to SWAT as an input. The results showed that the alpha factor estimated from the web-based RECESS model improved the predictions of streamflow and recession. Furthermore, these findings showed that the baseflow characteristics of the ungauged watersheds were influenced by the land use and slope angle of the watersheds, as well as by precipitation and streamflow.
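The recession constant that the web tool derives (SWAT's baseflow alpha factor) comes from the assumption that baseflow decays exponentially, Q(t) = Q0 · exp(-alpha·t), so alpha is the slope of ln(Q) against time on a falling limb. A minimal sketch on a synthetic recession (the master-recession-curve machinery in RECESS, which combines many recession segments, is not reproduced here):

```python
import math

def recession_constant(flows):
    """Estimate alpha from a falling-limb series of daily flows, assuming
    Q(t) = Q0 * exp(-alpha * t): least-squares slope of ln(Q) vs. time."""
    n = len(flows)
    t = list(range(n))
    lnq = [math.log(q) for q in flows]
    t_mean, q_mean = sum(t) / n, sum(lnq) / n
    slope = sum((ti - t_mean) * (qi - q_mean) for ti, qi in zip(t, lnq)) / \
            sum((ti - t_mean) ** 2 for ti in t)
    return -slope

# Synthetic recession with alpha = 0.05 per day, starting at 10 m^3/s
q = [10.0 * math.exp(-0.05 * d) for d in range(30)]
alpha = recession_constant(q)
```

The fitted alpha is what would be written into SWAT's groundwater input as the baseflow recession constant for the catchment.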
State-space dynamic model for estimation of radon entry rate, based on Kalman filtering
International Nuclear Information System (INIS)
Brabec, Marek; Jilek, Karel
2007-01-01
To predict the radon concentration in a house environment and to understand the role of all factors affecting its behavior, it is necessary to recognize time variation in both the air exchange rate and the radon entry rate into a house. This paper describes a new approach to the separation of their effects, which effectively allows continuous estimation of both radon entry rate and air exchange rate from simultaneous tracer gas (carbon monoxide) and radon gas measurement data. It is based on a state-space statistical model that permits quick and efficient calculations. The underlying computations are based on (extended) Kalman filtering, whose practical software implementation is easy. A key property is the model's flexibility: it can easily be adjusted to handle various artificial regimens of both radon gas and CO gas level manipulation. After introducing the statistical model formally, its performance is demonstrated on real data from measurements conducted in our experimental, naturally ventilated and unoccupied room. To verify our method, the radon entry rate calculated via the proposed statistical model was compared with its known reference value. The results from several days of measurement indicated fairly good agreement (on average, within about 5% of the reference radon entry rate). The measured radon concentration moved around a level of approximately 600 Bq m⁻³, whereas the air exchange rate ranged from 0.3 to 0.8 h⁻¹.
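The filtering idea above can be sketched with a two-state linear Kalman filter on a one-box mass balance: concentration C is measured, entry rate E is an unobserved random walk. This is a simplification of the paper's model (the air exchange rate is fixed here rather than jointly estimated from the CO tracer, and all numbers are hypothetical):

```python
def kf_radon(meas, dt, lam_v, volume, q=10.0, r=4.0):
    """Linear Kalman filter estimating radon concentration C (Bq/m^3) and
    entry rate E (Bq/h) from concentration readings; E is a random walk.
    Discretised mass balance: C_{k+1} = (1 - lam_v*dt)*C_k + (dt/volume)*E_k."""
    a, b = 1.0 - lam_v * dt, dt / volume
    x = [meas[0], 0.0]                 # state [C, E]
    P = [[r, 0.0], [0.0, 1e6]]         # entry rate initially very uncertain
    entry = []
    for z in meas:
        # predict: x <- F x, P <- F P F' + Q, with F = [[a, b], [0, 1]]
        x = [a * x[0] + b * x[1], x[1]]
        p00 = a * (a * P[0][0] + b * P[1][0]) + b * (a * P[0][1] + b * P[1][1])
        p01 = a * P[0][1] + b * P[1][1]
        p10 = a * P[1][0] + b * P[1][1]
        P = [[p00, p01], [p10, P[1][1] + q]]
        # update with scalar measurement z = C + noise (H = [1, 0])
        s = P[0][0] + r
        k0, k1 = P[0][0] / s, P[1][0] / s
        innov = z - x[0]
        x = [x[0] + k0 * innov, x[1] + k1 * innov]
        P = [[(1 - k0) * P[0][0], (1 - k0) * P[0][1]],
             [P[1][0] - k1 * P[0][0], P[1][1] - k1 * P[0][1]]]
        entry.append(x[1])
    return entry

# Hypothetical 30 m^3 room, air-exchange 0.5/h, readings every 0.1 h.
# A steady 600 Bq/m^3 reading implies E = lam_v * volume * C = 9000 Bq/h.
entry_rate = kf_radon([600.0] * 400, dt=0.1, lam_v=0.5, volume=30.0)
```

With steady measurements the filter converges to the entry rate consistent with the mass balance, which mirrors the paper's verification against a known reference entry rate.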
Xin, X.; Li, F.; Peng, Z.; Qinhuo, L.
2017-12-01
Land surface heterogeneities significantly affect the reliability and accuracy of remotely sensed evapotranspiration (ET), and the problem worsens for lower-resolution data. At the same time, temporal-scale extrapolation of the instantaneous latent heat flux (LE) at satellite overpass time to daily ET is crucial for applications of such remote sensing products. The purpose of this paper is to propose a simple but efficient model for estimating daytime evapotranspiration that accounts for the heterogeneity of mixed pixels. To this end, an equation to calculate the evapotranspiration fraction (EF) of mixed pixels was derived based on two key assumptions. Assumption 1: the available energy (AE) of each sub-pixel is approximately equal to that of any other sub-pixel in the same mixed pixel, within an acceptable margin of bias, and equal to the AE of the mixed pixel. This is only a simplification of the equation, and its uncertainties and the resulting errors in estimated ET are very small. Assumption 2: the EF of each sub-pixel equals the EF of the nearest pure pixel(s) of the same land cover type. This equation is intended to correct the spatial-scale error of the mixed-pixel EF and can be used to calculate daily ET with daily AE data. The model was applied to an artificial oasis in the midstream of the Heihe River. HJ-1B satellite data were used to estimate the lumped fluxes at the scale of 300 m after resampling the 30-m resolution datasets to 300 m resolution, which was used to carry out the key step of the model. The results before and after correction were compared with each other and validated using site data from eddy-correlation systems. The results indicated that the new model improves the accuracy of daily ET estimation relative to the lumped method. Validation at 12 eddy-correlation sites for 9 days of HJ-1B overpasses showed that the R² increased to 0.82 from 0.62; the RMSE decreased to 1.60 MJ/m² from 2.47 MJ/m²; the MBE decreased from 1.92 MJ/m² to 1
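Under the two assumptions above, the corrected EF of a mixed pixel reduces to an area-weighted mean of the EFs of nearby pure pixels, and daily ET is then EF times daily available energy. A minimal sketch with invented cover fractions and EF values:

```python
def mixed_pixel_ef(fractions, ef_pure):
    """EF of a mixed pixel as the area-weighted mean of the EFs of the nearest
    pure pixels of each land-cover type (Assumption 2), valid because AE is
    taken as roughly uniform across sub-pixels (Assumption 1)."""
    return sum(f * ef for f, ef in zip(fractions, ef_pure))

def daily_et(ef_pixel, daily_ae):
    """Daily ET (MJ/m^2) as evaporative fraction times daily available energy."""
    return ef_pixel * daily_ae

# Hypothetical 300 m pixel: 60% cropland (EF 0.75), 40% bare soil (EF 0.30),
# daily available energy 12 MJ/m^2
ef = mixed_pixel_ef([0.6, 0.4], [0.75, 0.30])
et = daily_et(ef, 12.0)
```

The lumped method would instead apply one EF derived from the mixed-pixel radiometric signal, which is what introduces the spatial-scale error the correction removes.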
An estimator of the survival function based on the semi-Markov model under dependent censorship.
Lee, Seung-Yeoun; Tsai, Wei-Yann
2005-06-01
Lee and Wolfe (Biometrics vol. 54, pp. 1176-1178, 1998) proposed a two-stage sampling design for testing the assumption of independent censoring, which involves further follow-up of a subset of lost-to-follow-up censored subjects. They also proposed an adjusted estimator of the survivor function for a proportional hazards model under dependent censoring. In this paper, a new estimator of the survivor function is proposed for the semi-Markov model under dependent censorship, on the basis of the two-stage sampling data. The consistency and the asymptotic distribution of the proposed estimator are derived. The estimation procedure is illustrated with an example from a lung cancer clinical trial, and simulation results are reported for the mean squared errors of the estimators under a proportional hazards model and two different nonproportional hazards models.
Potocki, J K; Tharp, H S
1993-01-01
The success of treating cancerous tissue with heat depends on the temperature elevation, the amount of tissue elevated to that temperature, and the length of time that the tissue temperature is elevated. In clinical situations the temperature of most of the treated tissue volume is unknown, because only a small number of temperature sensors can be inserted into the tissue. A state space model based on a finite difference approximation of the bioheat transfer equation (BHTE) is developed for identification purposes. A full-order extended Kalman filter (EKF) is designed to estimate both the unknown blood perfusion parameters and the temperature at unmeasured locations. Two reduced-order estimators are designed as computationally less intensive alternatives to the full-order EKF. Simulation results show that the success of the estimation scheme depends strongly on the number and location of the temperature sensors. Superior results occur when a temperature sensor exists in each unknown blood perfusion zone, and the number of sensors is at least as large as the number of unknown perfusion zones. Unacceptable results occur when there are more unknown perfusion parameters than temperature sensors, or when the sensors are placed in locations that do not sample the unknown perfusion information.
A Systematic Evaluation of Ultrasound-based Fetal Weight Estimation Models on Indian Population
Directory of Open Access Journals (Sweden)
Sujitkumar S. Hiwale
2017-12-01
Conclusion: We found that the existing fetal weight estimation models have high systematic and random errors on the Indian population, with a general tendency to overestimate fetal weight in the LBW category and underestimate it in the HBW category. We also observed that these models have a limited ability to identify babies at risk of either low or high birth weight. It is recommended that clinicians consider all these factors when interpreting the estimated weight given by the existing models.
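The systematic and random errors referred to above are conventionally reported as the mean percentage error (MPE) and its standard deviation across cases. A minimal sketch with invented weights (not the study's data):

```python
def weight_errors(estimated, actual):
    """Mean percentage error (systematic error) and its sample standard
    deviation (random error) for fetal weight estimates."""
    pe = [100.0 * (e - a) / a for e, a in zip(estimated, actual)]
    n = len(pe)
    mpe = sum(pe) / n
    sd = (sum((p - mpe) ** 2 for p in pe) / (n - 1)) ** 0.5
    return mpe, sd

est = [2600, 1900, 3400, 2450]   # hypothetical model estimates (g)
act = [2500, 2000, 3300, 2400]   # actual birth weights (g)
mpe, sd = weight_errors(est, act)
```

A positive MPE in the low-birth-weight stratum and a negative MPE in the high-birth-weight stratum is exactly the over/underestimation pattern the study describes.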
On the economic benefit of utility based estimation of a volatility model
Adam Clements; Annastiina Silvennoinen
2009-01-01
Forecasts of asset return volatility are necessary for many financial applications, including portfolio allocation. Traditionally, the parameters of econometric models used to generate volatility forecasts are estimated in a statistical setting and subsequently used in an economic setting such as portfolio allocation. Differences in the criteria under which the model is estimated and applied may reduce the overall economic benefit of a model in the context of portfolio allocation. Thi...
Directory of Open Access Journals (Sweden)
Isabel C. Pérez Hoyos
2016-04-01
Groundwater Dependent Ecosystems (GDEs) are increasingly threatened by humans' rising demand for water resources. Consequently, it is imperative to identify the locations of GDEs in order to protect them. This paper develops a methodology to estimate the probability that an ecosystem is groundwater dependent. Probabilities are obtained by modeling the relationship between the known locations of GDEs and factors influencing groundwater dependence, namely water table depth and climatic aridity index. Probabilities are derived for the state of Nevada, USA, using modeled water table depth and aridity index values obtained from the Global Aridity database. The model was selected by comparing the performance of classification trees (CT) and random forests (RF). Based on a threshold-independent accuracy measure, RF has a better ability to generate probability estimates. Considering a threshold that minimizes the misclassification rate for each model, RF also proves to be more accurate. Regarding training accuracy, performance measures such as accuracy, sensitivity, and specificity are higher for RF. For the test set, higher values of accuracy and kappa for CT highlight the fact that these measures are greatly affected by low prevalence. As shown for RF, the choice of the cutoff probability value has important consequences for model accuracy and the overall proportion of locations where GDEs are found.
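The threshold-independent accuracy measure used to compare the probability estimates above is typically the area under the ROC curve (AUC), which is cutoff-free precisely because it considers all thresholds at once. A minimal rank-based sketch with invented labels and predicted GDE probabilities:

```python
def auc(labels, scores):
    """Area under the ROC curve computed as the probability that a randomly
    chosen positive outscores a randomly chosen negative (ties count half).
    Threshold-independent, unlike accuracy at a single cutoff."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical test locations: 1 = known GDE, 0 = non-GDE, with model scores
y     = [1, 1, 1, 0, 0, 0]
p_gde = [0.9, 0.7, 0.4, 0.5, 0.3, 0.1]
score = auc(y, p_gde)
```

Because AUC depends only on the ranking of scores, it is also unaffected by the low prevalence that distorts accuracy and kappa in the study's test set.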
Estimation model for evaporative emissions from gasoline vehicles based on thermodynamics.
Hata, Hiroo; Yamada, Hiroyuki; Kokuryo, Kazuo; Okada, Megumi; Funakubo, Chikage; Tonokura, Kenichi
2018-03-15
In this study, we conducted seven-day diurnal breathing loss (DBL) tests on gasoline vehicles. We propose a model based on the theory of thermodynamics that can reproduce the experimental results of the current and previous studies. The experiments were performed using 14 physical parameters to determine the dependence of total emissions on temperature, fuel tank fill level, and fuel vapor pressure. In most cases, total emissions after an apparent breakthrough were proportional to the difference between the minimum and maximum environmental temperatures during the day, the empty space in the fuel tank, and the fuel vapor pressure. Volatile organic compounds (VOCs) were measured using gas chromatography with mass spectrometry and flame ionization detection (GC-MS/FID) to determine the ozone formation potential (OFP) of the after-breakthrough gas emitted to the atmosphere. Using the experimental results, we constructed a thermodynamic model for estimating the amount of evaporative emissions after a fully saturated canister breakthrough occurs, and compared the thermodynamic model with previous models. Finally, the total annual evaporative emissions and OFP in Japan were estimated and compared across models.
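The proportionality to temperature swing, tank empty space, and vapor pressure follows from ideal-gas thermodynamics of the tank headspace. A rough sketch of that reasoning (this is not the paper's fitted model, and every number below is hypothetical):

```python
R = 8.314  # J/(mol K), universal gas constant

def diurnal_vapor_mass(p_vap, headspace_m3, molar_mass, t_min_c, t_max_c):
    """Rough estimate of the fuel vapor (grams) pushed out of a tank's
    headspace as it warms from t_min to t_max, assuming a saturated
    ideal-gas headspace held at constant vapor pressure p_vap (Pa)."""
    n_cold = p_vap * headspace_m3 / (R * (t_min_c + 273.15))  # moles held cold
    n_hot  = p_vap * headspace_m3 / (R * (t_max_c + 273.15))  # moles held hot
    return (n_cold - n_hot) * molar_mass   # expelled moles x g/mol

# 30 L of empty tank space, 30 kPa vapor pressure, ~70 g/mol vapor, 15->35 C day
grams = diurnal_vapor_mass(30e3, 0.030, 70.0, 15.0, 35.0)
```

Since the expelled mass scales with p_vap, with headspace volume, and (to first order) with the temperature difference, the observed proportionalities fall out directly; the paper's model additionally accounts for the saturated canister between the tank and the atmosphere.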
Li, Zhengpeng; Liu, Shuguang; Tan, Zhengxi; Bliss, Norman B.; Young, Claudia J.; West, Tristram O.; Ogle, Stephen M.
2014-01-01
Accurately quantifying the spatial and temporal variability of net primary production (NPP) for croplands is essential to understand regional cropland carbon dynamics. We compared three NPP estimates for croplands in the Midwestern United States: inventory-based estimates using crop yield data from the U.S. Department of Agriculture (USDA) National Agricultural Statistics Service (NASS); estimates from the satellite-based Moderate Resolution Imaging Spectroradiometer (MODIS) NPP product; and estimates from the General Ensemble biogeochemical Modeling System (GEMS) process-based model. The three methods estimated mean NPP in the range of 469–687 g C m⁻² yr⁻¹ and total NPP in the range of 318–490 Tg C yr⁻¹ for croplands in the Midwest in 2007 and 2008. The NPP estimates from crop yield data and the GEMS model showed a mean cropland NPP of over 650 g C m⁻² yr⁻¹, while the MODIS NPP product estimated a mean NPP of less than 500 g C m⁻² yr⁻¹. MODIS NPP also showed very different spatial variability of cropland NPP from the other two methods. We found these differences were mainly caused by differences in the land cover data and the crop-specific information used by the methods. Our study demonstrated that detailed mapping of the temporal and spatial change of crop species is critical for estimating the spatial and temporal variability of cropland NPP. We suggest that high-resolution land cover data with species-specific crop information should be used in satellite-based and process-based models to improve carbon estimates for croplands.
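Inventory-based cropland NPP of the kind described above is conventionally back-calculated from reported yields via crop-specific conversion coefficients. A minimal sketch; the coefficients below are illustrative placeholders, not the USDA/GEMS values, and root biomass is ignored:

```python
def crop_npp(yield_t_ha, dry_fraction, harvest_index, carbon_fraction=0.45):
    """Inventory-style NPP (g C per m^2 per yr) from crop yield: convert fresh
    yield to dry matter, scale up by harvest index to whole-plant biomass,
    then convert biomass to carbon."""
    dry_matter_g_m2 = yield_t_ha * 1e6 / 1e4 * dry_fraction   # t/ha -> g/m^2
    total_biomass = dry_matter_g_m2 / harvest_index           # grain -> plant
    return total_biomass * carbon_fraction

# Hypothetical maize field: 10 t/ha fresh yield, 85% dry matter, HI = 0.5
npp = crop_npp(10.0, 0.85, 0.5)
```

Because every term here is crop-specific, misclassifying the crop in the underlying land cover map propagates directly into the NPP estimate, which is the sensitivity the study highlights.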
REMOTE-SENSING-BASED BIOPHYSICAL MODELS FOR ESTIMATING LAI OF IRRIGATED CROPS IN MURRAY DARLING BASIN
Directory of Open Access Journals (Sweden)
I. Wittamperuma
2012-07-01
Remote sensing is a rapid and reliable method for estimating crop growth data, from individual plants to whole crops, in an irrigated agriculture ecosystem. The LAI is one of the important biophysical parameters for determining vegetation health, biomass, photosynthesis and evapotranspiration (ET) in the modelling of crop yield and water productivity. Ground measurement of this parameter is tedious and time-consuming due to heterogeneity across the landscape over time and space. This study deals with the development of remote-sensing-based empirical relationships for the estimation of ground-based LAI (LAIG) using NDVI, modelled with and without atmospheric correction, for three irrigated crops (corn, wheat and rice) grown on irrigated farms within the Coleambally Irrigation Area (CIA), located in the southern Murray Darling Basin, NSW, Australia. Extensive ground-truthing campaigns were carried out to measure crop growth and to collect field samples of LAI using the LAI-2000 Plant Canopy Analyser, and reflectance using the CROPSCAN Multi Spectral Radiometer, at several farms within the CIA. A set of 12 cloud-free Landsat 5 TM satellite images for the period 2010-11 was downloaded, and regression analysis was carried out to analyse the correlations between satellite- and ground-measured reflectance and to check the reliability of the data sets for the crops. Among all the developed regression relationships between LAI and NDVI, the atmospheric correction process significantly improved the relationship between LAI and NDVI for Landsat 5 TM images. The regression analysis also shows strong correlations for corn and wheat but weak correlations for rice, which is currently being investigated.
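The empirical relationships described above amount to fitting LAI against NDVI per crop, typically with ordinary least squares. A minimal sketch on invented corn-like values (not the CIA field data):

```python
def fit_linear(x, y):
    """Ordinary least squares for a crop-specific model LAI = a + b * NDVI."""
    n = len(x)
    xm, ym = sum(x) / n, sum(y) / n
    b = sum((xi - xm) * (yi - ym) for xi, yi in zip(x, y)) / \
        sum((xi - xm) ** 2 for xi in x)
    a = ym - b * xm
    return a, b

# Hypothetical ground LAI vs. atmospherically corrected NDVI for one crop
ndvi = [0.30, 0.45, 0.60, 0.75, 0.85]
lai  = [0.8, 1.9, 3.1, 4.2, 5.0]
a, b = fit_linear(ndvi, lai)
predicted_lai = a + b * 0.70   # estimate LAI for a new pixel's NDVI
```

Fitting the same form to NDVI computed with and without atmospheric correction, and comparing the fit quality per crop, is the comparison the study reports.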
A new adaptive control scheme based on the interacting multiple model (IMM) estimation
International Nuclear Information System (INIS)
Afshari, Hamed H.; Al-Ani, Dhafar; Habibi, Saeid
2016-01-01
In this paper, an interacting multiple model (IMM) adaptive estimation approach is incorporated to design an optimal adaptive control law for stabilizing an unmanned vehicle. Because the forward velocity of the unmanned vehicle varies, its aerodynamic derivatives are constantly changing. In order to stabilize the unmanned vehicle and achieve the control objectives across in-flight conditions, one seeks an adaptive control strategy that can adjust itself to varying flight conditions. In this context, a bank of linear models is used to describe the vehicle dynamics in different operating modes. Each operating mode represents a particular dynamic regime with a different forward velocity. These models are then used within an IMM filter containing a bank of Kalman filters (KF) operating in parallel. To regulate and stabilize the vehicle, a linear quadratic regulator (LQR) law is designed and implemented for each mode. The IMM structure determines the current mode based on the stored models and in-flight input-output measurements. The LQR design also provides a set of controllers, each corresponding to a particular flight mode and minimizing the tracking error. Finally, the ultimate control law is obtained as a weighted sum of the individual controllers, where the weights are the mode probabilities of each operating mode.
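The probability-weighted blending described above can be sketched in one simplified IMM cycle: propagate mode probabilities through a Markov transition matrix, reweight by each filter's measurement likelihood, and mix per-mode LQR commands. The per-mode Kalman filters are abstracted into given likelihoods here, and all numbers (transition matrix, gains) are hypothetical:

```python
def imm_mix_and_control(mu, trans, likelihood, gains, error):
    """One simplified IMM cycle: predict mode probabilities via the Markov
    transition matrix, update them with each mode filter's measurement
    likelihood, then blend per-mode LQR commands u_i = -K_i * error."""
    n = len(mu)
    pred = [sum(trans[j][i] * mu[j] for j in range(n)) for i in range(n)]
    post = [pred[i] * likelihood[i] for i in range(n)]
    s = sum(post)
    mu_new = [p / s for p in post]                  # normalised probabilities
    u = sum(m * (-k * error) for m, k in zip(mu_new, gains))
    return mu_new, u

mu0  = [0.5, 0.5]                     # two flight modes (slow / fast)
T    = [[0.95, 0.05], [0.05, 0.95]]   # mode transition probabilities
like = [0.2, 0.8]                     # fast-mode model fits the data better
K    = [1.0, 2.5]                     # hypothetical per-mode LQR gains
mu1, u = imm_mix_and_control(mu0, T, like, K, error=0.4)
```

As measurements favour one mode, its probability (and hence its controller) dominates the blended command, which is how the scheme adapts to the changing forward velocity.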
Model Effects on GLAS-Based Regional Estimates of Forest Biomass and Carbon
Nelson, Ross F.
2010-01-01
Ice, Cloud, and land Elevation Satellite (ICESat) / Geoscience Laser Altimeter System (GLAS) waveform data are used to estimate biomass and carbon on a 1.27 × 10⁶ km² study area in the Province of Quebec, Canada, below the tree line. The same input datasets and sampling design are used in conjunction with four different predictive models to estimate total aboveground dry forest biomass and forest carbon. The four models include non-stratified and stratified versions of a multiple linear model where either biomass or (biomass)^0.5 serves as the dependent variable. The use of different models in Quebec introduces differences in Provincial dry biomass estimates of up to 0.35 Gt, with a range of 4.94 ± 0.28 Gt to 5.29 ± 0.36 Gt. The differences among model estimates are statistically non-significant, however, and the results demonstrate the degree to which carbon estimates vary strictly as a function of the model used to estimate regional biomass. Results also indicate that GLAS measurements become problematic with respect to height and biomass retrievals in the boreal forest when biomass values fall below 20 t/ha and when GLAS 75th-percentile heights fall below 7 m.
Zhu, Yanjie; Peng, Xi; Wu, Yin; Wu, Ed X; Ying, Leslie; Liu, Xin; Zheng, Hairong; Liang, Dong
2017-02-01
To develop a new model-based method with spatial and parametric constraints (MB-SPC) aimed at accelerating diffusion tensor imaging (DTI) by directly estimating the diffusion tensor from highly undersampled k-space data. The MB-SPC method effectively incorporates prior information on the joint sparsity of different diffusion-weighted images using an L1-L2 norm and on the smoothness of the diffusion tensor using a total-variation seminorm. The undersampled k-space datasets were obtained from fully sampled DTI datasets of a simulated phantom and an ex-vivo experimental rat heart, with acceleration factors ranging from 2 to 4. The diffusion tensor was directly reconstructed by solving a minimization problem with a nonlinear conjugate gradient descent algorithm. The reconstruction performance was quantitatively assessed using the normalized root mean square error (nRMSE) of the DTI indices. The MB-SPC method achieves acceptable DTI measures at an acceleration factor up to 4. Experimental results demonstrate that the proposed method can estimate the diffusion tensor more accurately than most existing methods operating at higher net acceleration factors. The proposed method can significantly reduce artifacts, particularly at higher acceleration factors or lower SNRs. This method can easily be adapted to MR relaxometry parameter mapping and is thus useful in the characterization of biological tissue such as nerves, muscle, and heart tissue.
Directory of Open Access Journals (Sweden)
Hu, Rongming; Wang, Shu; Guo, Jiao; Guo, Liankun
2018-04-01
Impervious surface area and vegetation coverage are important biophysical indicators of urban surface features that can be derived from medium-resolution images. However, remote sensing data obtained by a single sensor are easily affected by many factors, such as weather conditions, and their spatial and temporal resolution cannot meet the needs of soil erosion estimation. Therefore, integrated multi-source remote sensing data are needed to carry out high spatio-temporal resolution vegetation coverage estimation. Vegetation coverage and impervious-surface data at two spatial and temporal scales were obtained from MODIS and Landsat 8 remote sensing images. Based on the Enhanced Spatial and Temporal Adaptive Reflectance Fusion Model (ESTARFM), the vegetation coverage data at the two scales were fused, yielding fused vegetation coverage (ESTARFM FVC) and an impervious layer at high spatio-temporal resolution (30 m, 8 days). On this basis, the spatial variability of the impervious surface and the vegetation cover landscape in the study area was measured by means of statistics and spatial autocorrelation analysis. The results showed that: 1) ESTARFM FVC and the impervious surface have high accuracy and can characterize the biophysical components of the land surface; 2) the average impervious surface proportion and the spatial configuration of each area differ, affected by natural conditions and urbanization. In the urban area of Xi'an, which has typical characteristics of spontaneous urbanization, landscapes are fragmented and have less spatial dependence.
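The core idea behind fusion models of the ESTARFM family can be sketched in its simplest STARFM-like form: predict the fine-resolution image at a new date by adding the coarse-scale temporal change to the fine image from a base date. ESTARFM itself additionally uses spectrally similar neighbouring pixels and conversion coefficients; the toy arrays below are invented:

```python
def starfm_like(fine_t1, coarse_t1, coarse_t2):
    """Minimal STARFM-style fusion (a simplification of ESTARFM): predict the
    fine-resolution image at t2 by adding the coarse-scale temporal change
    between t1 and t2 to the fine image at t1, pixel by pixel."""
    return [[f + (c2 - c1) for f, c1, c2 in zip(fr, cr1, cr2)]
            for fr, cr1, cr2 in zip(fine_t1, coarse_t1, coarse_t2)]

# 2x2 toy vegetation-coverage fractions (Landsat-like at t1; MODIS-like at t1, t2)
fine1   = [[0.60, 0.55], [0.20, 0.25]]
coarse1 = [[0.50, 0.50], [0.30, 0.30]]
coarse2 = [[0.58, 0.58], [0.26, 0.26]]
fine2 = starfm_like(fine1, coarse1, coarse2)
```

This captures why the fused product inherits Landsat's 30 m spatial detail while following MODIS's 8-day temporal dynamics.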
Ebrahimian, Hossein; Jalayer, Fatemeh
2017-08-29
In the immediate aftermath of a strong earthquake, and in the presence of an ongoing aftershock sequence, scientific advisories in the form of seismicity forecasts play a crucial role in emergency decision-making and risk mitigation. Epidemic Type Aftershock Sequence (ETAS) models are frequently used for forecasting the spatio-temporal evolution of seismicity in the short term. We propose robust forecasting of seismicity based on the ETAS model, exploiting the link between Bayesian inference and Markov Chain Monte Carlo simulation. The methodology considers not only the uncertainty in the model parameters, conditioned on the catalogue of events that occurred before the forecasting interval, but also the uncertainty in the sequence of events that will occur during the forecasting interval. We demonstrate the methodology by retrospective early forecasting of the seismicity associated with the 2016 Amatrice seismic sequence in central Italy. We provide robust spatio-temporal short-term seismicity forecasts over various time intervals in the first few days after each of the three main events within the sequence, which predict the seismicity within plus/minus two standard deviations of the mean estimate within the few hours after the main event.
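The temporal part of the ETAS model referred to above expresses the conditional intensity as a background rate plus modified-Omori aftershock contributions from all past events. A minimal sketch; the parameter values below are illustrative only, whereas in the paper they carry Bayesian posterior uncertainty sampled by MCMC:

```python
import math

def etas_intensity(t, events, mu=0.2, K=0.05, c=0.01, alpha=1.5, p=1.1, m0=3.0):
    """ETAS conditional intensity lambda(t) (events/day): background rate mu
    plus an Omori-law term K*exp(alpha*(m_i - m0))/(t - t_i + c)^p for each
    past event (t_i, m_i) with magnitude above the threshold m0."""
    rate = mu
    for t_i, m_i in events:
        if t_i < t:
            rate += K * math.exp(alpha * (m_i - m0)) / (t - t_i + c) ** p
    return rate

# Toy catalogue: (days since sequence start, magnitude)
catalog = [(0.0, 6.0), (0.5, 4.5)]
lam = etas_intensity(1.0, catalog)   # intensity one day into the sequence
```

Integrating this intensity over a forecasting interval, while simulating both parameter draws and the aftershocks-of-aftershocks that may occur inside the interval, yields the robust forecasts the abstract describes.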
Nijland, L.; Arentze, T.; Timmermans, H.
2013-01-01
Although several activity-based models made the transition to practice in recent years, modeling dynamic activity generation and especially, the mechanisms underlying activity generation are not well incorporated in the current activity-based models. For instance, current models assume that
Nijland, E.W.L.; Arentze, T.A.; Timmermans, H.J.P.
2011-01-01
Although several activity-based models made the transition to practice in recent years, modelling dynamic activity generation and especially, the mechanisms underlying activity generation are not well incorporated in the current activity-based models. For example, current models assume that
Recession-based hydrological models for estimating low flows in ungauged catchments in the Himalayas
Directory of Open Access Journals (Sweden)
Rees, H. G.; Holmes, M. G. R.; Young, A. R.; Kansakar, S. R.
2004-01-01
The Himalayan region of Nepal and northern India experiences hydrological extremes, from monsoonal floods during July to September, when most of the annual precipitation falls, to periods of very low flows during the dry season (December to February). While the monsoon floods cause acute disasters such as loss of human life and property, mudslides and infrastructure damage, the lack of water during the dry season has a chronic impact on the lives of local people. The management of water resources in the region is hampered by relatively sparse hydrometeorological networks and, consequently, many resource assessments are required in catchments where no measurements exist. A hydrological model for estimating dry season flows in ungauged catchments, based on recession curve behaviour, has been developed to address this problem. Observed flows were fitted to a second-order storage model to enable average annual recession behaviour to be examined. Regionalised models were developed, using a calibration set of 26 catchments, to predict three recession curve parameters: the storage constant, the initial recession flow and the start date of the recession. Relationships were identified between: the storage constant and catchment area; the initial recession flow and elevation (acting as a surrogate for rainfall); and the start date of the recession and geographic location. An independent set of 13 catchments was used to evaluate the robustness of the models. The regional models predicted the average volume of water in an annual recession period (1 October to 1 February) with an average error of 8%, while mid-January flows were predicted to within ±50% for 79% of the catchments in the data set. Keywords: Himalaya, recession curve, water resources, ungauged catchment, regionalisation, low flows
Recession-based hydrological models for estimating low flows in ungauged catchments in the Himalayas
Rees, H. G.; Holmes, M. G. R.; Young, A. R.; Kansakar, S. R.
The Himalayan region of Nepal and northern India experiences hydrological extremes from monsoonal floods during July to September, when most of the annual precipitation falls, to periods of very low flows during the dry season (December to February). While the monsoon floods cause acute disasters such as loss of human life and property, mudslides and infrastructure damage, the lack of water during the dry season has a chronic impact on the lives of local people. The management of water resources in the region is hampered by relatively sparse hydrometeorological networks and consequently, many resource assessments are required in catchments where no measurements exist. A hydrological model for estimating dry season flows in ungauged catchments, based on recession curve behaviour, has been developed to address this problem. Observed flows were fitted to a second-order storage model to enable average annual recession behaviour to be examined. Regionalised models were developed, using a calibration set of 26 catchments, to predict three recession curve parameters: the storage constant; the initial recession flow and the start date of the recession. Relationships were identified between: the storage constant and catchment area; the initial recession flow and elevation (acting as a surrogate for rainfall); and the start date of the recession and geographic location. An independent set of 13 catchments was used to evaluate the robustness of the models. The regional models predicted the average volume of water in an annual recession period (1st of October to the 1st of February) with an average error of 8%, while mid-January flows were predicted to within ±50% for 79% of the catchments in the data set.
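The regionalisation above rests on fitting observed recessions to a second-order storage model. As an illustration only, assuming the common closed form Q(t) = Q0/(1 + t/C)^2 (with C the storage constant; the authors' exact formulation may differ), the fit reduces to ordinary least squares after linearisation, since 1/sqrt(Q) is then a straight line in t:

```python
import math

# Illustrative fit of a second-order storage recession, assuming the common
# closed form Q(t) = Q0 / (1 + t/C)^2 (C = storage constant); the paper's
# exact formulation may differ. Linearising, 1/sqrt(Q) is a straight line
# in t, so ordinary least squares recovers both parameters.
def fit_second_order_recession(times, flows):
    ys = [1.0 / math.sqrt(q) for q in flows]
    n = len(times)
    t_bar = sum(times) / n
    y_bar = sum(ys) / n
    slope = (sum((t - t_bar) * (y - y_bar) for t, y in zip(times, ys))
             / sum((t - t_bar) ** 2 for t in times))
    intercept = y_bar - slope * t_bar
    q0 = 1.0 / intercept ** 2      # since intercept = 1/sqrt(Q0)
    c = intercept / slope          # since slope = 1/(C*sqrt(Q0))
    return q0, c

# synthetic recession: Q0 = 20 m^3/s, storage constant C = 30 days
times = list(range(0, 120, 5))
flows = [20.0 / (1.0 + t / 30.0) ** 2 for t in times]
Q0, C = fit_second_order_recession(times, flows)
```

With noise-free data the recovery is exact; with real flows the same regression yields the least-squares recession parameters.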
A rapid estimation of tsunami run-up based on finite fault models
Campos, J.; Fuentes, M. A.; Hayes, G. P.; Barrientos, S. E.; Riquelme, S.
2014-12-01
Many efforts have been made to estimate the maximum run-up height of tsunamis associated with large earthquakes. This is a difficult task because of the time it takes to construct a tsunami model using real-time data from the source. It is possible to construct a database of potential seismic sources and their corresponding tsunamis a priori. However, such models are generally based on uniform slip distributions and thus oversimplify our knowledge of the earthquake source. Instead, we can use finite fault models of earthquakes to give a more accurate prediction of the tsunami run-up. Here we show how to accurately predict tsunami run-up from any seismic source model using an analytic solution found by Fuentes et al. (2013) that was calculated specifically for zones with a very well defined strike, e.g., Chile, Japan, Alaska, etc. The main idea of this work is to produce a tool for emergency response, trading off accuracy for speed. Our solutions for three large earthquakes are promising. Here we compute models of the run-up for the 2010 Mw 8.8 Maule earthquake, the 2011 Mw 9.0 Tohoku earthquake, and the recent 2014 Mw 8.2 Iquique earthquake. Our maximum run-up predictions are consistent with measurements made inland after each event, with a peak of 15 to 20 m for Maule, 40 m for Tohoku, and 2.1 m for the Iquique earthquake. Considering recent advances made in the analysis of real-time GPS data and the ability to rapidly resolve the finiteness of a large earthquake close to existing GPS networks, it will be possible in the near future to perform these calculations within the first five minutes after the occurrence of any such event. Such calculations will thus provide more accurate run-up information than is otherwise available from existing uniform-slip seismic source databases.
A dynamic programming approach for quickly estimating large network-based MEV models
DEFF Research Database (Denmark)
Mai, Tien; Frejinger, Emma; Fosgerau, Mogens
2017-01-01
We propose a way to estimate a family of static Multivariate Extreme Value (MEV) models with large choice sets in short computational time. The resulting model is also straightforward and fast to use for prediction. Following Daly and Bierlaire (2006), the correlation structure is defined by a ro...... to converge (4.3 h on an Intel(R) 3.2 GHz machine using non-parallelized code). We also show that our approach allows us to estimate a cross-nested logit model of 111 nests with a real data set of more than 100,000 observations in 14 h.
International Nuclear Information System (INIS)
Xu Long; Wang Junping; Chen Quanshi
2012-01-01
Highlights: ► A novel extended Kalman filtering SOC estimation method based on a stochastic fuzzy neural network (SFNN) battery model is proposed. ► The SFNN, which has a filtering effect on noisy input, can model the battery's nonlinear dynamics with high accuracy. ► A robust parameter learning algorithm for the SFNN is studied so that the parameters can converge to their true values with noisy data. ► The maximum SOC estimation error based on the proposed method is 0.6%. - Abstract: Extended Kalman filtering is an intelligent and optimal means of estimating the state of a dynamic system. In order to use extended Kalman filtering to estimate the state of charge (SOC), we require a mathematical model that can accurately capture the dynamics of the battery pack. In this paper, we propose a stochastic fuzzy neural network (SFNN), instead of the traditional neural network, that has a filtering effect on noisy input to model the battery's nonlinear dynamics. Then, the paper studies the extended Kalman filtering SOC estimation method based on the SFNN model. The modeling test is realized on an 80 Ah Ni/MH battery pack, and the Federal Urban Driving Schedule (FUDS) cycle is used to verify the SOC estimation method. The maximum SOC estimation error is 0.6% compared with the real SOC obtained from the discharging test.
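For readers unfamiliar with the filtering step, a heavily simplified one-state sketch conveys the idea. It replaces the paper's SFNN battery model with a made-up linear open-circuit-voltage curve and hypothetical capacity, resistance, and noise parameters; only the EKF predict/update structure is representative:

```python
# One-state EKF sketch for SOC estimation. The linear OCV curve, capacity,
# resistance and noise levels are invented; the paper's SFNN battery model
# is far richer -- only the predict/update structure is representative.
CAP = 80 * 3600.0   # assumed capacity, ampere-seconds
R = 0.002           # assumed internal resistance, ohms
DT = 1.0            # time step, seconds

def ocv(soc):
    return 1.2 + 0.2 * soc      # assumed open-circuit voltage (volts)

def d_ocv(soc):
    return 0.2                  # its derivative: the measurement Jacobian

def ekf_step(soc, P, current, v_meas, q=1e-7, r_n=1e-4):
    # predict by coulomb counting: soc' = soc - I*dt/CAP
    soc_p = soc - current * DT / CAP
    P_p = P + q
    # update with the terminal-voltage measurement V = OCV(soc) - I*R
    H = d_ocv(soc_p)
    K = P_p * H / (H * P_p * H + r_n)
    soc_n = soc_p + K * (v_meas - (ocv(soc_p) - current * R))
    return soc_n, (1.0 - K * H) * P_p

# constant-current discharge, tracked from a deliberately wrong initial guess
true_soc, est, P = 0.9, 0.5, 1.0
for _ in range(2000):
    current = 40.0
    true_soc -= current * DT / CAP
    v = ocv(true_soc) - current * R
    est, P = ekf_step(est, P, current, v)
```

Despite the wrong initial guess of 0.5, the voltage updates pull the estimate onto the true SOC trajectory within a few steps.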
Estimation of Seismic Wavelets Based on the Multivariate Scale Mixture of Gaussians Model
Directory of Open Access Journals (Sweden)
Jing-Huai Gao
2009-12-01
This paper proposes a new method for estimating seismic wavelets. Suppose a seismic wavelet can be modeled by a formula with three free parameters (scale, frequency, and phase). We can then transform the estimation of the wavelet into determining these three parameters. The phase of the wavelet is estimated by applying a constant-phase rotation to the seismic signal, while the other two parameters are obtained by a higher-order statistics (HOS) (fourth-order cumulant matching) method. In order to derive the HOS estimator, the multivariate scale mixture of Gaussians (MSMG) model is applied to formulate the multivariate joint probability density function (PDF) of the seismic signal. In this way, we can represent the HOS as a polynomial function of second-order statistics to improve the anti-noise performance and accuracy. In addition, the proposed method works well for short time series.
Mitchell, Jade; Arnot, Jon A; Jolliet, Olivier; Georgopoulos, Panos G; Isukapalli, Sastry; Dasgupta, Surajit; Pandian, Muhilan; Wambaugh, John; Egeghy, Peter; Cohen Hubal, Elaine A; Vallero, Daniel A
2013-08-01
While only limited data are available to characterize the potential toxicity of over 8 million commercially available chemical substances, there is even less information available on the exposure and use scenarios that are required to link potential toxicity to human and ecological health outcomes. Recent improvements and advances, such as high-throughput data gathering, high-performance computational capabilities, and predictive chemical inherency methodology, make this an opportune time to develop an exposure-based prioritization approach that can systematically utilize and link the asymmetrical bodies of knowledge for hazard and exposure. In response to the US EPA's need to develop novel approaches and tools for rapidly prioritizing chemicals, a "Challenge" was issued to several exposure model developers to aid the understanding of current systems in a broader sense and to assist the US EPA's effort to develop an approach comparable to other international efforts. A common set of chemicals was prioritized under each current approach. The results are presented herein along with a comparative analysis of the rankings of the chemicals based on metrics of exposure potential or actual exposure estimates. The analysis illustrates the similarities and differences across the domains of information incorporated in each modeling approach. The overall findings indicate a need to reconcile exposures from diffuse, indirect sources (far-field) with exposures from directly applied chemicals in consumer products or resulting from the presence of a chemical in a microenvironment like a home or vehicle. Additionally, the exposure scenario, including the mode of entry into the environment (i.e., through air, water or sediment), appears to be an important determinant of the level of agreement between modeling approaches. Copyright © 2013 Elsevier B.V. All rights reserved.
Mitchell, Jade; Arnot, Jon A.; Jolliet, Olivier; Georgopoulos, Panos G.; Isukapalli, Sastry; Dasgupta, Surajit; Pandian, Muhilan; Wambaugh, John; Egeghy, Peter; Cohen Hubal, Elaine A.; Vallero, Daniel A.
2014-01-01
While only limited data are available to characterize the potential toxicity of over 8 million commercially available chemical substances, there is even less information available on the exposure and use scenarios that are required to link potential toxicity to human and ecological health outcomes. Recent improvements and advances, such as high-throughput data gathering, high-performance computational capabilities, and predictive chemical inherency methodology, make this an opportune time to develop an exposure-based prioritization approach that can systematically utilize and link the asymmetrical bodies of knowledge for hazard and exposure. In response to the US EPA’s need to develop novel approaches and tools for rapidly prioritizing chemicals, a “Challenge” was issued to several exposure model developers to aid the understanding of current systems in a broader sense and to assist the US EPA’s effort to develop an approach comparable to other international efforts. A common set of chemicals was prioritized under each current approach. The results are presented herein along with a comparative analysis of the rankings of the chemicals based on metrics of exposure potential or actual exposure estimates. The analysis illustrates the similarities and differences across the domains of information incorporated in each modeling approach. The overall findings indicate a need to reconcile exposures from diffuse, indirect sources (far-field) with exposures from directly applied chemicals in consumer products or resulting from the presence of a chemical in a microenvironment like a home or vehicle. Additionally, the exposure scenario, including the mode of entry into the environment (i.e., through air, water or sediment), appears to be an important determinant of the level of agreement between modeling approaches. PMID:23707726
Odman, M. T.; Hu, Y.; Russell, A. G.
2016-12-01
Prescribed burning is practiced throughout the US, and most widely in the Southeast, for the purpose of maintaining and improving the ecosystem and reducing the wildfire risk. However, prescribed burn emissions contribute significantly to the trace gas and particulate matter loads in the atmosphere. In places where air quality is already stressed by other anthropogenic emissions, prescribed burns can lead to major health and environmental problems. Air quality modeling efforts are under way to assess the impacts of prescribed burn emissions. Operational forecasts of the impacts are also emerging for use in dynamic management of air quality as well as the burns. Unfortunately, large uncertainties exist in the process of estimating prescribed burn emissions, and these uncertainties limit the accuracy of the burn impact predictions. Prescribed burn emissions are estimated by using either ground-based information or satellite observations. When there is sufficient local information about the burn area, the types of fuels, their consumption amounts, and the progression of the fire, ground-based estimates are more accurate. In the absence of such information, satellites remain the only reliable source for emission estimation. To determine the level of uncertainty in prescribed burn emissions, we compared estimates derived from a burn permit database and other ground-based information to the estimates of the Biomass Burning Emissions Product derived from a constellation of NOAA and NASA satellites. Using these emissions estimates we conducted simulations with the Community Multiscale Air Quality (CMAQ) model and predicted trace gas and particulate matter concentrations throughout the Southeast for two consecutive burn seasons (2015 and 2016). In this presentation, we will compare model predicted concentrations to measurements at monitoring stations and evaluate if the differences are commensurate with our emission uncertainty estimates. We will also investigate if
Energy Technology Data Exchange (ETDEWEB)
Bruschewski, Martin; Schiffer, Heinz-Peter [Technische Universitaet Darmstadt, Institute of Gas Turbines and Aerospace Propulsion, Darmstadt (Germany); Freudenhammer, Daniel [Technische Universitaet Darmstadt, Institute of Fluid Mechanics and Aerodynamics, Center of Smart Interfaces, Darmstadt (Germany); Buchenberg, Waltraud B. [University Medical Center Freiburg, Medical Physics, Department of Radiology, Freiburg (Germany); Grundmann, Sven [University of Rostock, Institute of Fluid Mechanics, Rostock (Germany)
2016-05-15
Velocity measurements with magnetic resonance velocimetry offer outstanding possibilities for experimental fluid mechanics. The purpose of this study was to provide practical guidelines for the estimation of the measurement uncertainty in such experiments. Based on various test cases, it is shown that the uncertainty estimate can vary substantially depending on how the uncertainty is obtained. The conventional approach to estimate the uncertainty from the noise in the artifact-free background can lead to wrong results. A deviation of up to -75% is observed with the presented experiments. In addition, a similarly high deviation is demonstrated with the data from other studies. As a more accurate approach, the uncertainty is estimated directly from the image region with the flow sample. Two possible estimation methods are presented. (orig.)
Bruschewski, Martin; Freudenhammer, Daniel; Buchenberg, Waltraud B.; Schiffer, Heinz-Peter; Grundmann, Sven
2016-05-01
Velocity measurements with magnetic resonance velocimetry offer outstanding possibilities for experimental fluid mechanics. The purpose of this study was to provide practical guidelines for the estimation of the measurement uncertainty in such experiments. Based on various test cases, it is shown that the uncertainty estimate can vary substantially depending on how the uncertainty is obtained. The conventional approach to estimate the uncertainty from the noise in the artifact-free background can lead to wrong results. A deviation of up to -75 % is observed with the presented experiments. In addition, a similarly high deviation is demonstrated with the data from other studies. As a more accurate approach, the uncertainty is estimated directly from the image region with the flow sample. Two possible estimation methods are presented.
Model based estimation of sediment erosion in groyne fields along the River Elbe
International Nuclear Information System (INIS)
Prohaska, Sandra; Jancke, Thomas; Westrich, Bernhard
2008-01-01
River water quality is still a vital environmental issue, even though ongoing emissions of contaminants are being reduced in several European rivers. The mobility of historically contaminated deposits is a key issue in sediment management strategy and remediation planning. Resuspension of contaminated sediments impacts the water quality and is thus important for river engineering and ecological rehabilitation. The erodibility of the sediments and associated contaminants is difficult to predict due to complex time-dependent physical, chemical, and biological processes, as well as due to the lack of information. Therefore, in engineering practice the values for erosion parameters are usually assumed to be constant despite their high spatial and temporal variability, which leads to a large uncertainty in the erosion parameters. The goal of the presented study is to compare the deterministic approach, which assumes a constant critical erosion shear stress, with an innovative approach which takes the critical erosion shear stress as a random variable. Furthermore, the effective value of the critical erosion shear stress, its applicability in numerical models, and the erosion probability are estimated. The results presented here are based on field measurements and numerical modelling of the River Elbe groyne fields.
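Treating the critical erosion shear stress as a random variable turns the erosion criterion into a probability. A minimal Monte Carlo sketch of that idea, with a hypothetical lognormal distribution and illustrative numbers (not the study's calibrated values):

```python
import random

# Monte Carlo sketch of the "critical shear stress as a random variable"
# idea: erosion probability = P(bed shear stress > critical shear stress).
# The lognormal distribution and its parameters are hypothetical, not the
# study's calibrated values.
random.seed(1)

def erosion_probability(tau_bed, mu=0.0, sigma=0.5, n=100_000):
    crit = (random.lognormvariate(mu, sigma) for _ in range(n))
    return sum(tau_bed > c for c in crit) / n

p_low = erosion_probability(0.5)    # bed stress well below the median critical stress
p_high = erosion_probability(2.0)   # bed stress well above the median critical stress
```

The deterministic approach corresponds to collapsing the distribution to a single effective value; the probabilistic version instead reports how likely erosion is at a given bed shear stress.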
A method for state of energy estimation of lithium-ion batteries based on neural network model
International Nuclear Information System (INIS)
Dong, Guangzhong; Zhang, Xu; Zhang, Chenbin; Chen, Zonghai
2015-01-01
The state-of-energy is an important evaluation index for energy optimization and management of power battery systems in electric vehicles. Unlike the state-of-charge, which represents the residual energy of the battery in traditional applications, the state-of-energy is the integral of battery power, which is the product of current and terminal voltage. On the other hand, like the state-of-charge, the state-of-energy has an effect on terminal voltage. The nonlinear relationship between state-of-energy and terminal voltage is therefore hard to resolve, which complicates the estimation of a battery's state-of-energy. To address this issue, a method based on a wavelet-neural-network-based battery model and a particle filter estimator is presented for state-of-energy estimation. The wavelet-neural-network-based battery model is used to simulate the entire dynamic electrical characteristics of batteries. The temperature and discharge rate are also taken into account to improve model accuracy. In addition, in order to suppress the measurement noise in current and voltage, a particle filter estimator is applied to estimate the cell state-of-energy. Experimental results on LiFePO_4 batteries indicate that the wavelet-neural-network-based battery model simulates battery dynamics robustly with high accuracy and that the estimate from the particle filter converges to the real state-of-energy within an error of ±4%. - Highlights: • State-of-charge is replaced by state-of-energy to determine a cell's residual energy. • The battery state-space model is established based on a neural network. • Temperature and current influences are considered to improve the model accuracy. • The particle filter is used for state-of-energy estimation to improve accuracy. • The robustness of the new method is validated under dynamic experimental conditions.
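The particle filter step can be sketched for a scalar state of energy. The wavelet-neural-network battery model is replaced here by a made-up linear voltage relation and invented parameters, so only the propagate/weight/resample structure is representative:

```python
import math
import random

# Bootstrap particle filter for a scalar state-of-energy (SOE).
# The measurement model V = 3.0 + 1.0*soe and all parameters are invented
# for illustration; the paper uses a wavelet-neural-network battery model.
N = 500

def pf_step(particles, power, v_meas, dt=1.0, e_total=100.0, sigma_v=0.01):
    # propagate: SOE drops by the fraction of total energy delivered, plus noise
    particles = [p - power * dt / e_total + random.gauss(0.0, 1e-4) for p in particles]
    # weight each particle by the measurement likelihood
    w = [math.exp(-0.5 * ((v_meas - (3.0 + p)) / sigma_v) ** 2) for p in particles]
    s = sum(w)
    w = [x / s for x in w]
    # systematic resampling
    out, c, i = [], w[0], 0
    u0 = random.random() / N
    for k in range(N):
        u = u0 + k / N
        while u > c:
            i += 1
            c += w[i]
        out.append(particles[i])
    return out

random.seed(0)
particles = [random.uniform(0.0, 1.0) for _ in range(N)]
true_soe = 0.8
for _ in range(200):
    true_soe -= 0.05 * 1.0 / 100.0          # constant-power discharge
    v = 3.0 + true_soe                      # terminal voltage (noise-free here)
    particles = pf_step(particles, 0.05, v)
estimate = sum(particles) / N
```

The weighted resampling is what gives the filter its noise-suppression property: particles inconsistent with the voltage measurement are discarded at each step.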
Vollant, A.; Balarac, G.; Corre, C.
2017-09-01
New procedures are explored for the development of models in the context of large eddy simulation (LES) of a passive scalar. They rely on the combination of optimal estimator theory with machine-learning algorithms. The concept of the optimal estimator makes it possible to identify the most accurate set of parameters to be used when deriving a model. The model itself can then be defined by training an artificial neural network (ANN) on a database derived from filtering direct numerical simulation (DNS) results. This procedure leads to a subgrid-scale model displaying good structural performance, which allows LESs to be performed very close to the filtered DNS results. However, this first procedure does not control the functional performance, so the model can fail when the flow configuration differs from the training database. Another procedure is then proposed, in which the model's functional form is imposed and the ANN is used only to define the model coefficients. The training step is a bi-objective optimisation designed to control both structural and functional performance. The model derived from this second procedure proves to be more robust. It also provides stable LESs for a turbulent plane jet flow configuration very far from the training database, but overestimates the mixing process in that case.
Model-Based Estimation of Collision Risks of Predatory Birds with Wind Turbines
Directory of Open Access Journals (Sweden)
Marcus Eichhorn
2012-06-01
The expansion of renewable energies, such as wind power, is a promising way of mitigating climate change. Because of the risk of collision with rotor blades, wind turbines have negative effects on local bird populations, particularly on raptors such as the Red Kite (Milvus milvus). Appropriate assessment tools for these effects have been lacking. To close this gap, we have developed an agent-based, spatially explicit model that simulates the foraging behavior of the Red Kite around its aerie in a landscape consisting of different land-use types. We determined the collision risk of the Red Kite with the turbine as a function of the distance between the wind turbine and the aerie and other parameters. The impact function comprises the synergistic effects of species-specific foraging behavior and landscape structure. The collision risk declines exponentially with increasing distance. The strength of this decline depends on the raptor's foraging behavior, its ability to avoid wind turbines, and the mean wind speed in the region. The collision risks, which are estimated by the simulation model, are in the range of values observed in the field. The derived impact function shows that the collision risk can be described as an aggregated function of distance between the wind turbine and the raptor's aerie. This allows an easy and rapid assessment of the ecological impacts of (existing or planned) wind turbines in relation to their spatial location. Furthermore, it implies that minimum buffer zones for different landscapes can be determined in a defensible way. This modeling approach can be extended to other bird species with central-place foraging behavior. It provides a helpful tool for landscape planning aimed at minimizing the impacts of wind power on biodiversity.
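The reported exponential decline suggests a simple buffer-zone calculation: if collision risk falls as r(d) = r0·exp(-d/L), the distance at which risk drops below an acceptable level r_acc is d = L·ln(r0/r_acc). All numbers below are illustrative, not values from the model:

```python
import math

# If collision risk declines exponentially with distance from the aerie,
# r(d) = r0 * exp(-d / L), then the buffer distance for an acceptable
# residual risk r_acc is d = L * ln(r0 / r_acc). All numbers illustrative.
def buffer_distance(r0, length_scale, r_acc):
    return length_scale * math.log(r0 / r_acc)

# e.g. risk near the aerie 0.05, decay length 1500 m, acceptable risk 0.005
d = buffer_distance(0.05, 1500.0, 0.005)   # about 3454 m
```

Because the buffer grows only logarithmically in r0/r_acc, a tenfold stricter risk threshold adds a fixed increment L·ln(10) to the buffer rather than multiplying it.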
DEFF Research Database (Denmark)
Schur, Nadine; Hürlimann, Eveline; Garba, Amadou
2011-01-01
Schistosomiasis is a water-based disease that is believed to affect over 200 million people with an estimated 97% of the infections concentrated in Africa. However, these statistics are largely based on population re-adjusted data originally published by Utroska and colleagues more than 20 years...... ago. Hence, these estimates are outdated due to large-scale preventive chemotherapy programs, improved sanitation, water resources development and management, among other reasons. For planning, coordination, and evaluation of control activities, it is essential to possess reliable schistosomiasis...
DeepTravel: a Neural Network Based Travel Time Estimation Model with Auxiliary Supervision
Zhang, Hanyuan; Wu, Hao; Sun, Weiwei; Zheng, Baihua
2018-01-01
Estimating the travel time of a path is of great importance to smart urban mobility. Existing approaches are either based on estimating the time cost of each road segment, which cannot capture complex cross-segment factors, or are designed heuristically in a non-learning-based way that fails to utilize the abundant temporal labels in the data, i.e., the time stamp of each trajectory point. In this paper, we leverage new developments in deep neural networks and propose a no...
Knowledge based word-concept model estimation and refinement for biomedical text mining.
Jimeno Yepes, Antonio; Berlanga, Rafael
2015-02-01
Text mining of scientific literature has been essential for setting up large public biomedical databases, which are being widely used by the research community. In the biomedical domain, the existence of a large number of terminological resources and knowledge bases (KB) has enabled a myriad of machine learning methods for different text mining related tasks. Unfortunately, KBs have not been devised for text mining tasks but for human interpretation, so the performance of KB-based methods is usually lower when compared to supervised machine learning methods. The disadvantage of supervised methods, however, is that they require labeled training data and are therefore not useful for large-scale biomedical text mining systems. KB-based methods do not have this limitation. In this paper, we describe a novel method to generate word-concept probabilities from a KB, which can serve as a basis for several text mining tasks. This method not only takes into account the underlying patterns within the descriptions contained in the KB but also those in texts available from large unlabeled corpora such as MEDLINE. The parameters of the model have been estimated without training data. Patterns from MEDLINE have been built using MetaMap for entity recognition and related using co-occurrences. The word-concept probabilities were evaluated on the task of word sense disambiguation (WSD). The results showed that our method obtained a higher degree of accuracy than other state-of-the-art approaches when evaluated on the MSH WSD data set. We also evaluated our method on the task of document ranking using MEDLINE citations. These results also showed an increase in performance over existing baseline retrieval approaches. Copyright © 2014 Elsevier Inc. All rights reserved.
A Model Based Approach to Sample Size Estimation in Recent Onset Type 1 Diabetes
Bundy, Brian; Krischer, Jeffrey P.
2016-01-01
The area under the curve C-peptide following a 2-hour mixed meal tolerance test from 481 individuals enrolled in 5 prior TrialNet studies of recent onset type 1 diabetes, from baseline to 12 months after enrollment, was modelled to produce estimates of its rate of loss and variance. Age at diagnosis and baseline C-peptide were found to be significant predictors and adjusting for these in an ANCOVA resulted in estimates with lower variance. Using these results as planning parameters for new studies results in a nearly 50% reduction in the target sample size. The modelling also produces an expected C-peptide that can be used in Observed vs. Expected calculations to estimate the presumption of benefit in ongoing trials. PMID:26991448
A model-based approach to sample size estimation in recent onset type 1 diabetes.
Bundy, Brian N; Krischer, Jeffrey P
2016-11-01
The area under the curve C-peptide following a 2-h mixed meal tolerance test from 498 individuals enrolled in five prior TrialNet studies of recent onset type 1 diabetes, from baseline to 12 months after enrolment, was modelled to produce estimates of its rate of loss and variance. Age at diagnosis and baseline C-peptide were found to be significant predictors, and adjusting for these in an ANCOVA resulted in estimates with lower variance. Using these results as planning parameters for new studies results in a nearly 50% reduction in the target sample size. The modelling also produces an expected C-peptide that can be used in observed versus expected calculations to estimate the presumption of benefit in ongoing trials. Copyright © 2016 John Wiley & Sons, Ltd.
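The reported sample-size reduction follows directly from the standard two-sample normal-approximation formula, in which the required n scales linearly with the residual variance. A sketch with illustrative numbers (not the trial's parameters):

```python
import math

# Two-sample normal-approximation sample size (per arm), 5% two-sided alpha
# and 80% power: n = 2*(z_{1-a/2} + z_{1-b})^2 * sigma^2 / delta^2.
# The effect size and variances below are illustrative, not the trial's values.
def n_per_arm(sigma2, delta, z_alpha=1.96, z_beta=0.8416):
    return math.ceil(2.0 * (z_alpha + z_beta) ** 2 * sigma2 / delta ** 2)

n_raw = n_per_arm(sigma2=1.0, delta=0.5)   # unadjusted residual variance
n_adj = n_per_arm(sigma2=0.5, delta=0.5)   # covariate adjustment halves it
```

Halving the residual variance (as ANCOVA adjustment for age and baseline C-peptide might) roughly halves the target sample size, echoing the "nearly 50% reduction" above.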
Hong, Sanghyun; Erdogan, Gurkan; Hedrick, Karl; Borrelli, Francesco
2013-05-01
The estimation of the tyre-road friction coefficient is fundamental for vehicle control systems. Tyre sensors enable friction coefficient estimation based on signals extracted directly from tyres. This paper presents a tyre-road friction coefficient estimation algorithm based on the tyre lateral deflection obtained from lateral acceleration. The lateral acceleration is measured by wireless three-dimensional accelerometers embedded inside the tyres. The proposed algorithm first determines the contact patch using a radial acceleration profile. Then, the portion of the lateral acceleration profile that lies inside the tyre-road contact patch is used to estimate the friction coefficient through a tyre brush model and a simple tyre model. The proposed strategy accounts for the orientation variation of the accelerometer body frame during tyre rotation. The effectiveness and performance of the algorithm are demonstrated through finite element model simulations and experimental tests with small tyre slip angles on different road surface conditions.
Yang, Lu; Linder, Mark W
2013-01-01
In this chapter, we use the calculation of an estimated warfarin maintenance dosage as an example to illustrate how to develop a multiple linear regression model that quantifies the relationship between several independent variables (e.g., patients' genotype information) and a dependent variable (e.g., a measurable clinical outcome).
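A minimal version of such a model can be fitted with ordinary least squares via the normal equations. The covariates and the generating coefficients below are invented for illustration and are not the chapter's fitted warfarin model:

```python
# Ordinary least squares via the normal equations (X'X)b = X'y, solved by
# Gaussian elimination with partial pivoting. The covariates and the
# generating coefficients are invented for illustration; they are not the
# chapter's fitted warfarin model.
def fit_ols(X, y):
    n, p = len(X), len(X[0])
    A = [[sum(X[i][j] * X[i][k] for i in range(n)) for k in range(p)] for j in range(p)]
    b = [sum(X[i][j] * y[i] for i in range(n)) for j in range(p)]
    for col in range(p):                       # forward elimination
        piv = max(range(col, p), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, p):
            f = A[r][col] / A[col][col]
            A[r] = [a - f * c for a, c in zip(A[r], A[col])]
            b[r] -= f * b[col]
    beta = [0.0] * p
    for r in range(p - 1, -1, -1):             # back substitution
        beta[r] = (b[r] - sum(A[r][c] * beta[c] for c in range(r + 1, p))) / A[r][r]
    return beta

# synthetic data: dose = 5.0 - 1.5*genotype_score + 0.02*age (made-up relation)
rows = [(g, a) for g in (0, 1, 2) for a in (30, 45, 60, 75)]
X = [[1.0, g, a] for g, a in rows]
y = [5.0 - 1.5 * g + 0.02 * a for g, a in rows]
beta = fit_ols(X, y)   # recovers [intercept, genotype, age] coefficients
```

With noise-free synthetic data the coefficients are recovered exactly; with real clinical data the same fit yields the least-squares dosing equation the chapter describes.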
Multi-Model Estimation Based Moving Object Detection for Aerial Video
Directory of Open Access Journals (Sweden)
Yanning Zhang
2015-04-01
With the rapid development of UAV (Unmanned Aerial Vehicle) technology, moving-target detection for aerial video has become a popular research topic in computer vision. Most of the existing methods operate under a registration-detection framework and can only deal with simple background scenes. They tend to fail in complex multi-background scenarios, such as viaducts, buildings, and trees. In this paper, we break through the single-background constraint and perceive complex scenes accurately by automatically estimating multiple background models. First, we segment the scene into several color blocks and estimate the dense optical flow. Then, we calculate an affine transformation model for each large-area block and merge the consistent models. Finally, for all small-area blocks, we calculate pixel by pixel the degree of membership to the multiple background models. Moving objects are segmented by means of an energy optimization method solved via graph cuts. Extensive experimental results on public aerial videos show that, owing to the estimation of multiple background models and the analysis of each pixel's membership to those models by energy minimization, our method can effectively remove buildings, trees, and other false alarms and detect moving objects correctly.
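The per-block affine motion model can be fitted from point correspondences (e.g., taken from the dense optical flow). A sketch using three non-collinear correspondences and Cramer's rule; in practice one would use least squares over many flow vectors, and all coordinates here are invented for illustration:

```python
# Fitting the 2-D affine motion model x' = a*x + b*y + c, y' = d*x + e*y + f
# of a block from three non-collinear point correspondences via Cramer's
# rule. In practice one would least-squares over many optical-flow vectors;
# all coordinates here are invented for illustration.
def fit_affine(src, dst):
    (x1, y1), (x2, y2), (x3, y3) = src
    det = x1 * (y2 - y3) - y1 * (x2 - x3) + (x2 * y3 - x3 * y2)
    def solve(v1, v2, v3):
        a = (v1 * (y2 - y3) - y1 * (v2 - v3) + (v2 * y3 - v3 * y2)) / det
        b = (x1 * (v2 - v3) - v1 * (x2 - x3) + (x2 * v3 - x3 * v2)) / det
        c = (x1 * (y2 * v3 - y3 * v2) - y1 * (x2 * v3 - x3 * v2)
             + v1 * (x2 * y3 - x3 * y2)) / det
        return a, b, c
    ax, bx, cx = solve(dst[0][0], dst[1][0], dst[2][0])
    ay, by, cy = solve(dst[0][1], dst[1][1], dst[2][1])
    return ax, bx, cx, ay, by, cy

src = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
# destinations generated by x' = 1.1x + 0.2y + 3, y' = -0.1x + 0.9y + 1
dst = [(3.0, 1.0), (4.1, 0.9), (3.2, 1.9)]
params = fit_affine(src, dst)
```

A pixel whose observed flow disagrees with every fitted background model is then a candidate moving-object pixel, which is what the graph-cut energy minimization decides globally.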
Wu, Hulin; Xue, Hongqi; Kumar, Arun
2012-06-01
Differential equations are extensively used for modeling the dynamics of physical processes in many scientific fields such as engineering, physics, and biomedical sciences. Parameter estimation of differential equation models is a challenging problem because of high computational cost and a high-dimensional parameter space. In this article, we propose a novel class of methods for estimating parameters in ordinary differential equation (ODE) models, which is motivated by HIV dynamics modeling. The new methods exploit the form of numerical discretization algorithms for an ODE solver to formulate estimating equations. First, a penalized-spline approach is employed to estimate the state variables, and the estimated state variables are then plugged into a discretization formula of an ODE solver to obtain the ODE parameter estimates via a regression approach. We consider three discretization methods of different order: Euler's method, the trapezoidal rule, and the Runge-Kutta method. A higher-order numerical algorithm reduces numerical error in the approximation of the derivative, which produces a more accurate estimate, but its computational cost is higher. To balance computational cost and estimation accuracy, we demonstrate, via simulation studies, that the trapezoidal discretization-based estimate is the best and is recommended for practical use. The asymptotic properties of the proposed numerical discretization-based estimators are established. Comparisons between the proposed methods and existing methods show a clear benefit of the proposed methods with regard to the trade-off between computational cost and estimation accuracy. We apply the proposed methods to an HIV study to further illustrate their usefulness. © 2012, The International Biometric Society.
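The core idea, plugging (smoothed) state estimates into a discretization formula and regressing, can be sketched on the one-parameter ODE dx/dt = -θx. Euler and trapezoidal variants are compared on noise-free observations (illustrative only; the article first smooths noisy states with penalized splines):

```python
import math

# Discretisation-based parameter estimation sketched on dx/dt = -theta*x,
# observed noise-free on a regular grid (illustrative only; the article
# also smooths noisy states with penalised splines first).
h, theta_true = 0.1, 0.8
xs = [math.exp(-theta_true * h * k) for k in range(51)]

def estimate_theta(xs, h, rule):
    # regress the increment x1 - x0 on -s, where s approximates the
    # integral of x over the step (Euler or trapezoidal quadrature)
    num = den = 0.0
    for x0, x1 in zip(xs, xs[1:]):
        s = h * x0 if rule == "euler" else 0.5 * h * (x0 + x1)
        num += -(x1 - x0) * s
        den += s * s
    return num / den

th_euler = estimate_theta(xs, h, "euler")        # first-order: larger bias
th_trap = estimate_theta(xs, h, "trapezoidal")   # second-order: smaller bias
```

Even on clean data the Euler estimate is biased by the first-order quadrature error, while the trapezoidal estimate lands much closer to the true θ, mirroring the article's recommendation.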
Joetzjer, E.; Pillet, M.; Ciais, P.; Barbier, N.; Chave, J.; Schlund, M.; Maignan, F.; Barichivich, J.; Luyssaert, S.; Hérault, B.; von Poncet, F.; Poulter, B.
2017-07-01
Despite advances in Earth observation and modeling, estimating tropical biomass remains a challenge. Recent work suggests that integrating satellite measurements of canopy height within ecosystem models is a promising approach to infer biomass. We tested the feasibility of this approach to retrieve aboveground biomass (AGB) at three tropical forest sites by assimilating remotely sensed canopy height, derived from a texture analysis algorithm applied to the high-resolution Pleiades imager, into the Organizing Carbon and Hydrology in Dynamic Ecosystems Canopy (ORCHIDEE-CAN) ecosystem model. While mean AGB could be estimated within 10% of the census-derived AGB on average across sites, the canopy height derived from the Pleiades product was spatially too smooth, and thus unable to accurately resolve large height (and biomass) variations within the sites considered. The error budget was evaluated in detail; systematic errors related to the ORCHIDEE-CAN structure contribute a secondary source of error and could be overcome by using improved allometric equations.
A Physically-Based Geometry Model for Transport Distance Estimation of Rainfall-Eroded Soil Sediment
Directory of Open Access Journals (Sweden)
Qian-Gui Zhang
2016-01-01
Full Text Available Estimations of rainfall-induced soil erosion are mostly derived from the weight of sediment measured in natural runoff. The transport distance of eroded soil is important for evaluating landscape evolution but is difficult to estimate, mainly because it cannot be linked directly to the eroded sediment weight. The volume of eroded soil is easier to calculate visually using popular imaging tools, which can aid in estimating the transport distance of eroded soil through geometric relationships. In this study, we present a straightforward geometry model to predict the maximum sediment transport distance incurred by rainfall events of various intensity and duration. In order to verify our geometry prediction model, a series of experiments measuring sediment volume is reported. The results show that cumulative rainfall has a linear relationship with the total volume of eroded soil. The geometry model can accurately estimate the maximum transport distance of eroded soil from cumulative rainfall, with a low root-mean-square error (4.7–4.8) and a strong linear correlation (0.74–0.86).
International Nuclear Information System (INIS)
Maruyama, Wakae; Aoki, Yasunobu
2006-01-01
The health risk of dioxins and dioxin-like compounds to humans was analyzed quantitatively using experimental data and mathematical models. To quantify the toxicity of a mixture of three dioxin congeners, we calculated new relative potencies (REPs) for 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD), 1,2,3,7,8-pentachlorodibenzo-p-dioxin (PeCDD), and 2,3,4,7,8-pentachlorodibenzofuran (PeCDF), focusing on their tumor promotion activity. We applied a liver foci formation assay to female SD rats after repeated oral administration of dioxins. The REP of each dioxin for the rat was determined using the dioxin concentration and the number of foci in rat liver. A physiologically based pharmacokinetic (PBPK) model was used for interspecies extrapolation targeting the dioxin concentration in the liver. The toxic dose for humans was determined by back-estimation with a human PBPK model, assuming that the same concentration in the target tissue causes the same level of effect in rats and humans, and the REP for humans was determined from the toxic dose obtained. The calculated REPs for TCDD, PeCDD, and PeCDF were 1.0, 0.34, and 0.05 for rats, respectively, and the REPs for humans were almost the same as those for rats. These values differ from the toxic equivalency factors (TEFs) presented previously (Van den Berg, M., Birnbaum, L., Bosveld, A.T.C., Brunstrom, B., Cook, P., Feeley, M., Giesy, J.P., Hanberg, A., Hasegawa, R., Kennedy, S.W., Kubiak, T., Larsen, J.C., Rolaf van Leeuwen, F.X., Liem, A.K.D., Nolt, C., Peterson, R.E., Poellinger, L., Safe, S., Schrenk, D., Tillitt, D., Tysklind, M., Younes, M., Waern, F., Zacharewski, T., 1998. Toxic equivalency factors (TEFs) for PCBs, PCDDs, PCDFs for humans and wildlife. Environ. Health Perspect. 106, 775-792). The relative risk of excess liver cancer for the Japanese population in general was 1.7-6.5 × 10^-7 from TCDD alone, and 2.9-11 × 10^-7 from the three dioxins at the present level of contamination.
The HINTS is designed to produce reliable estimates at the national and regional levels. GIS maps using HINTS data have been used to provide a visual representation of possible geographic relationships in HINTS cancer-related variables.
Directory of Open Access Journals (Sweden)
Aurélien Tellier
Full Text Available Understanding the processes and conditions under which populations diverge to give rise to distinct species is a central question in evolutionary biology. Since recently diverged populations have high levels of shared polymorphisms, it is challenging to distinguish between recent divergence with no (or very low) inter-population gene flow and older splitting events with subsequent gene flow. Recently published methods to infer speciation parameters under the isolation-migration framework are based on summarizing polymorphism data at multiple loci in two species using the joint site-frequency spectrum (JSFS). We have developed two improvements of these methods based on a more extensive use of the JSFS classes of polymorphisms for species with high intra-locus recombination rates. First, using a likelihood based method, we demonstrate that taking into account low-frequency polymorphisms shared between species significantly improves the joint estimation of the divergence time and gene flow between species. Second, we introduce a local linear regression algorithm that considerably reduces the computational time and allows for the estimation of unequal rates of gene flow between species. We also investigate which summary statistics from the JSFS allow the greatest estimation accuracy for divergence time and migration rates for low (around 10) and high (around 100) numbers of loci. Focusing on cases with low numbers of loci and high intra-locus recombination rates, we show that our methods for the estimation of divergence time and migration rates are more precise than existing approaches.
Directory of Open Access Journals (Sweden)
Delong Feng
2016-05-01
Full Text Available Remaining useful life estimation in the prognostics and health management technique is a complicated and difficult research question for maintenance. In this article, we consider the problem of prognostics modeling and estimation of the turbofan engine under complicated circumstances and propose a kernel principal component analysis-based degradation model and remaining useful life estimation method for such aircraft engines. We first analyze the output data created by the turbofan engine thermodynamic simulation based on the kernel principal component analysis method and then distinguish the qualitative and quantitative relationships between the key factors. Next, we build a degradation model for the engine fault based on the following assumptions: the engine has only constant failure (i.e. no sudden failure is included), and the degradation follows a Wiener process, with a covariate standing for the engine system drift. To predict the remaining useful life of the turbofan engine, we built a health index based on the degradation model and used the maximum likelihood method and the data from the thermodynamic simulation model to estimate the parameters of this degradation model. Through the data analysis, we obtained a trend model of the regression curve that fits the actual statistical data. Based on the predicted health index model and the data trend model, we estimate the remaining useful life of the aircraft engine as the index reaches zero. Finally, a case study involving engine simulation data demonstrates the precision and performance advantages of the proposed method: its precision reaches 98.9% and the average precision is 95.8%.
UAV based 3D digital surface model to estimate paleolandscape in high mountainous environment
Mészáros, János; Árvai, Mátyás; Kohán, Balázs; Deák, Márton; Nagy, Balázs
2016-04-01
reliable results and resolution. Based on the sediment layers of the peat bog together with the generated 3D surface model the paleoenvironment, the largest paleowater level can be reconstructed and we can estimate the dimension of the landslide which created the basin of the peat bog.
Video-Quality Estimation Based on Reduced-Reference Model Employing Activity-Difference
Yamada, Toru; Miyamoto, Yoshihiro; Senda, Yuzo; Serizawa, Masahiro
This paper presents a reduced-reference video-quality estimation method suitable for individual end-user quality monitoring of IPTV services. With the proposed method, the activity values for individual given-size pixel blocks of an original video are transmitted to end-user terminals. At the end-user terminals, the video quality of a received video is estimated on the basis of the activity difference between the original video and the received video. Psychovisual weightings and video-quality score adjustments for fatal degradations are applied to improve estimation accuracy. In addition, low-bit-rate transmission is achieved by using temporal sub-sampling and by transmitting only the lower six bits of each activity value. The proposed method achieves accurate video-quality estimation using only low-bit-rate original video information (15 kbps for SDTV). The correlation coefficient between actual subjective video quality and estimated quality is 0.901 with 15 kbps of side information. The proposed method does not need computationally demanding spatial and gain-and-offset registrations. Therefore, it is suitable for real-time video-quality monitoring in IPTV services.
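A toy version of the activity-difference idea can be sketched with activity taken as the per-block standard deviation; the actual activity measure, psychovisual weightings, and bit packing of the paper are not reproduced here, and the frame sizes are made up:

```python
import numpy as np

def block_activity(frame, block=8):
    """Per-block activity: standard deviation of pixel values within each
    block x block tile (a simple stand-in for the paper's activity measure)."""
    h, w = frame.shape
    tiles = frame[:h - h % block, :w - w % block].reshape(
        h // block, block, w // block, block)
    return tiles.std(axis=(1, 3))

def activity_difference_score(original, received, block=8):
    """Mean absolute activity difference between original and received
    frames; lower means the received video is closer to the original."""
    a0 = block_activity(original, block)
    a1 = block_activity(received, block)
    return float(np.abs(a0 - a1).mean())

rng = np.random.default_rng(0)
orig = rng.integers(0, 256, size=(64, 64)).astype(float)
degraded = orig + rng.normal(0, 10, size=orig.shape)
print(activity_difference_score(orig, orig))       # 0.0 for identical frames
print(activity_difference_score(orig, degraded) > 0)
```

In the reduced-reference setting only the a0 values travel to the terminal, which is what keeps the side-information rate low.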
Type-specific human papillomavirus biological features: validated model-based estimates.
Directory of Open Access Journals (Sweden)
Iacopo Baussano
Full Text Available Infection with high-risk (hr) human papillomavirus (HPV) is considered the necessary cause of cervical cancer. Vaccination against the HPV16 and 18 types, which are responsible for about 75% of cervical cancer worldwide, is expected to have a major global impact on cervical cancer occurrence. Valid estimates of the parameters that regulate the natural history of hrHPV infections are crucial to draw reliable projections of the impact of vaccination. We devised a mathematical model to estimate the probability of infection transmission, the rate of clearance, and the patterns of immune response following the clearance of infection for 13 hrHPV types. To test the validity of our estimates, we fitted the same transmission model to two large independent datasets from Italy and Sweden and assessed the consistency of the findings. The two populations, both unvaccinated, differed substantially by sexual behaviour, age distribution, and study setting (screening for cervical cancer or Chlamydia trachomatis infection). Estimated transmission probabilities of hrHPV types (80% for HPV16, 73%-82% for HPV18, and above 50% for most other types), clearance rates decreasing as a function of time since infection, and partial protection against re-infection with the same hrHPV type (approximately 20% for HPV16 and 50% for the other types) were similar in the two countries. The model could accurately predict the HPV16 prevalence observed in Italy among women who were not infected three years before. In conclusion, our models inform on biological parameters that cannot at the moment be measured directly from any empirical data but are essential to forecast the impact of HPV vaccination programmes.
State-Space Dynamic Model for Estimation of Radon Entry Rate, based on Kalman Filtering
Czech Academy of Sciences Publication Activity Database
Brabec, Marek; Jílek, K.
2007-01-01
Roč. 98, - (2007), s. 285-297 ISSN 0265-931X Grant - others:GA SÚJB JC_11/2006 Institutional research plan: CEZ:AV0Z10300504 Keywords : air ventilation rate * radon entry rate * state-space modeling * extended Kalman filter * maximum likelihood estimation * prediction error decomposition Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.963, year: 2007
Czech Academy of Sciences Publication Activity Database
Yáñez-Rausell, L.; Malenovský, Z.; Rautiainen, M.; Clevers, J G P W.; Lukeš, Petr; Hanuš, Jan; Schaepman, M. E.
2015-01-01
Roč. 8, č. 4 (2015), s. 1534-1544 ISSN 1939-1404 R&D Projects: GA MŠk(CZ) ED1.1.00/02.0073 Institutional support: RVO:67179843 Keywords : Chlorophyll a plus b estimation * CHRIS-PROBA * coniferous forest * continuum removal * discrete anisotropic radiative transfer model (DART) * needle-leaf * Norway spruce * optical indices * PARAS * PROSPECT * radiative transfer * recollision probability Subject RIV: EH - Ecology, Behaviour Impact factor: 2.145, year: 2015
Estimating Biomass of Barley Using Crop Surface Models (CSMs) Derived from UAV-Based RGB Imaging
Directory of Open Access Journals (Sweden)
Juliane Bendig
2014-10-01
Full Text Available Crop monitoring is important in precision agriculture. Estimating above-ground biomass helps to monitor crop vitality and to predict yield. In this study, we estimated fresh and dry biomass on a summer barley test site with 18 cultivars and two nitrogen (N) treatments using the plant height (PH) from crop surface models (CSMs). The super-high-resolution, multi-temporal (1 cm/pixel) CSMs were derived from red, green, blue (RGB) images captured from a small unmanned aerial vehicle (UAV). Comparison with PH reference measurements yielded an R2 of 0.92. The test site with different cultivars and treatments was monitored during “Biologische Bundesanstalt, Bundessortenamt und CHemische Industrie” (BBCH) Stages 24–89. A high correlation was found between PH from CSMs and fresh biomass (R2 = 0.81) and dry biomass (R2 = 0.82). Five models for above-ground fresh and dry biomass estimation were tested by cross-validation. Modelling biomass between different N-treatments for fresh biomass produced the best results (R2 = 0.71). The main limitation was the influence of lodging cultivars in the later growth stages, producing irregular plant heights. The method has potential for future application by non-professionals, i.e., farmers.
The cost of universal health care in India: a model based estimate.
Directory of Open Access Journals (Sweden)
Shankar Prinja
Full Text Available INTRODUCTION: As high out-of-pocket healthcare expenses pose a heavy financial burden on families, the Government of India is considering a variety of financing and delivery options to universalize health care services. Hence, an estimate of the cost of delivering universal health care services is needed. METHODS: We developed a model to estimate recurrent and annual costs for providing health services through a mix of public and private providers in Chandigarh, located in northern India. Necessary health services required to deliver good quality care were defined by the Indian Public Health Standards. National Sample Survey data was utilized to estimate disease burden. In addition, morbidity and treatment data was collected from two secondary and two tertiary care hospitals. The unit cost of treatment was estimated from the published literature. For diseases where data on treatment cost was not available, we collected data on standard treatment protocols and cost of care from local health providers. RESULTS: We estimate that the cost of universal health care delivery through the existing mix of public and private health institutions would be INR 1713 (USD 38, 95% CI USD 18-73) per person per annum in India. This cost would be 24% higher if branded drugs are used. Extrapolation of these costs to the entire country indicates that the Indian government needs to spend 3.8% (2.1%-6.8%) of the GDP for universalizing health care services. CONCLUSION: The cost of universal health care delivered through a combination of public and private providers is estimated to be INR 1713 per capita per year in India. Important issues such as the delivery strategy for ensuring quality, reducing inequities in access, and managing the growth of health care demand need to be explored.
The cost of universal health care in India: a model based estimate.
Prinja, Shankar; Bahuguna, Pankaj; Pinto, Andrew D; Sharma, Atul; Bharaj, Gursimer; Kumar, Vishal; Tripathy, Jaya Prasad; Kaur, Manmeet; Kumar, Rajesh
2012-01-01
As high out-of-pocket healthcare expenses pose a heavy financial burden on families, the Government of India is considering a variety of financing and delivery options to universalize health care services. Hence, an estimate of the cost of delivering universal health care services is needed. We developed a model to estimate recurrent and annual costs for providing health services through a mix of public and private providers in Chandigarh, located in northern India. Necessary health services required to deliver good quality care were defined by the Indian Public Health Standards. National Sample Survey data was utilized to estimate disease burden. In addition, morbidity and treatment data was collected from two secondary and two tertiary care hospitals. The unit cost of treatment was estimated from the published literature. For diseases where data on treatment cost was not available, we collected data on standard treatment protocols and cost of care from local health providers. We estimate that the cost of universal health care delivery through the existing mix of public and private health institutions would be INR 1713 (USD 38, 95% CI USD 18-73) per person per annum in India. This cost would be 24% higher if branded drugs are used. Extrapolation of these costs to the entire country indicates that the Indian government needs to spend 3.8% (2.1%-6.8%) of the GDP for universalizing health care services. The cost of universal health care delivered through a combination of public and private providers is estimated to be INR 1713 per capita per year in India. Important issues such as the delivery strategy for ensuring quality, reducing inequities in access, and managing the growth of health care demand need to be explored.
Energy Technology Data Exchange (ETDEWEB)
Pohjola, J.; Turunen, J.; Lipping, T. [Tampere Univ. of Technology (Finland); Ikonen, A.
2014-03-15
In this working report the modelling effort of future landscape development and surface water body formation in the modelling area in the vicinity of the Olkiluoto Island is presented. Estimation of the features of future surface water bodies is based on probabilistic terrain and land uplift models presented in previous working reports. The estimation is done using a GIS-based toolbox called UNTAMO. The future surface water bodies are estimated over a 10 000 year time span at 1000 year intervals for the safety assessment of disposal of spent nuclear fuel at the Olkiluoto site. The report first gives a brief overview of the techniques used for probabilistic terrain modelling, land uplift modelling and hydrological modelling. The latter part of the report describes the results of the modelling effort. The main features of the future landscape - the four lakes forming in the vicinity of the Olkiluoto Island - are identified and a probabilistic model of the shoreline displacement is presented. The area and volume of the four lakes are modelled in a probabilistic manner. All the simulations have been performed for three scenarios, two of which are based on 10 realizations of the probabilistic digital terrain model (DTM) and 10 realizations of the probabilistic land uplift model. These two scenarios differ from each other by the eustatic curve used in the land uplift model. The third scenario employs 50 realizations of the probabilistic DTM while a deterministic land uplift model, derived solely from the current land uplift rate, is used. The results indicate that the two scenarios based on the probabilistic land uplift model behave in a similar manner, while the third model overestimates past and future land uplift rates. The main features of the landscape are nevertheless similar for the third scenario as well. Prediction results for the volumes of the future lakes indicate that a couple of highly probable lake formation scenarios can be identified
International Nuclear Information System (INIS)
Pohjola, J.; Turunen, J.; Lipping, T.; Ikonen, A.
2014-03-01
In this working report the modelling effort of future landscape development and surface water body formation in the modelling area in the vicinity of the Olkiluoto Island is presented. Estimation of the features of future surface water bodies is based on probabilistic terrain and land uplift models presented in previous working reports. The estimation is done using a GIS-based toolbox called UNTAMO. The future surface water bodies are estimated over a 10 000 year time span at 1000 year intervals for the safety assessment of disposal of spent nuclear fuel at the Olkiluoto site. The report first gives a brief overview of the techniques used for probabilistic terrain modelling, land uplift modelling and hydrological modelling. The latter part of the report describes the results of the modelling effort. The main features of the future landscape - the four lakes forming in the vicinity of the Olkiluoto Island - are identified and a probabilistic model of the shoreline displacement is presented. The area and volume of the four lakes are modelled in a probabilistic manner. All the simulations have been performed for three scenarios, two of which are based on 10 realizations of the probabilistic digital terrain model (DTM) and 10 realizations of the probabilistic land uplift model. These two scenarios differ from each other by the eustatic curve used in the land uplift model. The third scenario employs 50 realizations of the probabilistic DTM while a deterministic land uplift model, derived solely from the current land uplift rate, is used. The results indicate that the two scenarios based on the probabilistic land uplift model behave in a similar manner, while the third model overestimates past and future land uplift rates. The main features of the landscape are nevertheless similar for the third scenario as well. Prediction results for the volumes of the future lakes indicate that a couple of highly probable lake formation scenarios can be identified with other
U.S. Environmental Protection Agency — This dataset provides the city-specific air exchange rate measurements, modeled, literature-based as well as housing characteristics. This dataset is associated with...
Directory of Open Access Journals (Sweden)
R. Hajiabadi
2016-10-01
Full Text Available Introduction One reason for the complexity of predicting hydrological phenomena, especially time series, is the existence of features such as trend, noise and high-frequency oscillations. These complex features, especially noise, can be detected or removed by preprocessing. Appropriate preprocessing makes estimation of these phenomena easier. Preprocessing in data-driven models such as artificial neural networks, gene expression programming, and support vector machines is especially effective because the quality of data in these models is important. The present study, by considering denoising and data transformation as two different preprocessing approaches, tries to improve the results of intelligent models. In this study two different intelligent models, Artificial Neural Network and Gene Expression Programming, are applied to the estimation of daily suspended sediment load. Wavelet transforms and logarithmic transformation are used for denoising and data transformation, respectively. Finally, the impacts of preprocessing on the results of the intelligent models are evaluated. Materials and Methods In this study, Gene Expression Programming and Artificial Neural Network are used as intelligent models for suspended sediment load estimation; then the impacts of the denoising and logarithmic transformation approaches as data preprocessors are evaluated and compared with respect to result improvement. Two different logarithmic transforms are considered in this research, LN and LOG. Wavelet transformation is used for time series denoising. In order to denoise by wavelet transforms, first, the time series is decomposed at one level (into approximation and detail parts) and second, the high-frequency part (the detail) is removed as noise. Given the ability of gene expression programming and artificial neural networks to analyze nonlinear systems, daily values of suspended sediment load of the Skunk River in the USA, during a 5-year period, are investigated and then estimated. 4 years of
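The one-level wavelet denoising step described here (decompose, discard the detail part, reconstruct) can be illustrated with the Haar wavelet; this is a generic sketch on synthetic data, not the authors' exact transform:

```python
import numpy as np

def haar_denoise(x):
    """One-level Haar wavelet denoising: decompose the series into
    approximation (low-frequency) and detail (high-frequency) parts,
    discard the detail as noise, and reconstruct."""
    n = len(x) - len(x) % 2
    even, odd = x[:n:2], x[1:n:2]
    approx = (even + odd) / np.sqrt(2)       # approximation coefficients
    # detail = (even - odd) / sqrt(2) is treated as noise and dropped
    rec = np.empty(n)
    rec[0::2] = approx / np.sqrt(2)          # inverse transform with the
    rec[1::2] = approx / np.sqrt(2)          # detail coefficients zeroed
    return rec

t = np.linspace(0, 1, 64)
clean = np.sin(2 * np.pi * t)
noisy = clean + np.random.default_rng(1).normal(0, 0.3, 64)
denoised = haar_denoise(noisy)
print(np.mean((denoised - clean) ** 2) < np.mean((noisy - clean) ** 2))
```

The denoised series, rather than the raw one, would then be fed to the ANN or GEP model, which is the preprocessing effect the study evaluates.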
Aandahl, R. Zachariah; Reyes, Josephine F.; Sisson, Scott A.; Tanaka, Mark M.
2012-01-01
Variable numbers of tandem repeats (VNTR) typing is widely used for studying the bacterial cause of tuberculosis. Knowledge of the rate of mutation of VNTR loci facilitates the study of the evolution and epidemiology of Mycobacterium tuberculosis. Previous studies have applied population genetic models to estimate the mutation rate, leading to estimates varying widely from around to per locus per year. Resolving this issue using more detailed models and statistical methods would lead to improved inference in the molecular epidemiology of tuberculosis. Here, we use a model-based approach that incorporates two alternative forms of a stepwise mutation process for VNTR evolution within an epidemiological model of disease transmission. Using this model in a Bayesian framework we estimate the mutation rate of VNTR in M. tuberculosis from four published data sets of VNTR profiles from Albania, Iran, Morocco and Venezuela. In the first variant, the mutation rate increases linearly with respect to repeat numbers (linear model); in the second, the mutation rate is constant across repeat numbers (constant model). We find that under the constant model, the mean mutation rate per locus is (95% CI: , ) and under the linear model, the mean mutation rate per locus per repeat unit is (95% CI: , ). These new estimates represent a high rate of mutation at VNTR loci compared to previous estimates. To compare the two models we use posterior predictive checks to ascertain which of the two models is better able to reproduce the observed data. From this procedure we find that the linear model performs better than the constant model. The general framework we use allows the possibility of extending the analysis to more complex models in the future. PMID:22761563
Directory of Open Access Journals (Sweden)
Kyu-Sik Park
2015-01-01
Full Text Available Hanger cables in suspension bridges are partly constrained by horizontal clamps. So, existing tension estimation methods based on a single cable model are prone to higher errors as the cable gets shorter, making it more sensitive to flexural rigidity. Therefore, inverse analysis and system identification methods based on finite element models have been suggested recently. In this paper, the applicability of system identification methods is investigated using the hanger cables of the Gwang-An bridge. The test results show that the inverse analysis and system identification methods based on finite element models are more reliable than the existing string theory and linear regression methods for calculating the tension, in terms of natural frequency errors. However, the estimation error of the tension can vary according to the accuracy of the finite element model in model-based methods. In particular, the boundary conditions affect the results more profoundly as the cable gets shorter. Therefore, it is important to identify the boundary conditions through experiment where possible. The FE model-based tension estimation using the system identification method can take various boundary conditions into account. Also, since it is not sensitive to the number of natural frequency inputs, the method is broadly applicable.
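For contrast, the "existing string theory" baseline mentioned above has a closed form; it ignores flexural rigidity and boundary conditions, which is exactly why it degrades for short hanger cables. A sketch with made-up cable numbers:

```python
def taut_string_tension(freq_hz, mode_n, length_m, mass_per_m):
    """Taut-string tension estimate from the n-th natural frequency:
    f_n = (n / 2L) * sqrt(T / m)  →  T = 4 m L^2 f_n^2 / n^2  (newtons)."""
    return 4.0 * mass_per_m * length_m ** 2 * freq_hz ** 2 / mode_n ** 2

# hypothetical hanger: 10 m long, 50 kg/m, first-mode frequency 5 Hz
print(taut_string_tension(5.0, 1, 10.0, 50.0))  # → 500000.0 (N)
```

The FE model-based identification replaces this single formula with a model whose boundary conditions and flexural rigidity are tuned so that several predicted natural frequencies match the measured ones.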
Estimation of the PCR efficiency based on a size-dependent modelling of the amplification process
Lalam, N.; Jacob, C.; Jagers, P.
2005-01-01
We propose a stochastic modelling of the PCR amplification process by a size-dependent branching process starting as a supercritical Bienaymé–Galton–Watson transient phase and then having a saturation near-critical size-dependent phase. This model based on the concept of saturation allows one to
Model based estimation for multi-modal user interface component selection
CSIR Research Space (South Africa)
Coetzee, L
2009-12-01
Full Text Available and literacy level of the user should be taken into account. This paper presents one approach to develop a cost-based model which can be used to derive appropriate mappings for specific user profiles. The model is explained through a number of small examples...
Directory of Open Access Journals (Sweden)
Xin Lu
2018-03-01
Full Text Available In recent years, fractional order models have been employed for state of charge (SOC) estimation. The non-integer differentiation order is expressed as a function of recursive factors defining the fractality of the charge distribution on porous electrodes. The battery SOC affects the fractal dimension of the charge distribution, so the order of the fractional order model varies with the SOC even under the same conditions. This paper proposes a new method to estimate the SOC. A fractional continuous variable order model is used to characterize the fractal morphology of the charge distribution. The order identification results showed that there is a stable monotonic relationship between the fractional order and the SOC once the battery's internal electrochemical reaction reaches equilibrium. This feature makes the proposed model particularly suitable for SOC estimation when the battery is in the resting state. Moreover, a fast iterative method based on the proposed model is introduced for SOC estimation. The experimental results showed that the proposed iterative method can quickly estimate the SOC in a few iterations while maintaining high estimation accuracy.
A Gaussian mixture model based cost function for parameter estimation of chaotic biological systems
Shekofteh, Yasser; Jafari, Sajad; Sprott, Julien Clinton; Hashemi Golpayegani, S. Mohammad Reza; Almasganj, Farshad
2015-02-01
As we know, many biological systems such as neurons or the heart can exhibit chaotic behavior. Conventional methods for parameter estimation in models of these systems have some limitations caused by sensitivity to initial conditions. In this paper, a novel cost function is proposed to overcome those limitations by building a statistical model on the distribution of the real system attractor in state space. This cost function is defined by the use of a likelihood score in a Gaussian mixture model (GMM) which is fitted to the observed attractor generated by the real system. Using that learned GMM, a similarity score can be defined by the computed likelihood score of the model time series. We have applied the proposed method to the parameter estimation of two important biological systems, a neuron and a cardiac pacemaker, which show chaotic behavior. Some simulated experiments are given to verify the usefulness of the proposed approach in clean and noisy conditions. The results show the adequacy of the proposed cost function.
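A minimal version of the GMM-based score: fit a mixture to samples of the "real" attractor, then rate candidate model trajectories by their mean log-likelihood under it. The EM code below is a bare-bones 1-D sketch with hypothetical data (the paper works in the full state space of the chaotic system):

```python
import numpy as np

def fit_gmm_1d(x, k=2, iters=100):
    """Bare-bones EM for a 1-D Gaussian mixture (means, variances, weights)."""
    mu = np.linspace(x.min(), x.max(), k)      # deterministic spread-out init
    var = np.full(k, x.var())
    w = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibility of each component for each sample
        p = w * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
        r = p / p.sum(axis=1, keepdims=True)
        # M-step: re-estimate parameters from responsibility-weighted samples
        nk = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
        w = nk / len(x)
    return mu, var, w

def gmm_score(x, mu, var, w):
    """Mean log-likelihood of x under the mixture: the similarity score
    between a candidate trajectory and the learned attractor model."""
    p = w * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
    return float(np.log(p.sum(axis=1)).mean())

rng = np.random.default_rng(1)
real = np.concatenate([rng.normal(-2, 0.5, 500), rng.normal(2, 0.5, 500)])
mu, var, w = fit_gmm_1d(real)
near = rng.normal(-2, 0.5, 300)   # trajectory resembling the attractor
far = rng.normal(10, 0.5, 300)    # trajectory far from the attractor
print(gmm_score(near, mu, var, w) > gmm_score(far, mu, var, w))
```

In the parameter estimation loop, candidate parameter vectors that generate trajectories with higher likelihood under the learned GMM are preferred, sidestepping the point-by-point comparisons that make chaotic systems sensitive to initial conditions.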
Gerberich, Matthew W.; Oleson, Steven R.
2013-01-01
The Collaborative Modeling for Parametric Assessment of Space Systems (COMPASS) team at Glenn Research Center has performed integrated system analysis of conceptual spacecraft mission designs since 2006 using a multidisciplinary concurrent engineering process. The set of completed designs was archived in a database to allow for the study of relationships between design parameters. Although COMPASS uses a parametric spacecraft costing model, this research investigated the possibility of using a top-down approach to rapidly estimate overall vehicle costs. This paper presents the relationships between significant design variables, including breakdowns of dry mass, wet mass, and cost. It also develops a model for a broad estimate of these parameters from basic mission characteristics, including the target location distance, the payload mass, the duration, the delta-v requirement, and the type of mission, propulsion, and electrical power. Finally, this paper examines the accuracy of this model with regard to past COMPASS designs, with an assessment of outlying spacecraft, and compares the results to historical data from completed NASA missions.
Gong, Qi; Schaubel, Douglas E
2017-03-01
Treatments are frequently evaluated in terms of their effect on patient survival. In settings where randomization of treatment is not feasible, observational data are employed, necessitating correction for covariate imbalances. Treatments are usually compared using a hazard ratio, and most existing methods that quantify the treatment effect through the survival function are applicable only to treatments assigned at time 0. In the data structure of interest here, subjects typically begin follow-up untreated; time-until-treatment and the pretreatment death hazard are both heavily influenced by longitudinal covariates; and subjects may experience periods of treatment ineligibility. We propose semiparametric methods for estimating the average difference in restricted mean survival time attributable to a time-dependent treatment, i.e., the average effect of treatment among the treated, under current treatment assignment patterns. The pre- and posttreatment models are partly conditional, in that they use the covariate history up to the time of treatment. The pretreatment model is estimated through recently developed landmark analysis methods. For each treated patient, fitted pre- and posttreatment survival curves are projected out, then averaged in a manner that accounts for the censoring of treatment times. Asymptotic properties are derived and evaluated through simulation. The proposed methods are applied to liver transplant data in order to estimate the effect of liver transplantation on survival among transplant recipients under current practice patterns. © 2016, The International Biometric Society.
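The restricted mean survival time (RMST) that these methods target is simply the area under a survival curve up to a horizon tau. A minimal sketch for a step-function survival curve follows; names are illustrative, and the paper's landmark estimation and treatment-censoring adjustments are not reproduced here.

```python
def rmst(times, surv, tau):
    """Area under a right-continuous step survival curve up to tau.
    times: increasing event times; surv: S(t) just after each time;
    S(t) = 1 before the first event."""
    area, t_prev, s_prev = 0.0, 0.0, 1.0
    for t, s in zip(times, surv):
        if t > tau:
            break
        area += s_prev * (t - t_prev)
        t_prev, s_prev = t, s
    return area + s_prev * (tau - t_prev)
```

The effect among the treated is then the average, over treated patients, of `rmst` applied to the projected posttreatment curve minus `rmst` applied to the projected pretreatment curve.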
Alfarano, Simone; Lux, Thomas; Wagner, Friedrich
2006-10-01
Following Alfarano et al. [Estimation of agent-based models: the case of an asymmetric herding model, Comput. Econ. 26 (2005) 19-49; Excess volatility and herding in an artificial financial market: analytical approach and estimation, in: W. Franz, H. Ramser, M. Stadler (Eds.), Funktionsfähigkeit und Stabilität von Finanzmärkten, Mohr Siebeck, Tübingen, 2005, pp. 241-254], we consider a simple agent-based model of a highly stylized financial market. The model takes Kirman's ant process [A. Kirman, Epidemics of opinion and speculative bubbles in financial markets, in: M.P. Taylor (Ed.), Money and Financial Markets, Blackwell, Cambridge, 1991, pp. 354-368; A. Kirman, Ants, rationality, and recruitment, Q. J. Econ. 108 (1993) 137-156] of mimetic contagion as its starting point, but allows for asymmetry in the attractiveness of both groups. Embedding the contagion process into a standard asset-pricing framework, and identifying the abstract groups of the herding model as chartists and fundamentalist traders, a market with periodic bubbles and bursts is obtained. Taking stock of the availability of a closed-form solution for the stationary distribution of returns for this model, we can estimate its parameters via maximum likelihood. Expanding our earlier work, this paper presents pertinent estimates for the Australian dollar/US dollar exchange rate and the Australian stock market index. As it turns out, our model indicates dominance of fundamentalist behavior in both the stock and foreign exchange market.
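The asymmetric ant process underlying this model is easy to simulate. The sketch below is a discrete-time approximation with illustrative parameter values, not the authors' calibration: agents switch group either spontaneously (rates a1, a2, whose asymmetry is the paper's extension) or by herding on the other group (strength b).

```python
import random

def simulate_kirman(n_agents=100, a1=0.002, a2=0.002, b=0.01,
                    steps=20000, seed=1):
    """Asymmetric Kirman ant process: k agents are in group 1 (say,
    chartists). Each step, at most one agent switches, spontaneously
    or by meeting a member of the other group. Returns the path of
    the group-1 fraction x = k / n_agents."""
    rng = random.Random(seed)
    k = n_agents // 2
    path = []
    for _ in range(steps):
        x = k / n_agents
        p_up = (1 - x) * (a1 + b * k)          # a group-2 agent converts
        p_dn = x * (a2 + b * (n_agents - k))   # a group-1 agent converts
        u = rng.random()
        if u < p_up and k < n_agents:
            k += 1
        elif u < p_up + p_dn and k > 0:
            k -= 1
        path.append(k / n_agents)
    return path
```

Feeding the chartist fraction into an asset-pricing rule produces the bubble-and-burst return dynamics whose stationary distribution the paper exploits for maximum likelihood estimation.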
DEFF Research Database (Denmark)
Jacobsen, Martin; Martinussen, Torben
2016-01-01
Pseudo-values have proven very useful in censored data analysis in complex settings such as multi-state models. The approach was originally suggested by Andersen et al., Biometrika, 90, 2003, 335, who also suggested estimating standard errors using classical generalized estimating equation results. These results were studied more formally in Graw et al., Lifetime Data Anal., 15, 2009, 241, which derived some key results based on a second-order von Mises expansion. However, results concerning large-sample properties of estimates based on regression models for pseudo-values still seem unclear. In this paper, we study these large-sample properties in the simple setting of survival probabilities and show that the estimating function can be written as a U-statistic of second order, giving rise to an additional term that does not vanish asymptotically. We further show that previously advocated standard error...
Experimental data bases useful for quantification of model uncertainties in best estimate codes
International Nuclear Information System (INIS)
Wilson, G.E.; Katsma, K.R.; Jacobson, J.L.; Boodry, K.S.
1988-01-01
A data base is necessary for assessment of thermal hydraulic codes within the context of the new NRC ECCS Rule. Separate effect tests examine particular phenomena that may be used to develop and/or verify models and constitutive relationships in the code. Integral tests are used to demonstrate the capability of codes to model global characteristics and sequence of events for real or hypothetical transients. The nuclear industry has developed a large experimental data base of fundamental nuclear, thermal-hydraulic phenomena for code validation. Given a particular scenario, and recognizing the scenario's important phenomena, selected information from this data base may be used to demonstrate applicability of a particular code to simulate the scenario and to determine code model uncertainties. LBLOCA experimental data bases useful to this objective are identified in this paper. 2 tabs
Maadooliat, Mehdi; Zhou, Lan; Najibi, Seyed Morteza; Gao, Xin; Huang, Jianhua Z.
2015-10-21
This paper develops a method for simultaneous estimation of density functions for a collection of populations of protein backbone angle pairs using a data-driven, shared basis that is constructed by bivariate spline functions defined on a triangulation of the bivariate domain. The circular nature of angular data is taken into account by imposing appropriate smoothness constraints across boundaries of the triangles. Maximum penalized likelihood is used to fit the model and an alternating blockwise Newton-type algorithm is developed for computation. A simulation study shows that the collective estimation approach is statistically more efficient than estimating the densities individually. The proposed method was used to estimate neighbor-dependent distributions of protein backbone dihedral angles (i.e., Ramachandran distributions). The estimated distributions were applied to protein loop modeling, one of the most challenging open problems in protein structure prediction, by feeding them into an angular-sampling-based loop structure prediction framework. Our estimated distributions compared favorably to the Ramachandran distributions estimated by fitting a hierarchical Dirichlet process model; and in particular, our distributions showed significant improvements on the hard cases where existing methods do not work well.
International Nuclear Information System (INIS)
Wang, Xiaojuan; Tse, Peter W; Dordjevich, Alexandar
2011-01-01
The reflection signal from a defect in the process of guided wave-based pipeline inspection usually includes sufficient information to detect and define the defect. In previous research, it has been found that the reflection of guided waves from even a complex defect primarily results from the interference between reflection components generated at the front and the back edges of the defect. The respective contributions of different parameters of a defect to the overall reflection can be affected by the features of the two primary reflection components. The identification of these components embedded in the reflection signal is therefore useful in characterizing the defect concerned. In this research, we propose a method of model-based parameter estimation, aided by the Hilbert–Huang transform technique, to decompose a reflection signal and thereby enable characterization of the pipeline defect. Once the two primary edge reflection components are decomposed and identified, the distance between the reflection positions, which relates closely to the axial length of the defect, can be easily and accurately determined. Considering the irregular profiles of complex pipeline defects at their two edges, which is often the case in real situations, the average of the varied axial lengths of such a defect along the circumference of the pipeline is used in this paper as the characteristic value of actual axial length for comparison purposes. Experimental results from artificial defects and real corrosion in sample pipes are presented to demonstrate the effectiveness of the proposed method.
Model-based dynamic multi-parameter method for peak power estimation of lithium-ion batteries
Sun, F.; Xiong, R.; He, H.; Li, W.; Aussems, J.E.E.
2012-01-01
A model-based dynamic multi-parameter method for peak power estimation is proposed for batteries and battery management systems (BMSs) used in hybrid electric vehicles (HEVs). The available power must be accurately calculated in order to not damage the battery by over charging or over discharging or
International Nuclear Information System (INIS)
Cerny, V.
1983-01-01
A model-based estimate is presented of the geometrical acceptance of the HYPERON spectrometer for the detection of e⁺e⁻ pairs in the proposed lepton experiment. The results of the Monte Carlo calculation show that the expected acceptance is fairly high. (author)
Modeling mangrove biomass using remote sensing based age and growth estimates
Lagomasino, D.; Fatoyinbo, T. E.; Feliciano, E. A.; Lee, S. K.; Trettin, C.; Mangora, M.; Rahman, M.
2016-12-01
Mangroves are highly regarded coastal forests because of their ecosystem services and high carbon storage potential. In addition, these forests can develop rapidly in locations where congenial environmental conditions and sediment supply are available. Monitoring the growth and age of developing mangrove forests is crucial for sustainable management and estimating carbon stocks. Combining imagery from radar and optical satellites (e.g., TanDEM-X and Landsat), we can estimate young mangrove growth and age at regional and continental scales. We used TanDEM-X radar interferometry for modeling canopy height in 2013 and Landsat to measure land cover change from 1990 to 2013. Annual NDVI composites were determined for each calendar year between 1990 and 2013. New land areas gained from the transition of water to vegetation were determined from the differences between annual NDVI composites and the reference year 2013. The year with the greatest NDVI difference that met the threshold criteria was taken as the time of land emergence, corresponding to an initial tree height of 0 m. Annual canopy height growth rates were estimated from the time elapsed between land emergence and the 2013 canopy height models derived from TanDEM-X and very-high-resolution optical data. In this presentation, we compare growth rates and biomass accumulation in mangrove forests at four river deltas: the Zambezi (Mozambique), Rufiji (Tanzania), Ganges (Bangladesh), and Mekong (Vietnam). The spatial patterns of growth rates coincided with characteristic successional paradigms and stream morphology, where the maximum growth rates typically occurred along prograding creek banks. Initial comparisons between height-only and growth-age biomass indicate that the latter tends to overestimate biomass for younger forest stands of similar height. Both the vertical (e.g., canopy height) and horizontal (e.g., expansion) growth rates measured from remote sensing can garner important information regarding mangrove succession and primary productivity. Continued research
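The age-and-growth bookkeeping in this abstract can be sketched simply. This toy version uses a simplified emergence rule (the first year-over-year NDVI jump above a threshold) instead of the paper's greatest-difference criterion, and all names and values are illustrative.

```python
def stand_age_and_growth(ndvi_by_year, height_2013, ref_year=2013, thresh=0.2):
    """Estimate stand age and mean annual height growth for one pixel.
    ndvi_by_year: {year: NDVI composite}. The first year whose NDVI
    jump from the prior year exceeds the threshold is taken as land
    emergence (tree height 0 m); growth rate is the 2013 canopy height
    (e.g., from TanDEM-X) divided by the elapsed years."""
    years = sorted(ndvi_by_year)
    emergence = None
    for prev, cur in zip(years, years[1:]):
        if emergence is None and ndvi_by_year[cur] - ndvi_by_year[prev] >= thresh:
            emergence = cur
    if emergence is None or emergence == ref_year:
        return None, None  # no detectable emergence, or zero age
    age = ref_year - emergence
    return age, height_2013 / age
```

Mapping this per pixel yields the growth-rate surfaces compared across the four deltas.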
Directory of Open Access Journals (Sweden)
Hongjie Wu
2013-01-01
Full Text Available State of charge (SOC) is a critical factor in guaranteeing that a battery system operates in a safe and reliable manner. Many uncertainties and noises, such as fluctuating current, sensor measurement accuracy and bias, temperature effects, calibration errors or even sensor failure, pose a challenge to the accurate estimation of SOC in real applications. This paper adds two contributions to the existing literature. First, the auto regressive exogenous (ARX) model is proposed here to simulate the battery's nonlinear dynamics. Due to its discrete form and ease of implementation, this straightforward approach could be more suitable for real applications. Second, its order selection principle and parameter identification method are illustrated in detail in this paper. Hybrid pulse power characterization (HPPC) cycles are implemented on a 60 Ah LiFePO4 battery module for model identification and validation. Based on the proposed ARX model, SOC estimation is pursued using the extended Kalman filter. The adaptability of the battery models and the robustness of the SOC estimation algorithm are also evaluated. The results indicate that the SOC estimation method using the Kalman filter based on the ARX model shows great performance. It increases the model output voltage accuracy, thereby having the potential to be used in real applications, such as EVs and HEVs.
Projection-based Bayesian recursive estimation of ARX model with uniform innovations
Czech Academy of Sciences Publication Activity Database
Kárný, Miroslav; Pavelková, Lenka
2007-01-01
Roč. 56, 9/10 (2007), s. 646-655 ISSN 0167-6911 R&D Projects: GA AV ČR 1ET100750401; GA MŠk 2C06001; GA MDS 1F43A/003/120 Institutional research plan: CEZ:AV0Z10750506 Keywords : ARX model * Bayesian recursive estimation * Uniform distribution Subject RIV: BC - Control Systems Theory Impact factor: 1.634, year: 2007 http://dx.doi.org/10.1016/j.sysconle.2007.03.005
Wang, Tingting; Sun, Fubao; Liu, Changming; Liu, Wenbin; Wang, Hong
2017-04-01
An accurate estimation of ET in humid catchments is essential in water-energy budget research and water resource management, yet it remains a huge challenge, and there is so far no well-accepted explanation for the difficulty of annual ET estimation in humid catchments. Here we present ET estimates for 102 humid catchments over China based on the Budyko framework and two hydrological models, the abcd model and the Xin'anjiang model, in comparison with ET calculated from the water balance equation (ETwb), on the grounds that ΔS is approximately zero at the multi-annual and annual time scales. We also provide a possible explanation for the poor annual ET estimation in humid catchments. The results show that at the multi-annual timescale, the Budyko framework works fine for ET estimation in humid catchments, while at the annual time scale, neither the Budyko framework nor the hydrological models can estimate ET well. The major cause of this poorly estimated annual ET in humid catchments is the neglect of ΔS in ETwb, since it enlarges the variability of the real actual evapotranspiration. Much improvement is obtained when comparing estimated ET + ΔS with ETwb, and the bigger the catchment area, the better this improvement. This provides a reasonable explanation for the poorly estimated annual ET in humid catchments and reveals the important role of ΔS in ET estimation and validation. We highlight that the annual ΔS shouldn't be taken as zero in the water balance equation in humid catchments.
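The Budyko framework referred to above relates mean ET to precipitation and potential ET. One widely used parametric form is Fu's equation; the abstract does not say which form the authors used, so the sketch below is only an illustration of the framework, with an illustrative catchment parameter w.

```python
def fu_budyko(p, pet, w=2.6):
    """Fu's form of the Budyko curve: long-term ET from annual
    precipitation P and potential ET (same units); w > 1 is a
    catchment-specific parameter. ET is bounded by both P and PET."""
    return p * (1 + pet / p - (1 + (pet / p) ** w) ** (1 / w))
```

The annual failure discussed in the abstract arises because the water-balance benchmark ETwb = P - Q - ΔS is compared against such curves with ΔS set to zero, which only holds at the multi-annual scale.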
Jakacki, Jaromir; Golenko, Mariya
2014-05-01
Two hydrodynamic models (the Princeton Ocean Model (POM) and the Parallel Ocean Program (POP)) have been implemented for the Baltic Sea area that contains the locations of chemical munitions dumped during World War II. The models were configured from similar data sources: bathymetry, initial conditions and external forcing were implemented from identical data. The horizontal resolutions of the models are also very similar. Several simulations with different initial conditions have been performed. Comparison and analysis of the bottom currents from both models have been carried out, and based on these, estimates of the dangerous area and the critical time have been made. Lagrangian particle tracking and a passive tracer were also implemented, and based on these results the probability of dangerous doses appearing, and its time evolution, is presented. This work has been performed in the frame of the MODUM project, financially supported by NATO.
Directory of Open Access Journals (Sweden)
Yuqi Guo
2017-08-01
Full Text Available In order to estimate traffic densities in a large-scale urban freeway network in an accurate and timely fashion when traffic sensors do not cover the freeway network completely, and thus only local measurement data can be utilized, this paper proposes a decentralized state observer approach based on a macroscopic traffic flow model. Firstly, by using the well-known cell transmission model (CTM), the urban freeway network is modeled as a distributed system. Secondly, based on the model, a decentralized observer is designed. With the help of a Lyapunov function and the S-procedure, the observer gains are computed using the linear matrix inequality (LMI) technique, so that the traffic densities of the whole road network can be estimated by the designed observer. Finally, this method is applied to the outer ring of Beijing's second ring road, and experimental results demonstrate the effectiveness and applicability of the proposed approach.
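The cell transmission model at the core of this observer design is a simple conservation-law update. The sketch below runs one CTM step on a line of freeway cells with a triangular fundamental diagram; the free-flow speed, wave speed, jam density and capacity values are illustrative, and the LMI-based observer itself is not reproduced.

```python
def ctm_step(rho, v=100.0, w=20.0, rho_max=120.0, q_max=2000.0,
             dt=10 / 3600, length=1.0, inflow=1500.0):
    """One cell-transmission-model step on a line of freeway cells.
    rho: densities (veh/km); v, w: free-flow / backward wave speeds
    (km/h); dt (h) and cell length (km) must satisfy v*dt <= length."""
    def demand(r):   # max flow a cell can send downstream
        return min(v * r, q_max)
    def supply(r):   # max flow a cell can accept from upstream
        return min(w * (rho_max - r), q_max)
    flows = [min(inflow, supply(rho[0]))]            # network inflow
    for i in range(len(rho) - 1):
        flows.append(min(demand(rho[i]), supply(rho[i + 1])))
    flows.append(demand(rho[-1]))                    # free outflow
    # conservation: density change = (flow in - flow out) * dt / length
    return [r + dt / length * (flows[i] - flows[i + 1])
            for i, r in enumerate(rho)]
```

In the paper, an observer copies this dynamics per subnetwork and corrects it with gains (from the LMI design) applied to the locally measured densities.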
Directory of Open Access Journals (Sweden)
C. Suresh Raju
2007-10-01
Full Text Available Estimation of precipitable water (PW) in the atmosphere from ground-based Global Positioning System (GPS) data essentially involves modeling the zenith hydrostatic delay (ZHD) in terms of surface pressure (P_{s}) and subtracting it from the corresponding values of zenith tropospheric delay (ZTD) to estimate the zenith wet (non-hydrostatic) delay (ZWD). This further involves establishing an appropriate model connecting PW and ZWD, which in the simplest case is assumed to be similar to that of ZHD. But when the temperature variations are large, the variation of the proportionality constant connecting PW and ZWD must be accounted for to estimate PW accurately. For this, a water-vapor-weighted mean temperature (T_{m}) has been defined in many investigations, which has to be modeled on a regional basis. For estimating PW over the Indian region from GPS data, a region-specific model for T_{m} in terms of surface temperature (T_{s}) is developed using radiosonde measurements from eight India Meteorological Department (IMD) stations spread over the subcontinent within a latitude range of 8.5°–32.6° N. Following a similar procedure, T_{m}-based models are also evolved for each of these stations, and the features of these site-specific models are compared with those of the region-specific model. The applicability of the region-specific and site-specific T_{m}-based models in retrieving PW from GPS data recorded at the IGS sites Bangalore and Hyderabad is tested by comparing the retrieved values of PW with those estimated from the altitude profile of water vapor measured using radiosondes. The values of ZWD estimated at 00:00 UTC and 12:00 UTC are used to test the validity of the models by estimating PW using the models and comparing it with that obtained from radiosonde data. The region-specific T_{m}-based model is found to be on par with, if not better than, a
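The ZWD-to-PW conversion described here can be sketched directly. Note the hedges: the linear T_{m} model below uses the global Bevis coefficients, not the paper's region-specific Indian fit, and the refractivity constants are standard textbook values.

```python
def mean_temperature(ts_kelvin):
    """Linear Tm(Ts) model. These are the widely used global Bevis
    coefficients; the paper fits region-specific ones for India."""
    return 70.2 + 0.72 * ts_kelvin

def pw_from_zwd(zwd_m, tm):
    """Convert zenith wet delay (m) to precipitable water (m):
    PW = Pi * ZWD, Pi = 1e8 / (rho_w * Rv * (k3/Tm + k2')).
    k2' [K/hPa], k3 [K^2/hPa], Rv [J/(kg K)], rho_w [kg/m^3];
    Pi is dimensionless, roughly 0.15 for typical Tm."""
    k2p, k3, rv, rho_w = 22.1, 3.739e5, 461.5, 1000.0
    pi_factor = 1.0e8 / (rho_w * rv * (k3 / tm + k2p))
    return pi_factor * zwd_m
```

Because Pi varies with T_{m}, and T_{m} tracks surface temperature, a poorly chosen T_{m} model directly biases the retrieved PW, which is the motivation for the regional model.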
Schnitzer, Mireille E.; Lok, Judith J.; Gruber, Susan
2015-01-01
This paper investigates the appropriateness of integrating flexible propensity score modeling (nonparametric or machine learning approaches) into semiparametric models for the estimation of a causal quantity, such as the mean outcome under treatment. We begin with an overview of some of the issues involved in knowledge-based and statistical variable selection in causal inference and the potential pitfalls of automated selection based on the fit of the propensity score. Using a simple example, we directly show the consequences of adjusting for pure causes of the exposure when using inverse probability of treatment weighting (IPTW). Such variables are likely to be selected when using a naive approach to model selection for the propensity score. We describe how the method of Collaborative Targeted minimum loss-based estimation (C-TMLE; van der Laan and Gruber, 2010) capitalizes on the collaborative double robustness property of semiparametric efficient estimators to select covariates for the propensity score based on the error in the conditional outcome model. Finally, we compare several approaches to automated variable selection in low- and high-dimensional settings through a simulation study. From this simulation study, we conclude that using IPTW with flexible prediction for the propensity score can result in inferior estimation, while Targeted minimum loss-based estimation and C-TMLE may benefit from flexible prediction and remain robust to the presence of variables that are highly correlated with treatment. However, in our study, standard influence function-based methods for the variance underestimated the standard errors, resulting in poor coverage under certain data-generating scenarios. PMID:26226129
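The IPTW estimator discussed here has a compact form. This sketch shows the Hajek (normalized) IPTW estimate of the mean outcome under treatment, given already-fitted propensity scores; names are illustrative, and the paper's TMLE/C-TMLE machinery is not reproduced.

```python
def iptw_mean_treated(data):
    """Hajek-type IPTW estimate of E[Y(1)] from records
    (a: treated 0/1, p: propensity P(A=1|X), y: outcome).
    Treated subjects are weighted by 1/p, normalizing by the
    weight sum; near-zero propensities inflate the weights,
    which is exactly the instability the paper warns about when
    pure causes of exposure enter the propensity model."""
    num = sum(y / p for a, p, y in data if a == 1)
    den = sum(1 / p for a, p, y in data if a == 1)
    return num / den
```

In a simulation one would compare this against a doubly robust estimator as the propensity model is made more or less flexible.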
SU-E-J-01: 3D Fluoroscopic Image Estimation From Patient-Specific 4DCBCT-Based Motion Models
International Nuclear Information System (INIS)
Dhou, S; Hurwitz, M; Lewis, J; Mishra, P
2014-01-01
Purpose: 3D motion models derived from 4DCT images, taken days or weeks before treatment, cannot reliably represent patient anatomy on the day of treatment. We develop a method to generate motion models based on 4DCBCT acquired at the time of treatment, and apply the model to estimate 3D time-varying images (referred to as 3D fluoroscopic images). Methods: Motion models are derived through deformable registration between each 4DCBCT phase, and principal component analysis (PCA) on the resulting displacement vector fields. 3D fluoroscopic images are estimated based on cone-beam projections simulating kV treatment imaging. PCA coefficients are optimized iteratively through comparison of these cone-beam projections and projections estimated based on the motion model. Digital phantoms reproducing ten patient motion trajectories, and a physical phantom with regular and irregular motion derived from measured patient trajectories, are used to evaluate the method in terms of tumor localization and global voxel intensity difference compared to ground truth. Results: Experiments included: 1) assuming no anatomic or positioning changes between 4DCT and treatment time; and 2) simulating positioning and tumor baseline shifts at the time of treatment compared to 4DCT acquisition. 4DCBCTs were reconstructed from the anatomy as seen at treatment time. In case 1) the tumor localization errors and the intensity differences across the ten patients were smaller using the 4DCT-based motion model, possibly due to superior image quality. In case 2) the tumor localization error and intensity difference were 2.85 and 0.15, respectively, using 4DCT-based motion models, and 1.17 and 0.10 using 4DCBCT-based models. 4DCBCT performed better due to its ability to reproduce daily anatomical changes. Conclusion: The study showed an advantage of 4DCBCT-based motion models in the context of 3D fluoroscopic image estimation.
Positioning and tumor baseline shift uncertainties were mitigated by the 4DCBCT-based
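The PCA motion model above (a mean displacement vector field plus principal components) can be sketched without any imaging libraries. This toy version flattens each DVF to a list, finds only the first principal component by power iteration on the sample covariance, and synthesizes a new DVF as mean plus a scaled component; the single-component restriction and all names are ours.

```python
import math

def pca_motion_model(dvfs, n_iter=200):
    """Mean and first principal component of displacement vector
    fields (each flattened to a list of floats), via power iteration
    on the sample covariance without forming it explicitly."""
    n, d = len(dvfs), len(dvfs[0])
    mean = [sum(col) / n for col in zip(*dvfs)]
    X = [[x - m for x, m in zip(row, mean)] for row in dvfs]
    pc = [1.0] * d
    for _ in range(n_iter):
        proj = [sum(xi * pi for xi, pi in zip(row, pc)) for row in X]
        pc = [sum(proj[k] * X[k][j] for k in range(n)) / n for j in range(d)]
        norm = math.sqrt(sum(p * p for p in pc))
        pc = [p / norm for p in pc]
    return mean, pc

def reconstruct(mean, pc, coeff):
    """New DVF = mean + coefficient * principal component; in the
    paper, coefficients are optimized to match measured cone-beam
    projections."""
    return [m + coeff * p for m, p in zip(mean, pc)]
```

Applying the reconstructed DVF to a reference phase image yields one 3D fluoroscopic frame per optimized coefficient set.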
The Output Cost of Gender Discrimination: A Model-Based Macroeconomic Estimate
Cavalcanti, Tiago V. de V.; Tavares, José
2008-01-01
Gender-based discrimination is a pervasive and costly phenomenon. To a greater or lesser extent, all economies present a gender wage gap, associated with lower female labour force participation rates and higher fertility. This paper presents a growth model where saving, fertility and labour market participation are endogenously determined, and there is wage discrimination. The model is calibrated to mimic the performance of the U.S. economy, including the gender wage gap and relative female l...
Development of a Greek solar map based on solar model estimations
Kambezidis, H. D.; Psiloglou, B. E.; Kavadias, K. A.; Paliatsos, A. G.; Bartzokas, A.
2016-05-01
The realization that Renewable Energy Sources (RES) are the only environmentally friendly solution for power generation has moved solar systems to the forefront of the energy market in the last decade. The capacity of solar power doubles almost every two years in many European countries, including Greece. This rise has brought the need for reliable predictions of meteorological data that can easily be utilized for proper RES-site allocation. The absence of solar measurements has therefore raised the demand for deploying a suitable model in order to create a solar map. The generation of a solar map for Greece could provide solid foundations for predicting the energy production of a solar power plant installed in the area, by providing an estimate of the solar energy acquired at each longitude and latitude of the map. In the present work, the well-known Meteorological Radiation Model (MRM), a broadband solar radiation model, is engaged. This model utilizes common meteorological data, such as air temperature, relative humidity, barometric pressure and sunshine duration, in order to calculate solar radiation in areas where solar measurements are not available. Hourly values of the above meteorological parameters are acquired from 39 meteorological stations evenly dispersed around Greece, and hourly values of solar radiation are calculated with the MRM. Then, by using an integrated spatial interpolation method, a Greek solar energy map is generated, providing annual solar energy values all over Greece.
V and V-based remaining fault estimation model for safety–critical software of a nuclear power plant
International Nuclear Information System (INIS)
Eom, Heung-seop; Park, Gee-yong; Jang, Seung-cheol; Son, Han Seong; Kang, Hyun Gook
2013-01-01
Highlights: ► A software fault estimation model based on Bayesian Nets and V and V. ► Use of quantified data derived from qualitative V and V results. ► Fault insertion and elimination processes were modeled in the context of probability. ► Systematically estimates the expected number of remaining faults. -- Abstract: Quantitative software reliability measurement approaches have some limitations in demonstrating the proper level of reliability for safety-critical software. One of the more promising alternatives is the use of software development quality information. Particularly in the nuclear industry, regulatory bodies in most countries use both probabilistic and deterministic measures for ensuring the reliability of safety-grade digital computers in NPPs. The point of the deterministic criteria is to assess the whole development process and its related activities during the software development life cycle for the acceptance of safety-critical software. In addition, software Verification and Validation (V and V) plays an important role in this process. In this light, we propose a V and V-based fault estimation method using Bayesian Nets to estimate the remaining faults for safety-critical software after the software development life cycle is completed. By modeling the fault insertion and elimination processes during the whole development phases, the proposed method systematically estimates the expected number of remaining faults.
A structurally based analytic model for estimation of biomass and fuel loads of woodland trees
Robin J. Tausch
2009-01-01
Allometric/structural relationships in tree crowns are a consequence of the physical, physiological, and fluid conduction processes of trees, which control the distribution, efficient support, and growth of foliage in the crown. The structural consequences of these processes are used to develop an analytic model based on the concept of branch orders. A set of...
Bos, Charles S.
2008-01-01
When analysing the volatility related to high frequency financial data, mostly non-parametric approaches based on realised or bipower variation are applied. This article instead starts from a continuous time diffusion model and derives a parametric analog at high frequency for it, allowing
α-Decomposition for estimating parameters in common cause failure modeling based on causal inference
International Nuclear Information System (INIS)
Zheng, Xiaoyu; Yamaguchi, Akira; Takata, Takashi
2013-01-01
The traditional α-factor model has focused on the occurrence frequencies of common cause failure (CCF) events. Global α-factors in the α-factor model are defined as fractions of failure probability for particular groups of components. However, there are unknown uncertainties in CCF parameter estimation owing to the scarcity of available failure data. Joint distributions of CCF parameters are actually determined by a set of possible causes, which are characterized by their CCF-triggering abilities and occurrence frequencies. In the present paper, the process of α-decomposition (the Kelly-CCF method) is developed to learn about sources of uncertainty in CCF parameter estimation. Moreover, it aims to evaluate the CCF risk significance of different causes, expressed as decomposed α-factors. Firstly, a hybrid Bayesian network is adopted to reveal the relationship between potential causes and failures. Secondly, because potential causes have different occurrence frequencies and different abilities to trigger dependent or independent failures, a regression model is provided and proved by conditional probability: global α-factors are expressed in terms of explanatory variables (causes' occurrence frequencies) and parameters (decomposed α-factors). Finally, an example is provided to illustrate the process of hierarchical Bayesian inference for the α-decomposition process. This study shows that the α-decomposition method can integrate failure information from the cause, component and system levels. It can parameterize the CCF risk significance of possible causes and can update the probability distributions of global α-factors. Besides, it provides a reliable way to evaluate uncertainty sources and reduce uncertainty in probabilistic risk assessment. It is recommended to build databases including CCF parameters and the corresponding occurrence frequencies of causes for each targeted system
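For context, the global α-factors that the paper decomposes have simple maximum-likelihood point estimates from event counts. The sketch below shows only that baseline (the fraction of CCF events failing exactly k components); the paper's hierarchical Bayesian decomposition over causes is not reproduced, and names are illustrative.

```python
def alpha_factors(event_counts):
    """Point estimates of global alpha-factors from CCF event data.
    event_counts[k-1] = number of events in which exactly k components
    of the common cause component group failed; alpha_k = n_k / sum(n).
    The alpha-decomposition method instead expresses each alpha_k in
    terms of cause occurrence frequencies and decomposed factors."""
    total = sum(event_counts)
    return [n / total for n in event_counts]
```

With scarce data these point estimates are highly uncertain, which is precisely why the paper moves to joint distributions informed by causal structure.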
Yu, Wenxi; Liu, Yang; Ma, Zongwei; Bi, Jun
2017-08-01
Using satellite-based aerosol optical depth (AOD) measurements and statistical models to estimate ground-level PM2.5 is a promising way to fill in the areas that are not covered by ground PM2.5 monitors. The statistical models used in previous studies are primarily Linear Mixed Effects (LME) and Geographically Weighted Regression (GWR) models. In this study, we developed a new regression model between PM2.5 and AOD using Gaussian processes in a Bayesian hierarchical setting. Gaussian processes model the stochastic nature of the spatial random effects, where the mean surface and the covariance function are specified. The spatial stochastic process is incorporated under the Bayesian hierarchical framework to explain the variation of PM2.5 concentrations together with other factors, such as AOD and spatial and non-spatial random effects. We evaluate the results of our model and compare them with those of other, conventional statistical models (GWR and LME) by within-sample model fitting and out-of-sample validation (cross validation, CV). The results show that our model possesses a CV result (R² = 0.81) that reflects higher accuracy than that of GWR and LME (0.74 and 0.48, respectively). Our results indicate that Gaussian process models have the potential to improve the accuracy of satellite-based PM2.5 estimates.
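At the core of such a model is plain Gaussian process regression. The sketch below computes the GP posterior mean m(x*) = k*ᵀ(K + σ²I)⁻¹y with an RBF kernel and a small Gaussian-elimination solve; it omits the paper's Bayesian hierarchical structure, covariates and random effects, and all names and hyperparameters are illustrative.

```python
import math

def rbf(a, b, ls=1.0):
    """Squared-exponential kernel between two coordinate tuples."""
    return math.exp(-sum((x - y) ** 2 for x, y in zip(a, b)) / (2 * ls * ls))

def gp_posterior_mean(X, y, X_new, noise=1e-6, ls=1.0):
    """GP regression posterior mean at X_new, given training inputs X
    (e.g., monitor locations) and outputs y (e.g., PM2.5 residuals
    after the mean model). Solves (K + noise*I) alpha = y directly."""
    n = len(X)
    K = [[rbf(X[i], X[j], ls) + (noise if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    M = [K[i][:] + [y[i]] for i in range(n)]
    for c in range(n):                      # Gauss-Jordan elimination
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * b for a, b in zip(M[r], M[c])]
    alpha = [M[i][n] / M[i][i] for i in range(n)]
    return [sum(rbf(x_new, X[i], ls) * alpha[i] for i in range(n))
            for x_new in X_new]
```

Predicting at unmonitored grid cells with this posterior mean is what fills the spatial gaps between ground monitors.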
Directory of Open Access Journals (Sweden)
Yong Tian
2014-12-01
State of charge (SOC) estimation is essential to battery management systems in electric vehicles (EVs) to ensure the safe operation of batteries and to provide drivers with the remaining range of the EV. A number of estimation algorithms have been developed to obtain an accurate SOC value, because the SOC cannot be directly measured with sensors and is closely related to various factors, such as ambient temperature, current rate and battery aging. In this paper, two model-based adaptive algorithms, the adaptive unscented Kalman filter (AUKF) and the adaptive sliding mode observer (ASMO), are applied and compared in terms of convergence behavior, tracking accuracy, computational cost and estimation robustness against parameter uncertainties of the battery model in SOC estimation. Two typical driving cycles, the Dynamic Stress Test (DST) and the New European Driving Cycle (NEDC), are applied to evaluate the performance of the two algorithms. Comparison results show that the AUKF has merits in convergence ability and tracking accuracy given an accurate battery model, while the ASMO has lower computational cost and better estimation robustness against parameter uncertainties of the battery model.
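The sliding-mode idea can be illustrated with a toy first-order observer (this is a didactic stand-in for the paper's ASMO, not its actual formulation; the linear OCV curve, capacity, resistance and gain below are all assumed values):

```python
Q = 3600.0            # cell capacity [A*s] (hypothetical 1 Ah cell)
R = 0.05              # ohmic resistance [ohm] (assumed)
ocv = lambda soc: 3.0 + 1.2 * soc   # toy linear open-circuit voltage

def sgn(x):
    return (x > 0) - (x < 0)

def run(steps=2000, dt=1.0, i_load=1.0, soc0=0.9, est0=0.5, k=0.002):
    """Plant: coulomb counting from the true SOC. Observer: same model
    started from a wrong SOC, corrected by a sliding-mode term driven
    by the terminal-voltage error."""
    soc, est = soc0, est0
    for _ in range(steps):
        v_meas = ocv(soc) - R * i_load
        v_hat = ocv(est) - R * i_load
        soc -= i_load * dt / Q
        est += -i_load * dt / Q + k * sgn(v_meas - v_hat)
        est = min(max(est, 0.0), 1.0)
    return soc, est

soc, est = run()
print(round(soc, 3), round(est, 3))
```

The observer closes a 0.4 SOC initialization error within a few hundred steps and then chatters within ±k of the true value, which is the characteristic robustness/chatter trade-off of sliding-mode designs.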
DEFF Research Database (Denmark)
He, Xin; Vejen, Flemming; Stisen, Simon
2011-01-01
…of precipitation compared with rain-gauge-based methods, thus providing the basis for better water resources assessments. The radar QPE algorithm called ARNE is a distance-dependent areal estimation method that merges radar data with ground surface observations. The method was applied to the Skjern River catchment in western Denmark, where alternative precipitation estimates were also used as input to an integrated hydrologic model. The hydrologic responses from the model were analyzed by comparing radar- and ground-based precipitation input scenarios. Results showed that radar QPE products are able to generate reliable simulations of stream flow and water balance. The potential of using radar-based precipitation was found to be especially high at a smaller scale, where the impact of spatial resolution was evident from the stream discharge results. Also, groundwater recharge was shown to be sensitive…
Development and validation of a Kalman filter-based model for vehicle slip angle estimation
Gadola, M.; Chindamo, D.; Romano, M.; Padula, F.
2014-01-01
It is well known that vehicle slip angle is one of the most difficult parameters to measure on a vehicle during testing or racing activities. Moreover, the appropriate sensor is very expensive and it is often difficult to fit to a car, especially on race cars. We propose here a strategy to eliminate the need for this sensor by using a mathematical tool which gives a good estimation of the vehicle slip angle. A single-track car model, coupled with an extended Kalman filter, was used in order to achieve the result. Moreover, a tuning procedure is proposed that takes into consideration both nonlinear and saturation characteristics typical of vehicle lateral dynamics. The effectiveness of the proposed algorithm has been proven by both simulation results and real-world data.
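The estimation loop behind such an observer can be sketched with a scalar linear Kalman filter (a simplified linear stand-in for the paper's extended Kalman filter on a single-track model; the noise variances and the constant "true" state below are illustrative assumptions):

```python
import random

random.seed(0)

# Scalar Kalman filter: estimate a slowly varying quantity (stand-in for
# the slip-angle state) from noisy measurements.
q, r = 1e-4, 0.04          # process and measurement noise variances (assumed)
x_hat, p = 0.0, 1.0        # initial estimate and covariance
truth = 0.5                # constant true state (toy case)

for _ in range(500):
    z = truth + random.gauss(0.0, r ** 0.5)   # noisy measurement
    p += q                                     # predict: covariance grows
    k = p / (p + r)                            # Kalman gain
    x_hat += k * (z - x_hat)                   # update with innovation
    p *= (1.0 - k)                             # posterior covariance

print(round(x_hat, 2))
```

In the real EKF the predict step propagates the nonlinear single-track dynamics and the gain is computed from the linearized Jacobians, but the predict/gain/update cycle is exactly this one.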
Directory of Open Access Journals (Sweden)
Mariev Oleg
2016-09-01
The aim of this paper is twofold. First, it is to answer the question of whether Russia is successful in attracting foreign direct investment (FDI). Second, it is to identify partner countries that "overinvest" and "underinvest" in the Russian economy. We do this by calculating potential FDI inflows to Russia and comparing them with actual values. This research is associated with the empirical estimation of factors explaining FDI flows between countries. The methodological foundation used for the research is the gravity model of foreign direct investment. In discussing the pros and cons of different econometric methods for estimating the gravity equation, we conclude that the Poisson pseudo-maximum-likelihood method with instrumental variables (IV PPML) is one of the best options in our case. Using a database covering about 70% of FDI flows for the period 2001-2011, we identify the following factors that explain the variance of bilateral FDI flows in the world economy: the GDP of the investing country, the GDP of the recipient country, the distance between countries, the remoteness of the investor country, the remoteness of the recipient country, the level of institutional development in the host country, the wage level in the host country, membership of the two countries in a regional economic union, a common official language, a common border and past colonial relationships between the countries. The potential values of FDI inflows are calculated using the coefficients of the regressors from the econometric model. We find that the Russian economy performs very well in attracting FDI: actual FDI inflows exceed potential values by 1.72 times. Large developed countries (France, Germany, the UK, Italy) overinvest in the Russian economy, while smaller and less developed countries (the Czech Republic, Belarus, Denmark, Ukraine) underinvest in Russia. Countries of Southeast Asia (China, South Korea, Japan) also underinvest in the Russian economy.
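The over/under-investment classification follows mechanically once the gravity equation is fitted: potential FDI is the exponentiated linear predictor, and a pair over-invests when actual flow exceeds it. A minimal sketch with entirely hypothetical coefficients and country pairs (the paper's specification has many more regressors):

```python
import math

# Hypothetical fitted gravity coefficients (log-link / PPML style):
# log E[FDI] = b0 + b1*log(gdp_origin) + b2*log(gdp_dest) + b3*log(distance)
b0, b1, b2, b3 = -8.0, 0.8, 0.7, -0.6

def potential_fdi(gdp_o, gdp_d, dist):
    """Potential bilateral flow implied by the fitted gravity equation."""
    return math.exp(b0 + b1 * math.log(gdp_o) + b2 * math.log(gdp_d)
                    + b3 * math.log(dist))

# Toy bilateral pairs: (origin GDP, destination GDP, distance km, actual FDI)
pairs = [(3.8e12, 1.8e12, 2100.0, 5.0e9),
         (2.9e12, 1.8e12, 2800.0, 1.2e9)]

for gdp_o, gdp_d, dist, actual in pairs:
    pot = potential_fdi(gdp_o, gdp_d, dist)
    print("over-invests" if actual > pot else "under-invests")
```

The country-level ratio of actual to potential inflow (1.72 for Russia in the paper) is just the sum of actual flows divided by the sum of these fitted potentials.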
Observation- and model-based estimates of particulate dry nitrogen deposition to the oceans
Directory of Open Access Journals (Sweden)
A. R. Baker
2017-07-01
…expected to be more robust than TM4, while TM4 gives access to speciated parameters (NO3− and NH4+) that are more relevant to the observed parameters and which are not available in ACCMIP. Dry deposition fluxes (CalDep) were calculated from the observed concentrations using estimates of dry deposition velocities. Model–observation ratios (RA,n), weighted by grid-cell area and number of observations, were used to assess the performance of the models. Comparison in the three study regions suggests that TM4 overestimates NO3− concentrations (RA,n = 1.4–2.9) and underestimates NH4+ concentrations (RA,n = 0.5–0.7), with spatial distributions in the tropical Atlantic and northern Indian Ocean not being reproduced by the model. In the case of NH4+ in the Indian Ocean, this discrepancy was probably due to seasonal biases in the sampling. Similar patterns were observed in the various comparisons of CalDep to ModDep (RA,n = 0.6–2.6 for NO3−, 0.6–3.1 for NH4+). Values of RA,n for NHx CalDep–ModDep comparisons were approximately double the corresponding values for NH4+ CalDep–ModDep comparisons, due to the significant fraction of gas-phase NH3 deposition incorporated in the TM4 and ACCMIP NHx model products. All of the comparisons suffered from the scarcity of observational data and the large uncertainty in the dry deposition velocities used to derive deposition fluxes from concentrations. These uncertainties have been a major limitation on estimates of the flux of material to the oceans for several decades. Recommendations are made for improvements in N deposition estimation through changes in observations, modelling and model–observation comparison procedures. Validation of modelled dry deposition requires effective comparisons to observable aerosol-phase species' concentrations, and this cannot be achieved if model products only report dry deposition flux over the ocean.
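A weighted model–observation ratio of the RA,n kind can be sketched as follows (a hedged illustration: whether the published RA,n is a weighted mean of cell ratios or a ratio of weighted means may differ from this; the cell values are invented):

```python
def weighted_ratio(cells):
    """Weighted mean of per-cell model/observation ratios.
    cells: list of (model_value, observed_value, area, n_obs);
    each cell's weight is its area times its observation count."""
    w_sum = sum(a * n for _, _, a, n in cells)
    return sum((m / o) * a * n for m, o, a, n in cells) / w_sum

# Hypothetical grid cells: model conc., observed conc., relative area, obs count
cells = [(2.8, 2.0, 1.0, 10),
         (1.0, 2.0, 2.0, 5)]
print(round(weighted_ratio(cells), 2))  # → 0.95
```

Values above 1 flag model overestimation (as for TM4 NO3−) and values below 1 underestimation (as for NH4+), with well-sampled, large cells dominating the score.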
Raju, Subramanian; Saibaba, Saroja
2016-09-01
The enthalpy of formation Δ°Hf is an important thermodynamic quantity, which sheds significant light on the fundamental cohesive and structural characteristics of an alloy. However, being a difficult one to determine accurately through experiments, simple estimation procedures are often desirable. In the present study, a modified prescription for estimating Δ°Hf(L) of liquid transition metal alloys is outlined, based on the Macroscopic Atom Model of cohesion. This prescription relies on self-consistent estimation of liquid-specific model parameters, namely the electronegativity (ϕL) and the bonding electron density (nbL). Such unique identification is made through the use of well-established relationships connecting the surface tension, compressibility, and molar volume of a metallic liquid with the bonding charge density. The electronegativity is obtained through a consistent linear scaling procedure. The preliminary set of values for ϕL and nbL, together with other auxiliary model parameters, is subsequently optimized to obtain good numerical agreement between calculated and experimental values of Δ°Hf(L) for sixty liquid transition metal alloys. It is found that, with few exceptions, the use of liquid-specific model parameters in the Macroscopic Atom Model yields a physically consistent methodology for reliable estimation of the mixing enthalpies of liquid alloys.
Error-Rate Estimation Based on Multi-Signal Flow Graph Model and Accelerated Radiation Tests.
Directory of Open Access Journals (Sweden)
Wei He
A method for evaluating the single-event-effect soft-error vulnerability of space instruments before launch has been an active research topic in recent years. In this paper, a multi-signal flow graph model is introduced to analyze fault diagnosis and mean time to failure (MTTF) for space instruments. A model for the system functional error rate (SFER) is proposed. In addition, an experimental method and an accelerated radiation testing system for a signal processing platform based on a field programmable gate array (FPGA) are presented. Based on experimental results for different ions (O, Si, Cl, Ti) under the HI-13 Tandem Accelerator, the SFER of the signal processing platform is approximately 10⁻³ errors/(particle/cm²), while the MTTF is approximately 110.7 h.
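The link between the quoted per-fluence error rate and an MTTF is simple arithmetic once an in-orbit particle flux is assumed. A back-of-envelope sketch (the flux value below is an assumption chosen purely to illustrate the conversion; the paper derives SFER from its multi-signal flow graph model, not from this shortcut):

```python
sfer = 1e-3          # system functional errors per particle per cm^2 (from text)
flux = 2.51e-3       # particles per cm^2 per second (assumed effective flux)

error_rate = sfer * flux               # errors per second
mttf_hours = 1.0 / error_rate / 3600.0
print(round(mttf_hours, 1))            # → 110.7
```

Halving the assumed flux doubles the MTTF, so the environment model matters as much as the measured cross-section.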
A Transmission-Cost-Based Model to Estimate the Amount of Market-Integrable Wind Resources
DEFF Research Database (Denmark)
Morales González, Juan Miguel; Pinson, Pierre; Madsen, Henrik
2012-01-01
In the pursuit of the large-scale integration of wind power production, it is imperative to evaluate plausible frictions among the stochastic nature of wind generation, electricity markets, and the investments in transmission required to accommodate larger amounts of wind. If wind producers are made to share the expenses in transmission derived from their integration, they may see the doors of electricity markets closed for not being competitive enough. This paper presents a model to decide the amount of wind resources that are economically exploitable at a given location from a transmission-cost perspective. This model accounts for the uncertain character of wind by using a modeling framework based on stochastic optimization, simulates market barriers by means of a bi-level structure, and considers the financial risk of investments in transmission through the conditional value-at-risk. The major…
International Nuclear Information System (INIS)
Kim, Kyong Ju; Yun, Won Gun; Cho, Namho; Ha, Jikwang
2017-01-01
The late rise in global concern for environmental issues such as global warming and air pollution is accentuating the need for environmental assessments in the construction industry. Promptly evaluating the environmental loads of the various design alternatives during the early stages of a construction project and adopting the most environmentally sustainable candidate is therefore of great importance. Yet, research on the early evaluation of a construction project's environmental load in order to aid the decision making process is hitherto lacking. In light of this dilemma, this study proposes a model for estimating the environmental load by employing only the most basic information accessible during the early design phases of a project for the pre-stressed concrete (PSC) beam bridge, the most common bridge structure. Firstly, a life cycle assessment (LCA) was conducted on the data from 99 bridges by integrating the bills of quantities (BOQ) with a life cycle inventory (LCI) database. The processed data was then utilized to construct a case based reasoning (CBR) model for estimating the environmental load. The accuracy of the estimation model was then validated using five test cases; the model's mean absolute error rate (MAER) for the total environmental load was calculated as 7.09%. Such test results were shown to be superior compared to those obtained from a multiple-regression based model and a slab area base-unit analysis model. Henceforth, application of this model during the early stages of a project is expected to highly complement environmentally friendly designs and construction by facilitating the swift evaluation of the environmental load from multiple standpoints. - Highlights: • This study is to develop the model of assessing the environmental impacts on LCA. • Bills of quantity from completed designs of PSC Beam were linked with the LCI DB. • Previous cases were used to estimate the environmental load of new case by CBR model. • CBR
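The validation metric used above, MAER, is the mean of per-case absolute error rates expressed as a percentage. A minimal sketch with hypothetical environmental-load values (the paper's five test cases gave 7.09%; the numbers below are invented for illustration):

```python
def maer(actual, predicted):
    """Mean absolute error rate, in percent."""
    return 100.0 * sum(abs(a - p) / a
                       for a, p in zip(actual, predicted)) / len(actual)

# Hypothetical total environmental loads of five validation bridges
actual    = [120.0, 95.0, 150.0, 80.0, 110.0]
predicted = [128.0, 90.0, 141.0, 86.0, 104.0]
print(round(maer(actual, predicted), 2))  # → 6.18
```

Because each case is normalized by its own actual value, small and large bridges contribute equally to the score, which suits an early-design screening model.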
Kerboua, Kaouther; Hamdaoui, Oualid
2018-01-01
Based on two different assumptions regarding the equation of state of the gases within an acoustic cavitation bubble, this paper studies the sonochemical production of hydrogen through two numerical models treating the evolution of a chemical mechanism within a single oxygen-saturated bubble during an oscillation cycle in water. The first approach is built on an ideal gas model, while the second is founded on the Van der Waals equation, and the main objective was to analyze the effect of the adopted equation of state on the ultrasonic hydrogen production retrieved by simulation under various operating conditions. The obtained results show that even though the second approach gives higher values of temperature, pressure and total free-radical production, the hydrogen yield does not follow the same trend. When comparing the results released by both models regarding hydrogen production, it was noticed that the ratio of the molar amounts of hydrogen is frequency and acoustic amplitude dependent. The use of the Van der Waals equation leads to higher quantities of hydrogen under low acoustic amplitudes and high frequencies, while the ideal-gas-law-based model gains the upper hand regarding hydrogen production at low frequencies and high acoustic amplitudes.
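Why the Van der Waals model runs hotter can be seen in a closed-form adiabatic-compression sketch: the hard-core (excluded-volume) correction makes the effective compression ratio larger. This is only an illustration of the state-equation effect, not the paper's full chemical-mechanism simulation; radii, γ and the core-volume fraction are assumed values:

```python
import math

gamma = 1.4                      # heat-capacity ratio of O2 (diatomic)
T0 = 300.0                       # K, ambient temperature
Rmax, Rmin = 10e-6, 1e-6         # m, maximum and collapse radii (assumed)
b_frac = 0.0005                  # hard-core volume as a fraction of V(Rmax)

Vmax = (4.0 / 3.0) * math.pi * Rmax ** 3
Vmin = (4.0 / 3.0) * math.pi * Rmin ** 3
b = b_frac * Vmax                # excluded volume of the gas content

T_ideal = T0 * (Vmax / Vmin) ** (gamma - 1.0)
T_vdw = T0 * ((Vmax - b) / (Vmin - b)) ** (gamma - 1.0)
print(round(T_ideal), round(T_vdw))
```

Higher peak temperature does not automatically mean more H2, as the abstract notes: the yield also depends on how the radical chemistry partitions at each frequency and amplitude.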
Hamim, Salah Uddin Ahmed
Nanoindentation involves probing a hard diamond tip into a material, while the load and the displacement experienced by the tip are recorded continuously. These load-displacement data are a direct function of the material's innate stress-strain behavior. Thus, it is theoretically possible to extract the mechanical properties of a material through nanoindentation. However, due to the various nonlinearities associated with nanoindentation, the process of interpreting load-displacement data into material properties is difficult. Although simple elastic behavior can be characterized easily, a method to characterize complicated material behavior, such as nonlinear viscoelasticity, is still lacking. In this study, a nanoindentation-based material characterization technique is developed to characterize soft materials exhibiting nonlinear viscoelasticity. The nanoindentation experiment was modeled in finite element analysis software (ABAQUS), where nonlinear viscoelastic behavior was incorporated using a user-defined subroutine (UMAT). The model parameters were calibrated using a process called inverse analysis. In this study, a surrogate-model-based approach was used for the inverse analysis. The different factors affecting the surrogate model performance are analyzed in order to optimize the performance with respect to computational cost.
International Nuclear Information System (INIS)
Woods, T.
1991-02-01
The Hydrocarbon Supply Model is used to develop long-term trends in Lower-48 gas production and costs. The model utilizes historical find-rate patterns to predict the discovery rate and size distribution of future oil and gas field discoveries. The report documents the methodologies used to quantify historical oil and gas field find-rates and to project those discovery patterns for future drilling. It also explains the theoretical foundations of the find-rate approach. The new-field and reserve-growth resource base is documented and compared to other published estimates. The report has six sections. Section 1 provides background information and an overview of the model. Sections 2, 3, and 4 describe the theoretical foundations of the model, the databases, and the specific techniques used. Section 5 presents the new-field resource base by region and depth. Section 6 documents the reserve growth model components.
Enhancement of regional wet deposition estimates based on modeled precipitation inputs
James A. Lynch; Jeffery W. Grimm; Edward S. Corbett
1996-01-01
Applications of a variety of two-dimensional interpolation algorithms to precipitation chemistry data gathered at scattered monitoring sites, for the purpose of estimating precipitation-borne ionic inputs for specific points or regions, have failed to produce accurate estimates. The accuracy of these estimates is particularly poor in areas of high topographic relief.…
Bhattarai, N.; Jain, M.; Mallick, K.
2017-12-01
A remote-sensing-based multi-model evapotranspiration (ET) estimation framework is developed using MODIS and NASA MERRA-2 reanalysis data for data-poor regions, and we apply this framework to the Indian subcontinent. The framework eliminates the need for in-situ calibration data, hence estimates ET completely from space, and is replicable across all regions of the world. Currently, six surface energy balance models, ranging from the widely used SEBAL, METRIC, and SEBS to the moderately used S-SEBI and SSEBop and the relatively new STIC1.2, are being integrated and validated. Preliminary analysis suggests good predictability of the models for estimating near-real-time ET under clear-sky conditions for various crop types in India, with coefficients of determination of 0.32-0.55 and percent bias of -15% to 28% when compared against Bowen-ratio-based ET estimates. The results are particularly encouraging given that no direct ground input data were used in the analysis. The framework is currently being extended to estimate seasonal ET across the Indian subcontinent using a model-ensemble approach that uses all available MODIS 8-day datasets since 2000. These ET products are being used to monitor the inter-seasonal and inter-annual dynamics of ET and crop water use across different crop and irrigation practices in India. In particular, the potential impacts of changes in precipitation patterns and extreme heat (e.g., extreme degree days) on seasonal crop water consumption are being studied. Our ET products are able to locate the water-stress hotspots that need to be targeted with water-saving interventions to maintain agricultural production in the face of climate variability and change.
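Percent bias, one of the two skill scores quoted above, is the net over- or under-estimation relative to the total observed flux. A minimal sketch with hypothetical 8-day ET totals (the values are illustrative, not the study's data):

```python
def pbias(observed, modeled):
    """Percent bias: positive means the model overestimates on net."""
    return 100.0 * sum(m - o for o, m in zip(observed, modeled)) / sum(observed)

# Hypothetical 8-day ET totals (mm): Bowen-ratio observations vs. model
obs = [28.0, 35.0, 31.0, 22.0]
mod = [30.0, 33.0, 36.0, 25.0]
print(round(pbias(obs, mod), 2))  # → 6.9
```

Unlike R², percent bias lets positive and negative errors cancel, which is why the two metrics are reported together: a model can be nearly unbiased yet poorly correlated, and vice versa.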
Energy Technology Data Exchange (ETDEWEB)
Karanam, Aditya; Sharma, Pavan K.; Ganju, Sunil; Singh, Ram Kumar [Bhabha Atomic Research Centre (BARC), Mumbai (India). Reactor Safety Div.
2016-12-15
During postulated accident sequences in nuclear reactors, hydrogen may get released from the core and form a flammable mixture in the surrounding containment structure. Ignition of such mixtures and the subsequent pressure rise are an imminent threat for safe and sustainable operation of nuclear reactors. Methods for evaluating post ignition characteristics are important for determining the design safety margins in such scenarios. This study presents two thermo-chemical models for determining the post ignition state. The first model is based on internal energy balance while the second model uses the concept of element potentials to minimize the free energy of the system with internal energy imposed as a constraint. Predictions from both the models have been compared against published data over a wide range of mixture compositions. Important differences in the regions close to flammability limits and for stoichiometric mixtures have been identified and explained. The equilibrium model has been validated for varied temperatures and pressures representative of initial conditions that may be present in the containment during accidents. Special emphasis has been given to the understanding of the role of dissociation and its effect on equilibrium pressure, temperature and species concentrations.
Poulter, B.; Ciais, P.; Joetzjer, E.; Maignan, F.; Luyssaert, S.; Barichivich, J.
2015-12-01
Accurately estimating forest biomass and forest carbon dynamics requires new integrated remote sensing, forest inventory, and carbon cycle modeling approaches. Presently, there is an increasing and urgent need to reduce forest biomass uncertainty in order to meet the requirements of carbon mitigation treaties, such as Reducing Emissions from Deforestation and forest Degradation (REDD+). Here we describe a new parameterization and assimilation methodology used to estimate tropical forest biomass using the ORCHIDEE-CAN dynamic global vegetation model. ORCHIDEE-CAN simulates carbon uptake and allocation to individual trees using a mechanistic representation of photosynthesis, respiration and other first-order processes. The model is first parameterized using forest inventory data to constrain background mortality rates, i.e., self-thinning, and productivity. Satellite remote sensing data for forest structure, i.e., canopy height, is used to constrain simulated forest stand conditions using a look-up table approach to match canopy height distributions. The resulting forest biomass estimates are provided for spatial grids that match REDD+ project boundaries and aim to provide carbon estimates for the criteria described in the IPCC Good Practice Guidelines Tier 3 category. With the increasing availability of forest structure variables derived from high-resolution LIDAR, RADAR, and optical imagery, new methodologies and applications with process-based carbon cycle models are becoming more readily available to inform land management.
Mesoscopic modeling and parameter estimation of a lithium-ion battery based on LiFePO4/graphite
Jokar, Ali; Désilets, Martin; Lacroix, Marcel; Zaghib, Karim
2018-03-01
A novel numerical model for simulating the behavior of lithium-ion batteries based on LiFePO4 (LFP)/graphite is presented. The model is based on the modified Single Particle Model (SPM) coupled to a mesoscopic approach for the LFP electrode. The model comprises one representative spherical particle as the graphite electrode, and N LFP units as the positive electrode. All the SPM equations are retained to model the negative electrode performance. The mesoscopic model rests on non-equilibrium thermodynamic conditions and uses a non-monotonic open-circuit potential for each unit. A parameter estimation study is also carried out to identify all the parameters needed for the model. The unknown parameters are the solid diffusion coefficient of the negative electrode (Ds,n), the reaction-rate constant of the negative electrode (Kn), the negative and positive electrode porosities (εn and εp), the initial state of charge of the negative electrode (SOCn,0), the initial partial composition of the LFP units (yk,0), the minimum and maximum resistance of the LFP units (Rmin and Rmax), and the solution resistance (Rcell). The results show that the mesoscopic model can successfully simulate the electrochemical behavior of lithium-ion batteries at low and high charge/discharge rates. The model also adequately describes the lithiation/delithiation of the LFP particles; however, it is computationally expensive compared to macro-based models.
International Nuclear Information System (INIS)
Wei, Zhongbao; Lim, Tuti Mariana; Skyllas-Kazacos, Maria; Wai, Nyunt; Tseng, King Jet
2016-01-01
Highlights: • Battery model parameter and SOC co-estimation is investigated. • The model parameters and OCV are decoupled and estimated independently. • Multiple timescales are adopted to improve precision and stability. • SOC is estimated online without using the open-circuit cell. • The method is robust to aging levels, flow rates, and battery chemistries. - Abstract: A key function of a battery management system (BMS) is to provide accurate information on the state of charge (SOC) in real time, and this depends directly on precise model parameterization. In this paper, a novel multi-timescale estimator is proposed to estimate the model parameters and SOC of a vanadium redox flow battery (VRB) in real time. The model parameters and OCV are decoupled and estimated independently, effectively avoiding the possibility of cross interference between them. The analysis of model sensitivity, stability, and precision suggests the necessity of adopting a different timescale for each estimator. Experiments are conducted to assess the performance of the proposed method. Results reveal that the model parameters are accurately adapted online, so that periodic calibration can be avoided. The online-estimated terminal voltage and SOC are both benchmarked against reference values. The proposed multi-timescale estimator has the merits of fast convergence, high precision, and good robustness against initialization uncertainty, aging states, flow rates, and battery chemistries.
An, Guohua; Widness, John A; Mock, Donald M; Veng-Pedersen, Peter
2016-09-01
Direct measurement of red blood cell (RBC) survival in humans has improved from the original accurate but limited differential agglutination technique to the current reliable, safe, and accurate biotin method. Despite this, all of these methods are time consuming and require blood sampling over several months to determine the RBC lifespan. For situations in which RBC survival information must be obtained quickly, these methods are not suitable. With the exception of adults and infants, RBC survival has not been extensively investigated in other age groups. To address this need, we developed a novel, physiology-based mathematical model that quickly estimates RBC lifespan in healthy individuals at any age. The model is based on the assumption that the total number of RBC recirculations during the lifespan of each RBC (denoted by Nmax) is relatively constant for all age groups. The model was initially validated using the data from our prior infant and adult biotin-labeled red blood cell studies and then extended to the other age groups. The model generated the following estimated RBC lifespans in 2-year-old, 5-year-old, 8-year-old, and 10-year-old children: 62, 74, 82, and 86 days, respectively. We speculate that this model has useful clinical applications. For example, HbA1c testing is not reliable in identifying children with diabetes because HbA1c is directly affected by RBC lifespan. Because our model can estimate RBC lifespan in children at any age, corrections to HbA1c values based on the model-generated RBC lifespan could improve diabetes diagnosis as well as therapy in children.
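The constant-Nmax assumption can be sketched as follows: if each RBC survives a roughly fixed number of recirculations, lifespan scales with the time per circulation, i.e., blood volume divided by cardiac output. All physiological numbers below are rough textbook-style assumptions for illustration, not the paper's calibrated values:

```python
def lifespan_days(blood_volume_ml, cardiac_output_ml_min, n_max):
    """RBC lifespan implied by a fixed total recirculation count n_max."""
    minutes_per_circulation = blood_volume_ml / cardiac_output_ml_min
    return n_max * minutes_per_circulation / (60.0 * 24.0)

# Calibrate n_max so an assumed adult (5 L volume, 5 L/min output,
# i.e., 1 min per circulation) has a 120-day lifespan.
n_max = 120.0 * 24.0 * 60.0 / (5000.0 / 5000.0)

# Hypothetical child values: smaller volume, relatively high output.
child = lifespan_days(1600.0, 3000.0, n_max)
print(round(child, 1))  # → 64.0
```

Because children circulate their (smaller) blood volume faster, the same Nmax is exhausted sooner, which is the qualitative reason the model predicts shorter lifespans at younger ages.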
Energy Technology Data Exchange (ETDEWEB)
Jassar, S.; Zhao, L. [Department of Electrical and Computer Engineering, Ryerson University, 350 Victoria Street, Toronto, ON (Canada); Liao, Z. [Department of Architectural Science, Ryerson University (Canada)
2009-08-15
Heating systems are conventionally controlled by open-loop control systems because of the absence of practical methods for estimating the average air temperature in the built environment. An inferential sensor model, based on adaptive neuro-fuzzy inference system (ANFIS) modeling, for estimating the average air temperature in multi-zone space heating systems is developed. This modeling technique combines the expert knowledge of fuzzy inference systems (FISs) with the learning capability of artificial neural networks (ANNs). A hybrid learning algorithm, which combines the least-squares method and the back-propagation algorithm, is used to identify the parameters of the network. This paper describes an adaptive-network-based inferential sensor that can be used to design closed-loop control for space heating systems. The research aims to improve the overall performance of heating systems in terms of energy efficiency and thermal comfort. The average air temperature results estimated using the developed model are in strong agreement with the experimental results.
Minh, Nghia Pham; Zou, Bin; Cai, Hongjun; Wang, Chengyi
2014-01-01
The estimation of forest parameters over mountain forest areas using polarimetric interferometric synthetic aperture radar (PolInSAR) images is of great interest in remote sensing applications. For mountain forest areas, scattering mechanisms are strongly affected by ground topography variations. Most previous studies modeling the microwave backscattering signatures of forest areas have been carried out over relatively flat terrain. Therefore, a new algorithm for forest height estimation over mountain forest areas using the general model-based decomposition (GMBD) for PolInSAR images is proposed. This algorithm enables the retrieval not only of the forest parameters, but also of the magnitude associated with each mechanism. In addition, general double- and single-bounce scattering models are proposed to fit the cross-polarization and off-diagonal terms by separating their independent orientation angles, which remained unachieved in previous model-based decompositions. The efficiency of the proposed approach is demonstrated with simulated data from PolSARProSim software and ALOS-PALSAR spaceborne PolInSAR datasets over the Kalimantan area, Indonesia. Experimental results indicate that forest height can be effectively estimated by GMBD.
Li, Jiahao; Klee Barillas, Joaquin; Guenther, Clemens; Danzer, Michael A.
2014-02-01
Battery state monitoring is one of the key techniques in battery management systems, e.g. in electric vehicles. An accurate estimation can help to improve system performance and to prolong the battery's remaining useful life. The main challenges in state estimation for LiFePO4 batteries are the flat characteristic of the open-circuit voltage over the battery state of charge (SOC) and the existence of hysteresis phenomena. Classical estimation approaches like Kalman filtering show limitations in handling nonlinear and non-Gaussian error distribution problems. In addition, uncertainties in the battery model parameters must be taken into account to describe battery degradation. In this paper, a novel model-based method combining a Sequential Monte Carlo filter with adaptive control to determine the cell SOC and its electric impedance is presented. The applicability of this dual estimator is verified using measurement data acquired from a commercial LiFePO4 cell. Due to better handling of the hysteresis problem, the results show the benefits of the proposed method over estimation with an extended Kalman filter.
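The sequential Monte Carlo (particle) filtering idea behind such a dual estimator can be illustrated with a minimal bootstrap particle filter for SOC tracking. The OCV curve, noise levels, and discharge profile below are hypothetical stand-ins, not the cell model or adaptive dual estimator of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def ocv(soc):
    # Hypothetical open-circuit-voltage curve: mostly flat, as for LiFePO4
    return 3.2 + 0.1 * soc + 0.05 * np.tanh(8.0 * (soc - 0.5))

# Simulated constant-current discharge, measured with voltage noise
T, n = 50, 500
true_soc = 0.9 - 0.005 * np.arange(T)
volts = ocv(true_soc) + rng.normal(0.0, 0.005, T)

# Bootstrap particle filter: propagate, weight by likelihood, resample
particles = rng.uniform(0.0, 1.0, n)
soc_est = []
for v in volts:
    particles = np.clip(particles - 0.005 + rng.normal(0.0, 0.002, n), 0.0, 1.0)
    w = np.exp(-0.5 * ((v - ocv(particles)) / 0.005) ** 2)  # Gaussian likelihood
    w /= w.sum()
    soc_est.append(float(w @ particles))                    # posterior mean
    particles = rng.choice(particles, size=n, p=w)          # resample
```

Because the filter carries a full particle cloud rather than a Gaussian, it can represent the multimodal posteriors that arise from the flat OCV region, which is where Kalman-type filters struggle.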
Luque-Fernandez, Miguel Angel; Belot, Aurélien; Quaresma, Manuela; Maringe, Camille; Coleman, Michel P; Rachet, Bernard
2016-10-01
In population-based cancer research, piecewise exponential regression models are used to derive adjusted estimates of excess mortality due to cancer using the Poisson generalized linear modelling framework. However, the assumption that the conditional mean and variance of the rate parameter given the set of covariates x_i are equal is strong and may fail to account for overdispersion given the variability of the rate parameter (the variance exceeds the mean). Using an empirical example, we aimed to describe simple methods to test and correct for overdispersion. We used a regression-based score test for overdispersion under the relative survival framework and proposed different approaches to correct for overdispersion, including quasi-likelihood, robust standard error estimation, negative binomial regression and flexible piecewise modelling. All piecewise exponential regression models showed the presence of significant inherent overdispersion. Flexible piecewise regression modelling, with either a quasi-likelihood or robust standard errors, was the best approach, as it deals with both overdispersion due to model misspecification and true or inherent overdispersion.
Machado, M. R.; Adhikari, S.; Dos Santos, J. M. C.; Arruda, J. R. F.
2018-03-01
Structural parameter estimation is affected not only by measurement noise but also by unknown uncertainties which are present in the system. Deterministic structural model updating methods minimise the difference between experimentally measured data and computational prediction. Sensitivity-based methods are very efficient in solving structural model updating problems. Material and geometrical parameters of the structure such as Poisson's ratio, Young's modulus, mass density, modal damping, etc. are usually considered deterministic and homogeneous. In this paper, the distributed and non-homogeneous characteristics of these parameters are considered in the model updating. The parameters are taken as spatially correlated random fields and are expanded in a spectral Karhunen-Loève (KL) decomposition. Using the KL expansion, the spectral dynamic stiffness matrix of the beam is expanded as a series in terms of discretized parameters, which can be estimated using sensitivity-based model updating techniques. Numerical and experimental tests involving a beam with distributed bending rigidity and mass density are used to verify the proposed method. This extension of standard model updating procedures can enhance the dynamic description of structural dynamic models.
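The discrete Karhunen-Loève expansion underlying this kind of random-field parameterization can be sketched directly from the covariance matrix of the field. The exponential covariance, correlation length, and beam discretization below are illustrative assumptions, not the paper's spectral-element setup:

```python
import numpy as np

rng = np.random.default_rng(1)

# 1-D random field (e.g. bending rigidity along a beam) with an assumed
# exponential covariance; all values are illustrative
n, corr_len, sigma = 200, 0.3, 0.1
x = np.linspace(0.0, 1.0, n)
C = sigma**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)

# Discrete KL modes = eigenpairs of the covariance matrix
vals, vecs = np.linalg.eigh(C)
order = np.argsort(vals)[::-1]
vals, vecs = vals[order], vecs[:, order]

# Truncate at 95% of the total variance
energy = np.cumsum(vals) / np.sum(vals)
m = int(np.searchsorted(energy, 0.95)) + 1

# One realization: field = mean + sum_k sqrt(lambda_k) * xi_k * phi_k
xi = rng.standard_normal(m)
field = 1.0 + vecs[:, :m] @ (np.sqrt(vals[:m]) * xi)
```

The point of the truncation is that a spatially correlated field over hundreds of grid points is reduced to a handful of uncorrelated coefficients xi_k, which are the quantities a sensitivity-based updating scheme can then estimate.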
International Nuclear Information System (INIS)
Chagas Moura, Márcio das; Azevedo, Rafael Valença; Droguett, Enrique López; Chaves, Leandro Rego; Lins, Isis Didier
2016-01-01
Occupational accidents pose several negative consequences for employees, employers, the environment and people surrounding the locale where the accident takes place. Some types of accidents correspond to low frequency-high consequence (long sick leave) events, and classical statistical approaches are ineffective in these cases because the available dataset is generally sparse and contains censored recordings. In this context, we propose a Bayesian population variability method for estimating the distributions of the rates of accident and recovery. Given these distributions, a Markov-based model is used to estimate the uncertainty over the expected number of accidents and the work time lost. Thus, the use of Bayesian analysis along with the Markov approach aims at investigating future trends regarding occupational accidents in a workplace as well as enabling better management of the labor force and prevention efforts. An application example is presented in order to validate the proposed approach; this case uses data gathered from a hydropower company in Brazil. - Highlights: • This paper proposes a Bayesian method to estimate rates of accident and recovery. • The model requires simple data likely to be available in the company database. • The results show the proposed model is not too sensitive to the prior estimates.
Wei, Jingwen; Dong, Guangzhong; Chen, Zonghai
2017-10-01
With the rapid development of battery-powered electric vehicles, the lithium-ion battery plays a critical role in the reliability of the vehicle system. In order to provide timely management and protection for battery systems, it is necessary to develop a reliable battery model and accurate battery parameter estimation to describe battery dynamic behaviors. Therefore, this paper focuses on an on-board adaptive model for state-of-charge (SOC) estimation of lithium-ion batteries. Firstly, a first-order equivalent-circuit battery model is employed to describe the battery's dynamic characteristics. Then, the recursive least squares algorithm and an off-line identification method are used to provide good initial values of the model parameters, to ensure filter stability and reduce the convergence time. Thirdly, an extended Kalman filter (EKF) is applied to estimate the battery SOC and model parameters on-line. Considering that the EKF is essentially a first-order Taylor approximation of the battery model, which contains inevitable model errors, a proportional-integral-based error adjustment technique is employed to improve the performance of the EKF method and correct the model parameters. Finally, experimental results on lithium-ion batteries indicate that the proposed EKF with proportional-integral-based error adjustment can provide robust and accurate battery modeling and on-line parameter estimation.
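The recursive least squares step used to initialize model parameters can be sketched for a generic first-order discrete battery model. The model structure, parameter values, and noise level below are hypothetical, not identified from the paper's cells:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical first-order discrete model: v_k = a*v_{k-1} + b*i_k + c
true_theta = np.array([0.95, -0.02, 0.18])

theta = np.zeros(3)
P = np.eye(3) * 1e3        # large initial covariance: uninformative prior
lam = 0.999                # forgetting factor

v_prev = 3.6
for _ in range(2000):
    i_k = rng.uniform(-2.0, 2.0)                   # excitation current
    phi = np.array([v_prev, i_k, 1.0])             # regressor vector
    v_k = true_theta @ phi + rng.normal(0.0, 1e-4) # noisy voltage measurement
    # RLS update: gain, parameter correction, covariance deflation
    K = P @ phi / (lam + phi @ P @ phi)
    theta = theta + K * (v_k - phi @ theta)
    P = (P - np.outer(K, phi) @ P) / lam
    v_prev = v_k

err = np.max(np.abs(theta - true_theta))
```

Seeding a Kalman-type filter with these identified values, rather than arbitrary guesses, is what shortens the convergence time mentioned in the abstract.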
Tripathi, Vinay S.; Brandt, Adam R.
2017-01-01
This paper estimates changes in the energy return on investment (EROI) for five large petroleum fields over time using the Oil Production Greenhouse Gas Emissions Estimator (OPGEE). The modeled fields include Cantarell (Mexico), Forties (U.K.), Midway-Sunset (U.S.), Prudhoe Bay (U.S.), and Wilmington (U.S.). Data on field properties and production/processing parameters were obtained from a combination of government and technical literature sources. Key areas of uncertainty include details of the oil and gas surface processing schemes. We aim to explore how long-term trends in depletion at major petroleum fields change the effective energetic productivity of petroleum extraction. Four EROI ratios are estimated for each field as follows: the net energy ratio (NER) and external energy ratio (EER) are calculated, each using two measures of energy outputs, (1) oil-only and (2) all energy outputs. In all cases, engineering estimates of inputs are used rather than expenditure-based estimates (including off-site indirect energy use and embodied energy). All fields display significant declines in NER over the modeling period, driven by a combination of (1) reduced petroleum production and (2) increased energy expenditures on recovery methods such as the injection of water, steam, or gas. The fields studied had NER reductions ranging from 46% to 88% over the modeling periods (accounting for all energy outputs). The reasons for declines in EROI differ by field. Midway-Sunset experienced a 5-fold increase in steam injected per barrel of oil produced. In contrast, Prudhoe Bay has experienced nearly a 30-fold increase in the amount of gas processed and reinjected per unit of oil produced. Finally, EER estimates are subject to greater variability and uncertainty due to the relatively small magnitude of external energy investments in most cases. PMID:28178318
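The NER/EER bookkeeping reduces to simple energy ratios once inputs and outputs are tallied. A minimal sketch, with hypothetical numbers rather than OPGEE results for the studied fields:

```python
# NER and EER for a hypothetical field at two points in its life.
# All numbers are illustrative, not results for the fields in the paper.
def ner(output_mj, onsite_input_mj, external_input_mj):
    # Net energy ratio: outputs over all energy inputs
    return output_mj / (onsite_input_mj + external_input_mj)

def eer(output_mj, external_input_mj):
    # External energy ratio: outputs over purchased (off-site) inputs only
    return output_mj / external_input_mj

# Early vs. late field life: production falls while injection energy rises
early = ner(1000.0, 40.0, 10.0)
late = ner(400.0, 150.0, 50.0)
ner_decline = 1.0 - late / early   # fraction of NER lost over field life
```

Because external inputs are small at most fields, the EER denominator is small and noisy, which is why EER estimates show greater variability than NER in the abstract.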
El Gharamti, Mohamad
2015-11-26
The ensemble Kalman filter (EnKF) recursively integrates field data into simulation models to obtain a better characterization of the model's state and parameters. These are generally estimated following a joint state-parameter augmentation strategy. In this study, we introduce a new smoothing-based joint EnKF scheme, in which a one-step-ahead smoothing of the state is performed before the parameters are updated. Numerical experiments are performed with a two-dimensional synthetic subsurface contaminant transport model. The improved performance of the proposed joint EnKF scheme compared to the standard joint EnKF compensates for the modest increase in computational cost.
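The EnKF analysis step at the core of such schemes can be sketched for a scalar, directly observed state using the stochastic (perturbed-observation) update. This is the generic update, not the paper's one-step-ahead smoothing variant:

```python
import numpy as np

rng = np.random.default_rng(3)

# Forecast ensemble for a scalar state: prior ~ N(0, 1)
N = 1000
ens = rng.normal(0.0, 1.0, N)

# One observation of the state with error variance r
obs, r = 1.0, 0.25

# Kalman gain computed from the ensemble forecast variance
pf = np.var(ens, ddof=1)
K = pf / (pf + r)

# Stochastic (perturbed-observation) analysis update
analysis = ens + K * (obs + rng.normal(0.0, np.sqrt(r), N) - ens)
post_mean = analysis.mean()
# With a prior variance of exactly 1, the posterior mean would be 0.8
```

A joint state-parameter EnKF applies the same update to an augmented vector containing both state and parameters; the paper's contribution is to smooth the state one step ahead before that parameter update.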
He, Yujie; Zhuang, Qianlai; McGuire, David; Liu, Yaling; Chen, Min
2013-01-01
Model-data fusion is a process in which field observations are used to constrain model parameters. How observations are used to constrain parameters has a direct impact on the carbon cycle dynamics simulated by ecosystem models. In this study, we present an evaluation of several options for the use of observations in modeling regional carbon dynamics and explore the implications of those options. We calibrated the Terrestrial Ecosystem Model on a hierarchy of three vegetation classification levels for the Alaskan boreal forest: species level, plant-functional-type level (PFT level), and biome level, and we examined the differences in simulated carbon dynamics. Species-specific field-based estimates were directly used to parameterize the model for species-level simulations, while weighted averages based on species percent cover were used to generate estimates for PFT- and biome-level model parameterization. We found that calibrated key ecosystem process parameters differed substantially among species and overlapped for species that are categorized into different PFTs. Our analysis of parameter sets suggests that the PFT-level parameterizations primarily reflected the dominant species and that functional information of some species was lost from the PFT-level parameterizations. The biome-level parameterization was primarily representative of the needleleaf PFT and lost information on broadleaf species or PFT function. Our results indicate that PFT-level simulations may be representative of the performance of species-level simulations, while biome-level simulations may result in biased estimates. Improved theoretical and empirical justifications for grouping species into PFTs or biomes are needed to adequately represent the dynamics of ecosystem functioning and structure.
Random Decrement Based FRF Estimation
DEFF Research Database (Denmark)
Brincker, Rune; Asmussen, J. C.
1997-01-01
to speed and quality. The basis of the new method is the Fourier transformation of the Random Decrement functions which can be used to estimate the frequency response functions. The investigations are based on load and response measurements of a laboratory model of a 3 span bridge. By applying both methods...... that the Random Decrement technique is based on a simple controlled averaging of time segments of the load and response processes. Furthermore, the Random Decrement technique is expected to produce reliable results. The Random Decrement technique will reduce leakage, since the Fourier transformation...
A Novel Methodology to Estimate Metabolic Flux Distributions in Constraint-Based Models
Directory of Open Access Journals (Sweden)
Francesco Alessandro Massucci
2013-09-01
Full Text Available Quite generally, constraint-based metabolic flux analysis describes the space of viable flux configurations for a metabolic network as a high-dimensional polytope defined by the linear constraints that enforce the balancing of production and consumption fluxes for each chemical species in the system. In some cases, the complexity of the solution space can be reduced by performing an additional optimization, while in other cases, knowing the range of variability of fluxes over the polytope provides a sufficient characterization of the allowed configurations. There are cases, however, in which the thorough information encoded in the individual distributions of viable fluxes over the polytope is required. Obtaining such distributions is known to be a highly challenging computational task when the dimensionality of the polytope is sufficiently large, and the problem of developing cost-effective ad hoc algorithms has recently seen a major surge of interest. Here, we propose a method that allows us to perform the required computation heuristically in a time scaling linearly with the number of reactions in the network, overcoming some limitations of similar techniques employed in recent years. As a case study, we apply it to the analysis of the human red blood cell metabolic network, whose solution space can be sampled by different exact techniques, like Hit-and-Run Monte Carlo (scaling roughly like the third power of the system size). Remarkably accurate estimates for the true distributions of viable reaction fluxes are obtained, suggesting that, although further improvements are desirable, our method enhances our ability to analyze the space of allowed configurations for large biochemical reaction networks.
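The Hit-and-Run sampler used here as the exact reference technique can be sketched on a toy flux polytope. The 3-reaction network and flux bounds below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy network: one balanced metabolite, so S v = 0 means v1 = v2 + v3,
# with box bounds 0 <= v <= 1 on each flux (illustrative values)
S = np.array([[1.0, -1.0, -1.0]])
lb, ub = np.zeros(3), np.ones(3)

# Null-space basis of S: directions that preserve mass balance
_, _, vt = np.linalg.svd(S)
B = vt[1:].T

v = np.array([0.5, 0.25, 0.25])  # a strictly feasible starting flux
samples = []
for _ in range(2000):
    # Random direction inside the null space
    d = B @ rng.standard_normal(B.shape[1])
    d /= np.linalg.norm(d)
    # Chord of the polytope along d: lb <= v + t*d <= ub
    with np.errstate(divide="ignore", invalid="ignore"):
        t1, t2 = (lb - v) / d, (ub - v) / d
    lo = np.max(np.minimum(t1, t2))
    hi = np.min(np.maximum(t1, t2))
    # Jump to a uniform point on the chord
    v = v + rng.uniform(lo, hi) * d
    samples.append(v.copy())

samples = np.array(samples)
```

Each iterate stays exactly on the mass-balance hyperplane and inside the box, so the chain explores the full flux polytope; the cubic scaling of such exact samplers is what motivates the paper's linear-time heuristic.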
International Nuclear Information System (INIS)
Belyazid, Salim; Kurz, Dani; Braun, Sabine; Sverdrup, Harald; Rihm, Beat; Hettelingh, Jean-Paul
2011-01-01
A dynamic model of forest ecosystems was used to investigate the effects of climate change, atmospheric deposition and harvest intensity on 48 forest sites in Sweden (n = 16) and Switzerland (n = 32). The model was used to investigate the feasibility of deriving critical loads for nitrogen (N) deposition based on changes in plant community composition. The simulations show that climate and atmospheric deposition have comparably important effects on N mobilization in the soil, as climate triggers the release of organically bound nitrogen stored in the soil during the elevated deposition period. Climate has the most important effect on plant community composition, underlining the fact that this cannot be ignored in future simulations of vegetation dynamics. Harvest intensity has comparatively little effect on the plant community in the long term, while it may be detrimental in the short term following cutting. This study shows: that critical loads of N deposition can be estimated using the plant community as an indicator; that future climatic changes must be taken into account; and that the definition of the reference deposition is critical for the outcome of this estimate. - Research highlights: → Plant community changes can be used to estimate critical loads of nitrogen. → Climate change is decisive for future changes of geochemistry and plant communities. → Climate change cannot be ignored in estimates of critical loads. → The model ForSAFE-Veg was successfully used to set critical loads of nitrogen. - Plant community composition can be used in dynamic modelling to estimate critical loads of nitrogen deposition, provided the appropriate reference deposition, future climate and target plant communities are defined.
A model-based initial guess for estimating parameters in systems of ordinary differential equations.
Dattner, Itai
2015-12-01
The inverse problem of parameter estimation from noisy observations is a major challenge in statistical inference for dynamical systems. Parameter estimation is usually carried out by optimizing some criterion function over the parameter space. Unless the optimization process starts with a good initial guess, the estimation may take an unreasonable amount of time, and may converge to local solutions, if at all. In this article, we introduce a novel technique for generating good initial guesses that can be used by any estimation method. We focus on the fairly general and often applied class of systems linear in the parameters. The new methodology bypasses numerical integration and can handle partially observed systems. We illustrate the performance of the method using simulations and apply it to real data. © 2015, The International Biometric Society.
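For systems linear in the parameters, an integration-free initial guess can be obtained by smoothing the observations, differentiating numerically, and solving a linear least-squares problem. A sketch on a toy one-dimensional system, illustrating the idea rather than the paper's exact estimator:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy system linear in its parameter: x'(t) = theta * x(t), theta = -0.7
theta_true = -0.7
t = np.linspace(0.0, 5.0, 201)
x_obs = np.exp(theta_true * t) + rng.normal(0.0, 0.01, t.size)

# Smooth, differentiate numerically, then solve the least-squares problem
w = 9
x_s = np.convolve(x_obs, np.ones(w) / w, mode="same")
dx = np.gradient(x_s, t)
keep = slice(w, -w)          # drop points distorted by the smoother's edges
theta_hat = (dx[keep] @ x_s[keep]) / (x_s[keep] @ x_s[keep])
```

No ODE solver is called anywhere, so the guess is cheap; it can then seed a full optimization-based estimator, which avoids the local minima and long runtimes the abstract warns about.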
Schur, Nadine; Hürlimann, Eveline; Garba, Amadou; Traoré, Mamadou S.; Ndir, Omar; Ratard, Raoult C.; Tchuem Tchuenté, Louis-Albert; Kristensen, Thomas K.; Utzinger, Jürg; Vounatsou, Penelope
2011-01-01
Background Schistosomiasis is a water-based disease that is believed to affect over 200 million people with an estimated 97% of the infections concentrated in Africa. However, these statistics are largely based on population re-adjusted data originally published by Utroska and colleagues more than 20 years ago. Hence, these estimates are outdated due to large-scale preventive chemotherapy programs, improved sanitation, water resources development and management, among other reasons. For planning, coordination, and evaluation of control activities, it is essential to possess reliable schistosomiasis prevalence maps. Methodology We analyzed survey data compiled on a newly established open-access global neglected tropical diseases database (i) to create smooth empirical prevalence maps for Schistosoma mansoni and S. haematobium for individuals aged ≤20 years in West Africa, including Cameroon, and (ii) to derive country-specific prevalence estimates. We used Bayesian geostatistical models based on environmental predictors to take into account potential clustering due to common spatially structured exposures. Prediction at unobserved locations was facilitated by joint kriging. Principal Findings Our models revealed that 50.8 million individuals aged ≤20 years in West Africa are infected with either S. mansoni, or S. haematobium, or both species concurrently. The country prevalence estimates ranged between 0.5% (The Gambia) and 37.1% (Liberia) for S. mansoni, and between 17.6% (The Gambia) and 51.6% (Sierra Leone) for S. haematobium. We observed that the combined prevalence for both schistosome species is two-fold lower in Gambia than previously reported, while we found an almost two-fold higher estimate for Liberia (58.3%) than reported before (30.0%). Our predictions are likely to overestimate overall country prevalence, since modeling was based on children and adolescents up to the age of 20 years who are at highest risk of infection. Conclusion/Significance We
Changes in Nature's Balance Sheet: Model-based Estimates of Future Worldwide Ecosystem Services
Directory of Open Access Journals (Sweden)
Joseph Alcamo
2005-12-01
Full Text Available Four quantitative scenarios are presented that describe changes in worldwide ecosystem services up to 2050-2100. A set of soft-linked global models of human demography, economic development, climate, and biospheric processes are used to quantify these scenarios. The global demand for ecosystem services substantially increases up to 2050: cereal consumption by a factor of 1.5 to 1.7, fish consumption (up to the 2020s) by a factor of 1.3 to 1.4, water withdrawals by a factor of 1.3 to 2.0, and biofuel production by a factor of 5.1 to 11.3. The ranges for these estimates reflect differences between the socio-economic assumptions of the scenarios. In all simulations, Sub-Saharan Africa continues to lag behind other parts of the world. Although the demand side of these scenarios presents an overall optimistic view of the future, the supply side is less optimistic: the risk of higher soil erosion (especially in Sub-Saharan Africa) and lower water availability (especially in the Middle East) could slow down an increase in food production. Meanwhile, increasing wastewater discharges during the same period, especially in Latin America (factor of 2 to 4) and Sub-Saharan Africa (factor of 3.6 to 5.6), could interfere with the delivery of freshwater services. Marine fisheries (despite the growth of aquaculture) may not have the ecological capacity to provide for the increased global demand for fish. Our simulations also show an intensification of present tradeoffs between ecosystem services, e.g., expansion of agricultural land (between 2000 and 2050) may be one of the main causes of a 10%-20% loss of total current grassland and forest land and the ecosystem services associated with this land (e.g., genetic resources, wood production, habitat for terrestrial biota and fauna). The scenarios also show that certain hot-spot regions may experience especially rapid changes in ecosystem services: the central part of Africa, southern Asia, and the Middle East. In general
Chu, Dezhang; Lawson, Gareth L; Wiebe, Peter H
2016-05-01
The linear inversion commonly used in fisheries and zooplankton acoustics assumes a constant inversion kernel and ignores the uncertainties associated with the shape and behavior of the scattering targets, as well as other relevant animal parameters. Here, errors of the linear inversion due to uncertainty associated with the inversion kernel are quantified. A scattering model-based nonlinear inversion method is presented that takes into account the nonlinearity of the inverse problem and is able to estimate simultaneously animal abundance and the parameters associated with the scattering model inherent to the kernel. It uses sophisticated scattering models to estimate, first, the abundance and, second, the relevant shape and behavioral parameters of the target organisms. Numerical simulations demonstrate that the abundance, size, and behavior (tilt angle) parameters of marine animals (fish or zooplankton) can be accurately inferred from the inversion by using multi-frequency acoustic data. The influence of the singularity and uncertainty in the inversion kernel on the inversion results can be mitigated by examining the singular values for linear inverse problems and employing a non-linear inversion involving a scattering model-based kernel.
Model-based Estimation of Gas Leakage for Fluid Power Accumulators in Wind Turbines
DEFF Research Database (Denmark)
Liniger, Jesper; Pedersen, Henrik Clemmensen; N. Soltani, Mohsen
2017-01-01
for accumulators, namely gas leakage. The method utilizes an Extended Kalman Filter for joint state and parameter estimation with special attention to limiting the use of sensors to those commonly used in wind turbines. The precision of the method is investigated on an experimental setup which allows for operation...... of the accumulator similar to the conditions in a turbine. The results show that gas leakage is indeed detectable during start-up of the turbine and robust behavior is achieved in a multi-fault environment where both gas and external fluid leakage occur simultaneously. The estimation precision is shown...... to be sensitive to initial conditions for the gas temperature and volume....
Biondi, Daniela; De Luca, Davide Luciano
2015-04-01
The use of rainfall-runoff models represents an alternative to statistical approaches (such as at-site or regional flood frequency analysis) for design flood estimation, and constitutes an answer to the increasing need for synthetic design hydrographs (SDHs) associated with a specific return period. However, the lack of streamflow observations and the consequent high uncertainty associated with parameter estimation usually pose serious limitations to the use of process-based approaches in ungauged catchments, which in contrast represent the majority in practical applications. This work presents the application of a Bayesian procedure that, for a predefined rainfall-runoff model, allows for the assessment of the posterior parameter distribution, using the limited and uncertain information available for the response of an ungauged catchment (Bulygina et al. 2009; 2011). The use of regional estimates of river flow statistics, interpreted as hydrological signatures that measure theoretically relevant system process behaviours (Gupta et al. 2008), within this framework represents a valuable option and has seen significant development in recent literature as a way to constrain the plausible model response and to reduce the uncertainty in ungauged basins. In this study we rely on the first three L-moments of annual streamflow maxima, for which regressions are available from previous studies (Biondi et al. 2012; Laio et al. 2011). The methodology was carried out for a catchment located in southern Italy, and used within a Monte Carlo scheme (MCs) considering both event-based and continuous simulation approaches for design flood estimation. The applied procedure offers promising perspectives for performing model calibration and uncertainty analysis in ungauged basins; moreover, in the context of design flood estimation, process-based methods coupled with the MCs approach have the advantage of providing an uncertainty analysis of the simulated floods, which is an asset in risk-based decision-making.
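The first three sample L-moments used as hydrological signatures can be computed from probability weighted moments of the ordered sample. The data below are synthetic (Gumbel-distributed annual maxima), purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic annual-maximum series: Gumbel with illustrative parameters
x = np.sort(rng.gumbel(loc=100.0, scale=30.0, size=5000))
n = x.size
i = np.arange(1, n + 1)

# Unbiased probability weighted moments b0, b1, b2
b0 = x.mean()
b1 = np.sum((i - 1) / (n - 1) * x) / n
b2 = np.sum((i - 1) * (i - 2) / ((n - 1) * (n - 2)) * x) / n

# First three L-moments and the L-skewness ratio
l1 = b0                            # L-mean
l2 = 2.0 * b1 - b0                 # L-scale
t3 = (6.0 * b2 - 6.0 * b1 + b0) / l2   # L-skewness
```

For a Gumbel distribution the population values are l1 = loc + 0.5772*scale, l2 = scale*ln(2), and t3 ≈ 0.17, so the sample values can be checked against theory; in the ungauged-basin setting these three numbers are supplied by regional regressions rather than computed from local data.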
Directory of Open Access Journals (Sweden)
Rulin Huang
2017-04-01
Full Text Available Existing collision avoidance methods for autonomous vehicles ignore the driving intent of detected vehicles and thus cannot satisfy the requirements for autonomous driving in urban environments, because of their high false detection rate for collisions with vehicles on winding roads and their missed detection rate for collisions with maneuvering vehicles. This study introduces an intent-estimation- and motion-model-based (IEMMB) method to address these disadvantages. First, a state vector is constructed by combining the road structure and the moving state of detected vehicles. A Gaussian mixture model is used to learn the maneuvering patterns of vehicles from collected data, and the patterns are used to estimate the driving intent of the detected vehicles. Then, a desirable long-term trajectory is obtained by weighting time and comfort. The long-term trajectory and the short-term trajectory, which is predicted using a constant yaw rate motion model, are fused to achieve an accurate trajectory. Finally, considering the moving state of the autonomous vehicle, collisions can be detected and avoided. Experiments have shown that the intent estimation method performed well, achieving an accuracy of 91.7% on straight roads and 90.5% on winding roads, much higher than that achieved by a method that ignores the road structure. The average collision detection distance is increased by more than 8 m. In addition, the maximum yaw rate and acceleration during an evasive maneuver are decreased, indicating an improvement in driving comfort.
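The Gaussian-mixture intent learning step can be sketched with a small EM fit on a one-dimensional feature. The "lane-keeping"/"lane-changing" labels, the lateral-offset feature, and all numbers are hypothetical, not the study's training data:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic 1-D feature (lateral offset): two maneuver patterns
keep = rng.normal(0.0, 0.2, 300)     # "lane-keeping": small offsets
change = rng.normal(1.5, 0.3, 300)   # "lane-changing": large offsets
x = np.concatenate([keep, change])

# Two-component Gaussian mixture fitted by EM
mu = np.array([-0.5, 2.5])
sig = np.array([1.0, 1.0])
pi = np.array([0.5, 0.5])
for _ in range(100):
    # E-step: responsibilities of each component for each sample
    pdf = pi / (sig * np.sqrt(2 * np.pi)) * np.exp(-0.5 * ((x[:, None] - mu) / sig) ** 2)
    r = pdf / pdf.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights, means, and standard deviations
    nk = r.sum(axis=0)
    mu = (r * x[:, None]).sum(axis=0) / nk
    sig = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    pi = nk / x.size

# Intent of a new observation = most probable mixture component
obs = 1.4
post = pi / (sig * np.sqrt(2 * np.pi)) * np.exp(-0.5 * ((obs - mu) / sig) ** 2)
intent = int(np.argmax(post))
```

In the full method this posterior over maneuver patterns is what gets combined with the road-structure state vector before trajectory prediction.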
Eitelberg, D.A.; van Vliet, J.; Verburg, P.H.
2015-01-01
The world's population is growing and demand for food, feed, fiber, and fuel is increasing, placing greater demand on land and its resources for crop production. We review previously published estimates of global scale cropland availability, discuss the underlying assumptions that lead to
Model-based PSF and MTF estimation and validation from skeletal clinical CT images.
Pakdel, Amirreza; Mainprize, James G; Robert, Normand; Fialkov, Jeffery; Whyne, Cari M
2014-01-01
A method was developed to correct for systematic errors in estimating the thickness of thin bones due to image blurring in CT images, using bone interfaces to estimate the point-spread function (PSF). This study validates the accuracy of the PSFs estimated using this method from various clinical CT images featuring cortical bones. Gaussian PSFs, characterized by a different extent in the z (scan) direction than in the x and y directions, were obtained using our method from 11 clinical CT scans of a cadaveric craniofacial skeleton. These PSFs were estimated for multiple combinations of scanning parameters and reconstruction methods. The actual PSF for each scan setting was measured using the slanted-slit technique within the image slice plane and the longitudinal axis. The Gaussian PSF and the corresponding modulation transfer function (MTF) are compared against the actual PSF and MTF for validation. The differences (errors) between the actual and estimated full-width half-max (FWHM) of the PSFs were 0.09 ± 0.05 and 0.14 ± 0.11 mm for the xy and z axes, respectively. The overall errors in the predicted frequencies measured at 75%, 50%, 25%, 10%, and 5% MTF levels were 0.06 ± 0.07 and 0.06 ± 0.04 cycles/mm for the xy and z axes, respectively. The accuracy of the estimates was dependent on whether they were reconstructed with a standard kernel (Toshiba's FC68, mean error of 0.06 ± 0.05 mm, MTF mean error 0.02 ± 0.02 cycles/mm) or a high resolution bone kernel (Toshiba's FC81, PSF FWHM error 0.12 ± 0.03 mm, MTF mean error 0.09 ± 0.08 cycles/mm). The method is accurate in 3D for an image reconstructed using a standard reconstruction kernel, which conforms to the Gaussian PSF assumption, but less accurate when using a high resolution bone kernel. The method is a practical and self-contained means of estimating the PSF in clinical CT images featuring cortical bones, without the need for phantoms or any prior knowledge about the scanner-specific parameters.
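Under the Gaussian-PSF assumption used above, FWHM and the MTF follow in closed form from the PSF width σ; a small sketch (the σ value is illustrative, not from the study):

```python
import numpy as np

def fwhm_from_sigma(sigma):
    """FWHM (mm) of a Gaussian PSF with standard deviation sigma (mm)."""
    return 2.0 * np.sqrt(2.0 * np.log(2.0)) * sigma

def gaussian_mtf(f, sigma):
    """MTF of a Gaussian PSF; the Fourier transform of a Gaussian is Gaussian."""
    return np.exp(-2.0 * (np.pi * sigma * f) ** 2)

def freq_at_mtf(level, sigma):
    """Spatial frequency (cycles/mm) at which the MTF falls to `level`."""
    return np.sqrt(-np.log(level) / 2.0) / (np.pi * sigma)

sigma_xy = 0.35  # mm, illustrative in-plane PSF width (not from the paper)
print(round(fwhm_from_sigma(sigma_xy), 3))
for level in (0.75, 0.50, 0.25, 0.10, 0.05):
    print(level, round(freq_at_mtf(level, sigma_xy), 3))
```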
Model-based PSF and MTF estimation and validation from skeletal clinical CT images
International Nuclear Information System (INIS)
Pakdel, Amirreza; Mainprize, James G.; Robert, Normand; Fialkov, Jeffery; Whyne, Cari M.
2014-01-01
Purpose: A method was developed to correct for systematic errors in estimating the thickness of thin bones due to image blurring in CT images, using bone interfaces to estimate the point-spread function (PSF). This study validates the accuracy of the PSFs estimated using this method from various clinical CT images featuring cortical bones. Methods: Gaussian PSFs, characterized by a different extent in the z (scan) direction than in the x and y directions, were obtained using our method from 11 clinical CT scans of a cadaveric craniofacial skeleton. These PSFs were estimated for multiple combinations of scanning parameters and reconstruction methods. The actual PSF for each scan setting was measured using the slanted-slit technique within the image slice plane and the longitudinal axis. The Gaussian PSF and the corresponding modulation transfer function (MTF) are compared against the actual PSF and MTF for validation. Results: The differences (errors) between the actual and estimated full-width half-max (FWHM) of the PSFs were 0.09 ± 0.05 and 0.14 ± 0.11 mm for the xy and z axes, respectively. The overall errors in the predicted frequencies measured at 75%, 50%, 25%, 10%, and 5% MTF levels were 0.06 ± 0.07 and 0.06 ± 0.04 cycles/mm for the xy and z axes, respectively. The accuracy of the estimates was dependent on whether they were reconstructed with a standard kernel (Toshiba's FC68, mean error of 0.06 ± 0.05 mm, MTF mean error 0.02 ± 0.02 cycles/mm) or a high resolution bone kernel (Toshiba's FC81, PSF FWHM error 0.12 ± 0.03 mm, MTF mean error 0.09 ± 0.08 cycles/mm). Conclusions: The method is accurate in 3D for an image reconstructed using a standard reconstruction kernel, which conforms to the Gaussian PSF assumption, but less accurate when using a high resolution bone kernel. The method is a practical and self-contained means of estimating the PSF in clinical CT images featuring cortical bones, without the need for phantoms or any prior knowledge about the
Model-based PSF and MTF estimation and validation from skeletal clinical CT images
Energy Technology Data Exchange (ETDEWEB)
Pakdel, Amirreza [Sunnybrook Research Institute, Toronto, Ontario M4N 3M5, Canada and Institute of Biomaterials and Biomedical Engineering, University of Toronto, Toronto, Ontario M5S 3M2 (Canada); Mainprize, James G.; Robert, Normand [Sunnybrook Research Institute, Toronto, Ontario M4N 3M5 (Canada); Fialkov, Jeffery [Division of Plastic Surgery, Sunnybrook Health Sciences Center, Toronto, Ontario M4N 3M5, Canada and Department of Surgery, University of Toronto, Toronto, Ontario M5S 3M2 (Canada); Whyne, Cari M., E-mail: cari.whyne@sunnybrook.ca [Sunnybrook Research Institute, Toronto, Ontario M4N 3M5, Canada and Department of Surgery, Institute of Biomaterials and Biomedical Engineering, University of Toronto, Toronto, Ontario M5S 3M2 (Canada)
2014-01-15
Purpose: A method was developed to correct for systematic errors in estimating the thickness of thin bones due to image blurring in CT images, using bone interfaces to estimate the point-spread function (PSF). This study validates the accuracy of the PSFs estimated using this method from various clinical CT images featuring cortical bones. Methods: Gaussian PSFs, characterized by a different extent in the z (scan) direction than in the x and y directions, were obtained using our method from 11 clinical CT scans of a cadaveric craniofacial skeleton. These PSFs were estimated for multiple combinations of scanning parameters and reconstruction methods. The actual PSF for each scan setting was measured using the slanted-slit technique within the image slice plane and the longitudinal axis. The Gaussian PSF and the corresponding modulation transfer function (MTF) are compared against the actual PSF and MTF for validation. Results: The differences (errors) between the actual and estimated full-width half-max (FWHM) of the PSFs were 0.09 ± 0.05 and 0.14 ± 0.11 mm for the xy and z axes, respectively. The overall errors in the predicted frequencies measured at 75%, 50%, 25%, 10%, and 5% MTF levels were 0.06 ± 0.07 and 0.06 ± 0.04 cycles/mm for the xy and z axes, respectively. The accuracy of the estimates was dependent on whether they were reconstructed with a standard kernel (Toshiba's FC68, mean error of 0.06 ± 0.05 mm, MTF mean error 0.02 ± 0.02 cycles/mm) or a high resolution bone kernel (Toshiba's FC81, PSF FWHM error 0.12 ± 0.03 mm, MTF mean error 0.09 ± 0.08 cycles/mm). Conclusions: The method is accurate in 3D for an image reconstructed using a standard reconstruction kernel, which conforms to the Gaussian PSF assumption, but less accurate when using a high resolution bone kernel. The method is a practical and self-contained means of estimating the PSF in clinical CT images featuring cortical bones, without the need for phantoms or any prior knowledge
Duan, Z.; Bastiaanssen, W.G.M.
2017-01-01
The heat storage changes (Qt) can be a significant component of the energy balance in lakes, and it is important to account for Qt for reasonable estimation of evaporation at monthly and finer timescales if the energy balance-based evaporation models are used. However, Qt has been often neglected in
International Nuclear Information System (INIS)
Ding, Y.; Arai, K.
2007-01-01
A method is proposed for estimating forest parameters (species, tree shape, and distance between canopies) by means of a Monte Carlo based radiative transfer model combined with a forestry surface model. The model is verified through experiments with a miniature model of a forest, an array of relatively small trees. Two types of miniature trees, with ellipse-looking and cone-looking canopies, are examined in the experiments. The proposed model and the experimental results are in good agreement, validating the proposed method. It is also found that estimation of tree shape and the distance between tree trunks, as well as distinction between deciduous and coniferous trees, can be done with the proposed model. Furthermore, influences due to multiple reflections between trees and interaction between trees and underlying grass are clarified with the proposed method
Energy Technology Data Exchange (ETDEWEB)
Passos de Figueiredo, Leandro, E-mail: leandrop.fgr@gmail.com [Physics Department, Federal University of Santa Catarina, Florianópolis (Brazil); Grana, Dario [Department of Geology and Geophysics, University of Wyoming, Laramie (United States); Santos, Marcio; Figueiredo, Wagner [Physics Department, Federal University of Santa Catarina, Florianópolis (Brazil); Roisenberg, Mauro [Informatic and Statistics Department, Federal University of Santa Catarina, Florianópolis (Brazil); Schwedersky Neto, Guenther [Petrobras Research Center, Rio de Janeiro (Brazil)
2017-05-01
We propose a Bayesian approach for seismic inversion to estimate acoustic impedance, porosity and lithofacies within the reservoir conditioned to post-stack seismic and well data. The link between elastic and petrophysical properties is given by a joint prior distribution for the logarithm of impedance and porosity, based on a rock-physics model. The well conditioning is performed through a background model obtained by well log interpolation. Two different approaches are presented: in the first approach, the prior is defined by a single Gaussian distribution, whereas in the second approach it is defined by a Gaussian mixture to represent the well data multimodal distribution and link the Gaussian components to different geological lithofacies. The forward model is based on a linearized convolutional model. For the single Gaussian case, we obtain an analytical expression for the posterior distribution, resulting in a fast algorithm to compute the solution of the inverse problem, i.e. the posterior distribution of acoustic impedance and porosity as well as the facies probability given the observed data. For the Gaussian mixture prior, it is not possible to obtain the distributions analytically, hence we propose a Gibbs algorithm to perform the posterior sampling and obtain several reservoir model realizations, allowing an uncertainty analysis of the estimated properties and lithofacies. Both methodologies are applied to a real seismic dataset with three wells to obtain 3D models of acoustic impedance, porosity and lithofacies. The methodologies are validated through a blind well test and compared to a standard Bayesian inversion approach. Using the probability of the reservoir lithofacies, we also compute a 3D isosurface probability model of the main oil reservoir in the studied field.
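For the single-Gaussian prior with a linearized convolutional forward model, the posterior mean and covariance are available analytically. A minimal sketch with a toy smoothing operator standing in for the convolution (all matrices and values are illustrative):

```python
import numpy as np

def gaussian_posterior(mu_m, C_m, G, d, C_d):
    """Posterior mean/covariance of m given d = G m + noise,
    with Gaussian prior N(mu_m, C_m) and noise covariance C_d."""
    S = G @ C_m @ G.T + C_d               # data-space covariance
    K = C_m @ G.T @ np.linalg.inv(S)      # "Kalman gain"
    mu_post = mu_m + K @ (d - G @ mu_m)
    C_post = C_m - K @ G @ C_m
    return mu_post, C_post

# Toy 1-D "trace": 3 model cells, simple smoothing forward operator.
G = np.array([[0.5, 0.5, 0.0],
              [0.0, 0.5, 0.5]])
mu_m = np.zeros(3)
C_m = np.eye(3)
C_d = 0.01 * np.eye(2)
d = np.array([1.0, -1.0])

mu_post, C_post = gaussian_posterior(mu_m, C_m, G, d, C_d)
print(mu_post.round(3))
```

For the Gaussian-mixture prior no such closed form exists, which is why the abstract resorts to Gibbs sampling over component memberships.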
Dranitsaris, George; Ortega, Ana; Lubbe, Martie S; Truter, Ilse
2012-03-01
Several European governments have recently mandated price cuts in drugs to reduce health care spending. However, such measures without supportive evidence may compromise patient care because manufacturers may withdraw current products or not launch new agents. A value-based pricing scheme may be a better approach for determining a fair drug price and may be a medium for negotiations between the key stakeholders. To demonstrate this approach, pharmacoeconomic (PE) modeling was used from the Spanish health care system perspective to estimate a value-based price for bevacizumab, a drug that provides a 1.4-month survival benefit to patients with metastatic colorectal cancer (mCRC). The threshold used for economic value was three times the Spanish per capita GDP, as recommended by the World Health Organization (WHO). A PE model was developed to simulate outcomes in mCRC patients receiving chemotherapy ± bevacizumab. Clinical data were obtained from randomized trials and costs from a Spanish hospital. Utility estimates were determined by interviewing 24 Spanish oncology nurses and pharmacists. A price per dose of bevacizumab was then estimated using a target threshold of €78,300 per quality-adjusted life year gained, which is three times the Spanish per capita GDP. For a 1.4-month survival benefit, a price of €342 per dose would be considered cost effective from the Spanish public health care perspective. The price may be increased to €733 or €843 per dose if the drug were able to improve patient quality of life or enhance survival from 1.4 to 3 months. This study demonstrated that a value-based pricing approach using PE modeling and the WHO criteria for economic value is feasible and perhaps a better alternative to government-mandated price cuts. The former approach would be a good starting point for opening dialog between European government payers and the pharmaceutical industry.
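The core of such a value-based price is threshold arithmetic: the total spend that keeps the cost per QALY at the threshold, minus non-drug costs, spread over the doses given. The inputs below (QALY gain, dose count, ancillary cost) are made up for illustration; only the €78,300 threshold comes from the abstract.

```python
def max_price_per_dose(threshold_per_qaly, qaly_gain,
                       incremental_other_cost, n_doses):
    """Highest drug price at which the ICER stays at the threshold:
    threshold = (n_doses * price + other_cost) / qaly_gain."""
    budget = threshold_per_qaly * qaly_gain      # total justifiable spend
    return (budget - incremental_other_cost) / n_doses

price = max_price_per_dose(
    threshold_per_qaly=78_300,     # 3x Spanish per-capita GDP (from abstract)
    qaly_gain=0.09,                # assumed QALY gain
    incremental_other_cost=2_000,  # assumed extra administration cost
    n_doses=12)                    # assumed number of doses
print(round(price, 2))
```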
Directory of Open Access Journals (Sweden)
Kul Khand
2017-11-01
Full Text Available Agricultural subsurface drainage changes the field hydrology and potentially the amount of water available to the crop by altering the flow path and the rate and timing of water removal. Evapotranspiration (ET) is normally among the largest components of the field water budget, and the changes in ET from the introduction of subsurface drainage are likely to have a greater influence on the overall water yield (surface runoff plus subsurface drainage) from subsurface drained (TD) fields compared to fields without subsurface drainage (UD). To test this hypothesis, we examined the impact of subsurface drainage on ET at two sites located in the Upper Midwest (North Dakota, Site 1; South Dakota, Site 2) using the Landsat imagery-based METRIC (Mapping Evapotranspiration at high Resolution with Internalized Calibration) model. Site 1 was planted with corn (Zea mays L.) and soybean (Glycine max L.) during the 2009 and 2010 growing seasons, respectively. Site 2 was planted with corn for the 2013 growing season. During the corn growing seasons (2009 and 2013), differences between the total ET from TD and UD fields were less than 5 mm. For the soybean year (2010), ET from the UD field was 10% (53 mm) greater than that from the TD field. During the peak ET period from June to September for all study years, ET differences from TD and UD fields were within 15 mm (<3%). Overall, differences between daily ET from TD and UD fields were not statistically significant (p > 0.05) and showed no consistent relationship.
Estimate of safe human exposure levels for lunar dust based on comparative benchmark dose modeling.
James, John T; Lam, Chiu-Wing; Santana, Patricia A; Scully, Robert R
2013-04-01
Brief exposures of Apollo astronauts to lunar dust occasionally elicited upper respiratory irritation; however, no limits were ever set for prolonged exposure to lunar dust. The United States and other spacefaring nations intend to return to the moon for extensive exploration within a few decades. In the meantime, habitats for that exploration, whether mobile or fixed, must be designed to limit human exposure to lunar dust to safe levels. Herein we estimate safe exposure limits for lunar dust collected during the Apollo 14 mission. We instilled three respirable-sized (∼2 μm mass median diameter) lunar dusts (two ground and one unground) and two standard dusts of widely different toxicities (quartz and TiO₂) into the respiratory system of rats. Rats in groups of six were given 0, 1, 2.5 or 7.5 mg of the test dust in a saline-Survanta® vehicle, and biochemical and cellular biomarkers of toxicity in lung lavage fluid were assayed 1 week and 1 month after instillation. By comparing the dose-response curves of sensitive biomarkers, we estimated safe exposure levels for astronauts and concluded that unground lunar dust and dust ground by two different methods were not toxicologically distinguishable. The safe exposure estimates were 1.3 ± 0.4 mg/m³ (jet-milled dust), 1.0 ± 0.5 mg/m³ (ball-milled dust) and 0.9 ± 0.3 mg/m³ (unground, natural dust). We estimate that 0.5-1 mg/m³ of lunar dust is safe for periodic human exposures during long stays in habitats on the lunar surface.
Zhang, Yongguang; Guanter, Luis; Berry, Joseph A; Joiner, Joanna; van der Tol, Christiaan; Huete, Alfredo; Gitelson, Anatoly; Voigt, Maximilian; Köhler, Philipp
2014-12-01
Photosynthesis simulations by terrestrial biosphere models are usually based on Farquhar's model, in which the maximum rate of carboxylation (Vcmax) is a key control parameter of photosynthetic capacity. Even though Vcmax is known to vary substantially in space and time in response to environmental controls, it is typically parameterized in models with tabulated values associated with plant functional types. Remote sensing can be used to produce a spatially continuous and temporally resolved view of photosynthetic efficiency, but traditional vegetation observations based on spectral reflectance lack a direct link to plant photochemical processes. Alternatively, recent space-borne measurements of sun-induced chlorophyll fluorescence (SIF) can offer an observational constraint on photosynthesis simulations. Here, we show that top-of-canopy SIF measurements from space are sensitive to Vcmax at the ecosystem level, and present an approach to invert Vcmax from SIF data. We use the Soil-Canopy Observation of Photosynthesis and Energy balance (SCOPE) model to derive empirical relationships between seasonal Vcmax and SIF, which are used to solve the inverse problem. We evaluate our Vcmax estimation method at six agricultural flux tower sites in the midwestern US using space-based SIF retrievals. Our Vcmax estimates agree well with literature values for corn and soybean plants (average values of 37 and 101 μmol m⁻² s⁻¹, respectively) and show plausible seasonal patterns. The effect of the updated seasonally varying Vcmax parameterization on simulated gross primary productivity (GPP) is tested by comparing to simulations with fixed Vcmax values. Validation against flux tower observations demonstrates that simulations of GPP and light use efficiency improve significantly when our time-resolved Vcmax estimates from SIF are used, with R² for GPP comparisons increasing from 0.85 to 0.93, and for light use efficiency from 0.44 to 0.83. Our results support the use of
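The inversion step (fitting an empirical relation between simulated SIF and the Vcmax values that drive the simulation, then inverting it for an observed SIF value) can be sketched as follows; the linear form and all numbers are assumptions, and SCOPE itself is not reproduced here.

```python
import numpy as np

# Hypothetical lookup: seasonal SIF simulated for a grid of Vcmax values.
vcmax_grid = np.array([20.0, 40.0, 60.0, 80.0, 100.0, 120.0])  # umol m-2 s-1
sif_sim = np.array([0.45, 0.78, 1.05, 1.28, 1.49, 1.67])       # made-up SIF values

# Fit the empirical SIF(Vcmax) relation (linear form is an assumption).
slope, intercept = np.polyfit(vcmax_grid, sif_sim, 1)

def vcmax_from_sif(sif_obs):
    """Invert the fitted relation to map observed SIF to a Vcmax estimate."""
    return (sif_obs - intercept) / slope

print(round(vcmax_from_sif(1.0), 1))
```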
Estimating renal function in children: a new GFR-model based on serum cystatin C and body cell mass.
Andersen, Trine Borup
2012-07-01
This PhD thesis is based on four individual studies including 131 children aged 2-14 years with nephro-urologic disorders. The majority (72%) of children had a normal renal function (GFR > 82 ml/min/1.73 m²), and only 8% had a reduced renal function. The thesis' main aims were: 1) to develop a more accurate GFR model based on a novel theory of body cell mass (BCM) and cystatin C (CysC); 2) to investigate the diagnostic performance in comparison to other models as well as serum CysC and creatinine; 3) to validate the new model's precision and validity. The model's diagnostic performance was investigated in study I as the ability to detect changes in renal function (total day-to-day variation), and in study IV as the ability to discriminate between normal and reduced function. The model's precision and validity were indirectly evaluated in studies II and III, and in study I accuracy was estimated by comparison to reference GFR. Several prediction models based on CysC or a combination of CysC and serum creatinine have been developed for predicting GFR in children. Despite these efforts to improve GFR estimates, no alternative to exogenous methods has been found, and Schwartz's formula, based on height, creatinine and an empirically derived constant, is still recommended for GFR estimation in children. However, the inclusion of BCM as a possible variable in a CysC-based prediction model has not yet been explored. As CysC is produced at a constant rate by all nucleated cells, we hypothesize that including BCM in a new prediction model will increase the accuracy of the GFR estimate. Study I aimed at deriving the new GFR prediction model based on the novel theory of CysC and BCM and comparing its performance to previously published models. The BCM-model took the form GFR (mL/min) = 10.2 × (BCM/CysC)^0.40 × (height × body surface area/Crea)^0.65. The model predicted 99% within ±30% of reference GFR, and 67% within ±10%. This was higher than any other model. The
El Gharamti, Mohamad; Hoteit, Ibrahim
2014-01-01
The accuracy of groundwater flow and transport model predictions highly depends on our knowledge of subsurface physical parameters. Assimilation of contaminant concentration data from shallow dug wells could help improve model behavior, eventually resulting in better forecasts. In this paper, we propose a joint state-parameter estimation scheme which efficiently integrates a low-rank extended Kalman filtering technique, namely the Singular Evolutive Extended Kalman (SEEK) filter, with the prominent complex-step method (CSM). The SEEK filter avoids the prohibitive computational burden of the Extended Kalman filter by updating the forecast along the directions of error growth only, called filter correction directions. CSM is used within the SEEK filter to efficiently compute model derivatives with respect to the state and parameters along the filter correction directions. CSM is derived using complex Taylor expansion and is second order accurate. It is proven to guarantee accurate gradient computations with zero numerical round-off errors, but requires complexifying the numerical code. We perform twin experiments to test the performance of the CSM-based SEEK for estimating the state and parameters of a subsurface contaminant transport model. We compare the efficiency and the accuracy of the proposed scheme with two standard finite difference-based SEEK filters as well as with the ensemble Kalman filter (EnKF). Assimilation results suggest that the use of the CSM in the context of the SEEK filter may provide up to 80% more accurate solutions when compared to standard finite difference schemes and is competitive with the EnKF, even providing more accurate results in certain situations. We analyze the results based on two different observation strategies. We also discuss the complexification of the numerical code and show that this could be efficiently implemented in the context of subsurface flow models. © 2013 Elsevier B.V.
El Gharamti, Mohamad
2014-02-01
The accuracy of groundwater flow and transport model predictions highly depends on our knowledge of subsurface physical parameters. Assimilation of contaminant concentration data from shallow dug wells could help improve model behavior, eventually resulting in better forecasts. In this paper, we propose a joint state-parameter estimation scheme which efficiently integrates a low-rank extended Kalman filtering technique, namely the Singular Evolutive Extended Kalman (SEEK) filter, with the prominent complex-step method (CSM). The SEEK filter avoids the prohibitive computational burden of the Extended Kalman filter by updating the forecast along the directions of error growth only, called filter correction directions. CSM is used within the SEEK filter to efficiently compute model derivatives with respect to the state and parameters along the filter correction directions. CSM is derived using complex Taylor expansion and is second order accurate. It is proven to guarantee accurate gradient computations with zero numerical round-off errors, but requires complexifying the numerical code. We perform twin experiments to test the performance of the CSM-based SEEK for estimating the state and parameters of a subsurface contaminant transport model. We compare the efficiency and the accuracy of the proposed scheme with two standard finite difference-based SEEK filters as well as with the ensemble Kalman filter (EnKF). Assimilation results suggest that the use of the CSM in the context of the SEEK filter may provide up to 80% more accurate solutions when compared to standard finite difference schemes and is competitive with the EnKF, even providing more accurate results in certain situations. We analyze the results based on two different observation strategies. We also discuss the complexification of the numerical code and show that this could be efficiently implemented in the context of subsurface flow models. © 2013 Elsevier B.V.
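The complex-step method used in both records above computes f'(x) ≈ Im(f(x + ih))/h, which has no subtractive cancellation and therefore works with tiny step sizes; a minimal demonstration against a forward finite difference:

```python
import numpy as np

def complex_step_derivative(f, x, h=1e-20):
    """Second-order accurate derivative with no subtractive cancellation:
    f'(x) ~ Im(f(x + i*h)) / h."""
    return np.imag(f(x + 1j * h)) / h

# Example model function (must be implemented with complex-safe operations).
f = lambda x: np.exp(x) * np.sin(x)

x0 = 1.3
exact = np.exp(x0) * (np.sin(x0) + np.cos(x0))
csd = complex_step_derivative(f, x0)
fd = (f(x0 + 1e-8) - f(x0)) / 1e-8          # forward finite difference
print(abs(csd - exact), abs(fd - exact))    # CSM error sits at machine precision
```

The "complexifying the numerical code" caveat in the abstracts corresponds to the requirement that every operation in `f` accept complex arguments.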
Chai, Lilong; Kröbel, Roland; Janzen, H. Henry; Beauchemin, Karen A.; McGinn, Sean M.; Bittman, Shabtai; Atia, Atta; Edeogu, Ike; MacDonald, Douglas; Dong, Ruilan
2014-08-01
Animal feeding operations are primary contributors of anthropogenic ammonia (NH3) emissions in North America and Europe. Mathematical modeling of NH3 volatilization from each stage of livestock manure management allows comprehensive quantitative estimates of emission sources and nutrient losses. A regionally-specific mass balance model based on total ammoniacal nitrogen (TAN) content in animal manure was developed for estimating NH3 emissions from beef farming operations in western Canada. Total N excretion in urine and feces was estimated from animal diet composition, feed dry matter intake and N utilization for beef cattle categories and production stages. Mineralization of organic N, immobilization of TAN, nitrification, and denitrification of N compounds in manure, were incorporated into the model to account for quantities of TAN at each stage of manure handling. Ammonia emission factors were specified for different animal housing (feedlots, barns), grazing, manure storage (including composting and stockpiling) and land spreading (tilled and untilled land), and were modified for temperature. The model computed NH3 emissions from all beef cattle sub-classes including cows, calves, breeding bulls, steers for slaughter, and heifers for slaughter and replacement. Estimated NH3 emissions were about 1.11 × 10⁵ Mg NH3 in Alberta in 2006, with a mean of 18.5 kg animal⁻¹ yr⁻¹ (15.2 kg NH3-N animal⁻¹ yr⁻¹) which is 23.5% of the annual N intake of beef cattle (64.7 kg animal⁻¹ yr⁻¹). The percentage of N intake volatilized as NH3-N was 50% for steers and heifers for slaughter, and between 11 and 14% for all other categories. Steers and heifers for slaughter were the two largest contributors (3.5 × 10⁴ and 3.9 × 10⁴ Mg, respectively) at 31.5 and 32.7% of total NH3 emissions because most growing animals were finished in feedlots. Animal housing and grazing contributed roughly 63% of the total NH3 emissions (feedlots, barns and pastures contributed 54.4, 0.2 and 8.1% of
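The TAN mass-balance idea (each manure-management stage volatilizes a fraction of the TAN entering it and passes the remainder downstream) can be sketched as below. The stage list and emission factors are invented for illustration and are not the model's regional values.

```python
def nh3_losses(tan_excreted, emission_factors):
    """Pass TAN through manure-management stages; each stage volatilizes
    a fraction (EF) of the TAN entering it and passes on the remainder."""
    remaining, losses = tan_excreted, {}
    for stage, ef in emission_factors:
        emitted = remaining * ef
        losses[stage] = emitted
        remaining -= emitted
    return losses, remaining

# Hypothetical stage chain and emission factors (not the paper's values).
stages = [("housing", 0.30), ("storage", 0.25), ("land_spreading", 0.40)]
losses, to_soil = nh3_losses(100.0, stages)  # 100 kg TAN excreted
print(losses, round(to_soil, 1))
```

A full model would also add TAN via mineralization and remove it via immobilization and nitrification at each stage, as the abstract describes.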
Duan, Zheng; Bastiaanssen, W. G. M.
2017-02-01
The heat storage changes (Qt) can be a significant component of the energy balance in lakes, and it is important to account for Qt for reasonable estimation of evaporation at monthly and finer timescales if energy balance-based evaporation models are used. However, Qt has often been neglected in many studies due to the lack of required water temperature data. A simple hysteresis model (Qt = a*Rn + b + c*dRn/dt) has been demonstrated to reasonably estimate Qt from the readily available net all-wave radiation (Rn) and three locally calibrated coefficients (a-c) for lakes and reservoirs. As a follow-up study, we evaluated whether this hysteresis model could enable energy balance-based evaporation models to yield good evaporation estimates. Representative monthly evaporation data were compiled from published literature and used as ground truth to evaluate three energy balance-based evaporation models for five lakes. The three models, of differing complexity, are De Bruin-Keijman (DK), Penman, and a new model referred to as Duan-Bastiaanssen (DB). All three models require Qt as input. Each model was run in three scenarios differing in the input Qt (S1: measured Qt; S2: modelled Qt from the hysteresis model; S3: neglecting Qt) to evaluate the impact of Qt on the modelled evaporation. Evaluation showed that the modelled Qt agreed well with measured counterparts for all five lakes. It was confirmed that the hysteresis model with locally calibrated coefficients can predict Qt with good accuracy for the same lake. Using modelled Qt as input, all three evaporation models yielded monthly evaporation estimates comparably good to those using measured Qt as input, and significantly better than those neglecting Qt, for the five lakes. The DK model, requiring minimum data, generally performed the best, followed by the Penman and DB models. This study demonstrated that once the three coefficients are locally calibrated using historical data the simple hysteresis model can offer
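Calibrating the hysteresis model Qt = a*Rn + b + c*dRn/dt is an ordinary least-squares fit of the three coefficients; a sketch with synthetic data (the Rn series and true coefficients are made up):

```python
import numpy as np

def fit_hysteresis(rn, qt, dt=1.0):
    """Least-squares fit of Qt = a*Rn + b + c*dRn/dt."""
    drn = np.gradient(rn, dt)
    A = np.column_stack([rn, np.ones_like(rn), drn])
    coef, *_ = np.linalg.lstsq(A, qt, rcond=None)
    return coef  # a, b, c

# Synthetic monthly Rn cycle, and a Qt series generated from known coefficients.
t = np.arange(12.0)
rn = 150 + 60 * np.sin(2 * np.pi * t / 12)
a_true, b_true, c_true = 0.6, -20.0, 0.8
qt = a_true * rn + b_true + c_true * np.gradient(rn, 1.0)

a, b, c = fit_hysteresis(rn, qt)
print(round(a, 3), round(b, 3), round(c, 3))
```

With real data the recovered coefficients are lake-specific, which is why the abstract stresses local calibration.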
Directory of Open Access Journals (Sweden)
Alba Sandyra Bezerra Lopes
2012-01-01
Full Text Available Motion estimation is the most complex module in a video encoder, requiring a high processing throughput and high memory bandwidth, mainly when the focus is high-definition videos. The throughput problem can be solved by increasing the parallelism of the internal operations. The external memory bandwidth may be reduced using a memory hierarchy. This work presents a memory hierarchy model for a full-search motion estimation core. The proposed memory hierarchy model is based on a data reuse scheme that considers the features of the full-search algorithm. The proposed memory hierarchy substantially reduces the external memory bandwidth required for the motion estimation process, and it provides a very high data throughput for the ME core. This throughput is necessary to achieve real time when processing high-definition videos. When considering the worst bandwidth scenario, this memory hierarchy is able to reduce the external memory bandwidth by a factor of 578. A case study for the proposed hierarchy, using a 32×32 search window and an 8×8 block size, was implemented and prototyped on a Virtex 4 FPGA. The results show that it is possible to reach 38 frames per second when processing full HD frames (1920×1080 pixels) using nearly 299 Mbytes per second of external memory bandwidth.
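The bandwidth saving can be approximated with back-of-envelope accounting: in the worst case every candidate block is refetched from external memory, while with full on-chip reuse each reference pixel crosses the external bus once per frame. This is a rough model, not the paper's exact accounting, but with the 32×32 window and 8×8 blocks it lands in the same order of magnitude as the reported factor of 578.

```python
def me_bandwidth_no_reuse(width, height, block, window, fps):
    """Worst case: every candidate block inside the search window is
    refetched from external memory (1 byte per luma pixel)."""
    blocks = (width // block) * (height // block)
    candidates = (window - block + 1) ** 2   # full-search positions
    return blocks * candidates * block * block * fps

def me_bandwidth_full_reuse(width, height, block, window, fps):
    """Lower bound with an on-chip hierarchy: each pixel of the
    border-extended reference frame is fetched only once."""
    border = (window - block) // 2
    return (width + 2 * border) * (height + 2 * border) * fps

worst = me_bandwidth_no_reuse(1920, 1080, 8, 32, 38)
best = me_bandwidth_full_reuse(1920, 1080, 8, 32, 38)
print(round(worst / best))   # reuse factor of the hierarchy
```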
Estimating the costs of induced abortion in Uganda: A model-based analysis
2011-01-01
Background The demand for induced abortions in Uganda is high despite legal and moral proscriptions. Abortion seekers usually go to illegal, hidden clinics where procedures are performed in unhygienic environments by under-trained practitioners. These abortions, which are usually unsafe, lead to a high rate of severe complications and use of substantial, scarce healthcare resources. This study was performed to estimate the costs associated with induced abortions in Uganda. Methods A decision tree was developed to represent the consequences of induced abortion and estimate the costs of an average case. Data were obtained from a primary chart abstraction study, an on-going prospective study, and the published literature. Societal costs, direct medical costs, direct non-medical costs, indirect (productivity) costs, costs to patients, and costs to the government were estimated. Monte Carlo simulation was used to account for uncertainty. Results The average societal cost per induced abortion (95% credibility range) was $177 ($140-$223). This is equivalent to $64 million in annual national costs. Of this, the average direct medical cost was $65 ($49-$86) and the average direct non-medical cost was $19 ($16-$23). The average indirect cost was $92 ($57-$139). Patients incurred $62 ($46-$83) on average while government incurred $14 ($10-$20) on average. Conclusion Induced abortions are associated with substantial costs in Uganda and patients incur the bulk of the healthcare costs. This reinforces the case made by other researchers: that efforts by the government to reduce unsafe abortions by increasing contraceptive coverage or providing safe, legal abortions are critical. PMID:22145859
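The Monte Carlo step amounts to sampling each cost component from a distribution and summarizing the total. The sketch below uses gamma distributions whose means echo the abstract's point estimates; the distribution family and spreads are assumptions, not the study's inputs.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000

# Cost components in US$; gamma means match the abstract's point estimates,
# shapes/spreads are invented.
direct_medical = rng.gamma(shape=10.0, scale=6.5, size=N)        # mean 65
direct_nonmedical = rng.gamma(shape=10.0, scale=1.9, size=N)     # mean 19
indirect = rng.gamma(shape=6.0, scale=92.0 / 6.0, size=N)        # mean 92

societal = direct_medical + direct_nonmedical + indirect
mean = societal.mean()
lo, hi = np.percentile(societal, [2.5, 97.5])
print(round(mean), round(lo), round(hi))
```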
Energy Technology Data Exchange (ETDEWEB)
Wang, Peijuan; Xie, Donghui; Zhou, Yuyu; E, Youhao; Zhu, Qijiang
2014-01-16
The ecological structure in the arid and semi-arid region of Northwest China, with forest, grassland, agriculture, Gobi, and desert, is complex, vulnerable, and unstable. Maintaining this ecological structure and improving its ecological function is a challenging, long-term task. Net primary productivity (NPP) modeling can help to improve the understanding of the ecosystem and, therefore, improve ecological efficiency. The boreal ecosystem productivity simulator (BEPS) model provides the possibility of NPP modeling in terrestrial ecosystems, but it has some limitations for application in arid and semi-arid regions. In this paper we improve the water cycle of the BEPS model, by adding the processes of infiltration and surface runoff, to make it applicable in arid and semi-arid regions. We model the NPP of forest, grass, and crop in Gansu Province, an experimental area in Northwest China, for 2003 using the improved BEPS model, parameterized with moderate-resolution remote sensing imagery and meteorological data. The modeled NPP using improved BEPS agrees better with the ground measurements in Qilian Mountain than that with original BEPS, with a higher R² of 0.746 and lower root mean square error (RMSE) of 46.53 gC/m² compared to an R² of 0.662 and RMSE of 60.19 gC/m² from original BEPS. The modeled NPP of the three vegetation types using improved BEPS shows evident differences compared to that using original BEPS, with the highest difference ratio of 9.21% in forest and the lowest value of 4.29% in crop. The difference ratios between vegetation types reflect their differing dependence on natural water sources. The modeled NPP in all five geographic zones using improved BEPS is higher than with original BEPS, with higher difference ratios in dry zones and lower values in wet zones.
Saha, Bhaskar (Inventor); Goebel, Kai F. (Inventor)
2012-01-01
This invention develops a mathematical model to describe battery behavior during individual discharge cycles as well as over its cycle life. The basis for the form of the model has been linked to the internal processes of the battery and validated using experimental data. Effects of temperature and load current have also been incorporated into the model. Subsequently, the model has been used in a Particle Filtering framework to make predictions of remaining useful life for individual discharge cycles as well as for cycle life. The prediction performance was found to be satisfactory as measured by performance metrics customized for prognostics for a sample case. The work presented here provides initial steps towards a comprehensive health management solution for energy storage devices.
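A minimal particle-filter sketch of the remaining-useful-life idea described above, using a toy exponential capacity-fade model as a stand-in for the patent's validated battery model; the fade rate, noise level, and end-of-life threshold are all assumptions.

```python
import math
import random

random.seed(0)

# Toy degradation model (illustrative stand-in): capacity C_k = C0 * exp(-lam * k).
TRUE_LAM, C0, EOL = 0.002, 1.0, 0.8   # hypothetical fade rate and EOL threshold

def measure(k):
    """Noisy capacity measurement at discharge cycle k."""
    return C0 * math.exp(-TRUE_LAM * k) + random.gauss(0, 0.005)

# Particle filter over the unknown fade rate lam.
N = 2000
particles = [random.uniform(0.0005, 0.005) for _ in range(N)]
weights = [1.0 / N] * N
for k in range(1, 61):                 # assimilate 60 cycles of measurements
    z = measure(k)
    likes = [math.exp(-((z - C0 * math.exp(-p * k)) ** 2) / (2 * 0.005 ** 2))
             for p in particles]
    weights = [w * l for w, l in zip(weights, likes)]
    s = sum(weights)
    weights = [w / s for w in weights]
    # Systematic resampling when the effective sample size collapses.
    if 1.0 / sum(w * w for w in weights) < N / 2:
        new, u, i, cdf = [], random.random() / N, 0, weights[0]
        for _ in range(N):
            while u > cdf and i < N - 1:
                i += 1
                cdf += weights[i]
            new.append(particles[i] + random.gauss(0, 5e-5))  # small jitter
            u += 1.0 / N
        particles, weights = new, [1.0 / N] * N

lam_hat = sum(p * w for p, w in zip(particles, weights))
rul = math.log(C0 / EOL) / lam_hat - 60   # cycles left until capacity < EOL
print(f"estimated fade rate {lam_hat:.4f}, RUL ~ {rul:.0f} cycles")
```

The prediction step is the inversion of the fade model at the estimated rate; in the patent's framework the same filter would run over the parameters of the physics-linked battery model instead.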
International Nuclear Information System (INIS)
Hongo, Shozo; Yamaguchi, Hiroshi; Takeshita, Hiroshi; Iwai, Satoshi.
1994-01-01
A computer program named IDES was developed in BASIC for a personal computer and translated into C for an engineering workstation. IDES carries out the internal dose calculations described in ICRP Publication 30 and implements the transformation method, an empirical method for estimating absorbed fractions for physiques differing from the ICRP Reference Man. The program consists of three tasks: production of SAF for Japanese subjects, including children; production of SEE, Specific Effective Energy; and calculation of effective dose equivalents. Each task and its corresponding data file appears as a module, to accommodate future revisions of the related data. The usefulness of IDES is illustrated with the case of five Japanese age groups orally ingesting Co-60 or Mn-54. (author)
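The pipeline described (SAF, then SEE, then effective dose equivalent) ends in a weighted sum of the ICRP Publication 30 form. A minimal sketch, with placeholder numbers rather than values from the IDES data files:

```python
# ICRP Publication 30 scheme: committed dose equivalent per target organ
#   H_T = 1.6e-10 * US * SEE(T <- S), summed over source organs S,
# then the effective dose equivalent H_E = sum over targets of w_T * H_T.
# All organ values below are illustrative placeholders.
SV_PER_MEV_G = 1.6e-10   # converts (MeV/g per transformation) to Sv

# {target organ: (tissue weighting factor w_T, {source: (US, SEE)})}
# US  = number of transformations in the source organ (dimensionless)
# SEE = specific effective energy, MeV per gram per transformation
organs = {
    "lungs": (0.12, {"lungs": (3.0e12, 2.1e-6)}),
    "liver": (0.05, {"liver": (1.5e12, 1.4e-6)}),
}

h_e = 0.0
for target, (w_t, sources) in organs.items():
    h_t = sum(SV_PER_MEV_G * us * see for us, see in sources.values())
    h_e += w_t * h_t
print(f"committed effective dose equivalent: {h_e:.2e} Sv")
```

In IDES the SAF module would supply the physique-adjusted absorbed fractions from which the SEE values are built.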
Directory of Open Access Journals (Sweden)
Andrew E. Suyker
2013-11-01
Full Text Available Remote sensing techniques that provide synoptic and repetitive observations over large geographic areas have become increasingly important in studying the role of agriculture in global carbon cycles. However, it is still challenging to model crop yields based on remotely sensed data due to the variation in radiation use efficiency (RUE) across crop types and the effects of spatial heterogeneity. In this paper, we propose a production efficiency model-based method to estimate corn and soybean yields with MODerate Resolution Imaging Spectroradiometer (MODIS) data by explicitly handling the following two issues: (1) field-measured RUE values for corn and soybean are applied to relatively pure pixels instead of the biome-wide RUE value prescribed in the MODIS vegetation productivity product (MOD17); and (2) contributions to productivity from vegetation other than crops in mixed pixels are deducted at the level of MODIS resolution. Our estimated yields statistically correlate with the national survey data for rainfed counties in the Midwestern US with low errors for both corn (R2 = 0.77; RMSE = 0.89 MT/ha) and soybeans (R2 = 0.66; RMSE = 0.38 MT/ha). Because the proposed algorithm does not require any retrospective analysis that constructs empirical relationships between the reported yields and remotely sensed data, it could monitor crop yields over large areas.
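The production-efficiency logic the paper builds on (a crop-specific RUE applied to absorbed PAR, summed over the season and converted to grain yield) can be sketched as follows; the RUE, harvest index, and inputs are illustrative assumed values, not those of the study.

```python
RUE_CORN = 3.5       # g dry matter per MJ absorbed PAR (assumed, not the
                     # field-measured value used in the paper)
HARVEST_INDEX = 0.5  # fraction of biomass ending up as grain (assumed)

def seasonal_yield(par_mj, fpar):
    """Grain yield (t/ha) from daily PAR (MJ/m2/day) and the fraction of
    PAR absorbed by the crop (e.g., derived from MODIS reflectance)."""
    biomass = sum(RUE_CORN * p * f for p, f in zip(par_mj, fpar))  # g DM/m2
    return biomass * HARVEST_INDEX * 0.01                          # g/m2 -> t/ha

# Hypothetical 120-day season with constant daily PAR and absorbed fraction.
y = seasonal_yield([10.0] * 120, [0.5] * 120)
print(round(y, 1))  # -> 10.5
```

The paper's refinement amounts to choosing RUE per crop type for pure pixels and subtracting the non-crop share of absorbed PAR in mixed pixels before this summation.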
1980-08-01
If the mean of the response variable is denoted by Ȳ, the total sum of squares of deviations from that mean is defined by SSTO = Σ_{i=1}^{n} (Y_i − Ȳ)² (2.6), and the regression sum of squares by SSR = SSTO − SSE (2.7). A selection criterion is a rule according to which a certain model out of the 2^p possible models is labeled "best". The criteria are discussed next. 1. The R² Criterion. The coefficient of determination is defined by R² = 1 − SSE/SSTO (2.8). It is clear that R² is the proportion of the total variation in Y explained by the regression model.
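The quantities in this passage (SSTO, SSE, and the R² criterion) can be computed directly; a small sketch with made-up data:

```python
# R^2 = 1 - SSE/SSTO, where SSTO is the total sum of squares about the mean
# and SSE is the residual sum of squares of the fitted model.
def r_squared(y, y_hat):
    n = len(y)
    y_bar = sum(y) / n
    ssto = sum((yi - y_bar) ** 2 for yi in y)              # total SS
    sse = sum((yi - fi) ** 2 for yi, fi in zip(y, y_hat))  # residual SS
    return 1.0 - sse / ssto

# Illustrative observations and fitted values.
y = [1.0, 2.0, 3.0, 4.0]
y_hat = [1.1, 1.9, 3.2, 3.8]
print(round(r_squared(y, y_hat), 3))  # -> 0.98
```

As a selection criterion, the model among the 2^p candidates with the highest R² (or an adjusted variant penalizing extra parameters) would be labeled "best".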
Spatial Rice Yield Estimation Based on MODIS and Sentinel-1 SAR Data and ORYZA Crop Growth Model
Directory of Open Access Journals (Sweden)
Tri D. Setiyono
2018-02-01
Full Text Available Crop insurance is a viable solution to reduce the vulnerability of smallholder farmers to risks from pest and disease outbreaks, extreme weather events, and market shocks that threaten their household food and income security. In developing and emerging countries, the implementation of area yield-based insurance, the form of crop insurance preferred by clients and industry, is constrained by the limited availability of detailed historical yield records. Remote-sensing technology can help to fill this gap by providing an unbiased and replicable source of the needed data. This study is dedicated to demonstrating and validating the methodology of remote sensing and crop growth model-based rice yield estimation with the intention of historical yield data generation for application in crop insurance. The developed system combines MODIS and SAR-based remote-sensing data to generate spatially explicit inputs for rice using a crop growth model. MODIS reflectance data were used to generate multitemporal LAI maps using the inverted Radiative Transfer Model (RTM). SAR data were used to generate rice area maps using MAPScape-RICE to mask LAI map products for further processing, including smoothing with a logistic function and running yield simulation using the ORYZA crop growth model facilitated by the Rice Yield Estimation System (Rice-YES). Results from this study indicate that the approach of assimilating MODIS and SAR data into a crop growth model can generate well-adjusted yield estimates that adequately describe spatial yield distribution in the study area while reliably replicating official yield data with root mean square error, RMSE, of 0.30 and 0.46 t ha−1 (normalized root mean square error, NRMSE, of 5% and 8%) for the 2016 spring and summer seasons, respectively, in the Red River Delta of Vietnam, as evaluated at district level aggregation. The information from remote-sensing technology was also useful for identifying geographic locations with
DEFF Research Database (Denmark)
Frutiger, Jerome; Abildskov, Jens; Sin, Gürkan
2015-01-01
Flammability data is needed to assess the risk of fire and explosions. This study presents a new group contribution (GC) model to predict the upper flammability limit UFL of organic chemicals. Furthermore, it provides a systematic method for outlier treatment in order to improve the parameter...
Potential human health risk from chemical exposure must often be assessed for conditions for which suitable human or animal data are not available, requiring extrapolation across duration and concentration. The default method for exposure-duration adjustment is based on Haber's r...
Shelley, Mack; Gonwa-Reeves, Christopher; Baenziger, Joan; Seefeld, Ashley; Hand, Brian; Therrien, William; Villanueva, Mary Grace; Taylor, Jonte
2012-01-01
The purpose of this paper is to examine the impact of implementation of the Science Writing Heuristic (SWH) approach at 5th grade level in the public school system in Iowa as measured by Cornell Critical Thinking student test scores. This is part of a project that overall tests the efficacy of the SWH inquiry-based approach to build students'…
Estimation of Financial Agent-Based Models with Simulated Maximum Likelihood
Czech Academy of Sciences Publication Activity Database
Kukačka, Jiří; Baruník, Jozef
2017-01-01
Roč. 85, č. 1 (2017), s. 21-45 ISSN 0165-1889 R&D Projects: GA ČR(CZ) GBP402/12/G097 Institutional support: RVO:67985556 Keywords: heterogeneous agent model * simulated maximum likelihood * switching Subject RIV: AH - Economics OBOR OECD: Finance Impact factor: 1.000, year: 2016 http://library.utia.cas.cz/separaty/2017/E/kukacka-0478481.pdf
Assessing Breast Cancer Risk Estimates Based on the Gail Model and Its Predictors in Qatari Women.
Bener, Abdulbari; Çatan, Funda; El Ayoubi, Hanadi R; Acar, Ahmet; Ibrahim, Wanis H
2017-07-01
The Gail model is the most widely used breast cancer risk assessment tool. An accurate assessment of an individual's breast cancer risk is very important for prevention of the disease and for health care providers deciding on chemoprevention for high-risk women in clinical practice in Qatar. To assess the breast cancer risk among the Arab women population in Qatar using the Gail model and provide a global comparison of risk assessment. In this cross-sectional study of 1488 women (aged 35 years and older), we used the Gail Risk Assessment Tool to assess the risk of developing breast cancer. Sociodemographic features such as age, lifestyle habits, body mass index, breast-feeding duration, consanguinity among parents, and family history of breast cancer were considered as possible risks. The mean age of the study population was 47.8 ± 10.8 years. Qatari women and other Arab women constituted 64.7% and 35.3% of the study population, respectively. The mean 5-year and lifetime breast cancer risks were 1.12 ± 0.52 and 10.57 ± 3.1, respectively. Consanguineous marriage among parents was seen in 30.6% of participants. We found a relationship between the 5-year and lifetime risks of breast cancer and variables such as age, age at menarche, gravidity, parity, body mass index, family history of cancer, menopause age, occupation, and level of education. Linear regression analysis identified age, age at menarche, age at first birth, family history, and age at menopause as strong predictors and significant contributing risk factors for breast cancer after adjusting for ethnicity, parity, and other variables. The current study is the first to evaluate the performance of the Gail model in an Arab women population in the Gulf Cooperation Council. The Gail model is an appropriate breast cancer risk assessment tool for the female population in Qatar.
Celaya, Jose R.; Saxen, Abhinav; Goebel, Kai
2012-01-01
This article discusses several aspects of uncertainty representation and management for model-based prognostics methodologies based on our experience with Kalman Filters when applied to prognostics for electronics components. In particular, it explores the implications of modeling remaining useful life prediction as a stochastic process and how it relates to uncertainty representation, management, and the role of prognostics in decision-making. A distinction between the interpretations of estimated remaining useful life probability density function and the true remaining useful life probability density function is explained and a cautionary argument is provided against mixing interpretations for the two while considering prognostics in making critical decisions.
Zhang, J L; Li, Y P; Huang, G H; Baetz, B W; Liu, J
2017-06-01
In this study, a Bayesian estimation-based simulation-optimization modeling approach (BESMA) is developed for identifying effluent trading strategies. BESMA incorporates nutrient fate modeling with soil and water assessment tool (SWAT), Bayesian estimation, and probabilistic-possibilistic interval programming with fuzzy random coefficients (PPI-FRC) within a general framework. Based on the water quality protocols provided by SWAT, posterior distributions of parameters can be analyzed through Bayesian estimation; stochastic characteristic of nutrient loading can be investigated which provides the inputs for the decision making. PPI-FRC can address multiple uncertainties in the form of intervals with fuzzy random boundaries and the associated system risk through incorporating the concept of possibility and necessity measures. The possibility and necessity measures are suitable for optimistic and pessimistic decision making, respectively. BESMA is applied to a real case of effluent trading planning in the Xiangxihe watershed, China. A number of decision alternatives can be obtained under different trading ratios and treatment rates. The results can not only facilitate identification of optimal effluent-trading schemes, but also gain insight into the effects of trading ratio and treatment rate on decision making. The results also reveal that decision maker's preference towards risk would affect decision alternatives on trading scheme as well as system benefit. Compared with the conventional optimization methods, it is proved that BESMA is advantageous in (i) dealing with multiple uncertainties associated with randomness and fuzziness in effluent-trading planning within a multi-source, multi-reach and multi-period context; (ii) reflecting uncertainties existing in nutrient transport behaviors to improve the accuracy in water quality prediction; and (iii) supporting pessimistic and optimistic decision making for effluent trading as well as promoting diversity of decision
Ames, D. P.; Osorio-Murillo, C.; Over, M. W.; Rubin, Y.
2012-12-01
The Method of Anchored Distributions (MAD) is an inverse modeling technique that is well-suited for estimation of spatially varying parameter fields using limited observations and Bayesian methods. This presentation will discuss the design, development, and testing of a free software implementation of the MAD technique using the open source DotSpatial geographic information system (GIS) framework, R statistical software, and the MODFLOW groundwater model. This new tool, dubbed MAD-GIS, is built using a modular architecture that supports the integration of external analytical tools and models for key computational processes including a forward model (e.g. MODFLOW, HYDRUS) and geostatistical analysis (e.g. R, GSLIB). The GIS-based graphical user interface provides a relatively simple way for new users of the technique to prepare the spatial domain, to identify observation and anchor points, to perform the MAD analysis using a selected forward model, and to view results. MAD-GIS uses the Managed Extensibility Framework (MEF) provided by the Microsoft .NET programming platform to support integration of different modeling and analytical tools at run-time through a custom "driver." Each driver establishes a connection with external programs through a programming interface, which provides the elements for communicating with core MAD software. This presentation gives an example of adapting MODFLOW to serve as the external forward model in MAD-GIS for inferring the distribution functions of key MODFLOW parameters. Additional drivers for other models are being developed and it is expected that the open source nature of the project will engender the development of additional model drivers by third-party scientists.
Evaluation of 2 process-based models to estimate soil N{sub 2}O emissions in eastern Canada
Energy Technology Data Exchange (ETDEWEB)
Smith, W.N.; Grant, B.B.; Desjardins, R.L. [Agriculture and Agri-Food Canada, Ottawa, ON (Canada). Eastern Cereal and Oilseed Research Centre; Rochette, P. [Agriculture and Agri-Food Canada, Sainte-Foy, PQ (Canada); Drury, C.F. [Agriculture and Agri-Food Canada, Harrow, ON (Canada); Li, C. [New Hampshire Univ., Durham, NH (United States). Inst. for the Study of Earth, Oceans, and Space
2008-04-15
This study assessed the ability of 2 process-based nitrogen (N) models to accurately estimate nitrous oxide (N{sub 2}O) emissions and auxiliary soil and hydraulic data from 2 field sites in eastern Canada. The DAYCENT model was used to simulate fluxes of carbon (C) and N between soil, vegetation, and the atmosphere on a daily basis. The model contained a submodel that considered the scheduling of management events; a parameter for considering drainage related to soil texture; a submodel that considered the effect of solar radiation on plant growth; a simulation module of seed germination as a function of soil temperature, growth and harvest; and a submodel of water table depths. The DeNitrification DeComposition (DNDC) model consisted of 4 submodels: (1) soil and climate; (2) crop vegetation; (3) decomposition; and (4) a denitrification model that operated on an hourly time step and was activated when soil moisture increased or when soil and oxygen availability decreased. Results of the comparative evaluation showed that the DNDC model accurately predicted total N{sub 2}O emissions from both test sites. However, the timing of emissions peaks was inaccurate, and emissions predictions from individual treatments were also incorrect. The DAYCENT model underpredicted emissions from most treatment regimes due to its prediction of lower mineralization rates. Simplistic soil water routines and a 1-D approach were used to overcome data limitations in both models, and results of the study suggested that the mechanisms were not able to characterize soil hydraulics in some soils. It was concluded that the mechanisms used to characterize the distribution and mineralization of N must be revised in both models after hydrology routines are optimized. 20 refs., 5 tabs., 3 figs.
Estimating Travel Time in Bank Filtration Systems from a Numerical Model Based on DTS Measurements.
des Tombe, Bas F; Bakker, Mark; Schaars, Frans; van der Made, Kees-Jan
2018-03-01
An approach is presented to determine the seasonal variations in travel time in a bank filtration system using a passive heat tracer test. The temperature in the aquifer varies seasonally because of temperature variations of the infiltrating surface water and at the soil surface. Temperature was measured with distributed temperature sensing along fiber optic cables that were inserted vertically into the aquifer with direct push equipment. The approach was applied to a bank filtration system consisting of a sequence of alternating, elongated recharge basins and rows of recovery wells. A SEAWAT model was developed to simulate coupled flow and heat transport. The model of a two-dimensional vertical cross section is able to simulate the temperature of the water at the well and the measured vertical temperature profiles reasonably well. MODPATH was used to compute flowpaths and the travel time distribution. At the study site, temporal variation of the pumping discharge was the dominant factor influencing the travel time distribution. For an equivalent system with a constant pumping rate, variations in the travel time distribution are caused by variations in the temperature-dependent viscosity. As a result, travel times increase in the winter, when a larger fraction of the water travels through the warmer, lower part of the aquifer, and decrease in the summer, when the upper part of the aquifer is warmer. © 2017 The Authors. Groundwater published by Wiley Periodicals, Inc. on behalf of National Ground Water Association.
Future oil production in Brazil - Estimates based on a Hubbert model
International Nuclear Information System (INIS)
Szklo, Alexandre; Machado, Giovani; Schaeffer, Roberto
2007-01-01
This paper forecasts oil production in Brazil, according to the Hubbert model and different probabilities for adding reserves. It analyzes why the Hubbert model might be more appropriate to the Brazilian oil industry than that of Hotelling, as it implicitly emphasizes the impacts of information and depletion on the derivative over time of the accumulated discoveries. Brazil's oil production curves indicate production peaks with a time lag of more than 15 years, depending on the certainty (degree of information) associated with the reserves. Reserves with 75% certainty peak at 3.27 Mbpd in 2020, while reserves with 50% certainty peak at 3.28 Mbpd in 2028, and with 30% certainty peak at 3.88 Mbpd in 2036. These findings show that Brazil's oil industry is in a stage where the positive impacts of information on expanding reserves (mainly through discoveries) may outstrip the negative impacts of depletion. The still-limited number of wells drilled relative to accumulated discoveries also supports this assertion. This is characteristic of frontier areas such as Brazil, and it indicates the need for ongoing exploratory efforts
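A Hubbert-style logistic production profile of the kind used in the paper can be sketched as follows. The ultimately recoverable resource and steepness below are illustrative values chosen so the peak lands near the paper's 3.28 Mbpd / 2028 scenario; they are not fitted results from the paper.

```python
import math

# Hubbert model: cumulative production Q(t) = URR / (1 + exp(-k (t - t_peak)));
# annual production is the logistic derivative, which peaks at URR * k / 4.
URR_GB = 48.0   # ultimately recoverable resource, billion barrels (assumed)
K = 0.10        # curve steepness, 1/yr (assumed)
T_PEAK = 2028

def production(t):
    """Annual production (Gb/yr) from the logistic derivative."""
    e = math.exp(-K * (t - T_PEAK))
    return URR_GB * K * e / (1.0 + e) ** 2

peak_year = max(range(1980, 2080), key=production)
peak_mbpd = production(peak_year) * 1e9 / 365 / 1e6   # Gb/yr -> Mbpd
print(peak_year, round(peak_mbpd, 2))
```

Varying URR with the reserve-certainty level (75%, 50%, 30%) shifts both the peak year and the peak rate, which is exactly the sensitivity the paper reports.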
Estimating the impact of land use change on surface energy partition based on the Noah model
Chen, Shaohui; Su, Hongbo; Zhan, Jinyan
2014-03-01
It is well known that land use has an important impact on surface energy partition. It is important to study the evolving trend of the partition of sensible heat flux (SHF) and latent heat flux (LHF) from the net radiance (NR) with land use change in the context of regional climate changes. In this paper, we studied the response of energy partition to land use using the Noah model. First, the Noah model simulation results of SHF and LHF between 2003 and 2005 were comprehensively validated using the observation data from the Changbai Mountain Station, the Xilinhot Station, and the Yucheng Station. The study domains represent three different types of land use change: excessive deforestation, aggravated grassland degeneration, and groundwater level decline, respectively. The study period was subsequently extended from 2015 through 2034, using four projected land use maps and forcing data from Princeton (2000-2004). The simulation results show that during the land use conversions, the annual average of LHF drops by 10.7%, rises by 10.1%, and drops by 11.5% for the Changbai Mountain, Inner Mongolia, and Northern China stations, respectively, while the annual average of SHF rises by 10.6%, drops by 10.1%, and drops by 11.3% for the three areas.
Demand estimation of bus as a public transport based on gravity model
Directory of Open Access Journals (Sweden)
Asmael Noor
2018-01-01
Full Text Available Bus as a public transport is a suitable service to meet the travel demand between any two zones. Baghdad faces severe traffic problems as the city grows in size and economic activity. Passengers lose substantial time commuting to work because of serious traffic jams. In recent years, car ownership has increased as income levels have risen, and cars have become a preferable mode of transport. Bus, as the only public mode of transport available, suffers from inconvenience, slowness, and inflexibility. Strong emphasis must be placed on the public transport system because it makes efficient use of limited resources, energy, and land. This study determines the demand on public bus routes using boarding/alighting values to generate a model, and assigns these demand values to the bus network. Five public routes were selected to collect the required data; ride-check and point-check surveys were conducted for each selected route. The outputs of this study were passenger demand assigned to the selected bus routes, dwell times, load factors, and headways. It is observed that R1 and R3 carry the heaviest travel demand and warrant further study to improve bus performance and transit quality. The model, developed from the limited data available, will assist transportation planners and related agencies in predicting travel demand and making decisions.
Model for traffic emissions estimation
Alexopoulos, A.; Assimacopoulos, D.; Mitsoulis, E.
A model is developed for the spatial and temporal evaluation of traffic emissions in metropolitan areas based on sparse measurements. All traffic data available are fully employed and the pollutant emissions are determined with the highest precision possible. The main roads are regarded as line sources of constant traffic parameters in the time interval considered. The method is flexible and allows for the estimation of distributed small traffic sources (non-line/area sources). The emissions from the latter are assumed to be proportional to the local population density as well as to the traffic density leading to local main arteries. The contribution of moving vehicles to air pollution in the Greater Athens Area for the period 1986-1988 is analyzed using the proposed model. Emissions and other related parameters are evaluated. Emissions from area sources were found to have a noticeable share of the overall air pollution.
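The line-source-plus-area-source accounting described above can be sketched as follows; all traffic counts, emission factors, and the population-density coefficient are invented for illustration.

```python
# Line sources: each main road contributes flow * emission factor * length.
roads = [
    # (vehicles per hour, CO emission factor g/veh/km, road length km)
    (1800, 12.0, 4.5),
    (1200, 12.0, 2.0),
]
line_g_per_h = sum(q * ef * length for q, ef, length in roads)

# Distributed (non-line/area) sources: proportional to local population
# density, per the model's assumption. The coefficient is hypothetical.
pop_density = 90.0   # inhabitants per hectare in the district
AREA_COEFF = 5.0     # g CO per hour per inhabitant-hectare (assumed)
area_g_per_h = AREA_COEFF * pop_density

total_kg_per_h = (line_g_per_h + area_g_per_h) / 1000.0
print(round(total_kg_per_h, 2))  # -> 126.45
```

Repeating this sum per time interval and per grid cell yields the spatial and temporal emission fields the model produces for the Greater Athens Area.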
Koike, Narihiko; Ii, Satoshi; Yoshinaga, Tsukasa; Nozaki, Kazunori; Wada, Shigeo
2017-11-07
This paper presents a novel inverse estimation approach for the active contraction stresses of tongue muscles during speech. The proposed method is based on variational data assimilation using a mechanical tongue model and 3D tongue surface shapes for speech production. The mechanical tongue model considers nonlinear hyperelasticity, finite deformation, actual geometry from computed tomography (CT) images, and anisotropic active contraction by muscle fibers, the orientations of which are ideally determined using anatomical drawings. The tongue deformation is obtained by solving a stationary force-equilibrium equation using a finite element method. An inverse problem is established to find the combination of muscle contraction stresses that minimizes the Euclidean distance of the tongue surfaces between the mechanical analysis and CT results of speech production, where a signed-distance function represents the tongue surface. Our approach is validated through an ideal numerical example and extended to the real-world case of two Japanese vowels, /ʉ/ and /ɯ/. The results capture the target shape completely and provide an excellent estimation of the active contraction stresses in the ideal case, and exhibit similar tendencies as in previous observations and simulations for the actual vowel cases. The present approach can reveal the relative relationship among the muscle contraction stresses in similar utterances with different tongue shapes, and enables the investigation of the coordination of tongue muscles during speech using only the deformed tongue shape obtained from medical images. This will enhance our understanding of speech motor control. Copyright © 2017 Elsevier Ltd. All rights reserved.
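The inverse formulation (search for the contraction stresses that minimize the mismatch between the simulated and observed tongue surfaces) can be illustrated with a toy one-parameter stand-in for the finite-element model; the forward model below is entirely hypothetical.

```python
import math

# Toy analogue of the paper's variational estimation: a scalar "muscle
# stress" s deforms a model surface, and we search for the s whose
# predicted surface best matches an observed one in a least-squares sense.
def forward(s, x):
    """Predicted surface height at position x for contraction stress s
    (a made-up 1-D stand-in for the mechanical tongue model)."""
    return math.sin(x) * (1.0 + 0.3 * s)

xs = [i * 0.1 for i in range(32)]
S_TRUE = 1.7
observed = [forward(S_TRUE, x) for x in xs]   # stands in for the CT surface

def objective(s):
    """Sum of squared surface mismatches (the data-assimilation cost)."""
    return sum((forward(s, x) - o) ** 2 for x, o in zip(xs, observed))

# Grid search over the admissible stress range; the paper instead solves a
# multi-muscle optimization against a signed-distance representation.
best = min((s / 1000.0 for s in range(0, 3001)), key=objective)
print(round(best, 3))
```

With several muscles, the scalar s becomes a vector of contraction stresses and a gradient-based optimizer replaces the grid search, but the structure of the cost function is the same.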
Directory of Open Access Journals (Sweden)
Mikko Niilo-Rämä
2014-06-01
Full Text Available A novel estimator for estimating the mean length of fibres is proposed for censored data observed in square shaped windows. Instead of observing the fibre lengths, we observe the ratio between the intensity estimates of minus-sampling and plus-sampling. It is well-known that both intensity estimators are biased. In the current work, we derive the ratio of these biases as a function of the mean length assuming a Boolean line segment model with exponentially distributed lengths and uniformly distributed directions. Having the observed ratio of the intensity estimators, the inverse of the derived function is suggested as a new estimator for the mean length. For this estimator, an approximation of its variance is derived. The accuracies of the approximations are evaluated by means of simulation experiments. The novel method is compared to other methods and applied to real-world industrial data on crystalline nanocellulose.
Hartman, J.S.; Weisberg, P.J.; Pillai, R.; Ericksen, J.A.; Kuiken, T.; Lindberg, S.E.; Zhang, H.; Rytuba, J.J.; Gustin, M.S.
2009-01-01
Ecosystems that have low mercury (Hg) concentrations (i.e., not enriched or impacted by geologic or anthropogenic processes) cover most of the terrestrial surface area of the earth yet their role as a net source or sink for atmospheric Hg is uncertain. Here we use empirical data to develop a rule-based model implemented within a geographic information system framework to estimate the spatial and temporal patterns of Hg flux for semiarid deserts, grasslands, and deciduous forests representing 45% of the continental United States. This exercise provides an indication of whether these ecosystems are a net source or sink for atmospheric Hg as well as a basis for recommendation of data to collect in future field sampling campaigns. Results indicated that soil alone was a small net source of atmospheric Hg and that emitted Hg could be accounted for based on Hg input by wet deposition. When foliar assimilation and wet deposition are added to the area estimate of soil Hg flux these biomes are a sink for atmospheric Hg. © 2009 American Chemical Society.
Energy Technology Data Exchange (ETDEWEB)
Sayyar-Rodsari, Bijan; Schweiger, Carl; /SLAC /Pavilion Technologies, Inc., Austin, TX
2010-08-25
Timely estimation of deviations from optimal performance in complex systems and the ability to identify corrective measures in response to the estimated parameter deviations has been the subject of extensive research over the past four decades. The implications in terms of lost revenue from costly industrial processes, operation of large-scale public works projects and the volume of the published literature on this topic clearly indicates the significance of the problem. Applications range from manufacturing industries (integrated circuits, automotive, etc.), to large-scale chemical plants, pharmaceutical production, power distribution grids, and avionics. In this project we investigated a new framework for building parsimonious models that are suited for diagnosis and fault estimation of complex technical systems. We used Support Vector Machines (SVMs) to model potentially time-varying parameters of a First-Principles (FP) description of the process. The combined SVM & FP model was built (i.e. model parameters were trained) using constrained optimization techniques. We used the trained models to estimate faults affecting simulated beam lifetime. In the case where a large number of process inputs are required for model-based fault estimation, the proposed framework performs an optimal nonlinear principal component analysis of the large-scale input space, and creates a lower dimension feature space in which fault estimation results can be effectively presented to the operation personnel. To fulfill the main technical objectives of the Phase I research, our Phase I efforts have focused on: (1) SVM Training in a Combined Model Structure - We developed the software for the constrained training of the SVMs in a combined model structure, and successfully modeled the parameters of a first-principles model for beam lifetime with support vectors. (2) Higher-order Fidelity of the Combined Model - We used constrained training to ensure that the output of the SVM (i.e. the
Chen, Yanling; Gong, Adu; Li, Jing; Wang, Jingmei
2017-04-01
Accurate crop growth monitoring and yield prediction are important for improving the sustainable development of agriculture and ensuring national food security. Remote sensing observation and crop growth simulation models are two technologies with high potential for application in crop growth monitoring and yield forecasting in recent years. However, each has limitations, in mechanism or in regional application respectively. Remote sensing information cannot reveal crop growth and development, the inner mechanism of yield formation, or the effects of environmental meteorological conditions. Crop growth simulation models have difficulties in data acquisition and parameterization when moving from single-point to regional application. In order to make good use of the advantages of both technologies, the coupling of remote sensing information and crop growth simulation models has been studied. Filtering and optimizing model parameters are key to yield estimation by remote sensing and crop models based on regional crop assimilation. Winter wheat in GaoCheng was selected as the experimental object in this paper. The essential data were then collected, including biochemical, farmland environmental, and meteorological data for several critical growing periods. Meanwhile, imagery from the environmental mitigation small satellite (HJ-CCD) was obtained. The research work and major conclusions are as follows. (1) Seven vegetation indices were selected to retrieve LAI, and a linear regression model was built between each of these indices and the measured LAI. The results show that the EVI model had the highest accuracy (R2=0.964 at anthesis stage and R2=0.920 at filling stage); thus, EVI was selected as the optimal vegetation index for predicting LAI in this paper. (2) The EFAST method was adopted in this paper to conduct a sensitivity analysis of the 26 initial parameters of the WOFOST model, and then a sensitivity index was constructed
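Step (1) above amounts to ordinary least squares between a vegetation index and field-measured LAI, scored by R². A minimal sketch with made-up sample values (not the study's data):

```python
import numpy as np

# Hypothetical paired samples of EVI and field-measured LAI at anthesis
# (values illustrative, not from the study).
evi = np.array([0.31, 0.42, 0.55, 0.61, 0.68, 0.74, 0.80])
lai = np.array([1.2, 2.0, 3.1, 3.6, 4.2, 4.8, 5.3])

# Ordinary least squares: LAI = a * EVI + b
a, b = np.polyfit(evi, lai, deg=1)

# Coefficient of determination R^2 of the fit.
pred = a * evi + b
ss_res = np.sum((lai - pred) ** 2)
ss_tot = np.sum((lai - lai.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
print(round(float(r2), 3))
```

In the study this comparison is repeated for each of the seven indices and the one with the highest R² (EVI) is retained.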
Thomas J. Brandeis; Maria Del Rocio; Suarez Rozo
2005-01-01
Total aboveground live tree biomass in Puerto Rican lower montane wet, subtropical wet, subtropical moist and subtropical dry forests was estimated using data from two forest inventories and published regression equations. Multiple potentially-applicable published biomass models existed for some forested life zones, and their estimates tended to diverge with increasing...
DEFF Research Database (Denmark)
Badger, Jake; Frank, Helmut; Hahmann, Andrea N.
2014-01-01
This paper demonstrates that a statistical dynamical method can be used to accurately estimate the wind climate at a wind farm site. In particular, postprocessing of mesoscale model output allows an efficient calculation of the local wind climate required for wind resource estimation at a wind...
International Nuclear Information System (INIS)
Bachoc, F.
2013-01-01
The parametric estimation of the covariance function of a Gaussian process is studied in the framework of the Kriging model. Maximum Likelihood and Cross Validation estimators are considered. The correctly specified case, in which the covariance function of the Gaussian process does belong to the parametric set used for estimation, is first studied in an increasing-domain asymptotic framework. The sampling considered is a randomly perturbed multidimensional regular grid. Consistency and asymptotic normality are proved for the two estimators. It is then shown that strong perturbations of the regular grid are always beneficial to Maximum Likelihood estimation. The incorrectly specified case, in which the covariance function of the Gaussian process does not belong to the parametric set used for estimation, is then studied. It is shown that Cross Validation is more robust than Maximum Likelihood in this case. Finally, two applications of the Kriging model with Gaussian processes are carried out on industrial data. For a validation problem of the friction model of the thermal-hydraulic code FLICA 4, where experimental results are available, it is shown that Gaussian process modeling of the FLICA 4 code model error considerably improves its predictions. For a metamodeling problem of the GERMINAL thermal-mechanical code, the advantage of the Kriging model with Gaussian processes over neural network methods is shown. (author)
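A minimal sketch of Maximum Likelihood covariance estimation on a randomly perturbed regular grid, using scikit-learn (whose `fit` maximizes the log marginal likelihood over kernel hyperparameters); the true correlation length, grid size, and perturbation amplitude are all illustrative:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)

# Randomly perturbed 1-D regular grid (the sampling design studied above).
grid = np.arange(0.0, 20.0, 1.0)
x = (grid + rng.uniform(-0.4, 0.4, size=grid.size)).reshape(-1, 1)

# Sample a GP with known correlation length 2.0 (plus tiny jitter).
true_kernel = RBF(length_scale=2.0)
K = true_kernel(x) + 1e-6 * np.eye(x.shape[0])
y = rng.multivariate_normal(np.zeros(x.shape[0]), K)

# Maximum Likelihood estimation: fitting maximizes the log marginal
# likelihood over the kernel hyperparameters.
gpr = GaussianProcessRegressor(
    kernel=RBF(length_scale=1.0) + WhiteKernel(1e-5),
    n_restarts_optimizer=5,
).fit(x, y)
print(gpr.kernel_)
```

The Cross Validation estimator discussed above would instead minimize a leave-one-out prediction criterion over the same hyperparameters.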
Daniell, James; Pomonis, Antonios; Gunasekera, Rashmin; Ishizawa, Oscar; Gaspari, Maria; Lu, Xijie; Aubrecht, Christoph; Ungar, Joachim
2017-04-01
In order to quantify disaster risk, there is a need to determine consistent and reliable economic values of the built assets exposed to natural hazards at the national or subnational level. The value of the built stock of a city or a country is critical for risk modelling applications, as it allows the upper bound on potential losses to be established. Under the World Bank probabilistic disaster risk assessment - Country Disaster Risk Profiles (CDRP) Program and rapid post-disaster loss analyses in CATDAT, key methodologies have been developed that quantify the asset exposure of a country. In this study, we assess two complementary methods of determining the value of building stock: capital investment data versus aggregated ground-up values based on built area and unit cost of construction analyses. Different approaches to modelling exposure around the world have resulted in estimated values of built assets of some countries differing by order(s) of magnitude. Using the aforementioned methodology of comparing investment-based capital stock and bottom-up unit-cost-of-construction values per square meter of assets, a suitable range of capital stock estimates for built assets has been created. A blind test format was undertaken to compare the two types of approaches, top-down (investment) and bottom-up (construction cost per unit). In many cases, census, demographic, engineering and construction cost data are key for bottom-up calculations from previous years. Similarly, for the top-down investment approach, distributed GFCF (Gross Fixed Capital Formation) data are required. Over the past few years, numerous studies have been undertaken through the World Bank Caribbean and Central America disaster risk assessment program adopting this methodology, initially developed by Gunasekera et al. (2015). The range of values of the building stock is tested for around 15 countries. In addition, three types of costs - Reconstruction cost
Wu, Man Li C.; Schubert, Siegfried; Lin, Ching I.; Stajner, Ivanka; Einaudi, Franco (Technical Monitor)
2000-01-01
A method is developed for validating model-based estimates of atmospheric moisture and ground temperature using satellite data. The approach relates errors in estimates of clear-sky longwave fluxes at the top of the Earth-atmosphere system to errors in geophysical parameters. The fluxes include clear-sky outgoing longwave radiation (CLR) and radiative flux in the window region between 8 and 12 microns (RadWn). The approach capitalizes on the availability of satellite estimates of CLR and RadWn and other auxiliary satellite data, and multiple global four-dimensional data assimilation (4-DDA) products. The basic methodology employs off-line forward radiative transfer calculations to generate synthetic clear-sky longwave fluxes from two different 4-DDA data sets. Simple linear regression is used to relate the clear-sky longwave flux discrepancies to discrepancies in ground temperature ((delta)T(sub g)) and broad-layer integrated atmospheric precipitable water ((delta)pw). The slopes of the regression lines define sensitivity parameters which can be exploited to help interpret mismatches between satellite observations and model-based estimates of clear-sky longwave fluxes. For illustration, we analyze the discrepancies in the clear-sky longwave fluxes between an early implementation of the Goddard Earth Observing System Data Assimilation System (GEOS2) and a recent operational version of the European Centre for Medium-Range Weather Forecasts data assimilation system. The analysis of the synthetic clear-sky flux data shows that simple linear regression employing (delta)T(sub g) and broad-layer (delta)pw provides a good approximation to the full radiative transfer calculations, typically explaining more than 90% of the 6-hourly variance in the flux differences. These simple regression relations can be inverted to "retrieve" the errors in the geophysical parameters. Uncertainties (normalized by standard deviation) in the monthly mean retrieved parameters range from 7% for
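The regression-and-inversion scheme can be sketched as follows; the sensitivity matrix, noise levels, and variable scales are invented for illustration, not the paper's values:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500

# Synthetic discrepancies in ground temperature (K) and precipitable
# water (cm) between two assimilation products (illustrative values only).
dTg = rng.normal(0.0, 1.0, n)
dpw = rng.normal(0.0, 0.3, n)

# Hypothetical sensitivities of the two clear-sky fluxes (W m^-2 per unit).
S = np.array([[2.0, -8.0],    # dCLR   ~  2.0*dTg -  8.0*dpw
              [1.5, -12.0]])  # dRadWn ~  1.5*dTg - 12.0*dpw
F = np.column_stack([dTg, dpw]) @ S.T + rng.normal(0.0, 0.1, (n, 2))

# Step 1: estimate the sensitivity matrix by least squares (regression slopes).
S_hat, *_ = np.linalg.lstsq(np.column_stack([dTg, dpw]), F, rcond=None)
S_hat = S_hat.T

# Step 2: invert the relation to "retrieve" parameter errors from flux
# differences, as described above.
retrieved = F @ np.linalg.inv(S_hat).T
print(round(float(np.corrcoef(retrieved[:, 0], dTg)[0, 1]), 3))
```

With two fluxes and two parameters the inversion is an exact 2x2 solve; with more fluxes it would become another least-squares problem.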
Directory of Open Access Journals (Sweden)
Charreire Hélène
2011-01-01
Background: There is growing interest in the study of the relationships between individual health-related behaviours (e.g. food intake and physical activity) and measurements of spatial accessibility to the associated facilities (e.g. food outlets and sport facilities). The aim of this study is to propose measurements of spatial accessibility to facilities on the regional scale, using aggregated data. We first used a potential accessibility model that partly overcomes the limitations of the most frequently used indices, such as the count of opportunities within a given neighbourhood. We then propose an extended model that takes into account both home-based and work-based accessibility for a commuting population. Results: Potential accessibility estimation provides a very different picture of the accessibility levels experienced by the population than the more classical "number of opportunities per census tract" index. The extended model for commuters increases the overall accessibility levels, but this increase differs according to the urbanisation level. The strongest increases are observed in some rural municipalities with initially low accessibility levels. Distance to major urban poles seems to play an essential role. Conclusions: Accessibility is a multi-dimensional concept that should integrate some aspects of travel behaviour. Our work supports the evidence that the choice of appropriate accessibility indices including both residential and non-residential environmental features is necessary. Such models have potential implications for providing relevant information to policy-makers in the field of public health.
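A toy sketch of a potential (gravity-type) accessibility index and its commuter extension; facility sizes, distances, the decay parameter, and workplace assignments are all illustrative:

```python
import numpy as np

# 4 census tracts and 3 facilities. Potential accessibility:
# A_i = sum_j S_j * f(d_ij), with distance decay f(d) = exp(-beta * d).
supply = np.array([10.0, 4.0, 6.0])          # facility sizes (illustrative)
dist = np.array([[1.0, 5.0, 9.0],            # distances in km (illustrative)
                 [4.0, 2.0, 7.0],
                 [8.0, 3.0, 2.0],
                 [12.0, 9.0, 4.0]])
beta = 0.5

access = (supply * np.exp(-beta * dist)).sum(axis=1)

# Extended version for commuters: blend home-based and work-based
# accessibility using each tract's dominant workplace tract.
work_tract = np.array([1, 0, 3, 2])   # hypothetical workplace assignment
w = 0.4                               # share of time anchored at the workplace
access_commuter = (1 - w) * access + w * access[work_tract]
print(np.round(access, 2), np.round(access_commuter, 2))
```

The "count of opportunities per tract" index criticized above would instead threshold `dist` and sum, discarding the distance-decay information.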
A method for state-of-charge estimation of Li-ion batteries based on multi-model switching strategy
International Nuclear Information System (INIS)
Wang, Yujie; Zhang, Chenbin; Chen, Zonghai
2015-01-01
Highlights: • A multi-model switching SOC estimation method is built for Li-ion batteries. • An improved interpretative structural modeling method is built for model switching. • A bus-delay feedback strategy is applied to improve real-time performance. • The EKF method is used for SOC estimation to improve estimation accuracy. - Abstract: Accurate state-of-charge (SOC) estimation and real-time performance are critical evaluation indexes for Li-ion battery management systems (BMS). High-accuracy algorithms often take a long program execution time (PET) in resource-constrained embedded application systems, which undoubtedly reduces the time slots of other processes and thereby the overall performance of the BMS. Considering resource optimization and computational load balance, this paper proposes a multi-model switching SOC estimation method for Li-ion batteries. Four typical battery models are employed to build a closed-loop SOC estimation system. The extended Kalman filter (EKF) method is employed to eliminate the effect of current noise and improve the accuracy of SOC. Experiments under dynamic current conditions are conducted to verify the accuracy and real-time performance of the proposed method. The experimental results indicate that accurate estimation results and reasonable PET can be obtained by the proposed method
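A minimal single-model sketch of EKF-based SOC estimation with a first-order Thevenin model; the parameters, the linear OCV curve, and the noise levels are illustrative (the paper's method switches among four such models):

```python
import numpy as np

rng = np.random.default_rng(3)

# Thevenin battery model (illustrative parameters, not from the paper).
Q = 2.0 * 3600                       # capacity [A*s]
R0, Rp, Cp = 0.05, 0.03, 1500.0      # ohmic and polarization parameters
dt = 1.0
a = np.exp(-dt / (Rp * Cp))
ocv = lambda s: 3.4 + 0.8 * s        # simple linear OCV-SOC curve (assumed)
docv = 0.8

# Simulate the "true" battery under a constant 1 A discharge.
steps = 3000
i_load = np.full(steps, 1.0)
soc_true = np.empty(steps); up = 0.0; v_meas = np.empty(steps)
s = 0.9
for k in range(steps):
    soc_true[k] = s
    v_meas[k] = ocv(s) - up - R0 * i_load[k] + rng.normal(0, 0.002)
    s -= i_load[k] * dt / Q
    up = a * up + Rp * (1 - a) * i_load[k]

# EKF with a deliberately wrong initial SOC.
x = np.array([0.5, 0.0])             # state: [SOC, polarization voltage]
P = np.diag([0.1, 0.01])
Qn = np.diag([1e-10, 1e-8]); Rn = 1e-4
F = np.array([[1.0, 0.0], [0.0, a]])
for k in range(steps):
    # predict
    x = np.array([x[0] - i_load[k] * dt / Q, a * x[1] + Rp * (1 - a) * i_load[k]])
    P = F @ P @ F.T + Qn
    # update with the terminal-voltage measurement
    H = np.array([docv, -1.0])
    y = v_meas[k] - (ocv(x[0]) - x[1] - R0 * i_load[k])
    Kk = P @ H / (H @ P @ H + Rn)
    x = x + Kk * y
    P = (np.eye(2) - np.outer(Kk, H)) @ P

print(round(abs(float(x[0]) - soc_true[-1]), 3))
```

The multi-model switching layer described above would select among several such filters at run time, trading PET against accuracy.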
El Gharamti, Mohamad; Ait-El-Fquih, Boujemaa; Hoteit, Ibrahim
2015-01-01
The ensemble Kalman filter (EnKF) recursively integrates field data into simulation models to obtain a better characterization of the model’s state and parameters. These are generally estimated following a state-parameters joint augmentation
Diaby, Vakaramoko; Adunlin, Georges; Montero, Alberto J
2014-02-01
Survival modeling techniques are increasingly being used as part of decision modeling for health economic evaluations. As many models are available, it is imperative for interested readers to know about the steps in selecting and using the most suitable ones. The objective of this paper is to propose a tutorial for the application of appropriate survival modeling techniques to estimate transition probabilities, for use in model-based economic evaluations, in the absence of individual patient data (IPD). An illustration of the use of the tutorial is provided based on the final progression-free survival (PFS) analysis of the BOLERO-2 trial in metastatic breast cancer (mBC). An algorithm was adopted from Guyot and colleagues, and was then run in the statistical package R to reconstruct IPD, based on the final PFS analysis of the BOLERO-2 trial. It should be emphasized that the reconstructed IPD represent an approximation of the original data. Afterwards, we fitted parametric models to the reconstructed IPD in the statistical package Stata. Both statistical and graphical tests were conducted to verify the relative and absolute validity of the findings. Finally, the equations for transition probabilities were derived using the general equation for transition probabilities used in model-based economic evaluations, and the parameters were estimated from fitted distributions. The results of the application of the tutorial suggest that the log-logistic model best fits the reconstructed data from the latest published Kaplan-Meier (KM) curves of the BOLERO-2 trial. Results from the regression analyses were confirmed graphically. An equation for transition probabilities was obtained for each arm of the BOLERO-2 trial. In this paper, a tutorial was proposed and used to estimate the transition probabilities for model-based economic evaluation, based on the results of the final PFS analysis of the BOLERO-2 trial in mBC. The results of our study can serve as a basis for any model
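Once a parametric model is fitted, per-cycle transition probabilities follow from the survival function as tp(t) = 1 - S(t)/S(t - u) for cycle length u. A sketch with an illustrative log-logistic fit (the parameters are invented, not the BOLERO-2 estimates):

```python
import numpy as np

# Log-logistic survival S(t) = 1 / (1 + (t/alpha)**beta); alpha and beta
# stand in for the parameters that would come out of the Stata fit.
alpha, beta = 12.0, 1.8   # months (illustrative)

def S(t):
    return 1.0 / (1.0 + (np.asarray(t, dtype=float) / alpha) ** beta)

def transition_prob(t, cycle):
    """Per-cycle probability of the event between t - cycle and t:
    tp(t) = 1 - S(t) / S(t - cycle)."""
    return 1.0 - S(t) / S(t - cycle)

cycle = 1.0                               # model cycle length (months)
times = np.arange(cycle, 25.0, cycle)
tp = transition_prob(times, cycle)
print(np.round(tp[:5], 4))
```

Time-dependent transition probabilities like these are what distinguish a fitted parametric curve from a single constant-rate approximation in the Markov model.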
International Nuclear Information System (INIS)
Plansky, L.E.; Seitz, R.R.
1994-02-01
This report documents user instructions for several simplified subroutines and driver programs that can be used to estimate various aspects of the long-term performance of cement-based barriers used in low-level radioactive waste disposal facilities. The subroutines are prepared in a modular fashion to allow flexibility for a variety of applications. Three levels of codes are provided: the individual subroutines, interactive drivers for each of the subroutines, and an interactive main driver, CEMENT, that calls each of the individual drivers. The individual subroutines for the different models may be taken independently and used in larger programs, or the driver modules can be used to execute the subroutines separately or as part of the main driver routine. A brief program description is included and user-interface instructions for the individual subroutines are documented in the main report. These are intended to be used when the subroutines are used as subroutines in a larger computer code
Software Cost-Estimation Model
Tausworthe, R. C.
1985-01-01
SOFTCOST, the Software Cost Estimation Model, provides an automated resource and schedule model for software development. It combines several cost models found in the open literature into one comprehensive set of algorithms, and compensates for nearly fifty implementation factors relating to task size, inherited baseline, organizational and system environment, and task difficulty.
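The general shape of such models can be sketched as a size power law scaled by multiplicative adjustment factors; the coefficients below are illustrative, not SOFTCOST's actual calibration:

```python
# Minimal cost-model sketch: effort grows as a power of size (KLOC) and is
# scaled by multiplicative factors for the implementation drivers.
# a, b and the multipliers are illustrative values, not SOFTCOST's.
def effort_person_months(kloc, multipliers=(), a=2.8, b=1.05):
    e = a * kloc ** b
    for m in multipliers:   # e.g. task difficulty, environment, experience
        e *= m
    return e

# 50 KLOC task with two drivers: high difficulty (1.15), experienced team (0.9)
print(round(effort_person_months(50, (1.15, 0.9)), 1))
```

SOFTCOST's fifty implementation factors play the role of the `multipliers` tuple here, each nudging the nominal power-law estimate up or down.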
Alhdiri, Maryam Ahmed; Samat, Nor Azah; Mohamed, Zulkifley
2017-03-01
Cancer is the most rapidly spreading disease in the world, especially in developing countries, including Libya. Cancer represents a significant burden on patients, families, and their societies. This disease can be controlled if detected early. Therefore, disease mapping has recently become an important method in the fields of public health research and disease epidemiology. The correct choice of statistical model is a very important step in producing a good map of a disease. Libya was selected for this work, to examine the geographical variation in its incidence of lung cancer. The objective of this paper is to estimate the relative risk for lung cancer. Four statistical models for estimating the relative risk for lung cancer, together with population censuses of the study area for the period 2006 to 2011, were used in this work: the Standardized Morbidity Ratio (SMR), the most popular statistic in the field of disease mapping; the Poisson-gamma model, one of the earliest applications of Bayesian methodology; the Besag, York and Mollie (BYM) model; and the Mixture model. As an initial step, this study provides a review of all proposed models, which we then apply to lung cancer data in Libya. Maps, tables, graphs, and goodness-of-fit (GOF) statistics were used to compare and present the preliminary results; GOF is commonly used in statistical modelling to compare fitted models. The main general results presented in this study show that the Poisson-gamma, BYM, and Mixture models can overcome the problem of the first model (SMR) when there is no observed lung cancer case in certain districts. Results show that the Mixture model is the most robust and provides better relative risk estimates across a range of models.
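The contrast between the SMR and the Poisson-gamma model can be sketched with toy counts (not the Libyan data); note how the zero-case district gets a zero SMR but a positive, shrunken Poisson-gamma estimate:

```python
import numpy as np

# Observed lung cancer counts O_i and expected counts E_i per district
# (illustrative numbers, not the Libyan data).
O = np.array([12, 0, 7, 30, 3])
E = np.array([10.0, 2.5, 9.0, 22.0, 4.0])

# Standardized Morbidity Ratio: exactly zero when no case is observed,
# regardless of the district's population at risk.
smr = O / E

# Poisson-gamma: with a Gamma(a, b) prior on the relative risk, the
# posterior mean (O_i + a) / (E_i + b) shrinks each district toward the
# prior mean a/b (a, b illustrative).
a, b = 2.0, 2.0
rr_pg = (O + a) / (E + b)
print(np.round(smr, 2), np.round(rr_pg, 2))
```

The BYM and Mixture models extend this shrinkage with spatially structured random effects, which is beyond this sketch.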
International Nuclear Information System (INIS)
Perevertov, Oleksiy
2003-01-01
The classical Preisach model (PM) of magnetic hysteresis requires that any minor differential permeability curve lies under minor curves with larger field amplitude. Measurements of ferromagnetic materials show that very often this is not true. By applying the classical PM formalism to measured minor curves one can discover that it leads to an oval-shaped region on each half of the Preisach plane where the calculations produce negative values in the Preisach function. Introducing an effective field, which differs from the applied one by a mean-field term proportional to the magnetization, usually solves this problem. Complex techniques exist to estimate the minimum necessary proportionality constant (the moving parameter). In this paper we propose a simpler way to estimate the mean-field effects for use in nondestructive testing, which is based on experience from the measurements of industrial steels. A new parameter (parameter of shift) is introduced, which monitors the mean-field effects. The relation between the shift parameter and the moving one was studied for a number of steels. From preliminary experiments no correlation was found between the shift parameter and the classical magnetic ones such as the coercive field, maximum differential permeability and remanent magnetization
Cao, Bibo; Li, Chuan; Liu, Yan; Zhao, Yue; Sha, Jian; Wang, Yuqiu
2015-05-01
Because water quality monitoring sections or sites reflect the water quality status of rivers, surface water quality management based on them can be effective. To improve river water quality, it is necessary to quantify the contribution ratios of pollutant sources to a specific section. Because the physical and chemical processes of nutrient pollutants in water bodies are complex, it is difficult to compute these contribution ratios quantitatively. However, water quality models have proved to be effective tools for estimating surface water quality. In this project, an enhanced QUAL2Kw model with an added module was applied to the Xin'anjiang Watershed to obtain water quality information along the river and to assess the contribution ratios of each pollutant source to a certain section (the Jiekou state-controlled section). Model validation indicated that the results were reliable. Contribution ratios were then analyzed through the added module. Results show that among the pollutant sources, the Lianjiang tributary contributes the largest part of total nitrogen (50.43%), total phosphorus (45.60%), ammonia nitrogen (32.90%), nitrate (nitrite + nitrate) nitrogen (47.73%), and organic nitrogen (37.87%). Furthermore, contribution ratios in different reaches varied along the river. Compared with pollutant load ratios of different sources in the watershed, analyzing the contribution ratios of pollutant sources for each specific section, which takes the localized chemical and physical processes into consideration, is more suitable for local-regional water quality management. In summary, this method of analyzing the contribution ratios of pollutant sources to a specific section based on the QUAL2Kw model was found to support the improvement of the local environment.
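At its core, the contribution ratio of a source at a section is its delivered load divided by the total delivered load there (the model's job is producing the delivered loads after in-stream processes). A toy computation with illustrative total-nitrogen loads:

```python
# Delivered total-nitrogen loads (t/yr) at the control section; values are
# illustrative, chosen only to mirror the tributary share reported above.
loads = {"Lianjiang tributary": 504.3, "Source B": 210.0, "Source C": 285.7}

total = sum(loads.values())
shares = {k: round(100 * v / total, 2) for k, v in loads.items()}
print(shares)
```

The QUAL2Kw module tracks each source's mass through advection, decay, and settling before this division, which is why section-specific ratios differ from raw watershed load ratios.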
Directory of Open Access Journals (Sweden)
M. Memarianfard
2016-10-01
In many industrialized areas, high concentrations of particulate matter are a major public health concern, and the problem is felt worldwide. Because air pollution assessment and the analysis of its spatial dispersion are complex, owing to the many factors involved, a GIS-based modeling approach was utilized in this paper to map PM2.5 dispersion over Tehran during one year, from 21 March 2014 to 20 March 2015. The RBF (radial basis function) method was applied to obtain the zoning maps and to determine the highest PM2.5 concentration in the 22 regions of Tehran for each season. The minimum RMSE values, according to the number of neighbors and the type of function in the RBF method - completely regularized spline, spline with tension, multiquadric, inverse multiquadric, and thin-plate spline - were assessed for each month. The number of neighbors was estimated by analyzing the errors, and was varied from 2 to 30 for each function. The results indicate that models with 3 or 4 neighbors perform best, with the lowest RMSE values, when using the RBF method. The highest PM2.5 concentrations occurred in summer and winter, especially in the center and south and, in some cases, the northeast of the city.
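A minimal sketch of RBF interpolation with a leave-one-out RMSE for comparing basis functions, using SciPy's legacy `Rbf` interface; station locations and concentrations are synthetic:

```python
import numpy as np
from scipy.interpolate import Rbf

rng = np.random.default_rng(4)

# Synthetic monitoring stations (x, y in km) and monthly mean PM2.5 (ug/m3).
x = rng.uniform(0, 40, 25)
y = rng.uniform(0, 40, 25)
pm = 60 + 0.5 * x - 0.3 * y + rng.normal(0, 2, 25)

# RBF interpolator; 'function' selects the basis discussed above
# (e.g. 'thin_plate', 'multiquadric', 'inverse').
rbf = Rbf(x, y, pm, function="thin_plate")

# Leave-one-out RMSE, the kind of error score used to rank basis functions
# and neighbor counts in the study.
err = []
for i in range(25):
    m = np.arange(25) != i
    f = Rbf(x[m], y[m], pm[m], function="thin_plate")
    err.append(float(f(x[i], y[i])) - pm[i])
rmse = float(np.sqrt(np.mean(np.square(err))))
print(round(rmse, 2))
```

Swapping the `function` string and repeating the loop reproduces the kind of basis-function comparison described above; the neighbor-count restriction used in GIS tools is not modeled by this global interface.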
International Nuclear Information System (INIS)
Akahane, K.; Kai, M.; Konishi, E.; Kusama, T.; Aoki, Y.
1995-01-01
The biokinetic model in ICRP 53 is used for calculating the absorbed dose to each organ of a patient in nuclear medicine. The ICRP model is a simple compartment model based on human data; however, it cannot reproduce the biokinetics of radiopharmaceuticals under various physiological conditions. On the other hand, a physiologically based pharmacokinetics model (PBPK model) can theoretically describe the flow of radiopharmaceuticals as a compartment model for any physiological condition. The PBPK model was applied especially to the kidney-bladder dynamics, and similar results were obtained compared with the ICRP model. This suggests the potential of the PBPK model for predicting the biokinetics of radiopharmaceuticals under various physiological conditions. (Author)
Statistical inference based on latent ability estimates
Hoijtink, H.J.A.; Boomsma, A.
The quality of approximations to first- and second-order moments (e.g., statistics like means, variances, regression coefficients) based on latent ability estimates is discussed. The ability estimates are obtained using either the Rasch or the two-parameter logistic model. Straightforward use
Laurence, Caroline O; Heywood, Troy; Bell, Janice; Atkinson, Kaye; Karnon, Jonathan
2018-03-27
Health workforce planning models have been developed to estimate the future health workforce requirements for the population they serve, and have been used to inform policy decisions. The aim was to adapt and further develop a need-based GP workforce simulation model to incorporate the current and estimated geographic distribution of patients and GPs. A need-based simulation model that estimates the supply of GPs and the level of services required in South Australia (SA) was adapted and applied to the Western Australian (WA) workforce. The main outcome measure was the difference between the number of full-time equivalent (FTE) GPs supplied and required from 2013 to 2033. The base scenario estimated a shortage of GPs in WA from 2019 onwards, reaching 493 FTE GPs in 2033, while for SA, estimates showed an oversupply over the projection period. The WA urban and rural models estimated an urban shortage of GPs over this period. A reduced international medical graduate recruitment scenario resulted in estimated shortfalls of GPs by 2033 for both WA and SA. The WA-specific scenarios of lower population projections and registrar work value resulted in a reduced shortage of FTE GPs in 2033, while unfilled training places increased the shortfall of FTE GPs in 2033. The simulation model incorporates contextual differences in its structure that allow within- and cross-jurisdictional comparisons of workforce estimates. It also provides greater insights into the drivers of supply and demand and the impact of changes in workforce policy, promoting more informed decision-making.
Observer-Based Human Knee Stiffness Estimation.
Misgeld, Berno J E; Luken, Markus; Riener, Robert; Leonhardt, Steffen
2017-05-01
We consider the problem of stiffness estimation for the human knee joint during motion in the sagittal plane. The new stiffness estimator uses a nonlinear reduced-order biomechanical model and a body sensor network (BSN). The developed model is based on a two-dimensional knee kinematics approach to calculate the angle-dependent lever arms and torques of the muscle-tendon complex. To minimize errors in the knee stiffness estimation procedure that result from model uncertainties, a nonlinear observer is developed. The observer uses the electromyogram (EMG) of involved muscles as input signals and the segmental orientation as the output signal to correct the observer-internal states. Because of dominating model nonlinearities and the nonsmoothness of the corresponding nonlinear functions, an unscented Kalman filter is designed to compute and update the observer feedback (Kalman) gain matrix. The observer-based stiffness estimation algorithm is subsequently evaluated in simulations and in a test bench specifically designed to provide robotic movement support for the human knee joint. In silico and experimental validation underline the good performance of the knee stiffness estimation, even in cases of knee stiffening due to antagonistic coactivation. We have shown the principle function of an observer-based approach to knee stiffness estimation that employs EMG signals and segmental orientation provided by our own IPANEMA BSN. The presented approach makes real-time, model-based estimation of knee stiffness with minimal instrumentation possible.
Directory of Open Access Journals (Sweden)
Renxin Xiao
2016-03-01
In order to properly manage the lithium-ion batteries of electric vehicles (EVs), it is essential to build a battery model and estimate the state of charge (SOC). In this paper, fractional-order forms of the Thevenin and partnership for a new generation of vehicles (PNGV) models are built, whose parameters, including the fractional orders and the corresponding resistance and capacitance values, are simultaneously identified by a genetic algorithm (GA). The relationships between the different model parameters and SOC are established and analyzed. The calculation precision of the fractional-order model (FOM) and the integral-order model (IOM) is validated and compared under hybrid test cycles. Finally, an extended Kalman filter (EKF) is employed to estimate the SOC based on the different models. The results prove that the FOMs can simulate the output voltage more accurately and that the fractional-order EKF (FOEKF) can estimate the SOC more precisely under dynamic conditions.
International Nuclear Information System (INIS)
Kim, Man Cheol; Seong, Poong Hyun
2000-01-01
In the nuclear industry, the difficulty of proving the reliability of digital systems prohibits their widespread use in various nuclear applications such as plant protection systems. Even though a few models exist for estimating the reliability of digital systems, we develop a new integrated model that is more realistic than the existing ones. We divide the process of estimating the reliability of a digital system into two phases, a high-level phase and a low-level phase, whose boundary is the reliabilities of the subsystems. We apply the software control flow method to the low-level phase and fault tree analysis to the high-level phase. The application of the model to the Dynamic Safety System (DSS) shows that the estimated reliability of the system is quite reasonable and realistic
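The high-level phase can be sketched as elementary fault-tree algebra over subsystem reliabilities obtained in the low-level phase; the tree structure and numbers below are illustrative, not the DSS model:

```python
# High-level phase sketch: combine subsystem reliabilities (which the
# low-level software control flow analysis would supply) through a fault
# tree. Illustrative structure: the system fails if the software channel
# fails OR both redundant hardware channels fail.
r_sw, r_hw = 0.999, 0.99                         # hypothetical reliabilities

p_hw_both_fail = (1 - r_hw) ** 2                 # AND gate over redundancy
p_system_fail = 1 - r_sw * (1 - p_hw_both_fail)  # OR gate with software
r_system = 1 - p_system_fail
print(round(r_system, 6))  # → 0.9989
```

The AND gate multiplies failure probabilities (assuming independence) and the OR gate multiplies survival probabilities, which is the basic arithmetic behind the high-level fault tree.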
Breen, Michael S; Long, Thomas C; Schultz, Bradley D; Crooks, James; Breen, Miyuki; Langstaff, John E; Isaacs, Kristin K; Tan, Yu-Mei; Williams, Ronald W; Cao, Ye; Geller, Andrew M; Devlin, Robert B; Batterman, Stuart A; Buckley, Timothy J
2014-07-01
A critical aspect of air pollution exposure assessment is the estimation of the time spent by individuals in various microenvironments (ME). Accounting for the time spent in different ME with different pollutant concentrations can reduce exposure misclassifications, while failure to do so can add uncertainty and bias to risk estimates. In this study, a classification model, called MicroTrac, was developed to estimate time of day and duration spent in eight ME (indoors and outdoors at home, work, school; inside vehicles; other locations) from global positioning system (GPS) data and geocoded building boundaries. Based on a panel study, MicroTrac estimates were compared with 24-h diary data from nine participants, with corresponding GPS data and building boundaries of home, school, and work. MicroTrac correctly classified the ME for 99.5% of the daily time spent by the participants. The capability of MicroTrac could help to reduce the time-location uncertainty in air pollution exposure models and exposure metrics for individuals in health studies.
Solar radiation estimation based on the insolation
International Nuclear Information System (INIS)
Assis, F.N. de; Steinmetz, S.; Martins, S.R.; Mendez, M.E.G.
1998-01-01
A series of daily global solar radiation data measured by an Eppley pyranometer was used to test the model of PEREIRA and VILLA NOVA (1997), which estimates the potential radiation from the instantaneous value measured at solar noon. The model also allows estimation of the parameters of PRESCOTT's equation (1940) under the assumption a = 0.29 cos φ. The results demonstrated the model's validity for the studied conditions. Simultaneously, the hypothesis of generalizing the insolation-based radiation estimation formulas, using K = K0 (0.29 cos φ + 0.50 n/N), was analysed and confirmed.
Ochoa-Rodriguez, S; Wang, L; Simoes, N; Onof, C; Maksimovi?, ?
2013-01-01
Traditionally, urban storm water drainage models have been calibrated using only raingauge data, which may result in overly conservative models due to the lack of spatial description of rainfall. With the advent of weather radars, radar rainfall estimates with higher temporal and spatial resolution have become increasingly available and have started to be used operationally for urban storm water model calibration and real time operation. Nonetheless,...
Benzy, V K; Jasmin, E A; Koshy, Rachel Cherian; Amal, Frank; Indiradevi, K P
2018-01-01
Advances in medical research and intelligent modeling techniques have led to developments in anaesthesia management. The present study targets estimation of the depth of anaesthesia using cognitive signal processing and intelligent modeling techniques. The neurophysiological signal that reflects the cognitive state under anaesthetic drugs is the electroencephalogram. The information available in electroencephalogram signals during anaesthesia is drawn out by extracting relative wave energy features from the anaesthetic electroencephalogram signals. The discrete wavelet transform is used to decompose the electroencephalogram signals into four levels, and relative wave energy is then computed from the approximation and detail coefficients of the sub-band signals. Relative wave energy is extracted to find the degree of importance of the different electroencephalogram frequency bands associated with the anaesthetic phases: awake, induction, maintenance and recovery. The Kruskal-Wallis statistical test is applied to the relative wave energy features to check their capability to discriminate the awake, light anaesthesia, moderate anaesthesia and deep anaesthesia states. A novel depth-of-anaesthesia index is generated by an adaptive neuro-fuzzy inference system based on the fuzzy c-means clustering algorithm, which uses the relative wave energy features as inputs. Finally, the generated depth-of-anaesthesia index is compared with a commercially available depth-of-anaesthesia monitor, the Bispectral Index.
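The relative wave energy computation described above can be made concrete. The abstract does not name the mother wavelet, so the sketch below assumes the Haar wavelet (implemented by hand to stay self-contained) and returns each sub-band's energy as a fraction of the total; because the transform is orthonormal, the fractions sum to one.

```python
import numpy as np

def haar_dwt(x):
    """One-level orthonormal Haar DWT: (approximation, detail) coefficients."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def relative_wave_energy(signal, levels=4):
    """Relative energy of each sub-band from a `levels`-deep decomposition.

    Returns [E(D1), ..., E(Dlevels), E(Alevels)] divided by total energy,
    so the values sum to 1 (Parseval holds for the orthonormal transform).
    Signal length must be divisible by 2**levels.
    """
    energies = []
    a = signal
    for _ in range(levels):
        a, d = haar_dwt(a)
        energies.append(float(np.sum(d ** 2)))
    energies.append(float(np.sum(a ** 2)))
    total = sum(energies)
    return [e / total for e in energies]
```

These per-band fractions are the kind of features that could then be fed to a statistical test or a neuro-fuzzy classifier, as in the study.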
LiDAR and IFSAR-Based Flood Inundation Model Estimates for Flood-Prone Areas of Afghanistan
Johnson, W. C.; Goldade, M. M.; Kastens, J.; Dobbs, K. E.; Macpherson, G. L.
2014-12-01
Extreme flood events are not unusual in semi-arid to hyper-arid regions of the world, and Afghanistan is no exception. Recent flash floods and flash-flood-induced landslides took nearly 100 lives and destroyed or damaged nearly 2000 homes in 12 villages within the Guzargah-e-Nur district of Baghlan province in northeastern Afghanistan. With available satellite imagery, flood-water inundation estimation can be accomplished remotely, thereby providing a means to reduce the impact of such flood events by improving shared situational awareness during major flood events. Satellite orbital considerations, weather, cost, data licensing restrictions, and other issues can often complicate the acquisition of appropriately timed imagery. Given the need for tools to supplement imagery where it is not available, complement imagery when it is available, and bridge the gap between imagery-based flood mapping and traditional hydrodynamic modeling approaches, we have developed a topographic floodplain model (FLDPLN), which has been used to identify and map river valley floodplains with elevation data ranging from 90-m SRTM to 1-m LiDAR. Floodplain "depth to flood" (DTF) databases generated by FLDPLN are completely seamless and modular. FLDPLN has been applied in Afghanistan to flood-prone areas along the northern and southern flanks of the Hindu Kush mountain range to generate a continuum of 1-m increment flood-event models up to 10 m in depth. Elevation data used in this application of FLDPLN included high-resolution, drone-acquired LiDAR (~1 m) and IFSAR (5 m; INTERMAP). Validation of the model has been accomplished using the best available satellite-derived flood inundation maps, such as those issued by UNITAR's Operational Satellite Applications Programme (UNOSAT). Results provide a quantitative approach to evaluating the potential risk to urban/village infrastructure as well as to irrigation systems, agricultural fields and archaeological sites.
Directory of Open Access Journals (Sweden)
Stefanie M. Herrmann
2013-10-01
Field trees are an integral part of the farmed parkland landscape in West Africa and provide multiple benefits to the local environment and livelihoods. While field trees have received increasing interest in the context of strengthening resilience to climate variability and change, the actual extent of farmed parkland and spatial patterns of tree cover are largely unknown. We used the rule-based predictive modeling tool Cubist® to estimate field tree cover in the west-central agricultural region of Senegal. A collection of rules and associated multiple linear regression models was constructed from (1) a reference dataset of percent tree cover derived from very high spatial resolution data (2 m Orbview) as the dependent variable, and (2) ten years of 10-day 250 m Moderate Resolution Imaging Spectrometer (MODIS) Normalized Difference Vegetation Index (NDVI) composites and derived phenological metrics as independent variables. Correlation coefficients between modeled and reference percent tree cover of 0.88 and 0.77 were achieved for training and validation data respectively, with absolute mean errors of 1.07 and 1.03 percent tree cover. The resulting map shows a west-east gradient from high tree cover in the peri-urban areas of horticulture and arboriculture to low tree cover in the more sparsely populated eastern part of the study area. A comparison of current (2000s) tree cover along this gradient with historic cover as seen on Corona images reveals dynamics of change but also areas of remarkable stability of field tree cover since 1968. The proposed modeling approach can help to identify locations of high and low tree cover in dryland environments and guide ground studies and management interventions aimed at promoting the integration of field trees in agricultural systems.
International Nuclear Information System (INIS)
Demirhan, Haydar; Kayhan Atilgan, Yasemin
2015-01-01
Highlights: • Precise horizontal global solar radiation estimation models are proposed for Turkey. • Genetic programming technique is used to construct the models. • Robust coplot analysis is applied to reduce the impact of outlier observations. • Better estimation and prediction properties are observed for the models. - Abstract: Renewable energy sources have been attracting more and more attention of researchers due to the diminishing and harmful nature of fossil energy sources. Because of the importance of solar energy as a renewable energy source, an accurate determination of significant covariates and their relationships with the amount of global solar radiation reaching the Earth is a critical research problem. There are numerous meteorological and terrestrial covariates that can be used in the analysis of horizontal global solar radiation. Some of these covariates are highly correlated with each other. It is possible to find a large variety of linear or non-linear models to explain the amount of horizontal global solar radiation. However, models that explain the amount of global solar radiation with the smallest set of covariates should be obtained. In this study, use of the robust coplot technique to reduce the number of covariates before going forward with advanced modelling techniques is considered. After reducing the dimensionality of model space, yearly and monthly mean daily horizontal global solar radiation estimation models for Turkey are built by using the genetic programming technique. It is observed that application of robust coplot analysis is helpful for building precise models that explain the amount of global solar radiation with the minimum number of covariates without suffering from outlier observations and the multicollinearity problem. Consequently, over a dataset of Turkey, precise yearly and monthly mean daily global solar radiation estimation models are introduced using the model spaces obtained by robust coplot technique and
Directory of Open Access Journals (Sweden)
S. Khatiwala
2012-04-01
The global ocean has taken up a large fraction of the CO2 released by human activities since the industrial revolution. Quantifying the oceanic anthropogenic carbon (Cant) inventory and its variability is important for predicting the future global carbon cycle. The detailed comparison of data-based and model-based estimates is essential for the validation and continued improvement of our prediction capabilities. So far, three global estimates of the oceanic Cant inventory that are "data-based" and independent of global ocean circulation models have been produced: one based on the ΔC* method, and two that are based on constraining surface-to-interior transport of tracers, the TTD method and a maximum entropy inversion method (GF). The GF method, in particular, is capable of reconstructing the history of the Cant inventory through the industrial era. In the present study we use forward model simulations of the Community Climate System Model (CCSM3.1) to estimate the Cant inventory and compare the results with the data-based estimates. We also use the simulations to test several assumptions of the GF method, including the assumption of constant climate and circulation, which is common to all the data-based estimates. Though the integrated estimates of global Cant inventories are consistent with each other, the regional estimates show discrepancies of up to 50%. The CCSM3 model underestimates the total Cant inventory, in part due to weak mixing and ventilation in the North Atlantic and Southern Ocean. Analyses of different simulation results suggest that key assumptions about ocean circulation and air-sea disequilibrium in the GF method are generally valid on the global scale, but may introduce errors in Cant estimates on regional scales. The GF method should also be used with caution when predicting future oceanic anthropogenic carbon uptake.
International Nuclear Information System (INIS)
Akiko, Furuno; Hideyuki, Kitabata
2003-01-01
The importance of computer-based decision support systems for local and regional scale accidents has been recognized by many countries following the experience of the accidental atmospheric release of radionuclides at Chernobyl in 1986 in the former Soviet Union. The recent increase of nuclear power plants in the Asian region also necessitates an emergency response system for Japan to predict the long-range atmospheric dispersion of radionuclides due to an overseas accident. On the basis of this background, WSPEEDI (Worldwide version of System for Prediction of Environmental Emergency Dose Information) at the Japan Atomic Energy Research Institute was developed to forecast long-range atmospheric dispersion of radionuclides during a nuclear emergency. Although the source condition is a critical parameter for accurate prediction, the condition can rarely be acquired in the early stage of an overseas accident. Thus, we have been developing a computer-based function to estimate the radioactive source term, e.g. the release point, time and amount, as a part of WSPEEDI. This function consists of atmospheric transport simulations and statistical analysis for the prediction and monitoring of air dose rates. Atmospheric transport simulations are carried out for the matrix of possible release points in Eastern Asia and possible release times. The simulation results for air dose rates are compared with monitoring data, and the best-fitting release condition is defined as the source term. This paper describes the source term estimation method and its application to Eastern Asia. The latest version of WSPEEDI accommodates the following two models: an atmospheric meteorological model, MM5, and a particle random walk model, GEARN. MM5 is a non-hydrostatic meteorological model developed by the Pennsylvania State University and the National Center for Atmospheric Research (NCAR). MM5 physically calculates more than 40 meteorological parameters with high resolution in time and space based on
International Nuclear Information System (INIS)
Zheng, Linfeng; Zhang, Lei; Zhu, Jianguo; Wang, Guoxiu; Jiang, Jiuchun
2016-01-01
Highlights: • The numerical solution for an electrochemical model is presented. • Trinal PI observers are used to concurrently estimate SOC, capacity and resistance. • An iteration-approaching method is incorporated to enhance estimation performance. • The robustness against aging and temperature variations is experimentally verified. - Abstract: Lithium-ion batteries have been widely used as enabling energy storage in many industrial fields. Accurate modeling and state estimation play fundamental roles in ensuring safe, reliable and efficient operation of lithium-ion battery systems. A physics-based electrochemical model (EM) is highly desirable for its inherent ability to push batteries to operate at their physical limits. For state-of-charge (SOC) estimation, the continuous capacity fade and resistance deterioration are more prone to erroneous estimation results. In this paper, trinal proportional-integral (PI) observers with a reduced physics-based EM are proposed to simultaneously estimate SOC, capacity and resistance for lithium-ion batteries. Firstly, a numerical solution for the employed model is derived. PI observers are then developed to realize the co-estimation of battery SOC, capacity and resistance. The moving-window ampere-hour counting technique and the iteration-approaching method are also incorporated for the estimation accuracy improvement. The robustness of the proposed approach against erroneous initial values, different battery cell aging levels and ambient temperatures is systematically evaluated, and the experimental results verify the effectiveness of the proposed method.
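A single proportional-integral observer of the kind described can be sketched as follows. The linear open-circuit-voltage curve, the gains, and the cell parameters below are illustrative assumptions, not the paper's reduced electrochemical model or its trinal-observer design; the sketch only shows how a voltage-error PI correction augments plain coulomb counting.

```python
import numpy as np

def ocv(soc):
    """Hypothetical linear open-circuit-voltage curve (V) vs. SOC in [0, 1]."""
    return 3.0 + 1.2 * soc

def pi_soc_observer(current, v_meas, dt, q_ah, r_ohm,
                    soc0=0.5, kp=0.5, ki=0.05):
    """PI observer: coulomb counting corrected by the terminal-voltage error.

    current: discharge current samples (A, positive = discharging)
    v_meas:  measured terminal voltage samples (V)
    """
    soc_hat, integ, out = soc0, 0.0, []
    q_c = q_ah * 3600.0                       # capacity in coulombs
    for i_k, v_k in zip(current, v_meas):
        v_hat = ocv(soc_hat) - i_k * r_ohm    # model-predicted voltage
        e = v_k - v_hat                       # voltage innovation
        integ += e * dt
        soc_hat += (-i_k * dt / q_c) + kp * e * dt + ki * integ * dt
        soc_hat = min(max(soc_hat, 0.0), 1.0)
        out.append(soc_hat)
    return out
```

With a matched model, the observer recovers from a deliberately wrong initial SOC because the voltage error drives the PI correction toward zero.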
Directory of Open Access Journals (Sweden)
Alessandro Cassini
2016-10-01
associated with the highest burden because of their high severity. The cumulative burden of the six HAIs was higher than the total burden of all other 32 communicable diseases included in the BCoDE 2009-2013 study. The main limitations of the study are the variability in the parameter estimates, in particular the disease models' case fatalities, and the use of the Rhame and Sudderth formula for estimating incident number of cases from prevalence data.We estimated the EU/EEA burden of HAIs in DALYs in 2011-2012 using a transparent and evidence-based approach that allows for combining estimates of morbidity and of mortality in order to compare with other diseases and to inform a comprehensive ranking suitable for prioritization. Our results highlight the high burden of HAIs and the need for increased efforts for their prevention and control. Furthermore, our model should allow for estimations of the potential benefit of preventive measures on the burden of HAIs in the EU/EEA.
Leonard, Jeremy A.; Tan, Yu-Mei; Gilbert, Mary; Isaacs, Kristin; El-Masri, Hisham
2016-01-01
Some pharmaceuticals and environmental chemicals bind the thyroid peroxidase (TPO) enzyme and disrupt thyroid hormone production. The potential for TPO inhibition is a function of both the binding affinity and concentration of the chemical within the thyroid gland. The former can be determined through in vitro assays, and the latter is influenced by pharmacokinetic properties, along with environmental exposure levels. In this study, a physiologically based pharmacokinetic (PBPK) model was integrated with a pharmacodynamic (PD) model to establish internal doses capable of inhibiting TPO in relation to external exposure levels predicted through exposure modeling. The PBPK/PD model was evaluated using published serum or thyroid gland chemical concentrations or circulating thyroxine (T4) and triiodothyronine (T3) hormone levels measured in rats and humans. After evaluation, the model was used to estimate human equivalent intake doses resulting in reduction of T4 and T3 levels by 10% (ED10) for 6 chemicals of varying TPO-inhibiting potencies. These chemicals were methimazole, 6-propylthiouracil, resorcinol, benzophenone-2, 2-mercaptobenzothiazole, and triclosan. Margin of exposure values were estimated for these chemicals using the ED10 and predicted population exposure levels for females of child-bearing age. The modeling approach presented here revealed that examining hazard or exposure alone when prioritizing chemicals for risk assessment may be insufficient, and that consideration of pharmacokinetic properties is warranted. This approach also provides a mechanism for integrating in vitro data, pharmacokinetic properties, and exposure levels predicted through high-throughput means when interpreting adverse outcome pathways based on biological responses. PMID:26865668
Petersen, Øyvind Wiig
2014-01-01
Force identification in structural dynamics is an inverse problem concerned with finding loads from measured structural response. The main objective of this thesis is to perform and study state (displacement and velocity) and force estimation by Kalman filtering. Theory on optimal control and state-space models is presented, adapted to linear structural dynamics. Accommodation for measurement noise and model inaccuracies is attained by stochastic-deterministic coupling. Explicit requirem...
Parameter Estimation of Partial Differential Equation Models.
Xun, Xiaolei; Cao, Jiguo; Mallick, Bani; Carroll, Raymond J; Maity, Arnab
2013-01-01
Partial differential equation (PDE) models are commonly used to model complex dynamic systems in applied sciences such as biology and finance. The forms of these PDE models are usually proposed by experts based on their prior knowledge and understanding of the dynamic system. Parameters in PDE models often have interesting scientific interpretations, but their values are often unknown, and need to be estimated from the measurements of the dynamic system in the presence of measurement errors. Most PDEs used in practice have no analytic solutions, and can only be solved with numerical methods. Currently, methods for estimating PDE parameters require repeatedly solving PDEs numerically under thousands of candidate parameter values, and thus the computational load is high. In this article, we propose two methods to estimate parameters in PDE models: a parameter cascading method and a Bayesian approach. In both methods, the underlying dynamic process modeled with the PDE model is represented via basis function expansion. For the parameter cascading method, we develop two nested levels of optimization to estimate the PDE parameters. For the Bayesian method, we develop a joint model for data and the PDE, and develop a novel hierarchical model allowing us to employ Markov chain Monte Carlo (MCMC) techniques to make posterior inference. Simulation studies show that the Bayesian method and parameter cascading method are comparable, and both outperform other available methods in terms of estimation accuracy. The two methods are demonstrated by estimating parameters in a PDE model from LIDAR data.
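The "repeatedly solving the PDE under candidate parameter values" baseline that the article improves upon can be sketched for a one-dimensional heat equation u_t = k u_xx, with an explicit finite-difference solver and a grid search over the diffusivity k. The equation, grid, and search range are illustrative choices, not taken from the article.

```python
import numpy as np

def solve_heat(k, u0, dx, dt, steps):
    """Explicit finite-difference solution of u_t = k * u_xx
    with fixed (Dirichlet) end values."""
    u = u0.copy()
    r = k * dt / dx ** 2          # stability requires r <= 0.5
    for _ in range(steps):
        u[1:-1] = u[1:-1] + r * (u[2:] - 2 * u[1:-1] + u[:-2])
    return u

def estimate_k(u0, u_obs, dx, dt, steps, k_grid):
    """Grid-search the diffusivity k minimising the sum of squared errors
    between the simulated and observed final profiles."""
    errs = [np.sum((solve_heat(k, u0, dx, dt, steps) - u_obs) ** 2)
            for k in k_grid]
    return k_grid[int(np.argmin(errs))]
```

Each candidate k requires a full numerical solve, which is exactly the computational burden the parameter cascading and Bayesian basis-expansion methods are designed to avoid.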
DEFF Research Database (Denmark)
Silva, Filipe Faria Da; Bak, Claus Leth; Balle Holst, Per
2012-01-01
If the area is too large, the simulation requires a long period of time and numerical problems are more likely to exist. This paper proposes a method that can be used to estimate the depth of the modeling area using the grid layout, which can be obtained directly from a PSS/E file, or equivalent...
CSIR Research Space (South Africa)
Bencherif, H
2010-09-01
The present paper reports on the use of a multi-regression model adapted at Reunion University for temperature and ozone trend estimates. Depending on the location of the observing site, the studied geophysical signal is broken down into the form of a sum...
Directory of Open Access Journals (Sweden)
Anuradhani Kasturiratne
2008-11-01
BACKGROUND: Envenoming resulting from snakebites is an important public health problem in many tropical and subtropical countries. Few attempts have been made to quantify the burden, and recent estimates all suffer from the lack of an objective and reproducible methodology. In an attempt to provide an accurate, up-to-date estimate of the scale of the global problem, we developed a new method to estimate the disease burden due to snakebites. METHODS AND FINDINGS: The global estimates were based on regional estimates that were, in turn, derived from data available for countries within a defined region. Three main strategies were used to obtain primary data: electronic searching for publications on snakebite, extraction of relevant country-specific mortality data from databases maintained by United Nations organizations, and identification of grey literature by discussion with key informants. Countries were grouped into 21 distinct geographic regions that are as epidemiologically homogenous as possible, in line with the Global Burden of Disease 2005 study (Global Burden Project of the World Bank). Incidence rates for envenoming were extracted from publications and used to estimate the number of envenomings for individual countries; if no data were available for a particular country, the lowest incidence rate within a neighbouring country was used. Where death registration data were reliable, reported deaths from snakebite were used; in other countries, deaths were estimated on the basis of observed mortality rates and the at-risk population. We estimate that, globally, at least 421,000 envenomings and 20,000 deaths occur each year due to snakebite. These figures may be as high as 1,841,000 envenomings and 94,000 deaths. Based on the fact that envenoming occurs in about one in every four snakebites, between 1.2 million and 5.5 million snakebites could occur annually. CONCLUSIONS: Snakebites cause considerable morbidity and mortality worldwide. The
Energy Technology Data Exchange (ETDEWEB)
Holden, Jacob [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Van Til, Harrison J [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Wood, Eric W [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Gonder, Jeffrey D [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Zhu, Lei [National Renewable Energy Laboratory (NREL), Golden, CO (United States)
2018-02-09
A data-informed model to predict energy use for a proposed vehicle trip has been developed in this paper. The methodology leverages nearly 1 million miles of real-world driving data to generate the estimation model. Driving is categorized at the sub-trip level by average speed, road gradient, and road network geometry, then aggregated by category. An average energy consumption rate is determined for each category, creating an energy rates look-up table. Proposed vehicle trips are then categorized in the same manner, and estimated energy rates are appended from the look-up table. The methodology is robust and applicable to almost any type of driving data. The model has been trained on vehicle global positioning system data from the Transportation Secure Data Center at the National Renewable Energy Laboratory and validated against on-road fuel consumption data from testing in Phoenix, Arizona. The estimation model has demonstrated an error range of 8.6% to 13.8%. The model results can be used to inform control strategies in routing tools, such as change in departure time, alternate routing, and alternate destinations to reduce energy consumption. This work provides a highly extensible framework that allows the model to be tuned to a specific driver or vehicle type.
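The categorize/aggregate/look-up workflow described above can be sketched in a few lines. The binning rules and example rates below are invented for illustration and are not NREL's actual categorization or data.

```python
def categorize(avg_speed_mph, grade_pct):
    """Bin a road segment by average speed and gradient (illustrative bins)."""
    speed_bin = int(avg_speed_mph // 10)      # 10-mph speed bins
    grade_bin = int(round(grade_pct))         # 1% gradient bins
    return (speed_bin, grade_bin)

def build_rate_table(segments):
    """Average energy rate (kWh/mi) per category from observed driving.

    segments: iterable of (avg_speed_mph, grade_pct, kwh_per_mi) tuples.
    """
    sums, counts = {}, {}
    for speed, grade, kwh_per_mi in segments:
        c = categorize(speed, grade)
        sums[c] = sums.get(c, 0.0) + kwh_per_mi
        counts[c] = counts.get(c, 0) + 1
    return {c: sums[c] / counts[c] for c in sums}

def estimate_trip(table, proposed):
    """Estimate trip energy: look up each proposed segment's rate
    and multiply by its length in miles."""
    return sum(table[categorize(s, g)] * miles for s, g, miles in proposed)
```

A production version would need a fallback for proposed segments whose category was never observed in training, which this sketch omits.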
Steen Magnussen; Ronald E. McRoberts; Erkki O. Tomppo
2009-01-01
New model-based estimators of the uncertainty of pixel-level and areal k-nearest neighbour (knn) predictions of attribute Y from remotely-sensed ancillary data X are presented. Non-parametric functions predict Y from scalar 'Single Index Model' transformations of X. Variance functions generated...
Fernández, E N; Legarra, A; Martínez, R; Sánchez, J P; Baselga, M
2017-06-01
Inbreeding generates covariances between additive and dominance effects (breeding values and dominance deviations). In this work, we developed and applied models for estimation of dominance and additive genetic variances and their covariance, a model that we call "full dominance," from pedigree and phenotypic data. Estimates with this model such as presented here are very scarce both in livestock and in wild genetics. First, we estimated pedigree-based condensed probabilities of identity using recursion. Second, we developed an equivalent linear model in which variance components can be estimated using closed-form algorithms such as REML or Gibbs sampling and existing software. Third, we present a new method to refer the estimated variance components to meaningful parameters in a particular population, i.e., final partially inbred generations as opposed to outbred base populations. We applied these developments to three closed rabbit lines (A, V and H) selected for number of weaned at the Polytechnic University of Valencia. Pedigree and phenotypes are complete and span 43, 39 and 14 generations, respectively. Estimates of broad-sense heritability are 0.07, 0.07 and 0.05 at the base versus 0.07, 0.07 and 0.09 in the final generations. Narrow-sense heritability estimates are 0.06, 0.06 and 0.02 at the base versus 0.04, 0.04 and 0.01 at the final generations. There is also a reduction in the genotypic variance due to the negative additive-dominance correlation. Thus, the contribution of dominance variation is fairly large and increases with inbreeding and (over)compensates for the loss in additive variation. In addition, estimates of the additive-dominance correlation are -0.37, -0.31 and 0.00, in agreement with the few published estimates and theoretical considerations. © 2017 Blackwell Verlag GmbH.
Eckmanns, Tim; Abu Sin, Muna; Ducomble, Tanja; Harder, Thomas; Sixtensson, Madlen; Velasco, Edward; Weiß, Bettina; Kramarz, Piotr; Monnet, Dominique L.; Kretzschmar, Mirjam E.; Suetens, Carl
2016-01-01
. HAP and HA primary BSI were associated with the highest burden because of their high severity. The cumulative burden of the six HAIs was higher than the total burden of all other 32 communicable diseases included in the BCoDE 2009–2013 study. The main limitations of the study are the variability in the parameter estimates, in particular the disease models’ case fatalities, and the use of the Rhame and Sudderth formula for estimating incident number of cases from prevalence data. Conclusions We estimated the EU/EEA burden of HAIs in DALYs in 2011–2012 using a transparent and evidence-based approach that allows for combining estimates of morbidity and of mortality in order to compare with other diseases and to inform a comprehensive ranking suitable for prioritization. Our results highlight the high burden of HAIs and the need for increased efforts for their prevention and control. Furthermore, our model should allow for estimations of the potential benefit of preventive measures on the burden of HAIs in the EU/EEA. PMID:27755545
Conditional shape models for cardiac motion estimation
DEFF Research Database (Denmark)
Metz, Coert; Baka, Nora; Kirisli, Hortense
2010-01-01
We propose a conditional statistical shape model to predict patient specific cardiac motion from the 3D end-diastolic CTA scan. The model is built from 4D CTA sequences by combining atlas based segmentation and 4D registration. Cardiac motion estimation is, for example, relevant in the dynamic...
Vrazic, Sacha
2015-08-01
Preventing car accidents by monitoring the driver's physiological parameters is of high importance. However, existing measurement methods are not robust to the driver's body movements. In this paper, a system is presented that estimates the heartbeat from seat-embedded piezoelectric sensors and is robust to strong body movements. Multifractal q-Hurst exponents are used within a classifier to predict the sensor signal most likely to be best for use in an Interactive Multi-Model Extended Kalman Filter pulsation estimation procedure. The car vibration noise is reduced using an autoregressive exogenous model to predict the noise on the sensors. The performance of the proposed system was evaluated on real driving data at up to 100 km/h, including slaloms at high speed. It is shown that this method improves pulsation estimation under strong body movement by 36.7% compared to static sensor pulsation estimation, and appears to provide reliable pulsation-variability information for top-level analysis of drowsiness or other conditions.
van der Zijden, A M; Groen, B E; Tanck, E; Nienhuis, B; Verdonschot, N; Weerdesteyn, V
2017-03-21
Many research groups have studied fall impact mechanics to understand how fall severity can be reduced to prevent hip fractures. Yet, direct impact force measurements with force plates are restricted to a very limited repertoire of experimental falls. The purpose of this study was to develop a generic model for estimating hip impact forces (i.e. fall severity) in in vivo sideways falls without the use of force plates. Twelve experienced judokas performed sideways Martial Arts (MA) and Block ('natural') falls on a force plate, both with and without a mat on top. Data were analyzed to determine the hip impact force and to derive 11 selected (subject-specific and kinematic) variables. Falls from kneeling height were used to perform a stepwise regression procedure to assess the effects of these input variables and build the model. The final model includes four input variables, involving one subject-specific measure and three kinematic variables: maximum upper body deceleration, body mass, shoulder angle at the instant of 'maximum impact' and maximum hip deceleration. The results showed that estimated and measured hip impact forces were linearly related (explained variances ranging from 46 to 63%). Hip impact forces of MA falls onto the mat from a standing position (3650±916N) estimated by the final model were comparable with measured values (3698±689N), even though these data were not used for training the model. In conclusion, a generic linear regression model was developed that enables the assessment of fall severity through kinematic measures of sideways falls, without using force plates. Copyright © 2017 Elsevier Ltd. All rights reserved.
Rahaman, S. Abdul; Aruchamy, S.; Jegankumar, R.; Ajeez, S. Abdul
2015-10-01
Soil erosion is a widespread environmental challenge faced in Kallar watershed nowadays. Erosion is defined as the movement of soil by water and wind, and it occurs in Kallar watershed under a wide range of land uses. Erosion by water can be dramatic during storm events, resulting in wash-outs and gullies. It can also be insidious, occurring as sheet and rill erosion during heavy rains. Most of the soil lost by water erosion is by the processes of sheet and rill erosion. Land degradation and subsequent soil erosion and sedimentation play a significant role in impairing water resources within sub watersheds, watersheds and basins. Using conventional methods to assess soil erosion risk is expensive and time consuming. A comprehensive methodology that integrates Remote sensing and Geographic Information Systems (GIS), coupled with the use of an empirical model (Revised Universal Soil Loss Equation- RUSLE) to assess risk, can identify and assess soil erosion potential and estimate the value of soil loss. GIS data layers including, rainfall erosivity (R), soil erodability (K), slope length and steepness (LS), cover management (C) and conservation practice (P) factors were computed to determine their effects on average annual soil loss in the study area. The final map of annual soil erosion shows a maximum soil loss of 398.58 t/ h-1/ y-1. Based on the result soil erosion was classified in to soil erosion severity map with five classes, very low, low, moderate, high and critical respectively. Further RUSLE factors has been broken into two categories, soil erosion susceptibility (A=RKLS), and soil erosion hazard (A=RKLSCP) have been computed. It is understood that functions of C and P are factors that can be controlled and thus can greatly reduce soil loss through management and conservational measures.
International Nuclear Information System (INIS)
Alamir, M.; Witrant, E.; Della Valle, G.; Rouaud, O.; Josset, Ch.; Boillereaux, L.
2013-01-01
In this paper, a reduced-order mechanistic model is proposed for the evolution of temperature and humidity during French bread baking. The model parameters are identified using experimental data. The resulting model is then used to estimate the potential energy saving that can be obtained using jet impingement technology to increase the heat transfer efficiency. Results show up to 16% potential energy saving under certain assumptions. - Highlights: ► We developed a mechanistic model of heat and mass transfer in bread including different and multiple energy sources. ► An optimal control system tracks reference trajectories while minimizing energy consumption. ► The methodology is evaluated with the jet impingement technique. ► Results show a significant energy saving of about 17% with reasonable actuator variations
Clock error models for simulation and estimation
International Nuclear Information System (INIS)
Meditch, J.S.
1981-10-01
Mathematical models for the simulation and estimation of errors in precision oscillators used as time references in satellite navigation systems are developed. The results, based on all currently known oscillator error sources, are directly implementable on a digital computer. The simulation formulation is sufficiently flexible to allow for the inclusion or exclusion of individual error sources as desired. The estimation algorithms, following from Kalman filter theory, provide directly for the error analysis of clock errors in both filtering and prediction
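A common concrete form of such oscillator error models is a two-state system (phase offset and frequency drift) tracked by a Kalman filter. The sketch below is in that spirit; the noise intensities are illustrative, not values from the report.

```python
import numpy as np

# Two-state clock model (phase offset and frequency drift) tracked by a
# Kalman filter. Noise intensities are illustrative only.
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
Q = np.diag([1e-10, 1e-12])             # process noise covariance
H = np.array([[1.0, 0.0]])              # only the phase is observed
R = np.array([[1e-6]])                  # measurement noise covariance

rng = np.random.default_rng(1)
x_true = np.array([0.0, 1e-4])          # true initial phase and drift
x_hat = np.zeros(2)                     # filter starts knowing neither
P = np.eye(2)
for _ in range(500):
    x_true = F @ x_true + rng.multivariate_normal([0.0, 0.0], Q)
    z = H @ x_true + rng.normal(0.0, 1e-3, 1)
    # Predict
    x_hat = F @ x_hat
    P = F @ P @ F.T + Q
    # Update
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    x_hat = x_hat + (K @ (z - H @ x_hat)).ravel()
    P = (np.eye(2) - K @ H) @ P

print(round(abs(x_hat[1] - x_true[1]), 6))  # drift estimation error
```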
INTEGRATED SPEED ESTIMATION MODEL FOR MULTILANE EXPRESSWAYS
Hong, Sungjoon; Oguchi, Takashi
In this paper, an integrated speed-estimation model is developed based on empirical analyses for the basic sections of intercity multilane expressways under uncongested conditions. This model enables speed estimation for each lane at any site under arbitrary highway-alignment, traffic (traffic flow and truck percentage), and rainfall conditions. By combining this model and a lane-use model which estimates traffic distribution on the lanes by each vehicle type, it is also possible to estimate an average speed across all the lanes of one direction from a traffic demand by vehicle type under specific highway-alignment and rainfall conditions. This model is expected to be a tool for the evaluation of traffic performance for expressways when the performance measure is travel speed, which is necessary for Performance-Oriented Highway Planning and Design. Regarding the highway-alignment condition, two new estimators, called effective horizontal curvature and effective vertical grade, are proposed in this paper which take into account the influence of upstream and downstream alignment conditions. They are applied to the speed-estimation model, and it shows increased accuracy of the estimation.
Directory of Open Access Journals (Sweden)
Shunsuke Doi
2017-11-01
Full Text Available The accessibility, quantity, and quality of healthcare service providers are important for national health. In this study, we focused on geographic accessibility to estimate and evaluate future demand and supply of healthcare services. We constructed a simulation model called the patient access area model (PAAM), which simulates patients’ access time to healthcare service institutions using a geographic information system (GIS). Using this model, to evaluate the balance of future healthcare services demand and supply in small areas, we estimated the number of inpatients every five years in each area and compared it with the number of hospital beds within a one-hour drive from each area. In an experiment with the Tokyo metropolitan area as a target area, when we assumed hospital bed availability to be 80%, it was predicted that over 78,000 inpatients would not receive inpatient care in 2030. However, this number would decrease if we lowered the rate of inpatient care by 10% and shortened the average length of hospital stay. Using this model, recommendations can be made regarding what action should be undertaken and by when to prevent a dramatic increase in healthcare demand. This method can help plan the geographical resource allocation in healthcare services for healthcare policy.
International Nuclear Information System (INIS)
Ang, M R C O; Gonzalez, R M; Castro, P P M
2014-01-01
Rainfall, one of the important elements of the hydrologic cycle, is also the most difficult to model. Thus, accurate rainfall estimation is necessary, especially in localized catchment areas where the variability of rainfall is extremely high. Moreover, early warning of severe rainfall through timely and accurate estimation and forecasting could help prevent disasters from flooding. This paper presents the development of two rainfall estimation models that utilize a NARX-based neural network architecture, namely REIINN 1 and REIINN 2. These REIINN models, or Rainfall Estimation by Information Integration using Neural Networks, were trained using MTSAT cloud-top temperature (CTT) images and rainfall rates from the combined rain gauge and TMPA 3B40RT datasets. Model performance was assessed using two metrics: root mean square error (RMSE) and correlation coefficient (R). REIINN 1 yielded an RMSE of 8.1423 mm/3h and an overall R of 0.74652, while REIINN 2 yielded an RMSE of 5.2303 mm/3h and an overall R of 0.90373. The results, especially those of REIINN 2, are very promising for satellite-based rainfall estimation at a catchment scale. It is believed that model performance and accuracy will greatly improve with denser and more spatially distributed in-situ rainfall measurements to calibrate the model. The models proved the viability of using remote sensing images, with their good spatial coverage, near-real-time availability, and relatively low acquisition cost, as an alternative source for rainfall estimation to complement existing ground-based measurements.
Directory of Open Access Journals (Sweden)
Milčić Dragan S.
2012-01-01
Full Text Available Friction stir welding is a solid-state welding technique that utilizes the thermomechanical influence of the rotating welding tool on the parent material, resulting in a monolithic joint - the weld. At the contact of the welding tool and parent material, significant stirring and deformation of the parent material occurs, and during this process mechanical energy is partially transformed into heat. The generated heat affects the temperature of the welding tool and parent material, so the proposed analytical model for estimating the amount of generated heat can be verified through temperature: the analytically determined heat is used for numerical estimation of the temperature of the parent material, and this temperature is compared to the experimentally determined temperature. The numerical solution is obtained using the finite difference method - an explicit scheme with adaptive grid, considering the influence of temperature on the material's conductivity, contact conditions between the welding tool and parent material, material flow around the welding tool, etc. The analytical model shows that 60-100% of the mechanical power given to the welding tool is transformed into heat, while the comparison of results shows a maximal relative difference between the analytical and experimental temperatures of about 10%.
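The explicit finite-difference approach mentioned above can be illustrated on a 1-D conduction problem; the authors' solver additionally uses an adaptive grid, temperature-dependent conductivity and tool contact conditions. All values below are illustrative only.

```python
import numpy as np

# Explicit finite-difference scheme for 1-D heat conduction, the same
# family of method used above. Parameter values are illustrative.
L, nx, alpha = 0.1, 51, 1e-5        # length (m), nodes, diffusivity (m^2/s)
dx = L / (nx - 1)
dt = 0.4 * dx**2 / alpha            # meets the stability limit dt <= dx^2/(2*alpha)
T = np.full(nx, 20.0)               # initial temperature (deg C)

for _ in range(2000):
    # New interior value is a convex combination of neighbours (r = 0.4)
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    T[0], T[-1] = 400.0, 20.0       # hot tool contact / ambient ends

print(round(T[nx // 2], 1))         # mid-point temperature after ~320 s
```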
Li, Xiaoyu; Pan, Ke; Fan, Guodong; Lu, Rengui; Zhu, Chunbo; Rizzoni, Giorgio; Canova, Marcello
2017-11-01
State of energy (SOE) is an important index for the electrochemical energy storage system in electric vehicles. In this paper, a robust state of energy estimation method in combination with a physical model parameter identification method is proposed to achieve accurate battery state estimation at different operating conditions and different aging stages. A physics-based fractional order model with variable solid-state diffusivity (FOM-VSSD) is used to characterize the dynamic performance of a LiFePO4/graphite battery. In order to update the model parameter automatically at different aging stages, a multi-step model parameter identification method based on the lexicographic optimization is especially designed for the electric vehicle operating conditions. As the battery available energy changes with different applied load current profiles, the relationship between the remaining energy loss and the state of charge, the average current as well as the average squared current is modeled. The SOE with different operating conditions and different aging stages are estimated based on an adaptive fractional order extended Kalman filter (AFEKF). Validation results show that the overall SOE estimation error is within ±5%. The proposed method is suitable for the electric vehicle online applications.
International Nuclear Information System (INIS)
Yang, Fangfang; Xing, Yinjiao; Wang, Dong; Tsui, Kwok-Leung
2016-01-01
Highlights: • Three different model-based filtering algorithms for SOC estimation are compared. • A combined dynamic loading profile is proposed to evaluate the three algorithms. • Robustness against uncertainty of initial states of SOC estimators is investigated. • Battery capacity degradation is considered in SOC estimation. - Abstract: Accurate state-of-charge (SOC) estimation is critical for the safety and reliability of battery management systems in electric vehicles. Because SOC cannot be directly measured and SOC estimation is affected by many factors, such as ambient temperature, battery aging, and current rate, a robust SOC estimation approach needs to be developed to deal with time-varying and nonlinear battery systems. In this paper, three popular model-based filtering algorithms, namely the extended Kalman filter, the unscented Kalman filter, and the particle filter, are used to estimate SOC, and their performances regarding tracking accuracy, computation time, robustness against uncertainty of initial SOC values, and battery degradation are compared. To evaluate the performances of these algorithms, a new combined dynamic loading profile composed of the dynamic stress test, the federal urban driving schedule and the US06 is proposed. The comparison results showed that the unscented Kalman filter is the most robust to different initial values of SOC, while the particle filter shows the fastest convergence when an initial guess of SOC is far from the true initial SOC.
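As a minimal illustration of the model-based filtering idea, the sketch below runs only an extended Kalman filter on a deliberately simplified one-state battery model with a hypothetical linear OCV curve; it is not any of the models compared in the paper.

```python
import numpy as np

# One-state EKF sketch for SOC: coulomb counting as the process model,
# terminal voltage through a HYPOTHETICAL linear OCV-SOC curve as the
# measurement. All parameters are illustrative.
def ocv(soc):
    return 3.2 + 0.7 * soc          # hypothetical OCV curve (V)

DOCV = 0.7                          # OCV slope = measurement Jacobian
Q_CAP = 2.0 * 3600                  # cell capacity (As)
R0 = 0.05                           # ohmic resistance (ohm)
Qn, Rn = 1e-7, 1e-4                 # process / measurement noise variances

rng = np.random.default_rng(2)
soc_true, soc_hat, P = 0.9, 0.5, 1.0   # note the wrong initial guess
for _ in range(600):
    i = 2.0                                        # constant 2 A discharge
    soc_true -= i / Q_CAP
    v = ocv(soc_true) - R0 * i + rng.normal(0.0, 1e-3)
    # Predict (coulomb counting)
    soc_hat -= i / Q_CAP
    P += Qn
    # Update (voltage measurement)
    K = P * DOCV / (DOCV * P * DOCV + Rn)
    soc_hat += K * (v - (ocv(soc_hat) - R0 * i))
    P *= 1.0 - K * DOCV

print(round(abs(soc_hat - soc_true), 3))  # recovers despite the bad init
```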
Hayes, Daniel J.; Turner, David P.; Stinson, Graham; McGuire, A. David; Wei, Yaxing; West, Tristram O.; Heath, Linda S.; de Jong, Bernardus; McConkey, Brian G.; Birdsey, Richard A.; Kurz, Werner A.; Jacobson, Andrew R.; Huntzinger, Deborah N.; Pan, Yude; Post, W. Mac; Cook, Robert B.
2012-01-01
We develop an approach for estimating net ecosystem exchange (NEE) using inventory-based information over North America (NA) for a recent 7-year period (ca. 2000–2006). The approach notably retains information on the spatial distribution of NEE, or the vertical exchange between land and atmosphere of all non-fossil fuel sources and sinks of CO2, while accounting for lateral transfers of forest and crop products as well as their eventual emissions. The total NEE estimate of a -327 ± 252 TgC yr-1 sink for NA was driven primarily by CO2 uptake in the Forest Lands sector (-248 TgC yr-1), largely in the Northwest and Southeast regions of the US, and in the Crop Lands sector (-297 TgC yr-1), predominantly in the Midwest US states. These sinks are counteracted by the carbon source estimated for the Other Lands sector (+218 TgC yr-1), where much of the forest and crop products are assumed to be returned to the atmosphere (through livestock and human consumption). The ecosystems of Mexico are estimated to be a small net source (+18 TgC yr-1) due to land use change between 1993 and 2002. We compare these inventory-based estimates with results from a suite of terrestrial biosphere and atmospheric inversion models, where the mean continental-scale NEE estimate for each ensemble is -511 TgC yr-1 and -931 TgC yr-1, respectively. In the modeling approaches, all sectors, including Other Lands, were generally estimated to be a carbon sink, driven in part by assumed CO2 fertilization and/or lack of consideration of carbon sources from disturbances and product emissions. Additional fluxes not measured by the inventories, although highly uncertain, could add an additional -239 TgC yr-1 to the inventory-based NA sink estimate, thus suggesting some convergence with the modeling approaches.
Directory of Open Access Journals (Sweden)
Yuanyuan Liu
2013-08-01
Full Text Available Accurate estimation of the state of charge (SOC) of batteries is one of the key problems in a battery management system. This paper proposes an adaptive SOC estimation method based on unscented Kalman filter algorithms for lithium-ion (Li-ion) batteries. First, an enhanced battery model is proposed to include the impacts of different discharge rates and temperatures. An adaptive joint estimation of the battery SOC and battery internal resistance is then presented to enhance system robustness against battery aging. The SOC estimation algorithm has been developed and verified through experiments on different types of Li-ion batteries. The results indicate that the proposed method provides an accurate SOC estimation and is computationally efficient, making it suitable for embedded system implementation.
International Nuclear Information System (INIS)
Ye, Min; Guo, Hui; Cao, Binggang
2017-01-01
Highlights: • An improved adaptive particle swarm filter method is proposed. • A SoC estimation method for the battery based on the adaptive particle swarm filter is presented. • The algorithm is validated by case studies of batteries at different aging extents. • The effectiveness and applicability of the algorithm are validated on LiPB batteries. - Abstract: Obtaining accurate parameters, state of charge (SoC) and capacity of a lithium-ion battery is crucial for a battery management system, and establishing a battery model online is complex. In addition, the errors and perturbations of the battery model dramatically increase throughout the battery lifetime, making it more challenging to model the battery online. To overcome these difficulties, this paper provides three contributions: (1) To improve the robustness of the adaptive particle filter algorithm, an error analysis method is added to the traditional adaptive particle swarm algorithm. (2) An online adaptive SoC estimator based on the improved adaptive particle filter is presented; this estimator can eliminate the estimation error due to battery degradation and initial SoC errors. (3) The effectiveness of the proposed method is verified using various initial states of lithium nickel manganese cobalt oxide (NMC) cells and lithium-ion polymer (LiPB) batteries. The experimental analysis shows that the maximum errors are less than 1% for both the voltage and SoC estimations and that the convergence time of the SoC estimation decreased to 120 s.
Directory of Open Access Journals (Sweden)
Tianxiang Cui
2017-12-01
Full Text Available Accurately quantifying gross primary production (GPP) is of vital importance to understanding the global carbon cycle. Light-use efficiency (LUE) models and process-based models have been widely used to estimate GPP at different spatial and temporal scales. However, large uncertainties remain in quantifying GPP, especially for croplands. Recently, remote measurements of solar-induced chlorophyll fluorescence (SIF) have provided a new perspective to assess actual levels of plant photosynthesis. In the presented study, we evaluated the performance of three approaches, including the LUE-based multi-source data synergized quantitative (MuSyQ) GPP algorithm, the process-based boreal ecosystem productivity simulator (BEPS) model, and the SIF-based statistical model, in estimating the diurnal courses of GPP at a maize site in Zhangye, China. A field campaign was conducted to acquire synchronous far-red SIF (SIF760) observations and flux tower-based GPP measurements. Our results showed that both SIF760 and GPP were linearly correlated with APAR, and the SIF760-GPP relationship was adequately characterized using a linear function. The evaluation of the modeled GPP against the GPP measured from the tower demonstrated that all three approaches provided reasonable estimates, with R2 values of 0.702, 0.867, and 0.667 and RMSE values of 0.247, 0.153, and 0.236 mg m⁻² s⁻¹ for the MuSyQ-GPP, BEPS and SIF models, respectively. This study indicated that the BEPS model simulated the GPP best due to its efficiency in describing the underlying physiological processes of sunlit and shaded leaves. The MuSyQ-GPP model was limited by its simplification of some critical ecological processes and its weakness in characterizing the contribution of shaded leaves. The SIF760-based model demonstrated a relatively limited accuracy but showed its potential in modeling GPP without dependency on climate inputs in short-term studies.
Predicting dermal penetration for ToxCast chemicals using in silico estimates for diffusion in combination with physiologically based pharmacokinetic (PBPK) modeling.Evans, M.V., Sawyer, M.E., Isaacs, K.K, and Wambaugh, J.With the development of efficient high-throughput (HT) in ...
International Nuclear Information System (INIS)
He, Hongwen; Zhang, Xiaowei; Xiong, Rui; Xu, Yongli; Guo, Hongqiang
2012-01-01
This paper presents a method to estimate the state-of-charge (SOC) of a lithium-ion battery based on an online identification of its open-circuit voltage (OCV), according to the battery’s intrinsic relationship between the SOC and the OCV, for application in electric vehicles. First, an equivalent circuit model with n RC networks is employed to model the polarization characteristic and the dynamic behavior of the lithium-ion battery; the corresponding equations are built to describe its electric behavior, and a recursive function is deduced for the online identification of the OCV, which is implemented by a recursive least squares (RLS) algorithm with an optimal forgetting factor. The models with different RC networks are evaluated based on terminal voltage comparisons between the model-based simulation and the experiment. Then the OCV-SOC lookup table is built from experimental data by a linear interpolation of the battery voltages at the same SOC during two consecutive discharge and charge cycles. Finally, a verifying experiment is carried out based on nine Urban Dynamometer Driving Schedules. It indicates that the proposed method can ensure an acceptable accuracy of SOC estimation for online application, with a maximum error of less than 5.0%. -- Highlights: ► An equivalent circuit model with n RC networks is built for lithium-ion batteries. ► A recursive function is deduced for the online estimation of the model parameters like OCV and R_O. ► The relationship between SOC and OCV is built from experiments with a linear interpolation method. ► The experiments show the online model-based SOC estimation is reasonable with enough accuracy.
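The RLS identification with a forgetting factor can be sketched on a stripped-down model v = OCV - R0*i (the paper identifies a full n-RC network model); all parameter values below are illustrative.

```python
import numpy as np

# Recursive least squares with a forgetting factor, applied to the
# simplified battery model v = OCV - R0 * i. The true parameter values
# and noise level are illustrative, not from the paper.
rng = np.random.default_rng(3)
ocv_true, r0_true = 3.7, 0.08
theta = np.zeros(2)                 # estimates of [OCV, R0]
P = np.eye(2) * 1e3                 # large initial covariance
lam = 0.98                          # forgetting factor

for _ in range(300):
    i = rng.uniform(0.5, 5.0)                         # load current (A)
    v = ocv_true - r0_true * i + rng.normal(0.0, 1e-3)  # measured voltage
    phi = np.array([1.0, -i])                         # regressor vector
    k = P @ phi / (lam + phi @ P @ phi)               # RLS gain
    theta = theta + k * (v - phi @ theta)             # parameter update
    P = (P - np.outer(k, phi @ P)) / lam              # covariance update

print(theta.round(3))               # approximately [3.7, 0.08]
```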
International Nuclear Information System (INIS)
Hua, L Z; Liu, H; Zhang, X L; Zheng, Y; Man, W; Yin, K
2014-01-01
Net Primary Productivity (NPP) is a key component of the terrestrial carbon cycle. Research on net primary productivity helps in understanding the amount of carbon fixed by terrestrial vegetation and its influencing factors. Model simulation is considered a cost-effective and time-efficient method for the estimation of regional and global NPP. In this paper, a terrestrial biosphere model, CASA (Carnegie Ames Stanford Approach), was applied to estimate monthly NPP in the Minnan urban agglomeration (i.e. Xiamen, Zhangzhou and Quanzhou cities) of Fujian province, China, in 2009 and 2010, by incorporating satellite observations of SPOT Vegetation NDVI data together with other climatic parameters and a landuse map. The model estimates the average annual terrestrial NPP of the Minnan area as 16.3 million Mg C. NPP decreased from southwest to northeast. Higher NPP values exceeding 720 gC·m⁻²·a⁻¹ occurred in northern Zhangzhou city and lower values under 500 gC·m⁻²·a⁻¹ occurred in some areas of northeastern Quanzhou city. Seasonal variations of NPP were large: the three summer months accounted for about 45% of the total annual NPP, and NPP values were very low in winter. From 2009 to 2010, annual NPP showed a slight decrease of approximately 7.8%, because the annual temperature in 2010 declined 13.6% compared with 2009, despite an increase in rainfall of about 34.3%. The results indicate that temperature was a main limiting factor on vegetation growth, while water was not a limiting factor in this rainy area
Energy Technology Data Exchange (ETDEWEB)
Lam, Long
2011-08-23
In this thesis the development of the state of health of Li-ion battery cells under possible real-life operating conditions in electric cars has been characterised. Furthermore, a practical circuit-based model for Li-ion cells has been developed that is capable of modelling the cell voltage behaviour under various operating conditions. The Li-ion cell model can be implemented in simulation programs and be directly connected to a model of the rest of the electronic system in electric vehicles. Most existing battery models are impractical for electric vehicle system designers and require extensive background knowledge of electrochemistry to be implemented. Furthermore, many models do not take the effect of regenerative braking into account and are obtained from testing fully charged cells. However, in real-life applications electric vehicles are not always fully charged and utilise regenerative braking to save energy. To obtain a practical circuit model based on real operating conditions and to model the state of health of electric vehicle cells, numerous 18650 size LiFePO4 cells have been tested under possible operating conditions. Capacity fading was chosen as the state of health parameter, and the capacity fading of different cells was compared with the charge processed instead of cycles. Tests have shown that the capacity fading rate is dependent on temperature, charging C-rate, state of charge and depth of discharge. The obtained circuit model is capable of simulating the voltage behaviour under various temperatures and C-rates with a maximum error of 14 mV. However, modelling the effect of different temperatures and C-rates increases the complexity of the model. The model is easily adjustable and the choice is given to the electric vehicle system designer to decide which operating conditions to take into account. By combining the test results for the capacity fading and the proposed circuit model, recommendations to optimise the battery lifetime are proposed.
Directory of Open Access Journals (Sweden)
Bai-Jian Wei
2016-09-01
Full Text Available Resin transfer molding (RTM) is a popular manufacturing technique that produces fiber reinforced polymer (FRP) composites. In this paper, a model-assisted flow front control system is developed based on real-time estimation of the permeability/porosity ratio using information acquired by a visualization system. In the proposed control system, a radial basis function (RBF) network meta-model is utilized to predict the position of the future flow front by inputting the injection pressure, the current position of the flow front, and the estimated ratio. By conducting optimization based on the meta-model, the value of injection pressure to be implemented at each step is obtained. Moreover, a cascade control structure is established to further improve the control performance. Experiments show that the developed system successfully enhances the performance of flow front control in RTM. In particular, the cascade structure makes the control system robust to model mismatch.
Guo, Shiyi; Mai, Ying; Zhao, Hongying; Gao, Pengqi
2013-05-01
The airborne video streams of small-UAVs are commonly plagued by distracting jittery and shaking motions, disorienting rotations, noisy and distorted images, and other unwanted movements. These problems collectively make it very difficult for observers to obtain useful information from the video. Due to the small payload of small-UAVs, it is a priority to improve image quality by means of electronic image stabilization. But when a small-UAV makes a turn, affected by its flight characteristics, the video easily becomes oblique. This brings many difficulties to electronic image stabilization technology. The homography model performs well in oblique image motion estimation, while bringing great challenges to intentional motion estimation. Therefore, in this paper, we focus on solving the problem of video stabilization when small-UAVs bank and turn. We assume that the small-UAV flies along an arc of fixed turning radius. For this reason, after a series of experimental analyses of the flight characteristics and the path along which small-UAVs turn, we present a new method to estimate the intentional motion, in which the path of the frame center is used to fit the video's moving track. Meanwhile, dynamic mosaicking of the image sequences is performed to make up for the limited field of view. Finally, the proposed algorithm was implemented and validated on actual airborne videos. The results show that the proposed method is effective in stabilizing the oblique video of small-UAVs.
Risk Probability Estimating Based on Clustering
DEFF Research Database (Denmark)
Chen, Yong; Jensen, Christian D.; Gray, Elizabeth
2003-01-01
of prior experiences, recommendations from a trusted entity or the reputation of the other entity. In this paper we propose a dynamic mechanism for estimating the risk probability of a certain interaction in a given environment using hybrid neural networks. We argue that traditional risk assessment models...... from the insurance industry do not directly apply to ubiquitous computing environments. Instead, we propose a dynamic mechanism for risk assessment, which is based on pattern matching, classification and prediction procedures. This mechanism uses an estimator of risk probability, which is based...
View Estimation Based on Value System
Takahashi, Yasutake; Shimada, Kouki; Asada, Minoru
Estimation of a caregiver's view is one of the most important capabilities for a child to understand the behavior demonstrated by the caregiver, that is, to infer the intention of the behavior and/or to learn the observed behavior efficiently. We hypothesize that the child develops this ability in the same way as behavior learning motivated by an intrinsic reward: he/she updates the model of his/her own estimated view during behavior imitated from observation of the caregiver's demonstration, based on minimizing the estimation error of the reward during the behavior. From this view, this paper shows a method for acquiring such a capability based on a value system from which values can be obtained by reinforcement learning. The parameters of the view estimation are updated based on the temporal difference error (hereafter TD error: the estimation error of the state value), analogous to the way the parameters of the state value of the behavior are updated based on the TD error. Experiments with simple humanoid robots show the validity of the method, and the developmental process, parallel to a young child's estimation of his/her own view during imitation of the caregiver's observed behavior, is discussed.
Parameter Estimation of Partial Differential Equation Models
Xun, Xiaolei
2013-09-01
Partial differential equation (PDE) models are commonly used to model complex dynamic systems in applied sciences such as biology and finance. The forms of these PDE models are usually proposed by experts based on their prior knowledge and understanding of the dynamic system. Parameters in PDE models often have interesting scientific interpretations, but their values are often unknown and need to be estimated from the measurements of the dynamic system in the presence of measurement errors. Most PDEs used in practice have no analytic solutions, and can only be solved with numerical methods. Currently, methods for estimating PDE parameters require repeatedly solving PDEs numerically under thousands of candidate parameter values, and thus the computational load is high. In this article, we propose two methods to estimate parameters in PDE models: a parameter cascading method and a Bayesian approach. In both methods, the underlying dynamic process modeled with the PDE model is represented via basis function expansion. For the parameter cascading method, we develop two nested levels of optimization to estimate the PDE parameters. For the Bayesian method, we develop a joint model for data and the PDE and develop a novel hierarchical model allowing us to employ Markov chain Monte Carlo (MCMC) techniques to make posterior inference. Simulation studies show that the Bayesian method and parameter cascading method are comparable, and both outperform other available methods in terms of estimation accuracy. The two methods are demonstrated by estimating parameters in a PDE model from long-range infrared light detection and ranging data. Supplementary materials for this article are available online. © 2013 American Statistical Association.
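A toy version of the estimation problem helps fix ideas: recover the diffusion coefficient of u_t = theta * u_xx by matching a numerical solution to noisy data. The brute-force grid search below is exactly the kind of repeated-PDE-solve cost that the paper's parameter cascading and Bayesian methods are designed to avoid; all values are illustrative.

```python
import numpy as np

# Recover theta in u_t = theta * u_xx by matching an explicit
# finite-difference solution to noisy synthetic observations.
def solve(theta, nx=41, nt=200, dt=1e-4):
    x = np.linspace(0.0, 1.0, nx)
    dx = x[1] - x[0]
    u = np.sin(np.pi * x)               # initial condition
    for _ in range(nt):                 # explicit scheme, theta*dt/dx^2 < 0.5
        u[1:-1] += theta * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])
    return u

rng = np.random.default_rng(4)
data = solve(0.5) + rng.normal(0.0, 1e-3, 41)   # synthetic noisy data

grid = np.linspace(0.1, 1.0, 91)                # candidate theta values
sse = [np.sum((solve(t) - data) ** 2) for t in grid]
theta_hat = grid[int(np.argmin(sse))]
print(round(theta_hat, 2))                      # close to the true 0.5
```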
Ramesh, Adepu; Ashritha, Kilari; Kumar, Molugaram
2018-04-01
Walking has always been a prime mode of human mobility for short-distance travel. Traffic congestion has become a major problem for safe pedestrian crossing in most metropolitan cities. This has emphasized the need for sufficient pedestrian gaps for safe crossing on urban roads. The present work aims at understanding the factors that influence pedestrian crossing behaviour. Four locations in Hyderabad city were chosen for identification of pedestrian crossing behaviour, gap characteristics, waiting time, etc. From the study it was observed that pedestrian behaviour and crossing patterns differ and are influenced by land use pattern. A gap acceptance model was developed from the data for improving pedestrian safety at mid-block locations; the model was validated using the existing data. Pedestrian delay at intersections was estimated using the Highway Capacity Manual (HCM). It was observed that field delays are lower than delays derived from the HCM method.
Vakilian, Katayon; Mousavi, Seyed Abbas; Keramat, Afsaneh
2014-01-13
In many countries, negative social attitudes towards sensitive issues such as sexual behavior have resulted in false and invalid data concerning this issue. This is an analytical cross-sectional study, in which a total of 1500 single students from universities of Shahroud City were sampled using a multi-stage technique. The students were assured that the information they disclosed to the researcher would be treated as private and confidential. The results were analyzed using the crosswise model, crosswise regression, t-tests and chi-square tests. It appears that the prevalence of sexual behavior among Iranian youth is 41% (CI = 36-53). Findings showed that the estimated prevalence of sexual relationships among single Iranian youth is high. Thus, devising training models according to the Islamic-Iranian culture is necessary in order to prevent risky sexual behavior.
International Nuclear Information System (INIS)
Zeng Zhenzhong; Piao Shilong; Yin Guodong; Peng Shushi; Lin Xin; Ciais, Philippe; Myneni, Ranga B
2012-01-01
We applied a land water mass balance equation over 59 major river basins during 2003–2009 to estimate evapotranspiration (ET), using as input terrestrial water storage anomaly (TWSA) data from the GRACE satellites, precipitation and in situ runoff measurements. We found that the terrestrial water storage change cannot be neglected in the estimation of ET on an annual time step, especially in areas with relatively low ET values. We developed a spatial regression model of ET by integrating precipitation, temperature and satellite-derived normalized difference vegetation index (NDVI) data, and used this model to extrapolate the spatio-temporal patterns of changes in ET from 1982 to 2009. We found that the globally averaged land ET is about 604 mm yr⁻¹, with a range of 558–650 mm yr⁻¹. From 1982 to 2009, global land ET was found to increase at a rate of 1.10 mm yr⁻², with the Amazon regions and Southeast Asia showing the highest increasing trends in ET. Further analyses, however, show that the increase in global land ET mainly occurred between the 1980s and the 1990s. The magnitude, and even the sign, of the trend over the 2000s depended substantially on the choice of the beginning year. This suggests a non-significant trend in global land ET over the last decade.
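The basin-scale water balance underlying the ET estimate is ET = P - R - dTWS, where dTWS is the GRACE-derived change in terrestrial water storage. A minimal sketch with invented basin values, not figures from the study:

```python
import numpy as np

# Annual water balance ET = P - R - dTWS per basin. The numbers are
# illustrative, not from the 59-basin dataset.
precip = np.array([900.0, 1200.0, 450.0])   # P, mm/yr per basin
runoff = np.array([250.0, 500.0, 60.0])     # R, mm/yr (gauged)
d_tws = np.array([15.0, -30.0, 5.0])        # GRACE storage change, mm/yr

et = precip - runoff - d_tws
print(et)                                   # -> [635. 730. 385.]
```

Note that a negative storage change (basin 2) raises the ET estimate, which is why the abstract stresses that the storage term cannot be neglected.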
Keller, Brad M.; Nathan, Diane L.; Conant, Emily F.; Kontos, Despina
2012-03-01
Breast percent density (PD%), as measured mammographically, is one of the strongest known risk factors for breast cancer. While the majority of studies to date have focused on PD% assessment from digitized film mammograms, digital mammography (DM) is becoming increasingly common and allows for direct PD% assessment at the time of imaging. This work investigates the accuracy of a generalized linear model-based (GLM) estimation of PD% from raw and postprocessed digital mammograms, utilizing image acquisition physics, patient characteristics and gray-level intensity features of the specific image. The model is trained in a leave-one-woman-out fashion on a series of 81 cases for which bilateral, mediolateral-oblique DM images were available in both raw and postprocessed format. Baseline continuous and categorical density estimates were provided by a trained breast-imaging radiologist. Regression analysis is performed and Pearson's correlation, r, and Cohen's kappa, κ, are computed. The GLM PD% estimation model performed well on both processed (r=0.89, p<0.001) and raw (r=0.75, p<0.001) images. Model agreement with radiologist-assigned density categories was also high for processed (κ=0.79, p<0.001) and raw (κ=0.76, p<0.001) images. Model-based prediction of breast PD% could allow for a reproducible estimation of breast density, providing a rapid risk assessment tool for clinical practice.
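A minimal sketch of the leave-one-woman-out linear-model workflow described above, using synthetic features in place of the study's acquisition-physics, patient and gray-level descriptors:

```python
import numpy as np

# Leave-one-out training/evaluation of a linear model for PD%.
# All data here are synthetic; only the cross-validation pattern and
# the Pearson-r evaluation mirror the abstract.
rng = np.random.default_rng(0)
X = rng.normal(size=(81, 3))                         # 81 cases, 3 features
y = 25.0 + X @ np.array([8.0, -3.0, 2.0]) \
    + rng.normal(scale=1.0, size=81)                 # PD% targets

preds = np.empty_like(y)
for i in range(len(y)):
    mask = np.arange(len(y)) != i                    # hold out case i
    A = np.column_stack([np.ones(mask.sum()), X[mask]])
    w, *_ = np.linalg.lstsq(A, y[mask], rcond=None)  # fit on the rest
    preds[i] = w[0] + X[i] @ w[1:]                   # predict held-out case

r = np.corrcoef(preds, y)[0, 1]                      # Pearson correlation
print(round(r, 3))
```

With real mammographic features the correlation would of course be lower than on this noiseless toy; the abstract reports r=0.89 (processed) and r=0.75 (raw).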
International Nuclear Information System (INIS)
Fares, Robert L.; Meyers, Jeremy P.; Webber, Michael E.
2014-01-01
Highlights: • A model is implemented to describe the dynamic voltage of a vanadium flow battery. • The model is used with optimization to maximize the utility of the battery. • A vanadium flow battery's value for regulation service is approximately $1500/kW. Abstract: Building on past work seeking to value emerging energy storage technologies in grid-based applications, this paper introduces a dynamic model-based framework to value a vanadium redox flow battery (VRFB) participating in Texas' organized electricity market. Our model describes the dynamic behavior of a VRFB system's voltage and state of charge based on the instantaneous charging or discharging power required from the battery. We formulate an optimization problem that incorporates the model to show the potential value of a VRFB used for frequency regulation service in Texas. The optimization is implemented in Matlab using a large-scale, interior-point, nonlinear optimization algorithm, with the objective function gradient, nonlinear constraint gradients, and Hessian matrix specified analytically. Utilizing market prices and other relevant data from the Electric Reliability Council of Texas (ERCOT), we find that a VRFB system used for frequency regulation service could be worth approximately $1500/kW.
DEFF Research Database (Denmark)
Berg, Casper Willestofte; Nielsen, Anders; Kristensen, Kasper
2014-01-01
Indices of abundance from fishery-independent trawl surveys constitute an important source of information for many fish stock assessments. Indices are often calculated using area stratified sample means on age-disaggregated data, and finally treated in stock assessment models as independent...... observations. We evaluate a series of alternative methods for calculating indices of abundance from trawl survey data (delta-lognormal, delta-gamma, and Tweedie using Generalized Additive Models) as well as different error structures for these indices when used as input in an age-based stock assessment model...... the different indices produced. The stratified mean method is found much more imprecise than the alternatives based on GAMs, which are found to be similar. Having time-varying index variances is found to be of minor importance, whereas the independence assumption is not only violated but has significant impact...
Modeling of Yield Estimation for the Main Crops in Iran Based on Mechanization Index (hp ha-1)
Directory of Open Access Journals (Sweden)
K Abbasi
2014-09-01
Full Text Available Agricultural mechanization is a means of transitioning from traditional agriculture towards an industrial and sustainable one. Due to the limitation of natural resources and an increasing population, we need economical production of agricultural crops, and agricultural mechanization plays a remarkable role in reaching this goal. It is therefore necessary to take an extensive view of mechanization, because with its help agricultural inputs such as seeds, fertilizer and even water and soil can be managed effectively for economical and sustainable production. This study was carried out in many provinces of Iran. Data on agricultural tractors and cereal combine harvesters were first gathered by means of a questionnaire. The tractors were categorized into four power levels: less than 45, 45 to 80, 80 to 110, and more than 110 hp. The cereal combine harvesters were likewise categorized, into three power levels (100 to 110, 110 to 155, and 155 to 210 hp) and three age classes (less than 13, 13 to 20, and more than 20 years). Information regarding cultivation areas, production volume, and yield of the main crops was gathered from the statistics of the Ministry of Jihad-e-Agriculture. The agricultural mechanization level index (hp ha-1) in each province was then calculated. Four main crops, including irrigated and rain-fed wheat and irrigated and rain-fed barley, which met the criteria required by the model, were statistically analyzed. Correlation analysis was carried out in order to obtain an effective model relating the yield of the four main crops in Iran to the agricultural mechanization level index. The Pearson correlation index showed that there is a direct and significant correlation between these variables. Subsequently, outliers were identified in order to obtain a model with the efficiency necessary to predict yield from the mechanization level index, by scatter diagram and estimating regression lines in 1
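The core computation, a province-level mechanization index (hp ha-1) correlated against yield, can be sketched as follows; the provincial numbers are invented for illustration, not the study's data:

```python
import numpy as np

# Mechanization level index (total tractor hp / cultivated area) per
# province, and its Pearson correlation with crop yield.
tractor_hp = np.array([52000., 81000., 33000., 120000., 64000.])  # total hp
area_ha    = np.array([40000., 50000., 30000.,  65000., 45000.])  # cultivated ha
yield_kg   = np.array([ 2900.,  3400.,  2500.,   4100.,  3100.])  # kg/ha

mech_index = tractor_hp / area_ha              # hp ha^-1 for each province
r = np.corrcoef(mech_index, yield_kg)[0, 1]    # Pearson correlation index
print(mech_index.round(2), round(r, 3))
```

A strong positive r on real provincial data is what justifies fitting regression lines of yield on the mechanization index, as the abstract describes.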
Teletactile System Based on Mechanical Properties Estimation
Directory of Open Access Journals (Sweden)
Mauro M. Sette
2011-01-01
Full Text Available Tactile feedback is a major missing feature in minimally invasive procedures; it is an essential means of diagnosis and orientation during surgical procedures. Previous works have presented a remote palpation feedback system based on the coupling between a pressure sensor and a general haptic interface. Here a new approach is presented based on the direct estimation of the tissue mechanical properties and finally their presentation to the operator by means of a haptic interface. The approach presents different technical difficulties and some solutions are proposed: the implementation of a fast Young’s modulus estimation algorithm, the implementation of a real time finite element model, and finally the implementation of a stiffness estimation approach in order to guarantee the system’s stability. The work is concluded with an experimental evaluation of the whole system.
Crivori, Patrizia; Zamora, Ismael; Speed, Bill; Orrenius, Christian; Poggesi, Italo
2004-03-01
A number of computational approaches are being proposed for an early optimization of ADME (absorption, distribution, metabolism and excretion) properties to increase the success rate in drug discovery. The present study describes the development of an in silico model able to estimate, from the three-dimensional structure of a molecule, the stability of a compound with respect to the human cytochrome P450 (CYP) 3A4 enzyme activity. Stability data were obtained by measuring the amount of unchanged compound remaining after a standardized incubation with human cDNA-expressed CYP3A4. The computational method transforms the three-dimensional molecular interaction fields (MIFs) generated from the molecular structure into descriptors (VolSurf and Almond procedures). The descriptors were correlated to the experimental metabolic stability classes by a partial least squares discriminant procedure. The model was trained using a set of 1800 compounds from the Pharmacia collection and was validated using two test sets: the first one including 825 compounds from the Pharmacia collection and the second one consisting of 20 known drugs. This model correctly predicted 75% of the first and 85% of the second test set and showed a precision above 86% to correctly select metabolically stable compounds. The model appears a valuable tool in the design of virtual libraries to bias the selection toward more stable compounds. Abbreviations: ADME - absorption, distribution, metabolism and excretion; CYP - cytochrome P450; MIFs - molecular interaction fields; HTS - high throughput screening; DDI - drug-drug interactions; 3D - three-dimensional; PCA - principal components analysis; CPCA - consensus principal components analysis; PLS - partial least squares; PLSD - partial least squares discriminant; GRIND - grid independent descriptors; GRID - software originally created and developed by Professor Peter Goodford.
Lanzafame, S; Giannelli, M; Garaci, F; Floris, R; Duggento, A; Guerrisi, M; Toschi, N
2016-05-01
An increasing number of studies have aimed to compare diffusion tensor imaging (DTI)-related parameters [e.g., mean diffusivity (MD), fractional anisotropy (FA), radial diffusivity (RD), and axial diffusivity (AD)] to complementary new indexes [e.g., mean kurtosis (MK)/radial kurtosis (RK)/axial kurtosis (AK)] derived through diffusion kurtosis imaging (DKI) in terms of their discriminative potential about tissue disease-related microstructural alterations. Given that the DTI and DKI models provide conceptually and quantitatively different estimates of the diffusion tensor, which can also depend on the fitting routine, the aim of this study was to investigate model- and algorithm-dependent differences in MD/FA/RD/AD and anisotropy mode (MO) estimates in diffusion-weighted imaging of human brain white matter. The authors employed (a) data collected from 33 healthy subjects (20-59 yr, F: 15, M: 18) within the Human Connectome Project (HCP) on a customized 3 T scanner, and (b) data from 34 healthy subjects (26-61 yr, F: 5, M: 29) acquired on a clinical 3 T scanner. The DTI model was fitted to b-value = 0 and b-value = 1000 s/mm² data, while the DKI model was fitted to data comprising b-value = 0, 1000 and 3000/2500 s/mm² [for dataset (a)/(b), respectively] through nonlinear and weighted linear least squares algorithms. In addition to MK/RK/AK maps, MD/FA/MO/RD/AD maps were estimated from both models and both algorithms. Using tract-based spatial statistics, the authors tested the null hypothesis of zero difference between the two MD/FA/MO/RD/AD estimates in brain white matter for both datasets and both algorithms. DKI-derived MD/FA/RD/AD and MO estimates were significantly higher and lower, respectively, than the corresponding DTI-derived estimates. All voxelwise differences extended over most of the white matter skeleton. Fractional differences between the two estimates [(DKI - DTI)/DTI] of most invariants were seen to vary with the invariant value itself as well as with MK
Mehdizadeh, Saeid
2018-04-01
Evapotranspiration (ET) is considered a key factor in hydrological and climatological studies, agricultural water management, irrigation scheduling, etc. It can be directly measured using lysimeters. Moreover, other methods such as empirical equations and artificial intelligence methods can be used to model ET. In recent years, artificial intelligence methods have been widely utilized to estimate reference evapotranspiration (ETo). In the present study, the local and external performances of multivariate adaptive regression splines (MARS) and gene expression programming (GEP) were assessed for estimating daily ETo. To this end, daily weather data of six stations with different climates in Iran, namely Urmia and Tabriz (semi-arid), Isfahan and Shiraz (arid), and Yazd and Zahedan (hyper-arid), were employed during 2000-2014. Two types of input patterns, consisting of weather data-based and lagged ETo data-based scenarios, were considered to develop the models. Four statistical indicators, including root mean square error (RMSE), mean absolute error (MAE), coefficient of determination (R2), and mean absolute percentage error (MAPE), were used to check the accuracy of the models. The local performance of the models revealed that the MARS and GEP approaches have the capability to estimate daily ETo using the meteorological parameters and the lagged ETo data as inputs. Nevertheless, MARS had the best performance in the weather data-based scenarios. On the other hand, considerable differences were not observed in the models' accuracy for the lagged ETo data-based scenarios. As an innovation of this study, novel hybrid models were proposed in the lagged ETo data-based scenarios through the combination of the MARS and GEP models with the autoregressive conditional heteroscedasticity (ARCH) time series model. It was concluded that the proposed novel models, named MARS-ARCH and GEP-ARCH, improved the performance of ETo modeling compared to the single MARS and GEP. In addition, the external
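The four accuracy indicators named in the abstract above are straightforward to compute; a sketch with an illustrative observed/estimated daily ETo pair (values in mm/day, invented for demonstration):

```python
import numpy as np

# RMSE, MAE, R2 and MAPE for an observed vs. estimated ETo series.
obs = np.array([3.1, 4.2, 5.0, 2.8, 3.9])   # "measured" daily ETo
est = np.array([3.0, 4.5, 4.7, 3.0, 4.0])   # model estimates

rmse = np.sqrt(np.mean((est - obs) ** 2))            # root mean square error
mae  = np.mean(np.abs(est - obs))                    # mean absolute error
r2   = np.corrcoef(obs, est)[0, 1] ** 2              # coefficient of determination
mape = 100.0 * np.mean(np.abs((est - obs) / obs))    # mean absolute % error

print(round(rmse, 3), round(mae, 3), round(r2, 3), round(mape, 2))
```

Note that R2 is computed here as the squared correlation, one common convention; some studies instead use the Nash-Sutcliffe-style 1 - SSE/SST definition, which can differ for biased models.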
Directory of Open Access Journals (Sweden)
Paula Furtună
2013-03-01
Full Text Available Climatic changes represent one of the major challenges of our century, and they are forecast according to climate scenarios and models, which represent plausible and concrete images of future climatic conditions. The results of comparing climate models with regard to future water resources and temperature trends can become a useful instrument for decision makers in choosing the most effective decisions at the economic, social and ecological levels. The aim of this article is the analysis of temperature and pluviometric variability at the grid point closest to Cluj-Napoca, based on data provided by six different regional climate models (RCMs). Analysed over 30-year periods (2001-2030, 2031-2060 and 2061-2090), the mean temperature has an ascending general trend, with great variability between periods. Precipitation, expressed through percentage deviation, shows a descending general trend, which is more emphasized during 2031-2060 and 2061-2090.
Directory of Open Access Journals (Sweden)
Varvara Sergeyevna Spirina
2015-03-01
Full Text Available Objective: to research and elaborate an economic-mathematical model for predicting commercial property attendance, by the example of shopping malls, based on the estimation of their attraction for consumers. Methods: the methodological and theoretical basis for the work consisted of the rules and techniques for elaborating the qualimetry and matrix mechanisms of complex estimation necessary for estimating and aggregating the factors influencing the choice of a consumer group among many alternative property venues. Results: two mechanisms are elaborated for the complex estimation of commercial property, which is necessary to evaluate its attraction for consumers and to predict attendance. By the example of two large shopping malls in Perm, Russia, it is shown that using both mechanisms in the economic-mathematical model of commercial property attendance increases the accuracy of its predictions compared to the traditional Huff model. The reliability of the results is confirmed by the coincidence of the calculated results with the actual poll data on the shopping malls' attendance. Scientific novelty: a multifactor model of commercial property attraction for consumers was elaborated by the example of shopping malls; the parameters of the complex estimation mechanisms are defined, namely eight parameters influencing the choice of a shopping mall by consumers. The model differs from the traditional Huff model in the number of factors influencing the choice of a shopping mall by consumers and in the higher accuracy of predicting its attendance. Practical significance: economic-mathematical models able to predict commercial property attendance can be used for efficient planning of measures to attract consumers, and to preserve and develop the competitive advantages of commercial property.
National Aeronautics and Space Administration — This article presented a discussion on uncertainty representation and management for model-based prognostics methodologies based on the Bayesian tracking framework...
Abdulredha, Muhammad; Al Khaddar, Rafid; Jordan, David; Kot, Patryk; Abdulridha, Ali; Hashim, Khalid
2018-04-26
Major religious festivals hosted in the city of Kerbala, Iraq, annually generate large quantities of Municipal Solid Waste (MSW), which negatively impacts the environment and human health when poorly managed. The hospitality sector, specifically hotels, is one of the major sources of MSW generated during these festivals. Because it is essential to establish a proper waste management system for such festivals, accurate information regarding MSW generation is required. This study therefore investigated the rate of production of MSW from hotels in Kerbala during major festivals. A field questionnaire survey was conducted with 150 hotels during the Arba'een festival, one of the largest festivals in the world, attended by about 18 million participants, to identify how much MSW is produced and which features of hotels impact on this. Hotel managers responded to questions regarding features of the hotel such as size (Hs), expenditure (Hex), area (Ha) and number of staff (Hst). An on-site audit was also carried out with all participating hotels to estimate the mass of MSW generated from these hotels. The results indicate that MSW produced by hotels varies widely. In general, it was found that each hotel guest produces an estimated 0.89 kg of MSW per day. However, this figure varies according to the hotels' rating. Average rates of MSW production from one- and four-star hotels were 0.83 and 1.22 kg per guest per day, respectively. Statistically, it was found that the relationship between MSW production and hotel features can be modelled with an R² of 0.799, where the influence of hotel features on MSW production followed the order Hs > Hex > Hst. Copyright © 2018 Elsevier Ltd. All rights reserved.
Gagnon, Pieter; Margolis, Robert; Melius, Jennifer; Phillips, Caleb; Elmore, Ryan
2018-02-01
We provide a detailed estimate of the technical potential of rooftop solar photovoltaic (PV) electricity generation throughout the contiguous United States. This national estimate is based on an analysis of select US cities that combines light detection and ranging (lidar) data with a validated analytical method for determining rooftop PV suitability employing geographic information systems. We use statistical models to extend this analysis to estimate the quantity and characteristics of roofs in areas not covered by lidar data. Finally, we model PV generation for all rooftops to yield technical potential estimates. At the national level, 8.13 billion m² of suitable roof area could host 1118 GW of PV capacity, generating 1432 TWh of electricity per year. This would equate to 38.6% of the electricity that was sold in the contiguous United States in 2013. This estimate is substantially higher than a previous estimate made by the National Renewable Energy Laboratory. The difference can be attributed to increases in PV module power density, improved estimation of building suitability, higher estimates of the total number of buildings, and improvements in PV performance simulation tools that previously tended to underestimate productivity. Also notable, the nationwide percentage of buildings suitable for at least some PV deployment is high: 82% for buildings smaller than 5000 ft² and over 99% for buildings larger than that. In most states, rooftop PV could enable small, mostly residential buildings to offset the majority of average household electricity consumption. Even in some states with a relatively poor solar resource, such as those in the Northeast, the residential sector has the potential to offset around 100% of its total electricity consumption with rooftop PV.
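The headline figures above can be cross-checked with simple arithmetic:

```python
# Consistency check of the reported numbers: 1118 GW of rooftop PV
# generating 1432 TWh/yr, stated to equal 38.6% of 2013 contiguous-US
# electricity sales.
capacity_gw = 1118.0
generation_twh = 1432.0

# Implied national-average capacity factor of the rooftop fleet:
capacity_factor = generation_twh * 1e3 / (capacity_gw * 8760.0)  # GWh / (GW * h)

# Implied 2013 electricity sales:
sales_twh = generation_twh / 0.386

print(round(capacity_factor, 3), round(sales_twh))  # 0.146 3710
```

A fleet-average capacity factor of about 15% is in the plausible range for fixed-tilt rooftop arrays, so the three reported quantities are mutually consistent.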
Lee, J.; Kang, S.; Jang, K.; Ko, J.; Hong, S.
2012-12-01
Crop productivity is associated with food security, and hence several models have been developed to estimate crop yield by combining remote sensing data with carbon cycle processes. In the present study, we attempted to estimate crop GPP and NPP using an algorithm based on the LUE model and a simplified respiration model. The states of Iowa and Illinois were chosen as the study area for estimating crop yield over a five-year period (2006-2010), as this is the main Corn-Belt area in the US. The present study focuses on developing crop-specific parameters for corn and soybean to estimate crop productivity and on yield mapping using satellite remote sensing data. We utilized 10 km spatial resolution daily meteorological data from WRF to provide meteorological variables on cloudy days, while on clear-sky days MODIS-based meteorological data were utilized to estimate daily GPP, NPP, and biomass. County-level statistics on yield, area harvested, and production were used to test model-predicted crop yield. The input meteorological variables estimated from MODIS and WRF showed good agreement with ground observations from 6 Ameriflux tower sites in 2006. For example, correlation coefficients ranged from 0.93 to 0.98 for Tmin and Tavg; from 0.68 to 0.85 for daytime mean VPD; and from 0.85 to 0.96 for daily shortwave radiation. We developed a county-specific crop conversion coefficient, i.e. the ratio of yield to biomass on DOY 260, and then validated the estimated county-level crop yield against the statistical yield data. The estimated corn and soybean yields at the county level ranged from 671 g m-2 y-1 to 1393 g m-2 y-1 and from 213 g m-2 y-1 to 421 g m-2 y-1, respectively. The county-specific yield estimation mostly showed errors of less than 10%. Furthermore, we estimated crop yields at the state level, which were validated against the statistics data and showed errors of less than 1%. Further analysis of the crop conversion coefficient was conducted for DOY 200 and DOY 280
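A light-use-efficiency GPP calculation of the kind described above can be sketched as follows; the parameter values are illustrative, not the crop-specific coefficients developed in the study:

```python
# Minimal LUE-model sketch: GPP = epsilon * fAPAR * PAR, with the maximum
# light-use efficiency down-regulated by temperature and VPD scalars, in
# the style of MODIS-type productivity algorithms.

def daily_gpp(par_mj, fapar, eps_max=2.5, t_scalar=0.9, vpd_scalar=0.8):
    """GPP in g C m-2 day-1, given PAR in MJ m-2 day-1."""
    eps = eps_max * t_scalar * vpd_scalar   # realized LUE (g C per MJ PAR)
    return eps * fapar * par_mj

gpp = daily_gpp(par_mj=10.0, fapar=0.7)
print(round(gpp, 2))  # 12.6
```

NPP then follows by subtracting modelled respiration from accumulated GPP, and yield by applying a crop conversion coefficient (yield/biomass ratio) as in the abstract.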
Application of Large-Scale Database-Based Online Modeling to Plant State Long-Term Estimation
Ogawa, Masatoshi; Ogai, Harutoshi
Recently, attention has been drawn to the local modeling techniques of a new idea called “Just-In-Time (JIT) modeling”. To apply “JIT modeling” to a large amount of database online, “Large-scale database-based Online Modeling (LOM)” has been proposed. LOM is a technique that makes the retrieval of neighboring data more efficient by using both “stepwise selection” and quantization. In order to predict the long-term state of the plant without using future data of manipulated variables, an Extended Sequential Prediction method of LOM (ESP-LOM) has been proposed. In this paper, the LOM and the ESP-LOM are introduced.
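The core idea of JIT modeling, retrieving the neighbours of the current query from a stored database and fitting a small local model on just those samples, can be sketched as follows (a one-dimensional toy, not the LOM retrieval scheme with stepwise selection and quantization):

```python
import numpy as np

# Just-In-Time (local) modeling: instead of one global model, fit a local
# linear model on the k nearest stored samples at each query point.
rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(500, 1))                    # stored plant data
y = np.sin(X[:, 0]) + rng.normal(scale=0.05, size=500)   # nonlinear response

def jit_predict(xq, X, y, k=30):
    idx = np.argsort(np.abs(X[:, 0] - xq))[:k]           # neighbour retrieval
    A = np.column_stack([np.ones(k), X[idx, 0]])         # local linear model
    w, *_ = np.linalg.lstsq(A, y[idx], rcond=None)
    return w[0] + w[1] * xq

pred = jit_predict(1.0, X, y)
print(round(pred, 2))  # close to sin(1.0) ≈ 0.84
```

The efficiency problem LOM addresses is exactly the neighbour-retrieval step: on a large database, a naive sort over all samples, as above, becomes the bottleneck.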
International Nuclear Information System (INIS)
He, Bin; Frey, Eric C
2006-01-01
Accurate quantification of organ radionuclide uptake is important for patient-specific dosimetry. The quantitative accuracy from conventional conjugate view methods is limited by overlap of projections from different organs and background activity, and by attenuation and scatter. In this work, we propose and validate a quantitative planar (QPlanar) processing method based on maximum likelihood (ML) estimation of organ activities using 3D organ VOIs and a projector that models the image degrading effects. Both a physical phantom experiment and Monte Carlo simulation (MCS) studies were used to evaluate the new method. In these studies, the accuracies and precisions of organ activity estimates for the QPlanar method were compared with those from conventional planar (CPlanar) processing methods with various corrections for scatter, attenuation and organ overlap, and a quantitative SPECT (QSPECT) processing method. Experimental planar and SPECT projections and registered CT data from an RSD Torso phantom were obtained using a GE Millennium VH/Hawkeye system. The MCS data were obtained from the 3D NCAT phantom with organ activity distributions that modelled the uptake of 111In ibritumomab tiuxetan. The simulations were performed using parameters appropriate for the same system used in the RSD torso phantom experiment. The organ activity estimates obtained from the CPlanar, QPlanar and QSPECT methods from both experiments were compared. From the results of the MCS experiment, even with ideal organ overlap correction and background subtraction, CPlanar methods provided limited quantitative accuracy. The QPlanar method with accurate modelling of the physical factors increased the quantitative accuracy at the cost of requiring estimates of the organ VOIs in 3D. The accuracy of QPlanar approached that of QSPECT, but required much less acquisition and computation time. Similar results were obtained from the physical phantom experiment. We conclude that the QPlanar method, based
Directory of Open Access Journals (Sweden)
Niancheng Zhou
2014-08-01
Full Text Available The influence of electric vehicle charging stations on power grid harmonics is becoming increasingly significant as their presence continues to grow. This paper studies the operational principles of the charging current in the continuous and discontinuous modes for a three-phase uncontrolled rectification charger with a passive power factor correction link, which is affected by the charging power. A parameter estimation method is proposed for the equivalent circuit of the charger by using the measured characteristic AC (Alternating Current) voltage and current data combined with the charging circuit constraints in the conduction process, and this method is verified using an experimental platform. The sensitivity of the current harmonics to changes in the parameters is analyzed. An analytical harmonic model of the charging station is created by separating the chargers into groups by type. Then, the harmonic current amplification caused by the shunt active power filter is investigated, and the analytical formula for the overload factor is derived to further correct the capacity of the shunt active power filter. Finally, this method is validated through a field test of a charging station.
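Harmonic content of a measured or simulated charger current can be extracted with an FFT; a sketch with a synthetic waveform containing the 5th and 7th harmonics characteristic of three-phase uncontrolled rectifier loads (amplitudes are invented for illustration):

```python
import numpy as np

# Harmonic decomposition of a distorted charging current via NumPy's FFT.
fs, f0 = 10000, 50                        # sample rate, fundamental (Hz)
t = np.arange(0, 0.2, 1 / fs)             # ten fundamental cycles
i_t = (10.0 * np.sin(2 * np.pi * f0 * t)          # fundamental, 10 A
       + 2.0 * np.sin(2 * np.pi * 5 * f0 * t)     # 5th harmonic
       + 1.4 * np.sin(2 * np.pi * 7 * f0 * t))    # 7th harmonic

spec = np.abs(np.fft.rfft(i_t)) * 2 / len(t)      # amplitude spectrum
freqs = np.fft.rfftfreq(len(t), 1 / fs)
h5 = spec[np.argmin(np.abs(freqs - 5 * f0))]      # 5th-harmonic amplitude
thd = np.sqrt(2.0**2 + 1.4**2) / 10.0             # THD from known amplitudes

print(round(h5, 2), round(thd, 3))  # 2.0 0.244
```

Because the record spans an integer number of fundamental cycles, the harmonic amplitudes land exactly on FFT bins; with field measurements, windowing or synchronized sampling is needed to avoid spectral leakage.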
Engelhardt, Benjamin; Kschischo, Maik; Fröhlich, Holger
2017-06-01
Ordinary differential equations (ODEs) are a popular approach to quantitatively model molecular networks based on biological knowledge. However, such knowledge is typically restricted. Wrongly modelled biological mechanisms as well as relevant external influence factors that are not included in the model are likely to manifest in major discrepancies between model predictions and experimental data. Finding the exact reasons for such observed discrepancies can be quite challenging in practice. In order to address this issue, we suggest a Bayesian approach to estimate hidden influences in ODE-based models. The method can distinguish between exogenous and endogenous hidden influences. Thus, we can detect wrongly specified as well as missed molecular interactions in the model. We demonstrate the performance of our Bayesian dynamic elastic-net with several ordinary differential equation models from the literature, such as human JAK-STAT signalling, information processing at the erythropoietin receptor, isomerization of liquid α-pinene, G protein cycling in yeast and UV-B triggered signalling in plants. Moreover, we investigate a set of commonly known network motifs and a gene-regulatory network. Altogether our method supports the modeller in an algorithmic manner to identify possible sources of errors in ODE-based models on the basis of experimental data. © 2017 The Author(s).
Efficient estimation of an additive quantile regression model
Cheng, Y.; de Gooijer, J.G.; Zerom, D.
2011-01-01
In this paper, two non-parametric estimators are proposed for estimating the components of an additive quantile regression model. The first estimator is a computationally convenient approach which can be viewed as a more viable alternative to existing kernel-based approaches. The second estimator
Schlegel, Nicole-Jeanne; Wiese, David N.; Larour, Eric Y.; Watkins, Michael M.; Box, Jason E.; Fettweis, Xavier; van den Broeke, Michiel R.
2016-09-01
Quantifying the Greenland Ice Sheet's future contribution to sea level rise is a challenging task that requires accurate estimates of ice sheet sensitivity to climate change. Forward ice sheet models are promising tools for estimating future ice sheet behavior, yet confidence is low because evaluation of historical simulations is challenging due to the scarcity of continent-wide data for model evaluation. Recent advancements in processing of Gravity Recovery and Climate Experiment (GRACE) data using Bayesian-constrained mass concentration ("mascon") functions have led to improvements in spatial resolution and noise reduction of monthly global gravity fields. Specifically, the Jet Propulsion Laboratory's JPL RL05M GRACE mascon solution (GRACE_JPL) offers an opportunity for the assessment of model-based estimates of ice sheet mass balance (MB) at ~300 km spatial scales. Here, we quantify the differences between Greenland monthly observed MB (GRACE_JPL) and that estimated by state-of-the-art, high-resolution models, with respect to GRACE_JPL and model uncertainties. To simulate the years 2003-2012, we force the Ice Sheet System Model (ISSM) with anomalies from three different surface mass balance (SMB) products derived from regional climate models. The resulting MB is compared against GRACE_JPL within individual mascons. Overall, we find agreement in the northeast and southwest, where MB is assumed to be primarily controlled by SMB. In the interior, we find a discrepancy in trend, which we presume to be related to millennial-scale dynamic thickening not considered by our model. In the northwest, seasonal amplitudes agree, but modeled mass trends are muted relative to GRACE_JPL. Here, discrepancies are likely controlled by temporal variability in ice discharge and other related processes not represented by our model simulations, i.e., hydrological processes and ice-ocean interaction. In the southeast, GRACE_JPL exhibits larger seasonal amplitude than predicted by the
Redemann, J.; Livingston, J. M.; Shinozuka, Y.; Kacenelenbogen, M. S.; Russell, P. B.; LeBlanc, S. E.; Vaughan, M.; Ferrare, R. A.; Hostetler, C. A.; Rogers, R. R.; Burton, S. P.; Torres, O.; Remer, L. A.; Stier, P.; Schutgens, N.
2014-12-01
We describe a technique for combining CALIOP aerosol backscatter, MODIS spectral AOD (aerosol optical depth), and OMI AAOD (absorption aerosol optical depth) retrievals for the purpose of estimating full spectral sets of aerosol radiative properties, and ultimately for calculating the 3-D distribution of direct aerosol radiative forcing. We present results using one year of data collected in 2007 and show comparisons of the aerosol radiative property estimates to collocated AERONET retrievals. Use of the recently released MODIS Collection 6 data for aerosol optical depths derived with the dark target and deep blue algorithms has extended the coverage of the multi-sensor estimates towards higher latitudes. Initial calculations of seasonal clear-sky aerosol radiative forcing based on our multi-sensor aerosol retrievals compare well with over-ocean and top of the atmosphere IPCC-2007 model-based results, and with more recent assessments in the "Climate Change Science Program Report: Atmospheric Aerosol Properties and Climate Impacts" (2009). For the first time, we present comparisons of our multi-sensor aerosol direct radiative forcing estimates to values derived from a subset of models that participated in the latest AeroCom initiative. We discuss the major challenges that exist in extending our clear-sky results to all-sky conditions. On the basis of comparisons to suborbital measurements, we present some of the limitations of the MODIS and CALIOP retrievals in the presence of adjacent or underlying clouds. Strategies for meeting these challenges are discussed.
Guzinski, R.; Anderson, M. C.; Kustas, W. P.; Nieto, H.; Sandholt, I.
2013-07-01
The Dual Temperature Difference (DTD) model, introduced by Norman et al. (2000), uses a two source energy balance modelling scheme driven by remotely sensed observations of diurnal changes in land surface temperature (LST) to estimate surface energy fluxes. By using a time-differential temperature measurement as input, the approach reduces model sensitivity to errors in absolute temperature retrieval. The original formulation of the DTD required an early morning LST observation (approximately 1 h after sunrise) when surface fluxes are minimal, limiting application to data provided by geostationary satellites at sub-hourly temporal resolution. The DTD model has been applied primarily during the active growth phase of agricultural crops and rangeland vegetation grasses, and has not been rigorously evaluated during senescence or in forested ecosystems. In this paper we present modifications to the DTD model that enable applications using thermal observations from polar orbiting satellites, such as Terra and Aqua, with day and night overpass times over the area of interest. This allows the application of the DTD model in high latitude regions where large viewing angles preclude the use of geostationary satellites, and also exploits the higher spatial resolution provided by polar orbiting satellites. A method for estimating nocturnal surface fluxes and a scheme for estimating the fraction of green vegetation are developed and evaluated. Modification for green vegetation fraction leads to significantly improved estimation of the heat fluxes from the vegetation canopy during senescence and in forests. When the modified DTD model is run with LST measurements acquired with the Moderate Resolution Imaging Spectroradiometer (MODIS) on board the Terra and Aqua satellites, generally satisfactory agreement with field measurements is obtained for a number of ecosystems in Denmark and the United States. Finally, regional maps of energy fluxes are produced for the Danish
Directory of Open Access Journals (Sweden)
R. Guzinski
2013-07-01
Full Text Available The Dual Temperature Difference (DTD) model, introduced by Norman et al. (2000), uses a two source energy balance modelling scheme driven by remotely sensed observations of diurnal changes in land surface temperature (LST) to estimate surface energy fluxes. By using a time-differential temperature measurement as input, the approach reduces model sensitivity to errors in absolute temperature retrieval. The original formulation of the DTD required an early morning LST observation (approximately 1 h after sunrise) when surface fluxes are minimal, limiting application to data provided by geostationary satellites at sub-hourly temporal resolution. The DTD model has been applied primarily during the active growth phase of agricultural crops and rangeland vegetation grasses, and has not been rigorously evaluated during senescence or in forested ecosystems. In this paper we present modifications to the DTD model that enable applications using thermal observations from polar orbiting satellites, such as Terra and Aqua, with day and night overpass times over the area of interest. This allows the application of the DTD model in high latitude regions where large viewing angles preclude the use of geostationary satellites, and also exploits the higher spatial resolution provided by polar orbiting satellites. A method for estimating nocturnal surface fluxes and a scheme for estimating the fraction of green vegetation are developed and evaluated. Modification for green vegetation fraction leads to significantly improved estimation of the heat fluxes from the vegetation canopy during senescence and in forests. When the modified DTD model is run with LST measurements acquired with the Moderate Resolution Imaging Spectroradiometer (MODIS) on board the Terra and Aqua satellites, generally satisfactory agreement with field measurements is obtained for a number of ecosystems in Denmark and the United States. Finally, regional maps of energy fluxes are produced for the
Total-Factor Energy Efficiency in BRI Countries: An Estimation Based on Three-Stage DEA Model
Directory of Open Access Journals (Sweden)
Changhong Zhao
2018-01-01
Full Text Available The Belt and Road Initiative (BRI) is showing its great influence and leadership on international energy cooperation. Based on the three-stage DEA model, total-factor energy efficiency (TFEE) in 35 BRI countries in 2015 was measured in this article. It shows that the three-stage DEA model could eliminate environmental-variable errors and random errors, which makes its results better than those of the traditional DEA model. When environmental-variable errors and random errors were eliminated, the mean value of TFEE declined, demonstrating that the TFEE of the whole sample group was overestimated because of external environment impacts and random errors. The TFEE indicators of high-income countries like South Korea, Singapore, Israel and Turkey are 1, placing them on the efficiency frontier. The TFEE indicators of Russia, Saudi Arabia, Poland and China are over 0.8, and the indicators of Uzbekistan, Ukraine, South Africa and Bulgaria are at a low level. The potential for energy saving and emissions reduction is great in countries with low TFEE indicators. Because of the gap in energy efficiency, it is necessary to differentiate among countries in energy technology options, development planning and regulation across the BRI.
DEFF Research Database (Denmark)
Guzinski, Radoslaw; Anderson, M.C.; Kustas, W.P.
2013-01-01
The Dual Temperature Difference (DTD) model, introduced by Norman et al. (2000), uses a two source energy balance modelling scheme driven by remotely sensed observations of diurnal changes in land surface temperature (LST) to estimate surface energy fluxes. By using a time-differential temperature...... agreement with field measurements is obtained for a number of ecosystems in Denmark and the United States. Finally, regional maps of energy fluxes are produced for the Danish Hydrological ObsErvatory (HOBE) in western Denmark, indicating realistic patterns based on land use....
Abdulhameed, Mohanad F; Habib, Ihab; Al-Azizz, Suzan A; Robertson, Ian
2018-02-01
Cystic echinococcosis (CE) is a highly endemic parasitic zoonosis in Iraq with substantial impacts on livestock productivity and human health. The objectives of this study were to study the abattoir-based occurrence of CE in marketed offal of sheep in Basrah province, Iraq, and to estimate, using a probabilistic modelling approach, the direct economic losses due to hydatid cysts. Based on detailed visual meat inspection, results from an active abattoir survey in this study revealed detection of hydatid cysts in 7.3% (95% CI: 5.4; 9.6) of 631 examined sheep carcasses. Post-mortem lesions of hydatid cyst were concurrently present in livers and lungs of more than half (54.3% (25/46)) of the positive sheep. Direct economic losses due to hydatid cysts in marketed offal were estimated using data from government reports, the one abattoir survey completed in this study, and expert opinions of local veterinarians and butchers. A Monte-Carlo simulation model was developed in a spreadsheet utilizing Latin Hypercube sampling to account for uncertainty in the input parameters. The model estimated that the average annual economic losses associated with hydatid cysts in the liver and lungs of sheep marketed for human consumption in Basrah to be US$72,470 (90% Confidence Interval (CI); ±11,302). The mean proportion of annual losses in meat products value (carcasses and offal) due to hydatid cysts in the liver and lungs of sheep marketed in Basrah province was estimated as 0.42% (90% CI; ±0.21). These estimates suggest that CE is responsible for considerable livestock-associated monetary losses in the south of Iraq. These findings can be used to inform different regional CE control program options in Iraq.
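The spreadsheet-style Monte-Carlo model described above can be sketched in a few lines: annual loss is the product of animals marketed, cyst prevalence, and value lost per case, with Latin Hypercube sampling used to propagate uncertainty in the inputs. The loss equation and all input ranges below are illustrative placeholders, not the study's actual figures or model.

```python
import random

def lhs_uniform(low, high, n, rng):
    """n Latin-Hypercube samples from Uniform(low, high): one draw per stratum."""
    cells = [(i + rng.random()) / n for i in range(n)]
    rng.shuffle(cells)  # random pairing across input variables
    return [low + (high - low) * c for c in cells]

def simulate_losses(n=10_000, seed=1):
    """Mean annual loss (US$) from a toy three-input loss model."""
    rng = random.Random(seed)
    animals = lhs_uniform(90_000, 110_000, n, rng)   # head marketed / yr (hypothetical)
    prevalence = lhs_uniform(0.054, 0.096, n, rng)   # survey 95% CI, as in the abstract
    loss_per_case = lhs_uniform(5.0, 15.0, n, rng)   # US$ offal value lost (hypothetical)
    draws = [a * p * c for a, p, c in zip(animals, prevalence, loss_per_case)]
    return sum(draws) / n

print(f"mean annual loss ~ US${simulate_losses():,.0f}")
```

Latin Hypercube sampling stratifies each marginal distribution, so the simulated mean stabilises with far fewer draws than plain Monte Carlo.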
Lotfata, A.; Ambinakudige, S.
2017-12-01
Coastal regions face a higher risk of flooding. A rise in sea level increases the chance of flooding in low-lying areas. A major concern is the effect of sea-level rise on the depth of the fresh water/salt water interface in the aquifers of coastal regions, because a change in sea level impacts the hydrological system of the aquifers. Salt water intrusion into fresh water aquifers increases water table levels, and flood-prone coastal areas are at a higher risk of salt water intrusion. The Gulf coast is one of the most vulnerable flood areas due to its natural weather patterns, yet there is not yet a local assessment of the relation between groundwater level and sea-level rise. This study investigates the projected sea-level rise models and the anomalous groundwater level from January 2002 to December 2016. We used NASA Gravity Recovery and Climate Experiment (GRACE) and Global Land Data Assimilation System (GLDAS) satellite data in the analysis, accounting for the leakage error and the measurement error in the GRACE data. GLDAS data were used to isolate the groundwater storage from the total water storage estimated using GRACE data: ΔGW = ΔTWS (soil moisture, surface water, groundwater, and canopy water) - ΔGLDAS (soil moisture, surface water, and canopy water). The preliminary results indicate that the total water storage is increasing in parts of the Gulf of Mexico. GRACE data show high soil wetness and groundwater levels on the Mississippi, Alabama and Texas coasts. Because sea-level rise increases the probability of flooding on the Gulf coast and affects the groundwater, we will analyze probable interactions between sea-level rise and groundwater in the study area. To understand regional sea-level rise patterns, we will investigate GRACE Ocean data along the Gulf coasts. We will quantify ocean total water storage, its salinity, and its relationship with groundwater level variations on the Gulf coast.
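The water-balance subtraction in the parenthetical equation above amounts to one line of arithmetic: the GRACE total water storage anomaly minus the GLDAS-modelled non-groundwater components. A minimal sketch, with invented example anomalies in cm of equivalent water height:

```python
def groundwater_anomaly(tws_anomaly_cm, soil_moisture_cm, surface_water_cm, canopy_water_cm):
    """ΔGW = ΔTWS (GRACE) - (soil moisture + surface water + canopy water) (GLDAS).

    All values are anomalies in cm of equivalent water height.
    """
    return tws_anomaly_cm - (soil_moisture_cm + surface_water_cm + canopy_water_cm)

# Hypothetical monthly anomalies for one grid cell:
gw = groundwater_anomaly(tws_anomaly_cm=5.2, soil_moisture_cm=2.1,
                         surface_water_cm=0.4, canopy_water_cm=0.1)
print(round(gw, 2))  # 2.6
```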
Ren, Junjie; Zhang, Shimin
2013-01-01
Recurrence interval of large earthquake on an active fault zone is an important parameter in assessing seismic hazard. The 2008 Wenchuan earthquake (Mw 7.9) occurred on the central Longmen Shan fault zone and ruptured the Yingxiu-Beichuan fault (YBF) and the Guanxian-Jiangyou fault (GJF). However, there is a considerable discrepancy among recurrence intervals of large earthquake in preseismic and postseismic estimates based on slip rate and paleoseismologic results. Post-seismic trenches showed that the central Longmen Shan fault zone probably undertakes an event similar to the 2008 quake, suggesting a characteristic earthquake model. In this paper, we use the published seismogenic model of the 2008 earthquake based on Global Positioning System (GPS) and Interferometric Synthetic Aperture Radar (InSAR) data and construct a characteristic seismic moment accumulation/release model to estimate recurrence interval of large earthquakes on the central Longmen Shan fault zone. Our results show that the seismogenic zone accommodates a moment rate of (2.7 ± 0.3) × 10¹⁷ N m/yr, and a recurrence interval of 3900 ± 400 yrs is necessary for accumulation of strain energy equivalent to the 2008 earthquake. This study provides a preferred interval estimation of large earthquakes for seismic hazard analysis in the Longmen Shan region.
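The moment-budget arithmetic behind a recurrence estimate of this kind can be sketched as: the seismic moment of one characteristic event divided by the annual moment accumulation rate. The sketch below uses the standard Hanks-Kanamori moment-magnitude relation rather than the paper's own modeled coseismic moment, so it gives the same order of magnitude as, but not exactly, the reported 3900 ± 400 yr.

```python
def seismic_moment(mw):
    """Scalar seismic moment in N·m from moment magnitude (Hanks-Kanamori)."""
    return 10 ** (1.5 * mw + 9.1)

def recurrence_interval_yr(mw, moment_rate_nm_per_yr):
    """Years needed to accumulate the moment of one Mw event at the given rate."""
    return seismic_moment(mw) / moment_rate_nm_per_yr

m0 = seismic_moment(7.9)                    # ~8.9e20 N·m for an Mw 7.9 event
t = recurrence_interval_yr(7.9, 2.7e17)     # rate from the abstract
print(f"{t:.0f} yr")
```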
Directory of Open Access Journals (Sweden)
Junjie Ren
2013-01-01
Full Text Available Recurrence interval of large earthquake on an active fault zone is an important parameter in assessing seismic hazard. The 2008 Wenchuan earthquake (Mw 7.9) occurred on the central Longmen Shan fault zone and ruptured the Yingxiu-Beichuan fault (YBF) and the Guanxian-Jiangyou fault (GJF). However, there is a considerable discrepancy among recurrence intervals of large earthquake in preseismic and postseismic estimates based on slip rate and paleoseismologic results. Post-seismic trenches showed that the central Longmen Shan fault zone probably undertakes an event similar to the 2008 quake, suggesting a characteristic earthquake model. In this paper, we use the published seismogenic model of the 2008 earthquake based on Global Positioning System (GPS) and Interferometric Synthetic Aperture Radar (InSAR) data and construct a characteristic seismic moment accumulation/release model to estimate recurrence interval of large earthquakes on the central Longmen Shan fault zone. Our results show that the seismogenic zone accommodates a moment rate of (2.7 ± 0.3) × 10¹⁷ N m/yr, and a recurrence interval of 3900 ± 400 yrs is necessary for accumulation of strain energy equivalent to the 2008 earthquake. This study provides a preferred interval estimation of large earthquakes for seismic hazard analysis in the Longmen Shan region.
Vitale, M.; Matteucci, G.; Fares, S.; Davison, B.
2009-02-01
This paper concerns the application of a process-based model (MOCA, Modelling of Carbon Assessment) as a useful tool for estimating gas exchange, integrating empirical algorithms for the calculation of monoterpene fluxes, in a Mediterranean maquis of central Italy (Castelporziano, Rome). Simulations were carried out for a range of hypothetical but realistic canopies of the evergreen Quercus ilex (holm oak), Arbutus unedo (strawberry tree) and Phillyrea latifolia. Moreover, the dependence of canopy-scale monoterpene fluxes on total leaf area and leaf distribution has been considered in the algorithms. Simulation of the gas exchange rates showed higher values for P. latifolia and A. unedo (2.39±0.30 and 3.12±0.27 gC m-2 d-1, respectively) than for Q. ilex (1.67±0.08 gC m-2 d-1) in the measuring campaign (May-June). The average Gross Primary Production (GPP) values agreed well with those measured by eddy covariance (7.98±0.20 and 6.00±1.46 gC m-2 d-1, respectively, in May-June), although some differences (of about 30%) were evident in a point-to-point comparison. These differences could be explained by the non-uniformity of the measuring site, where diurnal winds blowing from the S-SW direction affected calculations of CO2 and water fluxes. The introduction of some structural parameters into the algorithms for monoterpene calculation allowed monoterpene emission rates and fluxes to be simulated in accordance with those measured (6.50±2.25 vs. 9.39±4.5μg g-1DW h-1 for Q. ilex, and 0.63±0.207 vs. 0.98±0.30μg g-1DW h-1 for P. latifolia). Some constraints of the MOCA model are discussed, but it is demonstrated to be a useful tool for simulating physiological processes and BVOC fluxes under very complicated plant distributions and environmental conditions, while requiring only a small number of input data.
A General Model for Estimating Macroevolutionary Landscapes.
Boucher, Florian C; Démery, Vincent; Conti, Elena; Harmon, Luke J; Uyeda, Josef
2018-03-01
The evolution of quantitative characters over long timescales is often studied using stochastic diffusion models. The current toolbox available to students of macroevolution is however limited to two main models: Brownian motion and the Ornstein-Uhlenbeck process, plus some of their extensions. Here, we present a very general model for inferring the dynamics of quantitative characters evolving under both random diffusion and deterministic forces of any possible shape and strength, which can accommodate interesting evolutionary scenarios like directional trends, disruptive selection, or macroevolutionary landscapes with multiple peaks. This model is based on a general partial differential equation widely used in statistical mechanics: the Fokker-Planck equation, also known in population genetics as the Kolmogorov forward equation. We thus call the model FPK, for Fokker-Planck-Kolmogorov. We first explain how this model can be used to describe macroevolutionary landscapes over which quantitative traits evolve and, more importantly, we detail how it can be fitted to empirical data. Using simulations, we show that the model has good behavior both in terms of discrimination from alternative models and in terms of parameter inference. We provide R code to fit the model to empirical data using either maximum-likelihood or Bayesian estimation, and illustrate the use of this code with two empirical examples of body mass evolution in mammals. FPK should greatly expand the set of macroevolutionary scenarios that can be studied since it opens the way to estimating macroevolutionary landscapes of any conceivable shape. [Adaptation; bounds; diffusion; FPK model; macroevolution; maximum-likelihood estimation; MCMC methods; phylogenetic comparative data; selection.].
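The kind of dynamics the FPK model describes can be illustrated with a direct simulation: a trait following the Itô SDE dx = -V'(x) dt + σ dW has a probability density governed by the Fokker-Planck equation. The double-well potential V(x) = (x² - 1)², the parameter values, and the Python implementation below are our own illustrative choices (mimicking disruptive selection with two adaptive peaks), not the authors' R code.

```python
import math
import random

def simulate_trait(x0=0.0, sigma=0.6, dt=0.01, steps=200_000, seed=7):
    """Euler-Maruyama simulation of dx = -V'(x) dt + sigma dW with
    V(x) = (x^2 - 1)^2; returns the fraction of time spent in the right well."""
    rng = random.Random(seed)
    x = x0
    visits_right = 0
    for _ in range(steps):
        drift = -4.0 * x * (x * x - 1.0)  # -V'(x), pulls toward the peaks at ±1
        x += drift * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        visits_right += x > 0.0
    return visits_right / steps

# With a symmetric landscape the trait hops between the two peaks:
print(round(simulate_trait(), 2))
```

The stationary density of this process is proportional to exp(-2V(x)/σ²), which is exactly the bimodal "macroevolutionary landscape" picture the FPK framework formalises.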
Alexander, Cici; Korstjens, Amanda H.; Hill, Ross A.
2018-03-01
Tree or canopy height is an important attribute for carbon stock estimation, forest management and habitat quality assessment. Airborne Laser Scanning (ALS) based on Light Detection and Ranging (LiDAR) has advantages over other remote sensing techniques for describing the structure of forests. However, sloped terrain can be challenging for accurate estimation of tree locations and heights based on a Canopy Height Model (CHM) generated from ALS data; a CHM is a height-normalised Digital Surface Model (DSM) obtained by subtracting a Digital Terrain Model (DTM) from a DSM. On sloped terrain, points at the same elevation on a tree crown appear to increase in height in the downhill direction, based on the ground elevations at these points. A point will be incorrectly identified as the treetop by individual tree crown (ITC) recognition algorithms if its height is greater than that of the actual treetop in the CHM, which will be recorded as the tree height. In this study, the influence of terrain slope and crown characteristics on the detection of treetops and estimation of tree heights is assessed using ALS data in a tropical forest with complex terrain (i.e. micro-topography) and tree crown characteristics. Locations and heights of 11,442 trees based on a DSM are compared with those based on a CHM. The horizontal (DH) and vertical (DV) displacements increase with terrain slope (r = 0.47 and r = 0.54, respectively). Errors in tree height are up to 16.6 m on slopes greater than 50° in our study area in Sumatra. The errors in locations (DH) and tree heights (DV) are modelled for trees with conical and spherical tree crowns. For a spherical tree crown, DH can be modelled as R sin θ, and DV as R (sec θ - 1). In this study, a model is developed for an idealised conical tree crown, DV = R (tan θ - tan ψ), where R is the crown radius, and θ and ψ are terrain and crown angles respectively. It is shown that errors occur only when terrain angle exceeds the crown angle, with the
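The three error formulas quoted above can be checked numerically. A small sketch, our own Python translation of the stated formulas; the crown radius and angles are arbitrary examples:

```python
import math

def spherical_errors(radius_m, slope_deg):
    """Spherical crown: DH = R sin(theta), DV = R (sec(theta) - 1)."""
    t = math.radians(slope_deg)
    dh = radius_m * math.sin(t)
    dv = radius_m * (1.0 / math.cos(t) - 1.0)
    return dh, dv

def conical_dv(radius_m, slope_deg, crown_angle_deg):
    """Idealised conical crown: DV = R (tan(theta) - tan(psi)),
    non-zero only when the terrain angle exceeds the crown angle."""
    t, p = math.radians(slope_deg), math.radians(crown_angle_deg)
    return radius_m * max(0.0, math.tan(t) - math.tan(p))

dh, dv = spherical_errors(radius_m=5.0, slope_deg=30.0)
print(round(dh, 2), round(dv, 2))            # 2.5 0.77
print(round(conical_dv(5.0, 50.0, 30.0), 2))  # 3.07
```

As the abstract notes, the conical-crown error vanishes when the terrain angle is at or below the crown angle, which the `max(0.0, ...)` clamp makes explicit.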
Jiang, Dong; Hao, Mengmeng; Fu, Jingying; Tian, Guangjin; Ding, Fangyu
2017-09-01
Global warming and increasing concentration of atmospheric greenhouse gas (GHG) have prompted considerable interest in the potential role of energy plant biomass. Cassava-based fuel ethanol is one of the most important bioenergy and has attracted much attention in both developed and developing countries. However, the development of cassava-based fuel ethanol is still faced with many uncertainties, including raw material supply, net energy potential, and carbon emission mitigation potential. Thus, an accurate estimation of these issues is urgently needed. This study provides an approach to estimate energy saving and carbon emission mitigation potentials of cassava-based fuel ethanol through LCA (life cycle assessment) coupled with a biogeochemical process model—GEPIC (GIS-based environmental policy integrated climate) model. The results indicate that the total potential of cassava yield on marginal land in China is 52.51 million t; the energy ratio value varies from 0.07 to 1.44, and the net energy surplus of cassava-based fuel ethanol in China is 92,920.58 million MJ. The total carbon emission mitigation from cassava-based fuel ethanol in China is 4593.89 million kgC. Guangxi, Guangdong, and Fujian are identified as target regions for large-scale development of cassava-based fuel ethanol industry. These results can provide an operational approach and fundamental data for scientific research and energy planning.
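The energy-ratio bookkeeping reported above reduces to simple arithmetic once life-cycle energy input and output are known. The definition below (output divided by input, with net energy as their difference) and the per-litre numbers are illustrative assumptions, not figures from the study:

```python
def energy_ratio(output_mj, input_mj):
    """Energy delivered by the fuel per MJ of life-cycle energy invested."""
    return output_mj / input_mj

def net_energy(output_mj, input_mj):
    """Net energy surplus (MJ); negative means the pathway is an energy sink."""
    return output_mj - input_mj

out_mj, in_mj = 26.8, 18.5  # MJ per litre of ethanol, hypothetical LCA totals
print(round(energy_ratio(out_mj, in_mj), 2), round(net_energy(out_mj, in_mj), 1))  # 1.45 8.3
```

A ratio below 1 (as at the low end of the 0.07-1.44 range reported above) means the pathway consumes more energy than it delivers in that region.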
Zhou, Si-Da; Ma, Yuan-Chen; Liu, Li; Kang, Jie; Ma, Zhi-Sai; Yu, Lei
2018-01-01
Identification of time-varying modal parameters contributes to the structural health monitoring, fault detection, vibration control, etc. of operational time-varying structural systems. However, it is a challenging task, because no more information is available for identifying a time-varying system than for a time-invariant one. This paper presents a modal parameter estimator for linear time-varying structural systems, based on a vector time-dependent autoregressive model and least squares support vector machines, for the case of output-only measurements. To reduce the computational cost, a Wendland compactly supported radial basis function is used to achieve sparsity of the Gram matrix. A Gamma-test-based non-parametric approach to selecting the regularization factor is adapted for the proposed estimator to replace time-consuming n-fold cross validation. A series of numerical examples illustrate the advantages of the proposed modal parameter estimator in suppressing overestimation and in handling short data records. A laboratory experiment has further validated the proposed estimator.
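The sparsity mechanism mentioned above can be illustrated directly: a compactly supported Wendland radial basis function makes Gram-matrix entries exactly zero whenever two samples are farther apart than the support radius. The C2 Wendland function below is the standard form; the sample points are made up:

```python
def wendland_c2(r, support):
    """Wendland C2 compactly supported RBF: (1 - s)^4 (4s + 1) for s = r/support < 1,
    exactly zero beyond the support radius."""
    s = r / support
    if s >= 1.0:
        return 0.0
    return (1.0 - s) ** 4 * (4.0 * s + 1.0)

def gram(points, support):
    """Dense Gram matrix; compact support makes distant pairs exact zeros."""
    return [[wendland_c2(abs(a - b), support) for b in points] for a in points]

pts = [0.0, 0.3, 1.2, 2.8]  # e.g. sampling instants of the time-dependent AR model
G = gram(pts, support=1.0)
zeros = sum(v == 0.0 for row in G for v in row)
print(f"{zeros}/16 Gram entries are exactly zero")  # 8/16
```

Exact (rather than merely small) zeros are what allow sparse storage and factorization of the Gram matrix, which is the computational saving the abstract refers to.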
Bennani, Aziza; El-Kettani, Amina; Hançali, Amina; El-Rhilani, Houssine; Alami, Kamal; Youbi, Mohamed; Rowley, Jane; Abu-Raddad, Laith; Smolak, Alex; Taylor, Melanie; Mahiané, Guy; Stover, John
2017-01-01
Background Evolving health priorities and resource constraints mean that countries require data on trends in sexually transmitted infections (STI) burden, to inform program planning and resource allocation. We applied the Spectrum STI estimation tool to estimate the prevalence and incidence of active syphilis in adult women in Morocco over 1995 to 2016. The results from the analysis are being used to inform Morocco’s national HIV/STI strategy, target setting and program evaluation. Methods Syphilis prevalence levels and trends were fitted through logistic regression to data from surveys in antenatal clinics, women attending family planning clinics and other general adult populations, as available post-1995. Prevalence data were adjusted for diagnostic test performance, and for the contribution of higher-risk populations not sampled in surveys. Incidence was inferred from prevalence by adjusting for the average duration of infection with active syphilis. Results In 2016, active syphilis prevalence was estimated to be 0.56% in women 15 to 49 years of age (95% confidence interval, CI: 0.3%-1.0%), and around 21,675 (10,612–37,198) new syphilis infections have occurred. The analysis shows a steady decline in prevalence from 1995, when the prevalence was estimated to be 1.8% (1.0–3.5%). The decline was consistent with decreasing prevalences observed in TB patients, fishermen and prisoners followed over 2000–2012 through sentinel surveillance, and with a decline since 2003 in national HIV incidence estimated earlier through independent modelling. Conclusions Periodic population-based surveys allowed Morocco to estimate syphilis prevalence and incidence trends. This first-ever undertaking engaged and focused national stakeholders, and confirmed the still considerable syphilis burden. The latest survey was done in 2012 and so the trends are relatively uncertain after 2012. From 2017 Morocco plans to implement a system to record data from routine antenatal
Bennani, Aziza; El-Kettani, Amina; Hançali, Amina; El-Rhilani, Houssine; Alami, Kamal; Youbi, Mohamed; Rowley, Jane; Abu-Raddad, Laith; Smolak, Alex; Taylor, Melanie; Mahiané, Guy; Stover, John; Korenromp, Eline L
2017-01-01
Evolving health priorities and resource constraints mean that countries require data on trends in sexually transmitted infections (STI) burden, to inform program planning and resource allocation. We applied the Spectrum STI estimation tool to estimate the prevalence and incidence of active syphilis in adult women in Morocco over 1995 to 2016. The results from the analysis are being used to inform Morocco's national HIV/STI strategy, target setting and program evaluation. Syphilis prevalence levels and trends were fitted through logistic regression to data from surveys in antenatal clinics, women attending family planning clinics and other general adult populations, as available post-1995. Prevalence data were adjusted for diagnostic test performance, and for the contribution of higher-risk populations not sampled in surveys. Incidence was inferred from prevalence by adjusting for the average duration of infection with active syphilis. In 2016, active syphilis prevalence was estimated to be 0.56% in women 15 to 49 years of age (95% confidence interval, CI: 0.3%-1.0%), and around 21,675 (10,612-37,198) new syphilis infections have occurred. The analysis shows a steady decline in prevalence from 1995, when the prevalence was estimated to be 1.8% (1.0-3.5%). The decline was consistent with decreasing prevalences observed in TB patients, fishermen and prisoners followed over 2000-2012 through sentinel surveillance, and with a decline since 2003 in national HIV incidence estimated earlier through independent modelling. Periodic population-based surveys allowed Morocco to estimate syphilis prevalence and incidence trends. This first-ever undertaking engaged and focused national stakeholders, and confirmed the still considerable syphilis burden. The latest survey was done in 2012 and so the trends are relatively uncertain after 2012. From 2017 Morocco plans to implement a system to record data from routine antenatal programmatic screening, which should help update and re
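The prevalence-trend fitting described above can be sketched as a logit-scale regression against calendar year. This is a simplified ordinary-least-squares stand-in for the Spectrum STI tool's fitting procedure, and the survey values below are illustrative, loosely echoing the reported decline from about 1.8% to about 0.56%:

```python
import math

def logit(p):
    return math.log(p / (1.0 - p))

def inv_logit(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_trend(years, prevalences):
    """OLS fit of logit(prevalence) = alpha + beta * year; returns (alpha, beta)."""
    zs = [logit(p) for p in prevalences]
    n = len(years)
    my, mz = sum(years) / n, sum(zs) / n
    beta = (sum((y - my) * (z - mz) for y, z in zip(years, zs))
            / sum((y - my) ** 2 for y in years))
    alpha = mz - beta * my
    return alpha, beta

years = [1995, 2001, 2007, 2012]
prevs = [0.018, 0.012, 0.008, 0.006]  # hypothetical survey prevalences
a, b = fit_trend(years, prevs)
print(f"2016 projection: {100 * inv_logit(a + b * 2016):.2f}%")
```

Working on the logit scale keeps every fitted and extrapolated prevalence inside (0, 1), which is why logistic regression is the natural choice for bounded prevalence trends.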
Walsh, T.; Layton, T.; Mellor, J. E.
2017-12-01
Storm damage to the electric grid impacts 23 million electric utility customers and costs US consumers $119 billion annually. Current restoration techniques rely on the past experiences of emergency managers. There are few analytical simulation and prediction tools available for utility managers to optimize storm recovery and decrease consumer cost, lost revenue and restoration time. We developed an agent based model (ABM) for storm recovery in Connecticut. An ABM is a computer modeling technique comprised of agents who are given certain behavioral rules and operate in a given environment. It allows the user to simulate complex systems by varying user-defined parameters to study emergent, unpredicted behavior. The ABM incorporates the road network and electric utility grid for the state, is validated using actual storm event recoveries and utilizes the Dijkstra routing algorithm to determine the best path for repair crews to travel between outages. The ABM has benefits for both researchers and utility managers. It can simulate complex system dynamics, rank variable importance, find tipping points that could significantly reduce restoration time or costs and test a broad range of scenarios. It is a modular, scalable and adaptable technique that can simulate scenarios in silico to inform emergency managers before and during storm events to optimize restoration strategies and better manage expectations of when power will be restored. Results indicate that total restoration time is strongly dependent on the number of crews. However, there is a threshold whereby more crews will not decrease the restoration time, which depends on the total number of outages. The addition of outside crews is more beneficial for storms with a higher number of outages. The time to restoration increases linearly with increasing repair time, while the travel speed has little overall effect on total restoration time. Crews traveling to the nearest outage reduces the total restoration time
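The routing step named above, Dijkstra's algorithm for crew travel between outages, can be sketched with a priority queue. The toy road graph and travel times below are invented for illustration; in the ABM the graph would be the state road network:

```python
import heapq

def dijkstra(graph, source):
    """Shortest travel times from source to every reachable node.

    graph: {node: [(neighbor, travel_minutes), ...]} adjacency lists.
    """
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry, already found a shorter path
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

roads = {"depot": [("a", 4), ("b", 1)],
         "b": [("a", 2), ("outage", 5)],
         "a": [("outage", 1)],
         "outage": []}
print(dijkstra(roads, "depot")["outage"])  # 4.0
```

Each repair-crew agent would call this against its current location to pick the nearest remaining outage, which is the "travel to the nearest outage" strategy the results above found to minimise restoration time.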
Jain, M.; Singh, B.; Srivastava, A.; Lobell, D. B.
2015-12-01
Food security will be challenged over the upcoming decades due to increased food demand, natural resource degradation, and climate change. In order to identify potential solutions to increase food security in the face of these changes, tools that can rapidly and accurately assess farm productivity are needed. With this aim, we have developed generalizable methods to map crop yields at the field scale using a combination of satellite imagery and crop models, and implement this approach within Google Earth Engine. We use these methods to examine wheat yield trends in Northern India, which provides over 15% of the global wheat supply and where over 80% of farmers rely on wheat as a staple food source. In addition, we identify the extent to which farmers are shifting sow date in response to heat stress, and how well shifting sow date reduces the negative impacts of heat stress on yield. To identify local-level decision-making, we map wheat sow date and yield at a high spatial resolution (30 m) using Landsat satellite imagery from 1980 to the present. This unique dataset allows us to examine sow date decisions at the field scale over 30 years, and by relating these decisions to weather experienced over the same time period, we can identify how farmers learn and adapt cropping decisions based on weather through time.
Redemann, J.; Livingston, J.; Shinozuka, Y.; Kacenelenbogen, M.; Russell, P.; LeBlanc, S.; Vaughan, M.; Ferrare, R.; Hostetler, C.; Rogers, R.;
2014-01-01
We have developed a technique for combining CALIOP aerosol backscatter, MODIS spectral AOD (aerosol optical depth), and OMI AAOD (absorption aerosol optical depth) retrievals for the purpose of estimating full spectral sets of aerosol radiative properties, and ultimately for calculating the 3-D distribution of direct aerosol radiative forcing. We present results using one year of data collected in 2007 and show comparisons of the aerosol radiative property estimates to collocated AERONET retrievals. Use of the recently released MODIS Collection 6 data for aerosol optical depths derived with the dark target and deep blue algorithms has extended the coverage of the multi-sensor estimates towards higher latitudes. We compare the spatio-temporal distribution of our multi-sensor aerosol retrievals and calculations of seasonal clear-sky aerosol radiative forcing based on the aerosol retrievals to values derived from four models that participated in the latest AeroCom model intercomparison initiative. We find significant inter-model differences, in particular for the aerosol single scattering albedo, which can be evaluated using the multi-sensor A-Train retrievals. We discuss the major challenges that exist in extending our clear-sky results to all-sky conditions. On the basis of comparisons to suborbital measurements, we present some of the limitations of the MODIS and CALIOP retrievals in the presence of adjacent or underlying clouds. Strategies for meeting these challenges are discussed.
MATLAB-implemented estimation procedure for model-based assessment of hepatic insulin degradation from standard intravenous glucose tolerance test data.
Di Nardo, Francesco; Mengoni, Michele; Morettini, Micaela
2013-05-01
The present study provides a novel MATLAB-based parameter estimation procedure for individual assessment of the hepatic insulin degradation (HID) process from standard frequently-sampled intravenous glucose tolerance test (FSIGTT) data. Direct access to the source code, offered by MATLAB, enabled us to design an optimization procedure based on the alternating use of Gauss-Newton's and Levenberg-Marquardt's algorithms, which assures full convergence of the process while containing computational time. Reliability was tested by direct comparison with the application, in eighteen non-diabetic subjects, of the well-known kinetic analysis software package SAAM II, and by application to different data. Agreement between MATLAB and SAAM II was warranted by intraclass correlation coefficients ≥0.73, no significant differences between corresponding mean parameter estimates and predictions of HID rate, and consistent residual analysis. Moreover, the MATLAB optimization procedure resulted in a significant 51% reduction of CV% for the parameter worst estimated by SAAM II, while maintaining acceptable CV% values for all model parameters. The MATLAB-based procedure is therefore suggested as a suitable tool for the individual assessment of the HID process. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
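The alternation between Gauss-Newton and Levenberg-Marquardt described above can be sketched on a toy two-parameter fit, y = a·exp(-b·t). This is not the authors' MATLAB code: it is a minimal Python Levenberg-Marquardt loop in which the damping factor shrinks toward zero (recovering Gauss-Newton steps) when a step reduces the cost and grows when it does not:

```python
import math

def cost(ts, ys, a, b):
    """Sum of squared residuals for the model y = a * exp(-b * t)."""
    return sum((y - a * math.exp(-b * t)) ** 2 for t, y in zip(ts, ys))

def lm_fit(ts, ys, a, b, lam=1e-2, iters=100):
    """Levenberg-Marquardt for the 2-parameter exponential model;
    lam -> 0 reduces each step to a pure Gauss-Newton step."""
    c = cost(ts, ys, a, b)
    for _ in range(iters):
        # residuals r_i = y_i - a*exp(-b*t_i) and their Jacobian
        r = [y - a * math.exp(-b * t) for t, y in zip(ts, ys)]
        J = [(-math.exp(-b * t), a * t * math.exp(-b * t)) for t in ts]
        # damped normal equations (J'J + lam*I) delta = -J'r, solved as a 2x2 system
        g11 = sum(j0 * j0 for j0, _ in J) + lam
        g12 = sum(j0 * j1 for j0, j1 in J)
        g22 = sum(j1 * j1 for _, j1 in J) + lam
        b1 = -sum(j0 * ri for (j0, _), ri in zip(J, r))
        b2 = -sum(j1 * ri for (_, j1), ri in zip(J, r))
        det = g11 * g22 - g12 * g12
        da, db = (b1 * g22 - b2 * g12) / det, (g11 * b2 - g12 * b1) / det
        c_new = cost(ts, ys, a + da, b + db)
        if c_new < c:                       # accept: relax toward Gauss-Newton
            a, b, c, lam = a + da, b + db, c_new, lam * 0.5
        else:                               # reject: increase damping
            lam *= 4.0
    return a, b

ts = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [2.0 * math.exp(-0.7 * t) for t in ts]  # noiseless synthetic data
print(lm_fit(ts, ys, a=1.0, b=0.3))          # converges near (2.0, 0.7)
```

The accept/reject rule is what makes the scheme robust far from the solution while retaining Gauss-Newton's fast local convergence, the same trade-off the alternating procedure above exploits.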
Ford, Jennifer Lynn; Green, Joanne Balmer; Lietz, Georg; Oxley, Anthony; Green, Michael H
2017-09-01
Background: Provitamin A carotenoids are an important source of dietary vitamin A for many populations. Thus, accurate and simple methods for estimating carotenoid bioefficacy are needed to evaluate the vitamin A value of test solutions and plant sources. β-Carotene bioefficacy is often estimated from the ratio of the areas under plasma isotope response curves after subjects ingest labeled β-carotene and a labeled retinyl acetate reference dose [isotope reference method (IRM)], but to our knowledge the method has not yet been evaluated for accuracy. Objectives: Our objectives were to develop and test a physiologically based compartmental model that includes both absorptive and postabsorptive β-carotene bioconversion and to use the model to evaluate the accuracy of the IRM and of a simple plasma retinol isotope ratio [RIR; labeled β-carotene-derived retinol/labeled reference-dose-derived retinol in one plasma sample] for estimating relative bioefficacy. Methods: We used model-based compartmental analysis (Simulation, Analysis and Modeling software) to develop and apply a model that provided known values for β-carotene bioefficacy. Theoretical data for 10 subjects were generated by the model and used to determine bioefficacy by RIR and IRM; predictions were compared with known values. We also applied the RIR and IRM to previously published data. Results: The plasma RIR accurately predicted β-carotene relative bioefficacy at 14 d or later. The IRM also accurately predicted bioefficacy by 14 d, except that, when there was substantial postabsorptive bioconversion, the IRM underestimated bioefficacy. Based on our model, 1-d predictions of relative bioefficacy include absorptive plus a portion of early postabsorptive conversion. Conclusion: The plasma RIR is a simple tracer method that accurately predicts β-carotene relative bioefficacy based on analysis of one blood sample obtained ≥14 d after co-ingestion of labeled β-carotene and retinyl acetate. The method also provides
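The two estimators compared in this record reduce to simple ratios; a sketch follows. The dose normalization and function names are assumptions for illustration, not the published formulas:

```python
import numpy as np

def _auc(t, y):
    """Trapezoidal area under a plasma response curve."""
    t, y = np.asarray(t, float), np.asarray(y, float)
    return float(np.sum(0.5*(y[1:] + y[:-1])*np.diff(t)))

def rir_bioefficacy(plasma_c_retinol, plasma_ref_retinol, dose_c, dose_ref):
    """Retinol isotope ratio (RIR) from a single plasma sample: labeled
    beta-carotene-derived retinol over reference-dose-derived retinol,
    each normalized by its ingested tracer dose."""
    return (plasma_c_retinol/dose_c) / (plasma_ref_retinol/dose_ref)

def irm_bioefficacy(t, resp_c, resp_ref, dose_c, dose_ref):
    """Isotope reference method (IRM): dose-normalized ratio of the areas
    under the two plasma isotope response curves."""
    return (_auc(t, resp_c)/dose_c) / (_auc(t, resp_ref)/dose_ref)
```

The study's point is that the single-sample RIR, evaluated at ≥14 d, approximates the full-curve IRM without requiring a complete response curve.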
Y. He; Q. Zhuang; A.D. McGuire; Y. Liu; M. Chen
2013-01-01
Model-data fusion is a process in which field observations are used to constrain model parameters. How observations are used to constrain parameters has a direct impact on the carbon cycle dynamics simulated by ecosystem models. In this study, we present an evaluation of several options for the use of observations in modeling regional carbon dynamics and explore the...
Energy Technology Data Exchange (ETDEWEB)
Stetzel, KD; Aldrich, LL; Trimboli, MS; Plett, GL
2015-03-15
This paper addresses the problem of estimating the present value of electrochemical internal variables in a lithium-ion cell in real time, using readily available measurements of cell voltage, current, and temperature. The variables that can be estimated include any desired set of reaction flux and solid and electrolyte potentials and concentrations at any set of one-dimensional spatial locations, in addition to more standard quantities such as state of charge. The method uses an extended Kalman filter along with a one-dimensional physics-based reduced-order model of cell dynamics. Simulations show excellent and robust predictions having dependable error bounds for most internal variables. (C) 2014 Elsevier B.V. All rights reserved.
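The estimator this record describes pairs an extended Kalman filter with a cell model. A minimal sketch is shown here on a one-state equivalent-circuit surrogate rather than the paper's physics-based reduced-order model; the open-circuit-voltage curve, cell parameters, and noise levels are illustrative assumptions:

```python
import numpy as np

def ocv(soc):
    """Assumed open-circuit-voltage curve (illustrative polynomial)."""
    return 3.0 + 0.7*soc + 0.1*soc**2

def ekf_soc(currents, voltages, dt=1.0, Q_c=3600.0, R0=0.01,
            q=1e-7, r=1e-4, soc0=0.8, P0=0.1):
    """Estimate state of charge from measured current/voltage with an EKF.

    State equation: coulomb counting, soc' = soc - i*dt/Q_c (Q_c in coulombs).
    Measurement:    v = ocv(soc) - R0*i, linearized at the predicted state.
    """
    soc, P = soc0, P0
    out = []
    for i, v in zip(currents, voltages):
        # predict (state transition Jacobian is 1)
        soc = soc - i*dt/Q_c
        P = P + q
        # update: H is the OCV slope at the predicted state
        H = 0.7 + 0.2*soc
        K = P*H/(H*P*H + r)
        soc = soc + K*(v - (ocv(soc) - R0*i))
        P = (1.0 - K*H)*P
        out.append(soc)
    return np.array(out)
```

In the paper the scalar state is replaced by the reduced-order model's full state vector, which is what lets the filter report internal electrochemical variables (potentials, concentrations, reaction fluxes) rather than only state of charge.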
Saurer, Matthias; Renato, Spahni; Fortunat, Joos; David, Frank; Kerstin, Treydte; Rolf, Siegwolf
2015-04-01
Tree-ring δ13C-based estimates of intrinsic water-use efficiency (iWUE, reflecting the ratio of assimilation A to stomatal conductance gs) generally show a strong increase during the industrial period, likely associated with the increase in atmospheric CO2. However, it is not clear, first, whether tree-ring δ13C-derived iWUE values indeed reflect actual plant- and ecosystem-scale variability in fluxes and, second, which physiological changes drove the observed iWUE increase: changes in A, in gs, or in both. To address these questions, we used a complex dynamic vegetation model (LPX) that combines process-based vegetation dynamics with land-atmosphere carbon and water exchange. The analysis was conducted for three functional types, representing conifers, oaks, and larch, at various sites in Europe where tree-ring isotope data are available. The increase in iWUE over the 20th century was comparable in the LPX simulations and in the tree-ring estimates, strengthening confidence in these results. Furthermore, the results from the LPX model suggest that the iWUE increase was caused by reduced stomatal conductance during recent decades rather than by increased assimilation. High-frequency variation reflects the influence of climate, for example the 1976 summer drought, which resulted in strongly reduced A and gs in the model, particularly for oak.
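The quantity at the center of this record, iWUE = A/gs, is conventionally recovered from tree-ring δ13C via the simple two-endpoint Farquhar discrimination model. A sketch under that standard model follows; the fractionation constants a = 4.4‰ and b = 27‰ are the usual textbook values, not taken from this study:

```python
def iwue_from_d13c(d13c_plant, d13c_air, ca, a=4.4, b=27.0):
    """Intrinsic water-use efficiency (umol CO2 per mol H2O) from delta13C.

    Discrimination:  Delta = (d13c_air - d13c_plant) / (1 + d13c_plant/1000)
    Simple model:    Delta = a + (b - a) * ci/ca
    Definition:      iWUE  = A/gs = ca * (1 - ci/ca) / 1.6
    (1.6 is the ratio of diffusivities of water vapour and CO2 in air.)
    """
    Delta = (d13c_air - d13c_plant) / (1.0 + d13c_plant/1000.0)
    ci_over_ca = (Delta - a) / (b - a)
    return ca * (1.0 - ci_over_ca) / 1.6
```

With a ring value of −27‰, air at −8‰, and ca = 400 ppm this gives an iWUE of roughly 83 μmol mol⁻¹, in the range typically reported for late-20th-century European trees.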
Energy Technology Data Exchange (ETDEWEB)
Mandelli, M. [Proing Italia, Torbole sul Garda, Trento (Italy); Rinaldi, C. [ERSE, Milan (Italy); Vacchieri, E. [Ansaldo Energia S.p.A., Genoa (Italy)
2010-07-01
In the frame of the collaborative program COST 538, a coating life prediction code was implemented by Proing and ERSE with an inverse-problem solution routine able to calculate the local mean operating temperature from the operating conditions and the extent of the coating depleted regions. In addition, base-material degradation models were developed by Ansaldo Energia (AEN) for both equiaxed and single-crystal superalloys. This paper describes the application of these methodologies to two ex-service 1st stage gas turbine blades delivered to COST 538 by AEN after operation in two different plants under different operating conditions. The objective of the study was the application and validation of an innovative NDT technique and the estimation of the mean operating temperature at different positions on the components. Destructive metallographic analysis of the blades allowed validation of the non-destructive frequency scanning eddy current technique (F-SECT). Coating life modelling results are compared with those of the base-material degradation models. An interesting correlation was found between the temperatures estimated with the two methods, and also with the NDT findings at the most significant component positions. (orig.)
Nonparametric estimation in models for unobservable heterogeneity
Hohmann, Daniel
2014-01-01
Nonparametric models which allow for data with unobservable heterogeneity are studied. The first publication introduces new estimators and their asymptotic properties for conditional mixture models. The second publication considers estimation of a function from noisy observations of its Radon transform in a Gaussian white noise model.
MCMC estimation of multidimensional IRT models
Beguin, Anton; Glas, Cornelis A.W.
1998-01-01
A Bayesian procedure to estimate the three-parameter normal ogive model and a generalization to a model with multidimensional ability parameters are discussed. The procedure is a generalization of a procedure by J. Albert (1992) for estimating the two-parameter normal ogive model. The procedure will
International Nuclear Information System (INIS)
Mededovic Thagard, Selma; Stratton, Gunnar R; Paek, Eunsu; Dai, Fei; Holsen, Thomas M; Bellona, Christopher L; Bohl, Douglas G; Dickenson, Eric R V
2017-01-01
To determine the types of applications for which plasma-based water treatment (PWT) is best suited, the treatability of 23 environmental contaminants was assessed through treatment in a gas discharge reactor with argon bubbling, termed the enhanced-contact reactor. The contaminants were treated in a mixture to normalize reaction conditions and convective transport limitations. Treatability was compared in terms of the observed removal rate constant (k_obs). To characterize the influence of interfacial processes on k_obs, a model was developed that accurately predicts k_obs for each compound, as well as the contributions to k_obs from each of the three general degradation mechanisms thought to occur at or near the gas–liquid interface: ‘sub-surface’, ‘surface’ and ‘above-surface’. Sub-surface reactions occur just underneath the gas–liquid interface between the contaminants and dissolved plasma-generated radicals, contributing significantly to the removal of compounds that lack surfactant-like properties and so are not highly concentrated at the interface. Surface reactions occur at the interface between the contaminants and dissolved radicals, contributing significantly to the removal of surfactant-like compounds that have high interfacial concentrations. The contaminants’ interfacial concentrations were calculated using surface-activity parameters determined through surface tension measurements. Above-surface reactions are proposed to take place in the plasma interior between highly energetic plasma species and exposed portions of compounds that extend out of the interface. This mechanism largely accounts for the degradation of surfactant-like contaminants that contain highly hydrophobic perfluorocarbon groups, which are most likely to protrude from the interface. For a few compounds, the degree of exposure to the plasma interior was supported by new and previously reported molecular dynamics simulations results. By reviewing the predicted
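The observed removal rate constant used throughout this abstract is simply the slope of a first-order decay. A minimal way to extract it from a concentration time series (synthetic data; the log-linear regression approach is standard practice, not specific to this reactor):

```python
import numpy as np

def fit_kobs(t, conc):
    """First-order removal: C(t) = C0 * exp(-k_obs * t).
    k_obs is minus the least-squares slope of ln(C/C0) against t."""
    y = np.log(np.asarray(conc, float) / conc[0])
    slope = np.polyfit(np.asarray(t, float), y, 1)[0]
    return -slope
```

Ranking compounds by the fitted k_obs values is what the abstract means by comparing "treatability".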
Estimation of Stochastic Volatility Models by Nonparametric Filtering
DEFF Research Database (Denmark)
Kanaya, Shin; Kristensen, Dennis
2016-01-01
/estimated volatility process replacing the latent process. Our estimation strategy is applicable to both parametric and nonparametric stochastic volatility models, and can handle both jumps and market microstructure noise. The resulting estimators of the stochastic volatility model will carry additional biases...... and variances due to the first-step estimation, but under regularity conditions we show that these vanish asymptotically and our estimators inherit the asymptotic properties of the infeasible estimators based on observations of the volatility process. A simulation study examines the finite-sample properties...
Estimating North Dakota's Economic Base
Coon, Randal C.; Leistritz, F. Larry
2009-01-01
North Dakota’s economic base comprises those activities producing a product paid for by nonresidents, or products exported from the state. North Dakota’s economic base activities include agriculture, mining, manufacturing, tourism, and federal government payments for construction and to individuals. Development of the North Dakota economic base data is important because it provides the information needed to quantify the state’s economic growth, and it creates the final demand sectors for the N...
DEFF Research Database (Denmark)
Nakagawa, Fumiyo; van Sighem, Ard; Thiebaut, Rodolphe
2016-01-01
It is important not only to collect epidemiologic data on HIV but also to fully utilize such information to understand the epidemic over time and to help inform and monitor the impact of policies and interventions. We describe and apply a novel method to estimate the size and characteristics of HIV-positive populations. The method was applied to data on men who have sex with men living in the UK and to a pseudo dataset to assess performance for different data availability. The individual-based simulation model was calibrated using an approximate Bayesian computation-based approach. In 2013, 48,310 (90% plausibility range: 39,900-45,560) men who have sex with men were estimated to be living with HIV in the UK, of whom 10,400 (6,160-17,350) were undiagnosed. There were an estimated 3,210 (1,730-5,350) infections per year on average between 2010 and 2013. Sixty-two percent of the total HIV-positive population...
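Approximate Bayesian computation of the rejection type used to calibrate the simulation model can be sketched generically. This toy version estimates a Poisson incidence rate; the prior, summary statistic, and tolerance are illustrative assumptions, not the study's choices:

```python
import numpy as np

def abc_rejection(observed_mean, n_obs, n_draws=20000, eps=0.05, seed=0):
    """ABC rejection sampling: draw rates from a uniform prior, simulate a
    dataset for each draw, and keep draws whose simulated summary statistic
    (here the sample mean) lies within eps of the observed one."""
    rng = np.random.default_rng(seed)
    rates = rng.uniform(0.0, 10.0, n_draws)                  # prior draws
    sims = rng.poisson(rates[:, None], (n_draws, n_obs)).mean(axis=1)
    accepted = rates[np.abs(sims - observed_mean) < eps]
    return accepted                                          # approx. posterior
```

In the study the "simulate a dataset" step is the full individual-based HIV simulation and the summary statistics are surveillance outputs; the accepted parameter sets then yield the plausibility ranges quoted above.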
Energy Technology Data Exchange (ETDEWEB)
Zhang, S; Politte, D; O’Sullivan, J [Washington University in St. Louis, St. Louis, MO (United States); Han, D; Porras-Chaverri, M; Williamson, J [Virginia Commonwealth University, Richmond, VA (United States); Whiting, B [University of Pittsburgh, Pittsburgh, PA (United States)
2016-06-15
Purpose: This work aims to reduce the uncertainty in proton stopping power (SP) estimation by a novel combination of a linear, separable basis vector model (BVM) for stopping power calculation (Med Phys 43:600) and a statistical, model-based dual-energy CT (DECT) image reconstruction algorithm (TMI 35:685). The method was applied to experimental data. Methods: BVM assumes the photon attenuation coefficients, electron densities, and mean excitation energies (I-values) of unknown materials can be approximated by a combination of the corresponding quantities of two reference materials. The DECT projection data for a phantom with 5 different known materials were collected on a Philips Brilliance scanner using two scans at 90 kVp and 140 kVp. The line integral alternating minimization (LIAM) algorithm was used to recover the two BVM coefficient images using the measured source spectra. The proton stopping powers are then estimated from the Bethe-Bloch equation using electron densities and I-values derived from the BVM coefficients. The proton stopping powers and proton ranges for the phantom materials estimated via our BVM-based DECT method are compared to ICRU reference values and a post-processing DECT analysis (Yang PMB 55:1343) applied to vendor-reconstructed images using the Torikoshi parametric fit model (tPFM). Results: For the phantom materials, the average stopping power estimates for 175 MeV protons derived from our method are within 1% of the ICRU reference values (except for Teflon, with a 1.48% error), with an average standard deviation of 0.46% over pixels. The resultant proton ranges agree with the reference values within 2 mm. Conclusion: Our principled DECT iterative reconstruction algorithm, incorporating optimal beam hardening and scatter corrections, in conjunction with a simple linear BVM model, achieves more accurate and robust proton stopping power maps than the post-processing, nonlinear tPFM based DECT analysis applied to conventional
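The final step described — obtaining stopping power from the Bethe equation once electron density and I-value are known — can be sketched as a stopping-power ratio relative to water. This uses a simplified Bethe term without shell or density corrections; the rest energies are standard constants, and the BVM coefficient mixing itself is not reproduced here:

```python
import math

def rsp(rho_e_rel, I_eV, kinetic_MeV=175.0, I_water_eV=75.0):
    """Relative (to water) proton stopping power from the simplified Bethe
    formula: S ~ rho_e * [ln(2*me*c^2*beta^2 / ((1 - beta^2)*I)) - beta^2].
    rho_e_rel is electron density relative to water; I in eV."""
    mp_MeV, two_me_MeV = 938.272, 1.022   # proton rest energy, 2*me*c^2
    gamma = 1.0 + kinetic_MeV/mp_MeV
    beta2 = 1.0 - 1.0/gamma**2

    def L(I):                             # stopping number (simplified)
        return math.log(two_me_MeV*1e6*beta2/((1.0 - beta2)*I)) - beta2

    return rho_e_rel * L(I_eV) / L(I_water_eV)
```

Because the stopping number depends only logarithmically on I, the relative stopping power is dominated by the electron density term, which is why accurate BVM coefficient images translate into sub-percent SP errors.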
Efficient estimation of an additive quantile regression model
Cheng, Y.; de Gooijer, J.G.; Zerom, D.
2009-01-01
In this paper two kernel-based nonparametric estimators are proposed for estimating the components of an additive quantile regression model. The first estimator is a computationally convenient approach which can be viewed as a viable alternative to the method of De Gooijer and Zerom (2003). By
Energy Technology Data Exchange (ETDEWEB)
Lumen, A., E-mail: Annie.Lumen@fda.hhs.gov [Division of Biochemical Toxicology, National Center for Toxicological Research, U.S. Food and Drug Administration, Jefferson, AR 72079 (United States); George, N.I., E-mail: Nysia.George@fda.hhs.gov [Division of Bioinformatics and Biostatistics, National Center for Toxicological Research, U.S. Food and Drug Administration, Jefferson, AR 72079 (United States)
2017-01-01
Previously, a deterministic biologically-based dose-response (BBDR) pregnancy model was developed to evaluate moderate thyroid axis disturbances with and without thyroid-active chemical exposure in a near-term pregnant woman and fetus. In the current study, the existing BBDR model was adapted to include a wider functional range of iodine nutrition, including more severe iodine deficiency conditions, and to incorporate empirically the effects of homeostatic mechanisms. The extended model was further developed into a population-based model and was constructed using a Monte Carlo-based probabilistic framework. In order to characterize total (T4) and free (fT4) thyroxine levels for a given iodine status at the population level, the distribution of iodine intake for late-gestation pregnant women in the U.S. was reconstructed using various reverse dosimetry methods and available biomonitoring data. The range of median (mean) iodine intake values resulting from the three different methods of reverse dosimetry tested was 196.5–219.9 μg of iodine/day (228.2–392.9 μg of iodine/day). There was minimal variation in model-predicted maternal serum T4 and fT4 levels from use of the three reconstructed distributions of iodine intake; the ranges of geometric means for T4 and fT4 were 138–151.7 nmol/L and 7.9–8.7 pmol/L, respectively. The average value of the ratio of the 97.5th percentile to the 2.5th percentile equaled 3.1 and agreed well with similar estimates from recent observations in third-trimester pregnant women in the U.S. In addition, the reconstructed distributions of iodine intake allowed us to estimate nutrient inadequacy for late-gestation pregnant women in the U.S. via the probability approach. The prevalence of iodine inadequacy for third-trimester pregnant women in the U.S. was estimated to be between 21% and 44%. Taken together, the current work provides an improved tool for evaluating iodine nutritional status and the corresponding thyroid function status.
Estimating Canopy Dark Respiration for Crop Models
Monje Mejia, Oscar Alberto
2014-01-01
Crop production estimates are obtained from accurate estimates of daily carbon gain. Canopy gross photosynthesis (Pgross) can be estimated from biochemical models of photosynthesis using sun and shaded leaf portions and the amount of intercepted photosynthetically active radiation (PAR). In turn, canopy daily net carbon gain can be estimated from canopy daily gross photosynthesis when canopy dark respiration (Rd) is known.
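The bookkeeping in this abstract is simple enough to state directly. A sketch of daily net carbon gain from hourly gross photosynthesis and a known dark respiration rate; the units and the convention of applying Rd over all 24 h are assumptions for illustration:

```python
def daily_net_carbon_gain(pgross_hourly, rd):
    """Daily net carbon gain = sum of hourly canopy gross photosynthesis
    minus canopy dark respiration applied over all 24 h.
    pgross_hourly: 24 hourly Pgross values (mol CO2 m^-2 h^-1).
    rd: canopy dark respiration rate (mol CO2 m^-2 h^-1)."""
    return sum(pgross_hourly) - 24.0*rd
```

For example, a canopy fixing 1.0 mol m⁻² h⁻¹ over 12 daylight hours with Rd = 0.2 mol m⁻² h⁻¹ nets 12 − 4.8 = 7.2 mol CO2 m⁻² d⁻¹, which is why an accurate Rd estimate matters as much as Pgross.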
Okeme, Joseph O; Parnis, J Mark; Poole, Justen; Diamond, Miriam L; Jantunen, Liisa M
2016-08-01
Polydimethylsiloxane (PDMS) shows promise for use as a passive air sampler (PAS) for semi-volatile organic compounds (SVOCs). To use PDMS as a PAS, knowledge of its chemical-specific partitioning behaviour and time to equilibrium is needed. Here we report on the effectiveness of two approaches for estimating the partitioning properties of PDMS: values of PDMS-to-air partition ratios or coefficients (KPDMS-Air) and times to equilibrium for a range of SVOCs. Measured values of KPDMS-Air,Exp at 25 °C obtained using the gas chromatography retention method (GC-RT) were compared with estimates from a poly-parameter linear free energy relationship (pp-LFER) and a COSMO-RS oligomer-based model. Target SVOCs included novel flame retardants (NFRs), polybrominated diphenyl ethers (PBDEs), polycyclic aromatic hydrocarbons (PAHs), organophosphate flame retardants (OPFRs), polychlorinated biphenyls (PCBs) and organochlorine pesticides (OCPs). Significant positive relationships were found between log KPDMS-Air,Exp and estimates made using the pp-LFER model (log KPDMS-Air,pp-LFER) and the COSMOtherm program (log KPDMS-Air,COSMOtherm). The discrepancy and bias between measured and predicted values were much higher for COSMO-RS than for the pp-LFER model, confirming the anticipated better performance of the pp-LFER model. Calculations made using measured KPDMS-Air,Exp values show that a PDMS PAS of 0.1 cm thickness will reach 25% of its equilibrium capacity in ∼1 day for alpha-hexachlorocyclohexane (α-HCH) and in ∼500 years for tris(4-tert-butylphenyl) phosphate (TTBPP), which brackets the volatility range of all compounds tested. The results presented show the utility of the GC-RT method for rapid and precise measurement of KPDMS-Air. Copyright © 2016. Published by Elsevier Ltd.
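The time-to-equilibrium figures quoted (∼1 day for α-HCH to ∼500 years for TTBPP) follow from a one-compartment uptake model in which the uptake rate constant scales inversely with the partition coefficient and sampler thickness. A sketch under that model; the air-side mass-transfer coefficient here is a placeholder value, not taken from the paper:

```python
import math

def time_to_fraction(K_pdms_air, thickness_cm, v_cm_per_s, f=0.25):
    """One-compartment uptake: m(t)/m_eq = 1 - exp(-k*t), with
    k = v / (K * thickness) for an air-side-controlled film sampler.
    Returns the time (days) to reach fraction f of equilibrium capacity."""
    k = v_cm_per_s / (K_pdms_air * thickness_cm)     # 1/s
    return -math.log(1.0 - f) / k / 86400.0          # seconds -> days
```

The model makes the abstract's spread intuitive: each order of magnitude in KPDMS-Air multiplies the time to a given fraction of equilibrium by ten, so compounds spanning many log units of K necessarily span days to centuries.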
Lindaas, J.; Commane, R.; Luus, K. A.; Chang, R. Y. W.; Miller, C. E.; Dinardo, S. J.; Henderson, J.; Mountain, M. E.; Karion, A.; Sweeney, C.; Miller, J. B.; Lin, J. C.; Daube, B. C.; Pittman, J. V.; Wofsy, S. C.
2014-12-01
The Alaskan region has historically been a sink of atmospheric CO2, but permafrost currently stores large amounts of carbon that are vulnerable to release to the atmosphere as northern high-latitudes continue to warm faster than the global average. We use aircraft CO2 data with a remote-sensing based model driven by MODIS satellite products and validated by CO2 flux tower data to calculate average daily CO2 fluxes for the region of Alaska during the growing seasons of 2012 and 2013. Atmospheric trace gases were measured during CARVE (Carbon in Arctic Reservoirs Vulnerability Experiment) aboard the NASA Sherpa C-23 aircraft. For profiles along the flight track, we couple the Weather Research and Forecasting (WRF) model with the Stochastic Time-Inverted Lagrangian Transport (STILT) model, and convolve these footprints of surface influence with our remote-sensing based model, the Polar Vegetation Photosynthesis Respiration Model (PolarVPRM). We are able to calculate average regional fluxes for each month by minimizing the difference between the data and model column integrals. Our results provide a snapshot of the current state of regional Alaskan growing season net ecosystem exchange (NEE). We are able to begin characterizing the interannual variation in Alaskan NEE and to inform future refinements in process-based modeling that will produce better estimates of past, present, and future pan-Arctic NEE. Understanding if/when/how the Alaskan region transitions from a sink to a source of CO2 is crucial to predicting the trajectory of future climate change.
Resource-estimation models and predicted discovery
International Nuclear Information System (INIS)
Hill, G.W.
1982-01-01
Resources have been estimated by predictive extrapolation from past discovery experience, by analogy with better explored regions, or by inference from evidence of depletion of targets for exploration. Changes in technology and new insights into geological mechanisms have occurred sufficiently often in the long run to form part of the pattern of mature discovery experience. The criterion that a meaningful resource estimate needs an objective measure of its precision or degree of uncertainty excludes 'estimates' based solely on expert opinion. This is illustrated by the development of error measures for several persuasive models of discovery and production of oil and gas in the USA, both annually and in terms of increasing exploration effort. Appropriate generalizations of the models resolve many points of controversy. This is illustrated using two USA data sets describing discovery of oil and of U3O8; the latter set highlights an inadequacy of available official data. Review of the oil-discovery data set provides a warrant for adjusting the time-series prediction to a higher resource figure for US petroleum. (author)
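One common form for the "persuasive models" alluded to expresses cumulative discoveries as an exhausting exponential in cumulative exploratory effort; fitting it yields both an ultimate-resource estimate and the residual-based error measure the author demands. The functional form is a standard choice in this literature and the data below are synthetic, not the USA datasets discussed:

```python
import numpy as np

def fit_discovery(effort, cum_disc):
    """Fit Q(E) = Qinf*(1 - exp(-a*E)) by grid search over Qinf and a,
    minimizing the sum of squared residuals; the residual SSE is the
    objective measure of precision. Returns (Qinf, a, sse)."""
    effort = np.asarray(effort, float)
    cum_disc = np.asarray(cum_disc, float)
    best = (None, None, np.inf)
    for Qinf in np.linspace(cum_disc[-1], 3.0*cum_disc[-1], 120):
        for a in np.linspace(1e-4, 1e-1, 120):
            r = cum_disc - Qinf*(1.0 - np.exp(-a*effort))
            sse = float(r @ r)
            if sse < best[2]:
                best = (Qinf, a, sse)
    return best
```

The fitted Qinf is the ultimate-resource estimate, and examining how the SSE surface flattens along the Qinf axis is one way to express the estimate's degree of uncertainty.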
DEFF Research Database (Denmark)
Hukkerikar, Amol; Sarup, Bent; Abildskov, Jens
and uncertainty analysis, in general, is developed and used. In total 21 properties of pure components, which include normal boiling point, critical constants, normal melting point among others have been analysed. The statistical analysis of the model performance for these properties is highlighted through...... several illustrative examples. Important issues related to property modeling such as thermodynamic consistency of the predicted properties (relation of normal boiling point versus critical temperature etc.) are analysed. The developed methodology is simple, yet sound and effective and provides not only...
Sun, Wenchao; Ishidaira, Hiroshi; Bastola, Satish; Yu, Jingshan
2015-05-01
A lack of observation data for calibration constrains the application of hydrological models to estimating daily streamflow time series. Recent improvements in remote sensing enable detection of river water-surface width from satellite observations, making it possible to track streamflow from space. In this study, a method for calibrating hydrological models using river width derived from remote sensing is demonstrated through application to the ungauged Irrawaddy Basin in Myanmar. Generalized likelihood uncertainty estimation (GLUE) is selected as a tool for automatic calibration and uncertainty analysis. Of 50,000 randomly generated parameter sets, 997 are identified as behavioral, based on comparing model simulations with satellite observations. The uncertainty band of the streamflow simulation spans most of the 10-year average monthly observed streamflow for moderate and high flow conditions. Nash-Sutcliffe efficiency is 95.7% for the simulated streamflow at the 50% quantile. These results indicate that application to the target basin is generally successful. Beyond evaluating the method in a basin lacking streamflow data, difficulties and possible solutions for real-world applications are addressed to promote future use of the proposed method in more ungauged basins. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
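The GLUE procedure used here can be sketched generically: sample parameter sets, keep those whose simulations exceed a behavioral threshold, and summarize the behavioral ensemble by quantiles. The toy one-parameter linear-reservoir model and the NSE threshold below are illustrative assumptions; in the paper the likelihood is computed against satellite-derived river widths rather than discharge:

```python
import numpy as np

def nse(sim, obs):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations."""
    return 1.0 - np.sum((sim - obs)**2) / np.sum((obs - obs.mean())**2)

def glue(rain, obs, n=5000, threshold=0.7, seed=0):
    """Sample a recession parameter, keep 'behavioral' sets (NSE > threshold),
    and return the behavioral parameters plus the median simulation."""
    rng = np.random.default_rng(seed)

    def reservoir(k):                      # toy linear-reservoir runoff model
        q, s = np.empty_like(rain), 0.0
        for j, p in enumerate(rain):
            s = s + p                      # add rainfall to storage
            q[j] = k*s                     # outflow proportional to storage
            s = s - q[j]
        return q

    ks = rng.uniform(0.01, 0.99, n)
    sims = np.array([reservoir(k) for k in ks])
    good = np.array([nse(s, obs) for s in sims]) > threshold
    return ks[good], np.median(sims[good], axis=0)
```

The quantiles of the behavioral simulations (here only the median is returned) form the uncertainty band the abstract refers to.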
Snoek, Dennis J.W.
2016-01-01
Ammonia (NH3) emission is still high, and agriculture is still the dominant contributor. In The Netherlands, NH3 emission from dairy cow houses is one of the most important sources. A great deal of research has been conducted to understand and model NH3 emission, to measure it, and to reduce it.
Directory of Open Access Journals (Sweden)
R. Schinke
2012-09-01
The analysis and management of flood risk commonly focus on surface water floods, because these are often associated with high economic losses due to damage to buildings and settlements. Rising groundwater, as a secondary effect of these floods, induces additional damage, particularly in the basements of buildings. These losses mostly remain underestimated, because they are difficult to assess, especially for the entire building stock of flood-prone urban areas. For this purpose an appropriate methodology has been developed, leading to a groundwater damage simulation model named GRUWAD. The overall methodology combines various engineering and geoinformatic methods to capture the major damage processes caused by high groundwater levels. It considers a classification of buildings by building type, synthetic depth-damage functions for groundwater inundation, and the results of a groundwater-flow model. The modular structure of the procedure allows the level of detail to be adapted. Hence, the model allows damage calculations from the local to the regional scale. Among other uses, it can prepare risk maps, support ex-ante analysis of future risks, and simulate the effects of mitigation measures. The model is therefore a versatile tool for determining urban resilience with respect to high groundwater levels.
Improved diagnostic model for estimating wind energy
Energy Technology Data Exchange (ETDEWEB)
Endlich, R.M.; Lee, J.D.
1983-03-01
Because wind data are available only at scattered locations, a quantitative method is needed to estimate the wind resource at specific sites where wind energy generation may be economically feasible. This report describes a computer model that makes such estimates. The model uses standard weather reports and terrain heights to derive wind estimates; the method of computation differs from that used previously. The performance of the current model is compared with that of the earlier version at three sites; estimates of wind energy at four new sites are also presented.
On parameter estimation in deformable models
DEFF Research Database (Denmark)
Fisker, Rune; Carstensen, Jens Michael
1998-01-01
Deformable templates have been intensively studied in image analysis through the last decade, but despite their significance the estimation of model parameters has received little attention. We present a method for supervised and unsupervised model parameter estimation using a general Bayesian form...
Directory of Open Access Journals (Sweden)
Kyu Sik Jung
Preoperative liver stiffness (LS) measurement using transient elastography (TE) is useful for predicting late recurrence after curative resection of hepatocellular carcinoma (HCC). We developed and validated a novel LS value-based predictive model for late recurrence of HCC. Patients due to undergo curative resection of HCC between August 2006 and January 2010 were prospectively enrolled, and TE was performed prior to operation per study protocol. The predictive model of late recurrence was constructed based on a multiple logistic regression model. Discrimination and calibration were used to validate the model. Among a total of 139 patients who were finally analyzed, late recurrence occurred in 44 patients, with a median follow-up of 24.5 months (range, 12.4-68.1). We developed a predictive model for late recurrence of HCC using the LS value, activity grade II-III, presence of multiple tumors, and indocyanine green retention rate at 15 min (ICG R15), which showed fairly good discrimination capability with an area under the receiver operating characteristic curve (AUROC) of 0.724 (95% confidence interval [CI], 0.632-0.816). In the validation, using a bootstrap method to assess discrimination, the AUROC remained largely unchanged between iterations, with an average AUROC of 0.722 (95% CI, 0.718-0.724). When we plotted a calibration chart for predicted and observed risk of late recurrence, the predicted risk correlated well with the observed risk, with a correlation coefficient of 0.873 (P<0.001). A simple LS value-based predictive model could thus estimate the risk of late recurrence in patients who underwent curative resection of HCC.
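The discrimination and bootstrap-validation steps described in this record can be reproduced generically with a rank-based AUROC and resampling. The risk scores below are synthetic, not the study's cohort:

```python
import numpy as np

def auroc(scores, labels):
    """Rank-based AUROC: the probability that a randomly chosen positive
    outscores a randomly chosen negative (the Mann-Whitney statistic),
    counting ties as half-wins."""
    scores = np.asarray(scores, float)
    labels = np.asarray(labels, int)
    pos, neg = scores[labels == 1], scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5*ties) / (len(pos)*len(neg))

def bootstrap_auroc(scores, labels, n_boot=200, seed=0):
    """Bootstrap the AUROC: resample cases with replacement and recompute,
    to assess the stability of discrimination across iterations."""
    rng = np.random.default_rng(seed)
    scores = np.asarray(scores, float)
    labels = np.asarray(labels, int)
    n, vals = len(scores), []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)
        if labels[idx].min() != labels[idx].max():   # need both classes
            vals.append(auroc(scores[idx], labels[idx]))
    return float(np.mean(vals)), float(np.std(vals))
```

A bootstrap mean close to the point estimate with a small spread is what the abstract means by the AUROC remaining "largely unchanged between iterations".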