WorldWideScience

Sample records for model input errors

  1. Input/output error analyzer

    Science.gov (United States)

    Vaughan, E. T.

    1977-01-01

    Program aids in equipment assessment. Independent assembly-language utility program is designed to operate under level 27 or 31 of the EXEC 8 Operating System. It scans user-selected portions of the system log file, whether located on tape or mass storage, and searches for and processes I/O error (type 6) entries.

  2. Error analysis of the quantification of hepatic perfusion using a dual-input single-compartment model

    Science.gov (United States)

    Miyazaki, Shohei; Yamazaki, Youichi; Murase, Kenya

    2008-11-01

    We performed an error analysis of the quantification of liver perfusion from dynamic contrast-enhanced computed tomography (DCE-CT) data using a dual-input single-compartment model for various disease severities, based on computer simulations. In the simulations, the time-density curves (TDCs) in the liver were generated from an actually measured arterial input function using a theoretical equation describing the kinetic behavior of the contrast agent (CA) in the liver. The rate constants for the transfer of CA from the hepatic artery to the liver (K1a), from the portal vein to the liver (K1p), and from the liver to the plasma (k2) were estimated from simulated TDCs with various plasma volumes (V0s). To investigate the effect of the shapes of input functions, the original arterial and portal-venous input functions were stretched in the time direction by factors of 2, 3 and 4 (stretching factors). The above parameters were estimated with the linear least-squares (LLSQ) and nonlinear least-squares (NLSQ) methods, and the root mean square errors (RMSEs) between the true and estimated values were calculated. Sensitivity and identifiability analyses were also performed. The RMSE of V0 was the smallest, followed by those of K1a, k2 and K1p in an increasing order. The RMSEs of K1a, K1p and k2 increased with increasing V0, while that of V0 tended to decrease. The stretching factor also affected parameter estimation in both methods. The LLSQ method estimated the above parameters faster and with smaller variations than the NLSQ method. Sensitivity analysis showed that the magnitude of the sensitivity function of V0 was the greatest, followed by those of K1a, K1p and k2 in a decreasing order, while the variance of V0 obtained from the covariance matrices was the smallest, followed by those of K1a, K1p and k2 in an increasing order. The magnitude of the sensitivity function and the variance increased and decreased, respectively, with increasing disease severity and decreased
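
    A minimal sketch of the simulate-and-fit loop described above. The compartment equation, the V0 term (taken here as V0*Ca) and the synthetic input functions are assumptions based on the abstract, not the authors' exact formulation; scipy's curve_fit stands in for the NLSQ step.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    t = np.linspace(0.0, 60.0, 121)                 # s, sampling grid
    Ca = np.exp(-((t - 15.0) / 5.0) ** 2)           # arterial input (synthetic)
    Cp = np.exp(-((t - 25.0) / 8.0) ** 2)           # portal-venous input (synthetic)

    def tdc(t, K1a, K1p, k2, V0):
        """Liver TDC: dual input convolved with exp(-k2*t), plus a V0*Ca term."""
        dt = t[1] - t[0]
        h = np.exp(-k2 * t)
        conv = lambda c: np.convolve(c, h)[: t.size] * dt
        return K1a * conv(Ca) + K1p * conv(Cp) + V0 * Ca

    true = (0.2, 0.8, 0.1, 0.05)                    # K1a, K1p, k2, V0
    y = tdc(t, *true) + np.random.default_rng(0).normal(0.0, 0.01, t.size)

    est, _ = curve_fit(tdc, t, y, p0=(0.1, 0.5, 0.05, 0.1))  # NLSQ estimates
    rmse = np.sqrt(np.mean((est - np.array(true)) ** 2))     # error vs. truth
    print(est, rmse)
    ```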

  3. Identification of a Manipulator Model Using the Input Error Method in the Mathematica Program

    Directory of Open Access Journals (Sweden)

    Leszek CEDRO

    2009-06-01

    Full Text Available The problem of parameter identification for a four-degree-of-freedom robot was solved using the Mathematica program. The identification was performed by means of specially developed differential filters [1]. Using the example of a manipulator, we analyze the capabilities of the Mathematica program that can be applied to solve problems related to the modeling, control, simulation and identification of a system [2]. The responses of the identification process for the variables and the values of the quality function are included.

  4. APPLICATION OF FRF ESTIMATOR BASED ON ERRORS-IN-VARIABLES MODEL IN MULTI-INPUT MULTI-OUTPUT VIBRATION CONTROL SYSTEM

    Institute of Scientific and Technical Information of China (English)

    GUAN Guangfeng; CONG Dacheng; HAN Junwei; LI Hongren

    2007-01-01

    The FRF estimator based on the errors-in-variables (EV) model of a multi-input multi-output (MIMO) system is presented to reduce the bias error of the FRF H1 estimator. The H1 estimator is influenced by noise in the inputs of the system and under-estimates the true FRF. The estimator based on the EV model takes into account the errors in both the inputs and outputs of the system and leads to more accurate FRF estimation. The EV-based FRF estimator is applied to waveform replication on a 6-DOF (degree-of-freedom) hydraulic vibration table. The result shows that it improves the control precision of the MIMO vibration control system.
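
    For concreteness, a sketch contrasting H1 with a per-frequency total-least-squares (Hv-style) estimate, one common errors-in-variables FRF estimator; the paper's exact EV estimator may differ in detail. The FIR plant and noise levels are made up.

    ```python
    import numpy as np
    from scipy.signal import csd, welch

    rng = np.random.default_rng(1)
    n, fs = 1 << 14, 1024.0
    x_true = rng.standard_normal(n)                  # true excitation
    b = np.array([0.2, 0.5, 0.2])                    # small FIR "plant"
    y_true = np.convolve(x_true, b, mode="same")
    x = x_true + 0.2 * rng.standard_normal(n)        # measured input (noisy)
    y = y_true + 0.2 * rng.standard_normal(n)        # measured output (noisy)

    f, Sxx = welch(x, fs, nperseg=1024)
    _, Syy = welch(y, fs, nperseg=1024)
    _, Sxy = csd(x, y, fs, nperseg=1024)

    H1 = Sxy / Sxx   # classic H1; input noise biases |H1| low
    # per-frequency total-least-squares (EV-style) estimate, assuming equal
    # noise power on input and output:
    Hv = (Syy - Sxx + np.sqrt((Sxx - Syy) ** 2 + 4 * np.abs(Sxy) ** 2)) / (2 * np.conj(Sxy))
    ```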

  5. Assessing Spatial and Attribute Errors of Input Data in Large National Datasets for use in Population Distribution Models

    Energy Technology Data Exchange (ETDEWEB)

    Patterson, Lauren A [ORNL; Urban, Marie L [ORNL; Myers, Aaron T [ORNL; Bhaduri, Budhendra L [ORNL; Bright, Eddie A [ORNL; Coleman, Phil R [ORNL

    2007-01-01

    Geospatial technologies and digital data have developed and been disseminated rapidly in conjunction with increasing computing performance and internet availability. The ability to store and transmit large datasets has encouraged the development of national datasets in geospatial format. National datasets are used by numerous agencies for analysis and modeling purposes because these datasets are standardized and are considered to be of acceptable accuracy. At Oak Ridge National Laboratory, a national population model incorporating multiple ancillary variables was developed, and one of its required inputs is a school database. This paper examines inaccuracies present within two national school datasets, TeleAtlas North America (TANA) and the National Center of Education Statistics (NCES). Schools are an important component of the population model because they serve as locations containing dense clusters of vulnerable populations. It is therefore essential to validate the quality of the school input data, which was made possible by increasing national coverage of high-resolution imagery. Schools were also chosen because a 'real-world' representation of K-12 schools for the Philadelphia School District was produced, thereby enabling 'ground-truthing' of the national datasets. Analyses found that the national datasets were neither standardized nor complete, containing 76 to 90% of existing schools. Temporal accuracy was also poor: 89% of enrollment values in the national datasets failed to match 2003 data. Spatial rectification was required for 87% of the NCES points, of which 58% of the errors were attributed to the geocoding process. Lastly, it was found that combining the two national datasets produced a more useful and accurate dataset. Acknowledgment: Prepared by Oak Ridge National Laboratory, P.O. Box 2008, Oak Ridge, Tennessee 37831-6285, managed by UT-Battelle, LLC for the U.S. Department of Energy under contract no.

  6. QUALITATIVE DATA AND ERROR MEASUREMENT IN INPUT-OUTPUT-ANALYSIS

    NARCIS (Netherlands)

    NIJKAMP, P; OOSTERHAVEN, J; OUWERSLOOT, H; RIETVELD, P

    1992-01-01

    This paper is a contribution to the rapidly emerging field of qualitative data analysis in economics. Ordinal data techniques and error measurement in input-output analysis are here combined in order to test the reliability of a low level of measurement and precision of data by means of a stochastic

  8. On Optimal Input Design and Model Selection for Communication Channels

    Energy Technology Data Exchange (ETDEWEB)

    Li, Yanyan [ORNL; Djouadi, Seddik M [ORNL; Olama, Mohammed M [ORNL

    2013-01-01

    In this paper, the optimal model (structure) selection and input design which minimize the worst case identification error for communication systems are provided. The problem is formulated using metric complexity theory in a Hilbert space setting. It is pointed out that model selection and input design can be handled independently. Kolmogorov n-width is used to characterize the representation error introduced by model selection, while Gel'fand and time n-widths are used to represent the inherent error introduced by input design. After the model is selected, an optimal input which minimizes the worst case identification error is shown to exist. In particular, it is proven that the optimal model for reducing the representation error is a Finite Impulse Response (FIR) model, and the optimal input is an impulse at the start of the observation interval. FIR models are widely popular in communication systems, such as in Orthogonal Frequency Division Multiplexing (OFDM) systems.

  9. Error model identification of inertial navigation platform based on errors-in-variables model

    Institute of Scientific and Technical Information of China (English)

    Liu Ming; Liu Yu; Su Baoku

    2009-01-01

    Because the real input acceleration cannot be obtained during the error model identification of inertial navigation platform, both the input and output data contain noises. In this case, the conventional regression model and the least squares (LS) method will result in bias. Based on the models of inertial navigation platform error and observation error, the errors-in-variables (EV) model and the total least squares (TLS) method are proposed to identify the error model of the inertial navigation platform. The estimation precision is improved and the result is better than the conventional regression model based LS method. The simulation results illustrate the effectiveness of the proposed method.
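
    A generic sketch of why the EV treatment matters: with noise on the regressors, ordinary least squares is biased toward zero while total least squares (via the SVD of the augmented data matrix) is not. The toy regression below is illustrative, not the platform's actual error model.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    theta = np.array([1.5, -0.7])                      # true parameters
    X = rng.standard_normal((500, 2))
    y = X @ theta
    Xn = X + 0.3 * rng.standard_normal(X.shape)        # noisy "inputs"
    yn = y + 0.3 * rng.standard_normal(y.shape)        # noisy "outputs"

    theta_ls = np.linalg.lstsq(Xn, yn, rcond=None)[0]  # biased toward zero

    # TLS: smallest right singular vector of the augmented matrix [Xn | yn]
    Z = np.column_stack([Xn, yn])
    v = np.linalg.svd(Z)[2][-1]                        # last row of Vh
    theta_tls = -v[:-1] / v[-1]
    print(theta_ls, theta_tls)                         # TLS lands closer to theta
    ```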

  10. Testing accelerometer rectification error caused by multidimensional composite inputs with double turntable centrifuge.

    Science.gov (United States)

    Guan, W; Meng, X F; Dong, X M

    2014-12-01

    Rectification error is a critical characteristic of inertial accelerometers. Accelerometers working in operational situations are stimulated by composite inputs, including constant acceleration and vibration, from multiple directions. However, traditional methods for evaluating rectification error use only one-dimensional vibration. In this paper, a double turntable centrifuge (DTC) was utilized to produce the constant acceleration and vibration simultaneously, and we tested the rectification error due to the composite accelerations. First, we deduced the expression of the rectification error from the output of the DTC and a static model of the single-axis pendulous accelerometer under test. Theoretical investigation and analysis were carried out in accordance with the rectification error model. A detailed experimental procedure and testing results are then described. We measured the rectification error with various constant accelerations at different frequencies and amplitudes of the vibration. The experimental results showed the distinct characteristics of the rectification error caused by the composite accelerations. The linear relation between the constant acceleration and the rectification error was proved. The experimental procedure and results presented here can be referenced for investigating the characteristics of accelerometers under multiple inputs.
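
    A toy numeric illustration of the effect being tested, assuming the usual static model out = K0 + K1*a + K2*a^2 for a pendulous accelerometer; the coefficients and the composite input are made up. The quadratic term turns zero-mean vibration into a DC shift of about K2*A^2/2.

    ```python
    import numpy as np

    K0, K1, K2 = 0.0, 1.0, 5e-4              # bias, scale factor, 2nd-order term
    a0, A, f = 10.0, 2.0, 50.0                # constant accel (g), vib amp (g), Hz
    t = np.linspace(0.0, 1.0, 100_000)
    a = a0 + A * np.sin(2 * np.pi * f * t)    # composite input from the centrifuge

    out = K0 + K1 * a + K2 * a ** 2
    rect_err = out.mean() - (K0 + K1 * a0 + K2 * a0 ** 2)
    print(rect_err, K2 * A ** 2 / 2)          # measured shift vs. K2*A^2/2 prediction
    ```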

  11. Analytical delay models for RLC interconnects under ramp input

    Institute of Scientific and Technical Information of China (English)

    REN Yinglei; MAO Junfa; LI Xiaochun

    2007-01-01

    Analytical delay models for Resistance Inductance Capacitance (RLC) interconnects with ramp input are presented for different situations, which include the overdamped, underdamped and critical response cases. The delay-estimation errors of the analytical models proposed in this paper are less than 3% in comparison with the SPICE-computed delay. These models are meaningful for the delay analysis of actual circuits, in which the input signal is a ramp rather than an ideal step.
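
    A numerical cross-check of the kind of delay these closed-form models approximate: simulate one series-RLC section driven by a saturated ramp and measure the 50% delay. Element values and rise time below are arbitrary.

    ```python
    import numpy as np
    from scipy.signal import lti, lsim

    R, L, C = 50.0, 1e-9, 1e-12                 # ohms, henries, farads
    sys = lti([1.0], [L * C, R * C, 1.0])       # V_out/V_in of one RLC section

    t = np.linspace(0.0, 5e-9, 5001)
    tr = 1e-10                                  # ramp rise time (s)
    u = np.clip(t / tr, 0.0, 1.0)               # saturated ramp input
    _, y, _ = lsim(sys, u, t)

    t50 = t[np.argmax(y >= 0.5)] - t[np.argmax(u >= 0.5)]  # 50%-to-50% delay
    print(t50)
    ```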

  12. Minimum Symbol Error Rate Detection in Single-Input Multiple-Output Channels with Markov Noise

    DEFF Research Database (Denmark)

    Christensen, Lars P.B.

    2005-01-01

    Minimum symbol error rate detection in Single-Input Multiple-Output (SIMO) channels with Markov noise is presented. The special case of zero-mean Gauss-Markov noise is examined closer as it only requires knowledge of the second-order moments. In this special case, it is shown that optimal detection can be achieved by a Multiple-Input Multiple-Output (MIMO) whitening filter followed by a traditional BCJR algorithm. The Gauss-Markov noise model provides a reasonable approximation for co-channel interference, making it an interesting single-user detector for many multiuser communication systems…

  13. Diagnosis of the Computer-Controlled Milling Machine, Definition of the Working Errors and Input Corrections on the Basis of Mathematical Model

    Science.gov (United States)

    Starikov, A. I.; Nekrasov, R. Yu; Teploukhov, O. J.; Soloviev, I. V.; Narikov, K. A.

    2016-10-01

    Machines and equipment improve constructively as science and technology advance, and requirements for quality and longevity rise accordingly. That is, the requirements for surface quality and manufacturing precision of oil and gas equipment parts are constantly increasing. Production of oil and gas engineering products on modern machine tools with computer numerical control is a complex synthesis of the mechanical and electrical parts of the equipment as well as of the processing procedure. The mechanical part of a machine wears during operation, and mathematical errors accumulate in the electrical part. Thus, the above-mentioned shortcomings of any of these parts of metalworking equipment affect the manufacturing process as a whole and, as a result, lead to defects.

  14. Two-acceleration-error-input proportional-integral-derivative control for vehicle active suspension

    Directory of Open Access Journals (Sweden)

    Yucai Zhou

    2014-06-01

    Full Text Available The objective of this work is to present a new two-acceleration-error-input (TAEI) proportional-integral-derivative (PID) control strategy for active suspension. The novel strategy lies in the use of sprung mass acceleration and unsprung mass acceleration signals simultaneously, which are easily measured and obtained in engineering practice. Using a quarter-car model as an example, a TAEI PID controller for active suspension is established and its control parameters are optimized based on the genetic algorithm (GA), in which the fitness function is a suspension quadratic performance index. Comparative simulation shows that the proposed TAEI PID controller can achieve better comprehensive performance, stability, and robustness than a conventional single-acceleration-error-input (SAEI) PID controller for the active suspension.

  15. Decision aids for multiple-decision disease management as affected by weather input errors.

    Science.gov (United States)

    Pfender, W F; Gent, D H; Mahaffee, W F; Coop, L B; Fox, A D

    2011-06-01

    Many disease management decision support systems (DSSs) rely, exclusively or in part, on weather inputs to calculate an indicator for disease hazard. Error in the weather inputs, typically due to forecasting, interpolation, or estimation from off-site sources, may affect model calculations and management decision recommendations. The extent to which errors in weather inputs affect the quality of the final management outcome depends on a number of aspects of the disease management context, including whether management consists of a single dichotomous decision, or of a multi-decision process extending over the cropping season(s). Decision aids for multi-decision disease management typically are based on simple or complex algorithms of weather data which may be accumulated over several days or weeks. It is difficult to quantify accuracy of multi-decision DSSs due to temporally overlapping disease events, existence of more than one solution to optimizing the outcome, opportunities to take later recourse to modify earlier decisions, and the ongoing, complex decision process in which the DSS is only one component. One approach to assessing importance of weather input errors is to conduct an error analysis in which the DSS outcome from high-quality weather data is compared with that from weather data with various levels of bias and/or variance from the original data. We illustrate this analytical approach for two types of DSS, an infection risk index for hop powdery mildew and a simulation model for grass stem rust. Further exploration of analysis methods is needed to address problems associated with assessing uncertainty in multi-decision DSSs.
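
    A schematic version of that error analysis in Python: perturb the weather record with bias and/or variance, rerun the hazard indicator, and score how often the recommended decision matches the one from high-quality data. The risk index and threshold below are placeholders, not the hop powdery mildew or stem rust models.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    T = 18.0 + 6.0 * np.sin(np.linspace(0.0, 20.0 * np.pi, 2000))  # hourly temps

    def risk_index(temp):
        # toy indicator: hours inside a pathogen-friendly window, summed weekly
        return np.convolve((temp > 18.0) & (temp < 25.0), np.ones(168), "same")

    base_decision = risk_index(T) > 80.0            # reference recommendations
    for bias in (-1.0, 0.0, 1.0):
        for sd in (0.0, 0.5, 2.0):
            Tn = T + bias + rng.normal(0.0, sd, T.size)
            agree = np.mean((risk_index(Tn) > 80.0) == base_decision)
            print(f"bias={bias:+.1f} sd={sd:.1f} decision agreement={agree:.2f}")
    ```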

  16. "Ser" and "Estar": Corrective Input to Children's Errors of the Spanish Copula Verbs

    Science.gov (United States)

    Holtheuer, Carolina; Rendle-Short, Johanna

    2013-01-01

    Evidence for the role of corrective input as a facilitator of language acquisition is inconclusive. Studies show links between corrective input and grammatical use of some, but not other, language structures. The present study examined relationships between corrective parental input and children's errors in the acquisition of the Spanish copula…

  18. Error Resilient Video Compression Using Behavior Models

    Directory of Open Access Journals (Sweden)

    Jacco R. Taal

    2004-03-01

    Full Text Available Wireless and Internet video applications are inherently subjected to bit errors and packet errors, respectively. This is especially so if constraints on the end-to-end compression and transmission latencies are imposed. Therefore, it is necessary to develop methods to optimize the video compression parameters and the rate allocation of these applications that take into account residual channel bit errors. In this paper, we study the behavior of a predictive (interframe) video encoder and model the encoder's behavior using only the statistics of the original input data and of the underlying channel prone to bit errors. The resulting data-driven behavior models are then used to carry out group-of-pictures partitioning and to control the rate of the video encoder in such a way that the overall quality of the decoded video with compression and channel errors is optimized.

  19. The effect of uncertainty and systematic errors in hydrological modelling

    Science.gov (United States)

    Steinsland, I.; Engeland, K.; Johansen, S. S.; Øverleir-Petersen, A.; Kolberg, S. A.

    2014-12-01

    The aims of hydrological model identification and calibration are to find the best possible set of process parametrizations and parameter values that transform inputs (e.g. precipitation and temperature) to outputs (e.g. streamflow). These models enable us to make predictions of streamflow. Several sources of uncertainty have the potential to hamper a robust model calibration and identification. In order to grasp the interaction between model parameters, inputs and streamflow, it is important to account for both systematic and random errors in inputs (e.g. precipitation and temperature) and streamflow. By random errors we mean errors that are independent from time step to time step, whereas by systematic errors we mean errors that persist for a longer period. Both random and systematic errors are important in the observation and interpolation of precipitation and temperature inputs. Important random errors come from the measurements themselves and from the network of gauges. Important systematic errors originate from the under-catch in precipitation gauges and from unknown spatial trends that are approximated in the interpolation. For streamflow observations, the water level recordings might give random errors, whereas the rating curve contributes mainly a systematic error. In this study we want to answer the question: "What is the effect of random and systematic errors in inputs and observed streamflow on estimated model parameters and streamflow predictions?" To answer it, we systematically test the effect of including uncertainties in inputs and streamflow during model calibration and simulation in the distributed HBV model, operating on daily time steps, for the Osali catchment in Norway. The case study is based on observations whose uncertainty is carefully quantified; increased uncertainties and systematic errors are introduced realistically, for example by removing a precipitation gauge from the network. We find that the systematic errors in

  20. Error Models of the Analog to Digital Converters

    Science.gov (United States)

    Michaeli, Linus; Šaliga, Ján

    2014-04-01

    Error models of the Analog to Digital Converters describe metrological properties of the signal conversion from analog to digital domain in a concise form using few dominant error parameters. Knowledge of the error models allows the end user to provide fast testing in the crucial points of the full input signal range and to use identified error models for post correction in the digital domain. The imperfections of the internal ADC structure determine the error characteristics represented by the nonlinearities as a function of the output code. Progress in the microelectronics and missing information about circuital details together with the lack of knowledge about interfering effects caused by ADC installation prefers another modeling approach based on the input-output behavioral characterization by the input-output error box. Internal links in the ADC structure cause that the input-output error function could be described in a concise form by suitable function. Modeled functional parameters allow determining the integral error parameters of ADC. Paper is a survey of error models starting from the structural models for the most common architectures and their linkage with the behavioral models represented by the simple look up table or the functional description of nonlinear errors for the output codes.
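
    A minimal sketch of the post-correction use of an identified error model: the per-code error (here a smooth, invented INL curve) is stored in a look-up table and subtracted in the digital domain.

    ```python
    import numpy as np

    bits = 10
    codes = np.arange(2 ** bits)
    inl = 0.8 * np.sin(2.0 * np.pi * codes / 2 ** bits)  # assumed identified INL (LSB)
    lut = codes - inl                                    # corrected value per code

    raw = np.array([100, 511, 900])                      # raw converter output codes
    print(lut[raw])                                      # post-corrected samples
    ```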

  2. Dominant modes via model error

    Science.gov (United States)

    Yousuff, A.; Breida, M.

    1992-01-01

    Obtaining a reduced model of a stable mechanical system with proportional damping is considered. Such systems can be conveniently represented in modal coordinates. Two popular schemes, the modal cost analysis and the balancing method, offer simple means of identifying dominant modes for retention in the reduced model. The dominance is measured via the modal costs in the case of modal cost analysis and via the singular values of the Gramian-product in the case of balancing. Though these measures do not exactly reflect the more appropriate model error, which is the H2 norm of the output-error between the full and the reduced models, they do lead to simple computations. Normally, the model error is computed after the reduced model is obtained, since it is believed that, in general, the model error cannot be easily computed a priori. The authors point out that the model error can also be calculated a priori, just as easily as the above measures. Hence, the model error itself can be used to determine the dominant modes. Moreover, the simplicity of the computations does not presume any special properties of the system, such as small damping, orthogonal symmetry, etc.
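
    A sketch of computing the model error a priori for a modal truncation: because a modal (block-diagonal) realization makes G - G_r equal to the truncated subsystem, its H2 norm follows from a single Lyapunov solve. The two-mode system below is made up.

    ```python
    import numpy as np
    from scipy.linalg import block_diag, solve_continuous_lyapunov

    def h2_norm(A, B, C):
        # ||G||_2^2 = trace(C P C^T), where A P + P A^T + B B^T = 0
        P = solve_continuous_lyapunov(A, -B @ B.T)
        return float(np.sqrt(np.trace(C @ P @ C.T)))

    def mode(w, z):
        # one lightly damped second-order modal block
        return np.array([[0.0, 1.0], [-w * w, -2.0 * z * w]])

    A = block_diag(mode(1.0, 0.02), mode(3.0, 0.02))
    B = np.array([[0.0], [1.0], [0.0], [0.5]])
    C = np.array([[1.0, 0.0, 0.8, 0.0]])

    # a priori H2 error of dropping mode 2 = H2 norm of the truncated part
    print(h2_norm(A[2:, 2:], B[2:], C[:, 2:]))
    ```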

  3. Impact of channel estimation error on channel capacity of multiple input multiple output system

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    In order to investigate the impact of channel estimation error on channel capacity of multiple input multiple output (MIMO) system, a novel method is proposed to explore the channel capacity in correlated Rayleigh fading environment. A system model is constructed based on the channel estimation error at receiver side. Using the properties of Wishart distribution, the lower bound of the channel capacity is derived when the MIMO channel is of full rank. Then a method is proposed to select the optimum set of transmit antennas based on the lower bound of the mean channel capacity. The novel method can be easily implemented with low computational complexity. The simulation results show that the channel capacity of MIMO system is sensitive to channel estimation error, and is maximized when the signal-to-noise ratio increases to a certain point. Proper selection of transmit antennas can increase the channel capacity of MIMO system by about 1 bit/s in a flat fading environment with deficient rank of channel matrix.
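
    A Monte-Carlo sketch of a capacity lower bound of this type, using the common device of treating the channel estimation error (variance sig_e2) as extra Gaussian noise that shrinks the effective SNR; the exact bound and antenna-selection step in the paper may differ.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    nt = nr = 4
    snr, sig_e2 = 10.0, 0.05                 # linear SNR, estimation-error variance
    eff = snr * (1.0 - sig_e2) / (1.0 + snr * sig_e2)   # effective SNR

    cap, trials = 0.0, 2000
    for _ in range(trials):
        H = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
        cap += np.log2(np.linalg.det(np.eye(nr) + (eff / nt) * H @ H.conj().T).real)
    print(cap / trials, "bit/s/Hz (lower bound)")
    ```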

  4. Measurement Error Models in Astronomy

    CERN Document Server

    Kelly, Brandon C

    2011-01-01

    I discuss the effects of measurement error on regression and density estimation. I review the statistical methods that have been developed to correct for measurement error that are most popular in astronomical data analysis, discussing their advantages and disadvantages. I describe functional models for accounting for measurement error in regression, with emphasis on the methods of moments approach and the modified loss function approach. I then describe structural models for accounting for measurement error in regression and density estimation, with emphasis on maximum-likelihood and Bayesian methods. As an example of a Bayesian application, I analyze an astronomical data set subject to large measurement errors and a non-linear dependence between the response and covariate. I conclude with some directions for future research.
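
    A compact demonstration of the effect and the method-of-moments fix mentioned above: the naive regression slope is attenuated by the reliability ratio, and dividing out the known error variance restores it. Values are synthetic.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n, sig_u2 = 5000, 0.5
    x = rng.normal(0.0, 1.0, n)                     # true covariate
    w = x + rng.normal(0.0, np.sqrt(sig_u2), n)     # covariate measured with error
    y = 2.0 * x + rng.normal(0.0, 0.3, n)           # response

    S = np.cov(w, y)
    beta_naive = S[0, 1] / S[0, 0]                  # attenuated toward zero (~1.33)
    beta_mom = S[0, 1] / (S[0, 0] - sig_u2)         # method-of-moments fix (~2.0)
    print(beta_naive, beta_mom)
    ```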

  5. Auxiliary model identification methods. Part B: Input nonlinear output-error systems

    Institute of Scientific and Technical Information of China (English)

    Ding Feng; Chen Huibo

    2016-01-01

    For input nonlinear output-error systems with known bases, this paper presents the over-parameterization model based auxiliary model (AM) recursive identification methods and AM hierarchical identification methods; the key term separation based AM recursive identification methods and the key term separation based AM two-stage and three-stage recursive identification methods; and the bilinear-in-parameter model decomposition based AM stochastic gradient and AM recursive least squares identification methods. Finally, the computational efficiency and the computational steps of several typical identification algorithms are discussed. The convergence of the proposed algorithms needs further study.

  6. Identification of multiple inputs single output errors-in-variables system using cumulant

    Institute of Scientific and Technical Information of China (English)

    Haihui Long; Jiankang Zhao

    2014-01-01

    A higher-order cumulant-based weighted least square (HOCWLS) and a higher-order cumulant-based iterative least square (HOCILS) are derived for multiple inputs single output (MISO) errors-in-variables (EIV) systems from noisy input/output data. Whether the noises of the input/output of the system are white or colored, the proposed algorithms can be insensitive to these noises and yield unbiased estimates. To realize adaptive parameter estimates, a higher-order cumulant-based recursive least square (HOCRLS) method is also studied. Convergence analysis of the HOCRLS is conducted by using the stochastic process theory and the stochastic martingale theory. It indicates that the parameter estimation error of HOCRLS consistently converges to zero under a generalized persistent excitation condition. The usefulness of the proposed algorithms is assessed through numerical simulations.

  7. Treatments of Precipitation Inputs to Hydrologic Models

    Science.gov (United States)

    Hydrological models are used to assess many water resources problems from agricultural use and water quality to engineering issues. The success of these models are dependent on correct parameterization; the most sensitive being the rainfall input time series. These records can come from land-based ...

  8. Correction of an input function for errors introduced with automated blood sampling

    Energy Technology Data Exchange (ETDEWEB)

    Schlyer, D.J.; Dewey, S.L. [Brookhaven National Lab., Upton, NY (United States)

    1994-05-01

    Accurate kinetic modeling of PET data requires a precise arterial plasma input function. The use of automated blood sampling machines has greatly improved the accuracy, but errors can be introduced by the dispersion of the radiotracer in the sampling tubing. This dispersion results from three effects. The first is the spreading of the radiotracer in the tube due to mass transfer. The second is due to the mechanical action of the peristaltic pump and can be determined experimentally from the width of a step function. The third is the adsorption of the radiotracer on the walls of the tubing during transport through the tube. This is a more insidious effect, since the amount recovered from the end of the tube can be significantly different from that introduced into the tubing. We have measured the simple mass transport using [{sup 18}F]fluoride in water, which we have shown to be quantitatively recovered with no interaction with the tubing walls. We have also carried out experiments with several radiotracers including [{sup 18}F]Haloperidol, [{sup 11}C]L-deprenyl, [{sup 18}F]N-methylspiroperidol ([{sup 18}F]NMS) and [{sup 11}C]buprenorphine. In all cases there was some retention of the radiotracer by untreated silicone tubing. The amount retained in the tubing ranged from 6% for L-deprenyl to 30% for NMS. The retention of the radiotracer was essentially eliminated after pretreatment with the relevant unlabeled compound. For example, less than 2% of the [{sup 18}F]NMS was retained in tubing treated with unlabelled NMS. Similar results were obtained with baboon plasma, although the amount retained in the untreated tubing was less in all cases. From these results it is possible to apply a mathematical correction to the measured input function to account for mechanical dispersion, and to apply a chemical passivation to the tubing to reduce the dispersion due to adsorption of the radiotracer on the tubing walls.
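
    A sketch of the mathematical correction step, assuming a single-exponential dispersion kernel d(t) = (1/tau) exp(-t/tau), a common choice whose tau would come from the measured step-response width as described above. Under that model the undispersed curve is recovered as c(t) + tau * dc/dt.

    ```python
    import numpy as np

    tau, dt = 5.0, 0.5                               # s; assumed dispersion constant
    t = np.arange(0.0, 120.0, dt)
    c_true = t ** 2 * np.exp(-t / 8.0)               # synthetic input function
    kern = np.exp(-t / tau) / tau                    # dispersion kernel
    c_meas = np.convolve(c_true, kern)[: t.size] * dt

    c_corr = c_meas + tau * np.gradient(c_meas, dt)  # recovered input function
    print(np.abs(c_corr - c_true).max())             # small residual error
    ```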

  9. Error propagation in energetic carrying capacity models

    Science.gov (United States)

    Pearse, Aaron T.; Stafford, Joshua D.

    2014-01-01

    Conservation objectives derived from carrying capacity models have been used to inform management of landscapes for wildlife populations. Energetic carrying capacity models are particularly useful in conservation planning for wildlife; these models use estimates of food abundance and energetic requirements of wildlife to target conservation actions. We provide a general method for incorporating a foraging threshold (i.e., the density of food at which foraging becomes unprofitable) when estimating food availability with energetic carrying capacity models. We use a hypothetical example to describe how past methods for adjustment of foraging thresholds biased results of energetic carrying capacity models in certain instances. Adjusting foraging thresholds at the patch level of the species of interest provides results consistent with ecological foraging theory. Two case studies suggest variation in bias that, in certain instances, created large errors in conservation objectives and may have led to inefficient allocation of limited resources. Our results also illustrate how small errors or biases in input parameters, when extrapolated to large spatial extents, propagate errors in conservation planning and can have negative implications for target populations.
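
    A minimal sketch of applying the foraging threshold at the patch level when computing energetic carrying capacity; all numbers are illustrative.

    ```python
    import numpy as np

    food = np.array([120.0, 40.0, 75.0, 10.0])   # food density per patch (kg/ha)
    area = np.array([10.0, 25.0, 5.0, 60.0])     # patch area (ha)
    threshold = 50.0                             # kg/ha; foraging unprofitable below
    energy_per_kg = 3.5                          # usable energy density (MJ/kg)
    der = 1.2                                    # daily energy requirement (MJ/day)

    available = np.maximum(food - threshold, 0.0) * area   # kg per patch
    use_days = (available * energy_per_kg).sum() / der     # carrying capacity
    print(use_days)
    ```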

  10. Sensitivity Analysis of the ALMANAC Model's Input Variables

    Institute of Scientific and Technical Information of China (English)

    XIE Yun; James R.Kiniry; Jimmy R.Williams; CHEN You-min; LIN Er-da

    2002-01-01

    Crop models often require extensive input data sets to realistically simulate crop growth. Development of such input data sets can be difficult for some model users. The objective of this study was to evaluate the importance of variables in input data sets for crop modeling. Based on published hybrid performance trials in eight Texas counties, we developed standard data sets of 10-year simulations of maize and sorghum for these eight counties with the ALMANAC (Agricultural Land Management Alternatives with Numerical Assessment Criteria) model. The simulation results were close to the measured county yields, with relative errors of only 2.6% for maize and -0.6% for sorghum. We then analyzed the sensitivity of grain yield to solar radiation, rainfall, soil depth, soil plant available water, and runoff curve number, comparing simulated yields to those with the original, standard data sets. Runoff curve number changes had the greatest impact on simulated maize and sorghum yields for all the counties. The next most critical input was rainfall, and then solar radiation for both maize and sorghum, especially for the dryland condition. For irrigated sorghum, solar radiation was the second most critical input instead of rainfall. The degree of sensitivity of yield to all variables was larger for maize than for sorghum, except for solar radiation. Many models use a USDA curve number approach to represent soil water redistribution, so it will be important to have accurate curve numbers, rainfall, and soil depth to realistically simulate yields.
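
    The perturbation logic of such a sensitivity analysis, in schematic form: vary one input at a time around the standard data set and record the relative yield change. The yield function below is an invented stand-in; an actual study would run ALMANAC.

    ```python
    import numpy as np

    def simulate_yield(rain, srad, soil_depth, paw, curve_number):
        # placeholder response surface -- NOT the ALMANAC model
        runoff_frac = (curve_number - 60.0) * 0.02
        water = rain * (1.0 - runoff_frac) + 0.3 * paw * soil_depth
        return 0.05 * water * np.log1p(srad)

    base = dict(rain=600.0, srad=18.0, soil_depth=1.2, paw=150.0, curve_number=78.0)
    y0 = simulate_yield(**base)
    for name in base:
        bumped = dict(base, **{name: base[name] * 1.10})
        print(name, (simulate_yield(**bumped) - y0) / y0)   # +10% sensitivity
    ```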

  11. Error rate information in attention allocation pilot models

    Science.gov (United States)

    Faulkner, W. H.; Onstott, E. D.

    1977-01-01

    The Northrop urgency decision pilot model was used in a command tracking task to compare the optimized performance of multiaxis attention allocation pilot models whose urgency functions were (1) based on tracking error alone, and (2) based on both tracking error and error rate. A matrix of system dynamics and command inputs was employed, to create both symmetric and asymmetric two axis compensatory tracking tasks. All tasks were single loop on each axis. Analysis showed that a model that allocates control attention through nonlinear urgency functions using only error information could not achieve performance of the full model whose attention shifting algorithm included both error and error rate terms. Subsequent to this analysis, tracking performance predictions for the full model were verified by piloted flight simulation. Complete model and simulation data are presented.

  12. Model reduction of nonlinear systems subject to input disturbances

    KAUST Repository

    Ndoye, Ibrahima

    2017-07-10

    The method of convex optimization is used as a tool for model reduction of a class of nonlinear systems in the presence of disturbances. It is shown that under some conditions the nonlinear disturbed system can be approximated by a reduced order nonlinear system with similar disturbance-output properties to the original plant. The proposed model reduction strategy preserves the nonlinearity and the input disturbance nature of the model. It guarantees a sufficiently small error between the outputs of the original and the reduced-order systems, and also maintains the properties of input-to-state stability. The matrices of the reduced order system are given in terms of a set of linear matrix inequalities (LMIs). The paper concludes with a demonstration of the proposed approach on model reduction of a nonlinear electronic circuit with additive disturbances.

  13. Error Propagation in a System Model

    Science.gov (United States)

    Schloegel, Kirk (Inventor); Bhatt, Devesh (Inventor); Oglesby, David V. (Inventor); Madl, Gabor (Inventor)

    2015-01-01

    Embodiments of the present subject matter can enable the analysis of signal value errors for system models. In an example, signal value errors can be propagated through the functional blocks of a system model to analyze possible effects as the signal value errors impact incident functional blocks. This propagation of the errors can be applicable to many models of computation including avionics models, synchronous data flow, and Kahn process networks.

  14. Model based optimization of EMC input filters

    Energy Technology Data Exchange (ETDEWEB)

    Raggl, K; Kolar, J. W. [Swiss Federal Institute of Technology, Power Electronic Systems Laboratory, Zuerich (Switzerland); Nussbaumer, T. [Levitronix GmbH, Zuerich (Switzerland)

    2008-07-01

    Input filters of power converters for compliance with regulatory electromagnetic compatibility (EMC) standards are often over-dimensioned in practice due to a non-optimal selection of number of filter stages and/or the lack of solid volumetric models of the inductor cores. This paper presents a systematic filter design approach based on a specific filter attenuation requirement and volumetric component parameters. It is shown that a minimal volume can be found for a certain optimal number of filter stages for both the differential mode (DM) and common mode (CM) filter. The considerations are carried out exemplarily for an EMC input filter of a single phase power converter for the power levels of 100 W, 300 W, and 500 W. (author)
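
    A toy version of the stage-count trade-off: each LC stage adds roughly 40 dB/decade, so more stages permit a higher cutoff and smaller reactive components, until the fixed per-stage overhead dominates. The volumetric model below is invented purely for illustration.

    ```python
    import numpy as np

    att_req_db, f_req = 80.0, 150e3                    # required attenuation at f_req
    for n in range(1, 6):
        fc = f_req / 10 ** (att_req_db / (40.0 * n))   # cutoff that meets the spec
        lc = 1.0 / (2.0 * np.pi * fc) ** 2             # L*C product per stage
        vol = n * (5e7 * lc + 0.1)                     # made-up parts + overhead cost
        print(n, round(fc), round(vol, 3))             # minimum at a moderate n
    ```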

  15. Approximate input physics for stellar modelling

    CERN Document Server

    Pols, O R; Tout, C A; Eggleton, P P; Han, Z

    1995-01-01

    We present a simple and efficient, yet reasonably accurate, equation of state, which at the moderately low temperatures and high densities found in the interiors of stars less massive than the Sun is substantially more accurate than its predecessor by Eggleton, Faulkner & Flannery. Along with the most recently available values in tabular form of opacities, neutrino loss rates, and nuclear reaction rates for a selection of the most important reactions, this provides a convenient package of input physics for stellar modelling. We briefly discuss a few results obtained with the updated stellar evolution code.

  16. Computer input devices: neutral party or source of significant error in manual lesion segmentation?

    Science.gov (United States)

    Chen, James Y; Seagull, F Jacob; Nagy, Paul; Lakhani, Paras; Melhem, Elias R; Siegel, Eliot L; Safdar, Nabile M

    2011-02-01

    Lesion segmentation involves outlining the contour of an abnormality on an image to distinguish boundaries between normal and abnormal tissue, and is essential to track malignant and benign disease in medical imaging for clinical, research, and treatment purposes. A laser optical mouse and a graphics tablet were used by radiologists to segment 12 simulated reference lesions per subject (for each input device, two sets of six lesions composed of three morphologies in two sizes each). Time for segmentation was recorded. Subjects completed an opinion survey following segmentation. Error in contour segmentation was calculated using root mean square error. Error in area of segmentation was calculated relative to the reference lesion. Eleven radiologists segmented a total of 132 simulated lesions. Overall error in contour segmentation was significantly less with the graphics tablet than with the mouse. Error in area of segmentation was not significantly different between the tablet and the mouse (P = 0.62). Time for segmentation was less with the tablet than the mouse (P = 0.011). All subjects preferred the graphics tablet for future segmentation (P = 0.011) and felt subjectively that the tablet was faster, easier, and more accurate (P = 0.0005). For purposes in which accuracy in the contour of lesion segmentation is of greater importance, the graphics tablet is superior to the mouse in accuracy, with a small speed benefit. For purposes in which accuracy of the area of lesion segmentation is of greater importance, the graphics tablet and mouse are equally accurate.

  17. Model error estimation in ensemble data assimilation

    Directory of Open Access Journals (Sweden)

    S. Gillijns

    2007-01-01

    Full Text Available A new methodology is proposed to estimate and account for systematic model error in linear filtering as well as in nonlinear ensemble based filtering. Our results extend the work of Dee and Todling (2000) on constant bias errors to time-varying model errors. In contrast to existing methodologies, the new filter can also deal with the case where no dynamical model for the systematic error is available. In the latter case, the applicability is limited by a matrix rank condition which has to be satisfied in order for the filter to exist. The performance of the filter developed in this paper is limited by the availability and the accuracy of observations and by the variance of the stochastic model error component. The effect of these aspects on the estimation accuracy is investigated in several numerical experiments using the Lorenz (1996) model. Experimental results indicate that the availability of a dynamical model for the systematic error significantly reduces the variance of the model error estimates, but has only minor effect on the estimates of the system state. The filter is able to estimate additive model error of any type, provided that the rank condition is satisfied and that the stochastic errors and measurement errors are significantly smaller than the systematic errors. The results of this study are encouraging. However, it remains to be seen how the filter performs in more realistic applications.
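
    A sketch of the state-augmentation idea underlying such filters: append the systematic model error b to the state vector and let an ordinary Kalman filter estimate it alongside the state. The scalar random-walk system is made up; the paper's filter generalizes this to time-varying errors and ensemble filtering.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    a, b_true, q, r = 0.95, 0.4, 0.01, 0.05
    F = np.array([[a, 1.0], [0.0, 1.0]])   # x_{k+1} = a*x_k + b + noise; b persists
    H = np.array([[1.0, 0.0]])             # only x is observed
    Q = np.diag([q, 1e-6])                 # tiny noise lets b drift slowly

    x, xa, P = 0.0, np.zeros(2), np.eye(2)
    for _ in range(500):
        x = a * x + b_true + rng.normal(0.0, np.sqrt(q))   # truth with model bias
        z = x + rng.normal(0.0, np.sqrt(r))
        xa, P = F @ xa, F @ P @ F.T + Q                    # predict
        S = H @ P @ H.T + r
        K = P @ H.T / S
        xa = xa + (K * (z - H @ xa)).ravel()               # update
        P = (np.eye(2) - K @ H) @ P
    print(xa)   # second component approaches the systematic error b_true
    ```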

  18. Effects of input uncertainty on cross-scale crop modeling

    Science.gov (United States)

    Waha, Katharina; Huth, Neil; Carberry, Peter

    2014-05-01

    The quality of data on climate, soils and agricultural management in the tropics is in general low or data is scarce leading to uncertainty in process-based modeling of cropping systems. Process-based crop models are common tools for simulating crop yields and crop production in climate change impact studies, studies on mitigation and adaptation options or food security studies. Crop modelers are concerned about input data accuracy as this, together with an adequate representation of plant physiology processes and choice of model parameters, are the key factors for a reliable simulation. For example, assuming an error in measurements of air temperature, radiation and precipitation of ± 0.2°C, ± 2 % and ± 3 % respectively, Fodor & Kovacs (2005) estimate that this translates into an uncertainty of 5-7 % in yield and biomass simulations. In our study we seek to answer the following questions: (1) are there important uncertainties in the spatial variability of simulated crop yields on the grid-cell level displayed on maps, (2) are there important uncertainties in the temporal variability of simulated crop yields on the aggregated, national level displayed in time-series, and (3) how does the accuracy of different soil, climate and management information influence the simulated crop yields in two crop models designed for use at different spatial scales? The study will help to determine whether more detailed information improves the simulations and to advise model users on the uncertainty related to input data. We analyse the performance of the point-scale crop model APSIM (Keating et al., 2003) and the global scale crop model LPJmL (Bondeau et al., 2007) with different climate information (monthly and daily) and soil conditions (global soil map and African soil map) under different agricultural management (uniform and variable sowing dates) for the low-input maize-growing areas in Burkina Faso/West Africa. We test the models' response to different levels of input

  19. Error handling strategies in multiphase inverse modeling

    Energy Technology Data Exchange (ETDEWEB)

    Finsterle, S.; Zhang, Y.

    2010-12-01

    Parameter estimation by inverse modeling involves the repeated evaluation of a function of residuals. These residuals represent both errors in the model and errors in the data. In practical applications of inverse modeling of multiphase flow and transport, the error structure of the final residuals often significantly deviates from the statistical assumptions that underlie standard maximum likelihood estimation using the least-squares method. Large random or systematic errors are likely to lead to convergence problems, biased parameter estimates, misleading uncertainty measures, or poor predictive capabilities of the calibrated model. The multiphase inverse modeling code iTOUGH2 supports strategies that identify and mitigate the impact of systematic or non-normal error structures. We discuss these approaches and provide an overview of the error handling features implemented in iTOUGH2.

  20. Error Estimates of Theoretical Models: a Guide

    CERN Document Server

    Dobaczewski, J; Reinhard, P-G

    2014-01-01

    This guide offers suggestions/insights on uncertainty quantification of nuclear structure models. We discuss a simple approach to statistical error estimates, strategies to assess systematic errors, and show how to uncover inter-dependencies by correlation analysis. The basic concepts are illustrated through simple examples. By providing theoretical error bars on predicted quantities and using statistical methods to study correlations between observables, theory can significantly enhance the feedback between experiment and nuclear modeling.

  1. Input modelling for subchannel analysis of CANFLEX fuel bundle

    Energy Technology Data Exchange (ETDEWEB)

    Park, Joo Hwan; Jun, Ji Su; Suk, Ho Chun [Korea Atomic Energy Research Institute, Taejon (Korea)

    1998-06-01

    This report describes the input modelling for subchannel analysis of the CANFLEX fuel bundle using the CASS (Candu thermalhydraulic Analysis by Subchannel approacheS) code, which has been developed for subchannel analysis of CANDU fuel channels. The CASS code can give different calculation results according to the user's input modelling. Hence, this report provides the background information for the input modelling and the accuracy of the input data, and gives confidence in the calculation results. (author). 11 refs., 3 figs., 4 tabs.

  2. Error estimation and adaptive chemical transport modeling

    Directory of Open Access Journals (Sweden)

    Malte Braack

    2014-09-01

    Full Text Available We present a numerical method to use several chemical transport models of increasing accuracy and complexity in an adaptive way. In largest parts of the domain, a simplified chemical model may be used, whereas in certain regions a more complex model is needed for accuracy reasons. A mathematically derived error estimator measures the modeling error and provides information where to use more accurate models. The error is measured in terms of output functionals. Therefore, one has to consider adjoint problems which carry sensitivity information. This concept is demonstrated by means of ozone formation and pollution emission.

  3. REFLECTIONS ON THE INOPERABILITY INPUT-OUTPUT MODEL

    NARCIS (Netherlands)

    Dietzenbacher, Erik; Miller, Ronald E.

    2015-01-01

    We argue that the inoperability input-output model is a straightforward - albeit potentially very relevant - application of the standard input-output model. In addition, we propose two less standard input-output approaches as alternatives to take into consideration when analyzing the effects of disa

  4. Robust input design for nonlinear dynamic modeling of AUV.

    Science.gov (United States)

    Nouri, Nowrouz Mohammad; Valadi, Mehrdad

    2017-09-01

    Input design has a dominant role in developing the dynamic model of autonomous underwater vehicles (AUVs) through system identification. Optimal input design is the process of generating informative inputs that can be used to generate the good quality dynamic model of AUVs. In a problem with optimal input design, the desired input signal depends on the unknown system which is intended to be identified. In this paper, the input design approach which is robust to uncertainties in model parameters is used. The Bayesian robust design strategy is applied to design input signals for dynamic modeling of AUVs. The employed approach can design multiple inputs and apply constraints on an AUV system's inputs and outputs. Particle swarm optimization (PSO) is employed to solve the constraint robust optimization problem. The presented algorithm is used for designing the input signals for an AUV, and the estimate obtained by robust input design is compared with that of the optimal input design. According to the results, proposed input design can satisfy both robustness of constraints and optimality. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
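
    A small sketch of the Bayesian robust design idea: maximize the expected log-determinant of the Fisher information over prior samples of the unknown parameter. For brevity, scipy's differential_evolution stands in for the PSO optimizer used in the paper, and the toy model y = exp(-a*u) replaces the AUV dynamics.

    ```python
    import numpy as np
    from scipy.optimize import differential_evolution

    rng = np.random.default_rng(2)
    a_prior = rng.uniform(0.5, 1.5, 64)        # prior samples of the unknown a

    def neg_expected_info(u):
        # Fisher information of y = exp(-a*u) + noise: sum_i (u_i*exp(-a*u_i))^2
        s = (u[None, :] * np.exp(-np.outer(a_prior, u))) ** 2
        return -np.mean(np.log(s.sum(axis=1)))  # robust (Bayesian) D-optimality

    res = differential_evolution(neg_expected_info, bounds=[(0.01, 10.0)] * 3, seed=0)
    print(res.x)                                # robustly informative input levels
    ```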

  5. Sensitivity of tissue properties derived from MRgFUS temperature data to input errors and data inclusion criteria: ex vivo study in porcine muscle

    Science.gov (United States)

    Shi, Y. C.; Parker, D. L.; Dillon, C. R.

    2016-08-01

    This study evaluates the sensitivity of two magnetic resonance-guided focused ultrasound (MRgFUS) thermal property estimation methods to errors in required inputs and to different data inclusion criteria. Using ex vivo pork muscle MRgFUS data, sensitivities to required inputs are determined by introducing errors to ultrasound beam locations (r_error = -2 to 2 mm) and time vectors (t_error = -2.2 to 2.2 s). In addition, the sensitivity to user-defined data inclusion criteria is evaluated by choosing different spatial (r_fit = 1-10 mm) and temporal (t_fit = 8.8-61.6 s) regions for fitting. Beam location errors resulted in up to 50% change in property estimates, with local minima occurring at r_error = 0 and estimate errors less than 10% for small r_error. Estimates were accurate when r_fit exceeded 2.5 × FWHM, and were most accurate, with the least variability, for longer t_fit. Guidelines provided by this study highlight the importance of identifying required inputs and choosing appropriate data inclusion criteria for robust and accurate thermal property estimation. Applying these guidelines will prevent the introduction of biases and avoidable errors when utilizing these property estimation techniques for MRgFUS thermal modeling applications.

  6. Improving the Performance of Water Demand Forecasting Models by Using Weather Input

    NARCIS (Netherlands)

    Bakker, M.; Van Duist, H.; Van Schagen, K.; Vreeburg, J.; Rietveld, L.

    2014-01-01

    Literature shows that water demand forecasting models which use water demand as single input, are capable of generating a fairly accurate forecast. However, at changing weather conditions the forecasting errors are quite large. In this paper three different forecasting models are studied: an Adaptiv

  8. Comparison of Prediction-Error-Modelling Criteria

    DEFF Research Database (Denmark)

    Jørgensen, John Bagterp; Jørgensen, Sten Bay

    2007-01-01

    Single and multi-step prediction-error-methods based on the maximum likelihood and least squares criteria are compared. The prediction-error methods studied are based on predictions using the Kalman filter and Kalman predictors for a linear discrete-time stochastic state space model, which is a realization of a continuous-discrete multivariate stochastic transfer function model. The proposed prediction-error methods are demonstrated for a SISO system parameterized by the transfer functions with time delays of a continuous-discrete-time linear stochastic system. The simulations for this case suggest … computational resources. The identification method is suitable for predictive control…

  9. Towards a Bayesian total error analysis of conceptual rainfall-runoff models: Characterising model error using storm-dependent parameters

    Science.gov (United States)

    Kuczera, George; Kavetski, Dmitri; Franks, Stewart; Thyer, Mark

    2006-11-01

    Calibration and prediction in conceptual rainfall-runoff (CRR) modelling is affected by the uncertainty in the observed forcing/response data and the structural error in the model. This study works towards the goal of developing a robust framework for dealing with these sources of error and focuses on model error. The characterisation of model error in CRR modelling has been thwarted by the convenient but indefensible treatment of CRR models as deterministic descriptions of catchment dynamics. This paper argues that the fluxes in CRR models should be treated as stochastic quantities because their estimation involves spatial and temporal averaging. Acceptance that CRR models are intrinsically stochastic paves the way for a more rational characterisation of model error. The hypothesis advanced in this paper is that CRR model error can be characterised by storm-dependent random variation of one or more CRR model parameters. A simple sensitivity analysis is used to identify the parameters most likely to behave stochastically, with variation in these parameters yielding the largest changes in model predictions as measured by the Nash-Sutcliffe criterion. A Bayesian hierarchical model is then formulated to explicitly differentiate between forcing, response and model error. It provides a very general framework for calibration and prediction, as well as for testing hypotheses regarding model structure and data uncertainty. A case study calibrating a six-parameter CRR model to daily data from the Abercrombie catchment (Australia) demonstrates the considerable potential of this approach. Allowing storm-dependent variation in just two model parameters (with one of the parameters characterising model error and the other reflecting input uncertainty) yields a substantially improved model fit, raising the Nash-Sutcliffe statistic from 0.74 to 0.94. Of particular significance is the use of posterior diagnostics to test the key assumptions about the data and model errors.
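
    For reference, the Nash-Sutcliffe statistic quoted above (0.74 rising to 0.94) is computed as:

    ```python
    import numpy as np

    def nash_sutcliffe(obs, sim):
        # NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
    ```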

  10. Analysis of modeling errors in system identification

    Science.gov (United States)

    Hadaegh, F. Y.; Bekey, G. A.

    1986-01-01

    This paper is concerned with the identification of a system in the presence of several error sources. Following some basic definitions, the notion of 'near-equivalence in probability' is introduced using the concept of near-equivalence between a model and process. Necessary and sufficient conditions for the identifiability of system parameters are given. The effect of structural error on the parameter estimates for both deterministic and stochastic cases is considered.

  11. Generalization error bounds for stationary autoregressive models

    CERN Document Server

    McDonald, Daniel J; Schervish, Mark

    2011-01-01

    We derive generalization error bounds for stationary univariate autoregressive (AR) models. We show that the stationarity assumption alone lets us treat the estimation of AR models as a regularized kernel regression without the need to further regularize the model arbitrarily. We thereby bound the Rademacher complexity of AR models and apply existing Rademacher complexity results to characterize the predictive risk of AR models. We demonstrate our methods by predicting interest rate movements.

  12. Spatial Error Metrics for Oceanographic Model Verification

    Science.gov (United States)

    2012-02-01

    quantitatively and qualitatively for this oceanographic data and successfully separates the model error into displacement and intensity components. This ... oceanographic models as well, though one would likely need to make special modifications to handle the often-used nonuniform spacing between depth layers

  13. Improving Localization Accuracy: Successive Measurements Error Modeling

    Directory of Open Access Journals (Sweden)

    Najah Abu Ali

    2015-07-01

    Vehicle self-localization is an essential requirement for many of the safety applications envisioned for vehicular networks. The mathematical models used in current vehicular localization schemes focus on modeling the localization error itself, and overlook the potential correlation between successive localization measurement errors. In this paper, we first investigate the existence of correlation between successive positioning measurements, and then incorporate this correlation into the modeling of the positioning error. We use the Yule-Walker equations to determine the degree of correlation between a vehicle's future position and its past positions, and then propose a -order Gauss-Markov model to predict the future position of a vehicle from its past positions. We investigate the existence of correlation for two datasets representing the mobility traces of two vehicles over a period of time. We prove the existence of correlation between successive measurements in the two datasets, and show that the time correlation between measurements can have a value up to four minutes. Through simulations, we validate the robustness of our model and show that it is possible to use the first-order Gauss-Markov model, which has the least complexity, and still maintain an accurate estimation of a vehicle's future location over time using only its current position. Our model can assist in providing better modeling of positioning errors and can be used as a prediction tool to improve the performance of classical localization algorithms such as the Kalman filter.
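
    To make the idea concrete, here is a minimal sketch (synthetic data and illustrative parameters, not the paper's datasets) of predicting the next measurement error from the previous one with a first-order Gauss-Markov model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 1-D vehicle track whose position measurements carry
# temporally correlated (AR(1)) errors -- a stand-in for real
# mobility traces.
n = 500
err = np.zeros(n)
for t in range(1, n):
    err[t] = 0.9 * err[t - 1] + rng.normal(0.0, 0.5)

# First-order Gauss-Markov model: e_{t+1} ~ a * e_t. Estimate a from
# the first Yule-Walker equation (lag-1 autocorrelation).
a = np.corrcoef(err[:-1], err[1:])[0, 1]
pred = a * err[:-1]                                 # predicted next-step errors

rmse_raw = np.sqrt(np.mean(err[1:] ** 2))           # ignore the correlation
rmse_gm1 = np.sqrt(np.mean((err[1:] - pred) ** 2))  # exploit the correlation
print(f"a = {a:.2f}; RMSE uncorrected {rmse_raw:.3f} vs GM(1) {rmse_gm1:.3f}")
```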

  14. Soft error mechanisms, modeling and mitigation

    CERN Document Server

    Sayil, Selahattin

    2016-01-01

    This book introduces readers to various radiation soft-error mechanisms such as soft delays, radiation induced clock jitter and pulses, and single event (SE) coupling induced effects. In addition to discussing various radiation hardening techniques for combinational logic, the author also describes new mitigation strategies targeting commercial designs. Coverage includes novel soft error mitigation techniques such as the Dynamic Threshold Technique and Soft Error Filtering based on Transmission gate with varied gate and body bias. The discussion also includes modeling of SE crosstalk noise, delay and speed-up effects. Various mitigation strategies to eliminate SE coupling effects are also introduced. Coverage also includes the reliability of low power energy-efficient designs and the impact of leakage power consumption optimizations on soft error robustness. The author presents an analysis of various power optimization techniques, enabling readers to make design choices that reduce static power consumption an...

  15. SIMPLE MODEL FOR THE INPUT IMPEDANCE OF RECTANGULAR MICROSTRIP ANTENNA

    Directory of Open Access Journals (Sweden)

    Celal YILDIZ

    1998-03-01

    A very simple model for the input impedance of a coax-fed rectangular microstrip patch antenna is presented. It is based on the cavity model and the equivalent resonant circuits. The theoretical input impedance results obtained from this model are in good agreement with the experimental results available in the literature. This model is well suited for computer-aided design (CAD).

  16. A probabilistic model for reducing medication errors.

    Directory of Open Access Journals (Sweden)

    Phung Anh Nguyen

    BACKGROUND: Medication errors are common, life threatening, costly but preventable. Information technology and automated systems are highly efficient for preventing medication errors and are therefore widely employed in hospital settings. The aim of this study was to construct a probabilistic model that can reduce medication errors by identifying uncommon or rare associations between medications and diseases. METHODS AND FINDINGS: Association rule mining techniques are applied to 103.5 million prescriptions from Taiwan's National Health Insurance database. The dataset included 204.5 million diagnoses with ICD9-CM codes and 347.7 million medications coded using ATC codes. Disease-Medication (DM) and Medication-Medication (MM) associations were computed from their co-occurrence, and the associations' strength was measured by the interestingness, or lift, values, referred to as Q values. The DMQs and MMQs were used to develop the AOP model to predict the appropriateness of a given prescription. Validation of this model was done by comparing the results of evaluation performed by the AOP model and verified by human experts. The results showed 96% accuracy for appropriate and 45% accuracy for inappropriate prescriptions, with a sensitivity and specificity of 75.9% and 89.5%, respectively. CONCLUSIONS: We successfully developed the AOP model as an efficient tool for automatic identification of uncommon or rare associations between disease-medication and medication-medication pairs in prescriptions. The AOP model helps to reduce medication errors by alerting physicians, improving patients' safety and the overall quality of care.
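
    The core quantity here is the lift of a disease-medication pair. A minimal sketch with toy data (the ICD9/ATC codes are illustrative, not drawn from the study's database):

```python
import pandas as pd

# Toy prescription records: each row pairs a diagnosis code with a
# prescribed medication code.
rx = pd.DataFrame({
    "icd9": ["250.0", "250.0", "401.9", "401.9", "250.0", "401.9"],
    "atc":  ["A10BA02", "A10BA02", "C09AA02", "C09AA02", "C09AA02", "A10BA02"],
})

n = len(rx)
p_dm = rx.groupby(["icd9", "atc"]).size() / n   # joint P(D, M)
p_d = rx["icd9"].value_counts() / n             # marginal P(D)
p_m = rx["atc"].value_counts() / n              # marginal P(M)

# Lift ("Q value"): how much more often D and M co-occur than expected
# under independence. Low Q flags unusual disease-medication pairings.
q = {pair: p / (p_d[pair[0]] * p_m[pair[1]]) for pair, p in p_dm.items()}
for pair, val in sorted(q.items(), key=lambda kv: kv[1]):
    print(pair, round(val, 2))
```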

  17. Prediction error, ketamine and psychosis: An updated model.

    Science.gov (United States)

    Corlett, Philip R; Honey, Garry D; Fletcher, Paul C

    2016-11-01

    In 2007, we proposed an explanation of delusion formation as aberrant prediction error-driven associative learning. Further, we argued that the NMDA receptor antagonist ketamine provided a good model for this process. Subsequently, we validated the model in patients with psychosis, relating aberrant prediction error signals to delusion severity. During the ensuing period, we have developed these ideas, drawing on the simple principle that brains build a model of the world and refine it by minimising prediction errors, as well as using it to guide perceptual inferences. While previously we focused on the prediction error signal per se, an updated view takes into account its precision, as well as the precision of prior expectations. With this expanded perspective, we see several possible routes to psychotic symptoms - which may explain the heterogeneity of psychotic illness, as well as the fact that other drugs, with different pharmacological actions, can produce psychotomimetic effects. In this article, we review the basic principles of this model and highlight specific ways in which prediction errors can be perturbed, in particular considering the reliability and uncertainty of predictions. The expanded model explains hallucinations as perturbations of the uncertainty mediated balance between expectation and prediction error. Here, expectations dominate and create perceptions by suppressing or ignoring actual inputs. Negative symptoms may arise due to poor reliability of predictions in service of action. By mapping from biology to belief and perception, the account proffers new explanations of psychosis. However, challenges remain. We attempt to address some of these concerns and suggest future directions, incorporating other symptoms into the model, building towards better understanding of psychosis. © The Author(s) 2016.

  18. Storm-impact scenario XBeach model inputs and results

    Science.gov (United States)

    Mickey, Rangley; Long, Joseph W.; Thompson, David M.; Plant, Nathaniel G.; Dalyander, P. Soupy

    2017-01-01

    The XBeach model input and output of topography and bathymetry resulting from simulation of storm-impact scenarios at the Chandeleur Islands, LA, as described in USGS Open-File Report 2017–1009 (https://doi.org/10.3133/ofr20171009), are provided here. For further information regarding model input generation and visualization of model output topography and bathymetry refer to USGS Open-File Report 2017–1009 (https://doi.org/10.3133/ofr20171009).

  19. Regression Model With Elliptically Contoured Errors

    CERN Document Server

    Arashi, M; Tabatabaey, S M M

    2012-01-01

    For the regression model where the errors follow the elliptically contoured distribution (ECD), we consider the least squares (LS), restricted LS (RLS), preliminary test (PT), Stein-type shrinkage (S) and positive-rule shrinkage (PRS) estimators for the regression parameters. We compare the quadratic risks of the estimators to determine the relative dominance properties of the five estimators.

  20. Understanding error generation in fused deposition modeling

    Science.gov (United States)

    Bochmann, Lennart; Bayley, Cindy; Helu, Moneer; Transchel, Robert; Wegener, Konrad; Dornfeld, David

    2015-03-01

    Additive manufacturing offers completely new possibilities for the manufacturing of parts. The advantages of flexibility and convenience of additive manufacturing have had a significant impact on many industries, and optimizing part quality is crucial for expanding its utilization. This research aims to determine the sources of imprecision in fused deposition modeling (FDM). Process errors in terms of surface quality, accuracy and precision are identified and quantified, and an error-budget approach is used to characterize errors of the machine tool. It was determined that accuracy and precision in the y direction (0.08-0.30 mm) are generally greater than in the x direction (0.12-0.62 mm) and the z direction (0.21-0.57 mm). Furthermore, accuracy and precision tend to decrease at increasing axis positions. The results of this work can be used to identify possible process improvements in the design and control of FDM technology.

  1. Quantum error-correction failure distributions: Comparison of coherent and stochastic error models

    Science.gov (United States)

    Barnes, Jeff P.; Trout, Colin J.; Lucarelli, Dennis; Clader, B. D.

    2017-06-01

    We compare failure distributions of quantum error correction circuits for stochastic errors and coherent errors. We utilize a fully coherent simulation of a fault-tolerant quantum error correcting circuit for d = 3 Steane and surface codes. We find that the output distributions are markedly different for the two error models, showing that no simple mapping between the two error models exists. Coherent errors create very broad and heavy-tailed failure distributions. This suggests that they are susceptible to outlier events and that mean statistics, such as pseudothreshold estimates, may not provide the key figure of merit. This provides further statistical insight into why coherent errors can be so harmful for quantum error correction. These output probability distributions may also provide a useful metric that can be utilized when optimizing quantum error correcting codes and decoding procedures for purely coherent errors.

  2. Computer Input Devices: Neutral Party or Source of Significant Error in Manual Lesion Segmentation?

    OpenAIRE

    Chen, James Y.; Seagull, F. Jacob; Nagy, Paul; Lakhani, Paras; Melhem, Elias R.; Siegel, Eliot L.; Safdar, Nabile M.

    2010-01-01

    Lesion segmentation involves outlining the contour of an abnormality on an image to distinguish boundaries between normal and abnormal tissue and is essential to track malignant and benign disease in medical imaging for clinical, research, and treatment purposes. A laser optical mouse and a graphics tablet were used by radiologists to segment 12 simulated reference lesions per subject in two groups (one group comprised three lesion morphologies in two sizes, one for each input device for each...

  3. Preisach models of hysteresis driven by Markovian input processes

    Science.gov (United States)

    Schubert, Sven; Radons, Günter

    2017-08-01

    We study the response of Preisach models of hysteresis to stochastically fluctuating external fields. We perform numerical simulations, which indicate that analytical expressions derived previously for the autocorrelation functions and power spectral densities of the Preisach model with uncorrelated input, hold asymptotically also if the external field shows exponentially decaying correlations. As a consequence, the mechanisms causing long-term memory and 1/f noise in Preisach models with uncorrelated inputs still apply in the presence of fast decaying input correlations. We collect additional evidence for the importance of the effective Preisach density previously introduced even for Preisach models with correlated inputs. Additionally, we present some results for the output of the Preisach model with uncorrelated input using analytical methods. It is found, for instance, that in order to produce the same long-time tails in the output, the elementary hysteresis loops of large width need to have a higher weight for the generic Preisach model than for the symmetric Preisach model. Further, we find autocorrelation functions and power spectral densities to be monotonically decreasing independently of the choice of input and Preisach density.
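
    For readers unfamiliar with the construction, a minimal sketch of a Preisach operator as a weighted sum of relay hysterons, driven by an uncorrelated random input (thresholds and weights are illustrative, not the paper's densities):

```python
import numpy as np

rng = np.random.default_rng(1)

# A Preisach operator: weighted superposition of relay hysterons.
# Relay (beta, alpha) switches to +1 when the input rises above alpha
# and to -1 when it falls below beta (beta < alpha).
n_hyst = 1000
beta = rng.uniform(-1.0, 1.0, n_hyst)
width = rng.uniform(0.0, 1.0, n_hyst)
alpha = beta + width
weight = np.exp(-width)     # illustrative density: narrow loops dominate
state = -np.ones(n_hyst)    # all relays start in the "down" state

def preisach(u):
    """Apply input value u, update relay states, return weighted output."""
    state[u >= alpha] = 1.0
    state[u <= beta] = -1.0
    return weight @ state / weight.sum()

# Drive with an uncorrelated random input, as in the baseline case.
u_series = rng.uniform(-1.5, 1.5, 5000)
y_series = np.array([preisach(u) for u in u_series])
print("output mean = %.3f, std = %.3f" % (y_series.mean(), y_series.std()))
```

    The weight array here plays the role of the (effective) Preisach density the abstract refers to.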

  4. Model-Free importance indicators for dependent input

    Energy Technology Data Exchange (ETDEWEB)

    Saltelli, A.; Ratto, M.; Tarantola, S

    2001-07-01

    A number of methods are available to assess uncertainty importance in the predictions of a simulation model for orthogonal sets of uncertain input factors. However, in many practical cases input factors are correlated. Even for these cases it is still possible to compute the correlation ratio and the partial (or incremental) importance measure, two popular sensitivity measures proposed in the recent literature on the subject. Unfortunately, the existing indicators of importance have limitations in terms of their use in sensitivity analysis of model output. Correlation ratios are indeed effective for priority setting (i.e. to find out what input factor needs better determination) but not, for instance, for the identification of the subset of the most important input factors, or for model simplification. In such cases other types of indicators are required that can cope with the simultaneous occurrence of correlation and interaction (a property of the model) among the input factors. In (1) the limitations of current measures of importance were discussed and a general approach was identified to quantify uncertainty importance for correlated inputs in terms of different betting contexts. This work was later submitted to the Journal of the American Statistical Association. However, the computational cost of such an approach is still high, as happens when dealing with correlated input factors. In this paper we explore how suitable designs could reduce the numerical load of the analysis. (Author) 11 refs.
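
    The correlation ratio mentioned here can be estimated non-parametrically by binning. A hedged sketch with a toy model and correlated inputs (all choices illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

# Correlated input factors and a simple model output.
n = 100_000
x1 = rng.normal(size=n)
x2 = 0.8 * x1 + 0.6 * rng.normal(size=n)   # x2 correlated with x1
y = x1 + x2 ** 2

def correlation_ratio(x, y, bins=50):
    """Estimate eta^2 = Var(E[y|x]) / Var(y) by quantile-binning x."""
    edges = np.quantile(x, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.searchsorted(edges, x) - 1, 0, bins - 1)
    counts = np.bincount(idx, minlength=bins)
    cond_means = np.array([y[idx == b].mean() for b in range(bins)])
    var_cond = np.sum(counts * (cond_means - y.mean()) ** 2) / len(y)
    return var_cond / y.var()

print("eta^2(x1):", round(correlation_ratio(x1, y), 3))
print("eta^2(x2):", round(correlation_ratio(x2, y), 3))
```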

  5. Hierarchical Boltzmann simulations and model error estimation

    Science.gov (United States)

    Torrilhon, Manuel; Sarna, Neeraj

    2017-08-01

    A hierarchical simulation approach for Boltzmann's equation should provide a single numerical framework in which a coarse representation can be used to compute gas flows as accurately and efficiently as in computational fluid dynamics, but a subsequent refinement allows one to successively improve the result toward the complete Boltzmann result. We use Hermite discretization, or moment equations, for the steady linearized Boltzmann equation as a proof-of-concept of such a framework. All representations of the hierarchy are rotationally invariant and the numerical method is formulated on fully unstructured triangular and quadrilateral meshes using an implicit discontinuous Galerkin formulation. We demonstrate the performance of the numerical method on model problems, which in particular highlights the relevance of stability of boundary conditions on curved domains. The hierarchical nature of the method also allows one to provide model error estimates by comparing subsequent representations. We present various model errors for a flow through a curved channel with obstacles.

  6. Nonclassical measurement errors in nonlinear models

    DEFF Research Database (Denmark)

    Madsen, Edith; Mulalic, Ismir

    Discrete choice models and in particular logit type models play an important role in understanding and quantifying individual or household behavior in relation to transport demand. An example is the choice of travel mode for a given trip under the budget and time restrictions that the individuals of a household face. In this case an important policy parameter is the effect of income (reflecting the household budget) on the choice of travel mode. This paper deals with the consequences of measurement error in income (an explanatory variable) in discrete choice models. Since it is likely to give misleading estimates of the income effect, it is of interest to investigate the magnitude of the estimation bias and, if possible, use estimation techniques that take the measurement error problem into account. We use data from the Danish National Travel Survey (NTS) and merge it with administrative register data...

  7. Space market model space industry input-output model

    Science.gov (United States)

    Hodgin, Robert F.; Marchesini, Roberto

    1987-01-01

    The goal of the Space Market Model (SMM) is to develop an information resource for the space industry. The SMM is intended to contain information appropriate for decision making in the space industry. The objectives of the SMM are to: (1) assemble information related to the development of the space business; (2) construct an adequate description of the emerging space market; (3) disseminate the information on the space market to forecasters and planners in government agencies and private corporations; and (4) provide timely analyses and forecasts of critical elements of the space market. An Input-Output model of market activity is proposed which is capable of transforming raw data into useful information for decision makers and policy makers dealing with the space sector.

  8. A Probabilistic Model for Reducing Medication Errors

    Science.gov (United States)

    Nguyen, Phung Anh; Syed-Abdul, Shabbir; Iqbal, Usman; Hsu, Min-Huei; Huang, Chen-Ling; Li, Hsien-Chang; Clinciu, Daniel Livius; Jian, Wen-Shan; Li, Yu-Chuan Jack

    2013-01-01

    Background Medication errors are common, life threatening, costly but preventable. Information technology and automated systems are highly efficient for preventing medication errors and are therefore widely employed in hospital settings. The aim of this study was to construct a probabilistic model that can reduce medication errors by identifying uncommon or rare associations between medications and diseases. Methods and Finding(s) Association rule mining techniques are utilized for 103.5 million prescriptions from Taiwan's National Health Insurance database. The dataset included 204.5 million diagnoses with ICD9-CM codes and 347.7 million medications coded using ATC codes. Disease-Medication (DM) and Medication-Medication (MM) associations were computed by their co-occurrence, and the associations' strength was measured by the interestingness or lift values, which are referred to as Q values. The DMQs and MMQs were used to develop the AOP model to predict the appropriateness of a given prescription. Validation of this model was done by comparing the results of evaluation performed by the AOP model and verified by human experts. The results showed 96% accuracy for appropriate and 45% accuracy for inappropriate prescriptions, with a sensitivity and specificity of 75.9% and 89.5%, respectively. Conclusions We successfully developed the AOP model as an efficient tool for automatic identification of uncommon or rare associations between disease-medication and medication-medication pairs in prescriptions. The AOP model helps to reduce medication errors by alerting physicians, improving patients' safety and the overall quality of care. PMID:24312659

  9. A Probabilistic Collocation Method Based Statistical Gate Delay Model Considering Process Variations and Multiple Input Switching

    CERN Document Server

    Kumar, Y Satish; Talarico, Claudio; Wang, Janet; 10.1109/DATE.2005.31

    2011-01-01

    Since the advent of new nanotechnologies, the variability of gate delay due to process variations has become a major concern. This paper proposes a new gate delay model that includes impact from both process variations and multiple input switching. The proposed model uses an orthogonal polynomial based probabilistic collocation method to construct a delay analytical equation from circuit timing performance. From the experimental results, our approach has less than 0.2% error on the mean delay of gates and less than 3% error on the standard deviation.

  10. Biomedical model fitting and error analysis.

    Science.gov (United States)

    Costa, Kevin D; Kleinstein, Steven H; Hershberg, Uri

    2011-09-20

    This Teaching Resource introduces students to curve fitting and error analysis; it is the second of two lectures on developing mathematical models of biomedical systems. The first focused on identifying, extracting, and converting required constants--such as kinetic rate constants--from experimental literature. To understand how such constants are determined from experimental data, this lecture introduces the principles and practice of fitting a mathematical model to a series of measurements. We emphasize using nonlinear models for fitting nonlinear data, avoiding problems associated with linearization schemes that can distort and misrepresent the data. To help ensure proper interpretation of model parameters estimated by inverse modeling, we describe a rigorous six-step process: (i) selecting an appropriate mathematical model; (ii) defining a "figure-of-merit" function that quantifies the error between the model and data; (iii) adjusting model parameters to get a "best fit" to the data; (iv) examining the "goodness of fit" to the data; (v) determining whether a much better fit is possible; and (vi) evaluating the accuracy of the best-fit parameter values. Implementation of the computational methods is based on MATLAB, with example programs provided that can be modified for particular applications. The problem set allows students to use these programs to develop practical experience with the inverse-modeling process in the context of determining the rates of cell proliferation and death for B lymphocytes using data from BrdU-labeling experiments.
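
    The six-step process maps naturally onto standard tooling. A minimal sketch in Python rather than the MATLAB used in the resource (the decay model and all numbers are illustrative, not the authors' BrdU data):

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(3)

# Step (i): choose a nonlinear model -- here exponential decay of
# labeled cells, a common form in BrdU-labeling analyses.
def model(t, n0, k):
    return n0 * np.exp(-k * t)

# Synthetic "measurements".
t = np.linspace(0, 10, 25)
y = model(t, 100.0, 0.35) + rng.normal(0.0, 3.0, t.size)

# Steps (ii)-(iii): least-squares figure of merit, minimized by curve_fit
# directly on the nonlinear model (no linearization of the data).
popt, pcov = curve_fit(model, t, y, p0=[80.0, 0.2])

# Step (iv): goodness of fit via the residual standard deviation.
resid = y - model(t, *popt)
s = np.sqrt(np.sum(resid ** 2) / (t.size - len(popt)))

# Step (vi): standard errors of best-fit parameters from the covariance.
perr = np.sqrt(np.diag(pcov))
print(f"n0 = {popt[0]:.1f} +/- {perr[0]:.1f}, k = {popt[1]:.3f} +/- {perr[1]:.3f}")
print(f"residual sd = {s:.2f}")
```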

  11. An Investigation of the Incidence and Effect of Spreadsheet Errors Caused by the Hard Coding of Input Data Values into Formulas

    CERN Document Server

    Blayney, Paul J

    2008-01-01

    The hard coding of input data or constants into spreadsheet formulas is widely recognised as poor spreadsheet model design. However, the importance of avoiding such practice appears to be underestimated, perhaps in light of the lack of a quantitative error at the time of occurrence and the recognition that this design defect may never result in a bottom-line error. The paper examines both the academic and practitioner views of such hard-coding design flaws. The practitioner or industry viewpoint is gained indirectly through a review of commercial spreadsheet auditing software. The development of an automated (electronic) means for detecting such hard coding is described, together with a discussion of some results obtained through analysis of a number of student and practitioner spreadsheet models.
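
    An automated detector of this kind can be as simple as scanning formula strings for numeric literals that are not part of cell references. A hedged sketch (toy formulas; not the paper's tool):

```python
import re

# Toy spreadsheet formulas: cell -> formula string.
formulas = {
    "B2": "=A2*1.175",        # hard-coded VAT rate -- the design flaw
    "B3": "=A3*TaxRate",      # named input cell -- good practice
    "C2": "=SUM(A2:A10)",     # range references only
}

# Flag numeric literals inside formulas, ignoring the row numbers that
# appear in cell references such as A2 or $A$10.
CELL_REF = re.compile(r"\$?[A-Z]{1,3}\$?\d+")
NUMBER = re.compile(r"\d+\.?\d*")

for cell, f in formulas.items():
    stripped = CELL_REF.sub("", f)        # remove cell references first
    literals = NUMBER.findall(stripped)
    if literals:
        print(f"{cell}: hard-coded value(s) {literals} in {f!r}")
```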

  12. Application of an Error Statistics Estimation Method to the PSAS Forecast Error Covariance Model

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    In atmospheric data assimilation systems, the forecast error covariance model is an important component. However, the parameters required by a forecast error covariance model are difficult to obtain due to the absence of the truth. This study applies an error statistics estimation method to the Physical-space Statistical Analysis System (PSAS) height-wind forecast error covariance model. This method consists of two components: the first component computes the error statistics by using the National Meteorological Center (NMC) method, which is a lagged-forecast difference approach, within the framework of the PSAS height-wind forecast error covariance model; the second obtains a calibration formula to rescale the error standard deviations provided by the NMC method. The calibration is against the error statistics estimated by using a maximum-likelihood estimation (MLE) with rawindsonde height observed-minus-forecast residuals. A complete set of formulas for estimating the error statistics and for the calibration is applied to a one-month-long dataset generated by a general circulation model of the Global Model and Assimilation Office (GMAO), NASA. There is a clear constant relationship between the error statistics estimates of the NMC-method and MLE. The final product provides a full set of 6-hour error statistics required by the PSAS height-wind forecast error covariance model over the globe. The features of these error statistics are examined and discussed.

  13. An Integrated Hydrologic Bayesian Multi-Model Combination Framework: Confronting Input, parameter and model structural uncertainty in Hydrologic Prediction

    Energy Technology Data Exchange (ETDEWEB)

    Ajami, N K; Duan, Q; Sorooshian, S

    2006-05-05

    This paper presents a new technique--the Integrated Bayesian Uncertainty Estimator (IBUNE)--to explicitly account for the major uncertainties of hydrologic rainfall-runoff predictions. The uncertainties from the input (forcing) data--mainly the precipitation observations--and from the model parameters are reduced through a Monte Carlo Markov Chain (MCMC) scheme named the Shuffled Complex Evolution Metropolis (SCEM) algorithm, which has been extended to include a precipitation error model. Afterwards, the Bayesian Model Averaging (BMA) scheme is employed to further improve the prediction skill and uncertainty estimation using multiple model outputs. A series of case studies using three rainfall-runoff models to predict the streamflow in the Leaf River basin, Mississippi are used to examine the necessity and usefulness of this technique. The results suggest that ignoring either input forcing error or model structural uncertainty will lead to unrealistic model simulations and associated uncertainty bounds which do not consistently capture and represent the real-world behavior of the watershed.
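
    The BMA step combines the ensemble into one predictive distribution. A minimal sketch with synthetic "models" and a crude likelihood-based weighting (a stand-in for the EM estimation normally used; all data invented):

```python
import numpy as np

rng = np.random.default_rng(4)

# Streamflow "observations" and predictions from three rainfall-runoff
# models, each with its own bias and noise level.
n = 300
obs = 10 + 3 * np.sin(np.linspace(0, 12, n)) + rng.normal(0, 0.5, n)
preds = np.stack([obs + rng.normal(b, s, n)
                  for b, s in [(0.5, 1.0), (-0.3, 0.7), (0.1, 1.5)]])

# Weights from skill on a training window (crude heuristic).
train = slice(0, 150)
sse = ((preds[:, train] - obs[train]) ** 2).mean(axis=1)
w = np.exp(-sse / sse.min())
w /= w.sum()

# BMA predictive mean and variance (between-model spread included).
bma_mean = np.tensordot(w, preds, axes=1)
bma_var = np.tensordot(w, (preds - bma_mean) ** 2, axes=1)
print("weights:", np.round(w, 3))
print("RMSE best single model:",
      np.sqrt(((preds - obs) ** 2).mean(axis=1)).min().round(3))
print("RMSE BMA mean:", np.sqrt(((bma_mean - obs) ** 2).mean()).round(3))
```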

  14. Estimation of the input parameters in the Feller neuronal model

    Science.gov (United States)

    Ditlevsen, Susanne; Lansky, Petr

    2006-06-01

    The stochastic Feller neuronal model is studied, and estimators of the model input parameters, depending on the firing regime of the process, are derived. Closed expressions for the first two moments of functionals of the first-passage time (FPT) through a constant boundary in the suprathreshold regime are derived, which are used to calculate moment estimators. In the subthreshold regime, the exponentiality of the FPT is utilized to characterize the input parameters. The methods are illustrated on simulated data. Finally, approximations of the first-passage-time moments are suggested, and biological interpretations and comparisons of the parameters in the Feller and the Ornstein-Uhlenbeck models are discussed.

  15. Quality assurance of weather data for agricultural system model input

    Science.gov (United States)

    It is well known that crop production and hydrologic variation on watersheds are weather related. Rarely, however, are meteorological data quality checks reported in agricultural systems model research. We present quality assurance procedures for agricultural system model weather data input. Problems...

  16. Optimization of precipitation inputs for SWAT modeling in mountainous catchment

    Science.gov (United States)

    Tuo, Ye; Chiogna, Gabriele; Disse, Markus

    2016-04-01

    Precipitation is often the most important input data in hydrological models when simulating streamflow in mountainous catchments. The Soil and Water Assessment Tool (SWAT), a widely used hydrological model, only makes use of data from the one precipitation gauging station that is nearest to the centroid of each subcatchment, eventually corrected using the band elevation method. This leads in general to inaccurate subcatchment precipitation representation, which results in unreliable simulations in mountainous catchments. To investigate the impact of the precipitation inputs and consider the high spatial and temporal variability of precipitation, we first interpolated 21 years (1990-2010) of daily measured data using the Inverse Distance Weighting (IDW) method. Averaged IDW daily values were calculated at the subcatchment scale to be supplied as optimized precipitation inputs for SWAT. Both datasets (measured data and IDW data) are applied to three Alpine subcatchments of the Adige catchment (North-eastern Italy, 12100 km2) as precipitation inputs. Based on the calibration and validation results, model performances are evaluated according to the Nash-Sutcliffe Efficiency (NSE) and the Coefficient of Determination (R2). For all three subcatchments, the simulation results with IDW inputs are better than those with the original method, which uses measured inputs from the nearest station. This suggests that the IDW method could improve the model performance in Alpine catchments to some extent. By taking into account and weighting the distances to the precipitation records, IDW supplies more accurate precipitation inputs for each individual Alpine subcatchment, which as a whole leads to an improved description of the hydrological behavior of the entire Adige catchment.
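
    The contrast between the nearest-gauge rule and IDW is easy to see in miniature. A hedged sketch (gauge coordinates and rainfall values are invented):

```python
import numpy as np

def idw(xy_gauges, values, xy_target, power=2.0):
    """Inverse Distance Weighting: weights w_i = 1 / d_i**power, normalized."""
    d = np.linalg.norm(xy_gauges - xy_target, axis=1)
    if np.any(d == 0):                    # target coincides with a gauge
        return values[d == 0][0]
    w = 1.0 / d ** power
    return np.sum(w * values) / np.sum(w)

# Daily precipitation (mm) at four gauges and a subcatchment centroid.
gauges = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
p_daily = np.array([12.0, 3.0, 8.0, 0.0])
centroid = np.array([2.0, 3.0])

nearest = np.argmin(np.linalg.norm(gauges - centroid, axis=1))
print("nearest-gauge input:", p_daily[nearest])
print("IDW input          :", round(idw(gauges, p_daily, centroid), 2))
```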

  17. Inhalation Exposure Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    K. Rautenstrauch

    2004-09-10

    This analysis is one of 10 reports that support the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN) biosphere model. The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes in detail the conceptual model as well as the mathematical model and its input parameters. This report documents development of input parameters for the biosphere model that are related to atmospheric mass loading and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for a Yucca Mountain repository. Inhalation Exposure Input Parameters for the Biosphere Model is one of five reports that develop input parameters for the biosphere model. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the plan for development of the biosphere abstraction products for TSPA, as identified in the Technical Work Plan for Biosphere Modeling and Expert Support (BSC 2004 [DIRS 169573]). This analysis report defines and justifies values of mass loading for the biosphere model. Mass loading is the total mass concentration of resuspended particles (e.g., dust, ash) in a volume of air. Mass loading values are used in the air submodel of ERMYN to calculate concentrations of radionuclides in air inhaled by a receptor and concentrations in air surrounding crops. Concentrations in air to which the receptor is exposed are then used in the inhalation submodel to calculate the dose contribution to the receptor from inhalation of contaminated airborne particles. Concentrations in air surrounding plants are used in the plant submodel to calculate the concentrations of radionuclides in foodstuffs contributed from uptake by foliar interception.

  18. Computation of reduced energy input current stimuli for neuron phase models.

    Science.gov (United States)

    Anyalebechi, Jason; Koelling, Melinda E; Miller, Damon A

    2014-01-01

    A regularly spiking neuron can be studied using a phase model. The effect of an input stimulus current on the phase time derivative is captured by a phase response curve. This paper adapts a technique that was previously applied to conductance-based models to discover optimal input stimulus currents for phase models. First, the neuron phase response θ(t) due to an input stimulus current i(t) is computed using a phase model. The resulting θ(t) is taken to be a reference phase r(t). Second, an optimal input stimulus current i*(t) is computed to minimize a weighted sum of the square-integral `energy' of i*(t) and the tracking error between the reference phase r(t) and the phase response due to i*(t). The balance between the conflicting requirements of energy and tracking error minimization is controlled by a single parameter. The generated optimal current i*(t) is then compared to the input current i(t) which was used to generate the reference phase r(t). This technique was applied to two neuron phase models; in each case, the current i*(t) generates a phase response similar to the reference phase r(t), and the optimal current i*(t) has a lower `energy' than the square-integral of i(t). For constant i(t), the optimal current i*(t) need not be constant in time. In fact, i*(t) is large (possibly even larger than i(t)) for regions where the phase response curve indicates a stronger sensitivity to the input stimulus current, and smaller in regions of reduced sensitivity.
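
    One plausible formalization of the trade-off described (our notation; the paper's exact weighting may differ): with phase dynamics dθ/dt = ω + Z(θ) i(t), where Z is the phase response curve, the optimal current minimizes

```latex
J[i] = \alpha \int_{0}^{T} i(t)^{2}\, dt
     + \int_{0}^{T} \bigl( \theta_i(t) - r(t) \bigr)^{2}\, dt,
```

    with the single parameter α balancing the square-integral energy of i against the tracking error relative to the reference phase r(t).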

  19. The Dynamic Modeling of Multiple Pairs of Spur Gears in Mesh, Including Friction and Geometrical Errors

    Directory of Open Access Journals (Sweden)

    Shengxiang Jia

    2003-01-01

    This article presents a dynamic model of three shafts and two pairs of gears in mesh, with 26 degrees of freedom, including the effects of variable tooth stiffness, pitch and profile errors, friction, and a localized tooth crack on one of the gears. The article also details how geometrical errors in teeth can be included in a model. The model incorporates the effects of variations in torsional mesh stiffness in gear teeth by using a common formula to describe the stiffness that occurs as the gears mesh together. The comparison between the presence and absence of geometrical errors in teeth was made by using Matlab and Simulink models, which were developed from the equations of motion. The effects of pitch and profile errors on the coherent signal average of the input pinion's angular velocity are discussed by investigating some of the common diagnostic functions and changes to the frequency spectra results.

  20. Hybrid Models for Trajectory Error Modelling in Urban Environments

    Science.gov (United States)

    Angelatsa, E.; Parés, M. E.; Colomina, I.

    2016-06-01

    This paper tackles the first step of any strategy aiming to improve the trajectory of terrestrial mobile mapping systems in urban environments. We present an approach to model the error of terrestrial mobile mapping trajectories, combining deterministic and stochastic models. Due to the specifics of the urban environment, the deterministic component will be modelled with non-continuous functions composed of linear shifts, drifts or polynomial functions. In addition, we will introduce a stochastic error component for modelling the residual noise of the trajectory error function. The first step of error modelling requires knowing the actual trajectory error values for several representative environments. In order to determine the trajectory errors as accurately as possible, (almost) error-free trajectories should be estimated using non-semantic features extracted from a sequence of images collected with the terrestrial mobile mapping system and from a full set of ground control points. Once the references are estimated, they will be used to determine the actual errors in the terrestrial mobile mapping trajectory. The rigorous analysis of these data sets will allow us to characterize the errors of a terrestrial mobile mapping system for a wide range of environments. This information will be of great use in future campaigns to improve the results of 3D point cloud generation. The proposed approach has been evaluated using real data. The data originate from a mobile mapping campaign over an urban and controlled area of Dortmund (Germany), with adverse GNSS conditions. The mobile mapping system, which includes two laser scanners and two cameras, was mounted on a van and driven over a controlled area for around three hours. The results show the suitability of decomposing the trajectory error into non-continuous deterministic and stochastic components.

  1. Analysis of error-prone survival data under additive hazards models: measurement error effects and adjustments.

    Science.gov (United States)

    Yan, Ying; Yi, Grace Y

    2016-07-01

    Covariate measurement error occurs commonly in survival analysis. Under the proportional hazards model, measurement error effects have been well studied, and various inference methods have been developed to correct for error effects under such a model. In contrast, error-contaminated survival data under the additive hazards model have received relatively little attention. In this paper, we investigate this problem by exploring measurement error effects on parameter estimation and the change of the hazard function. New insights into measurement error effects are revealed, as opposed to well-documented results for the Cox proportional hazards model. We propose a class of bias correction estimators that embraces certain existing estimators as special cases. In addition, we exploit the regression calibration method to reduce measurement error effects. Theoretical results for the developed methods are established, and numerical assessments are conducted to illustrate the finite sample performance of our methods.
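
    For orientation, the two hazard formulations being contrasted, together with the classical additive measurement error model (standard notation, not taken verbatim from the paper):

```latex
\lambda(t \mid Z) = \lambda_0(t) + \beta^{\top} Z \quad \text{(additive hazards)},
\qquad
\lambda(t \mid Z) = \lambda_0(t) \exp(\beta^{\top} Z) \quad \text{(Cox proportional hazards)},
```

    where the error-prone surrogate W = Z + U, with U independent of Z, is observed in place of the true covariate Z.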

  2. The use of synthetic input sequences in time series modeling

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira, Dair Jose de [Programa de Pos-Graduacao em Engenharia Eletrica, Universidade Federal de Minas Gerais, Av. Antonio Carlos 6627, 31.270-901 Belo Horizonte, MG (Brazil); Letellier, Christophe [CORIA/CNRS UMR 6614, Universite et INSA de Rouen, Av. de l' Universite, BP 12, F-76801 Saint-Etienne du Rouvray cedex (France); Gomes, Murilo E.D. [Programa de Pos-Graduacao em Engenharia Eletrica, Universidade Federal de Minas Gerais, Av. Antonio Carlos 6627, 31.270-901 Belo Horizonte, MG (Brazil); Aguirre, Luis A. [Programa de Pos-Graduacao em Engenharia Eletrica, Universidade Federal de Minas Gerais, Av. Antonio Carlos 6627, 31.270-901 Belo Horizonte, MG (Brazil)], E-mail: aguirre@cpdee.ufmg.br

    2008-08-04

    In many situations time series models obtained from noise-like data settle to trivial solutions under iteration. This Letter proposes a way of producing a synthetic (dummy) input, that is included to prevent the model from settling down to a trivial solution, while maintaining features of the original signal. Simulated benchmark models and a real time series of RR intervals from an ECG are used to illustrate the procedure.

  3. The use of synthetic input sequences in time series modeling

    Science.gov (United States)

    de Oliveira, Dair José; Letellier, Christophe; Gomes, Murilo E. D.; Aguirre, Luis A.

    2008-08-01

    In many situations time series models obtained from noise-like data settle to trivial solutions under iteration. This Letter proposes a way of producing a synthetic (dummy) input, that is included to prevent the model from settling down to a trivial solution, while maintaining features of the original signal. Simulated benchmark models and a real time series of RR intervals from an ECG are used to illustrate the procedure.

  4. Environmental Transport Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    M. Wasiolek

    2004-09-10

    This analysis report is one of the technical reports documenting the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), a biosphere model supporting the total system performance assessment for the license application (TSPA-LA) for the geologic repository at Yucca Mountain. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows relationships among the reports developed for biosphere modeling and biosphere abstraction products for the TSPA-LA, as identified in the ''Technical Work Plan for Biosphere Modeling and Expert Support'' (BSC 2004 [DIRS 169573]) (TWP). This figure provides an understanding of how this report contributes to biosphere modeling in support of the license application (LA). This report is one of the five reports that develop input parameter values for the biosphere model. The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes the conceptual model and the mathematical model. The input parameter reports, shown to the right of the Biosphere Model Report in Figure 1-1, contain detailed description of the model input parameters. The output of this report is used as direct input in the ''Nominal Performance Biosphere Dose Conversion Factor Analysis'' and in the ''Disruptive Event Biosphere Dose Conversion Factor Analysis'' that calculate the values of biosphere dose conversion factors (BDCFs) for the groundwater and volcanic ash exposure scenarios, respectively. The purpose of this analysis was to develop biosphere model parameter values related to radionuclide transport and accumulation in the environment. These parameters support calculations of radionuclide concentrations in the environmental media (e.g., soil, crops, animal products, and air) resulting from a given radionuclide concentration at the source of contamination (i.e., either in groundwater or in volcanic ash). The analysis

  5. Modeling human response errors in synthetic flight simulator domain

    Science.gov (United States)

    Ntuen, Celestine A.

    1992-01-01

    This paper presents a control theoretic approach to modeling human response errors (HRE) in the flight simulation domain. The human pilot is modeled as a supervisor of a highly automated system. The synthesis uses the theory of optimal control pilot modeling for integrating the pilot's observation error and the error due to the simulation model (experimental error). Methods for solving the HRE problem are suggested. Experimental verification of the models will be tested in a flight quality handling simulation.

  6. Estimating Model Prediction Error: Should You Treat Predictions as Fixed or Random?

    Science.gov (United States)

    Wallach, Daniel; Thorburn, Peter; Asseng, Senthold; Challinor, Andrew J.; Ewert, Frank; Jones, James W.; Rotter, Reimund; Ruane, Alexander

    2016-01-01

    Crop models are important tools for impact assessment of climate change, as well as for exploring management options under current climate. It is essential to evaluate the uncertainty associated with predictions of these models. We compare two criteria of prediction error: MSEP_fixed, which evaluates the mean squared error of prediction for a model with fixed structure, parameters and inputs, and MSEP_uncertain(X), which evaluates the mean squared error averaged over the distributions of model structure, inputs and parameters. Comparison of model outputs with data can be used to estimate the former. The latter has a squared bias term, which can be estimated using hindcasts, and a model variance term, which can be estimated from a simulation experiment. The separate contributions to MSEP_uncertain(X) can be estimated using a random effects ANOVA. It is argued that MSEP_uncertain(X) is the more informative uncertainty criterion, because it is specific to each prediction situation.
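
    A minimal sketch of the two-term estimate of MSEP_uncertain(X) (synthetic numbers; the paper's random effects ANOVA decomposes the variance term further):

```python
import numpy as np

rng = np.random.default_rng(5)

# Hindcast residuals against observations give the squared-bias term;
# an ensemble over sampled structures, inputs and parameters gives the
# model-variance term. All numbers here are invented.
obs = rng.normal(8.0, 1.0, 40)                    # observed yields
hindcast = obs + rng.normal(0.6, 0.8, obs.size)   # model hindcasts

sq_bias = np.mean(hindcast - obs) ** 2            # estimated squared bias

# Simulation experiment: predictions for one new situation under
# resampled model structure, inputs and parameters.
ensemble = rng.normal(9.1, 0.9, 200)
model_var = ensemble.var(ddof=1)

msep_uncertain = sq_bias + model_var
print(f"bias^2 = {sq_bias:.2f}, model variance = {model_var:.2f}, "
      f"MSEP_uncertain ~ {msep_uncertain:.2f}")
```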

  7. System modeling based measurement error analysis of digital sun sensors

    Institute of Scientific and Technical Information of China (English)

    WEI Minsong; XING Fei; WANG Geng; YOU Zheng

    2015-01-01

    Stringent attitude determination accuracy is required for the development of advanced space technologies, and thus accuracy improvement of digital sun sensors is necessary. In this paper, we present a proposal for measurement error analysis of a digital sun sensor. A system model including three different error sources was built and employed for system error analysis. Numerical simulations were also conducted to study the measurement error introduced by the different sources of error. Based on our model and study, the system errors from the different error sources are coupled, and the system calibration should be elaborately designed to realize a digital sun sensor with extra-high accuracy.

  8. Numerical study of an error model for a strap-down INS

    Science.gov (United States)

    Grigorie, T. L.; Sandu, D. G.; Corcau, C. L.

    2016-10-01

    The paper presents a numerical study of a mathematical error model developed for a strap-down inertial navigation system. The study aims to validate the error model by using Matlab/Simulink software models implementing the inertial navigator and the error model mathematics. To generate the inputs for the evaluation Matlab/Simulink software, inertial sensor software models are used. The sensor models were developed based on the IEEE equivalent models for the inertial sensors and on analysis of the data sheets of real inertial sensors. The paper successively presents the inertial navigation equations (attitude, position and speed), the mathematics of the inertial navigator error model, the software implementations, and the numerical evaluation results.

  9. Inhalation Exposure Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    M. Wasiolek

    2006-06-05

    This analysis is one of the technical reports that support the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), referred to in this report as the biosphere model. ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes in detail the conceptual model as well as the mathematical model and its input parameters. This report documents development of input parameters for the biosphere model that are related to atmospheric mass loading and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for a Yucca Mountain repository. ''Inhalation Exposure Input Parameters for the Biosphere Model'' is one of five reports that develop input parameters for the biosphere model. A graphical representation of the documentation hierarchy for the biosphere model is presented in Figure 1-1 (based on BSC 2006 [DIRS 176938]). This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling and how this analysis report contributes to biosphere modeling. This analysis report defines and justifies values of atmospheric mass loading for the biosphere model. Mass loading is the total mass concentration of resuspended particles (e.g., dust, ash) in a volume of air. Mass loading values are used in the air submodel of the biosphere model to calculate concentrations of radionuclides in air inhaled by a receptor and concentrations in air surrounding crops. Concentrations in air to which the receptor is exposed are then used in the inhalation submodel to calculate the dose contribution to the receptor from inhalation of contaminated airborne particles. Concentrations in air surrounding plants are used in the plant submodel to calculate the concentrations of radionuclides in foodstuffs contributed from uptake by foliar interception. This

  10. A statistical model for point-based target registration error with anisotropic fiducial localizer error.

    Science.gov (United States)

    Wiles, Andrew D; Likholyot, Alexander; Frantz, Donald D; Peters, Terry M

    2008-03-01

    Error models associated with point-based medical image registration problems were first introduced in the late 1990s. The concepts of fiducial localizer error, fiducial registration error, and target registration error are commonly used in the literature. The model for estimating the target registration error at a position r in a coordinate frame defined by a set of fiducial markers rigidly fixed relative to one another is ubiquitous in the medical imaging literature. The model has also been extended to simulate the target registration error at the point of interest in optically tracked tools. However, the model is limited to describing the error in situations where the fiducial localizer error is assumed to have an isotropic normal distribution in R^3. In this work, the model is generalized to include a fiducial localizer error that has an anisotropic normal distribution. Similar to the previous models, the root mean square statistic RMS_TRE is provided, along with an extension that provides the covariance Σ_TRE. The new model is verified using a Monte Carlo simulation and a set of statistical hypothesis tests. Finally, the differences between the two assumptions, isotropic and anisotropic, are discussed within the context of their use in 1) optical tool tracking simulation and 2) image registration.
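
    A Monte Carlo check of this kind is straightforward to reproduce in miniature. A hedged sketch (fiducial layout, target point and noise levels are invented; the registration is a standard Kabsch/SVD rigid fit, not the paper's closed-form model):

```python
import numpy as np

rng = np.random.default_rng(6)

def rigid_register(src, dst):
    """Least-squares rigid (rotation + translation) fit via Kabsch/SVD."""
    sc, dc = src.mean(0), dst.mean(0)
    H = (src - sc).T @ (dst - dc)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, dc - R @ sc

# Fiducial markers of a tracked tool and a target point (tool tip), mm.
fids = np.array([[0, 0, 0], [50, 0, 0], [0, 50, 0], [25, 25, 10]], float)
target = np.array([100.0, 0.0, -50.0])

# Anisotropic FLE: optical trackers are typically noisier along the
# viewing (z) axis; the sigmas are illustrative.
sigma = np.array([0.10, 0.10, 0.30])

tre = []
for _ in range(20_000):
    noisy = fids + rng.normal(0.0, sigma, fids.shape)
    R, t = rigid_register(fids, noisy)
    tre.append(np.linalg.norm((R @ target + t) - target))
print(f"RMS TRE at target: {np.sqrt(np.mean(np.square(tre))):.3f} mm")
```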

  11. How well can we forecast future model error and uncertainty by mining past model performance data

    Science.gov (United States)

    Solomatine, Dimitri

    2016-04-01

    Consider a hydrological model Y(t) = M(X(t), P), where X = vector of inputs; P = vector of parameters; Y = model output (typically flow); t = time. In cases when there is enough past data on the performance of model M, it is possible to use this data to build a (data-driven) model EC of the error of model M. This model EC will be able to forecast the error E when a new input X is fed into model M; then, by subtracting E from the model prediction Y, a better estimate of Y can be obtained. Model EC is usually called the error corrector (in meteorology, a bias corrector). However, we may go further in characterizing model deficiencies, and instead of using the error (a real value) we may consider a more sophisticated characterization, namely a probabilistic one. So instead of a model EC of the error of model M, it is also possible to build a model U of the uncertainty of model M; if uncertainty is described as the model error distribution D, this model will calculate its properties - mean, variance, other moments, and quantiles. The general form of this model could be: D = U(RV), where RV = vector of relevant variables having influence on model uncertainty (to be identified e.g. by mutual information analysis); D = vector of variables characterizing the error distribution (typically, two or more quantiles). There is one aspect which is not always explicitly mentioned in uncertainty analysis work. In our view it is important to distinguish the following main types of model uncertainty: 1. The residual uncertainty of models. In this case the model parameters and/or model inputs are considered to be fixed (deterministic), i.e. the model is considered to be optimal (calibrated) and deterministic. Model error is considered as the manifestation of uncertainty. If there is enough past data about the model errors (i.e. its uncertainty), it is possible to build a statistical or machine learning model of uncertainty trained on this data. Here the following methods can be mentioned: (a) quantile regression (QR
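
    A minimal sketch of the error corrector idea (synthetic model and data; the regressor choice is arbitrary): learn past errors of a deterministic model, then forecast and subtract them for new inputs.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(7)

# Stand-ins: X = model inputs, y_obs = observed flow, y_sim = output of
# a deterministic (calibrated but structurally deficient) model M.
n = 2000
X = rng.uniform(0.0, 1.0, (n, 2))
y_obs = 3.0 * X[:, 0] + np.sin(6.0 * X[:, 1]) + rng.normal(0.0, 0.1, n)
y_sim = 3.0 * X[:, 0] + 0.5                  # M misses the sin(...) term

# Error corrector EC: learn the past error E = Y_sim - Y_obs from inputs.
E = y_sim - y_obs
ec = GradientBoostingRegressor().fit(X[:1500], E[:1500])

# For new inputs, forecast E and subtract it from the model prediction.
y_corr = y_sim[1500:] - ec.predict(X[1500:])
rmse = lambda a, b: float(np.sqrt(np.mean((a - b) ** 2)))
print("RMSE raw model :", round(rmse(y_sim[1500:], y_obs[1500:]), 3))
print("RMSE corrected :", round(rmse(y_corr, y_obs[1500:]), 3))
```

    Switching the regressor to GradientBoostingRegressor(loss="quantile", alpha=0.9) turns the same pipeline into a one-quantile uncertainty model U, in the spirit of the quantile regression option the abstract begins to list.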

  12. Cognitive modelling of pilot errors and error recovery in flight management tasks

    NARCIS (Netherlands)

    Lüdtke, A.; Osterloh, J.P.; Mioch, T.; Rister, F.; Looije, R.

    2009-01-01

    This paper presents a cognitive modelling approach to predict pilot errors and error recovery during the interaction with aircraft cockpit systems. The model allows execution of flight procedures in a virtual simulation environment and production of simulation traces. We present traces for the inter

  13. Agricultural and Environmental Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    K. Rasmuson; K. Rautenstrauch

    2004-09-14

    This analysis is one of 10 technical reports that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN) (i.e., the biosphere model). It documents development of agricultural and environmental input parameters for the biosphere model, and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for the repository at Yucca Mountain. The ERMYN provides the TSPA with the capability to perform dose assessments. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships between the major activities and their products (the analysis and model reports) that were planned in ''Technical Work Plan for Biosphere Modeling and Expert Support'' (BSC 2004 [DIRS 169573]). The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes the ERMYN and its input parameters.

  14. Predictive vegetation modeling for conservation: impact of error propagation from digital elevation data.

    Science.gov (United States)

    Van Niel, Kimberly P; Austin, Mike P

    2007-01-01

    The effect of digital elevation model (DEM) error on environmental variables, and subsequently on predictive habitat models, has not been explored. Based on an error analysis of a DEM, multiple error realizations of the DEM were created and used to develop both direct and indirect environmental variables for input to predictive habitat models. The study explores the effects of DEM error and the resultant uncertainty of results on typical steps in the modeling procedure for prediction of vegetation species presence/absence. Results indicate that all of these steps and results, including the statistical significance of environmental variables, shapes of species response curves in generalized additive models (GAMs), stepwise model selection, coefficients and standard errors for generalized linear models (GLMs), prediction accuracy (Cohen's kappa and AUC), and spatial extent of predictions, were greatly affected by this type of error. Error in the DEM can affect the reliability of interpretations of model results and level of accuracy in predictions, as well as the spatial extent of the predictions. We suggest that the sensitivity of DEM-derived environmental variables to error in the DEM should be considered before including them in the modeling processes.

  15. A generic method for automatic translation between input models for different versions of simulation codes

    Energy Technology Data Exchange (ETDEWEB)

    Serfontein, Dawid E., E-mail: Dawid.Serfontein@nwu.ac.za [School of Mechanical and Nuclear Engineering, North West University (PUK-Campus), PRIVATE BAG X6001 (Internal Post Box 360), Potchefstroom 2520 (South Africa); Mulder, Eben J. [School of Mechanical and Nuclear Engineering, North West University (South Africa); Reitsma, Frederik [Calvera Consultants (South Africa)

    2014-05-01

    A computer code was developed for the semi-automatic translation of input models for the VSOP-A diffusion neutronics simulation code to the format of the newer VSOP 99/05 code. In this paper, this algorithm is presented as a generic method for producing codes for the automatic translation of input models from the format of one code version to another, or even to that of a completely different code. Normally, such translations are done manually. However, input model files, such as those for the VSOP codes, are often very large and may consist of many thousands of numeric entries that make no particular sense to the human eye. The task of, for instance, nuclear regulators to verify the accuracy of such translated files can therefore be very difficult and cumbersome. This may cause translation errors to go undetected, which may have disastrous consequences later on when a reactor with such a faulty design is built. A generic algorithm for producing such automatic translation codes may therefore ease the translation and verification process to a great extent. It will also remove human error from the process, which may significantly enhance the accuracy and reliability of the process. The developed algorithm also automatically creates a verification log file which permanently records the names and values of each variable used, as well as the list of meanings of all the possible values. This should greatly facilitate reactor licensing applications.

  16. Role of Forcing Uncertainty and Background Model Error Characterization in Snow Data Assimilation

    Science.gov (United States)

    Kumar, Sujay V.; Dong, Jiarul; Peters-Lidard, Christa D.; Mocko, David; Gomez, Breogan

    2017-01-01

    Accurate specification of the model error covariances in data assimilation systems is a challenging issue. Ensemble land data assimilation methods rely on stochastic perturbations of input forcing and model prognostic fields for developing representations of input model error covariances. This article examines the limitations of using a single forcing dataset for specifying forcing uncertainty inputs for assimilating snow depth retrievals. Using an idealized data assimilation experiment, the article demonstrates that the use of hybrid forcing input strategies (either through the use of an ensemble of forcing products or through the added use of the forcing climatology) provide a better characterization of the background model error, which leads to improved data assimilation results, especially during the snow accumulation and melt-time periods. The use of hybrid forcing ensembles is then employed for assimilating snow depth retrievals from the AMSR2 (Advanced Microwave Scanning Radiometer 2) instrument over two domains in the continental USA with different snow evolution characteristics. Over a region near the Great Lakes, where the snow evolution tends to be ephemeral, the use of hybrid forcing ensembles provides significant improvements relative to the use of a single forcing dataset. Over the Colorado headwaters characterized by large snow accumulation, the impact of using the forcing ensemble is less prominent and is largely limited to the snow transition time periods. The results of the article demonstrate that improving the background model error through the use of a forcing ensemble enables the assimilation system to better incorporate the observational information.

  18. Radiation risk estimation based on measurement error models

    CERN Document Server

    Masiuk, Sergii; Shklyar, Sergiy; Chepurny, Mykola; Likhtarov, Illya

    2017-01-01

    This monograph discusses statistics and risk estimates applied to radiation damage in the presence of measurement errors. The first part covers nonlinear measurement error models, with a particular emphasis on the efficiency of regression parameter estimators. In the second part, risk estimation in models with measurement errors is considered. The efficiency of the methods presented is verified using data from radio-epidemiological studies.

  19. Determining avalanche modelling input parameters using terrestrial laser scanning technology

    OpenAIRE

    2013-01-01

    In dynamic avalanche modelling, data about the volumes and areas of the snow released, mobilized and deposited are key input parameters, as well as the fracture height. The fracture height can sometimes be measured in the field, but it is often difficult to access the starting zone due to difficult or dangerous terrain and avalanche hazards. More complex is determining the areas and volumes of snow involved in an avalanche. Such calculations require high-resolution spa...

  20. Land Building Models: Uncertainty in and Sensitivity to Input Parameters

    Science.gov (United States)

    2013-08-01

    By Ty V. Wamsley. ERDC/CHL CHETN-VI-44, August 2013. PURPOSE: The purpose of this Coastal and Hydraulics Engineering Technical Note (CHETN) is to document the uncertainty in, and sensitivity to, input parameters of land building models applied in the Louisiana Coastal Area (LCA) Ecosystem Restoration Projects Study.

  1. Influence of magnetospheric inputs definition on modeling of ionospheric storms

    Science.gov (United States)

    Tashchilin, A. V.; Romanova, E. B.; Kurkin, V. I.

    Numerical modeling of ionospheric storms usually relies on empirical models to specify the parameters of the neutral atmosphere and the magnetosphere. The statistical nature of these models makes them impractical for simulating an individual storm, so the empirical models must be corrected using additional assumptions. This work investigates the influence of magnetospheric inputs, such as the distributions of electric potential and the number and energy fluxes of precipitating electrons, on the results of ionospheric storm simulations. For the strong geomagnetic storm of September 25, 1998, hourly global distributions of these magnetospheric inputs from September 20 to 27 were calculated by the magnetogram inversion technique (MIT). Using a 3-D ionospheric model, two variants of the ionospheric response to this magnetic storm were then simulated: one using the MIT data and one using empirical models of the electric fields (Sojka et al., 1986) and electron precipitation (Hardy et al., 1985). Comparison of the results showed that, for high-latitude and subauroral stations, the daily variations of electron density calculated with the MIT data are closer to observations than those from the empirical models. In addition, using the MIT data reveals some peculiarities in the daily variations of electron density during a strong geomagnetic storm. References: Sojka J.J., Rasmussen C.E., Schunk R.W., J. Geophys. Res., 1986, N10, p. 11281. Hardy D.A., Gussenhoven M.S., Holeman E.A., J. Geophys. Res., 1985, N5, p. 4229.

  2. VOLUMETRIC ERROR COMPENSATION IN FIVE-AXIS CNC MACHINING CENTER THROUGH KINEMATICS MODELING OF GEOMETRIC ERROR

    Directory of Open Access Journals (Sweden)

    Pooyan Vahidi Pashsaki

    2016-06-01

    The accuracy of a five-axis CNC machine tool is affected by a vast number of error sources. This paper investigates volumetric error modeling and its compensation as the basis for creating new tool paths that improve workpiece accuracy. The volumetric error model of a five-axis machine tool with the RTTTR configuration (tilting head B axis and rotary table A axis on the workpiece side) was set up using rigid-body kinematics and homogeneous transformation matrices, and includes 43 error components. Each of these 43 components can separately reduce the geometrical and dimensional accuracy of the workpiece. Machining accuracy is determined by the position of the tool center point (TCP) relative to the workpiece; when the cutting tool deviates from its ideal position relative to the workpiece, a machining error results. The compensation process consists of detecting the present tool path, analyzing the geometrical errors of the RTTTR five-axis machine tool, translating the current component positions to compensated positions using the kinematic error model, converting the result to new tool paths using the compensation algorithms, and finally editing the old G-codes with a G-code generator algorithm.
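
    As a highly reduced sketch of the rigid-body-kinematics idea (not the paper's 43-component model; the kinematic chain, error magnitudes, and units below are assumed), one can compose homogeneous transformation matrices with and without small error terms and read off the resulting TCP deviation.

        import numpy as np

        def trans(x, y, z):
            T = np.eye(4); T[:3, 3] = [x, y, z]; return T

        def small_rot(dx, dy, dz):
            # first-order rotation error matrix for small angles (rad)
            T = np.eye(4)
            T[:3, :3] = [[1, -dz, dy], [dz, 1, -dx], [-dy, dx, 1]]
            return T

        # nominal kinematic chain: base -> column -> tool carrier (mm)
        T_nominal = trans(0.0, 0.0, 500.0) @ trans(100.0, 0.0, 0.0)
        # same chain with assumed angular (10 urad) and linear (5 um) errors
        T_actual = (trans(0.0, 0.0, 500.0) @ small_rot(10e-6, 0, 0) @
                    trans(100.0, 0.0, 0.0) @ trans(5e-3, 0, 0))

        tcp = np.array([0.0, 0.0, 50.0, 1.0])        # tool point, mm
        deviation = (T_actual - T_nominal) @ tcp
        print("TCP deviation (mm):", deviation[:3])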

  3. Scaling precipitation input to spatially distributed hydrological models by measured snow distribution

    Directory of Open Access Journals (Sweden)

    Christian Vögeli

    2016-12-01

    Accurate knowledge of snow distribution in alpine terrain is crucial for various applications such as flood risk assessment, avalanche warning or managing water supply and hydro-power. To simulate the seasonal snow cover development in alpine terrain, the spatially distributed, physics-based model Alpine3D is suitable. The model is typically driven by spatial interpolations of observations from automatic weather stations (AWS), leading to errors in the spatial distribution of atmospheric forcing. With recent advances in remote sensing techniques, maps of snow depth can be acquired with high spatial resolution and accuracy. In this work, maps of the snow depth distribution, calculated from summer and winter digital surface models based on Airborne Digital Sensors (ADS), are used to scale precipitation input data, with the aim to improve the accuracy of simulation of the spatial distribution of snow with Alpine3D. A simple method to scale and redistribute precipitation is presented and its performance is analysed. The scaling method is only applied if it is snowing; for rainfall the precipitation is distributed by interpolation, with a simple air temperature threshold used for the determination of the precipitation phase. It was found that the accuracy of the spatial snow distribution could be improved significantly for the simulated domain. The standard deviation of the absolute snow depth error is reduced by up to a factor of 3.4, to less than 20 cm. The mean absolute error in snow distribution was reduced when using representative input sources for the simulation domain. For inter-annual scaling, the model performance could also be improved, even when using a remote sensing dataset from a different winter. In conclusion, using remote sensing data to process precipitation input, complex processes such as preferential snow deposition and snow relocation due to wind or avalanches can be substituted, and the modelling performance of the spatial snow distribution is improved.
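
    A minimal sketch of the scaling idea, with invented field names and values (the real method operates on Alpine3D forcing grids): solid precipitation is redistributed according to the relative pattern of a measured snow-depth map, while rain is left on the interpolated field, using an air-temperature threshold for the phase.

        import numpy as np

        def scale_precip(precip, t_air, snow_depth_map, t_threshold=1.0):
            """precip, t_air: interpolated fields; snow_depth_map: ADS-type map."""
            weights = snow_depth_map / snow_depth_map.mean()  # relative pattern
            snowing = t_air < t_threshold                     # phase decision
            return np.where(snowing, precip * weights, precip)

        rng = np.random.default_rng(1)
        precip = np.full((50, 50), 2.0)                  # mm/h, uniform input
        t_air = rng.normal(-2.0, 3.0, precip.shape)      # deg C
        depth = np.abs(rng.normal(1.0, 0.5, precip.shape))
        print(scale_precip(precip, t_air, depth).mean())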

  4. Performance Assessment of Hydrological Models Considering Acceptable Forecast Error Threshold

    Directory of Open Access Journals (Sweden)

    Qianjin Dong

    2015-11-01

    It is essential to consider an acceptable error threshold in the assessment of hydrological models, both because research on this issue is scarce in the hydrology community and because forecast errors do not necessarily translate into risk. Two forecast errors, the rainfall forecast error and the peak flood forecast error, are studied here based on reliability theory. The first-order second-moment (FOSM) and bound methods are used to identify the reliability. A case study of the Dahuofang (DHF) Reservoir shows that the correlation between these two errors has a great influence on the reliability index of the hydrological model; in particular, the reliability index of the DHF hydrological model decreases with increasing correlation. Based on reliability theory, the proposed performance evaluation framework, which incorporates the acceptable forecast error threshold and the correlation among multiple errors, can be used to evaluate the performance of a hydrological model and to quantify the uncertainties of its output.
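
    The role of the error correlation can be illustrated with a small first-order second-moment (FOSM) computation. All coefficients below are invented; the sketch only reproduces the qualitative finding that the reliability index drops as the correlation between the two forecast errors grows.

        import numpy as np

        def fosm_beta(threshold, a, mu, sigma, rho):
            # linear performance function Z = threshold - (a1*E1 + a2*E2)
            mu_z = threshold - a[0] * mu[0] - a[1] * mu[1]
            var_z = (a[0] * sigma[0])**2 + (a[1] * sigma[1])**2 \
                    + 2 * rho * a[0] * a[1] * sigma[0] * sigma[1]
            return mu_z / np.sqrt(var_z)   # first-order second-moment index

        for rho in (0.0, 0.3, 0.6, 0.9):
            beta = fosm_beta(threshold=1.0, a=(1.0, 1.0),
                             mu=(0.1, 0.2), sigma=(0.15, 0.25), rho=rho)
            print(f"rho={rho:.1f}  beta={beta:.2f}")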

  5. Probe Error Modeling Research Based on Bayesian Network

    Institute of Scientific and Technical Information of China (English)

    Wu Huaiqiang; Xing Zilong; Zhang Jian; Yan Yan

    2015-01-01

    Probe calibration is carried out under specific conditions, so most of the error caused by changes in the speed parameter is not corrected. To reduce the influence of this measuring error on measurement accuracy, this article analyzes the relationship between the speed parameter and the probe error, and uses a Bayesian network to establish a model of the probe error. The model takes account of both prior knowledge and sample data; as data are updated, it can reflect changes in the probe errors and continually revise the modeling results.

  6. Sensitivity analysis of a sound absorption model with correlated inputs

    Science.gov (United States)

    Chai, W.; Christen, J.-L.; Zine, A.-M.; Ichchou, M.

    2017-04-01

    Sound absorption in porous media is a complex phenomenon, which is usually addressed with homogenized models depending on macroscopic parameters. Since these parameters emerge from the structure at the microscopic scale, they may be correlated. This paper deals with sensitivity analysis methods for a sound absorption model with correlated inputs. Specifically, the Johnson-Champoux-Allard (JCA) model is chosen as the objective model, with correlation effects generated by a secondary micro-macro semi-empirical model. To deal with this case, a relatively new sensitivity analysis method, the Fourier Amplitude Sensitivity Test with Correlation design (FASTC) based on Iman's transform, is applied. This method requires a priori information such as the variables' marginal distribution functions and their correlation matrix. The results are compared to the Correlation Ratio Method (CRM) for reference and validation. The distribution of the macroscopic variables arising from the microstructure, as well as their correlation matrix, is studied. Finally, the test results show that the correlation has a very important impact on the results of the sensitivity analysis. The effect of the correlation strength among the input variables on the sensitivity analysis is also assessed.
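
    One ingredient of such correlated-input analyses is the generation of input samples with prescribed marginals and a target correlation. The following Gaussian-copula sketch is a stand-in, not the FASTC implementation itself; the marginals and correlation matrix are invented, not JCA parameter values.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        corr = np.array([[1.0, 0.7], [0.7, 1.0]])        # target correlation
        L = np.linalg.cholesky(corr)
        z = rng.standard_normal((10_000, 2)) @ L.T       # correlated normals
        u = stats.norm.cdf(z)                            # uniform marginals

        # map to the desired marginals via the inverse CDF (ppf)
        porosity = stats.uniform(0.90, 0.09).ppf(u[:, 0])        # in [0.90, 0.99]
        tortuosity = stats.lognorm(s=0.2, scale=1.5).ppf(u[:, 1])
        print(np.corrcoef(porosity, tortuosity)[0, 1])   # close to the 0.7 target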

  7. Deterministic treatment of model error in geophysical data assimilation

    CERN Document Server

    Carrassi, Alberto

    2015-01-01

    This chapter describes a novel approach for the treatment of model error in geophysical data assimilation. In this method, model error is treated as a deterministic process fully correlated in time. This allows for the derivation of the evolution equations for the relevant moments of the model error statistics required in data assimilation procedures, along with an approximation suitable for application to large numerical models typical of environmental science. In this contribution we first derive the equations for the model error dynamics in the general case, and then for the particular situation of parametric error. We show how this deterministic description of the model error can be incorporated in sequential and variational data assimilation procedures. A numerical comparison with standard methods is given using low-order dynamical systems, prototypes of atmospheric circulation, and a realistic soil model. The deterministic approach proves to be very competitive with only minor additional computational cost.

  8. Assessing and propagating uncertainty in model inputs in corsim

    Energy Technology Data Exchange (ETDEWEB)

    Molina, G.; Bayarri, M. J.; Berger, J. O.

    2001-07-01

    CORSIM is a large simulator for vehicular traffic, and is being studied with respect to its ability to successfully model and predict behavior of traffic in a 36-block section of Chicago. Inputs to the simulator include information about street configuration, driver behavior, traffic light timing, turning probabilities at each corner and distributions of traffic ingress into the system. This work is described in more detail in the article Fast Simulators for Assessment and Propagation of Model Uncertainty, also in these proceedings. The focus of this conference poster is on the computational aspects of this problem. In particular, we address the description of the full conditional distributions needed for implementation of the MCMC algorithm and, in particular, how the constraints can be incorporated; details concerning the run time and convergence of the MCMC algorithm; and utilisation of the MCMC output for prediction and uncertainty analysis concerning the CORSIM computer model. As this last task is the ultimate goal, it is worth emphasizing that the incorporation of all uncertainty concerning inputs can significantly affect the model predictions.

  9. Soil-Related Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    A. J. Smith

    2004-09-09

    This report presents one of the analyses that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN). The "Biosphere Model Report" (BSC 2004 [DIRS 169460]) describes the details of the conceptual model as well as the mathematical model and the required input parameters. The biosphere model is one of a series of process models supporting the postclosure Total System Performance Assessment (TSPA) for the Yucca Mountain repository. A schematic representation of the documentation flow for the Biosphere input to TSPA is presented in Figure 1-1. This figure shows the evolutionary relationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the biosphere abstraction products for TSPA, as identified in the "Technical Work Plan for Biosphere Modeling and Expert Support" (TWP) (BSC 2004 [DIRS 169573]). This figure is included to provide an understanding of how this analysis report contributes to biosphere modeling in support of the license application, and is not intended to imply that access to the listed documents is required to understand the contents of this report. This report, "Soil-Related Input Parameters for the Biosphere Model", is one of the five analysis reports that develop input parameters for use in the ERMYN model. This report is the source documentation for the six biosphere parameters identified in Table 1-1. The purpose of this analysis was to develop the biosphere model parameters associated with the accumulation and depletion of radionuclides in the soil. These parameters support the calculation of radionuclide concentrations in soil from on-going irrigation or ash deposition and, as a direct consequence, radionuclide concentration in other environmental media that are affected by radionuclide concentrations in soil. The analysis was performed in accordance with the TWP (BSC 2004 [DIRS 169573]) where the governing procedure

  10. An error assessment of the kriging based approximation model using a mean square error

    Energy Technology Data Exchange (ETDEWEB)

    Ju, Byeong Hyeon; Cho, Tae Min; Lee, Byung Chai [Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of); Jung, Do Hyun [Korea Automotive Technology Institute, Chonan (Korea, Republic of)

    2006-08-15

    A kriging model is a type of approximation model used as a deterministic surrogate for a computationally expensive analysis or simulation. Although it has various advantages, it is difficult to assess the accuracy of the approximated model. It is generally known that, unlike a response surface method, the mean square error (MSE) obtained from a kriging model cannot provide statistically exact error bounds, so cross validation is mainly used instead. But cross validation also carries many uncertainties, and it cannot be used when a maximum error is required over a given region. To solve this problem, we first propose a modified mean square error that can consider relative errors. Using the modified mean square error, we develop a strategy of adding a new sample at the location where the MSE is largest when the MSE is used to assess the kriging model. Finally, we offer guidelines for the use of the MSE obtained from a kriging model. Four test problems show that the proposed strategy is a proper method for assessing the accuracy of a kriging model. Based on the results of the four test problems, a convergence coefficient of 0.01 is recommended for an exact function approximation.
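
    The max-MSE refinement strategy can be sketched with a one-dimensional simple-kriging model (a unit-variance squared-exponential covariance with an assumed length scale). Note that the kriging MSE depends only on the sample locations, so no response values are needed to pick the next sample.

        import numpy as np

        def rbf(a, b, length=0.15):
            # unit-variance squared-exponential covariance
            return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / length**2)

        X = np.array([0.1, 0.4, 0.9])        # initial sample sites in [0, 1]
        grid = np.linspace(0.0, 1.0, 201)

        for it in range(5):
            K = rbf(X, X) + 1e-10 * np.eye(len(X))   # jitter for conditioning
            k = rbf(grid, X)
            w = np.linalg.solve(K, k.T)              # kriging weights
            mse = 1.0 - np.einsum('ij,ji->i', k, w)  # posterior variance (MSE)
            x_new = grid[np.argmax(mse)]             # refine where MSE is largest
            X = np.append(X, x_new)
            print(f"iteration {it}: new sample at x={x_new:.3f}, "
                  f"max MSE={mse.max():.4f}")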

  11. Error Model of Curves in GIS and Digitization Experiment

    Institute of Scientific and Technical Information of China (English)

    GUO Tongde; WANG Jiayao; WANG Guangxia

    2006-01-01

    A stochastic error process of curves is proposed as the error model to describe the errors of curves in GIS. In terms of the stochastic process, four characteristics concerning the local error of curves, namely, the mean error function, standard error function, absolute error function, and the correlation function of errors, are put forward. The total error of a curve is expressed by a mean square integral of the stochastic error process. The probabilistic and geometric meanings of the characteristics mentioned above are also discussed. A scan digitization experiment is designed to check the efficiency of the model. In the experiment, a piece of contour line is digitized more than 100 times and many sample functions are derived from the experiment. Finally, all the error characteristics are estimated on the basis of the sample functions. The experiment results show that the systematic error in digitized map data is not negligible, and that the errors of points on curves depend chiefly on the curvature and the concavity of the curves.

  12. Evaluating the uncertainty of input quantities in measurement models

    Science.gov (United States)

    Possolo, Antonio; Elster, Clemens

    2014-06-01

    The Guide to the Expression of Uncertainty in Measurement (GUM) gives guidance about how values and uncertainties should be assigned to the input quantities that appear in measurement models. This contribution offers a concrete proposal for how that guidance may be updated in light of the advances in the evaluation and expression of measurement uncertainty that were made in the course of the twenty years that have elapsed since the publication of the GUM, and also considering situations that the GUM does not yet contemplate. Our motivation is the ongoing conversation about a new edition of the GUM. While generally we favour a Bayesian approach to uncertainty evaluation, we also recognize the value that other approaches may bring to the problems considered here, and focus on methods for uncertainty evaluation and propagation that are widely applicable, including to cases that the GUM has not yet addressed. In addition to Bayesian methods, we discuss maximum-likelihood estimation, robust statistical methods, and measurement models where values of nominal properties play the same role that input quantities play in traditional models. We illustrate these general-purpose techniques in concrete examples, employing data sets that are realistic but that also are of conveniently small sizes. The supplementary material available online lists the R computer code that we have used to produce these examples (stacks.iop.org/Met/51/3/339/mmedia). Although we strive to stay close to clause 4 of the GUM, which addresses the evaluation of uncertainty for input quantities, we depart from it as we review the classes of measurement models that we believe are generally useful in contemporary measurement science. We also considerably expand and update the treatment that the GUM gives to Type B evaluations of uncertainty: reviewing the state-of-the-art, disciplined approach to the elicitation of expert knowledge, and its encapsulation in probability distributions that are usable in

  13. Kernel Principal Component Analysis for Stochastic Input Model Generation (PREPRINT)

    Science.gov (United States)

    2010-08-17

    [Figure residue recovered from the preprint: Fig. 13 shows contours of saturation at 0.2 PVI, with Monte Carlo mean (a) and variance (b) from experimental samples, and Monte Carlo mean (c) and variance (d) from PC realizations.] PVI represents dimensionless time and is computed as PVI = ∫ Q dt / V_p. The stochastic input model provides a fast way to generate many realizations, which are consistent, in a useful sense, with the experimental data.

  14. Performance Comparison of Sub Phonetic Model with Input Signal Processing

    Directory of Open Access Journals (Sweden)

    Dr E. Ramaraj

    2006-01-01

    The quest for a better model of signal transformation for speech has driven efforts to develop better signal representations and algorithms. This article explores the word model, a concatenation of state-dependent senones, as an alternative to the phoneme. The objective of this research is to combine the senone with Input Signal Processing (ISP), an algorithm that has been tried with phonemes with considerable success, to compare the performance of senones with ISP against phonemes with ISP, and to supply the resulting analysis. The research model is implemented on the SPHINX IV [4] speech engine, owing to its flexibility toward new algorithms, its robustness, and its performance.

  15. Performance analysis of FXLMS algorithm with secondary path modeling error

    Institute of Scientific and Technical Information of China (English)

    SUN Xu; CHEN Duanshi

    2003-01-01

    The performance of the filtered-x LMS (FXLMS) algorithm with secondary path modeling error is analyzed in both the time and frequency domains. It is first shown that the effect of secondary path modeling error on the performance of the FXLMS algorithm is determined by the distribution of the relative error of the secondary path model along frequency. In the case where the distribution of the relative error is uniform, the modeling error of the secondary path has no effect on the performance of the algorithm. In addition, a limitation property of the FXLMS algorithm is proved, which implies that the negative effects of secondary path modeling error can be compensated by increasing the adaptive filter length. Finally, some insights into the "spillover" phenomenon of the FXLMS algorithm are given.
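
    A schematic FXLMS loop with a deliberately mismatched secondary-path model makes the setting concrete. The path coefficients, filter length, and step size below are all assumed; the reference is filtered through the model S_hat while the physical path is S, so the effect of the modeling error can be observed directly.

        import numpy as np

        rng = np.random.default_rng(0)
        N, L = 20_000, 16
        x = rng.standard_normal(N)                 # reference signal
        P = rng.normal(0, 0.3, 32)                 # primary path (unknown plant)
        S = np.array([0.0, 0.8, 0.4, 0.2])         # true secondary path
        S_hat = S * 1.2                            # model with a 20% gain error

        w = np.zeros(L)                            # adaptive filter
        xf = np.convolve(x, S_hat)[:N]             # filtered-x through the model
        d = np.convolve(x, P)[:N]                  # primary disturbance
        y_hist = np.zeros(len(S))                  # recent anti-noise samples
        e = np.zeros(N)
        mu = 1e-3

        for n in range(L, N):
            xb = x[n - L + 1:n + 1][::-1]
            y = w @ xb                             # anti-noise sample
            y_hist = np.roll(y_hist, 1); y_hist[0] = y
            e[n] = d[n] - S @ y_hist               # residual at the error mic
            w += mu * e[n] * xf[n - L + 1:n + 1][::-1]  # FXLMS update

        print("residual power, last 1000 samples:", np.mean(e[-1000:]**2))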

  16. On the Correspondence between Mean Forecast Errors and Climate Errors in CMIP5 Models

    Energy Technology Data Exchange (ETDEWEB)

    Ma, H. -Y.; Xie, S.; Klein, S. A.; Williams, K. D.; Boyle, J. S.; Bony, S.; Douville, H.; Fermepin, S.; Medeiros, B.; Tyteca, S.; Watanabe, M.; Williamson, D.

    2014-02-01

    The present study examines the correspondence between short- and long-term systematic errors in five atmospheric models by comparing the 16 five-day hindcast ensembles from the Transpose Atmospheric Model Intercomparison Project II (Transpose-AMIP II) for July–August 2009 (short term) to the climate simulations from phase 5 of the Coupled Model Intercomparison Project (CMIP5) and AMIP for the June–August mean conditions of the years of 1979–2008 (long term). Because the short-term hindcasts were conducted with identical climate models used in the CMIP5/AMIP simulations, one can diagnose over what time scale systematic errors in these climate simulations develop, thus yielding insights into their origin through a seamless modeling approach. The analysis suggests that most systematic errors of precipitation, clouds, and radiation processes in the long-term climate runs are present by day 5 in ensemble average hindcasts in all models. Errors typically saturate after few days of hindcasts with amplitudes comparable to the climate errors, and the impacts of initial conditions on the simulated ensemble mean errors are relatively small. This robust bias correspondence suggests that these systematic errors across different models likely are initiated by model parameterizations since the atmospheric large-scale states remain close to observations in the first 2–3 days. However, biases associated with model physics can have impacts on the large-scale states by day 5, such as zonal winds, 2-m temperature, and sea level pressure, and the analysis further indicates a good correspondence between short- and long-term biases for these large-scale states. Therefore, improving individual model parameterizations in the hindcast mode could lead to the improvement of most climate models in simulating their climate mean state and potentially their future projections.

  17. A novel data-driven approach to model error estimation in Data Assimilation

    Science.gov (United States)

    Pathiraja, Sahani; Moradkhani, Hamid; Marshall, Lucy; Sharma, Ashish

    2016-04-01

    Error characterisation is a fundamental component of Data Assimilation (DA) studies. Effectively describing model error statistics has been a challenging area, with many traditional methods requiring some level of subjectivity (for instance in defining the error covariance structure). Recent advances have focused on removing the need for tuning of error parameters, although there are still some outstanding issues. Many methods focus only on the first and second moments, and rely on assuming multivariate Gaussian statistics. We propose a non-parametric, data-driven framework to estimate the full distributional form of model error, i.e. the transition density p(x_t | x_{t-1}). All sources of uncertainty associated with the model simulations are considered, without needing to assign error characteristics/devise stochastic perturbations for individual components of model uncertainty (e.g. input, parameter and structural). A training period is used to derive the error distribution of observed variables, conditioned on (potentially hidden) states. Errors in hidden states are estimated from the conditional distribution of observed variables using non-linear optimization. The framework is discussed in detail, and an application to a hydrologic case study with hidden states for one-day ahead streamflow prediction is presented. Results demonstrate improved predictions and more realistic uncertainty bounds compared to a standard tuning approach.

  18. Modeling And Analysis Of The Surface Roughness And Geometrical Error Using Taguchi And Response Surface Methodology

    Directory of Open Access Journals (Sweden)

    DR.S.C.JAYSWAL

    2011-07-01

    This experimental work presents a technique to achieve better surface quality by controlling surface roughness and geometrical error. In machining operations, achieving the desired surface quality features of the machined product is a challenging job, because these quality features are highly correlated and are expected to be influenced directly or indirectly by the process parameters or their interactive effects. Four input process parameters, spindle speed, depth of cut, feed rate, and stepover, were therefore selected to minimize surface roughness and geometrical error simultaneously, using the robust design concept of the Taguchi L9(3^4) method coupled with the response surface concept. Mathematical models for surface roughness and geometrical error were obtained from response surface analysis to predict values of surface roughness and geometrical error. S/N ratio and ANOVA analyses were also performed to identify the significant parameters influencing surface roughness and geometrical error.
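
    For reference, the smaller-the-better signal-to-noise ratio used in such Taguchi analyses is S/N = -10 log10(mean(y^2)). A small sketch with made-up roughness replicates for nine L9 runs:

        import numpy as np

        # three replicate Ra readings (um) for each of 9 runs of an L9(3^4) array
        ra = np.array([
            [1.82, 1.90, 1.77], [1.45, 1.52, 1.49], [2.10, 2.02, 2.15],
            [1.33, 1.29, 1.41], [1.76, 1.80, 1.69], [1.95, 2.05, 1.98],
            [1.22, 1.30, 1.27], [1.60, 1.55, 1.63], [2.30, 2.21, 2.28],
        ])
        # smaller-the-better: S/N = -10 log10( mean(y^2) )
        sn = -10 * np.log10(np.mean(ra**2, axis=1))
        for run, val in enumerate(sn, start=1):
            print(f"run {run}: S/N = {val:6.2f} dB")
        print("best run:", np.argmax(sn) + 1)   # largest S/N is best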

  19. The effect of model errors in variational assimilation

    Science.gov (United States)

    Wergen, Werner

    1992-08-01

    A linearized, one-dimensional shallow water model is used to investigate the effect of model errors in four-dimensional variational assimilation. A suitable initialization scheme for variational assimilation is proposed. Introducing deliberate phase speed errors in the model, the results from variational assimilation are compared to standard analysis/forecast cycle experiments. While the latter draws to the data and reflects the model errors only in the data-void areas, variational assimilation with the model used as a strong constraint is shown to distribute the model errors over the entire analysis domain. The implications for verification and diagnostics are discussed. Temporal weighting of the observations can reduce the errors towards the end of the assimilation period, but may deteriorate the subsequent forecasts. An extension to variational assimilation is proposed, which seeks to determine from the observations not only the initial state but also some of the tunable parameters of the model. The potential usefulness of this approach for parameterization studies and for separating forecast errors into model and analysis errors is discussed. Finally, variational assimilations with the model used as a weak constraint are presented. While showing good performance in the assimilation, forecasts can suffer severely if the extra terms in the equations, up to which the model is enforced, are unable to compensate for the real model error. In the discussion, an overall appraisal of both assimilation methods is given.
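
    As a reference point for the strong- versus weak-constraint distinction discussed above, a standard textbook form of the two variational cost functions is (notation assumed, not the paper's):

        % Strong constraint: the model is assumed perfect; only x_0 is sought.
        J(x_0) = \frac{1}{2}\sum_{k=0}^{K}
            (H_k x_k - y_k)^{\mathsf{T}} R_k^{-1} (H_k x_k - y_k),
        \qquad x_{k+1} = M_k x_k .

        % Weak constraint: model error terms \eta_k become control variables,
        % penalized by their assumed covariance Q_k.
        J(x_0, \eta) = \frac{1}{2}\sum_{k=0}^{K}
            (H_k x_k - y_k)^{\mathsf{T}} R_k^{-1} (H_k x_k - y_k)
          + \frac{1}{2}\sum_{k=0}^{K-1} \eta_k^{\mathsf{T}} Q_k^{-1} \eta_k,
        \qquad x_{k+1} = M_k x_k + \eta_k .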

  20. NASA Model of "Threat and Error" in Pediatric Cardiac Surgery: Patterns of Error Chains.

    Science.gov (United States)

    Hickey, Edward; Pham-Hung, Eric; Nosikova, Yaroslavna; Halvorsen, Fredrik; Gritti, Michael; Schwartz, Steven; Caldarone, Christopher A; Van Arsdell, Glen

    2017-04-01

    We introduced the National Aeronautics and Space Administration (NASA) threat-and-error model to our surgical unit. All admissions are considered flights, which should pass through stepwise deescalations in risk during surgical recovery. We hypothesized that errors significantly influence risk deescalation and contribute to poor outcomes. Patient flights (524) were tracked in real time for threats, errors, and unintended states by full-time performance personnel. The expected risk deescalation steps were weaning from mechanical support, sternal closure, extubation, intensive care unit (ICU) discharge, and discharge home. Data were accrued from clinical charts, bedside data, reporting mechanisms, and staff interviews. Infographics of flights were openly discussed weekly for consensus. In 12% (64 of 524) of flights, the child failed to deescalate sequentially through expected risk levels; unintended increments instead occurred. Failed deescalations were highly associated with errors (426; 257 flights; p < 0.0001). Consequential errors (263; 173 flights) were associated with a 29% rate of failed deescalation versus 4% in flights with no consequential error (p < 0.0001). The most dangerous errors were apical errors, typically (84%) occurring in the operating room, which caused chains of propagating unintended states (n = 110): these had a 43% (47 of 110) rate of failed deescalation (versus 4%; p < 0.0001). Chains of unintended states were often (46%) amplified by additional (up to 7) errors in the ICU that would worsen clinical deviation. Overall, failed deescalations in risk were extremely closely linked to brain injury (n = 13; p < 0.0001) or death (n = 7; p < 0.0001). Deaths and brain injury after pediatric cardiac surgery almost always occur from propagating error chains that originate in the operating room and are often amplified by additional ICU errors.

  1. Environmental Transport Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    M. A. Wasiolek

    2003-06-27

    This analysis report is one of the technical reports documenting the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN), a biosphere model supporting the total system performance assessment (TSPA) for the geologic repository at Yucca Mountain. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows relationships among the reports developed for biosphere modeling and biosphere abstraction products for the TSPA, as identified in the "Technical Work Plan: for Biosphere Modeling and Expert Support" (TWP) (BSC 2003 [163602]). Some documents in Figure 1-1 may be under development and not available when this report is issued. This figure provides an understanding of how this report contributes to biosphere modeling in support of the license application (LA), but access to the listed documents is not required to understand the contents of this report. This report is one of the reports that develop input parameter values for the biosphere model. The "Biosphere Model Report" (BSC 2003 [160699]) describes the conceptual model, the mathematical model, and the input parameters. The purpose of this analysis is to develop biosphere model parameter values related to radionuclide transport and accumulation in the environment. These parameters support calculations of radionuclide concentrations in the environmental media (e.g., soil, crops, animal products, and air) resulting from a given radionuclide concentration at the source of contamination (i.e., either in groundwater or volcanic ash). The analysis was performed in accordance with the TWP (BSC 2003 [163602]). This analysis develops values of parameters associated with many features, events, and processes (FEPs) applicable to the reference biosphere (DTN: M00303SEPFEPS2.000 [162452]), which are addressed in the biosphere model (BSC 2003 [160699]). The treatment of these FEPs is described in BSC (2003 [160699

  2. Spatial Distribution of the Errors in Modeling the Mid-Latitude Critical Frequencies by Different Models

    Science.gov (United States)

    Kilifarska, N. A.

    There are models that describe the spatial distribution of the greatest frequency yielding reflection from the F2 ionospheric layer (foF2). However, how the models' errors are distributed over the globe, and how they depend on season, solar activity, etc., has remained unknown until now. The aim of the present paper is therefore to compare the accuracy of the CCIR and URSI models and of a newly created theoretical model in describing the latitudinal and longitudinal variation of the mid-latitude maximum electron density. A comparison between the above-mentioned models and all available VI data from the Boulder data bank (between 35 deg and 70 deg) has been made. Data for three whole years with different solar activity, 1976 (F_10.7 = 73.6), 1981 (F_10.7 = 20.6), and 1983 (F_10.7 = 119.6), have been compared. The final results show that: 1. the areas with the greatest and smallest errors depend on UT, season and solar activity; 2. the error distributions of the CCIR and URSI models are very similar and do not coincide with that of the theoretical model. The last result indicates that the theoretical model, described briefly below, may be a real alternative to the empirical CCIR and URSI models. The different spatial distributions of the models' errors give users a chance to choose the most appropriate model, depending on their needs. Taking into account that the theoretical model has equal accuracy in regions with many ionosonde stations or without any, this result shows that our model can be used to improve the global mapping of the mid-latitude ionosphere. Moreover, if real values of the input aeronomical parameters (neutral composition, temperatures and winds) are used, it may be expected that this theoretical model can be applied for real-time or almost real-time mapping of the main ionospheric parameters (foF2 and hmF2).

  3. Measurement of Laser Weld Temperatures for 3D Model Input.

    Energy Technology Data Exchange (ETDEWEB)

    Dagel, Daryl; GROSSETETE, GRANT; Maccallum, Danny O.

    2016-10-01

    Laser welding is a key joining process used extensively in the manufacture and assembly of critical components for several weapons systems. Sandia National Laboratories advances the understanding of the laser welding process through coupled experimentation and modeling. This report summarizes the experimental portion of the research program, which focused on measuring temperatures and thermal history of laser welds on steel plates. To increase confidence in measurement accuracy, researchers utilized multiple complementary techniques to acquire temperatures during laser welding. This data serves as input to and validation of 3D laser welding models aimed at predicting microstructure and the formation of defects and their impact on weld-joint reliability, a crucial step in rapid prototyping of weapons components.

  4. Dual Numbers Approach in Multiaxis Machines Error Modeling

    Directory of Open Access Journals (Sweden)

    Jaroslav Hrdina

    2014-01-01

    Multiaxis machine error modeling is set in the context of modern differential geometry and linear algebra. We apply special classes of matrices over dual numbers and propose a generalization of this concept by means of general Weil algebras. We show that the classification of the geometric errors follows directly from the algebraic properties of the matrices over dual numbers, and thus the calculus over the dual numbers is the proper tool for the methodology of multiaxis machine error modeling.
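
    The algebraic device itself is small enough to show directly. A minimal sketch of dual numbers a + b*eps with eps^2 = 0 (the example values are arbitrary): first-order error terms propagate through products in exactly the way small geometric errors compose.

        class Dual:
            """Dual number a + b*eps with eps**2 == 0."""
            def __init__(self, real, dual=0.0):
                self.real, self.dual = real, dual

            def __add__(self, other):
                return Dual(self.real + other.real, self.dual + other.dual)

            def __mul__(self, other):
                # (a + b*eps)(c + d*eps) = ac + (ad + bc)*eps, since eps^2 = 0
                return Dual(self.real * other.real,
                            self.real * other.dual + self.dual * other.real)

            def __repr__(self):
                return f"{self.real} + {self.dual}*eps"

        # nominal motion 2.0 with error 0.01, nominal 3.0 with error -0.02:
        print(Dual(2.0, 0.01) * Dual(3.0, -0.02))  # 6.0 + (-0.04 + 0.03)*eps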

  5. Phylogenetic mixtures and linear invariants for equal input models.

    Science.gov (United States)

    Casanellas, Marta; Steel, Mike

    2017-04-01

    The reconstruction of phylogenetic trees from molecular sequence data relies on modelling site substitutions by a Markov process, or a mixture of such processes. In general, allowing mixed processes can result in different tree topologies becoming indistinguishable from the data, even for infinitely long sequences. However, when the underlying Markov process supports linear phylogenetic invariants, then provided these are sufficiently informative, the identifiability of the tree topology can be restored. In this paper, we investigate a class of processes that support linear invariants once the stationary distribution is fixed, the 'equal input model'. This model generalizes the 'Felsenstein 1981' model (and thereby the Jukes-Cantor model) from four states to an arbitrary number of states (finite or infinite), and it can also be described by a 'random cluster' process. We describe the structure and dimension of the vector spaces of phylogenetic mixtures and of linear invariants for any fixed phylogenetic tree (and for all trees, the so-called 'model invariants'), on any number n of leaves. We also provide a precise description of the space of mixtures and linear invariants for the special case of [Formula: see text] leaves. By combining techniques from discrete random processes and (multi-) linear algebra, our results build on a classic result that was first established by James Lake (Mol Biol Evol 4:167-191, 1987).
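
    For concreteness, a standard way to write the equal input model on k states, with pi the fixed stationary distribution, is:

        % Off-diagonal substitution rates depend only on the target state:
        Q_{ij} = \pi_j \quad (i \neq j), \qquad Q_{ii} = \pi_i - 1 .
        % With k = 4 and uniform \pi this reduces to the Jukes-Cantor model;
        % a general \pi on four states gives the Felsenstein 1981 model.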

  6. Statistical analysis of error propagation from radar rainfall to hydrological models

    Directory of Open Access Journals (Sweden)

    D. Zhu

    2013-04-01

    This study attempts to characterise the manner in which inherent error in radar rainfall estimates influences the character of stream flow simulation uncertainty in validated hydrological modelling. An artificial statistical error model described by a Gaussian distribution was developed to generate realisations of possible combinations of normalised errors and normalised bias to reflect the identified radar error and temporal dependence. These realisations were embedded in the 5 km/15 min UK Nimrod radar rainfall data and used to generate ensembles of stream flow simulations using three different hydrological models with varying degrees of complexity: a fully distributed, physically-based model, MIKE SHE; a semi-distributed, lumped model, TOPMODEL; and the unit hydrograph model PRTF. These models were built for this purpose and applied to the Upper Medway Catchment (220 km2) in South-East England. The results show that the normalised bias of the radar rainfall estimates was enhanced in the simulated stream flow and was the dominant factor with a significant impact on the stream flow simulations. This preliminary radar-error-generation model could be developed more rigorously and comprehensively to capture the error characteristics of weather radars for quantitative measurement of rainfall.
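
    The error-generation idea can be sketched as a multiplicative Gaussian factor with lag-one temporal dependence (an AR(1) stand-in; the bias, spread, and correlation values below are invented rather than fitted Nimrod statistics):

        import numpy as np

        def error_realizations(n_steps, n_real, bias=0.1, sigma=0.3, rho=0.6,
                               seed=0):
            rng = np.random.default_rng(seed)
            eps = np.empty((n_real, n_steps))
            eps[:, 0] = rng.normal(0, sigma, n_real)
            for t in range(1, n_steps):          # AR(1) temporal dependence
                eps[:, t] = rho * eps[:, t - 1] + \
                            rng.normal(0, sigma * np.sqrt(1 - rho**2), n_real)
            return 1.0 + bias + eps              # multiplicative factors

        # synthetic 15-min radar series for one day (96 steps)
        radar = np.random.default_rng(1).gamma(2.0, 1.5, 96)
        ensemble = radar * error_realizations(96, 50)    # 50 perturbed series
        print("ensemble spread at peak:", ensemble[:, radar.argmax()].std())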

  7. Assigning probability distributions to input parameters of performance assessment models

    Energy Technology Data Exchange (ETDEWEB)

    Mishra, Srikanta [INTERA Inc., Austin, TX (United States)

    2002-02-01

    This study presents an overview of various approaches for assigning probability distributions to input parameters and/or future states of performance assessment models. Specifically, three broad approaches are discussed for developing input distributions: (a) fitting continuous distributions to data, (b) subjective assessment of probabilities, and (c) Bayesian updating of prior knowledge based on new information. The report begins with a summary of the nature of data and distributions, followed by a discussion of several common theoretical parametric models for characterizing distributions. Next, various techniques are presented for fitting continuous distributions to data. These include probability plotting, method of moments, maximum likelihood estimation and nonlinear least squares analysis. The techniques are demonstrated using data from a recent performance assessment study for the Yucca Mountain project. Goodness of fit techniques are also discussed, followed by an overview of how distribution fitting is accomplished in commercial software packages. The issue of subjective assessment of probabilities is dealt with in terms of the maximum entropy distribution selection approach, as well as some common rules for codifying informal expert judgment. Formal expert elicitation protocols are discussed next, and are based primarily on the guidance provided by the US NRC. The Bayesian framework for updating prior distributions (beliefs) when new information becomes available is discussed. A simple numerical approach is presented for facilitating practical applications of the Bayes theorem. Finally, a systematic framework for assigning distributions is presented: (a) for the situation where enough data are available to define an empirical CDF or fit a parametric model to the data, and (b) to deal with the situation where only a limited amount of information is available.
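
    Two of the fitting techniques listed above are easy to demonstrate on synthetic data. The sketch below fits a lognormal input parameter by the method of moments and by maximum likelihood (via scipy, with the location fixed at zero); the data are simulated, not from the Yucca Mountain study.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        data = rng.lognormal(mean=1.0, sigma=0.5, size=200)

        # method of moments: invert the lognormal mean/variance formulas
        m, v = data.mean(), data.var()
        sigma2_mom = np.log(1 + v / m**2)
        mu_mom = np.log(m) - 0.5 * sigma2_mom

        # maximum likelihood via scipy (location fixed at zero)
        shape_mle, _, scale_mle = stats.lognorm.fit(data, floc=0)
        mu_mle, sigma_mle = np.log(scale_mle), shape_mle

        print(f"MoM: mu={mu_mom:.3f}, sigma={np.sqrt(sigma2_mom):.3f}")
        print(f"MLE: mu={mu_mle:.3f}, sigma={sigma_mle:.3f}")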

  8. Optical linear algebra processors: noise and error-source modeling.

    Science.gov (United States)

    Casasent, D; Ghosh, A

    1985-06-01

    The modeling of system and component noise and error sources in optical linear algebra processors (OLAPs) is considered, with attention to the frequency-multiplexed OLAP. General expressions are obtained for the output produced as a function of various component errors and noise. A digital simulator for this model is discussed.

  10. Input determination for neural network models in water resources applications. Part 2. Case study: forecasting salinity in a river

    Science.gov (United States)

    Bowden, Gavin J.; Maier, Holger R.; Dandy, Graeme C.

    2005-01-01

    This paper is the second of a two-part series in this issue that presents a methodology for determining an appropriate set of model inputs for artificial neural network (ANN) models in hydrologic applications. The first paper presented two input determination methods. The first method utilises a measure of dependence known as the partial mutual information (PMI) criterion to select significant model inputs. The second method utilises a self-organising map (SOM) to remove redundant input variables, and a hybrid genetic algorithm (GA) and general regression neural network (GRNN) to select the inputs that have a significant influence on the model's forecast. In the first paper, both methods were applied to synthetic data sets and were shown to lead to a set of appropriate ANN model inputs. To verify the proposed techniques, it is important that they are applied to a real-world case study. In this paper, the PMI algorithm and the SOM-GAGRNN are used to find suitable inputs to an ANN model for forecasting salinity in the River Murray at Murray Bridge, South Australia. The proposed methods are also compared with two methods used in previous studies, for the same case study. The two proposed methods were found to lead to more parsimonious models with a lower forecasting error than the models developed using the methods from previous studies. To verify the robustness of each of the ANNs developed using the proposed methodology, a real-time forecasting simulation was conducted. This validation data set consisted of independent data from a six-year period from 1992 to 1998. The ANN developed using the inputs identified by the stepwise PMI algorithm was found to be the most robust for this validation set. The PMI scores obtained using the stepwise PMI algorithm revealed useful information about the order of importance of each significant input.
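
    A simplified flavor of the first method can be given in a few lines. The sketch below ranks candidate lag inputs by plain mutual information using scikit-learn; the paper's stepwise PMI additionally conditions each score on the inputs already selected, which this stand-in omits, and the series here is synthetic rather than the Murray Bridge salinity record.

        import numpy as np
        from sklearn.feature_selection import mutual_info_regression

        rng = np.random.default_rng(0)
        n = 2000
        salinity = np.cumsum(rng.normal(0, 1, n + 30))   # synthetic series

        # candidate inputs: lags 1..30 days; target: current salinity
        lags = np.arange(1, 31)
        X = np.column_stack([salinity[30 - k:n + 30 - k] for k in lags])
        y = salinity[30:]

        mi = mutual_info_regression(X, y, random_state=0)
        order = np.argsort(mi)[::-1]                     # most informative first
        for k in order[:5]:
            print(f"lag {lags[k]:2d} days: MI = {mi[k]:.3f}")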

  11. Input modeling with phase-type distributions and Markov models theory and applications

    CERN Document Server

    Buchholz, Peter; Felko, Iryna

    2014-01-01

    Containing a summary of several recent results on Markov-based input modeling in a coherent notation, this book introduces and compares algorithms for parameter fitting and gives an overview of available software tools in the area. Due to progress made in recent years with respect to new algorithms to generate PH distributions and Markovian arrival processes from measured data, the models outlined are useful alternatives to other distributions or stochastic processes used for input modeling. Graduate students and researchers in applied probability, operations research and computer science along with practitioners using simulation or analytical models for performance analysis and capacity planning will find the unified notation and up-to-date results presented useful. Input modeling is the key step in model based system analysis to adequately describe the load of a system using stochastic models. The goal of input modeling is to find a stochastic model to describe a sequence of measurements from a real system...

  12. Integrate-and-fire vs Poisson models of LGN input to V1 cortex: noisier inputs reduce orientation selectivity.

    Science.gov (United States)

    Lin, I-Chun; Xing, Dajun; Shapley, Robert

    2012-12-01

    One of the reasons the visual cortex has attracted the interest of computational neuroscience is that it has well-defined inputs. The lateral geniculate nucleus (LGN) of the thalamus is the source of visual signals to the primary visual cortex (V1). Most large-scale cortical network models approximate the spike trains of LGN neurons as simple Poisson point processes. However, many studies have shown that neurons in the early visual pathway are capable of spiking with high temporal precision and their discharges are not Poisson-like. To gain an understanding of how response variability in the LGN influences the behavior of V1, we study response properties of model V1 neurons that receive purely feedforward inputs from LGN cells modeled either as noisy leaky integrate-and-fire (NLIF) neurons or as inhomogeneous Poisson processes. We first demonstrate that the NLIF model is capable of reproducing many experimentally observed statistical properties of LGN neurons. Then we show that a V1 model in which the LGN input to a V1 neuron is modeled as a group of NLIF neurons produces higher orientation selectivity than the one with Poisson LGN input. The second result implies that statistical characteristics of LGN spike trains are important for V1's function. We conclude that physiologically motivated models of V1 need to include more realistic LGN spike trains that are less noisy than inhomogeneous Poisson processes.
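
    A toy version of the comparison, with illustrative constants rather than fitted LGN parameters: the same time-varying rate drives an inhomogeneous Poisson generator and a noisy leaky integrate-and-fire (NLIF) neuron, and the NLIF spike train comes out more regular (lower interspike-interval CV).

        import numpy as np

        rng = np.random.default_rng(0)
        dt, T = 1e-3, 2.0
        t = np.arange(0, T, dt)
        rate = 40 * (1 + np.sin(2 * np.pi * 4 * t))      # Hz, time-varying drive

        # inhomogeneous Poisson spikes
        poisson_spikes = rng.random(t.size) < rate * dt

        # NLIF: leaky integration with additive noise, spike-and-reset
        tau, v_th, v_reset = 20e-3, 1.0, 0.0
        V = 0.0
        nlif_spikes = np.zeros(t.size, dtype=bool)
        for i in range(t.size):
            I = rate[i] / 40.0                           # normalized input drive
            V += dt * (-V + 1.3 * I) / tau \
                 + 0.1 * np.sqrt(dt) * rng.standard_normal()
            if V >= v_th:
                nlif_spikes[i] = True
                V = v_reset

        for name, s in (("Poisson", poisson_spikes), ("NLIF", nlif_spikes)):
            isi = np.diff(np.flatnonzero(s)) * dt
            print(f"{name}: {s.sum()} spikes, ISI CV = {isi.std()/isi.mean():.2f}")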

  13. Error Control of Iterative Linear Solvers for Integrated Groundwater Models

    CERN Document Server

    Dixon, Matthew; Brush, Charles; Chung, Francis; Dogrul, Emin; Kadir, Tariq

    2010-01-01

    An open problem that arises when using modern iterative linear solvers, such as the preconditioned conjugate gradient (PCG) method or Generalized Minimum RESidual method (GMRES) is how to choose the residual tolerance in the linear solver to be consistent with the tolerance on the solution error. This problem is especially acute for integrated groundwater models which are implicitly coupled to another model, such as surface water models, and resolve both multiple scales of flow and temporal interaction terms, giving rise to linear systems with variable scaling. This article uses the theory of 'forward error bound estimation' to show how rescaling the linear system affects the correspondence between the residual error in the preconditioned linear system and the solution error. Using examples of linear systems from models developed using the USGS GSFLOW package and the California State Department of Water Resources' Integrated Water Flow Model (IWFM), we observe that this error bound guides the choice of a prac...
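
    The core issue can be reproduced with a toy badly scaled system: a perturbation along the weakest singular direction leaves the residual tiny while the solution error is comparatively large, and the forward error bound ||x - x*||/||x*|| <= cond(A) * ||r||/||b|| only becomes useful once rescaling brings the condition number down. All numbers below are invented.

        import numpy as np

        rng = np.random.default_rng(0)
        A = np.diag([1e6, 1.0, 1e-6]) @ rng.standard_normal((3, 3))  # bad scaling
        x_true = np.ones(3)
        b = A @ x_true

        # perturb the solution along the weakest direction of A: the residual
        # barely moves even though the solution error is comparatively large
        Vt = np.linalg.svd(A)[2]
        x = x_true + 1e-3 * Vt[-1]                # last right singular vector
        r = b - A @ x
        print("relative solution error:",
              np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
        print("relative residual      :",
              np.linalg.norm(r) / np.linalg.norm(b))
        print("cond(A):", np.linalg.cond(A))

        # row equilibration reduces the condition number, tightening the bound
        D = np.diag(1.0 / np.abs(A).sum(axis=1))
        print("cond(D A):", np.linalg.cond(D @ A))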

  14. Bayesian modeling growth curves for quail assuming skewness in errors

    Directory of Open Access Journals (Sweden)

    Robson Marcelo Rossi

    2014-06-01

    Assuming normal distributions in data analysis is common in many areas of knowledge. However, other distributions capable of modeling a skewness parameter can be used in situations where the data have tails heavier than the normal. This article presents alternatives to the assumption of normality in the errors by adding asymmetric distributions. A Bayesian approach is proposed to fit nonlinear models when the errors are not normal; the t, skew-normal and skew-t distributions are adopted. The methodology is applied to different growth curves for quail body weights. It was found that the Gompertz models assuming skew-normal errors and skew-t errors, for males and females respectively, fitted the data best.

  15. Correcting biased observation model error in data assimilation

    CERN Document Server

    Harlim, John

    2016-01-01

    While the formulation of most data assimilation schemes assumes an unbiased observation model error, in real applications, model error with nontrivial biases is unavoidable. A practical example is the error in the radiative transfer model (which is used to assimilate satellite measurements) in the presence of clouds. As a consequence, many (in fact 99%) of the cloudy observed measurements are not being used although they may contain useful information. This paper presents a novel nonparametric Bayesian scheme which is able to learn the observation model error distribution and correct the bias in incoming observations. This scheme can be used in tandem with any data assimilation forecasting system. The proposed model error estimator uses nonparametric likelihood functions constructed with data-driven basis functions based on the theory of kernel embeddings of conditional distributions developed in the machine learning community. Numerically, we show positive results with two examples. The first example is des...

  16. Error Model and Accuracy Calibration of 5-Axis Machine Tool

    Directory of Open Access Journals (Sweden)

    Fangyu Pan

    2013-08-01

    Full Text Available To improve machining precision and reduce geometric errors in 5-axis machine tools, an error model and a calibration procedure are presented in this paper. The error model is built from multi-body system theory and characteristic matrices, which establish the theoretical relationship between the cutting tool and the workpiece. Accuracy calibration is difficult to achieve, but with laser instruments, a laser interferometer and a laser tracker, the errors can be measured accurately, which benefits subsequent compensation.

  17. Modelling Analysis of Forestry Input-Output Elasticity in China

    Directory of Open Access Journals (Sweden)

    Guofeng Wang

    2016-01-01

    Full Text Available Based on an extended economic model and spatial econometrics, this paper analyzes the spatial distribution and interdependence of forestry production in China and calculates the input-output elasticities of forestry production. The results show significant spatial correlation in Chinese forestry production, manifested mainly as spatial agglomeration. The output elasticity of labor is 0.6649 and that of capital is 0.8412, while the contribution of land is significantly negative. Labor and capital are thus the main determinants of province-level forestry production in China, so research on province-level forestry production should not ignore spatial effects, and policy-making should take inter-provincial effects on forestry production into account. The study provides scientific and technical support for forestry production.
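
    The abstract does not state the estimated production function; assuming the Cobb-Douglas form commonly used in such studies (an assumption, not stated in the record), the reported elasticities would enter as exponents:

    ```latex
    % Assumed Cobb-Douglas specification: Y = output, L = labor, K = capital, T = land.
    Y = A\,L^{0.6649}\,K^{0.8412}\,T^{\gamma}, \qquad \gamma < 0,
    \qquad \frac{\partial \ln Y}{\partial \ln L} = 0.6649 .
    ```

    Read this way, a 1% increase in labor input is associated with a roughly 0.66% increase in forestry output, holding the other inputs fixed.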

  18. A New Ensemble of Perturbed-Input-Parameter Simulations by the Community Atmosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    Covey, C; Brandon, S; Bremer, P T; Domyancis, D; Garaizar, X; Johannesson, G; Klein, R; Klein, S A; Lucas, D D; Tannahill, J; Zhang, Y

    2011-10-27

    Uncertainty quantification (UQ) is a fundamental challenge in the numerical simulation of Earth's weather and climate, and other complex systems. It entails much more than attaching defensible error bars to predictions: in particular it includes assessing low-probability but high-consequence events. To achieve these goals with models containing a large number of uncertain input parameters, structural uncertainties, etc., raw computational power is needed. An automated, self-adapting search of the possible model configurations is also useful. Our UQ initiative at the Lawrence Livermore National Laboratory has produced the most extensive set to date of simulations from the US Community Atmosphere Model. We are examining output from about 3,000 twelve-year climate simulations generated with a specialized UQ software framework, and assessing the model's accuracy as a function of 21 to 28 uncertain input parameter values. Most of the input parameters we vary are related to the boundary layer, clouds, and other sub-grid scale processes. Our simulations prescribe surface boundary conditions (sea surface temperatures and sea ice amounts) to match recent observations. Fully searching this 21+ dimensional space is impossible, but sensitivity and ranking algorithms can identify input parameters having relatively little effect on a variety of output fields, either individually or in nonlinear combination. Bayesian statistical constraints, employing a variety of climate observations as metrics, also seem promising. Observational constraints will be important in the next step of our project, which will compute sea surface temperatures and sea ice interactively, and will study climate change due to increasing atmospheric carbon dioxide.
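
    The abstract does not specify the sampling design behind the roughly 3,000-member ensemble; a Latin hypercube design is one standard way to fill such a 21-dimensional input space, sketched here with scipy (the dimension count comes from the abstract, the bounds are invented):

    ```python
    import numpy as np
    from scipy.stats import qmc

    n_params, n_runs = 21, 3000
    sampler = qmc.LatinHypercube(d=n_params, seed=0)
    unit = sampler.random(n=n_runs)                 # points in [0, 1]^21

    # Illustrative bounds for each uncertain parameter; a real study would
    # take these from expert elicitation for boundary-layer/cloud schemes.
    lower = np.full(n_params, 0.1)
    upper = np.full(n_params, 2.0)
    designs = qmc.scale(unit, lower, upper)         # one row per model run

    # Each row would then be written into a model namelist and submitted
    # as one twelve-year climate simulation.
    print(designs.shape)                            # (3000, 21)
    ```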

  19. Predictive error analysis for a water resource management model

    Science.gov (United States)

    Gallagher, Mark; Doherty, John

    2007-02-01

    In calibrating a model, a set of parameters is assigned to the model which will be employed for the making of all future predictions. If these parameters are estimated through solution of an inverse problem, formulated to be properly posed through either pre-calibration or mathematical regularisation, then solution of this inverse problem will, of necessity, lead to a simplified parameter set that omits the details of reality, while still fitting historical data acceptably well. Furthermore, estimates of parameters so obtained will be contaminated by measurement noise. Both of these phenomena will lead to errors in predictions made by the model, with the potential for error increasing with the hydraulic property detail on which the prediction depends. Integrity of model usage demands that model predictions be accompanied by some estimate of the possible errors associated with them. The present paper applies theory developed in a previous work to the analysis of predictive error associated with a real-world water resource management model. The analysis offers many challenges, including the fact that the model is a complex one that was partly calibrated by hand. Nevertheless, it is typical of models which are commonly employed as the basis for the making of important decisions, and for which such an analysis must be made. The potential errors associated with point-based and averaged water level and creek inflow predictions are examined, together with the dependence of these errors on the amount of averaging involved. Error variances associated with predictions made by the existing model are compared with "optimized error variances" that could have been obtained had calibration been undertaken in such a way as to minimize predictive error variance. The contributions by different parameter types to the overall error variance of selected predictions are also examined.

  20. Testing and Inference in Nonlinear Cointegrating Vector Error Correction Models

    DEFF Research Database (Denmark)

    Kristensen, Dennis; Rahbek, Anders

    In this paper, we consider a general class of vector error correction models which allow for asymmetric and non-linear error correction. We provide asymptotic results for (quasi-)maximum likelihood (QML) based estimators and tests. General hypothesis testing is considered, where testing … symmetric non-linear error correction are considered. A simulation study shows that the finite sample properties of the bootstrapped tests are satisfactory with good size and power properties for reasonable sample sizes.

  1. A Morphographemic Model for Error Correction in Nonconcatenative Strings

    CERN Document Server

    Bowden, T; Bowden, Tanya; Kiraz, George Anton

    1995-01-01

    This paper introduces a spelling correction system which integrates seamlessly with morphological analysis using a multi-tape formalism. Handling of various Semitic error problems is illustrated, with reference to Arabic and Syriac examples. The model handles errors in vocalisation, diacritics, phonetic syncopation and morphographemic idiosyncrasies, in addition to Damerau errors. A complementary correction strategy for morphologically sound but morphosyntactically ill-formed words is outlined.

  2. FMEA: a model for reducing medical errors.

    Science.gov (United States)

    Chiozza, Maria Laura; Ponzetti, Clemente

    2009-06-01

    Patient safety is a management issue, in view of the fact that clinical risk management has become an important part of hospital management. Failure Mode and Effect Analysis (FMEA) is a proactive technique for error detection and reduction, first introduced within the aerospace industry in the 1960s. Early applications in the health care industry dating back to the 1990s included critical systems in the development and manufacture of drugs and in the prevention of medication errors in hospitals. In 2008, the Technical Committee of the International Organization for Standardization (ISO) licensed a technical specification for medical laboratories suggesting FMEA as a method for prospective risk analysis of high-risk processes. Here we describe the main steps of the FMEA process and review data available on the application of this technique to laboratory medicine. A significant reduction of the risk priority number (RPN) was obtained when applying FMEA to blood cross-matching, to clinical chemistry analytes, as well as to point-of-care testing (POCT).
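
    As a reminder of the arithmetic behind the RPN reductions reported, here is a minimal sketch; the 1-10 scales and the example ratings are illustrative, not taken from the paper:

    ```python
    def risk_priority_number(severity: int, occurrence: int, detection: int) -> int:
        """RPN = S x O x D, each rated on a 1-10 scale (10 = worst).
        Scales and thresholds vary between institutions; values here are
        illustrative only."""
        for name, v in (("severity", severity), ("occurrence", occurrence),
                        ("detection", detection)):
            if not 1 <= v <= 10:
                raise ValueError(f"{name} must be in 1..10")
        return severity * occurrence * detection

    # Hypothetical failure mode: mislabeled specimen in blood cross-matching.
    before = risk_priority_number(9, 4, 6)   # before corrective actions
    after = risk_priority_number(9, 2, 2)    # after barcoding + double-check
    print(before, "->", after)
    ```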

  3. Input--output capital coefficients for energy technologies. [Input-output model

    Energy Technology Data Exchange (ETDEWEB)

    Tessmer, R.G. Jr.

    1976-12-01

    Input-output capital coefficients are presented for five electric and seven non-electric energy technologies. They describe the durable goods and structures purchases (at a 110-sector level of detail) that are necessary to expand productive capacity in each of twelve energy source sectors. Coefficients are defined in terms of 1967 dollar purchases per 10^6 Btu of output from new capacity, and original data sources include Battelle Memorial Institute, the Harvard Economic Research Project, The Mitre Corp., and Bechtel Corp. The twelve energy sectors are coal, crude oil and gas, shale oil, methane from coal, solvent refined coal, refined oil products, pipeline gas, coal combined-cycle electric, fossil electric, LWR electric, HTGR electric, and hydroelectric.

  4. Parameter estimation and error analysis in environmental modeling and computation

    Science.gov (United States)

    Kalmaz, E. E.

    1986-01-01

    A method for the estimation of parameters and error analysis in the development of nonlinear modeling for environmental impact assessment studies is presented. The modular computer program can interactively fit different nonlinear models to the same set of data, dynamically changing the error structure associated with observed values. Parameter estimation techniques and sequential estimation algorithms employed in parameter identification and model selection are first discussed. Then, least-square parameter estimation procedures are formulated, utilizing differential or integrated equations, and are used to define a model for association of error with experimentally observed data.

  5. Filtering multiscale dynamical systems in the presence of model error

    CERN Document Server

    Harlim, John

    2013-01-01

    In this review article, we report two important competing data assimilation schemes that were developed in the past 20 years, discuss the current methods that are operationally used in weather forecasting applications, and point out one major challenge in the data assimilation community: "utilize these existing schemes in the presence of model error". The aim of this paper is to provide theoretical guidelines to mitigate model error in practical applications of filtering multiscale dynamical systems with reduced models. This is a prototypical situation in many applications due to limited ability to resolve the smaller scale processes as well as the difficulty to model the interaction across scales. We present simple examples to point out the importance of accounting for model error when the separation of scales is not apparent. These examples also elucidate the necessity of treating model error as a stochastic process in a nontrivial fashion for optimal filtering, in the sense that the mean and covariance estima...

  6. ASYMPTOTICS OF MEAN TRANSFORMATION ESTIMATORS WITH ERRORS IN VARIABLES MODEL

    Institute of Scientific and Technical Information of China (English)

    CUI Hengjian

    2005-01-01

    This paper addresses estimation, and its asymptotics, of the mean transformation θ = E[h(X)] of a random variable X based on n i.i.d. observations from the errors-in-variables model Y = X + v, where v is a measurement error with a known distribution and h(·) is a known smooth function. The asymptotics of the deconvolution kernel estimator are given for ordinary smooth error distributions, and those of the expectation extrapolation estimator for normal error distributions. Under some mild regularity conditions, consistency and asymptotic normality are obtained for both types of estimators. Simulations show they have good performance.

  7. On Network-Error Correcting Convolutional Codes under the BSC Edge Error Model

    CERN Document Server

    Prasad, K

    2010-01-01

    Convolutional network-error correcting codes (CNECCs) are known to provide error correcting capability in acyclic instantaneous networks within the network coding paradigm under small field size conditions. In this work, we investigate the performance of CNECCs under the error model of the network where the edges are assumed to be statistically independent binary symmetric channels, each with the same probability of error $p_e$ ($0 \leq p_e < 0.5$). We obtain bounds on the performance of such CNECCs based on a modified generating function (the transfer function) of the CNECCs. For a given network, we derive a mathematical condition on how small $p_e$ should be so that only single edge network-errors need to be accounted for, thus reducing the complexity of evaluating the probability of error of any CNECC. Simulations indicate that convolutional codes are required to possess different properties to achieve good performance in low $p_e$ and high $p_e$ regimes. For the low $p_e$ regime, convolutional codes with g...

  8. Error Model and Compensation of Bell-Shaped Vibratory Gyro

    Directory of Open Access Journals (Sweden)

    Zhong Su

    2015-09-01

    Full Text Available A bell-shaped vibratory angular velocity gyro (BVG), inspired by the traditional Chinese bell, is a type of axisymmetric shell resonator gyroscope. This paper focuses on the development of an error model and compensation method for the BVG. A dynamic equation is first established, based on a study of the BVG working mechanism. This equation is then used to evaluate the relationship between the angular rate output signal and the bell-shaped resonator characteristics, analyze the influence of the main error sources and set up an error model for the BVG. The error sources are classified by their error propagation characteristics, and the compensation method is presented based on the error model. Finally, using the error model and compensation method, the BVG is calibrated experimentally, including rough compensation, temperature and bias compensation, scale factor compensation and noise filtering. The experimentally obtained bias instability improves from 20.5°/h to 4.7°/h, the random walk from 2.8°/√h to 0.7°/√h, and the nonlinearity from 0.2% to 0.03%. Based on the error compensation, it is shown that there is a good linear relationship between the sensing signal and the angular velocity, suggesting that the BVG is a good candidate for low and medium rotational speed measurement.

  9. Effect Of Oceanic Lithosphere Age Errors On Model Discrimination

    Science.gov (United States)

    DeLaughter, J. E.

    2016-12-01

    The thermal structure of the oceanic lithosphere is the subject of a long-standing controversy. Because the thermal structure varies with age, it governs properties such as heat flow, density, and bathymetry with important implications for plate tectonics. Though bathymetry, geoid, and heat flow for young (geoid, and heat flow data to an inverse model to determine lithospheric structure details. Though inverse models usually include the effect of errors in bathymetry, heat flow, and geoid, they rarely examine the effects of errors in age. This may have the effect of introducing subtle biases into inverse models of the oceanic lithosphere. Because the inverse problem for thermal structure is both ill-posed and ill-conditioned, these overlooked errors may have a greater effect than expected. The problem is further complicated by the non-uniform distribution of age and errors in age estimates; for example, only 30% of the oceanic lithosphere is older than 80 MY and less than 3% is older than 150 MY. To determine the potential strength of such biases, I have used the age and error maps of Mueller et al (2008) to forward model the bathymetry for half space and GDH1 plate models. For ages less than 20 MY, both models give similar results. The errors induced by uncertainty in age are relatively large and suggest that when possible young lithosphere should be excluded when examining the lithospheric thermal model. As expected, GDH1 bathymetry converges asymptotically on the theoretical result for error-free data for older data. The resulting uncertainty is nearly as large as that introduced by errors in the other parameters; in the absence of other errors, the models can only be distinguished for ages greater than 80 MY. These results suggest that the problem should be approached with the minimum possible number of variables. For example, examining the direct relationship of geoid to bathymetry or heat flow instead of their relationship to age should reduce uncertainties

  10. Deconvolution Estimation in Measurement Error Models: The R Package decon

    Directory of Open Access Journals (Sweden)

    Xiao-Feng Wang

    2011-03-01

    Full Text Available Data from many scientific areas often come with measurement error. Density or distribution function estimation from contaminated data and nonparametric regression with errors in variables are two important topics in measurement error models. In this paper, we present a new software package decon for R, which contains a collection of functions that use the deconvolution kernel methods to deal with the measurement error problems. The functions allow the errors to be either homoscedastic or heteroscedastic. To make the deconvolution estimators computationally more efficient in R, we adapt the fast Fourier transform algorithm for density estimation with error-free data to the deconvolution kernel estimation. We discuss the practical selection of the smoothing parameter in deconvolution methods and illustrate the use of the package through both simulated and real examples.
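
    decon implements the deconvoluting kernel estimator; the core idea fits in a few lines of Python (the R package itself uses an FFT for speed, and the kernel, bandwidth and error level below are illustrative choices):

    ```python
    import numpy as np

    def deconv_kde(y, sigma_err, x_grid, h):
        """Deconvoluting kernel density estimate of X where Y = X + eps,
        eps ~ N(0, sigma_err^2). Uses a kernel whose Fourier transform is
        (1 - t^2)^3 on [-1, 1]; its compact support avoids dividing by tiny
        values of the normal characteristic function."""
        t = np.linspace(-1, 1, 201)              # scaled frequency grid
        phi_K = (1 - t**2) ** 3
        freq = t / h
        # Empirical characteristic function of the contaminated data Y.
        ecf = np.mean(np.exp(1j * np.outer(freq, y)), axis=1)
        phi_err = np.exp(-0.5 * (sigma_err * freq) ** 2)
        integrand = phi_K * ecf / phi_err        # deconvolve in Fourier space
        # Inverse transform back to x (real part; trapezoidal quadrature).
        fx = np.array([np.trapz(np.exp(-1j * freq * x) * integrand, freq).real
                       for x in x_grid]) / (2 * np.pi)
        return np.clip(fx, 0, None)

    rng = np.random.default_rng(0)
    x = rng.normal(0, 1, 500)
    y = x + rng.normal(0, 0.5, 500)              # contaminated observations
    grid = np.linspace(-4, 4, 81)
    f_hat = deconv_kde(y, sigma_err=0.5, x_grid=grid, h=0.4)
    ```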

  11. A three-component model of the control error in manual tracking of continuous random signals.

    Science.gov (United States)

    Gerisch, Hans; Staude, Gerhard; Wolf, Werner; Bauch, Gerhard

    2013-10-01

    The performance of human operators acting within closed-loop control systems is investigated in a classic tracking task. The dependence of the control error (tracking error) on the parameters display gain, k_display, and input signal frequency bandwidth, f_g, which alter task difficulty and presumably the control delay, is studied with the aim of specifying it functionally via a model. The human operator, as an element of a cascaded human-machine control system (e.g., car driving or piloting an airplane), codetermines the overall system performance. Control performance of humans in continuous tracking has been described in earlier studies. Using a handheld joystick, 10 participants tracked continuous random input signals. The parameters f_g and k_display were altered between experiments. Increased task difficulty led to lengthened control delay and, consequently, increased control error. Tracking performance degraded profoundly with target deflection components above 1 Hz, confirming earlier reports. The control error is composed of a delay-induced component, a demand-based component, and a novel component: a human tracking limit. Accordingly, a new model is suggested that allows the observed control error to be split into these three components. To achieve optimal performance in control systems that include a human operator (e.g., vehicles, remote controlled rovers, crane control), (a) tasks should be kept as simple as possible to achieve the shortest control delays, and (b) task components requiring higher-frequency (> 1 Hz) tracking actions should be avoided or automated by technical systems.

  12. A Comprehensive Trainable Error Model for Sung Music Queries

    CERN Document Server

    Birmingham, W P; 10.1613/jair.1334

    2011-01-01

    We propose a model for errors in sung queries, a variant of the hidden Markov model (HMM). This is a solution to the problem of identifying the degree of similarity between a (typically error-laden) sung query and a potential target in a database of musical works, an important problem in the field of music information retrieval. Similarity metrics are a critical component of query-by-humming (QBH) applications which search audio and multimedia databases for strong matches to oral queries. Our model comprehensively expresses the types of error or variation between target and query: cumulative and non-cumulative local errors, transposition, tempo and tempo changes, insertions, deletions and modulation. The model is not only expressive, but automatically trainable, or able to learn and generalize from query examples. We present results of simulations, designed to assess the discriminatory potential of the model, and tests with real sung queries, to demonstrate relevance to real-world applications.

  13. Which forcing data errors matter most when modeling seasonal snowpacks?

    Science.gov (United States)

    Raleigh, M. S.; Lundquist, J. D.; Clark, M. P.

    2014-12-01

    High quality forcing data are critical when modeling seasonal snowpacks and snowmelt, but their quality is often compromised due to measurement errors or deficiencies in gridded data products (e.g., spatio-temporal interpolation, empirical parameterizations, or numerical weather model outputs). To assess the relative impact of errors in different meteorological forcings, many studies have conducted sensitivity analyses where errors (e.g., bias) are imposed on one forcing at a time and changes in model output are compared. Although straightforward, this approach only considers simplistic error structures and cannot quantify interactions in different meteorological forcing errors (i.e., it assumes a linear system). Here we employ the Sobol' method of global sensitivity analysis, which allows us to test how co-existing errors in six meteorological forcings (i.e., air temperature, precipitation, wind speed, humidity, incoming shortwave and longwave radiation) impact specific modeled snow variables (i.e., peak snow water equivalent, snowmelt rates, and snow disappearance timing). Using the Sobol' framework across a large number of realizations (>100000 simulations annually at each site), we test how (1) the type (e.g., bias vs. random errors), (2) distribution (e.g., uniform vs. normal), and (3) magnitude (e.g., instrument uncertainty vs. field uncertainty) of forcing errors impact key outputs from a physically based snow model (the Utah Energy Balance). We also assess the role of climate by conducting the analysis at sites in maritime, intermountain, continental, and tundra snow zones. For all outputs considered, results show that (1) biases in forcing data are more important than random errors, (2) the choice of error distribution can enhance the importance of specific forcings, and (3) the level of uncertainty considered dictates the relative importance of forcings. While the relative importance of forcings varied with snow variable and climate, the results broadly
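
    A Sobol' analysis of this kind can be sketched with the SALib package; the stand-in "snow model", parameter names and bounds below are invented for illustration (the study itself runs the Utah Energy Balance):

    ```python
    import numpy as np
    from SALib.sample import saltelli
    from SALib.analyze import sobol

    # Hypothetical stand-in for a snow model: peak SWE as a nonlinear
    # function of biases in air temperature (K), precipitation (fraction)
    # and incoming longwave radiation (W m-2).
    def peak_swe(params):
        t_bias, p_bias, lw_bias = params
        return 500 * (1 + p_bias) * np.exp(-0.3 * t_bias) - 0.8 * lw_bias

    problem = {
        "num_vars": 3,
        "names": ["t_bias", "p_bias", "lw_bias"],
        "bounds": [[-2.0, 2.0], [-0.2, 0.2], [-20.0, 20.0]],
    }
    X = saltelli.sample(problem, 1024)            # N*(2D+2) parameter sets
    Y = np.apply_along_axis(peak_swe, 1, X)
    Si = sobol.analyze(problem, Y)
    print(dict(zip(problem["names"], Si["ST"])))  # total-order indices
    ```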

  14. Sensitivity analysis of geometric errors in additive manufacturing medical models.

    Science.gov (United States)

    Pinto, Jose Miguel; Arrieta, Cristobal; Andia, Marcelo E; Uribe, Sergio; Ramos-Grez, Jorge; Vargas, Alex; Irarrazaval, Pablo; Tejos, Cristian

    2015-03-01

    Additive manufacturing (AM) models are used in medical applications for surgical planning, prosthesis design and teaching. For these applications, the accuracy of the AM models is essential. Unfortunately, this accuracy is compromised due to errors introduced by each of the building steps: image acquisition, segmentation, triangulation, printing and infiltration. However, the contribution of each step to the final error remains unclear. We performed a sensitivity analysis comparing errors obtained from a reference with those obtained modifying parameters of each building step. Our analysis considered global indexes to evaluate the overall error, and local indexes to show how this error is distributed along the surface of the AM models. Our results show that the standard building process tends to overestimate the AM models, i.e. models are larger than the original structures. They also show that the triangulation resolution and the segmentation threshold are critical factors, and that the errors are concentrated at regions with high curvatures. Errors could be reduced choosing better triangulation and printing resolutions, but there is an important need for modifying some of the standard building processes, particularly the segmentation algorithms.

  15. Fractionally Integrated Models With ARCH Errors

    OpenAIRE

    Hauser, Michael A.; Kunst, Robert M.

    1993-01-01

    We introduce ARFIMA-ARCH models which simultaneously incorporate fractional differencing and conditional heteroskedasticity. We develop the likelihood function and a numerical estimation procedure for this model class. Two ARCH models - Engle- and Weiss-type - are explicitly treated and stationarity conditions are derived. Finite-sample properties of the estimation procedure are explored by Monte Carlo simulation. An application to the Standard & Poor 500 Index indicates existence o...
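
    The fractional-differencing half of an ARFIMA-ARCH model rests on the binomial expansion of (1 - L)^d; a short sketch of the weights and the resulting filter follows (illustrative data only, no ARCH part):

    ```python
    import numpy as np

    def fracdiff_weights(d, n):
        """Weights of the fractional differencing operator (1 - L)^d,
        via the standard recursion w_0 = 1, w_k = w_{k-1} * (k - 1 - d) / k."""
        w = np.empty(n)
        w[0] = 1.0
        for k in range(1, n):
            w[k] = w[k - 1] * (k - 1 - d) / k
        return w

    def fracdiff(x, d):
        """Apply (1 - L)^d to a series (expanding window, truncated at t)."""
        w = fracdiff_weights(d, len(x))
        return np.array([w[:t + 1] @ x[t::-1] for t in range(len(x))])

    x = np.cumsum(np.random.default_rng(0).normal(size=200))  # toy series
    print(fracdiff(x, 0.4)[:5])
    ```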

  16. Effect of GPS errors on Emission model

    DEFF Research Database (Denmark)

    Lehmann, Anders; Gross, Allan

    In this paper we show how Global Positioning System (GPS) data obtained from smartphones can be used to model air quality in urban settings. The paper examines the uncertainty of smartphone location utilising GPS, and ties this location uncertainty to air quality models. The results presented...

  17. Estimation in the polynomial errors-in-variables model

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    Estimators are presented for the coefficients of the polynomial errors-in-variables (EV) model when replicated observations are taken at some experimental points. These estimators are shown to be strongly consistent under mild conditions.

  18. Reducing RANS Model Error Using Random Forest

    Science.gov (United States)

    Wang, Jian-Xun; Wu, Jin-Long; Xiao, Heng; Ling, Julia

    2016-11-01

    Reynolds-Averaged Navier-Stokes (RANS) models are still the work-horse tools in the turbulence modeling of industrial flows. However, the model discrepancy due to the inadequacy of modeled Reynolds stresses largely diminishes the reliability of simulation results. In this work we use a physics-informed machine learning approach to improve the RANS modeled Reynolds stresses and propagate them to obtain the mean velocity field. Specifically, the functional forms of Reynolds stress discrepancies with respect to mean flow features are trained based on an offline database of flows with similar characteristics. The random forest model is used to predict Reynolds stress discrepancies in new flows. Then the improved Reynolds stresses are propagated to the velocity field via RANS equations. The effects of expanding the feature space through the use of a complete basis of Galilean tensor invariants are also studied. The flow in a square duct, which is challenging for standard RANS models, is investigated to demonstrate the merit of the proposed approach. The results show that both the Reynolds stresses and the propagated velocity field are improved over the baseline RANS predictions. SAND Number: SAND2016-7437 A
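
    A minimal sketch of the train-and-predict loop described, using scikit-learn; the features, targets and hyperparameters below are stand-ins, since the paper's actual inputs are mean-flow invariants extracted from calibration flows:

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    # Hypothetical training database: rows are mesh cells from calibration
    # flows, columns are mean-flow features (e.g. strain/rotation invariants,
    # wall-distance Reynolds number); the target is one component of the
    # Reynolds-stress discrepancy (DNS/LES minus baseline RANS).
    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(5000, 10))        # stand-in feature matrix
    y_train = np.tanh(X_train[:, 0]) * X_train[:, 1] \
              + 0.1 * rng.normal(size=5000)      # stand-in discrepancy

    rf = RandomForestRegressor(n_estimators=200, min_samples_leaf=5,
                               n_jobs=-1, random_state=0)
    rf.fit(X_train, y_train)

    # Predict the discrepancy field for a new flow, then add it to the
    # baseline RANS Reynolds stresses before re-solving for velocity.
    X_new = rng.normal(size=(100, 10))
    tau_correction = rf.predict(X_new)
    ```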

  19. A Monte-Carlo Bayesian framework for urban rainfall error modelling

    Science.gov (United States)

    Ochoa Rodriguez, Susana; Wang, Li-Pen; Willems, Patrick; Onof, Christian

    2016-04-01

    Rainfall estimates of the highest possible accuracy and resolution are required for urban hydrological applications, given the small size and fast response which characterise urban catchments. While significant progress has been made in recent years towards meeting rainfall input requirements for urban hydrology (including increasing use of high spatial resolution radar rainfall estimates in combination with point rain gauge records), rainfall estimates will never be perfect and the true rainfall field is, by definition, unknown [1]. Quantifying the residual errors in rainfall estimates is crucial in order to understand their reliability, as well as the impact that their uncertainty may have on subsequent runoff estimates. The quantification of errors in rainfall estimates has been an active topic of research for decades. However, existing rainfall error models have several shortcomings, including the fact that they are limited to describing errors associated with a single data source (i.e. errors associated with rain gauge measurements or radar QPEs alone) and with a single representative error source (e.g. radar-rain gauge differences, spatial-temporal resolution). Moreover, rainfall error models have mostly been developed for and tested at large scales. Studies at urban scales are mostly limited to analyses of the propagation of errors in rain gauge records through urban drainage models and to tests of model sensitivity to uncertainty arising from unmeasured rainfall variability. Only a few radar rainfall error models (originally developed for large scales) have been tested at urban scales [2] and have been shown to fail to capture small-scale storm dynamics, including storm peaks, which are of utmost importance for urban runoff simulations. In this work a Monte-Carlo Bayesian framework for rainfall error modelling at urban scales is introduced, which explicitly accounts for relevant errors (arising from insufficient accuracy and/or resolution) in multiple data

  20. Quantifying model structural error: Efficient Bayesian calibration of a regional groundwater flow model using surrogates and a data-driven error model

    Science.gov (United States)

    Xu, Tianfang; Valocchi, Albert J.; Ye, Ming; Liang, Feng

    2017-05-01

    Groundwater model structural error is ubiquitous, due to simplification and/or misrepresentation of real aquifer systems. During model calibration, the basic hydrogeological parameters may be adjusted to compensate for structural error. This may result in biased predictions when such calibrated models are used to forecast aquifer responses to new forcing. We investigate the impact of model structural error on calibration and prediction of a real-world groundwater flow model, using a Bayesian method with a data-driven error model to explicitly account for model structural error. The error-explicit Bayesian method jointly infers model parameters and structural error and thereby reduces parameter compensation. In this study, Bayesian inference is facilitated using high performance computing and fast surrogate models (based on machine learning techniques) as a substitute for the computationally expensive groundwater model. We demonstrate that with explicit treatment of model structural error, the Bayesian method yields parameter posterior distributions that are substantially different from those derived using classical Bayesian calibration that does not account for model structural error. We also found that the error-explicit Bayesian method gives significantly more accurate prediction along with reasonable credible intervals. Finally, through variance decomposition, we provide a comprehensive assessment of prediction uncertainty contributed from parameter, model structure, and measurement uncertainty. The results suggest that the error-explicit Bayesian approach provides a solution to real-world modeling applications for which data support the presence of model structural error, yet model deficiency cannot be specifically identified or corrected.

  1. The stability of input structures in a supply-driven input-output model: A regional analysis

    Energy Technology Data Exchange (ETDEWEB)

    Allison, T.

    1994-06-01

    Disruptions in the supply of strategic resources or other crucial factor inputs often present significant problems for planners and policymakers. The problem may be particularly significant at the regional level, where higher levels of product specialization mean supply restrictions are more likely to affect leading regional industries. To maintain economic stability in the event of a supply restriction, regional planners may therefore need to evaluate the importance of market versus non-market systems for allocating the remaining supply of the disrupted resource to the region's leading consuming industries. This paper reports on research that has attempted to show that large short-term changes on the supply side do not lead to substantial changes in input coefficients and do not therefore mean the abandonment of the concept of the production function, as has been suggested (Oosterhaven, 1988). The supply-driven model was tested for six sectors of the economy of Washington State and found to yield new input coefficients whose values were in most cases close approximations of their original values, even with substantial changes in supply. Average coefficient changes from a 50% output reduction in these six sectors were in the vast majority of cases (297 of a total of 315) less than +2.0% of their original values, excluding coefficient changes for the restricted input. Given these small changes, the most important issue for the validity of the supply-driven input-output model may therefore be the empirical question of the extent to which these coefficient changes are acceptable as being within the limits of approximation.
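
    The coefficient-stability test can be illustrated with a toy supply-driven (Ghosh) system; the 3-sector transactions matrix below is invented, not Washington State data:

    ```python
    import numpy as np

    # Hypothetical transactions matrix Z (row = selling sector,
    # column = buying sector) and total outputs x.
    Z = np.array([[20.0, 30.0, 10.0],
                  [15.0, 10.0, 25.0],
                  [ 5.0, 20.0, 15.0]])
    x = np.array([100.0, 120.0, 90.0])

    # Supply-driven (Ghosh) allocation coefficients: B[i, j] = Z[i, j] / x[i].
    B = Z / x[:, None]
    G = np.linalg.inv(np.eye(3) - B)               # Ghosh inverse
    v = x - Z.sum(axis=0)                          # implied primary inputs

    # A 50% cut in sector 1's primary inputs propagates forward: x' = v' G.
    v_shock = v * np.array([0.5, 1.0, 1.0])
    x_new = v_shock @ G
    Z_new = x_new[:, None] * B                     # allocation shares fixed

    # The paper's question: do the technical input coefficients
    # a[i, j] = Z[i, j] / x[j] stay near their original values?
    A_old, A_new = Z / x[None, :], Z_new / x_new[None, :]
    print((A_new - A_old) / A_old)                 # relative changes
    ```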

  2. Multiscale measurement error models for aggregated small area health data.

    Science.gov (United States)

    Aregay, Mehreteab; Lawson, Andrew B; Faes, Christel; Kirby, Russell S; Carroll, Rachel; Watjou, Kevin

    2016-08-01

    Spatial data are often aggregated from a finer (smaller) to a coarser (larger) geographical level. The process of data aggregation induces a scaling effect which smoothes the variation in the data. To address the scaling problem, multiscale models that link the convolution models at different scale levels via a shared random effect have been proposed. One of the main goals with aggregated health data is to investigate the relationship between predictors and an outcome at different geographical levels. In this paper, we extend multiscale models to examine whether a predictor effect at a finer level holds true at a coarser level. To adjust for predictor uncertainty due to aggregation, we applied measurement error models within the multiscale framework. To assess the benefit of using multiscale measurement error models, we compare the performance of multiscale models with and without measurement error in both real and simulated data. We found that ignoring the measurement error in multiscale models underestimates the regression coefficient, while it overestimates the variance of the spatially structured random effect. On the other hand, accounting for the measurement error in multiscale models provides a better model fit and unbiased parameter estimates.

  3. Error detection and rectification in digital terrain models

    Science.gov (United States)

    Hannah, M. J.

    1979-01-01

    Digital terrain models produced by computer correlation of stereo images are likely to contain occasional gross errors in terrain elevation. These errors typically result from having mismatched sub-areas of the two images, a problem which can occur for a variety of image- and terrain-related reasons. Such elevation errors produce undesirable effects when the models are further processed, and should be detected and corrected as early in the processing as possible. Algorithms have been developed to detect and correct errors in digital terrain models. These algorithms focus on the use of constraints on both the allowable slope and the allowable change in slope in local areas around each point. Relaxation-like techniques are employed in the iteration of the detection and correction phases to obtain best results.
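
    A simple version of slope- and slope-change-constrained blunder detection, with a median-based repair standing in for the paper's relaxation-style iteration; the thresholds are illustrative and terrain-dependent:

    ```python
    import numpy as np

    def flag_gross_errors(dem, cell_size, max_slope=1.0, max_curv=0.5):
        """Flag DEM cells whose local slope or change in slope exceeds
        thresholds, in the spirit of constraint-based blunder detection.
        max_slope is a rise/run limit; max_curv limits slope change per cell."""
        dzdx = np.gradient(dem, cell_size, axis=1)
        dzdy = np.gradient(dem, cell_size, axis=0)
        slope = np.hypot(dzdx, dzdy)
        # Change in slope (a rough curvature proxy).
        curv = np.hypot(np.gradient(slope, cell_size, axis=1),
                        np.gradient(slope, cell_size, axis=0))
        return (slope > max_slope) | (curv > max_curv / cell_size)

    def repair(dem, bad):
        """Replace flagged cells with the median of valid 3x3 neighbours;
        iterating detection and repair mimics the relaxation-like scheme."""
        fixed = dem.copy()
        for r, c in zip(*np.where(bad)):
            win = dem[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]
            ok = ~bad[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]
            if ok.any():
                fixed[r, c] = np.median(win[ok])
        return fixed
    ```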

  4. Adaptive Error Estimation in Linearized Ocean General Circulation Models

    Science.gov (United States)

    Chechelnitsky, Michael Y.

    1999-01-01

    Data assimilation methods are routinely used in oceanography. The statistics of the model and measurement errors need to be specified a priori. This study addresses the problem of estimating model and measurement error statistics from observations. We start by testing innovation-based methods of adaptive error estimation with low-dimensional models in the North Pacific (5-60 deg N, 132-252 deg E), applied to TOPEX/POSEIDON (T/P) sea level anomaly data, acoustic tomography data from the ATOC project, and the MIT General Circulation Model (GCM). A reduced-state linear model that describes large-scale internal (baroclinic) error dynamics is used. The methods are shown to be sensitive to the initial guess for the error statistics and the type of observations. A new off-line approach is developed, the covariance matching approach (CMA), where covariance matrices of model-data residuals are "matched" to their theoretical expectations using familiar least squares methods. This method uses observations directly instead of the innovations sequence and is shown to be related to the MT method and the method of Fu et al. (1993). Twin experiments using the same linearized MIT GCM suggest that altimetric data are ill-suited to the estimation of internal GCM errors, but that such estimates can in theory be obtained using acoustic data. The CMA is then applied to T/P sea level anomaly data and a linearization of a global GFDL GCM which uses two vertical modes. We show that the CMA method can be used with a global model and a global data set, and that the estimates of the error statistics are robust. We show that the fraction of the GCM-T/P residual variance explained by the model error is larger than that derived in Fukumori et al. (1999) with the method of Fu et al. (1993). Most of the model error is explained by the barotropic mode. However, we find that the impact of the change in the error statistics on the data assimilation estimates is very small. This is explained by the large

  5. Identification of coefficients in platform drift error model

    Institute of Scientific and Technical Information of China (English)

    邓正隆; 徐松艳; 付振宪

    2002-01-01

    The identification of the coefficients in the drift error model of a floated gyro inertial navigation platform was investigated by following the principle of the inertial navigation platform and using gyro and accelerometer output models, and a complete platform drift error model was established, with parameters as state variables, thereby establishing the system state equation and observation equation. Since these two equations are both nonlinear, the Extended Kalman Filter (EKF) was adopted. The problem of parameter identification was thus converted into a problem of state estimation. During the simulation, multi-position testing schemes were designed to excite the parameters using the gravity acceleration. Using these schemes, twenty-four error coefficients of three gyros and six error coefficients of three accelerometers were identified, which showed the feasibility of this method.

  6. Assessment of errors and uncertainty patterns in GIA modeling

    DEFF Research Database (Denmark)

    Barletta, Valentina Roberta; Spada, G.

    During the last decade many efforts have been devoted to the assessment of global sea level rise and to the determination of the mass balance of continental ice sheets. In this context, the important role of glacial-isostatic adjustment (GIA) has been clearly recognized. Yet, in many cases only one … "preferred" GIA model has been used, without any consideration of the possible errors involved. Lacking a rigorous assessment of systematic errors in GIA modeling, the reliability of the results is uncertain. GIA sensitivity and uncertainties associated with the viscosity models have been explored …, such as time-evolving shorelines and paleo-coastlines. In this study we quantify these uncertainties and their propagation in GIA response using a Monte Carlo approach to obtain spatio-temporal patterns of GIA errors. A direct application is the error estimates in ice mass balance in Antarctica and Greenland …

  7. Discrete choice models with multiplicative error terms

    DEFF Research Database (Denmark)

    Fosgerau, Mogens; Bierlaire, Michel

    2009-01-01

    differences. We develop some properties of this type of model and show that in several cases the change from an additive to a multiplicative formulation, maintaining a specification of V, may lead to a large improvement in fit, sometimes larger than that gained from introducing random coefficients in V …

  8. A pre-calibration approach to select optimum inputs for hydrological models in data-scarce regions

    Science.gov (United States)

    Tarawneh, Esraa; Bridge, Jonathan; Macdonald, Neil

    2016-10-01

    This study uses the Soil and Water Assessment Tool (SWAT) model to quantitatively compare available input datasets in a data-poor dryland environment (Wala catchment, Jordan; 1743 km²). Eighteen scenarios combining best available land-use, soil and weather datasets (1979-2002) are considered to construct SWAT models. Data include local observations and global reanalysis data products. Uncalibrated model outputs assess the variability in model performance derived from input data sources only. Model performance against discharge and sediment load data are compared using r², Nash-Sutcliffe efficiency (NSE), root mean square error standard deviation ratio (RSR) and percent bias (PBIAS). NSE statistic varies from 0.56 to -12 and 0.79 to -85 for best- and poorest-performing scenarios against observed discharge and sediment data respectively. Global weather inputs yield considerable improvements on discontinuous local datasets, whilst local soil inputs perform considerably better than global-scale mapping. The methodology provides a rapid, transparent and transferable approach to aid selection of the most robust suite of input data.
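
    The three headline statistics are easy to compute; a small sketch with invented discharge numbers (the definitions follow the common Moriasi et al. conventions, which is an assumption about the paper's exact formulas):

    ```python
    import numpy as np

    def nse(obs, sim):
        """Nash-Sutcliffe efficiency: 1 is perfect, < 0 is worse than the mean."""
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

    def pbias(obs, sim):
        """Percent bias: positive values indicate model underestimation."""
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return 100 * np.sum(obs - sim) / np.sum(obs)

    def rsr(obs, sim):
        """RMSE divided by the standard deviation of the observations."""
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return np.sqrt(np.mean((obs - sim) ** 2)) / obs.std()

    # Ranking uncalibrated input scenarios against observed discharge:
    obs = np.array([3.2, 5.1, 8.4, 4.0, 2.7])
    scenarios = {"local_soil+global_weather": [3.0, 4.8, 8.9, 4.4, 2.5],
                 "global_soil+local_weather": [2.1, 3.0, 5.5, 2.8, 1.9]}
    for name, sim in scenarios.items():
        print(name, round(nse(obs, sim), 2), round(pbias(obs, sim), 1))
    ```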

  9. Background Error Correlation Modeling with Diffusion Operators

    Science.gov (United States)

    2013-01-01

    functions defined on the orthogonal curvilinear grid of the Navy Coastal Ocean Model (NCOM) [28] set up in Monterey Bay (Fig. 4). The number N … H2 = [1 1; 1 −1]; the HMs with order N = 2^n, n = 1, 2, …, can be easily constructed. HMs with N = 12, 20 were constructed "manually" more than a century

  10. Scaling precipitation input to distributed hydrological models by measured snow distribution

    Science.gov (United States)

    Voegeli, Christian; Lehning, Michael; Wever, Nander; Bavay, Mathias; Bühler, Yves; Marty, Mauro; Molnar, Peter

    2016-04-01

    Precise knowledge about the snow distribution in alpine terrain is crucial for various applications such as flood risk assessment, avalanche warning or water supply and hydropower. To simulate the seasonal snow cover development in alpine terrain, the spatially distributed, physics-based model Alpine3D is suitable. The model is often driven by spatial interpolations from automatic weather stations (AWS). As AWS are sparsely spread, the data needs to be interpolated, leading to errors in the spatial distribution of the snow cover - especially on subcatchment scale. With the recent advances in remote sensing techniques, maps of snow depth can be acquired with high spatial resolution and vertical accuracy. Here we use maps of the snow depth distribution, calculated from summer and winter digital surface models acquired with the airborne opto-electronic scanner ADS to preprocess and redistribute precipitation input data for Alpine3D to improve the accuracy of spatial distribution of snow depth simulations. A differentiation between liquid and solid precipitation is made, to account for different precipitation patterns that can be expected from rain and snowfall. For liquid precipitation, only large scale distribution patterns are applied to distribute precipitation in the simulation domain. For solid precipitation, an additional small scale distribution, based on the ADS data, is applied. The large scale patterns are generated using AWS measurements interpolated over the domain. The small scale patterns are generated by redistributing the large scale precipitation according to the relative snow depth in the ADS dataset. The determination of the precipitation phase is done using an air temperature threshold. Using this simple approach to redistribute precipitation, the accuracy of spatial snow distribution could be improved significantly. The standard deviation of absolute snow depth error could be reduced by a factor of 2 to less than 20 cm for the season 2011/12. The
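
    A minimal sketch of the described redistribution, scaling only the solid fraction by the relative measured snow depth; the grids, the 1 °C phase threshold and the gamma-distributed depths are illustrative:

    ```python
    import numpy as np

    def redistribute_precip(p_large, snow_depth_map, t_air, t_thresh=1.0):
        """Scale gridded precipitation with a measured snow-depth map.

        p_large        : large-scale precipitation field (AWS interpolation)
        snow_depth_map : high-resolution snow depth map (e.g. ADS-derived)
        t_air          : air temperature field used to split rain from snow

        Solid precipitation is redistributed by the relative snow depth;
        liquid precipitation keeps the large-scale pattern, following the
        abstract's description."""
        weight = snow_depth_map / snow_depth_map.mean()  # relative snow depth
        is_snow = t_air < t_thresh
        return np.where(is_snow, p_large * weight, p_large)

    rng = np.random.default_rng(2)
    p = np.full((50, 50), 5.0)                   # mm, uniform interpolated field
    hs = np.clip(rng.gamma(2.0, 0.5, (50, 50)), 0.05, None)
    t = np.full((50, 50), -3.0)                  # below threshold -> snowfall
    p_scaled = redistribute_precip(p, hs, t)
    print(p_scaled.mean())                       # domain mean is preserved
    ```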

  11. Multi-innovation identification methods for input nonlinear equation-error autoregressive systems

    Institute of Scientific and Technical Information of China (English)

    丁锋; 毛亚文

    2015-01-01

    Typical block-oriented nonlinear systems include the basic input nonlinear systems, output nonlinear systems, input-output nonlinear systems and feedback nonlinear systems. Input nonlinear systems include input nonlinear equation-error type systems and input nonlinear output-error type systems. Taking the input nonlinear equation-error autoregressive systems, namely the input nonlinear controlled autoregressive autoregressive (IN-CARAR) systems, as an example, this paper studies and presents stochastic gradient (SG) identification methods, multi-innovation SG methods, recursive least squares (LS) identification methods and multi-innovation LS identification methods for IN-CARAR systems, based on the over-parameterization model, the key term separation principle, the data filtering technique and the model decomposition technique. These methods can be extended to other input nonlinear equation-error systems, input nonlinear output-error type systems, output nonlinear equation-error type systems, output nonlinear output-error systems and feedback nonlinear systems. Finally, the computational efficiency, computational steps and flowcharts of several typical identification algorithms are discussed.
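
    The multi-innovation idea, stacking the last p innovations instead of using a single one, can be sketched for the linear-regression core of these algorithms; building the over-parameterized IN-CARAR regressor is omitted, and all data are synthetic:

    ```python
    import numpy as np

    def misg_identify(phi, y, p=5, eps=1e-8):
        """Multi-innovation stochastic gradient (MISG) estimate of theta in
        y(t) = phi(t)^T theta + v(t). At each step the scalar innovation of
        plain SG is expanded to the last p innovations, which speeds up
        convergence. A sketch of the core recursion only."""
        n, d = phi.shape
        theta = np.zeros(d)
        r = 1.0
        for t in range(n):
            lo = max(0, t - p + 1)
            Phi = phi[lo:t + 1].T                 # d x (<=p) stacked regressors
            E = y[lo:t + 1] - Phi.T @ theta       # innovation vector
            r += phi[t] @ phi[t]                  # step-size normalizer
            theta = theta + (Phi @ E) / (r + eps)
        return theta

    rng = np.random.default_rng(3)
    theta_true = np.array([0.8, -0.4, 1.5])
    phi = rng.normal(size=(5000, 3))
    y = phi @ theta_true + 0.05 * rng.normal(size=5000)
    print(misg_identify(phi, y))
    ```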

  12. Finding of Correction Factor and Dimensional Error in Bio-AM Model by FDM Technique

    Science.gov (United States)

    Manmadhachary, Aiamunoori; Ravi Kumar, Yennam; Krishnanand, Lanka

    2016-06-01

    Additive Manufacturing (AM) is a rapid manufacturing process whose input data can come from various sources such as 3-Dimensional (3D) Computer Aided Design (CAD), Computed Tomography (CT), Magnetic Resonance Imaging (MRI) and 3D scanner data. From CT/MRI data, Biomedical Additive Manufacturing (Bio-AM) models can be manufactured. A Bio-AM model gives a better lead in preplanning of oral and maxillofacial surgery. However, manufacturing an accurate Bio-AM model remains an unsolved problem. The current paper quantifies the error between the Standard Triangle Language (STL) model and the Bio-AM model of a dry mandible, and derives a correction factor for Bio-AM models produced with the Fused Deposition Modelling (FDM) technique. In the present work, dry mandible CT images are acquired with a CT scanner and converted into a 3D CAD model in the form of an STL model. The data are then sent to an FDM machine for fabrication of the Bio-AM model. The difference between the Bio-AM and STL model dimensions is taken as the dimensional error, and the ratio of STL to Bio-AM model dimensions as the correction factor. This correction factor helps to fabricate AM models with accurate dimensions of the patient anatomy. Such dimensionally true Bio-AM models increase the safety and accuracy of preplanning in oral and maxillofacial surgery. The correction factor for the Dimension SST 768 FDM AM machine is 1.003 and the dimensional error is limited to 0.3 %.
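
    The paper's two quantities reduce to simple ratios; a sketch with invented mandible measurements (the 1.003 factor reported applies to the Dimension SST 768 specifically):

    ```python
    def correction_factor(stl_dims, printed_dims):
        """Correction factor = STL / printed dimension; dimensional error in
        percent, per the paper's definitions. Measurements are invented."""
        cf = [s / p for s, p in zip(stl_dims, printed_dims)]
        err_pct = [100 * (p - s) / s for s, p in zip(stl_dims, printed_dims)]
        return cf, err_pct

    # Hypothetical mandible landmark distances in mm (STL design vs FDM print).
    stl = [101.20, 35.60, 18.90]
    printed = [100.90, 35.49, 18.84]
    cf, err = correction_factor(stl, printed)
    print([round(c, 4) for c in cf])     # ~1.003 each
    print([round(e, 2) for e in err])    # ~ -0.3 % (print slightly undersized)

    # Pre-scaling the next STL by the mean factor compensates the shrinkage.
    mean_cf = sum(cf) / len(cf)
    pre_scaled = [d * mean_cf for d in stl]
    ```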

  13. Bayesian modeling of measurement error in predictor variables

    NARCIS (Netherlands)

    Fox, Gerardus J.A.; Glas, Cornelis A.W.

    2003-01-01

    It is shown that measurement error in predictor variables can be modeled using item response theory (IRT). The predictor variables, that may be defined at any level of an hierarchical regression model, are treated as latent variables. The normal ogive model is used to describe the relation between

  14. Forecasting the Euro exchange rate using vector error correction models

    NARCIS (Netherlands)

    Aarle, B. van; Bos, M.; Hlouskova, J.

    2000-01-01

    Forecasting the Euro Exchange Rate Using Vector Error Correction Models. — This paper presents an exchange rate model for the Euro exchange rates of four major currencies, namely the US dollar, the British pound, the Japanese yen and the Swiss franc. The model is based on the monetary approach of ex

  15. VQ-based model for binary error process

    Science.gov (United States)

    Csóka, Tibor; Polec, Jaroslav; Csóka, Filip; Kotuliaková, Kvetoslava

    2017-05-01

    A variety of complex techniques, such as forward error correction (FEC), automatic repeat request (ARQ), hybrid ARQ or cross-layer optimization, require in their design and optimization phase a realistic model of the binary error process present in a specific digital channel. Past and more recent modeling approaches focus on capturing one or more stochastic characteristics with precision sufficient for the desired model application, thereby applying concepts and methods that severely limit the model's applicability (e.g., in the form of prerequisite expectations about the modeled process). The proposed novel concept utilizing a Vector Quantization (VQ)-based approach to binary process modeling offers a viable alternative capable of superior modeling of the most commonly observed small- and large-scale stochastic characteristics of a binary error process on a digital channel. Precision of the proposed model was verified using multiple statistical distances against data captured in a wireless sensor network logical channel trace. Furthermore, Pearson's goodness-of-fit test was performed on the output of all model variants to conclusively demonstrate the usability of the model for the realistic captured binary error process. Finally, the presented results prove the proposed model's applicability and its ability to far surpass the capabilities of the reference Elliott model.

  16. Thermal Error Modelling of the Spindle Using Neurofuzzy Systems

    OpenAIRE

    Jingan Feng; Xiaoqi Tang; Yanlei Li; Bao Song

    2016-01-01

    This paper proposes a new combined model to predict the spindle deformation, which combines the grey models and the ANFIS (adaptive neurofuzzy inference system) model. The grey models are used to preprocess the original data, and the ANFIS model is used to adjust the combined model. The outputs of the grey models are used as the inputs of the ANFIS model to train the model. To evaluate the performance of the combined model, an experiment is implemented. Three Pt100 thermal resistances are use...

  17. Modeling of Bit Error Rate in Cascaded 2R Regenerators

    DEFF Research Database (Denmark)

    Öhman, Filip; Mørk, Jesper

    2006-01-01

    This paper presents a simple and efficient model for estimating the bit error rate in a cascade of optical 2R-regenerators. The model includes the influences of amplifier noise, finite extinction ratio and nonlinear reshaping. The interplay between the different signal impairments and the rege…

  18. Modeling Human Error Mechanism for Soft Control in Advanced Control Rooms (ACRs)

    Energy Technology Data Exchange (ETDEWEB)

    Aljneibi, Hanan Salah Ali [Khalifa Univ., Abu Dhabi (United Arab Emirates); Ha, Jun Su; Kang, Seongkeun; Seong, Poong Hyun [KAIST, Daejeon (Korea, Republic of)

    2015-10-15

    To achieve the switch from conventional analog-based design to digital design in ACRs, a large number of manual operating controls and switches have to be replaced by a few common multi-function devices, called the soft control system. The soft controls in APR-1400 ACRs are classified into safety-grade and non-safety-grade soft controls; each was designed using different and independent input devices in ACRs. Operations using soft controls require operators to perform new tasks which were not necessary with conventional controls, such as navigating computerized displays to monitor plant information and control devices. These computerized displays and soft controls may make operations more convenient, but they might also cause new types of human error. In this study the human error mechanism during soft control is studied and modeled for use in the analysis and enhancement of human performance (or reduction of human errors) during NPP operation. The developed model would contribute to many applications that improve human performance, HMI designs, and operators' training programs in ACRs. The developed model of the human error mechanism for soft control is based on the assumptions that a human operator has a certain amount of capacity in cognitive resources; that if the resources required by operating tasks are greater than the resources invested by the operator, human error (or poor human performance) is likely to occur (especially 'slips'); that good HMI (Human-Machine Interface) design decreases the required resources; that an operator's skillfulness decreases the required resources; and that high vigilance increases the invested resources.

  19. Comparative study and error analysis of digital elevation model interpolations

    Institute of Scientific and Technical Information of China (English)

    CHEN Ji-long; WU Wei; LIU Hong-bin

    2008-01-01

    Researchers in P.R.China commonly create triangulated irregular networks (TINs) from contours and then convert the TINs into digital elevation models (DEMs). However, DEMs produced by this method cannot precisely describe and simulate key hydrological features such as rivers and drainage borders. Taking a hilly region in southwestern China as the research area and using ArcGIS software, we analyzed the errors of different interpolation methods to obtain the error distributions and precisions of the different algorithms and to provide references for DEM production. The results show that the errors of the different interpolations follow normal distributions, and that large errors exist near terrain structure lines. Furthermore, the results show that the precision of a DEM interpolated with the Australian National University digital elevation model (ANUDEM) method is higher than that of one interpolated from a TIN. The DEM interpolated from a TIN is nonetheless acceptable for generating DEMs in the hilly region of southwestern China.

  20. Accuracy of travel time distribution (TTD) models as affected by TTD complexity, observation errors, and model and tracer selection

    Science.gov (United States)

    Green, Christopher T.; Zhang, Yong; Jurgens, Bryant C.; Starn, J. Jeffrey; Landon, Matthew K.

    2014-01-01

    Analytical models of the travel time distribution (TTD) from a source area to a sample location are often used to estimate groundwater ages and solute concentration trends. The accuracies of these models are not well known for geologically complex aquifers. In this study, synthetic datasets were used to quantify the accuracy of four analytical TTD models as affected by TTD complexity, observation errors, model selection, and tracer selection. Synthetic TTDs and tracer data were generated from existing numerical models with complex hydrofacies distributions for one public-supply well and 14 monitoring wells in the Central Valley, California. Analytical TTD models were calibrated to synthetic tracer data, and prediction errors were determined for estimates of TTDs and conservative tracer (NO3−) concentrations. Analytical models included a new, scale-dependent dispersivity model (SDM) for two-dimensional transport from the watertable to a well, and three other established analytical models. The relative influence of the error sources (TTD complexity, observation error, model selection, and tracer selection) depended on the type of prediction. Geological complexity gave rise to complex TTDs in monitoring wells that strongly affected errors of the estimated TTDs. However, prediction errors for NO3− and median age depended more on tracer concentration errors. The SDM tended to give the most accurate estimates of the vertical velocity and other predictions, although TTD model selection had minor effects overall. Adding tracers improved predictions if the new tracers had different input histories. Studies using TTD models should focus on the factors that most strongly affect the desired predictions.
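
    The basic use of a TTD, convolving a tracer input history against it, fits in a few lines; the exponential TTD, 25-year mean age and NO3- input history below are illustrative stand-ins, not the paper's calibrated models:

    ```python
    import numpy as np

    def exponential_ttd(tau, mean_age):
        """Exponential travel time distribution (well-mixed aquifer)."""
        return np.exp(-tau / mean_age) / mean_age

    def outlet_concentration(c_in, ttd, dt):
        """Concentration at the well for the latest time step:
        C_out(t) = integral of g(tau) * C_in(t - tau) d tau.
        c_in must be ordered oldest -> newest."""
        n = len(c_in)
        tau = np.arange(n) * dt
        g = ttd(tau)
        g /= np.sum(g) * dt                  # normalize on the finite window
        return dt * np.sum(g * c_in[::-1])   # g(0) pairs with the newest input

    years = np.arange(1950, 2015)
    no3_input = np.interp(years, [1950, 1990, 2014], [1.0, 12.0, 9.0])  # mg/L
    c_2014 = outlet_concentration(no3_input,
                                  lambda t: exponential_ttd(t, 25.0), 1.0)
    print(round(c_2014, 2), "mg/L at the well in 2014")
    ```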

  1. Research on identifying the dynamic error model of strapdown gyro on 3-axis turntable

    Institute of Scientific and Technical Information of China (English)

    WANG Hai; REN Shun-qing; WANG Chang-hong

    2005-01-01

    The dynamic errors of gyros are important error sources of a strapdown inertial navigation system. In order to identify the dynamic error model coefficients accurately, the static error model coefficients, which lay a foundation for compensation while identifying the dynamic error model, are first identified in the gravity acceleration field by using the angular position function of the three-axis turntable. Angular acceleration and angular velocity are excited on the input, output and spin axes of the gyros when the outer axis and the middle axis of the three-axis turntable are simultaneously in the uniform angular velocity state, while the inner axis of the turntable is held at different static angular positions. Eight groups of data are sampled, with the inner axis at eight different angular positions. These data are functions of the middle-axis and inner-axis positions. The harmonic analysis method is applied to these data twice, versus the middle-axis positions and the inner-axis positions respectively, so that the dynamic error model coefficients are finally identified through the least-squares method. In the meantime, the optimal angular velocities of the outer and middle axes are selected by computing the determinant of the information matrix.
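
    As an illustration of the harmonic-analysis-plus-least-squares step described above (a sketch with invented coefficients, not the authors' code), a simulated gyro output sampled at evenly spaced turntable positions can be regressed on low-order harmonics:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    theta = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)  # inner-axis positions (rad)
    c_true = np.array([0.05, 0.8, -0.3, 0.1, 0.02])            # assumed "true" coefficients

    # Design matrix with a constant term and first/second harmonics of the position
    A = np.column_stack([np.ones_like(theta),
                         np.cos(theta), np.sin(theta),
                         np.cos(2 * theta), np.sin(2 * theta)])
    y = A @ c_true + rng.normal(scale=0.01, size=theta.size)   # simulated gyro output

    c_hat, *_ = np.linalg.lstsq(A, y, rcond=None)              # least-squares estimate
    print(np.round(c_hat, 3))

    # The determinant of the information matrix A.T @ A indicates how well the
    # chosen excitation positions condition the coefficient estimation.
    print(np.linalg.det(A.T @ A))
    ```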

  2. Wage Differentials among Workers in Input-Output Models.

    Science.gov (United States)

    Filippini, Luigi

    1981-01-01

    Using an input-output framework, the author derives hypotheses on wage differentials based on the assumption that human capital (in this case, education) will explain workers' wage differentials. The hypothetical wage differentials are tested on data from the Italian economy. (RW)

  3. A model for navigational errors in complex environmental fields.

    Science.gov (United States)

    Postlethwaite, Claire M; Walker, Michael M

    2014-12-21

    Many animals are believed to navigate using environmental signals such as light, sound, odours and magnetic fields. However, animals rarely navigate directly to their target location, but instead make a series of navigational errors which are corrected during transit. In previous work, we introduced a model showing that differences between an animal's 'cognitive map' of the environmental signals used for navigation and the true nature of these signals caused a systematic pattern in orientation errors when navigation begins. The model successfully predicted the pattern of errors seen in previously collected data from homing pigeons, but underestimated the amplitude of the errors. In this paper, we extend our previous model to include more complicated distortions of the contour lines of the environmental signals. Specifically, we consider the occurrence of critical points in the fields describing the signals. We consider three scenarios and compute orientation errors as parameters are varied in each case. We show that the occurrence of critical points can be associated with large variations in initial orientation errors over a small geographic area. We discuss the implications that these results have on predicting how animals will behave when encountering complex distortions in any environmental signals they use to navigate.

  4. A cumulative entropy method for distribution recognition of model error

    Science.gov (United States)

    Liang, Yingjie; Chen, Wen

    2015-02-01

    This paper develops a cumulative entropy method (CEM) to recognize the most suitable distribution for model error. Within the CEM, the Lévy stable distribution is employed to capture the statistical properties of model error. The strategies are tested on 250 experiments of axially loaded CFT steel stub columns in conjunction with four building codes: Japan (AIJ, 1997), China (DL/T, 1999), Eurocode 4 (EU4, 2004), and the United States (AISC, 2005). The cumulative entropy method is validated as more computationally efficient than the Shannon entropy method. Compared with the Kolmogorov-Smirnov test and the root mean square deviation, the CEM provides an alternative and powerful model selection criterion for recognizing the most suitable distribution for the model error.

  5. Assessment of errors and uncertainty patterns in GIA modeling

    DEFF Research Database (Denmark)

    Barletta, Valentina Roberta; Spada, G.

    During the last decade many efforts have been devoted to the assessment of global sea level rise and to the determination of the mass balance of continental ice sheets. In this context, the important role of glacial-isostatic adjustment (GIA) has been clearly recognized. Yet, in many cases only one ... in the literature. However, at least two major sources of errors remain. The first is associated with the ice models, i.e. the spatial distribution of ice and the history of melting (this is especially the case for Antarctica); the second with the numerical implementation of model features relevant to sea level modeling ... GIA modeling. GIA errors are also important in the far field of previously glaciated areas and in the time evolution of global indicators. In this regard we also account for other possible error sources which can impact global indicators related to GIA, such as the sea level history. The thermal...

  6. High Temperature Test Facility Preliminary RELAP5-3D Input Model Description

    Energy Technology Data Exchange (ETDEWEB)

    Bayless, Paul David [Idaho National Laboratory

    2015-12-01

    A RELAP5-3D input model is being developed for the High Temperature Test Facility at Oregon State University. The current model is described in detail. Further refinements will be made to the model as final as-built drawings are released and when system characterization data are available for benchmarking the input model.

  7. Data Quality in Linear Regression Models: Effect of Errors in Test Data and Errors in Training Data on Predictive Accuracy

    Directory of Open Access Journals (Sweden)

    Barbara D. Klein

    1999-01-01

    Although databases used in many organizations have been found to contain errors, little is known about the effect of these errors on predictions made by linear regression models. The paper uses a real-world example, the prediction of the net asset values of mutual funds, to investigate the effect of data quality on linear regression models. The results of two experiments are reported. The first experiment shows that the error rate and the magnitude of error in the data used for model prediction negatively affect the predictive accuracy of linear regression models. The second experiment shows that the error rate and the magnitude of error in the data used to build the model positively affect the predictive accuracy of linear regression models. All findings are statistically significant. The findings have managerial implications for users and builders of linear regression models.
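
    A minimal sketch of the first experimental idea, with synthetic data standing in for the mutual fund example: inject cell-level errors into the test set and watch the predictive accuracy of a linear regression degrade. The error rates and magnitudes below are invented.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import mean_absolute_error

    rng = np.random.default_rng(1)
    n, p = 1000, 5
    X = rng.normal(size=(n, p))
    y = X @ np.array([1.5, -2.0, 0.7, 0.0, 3.1]) + rng.normal(scale=0.5, size=n)
    X_tr, X_te, y_tr, y_te = X[:700], X[700:], y[:700], y[700:]

    def corrupt(X, error_rate, magnitude, rng):
        """Perturb a random fraction of cells with noise of a given magnitude."""
        X = X.copy()
        mask = rng.random(X.shape) < error_rate
        X[mask] += rng.normal(scale=magnitude, size=mask.sum())
        return X

    for rate in (0.0, 0.1, 0.3):
        model = LinearRegression().fit(X_tr, y_tr)          # clean training data
        mae = mean_absolute_error(y_te, model.predict(corrupt(X_te, rate, 2.0, rng)))
        print(f"test-data error rate {rate:.1f}: MAE = {mae:.3f}")
    ```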

  8. Development of ANFIS models for air quality forecasting and input optimization for reducing the computational cost and time

    Science.gov (United States)

    Prasad, Kanchan; Gorai, Amit Kumar; Goyal, Pramila

    2016-03-01

    This study aims to develop an adaptive neuro-fuzzy inference system (ANFIS) for forecasting the daily concentrations of five air pollutants [sulphur dioxide (SO2), nitrogen dioxide (NO2), carbon monoxide (CO), ozone (O3) and particulate matter (PM10)] in the atmosphere of a megacity (Howrah). Air pollution in the city (Howrah) is rising in parallel with the economy, and thus observing, forecasting and controlling air pollution become increasingly important due to the health impacts. ANFIS serves as a basis for constructing a set of fuzzy IF-THEN rules, with appropriate membership functions to generate the stipulated input-output pairs. The ANFIS model predictor considers the values of meteorological factors (pressure, temperature, relative humidity, dew point, visibility, wind speed, and precipitation) and the previous day's pollutant concentration in different combinations as the inputs to predict the one-day-ahead and same-day air pollution concentrations. The concentration values of the five air pollutants and seven meteorological parameters of the Howrah city during the period 2009 to 2011 were used for development of the ANFIS model. Collinearity tests were conducted to eliminate redundant input variables. A forward selection (FS) method is used for selecting the different subsets of input variables. Application of collinearity tests and FS techniques reduces the number of input variables and subsets, which helps reduce the computational cost and time. The performances of the models were evaluated on the basis of four statistical indices (coefficient of determination, normalized mean square error, index of agreement, and fractional bias).
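
    The forward selection step can be sketched as follows; the candidate input names, data, and stopping tolerance are hypothetical stand-ins for the paper's meteorological and persistence variables.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(2)
    names = ["pressure", "temp", "rh", "dewpoint", "visibility", "wind", "precip", "pm10_lag1"]
    X = rng.normal(size=(500, len(names)))
    y = 0.8 * X[:, 7] + 0.4 * X[:, 1] - 0.3 * X[:, 5] + rng.normal(scale=0.5, size=500)

    selected, remaining = [], list(range(len(names)))
    best_score = -np.inf
    while remaining:
        # Try adding each remaining variable and keep the best cross-validated fit
        scores = [(cross_val_score(LinearRegression(), X[:, selected + [j]], y,
                                   cv=5, scoring="r2").mean(), j) for j in remaining]
        score, j = max(scores)
        if score <= best_score + 1e-4:   # stop when no candidate improves the fit
            break
        best_score = score
        selected.append(j)
        remaining.remove(j)
    print([names[j] for j in selected], round(best_score, 3))
    ```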

  9. A method of aggregating heterogeneous subgrid land cover input data for multi-scale urban parameterization within atmospheric models

    Science.gov (United States)

    Shaffer, S. R.

    2015-12-01

    A method for representing grid-scale heterogeneous development density for urban climate models from probability density functions of sub-grid-resolution observed data is proposed. Derived values are evaluated in relation to the normalized Shannon entropy to provide guidance in assessing model input data. Urban fractions for dominant and mosaic urban class contributions are estimated by combining analyses of 30-meter-resolution National Land Cover Database 2006 data products for continuous impervious surface area and categorical land cover. The method aims at reducing model error through improved urban parameterization and representation of the observations employed as input data. The multi-scale variation of parameter values is demonstrated for several methods of utilizing the input. The method provides multi-scale and spatial guidance for determining where parameterization schemes may misrepresent the heterogeneity of the input data, along with motivation for employing mosaic techniques based upon assessment of the input data. The proposed method has wider potential for geographic application and complements data products which focus on characterizing central business districts. The method enables obtaining urban fraction dependent upon resolution and class partition scheme, based upon improved parameterization of observed data, which provides one means of influencing simulation predictions at various aggregated grid scales.
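
    A minimal sketch of the normalized Shannon entropy used here as a heterogeneity guide, assuming land-cover class fractions within a grid cell as input:

    ```python
    import numpy as np

    def normalized_entropy(fractions):
        """Shannon entropy of class fractions, normalized to [0, 1]."""
        p = np.asarray(fractions, dtype=float)
        p = p[p > 0] / p.sum()                  # keep occupied classes only
        return float(-(p * np.log(p)).sum() / np.log(len(p))) if len(p) > 1 else 0.0

    print(normalized_entropy([1.0, 0.0, 0.0]))  # homogeneous cell -> 0.0
    print(normalized_entropy([0.5, 0.3, 0.2]))  # mixed cell -> closer to 1.0
    ```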

  10. A priori discretization error metrics for distributed hydrologic modeling applications

    Science.gov (United States)

    Liu, Hongli; Tolson, Bryan A.; Craig, James R.; Shafii, Mahyar

    2016-12-01

    Watershed spatial discretization is an important step in developing a distributed hydrologic model. A key difficulty in the spatial discretization process is maintaining a balance between the aggregation-induced information loss and the increase in computational burden caused by the inclusion of additional computational units. Objective identification of an appropriate discretization scheme still remains a challenge, in part because of the lack of quantitative measures for assessing discretization quality, particularly prior to simulation. This study proposes a priori discretization error metrics to quantify the information loss of any candidate discretization scheme without having to run and calibrate a hydrologic model. These error metrics are applicable to multi-variable and multi-site discretization evaluation and provide directly interpretable information to the hydrologic modeler about discretization quality. The first metric, a subbasin error metric, quantifies the routing information loss from discretization, and the second, a hydrological response unit (HRU) error metric, improves upon existing a priori metrics by quantifying the information loss due to changes in land cover or soil type property aggregation. The metrics are straightforward to understand and easy to recode. Informed by the error metrics, a two-step discretization decision-making approach is proposed with the advantage of reducing extreme errors and meeting the user-specified discretization error targets. The metrics and decision-making approach are applied to the discretization of the Grand River watershed in Ontario, Canada. Results show that information loss increases as discretization gets coarser. Moreover, results help to explain the modeling difficulties associated with smaller upstream subbasins since the worst discretization errors and highest error variability appear in smaller upstream areas instead of larger downstream drainage areas. Hydrologic modeling experiments under

  11. Two Error Models for Calibrating SCARA Robots based on the MDH Model

    Directory of Open Access Journals (Sweden)

    Li Xiaolong

    2017-01-01

    This paper describes the process of using two error models for calibrating Selective Compliance Assembly Robot Arm (SCARA) robots based on the modified Denavit-Hartenberg (MDH) model, with the aim of improving the robot's accuracy. One of the error models is the position error model, which uses robot position errors with respect to an accurate robot base frame built before the measurement commenced. The other is the distance error model, which uses only the robot's moving distance to calculate errors. Because calibration requires the end-effector to be accurately measured, a laser tracker was used to measure the robot position and distance errors. After calibrating the robot, the end-effector locations were measured again while compensating with the error-model parameters obtained from the calibration. The finding is that the robot's accuracy improved greatly after compensation with the calibrated parameters.

  12. Direct cointegration testing in error-correction models

    NARCIS (Netherlands)

    F.R. Kleibergen (Frank); H.K. van Dijk (Herman)

    1994-01-01

    An error correction model is specified having only exactly identified parameters, some of which reflect a possible departure from a cointegration model. Wald, likelihood ratio, and Lagrange multiplier statistics are derived to test for the significance of these parameters. The con

  13. Structure and Asymptotic theory for Nonlinear Models with GARCH Errors

    NARCIS (Netherlands)

    F. Chan (Felix); M.J. McAleer (Michael); M.C. Medeiros (Marcelo)

    2011-01-01

    Nonlinear time series models, especially those with regime-switching and conditionally heteroskedastic errors, have become increasingly popular in the economics and finance literature. However, much of the research has concentrated on the empirical applications of various models, with li

  14. Calibrating Car-Following Model Considering Measurement Errors

    Directory of Open Access Journals (Sweden)

    Chang-qiao Shao

    2013-01-01

    Car-following models have important applications in traffic and safety engineering. To enhance the accuracy of such models in predicting the behavior of individual drivers, considerable research has striven to improve model calibration technologies. However, microscopic car-following models are generally calibrated using macroscopic traffic data, ignoring measurement errors in the variables, which leads to unreliable and erroneous conclusions. This paper aims to develop a technology to calibrate the well-known Van Aerde model. In particular, the effect of measurement errors in the variables on the accuracy of the estimates is considered. In order to calibrate the model using microscopic data, a new parameter estimation method, named the two-step approach, is proposed. The results show that the modified Van Aerde model is, to a certain extent, more reliable than the generic model.
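
    To illustrate why ignoring errors in the variables biases a calibration (a generic errors-in-variables demonstration, not the paper's two-step method), one can compare ordinary least squares with orthogonal (total) least squares on data whose regressor is itself noisy:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    x_true = rng.uniform(0, 10, 500)
    y = 2.0 * x_true + 1.0 + rng.normal(scale=0.5, size=500)
    x_obs = x_true + rng.normal(scale=1.0, size=500)   # measurement error in x

    # OLS slope is attenuated toward zero by the error in x
    b_ols = np.polyfit(x_obs, y, 1)[0]

    # Orthogonal regression: first principal component of the centered data
    X = np.column_stack([x_obs - x_obs.mean(), y - y.mean()])
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    b_tls = Vt[0, 1] / Vt[0, 0]

    print(f"OLS slope: {b_ols:.2f}, TLS slope: {b_tls:.2f} (true: 2.00)")
    ```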

  15. Structure and asymptotic theory for nonlinear models with GARCH errors

    Directory of Open Access Journals (Sweden)

    Felix Chan

    2015-01-01

    Nonlinear time series models, especially those with regime-switching and/or conditionally heteroskedastic errors, have become increasingly popular in the economics and finance literature. However, much of the research has concentrated on the empirical applications of various models, with little theoretical or statistical analysis associated with the structure of the processes or the associated asymptotic theory. In this paper, we derive sufficient conditions for strict stationarity and ergodicity of three different specifications of the first-order smooth transition autoregressions with heteroskedastic errors. This is essential, among other reasons, to establish the conditions under which the traditional LM linearity tests based on Taylor expansions are valid. We also provide sufficient conditions for consistency and asymptotic normality of the Quasi-Maximum Likelihood Estimator for a general nonlinear conditional mean model with first-order GARCH errors.

  16. Augmented GNSS differential corrections minimum mean square error estimation sensitivity to spatial correlation modeling errors.

    Science.gov (United States)

    Kassabian, Nazelie; Lo Presti, Letizia; Rispoli, Francesco

    2014-06-11

    Railway signaling is a safety system that has evolved over the last couple of centuries towards autonomous functionality. Recently, great effort has been devoted in this field towards the use and exploitation of Global Navigation Satellite System (GNSS) signals and GNSS augmentation systems, in view of lower railway track equipment and maintenance costs; this is a priority to sustain the investments for modernizing the local and regional lines, most of which lack automatic train protection systems and are still manually operated. The objective of this paper is to assess the sensitivity of the Linear Minimum Mean Square Error (LMMSE) algorithm to modeling errors in the spatial correlation function that characterizes true pseudorange Differential Corrections (DCs). This study is inspired by the railway application; however, it applies to all transportation systems, including the road sector, that need to be complemented by an augmentation system in order to deliver accurate and reliable positioning with integrity specifications. A vector of noisy pseudorange DC measurements is simulated, assuming a Gauss-Markov model with a decay rate parameter inversely proportional to the correlation distance that exists between two points of a certain environment. The LMMSE algorithm is applied to this vector to estimate the true DCs, and the estimation error is compared to the noise added during simulation. The results show that for sufficiently large values of the ratio between the correlation distance and the distance separating the Reference Stations (RSs), the LMMSE brings considerable advantage in terms of estimation error accuracy and precision. Conversely, the LMMSE algorithm may deteriorate the quality of the DC measurements whenever the ratio falls below a certain threshold.
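
    A minimal sketch of the LMMSE estimator under the Gauss-Markov (exponential) correlation model described above; the station geometry, correlation distance, and noise levels below are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    pos = np.sort(rng.uniform(0, 100, 8))           # RS positions along a line (km)
    d_corr, sigma_dc, sigma_n = 40.0, 0.5, 0.2      # correlation distance, DC and noise std

    # Exponential (Gauss-Markov) spatial covariance of the true DC field
    C = sigma_dc**2 * np.exp(-np.abs(pos[:, None] - pos[None, :]) / d_corr)
    dc_true = rng.multivariate_normal(np.zeros(len(pos)), C)    # true DCs
    y = dc_true + rng.normal(scale=sigma_n, size=len(pos))      # noisy measurements

    # LMMSE: x_hat = C (C + sigma_n^2 I)^{-1} y
    x_hat = C @ np.linalg.solve(C + sigma_n**2 * np.eye(len(pos)), y)
    print("noise rms:  ", np.sqrt(np.mean((y - dc_true) ** 2)))
    print("estim. rms: ", np.sqrt(np.mean((x_hat - dc_true) ** 2)))
    ```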

  18. Modeling Error in Quantitative Macro-Comparative Research

    Directory of Open Access Journals (Sweden)

    Salvatore J. Babones

    2015-08-01

    Much quantitative macro-comparative research (QMCR) relies on a common set of published data sources to answer similar research questions using a limited number of statistical tools. Since all researchers have access to much the same data, one might expect quick convergence of opinion on most topics. In reality, of course, differences of opinion abound and persist. Many of these differences can be traced, implicitly or explicitly, to the different ways researchers choose to model error in their analyses. Much careful attention has been paid in the political science literature to the error structures characteristic of time-series cross-sectional (TSCS) data, but much less attention has been paid to the modeling of error in broadly cross-national research involving large panels of countries observed at limited numbers of time points. Here, and especially in the sociology literature, multilevel modeling has become a hegemonic – but often poorly understood – research tool. I argue that widely used types of multilevel models, commonly known as fixed effects models (FEMs) and random effects models (REMs), can produce wildly spurious results when applied to trended data due to mis-specification of error. I suggest that in most commonly encountered scenarios, difference models are more appropriate for use in QMCR.

  19. Precise Asymptotics of Error Variance Estimator in Partially Linear Models

    Institute of Scientific and Technical Information of China (English)

    Shao-jun Guo; Min Chen; Feng Liu

    2008-01-01

    In this paper, we focus our attention on the precise asymptotics of the error variance estimator in partially linear regression models, yi = xiTβ + g(ti) + εi, 1 ≤ i ≤ n, where {εi, i = 1, ..., n} are i.i.d. random errors with mean 0 and positive finite variance σ². Following the ideas of Allan Gut and Aurel Spataru [7,8] and Zhang [21] on precise asymptotics in the Baum-Katz and Davis laws of large numbers and on the precise rate in laws of the iterated logarithm, respectively, and subject to some regularity conditions, we obtain the corresponding results in partially linear regression models.

  20. Improved Systematic Pointing Error Model for the DSN Antennas

    Science.gov (United States)

    Rochblatt, David J.; Withington, Philip M.; Richter, Paul H.

    2011-01-01

    New pointing models have been developed for large reflector antennas whose construction is founded on an elevation-over-azimuth mount. At JPL, the new models were applied to the Deep Space Network (DSN) 34-meter antenna subnet to correct their systematic pointing errors; this achieved significant improvement in performance at Ka-band (32 GHz) and X-band (8.4 GHz). The new models provide pointing improvements relative to the traditional models by a factor of two to three, which translates to approximately 3 dB of performance improvement at Ka-band. For radio science experiments where blind pointing performance is critical, the new innovation provides an enabling technology. The model extends the traditional physical models with higher-order mathematical terms, thereby increasing the resolution of the model for a better fit to the underlying systematic imperfections that are the cause of antenna pointing errors. The philosophy of the traditional model was that all mathematical terms in the model must be traced to a physical phenomenon causing antenna pointing errors. The traditional physical terms are: antenna axis tilts, gravitational flexure, azimuth collimation, azimuth encoder fixed offset, azimuth and elevation skew, elevation encoder fixed offset, residual refraction, azimuth encoder scale error, and antenna pointing de-rotation terms for beam waveguide (BWG) antennas. Besides the addition of spherical harmonics terms, the new models differ from the traditional ones in that the coefficients for the cross-elevation and elevation corrections are completely independent and may be different, whereas in the traditional model some of the terms are identical. In addition, the new software allows for all-sky or mission-specific model development, and can utilize the previously used model as an a priori estimate for the development of the updated models.
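
    A hedged sketch of how such systematic pointing terms are fitted by linear least squares; the three terms and coefficient values below are a simplified, invented subset, not the actual DSN model.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    az = rng.uniform(0, 2 * np.pi, 200)
    el = rng.uniform(np.radians(10), np.radians(80), 200)

    # Design matrix for the elevation pointing error: encoder offset,
    # gravitational flexure ~cos(el), and an azimuth-dependent tilt term.
    A = np.column_stack([np.ones_like(el), np.cos(el), np.sin(az) * np.cos(el)])
    coef_true = np.array([0.010, 0.020, 0.005])      # degrees, invented
    d_el = A @ coef_true + rng.normal(scale=0.002, size=el.size)

    coef_hat, *_ = np.linalg.lstsq(A, d_el, rcond=None)
    resid = d_el - A @ coef_hat
    print(np.round(coef_hat, 4), f"rms residual: {resid.std():.4f} deg")
    ```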

  1. Stochastic modelling and analysis of IMU sensor errors

    Science.gov (United States)

    Zaho, Y.; Horemuz, M.; Sjöberg, L. E.

    2011-12-01

    The performance of a GPS/INS integration system is largely determined by the ability of the stand-alone INS to determine position and attitude during GPS outages. Positional and attitude precision degrades rapidly during a GPS outage due to INS sensor errors. With the advantages of low price and small volume, Micro Electro-Mechanical Sensors (MEMS) have been widely used in GPS/INS integration. However, a standalone MEMS unit can maintain reasonable positional precision for only a few seconds due to systematic and random sensor errors. The general stochastic error sources in inertial sensors can be modelled as (IEEE Std 647, 2006): quantization noise, random walk, bias instability, rate random walk and rate ramp. Here we apply different methods to analyze the stochastic sensor errors, i.e. autoregressive modelling, the Gauss-Markov process, the power spectral density and the Allan variance. Tests on a MEMS-based inertial measurement unit were then carried out with these methods. The results show that the different methods give similar estimates of the stochastic error model parameters. These values can be used further in the Kalman filter for better navigation accuracy and in the Doppler frequency estimate for faster acquisition after a GPS signal outage.
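
    Of the methods listed, the Allan variance is the most commonly used; a minimal sketch (non-overlapping clusters, synthetic rate data) follows.

    ```python
    import numpy as np

    def allan_variance(rate, fs, taus):
        """Non-overlapped Allan variance of a rate signal sampled at fs Hz."""
        avar = []
        for tau in taus:
            m = int(tau * fs)                  # samples per cluster
            n = len(rate) // m
            means = rate[: n * m].reshape(n, m).mean(axis=1)
            avar.append(0.5 * np.mean(np.diff(means) ** 2))
        return np.array(avar)

    rng = np.random.default_rng(6)
    fs = 100.0                                  # Hz, invented sampling rate
    white = rng.normal(scale=0.1, size=200_000)                 # angle random walk
    bias = np.cumsum(rng.normal(scale=1e-5, size=white.size))   # slow bias drift
    taus = np.logspace(-1, 2, 20)
    adev = np.sqrt(allan_variance(white + bias, fs, taus))
    for t, a in zip(taus[::5], adev[::5]):
        print(f"tau = {t:7.2f} s  Allan dev = {a:.4e}")
    ```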

  2. Statistical model and error analysis of a proposed audio fingerprinting algorithm

    Science.gov (United States)

    McCarthy, E. P.; Balado, F.; Silvestre, G. C. M.; Hurley, N. J.

    2006-01-01

    In this paper we present a statistical analysis of a particular audio fingerprinting method proposed by Haitsma et al. [1]. Due to the excellent robustness and synchronisation properties of this particular fingerprinting method, we would like to examine its performance for varying values of the parameters involved in the computation and ascertain its capabilities. For this reason, we pursue a statistical model of the fingerprint (also known as a hash, message digest or label). Initially we follow the work of a previous attempt made by Doets and Lagendijk [2-4] to obtain such a statistical model. By reformulating the representation of the fingerprint as a quadratic form, we present a model in which the parameters derived by Doets and Lagendijk may be obtained more easily. Furthermore, our model allows further insight into certain aspects of the behaviour of the fingerprinting algorithm not previously examined. Using our model, we then analyse the probability of error (P_e) of the hash. We identify two particular error scenarios and obtain an expression for the probability of error in each case. We present three methods of varying accuracy to approximate P_e following Gaussian noise addition to the signal of interest. We then analyse the probability of error following desynchronisation of the signal at the input of the hashing system and provide an approximation to P_e for different parameters of the algorithm under varying degrees of desynchronisation.
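
    A sketch of a Haitsma-style binary fingerprint, where each bit is the sign of a time-frequency difference of band energies, together with an empirical bit error rate under perturbation; the band count and noise level are illustrative, not the paper's parameters.

    ```python
    import numpy as np

    def fingerprint(band_energy):
        """band_energy: (frames, bands) array -> (frames-1, bands-1) bit matrix."""
        d = band_energy[:, :-1] - band_energy[:, 1:]   # frequency difference
        return d[1:] - d[:-1] > 0                      # time difference, thresholded

    rng = np.random.default_rng(7)
    E = rng.lognormal(size=(256, 33))                  # stand-in band energies
    bits_ref = fingerprint(E)
    bits_noisy = fingerprint(E * rng.lognormal(sigma=0.05, size=E.shape))

    ber = np.mean(bits_ref != bits_noisy)              # empirical bit error rate
    print(f"bit error rate under perturbation: {ber:.3f}")
    ```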

  3. Application of variance components estimation to calibrate geoid error models.

    Science.gov (United States)

    Guo, Dong-Mei; Xu, Hou-Ze

    2015-01-01

    The method of using Global Positioning System-leveling data to obtain orthometric heights has been well studied. A simple formulation of the weighted least squares problem was presented in an earlier work. This formulation allows one to directly employ errors-in-variables models that completely describe the covariance matrices of the observables. However, the important question of what accuracy level can be achieved has not yet been satisfactorily answered by this traditional formulation. One of the main reasons for this is the incorrectness of the stochastic models in the adjustment, which in turn motivates improving the stochastic models of the measurement noise. Therefore, the issue of determining the stochastic models of the observables in a combined adjustment of heterogeneous height types is the main focus of this paper. Firstly, the well-known method of variance component estimation is employed to calibrate the errors of heterogeneous height data in a combined least-squares adjustment of ellipsoidal, orthometric and gravimetric geoid heights. Specifically, the iterative algorithms of minimum norm quadratic unbiased estimation are used to estimate the variance components for each of the heterogeneous observations. Secondly, two different statistical models are presented to illustrate the theory. The first method directly uses the errors-in-variables as a priori covariance matrices; the second method analyzes the biases of the variance components and then proposes bias-corrected variance component estimators. Several numerical test results show the capability and effectiveness of the variance component estimation procedure in a combined adjustment for calibrating the geoid error model.

  4. Total Sensitivity Index Calculation of Tool Requirement Model via Error Propagation Equation

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    A new and convenient method is presented to calculate the total sensitivity indices defined by variance-based sensitivity analysis. By decomposing the output variance using error propagation equations, this method can transform the "double-loop" sampling procedure into a "single-loop" one and thus markedly reduce the computational cost of the analysis. In contrast with Sobol's method and the Fourier amplitude sensitivity test (FAST), which are limited to non-correlated variables, the new approach is suitable for correlated input variables. An application in a semiconductor assembling and test manufacturing (ATM) factory indicates that this approach performs well for additive models and simple non-additive models.
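
    The single-loop idea can be sketched with first-order error propagation (a generic approximation, not necessarily the paper's exact decomposition): the output variance is approximated as gᵀCg from the gradient g and the input covariance C, and each input's share of that variance serves as its sensitivity index. The model and covariance below are invented.

    ```python
    import numpy as np

    def sensitivity_indices(f, x0, C, h=1e-6):
        """First-order error-propagation shares of output variance per input."""
        x0 = np.asarray(x0, dtype=float)
        g = np.array([(f(x0 + h * e) - f(x0 - h * e)) / (2 * h)
                      for e in np.eye(len(x0))])       # central-difference gradient
        var_y = g @ C @ g                               # propagated output variance
        return (g * (C @ g)) / var_y                    # contributions sum to 1

    f = lambda x: x[0] * x[1] + np.sin(x[2])
    C = np.diag([0.1, 0.2, 0.05])                       # input covariance ...
    C[0, 1] = C[1, 0] = 0.05                            # ... with correlated x0, x1
    print(np.round(sensitivity_indices(f, [1.0, 2.0, 0.5], C), 3))
    ```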

  5. Analysis of the Model Checkers' Input Languages for Modeling Traffic Light Systems

    Directory of Open Access Journals (Sweden)

    Pathiah A. Samat

    2011-01-01

    Problem statement: Model checking is an automated verification technique that can be used for verifying properties of a system. A number of model checking systems have been developed over the last few years. However, there is no guideline available for selecting the most suitable model checker to model a particular system. Approach: In this study, we compare the use of four model checkers: SMV, SPIN, UPPAAL and PRISM for modeling a distributed control system. In particular, we look at the capabilities of the input languages of these model checkers for modeling this type of system. Limitations and differences of their input languages are compared and analysed by using a set of questions. Results: The result of the study shows that although the input languages of these model checkers have a lot of similarities, they also have a significant number of differences. The result of the study also shows that one model checker may be more suitable than others for verifying this type of system. Conclusion: Users need to choose the right model checker for the problem to be verified.

  6. INPUT MODELLING USING STATISTICAL DISTRIBUTIONS AND ARENA SOFTWARE

    Directory of Open Access Journals (Sweden)

    Elena Iuliana GINGU (BOTEANU)

    2015-05-01

    The paper presents a method for properly choosing probability distributions for failure times in a flexible manufacturing system. Several well-known distributions often provide good approximations in practice. The commonly used continuous distributions are: uniform, triangular, beta, normal, lognormal, Weibull, and exponential. This article studies how to use the Input Analyzer of the simulation language Arena to fit probability distributions to data and to evaluate how well a particular distribution fits. The objective was to select the most appropriate statistical distributions and to estimate the parameter values of the failure times for each machine of a real manufacturing line.
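
    Outside Arena, the same fitting-and-evaluation step can be sketched with SciPy: fit several candidate distributions to (synthetic) failure times and compare their Kolmogorov-Smirnov statistics.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(8)
    data = rng.weibull(1.5, 300) * 40.0           # synthetic failure times (h)

    candidates = {
        "expon": stats.expon,
        "weibull_min": stats.weibull_min,
        "lognorm": stats.lognorm,
    }
    for name, dist in candidates.items():
        params = dist.fit(data)                   # maximum-likelihood fit
        ks = stats.kstest(data, name, args=params)
        print(f"{name:12s} KS stat = {ks.statistic:.3f}  p = {ks.pvalue:.3f}")
    ```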

  7. Error Modelling and Experimental Validation for a Planar 3-PPR Parallel Manipulator

    DEFF Research Database (Denmark)

    Wu, Guanglei; Bai, Shaoping; Kepler, Jørgen Asbøl

    2011-01-01

    In this paper, the positioning error of a 3-PPR planar parallel manipulator is studied with an error model and experimental validation. First, the displacement and workspace are analyzed. An error model considering both configuration errors and joint clearance errors is established. Using this model, the maximum positioning error was estimated for a U-shape PPR planar manipulator, and the results were compared with experimental measurements. It is found that the error distribution from the simulation approximates that of the measurements.

  9. Offline modeling for product quality prediction of mineral processing using modeling error PDF shaping and entropy minimization.

    Science.gov (United States)

    Ding, Jinliang; Chai, Tianyou; Wang, Hong

    2011-03-01

    This paper presents a novel offline modeling approach for product quality prediction in mineral processing, which consists of a number of unit processes in series. The prediction of the product quality of the whole mineral process (i.e., the mixed concentrate grade) plays an important role, and the establishment of its predictive model is a key issue for plantwide optimization. For this purpose, a hybrid modeling approach for mixed concentrate grade prediction is proposed, which consists of a linear model and a nonlinear model. The least-squares support vector machine is adopted to establish the nonlinear model. The inputs of the predictive model are the performance indices of each unit process, while the output is the mixed concentrate grade. In this paper, the model parameter selection is transformed into the shape control of the probability density function (PDF) of the modeling error. In this context, both PDF-control-based and minimum-entropy-based model parameter selection approaches are proposed. Indeed, this is the first time that the PDF shape control idea is used to deal with system modeling, where the key idea is to tune the model parameters so that either the modeling error PDF is controlled to follow a target PDF or the modeling error entropy is minimized. Experimental results using real plant data are discussed and the two approaches are compared. The results show the effectiveness of the proposed approaches.
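
    A toy sketch of the minimum-entropy idea (in a simple linear setting, not the paper's LS-SVM one): choose the model parameter that minimizes a kernel-density estimate of the modeling-error entropy. Data and bounds are invented.

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(9)
    x = rng.uniform(0, 1, 400)
    y = 2.3 * x + rng.standard_t(df=4, size=400) * 0.1   # heavy-tailed noise

    def error_entropy(theta):
        e = y - theta * x                                 # modeling error
        kde = gaussian_kde(e)
        return -np.mean(np.log(kde(e)))                   # sample entropy estimate

    res = minimize_scalar(error_entropy, bounds=(0.0, 5.0), method="bounded")
    print(f"minimum-entropy estimate of theta: {res.x:.3f}")
    ```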

  10. Performance assessment of nitrate leaching models for highly vulnerable soils used in low-input farming based on lysimeter data.

    Science.gov (United States)

    Groenendijk, Piet; Heinen, Marius; Klammler, Gernot; Fank, Johann; Kupfersberger, Hans; Pisinaras, Vassilios; Gemitzi, Alexandra; Peña-Haro, Salvador; García-Prats, Alberto; Pulido-Velazquez, Manuel; Perego, Alessia; Acutis, Marco; Trevisan, Marco

    2014-11-15

    The agricultural sector faces the challenge of ensuring food security without an excessive burden on the environment. Simulation models provide excellent instruments for researchers to gain more insight into relevant processes and best agricultural practices, and they provide decision-support tools for planners. The extent to which models are capable of reliable extrapolation and prediction is important for exploring new farming systems or assessing the impacts of future land and climate changes. A performance assessment was conducted by testing six detailed state-of-the-art models for the simulation of nitrate leaching (ARMOSA, COUPMODEL, DAISY, EPIC, SIMWASER/STOTRASIM, SWAP/ANIMO) against lysimeter data from the Wagna experimental field station in Eastern Austria, where the soil is highly vulnerable to nitrate leaching. Three consecutive phases were distinguished to gain insight into the predictive power of the models: 1) a blind test for 2005-2008, in which only soil hydraulic characteristics, meteorological data and information about the agricultural management were accessible; 2) a calibration for the same period, in which essential information on field observations was additionally available to the modellers; and 3) a validation for 2009-2011, with the corresponding type of data available as for the blind test. A set of statistical metrics (mean absolute error, root mean squared error, index of agreement, model efficiency, root relative squared error, Pearson's linear correlation coefficient) was applied for testing the results and comparing the models. None of the models performed well on all of the statistical metrics. Models designed for nitrate leaching in high-input farming systems had difficulties in accurately predicting leaching in low-input farming systems that are strongly influenced by the retention of nitrogen in catch crops and nitrogen fixation by legumes. An accurate calibration does not guarantee a good predictive power of the model. Nevertheless all
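
    The statistical metrics named above can be sketched in a few lines; the observed/simulated values below are invented.

    ```python
    import numpy as np

    def metrics(obs, sim):
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        e = sim - obs
        mae = np.mean(np.abs(e))
        rmse = np.sqrt(np.mean(e**2))
        # Willmott's index of agreement
        d = 1 - np.sum(e**2) / np.sum((np.abs(sim - obs.mean()) + np.abs(obs - obs.mean()))**2)
        nse = 1 - np.sum(e**2) / np.sum((obs - obs.mean())**2)     # model efficiency
        rrse = np.sqrt(np.sum(e**2) / np.sum((obs - obs.mean())**2))
        r = np.corrcoef(obs, sim)[0, 1]                            # Pearson correlation
        return dict(MAE=mae, RMSE=rmse, d=d, NSE=nse, RRSE=rrse, r=r)

    obs = np.array([12.0, 30.0, 18.0, 45.0, 22.0])   # e.g., nitrate leached (kg/ha)
    sim = np.array([15.0, 26.0, 20.0, 40.0, 25.0])
    for k, v in metrics(obs, sim).items():
        print(f"{k:4s} = {v:6.3f}")
    ```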

  11. Accelerating Monte Carlo Markov chains with proxy and error models

    Science.gov (United States)

    Josset, Laureline; Demyanov, Vasily; Elsheikh, Ahmed H.; Lunati, Ivan

    2015-12-01

    In groundwater modeling, Monte Carlo Markov Chain (MCMC) simulations are often used to calibrate aquifer parameters and propagate the uncertainty to the quantity of interest (e.g., pollutant concentration). However, this approach requires a large number of flow simulations and incurs a high computational cost, which prevents a systematic evaluation of the uncertainty in the presence of complex physical processes. To avoid this computational bottleneck, we propose to use an approximate model (proxy) to predict the response of the exact model. Here, we use a proxy that entails a very simplified description of the physics with respect to the detailed physics described by the "exact" model. The error model accounts for the simplification of the physical process, and it is trained on a learning set of realizations for which both the proxy and exact responses are computed. First, the key features of the set of curves are extracted using functional principal component analysis; then, a regression model is built to characterize the relationship between the curves. The performance of the proposed approach is evaluated on the Imperial College Fault model. We show that the joint use of the proxy and the error model to infer the model parameters in a two-stage MCMC set-up allows longer chains at a comparable computational cost. Unnecessary evaluations of the exact responses are avoided through a preliminary evaluation of the proposal made on the basis of the corrected proxy response. The error model trained on the learning set is crucial to provide a sufficiently accurate prediction of the exact response and guide the chains to the low-misfit regions. The proposed methodology can be extended to multiple-chain algorithms or other Bayesian inference methods. Moreover, FPCA is not limited to the specific application presented here and offers a general framework to build error models.
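
    A rough sketch of the error-model construction under stated assumptions (ordinary PCA standing in for FPCA, toy curves standing in for flow responses): extract scores on a learning set, then regress the exact scores on the proxy scores so the proxy can be corrected for new realizations.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(10)
    t = np.linspace(0, 1, 50)
    n = 60                                                  # learning-set realizations
    speed = rng.uniform(0.5, 2.0, n)[:, None]
    exact = np.tanh(5 * (t[None, :] - 0.4 / speed))         # "exact" response curves
    proxy = np.clip(2 * (t[None, :] - 0.4 / speed), -1, 1)  # simplified physics

    pca_p, pca_e = PCA(n_components=3), PCA(n_components=3)
    S_p = pca_p.fit_transform(proxy)                        # proxy scores
    S_e = pca_e.fit_transform(exact)                        # exact scores
    reg = LinearRegression().fit(S_p, S_e)                  # error model in score space

    corrected = pca_e.inverse_transform(reg.predict(S_p))
    print("proxy rms error:    ", np.sqrt(np.mean((proxy - exact) ** 2)))
    print("corrected rms error:", np.sqrt(np.mean((corrected - exact) ** 2)))
    ```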

  12. Thermal Error Modelling of the Spindle Using Neurofuzzy Systems

    Directory of Open Access Journals (Sweden)

    Jingan Feng

    2016-01-01

    This paper proposes a new combined model to predict spindle deformation, which combines grey models and an ANFIS (adaptive neuro-fuzzy inference system) model. The grey models are used to preprocess the original data, and the ANFIS model is used to adjust the combined model. The outputs of the grey models are used as the inputs of the ANFIS model to train it. To evaluate the performance of the combined model, an experiment was implemented. Three Pt100 thermal resistances were used to monitor the spindle temperature, and an inductive current sensor was used to obtain the spindle deformation. The experimental results show that the combined model predicts the spindle deformation better than a BP network and that it can greatly improve the performance of the spindle.
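
    Grey preprocessing of this kind is typically a GM(1,1) model; a minimal sketch with an invented spindle temperature series follows.

    ```python
    import numpy as np

    def gm11(x0):
        """GM(1,1): fit the grey model and return fitted values plus one forecast."""
        x1 = np.cumsum(x0)                                  # accumulated series
        z1 = 0.5 * (x1[1:] + x1[:-1])                       # mean sequence
        B = np.column_stack([-z1, np.ones_like(z1)])
        a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]    # develop/grey parameters
        k = np.arange(len(x0) + 1)
        x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a   # accumulated prediction
        return np.diff(x1_hat, prepend=x1_hat[0])[1:]       # restored series + 1 step

    temps = np.array([22.1, 22.9, 23.8, 24.9, 26.1, 27.4])  # spindle temps (deg C)
    pred = gm11(temps)
    print("fit:", np.round(pred[:-1], 2), "next-step forecast:", round(pred[-1], 2))
    ```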

  13. The propagation of inventory-based positional errors into statistical landslide susceptibility models

    Science.gov (United States)

    Steger, Stefan; Brenning, Alexander; Bell, Rainer; Glade, Thomas

    2016-12-01

    Systematic comparisons of 12 models provided valuable evidence that the respective error propagation was determined not only by the degree of positional inaccuracy inherent in the landslide data, but also by the spatial representation of landslides and the environment, landslide magnitude, the characteristics of the study area, the selected classification method, and the interplay of predictors within multiple-variable models. Based on the results, we deduced that the direct propagation of minor to moderate inventory-based positional errors into modelling results can be partly counteracted by adapting the modelling design (e.g. generalization of input data, opting for strongly generalizing classifiers). Since positional errors within landslide inventories are common and subsequent modelling and validation results are likely to be distorted, the potential existence of inventory-based positional inaccuracies should always be considered when assessing landslide susceptibility by means of empirical models.

  14. EMPIRICAL LIKELIHOOD FOR LINEAR MODELS UNDER m-DEPENDENT ERRORS

    Institute of Scientific and Technical Information of China (English)

    Qin Yongsong; Jiang Bo; Li Yufang

    2005-01-01

    In this paper,the empirical likelihood confidence regions for the regression coefficient in a linear model are constructed under m-dependent errors. It is shown that the blockwise empirical likelihood is a good way to deal with dependent samples.

  15. Bayesian network models for error detection in radiotherapy plans.

    Science.gov (United States)

    Kalet, Alan M; Gennari, John H; Ford, Eric C; Phillips, Mark H

    2015-04-07

    The purpose of this study is to design and develop a probabilistic network for detecting errors in radiotherapy plans for use at the time of initial plan verification. Our group has initiated a multi-pronged approach to reduce these errors. We report on our development of Bayesian models of radiotherapy plans. Bayesian networks consist of joint probability distributions that define the probability of one event, given some set of other known information. Using the networks, we find the probability of obtaining certain radiotherapy parameters, given a set of initial clinical information. A low probability in a propagated network then corresponds to a potential error to be flagged for investigation. To build our networks we first interviewed medical physicists and other domain experts to identify the relevant radiotherapy concepts and their associated interdependencies and to construct a network topology. Next, to populate the network's conditional probability tables, we used the Hugin Expert software to learn parameter distributions from a subset of de-identified data derived from a radiation-oncology-based clinical information database system. These data represent 4990 unique prescription cases over a 5-year period. Under test case scenarios with approximately 1.5% introduced error rates, network performance produced areas under the ROC curve of 0.88, 0.98, and 0.89 for the lung, brain and female breast cancer error detection networks, respectively. Comparison of the brain network to human experts' performance (AUC of 0.90 ± 0.01) shows the Bayes network model performs better than domain experts under the same test conditions. Our results demonstrate the feasibility and effectiveness of comprehensive probabilistic models as part of decision support systems for improved detection of errors in initial radiotherapy plan verification procedures.
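
    A toy two-node network in the spirit of the study, assuming the pgmpy library; the structure, probabilities, and flagging threshold are all invented. A plan whose entered value falls in a low-probability state would be flagged for review.

    ```python
    from pgmpy.models import BayesianNetwork
    from pgmpy.factors.discrete import TabularCPD
    from pgmpy.inference import VariableElimination

    # Treatment site influences the prescribed dose category (probabilities invented)
    model = BayesianNetwork([("Site", "Dose")])
    cpd_site = TabularCPD("Site", 2, [[0.7], [0.3]],
                          state_names={"Site": ["lung", "brain"]})
    cpd_dose = TabularCPD(
        "Dose", 2, [[0.9, 0.2],     # P(Dose=standard | Site)
                    [0.1, 0.8]],    # P(Dose=atypical | Site)
        evidence=["Site"], evidence_card=[2],
        state_names={"Dose": ["standard", "atypical"], "Site": ["lung", "brain"]})
    model.add_cpds(cpd_site, cpd_dose)

    posterior = VariableElimination(model).query(["Dose"], evidence={"Site": "lung"})
    p_atypical = posterior.values[1]
    print(f"P(atypical dose | lung) = {p_atypical:.2f}",
          "-> flag for review" if p_atypical < 0.2 else "")
    ```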

  16. The MARINA model (Model to Assess River Inputs of Nutrients to seAs)

    NARCIS (Netherlands)

    Strokal, Maryna; Kroeze, Carolien; Wang, Mengru; Bai, Zhaohai; Ma, Lin

    2016-01-01

    Chinese agriculture has been developing fast towards industrial food production systems that discharge nutrient-rich wastewater into rivers. As a result, nutrient export by rivers has been increasing, resulting in coastal water pollution. We developed a Model to Assess River Inputs of Nutrients to seAs (MARINA).

  17. Sensitivity of the model error parameter specification in weak-constraint four-dimensional variational data assimilation

    Science.gov (United States)

    Shaw, Jeremy A.; Daescu, Dacian N.

    2017-08-01

    This article presents the mathematical framework to evaluate the sensitivity of a forecast error aspect to the input parameters of a weak-constraint four-dimensional variational data assimilation system (w4D-Var DAS), extending the established theory from strong-constraint 4D-Var. Emphasis is placed on the derivation of the equations for evaluating the forecast sensitivity to parameters in the DAS representation of the model error statistics, including bias, standard deviation, and correlation structure. A novel adjoint-based procedure for adaptive tuning of the specified model error covariance matrix is introduced. Results from numerical convergence tests establish the validity of the model error sensitivity equations. Preliminary experiments providing a proof-of-concept are performed using the Lorenz multi-scale model to illustrate the theoretical concepts and potential benefits for practical applications.
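
    For orientation, a hedged sketch of the standard w4D-Var cost function such a DAS minimizes, with the model-error terms whose specified statistics (bias and covariances Q_k) the sensitivity equations target; the notation here is generic rather than necessarily the authors':

    ```latex
    % Weak-constraint 4D-Var: the forecast model is imperfect, so model-error
    % increments \eta_k are added at each step and penalized by their
    % specified bias \bar{\eta} and covariances Q_k.
    \begin{align}
      x_k &= \mathcal{M}_k(x_{k-1}) + \eta_k, \\
      J(x_0, \eta) &= \tfrac{1}{2}\,(x_0 - x_b)^{\mathsf T} B^{-1} (x_0 - x_b)
        + \tfrac{1}{2}\sum_{k=0}^{N} \big(h_k(x_k) - y_k\big)^{\mathsf T}
          R_k^{-1} \big(h_k(x_k) - y_k\big) \notag \\
      &\quad + \tfrac{1}{2}\sum_{k=1}^{N}
          (\eta_k - \bar{\eta})^{\mathsf T} Q_k^{-1} (\eta_k - \bar{\eta}).
    \end{align}
    ```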

  18. Input Response of Neural Network Model with Lognormally Distributed Synaptic Weights

    Science.gov (United States)

    Nagano, Yoshihiro; Karakida, Ryo; Watanabe, Norifumi; Aoyama, Atsushi; Okada, Masato

    2016-07-01

    Neural assemblies in the cortical microcircuit can sustain irregular spiking activity without external inputs. On the other hand, neurons exhibit rich evoked activities driven by sensory stimulus, and both activities are reported to contribute to cognitive functions. We studied the external input response of the neural network model with lognormally distributed synaptic weights. We show that the model can achieve irregular spontaneous activity and population oscillation depending on the presence of external input. The firing rate distribution was maintained for the external input, and the order of firing rates in evoked activity reflected that in spontaneous activity. Moreover, there were bistable regions in the inhibitory input parameter space. The bimodal membrane potential distribution, which is a characteristic feature of the up-down state, was obtained under such conditions. From these results, we can conclude that the model displays various evoked activities due to the external input and is biologically plausible.

  19. Motivation Monitoring and Assessment Extension for Input-Process-Outcome Game Model

    Science.gov (United States)

    Ghergulescu, Ioana; Muntean, Cristina Hava

    2014-01-01

    This article proposes a Motivation Assessment-oriented Input-Process-Outcome Game Model (MotIPO), which extends the Input-Process-Outcome game model with game-centred and player-centred motivation assessments performed right from the beginning of the game-play. A feasibility case-study involving 67 participants playing an educational game and…

  1. Effects of input discretization, model complexity, and calibration strategy on model performance in a data-scarce glacierized catchment in Central Asia

    Science.gov (United States)

    Tarasova, L.; Knoche, M.; Dietrich, J.; Merz, R.

    2016-06-01

    Glacierized high-mountainous catchments are often the water towers for downstream regions, and modeling is often the only available tool for assessing water resource availability in these remote areas. Nevertheless, data scarcity affects different aspects of hydrological modeling in such mountainous glacierized basins. Using the example of a poorly gauged glacierized catchment in Central Asia, we examined the effects of input discretization, model complexity, and calibration strategy on model performance. The study was conducted with the GSM-Socont model driven with climatic input from the corrected High Asia Reanalysis data set at two different discretizations. We analyze the effects of the use of long-term glacier volume loss, snow cover images, and interior runoff as additional calibration data. In glacierized catchments of the winter accumulation type, where the transformation of precipitation into runoff is mainly controlled by snow and glacier melt processes, the spatial discretization of precipitation tends to have less impact on simulated runoff than a correct prediction of the integral precipitation volume. Increasing model complexity by using spatially distributed input or semidistributed parameter values does not increase model performance in the Gunt catchment, as the more complex model tends to be more sensitive to errors in the input data set. In our case, better model performance and quantification of the flow components can be achieved with additional calibration data rather than with more distributed model parameters. However, a semidistributed model better predicts the spatial patterns of snow accumulation and provides more plausible runoff predictions at the interior sites.

  2. Error sensitivity analysis in 10-30-day extended range forecasting by using a nonlinear cross-prediction error model

    Science.gov (United States)

    Xia, Zhiye; Xu, Lisheng; Chen, Hongbin; Wang, Yongqian; Liu, Jinbao; Feng, Wenlan

    2017-06-01

    Extended range forecasting of 10-30 days, which lies between medium-term and climate prediction in terms of timescale, plays a significant role in decision-making processes for the prevention and mitigation of disastrous meteorological events. The sensitivity to initial error, model parameter error, and random error in a nonlinear cross-prediction error (NCPE) model, and their stability over the prediction validity period in 10-30-day extended range forecasting, are analyzed quantitatively. The associated sensitivity of precipitable water, temperature, and geopotential height during cases of heavy rain and hurricane is also discussed. The results are summarized as follows. First, the initial error and random error interact. When the ratio of random error to initial error is small (10^-6 to 10^-2), minor variation in random error cannot significantly change the dynamic features of a chaotic system, and therefore random error has minimal effect on the prediction. When the ratio is in the range of 10^-1 to 10^2 (i.e., random error dominates), attention should be paid to the random error instead of only the initial error. When the ratio is around 10^-2 to 10^-1, both influences must be considered. Their mutual effects may bring considerable uncertainty to extended range forecasting, and de-noising is therefore necessary. Second, in terms of model parameter error, the embedding dimension m should be determined from the actual nonlinear time series. The dynamic features of a chaotic system cannot be depicted because of the incomplete structure of the attractor when m is small. When m is large, prediction indicators can vanish because of the scarcity of phase points in phase space. A method for overcoming the cut-off effect (m > 4) is proposed. Third, for heavy rains, precipitable water is more sensitive to the prediction validity period than temperature or geopotential height; for hurricanes, however, geopotential height is most sensitive, followed by precipitable water.

  3. Multivariate DCC-GARCH Model: -With Various Error Distributions

    OpenAIRE

    Orskaug, Elisabeth

    2009-01-01

    In this thesis we have studied the DCC-GARCH model with Gaussian, Student's $t$ and skew Student's $t$-distributed errors. For a basic understanding of the GARCH model, the univariate GARCH and multivariate GARCH models in general are discussed before the DCC-GARCH model is considered. The maximum likelihood method is used to estimate the parameters. Estimation of the correctly specified likelihood is difficult, and hence the DCC model was designed to allow for two-stage estimation.

  4. On the Influence of Input Data Quality to Flood Damage Estimation: The Performance of the INSYDE Model

    Directory of Open Access Journals (Sweden)

    Daniela Molinari

    2017-09-01

    The IN-depth SYnthetic Model for Flood Damage Estimation (INSYDE) is a model for the estimation of flood damage to residential buildings at the micro-scale. This study investigates the sensitivity of INSYDE to the accuracy of input data. Starting from the knowledge of the input parameters at the scale of individual buildings for a case study, the level of detail of the input data is progressively downgraded until a representative value is defined for all inputs at the census-block scale. The analysis reveals that two conditions are required to limit the errors in damage estimation: the representativeness of the representative values with respect to the micro-scale values, and local knowledge of the footprint area of the buildings, the latter being the main extensive variable adopted by INSYDE. Such a result allows for extending the usability of the model to the meso-scale, also in different countries, depending on the availability of aggregated building data.

  5. Error Assessment in Modeling with Fractal Brownian Motions

    CERN Document Server

    Qiao, Bingqiang

    2013-01-01

    To model a given time series $F(t)$ with fractal Brownian motions (fBms), it is necessary to have appropriate error assessment for related quantities. Usually the fractal dimension $D$ is derived from the Hurst exponent $H$ via the relation $D=2-H$, and the Hurst exponent can be evaluated by analyzing the dependence of the rescaled range $\langle|F(t+\tau)-F(t)|\rangle$ on the time span $\tau$. For fBms, the error of the rescaled range not only depends on data sampling but also varies with $H$ due to the presence of long-term memory. This error for a given time series then cannot be assessed without knowing the fractal dimension. We carry out extensive numerical simulations to explore the error of the rescaled range of fBms and find that for $0<H<0.5$ ... error of $\langle|F(t+\tau)-F(t)|\rangle$. The e...

  6. Meteorological input for atmospheric dispersion models: an inter-comparison between new generation models

    Energy Technology Data Exchange (ETDEWEB)

    Busillo, C.; Calastrini, F.; Gualtieri, G. [Lab. for Meteorol. and Environ. Modell. (LaMMA/CNR-IBIMET), Florence (Italy); Carpentieri, M.; Corti, A. [Dept. of Energetics, Univ. of Florence (Italy); Canepa, E. [INFM, Dept. of Physics, Univ. of Genoa (Italy)

    2004-07-01

    The behaviour of atmospheric dispersion models is strongly influenced by meteorological input, especially as far as new generation models are concerned. More sophisticated meteorological pre-processors require more extensive and more reliable data. This is true in particular when short-term simulations are performed, while in long-term modelling detailed data are less important. In Europe, no standards exist for meteorological data; therefore, testing and evaluating the results of new generation dispersion models is particularly important in order to obtain information on the reliability of model predictions. (orig.)

  7. An Empirical Point Error Model for TLS-Derived Point Clouds

    Science.gov (United States)

    Ozendi, Mustafa; Akca, Devrim; Topan, Hüseyin

    2016-06-01

    The random error pattern of point clouds has a significant effect on the quality of the final 3D model. The magnitude and distribution of random errors should be modelled numerically. This work aims at developing such an anisotropic point error model, specifically for terrestrial laser scanner (TLS) acquired 3D point clouds. A priori precisions of the basic TLS observations, which are the range, horizontal angle and vertical angle, are determined by predefined and practical measurement configurations, performed in real-world test environments. The a priori precisions of the horizontal (σθ) and vertical (σα) angles are constant for each point of a data set, and can be determined directly through repetitive scanning of the same environment. In our practical tests, the precisions of the horizontal and vertical angles were found to be σθ = ±36.6cc and σα = ±17.8cc, respectively. On the other hand, the a priori precision of the range observation (σρ) is assumed to be a function of range, incidence angle of the incoming laser ray, and reflectivity of the object surface. Hence, it is a variable, and is computed for each point individually by employing an empirically developed formula varying as σρ = ±2-12 mm for a FARO Focus X330 laser scanner. This procedure was followed by the computation of error ellipsoids of each point using the law of variance-covariance propagation. The direction and size of the error ellipsoids were computed by the principal components transformation. The usability and feasibility of the model was investigated in real-world scenarios. These investigations validated the suitability and practicality of the proposed method.
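
    The propagation step can be sketched as follows; the point coordinates are arbitrary, σρ is set to a mid-range 5 mm, and cc is interpreted as centesimal arc seconds, i.e. 1 cc = π/(2·10^6) rad. This is a minimal numpy version of the variance-covariance propagation, not the authors' implementation.

```python
import numpy as np

CC2RAD = np.pi / 2e6                     # 1 cc = pi / (2e6) rad (assumed unit)

rho, theta, alpha = 30.0, np.radians(40.0), np.radians(10.0)   # example point
s_rho, s_theta, s_alpha = 0.005, 36.6 * CC2RAD, 17.8 * CC2RAD  # a priori sigmas

# Jacobian of the spherical-to-Cartesian mapping at (rho, theta, alpha).
ca, sa, ct, st = np.cos(alpha), np.sin(alpha), np.cos(theta), np.sin(theta)
J = np.array([[ca * ct, -rho * ca * st, -rho * sa * ct],
              [ca * st,  rho * ca * ct, -rho * sa * st],
              [sa,       0.0,            rho * ca]])
C_obs = np.diag([s_rho**2, s_theta**2, s_alpha**2])
C_xyz = J @ C_obs @ J.T                  # law of variance-covariance propagation

eigval, eigvec = np.linalg.eigh(C_xyz)   # principal components of the ellipsoid
print("error ellipsoid semi-axes [mm]:", np.round(1e3 * np.sqrt(eigval), 3))
```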

  8. Comparison of two stochastic techniques for reliable urban runoff prediction by modeling systematic errors

    DEFF Research Database (Denmark)

    Del Giudice, Dario; Löwe, Roland; Madsen, Henrik;

    2015-01-01

    In urban rainfall-runoff, commonly applied statistical techniques for uncertainty quantification mostly ignore systematic output errors originating from simplified models and erroneous inputs. Consequently, the resulting predictive uncertainty is often unreliable. Our objective is to present two stochastic techniques that ... provide probabilistic predictions of wastewater discharge in a similarly reliable way, both for periods ranging from a few hours up to more than 1 week ahead of time. The EBD produces more accurate predictions on long horizons but relies on computationally heavy MCMC routines for parameter inference...

  9. Estimation in the polynomial errors-in-variables model

    Institute of Scientific and Technical Information of China (English)

    ZHANG; Sanguo

    2002-01-01

    [1] Kendall, M. G., Stuart, A., The Advanced Theory of Statistics, Vol. 2, New York: Charles Griffin, 1979. [2] Fuller, W. A., Measurement Error Models, New York: Wiley, 1987. [3] Carroll, R. J., Ruppert, D., Stefanski, L. A., Measurement Error in Nonlinear Models, London: Chapman & Hall, 1995. [4] Stout, W. F., Almost Sure Convergence, New York: Academic Press, 1974, 154. [5] Petrov, V. V., Sums of Independent Random Variables, New York: Springer-Verlag, 1975, 272. [6] Zhang, S. G., Chen, X. R., Consistency of modified MLE in EV model with replicated observation, Science in China, Ser. A, 2001, 44(3): 304-310. [7] Lai, T. L., Robbins, H., Wei, C. Z., Strong consistency of least squares estimates in multiple regression, J. Multivariate Anal., 1979, 9: 343-362.

  10. A Model for Geometry-Dependent Errors in Length Artifacts.

    Science.gov (United States)

    Sawyer, Daniel; Parry, Brian; Phillips, Steven; Blackburn, Chris; Muralikrishnan, Bala

    2012-01-01

    We present a detailed model of dimensional changes in long length artifacts, such as step gauges and ball bars, due to bending under gravity. The comprehensive model is based on evaluation of the gauge points relative to the neutral bending surface. It yields the errors observed when the gauge points are located off the neutral bending surface of a bar or rod but also reveals the significant error associated with out-of-straightness of a bar or rod even if the gauge points are located in the neutral bending surface. For example, one experimental result shows a length change of greater than 1.5 µm on a 1 m ball bar with an out-of-straightness of 0.4 mm. This and other results are in agreement with the model presented in this paper.

  11. Identifying best-fitting inputs in health-economic model calibration: a Pareto frontier approach.

    Science.gov (United States)

    Enns, Eva A; Cipriano, Lauren E; Simons, Cyrena T; Kong, Chung Yin

    2015-02-01

    To identify best-fitting input sets using model calibration, individual calibration target fits are often combined into a single goodness-of-fit (GOF) measure using a set of weights. Decisions in the calibration process, such as which weights to use, influence which sets of model inputs are identified as best-fitting, potentially leading to different health economic conclusions. We present an alternative approach to identifying best-fitting input sets based on the concept of Pareto-optimality. A set of model inputs is on the Pareto frontier if no other input set simultaneously fits all calibration targets as well or better. We demonstrate the Pareto frontier approach in the calibration of 2 models: a simple, illustrative Markov model and a previously published cost-effectiveness model of transcatheter aortic valve replacement (TAVR). For each model, we compare the input sets on the Pareto frontier to an equal number of best-fitting input sets according to 2 possible weighted-sum GOF scoring systems, and we compare the health economic conclusions arising from these different definitions of best-fitting. For the simple model, outcomes evaluated over the best-fitting input sets according to the 2 weighted-sum GOF schemes were virtually nonoverlapping on the cost-effectiveness plane and resulted in very different incremental cost-effectiveness ratios ($79,300 [95% CI 72,500-87,600] v. $139,700 [95% CI 79,900-182,800] per quality-adjusted life-year [QALY] gained). Input sets on the Pareto frontier spanned both regions ($79,000 [95% CI 64,900-156,200] per QALY gained). The TAVR model yielded similar results. Choices in generating a summary GOF score may result in different health economic conclusions. The Pareto frontier approach eliminates the need to make these choices by using an intuitive and transparent notion of optimality as the basis for identifying best-fitting input sets.
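
    The dominance test at the heart of the approach is compact. The sketch below filters a set of hypothetical input sets, with one goodness-of-fit value per calibration target and lower values meaning better fits; it is a generic Pareto filter, not the authors' code.

```python
import numpy as np

def pareto_frontier(fits):
    """Return a boolean mask of rows of `fits` (input sets x targets, lower is
    better) that are not dominated by any other row."""
    n = fits.shape[0]
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        # Row j dominates row i if it is at least as good on every target
        # and strictly better on at least one.
        dominated = (fits <= fits[i]).all(axis=1) & (fits < fits[i]).any(axis=1)
        dominated[i] = False
        if dominated.any():
            keep[i] = False
    return keep

fits = np.random.default_rng(7).random((200, 3))   # 200 input sets, 3 targets
frontier = pareto_frontier(fits)
print(f"{frontier.sum()} of {len(fits)} input sets lie on the Pareto frontier")
```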

  12. Modeling the BOD of Danube River in Serbia using spatial, temporal, and input variables optimized artificial neural network models.

    Science.gov (United States)

    Šiljić Tomić, Aleksandra N; Antanasijević, Davor Z; Ristić, Mirjana Đ; Perić-Grujić, Aleksandra A; Pocajt, Viktor V

    2016-05-01

    This paper describes the application of artificial neural network models for the prediction of biological oxygen demand (BOD) levels in the Danube River. Eighteen regularly monitored water quality parameters at 17 stations on the river stretch passing through Serbia were used as input variables. The optimization of the model was performed in three consecutive steps: firstly, the spatial influence of a monitoring station was examined; secondly, the monitoring period necessary to reach satisfactory performance was determined; and lastly, correlation analysis was applied to evaluate the relationship among water quality parameters. Root-mean-square error (RMSE) was used to evaluate model performance in the first two steps, whereas in the last step, multiple statistical indicators of performance were utilized. As a result, two optimized models were developed, a general regression neural network model (labeled GRNN-1) that covers the monitoring stations from the Danube inflow to the city of Novi Sad and a GRNN model (labeled GRNN-2) that covers the stations from the city of Novi Sad to the border with Romania. Both models demonstrated good agreement between the predicted and actually observed BOD values.
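
    A GRNN is, at its core, kernel-weighted averaging of the training targets. The sketch below is a generic toy version on synthetic data, not the authors' fitted GRNN-1/GRNN-2 models; sigma is the smoothing factor.

```python
import numpy as np

def grnn_predict(X_train, y_train, X_new, sigma=0.5):
    """General regression neural network: Gaussian-kernel weighted average of
    training targets (Nadaraya-Watson form)."""
    d2 = ((X_new[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-d2 / (2.0 * sigma**2))
    return (w @ y_train) / w.sum(axis=1)

rng = np.random.default_rng(3)
X = rng.random((100, 5))                  # e.g., 5 water quality parameters
y = X @ rng.random(5) + 0.1 * rng.standard_normal(100)   # synthetic "BOD"
y_hat = grnn_predict(X[:80], y[:80], X[80:])
rmse = np.sqrt(np.mean((y_hat - y[80:]) ** 2))
print(f"RMSE on held-out samples: {rmse:.3f}")
```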

  13. Approximation error in PDE-based modelling of vehicular platoons

    Science.gov (United States)

    Hao, He; Barooah, Prabir

    2012-08-01

    We study the problem of how much error is introduced in approximating the dynamics of a large vehicular platoon by using a partial differential equation, as was done in Barooah, Mehta, and Hespanha [Barooah, P., Mehta, P.G., and Hespanha, J.P. (2009), 'Mistuning-based Decentralised Control of Vehicular Platoons for Improved Closed Loop Stability', IEEE Transactions on Automatic Control, 54, 2100-2113], Hao, Barooah, and Mehta [Hao, H., Barooah, P., and Mehta, P.G. (2011), 'Stability Margin Scaling Laws of Distributed Formation Control as a Function of Network Structure', IEEE Transactions on Automatic Control, 56, 923-929]. In particular, we examine the difference between the stability margins of the coupled-ordinary differential equations (ODE) model and its partial differential equation (PDE) approximation, which we call the approximation error. The stability margin is defined as the absolute value of the real part of the least stable pole. The PDE model has proved useful in the design of distributed control schemes (Barooah et al. 2009; Hao et al. 2011); it provides insight into the effect of gains of local controllers on the closed-loop stability margin that is lacking in the coupled-ODE model. Here we show that the ratio of the approximation error to the stability margin is O(1/N), where N is the number of vehicles. Thus, the PDE model is an accurate approximation of the coupled-ODE model when N is large. Numerical computations are provided to corroborate the analysis.
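
    The stability margin in question can be computed directly for a concrete coupled-ODE platoon. The sketch below uses double-integrator vehicles with symmetric nearest-neighbour relative position/velocity feedback (illustrative gains, not the controllers from the cited papers) and reports the margin as N grows.

```python
import numpy as np

def stability_margin(N, k=1.0, b=0.5):
    """|Real part| of the least stable pole of an N-vehicle platoon with
    symmetric nearest-neighbour feedback through a grounded path Laplacian."""
    L = 2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
    A = np.block([[np.zeros((N, N)), np.eye(N)],
                  [-k * L,           -b * L]])
    return -np.max(np.linalg.eigvals(A).real)

for N in (10, 20, 40, 80):
    print(f"N = {N:3d}: stability margin = {stability_margin(N):.5f}")
```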

  14. Translation of CODEV Lens Model To IGES Input File

    Science.gov (United States)

    Wise, T. D.; Carlin, B. B.

    1986-10-01

    The design of modern optical systems is not a trivial task; even more difficult is the requirement for an optical designer to accurately describe the physical constraints implicit in his design so that a mechanical designer can correctly mount the optical elements. Typical concerns include setback of baffles, obstruction of clear apertures by mounting hardware, location of the image plane with respect to fiducial marks, and the correct interpretation of systems having odd geometry. The presence of multiple coordinate systems (optical, mechanical, system test, and spacecraft) only exacerbates an already difficult situation. A number of successful optical design programs, such as CODEV (1), have come into existence over the years, while the development of Computer Aided Design (CAD) and Computer Aided Manufacturing (CAM) has allowed a number of firms to install "paperless" design systems. In such a system, a part which is entered by keyboard, or pallet, is made into a real physical piece on a milling machine which has received its instructions from the design system. However, a persistent problem is the lack of a link between the optical design programs and the mechanical CAD programs. This paper will describe a first step which has been taken to bridge this gap. Starting with the neutral plot file generated by the CODEV optical design program, we have been able to produce a file suitable for input to the ANVIL (2) and GEOMOD (3) software packages, using the Initial Graphics Exchange Specification (IGES) interface. This is accomplished by software of our design, which runs on a VAX (4) system. A description of the steps to be taken in transferring a design will be provided. We shall also provide some examples of designs on which this technique has been used successfully. Finally, we shall discuss limitations of the existing software and suggest some improvements which might be undertaken.

  15. Spatial Statistical Procedures to Validate Input Data in Energy Models

    Energy Technology Data Exchange (ETDEWEB)

    Johannesson, G.; Stewart, J.; Barr, C.; Brady Sabeff, L.; George, R.; Heimiller, D.; Milbrandt, A.

    2006-01-01

    Energy modeling and analysis often relies on data collected for other purposes such as census counts, atmospheric and air quality observations, economic trends, and other primarily non-energy related uses. Systematic collection of empirical data solely for regional, national, and global energy modeling has not been established as in the abovementioned fields. Empirical and modeled data relevant to energy modeling is reported and available at various spatial and temporal scales that might or might not be those needed and used by the energy modeling community. The incorrect representation of spatial and temporal components of these data sets can result in energy models producing misleading conclusions, especially in cases of newly evolving technologies with spatial and temporal operating characteristics different from the dominant fossil and nuclear technologies that powered the energy economy over the last two hundred years. Increased private and government research and development and public interest in alternative technologies that have a benign effect on the climate and the environment have spurred interest in wind, solar, hydrogen, and other alternative energy sources and energy carriers. Many of these technologies require much finer spatial and temporal detail to determine optimal engineering designs, resource availability, and market potential. This paper presents exploratory and modeling techniques in spatial statistics that can improve the usefulness of empirical and modeled data sets that do not initially meet the spatial and/or temporal requirements of energy models. In particular, we focus on (1) aggregation and disaggregation of spatial data, (2) predicting missing data, and (3) merging spatial data sets. In addition, we introduce relevant statistical software models commonly used in the field for various sizes and types of data sets.
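
    Of the three techniques listed, disaggregation is the easiest to illustrate. The sketch below spreads a regional total over grid cells in proportion to an auxiliary weight layer (e.g., population), a simple stand-in for the statistical methods discussed in the paper; the numbers are invented.

```python
import numpy as np

def disaggregate(regional_total, weights):
    """Spread a reported regional total over cells proportionally to a weight
    layer, so the cell values re-aggregate exactly to the regional value."""
    w = np.asarray(weights, dtype=float)
    return regional_total * w / w.sum()

cells = disaggregate(1200.0, [5, 0, 20, 50, 25])   # hypothetical energy demand
print(cells, "sum =", cells.sum())
```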

  17. Financial impact of errors in business forecasting: a comparative study of linear models and neural networks

    Directory of Open Access Journals (Sweden)

    Claudimar Pereira da Veiga

    2012-08-01

    The importance of demand forecasting as a management tool is a well-documented issue. However, it is difficult to measure the costs generated by forecasting errors and to find a model that adequately reflects the detailed operation of each company. In general, when linear models fail in the forecasting process, more complex nonlinear models are considered. Although some studies comparing traditional models and neural networks have been conducted in the literature, the conclusions are usually contradictory. In this sense, the objective was to compare the accuracy of linear methods and neural networks with the current method used by the company. The results of this analysis also served as input to evaluate the influence of errors in demand forecasting on the financial performance of the company. The study was based on historical data from five groups of food products, from 2004 to 2008. In general, one can affirm that all models tested presented good results (much better than the current forecasting method used), with mean absolute percent error (MAPE) around 10%. The total financial impact for the company was 6.05% on annual sales.
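
    For reference, the accuracy measure quoted above is computed as follows; the demand figures are made up for illustration.

```python
import numpy as np

def mape(actual, forecast):
    """Mean absolute percent error, in percent (assumes no zero actuals)."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

demand   = [120, 135, 150, 160, 140]      # hypothetical monthly demand
forecast = [110, 140, 145, 175, 150]
print(f"MAPE = {mape(demand, forecast):.1f}%")
```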

  18. Identifying errors in dust models from data assimilation.

    Science.gov (United States)

    Pope, R J; Marsham, J H; Knippertz, P; Brooks, M E; Roberts, A J

    2016-09-16

    Airborne mineral dust is an important component of the Earth system and is increasingly predicted prognostically in weather and climate models. The recent development of data assimilation for remotely sensed aerosol optical depths (AODs) into models offers a new opportunity to better understand the characteristics and sources of model error. Here we examine assimilation increments from Moderate Resolution Imaging Spectroradiometer AODs over northern Africa in the Met Office global forecast model. The model underpredicts (overpredicts) dust in light (strong) winds, consistent with submesoscale (mesoscale) processes lifting dust in reality but being missed by the model. Dust is overpredicted in the Sahara and underpredicted in the Sahel. Using observations of lightning and rain, we show that haboobs (cold pool outflows from moist convection) are an important dust source in reality but are badly handled by the model's convection scheme. The approach shows promise to serve as a useful framework for future model development.

  19. "Updates to Model Algorithms & Inputs for the Biogenic ...

    Science.gov (United States)

    We have developed new canopy emission algorithms and land use data for BEIS. Simulations with BEIS v3.4 and these updates in CMAQ v5.0.2 are compared to the Model of Emissions of Gases and Aerosols from Nature (MEGAN) and evaluated against observations. This has resulted in improvements in the evaluation of modeled isoprene, NOx, and O3. The National Exposure Research Laboratory (NERL) Atmospheric Modeling and Analysis Division (AMAD) conducts research in support of EPA's mission to protect human health and the environment. AMAD's research program is engaged in developing and evaluating predictive atmospheric models on all spatial and temporal scales for forecasting air quality and for assessing changes in air quality and air pollutant exposures, as affected by changes in ecosystem management and regulatory decisions. AMAD is responsible for providing a sound scientific and technical basis for regulatory policies based on air quality models to improve ambient air quality. The models developed by AMAD are being used by EPA, NOAA, and the air pollution community in understanding and forecasting not only the magnitude of the air pollution problem, but also in developing emission control policies and regulations for air quality improvements.

  20. On Inertial Body Tracking in the Presence of Model Calibration Errors.

    Science.gov (United States)

    Miezal, Markus; Taetz, Bertram; Bleser, Gabriele

    2016-07-22

    In inertial body tracking, the human body is commonly represented as a biomechanical model consisting of rigid segments with known lengths and connecting joints. The model state is then estimated via sensor fusion methods based on data from attached inertial measurement units (IMUs). This requires the relative poses of the IMUs w.r.t. the segments (the IMU-to-segment calibrations, subsequently called I2S calibrations) to be known. Since calibration methods based on static poses, movements and manual measurements are still the most widely used, potentially large human-induced calibration errors have to be expected. This work compares three newly developed/adapted extended Kalman filter (EKF) and optimization-based sensor fusion methods with an existing EKF-based method w.r.t. their segment orientation estimation accuracy in the presence of model calibration errors, with and without using magnetometer information. While the existing EKF-based method uses a segment-centered kinematic chain biomechanical model and a constant angular acceleration motion model, the newly developed/adapted methods are all based on a free segments model, where each segment is represented with six degrees of freedom in the global frame. Moreover, these methods differ in the assumed motion model (constant angular acceleration, constant angular velocity, inertial data as control input), the state representation (segment-centered, IMU-centered) and the estimation method (EKF, sliding window optimization). In addition to the free segments representation, the optimization-based method also represents each IMU with six degrees of freedom in the global frame. In the evaluation on simulated and real data from a three segment model (an arm), the optimization-based method showed the smallest mean errors, standard deviations and maximum errors throughout all tests. It also showed the lowest dependency on magnetometer information and motion agility. Moreover, it was insensitive w.r.t. I2S position and ...

  1. Evapotranspiration Input Data for the Central Valley Hydrologic Model (CVHM)

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This digital dataset contains monthly reference evapotranspiration (ETo) data for the Central Valley Hydrologic Model (CVHM). The Central Valley encompasses an...

  2. Using Crowd Sensed Data as Input to Congestion Model

    DEFF Research Database (Denmark)

    Lehmann, Anders; Gross, Allan

    2016-01-01

    Emission of airborne pollutants and climate gases from the transport sector is a growing problem, both in industrialised and developing countries. Planning of urban transport systems is essential to minimise the environmental, health and economic impact of congestion in the transport system. To get accurate and timely information on traffic congestion, and by extension information on air pollution, near real-time traffic models are needed. We present in this paper an implementation of the Restricted Stochastic User Equilibrium model, which is capable of modelling congestion for very large urban traffic systems in less than an hour. The model is implemented in an open source database system, for easy interfacing with GIS resources and crowd-sensed transportation data.

  3. Input-dependent wave attenuation in a critically-balanced model of cortex.

    Directory of Open Access Journals (Sweden)

    Xiao-Hu Yan

    A number of studies have suggested that many properties of brain activity can be understood in terms of critical systems. However, it is still not known how the long-range susceptibilities characteristic of criticality arise in the living brain from its local connectivity structures. Here we prove that a dynamically critically-poised model of cortex acquires an infinitely-long-ranged susceptibility in the absence of input. When an input is presented, the susceptibility attenuates exponentially as a function of distance, with an increasing spatial attenuation constant (i.e., decreasing range) the larger the input. This is in direct agreement with recent results that show that waves of local field potential activity evoked by single spikes in primary visual cortex of cat and macaque attenuate with a characteristic length that also increases with decreasing contrast of the visual stimulus. A susceptibility that changes spatial range with input strength can be thought to implement an input-dependent spatial integration: when the input is large, no additional evidence is needed in addition to the local input; when the input is weak, evidence needs to be integrated over a larger spatial domain to achieve a decision. Such input-strength-dependent strategies have been demonstrated in visual processing. Our results suggest that input-strength-dependent spatial integration may be a natural feature of a critically-balanced cortical network.

  4. High Flux Isotope Reactor system RELAP5 input model

    Energy Technology Data Exchange (ETDEWEB)

    Morris, D.G.; Wendel, M.W.

    1993-01-01

    A thermal-hydraulic computational model of the High Flux Isotope Reactor (HFIR) has been developed using the RELAP5 program. The purpose of the model is to provide a state-of-the-art thermal-hydraulic simulation tool for analyzing selected hypothetical accident scenarios for a revised HFIR Safety Analysis Report (SAR). The model includes (1) a detailed representation of the reactor core and other vessel components, (2) three heat exchanger/pump cells, (3) pressurizing pumps and letdown valves, and (4) the secondary coolant system (with less detail than the primary system). Data from HFIR operation, component tests, tests in facility mockups and the HFIR, HFIR-specific experiments, and other pertinent experiments performed independent of HFIR were used to construct the model and validate it to the extent permitted by the data. The detailed version of the model has been used to simulate loss-of-coolant accidents (LOCAs), while the abbreviated version has been developed for the operational transients that allow use of a less detailed nodalization. Analysis of station blackout with core long-term decay heat removal via natural convection has been performed using the core and vessel portions of the detailed model.

  5. Regional input-output models and the treatment of imports in the European System of Accounts

    OpenAIRE

    Kronenberg, Tobias

    2011-01-01

    Input-output models are often used in regional science due to their versatility and their ability to capture many of the distinguishing features of a regional economy. Input-output tables are available for all EU member countries, but they are hard to find at the regional level, since many regional governments lack the resources or the will to produce reliable, survey-based regional input-output tables. Therefore, in many cases researchers adopt nonsurvey techniques to derive regional input-output...
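
    The core computation behind any such table, regional or national, is the Leontief quantity model x = (I - A)^-1 d: given a technical coefficients matrix A and final demand d, it returns the total output each sector must produce. The three-sector numbers below are illustrative, not from any actual regional table.

```python
import numpy as np

# Illustrative technical coefficients: entry (i, j) is the input from sector i
# needed per unit of output of sector j.
A = np.array([[0.10, 0.30, 0.05],
              [0.20, 0.10, 0.15],
              [0.05, 0.20, 0.10]])
d = np.array([100.0, 50.0, 80.0])          # final demand by sector

x = np.linalg.solve(np.eye(3) - A, d)      # more stable than forming the inverse
print("total output by sector:", x.round(1))
```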

  6. Large uncertainty in soil carbon modelling related to carbon input calculation method

    DEFF Research Database (Denmark)

    Keel, Sonja; Leifeld, Jens; Mayer, Jochen

    2017-01-01

    The application of dynamic models to report changes in soil organic carbon (SOC) stocks, for example as part of greenhouse gas inventories, is becoming increasingly important. Most of these models rely on input data from harvest residues or decaying plant parts and also organic fertilizer, together referred to as soil carbon (C) inputs. The soil C inputs from plants are derived from measured agricultural yields using allometric equations. Here we compared the results of five previously published equations. Our goal was to test whether the choice of method is critical for modelling soil C and, if so, which of these equations is most suitable for Swiss conditions. For this purpose we used the five equations to calculate soil C inputs based on yield data from a Swiss long-term cropping experiment. Estimated annual soil C inputs from various crops were averaged over 28 years and four fertilizer...

  7. Testing and Inference in Nonlinear Cointegrating Vector Error Correction Models

    DEFF Research Database (Denmark)

    Kristensen, Dennis; Rahbek, Anders

    In this paper, we consider a general class of vector error correction models which allow for asymmetric and non-linear error correction. We provide asymptotic results for (quasi-)maximum likelihood (QML) based estimators and tests. General hypothesis testing is considered, where testing for linearity is of particular interest, as parameters of non-linear components vanish under the null. To solve the latter type of testing problem, we use so-called sup tests, which here require the development of new (uniform) weak convergence results. These results are potentially useful in general for the analysis of non-stationary non-linear time series models. Thus the paper provides a full asymptotic theory for estimators as well as standard and non-standard test statistics. The derived asymptotic results prove to be new compared to results found elsewhere in the literature due to the impact of the estimated...

  8. Mitigating Errors in External Respiratory Surrogate-Based Models of Tumor Position

    Energy Technology Data Exchange (ETDEWEB)

    Malinowski, Kathleen T. [Department of Radiation Oncology, University of Maryland School of Medicine, Baltimore, MD (United States); Fischell Department of Bioengineering, University of Maryland, College Park, MD (United States); McAvoy, Thomas J. [Fischell Department of Bioengineering, University of Maryland, College Park, MD (United States); Department of Chemical and Biomolecular Engineering and Institute of Systems Research, University of Maryland, College Park, MD (United States); George, Rohini [Department of Radiation Oncology, University of Maryland School of Medicine, Baltimore, MD (United States); Dieterich, Sonja [Department of Radiation Oncology, Stanford University School of Medicine, Stanford, CA (United States); D' Souza, Warren D., E-mail: wdsou001@umaryland.edu [Department of Radiation Oncology, University of Maryland School of Medicine, Baltimore, MD (United States); Fischell Department of Bioengineering, University of Maryland, College Park, MD (United States)

    2012-04-01

    Purpose: To investigate the effect of tumor site, measurement precision, tumor-surrogate correlation, training data selection, model design, and interpatient and interfraction variations on the accuracy of external marker-based models of tumor position. Methods and Materials: Cyberknife Synchrony system log files comprising synchronously acquired positions of external markers and the tumor from 167 treatment fractions were analyzed. The accuracy of Synchrony, ordinary-least-squares regression, and partial-least-squares regression models for predicting the tumor position from the external markers was evaluated. The quantity and timing of the data used to build the predictive model were varied. The effects of tumor-surrogate correlation and the precision in both the tumor and the external surrogate position measurements were explored by adding noise to the data. Results: The tumor position prediction errors increased over the duration of a fraction. Increasing the training data quantities did not always lead to more accurate models. Adding uncorrelated noise to the external marker-based inputs degraded the tumor-surrogate correlation models by 16% for partial-least-squares and 57% for ordinary-least-squares. External marker and tumor position measurement errors led to tumor position prediction changes 0.3-3.6 times the magnitude of the measurement errors, varying widely with model algorithm. The tumor position prediction errors were significantly associated with the patient index but not with the fraction index or tumor site. Partial-least-squares was as accurate as Synchrony and more accurate than ordinary-least-squares. Conclusions: The accuracy of surrogate-based inferential models of tumor position was affected by all the investigated factors, except for tumor site and fraction index.
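
    The partial-least-squares idea can be sketched on synthetic data: markers and tumor share a common motion component, a PLS model is trained on the first part of the trace, and uncorrelated noise is then added to the marker inputs to mimic measurement error. The sinusoidal traces and noise levels are invented, not Synchrony data.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
t = np.linspace(0, 60, 600)
# Three external markers and a (hidden) tumor trace sharing one breathing mode.
markers = np.column_stack([np.sin(0.3 * t + p) for p in (0.0, 0.4, 0.9)])
tumor = 1.5 * np.sin(0.3 * t + 0.2)[:, None]

model = PLSRegression(n_components=2).fit(markers[:400], tumor[:400])
for noise in (0.0, 0.05, 0.2):
    X_test = markers[400:] + noise * rng.standard_normal((200, 3))
    err = np.sqrt(np.mean((model.predict(X_test) - tumor[400:]) ** 2))
    print(f"input noise sigma {noise:.2f}: tumor position RMSE = {err:.3f}")
```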

  9. TESTING OF CORRELATION AND HETEROSCEDASTICITY IN NONLINEAR REGRESSION MODELS WITH DBL(p,q,1) RANDOM ERRORS

    Institute of Scientific and Technical Information of China (English)

    Liu Yingan; Wei Bocheng

    2008-01-01

    Chaos theory has taught us that a system which has both nonlinearity and random input will most likely produce irregular data. If random errors are irregular data, then the random error process will raise nonlinearity (Kantz and Schreiber (1997)). Tsai (1986) introduced a composite test for autocorrelation and heteroscedasticity in linear models with AR(1) errors. Liu (2003) introduced a composite test for correlation and heteroscedasticity in nonlinear models with DBL(p, 0, 1) errors. Therefore, important problems in regression models are the detection of bilinearity, correlation and heteroscedasticity. In this article, the authors discuss the more general case of nonlinear models with DBL(p, q, 1) random errors using the score test. Several statistics for the tests of bilinearity, correlation, and heteroscedasticity are obtained, and expressed in simple matrix formulas. The results for regression models with linear errors are extended to those with bilinear errors. A simulation study is carried out to investigate the powers of the test statistics. All results of this article extend and develop the results of Tsai (1986), Wei, et al (1995), and Liu, et al (2003).

  10. Scientific and technical advisory committee review of the nutrient inputs to the watershed model

    Science.gov (United States)

    The following is a report by a STAC Review Team concerning the methods and documentation used by the Chesapeake Bay Partnership for evaluation of nutrient inputs to Phase 6 of the Chesapeake Bay Watershed Model. The “STAC Review of the Nutrient Inputs to the Watershed Model” (previously referred to...

  11. From LCC to LCA Using a Hybrid Input Output Model – A Maritime Case Study

    DEFF Research Database (Denmark)

    Kjær, Louise Laumann; Pagoropoulos, Aris; Hauschild, Michael Zwicky;

    2015-01-01

    As companies try to embrace life cycle thinking, Life Cycle Assessment (LCA) and Life Cycle Costing (LCC) have proven to be powerful tools. In this paper, an Environmental Input-Output model is used for analysis as it enables an LCA using the same economic input data as LCC. This approach helps...

  12. Error field and magnetic diagnostic modeling for W7-X

    Energy Technology Data Exchange (ETDEWEB)

    Lazerson, Sam A. [PPPL; Gates, David A. [PPPL; NEILSON, GEORGE H. [PPPL; OTTE, M.; Bozhenkov, S.; Pedersen, T. S.; GEIGER, J.; LORE, J.

    2014-07-01

    The prediction, detection, and compensation of error fields for the W7-X device will play a key role in achieving a high beta (β = 5%), steady state (30 minute pulse) operating regime utilizing the island divertor system [1]. Additionally, detection and control of the equilibrium magnetic structure in the scrape-off layer will be necessary in the long-pulse campaign, as bootstrap current evolution may result in poor edge magnetic structure [2]. An SVD analysis of the magnetic diagnostics set indicates an ability to measure the toroidal current and stored energy, while profile variations go undetected in the magnetic diagnostics. An additional set of magnetic diagnostics is proposed which improves the ability to constrain the equilibrium current and pressure profiles. However, even with the ability to accurately measure equilibrium parameters, the presence of error fields can modify both the plasma response and divertor magnetic field structures in unfavorable ways. Vacuum flux surface mapping experiments allow for direct measurement of these modifications to magnetic structure. The ability to conduct such an experiment is a unique feature of stellarators. The trim coils may then be used to forward model the effect of an applied n = 1 error field. This allows the determination of lower limits for the detection of error field amplitude and phase using flux surface mapping. *Research supported by the U.S. DOE under Contract No. DE-AC02-09CH11466 with Princeton University.

  13. User requirements for hydrological models with remote sensing input

    Energy Technology Data Exchange (ETDEWEB)

    Kolberg, Sjur

    1997-10-01

    Monitoring the seasonal snow cover is important for several purposes. This report describes user requirements for hydrological models utilizing remotely sensed snow data. The information is mainly provided by operational users through a questionnaire. The report is primarily intended as a basis for other work packages within the Snow Tools project which aim at developing new remote sensing products for use in hydrological models. The HBV model is the only model mentioned by users in the questionnaire. It is widely used in Northern Scandinavia and Finland, in the fields of hydroelectric power production, flood forecasting and general monitoring of water resources. The current implementation of HBV is not based on remotely sensed data. Even the presently used HBV implementation may benefit from remotely sensed data. However, several improvements can be made to hydrological models to include remotely sensed snow data. Among these the most important are a distributed version, a more physical approach to the snow depletion curve, and a way to combine data from several sources. 1 ref.

  14. Errors Made by Elementary Fourth Grade Students When Modelling Word Problems and the Elimination of Those Errors through Scaffolding

    Science.gov (United States)

    Ulu, Mustafa

    2017-01-01

    This study aims to identify errors made by primary school students when modelling word problems and to eliminate those errors through scaffolding. A 10-question problem-solving achievement test was used in the research. The qualitative and quantitative designs were utilized together. The study group of the quantitative design comprises 248…

  15. Influence of model errors in optimal sensor placement

    Science.gov (United States)

    Vincenzi, Loris; Simonini, Laura

    2017-02-01

    The paper investigates the role of model errors and parametric uncertainties in optimal or near-optimal sensor placement for structural health monitoring (SHM) and modal testing. The near-optimal set of measurement locations is obtained by Information Entropy theory; the results of the placement process depend considerably on the so-called covariance matrix of prediction error as well as on the definition of the correlation function. A constant and an exponential correlation function depending on the distance between sensors are first assumed; then a proposal depending on both distance and modal vectors is presented. With reference to a simple case study, the effect of model uncertainties on the results is described, and the reliability and robustness of the proposed correlation function in the case of model errors are tested with reference to 2D and 3D benchmark case studies. A measure of the quality of the obtained sensor configuration is considered through the use of independent assessment criteria. In conclusion, the results obtained by applying the proposed procedure on a real 5-span steel footbridge are described. The proposed method also allows higher modes to be estimated more accurately when the number of sensors is greater than the number of modes of interest. In addition, the results show a smaller variation in sensor position when uncertainties occur.
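
    A minimal sketch of the exponential option for the prediction-error covariance, with illustrative sensor coordinates, noise level and correlation length; the log-determinant of such a Gaussian covariance is the kind of quantity an information-entropy-based placement criterion evaluates. This is a generic construction, not the paper's procedure.

```python
import numpy as np

pos = np.array([0.0, 2.0, 5.0, 9.0, 14.0])     # candidate sensor coordinates
sigma, lam = 0.1, 4.0                          # illustrative noise level, length

d = np.abs(pos[:, None] - pos[None, :])        # pairwise sensor distances
cov = sigma**2 * np.exp(-d / lam)              # exponential correlation model
print(np.round(cov, 4))

sign, logdet = np.linalg.slogdet(cov)          # enters a Gaussian entropy measure
print("log-determinant of the prediction-error covariance:", round(logdet, 3))
```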

  16. Correction of placement error in EBL using model based method

    Science.gov (United States)

    Babin, Sergey; Borisov, Sergey; Militsin, Vladimir; Komagata, Tadashi; Wakatsuki, Tetsuro

    2016-10-01

    The main source of placement error in maskmaking using electron beam is charging. DISPLACE software provides a method to correct placement errors for any layout, based on a physical model. The charge of a photomask and multiple discharge mechanisms are simulated to find the charge distribution over the mask. The beam deflection is calculated for each location on the mask, creating data for the placement correction. The software considers the mask layout, EBL system setup, resist, and writing order, as well as other factors such as fogging and proximity effects correction. The output of the software is the data for placement correction. Unknown physical parameters such as fogging can be found from calibration experiments. A test layout on a single calibration mask was used to calibrate physical parameters used in the correction model. The extracted model parameters were used to verify the correction. As an ultimate test for the correction, a sophisticated layout was used for verification that was very different from the calibration mask. The placement correction results were predicted by DISPLACE, and the mask was fabricated and measured. A good correlation of the measured and predicted values of the correction all over the mask with the complex pattern confirmed the high accuracy of the charging placement error correction.

  17. Modeling L1-GPS Errors for an Enhanced Data Fusion with Lane Marking Maps for Road Automated Vehicles

    OpenAIRE

    2015-01-01

    This paper describes a method which models the time-correlated errors of a standalone L1-GPS receiver by integrating front-view camera measurements map-matched with a lane marking map. An identification method for the parameters of the shaping model is presented and evaluated with real data. The observability of the augmented state vector is demonstrated according to an algebraic definition. A positioning solver based on extended Kalman filtering with measured input is...

  18. Tracking cellular telephones as an input for developing transport models

    CSIR Research Space (South Africa)

    Cooper, Antony K

    2010-08-01

    ...of tracking cellular telephones and using the data to populate transport and other models. We report here on one of the pilots, known as DYNATRACK (Dynamic Daily Path Tracking), a larger experiment conducted in 2007 with a more heterogeneous group of commuters...

  19. Physics input for modelling superfluid neutron stars with hyperon cores

    CERN Document Server

    Gusakov, M E; Kantor, E M

    2014-01-01

    Observations of massive ($M \\approx 2.0~M_\\odot$) neutron stars (NSs), PSRs J1614-2230 and J0348+0432, rule out most of the models of nucleon-hyperon matter employed in NS simulations. Here we construct three possible models of nucleon-hyperon matter consistent with the existence of $2~M_\\odot$ pulsars as well as with semi-empirical nuclear matter parameters at saturation, and semi-empirical hypernuclear data. Our aim is to calculate for these models all the parameters necessary for modelling dynamics of hyperon stars (such as equation of state, adiabatic indices, thermodynamic derivatives, relativistic entrainment matrix, etc.), making them available for a potential user. To this aim a general non-linear hadronic Lagrangian involving $\\sigma\\omega\\rho\\phi\\sigma^\\ast$ meson fields, as well as quartic terms in vector-meson fields, is considered. A universal scheme for calculation of the $\\ell=0,1$ Landau Fermi-liquid parameters and relativistic entrainment matrix is formulated in the mean-field approximation. ...

  20. Topological quantum error correction in the Kitaev honeycomb model

    Science.gov (United States)

    Lee, Yi-Chan; Brell, Courtney G.; Flammia, Steven T.

    2017-08-01

    The Kitaev honeycomb model is an approximate topological quantum error correcting code in the same phase as the toric code, but requiring only a 2-body Hamiltonian. As a frustrated spin model, it is well outside the commuting models of topological quantum codes that are typically studied, but its exact solubility makes it more amenable to analysis of effects arising in this noncommutative setting than a generic topologically ordered Hamiltonian. Here we study quantum error correction in the honeycomb model using both analytic and numerical techniques. We first prove explicit exponential bounds on the approximate degeneracy, local indistinguishability, and correctability of the code space. These bounds are tighter than can be achieved using known general properties of topological phases. Our proofs are specialized to the honeycomb model, but some of the methods may nonetheless be of broader interest. Following this, we numerically study noise caused by thermalization processes in the perturbative regime close to the toric code renormalization group fixed point. The appearance of non-topological excitations in this setting has no significant effect on the error correction properties of the honeycomb model in the regimes we study. Although the behavior of this model is found to be qualitatively similar to that of the standard toric code in most regimes, we find numerical evidence of an interesting effect in the low-temperature, finite-size regime where a preferred lattice direction emerges and anyon diffusion is geometrically constrained. We expect this effect to yield an improvement in the scaling of the lifetime with system size as compared to the standard toric code.

  1. Human task animation from performance models and natural language input

    Science.gov (United States)

    Esakov, Jeffrey; Badler, Norman I.; Jung, Moon

    1989-01-01

    Graphical manipulation of human figures is essential for certain types of human factors analyses such as reach, clearance, fit, and view. In many situations, however, the animation of simulated people performing various tasks may be based on more complicated functions involving multiple simultaneous reaches, critical timing, resource availability, and human performance capabilities. One rather effective means for creating such a simulation is through a natural language description of the tasks to be carried out. Given an anthropometrically-sized figure and a geometric workplace environment, various simple actions such as reach, turn, and view can be effectively controlled from language commands or standard NASA checklist procedures. The commands may also be generated by external simulation tools. Task timing is determined from actual performance models, if available, such as strength models or Fitts' Law. The resulting action specifications are animated on a Silicon Graphics Iris workstation in real-time.
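
    Fitts' Law, mentioned above as one of the performance models, maps a reach distance D and target width W to a movement time via MT = a + b log2(2D/W). The coefficients below are illustrative, not calibrated values from the paper.

```python
import math

def fitts_movement_time(distance, width, a=0.1, b=0.15):
    """Fitts' Law: movement time grows with the index of difficulty log2(2D/W)."""
    return a + b * math.log2(2.0 * distance / width)

for d, w in [(0.3, 0.05), (0.6, 0.05), (0.6, 0.02)]:
    mt = fitts_movement_time(d, w)
    print(f"reach D = {d:.2f} m, target W = {w:.2f} m -> MT = {mt:.2f} s")
```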

  2. Tumor Growth Model with PK Input for Neuroblastoma Drug Development

    Science.gov (United States)

    2015-09-01

    Grant support records: NCI, Anticancer Drug Pharmacology in Very Young Children, 9/2012-4/30/2017; DOD W81XWH-14-1-0103 / CA130396 (Stewart), Tumor Growth Model with PK Input for Neuroblastoma Drug Development, 9/1/2014-8/31/2016; V Foundation Translational (Stewart), Identification & preclinical testing, 11/1/2012-10/31/2015.

  3. Influence of input matrix representation on topic modelling performance

    CSIR Research Space (South Africa)

    De Waal, A

    2010-11-01

    ...model, perplexity is an appropriate measure. It provides an indication of the model's ability to generalise by measuring the exponent of the mean log-likelihood of words in a held-out test set of the corpus. The exploratory abilities of the latent... The phrases are clearly more intelligible than single-word phrases in many cases, thus demonstrating the qualitative advantage of the proposed method. (For the CRAN corpus, each subset of chunks includes the top 1000 chunks with the highest...)

  5. How sensitive are estimates of carbon fixation in agricultural models to input data?

    Science.gov (United States)

    Tum, Markus; Strauss, Franziska; McCallum, Ian; Günther, Kurt; Schmid, Erwin

    2012-02-01

    Process based vegetation models are central to understanding the hydrological and carbon cycles. To achieve useful results at regional to global scales, such models require various input data from a wide range of earth observations. Since the geographical extent of these datasets varies from local to global scale, data quality and validity are of major interest when they are chosen for use. It is important to assess the effect of different input datasets on the quality of model outputs. In this article, we reflect on both the uncertainty in input data and the reliability of model results. For our case study analysis we selected the Marchfeld region in Austria. We used independent meteorological datasets from the Central Institute for Meteorology and Geodynamics and the European Centre for Medium-Range Weather Forecasts (ECMWF). Land cover / land use information was taken from the GLC2000 and the CORINE 2000 products. For our case study analysis we selected two different process based models: the Environmental Policy Integrated Climate (EPIC) model and the Biosphere Energy Transfer Hydrology (BETHY/DLR) model. Both process models show a congruent pattern in response to changes in input data. The annual variability of NPP reaches 36% for BETHY/DLR and 39% for EPIC when changing major input datasets. However, EPIC is less sensitive to meteorological input data than BETHY/DLR. The ECMWF maximum temperatures show a systematic pattern: temperatures above 20°C are overestimated, whereas temperatures below 20°C are underestimated, resulting in an overall underestimation of NPP in both models. Besides, BETHY/DLR is sensitive to the choice and accuracy of the land cover product. This study shows that the impact of input data uncertainty on modelling results needs to be assessed: whenever the models are applied under new conditions, local data should be used for both input and result comparison.

  6. Modelling application for cognitive reliability and error analysis method

    Directory of Open Access Journals (Sweden)

    Fabio De Felice

    2013-10-01

    The automation of production systems has delegated to machines the execution of highly repetitive and standardized tasks. In the last decade, however, the failure of the automatic factory model has led to partially automated configurations of production systems. In this scenario, the centrality and responsibility of the role entrusted to human operators are heightened, because the role requires problem-solving and decision-making ability. Thus, the human operator is the core of a cognitive process that leads to decisions, influencing the safety of the whole system as a function of their reliability. The aim of this paper is to propose a modelling application for the cognitive reliability and error analysis method.

  7. Likelihood-Based Inference in Nonlinear Error-Correction Models

    DEFF Research Database (Denmark)

    Kristensen, Dennis; Rahbæk, Anders

    We consider a class of vector nonlinear error correction models where the transfer function (or loadings) of the stationary relationships is nonlinear. This includes in particular the smooth transition models. A general representation theorem is given which establishes the dynamic properties... and a linear trend in general. Gaussian likelihood-based estimators are considered for the long-run cointegration parameters and the short-run parameters. Asymptotic theory is provided for these, and it is discussed to what extent asymptotic normality and mixed normality can be found. A simulation study...

  8. Error modelling and experimental validation of a planar 3-PPR parallel manipulator with joint clearances

    DEFF Research Database (Denmark)

    Wu, Guanglei; Bai, Shaoping; Kepler, Jørgen Asbøl

    2012-01-01

    This paper deals with the error modelling and analysis of a 3-PPR planar parallel manipulator with joint clearances. The kinematics and the Cartesian workspace of the manipulator are analyzed. An error model is established with considerations of both configuration errors and joint clearances. Using this model, the upper bounds and distributions of the pose errors for this manipulator are established. The results are compared with experimental measurements and show the effectiveness of the error prediction model.

  10. An integrated model for the assessment of global water resources Part 1: Model description and input meteorological forcing

    Science.gov (United States)

    Hanasaki, N.; Kanae, S.; Oki, T.; Masuda, K.; Motoya, K.; Shirakawa, N.; Shen, Y.; Tanaka, K.

    2008-07-01

    To assess global water availability and use at a subannual timescale, an integrated global water resources model was developed consisting of six modules: land surface hydrology, river routing, crop growth, reservoir operation, environmental flow requirement estimation, and anthropogenic water withdrawal. The model simulates both natural and anthropogenic water flow globally (excluding Antarctica) on a daily basis at a spatial resolution of 1°×1° (longitude and latitude). This first part of the two-feature report describes the six modules and the input meteorological forcing. The input meteorological forcing was provided by the second Global Soil Wetness Project (GSWP2), an international land surface modeling project. Several reported shortcomings of the forcing component were improved. The land surface hydrology module was developed based on a bucket type model that simulates energy and water balance on land surfaces. The crop growth module is a relatively simple model based on concepts of heat unit theory, potential biomass, and a harvest index. In the reservoir operation module, 452 major reservoirs with >1 km3 each of storage capacity store and release water according to their own rules of operation. Operating rules were determined for each reservoir by an algorithm that used currently available global data such as reservoir storage capacity, intended purposes, simulated inflow, and water demand in the lower reaches. The environmental flow requirement module was newly developed based on case studies from around the world. Simulated runoff was compared and validated with observation-based global runoff data sets and observed streamflow records at 32 major river gauging stations around the world. Mean annual runoff agreed well with earlier studies at global and continental scales, and in individual basins, the mean bias was less than ±20% in 14 of the 32 river basins and less than ±50% in 24 basins. The error in the peak was less than ±1 mo in 19 of the 27

  11. Robust Quantum Error Correction via Convex Optimization

    CERN Document Server

    Kosut, R L; Lidar, D A

    2007-01-01

    Quantum error correction procedures have traditionally been developed for specific error models, and are not robust against uncertainty in the errors. Using a semidefinite program optimization approach we find high fidelity quantum error correction procedures which present robust encoding and recovery effective against significant uncertainty in the error system. We present numerical examples for 3, 5, and 7-qubit codes. Our approach requires as input a description of the error channel, which can be provided via quantum process tomography.

  12. Remote sensing inputs to landscape models which predict future spatial land use patterns for hydrologic models

    Science.gov (United States)

    Miller, L. D.; Tom, C.; Nualchawee, K.

    1977-01-01

    A tropical forest area of Northern Thailand provided a test case of the application of the approach in more natural surroundings. Remote sensing imagery subjected to proper computer analysis has been shown to be a very useful means of collecting spatial data for the science of hydrology. Remote sensing products provide direct input to hydrologic models and practical data bases for planning large and small-scale hydrologic developments. Combining the available remote sensing imagery together with available map information in the landscape model provides a basis for substantial improvements in these applications.

  13. Researches on the Model of Telecommunication Service with Variable Input Tariff Rates

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    The paper sets up and studies a model of a telecommunication queueing service system with variable input tariff rates, which can relieve congested traffic flows during the busy hour and thereby improve the utilization of the telecom's resources.

  14. Statistical selection of multiple-input multiple-output nonlinear dynamic models of spike train transformation.

    Science.gov (United States)

    Song, Dong; Chan, Rosa H M; Marmarelis, Vasilis Z; Hampson, Robert E; Deadwyler, Sam A; Berger, Theodore W

    2007-01-01

    A multiple-input multiple-output nonlinear dynamic model of spike train to spike train transformations was previously formulated for hippocampal-cortical prostheses. This paper further describes the statistical methods of selecting significant inputs (self-terms) and interactions between inputs (cross-terms) of this Volterra kernel-based model. In our approach, model structure was determined by progressively adding self-terms and cross-terms using a forward stepwise model selection technique. Model coefficients were then pruned based on the Wald test. Results showed that the reduced kernel models, which contained far fewer coefficients than the full Volterra kernel model, gave good fits to the novel data. These models could be used to analyze the functional interactions between neurons during behavior.
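
    The record describes forward stepwise selection of kernel terms followed by Wald-test pruning. As a loose illustration of that two-stage idea (not the authors' Volterra implementation; the regressors, data, and thresholds below are invented, and the statsmodels package is assumed to be available):

```python
# Sketch: forward stepwise selection by BIC, then Wald-test pruning.
# The candidate regressors stand in for kernel self-terms and cross-terms.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
x1, x2 = rng.normal(size=(2, n))
y = 1.0 + 0.8 * x1 + 0.5 * x1 * x2 + rng.normal(scale=0.3, size=n)

candidates = {"x1": x1, "x2": x2, "x1*x2": x1 * x2}
selected, improved = [], True
while improved:
    improved = False
    best_name, best_bic = None, None
    for name in candidates:
        if name in selected:
            continue
        X = sm.add_constant(np.column_stack(
            [candidates[c] for c in selected + [name]]))
        bic = sm.OLS(y, X).fit().bic
        if best_bic is None or bic < best_bic:
            best_name, best_bic = name, bic
    # current model's BIC (intercept-only if nothing selected yet)
    X_cur = (sm.add_constant(np.column_stack([candidates[c] for c in selected]))
             if selected else np.ones((n, 1)))
    if best_bic is not None and best_bic < sm.OLS(y, X_cur).fit().bic:
        selected.append(best_name)
        improved = True

# Wald tests on each coefficient (t-test p-values); prune insignificant terms
X = sm.add_constant(np.column_stack([candidates[c] for c in selected]))
fit = sm.OLS(y, X).fit()
kept = [c for c, p in zip(selected, fit.pvalues[1:]) if p < 0.05]
print("selected:", selected, "-> kept after Wald pruning:", kept)
```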

  15. Analysis and Correction of Systematic Height Model Errors

    Science.gov (United States)

    Jacobsen, K.

    2016-06-01

    The geometry of digital height models (DHM) determined with optical satellite stereo combinations depends upon the image orientation, influenced by the satellite camera, the system calibration and attitude registration. As standard these days the image orientation is available in the form of rational polynomial coefficients (RPC). Usually a bias correction of the RPC based on ground control points is required. In most cases the bias correction requires affine transformation, sometimes only shifts, in image or object space. For some satellites and some cases, as caused by small base length, such an image orientation does not lead to the possible accuracy of height models. As reported e.g. by Yong-hua et al. 2015 and Zhang et al. 2015, especially the Chinese stereo satellite ZiYuan-3 (ZY-3) has a limited calibration accuracy and just an attitude recording of 4 Hz, which may not be satisfying. Zhang et al. 2015 tried to improve the attitude based on the color sensor bands of ZY-3, but the color images, like detailed satellite orientation information, are not always available. There is a tendency toward systematic deformation in a Pléiades tri-stereo combination with small base length. The small base length enlarges small systematic errors to object space. But also in some other satellite stereo combinations systematic height model errors have been detected. The largest influence is the unsatisfactory leveling of the height models, but also low frequency height deformations can be seen. A tilt of the DHM can in theory be eliminated by ground control points (GCP), but often the GCP accuracy and distribution is not optimal, not allowing a correct leveling of the height model. In addition a model deformation at GCP locations may prevent optimal DHM leveling. Supported by reference height models, better accuracy has been reached. As reference height model the Shuttle Radar Topography Mission (SRTM) digital surface model (DSM) or the new AW3D30 DSM, based on ALOS PRISM images, are
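
    The leveling step described here, removing a shift and tilt of a DHM against a reference surface such as the SRTM DSM, amounts to a plane fit on the height differences. A minimal numpy sketch under that reading, with invented arrays standing in for real DHM/DSM grids:

```python
# Minimal sketch: level a digital height model against a reference DSM by
# least-squares fitting a tilted plane dz ~ a + b*x + c*y to the height
# differences. Grids and noise levels are illustrative assumptions.
import numpy as np

def level_dhm(dhm, reference):
    """Remove shift and tilt of `dhm` relative to `reference` (same grid)."""
    ny, nx = dhm.shape
    y, x = np.mgrid[0:ny, 0:nx]
    dz = (dhm - reference).ravel()
    A = np.column_stack([np.ones(dz.size), x.ravel(), y.ravel()])
    coef, *_ = np.linalg.lstsq(A, dz, rcond=None)   # [shift, tilt_x, tilt_y]
    return dhm - (A @ coef).reshape(ny, nx), coef

rng = np.random.default_rng(1)
ref = rng.normal(1000.0, 50.0, size=(100, 100))
tilted = ref + 2.0 + 0.05 * np.arange(100)[None, :] + rng.normal(0, 0.5, (100, 100))
leveled, coef = level_dhm(tilted, ref)
print("estimated shift/tilt:", np.round(coef, 3))
```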

  16. ANALYSIS AND CORRECTION OF SYSTEMATIC HEIGHT MODEL ERRORS

    Directory of Open Access Journals (Sweden)

    K. Jacobsen

    2016-06-01

    Full Text Available The geometry of digital height models (DHM) determined with optical satellite stereo combinations depends upon the image orientation, influenced by the satellite camera, the system calibration and attitude registration. As standard these days the image orientation is available in the form of rational polynomial coefficients (RPC). Usually a bias correction of the RPC based on ground control points is required. In most cases the bias correction requires affine transformation, sometimes only shifts, in image or object space. For some satellites and some cases, as caused by small base length, such an image orientation does not lead to the possible accuracy of height models. As reported e.g. by Yong-hua et al. 2015 and Zhang et al. 2015, especially the Chinese stereo satellite ZiYuan-3 (ZY-3) has a limited calibration accuracy and just an attitude recording of 4 Hz, which may not be satisfying. Zhang et al. 2015 tried to improve the attitude based on the color sensor bands of ZY-3, but the color images, like detailed satellite orientation information, are not always available. There is a tendency toward systematic deformation in a Pléiades tri-stereo combination with small base length. The small base length enlarges small systematic errors to object space. But also in some other satellite stereo combinations systematic height model errors have been detected. The largest influence is the unsatisfactory leveling of the height models, but also low frequency height deformations can be seen. A tilt of the DHM can in theory be eliminated by ground control points (GCP), but often the GCP accuracy and distribution is not optimal, not allowing a correct leveling of the height model. In addition a model deformation at GCP locations may prevent optimal DHM leveling. Supported by reference height models, better accuracy has been reached. As reference height model the Shuttle Radar Topography Mission (SRTM) digital surface model (DSM) or the new AW3D30 DSM, based on ALOS

  17. Using Laser Scanners to Augment the Systematic Error Pointing Model

    Science.gov (United States)

    Wernicke, D. R.

    2016-08-01

    The antennas of the Deep Space Network (DSN) rely on precise pointing algorithms to communicate with spacecraft that are billions of miles away. Although the existing systematic error pointing model is effective at reducing blind pointing errors due to static misalignments, several of its terms have a strong dependence on seasonal and even daily thermal variation and are thus not easily modeled. Changes in the thermal state of the structure create a separation from the model and introduce a varying pointing offset. Compensating for this varying offset is possible by augmenting the pointing model with laser scanners. In this approach, laser scanners mounted to the alidade measure structural displacements while a series of transformations generate correction angles. Two sets of experiments were conducted in August 2015 using commercially available laser scanners. When compared with historical monopulse corrections under similar conditions, the computed corrections are within 3 mdeg of the mean. However, although the results show promise, several key challenges relating to the sensitivity of the optical equipment to sunlight render an implementation of this approach impractical. Other measurement devices such as inclinometers may be implementable at a significantly lower cost.

  18. Bayesian nonlinear structural FE model and seismic input identification for damage assessment of civil structures

    Science.gov (United States)

    Astroza, Rodrigo; Ebrahimian, Hamed; Li, Yong; Conte, Joel P.

    2017-09-01

    A methodology is proposed to update mechanics-based nonlinear finite element (FE) models of civil structures subjected to unknown input excitation. The approach allows joint estimation of the unknown time-invariant model parameters of a nonlinear FE model of the structure and the unknown time histories of input excitations, using spatially-sparse output response measurements recorded during an earthquake event. The unscented Kalman filter, which circumvents the computation of FE response sensitivities with respect to the unknown model parameters and unknown input excitations by using a deterministic sampling approach, is employed as the estimation tool. The use of measurement data obtained from arrays of heterogeneous sensors, including accelerometers, displacement sensors, and strain gauges is investigated. Based on the estimated FE model parameters and input excitations, the updated nonlinear FE model can be interrogated to detect, localize, classify, and assess damage in the structure. Numerically simulated response data of a three-dimensional 4-story 2-by-1 bay steel frame structure with six unknown model parameters subjected to unknown bi-directional horizontal seismic excitation, and a three-dimensional 5-story 2-by-1 bay reinforced concrete frame structure with nine unknown model parameters subjected to unknown bi-directional horizontal seismic excitation are used to illustrate and validate the proposed methodology. The results of the validation studies show the excellent performance and robustness of the proposed algorithm in jointly estimating unknown FE model parameters and unknown input excitations.
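
    A toy version of the joint estimation idea can be written with an unscented Kalman filter by appending the unknown quantity to the state vector. The sketch below assumes the filterpy package is available and replaces the nonlinear FE model with a one-degree-of-freedom oscillator whose stiffness is unknown; it is not the authors' implementation, and all noise levels are illustrative.

```python
# Toy sketch of joint state/parameter estimation with a UKF (filterpy):
# a 1-DOF oscillator with unknown stiffness k appended to the state.
import numpy as np
from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

dt = 0.01

def fx(x, dt):
    # state = [displacement u, velocity v, log-stiffness theta]
    u, v, theta = x
    k = np.exp(theta)                     # keep stiffness positive
    return np.array([u + dt * v, v - dt * k * u, theta])

def hx(x):
    return x[:1]                          # measure displacement only

points = MerweScaledSigmaPoints(n=3, alpha=1e-3, beta=2.0, kappa=0.0)
ukf = UnscentedKalmanFilter(dim_x=3, dim_z=1, dt=dt, fx=fx, hx=hx, points=points)
ukf.x = np.array([0.1, 0.0, np.log(5.0)])           # wrong initial stiffness
ukf.P = np.diag([0.01, 0.01, 1.0])
ukf.R = np.array([[1e-4]])
ukf.Q = np.diag([1e-8, 1e-8, 1e-6])

# simulate "true" data with k = 10, then filter
true_k, x_true = 10.0, np.array([0.1, 0.0])
rng = np.random.default_rng(2)
for _ in range(2000):
    x_true = np.array([x_true[0] + dt * x_true[1],
                       x_true[1] - dt * true_k * x_true[0]])
    z = x_true[0] + rng.normal(scale=1e-2)
    ukf.predict()
    ukf.update(np.array([z]))
print("estimated stiffness:", np.exp(ukf.x[2]))
```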

  19. Modeling Approach of Regression Orthogonal Experiment Design for Thermal Error Compensation of CNC Turning Center

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    Thermally induced errors can account for as much as 70% of the dimensional errors on a workpiece. Accurate modeling of errors is an essential part of error compensation. Based on an analysis of the existing approaches to thermal error modeling for machine tools, a new approach of regression orthogonal design is proposed, which combines statistical theory with machine structures, surrounding conditions, engineering judgement, and experience in modeling. A whole computation and analysis procedure is given. ...

  20. Modeling SMAP Spacecraft Attitude Control Estimation Error Using Signal Generation Model

    Science.gov (United States)

    Rizvi, Farheen

    2016-01-01

    Two ground simulation software packages are used to model the SMAP spacecraft dynamics. The CAST software uses a higher fidelity model than the ADAMS software. The ADAMS software models the spacecraft plant, controller and actuator models, and assumes a perfect sensor and estimator model. In this simulation study, the spacecraft dynamics results from the ADAMS software are used, as the CAST software is unavailable. The main source of spacecraft dynamics error in the higher fidelity CAST software is the estimation error. A signal generation model is developed to capture the effect of this estimation error on the overall spacecraft dynamics. Then, this signal generation model is included in the ADAMS software spacecraft dynamics estimate such that the results are similar to CAST. This signal generation model has characteristics (mean, variance and power spectral density) similar to those of the true CAST estimation error. In this way, the ADAMS software can still be used while capturing the higher fidelity spacecraft dynamics modeling from the CAST software.
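
    In the spirit of that description, a signal generation model can be built by fitting a low-order autoregressive process to an error time series and then synthesizing a surrogate with similar mean, variance, and power spectral density. A sketch assuming scipy and statsmodels are available, with an invented stand-in for the CAST error series:

```python
# Sketch of a signal generation model: fit an AR process to an observed
# estimation-error series, then synthesize a surrogate with matched
# mean, variance, and PSD. The "true" error series is illustrative.
import numpy as np
from scipy.signal import lfilter
from statsmodels.regression.linear_model import yule_walker

rng = np.random.default_rng(3)
# stand-in for the CAST estimation error: colored noise with a small bias
true_err = lfilter([1.0], [1.0, -0.9], rng.normal(size=5000)) + 0.05

mean = true_err.mean()
rho, sigma = yule_walker(true_err - mean, order=4)   # AR(4) coefficients
surrogate = mean + lfilter([1.0], np.r_[1.0, -rho],
                           rng.normal(scale=sigma, size=5000))
print("mean/std true:     ", mean, true_err.std())
print("mean/std surrogate:", surrogate.mean(), surrogate.std())
```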

  1. Multi-bump solutions in a neural field model with external inputs

    Science.gov (United States)

    Ferreira, Flora; Erlhagen, Wolfram; Bicho, Estela

    2016-07-01

    We study the conditions for the formation of multiple regions of high activity or "bumps" in a one-dimensional, homogeneous neural field with localized inputs. Stable multi-bump solutions of the integro-differential equation have been proposed as a model of a neural population representation of remembered external stimuli. We apply a class of oscillatory coupling functions and first derive criteria on the input width and distance, relative to the synaptic couplings, that guarantee the existence and stability of one and two regions of high activity. These input-induced patterns are attracted by the corresponding stable one-bump and two-bump solutions when the input is removed. We then extend our analytical and numerical investigation to N-bump solutions, showing that the constraints on the input shape derived for the two-bump case can be exploited to generate a memory of N > 2 localized inputs. We discuss the pattern formation process when either the conditions on the input shape are violated or when the spatial ranges of the excitatory and inhibitory connections are changed. An important aspect for applications is that the theoretical findings allow us to determine for a given coupling function the maximum number of localized inputs that can be stored in a given finite interval.
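
    A minimal numerical version of such a neural field, an Amari-type equation u_t = -u + w * f(u) + I with an oscillatory kernel and two localized inputs, can be stepped forward with explicit Euler; all constants below are illustrative choices rather than the paper's parameter values:

```python
# Minimal 1-D neural field sketch: lateral interactions via convolution
# with an oscillatory kernel, two Gaussian inputs removed halfway through.
import numpy as np

L, n, dt, steps = 30.0, 600, 0.05, 2000
x = np.linspace(-L / 2, L / 2, n)
dx = x[1] - x[0]

w = 1.2 * np.exp(-0.5 * np.abs(x)) * np.cos(1.5 * x)    # oscillatory kernel
f = lambda u: 1.0 / (1.0 + np.exp(-10.0 * (u - 0.5)))   # sigmoidal rate
inputs = 1.5 * (np.exp(-(x + 6) ** 2) + np.exp(-(x - 6) ** 2))  # two bumps

u = np.zeros(n)
for t in range(steps):
    drive = inputs if t < steps // 2 else 0.0           # remove input midway
    conv = dx * np.convolve(f(u), w, mode="same")       # lateral interactions
    u += dt * (-u + conv + drive)

print("activity above threshold after input removal:", (u > 0.5).any())
```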

  2. Evaluation Of Statistical Models For Forecast Errors From The HBV-Model

    Science.gov (United States)

    Engeland, K.; Kolberg, S.; Renard, B.; Stensland, I.

    2009-04-01

    Three statistical models for the forecast errors for inflow to the Langvatn reservoir in Northern Norway have been constructed and tested according to how well the distribution and median values of the forecast errors fit the observations. For the first model, observed and forecasted inflows were transformed by the Box-Cox transformation before a first order autoregressive model was constructed for the forecast errors. The parameters were conditioned on climatic conditions. In the second model the Normal Quantile Transformation (NQT) was applied to observed and forecasted inflows before a similar first order autoregressive model was constructed for the forecast errors. For the last model, positive and negative errors were modeled separately. The errors were first NQT-transformed, and then a model was constructed in which the mean values were conditioned on climate, forecasted inflow and the previous day's error. To test the three models we applied three criteria: we wanted a) the median values to be close to the observed values; b) the forecast intervals to be narrow; c) the distribution to be correct. The results showed that it is difficult to obtain a correct model for the forecast errors, and that the main challenge is to account for the auto-correlation in the errors. Models 1 and 2 gave similar results, and their main drawback is that the distributions are not correct. The 95% forecast intervals were well identified, but smaller forecast intervals were over-estimated, and larger intervals were under-estimated. Model 3 gave a distribution that fits better, but the median values do not fit well since the auto-correlation is not properly accounted for. If the 95% forecast interval is of interest, Model 2 is recommended. If the whole distribution is of interest, Model 3 is recommended.
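
    The first model described here, a Box-Cox transformation followed by a first-order autoregressive model of the forecast errors, can be sketched in a few lines; the inflow series is synthetic and the conditioning on climatic conditions is omitted:

```python
# Sketch of model 1: Box-Cox transform the flows, then fit AR(1) to the
# forecast errors. Data are synthetic stand-ins for inflow series.
import numpy as np
from scipy.stats import boxcox

rng = np.random.default_rng(4)
obs = rng.gamma(shape=3.0, scale=20.0, size=1000)            # "observed" inflow
fcst = obs * rng.lognormal(mean=0.0, sigma=0.15, size=1000)  # "forecast"

obs_t, lam = boxcox(obs)                 # transform observations, get lambda
fcst_t = (fcst ** lam - 1.0) / lam       # same lambda applied to forecasts
err = obs_t - fcst_t

# AR(1) fit by regressing today's error on yesterday's
phi = np.polyfit(err[:-1], err[1:], deg=1)[0]
resid = err[1:] - phi * err[:-1]
print(f"lambda={lam:.3f}, AR(1) phi={phi:.3f}, residual std={resid.std():.3f}")
```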

  3. Error sources in atomic force microscopy for dimensional measurements: Taxonomy and modeling

    DEFF Research Database (Denmark)

    Marinello, F.; Voltan, A.; Savio, E.

    2010-01-01

    This paper aimed at identifying the error sources that occur in dimensional measurements performed using atomic force microscopy. In particular, a set of characterization techniques for error quantification is presented. The discussion on error sources is organized in four main categories......: scanning system, tip-surface interaction, environment, and data processing. The discussed errors include scaling effects, squareness errors, hysteresis, creep, tip convolution, and thermal drift. A mathematical model of the measurement system is eventually described, as a reference basis for errors....

  4. Semiparametric modeling: Correcting low-dimensional model error in parametric models

    Science.gov (United States)

    Berry, Tyrus; Harlim, John

    2016-03-01

    In this paper, a semiparametric modeling approach is introduced as a paradigm for addressing model error arising from unresolved physical phenomena. Our approach compensates for model error by learning an auxiliary dynamical model for the unknown parameters. Practically, the proposed approach consists of the following steps. Given a physics-based model and a noisy data set of historical observations, a Bayesian filtering algorithm is used to extract a time-series of the parameter values. Subsequently, the diffusion forecast algorithm is applied to the retrieved time-series in order to construct the auxiliary model for the time evolving parameters. The semiparametric forecasting algorithm consists of integrating the existing physics-based model with an ensemble of parameters sampled from the probability density function of the diffusion forecast. To specify initial conditions for the diffusion forecast, a Bayesian semiparametric filtering method that extends the Kalman-based filtering framework is introduced. In difficult test examples, which introduce chaotically and stochastically evolving hidden parameters into the Lorenz-96 model, we show that our approach can effectively compensate for model error, with forecasting skill comparable to that of the perfect model.

  5. Input-output model for MACCS nuclear accident impacts estimation¹

    Energy Technology Data Exchange (ETDEWEB)

    Outkin, Alexander V. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bixler, Nathan E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Vargas, Vanessa N [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-01-27

    Since the original economic model for MACCS was developed, better quality economic data (as well as the tools to gather and process it) and better computational capabilities have become available. The update of the economic impacts component of the MACCS legacy model will provide improved estimates of business disruptions through the use of Input-Output based economic impact estimation. This paper presents an updated MACCS model, based on Input-Output methodology, in which economic impacts are calculated using the Regional Economic Accounting analysis tool (REAcct) created at Sandia National Laboratories. This new GDP-based model allows quick and consistent estimation of gross domestic product (GDP) losses due to nuclear power plant accidents. This paper outlines the steps taken to combine the REAcct Input-Output-based model with the MACCS code, describes the GDP loss calculation, and discusses the parameters and modeling assumptions necessary for the estimation of long-term effects of nuclear power plant accidents.
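
    The core input-output computation behind such GDP-loss estimates is the Leontief relation x = (I - A)^-1 d; a demand shock changes d, and the implied change in total output follows. A toy three-sector example (the coefficient matrix and demand vector are invented, not REAcct data):

```python
# Illustrative Leontief input-output loss calculation: propagate a final
# demand shock to total output via x = (I - A)^-1 d.
import numpy as np

A = np.array([[0.10, 0.20, 0.05],        # technical coefficients (made up)
              [0.15, 0.10, 0.10],
              [0.05, 0.25, 0.15]])
d = np.array([100.0, 150.0, 80.0])       # baseline final demand
d_shock = d * np.array([1.0, 0.7, 1.0])  # 30% demand loss in sector 2

I = np.eye(3)
x_base = np.linalg.solve(I - A, d)
x_shock = np.linalg.solve(I - A, d_shock)
print("output loss by sector:", np.round(x_base - x_shock, 1))
print("total output loss:", round(float((x_base - x_shock).sum()), 1))
```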

  6. Error estimates for the Skyrme-Hartree-Fock model

    CERN Document Server

    Erler, J

    2014-01-01

    There are many complementing strategies to estimate the extrapolation errors of a model which was calibrated in least-squares fits. We consider the Skyrme-Hartree-Fock model for nuclear structure and dynamics and exemplify the following five strategies: uncertainties from statistical analysis, covariances between observables, trends of residuals, variation of fit data, dedicated variation of model parameters. This gives useful insight into the impact of the key fit data as they are: binding energies, charge r.m.s. radii, and charge formfactor. Amongst others, we check in particular the predictive value for observables in the stable nucleus $^{208}$Pb, the super-heavy element $^{266}$Hs, $r$-process nuclei, and neutron stars.

  7. Modelling Soft Error Probability in Firmware: A Case Study

    Directory of Open Access Journals (Sweden)

    DG Kourie

    2012-06-01

    Full Text Available This case study involves an analysis of firmware that controls explosions in mining operations. The purpose is to estimate the probability that external disruptive events (such as electro-magnetic interference) could drive the firmware into a state which results in an unintended explosion. Two probabilistic models are built, based on two possible types of disruptive events: a single spike of interference, and a burst of multiple spikes of interference. The models suggest that the system conforms to the IEC 61508 Safety Integrity Levels, even under very conservative assumptions of operation. The case study serves as a platform for future researchers to build on when modelling soft error probabilities in other contexts.

  8. Reconstruction of neuronal input through modeling single-neuron dynamics and computations

    Energy Technology Data Exchange (ETDEWEB)

    Qin, Qing; Wang, Jiang; Yu, Haitao; Deng, Bin, E-mail: dengbin@tju.edu.cn; Chan, Wai-lok [School of Electrical Engineering and Automation, Tianjin University, Tianjin 300072 (China)

    2016-06-15

    Mathematical models provide a mathematical description of neuron activity, which helps to better understand and quantify neural computations and the corresponding biophysical mechanisms evoked by a stimulus. In this paper, based on the output spike train evoked by the acupuncture mechanical stimulus, we present two different levels of models to describe the input-output system and achieve the reconstruction of neuronal input. The reconstruction process is divided into two steps: First, the neuronal spiking event is considered as a Gamma stochastic process. The scale parameter and the shape parameter of the Gamma process are, respectively, defined as two spiking characteristics, which are estimated by a state-space method. Then, a leaky integrate-and-fire (LIF) model is used to mimic the response system and the estimated spiking characteristics are transformed into two temporal input parameters of the LIF model through two conversion formulas. We test this reconstruction method on three different groups of simulation data. All three groups of estimates reconstruct the input parameters with fairly high accuracy. We then use this reconstruction method to estimate the non-measurable acupuncture input parameters. Results show that under three different frequencies of acupuncture stimulus, the estimated input parameters differ clearly. The higher the frequency of the acupuncture stimulus is, the higher the accuracy of the reconstruction is.
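
    The forward half of this pipeline, an LIF neuron mapping input parameters to a spike train whose interspike intervals can then be summarized by Gamma statistics, can be sketched as follows (all constants are illustrative, not the paper's values):

```python
# LIF neuron driven by a mean input mu and noise amplitude sigma; the
# output ISIs are moment-matched to a Gamma distribution.
import numpy as np

def lif_spike_train(mu, sigma, T=5.0, dt=1e-4, tau=0.02, v_th=1.0, v_reset=0.0):
    """Simulate a leaky integrate-and-fire neuron; return spike times."""
    rng = np.random.default_rng(5)
    v, spikes = 0.0, []
    for i in range(int(T / dt)):
        dv = (-v + mu) * dt / tau + sigma * np.sqrt(dt / tau) * rng.normal()
        v += dv
        if v >= v_th:
            spikes.append(i * dt)
            v = v_reset
    return np.array(spikes)

spikes = lif_spike_train(mu=1.2, sigma=0.3)
isi = np.diff(spikes)
# Gamma moment matching: shape = mean^2/var, scale = var/mean of the ISIs
print("rate:", len(spikes) / 5.0, "Hz, Gamma shape:", isi.mean()**2 / isi.var())
```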

  9. Reconstruction of neuronal input through modeling single-neuron dynamics and computations

    Science.gov (United States)

    Qin, Qing; Wang, Jiang; Yu, Haitao; Deng, Bin; Chan, Wai-lok

    2016-06-01

    Mathematical models provide a mathematical description of neuron activity, which helps to better understand and quantify neural computations and the corresponding biophysical mechanisms evoked by a stimulus. In this paper, based on the output spike train evoked by the acupuncture mechanical stimulus, we present two different levels of models to describe the input-output system and achieve the reconstruction of neuronal input. The reconstruction process is divided into two steps: First, the neuronal spiking event is considered as a Gamma stochastic process. The scale parameter and the shape parameter of the Gamma process are, respectively, defined as two spiking characteristics, which are estimated by a state-space method. Then, a leaky integrate-and-fire (LIF) model is used to mimic the response system and the estimated spiking characteristics are transformed into two temporal input parameters of the LIF model through two conversion formulas. We test this reconstruction method on three different groups of simulation data. All three groups of estimates reconstruct the input parameters with fairly high accuracy. We then use this reconstruction method to estimate the non-measurable acupuncture input parameters. Results show that under three different frequencies of acupuncture stimulus, the estimated input parameters differ clearly. The higher the frequency of the acupuncture stimulus is, the higher the accuracy of the reconstruction is.

  10. Input-to-output transformation in a model of the rat hippocampal CA1 network

    OpenAIRE

    Olypher, Andrey V; Lytton, William W; Prinz, Astrid A.

    2012-01-01

    Here we use computational modeling to gain new insights into the transformation of inputs in hippocampal field CA1. We considered input-output transformation in CA1 principal cells of the rat hippocampus, with activity synchronized by population gamma oscillations. Prior experiments have shown that such synchronization is especially strong for cells within one millimeter of each other. We therefore simulated a one-millimeter patch of CA1 with 23,500 principal cells. We used morphologically an...

  11. Regional Input Output Models and the FLQ Formula: A Case Study of Finland

    OpenAIRE

    Tony Flegg; Paul White

    2008-01-01

    This paper examines the use of location quotients (LQs) in constructing regional input-output models. Its focus is on the augmented FLQ formula (AFLQ) proposed by Flegg and Webber, 2000, which takes regional specialization explicitly into account. In our case study, we examine data for 20 Finnish regions, ranging in size from very small to very large, in order to assess the relative performance of the AFLQ formula in estimating regional imports, total intermediate inputs and output multiplier...
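
    As a rough illustration of the (unaugmented) FLQ adjustment, using the commonly cited formulation FLQ_ij = CILQ_ij x [log2(1 + TRE/TNE)]^delta with the diagonal based on the simple location quotient; the employment figures, national coefficients and delta below are invented, and the AFLQ refinement studied in the paper is omitted:

```python
# Sketch of Flegg's location quotient (FLQ) regionalization of national
# input-output coefficients. All numbers are illustrative.
import numpy as np

RE = np.array([20.0, 50.0, 30.0])     # regional employment by sector
NE = np.array([400.0, 600.0, 500.0])  # national employment by sector
A_nat = np.array([[0.10, 0.20, 0.05],
                  [0.15, 0.10, 0.10],
                  [0.05, 0.25, 0.15]])  # national coefficients (made up)
delta = 0.3

SLQ = (RE / RE.sum()) / (NE / NE.sum())            # simple location quotients
CILQ = SLQ[:, None] / SLQ[None, :]                 # cross-industry LQs
np.fill_diagonal(CILQ, SLQ)                        # diagonal uses SLQ
lam = np.log2(1.0 + RE.sum() / NE.sum()) ** delta  # regional-size scaling
FLQ = CILQ * lam

# regional coefficients: scale down only where FLQ < 1 (imports leak out)
A_reg = A_nat * np.minimum(FLQ, 1.0)
print(np.round(A_reg, 3))
```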

  12. Interregional spillovers in Spain: an estimation using an interregional input-output model

    OpenAIRE

    Llano, Carlos

    2009-01-01

    In this note we introduce the 1995 Spanish Interregional Input-Output Model, which was estimated using a wide set of one-region input-output tables and interregional trade matrices, estimated for each sector using interregional transport flows. Based on this framework, and by means of the Hypothetical Regional Extraction Method, the interregional backward and feedback effects are computed, capturing the pull effect of every region over the rest of Spain, through their sectoral relations withi...
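
    The hypothetical extraction calculation reduces to solving the Leontief system with and without the coefficient blocks that link the region of interest to the rest of the economy. A two-region, two-sector toy example (all numbers invented):

```python
# Sketch of the hypothetical (regional) extraction method: zero out the
# blocks linking one region to the rest and compare total outputs.
import numpy as np

# block structure: rows/cols 0-1 = region R, 2-3 = rest of country
A = np.array([[0.10, 0.15, 0.05, 0.02],
              [0.20, 0.10, 0.04, 0.06],
              [0.03, 0.05, 0.12, 0.18],
              [0.02, 0.04, 0.22, 0.10]])
d = np.array([50.0, 60.0, 200.0, 180.0])

x_full = np.linalg.solve(np.eye(4) - A, d)

A_ext = A.copy()
A_ext[:2, 2:] = 0.0          # region R no longer sells inputs to the rest
A_ext[2:, :2] = 0.0          # ...and no longer buys inputs from it
x_ext = np.linalg.solve(np.eye(4) - A_ext, d)

print("pull effect of region R on the rest of the economy:",
      round(float(x_full[2:].sum() - x_ext[2:].sum()), 2))
```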

  13. Uncertainty and error in complex plasma chemistry models

    Science.gov (United States)

    Turner, Miles M.

    2015-06-01

    Chemistry models that include dozens of species and hundreds to thousands of reactions are common in low-temperature plasma physics. The rate constants used in such models are uncertain, because they are obtained from some combination of experiments and approximate theories. Since the predictions of these models are a function of the rate constants, these predictions must also be uncertain. However, systematic investigations of the influence of uncertain rate constants on model predictions are rare to non-existent. In this work we examine a particular chemistry model, for helium-oxygen plasmas. This chemistry is of topical interest because of its relevance to biomedical applications of atmospheric pressure plasmas. We trace the primary sources for every rate constant in the model, and hence associate an error bar (or equivalently, an uncertainty) with each. We then use a Monte Carlo procedure to quantify the uncertainty in predicted plasma species densities caused by the uncertainty in the rate constants. Under the conditions investigated, the range of uncertainty in most species densities is a factor of two to five. However, the uncertainty can vary strongly for different species, over time, and with other plasma conditions. There are extreme (pathological) cases where the uncertainty is more than a factor of ten. One should therefore be cautious in drawing any conclusion from plasma chemistry modelling, without first ensuring that the conclusion in question survives an examination of the related uncertainty.
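
    The Monte Carlo procedure described here can be mimicked on a toy chemistry: sample each rate constant from a lognormal spread (a "factor of two" error bar) and record the resulting spread in a predicted density. The two-reaction system below is an invented stand-in for the helium-oxygen set:

```python
# Monte Carlo propagation of rate-constant uncertainty through a toy
# A -> B -> products chemistry. Nominal rates and spreads are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(6)
k1_nom, k2_nom = 1e-2, 5e-3          # nominal rate constants
uncertainty = 2.0                     # "factor of two" error bars

def rhs(t, y, k1, k2):
    a, b = y
    return [-k1 * a, k1 * a - k2 * b]

finals = []
for _ in range(500):
    k1 = k1_nom * uncertainty ** rng.normal()   # lognormal sampling
    k2 = k2_nom * uncertainty ** rng.normal()
    sol = solve_ivp(rhs, (0.0, 200.0), [1.0, 0.0], args=(k1, k2))
    finals.append(sol.y[1, -1])

lo, hi = np.percentile(finals, [2.5, 97.5])
print(f"95% range of predicted density of B: [{lo:.3f}, {hi:.3f}]")
```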

  14. Likelihood-Based Inference in Nonlinear Error-Correction Models

    DEFF Research Database (Denmark)

    Kristensen, Dennis; Rahbæk, Anders

    We consider a class of vector nonlinear error correction models where the transfer function (or loadings) of the stationary relationships is nonlinear. This includes in particular the smooth transition models. A general representation theorem is given which establishes the dynamic properties...... of the process in terms of stochastic and deterministic trends as well as stationary components. In particular, the behaviour of the cointegrating relations is described in terms of geometric ergodicity. Despite the fact that no deterministic terms are included, the process will have both stochastic trends...... and a linear trend in general. Gaussian likelihood-based estimators are considered for the long-run cointegration parameters, and the short-run parameters. Asymptotic theory is provided for these and it is discussed to what extent asymptotic normality and mixed normality can be found. A simulation study...

  15. Econometric Model Estimation and Sensitivity Analysis of Inputs for Mandarin Production in Mazandaran Province of Iran

    Directory of Open Access Journals (Sweden)

    Majid Namdari

    2011-05-01

    Full Text Available This study examines the energy consumption of inputs and output used in mandarin production, and investigates the relationship between energy inputs and yield in Mazandaran, Iran. The Marginal Physical Product (MPP) method was used to analyze the sensitivity of energy inputs on mandarin yield, and the returns to scale of the econometric model were calculated. For this purpose, data were collected from 110 mandarin orchards selected by a random sampling method. The results indicated that total energy input was 77501.17 MJ/ha. The energy use efficiency, energy productivity and net energy of mandarin production were found to be 0.77, 0.41 kg/MJ and -17651.17 MJ/ha, respectively. About 41% of the total energy input used in mandarin production was indirect while about 59% was direct. Econometric estimation results revealed that the impact of human labor energy (0.37) was the highest among the inputs in mandarin production. The results also showed that direct, indirect, renewable and non-renewable energy forms had a positive and statistically significant impact on output level. The results of sensitivity analysis of the energy inputs showed that an additional use of 1 MJ of each of the human labor, farmyard manure and chemical fertilizer energies would lead to an increase in yield by 2.05, 1.80 and 1.26 kg, respectively. The results also showed that the MPP values of direct and renewable energy were higher.
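
    The econometric step behind such numbers is typically a Cobb-Douglas fit of log yield on log energy inputs, with MPP_j = beta_j x (mean yield / mean input_j) and returns to scale given by the sum of the elasticities. A sketch on synthetic stand-in data, assuming statsmodels is available:

```python
# Cobb-Douglas elasticities by OLS, then marginal physical products.
# The 110 "orchard" observations are synthetic stand-ins.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 110
labor = rng.lognormal(3.0, 0.3, n)        # energy inputs (MJ/ha), illustrative
manure = rng.lognormal(4.0, 0.3, n)
fertilizer = rng.lognormal(3.5, 0.3, n)
yield_ = np.exp(1.0 + 0.37 * np.log(labor) + 0.2 * np.log(manure)
                + 0.1 * np.log(fertilizer) + rng.normal(0, 0.05, n))

X = sm.add_constant(np.log(np.column_stack([labor, manure, fertilizer])))
fit = sm.OLS(np.log(yield_), X).fit()
betas = fit.params[1:]
means = np.array([labor.mean(), manure.mean(), fertilizer.mean()])
mpp = betas * yield_.mean() / means       # yield gain per extra MJ of input
print("elasticities:", np.round(betas, 3), " MPP:", np.round(mpp, 3))
print("returns to scale:", round(float(betas.sum()), 3))
```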

  16. Accounting for model error due to unresolved scales within ensemble Kalman filtering

    CERN Document Server

    Mitchell, Lewis

    2014-01-01

    We propose a method to account for model error due to unresolved scales in the context of the ensemble transform Kalman filter (ETKF). The approach extends to this class of algorithms the deterministic model error formulation recently explored for variational schemes and the extended Kalman filter. The model error statistic required in the analysis update is estimated using historical reanalysis increments and a suitable model error evolution law. Two different versions of the method are described; a time-constant treatment where the same model error statistical description is time-invariant, and a time-varying treatment where the assumed model error statistics are randomly sampled at each analysis step. We compare both methods with the standard method of dealing with model error through inflation and localization, and illustrate our results with numerical simulations on a low-order nonlinear system exhibiting chaotic dynamics. The results show that the filter skill is significantly improved through th...

  17. A predictive model for dimensional errors in fused deposition modeling

    DEFF Research Database (Denmark)

    Stolfi, A.

    2015-01-01

    This work concerns the effect of deposition angle (α) and layer thickness (L) on the dimensional performance of FDM parts using a predictive model based on the geometrical description of the FDM filament profile. An experimental validation over the whole α range from 0° to 177° at 3° steps and two...

  18. Wind Farm Flow Modeling using an Input-Output Reduced-Order Model

    Energy Technology Data Exchange (ETDEWEB)

    Annoni, Jennifer; Gebraad, Pieter; Seiler, Peter

    2016-08-01

    Wind turbines in a wind farm operate individually to maximize their own power regardless of the impact of aerodynamic interactions on neighboring turbines. There is the potential to increase power and reduce overall structural loads by properly coordinating turbines. To perform control design and analysis, a model needs to be of low computational cost, but retains the necessary dynamics seen in high-fidelity models. The objective of this work is to obtain a reduced-order model that represents the full-order flow computed using a high-fidelity model. A variety of methods, including proper orthogonal decomposition and dynamic mode decomposition, can be used to extract the dominant flow structures and obtain a reduced-order model. In this paper, we combine proper orthogonal decomposition with a system identification technique to produce an input-output reduced-order model. This technique is used to construct a reduced-order model of the flow within a two-turbine array computed using a large-eddy simulation.
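
    The decomposition half of this approach can be sketched with a plain SVD of mean-subtracted snapshots; the subsequent system identification on the modal coefficients is omitted, and the snapshot matrix here is random stand-in data rather than large-eddy simulation output:

```python
# Proper orthogonal decomposition via SVD: extract dominant modes and
# project the snapshots onto them (inputs to a later system-ID step).
import numpy as np

rng = np.random.default_rng(8)
n_grid, n_snap = 2000, 120
snapshots = rng.normal(size=(n_grid, n_snap))       # columns = flow fields

mean_flow = snapshots.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(snapshots - mean_flow, full_matrices=False)

r = 10                                              # retained POD modes
energy = (s[:r] ** 2).sum() / (s ** 2).sum()
modal_coeffs = U[:, :r].T @ (snapshots - mean_flow)  # reduced coordinates
print(f"{r} modes capture {energy:.1%} of fluctuation energy;",
      "coefficient matrix shape:", modal_coeffs.shape)
```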

  19. Multi input single output model predictive control of non-linear bio-polymerization process

    Energy Technology Data Exchange (ETDEWEB)

    Arumugasamy, Senthil Kumar; Ahmad, Z. [School of Chemical Engineering, Univerisiti Sains Malaysia, Engineering Campus, Seri Ampangan,14300 Nibong Tebal, Seberang Perai Selatan, Pulau Pinang (Malaysia)

    2015-05-15

    This paper focuses on Multi Input Single Output (MISO) Model Predictive Control of a bio-polymerization process, in which a mechanistic model is developed and linked with a feedforward neural network model to obtain a hybrid model (Mechanistic-FANN) of lipase-catalyzed ring-opening polymerization of ε-caprolactone (ε-CL) for Poly (ε-caprolactone) production. In this research, a state space model was used, in which the inputs to the model were the reactor temperatures and reactor impeller speeds and the outputs were the molecular weight of the polymer (Mn) and the polymer polydispersity index. The MISO state space model was created using the System Identification Toolbox of Matlab™ and is used in the MISO MPC. Model predictive control (MPC) has been applied to predict and consequently control the molecular weight of the biopolymer. The results show that the MPC is able to track the reference trajectory and gives optimum movement of the manipulated variables.

  20. Input-to-output transformation in a model of the rat hippocampal CA1 network.

    Science.gov (United States)

    Olypher, Andrey V; Lytton, William W; Prinz, Astrid A

    2012-01-01

    Here we use computational modeling to gain new insights into the transformation of inputs in hippocampal field CA1. We considered input-output transformation in CA1 principal cells of the rat hippocampus, with activity synchronized by population gamma oscillations. Prior experiments have shown that such synchronization is especially strong for cells within one millimeter of each other. We therefore simulated a one-millimeter patch of CA1 with 23,500 principal cells. We used morphologically and biophysically detailed neuronal models, each with more than 1000 compartments and thousands of synaptic inputs. Inputs came from binary patterns of spiking neurons from field CA3 and entorhinal cortex (EC). On average, each presynaptic pattern initiated action potentials in the same number of CA1 principal cells in the patch. We considered pairs of similar and pairs of distinct patterns. In all the cases CA1 strongly separated input patterns. However, CA1 cells were considerably more sensitive to small alterations in EC patterns compared to CA3 patterns. Our results can be used for comparison of input-to-output transformations in normal and pathological hippocampal networks.

  1. Modeling the short-run effect of fiscal stimuli on GDP : A new semi-closed input-output model

    NARCIS (Netherlands)

    Chen, Quanrun; Dietzenbacher, Erik; Los, Bart; Yang, Cuihong

    2016-01-01

    In this study, we propose a new semi-closed input-output model, which reconciles input-output analysis with modern consumption theories. It can simulate changes in household consumption behavior when exogenous stimulus policies lead to higher disposable income levels. It is useful for quantifying

  2. Evaluation of statistical models for forecast errors from the HBV model

    Science.gov (United States)

    Engeland, Kolbjørn; Renard, Benjamin; Steinsland, Ingelin; Kolberg, Sjur

    2010-04-01

    Three statistical models for the forecast errors for inflow into the Langvatn reservoir in Northern Norway have been constructed and tested according to the agreement between (i) the forecast distribution and the observations and (ii) median values of the forecast distribution and the observations. For the first model, observed and forecasted inflows were transformed by the Box-Cox transformation before a first order auto-regressive model was constructed for the forecast errors. The parameters were conditioned on weather classes. In the second model the Normal Quantile Transformation (NQT) was applied on observed and forecasted inflows before a similar first order auto-regressive model was constructed for the forecast errors. For the third model, positive and negative errors were modeled separately. The errors were first NQT-transformed before conditioning the mean error values on climate, forecasted inflow and yesterday's error. To test the three models we applied three criteria: we wanted (a) the forecast distribution to be reliable; (b) the forecast intervals to be narrow; (c) the median values of the forecast distribution to be close to the observed values. Models 1 and 2 gave almost identical results. The median values improved the forecast, with the Nash-Sutcliffe Reff increasing from 0.77 for the original forecast to 0.87 for the corrected forecasts. Models 1 and 2 over-estimated the forecast intervals but gave the narrowest intervals. Their main drawback was that the distributions are less reliable than Model 3. For Model 3 the median values did not fit well since the auto-correlation was not accounted for. Since Model 3 did not benefit from the potential variance reduction that lies in bias estimation and removal, it gave on average wider forecast intervals than the two other models. At the same time Model 3 on average slightly under-estimated the forecast intervals, probably explained by the use of average measures to evaluate the fit.

  3. Impact of transport model errors on the global and regional methane emissions estimated by inverse modelling

    Directory of Open Access Journals (Sweden)

    R. Locatelli

    2013-04-01

    Full Text Available A modelling experiment has been conceived to assess the impact of transport model errors on the methane emissions estimated by an atmospheric inversion system. Synthetic methane observations, given by 10 different model outputs from the international TransCom-CH4 model exercise, are combined with a prior scenario of methane emissions and sinks, and integrated into the PYVAR-LMDZ-SACS inverse system to produce 10 different methane emission estimates at the global scale for the year 2005. The same set-up has been used to produce the synthetic observations and to compute flux estimates by inverse modelling, which means that only differences in the modelling of atmospheric transport may cause differences in the estimated fluxes. In our framework, we show that transport model errors lead to a discrepancy of 27 Tg CH4 per year at the global scale, representing 5% of the total methane emissions. At continental and yearly scales, transport model errors have bigger impacts depending on the region, ranging from 36 Tg CH4 in North America to 7 Tg CH4 in Boreal Eurasia (from 23% to 48%). At the model gridbox scale, the spread of inverse estimates can even reach 150% of the prior flux. Thus, transport model errors contribute to significant uncertainties in the methane estimates by inverse modelling, especially when small spatial scales are invoked. Sensitivity tests have been carried out to estimate the impact of the measurement network and the advantage of higher resolution models. The analysis of methane estimated fluxes in these different configurations questions the consistency of transport model errors in current inverse systems. For future methane inversions, an improvement in the modelling of the atmospheric transport would make the estimations more accurate. Likewise, errors of the observation covariance matrix should be more consistently prescribed in future inversions in order to limit the impact of transport model errors on estimated methane

  4. A Model for Gathering Stakeholder Input for Setting Research Priorities at the Land-Grant University.

    Science.gov (United States)

    Kelsey, Kathleen Dodge; Pense, Seburn L.

    2001-01-01

    A model for collecting and using stakeholder input on research priorities is a modification of Guba and Lincoln's model, involving preevaluation preparation, stakeholder identification, information gathering and analysis, interpretive filtering, and negotiation and consensus. A case study at Oklahoma State University illustrates its applicability…

  5. Good Modeling Practice for PAT Applications: Propagation of Input Uncertainty and Sensitivity Analysis

    DEFF Research Database (Denmark)

    Sin, Gürkan; Gernaey, Krist; Eliasson Lantz, Anna

    2009-01-01

    Uncertainty and sensitivity analyses are evaluated for their usefulness as part of the model-building within Process Analytical Technology applications. A mechanistic model describing a batch cultivation of Streptomyces coelicolor for antibiotic production was used as a case study. The input

  6. Sources of Error in Synthetic Remote Sensing Data and Potential Impacts on Ecohydrological Models in Semiarid Rangelands

    Science.gov (United States)

    Olsoy, P.; Flores, A. N.; Glenn, N. F.

    2014-12-01

    Semiarid rangelands have a high level of both spatial and temporal vegetation heterogeneity due to slow net primary production rates and highly variable rainfall. Ecohydrological modeling in these ecosystems requires high resolution inputs of vegetation structure and function. We used the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM) to create eight synthetic Landsat TM images across a growing season (April - September). STARFM fuses the high spatial resolution of Landsat TM with the high temporal resolution of Terra MODIS. Previous attempts to assess the accuracy and quantify model errors of STARFM have used pixel-based regression and difference image analysis, as well as examining the distribution of those errors across land cover types. However, those model errors have not previously been compared to a null model (i.e., using the nearest available Landsat scene). If there is very little change occurring, then you would expect the model to have artificially high correlation coefficients and low error estimates. Additionally, we examined several other potential sources of error: i) time of year or season, ii) vegetation height class from airborne LiDAR, iii) solar radiation (i.e., aspect), and iv) snow. We found that STARFM added new information when compared to the null model, yet the null model was highly accurate during large parts of the growing season (June through September, r2 = 0.95 - 0.97) suggesting that simply reporting r2 values from pixel-based regression is insufficient to assess model accuracy. We found that areas with snow in the preceding model input imagery (NDSI > 0.4) increased errors threefold (RMSE(snow) = 0.3223, RMSE(not-snow) = 0.1017). We also found that pixels with shrub or tree vegetation (height > 0.3 m) tended to have higher errors when compared to ground or grass pixels. Finally, our results indicate that during the middle of the growing season, there are patterns in the error that relate to solar radiation with the

  7. Development of an Input Model to MELCOR 1.8.5 for the Oskarshamn 3 BWR

    Energy Technology Data Exchange (ETDEWEB)

    Nilsson, Lars [Lentek, Nykoeping (Sweden)

    2006-05-15

    An input model has been prepared for the code MELCOR 1.8.5 for the Swedish Oskarshamn 3 Boiling Water Reactor (O3). This report describes the modelling work and the various files which comprise the input deck. Input data are mainly based on original drawings and system descriptions made available by courtesy of OKG AB. Comparison and check of some primary system data were made against an O3 input file to the SCDAP/RELAP5 code that was used in the SARA project. Useful information was also obtained from the FSAR (Final Safety Analysis Report) for O3 and the SKI report '2003 Stoerningshandboken BWR'. The input models the O3 reactor at its current state with the operating power of 3300 MWth. One aim with this work is that the MELCOR input could also be used for power upgrading studies. All fuel assemblies are thus assumed to consist of the new Westinghouse-Atom's SVEA-96 Optima2 fuel. MELCOR is a severe accident code developed by Sandia National Laboratory under contract from the U.S. Nuclear Regulatory Commission (NRC). MELCOR is a successor to STCP (Source Term Code Package) and has thus a long evolutionary history. The input described here is adapted to the latest version 1.8.5, available when the work began. It was released in the year 2000, but a new version, 1.8.6, was distributed recently. Conversion to the new version is recommended. (During the writing of this report still another code version, MELCOR 2.0, has been announced to be released shortly.) In version 1.8.5 there is an option to describe the accident progression in the lower plenum and the melt-through of the reactor vessel bottom in more detail by use of the Bottom Head (BH) package developed by Oak Ridge National Laboratory especially for BWRs. This is in addition to the ordinary MELCOR COR package. Since problems arose running with the BH input, two versions of the O3 input deck were produced, a NONBH and a BH deck. The BH package is no longer a separate package in the new 1

  8. GEN-IV BENCHMARKING OF TRISO FUEL PERFORMANCE MODELS UNDER ACCIDENT CONDITIONS MODELING INPUT DATA

    Energy Technology Data Exchange (ETDEWEB)

    Collin, Blaise Paul [Idaho National Laboratory

    2016-09-01

    This document presents the benchmark plan for the calculation of particle fuel performance on safety testing experiments that are representative of operational accidental transients. The benchmark is dedicated to the modeling of fission product release under accident conditions by fuel performance codes from around the world, and the subsequent comparison to post-irradiation experiment (PIE) data from the modeled heating tests. The accident condition benchmark is divided into three parts: • The modeling of a simplified benchmark problem to assess potential numerical calculation issues at low fission product release. • The modeling of the AGR-1 and HFR-EU1bis safety testing experiments. • The comparison of the AGR-1 and HFR-EU1bis modeling results with PIE data. The simplified benchmark case, hereafter named NCC (Numerical Calculation Case), is derived from “Case 5” of the International Atomic Energy Agency (IAEA) Coordinated Research Program (CRP) on coated particle fuel technology [IAEA 2012]. It is included so participants can evaluate their codes at low fission product release. “Case 5” of the IAEA CRP-6 showed large code-to-code discrepancies in the release of fission products, which were attributed to “effects of the numerical calculation method rather than the physical model” [IAEA 2012]. The NCC is therefore intended to check if these numerical effects subsist. The first two steps imply the involvement of the benchmark participants with a modeling effort following the guidelines and recommendations provided by this document. The third step involves the collection of the modeling results by Idaho National Laboratory (INL) and the comparison of these results with the available PIE data. The objective of this document is to provide all necessary input data to model the benchmark cases, and to give some methodology guidelines and recommendations in order to make all results suitable for comparison with each other. The participants should read

  9. The application of Global Sensitivity Analysis to quantify the dominant input factors for hydraulic model simulations

    Science.gov (United States)

    Savage, James; Pianosi, Francesca; Bates, Paul; Freer, Jim; Wagener, Thorsten

    2015-04-01

    Predicting flood inundation extents using hydraulic models is subject to a number of critical uncertainties. For a specific event, these uncertainties are known to have a large influence on model outputs and any subsequent analyses made by risk managers. Hydraulic modellers often approach such problems by applying uncertainty analysis techniques such as the Generalised Likelihood Uncertainty Estimation (GLUE) methodology. However, these methods do not allow one to attribute which source of uncertainty has the most influence on the various model outputs that inform flood risk decision making. Another issue facing modellers is the amount of computational resource that is available to spend on modelling flood inundations that are 'fit for purpose' to the modelling objectives. Therefore a balance needs to be struck between computation time, realism and spatial resolution, and effectively characterising the uncertainty spread of predictions (for example from boundary conditions and model parameterisations). However, it is not fully understood how much of an impact each factor has on model performance, for example how much influence changing the spatial resolution of a model has on inundation predictions in comparison to other uncertainties inherent in the modelling process. Furthermore, when resampling fine scale topographic data in the form of a Digital Elevation Model (DEM) to coarser resolutions, there are a number of possible coarser DEMs that can be produced. Deciding which DEM is then chosen to represent the surface elevations in the model could also influence model performance. In this study we model a flood event using the hydraulic model LISFLOOD-FP and apply Sobol' Sensitivity Analysis to estimate which input factor, among the uncertainty in model boundary conditions, uncertain model parameters, the spatial resolution of the DEM and the choice of resampled DEM, has the most influence on a range of model outputs. These outputs include whole domain maximum
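
    A Sobol' analysis of this kind is commonly run with the SALib package (an assumption here, not necessarily the authors' tooling); the expensive hydraulic model is replaced below by a cheap analytic stand-in so the workflow is visible end to end:

```python
# Sobol' sensitivity analysis with SALib on a toy "inundation" function.
# Factor names, bounds, and the model itself are illustrative.
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,
    "names": ["inflow_scale", "roughness", "dem_error"],
    "bounds": [[0.8, 1.2], [0.02, 0.08], [-0.5, 0.5]],
}

X = saltelli.sample(problem, 1024)          # N*(2D+2) parameter sets

def toy_inundation_area(p):
    inflow, n, dz = p
    return 100.0 * inflow / n ** 0.5 + 20.0 * dz   # stand-in model output

Y = np.apply_along_axis(toy_inundation_area, 1, X)
Si = sobol.analyze(problem, Y)
for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
    print(f"{name}: first-order {s1:.2f}, total-order {st:.2f}")
```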

  10. Universal geometric error modeling of the CNC machine tools based on the screw theory

    Science.gov (United States)

    Tian, Wenjie; He, Baiyan; Huang, Tian

    2011-05-01

    The methods to improve the precision of CNC (Computerized Numerical Control) machine tools can be classified into two categories: error prevention and error compensation. Error prevention is to improve the precision via high accuracy in manufacturing and assembly. Error compensation is to analyze the source errors that affect the machining error, to establish the error model and to reach the ideal position and orientation by modifying the trajectory in real time. Error modeling is the key to compensation, so the error modeling method is of great significance. Many researchers have focused on this topic and proposed many methods, but these can hardly describe the 6-dimensional configuration error of the machine tools. In this paper, the universal geometric error model of CNC machine tools is obtained utilizing screw theory. The 6-dimensional error vector is expressed with a twist, and the error vector transforms between different frames with the adjoint transformation matrix. This model can describe the overall position and orientation errors of the tool relative to the workpiece entirely. It provides the mathematical model for compensation, and also provides a guideline for the manufacture, assembly and precision synthesis of the machine tools.
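
    The frame transformation mentioned here, mapping a 6-dimensional twist-type error vector with the adjoint matrix Ad(T) = [[R, 0], [p^ R, R]] for T = (R, p), can be written directly; the values below are illustrative, and the (rotation; translation) ordering is one common convention rather than necessarily the paper's:

```python
# 6x6 adjoint transformation of a twist-type error vector between frames.
import numpy as np

def skew(p):
    return np.array([[0, -p[2], p[1]],
                     [p[2], 0, -p[0]],
                     [-p[1], p[0], 0]])

def adjoint(R, p):
    Ad = np.zeros((6, 6))
    Ad[:3, :3] = R
    Ad[3:, :3] = skew(p) @ R
    Ad[3:, 3:] = R
    return Ad

theta = np.deg2rad(30.0)
R = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta), np.cos(theta), 0],
              [0, 0, 1]])        # rotation between axis frames
p = np.array([0.5, 0.0, 0.2])   # frame origin offset (m)

error_twist = np.array([1e-4, 0, 2e-4, 0, 5e-5, 0])  # (rot; trans) error
print(adjoint(R, p) @ error_twist)   # same error expressed in the new frame
```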

  11. Statistical Inference for Partially Linear Regression Models with Measurement Errors

    Institute of Scientific and Technical Information of China (English)

    Jinhong YOU; Qinfeng XU; Bin ZHOU

    2008-01-01

    In this paper, the authors investigate three aspects of statistical inference for the partially linear regression models where some covariates are measured with errors. Firstly, a bandwidth selection procedure is proposed, which is a combination of the difference-based technique and the GCV method. Secondly, a goodness-of-fit test procedure is proposed, which is an extension of the generalized likelihood technique. Thirdly, a variable selection procedure for the parametric part is provided based on the nonconcave penalization and corrected profile least squares. Same as "Variable selection via nonconcave penalized likelihood and its oracle properties" (J. Amer. Statist. Assoc., 96, 2001, 1348-1360), it is shown that the resulting estimator has an oracle property with a proper choice of regularization parameters and penalty function. Simulation studies are conducted to illustrate the finite sample performances of the proposed procedures.

  12. Regularized multivariate regression models with skew-t error distributions

    KAUST Repository

    Chen, Lianfu

    2014-06-01

    We consider regularization of the parameters in multivariate linear regression models with the errors having a multivariate skew-t distribution. An iterative penalized likelihood procedure is proposed for constructing sparse estimators of both the regression coefficient and inverse scale matrices simultaneously. The sparsity is introduced through penalizing the negative log-likelihood by adding L1-penalties on the entries of the two matrices. Taking advantage of the hierarchical representation of skew-t distributions, and using the expectation conditional maximization (ECM) algorithm, we reduce the problem to penalized normal likelihood and develop a procedure to minimize the ensuing objective function. Using a simulation study the performance of the method is assessed, and the methodology is illustrated using a real data set with a 24-dimensional response vector. © 2014 Elsevier B.V.

  13. Calibration of parallel kinematics machine using generalized distance error model

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

This paper focuses on the accuracy enhancement of parallel kinematics machines through kinematic calibration. In the calibration process, construction of a well-structured identification Jacobian matrix and measurement of the end-effector position and orientation are the two main difficulties. In this paper, the identification Jacobian matrix is constructed easily by numerical calculation utilizing the unit virtual velocity method. A generalized distance error model is presented to avoid measuring the position and orientation directly, which is difficult to do. Finally, a measurement tool is given for acquiring the data points in the calibration process. Experimental studies confirmed the effectiveness of the method. It is also shown in the paper that the proposed approach can be applied to other types of parallel manipulators.

  14. Optimal input shaping for Fisher identifiability of control-oriented lithium-ion battery models

    Science.gov (United States)

    Rothenberger, Michael J.

This dissertation examines the fundamental challenge of optimally shaping input trajectories to maximize parameter identifiability of control-oriented lithium-ion battery models. Identifiability is a property from information theory that determines the solvability of parameter estimation for mathematical models using input-output measurements. This dissertation creates a framework that exploits the Fisher information metric to quantify the level of battery parameter identifiability, optimizes this metric through input shaping, and facilitates faster and more accurate estimation. The popularity of lithium-ion batteries is growing significantly in the energy storage domain, especially for stationary and transportation applications. While these cells have excellent power and energy densities, they are plagued with safety and lifespan concerns. These concerns are often resolved in the industry through conservative current and voltage operating limits, which reduce the overall performance and still lack robustness in detecting catastrophic failure modes. New advances in automotive battery management systems mitigate these challenges through the incorporation of model-based control to increase performance, safety, and lifespan. To achieve these goals, model-based control requires accurate parameterization of the battery model. While many groups in the literature study a variety of methods to perform battery parameter estimation, a fundamental issue of poor parameter identifiability remains apparent for lithium-ion battery models. This fundamental challenge of battery identifiability is studied extensively in the literature, and some groups are even approaching the problem of improving the ability to estimate the model parameters. The first approach is to add additional sensors to the battery to gain more information that is used for estimation. The other main approach is to shape the input trajectories to increase the amount of information that can be gained from input-output measurements.
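
The Fisher-information idea is easy to state concretely. The sketch below assumes Gaussian measurement noise and that output sensitivities dy/dtheta along each candidate input trajectory are already available; the sensitivity matrices here are synthetic stand-ins:

```python
import numpy as np

def fisher_information(sens, sigma=1.0):
    """Fisher information matrix for Gaussian output noise.

    sens: (n_samples, n_params) sensitivity matrix dy/dtheta evaluated
          along one candidate input trajectory.
    """
    return sens.T @ sens / sigma**2

def d_optimality(fim):
    """log-det criterion: larger means better joint identifiability."""
    return np.linalg.slogdet(fim)[1]

# Compare two hypothetical input trajectories by their sensitivities.
rng = np.random.default_rng(0)
sens_a = rng.normal(size=(500, 3))          # well-excited trajectory
sens_b = sens_a @ np.diag([1.0, 0.1, 0.1])  # weakly excited parameters
print(d_optimality(fisher_information(sens_a)),
      d_optimality(fisher_information(sens_b)))
```

Input shaping then amounts to searching over trajectories to maximize such a scalar summary of the Fisher information.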

  15. Estimation of Soil Carbon Input in France: An Inverse Modelling Approach

    Institute of Scientific and Technical Information of China (English)

    J.MEERSMANS; M.P.MARTIN; E.LACARCE; T.G.ORTON; S.DE BAETS; M.GOURRAT; N.P.A.SABY

    2013-01-01

Development of a quantitative understanding of soil organic carbon (SOC) dynamics is vital for management of soil to sequester carbon (C) and maintain fertility, thereby contributing to food security and climate change mitigation. There are well-established process-based models that can be used to simulate SOC stock evolution; however, there are few plant residue C input values, and those that exist represent a limited range of environments. This limitation in a fundamental model component (i.e., C input) constrains the reliability of current SOC stock simulations. This study aimed to estimate crop-specific and environment-specific plant-derived soil C input values for agricultural sites in France based on data from 700 sites selected from a recently established French soil monitoring network (the RMQS database). Measured SOC stock values from this large-scale soil database were used to constrain an inverse RothC modelling approach to derive estimated C input values consistent with the stocks. This approach allowed us to estimate significant crop-specific C input values (P < 0.05) for 14 out of 17 crop types, in the range from 1.84 ± 0.69 t C ha-1 year-1 (silage corn) to 5.15 ± 0.12 t C ha-1 year-1 (grassland/pasture). Furthermore, the incorporation of climate variables improved the predictions: the C input of 4 crop types could be predicted as a function of temperature and that of 8 as a function of precipitation. This study offered an approach to meet the urgent need for crop-specific and environment-specific C input values in order to improve the reliability of SOC stock prediction.
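
A drastically simplified illustration of the inverse idea, using a one-pool soil C model in place of RothC; the rate constant, stocks, and bounds below are illustrative only:

```python
import numpy as np
from scipy.optimize import least_squares

K = 0.03   # assumed first-order decomposition rate (1/yr), illustrative

def soc_trajectory(c_input, c0, years):
    """Single-pool model dC/dt = I - k*C, integrated with yearly steps."""
    c = np.empty(years)
    c[0] = c0
    for t in range(1, years):
        c[t] = c[t-1] + c_input - K * c[t-1]
    return c

def estimate_input(obs_stocks, c0):
    """Find the constant annual C input that best matches observed stocks."""
    res = least_squares(
        lambda i: soc_trajectory(i[0], c0, len(obs_stocks)) - obs_stocks,
        x0=[2.0], bounds=(0.0, 10.0))
    return res.x[0]

obs = np.array([60.0, 59.5, 59.1, 58.8, 58.6])  # t C/ha, synthetic
print(estimate_input(obs, c0=60.0))             # inferred t C/ha/yr
```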

  16. State-shared model for multiple-input multiple-output systems

    Institute of Scientific and Technical Information of China (English)

    Zhenhua TIAN; Karlene A. HOO

    2005-01-01

This work proposes a method to construct a state-shared model for multiple-input multiple-output (MIMO) systems. A state-shared model is defined as a linear time-invariant state-space structure that is driven by measurement signals (the plant outputs and the manipulated variables) but shared by different multiple input/output models. The genesis of the state-shared model is based on a particular reduced non-minimal realization. Any such realization necessarily fulfills the requirement that the output of the state-shared model is an asymptotically correct estimate of the output of the plant, if the process model is selected appropriately. The approach is demonstrated on a nonlinear MIMO system: a physiological model of the calcium fluxes that control muscle contraction and relaxation in human cardiac myocytes.

  17. Bayesian Hierarchical Model Characterization of Model Error in Ocean Data Assimilation and Forecasts

    Science.gov (United States)

    2013-09-30

  18. FUZZY MODEL OPTIMIZATION FOR TIME SERIES DATA USING A TRANSLATION IN THE EXTENT OF MEAN ERROR

    OpenAIRE

    Nurhayadi; ., Subanar; Abdurakhman; Agus Maman Abadi

    2014-01-01

Recently, many researchers have written about forecasting stock prices, electricity load demand and academic enrollment using fuzzy methods. In general, however, the modeling has not considered the position of the model relative to the actual data, which means that the error has not been handled optimally. Error that is not managed well can reduce the accuracy of the forecasting. Therefore, this paper discusses reducing error using a model translation. The error that will be reduced i...

  19. Error Modelling and Experimental Validation of a Planar 3-PPR Parallel Manipulator with Joint Clearances

    OpenAIRE

Wu, Guanglei; Bai, Shaoping; Kepler, Jørgen A.; Caro, Stéphane

    2012-01-01

This paper deals with the error modelling and analysis of a 3-PPR planar parallel manipulator with joint clearances. The kinematics and the Cartesian workspace of the manipulator are analyzed. An error model is established with considerations of both configuration errors and joint clearances. Using this model, the upper bounds and distributions of the pose errors for this manipulator are established. The results are compared with experimental measurements a...

  20. Recurrent network models for perfect temporal integration of fluctuating correlated inputs.

    Directory of Open Access Journals (Sweden)

    Hiroshi Okamoto

    2009-06-01

    Full Text Available Temporal integration of input is essential to the accumulation of information in various cognitive and behavioral processes, and gradually increasing neuronal activity, typically occurring within a range of seconds, is considered to reflect such computation by the brain. Some psychological evidence suggests that temporal integration by the brain is nearly perfect, that is, the integration is non-leaky, and the output of a neural integrator is accurately proportional to the strength of input. Neural mechanisms of perfect temporal integration, however, remain largely unknown. Here, we propose a recurrent network model of cortical neurons that perfectly integrates partially correlated, irregular input spike trains. We demonstrate that the rate of this temporal integration changes proportionately to the probability of spike coincidences in synaptic inputs. We analytically prove that this highly accurate integration of synaptic inputs emerges from integration of the variance of the fluctuating synaptic inputs, when their mean component is kept constant. Highly irregular neuronal firing and spike coincidences are the major features of cortical activity, but they have been separately addressed so far. Our results suggest that the efficient protocol of information integration by cortical networks essentially requires both features and hence is heterotic.

  1. Impact of transport model errors on the global and regional methane emissions estimated by inverse modelling

    NARCIS (Netherlands)

    Locatelli, R.; Bousquet, P.; Chevallier, F.; Fortems-Cheney, A.; Szopa, S.; Saunois, M.; Agusti-Panareda, A.; Bergmann, D.; Bian, H.; Cameron-Smith, P.; Chipperfield, M.P.; Gloor, E.; Houweling, S.; Kawa, S.R.; Krol, M.C.; Patra, P.K.; Prinn, R.G.; Rigby, M.; Saito, R.; Wilson, C.

    2013-01-01

A modelling experiment has been conceived to assess the impact of transport model errors on methane emissions estimated in an atmospheric inversion system. Synthetic methane observations, obtained from 10 different model outputs from the international TransCom-CH4 model inter-comparison exercise, are combined with a prior scenario of methane emissions and sinks, and integrated into an inversion system to produce 10 different methane emission estimates at the global scale for the year 2005.

  2. Modeling the cardiovascular system using a nonlinear additive autoregressive model with exogenous input

    Science.gov (United States)

    Riedl, M.; Suhrbier, A.; Malberg, H.; Penzel, T.; Bretthauer, G.; Kurths, J.; Wessel, N.

    2008-07-01

The parameters of heart rate variability and blood pressure variability have proved to be useful analytical tools in cardiovascular physics and medicine. Model-based analysis of these variabilities additionally leads to new prognostic information about the mechanisms behind regulations in the cardiovascular system. In this paper, we analyze the complex interaction between heart rate, systolic blood pressure, and respiration by nonparametrically fitted nonlinear additive autoregressive models with external inputs. To this end, we consider measurements of healthy persons and patients suffering from obstructive sleep apnea syndrome (OSAS), with and without hypertension. It is shown that the proposed nonlinear models are capable of describing short-term fluctuations in heart rate as well as systolic blood pressure significantly better than similar linear ones, which confirms the assumption of nonlinearly controlled heart rate and blood pressure. Furthermore, the comparison of the nonlinear and linear approaches reveals that the heart rate and blood pressure variability in healthy subjects is caused by a higher level of noise as well as nonlinearity than in patients suffering from OSAS. The residue analysis points at a further source of heart rate and blood pressure variability in healthy subjects, in addition to heart rate, systolic blood pressure, and respiration. Comparison of the nonlinear models within and among the different groups of subjects suggests the ability to discriminate the cohorts, which could lead to a stratification of hypertension risk in OSAS patients.
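
A minimal sketch of an additive autoregressive fit with an exogenous input, using a polynomial basis and ridge regression as a stand-in for the nonparametric estimator of the paper; the data are synthetic:

```python
import numpy as np

def design(y, x, lags=2, degree=3):
    """Additive polynomial basis in lagged y (e.g. heart rate) and
    lagged x (e.g. respiration): y_t = sum_k f_k(y_{t-k}) + g_k(x_{t-k})."""
    rows = []
    for t in range(lags, len(y)):
        feats = []
        for k in range(1, lags + 1):
            feats += [y[t-k]**d for d in range(1, degree+1)]
            feats += [x[t-k]**d for d in range(1, degree+1)]
        rows.append(feats)
    return np.array(rows), y[lags:]

rng = np.random.default_rng(1)
x = rng.normal(size=1000)                       # exogenous input
y = np.zeros(1000)
for t in range(2, 1000):                        # synthetic nonlinear AR data
    y[t] = 0.6*y[t-1] - 0.2*y[t-2]**2 + 0.5*np.tanh(x[t-1]) + 0.1*rng.normal()

X, target = design(y, x)
beta = np.linalg.solve(X.T @ X + 1e-3*np.eye(X.shape[1]), X.T @ target)
print("one-step-ahead RMSE:", np.sqrt(np.mean((X @ beta - target)**2)))
```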

  3. Fourier transform based dynamic error modeling method for ultra-precision machine tool

    Science.gov (United States)

    Chen, Guoda; Liang, Yingchun; Ehmann, Kornel F.; Sun, Yazhou; Bai, Qingshun

    2014-08-01

In some industrial fields, the workpiece surface needs to meet not only the demand for surface roughness but also strict requirements on multi-scale frequency-domain errors. The ultra-precision machine tool is the most important carrier for the ultra-precision machining of parts, and its errors are the key factor influencing the multi-scale frequency-domain errors of the machined surface. Volumetric error modeling is the important bridge linking machine errors to machined-surface errors. However, the error modeling methods available from previous research are hard to use for analyzing the relationship between the dynamic errors of the machine motion components and the multi-scale frequency-domain errors of the machined surface, an analysis which plays an important reference role in the design and accuracy improvement of ultra-precision machine tools. In this paper, a Fourier-transform-based dynamic error modeling method is presented, built on the theoretical basis of rigid-body kinematics and homogeneous transformation matrices. A case study shows that the proposed method can successfully realize an identical and regular numerical description of the machine dynamic errors and the volumetric errors. The proposed method has strong potential for predicting the frequency-domain errors on the machined surface, extracting multi-scale frequency-domain error information, and analyzing the relationship between the machine motion components and the frequency-domain errors of the machined surface.
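
The frequency-domain decomposition at the heart of such a method can be illustrated with a plain FFT of a synthetic error motion; the sampling rate and components are invented for the example:

```python
import numpy as np

# Synthetic straightness-error motion of a slide sampled at 1 kHz:
# a slow 5 Hz waviness plus a 120 Hz vibration component (invented).
fs = 1000.0
t = np.arange(0, 2.0, 1/fs)
err = 0.2*np.sin(2*np.pi*5*t) + 0.05*np.sin(2*np.pi*120*t)

spec = np.fft.rfft(err)
freq = np.fft.rfftfreq(len(err), 1/fs)
amp = 2*np.abs(spec)/len(err)

# Report the dominant frequency-domain error components.
for i in np.argsort(amp)[-2:][::-1]:
    print(f"{freq[i]:6.1f} Hz  amplitude {amp[i]:.3f}")
```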

  4. Queueing model for an ATM multiplexer with unequal input/output link capacities

    Science.gov (United States)

    Long, Y. H.; Ho, T. K.; Rad, A. B.; Lam, S. P. S.

    1998-10-01

We present a queueing model for an ATM multiplexer with unequal input/output link capacities. This model can be used to analyze the buffer behavior of an ATM multiplexer that multiplexes low-speed input links into a high-speed output link. For this queueing model, we assume that the input and output slot times are not equal, which is quite different from most analyses of discrete-time queues for ATM multiplexers/switches. In the queueing analysis, we adopt a correlated arrival process represented by the discrete-time batch Markovian arrival process. The analysis is based upon the M/G/1-type queue technique, which enables easy numerical computation. Queue-length distributions observed at different epochs, and the queue-length distribution seen by an arbitrary arriving cell when it enters the buffer, are given.
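
A Monte Carlo sketch of the unequal-slot-time situation, with Bernoulli arrivals standing in for the correlated batch Markovian arrival process of the paper; all rates are illustrative:

```python
import random

def simulate(p_arrival=0.3, n_inputs=4, speedup=3, slots=200000, seed=7):
    """Discrete-time buffer fed by n_inputs low-speed links; the output
    link serves `speedup` cells per input slot (unequal slot times).
    Bernoulli arrivals stand in for the correlated D-BMAP of the paper."""
    random.seed(seed)
    q = 0
    total = 0
    for _ in range(slots):
        q += sum(random.random() < p_arrival for _ in range(n_inputs))
        q = max(0, q - speedup)          # output works `speedup` times faster
        total += q
    return total / slots

print("mean queue length:", simulate())
```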

  5. Investigations of the sensitivity of a coronal mass ejection model (ENLIL) to solar input parameters

    DEFF Research Database (Denmark)

    Falkenberg, Thea Vilstrup; Vršnak, B.; Taktakishvili, A.;

    2010-01-01

To predict the effects caused by coronal mass ejections (CMEs), we need to be able to model their propagation from their origin in the solar corona to the point of interest, e.g., Earth. Many such models exist, but to understand the models in detail we must understand the primary input parameters. Here we investigate the parameter space of the ENLILv2.5b model using the CME event of 25 July 2004. ENLIL is a time-dependent 3-D MHD model that can simulate the propagation of cone-shaped interplanetary coronal mass ejections (ICMEs) through the solar system. Excepting the cone parameters (radius, position, and initial velocity), all remaining parameters are varied, resulting in more than 20 runs investigated here. The output parameters considered are velocity, density, magnetic field strength, and temperature. We find that the largest effects on the model output are the input parameters of upper limit...

  6. Entropy Error Model of Planar Geometry Features in GIS

    Institute of Scientific and Technical Information of China (English)

    LI Dajun; GUAN Yunlan; GONG Jianya; DU Daosheng

    2003-01-01

The positional error of line segments is usually described using the "g-band"; however, its band width depends on the choice of confidence level. In fact, given different confidence levels, a series of concentric bands can be obtained. To overcome the effect of the confidence level on the error indicator, we introduce union entropy theory and propose an entropy error ellipse index for a point, then extend it to line segments and polygons, establishing an entropy error band for line segments and an entropy error donut for polygons. The research shows that the entropy error index can be determined uniquely, is not influenced by the confidence level, and is suitable for describing the positional uncertainty of planar geometry features.

  7. An Activation-Based Model of Routine Sequence Errors

    Science.gov (United States)

    2015-04-01

Occasionally, after completing a step, the screen cleared and the participants were interrupted to perform a simple arithmetic task. ... In accordance with the columnar data, the distribution of errors clusters around the +/-1 errors, and falls away in both directions as the error type gets... ... has been accessed in working memory, slowly decaying as time passes. Activation strengthening is calculated according to $A_s = \ln\left(\sum_{j=1}^{n} t_j^{-d}\right)$.
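
The strengthening rule quoted above is the standard ACT-R base-level learning equation; a minimal sketch, where the access times and decay value are illustrative:

```python
import math

def base_level_activation(access_times, now, decay=0.5):
    """ACT-R base-level learning: A = ln(sum_j (now - t_j)^(-d)).
    Each past access contributes more the more recent it is."""
    return math.log(sum((now - t) ** (-decay) for t in access_times))

# An item accessed at t = 1, 5 and 9 s, evaluated at t = 10 s.
print(base_level_activation([1.0, 5.0, 9.0], now=10.0))
```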

  8. Generalized multiplicative error models: Asymptotic inference and empirical analysis

    Science.gov (United States)

    Li, Qian

This dissertation consists of two parts. The first part focuses on extended Multiplicative Error Models (MEM) that include two extreme cases for nonnegative series. These extreme cases are common phenomena in high-frequency financial time series. The Location MEM(p,q) model incorporates a location parameter so that the series are required to have positive lower bounds. The estimator for the location parameter turns out to be the minimum of all the observations and is shown to be consistent. The second case captures the feature of a nontrivial fraction of zero outcomes in a series and combines a so-called Zero-Augmented general F distribution with the linear MEM(p,q). Under certain strict stationarity and moment conditions, we establish consistency and asymptotic normality of the semiparametric estimation for these two new models. The second part of this dissertation examines the differences and similarities between trades in the home market and trades in the foreign market of cross-listed stocks. We exploit the multiplicative framework to model trading duration, volume per trade and price volatility for Canadian shares that are cross-listed on the New York Stock Exchange (NYSE) and the Toronto Stock Exchange (TSX). We explore the clustering effect, the interaction between trading variables, and the time needed for price equilibrium after a perturbation in each market. The clustering effect is studied through the use of a univariate MEM(1,1) on each variable, while the interactions among duration, volume and price volatility are captured by a multivariate system of MEM(p,q). After estimating these models by a standard QMLE procedure, we exploit the impulse response function to compute the calendar time for a perturbation in these variables to be absorbed into price variance, and use common statistical tests to identify the differences between the two markets in each aspect. These differences are of considerable interest to traders, stock exchanges and policy makers.
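
A minimal simulation of the plain MEM(1,1) recursion underlying these extensions, with unit-mean exponential innovations; the parameter values are illustrative:

```python
import numpy as np

def simulate_mem(omega=0.1, alpha=0.2, beta=0.7, n=1000, seed=3):
    """MEM(1,1): x_t = mu_t * eps_t with eps_t >= 0, E[eps_t] = 1,
    and conditional mean mu_t = omega + alpha*x_{t-1} + beta*mu_{t-1}."""
    rng = np.random.default_rng(seed)
    x = np.empty(n)
    mu = omega / (1 - alpha - beta)       # start at the unconditional mean
    for t in range(n):
        eps = rng.exponential(1.0)        # unit-mean nonnegative innovation
        x[t] = mu * eps
        mu = omega + alpha * x[t] + beta * mu
    return x

x = simulate_mem()
print("sample mean:", x.mean(), "theoretical mean:", 0.1/(1-0.2-0.7))
```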

  9. A Long-Term Memory Competitive Process Model of a Common Procedural Error

    Science.gov (United States)

    2013-08-01

A novel computational cognitive model explains human procedural error in terms of declarative memory processes. This is an early version of a process model intended to predict and explain multiple classes of procedural error a priori. We begin with postcompletion error (PCE), a type of systematic...

  10. Bayesian hierarchical error model for analysis of gene expression data

    National Research Council Canada - National Science Library

    Cho, HyungJun; Lee, Jae K

    2004-01-01

    .... Moreover, the same gene often shows quite heterogeneous error variability under different biological and experimental conditions, which must be estimated separately for evaluating the statistical...

  11. New mathematical model for error reduction of stressed lap

    Science.gov (United States)

    Zhao, Pu; Yang, Shuming; Sun, Lin; Shi, Xinyu; Liu, Tao; Jiang, Zhuangde

    2016-05-01

Compared with traditional polishing methods, the stressed lap has high processing efficiency; however, it has disadvantages in processing nonsymmetric surface errors. A basis-function method is proposed to calculate parameters for a stressed-lap polishing system; it aims to minimize residual errors and is based on a matrix formulation and a nonlinear optimization algorithm. The results show that the residual root-mean-square could be >15% after one process for a classical trefoil error. The surface period errors close to the lap diameter were removed efficiently, with up to 50% material removal.

  12. Toward high-resolution flash flood prediction in large urban areas - Analysis of sensitivity to spatiotemporal resolution of rainfall input and hydrologic modeling

    Science.gov (United States)

    Rafieeinasab, Arezoo; Norouzi, Amir; Kim, Sunghee; Habibi, Hamideh; Nazari, Behzad; Seo, Dong-Jun; Lee, Haksu; Cosgrove, Brian; Cui, Zhengtao

    2015-12-01

    Urban flash flooding is a serious problem in large, highly populated areas such as the Dallas-Fort Worth Metroplex (DFW). Being able to monitor and predict flash flooding at a high spatiotemporal resolution is critical to providing location-specific early warnings and cost-effective emergency management in such areas. Under the idealized conditions of perfect models and precipitation input, one may expect that spatiotemporal specificity and accuracy of the model output improve as the resolution of the models and precipitation input increases. In reality, however, due to the errors in the precipitation input, and in the structures, parameters and states of the models, there are practical limits to the model resolution. In this work, we assess the sensitivity of streamflow simulation in urban catchments to the spatiotemporal resolution of precipitation input and hydrologic modeling to identify the resolution at which the simulation errors may be at minimum given the quality of the precipitation input and hydrologic models used, and the response time of the catchment. The hydrologic modeling system used in this work is the National Weather Service (NWS) Hydrology Laboratory's Research Distributed Hydrologic Model (HLRDHM) applied at spatiotemporal resolutions ranging from 250 m to 2 km and from 1 min to 1 h applied over the Cities of Fort Worth, Arlington and Grand Prairie in DFW. The high-resolution precipitation input is from the DFW Demonstration Network of the Collaborative Adaptive Sensing of the Atmosphere (CASA) radars. For comparison, the NWS Multisensor Precipitation Estimator (MPE) product, which is available at a 4-km 1-h resolution, was also used. The streamflow simulation results are evaluated for 5 urban catchments ranging in size from 3.4 to 54.6 km2 and from about 45 min to 3 h in time-to-peak in the Cities of Fort Worth, Arlington and Grand Prairie. The streamflow observations used in evaluation were obtained from water level measurements via rating

  13. Stochastic model error in the LANS-alpha and NS-alpha deconvolution models of turbulence

    CERN Document Server

    Olson, Eric

    2015-01-01

This paper reports on a computational study of the model error in the LANS-alpha and NS-alpha deconvolution models of homogeneous isotropic turbulence. The focus is on how well the model error may be characterized by a stochastic force. Computations are also performed for a new turbulence model obtained as a rescaled limit of the deconvolution model. The technique used is to plug a solution obtained from direct numerical simulation of the incompressible Navier--Stokes equations into the competing turbulence models and then to compute the time evolution of the resulting residual. All computations have been done in two dimensions rather than three for convenience and efficiency. When the effective averaging length scale in any of the models is $\alpha_0=0.01$, the time evolution of the root-mean-squared residual error grows as $\sqrt t$. This growth rate is consistent with the hypothesis that the model error may be characterized by a stochastic force. When $\alpha_0=0.20$ the residual error grows linearly. Linea...

  14. Green Input-Output Model for Power Company Theoretical & Application Analysis

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

Based on the theory of marginal opportunity cost, a green input-output table and models for a power company are put forward in this paper. For practical application, analyses of integrated planning, cost analysis, and pricing for the power company are also given.

  15. The economic impact of multifunctional agriculture in Dutch regions: An input-output model

    NARCIS (Netherlands)

    Heringa, P.W.; Heide, van der C.M.; Heijman, W.J.M.

    2013-01-01

    Multifunctional agriculture is a broad concept lacking a precise definition. Moreover, little is known about the societal importance of multifunctional agriculture. This paper is an empirical attempt to fill this gap. To this end, an input-output model was constructed for multifunctional agriculture

  16. The economic impact of multifunctional agriculture in The Netherlands: A regional input-output model

    NARCIS (Netherlands)

    Heringa, P.W.; Heide, van der C.M.; Heijman, W.J.M.

    2012-01-01

    Multifunctional agriculture is a broad concept lacking a precise and uniform definition. Moreover, little is known about the societal importance of multifunctional agriculture. This paper is an empirical attempt to fill this gap. To this end, an input-output model is constructed for multifunctional

  17. Characteristic operator functions for quantum input-plant-output models and coherent control

    Science.gov (United States)

    Gough, John E.

    2015-01-01

We introduce the characteristic operator as the generalization of the usual concept of a transfer function of linear input-plant-output systems to arbitrary quantum nonlinear Markovian input-output models. This is intended as a tool in the characterization of quantum feedback control systems that fits in with the general theory of networks. The definition exploits the linearity of noise differentials in both the plant Heisenberg equations of motion and the differential form of the input-output relations. Mathematically, the characteristic operator is a matrix of dimension equal to the number of outputs times the number of inputs (which must coincide), but with entries that are operators of the plant system. In this sense, the characteristic operator retains details of the effective plant dynamical structure and is an essentially quantum object. We illustrate the relevance of the definition to model reduction and simplification by showing that the convergence of the characteristic operator in adiabatic elimination limit models requires the same conditions and assumptions appearing in the work on limit quantum stochastic differential theorems of Bouten and Silberfarb [Commun. Math. Phys. 283, 491-505 (2008)]. This approach also shows in a natural way that the limit coefficients of the quantum stochastic differential equations in adiabatic elimination problems arise algebraically as Schur complements, and amounts to a model reduction where the fast degrees of freedom are decoupled from the slow ones and eliminated.

  18. Using a Joint-Input, Multi-Product Formulation to Improve Spatial Price Equilibrium Models

    OpenAIRE

    Bishop, Phillip M.; Pratt, James E.; Novakovic, Andrew M.

    1994-01-01

Mathematical programming models, as typically formulated for international trade applications, may contain certain implied restrictions that lead to solutions that can be shown to be technically infeasible or, if feasible, not actually an equilibrium. An alternative formulation is presented that allows joint inputs and multiple products, with pure transshipment and product-substitution forms of arbitrage.

  19. Allowing for model error in strong constraint 4D-Var

    Science.gov (United States)

    Howes, Katherine; Lawless, Amos; Fowler, Alison

    2016-04-01

Four-dimensional variational data assimilation (4D-Var) can be used to obtain the best estimate of the initial conditions of an environmental forecasting model, namely the analysis. In practice, when the forecasting model contains errors, the analysis from the 4D-Var algorithm will be degraded to allow for errors later in the forecast window. This work focuses on improving the analysis at the initial time by allowing for the fact that the model contains error, within the context of strong-constraint 4D-Var. The 4D-Var method developed acknowledges the presence of random error in the model at each time step by replacing the observation error covariance matrix with an error covariance matrix that includes both observation error and model error statistics. It is shown that this new matrix represents the correct error statistics of the innovations in the presence of model error. A method for estimating this matrix using innovation statistics, without requiring prior knowledge of the model error statistics, is presented. The method is demonstrated numerically using a nonlinear chaotic system with erroneous parameter values. We show that the new method reduces the analysis error covariance when compared with a standard strong-constraint 4D-Var scheme. We discuss the fact that an improved analysis will not necessarily provide a better forecast.
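
A toy sketch of the central idea: weight the innovations with R + Q instead of R in the strong-constraint cost. All matrices are invented, and the innovations are passed in precomputed rather than recomputed from x0 as a real 4D-Var would:

```python
import numpy as np

# Toy strong-constraint 4D-Var cost with model error folded into the
# innovation covariance: J(x0) uses S = R + Q instead of R alone.
R = np.diag([0.5, 0.5])        # observation-error covariance (assumed)
Q = np.diag([0.2, 0.3])        # model-error contribution (assumed)
S = R + Q

def cost(x0, background, B, innovations):
    """innovations: list of (y - H(M(x0))) vectors along the window."""
    db = x0 - background
    j = 0.5 * db @ np.linalg.solve(B, db)
    for d in innovations:
        j += 0.5 * d @ np.linalg.solve(S, d)
    return j

B = np.eye(2)
print(cost(np.array([0.1, -0.2]), np.zeros(2), B,
           [np.array([0.3, 0.1]), np.array([-0.2, 0.4])]))
```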

  20. A neuromorphic model of motor overflow in focal hand dystonia due to correlated sensory input

    Science.gov (United States)

    Sohn, Won Joon; Niu, Chuanxin M.; Sanger, Terence D.

    2016-10-01

Objective. Motor overflow is a common and frustrating symptom of dystonia, manifested as unintentional muscle contraction that occurs during an intended voluntary movement. Although it is suspected that motor overflow is due to cortical disorganization in some types of dystonia (e.g. focal hand dystonia), it remains elusive which mechanisms could initiate and, more importantly, perpetuate motor overflow. We hypothesize that distinct motor elements have a low risk of motor overflow if their sensory inputs remain statistically independent, but that when provided with correlated sensory inputs, pre-existing crosstalk among sensory projections will grow under spike-timing-dependent plasticity (STDP) and eventually produce irreversible motor overflow. Approach. We emulated a simplified neuromuscular system comprising two anatomically distinct digital muscles innervated by two layers of spiking neurons with STDP. The synaptic connections between layers included crosstalk connections. The input neurons received either independent or correlated sensory drive during 4 days of continuous excitation. The emulation is critically enabled and accelerated by our neuromorphic hardware created in previous work. Main results. When driven by correlated sensory inputs, the crosstalk synapses gained weight and produced prominent motor overflow; the growth of crosstalk synapses resulted in an enlarged sensory representation reflecting cortical reorganization. The overflow failed to recede when the inputs resumed their original uncorrelated statistics. In the control group, no motor overflow was observed. Significance. Although our model is a highly simplified and limited representation of the human sensorimotor system, it allows us to explain how correlated sensory input to anatomically distinct muscles is by itself sufficient to cause persistent and irreversible motor overflow. Further studies are needed to locate the source of correlation in sensory input.
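
A sketch of the pair-based STDP rule that drives crosstalk growth in such models; the time constants, learning rates, and pairing statistics below are illustrative, not those of the paper's neuromorphic hardware:

```python
import numpy as np

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP: dt = t_post - t_pre (ms). Pre-before-post
    potentiates; post-before-pre depresses."""
    return a_plus*np.exp(-dt/tau) if dt >= 0 else -a_minus*np.exp(dt/tau)

# Correlated inputs make near-coincident pre/post pairs more frequent,
# so a crosstalk synapse sees mostly small |dt| and drifts upward.
rng = np.random.default_rng(5)
w = 0.1
for _ in range(10000):
    dt = rng.normal(2.0, 10.0)        # mostly pre-before-post pairings
    w = min(1.0, max(0.0, w + stdp_dw(dt)))
print("crosstalk weight after correlated drive:", w)
```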

  1. Linear and quadratic models of point process systems: contributions of patterned input to output.

    Science.gov (United States)

    Lindsay, K A; Rosenberg, J R

    2012-08-01

In the 1880s Volterra characterised a nonlinear system using a functional series connecting continuous input and continuous output. In the 1940s, Norbert Wiener circumvented problems associated with the application of Volterra series to physical problems by deriving from it a new series of terms that are mutually uncorrelated with respect to Gaussian processes. Subsequently, in the 1970s, Brillinger introduced a point-process analogue of Volterra's series connecting point-process inputs to the instantaneous rate of point-process output. We derive here a new series from this analogue in which its terms are mutually uncorrelated with respect to Poisson processes. This new series expresses how patterned input in a spike train, represented by third-order cross-cumulants, is converted into the instantaneous rate of an output point-process. Given experimental records of suitable duration, the contribution of arbitrary patterned input to an output process can, in principle, be determined. Solutions for linear and quadratic point-process models with one and two inputs and a single output are investigated. Our theoretical results are applied to isolated muscle spindle data in which the spike trains from the primary and secondary endings from the same muscle spindle are recorded in response to stimulation of one and then two static fusimotor axons in the absence and presence of a random length change imposed on the parent muscle. For a fixed mean rate of input spikes, the analysis of the experimental data makes explicit which patterns of two input spikes contribute to an output spike.

  2. Resonance model for non-perturbative inputs to gluon distributions in the hadrons

    CERN Document Server

    Ermolaev, B I; Troyan, S I

    2015-01-01

We construct non-perturbative inputs for the elastic gluon-hadron scattering amplitudes in the forward kinematic region for both polarized and non-polarized hadrons. We use the optical theorem to relate invariant scattering amplitudes to the gluon distributions in the hadrons. By analyzing the structure of the UV and IR divergences, we can determine theoretical conditions on the non-perturbative inputs, and use these to construct the results in a generalized Basic Factorization framework using a simple Resonance Model. These results can then be related to the K_T and Collinear Factorization expressions, and the corresponding constraints can be extracted.

  3. Selecting Human Error Types for Cognitive Modelling and Simulation

    NARCIS (Netherlands)

    Mioch, T.; Osterloh, J.P.; Javaux, D.

    2010-01-01

    This paper presents a method that has enabled us to make a selection of error types and error production mechanisms relevant to the HUMAN European project, and discusses the reasons underlying those choices. We claim that this method has the advantage that it is very exhaustive in determining the re

  5. Assessment of errors and uncertainty patterns in GIA modeling

    DEFF Research Database (Denmark)

    Barletta, Valentina Roberta; Spada, G.

    2012-01-01

    , such as time-evolving shorelines and paleo coastlines. In this study we quantify these uncertainties and their propagation in GIA response using a Monte Carlo approach to obtain spatio-temporal patterns of GIA errors. A direct application is the error estimates in ice mass balance in Antarctica and Greenland...

  6. Generation IV benchmarking of TRISO fuel performance models under accident conditions: Modeling input data

    Energy Technology Data Exchange (ETDEWEB)

Collin, Blaise P. [Idaho National Laboratory (INL), Idaho Falls, ID (United States)]

    2014-09-01

This document presents the benchmark plan for the calculation of particle fuel performance on safety testing experiments that are representative of operational accidental transients. The benchmark is dedicated to the modeling of fission product release under accident conditions by fuel performance codes from around the world, and the subsequent comparison to post-irradiation experiment (PIE) data from the modeled heating tests. The accident condition benchmark is divided into three parts: the modeling of a simplified benchmark problem to assess potential numerical calculation issues at low fission product release; the modeling of the AGR-1 and HFR-EU1bis safety testing experiments; and the comparison of the AGR-1 and HFR-EU1bis modeling results with PIE data. The simplified benchmark case, hereafter named NCC (Numerical Calculation Case), is derived from "Case 5" of the International Atomic Energy Agency (IAEA) Coordinated Research Program (CRP) on coated particle fuel technology [IAEA 2012]. It is included so participants can evaluate their codes at low fission product release. "Case 5" of the IAEA CRP-6 showed large code-to-code discrepancies in the release of fission products, which were attributed to "effects of the numerical calculation method rather than the physical model" [IAEA 2012]. The NCC is therefore intended to check whether these numerical effects subsist. The first two steps imply the involvement of the benchmark participants with a modeling effort following the guidelines and recommendations provided by this document. The third step involves the collection of the modeling results by Idaho National Laboratory (INL) and the comparison of these results with the available PIE data. The objective of this document is to provide all necessary input data to model the benchmark cases, and to give some methodology guidelines and recommendations in order to make all results suitable for comparison.

  8. Input-constrained model predictive control via the alternating direction method of multipliers

    DEFF Research Database (Denmark)

    Sokoler, Leo Emil; Frison, Gianluca; Andersen, Martin S.

    2014-01-01

This paper presents an algorithm, based on the alternating direction method of multipliers, for the convex optimal control problem arising in input-constrained model predictive control. We develop an efficient implementation of the algorithm for the extended linear quadratic control problem (LQCP) with input and input-rate limits. The algorithm alternates between solving an extended LQCP and a highly structured quadratic program. These quadratic programs are solved using a Riccati iteration procedure, and a structure-exploiting interior-point method, respectively. The computational cost per iteration is quadratic in the dimensions of the controlled system, and linear in the length of the prediction horizon. Simulations show that the approach proposed in this paper is more than an order of magnitude faster than several state-of-the-art quadratic programming algorithms, and that the difference in computation...
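
A minimal sketch of ADMM on a box-constrained QP, the core subproblem type in input-constrained MPC; the paper's Riccati-based solver is replaced here by a dense factorization, and the problem data are invented:

```python
import numpy as np

def admm_box_qp(H, g, lo, hi, rho=1.0, iters=200):
    """Minimize 0.5 u'Hu + g'u subject to lo <= u <= hi via ADMM.
    Splitting: u solves an unconstrained QP, z is the clipped copy."""
    n = len(g)
    z = np.zeros(n); lam = np.zeros(n)
    K = H + rho*np.eye(n)            # dense stand-in for the structured
    for _ in range(iters):           # Riccati-type solve used in MPC
        u = np.linalg.solve(K, -g + rho*z - lam)
        z = np.clip(u + lam/rho, lo, hi)
        lam = lam + rho*(u - z)
    return z

H = np.array([[2.0, 0.5], [0.5, 1.0]])
g = np.array([-4.0, 1.0])
print(admm_box_qp(H, g, lo=-1.0, hi=1.0))
```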

  9. Large uncertainty in soil carbon modelling related to carbon input calculation method

    Science.gov (United States)

    Keel, Sonja G.; Leifeld, Jens; Taghizadeh-Toosi, Arezoo; Oleson, Jørgen E.

    2016-04-01

A model-based inventory for carbon (C) sinks and sources in agricultural soils is being established for Switzerland. As part of this project, five frequently used allometric equations that estimate soil C inputs based on measured yields are compared. To evaluate the different methods, we calculate soil C inputs for a long-term field trial in Switzerland. This DOK experiment (bio-Dynamic, bio-Organic, and conventional (German: Konventionell)) compares five different management systems that are applied to identical crop rotations. Average calculated soil C inputs vary widely between allometric equations, ranging from 1.6 t C ha-1 yr-1 to 2.6 t C ha-1 yr-1. Among the most important crops in Switzerland, the uncertainty is largest for barley (difference between highest and lowest estimate: 3.0 t C ha-1 yr-1). For the unfertilized control treatment, the estimated soil C inputs vary less between allometric equations than for the treatment that received mineral fertilizer and farmyard manure. Most likely, this is because the yields are higher in the latter treatment, i.e., the difference between methods might be amplified because yields differ more. To evaluate the influence of these allometric equations on soil C dynamics, we simulate the DOK trial for the years 1977-2004 using the model C-TOOL (Taghizadeh-Toosi et al. 2014) and the five different soil C input calculation methods. Across all treatments, C-TOOL simulates a decrease in soil C in line with the experimental data. This decline, however, varies between allometric equations (-2.4 t C ha-1 to -6.3 t C ha-1 for the years 1977-2004) and has the same order of magnitude as the difference between treatments. In summary, the method used to estimate soil C inputs is identified as a significant source of uncertainty in soil C modelling. Choosing an appropriate allometric equation to derive the input data is thus a critical step when setting up a model-based national soil C inventory. References Taghizadeh-Toosi A et al. (2014) C
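
The allometric-equation step can be made concrete with one generic form; the coefficients below (harvest index, root:shoot ratio, etc.) are illustrative and do not reproduce any of the five published equations compared in the study:

```python
def carbon_input(yield_t_ha, harvest_index=0.45, root_shoot=0.2,
                 extra_root_frac=0.65, carbon_frac=0.45):
    """One generic allometric form (coefficients are illustrative, not
    those of any specific published equation): estimate plant-derived
    soil C input from measured yield via residue and root biomass."""
    shoot = yield_t_ha / harvest_index          # total above-ground biomass
    residues = shoot - yield_t_ha               # straw/stover left on field
    roots = shoot * root_shoot                  # root biomass
    rhizodeposits = roots * extra_root_frac     # exudates etc.
    return carbon_frac * (residues + roots + rhizodeposits)

# Barley at 5 t/ha grain yield:
print(f"{carbon_input(5.0):.2f} t C/ha/yr")
```

Because each published equation makes different assumptions at each of these steps, the resulting C inputs, and hence the simulated SOC trajectories, diverge.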

  10. Modeling the impact of common noise inputs on the network activity of retinal ganglion cells.

    Science.gov (United States)

    Vidne, Michael; Ahmadian, Yashar; Shlens, Jonathon; Pillow, Jonathan W; Kulkarni, Jayant; Litke, Alan M; Chichilnisky, E J; Simoncelli, Eero; Paninski, Liam

    2012-08-01

    Synchronized spontaneous firing among retinal ganglion cells (RGCs), on timescales faster than visual responses, has been reported in many studies. Two candidate mechanisms of synchronized firing include direct coupling and shared noisy inputs. In neighboring parasol cells of primate retina, which exhibit rapid synchronized firing that has been studied extensively, recent experimental work indicates that direct electrical or synaptic coupling is weak, but shared synaptic input in the absence of modulated stimuli is strong. However, previous modeling efforts have not accounted for this aspect of firing in the parasol cell population. Here we develop a new model that incorporates the effects of common noise, and apply it to analyze the light responses and synchronized firing of a large, densely-sampled network of over 250 simultaneously recorded parasol cells. We use a generalized linear model in which the spike rate in each cell is determined by the linear combination of the spatio-temporally filtered visual input, the temporally filtered prior spikes of that cell, and unobserved sources representing common noise. The model accurately captures the statistical structure of the spike trains and the encoding of the visual stimulus, without the direct coupling assumption present in previous modeling work. Finally, we examined the problem of decoding the visual stimulus from the spike train given the estimated parameters. The common-noise model produces Bayesian decoding performance as accurate as that of a model with direct coupling, but with significantly more robustness to spike timing perturbations.

  11. Impact of transport model errors on the global and regional methane emissions estimated by inverse modelling

    Directory of Open Access Journals (Sweden)

    R. Locatelli

    2013-10-01

Full Text Available A modelling experiment has been conceived to assess the impact of transport model errors on methane emissions estimated in an atmospheric inversion system. Synthetic methane observations, obtained from 10 different model outputs from the international TransCom-CH4 model inter-comparison exercise, are combined with a prior scenario of methane emissions and sinks, and integrated into the three-component PYVAR-LMDZ-SACS (PYthon VARiational - Laboratoire de Météorologie Dynamique model with Zooming capability - Simplified Atmospheric Chemistry System) inversion system to produce 10 different methane emission estimates at the global scale for the year 2005. The same methane sinks, emissions and initial conditions have been applied to produce the 10 synthetic observation datasets. The same inversion set-up (statistical errors, prior emissions, inverse procedure) is then applied to derive flux estimates by inverse modelling. Consequently, only differences in the modelling of atmospheric transport may cause differences in the estimated fluxes. In our framework, we show that transport model errors lead to a discrepancy of 27 Tg yr−1 at the global scale, representing 5% of total methane emissions. At continental and annual scales, transport model errors are proportionally larger than at the global scale, with errors ranging from 36 Tg yr−1 in North America to 7 Tg yr−1 in Boreal Eurasia (from 23 to 48%, respectively). At the model grid-scale, the spread of inverse estimates can reach 150% of the prior flux. Therefore, transport model errors contribute significantly to overall uncertainties in emission estimates by inverse modelling, especially when small spatial scales are examined. Sensitivity tests have been carried out to estimate the impact of the measurement network and the advantage of higher horizontal resolution in transport models. The large differences found between methane flux estimates inferred in these different configurations highly...

  12. Simulation model structure numerically robust to changes in magnitude and combination of input and output variables

    DEFF Research Database (Denmark)

    Rasmussen, Bjarne D.; Jakobsen, Arne

    1999-01-01

Mathematical models of refrigeration systems are often based on a coupling of component models forming a "closed loop" type of system model. In these models the coupling structure of the component models represents the actual flow path of refrigerant in the system. Very often numerical instabilities prevent the practical use of such a system model for more than one input/output combination and for other magnitudes of refrigerating capacities. A higher numerical robustness of system models can be achieved by making a model for the refrigeration cycle the core of the system model and by using variables with narrow definition intervals for the exchange of information between the cycle model and the component models. The advantages of the cycle-oriented method are illustrated by an example showing the refrigeration cycle similarities between two very different refrigeration systems.

  13. Treatment of input uncertainty in hydrologic modeling: Doing hydrology backward with Markov chain Monte Carlo simulation

    NARCIS (Netherlands)

    Vrugt, J.A.; Braak, ter C.J.F.; Clark, M.P.; Hyman, J.M.; Robinson, B.A.

    2008-01-01

    There is increasing consensus in the hydrologic literature that an appropriate framework for streamflow forecasting and simulation should include explicit recognition of forcing and parameter and model structural error. This paper presents a novel Markov chain Monte Carlo (MCMC) sampler, entitled

  15. Estimation of sectoral prices in the BNL energy input--output model

    Energy Technology Data Exchange (ETDEWEB)

    Tessmer, R.G. Jr.; Groncki, P.; Boyce, G.W. Jr.

    1977-12-01

    Value-added coefficients have been incorporated into Brookhaven's Energy Input-Output Model so that one can calculate the implicit price at which each sector sells its output to interindustry and final-demand purchasers. Certain adjustments to historical 1967 data are required because of the unique structure of the model. Procedures are also described for projecting energy-sector coefficients in future years that are consistent with exogenously specified energy prices.

  16. Global Behaviors of a Chemostat Model with Delayed Nutrient Recycling and Periodically Pulsed Input

    Directory of Open Access Journals (Sweden)

    Kai Wang

    2010-01-01

Full Text Available The dynamic behaviors of a chemostat model with delayed nutrient recycling and periodically pulsed input are studied. By introducing a new analysis technique, sufficient and necessary conditions for the permanence and extinction of the microorganisms are obtained. Furthermore, by using the Liapunov function method, a sufficient condition for the global attractivity of the model is established. Finally, an example is given to demonstrate the effectiveness of the results in this paper.

  17. Use of Generalised Linear Models to quantify rainfall input uncertainty to hydrological modelling in the Upper Nile

    Science.gov (United States)

    Kigobe, M.; McIntyre, N.; Wheater, H. S.

    2009-04-01

Interest in the application of climate and hydrological models in the Nile basin has risen in the recent past; however, the first drawback for most efforts has been the estimation of historic precipitation patterns. In this study we have applied stochastic models to infill and extend observed data sets to generate inputs for hydrological modelling. Several stochastic climate models within the Generalised Linear Modelling (GLM) framework have been applied to reproduce spatial and temporal patterns of precipitation in the Kyoga basin. A logistic regression model (describing rainfall occurrence) and a gamma distribution (describing rainfall amounts) are used to model rainfall patterns. The parameters of the models are functions of spatial and temporal covariates, and are fitted to the observed rainfall data using log-likelihood methods. Using the fitted model, multi-site rainfall sequences over the Kyoga basin are generated stochastically as a function of the dominant seasonal, climatic and geographic controls. The generated rainfall sequences are then used to drive a semi-distributed hydrological model built with the Soil and Water Assessment Tool (SWAT). The sensitivity of runoff to uncertainty associated with missing precipitation records is thus tested. In an application to the Lake Kyoga catchment, the performance of the hydrological model depends highly on the spatial representation of the input precipitation patterns, the model parameterisation, and the performance of the GLM stochastic models used to generate the input rainfall. The results obtained so far show that stochastic models can be developed for several climatic regions within the Kyoga basin and that, given an identified stochastic rainfall model, input uncertainty due to precipitation can be usefully quantified. The ways forward for rainfall modelling and hydrological simulation in Uganda and the Upper Nile are discussed. Key Words: Precipitation, Generalised Linear Models, Input Uncertainty, Soil Water
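
A sketch of the two-part GLM structure described (logistic occurrence, gamma amounts) using statsmodels; the seasonal covariates and synthetic data are placeholders, not the Kyoga records:

```python
import numpy as np
import statsmodels.api as sm

# Two-part daily rainfall model: logistic regression for occurrence,
# gamma GLM with log link for wet-day amounts. Data are synthetic.
rng = np.random.default_rng(11)
days = np.arange(3650)
X = sm.add_constant(np.column_stack([np.sin(2*np.pi*days/365.25),
                                     np.cos(2*np.pi*days/365.25)]))

wet = rng.random(3650) < 0.35 + 0.2*np.sin(2*np.pi*days/365.25)
amounts = rng.gamma(2.0, 4.0, size=3650)

occ = sm.GLM(wet.astype(float), X, family=sm.families.Binomial()).fit()
amt = sm.GLM(amounts[wet], X[wet],
             family=sm.families.Gamma(link=sm.families.links.Log())).fit()
print(occ.params, amt.params)
```

Simulating from the two fitted models in sequence (occurrence first, then amount) yields the stochastic rainfall sequences used to drive the hydrological model.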

  18. Modelling groundwater discharge areas using only digital elevation models as input data

    Energy Technology Data Exchange (ETDEWEB)

Brydsten, Lars [Umeaa Univ. (Sweden). Dept. of Biology and Environmental Science]

    2006-10-15

Advanced geohydrological models require data on topography, soil distribution in three dimensions, vegetation, land use, and bedrock fracture zones. To model present geohydrological conditions, these factors can be gathered with different techniques. If a future geohydrological condition is modelled in an area with positive shore displacement (say 5,000 or 10,000 years ahead), some of these factors can be difficult to determine; this could include the development of wetlands and the filling of lakes. If the goal of the model is to predict the distribution of groundwater recharge and discharge areas in the landscape, the most important factor is topography. The question is how much topography alone can explain of the distribution of geohydrological objects in the landscape. A simplified description of that distribution is that groundwater recharge occurs at local elevation curvatures and discharge occurs in lakes, brooks, and low-lying slopes; the areas in-between make up discharge areas during wet periods and recharge areas during dry periods. A model that could predict this pattern using only topography data needs to be able to predict high ridges and future lakes and brooks. This study uses GIS software with four different functions using digital elevation models as input data: geomorphometrical parameters to predict landscape ridges, basin fill to predict lakes, flow accumulation to predict future waterways, and a topographical wetness index to divide the in-between areas based on degree of wetness. An area between the village of and Forsmarks' Nuclear Power Plant has been used to calibrate the model. The area is within the SKB 10-metre Elevation Model (DEM) and has a high-resolution orienteering map for wetlands. Wetlands are assumed to be groundwater discharge areas. Five hundred points were randomly distributed across the wetlands; these are potential discharge points. Model parameters were chosen with the
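
Of the four functions, the topographic wetness index is the simplest to show; a sketch assuming flow accumulation and slope grids have already been derived from the DEM:

```python
import numpy as np

def twi(flow_accum_cells, slope_deg, cell_size=10.0):
    """Topographic wetness index TWI = ln(a / tan(beta)), where a is the
    specific catchment area and beta the local slope. High TWI marks
    likely groundwater discharge areas; low TWI marks recharge ridges."""
    a = (flow_accum_cells * cell_size**2) / cell_size   # area per unit width
    tan_b = np.tan(np.radians(np.maximum(slope_deg, 0.1)))  # avoid /0 on flats
    return np.log(a / tan_b)

# A wet valley cell (large upslope area, gentle slope) versus a ridge cell.
print(twi(np.array([5000.0]), np.array([1.0])))   # high TWI -> discharge
print(twi(np.array([2.0]), np.array([8.0])))      # low TWI  -> recharge
```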

  19. Regional disaster impact analysis: comparing input-output and computable general equilibrium models

    Science.gov (United States)

    Koks, Elco E.; Carrera, Lorenzo; Jonkeren, Olaf; Aerts, Jeroen C. J. H.; Husby, Trond G.; Thissen, Mark; Standardi, Gabriele; Mysiak, Jaroslav

    2016-08-01

A variety of models have been applied to assess the economic losses of disasters, of which the most common ones are input-output (IO) and computable general equilibrium (CGE) models. In addition, an increasing number of scholars have developed hybrid approaches that combine either or both of them with noneconomic methods. While both IO and CGE models are widely used, they are mainly compared on theoretical grounds. Few studies have compared disaster impacts of different model types in a systematic way and for the same geographical area, using similar input data. Such a comparison is valuable from both a scientific and a policy perspective, as the magnitude and the spatial distribution of the estimated losses are likely to vary with the chosen modelling approach (IO, CGE, or hybrid). Hence, regional disaster impact loss estimates resulting from a range of models facilitate better decisions and policy making. Therefore, this study analyses the economic consequences for a specific case study, using three regional disaster impact models: two hybrid IO models and a CGE model. The case study concerns two flood scenarios in the Po River basin in Italy. Modelling results indicate that the difference in estimated total (national) economic losses and the regional distribution of those losses may vary by up to a factor of 7 between the three models, depending on the type of recovery path. The total economic impact, comprising all Italian regions, is nonetheless negative in all models.
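
The IO side of the comparison rests on the standard Leontief core, sketched below with an invented two-sector economy and flood-reduced final demand:

```python
import numpy as np

# Demand-driven Leontief model x = (I - A)^(-1) f: the standard IO core
# used in disaster impact analysis. A and f below are illustrative.
A = np.array([[0.2, 0.3],       # technical coefficients: input i per
              [0.1, 0.4]])      # unit of output of sector j
f_base = np.array([100.0, 150.0])     # final demand before the flood
f_post = np.array([ 90.0, 120.0])     # final demand after the flood

L = np.linalg.inv(np.eye(2) - A)      # Leontief inverse
loss = L @ f_base - L @ f_post        # total output loss, direct + indirect
print("output loss by sector:", loss)
```

CGE models, by contrast, let prices and substitution absorb part of the shock, which is one reason the two model families produce such different loss estimates.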

  20. Error Threshold for Spatially Resolved Evolution in the Quasispecies Model

    Energy Technology Data Exchange (ETDEWEB)

    Altmeyer, S.; McCaskill, J. S.

    2001-06-18

    The error threshold for quasispecies in 1, 2, 3, and ∞ dimensions is investigated by stochastic simulation and analytically. The results show a monotonic decrease in the maximal sustainable error probability with decreasing diffusion coefficient, independently of the spatial dimension. It is thereby established that physical interactions between sequences are necessary in order for spatial effects to enhance the stabilization of biological information. The analytically tractable behavior in an ∞-dimensional (simplex) space provides a good guide to the spatial dependence of the error threshold in lower-dimensional Euclidean space.
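
    For orientation, the classic mean-field (non-spatial) quasispecies error threshold that these spatial results generalize can be stated as follows, with sigma the selective advantage of the master sequence, p the per-site copying error rate and L the sequence length:

        % survival of the master sequence requires its effective
        % replication rate to exceed that of the mutant cloud
        \sigma (1 - p)^{L} > 1
        \quad\Longrightarrow\quad
        p_{\max} \approx \frac{\ln \sigma}{L}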

  1. A comparison of numerical and machine-learning modeling of soil water content with limited input data

    Science.gov (United States)

    Karandish, Fatemeh; Šimůnek, Jiří

    2016-12-01

    Soil water content (SWC) is a key factor in optimizing the usage of water resources in agriculture, since it provides the information needed to make an accurate estimation of crop water demand. Methods for predicting SWC that have simple data requirements are needed to achieve an optimal irrigation schedule, especially for the various water-saving irrigation strategies that are required to resolve both food and water security issues under conditions of water shortage. Thus, a two-year field investigation was carried out to provide a dataset to compare the effectiveness of HYDRUS-2D, a physically-based numerical model, with various machine-learning models, including Multiple Linear Regression (MLR), Adaptive Neuro-Fuzzy Inference Systems (ANFIS), and Support Vector Machines (SVM), for simulating time series of SWC data under water stress conditions. SWC was monitored using TDRs during the maize growing seasons of 2010 and 2011. Eight combinations of six simple, independent parameters, including pan evaporation and average air temperature as atmospheric parameters, cumulative growing degree days (cGDD) and crop coefficient (Kc) as crop factors, and water deficit (WD) and irrigation depth (In) as crop stress factors, were adopted for the estimation of SWCs in the machine-learning models. With Root Mean Square Errors (RMSE) in the range of 0.54-2.07 mm, HYDRUS-2D ranked first for SWC estimation, while the ANFIS and SVM models with input datasets of cGDD, Kc, WD and In ranked next, with RMSEs ranging from 1.27 to 1.9 mm and mean bias errors of -0.07 to 0.27 mm, respectively. However, the MLR models did not perform well for SWC forecasting, mainly due to the non-linear changes of SWC under the irrigation process. The results demonstrated that despite requiring only simple input data, the ANFIS and SVM models could be favorably used for SWC predictions under water stress conditions, especially when there is a lack of data. However, process-based numerical models are undoubtedly a
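
    A minimal sketch of the SVM branch of such a comparison, run on synthetic stand-ins for the four selected inputs (cGDD, Kc, WD, In) rather than the study's field data:

        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVR

        rng = np.random.default_rng(0)
        X = rng.uniform(size=(200, 4))      # stand-ins for cGDD, Kc, WD, In
        y = 20 + 5 * X[:, 0] - 3 * X[:, 2] + rng.normal(scale=0.5, size=200)

        model = make_pipeline(StandardScaler(),
                              SVR(kernel="rbf", C=10.0, epsilon=0.1))
        rmse = -cross_val_score(model, X, y,
                                scoring="neg_root_mean_squared_error",
                                cv=5).mean()
        print(f"cross-validated RMSE: {rmse:.2f} (same units as SWC)")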

  2. Formulation of a hybrid calibration approach for a physically based distributed model with NEXRAD data input

    Science.gov (United States)

    Di Luzio, Mauro; Arnold, Jeffrey G.

    2004-10-01

    This paper describes the background, formulation and results of an hourly input-output calibration approach proposed for the Soil and Water Assessment Tool (SWAT) watershed model, presented for 24 representative storm events occurring during the period between 1994 and 2000 in the Blue River watershed (1233 km², located in Oklahoma). This effort is the first follow-up to the participation in the National Weather Service Distributed Modeling Intercomparison Project (DMIP), an opportunity to apply, for the first time within the SWAT modeling framework, routines for hourly stream flow prediction based on gridded precipitation (NEXRAD) data input. Previous SWAT model simulations, uncalibrated and with moderate manual calibration (only the water balance over the calibration period), were provided for the entire set of watersheds and associated outlets for the comparison designed in the DMIP project. The extended goal of this follow-up was to verify the model's efficiency in simulating hourly hydrographs, calibrating each storm event using the formulated approach. This combined a manual and an automatic calibration approach (the Shuffled Complex Evolution Method) with input parameter values allowed to vary only within their physical ranges. While the model provided reasonable water budget results with minimal calibration, event simulations with the revised calibration were significantly improved. The combination of NEXRAD precipitation data input, the soil water balance and runoff equations, along with the calibration strategy described in the paper, appears to adequately describe the storm events. The presented application and the formulated calibration method are initial steps toward improving the hourly simulation of the SWAT model loading variables associated with storm flow, such as sediment and pollutants, and the success of Total Maximum Daily Load (TMDL) projects.

  3. Consolidating soil carbon turnover models by improved estimates of belowground carbon input

    Science.gov (United States)

    Taghizadeh-Toosi, Arezoo; Christensen, Bent T.; Glendining, Margaret; Olesen, Jørgen E.

    2016-09-01

    World soil carbon (C) stocks are exceeded only by those in the ocean and the Earth's crust, and represent twice the amount currently present in the atmosphere. Therefore, any small change in the amount of soil organic C (SOC) may affect carbon dioxide (CO2) concentrations in the atmosphere. Dynamic models of SOC help reveal the interactions among soil carbon systems, climate and land management, and they are also frequently used to help assess SOC dynamics. These models often use allometric functions to calculate soil C inputs, in which the amount of C in both above- and belowground crop residues is assumed to be proportional to crop harvest yield. Here we argue that simulating changes in SOC stocks based on C inputs proportional to crop yield is not supported by data from long-term experiments with measured SOC changes. Rather, there is evidence that root C inputs are largely independent of crop yield, but crop-specific. We discuss the implications of applying a fixed belowground C input, regardless of crop yield, for agricultural greenhouse gas mitigation and accounting.

  4. Application of a Linear Input/Output Model to Tankless Water Heaters

    Energy Technology Data Exchange (ETDEWEB)

    Butcher T.; Schoenbauer, B.

    2011-12-31

    In this study, the applicability of a linear input/output model to gas-fired, tankless water heaters has been evaluated. This simple model assumes that the relationship between input and output, averaged over both active draw and idle periods, is linear. This approach is being applied to boilers in other studies and offers the potential to make a small number of simple measurements to obtain the model parameters. These parameters can then be used to predict performance under complex load patterns. Both condensing and non-condensing water heaters have been tested under a very wide range of load conditions. It is shown that this approach can be used to reproduce performance metrics, such as the energy factor, and can be used to evaluate the impacts of alternative draw patterns and conditions.
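
    A minimal sketch of the linear input/output idea with illustrative numbers (not the study's measurements): average useful output over a test period is assumed linear in average fuel input, so a two-parameter fit yields a marginal efficiency and a standby loss that can then predict performance under arbitrary draw patterns.

        import numpy as np

        fuel_input = np.array([2.1, 4.8, 9.5, 14.2, 20.0])   # average input, kW
        useful_out = np.array([1.6, 3.9, 7.9, 11.9, 16.8])   # average output, kW

        # Least-squares fit of the line: output = slope * input + intercept,
        # where -intercept plays the role of a standby loss.
        slope, intercept = np.polyfit(fuel_input, useful_out, 1)
        print(f"marginal efficiency ~ {slope:.2f}, "
              f"standby loss ~ {-intercept:.2f} kW")

        def efficiency(avg_input_kw):
            # Predicted average efficiency for a given average draw level.
            return (slope * avg_input_kw + intercept) / avg_input_kw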

  5. Estimating model error covariances in nonlinear state-space models using Kalman smoothing and the expectation-maximisation algorithm

    KAUST Repository

    Dreano, D.

    2017-04-05

    Specification and tuning of errors from dynamical models are important issues in data assimilation. In this work, we propose an iterative expectation-maximisation (EM) algorithm to estimate the model error covariances using classical extended and ensemble versions of the Kalman smoother. We show that, for additive model errors, the estimate of the error covariance converges. We also investigate other forms of model error, such as parametric or multiplicative errors, and show that additive Gaussian model error is able to compensate for non-additive sources of error in the algorithms we propose. We also demonstrate the limitations of the extended version of the algorithm and recommend the use of the more robust and flexible ensemble version. This article is a proof of concept of the methodology, using the Lorenz-63 attractor. We developed an open-source Python library to enable future users to apply the algorithm to their own nonlinear dynamical models.
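
    A minimal NumPy sketch of the EM idea for the additive-error case, restricted to a linear-Gaussian state-space model x_k = A x_{k-1} + w, y_k = H x_k + v; the paper's extended and ensemble smoothers and its other error forms are omitted here:

        import numpy as np

        def em_q(y, A, H, Q0, R, x0, P0, n_iter=20):
            """Estimate the model error covariance Q by EM with an RTS smoother."""
            n = len(y)
            Q = Q0.copy()
            for _ in range(n_iter):
                # E-step, part 1: forward Kalman filter.
                xf, Pf, xp, Pp = [], [], [], []
                x, P = x0, P0
                for k in range(n):
                    x, P = A @ x, A @ P @ A.T + Q                 # predict
                    xp.append(x); Pp.append(P)
                    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)  # update
                    x, P = x + K @ (y[k] - H @ x), P - K @ H @ P
                    xf.append(x); Pf.append(P)
                # E-step, part 2: backward RTS smoother.
                xs, Ps, G = [xf[-1]], [Pf[-1]], []
                for k in range(n - 2, -1, -1):
                    J = Pf[k] @ A.T @ np.linalg.inv(Pp[k + 1])
                    xs.insert(0, xf[k] + J @ (xs[0] - xp[k + 1]))
                    Ps.insert(0, Pf[k] + J @ (Ps[0] - Pp[k + 1]) @ J.T)
                    G.insert(0, J)
                # M-step: Q is the average second moment of smoothed increments.
                Qnew = np.zeros_like(Q)
                for k in range(1, n):
                    e = xs[k] - A @ xs[k - 1]
                    C = Ps[k] @ G[k - 1].T       # Cov(x_k, x_{k-1} | all data)
                    Qnew += (np.outer(e, e) + Ps[k] + A @ Ps[k - 1] @ A.T
                             - C @ A.T - A @ C.T)
                Q = Qnew / (n - 1)
            return Q

    In the nonlinear setting of the abstract, the forward and backward passes are replaced by extended or ensemble Kalman smoothers while the additive-Q update keeps the same form.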

  6. Analytical modeling of the input admittance of an electric drive for stability analysis purposes

    Science.gov (United States)

    Girinon, S.; Baumann, C.; Piquet, H.; Roux, N.

    2009-07-01

    Embedded HVDC electric distribution networks face difficult power quality and stability issues. To help resolve these problems, this paper develops an analytical model of an electric drive. This self-contained model includes an inverter, its regulation loops and the PMSM. After comparing the model with its equivalent (abc) full model, the study focuses on frequency analysis. Combining the drive with an input filter allows the stability of the whole assembly to be assessed by means of the Routh–Hurwitz criterion.
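
    A minimal sketch of the Routh–Hurwitz check applied to a characteristic polynomial (the coefficients below are illustrative, not taken from the drive model): stability requires all leading principal minors of the Hurwitz matrix to be positive.

        import numpy as np

        def hurwitz_stable(coeffs):
            """coeffs = [a0, a1, ..., an] of a0*s^n + ... + an, with a0 > 0."""
            a = np.asarray(coeffs, dtype=float)
            n = len(a) - 1
            H = np.zeros((n, n))
            for i in range(n):
                for j in range(n):
                    k = 2 * (j + 1) - (i + 1)   # 1-indexed Hurwitz rule a_{2j-i}
                    if 0 <= k <= n:
                        H[i, j] = a[k]
            minors = [np.linalg.det(H[:m, :m]) for m in range(1, n + 1)]
            return all(d > 0 for d in minors)

        print(hurwitz_stable([1, 6, 11, 6]))    # (s+1)(s+2)(s+3) -> True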

  7. New Results on Robust Model Predictive Control for Time-Delay Systems with Input Constraints

    Directory of Open Access Journals (Sweden)

    Qing Lu

    2014-01-01

    Full Text Available This paper investigates the problem of model predictive control for a class of nonlinear systems subject to state delays and input constraints. The time-varying delay is considered with both upper and lower bounds. A new model is proposed to approximate the delay, and the uncertainty is of polytopic type. For the state-feedback MPC design objective, we formulate an optimization problem. Under a model transformation, a new model predictive controller is designed such that the robust asymptotic stability of the closed-loop system can be guaranteed. Finally, the applicability of the presented results is demonstrated by a practical example.

  8. A hippocampal cognitive prosthesis: multi-input, multi-output nonlinear modeling and VLSI implementation.

    Science.gov (United States)

    Berger, Theodore W; Song, Dong; Chan, Rosa H M; Marmarelis, Vasilis Z; LaCoss, Jeff; Wills, Jack; Hampson, Robert E; Deadwyler, Sam A; Granacki, John J

    2012-03-01

    This paper describes the development of a cognitive prosthesis designed to restore the ability to form new long-term memories typically lost after damage to the hippocampus. The animal model used is delayed nonmatch-to-sample (DNMS) behavior in the rat, and the "core" of the prosthesis is a biomimetic multi-input/multi-output (MIMO) nonlinear model that provides the capability for predicting the spatio-temporal spike train output of the hippocampus (CA1) based on spatio-temporal spike train inputs recorded presynaptically to CA1 (e.g., in CA3). We demonstrate the capability of the MIMO model for highly accurate predictions of CA1-coded memories that can be made on a single-trial basis and in real time. When hippocampal CA1 function is blocked and long-term memory formation is lost, successful DNMS behavior also is abolished. However, when MIMO model predictions are used to reinstate CA1 memory-related activity, by driving spatio-temporal electrical stimulation of hippocampal output to mimic the patterns of activity observed in control conditions, successful DNMS behavior is restored. We also outline a very-large-scale integration design for a hardware implementation of a 16-input, 16-output MIMO model, together with the spike sorting, amplification, and other functions necessary for a complete system which, when coupled with electrode arrays recording extracellularly from populations of hippocampal neurons, can serve as a cognitive prosthesis in behaving animals.

  9. Statistical analysis-based error models for the Microsoft Kinect™ depth sensor.

    Science.gov (United States)

    Choo, Benjamin; Landau, Michael; DeVore, Michael; Beling, Peter A

    2014-09-18

    The stochastic error characteristics of the Kinect sensing device are presented for each axis direction. Depth (z) directional error is measured using a flat surface, and horizontal (x) and vertical (y) errors are measured using a novel 3D checkerboard. Results show that the stochastic nature of the Kinect measurement error is affected mostly by the depth at which the object being sensed is located, though radial factors must be considered, as well. Measurement and statistics-based models are presented for the stochastic error in each axis direction, which are based on the location and depth value of empirical data measured for each pixel across the entire field of view. The resulting models are compared against existing Kinect error models, and through these comparisons, the proposed model is shown to be a more sophisticated and precise characterization of the Kinect error distributions.

  10. Statistical Analysis-Based Error Models for the Microsoft Kinect™ Depth Sensor

    Science.gov (United States)

    Choo, Benjamin; Landau, Michael; DeVore, Michael; Beling, Peter A.

    2014-01-01

    The stochastic error characteristics of the Kinect sensing device are presented for each axis direction. Depth (z) directional error is measured using a flat surface, and horizontal (x) and vertical (y) errors are measured using a novel 3D checkerboard. Results show that the stochastic nature of the Kinect measurement error is affected mostly by the depth at which the object being sensed is located, though radial factors must be considered, as well. Measurement and statistics-based models are presented for the stochastic error in each axis direction, which are based on the location and depth value of empirical data measured for each pixel across the entire field of view. The resulting models are compared against existing Kinect error models, and through these comparisons, the proposed model is shown to be a more sophisticated and precise characterization of the Kinect error distributions. PMID:25237896

  11. Scaling precipitation input to spatially distributed hydrological models by measured snow distribution

    OpenAIRE

    2016-01-01

    Accurate knowledge of snow distribution in alpine terrain is crucial for various applications such as flood risk assessment, avalanche warning or managing water supply and hydro-power. To simulate the seasonal snow cover development in alpine terrain, the spatially distributed, physics-based model Alpine3D is suitable. The model is typically driven by spatial interpolations of observations from automatic weather stations (AWS), leading to errors in the spatial distribution of atmospheric forcing. ...

  12. Avoiding and identifying errors in health technology assessment models: qualitative study and methodological review.

    Science.gov (United States)

    Chilcott, J; Tappenden, P; Rawdin, A; Johnson, M; Kaltenthaler, E; Paisley, S; Papaioannou, D; Shippam, A

    2010-05-01

    Health policy decisions must be relevant, evidence-based and transparent. Decision-analytic modelling supports this process but its role is reliant on its credibility. Errors in mathematical decision models or simulation exercises are unavoidable but little attention has been paid to processes in model development. Numerous error avoidance/identification strategies could be adopted but it is difficult to evaluate the merits of strategies for improving the credibility of models without first developing an understanding of error types and causes. The study aims to describe the current comprehension of errors in the HTA modelling community and generate a taxonomy of model errors. Four primary objectives are to: (1) describe the current understanding of errors in HTA modelling; (2) understand current processes applied by the technology assessment community for avoiding errors in development, debugging and critically appraising models for errors; (3) use HTA modellers' perceptions of model errors with the wider non-HTA literature to develop a taxonomy of model errors; and (4) explore potential methods and procedures to reduce the occurrence of errors in models. It also describes the model development process as perceived by practitioners working within the HTA community. A methodological review was undertaken using an iterative search methodology. Exploratory searches informed the scope of interviews; later searches focused on issues arising from the interviews. Searches were undertaken in February 2008 and January 2009. In-depth qualitative interviews were performed with 12 HTA modellers from academic and commercial modelling sectors. All qualitative data were analysed using the Framework approach. Descriptive and explanatory accounts were used to interrogate the data within and across themes and subthemes: organisation, roles and communication; the model development process; definition of error; types of model error; strategies for avoiding errors; strategies for

  13. Bayesian modeling of measurement error in predictor variables using item response theory

    NARCIS (Netherlands)

    Fox, Gerardus J.A.; Glas, Cornelis A.W.

    2000-01-01

    This paper focuses on handling measurement error in predictor variables using item response theory (IRT). Measurement error is of great importance in the assessment of theoretical constructs, such as intelligence or school climate. Measurement error is modeled by treating the predictors as unobserved

  14. Bayesian modeling of measurement error in predictor variables using item response theory

    NARCIS (Netherlands)

    Fox, Jean-Paul; Glas, Cees A.W.

    2000-01-01

    This paper focuses on handling measurement error in predictor variables using item response theory (IRT). Measurement error is of great importance in the assessment of theoretical constructs, such as intelligence or school climate. Measurement error is modeled by treating the predictors as unobserved

  15. Making refractive error services sustainable: the International Eye Foundation model

    Directory of Open Access Journals (Sweden)

    Victoria M Sheffield

    2007-09-01

    Full Text Available The International Eye Foundation (IEF believes that the most effective strategy for making spectacles affordable and accessible is to integrate refractive error services into ophthalmic services and to run the refractive error service as a business – thereby making it sustainable. An optical service should be able to deal with high volumes of patients and generate enough revenue – not just to cover its own costs, but also to contribute to ophthalmic clinical services.

  16. Hubble Frontier Fields: systematic errors in strong lensing models of galaxy clusters - implications for cosmography

    Science.gov (United States)

    Acebron, Ana; Jullo, Eric; Limousin, Marceau; Tilquin, André; Giocoli, Carlo; Jauzac, Mathilde; Mahler, Guillaume; Richard, Johan

    2017-09-01

    Strong gravitational lensing by galaxy clusters is a fundamental tool to study dark matter and constrain the geometry of the Universe. Recently, the Hubble Space Telescope Frontier Fields programme has allowed a significant improvement of mass and magnification measurements, but lensing models still have a residual root mean square between 0.2 arcsec and a few arcseconds that is not yet completely understood. Systematic errors have to be better understood and treated in order to use strong lensing clusters as reliable cosmological probes. We have analysed two simulated Hubble-Frontier-Fields-like clusters from the Hubble Frontier Fields Comparison Challenge, Ares and Hera. We use several estimators (relative bias on magnification, density profiles, ellipticity and orientation) to quantify the goodness of our reconstructions by comparing our multiple models, optimized with the parametric software lenstool, with the input models. We have quantified the impact of systematic errors arising, first, from the choice of different density profiles and configurations and, secondly, from the availability of constraints (spectroscopic or photometric redshifts, redshift ranges of the background sources) in the parametric modelling of strong lensing galaxy clusters, and therefore on the retrieval of cosmological parameters. We find that substructures in the outskirts have a significant impact on the position of the multiple images, yielding tighter cosmological contours. The need for wide-field imaging around massive clusters is thus reinforced. We show that competitive cosmological constraints can be obtained also with complex multimodal clusters, and that photometric redshifts improve the constraints on cosmological parameters when considering a narrow range of (spectroscopic) redshifts for the sources.

  17. Skin lesion computational diagnosis of dermoscopic images: Ensemble models based on input feature manipulation.

    Science.gov (United States)

    Oliveira, Roberta B; Pereira, Aledir S; Tavares, João Manuel R S

    2017-10-01

    The number of deaths worldwide due to melanoma has risen in recent times, in part because melanoma is the most aggressive type of skin cancer. Computational systems have been developed to assist dermatologists in early diagnosis of skin cancer, or even to monitor skin lesions. However, there still remains a challenge to improve classifiers for the diagnosis of such skin lesions. The main objective of this article is to evaluate different ensemble classification models based on input feature manipulation to diagnose skin lesions. Input feature manipulation processes are based on feature subset selections from shape properties, colour variation and texture analysis to generate diversity for the ensemble models. Three subset selection models are presented here: (1) a subset selection model based on specific feature groups, (2) a correlation-based subset selection model, and (3) a subset selection model based on feature selection algorithms. Each ensemble classification model is generated using an optimum-path forest classifier and integrated with a majority voting strategy. The proposed models were applied on a set of 1104 dermoscopic images using a cross-validation procedure. The best results were obtained by the first ensemble classification model that generates a feature subset ensemble based on specific feature groups. The skin lesion diagnosis computational system achieved 94.3% accuracy, 91.8% sensitivity and 96.7% specificity. The input feature manipulation process based on specific feature subsets generated the greatest diversity for the ensemble classification model with very promising results.
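
    A minimal sketch of an ensemble built by input-feature manipulation, with an SVM standing in for the paper's optimum-path forest classifier and illustrative feature-group names:

        import numpy as np
        from sklearn.svm import SVC

        def fit_group_ensemble(X, y, groups):
            """groups: dict mapping a name (shape/colour/texture) to column indices."""
            return {g: SVC().fit(X[:, cols], y) for g, cols in groups.items()}

        def predict_majority(models, X, groups):
            # One prediction per feature group, combined by majority voting
            # (assumes integer class labels).
            votes = np.stack([models[g].predict(X[:, cols])
                              for g, cols in groups.items()])
            return np.apply_along_axis(lambda v: np.bincount(v).argmax(),
                                       0, votes)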

  18. The problem with total error models in establishing performance specifications and a simple remedy.

    Science.gov (United States)

    Krouwer, Jan S

    2016-08-01

    A recent issue in this journal revisited performance specifications since the Stockholm conference. Of the three recommended methods, two use total error models to establish performance specifications. It is shown that the most commonly used total error model - the Westgard model - is deficient, yet even more complete models fail to capture all the errors that comprise total error. Moreover, total error models are often set at 95% of results, which leaves 5% of results unspecified. Glucose meter performance standards are used to illustrate these problems. The Westgard model is useful to assess assay performance but not to set performance specifications. Total error can be used to set performance specifications if the specifications include 100% of the results.
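
    For reference, the Westgard-style total error model criticized here combines bias and imprecision in the familiar form

        TE_{95\%} = |\mathrm{bias}| + z \cdot \mathrm{SD},
        \qquad z \approx 1.65 \ \text{(one-sided)} \ \text{or} \ 1.96 \ \text{(two-sided)}

    The abstract's point is that such a sum does not capture all error sources and, at 95% coverage, leaves 5% of results unspecified, which is why it suits performance assessment better than specification-setting.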

  19. Input dependent cell assembly dynamics in a model of the striatal medium spiny neuron network.

    Science.gov (United States)

    Ponzi, Adam; Wickens, Jeff

    2012-01-01

    The striatal medium spiny neuron (MSN) network is sparsely connected with fairly weak GABAergic collaterals receiving an excitatory glutamatergic cortical projection. Peri-stimulus time histograms (PSTH) of MSN population response investigated in various experimental studies display strong firing rate modulations distributed throughout behavioral task epochs. In previous work we have shown by numerical simulation that sparse random networks of inhibitory spiking neurons with characteristics appropriate for UP state MSNs form cell assemblies which fire together coherently in sequences on long behaviorally relevant timescales when the network receives a fixed pattern of constant input excitation. Here we first extend that model to the case where cortical excitation is composed of many independent noisy Poisson processes and demonstrate that cell assembly dynamics is still observed when the input is sufficiently weak. However if cortical excitation strength is increased more regularly firing and completely quiescent cells are found, which depend on the cortical stimulation. Subsequently we further extend previous work to consider what happens when the excitatory input varies as it would when the animal is engaged in behavior. We investigate how sudden switches in excitation interact with network generated patterned activity. We show that sequences of cell assembly activations can be locked to the excitatory input sequence and outline the range of parameters where this behavior is shown. Model cell population PSTH display both stimulus and temporal specificity, with large population firing rate modulations locked to elapsed time from task events. Thus the random network can generate a large diversity of temporally evolving stimulus dependent responses even though the input is fixed between switches. We suggest the MSN network is well suited to the generation of such slow coherent task dependent response which could be utilized by the animal in behavior.

  20. Input dependent cell assembly dynamics in a model of the striatal medium spiny neuron network

    Directory of Open Access Journals (Sweden)

    Adam ePonzi

    2012-03-01

    Full Text Available The striatal medium spiny neuron (MSN) network is sparsely connected with fairly weak GABAergic collaterals receiving an excitatory glutamatergic cortical projection. Peri-stimulus time histograms (PSTH) of the MSN population response investigated in various experimental studies display strong firing rate modulations distributed throughout behavioural task epochs. In previous work we have shown by numerical simulation that sparse random networks of inhibitory spiking neurons with characteristics appropriate for UP state MSNs form cell assemblies which fire together coherently in sequences on long behaviourally relevant timescales when the network receives a fixed pattern of constant input excitation. Here we first extend that model to the case where cortical excitation is composed of many independent noisy Poisson processes and demonstrate that cell assembly dynamics is still observed when the input is sufficiently weak. However, if cortical excitation strength is increased, more regularly firing and completely quiescent cells are found, which depend on the cortical stimulation. Subsequently we further extend previous work to consider what happens when the excitatory input varies as it would when the animal is engaged in behaviour. We investigate how sudden switches in excitation interact with network generated patterned activity. We show that sequences of cell assembly activations can be locked to the excitatory input sequence and delineate the range of parameters where this behaviour is shown. Model cell population PSTH display both stimulus and temporal specificity, with large population firing rate modulations locked to elapsed time from task events. Thus the random network can generate a large diversity of temporally evolving stimulus dependent responses even though the input is fixed between switches. We suggest the MSN network is well suited to the generation of such slow coherent task dependent response

  1. What input data are needed to accurately model electromagnetic fields from mobile phone base stations?

    Science.gov (United States)

    Beekhuizen, Johan; Kromhout, Hans; Bürgi, Alfred; Huss, Anke; Vermeulen, Roel

    2015-01-01

    The increase in mobile communication technology has led to concern about potential health effects of radio frequency electromagnetic fields (RF-EMFs) from mobile phone base stations. Different RF-EMF prediction models have been applied to assess population exposure to RF-EMF. Our study examines what input data are needed to accurately model RF-EMF, as detailed data are not always available for epidemiological studies. We used NISMap, a 3D radio wave propagation model, to test models with various levels of detail in building and antenna input data. The model outcomes were compared with outdoor measurements taken in Amsterdam, the Netherlands. Results showed good agreement between modelled and measured RF-EMF when 3D building data and basic antenna information (location, height, frequency and direction) were used: Spearman correlations were >0.6. Model performance was not sensitive to changes in building damping parameters. Antenna-specific information about down-tilt, type and output power did not significantly improve model performance compared with using average down-tilt and power values, or assuming one standard antenna type. We conclude that 3D radio wave propagation modelling is a feasible approach to predict outdoor RF-EMF levels for ranking exposure levels in epidemiological studies, when 3D building data and information on the antenna height, frequency, location and direction are available.

  2. Model-observer similarity, error modeling and social learning in rhesus macaques.

    Science.gov (United States)

    Monfardini, Elisabetta; Hadj-Bouziane, Fadila; Meunier, Martine

    2014-01-01

    Monkeys readily learn to discriminate between rewarded and unrewarded items or actions by observing their conspecifics. However, they do not systematically learn from humans. Understanding what makes human-to-monkey transmission of knowledge work or fail could help identify mediators and moderators of social learning that operate regardless of language or culture, and transcend inter-species differences. Do monkeys fail to learn when human models show a behavior too dissimilar from the animals' own, or when they show a faultless performance devoid of error? To address this question, six rhesus macaques trained to find which object within a pair concealed a food reward were successively tested with three models: a familiar conspecific, a 'stimulus-enhancing' human actively drawing the animal's attention to one object of the pair without actually performing the task, and a 'monkey-like' human performing the task in the same way as the monkey model did. Reward was manipulated to ensure that all models showed equal proportions of errors and successes. The 'monkey-like' human model improved the animals' subsequent object discrimination learning as much as a conspecific did, whereas the 'stimulus-enhancing' human model tended on the contrary to retard learning. Modeling errors rather than successes optimized learning from the monkey and 'monkey-like' models, while exacerbating the adverse effect of the 'stimulus-enhancing' model. These findings identify error modeling as a moderator of social learning in monkeys that amplifies the models' influence, whether beneficial or detrimental. By contrast, model-observer similarity in behavior emerged as a mediator of social learning, that is, a prerequisite for a model to work in the first place. The latter finding suggests that, as preverbal infants, macaques need to perceive the model as 'like-me' and that, once this condition is fulfilled, any agent can become an effective model.

  3. Input variable selection for data-driven models of Coriolis flowmeters for two-phase flow measurement

    Science.gov (United States)

    Wang, Lijuan; Yan, Yong; Wang, Xue; Wang, Tao

    2017-03-01

    Input variable selection is an essential step in the development of data-driven models for environmental, biological and industrial applications. Through input variable selection to eliminate irrelevant or redundant variables, a suitable subset of variables is identified as the input of a model. Meanwhile, through input variable selection the complexity of the model structure is simplified and the computational efficiency is improved. This paper describes the procedures of input variable selection for data-driven models for the measurement of liquid mass flowrate and gas volume fraction under two-phase flow conditions using Coriolis flowmeters. Three advanced input variable selection methods, including partial mutual information (PMI), genetic algorithm-artificial neural network (GA-ANN) and tree-based iterative input selection (IIS), are applied in this study. Typical data-driven models incorporating support vector machines (SVM) are established individually based on the input candidates resulting from the selection methods. The validity of the selection outcomes is assessed through an output performance comparison of the SVM-based data-driven models and sensitivity analysis. The validation and analysis results suggest that the input variables selected by the PMI algorithm provide more effective information for the models to measure liquid mass flowrate, while the IIS algorithm provides fewer but more effective variables for the models to predict gas volume fraction.

  4. Modeling and Sensitivity Analysis of Navigation Parameter Errors for Airborne Synthetic Aperture Radar Stereo Geolocation

    Institute of Scientific and Technical Information of China (English)

    PANG Lei; ZHANG Jixian; YAN Qin

    2010-01-01

    For high-resolution airborne synthetic aperture radar (SAR) stereo geolocation applications, the final geolocation accuracy is influenced by various error parameter sources. In this paper, an airborne SAR stereo geolocation parameter error model, involving the parameter errors derived from the navigation system on the flight platform, is put forward. Moreover, a near-direct method for modeling and sensitivity analysis of navigation parameter errors is also given. This method directly uses the ground reference to calculate the covariance matrix relationship between the parameter errors and the eventual geolocation errors for ground target points. In addition, utilizing the errors of true flight track parameters, this paper verifies the method, provides a corresponding sensitivity analysis for the airborne SAR stereo geolocation model, and demonstrates its efficiency.

  5. The use of error-category mapping in pharmacokinetic model analysis of dynamic contrast-enhanced MRI data.

    Science.gov (United States)

    Gill, Andrew B; Anandappa, Gayathri; Patterson, Andrew J; Priest, Andrew N; Graves, Martin J; Janowitz, Tobias; Jodrell, Duncan I; Eisen, Tim; Lomas, David J

    2015-02-01

    This study introduces the use of 'error-category mapping' in the interpretation of pharmacokinetic (PK) model parameter results derived from dynamic contrast-enhanced (DCE-) MRI data. Eleven patients with metastatic renal cell carcinoma were enrolled in a multiparametric study of the treatment effects of bevacizumab. For the purposes of the present analysis, DCE-MRI data from two identical pre-treatment examinations were analysed by application of the extended Tofts model (eTM), using in turn a model arterial input function (AIF), an individually-measured AIF and a sample-average AIF. PK model parameter maps were calculated. Errors in the signal-to-gadolinium concentration ([Gd]) conversion process and the model-fitting process itself were assigned to category codes on a voxel-by-voxel basis, thereby forming a colour-coded 'error-category map' for each imaged slice. These maps were found to be repeatable between patient visits and showed that the eTM converged adequately in the majority of voxels in all the tumours studied. However, the maps also clearly indicated sub-regions of low Gd uptake and of non-convergence of the model in nearly all tumours. The non-physical condition ve ≥ 1 was the most frequently indicated error category and appeared sensitive to the form of AIF used. This simple method for visualisation of errors in DCE-MRI could be used as a routine quality-control technique and also has the potential to reveal otherwise hidden patterns of failure in PK model applications.

  6. Stable isotopes and Digital Elevation Models to study nutrient inputs in high-Arctic lakes

    Science.gov (United States)

    Calizza, Edoardo; Rossi, David; Costantini, Maria Letizia; Careddu, Giulio; Rossi, Loreto

    2016-04-01

    Ice cover, run-off from the watershed, aquatic and terrestrial primary productivity, and guano deposition from birds are key factors controlling nutrient and organic matter inputs in high-Arctic lakes. All these factors are expected to be significantly affected by climate change. Quantifying these controls is a key baseline step towards understanding what combination of factors underlies the biological productivity of Arctic lakes and will drive their ecological response to environmental change. Based on Digital Elevation Models, drainage maps, and C and N elemental content and stable isotope analysis in sediments, aquatic vegetation and a dominant macroinvertebrate species (Lepidurus arcticus Pallas, 1793) from Tvillingvatnet, Storvatnet and Kolhamna, three lakes located in northern Spitsbergen (Svalbard), we propose an integrated approach for the analysis of (i) nutrient and organic matter inputs in lakes; (ii) the role of catchment hydro-geomorphology in determining inter-lake differences in the isotopic composition of sediments; and (iii) the effects of diverse nutrient inputs on the isotopic niche of Lepidurus arcticus. Given its high run-off and large catchment, organic deposits in Tvillingvatnet were dominated by terrestrial inputs, whereas inputs were mainly of aquatic origin in Storvatnet, a lowland lake with low potential run-off. In Kolhamna, organic deposits seem to be dominated by inputs from birds, which actually colonise the area. Isotopic signatures were similar between samples within each lake, representing precise tracers for studies on the effect of climate change on biogeochemical cycles in lakes. The isotopic niche of L. arcticus reflected differences in sediments between lakes, suggesting a bottom-up effect of the hydro-geomorphology characterizing each lake on the nutrients assimilated by this species. The presented approach proved to be an effective research pathway for the identification of factors underlying nutrient and organic matter inputs and transfer

  7. Estimating input parameters from intracellular recordings in the Feller neuronal model

    Science.gov (United States)

    Bibbona, Enrico; Lansky, Petr; Sirovich, Roberta

    2010-03-01

    We study the estimation of the input parameters in a Feller neuronal model from a trajectory of the membrane potential sampled at discrete times. These input parameters are identified with the drift and the infinitesimal variance of the underlying stochastic diffusion process with multiplicative noise. The state space of the process is restricted from below by an inaccessible boundary. Further, the model is characterized by the presence of an absorbing threshold, the first hitting of which determines the length of each trajectory and which constrains the state space from above. We compare, both in the presence and in the absence of the absorbing threshold, the efficiency of different known estimators. In addition, we propose an estimator for the drift term, which is proved to be more efficient than the others, at least in the explored range of the parameters. The presence of the threshold makes the estimates of the drift term biased, and two methods to correct it are proposed.
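
    A minimal sketch of a naive estimator for the square-root (Feller) model dX = (a - b X) dt + sigma sqrt(X) dW, based on its Euler discretization; the paper's more refined estimators and its treatment of the absorbing threshold are omitted here:

        import numpy as np

        def estimate_feller(x, dt):
            dx, xk = np.diff(x), x[:-1]
            # Regress increments on the state: dx ~ (a - b*x) * dt.
            A = np.column_stack([np.ones_like(xk), -xk]) * dt
            (a, b), *_ = np.linalg.lstsq(A, dx, rcond=None)
            # Infinitesimal variance from the rescaled residuals.
            resid = dx - (a - b * xk) * dt
            sigma2 = np.mean(resid**2 / (xk * dt))
            return a, b, sigma2

        # Quick check on a simulated path (reflection keeps the path positive).
        rng = np.random.default_rng(1)
        dt, n = 0.01, 20000
        x = np.empty(n); x[0] = 1.0
        for k in range(n - 1):
            x[k + 1] = abs(x[k] + (0.8 - 0.5 * x[k]) * dt
                           + 0.3 * np.sqrt(x[k] * dt) * rng.normal())
        print(estimate_feller(x, dt))   # roughly (0.8, 0.5, 0.09)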

  8. A diffusion model for drying of a heat sensitive solid under multiple heat input modes.

    Science.gov (United States)

    Sun, Lan; Islam, Md Raisul; Ho, J C; Mujumdar, A S

    2005-09-01

    To obtain optimal drying kinetics as well as quality of the dried product in a batch dryer, the energy required may be supplied by combining different modes of heat transfer. In this work, using potato slices as a model heat-sensitive drying object, experimental studies were conducted using a batch heat pump dryer designed to permit the simultaneous application of conduction and radiation heat. Four heat input schemes were compared: pure convection, radiation-coupled convection, conduction-coupled convection and radiation-conduction-coupled convection. A two-dimensional drying model was developed assuming the drying rate to be controlled by liquid water diffusion. Predicted drying rates and temperatures within the slab showed good accord with measurements under all four heat input schemes. Radiation-coupled convection is the recommended heat transfer scheme from the viewpoint of high drying rate and low energy consumption.

  9. On the redistribution of existing inputs using the spherical frontier dea model

    Directory of Open Access Journals (Sweden)

    José Virgilio Guedes de Avellar

    2010-04-01

    Full Text Available The Spherical Frontier DEA Model (SFM) (Avellar et al., 2007) was developed to be used when one wants to fairly distribute a new and fixed input to a group of Decision Making Units (DMUs). SFM's basic idea is to distribute this new and fixed input in such a way that every DMU will be placed on an efficiency frontier with a spherical shape. We use SFM to analyze the problems that appear when one wants to redistribute an already existing input to a group of DMUs such that the total sum of this input remains constant. We also analyze the case in which this total sum may vary.

  10. Better temperature predictions in geothermal modelling by improved quality of input parameters

    DEFF Research Database (Denmark)

    Fuchs, Sven; Bording, Thue Sylvester; Balling, N.

    2015-01-01

    Thermal modelling is used to examine the subsurface temperature field and geothermal conditions at various scales (e.g. sedimentary basins, deep crust) and in the framework of different problem settings (e.g. scientific or industrial use). In such models, knowledge of rock thermal properties...... region (model dimension: 135 × 115 km, depth: 20 km). Results clearly show that (i) the use of location-specific well-log derived rock thermal properties and (ii) the consideration of laterally varying input data (reflecting changes of thermofacies in the project area) significantly improves

  11. Minimal state space realisation of continuous-time linear time-variant input-output models

    Science.gov (United States)

    Goos, J.; Pintelon, R.

    2016-04-01

    In the linear time-invariant (LTI) framework, the transformation from an input-output equation into a state space representation is well understood. Several canonical forms exist that realise the same dynamic behaviour. If the coefficients become time-varying, however, the LTI transformation no longer holds. We prove by induction that there exists a closed-form expression for the observability canonical state space model, using binomial coefficients.

  12. Integrated Flight Mechanic and Aeroelastic Modelling and Control of a Flexible Aircraft Considering Multidimensional Gust Input

    Science.gov (United States)

    2000-05-01

    Teufel, Patrick; Hanel, Martin [abstract not recoverable; the source text is extraction residue mixing the title, author names and reference fragments, including Eichenbaum, F.D., Journal of Aircraft, Vol. 30, No. 5, 1993]

  13. The Role of Spatio-Temporal Resolution of Rainfall Inputs on a Landscape Evolution Model

    Science.gov (United States)

    Skinner, C. J.; Coulthard, T. J.

    2015-12-01

    Landscape Evolution Models are important experimental tools for understanding the long-term development of landscapes. Designed to simulate timescales ranging from decades to millennia, they are usually driven by precipitation inputs that are lumped, both spatially across the drainage basin, and temporally to daily, monthly, or even annual rates. This is based on the assumption that the spatial and temporal heterogeneity of the rainfall will equalise over the long timescales simulated. However, recent studies (Coulthard et al., 2012) have shown that such models are sensitive to event magnitudes, with exponential increases in sediment yields generated by linear increases in flood event size at a basin scale. This suggests that there may be a sensitivity to the spatial and temporal scales of rainfall used to drive such models. This study uses the CAESAR-Lisflood Landscape Evolution Model to investigate the impact of the spatial and temporal resolution of rainfall input on model outputs. The sediment response to a range of temporal (15 min to daily) and spatial (5 km to 50 km) resolutions over three different drainage basin sizes was observed. The results showed the model was sensitive to both, generating up to 100% differences in modelled sediment yields with smaller spatial and temporal resolution precipitation. Larger drainage basins also showed a greater sensitivity to both spatial and temporal resolution. Furthermore, analysis of the distribution of erosion and deposition patterns suggested that small temporal and spatial resolution inputs increased erosion in drainage basin headwaters and deposition in the valley floors. Both of these findings may have implications for existing models and approaches for simulating landscape development.

  14. Quantification of Transport Model Error Impacts on CO2 Inversions Using NASA's GEOS-5 GCM

    Science.gov (United States)

    Ott, L.; Pawson, S.; Weir, B.

    2014-12-01

    Remote sensing observations of CO2 offer the opportunity to reduce uncertainty in global carbon flux estimates. However, a number of studies have shown that inversion flux estimates are strongly influenced by errors in model transport. We will present results from modeling studies designed to quantify how such errors influence simulations of surface and column CO2 mixing ratios. These studies were conducted using the Goddard Earth Observing System, version 5 (GEOS-5) Atmospheric General Circulation Model (AGCM) and the implementation of a suite of tracers associated with errors in boundary layer, convective, and large scale transport. Unlike traditional tagged tracers which are emitted by a certain process or region, error tracers are emitted as air parcels are transported through the atmosphere. The magnitude of error tracer emissions is based on previously published ensembles of AGCM simulations with perturbations to subgrid convective and boundary layer transport, and on comparisons of several reanalysis products to estimate errors in large scale wind fields. Transport error tracers are simulated with several different e-folding lifetimes (e.g. 1, 4, 10, and 30 day) to examine differences between transient and persistent model errors. This quantification of transport error is then used in an illustrative Bayesian synthesis inversion to demonstrate how transport errors influence surface CO2 mixing ratios and how this translates into inferred biosphere flux error.
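
    A minimal sketch of such an e-folding error tracer, with illustrative values: the emitted error E accumulates and decays with lifetime tau, so transient versus persistent transport errors can be compared by varying tau.

        import numpy as np

        def integrate_tracer(E, tau_days, dt_days=0.25):
            """Explicit Euler update of dC/dt = E - C/tau along the simulation."""
            C = np.zeros(len(E))
            for k in range(1, len(E)):
                C[k] = C[k - 1] + dt_days * (E[k - 1] - C[k - 1] / tau_days)
            return C

        E = np.full(400, 0.1)                         # constant error emission
        for tau in (1, 4, 10, 30):
            print(tau, integrate_tracer(E, tau)[-1])  # saturates near E * tau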

  15. A robust hybrid model integrating enhanced inputs based extreme learning machine with PLSR (PLSR-EIELM) and its application to intelligent measurement.

    Science.gov (United States)

    He, Yan-Lin; Geng, Zhi-Qiang; Xu, Yuan; Zhu, Qun-Xiong

    2015-09-01

    In this paper, a robust hybrid model integrating an enhanced-inputs-based extreme learning machine with partial least squares regression (PLSR-EIELM) is proposed. The proposed PLSR-EIELM model can overcome two main flaws of the extreme learning machine (ELM), i.e. the intractable problem of determining the optimal number of hidden layer neurons and the over-fitting phenomenon. First, a traditional extreme learning machine (ELM) is selected. Second, the weights between the input layer and the hidden layer are randomly assigned, and the nonlinear transformation of the independent variables is obtained from the outputs of the hidden layer neurons. In particular, the original input variables are retained as 'enhanced inputs'; the enhanced inputs and the nonlinearly transformed variables are then tied together as the whole set of independent variables. In this way, PLSR can be carried out to identify PLS components not only from the nonlinearly transformed variables but also from the original input variables, which can remove the correlation among the whole set of independent variables and the expected outputs. Finally, the optimal relationship model between the whole set of independent variables and the expected outputs can be achieved by using PLSR. Thus, the PLSR-EIELM model is developed. The PLSR-EIELM model then served as an intelligent measurement tool for the key variables of the Purified Terephthalic Acid (PTA) process and the High Density Polyethylene (HDPE) process. The experimental results show that the predictive accuracy of PLSR-EIELM is stable, which indicates that PLSR-EIELM is robust. Moreover, compared with ELM, PLSR, hierarchical ELM (HELM), and PLSR-ELM, PLSR-EIELM achieves much smaller predicted relative errors in these two applications.
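
    A minimal sketch of the construction, with illustrative layer sizes (not the paper's settings): random ELM-style hidden-layer outputs are concatenated with the original inputs and the combined matrix is regressed on the targets by PLS.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(0)

        def plsr_eielm_fit(X, y, n_hidden=50, n_components=5):
            W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights
            b = rng.normal(size=n_hidden)                 # random biases
            H = np.tanh(X @ W + b)                        # hidden-layer outputs
            Z = np.hstack([X, H])                         # enhanced inputs
            return PLSRegression(n_components=n_components).fit(Z, y), W, b

        def plsr_eielm_predict(model, X):
            pls, W, b = model
            return pls.predict(np.hstack([X, np.tanh(X @ W + b)]))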

  16. Analyzing the sensitivity of a flood risk assessment model towards its input data

    Science.gov (United States)

    Glas, Hanne; Deruyter, Greet; De Maeyer, Philippe; Mandal, Arpita; James-Williamson, Sherene

    2016-11-01

    The Small Island Developing States are characterized by an unstable economy and low-lying, densely populated cities, resulting in a high vulnerability to natural hazards. Flooding affects more people than any other hazard. To limit the consequences of these hazards, adequate risk assessments are indispensable. Satisfactory input data for these assessments are hard to acquire, especially in developing countries. Therefore, in this study, a methodology was developed and evaluated to test the sensitivity of a flood model towards its input data in order to determine a minimum set of indispensable data. In a first step, a flood damage assessment model was created for the case study of Annotto Bay, Jamaica. This model generates a damage map for the region based on the flood extent map of the 2001 inundations caused by Tropical Storm Michelle. Three damages were taken into account: building, road and crop damage. Twelve scenarios were generated, each with a different combination of input data, testing one of the three damage calculations for its sensitivity. One main conclusion was that population density, in combination with an average number of people per household, is a good parameter in determining the building damage when exact building locations are unknown. Furthermore, the importance of roads for an accurate visual result was demonstrated.

  17. Error Modeling and Analysis for InSAR Spatial Baseline Determination of Satellite Formation Flying

    Directory of Open Access Journals (Sweden)

    Jia Tu

    2012-01-01

    Full Text Available Spatial baseline determination is a key technology for interferometric synthetic aperture radar (InSAR) missions. Based on intersatellite baseline measurement using dual-frequency GPS, the errors induced in InSAR spatial baseline measurement are studied in detail. The classifications and characteristics of the errors are analyzed, and error models are set up. Simulations of single-factor and total error sources are used to evaluate the impacts of errors on spatial baseline measurement. Single-factor simulations are used to analyze the impact of each error type individually, while total-error-source simulations are used to analyze the impacts of the error sources induced by GPS measurement, baseline transformation, and the entire spatial baseline measurement, respectively. Simulation results show that errors related to GPS measurement are the main error sources for spatial baseline determination, and that the carrier phase noise of the GPS observations and the fixing error of the GPS receiver antenna are the main factors of the errors related to GPS measurement. In addition, according to the error values listed in this paper, 1 mm level InSAR spatial baseline determination should be realizable.

  18. [Bivariate statistical model for calculating phosphorus input loads to the river from point and nonpoint sources].

    Science.gov (United States)

    Chen, Ding-Jiang; Sun, Si-Yang; Jia, Ying-Na; Chen, Jia-Bo; Lü, Jun

    2013-01-01

    Based on the hydrological difference between point source (PS) and nonpoint source (NPS) pollution processes and the major mechanisms influencing in-stream retention, a bivariate statistical model was developed relating river phosphorus load to river water flow rate and temperature. Using the four model coefficients calibrated and validated from in-stream monitoring data, monthly phosphorus input loads to the river from PS and NPS can be easily determined by the model. Compared to current hydrological methods, this model takes the in-stream retention process and the upstream inflow term into consideration; thus it improves the knowledge of phosphorus pollution processes and can meet the requirements of both district-based and watershed-based water quality management patterns. Using this model, the total phosphorus (TP) input load to the Changle River in Zhejiang Province was calculated. Results indicated that the annual total TP input load was (54.6 +/- 11.9) t x a(-1) in 2004-2009, with upstream water inflow, PS and NPS contributing 5% +/- 1%, 12% +/- 3% and 83% +/- 3%, respectively. The cumulative NPS TP input load during the high-flow periods (i.e., June, July, August and September) in summer accounted for 50% +/- 9% of the annual amount, increasing the algal bloom risk in downstream water bodies. The annual in-stream TP retention load was (4.5 +/- 0.1) t x a(-1) and occupied 9% +/- 2% of the total input load. The cumulative in-stream TP retention load during the summer periods (i.e., June-September) accounted for 55% +/- 2% of the annual amount, indicating that the in-stream retention function plays an important role in seasonal TP transport and transformation processes. This bivariate statistical model only requires commonly available in-stream monitoring data (i.e., river phosphorus load, water flow rate and temperature) with no requirement of special software knowledge; thus it offers researchers and managers a cost-effective tool for
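
    The abstract does not give the functional form of the four-coefficient model, so the sketch below fits an assumed form (a constant point-source term, a power-law nonpoint term driven by flow, and exponential temperature-dependent in-stream retention) to synthetic data:

        import numpy as np
        from scipy.optimize import curve_fit

        def tp_load(X, a, b, c, d):
            Q, T = X                       # river flow rate, water temperature
            return (a + b * Q**c) * np.exp(-d * T)

        rng = np.random.default_rng(0)
        Q = rng.uniform(1, 50, 72)         # synthetic monthly flows, m3/s
        T = rng.uniform(5, 30, 72)         # synthetic temperatures, deg C
        L = tp_load((Q, T), 0.5, 0.2, 1.1, 0.01) * rng.lognormal(0, 0.1, 72)

        popt, _ = curve_fit(tp_load, (Q, T), L, p0=[1.0, 0.1, 1.0, 0.01])
        print(popt)                        # recovered coefficients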

  19. Modeling Sea-Level Change using Errors-in-Variables Integrated Gaussian Processes

    Science.gov (United States)

    Cahill, Niamh; Parnell, Andrew; Kemp, Andrew; Horton, Benjamin

    2014-05-01

    We perform Bayesian inference on historical and late Holocene (last 2000 years) rates of sea-level change. The data that form the input to our model are tide-gauge measurements and proxy reconstructions from cores of coastal sediment. To accurately estimate rates of sea-level change and reliably compare tide-gauge compilations with proxy reconstructions it is necessary to account for the uncertainties that characterize each dataset. Many previous studies used simple linear regression models (most commonly polynomial regression) resulting in overly precise rate estimates. The model we propose uses an integrated Gaussian process approach, where a Gaussian process prior is placed on the rate of sea-level change and the data itself is modeled as the integral of this rate process. The non-parametric Gaussian process model is known to be well suited to modeling time series data. The advantage of using an integrated Gaussian process is that it allows for the direct estimation of the derivative of a one dimensional curve. The derivative at a particular time point will be representative of the rate of sea level change at that time point. The tide gauge and proxy data are complicated by multiple sources of uncertainty, some of which arise as part of the data collection exercise. Most notably, the proxy reconstructions include temporal uncertainty from dating of the sediment core using techniques such as radiocarbon. As a result of this, the integrated Gaussian process model is set in an errors-in-variables (EIV) framework so as to take account of this temporal uncertainty. The data must be corrected for land-level change known as glacio-isostatic adjustment (GIA) as it is important to isolate the climate-related sea-level signal. The correction for GIA introduces covariance between individual age and sea level observations into the model. The proposed integrated Gaussian process model allows for the estimation of instantaneous rates of sea-level change and accounts for all
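
    A minimal sketch of the integrated-GP conditioning on a discrete time grid, omitting the errors-in-variables age handling and the GIA correction described above: the rate gets a GP prior, sea level is its numerically integrated trajectory plus noise, and the posterior rate follows from linear-Gaussian algebra.

        import numpy as np

        t = np.linspace(0, 2000, 200)                  # years (illustrative grid)
        dt = t[1] - t[0]

        def rbf(t1, t2, var=1.0, ell=300.0):
            return var * np.exp(-0.5 * (t1[:, None] - t2[None, :])**2 / ell**2)

        K = rbf(t, t)                                  # prior covariance of the rate
        B = np.tril(np.ones((len(t), len(t)))) * dt    # integration operator

        def posterior_rate(y, obs, tau2=1.0):
            """Posterior mean rate given sea levels y at grid indices obs."""
            Bo = B[obs]
            S = Bo @ K @ Bo.T + tau2 * np.eye(len(obs))
            return K @ Bo.T @ np.linalg.solve(S, y)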

  20. Bootstrap rank-ordered conditional mutual information (broCMI): A nonlinear input variable selection method for water resources modeling

    Science.gov (United States)

    Quilty, John; Adamowski, Jan; Khalil, Bahaa; Rathinasamy, Maheswaran

    2016-03-01

    The input variable selection problem has recently garnered much interest in the time series modeling community, especially within water resources applications, demonstrating that information theoretic (nonlinear)-based input variable selection algorithms such as partial mutual information (PMI) selection (PMIS) provide an improved representation of the modeled process when compared to linear alternatives such as partial correlation input selection (PCIS). PMIS is a popular algorithm for water resources modeling problems considering nonlinear input variable selection; however, this method requires the specification of two nonlinear regression models, each with parametric settings that greatly influence the selected input variables. Other attempts to develop input variable selection methods using conditional mutual information (CMI) (an analog to PMI) have been formulated under different parametric pretenses such as k nearest-neighbor (KNN) statistics or kernel density estimates (KDE). In this paper, we introduce a new input variable selection method based on CMI that uses a nonparametric multivariate continuous probability estimator based on Edgeworth approximations (EA). We improve the EA method by considering the uncertainty in the input variable selection procedure by introducing a bootstrap resampling procedure that uses rank statistics to order the selected input sets; we name our proposed method bootstrap rank-ordered CMI (broCMI). We demonstrate the superior performance of broCMI when compared to CMI-based alternatives (EA, KDE, and KNN), PMIS, and PCIS input variable selection algorithms on a set of seven synthetic test problems and a real-world urban water demand (UWD) forecasting experiment in Ottawa, Canada.
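
    The bootstrap rank-ordering wrapper is easy to separate from the CMI estimator itself. The sketch below uses a Gaussian (partial-correlation) CMI in place of the paper's Edgeworth-approximation estimator, so it is only a linear stand-in; the candidate set, data and selection step are synthetic.

      import numpy as np

      def gaussian_cmi(x, y, Z):
          # For jointly Gaussian variables, CMI(x; y | Z) = -0.5*log(1 - rho^2),
          # where rho is the partial correlation of x and y given Z.
          A = np.column_stack([np.ones(len(x)), Z])
          rx = x - A @ np.linalg.lstsq(A, x, rcond=None)[0]
          ry = y - A @ np.linalg.lstsq(A, y, rcond=None)[0]
          rho = np.corrcoef(rx, ry)[0, 1]
          return -0.5 * np.log(1.0 - rho**2)

      rng = np.random.default_rng(2)
      n, p = 500, 5
      X = rng.standard_normal((n, p))
      y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.standard_normal(n)

      selected = [0]                       # pretend X0 was chosen earlier
      cand = [j for j in range(p) if j not in selected]
      B = 200
      ranks = np.zeros((B, len(cand)))
      for b in range(B):
          i = rng.integers(0, n, n)        # bootstrap resample
          scores = [gaussian_cmi(X[i, j], y[i], X[i][:, selected]) for j in cand]
          ranks[b] = np.argsort(np.argsort(scores)[::-1])   # rank, best = 0
      mean_rank = ranks.mean(axis=0)
      print("next input:", cand[int(np.argmin(mean_rank))])  # expect column 1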

  1. Dispersion modeling of accidental releases of toxic gases - Sensitivity study and optimization of the meteorological input

    Science.gov (United States)

    Baumann-Stanzer, K.; Stenzel, S.

    2009-04-01

    Several air dispersion models are available for prediction and simulation of the hazard areas associated with accidental releases of toxic gases. Most model packages (commercial or free of charge) include a chemical database, an intuitive graphical user interface (GUI) and automated graphical output for effective presentation of results. The models are designed especially for analyzing different accidental toxic release scenarios ("worst-case scenarios"), preparing emergency response plans and optimal countermeasures as well as for real-time risk assessment and management. Uncertainties in the meteorological input together with incorrect estimates of the source play a critical role in the model results. The research project RETOMOD (reference scenarios calculations for toxic gas releases - model systems and their utility for the fire brigade) was conducted by the Central Institute for Meteorology and Geodynamics (ZAMG) in cooperation with the Vienna fire brigade, OMV Refining & Marketing GmbH and Synex Ries & Greßlehner GmbH. RETOMOD was funded by the KIRAS safety research program at the Austrian Ministry of Transport, Innovation and Technology (www.kiras.at). The main tasks of this project were 1. Sensitivity study and optimization of the meteorological input for modeling of the hazard areas (human exposure) during accidental toxic releases. 2. Comparison of several model packages (based on reference scenarios) in order to estimate their utility for the fire brigades. This presentation gives a short introduction to the project and presents the results of task 1 (meteorological input). The results of task 2 are presented by Stenzel and Baumann-Stanzer in this session. For the aim of this project, the observation-based analysis and forecasting system INCA, developed at the Central Institute for Meteorology and Geodynamics (ZAMG), was used. INCA (Integrated Nowcasting through Comprehensive Analysis) data were calculated with 1 km horizontal resolution and

  2. A time-resolved model of the mesospheric Na layer: constraints on the meteor input function

    Directory of Open Access Journals (Sweden)

    J. M. C. Plane

    2004-01-01

    Full Text Available A time-resolved model of the Na layer in the mesosphere/lower thermosphere region is described, where the continuity equations for the major sodium species Na, Na+ and NaHCO3 are solved explicitly, and the other short-lived species are treated in steady-state. It is shown that the diurnal variation of the Na layer can only be modelled satisfactorily if sodium species are permanently removed below about 85 km, both through the dimerization of NaHCO3 and the uptake of sodium species on meteoric smoke particles that are assumed to have formed from the recondensation of vaporized meteoroids. When the sensitivity of the Na layer to the meteoroid input function is considered, an inconsistent picture emerges. The ratio of the column abundance of Na+ to Na is shown to increase strongly with the average meteoroid velocity, because the Na is injected at higher altitudes. Comparison with a limited set of Na+ measurements indicates that the average meteoroid velocity is probably less than about 25 km s⁻¹, in agreement with velocity estimates from conventional meteor radars, and considerably slower than recent observations made by wide aperture incoherent scatter radars. The Na column abundance is shown to be very sensitive to the meteoroid mass input rate, and to the rate of vertical transport by eddy diffusion. Although the magnitude of the eddy diffusion coefficient in the 80–90 km region is uncertain, there is a consensus between recent models using parameterisations of gravity wave momentum deposition that the average value is less than 3×10⁵ cm² s⁻¹. This requires that the global meteoric mass input rate is less than about 20 t d⁻¹, which is closest to estimates from incoherent scatter radar observations. Finally, the diurnal variation in the meteoroid input rate only slightly perturbs the Na layer, because the residence time of Na in the layer is several days, and diurnal effects are effectively averaged out.

  3. A Generalized Process Model of Human Action Selection and Error and its Application to Error Prediction

    Science.gov (United States)

    2014-07-01

    Macmillan & Creelman, 2005). This is a quite high degree of discriminability and it means that when the decision model predicts a probability of… ROC analysis. Pattern Recognition Letters, 27(8), 861-874. Macmillan, N. A., & Creelman, C. D. (2005). Detection

  4. Water Yield and Sediment Yield Simulations for Teba Catchment in Spain Using SWRRB Model: Ⅰ. Model Input and Simulation Experiment

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    Water yield and sediment yield in the Teba catchment, Spain, were simulated using the SWRRB (Simulator for Water Resources in Rural Basins) model. The model is composed of 198 mathematical equations. About 120 items (variables) were input for the simulation, including meteorological and climatic factors, hydrologic factors, topographic factors, parent materials, soils, vegetation, human activities, etc. The simulated results involved surface runoff, subsurface runoff, sediment, peak flow, evapotranspiration, soil water, total biomass, etc. Careful and thorough input data preparation and repeated simulation experiments are the key to obtaining accurate results. In this work the simulation accuracy for annual water yield prediction reached 83.68%.

  5. Error Propagation in Equations for Geochemical Modeling of Radiogenic Isotopes in Two-Component Mixing

    Indian Academy of Sciences (India)

    Surendra P Verma

    2000-03-01

    This paper presents error propagation equations for modeling of radiogenic isotopes during mixing of two components or end-members. These equations can be used to estimate errors in an isotopic ratio in the mixture of two components, as a function of the analytical errors or the total errors of geological field sampling and analytical errors. Two typical cases ("Small errors" and "Large errors") are illustrated for mixing of Sr isotopes. Similar examples can be formulated for the other radiogenic isotopic ratios. Actual isotopic data for sediment and basalt samples from the Cocos plate are also included to further illustrate the use of these equations. The isotopic compositions of the predicted mixtures can be used to constrain the origin of magmas in the central part of the Mexican Volcanic Belt. These examples show the need for high-quality experimental data if they are to be useful in geochemical modeling of magmatic processes.
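
    A numerical version of this kind of propagation is straightforward. The sketch below computes the 87Sr/86Sr ratio of a two-component mixture (weighted by Sr concentration) and its first-order propagated uncertainty; the end-member values and errors are illustrative, not the paper's data.

      import numpy as np

      def mix_ratio(f, Ca, Cb, Ra, Rb):
          # 87Sr/86Sr of a mixture with mass fraction f of component A.
          return (f * Ca * Ra + (1 - f) * Cb * Rb) / (f * Ca + (1 - f) * Cb)

      def propagated_sigma(params, sigmas, h=1e-6):
          # First-order Gaussian propagation with central differences.
          p = np.asarray(params, float)
          var = 0.0
          for i, s in enumerate(sigmas):
              dp = np.zeros_like(p)
              dp[i] = h * max(abs(p[i]), 1.0)
              grad = (mix_ratio(*(p + dp)) - mix_ratio(*(p - dp))) / (2 * dp[i])
              var += (grad * s) ** 2
          return np.sqrt(var)

      # f, Sr_A (ppm), Sr_B (ppm), ratio_A, ratio_B -- illustrative values
      params = (0.3, 400.0, 150.0, 0.7035, 0.7090)
      sigmas = (0.01, 8.0, 3.0, 2e-5, 2e-5)
      print(mix_ratio(*params), "+/-", propagated_sigma(params, sigmas))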

  6. Development and estimation of a semi-compensatory model with flexible error structure

    DEFF Research Database (Denmark)

    Kaplan, Sigal; Shiftan, Yoram; Bekhor, Shlomo

    …a disadvantage of current semi-compensatory models versus compensatory models is their behaviorally non-realistic assumption of an independent error structure. This study proposes a novel semi-compensatory model incorporating a flexible error structure. Specifically, the model represents a sequence… -response model and the utility-based choice by alternatively (i) a nested-logit model and (ii) an error-component logit. In order to test the suggested methodology, the model was estimated for a sample of 1,893 ranked choices and respective threshold values from 631 students who participated in a web-based two…

  7. FUZZY MODEL OPTIMIZATION FOR TIME SERIES DATA USING A TRANSLATION IN THE EXTENT OF MEAN ERROR

    Directory of Open Access Journals (Sweden)

    Nurhayadi

    2014-01-01

    Full Text Available Recently, many researchers have written about the prediction of stock prices, electricity load demand and academic enrollment forecasting using fuzzy methods. However, in general, such modeling does not yet consider the model's position relative to the actual data, which means that error is not handled optimally. Error that is not managed well can reduce the accuracy of the forecasting. Therefore, this paper discusses reducing error using model translation. The error to be reduced is the Mean Square Error (MSE). Here, the analysis is done mathematically and the empirical study is done by applying translation to a fuzzy model for enrollment forecasting at the University of Alabama. The results of this analysis show that translation in the extent of the mean error can reduce the MSE.
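
    The effect of translating a model toward the data can be seen from the bias-variance decomposition MSE = variance + bias²: shifting forecasts by their mean error removes the bias term exactly. A minimal numeric check, with made-up enrollment-like numbers:

      import numpy as np

      rng = np.random.default_rng(3)
      actual = 15000 + 300 * np.arange(20) + rng.normal(0, 150, 20)
      forecast = actual + 250 + rng.normal(0, 100, 20)    # biased model output

      shift = np.mean(forecast - actual)                  # mean error
      mse = np.mean((forecast - actual) ** 2)
      mse_translated = np.mean((forecast - shift - actual) ** 2)
      print(mse, mse_translated)        # drops by roughly shift**2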

  8. Spindle Thermal Error Optimization Modeling of a Five-axis Machine Tool

    Institute of Scientific and Technical Information of China (English)

    Qianjian GUO; Shuo FAN; Rufeng XU; Xiang CHENG; Guoyong ZHAO; Jianguo YANG

    2017-01-01

    Aiming at the problem of low machining accuracy and uncontrollable thermal errors of NC machine tools, spindle thermal error measurement, modeling and compensation of a two turntable five-axis machine tool are researched. Measurement experiments of heat sources and thermal errors are carried out, and the GRA (grey relational analysis) method is introduced into the selection of temperature variables used for thermal error modeling. In order to analyze the influence of different heat sources on spindle thermal errors, an ANN (artificial neural network) model is presented, and the ABC (artificial bee colony) algorithm is introduced to train the link weights of the ANN; a new ABC-NN (artificial bee colony-based neural network) modeling method is proposed and used in the prediction of spindle thermal errors. In order to test the prediction performance of the ABC-NN model, an experiment system is developed; the prediction results of LSR (least squares regression), ANN and ABC-NN are compared with the measurement results of spindle thermal errors. Experiment results show that the prediction accuracy of the ABC-NN model is higher than LSR and ANN, and the residual error is smaller than 3 μm; the new modeling method is feasible. The proposed research provides instruction to compensate thermal errors and improve machining accuracy of NC machine tools.

  9. Spindle Thermal Error Optimization Modeling of a Five-axis Machine Tool

    Science.gov (United States)

    Guo, Qianjian; Fan, Shuo; Xu, Rufeng; Cheng, Xiang; Zhao, Guoyong; Yang, Jianguo

    2017-03-01

    Aiming at the problem of low machining accuracy and uncontrollable thermal errors of NC machine tools, spindle thermal error measurement, modeling and compensation of a two turntable five-axis machine tool are researched. Measurement experiment of heat sources and thermal errors are carried out, and GRA(grey relational analysis) method is introduced into the selection of temperature variables used for thermal error modeling. In order to analyze the influence of different heat sources on spindle thermal errors, an ANN (artificial neural network) model is presented, and ABC(artificial bee colony) algorithm is introduced to train the link weights of ANN, a new ABC-NN(Artificial bee colony-based neural network) modeling method is proposed and used in the prediction of spindle thermal errors. In order to test the prediction performance of ABC-NN model, an experiment system is developed, the prediction results of LSR (least squares regression), ANN and ABC-NN are compared with the measurement results of spindle thermal errors. Experiment results show that the prediction accuracy of ABC-NN model is higher than LSR and ANN, and the residual error is smaller than 3 μm, the new modeling method is feasible. The proposed research provides instruction to compensate thermal errors and improve machining accuracy of NC machine tools.
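
    The grey relational analysis step, used here to rank temperature variables against the measured thermal drift, is compact enough to sketch. This simplified version computes the grey relational grade per candidate series (classical GRA takes the min/max deltas over all candidates jointly); the data are synthetic.

      import numpy as np

      def grey_relational_grades(ref, candidates, rho=0.5):
          def norm(v):
              return (v - v.min()) / (v.max() - v.min())
          x0 = norm(np.asarray(ref, float))
          grades = []
          for x in candidates:
              delta = np.abs(x0 - norm(np.asarray(x, float)))
              xi = (delta.min() + rho * delta.max()) / (delta + rho * delta.max())
              grades.append(xi.mean())
          return np.array(grades)

      rng = np.random.default_rng(4)
      t = np.linspace(0, 8, 100)                    # spindle running time, h
      drift = 20 * (1 - np.exp(-t / 2))             # thermal error, um
      temps = [drift / 4 + rng.normal(0, s, 100) for s in (0.1, 0.5, 2.0)]
      print(grey_relational_grades(drift, temps))   # higher = stronger relation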

  10. Error assessment of digital elevation models obtained by interpolation

    Directory of Open Access Journals (Sweden)

    Jean François Mas

    2009-10-01

    Full Text Available Few studies have focused on evaluating the errors inherent in digital elevation models (DEMs). For this reason, the errors of DEMs obtained by different interpolation methods (ARC/INFO, IDRISI, ILWIS and NEW-MIEL) and at different resolutions were evaluated, with the aim of obtaining a more precise representation of the relief. This evaluation of interpolation methods is crucial, considering that DEMs are the most effective way of representing the land surface for terrain analysis and that they are widely used in environmental sciences. The results show that the resolution, the interpolation method and the inputs (contour lines alone or together with drainage data and spot heights) strongly influence the magnitude of the errors generated in the DEM. In this study, carried out using 50 m contour lines in a mountainous area, the most suitable resolution was 30 m. The DEM with the smallest error (root mean square error, RMSE, of 7.3 m) was obtained with ARC/INFO. However, free programs such as NEW-MIEL or ILWIS yielded results with an RMSE of 10 m.
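
    The error assessment itself reduces to interpolating from the source points and computing the RMSE at independent checkpoints. A minimal sketch with scipy, using synthetic terrain standing in for the contour data:

      import numpy as np
      from scipy.interpolate import griddata

      rng = np.random.default_rng(5)

      def terrain(xy):
          return 50 * np.sin(xy[:, 0] / 200.0) + 0.05 * xy[:, 1]

      xy = rng.uniform(0, 1000, (400, 2))        # "contour" sample points
      xy_chk = rng.uniform(0, 1000, (50, 2))     # withheld checkpoints

      for method in ("nearest", "linear", "cubic"):
          z_hat = griddata(xy, terrain(xy), xy_chk, method=method)
          ok = ~np.isnan(z_hat)                  # points outside the hull
          rmse = np.sqrt(np.mean((z_hat[ok] - terrain(xy_chk)[ok]) ** 2))
          print(method, round(float(rmse), 2))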

  11. Empirical analysis and modeling of errors of atmospheric profiles from GPS radio occultation

    Directory of Open Access Journals (Sweden)

    B. Scherllin-Pirscher

    2011-05-01

    Full Text Available The utilization of radio occultation (RO) data in atmospheric studies requires precise knowledge of error characteristics. We present results of an empirical error analysis of GPS radio occultation (RO) bending angle, refractivity, dry pressure, dry geopotential height, and dry temperature. We find very good agreement between data characteristics of different missions (CHAMP, GRACE-A, and Formosat-3/COSMIC (F3C)). In the global mean, observational errors (standard deviation from "true" profiles) at mean tangent point location agree within 0.3 % in bending angle, 0.1 % in refractivity, and 0.2 K in dry temperature at all altitude levels between 4 km and 35 km. Above ≈20 km, the observational errors show a strong seasonal dependence at high latitudes. Larger errors occur in hemispheric wintertime and are associated mainly with background data used in the retrieval process. The comparison between UCAR and WEGC results (both data centers have independent inversion processing chains) reveals different magnitudes of observational errors in atmospheric parameters, which are attributable to different background fields used. Based on the empirical error estimates, we provide a simple analytical error model for GPS RO atmospheric parameters and account for vertical, latitudinal, and seasonal variations. In the model, which spans the altitude range from 4 km to 35 km, a constant error is adopted around the tropopause region amounting to 0.8 % for bending angle, 0.35 % for refractivity, 0.15 % for dry pressure, 10 m for dry geopotential height, and 0.7 K for dry temperature. Below this region the observational error increases following an inverse height power-law and above it increases exponentially. The observational error model is the same for UCAR and WEGC data but due to somewhat different error characteristics below about 10 km and above about 20 km some parameters have to be adjusted. Overall, the observational error model is easily applicable and
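
    The analytical error model described here (constant around the tropopause, inverse height power-law below, exponential growth above) is simple to implement; the break heights, power-law exponent and scale height below are placeholders, not the paper's fitted values.

      import numpy as np

      def ro_error(z, s0, z_bot=10.0, z_top=20.0, q=2.0, H=8.0):
          # z in km; s0 is the constant error around the tropopause region.
          z = np.asarray(z, float)
          s = np.full_like(z, s0)
          lo, hi = z < z_bot, z > z_top
          s[lo] = s0 * (z[lo] / z_bot) ** (-q)        # inverse height power-law
          s[hi] = s0 * np.exp((z[hi] - z_top) / H)    # exponential increase
          return s

      z = np.linspace(4, 35, 8)
      print(ro_error(z, s0=0.7))    # e.g. dry temperature error in K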

  12. Unitary input DEA model to identify beef cattle production systems typologies

    Directory of Open Access Journals (Sweden)

    Eliane Gonçalves Gomes

    2012-08-01

    Full Text Available The cow-calf beef production sector in Brazil has a wide variety of operating systems. This suggests the identification and characterization of homogeneous production regions, with consequent implementation of actions to achieve its sustainability. In this paper we measured the performance of 21 modal livestock production systems in their cow-calf phase, considering husbandry and production variables. The proposed approach is based on data envelopment analysis (DEA). We used a unitary input DEA model, with apparent input orientation, together with the efficiency measurements generated by the inverted DEA frontier. We identified five typologies of modal production systems using the isoefficiency layers approach. The results showed that knowledge and process management are the most important factors for improving the efficiency of beef cattle production systems.

  13. Efficient uncertainty quantification of a fully nonlinear and dispersive water wave model with random inputs

    DEFF Research Database (Denmark)

    Bigoni, Daniele; Engsig-Karup, Allan Peter; Eskilsson, Claes

    2016-01-01

    …of the evolution of waves. The model is analyzed using random sampling techniques and nonintrusive methods based on generalized polynomial chaos (PC). These methods allow us to accurately and efficiently estimate the probability distribution of the solution and require only the computation of the solution… at different points in the parameter space, allowing for the reuse of existing simulation software. The choice of the applied methods is driven by the number of uncertain input parameters and by the fact that finding the solution of the considered model is computationally intensive. We revisit experimental…

  14. Alternative to Ritt's pseudodivision for finding the input-output equations of multi-output models.

    Science.gov (United States)

    Meshkat, Nicolette; Anderson, Chris; DiStefano, Joseph J

    2012-09-01

    Differential algebra approaches to structural identifiability analysis of a dynamic system model in many instances heavily depend upon Ritt's pseudodivision at an early step in analysis. The pseudodivision algorithm is used to find the characteristic set, of which a subset, the input-output equations, is used for identifiability analysis. A simpler algorithm is proposed for this step, using Gröbner Bases, along with a proof of the method that includes a reduced upper bound on derivative requirements. Efficacy of the new algorithm is illustrated with several biosystem model examples.
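
    The elimination idea can be demonstrated with sympy's Gröbner basis routine on a small hypothetical two-compartment model (y = x1 observed, x2 unobserved), which is not one of the paper's examples: treating derivatives as fresh symbols and using a lex order that puts the unobserved state first, the basis element free of x2 and x2' is the input-output equation.

      from sympy import symbols, groebner

      k01, k21, k12 = symbols('k01 k21 k12')
      y0, y1, y2, u0, u1 = symbols('y0 y1 y2 u0 u1')   # y, y', y'', u, u'
      x2, x2d = symbols('x2 x2d')                      # x2 and x2'

      eqs = [
          y1 + (k01 + k21) * y0 - k12 * x2 - u0,   # x1' = -(k01+k21)x1 + k12 x2 + u
          x2d - k21 * y0 + k12 * x2,               # x2' = k21 x1 - k12 x2
          y2 + (k01 + k21) * y1 - k12 * x2d - u1,  # derivative of the first equation
      ]
      G = groebner(eqs, x2d, x2, y2, y1, y0, u1, u0, order='lex')
      io = [g for g in G.exprs if not g.has(x2) and not g.has(x2d)]
      print(io)   # expect y2 + (k01+k21+k12)*y1 + k01*k12*y0 - u1 - k12*u0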

  15. Dynamics of a Stage Structured Pest Control Model in a Polluted Environment with Pulse Pollution Input

    Directory of Open Access Journals (Sweden)

    Bing Liu

    2013-01-01

    Full Text Available By using a pollution model and an impulsive delay differential equation, we formulate a stage-structured pest control model for the natural enemy in a polluted environment, introducing a constant periodic pollutant input and pest-killing at different fixed moments, and investigate the dynamics of such a system. We assume that only the natural enemies are affected by pollution, and we choose a method to kill the pest without harming natural enemies. Sufficient conditions for global attractivity of the natural enemy-extinction periodic solution and permanence of the system are obtained. Numerical simulations are presented to confirm our theoretical results.

  16. System Identification for Nonlinear FOPDT Model with Input-Dependent Dead-Time

    DEFF Research Database (Denmark)

    Sun, Zhen; Yang, Zhenyu

    2011-01-01

    An on-line iterative method of system identification for a kind of nonlinear FOPDT system is proposed in the paper. The considered nonlinear FOPDT model is an extension of the standard FOPDT model in that its dead time depends on the input signal and the other parameters are time dependent. In order to identify these parameters in an online manner, the considered system is discretized at first. Then, the nonlinear FOPDT identification problem is formulated as a stochastic Mixed Integer Non-Linear Programming problem, and an identification algorithm is proposed by combining the Branch…
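
    Simulating this model class makes the input-dependent dead time concrete. The sketch below discretizes a FOPDT model whose delay shrinks as the input level grows; the delay law and parameter values are invented for illustration, not taken from the paper.

      import numpy as np

      def simulate_fopdt(u, dt=0.1, K=2.0, tau=5.0):
          def dead_samples(ui):
              # Hypothetical law: larger inputs shorten the dead time.
              return int(round((1.0 + 2.0 / (1.0 + abs(ui))) / dt))
          y = np.zeros(len(u))
          for k in range(len(u) - 1):
              d = dead_samples(u[k])
              u_delayed = u[k - d] if k >= d else 0.0
              y[k + 1] = y[k] + dt / tau * (-y[k] + K * u_delayed)
          return y

      u = np.concatenate([np.zeros(50), np.ones(200), 3.0 * np.ones(250)])
      print(simulate_fopdt(u)[::100])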

  17. Meta Modeling of Transmission Error for Spur, Helical and Planetary Gears for Wind Turbine Application

    OpenAIRE

    Irfan, Muhammad

    2013-01-01

    Detailed analysis of drive train dynamics requires accounting for the transmission error that arises in gears. However, direct computation of the transmission error requires a 3-dimensional contact analysis with correct gear geometry, which is impractically computationally intense. Therefore, a simplified representation of the transmission error, a so-called meta-model, is developed. The model is based on the response surface method, and the coefficients of the angle-dependent tran...

  18. Thermal Error Modeling of a Machining Center Using Grey System Theory and Adaptive Network-Based Fuzzy Inference System

    Science.gov (United States)

    Wang, Kun-Chieh; Tseng, Pai-Chung; Lin, Kuo-Ming

    Thermal effect on machine tools is a well-recognized problem in an environment of increasing demand for product quality. The performance of a thermal error compensation system typically depends on the accuracy and robustness of the thermal error model. This work presents a novel thermal error model utilizing two mathematic schemes: the grey system theory and the adaptive network-based fuzzy inference system (ANFIS). First, the measured temperature and deformation results are analyzed via the grey system theory to obtain the influence ranking of temperature ascent on thermal drift of the spindle. Then, using the highly ranked temperature ascents as inputs for the ANFIS and training these data by the hybrid learning rule, a thermal compensation model is constructed. The grey system theory effectively reduces the number of temperature sensors needed on a machine structure for prediction, and the ANFIS has the advantages of good accuracy and robustness. For testing the performance of the proposed ANFIS model, a real-cutting operation test was conducted. Comparison results demonstrate that the modeling scheme of the ANFIS coupled with the grey system theory has good predictive ability.

  19. A Model of the Dynamic Error as a Measurement Result of Instruments Defining the Parameters of Moving Objects

    Science.gov (United States)

    Dichev, D.; Koev, H.; Bakalova, T.; Louda, P.

    2014-08-01

    The present paper considers a new model for the formation of the dynamic error inertial component. It is very effective in the analysis and synthesis of measuring instruments positioned on moving objects and measuring their movement parameters. The block diagram developed within this paper is used as a basis for defining the mathematical model. The block diagram is based on the set-theoretic description of the measuring system, its input and output quantities and the process of dynamic error formation. The model reflects the specific nature of the formation of the dynamic error inertial component. In addition, the model submits to the logical interrelation and sequence of the physical processes that form it. The effectiveness, usefulness and advantages of the model proposed are rooted in the wide range of possibilities it provides in relation to the analysis and synthesis of those measuring instruments, the formulation of algorithms and optimization criteria, as well as the development of new intelligent measuring systems with improved accuracy characteristics in dynamic mode.

  20. Correction of approximation errors with Random Forests applied to modelling of aerosol first indirect effect

    Directory of Open Access Journals (Sweden)

    A. Lipponen

    2013-04-01

    Full Text Available In atmospheric models, due to their computational time or resource limitations, physical processes have to be simulated using reduced models. The use of a reduced model, however, induces errors into the simulation results. These errors are referred to as approximation errors. In this paper, we propose a novel approach to correct these approximation errors. We model the approximation error as an additive noise process in the simulation model and employ the Random Forest (RF) regression algorithm for constructing a computationally low cost predictor for the approximation error. In this way, the overall simulation problem is decomposed into two separate and computationally efficient simulation problems: solution of the reduced model and prediction of the approximation error realization. The approach is tested for handling approximation errors due to a reduced coarse sectional representation of aerosol size distribution in a cloud droplet activation calculation. The results show a significant improvement in the accuracy of the simulation compared to the conventional simulation with a reduced model. The proposed approach is rather general and extension of it to different parameterizations or reduced process models that are coupled to geoscientific models is a straightforward task. Another major benefit of this method is that it can be applied to physical processes that are dependent on a large number of variables making them difficult to be parameterized by traditional methods.
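
    The error-correction recipe is generic: run the full and reduced models on paired inputs offline, train a Random Forest on the discrepancy, and add the predicted discrepancy to new reduced-model runs. A toy stand-in with scikit-learn and synthetic models:

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor

      rng = np.random.default_rng(6)

      def full_model(X):       # expensive reference simulation (toy)
          return np.sin(X[:, 0]) * X[:, 1] + 0.3 * X[:, 2] ** 2

      def reduced_model(X):    # cheap approximation (toy)
          return X[:, 0] * X[:, 1]

      X = rng.uniform(-3, 3, (2000, 4))
      rf = RandomForestRegressor(n_estimators=100, random_state=0)
      rf.fit(X, full_model(X) - reduced_model(X))   # learn approximation error

      X_new = rng.uniform(-3, 3, (500, 4))
      corrected = reduced_model(X_new) + rf.predict(X_new)
      for name, pred in (("reduced", reduced_model(X_new)), ("corrected", corrected)):
          print(name, np.sqrt(np.mean((pred - full_model(X_new)) ** 2)))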

  1. Predicting input impedance and efficiency of graphene reconfigurable dipoles using a simple circuit model

    CERN Document Server

    Tamagnone, Michele

    2014-01-01

    An analytical circuit model able to predict the input impedance of reconfigurable graphene plasmonic dipoles is presented. A suitable definition of plasmonic characteristic impedance, employing natural currents, is used for consistent modeling of the antenna-load connection in the circuit. In its purely analytical form, the model shows good agreement with full-wave simulations, and explains the remarkable tuning properties of graphene antennas. Furthermore, using a single full-wave simulation and scaling laws, additional parasitic elements can be determined for a vast parametric space, leading to very accurate modeling. Finally, we show that the modeling approach also allows fair estimation of the radiation efficiency. The approach also applies to thin plasmonic antennas realized using noble metals or semiconductors.

  2. Quantifying uncertainty in climatological fields from GPS radio occultation: an empirical-analytical error model

    Directory of Open Access Journals (Sweden)

    B. Scherllin-Pirscher

    2011-05-01

    Full Text Available Due to the measurement principle of the radio occultation (RO) technique, RO data are highly suitable for climate studies. Single RO profiles can be used to build climatological fields of different atmospheric parameters like bending angle, refractivity, density, pressure, geopotential height, and temperature. RO climatologies are affected by random (statistical) errors, sampling errors, and systematic errors, yielding a total climatological error. Based on empirical error estimates, we provide a simple analytical error model for these error components, which accounts for vertical, latitudinal, and seasonal variations. The vertical structure of each error component is modeled as constant around the tropopause region. Above this region the error increases exponentially; below, the increase follows an inverse height power-law. The statistical error strongly depends on the number of measurements. It is found to be the smallest error component for monthly mean 10° zonal mean climatologies with more than 600 measurements per bin. Owing to the small atmospheric variability there, the sampling error is found to be smallest at low latitudes equatorward of 40°. Beyond 40°, this error increases roughly linearly, with a stronger increase in hemispheric winter than in hemispheric summer. The sampling error model accounts for this hemispheric asymmetry. However, we recommend subtracting the sampling error when using RO climatologies for climate research, since the residual sampling error remaining after such subtraction is estimated to be 50 % of the sampling error for bending angle and 30 % or less for the other atmospheric parameters. The systematic error accounts for potential residual biases in the measurements as well as in the retrieval process and generally dominates the total climatological error. Overall the total error in monthly means is estimated to be smaller than 0.07 % in refractivity and 0.15 K in temperature at low to mid latitudes, increasing towards

  3. Neural correlates of sensory prediction errors in monkeys: evidence for internal models of voluntary self-motion in the cerebellum.

    Science.gov (United States)

    Cullen, Kathleen E; Brooks, Jessica X

    2015-02-01

    During self-motion, the vestibular system makes essential contributions to postural stability and self-motion perception. To ensure accurate perception and motor control, it is critical to distinguish between vestibular sensory inputs that are the result of externally applied motion (exafference) and that are the result of our own actions (reafference). Indeed, although the vestibular sensors encode vestibular afference and reafference with equal fidelity, neurons at the first central stage of sensory processing selectively encode vestibular exafference. The mechanism underlying this reafferent suppression compares the brain's motor-based expectation of sensory feedback with the actual sensory consequences of voluntary self-motion, effectively computing the sensory prediction error (i.e., exafference). It is generally thought that sensory prediction errors are computed in the cerebellum, yet it has been challenging to explicitly demonstrate this. We have recently addressed this question and found that deep cerebellar nuclei neurons explicitly encode sensory prediction errors during self-motion. Importantly, in everyday life, sensory prediction errors occur in response to changes in the effector or world (muscle strength, load, etc.), as well as in response to externally applied sensory stimulation. Accordingly, we hypothesize that altering the relationship between motor commands and the actual movement parameters will result in the updating in the cerebellum-based computation of exafference. If our hypothesis is correct, under these conditions, neuronal responses should initially be increased--consistent with a sudden increase in the sensory prediction error. Then, over time, as the internal model is updated, response modulation should decrease in parallel with a reduction in sensory prediction error, until vestibular reafference is again suppressed. The finding that the internal model predicting the sensory consequences of motor commands adapts for new

  4. Development of an RTK-GPS Positioning Application with an Improved Position Error Model for Smartphones

    Directory of Open Access Journals (Sweden)

    Dongha Lee

    2012-09-01

    Full Text Available This study developed a smartphone application that provides wireless communication, NRTIP client, and RTK processing features, and which can simplify the Network RTK-GPS system while reducing the required cost. A determination method for an error model in Network RTK measurements was proposed, considering both random and autocorrelation errors, to accurately calculate the coordinates measured by the application using state estimation filters. The performance evaluation of the developed application showed that it could perform high-precision real-time positioning, within several centimeters of error range at a frequency of 20 Hz. A Kalman Filter was applied to the coordinates measured from the application, to evaluate the appropriateness of the determination method for an error model, as proposed in this study. The results were more accurate, compared with those of the existing error model, which only considered the random error.

  5. Development of an RTK-GPS positioning application with an improved position error model for smartphones.

    Science.gov (United States)

    Hwang, Jinsang; Yun, Hongsik; Suh, Yongcheol; Cho, Jeongho; Lee, Dongha

    2012-09-25

    This study developed a smartphone application that provides wireless communication, NRTIP client, and RTK processing features, and which can simplify the Network RTK-GPS system while reducing the required cost. A determination method for an error model in Network RTK measurements was proposed, considering both random and autocorrelation errors, to accurately calculate the coordinates measured by the application using state estimation filters. The performance evaluation of the developed application showed that it could perform high-precision real-time positioning, within several centimeters of error range at a frequency of 20 Hz. A Kalman Filter was applied to the coordinates measured from the application, to evaluate the appropriateness of the determination method for an error model, as proposed in this study. The results were more accurate, compared with those of the existing error model, which only considered the random error.
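
    State augmentation is the usual way to let a Kalman filter honour an autocorrelated (here AR(1)) error on top of white noise, which appears to be the gist of the improved error model; the sketch below is a one-dimensional illustration with invented noise levels, not the application's actual filter.

      import numpy as np

      rng = np.random.default_rng(7)
      n, phi = 400, 0.95
      x_true = np.cumsum(rng.normal(0, 0.002, n))    # slowly drifting coordinate
      bias = np.zeros(n)
      for k in range(1, n):                          # autocorrelated error
          bias[k] = phi * bias[k - 1] + rng.normal(0, 0.005)
      z = x_true + bias + rng.normal(0, 0.01, n)     # plus white (random) error

      F = np.array([[1.0, 0.0], [0.0, phi]])         # state: [coordinate, bias]
      H = np.array([[1.0, 1.0]])
      Q = np.diag([0.002**2, 0.005**2])
      R = np.array([[0.01**2]])
      s, P, est = np.zeros(2), np.eye(2), []
      for zk in z:
          s, P = F @ s, F @ P @ F.T + Q                      # predict
          Kg = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)      # gain
          s = s + (Kg @ (zk - H @ s)).ravel()                # update
          P = (np.eye(2) - Kg @ H) @ P
          est.append(s[0])
      print("RMSE:", np.sqrt(np.mean((np.array(est) - x_true) ** 2)))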

  6. Sensitivity to Estimation Errors in Mean-variance Models

    Institute of Scientific and Technical Information of China (English)

    Zhi-ping Chen; Cai-e Zhao

    2003-01-01

    In order to give a complete and accurate description of the sensitivity of efficient portfolios to changes in assets' expected returns, variances and covariances, the joint effect of estimation errors in means, variances and covariances on the efficient portfolio's weights is investigated in this paper. It is proved that the efficient portfolio's composition is a Lipschitz continuous, differentiable mapping of these parameters under suitable conditions. The change rate of the efficient portfolio's weights with respect to variations in risk-return estimates is derived by estimating the Lipschitz constant. Our general quantitative results show that the efficient portfolio's weights are normally not so sensitive to estimation errors in means and variances. Moreover, we point out those extreme cases which might cause stability problems and how to avoid them in practice. Preliminary numerical results are also provided as an illustration of our theoretical results.
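
    A quick way to probe this sensitivity numerically is to perturb the expected returns and watch the unconstrained mean-variance weights move; the covariance matrix and returns below are illustrative, not from the paper.

      import numpy as np

      def mv_weights(mu, Sigma):
          # Unconstrained mean-variance weights, w ~ Sigma^{-1} mu,
          # normalised to sum to one.
          w = np.linalg.solve(Sigma, mu)
          return w / w.sum()

      mu = np.array([0.08, 0.10, 0.12])
      Sigma = np.array([[0.04, 0.01, 0.00],
                        [0.01, 0.09, 0.02],
                        [0.00, 0.02, 0.16]])
      w0 = mv_weights(mu, Sigma)
      w1 = mv_weights(mu + np.array([0.001, -0.001, 0.0]), Sigma)  # 10 bp shift
      print(w0, np.round(w1 - w0, 4))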

  7. A Fully Bayesian Approach to Improved Calibration and Prediction of Groundwater Models With Structure Error

    Science.gov (United States)

    Xu, T.; Valocchi, A. J.

    2014-12-01

    Effective water resource management typically relies on numerical models to analyse groundwater flow and solute transport processes. These models are usually subject to model structure error due to simplification and/or misrepresentation of the real system. As a result, the model outputs may systematically deviate from measurements, thus violating a key assumption for traditional regression-based calibration and uncertainty analysis. On the other hand, model structure error induced bias can be described statistically in an inductive, data-driven way based on historical model-to-measurement misfit. We adopt a fully Bayesian approach that integrates a Gaussian process error model into the calibration, prediction and uncertainty analysis of groundwater models to account for model structure error. The posterior distributions of parameters of the groundwater model and the Gaussian process error model are jointly inferred using DREAM, an efficient Markov chain Monte Carlo sampler. We test the usefulness of the fully Bayesian approach on a synthetic case study of surface-ground water interaction under changing pumping conditions. We first illustrate through this example that traditional least squares regression without accounting for model structure error yields biased parameter estimates due to parameter compensation as well as biased predictions. In contrast, the Bayesian approach gives less biased parameter estimates. Moreover, the integration of a Gaussian process error model significantly reduces predictive bias and leads to prediction intervals that are more consistent with observations. The results highlight the importance of explicit treatment of model structure error especially in circumstances where subsequent decision-making and risk analysis require accurate prediction and uncertainty quantification. In addition, the data-driven error modelling approach is capable of extracting more information from observation data than using a groundwater model alone.

  8. Analysis of errors in spectral reconstruction with a Laplace transform pair model

    Energy Technology Data Exchange (ETDEWEB)

    Archer, B.R.; Bushong, S.C. (Baylor Univ., Houston, TX (USA). Coll. of Medicine); Wagner, L.K. (Texas Univ., Houston (USA). Dept. of Radiology); Johnston, D.A.; Almond, P.R. (Anderson (M.D.) Hospital and Tumor Inst., Houston, TX (USA))

    1985-05-01

    The sensitivity of a Laplace transform pair model for spectral reconstruction to random errors in attenuation measurements of diagnostic x-ray units has been investigated. No spectral deformation or significant alteration resulted from the simulated attenuation errors. It is concluded that the range of spectral uncertainties to be expected from the application of this model is acceptable for most scientific applications.

  9. Modeling Distance and Bandwidth Dependency of TOA-Based UWB Ranging Error for Positioning

    NARCIS (Netherlands)

    Bellusci, G.; Janssen, G.J.M.; Yan, J.; Tiberius, C.C.J.M.

    2009-01-01

    A statistical model for the range error provided by TOA estimation using UWB signals is given, based on UWB channel measurements between 3.1 and 10.6 GHz. The range error has been modeled as a Gaussian random variable for LOS and as a combination of a Gaussian and an exponential random variable for NLOS.
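
    Sampling from such a two-regime error model is a one-liner per regime: Gaussian for LOS, Gaussian plus a positive exponential excess for NLOS. The scale parameters below are invented, not the measured values.

      import numpy as np

      rng = np.random.default_rng(8)

      def range_error(n, los):
          g = rng.normal(0.0, 0.05 if los else 0.10, n)        # metres
          return g if los else g + rng.exponential(0.30, n)    # NLOS excess delay

      for los in (True, False):
          e = range_error(10000, los)
          print("LOS" if los else "NLOS", round(e.mean(), 3), round(e.std(), 3))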

  10. On the Influence of Weather Forecast Errors in Short-Term Load Forecasting Models

    OpenAIRE

    Fay, D; Ringwood, John; Condon, M.

    2004-01-01

    Weather information is an important factor in load forecasting models. This weather information usually takes the form of actual weather readings. However, online operation of load forecasting models requires the use of weather forecasts, with associated weather forecast errors. A technique is proposed to model weather forecast errors to reflect current accuracy. A load forecasting model is then proposed which combines the forecasts of several load forecasting models. This approach allows the...

  11. Input impedance and reflection coefficient in fractal-like models of asymmetrically branching compliant tubes.

    Science.gov (United States)

    Brown, D J

    1996-07-01

    A mathematical model is described, based on linear transmission line theory, for the computation of hydraulic input impedance spectra in complex, dichotomously branching networks similar to mammalian arterial systems. Conceptually, the networks are constructed from a discretized set of self-similar compliant tubes whose dimensions are described by an integer power law. The model allows specification of the branching geometry, i.e., the daughter-parent branch area ratio and the daughter-daughter area asymmetry ratio, as functions of vessel size. Characteristic impedances of individual vessels are described by linear theory for a fully constrained thick-walled elastic tube. Besides termination impedances and fluid density and viscosity, other model parameters included relative vessel length and phase velocity, each as a function of vessel size (elastic nonuniformity). The primary goal of the study was to examine systematically the effect of fractal branching asymmetry, both degree and location within the network, on the complex input impedance spectrum and reflection coefficient. With progressive branching asymmetry, fractal model spectra exhibit some of the features inherent in natural arterial systems such as the loss of prominent, regularly-occurring maxima and minima; the effect is most apparent at higher frequencies. Marked reduction of the reflection coefficient occurs, due to disparities in wave path length, when branching is asymmetric. Because of path length differences, branching asymmetry near the system input has a far greater effect on minimizing spectrum oscillations and reflections than downstream asymmetry. Fractal-like constructs suggest a means by which arterial trees of realistic complexity might be described, both structurally and functionally.
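
    The recursive structure lends itself to a compact lossless transmission-line sketch: each tube's input impedance follows from its characteristic impedance and the parallel combination of its two (possibly unequal) daughters. The geometry scaling, wave speed, fluid properties and matched termination below are simplifying assumptions, not the paper's parameterisation.

      import numpy as np

      def z_input(f, radius, length, depth, area_ratio=1.2, asym=0.8,
                  c0=5.0, rho=1050.0):
          A = np.pi * radius**2
          Zc = rho * c0 / A                     # characteristic impedance
          gamma = 2j * np.pi * f / c0           # lossless propagation constant
          if depth == 0:
              ZL = Zc                           # matched termination
          else:
              At = area_ratio * A               # total daughter area
              A1 = At / (1.0 + asym)            # larger daughter (asym = A2/A1)
              z1 = z_input(f, np.sqrt(A1 / np.pi), 0.8 * length, depth - 1,
                           area_ratio, asym, c0, rho)
              z2 = z_input(f, np.sqrt((At - A1) / np.pi), 0.8 * length, depth - 1,
                           area_ratio, asym, c0, rho)
              ZL = 1.0 / (1.0 / z1 + 1.0 / z2)  # daughters load in parallel
          t = np.tanh(gamma * length)
          return Zc * (ZL + Zc * t) / (Zc + ZL * t)

      for f in (1.0, 2.0, 5.0):                 # Hz
          print(f, abs(z_input(f, radius=0.01, length=0.1, depth=6)))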

  12. The MARINA model (Model to Assess River Inputs of Nutrients to seAs): Model description and results for China.

    Science.gov (United States)

    Strokal, Maryna; Kroeze, Carolien; Wang, Mengru; Bai, Zhaohai; Ma, Lin

    2016-08-15

    Chinese agriculture has been developing fast towards industrial food production systems that discharge nutrient-rich wastewater into rivers. As a result, nutrient export by rivers has been increasing, resulting in coastal water pollution. We developed a Model to Assess River Inputs of Nutrients to seAs (MARINA) for China. The MARINA Nutrient Model quantifies river export of nutrients by source at the sub-basin scale as a function of human activities on land. MARINA is a downscaled version for China of the Global NEWS-2 (Nutrient Export from WaterSheds) model with an improved approach for nutrient losses from animal production and population. We use the model to quantify dissolved inorganic and organic nitrogen (N) and phosphorus (P) export by six large rivers draining into the Bohai Gulf (Yellow, Hai, Liao), Yellow Sea (Yangtze, Huai) and South China Sea (Pearl) in 1970, 2000 and 2050. We addressed uncertainties in the MARINA Nutrient model. Between 1970 and 2000 river export of dissolved N and P increased by a factor of 2-8 depending on sea and nutrient form. Thus, the risk for coastal eutrophication increased. Direct losses of manure to rivers contributed 60-78% of nutrient inputs to the Bohai Gulf and 20-74% of nutrient inputs to the other seas in 2000. Sewage is an important source of dissolved inorganic P, and synthetic fertilizers of dissolved inorganic N. Over half of the nutrients exported by the Yangtze and Pearl rivers originated from human activities in downstream and middlestream sub-basins. The Yellow River exported up to 70% of dissolved inorganic N and P from downstream sub-basins and of dissolved organic N and P from middlestream sub-basins. Rivers draining into the Bohai Gulf are drier, and thus transport fewer nutrients. For the future we calculate further increases in river export of nutrients. The MARINA Nutrient model quantifies the main sources of coastal water pollution for sub-basins. This information can contribute to formulation of

  13. Empirical analysis and modeling of errors of atmospheric profiles from GPS radio occultation

    Directory of Open Access Journals (Sweden)

    U. Foelsche

    2011-09-01

    Full Text Available The utilization of radio occultation (RO) data in atmospheric studies requires precise knowledge of error characteristics. We present results of an empirical error analysis of GPS RO bending angle, refractivity, dry pressure, dry geopotential height, and dry temperature. We find very good agreement between data characteristics of different missions (CHAMP, GRACE-A, and Formosat-3/COSMIC (F3C)). In the global mean, observational errors (standard deviation from "true" profiles) at mean tangent point location agree within 0.3% in bending angle, 0.1% in refractivity, and 0.2 K in dry temperature at all altitude levels between 4 km and 35 km. Above 35 km the increase of the CHAMP raw bending angle observational error is more pronounced than that of GRACE-A and F3C, leading to a larger observational error of about 1% at 42 km. Above ≈20 km, the observational errors show a strong seasonal dependence at high latitudes. Larger errors occur in hemispheric wintertime and are associated mainly with background data used in the retrieval process, particularly under conditions when the ionospheric residual is large. The comparison between UCAR and WEGC results (both data centers have independent inversion processing chains) reveals different magnitudes of observational errors in atmospheric parameters, which are attributable to different background fields used. Based on the empirical error estimates, we provide a simple analytical error model for GPS RO atmospheric parameters for the altitude range of 4 km to 35 km and up to 50 km for UCAR raw bending angle and refractivity. In the model, which accounts for vertical, latitudinal, and seasonal variations, a constant error is adopted around the tropopause region amounting to 0.8% for bending angle, 0.35% for refractivity, 0.15% for dry pressure, 10 m for dry geopotential height, and 0.7 K for dry temperature. Below this region the observational error increases following an inverse height power-law and above it increases

  14. Error Modeling and Compensation of Circular Motion on a New Circumferential Drilling System

    Directory of Open Access Journals (Sweden)

    Qiang Fang

    2015-01-01

    Full Text Available A new flexible circumferential drilling system is proposed to drill on the fuselage docking area. To analyze the influence of the circular motion error on the drilling accuracy, the nominal forward kinematic model is derived using the Denavit-Hartenberg (D-H) method, and this model is further developed to model the kinematic errors caused by circular positioning error and synchronization error using homogeneous transformation matrices (HTM). A laser tracker is utilized to measure the circular motion error of the two measurement points at both sides. A circular motion compensation experiment is implemented according to the calculated positioning error and synchronization error. Experimental results show that the positioning error and synchronization error were reduced by 65.0% and 58.8%, respectively, due to the adopted compensation, and therefore the circular motion accuracy is substantially improved. Finally, position errors of the two measurement points are shown to have little influence on the measurement result, and the validity of the proposed compensation method is proved.

  15. Nutrient inputs to the Laurentian Great Lakes by source and watershed estimated using SPARROW watershed models

    Science.gov (United States)

    Robertson, Dale M.; Saad, David A.

    2011-01-01

    Nutrient input to the Laurentian Great Lakes continues to cause problems with eutrophication. To reduce the extent and severity of these problems, target nutrient loads were established and Total Maximum Daily Loads are being developed for many tributaries. Without detailed loading information it is difficult to determine if the targets are being met and how to prioritize rehabilitation efforts. To help address these issues, SPAtially Referenced Regressions On Watershed attributes (SPARROW) models were developed for estimating loads and sources of phosphorus (P) and nitrogen (N) from the United States (U.S.) portion of the Great Lakes, Upper Mississippi, Ohio, and Red River Basins. Results indicated that recent U.S. loadings to Lakes Michigan and Ontario are similar to those in the 1980s, whereas loadings to Lakes Superior, Huron, and Erie decreased. Highest loads were from tributaries with the largest watersheds, whereas highest yields were from areas with intense agriculture and large point sources of nutrients. Tributaries were ranked based on their relative loads and yields to each lake. Input from agricultural areas was a significant source of nutrients, contributing ∼33-44% of the P and ∼33-58% of the N, except for areas around Superior with little agriculture. Point sources were also significant, contributing ∼14-44% of the P and 13-34% of the N. Watersheds around Lake Erie contributed nutrients at the highest rate (similar to intensively farmed areas in the Midwest) because they have the largest nutrient inputs and highest delivery ratio.

  16. Self-Triggered Model Predictive Control for Linear Systems Based on Transmission of Control Input Sequences

    Directory of Open Access Journals (Sweden)

    Koichi Kobayashi

    2016-01-01

    Full Text Available A networked control system (NCS) is a control system in which components such as plants and controllers are connected through communication networks. Self-triggered control is a well-known control method for NCSs in which, for sampled-data control systems, both the control input and the aperiodic sampling interval (i.e., the transmission interval) are computed simultaneously. In this paper, a self-triggered model predictive control (MPC) method for discrete-time linear systems with disturbances is proposed. In the conventional MPC method, only the first element of the control input sequence obtained by solving the finite-time optimal control problem is sent and applied to the plant. In the proposed method, the first several elements of the obtained control input sequence are sent to the plant, and each element is sequentially applied to the plant. The number of elements is decided according to the effect of disturbances. In other words, transmission intervals can be controlled. Finally, the effectiveness of the proposed method is shown by numerical simulations.
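
    A scalar toy version of the scheme: solve a finite-horizon quadratic problem in batch form, transmit the sequence, and let the number of applied elements shrink when the estimated disturbance level is high. The triggering rule and all constants are invented for illustration, not the paper's conditions.

      import numpy as np

      a, b, N, q, r = 1.2, 1.0, 10, 1.0, 0.1
      F = a ** np.arange(1, N + 1)                 # free response coefficients
      G = np.zeros((N, N))                         # forced response (lower tri.)
      for i in range(N):
          for j in range(i + 1):
              G[i, j] = a ** (i - j) * b

      def mpc_sequence(x0):
          # Minimise sum q*x_i^2 + r*u_i^2 over the horizon (batch LS form).
          return -np.linalg.solve(G.T @ G * q + r * np.eye(N), q * G.T @ F) * x0

      def n_apply(w_est, n_max=4):
          # Hypothetical trigger: apply more inputs when disturbance is low.
          return max(1, int(n_max / (1.0 + 10.0 * abs(w_est))))

      rng = np.random.default_rng(9)
      x, steps, transmissions = 1.0, 0, 0
      while steps < 60:
          u_seq = mpc_sequence(x)
          transmissions += 1
          m = n_apply(0.02 * rng.standard_normal())   # disturbance estimate
          for u in u_seq[:m]:                         # apply m elements open-loop
              x = a * x + b * u + 0.02 * rng.standard_normal()
              steps += 1
      print("final state:", round(x, 4), "transmissions:", transmissions)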

  17. Input-output modeling for urban energy consumption in Beijing: dynamics and comparison.

    Science.gov (United States)

    Zhang, Lixiao; Hu, Qiuhong; Zhang, Fan

    2014-01-01

    Input-output analysis has been proven to be a powerful instrument for estimating embodied (direct plus indirect) energy usage through economic sectors. Using 9 economic input-output tables of years 1987, 1990, 1992, 1995, 1997, 2000, 2002, 2005, and 2007, this paper analyzes energy flows for the entire city of Beijing and its 30 economic sectors, respectively. Results show that the embodied energy consumption of Beijing increased from 38.85 million tonnes of coal equivalent (Mtce) to 206.2 Mtce over the past twenty years of rapid urbanization; the share of indirect energy consumption in total energy consumption increased from 48% to 76%, suggesting the transition of Beijing from a production-based and manufacturing-dominated economy to a consumption-based and service-dominated economy. Real estate development has shown to be a major driving factor of the growth in indirect energy consumption. The boom and bust of construction activities have been strongly correlated with the increase and decrease of system-side indirect energy consumption. Traditional heavy industries remain the most energy-intensive sectors in the economy. However, the transportation and service sectors have contributed most to the rapid increase in overall energy consumption. The analyses in this paper demonstrate that a system-wide approach such as that based on input-output model can be a useful tool for robust energy policy making.
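
    The embodied (direct plus indirect) accounting rests on the Leontief inverse: total energy intensities are the direct coefficients multiplied by (I − A)⁻¹. A three-sector toy example with made-up coefficients:

      import numpy as np

      # A[i, j]: input from sector i per unit output of sector j (toy numbers).
      A = np.array([[0.20, 0.30, 0.05],
                    [0.05, 0.10, 0.02],
                    [0.10, 0.15, 0.20]])
      e_direct = np.array([1.50, 0.80, 0.30])   # direct energy per unit output

      eps = e_direct @ np.linalg.inv(np.eye(3) - A)   # embodied intensities
      final_demand = np.array([100.0, 50.0, 200.0])
      print("direct use:  ", e_direct @ final_demand)
      print("embodied use:", eps @ final_demand)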

  18. PERMODELAN INDEKS HARGA KONSUMEN INDONESIA DENGAN MENGGUNAKAN MODEL INTERVENSI MULTI INPUT

    KAUST Repository

    Novianti, P.W.

    2017-01-24

    There are some events which are expected to affect CPI fluctuations, i.e. the 1997/1998 financial crisis, fuel price rises, base-year changes, the independence of Timor-Timur (October 1999), and the Tsunami disaster in Aceh (December 2004). During the research period, there were eight fuel price rises and four base-year changes. The objective of this research is to obtain a multi-input intervention model which can describe the magnitude and duration of each event's effect on the CPI. Most intervention studies that have been done contain only an intervention with a single input, either a step or a pulse function. A multi-input intervention model was used in the Indonesian CPI case because there are several events which are expected to affect the CPI. Based on the results, those events did affect the CPI. Additionally, other events, such as Eid in January 1999 and events in April 2002, July 2003, December 2005, and September 2008, affected the CPI too. In general, those events had positive effects on the CPI, except the events of April 2002 and July 2003, which had negative effects.
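
    One common way to estimate such a multi-input intervention model is regression with ARIMA errors, encoding each event as a step or pulse regressor; the sketch below uses statsmodels on a synthetic CPI-like series with invented event dates, not the study's data or model orders.

      import numpy as np
      from statsmodels.tsa.statespace.sarimax import SARIMAX

      rng = np.random.default_rng(10)
      n = 120
      t = np.arange(n)
      step = (t >= 60).astype(float)     # lasting shift, e.g. a fuel price rise
      pulse = (t == 90).astype(float)    # one-off shock, e.g. a disaster month
      y = 100 + 0.3 * t + 4.0 * step - 2.0 * pulse + rng.normal(0, 0.5, n)

      model = SARIMAX(y, exog=np.column_stack([step, pulse]),
                      order=(1, 1, 0), trend='c')
      res = model.fit(disp=False)
      print(res.params)   # includes the estimated step and pulse magnitudes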

  19. Input-output modeling for urban energy consumption in Beijing: dynamics and comparison.

    Directory of Open Access Journals (Sweden)

    Lixiao Zhang

    Full Text Available Input-output analysis has been proven to be a powerful instrument for estimating embodied (direct plus indirect) energy usage through economic sectors. Using 9 economic input-output tables of years 1987, 1990, 1992, 1995, 1997, 2000, 2002, 2005, and 2007, this paper analyzes energy flows for the entire city of Beijing and its 30 economic sectors, respectively. Results show that the embodied energy consumption of Beijing increased from 38.85 million tonnes of coal equivalent (Mtce) to 206.2 Mtce over the past twenty years of rapid urbanization; the share of indirect energy consumption in total energy consumption increased from 48% to 76%, suggesting the transition of Beijing from a production-based and manufacturing-dominated economy to a consumption-based and service-dominated economy. Real estate development has shown to be a major driving factor of the growth in indirect energy consumption. The boom and bust of construction activities have been strongly correlated with the increase and decrease of system-side indirect energy consumption. Traditional heavy industries remain the most energy-intensive sectors in the economy. However, the transportation and service sectors have contributed most to the rapid increase in overall energy consumption. The analyses in this paper demonstrate that a system-wide approach such as that based on input-output model can be a useful tool for robust energy policy making.

  20. Error budget analysis of SCIAMACHY limb ozone profile retrievals using the SCIATRAN model

    Directory of Open Access Journals (Sweden)

    N. Rahpoe

    2013-10-01

    Full Text Available A comprehensive error characterization of SCIAMACHY (Scanning Imaging Absorption Spectrometer for Atmospheric CHartographY) limb ozone profiles has been established based upon SCIATRAN transfer model simulations. The study was carried out in order to evaluate the possible impact of parameter uncertainties, e.g. in albedo, stratospheric aerosol optical extinction, temperature, pressure, pointing, and ozone absorption cross section on the limb ozone retrieval. Together with the a posteriori covariance matrix available from the retrieval, total random and systematic errors are defined for SCIAMACHY ozone profiles. Main error sources are the pointing errors, errors in the knowledge of stratospheric aerosol parameters, and cloud interference. Systematic errors are of the order of 7%, while the random error amounts to 10–15% for most of the stratosphere. These numbers can be used for the interpretation of instrument intercomparison and validation of the SCIAMACHY V 2.5 limb ozone profiles in a rigorous manner.

  1. Investigation of effects of varying model inputs on mercury deposition estimates in the Southwest US

    Directory of Open Access Journals (Sweden)

    T. Myers

    2012-04-01

    Full Text Available The Community Multiscale Air Quality (CMAQ) model version 4.7.1 was used to simulate mercury wet and dry deposition for a domain covering the contiguous United States (US). The simulations used MM5-derived meteorological input fields and the US Environmental Protection Agency (EPA) Clean Air Mercury Rule (CAMR) emissions inventory. Using sensitivity simulations with different boundary conditions and tracer simulations, this investigation focuses on the contributions of boundary concentrations to deposited mercury in the Southwest (SW) US. Concentrations of oxidized mercury species along the boundaries of the domain, in particular the upper layers of the domain, can make significant contributions to the simulated wet and dry deposition of mercury in the SW US. In order to better understand the contributions of boundary conditions to deposition, inert tracer simulations were conducted to quantify the relative amount of an atmospheric constituent transported across the boundaries of the domain at various altitudes and to quantify the amount that reaches and potentially deposits to the land surface in the SW US. Simulations using alternate sets of boundary concentrations, including estimates from global models (the Goddard Earth Observing System-Chem (GEOS-Chem) and the Global/Regional Atmospheric Heavy Metals (GRAHM) model), and alternate meteorological input fields (for different years) are analyzed in this paper. CMAQ dry deposition in the SW US is sensitive to differences in the atmospheric dynamics and atmospheric mercury chemistry parameterizations between the global models used for boundary conditions.

  2. A synaptic input portal for a mapped clock oscillator model of neuronal electrical rhythmic activity

    Science.gov (United States)

    Zariffa, José; Ebden, Mark; Bardakjian, Berj L.

    2004-09-01

    Neuronal electrical oscillations play a central role in a variety of situations, such as epilepsy and learning. The mapped clock oscillator (MCO) model is a general model of transmembrane voltage oscillations in excitable cells. In order to be able to investigate the behaviour of neuronal oscillator populations, we present a neuronal version of the model. The neuronal MCO includes an extra input portal, the synaptic portal, which can reflect the biological relationships in a chemical synapse between the frequency of the presynaptic action potentials and the postsynaptic resting level, which in turn affects the frequency of the postsynaptic potentials. We propose that the synaptic input-output relationship must include a power function in order to be able to reproduce physiological behaviour such as resting level saturation. One linear and two power functions (Butterworth and sigmoidal) are investigated, using the case of an inhibitory synapse. The linear relation was not able to produce physiologically plausible behaviour, whereas both the power function examples were appropriate. The resulting neuronal MCO model can be tailored to a variety of neuronal cell types, and can be used to investigate complex population behaviour, such as the influence of network topology and stochastic resonance.
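    A minimal sketch of a saturating synaptic input-output relation of the kind the abstract argues for is given below; the logistic form, parameter names, and values are hypothetical stand-ins for the paper's power functions, chosen only to show how saturation of the resting level arises.

```python
import numpy as np

def sigmoidal_portal(f_pre, r_min=-70.0, r_max=-55.0, f_half=20.0, k=0.3):
    """Map presynaptic firing rate (Hz) to postsynaptic resting level (mV).
    Saturates at r_min/r_max, reproducing resting-level saturation."""
    return r_min + (r_max - r_min) / (1.0 + np.exp(k * (f_pre - f_half)))

# For an inhibitory synapse, higher presynaptic rates push the resting
# level toward r_min; a purely linear relation would never saturate.
for f in (0, 10, 20, 40, 80):
    print(f, sigmoidal_portal(f))
```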

  3. Investigation of effects of varying model inputs on mercury deposition estimates in the Southwest US

    Directory of Open Access Journals (Sweden)

    T. Myers

    2013-01-01

    Full Text Available The Community Multiscale Air Quality (CMAQ) model version 4.7.1 was used to simulate mercury wet and dry deposition for a domain covering the continental United States (US). The simulations used MM5-derived meteorological input fields and the US Environmental Protection Agency (EPA) Clean Air Mercury Rule (CAMR) emissions inventory. Using sensitivity simulations with different boundary conditions and tracer simulations, this investigation focuses on the contributions of boundary concentrations to deposited mercury in the Southwest (SW) US. Concentrations of oxidized mercury species along the boundaries of the domain, in particular the upper layers of the domain, can make significant contributions to the simulated wet and dry deposition of mercury in the SW US. In order to better understand the contributions of boundary conditions to deposition, inert tracer simulations were conducted to quantify the relative amount of an atmospheric constituent transported across the boundaries of the domain at various altitudes and to quantify the amount that reaches and potentially deposits to the land surface in the SW US. Simulations using alternate sets of boundary concentrations, including estimates from global models (the Goddard Earth Observing System-Chem (GEOS-Chem) and the Global/Regional Atmospheric Heavy Metals (GRAHM) model), and alternate meteorological input fields (for different years) are analyzed in this paper. CMAQ dry deposition in the SW US is sensitive to differences in the atmospheric dynamics and atmospheric mercury chemistry parameterizations between the global models used for boundary conditions.

  4. Rigorous model-based uncertainty quantification with application to terminal ballistics, part I: Systems with controllable inputs and small scatter

    Science.gov (United States)

    Kidane, A.; Lashgari, A.; Li, B.; McKerns, M.; Ortiz, M.; Owhadi, H.; Ravichandran, G.; Stalzer, M.; Sullivan, T. J.

    2012-05-01

    This work is concerned with establishing the feasibility of a data-on-demand (DoD) uncertainty quantification (UQ) protocol based on concentration-of-measure inequalities. Specific aims are to establish the feasibility of the protocol and its basic properties, including the tightness of the predictions afforded by the protocol. The assessment is based on an application to terminal ballistics and a specific system configuration consisting of 6061-T6 aluminum plates struck by spherical S-2 tool steel projectiles at ballistic impact speeds. The system's inputs are the plate thickness and impact velocity, and the perforation area is chosen as the sole performance measure of the system. The objective of the UQ analysis is to certify the lethality of the projectile, i.e., that the projectile perforates the plate with high probability over a prespecified range of impact velocities and plate thicknesses. The net outcome of the UQ analysis is an M/U ratio, or confidence factor, of 2.93, indicative of a small probability of non-perforation of the plate over its entire operating range. The high confidence (>99.9%) in the successful operation of the system afforded by the analysis, together with the small number of tests (40) required to determine the modeling-error diameter, establishes the feasibility of the DoD UQ protocol as a rigorous yet practical approach for model-based certification of complex systems.
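    The link between the confidence factor and the failure probability can be sketched with the McDiarmid-type bound commonly used in concentration-of-measure UQ, P[failure] <= exp(-2 (M/U)^2); the M/U value is from the abstract, while the specific form of the bound is a standard assumption of this literature rather than a quotation of the paper.

```python
import math

M_over_U = 2.93                       # confidence factor reported in the abstract
pof_bound = math.exp(-2.0 * M_over_U ** 2)
print(pof_bound)                      # ~3.5e-08, i.e. well above 99.9% confidence
```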

  5. Laser spot detection-based computer interface system using autoassociative multilayer perceptron with input-to-output mapping-sensitive error back propagation learning algorithm

    Science.gov (United States)

    Jeong, Sungmoon; Jung, Chanwoong; Kim, Cheol-Su; Shim, Jae Hoon; Lee, Minho

    2011-08-01

    This paper presents a new computer interface system based on laser spot detection and moving pattern analysis of the detected laser spots in real-time processing. We propose a systematic method that uses either the frame difference of successive input images or an autoassociative multilayer perceptron (AAMLP) to detect laser spots. The AAMLP is applied only to areas of the input images where the frame difference of the successive images is not effective for detecting laser spots. In order to enhance the detection performance, the AAMLP is trained by a new training algorithm that increases the sensitivity of the input-to-output mapping of the AAMLP, allowing a small variation in the input feature of the laser spot image to be successfully indicated. The proposed interface system is also able to keep track of the laser spot and recognize gesture commands. The moving pattern of the laser spot is recognized by using a multilayer perceptron. It is experimentally shown that the proposed computer interface system is fast enough for real-time operation with reliable accuracy.

  6. MODELING AND COMPENSATION TECHNIQUE FOR THE GEOMETRIC ERRORS OF FIVE-AXIS CNC MACHINE TOOLS

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    One of the important trends in precision machining is the development of real-time error compensation techniques. The error compensation for multi-axis CNC machine tools is very difficult and attractive. A model for the geometric error of five-axis CNC machine tools based on multi-body systems is proposed, and the key technique of the compensation, identifying the geometric error parameters, is developed. The simulation of cutting a workpiece to verify the modeling based on multi-body systems is also considered.

  7. Good modeling practice for PAT applications: propagation of input uncertainty and sensitivity analysis.

    Science.gov (United States)

    Sin, Gürkan; Gernaey, Krist V; Lantz, Anna Eliasson

    2009-01-01

    The uncertainty and sensitivity analysis are evaluated for their usefulness as part of the model-building within Process Analytical Technology applications. A mechanistic model describing a batch cultivation of Streptomyces coelicolor for antibiotic production was used as a case study. The input uncertainty resulting from assumptions of the model was propagated using the Monte Carlo procedure to estimate the output uncertainty. The results showed that significant uncertainty exists in the model outputs. Moreover, the uncertainty in the biomass, glucose, ammonium and base-consumption predictions was found to be low compared to the large uncertainty observed in the antibiotic and off-gas CO2 predictions. The output uncertainty was observed to be lower during the exponential growth phase and higher in the stationary and death phases, meaning the model describes some periods better than others. To understand which input parameters are responsible for the output uncertainty, three sensitivity methods (Standardized Regression Coefficients, Morris, and differential analysis) were evaluated and compared. The results from these methods were mostly in agreement with each other and revealed that only a few parameters (about 10) out of a total of 56 were mainly responsible for the output uncertainty. Among these significant parameters, one finds parameters related to fermentation characteristics such as biomass metabolism, chemical equilibria, and mass transfer. Overall, the uncertainty and sensitivity analyses are found promising for helping to build reliable mechanistic models and to interpret the model outputs properly. These tools form part of good modeling practice, which can contribute to successful PAT applications for increased process understanding, operation, and control purposes.
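    A minimal sketch of the Monte Carlo propagation followed by a Standardized Regression Coefficients (SRC) analysis is given below; the toy output function and the three "fermentation" parameters (mu, Y, kLa) are invented stand-ins for the paper's 56-parameter mechanistic model.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
# Hypothetical uncertain inputs: growth rate mu, yield Y, mass transfer kLa.
theta = rng.normal(loc=[0.10, 0.45, 60.0], scale=[0.01, 0.05, 6.0], size=(n, 3))

def model(p):
    mu, Y, kla = p
    # Toy scalar output standing in for, e.g., predicted biomass.
    return Y * np.exp(mu * 24.0) * (kla / (kla + 30.0))

y = np.apply_along_axis(model, 1, theta)
print(y.mean(), y.std())                 # Monte Carlo output uncertainty

# SRCs: regress standardized output on standardized inputs; squared SRCs
# approximate each parameter's share of the output variance.
Z = (theta - theta.mean(0)) / theta.std(0)
w = (y - y.mean()) / y.std()
src, *_ = np.linalg.lstsq(Z, w, rcond=None)
print(src ** 2)
```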

  8. Integration of Error Compensation of Coordinate Measuring Machines into Feature Measurement: Part I—Model Development

    Directory of Open Access Journals (Sweden)

    Roque Calvo

    2016-09-01

    Full Text Available The development of an error compensation model for coordinate measuring machines (CMMs) and its integration into feature measurement is presented. CMMs are widespread and dependable instruments in industry and laboratories for dimensional measurement. From the tip probe sensor to the machine display, there is a complex transformation of probed point coordinates through the geometrical feature model that makes the assessment of accuracy and uncertainty of measurement results difficult. Therefore, error compensation is not standardized, unlike for other, simpler instruments. Detailed coordinate error compensation models are generally based on treating the CMM as a rigid body and require a detailed mapping of the CMM's behavior. In this paper a new type of error compensation model is proposed. It evaluates the error from the vectorial composition of length errors by axis and their integration into the geometrical measurement model. The variability not explained by the model is incorporated into the uncertainty budget. Model parameters are analyzed and linked to the geometrical errors and uncertainty of the CMM response. Next, the outstanding measurement models of flatness, angle, and roundness are developed. The proposed models are useful for measurement improvement with easy integration into CMM signal processing, in particular in industrial environments where built-in solutions are sought. A battery of implementation tests is presented in Part II, where the experimental endorsement of the model is included.

  9. Phase Error Modeling and Its Impact on Precise Orbit Determination of GRACE Satellites

    Directory of Open Access Journals (Sweden)

    Jia Tu

    2012-01-01

    Full Text Available Limiting factors for the precise orbit determination (POD) of low-earth orbit (LEO) satellites using dual-frequency GPS are nowadays mainly encountered in the in-flight phase error modeling. The phase error is modeled as a systematic and a random component, each depending on the direction of GPS signal reception. The systematic part and the standard deviation of the random part in the phase error model are, respectively, estimated by bin-wise mean and standard deviation values of phase postfit residuals computed by orbit determination. By removing the systematic component and adjusting the weight of phase observation data according to the standard deviation of the random component, the orbit can be further improved by the POD approach. The GRACE data of 1–31 January 2006 are processed, and three types of orbit solutions, POD without phase error model correction, POD with mean value correction of the phase error model, and POD with phase error model correction, are obtained. The three-dimensional (3D) orbit improvements derived from phase error model correction are 0.0153 m for GRACE A and 0.0131 m for GRACE B, and the 3D influences arising from the random part of the phase error model are 0.0068 m and 0.0075 m for GRACE A and GRACE B, respectively. Thus the random part of the phase error model cannot be neglected for POD. It is also demonstrated by phase postfit residual analysis, orbit comparison with the JPL precise science orbit, and orbit validation with KBR data that the results derived from POD with phase error model correction are better than the other two types of orbit solutions generated in this paper.
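    The bin-wise decomposition of residuals by reception direction can be sketched as follows; the azimuth/elevation grid, bin sizes, and synthetic residuals are illustrative assumptions, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(2)
az = rng.uniform(0, 360, 10000)        # synthetic reception azimuths (deg)
el = rng.uniform(0, 90, 10000)         # synthetic elevations (deg)
# Synthetic phase postfit residuals (m): a direction-dependent systematic
# signal plus random noise.
res = 0.002 * np.sin(np.radians(az)) + rng.normal(0, 0.003, az.size)

bins_az, bins_el = np.arange(0, 361, 10), np.arange(0, 91, 10)
ia, ie = np.digitize(az, bins_az) - 1, np.digitize(el, bins_el) - 1

systematic = np.zeros((36, 9))
sigma = np.zeros((36, 9))
for i in range(36):
    for j in range(9):
        sel = (ia == i) & (ie == j)
        if sel.any():
            systematic[i, j] = res[sel].mean()  # removed from the observations
            sigma[i, j] = res[sel].std()        # sets per-bin weight 1/sigma^2
print(systematic.max(), sigma.mean())
```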

  10. A modified NARMAX model-based self-tuner with fault tolerance for unknown nonlinear stochastic hybrid systems with an input-output direct feed-through term.

    Science.gov (United States)

    Tsai, Jason S-H; Hsu, Wen-Teng; Lin, Long-Guei; Guo, Shu-Mei; Tann, Joseph W

    2014-01-01

    A modified nonlinear autoregressive moving average with exogenous inputs (NARMAX) model-based state-space self-tuner with fault tolerance is proposed in this paper for an unknown nonlinear stochastic hybrid system with a direct transmission matrix from input to output. Through the off-line observer/Kalman filter identification method, one has a good initial guess of the modified NARMAX model to reduce the on-line system identification process time. Then, based on the modified NARMAX-based system identification, a corresponding adaptive digital control scheme is presented for the unknown continuous-time nonlinear system, with an input-output direct transmission term, which also has measurement and system noises and inaccessible system states. Besides, an effective state-space self-tuner with fault tolerance scheme is presented for the unknown multivariable stochastic system. A quantitative criterion is suggested by comparing the innovation process error estimated by the Kalman filter estimation algorithm, so that a weighting matrix resetting technique, which adjusts and resets the covariance matrices of the parameter estimates obtained by the Kalman filter estimation algorithm, is utilized to achieve parameter estimation for faulty system recovery. Consequently, the proposed method can effectively cope with partially abrupt and/or gradual system faults and input failures via fault detection.

  11. LMI-Based Fuzzy Optimal Variance Control of Airfoil Model Subject to Input Constraints

    Science.gov (United States)

    Swei, Sean S.M.; Ayoubi, Mohammad A.

    2017-01-01

    This paper presents a study of the fuzzy optimal variance control problem for dynamical systems subject to actuator amplitude and rate constraints. Using Takagi-Sugeno fuzzy modeling and the dynamic Parallel Distributed Compensation technique, the stability and the constraints can be cast as a multi-objective optimization problem in the form of Linear Matrix Inequalities. By utilizing the formulations and solutions for the input and output variance constraint problems, we develop a fuzzy full-state feedback controller. The stability and performance of the proposed controller are demonstrated through its application to airfoil flutter suppression.

  12. Determination of growth rates as an input of the stock discount valuation models

    Directory of Open Access Journals (Sweden)

    Momčilović Mirela

    2013-01-01

    Full Text Available When determining the value of stocks with different stock discount valuation models, one of the important inputs is the expected growth rate of dividends, earnings, cash flows, and other relevant parameters of the company. The growth rate can be determined in three basic ways: on the basis of extrapolation of historical data, on the basis of professional assessment by the analysts who follow the business of the company, and on the basis of fundamental indicators of the company. The aim of this paper is to depict the theoretical basis and practical application of the stated methods for growth rate determination, and to indicate their advantages and deficiencies.

  13. A leech model for homeostatic plasticity and motor network recovery after loss of descending inputs.

    Science.gov (United States)

    Lane, Brian J

    2016-04-01

    Motor networks below the site of spinal cord injury (SCI) and their reconfiguration after loss of central inputs are poorly understood but remain of great interest in SCI research. Harley et al. (J Neurophysiol 113: 3610-3622, 2015) report a striking locomotor recovery paradigm in the leech Hirudo verbana with features that are functionally analogous to SCI. They propose that this well-established neurophysiological system could potentially be repurposed to provide a complementary model to investigate basic principles of homeostatic compensation relevant to SCI research.

  14. OOK power model based dynamic error testing for smart electricity meter

    Science.gov (United States)

    Wang, Xuewei; Chen, Jingxia; Yuan, Ruiming; Jia, Xiaolu; Zhu, Meng; Jiang, Zhenyu

    2017-02-01

    This paper formulates the dynamic error testing problem for a smart meter, with consideration and investigation of both the testing signal and the dynamic error testing method. To solve the dynamic error testing problems, the paper establishes an on-off-keying (OOK) testing dynamic current model and an OOK testing dynamic load energy (TDLE) model. Then two types of TDLE sequences and three modes of OOK testing dynamic power are proposed. In addition, a novel algorithm, which helps to solve the traceability problem of dynamic electric energy measurement, is derived for dynamic errors. Based on the above research, OOK TDLE sequence generation equipment was developed and a dynamic error testing system was constructed. Using the testing system, five kinds of meters were tested in the three dynamic power modes. The test results show that the dynamic error is closely related to the dynamic power mode and that the measurement uncertainty is 0.38%.

  15. Adaptive control for an uncertain robotic manipulator with input saturations

    Institute of Scientific and Technical Information of China (English)

    Trong-Toan TRAN; Shuzhi Sam GE; Wei HE

    2016-01-01

    In this paper, we address the control problem of an uncertain robotic manipulator with input saturations, unknown input scalings, and disturbances. For this purpose, a model reference adaptive control-like (MRAC-like) scheme is used to handle the input saturations. The model reference is input-to-state stable (ISS) and driven by the errors between the required control signals and the input saturations. The uncertain parameters are dealt with by using the linear-in-the-parameters property of robotic dynamics, while unknown input scalings and disturbances are handled by a non-regressor based approach. Our design ensures that all the signals in the closed-loop system are bounded, and that the tracking error converges to a compact set which depends on the predetermined bounds of the control inputs. Simulation on a planar elbow manipulator with two joints is provided to illustrate the effectiveness of the proposed controller.

  16. Using Whole-House Field Tests to Empirically Derive Moisture Buffering Model Inputs

    Energy Technology Data Exchange (ETDEWEB)

    Woods, J.; Winkler, J.; Christensen, D.; Hancock, E.

    2014-08-01

    Building energy simulations can be used to predict a building's interior conditions, along with the energy use associated with keeping these conditions comfortable. These models simulate the loads on the building (e.g., internal gains, envelope heat transfer), determine the operation of the space conditioning equipment, and then calculate the building's temperature and humidity throughout the year. The indoor temperature and humidity are affected not only by the loads and the space conditioning equipment, but also by the capacitance of the building materials, which buffer changes in temperature and humidity. This research developed an empirical method to extract whole-house model inputs for use with a more accurate moisture capacitance model (the effective moisture penetration depth model). The experimental approach was to subject the materials in the house to a square-wave relative humidity profile, measure all of the moisture transfer terms (e.g., infiltration, air conditioner condensate) and calculate the only unmeasured term: the moisture absorption into the materials. After validating the method with laboratory measurements, we performed the tests in a field house. A least-squares fit of an analytical solution to the measured moisture absorption curves was used to determine the three independent model parameters representing the moisture buffering potential of this house and its furnishings. Follow-on tests with realistic latent and sensible loads showed good agreement with the derived parameters, especially compared to the commonly-used effective capacitance approach. These results show that the EMPD model, once the inputs are known, is an accurate moisture buffering model.
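    The least-squares fitting step can be sketched with scipy; the single-exponential step response and its parameters (m_inf, tau) below are illustrative stand-ins for the paper's analytical EMPD solution and its three model parameters.

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0, 12, 50)  # hours after the RH step
m_obs = (0.8 * (1 - np.exp(-t / 3.0))
         + np.random.default_rng(3).normal(0, 0.02, t.size))  # synthetic data

def absorption(t, m_inf, tau):
    """Moisture absorbed after a square-wave RH step: m_inf*(1 - exp(-t/tau))."""
    return m_inf * (1.0 - np.exp(-t / tau))

(m_inf, tau), _ = curve_fit(absorption, t, m_obs, p0=(1.0, 1.0))
print(m_inf, tau)  # buffering capacity and time constant of the test house
```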

  17. Using Whole-House Field Tests to Empirically Derive Moisture Buffering Model Inputs

    Energy Technology Data Exchange (ETDEWEB)

    Woods, J. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Winkler, J. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Christensen, D. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Hancock, E. [Mountain Energy Partnership, Longmont, CO (United States)

    2014-08-01

    Building energy simulations can be used to predict a building's interior conditions, along with the energy use associated with keeping these conditions comfortable. These models simulate the loads on the building (e.g., internal gains, envelope heat transfer), determine the operation of the space conditioning equipment, and then calculate the building's temperature and humidity throughout the year. The indoor temperature and humidity are affected not only by the loads and the space conditioning equipment, but also by the capacitance of the building materials, which buffer changes in temperature and humidity. This research developed an empirical method to extract whole-house model inputs for use with a more accurate moisture capacitance model (the effective moisture penetration depth model). The experimental approach was to subject the materials in the house to a square-wave relative humidity profile, measure all of the moisture transfer terms (e.g., infiltration, air conditioner condensate) and calculate the only unmeasured term: the moisture absorption into the materials. After validating the method with laboratory measurements, we performed the tests in a field house. A least-squares fit of an analytical solution to the measured moisture absorption curves was used to determine the three independent model parameters representing the moisture buffering potential of this house and its furnishings. Follow-on tests with realistic latent and sensible loads showed good agreement with the derived parameters, especially compared to the commonly-used effective capacitance approach. These results show that the EMPD model, once the inputs are known, is an accurate moisture buffering model.

  18. Comparison of input parameters regarding rock mass in analytical solution and numerical modelling

    Science.gov (United States)

    Yasitli, N. E.

    2016-12-01

    Characteristics of stress redistribution around a tunnel excavated in rock are of prime importance for an efficient tunnelling operation and for maintaining stability. It is a well-known fact that rock mass properties are the most important factors affecting stability, together with the in-situ stress field and tunnel geometry. Induced stresses and the resultant deformation around a tunnel can be approximated by means of analytical solutions and the application of numerical modelling. However, the success of these methods depends on assumptions and input parameters, which must be representative of the rock mass, whereas laboratory testing yields the mechanical properties of intact rock only. The aim of this paper is to demonstrate the importance of proper representation of rock mass properties as input data for analytical solutions and numerical modelling. For this purpose, intact rock data were converted into rock mass data by using the Hoek-Brown failure criterion and empirical relations. Stress-deformation analyses together with yield zone thickness determination have been carried out by using analytical solutions and numerical analyses with the FLAC3D programme. The analysis results indicate that incomplete and incorrect design causes stability and economic problems in the tunnel. For this reason, analytical data and rock mass data should be used together during tunnel design. In addition, this study was carried out to prove theoretically that numerical modelling results should be applied to the tunnel design for the stability and economy of the support.

  19. Modelling the soil microclimate: does the spatial or temporal resolution of input parameters matter?

    Directory of Open Access Journals (Sweden)

    Anna Carter

    2016-01-01

    Full Text Available The urgency of predicting future impacts of environmental change on vulnerable populations is advancing the development of spatially explicit habitat models. Continental-scale climate and microclimate layers are now widely available. However, most terrestrial organisms exist within microclimate spaces that are very small, relative to the spatial resolution of those layers. We examined the effects of multi-resolution, multi-extent topographic and climate inputs on the accuracy of hourly soil temperature predictions for a small island generated at a very high spatial resolution (<1 m2) using the mechanistic microclimate model in NicheMapR. Achieving an accuracy comparable to lower-resolution, continental-scale microclimate layers (within about 2–3°C of observed values) required the use of daily weather data as well as high resolution topographic layers (elevation, slope, aspect, horizon angles), while inclusion of site-specific soil properties did not markedly improve predictions. Our results suggest that large-extent microclimate layers may not provide accurate estimates of microclimate conditions when the spatial extent of a habitat or other area of interest is similar to or smaller than the spatial resolution of the layers themselves. Thus, effort in sourcing model inputs should be focused on obtaining high resolution terrain data, e.g., via LiDAR or photogrammetry, and local weather information rather than in situ sampling of microclimate characteristics.

  20. Model Predictive Control of Linear Systems over Networks with State and Input Quantizations

    Directory of Open Access Journals (Sweden)

    Xiao-Ming Tang

    2013-01-01

    Full Text Available Although there have been many works on the synthesis and analysis of networked control systems (NCSs) with data quantization, most of the results are developed for the case where the quantizer exists in only one of the transmission links (either the sensor-to-controller link or the controller-to-actuator link). This paper investigates synthesis approaches of model predictive control (MPC) for NCSs subject to data quantization in both links. Firstly, a novel model to describe the state and input quantizations of the NCS is addressed by extending the sector bound approach. Further, from the new model, two synthesis approaches of MPC are developed: one parameterizes the infinite horizon control moves into a single state feedback law, and the other into a free control move followed by the single state feedback law. Finally, stability results that explicitly consider the satisfaction of input and state constraints are presented. A numerical example is given to illustrate the effectiveness of the proposed MPC.

  1. Addressing Conceptual Model Uncertainty in the Evaluation of Model Prediction Errors

    Science.gov (United States)

    Carrera, J.; Pool, M.

    2014-12-01

    Model predictions are uncertain because of errors in model parameters, future forcing terms, and model concepts. The latter remain the largest and most difficult to assess source of uncertainty in long-term model predictions. We first review existing methods to evaluate conceptual model uncertainty. We argue that they are highly sensitive to the ingenuity of the modeler, in the sense that they rely on the modeler's ability to propose alternative model concepts. Worse, we find that the standard practice of stochastic methods leads to poor, potentially biased and often too optimistic, estimation of actual model errors. This is bad news because stochastic methods are purported to properly represent uncertainty. We contend that the problem does not lie in the stochastic approach itself, but in the way it is applied. Specifically, stochastic inversion methodologies, which demand quantitative information, tend to ignore geological understanding, which is conceptually rich. We illustrate some of these problems with the application to the Mar del Plata aquifer, where extensive data are available for nearly a century. Geologically based models, where spatial variability is handled through zonation, yield calibration fits similar to geostatistically based models, but much better predictions. In fact, the appearance of the stochastic T fields is similar to the geologically based models only in areas with a high density of data. We take this finding to illustrate the ability of stochastic models to accommodate many data, but also, ironically, their inability to address conceptual model uncertainty. In fact, stochastic model realizations tend to be too close to the "most likely" one (i.e., they do not really realize the full conceptual uncertainty). The second part of the presentation is devoted to arguing that acknowledging model uncertainty may lead to qualitatively different decisions than just working with "most likely" model predictions. Therefore, efforts should concentrate on

  2. Unravelling the Sources of Climate Model Errors in Subpolar Gyre Sea-Surface Temperatures

    Science.gov (United States)

    Rubino, Angelo; Zanchettin, Davide

    2017-04-01

    Climate model biases are systematic errors affecting geophysical quantities simulated by coupled general circulation models and Earth system models against observational targets. In this regard, biases affecting sea-surface temperatures (SSTs) are a major concern due to the crucial role of SST in the dynamical coupling between the atmosphere and the ocean, and for the associated variability. Strong SST biases can be detrimental to the overall quality of historical climate simulations; they contribute to uncertainty in simulated features of climate scenarios and complicate the initialization and assessment of decadal climate prediction experiments. We use a dynamic linear model developed within a Bayesian hierarchical framework for a probabilistic assessment of spatial and temporal characteristics of SST errors in ensemble climate simulations. In our formulation, the statistical model distinguishes between local and regional errors, further separated into seasonal and non-seasonal components. This contribution, based on a framework developed for the study of biases in the Tropical Atlantic within the European project PREFACE, focuses on the subpolar gyre region in the North Atlantic Ocean, where climate models are typically affected by a strong cold SST bias. We will use results from an application of our statistical model to an ensemble of hindcasts with the MiKlip prototype system for decadal climate predictions to demonstrate how the decadal evolution of model errors toward the subpolar gyre cold bias is substantially shaped by a seasonal signal. We will demonstrate that such a seasonal signal stems from the superposition of propagating large-scale seasonal errors originating in the Labrador Sea and of large-scale as well as mesoscale seasonal errors originating along the Gulf Stream. Based on these results, we will discuss how the pronounced distinctive characteristics of the different error components distinguished by our model allow for a clearer connection

  3. A New Method for Identifying the Model Error of Adjustment System

    Institute of Scientific and Technical Information of China (English)

    TAO Benzao; ZHANG Chaoyu

    2005-01-01

    Some theoretical problems affecting parameter estimation are discussed in this paper. The influence of, and transformation between, errors of the stochastic and functional models is pointed out as well. For choosing the best adjustment model, a formula for estimating and identifying the model error, different from the existing methods in the literature, is proposed. On the basis of the proposed formula, an effective approach for selecting the best model of the adjustment system is given.

  4. Removing Specification Errors from the Usual Formulation of Binary Choice Models

    Directory of Open Access Journals (Sweden)

    P.A.V.B. Swamy

    2016-06-01

    Full Text Available We develop a procedure for removing four major specification errors from the usual formulation of binary choice models. The model that results from this procedure is different from the conventional probit and logit models. This difference arises as a direct consequence of our relaxation of the usual assumption that omitted regressors constituting the error term of a latent linear regression model do not introduce omitted regressor biases into the coefficients of the included regressors.

  5. General expression of double ellipsoidal heat source model and its error analysis

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    In order to analyze the maximum power density error for different heat flux distribution parameter values in the double ellipsoidal heat source model, a general expression of the double ellipsoidal heat source model was derived from the Goldak double ellipsoidal heat source model, and the error of the maximum power density was analyzed on this foundation. The calculation error of thermal cycling parameters caused by the maximum power density error was compared quantitatively by numerical simulation. The results show that, to guarantee the accuracy of welding numerical simulation, it is better to introduce an error correction coefficient into the Goldak double ellipsoidal heat source model expression. Moreover, the heat flux distribution parameter should take a higher value for higher power density welding methods.
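    For reference, the Goldak double ellipsoidal heat source in its common notation (front quadrant shown; the rear quadrant replaces f_f and c_f by f_r and c_r, with f_f + f_r = 2) is

```latex
q_f(x,y,z) \;=\; \frac{6\sqrt{3}\, f_f\, Q}{\pi\sqrt{\pi}\, a\, b\, c_f}
\exp\!\left(-\frac{3x^2}{a^2}-\frac{3y^2}{b^2}-\frac{3z^2}{c_f^2}\right),
\qquad f_f + f_r = 2,
```

    so the maximum power density at the origin, q(0,0,0) = 6\sqrt{3} f_f Q / (\pi\sqrt{\pi}\, a b c_f), depends directly on the distribution parameters a, b, c_f, which is why errors in those parameters propagate into the maximum power density analyzed in the abstract.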

  6. Modeling and Experimental Study of Soft Error Propagation Based on Cellular Automaton

    OpenAIRE

    2016-01-01

    Aiming to estimate SEE soft error performance of complex electronic systems, a soft error propagation model based on cellular automaton is proposed and an estimation methodology based on circuit partitioning and error propagation is presented. Simulations indicate that different fault grade jamming and different coupling factors between cells are the main parameters influencing the vulnerability of the system. Accelerated radiation experiments have been developed to determine the main paramet...

  7. Fuzzy portfolio model with fuzzy-input return rates and fuzzy-output proportions

    Science.gov (United States)

    Tsaur, Ruey-Chyn

    2015-02-01

    In the finance market, a short-term investment strategy is usually applied in portfolio selection in order to reduce investment risk; however, the economy is uncertain and the investment period is short. Further, an investor has incomplete information for selecting a portfolio with crisp proportions for each chosen security. In this paper we present a new method of constructing a fuzzy portfolio model for the parameters of fuzzy-input return rates and fuzzy-output proportions, based on possibilistic mean-standard deviation models. Furthermore, we consider both excess and shortage of investment in different economic periods by using a fuzzy constraint for the sum of the fuzzy proportions, and we also refer to the risks of securities investment and the vagueness of incomplete information during periods of economic depression for the portfolio selection. Finally, we present a numerical example of a portfolio selection problem to illustrate the proposed model, and a sensitivity analysis is realised based on the results.

  8. Macroscopic model and truncation error of discrete Boltzmann method

    Science.gov (United States)

    Hwang, Yao-Hsin

    2016-10-01

    A derivation procedure to secure the macroscopically equivalent equation and its truncation error for the discrete Boltzmann method is proffered in this paper. Essential presumptions of two time scales and a small parameter in the Chapman-Enskog expansion are disposed of in the present formulation. The equilibrium particle distribution function, instead of its original non-equilibrium form, is chosen as the key variable in the derivation route. Taylor series expansion encompassing fundamental algebraic manipulations is adequate to realize the macroscopically differential counterpart. A self-contained and comprehensive practice for the linear one-dimensional convection-diffusion equation is illustrated in detail. Numerical validations of the incurred truncation error in one- and two-dimensional cases with various distribution functions are conducted to verify the present formulation. As shown in the computational results, excellent agreement between numerical results and theoretical predictions is found in the test problems. Straightforward extensions to more complicated systems including convection-diffusion-reaction, multi-relaxation times in the collision operator, as well as multi-dimensional Navier-Stokes equations are also exposed in the Appendix to point out its expediency in solving complicated flow problems.

  9. Maneuver Performance Assessment of the Cassini Spacecraft Through Execution-Error Modeling and Analysis

    Science.gov (United States)

    Wagner, Sean

    2014-01-01

    The Cassini spacecraft has executed nearly 300 maneuvers since 1997, providing ample data for execution-error model updates. With maneuvers through 2017, opportunities remain to improve on the models and remove biases identified in maneuver executions. This manuscript focuses on how execution-error models can be used to judge maneuver performance, while providing a means for detecting performance degradation. Additionally, this paper describes Cassini's execution-error model updates in August 2012. An assessment of Cassini's maneuver performance through OTM-368 on January 5, 2014 is also presented.

  10. Continuous-Discrete Time Prediction-Error Identification Relevant for Linear Model Predictive Control

    DEFF Research Database (Denmark)

    Jørgensen, John Bagterp; Jørgensen, Sten Bay

    2007-01-01

    A prediction-error method tailored for model-based predictive control is presented. The prediction-error methods studied are based on predictions using the Kalman filter and Kalman predictors for a linear discrete-time stochastic state-space model. The linear discrete-time stochastic state-space model is realized from a continuous-discrete-time linear stochastic system specified using transfer functions with time-delays. It is argued that the prediction-error criterion should be selected such that it is compatible with the objective function of the predictive controller in which the model is applied.

  12. Model of Head-Positioning Error Due to Rotational Vibration of Hard Disk Drives

    Science.gov (United States)

    Matsuda, Yasuhiro; Yamaguchi, Takashi; Saegusa, Shozo; Shimizu, Toshihiko; Hamaguchi, Tetsuya

    An analytical model of head-positioning error due to rotational vibration of a hard disk drive is proposed. The model takes into account the rotational vibration of the base plate caused by the reaction force of the head-positioning actuator, the relationship between the rotational vibration and head-track offset, and the sensitivity function of track-following feedback control. Error calculated by the model agrees well with measured error. It is thus concluded that this model can predict the data transfer performance of a disk drive in read mode.

  13. Least-Squares Based and Gradient Based Iterative Parameter Estimation Algorithms for a Class of Linear-in-Parameters Multiple-Input Single-Output Output Error Systems

    Directory of Open Access Journals (Sweden)

    Cheng Wang

    2014-01-01

    Full Text Available The identification of a class of linear-in-parameters multiple-input single-output systems is considered. By using the iterative search, a least-squares based iterative algorithm and a gradient based iterative algorithm are proposed. A nonlinear example is used to verify the effectiveness of the algorithms, and the simulation results show that the least-squares based iterative algorithm can produce more accurate parameter estimates than the gradient based iterative algorithm.
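    A minimal sketch contrasting the two estimator families on a linear-in-parameters MISO model is given below; the regressor basis, true parameters, and noise level are invented for illustration and are not the paper's example.

```python
import numpy as np

rng = np.random.default_rng(4)
N = 500
u1, u2 = rng.normal(size=N), rng.normal(size=N)
Phi = np.column_stack([u1, u2, u1 * u2, u2 ** 2])  # linear-in-parameters regressors
theta_true = np.array([1.5, -0.7, 0.4, 0.2])
y = Phi @ theta_true + rng.normal(0, 0.1, N)

# Least-squares based estimate (one shot here; in output-error settings the
# regressor is rebuilt from the previous estimate and the step is iterated).
theta_ls, *_ = np.linalg.lstsq(Phi, y, rcond=None)

# Gradient based iterative estimate:
# theta_{k+1} = theta_k + mu * Phi^T (y - Phi @ theta_k)
theta_g = np.zeros(4)
mu = 1.0 / np.linalg.norm(Phi, 2) ** 2  # step size from the largest singular value
for _ in range(200):
    theta_g = theta_g + mu * Phi.T @ (y - Phi @ theta_g)

print(theta_ls, theta_g)  # LS is typically more accurate per iteration count
```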

  14. Trans-dimensional matched-field geoacoustic inversion with hierarchical error models and interacting Markov chains.

    Science.gov (United States)

    Dettmer, Jan; Dosso, Stan E

    2012-10-01

    This paper develops a trans-dimensional approach to matched-field geoacoustic inversion, including interacting Markov chains to improve efficiency and an autoregressive model to account for correlated errors. The trans-dimensional approach and hierarchical seabed model allows inversion without assuming any particular parametrization by relaxing model specification to a range of plausible seabed models (e.g., in this case, the number of sediment layers is an unknown parameter). Data errors are addressed by sampling statistical error-distribution parameters, including correlated errors (covariance), by applying a hierarchical autoregressive error model. The well-known difficulty of low acceptance rates for trans-dimensional jumps is addressed with interacting Markov chains, resulting in a substantial increase in efficiency. The trans-dimensional seabed model and the hierarchical error model relax the degree of prior assumptions required in the inversion, resulting in substantially improved (more realistic) uncertainty estimates and a more automated algorithm. In particular, the approach gives seabed parameter uncertainty estimates that account for uncertainty due to prior model choice (layering and data error statistics). The approach is applied to data measured on a vertical array in the Mediterranean Sea.
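    A sketch of the AR(1) ingredient, the conditional Gaussian log-likelihood after whitening correlated residuals, is given below; the AR coefficient name a, the noise scale sigma, and the toy residuals are illustrative assumptions, and in the paper these error parameters are sampled hierarchically alongside the geoacoustic parameters.

```python
import numpy as np

def ar1_log_likelihood(residuals, a, sigma):
    """Conditional Gaussian log-likelihood with AR(1) whitening of residuals."""
    innov = residuals[1:] - a * residuals[:-1]   # whitened innovations
    n = innov.size
    return (-0.5 * n * np.log(2 * np.pi * sigma ** 2)
            - 0.5 * np.sum(innov ** 2) / sigma ** 2)

# Toy correlated residual series standing in for matched-field data misfits.
r = np.random.default_rng(5).normal(size=200).cumsum() * 0.01
print(ar1_log_likelihood(r, a=0.9, sigma=0.05))
```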

  15. Error-preceding brain activity reflects (mal-)adaptive adjustments of cognitive control: a modeling study.

    Science.gov (United States)

    Steinhauser, Marco; Eichele, Heike; Juvodden, Hilde T; Huster, Rene J; Ullsperger, Markus; Eichele, Tom

    2012-01-01

    Errors in choice tasks are preceded by gradual changes in brain activity presumably related to fluctuations in cognitive control that promote the occurrence of errors. In the present paper, we use connectionist modeling to explore the hypothesis that these fluctuations reflect (mal-)adaptive adjustments of cognitive control. We considered ERP data from a study in which the probability of conflict in an Eriksen-flanker task was manipulated in sub-blocks of trials. Errors in these data were preceded by a gradual decline of N2 amplitude. After fitting a connectionist model of conflict adaptation to the data, we analyzed simulated N2 amplitude, simulated response times (RTs), and stimulus history preceding errors in the model, and found that the model produced the same pattern as obtained in the empirical data. Moreover, this pattern is not found in alternative models in which cognitive control varies randomly or in an oscillating manner. Our simulations suggest that the decline of N2 amplitude preceding errors reflects an increasing adaptation of cognitive control to specific task demands, which leads to an error when these task demands change. Taken together, these results provide evidence that error-preceding brain activity can reflect adaptive adjustments rather than unsystematic fluctuations of cognitive control, and therefore, that these errors are actually a consequence of the adaptiveness of human cognition.

  16. Effects of error covariance structure on estimation of model averaging weights and predictive performance

    Science.gov (United States)

    Lu, Dan; Ye, Ming; Meyer, Philip D.; Curtis, Gary P.; Shi, Xiaoqing; Niu, Xu-Feng; Yabusaki, Steve B.

    2013-01-01

    When conducting model averaging for assessing groundwater conceptual model uncertainty, the averaging weights are often evaluated using model selection criteria such as AIC, AICc, BIC, and KIC (Akaike Information Criterion, Corrected Akaike Information Criterion, Bayesian Information Criterion, and Kashyap Information Criterion, respectively). However, this method often leads to an unrealistic situation in which the best model receives overwhelmingly large averaging weight (close to 100%), which cannot be justified by available data and knowledge. It was found in this study that this problem was caused by using the covariance matrix, Cε, of measurement errors for estimating the negative log likelihood function common to all the model selection criteria. This problem can be resolved by using the covariance matrix, Cek, of total errors (including model errors and measurement errors) to account for the correlation between the total errors. An iterative two-stage method was developed in the context of maximum likelihood inverse modeling to iteratively infer the unknown Cek from the residuals during model calibration. The inferred Cek was then used in the evaluation of model selection criteria and model averaging weights. While this method was limited to serial data using time series techniques in this study, it can be extended to spatial data using geostatistical techniques. The method was first evaluated in a synthetic study and then applied to an experimental study, in which alternative surface complexation models were developed to simulate column experiments of uranium reactive transport. It was found that the total errors of the alternative models were temporally correlated due to the model errors. The iterative two-stage method using Cek resolved the problem that the best model receives 100% model averaging weight, and the resulting model averaging weights were supported by the calibration results and physical understanding of the alternative models. Using Cek
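    The weight collapse the abstract describes follows directly from how information-criterion averaging weights are computed; the sketch below uses invented criterion values to show how modest differences map to exponentially dominant weights.

```python
import numpy as np

def averaging_weights(ic_values):
    """w_k = exp(-0.5*Delta_k) / sum_j exp(-0.5*Delta_j),
    with Delta_k = IC_k - min(IC)."""
    d = np.asarray(ic_values) - np.min(ic_values)
    w = np.exp(-0.5 * d)
    return w / w.sum()

# A difference of only ~15 criterion units already gives the best model
# essentially all of the weight (~99.9%).
print(averaging_weights([230.0, 245.0, 252.0]))
```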

  17. Effects of error covariance structure on estimation of model averaging weights and predictive performance

    Energy Technology Data Exchange (ETDEWEB)

    Lu, Dan; Ye, Ming; Meyer, Philip D.; Curtis, Gary P.; Shi, Xiaoqing; Niu, Xu-Feng; Yabusaki, Steven B.

    2013-07-23

    When conducting model averaging for assessing groundwater conceptual model uncertainty, the averaging weights are often evaluated using model selection criteria such as AIC, AICc, BIC, and KIC (Akaike Information Criterion, Corrected Akaike Information Criterion, Bayesian Information Criterion, and Kashyap Information Criterion, respectively). However, this method often leads to an unrealistic situation in which the best model receives overwhelmingly large averaging weight (close to 100%), which cannot be justified by available data and knowledge. It was found in this study that this problem was caused by using the covariance matrix, Cε, of measurement errors for estimating the negative log likelihood function common to all the model selection criteria. This problem can be resolved by using the covariance matrix, Cek, of total errors (including model errors and measurement errors) to account for the correlation between the total errors. An iterative two-stage method was developed in the context of maximum likelihood inverse modeling to iteratively infer the unknown Cek from the residuals during model calibration. The inferred Cek was then used in the evaluation of model selection criteria and model averaging weights. While this method was limited to serial data using time series techniques in this study, it can be extended to spatial data using geostatistical techniques. The method was first evaluated in a synthetic study and then applied to an experimental study, in which alternative surface complexation models were developed to simulate column experiments of uranium reactive transport. It was found that the total errors of the alternative models were temporally correlated due to the model errors. The iterative two-stage method using Cek resolved the problem that the best model receives 100% model averaging weight, and the resulting model averaging weights were supported by the calibration results and physical understanding of the alternative models. Using Cek

  18. Effects of error covariance structure on estimation of model averaging weights and predictive performance

    Science.gov (United States)

    Lu, Dan; Ye, Ming; Meyer, Philip D.; Curtis, Gary P.; Shi, Xiaoqing; Niu, Xu-Feng; Yabusaki, Steve B.

    2013-09-01

    When conducting model averaging for assessing groundwater conceptual model uncertainty, the averaging weights are often evaluated using model selection criteria such as AIC, AICc, BIC, and KIC (Akaike Information Criterion, Corrected Akaike Information Criterion, Bayesian Information Criterion, and Kashyap Information Criterion, respectively). However, this method often leads to an unrealistic situation in which the best model receives overwhelmingly large averaging weight (close to 100%), which cannot be justified by available data and knowledge. It was found in this study that this problem was caused by using the covariance matrix, Cɛ, of measurement errors for estimating the negative log likelihood function common to all the model selection criteria. This problem can be resolved by using the covariance matrix, Cek, of total errors (including model errors and measurement errors) to account for the correlation between the total errors. An iterative two-stage method was developed in the context of maximum likelihood inverse modeling to iteratively infer the unknown Cek from the residuals during model calibration. The inferred Cek was then used in the evaluation of model selection criteria and model averaging weights. While this method was limited to serial data using time series techniques in this study, it can be extended to spatial data using geostatistical techniques. The method was first evaluated in a synthetic study and then applied to an experimental study, in which alternative surface complexation models were developed to simulate column experiments of uranium reactive transport. It was found that the total errors of the alternative models were temporally correlated due to the model errors. The iterative two-stage method using Cek resolved the problem that the best model receives 100% model averaging weight, and the resulting model averaging weights were supported by the calibration results and physical understanding of the alternative models. Using Cek

  19. International trade inoperability input-output model (IT-IIM): theory and application.

    Science.gov (United States)

    Jung, Jeesang; Santos, Joost R; Haimes, Yacov Y

    2009-01-01

    The inoperability input-output model (IIM) has been used for analyzing disruptions due to man-made or natural disasters that can adversely affect the operation of economic systems or critical infrastructures. Taking the economic perturbation for each sector as input, the IIM provides the degree of economic production impacts on all industry sectors as the output of the model. The current version of the IIM does not provide a separate analysis for the international trade component of the inoperability. If an important port of entry (e.g., Port of Los Angeles) is disrupted, then international trade inoperability becomes a highly relevant subject for analysis. To complement the current IIM, this article develops the International Trade-IIM (IT-IIM). The IT-IIM investigates the resulting international trade inoperability for all industry sectors resulting from disruptions to a major port of entry. Similar to traditional IIM analysis, the inoperability metrics that the IT-IIM provides can be used to prioritize economic sectors based on the losses they could potentially incur. The IT-IIM is used to analyze two types of direct perturbations: (1) the reduced capacity of ports of entry, including harbors and airports (e.g., a shutdown of any port of entry); and (2) restrictions on commercial goods that foreign countries trade with the base nation (e.g., an embargo).
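    The demand-side IIM calculation can be sketched in a few lines: inoperability q satisfies q = A* q + c*, solved as q = (I - A*)^{-1} c*. The 3-sector interdependency matrix and perturbation vector below are invented for illustration.

```python
import numpy as np

A_star = np.array([[0.05, 0.10, 0.02],   # normalized interdependency matrix
                   [0.08, 0.04, 0.06],
                   [0.03, 0.07, 0.05]])
c_star = np.array([0.10, 0.0, 0.0])      # direct demand perturbation to sector 1

q = np.linalg.solve(np.eye(3) - A_star, c_star)
print(q)  # inoperability (fraction of lost production) cascading to all sectors
```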

  20. Multiregional input-output model for the evaluation of Spanish water flows.

    Science.gov (United States)

    Cazcarro, Ignacio; Duarte, Rosa; Sánchez Chóliz, Julio

    2013-01-01

    We construct a multiregional input-output model for Spain in order to evaluate the pressures on water resources, virtual water flows, and water footprints of the regions, and the water impact of trade relationships within Spain and abroad. The study is framed among the interregional input-output models constructed to study water flows and impacts of regions in China, Australia, Mexico, and the UK. To build our database, we reconcile regional IO tables, the national and regional accounts of Spain, and trade and water data. Results show an important imbalance between the origin of water resources and their final destination, with significant water pressures in the South, the Mediterranean, and some central regions. The most populated and dynamic regions of Madrid and Barcelona are important drivers of water consumption in Spain. The main virtual water exporters are the southern and central agrarian regions: Andalusia, Castile-La Mancha, Castile-Leon, Aragon, and Extremadura, while the main virtual water importers are the industrialized regions of Madrid, the Basque Country, and the Mediterranean coast. The paper shows the different locations of direct and indirect consumers of water in Spain and how the economic trade and consumption patterns of certain areas have significant impacts on the availability of water resources in other, often drier, regions.

  1. A Water-Withdrawal Input-Output Model of the Indian Economy.

    Science.gov (United States)

    Bogra, Shelly; Bakshi, Bhavik R; Mathur, Ritu

    2016-02-02

    Managing freshwater allocation for a highly populated and growing economy like India can benefit from knowledge about the effect of economic activities. This study transforms the 2003-2004 economic input-output (IO) table of India into a water-withdrawal input-output model to quantify direct and indirect flows. This unique model is based on a comprehensive database compiled from diverse public sources and estimates the direct and indirect water withdrawal of all economic sectors. It distinguishes between green (rainfall), blue (surface and ground), and scarce groundwater. Results indicate that the total direct water withdrawal is nearly 3052 billion cubic meters (BCM) and that 96% of this is used in the agriculture sectors, with direct green water contributing about 1145 BCM, excluding forestry. Apart from the 727 BCM of direct blue water withdrawn for agriculture, other significant users include "Electricity" with 64 BCM, "Water supply" with 44 BCM, and other industrial sectors with nearly 14 BCM. "Construction", "Miscellaneous food products", "Hotels and restaurants", and "Paper, paper products, and newsprint" are other significant indirect withdrawers. The net virtual water import is found to be insignificant compared to the direct water used in agriculture nationally, while the scarce groundwater associated with crops is largely contributed by the northern states.

  2. Integrating a calibrated groundwater flow model with error-correcting data-driven models to improve predictions

    Science.gov (United States)

    Demissie, Yonas K.; Valocchi, Albert J.; Minsker, Barbara S.; Bailey, Barbara A.

    2009-01-01

    Physically-based groundwater models (PBMs), such as MODFLOW, contain numerous parameters which are usually estimated using statistically-based methods that assume the underlying error is white noise. However, because of the practical difficulties of representing all the natural subsurface complexity, numerical simulations are often prone to large uncertainties that can result in both random and systematic model error. The systematic errors can be attributed to conceptual, parameter, and measurement uncertainty, and it can often be difficult to determine their physical cause. In this paper, we have developed a framework to handle systematic error in physically-based groundwater flow model applications that uses error-correcting data-driven models (DDMs) in a complementary fashion. The data-driven models are separately developed to predict the MODFLOW head prediction errors, which are subsequently used to update the head predictions at existing and proposed observation wells. The framework is evaluated using a hypothetical case study developed based on a phytoremediation site at the Argonne National Laboratory. This case study includes structural, parameter, and measurement uncertainties. In terms of bias and prediction uncertainty range, the complementary modeling framework has shown substantial improvements (up to 64% reduction in RMSE and prediction error ranges) over the original MODFLOW model, in both the calibration and the verification periods. Moreover, the spatial and temporal correlations of the prediction errors are significantly reduced, resulting in reduced local biases and structures in the model prediction errors.
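
    A minimal sketch (Python) of the complementary idea: a data-driven model is trained on the physical model's head-prediction residuals and then used to correct new predictions. The synthetic arrays stand in for MODFLOW output and field observations, and the random-forest regressor is one plausible choice of DDM, not necessarily the one used in the paper; a real application would evaluate on held-out wells rather than the training points.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(0)
        X = rng.uniform(size=(200, 3))                  # e.g., location, time, pumping rate
        head_observed = X @ np.array([2.0, -1.0, 0.5]) + 0.3 * np.sin(6 * X[:, 0])
        head_physical = X @ np.array([2.0, -1.0, 0.5])  # physical model misses the sin term

        # Learn the systematic error of the physical model from its residuals.
        residual_model = RandomForestRegressor(n_estimators=200, random_state=0)
        residual_model.fit(X, head_observed - head_physical)

        head_corrected = head_physical + residual_model.predict(X)
        print(np.sqrt(np.mean((head_observed - head_corrected) ** 2)))  # reduced in-sample RMSE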

  3. An Error Model for the Cirac-Zoller CNOT gate

    CERN Document Server

    Felloni, Sara

    2009-01-01

    In the framework of ion-trap quantum computing, we develop a characterization of experimentally realistic imperfections which may affect the Cirac-Zoller implementation of the CNOT gate. The CNOT operation is performed by applying a protocol of five laser pulses of appropriate frequency and polarization. The laser-pulse protocol exploits auxiliary levels, and its imperfect implementation leads to unitary as well as non-unitary errors affecting the CNOT operation. We provide a characterization of such imperfections, which are physically realistic and, to the best of our knowledge, have never been considered before. Our characterization shows that imperfect laser pulses unavoidably cause a leakage of information from the states that the ideal gate alone should transform, into the auxiliary states exploited by the experimental implementation.

  4. Modeling and Error Analysis of a Superconducting Gravity Gradiometer.

    Science.gov (United States)

    1979-08-01

    [The abstract of this 1979 scanned report is not recoverable from the OCR text; the legible fragments concern a lower bound for gradiometry noise and the percent error due to scale-factor mismatch, Eqs. (4.67)-(4.68).]

  5. Approach for wideband direction-of-arrival estimation in the presence of array model errors

    Institute of Scientific and Technical Information of China (English)

    Chen Deli; Zhang Cong; Tao Huamin; Lu Huanzhang

    2009-01-01

    The presence of array imperfections and mutual coupling in sensor arrays poses several challenges for the development of effective algorithms for the direction-of-arrival (DOA) estimation problem in array processing. A correlation-domain wideband DOA estimation algorithm that requires no array calibration is proposed to deal with these array model errors, using an arbitrary antenna array of omnidirectional elements. By using matrix operators that have memory and oblivion characteristics, the algorithm can separate the incident signals effectively. Compared with other typical wideband DOA estimation algorithms based on subspace theory, this algorithm achieves robust DOA estimation with respect to position error, gain-phase error, and mutual coupling by utilizing a relaxation technique based on signal separation. The signal separation category and the robustness of the algorithm to array model errors are analyzed and proved. The validity and robustness of the algorithm in the presence of array model errors are confirmed by theoretical analysis and simulation results.
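
    For orientation, a sketch (Python) of the kind of subspace baseline the proposed algorithm is compared against: narrowband MUSIC on a uniform linear array with an ideal, calibrated steering model. This is explicitly not the paper's correlation-domain wideband method; all geometry, noise, and source parameters are invented.

        import numpy as np
        from scipy.signal import find_peaks

        M, N, d = 8, 500, 0.5               # sensors, snapshots, spacing (wavelengths)
        true_deg = [-20.0, 35.0]
        rng = np.random.default_rng(2)

        def steering(theta_deg):
            k = 2 * np.pi * d * np.sin(np.deg2rad(theta_deg))
            return np.exp(1j * k * np.arange(M))

        A = np.stack([steering(t) for t in true_deg], axis=1)
        S = (rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))) / np.sqrt(2)
        noise = 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
        X = A @ S + noise

        R = X @ X.conj().T / N              # sample covariance
        _, vecs = np.linalg.eigh(R)         # eigenvalues in ascending order
        En = vecs[:, :M - 2]                # noise subspace for 2 sources

        grid = np.linspace(-90.0, 90.0, 1801)
        spectrum = np.array([1.0 / np.linalg.norm(En.conj().T @ steering(g)) ** 2
                             for g in grid])
        peaks, _ = find_peaks(spectrum)
        best = peaks[np.argsort(spectrum[peaks])[-2:]]
        print(np.sort(grid[best]))          # should land near -20 and 35 degrees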

  6. Model selection for marginal regression analysis of longitudinal data with missing observations and covariate measurement error.

    Science.gov (United States)

    Shen, Chung-Wei; Chen, Yi-Hau

    2015-10-01

    Missing observations and covariate measurement error commonly arise in longitudinal data. However, existing methods for model selection in marginal regression analysis of longitudinal data fail to address the potential bias resulting from these issues. To tackle this problem, we propose a new model selection criterion, the Generalized Longitudinal Information Criterion, which is based on an approximately unbiased estimator for the expected quadratic error of a candidate marginal model that accounts for both data missingness and covariate measurement error. The simulation results reveal that the proposed method performs well in the presence of missing data and covariate measurement error, whereas naive procedures that ignore these complexities may perform quite poorly. The proposed method is applied to data from the Taiwan Longitudinal Study on Aging to assess the relationship of depression with health and social status in the elderly, accommodating measurement error in the covariate as well as missing observations.

  7. Local and omnibus goodness-of-fit tests in classical measurement error models

    KAUST Repository

    Ma, Yanyuan

    2010-09-14

    We consider functional measurement error models, i.e. models where covariates are measured with error and yet no distributional assumptions are made about the mismeasured variable. We propose and study a score-type local test and an orthogonal series-based, omnibus goodness-of-fit test in this context, where no likelihood function is available or calculated; that is, all the tests are proposed in the semiparametric model framework. We demonstrate that our tests have optimality properties and computational advantages similar to those of the classical score tests in the parametric model framework. The test procedures are applicable to several semiparametric extensions of measurement error models, including the case where the measurement error distribution is estimated non-parametrically, as well as generalized partially linear models. The performance of the local score-type and omnibus goodness-of-fit tests is demonstrated through simulation studies and the analysis of a nutrition data set.

  8. Highly porous thermal protection materials: Modelling and prediction of the methodical experimental errors

    Science.gov (United States)

    Cherepanov, Valery V.; Alifanov, Oleg M.; Morzhukhina, Alena V.; Budnik, Sergey A.

    2016-11-01

    The formation mechanisms and the main factors affecting the systematic error of thermocouples were investigated. According to the results of experimental studies and mathematical modelling, it was established that in highly porous, heat-resistant materials for aerospace applications the thermocouple errors are determined by two competing mechanisms, which produce a correlation between the errors and the difference between the radiative and conductive heat fluxes. A comparative analysis was carried out, and some features of the methodical error formation related to the distance from the heated surface were established.

  9. Error modeling and tolerance design of a parallel manipulator with full-circle rotation

    Directory of Open Access Journals (Sweden)

    Yanbing Ni

    2016-05-01

    A method for improving the accuracy of a parallel manipulator with full-circle rotation is systematically investigated in this work via kinematic analysis, error modeling, sensitivity analysis, and tolerance allocation. First, a kinematic analysis of the mechanism is made using the space vector chain method. Using the results as a basis, an error model is formulated considering the main error sources, and position and orientation error-mapping models are established by mathematical transformation of the parallelogram structure characteristics. Second, a sensitivity analysis is performed on the geometric error sources, and a global sensitivity evaluation index is proposed to evaluate the contribution of the geometric errors to the accuracy of the end-effector. The analysis results provide a theoretical basis for the allocation of tolerances to the parts in the mechanical design. Finally, based on the results of the sensitivity analysis, the design of the tolerances can be solved as a nonlinearly constrained optimization problem, and a genetic algorithm is applied to carry out the allocation of the manufacturing tolerances of the parts. Accordingly, tolerance ranges for nine kinds of geometric error sources are obtained. The achievements made in this work can also be applied to other similar parallel mechanisms with full-circle rotation to improve error modeling and design accuracy.
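
    A toy sketch (Python) of sensitivity-based tolerance allocation posed as penalized optimization, solved here with a simple mutation-only evolutionary search standing in for the article's genetic algorithm. The sensitivities, reciprocal cost model, accuracy budget, and tolerance bounds are all invented for illustration.

        import numpy as np

        rng = np.random.default_rng(1)
        s = np.array([0.8, 0.5, 0.3, 0.2])   # sensitivities of four error sources
        budget = 0.05                        # allowed end-effector error (mm)
        lo, hi = 0.005, 0.1                  # manufacturable tolerance range (mm)

        def fitness(pop):
            cost = np.sum(1.0 / pop, axis=1)               # tighter tolerances cost more
            violation = np.maximum(0.0, pop @ s - budget)  # accuracy-budget violation
            return -(cost + 1e4 * violation)               # penalized objective

        pop = rng.uniform(lo, hi, size=(100, 4))
        for _ in range(200):
            f = fitness(pop)
            parents = pop[np.argsort(f)[-50:]]                       # keep the fittest half
            children = parents[rng.integers(0, 50, size=50)].copy()
            children += rng.normal(0.0, 0.002, size=children.shape)  # Gaussian mutation
            pop = np.clip(np.vstack([parents, children]), lo, hi)

        best = pop[np.argmax(fitness(pop))]
        print(best, float(best @ s))   # allocated tolerances and the error they imply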

  10. Identification and estimation of nonlinear models using two samples with nonclassical measurement errors

    KAUST Repository

    Carroll, Raymond J.

    2010-05-01

    This paper considers identification and estimation of a general nonlinear Errors-in-Variables (EIV) model using two samples. Both samples consist of a dependent variable, some error-free covariates, and an error-prone covariate for which the measurement error has unknown distribution and could be arbitrarily correlated with the latent true values; neither sample contains an accurate measurement of the corresponding true variable. We assume that the regression model of interest (the conditional distribution of the dependent variable given the latent true covariate and the error-free covariates) is the same in both samples, but that the distributions of the latent true covariates vary with the observed error-free discrete covariates. We first show that the general latent nonlinear model is nonparametrically identified using the two samples when both could have nonclassical errors, without either instrumental variables or independence between the two samples. When the two samples are independent and the nonlinear regression model is parameterized, we propose sieve Quasi Maximum Likelihood Estimation (Q-MLE) for the parameter of interest and establish its root-n consistency and asymptotic normality under possible misspecification, and its semiparametric efficiency under correct specification, with easily estimated standard errors. A Monte Carlo simulation and a data application are presented to show the power of the approach.

  11. 3D CMM strain-gauge triggering probe error characteristics modeling using fuzzy logic

    DEFF Research Database (Denmark)

    Achiche, Sofiane; Wozniak, A; Fan, Zhun;

    2008-01-01

    The error values of CMMs depend on the probing direction; hence their spatial variation is a key part of the probe inaccuracy. This paper presents genetically-generated fuzzy knowledge bases (FKBs) to model the spatial error characteristics of a CMM module-changing probe. Two automatically generat...

  13. Taking the Error Term of the Factor Model into Account: The Factor Score Predictor Interval

    Science.gov (United States)

    Beauducel, Andre

    2013-01-01

    The problem of factor score indeterminacy implies that the factor and the error sco