WorldWideScience

Sample records for fast model-based estimation

  1. Fast LCMV-based Methods for Fundamental Frequency Estimation

    DEFF Research Database (Denmark)

    Jensen, Jesper Rindom; Glentis, George-Othon; Christensen, Mads Græsbøll

    2013-01-01

    …peaks and require matrix inversions for each point in the search grid. In this paper, we therefore consider fast implementations of LCMV-based fundamental frequency estimators, exploiting the estimators' inherently low displacement rank of the used Toeplitz-like data covariance matrices, using as such either the classic time domain averaging covariance matrix estimator, or, if aiming for an increased spectral resolution, the covariance matrix resulting from the application of the recent iterative adaptive approach (IAA). The proposed exact implementations reduce the required computational complexity by several orders of magnitude, but, as we show, further computational savings can be obtained by the adoption of an approximative IAA-based data covariance matrix estimator, reminiscent of the recently proposed Quasi-Newton IAA technique. Furthermore, it is shown how the considered pitch estimators can…

  2. A service based estimation method for MPSoC performance modelling

    DEFF Research Database (Denmark)

    Tranberg-Hansen, Anders Sejer; Madsen, Jan; Jensen, Bjørn Sand

    2008-01-01

    This paper presents an abstract service-based estimation method for MPSoC performance modelling which allows fast, cycle-accurate design space exploration of complex architectures, including multi-processor configurations, at a very early stage in the design phase. The modelling method uses a service-oriented model of computation based on Hierarchical Colored Petri Nets and allows the modelling of both software and hardware in one unified model. To illustrate the potential of the method, a small MPSoC system, developed at Bang & Olufsen ICEpower a/s, is modelled and performance estimates are produced…

  3. Variable disparity-motion estimation based fast three-view video coding

    Science.gov (United States)

    Bae, Kyung-Hoon; Kim, Seung-Cheol; Hwang, Yong Seok; Kim, Eun-Soo

    2009-02-01

    In this paper, variable disparity-motion estimation (VDME) based 3-view video coding is proposed. In the encoding, key-frame coding (KFC) based motion estimation and variable disparity estimation (VDE) are processed for effectively fast three-view video encoding. The proposed algorithms enhance the performance of the 3-D video encoding/decoding system in terms of the accuracy of disparity estimation and computational overhead. Experiments on the stereo sequences 'Pot Plant' and 'IVO' show that the proposed algorithm's PSNRs are 37.66 and 40.55 dB, and the processing times are 0.139 and 0.124 sec/frame, respectively.

  4. Fast and Statistically Efficient Fundamental Frequency Estimation

    DEFF Research Database (Denmark)

    Nielsen, Jesper Kjær; Jensen, Tobias Lindstrøm; Jensen, Jesper Rindom

    2016-01-01

    Fundamental frequency estimation is a very important task in many applications involving periodic signals. For computational reasons, fast autocorrelation-based estimation methods are often used despite parametric estimation methods having superior estimation accuracy. However, these parametric… a recursive solver. Via benchmarks, we demonstrate that the computation time is reduced by approximately two orders of magnitude. The proposed fast algorithm is available for download online…

  5. Development and validation of a two-dimensional fast-response flood estimation model

    Energy Technology Data Exchange (ETDEWEB)

    Judi, David R [Los Alamos National Laboratory]; Mcpherson, Timothy N [Los Alamos National Laboratory]; Burian, Steven J [Univ. of Utah]

    2009-01-01

    A finite difference formulation of the shallow water equations using an upwind differencing method was developed, maintaining computational efficiency and accuracy such that it can be used as a fast-response flood estimation tool. The model was validated using both laboratory controlled experiments and an actual dam breach. Through the laboratory experiments, the model was shown to give good estimates of depth and velocity when compared to the measured data, as well as when compared to a more complex two-dimensional model. Additionally, the model was compared to high water mark data obtained from the failure of the Taum Sauk dam. The simulated inundation extent agreed well with the observed extent, with the most notable differences resulting from the inability to model sediment transport. The results of these validation studies show that a relatively simple numerical scheme used to solve the complete shallow water equations can be used to accurately estimate flood inundation. Future work will focus on further reducing the computation time needed to provide flood inundation estimates for fast-response analyses. This will be accomplished through the efficient use of multi-core, multi-processor computers coupled with an efficient domain-tracking algorithm, as well as an understanding of the impacts of grid resolution on model results.

  6. A fast pulse phase estimation method for X-ray pulsar signals based on epoch folding

    Directory of Open Access Journals (Sweden)

    Xue Mengfan

    2016-06-01

    X-ray pulsar-based navigation (XPNAV) is an attractive method for autonomous deep-space navigation in the future. Pulse phase estimation is a key task in XPNAV and its accuracy directly determines the navigation accuracy. State-of-the-art pulse phase estimation techniques either suffer from poor estimation accuracy or involve the maximization of a generally non-convex objective function, resulting in a large computational cost. In this paper, a fast pulse phase estimation method based on epoch folding is presented. The statistical properties of the observed profile obtained through epoch folding are developed. Based on this, we recognize the joint probability distribution of the observed profile as the likelihood function and utilize a fast Fourier transform-based procedure to estimate the pulse phase. The computational complexity of the proposed estimator is analyzed as well. Experimental results show that the proposed estimator significantly outperforms the currently used cross-correlation (CC) and nonlinear least squares (NLS) estimators, while significantly reducing the computational complexity compared with the NLS and maximum likelihood (ML) estimators.
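
    The core of such an FFT-based alignment step can be sketched as follows; this is a minimal illustration of estimating a cyclic shift between an epoch-folded profile and a pulsar template via circular cross-correlation, not the paper's likelihood-based estimator, and all signal parameters below are assumed for the example.

```python
# Hypothetical sketch of FFT-based pulse-phase estimation from an epoch-folded
# profile; bin counts and pulse shape are illustrative, not from the paper.
import numpy as np

def estimate_phase(observed, template):
    """Estimate the cyclic shift (pulse phase, in [0, 1)) that best aligns
    an epoch-folded profile with a known pulsar template."""
    n = len(observed)
    # Circular cross-correlation via FFT: corr = IFFT(FFT(obs) * conj(FFT(tmpl)))
    corr = np.fft.ifft(np.fft.fft(observed) * np.conj(np.fft.fft(template))).real
    shift = int(np.argmax(corr))            # coarse estimate on the bin grid
    # Parabolic interpolation around the peak for sub-bin accuracy
    y0, y1, y2 = corr[(shift - 1) % n], corr[shift], corr[(shift + 1) % n]
    denom = y0 - 2.0 * y1 + y2
    frac = 0.5 * (y0 - y2) / denom if denom != 0 else 0.0
    return ((shift + frac) % n) / n

# Toy usage: a noisy, cyclically shifted Gaussian pulse
bins = np.arange(256)
template = np.exp(-0.5 * ((bins - 128) / 6.0) ** 2)
observed = np.roll(template, 40) + 0.05 * np.random.default_rng(0).standard_normal(256)
print(estimate_phase(observed, template))   # approx. 40/256 = 0.15625
```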

  7. NASA Software Cost Estimation Model: An Analogy Based Estimation Model

    Science.gov (United States)

    Hihn, Jairus; Juster, Leora; Menzies, Tim; Mathew, George; Johnson, James

    2015-01-01

    The cost estimation of software development activities is increasingly critical for large-scale integrated projects such as those at DOD and NASA, especially as the software systems become larger and more complex. As an example, MSL (Mars Science Laboratory), developed at the Jet Propulsion Laboratory, launched with over 2 million lines of code, making it the largest robotic spacecraft ever flown (based on the size of the software). Software development activities are also notorious for their cost growth, with NASA flight software averaging over 50% cost growth. All across the agency, estimators and analysts are increasingly being tasked to develop reliable cost estimates in support of program planning and execution. While there has been extensive work on improving parametric methods, there is very little focus on the use of models based on analogy and clustering algorithms. In this paper we summarize our findings on effort/cost model estimation and model development based on ten years of software effort estimation research using data mining and machine learning methods to develop estimation models based on analogy and clustering. The NASA Software Cost Model performance is evaluated by comparing it to COCOMO II, linear regression, and K-nearest neighbor prediction model performance on the same data set.

  8. Influence function method for fast estimation of BWR core performance

    International Nuclear Information System (INIS)

    Rahnema, F.; Martin, C.L.; Parkos, G.R.; Williams, R.D.

    1993-01-01

    The model, which is based on the influence function method, provides rapid estimates of important quantities such as margins to fuel operating limits, the effective multiplication factor, nodal power, void and bundle flow distributions, as well as the traversing in-core probe (TIP) and local power range monitor (LPRM) readings. The fast model has been incorporated into GE's three-dimensional core monitoring system (3D Monicore). In addition to its predictive capability, the model adapts to LPRM readings in the monitoring mode. Comparisons have shown that the agreement between the results of the fast method and those of the standard 3D Monicore is within a few percent. (orig.)

  9. A multi-timescale estimator for battery state of charge and capacity dual estimation based on an online identified model

    International Nuclear Information System (INIS)

    Wei, Zhongbao; Zhao, Jiyun; Ji, Dongxu; Tseng, King Jet

    2017-01-01

    Highlights: • SOC and capacity are dually estimated with an online adapted battery model. • Model identification and the dual state estimate are fully decoupled. • Multiple timescales are used to improve estimation accuracy and stability. • The proposed method is verified with lab-scale experiments. • The proposed method is applicable to different battery chemistries. -- Abstract: Reliable online estimation of state of charge (SOC) and capacity is critically important for the battery management system (BMS). This paper presents a multi-timescale method for dual estimation of SOC and capacity with an online identified battery model. The model parameter estimator and the dual estimator are fully decoupled and executed with different timescales to improve the model accuracy and stability. Specifically, the model parameters are adapted online with the vector-type recursive least squares (VRLS) method to address their different variation rates. Based on the online adapted battery model, the Kalman filter (KF)-based SOC estimator and the RLS-based capacity estimator are formulated and integrated in the form of dual estimation. Experimental results suggest that the proposed method estimates the model parameters, SOC, and capacity in real time with fast convergence and high accuracy. Experiments on both a lithium-ion battery and a vanadium redox flow battery (VRB) verify the generality of the proposed method across multiple battery chemistries. The proposed method is also compared with other existing methods on computational cost to reveal its superiority for practical application.
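
    A schematic sketch of the multi-timescale structure described above is given below; the Rint-type battery model, the OCV curve, and all gains are illustrative assumptions, and the simple update equations stand in for the paper's VRLS/KF formulation rather than reproduce it.

```python
# Schematic sketch (not the paper's exact equations) of the multi-timescale idea:
# fast recursive-least-squares model update, slower Kalman SOC update, and an even
# slower capacity refresh. Battery model and all constants are assumptions.
import numpy as np

def ocv(soc):
    """Assumed open-circuit-voltage curve (illustrative)."""
    return 3.2 + 0.9 * soc

class DualEstimator:
    def __init__(self, q_ah=2.0, soc0=0.5):
        self.theta = np.zeros(2)      # RLS parameters: [voltage offset, R0]
        self.P = np.eye(2) * 1e3      # RLS covariance
        self.soc, self.p_soc = soc0, 1e-2
        self.q = q_ah * 3600.0        # capacity in coulombs

    def step_fast(self, i, v, lam=0.99):
        """Every sample: adapt the model, v - ocv(soc) ~ offset - R0*i."""
        phi = np.array([1.0, -i])
        e = (v - ocv(self.soc)) - phi @ self.theta
        k = self.P @ phi / (lam + phi @ self.P @ phi)
        self.theta += k * e
        self.P = (self.P - np.outer(k, phi) @ self.P) / lam

    def step_slow(self, i, v, dt):
        """Slower timescale: scalar Kalman update of SOC."""
        self.soc -= i * dt / self.q                  # coulomb-counting prediction
        self.p_soc += 1e-6                           # process noise
        h = 0.9                                      # d(OCV)/d(SOC) of assumed curve
        y = v - (ocv(self.soc) + self.theta[0] - self.theta[1] * i)
        g = self.p_soc * h / (h * self.p_soc * h + 1e-3)
        self.soc += g * y
        self.p_soc *= 1.0 - g * h

    def step_capacity(self, dsoc, coulombs):
        """Slowest timescale: refresh capacity from a large SOC swing."""
        if abs(dsoc) > 0.05:
            self.q = 0.95 * self.q + 0.05 * coulombs / dsoc

est = DualEstimator()
est.step_fast(1.0, ocv(0.5) - 0.05)            # 1 A discharge, 50 mOhm drop
est.step_slow(1.0, ocv(0.5) - 0.05, dt=1.0)
print(est.theta, est.soc)
```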

  10. Fast and accurate estimation of the covariance between pairwise maximum likelihood distances

    Directory of Open Access Journals (Sweden)

    Manuel Gil

    2014-09-01

    Pairwise evolutionary distances are a model-based summary statistic for a set of molecular sequences. They represent the leaf-to-leaf path lengths of the underlying phylogenetic tree. Estimates of pairwise distances with overlapping paths covary because of shared mutation events. It is desirable to take this covariance structure into account to increase precision in any process that compares or combines distances. This paper introduces a fast estimator for the covariance of two pairwise maximum likelihood distances, estimated under general Markov models. The estimator is based on a conjecture (going back to Nei & Jin, 1989) which links the covariance to path lengths. It is proven here under a simple symmetric substitution model. A simulation shows that the estimator outperforms previously published ones in terms of the mean squared error.
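
    As a rough illustration of the shared-path idea, the following sketch evaluates a delta-method variance for Jukes-Cantor ML distances and applies a Nei & Jin-style link between covariance and shared path length; the paper's estimator for general Markov models is more involved, so treat this only as a toy special case under stated assumptions.

```python
# Hedged sketch under the Jukes-Cantor model of the link between the covariance of
# two overlapping pairwise distances and the length of their shared tree path.
import numpy as np

def jc_var(d, n):
    """Delta-method variance of a Jukes-Cantor ML distance estimate d
    from an alignment of n sites."""
    p = 0.75 * (1.0 - np.exp(-4.0 * d / 3.0))   # expected mismatch fraction
    return p * (1.0 - p) / (n * (1.0 - 4.0 * p / 3.0) ** 2)

def jc_cov(shared_path_len, n):
    """Conjectured covariance of two pairwise distances whose tree paths
    overlap on a segment of the given length (same n sites)."""
    return jc_var(shared_path_len, n)

print(jc_var(0.2, 1000), jc_cov(0.05, 1000))
```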

  11. Fast and accurate estimation of the covariance between pairwise maximum likelihood distances.

    Science.gov (United States)

    Gil, Manuel

    2014-01-01

    Pairwise evolutionary distances are a model-based summary statistic for a set of molecular sequences. They represent the leaf-to-leaf path lengths of the underlying phylogenetic tree. Estimates of pairwise distances with overlapping paths covary because of shared mutation events. It is desirable to take this covariance structure into account to increase precision in any process that compares or combines distances. This paper introduces a fast estimator for the covariance of two pairwise maximum likelihood distances, estimated under general Markov models. The estimator is based on a conjecture (going back to Nei & Jin, 1989) which links the covariance to path lengths. It is proven here under a simple symmetric substitution model. A simulation shows that the estimator outperforms previously published ones in terms of the mean squared error.

  12. A Fast Elitism Gaussian Estimation of Distribution Algorithm and Application for PID Optimization

    Directory of Open Access Journals (Sweden)

    Qingyang Xu

    2014-01-01

    Estimation of distribution algorithm (EDA) is an intelligent optimization algorithm based on probability statistics theory. A fast elitism Gaussian estimation of distribution algorithm (FEGEDA) is proposed in this paper. A Gaussian probability model is used to model the solution distribution. The parameters of the Gaussian come from the statistical information of the best individuals through a fast learning rule, which enhances the efficiency of the algorithm, while an elitism strategy is used to maintain convergence performance. The performance of the algorithm is examined on several benchmarks. In the simulations, a one-dimensional benchmark is used to visualize the optimization process and the probability-model learning process during the evolution, and several two-dimensional and higher-dimensional benchmarks are used to verify the performance of FEGEDA. The experimental results indicate the capability of FEGEDA, especially on higher-dimensional problems, where FEGEDA exhibits better performance than some other algorithms and EDAs. Finally, FEGEDA is applied to PID controller optimization for a PMSM and compared with classical PID and GA.
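
    A minimal sketch of the generic scheme the abstract describes (truncation selection, Gaussian model update, elitism) follows; the blending constants stand in for the paper's fast learning rule and are not its actual values.

```python
# Minimal Gaussian EDA sketch with elitism: sample from a Gaussian model, select
# the best individuals, refit the model, and carry the best-so-far solution over.
import numpy as np

def gaussian_eda(f, dim, pop=100, elite=20, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    mu, sigma = np.zeros(dim), np.ones(dim) * 2.0
    best_x, best_f = None, np.inf
    for _ in range(iters):
        x = rng.normal(mu, sigma, size=(pop, dim))
        if best_x is not None:
            x[0] = best_x                        # elitism: keep best-so-far
        fx = np.apply_along_axis(f, 1, x)
        idx = np.argsort(fx)[:elite]             # truncation selection
        if fx[idx[0]] < best_f:
            best_x, best_f = x[idx[0]].copy(), fx[idx[0]]
        # "learning rule": blend old model with statistics of the elite set
        mu = 0.5 * mu + 0.5 * x[idx].mean(axis=0)
        sigma = 0.5 * sigma + 0.5 * (x[idx].std(axis=0) + 1e-12)
    return best_x, best_f

print(gaussian_eda(lambda v: np.sum((v - 3.0) ** 2), dim=5))   # optimum near (3,...,3)
```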

  13. A fast elitism Gaussian estimation of distribution algorithm and application for PID optimization.

    Science.gov (United States)

    Xu, Qingyang; Zhang, Chengjin; Zhang, Li

    2014-01-01

    Estimation of distribution algorithm (EDA) is an intelligent optimization algorithm based on probability statistics theory. A fast elitism Gaussian estimation of distribution algorithm (FEGEDA) is proposed in this paper. A Gaussian probability model is used to model the solution distribution. The parameters of the Gaussian come from the statistical information of the best individuals through a fast learning rule, which enhances the efficiency of the algorithm, while an elitism strategy is used to maintain convergence performance. The performance of the algorithm is examined on several benchmarks. In the simulations, a one-dimensional benchmark is used to visualize the optimization process and the probability-model learning process during the evolution, and several two-dimensional and higher-dimensional benchmarks are used to verify the performance of FEGEDA. The experimental results indicate the capability of FEGEDA, especially on higher-dimensional problems, where FEGEDA exhibits better performance than some other algorithms and EDAs. Finally, FEGEDA is applied to PID controller optimization for a PMSM and compared with classical PID and GA.

  14. Fast Estimation Method of Space-Time Two-Dimensional Positioning Parameters Based on Hadamard Product

    Directory of Open Access Journals (Sweden)

    Haiwen Li

    2018-01-01

    The estimation speed of positioning parameters determines the effectiveness of a positioning system. The time of arrival (TOA) and direction of arrival (DOA) parameters can be estimated by the space-time two-dimensional multiple signal classification (2D-MUSIC) algorithm for an array antenna. However, this algorithm needs much time to complete the two-dimensional pseudo-spectral peak search, which makes it difficult to apply in practice. To solve this problem, a fast estimation method of space-time two-dimensional positioning parameters based on the Hadamard product is proposed for orthogonal frequency division multiplexing (OFDM) systems, and the Cramer-Rao bound (CRB) is also presented. Firstly, according to the channel frequency domain response vector of each array, the channel frequency domain estimation vector is constructed using the Hadamard product form containing location information. Then, the autocorrelation matrix of the channel response vector for the extended array element in the frequency domain and the noise subspace are calculated successively. Finally, by combining the closed-form solution and parameter pairing, the fast joint estimation of time delay and arrival direction is accomplished. The theoretical analysis and simulation results show that the proposed algorithm significantly reduces the computational complexity and achieves estimation accuracy that is not only better than the estimation of signal parameters via rotational invariance techniques (ESPRIT) algorithm and the 2D matrix pencil (MP) algorithm but also close to the 2D-MUSIC algorithm. Moreover, the proposed algorithm also has a certain adaptability to multipath environments and effectively improves the speed of acquiring location parameters.

  15. Fast, accurate, and robust frequency offset estimation based on modified adaptive Kalman filter in coherent optical communication system

    Science.gov (United States)

    Yang, Yanfu; Xiang, Qian; Zhang, Qun; Zhou, Zhongqing; Jiang, Wen; He, Qianwen; Yao, Yong

    2017-09-01

    We propose a joint estimation scheme for fast, accurate, and robust frequency offset (FO) estimation along with phase estimation based on a modified adaptive Kalman filter (MAKF). The scheme consists of three key modules: an extended Kalman filter (EKF), a lock detector, and FO cycle slip recovery. The EKF module estimates the time-varying phase induced by both FO and laser phase noise. The lock detector module decides between acquisition mode and tracking mode and consequently sets the EKF tuning parameter in an adaptive manner. The third module can detect possible cycle slips in the case of large FO and make proper corrections. Based on simulation and experimental results, the proposed MAKF shows excellent estimation performance featuring high accuracy, fast convergence, and the capability of cycle slip recovery.

  16. Model Based Optimal Control, Estimation, and Validation of Lithium-Ion Batteries

    Science.gov (United States)

    Perez, Hector Eduardo

    …notion of interval observers to PDE models using a sensitivity-based approach. Practically, this chapter quantifies the sensitivity of battery state estimates to parameter variations, enabling robust battery management schemes. The effectiveness of the proposed sensitivity-based interval observers is verified via a numerical study for the range of uncertain parameters. Chapter 4: This chapter seeks to derive insight on battery charging control using electrochemistry models. Directly using full order complex multi-partial differential equation (PDE) electrochemical battery models is difficult and sometimes impossible to implement. This chapter develops an approach for obtaining optimal charge control schemes, while ensuring safety through constraint satisfaction. An optimal charge control problem is mathematically formulated via a coupled reduced order electrochemical-thermal model which conserves key electrochemical and thermal state information. The Legendre-Gauss-Radau (LGR) pseudo-spectral method with adaptive multi-mesh-interval collocation is employed to solve the resulting nonlinear multi-state optimal control problem. Minimum time charge protocols are analyzed in detail subject to solid and electrolyte phase concentration constraints, as well as temperature constraints. The optimization scheme is examined using different input current bounds, and an insight on battery design for fast charging is provided. Experimental results are provided to compare the tradeoffs between an electrochemical-thermal model based optimal charge protocol and a traditional charge protocol. Chapter 5: Fast and safe charging protocols are crucial for enhancing the practicality of batteries, especially for mobile applications such as smartphones and electric vehicles. This chapter proposes an innovative approach to devising optimally health-conscious fast-safe charge protocols. A multi-objective optimal control problem is mathematically formulated via a coupled electro…

  17. SpotCaliper: fast wavelet-based spot detection with accurate size estimation.

    Science.gov (United States)

    Püspöki, Zsuzsanna; Sage, Daniel; Ward, John Paul; Unser, Michael

    2016-04-15

    SpotCaliper is a novel wavelet-based image-analysis software providing a fast automatic detection scheme for circular patterns (spots), combined with a precise estimation of their size. It is implemented as an ImageJ plugin with a friendly user interface. The user is allowed to edit the results by modifying the measurements (in a semi-automated way) and to extract data for further analysis. The fine tuning of the detections includes the possibility of adjusting or removing the original detections, as well as adding further spots. The main advantage of the software is its ability to capture the size of spots in a fast and accurate way. Availability and implementation: http://bigwww.epfl.ch/algorithms/spotcaliper/. Contact: zsuzsanna.puspoki@epfl.ch. Supplementary data are available at Bioinformatics online.

  18. Design of Model-based Controller with Disturbance Estimation in Steer-by-wire System

    Directory of Open Access Journals (Sweden)

    Jung Sanghun

    2018-01-01

    The steer-by-wire system is a next-generation steering control technology that has been actively studied because it has many advantages such as fast response, space efficiency due to the removal of redundant mechanical elements, and high connectivity with vehicle chassis control such as active steering. A steer-by-wire system is subject to disturbances composed of tire friction torque and self-aligning torque. These disturbances vary widely due to changes in the weight or friction coefficient, so disturbance compensation logic is strongly required to obtain the desired performance. This paper proposes a model-based controller with disturbance compensation to achieve robust control performance. The targeted steer-by-wire system is identified through experiments and a system identification method, and the model-based controller is designed using the identified plant model. The disturbance of the targeted steer-by-wire system is estimated using a disturbance observer (DOB), and the estimated disturbance is compensated in the control input. Experiments under various scenarios are conducted to validate the robust performance of the proposed model-based controller.

  19. Analytical model for fast-shock ignition

    International Nuclear Information System (INIS)

    Ghasemi, S. A.; Farahbod, A. H.; Sobhanian, S.

    2014-01-01

    A model and its improvements are introduced for a recently proposed approach to inertial confinement fusion, called fast-shock ignition (FSI). The analysis is based upon the gain models of fast ignition and shock ignition, with considerations for the penetration of fast electrons into the pre-compressed fuel, to examine the formation of an effective central hot spot. Calculations of fast electron penetration into the dense fuel show that if the initial electron kinetic energy is of the order of ∼4.5 MeV, the electrons effectively reach the central part of the fuel. To evaluate the performance of the FSI approach more realistically, we have used the quasi-two-temperature electron energy distribution function of Strozzi (2012) and the fast ignitor energy formula of Bellei (2013), which are consistent with 3D PIC simulations, for different values of fast ignitor laser wavelength and coupling efficiency. The overall advantage of fast-shock ignition in comparison with shock ignition is estimated to be better than a factor of 1.3, and the best results are obtained for a fuel mass around 1.5 mg, a fast ignitor laser wavelength of ∼0.3 micron, and a shock ignitor energy weight factor of about 0.25.

  20. Comparisons of Crosswind Velocity Profile Estimates Used in Fast-Time Wake Vortex Prediction Models

    Science.gov (United States)

    Pruis, Mathew J.; Delisi, Donald P.; Ahmad, Nashat N.

    2011-01-01

    Five methods for estimating crosswind profiles used in fast-time wake vortex prediction models are compared in this study. Previous investigations have shown that temporal and spatial variations in the crosswind vertical profile have a large impact on the transport and time evolution of the trailing vortex pair. The most important crosswind parameters are the magnitude of the crosswind and the gradient of the crosswind shear. It is known that pulsed and continuous-wave lidar measurements can provide good estimates of the wind profile in the vicinity of airports. In this study, comparisons are made between crosswind profiles estimated from a priori information on the trajectory of the vortex pair and crosswind profiles derived from different sensors and a regional numerical weather prediction model.

  1. Detection-Guided Fast Affine Projection Channel Estimator for Speech Applications

    Directory of Open Access Journals (Sweden)

    Yan Wu Jennifer

    2007-04-01

    In various adaptive estimation applications, such as acoustic echo cancellation within teleconferencing systems, the input signal is highly correlated speech. This, in general, leads to extremely slow convergence of the NLMS adaptive FIR estimator. As a result, for such applications, the affine projection algorithm (APA) or its low-complexity version, the fast affine projection (FAP) algorithm, is commonly employed instead of the NLMS algorithm. In such applications, the signal propagation channel may have a relatively low-dimensional impulse response structure, that is, the number m of active or significant taps within the (discrete-time) modelled channel impulse response is much less than the overall tap length n of the channel impulse response. For such cases, we investigate the inclusion of an active-parameter detection-guided concept within the fast affine projection FIR channel estimator. Simulation results indicate that the proposed detection-guided fast affine projection channel estimator has improved convergence speed and leads to better steady-state performance than the standard fast affine projection channel estimator, especially in the important case of highly correlated speech input signals.
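
    For reference, the two weight updates contrasted above can be sketched as follows; the step sizes and filter lengths are illustrative, and a detection-guided variant would additionally restrict the update to taps flagged as active.

```python
# Sketch of the NLMS and affine-projection (APA) weight updates; projecting on
# several past input directions speeds convergence for correlated (speech) input.
import numpy as np

def nlms_update(w, x, d, mu=0.5, eps=1e-6):
    """x: most recent n input samples; d: desired sample."""
    e = d - w @ x
    return w + mu * e * x / (x @ x + eps), e

def apa_update(w, X, d, mu=0.5, eps=1e-6):
    """X: p x n matrix of the last p input vectors; d: p desired samples."""
    e = d - X @ w
    return w + mu * X.T @ np.linalg.solve(X @ X.T + eps * np.eye(len(d)), e), e

# Toy identification of an 8-tap channel with white input (NLMS shown)
rng = np.random.default_rng(0)
w_true, w = rng.normal(size=8), np.zeros(8)
for _ in range(2000):
    x = rng.normal(size=8)
    w, _ = nlms_update(w, x, w_true @ x)
print(np.round(w - w_true, 3))   # near zero after convergence
```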

  2. A Fast Iterative Bayesian Inference Algorithm for Sparse Channel Estimation

    DEFF Research Database (Denmark)

    Pedersen, Niels Lovmand; Manchón, Carles Navarro; Fleury, Bernard Henri

    2013-01-01

    …representation of the Bessel K probability density function; a highly efficient, fast iterative Bayesian inference method is then applied to the proposed model. The resulting estimator outperforms other state-of-the-art Bayesian and non-Bayesian estimators, either by yielding lower mean squared estimation error…

  3. Jobs and Economic Development Impact (JEDI) User Reference Guide: Fast Pyrolysis Biorefinery Model

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Yimin [National Renewable Energy Lab. (NREL), Golden, CO (United States)]; Goldberg, Marshall [MRG and Associates, Nevada City, CA (United States)]

    2015-02-01

    This guide -- the JEDI Fast Pyrolysis Biorefinery Model User Reference Guide -- was developed to assist users in operating and understanding the JEDI Fast Pyrolysis Biorefinery Model. The guide provides information on the model's underlying methodology, as well as the parameters and data sources used to develop the cost data utilized in the model. This guide also provides basic instruction on model add-in features and a discussion of how the results should be interpreted. Based on project-specific inputs from the user, the JEDI Fast Pyrolysis Biorefinery Model estimates local (e.g., county- or state-level) job creation, earnings, and output from total economic activity for a given fast pyrolysis biorefinery. These estimates include the direct, indirect and induced economic impacts to the local economy associated with the construction and operation phases of biorefinery projects. Local revenue and supply chain impacts as well as induced impacts are estimated using economic multipliers derived from the IMPLAN software program. By determining the local economic impacts and job creation for a proposed biorefinery, the JEDI Fast Pyrolysis Biorefinery Model can be used to field questions about the added value biorefineries might bring to a local community.

  4. Estimating Stochastic Volatility Models using Prediction-based Estimating Functions

    DEFF Research Database (Denmark)

    Lunde, Asger; Brix, Anne Floor

    In this paper, prediction-based estimating functions (PBEFs), introduced in Sørensen (2000), are reviewed and PBEFs for the Heston (1993) stochastic volatility model are derived. The finite sample performance of the PBEF-based estimator is investigated in a Monte Carlo study and compared to the performance of the GMM estimator based on conditional moments of integrated volatility from Bollerslev and Zhou (2002). The case where the observed log-price process is contaminated by i.i.d. market microstructure (MMS) noise is also investigated. First, the impact of MMS noise on the parameter estimates from… to correctly account for the noise are investigated. Our Monte Carlo study shows that the estimator based on PBEFs outperforms the GMM estimator, both in the setting with and without MMS noise. Finally, an empirical application investigates the possible challenges and general performance of applying the PBEF…

  5. Fast human pose estimation using 3D Zernike descriptors

    Science.gov (United States)

    Berjón, Daniel; Morán, Francisco

    2012-03-01

    Markerless video-based human pose estimation algorithms face a high-dimensional problem that is frequently broken down into several lower-dimensional ones by estimating the pose of each limb separately. However, in order to do so they need to reliably locate the torso, for which they typically rely on time coherence and tracking algorithms. Losing track usually results in catastrophic failure of the process, requiring human intervention and thus precluding their usage in real-time applications. We propose a very fast rough pose estimation scheme based on global shape descriptors built on 3D Zernike moments. Using an articulated model that we configure in many poses, a large database of descriptor/pose pairs can be computed off-line. Thus, the only steps that must be done on-line are the extraction of the descriptors for each input volume and a search against the database to get the most likely poses. While the result of such a process is not a fine pose estimation, it can be useful to help more sophisticated algorithms regain track or make more educated guesses when creating new particles in particle-filter-based tracking schemes. We have achieved a performance of about ten fps on a single computer using a database of about one million entries.

  6. Correlation between the model accuracy and model-based SOC estimation

    International Nuclear Information System (INIS)

    Wang, Qianqian; Wang, Jiao; Zhao, Pengju; Kang, Jianqiang; Yan, Few; Du, Changqing

    2017-01-01

    State-of-charge (SOC) estimation is a core technology for battery management systems. Considerable progress has been achieved in the study of SOC estimation algorithms, especially algorithms based on the Kalman filter, to meet the increasing demand of model-based battery management systems. The Kalman filter weakens the influence of white noise and initial error during SOC estimation but cannot eliminate the inherent error of the battery model itself. As such, the accuracy of SOC estimation is directly related to the accuracy of the battery model. Thus far, the quantitative relationship between model accuracy and model-based SOC estimation remains unknown. This study summarizes three equivalent-circuit lithium-ion battery models, namely, the Thevenin, PNGV, and DP models. The model parameters are identified through hybrid pulse power characterization tests. The three models are evaluated, and SOC estimation conducted by the EKF-Ah method under three operating conditions is quantitatively studied. The regression and correlation between the model error and the SOC estimation error are studied and compared in terms of the standard deviation and normalized RMSE, and these quantities exhibit a strong linear relationship. Results indicate that the model accuracy affects the SOC estimation accuracy mainly in two ways: the dispersion of the frequency distribution of the error and the overall level of the error. On the basis of the relationship between model error and SOC estimation error, our study provides a strategy for selecting a suitable cell model to meet SOC precision requirements when using a Kalman filter.

  7. Fast and Accurate Video PQoS Estimation over Wireless Networks

    Directory of Open Access Journals (Sweden)

    Emanuele Viterbo

    2008-06-01

    This paper proposes a curve fitting technique for fast and accurate estimation of the perceived quality of streaming media contents delivered within a wireless network. The model accounts for the effects of various network parameters such as congestion, radio link power, and video transmission bit rate. The evaluation of the perceived quality of service (PQoS) is based on the well-known VQM objective metric, a powerful technique which is highly correlated with the more expensive and time-consuming subjective metrics. Currently, PQoS is used only for offline analysis after delivery of the entire video content. Thanks to the proposed simple model, we can estimate the video PQoS in real time and rapidly adapt the content transmission through scalable video coding and bit rates in order to offer the best perceived quality to the end users. The designed model has been validated through many different measurements in realistic wireless environments using an ad hoc WiFi test bed.
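
    The flavor of such a curve-fitting approach can be sketched as follows; the logistic model form, the network parameters used, and the synthetic data are assumptions for illustration, not the paper's fitted model or the VQM metric itself.

```python
# Illustrative curve-fitting sketch: fit a simple parametric surface mapping
# network parameters (bit rate, packet loss) to a VQM-like quality score.
import numpy as np
from scipy.optimize import curve_fit

def pqos_model(x, a, b, c):
    bitrate, loss = x
    return a / (1.0 + np.exp(-b * np.log(bitrate))) - c * loss

rng = np.random.default_rng(1)
bitrate = rng.uniform(100, 2000, 200)           # kbit/s
loss = rng.uniform(0, 5, 200)                   # percent packet loss
vqm = pqos_model((bitrate, loss), 4.5, 0.8, 0.4) + rng.normal(0, 0.1, 200)

params, _ = curve_fit(pqos_model, (bitrate, loss), vqm, p0=[4, 1, 0.5])
print(params)   # recovers approximately (4.5, 0.8, 0.4)
```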

  8. Coastal Amplification Laws for the French Tsunami Warning Center: Numerical Modeling and Fast Estimate of Tsunami Wave Heights Along the French Riviera

    Science.gov (United States)

    Gailler, A.; Hébert, H.; Schindelé, F.; Reymond, D.

    2018-04-01

    Tsunami modeling tools in the French Tsunami Warning Center operational context provide rapidly derived warning levels with a dimensionless variable at basin scale. A new forecast method based on coastal amplification laws has been tested to estimate the tsunami onshore height, with a focus on the French Riviera test site (Nice area). This fast prediction tool provides a coastal tsunami height distribution, calculated from the numerical simulation of the deep-ocean tsunami amplitude and using a transfer function derived from Green's law. Due to a lack of tsunami observations in the western Mediterranean basin, coastal amplification parameters are defined here with respect to high-resolution nested-grid simulations. Preliminary results for the Nice test site, on the basis of nine historical and synthetic sources, show good agreement with the time-consuming high-resolution modeling: the linear approximation is generally obtained within 1 min and provides estimates within a factor of two in amplitude, although resonance effects in harbors and bays are not reproduced. In Nice harbor especially, the variation in tsunami amplitude cannot really be assessed because of the magnitude range and maximum energy azimuth of the possible events to account for. However, this method is well suited for a fast first estimate of the coastal tsunami threat forecast.
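
    A minimal sketch of a Green's-law transfer function of this kind is shown below; the site coefficient alpha stands in for the regionally calibrated amplification parameters mentioned in the abstract.

```python
# Worked sketch of a Green's-law style transfer function: deep-water tsunami
# amplitude is amplified as the depth shoals toward the coast.
def coastal_amplitude(a_deep, h_deep, h_coast, alpha=1.0):
    """Green's law: A_coast = alpha * A_deep * (h_deep / h_coast) ** 0.25."""
    return alpha * a_deep * (h_deep / h_coast) ** 0.25

# 0.2 m offshore at 2500 m depth, forecast at 5 m depth near the coast:
print(coastal_amplitude(0.2, 2500.0, 5.0))   # ~0.95 m before site calibration
```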

  9. New Software for the Fast Estimation of Population Recombination Rates (FastEPRR) in the Genomic Era

    Directory of Open Access Journals (Sweden)

    Feng Gao

    2016-06-01

    Genetic recombination is a very important evolutionary mechanism that mixes parental haplotypes and produces new raw material for organismal evolution. As a result, information on recombination rates is critical for biological research. In this paper, we introduce a new, extremely fast open-source software package (FastEPRR) that uses machine learning to estimate the recombination rate ρ (=4Ner) from intraspecific DNA polymorphism data. When ρ>10 and the number of sampled diploid individuals is large enough (≥50), the variance of ρFastEPRR remains slightly smaller than that of ρLDhat. The new estimate ρcomb (calculated by averaging ρFastEPRR and ρLDhat) has the smallest variance in all cases. When estimating ρFastEPRR, the finite-site model was employed to analyze cases with a high rate of recurrent mutations, and an additional method is proposed to consider the effect of variable recombination rates within windows. Simulations encompassing a wide range of parameters demonstrate that different evolutionary factors, such as demography and selection, may not increase the false positive rate of recombination hotspots. Overall, the accuracy of FastEPRR is similar to that of the well-known method LDhat, but it requires far less computation time. Genetic maps for each human population (YRI, CEU, and CHB) extracted from the 1000 Genomes OMNI data set were obtained in less than 3 d using just a single CPU core. The Pearson pairwise correlation coefficient between the ρFastEPRR and ρLDhat maps is very high, ranging between 0.929 and 0.987 at a 5-Mb scale. Considering that sample sizes for these kinds of data are increasing dramatically with advances in next-generation sequencing technologies, FastEPRR (freely available at http://www.picb.ac.cn/evolgen/) is expected to become a widely used tool for establishing genetic maps and studying recombination hotspots in the population genomic era.

  10. Total decay heat estimates in a proto-type fast reactor

    International Nuclear Information System (INIS)

    Sridharan, M.S.

    2003-01-01

    In this paper, total decay heat values generated in a prototype fast reactor are estimated. These values are compared with those of certain fast reactors. Simple analytical fits are also obtained for these values, which can serve as a handy and convenient tool in engineering design studies. These decay heat values, taken as a ratio to the nominal operating power, are in general applicable to any typical plutonium-based fast reactor and are useful inputs to the design of decay-heat removal systems.

  11. A dynamic programming approach for quickly estimating large network-based MEV models

    DEFF Research Database (Denmark)

    Mai, Tien; Frejinger, Emma; Fosgerau, Mogens

    2017-01-01

    We propose a way to estimate a family of static Multivariate Extreme Value (MEV) models with large choice sets in short computational time. The resulting model is also straightforward and fast to use for prediction. Following Daly and Bierlaire (2006), the correlation structure is defined by a ro… to converge (4.3 h on an Intel(R) 3.2 GHz machine using a non-parallelized code). We also show that our approach allows us to estimate a cross-nested logit model of 111 nests with a real data set of more than 100,000 observations in 14 h.

  12. Enhanced online model identification and state of charge estimation for lithium-ion battery with a FBCRLS based observer

    International Nuclear Information System (INIS)

    Wei, Zhongbao; Meng, Shujuan; Xiong, Binyu; Ji, Dongxu; Tseng, King Jet

    2016-01-01

    Highlights: • Integrated online model identification and SOC estimation is explored. • Noise variances are estimated online in a data-driven way. • Identification bias caused by noise corruption is attenuated. • SOC is estimated online with high accuracy and fast convergence. • Algorithm comparison shows the superiority of the proposed method. - Abstract: State of charge (SOC) estimators with an online identified battery model have proven to have high accuracy and better robustness due to the timely adaptation of time-varying model parameters. In this paper, we show that the common methods for model identification are intrinsically biased if both the current and voltage sensors are corrupted with noise. The uncertainties in the battery model further degrade the accuracy and robustness of the SOC estimate. To address this problem, this paper proposes a novel technique which integrates Frisch-scheme-based bias-compensating recursive least squares (FBCRLS) with a SOC observer for enhanced model identification and SOC estimation. The proposed method estimates the noise statistics online and compensates for the noise effect so that the model parameters can be extracted without bias. The SOC is then estimated in real time with the online updated and unbiased battery model. Simulation and experimental studies show that the proposed FBCRLS-based observer effectively attenuates the bias in model identification caused by noise contamination and consequently provides a more reliable estimate of the SOC. The proposed method is also compared with other existing methods to highlight its superiority in terms of accuracy and convergence speed.

  13. Fast maximum likelihood estimation of mutation rates using a birth-death process.

    Science.gov (United States)

    Wu, Xiaowei; Zhu, Hongxiao

    2015-02-07

    Since fluctuation analysis was first introduced by Luria and Delbrück in 1943, it has been widely used to make inferences about spontaneous mutation rates in cultured cells. Under certain model assumptions, the probability distribution of the number of mutants that appear in a fluctuation experiment can be derived explicitly, which provides the basis of mutation rate estimation. It has been shown that, among various existing estimators, the maximum likelihood estimator usually demonstrates desirable properties such as consistency and lower mean squared error. However, its application to real experimental data is often hindered by the slow computation of the likelihood due to the recursive form of the mutant-count distribution. We propose a fast maximum likelihood estimator of mutation rates, MLE-BD, based on a birth-death process model with a non-differential growth assumption. Simulation studies demonstrate that, compared with the conventional maximum likelihood estimator derived from the Luria-Delbrück distribution, MLE-BD achieves a substantial improvement in computational speed and is applicable to an arbitrarily large number of mutants. In addition, it retains good point estimation accuracy.
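
    To make the computational bottleneck concrete, the following sketch implements the conventional Luria-Delbrück ML estimate that MLE-BD is designed to replace: the mutant-count pmf is built with the Ma-Sandri-Sarkar recursion, whose cost grows quadratically with the largest observed count; the grid-search bounds and toy data are assumptions.

```python
# Conventional Luria-Delbrück MLE sketch: Ma-Sandri-Sarkar recursion for the
# mutant-count pmf, followed by a 1-D likelihood search over m = mu * N.
import numpy as np

def ld_pmf(m, n_max):
    p = np.zeros(n_max + 1)
    p[0] = np.exp(-m)
    for n in range(1, n_max + 1):
        p[n] = (m / n) * sum(p[i] / (n - i + 1) for i in range(n))
    return p

def ld_mle(counts):
    counts = np.asarray(counts)
    grid = np.linspace(0.1, 10.0, 200)           # candidate values of m (assumed bounds)
    ll = [np.sum(np.log(ld_pmf(m, counts.max())[counts] + 1e-300)) for m in grid]
    return grid[int(np.argmax(ll))]

print(ld_mle([0, 1, 0, 3, 12, 2, 0, 5, 1, 0]))   # expected mutations per culture
```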

  14. Model-based estimation for dynamic cardiac studies using ECT.

    Science.gov (United States)

    Chiao, P C; Rogers, W L; Clinthorne, N H; Fessler, J A; Hero, A O

    1994-01-01

    The authors develop a strategy for joint estimation of physiological parameters and myocardial boundaries using ECT (emission computed tomography). They construct an observation model to relate parameters of interest to the projection data and to account for limited ECT system resolution and measurement noise. The authors then use a maximum likelihood (ML) estimator to jointly estimate all the parameters directly from the projection data without reconstruction of intermediate images. They also simulate myocardial perfusion studies based on a simplified heart model to evaluate the performance of the model-based joint ML estimator and compare this performance to the Cramer-Rao lower bound. Finally, the authors discuss model assumptions and potential uses of the joint estimation strategy.

  15. Rapid Estimation Method for State of Charge of Lithium-Ion Battery Based on Fractional Continual Variable Order Model

    Directory of Open Access Journals (Sweden)

    Xin Lu

    2018-03-01

    In recent years, fractional order models have been employed for state of charge (SOC) estimation, with the non-integer differentiation order expressed as a function of recursive factors defining the fractality of the charge distribution on porous electrodes. The battery SOC affects the fractal dimension of the charge distribution; therefore, the order of the fractional order model varies with the SOC under the same conditions. This paper proposes a new method to estimate the SOC. A fractional continuous variable order model is used to characterize the fractal morphology of the charge distribution. The order identification results show that there is a stable monotonic relationship between the fractional order and the SOC once the battery's internal electrochemical reactions reach equilibrium. This feature makes the proposed model particularly suitable for SOC estimation when the battery is in the resting state. Moreover, a fast iterative method based on the proposed model is introduced for SOC estimation. The experimental results show that the proposed iterative method can quickly estimate the SOC within a few iterations while maintaining high estimation accuracy.

  16. Model-based estimation for dynamic cardiac studies using ECT

    International Nuclear Information System (INIS)

    Chiao, P.C.; Rogers, W.L.; Clinthorne, N.H.; Fessler, J.A.; Hero, A.O.

    1994-01-01

    In this paper, the authors develop a strategy for joint estimation of physiological parameters and myocardial boundaries using ECT (Emission Computed Tomography). The authors construct an observation model to relate parameters of interest to the projection data and to account for limited ECT system resolution and measurement noise. The authors then use a maximum likelihood (ML) estimator to jointly estimate all the parameters directly from the projection data without reconstruction of intermediate images. The authors also simulate myocardial perfusion studies based on a simplified heart model to evaluate the performance of the model-based joint ML estimator and compare this performance to the Cramer-Rao lower bound. Finally, model assumptions and potential uses of the joint estimation strategy are discussed

  17. A Novel Data-Driven Fast Capacity Estimation of Spent Electric Vehicle Lithium-ion Batteries

    Directory of Open Access Journals (Sweden)

    Caiping Zhang

    2014-12-01

    Fast capacity estimation is a key enabling technique for the second life of lithium-ion batteries, given the hard work involved in determining the capacity of a large number of used electric vehicle (EV) batteries. This paper tries to make three contributions to the existing literature through a robust and advanced algorithm: (1) a three-layer back propagation artificial neural network (BP ANN) model is developed to estimate the battery capacity; the model employs internal resistance, which expresses the battery's kinetics, as the model input, enabling fast capacity estimation; (2) an estimation error model is established to investigate the relationship between the robustness coefficient and the regression coefficient, revealing that the commonly used ANN capacity estimation algorithm is flawed in its robustness to parameter measurement uncertainties; (3) the law of large numbers is used as the basis for a proposed robust estimation approach, which optimally balances estimation accuracy and disturbance rejection. An optimal range of the threshold for the robustness coefficient is also discussed and proposed. Experimental results demonstrate the efficacy and robustness of the BP ANN model together with the proposed identification approach, which can provide an important basis for large-scale second-life applications of batteries.
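
    A minimal sketch of contribution (1), a small back propagation network mapping an internal-resistance feature to capacity, is given below; the synthetic resistance-capacity trend, the network size, and the training constants are assumptions for illustration only.

```python
# Minimal three-layer backpropagation network sketch: internal resistance in,
# capacity out, trained by plain gradient descent on a synthetic fade trend.
import numpy as np

rng = np.random.default_rng(0)
r = rng.uniform(1.0, 3.0, (200, 1))                          # internal resistance, mOhm
cap = 2.2 - 0.5 * (r - 1.0) + rng.normal(0, 0.02, (200, 1))  # assumed fade trend, Ah

W1, b1 = rng.normal(0, 0.5, (1, 8)), np.zeros(8)             # input -> hidden
W2, b2 = rng.normal(0, 0.5, (8, 1)), np.zeros(1)             # hidden -> output
lr = 0.05
for _ in range(3000):
    h = np.tanh(r @ W1 + b1)                                 # forward pass
    y = h @ W2 + b2
    g_y = 2.0 * (y - cap) / len(r)                           # dMSE/dy
    g_W2, g_b2 = h.T @ g_y, g_y.sum(0)
    g_h = g_y @ W2.T * (1.0 - h ** 2)                        # backprop through tanh
    g_W1, g_b1 = r.T @ g_h, g_h.sum(0)
    W1 -= lr * g_W1; b1 -= lr * g_b1
    W2 -= lr * g_W2; b2 -= lr * g_b2

print((np.tanh(np.array([[2.0]]) @ W1 + b1) @ W2 + b2).item())  # capacity at r = 2.0
```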

  18. A Fast DOA Estimation Algorithm Based on Polarization MUSIC

    Directory of Open Access Journals (Sweden)

    R. Guo

    2015-04-01

    A fast DOA estimation algorithm developed from MUSIC, which also benefits from processing the signals' polarization information, is presented. Besides performance enhancement in precision and resolution, the proposed algorithm can be applied to various forms of polarization-sensitive arrays, without specific requirements on the array's pattern. By exploiting the continuity of the spatial spectrum, a huge amount of computation incurred in the calculation of the 4-D spatial spectrum is avoided. Performance and computational complexity analyses of the proposed algorithm are discussed and simulation results are presented. Compared with conventional MUSIC, the proposed algorithm is shown to have a considerable advantage in precision and resolution, with a low computational complexity proportional to that of a conventional 2-D MUSIC.

  19. Econometric modelling of certain nuclear power systems based on thermal and fast breeder reactors

    International Nuclear Information System (INIS)

    Pavelescu, M.; Pioaru, C.; Ursu, I.

    1988-01-01

    Certain known economic analysis models for solitary LMFBR fast breeder and CANDU thermal reactors are presented, based on the concepts of discounting and levelization. These models are subsequently utilized as a basis for establishing an original model for the econometric analysis of systems of thermal reactors and/or fast breeder reactors. Case studies are then conducted with the systems: 1-CANDU, 2-LMFBR, 3-CANDU + LMFBR, which enables us to draw certain interesting conclusions for a long-range nuclear power policy. (author)

  20. ANFIS-Based Modeling for Photovoltaic Characteristics Estimation

    Directory of Open Access Journals (Sweden)

    Ziqiang Bi

    2016-09-01

    Due to the high cost of photovoltaic (PV) modules, an accurate performance estimation method is significantly valuable for studying the electrical characteristics of PV generation systems. Conventional analytical PV models are usually composed of nonlinear exponential functions, and a good number of unknown parameters must be identified before use. In this paper, an adaptive-network-based fuzzy inference system (ANFIS) based modeling method is proposed to predict the current-voltage characteristics of PV modules. The effectiveness of the proposed modeling method is evaluated through comparison with Villalva's model, a radial basis function neural network (RBFNN) based model, and a support vector regression (SVR) based model. Simulation and experimental results confirm both the feasibility and the effectiveness of the proposed method.

  1. Fast simulation of transport and adaptive permeability estimation in porous media

    Energy Technology Data Exchange (ETDEWEB)

    Berre, Inga

    2005-07-01

    The focus of the thesis is twofold: Both fast simulation of transport in porous media and adaptive estimation of permeability are considered. A short introduction that motivates the work on these topics is given in Chapter 1. In Chapter 2, the governing equations for one- and two-phase flow in porous media are presented. Overall numerical solution strategies for the two-phase flow model are also discussed briefly. The concepts of streamlines and time-of-flight are introduced in Chapter 3. Methods for computing streamlines and time-of-flight are also presented in this chapter. Subsequently, in Chapters 4 and 5, the focus is on simulation of transport in a time-of-flight perspective. In Chapter 4, transport of fluids along streamlines is considered. Chapter 5 introduces a different viewpoint based on the evolution of isocontours of the fluid saturation. While the first chapters focus on the forward problem, which consists in solving a mathematical model given the reservoir parameters, Chapters 6, 7 and 8 are devoted to the inverse problem of permeability estimation. An introduction to the problem of identifying spatial variability in reservoir permeability by inversion of dynamic production data is given in Chapter 6. In Chapter 7, adaptive multiscale strategies for permeability estimation are discussed. Subsequently, Chapter 8 presents a level-set approach for improving piecewise constant permeability representations. Finally, Chapter 9 summarizes the results obtained in the thesis; in addition, the chapter gives some recommendations and suggests directions for future work. In Part II, the following papers are included in the order they were completed: Paper A: A Streamline Front Tracking Method for Two- and Three-Phase Flow Including Capillary Forces. I. Berre, H. K. Dahle, K. H. Karlsen, and H. F. Nordhaug. In Fluid flow and transport in porous media: mathematical and numerical treatment (South Hadley, MA, 2001), volume 295 of Contemp. Math., pages 49…

  2. FPSoC-Based Architecture for a Fast Motion Estimation Algorithm in H.264/AVC

    Directory of Open Access Journals (Sweden)

    Obianuju Ndili

    2009-01-01

    There is an increasing need for high-quality video on low-power, portable devices. Possible target applications range from entertainment and personal communications to security and health care. While H.264/AVC answers the need for high-quality video at lower bit rates, it is significantly more complex than previous coding standards and thus results in greater power consumption in practical implementations. In particular, motion estimation (ME) in H.264/AVC consumes the largest share of power in an H.264/AVC encoder. It is therefore critical to speed up integer ME in H.264/AVC via fast motion estimation (FME) algorithms and hardware acceleration. In this paper, we present our hardware-oriented modifications to a hybrid FME algorithm, our architecture based on the modified algorithm, and our implementation and prototype on a PowerPC-based Field Programmable System on Chip (FPSoC). Our results show that the modified hybrid FME algorithm on average outperforms previous state-of-the-art FME algorithms, while its losses when compared with FSME, in terms of PSNR performance and computation time, are insignificant. We show that although our implementation platform is FPGA-based, our implementation results compare favourably with previous architectures implemented on ASICs. Finally, we also show an improvement over some existing architectures implemented on FPGAs.

  3. Optimal difference-based estimation for partially linear models

    KAUST Repository

    Zhou, Yuejin; Cheng, Yebin; Dai, Wenlin; Tong, Tiejun

    2017-01-01

    Difference-based methods have attracted increasing attention for analyzing partially linear models in the recent literature. In this paper, we first propose to solve the optimal sequence selection problem in difference-based estimation for the linear component. To achieve the goal, a family of new sequences and a cross-validation method for selecting the adaptive sequence are proposed. We demonstrate that the existing sequences are only extreme cases in the proposed family. Secondly, we propose a new estimator for the residual variance by fitting a linear regression method to some difference-based estimators. Our proposed estimator achieves the asymptotic optimal rate of mean squared error. Simulation studies also demonstrate that our proposed estimator performs better than the existing estimator, especially when the sample size is small and the nonparametric function is rough.
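
    As a simple illustration of why differencing works, the sketch below applies the classic first-order (Rice) difference-based variance estimator to a smooth signal plus noise; the paper's optimal sequences and regression-combined estimator go beyond this special case.

```python
# Rice-type difference-based variance estimator: differencing nearby observations
# cancels the smooth (nonparametric) component, leaving mostly noise.
import numpy as np

def rice_variance(y):
    d = np.diff(y)                        # first differences remove the smooth trend
    return (d @ d) / (2.0 * (len(y) - 1))

rng = np.random.default_rng(2)
t = np.linspace(0, 1, 500)
y = np.sin(4 * np.pi * t) + rng.normal(0, 0.3, t.size)
print(rice_variance(y))                   # approx. 0.09 = 0.3 ** 2
```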

  4. Optimal difference-based estimation for partially linear models

    KAUST Repository

    Zhou, Yuejin

    2017-12-16

    Difference-based methods have attracted increasing attention for analyzing partially linear models in the recent literature. In this paper, we first propose to solve the optimal sequence selection problem in difference-based estimation for the linear component. To achieve the goal, a family of new sequences and a cross-validation method for selecting the adaptive sequence are proposed. We demonstrate that the existing sequences are only extreme cases in the proposed family. Secondly, we propose a new estimator for the residual variance by fitting a linear regression method to some difference-based estimators. Our proposed estimator achieves the asymptotic optimal rate of mean squared error. Simulation studies also demonstrate that our proposed estimator performs better than the existing estimator, especially when the sample size is small and the nonparametric function is rough.

  5. SAD PROCESSOR FOR MULTIPLE MACROBLOCK MATCHING IN FAST SEARCH VIDEO MOTION ESTIMATION

    Directory of Open Access Journals (Sweden)

    Nehal N. Shah

    2015-02-01

    Full Text Available Motion estimation is a very important but computationally complex task in video coding. The process of determining motion vectors based on the temporal correlation of consecutive frames is used for video compression. In order to reduce the computational complexity of motion estimation and maintain the quality of encoding during motion compensation, different fast search techniques are available. These block-based motion estimation algorithms use the sum of absolute differences (SAD) between the corresponding macroblock in the current frame and all the candidate macroblocks in the reference frame to identify the best match. Existing implementations can perform SAD between two blocks using a sequential or pipelined approach, but performing multi-operand SAD in a single clock cycle with optimized resources is the state of the art. In this paper various parallel architectures for computation of the fixed block size SAD are evaluated, and a fast parallel SAD architecture with optimized resources is proposed. Further, a SAD processor is described with 9 processing elements, which can be configured for any existing fast search block matching algorithm. The proposed SAD processor consumes 7% fewer adders than an existing implementation for one processing element. Using nine processing elements it can process 84 HD frames per second in the worst case, which is a good outcome for real-time implementation; in the average case the architecture processes 325 HD frames per second.
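
    As a software illustration of what the hardware computes, the sketch below evaluates the SAD between one current macroblock and a list of candidate macroblocks sequentially; the proposed processor performs this multi-operand SAD in parallel within a single clock cycle, which plain NumPy cannot express. All names and block sizes are illustrative.

        import numpy as np

        def sad_multi(current_mb, ref_frame, candidates, n=16):
            """SAD between one current macroblock and many candidate blocks.

            current_mb : (n, n) block from the current frame
            ref_frame  : 2-D reference frame
            candidates : list of (row, col) top-left positions in ref_frame
            """
            sads = []
            for r, c in candidates:
                ref_mb = ref_frame[r:r + n, c:c + n].astype(int)
                sads.append(int(np.abs(current_mb.astype(int) - ref_mb).sum()))
            best = int(np.argmin(sads))          # best match = minimum SAD
            return candidates[best], sads[best]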

  6. Mathematical model of transmission network static state estimation

    Directory of Open Access Journals (Sweden)

    Ivanov Aleksandar

    2012-01-01

    Full Text Available In this paper the characteristics and capabilities of a static state estimator for the power transmission network are presented. The solution process for the mathematical model, including the measurement errors and their processing, is developed. To evaluate the difference between the general state estimation model and the fast decoupled state estimation model, both models are applied to an example and the results so derived are compared.
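
    Static state estimation is conventionally posed as a weighted least squares (WLS) problem; both the general and the fast decoupled formulations build on the step sketched below. This is a generic textbook WLS step on a linearized measurement model, not code from the paper; H, z, and sigma are placeholders.

        import numpy as np

        def wls_state_estimate(H, z, sigma):
            """One linear WLS step of static state estimation.

            H     : (m, n) measurement matrix (linearized model z = H x + e)
            z     : (m,) measurement vector
            sigma : (m,) standard deviations of the measurement errors
            """
            W = np.diag(1.0 / sigma**2)            # weight = inverse error variance
            G = H.T @ W @ H                        # gain matrix
            x_hat = np.linalg.solve(G, H.T @ W @ z)
            residuals = z - H @ x_hat              # available for bad-data processing
            return x_hat, residuals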

  7. Teletactile System Based on Mechanical Properties Estimation

    Directory of Open Access Journals (Sweden)

    Mauro M. Sette

    2011-01-01

    Full Text Available Tactile feedback is a major missing feature in minimally invasive procedures; it is an essential means of diagnosis and orientation during surgery. Previous works have presented a remote palpation feedback system based on the coupling between a pressure sensor and a general haptic interface. Here a new approach is presented based on the direct estimation of the tissue mechanical properties and their presentation to the operator by means of a haptic interface. The approach poses different technical difficulties, and solutions are proposed: the implementation of a fast Young’s modulus estimation algorithm, the implementation of a real-time finite element model, and the implementation of a stiffness estimation approach in order to guarantee the system’s stability. The work concludes with an experimental evaluation of the whole system.

  8. Weibull Parameters Estimation Based on Physics of Failure Model

    DEFF Research Database (Denmark)

    Kostandyan, Erik; Sørensen, John Dalsgaard

    2012-01-01

    Reliability estimation procedures are discussed for the example of fatigue development in solder joints using a physics of failure model. The accumulated damage is estimated based on a physics of failure model, the Rainflow counting algorithm and the Miner’s rule. A threshold model is used... for degradation modeling and failure criteria determination. The time dependent accumulated damage is assumed linearly proportional to the time dependent degradation level. It is observed that the deterministic accumulated damage at the level of unity closely estimates the characteristic fatigue life of Weibull...

  9. Semiparametric Gaussian copula models : Geometry and efficient rank-based estimation

    NARCIS (Netherlands)

    Segers, J.; van den Akker, R.; Werker, B.J.M.

    2014-01-01

    We propose, for multivariate Gaussian copula models with unknown margins and structured correlation matrices, a rank-based, semiparametrically efficient estimator for the Euclidean copula parameter. This estimator is defined as a one-step update of a rank-based pilot estimator in the direction of

  10. Using satellite-based rainfall estimates for streamflow modelling: Bagmati Basin

    Science.gov (United States)

    Shrestha, M.S.; Artan, Guleid A.; Bajracharya, S.R.; Sharma, R. R.

    2008-01-01

    In this study, we have described a hydrologic modelling system that uses satellite-based rainfall estimates and weather forecast data for the Bagmati River Basin of Nepal. The hydrologic model described is the US Geological Survey (USGS) Geospatial Stream Flow Model (GeoSFM). The GeoSFM is a spatially semidistributed, physically based hydrologic model. We have used the GeoSFM to estimate the streamflow of the Bagmati Basin at Pandhera Dovan hydrometric station. To determine the hydrologic connectivity, we have used the USGS Hydro1k DEM dataset. The model was forced by daily estimates of rainfall and evapotranspiration derived from weather model data. The rainfall estimates used for the modelling are those produced by the National Oceanic and Atmospheric Administration Climate Prediction Centre and observed at ground rain gauge stations. The model parameters were estimated from globally available soil and land cover datasets – the Digital Soil Map of the World by FAO and the USGS Global Land Cover dataset. The model predicted the daily streamflow at Pandhera Dovan gauging station. The comparison of the simulated and observed flows at Pandhera Dovan showed that the GeoSFM model performed well in simulating the flows of the Bagmati Basin.

  11. Online state of charge and model parameter co-estimation based on a novel multi-timescale estimator for vanadium redox flow battery

    International Nuclear Information System (INIS)

    Wei, Zhongbao; Lim, Tuti Mariana; Skyllas-Kazacos, Maria; Wai, Nyunt; Tseng, King Jet

    2016-01-01

    Highlights: • Battery model parameter and SOC co-estimation is investigated. • The model parameters and OCV are decoupled and estimated independently. • Multiple timescales are adopted to improve precision and stability. • SOC is estimated online without using the open-circuit cell. • The method is robust to aging levels, flow rates, and battery chemistries. - Abstract: A key function of a battery management system (BMS) is to provide accurate information on the state of charge (SOC) in real time, and this depends directly on precise model parameterization. In this paper, a novel multi-timescale estimator is proposed to estimate the model parameters and SOC for a vanadium redox flow battery (VRB) in real time. The model parameters and OCV are decoupled and estimated independently, effectively avoiding the possibility of cross interference between them. The analysis of model sensitivity, stability, and precision suggests the necessity of adopting a different timescale for each estimator. Experiments are conducted to assess the performance of the proposed method. Results reveal that the model parameters are adapted online accurately, so that periodic calibration can be avoided. The online estimated terminal voltage and SOC are both benchmarked against reference values. The proposed multi-timescale estimator has the merits of fast convergence, high precision, and good robustness against initialization uncertainty, aging states, flow rates, and battery chemistries.
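
    The two-loop structure can be caricatured with a much-simplified sketch: a fast voltage-feedback SOC observer built on coulomb counting, with a comment marking where a slower loop would re-identify the model parameters. The model order (a bare R-int model), the OCV curve, and the gain below are invented for illustration and are not the estimator from the paper.

        def ocv(soc):
            """Assumed (illustrative) open-circuit-voltage curve of one cell."""
            return 1.2 + 0.3 * soc

        def soc_observer(i_meas, v_meas, dt, q_ah, r0=0.01, L=0.05, soc0=0.5):
            """Fast loop: coulomb counting corrected by voltage-error feedback.

            i_meas, v_meas : sampled current (A, discharge positive) and voltage (V)
            dt             : sample period (s); q_ah : capacity (Ah)
            """
            soc = soc0
            for i_k, v_k in zip(i_meas, v_meas):
                # A slow loop (e.g. recursive least squares on v, i samples)
                # would re-identify r0 here on a much longer timescale.
                v_model = ocv(soc) - r0 * i_k            # simple R-int model
                soc += dt * (-i_k / (3600 * q_ah)) + L * (v_k - v_model)
                soc = min(max(soc, 0.0), 1.0)
            return soc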

  12. Model-based estimation of finite population total in stratified sampling

    African Journals Online (AJOL)

    The work presented in this paper concerns the estimation of the finite population total under a model-based framework. A nonparametric regression approach is explored as a method of estimating the finite population total. The asymptotic properties of the estimators based on nonparametric regression are also developed under ...

  13. Fast three-material modeling with triple arch projection for electronic cleansing in CTC.

    Science.gov (United States)

    Lee, Hyunna; Lee, Jeongjin; Kim, Bohyoung; Kim, Se Hyung; Shin, Yeong-Gil

    2014-07-01

    In this paper, we propose a fast three-material modeling for electronic cleansing (EC) in computed tomographic colonography. Using a triple arch projection, our three-material modeling provides a very quick estimate of the three-material fractions to remove ridge-shaped artifacts at the T-junctions where air, soft-tissue (ST), and tagged residues (TRs) meet simultaneously. In our approach, colonic components including air, TR, the layer between air and TR, the layer between ST and TR (L(ST/TR)), and the T-junction are first segmented. Subsequently, the material fraction of ST for each voxel in L(ST/TR) and the T-junction is determined. Two-material fractions of the voxels in L(ST/TR) are derived based on a two-material transition model. On the other hand, three-material fractions of the voxels in the T-junction are estimated based on our fast three-material modeling with triple arch projection. Finally, the CT density value of each voxel is updated based on our fold-preserving reconstruction model. Experimental results using ten clinical datasets demonstrate that the proposed three-material modeling successfully removed the T-junction artifacts and clearly reconstructed the whole colon surface while preserving the submerged folds well. Furthermore, compared with the previous three-material transition model, the proposed three-material modeling resulted in about a five-fold increase in speed with the better preservation of submerged folds and the similar level of cleansing quality in T-junction regions.

  14. Evaluation of a morphing based method to estimate muscle attachment sites of the lower extremity

    NARCIS (Netherlands)

    Pellikaan, P.; van der Krogt, Marjolein; Carbone, Vincenzo; Fluit, René; Vigneron, L.M.; van Deun, J.; Verdonschot, Nicolaas Jacobus Joseph; Koopman, Hubertus F.J.M.

    2014-01-01

    To generate subject-specific musculoskeletal models for clinical use, the location of muscle attachment sites needs to be estimated with accurate, fast and preferably automated tools. For this purpose, an automatic method was used to estimate the muscle attachment sites of the lower extremity, based

  15. Autonomous celestial navigation based on Earth ultraviolet radiance and fast gradient statistic feature extraction

    Science.gov (United States)

    Lu, Shan; Zhang, Hanmo

    2016-01-01

    To meet the requirement of autonomous orbit determination, this paper proposes a fast curve fitting method based on Earth ultraviolet features to obtain an accurate Earth vector direction and thereby achieve high-precision autonomous navigation. First, exploiting the stable character of Earth ultraviolet radiance and using atmospheric radiative transfer modelling software, the paper simulates the Earth ultraviolet radiation model at different times and chooses a proper observation band. Then a fast improved edge extraction method combining the Sobel operator and local binary patterns (LBP) is applied, which both eliminates noise efficiently and extracts Earth ultraviolet limb features accurately. The Earth's centroid location in each simulated image is then estimated via least squares fitting using part of the limb edges. Taking advantage of the estimated Earth vector direction and Earth distance, an Extended Kalman Filter (EKF) is finally applied to realize autonomous navigation. Experimental results indicate the proposed method achieves sub-pixel Earth centroid location estimation and greatly enhances autonomous celestial navigation precision.
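
    The limb-fitting step can be illustrated with the standard algebraic least squares (Kasa) circle fit, which recovers the centroid from a partial limb arc. This is a generic sketch on synthetic edge points, not the authors' exact formulation.

        import numpy as np

        def fit_circle(x, y):
            """Algebraic least squares circle fit (Kasa method).

            Solves [2x 2y 1] [a, b, c]^T = x^2 + y^2, where (a, b) is the
            centre and the radius is sqrt(c + a^2 + b^2).
            """
            A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
            rhs = x**2 + y**2
            (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
            r = np.sqrt(c + a**2 + b**2)
            return (a, b), r

        # Partial limb arc with noise: centroid estimation still works.
        t = np.linspace(0.2, 1.2, 80)                  # only part of the limb
        x = 240 + 200 * np.cos(t) + np.random.normal(0, 0.5, t.size)
        y = 260 + 200 * np.sin(t) + np.random.normal(0, 0.5, t.size)
        print(fit_circle(x, y))                        # close to (240, 260), r = 200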

  16. Fast Pedestrian Recognition Based on Multisensor Fusion

    Directory of Open Access Journals (Sweden)

    Hongyu Hu

    2012-01-01

    Full Text Available A fast pedestrian recognition algorithm based on multisensor fusion is presented in this paper. First, potential pedestrian locations are estimated by laser radar scanning in world coordinates, and their corresponding candidate regions in the image are located via camera calibration and a perspective mapping model. To avoid the time consumption in training and recognition caused by high-dimensional feature vectors, a region-of-interest-based integral histograms of oriented gradients (ROI-IHOG) feature extraction method is then proposed. A support vector machine (SVM) classifier is trained on a novel pedestrian sample dataset adapted to the urban road environment for online recognition. Finally, we test the validity of the proposed approach with several video sequences from realistic urban road scenarios. Reliable and timely performance is shown based on our multisensor fusion method.

  17. A model-based approach to estimating forest area

    Science.gov (United States)

    Ronald E. McRoberts

    2006-01-01

    A logistic regression model based on forest inventory plot data and transformations of Landsat Thematic Mapper satellite imagery was used to predict the probability of forest for 15 study areas in Indiana, USA, and 15 in Minnesota, USA. Within each study area, model-based estimates of forest area were obtained for circular areas with radii of 5 km, 10 km, and 15 km and...
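
    A model-based area estimate of this kind boils down to summing predicted forest probabilities over pixels. The sketch below illustrates the idea on synthetic data; the band transformations, labels, and pixel counts are made up and stand in for the plot data and imagery used in the study.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        # Hypothetical training data: spectral predictors at inventory plots
        # and a forest / non-forest label for each plot.
        rng = np.random.default_rng(1)
        X_plots = rng.normal(size=(300, 3))             # e.g. TM band transformations
        y_plots = (X_plots[:, 0] - X_plots[:, 1] + rng.normal(0, 0.5, 300)) > 0

        model = LogisticRegression().fit(X_plots, y_plots)

        # Model-based area estimate: sum of per-pixel forest probabilities
        # times the pixel area over the circular area of interest.
        X_pixels = rng.normal(size=(10000, 3))
        p_forest = model.predict_proba(X_pixels)[:, 1]
        pixel_area_ha = 0.09                            # 30 m Landsat pixel
        print(p_forest.sum() * pixel_area_ha, "ha of forest (synthetic)")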

  18. PARAMETER ESTIMATION AND MODEL SELECTION FOR INDOOR ENVIRONMENTS BASED ON SPARSE OBSERVATIONS

    Directory of Open Access Journals (Sweden)

    Y. Dehbi

    2017-09-01

    Full Text Available This paper presents a novel method for the parameter estimation and model selection for the reconstruction of indoor environments based on sparse observations. While most approaches for the reconstruction of indoor models rely on dense observations, we predict scenes of the interior with high accuracy in the absence of indoor measurements. We use a model-based top-down approach and incorporate strong but profound prior knowledge. The latter includes probability density functions for model parameters and sparse observations such as room areas and the building footprint. The floorplan model is characterized by linear and bi-linear relations with discrete and continuous parameters. We focus on the stochastic estimation of model parameters based on a topological model derived by combinatorial reasoning in a first step. A Gauss-Markov model is applied for estimation and simulation of the model parameters. Symmetries are represented and exploited during the estimation process. Background knowledge as well as observations are incorporated in a maximum likelihood estimation and model selection is performed with AIC/BIC. The likelihood is also used for the detection and correction of potential errors in the topological model. Estimation results are presented and discussed.

  19. Parameter Estimation and Model Selection for Indoor Environments Based on Sparse Observations

    Science.gov (United States)

    Dehbi, Y.; Loch-Dehbi, S.; Plümer, L.

    2017-09-01

    This paper presents a novel method for the parameter estimation and model selection for the reconstruction of indoor environments based on sparse observations. While most approaches for the reconstruction of indoor models rely on dense observations, we predict scenes of the interior with high accuracy in the absence of indoor measurements. We use a model-based top-down approach and incorporate strong but profound prior knowledge. The latter includes probability density functions for model parameters and sparse observations such as room areas and the building footprint. The floorplan model is characterized by linear and bi-linear relations with discrete and continuous parameters. We focus on the stochastic estimation of model parameters based on a topological model derived by combinatorial reasoning in a first step. A Gauss-Markov model is applied for estimation and simulation of the model parameters. Symmetries are represented and exploited during the estimation process. Background knowledge as well as observations are incorporated in a maximum likelihood estimation and model selection is performed with AIC/BIC. The likelihood is also used for the detection and correction of potential errors in the topological model. Estimation results are presented and discussed.

  20. Non-destructive fast charging algorithm of lithium-ion batteries based on the control-oriented electrochemical model

    International Nuclear Information System (INIS)

    Chu, Zhengyu; Feng, Xuning; Lu, Languang; Li, Jianqiu; Han, Xuebing; Ouyang, Minggao

    2017-01-01

    Highlights: • A novel non-destructive fast charging algorithm for lithium-ion batteries is proposed. • A closed-loop observer of lithium deposition status is constructed based on the SP2D model. • The charging current is modified online using feedback of the lithium deposition status. • The algorithm can shorten the charging time and can be used for charging from different initial SOCs. • Post-mortem observation and degradation tests show that no lithium deposition occurs during fast charging. -- Abstract: Fast charging is critical for the application of lithium-ion batteries in electric vehicles. Conventional fast charging algorithms may shorten the cycle life of lithium-ion batteries and induce safety problems, such as internal short circuit caused by lithium deposition at the negative electrode. In this paper, a novel, non-destructive model-based fast charging algorithm is proposed. The fast charging algorithm is composed of two closed loops. The first loop includes an anode over-potential observer that can observe the status of lithium deposition online, whereas the second loop includes a feedback structure that modifies the current based on the observed status of lithium deposition. The charging algorithm raises the charging current to maintain the observed anode over-potential near the preset threshold potential. Therefore, the fast charging algorithm can decrease the charging time while protecting the health of the battery. The fast charging algorithm is validated on a commercial large-format nickel cobalt manganese/graphite cell. The results showed that 96.8% of the battery capacity can be charged within 52 min. Post-mortem observation of the surface of the negative electrode and degradation tests revealed that the fast charging algorithm proposed here protected the battery from lithium deposition.
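
    The second loop amounts to a feedback law that pushes the charging current up only while the observed anode over-potential stays safely above the lithium-plating threshold. A toy proportional version is sketched below; the gains, limits, and the assumption of a ready-made over-potential observation (standing in for the SP2D-based observer) are all illustrative, not the paper's controller.

        def fast_charge_step(i_chg, eta_obs, eta_thresh=0.0, margin=0.02,
                             k_p=5.0, i_max=100.0, dt=1.0):
            """One control update of a plating-aware charging loop (toy sketch).

            i_chg   : present charging current (A)
            eta_obs : observed anode over-potential (V); plating risk near eta_thresh
            margin  : regulate eta_obs towards eta_thresh + margin
            """
            error = eta_obs - (eta_thresh + margin)   # positive -> room to push harder
            i_next = i_chg + k_p * error * dt
            return min(max(i_next, 0.0), i_max)       # clamp to feasible current range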

  1. A new preprocessing parameter estimation based on geodesic active contour model for automatic vestibular neuritis diagnosis.

    Science.gov (United States)

    Ben Slama, Amine; Mouelhi, Aymen; Sahli, Hanene; Manoubi, Sondes; Mbarek, Chiraz; Trabelsi, Hedi; Fnaiech, Farhat; Sayadi, Mounir

    2017-07-01

    The diagnosis of vestibular neuritis (VN) presents many difficulties for traditional assessment methods. This paper deals with a fully automatic VN diagnostic system based on nystagmus parameter estimation using a pupil detection algorithm. A geodesic active contour model is implemented to find an accurate segmentation region of the pupil. The novelty of the proposed algorithm is to speed up the standard segmentation by using a specific mask located on the region of interest, which allows a drastic reduction in computing time together with high performance and accuracy of the obtained results. After applying this fast segmentation algorithm, the estimated parameters are represented in the time and frequency domains. A principal component analysis (PCA) selection procedure is then applied to obtain a reduced number of estimated parameters, which are used to train a multi neural network (MNN). Experimental results on 90 eye movement videos show the effectiveness and accuracy of the proposed estimation algorithm versus previous work. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. Line impedance estimation using model based identification technique

    DEFF Research Database (Denmark)

    Ciobotaru, Mihai; Agelidis, Vassilios; Teodorescu, Remus

    2011-01-01

    The estimation of the line impedance can be used in the control of numerous grid-connected systems, such as active filters, islanding detection techniques, non-linear current controllers, and detection of the on/off grid operation mode. Therefore, estimating the line impedance can add extra functions... into the operation of grid-connected power converters. This paper describes a quasi-passive method for estimating the line impedance of the distribution electricity network. The method uses a model-based identification technique to obtain the resistive and inductive parts of the line impedance. The quasi...

  3. Groundwater Modelling For Recharge Estimation Using Satellite Based Evapotranspiration

    Science.gov (United States)

    Soheili, Mahmoud; (Tom) Rientjes, T. H. M.; (Christiaan) van der Tol, C.

    2017-04-01

    Groundwater movement is influenced by several factors and processes in the hydrological cycle, of which recharge is of high relevance. Since the amount of extractable aquifer water directly relates to the recharge amount, estimation of recharge is a prerequisite of groundwater resources management. Recharge is highly affected by water loss mechanisms, the major one of which is actual evapotranspiration (ETa). It is, therefore, essential to have a detailed assessment of the impact of ETa on groundwater recharge. The objective of this study was to evaluate how recharge was affected when satellite-based evapotranspiration was used instead of in-situ based ETa in the Salland area, the Netherlands. The Methodology for Interactive Planning for Water Management (MIPWA) model setup, which includes a groundwater model for the northern part of the Netherlands, was used for recharge estimation. The Surface Energy Balance Algorithm for Land (SEBAL) based actual evapotranspiration maps from Waterschap Groot Salland were also used. Comparison of the SEBAL-based ETa estimates with in-situ based estimates in the Netherlands showed that these SEBAL estimates were not reliable; as such, the results could not serve to calibrate root zone parameters in the CAPSIM model. The annual cumulative ETa map produced by the model showed that the maximum amount of evapotranspiration occurs in mixed forest areas in the northeast and a portion of the central parts. Estimates ranged from 579 mm to a minimum of 0 mm in the highest elevated areas with woody vegetation in the southeast of the region. Variations in mean seasonal hydraulic head and groundwater level for each layer showed that the hydraulic gradient follows elevation in the Salland area from southeast (maximum) to northwest (minimum) of the region, which depicts the groundwater flow direction. The mean seasonal water balance in the CAPSIM part was evaluated to represent the recharge estimation in the first layer. The highest estimated recharge flux was for autumn

  4. A Fast LMMSE Channel Estimation Method for OFDM Systems

    Directory of Open Access Journals (Sweden)

    Zhou Wen

    2009-01-01

    Full Text Available A fast linear minimum mean square error (LMMSE) channel estimation method is proposed for Orthogonal Frequency Division Multiplexing (OFDM) systems. In comparison with conventional LMMSE channel estimation, the proposed method does not require statistical knowledge of the channel in advance and avoids the inversion of a large-dimension matrix by using the fast Fourier transform (FFT), so the computational complexity can be reduced significantly. The normalized mean square errors (NMSEs) of the proposed method and the conventional LMMSE estimation have been derived. Numerical results show that the NMSE of the proposed method is very close to that of the conventional LMMSE method, which is also verified by computer simulation. In addition, computer simulation shows that the performance of the proposed method is almost the same as that of the conventional LMMSE method in terms of bit error rate (BER).
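
    The complexity saving comes from replacing the large matrix inversion of conventional LMMSE with FFT-domain filtering: if the channel correlation matrix is approximated as circulant, the LMMSE smoother diagonalizes in the Fourier basis. The sketch below shows that general trick, not necessarily the paper's exact derivation; the eigenvalue profile and SNR are left as inputs.

        import numpy as np

        def lmmse_fft(h_ls, lam, snr):
            """Apply an LMMSE smoother to a least squares channel estimate.

            Assumes the channel correlation matrix R is (approximately)
            circulant, so the smoother W = F^H diag(lam / (lam + 1/snr)) F
            reduces to per-mode filtering in the FFT domain.

            h_ls : (m,) least squares channel estimate over the subcarriers
            lam  : (m,) eigenvalues of the circulant correlation matrix
            """
            gains = lam / (lam + 1.0 / snr)          # per-mode Wiener gains
            return np.fft.ifft(gains * np.fft.fft(h_ls))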

  5. Consumers' estimation of calorie content at fast food restaurants: cross sectional observational study.

    Science.gov (United States)

    Block, Jason P; Condon, Suzanne K; Kleinman, Ken; Mullen, Jewel; Linakis, Stephanie; Rifas-Shiman, Sheryl; Gillman, Matthew W

    2013-05-23

    To investigate estimation of calorie (energy) content of meals from fast food restaurants in adults, adolescents, and school age children. Cross sectional study of repeated visits to fast food restaurant chains. 89 fast food restaurants in four cities in New England, United States: McDonald's, Burger King, Subway, Wendy's, KFC, Dunkin' Donuts. 1877 adults and 330 school age children visiting restaurants at dinnertime (evening meal) in 2010 and 2011; 1178 adolescents visiting restaurants after school or at lunchtime in 2010 and 2011. Estimated calorie content of purchased meals. Among adults, adolescents, and school age children, the mean actual calorie content of meals was 836 calories (SD 465), 756 calories (SD 455), and 733 calories (SD 359), respectively. A calorie is equivalent to 4.18 kJ. Compared with the actual figures, participants underestimated calorie content by a mean of 175 calories (95% confidence interval 145 to 205), 259 calories (227 to 291), and 175 calories (108 to 242), respectively. In multivariable linear regression models, underestimation of calorie content increased substantially as the actual meal calorie content increased. Adults and adolescents eating at Subway estimated 20% and 25% lower calorie content than McDonald's diners (relative change 0.80, 95% confidence interval 0.66 to 0.96; 0.75, 0.57 to 0.99). People eating at fast food restaurants underestimate the calorie content of meals, especially large meals. Education of consumers through calorie menu labeling and other outreach efforts might reduce the large degree of underestimation.

  6. Synchronous Generator Model Parameter Estimation Based on Noisy Dynamic Waveforms

    Science.gov (United States)

    Berhausen, Sebastian; Paszek, Stefan

    2016-01-01

    In recent years, system failures have occurred in many power systems all over the world, resulting in a lack of power supply to a large number of recipients. To minimize the risk of power failures, it is necessary to perform multivariate investigations, including simulations, of power system operating conditions. To conduct reliable simulations, a current database of parameters of the models of generating units, containing the models of synchronous generators, is necessary. In the paper, a method is presented for parameter estimation of a nonlinear synchronous generator model based on the analysis of selected transient waveforms caused by introducing a disturbance (in the form of a pseudorandom signal) in the generator voltage regulation channel. The parameter estimation was performed by minimizing an objective function defined as the mean square error of deviations between the measured waveforms and the waveforms calculated from the generator mathematical model. A hybrid algorithm was used for the minimization of the objective function. The paper also describes a filter system used for filtering the noisy measurement waveforms. Calculation results for the model of a 44 kW synchronous generator installed on a laboratory stand of the Institute of Electrical Engineering and Computer Science of the Silesian University of Technology are also given. The presented estimation method can be successfully applied to parameter estimation of different models of high-power synchronous generators operating in a power system.
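
    The estimation principle, stripped of the generator physics and the hybrid optimizer, is ordinary waveform fitting by least squares. The sketch below fits a toy damped oscillation in place of the synchronous generator model; the model form, true parameters, and noise level are all invented for illustration.

        import numpy as np
        from scipy.optimize import least_squares

        t = np.linspace(0, 2, 400)

        def model(theta, t):
            """Toy stand-in for the generator model: a damped oscillation."""
            amp, zeta, omega = theta
            return amp * np.exp(-zeta * t) * np.cos(omega * t)

        theta_true = (1.0, 1.5, 12.0)
        v_meas = model(theta_true, t) + np.random.normal(0, 0.02, t.size)

        # Minimize the squared deviation between measured and simulated waveforms.
        res = least_squares(lambda th: model(th, t) - v_meas, x0=(0.5, 1.0, 10.0))
        print(res.x)   # recovers approximately (1.0, 1.5, 12.0)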

  7. Temporal validation for landsat-based volume estimation model

    Science.gov (United States)

    Renaldo J. Arroyo; Emily B. Schultz; Thomas G. Matney; David L. Evans; Zhaofei Fan

    2015-01-01

    Satellite imagery can potentially reduce the costs and time associated with ground-based forest inventories; however, for satellite imagery to provide reliable forest inventory data, it must produce consistent results from one time period to the next. The objective of this study was to temporally validate a Landsat-based volume estimation model in a four county study...

  8. A Dynamic Travel Time Estimation Model Based on Connected Vehicles

    Directory of Open Access Journals (Sweden)

    Daxin Tian

    2015-01-01

    Full Text Available With advances in connected vehicle technology, dynamic vehicle route guidance models are gradually becoming indispensable equipment for drivers. Traditional route guidance models are designed to direct a vehicle along the shortest path from the origin to the destination without considering dynamic traffic information. In this paper a dynamic travel time estimation model is presented which can collect and distribute traffic data based on connected vehicles. To estimate the real-time travel time more accurately, a road link dynamic dividing algorithm is proposed. The efficiency of the model is confirmed by simulations, and the experimental results prove the effectiveness of the travel time estimation method.

  9. Fast and Robust Nanocellulose Width Estimation Using Turbidimetry.

    Science.gov (United States)

    Shimizu, Michiko; Saito, Tsuguyuki; Nishiyama, Yoshiharu; Iwamoto, Shinichiro; Yano, Hiroyuki; Isogai, Akira; Endo, Takashi

    2016-10-01

    The dimensions of nanocelluloses are important factors in controlling their material properties. The present study reports a fast and robust method for estimating the widths of individual nanocellulose particles based on the turbidities of their water dispersions. Seven types of nanocellulose, including short and rigid cellulose nanocrystals and long and flexible cellulose nanofibers, are prepared via different processes. Their widths are calculated from the respective turbidity plots of their water dispersions, based on the theory of light scattering by thin and long particles. The turbidity-derived widths of the seven nanocelluloses range from 2 to 10 nm, and show good correlations with the thicknesses of nanocellulose particles spread on flat mica surfaces determined using atomic force microscopy. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  10. Fast estimation of defect profiles from the magnetic flux leakage signal based on a multi-power affine projection algorithm.

    Science.gov (United States)

    Han, Wenhua; Shen, Xiaohui; Xu, Jun; Wang, Ping; Tian, Guiyun; Wu, Zhengyang

    2014-09-04

    Magnetic flux leakage (MFL) inspection is one of the most important and sensitive nondestructive testing approaches. For online MFL inspection of a long-range railway track or oil pipeline, a fast and effective defect profile estimation method based on a multi-power affine projection algorithm (MAPA) is proposed, in which the depth at a sampling point is related not only to the MFL signals before it but also to those after it, and all of the sampling points related to one point appear in series or as multiple powers. Defect profile estimation has two steps: regulating a weight vector in an MAPA filter and estimating the defect profile with the MAPA filter. Both simulation and experimental data are used to test the performance of the proposed method. The results demonstrate that the proposed method exhibits high speed while keeping the estimated profiles close to the desired ones in a noisy environment, thereby meeting the demands of accurate online inspection.
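
    For reference, the standard (single-power) affine projection update that the MAPA generalizes looks as follows; the multi-power extension and the MFL-specific weight regulation of the paper are not reproduced here.

        import numpy as np

        def apa_update(w, X, d, mu=0.5, delta=1e-4):
            """One step of the standard affine projection algorithm (APA).

            w : (n,) current filter weights
            X : (n, p) matrix whose columns are the last p input vectors
            d : (p,) corresponding desired outputs
            """
            e = d - X.T @ w                           # a priori errors
            # Regularized projection onto the span of the last p inputs.
            w = w + mu * X @ np.linalg.solve(X.T @ X + delta * np.eye(X.shape[1]), e)
            return w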

  11. Fast focus estimation using frequency analysis in digital holography.

    Science.gov (United States)

    Oh, Seungtaik; Hwang, Chi-Young; Jeong, Il Kwon; Lee, Sung-Keun; Park, Jae-Hyeung

    2014-11-17

    A novel, fast frequency-based method to estimate the focus distance of a digital hologram for a single object is proposed. The focus distance is computed by analyzing the distribution of intersections of smoothed rays. The smoothed rays are determined by the directions of energy flow, which are computed from the local spatial frequency spectrum based on the windowed Fourier transform. Our method thus uses only the intrinsic frequency information of the optical field on the hologram and therefore does not require any sequential numerical reconstructions or the focus detection techniques of conventional photography, both of which are essential parts of previous methods. To show the effectiveness of our method, numerical results and analysis are presented as well.

  12. Improved air ventilation rate estimation based on a statistical model

    International Nuclear Information System (INIS)

    Brabec, M.; Jilek, K.

    2004-01-01

    A new approach to air ventilation rate estimation from CO measurement data is presented. The approach is based on a state-space dynamic statistical model, allowing for quick and efficient estimation. Underlying computations are based on Kalman filtering, whose practical software implementation is rather easy. The key property is the flexibility of the model, allowing various artificial regimens of CO level manipulation to be treated. The model is semi-parametric in nature and can efficiently handle time-varying ventilation rate. This is a major advantage, compared to some of the methods which are currently in practical use. After a formal introduction of the statistical model, its performance is demonstrated on real data from routine measurements. It is shown how the approach can be utilized in a more complex situation of major practical relevance, when time-varying air ventilation rate and radon entry rate are to be estimated simultaneously from concurrent radon and CO measurements
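
    During a tracer decay interval with no CO source, the mass balance gives C_{k+1} = C_k exp(-lambda*dt), so y_k = ln(C_k/C_{k+1})/dt is a noisy direct observation of the ventilation rate lambda. Modeling lambda as a random walk then yields the scalar Kalman filter sketched below; the noise variances are invented, and the paper's full state-space model is richer than this decay-only caricature.

        import numpy as np

        def track_ventilation_rate(co, dt, q=1e-4, r=1e-2, lam0=0.5, p0=1.0):
            """Scalar Kalman filter for a time-varying ventilation rate (1/h).

            co : CO tracer concentrations sampled every dt hours during decay
            q  : random-walk variance of lambda; r : measurement noise variance
            """
            lam, p, out = lam0, p0, []
            y = np.log(co[:-1] / co[1:]) / dt     # noisy observations of lambda
            for yk in y:
                p += q                            # predict (random-walk state)
                k = p / (p + r)                   # Kalman gain
                lam += k * (yk - lam)             # update
                p *= (1 - k)
                out.append(lam)
            return np.array(out)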

  13. Evaluation of Model Based State of Charge Estimation Methods for Lithium-Ion Batteries

    Directory of Open Access Journals (Sweden)

    Zhongyue Zou

    2014-08-01

    Full Text Available Four model-based State of Charge (SOC) estimation methods for lithium-ion (Li-ion) batteries are studied and evaluated in this paper. Unlike the existing literature, this work evaluates different aspects of the SOC estimation, such as the estimation error distribution, the estimation rise time, and the estimation time consumption. The equivalent model of the battery is introduced and the state function of the model is deduced. The four model-based SOC estimation methods are analyzed first. Simulations and experiments are then established to evaluate the four methods. Urban dynamometer driving schedule (UDDS) current profiles are applied to simulate the driving conditions of an electrified vehicle, and a genetic algorithm is utilized to identify the optimal parameters of the Li-ion battery model. Simulations with and without disturbance are carried out and the results are analyzed. A battery test workbench is established and a Li-ion battery is used in a hardware-in-the-loop experiment. Experimental results are plotted and analyzed according to the four aspects to evaluate the four model-based SOC estimation methods.

  14. Estimation of Compaction Parameters Based on Soil Classification

    Science.gov (United States)

    Lubis, A. S.; Muis, Z. A.; Hastuty, I. P.; Siregar, I. M.

    2018-02-01

    Factors that must be considered in soil compaction works are the type of soil material, field control, maintenance, and the availability of funds. These concerns raise the question of how to estimate the density of the soil with an implementation system that is proper, fast, and economical. This study aims to estimate the compaction parameters, i.e. the maximum dry unit weight (γdmax) and the optimum water content (wopt), based on soil classification. Each of 30 samples was tested for its index properties and compaction behaviour. All of the data from the laboratory test results were used to estimate the compaction parameter values by linear regression and by the Goswami model. From the results, the soil types were A-4, A-6, and A-7 according to AASHTO, and SC, SC-SM, and CL based on USCS. By linear regression, the estimates are (γdmax*) = 1.862 − 0.005·FINES − 0.003·LL for the maximum dry unit weight and (wopt*) = −0.607 + 0.362·FINES + 0.161·LL for the optimum water content. By the Goswami model (with equation Y = m·log G + k), the maximum dry unit weight (γdmax*) is estimated with m = −0.376 and k = 2.482, and the optimum water content (wopt*) with m = 21.265 and k = −32.421. For both equations a 95% confidence interval was obtained.
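
    The two regression fits quoted in the abstract translate directly into code. The helper below simply evaluates them; the units and the meaning of the Goswami parameter G are as defined in the paper, and the example input values are arbitrary.

        import math

        def compaction_estimates(fines_pct, ll_pct):
            """Linear-regression estimates of gamma_dmax and w_opt (from the abstract)."""
            gamma_dmax = 1.862 - 0.005 * fines_pct - 0.003 * ll_pct
            w_opt = -0.607 + 0.362 * fines_pct + 0.161 * ll_pct
            return gamma_dmax, w_opt

        def goswami_estimates(G):
            """Goswami-model estimates Y = m*log10(G) + k with the fitted constants."""
            gamma_dmax = -0.376 * math.log10(G) + 2.482
            w_opt = 21.265 * math.log10(G) - 32.421
            return gamma_dmax, w_opt

        print(compaction_estimates(fines_pct=60, ll_pct=35))   # arbitrary example input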

  15. UAV remote sensing atmospheric degradation image restoration based on multiple scattering APSF estimation

    Science.gov (United States)

    Qiu, Xiang; Dai, Ming; Yin, Chuan-li

    2017-09-01

    Unmanned aerial vehicle (UAV) remote imaging is affected by bad weather, and the obtained images have the disadvantages of low contrast, complex texture, and blurring. In this paper, we propose a blind deconvolution model based on multiple-scattering atmosphere point spread function (APSF) estimation to recover the remote sensing image. Following Narasimhan's analytical theory, a new multiple scattering restoration model is established based on the improved dichromatic model. Then, using the L0-norm sparse priors of the gradient and the dark channel to estimate the APSF blur kernel, the fast Fourier transform is used to recover the original clear image by Wiener filtering. Compared with other state-of-the-art methods, the proposed method can correctly estimate the blur kernel, effectively remove the atmospheric degradation phenomena, preserve image detail information, and increase the quality evaluation indexes.

  16. Consumers’ estimation of calorie content at fast food restaurants: cross sectional observational study

    Science.gov (United States)

    Condon, Suzanne K; Kleinman, Ken; Mullen, Jewel; Linakis, Stephanie; Rifas-Shiman, Sheryl; Gillman, Matthew W

    2013-01-01

    Objective To investigate estimation of calorie (energy) content of meals from fast food restaurants in adults, adolescents, and school age children. Design Cross sectional study of repeated visits to fast food restaurant chains. Setting 89 fast food restaurants in four cities in New England, United States: McDonald’s, Burger King, Subway, Wendy’s, KFC, Dunkin’ Donuts. Participants 1877 adults and 330 school age children visiting restaurants at dinnertime (evening meal) in 2010 and 2011; 1178 adolescents visiting restaurants after school or at lunchtime in 2010 and 2011. Main outcome measure Estimated calorie content of purchased meals. Results Among adults, adolescents, and school age children, the mean actual calorie content of meals was 836 calories (SD 465), 756 calories (SD 455), and 733 calories (SD 359), respectively. A calorie is equivalent to 4.18 kJ. Compared with the actual figures, participants underestimated calorie content by a mean of 175 calories (95% confidence interval 145 to 205), 259 calories (227 to 291), and 175 calories (108 to 242), respectively. In multivariable linear regression models, underestimation of calorie content increased substantially as the actual meal calorie content increased. Adults and adolescents eating at Subway estimated 20% and 25% lower calorie content than McDonald’s diners (relative change 0.80, 95% confidence interval 0.66 to 0.96; 0.75, 0.57 to 0.99). Conclusions People eating at fast food restaurants underestimate the calorie content of meals, especially large meals. Education of consumers through calorie menu labeling and other outreach efforts might reduce the large degree of underestimation. PMID:23704170

  17. ROBUST MOTION SEGMENTATION FOR HIGH DEFINITION VIDEO SEQUENCES USING A FAST MULTI-RESOLUTION MOTION ESTIMATION BASED ON SPATIO-TEMPORAL TUBES

    OpenAIRE

    Brouard , Olivier; Delannay , Fabrice; Ricordel , Vincent; Barba , Dominique

    2007-01-01

    4 pages; International audience; Motion segmentation methods are effective for tracking video objects. However, object segmentation methods based on motion need to know the global motion of the video in order to back-compensate it before computing the segmentation. In this paper, we propose a method which estimates the global motion of a High Definition (HD) video shot and then segments it using the remaining motion information. First, we develop a fast method for multi-resolution motion est...

  18. Sparsity-based fast CGH generation using layer-based approach for 3D point cloud model

    Science.gov (United States)

    Kim, Hak Gu; Jeong, Hyunwook; Ro, Yong Man

    2017-03-01

    Computer generated hologram (CGH) is becoming increasingly important for 3-D displays in various applications including virtual reality. In CGH, holographic fringe patterns are generated by numerically calculating them on computer simulation systems. However, a heavy computational cost is required to calculate the complex amplitude on the CGH plane for all points of a 3D object. This paper proposes a new fast CGH generation method based on the sparsity of the CGH for a 3D point cloud model. The aim of the proposed method is to significantly reduce computational complexity while maintaining the quality of the holographic fringe patterns. To that end, we present a new layer-based approach for calculating the complex amplitude distribution on the CGH plane by using the sparse FFT (sFFT). We observe that the CGH of a layer of a 3D object is sparse, so that the dominant CGH is rapidly generated from a small set of signals by sFFT. Experimental results have shown that the proposed method is one order of magnitude faster than recently reported fast CGH generation methods.

  19. Precision and accuracy in smFRET based structural studies—A benchmark study of the Fast-Nano-Positioning System

    Science.gov (United States)

    Nagy, Julia; Eilert, Tobias; Michaelis, Jens

    2018-03-01

    Modern hybrid structural analysis methods have opened new possibilities to analyze and resolve flexible protein complexes where conventional crystallographic methods have reached their limits. Here, the Fast-Nano-Positioning System (Fast-NPS), a Bayesian parameter estimation-based analysis method and software, is an interesting method since it allows for the localization of unknown fluorescent dye molecules attached to macromolecular complexes based on single-molecule Förster resonance energy transfer (smFRET) measurements. However, the precision, accuracy, and reliability of structural models derived from results based on such complex calculation schemes are oftentimes difficult to evaluate. Therefore, we present two proof-of-principle benchmark studies where we use smFRET data to localize supposedly unknown positions on a DNA as well as on a protein-nucleic acid complex. Since we use complexes where structural information is available, we can compare Fast-NPS localization to the existing structural data. In particular, we compare different dye models and discuss how both accuracy and precision can be optimized.

  20. Parallel Factor-Based Model for Two-Dimensional Direction Estimation

    Directory of Open Access Journals (Sweden)

    Nizar Tayem

    2017-01-01

    Full Text Available Two-dimensional (2D) Direction-of-Arrival (DOA) estimation of elevation and azimuth angles for noncoherent, mixed coherent and noncoherent, and coherent sources using extended three parallel uniform linear arrays (ULAs) is proposed. Most of the existing schemes have drawbacks in estimating 2D DOA for multiple narrowband incident sources, as follows: the use of a large number of snapshots, estimation failure for elevation and azimuth angles in the range typical of mobile communication, and inability to handle coherent sources. Moreover, DOA estimation for multiple sources requires complex pair-matching methods. The algorithm proposed in this paper is based on a first-order data matrix to overcome these problems. The main contributions of the proposed method are as follows: (1) it avoids the estimation failure problem using a new antenna configuration and estimates elevation and azimuth angles for coherent sources; (2) it reduces the estimation complexity by constructing Toeplitz data matrices, which are based on a single or a few snapshots; (3) it derives a parallel factor (PARAFAC) model to avoid pair-matching problems between multiple sources. Simulation results demonstrate the effectiveness of the proposed algorithm.

  1. Research on Single Base-Station Distance Estimation Algorithm in Quasi-GPS Ultrasonic Location System

    Energy Technology Data Exchange (ETDEWEB)

    Cheng, X C; Su, S J; Wang, Y K; Du, J B [Instrument Department, College of Mechatronics Engineering and Automation, National University of Defense Technology, ChangSha, Hunan, 410073 (China)

    2006-10-15

    In order to identify each base-station in a quasi-GPS ultrasonic location system, a unique pseudo-random code is assigned to each base-station. This article primarily studies the distance estimation problem between an Autonomous Guided Vehicle (AGV) and a single base-station, and establishes the ultrasonic spread-spectrum distance measurement Time Delay Estimation (TDE) model. Based on the above model, an envelope-correlation fast TDE algorithm based on the FFT is presented and analyzed. Experiments show that when the m-sequence used in the received signal is the same as that in the reference signal, a sharp peak appears in their envelope correlation function after processing by the above algorithm; otherwise, no prominent correlation value appears. Thus, the AGV can identify each base-station easily.

  2. Research on Single Base-Station Distance Estimation Algorithm in Quasi-GPS Ultrasonic Location System

    International Nuclear Information System (INIS)

    Cheng, X C; Su, S J; Wang, Y K; Du, J B

    2006-01-01

    In order to identify each base-station in a quasi-GPS ultrasonic location system, a unique pseudo-random code is assigned to each base-station. This article primarily studies the distance estimation problem between an Autonomous Guided Vehicle (AGV) and a single base-station, and establishes the ultrasonic spread-spectrum distance measurement Time Delay Estimation (TDE) model. Based on the above model, an envelope-correlation fast TDE algorithm based on the FFT is presented and analyzed. Experiments show that when the m-sequence used in the received signal is the same as that in the reference signal, a sharp peak appears in their envelope correlation function after processing by the above algorithm; otherwise, no prominent correlation value appears. Thus, the AGV can identify each base-station easily.
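
    The envelope-correlation TDE step can be sketched in a few lines: correlate the received signal with the reference m-sequence via FFT-based convolution, take the envelope with a Hilbert transform, and read the delay off the peak. The sequence length, sampling rate, delay, and noise level below are illustrative, not the system's actual parameters.

        import numpy as np
        from scipy.signal import fftconvolve, hilbert, max_len_seq

        fs, c = 1_000_000, 343.0                  # sample rate (Hz), speed of sound (m/s)
        ref = max_len_seq(10)[0] * 2.0 - 1.0      # +/-1 m-sequence (length 1023)

        # Simulated received signal: the same code delayed by 2.9 ms, plus noise.
        delay = int(0.0029 * fs)
        rx = np.zeros(8192)
        rx[delay:delay + ref.size] += ref
        rx += np.random.normal(0, 0.5, rx.size)

        corr = fftconvolve(rx, ref[::-1], mode="valid")   # FFT-based correlation
        env = np.abs(hilbert(corr))                       # envelope of the correlation
        tde = np.argmax(env) / fs                         # estimated time delay (s)
        print("distance estimate:", tde * c, "m")         # about 1 m for this setup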

  3. Estimation of pump operational state with model-based methods

    International Nuclear Information System (INIS)

    Ahonen, Tero; Tamminen, Jussi; Ahola, Jero; Viholainen, Juha; Aranto, Niina; Kestilae, Juha

    2010-01-01

    Pumps are widely used in industry, and they account for 20% of the industrial electricity consumption. Since the speed variation is often the most energy-efficient method to control the head and flow rate of a centrifugal pump, frequency converters are used with induction motor-driven pumps. Although a frequency converter can estimate the operational state of an induction motor without external measurements, the state of a centrifugal pump or other load machine is not typically considered. The pump is, however, usually controlled on the basis of the required flow rate or output pressure. As the pump operational state can be estimated with a general model having adjustable parameters, external flow rate or pressure measurements are not necessary to determine the pump flow rate or output pressure. Hence, external measurements could be replaced with an adjustable model for the pump that uses estimates of the motor operational state. Besides control purposes, modelling the pump operation can provide useful information for energy auditing and optimization purposes. In this paper, two model-based methods for pump operation estimation are presented. Factors affecting the accuracy of the estimation methods are analyzed. The applicability of the methods is verified by laboratory measurements and tests in two pilot installations. Test results indicate that the estimation methods can be applied to the analysis and control of pump operation. The accuracy of the methods is sufficient for auditing purposes, and the methods can inform the user if the pump is driven inefficiently.

  4. Optimal Orientation Planning and Control Deviation Estimation on FAST Cable-Driven Parallel Robot

    Directory of Open Access Journals (Sweden)

    Hui Li

    2014-03-01

    Full Text Available This paper is devoted to the theoretical treatment of optimal orientation planning and control deviation estimation for the FAST cable-driven parallel robot. Given the robot's characteristics, the solutions are obtained from two constrained optimizations, both of which are based on the equilibrium of the cabin and attention to the force allocation among the 6 cable tensions. A control algorithm is proposed based on position and force feedback. The analysis proves that the orientation control depends on the force feedback and on the optimal tension solution corresponding to the planned orientation. Finally, an estimation of the orientation deviation is given under the limit range of tension errors.

  5. Optimal multi-agent path planning for fast inverse modeling in UAV-based flood sensing applications

    KAUST Repository

    Abdelkader, Mohamed

    2014-05-01

    Floods are the most common natural disasters, causing thousands of casualties every year around the world. In particular, flash flood events are especially deadly because of the short timescales on which they occur. Unmanned air vehicles equipped with mobile microsensors could be capable of sensing flash floods in real time, saving lives and greatly improving the efficiency of the emergency response. However, one of the main issues arising in flood sensing is the difficulty of planning the path of the sensing agents in advance so as to obtain meaningful data as fast as possible. In this article, we present a fast numerical scheme to quickly compute the trajectories of a set of UAVs in order to maximize the accuracy of model parameter estimation over a time horizon. Simulation results are presented, a preliminary testbed is briefly described, and future research directions and problems are discussed. © 2014 IEEE.

  6. Teach it Yourself - Fast Modeling of Industrial Objects for 6D Pose Estimation

    DEFF Research Database (Denmark)

    Sølund, Thomas; Rajeeth Savarimuthu, Thiusius; Glent Buch, Anders

    2015-01-01

    In this paper, we present a vision system that allows a human to create new 3D models of novel industrial parts by placing the part in two different positions in the scene. The two shot modeling framework generates models with a precision that allows the model to be used for 6D pose estimation wi....... In addition, the models are applied in a pose estimation application, evaluated with 37 different scenes with 61 unique object poses. The pose estimation results show a mean translation error on 4.97 mm and a mean rotation error on 3.38 degrees....

  7. Predictive Modeling of Fast-Curing Thermosets in Nozzle-Based Extrusion

    Science.gov (United States)

    Xie, Jingjin; Randolph, Robert; Simmons, Gary; Hull, Patrick V.; Mazzeo, Aaron D.

    2017-01-01

    This work presents an approach to modeling the dynamic spreading and curing behavior of thermosets in nozzle-based extrusions. Thermosets cover a wide range of materials, some of which permit low-temperature processing with subsequent high-temperature and high-strength working properties. Extruding thermosets may overcome the limited working temperatures and strengths of conventional thermoplastic materials used in additive manufacturing. This project aims to produce technology for the fabrication of thermoset-based structures leveraging advances made in nozzle-based extrusion, such as fused deposition modeling (FDM), material jetting, and direct writing. Understanding the synergistic interactions between spreading and fast curing of extruded thermosetting materials will provide essential insights for applications that require accurate dimensional controls, such as additive manufacturing [1], [2] and centrifugal coating/forming [3]. Two types of thermally curing thermosets -- one being a soft silicone (Ecoflex 0050) and the other being a toughened epoxy (G/Flex) -- served as the test materials in this work to obtain models for cure kinetics and viscosity. The developed models align with extensive measurements made with differential scanning calorimetry (DSC) and rheology. DSC monitors the change in the heat of reaction, which reflects the rate and degree of cure at different crosslinking stages. Rheology measures the change in complex viscosity, shear moduli, yield stress, and other properties dictated by chemical composition. By combining DSC and rheological measurements, it is possible to establish a set of models profiling the cure kinetics and chemorheology without prior knowledge of chemical composition, which is usually necessary for sophisticated mechanistic modeling. In this work, we conducted both isothermal and dynamic measurements with both DSC and rheology. With the developed models, numerical simulations yielded predictions of diameter and height of

  8. HDDM: Hierarchical Bayesian estimation of the Drift-Diffusion Model in Python.

    Science.gov (United States)

    Wiecki, Thomas V; Sofer, Imri; Frank, Michael J

    2013-01-01

    The diffusion model is a commonly used tool to infer latent psychological processes underlying decision-making, and to link them to neural mechanisms based on response times. Although efficient open source software has been made available to quantitatively fit the model to data, current estimation methods require an abundance of response time measurements to recover meaningful parameters, and only provide point estimates of each parameter. In contrast, hierarchical Bayesian parameter estimation methods are useful for enhancing statistical power, allowing for simultaneous estimation of individual subject parameters and the group distribution that they are drawn from, while also providing measures of uncertainty in these parameters in the posterior distribution. Here, we present a novel Python-based toolbox called HDDM (hierarchical drift diffusion model), which allows fast and flexible estimation of the drift-diffusion model and the related linear ballistic accumulator model. HDDM requires fewer data per subject/condition than non-hierarchical methods, allows for full Bayesian data analysis, and can handle outliers in the data. Finally, HDDM supports the estimation of how trial-by-trial measurements (e.g., fMRI) influence decision-making parameters. This paper will first describe the theoretical background of the drift diffusion model and Bayesian inference. We then illustrate usage of the toolbox on a real-world data set from our lab. Finally, parameter recovery studies show that HDDM beats alternative fitting methods like the χ²-quantile method as well as maximum likelihood estimation. The software and documentation can be downloaded at: http://ski.clps.brown.edu/hddm_docs/

  9. HDDM: Hierarchical Bayesian estimation of the Drift-Diffusion Model in Python

    Directory of Open Access Journals (Sweden)

    Thomas V Wiecki

    2013-08-01

    Full Text Available The diffusion model is a commonly used tool to infer latent psychological processes underlying decision making, and to link them to neural mechanisms based on reaction times. Although efficient open source software has been made available to quantitatively fit the model to data, current estimation methods require an abundance of reaction time measurements to recover meaningful parameters, and only provide point estimates of each parameter. In contrast, hierarchical Bayesian parameter estimation methods are useful for enhancing statistical power, allowing for simultaneous estimation of individual subject parameters and the group distribution that they are drawn from, while also providing measures of uncertainty in these parameters in the posterior distribution. Here, we present a novel Python-based toolbox called HDDM (hierarchical drift diffusion model), which allows fast and flexible estimation of the drift-diffusion model and the related linear ballistic accumulator model. HDDM requires fewer data per subject/condition than non-hierarchical methods, allows for full Bayesian data analysis, and can handle outliers in the data. Finally, HDDM supports the estimation of how trial-by-trial measurements (e.g., fMRI) influence decision making parameters. This paper will first describe the theoretical background of the drift-diffusion model and Bayesian inference. We then illustrate usage of the toolbox on a real-world data set from our lab. Finally, parameter recovery studies show that HDDM beats alternative fitting methods like the χ²-quantile method as well as maximum likelihood estimation. The software and documentation can be downloaded at: http://ski.clps.brown.edu/hddm_docs
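
    Basic usage follows the pattern below, per the HDDM documentation; the CSV file name, the condition column, and the sampling settings are hypothetical.

        import hddm

        # Load response time data: expects columns such as 'rt' (seconds),
        # 'response' (0/1 accuracy coding), and 'subj_idx' for the hierarchy.
        data = hddm.load_csv('mydata.csv')            # hypothetical file

        # Hierarchical model; drift rate v allowed to vary by stimulus condition.
        model = hddm.HDDM(data, depends_on={'v': 'stim'})
        model.sample(2000, burn=200)                  # MCMC posterior sampling
        model.print_stats()                           # posterior summaries per parameter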

  10. Fast estimate of Hartley entropy in image sharpening

    Science.gov (United States)

    Krbcová, Zuzana; Kukal, Jaromír; Svihlik, Jan; Fliegel, Karel

    2016-09-01

    Two classes of linear IIR filters, Laplacian of Gaussian (LoG) and Difference of Gaussians (DoG), are frequently used as high-pass filters for contextual vision and edge detection. They are also used for image sharpening when linearly combined with the original image. The resulting sharpening filters are radially symmetric in the spatial and frequency domains. Our approach is based on a radial approximation of the unknown optimal filter, which is designed as a weighted sum of Gaussian filters with various radii. The novel filter is designed for MRI image enhancement, where the image intensity represents anatomical structure plus additive noise. We prefer the gradient norm of the Hartley entropy of the whole image intensity as the measure to be maximized for the best sharpening. The entropy estimation procedure is as fast as the FFT included in the filter, yet the estimate is a continuous function of the enhanced image intensities. A physically motivated heuristic is used to design the optimum sharpening filter by tuning its parameters. Our approach is compared with the Wiener filter on MRI images.
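
    As a rough sketch of the two ingredients named above — DoG-based sharpening and a Hartley (Rényi order-0) entropy estimate — the code below uses illustrative sigmas, weight, and histogram binning; it is not the paper's tuned continuous estimator.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_sharpen(image, sigma_small=1.0, sigma_large=2.0, amount=1.5):
    """Sharpen by adding back a Difference-of-Gaussians high-pass band."""
    img = image.astype(np.float64)
    band = gaussian_filter(img, sigma_small) - gaussian_filter(img, sigma_large)
    return img + amount * band

def hartley_entropy(image, bins=256):
    """Naive Hartley (Renyi order-0) entropy: log2 of occupied histogram bins."""
    hist, _ = np.histogram(image, bins=bins)
    return np.log2(np.count_nonzero(hist))
```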

  11. Advanced fuel cycle cost estimation model and its cost estimation results for three nuclear fuel cycles using a dynamic model in Korea

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Sungki, E-mail: sgkim1@kaeri.re.kr [Korea Atomic Energy Research Institute, 1045 Daedeokdaero, Yuseong-gu, Daejeon 305-353 (Korea, Republic of); Ko, Wonil [Korea Atomic Energy Research Institute, 1045 Daedeokdaero, Yuseong-gu, Daejeon 305-353 (Korea, Republic of); Youn, Saerom; Gao, Ruxing [University of Science and Technology, 217 Gajungro, Yuseong-gu, Daejeon 305-350 (Korea, Republic of); Bang, Sungsig, E-mail: ssbang@kaist.ac.kr [Korea Advanced Institute of Science and Technology, Department of Business and Technology Management, 291 Deahak-ro, Yuseong-gu, Daejeon 305-701 (Korea, Republic of)

    2015-11-15

    Highlights: • The nuclear fuel cycle cost using a new cost estimation model was analyzed. • The material flows of three nuclear fuel cycle options were calculated. • The generation cost of once-through was estimated to be 66.88 mills/kW h. • The generation cost of pyro-SFR recycling was estimated to be 78.06 mills/kW h. • The reactor cost was identified as the main cost driver of pyro-SFR recycling. - Abstract: The present study analyzes advanced nuclear fuel cycle cost estimation models, such as the different discount rate model, and their cost estimation results. To do so, an economic analysis of the nuclear fuel cycle cost of three options (direct disposal (once through), PWR–MOX (Mixed OXide fuel), and Pyro-SFR (Sodium-cooled Fast Reactor)), focusing on the cost estimation model, was conducted using a dynamic model. The fuel cycle cost estimation results show that a cost gap exists between the traditional same discount rate model and the advanced different discount rate model. However, this gap does not change the economic ranking of the nuclear fuel cycle options. In addition, the fuel cycle costs of OT (Once-Through) and Pyro-SFR recycling, based on the most likely values from a probabilistic cost estimation excluding reactor costs, were calculated to be 8.75 mills/kW h and 8.30 mills/kW h, respectively; that is, the Pyro-SFR recycling option was more economical than the direct disposal option. However, if the reactor cost is considered, the ranking of the two options (direct disposal vs. Pyro-SFR recycling) can change because of the high reactor cost of an SFR.
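
    The effect of a different-discount-rate model can be illustrated with a toy levelized-cost calculation in which the cash flows of each fuel cycle stage are discounted at stage-specific rates. All numbers below are invented for illustration and bear no relation to the paper's estimates.

```python
# Toy levelized cost: identical cash flows, single vs. stage-specific rates.
def levelized_cost(cash_flows, energy, rates):
    """Present-value cost per present-value energy; one rate per period."""
    pv_cost = sum(c / (1 + r) ** t for t, (c, r) in enumerate(zip(cash_flows, rates)))
    pv_energy = sum(e / (1 + r) ** t for t, (e, r) in enumerate(zip(energy, rates)))
    return pv_cost / pv_energy

costs = [100.0, 50.0, 50.0, 80.0]     # front-end, operation, operation, back-end
energy = [0.0, 1000.0, 1000.0, 0.0]   # energy delivered per period
same_rate = levelized_cost(costs, energy, [0.05] * 4)
mixed_rate = levelized_cost(costs, energy, [0.05, 0.05, 0.05, 0.03])
print(same_rate, mixed_rate)          # the two models yield different unit costs
```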

  12. Simple Attenuation Models of Metallic Cables Suitable for G.fast Frequencies

    Directory of Open Access Journals (Sweden)

    Pavel Lafata

    2015-01-01

    Full Text Available Recently, a new xDSL successor called G.fast, which can occupy frequencies up to 106 or 212 MHz, has been introduced in the ITU-T G.9700 series of recommendations. Moreover, a new model of transmission characteristics suitable for various types of metallic cables has been designed and described as well. The model is based on 9 parameters specified for each type of metallic cable and can provide accurate estimations. However, its complexity, together with the number of required parameters, makes its practical application questionable, since the most important metallic cable characteristic, the attenuation, can be estimated using much simpler models. Therefore, two innovative attenuation models suitable for frequencies up to 250 MHz were designed and are introduced in this paper. The main motivation was to achieve an accurate approximation of the attenuation characteristics of various types of metallic cables while maintaining low mathematical complexity and a small number of necessary parameters. Both models were compared with attenuation characteristics measured for a variety of real metallic cables and also with other standard attenuation models. The results are included in this article as well.
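
    The abstract does not reproduce the two proposed models, but a common low-complexity form for twisted-pair attenuation is alpha(f) ≈ k1·sqrt(f) + k2·f. The sketch below fits such a two-parameter form by linear least squares to synthetic data, purely to illustrate the few-parameter approach; the coefficients are invented.

```python
import numpy as np

def fit_attenuation(freq_hz, atten_db):
    """Least-squares fit of alpha(f) = k1*sqrt(f) + k2*f (a generic form,
    not the paper's specific models)."""
    A = np.column_stack([np.sqrt(freq_hz), freq_hz])
    k, *_ = np.linalg.lstsq(A, atten_db, rcond=None)
    return k  # (k1, k2)

f = np.linspace(1e6, 212e6, 200)
synthetic = 0.4e-3 * np.sqrt(f) + 2e-9 * f   # made-up "measured" attenuation
print(fit_attenuation(f, synthetic))
```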

  13. Model-based Small Area Estimates of Cancer Risk Factors and Screening Behaviors - Small Area Estimates

    Science.gov (United States)

    These model-based estimates use two surveys, the Behavioral Risk Factor Surveillance System (BRFSS) and the National Health Interview Survey (NHIS). The two surveys are combined using novel statistical methodology.

  14. Model-Based Estimation of Ankle Joint Stiffness.

    Science.gov (United States)

    Misgeld, Berno J E; Zhang, Tony; Lüken, Markus J; Leonhardt, Steffen

    2017-03-29

    We address the estimation of biomechanical parameters with wearable measurement technologies. In particular, we focus on the estimation of sagittal plane ankle joint stiffness in dorsiflexion/plantar flexion. For this estimation, a novel nonlinear biomechanical model of the lower leg was formulated that is driven by electromyographic signals. The model incorporates a two-dimensional kinematic description in the sagittal plane for the calculation of muscle lever arms and torques. To reduce estimation errors due to model uncertainties, a filtering algorithm is necessary that employs segmental orientation sensor measurements. Because of the model's inherent nonlinearities and nonsmooth dynamics, a square-root cubature Kalman filter was developed. The performance of the novel estimation approach was evaluated in silico and in an experimental procedure. The experimental study was conducted with body-worn sensors and a test-bench that was specifically designed to obtain reference angle and torque measurements for a single joint. Results show that the filter is able to reconstruct joint angle positions, velocities and torque, as well as joint stiffness, during experimental test bench movements.
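
    For orientation, the prediction step of a plain (non-square-root) cubature Kalman filter is sketched below; the paper's filter additionally propagates Cholesky factors for numerical robustness, which this minimal version omits, and the toy dynamics are invented.

```python
import numpy as np

def ckf_predict(mean, cov, f, process_noise):
    """One CKF prediction step using the 3rd-degree spherical-radial rule."""
    n = mean.size
    S = np.linalg.cholesky(cov)
    # 2n equally weighted cubature points: mean +/- sqrt(n) * S @ e_i
    points = np.hstack([mean[:, None] + np.sqrt(n) * S,
                        mean[:, None] - np.sqrt(n) * S])
    propagated = np.column_stack([f(points[:, i]) for i in range(2 * n)])
    pred_mean = propagated.mean(axis=1)
    dev = propagated - pred_mean[:, None]
    pred_cov = dev @ dev.T / (2 * n) + process_noise
    return pred_mean, pred_cov

# Toy usage with a mildly nonlinear 2-D transition.
f = lambda x: np.array([x[0] + 0.1 * x[1], 0.9 * x[1] + 0.01 * np.sin(x[0])])
m, P = ckf_predict(np.zeros(2), np.eye(2), f, 0.01 * np.eye(2))
print(m, P)
```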

  15. Fast Estimation of Expected Information Gain for Bayesian Experimental Design Based on Laplace Approximation

    KAUST Repository

    Long, Quan; Scavino, Marco; Tempone, Raul; Wang, Suojin

    2014-01-01

    Shannon-type expected information gain is an important utility in evaluating the usefulness of a proposed experiment that involves uncertainty. Its estimation, however, cannot rely solely on Monte Carlo sampling methods, which are generally too computationally expensive for realistic physical models, especially those involving the solution of stochastic partial differential equations. In this work we present a new methodology, based on the Laplace approximation of the posterior probability density function, to accelerate the estimation of expected information gain in the model parameters and predictive quantities of interest. Furthermore, in order to deal with the issue of dimensionality in a complex problem, we use sparse quadratures for the integration over the prior. We show the accuracy and efficiency of the proposed method via several nonlinear numerical examples, including a single-parameter design of a one-dimensional cubic polynomial function and the current pattern for impedance tomography.
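
    The key approximation can be stated compactly (a sketch in standard notation, assuming d parameters θ, data y, MAP point θ̂(y), and posterior covariance Σ(y)):

$$ I = \int\!\!\int p(y \mid \theta)\, p(\theta)\, \ln\frac{p(\theta \mid y)}{p(\theta)}\, \mathrm{d}y\, \mathrm{d}\theta, \qquad p(\theta \mid y) \approx \mathcal{N}\big(\hat{\theta}(y), \Sigma(y)\big), $$

$$ D_{\mathrm{KL}}\big(p(\theta \mid y) \,\|\, p(\theta)\big) \approx -\tfrac{1}{2}\ln\!\big((2\pi e)^{d} \det \Sigma(y)\big) - \ln p\big(\hat{\theta}(y)\big), $$

    so that the outer expectation over the prior can be evaluated with sparse quadrature instead of a nested Monte Carlo loop.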

  16. Fast Estimation of Expected Information Gain for Bayesian Experimental Design Based on Laplace Approximation

    KAUST Repository

    Long, Quan

    2014-01-06

    Shannon-type expected information gain is an important utility in evaluating the usefulness of a proposed experiment that involves uncertainty. Its estimation, however, cannot rely solely on Monte Carlo sampling methods, which are generally too computationally expensive for realistic physical models, especially those involving the solution of stochastic partial differential equations. In this work we present a new methodology, based on the Laplace approximation of the posterior probability density function, to accelerate the estimation of expected information gain in the model parameters and predictive quantities of interest. Furthermore, in order to deal with the issue of dimensionality in a complex problem, we use sparse quadratures for the integration over the prior. We show the accuracy and efficiency of the proposed method via several nonlinear numerical examples, including a single-parameter design of a one-dimensional cubic polynomial function and the current pattern for impedance tomography.

  17. An Approach to Quality Estimation in Model-Based Development

    DEFF Research Database (Denmark)

    Holmegaard, Jens Peter; Koch, Peter; Ravn, Anders Peter

    2004-01-01

    We present an approach to estimation of parameters for design space exploration in Model-Based Development, where synthesis of a system is done in two stages. Component qualities like space, execution time or power consumption are defined in a repository by platform dependent values. Connectors...

  18. Fast freeze-drying cycle design and optimization using a PAT based on the measurement of product temperature.

    Science.gov (United States)

    Bosca, Serena; Barresi, Antonello A; Fissore, Davide

    2013-10-01

    This paper is focused on the use of an innovative Process Analytical Technology for the fast design and optimization of freeze-drying cycles for pharmaceuticals. The tool is based on a soft-sensor, a device that uses the experimental measurement of product temperature during freeze-drying, a mathematical model of the process, and the Extended Kalman Filter algorithm to estimate the sublimation flux, the residual amount of ice in the vial, and some model parameters (heat and mass transfer coefficients). The accuracy of the estimations provided by the soft-sensor has been demonstrated using, as test cases, aqueous solutions containing different excipients (sucrose, polyvinylpyrrolidone) processed at various operating conditions, showing that the soft-sensor allows a fast estimation of model parameters and product dynamics without expensive hardware or time-consuming analyses. The possibility of using the soft-sensor to calculate in-line (or off-line) the design space of the primary drying phase is presented and discussed. Results show that, in this way, it is possible to identify the values of the heating fluid temperature that maintain the product temperature below the limit value, as well as the operating conditions that maximize the sublimation flux. Various experiments have been carried out to test the effectiveness of the proposed approach for fast cycle design, showing that the drying time can be significantly reduced without impairing product quality. Copyright © 2013 Elsevier B.V. All rights reserved.

  19. Fast single image dehazing based on image fusion

    Science.gov (United States)

    Liu, Haibo; Yang, Jie; Wu, Zhengping; Zhang, Qingnian

    2015-01-01

    Images captured in foggy weather conditions often show faded colors and reduced contrast of the observed objects. An efficient image fusion method is proposed to remove haze from a single input image. First, the initial medium transmission is estimated based on the dark channel prior. Second, the method adopts the assumption that the degradation level caused by haze is the same within each region, which is similar to the Retinex theory, and uses a simple Gaussian filter to obtain the coarse medium transmission. Then, pixel-level fusion is performed between the initial medium transmission and the coarse medium transmission. The proposed method can recover a high-quality haze-free image based on the physical model, and its complexity is only a linear function of the number of input image pixels. Experimental results demonstrate that the proposed method allows a very fast implementation and achieves better restoration of visibility and color fidelity compared to some state-of-the-art methods.
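
    The first step, the dark-channel-prior transmission estimate, is compact enough to sketch; omega = 0.95 and a 15-pixel patch are the conventional choices from the dark-channel literature, assumed here rather than taken from this paper.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def initial_transmission(img, atmosphere, omega=0.95, patch=15):
    """Dark-channel-prior transmission: t = 1 - omega * dark_channel(I / A).
    img: HxWx3 float array in [0, 1]; atmosphere: length-3 airlight vector."""
    normalized = np.asarray(img) / np.asarray(atmosphere)[None, None, :]
    dark = minimum_filter(normalized.min(axis=2), size=patch)  # patch-wise minimum
    return 1.0 - omega * dark
```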

  20. Statistical Model-Based Face Pose Estimation

    Institute of Scientific and Technical Information of China (English)

    GE Xinliang; YANG Jie; LI Feng; WANG Huahua

    2007-01-01

    A robust face pose estimation approach is proposed, using a statistical model of face shape and representing the pose parameters by trigonometric functions. The face shape statistical model is first built by analyzing face shapes from different people under varying poses; shape alignment is vital in the process of building the statistical model. Then, six trigonometric functions are employed to represent the face pose parameters. Lastly, the mapping function between face image and face pose is constructed by linearly relating the different parameters. The proposed approach is able to estimate different face poses using a few face training samples. Experimental results are provided to demonstrate its efficiency and accuracy.

  1. Research on bathymetry estimation by Worldview-2 based with the semi-analytical model

    Science.gov (United States)

    Sheng, L.; Bai, J.; Zhou, G.-W.; Zhao, Y.; Li, Y.-C.

    2015-04-01

    The South Sea Islands of China are far from the mainland; reefs account for more than 95% of the South Sea, and most of them are scattered over sensitive disputed areas. Methods for obtaining reef bathymetry accurately are therefore urgently needed. Commonly used methods, including sonar, airborne laser, and remote sensing estimation, are limited by the long distances, large areas, and sensitive locations involved. Remote sensing data provide an effective way to estimate bathymetry over large areas without physical contact, by exploiting the relationship between spectral information and water depth. Considering the water quality of the South Sea of China, our paper develops a bathymetry estimation method that requires no measured water depths. First, a semi-analytical optimization model derived from theoretical interpretation models was studied, using a genetic algorithm to optimize the model. Meanwhile, an OpenMP parallel computing algorithm was introduced to greatly increase the speed of the semi-analytical optimization model. One island of the South Sea of China was selected as the study area, and measured water depths were used to evaluate the accuracy of the bathymetry estimated from Worldview-2 multispectral images. The results show that the semi-analytical optimization model based on the genetic algorithm performs well in the study area, and that the accuracy of the estimated bathymetry in the 0-20 m shallow-water zone is acceptable. The semi-analytical optimization model based on a genetic algorithm thus solves the problem of bathymetry estimation without water depth measurements. In general, our paper provides a new bathymetry estimation method for sensitive reefs far from the mainland.

  2. A Web-Based Model to Estimate the Impact of Best Management Practices

    Directory of Open Access Journals (Sweden)

    Youn Shik Park

    2014-03-01

    Full Text Available The Spreadsheet Tool for the Estimation of Pollutant Load (STEPL) can be used for Total Maximum Daily Load (TMDL) processes, since the model is capable of simulating the impacts of various best management practices (BMPs) and low impact development (LID) practices. The model computes average annual direct runoff using the Soil Conservation Service Curve Number (SCS-CN) method with average rainfall per event, which is not a typical use of the SCS-CN method. Five SCS-CN-based approaches to compute average annual direct runoff were investigated to explore the differences in average annual direct runoff computations, using daily precipitation data collected from the National Climate Data Center and generated by the CLIGEN model for twelve stations in Indiana. Compared to the average annual direct runoff computed for the typical use of the SCS-CN method, the approaches to estimate average annual direct runoff within EPA STEPL showed large differences. A web-based model (STEPL WEB) was developed with a corrected approach to estimate average annual direct runoff. Moreover, the model was integrated with the Web-based Load Duration Curve Tool, which identifies the least-cost BMPs for each land use and optimizes BMP selection to identify the most cost-effective BMP implementations. The integrated tools provide an easy-to-use approach for performing TMDL analysis and identifying cost-effective approaches for controlling nonpoint source pollution.
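
    The event-scale SCS-CN relation at issue is compact enough to state directly. The sketch below (customary US units, standard initial abstraction ratio of 0.2) also shows why applying the formula to an average rainfall differs from averaging per-event runoff — the discrepancy the paper examines; the event values are invented.

```python
def scs_cn_runoff(p, cn):
    """SCS Curve Number direct runoff Q (inches) for a rainfall event P (inches)."""
    s = 1000.0 / cn - 10.0            # potential maximum retention
    ia = 0.2 * s                      # initial abstraction
    return 0.0 if p <= ia else (p - ia) ** 2 / (p + 0.8 * s)

events = [0.3, 0.8, 1.5, 2.6]         # hypothetical per-event rainfall (inches)
cn = 80
per_event_total = sum(scs_cn_runoff(p, cn) for p in events)
from_average = len(events) * scs_cn_runoff(sum(events) / len(events), cn)
print(per_event_total, from_average)  # ~1.28 vs ~0.78: the nonlinearity matters
```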

  3. Estimation methods for nonlinear state-space models in ecology

    DEFF Research Database (Denmark)

    Pedersen, Martin Wæver; Berg, Casper Willestofte; Thygesen, Uffe Høgsbro

    2011-01-01

    The use of nonlinear state-space models for analyzing ecological systems is increasing. A wide range of estimation methods for such models are available to ecologists; however, it is not always clear which method is appropriate to choose. To this end, three approaches to estimation in the theta logistic model for population dynamics were benchmarked by Wang (2007). Similarly, we examine and compare the estimation performance of three alternative methods using simulated data. The first approach is to partition the state-space into a finite number of states and formulate the problem as a hidden Markov model (HMM). The second method uses the mixed effects modeling and fast numerical integration framework of the AD Model Builder (ADMB) open-source software. The third alternative is to use the popular Bayesian framework of BUGS. The study showed that state and parameter estimation performance

  4. Model-Based Estimation of Ankle Joint Stiffness

    Directory of Open Access Journals (Sweden)

    Berno J. E. Misgeld

    2017-03-01

    Full Text Available We address the estimation of biomechanical parameters with wearable measurement technologies. In particular, we focus on the estimation of sagittal plane ankle joint stiffness in dorsiflexion/plantar flexion. For this estimation, a novel nonlinear biomechanical model of the lower leg was formulated that is driven by electromyographic signals. The model incorporates a two-dimensional kinematic description in the sagittal plane for the calculation of muscle lever arms and torques. To reduce estimation errors due to model uncertainties, a filtering algorithm is necessary that employs segmental orientation sensor measurements. Because of the model’s inherent nonlinearities and nonsmooth dynamics, a square-root cubature Kalman filter was developed. The performance of the novel estimation approach was evaluated in silico and in an experimental procedure. The experimental study was conducted with body-worn sensors and a test-bench that was specifically designed to obtain reference angle and torque measurements for a single joint. Results show that the filter is able to reconstruct joint angle positions, velocities and torque, as well as joint stiffness, during experimental test bench movements.

  5. Model-Based Estimation of Ankle Joint Stiffness

    Science.gov (United States)

    Misgeld, Berno J. E.; Zhang, Tony; Lüken, Markus J.; Leonhardt, Steffen

    2017-01-01

    We address the estimation of biomechanical parameters with wearable measurement technologies. In particular, we focus on the estimation of sagittal plane ankle joint stiffness in dorsiflexion/plantar flexion. For this estimation, a novel nonlinear biomechanical model of the lower leg was formulated that is driven by electromyographic signals. The model incorporates a two-dimensional kinematic description in the sagittal plane for the calculation of muscle lever arms and torques. To reduce estimation errors due to model uncertainties, a filtering algorithm is necessary that employs segmental orientation sensor measurements. Because of the model’s inherent nonlinearities and nonsmooth dynamics, a square-root cubature Kalman filter was developed. The performance of the novel estimation approach was evaluated in silico and in an experimental procedure. The experimental study was conducted with body-worn sensors and a test-bench that was specifically designed to obtain reference angle and torque measurements for a single joint. Results show that the filter is able to reconstruct joint angle positions, velocities and torque, as well as joint stiffness, during experimental test bench movements. PMID:28353683

  6. Multi-objective genetic algorithm parameter estimation in a reduced nuclear reactor model

    Energy Technology Data Exchange (ETDEWEB)

    Marseguerra, M.; Zio, E.; Canetta, R. [Polytechnic of Milan, Dept. of Nuclear Engineering, Milano (Italy)

    2005-07-01

    The fast increase in computing power has made it, and will continue to make it, more and more feasible to incorporate dynamics in the safety and reliability models of complex engineering systems. In particular, the Monte Carlo simulation framework offers a natural environment for estimating the reliability of systems with dynamic features. However, the time-integration of the dynamic processes may render the Monte Carlo simulation quite burdensome, so that it becomes mandatory to resort to validated, simplified models of process evolution. Such models are typically based on lumped effective parameters whose values need to be suitably estimated so as to best fit the available plant data. In this paper we propose a multi-objective genetic algorithm approach for the estimation of the effective parameters of a simplified model of nuclear reactor dynamics. The calibration of the effective parameters is achieved by best fitting the model responses of the quantities of interest to the actual evolution profiles. A case study is reported in which the real reactor is simulated by the QUAndry based Reactor Kinetics (Quark) code available from the Nuclear Energy Agency, and the simplified model is based on the point kinetics approximation to describe the neutron balance in the core and on thermal equilibrium relations to describe the energy exchange between the different loops. (authors)
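
    A minimal sketch of the kind of reduced model being calibrated — point kinetics with a single effective delayed-neutron group, stepped with explicit Euler — is given below; the parameter values and reactivity step are illustrative only.

```python
beta, lam, Lambda = 0.0065, 0.08, 1e-4  # delayed fraction, decay const., generation time

def point_kinetics_step(n, c, rho, dt):
    """One explicit Euler step for neutron density n and precursor concentration c."""
    dn = ((rho - beta) / Lambda) * n + lam * c
    dc = (beta / Lambda) * n - lam * c
    return n + dt * dn, c + dt * dc

n, c = 1.0, beta / (lam * Lambda)       # start at steady state (rho = 0, n = 1)
for _ in range(10000):                  # 1 s of a small positive reactivity step
    n, c = point_kinetics_step(n, c, rho=1e-4, dt=1e-4)
print(n)                                # prompt jump followed by a slow rise
```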

  7. Multi-objective genetic algorithm parameter estimation in a reduced nuclear reactor model

    International Nuclear Information System (INIS)

    Marseguerra, M.; Zio, E.; Canetta, R.

    2005-01-01

    The fast increase in computing power has made it, and will continue to make it, more and more feasible to incorporate dynamics in the safety and reliability models of complex engineering systems. In particular, the Monte Carlo simulation framework offers a natural environment for estimating the reliability of systems with dynamic features. However, the time-integration of the dynamic processes may render the Monte Carlo simulation quite burdensome, so that it becomes mandatory to resort to validated, simplified models of process evolution. Such models are typically based on lumped effective parameters whose values need to be suitably estimated so as to best fit the available plant data. In this paper we propose a multi-objective genetic algorithm approach for the estimation of the effective parameters of a simplified model of nuclear reactor dynamics. The calibration of the effective parameters is achieved by best fitting the model responses of the quantities of interest to the actual evolution profiles. A case study is reported in which the real reactor is simulated by the QUAndry based Reactor Kinetics (Quark) code available from the Nuclear Energy Agency, and the simplified model is based on the point kinetics approximation to describe the neutron balance in the core and on thermal equilibrium relations to describe the energy exchange between the different loops. (authors)

  8. Bias correction for the estimation of sensitivity indices based on random balance designs

    International Nuclear Information System (INIS)

    Tissot, Jean-Yves; Prieur, Clémentine

    2012-01-01

    This paper deals with the random balance design method (RBD) and its hybrid approach, RBD-FAST. Both of these global sensitivity analysis methods originate from the Fourier amplitude sensitivity test (FAST) and consequently face the main problems inherent to discrete harmonic analysis. We present here a general way to correct a bias which occurs when estimating sensitivity indices (SIs) of any order – except the total SI of a single factor or group of factors – by the random balance design method (RBD) and its hybrid version, RBD-FAST. In the RBD case, this positive bias has recently been identified in a paper by Xu and Gertner [1]. Following their work, we propose a bias correction method for first-order SI estimates in RBD. We then extend the correction method to SIs of any order in RBD-FAST. Finally, we suggest an efficient strategy to estimate all the first- and second-order SIs using RBD-FAST. - Highlights: ► We provide a bias correction method for the global sensitivity analysis methods RBD and RBD-FAST. ► In RBD, first-order sensitivity estimates are corrected. ► In RBD-FAST, sensitivity indices of any order and closed sensitivity indices are corrected. ► We propose an efficient strategy to estimate all the first- and second-order indices of a model.
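
    A plain RBD first-order estimator is sketched below to make the bias concrete: the retained harmonics also pick up broadband noise power, so the raw index overestimates. Here a simple noise-floor subtraction stands in for the paper's derived correction, which is not reproduced; the test function is invented.

```python
import numpy as np

def rbd_first_order(model, n_factors, n_samples=1001, m_harmonics=6, seed=0):
    """RBD estimate of first-order sensitivity indices with a crude
    noise-floor correction (a heuristic, not the paper's estimator)."""
    rng = np.random.default_rng(seed)
    s = np.pi * (2.0 * np.arange(1, n_samples + 1) - n_samples - 1) / n_samples
    perms = [rng.permutation(n_samples) for _ in range(n_factors)]
    # Every factor follows the same frequency-1 design curve, randomly permuted.
    x = np.stack([0.5 + np.arcsin(np.sin(s[p])) / np.pi for p in perms], axis=1)
    y = np.apply_along_axis(model, 1, x)
    total_var = y.var()
    indices = []
    for p in perms:
        y_re = y[np.argsort(s[p])]                 # reorder: periodic in factor i
        power = np.abs(np.fft.rfft(y_re - y_re.mean())) ** 2 / n_samples ** 2
        noise = power[m_harmonics + 1:].mean()     # heuristic noise floor
        d_i = 2.0 * (power[1:m_harmonics + 1] - noise).sum()
        indices.append(max(d_i, 0.0) / total_var)
    return np.array(indices)

print(rbd_first_order(lambda x: x[0] + 0.1 * x[1], n_factors=2))  # ~[0.99, 0.01]
```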

  9. The MusIC method: a fast and quasi-optimal solution to the muscle forces estimation problem

    OpenAIRE

    Muller , Antoine; Pontonnier , Charles; Dumont , Georges

    2018-01-01

    The present paper presents a fast and quasi-optimal method of muscle force estimation: the MusIC method. It consists of interpolating a first estimate from a database generated offline by solving a classical optimization problem, and then correcting it so as to respect the motion dynamics. Three different cost functions – two polynomial criteria and a min/max criterion – were tested on a planar musculoskeletal model. The MusIC method provides a computation frequenc...

  10. Fast detection and modeling of human-body parts from monocular video

    NARCIS (Netherlands)

    Lao, W.; Han, Jungong; With, de P.H.N.; Perales, F.J.; Fisher, R.B.

    2009-01-01

    This paper presents a novel and fast scheme to detect different body parts in human motion. Using monocular video sequences, trajectory estimation and body modeling of moving humans are combined in a co-operating processing architecture. More specifically, for every individual person, features of

  11. Fast computation of statistical uncertainty for spatiotemporal distributions estimated directly from dynamic cone beam SPECT projections

    International Nuclear Information System (INIS)

    Reutter, Bryan W.; Gullberg, Grant T.; Huesman, Ronald H.

    2001-01-01

    The estimation of time-activity curves and kinetic model parameters directly from projection data is potentially useful for clinical dynamic single photon emission computed tomography (SPECT) studies, particularly in those clinics that have only single-detector systems and thus are not able to perform rapid tomographic acquisitions. Because the radiopharmaceutical distribution changes while the SPECT gantry rotates, projections at different angles come from different tracer distributions. A dynamic image sequence reconstructed from the inconsistent projections acquired by a slowly rotating gantry can contain artifacts that lead to biases in kinetic parameters estimated from time-activity curves generated by overlaying regions of interest on the images. If cone beam collimators are used and the focal point of the collimators always remains in a particular transaxial plane, additional artifacts can arise in other planes reconstructed using insufficient projection samples [1]. If the projection samples truncate the patient's body, this can result in additional image artifacts. To overcome these sources of bias in conventional image-based dynamic data analysis, we and others have been investigating the estimation of time-activity curves and kinetic model parameters directly from dynamic SPECT projection data by modeling the spatial and temporal distribution of the radiopharmaceutical throughout the projected field of view [2-8]. In our previous work we developed a computationally efficient method for fully four-dimensional (4-D) direct estimation of spatiotemporal distributions from dynamic SPECT projection data [5], which extended Formiconi's least squares algorithm for reconstructing temporally static distributions [9]. In addition, we studied the biases that result from modeling various orders of temporal continuity and from using various time samplings [5]. In the present work, we address computational issues associated with evaluating the statistical uncertainty of

  12. Multiobjective Traffic Signal Control Model for Intersection Based on Dynamic Turning Movements Estimation

    Directory of Open Access Journals (Sweden)

    Pengpeng Jiao

    2014-01-01

    Full Text Available Real-time traffic signal control for an intersection requires dynamic turning movements as basic input data. It is impossible to detect dynamic turning movements directly through current traffic surveillance systems, but dynamic origin-destination (O-D) estimation can provide them. However, combined models of dynamic O-D estimation and real-time traffic signal control are rare in the literature. A framework for a multiobjective traffic signal control model for intersections based on dynamic O-D estimation (MSC-DODE) is presented. A state-space model using Kalman filtering is first formulated to estimate the dynamic turning movements; then a revised sequential Kalman filtering algorithm is designed to solve the model, and the root mean square error and mean percentage error are used to evaluate the accuracy of the estimated dynamic turning proportions. Furthermore, a multiobjective traffic signal control model is put forward to obtain real-time signal control parameters and evaluation indices. Finally, based on practical survey data, the evaluation indices from MSC-DODE are compared with those from the Webster method. The actual and estimated turning movements were further input into MSC-DODE, respectively, and the results compared. Case studies show that the results of MSC-DODE are better than those of the Webster method and are very close to the otherwise unavailable actual values.

  13. Biomechanical model-based displacement estimation in micro-sensor motion capture

    International Nuclear Information System (INIS)

    Meng, X L; Sun, S Y; Wu, J K; Zhang, Z Q; Wong, W C (Department of Electrical and Computer Engineering, National University of Singapore (NUS), 02-02-10 I3 Building, 21 Heng Mui Keng Terrace (Singapore))

    2012-01-01

    In micro-sensor motion capture systems, the estimation of the body displacement in the global coordinate system remains a challenge due to lack of external references. This paper proposes a self-contained displacement estimation method based on a human biomechanical model to track the position of walking subjects in the global coordinate system without any additional supporting infrastructures. The proposed approach makes use of the biomechanics of the lower body segments and the assumption that during walking there is always at least one foot in contact with the ground. The ground contact joint is detected based on walking gait characteristics and used as the external references of the human body. The relative positions of the other joints are obtained from hierarchical transformations based on the biomechanical model. Anatomical constraints are proposed to apply to some specific joints of the lower body to further improve the accuracy of the algorithm. Performance of the proposed algorithm is compared with an optical motion capture system. The method is also demonstrated in outdoor and indoor long distance walking scenarios. The experimental results demonstrate clearly that the biomechanical model improves the displacement accuracy within the proposed framework. (paper)

  14. Model Effects on GLAS-Based Regional Estimates of Forest Biomass and Carbon

    Science.gov (United States)

    Nelson, Ross

    2008-01-01

    ICESat/GLAS waveform data are used to estimate biomass and carbon on a 1.27 million sq km study area, the Province of Quebec, Canada, below treeline. The same input data sets and sampling design are used in conjunction with four different predictive models to estimate total aboveground dry forest biomass and forest carbon. The four models include nonstratified and stratified versions of a multiple linear model where either biomass or (the square root of) biomass serves as the dependent variable. The use of different models in Quebec introduces differences in Provincial biomass estimates of up to 0.35 Gt (range 4.942 ± 0.28 Gt to 5.29 ± 0.36 Gt). The results suggest that if different predictive models are used to estimate regional carbon stocks in different epochs, e.g., y2005, y2015, one might mistakenly infer an apparent aboveground carbon "change" of, in this case, 0.18 Gt, or approximately 7% of the aboveground carbon in Quebec, due solely to the use of different predictive models. These findings argue for model consistency in future LiDAR-based carbon monitoring programs. Regional biomass estimates from the four GLAS models are compared to ground estimates derived from an extensive network of 16,814 ground plots located in southern Quebec. Stratified models proved to be more accurate and precise than either of the two nonstratified models tested.

  15. Brillouin Scattering Spectrum Analysis Based on Auto-Regressive Spectral Estimation

    Science.gov (United States)

    Huang, Mengyun; Li, Wei; Liu, Zhangyun; Cheng, Linghao; Guan, Bai-Ou

    2018-06-01

    Auto-regressive (AR) spectral estimation is proposed to analyze the Brillouin scattering spectrum in Brillouin optical time-domain reflectometry. It is shown that the AR-based method can reliably estimate the Brillouin frequency shift with an accuracy much better than fast Fourier transform (FFT)-based methods, provided the data length is not too short. It enables about a threefold improvement over FFT at a moderate spatial resolution.
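
    A hedged sketch of the idea: fit a low-order AR model via the Yule-Walker equations and take the peak of the resulting all-pole spectrum as the frequency estimate. The AR order, grid density, and test signal below are arbitrary illustrative choices, not the paper's settings.

```python
import numpy as np

def ar_spectrum_peak(x, order=4, fs=1.0, nfreq=4096):
    """Yule-Walker AR fit, then the peak frequency of the all-pole spectrum."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    r = np.correlate(x, x, mode='full')[x.size - 1:] / x.size  # autocorrelation
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:order + 1])                     # AR coefficients
    freqs = np.linspace(0.0, fs / 2.0, nfreq)
    z = np.exp(-2j * np.pi * freqs / fs)
    denom = 1.0 - sum(a[k] * z ** (k + 1) for k in range(order))
    return freqs[np.argmax(1.0 / np.abs(denom) ** 2)]

t = np.arange(2048)
signal = np.sin(2 * np.pi * 0.2 * t) + 0.1 * np.random.randn(t.size)
print(ar_spectrum_peak(signal))   # close to the true normalized frequency 0.2
```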

  16. Brillouin Scattering Spectrum Analysis Based on Auto-Regressive Spectral Estimation

    Science.gov (United States)

    Huang, Mengyun; Li, Wei; Liu, Zhangyun; Cheng, Linghao; Guan, Bai-Ou

    2018-03-01

    Auto-regressive (AR) spectral estimation is proposed to analyze the Brillouin scattering spectrum in Brillouin optical time-domain reflectometry. It is shown that the AR-based method can reliably estimate the Brillouin frequency shift with an accuracy much better than fast Fourier transform (FFT)-based methods, provided the data length is not too short. It enables about a threefold improvement over FFT at a moderate spatial resolution.

  17. Integral-equation based methods for parameter estimation in output pulses of radiation detectors: Application in nuclear medicine and spectroscopy

    Science.gov (United States)

    Mohammadian-Behbahani, Mohammad-Reza; Saramad, Shahyar

    2018-04-01

    Model-based analysis methods are relatively new approaches for processing the output data of radiation detectors in nuclear medicine imaging and spectroscopy. A class of such methods requires fast algorithms for fitting pulse models to experimental data. In order to apply integral-equation-based methods to the processing of preamplifier output pulses, this article proposes a fast and simple method for estimating the parameters of the well-known bi-exponential pulse model by solving an integral equation. The proposed method needs samples from only three points of the recorded pulse, as well as its first- and second-order integrals. After optimizing the sampling points, the estimation results were calculated and compared with two traditional integration-based methods. Different noise levels (signal-to-noise ratios from 10 to 3000) were simulated to test the functionality of the proposed method, which was then applied to a set of experimental pulses. Finally, the effect of quantization noise was assessed by studying different sampling rates. The promising results endorse the proposed method for future real-time applications.
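
    For reference, the bi-exponential pulse model itself is shown below, fitted with a generic nonlinear least-squares routine purely for illustration; the paper's point is precisely that its integral-equation solution avoids this kind of iterative fitting. All values are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

def bi_exponential(t, amplitude, tau_rise, tau_decay):
    """Classic preamplifier pulse: difference of two exponentials (tau_rise < tau_decay)."""
    return amplitude * (np.exp(-t / tau_decay) - np.exp(-t / tau_rise))

t = np.linspace(0.0, 10.0, 500)
clean = bi_exponential(t, 1.0, 0.3, 2.0)
noisy = clean + np.random.normal(0.0, 0.01, t.size)
params, _ = curve_fit(bi_exponential, t, noisy, p0=(0.8, 0.2, 1.5))
print(params)   # recovered (amplitude, tau_rise, tau_decay)
```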

  18. Automatic CT-based finite element model generation for temperature-based death time estimation: feasibility study and sensitivity analysis.

    Science.gov (United States)

    Schenkl, Sebastian; Muggenthaler, Holger; Hubig, Michael; Erdmann, Bodo; Weiser, Martin; Zachow, Stefan; Heinrich, Andreas; Güttler, Felix Victor; Teichgräber, Ulf; Mall, Gita

    2017-05-01

    Temperature-based death time estimation is based either on simple phenomenological models of corpse cooling or on detailed physical heat transfer models. The latter are much more complex but allow a higher accuracy of death time estimation, as in principle all relevant cooling mechanisms can be taken into account. Here, a complete workflow for finite element-based cooling simulation is presented. The following steps are demonstrated on a CT phantom: (1) computed tomography (CT) scan; (2) segmentation of the CT images for thermodynamically relevant features of individual geometries and compilation into a geometric computer-aided design (CAD) model; (3) conversion of the segmentation result into a finite element (FE) simulation model; (4) computation of the model cooling curve (MOD); (5) calculation of the cooling time (CTE). For the first time in FE-based cooling time estimation, the steps from the CT image over segmentation to FE model generation are performed semi-automatically. The cooling time calculation results are compared to cooling measurements performed on the phantoms under controlled conditions. In this context, the method is validated using a CT phantom. Some of the phantoms' thermodynamic material parameters had to be determined via independent experiments. Moreover, the impact of geometry and material parameter uncertainties on the estimated cooling time is investigated by a sensitivity analysis.

  19. Vision-based stress estimation model for steel frame structures with rigid links

    Science.gov (United States)

    Park, Hyo Seon; Park, Jun Su; Oh, Byung Kwan

    2017-07-01

    This paper presents a stress estimation model for the safety evaluation of steel frame structures with rigid links using a vision-based monitoring system. In this model, the deformed shape of a structure under external loads is estimated via displacements measured by a motion capture system (MCS), a non-contact displacement measurement device. During the estimation of the deformed shape, the effective lengths of the rigid link ranges in the frame structure are identified. The radius of curvature of the structural member to be monitored is calculated from the estimated deformed shape and is employed to estimate stress. Using the MCS in the presented model, the safety of a structure can be assessed without attaching strain gauges. In addition, because the stress is extracted directly from the radius of curvature obtained from the measured deformed shape, information on the loadings and boundary conditions of the structure is not required. Furthermore, the model, which includes the identification of the effective lengths of the rigid links, can account for the influence of connection and support stiffness on the deformation in the stress estimation. To verify the applicability of the presented model, static loading tests on a steel frame specimen were conducted. By comparing the stress estimated by the model with the measured stress, the validity of the model was confirmed.
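
    The final extraction step rests on elementary beam bending: once the radius of curvature R of the member is estimated from the deformed shape, the extreme-fiber stress follows as sigma = E·c/R. The numbers below are hypothetical.

```python
def bending_stress(youngs_modulus, fiber_distance, curvature_radius):
    """Flexural stress sigma = E * c / R at distance c from the neutral axis."""
    return youngs_modulus * fiber_distance / curvature_radius

# Steel member: E = 200 GPa, extreme fiber 150 mm from the neutral axis,
# estimated radius of curvature 800 m (all values illustrative).
print(bending_stress(200e9, 0.15, 800.0) / 1e6, "MPa")  # 37.5 MPa
```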

  20. Evaluation of a morphing based method to estimate muscle attachment sites of the lower extremity.

    Science.gov (United States)

    Pellikaan, P; van der Krogt, M M; Carbone, V; Fluit, R; Vigneron, L M; Van Deun, J; Verdonschot, N; Koopman, H F J M

    2014-03-21

    To generate subject-specific musculoskeletal models for clinical use, the location of muscle attachment sites needs to be estimated with accurate, fast, and preferably automated tools. For this purpose, an automatic method was used to estimate the muscle attachment sites of the lower extremity, based on the assumption of a relation between the bone geometry and the location of muscle attachment sites. The aim of this study was to evaluate the accuracy of this morphing-based method. Two cadaver dissections were performed to measure the contours of 72 muscle attachment sites on the pelvis, femur, tibia and calcaneus. The geometry of the bones, including the muscle attachment sites, was morphed from one cadaver to the other and vice versa. For 69% of the muscle attachment sites, the mean distance between the measured and morphed muscle attachment sites was smaller than 15 mm. Furthermore, the muscle attachment sites with relatively large distances showed low sensitivity to these deviations. Therefore, this morphing-based method is a promising tool for estimating subject-specific muscle attachment sites in the lower extremity in a fast and automated manner. Copyright © 2013 Elsevier Ltd. All rights reserved.

  1. Modelling of Moving Coil Actuators in Fast Switching Valves Suitable for Digital Hydraulic Machines

    DEFF Research Database (Denmark)

    Nørgård, Christian; Roemer, Daniel Beck; Bech, Michael Møller

    2015-01-01

    The efficiency of digital hydraulic machines is strongly dependent on the valve switching time. Recently, fast switching has been achieved by using a direct electromagnetic moving coil actuator as the force-producing element in fast switching hydraulic valves suitable for digital hydraulic machines. Mathematical models of the valve switching, targeted for design optimisation of the moving coil actuator, are developed. A detailed analytical model is derived and presented, and its accuracy is evaluated against transient electromagnetic finite element simulations. The model includes an estimation of the eddy currents generated in the actuator yoke upon current rise, as they may have significant influence on the coil current response. The analytical model facilitates fast simulation of the transient actuator response, as opposed to the transient electromagnetic finite element model which...

  2. Model-based estimation with boundary side information or boundary regularization

    International Nuclear Information System (INIS)

    Chiao, P.C.; Rogers, W.L.; Fessler, J.A.; Clinthorne, N.H.; Hero, A.O.

    1994-01-01

    The authors have previously developed a model-based strategy for joint estimation of myocardial perfusion and boundaries using ECT (Emission Computed Tomography). The authors have also reported difficulties with boundary estimation in low contrast and low count rate situations. In this paper, the authors propose using boundary side information (obtainable from high resolution MRI and CT images) or boundary regularization to improve both perfusion and boundary estimation in these situations. To fuse boundary side information into the emission measurements, the authors formulate a joint log-likelihood function to include auxiliary boundary measurements as well as ECT projection measurements. In addition, the authors introduce registration parameters to align auxiliary boundary measurements with ECT measurements and jointly estimate these parameters with other parameters of interest from the composite measurements. In simulated PET O-15 water myocardial perfusion studies using a simplified model, the authors show that the joint estimation improves perfusion estimation performance and gives boundary alignment accuracy of <0.5 mm even at 0.2 million counts. The authors implement boundary regularization through formulating a penalized log-likelihood function. The authors also demonstrate in simulations that simultaneous regularization of the epicardial boundary and myocardial thickness gives comparable perfusion estimation accuracy with the use of boundary side information

  3. Fully iterative scatter corrected digital breast tomosynthesis using GPU-based fast Monte Carlo simulation and composition ratio update

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Kyungsang; Ye, Jong Chul, E-mail: jong.ye@kaist.ac.kr [Bio Imaging and Signal Processing Laboratory, Department of Bio and Brain Engineering, KAIST 291, Daehak-ro, Yuseong-gu, Daejeon 34141 (Korea, Republic of); Lee, Taewon; Cho, Seungryong [Medical Imaging and Radiotherapeutics Laboratory, Department of Nuclear and Quantum Engineering, KAIST 291, Daehak-ro, Yuseong-gu, Daejeon 34141 (Korea, Republic of); Seong, Younghun; Lee, Jongha; Jang, Kwang Eun [Samsung Advanced Institute of Technology, Samsung Electronics, 130, Samsung-ro, Yeongtong-gu, Suwon-si, Gyeonggi-do, 443-803 (Korea, Republic of); Choi, Jaegu; Choi, Young Wook [Korea Electrotechnology Research Institute (KERI), 111, Hanggaul-ro, Sangnok-gu, Ansan-si, Gyeonggi-do, 426-170 (Korea, Republic of); Kim, Hak Hee; Shin, Hee Jung; Cha, Joo Hee [Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, 88 Olympic-ro, 43-gil, Songpa-gu, Seoul, 138-736 (Korea, Republic of)

    2015-09-15

    Purpose: In digital breast tomosynthesis (DBT), scatter correction is highly desirable, as it improves image quality at low doses. Because the DBT detector panel is typically stationary during the source rotation, antiscatter grids are not generally compatible with DBT; thus, a software-based scatter correction is required. This work proposes a fully iterative scatter correction method that uses a novel fast Monte Carlo simulation (MCS) with a tissue-composition ratio estimation technique for DBT imaging. Methods: To apply MCS to scatter estimation, the material composition in each voxel should be known. To overcome the lack of prior accurate knowledge of tissue composition for DBT, a tissue-composition ratio is estimated based on the observation that the breast tissues are principally composed of adipose and glandular tissues. Using this approximation, the composition ratio can be estimated from the reconstructed attenuation coefficients, and the scatter distribution can then be estimated by MCS using the composition ratio. The scatter estimation and image reconstruction procedures can be performed iteratively until an acceptable accuracy is achieved. For practical use, (i) the authors have implemented a fast MCS using a graphics processing unit (GPU), (ii) the MCS is simplified to transport only x-rays in the energy range of 10–50 keV, modeling Rayleigh and Compton scattering and the photoelectric effect using the tissue-composition ratio of adipose and glandular tissues, and (iii) downsampling is used because the scatter distribution varies rather smoothly. Results: The authors have demonstrated that the proposed method can accurately estimate the scatter distribution, and that the contrast-to-noise ratio of the final reconstructed image is significantly improved. The authors validated the performance of the MCS by changing the tissue thickness, composition ratio, and x-ray energy. The authors confirmed that the tissue-composition ratio estimation was quite

  4. Point Cloud Based Relative Pose Estimation of a Satellite in Close Range

    Directory of Open Access Journals (Sweden)

    Lujiang Liu

    2016-06-01

    Full Text Available Determination of the relative pose of satellites is essential in space rendezvous operations and on-orbit servicing missions. The key problems are the adoption of a suitable sensor on board the chaser and efficient techniques for pose estimation. This paper aims to estimate the pose of a target satellite in close range on the basis of its known model, using point cloud data generated by a flash LIDAR sensor. A novel model-based pose estimation method is proposed; it includes a fast and reliable initial pose acquisition method based on global optimal searching that processes the dense point cloud data directly, and a pose tracking method based on the Iterative Closest Point algorithm. A simulation system is also presented in order to evaluate the performance of the sensor and to generate simulated sensor point cloud data; it further provides the ground-truth pose of the test target so that the pose estimation error can be quantified. To investigate the effectiveness of the proposed approach and the achievable pose accuracy, numerical simulation experiments were performed; the results demonstrate the algorithm's capability to operate directly on point clouds and under large pose variations. A field experiment was also conducted, and the results show that the proposed method is effective.
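
    At the core of ICP-style pose tracking is a closed-form rigid alignment of corresponded point sets. The standard SVD (Kabsch) step is sketched below as an illustration; it is not the paper's full global-search-plus-tracking pipeline.

```python
import numpy as np

def rigid_pose(source, target):
    """Least-squares rotation R and translation t mapping source -> target
    (Nx3 arrays with known correspondences), as used inside each ICP iteration."""
    mu_s, mu_t = source.mean(axis=0), target.mean(axis=0)
    H = (source - mu_s).T @ (target - mu_t)                      # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    t = mu_t - R @ mu_s
    return R, t
```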

  5. Data Sources for the Model-based Small Area Estimates of Cancer-Related Knowledge - Small Area Estimates

    Science.gov (United States)

    The model-based estimates of important cancer risk factors and screening behaviors are obtained by combining the responses to the Behavioral Risk Factor Surveillance System (BRFSS) and the National Health Interview Survey (NHIS).

  6. Comparing cropland net primary production estimates from inventory, a satellite-based model, and a process-based model in the Midwest of the United States

    Science.gov (United States)

    Li, Zhengpeng; Liu, Shuguang; Tan, Zhengxi; Bliss, Norman B.; Young, Claudia J.; West, Tristram O.; Ogle, Stephen M.

    2014-01-01

    Accurately quantifying the spatial and temporal variability of net primary production (NPP) for croplands is essential to understand regional cropland carbon dynamics. We compared three NPP estimates for croplands in the Midwestern United States: inventory-based estimates using crop yield data from the U.S. Department of Agriculture (USDA) National Agricultural Statistics Service (NASS); estimates from the satellite-based Moderate Resolution Imaging Spectroradiometer (MODIS) NPP product; and estimates from the General Ensemble biogeochemical Modeling System (GEMS) process-based model. The three methods estimated mean NPP in the range of 469–687 g C m⁻² yr⁻¹ and total NPP in the range of 318–490 Tg C yr⁻¹ for croplands in the Midwest in 2007 and 2008. The NPP estimates from crop yield data and the GEMS model showed the mean NPP for croplands was over 650 g C m⁻² yr⁻¹ while the MODIS NPP product estimated the mean NPP was less than 500 g C m⁻² yr⁻¹. MODIS NPP also showed very different spatial variability of the cropland NPP from the other two methods. We found these differences were mainly caused by the difference in the land cover data and the crop specific information used in the methods. Our study demonstrated that the detailed mapping of the temporal and spatial change of crop species is critical for estimating the spatial and temporal variability of cropland NPP. We suggest that high resolution land cover data with species-specific crop information should be used in satellite-based and process-based models to improve carbon estimates for croplands.

  7. Evaluation of neutron streaming in fast breeder reactor fuel assembly by double heterogeneous modelling

    International Nuclear Information System (INIS)

    Unesaki, Hironobu; Takeda, Toshikazu

    1988-01-01

    Neutron streaming in a fast breeder reactor fuel assembly caused by the double heterogeneity structure is estimated by double heterogeneous modelling. The conventional pin cell model, a two-region subassembly model, and the exact pin cluster model are used to take into account the streaming effect caused by the pin cell structure and the surrounding wrapper tube structure. The heterogeneity of the wrapper tube and its surrounding sodium is explicitly considered. The streaming effect is evaluated based on Benoist's diffusion coefficient. The total streaming effect caused by the double heterogeneity structure of a fuel subassembly is found to be -0.2% dk/kk' on k_eff, which is almost twice the value of -0.1% dk/kk' obtained from the conventional pin cell model. (author)

  8. A method for rapid estimation of internal dose to members of the public from inhalation of mixed fission products (based on the ICRP 1994 human respiratory tract model for radiological protection)

    International Nuclear Information System (INIS)

    Hou Jieli

    1999-01-01

    Based on the computing principle given in ICRP-30, a method was previously given by the author for fast estimation of the internal dose from an intake of mixed fission products after a nuclear accident. Following the ICRP-66 human respiratory tract model published in 1994, the method was reconstructed. The doses from a 1 Bq intake of mixed fission products (AMAD = 1 μm, decay rate coefficient n = 0.2–2.0) during the period of 1–15 d after an accident were calculated. The dose based on the ICRP 1994 respiratory tract model is slightly lower than that based on the ICRP-30 model.

  9. Radiation risk estimation based on measurement error models

    CERN Document Server

    Masiuk, Sergii; Shklyar, Sergiy; Chepurny, Mykola; Likhtarov, Illya

    2017-01-01

    This monograph discusses statistics and risk estimates applied to radiation damage in the presence of measurement errors. The first part covers nonlinear measurement error models, with a particular emphasis on the efficiency of regression parameter estimators. In the second part, risk estimation in models with measurement errors is considered. The efficiency of the methods presented is verified using data from radio-epidemiological studies.

  10. Consumer estimation of recommended and actual calories at fast food restaurants.

    Science.gov (United States)

    Elbel, Brian

    2011-10-01

    Recently, localities across the United States have passed laws requiring the mandatory labeling of calories in all chain restaurants, including fast food restaurants. This policy is set to be implemented at the federal level. Early studies have found these policies to be at best minimally effective in altering food choice at a population level. This paper uses receipt and survey data collected from consumers outside fast food restaurants in low-income communities in New York City (NYC) (which implemented labeling) and a comparison community (which did not) to examine two fundamental assumptions necessary (though not sufficient) for calorie labeling to be effective: that consumers know how many calories they should be eating throughout the course of a day, and that customers currently misestimate the number of calories in their fast food order. We then examine whether mandatory menu labeling influences either of these assumptions. We find that approximately one-third of consumers correctly estimate the number of calories an adult should consume daily. Few (8% on average) believe adults should be eating over 2,500 calories daily, and approximately one-third believe adults should eat fewer than 1,500 calories daily. Mandatory labeling in NYC did not change these findings. However, labeling did increase the number of low-income consumers who correctly estimated (within 100 calories) the number of calories in their fast food meal, from 15% before labeling in NYC to 24% after labeling. Overall knowledge remains low even with labeling. Additional public policies likely need to be considered to influence obesity on a large scale.

  11. Simplification of an MCNP model designed for dose rate estimation

    Science.gov (United States)

    Laptev, Alexander; Perry, Robert

    2017-09-01

    A study was made to investigate the methods of building a simplified MCNP model for radiological dose estimation. The research was done using an example of a complicated glovebox with extra shielding. The paper presents several different calculations for neutron and photon dose evaluations where glovebox elements were consecutively excluded from the MCNP model. The analysis indicated that to obtain a fast and reasonable estimation of dose, the model should be realistic in details that are close to the tally. Other details may be omitted.

  12. Satellite-based ET estimation using Landsat 8 images and SEBAL model

    Directory of Open Access Journals (Sweden)

    Bruno Bonemberger da Silva

    Full Text Available ABSTRACT Estimation of evapotranspiration is a key factor for achieving sustainable water management in irrigated agriculture because it represents the water use of crops. Satellite-based estimation provides advantages over direct methods such as lysimeters, especially when the objective is to calculate evapotranspiration at a regional scale. The present study aimed to estimate actual evapotranspiration (ET) at a regional scale, using Landsat 8 - OLI/TIRS images and complementary data collected from a weather station. The SEBAL model was used in South-West Paraná, a region composed of irrigated and dry agricultural areas, native vegetation and urban areas. Five Landsat 8 images, path 223 and row 78, DOY 336/2013, 19/2014, 35/2014, 131/2014 and 195/2014 were used, from which ET at daily scale was estimated as a residual of the surface energy balance to produce ET maps. The steps to obtain ET using SEBAL include radiometric calibration and calculation of the reflectance, surface albedo, vegetation indexes (NDVI, SAVI and LAI) and emissivity. These parameters were obtained from the reflective bands of the orbital sensor, with surface temperature estimated from the thermal band. The ET values estimated in agricultural areas, native vegetation and urban areas using the SEBAL algorithm were compatible with those reported in the literature, and the errors between the ET estimates from the SEBAL model and the Penman-Monteith FAO 56 equation were less than or equal to 1.00 mm day⁻¹.
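
    As an illustration of the residual step described above, the following minimal Python sketch computes daily ET from instantaneous energy balance terms. It is a generic sketch, not the authors' code: the inputs rn, g and h (net radiation, soil heat flux and sensible heat flux per pixel) are assumed to be already computed, and the constant-evaporative-fraction extrapolation to daily values is a common SEBAL convention.

        import numpy as np

        LAMBDA = 2.45e6  # approximate latent heat of vaporization, J/kg

        def daily_et(rn, g, h, rn24):
            """Daily ET (mm/day) from the surface energy balance residual."""
            le = rn - g - h                      # instantaneous latent heat flux, W/m^2
            ef = le / np.maximum(rn - g, 1e-6)   # evaporative fraction, held constant over the day
            le24 = ef * rn24                     # daily-average latent heat flux, W/m^2
            return le24 * 86400.0 / LAMBDA       # J/m^2/day -> kg/m^2/day = mm/day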

  13. Small Area Model-Based Estimators Using Big Data Sources

    Directory of Open Access Journals (Sweden)

    Marchetti Stefano

    2015-06-01

    Full Text Available The timely, accurate monitoring of social indicators, such as poverty or inequality, on a fine-grained spatial and temporal scale is a crucial tool for understanding social phenomena and policymaking, but poses a great challenge to official statistics. This article argues that an interdisciplinary approach, combining the body of statistical research in small area estimation with the body of research in social data mining based on Big Data, can provide novel means to tackle this problem successfully. Big Data derived from the digital crumbs that humans leave behind in their daily activities are in fact providing ever more accurate proxies of social life. Social data mining from these data, coupled with advanced model-based techniques for fine-grained estimates, has the potential to provide a novel microscope through which to view and understand social complexity. This article suggests three ways to use Big Data together with small area estimation techniques, and shows how Big Data has the potential to mirror aspects of well-being and other socioeconomic phenomena.

  14. Fast and Flexible Modelling of Real-Time Systems with RTCP-Nets

    Directory of Open Access Journals (Sweden)

    Marcin Szpyrka

    2004-01-01

    Full Text Available A large number of formalisms have been proposed for real-time systems modelling. However, formal methods are not widely used in industrial software development. This situation can be attributed to a lack of suitable tools for fast design of a model and for its analysis and modification. RTCP-nets have been defined to facilitate the fast modelling of embedded systems incorporating rule-based systems. The computer tools being developed for RTCP-nets use a template mechanism that allows users to design models and manipulate their properties quickly and effectively. Both theoretical and practical aspects of RTCP-nets are presented in the paper.

  16. Estimation of time-variable fast flow path chemical concentrations for application in tracer-based hydrograph separation analyses

    Science.gov (United States)

    Kronholm, Scott C.; Capel, Paul D.

    2016-01-01

    Mixing models are a commonly used method for hydrograph separation, but they can be hindered by the subjective choice of end-member tracer concentrations. This work tests a new variant of the mixing model that uses high-frequency measures of two tracers and streamflow to separate total streamflow into water from slowflow and fastflow sources. The ratio between the concentrations of the two tracers is used to create a time-variable estimate of the concentration of each tracer in the fastflow end-member. Multiple synthetic data sets, and data from two hydrologically diverse streams, are used to test the performance and limitations of the new model (the two-tracer ratio-based mixing model: TRaMM). When applied to the synthetic streams under many different scenarios, the TRaMM produces results that are reasonable approximations of the actual values of fastflow discharge (±0.1% of maximum fastflow) and fastflow tracer concentrations (±9.5% and ±16% of maximum fastflow nitrate concentration and specific conductance, respectively). With real stream data, the TRaMM produces high-frequency estimates of slowflow and fastflow discharge that align with expectations for each stream based on their respective hydrologic settings. The use of two tracers with the TRaMM provides an innovative and objective approach for estimating high-frequency fastflow concentrations and contributions of fastflow water to the stream. This provides useful information for tracking chemical movement to streams and allows for better selection and implementation of water quality management strategies.
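
    For context, the single-tracer mass balance that any such separation builds on can be sketched in a few lines of Python. This is the generic two-component mixing step, not the TRaMM itself (which, as described above, additionally derives a time-variable fastflow end-member concentration from the ratio of two tracers); all names are illustrative.

        def two_component_separation(q_total, c_stream, c_slow, c_fast):
            """Classical two-component hydrograph separation by tracer mass balance.

            Solves Q*C = Qf*Cf + Qs*Cs with Q = Qf + Qs for the fastflow
            discharge Qf, given slowflow/fastflow end-member concentrations.
            """
            f_fast = (c_stream - c_slow) / (c_fast - c_slow)  # fastflow fraction
            f_fast = min(max(f_fast, 0.0), 1.0)               # clamp to a physical range
            return f_fast * q_total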

  17. Ideas for fast accelerator model calibration

    International Nuclear Information System (INIS)

    Corbett, J.

    1997-05-01

    With the advent of a simple matrix inversion technique, measurement-based storage ring modeling has made rapid progress in recent years. Using fast computers with large memory, the matrix inversion procedure typically adjusts up to 10³ model variables to fit on the order of 10⁵ measurements. The results have been surprisingly accurate. Physics aside, one of the next frontiers is to simplify the process and to reduce computation time. In this paper, the authors discuss two approaches to speed up the model calibration process: recursive least-squares fitting and a piecewise fitting approach.
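
    As a sketch of the first idea, a recursive least-squares update refines the model parameters one measurement at a time instead of inverting the full response matrix in one shot. This is a textbook RLS step under assumed notation (theta: model variables; phi: the sensitivity row of the response matrix for one measurement), not the authors' implementation.

        import numpy as np

        def rls_update(theta, P, phi, y, lam=1.0):
            """One recursive least-squares step: refine parameters theta so that
            phi @ theta tracks the new measurement y.

            P   : running parameter covariance estimate
            phi : 1-D regressor (sensitivity) vector for this measurement
            lam : forgetting factor (1.0 reproduces ordinary least squares)
            """
            Pphi = P @ phi
            k = Pphi / (lam + phi @ Pphi)          # gain vector
            theta = theta + k * (y - phi @ theta)  # innovation correction
            P = (P - np.outer(k, Pphi)) / lam      # covariance downdate
            return theta, P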

  18. A predictive estimation method for carbon dioxide transport by data-driven modeling with a physically-based data model

    Science.gov (United States)

    Jeong, Jina; Park, Eungyu; Han, Weon Shik; Kim, Kue-Young; Jun, Seong-Chun; Choung, Sungwook; Yun, Seong-Taek; Oh, Junho; Kim, Hyun-Jun

    2017-11-01

    In this study, a data-driven method for predicting CO2 leaks and associated concentrations from geological CO2 sequestration is developed. Several candidate models are compared based on their reproducibility and predictive capability for CO2 concentration measurements from the Environment Impact Evaluation Test (EIT) site in Korea. Based on the data mining results, a one-dimensional solution of the advective-dispersive equation for steady flow (i.e., Ogata-Banks solution) is found to be most representative for the test data, and this model is adopted as the data model for the developed method. In the validation step, the method is applied to estimate future CO2 concentrations with the reference estimation by the Ogata-Banks solution, where a part of earlier data is used as the training dataset. From the analysis, it is found that the ensemble mean of multiple estimations based on the developed method shows high prediction accuracy relative to the reference estimation. In addition, the majority of the data to be predicted are included in the proposed quantile interval, which suggests adequate representation of the uncertainty by the developed method. Therefore, the incorporation of a reasonable physically-based data model enhances the prediction capability of the data-driven model. The proposed method is not confined to estimations of CO2 concentration and may be applied to various real-time monitoring data from subsurface sites to develop automated control, management or decision-making systems.
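
    The Ogata-Banks solution adopted above as the physically-based data model has a simple closed form. A minimal Python sketch (generic, with assumed argument names) is:

        import numpy as np
        from scipy.special import erfc

        def ogata_banks(x, t, v, D, c0):
            """1-D advection-dispersion solution for a continuous source at x = 0
            under steady flow (Ogata-Banks):
            C = (c0/2) * [erfc((x - v t)/(2 sqrt(D t)))
                          + exp(v x / D) * erfc((x + v t)/(2 sqrt(D t)))].
            """
            s = 2.0 * np.sqrt(D * t)
            # note: exp(v*x/D) can overflow in strongly advective cases
            return 0.5 * c0 * (erfc((x - v * t) / s)
                               + np.exp(v * x / D) * erfc((x + v * t) / s))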

  20. Fast Emission Estimates in China Constrained by Satellite Observations (Invited)

    Science.gov (United States)

    Mijling, B.; van der A, R.

    2013-12-01

    Emission inventories of air pollutants are crucial information for policy makers and form important input data for air quality models. Unfortunately, bottom-up emission inventories, compiled from large quantities of statistical data, are easily outdated for an emerging economy such as China, where rapid economic growth changes emissions accordingly. Alternatively, top-down emission estimates from satellite observations of air constituents have the important advantages of being spatially consistent, having high temporal resolution, and enabling emission updates shortly after the satellite data become available. Constraining emissions from concentration measurements is, however, computationally challenging. Within the GlobEmission project of the European Space Agency (ESA), a new algorithm has been developed, specifically designed for fast daily emission estimates of short-lived atmospheric species on a mesoscopic scale (0.25 × 0.25 degree) from satellite observations of column concentrations. The algorithm needs only one forward model run from a chemical transport model to calculate the sensitivity of concentration to emission, using trajectory analysis to account for transport away from the source. By using a Kalman filter in the inverse step, optimal use is made of the a priori knowledge and the newly observed data. We apply the algorithm for NOx emission estimates in East China, using the CHIMERE model together with tropospheric NO2 column retrievals of the OMI and GOME-2 satellite instruments. The observations are used to construct a monthly emission time series, which reveals important emission trends such as the emission reduction measures during the Beijing Olympic Games, and the impact of and recovery from the global economic crisis. The algorithm is also able to detect emerging sources (e.g. new power plants) and improve emission information for areas where proxy data are not or badly known (e.g. shipping emissions). The new emission estimates result in a better
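
    The inverse step described above reduces, per grid cell, to a standard Kalman update of the emission estimate against the observed column. A scalar sketch under assumed notation (h: the concentration-to-emission sensitivity obtained from the single forward model run), not the GlobEmission code itself:

        def kalman_emission_update(x, p, y, h, r, q):
            """One Kalman filter step for a single emission estimate x.

            x, p : prior emission estimate and its variance
            y    : observed column concentration
            h    : sensitivity of concentration to emission
            r, q : observation and process noise variances
            """
            p = p + q                        # predict: emissions assumed persistent
            k = p * h / (h * h * p + r)      # Kalman gain
            x = x + k * (y - h * x)          # correct with the new observation
            p = (1.0 - k * h) * p
            return x, p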

  1. Accurate and fast methods to estimate the population mutation rate from error prone sequences

    Directory of Open Access Journals (Sweden)

    Miyamoto Michael M

    2009-08-01

    Full Text Available Abstract Background The population mutation rate (θ) remains one of the most fundamental parameters in genetics, ecology, and evolutionary biology. However, its accurate estimation can be seriously compromised when working with error-prone data such as expressed sequence tags, low-coverage draft sequences, and other such unfinished products. This study is premised on the simple idea that a random sequence error due to a chance accident during data collection or recording will be distributed within a population dataset as a singleton (i.e., as a polymorphic site where one sampled sequence exhibits a unique base relative to the common nucleotide of the others). Thus, one can avoid these random errors by ignoring the singletons within a dataset. Results This strategy is implemented under an infinite sites model that focuses on only the internal branches of the sample genealogy, where a shared polymorphism can arise (i.e., a variable site where each alternative base is represented by at least two sequences). This approach is first used to derive independently the same new Watterson and Tajima estimators of θ, as recently reported by Achaz [1] for error-prone sequences. It is then used to modify the recent, full, maximum-likelihood model of Knudsen and Miyamoto [2], which incorporates various factors for experimental error and design with those for coalescence and mutation. These new methods are all accurate and fast according to evolutionary simulations and analyses of a real complex population dataset for the California sea hare. Conclusion In light of these results, we recommend the use of these three new methods for the determination of θ from error-prone sequences. In particular, we advocate the new maximum likelihood model as a starting point for the further development of more complex coalescent/mutation models that also account for experimental error and design.
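
    The singleton-excluding logic lends itself to a very short implementation. The sketch below shows a Watterson-type estimator in this spirit (cf. Achaz [1]): under the infinite sites model E[ξ₁] = θ for singletons, so dropping them removes one unit from the harmonic normalizer aₙ. This is an illustrative reconstruction, not the paper's full maximum-likelihood model.

        def watterson_theta_no_singletons(n_seqs, n_segsites, n_singletons):
            """Watterson-type estimator of theta that ignores singletons.

            E[S] = theta * a_n with a_n = sum_{i=1}^{n-1} 1/i, and singletons
            contribute E[xi_1] = theta, so E[S - xi_1] = theta * (a_n - 1).
            """
            a_n = sum(1.0 / i for i in range(1, n_seqs))
            return (n_segsites - n_singletons) / (a_n - 1.0)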

  2. Bayesian seismic inversion based on rock-physics prior modeling for the joint estimation of acoustic impedance, porosity and lithofacies

    Energy Technology Data Exchange (ETDEWEB)

    Passos de Figueiredo, Leandro, E-mail: leandrop.fgr@gmail.com [Physics Department, Federal University of Santa Catarina, Florianópolis (Brazil); Grana, Dario [Department of Geology and Geophysics, University of Wyoming, Laramie (United States); Santos, Marcio; Figueiredo, Wagner [Physics Department, Federal University of Santa Catarina, Florianópolis (Brazil); Roisenberg, Mauro [Informatic and Statistics Department, Federal University of Santa Catarina, Florianópolis (Brazil); Schwedersky Neto, Guenther [Petrobras Research Center, Rio de Janeiro (Brazil)

    2017-05-01

    We propose a Bayesian approach for seismic inversion to estimate acoustic impedance, porosity and lithofacies within the reservoir conditioned to post-stack seismic and well data. The link between elastic and petrophysical properties is given by a joint prior distribution for the logarithm of impedance and porosity, based on a rock-physics model. The well conditioning is performed through a background model obtained by well log interpolation. Two different approaches are presented: in the first approach, the prior is defined by a single Gaussian distribution, whereas in the second approach it is defined by a Gaussian mixture to represent the well data multimodal distribution and link the Gaussian components to different geological lithofacies. The forward model is based on a linearized convolutional model. For the single Gaussian case, we obtain an analytical expression for the posterior distribution, resulting in a fast algorithm to compute the solution of the inverse problem, i.e. the posterior distribution of acoustic impedance and porosity as well as the facies probability given the observed data. For the Gaussian mixture prior, it is not possible to obtain the distributions analytically, hence we propose a Gibbs algorithm to perform the posterior sampling and obtain several reservoir model realizations, allowing an uncertainty analysis of the estimated properties and lithofacies. Both methodologies are applied to a real seismic dataset with three wells to obtain 3D models of acoustic impedance, porosity and lithofacies. The methodologies are validated through a blind well test and compared to a standard Bayesian inversion approach. Using the probability of the reservoir lithofacies, we also compute a 3D isosurface probability model of the main oil reservoir in the studied field.
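
    For the single-Gaussian prior, the analytical posterior mentioned above is the standard linear-Gaussian update. A generic sketch (G stands for the linearized convolutional forward operator; names are assumptions, not the authors' code):

        import numpy as np

        def linear_gaussian_posterior(mu, cov, G, d, cov_noise):
            """Posterior of m for d = G m + noise, with prior N(mu, cov)
            and Gaussian noise N(0, cov_noise)."""
            s = G @ cov @ G.T + cov_noise         # data-space covariance
            gain = np.linalg.solve(s, G @ cov).T  # equals cov @ G.T @ inv(s)
            mu_post = mu + gain @ (d - G @ mu)
            cov_post = cov - gain @ G @ cov
            return mu_post, cov_post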

  3. Evaluation of Clear Sky Models for Satellite-Based Irradiance Estimates

    Energy Technology Data Exchange (ETDEWEB)

    Sengupta, Manajit [National Renewable Energy Lab. (NREL), Golden, CO (United States); Gotseff, Peter [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2013-12-01

    This report describes an intercomparison of three popular broadband clear sky solar irradiance model results with measured data, as well as satellite-based model clear sky results compared to measured clear sky data. The authors conclude that one of the popular clear sky models (the Bird clear sky model developed by Richard Bird and Roland Hulstrom) could serve as a more accurate replacement for current satellite-model clear sky estimations. Additionally, the analysis of the model results with respect to model input parameters indicates that rather than climatological, annual, or monthly mean input data, higher-time-resolution input parameters improve the general clear sky model performance.

  4. Simplification of an MCNP model designed for dose rate estimation

    Directory of Open Access Journals (Sweden)

    Laptev Alexander

    2017-01-01

    Full Text Available A study was made to investigate the methods of building a simplified MCNP model for radiological dose estimation. The research was done using an example of a complicated glovebox with extra shielding. The paper presents several different calculations for neutron and photon dose evaluations where glovebox elements were consecutively excluded from the MCNP model. The analysis indicated that to obtain a fast and reasonable estimation of dose, the model should be realistic in details that are close to the tally. Other details may be omitted.

  5. Small-mammal density estimation: A field comparison of grid-based vs. web-based density estimators

    Science.gov (United States)

    Parmenter, R.R.; Yates, Terry L.; Anderson, D.R.; Burnham, K.P.; Dunnum, J.L.; Franklin, A.B.; Friggens, M.T.; Lubow, B.C.; Miller, M.; Olson, G.S.; Parmenter, Cheryl A.; Pollard, J.; Rexstad, E.; Shenk, T.M.; Stanley, T.R.; White, Gary C.

    2003-01-01

    blind” test allowed us to evaluate the influence of expertise and experience in calculating density estimates in comparison to simply using default values in programs CAPTURE and DISTANCE. While the rodent sample sizes were considerably smaller than the recommended minimum for good model results, we found that several models performed well empirically, including the web-based uniform and half-normal models in program DISTANCE, and the grid-based models Mb and Mbh in program CAPTURE (with Â adjusted by species-specific full mean maximum distance moved (MMDM) values). These models produced accurate D̂ values (with 95% confidence intervals that included the true D values) and exhibited acceptable bias but poor precision. However, in linear regression analyses comparing each model's D̂ values to the true D values over the range of observed test densities, only the web-based uniform model exhibited a regression slope near 1.0; all other models showed substantial slope deviations, indicating biased estimates at higher or lower density values. In addition, the grid-based D̂ analyses using full MMDM values for Ŵ area adjustments required a number of theoretical assumptions of uncertain validity, and we therefore viewed their empirical successes with caution. Finally, density estimates from the independent analysts were highly variable, but estimates from web-based approaches had smaller mean square errors and better achieved confidence-interval coverage of D than did grid-based approaches. Our results support the contention that web-based approaches for density estimation of small-mammal populations are both theoretically and empirically superior to grid-based approaches, even when sample size is far less than often recommended. In view of the increasing need for standardized environmental measures for comparisons among ecosystems and through time, analytical models based on distance sampling appear to offer accurate density estimation approaches for research

  6. The MusIC method: a fast and quasi-optimal solution to the muscle forces estimation problem.

    Science.gov (United States)

    Muller, A; Pontonnier, C; Dumont, G

    2018-02-01

    The present paper presents a fast and quasi-optimal method for muscle force estimation: the MusIC method. It interpolates a first estimate from a database generated offline by solving a classical optimization problem, and then corrects it to respect the motion dynamics. Three different cost functions - two polynomial criteria and a min/max criterion - were tested on a planar musculoskeletal model. The MusIC method provides a computation frequency approximately 10 times higher than that of a classical optimization approach, with a relative mean error of 4% in the cost function evaluation.

  7. Developing a new solar radiation estimation model based on Buckingham theorem

    Science.gov (United States)

    Ekici, Can; Teke, Ismail

    2018-06-01

    While solar radiation can be expressed physically for days without clouds, this becomes difficult in cloudy and complicated weather conditions. In addition, solar radiation measurements are often not taken in developing countries. In such cases, solar radiation estimation models are used. These models estimate solar radiation from other meteorological parameters that are measured at the stations. In this study, a solar radiation estimation model was obtained using the Buckingham theorem, which is shown to be useful in predicting solar radiation: the theorem is used to express solar radiation through derived dimensionless pi parameters. The derived model is compared with temperature-based models in the literature - the Allen, Hargreaves, Chen and Bristow-Campbell models - using the MPE, RMSE, MBE and NSE error analysis methods. North Dakota's meteorological data, obtained from the state's agricultural climate network, were used to compare the models. In these applications, the model obtained within the scope of the study gives better results; in particular, its short-term performance is satisfactory, and its accuracy is better than that of the other models, as the RMSE analysis results show. In terms of long-term performance and percentage errors, the model also gives good results.
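
    For reference, the temperature-based comparators share a similar shape; the Hargreaves(-Samani) model, for example, estimates daily solar radiation from the diurnal temperature range. A minimal sketch, with the usual textbook coefficient values assumed:

        import math

        def hargreaves_samani_rs(tmax_c, tmin_c, ra, krs=0.16):
            """Daily solar radiation estimate Rs = krs * sqrt(Tmax - Tmin) * Ra.

            tmax_c, tmin_c : daily max/min air temperature, deg C
            ra             : extraterrestrial radiation (Rs returned in same units)
            krs            : empirical coefficient (~0.16 interior, ~0.19 coastal)
            """
            return krs * math.sqrt(max(tmax_c - tmin_c, 0.0)) * ra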

  8. V2676 Oph: Estimating Physical Parameters of a Moderately Fast Nova

    Science.gov (United States)

    Raj, A.; Pavana, M.; Kamath, U. S.; Anupama, G. C.; Walter, F. M.

    2018-03-01

    Using our previously reported observations, we derive some physical parameters of the moderately fast nova V2676 Oph 2012 #1. The best-fit Cloudy model of the nebular spectrum obtained on 2015 May 8 shows a hot white dwarf source with T_BB ≈ 1.0×10⁵ K and a luminosity of 1.0×10³⁸ erg/s. Our abundance analysis shows that the ejecta are significantly enhanced relative to solar: He/H = 2.14, O/H = 2.37, S/H = 6.62 and Ar/H = 3.25. The ejecta mass is estimated to be 1.42×10⁻⁵ M⊙. The nova showed a pronounced dust formation phase starting 90 d after discovery. The J-H and H-K colors were very large compared to other molecule- and dust-forming novae in recent years. The dust temperature and mass at two epochs have been estimated from spectral energy distribution fits to infrared photometry.

  9. Fast covariance estimation for innovations computed from a spatial Gibbs point process

    DEFF Research Database (Denmark)

    Coeurjolly, Jean-Francois; Rubak, Ege

    In this paper, we derive an exact formula for the covariance of two innovations computed from a spatial Gibbs point process and suggest a fast method for estimating this covariance. We show how this methodology can be used to estimate the asymptotic covariance matrix of the maximum pseudolikelihood estimator.

  10. Comparison of physically based catchment models for estimating Phosphorus losses

    OpenAIRE

    Nasr, Ahmed Elssidig; Bruen, Michael

    2003-01-01

    As part of a large EPA-funded research project, coordinated by TEAGASC, the Centre for Water Resources Research at UCD reviewed the available distributed physically based catchment models with a potential for use in estimating phosphorus losses for use in implementing the Water Framework Directive. Three models, representative of different levels of approach and complexity, were chosen and were implemented for a number of Irish catchments. This paper reports on (i) the lessons and experience...

  11. Location-based Mobile Relay Selection and Impact of Inaccurate Path Loss Model Parameters

    DEFF Research Database (Denmark)

    Nielsen, Jimmy Jessen; Madsen, Tatiana Kozlova; Schwefel, Hans-Peter

    2010-01-01

    In this paper we propose a relay selection scheme which uses collected location information together with a path loss model for relay selection, and analyze the performance impact of mobility and of different error causes on this scheme. Performance is evaluated in terms of bit error rate by simulations. The previously proposed SNR-measurement-based relay selection scheme is unsuitable for use with fast moving users in e.g. vehicular scenarios due to a large signaling overhead. The proposed location-based scheme is shown to work well with fast moving users in these situations due to its lower signaling overhead. As the location-based scheme relies on a path loss model to estimate link qualities and select relays, its sensitivity with respect to inaccurate estimates of the unknown path loss model parameters is investigated, and the parameter ranges that result in useful performance are identified.

  12. Promotion and Fast Food Demand

    OpenAIRE

    Timothy J. Richards; Luis Padilla

    2009-01-01

    Many believe that fast food promotion is a significant cause of the obesity epidemic in North America. Industry members argue that promotion only reallocates brand shares and does not increase overall demand. We study the effect of fast food promotion on market share and total demand by estimating a discrete/continuous model of fast food restaurant choice and food expenditure that explicitly accounts for both spatial and temporal determinants of demand. Estimates are obtained using a unique...

  13. A method for state of energy estimation of lithium-ion batteries based on neural network model

    International Nuclear Information System (INIS)

    Dong, Guangzhong; Zhang, Xu; Zhang, Chenbin; Chen, Zonghai

    2015-01-01

    The state-of-energy is an important evaluation index for energy optimization and management of power battery systems in electric vehicles. Unlike the state-of-charge, which represents the residual energy of the battery in traditional applications, the state-of-energy is the integral of battery power, which is the product of current and terminal voltage. On the other hand, like the state-of-charge, the state-of-energy has an effect on terminal voltage. The nonlinear relationship between state-of-energy and terminal voltage is therefore hard to resolve, which complicates the estimation of a battery's state-of-energy. To address this issue, a method based on a wavelet-neural-network battery model and a particle filter estimator is presented for state-of-energy estimation. The wavelet-neural-network based battery model is used to simulate the entire dynamic electrical characteristics of batteries. The temperature and discharge rate are also taken into account to improve model accuracy. Besides, in order to suppress the measurement noise of current and voltage, a particle filter estimator is applied to estimate the cell state-of-energy. Experimental results on LiFePO₄ batteries indicate that the wavelet-neural-network based battery model simulates battery dynamics robustly with high accuracy, and the estimate based on the particle filter converges to the real state-of-energy within an error of ±4%. - Highlights: • State-of-charge is replaced by state-of-energy to determine the cell's residual energy. • The battery state-space model is established based on a neural network. • Temperature and current influences are considered to improve the model accuracy. • The particle filter is used for state-of-energy estimation to improve accuracy. • The robustness of the new method is validated under dynamic experimental conditions.
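
    A generic particle-filter step for this kind of SoE tracking can be sketched as follows. It is a standard bootstrap filter under assumed names (model_voltage stands in for the wavelet-neural-network voltage model and is assumed vectorized; e_total is the battery's total energy capacity), not the authors' implementation.

        import numpy as np

        def pf_soe_step(particles, weights, p_meas, v_meas, dt, e_total,
                        model_voltage, q_std=1e-3, r_std=0.02):
            """One bootstrap particle-filter step for state-of-energy estimation.

            particles : SoE hypotheses in [0, 1]; p_meas: measured power (W);
            v_meas    : measured terminal voltage (V); dt: step length (s).
            """
            # propagate: SoE decreases by the energy drawn, plus process noise
            particles = (particles - p_meas * dt / e_total
                         + np.random.normal(0.0, q_std, particles.shape))
            particles = np.clip(particles, 0.0, 1.0)
            # weight by the voltage-measurement likelihood
            resid = v_meas - model_voltage(particles)
            weights = weights * np.exp(-0.5 * (resid / r_std) ** 2)
            weights = weights / np.sum(weights)
            soe_est = np.sum(weights * particles)          # posterior-mean SoE
            # resample when the effective sample size collapses
            if 1.0 / np.sum(weights ** 2) < 0.5 * particles.size:
                idx = np.random.choice(particles.size, particles.size, p=weights)
                particles = particles[idx]
                weights = np.full(particles.size, 1.0 / particles.size)
            return particles, weights, soe_est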

  14. A diagnostic model to estimate winds and small-scale drag from Mars Observer PMIRR data

    Science.gov (United States)

    Barnes, J. R.

    1993-01-01

    Theoretical and modeling studies indicate that small-scale drag due to breaking gravity waves is likely to be of considerable importance for the circulation in the middle atmospheric region (approximately 40-100 km altitude) on Mars. Recent earth-based spectroscopic observations have provided evidence for the existence of circulation features, in particular a warm winter polar region, associated with gravity wave drag. Since the Mars Observer PMIRR experiment will obtain temperature profiles extending from the surface up to about 80 km altitude, it will extensively sample middle atmospheric regions in which gravity wave drag may play a dominant role. Estimating the drag then becomes crucial to estimating the atmospheric winds from the PMIRR-observed temperatures. An iterative diagnostic model, based upon one previously developed and tested with earth satellite temperature data, will be applied to the PMIRR measurements to produce estimates of the small-scale zonal drag and three-dimensional wind fields in the Mars middle atmosphere. This model is based on the primitive equations and can allow for time dependence (the time tendencies used may be based upon those computed in a Fast Fourier Mapping procedure). The small-scale zonal drag is estimated as the residual in the zonal momentum equation, the horizontal winds having first been estimated from the meridional momentum equation and the continuity equation. The scheme estimates the vertical motions from the thermodynamic equation, and thus needs estimates of the diabatic heating based upon the observed temperatures. The latter will be generated using a radiative model. It is hoped that the diagnostic scheme will be able to produce good estimates of the zonal gravity wave drag in the Mars middle atmosphere, estimates that can then be used in other diagnostic or assimilation efforts, as well as in more theoretical studies.

  15. A Robust Productivity Model for Grapple Yarding in Fast-Growing Tree Plantations

    Directory of Open Access Journals (Sweden)

    Riaan Engelbrecht

    2017-10-01

    Full Text Available New techniques have recently appeared that can extend the advantages of grapple yarding to fast-growing plantations. The most promising technique consists of an excavator-based unguyed yarder equipped with new radio-controlled grapple carriages, fed by another excavator stationed on the cut-over. This system is very productive, avoids in-stand traffic, and removes operators from positions of high risk. This paper presents the results of a long-term study conducted on 12 different teams equipped with the new technology, operating in the fast-growing black wattle (Acacia mangium Willd.) plantations of Sarawak, Malaysia. Data were collected continuously for almost 8 months and represented 555 shifts, or over 55,000 cycles, each recorded individually. Production, utilization, and machine availability were estimated at 63 m³ per productive machine hour (excluding all delays), 63%, and 93%, respectively. Regression analysis of the experimental data yielded a strong productivity forecast model that was highly significant, accounted for 50% of the total variability in the dataset, and was validated with a non-significant error estimated at less than 1%. The figures reported in this study are especially robust because they were obtained from a long-term study that covered multiple teams and accumulated an exceptionally large number of observations.

  16. Fast and accurate spectral estimation for online detection of partial broken bar in induction motors

    Science.gov (United States)

    Samanta, Anik Kumar; Naha, Arunava; Routray, Aurobinda; Deb, Alok Kanti

    2018-01-01

    In this paper, an online and real-time system is presented for detecting partial broken rotor bar (BRB) faults in inverter-fed squirrel cage induction motors under light load conditions. This system, with minor modifications, can detect any fault that affects the stator current. A fast and accurate spectral estimator based on the theory of the Rayleigh quotient is proposed for detecting the spectral signature of BRB. The proposed spectral estimator can precisely determine the relative amplitude of fault sidebands and has low complexity compared to available high-resolution subspace-based spectral estimators. Detection of low-amplitude fault components has been improved by removing the high-amplitude fundamental frequency using an extended-Kalman based signal conditioner. Slip is estimated from the stator current spectrum for accurate localization of the fault component. Complexity and sensor cost are minimal, as only a single-phase stator current is required. The hardware implementation has been carried out on an Intel i7 based embedded target ported through Simulink Real-Time. Evaluation of the detection threshold and of fault detectability under different conditions of load and fault severity is carried out with an empirical cumulative distribution function.

  17. Consumers’ estimation of calorie content at fast food restaurants: cross sectional observational study

    OpenAIRE

    Block, Jason Perry; Condon, Suzanne K; Kleinman, Ken Paul; Mullen, Jewel; Linakis, Stephanie; Rifas-Shiman, Sheryl Lynn; Gillman, Matthew William

    2013-01-01

    Objective: To investigate estimation of calorie (energy) content of meals from fast food restaurants in adults, adolescents, and school age children. Design: Cross sectional study of repeated visits to fast food restaurant chains. Setting: 89 fast food restaurants in four cities in New England, United States: McDonald’s, Burger King, Subway, Wendy’s, KFC, Dunkin’ Donuts. Participants: 1877 adults and 330 school age children visiting restaurants at dinnertime (evening meal) in 2010 and 2011; 1...

  18. Uncertainty Representation and Interpretation in Model-Based Prognostics Algorithms Based on Kalman Filter Estimation

    Science.gov (United States)

    Galvan, Jose Ramon; Saxena, Abhinav; Goebel, Kai Frank

    2012-01-01

    This article discusses several aspects of uncertainty representation and management for model-based prognostics methodologies, based on our experience with Kalman filters applied to prognostics for electronics components. In particular, it explores the implications of modeling remaining useful life prediction as a stochastic process, and how this relates to uncertainty representation, management, and the role of prognostics in decision-making. A distinction between the interpretations of the estimated remaining useful life probability density function is explained, and a cautionary argument is provided against mixing the two interpretations when prognostics are used to make critical decisions.

  19. 3D fluoroscopic image estimation using patient-specific 4DCBCT-based motion models

    International Nuclear Information System (INIS)

    Dhou, S; Hurwitz, M; Cai, W; Rottmann, J; Williams, C; Wagar, M; Berbeco, R; Lewis, J H; Mishra, P; Li, R; Ionascu, D

    2015-01-01

    3D fluoroscopic images represent volumetric patient anatomy during treatment with high spatial and temporal resolution. 3D fluoroscopic images estimated using motion models built from 4DCT images, taken days or weeks prior to treatment, do not reliably represent patient anatomy during treatment. In this study we developed and performed an initial evaluation of techniques to build patient-specific motion models from 4D cone-beam CT (4DCBCT) images, taken immediately before treatment, and used these models to estimate 3D fluoroscopic images based on 2D kV projections captured during treatment. We evaluate the accuracy of the 3D fluoroscopic images by comparison to ground truth digital and physical phantom images. The performance of 4DCBCT-based and 4DCT-based motion models is compared in simulated clinical situations representing tumor baseline shift or initial patient positioning errors. The results of this study demonstrate the ability of 4DCBCT imaging to generate motion models that can account for changes that cannot be accounted for with 4DCT-based motion models. When simulating tumor baseline shifts and patient positioning errors of up to 5 mm, the average tumor localization error and the 95th percentile error in six datasets were 1.20 and 2.2 mm, respectively, for 4DCBCT-based motion models. 4DCT-based motion models applied to the same six datasets resulted in an average tumor localization error and a 95th percentile error of 4.18 and 5.4 mm, respectively. Analysis of voxel-wise intensity differences was also conducted for all experiments. In summary, this study demonstrates the feasibility of 4DCBCT-based 3D fluoroscopic image generation in digital and physical phantoms, and shows the potential advantage of 4DCBCT-based 3D fluoroscopic image estimation when there are changes in anatomy between the time of 4DCT imaging and the time of treatment delivery. (paper)

  20. Finite Time Fault Tolerant Control for Robot Manipulators Using Time Delay Estimation and Continuous Nonsingular Fast Terminal Sliding Mode Control.

    Science.gov (United States)

    Van, Mien; Ge, Shuzhi Sam; Ren, Hongliang

    2016-04-28

    In this paper, a novel finite time fault tolerant control (FTC) scheme is proposed for uncertain robot manipulators with actuator faults. First, a finite time passive FTC (PFTC) based on robust nonsingular fast terminal sliding mode control (NFTSMC) is investigated. To address the disadvantages of the PFTC, an active FTC (AFTC) is then investigated by combining NFTSMC with a simple fault diagnosis scheme. In this scheme, an online fault estimation algorithm based on time delay estimation (TDE) is proposed to approximate actuator faults. The estimated fault information is used to detect, isolate, and accommodate the effect of the faults in the system. A robust AFTC law is then established by combining the obtained fault information with robust NFTSMC. Finally, a high-order sliding mode (HOSM) control based on the super-twisting algorithm is employed to eliminate chattering. In comparison to the PFTC and other state-of-the-art approaches, the proposed AFTC scheme possesses several advantages, such as high precision, strong robustness, no singularity, less chattering, and fast finite-time convergence, due to the combined NFTSMC and HOSM control; it also requires no prior knowledge of the fault, due to the TDE-based fault estimation. Finally, simulation results are presented to verify the effectiveness of the proposed strategy.

  1. Analysis of fast boundary-integral approximations for modeling electrostatic contributions of molecular binding

    Science.gov (United States)

    Kreienkamp, Amelia B.; Liu, Lucy Y.; Minkara, Mona S.; Knepley, Matthew G.; Bardhan, Jaydeep P.; Radhakrishnan, Mala L.

    2013-01-01

    We analyze and suggest improvements to a recently developed approximate continuum-electrostatic model for proteins. The model, called BIBEE/I (boundary-integral based electrostatics estimation with interpolation), was able to estimate electrostatic solvation free energies to within a mean unsigned error of 4% on a test set of more than 600 proteins - a significant improvement over previous BIBEE models. In this work, we tested the BIBEE/I model for its capability to predict residue-by-residue interactions in protein-protein binding, using the widely studied model system of trypsin and bovine pancreatic trypsin inhibitor (BPTI). Finding that the BIBEE/I model performs surprisingly worse in this task than simpler BIBEE models, we seek to explain this behavior in terms of the models' differing spectral approximations of the exact boundary-integral operator. Calculations of analytically solvable systems (spheres and tri-axial ellipsoids) suggest two possibilities for improvement. The first is a modified BIBEE/I approach that captures the asymptotic eigenvalue limit correctly, and the second involves the dipole and quadrupole modes for ellipsoidal approximations of protein geometries. Our analysis suggests that fast, rigorous approximate models derived from reduced-basis approximation of boundary-integral equations might reach unprecedented accuracy, if the dipole and quadrupole modes can be captured quickly for general shapes. PMID:24466561

  2. Set-base dynamical parameter estimation and model invalidation for biochemical reaction networks.

    Science.gov (United States)

    Rumschinski, Philipp; Borchers, Steffen; Bosio, Sandro; Weismantel, Robert; Findeisen, Rolf

    2010-05-25

    Mathematical modeling and analysis have become, for the study of biological and cellular processes, an important complement to experimental research. However, the structural and quantitative knowledge available for such processes is frequently limited, and measurements are often subject to inherent and possibly large uncertainties. This results in competing model hypotheses, whose kinetic parameters may not be experimentally determinable. Discriminating among these alternatives and estimating their kinetic parameters is crucial to improve the understanding of the considered process and to benefit from the analytical tools at hand. In this work we present a set-based framework that allows one to discriminate between competing model hypotheses and to provide guaranteed outer estimates on the model parameters that are consistent with the (possibly sparse and uncertain) experimental measurements. This is obtained by means of exact proofs of model invalidity that exploit the polynomial/rational structure of biochemical reaction networks, and by making use of an efficient strategy to balance solution accuracy and computational effort. The practicability of our approach is illustrated with two case studies. The first study shows that our approach allows one to conclusively rule out wrong model hypotheses. The second study focuses on parameter estimation, and shows that the proposed method allows one to evaluate the global influence of measurement sparsity, uncertainty, and prior knowledge on the parameter estimates. This can help in designing further experiments leading to improved parameter estimates.

  3. Estimation of post disruption plasma temperature for fast current quench Aditya plasma shots

    International Nuclear Information System (INIS)

    Purohit, S.; Chowdhuri, M.B.; Joisa, Y.S.; Raval, J.V.; Ghosh, J.; Jha, R.

    2013-01-01

    Characteristics of tokamak current quenches are an important issue for determining the electromagnetic forces that act on the in-vessel components and the vacuum vessel during major disruptions. It is observed that the thermal quench is followed by a sharp current decay. Fast current quench disruptive plasma shots were investigated for the ADITYA tokamak. The current decay time determined for the selected shots was in the range of 0.8 ms to 2.5 ms. This current decay information was then applied to the L/R model, frequently employed for estimating the current decay time in tokamak plasmas, which considers the plasma inductance and plasma resistivity. This methodology was adopted to estimate the post-disruption plasma temperature from the experimentally observed current decay times of the fast current quench disruptive ADITYA plasma shots. The study reveals that, for the identified shots, the current decay time increases steadily with the post-disruption plasma temperature. The investigation also explores the behavior of the post-disruption plasma temperature and the current decay time as functions of the edge safety factor Q: both decrease as Q increases. (author)
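
    The inversion at the heart of the L/R model can be sketched in a few lines. This is a back-of-the-envelope reconstruction, not the authors' code: it assumes a cylindrical plasma resistance R = η·2πR₀/(πa²) and the NRL-formulary parallel Spitzer resistivity η ≈ 1.03×10⁻⁴ Z lnΛ / Tₑ[eV]^{3/2} Ω·m.

        import math

        def te_from_quench_time(tau_s, inductance_h, r_major_m, a_minor_m,
                                z_eff=1.0, coulomb_log=15.0):
            """Invert tau = L/R for an effective post-disruption electron
            temperature (eV), using Spitzer resistivity for the plasma column."""
            resistance = inductance_h / tau_s                     # ohm
            # eta from R = eta * (2 pi R0) / (pi a^2)
            eta = resistance * math.pi * a_minor_m**2 / (2.0 * math.pi * r_major_m)
            # Spitzer: eta = 1.03e-4 * Z * lnL / Te^1.5  (ohm*m, Te in eV)
            return (1.03e-4 * z_eff * coulomb_log / eta) ** (2.0 / 3.0)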

  4. Fast estimation of expected information gains for Bayesian experimental designs based on Laplace approximations

    KAUST Repository

    Long, Quan; Scavino, Marco; Tempone, Raul; Wang, Suojin

    2013-01-01

    Shannon-type expected information gain can be used to evaluate the relevance of a proposed experiment subject to uncertainty. The estimation of such gain, however, relies on a double-loop integration, and its numerical integration in multi-dimensional cases, e.g., when using Monte Carlo sampling methods, is computationally too expensive for realistic physical models, especially for those involving the solution of partial differential equations. In this work, we present a new methodology, based on the Laplace approximation for the integration of the posterior probability density function (pdf), to accelerate the estimation of the expected information gains in the model parameters and predictive quantities of interest. We obtain a closed-form approximation of the inner integral and the corresponding dominant error term in the cases where parameters are determined by the experiment, such that only a single-loop integration is needed to carry out the estimation of the expected information gain. To deal with the issue of dimensionality in a complex problem, we use a sparse quadrature for the integration over the prior pdf. We demonstrate the accuracy, efficiency and robustness of the proposed method via several nonlinear numerical examples, including the design of the scalar parameter in a one-dimensional cubic polynomial function, the design of the same scalar in a modified function with two indistinguishable parameters, the resolution width and measurement time for a blurred single peak spectrum, and the boundary source locations for impedance tomography in a square domain. © 2013 Elsevier B.V.
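
    The Laplace device that replaces the inner loop is the classical second-order expansion around the posterior mode. A generic sketch of the resulting closed form (not the paper's specific error analysis) is:

        import numpy as np

        def laplace_log_integral(f_at_mode, neg_hessian_at_mode):
            """Laplace approximation to log( integral of exp(f(theta)) dtheta ):
            f(theta_hat) + (d/2) log(2 pi) - (1/2) log det(-H(theta_hat)),
            where theta_hat is the mode of f and H its Hessian there."""
            d = neg_hessian_at_mode.shape[0]
            _, logdet = np.linalg.slogdet(neg_hessian_at_mode)
            return f_at_mode + 0.5 * d * np.log(2.0 * np.pi) - 0.5 * logdet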

  6. Calculational model based on influence function method for power distribution and control rod worth in fast reactors

    International Nuclear Information System (INIS)

    Sanda, T.; Azekura, K.

    1983-01-01

    A model for calculating the power distribution and the control rod worth in fast reactors has been developed. This model is based on the influence function method. The characteristics of the model are as follows. Influence functions for any change in the control rod insertion ratio are expressed using an influence function for one appropriate control rod insertion, in order to reduce the computer memory size required by the method. A control rod worth is calculated on the basis of a one-group approximation in which cross sections are generated by bilinear (flux-adjoint) weighting, not the usual flux weighting, in order to reduce the collapse error. The effective neutron multiplication factor is calculated by adjoint weighting in order to reduce the effect of errors in the one-group flux distribution. The results obtained in numerical examinations of a prototype fast reactor indicate that this method is suitable for on-line core performance evaluation because of its short computing time and small memory size.

  7. On Channel Estimation for OFDM/TDM Using MMSE-FDE in a Fast Fading Channel

    Directory of Open Access Journals (Sweden)

    Gacanin Haris

    2009-01-01

    Full Text Available Abstract MMSE-FDE can improve the transmission performance of OFDM combined with time division multiplexing (OFDM/TDM), but knowledge of the channel state information and the noise variance is required to compute the MMSE weight. In this paper, a performance evaluation of OFDM/TDM using MMSE-FDE with pilot-assisted channel estimation over a fast fading channel is presented. To improve the tracking ability against fast fading, a robust pilot-assisted channel estimation is presented that uses time-domain filtering on a slot-by-slot basis and frequency-domain interpolation. We derive the mean square error (MSE) of the channel estimator and then discuss the tradeoff between improving the tracking ability against fading and noise reduction. The achievable bit error rate (BER) performance is evaluated by computer simulation and compared with conventional OFDM. It is shown that OFDM/TDM using MMSE-FDE achieves a lower BER and a better tracking ability against fast fading in comparison with conventional OFDM.
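
    The MMSE weight referred to above has the standard per-subcarrier form W(k) = H*(k) / (|H(k)|² + σ²/Eₛ), which is where the channel estimate and the noise variance enter. A generic sketch (assumed names, not the paper's code):

        import numpy as np

        def mmse_fde(r_freq, h_est, noise_var, symbol_energy=1.0):
            """Apply per-bin MMSE frequency-domain equalization.

            r_freq : received frequency-domain samples R(k)
            h_est  : estimated channel frequency response H(k)
            """
            w = np.conj(h_est) / (np.abs(h_est) ** 2 + noise_var / symbol_energy)
            return w * r_freq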

  8. Fast fundamental frequency estimation

    DEFF Research Database (Denmark)

    Nielsen, Jesper Kjær; Jensen, Tobias Lindstrøm; Jensen, Jesper Rindom

    2017-01-01

    Modelling signals as being periodic is common in many applications. Such periodic signals can be represented by a weighted sum of sinusoids with frequencies being integer multiples of the fundamental frequency. Due to its widespread use, numerous methods have been proposed to estimate the fundamental frequency.

  9. Spatio-Temporal Audio Enhancement Based on IAA Noise Covariance Matrix Estimates

    DEFF Research Database (Denmark)

    Nørholm, Sidsel Marie; Jensen, Jesper Rindom; Christensen, Mads Græsbøll

    2014-01-01

    A method for estimating the noise covariance matrix in a multichannel setup is proposed. The method is based on the iterative adaptive approach (IAA), which only needs short segments of data to estimate the covariance matrix. Therefore, the method can be used for fast varying signals. The method is based on an assumption of the desired signal being harmonic, which is used for estimating the noise covariance matrix from the covariance matrix of the observed signal. The noise covariance estimate is used in the linearly constrained minimum variance (LCMV) filter and compared

  10. Fog Density Estimation and Image Defogging Based on Surrogate Modeling for Optical Depth.

    Science.gov (United States)

    Jiang, Yutong; Sun, Changming; Zhao, Yu; Yang, Li

    2017-05-03

    In order to estimate fog density correctly and to remove fog from foggy images appropriately, a surrogate model for optical depth is presented in this paper. We comprehensively investigate various fog-relevant features and propose a novel feature, based on the hue, saturation, and value color space, which correlates well with the perception of fog density. We use a surrogate-based method to learn a refined polynomial regression model for optical depth with informative fog-relevant features such as dark-channel, saturation-value, and chroma, which are selected on the basis of sensitivity analysis. Based on the obtained accurate surrogate model for optical depth, an effective method for fog density estimation and image defogging is proposed. The effectiveness of our proposed method is verified quantitatively and qualitatively by the experimental results on both synthetic and real-world foggy images.

  11. Fast fitting of non-Gaussian state-space models to animal movement data via Template Model Builder

    DEFF Research Database (Denmark)

    Albertsen, Christoffer Moesgaard; Whoriskey, Kim; Yurkowski, David

    2015-01-01

    We recommend using the Laplace approximation combined with automatic differentiation (as implemented in the novel R package Template Model Builder; TMB) for the fast fitting of continuous-time multivariate non-Gaussian SSMs. Through Argos satellite tracking data, we demonstrate that the use of continuous... are able to estimate additional parameters compared to previous methods, all without requiring a substantial increase in computational time. The model implementation is made available through the R package argosTrack.

  12. Unbiased free energy estimates in fast nonequilibrium transformations using Gaussian mixtures

    International Nuclear Information System (INIS)

    Procacci, Piero

    2015-01-01

    In this paper, we present an improved method for obtaining unbiased estimates of the free energy difference between two thermodynamic states using the work distribution measured in nonequilibrium driven experiments connecting these states. The method is based on the assumption that any observed work distribution is given by a mixture of Gaussian distributions, whose normal components are identical in either direction of the nonequilibrium process, with weights regulated by the Crooks theorem. Using the prototypical example for the driven unfolding/folding of deca-alanine, we show that the predicted behavior of the forward and reverse work distributions, assuming a combination of only two Gaussian components with Crooks derived weights, explains surprisingly well the striking asymmetry in the observed distributions at fast pulling speeds. The proposed methodology opens the way for a perfectly parallel implementation of Jarzynski-based free energy calculations in complex systems
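
    For orientation, the standard exponential-average baseline that such Gaussian-mixture methods aim to improve on (and which becomes badly biased at fast pulling speeds) is the Jarzynski estimator, ΔF = -kT ln⟨exp(-W/kT)⟩. A minimal numerically stable sketch:

        import numpy as np

        def jarzynski_delta_f(work, kT):
            """Jarzynski estimate of the free energy difference from a sample
            of nonequilibrium work values, via a log-sum-exp for stability."""
            w = -np.asarray(work, dtype=float) / kT
            m = w.max()
            return -kT * (m + np.log(np.mean(np.exp(w - m))))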

  13. Fast and efficient indexing approach for object recognition

    Science.gov (United States)

    Hefnawy, Alaa; Mashali, Samia A.; Rashwan, Mohsen; Fikri, Magdi

    1999-08-01

    This paper introduces a fast and efficient indexing approach for both 2D and 3D model-based object recognition in the presence of rotation, translation, and scale variations of objects. The indexing entries are computed after preprocessing the data by Haar wavelet decomposition. The scheme is based on a unified image feature detection approach using Zernike moments. A set of low-level features, e.g., high-precision edges and gray-level corners, is estimated by a set of orthogonal Zernike moments calculated locally around every image point. High-dimensional, highly descriptive indexing entries are then calculated based on the correlation of these local features and employed for fast access to the model database to generate hypotheses. A list of the most likely candidate models is then presented by evaluating the hypotheses. Experimental results are included to demonstrate the effectiveness of the proposed indexing approach.

  14. NASA AVOSS Fast-Time Wake Prediction Models: User's Guide

    Science.gov (United States)

    Ahmad, Nash'at N.; VanValkenburg, Randal L.; Pruis, Matthew

    2014-01-01

    The National Aeronautics and Space Administration (NASA) is developing and testing fast-time wake transport and decay models to safely enhance the capacity of the National Airspace System (NAS). The fast-time wake models are empirical algorithms used for real-time predictions of wake transport and decay based on aircraft parameters and ambient weather conditions. The aircraft dependent parameters include the initial vortex descent velocity and the vortex pair separation distance. The atmospheric initial conditions include vertical profiles of temperature or potential temperature, eddy dissipation rate, and crosswind. The current distribution includes the latest versions of the APA (3.4) and the TDP (2.1) models. This User's Guide provides detailed information on the model inputs, file formats, and the model output. An example of a model run and a brief description of the Memphis 1995 Wake Vortex Dataset is also provided.

  15. A novel Gaussian model based battery state estimation approach: State-of-Energy

    International Nuclear Information System (INIS)

    He, HongWen; Zhang, YongZhi; Xiong, Rui; Wang, Chun

    2015-01-01

    Highlights: • The Gaussian model is employed to construct a novel battery model. • The genetic algorithm is used to implement model parameter identification. • The AIC is used to decide the best hysteresis order of the battery model. • A novel battery SoE estimator is proposed and verified by two kinds of batteries. - Abstract: State-of-energy (SoE) is a very important index for the battery management system (BMS) used in electric vehicles (EVs); it is indispensable for ensuring the safe and reliable operation of batteries. To estimate battery SoE accurately, the main work can be summarized in three aspects. (1) Considering that different kinds of batteries show different open circuit voltage behaviors, the Gaussian model is employed to construct the battery model. Furthermore, the genetic algorithm is employed to locate the optimal parameters of the selected battery model. (2) To determine an optimal tradeoff between battery model complexity and prediction precision, the Akaike information criterion (AIC) is used to determine the best hysteresis order of the combined battery model. Results from a comparative analysis show that the first-order hysteresis battery model is the best according to the AIC values. (3) The central difference Kalman filter (CDKF) is used to estimate the real-time SoE, and an erroneous initial SoE is considered to evaluate the robustness of the SoE estimator. Lastly, two kinds of lithium-ion batteries are used to verify the proposed SoE estimation approach. The results show that the maximum SoE estimation error is within 1% for both the LiFePO4 and LiMn2O4 battery datasets.
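
    The energy bookkeeping behind SoE is simple even though the paper's estimator wraps it in a central difference Kalman filter. A deterministic propagation step might look like the sketch below; the battery numbers are made up, and no filtering or hysteresis modeling is included.

        def update_soe(soe_prev, voltage, current, dt, e_total):
            # Subtract the energy drawn this step (V * I * dt) from the
            # remaining fraction of the battery's total energy e_total.
            return soe_prev - (voltage * current * dt) / e_total

        soe = 0.9                                        # assumed initial SoE
        soe = update_soe(soe, voltage=3.3, current=2.0, dt=1.0,
                         e_total=3.3 * 2.0 * 3600)       # hypothetical cell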

  16. Fast Kalman-like filtering for large-dimensional linear and Gaussian state-space models

    KAUST Repository

    Ait-El-Fquih, Boujemaa; Hoteit, Ibrahim

    2015-01-01

    This paper considers the filtering problem for linear and Gaussian state-space models with large dimensions, a setup in which the optimal Kalman Filter (KF) might not be applicable owing to the excessive cost of manipulating huge covariance matrices. Among the most popular alternatives that enable cheaper and reasonable computation is the Ensemble KF (EnKF), a Monte Carlo-based approximation. In this paper, we consider a class of a posteriori distributions with diagonal covariance matrices and propose fast approximate deterministic-based algorithms based on the Variational Bayesian (VB) approach. More specifically, we derive two iterative KF-like algorithms that differ in the way they operate between two successive filtering estimates; one involves a smoothing estimate and the other involves a prediction estimate. Despite its iterative nature, the prediction-based algorithm provides a computational cost that is, on the one hand, independent of the number of iterations in the limit of very large state dimensions, and on the other hand, always much smaller than the cost of the EnKF. The cost of the smoothing-based algorithm depends on the number of iterations that may, in some situations, make this algorithm slower than the EnKF. The performances of the proposed filters are studied and compared to those of the KF and EnKF through a numerical example.

  17. Fast Kalman-like filtering for large-dimensional linear and Gaussian state-space models

    KAUST Repository

    Ait-El-Fquih, Boujemaa

    2015-08-13

    This paper considers the filtering problem for linear and Gaussian state-space models with large dimensions, a setup in which the optimal Kalman Filter (KF) might not be applicable owing to the excessive cost of manipulating huge covariance matrices. Among the most popular alternatives that enable cheaper and reasonable computation is the Ensemble KF (EnKF), a Monte Carlo-based approximation. In this paper, we consider a class of a posteriori distributions with diagonal covariance matrices and propose fast approximate deterministic-based algorithms based on the Variational Bayesian (VB) approach. More specifically, we derive two iterative KF-like algorithms that differ in the way they operate between two successive filtering estimates; one involves a smoothing estimate and the other involves a prediction estimate. Despite its iterative nature, the prediction-based algorithm provides a computational cost that is, on the one hand, independent of the number of iterations in the limit of very large state dimensions, and on the other hand, always much smaller than the cost of the EnKF. The cost of the smoothing-based algorithm depends on the number of iterations that may, in some situations, make this algorithm slower than the EnKF. The performances of the proposed filters are studied and compared to those of the KF and EnKF through a numerical example.

  18. Fast analytical scatter estimation using graphics processing units.

    Science.gov (United States)

    Ingleby, Harry; Lippuner, Jonas; Rickey, Daniel W; Li, Yue; Elbakri, Idris

    2015-01-01

    The purpose of this work was to develop a fast patient-specific analytical estimator of first-order Compton and Rayleigh scatter in cone-beam computed tomography, implemented using graphics processing units. The authors developed an analytical estimator for first-order Compton and Rayleigh scatter in a cone-beam computed tomography geometry. The estimator was coded using NVIDIA's CUDA environment for execution on an NVIDIA graphics processing unit. Performance of the analytical estimator was validated by comparison with high-count Monte Carlo simulations for two different numerical phantoms. Monoenergetic analytical simulations were compared with monoenergetic and polyenergetic Monte Carlo simulations. Analytical and Monte Carlo scatter estimates were compared both qualitatively, from visual inspection of images and profiles, and quantitatively, using a scaled root-mean-square difference metric. Reconstruction of simulated cone-beam projection data of an anthropomorphic breast phantom illustrated the potential of this method as a component of a scatter correction algorithm. The monoenergetic analytical and Monte Carlo scatter estimates showed very good agreement. The monoenergetic analytical estimates showed good agreement for Compton single scatter and reasonable agreement for Rayleigh single scatter when compared with polyenergetic Monte Carlo estimates. For a voxelized phantom with dimensions 128 × 128 × 128 voxels and a detector with 256 × 256 pixels, the analytical estimator required 669 seconds for a single projection, using a single NVIDIA 9800 GX2 video card. Accounting for first-order scatter in cone-beam image reconstruction improves the contrast-to-noise ratio of the reconstructed images. The analytical scatter estimator, implemented using graphics processing units, provides rapid and accurate estimates of single scatter and, with further acceleration and a method to account for multiple scatter, may be useful for practical scatter correction schemes.

  19. Covariance-based synaptic plasticity in an attractor network model accounts for fast adaptation in free operant learning.

    Science.gov (United States)

    Neiman, Tal; Loewenstein, Yonatan

    2013-01-23

    In free operant experiments, subjects alternate at will between targets that yield rewards stochastically. Behavior in these experiments is typically characterized by (1) an exponential distribution of stay durations, (2) matching of the relative time spent at a target to its relative share of the total number of rewards, and (3) adaptation after a change in the reward rates that can be very fast. The neural mechanism underlying these regularities is largely unknown. Moreover, current decision-making neural network models typically aim at explaining behavior in discrete-time experiments in which a single decision is made once in every trial, making these models hard to extend to the more natural case of free operant decisions. Here we show that a model based on attractor dynamics, in which transitions are induced by noise and preference is formed via covariance-based synaptic plasticity, can account for the characteristics of behavior in free operant experiments. We compare a specific instance of such a model, in which two recurrently excited populations of neurons compete for higher activity, to the behavior of rats responding on two levers for rewarding brain stimulation on a concurrent variable interval reward schedule (Gallistel et al., 2001). We show that the model is consistent with the rats' behavior, and in particular, with the observed fast adaptation to matching behavior. Further, we show that the neural model can be reduced to a behavioral model, and we use this model to deduce a novel "conservation law," which is consistent with the behavior of the rats.

  20. Fast image interpolation for motion estimation using graphics hardware

    Science.gov (United States)

    Kelly, Francis; Kokaram, Anil

    2004-05-01

    Motion estimation and compensation is the key to high quality video coding. Block matching motion estimation is used in most video codecs, including MPEG-2, MPEG-4, H.263 and H.26L. Motion estimation is also a key component in the digital restoration of archived video and for post-production and special effects in the movie industry. Sub-pixel accurate motion vectors can improve the quality of the vector field and lead to more efficient video coding. However sub-pixel accuracy requires interpolation of the image data. Image interpolation is a key requirement of many image processing algorithms. Often interpolation can be a bottleneck in these applications, especially in motion estimation due to the large number of pixels involved. In this paper we propose using commodity computer graphics hardware for fast image interpolation. We use the full search block matching algorithm to illustrate the problems and limitations of using graphics hardware in this way.
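
    The per-pixel operation at issue is ordinary bilinear interpolation, repeated over every candidate sub-pixel displacement. A scalar CPU version is sketched below for reference; the paper's contribution is mapping exactly this workload onto graphics hardware, which is not shown here.

        import numpy as np

        def bilinear(img, x, y):
            # Weighted average of the four pixels surrounding (x, y).
            x0, y0 = int(np.floor(x)), int(np.floor(y))
            dx, dy = x - x0, y - y0
            return ((1 - dx) * (1 - dy) * img[y0, x0]
                    + dx * (1 - dy) * img[y0, x0 + 1]
                    + (1 - dx) * dy * img[y0 + 1, x0]
                    + dx * dy * img[y0 + 1, x0 + 1])

        img = np.arange(16.0).reshape(4, 4)
        print(bilinear(img, 1.5, 2.25))              # 10.5, between the four neighbours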

  1. Profiling Fast Healthcare Interoperability Resources (FHIR) of Family Health History based on the Clinical Element Models

    OpenAIRE

    Lee, Jaehoon; Hulse, Nathan C.; Wood, Grant M.; Oniki, Thomas A.; Huff, Stanley M.

    2017-01-01

    In this study we developed a Fast Healthcare Interoperability Resources (FHIR) profile to support exchanging a full pedigree based family health history (FHH) information across multiple systems and applications used by clinicians, patients, and researchers. We used previously developed clinical element models (CEMs) that are capable of representing the FHH information, and derived essential data elements including attributes, constraints, and value sets. We analyzed gaps between the FHH CEM ...

  2. Fast-slow asymptotics for a Markov chain model of fast sodium current

    Science.gov (United States)

    Starý, Tomáš; Biktashev, Vadim N.

    2017-09-01

    We explore the feasibility of using fast-slow asymptotics to eliminate the computational stiffness of discrete-state, continuous-time deterministic Markov chain models of ionic channels underlying cardiac excitability. We focus on a Markov chain model of fast sodium current, and investigate its asymptotic behaviour with respect to small parameters identified in different ways.

  3. [Estimation of forest canopy chlorophyll content based on PROSPECT and SAIL models].

    Science.gov (United States)

    Yang, Xi-guang; Fan, Wen-yi; Yu, Ying

    2010-11-01

    The forest canopy chlorophyll content directly reflects the health and stress of the forest. Accurate estimation of the forest canopy chlorophyll content is a significant foundation for research on forest ecosystem cycle models. In the present paper, the inversion of the forest canopy chlorophyll content was based on the PROSPECT and SAIL models from the physical mechanism angle. First, the leaf spectrum and canopy spectrum were simulated by the PROSPECT and SAIL models, respectively, and a leaf chlorophyll content look-up table was established for leaf chlorophyll content retrieval. Then leaf chlorophyll content was converted into canopy chlorophyll content by the Leaf Area Index (LAI). Finally, canopy chlorophyll content was estimated from a Hyperion image. The results indicated that the main effective bands for chlorophyll content were 400-900 nm; the leaf and canopy spectra simulated by the PROSPECT and SAIL models fitted the measured spectra well, with 7.06% and 16.49% relative error, respectively; the RMSE of the LAI inversion was 0.5426; and the forest canopy chlorophyll content was estimated well by the PROSPECT and SAIL models, with a precision of 77.02%.
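
    The look-up-table step of such an inversion reduces to a nearest-spectrum search, as in the hedged sketch below; the random LUT, band count, and LAI value are placeholders rather than actual PROSPECT/SAIL output.

        import numpy as np

        def invert_leaf_chl(measured_refl, lut_refl, lut_chl):
            # Pick the chlorophyll value whose simulated spectrum has the
            # smallest RMSE against the measured spectrum.
            rmse = np.sqrt(np.mean((lut_refl - measured_refl) ** 2, axis=1))
            return lut_chl[np.argmin(rmse)]

        lut_refl = np.random.rand(100, 50)           # placeholder simulated spectra
        lut_chl = np.linspace(10, 80, 100)           # ug/cm^2 grid
        leaf_chl = invert_leaf_chl(np.random.rand(50), lut_refl, lut_chl)
        canopy_chl = leaf_chl * 3.2                  # scale by the retrieved LAI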

  4. Ecosystem-management-based Management Models of Fast-growing and High-yield Plantation and Its Eco-economic Benefits Analysis

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    The paper expounded the basic concept and principles of ecosystem management, and analyzed the state and trend of industrial plantation ecosystem management in other countries. Based on the analysis of typical case studies, the eco-economic benefits were evaluated for the management models of fast-growing and high-yield plantations.

  5. Global Performance of a Fast Parameterization Scheme for Estimating Surface Solar Radiation from MODIS data

    Science.gov (United States)

    Tang, W.; Yang, K.; Sun, Z.; Qin, J.; Niu, X.

    2016-12-01

    A fast parameterization scheme named SUNFLUX is used in this study to estimate instantaneous surface solar radiation (SSR) based on products from the Moderate Resolution Imaging Spectroradiometer (MODIS) sensor onboard both Terra and Aqua platforms. The scheme mainly takes into account the absorption and scattering processes due to clouds, aerosols and gas in the atmosphere. The estimated instantaneous SSR is evaluated against surface observations obtained from seven stations of the Surface Radiation Budget Network (SURFRAD), four stations in the North China Plain (NCP) and 40 stations of the Baseline Surface Radiation Network (BSRN). The statistical results for evaluation against these three datasets show that the relative root-mean-square error (RMSE) values of SUNFLUX are less than 15%, 16% and 17%, respectively. Daily SSR is derived through temporal upscaling from the MODIS-based instantaneous SSR estimates, and is validated against surface observations. The relative RMSE values for daily SSR estimates are about 16% at the seven SURFRAD stations, four NCP stations, 40 BSRN stations and 90 China Meteorological Administration (CMA) radiation stations.

  6. Parameter estimation methods for gene circuit modeling from time-series mRNA data: a comparative study.

    Science.gov (United States)

    Fan, Ming; Kuwahara, Hiroyuki; Wang, Xiaolei; Wang, Suojin; Gao, Xin

    2015-11-01

    Parameter estimation is a challenging computational problem in the reverse engineering of biological systems. Because advances in biotechnology have facilitated wide availability of time-series gene expression data, systematic parameter estimation of gene circuit models from such time-series mRNA data has become an important method for quantitatively dissecting the regulation of gene expression. By focusing on the modeling of gene circuits, we examine here the performance of three types of state-of-the-art parameter estimation methods: population-based methods, online methods and model-decomposition-based methods. Our results show that certain population-based methods are able to generate high-quality parameter solutions. The performance of these methods, however, is heavily dependent on the size of the parameter search space, and their computational requirements substantially increase as the size of the search space increases. In comparison, online methods and model decomposition-based methods are computationally faster alternatives and are less dependent on the size of the search space. Among other things, our results show that a hybrid approach that augments computationally fast methods with local search as a subsequent refinement procedure can substantially increase the quality of their parameter estimates to a level on par with the best solution obtained from the population-based methods while maintaining high computational speed. These results suggest that such hybrid methods can be a promising alternative to the more commonly used population-based methods for parameter estimation of gene circuit models when limited prior knowledge about the underlying regulatory mechanisms makes the size of the parameter search space vastly large.

  7. Observer-Based Human Knee Stiffness Estimation.

    Science.gov (United States)

    Misgeld, Berno J E; Luken, Markus; Riener, Robert; Leonhardt, Steffen

    2017-05-01

    We consider the problem of stiffness estimation for the human knee joint during motion in the sagittal plane. The new stiffness estimator uses a nonlinear reduced-order biomechanical model and a body sensor network (BSN). The developed model is based on a two-dimensional knee kinematics approach to calculate the angle-dependent lever arms and the torques of the muscle-tendon-complex. To minimize errors in the knee stiffness estimation procedure that result from model uncertainties, a nonlinear observer is developed. The observer uses the electromyogram (EMG) of involved muscles as input signals and the segmental orientation as the output signal to correct the observer-internal states. Because of dominating model nonlinearities and nonsmoothness of the corresponding nonlinear functions, an unscented Kalman filter is designed to compute and update the observer feedback (Kalman) gain matrix. The observer-based stiffness estimation algorithm is subsequently evaluated in simulations and in a test bench, specifically designed to provide robotic movement support for the human knee joint. In silico and experimental validation underline the good performance of the knee stiffness estimation even in the cases of a knee stiffening due to antagonistic coactivation. We have shown the working principle of an observer-based approach to knee stiffness estimation that employs EMG signals and segmental orientation provided by our own IPANEMA BSN. The presented approach makes real-time, model-based estimation of knee stiffness with minimal instrumentation possible.

  8. Using optical remote sensing model to estimate oil slick thickness based on satellite image

    International Nuclear Information System (INIS)

    Lu, Y C; Tian, Q J; Lyu, C G; Fu, W X; Han, W C

    2014-01-01

    An optical remote sensing model has been established based on two-beam interference theory to estimate marine oil slick thickness. Extinction coefficient and normalized reflectance of oil are two important parts of this model. The extinction coefficient is an important inherent optical property and does not vary when the background reflectance changes. Normalized reflectance can be used to eliminate the background differences between in situ measured spectra and the remotely sensed image. Therefore, marine oil slick thickness and area can be estimated and mapped based on the optical remote sensing image and the extinction coefficient.

  9. A fast and reliable method for simultaneous waveform, amplitude and latency estimation of single-trial EEG/MEG data.

    Directory of Open Access Journals (Sweden)

    Wouter D Weeda

    Full Text Available The amplitude and latency of single-trial EEG/MEG signals may provide valuable information concerning human brain functioning. In this article we propose a new method to reliably estimate single-trial amplitude and latency of EEG/MEG signals. The advantages of the method are fourfold. First, no a-priori specified template function is required. Second, the method allows for multiple signals that may vary independently in amplitude and/or latency. Third, the method is less sensitive to noise as it models data with a parsimonious set of basis functions. Finally, the method is very fast since it is based on an iterative linear least squares algorithm. A simulation study shows that the method yields reliable estimates under different levels of latency variation and signal-to-noise ratioÕs. Furthermore, it shows that the existence of multiple signals can be correctly determined. An application to empirical data from a choice reaction time study indicates that the method describes these data accurately.

  10. Environmental risk assessment of selected organic chemicals based on TOC test and QSAR estimation models.

    Science.gov (United States)

    Chi, Yulang; Zhang, Huanteng; Huang, Qiansheng; Lin, Yi; Ye, Guozhu; Zhu, Huimin; Dong, Sijun

    2018-02-01

    The environmental risks of organic chemicals are largely determined by their persistence, bioaccumulation, and toxicity (PBT) and by their physicochemical properties. Major regulations in different countries and regions identify chemicals according to their bioconcentration factor (BCF) and octanol-water partition coefficient (Kow), which frequently displays a substantial correlation with the sediment sorption coefficient (Koc). Half-life or degradability is crucial for the persistence evaluation of chemicals. Quantitative structure activity relationship (QSAR) estimation models are indispensable for predicting environmental fate and health effects in the absence of field- or laboratory-based data. In this study, 39 chemicals of high concern were chosen for half-life testing based on total organic carbon (TOC) degradation, and two widely accepted and highly used QSAR estimation models (i.e., EPI Suite and PBT Profiler) were adopted for environmental risk evaluation. The experimental results, the estimated data, and the two model-based results were compared on the basis of water solubility, Kow, Koc, BCF, and half-life. Environmental risk assessment of the selected compounds was achieved by combining experimental data and estimation models. It was concluded that both EPI Suite and PBT Profiler were fairly accurate in measuring the physicochemical properties and degradation half-lives for water, soil, and sediment. However, the experimental and estimated half-lives were still not absolutely consistent. This suggests that the prediction models are deficient in some respects and that experimental data and predicted results need to be combined to evaluate the environmental fate and risks of pollutants.

  11. Fast Reactor Fuel Cycle Cost Estimates for Advanced Fuel Cycle Studies

    International Nuclear Information System (INIS)

    Harrison, Thomas

    2013-01-01

    Presentation Outline: • Why Do I Need a Cost Basis? • History of the Advanced Fuel Cycle Cost Basis • Description of the Cost Basis • Current Work • Fast Reactor Fuel Cycle Applications • Sample Fuel Cycle Cost Estimate Analysis • Future Work

  12. Limited information estimation of the diffusion-based item response theory model for responses and response times.

    Science.gov (United States)

    Ranger, Jochen; Kuhn, Jörg-Tobias; Szardenings, Carsten

    2016-05-01

    Psychological tests are usually analysed with item response models. Recently, some alternative measurement models have been proposed that were derived from cognitive process models developed in experimental psychology. These models consider the responses but also the response times of the test takers. Two such models are the Q-diffusion model and the D-diffusion model. Both models can be calibrated with the diffIRT package of the R statistical environment via marginal maximum likelihood (MML) estimation. In this manuscript, an alternative approach to model calibration is proposed. The approach is based on weighted least squares estimation and parallels the standard estimation approach in structural equation modelling. Estimates are determined by minimizing the discrepancy between the observed and the implied covariance matrix. The estimator is simple to implement, consistent, and asymptotically normally distributed. Least squares estimation also provides a test of model fit by comparing the observed and implied covariance matrix. The estimator and the test of model fit are evaluated in a simulation study. Although parameter recovery is good, the estimator is less efficient than the MML estimator.

  13. The association between estimated average glucose levels and fasting plasma glucose levels

    Directory of Open Access Journals (Sweden)

    Giray Bozkaya

    2010-01-01

    OBJECTIVE: The level of hemoglobin A1c (HbA1c), also known as glycated hemoglobin, determines how well a patient's blood glucose level has been controlled over the previous 8-12 weeks. HbA1c levels help patients and doctors understand whether a particular diabetes treatment is working and whether adjustments need to be made to the treatment. Because the HbA1c level is a marker of blood glucose for the previous 120 days, average blood glucose levels can be estimated using HbA1c levels. Our aim in the present study was to investigate the relationship between estimated average glucose levels, as calculated from HbA1c levels, and fasting plasma glucose levels. METHODS: The fasting plasma glucose levels of 3891 diabetic patient samples (1497 male, 2394 female) were obtained from the laboratory information system used for HbA1c testing by the Department of Internal Medicine at the Izmir Bozyaka Training and Research Hospital in Turkey. These samples were selected from patient samples that had hemoglobin levels between 12 and 16 g/dL. The estimated glucose levels were calculated using the following formula: 28.7 x HbA1c - 46.7. Glucose and HbA1c levels were determined using hexokinase and high performance liquid chromatography (HPLC) methods, respectively. RESULTS: A strong positive correlation between fasting plasma glucose levels and estimated average blood glucose levels (r=0.757, p<0.05) was observed. The difference was statistically significant. CONCLUSION: Reporting the estimated average glucose level together with the HbA1c level is believed to assist patients and doctors in determining the effectiveness of blood glucose control measures.
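
    The conversion used in the study is a one-line formula, shown below for concreteness; the example HbA1c value is arbitrary.

        def estimated_average_glucose(hba1c_percent):
            # eAG (mg/dL) = 28.7 * HbA1c(%) - 46.7, as used in the study.
            return 28.7 * hba1c_percent - 46.7

        print(estimated_average_glucose(7.0))            # about 154 mg/dL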

  14. Semiclassical model of cross section for fast neutrons

    International Nuclear Information System (INIS)

    Rosato, A.; D'Oliveira, A.A.

    1977-01-01

    A study of the main aspects of fast neutron scattering is presented, and a semiclassical approximation applicable to several practical cases is described. The obtained results are compared with experimental data for deformed nuclei and with theoretical data based on an optical model without treatment of deformations.

  15. Gaze Estimation for Off-Angle Iris Recognition Based on the Biometric Eye Model

    Energy Technology Data Exchange (ETDEWEB)

    Karakaya, Mahmut [ORNL; Barstow, Del R [ORNL; Santos-Villalobos, Hector J [ORNL; Thompson, Joseph W [ORNL; Bolme, David S [ORNL; Boehnen, Chris Bensing [ORNL

    2013-01-01

    Iris recognition is among the highest accuracy biometrics. However, its accuracy relies on controlled high-quality capture data and is negatively affected by several factors such as angle, occlusion, and dilation. Non-ideal iris recognition is a new research focus in biometrics. In this paper, we present a gaze estimation method designed for use in an off-angle iris recognition framework based on the ANONYMIZED biometric eye model. Gaze estimation is an important prerequisite step to correct off-angle iris images. To achieve an accurate frontal reconstruction of an off-angle iris image, we first need to estimate the eye gaze direction from the elliptical features of the iris image. Typically, additional information such as well-controlled light sources, head-mounted equipment, and multiple cameras is not available. Our approach utilizes only the iris and pupil boundary segmentation, allowing it to be applicable to all iris capture hardware. We compare the boundaries with a look-up table generated using our biologically inspired biometric eye model and find the closest feature point in the look-up table to estimate the gaze. Based on the results from real images, the proposed method proves effective for gaze estimation with our biometric eye model, with an average error of approximately 3.5 degrees over a 50 degree range.

  16. Min-max Extrapolation Scheme for Fast Estimation of 3D Potts Field Partition Functions. Application to the Joint Detection-Estimation of Brain Activity in fMRI

    International Nuclear Information System (INIS)

    Risser, L.; Vincent, T.; Ciuciu, P.; Risser, L.; Idier, J.; Risser, L.; Forbes, F.

    2011-01-01

    In this paper, we propose a fast numerical scheme to estimate Partition Functions (PF) of symmetric Potts fields. Our strategy is first validated on 2D two-color Potts fields and then on 3D two- and three-color Potts fields. It is then applied to the joint detection-estimation of brain activity from functional Magnetic Resonance Imaging (fMRI) data, where the goal is to automatically recover activated, deactivated and inactivated brain regions and to estimate region dependent hemodynamic filters. For any brain region, a specific 3D Potts field indeed embodies the spatial correlation over the hidden states of the voxels by modeling whether they are activated, deactivated or inactive. To make spatial regularization adaptive, the PFs of the Potts fields over all brain regions are computed prior to the brain activity estimation. Our approach is first based upon a classical path-sampling method to approximate a small subset of reference PFs corresponding to pre-specified regions. Then, we propose an extrapolation method that allows us to approximate the PFs associated to the Potts fields defined over the remaining brain regions. In comparison with preexisting methods either based on a path sampling strategy or mean-field approximations, our contribution strongly alleviates the computational cost and makes spatially adaptive regularization of whole brain fMRI datasets feasible. It is also robust against grid inhomogeneities and efficient irrespective of the topological configurations of the brain regions.

  17. Improved regression models for ventilation estimation based on chest and abdomen movements

    International Nuclear Information System (INIS)

    Liu, Shaopeng; Gao, Robert; He, Qingbo; Staudenmayer, John; Freedson, Patty

    2012-01-01

    Non-invasive estimation of minute ventilation is important for quantifying the intensity of physical activity of individuals. In this paper, several improved regression models are presented, based on the measurement of chest and abdomen movements from sensor belts worn by subjects (n = 50) engaged in 14 types of physical activity. Five linear models involving a combination of 11 features were developed, and the effects of different model training approaches and window sizes for computing the features were investigated. The performance of the models was evaluated using experimental data collected during the physical activity protocol. The predicted minute ventilation was compared to the criterion ventilation measured using a bidirectional digital volume transducer housed in a respiratory gas exchange system. The results indicate that the inclusion of breathing frequency and the use of percentile points instead of interdecile ranges over a 60 s window size reduced error by about 43%, when applied to the classical two-degrees-of-freedom model. The mean percentage error of the minute ventilation estimated for all the activities was below 7.5%, verifying reasonably good performance of the models and the applicability of the wearable sensing system for minute ventilation estimation during physical activity. (paper)

  18. Fast automatic 3D liver segmentation based on a three-level AdaBoost-guided active shape model

    Energy Technology Data Exchange (ETDEWEB)

    He, Baochun; Huang, Cheng; Zhou, Shoujun; Hu, Qingmao; Jia, Fucang, E-mail: fc.jia@siat.ac.cn [Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055 (China); Sharp, Gregory [Department of Radiation Oncology, Massachusetts General Hospital, Boston, Massachusetts 02114 (United States); Fang, Chihua; Fan, Yingfang [Department of Hepatology (I), Zhujiang Hospital, Southern Medical University, Guangzhou 510280 (China)

    2016-05-15

    Purpose: A robust, automatic, and rapid method for liver delineation is urgently needed for the diagnosis and treatment of liver disorders. Until now, the high variability in liver shape, local image artifacts, and the presence of tumors have complicated the development of automatic 3D liver segmentation. In this study, an automatic three-level AdaBoost-guided active shape model (ASM) is proposed for the segmentation of the liver based on enhanced computed tomography images in a robust and fast manner, with an emphasis on the detection of tumors. Methods: The AdaBoost voxel classifier and AdaBoost profile classifier were used to automatically guide three-level active shape modeling. In the first level of model initialization, fast automatic liver segmentation by an AdaBoost voxel classifier method is proposed. A shape model is then initialized by registration with the resulting rough segmentation. In the second level of active shape model fitting, a prior model based on the two-class AdaBoost profile classifier is proposed to identify the optimal surface. In the third level, a deformable simplex mesh with profile probability and curvature constraint as the external force is used to refine the shape fitting result. In total, three registration methods (3D similarity registration, probability atlas B-spline, and their proposed deformable closest point registration) are used to establish shape correspondence. Results: The proposed method was evaluated using three public challenge datasets: 3Dircadb1, SLIVER07, and Visceral Anatomy3. The results showed that our approach performs with promising efficiency, with an average of 35 s, and accuracy, with an average Dice similarity coefficient (DSC) of 0.94 ± 0.02, 0.96 ± 0.01, and 0.94 ± 0.02 for the 3Dircadb1, SLIVER07, and Anatomy3 training datasets, respectively. The DSCs of the SLIVER07 testing and Anatomy3 unseen testing datasets were 0.964 and 0.933, respectively. Conclusions: The proposed automatic approach achieves robust, accurate, and fast liver segmentation.

  19. Fast automatic 3D liver segmentation based on a three-level AdaBoost-guided active shape model.

    Science.gov (United States)

    He, Baochun; Huang, Cheng; Sharp, Gregory; Zhou, Shoujun; Hu, Qingmao; Fang, Chihua; Fan, Yingfang; Jia, Fucang

    2016-05-01

    A robust, automatic, and rapid method for liver delineation is urgently needed for the diagnosis and treatment of liver disorders. Until now, the high variability in liver shape, local image artifacts, and the presence of tumors have complicated the development of automatic 3D liver segmentation. In this study, an automatic three-level AdaBoost-guided active shape model (ASM) is proposed for the segmentation of the liver based on enhanced computed tomography images in a robust and fast manner, with an emphasis on the detection of tumors. The AdaBoost voxel classifier and AdaBoost profile classifier were used to automatically guide three-level active shape modeling. In the first level of model initialization, fast automatic liver segmentation by an AdaBoost voxel classifier method is proposed. A shape model is then initialized by registration with the resulting rough segmentation. In the second level of active shape model fitting, a prior model based on the two-class AdaBoost profile classifier is proposed to identify the optimal surface. In the third level, a deformable simplex mesh with profile probability and curvature constraint as the external force is used to refine the shape fitting result. In total, three registration methods (3D similarity registration, probability atlas B-spline, and their proposed deformable closest point registration) are used to establish shape correspondence. The proposed method was evaluated using three public challenge datasets: 3Dircadb1, SLIVER07, and Visceral Anatomy3. The results showed that our approach performs with promising efficiency, with an average of 35 s, and accuracy, with an average Dice similarity coefficient (DSC) of 0.94 ± 0.02, 0.96 ± 0.01, and 0.94 ± 0.02 for the 3Dircadb1, SLIVER07, and Anatomy3 training datasets, respectively. The DSCs of the SLIVER07 testing and Anatomy3 unseen testing datasets were 0.964 and 0.933, respectively. The proposed automatic approach achieves robust, accurate, and fast liver segmentation.

  20. A Modelling Framework for estimating Road Segment Based On-Board Vehicle Emissions

    International Nuclear Information System (INIS)

    Lin-Jun, Yu; Ya-Lan, Liu; Yu-Huan, Ren; Zhong-Ren, Peng; Meng, Liu Meng

    2014-01-01

    Traditional traffic emission inventory models aim to provide overall emissions at the regional level, which cannot meet planners' demand for detailed and accurate traffic emission information at the road segment level. Therefore, a road segment-based emission model for estimating light duty vehicle emissions is proposed, where floating car data (FCD) are used to collect information on road traffic conditions. The employed analysis framework consists of three major modules: the Average Speed and Average Acceleration Module (ASAAM), the Traffic Flow Estimation Module (TFEM), and the Traffic Emission Module (TEM). The ASAAM is used to obtain the average speed and the average acceleration of the fleet on each road segment using FCD. The TFEM is designed to estimate the traffic flow of each road segment in a given period, based on the speed-flow relationship and the spatial distribution of traffic flow. Finally, the TEM estimates emissions from each road segment, based on the results of the previous two modules. Hourly on-road light-duty vehicle emissions for each road segment in Shenzhen's traffic network are obtained using this analysis framework. The temporal-spatial distribution patterns of road segment pollutant emissions are also summarized. The results show that high-emission road segments cluster in several important regions of Shenzhen, and that road segments emit more during rush hours than in other periods. The presented case study demonstrates that the proposed approach is feasible and easy to use, helping planners make informed decisions by providing detailed road segment-based emission information.

  1. A NEW ELECTRON-DENSITY MODEL FOR ESTIMATION OF PULSAR AND FRB DISTANCES

    Energy Technology Data Exchange (ETDEWEB)

    Yao, J. M.; Wang, N. [Xinjiang Astronomical Observatory, Chinese Academy of Sciences, 150, Science 1-Street, Urumqi, Xinjiang 830011 (China); Manchester, R. N. [CSIRO Astronomy and Space Science, Australia Telescope National Facility, P.O. Box 76, Epping NSW 1710 (Australia)

    2017-01-20

    We present a new model for the distribution of free electrons in the Galaxy, the Magellanic Clouds, and the intergalactic medium (IGM) that can be used to estimate distances to real or simulated pulsars and fast radio bursts (FRBs) based on their dispersion measure (DM). The Galactic model has an extended thick disk representing the so-called warm interstellar medium, a thin disk representing the Galactic molecular ring, spiral arms based on a recent fit to Galactic H ii regions, a Galactic Center disk, and seven local features including the Gum Nebula, Galactic Loop I, and the Local Bubble. An offset of the Sun from the Galactic plane and a warp of the outer Galactic disk are included in the model. Parameters of the Galactic model are determined by fitting to 189 pulsars with independently determined distances and DMs. Simple models are used for the Magellanic Clouds and the IGM. Galactic model distances are within the uncertainty range for 86 of the 189 independently determined distances and within 20% of the nearest limit for a further 38 pulsars. We estimate that 95% of predicted Galactic pulsar distances will have a relative error of less than a factor of 0.9. The predictions of YMW16 are compared to those of the TC93 and NE2001 models showing that YMW16 performs significantly better on all measures. Timescales for pulse broadening due to interstellar scattering are estimated for (real or simulated) Galactic and Magellanic Cloud pulsars and FRBs.
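
    Distance estimation from DM is, at heart, the inversion of a line-of-sight integral, DM = integral of n_e dl. The toy sketch below steps outward through a uniform electron density until the target DM is reached; it illustrates only the integration principle, not YMW16's multi-component Galactic model, and all values are placeholders.

        def dm_to_distance(dm_target, ne_of_path, dl=1.0, max_dist=30000.0):
            # Accumulate DM (pc cm^-3) in steps of dl (pc) until it reaches
            # the target; the returned distance is in pc.
            dm, d = 0.0, 0.0
            while dm < dm_target and d < max_dist:
                dm += ne_of_path(d) * dl
                d += dl
            return d

        # Uniform 0.03 cm^-3 "disk" -- a stand-in for a real density model.
        print(dm_to_distance(30.0, lambda d: 0.03))      # ~1000 pc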

  2. A NEW ELECTRON-DENSITY MODEL FOR ESTIMATION OF PULSAR AND FRB DISTANCES

    International Nuclear Information System (INIS)

    Yao, J. M.; Wang, N.; Manchester, R. N.

    2017-01-01

    We present a new model for the distribution of free electrons in the Galaxy, the Magellanic Clouds, and the intergalactic medium (IGM) that can be used to estimate distances to real or simulated pulsars and fast radio bursts (FRBs) based on their dispersion measure (DM). The Galactic model has an extended thick disk representing the so-called warm interstellar medium, a thin disk representing the Galactic molecular ring, spiral arms based on a recent fit to Galactic H ii regions, a Galactic Center disk, and seven local features including the Gum Nebula, Galactic Loop I, and the Local Bubble. An offset of the Sun from the Galactic plane and a warp of the outer Galactic disk are included in the model. Parameters of the Galactic model are determined by fitting to 189 pulsars with independently determined distances and DMs. Simple models are used for the Magellanic Clouds and the IGM. Galactic model distances are within the uncertainty range for 86 of the 189 independently determined distances and within 20% of the nearest limit for a further 38 pulsars. We estimate that 95% of predicted Galactic pulsar distances will have a relative error of less than a factor of 0.9. The predictions of YMW16 are compared to those of the TC93 and NE2001 models showing that YMW16 performs significantly better on all measures. Timescales for pulse broadening due to interstellar scattering are estimated for (real or simulated) Galactic and Magellanic Cloud pulsars and FRBs.

  3. Calculational model based on influence function method for power distribution and control rod worth in fast reactors

    International Nuclear Information System (INIS)

    Toshio, S.; Kazuo, A.

    1983-01-01

    A model for calculating the power distribution and the control rod worth in fast reactors has been developed. This model is based on the influence function method. The characteristics of the model are as follows: 1. Influence functions for any changes in the control rod insertion ratio are expressed by using an influence function for an appropriate control rod insertion in order to reduce the computer memory size required for the method. 2. A control rod worth is calculated on the basis of a one-group approximation in which cross sections are generated by bilinear (flux-adjoint) weighting, not the usual flux weighting, in order to reduce the collapse error. 3. An effective neutron multiplication factor is calculated by adjoint weighting in order to reduce the effect of the error in the one-group flux distribution. The results obtained in numerical examinations of a prototype fast reactor indicate that this method is suitable for on-line core performance evaluation because of a short computing time and a small memory size.

  4. A Fast Soft Bit Error Rate Estimation Method

    Directory of Open Access Journals (Sweden)

    Ait-Idir Tarik

    2010-01-01

    We have suggested in a previous publication a method to estimate the Bit Error Rate (BER) of a digital communications system instead of using the famous Monte Carlo (MC) simulation. This method was based on the estimation of the probability density function (pdf) of soft observed samples. The kernel method was used for the pdf estimation. In this paper, we suggest using a Gaussian Mixture (GM) model. The Expectation Maximisation algorithm is used to estimate the parameters of this mixture. The optimal number of Gaussians is computed using mutual information theory. The analytical expression of the BER is therefore simply given by using the different estimated parameters of the Gaussian Mixture. Simulation results are presented to compare the three mentioned methods: Monte Carlo, Kernel and Gaussian Mixture. We analyze the performance of the proposed BER estimator in the framework of a multiuser code division multiple access system and show that attractive performance is achieved compared with conventional MC or kernel-aided techniques. The results show that the GM method can drastically reduce the number of samples needed to estimate the BER, and hence the required simulation run-time, even at very low BER.
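
    Once the mixture parameters are estimated, the BER becomes a weighted sum of Gaussian tail probabilities rather than a count of Monte Carlo errors. The sketch below assumes soft samples for a transmitted '+1' bit with a decision threshold at zero, and fixes the number of components instead of selecting it by mutual information as the paper does.

        import numpy as np
        from scipy.stats import norm
        from sklearn.mixture import GaussianMixture

        def gm_ber(soft_samples, n_components=2):
            # P(error) = P(sample < 0) = sum_k w_k * Phi(-mu_k / sigma_k).
            gm = GaussianMixture(n_components=n_components).fit(
                soft_samples.reshape(-1, 1))
            mu = gm.means_.ravel()
            sigma = np.sqrt(gm.covariances_.ravel())
            return float(np.sum(gm.weights_ * norm.cdf(-mu / sigma)))

        samples = 1.0 + 0.5 * np.random.randn(5000)      # synthetic soft outputs
        print(gm_ber(samples))                           # near norm.cdf(-2) ~ 0.023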

  5. Facial motion parameter estimation and error criteria in model-based image coding

    Science.gov (United States)

    Liu, Yunhai; Yu, Lu; Yao, Qingdong

    2000-04-01

    Model-based image coding has been given extensive attention due to its high subjective image quality and low bit-rates. However, the estimation of object motion parameters is still a difficult problem, and there are no proper error criteria for quality assessment that are consistent with visual properties. This paper presents an algorithm for facial motion parameter estimation based on feature point correspondence and gives motion parameter error criteria. The facial motion model comprises three parts. The first part is the global 3-D rigid motion of the head, the second part is non-rigid translation motion in the jaw area, and the third part consists of local non-rigid expression motion in the eye and mouth areas. The feature points are automatically selected by a function of edges, brightness, and end-nodes outside the blocks of the eyes and mouth. The number of feature points is adjusted adaptively. The jaw translation motion is tracked by the changes in the feature point positions of the jaw. The areas of non-rigid expression motion can be rebuilt by using a block-pasting method. An approach for estimating motion parameter error based on the quality of the reconstructed image is suggested, and an area error function and an error function of the contour transition-turn rate are used as quality criteria. The criteria properly reflect the image geometric distortion caused by errors in the estimated motion parameters.

  6. An estimation framework for building information modeling (BIM)-based demolition waste by type.

    Science.gov (United States)

    Kim, Young-Chan; Hong, Won-Hwa; Park, Jae-Woo; Cha, Gi-Wook

    2017-12-01

    Most existing studies on demolition waste (DW) quantification do not have an official standard to estimate the amount and type of DW. Therefore, there are limitations in the existing literature for estimating DW with a consistent classification system. Building information modeling (BIM) is a technology that can generate and manage all the information required during the life cycle of a building, from design to demolition. Nevertheless, there has been a lack of research regarding its application to the demolition stage of a building. For an effective waste management plan, the estimation of the type and volume of DW should begin from the building design stage. However, the lack of tools hinders early estimation. This study proposes a BIM-based framework that estimates DW in the early design stages, to achieve effective and streamlined planning, processing, and management. Specifically, the construction materials in the Korean construction classification system were matched with those in the BIM library. Based on this matching, the estimates of DW by type were calculated by applying weight/unit volume factors and the rates of DW volume change. To verify the framework, its operation was demonstrated by means of an actual BIM model, and its results were compared with those available in the literature. This study is expected to contribute not only to the estimation of DW at the building level, but also to the automated estimation of DW at the district level.
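
    The per-material arithmetic implied by the framework is a volume take-off scaled by two factors, as in the sketch below; the concrete figures are hypothetical, not values from the paper.

        def demolition_waste(volume_m3, unit_weight_t_per_m3, volume_change_rate):
            # BIM take-off volume -> mass via a weight/unit-volume factor,
            # and a bulked post-demolition volume via the volume change rate.
            mass_t = volume_m3 * unit_weight_t_per_m3
            loose_volume_m3 = volume_m3 * volume_change_rate
            return mass_t, loose_volume_m3

        # Hypothetical concrete element: 120 m^3 modeled, 2.4 t/m^3, 40 % bulking.
        print(demolition_waste(120.0, 2.4, 1.4))         # (288.0 t, 168.0 m^3)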

  7. Enhanced Model for Fast Ignition

    Energy Technology Data Exchange (ETDEWEB)

    Mason, Rodney J. [Research Applications Corporation, Los Alamos, NM (United States)

    2010-10-12

    Laser Fusion is a prime candidate for alternate energy production, capable of serving a major portion of the nation's energy needs, once fusion fuel can be readily ignited. Fast Ignition may well speed achievement of this goal, by reducing net demands on laser pulse energy and timing precision. However, Fast Ignition has presented a major challenge to modeling. This project has enhanced the computer code ePLAS for the simulation of the many specialized phenomena, which arise with Fast Ignition. The improved code has helped researchers to understand better the consequences of laser absorption, energy transport, and laser target hydrodynamics. ePLAS uses efficient implicit methods to acquire solutions for the electromagnetic fields that govern the accelerations of electrons and ions in targets. In many cases, the code implements fluid modeling for these components. These combined features, "implicitness and fluid modeling," can greatly facilitate calculations, permitting the rapid scoping and evaluation of experiments. ePLAS can be used on PCs, Macs and Linux machines, providing researchers and students with rapid results. This project has improved the treatment of electromagnetics, hydrodynamics, and atomic physics in the code. It has simplified output graphics, and provided new input that avoids the need for source code access by users. The improved code can now aid university, business and national laboratory users in pursuit of an early path to success with Fast Ignition.

  8. A Consistent Methodology Based Parameter Estimation for a Lactic Acid Bacteria Fermentation Model

    DEFF Research Database (Denmark)

    Spann, Robert; Roca, Christophe; Kold, David

    2017-01-01

    Lactic acid bacteria are used in many industrial applications, e.g. as starter cultures in the dairy industry or as probiotics, and research on their cell production is strongly needed. A first principles kinetic model was developed to describe and understand the biological, physical, and chemical...... mechanisms in a lactic acid bacteria fermentation. We present here a consistent approach for a methodology-based parameter estimation for a lactic acid fermentation. In the beginning, just an initial knowledge-based guess of parameters was available and an initial parameter estimation of the complete set...... of parameters was performed in order to get a good model fit to the data. However, not all parameters are identifiable with the given data set and model structure. Sensitivity, identifiability, and uncertainty analysis were completed and a relevant identifiable subset of parameters was determined for a new....

  9. Estimation of the applicability domain of kernel-based machine learning models for virtual screening

    Directory of Open Access Journals (Sweden)

    Fechner Nikolas

    2010-03-01

    Background: The virtual screening of large compound databases is an important application of structural-activity relationship models. Due to the high structural diversity of these data sets, it is impossible for machine learning based QSAR models, which rely on a specific training set, to give reliable results for all compounds. Thus, it is important to consider the subset of the chemical space in which the model is applicable. The approaches to this problem that have been published so far mostly use vectorial descriptor representations to define this domain of applicability of the model. Unfortunately, these cannot be extended easily to structured kernel-based machine learning models. For this reason, we propose three approaches to estimate the domain of applicability of a kernel-based QSAR model. Results: We evaluated three kernel-based applicability domain estimations using three different structured kernels on three virtual screening tasks. Each experiment consisted of the training of a kernel-based QSAR model using support vector regression and the ranking of a disjoint screening data set according to the predicted activity. For each prediction, the applicability of the model for the respective compound is quantitatively described using a score obtained by an applicability domain formulation. The suitability of the applicability domain estimation is evaluated by comparing the model performance on the subsets of the screening data sets obtained by different thresholds for the applicability scores. This comparison indicates that it is possible to separate the part of the chemspace, in which the model gives reliable predictions, from the part consisting of structures too dissimilar to the training set to apply the model successfully. A closer inspection reveals that the virtual screening performance of the model is considerably improved if half of the molecules, those with the lowest applicability scores, are omitted from the screening

  10. Estimation of the applicability domain of kernel-based machine learning models for virtual screening.

    Science.gov (United States)

    Fechner, Nikolas; Jahn, Andreas; Hinselmann, Georg; Zell, Andreas

    2010-03-11

    The virtual screening of large compound databases is an important application of structural-activity relationship models. Due to the high structural diversity of these data sets, it is impossible for machine learning based QSAR models, which rely on a specific training set, to give reliable results for all compounds. Thus, it is important to consider the subset of the chemical space in which the model is applicable. The approaches to this problem that have been published so far mostly use vectorial descriptor representations to define this domain of applicability of the model. Unfortunately, these cannot be extended easily to structured kernel-based machine learning models. For this reason, we propose three approaches to estimate the domain of applicability of a kernel-based QSAR model. We evaluated three kernel-based applicability domain estimations using three different structured kernels on three virtual screening tasks. Each experiment consisted of the training of a kernel-based QSAR model using support vector regression and the ranking of a disjoint screening data set according to the predicted activity. For each prediction, the applicability of the model for the respective compound is quantitatively described using a score obtained by an applicability domain formulation. The suitability of the applicability domain estimation is evaluated by comparing the model performance on the subsets of the screening data sets obtained by different thresholds for the applicability scores. This comparison indicates that it is possible to separate the part of the chemspace, in which the model gives reliable predictions, from the part consisting of structures too dissimilar to the training set to apply the model successfully. A closer inspection reveals that the virtual screening performance of the model is considerably improved if half of the molecules, those with the lowest applicability scores, are omitted from the screening. The proposed applicability domain formulations
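
    One simple way to score applicability, consistent with the idea of ranking compounds by similarity to the training set, is sketched below: the mean kernel similarity of each screening compound to its k most similar training compounds. The kernel matrix here is a random placeholder; any of the structured kernels could supply it, and this is not the paper's exact formulation.

        import numpy as np

        def applicability_score(k_test_train, k=5):
            # Mean of each row's k largest kernel similarities.
            top_k = np.sort(k_test_train, axis=1)[:, -k:]
            return top_k.mean(axis=1)

        K = np.random.rand(100, 40)                      # placeholder kernel matrix
        scores = applicability_score(K)
        keep = scores >= np.median(scores)               # screen the more applicable half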

  11. Estimation of Antarctic Land-Fast Sea Ice Algal Biomass and Snow Thickness From Under-Ice Radiance Spectra in Two Contrasting Areas

    Science.gov (United States)

    Wongpan, P.; Meiners, K. M.; Langhorne, P. J.; Heil, P.; Smith, I. J.; Leonard, G. H.; Massom, R. A.; Clementson, L. A.; Haskell, T. G.

    2018-03-01

    Fast ice is an important component of Antarctic coastal marine ecosystems, providing a prolific habitat for ice algal communities. This work examines the relationships between normalized difference indices (NDI) calculated from under-ice radiance measurements and sea ice algal biomass and snow thickness for Antarctic fast ice. While this technique has been calibrated to assess biomass in Arctic fast ice and pack ice, as well as Antarctic pack ice, relationships are currently lacking for Antarctic fast ice characterized by bottom ice algae communities with high algal biomass. We analyze measurements along transects at two contrasting Antarctic fast ice sites in terms of platelet ice presence: near and distant from an ice shelf, i.e., in McMurdo Sound and off Davis Station, respectively. Snow and ice thickness, and ice salinity and temperature measurements support our paired in situ optical and biological measurements. Analyses show that NDI wavelength pairs near the first chlorophyll a (chl a) absorption peak (≈440 nm) explain up to 70% of the total variability in algal biomass. Eighty-eight percent of snow thickness variability is explained using an NDI with a wavelength pair of 648 and 567 nm. Accounting for pigment packaging effects by including the ratio of chl a-specific absorption coefficients improved the NDI-based algal biomass estimation only slightly. Our new observation-based algorithms can be used to estimate Antarctic fast ice algal biomass and snow thickness noninvasively, for example, by using moored sensors (time series) or mapping their spatial distributions using underwater vehicles.
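
    The NDI referred to above is simply a normalized band ratio of under-ice radiance at two wavelengths. Below is a rough illustrative sketch only: it computes an NDI from placeholder spectra and fits a linear calibration against synthetic snow-thickness values. The 648/567 nm pair is the one quoted in the abstract; all data and the regression form are invented.

        import numpy as np

        def ndi(spectra, wavelengths, lam1, lam2):
            # Normalized difference index between two wavelengths (nm)
            i1 = np.argmin(np.abs(wavelengths - lam1))
            i2 = np.argmin(np.abs(wavelengths - lam2))
            r1, r2 = spectra[..., i1], spectra[..., i2]
            return (r1 - r2) / (r1 + r2)

        wl = np.arange(400.0, 701.0)                   # wavelength grid, nm
        rng = np.random.default_rng(0)
        spectra = 0.5 + rng.random((50, wl.size))      # placeholder radiance spectra
        x = ndi(spectra, wl, 648.0, 567.0)             # pair quoted for snow thickness

        snow = 0.2 - 0.5 * x + 0.01 * rng.standard_normal(x.size)   # synthetic truth, m
        slope, intercept = np.polyfit(x, snow, 1)
        print(f"snow_thickness ~ {slope:.3f} * NDI + {intercept:.3f}")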

  12. FAST goes underground

    International Nuclear Information System (INIS)

    Fridlund, P.S.

    1985-01-01

    The FAST-M Cost Estimating Model is a parametric model designed to determine the costs associated with mining and subterranean operations. It is part of the FAST (Freiman Analysis of Systems Techniques) series of parametric models developed by Freiman Parametric Systems, Inc. The rising cost of fossil fuels has created a need for a method which could be used to determine and control costs in mining and subterranean operations. FAST-M fills this need and also provides scheduling information. The model works equally well for a variety of situations including underground vaults for hazardous waste storage, highway tunnels, and mass transit tunnels. In addition, costs for above ground structures and equipment can be calculated. The input for the model may be on a macro or a micro level. This allows the model to be used at various stages in a project. On the macro level, only general conditions and specifications need to be known. On the micro level, the smallest details may be included. As with other FAST models, reference cases are used to more accurately predict costs and scheduling. This paper will address how the model can be used for a variety of subterranean purposes.

  13. The Fast Theater Model (FATHM)

    National Research Council Canada - National Science Library

    Brown, Gerald G; Washburn, Alan R

    2002-01-01

    The Fast Theater Model (FATHM) is an aggregated joint theater combat model that fuses Air Force Air-to-Ground attack sortie optimization with Ground-to-Ground deterministic Lanchester fire-exchange battles using attrition rates...

  14. A fast identification algorithm for Box-Cox transformation based radial basis function neural network.

    Science.gov (United States)

    Hong, Xia

    2006-07-01

    In this letter, a Box-Cox transformation-based radial basis function (RBF) neural network is introduced using the RBF neural network to represent the transformed system output. Initially a fixed and moderate sized RBF model base is derived based on a rank revealing orthogonal matrix triangularization (QR decomposition). Then a new fast identification algorithm is introduced using Gauss-Newton algorithm to derive the required Box-Cox transformation, based on a maximum likelihood estimator. The main contribution of this letter is to explore the special structure of the proposed RBF neural network for computational efficiency by utilizing the inverse of matrix block decomposition lemma. Finally, the Box-Cox transformation-based RBF neural network, with good generalization and sparsity, is identified based on the derived optimal Box-Cox transformation and a D-optimality-based orthogonal forward regression algorithm. The proposed algorithm and its efficacy are demonstrated with an illustrative example in comparison with support vector machine regression.
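
    For context, the Box-Cox transform that the letter builds on maps a positive response y to (y**lmbda - 1)/lmbda (log y at lmbda = 0), with lmbda chosen by maximum likelihood. A minimal sketch using SciPy's stock MLE routine rather than the letter's Gauss-Newton procedure; the data are synthetic:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        y = rng.lognormal(mean=1.0, sigma=0.5, size=200)   # positive, skewed "system output"

        # Maximum-likelihood Box-Cox transform of the output
        y_bc, lmbda = stats.boxcox(y)
        print(f"estimated lambda = {lmbda:.3f}")
        # y_bc is now roughly Gaussian and could be modeled by an RBF network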

  15. Fast filtering algorithm based on vibration systems and neural information exchange and its application to micro motion robot

    International Nuclear Information System (INIS)

    Gao Wa; Zha Fu-Sheng; Li Man-Tian; Song Bao-Yu

    2014-01-01

    This paper develops a fast filtering algorithm based on vibration systems theory and a neural information exchange approach. Its characteristics, including the derivation process and parameter analysis, are discussed, and its feasibility and effectiveness are verified by comparing its filtering performance with that of various filtering methods, such as the fast wavelet transform algorithm, the particle filtering method and our previously developed single degree of freedom vibration system filtering algorithm, in both simulation and practical tests. The comparisons indicate that a significant advantage of the proposed fast filtering algorithm is its extremely fast filtering speed with good filtering performance. Further, the developed fast filtering algorithm is applied to the navigation and positioning system of a micro motion robot, which places a high real-time demand on signal preprocessing. The preprocessed data are then used to estimate the heading angle error and the attitude angle error of the micro motion robot. The estimation experiments illustrate the high practicality of the proposed fast filtering algorithm. (general)

  16. Fast Bayesian Inference in Dirichlet Process Mixture Models.

    Science.gov (United States)

    Wang, Lianming; Dunson, David B

    2011-01-01

    There has been increasing interest in applying Bayesian nonparametric methods in large samples and high dimensions. As Markov chain Monte Carlo (MCMC) algorithms are often infeasible, there is a pressing need for much faster algorithms. This article proposes a fast approach for inference in Dirichlet process mixture (DPM) models. Viewing the partitioning of subjects into clusters as a model selection problem, we propose a sequential greedy search algorithm for selecting the partition. Then, when conjugate priors are chosen, the resulting posterior conditionally on the selected partition is available in closed form. This approach allows testing of parametric models versus nonparametric alternatives based on Bayes factors. We evaluate the approach using simulation studies and compare it with four other fast nonparametric methods in the literature. We apply the proposed approach to three datasets including one from a large epidemiologic study. Matlab codes for the simulation and data analyses using the proposed approach are available online in the supplemental materials.

  17. Model methodology for estimating pesticide concentration extremes based on sparse monitoring data

    Science.gov (United States)

    Vecchia, Aldo V.

    2018-03-22

    This report describes a new methodology for using sparse (weekly or less frequent observations) and potentially highly censored pesticide monitoring data to simulate daily pesticide concentrations and associated quantities used for acute and chronic exposure assessments, such as the annual maximum daily concentration. The new methodology is based on a statistical model that expresses log-transformed daily pesticide concentration in terms of a seasonal wave, flow-related variability, long-term trend, and serially correlated errors. Methods are described for estimating the model parameters, generating conditional simulations of daily pesticide concentration given sparse (weekly or less frequent) and potentially highly censored observations, and estimating concentration extremes based on the conditional simulations. The model can be applied to datasets with as few as 3 years of record, as few as 30 total observations, and as few as 10 uncensored observations. The model was applied to atrazine, carbaryl, chlorpyrifos, and fipronil data for U.S. Geological Survey pesticide sampling sites with sufficient data for applying the model. A total of 112 sites were analyzed for atrazine, 38 for carbaryl, 34 for chlorpyrifos, and 33 for fipronil. The results are summarized in this report, and R functions, described in this report and provided in an accompanying model archive, can be used to fit the model parameters and generate conditional simulations of daily concentrations for use in investigations involving pesticide exposure risk and uncertainty.
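
    To make the model structure concrete, a heavily simplified sketch follows: log concentration regressed on an intercept, a linear trend, and one annual harmonic (the seasonal wave), then evaluated on a daily grid. The flow-related term, censoring handling, and serially correlated errors of the actual methodology are all omitted, and the data are synthetic.

        import numpy as np

        rng = np.random.default_rng(2)
        t = np.sort(rng.uniform(0.0, 3.0, size=40))      # sparse sampling, decimal years
        logc = 0.3*np.sin(2*np.pi*t) + 0.1*t - 1.0 + 0.2*rng.standard_normal(t.size)

        # Design matrix: intercept, trend, one annual harmonic (seasonal wave)
        X = np.column_stack([np.ones_like(t), t, np.sin(2*np.pi*t), np.cos(2*np.pi*t)])
        beta, *_ = np.linalg.lstsq(X, logc, rcond=None)

        # Evaluate the fitted mean on a daily grid to look at concentration extremes
        td = np.arange(0.0, 3.0, 1/365.25)
        Xd = np.column_stack([np.ones_like(td), td, np.sin(2*np.pi*td), np.cos(2*np.pi*td)])
        daily = Xd @ beta
        print("max fitted daily log-concentration:", daily.max())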

  18. Data Sources for the Model-based Small Area Estimates of Cancer Risk Factors and Screening Behaviors - Small Area Estimates

    Science.gov (United States)

    The model-based estimates of important cancer risk factors and screening behaviors are obtained by combining the responses to the Behavioral Risk Factor Surveillance System (BRFSS) and the National Health Interview Survey (NHIS).

  19. Uncertainties in neural network model based on carbon dioxide concentration for occupancy estimation

    Energy Technology Data Exchange (ETDEWEB)

    Alam, Azimil Gani; Rahman, Haolia; Kim, Jung-Kyung; Han, Hwataik [Kookmin University, Seoul (Korea, Republic of)

    2017-05-15

    Demand control ventilation is employed to save energy by adjusting airflow rate according to the ventilation load of a building. This paper investigates a method for occupancy estimation by using a dynamic neural network model based on carbon dioxide concentration in an occupied zone. The method can be applied to most commercial and residential buildings where human effluents are to be ventilated. An indoor simulation program CONTAMW is used to generate indoor CO{sub 2} data corresponding to various occupancy schedules and airflow patterns to train neural network models. Coefficients of variation are obtained depending on the complexities of the physical parameters as well as the system parameters of neural networks, such as the numbers of hidden neurons and tapped delay lines. We intend to identify the uncertainties caused by the model parameters themselves, by excluding uncertainties in input data inherent in measurement. Our results show estimation accuracy is highly influenced by the frequency of occupancy variation but not significantly influenced by fluctuation in the airflow rate. Furthermore, we discuss the applicability and validity of the present method based on passive environmental conditions for estimating occupancy in a room from the viewpoint of demand control ventilation applications.

  20. Evaluation of fasting plasma insulin concentration as an estimate of insulin action in nondiabetic individuals: comparison with the homeostasis model assessment of insulin resistance (HOMA-IR).

    Science.gov (United States)

    Abbasi, Fahim; Okeke, QueenDenise; Reaven, Gerald M

    2014-04-01

    Insulin-mediated glucose disposal varies severalfold in apparently healthy individuals, and approximately one-third of the most insulin resistant of these individuals is at increased risk to develop various adverse clinical syndromes. Since direct measurements of insulin sensitivity are not practical in a clinical setting, several surrogate estimates of insulin action have been proposed, including fasting plasma insulin (FPI) concentration and the homeostasis model assessment of insulin resistance (HOMA-IR) calculated by a formula employing fasting plasma glucose (FPG) and FPI concentrations. The objective of this study was to compare FPI as an estimate of insulin-mediated glucose disposal with values generated by HOMA-IR in 758 apparently healthy nondiabetic individuals. Measurements were made of FPG, FPI, triglyceride (TG), and high-density lipoprotein cholesterol (HDL-C) concentrations, and insulin-mediated glucose uptake was quantified by determining steady-state plasma glucose (SSPG) concentration during the insulin suppression test. FPI and HOMA-IR were highly correlated (r = 0.98, P < 0.001), and each was comparably correlated with SSPG concentration (r = 0.64). Furthermore, the relationship between FPI and TG (r = 0.35) and HDL-C (r = -0.40) was comparable to that between HOMA-IR and TG (r = 0.39) and HDL-C (r = -0.41). In conclusion, FPI and HOMA-IR are highly correlated in nondiabetic individuals, with each estimate accounting for ~40% of the variability (variance) in a direct measure of insulin-mediated glucose disposal. Calculation of HOMA-IR does not provide a better surrogate estimate of insulin action, or of its associated dyslipidemia, than measurement of FPI.
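
    For reference, the HOMA-IR value compared above is a one-line calculation from the two fasting measurements; the constants 405 (mg/dL units) and 22.5 (mmol/L units) are the standard published ones, and the example input is invented:

        def homa_ir(fpg_mg_dl, fpi_uU_ml):
            # Homeostasis model assessment of insulin resistance:
            # fasting glucose [mg/dL] * fasting insulin [uU/mL] / 405
            # (equivalently, glucose [mmol/L] * insulin [uU/mL] / 22.5)
            return fpg_mg_dl * fpi_uU_ml / 405.0

        print(homa_ir(95.0, 10.0))   # ~2.35 for an illustrative nondiabetic subject

    Because FPG varies little among nondiabetic individuals, the product is dominated by FPI, which is consistent with the near-perfect FPI/HOMA-IR correlation reported above.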

  1. The relative pose estimation of aircraft based on contour model

    Science.gov (United States)

    Fu, Tai; Sun, Xiangyi

    2017-02-01

    This paper proposes a relative pose estimation approach based on an object contour model. The first step is to obtain two-dimensional (2D) projections of the three-dimensional (3D) model-based target, which are divided into 40 forms by clustering and LDA analysis. We then extract the target contour in each image and compute its Pseudo-Zernike Moments (PZM), so that a model library is constructed in an offline mode. Next, from the model library we select the projection contour that most resembles the target silhouette in the current image, using the PZM as the matching criterion; similarity transformation parameters are then generated by applying shape context matching to the silhouette sampling locations, from which the identification parameters of the target can be derived. The identification parameters are converted to relative pose parameters, which serve as the initial values for an iterative refinement algorithm, since they lie in the neighborhood of the actual ones. Finally, Distance Image Iterative Least Squares (DI-ILS) is employed to obtain the final relative pose parameters.

  2. A case study to estimate costs using Neural Networks and regression based models

    Directory of Open Access Journals (Sweden)

    Nadia Bhuiyan

    2012-07-01

    Full Text Available Bombardier Aerospace's high performance aircraft and services set the standard for the aerospace industry. A case study in collaboration with Bombardier Aerospace is conducted in order to estimate the target cost of a landing gear. More precisely, the study uses both a parametric model and neural network models to estimate the cost of main landing gears, a major aircraft commodity. A comparative analysis between the parametric model and the neural network models is carried out to determine the more accurate method for predicting the cost of a main landing gear. Several trials are presented for the design and use of the neural network model. The analysis for the case under study shows the flexibility in the design of the neural network model. Furthermore, the performance of the neural network model is deemed superior to that of the parametric models for this case study.

  3. Structural observability analysis and EKF based parameter estimation of building heating models

    Directory of Open Access Journals (Sweden)

    D.W.U. Perera

    2016-07-01

    Full Text Available Research on enhanced energy-efficient buildings has received much recognition in recent years owing to buildings' high energy consumption. Increasing energy needs can be precisely controlled by employing advanced controllers for building Heating, Ventilation, and Air-Conditioning (HVAC) systems. Advanced controllers require a mathematical building heating model to operate, and these models need to be accurate and computationally efficient. One main concern associated with such models is the accurate estimation of the unknown model parameters. This paper presents the feasibility of implementing a simplified building heating model and the computation of physical parameters using an off-line approach. Structural observability analysis is conducted using graph-theoretic techniques to analyze the observability of the developed system model. Then the Extended Kalman Filter (EKF) algorithm is utilized for parameter estimation using the real measurements of a single-zone building. The simulation-based results confirm that even with a simple model, the EKF follows the state variables accurately. The predicted parameters vary depending on the inputs and disturbances.
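
    As an illustration of the general idea (not the paper's building model, which is not reproduced here), the sketch below runs a joint state-and-parameter EKF on a toy single-zone heating equation dT/dt = a*(Tout - T) + b*Q, augmenting the state with the unknown parameter a; all numbers are invented:

        import numpy as np

        dt, a_true, b, n = 60.0, 1e-4, 5e-4, 500
        rng = np.random.default_rng(3)
        Tout, Q = np.full(n, 5.0), np.full(n, 1.0)
        T = np.empty(n); T[0] = 20.0
        for k in range(n - 1):
            T[k+1] = T[k] + dt * (a_true * (Tout[k] - T[k]) + b * Q[k])
        z = T + 0.05 * rng.standard_normal(n)        # noisy temperature measurements

        x = np.array([15.0, 5e-5])                   # initial guesses for [T, a]
        P = np.diag([1.0, 1e-8])
        Qn, Rn = np.diag([1e-4, 1e-14]), 0.05**2
        H = np.array([[1.0, 0.0]])
        for k in range(n - 1):
            Tk, ak = x                               # predict (EKF: linearize the a*T product)
            x = np.array([Tk + dt * (ak * (Tout[k] - Tk) + b * Q[k]), ak])
            F = np.array([[1.0 - dt * ak, dt * (Tout[k] - Tk)], [0.0, 1.0]])
            P = F @ P @ F.T + Qn
            S = H @ P @ H.T + Rn                     # update with measurement z[k+1]
            K = P @ H.T / S
            x = x + (K * (z[k+1] - x[0])).ravel()
            P = (np.eye(2) - K @ H) @ P
        print(f"estimated a = {x[1]:.2e} (true {a_true:.2e})")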

  4. Fast analytical model of MZI micro-opto-mechanical pressure sensor

    Science.gov (United States)

    Rochus, V.; Jansen, R.; Goyvaerts, J.; Neutens, P.; O’Callaghan, J.; Rottenberg, X.

    2018-06-01

    This paper presents a fast analytical procedure in order to design a micro-opto-mechanical pressure sensor (MOMPS) taking into account the mechanical nonlinearity and the optical losses. A realistic model of the photonic MZI is proposed, strongly coupled to a nonlinear mechanical model of the membrane. Based on the membrane dimensions, the residual stress, the position of the waveguide, the optical wavelength and the phase variation due to the opto-mechanical coupling, we derive an analytical model which allows us to predict the response of the total system. The effect of the nonlinearity and the losses on the total performance are carefully studied and measurements on fabricated devices are used to validate the model. Finally, a design procedure is proposed in order to realize fast design of this new type of pressure sensor.

  5. A Mixing Based Model for DME Combustion in Diesel Engines

    DEFF Research Database (Denmark)

    Bek, Bjarne H.; Sorenson, Spencer C.

    1998-01-01

    A series of studies has been conducted investigating the behavior of di-methyl ether (DME) fuel jets injected into quiescent combustion chambers. These studies have shown that it is possible to make a good estimate of the penetration of the jet based on existing correlations for diesel fuel, by using appropriate fuel properties. The results of the spray studies have been incorporated into a first generation model for DME combustion. The model is entirely based on physical mixing, where chemical processes have been assumed to be very fast in relation to mixing. The assumption was made

  6. A mixing based model for DME combustion in diesel engines

    DEFF Research Database (Denmark)

    Bek, Bjarne Hjort; Sorenson, Spencer C

    2001-01-01

    A series of studies has been conducted investigating the behavior of di-methyl ether (DME) fuel jets injected into quiescent combustion chambers. These studies have shown that it is possible to make a good estimate of the penetration of the jet based on existing correlations for diesel fuel, by using appropriate fuel properties. The results of the spray studies have been incorporated into a first generation model for DME combustion. The model is entirely based on physical mixing, where chemical processes have been assumed to be very fast in relation to mixing. The assumption was made

  7. A Fast Segmentation Algorithm for C-V Model Based on Exponential Image Sequence Generation

    Science.gov (United States)

    Hu, J.; Lu, L.; Xu, J.; Zhang, J.

    2017-09-01

    For island coastline segmentation, this paper proposes a fast segmentation algorithm for the C-V model based on exponential image sequence generation. An exponential multi-scale C-V model with level set inheritance and boundary inheritance is developed. The main contributions are as follows: 1) the problems of "holes" and "gaps" in coastline extraction are solved through small-scale shrinkage, low-pass filtering and area sorting of regions; 2) the initial values of the SDF (Signed Distance Function) and the level set are given by Otsu segmentation, based on the difference in SAR reflection between land and sea, so that they lie close to the coastline; 3) the computational complexity of the continuous transition between different scales is reduced by the SDF and level set inheritance. Experimental results show that the method accelerates the formation of the initial level set, shortens coastline extraction time, removes non-coastline bodies and improves the identification precision of the main coastline, automating the process of coastline segmentation.
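
    Contribution 2) above, initializing the level set from an Otsu land/sea split, can be sketched in a few lines. The synthetic "SAR" image and all parameters here are invented, and the multi-scale C-V evolution itself is not shown:

        import numpy as np
        from scipy import ndimage
        from skimage.filters import threshold_otsu

        def otsu_signed_distance(img):
            # SDF initialization: positive inside the bright (land) region, negative outside
            mask = img > threshold_otsu(img)
            inside = ndimage.distance_transform_edt(mask)
            outside = ndimage.distance_transform_edt(~mask)
            return inside - outside

        rng = np.random.default_rng(4)
        sar = rng.rayleigh(scale=1.0, size=(128, 128))   # speckle-like background
        sar[32:96, 32:96] += 3.0                         # synthetic bright "island"
        phi0 = otsu_signed_distance(sar)
        print(phi0.min(), phi0.max())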

  8. A kernel principal component analysis–based degradation model and remaining useful life estimation for the turbofan engine

    Directory of Open Access Journals (Sweden)

    Delong Feng

    2016-05-01

    Full Text Available Remaining useful life estimation of the prognostics and health management technique is a complicated and difficult research question for maintenance. In this article, we consider the problem of prognostics modeling and estimation of the turbofan engine under complicated circumstances and propose a kernel principal component analysis–based degradation model and remaining useful life estimation method for such aircraft engines. We first analyze the output data created by the turbofan engine thermodynamic simulation using the kernel principal component analysis method and then distinguish the qualitative and quantitative relationships between the key factors. Next, we build a degradation model for the engine fault based on the following assumptions: the engine has only constant failure (i.e., no sudden failure is included), and the engine has a Wiener process, which is a covariate standing for the engine system drift. To predict the remaining useful life of the turbofan engine, we built a health index based on the degradation model and used the method of maximum likelihood and the data from the thermodynamic simulation model to estimate the parameters of this degradation model. Through the data analysis, we obtained a trend model of the regression curve that fits the actual statistical data. Based on the predicted health index model and the data trend model, we estimate the remaining useful life of the aircraft engine as the index reaches zero. Finally, a case study involving engine simulation data demonstrates the precision and performance advantages of the proposed method: its precision reaches 98.9%, with an average precision of 95.8%.
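
    A minimal sketch of the first step, deriving a one-dimensional health index from multichannel degradation data with kernel PCA; the sensor data are placeholders for the thermodynamic-simulation outputs, and the Wiener-process degradation model and RUL extrapolation are not reproduced:

        import numpy as np
        from sklearn.decomposition import KernelPCA
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(5)
        t = np.linspace(0.0, 1.0, 300)                   # normalized engine life
        # Placeholder multi-sensor degradation data: common drift plus noise
        X = np.column_stack([t + 0.05 * rng.standard_normal(300) for _ in range(6)])

        Xs = StandardScaler().fit_transform(X)
        kpca = KernelPCA(n_components=1, kernel="rbf", gamma=0.5)
        health = kpca.fit_transform(Xs).ravel()
        health *= np.sign(np.corrcoef(health, t)[0, 1])  # orient index to grow with wear
        print("health index range:", health.min(), health.max())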

  9. Methodology for the Model-based Small Area Estimates of Cancer Risk Factors and Screening Behaviors - Small Area Estimates

    Science.gov (United States)

    This model-based approach uses data from both the Behavioral Risk Factor Surveillance System (BRFSS) and the National Health Interview Survey (NHIS) to produce estimates of the prevalence rates of cancer risk factors and screening behaviors at the state, health service area, and county levels.

  10. GLUE Based Uncertainty Estimation of Urban Drainage Modeling Using Weather Radar Precipitation Estimates

    DEFF Research Database (Denmark)

    Nielsen, Jesper Ellerbæk; Thorndahl, Søren Liedtke; Rasmussen, Michael R.

    2011-01-01

    Distributed weather radar precipitation measurements are used as rainfall input for an urban drainage model, to simulate the runoff from a small catchment of Denmark. It is demonstrated how the Generalized Likelihood Uncertainty Estimation (GLUE) methodology can be implemented and used to estimate...
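
    For orientation, GLUE in its simplest form amounts to Monte Carlo runs scored by an informal likelihood, a behavioural cut-off, and likelihood-weighted ensemble statistics. A minimal sketch with Nash-Sutcliffe efficiency as the likelihood measure; the runoff series, threshold and ensemble are all invented:

        import numpy as np

        def glue_weights(sim_ensemble, observed, threshold=0.5):
            # Informal likelihood: Nash-Sutcliffe efficiency of each Monte Carlo run
            mse = np.mean((sim_ensemble - observed) ** 2, axis=1)
            nse = 1.0 - mse / np.mean((observed - observed.mean()) ** 2)
            behavioural = nse > threshold                # behavioural cut-off
            w = np.where(behavioural, nse, 0.0)
            return w / w.sum(), behavioural

        rng = np.random.default_rng(6)
        obs = 1.5 + np.sin(np.linspace(0.0, 6.0, 100))           # synthetic runoff series
        sims = obs + 0.3 * rng.standard_normal((200, 100))       # placeholder parameter-set runs
        w, keep = glue_weights(sims, obs)
        print(f"{keep.sum()} behavioural runs out of {len(w)}")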

  11. Reliability Estimation of Aero-engine Based on Mixed Weibull Distribution Model

    Science.gov (United States)

    Yuan, Zhongda; Deng, Junxiang; Wang, Dawei

    2018-02-01

    An aero-engine is a complex mechanical-electronic system, and in the reliability analysis of such systems the Weibull distribution model plays an irreplaceable role. To date, only the two-parameter and three-parameter Weibull distribution models are widely used. Due to the diversity of engine failure modes, a single Weibull distribution model carries a large error. By contrast, a mixed Weibull distribution model can take a variety of engine failure modes into account, making it a good statistical analysis model. Beyond the concept of a dynamic weight coefficient, a three-parameter correlation coefficient optimization method is applied to enhance the Weibull distribution model and make the reliability estimates more accurate, greatly improving the precision of the mixed distribution reliability model. All of this favors the adoption of the Weibull distribution model in engineering applications.
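
    The mixed Weibull reliability function underlying the record above is a weighted sum of component reliabilities, R(t) = sum_i w_i * exp(-(t/eta_i)**beta_i). A minimal sketch with invented two-mode parameters; the dynamic weight coefficients and correlation-coefficient optimization of the paper are not shown:

        import numpy as np

        def mixed_weibull_reliability(t, weights, shapes, scales):
            # Weights must sum to one; each component can represent one failure mode
            t = np.asarray(t, dtype=float)
            R = np.zeros_like(t)
            for w, beta, eta in zip(weights, shapes, scales):
                R += w * np.exp(-(t / eta) ** beta)
            return R

        # Illustrative two-mode example (wear-in plus wear-out); numbers invented
        t = np.linspace(0.0, 2000.0, 5)
        print(mixed_weibull_reliability(t, [0.3, 0.7], [0.8, 3.0], [500.0, 1500.0]))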

  12. SU-E-J-01: 3D Fluoroscopic Image Estimation From Patient-Specific 4DCBCT-Based Motion Models

    International Nuclear Information System (INIS)

    Dhou, S; Hurwitz, M; Lewis, J; Mishra, P

    2014-01-01

    Purpose: 3D motion modeling derived from 4DCT images, taken days or weeks before treatment, cannot reliably represent patient anatomy on the day of treatment. We develop a method to generate motion models based on 4DCBCT acquired at the time of treatment, and apply the model to estimate 3D time-varying images (referred to as 3D fluoroscopic images). Methods: Motion models are derived through deformable registration between each 4DCBCT phase, and principal component analysis (PCA) on the resulting displacement vector fields. 3D fluoroscopic images are estimated based on cone-beam projections simulating kV treatment imaging. PCA coefficients are optimized iteratively through comparison of these cone-beam projections and projections estimated based on the motion model. Digital phantoms reproducing ten patient motion trajectories, and a physical phantom with regular and irregular motion derived from measured patient trajectories, are used to evaluate the method in terms of tumor localization, and the global voxel intensity difference compared to ground truth. Results: Experiments included: 1) assuming no anatomic or positioning changes between 4DCT and treatment time; and 2) simulating positioning and tumor baseline shifts at the time of treatment compared to 4DCT acquisition. 4DCBCT images were reconstructed from the anatomy as seen at treatment time. In case 1), the tumor localization error and the intensity differences in the ten patients were smaller using the 4DCT-based motion model, possibly due to superior image quality. In case 2), the tumor localization error and intensity differences were 2.85 and 0.15, respectively, using 4DCT-based motion models, and 1.17 and 0.10 using 4DCBCT-based models. 4DCBCT performed better due to its ability to reproduce daily anatomical changes. Conclusion: The study showed an advantage of 4DCBCT-based motion models in the context of 3D fluoroscopic image estimation. Positioning and tumor baseline shift uncertainties were mitigated by the 4DCBCT-based

  13. Mathematical modeling for corrosion environment estimation based on concrete resistivity measurement directly above reinforcement

    International Nuclear Information System (INIS)

    Lim, Young-Chul; Lee, Han-Seung; Noguchi, Takafumi

    2009-01-01

    This study aims to formulate a resistivity model whereby the concrete resistivity expressing the environment of steel reinforcement can be directly estimated and evaluated based on measurement immediately above reinforcement as a method of evaluating corrosion deterioration in reinforced concrete structures. It also aims to provide a theoretical ground for the feasibility of durability evaluation by electric non-destructive techniques with no need for chipping of cover concrete. This Resistivity Estimation Model (REM), which is a mathematical model using the mirror method, combines conventional four-electrode measurement of resistivity with geometric parameters including cover depth, bar diameter, and electrode intervals. This model was verified by estimation using this model at areas directly above reinforcement and resistivity measurement at areas unaffected by reinforcement in regard to the assessment of the concrete resistivity. Both results strongly correlated, proving the validity of this model. It is expected to be applicable to laboratory study and field diagnosis regarding reinforcement corrosion. (author)

  14. Application of fast orthogonal search to linear and nonlinear stochastic systems

    DEFF Research Database (Denmark)

    Chon, K H; Korenberg, M J; Holstein-Rathlou, N H

    1997-01-01

    Standard deterministic autoregressive moving average (ARMA) models consider prediction errors to be unexplainable noise sources. The accuracy of the estimated ARMA model parameters depends on producing minimum prediction errors. In this study, an accurate algorithm is developed for estimating linear and nonlinear stochastic ARMA model parameters by using a method known as fast orthogonal search, with an extended model containing prediction errors as part of the model estimation process. The extended algorithm uses fast orthogonal search in a two-step procedure in which deterministic terms

  15. Model-based estimation with boundary side information or boundary regularization [cardiac emission CT].

    Science.gov (United States)

    Chiao, P C; Rogers, W L; Fessler, J A; Clinthorne, N H; Hero, A O

    1994-01-01

    The authors have previously developed a model-based strategy for joint estimation of myocardial perfusion and boundaries using ECT (emission computed tomography). They have also reported difficulties with boundary estimation in low contrast and low count rate situations. Here they propose using boundary side information (obtainable from high resolution MRI and CT images) or boundary regularization to improve both perfusion and boundary estimation in these situations. To fuse boundary side information into the emission measurements, the authors formulate a joint log-likelihood function to include auxiliary boundary measurements as well as ECT projection measurements. In addition, they introduce registration parameters to align auxiliary boundary measurements with ECT measurements and jointly estimate these parameters with other parameters of interest from the composite measurements. In simulated PET O-15 water myocardial perfusion studies using a simplified model, the authors show that the joint estimation improves perfusion estimation performance and gives boundary alignment accuracy of <0.5 mm even at 0.2 million counts. They implement boundary regularization through formulating a penalized log-likelihood function. They also demonstrate in simulations that simultaneous regularization of the epicardial boundary and myocardial thickness gives comparable perfusion estimation accuracy with the use of boundary side information.

  16. FAST: FAST Analysis of Sequences Toolbox

    Directory of Open Access Journals (Sweden)

    Travis J. Lawrence

    2015-05-01

    Full Text Available FAST (FAST Analysis of Sequences Toolbox) provides simple, powerful open source command-line tools to filter, transform, annotate and analyze biological sequence data. Modeled after the GNU (GNU’s Not Unix) Textutils such as grep, cut, and tr, FAST tools such as fasgrep, fascut, and fastr make it easy to rapidly prototype expressive bioinformatic workflows in a compact and generic command vocabulary. Compact combinatorial encoding of data workflows with FAST commands can simplify the documentation and reproducibility of bioinformatic protocols, supporting better transparency in biological data science. Interface self-consistency and conformity with conventions of GNU, Matlab, Perl, BioPerl, R and GenBank help make FAST easy and rewarding to learn. FAST automates numerical, taxonomic, and text-based sorting, selection and transformation of sequence records and alignment sites based on content, index ranges, descriptive tags, annotated features, and in-line calculated analytics, including composition and codon usage. Automated content- and feature-based extraction of sites and support for molecular population genetic statistics makes FAST useful for molecular evolutionary analysis. FAST is portable, easy to install and secure thanks to the relative maturity of its Perl and BioPerl foundations, with stable releases posted to CPAN. Development as well as a publicly accessible Cookbook and Wiki are available on the FAST GitHub repository at https://github.com/tlawrence3/FAST. The default data exchange format in FAST is Multi-FastA (specifically, a restriction of BioPerl FastA format). Sanger and Illumina 1.8+ FastQ formatted files are also supported. FAST makes it easier for non-programmer biologists to interactively investigate and control biological data at the speed of thought.

  17. Basis expansion model for channel estimation in LTE-R communication system

    Directory of Open Access Journals (Sweden)

    Ling Deng

    2016-05-01

    Full Text Available This paper investigates fast time-varying channel estimation in LTE-R communication systems. The Basis Expansion Model (BEM) is adopted to fit the fast time-varying channel in a high-speed railway communication scenario. The channel impulse response is modeled as the sum of basis functions multiplied by different coefficients. The optimal coefficients are obtained by theoretical analysis. Simulation results show that a Generalized Complex-Exponential BEM (GCE-BEM) outperforms a Complex-Exponential BEM (CE-BEM) and a polynomial BEM in terms of Mean Squared Error (MSE). Besides, the MSE of the CE-BEM decreases gradually as the number of basis functions increases. The GCE-BEM has a satisfactory performance under severe channel fading.
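
    In the standard formulation, a BEM approximates a time-varying channel tap over a block of N samples as h[n] ~ sum_q c_q * b_q[n]; for the CE-BEM the basis functions are complex exponentials with period N, and the GCE-BEM oversamples the period to K*N. A minimal sketch fitting least-squares coefficients to a synthetic Doppler-shifted tap (the Doppler value and model orders are invented, and this LS fit stands in for the paper's own coefficient derivation):

        import numpy as np

        def ce_bem_basis(N, Q, K=1):
            # CE-BEM basis (K=1) or generalized CE-BEM (K>1, oversampled period K*N)
            n = np.arange(N)[:, None]
            q = np.arange(Q)[None, :] - Q // 2
            return np.exp(2j * np.pi * q * n / (K * N))

        rng = np.random.default_rng(7)
        N, Q = 256, 5
        fd = 0.002                                    # normalized Doppler of a fast tap
        h = np.exp(2j * np.pi * fd * np.arange(N)) * (1 + 0.05 * rng.standard_normal(N))

        B = ce_bem_basis(N, Q, K=2)                   # GCE-BEM
        c, *_ = np.linalg.lstsq(B, h, rcond=None)     # least-squares coefficients
        mse = np.mean(np.abs(B @ c - h) ** 2)
        print(f"BEM fit MSE = {mse:.2e}")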

  18. Parameter Estimation of a Delay Time Model of Wearing Parts Based on Objective Data

    Directory of Open Access Journals (Sweden)

    Y. Tang

    2015-01-01

    Full Text Available The wearing parts of a system have a very high failure frequency, making it necessary to carry out continual functional inspections and maintenance to protect the system from unscheduled downtime. This allows for the collection of a large amount of maintenance data. Taking the unique characteristics of the wearing parts into consideration, we establish their respective delay time models in ideal inspection cases and nonideal inspection cases. The model parameters are estimated entirely using the collected maintenance data. Then, a likelihood function of all renewal events is derived based on their occurring probability functions, and the model parameters are calculated with the maximum likelihood function method, which is solved by the CRM. Finally, using two wearing parts from the oil and gas drilling industry as examples—the filter element and the blowout preventer rubber core—the parameters of the distribution function of the initial failure time and the delay time for each example are estimated, and their distribution functions are obtained. Such parameter estimation based on objective data will contribute to the optimization of the reasonable function inspection interval and will also provide some theoretical models to support the integrity management of equipment or systems.

  19. Allometric Models Based on Bayesian Frameworks Give Better Estimates of Aboveground Biomass in the Miombo Woodlands

    Directory of Open Access Journals (Sweden)

    Shem Kuyah

    2016-02-01

    Full Text Available The miombo woodland is the most extensive dry forest in the world, with the potential to store substantial amounts of biomass carbon. Efforts to obtain accurate estimates of carbon stocks in the miombo woodlands are limited by a general lack of biomass estimation models (BEMs). This study aimed to evaluate the accuracy of the most commonly employed allometric models for estimating aboveground biomass (AGB) in miombo woodlands, and to develop new models that enable more accurate estimation of biomass in the miombo woodlands. A generalizable mixed-species allometric model was developed from 88 trees belonging to 33 species ranging in diameter at breast height (DBH) from 5 to 105 cm using Bayesian estimation. A power law model with DBH alone performed better than both a polynomial model with DBH and the square of DBH, and models including height and crown area as additional variables along with DBH. The accuracy of estimates from published models varied across different sites and trees of different diameter classes, and was lower than that of estimates from our model. The model developed in this study can be used to establish conservative carbon stocks required to determine avoided emissions in performance-based payment schemes, for example in afforestation and reforestation activities.
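
    The core model form is the power law AGB = a * DBH**b. The article fits it in a Bayesian framework; as a rough stand-in, the sketch below uses ordinary least squares on the log-log scale with invented data:

        import numpy as np

        rng = np.random.default_rng(8)
        dbh = rng.uniform(5.0, 105.0, size=88)                        # cm
        agb = 0.12 * dbh**2.4 * np.exp(0.3 * rng.standard_normal(88)) # kg, synthetic

        # Fit log(AGB) = log(a) + b * log(DBH)
        b_hat, log_a = np.polyfit(np.log(dbh), np.log(agb), 1)
        print(f"AGB ~ {np.exp(log_a):.3f} * DBH^{b_hat:.2f}")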

  20. A new Method for the Estimation of Initial Condition Uncertainty Structures in Mesoscale Models

    Science.gov (United States)

    Keller, J. D.; Bach, L.; Hense, A.

    2012-12-01

    The estimation of fast growing error modes of a system is a key interest of ensemble data assimilation when assessing uncertainty in initial conditions. Over the last two decades three methods (and variations of these methods) have evolved for global numerical weather prediction models: ensemble Kalman filter, singular vectors and breeding of growing modes (or now ensemble transform). While the former incorporates a priori model error information and observation error estimates to determine ensemble initial conditions, the latter two techniques directly address the error structures associated with Lyapunov vectors. However, in global models these structures are mainly associated with transient global wave patterns. When assessing initial condition uncertainty in mesoscale limited area models, several problems regarding the aforementioned techniques arise: (a) additional sources of uncertainty on the smaller scales contribute to the error and (b) error structures from the global scale may quickly move through the model domain (depending on the size of the domain). To address the latter problem, perturbation structures from global models are often included in the mesoscale predictions as perturbed boundary conditions. However, the initial perturbations (when used) are often generated with a variant of an ensemble Kalman filter which does not necessarily focus on the large scale error patterns. In the framework of the European regional reanalysis project of the Hans-Ertel-Center for Weather Research, we use a mesoscale model with an implemented nudging data assimilation scheme which does not support ensemble data assimilation at all. In preparation of an ensemble-based regional reanalysis and for the estimation of three-dimensional atmospheric covariance structures, we implemented a new method for the assessment of fast growing error modes for mesoscale limited area models. The so-called self-breeding is a development based on the breeding of growing modes technique

  1. A Hierarchical Linear Model for Estimating Gender-Based Earnings Differentials.

    Science.gov (United States)

    Haberfield, Yitchak; Semyonov, Moshe; Addi, Audrey

    1998-01-01

    Estimates of gender earnings inequality in data from 116,431 Jewish workers were compared using a hierarchical linear model (HLM) and ordinary least squares model. The HLM allows estimation of the extent to which earnings inequality depends on occupational characteristics. (SK)

  2. Fast neutron dosimeter with wide base silicon diode

    International Nuclear Information System (INIS)

    Ma Lu

    1986-01-01

    This paper briefly introduces a wide base silicon diode fast neutron dosimeter with a wide measuring range and good energy response to fast neutrons. It is suitable for detecting fast neutrons in a mixed field of γ-rays, thermal neutrons and fast neutrons.

  3. Estimation of landfill emission lifespan using process oriented modeling

    International Nuclear Information System (INIS)

    Ustohalova, Veronika; Ricken, Tim; Widmann, Renatus

    2006-01-01

    Depending on the particular pollutants emitted, landfills may require service activities lasting from hundreds to thousands of years. Flexible tools allowing long-term predictions of emissions are of key importance to determine the nature and expected duration of maintenance and post-closure activities. A highly capable option is model-based prediction verified by experiments: it is fast, flexible and allows for the comparison of various possible operation scenarios in order to find the most appropriate one. The intention of the presented work was to develop an experimentally verified multi-dimensional predictive model capable of quantifying and estimating the processes taking place in landfill sites, in which a coupled process description allows precise time and space resolution. This constitutive 2-dimensional model is based on the macromechanical theory of porous media (TPM) for a saturated thermo-elastic porous body. The model was used to simulate simultaneously occurring processes: organic phase transition, gas emissions, heat transport, and settlement behavior on a long time scale for municipal solid waste deposited in a landfill. The relationships between the properties (composition, pore structure) of a landfill and the conversion and multi-phase transport phenomena inside it were experimentally determined. In this paper, we present both the theoretical background of the model and the results of the simulations at one single point as well as in a vertical landfill cross section

  4. The practical model of electron emission in the radioisotope battery by fast ions

    International Nuclear Information System (INIS)

    Erokhine, N.S.; Balebanov, V.M.

    2003-01-01

    In the theoretical analysis of a secondary-emission radioisotope current source, the estimation of the energy spectrum F(E) of secondary electrons with energy E emitted from the films is an important problem. Knowledge of this characteristic allows, in particular, study of the volt-ampere function, the dependence of the electric power deposited in the load on the system parameters, and so on. Since rigorous calculations of the energy spectrum F(E) are complicated and labour-intensive, it is necessary to elaborate a practical model that allows reliable express-estimates of the energy spectrum F(E) and the volt-ampere function I(V) for concrete battery emitter film materials, using a simple computer routine on the basis of generalized data (both experimental measurements and theoretical calculations) on stopping powers and the mean free paths of suprathermal electrons. This paper is devoted to the description of such a practical model for calculating electron emission characteristics under the passage of fast ion fluxes from the radioisotope source through the battery emitter. Analytical approximations for the stopping power of the emitter materials, the electron inelastic mean free path, the ion production of fast electrons and the probability for them to reach the film surface are taken into account. For copper and gold films, the secondary electron escape depth and the position of the energy spectrum peak are considered as functions of the surface potential barrier magnitude U. According to our calculations, the energy spectrum peak shifts to higher electron energies as U grows. The described model may be used for express estimates and computer simulations of the interaction of fast alpha-particles and suprathermal electrons with the solid state plasma of battery emitter films, to study the characteristics of the electron emission layer, including the secondary electron escape depth, and to find the optimum conditions for excitation of nonequilibrium

  5. Estimation of sulfur in coal by fast neutron activation

    International Nuclear Information System (INIS)

    Das, G.C.; Bhattacharyya, P.K.

    1995-01-01

    A simple method is described for estimation of sulfur in coal using fast neutron activation of sulfur, i.e. {sup 32}S(n,p){sup 32}P, and subsequent measurement of the {sup 32}P β-activity (1.72 MeV) by a Geiger-Mueller counter. Since the sulfur content of Indian coal ranges from 0.25 to 3%, simulated samples of coal containing sulfur in the range from 0.25 to 3% and common impurities like oxides of aluminium, calcium, iron and silicon have been used to establish the method. (author). 6 refs., 2 figs., 1 tab
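
    The activation analysis rests on the standard growth equation for induced activity, A = N * sigma * phi * (1 - exp(-lambda*t)). A back-of-envelope sketch with illustrative (not the paper's) flux, cross-section and sample values; only the 32P half-life of roughly 14.3 days is a physical constant here:

        import numpy as np

        AVOGADRO = 6.022e23
        HALF_LIFE_P32 = 14.3 * 86400            # s, half-life of 32P
        lam = np.log(2) / HALF_LIFE_P32

        m_s = 0.01                              # g of sulfur in the sample (illustrative)
        N = m_s / 32.0 * AVOGADRO               # 32S nuclei (isotopic abundance ignored)
        sigma = 0.2e-24                         # cm^2, illustrative (n,p) cross section
        phi = 1e8                               # n/cm^2/s fast-neutron flux (illustrative)
        t_irr = 3600.0                          # 1 h irradiation

        A = N * sigma * phi * (1 - np.exp(-lam * t_irr))
        print(f"32P activity after irradiation: {A:.1f} Bq")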

  6. An adaptive ARX model to estimate the RUL of aluminum plates based on its crack growth

    Science.gov (United States)

    Barraza-Barraza, Diana; Tercero-Gómez, Víctor G.; Beruvides, Mario G.; Limón-Robles, Jorge

    2017-01-01

    A wide variety of Condition-Based Maintenance (CBM) techniques deal with the problem of predicting the time of an asset fault. Most statistical approaches rely on historical failure data that might not be available in several practical situations. To address this issue, practitioners might require the use of self-starting approaches that consider only the available knowledge about the current degradation process and the asset operating context to update the prognostic model. Some authors use autoregressive (AR) models for this purpose; these are adequate when the asset operating context is constant, but if it is variable, the accuracy of the models can be affected. In this paper, three autoregressive models with exogenous variables (ARX) were constructed, and their capability to estimate the remaining useful life (RUL) of a process was evaluated on the aluminum crack growth problem. An existing stochastic model of aluminum crack growth was implemented and used to assess the RUL estimation performance of the proposed ARX models through extensive Monte Carlo simulations. Point and interval estimations were made based only on individual history, behavior, operating conditions and failure thresholds. Both analytic and bootstrapping techniques were used in the estimation process. Finally, by including recursive parameter estimation and a forgetting factor, the ARX methodology adapts to changing operating conditions and maintains focus on the current degradation level of an asset.
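
    The adaptive ingredient described above, recursive parameter estimation with a forgetting factor, is standard recursive least squares. A minimal self-contained sketch for an ARX(2,2) model on synthetic data; the model orders, the forgetting factor 0.98 and the test system are invented:

        import numpy as np

        def rls_arx(y, u, na=2, nb=2, lam=0.98):
            # Recursive least squares with forgetting factor lam, so the fit
            # adapts to changing operating conditions
            d = na + nb
            theta = np.zeros(d)
            P = 1e4 * np.eye(d)
            for k in range(max(na, nb), len(y)):
                phi = np.concatenate([-y[k-na:k][::-1], u[k-nb:k][::-1]])
                e = y[k] - phi @ theta
                K = P @ phi / (lam + phi @ P @ phi)
                theta += K * e
                P = (P - np.outer(K, phi @ P)) / lam
            return theta

        rng = np.random.default_rng(9)
        u = rng.standard_normal(500)
        y = np.zeros(500)
        for k in range(2, 500):   # true system: y_k = 1.5*y_{k-1} - 0.7*y_{k-2} + u_{k-1}
            y[k] = 1.5*y[k-1] - 0.7*y[k-2] + u[k-1] + 0.05*rng.standard_normal()
        print(rls_arx(y, u))      # expect roughly [-1.5, 0.7, 1.0, 0.0]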

  7. Infrared video based gas leak detection method using modified FAST features

    Science.gov (United States)

    Wang, Min; Hong, Hanyu; Huang, Likun

    2018-03-01

    In order to detect in time invisible leaking gas, which is usually dangerous and easily leads to fire or explosion, many new technologies have arisen in recent years, among which infrared video based gas leak detection is widely recognized as a viable tool. However, all the moving regions of a video frame can be detected as leaking gas regions by the existing infrared video based gas leak detection methods, without discriminating the property of each detected region; e.g., a walking person in a video frame may also be detected as gas by the current gas leak detection methods. To solve this problem, we propose a novel infrared video based gas leak detection method in this paper, which is able to effectively suppress strong motion disturbances. Firstly, the Gaussian mixture model (GMM) is used to establish the background model. Then, due to the observation that the shapes of gas regions are different from most rigid moving objects, we modify the Features From Accelerated Segment Test (FAST) algorithm and use the modified FAST (mFAST) features to describe each connected component. In view of the fact that the statistical property of the mFAST features extracted from gas regions is different from that of other motion regions, we propose the Pixel-Per-Points (PPP) condition to further select candidate connected components. Experimental results show that the algorithm is able to effectively suppress most strong motion disturbances and achieve real-time leaking gas detection.
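
    A rough skeleton of such a pipeline is sketched below with OpenCV: MOG2 Gaussian-mixture background subtraction, connected components, and a corners-per-area test standing in for the paper's mFAST features and Pixel-Per-Points condition, which are not public here. Thresholds are invented and frames are assumed to be 8-bit grayscale:

        import cv2
        import numpy as np

        fgbg = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=16)
        fast = cv2.FastFeatureDetector_create(threshold=15)

        def candidate_gas_regions(frame, ppp_max=0.02):
            # GMM foreground mask, lightly denoised
            fg = fgbg.apply(frame)
            fg = cv2.medianBlur(fg, 5)
            n, labels, stats, _ = cv2.connectedComponentsWithStats((fg > 0).astype(np.uint8))
            regions = []
            for i in range(1, n):                  # label 0 is the background
                x, y, w, h, area = stats[i]
                if area < 50:
                    continue
                roi = frame[y:y+h, x:x+w]
                kp = fast.detect(roi, None)
                # Few corners per pixel suggests a diffuse, gas-like region;
                # many corners suggest a textured rigid object such as a person
                if len(kp) / float(area) < ppp_max:
                    regions.append((x, y, w, h))
            return regions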

  8. Promotion and Fast Food Demand: Where's the Beef?

    OpenAIRE

    Richards, Timothy J.; Padilla, Luis

    2007-01-01

    Many believe that fast food promotion is a significant cause of the obesity epidemic in North America. Industry members argue that promotion only reallocates brand shares and does not increase overall demand. This study weighs into the debate by specifying and estimating a discrete/continuous model of fast food restaurant choice and food expenditure that explicitly accounts for both spatial and temporal determinants of demand. Estimates are obtained using a unique panel of Canadian fast food ...

  9. Process-based Cost Estimation for Ramjet/Scramjet Engines

    Science.gov (United States)

    Singh, Brijendra; Torres, Felix; Nesman, Miles; Reynolds, John

    2003-01-01

    Process-based cost estimation plays a key role in effecting cultural change that integrates distributed science, technology and engineering teams to rapidly create innovative and affordable products. Working together, NASA Glenn Research Center and Boeing Canoga Park have developed a methodology of process-based cost estimation bridging the methodologies of high-level parametric models and detailed bottoms-up estimation. The NASA GRC/Boeing CP process-based cost model provides a probabilistic structure of layered cost drivers. High-level inputs characterize mission requirements, system performance, and relevant economic factors. Design alternatives are extracted from a standard, product-specific work breakdown structure to pre-load lower-level cost driver inputs and generate the cost-risk analysis. As the product design progresses and matures, the lower-level, more detailed cost drivers can be re-accessed and the projected variation of input values narrowed, thereby generating a progressively more accurate estimate of cost-risk. Incorporated into the process-based cost model are techniques for decision analysis, specifically the analytic hierarchy process (AHP) and functional utility analysis. Design alternatives may then be evaluated not just on cost-risk, but also on user-defined performance and schedule criteria. This implementation of full trade-study support contributes significantly to the realization of the integrated development environment. The process-based cost estimation model generates development and manufacturing cost estimates. The development team plans to expand the manufacturing process base from approximately 80 manufacturing processes to over 250 processes. Operation and support cost modeling is also envisioned. Process-based estimation considers the materials, resources, and processes in establishing cost-risk and, rather than depending on weight as an input, actually estimates weight along with cost and schedule.

  10. Homogenization of the coefficient of diffusion: influence of modelling and of the laplacian for fast power reactors and experimental mockups

    International Nuclear Information System (INIS)

    Gho, C.J.

    1984-10-01

    Neutron transport calculation of reactors is based on the definition of homogenized cell constants, among them the diffusion coefficient. The formalism used to evaluate the diffusion coefficient, as well as the cell model used, may introduce uncertainties in the results. The present study made it possible to estimate these uncertainties in the case of fast neutron power reactors and critical mockups. The validation of new simple methods and the definition of references are a consequence of this work [fr

  11. DOA Estimation Based on Real-Valued Cross Correlation Matrix of Coprime Arrays.

    Science.gov (United States)

    Li, Jianfeng; Wang, Feng; Jiang, Defu

    2017-03-20

    A fast direction of arrival (DOA) estimation method using a real-valued cross-correlation matrix (CCM) of coprime subarrays is proposed. Firstly, a real-valued CCM with extended aperture is constructed to obtain the signal subspaces corresponding to the two subarrays. By analysing the relationship between the two subspaces, DOA estimates from the two subarrays are simultaneously obtained with automatic pairing. Finally, the unique DOA is determined based on the common results from the two subarrays. Compared to the partial spectral search (PSS) method and the estimation of signal parameters via rotational invariance (ESPRIT) based method for coprime arrays, the proposed algorithm has lower complexity but achieves better DOA estimation performance and handles more sources. Simulation results verify the effectiveness of the approach.

  12. Variable selection for confounder control, flexible modeling and Collaborative Targeted Minimum Loss-based Estimation in causal inference

    Science.gov (United States)

    Schnitzer, Mireille E.; Lok, Judith J.; Gruber, Susan

    2015-01-01

    This paper investigates the appropriateness of the integration of flexible propensity score modeling (nonparametric or machine learning approaches) in semiparametric models for the estimation of a causal quantity, such as the mean outcome under treatment. We begin with an overview of some of the issues involved in knowledge-based and statistical variable selection in causal inference and the potential pitfalls of automated selection based on the fit of the propensity score. Using a simple example, we directly show the consequences of adjusting for pure causes of the exposure when using inverse probability of treatment weighting (IPTW). Such variables are likely to be selected when using a naive approach to model selection for the propensity score. We describe how the method of Collaborative Targeted minimum loss-based estimation (C-TMLE; van der Laan and Gruber, 2010) capitalizes on the collaborative double robustness property of semiparametric efficient estimators to select covariates for the propensity score based on the error in the conditional outcome model. Finally, we compare several approaches to automated variable selection in low- and high-dimensional settings through a simulation study. From this simulation study, we conclude that using IPTW with flexible prediction for the propensity score can result in inferior estimation, while Targeted minimum loss-based estimation and C-TMLE may benefit from flexible prediction and remain robust to the presence of variables that are highly correlated with treatment. However, in our study, standard influence function-based methods for the variance underestimated the standard errors, resulting in poor coverage under certain data-generating scenarios. PMID:26226129
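
    For reference, the IPTW estimator discussed above reweights treated outcomes by the inverse of the estimated propensity score. A minimal sketch of the Horvitz-Thompson form with a plain logistic propensity model and simulated data; TMLE/C-TMLE themselves are not implemented here:

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        def iptw_mean_outcome(X, a, y):
            # Estimate E[Y(1)] as the mean over all subjects of a*y/propensity
            ps = LogisticRegression(max_iter=1000).fit(X, a).predict_proba(X)[:, 1]
            return np.mean(a * y / ps)

        rng = np.random.default_rng(10)
        X = rng.standard_normal((2000, 3))
        p = 1.0 / (1.0 + np.exp(-X[:, 0]))       # treatment depends on first covariate
        a = rng.binomial(1, p)
        y = 1.0 + 0.5 * X[:, 0] + a + 0.1 * rng.standard_normal(2000)
        print(iptw_mean_outcome(X, a, y))        # should be near E[Y(1)] = 2.0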

  13. Fast and accurate phylogenetic reconstruction from high-resolution whole-genome data and a novel robustness estimator.

    Science.gov (United States)

    Lin, Y; Rajan, V; Moret, B M E

    2011-09-01

    The rapid accumulation of whole-genome data has renewed interest in the study of genomic rearrangements. Comparative genomics, evolutionary biology, and cancer research all require models and algorithms to elucidate the mechanisms, history, and consequences of these rearrangements. However, even simple models lead to NP-hard problems, particularly in the area of phylogenetic analysis. Current approaches are limited to small collections of genomes and low-resolution data (typically a few hundred syntenic blocks). Moreover, whereas phylogenetic analyses from sequence data are deemed incomplete unless bootstrapping scores (a measure of confidence) are given for each tree edge, no equivalent to bootstrapping exists for rearrangement-based phylogenetic analysis. We describe a fast and accurate algorithm for rearrangement analysis that scales up, in both time and accuracy, to modern high-resolution genomic data. We also describe a novel approach to estimate the robustness of results, an equivalent to the bootstrapping analysis used in sequence-based phylogenetic reconstruction. We present the results of extensive testing on both simulated and real data showing that our algorithm returns very accurate results, while scaling linearly with the size of the genomes and cubically with their number. We also present extensive experimental results showing that our approach to robustness testing provides excellent estimates of confidence, which, moreover, can be tuned to trade off thresholds between false positives and false negatives. Together, these two novel approaches enable us to attack heretofore intractable problems, such as phylogenetic inference for high-resolution vertebrate genomes, as we demonstrate on a set of six vertebrate genomes with 8,380 syntenic blocks. A copy of the software is available on demand.

  14. A new geometric-based model to accurately estimate arm and leg inertial estimates.

    Science.gov (United States)

    Wicke, Jason; Dumas, Geneviève A

    2014-06-03

    Segment estimates of mass, center of mass and moment of inertia are required input parameters for analyzing the forces and moments acting across the joints. The objectives of this study were to propose a new geometric model for limb segments, to evaluate it against criterion values obtained from DXA, and to compare its performance to five other popular models. Twenty-five female and 24 male college students participated in the study. For the criterion measures, the participants underwent a whole-body DXA scan, and estimates for segment mass, center of mass location, and moment of inertia (frontal plane) were computed directly from the DXA mass units. For the new model, the volume was determined from two standing frontal and sagittal photographs. Each segment was modeled as a stack of slices, the sections of which were ellipses if they were not adjoining another segment and sectioned ellipses if they were adjoining another segment (e.g. upper arm and trunk). The lengths of the axes of the ellipses were obtained from the photographs. In addition, a sex-specific, non-uniform density function was developed for each segment. A series of anthropometric measurements were also taken by directly following the definitions provided for the different body segment models tested, and the same parameters were determined for each model. Comparison of the models showed that estimates from the new model were consistently closer to the DXA criterion than those from the other models, with an error of less than 5% for mass and moment of inertia and less than about 6% for center of mass location. Copyright © 2014. Published by Elsevier Ltd.
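
    The stacked-slice construction lends itself to a compact numerical routine. The following sketch, under assumed tapering widths, depths and a uniform density (the paper uses sex-specific, non-uniform density functions and sectioned ellipses at adjoining segments), accumulates segment mass, centre of mass and an approximate frontal-plane moment of inertia from elliptical slices.

```python
# Stacked-ellipse sketch: mass, centre of mass and frontal-plane moment of
# inertia of a limb segment modelled as thin elliptical slices.
# Dimensions and density are illustrative, not the study's data.
import numpy as np

def segment_inertia(widths, depths, length, density):
    """widths/depths: per-slice semi-axes (m); length (m); density (kg/m^3)."""
    n = len(widths)
    dz = length / n
    z = (np.arange(n) + 0.5) * dz                # slice centres along segment
    m = density * np.pi * widths * depths * dz   # slice masses (ellipse area)
    mass = m.sum()
    com = (m * z).sum() / mass                   # centre of mass, proximal end
    # Each slice about its own centre (thin elliptical plate), then transfer
    i_own = m * (widths**2 / 4 + dz**2 / 12)
    inertia = (i_own + m * (z - com) ** 2).sum()
    return mass, com, inertia

widths = np.linspace(0.050, 0.030, 50)           # tapering, upper-arm-like
depths = np.linspace(0.045, 0.028, 50)
mass, com, inertia = segment_inertia(widths, depths, 0.30, 1050.0)
print(f"mass={mass:.2f} kg  com={com*100:.1f} cm  I={inertia*1e4:.1f} kg*cm^2")
```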

  15. solveME: fast and reliable solution of nonlinear ME models

    DEFF Research Database (Denmark)

    Yang, Laurence; Ma, Ding; Ebrahim, Ali

    2016-01-01

    Background: Genome-scale models of metabolism and macromolecular expression (ME) significantly expand the scope and predictive capabilities of constraint-based modeling. ME models present considerable computational challenges: they are much (>30 times) larger than corresponding metabolic reconstructions (M models), are multiscale, and growth maximization is a nonlinear programming (NLP) problem, mainly due to macromolecule dilution constraints. Results: Here, we address these computational challenges. We develop a fast and numerically reliable solution method for growth maximization in ME models...

  16. Feasibility Study on Tension Estimation Technique for Hanger Cables Using the FE Model-Based System Identification Method

    Directory of Open Access Journals (Sweden)

    Kyu-Sik Park

    2015-01-01

    Hanger cables in suspension bridges are partly constrained by horizontal clamps, so existing tension estimation methods based on a single cable model are prone to higher errors as the cable gets shorter and becomes more sensitive to flexural rigidity. Inverse analysis and system identification methods based on finite element models have therefore been suggested recently. In this paper, the applicability of system identification methods is investigated using the hanger cables of the Gwang-An bridge. The test results show that the inverse analysis and system identification methods based on finite element models are more reliable than the existing string theory and linear regression method for calculating the tension, in terms of natural frequency errors. However, in model-based methods the tension estimation error can vary with the accuracy of the finite element model; in particular, the boundary conditions affect the results more profoundly as the cable gets shorter. It is therefore important to identify the boundary conditions through experiment where possible. The FE model-based tension estimation method using system identification can take various boundary conditions into account, and since it is not sensitive to the number of natural frequency inputs, its applicability is high.

  17. Probabilistic estimation of residential air exchange rates for population-based human exposure modeling

    Science.gov (United States)

    Residential air exchange rates (AERs) are a key determinant in the infiltration of ambient air pollution indoors. Population-based human exposure models using probabilistic approaches to estimate personal exposure to air pollutants have relied on input distributions from AER meas...

  18. A fast EM algorithm for BayesA-like prediction of genomic breeding values.

    Directory of Open Access Journals (Sweden)

    Xiaochen Sun

    Prediction accuracies of estimated breeding values for economically important traits are expected to benefit from genomic information. Single nucleotide polymorphism (SNP) panels used in genomic prediction are increasing in density, but the Markov chain Monte Carlo (MCMC) estimation of SNP effects can be quite time consuming or slow to converge when a large number of SNPs are fitted simultaneously in a linear mixed model. Here we present an EM algorithm (termed "fastBayesA") without MCMC. This fastBayesA approach treats the variances of SNP effects as missing data and uses a joint posterior mode of effects, compared to the commonly used BayesA, which bases predictions on posterior means of effects. In each EM iteration, SNP effects are predicted as a linear combination of best linear unbiased predictions of breeding values from a mixed linear animal model that incorporates a weighted marker-based realized relationship matrix. The fastBayesA method converges after a few iterations to a joint posterior mode of SNP effects under the BayesA model. When applied to simulated quantitative traits with a range of genetic architectures, fastBayesA is shown to predict genomic estimated breeding values (GEBV) as accurately as BayesA but with less computing effort per SNP. fastBayesA can be used as a computationally efficient substitute for BayesA, especially when an increasing number of markers brings an unreasonable computational burden or slow convergence to MCMC approaches.

  19. Fast imaging measurements and modeling of neutral and impurity density on C-2U

    Science.gov (United States)

    Granstedt, Erik; Deng, B.; Dettrick, S.; Gupta, D. K.; Osin, D.; Roche, T.; Zhai, K.; TAE Team

    2016-10-01

    The C-2U device employed neutral beam injection and end-biasing to sustain an advanced beam-driven Field-Reversed Configuration plasma for 5+ ms, beyond characteristic transport time scales. Three high-speed, filtered cameras observed visible light emission from neutral hydrogen and impurities, as well as deuterium pellet ablation and compact-toroid injection, which were used for auxiliary particle fueling. Careful vacuum practices and titanium gettering successfully reduced neutral recycling from the confinement vessel wall. As a result, a large fraction of the remaining neutrals originate from charge exchange between the neutral beams and plasma ions. Measured H/D-α emission is used with DEGAS2 neutral particle modeling to reconstruct the strongly non-axisymmetric neutral distribution. This is then used in fast-ion modeling to more accurately estimate the fast-ion charge-exchange loss rate. Oxygen emission due to electron-impact excitation and charge-exchange recombination has also been measured using fast imaging. The reconstructed emissivity of O4+ is localized on the outboard side of the core plasma near the estimated location of the separatrix inferred from external magnetic measurements. Tri Alpha Energy.

  20. Model-Based Load Estimation for Predictive Condition Monitoring of Wind Turbines

    DEFF Research Database (Denmark)

    Perisic, Nevena; Pedersen, Bo Juul; Grunnet, Jacob Deleuran

    signal is performed online, and a Load Indicator Signal (LIS) is formulated as the ratio between the currently estimated accumulated fatigue load and its expected value based only on a priori knowledge (WTG dynamics and wind climate). LOT initialisation is based on a priori knowledge and can be obtained using ... programme for pre-maintenance actions. The performance of LOT is demonstrated by applying it to one of the most critical WTG components, the gearbox. Model-based load CMS for the gearbox requires only standard WTG SCADA data, whereas direct measurement of gearbox fatigue loads requires high-cost and low-reliability ... measurement equipment. Thus, LOT can significantly reduce the price of load monitoring ...

  1. Chloramine demand estimation using surrogate chemical and microbiological parameters.

    Science.gov (United States)

    Moradi, Sina; Liu, Sanly; Chow, Christopher W K; van Leeuwen, John; Cook, David; Drikas, Mary; Amal, Rose

    2017-07-01

    A model is developed to enable estimation of chloramine demand in full-scale drinking water supplies based on chemical and microbiological factors that affect the chloramine decay rate, via a nonlinear regression analysis method. The model is based on the organic character (specific ultraviolet absorbance (SUVA)) of the water samples and a laboratory measure of the microbiological decay of chloramine (Fm). The applicability of the model for estimation of chloramine residual (and hence chloramine demand) was tested on several waters from different water treatment plants in Australia through statistical test analysis between the experimental and predicted data. Results showed that the model was able to simulate and estimate chloramine demand at various times in real drinking water systems. To elucidate the loss of chloramine over the wide variation of water quality used in this study, the model incorporates both the fast and slow chloramine decay pathways. The significance of the estimated fast and slow decay rate constants as the kinetic parameters of the model for three water sources in Australia is discussed. It was found that with the same water source, the kinetic parameters remain the same. This modelling approach has the potential to be used by water treatment operators as a decision support tool in order to manage chloramine disinfection. Copyright © 2017. Published by Elsevier B.V.
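
    The fast/slow two-pathway structure mentioned above corresponds to a sum of two first-order decays. As a hedged illustration (synthetic data, not the paper's measurements or its SUVA/Fm regression), the two rate constants can be recovered by nonlinear least squares:

```python
# Fit a two-pathway chloramine decay model
#   C(t) = C_fast*exp(-k_fast*t) + C_slow*exp(-k_slow*t)
# to residual measurements; the data points are synthetic placeholders.
import numpy as np
from scipy.optimize import curve_fit

def two_pathway(t, c_fast, k_fast, c_slow, k_slow):
    return c_fast * np.exp(-k_fast * t) + c_slow * np.exp(-k_slow * t)

t = np.array([0, 2, 5, 10, 20, 40, 80, 160.0])                   # hours
c = np.array([2.00, 1.75, 1.56, 1.42, 1.31, 1.13, 0.86, 0.49])   # mg/L

p0 = [0.5, 0.3, 1.5, 0.007]      # guesses: fast amplitude/rate, slow pair
popt, _ = curve_fit(two_pathway, t, c, p0=p0, bounds=(0, np.inf))
c_f, k_f, c_s, k_s = popt
print(f"k_fast = {k_f:.3f} 1/h, k_slow = {k_s:.4f} 1/h")
```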

  2. Maximum profile likelihood estimation of differential equation parameters through model based smoothing state estimates.

    Science.gov (United States)

    Campbell, D A; Chkrebtii, O

    2013-12-01

    Statistical inference for biochemical models often faces a variety of characteristic challenges. In this paper we examine state and parameter estimation for the JAK-STAT intracellular signalling mechanism, which exemplifies the implementation intricacies common in many biochemical inference problems. We introduce an extension to the Generalized Smoothing approach for estimating delay differential equation models, addressing selection of complexity parameters, choice of the basis system, and appropriate optimization strategies. Motivated by the JAK-STAT system, we further extend the generalized smoothing approach to consider a nonlinear observation process with additional unknown parameters, and highlight how the approach handles unobserved states and unevenly spaced observations. The methodology developed is generally applicable to problems of estimation for differential equation models with delays, unobserved states, nonlinear observation processes, and partially observed histories. Crown Copyright © 2013. Published by Elsevier Inc. All rights reserved.

  3. Analytical dynamic modeling of fast trilayer polypyrrole bending actuators

    International Nuclear Information System (INIS)

    Amiri Moghadam, Amir Ali; Moavenian, Majid; Tahani, Masoud; Torabi, Keivan

    2011-01-01

    Analytical modeling of conjugated polymer actuators with complicated electro-chemo-mechanical dynamics is an interesting area for research, due to the wide range of applications including biomimetic robots and biomedical devices. Although there have been extensive reports on modeling the electrochemical dynamics of polypyrrole (PPy) bending actuators, mechanical dynamics modeling of the actuators remains unexplored. PPy actuators can operate with low voltage while producing large displacement in comparison to robotic joints; they do not have friction or backlash, but they suffer from some disadvantages such as creep and hysteresis. In this paper, a complete analytical dynamic model for fast trilayer polypyrrole bending actuators is proposed and named the analytical multi-domain dynamic actuator (AMDDA) model. First an electrical admittance model of the actuator is obtained based on a distributed RC line; subsequently a proper mechanical dynamic model is derived based on Hamilton's principle. The proposed modeling approach is validated against recently published experimental results

  4. SBML-PET-MPI: a parallel parameter estimation tool for Systems Biology Markup Language based models.

    Science.gov (United States)

    Zi, Zhike

    2011-04-01

    Parameter estimation is crucial for the modeling and dynamic analysis of biological systems. However, implementing parameter estimation is time consuming and computationally demanding. Here, we introduced a parallel parameter estimation tool for Systems Biology Markup Language (SBML)-based models (SBML-PET-MPI). SBML-PET-MPI allows the user to perform parameter estimation and parameter uncertainty analysis by collectively fitting multiple experimental datasets. The tool is developed and parallelized using the message passing interface (MPI) protocol, which provides good scalability with the number of processors. SBML-PET-MPI is freely available for non-commercial use at http://www.bioss.uni-freiburg.de/cms/sbml-pet-mpi.html or http://sites.google.com/site/sbmlpetmpi/.

  5. Cloud-based calculators for fast and reliable access to NOAA's geomagnetic field models

    Science.gov (United States)

    Woods, A.; Nair, M. C.; Boneh, N.; Chulliat, A.

    2017-12-01

    While the Global Positioning System (GPS) provides accurate point locations, it does not provide pointing directions. Therefore, the absolute directional information provided by the Earth's magnetic field is of primary importance for navigation and for the pointing of technical devices such as aircraft, satellites and, lately, mobile phones. The major magnetic sources that affect compass-based navigation are the Earth's core, its magnetized crust and the electric currents in the ionosphere and magnetosphere. The NOAA/CIRES Geomagnetism group (ngdc.noaa.gov/geomag/) develops and distributes models that describe all these important sources to aid navigation. Our geomagnetic models are used on a variety of platforms including airplanes, ships, submarines and smartphones. While the magnetic field from the Earth's core can be described with relatively few parameters and is suitable for offline computation, the magnetic sources from the Earth's crust, ionosphere and magnetosphere require either significant computational resources or real-time capabilities and are not suitable for offline calculation. This is especially important for small navigational devices or embedded systems, where computational resources are limited. Recognizing the need for fast and reliable access to our geomagnetic field models, we developed cloud-based application program interfaces (APIs) for NOAA's ionospheric and magnetospheric magnetic field models. In this paper we will describe the need for reliable magnetic calculators, the challenges faced in running geomagnetic field models in the cloud in real time, and the feedback from our user community. We discuss lessons learned harvesting and validating the data which powers our cloud services, as well as our strategies for maintaining near-real-time service, including load-balancing, real-time monitoring, and instance cloning. We will also briefly talk about the progress we achieved on NOAA's Big Earth Data Initiative (BEDI) funded project to develop API

  6. Optimization of Simple Monetary Policy Rules on the Base of Estimated DSGE-model

    OpenAIRE

    Shulgin, A.

    2015-01-01

    Optimization of the coefficients in simple monetary policy rules is performed based on a DSGE model with two independent monetary policy instruments estimated on Russian data. It was found that welfare-maximizing policy rules lead to inadequate results and a pro-cyclical monetary policy. Optimal coefficients in the Taylor rule and the exchange rate rule allow the volatility estimated on Russian data for 2001-2012 to be decreased by about 20%. The degree of exchange rate flexibility parameter was found to be low...

  7. A FAST SEGMENTATION ALGORITHM FOR C-V MODEL BASED ON EXPONENTIAL IMAGE SEQUENCE GENERATION

    Directory of Open Access Journals (Sweden)

    J. Hu

    2017-09-01

    For island coastline segmentation, a fast segmentation algorithm for the C-V model based on exponential image sequence generation is proposed in this paper. An exponential multi-scale C-V model with level set inheritance and boundary inheritance is developed. The main research contributions are as follows: (1) the problems of "holes" and "gaps" when extracting the coastline are solved through small-scale shrinkage, low-pass filtering and area sorting of regions; (2) the initial values of the SDF (Signed Distance Function) and the level set are given by Otsu segmentation, based on the difference in SAR reflection between land and sea, so that they are finely close to the coastline; (3) the computational complexity of the continuous transition between different scales is successfully reduced by SDF and level set inheritance. Experimental results show that the method accelerates the formation of the initial level set and shortens the time needed for coastline extraction while removing non-coastline bodies and improving the identification precision of the main coastline, which automates the process of coastline segmentation.

  8. Estimation of pathological tremor from recorded signals based on adaptive sliding fast Fourier transform

    Directory of Open Access Journals (Sweden)

    Shengxin Wang

    2016-06-01

    Pathological tremor is an approximately rhythmic movement that considerably affects patients' daily living activities. Biomechanical loading and functional electrical stimulation have been proposed as potential alternatives for cancelling pathological tremor. However, the performance of suppression methods depends on the separation of tremor from the recorded signals. In this paper, an algorithm incorporating a fast Fourier transform augmented with a sliding convolution window, an interpolation procedure, and a frequency damping module is presented to isolate tremulous components from the measured signals and estimate the instantaneous tremor frequency. Meanwhile, a mechanical platform is designed to provide simulated tremor signals with different degrees of voluntary movement. The performance of the proposed algorithm and existing procedures is compared on simulated signals and experimental signals collected from patients. The results demonstrate that the proposed solution can detect the unknown dominant frequency and distinguish the tremor components with higher accuracy. This algorithm is therefore useful for actively compensating tremor by functional electrical stimulation without affecting voluntary movement.
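
    A minimal version of the sliding-FFT front end is easy to sketch. The snippet below (illustrative window sizes and tremor band, not the authors' implementation, and without their damping module) slides a Hann window over a synthetic signal, refines the frequency grid by zero-padding in place of interpolation, and tracks the dominant frequency in a plausible tremor band:

```python
# Sliding-window FFT tracker of the dominant tremor frequency.
import numpy as np

fs = 200.0                                    # sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
x = np.sin(2 * np.pi * 0.4 * t)               # slow voluntary movement
x += 0.5 * np.sin(2 * np.pi * 5.5 * t)        # 5.5 Hz tremor component

win_len, hop = 512, 256
window = np.hanning(win_len)
freqs = np.fft.rfftfreq(4 * win_len, 1 / fs)  # 4x zero-padding: finer grid
band = (freqs >= 3) & (freqs <= 12)           # typical pathological tremor band

for start in range(0, len(x) - win_len + 1, hop):
    spec = np.abs(np.fft.rfft(x[start:start + win_len] * window, n=4 * win_len))
    f_dom = freqs[band][np.argmax(spec[band])]
    print(f"t = {start / fs:4.1f} s  ->  tremor at {f_dom:.2f} Hz")
```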

  9. Fast and accurate calculation of dilute quantum gas using Uehling–Uhlenbeck model equation

    Energy Technology Data Exchange (ETDEWEB)

    Yano, Ryosuke, E-mail: ryosuke.yano@tokiorisk.co.jp

    2017-02-01

    The Uehling–Uhlenbeck (U–U) model equation is studied for the fast and accurate calculation of a dilute quantum gas. In particular, the direct simulation Monte Carlo (DSMC) method is used to solve the U–U model equation. DSMC analysis based on the U–U model equation is expected to enable the thermalization to be accurately obtained using a small number of sample particles and the dilute quantum gas dynamics to be calculated in a practical time. Finally, the applicability of DSMC analysis based on the U–U model equation to the fast and accurate calculation of a dilute quantum gas is confirmed by calculating the viscosity coefficient of a Bose gas on the basis of the Green–Kubo expression and the shock layer of a dilute Bose gas around a cylinder.

  10. Fast emission estimates in China and South Africa constrained by satellite observations

    Science.gov (United States)

    Mijling, Bas; van der A, Ronald

    2013-04-01

    Emission inventories of air pollutants are crucial information for policy makers and form important input data for air quality models. Unfortunately, bottom-up emission inventories, compiled from large quantities of statistical data, are easily outdated for emerging economies such as China and South Africa, where rapid economic growth changes emissions accordingly. Alternatively, top-down emission estimates from satellite observations of air constituents have the important advantages of being spatially consistent, having high temporal resolution, and enabling emission updates shortly after the satellite data become available. However, constraining emissions from observations of concentrations is computationally challenging. Within the GlobEmission project (part of the Data User Element programme of ESA), a new algorithm has been developed, specifically designed for fast daily emission estimates of short-lived atmospheric species on a mesoscopic scale (0.25 × 0.25 degree) from satellite observations of column concentrations. The algorithm needs only one forward model run from a chemical transport model to calculate the sensitivity of concentration to emission, using trajectory analysis to account for transport away from the source. By using a Kalman filter in the inverse step, optimal use is made of the a priori knowledge and the newly observed data. We apply the algorithm for NOx emission estimates in East China and South Africa, using the CHIMERE chemical transport model together with tropospheric NO2 column retrievals of the OMI and GOME-2 satellite instruments. The observations are used to construct a monthly emission time series, which reveals important emission trends such as the emission reduction measures during the Beijing Olympic Games, and the impact of and recovery from the global economic crisis. The algorithm is also able to detect emerging sources (e.g. new power plants) and improve emission information for areas where proxy data are not or badly known (e
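
    The inverse step can be illustrated on a single grid cell. In the scalar sketch below (all numbers invented), one forward-model run supplies the sensitivity of the observed column to the emission, and a Kalman filter blends the a priori emission with a satellite observation:

```python
# Scalar Kalman-filter update of an emission estimate from one observation.
import numpy as np

e_prior, p_prior = 10.0, 4.0    # a priori emission and its variance
s = 0.8                         # sensitivity d(column)/d(emission), from CTM
obs, r = 9.5, 1.0               # observed column and observation variance

pred = s * e_prior                           # column implied by the prior
k = p_prior * s / (s * p_prior * s + r)      # Kalman gain
e_post = e_prior + k * (obs - pred)          # updated emission
p_post = (1 - k * s) * p_prior               # updated uncertainty
print(f"posterior emission: {e_post:.2f} +/- {np.sqrt(p_post):.2f}")
```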

  11. Fast optimization of statistical potentials for structurally constrained phylogenetic models

    Directory of Open Access Journals (Sweden)

    Rodrigue Nicolas

    2009-09-01

    Background: Statistical approaches for protein design are relevant in the field of molecular evolutionary studies. In recent years, new, so-called structurally constrained (SC) models of protein-coding sequence evolution have been proposed, which use statistical potentials to assess sequence-structure compatibility. In a previous work, we defined a statistical framework for optimizing knowledge-based potentials especially suited to SC models. Our method used the maximum likelihood principle and provided what we call the joint potentials. However, the method required numerical estimations by the use of computationally heavy Markov Chain Monte Carlo sampling algorithms. Results: Here, we develop an alternative optimization procedure, based on a leave-one-out argument coupled to fast gradient descent algorithms. We find that the leave-one-out potential yields very similar results to the joint approach developed previously, both in terms of the resulting potential parameters and in terms of Bayes factor evaluation in a phylogenetic context. On the other hand, the leave-one-out approach results in a considerable computational benefit (up to a 1,000-fold decrease in computational time for the optimization procedure). Conclusion: Due to its computational speed, the optimization method we propose offers an attractive alternative for the design and empirical evaluation of alternative forms of potentials, using large data sets and high-dimensional parameterizations.

  12. Parameter estimation in nonlinear models for pesticide degradation

    International Nuclear Information System (INIS)

    Richter, O.; Pestemer, W.; Bunte, D.; Diekkrueger, B.

    1991-01-01

    A wide class of environmental transfer models is formulated as ordinary or partial differential equations. With the availability of fast computers, the numerical solution of large systems became feasible. The main difficulty in performing a realistic and convincing simulation of the fate of a substance in the biosphere is not the implementation of numerical techniques but rather the incomplete data basis for parameter estimation. Parameter estimation is a synonym for statistical and numerical procedures for deriving reasonable numerical values for model parameters from data. The classical method is the familiar linear regression technique, which dates back to the 18th century. Because it is easy to handle, linear regression has long been established as a convenient tool for analysing relationships. However, the wide use of linear regression has led to an overemphasis of linear relationships. In nature, most relationships are nonlinear, and linearization often gives a poor approximation of reality. Furthermore, pure regression models are not capable of mapping the dynamics of a process. Therefore, realistic models involve the evolution in time (and space). This leads in a natural way to the formulation of differential equations. To establish the link between data and dynamical models, advanced numerical parameter identification methods have been developed in recent years. This paper demonstrates the application of these techniques to estimation problems in the field of pesticide dynamics. (7 refs., 5 figs., 2 tabs.)
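
    The link between data and a dynamical model amounts to nesting a numerical ODE solution inside a least-squares problem. A minimal sketch, with synthetic concentrations and an assumed n-th order degradation law (one simple nonlinear candidate, not the paper's specific models):

```python
# Estimate (k, n) of dC/dt = -k * C**n from concentration measurements.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

t_obs = np.array([0, 1, 2, 4, 8, 16, 32.0])                    # days
c_obs = np.array([1.00, 0.78, 0.63, 0.44, 0.26, 0.13, 0.05])   # rel. conc.

def residuals(theta):
    k, n = theta
    sol = solve_ivp(lambda t, c: -k * c**n, (0, t_obs[-1]), [c_obs[0]],
                    t_eval=t_obs, rtol=1e-8)
    return sol.y[0] - c_obs

fit = least_squares(residuals, x0=[0.2, 1.0], bounds=([0, 0.5], [5, 3]))
k_hat, n_hat = fit.x
print(f"k = {k_hat:.3f} 1/day, n = {n_hat:.2f}")
```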

  13. Kalman filtering state of charge estimation for battery management system based on a stochastic fuzzy neural network battery model

    International Nuclear Information System (INIS)

    Xu Long; Wang Junping; Chen Quanshi

    2012-01-01

    Highlights: ► A novel extended Kalman filtering SOC estimation method based on a stochastic fuzzy neural network (SFNN) battery model is proposed. ► The SFNN, which has a filtering effect on noisy input, can model the nonlinear battery dynamics with high accuracy. ► A robust parameter learning algorithm for the SFNN is studied so that the parameters can converge to their true values with noisy data. ► The maximum SOC estimation error based on the proposed method is 0.6%. - Abstract: Extended Kalman filtering is an intelligent and optimal means for estimating the state of a dynamic system. In order to use extended Kalman filtering to estimate the state of charge (SOC), we require a mathematical model that can accurately capture the dynamics of the battery pack. In this paper, we propose a stochastic fuzzy neural network (SFNN), which, unlike the traditional neural network, has a filtering effect on noisy input, to model the nonlinear battery dynamics. Then, the paper studies the extended Kalman filtering SOC estimation method based on an SFNN model. The modeling test is realized on an 80 Ah Ni/MH battery pack, and the Federal Urban Driving Schedule (FUDS) cycle is used to verify the SOC estimation method. The maximum SOC estimation error is 0.6% compared with the real SOC obtained from the discharging test.
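
    The filter structure is independent of the particular battery model. The toy below keeps only the EKF skeleton: Coulomb counting predicts the SOC, and a made-up open-circuit-voltage curve (standing in for the SFNN's voltage map) supplies the measurement update. All parameters are invented.

```python
# Toy one-state EKF for SOC: predict by Coulomb counting, correct with a
# terminal-voltage measurement through an illustrative OCV(SOC) curve.
import numpy as np

capacity = 80.0 * 3600            # 80 Ah pack, in coulombs
q, r = 1e-7, 1e-3                 # process / measurement noise variances

def ocv(soc):                     # illustrative OCV(SOC) curve (V)
    return 3.0 + 1.2 * soc - 0.5 * soc**2

def docv(soc):                    # measurement Jacobian dOCV/dSOC
    return 1.2 - 1.0 * soc

soc, p = 0.9, 0.01                # initial estimate and its variance
dt, current = 1.0, 40.0           # 1 s steps, 40 A discharge
rng = np.random.default_rng(1)

for step in range(3600):          # one hour of discharge
    true_soc = 0.9 - current * dt * (step + 1) / capacity
    v_meas = ocv(true_soc) + rng.normal(0, 0.03)   # noisy terminal voltage

    soc -= current * dt / capacity                 # predict (Coulomb counting)
    p += q
    h = docv(soc)                                  # linearise measurement
    k = p * h / (h * p * h + r)                    # Kalman gain
    soc += k * (v_meas - ocv(soc))                 # correct with voltage
    p *= 1 - k * h

print(f"EKF SOC after 1 h: {soc:.3f} (truth {true_soc:.3f})")
```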

  14. Examining the utility of satellite-based wind sheltering estimates for lake hydrodynamic modeling

    Science.gov (United States)

    Van Den Hoek, Jamon; Read, Jordan S.; Winslow, Luke A.; Montesano, Paul; Markfort, Corey D.

    2015-01-01

    Satellite-based measurements of vegetation canopy structure have been in common use for the last decade but have never been used to estimate the canopy's impact on wind sheltering of individual lakes. Wind sheltering is caused by slower winds in the wake of topography and shoreline obstacles (e.g. forest canopy) and influences heat loss and the flux of wind-driven mixing energy into lakes, which control lake temperatures and indirectly structure lake ecosystem processes, including carbon cycling and thermal habitat partitioning. Lakeshore wind sheltering has often been parameterized by lake surface area, but such empirical relationships are only based on forested lakeshores and overlook the contributions of local land cover and terrain to wind sheltering. This study is the first to examine the utility of satellite imagery-derived broad-scale estimates of wind sheltering across a diversity of land covers. Using 30 m spatial resolution ASTER GDEM2 elevation data, the mean sheltering height, hs, being the combination of local topographic rise and canopy height above the lake surface, is calculated within 100 m-wide buffers surrounding 76,000 lakes in the U.S. state of Wisconsin. The uncertainty of GDEM2-derived hs was compared to SRTM-, high-resolution G-LiHT lidar-, and ICESat-derived estimates of hs; the respective influences of land cover type and buffer width on hs are examined; and the effect of including satellite-based hs on the accuracy of a statewide lake hydrodynamic model is discussed. Though GDEM2 hs uncertainty was comparable to or better than other satellite-based measures of hs, its higher spatial resolution and broader spatial coverage allowed more lakes to be included in modeling efforts. GDEM2 was shown to offer superior utility for estimating hs compared to other satellite-derived data, but was limited by its consistent underestimation of hs, inability to detect within-buffer hs variability, and differing accuracy across land cover types. Nonetheless

  15. Estimation of the radial force on the tokamak vessel wall during fast transient events

    Energy Technology Data Exchange (ETDEWEB)

    Pustovitov, V. D., E-mail: pustovitov-vd@nrcki.ru [National Research Center Kurchatov Institute (Russian Federation)

    2016-11-15

    The radial force balance in a tokamak during fast transient events with a duration much shorter than the resistive time of the vacuum vessel wall is analyzed. The aim of the work is to analytically estimate the resulting integral radial force on the wall. In contrast to the preceding study [Plasma Phys. Rep. 41, 952 (2015)], where a similar problem was considered for thermal quench, simultaneous changes in the profiles and values of the pressure and plasma current are allowed here. Thereby, the current quench and various methods of disruption mitigation used in the existing tokamaks and considered for future applications are also covered. General formulas for the force at an arbitrary sequence or combination of events are derived, and estimates for the standard tokamak model are made. The earlier results and conclusions are confirmed, and it is shown that, in the disruption mitigation scenarios accepted for ITER, the radial forces can be as high as in uncontrolled disruptions.

  16. Model instruments of effective segmentation of the fast food market

    Directory of Open Access Journals (Sweden)

    Mityaeva Tetyana L.

    2013-03-01

    The article presents the results of stepwise optimisation calculations of the economic effectiveness of fast food promotion, taking into consideration key parameters for assessing the efficiency of a marketing segmentation strategy. The article justifies the development of a mathematical model on the basis of 3D presentations and a three-dimensional system of management variables. Modern applied mathematical packages allow the formation and analysis of links between variables not only in one-dimensional and two-dimensional arrays but also in three-dimensional ones; moreover, the more links and parameters are taken into account, the more adequate and adaptive the results of modelling become and, as a result, the more informative and strategically valuable they are. The article shows modelling possibilities that allow strategies and reactions to be taken into account in forming a marketing strategy under the conditions of entering fast food market segments.

  17. DOA Estimation Based on Real-Valued Cross Correlation Matrix of Coprime Arrays

    Directory of Open Access Journals (Sweden)

    Jianfeng Li

    2017-03-01

    A fast direction of arrival (DOA) estimation method using a real-valued cross-correlation matrix (CCM) of coprime subarrays is proposed. Firstly, a real-valued CCM with extended aperture is constructed to obtain the signal subspaces corresponding to the two subarrays. By analysing the relationship between the two subspaces, DOA estimates from the two subarrays are simultaneously obtained with automatic pairing. Finally, the unique DOA is determined based on the common results from the two subarrays. Compared to the partial spectral search (PSS) method and the estimation of signal parameters via rotational invariance (ESPRIT) based method for coprime arrays, the proposed algorithm has lower complexity but achieves better DOA estimation performance and handles more sources. Simulation results verify the effectiveness of the approach.

  18. Assessing the external validity of model-based estimates of the incidence of heart attack in England: a modelling study

    Directory of Open Access Journals (Sweden)

    Peter Scarborough

    2016-11-01

    Background: The DisMod II model is designed to estimate epidemiological parameters for diseases where measured data are incomplete and has been used to provide estimates of disease incidence for the Global Burden of Disease study. We assessed the external validity of the DisMod II model by comparing modelled estimates of the incidence of first acute myocardial infarction (AMI) in England in 2010 with estimates derived from a linked dataset of hospital records and death certificates. Methods: Inputs for DisMod II were prevalence rates of ever having had an AMI taken from a population health survey, and total mortality rates and AMI mortality rates taken from death certificates. By definition, remission rates were zero. We estimated first AMI incidence in an external dataset from England in 2010 using a linked dataset including all hospital admissions and death certificates since 1998. 95 % confidence intervals were derived around estimates from the external dataset and DisMod II estimates based on sampling variance and reported uncertainty in prevalence estimates, respectively. Results: Estimates of the incidence rate for the whole population were higher in the DisMod II results than in the external dataset (+54 % for men and +26 % for women). Age-specific results showed that the DisMod II results over-estimated incidence for all but the oldest age groups. Confidence intervals for the DisMod II and external dataset estimates did not overlap for most age groups. Conclusion: By comparison with AMI incidence rates in England, DisMod II did not achieve external validity for age-specific incidence rates, but did provide global estimates of incidence of similar magnitude to measured estimates. The model should be used with caution when estimating age-specific incidence rates.

  19. Tyre-road friction coefficient estimation based on tyre sensors and lateral tyre deflection: modelling, simulations and experiments

    Science.gov (United States)

    Hong, Sanghyun; Erdogan, Gurkan; Hedrick, Karl; Borrelli, Francesco

    2013-05-01

    The estimation of the tyre-road friction coefficient is fundamental for vehicle control systems. Tyre sensors enable friction coefficient estimation based on signals extracted directly from tyres. This paper presents a tyre-road friction coefficient estimation algorithm based on tyre lateral deflection obtained from lateral acceleration. The lateral acceleration is measured by wireless three-dimensional accelerometers embedded inside the tyres. The proposed algorithm first determines the contact patch using a radial acceleration profile. Then, only the portion of the lateral acceleration profile inside the tyre-road contact patch is used to estimate the friction coefficient through a tyre brush model and a simple tyre model. The proposed strategy accounts for the orientation variation of the accelerometer body frame during tyre rotation. The effectiveness and performance of the algorithm are demonstrated through finite element model simulations and experimental tests with small tyre slip angles on different road surface conditions.

  20. Development of guidelines for inelastic analysis in design of fast reactor components

    International Nuclear Information System (INIS)

    Nakamura, Kyotada; Kasahara, Naoto; Morishita, Masaki; Shibamoto, Hiroshi; Inoue, Kazuhiko; Nakayama, Yasunari

    2008-01-01

    Interim guidelines for the application of inelastic analysis to the design of fast reactor components were developed. These guidelines are referred to from the 'Elevated Temperature Structural Design Guide for Commercialized Fast Reactor (FDS)'. The basic policies of the guidelines are more rational predictions compared with the elastic analysis approach and a guarantee of conservative results for design conditions. The guidelines recommend two kinds of constitutive equations for estimating strains conservatively. They also provide methods for modeling load histories and estimating fatigue and creep damage based on the results of inelastic analysis. The guidelines were applied to typical design examples, and the results were summarized as exemplars to support users

  1. An automated multi-model based evapotranspiration estimation framework for understanding crop-climate interactions in India

    Science.gov (United States)

    Bhattarai, N.; Jain, M.; Mallick, K.

    2017-12-01

    A remote sensing based multi-model evapotranspiration (ET) estimation framework is developed using MODIS and NASA MERRA-2 reanalysis data for data-poor regions, and we apply this framework to the Indian subcontinent. The framework eliminates the need for in-situ calibration data and hence estimates ET completely from space, making it replicable across all regions in the world. Currently, six surface energy balance models, ranging from the widely used SEBAL, METRIC, and SEBS to the moderately used S-SEBI and SSEBop and a relatively new model, STIC1.2, are being integrated and validated. Preliminary analysis suggests good predictability of the models for estimating near-real-time ET under clear sky conditions from various crop types in India, with coefficients of determination of 0.32-0.55 and percent bias of -15% to 28% when compared against Bowen ratio based ET estimates. The results are particularly encouraging given that no direct ground input data were used in the analysis. The framework is currently being extended to estimate seasonal ET across the Indian subcontinent using a model-ensemble approach that uses all available MODIS 8-day datasets since 2000. These ET products are being used to monitor inter-seasonal and inter-annual dynamics of ET and crop water use across different crop and irrigation practices in India. In particular, the potential impacts of changes in precipitation patterns and extreme heat (e.g., extreme degree days) on seasonal crop water consumption are being studied. Our ET products are able to locate the water stress hotspots that need to be targeted with water-saving interventions to maintain agricultural production in the face of climate variability and change.

  2. Fault Severity Estimation of Rotating Machinery Based on Residual Signals

    Directory of Open Access Journals (Sweden)

    Fan Jiang

    2012-01-01

    Fault severity estimation is an important part of a condition-based maintenance system, which can monitor the performance of an operating machine and enhance its level of safety. In this paper, a novel method based on statistical properties and residual signals is developed for estimating the fault severity of rotating machinery. The fast Fourier transform (FFT) is applied to extract the so-called multifrequency-band energy (MFBE) from the vibration signals of rotating machinery with different fault severity levels in the first stage. Since these features usually differ across working conditions with different fault severities, a sensitive-feature-selection algorithm is defined to construct the feature matrix and calculate the statistical parameter (mean) in the second stage. In the last stage, the residual signals computed by the zero space vector are used to estimate the fault severity. Simulation and experimental results reveal that the proposed method based on statistics and residual signals is effective and feasible for estimating the severity of a rotating machine fault.
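
    The MFBE feature itself is a few lines of NumPy. The sketch below (illustrative band edges and synthetic signals, not the paper's data) sums FFT spectral energy in fixed bands, so a developing fault that pumps energy into one band shows up directly in the feature vector:

```python
# Multifrequency-band energy (MFBE) of a vibration signal via the FFT.
import numpy as np

def mfbe(x, fs, n_bands=8):
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    edges = np.linspace(0, fs / 2, n_bands + 1)
    return np.array([spec[(freqs >= lo) & (freqs < hi)].sum()
                     for lo, hi in zip(edges[:-1], edges[1:])])

fs = 12000.0
t = np.arange(0, 1, 1 / fs)
rng = np.random.default_rng(0)
healthy = np.sin(2 * np.pi * 30 * t) + 0.05 * rng.normal(size=t.size)
faulty = healthy + 0.4 * np.sin(2 * np.pi * 3200 * t)  # energy in a fault band

print("healthy MFBE:", np.round(mfbe(healthy, fs), 1))
print("faulty  MFBE:", np.round(mfbe(faulty, fs), 1))
```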

  3. Autoregressive-model-based missing value estimation for DNA microarray time series data.

    Science.gov (United States)

    Choong, Miew Keen; Charbit, Maurice; Yan, Hong

    2009-01-01

    Missing value estimation is important in DNA microarray data analysis. A number of algorithms have been developed to solve this problem, but they have several limitations. Most existing algorithms are not able to deal with the situation where a particular time point (column) of the data is missing entirely. In this paper, we present an autoregressive-model-based missing value estimation method (ARLSimpute) that takes into account the dynamic property of microarray temporal data and the local similarity structures in the data. ARLSimpute is especially effective for the situation where a particular time point contains many missing values or where the entire time point is missing. Experimental results suggest that the proposed algorithm is an accurate missing value estimator in comparison with other imputation methods on simulated as well as real microarray time series datasets.

  4. Fast modeling of flux trapping cascaded explosively driven magnetic flux compression generators.

    Science.gov (United States)

    Wang, Yuwei; Zhang, Jiande; Chen, Dongqun; Cao, Shengguang; Li, Da; Liu, Chebo

    2013-01-01

    To predict the performance of flux trapping cascaded flux compression generators, a calculation model based on an equivalent circuit is investigated. The system circuit is analyzed according to its operation characteristics in different steps. Flux conservation coefficients are added to the driving terms of the circuit differential equations to account for intrinsic flux losses. To calculate the currents in the circuit by solving the circuit equations, a simple zero-dimensional model is used to calculate the time-varying inductance and dc resistance of the generator. A fast computer code is then programmed based on this calculation model. As an example, a two-stage flux trapping generator is simulated using this computer code. Good agreement is achieved when comparing the simulation results with the measurements. Furthermore, this fast calculation model can easily be applied to predict the performance of other flux trapping cascaded flux compression generators with complex structures, such as conical stator or conical armature sections, for design purposes.

  5. Acoustic monitoring of sodium boiling in a liquid metal fast breeder reactor from autoregressive models

    International Nuclear Information System (INIS)

    Geraldo, Issa Cherif; Bose, Tanmoy; Pekpe, Komi Midzodzi; Cassar, Jean-Philippe; Mohanty, A.R.; Paumel, Kévin

    2014-01-01

    Highlights: • The work deals with sodium boiling detection in a liquid metal fast breeder reactor. • The authors choose to use acoustic data instead of thermal data. • The method is designed not to be disturbed by environmental noise. • A real-time boiling detection method is proposed in the paper. - Abstract: This paper deals with acoustic monitoring of sodium boiling in a liquid metal fast breeder reactor (LMFBR) based on autoregressive (AR) models, which have low computational complexity. Some authors have used AR models for sodium boiling or sodium–water reaction detection. These works are based on characterizing the difference between the fault-free condition and the current functioning of the system. However, even in the absence of faults, it is possible to observe a change in the AR models due to a change of operating mode of the LMFBR. This sets up the delicate problem of how to distinguish a change in operating mode in the absence of faults from a change due to the presence of faults. In this paper we propose a new approach for boiling detection based on the estimation of AR models on sliding windows. Afterwards, classification of the models into boiling or non-boiling models is made by comparing their coefficients with two statistical methods, multiple linear regression (LR) and support vector machines (SVM). The proposed approach takes operating mode information into account in order to avoid false alarms. Experimental data include non-boiling background noise data collected from the Phenix power plant (France) and provided by the CEA (Commissariat à l'Energie Atomique et aux énergies alternatives, France) and boiling condition data generated in the laboratory. High boiling detection rates as well as low false alarm rates obtained on these experimental data show that the proposed method is efficient for boiling detection. Most importantly, it shows that the boiling phenomenon introduces a disturbance into the AR models that can be clearly detected
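
    The front end of such a detector, AR coefficients estimated on sliding windows, is compact; classifying the coefficient vectors (with LR or SVM, as above) is then a standard supervised step. A sketch with a synthetic signal and illustrative window sizes:

```python
# Least-squares AR(4) coefficients on sliding windows of an acoustic signal;
# a narrowband "boiling-like" component changes the coefficients after 2 s.
import numpy as np

def ar_coeffs(x, order=4):
    # Solve x[t] = a1*x[t-1] + ... + ap*x[t-p] in the least-squares sense
    X = np.column_stack([x[order - k - 1:len(x) - k - 1] for k in range(order)])
    return np.linalg.lstsq(X, x[order:], rcond=None)[0]

fs = 1000.0
t = np.arange(0, 4, 1 / fs)
x = np.random.default_rng(2).normal(size=t.size)        # background noise
x[t >= 2] += 0.8 * np.sin(2 * np.pi * 180 * t[t >= 2])  # onset of "boiling"

win, hop = 500, 500
for start in range(0, x.size - win + 1, hop):
    a = ar_coeffs(x[start:start + win])
    print(f"t = {start / fs:3.1f} s  AR coeffs: {np.round(a, 2)}")
```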

  6. Fast noise level estimation algorithm based on principal component analysis transform and nonlinear rectification

    Science.gov (United States)

    Xu, Shaoping; Zeng, Xiaoxia; Jiang, Yinnan; Tang, Yiling

    2018-01-01

    We proposed a noniterative principal component analysis (PCA)-based noise level estimation (NLE) algorithm that addresses the problem of estimating the noise level with a two-step scheme. First, we randomly extracted a number of raw patches from a given noisy image and took the smallest eigenvalue of the covariance matrix of the raw patches as the preliminary estimate of the noise level. Next, the final estimate was obtained directly with a nonlinear mapping (rectification) function that was trained on representative noisy images corrupted with different known noise levels. Compared with state-of-the-art NLE algorithms, the experimental results show that the proposed NLE algorithm can reliably infer the noise level and has robust performance over a wide range of image contents and noise levels, showing a good compromise between speed and accuracy in general.
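
    The first of the two steps is straightforward to reproduce. The sketch below (patch size and test image are illustrative) estimates the noise standard deviation from the smallest eigenvalue of the patch covariance matrix; the learned nonlinear rectification stage is omitted:

```python
# PCA-based noise level estimate: the smallest eigenvalue of the covariance
# of random image patches approximates the noise variance on smooth content.
import numpy as np

def pca_noise_level(img, patch=7, n_patches=5000, seed=0):
    rng = np.random.default_rng(seed)
    h, w = img.shape
    ys = rng.integers(0, h - patch, n_patches)
    xs = rng.integers(0, w - patch, n_patches)
    P = np.stack([img[y:y + patch, x:x + patch].ravel()
                  for y, x in zip(ys, xs)])
    cov = np.cov(P, rowvar=False)
    return np.sqrt(max(np.linalg.eigvalsh(cov)[0], 0.0))  # smallest eigenvalue

rng = np.random.default_rng(1)
clean = np.outer(np.linspace(0, 255, 256), np.ones(256))  # smooth gradient
noisy = clean + rng.normal(0, 10.0, clean.shape)          # true sigma = 10
print(f"estimated sigma: {pca_noise_level(noisy):.2f}")
```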

  7. Proposed method of the modeling and simulation of corrosion product behavior in the primary cooling system of fast breeder reactors

    International Nuclear Information System (INIS)

    Matuo, Youichirou; Miyahara, Shinya; Izumi, Yoshinobu

    2011-01-01

    Radioactive corrosion products (CP) are the main cause of personal radiation exposure during maintenance without fuel failure in FBR plants. In order to establish techniques of radiation dose estimation for workers in radiation-controlled areas, the Program SYstem for Corrosion Hazard Evaluation code 'PSYCHE' has been developed. PSYCHE is based on the Solution-Precipitation model. The CP transfer calculation using the Solution-Precipitation model needs a fitting factor for the calculation of CP precipitation. This fitting factor must be determined from values measured in reactors with operating experience; the resulting inability to make accurate predictions for reactors without measured values is therefore a major issue. In this study, in addition to the existing Solution-Precipitation model in PSYCHE, a transfer model of CP species in particle form was applied to calculations of CP behavior in the primary cooling system of the fast breeder reactor MONJU. Based on the calculated results, we estimated the contribution of CP deposition in particle form. It was suggested that the improved model, including the transfer model of CP species in particle form, could be used for evaluation of CP transfer and radiation-source distribution in place of the conventional Solution-Precipitation model with fitting factor in PSYCHE. Moreover, it was predicted that CP particles would tend to be deposited in regions with a high coolant flow rate. (author)

  8. Determining the Uncertainties in Prescribed Burn Emissions Through Comparison of Satellite Estimates to Ground-based Estimates and Air Quality Model Evaluations in Southeastern US

    Science.gov (United States)

    Odman, M. T.; Hu, Y.; Russell, A. G.

    2016-12-01

    Prescribed burning is practiced throughout the US, and most widely in the Southeast, for the purpose of maintaining and improving the ecosystem and reducing the wildfire risk. However, prescribed burn emissions contribute significantly to the trace gas and particulate matter loads in the atmosphere. In places where air quality is already stressed by other anthropogenic emissions, prescribed burns can lead to major health and environmental problems. Air quality modeling efforts are under way to assess the impacts of prescribed burn emissions. Operational forecasts of the impacts are also emerging for use in dynamic management of air quality as well as the burns. Unfortunately, large uncertainties exist in the process of estimating prescribed burn emissions, and these uncertainties limit the accuracy of the burn impact predictions. Prescribed burn emissions are estimated by using either ground-based information or satellite observations. When there is sufficient local information about the burn area, the types of fuels, their consumption amounts, and the progression of the fire, ground-based estimates are more accurate. In the absence of such information, satellites remain the only reliable source for emission estimation. To determine the level of uncertainty in prescribed burn emissions, we compared estimates derived from a burn permit database and other ground-based information to the estimates by the Biomass Burning Emissions Product derived from a constellation of NOAA and NASA satellites. Using these emission estimates we conducted simulations with the Community Multiscale Air Quality (CMAQ) model and predicted trace gas and particulate matter concentrations throughout the Southeast for two consecutive burn seasons (2015 and 2016). In this presentation, we will compare model-predicted concentrations to measurements at monitoring stations and evaluate whether the differences are commensurate with our emission uncertainty estimates. We will also investigate if

  9. Simulation of anthropogenic CO2 uptake in the CCSM3.1 ocean circulation-biogeochemical model: comparison with data-based estimates

    Directory of Open Access Journals (Sweden)

    S. Khatiwala

    2012-04-01

    Full Text Available The global ocean has taken up a large fraction of the CO2 released by human activities since the industrial revolution. Quantifying the oceanic anthropogenic carbon (Cant inventory and its variability is important for predicting the future global carbon cycle. The detailed comparison of data-based and model-based estimates is essential for the validation and continued improvement of our prediction capabilities. So far, three global estimates of oceanic Cant inventory that are "data-based" and independent of global ocean circulation models have been produced: one based on the Δ C* method, and two that are based on constraining surface-to-interior transport of tracers, the TTD method and a maximum entropy inversion method (GF. The GF method, in particular, is capable of reconstructing the history of Cant inventory through the industrial era. In the present study we use forward model simulations of the Community Climate System Model (CCSM3.1 to estimate the Cant inventory and compare the results with the data-based estimates. We also use the simulations to test several assumptions of the GF method, including the assumption of constant climate and circulation, which is common to all the data-based estimates. Though the integrated estimates of global Cant inventories are consistent with each other, the regional estimates show discrepancies up to 50 %. The CCSM3 model underestimates the total Cant inventory, in part due to weak mixing and ventilation in the North Atlantic and Southern Ocean. Analyses of different simulation results suggest that key assumptions about ocean circulation and air-sea disequilibrium in the GF method are generally valid on the global scale, but may introduce errors in Cant estimates on regional scales. The GF method should also be used with caution when predicting future oceanic anthropogenic carbon uptake.

  10. A fast and systematic procedure to develop dynamic models of bioprocesses: application to microalgae cultures

    Directory of Open Access Journals (Sweden)

    J. Mailier

    2010-09-01

    The purpose of this paper is to report on the development of a procedure for inferring black-box, yet biologically interpretable, dynamic models of bioprocesses based on sets of measurements of a few external components (biomass, substrates, and products of interest). The procedure has three main steps: (a) the determination of the number of macroscopic biological reactions linking the measured components; (b) the estimation of a first reaction scheme, which has interesting mathematical properties but might lack a biological interpretation; and (c) the "projection" (or transformation) of this reaction scheme onto a biologically consistent scheme. The advantage of the method is that it allows the fast prototyping of models for the culture of microorganisms that are not well documented. The good performance of the third step of the method is demonstrated by application to an example of microalgal culture.

  11. Empirical model for mean temperature for Indian zone and estimation of precipitable water vapor from ground based GPS measurements

    Directory of Open Access Journals (Sweden)

    C. Suresh Raju

    2007-10-01

    Full Text Available Estimation of precipitable water (PW in the atmosphere from ground-based Global Positioning System (GPS essentially involves modeling the zenith hydrostatic delay (ZHD in terms of surface Pressure (Ps and subtracting it from the corresponding values of zenith tropospheric delay (ZTD to estimate the zenith wet (non-hydrostatic delay (ZWD. This further involves establishing an appropriate model connecting PW and ZWD, which in its simplest case assumed to be similar to that of ZHD. But when the temperature variations are large, for the accurate estimate of PW the variation of the proportionality constant connecting PW and ZWD is to be accounted. For this a water vapor weighted mean temperature (Tm has been defined by many investigations, which has to be modeled on a regional basis. For estimating PW over the Indian region from GPS data, a region specific model for Tm in terms of surface temperature (Ts is developed using the radiosonde measurements from eight India Meteorological Department (IMD stations spread over the sub-continent within a latitude range of 8.5°–32.6° N. Following a similar procedure Tm-based models are also evolved for each of these stations and the features of these site-specific models are compared with those of the region-specific model. Applicability of the region-specific and site-specific Tm-based models in retrieving PW from GPS data recorded at the IGS sites Bangalore and Hyderabad, is tested by comparing the retrieved values of PW with those estimated from the altitude profile of water vapor measured using radiosonde. The values of ZWD estimated at 00:00 UTC and 12:00 UTC are used to test the validity of the models by estimating the PW using the models and comparing it with those obtained from radiosonde data. The region specific Tm-based model is found to be in par with if not better than a

  12. Fast clustering using adaptive density peak detection.

    Science.gov (United States)

    Wang, Xiao-Feng; Xu, Yifan

    2017-12-01

    Common limitations of clustering methods include slow algorithm convergence, instability of the pre-specification of a number of intrinsic parameters, and lack of robustness to outliers. A recent clustering approach proposed a fast search algorithm for cluster centers based on their local densities. However, the selection of the key intrinsic parameters in the algorithm was not systematically investigated. It is relatively difficult to estimate the "optimal" parameters since the original definition of the local density in the algorithm is based on a truncated counting measure. In this paper, we propose a clustering procedure with adaptive density peak detection, where the local density is estimated through nonparametric multivariate kernel estimation. The model parameter can then be calculated from equations with statistical theoretical justification. We also develop an automatic cluster centroid selection method through maximizing an average silhouette index. The advantage and flexibility of the proposed method are demonstrated through simulation studies and the analysis of a few benchmark gene expression data sets. The method needs to perform only a single step without any iteration and thus is fast, with great potential for application to big data analysis. A user-friendly R package ADPclust is developed for public use.
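
    The core of the density-peak idea fits in a short script. The sketch below substitutes a Gaussian KDE for the truncated counting density, as the paper proposes, but omits ADPclust's adaptive bandwidth and automatic silhouette-based centroid selection; the data and the number of centres are illustrative:

```python
# Density-peak clustering sketch: density via Gaussian KDE, delta = distance
# to the nearest point of higher density, centres = largest density*delta.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 0.3, (100, 2)), rng.normal(2, 0.3, (100, 2))])

dens = gaussian_kde(X.T)(X.T)                 # local density at each point
d = np.linalg.norm(X[:, None] - X[None], axis=2)
delta = np.empty(len(X))                      # distance to a denser point
for i in range(len(X)):
    higher = dens > dens[i]
    delta[i] = d[i, higher].min() if higher.any() else d[i].max()

centers = np.argsort(dens * delta)[-2:]       # top-2 density-peak scores
labels = centers[np.argmin(d[:, centers], axis=1)]   # simplified assignment
print("cluster centres:", X[centers].round(2))
print("cluster sizes:", np.unique(labels, return_counts=True)[1])
```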

  13. A Quasiphysics Intelligent Model for a Long Range Fast Tool Servo

    Science.gov (United States)

    Liu, Qiang; Zhou, Xiaoqin; Lin, Jieqiong; Xu, Pengzi; Zhu, Zhiwei

    2013-01-01

    Accurately modeling the dynamic behaviors of a fast tool servo (FTS) is one of the key issues in the ultraprecision positioning of the cutting tool. Herein, a quasiphysics intelligent model (QPIM), integrating a linear physics model (LPM) and a radial basis function (RBF) based neural model (NM), is developed to accurately describe the dynamic behaviors of a voice coil motor (VCM) actuated long-range fast tool servo (LFTS). To identify the parameters of the LPM, a novel opposition-based self-adaptive replacement differential evolution (OSaRDE) algorithm is proposed, which is shown to converge faster without compromising solution quality and to outperform similar evolutionary algorithms considered for comparison. The modeling errors of the LPM and the QPIM are investigated by experiment. The modeling error of the LPM presents an obvious trend component of about ±1.15% of the full span range, verifying the efficiency of the proposed OSaRDE algorithm for system identification. As for the QPIM, the trend component in the residual error of the LPM is well suppressed, and the error of the QPIM remains at noise level. All the results verify the efficiency and superiority of the proposed modeling and identification approaches. PMID:24163627

  14. Solar radiation estimation based on the insolation

    International Nuclear Information System (INIS)

    Assis, F.N. de; Steinmetz, S.; Martins, S.R.; Mendez, M.E.G.

    1998-01-01

    A series of daily global solar radiation data measured by an Eppley pyranometer was used to test PEREIRA and VILLA NOVA's (1997) model for estimating the radiation potential based on the instantaneous value measured at solar noon. The model also allows estimating the parameters of PRESCOTT's equation (1940) assuming a = 0.29 cos φ. The results demonstrated the model's validity for the studied conditions. Simultaneously, the hypothesis of generalizing the use of insolation-based radiation estimation formulas, using K = K0 (0.29 cos φ + 0.50 n/N), was analysed and confirmed [pt]
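
    Under the stated parameterization, the generalized estimation formula is straightforward to apply; the snippet below is a minimal sketch with illustrative names, taking the potential radiation K0 and the sunshine fraction n/N as inputs.

```python
import numpy as np

def global_radiation_from_sunshine(K0, n_hours, N_hours, lat_deg):
    """Estimate daily global solar radiation with the generalized
    Angstrom-Prescott form tested in the paper, a = 0.29*cos(phi), b = 0.50:
    K = K0 * (0.29*cos(phi) + 0.50 * n/N). K0 is the potential radiation
    in the same units as K; n/N is the relative sunshine duration."""
    phi = np.radians(lat_deg)
    return K0 * (0.29 * np.cos(phi) + 0.50 * n_hours / N_hours)

# Example: K0 = 38 MJ m-2 day-1, 7 h of bright sunshine out of 12 possible, 31 deg S
print(round(global_radiation_from_sunshine(38.0, 7.0, 12.0, -31.0), 2))
```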

  15. Voxel-based registration of simulated and real patient CBCT data for accurate dental implant pose estimation

    Science.gov (United States)

    Moreira, António H. J.; Queirós, Sandro; Morais, Pedro; Rodrigues, Nuno F.; Correia, André Ricardo; Fernandes, Valter; Pinho, A. C. M.; Fonseca, Jaime C.; Vilaça, João. L.

    2015-03-01

    The success of a dental implant-supported prosthesis is directly linked to the accuracy obtained during the implant's pose estimation (position and orientation). Although traditional impression techniques and recent digital acquisition methods are acceptably accurate, a simultaneously fast, accurate and operator-independent methodology is still lacking. To this end, an image-based framework is proposed to estimate the patient-specific implant's pose using cone-beam computed tomography (CBCT) and prior knowledge of the implanted model. The pose estimation is accomplished in a three-step approach: (1) a region of interest is extracted from the CBCT data using two operator-defined points along the implant's main axis; (2) a simulated CBCT volume of the known implanted model is generated through Feldkamp-Davis-Kress reconstruction and coarsely aligned to the defined axis; and (3) a voxel-based rigid registration is performed to optimally align the patient and simulated CBCT data, extracting the implant's pose from the optimal transformation. Three experiments were performed to evaluate the framework: (1) an in silico study using 48 implants distributed through 12 three-dimensional synthetic mandibular models; (2) an in vitro study using an artificial mandible with 2 dental implants acquired with an i-CAT system; and (3) two clinical case studies. The results showed positional errors of 67±34 μm and 108 μm, and angular misfits of 0.15±0.08° and 1.4°, for experiments 1 and 2, respectively. Moreover, in experiment 3, visual assessment of the clinical data showed a coherent alignment of the reference implant. Overall, a novel image-based framework for implant pose estimation from CBCT data was proposed, showing accurate results in agreement with dental prosthesis modelling requirements.

  16. Multi-Model Estimation Based Moving Object Detection for Aerial Video

    Directory of Open Access Journals (Sweden)

    Yanning Zhang

    2015-04-01

    Full Text Available With the rapid development of UAV (Unmanned Aerial Vehicle) technology, moving-target detection for aerial video has become a popular research topic in computer vision. Most existing methods follow the registration-detection framework and can only deal with simple background scenes; they tend to fail in complex multi-background scenarios containing, for example, viaducts, buildings and trees. In this paper, we break through the single-background constraint and perceive complex scenes accurately by automatically estimating multiple background models. First, we segment the scene into several color blocks and estimate the dense optical flow. Then, we calculate an affine transformation model for each large-area block and merge consistent models. Finally, for all small-area blocks, we calculate each pixel's degree of subordination to the multiple background models. Moving objects are segmented by an energy-optimization method solved via graph cuts. Extensive experimental results on public aerial videos show that, owing to the multi-background model estimation and the energy-minimization analysis of each pixel's subordinate relationship to the models, our method effectively removes buildings, trees and other sources of false alarms and detects moving objects correctly.
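
    The per-block affine fit is the core numerical step. The numpy sketch below (illustrative names; no RANSAC, block segmentation, or model merging) shows how one background model can be estimated from dense-flow samples and how a per-pixel residual against it, from which a subordination score could be derived, is computed.

```python
import numpy as np

def fit_affine_from_flow(points, flow):
    """Least-squares affine background model for one color block.
    points: (N,2) pixel coordinates; flow: (N,2) optical-flow vectors.
    Solves for the 6 parameters mapping (x,y) -> (x',y')."""
    x, y = points[:, 0], points[:, 1]
    ones = np.ones_like(x)
    # rows 0,2,4,... encode x' = a*x + b*y + tx; rows 1,3,5,... encode y'
    A = np.zeros((2 * len(points), 6))
    A[0::2, 0:3] = np.column_stack([x, y, ones])
    A[1::2, 3:6] = np.column_stack([x, y, ones])
    b = (points + flow).reshape(-1)        # target positions, interleaved x',y'
    params, *_ = np.linalg.lstsq(A, b, rcond=None)
    return params.reshape(2, 3)

def model_residual(params, points, flow):
    """Per-pixel distance to the affine prediction; small residual means the
    pixel is well explained by (subordinate to) this background model."""
    homog = np.column_stack([points, np.ones(len(points))])
    pred = homog @ params.T
    return np.linalg.norm(pred - (points + flow), axis=1)

# toy usage: a pure-translation flow is fit exactly, residuals ~ 0
pts = np.array([[0, 0], [10, 0], [0, 10], [10, 10]], float)
flw = np.tile([1.0, -0.5], (4, 1))
P = fit_affine_from_flow(pts, flw)
print(np.round(model_residual(P, pts, flw), 6))
```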

  17. Fast and robust estimation of spectro-temporal receptive fields using stochastic approximations.

    Science.gov (United States)

    Meyer, Arne F; Diepenbrock, Jan-Philipp; Ohl, Frank W; Anemüller, Jörn

    2015-05-15

    The receptive field (RF) represents the signal preferences of sensory neurons and is the primary analysis method for understanding sensory coding. While it is essential to estimate a neuron's RF, finding numerical solutions to increasingly complex RF models can become computationally intensive, in particular for high-dimensional stimuli or when many neurons are involved. Here we propose an optimization scheme based on stochastic approximations that facilitates this task. The basic idea is to derive solutions on a random subset rather than computing the full solution on the available data set. To test this, we applied different optimization schemes based on stochastic gradient descent (SGD) to both the generalized linear model (GLM) and a recently developed classification-based RF estimation approach. Using simulated and recorded responses, we demonstrate that RF parameter optimization based on state-of-the-art SGD algorithms produces robust estimates of the spectro-temporal receptive field (STRF). Results on recordings from the auditory midbrain demonstrate that stochastic approximations preserve both the predictive power and the tuning properties of STRFs. A correlation of 0.93 with the STRF derived from the full solution may be obtained in less than 10% of the full solution's estimation time. We also present an online algorithm that allows simultaneous monitoring of STRF properties of more than 30 neurons on a single computer. The proposed approach may not only prove helpful for large-scale recordings but also provides a more comprehensive characterization of neural tuning in experiments than standard tuning curves. Copyright © 2015 Elsevier B.V. All rights reserved.
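
    To make the stochastic-approximation idea concrete, here is a minimal mini-batch SGD sketch for the simplest linear RF model; each step touches only a random subset of samples. The paper's GLM and classification-based estimators use the same loop with different gradients, and all names below are illustrative.

```python
import numpy as np

def sgd_strf(stimulus, response, lr=1e-2, batch=256, epochs=20, seed=0):
    """Mini-batch SGD for a linear receptive field r_t = w . s_t,
    minimizing squared error; n must be divisible by batch here."""
    rng = np.random.default_rng(seed)
    n, d = stimulus.shape
    w = np.zeros(d)
    for _ in range(epochs):
        for idx in rng.permutation(n).reshape(-1, batch):
            S, r = stimulus[idx], response[idx]
            grad = S.T @ (S @ w - r) / batch   # gradient on this random subset
            w -= lr * grad
    return w

# toy check: recover a known filter from white-noise stimuli
rng = np.random.default_rng(1)
S = rng.normal(size=(4096, 32))
w_true = np.sin(np.linspace(0, np.pi, 32))
r = S @ w_true + 0.1 * rng.normal(size=4096)
print(np.corrcoef(sgd_strf(S, r), w_true)[0, 1])   # close to 1
```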

  18. Estimating cardiovascular disease incidence from prevalence: a spreadsheet based model

    Directory of Open Access Journals (Sweden)

    Xue Feng Hu

    2017-01-01

    Full Text Available Abstract Background Disease incidence and prevalence are both core indicators of population health, but incidence is generally not as readily accessible as prevalence. Cohort studies and electronic health record systems are the two major ways to estimate disease incidence; the former is time-consuming and expensive, and the latter is not available in most developing countries. Alternatively, mathematical models can be used to estimate disease incidence from prevalence. Methods We proposed and validated a method to estimate the age-standardized incidence of cardiovascular disease (CVD), using prevalence data from successive surveys and mortality data from empirical studies. Hallett's method, designed for estimating HIV infections in Africa, was modified to estimate the incidence of myocardial infarction (MI) in the U.S. population and the incidence of heart disease in the Canadian population. Results Model-derived estimates were in close agreement with observed incidence from cohort studies and population surveillance systems. The method correctly captured the trend in incidence given sufficient waves of cross-sectional surveys. The estimated rate of decline in MI incidence in the U.S. population was in accordance with the literature, and the method was superior to a closed-cohort approach for estimating the trend of population cardiovascular disease incidence. Conclusion It is possible to estimate CVD incidence accurately at the population level from cross-sectional prevalence data. This method has the potential to be used for age- and sex-specific incidence estimates, or to be expanded to other chronic conditions.
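
    A heavily simplified, one-compartment version of such a back-calculation is sketched below to show the mechanics: prevalent cases are depleted by mortality between two surveys and replenished by incident cases. This is far cruder than the paper's adaptation of Hallett's method (no age structure, constant rates, closed population, and incident cases approximated with healthy-group survival), and all names are illustrative.

```python
def incidence_from_prevalence(p1, p2, dt_years, mort_diseased, mort_healthy):
    """Rough annual incidence among the disease-free, from two prevalence
    surveys p1, p2 (proportions) taken dt_years apart, given annual
    mortality rates in the diseased and healthy groups."""
    survive_d = (1.0 - mort_diseased) ** dt_years
    survive_h = (1.0 - mort_healthy) ** dt_years
    # survivors of the initial cohort (new cases approximated as healthy-surviving)
    total = p1 * survive_d + (1.0 - p1) * survive_h
    # p2 ~= (surviving old cases + new cases) / total  ->  solve for new cases
    new_cases = p2 * total - p1 * survive_d
    return new_cases / (dt_years * (1.0 - p1) * survive_h)

# Example: prevalence rises from 6% to 7% over 5 years -> ~3.8 per 1000 per year
print(incidence_from_prevalence(0.06, 0.07, 5, 0.04, 0.01))
```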

  19. An improved principal component analysis based region matching method for fringe direction estimation

    Science.gov (United States)

    He, A.; Quan, C.

    2018-04-01

    The principal component analysis (PCA) and region matching combined method is effective for fringe direction estimation. However, its mask construction algorithm for region matching fails in some circumstances, and its algorithm for converting orientation to direction in mask areas is computationally heavy and non-optimized. We propose an improved PCA-based region matching method for fringe direction estimation, which includes an improved and robust mask construction scheme and a fast, optimized orientation-to-direction conversion algorithm for the mask areas. Along with the estimated fringe direction map, the fringe pattern filtered by automatic selective reconstruction modification and enhanced fast empirical mode decomposition (ASRm-EFEMD) is used for the Hilbert spiral transform (HST) to demodulate the phase. Subsequently, the windowed Fourier ridge (WFR) method is used for refinement of the phase. The robustness and effectiveness of the proposed method are demonstrated on both simulated and experimental fringe patterns.

  20. A fast iterative recursive least squares algorithm for Wiener model identification of highly nonlinear systems.

    Science.gov (United States)

    Kazemi, Mahdi; Arefi, Mohammad Mehdi

    2017-03-01

    In this paper, an online identification algorithm is presented for nonlinear systems in the presence of output colored noise. The proposed method is based on the extended recursive least squares (ERLS) algorithm, where the identified system is in polynomial Wiener form. To this end, an unknown intermediate signal is estimated by an inner iterative algorithm. The iterative recursive algorithm adaptively modifies the parameter vector of the Wiener model when the system parameters vary. In addition, to increase the robustness of the proposed method against variations, a robust RLS algorithm is applied to the model. Simulation results are provided to show the effectiveness of the proposed approach. The results confirm that the proposed method has a fast convergence rate and robust characteristics, which increases the efficiency of the proposed model and identification approach. For instance, a FIT criterion of 92% is achieved for the CSTR process using about 400 data points. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
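
    The recursive core of such an identification scheme is ordinary RLS with a forgetting factor; the sketch below shows that inner update only (the Wiener-specific iterative estimation of the intermediate signal is omitted), with illustrative names.

```python
import numpy as np

class RecursiveLeastSquares:
    """Plain RLS with forgetting factor, the workhorse inside ERLS-style
    identification; lam < 1 lets the estimate track slowly varying systems."""

    def __init__(self, n_params, lam=0.98, p0=1e3):
        self.theta = np.zeros(n_params)    # parameter estimate
        self.P = np.eye(n_params) * p0     # inverse correlation matrix
        self.lam = lam

    def update(self, phi, y):
        """phi: regressor vector; y: measured output; returns prediction error."""
        Pphi = self.P @ phi
        k = Pphi / (self.lam + phi @ Pphi)         # gain vector
        err = y - phi @ self.theta                 # a priori error
        self.theta += k * err
        self.P = (self.P - np.outer(k, Pphi)) / self.lam
        return err

# identify y_t = 0.5*y_{t-1} + 2.0*u_t from noisy data
rng = np.random.default_rng(0)
u = rng.normal(size=500)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.5 * y[t-1] + 2.0 * u[t] + 0.01 * rng.normal()
rls = RecursiveLeastSquares(2)
for t in range(1, 500):
    rls.update(np.array([y[t-1], u[t]]), y[t])
print(np.round(rls.theta, 3))                      # ~[0.5, 2.0]
```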

  1. On-board adaptive model for state of charge estimation of lithium-ion batteries based on Kalman filter with proportional integral-based error adjustment

    Science.gov (United States)

    Wei, Jingwen; Dong, Guangzhong; Chen, Zonghai

    2017-10-01

    With the rapid development of battery-powered electric vehicles, the lithium-ion battery plays a critical role in the reliability of the vehicle system. In order to provide timely management and protection for battery systems, it is necessary to develop a reliable battery model and accurate battery parameter estimation to describe the battery's dynamic behavior. This paper therefore focuses on an on-board adaptive model for state-of-charge (SOC) estimation of lithium-ion batteries. Firstly, a first-order equivalent circuit battery model is employed to describe the battery's dynamic characteristics. Then, the recursive least squares algorithm and an off-line identification method are used to provide good initial values of the model parameters, to ensure filter stability and reduce the convergence time. Thirdly, an extended Kalman filter (EKF) is applied to estimate the battery SOC and model parameters online. Considering that the EKF is essentially a first-order Taylor approximation of the battery model, which contains inevitable model errors, a proportional-integral-based error adjustment technique is employed to improve the performance of the EKF and correct the model parameters. Finally, experimental results on lithium-ion batteries indicate that the proposed EKF with proportional-integral-based error adjustment provides a robust and accurate battery model and online parameter estimation.
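
    A bare-bones version of the EKF loop for a first-order RC battery model is sketched below; the OCV curve, the parameter values, the RLS initialization, and the paper's proportional-integral error-adjustment branch are placeholders or omitted, and all names are illustrative.

```python
import numpy as np

# illustrative parameters: 1 s step, 2 Ah cell, ohmic and polarization RC values
DT, Q_AH, R0, R1, C1 = 1.0, 2.0 * 3600, 0.05, 0.02, 1000.0

def ocv(soc):                    # placeholder OCV(SOC) curve
    return 3.4 + 0.8 * soc

def d_ocv(soc):                  # its derivative w.r.t. SOC
    return 0.8

def ekf_step(x, P, current, v_meas, q=1e-7, r=1e-3):
    """One EKF iteration. State x = [SOC, polarization voltage Up];
    positive current = discharge."""
    a = np.exp(-DT / (R1 * C1))
    F = np.diag([1.0, a])                              # state-transition Jacobian
    x_pred = np.array([x[0] - DT * current / Q_AH,     # coulomb counting
                       a * x[1] + R1 * (1 - a) * current])
    P_pred = F @ P @ F.T + q * np.eye(2)
    v_pred = ocv(x_pred[0]) - x_pred[1] - R0 * current # terminal-voltage model
    H = np.array([d_ocv(x_pred[0]), -1.0])             # measurement Jacobian
    S = H @ P_pred @ H + r
    K = P_pred @ H / S                                 # Kalman gain
    x_new = x_pred + K * (v_meas - v_pred)
    P_new = (np.eye(2) - np.outer(K, H)) @ P_pred
    return x_new, P_new

x, P = np.array([0.9, 0.0]), np.eye(2) * 1e-3
x, P = ekf_step(x, P, current=1.0, v_meas=4.07)
print(np.round(x, 4))            # updated [SOC, Up]
```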

  2. An adaptive neuro fuzzy model for estimating the reliability of component-based software systems

    Directory of Open Access Journals (Sweden)

    Kirti Tyagi

    2014-01-01

    Full Text Available Although many algorithms and techniques have been developed for estimating the reliability of component-based software systems (CBSSs), much more research is needed. Accurate estimation of the reliability of a CBSS is difficult because it depends on two factors: component reliability and glue code reliability. Moreover, reliability is a real-world phenomenon with many associated real-time problems. Soft computing techniques can help to solve problems whose solutions are uncertain or unpredictable, and a number of soft computing approaches for estimating CBSS reliability have been proposed. These techniques learn from the past and capture existing patterns in data. The two basic elements of soft computing are neural networks and fuzzy logic. In this paper, we propose a model for estimating CBSS reliability, known as an adaptive neuro-fuzzy inference system (ANFIS), that is based on these two basic elements of soft computing, and we compare its performance with that of a plain FIS (fuzzy inference system) on different data sets.

  3. Characteristics of a stable arc based on FAST and MIRACLE observations

    Directory of Open Access Journals (Sweden)

    P. Janhunen

    2000-02-01

    Full Text Available A stable evening-sector arc is studied using observations from the FAST satellite at 1250 km altitude and the MIRACLE ground-based network, which contains all-sky cameras, coherent radars (STARE), and magnetometers. Both FAST and STARE observe a northward electric field region about 200 km wide, with a field magnitude of about 50 mV/m, southward of the arc, which is a typical signature for an evening-sector arc. The field-aligned currents determined from FAST electron and magnetometer data are in rather good agreement within the arcs. Outside the arcs, the electron data misses the current carriers of the downward FAC, probably because the current is mainly carried by electrons of smaller energy than the instrument threshold. Studying the westward propagation speed of small undulations associated with the arc using the all-sky cameras gives a velocity of about 2 km/s. This speed is higher than the background ionospheric plasma speed (about 1 km/s), but it agrees rather well with the idea originally proposed by Davis that the undulations reflect an E × B motion in the acceleration region. The ground magnetograms indicate that the main current flows slightly south of the arc. Computing the ionospheric conductivity from FAST electron data and using the ground magnetograms to estimate the current yields an ionospheric electric field pattern in rather good agreement with the FAST results. Key words: Ionosphere (auroral ionosphere; ionosphere-magnetosphere interactions) - Magnetospheric physics (auroral phenomena)

  4. Sliding-MOMP Based Channel Estimation Scheme for ISDB-T Systems

    Directory of Open Access Journals (Sweden)

    Ziji Ma

    2016-01-01

    Full Text Available Compressive sensing (CS) based channel estimation has shown its advantage of accurately reconstructing sparse signals with fewer pilots in OFDM systems. However, the high computational cost of CS methods, due to linear programming, significantly restricts their implementation in practical applications. In this paper, we propose a reduced-complexity channel estimation scheme based on modified orthogonal matching pursuit with sliding windows for the ISDB-T (Integrated Services Digital Broadcasting - Terrestrial) system. The proposed scheme reduces the computational cost by limiting the search region and making effective use of the previous estimation result. In addition, an adaptive tracking strategy with a sliding sampling window improves the robustness of CS-based methods and guarantees the accuracy of channel matrix reconstruction, even for fast time-variant channels. Computer simulations demonstrate the resulting improvements in bit error rate and computational complexity for the ISDB-T system.
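
    Plain orthogonal matching pursuit is the starting point of the proposed scheme; a minimal sketch follows, in which the paper's sliding-window restriction of the search region would amount to limiting the candidate set in each greedy step. Names and dimensions are illustrative.

```python
import numpy as np

def omp(A, y, sparsity):
    """Orthogonal matching pursuit: greedily pick the dictionary column most
    correlated with the residual, then re-fit all picked taps jointly."""
    residual = y.astype(complex)
    support = []
    for _ in range(sparsity):
        candidates = np.abs(A.conj().T @ residual)   # correlation with residual
        support.append(int(np.argmax(candidates)))
        As = A[:, support]
        h_s, *_ = np.linalg.lstsq(As, y, rcond=None) # least squares on support
        residual = y - As @ h_s
    h = np.zeros(A.shape[1], complex)
    h[support] = h_s
    return h

# toy sparse channel recovered from 64 pilot observations of a length-128 channel
rng = np.random.default_rng(0)
A = rng.normal(size=(64, 128)) / np.sqrt(64)         # pilot measurement matrix
h_true = np.zeros(128); h_true[[3, 40, 77]] = [1.0, -0.6, 0.3]
y = A @ h_true
print(np.flatnonzero(np.abs(omp(A, y, 3)) > 1e-6))   # expected: [ 3 40 77]
```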

  5. Adaptive neuro-fuzzy based inferential sensor model for estimating the average air temperature in space heating systems

    Energy Technology Data Exchange (ETDEWEB)

    Jassar, S.; Zhao, L. [Department of Electrical and Computer Engineering, Ryerson University, 350 Victoria Street, Toronto, ON (Canada); Liao, Z. [Department of Architectural Science, Ryerson University (Canada)

    2009-08-15

    Heating systems are conventionally controlled by open-loop control systems because of the absence of practical methods for estimating the average air temperature in the built environment. An inferential sensor model, based on adaptive neuro-fuzzy inference system modeling, is developed for estimating the average air temperature in multi-zone space heating systems. This modeling technique combines the expert knowledge of fuzzy inference systems (FISs) with the learning capability of artificial neural networks (ANNs). A hybrid learning algorithm, which combines the least-squares method and the back-propagation algorithm, is used to identify the parameters of the network. This paper describes an adaptive-network-based inferential sensor that can be used to design closed-loop control for space heating systems. The research aims to improve the overall performance of heating systems in terms of energy efficiency and thermal comfort. The average air temperature estimates produced by the developed model are in strong agreement with the experimental results. (author)

  6. A fast and flexible reactor physics model for simulating neutron spectra and depletion in fast reactors - 202

    International Nuclear Information System (INIS)

    Recktenwald, G.D.; Bronk, L.A.; Deinert, M.R.

    2010-01-01

    Determining the time-dependent concentration of isotopes within a nuclear reactor core is central to the analysis of nuclear fuel cycles. We present a fast, flexible tool for determining the time-dependent neutron spectrum within fast reactors. The code (VBUDS: visualization, burnup, depletion and spectra) uses a two-region, multigroup collision probability model to simulate the energy-dependent neutron flux and tracks the buildup and burnout of 24 actinides, as well as fission products. While originally developed for LWR simulations, the model is shown to produce fast reactor spectra with a high degree of fidelity to available fast reactor benchmarks. (authors)

  7. Profiling Fast Healthcare Interoperability Resources (FHIR) of Family Health History based on the Clinical Element Models.

    Science.gov (United States)

    Lee, Jaehoon; Hulse, Nathan C; Wood, Grant M; Oniki, Thomas A; Huff, Stanley M

    2016-01-01

    In this study we developed a Fast Healthcare Interoperability Resources (FHIR) profile to support exchanging full-pedigree family health history (FHH) information across the multiple systems and applications used by clinicians, patients, and researchers. We used previously developed clinical element models (CEMs) that are capable of representing FHH information, and derived the essential data elements, including attributes, constraints, and value sets. We analyzed the gaps between the FHH CEM elements and existing FHIR resources. Based on this analysis, we developed a profile that consists of 1) FHIR resources for the essential FHH data elements, 2) extensions for additional elements that were not covered by the resources, and 3) a structured definition to integrate patient and family member information in a FHIR message. We implemented the profile using an open-source FHIR framework and validated it using patient-entered FHH data captured through a locally developed FHH tool.

  8. Fast mutual-information-based contrast enhancement

    Science.gov (United States)

    Cao, Gang; Yu, Lifang; Tian, Huawei; Huang, Xianglin; Wang, Yongbin

    2017-07-01

    Recently, T. Celik proposed an effective image contrast enhancement (CE) method based on spatial mutual information and PageRank (SMIRANK). According to state-of-the-art evaluation criteria, it achieves the best visual enhancement quality among existing global CE methods. However, SMIRANK runs much slower than its counterparts, such as histogram equalization (HE) and adaptive gamma correction, and low computational complexity is also required of a good CE algorithm. In this paper, we propose a fast SMIRANK algorithm, called FastSMIRANK. It integrates both spatial and gray-level downsampling into the generation of the pixel-value mapping function. Moreover, the computation of rank vectors is sped up by replacing PageRank with a simple yet efficient row-based operation on the mutual information matrix. Extensive experimental results show that the proposed FastSMIRANK accelerates SMIRANK by about 20 times and is even faster than HE, while preserving comparable enhancement quality.

  9. A Generic Simulation Approach for the Fast and Accurate Estimation of the Outage Probability of Single Hop and Multihop FSO Links Subject to Generalized Pointing Errors

    KAUST Repository

    Ben Issaid, Chaouki; Park, Kihong; Alouini, Mohamed-Slim

    2017-01-01

    When assessing the performance of free space optical (FSO) communication systems, the outage probability encountered is generally very small, and thereby the use of naive Monte Carlo simulations becomes prohibitively expensive. To estimate these rare event probabilities, we propose in this work an importance sampling approach which is based on the exponential twisting technique to offer fast and accurate results. In fact, we consider a variety of turbulence regimes, and we investigate the outage probability of FSO communication systems, under a generalized pointing error model based on the Beckmann distribution, for both single and multihop scenarios. Selected numerical simulations are presented to show the accuracy and the efficiency of our approach compared to naive Monte Carlo.

  10. A Generic Simulation Approach for the Fast and Accurate Estimation of the Outage Probability of Single Hop and Multihop FSO Links Subject to Generalized Pointing Errors

    KAUST Repository

    Ben Issaid, Chaouki

    2017-07-28

    When assessing the performance of free space optical (FSO) communication systems, the outage probability encountered is generally very small, and thereby the use of naive Monte Carlo simulations becomes prohibitively expensive. To estimate these rare event probabilities, we propose in this work an importance sampling approach which is based on the exponential twisting technique to offer fast and accurate results. In fact, we consider a variety of turbulence regimes, and we investigate the outage probability of FSO communication systems, under a generalized pointing error model based on the Beckmann distribution, for both single and multihop scenarios. Selected numerical simulations are presented to show the accuracy and the efficiency of our approach compared to naive Monte Carlo.
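
    The exponential-twisting idea is easiest to see on a Gaussian toy problem: tilt the sampling density toward the rare set and correct each sample with the likelihood ratio. The sketch below estimates a Gaussian tail probability this way; the paper applies the same mechanics to the Beckmann pointing-error and turbulence models, and the names here are illustrative.

```python
import numpy as np

def rare_tail_prob_is(gamma, n=100_000, seed=0):
    """Importance-sampling estimate of P(Z > gamma) for Z ~ N(0,1), by
    exponential twisting: sample from the tilted density N(theta, 1) and
    reweight by the likelihood ratio exp(-theta*x + theta^2/2)."""
    theta = gamma                         # twist so the rare set is hit often
    rng = np.random.default_rng(seed)
    x = rng.normal(theta, 1.0, n)         # samples from the tilted law
    w = np.exp(-theta * x + 0.5 * theta**2)
    return np.mean((x > gamma) * w)

# P(Z > 5) ~ 2.87e-7: naive MC with 1e5 samples would see ~0 hits
print(rare_tail_prob_is(5.0))
```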

  11. Neighborhood fast food restaurants and fast food consumption: a national study.

    Science.gov (United States)

    Richardson, Andrea S; Boone-Heinonen, Janne; Popkin, Barry M; Gordon-Larsen, Penny

    2011-07-08

    Recent studies suggest that neighborhood fast food restaurant availability is related to greater obesity, yet few studies have investigated whether neighborhood fast food restaurant availability promotes fast food consumption. Our aim was to estimate the effect of neighborhood fast food availability on frequency of fast food consumption in a national sample of young adults, a population at high risk for obesity. We used national data from U.S. young adults enrolled in wave III (2001-02; ages 18-28) of the National Longitudinal Study of Adolescent Health (n = 13,150). Urbanicity-stratified multivariate negative binomial regression models were used to examine cross-sectional associations between neighborhood fast food availability and individual-level self-reported fast food consumption frequency, controlling for individual and neighborhood characteristics. In adjusted analysis, fast food availability was not associated with weekly frequency of fast food consumption in non-urban or low- or high-density urban areas. Policies aiming to reduce neighborhood availability as a means to reduce fast food consumption among young adults may be unsuccessful. Consideration of fast food outlets near school or workplace locations, factors specific to more or less urban settings, and the role of individual lifestyle attitudes and preferences are needed in future research.

  12. Neighborhood fast food restaurants and fast food consumption: A national study

    Directory of Open Access Journals (Sweden)

    Gordon-Larsen Penny

    2011-07-01

    Full Text Available Abstract Background Recent studies suggest that neighborhood fast food restaurant availability is related to greater obesity, yet few studies have investigated whether neighborhood fast food restaurant availability promotes fast food consumption. Our aim was to estimate the effect of neighborhood fast food availability on frequency of fast food consumption in a national sample of young adults, a population at high risk for obesity. Methods We used national data from U.S. young adults enrolled in wave III (2001-02; ages 18-28) of the National Longitudinal Study of Adolescent Health (n = 13,150). Urbanicity-stratified multivariate negative binomial regression models were used to examine cross-sectional associations between neighborhood fast food availability and individual-level self-reported fast food consumption frequency, controlling for individual and neighborhood characteristics. Results In adjusted analysis, fast food availability was not associated with weekly frequency of fast food consumption in non-urban or low- or high-density urban areas. Conclusions Policies aiming to reduce neighborhood availability as a means to reduce fast food consumption among young adults may be unsuccessful. Consideration of fast food outlets near school or workplace locations, factors specific to more or less urban settings, and the role of individual lifestyle attitudes and preferences are needed in future research.

  13. Use of models in large-area forest surveys: comparing model-assisted, model-based and hybrid estimation

    Science.gov (United States)

    Goran Stahl; Svetlana Saarela; Sebastian Schnell; Soren Holm; Johannes Breidenbach; Sean P. Healey; Paul L. Patterson; Steen Magnussen; Erik Naesset; Ronald E. McRoberts; Timothy G. Gregoire

    2016-01-01

    This paper focuses on the use of models for increasing the precision of estimators in large-area forest surveys. It is motivated by the increasing availability of remotely sensed data, which facilitates the development of models predicting the variables of interest in forest surveys. We present, review and compare three different estimation frameworks where...

  14. Estimating methane emissions from landfills based on rainfall, ambient temperature, and waste composition: The CLEEN model.

    Science.gov (United States)

    Karanjekar, Richa V; Bhatt, Arpita; Altouqui, Said; Jangikhatoonabad, Neda; Durai, Vennila; Sattler, Melanie L; Hossain, M D Sahadat; Chen, Victoria

    2015-12-01

    Accurately estimating landfill methane emissions is important for quantifying a landfill's greenhouse gas emissions and power generation potential. Current models, including LandGEM and IPCC, often greatly simplify the treatment of factors like rainfall and ambient temperature, which can substantially impact gas production. The newly developed Capturing Landfill Emissions for Energy Needs (CLEEN) model aims to improve landfill methane generation estimates while still requiring inputs that are fairly easy to obtain: waste composition, annual rainfall, and ambient temperature. To develop the model, methane generation was measured from 27 laboratory-scale landfill reactors with varying waste compositions (ranging from 0% to 100%), average rainfall rates of 2, 6, and 12 mm/day, and temperatures of 20, 30, and 37°C, according to a statistical experimental design. The refuse components considered were the major biodegradable wastes (food, paper, yard/wood, and textile) as well as inert inorganic waste. Based on the data collected, a multiple linear regression equation (R² = 0.75) was developed to predict the first-order methane generation rate constant k as a function of waste composition, annual rainfall, and temperature. Because laboratory methane generation rates exceed field rates, a second scale-up regression equation for k was developed using actual gas-recovery data from 11 landfills in high-income countries with conventional operation. The CLEEN model was then built by incorporating both regression equations into a first-order decay model for estimating methane generation rates from landfills. CLEEN model values were compared to actual field data from 6 US landfills and to estimates from LandGEM and IPCC; for 4 of the 6 cases, the CLEEN estimates were the closest to the actual values. Copyright © 2015 Elsevier Ltd. All rights reserved.
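
    The first-order-decay core that the CLEEN regressions feed into can be sketched in a few lines; the k and L0 values below are illustrative placeholders (CLEEN would instead supply a k predicted from waste composition, rainfall, and temperature), and the function name is hypothetical.

```python
import numpy as np

def methane_generation(waste_per_year, k, L0, years):
    """First-order-decay methane generation, LandGEM-style core.
    waste_per_year: tonnes accepted in each year; k: decay rate (1/yr);
    L0: methane potential (m3 CH4 / tonne). Returns m3 CH4 per year."""
    q = np.zeros(years)
    for placed_year, mass in enumerate(waste_per_year):
        for t in range(placed_year + 1, years):
            age = t - placed_year
            q[t] += k * L0 * mass * np.exp(-k * age)   # decaying contribution
    return q

# 10 years of 50,000 t/yr acceptance, evaluated over a 30-year horizon
q = methane_generation([50_000] * 10, k=0.05, L0=100.0, years=30)
print(int(q.argmax()), round(float(q.max())))   # generation peaks at closure
```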

  15. Fast Biological Modeling for Voxel-based Heavy Ion Treatment Planning Using the Mechanistic Repair-Misrepair-Fixation Model and Nuclear Fragment Spectra

    Energy Technology Data Exchange (ETDEWEB)

    Kamp, Florian [Department of Therapeutic Radiology, Yale University School of Medicine, New Haven, Connecticut (United States); Department of Radiation Oncology, Technische Universität München, Klinikum Rechts der Isar, München (Germany); Physik-Department, Technische Universität München, Garching (Germany); Cabal, Gonzalo [Experimental Physics–Medical Physics, Ludwig Maximilians University Munich, Garching (Germany); Mairani, Andrea [Medical Physics Unit, Centro Nazionale Adroterapia Oncologica (CNAO), Pavia (Italy); Heidelberg Ion-Beam Therapy Center, Heidelberg (Germany); Parodi, Katia [Experimental Physics–Medical Physics, Ludwig Maximilians University Munich, Garching (Germany); Wilkens, Jan J. [Department of Radiation Oncology, Technische Universität München, Klinikum Rechts der Isar, München (Germany); Physik-Department, Technische Universität München, Garching (Germany); Carlson, David J., E-mail: david.j.carlson@yale.edu [Department of Therapeutic Radiology, Yale University School of Medicine, New Haven, Connecticut (United States)

    2015-11-01

    Purpose: The physical and biological differences between heavy ions and photons have not been fully exploited and could improve treatment outcomes. In carbon ion therapy, treatment planning must account for physical properties, such as the absorbed dose and nuclear fragmentation, and for differences in the relative biological effectiveness (RBE) of ions compared with photons. We combined the mechanistic repair-misrepair-fixation (RMF) model with Monte Carlo-generated fragmentation spectra for biological optimization of carbon ion treatment plans. Methods and Materials: Relative changes in double-strand break yields and radiosensitivity parameters with particle type and energy were determined using the independently benchmarked Monte Carlo damage simulation and the RMF model to estimate the RBE values for primary carbon ions and secondary fragments. Depth-dependent energy spectra were generated with the Monte Carlo code FLUKA for clinically relevant initial carbon ion energies. The predicted trends in RBE were compared with published experimental data. Biological optimization for carbon ions was implemented in a 3-dimensional research treatment planning tool. Results: We compared the RBE and RBE-weighted dose (RWD) distributions of different carbon ion treatment scenarios with and without nuclear fragments. The inclusion of fragments in the simulations led to smaller RBE predictions. A validation of the RMF model against measured cell survival data reported in published studies showed reasonable agreement. We calculated and optimized the RWD distributions on patient data and compared the RMF predictions with those from other biological models. The RBE values in an astrocytoma tumor ranged from 2.2 to 4.9 (mean 2.8) for a RWD of 3 Gy(RBE), assuming (α/β)_X = 2 Gy. Conclusions: These studies provide new information to quantify and assess uncertainties in the clinically relevant RBE values for carbon ion therapy based on biophysical mechanisms. We present results from

  16. Estimation of fast neutron fluence in steel specimens type Laguna Verde in TRIGA Mark III reactor

    International Nuclear Information System (INIS)

    Galicia A, J.; Francois L, J. L.; Aguilar H, F.

    2015-09-01

    The main purpose of this work is to obtain the fast-neutron fluence recorded within four specimens of carbon steel, similar to the vessel material of the BWR reactors of the Laguna Verde nuclear power plant, when subjected to the neutron flux in an experimental facility of the TRIGA Mark III reactor, calculating the irradiation time needed to age the material in an accelerated manner. The Monte Carlo code MCNP5 was used to calculate the neutron flux in the specimens. In an initial stage, three sheets of natural molybdenum and molybdenum trioxide (MoO3) were incorporated into a model of the TRIGA reactor operating at 1 MWth to calculate the activity resulting from a given irradiation time. The results were compared with activities measured experimentally in the same materials, to validate the neutron flux calculated with the model. Subsequently, the fast-neutron flux received by the steel specimens placed in the experimental facility E-16 of the reactor core model, operating at nominal maximum power in steady state, was calculated; from these calculations, the irradiation time required to reach neutron fluence values in the range of 10¹⁸ n/cm², which is the estimate for Laguna Verde after 32 years of effective operation at maximum power, was obtained. (Author)

  17. Estimation of the Diesel Particulate Filter Soot Load Based on an Equivalent Circuit Model

    Directory of Open Access Journals (Sweden)

    Yanting Du

    2018-02-01

    Full Text Available In order to estimate the diesel particulate filter (DPF) soot load and improve the accuracy of regeneration timing, a novel method based on an equivalent circuit model is proposed, building on the electric-fluid analogy. The proposed method can reduce the impact of transient engine operation on the soot load estimate, accurately calculate the flow resistance, and improve the estimation accuracy of the soot load. Firstly, the least squares method is used to identify the flow resistance based on World Harmonized Transient Cycle (WHTC) test data, and the relationship between flow resistance, exhaust temperature and soot load is established. Secondly, online estimation of the soot load is achieved by using the dual extended Kalman filter (DEKF). The results show that this method has good convergence and robustness, with a maximal absolute error of 0.2 g/L at regeneration timing, which meets engineering requirements. Additionally, this method can estimate the soot load under transient engine operating conditions and avoids the large number of experimental tests, extensive calibration and analysis of complex chemical reactions required in traditional methods.

  18. Reliable Dual Tensor Model Estimation in Single and Crossing Fibers Based on Jeffreys Prior

    Science.gov (United States)

    Yang, Jianfei; Poot, Dirk H. J.; Caan, Matthan W. A.; Su, Tanja; Majoie, Charles B. L. M.; van Vliet, Lucas J.; Vos, Frans M.

    2016-01-01

    Purpose This paper presents and studies a framework for reliable modeling of diffusion MRI using a data-acquisition adaptive prior. Methods Automated relevance determination estimates the mean of the posterior distribution of a rank-2 dual tensor model exploiting Jeffreys prior (JARD). This data-acquisition prior is based on the Fisher information matrix and enables the assessment whether two tensors are mandatory to describe the data. The method is compared to Maximum Likelihood Estimation (MLE) of the dual tensor model and to FSL’s ball-and-stick approach. Results Monte Carlo experiments demonstrated that JARD’s volume fractions correlated well with the ground truth for single and crossing fiber configurations. In single fiber configurations JARD automatically reduced the volume fraction of one compartment to (almost) zero. The variance in fractional anisotropy (FA) of the main tensor component was thereby reduced compared to MLE. JARD and MLE gave a comparable outcome in data simulating crossing fibers. On brain data, JARD yielded a smaller spread in FA along the corpus callosum compared to MLE. Tract-based spatial statistics demonstrated a higher sensitivity in detecting age-related white matter atrophy using JARD compared to both MLE and the ball-and-stick approach. Conclusions The proposed framework offers accurate and precise estimation of diffusion properties in single and dual fiber regions. PMID:27760166

  19. Collective estimation of multiple bivariate density functions with application to angular-sampling-based protein loop modeling

    KAUST Repository

    Maadooliat, Mehdi

    2015-10-21

    This paper develops a method for simultaneous estimation of density functions for a collection of populations of protein backbone angle pairs using a data-driven, shared basis that is constructed by bivariate spline functions defined on a triangulation of the bivariate domain. The circular nature of angular data is taken into account by imposing appropriate smoothness constraints across boundaries of the triangles. Maximum penalized likelihood is used to fit the model and an alternating blockwise Newton-type algorithm is developed for computation. A simulation study shows that the collective estimation approach is statistically more efficient than estimating the densities individually. The proposed method was used to estimate neighbor-dependent distributions of protein backbone dihedral angles (i.e., Ramachandran distributions). The estimated distributions were applied to protein loop modeling, one of the most challenging open problems in protein structure prediction, by feeding them into an angular-sampling-based loop structure prediction framework. Our estimated distributions compared favorably to the Ramachandran distributions estimated by fitting a hierarchical Dirichlet process model; and in particular, our distributions showed significant improvements on the hard cases where existing methods do not work well.

  20. Collective estimation of multiple bivariate density functions with application to angular-sampling-based protein loop modeling

    KAUST Repository

    Maadooliat, Mehdi; Zhou, Lan; Najibi, Seyed Morteza; Gao, Xin; Huang, Jianhua Z.

    2015-01-01

    This paper develops a method for simultaneous estimation of density functions for a collection of populations of protein backbone angle pairs using a data-driven, shared basis that is constructed by bivariate spline functions defined on a triangulation of the bivariate domain. The circular nature of angular data is taken into account by imposing appropriate smoothness constraints across boundaries of the triangles. Maximum penalized likelihood is used to fit the model and an alternating blockwise Newton-type algorithm is developed for computation. A simulation study shows that the collective estimation approach is statistically more efficient than estimating the densities individually. The proposed method was used to estimate neighbor-dependent distributions of protein backbone dihedral angles (i.e., Ramachandran distributions). The estimated distributions were applied to protein loop modeling, one of the most challenging open problems in protein structure prediction, by feeding them into an angular-sampling-based loop structure prediction framework. Our estimated distributions compared favorably to the Ramachandran distributions estimated by fitting a hierarchical Dirichlet process model; and in particular, our distributions showed significant improvements on the hard cases where existing methods do not work well.

  1. Electric Vehicle Fast-Charging Station Unified Modeling and Stability Analysis in the dq Frame

    Directory of Open Access Journals (Sweden)

    Xiang Wang

    2018-05-01

    Full Text Available The electric vehicle fast-charging station is an important guarantee for the popularity of electric vehicles. As the fast-charging piles are voltage-source converters, stability issues can occur in the grid-connected fast-charging station. Since the dynamic input admittance of the fast-charging pile and the dynamic output impedance of the grid play an important role in the stability of the interaction system, the station-grid interaction system is treated as load-side and source-side subsystems to build the dynamic impedance model. The dynamic input admittance, in matrix form, is derived from the fast-charging pile's current control loop, considering the influence of the LC filter. The dynamic output impedance is obtained similarly, considering the regional power grid capacity, transformer capacity, and feed-line length. On this basis, a modified forbidden-region-based stability criterion is used for the fast-charging station stability analysis. Frequency-domain case studies and time-domain simulations are then presented to show the influence of factors from both the power grid side and the fast-charging pile side. The simulation results validate the effectiveness of the dq-frame impedance model and the stability analysis method.

  2. Model-Based Evolution of a Fast Hybrid Fuzzy Adaptive Controller for a Pneumatic Muscle Actuator

    Directory of Open Access Journals (Sweden)

    Alexander Hošovský

    2012-07-01

    Full Text Available Pneumatic artificial muscle-based robotic systems usually necessitate the use of various nonlinear control techniques in order to improve their performance, and their robustness to parameter variation, which is generally difficult to predict, should also be tested. Here a fast hybrid adaptive control scheme is proposed, where a conventional PD controller is placed in the feedforward branch and a fuzzy controller is placed in the adaptation branch. The fuzzy controller compensates for the actions of the PD controller under conditions of inertia moment variation. The fuzzy controller, of Takagi-Sugeno type, is evolved through a genetic algorithm using the dynamic model of a pneumatic muscle actuator. The results confirm the capability of the designed system to provide robust performance under conditions of varying inertia.

  3. A fast circuit analysis program based on microcomputer

    International Nuclear Information System (INIS)

    Hu Guoji

    1988-01-01

    A fast circuit analysis program (FCAP) is introduced. The program may be used to analyse the DC operating point, frequency response and transient response of fast circuits. A notable feature is that the model of the active element is not fixed: users may choose one of many equivalent circuits. Written in FORTRAN 77, FCAP runs on the IBM PC and compatible computers. It can be used as an aid to the analysis and design of fast circuits.

  4. Research of Block-Based Motion Estimation Methods for Video Compression

    Directory of Open Access Journals (Sweden)

    Tropchenko Andrey

    2016-08-01

    Full Text Available This work is a review of the block-based algorithms used for motion estimation in video compression. It examines different types of block-based algorithms, ranging from the simplest, Full Search, to fast adaptive algorithms like Hierarchical Search. The algorithms evaluated in this paper are widely accepted by the video compression community and have been used in implementing various standards, such as MPEG-4 Visual and H.264. The work also presents a very brief introduction to the entire flow of video compression.
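
    As a minimal reference point for the algorithms reviewed, the sketch below implements Full Search block matching with a sum-of-absolute-differences cost; the fast methods in the survey (e.g., Three Step Search) evaluate only a small, adaptively chosen subset of the same candidate offsets. Names are illustrative.

```python
import numpy as np

def full_search(ref, cur, bx, by, bs=16, search=7):
    """Exhaustively test every offset in a +/-search window around the block
    at (bx, by) in the current frame and return the lowest-SAD motion vector."""
    block = cur[by:by+bs, bx:bx+bs].astype(int)
    best, best_cost = (0, 0), None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            x, y = bx + dx, by + dy
            if 0 <= x <= ref.shape[1] - bs and 0 <= y <= ref.shape[0] - bs:
                cost = np.abs(ref[y:y+bs, x:x+bs].astype(int) - block).sum()
                if best_cost is None or cost < best_cost:
                    best, best_cost = (dx, dy), cost
    return best

# toy usage: the scene shifts by (dy=2, dx=-3) between frames
rng = np.random.default_rng(0)
ref = rng.integers(0, 255, (64, 64)).astype(int)
cur = np.roll(ref, (2, -3), axis=(0, 1))
print(full_search(ref, cur, bx=24, by=24))   # -> (3, -2)
```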

  5. Model Effects on GLAS-Based Regional Estimates of Forest Biomass and Carbon

    Science.gov (United States)

    Nelson, Ross F.

    2010-01-01

    Ice, Cloud, and land Elevation Satellite (ICESat) / Geoscience Laser Altimeter System (GLAS) waveform data are used to estimate biomass and carbon on a 1.27 × 10⁶ km² study area in the Province of Quebec, Canada, below the tree line. The same input datasets and sampling design are used in conjunction with four different predictive models to estimate total aboveground dry forest biomass and forest carbon. The four models include non-stratified and stratified versions of a multiple linear model where either biomass or the square root of biomass serves as the dependent variable. The use of different models in Quebec introduces differences in Provincial dry biomass estimates of up to 0.35 Gt, with a range of 4.94 ± 0.28 Gt to 5.29 ± 0.36 Gt. The differences among model estimates are statistically non-significant, however, and the results demonstrate the degree to which carbon estimates vary strictly as a function of the model used to estimate regional biomass. Results also indicate that GLAS measurements become problematic with respect to height and biomass retrievals in the boreal forest when biomass values fall below 20 t/ha and when GLAS 75th-percentile heights fall below 7 m.

  6. Evaluating Fast Maximum Likelihood-Based Phylogenetic Programs Using Empirical Phylogenomic Data Sets

    Science.gov (United States)

    Zhou, Xiaofan; Shen, Xing-Xing; Hittinger, Chris Todd

    2018-01-01

    Abstract The sizes of the data matrices assembled to resolve branches of the tree of life have increased dramatically, motivating the development of programs for fast, yet accurate, inference. For example, several different fast programs have been developed in the very popular maximum likelihood framework, including RAxML/ExaML, PhyML, IQ-TREE, and FastTree. Although these programs are widely used, a systematic evaluation and comparison of their performance using empirical genome-scale data matrices has so far been lacking. To address this question, we evaluated these four programs on 19 empirical phylogenomic data sets with hundreds to thousands of genes and up to 200 taxa with respect to likelihood maximization, tree topology, and computational speed. For single-gene tree inference, we found that the more exhaustive and slower strategies (ten searches per alignment) outperformed faster strategies (one tree search per alignment) using RAxML, PhyML, or IQ-TREE. Interestingly, single-gene trees inferred by the three programs yielded comparable coalescent-based species tree estimations. For concatenation-based species tree inference, IQ-TREE consistently achieved the best-observed likelihoods for all data sets, and RAxML/ExaML was a close second. In contrast, PhyML often failed to complete concatenation-based analyses, whereas FastTree was the fastest but generated lower likelihood values and more dissimilar tree topologies in both types of analyses. Finally, data matrix properties, such as the number of taxa and the strength of phylogenetic signal, sometimes substantially influenced the programs’ relative performance. Our results provide real-world gene and species tree phylogenetic inference benchmarks to inform the design and execution of large-scale phylogenomic data analyses. PMID:29177474

  7. Reference Evapotranspiration Retrievals from a Mesoscale Model Based Weather Variables for Soil Moisture Deficit Estimation

    Directory of Open Access Journals (Sweden)

    Prashant K. Srivastava

    2017-10-01

    Full Text Available Reference evapotranspiration (ETo) and soil moisture deficit (SMD) are vital for understanding hydrological processes, particularly in the context of sustainable water-use efficiency around the globe. Precise estimates of ETo and SMD are required for developing appropriate forecasting systems, for hydrological modeling, and for precision agriculture. In this study, the surface temperature downscaled from the Weather Research and Forecasting (WRF) model is used to estimate ETo, with boundary conditions provided by the European Centre for Medium-Range Weather Forecasts (ECMWF). To assess the performance, Hamon's method is employed to estimate ETo using both the temperature from a meteorological station and the WRF-derived variables. After estimating ETo, a range of linear and non-linear models is utilized to retrieve SMD. The performance statistics, such as RMSE, %Bias, and Nash-Sutcliffe Efficiency (NSE), indicate that the exponential model (RMSE = 0.226; %Bias = −0.077; NSE = 0.616) is efficient for SMD estimation using the observed ETo, in comparison to the other linear and non-linear models (RMSE range = 0.019–0.667; %Bias range = 2.821–6.894; NSE range = 0.013–0.419) used in this study. On the other hand, when SMD is estimated using ETo based on the WRF-downscaled meteorological variables, the linear model is found promising (RMSE = 0.017; %Bias = 5.280; NSE = 0.448) as compared to the non-linear models (RMSE range = 0.022–0.707; %Bias range = −0.207 to −6.088; NSE range = 0.013–0.149). Our findings also suggest that all the models perform better during the growing season (RMSE range = 0.024–0.025; %Bias range = −4.982 to −3.431; r = 0.245–0.281) than during the non-growing season (RMSE range = 0.011–0.12; %Bias range = 32.701–33.073; r = 0.161–0.244) for SMD estimation.
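
    Hamon's method needs only air temperature and day length. The paper does not spell out which variant it uses, so the sketch below adopts one common form (ETo = 29.8 D es / (T + 273.2), with D in hours and es the saturation vapour pressure in kPa) as an assumption; the function name is illustrative.

```python
import numpy as np

def hamon_eto(t_mean_c, day_length_h):
    """Daily reference ET (mm/day) from one common form of the Hamon equation.
    es: saturation vapour pressure (kPa) at the mean air temperature (Tetens)."""
    es = 0.611 * np.exp(17.27 * t_mean_c / (t_mean_c + 237.3))
    return 29.8 * day_length_h * es / (t_mean_c + 273.2)

# Example: 20 degC mean temperature, 12 h day length -> ~2.85 mm/day
print(round(hamon_eto(20.0, 12.0), 2))
```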

  8. Moving-Target Position Estimation Using GPU-Based Particle Filter for IoT Sensing Applications

    Directory of Open Access Journals (Sweden)

    Seongseop Kim

    2017-11-01

    Full Text Available A particle filter (PF) has been introduced for effective position estimation of moving targets in non-Gaussian and nonlinear systems. The time difference of arrival (TDOA) method using an acoustic sensor array has commonly been used to estimate the location of a concealed moving target, especially underwater. In this paper, we propose a GPU-based acceleration of target position estimation using a PF, along with an efficient system and software architecture. The proposed graphics processing unit (GPU) based algorithm has advantages in applying PF signal processing to a target system consisting of large-scale Internet of Things (IoT) driven sensors, because its parallelization is scalable. For the TDOA measurement from the acoustic sensor array, we use the generalized cross-correlation phase transform (GCC-PHAT) method to obtain the correlation coefficient of the signal using the Fast Fourier Transform (FFT), and we accelerate the GCC-PHAT based TDOA measurements using FFT with the GPU compute unified device architecture (CUDA). The proposed approach utilizes a parallelization method in the target position estimation algorithm using GPU-based PF processing. In addition, it can efficiently estimate sudden changes in target movement using GPU-based parallel computing, which can also be used for multiple-target tracking. It also provides scalability in extending the detection algorithm as the number of sensors increases; therefore, the proposed architecture can be applied in IoT sensing applications with a large number of sensors. The target estimation algorithm was verified using MATLAB and implemented using GPU CUDA. We implemented the proposed signal processing acceleration system on the target GPU and analyzed the execution time, which is reduced by 55% compared with standalone CPU operation on the target embedded board, an NVIDIA Jetson TX1. Also, to apply large
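
    The per-pair GCC-PHAT measurement that the paper offloads to CUDA FFTs reduces to a few transforms; a plain numpy sketch of that step follows (the particle filter and GPU parts are omitted, and the names are illustrative).

```python
import numpy as np

def gcc_phat_delay(sig, ref, fs):
    """TDOA between two microphone signals via GCC-PHAT: whiten the cross
    spectrum to unit magnitude, inverse-FFT, and pick the correlation peak."""
    n = len(sig) + len(ref)
    X = np.fft.rfft(sig, n) * np.conj(np.fft.rfft(ref, n))
    X /= np.maximum(np.abs(X), 1e-12)                 # PHAT weighting
    cc = np.fft.irfft(X, n)
    cc = np.concatenate((cc[-(n // 2):], cc[:n // 2]))  # centre zero lag
    shift = np.argmax(np.abs(cc)) - n // 2
    return shift / fs                                  # delay of sig vs. ref

# two noise signals 25 samples apart at 16 kHz -> ~0.0015625 s
rng = np.random.default_rng(0)
s = rng.normal(size=4096)
print(gcc_phat_delay(np.roll(s, 25), s, fs=16000))
```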

  9. A singular K-space model for fast reconstruction of magnetic resonance images from undersampled data.

    Science.gov (United States)

    Luo, Jianhua; Mou, Zhiying; Qin, Binjie; Li, Wanqing; Ogunbona, Philip; Robini, Marc C; Zhu, Yuemin

    2017-12-09

    Reconstructing magnetic resonance images from undersampled k-space data is a challenging problem. This paper introduces a novel method of image reconstruction from undersampled k-space data based on the concept of singularizing operators and a novel singular k-space model. Exploiting the sparsity of an image in k-space, the singular k-space model (SKM) is formulated in terms of the k-space functions of a singularizing operator. The singularizing operator is constructed by combining basic difference operators. An algorithm is developed to reliably estimate the model parameters from undersampled k-space data. The estimated parameters are then used to recover the missing k-space data through the model, subsequently achieving high-quality reconstruction of the image using the inverse Fourier transform. Experiments on physical phantom and real brain MR images have shown that the proposed SKM method consistently outperforms the popular total variation (TV) and classical zero-filling (ZF) methods regardless of the undersampling rate, noise level, and image structure. For the same objective quality of the reconstructed images, the proposed method requires much less k-space data than the TV method. The SKM method is thus an effective method for fast MRI reconstruction from undersampled k-space data. Graphical abstract: two real images and their sparsified counterparts obtained with the singularizing operator.

  10. Online model-based estimation of state-of-charge and open-circuit voltage of lithium-ion batteries in electric vehicles

    International Nuclear Information System (INIS)

    He, Hongwen; Zhang, Xiaowei; Xiong, Rui; Xu, Yongli; Guo, Hongqiang

    2012-01-01

    This paper presents a method to estimate the state-of-charge (SOC) of a lithium-ion battery based on online identification of its open-circuit voltage (OCV), exploiting the battery's intrinsic relationship between SOC and OCV, for application in electric vehicles. First, an equivalent circuit model with n RC networks is employed to model the polarization characteristic and dynamic behavior of the lithium-ion battery; the corresponding equations are built to describe its electrical behavior, and a recursive function is deduced for online identification of the OCV, implemented by a recursive least squares (RLS) algorithm with an optimal forgetting factor. The models with different numbers of RC networks are evaluated by comparing the terminal voltage between model-based simulation and experiment. Then the OCV-SOC lookup table is built from the experimental data by linear interpolation of the battery voltages at the same SOC during two consecutive discharge and charge cycles. Finally, a verification experiment is carried out based on nine Urban Dynamometer Driving Schedules. It indicates that the proposed method ensures an acceptable accuracy of SOC estimation for online application, with a maximum error of less than 5.0%. -- Highlights: ► An equivalent circuit model with n RC networks is built for lithium-ion batteries. ► A recursive function is deduced for the online estimation of model parameters like the OCV and RO. ► The relationship between SOC and OCV is established experimentally with a linear interpolation method. ► The experiments show the online model-based SOC estimation is reasonable with sufficient accuracy.

  11. Developing an objective evaluation method to estimate diabetes risk in community-based settings.

    Science.gov (United States)

    Kenya, Sonjia; He, Qing; Fullilove, Robert; Kotler, Donald P

    2011-05-01

    Exercise interventions often aim to affect abdominal obesity and glucose tolerance, two significant risk factors for type 2 diabetes. Because of limited financial and clinical resources in community and university-based environments, intervention effects are often measured with interviews or questionnaires and correlated with weight loss or body fat indicated by body bioimpedance analysis (BIA). However, self-reported assessments are subject to high levels of bias and low levels of reliability. Because obesity and body fat are correlated with diabetes at different levels in various ethnic groups, data reflecting changes in weight or fat do not necessarily indicate changes in diabetes risk. To determine how exercise interventions affect diabetes risk in community and university-based settings, improved evaluation methods are warranted. We compared a noninvasive, objective measurement technique--regional BIA--with whole-body BIA for its ability to assess abdominal obesity and predict glucose tolerance in 39 women. To determine regional BIA's utility in predicting glucose, we tested the association between the regional BIA method and blood glucose levels. Regional BIA estimates of abdominal fat area were significantly correlated (r = 0.554, P < 0.003) with fasting glucose. When waist circumference and family history of diabetes were added to abdominal fat in multiple regression models, the association with glucose increased further (r = 0.701, P < 0.001). Regional BIA estimates of abdominal fat may predict fasting glucose better than whole-body BIA as well as provide an objective assessment of changes in diabetes risk achieved through physical activity interventions in community settings.
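
    The reported multiple regression has a simple structure that can be sketched directly; the data below are synthetic placeholders standing in for the study's measurements:

    ```python
    # Multiple regression of fasting glucose on regional-BIA abdominal fat,
    # waist circumference, and family history of diabetes (synthetic data).
    import numpy as np

    rng = np.random.default_rng(2)
    n = 39
    abdominal_fat = rng.normal(100, 20, n)       # cm^2, illustrative
    waist = rng.normal(85, 10, n)                # cm
    family_history = rng.integers(0, 2, n)       # 0/1 indicator
    glucose = (60 + 0.2 * abdominal_fat + 0.1 * waist
               + 5 * family_history + rng.normal(0, 5, n))

    X = np.column_stack([np.ones(n), abdominal_fat, waist, family_history])
    beta, *_ = np.linalg.lstsq(X, glucose, rcond=None)
    r = np.corrcoef(X @ beta, glucose)[0, 1]     # multiple correlation R
    print(beta, r)
    ```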

  12. Estimation and Inference for Very Large Linear Mixed Effects Models

    OpenAIRE

    Gao, K.; Owen, A. B.

    2016-01-01

    Linear mixed models with large imbalanced crossed random effects structures pose severe computational problems for maximum likelihood estimation and for Bayesian analysis. The costs can grow as fast as $N^{3/2}$ when there are N observations. Such problems arise in any setting where the underlying factors satisfy a many-to-many relationship (instead of a nested one), and in electronic commerce applications N can be quite large. Methods that do not account for the correlation structure can...

  13. A Note on the Large Sample Properties of Estimators Based on Generalized Linear Models for Correlated Pseudo-observations

    DEFF Research Database (Denmark)

    Jacobsen, Martin; Martinussen, Torben

    2016-01-01

    Pseudo-values have proven very useful in censored data analysis in complex settings such as multi-state models. The approach was originally suggested by Andersen et al., Biometrika, 90, 2003, 335, who also suggested estimating standard errors using classical generalized estimating equation results. These results were studied more formally in Graw et al., Lifetime Data Anal., 15, 2009, 241, which derived some key results based on a second-order von Mises expansion. However, results concerning large sample properties of estimates based on regression models for pseudo-values still seem unclear. In this paper, we study these large sample properties in the simple setting of survival probabilities and show that the estimating function can be written as a U-statistic of second order, giving rise to an additional term that does not vanish asymptotically. We further show that previously advocated standard error

  14. Model-based dynamic multi-parameter method for peak power estimation of lithium-ion batteries

    NARCIS (Netherlands)

    Sun, F.; Xiong, R.; He, H.; Li, W.; Aussems, J.E.E.

    2012-01-01

    A model-based dynamic multi-parameter method for peak power estimation is proposed for batteries and battery management systems (BMSs) used in hybrid electric vehicles (HEVs). The available power must be accurately calculated in order not to damage the battery by overcharging or overdischarging or

  15. Model-based state estimator for an intelligent tire

    NARCIS (Netherlands)

    Goos, J.; Teerhuis, A. P.; Schmeitz, A. J.C.; Besselink, I.; Nijmeijer, H.

    2017-01-01

    In this work a Tire State Estimator (TSE) is developed and validated using data from a tri-axial accelerometer, installed at the inner liner of the tire. The Flexible Ring Tire (FRT) model is proposed to calculate the tire deformation. For a rolling tire, this deformation is transformed into

  16. Model-based State Estimator for an Intelligent Tire

    NARCIS (Netherlands)

    Goos, J.; Teerhuis, A.P.; Schmeitz, A.J.C.; Besselink, I.J.M.; Nijmeijer, H.

    2016-01-01

    In this work a Tire State Estimator (TSE) is developed and validated using data from a tri-axial accelerometer, installed at the inner liner of the tire. The Flexible Ring Tire (FRT) model is proposed to calculate the tire deformation. For a rolling tire, this deformation is transformed into

  17. Geostatistical Model-Based Estimates of Schistosomiasis Prevalence among Individuals Aged ≤20 Years in West Africa

    Science.gov (United States)

    Schur, Nadine; Hürlimann, Eveline; Garba, Amadou; Traoré, Mamadou S.; Ndir, Omar; Ratard, Raoult C.; Tchuem Tchuenté, Louis-Albert; Kristensen, Thomas K.; Utzinger, Jürg; Vounatsou, Penelope

    2011-01-01

    Background Schistosomiasis is a water-based disease that is believed to affect over 200 million people, with an estimated 97% of the infections concentrated in Africa. However, these statistics are largely based on population re-adjusted data originally published by Utroska and colleagues more than 20 years ago. Hence, these estimates are outdated due to large-scale preventive chemotherapy programs, improved sanitation, water resources development and management, among other reasons. For planning, coordination, and evaluation of control activities, it is essential to possess reliable schistosomiasis prevalence maps. Methodology We analyzed survey data compiled on a newly established open-access global neglected tropical diseases database (i) to create smooth empirical prevalence maps for Schistosoma mansoni and S. haematobium for individuals aged ≤20 years in West Africa, including Cameroon, and (ii) to derive country-specific prevalence estimates. We used Bayesian geostatistical models based on environmental predictors to take into account potential clustering due to common spatially structured exposures. Prediction at unobserved locations was facilitated by joint kriging. Principal Findings Our models revealed that 50.8 million individuals aged ≤20 years in West Africa are infected with either S. mansoni, or S. haematobium, or both species concurrently. The country prevalence estimates ranged between 0.5% (The Gambia) and 37.1% (Liberia) for S. mansoni, and between 17.6% (The Gambia) and 51.6% (Sierra Leone) for S. haematobium. We observed that the combined prevalence for both schistosome species is two-fold lower in The Gambia than previously reported, while we found an almost two-fold higher estimate for Liberia (58.3%) than reported before (30.0%). Our predictions are likely to overestimate overall country prevalence, since modeling was based on children and adolescents up to the age of 20 years who are at highest risk of infection. Conclusion/Significance We

  18. An Optimal Control Modification to Model-Reference Adaptive Control for Fast Adaptation

    Science.gov (United States)

    Nguyen, Nhan T.; Krishnakumar, Kalmanje; Boskovic, Jovan

    2008-01-01

    This paper presents a method that can achieve fast adaptation for a class of model-reference adaptive control. It is well known that standard model-reference adaptive control exhibits high-gain control behaviors when a large adaptive gain is used to achieve fast adaptation in order to reduce tracking error rapidly. High-gain control creates high-frequency oscillations that can excite unmodeled dynamics and can lead to instability. The fast adaptation approach is based on the minimization of the squares of the tracking error, which is formulated as an optimal control problem. The necessary condition of optimality is used to derive an adaptive law using the gradient method. This adaptive law is shown to result in uniform boundedness of the tracking error by means of Lyapunov's direct method. Furthermore, this adaptive law allows a large adaptive gain to be used without causing undesired high-gain control effects. The method is shown to be more robust than standard model-reference adaptive control. Simulations demonstrate the effectiveness of the proposed method.
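
    For orientation, the baseline gradient adaptive law referred to above has a standard form, and the optimal control modification adds a damping term. The notation and sign conventions below are assumptions drawn from common MRAC usage, not quoted from the paper:

    ```latex
    % e = x - x_m : tracking error; \Phi(x) : regressor; \Gamma > 0 : adaptive
    % gain; P solves the Lyapunov equation A_m^{\top} P + P A_m = -Q.
    % Standard MRAC gradient law:
    \dot{\Theta} = -\Gamma \, \Phi(x) \, e^{\top} P B
    % Optimal control modification (sketched form): a damping term weighted by
    % \nu > 0 keeps the adaptation bounded even for large \Gamma.
    \dot{\Theta} = -\Gamma \left( \Phi(x) \, e^{\top} P B
                   - \nu \, \Phi(x) \Phi(x)^{\top} \Theta \, B^{\top} P A_m^{-1} B \right)
    ```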

  19. Comparing estimates of genetic variance across different relationship models.

    Science.gov (United States)

    Legarra, Andres

    2016-02-01

    Use of relationships between individuals to estimate genetic variances and heritabilities via mixed models is standard practice in human, plant and livestock genetics. Different models or information for relationships may give different estimates of genetic variances. However, comparing these estimates across different relationship models is not straightforward as the implied base populations differ between relationship models. In this work, I present a method to compare estimates of variance components across different relationship models. I suggest referring genetic variances obtained using different relationship models to the same reference population, usually a set of individuals in the population. Expected genetic variance of this population is the estimated variance component from the mixed model times a statistic, Dk, which is the average self-relationship minus the average (self- and across-) relationship. For most typical models of relationships, Dk is close to 1. However, this is not true for very deep pedigrees, for identity-by-state relationships, or for non-parametric kernels, which tend to overestimate the genetic variance and the heritability. Using mice data, I show that heritabilities from identity-by-state and kernel-based relationships are overestimated. Weighting these estimates by Dk scales them to a base comparable to genomic or pedigree relationships, avoiding wrong comparisons, for instance, "missing heritabilities". Copyright © 2015 Elsevier Inc. All rights reserved.
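
    The scaling statistic Dk is defined verbally above; given a relationship matrix K it is a one-line computation. The toy matrix below is an illustrative pedigree-style example, not data from the paper:

    ```python
    # D_k = average self-relationship minus the average (self- and across-)
    # relationship, computed from a relationship matrix K.
    import numpy as np

    def dk_statistic(K):
        """Scaling statistic D_k for an n x n relationship matrix K."""
        return np.mean(np.diag(K)) - np.mean(K)

    K = np.array([[1.00, 0.50, 0.25],
                  [0.50, 1.00, 0.25],
                  [0.25, 0.25, 1.00]])
    sigma2_hat = 2.0                     # variance component from the mixed model
    print(dk_statistic(K) * sigma2_hat)  # genetic variance referred to this population
    ```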

  20. Residential building energy estimation method based on the application of artificial intelligence

    Energy Technology Data Exchange (ETDEWEB)

    Marshall, S.; Kajl, S.

    1999-07-01

    The energy requirements of a residential building five to twenty-five storeys high can be estimated using a newly proposed analytical method based on artificial intelligence. The method is fast and provides a wide range of results such as total energy consumption values, power surges, and heating or cooling consumption values. A series of databases was created to take into account the particularities which influence the energy consumption of a building. In this study, the DOE-2 software was used to simulate 8 apartment models. A total of 27 neural networks were used, 3 for the estimation of energy consumption in the corridor and 24 for inside the apartments. Three user interfaces were created to facilitate the estimation of energy consumption. These were named the Energy Estimation Assistance System (EEAS) interfaces and are only accessible using MATLAB software. The input parameters for EEAS are: climatic region, exterior wall resistance, roofing resistance, type of windows, infiltration, number of storeys, and corridor ventilation system operating schedule. By changing the parameters, the EEAS can determine annual heating, cooling and basic energy consumption levels for apartments and corridors. 2 tabs., 2 figs.

  1. Survival modeling for the estimation of transition probabilities in model-based economic evaluations in the absence of individual patient data: a tutorial.

    Science.gov (United States)

    Diaby, Vakaramoko; Adunlin, Georges; Montero, Alberto J

    2014-02-01

    Survival modeling techniques are increasingly being used as part of decision modeling for health economic evaluations. As many models are available, it is imperative for interested readers to know about the steps in selecting and using the most suitable ones. The objective of this paper is to propose a tutorial for the application of appropriate survival modeling techniques to estimate transition probabilities, for use in model-based economic evaluations, in the absence of individual patient data (IPD). An illustration of the use of the tutorial is provided based on the final progression-free survival (PFS) analysis of the BOLERO-2 trial in metastatic breast cancer (mBC). An algorithm was adopted from Guyot and colleagues, and was then run in the statistical package R to reconstruct IPD, based on the final PFS analysis of the BOLERO-2 trial. It should be emphasized that the reconstructed IPD represent an approximation of the original data. Afterwards, we fitted parametric models to the reconstructed IPD in the statistical package Stata. Both statistical and graphical tests were conducted to verify the relative and absolute validity of the findings. Finally, the equations for transition probabilities were derived using the general equation for transition probabilities used in model-based economic evaluations, and the parameters were estimated from fitted distributions. The results of the application of the tutorial suggest that the log-logistic model best fits the reconstructed data from the latest published Kaplan-Meier (KM) curves of the BOLERO-2 trial. Results from the regression analyses were confirmed graphically. An equation for transition probabilities was obtained for each arm of the BOLERO-2 trial. In this paper, a tutorial was proposed and used to estimate the transition probabilities for model-based economic evaluation, based on the results of the final PFS analysis of the BOLERO-2 trial in mBC. The results of our study can serve as a basis for any model
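
    The "general equation for transition probabilities" invoked above is commonly stated as follows; the log-logistic parametrization shown is one common convention and is an assumption here, not quoted from the paper:

    ```latex
    % Per-cycle transition probability from a fitted survival function S(t),
    % with cycle length u:
    tp(t) = 1 - \frac{S(t)}{S(t-u)}
    % For a log-logistic model with scale \lambda and shape \gamma, one common
    % parametrization is S(t) = 1 / (1 + \lambda t^{\gamma}), giving
    tp(t) = 1 - \frac{1 + \lambda (t-u)^{\gamma}}{1 + \lambda t^{\gamma}}
    ```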

  2. A Model-Based Bayesian Estimation of the Rate of Evolution of VNTR Loci in Mycobacterium tuberculosis

    Science.gov (United States)

    Aandahl, R. Zachariah; Reyes, Josephine F.; Sisson, Scott A.; Tanaka, Mark M.

    2012-01-01

    Variable numbers of tandem repeats (VNTR) typing is widely used for studying the bacterial cause of tuberculosis. Knowledge of the rate of mutation of VNTR loci facilitates the study of the evolution and epidemiology of Mycobacterium tuberculosis. Previous studies have applied population genetic models to estimate the mutation rate, leading to estimates varying widely from around to per locus per year. Resolving this issue using more detailed models and statistical methods would lead to improved inference in the molecular epidemiology of tuberculosis. Here, we use a model-based approach that incorporates two alternative forms of a stepwise mutation process for VNTR evolution within an epidemiological model of disease transmission. Using this model in a Bayesian framework we estimate the mutation rate of VNTR in M. tuberculosis from four published data sets of VNTR profiles from Albania, Iran, Morocco and Venezuela. In the first variant, the mutation rate increases linearly with respect to repeat numbers (linear model); in the second, the mutation rate is constant across repeat numbers (constant model). We find that under the constant model, the mean mutation rate per locus is (95% CI: , ) and under the linear model, the mean mutation rate per locus per repeat unit is (95% CI: , ). These new estimates represent a high rate of mutation at VNTR loci compared to previous estimates. To compare the two models we use posterior predictive checks to ascertain which of the two models is better able to reproduce the observed data. From this procedure we find that the linear model performs better than the constant model. The general framework we use allows the possibility of extending the analysis to more complex models in the future. PMID:22761563

  3. Estimation of Stochastic Volatility Models by Nonparametric Filtering

    DEFF Research Database (Denmark)

    Kanaya, Shin; Kristensen, Dennis

    2016-01-01

    estimated volatility process replacing the latent process. Our estimation strategy is applicable to both parametric and nonparametric stochastic volatility models, and can handle both jumps and market microstructure noise. The resulting estimators of the stochastic volatility model will carry additional biases and variances due to the first-step estimation, but under regularity conditions we show that these vanish asymptotically and our estimators inherit the asymptotic properties of the infeasible estimators based on observations of the volatility process. A simulation study examines the finite-sample properties...

  4. Model validation of solar PV plant with hybrid data dynamic simulation based on fast-responding generator method

    Directory of Open Access Journals (Sweden)

    Zhao Dawei

    2016-01-01

    Full Text Available In recent years, a significant number of large-scale solar photovoltaic (PV) plants have been put into operation or are under planning around the world. The accuracy of the solar PV plant model is the key factor in investigating the mutual influences between solar PV plants and a power grid. However, this problem has not been well solved, especially regarding how to apply real measurements to validate the models of the solar PV plants. Taking the fast-responding generator method as an example, this paper presents a model validation methodology for solar PV plants via hybrid data dynamic simulation. First, the implementation scheme of hybrid data dynamic simulation suitable for the DIgSILENT PowerFactory software is proposed, and then an analysis model of solar PV plant integration based on the IEEE 9-bus system is established. Finally, model validation of the solar PV plant is achieved by employing hybrid data dynamic simulation. The results illustrate the effectiveness of the proposed method in solar PV plant model validation.

  5. Model-Based Battery Management Systems: From Theory to Practice

    Science.gov (United States)

    Pathak, Manan

    Lithium-ion batteries are now extensively used as the primary storage source. Capacity and power fade, and slow recharging times are key issues that restrict their use in many applications. Battery management systems are critical to address these issues, along with ensuring safety. This dissertation focuses on exploring various control strategies using detailed physics-based electrochemical models developed previously for lithium-ion batteries, which could be used in advanced battery management systems. Optimal charging profiles for minimizing capacity fade based on SEI-layer formation are derived, and the benefits of using such control strategies are shown by experimentally testing them on a 16 Ah NMC-based pouch cell. This dissertation also explores different time-discretization strategies for non-linear models, which give an improved order of convergence for optimal control problems. Lastly, this dissertation also explores a physics-based model for predicting the linear impedance of a battery, and develops a freeware tool that is extremely robust and computationally fast. Such a code could be used for estimating transport, kinetic and material properties of the battery based on its linear impedance spectra.

  6. Estimation of a multivariate mean under model selection uncertainty

    Directory of Open Access Journals (Sweden)

    Georges Nguefack-Tsague

    2014-05-01

    Full Text Available Model selection uncertainty would occur if we selected a model based on one data set and subsequently applied it for statistical inferences, because the "correct" model would not be selected with certainty. When the selection and inference are based on the same dataset, some additional problems arise due to the correlation of the two stages (selection and inference). In this paper model selection uncertainty is considered and model averaging is proposed. The proposal is related to the theory of James and Stein on estimating three or more parameters from independent normal observations. We suggest that a model averaging scheme taking into account the selection procedure could be more appropriate than model selection alone. Some properties of this model averaging estimator are investigated; in particular, we show using Stein's results that it is a minimax estimator and can outperform Stein-type estimators.

  7. Markov models for digraph panel data : Monte Carlo-based derivative estimation

    NARCIS (Netherlands)

    Schweinberger, Michael; Snijders, Tom A. B.

    2007-01-01

    A parametric, continuous-time Markov model for digraph panel data is considered. The parameter is estimated by the method of moments. A convenient method for estimating the variance-covariance matrix of the moment estimator relies on the delta method, requiring the Jacobian matrix, that is, the

  8. Are individual based models a suitable approach to estimate population vulnerability? - a case study

    Directory of Open Access Journals (Sweden)

    Eva Maria Griebeler

    2011-04-01

    Full Text Available European populations of the Large Blue Butterfly Maculinea arion have experienced severe declines in the last decades, especially in the northern part of the species' range. This endangered lycaenid butterfly needs two resources for development: flower buds of specific plants (Thymus spp., Origanum vulgare), on which young caterpillars briefly feed, and red ants of the genus Myrmica, whose nests support caterpillars during a prolonged final instar. I present an analytically solvable deterministic model to estimate the vulnerability of populations of M. arion. Results obtained from the sensitivity analysis of this mathematical model (MM) are contrasted with the respective results that had been derived from a spatially explicit individual based model (IBM) for this butterfly. I demonstrate that details in landscape configuration which are neglected by the MM but are easily taken into consideration by the IBM result in a different degree of intraspecific competition of caterpillars on flower buds and within host ant nests. The resulting differences in caterpillar mortalities lead to erroneous estimates of the extinction risk of a butterfly population living in habitat with low food plant coverage and low abundance of host ant nests. This observation favors the use of an individual based modeling approach over the deterministic approach, at least for the management of this threatened butterfly.

  9. Estimating nitrogen oxides emissions at city scale in China with a nightlight remote sensing model.

    Science.gov (United States)

    Jiang, Jianhui; Zhang, Jianying; Zhang, Yangwei; Zhang, Chunlong; Tian, Guangming

    2016-02-15

    Increasing nitrogen oxides (NOx) emissions over fast-developing regions have been of great concern due to their critical associations with aggravated haze and climate change. However, little geographically specific data exists for estimating spatio-temporal trends of NOx emissions. In order to quantify the spatial and temporal variations of NOx emissions, a spatially explicit approach based on the continuous satellite observations of artificial nighttime stable lights (NSLs) from the Defense Meteorological Satellite Program/Operational Linescan System (DMSP/OLS) was developed to estimate NOx emissions from the largest emission source, fossil fuel combustion. The NSL-based model was established with three types of data: satellite data of nighttime stable lights, geographical data of administrative boundaries, and provincial energy consumption in China, where significant growth of NOx emissions was experienced during the three policy stages corresponding to the 9th-11th Five-Year Plans (FYP, 1995-2010). The estimated national NOx emissions increased by 8.2% per year during the study period, and the total annual NOx emissions in China estimated by the NSL-based model were approximately 4.1%-13.8% higher than previous estimates. The spatio-temporal variations of NOx emissions at city scale were then evaluated by Moran's I indices. The global Moran's I indices for measuring spatial agglomeration of China's NOx emissions increased by 50.7% during 1995-2010. Although the inland cities have shown a larger contribution to the emission growth than the more developed coastal cities since 2005, the High-High clusters of NOx emissions located in the Beijing-Tianjin-Hebei region, the Yangtze River Delta, and the Pearl River Delta should still be the major focus of NOx mitigation. Our results indicate that the readily available DMSP/OLS nighttime stable lights based model could be an easily accessible and effective tool for achieving strategic decision making
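
    Global Moran's I, used above to quantify spatial agglomeration of city-level emissions, can be sketched in a few lines; the emission values and contiguity matrix below are illustrative stand-ins, not the study's data:

    ```python
    # Global Moran's I for values x under a spatial weight matrix W:
    # I = n / sum(W) * (z' W z) / (z' z), with z the mean-centered values.
    import numpy as np

    def morans_i(x, W):
        """Global Moran's I for values x and spatial weight matrix W."""
        z = x - x.mean()
        return len(x) / W.sum() * (z @ W @ z) / (z @ z)

    x = np.array([3.0, 2.5, 4.0, 1.0, 1.2])      # e.g., city NOx emissions
    W = np.array([[0, 1, 1, 0, 0],               # binary adjacency of cities
                  [1, 0, 1, 0, 0],
                  [1, 1, 0, 1, 0],
                  [0, 0, 1, 0, 1],
                  [0, 0, 0, 1, 0]], dtype=float)
    print(morans_i(x, W))
    ```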

  10. Influence of diurnal variation and fasting on serum iron concentrations in a community-based population.

    Science.gov (United States)

    Nguyen, Leonard T; Buse, Joshua D; Baskin, Leland; Sadrzadeh, S M Hossein; Naugler, Christopher

    2017-12-01

    Serum iron is an important clinical test to help identify cases of iron deficiency or overload. Fluctuations caused by diurnal variation and diet are thought to influence test results, which may affect clinical patient management. We examined the impact of these preanalytical factors on iron concentrations in a large community-based cohort. Serum iron concentration, blood collection time, fasting duration, patient age and sex were obtained from the Laboratory Information Service at Calgary Laboratory Services for community-based clinical testing performed between January 2011 and December 2015. A total of 276,307 individual test results were obtained. Iron levels were relatively high over a long period from 8:00 to 15:00. Mean concentrations were highest at blood collection times of 11:00 for adult men and 12:00 for adult women and children; however, iron levels peaked as late as 15:00 in teenagers. With regard to fasting, iron levels required approximately 5 h of post-prandial time to return to baseline, except for children and teenage females, where no significant variation was seen until after 11 h of fasting. After 10 h of fasting, iron concentrations in all patient groups gradually increased to higher levels compared to earlier fasting times. Serum iron concentrations remain reasonably stable during most daytime hours for testing purposes. In adults, blood collection after 5 to 9 h of fasting provides a representative estimate of a patient's iron levels. For patients who have fasted overnight, i.e. ≥12 h fasting, clinicians should be aware that iron concentrations may be elevated beyond otherwise usual levels. Copyright © 2017. Published by Elsevier Inc.

  11. Infant bone age estimation based on fibular shaft length: model development and clinical validation

    International Nuclear Information System (INIS)

    Tsai, Andy; Stamoulis, Catherine; Bixby, Sarah D.; Breen, Micheal A.; Connolly, Susan A.; Kleinman, Paul K.

    2016-01-01

    Bone age in infants (<1 year old) is generally estimated using hand/wrist or knee radiographs, or by counting ossification centers. The accuracy and reproducibility of these techniques are largely unknown. To develop and validate an infant bone age estimation technique using fibular shaft length and compare it to conventional methods. We retrospectively reviewed negative skeletal surveys of 247 term-born low-risk-of-abuse infants (no persistent child protection team concerns) from July 2005 to February 2013, and randomized them into two datasets: (1) model development (n = 123) and (2) model testing (n = 124). Three pediatric radiologists measured all fibular shaft lengths. An ordinary linear regression model was fitted to dataset 1, and the model was evaluated using dataset 2. Readers also estimated infant bone ages in dataset 2 using (1) the hemiskeleton method of Sontag, (2) the hemiskeleton method of Elgenmark, (3) the hand/wrist atlas of Greulich and Pyle, and (4) the knee atlas of Pyle and Hoerr. For validation, we selected lower-extremity radiographs of 114 normal infants with no suspicion of abuse. Readers measured the fibulas and also estimated bone ages using the knee atlas. Bone age estimates from the proposed method were compared to the other methods. The proposed method outperformed all other methods in accuracy and reproducibility. Its accuracy was similar for the testing and validating datasets, with root-mean-square error of 36 days and 37 days; mean absolute error of 28 days and 31 days; and error variability of 22 days and 20 days, respectively. This study provides strong support for an infant bone age estimation technique based on fibular shaft length as a more accurate alternative to conventional methods. (orig.)
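
    The model at the heart of this record is an ordinary linear regression of bone age on fibular shaft length, evaluated by RMSE and mean absolute error on a held-out dataset. A sketch with synthetic placeholder data (not the study's measurements):

    ```python
    # Fit bone age ~ fibular shaft length on a development set, then evaluate
    # RMSE and MAE on a separate test set, mirroring the two-dataset design.
    import numpy as np

    rng = np.random.default_rng(3)
    length_mm = rng.uniform(50, 110, 123)                       # development set
    age_days = 3.0 * length_mm - 140 + rng.normal(0, 30, 123)   # toy ground truth

    slope, intercept = np.polyfit(length_mm, age_days, 1)       # OLS fit

    test_len = rng.uniform(50, 110, 124)                        # testing set
    test_age = 3.0 * test_len - 140 + rng.normal(0, 30, 124)
    pred = slope * test_len + intercept
    rmse = np.sqrt(np.mean((pred - test_age) ** 2))
    mae = np.mean(np.abs(pred - test_age))
    print(rmse, mae)
    ```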

  12. Infant bone age estimation based on fibular shaft length: model development and clinical validation

    Energy Technology Data Exchange (ETDEWEB)

    Tsai, Andy; Stamoulis, Catherine; Bixby, Sarah D.; Breen, Micheal A.; Connolly, Susan A.; Kleinman, Paul K. [Boston Children's Hospital, Harvard Medical School, Department of Radiology, Boston, MA (United States)

    2016-03-15

    Bone age in infants (<1 year old) is generally estimated using hand/wrist or knee radiographs, or by counting ossification centers. The accuracy and reproducibility of these techniques are largely unknown. To develop and validate an infant bone age estimation technique using fibular shaft length and compare it to conventional methods. We retrospectively reviewed negative skeletal surveys of 247 term-born low-risk-of-abuse infants (no persistent child protection team concerns) from July 2005 to February 2013, and randomized them into two datasets: (1) model development (n = 123) and (2) model testing (n = 124). Three pediatric radiologists measured all fibular shaft lengths. An ordinary linear regression model was fitted to dataset 1, and the model was evaluated using dataset 2. Readers also estimated infant bone ages in dataset 2 using (1) the hemiskeleton method of Sontag, (2) the hemiskeleton method of Elgenmark, (3) the hand/wrist atlas of Greulich and Pyle, and (4) the knee atlas of Pyle and Hoerr. For validation, we selected lower-extremity radiographs of 114 normal infants with no suspicion of abuse. Readers measured the fibulas and also estimated bone ages using the knee atlas. Bone age estimates from the proposed method were compared to the other methods. The proposed method outperformed all other methods in accuracy and reproducibility. Its accuracy was similar for the testing and validating datasets, with root-mean-square error of 36 days and 37 days; mean absolute error of 28 days and 31 days; and error variability of 22 days and 20 days, respectively. This study provides strong support for an infant bone age estimation technique based on fibular shaft length as a more accurate alternative to conventional methods. (orig.)

  13. Decentralized State-Observer-Based Traffic Density Estimation of Large-Scale Urban Freeway Network by Dynamic Model

    Directory of Open Access Journals (Sweden)

    Yuqi Guo

    2017-08-01

    Full Text Available In order to estimate traffic densities in a large-scale urban freeway network in an accurate and timely fashion when traffic sensors do not cover the freeway network completely, and thus only local measurement data can be utilized, this paper proposes a decentralized state observer approach based on a macroscopic traffic flow model. Firstly, by using the well-known cell transmission model (CTM), the urban freeway network is modeled as a distributed system. Secondly, based on the model, a decentralized observer is designed. With the help of the Lyapunov function and S-procedure theory, the observer gains are computed by using the linear matrix inequality (LMI) technique. Thus, the traffic densities of the whole road network can be estimated by the designed observer. Finally, this method is applied to the outer ring of Beijing’s second ring road and experimental results demonstrate the effectiveness and applicability of the proposed approach.

  14. An operational weather radar-based Quantitative Precipitation Estimation and its application in catchment water resources modeling

    DEFF Research Database (Denmark)

    He, Xin; Vejen, Flemming; Stisen, Simon

    2011-01-01

    of precipitation compared with rain-gauge-based methods, thus providing the basis for better water resources assessments. The radar QPE algorithm called ARNE is a distance-dependent areal estimation method that merges radar data with ground surface observations. The method was applied to the Skjern River catchment...... in western Denmark where alternative precipitation estimates were also used as input to an integrated hydrologic model. The hydrologic responses from the model were analyzed by comparing radar- and ground-based precipitation input scenarios. Results showed that radar QPE products are able to generate...... reliable simulations of stream flow and water balance. The potential of using radar-based precipitation was found to be especially high at a smaller scale, where the impact of spatial resolution was evident from the stream discharge results. Also, groundwater recharge was shown to be sensitive...

  15. Estimation of Thermal Sensation Based on Wrist Skin Temperatures

    Science.gov (United States)

    Sim, Soo Young; Koh, Myung Jun; Joo, Kwang Min; Noh, Seungwoo; Park, Sangyun; Kim, Youn Ho; Park, Kwang Suk

    2016-01-01

    Thermal comfort is an essential environmental factor related to quality of life and work effectiveness. We assessed the feasibility of wrist skin temperature monitoring for estimating subjective thermal sensation. We invented a wrist band that simultaneously monitors skin temperatures from the wrist (i.e., the radial artery and ulnar artery regions, and upper wrist) and the fingertip. Skin temperatures from eight healthy subjects were acquired while thermal sensation varied. To develop a thermal sensation estimation model, the mean skin temperature, temperature gradient, time differential of the temperatures, and average power of frequency band were calculated. A thermal sensation estimation model using temperatures of the fingertip and wrist showed the highest accuracy (mean root mean square error [RMSE]: 1.26 ± 0.31). An estimation model based on the three wrist skin temperatures showed a slightly better result than the model that used a single fingertip skin temperature (mean RMSE: 1.39 ± 0.18). When a personalized thermal sensation estimation model based on three wrist skin temperatures was used, the mean RMSE was 1.06 ± 0.29, and the correlation coefficient was 0.89. Thermal sensation estimation technology based on wrist skin temperatures, combined with wearable devices, may facilitate intelligent control of one’s thermal environment. PMID:27023538

  16. Input-output model for MACCS nuclear accident impacts estimation

    Energy Technology Data Exchange (ETDEWEB)

    Outkin, Alexander V. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bixler, Nathan E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Vargas, Vanessa N [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-01-27

    Since the original economic model for MACCS was developed, better quality economic data (as well as the tools to gather and process it) and better computational capabilities have become available. The update of the economic impacts component of the MACCS legacy model will provide improved estimates of business disruptions through the use of Input-Output based economic impact estimation. This paper presents an updated MACCS model, based on Input-Output methodology, in which economic impacts are calculated using the Regional Economic Accounting analysis tool (REAcct) created at Sandia National Laboratories. This new GDP-based model allows quick and consistent estimation of gross domestic product (GDP) losses due to nuclear power plant accidents. This paper outlines the steps taken to combine the REAcct Input-Output-based model with the MACCS code, describes the GDP loss calculation, and discusses the parameters and modeling assumptions necessary for the estimation of long-term effects of nuclear power plant accidents.
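
    Input-Output impact estimation of this kind rests on the Leontief accounting identity: total output x satisfies x = Ax + d, so x = (I - A)^-1 d, and a drop in final demand d propagates through the economy. The sketch below shows the generic mechanism with a toy two-sector economy; it is not Sandia's REAcct implementation:

    ```python
    # Leontief Input-Output model: output loss from a demand disruption.
    import numpy as np

    A = np.array([[0.2, 0.3],             # toy technical coefficient matrix
                  [0.1, 0.4]])
    d_before = np.array([100.0, 50.0])    # final demand before the accident
    d_after = np.array([80.0, 50.0])      # demand lost in the affected region

    L = np.linalg.inv(np.eye(2) - A)      # Leontief inverse
    loss = L @ d_before - L @ d_after     # output loss by sector
    print(loss, loss.sum())               # sectoral and total GDP-relevant loss
    ```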

  17. Computationally fast estimation of muscle tension for realtime bio-feedback.

    Science.gov (United States)

    Murai, Akihiko; Kurosaki, Kosuke; Yamane, Katsu; Nakamura, Yoshihiko

    2009-01-01

    In this paper, we propose a method for realtime estimation of whole-body muscle tensions. The main problem of muscle tension estimation is that there are an infinite number of solutions realizing a particular joint torque, due to actuation redundancy. Numerical optimization techniques, e.g. quadratic programming, are often employed to obtain a unique solution, but they are usually computationally expensive. For example, our implementation of quadratic programming takes about 0.17 sec per frame on a musculoskeletal model with 274 elements, which is far from realtime computation. Here, we propose to reduce the computational cost by using EMG data and by reducing the number of unknowns in the optimization. First, we compute the tensions of muscles with surface EMG data based on a biological muscle model, which is a very efficient process. We also assume that their synergists have the same activity levels and compute their tensions with the same model. Tensions of the remaining muscles are then computed using quadratic programming, but the number of unknowns is significantly reduced by assuming that the muscles in the same heteronymous group have the same activity level. The proposed method realizes realtime estimation and visualization of the whole-body muscle tensions and can be applied to sports training and rehabilitation.
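
    The redundancy resolution step can be approximated with a bounded least-squares surrogate for the quadratic program: distribute joint torques over non-negative, bounded muscle tensions. The moment-arm matrix and tension limits below are illustrative placeholders, not the 274-element model:

    ```python
    # Bounded least squares on the torque residual ||M f - tau|| with
    # 0 <= f <= f_max, a simple stand-in for the paper's QP step.
    import numpy as np
    from scipy.optimize import lsq_linear

    M = np.array([[0.04, -0.03, 0.02, 0.05],   # moment arms: 2 joints, 4 muscles
                  [0.00,  0.02, -0.04, 0.03]])
    tau = np.array([10.0, 5.0])                # desired joint torques (N m)
    f_max = np.array([800.0, 600.0, 700.0, 900.0])

    # EMG-derived tensions would fix some entries; here all four are free.
    res = lsq_linear(M, tau, bounds=(np.zeros(4), f_max))
    print(res.x, M @ res.x)                    # tensions and realized torques
    ```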

  18. Efficient estimation of an additive quantile regression model

    NARCIS (Netherlands)

    Cheng, Y.; de Gooijer, J.G.; Zerom, D.

    2011-01-01

    In this paper, two non-parametric estimators are proposed for estimating the components of an additive quantile regression model. The first estimator is a computationally convenient approach which can be viewed as a more viable alternative to existing kernel-based approaches. The second estimator

  19. Are Live Ultrasound Models Replaceable? Traditional vs. Simulated Education Module for FAST

    Directory of Open Access Journals (Sweden)

    Suzanne Bentley

    2015-10-01

    (p<0.001). There was no significant difference between groups on OSCE scores of FAST on a live model. Overall, no differences were demonstrated between groups trained on human models versus the simulator. Discussion: There was no difference between groups in knowledge-based ultrasound test scores, surveys of comfort levels with ultrasound, and students’ abilities to perform and interpret FAST on human models. Conclusion: These findings suggest that an ultrasound simulator is a suitable alternative method for ultrasound education. Additional uses of ultrasound simulation should be explored in the future.

  20. Hardware architecture design of a fast global motion estimation method

    Science.gov (United States)

    Liang, Chaobing; Sang, Hongshi; Shen, Xubang

    2015-12-01

    VLSI implementation of gradient-based global motion estimation (GME) faces two main challenges: irregular data access and a high off-chip memory bandwidth requirement. We previously proposed a fast GME method that reduces computational complexity by choosing a certain number of small patches containing corners and using them in a gradient-based framework. A hardware architecture is designed to implement this method and further reduce the off-chip memory bandwidth requirement. On-chip memories are used to store coordinates of the corners and template patches, while the Gaussian pyramids of both the template and reference frame are stored in off-chip SDRAMs. By performing the geometric transform only on the coordinates of the center pixel of a 3-by-3 patch in the template image, a 5-by-5 area containing the warped 3-by-3 patch in the reference image is extracted from the SDRAMs by burst read. Patch-based and burst-mode data access helps to keep the off-chip memory bandwidth requirement at a minimum. Although patch size varies at different pyramid levels, all patches are processed in terms of 3x3 patches, so the utilization of the patch-processing circuit reaches 100%. FPGA implementation results show that the design utilizes 24,080 bits of on-chip memory and, for a sequence with a resolution of 352x288 and a frequency of 60 Hz, the off-chip bandwidth requirement is only 3.96 Mbyte/s, compared with 243.84 Mbyte/s for the original gradient-based GME method. This design can be used in applications like video codecs, video stabilization, and super-resolution, where real-time GME is a necessity and minimum memory bandwidth requirement is appreciated.

  1. Numerical discretization-based estimation methods for ordinary differential equation models via penalized spline smoothing with applications in biomedical research.

    Science.gov (United States)

    Wu, Hulin; Xue, Hongqi; Kumar, Arun

    2012-06-01

    Differential equations are extensively used for modeling dynamics of physical processes in many scientific fields such as engineering, physics, and biomedical sciences. Parameter estimation of differential equation models is a challenging problem because of high computational cost and high-dimensional parameter space. In this article, we propose a novel class of methods for estimating parameters in ordinary differential equation (ODE) models, which is motivated by HIV dynamics modeling. The new methods exploit the form of numerical discretization algorithms for an ODE solver to formulate estimating equations. First, a penalized-spline approach is employed to estimate the state variables, and the estimated state variables are then plugged into a discretization formula of an ODE solver to obtain the ODE parameter estimates via a regression approach. We consider three discretization methods of different order: Euler's method, the trapezoidal rule, and the Runge-Kutta method. A higher-order numerical algorithm reduces numerical error in the approximation of the derivative, which produces a more accurate estimate, but its computational cost is higher. To balance computational cost and estimation accuracy, we demonstrate, via simulation studies, that the trapezoidal discretization-based estimate is the best and is recommended for practical use. The asymptotic properties for the proposed numerical discretization-based estimators are established. Comparisons between the proposed methods and existing methods show a clear benefit of the proposed methods in regards to the trade-off between computational cost and estimation accuracy. We apply the proposed methods to an HIV study to further illustrate the usefulness of the proposed approaches. © 2012, The International Biometric Society.
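
    The discretization idea can be sketched on the simplest case: with smoothed state estimates in hand, an Euler step turns the ODE parameter into a linear regression coefficient. The synthetic example below uses dx/dt = -theta*x; the paper's method additionally uses penalized splines and higher-order rules such as the trapezoidal scheme:

    ```python
    # Euler-discretization parameter estimation for dx/dt = -theta * x:
    # approximate the derivative by finite differences, then regress.
    import numpy as np

    theta_true, dt = 0.8, 0.05
    t = np.arange(0, 5, dt)
    rng = np.random.default_rng(4)
    x = np.exp(-theta_true * t) + 0.001 * rng.standard_normal(len(t))

    dxdt = np.diff(x) / dt                # Euler approximation of the derivative
    # regression dxdt_k = -theta * x_k  =>  theta = -<x, dxdt> / <x, x>
    theta_hat = -(x[:-1] @ dxdt) / (x[:-1] @ x[:-1])
    print(theta_hat)                      # close to 0.8
    ```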

  2. Fast Simulation of Large-Scale Floods Based on GPU Parallel Computing

    Directory of Open Access Journals (Sweden)

    Qiang Liu

    2018-05-01

    Full Text Available Computing speed is a significant issue of large-scale flood simulations for real-time response to disaster prevention and mitigation. Even today, most large-scale flood simulations are generally run on supercomputers due to the massive amounts of data and computations necessary. In this work, a two-dimensional shallow water model based on an unstructured Godunov-type finite volume scheme was proposed for flood simulation. To realize a fast simulation of large-scale floods on a personal computer, a Graphics Processing Unit (GPU)-based high-performance computing method using the OpenACC application was adopted to parallelize the shallow water model. An unstructured data management method was presented to control the data transfer between the GPU and CPU (Central Processing Unit) with minimum overhead, and then both computation and data were offloaded from the CPU to the GPU, which exploited the computational capability of the GPU as much as possible. The parallel model was validated using various benchmarks and real-world case studies. The results demonstrate that speed-ups of up to one order of magnitude can be achieved in comparison with the serial model. The proposed parallel model provides a fast and reliable tool with which to quickly assess flood hazards in large-scale areas and, thus, has a bright application prospect for dynamic inundation risk identification and disaster assessment.

  3. Fast humidity sensors based on CeO2 nanowires

    International Nuclear Information System (INIS)

    Fu, X Q; Wang, C; Yu, H C; Wang, Y G; Wang, T H

    2007-01-01

    Fast humidity sensors are reported that are based on CeO2 nanowires synthesized by a hydrothermal method. Both the response and recovery time are about 3 s, and are independent of the humidity. The sensitivity increases gradually as the humidity increases, and is up to 85 at 97% RH. The resistance decreases exponentially with increasing humidity, implying ion-type conductivity as the humidity sensing mechanism. A model based on the morphology and surface energy of the nanowires is given to explain these results further. Our experimental results indicate a pathway to improving the performance of humidity sensors

  4. Nuclear reaction models - source term estimation for safety design in accelerators

    International Nuclear Information System (INIS)

    Nandy, Maitreyee

    2013-01-01

    Accelerator driven subcritical systems (ADSS) employ proton-induced spallation reactions at a few GeV. Safety design of these systems involves source term estimation in two steps: multiple fragmentation of the target and n+γ emission through a fast process, followed by statistical decay of the primary fragments. The prompt radiation field is estimated in the framework of quantum molecular dynamics (QMD) theory, intra-nuclear cascade or Monte Carlo calculations. A few nuclear reaction model codes used for this purpose are QMD, JQMD, Bertini, INCL4, PHITS, followed by statistical decay codes like ABLA, GEM, GEMINI, etc. In the case of electron accelerators, photons and photoneutrons dominate the prompt radiation field. High energy photon yield through Bremsstrahlung is estimated in the framework of the Born approximation, while photoneutron production is calculated using giant dipole resonance and quasi-deuteron formation cross sections. In this talk hybrid and exciton PEQ models and the QMD formalism will be discussed briefly

  5. Multiple data fusion for rainfall estimation using a NARX-based recurrent neural network – the development of the REIINN model

    International Nuclear Information System (INIS)

    Ang, M R C O; Gonzalez, R M; Castro, P P M

    2014-01-01

    Rainfall, one of the important elements of the hydrologic cycle, is also the most difficult to model. Thus, accurate rainfall estimation is necessary, especially in localized catchment areas where the variability of rainfall is extremely high. Moreover, early warning of severe rainfall through timely and accurate estimation and forecasting could help prevent disasters from flooding. This paper presents the development of two rainfall estimation models that utilize a NARX-based recurrent neural network architecture, namely REIINN 1 and REIINN 2. These REIINN models, or Rainfall Estimation by Information Integration using Neural Networks, were trained using MTSAT cloud-top temperature (CTT) images and rainfall rates from the combined rain gauge and TMPA 3B40RT datasets. Model performance was assessed using two metrics - root mean square error (RMSE) and correlation coefficient (R). REIINN 1 yielded an RMSE of 8.1423 mm/3h and an overall R of 0.74652 while REIINN 2 yielded an RMSE of 5.2303 mm/3h and an overall R of 0.90373. The results, especially those of REIINN 2, are very promising for satellite-based rainfall estimation at a catchment scale. It is believed that model performance and accuracy will greatly improve with denser and more spatially distributed in-situ rainfall measurements to calibrate the model with. The models proved the viability of using remote sensing images, with their good spatial coverage, near real time availability, and relatively inexpensive acquisition, as an alternative source for rainfall estimation to complement existing ground-based measurements

  6. The fast debris evolution model

    Science.gov (United States)

    Lewis, H. G.; Swinerd, G. G.; Newland, R. J.; Saunders, A.

    2009-09-01

    The 'particles-in-a-box' (PIB) model introduced by Talent [Talent, D.L. Analytic model for orbital debris environmental management. J. Spacecraft Rocket, 29 (4), 508-513, 1992.] removed the need for computer-intensive Monte Carlo simulation to predict the gross characteristics of an evolving debris environment. The PIB model was described using a differential equation that allows the stability of the low Earth orbit (LEO) environment to be tested by a straightforward analysis of the equation's coefficients. As part of an ongoing research effort to investigate more efficient approaches to evolutionary modelling and to develop a suite of educational tools, a new PIB model has been developed. The model, entitled Fast Debris Evolution (FADE), employs a first-order differential equation to describe the rate at which new objects ⩾10 cm are added and removed from the environment. Whilst Talent [Talent, D.L. Analytic model for orbital debris environmental management. J. Spacecraft Rocket, 29 (4), 508-513, 1992.] based the collision theory for the PIB approach on collisions between gas particles and adopted specific values for the parameters of the model from a number of references, the form and coefficients of the FADE model equations can be inferred from the outputs of future projections produced by high-fidelity models, such as the DAMAGE model. The FADE model has been implemented as a client-side, web-based service using JavaScript embedded within a HTML document. Due to the simple nature of the algorithm, FADE can deliver the results of future projections immediately in a graphical format, with complete user-control over key simulation parameters. Historical and future projections for the ⩾10 cm LEO debris environment under a variety of different scenarios are possible, including business as usual, no future launches, post-mission disposal and remediation. A selection of results is presented with comparisons with predictions made using the DAMAGE environment model
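
    A PIB-style evolution equation of the kind FADE integrates can be sketched as a first-order ODE for the object count N(t), with a launch/deposition term, a removal term, and a collision term. The coefficients below are arbitrary illustrative values, not those inferred from DAMAGE projections:

    ```python
    # Forward-Euler integration of a 'particles-in-a-box' debris equation:
    # dN/dt = A + B*N + C*N^2 for the >=10 cm LEO population.
    A, B, C = 300.0, -0.02, 1e-8   # deposition, removal, collision coefficients
    N, years, dt = 15000.0, 100, 0.1
    for _ in range(int(years / dt)):
        dN = A + B * N + C * N**2  # net rate of change of the population
        N += dN * dt
    print(N)                        # projected population after 100 years
    ```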

  7. Gradient-based stochastic estimation of the density matrix

    Science.gov (United States)

    Wang, Zhentao; Chern, Gia-Wei; Batista, Cristian D.; Barros, Kipton

    2018-03-01

    Fast estimation of the single-particle density matrix is key to many applications in quantum chemistry and condensed matter physics. The best numerical methods leverage the fact that the density matrix elements f(H)_{ij} decay rapidly with distance r_{ij} between orbitals. This decay is usually exponential. However, for the special case of metals at zero temperature, algebraic decay of the density matrix appears and poses a significant numerical challenge. We introduce a gradient-based probing method to estimate all local density matrix elements at a computational cost that scales linearly with system size. For zero-temperature metals, the stochastic error scales like S^{-(d+2)/(2d)}, where d is the dimension and S is a prefactor to the computational cost. The convergence becomes exponential if the system is at finite temperature or is insulating.
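
    A plain (non-gradient) stochastic probing estimate conveys the idea the paper builds on: for random Rademacher vectors z, the expectation of z*(f(H)z) recovers the diagonal of f(H). A dense toy sketch follows; the paper's gradient-based scheme and linear-scaling machinery are not reproduced:

    ```python
    # Stochastic probing estimate of diag(f(H)) for a toy Hamiltonian H,
    # with f(H) the zero-temperature occupation (projector onto filled states).
    import numpy as np

    rng = np.random.default_rng(5)
    n = 50
    H = rng.standard_normal((n, n)); H = (H + H.T) / 2    # toy Hamiltonian
    w, V = np.linalg.eigh(H)
    mu = np.median(w)                                     # half filling
    F = V @ np.diag((w < mu).astype(float)) @ V.T         # f(H)

    S = 2000                                              # number of probes
    est = np.zeros(n)
    for _ in range(S):
        z = rng.choice([-1.0, 1.0], n)                    # Rademacher vector
        est += z * (F @ z)                                # one probing sample
    est /= S
    print(np.max(np.abs(est - np.diag(F))))               # stochastic error
    ```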

  8. Combining an Electrothermal and Impedance Aging Model to Investigate Thermal Degradation Caused by Fast Charging

    Directory of Open Access Journals (Sweden)

    Joris de Hoog

    2018-03-01

    Full Text Available Fast charging is an exciting topic in the field of electric and hybrid electric vehicles (EVs/HEVs). In order to achieve faster charging times, fast-charging applications involve high-current profiles which can lead to a large cell temperature increase, and in some cases thermal runaway. There has been some research on the impact caused by fast-charging profiles. This research is mostly focused on the electrical, thermal and aging aspects of the cell individually, but these factors are never treated together. In this paper, the thermal progression of the lithium-ion battery under specific fast-charging profiles is investigated and modeled. The cell is a Lithium Nickel Manganese Cobalt Oxide/graphite-based (NMC) cell rated at 20 Ah, and thermal images during fast charging have been taken at four degradation states: 100%, 90%, 85%, and 80% State-of-Health (SoH). A semi-empirical resistance aging model is developed using gathered data from extensive cycling and calendar aging tests, which is coupled to an electrothermal model. This novel combined model achieves good agreement with the measurements, with simulation results always within 2 °C of the measured values. This study presents a modeling methodology that is usable to predict the potential temperature distribution for lithium-ion batteries (LiBs) during fast-charging profiles at different aging states, which would be of benefit for Battery Management Systems (BMS) in future thermal strategies.

  9. INTEGRATED SPEED ESTIMATION MODEL FOR MULTILANE EXPRESSWAYS

    Science.gov (United States)

    Hong, Sungjoon; Oguchi, Takashi

    In this paper, an integrated speed-estimation model is developed based on empirical analyses for the basic sections of intercity multilane expressways under the uncongested condition. This model enables a speed estimation for each lane at any site under arbitrary highway-alignment, traffic (traffic flow and truck percentage), and rainfall conditions. By combining this model and a lane-use model which estimates traffic distribution on the lanes by each vehicle type, it is also possible to estimate an average speed across all the lanes of one direction from a traffic demand by vehicle type under specific highway-alignment and rainfall conditions. This model is expected to be a tool for the evaluation of traffic performance for expressways when the performance measure is travel speed, which is necessary for Performance-Oriented Highway Planning and Design. Regarding the highway-alignment condition, two new estimators, called effective horizontal curvature and effective vertical grade, are proposed in this paper which take into account the influence of upstream and downstream alignment conditions. They are applied to the speed-estimation model, and it shows increased accuracy of the estimation.

  10. Accurate position estimation methods based on electrical impedance tomography measurements

    Science.gov (United States)

    Vergara, Samuel; Sbarbaro, Daniel; Johansen, T. A.

    2017-08-01

    Electrical impedance tomography (EIT) is a technology that estimates the electrical properties of a body or a cross section. Its main advantages are its non-invasiveness, low cost and operation free of radiation. The estimation of the conductivity field leads to low resolution images compared with other technologies, and high computational cost. However, in many applications the target information lies in a low intrinsic dimensionality of the conductivity field. The estimation of this low-dimensional information is addressed in this work. It proposes optimization-based and data-driven approaches for estimating this low-dimensional information. The accuracy of the results obtained with these approaches depends on modelling and experimental conditions. Optimization approaches are sensitive to model discretization, type of cost function and searching algorithms. Data-driven methods are sensitive to the assumed model structure and the data set used for parameter estimation. The system configuration and experimental conditions, such as the number of electrodes and the signal-to-noise ratio (SNR), also have an impact on the results. In order to illustrate the effects of all these factors, the position estimation of a circular anomaly is addressed. Optimization methods based on weighted error cost functions and derivative-free optimization algorithms provided the best results. Data-driven approaches based on linear models provided, in this case, good estimates, but the use of nonlinear models enhanced the estimation accuracy. The results obtained by optimization-based algorithms were less sensitive to experimental conditions, such as the number of electrodes and SNR, than data-driven approaches. Position estimation mean squared errors for simulation and experimental conditions were more than twice as large for the optimization-based approaches compared with the data-driven ones. The experimental position estimation mean squared error of the data-driven models using a 16-electrode setup was less

  11. Offset-Free Model Predictive Control of Open Water Channel Based on Moving Horizon Estimation

    Science.gov (United States)

    Ekin Aydin, Boran; Rutten, Martine

    2016-04-01

    Model predictive control (MPC) is a powerful control option which is increasingly used by operational water managers for managing water systems. The explicit consideration of constraints and multi-objective management are important features of MPC. However, due to the water loss in open water systems by seepage, leakage and evaporation, a mismatch between the model and the real system will be created. This mismatch affects the performance of MPC and creates an offset from the reference set point of the water level. We present model predictive control based on moving horizon estimation (MHE-MPC) to achieve offset-free control of the water level for open water canals. MHE-MPC uses the past predictions of the model and the past measurements of the system to estimate unknown disturbances, and the offset in the controlled water level is systematically removed. We numerically tested MHE-MPC on an accurate hydro-dynamic model of the laboratory canal UPC-PAC located in Barcelona. In addition, we also applied a well-known disturbance-modeling offset-free control scheme to the same test case. Simulation experiments on a single canal reach show that MHE-MPC outperforms the disturbance-modeling offset-free control scheme.

  12. An evolutionary firefly algorithm for the estimation of nonlinear biological model parameters.

    Directory of Open Access Journals (Sweden)

    Afnizanfaizal Abdullah

    Full Text Available The development of accurate computational models of biological processes is fundamental to computational systems biology. These models are usually represented by mathematical expressions that rely heavily on the system parameters. The measurement of these parameters is often difficult. Therefore, they are commonly estimated by fitting the predicted model to the experimental data using optimization methods. The complexity and nonlinearity of the biological processes pose a significant challenge, however, to the development of accurate and fast optimization methods. We introduce a new hybrid optimization method incorporating the Firefly Algorithm and the evolutionary operation of the Differential Evolution method. The proposed method improves solutions by neighbourhood search using evolutionary procedures. Testing our method on models for the arginine catabolism and the negative feedback loop of the p53 signalling pathway, we found that it estimated the parameters with high accuracy and within a reasonable computation time compared to well-known approaches, including Particle Swarm Optimization, Nelder-Mead, and Firefly Algorithm. We have also verified the reliability of the parameters estimated by the method using an a posteriori practical identifiability test.
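
    A minimal sketch of the kind of hybrid described above: fireflies move toward brighter (better-fitting) candidates, while a differential-evolution mutation step injects evolutionary diversity. The toy least-squares objective, the rates and the exact way the two operators are interleaved are illustrative assumptions, not the authors' exact scheme.

    ```python
    # Hedged sketch of one firefly/differential-evolution hybrid loop.
    import numpy as np

    rng = np.random.default_rng(1)

    def objective(p, t, y):
        # toy model y = p0 * exp(-p1 * t); fitness = sum of squared residuals
        return np.sum((y - p[0] * np.exp(-p[1] * t)) ** 2)

    t = np.linspace(0, 5, 50)
    y = 2.0 * np.exp(-0.8 * t) + 0.05 * rng.standard_normal(t.size)

    n_fireflies, dim = 20, 2
    pop = rng.uniform(0.0, 3.0, size=(n_fireflies, dim))   # candidate parameter sets
    beta0, gamma, alpha, F = 1.0, 1.0, 0.05, 0.5           # illustrative rates

    for _ in range(100):
        fit = np.array([objective(p, t, y) for p in pop])
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if fit[j] < fit[i]:                        # move toward a brighter firefly
                    r2 = np.sum((pop[i] - pop[j]) ** 2)
                    step = beta0 * np.exp(-gamma * r2) * (pop[j] - pop[i])
                    pop[i] = pop[i] + step + alpha * (rng.random(dim) - 0.5)
            # differential-evolution mutation: the hybrid, diversity-injecting step
            a, b, c = rng.choice(n_fireflies, size=3, replace=False)
            trial = pop[a] + F * (pop[b] - pop[c])
            if objective(trial, t, y) < objective(pop[i], t, y):
                pop[i] = trial

    best = min(pop, key=lambda p: objective(p, t, y))
    print("estimated parameters (true: 2.0, 0.8):", np.round(best, 3))
    ```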

  13. An evolutionary firefly algorithm for the estimation of nonlinear biological model parameters.

    Science.gov (United States)

    Abdullah, Afnizanfaizal; Deris, Safaai; Anwar, Sohail; Arjunan, Satya N V

    2013-01-01

    The development of accurate computational models of biological processes is fundamental to computational systems biology. These models are usually represented by mathematical expressions that rely heavily on the system parameters. The measurement of these parameters is often difficult. Therefore, they are commonly estimated by fitting the predicted model to the experimental data using optimization methods. The complexity and nonlinearity of the biological processes pose a significant challenge, however, to the development of accurate and fast optimization methods. We introduce a new hybrid optimization method incorporating the Firefly Algorithm and the evolutionary operation of the Differential Evolution method. The proposed method improves solutions by neighbourhood search using evolutionary procedures. Testing our method on models for the arginine catabolism and the negative feedback loop of the p53 signalling pathway, we found that it estimated the parameters with high accuracy and within a reasonable computation time compared to well-known approaches, including Particle Swarm Optimization, Nelder-Mead, and Firefly Algorithm. We have also verified the reliability of the parameters estimated by the method using an a posteriori practical identifiability test.

  14. A method for the fast estimation of a battery entropy-variation high-resolution curve - Application on a commercial LiFePO4/graphite cell

    Science.gov (United States)

    Damay, Nicolas; Forgez, Christophe; Bichat, Marie-Pierre; Friedrich, Guy

    2016-11-01

    The entropy-variation of a battery is responsible for heat generation or consumption during operation, and its prior measurement is mandatory for developing a thermal model. It is generally done through the potentiometric method, which is considered a reference. However, it requires several days or weeks to get a look-up table with a 5 or 10% SoC (State of Charge) resolution. In this study, a calorimetric method based on the inversion of a thermal model is proposed for the fast estimation of a nearly continuous curve of entropy-variation. This is achieved by separating the heats produced while charging and discharging the battery. The entropy-variation is then deduced from the extracted entropic heat. The proposed method is validated by comparing the results obtained with several current rates to measurements made with the potentiometric method.
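
    The separation idea can be summarized in a few lines: at the same current magnitude, the irreversible heat keeps its sign on charge and discharge while the entropic (reversible) heat flips sign, so halving the difference isolates the entropic part. The sketch below assumes the heat rates have already been extracted by inverting the thermal model, and all numbers are invented.

    ```python
    # Minimal sketch of the charge/discharge heat-separation principle.
    import numpy as np

    I = 2.0          # A, current magnitude used for both charge and discharge
    T = 298.15       # K, cell temperature

    # heat generation rates (W) obtained by inverting a thermal model (assumed given)
    q_charge = np.array([0.45, 0.52, 0.60])
    q_discharge = np.array([0.35, 0.40, 0.46])

    # irreversible heat cancels in the difference; entropic heat flips sign
    q_entropic = 0.5 * (q_charge - q_discharge)
    dUdT = q_entropic / (I * T)      # entropy-variation dU/dT (V/K), sign convention assumed
    print("dU/dT [mV/K]:", 1e3 * dUdT)
    ```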

  15. Very Fast Estimation of Epicentral Distance and Magnitude from a Single Three Component Seismic Station Using Machine Learning Techniques

    Science.gov (United States)

    Ochoa Gutierrez, L. H.; Niño Vasquez, L. F.; Vargas-Jimenez, C. A.

    2012-12-01

    To minimize adverse effects originated by high-magnitude earthquakes, early warning has become a powerful tool for anticipating the arrival of a seismic wave at a specific location, giving people and government agencies timely information to initiate a fast response. To do this, a very fast and accurate characterization of the event must be done, but this process is usually performed using seismograms recorded at no fewer than 4 stations, and the processing time is often greater than the travel time of the wave to the area of interest, especially in coarse networks. A faster procedure is possible if only one three-component seismic station is used, namely the closest unsaturated station with respect to the epicenter. Here we present a Support Vector Regression algorithm which calculates magnitude and epicentral distance using only 5 seconds of signal after the P-wave onset. This algorithm was trained with 36 records of historical earthquakes, where the inputs were the regression parameters of an exponential function fitted by least squares to the waveform envelope, together with the maximum value of the observed waveform, for each component of a single station. A 10-fold cross-validation was applied with a normalized polynomial kernel, obtaining the mean absolute error for different exponents and complexity parameters. Magnitude could be estimated with a mean absolute error of 0.16, and the distance with an error of 7.5 km for distances between 60 and 120 km. This kind of algorithm is easy to implement in hardware and can run directly at the field station, making it possible to broadcast these estimates and support fast decisions at seismological control centers, thereby increasing the possibility of an effective reaction.
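
    A hedged sketch of the regression stage: an exponential envelope A·exp(B·t) is fitted by least squares to 5 s of signal, and the fitted parameters plus the waveform maximum feed a polynomial-kernel support vector regressor. The synthetic single-component training set, feature choices and hyperparameters below are illustrative, not the study's exact setup.

    ```python
    # Hedged sketch: envelope features -> SVR magnitude estimate.
    import numpy as np
    from scipy.optimize import curve_fit
    from sklearn.svm import SVR

    rng = np.random.default_rng(2)
    t = np.linspace(0, 5, 500)                   # 5 s of signal after the P onset

    def envelope_features(x):
        """Least-squares fit of A*exp(B*t) to the rectified waveform, plus its peak."""
        env = np.abs(x)
        (A, B), _ = curve_fit(lambda tt, a, b: a * np.exp(b * tt), t, env,
                              p0=(env.max(), -0.1), maxfev=5000)
        return [A, B, env.max()]

    # toy single-component training set: 36 synthetic "events" of known magnitude
    X, y_mag = [], []
    for _ in range(36):
        mag = rng.uniform(3.0, 7.0)
        x = mag * np.exp(-0.5 * t) * np.sin(40 * t) + 0.1 * rng.standard_normal(t.size)
        X.append(envelope_features(x))
        y_mag.append(mag)

    svr = SVR(kernel="poly", degree=2, C=10.0)   # stand-in for the normalized polynomial kernel
    svr.fit(np.array(X), np.array(y_mag))
    print("predicted magnitude:", svr.predict([X[0]])[0])
    ```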

  16. Model reduction for slow–fast stochastic systems with metastable behaviour

    International Nuclear Information System (INIS)

    Bruna, Maria; Chapman, S. Jonathan; Smith, Matthew J.

    2014-01-01

    The quasi-steady-state approximation (or stochastic averaging principle) is a useful tool in the study of multiscale stochastic systems, giving a practical method by which to reduce the number of degrees of freedom in a model. The method is extended here to slow–fast systems in which the fast variables exhibit metastable behaviour. The key parameter that determines the form of the reduced model is the ratio of the timescale for the switching of the fast variables between metastable states to the timescale for the evolution of the slow variables. The method is illustrated with two examples: one from biochemistry (a fast-species-mediated chemical switch coupled to a slower varying species), and one from ecology (a predator–prey system). Numerical simulations of each model reduction are compared with those of the full system

  17. Statistical inference based on latent ability estimates

    NARCIS (Netherlands)

    Hoijtink, H.J.A.; Boomsma, A.

    The quality of approximations to first and second order moments (e.g., statistics like means, variances, regression coefficients) based on latent ability estimates is discussed. The ability estimates are obtained using either the Rasch or the two-parameter logistic model. Straightforward use

  18. Application of semi-empirical modeling and non-linear regression to unfolding fast neutron spectra from integral reaction rate data

    International Nuclear Information System (INIS)

    Harker, Y.D.

    1976-01-01

    A semi-empirical analytical expression representing a fast reactor neutron spectrum has been developed. This expression was used in a non-linear regression computer routine to obtain from measured multiple foil integral reaction data the neutron spectrum inside the Coupled Fast Reactivity Measurement Facility. In this application six parameters in the analytical expression for neutron spectrum were adjusted in the non-linear fitting process to maximize consistency between calculated and measured integral reaction rates for a set of 15 dosimetry detector foils. In two-thirds of the observations the calculated integral agreed with its respective measured value to within the experimental standard deviation, and in all but one case agreement within two standard deviations was obtained. Based on this quality of fit the estimated 70 to 75 percent confidence intervals for the derived spectrum are 10 to 20 percent for the energy range 100 eV to 1 MeV, 10 to 50 percent for 1 MeV to 10 MeV and 50 to 90 percent for 10 MeV to 18 MeV. The analytical model has demonstrated a flexibility to describe salient features of neutron spectra of the fast reactor type. The use of regression analysis with this model has produced a stable method to derive neutron spectra from a limited amount of integral data
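
    Schematically, the unfolding works as follows: a parametric spectrum shape predicts each foil's integral reaction rate through its cross section, and nonlinear least squares adjusts the spectrum parameters until predicted and measured rates agree. The two-parameter spectrum and random cross sections below are placeholders, not the paper's six-parameter semi-empirical expression.

    ```python
    # Hedged sketch of spectrum unfolding by nonlinear regression.
    import numpy as np
    from scipy.optimize import least_squares

    E = np.logspace(-4, 1.3, 200)                 # energy grid, ~100 eV to ~20 MeV (in MeV)

    def spectrum(E, p):
        # placeholder shape: 1/E slowing-down part plus a fission-like tail
        return p[0] / E + p[1] * np.sqrt(E) * np.exp(-E / 1.3)

    rng = np.random.default_rng(5)
    sigmas = rng.uniform(0.1, 2.0, size=(15, E.size))   # 15 fake foil cross sections

    p_true = np.array([0.5, 3.0])
    rates = sigmas @ (spectrum(E, p_true) * np.gradient(E))   # "measured" integrals
    rates *= 1 + 0.03 * rng.standard_normal(15)               # measurement noise

    def residuals(p):
        pred = sigmas @ (spectrum(E, p) * np.gradient(E))
        return (pred - rates) / rates             # relative residuals

    fit = least_squares(residuals, x0=[1.0, 1.0])
    print("fitted spectrum parameters:", fit.x)
    ```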

  19. Development of Web-Based RECESS Model for Estimating Baseflow Using SWAT

    Directory of Open Access Journals (Sweden)

    Gwanjae Lee

    2014-04-01

    Full Text Available Groundwater has received increasing attention as an important strategic water resource for adaptation to climate change. In this regard, the separation of baseflow from streamflow and the analysis of recession curves make a significant contribution to integrated river basin management. The United States Geological Survey (USGS) RECESS model adopting the master-recession curve (MRC) method can enhance the accuracy with which baseflow may be separated from streamflow, compared to other baseflow-separation schemes that are more limited in their ability to reflect various watershed/aquifer characteristics. The RECESS model has been widely used for the analysis of hydrographs, but the applications using RECESS were only available through the Microsoft Disk Operating System (MS-DOS). Thus, this study aims to develop a web-based RECESS model for easy separation of baseflow from streamflow, with easy applications for ungauged regions. RECESS on the web derived the alpha factor, which is a baseflow recession constant in the Soil Water Assessment Tool (SWAT), and this variable was provided to SWAT as the input. The results showed that the alpha factor estimated from the web-based RECESS model improved the predictions of streamflow and recession. Furthermore, these findings showed that the baseflow characteristics of the ungauged watersheds were influenced by the land use and slope angle of watersheds, as well as by precipitation and streamflow.
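
    The alpha factor itself follows from the exponential recession assumption Q(t) = Q0·exp(-alpha·t), so it can be read off as the negated slope of ln(Q) against time on a recession limb, as in the sketch below (streamflow values invented).

    ```python
    # Minimal sketch of estimating SWAT's baseflow alpha factor from a recession limb.
    import numpy as np

    days = np.arange(10)                     # 10-day recession limb
    Q = 12.0 * np.exp(-0.045 * days)         # synthetic streamflow (m^3/s)

    slope, intercept = np.polyfit(days, np.log(Q), 1)
    alpha = -slope                           # baseflow recession constant (1/day)
    print(f"alpha factor: {alpha:.3f} per day")
    ```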

  20. V and V-based remaining fault estimation model for safety–critical software of a nuclear power plant

    International Nuclear Information System (INIS)

    Eom, Heung-seop; Park, Gee-yong; Jang, Seung-cheol; Son, Han Seong; Kang, Hyun Gook

    2013-01-01

    Highlights: ► A software fault estimation model based on Bayesian Nets and V and V. ► Use of quantified data derived from qualitative V and V results. ► Faults insertion and elimination process was modeled in the context of probability. ► Systematically estimates the expected number of remaining faults. -- Abstract: Quantitative software reliability measurement approaches have some limitations in demonstrating the proper level of reliability in cases of safety–critical software. One of the more promising alternatives is the use of software development quality information. Particularly in the nuclear industry, regulatory bodies in most countries use both probabilistic and deterministic measures for ensuring the reliability of safety-grade digital computers in NPPs. The point of deterministic criteria is to assess the whole development process and its related activities during the software development life cycle for the acceptance of safety–critical software. In addition software Verification and Validation (V and V) play an important role in this process. In this light, we propose a V and V-based fault estimation method using Bayesian Nets to estimate the remaining faults for safety–critical software after the software development life cycle is completed. By modeling the fault insertion and elimination processes during the whole development phases, the proposed method systematically estimates the expected number of remaining faults.

  1. View Estimation Based on Value System

    Science.gov (United States)

    Takahashi, Yasutake; Shimada, Kouki; Asada, Minoru

    Estimation of a caregiver's view is one of the most important capabilities for a child to understand the behavior demonstrated by the caregiver, that is, to infer the intention of the behavior and/or to learn the observed behavior efficiently. We hypothesize that the child develops this ability in the same way as behavior learning motivated by an intrinsic reward: he/she updates the model of his/her own estimated view during the behavior imitated from observation of the behavior demonstrated by the caregiver, based on minimizing the estimation error of the reward during the behavior. From this view, this paper presents a method for acquiring such a capability based on a value system from which values can be obtained by reinforcement learning. The parameters of the view estimation are updated based on the temporal difference error (hereafter TD error: the estimation error of the state value), analogous to the way the parameters of the state value of the behavior are updated based on the TD error. Experiments with simple humanoid robots show the validity of the method, and the developmental process parallel to young children's estimation of their own view during imitation of the observed behavior of the caregiver is discussed.
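
    A toy sketch of the shared update rule: the temporal-difference error that trains the state-value estimate also drives the view-estimation parameters. The environment, the shape of the parameter vector and the learning rates are illustrative assumptions, not the paper's architecture.

    ```python
    # Toy TD(0) loop in which the same TD error updates both quantities.
    import numpy as np

    gamma, lr = 0.9, 0.1
    V = np.zeros(5)              # state values
    w = np.zeros(5)              # view-estimation parameters (assumption: same shape)

    def step(s):                 # toy environment: move right, reward at the goal
        s_next = min(s + 1, 4)
        reward = 1.0 if s_next == 4 else 0.0
        return s_next, reward

    for _ in range(100):
        s = 0
        while s != 4:
            s_next, r = step(s)
            td_error = r + gamma * V[s_next] - V[s]
            V[s] += lr * td_error        # standard TD(0) value update
            w[s] += lr * td_error        # view parameters driven by the same error
            s = s_next

    print("state values:", np.round(V, 2))
    ```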

  2. Estimation of vegetation photosynthetic capacity from space-based measurements of chlorophyll fluorescence for terrestrial biosphere models.

    Science.gov (United States)

    Zhang, Yongguang; Guanter, Luis; Berry, Joseph A; Joiner, Joanna; van der Tol, Christiaan; Huete, Alfredo; Gitelson, Anatoly; Voigt, Maximilian; Köhler, Philipp

    2014-12-01

    Photosynthesis simulations by terrestrial biosphere models are usually based on the Farquhar model, in which the maximum rate of carboxylation (Vcmax) is a key control parameter of photosynthetic capacity. Even though Vcmax is known to vary substantially in space and time in response to environmental controls, it is typically parameterized in models with tabulated values associated with plant functional types. Remote sensing can be used to produce a spatially continuous and temporally resolved view on photosynthetic efficiency, but traditional vegetation observations based on spectral reflectance lack a direct link to plant photochemical processes. Alternatively, recent space-borne measurements of sun-induced chlorophyll fluorescence (SIF) can offer an observational constraint on photosynthesis simulations. Here, we show that top-of-canopy SIF measurements from space are sensitive to Vcmax at the ecosystem level, and present an approach to invert Vcmax from SIF data. We use the Soil-Canopy Observation of Photosynthesis and Energy (SCOPE) balance model to derive empirical relationships between seasonal Vcmax and SIF which are used to solve the inverse problem. We evaluate our Vcmax estimation method at six agricultural flux tower sites in the midwestern US using space-based SIF retrievals. Our Vcmax estimates agree well with literature values for corn and soybean plants (average values of 37 and 101 μmol m^-2 s^-1, respectively) and show plausible seasonal patterns. The effect of the updated seasonally varying Vcmax parameterization on simulated gross primary productivity (GPP) is tested by comparing to simulations with fixed Vcmax values. Validation against flux tower observations demonstrates that simulations of GPP and light use efficiency improve significantly when our time-resolved Vcmax estimates from SIF are used, with R^2 for GPP comparisons increasing from 0.85 to 0.93, and for light use efficiency from 0.44 to 0.83. Our results support the use of

  3. A Kalman-based Fundamental Frequency Estimation Algorithm

    DEFF Research Database (Denmark)

    Shi, Liming; Nielsen, Jesper Kjær; Jensen, Jesper Rindom

    2017-01-01

    Fundamental frequency estimation is an important task in speech and audio analysis. Harmonic model-based methods typically have superior estimation accuracy. However, such methods usually assume that the fundamental frequency and amplitudes are stationary over a short time frame. In this pape

  4. Development of Mathematical Model and Analysis Code for Estimating Drop Behavior of the Control Rod Assembly in the Sodium Cooled Fast Reactor

    International Nuclear Information System (INIS)

    Oh, Se-Hong; Kang, SeungHoon; Choi, Choengryul; Yoon, Kyung Ho; Cheon, Jin Sik

    2016-01-01

    On receiving the scram signal, the control rod assemblies are released to fall into the reactor core by their weight. Thus the drop time and falling velocity of the control rod assembly must be estimated for the safety evaluation. There are three typical ways to estimate the drop behavior of the control rod assembly in scram action: experimental, numerical and theoretical methods. But experimental and numerical (CFD) methods require a lot of cost and time. Thus, these methods are difficult to apply in the initial design process. In this study, a mathematical model and theoretical analysis code have been developed in order to estimate the drop behavior of the control rod assembly and to provide the underlying data for the design optimization. A simplified control rod assembly model is considered to minimize the uncertainty in the development process. And the hydraulic circuit analysis technique is adopted to evaluate the internal/external flow distribution of the control rod assembly. Finally, the theoretical analysis code (named HEXCON) has been developed based on the mathematical model. To verify the reliability of the developed code, CFD analysis has been conducted. And a calculation using the developed analysis code was carried out under the same condition, and both results were compared

  5. Estimating monthly temperature using point based interpolation techniques

    Science.gov (United States)

    Saaban, Azizan; Mah Hashim, Noridayu; Murat, Rusdi Indra Zuhdi

    2013-04-01

    This paper discusses the use of point-based interpolation to estimate the value of temperature at unallocated meteorology stations in Peninsular Malaysia, using data from the year 2010 collected from the Malaysian Meteorology Department. Two point-based interpolation methods, Inverse Distance Weighted (IDW) and Radial Basis Function (RBF), are considered. The accuracy of the methods is evaluated using Root Mean Square Error (RMSE). The results show that RBF with a thin plate spline model is suitable as a temperature estimator for the months of January and December, while RBF with a multiquadric model is suitable to estimate the temperature for the rest of the months.
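
    For reference, IDW estimates a target value as a distance-weighted average of the station values; a minimal sketch with invented station data:

    ```python
    # Minimal inverse-distance-weighted (IDW) interpolation sketch.
    import numpy as np

    stations = np.array([[101.7, 3.1], [100.3, 5.4], [103.8, 1.5]])   # lon, lat (invented)
    temps = np.array([27.5, 26.1, 28.3])                              # deg C (invented)

    def idw(target, points, values, power=2.0):
        d = np.linalg.norm(points - target, axis=1)
        if np.any(d < 1e-12):               # target coincides with a station
            return values[np.argmin(d)]
        w = 1.0 / d**power                  # closer stations get larger weights
        return np.sum(w * values) / np.sum(w)

    print("estimated temperature:", idw(np.array([102.0, 3.0]), stations, temps))
    ```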

  6. CHIRP-Like Signals: Estimation, Detection and Processing A Sequential Model-Based Approach

    Energy Technology Data Exchange (ETDEWEB)

    Candy, J. V. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-08-04

    Chirp signals have evolved primarily from radar/sonar signal processing applications specifically attempting to estimate the location of a target in surveillance/tracking volume. The chirp, which is essentially a sinusoidal signal whose phase changes instantaneously at each time sample, has an interesting property in that its correlation approximates an impulse function. It is well-known that a matched-filter detector in radar/sonar estimates the target range by cross-correlating a replicant of the transmitted chirp with the measurement data reflected from the target back to the radar/sonar receiver yielding a maximum peak corresponding to the echo time and therefore enabling the desired range estimate. In this application, we perform the same operation as a radar or sonar system, that is, we transmit a “chirp-like pulse” into the target medium and attempt to first detect its presence and second estimate its location or range. Our problem is complicated by the presence of disturbance signals from surrounding broadcast stations as well as extraneous sources of interference in our frequency bands and of course the ever present random noise from instrumentation. First, we discuss the chirp signal itself and illustrate its inherent properties and then develop a model-based processing scheme enabling both the detection and estimation of the signal from noisy measurement data.
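
    A compact sketch of the matched-filter step described above: cross-correlating a chirp replica with the noisy received data produces a sharp correlation peak whose lag gives the echo delay, and hence the range. Sampling rate, propagation speed and noise level are invented.

    ```python
    # Matched-filter delay/range estimation sketch with a chirp replica.
    import numpy as np
    from scipy.signal import chirp, correlate

    fs = 10_000                                    # Hz
    t = np.arange(0, 0.05, 1 / fs)
    replica = chirp(t, f0=500, f1=2500, t1=t[-1])  # transmitted chirp pulse

    delay_samples = 1200                           # true echo delay (unknown in practice)
    rng = np.random.default_rng(3)
    rx = np.zeros(4000)
    rx[delay_samples:delay_samples + replica.size] += 0.5 * replica
    rx += 0.2 * rng.standard_normal(rx.size)       # instrumentation noise

    corr = correlate(rx, replica, mode="valid")    # matched filter
    est_delay = int(np.argmax(np.abs(corr)))
    c = 1500.0                                     # m/s, e.g. sound speed in water (assumption)
    print("estimated one-way range:", est_delay / fs * c / 2, "m")
    ```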

  7. Type-specific human papillomavirus biological features: validated model-based estimates.

    Directory of Open Access Journals (Sweden)

    Iacopo Baussano

    Full Text Available Infection with high-risk (hr) human papillomavirus (HPV) is considered the necessary cause of cervical cancer. Vaccination against HPV16 and 18 types, which are responsible for about 75% of cervical cancer worldwide, is expected to have a major global impact on cervical cancer occurrence. Valid estimates of the parameters that regulate the natural history of hrHPV infections are crucial to draw reliable projections of the impact of vaccination. We devised a mathematical model to estimate the probability of infection transmission, the rate of clearance, and the patterns of immune response following the clearance of infection of 13 hrHPV types. To test the validity of our estimates, we fitted the same transmission model to two large independent datasets from Italy and Sweden and assessed the consistency of the findings. The two populations, both unvaccinated, differed substantially by sexual behaviour, age distribution, and study setting (screening for cervical cancer or Chlamydia trachomatis infection). Estimated transmission probabilities of hrHPV types (80% for HPV16, 73%-82% for HPV18, and above 50% for most other types), clearance rates decreasing as a function of time since infection, and partial protection against re-infection with the same hrHPV type (approximately 20% for HPV16 and 50% for the other types) were similar in the two countries. The model could accurately predict the HPV16 prevalence observed in Italy among women who were not infected three years before. In conclusion, our models inform on biological parameters that cannot at the moment be measured directly from any empirical data but are essential to forecast the impact of HPV vaccination programmes.

  8. Estimation model for evaporative emissions from gasoline vehicles based on thermodynamics.

    Science.gov (United States)

    Hata, Hiroo; Yamada, Hiroyuki; Kokuryo, Kazuo; Okada, Megumi; Funakubo, Chikage; Tonokura, Kenichi

    2018-03-15

    In this study, we conducted seven-day diurnal breathing loss (DBL) tests on gasoline vehicles. We propose a model based on the theory of thermodynamics that can represent the experimental results of the current and previous studies. The experiments were performed using 14 physical parameters to determine the dependence of total emissions on temperature, fuel tank fill, and fuel vapor pressure. In most cases, total emissions after an apparent breakthrough were proportional to the difference between minimum and maximum environmental temperatures during the day, fuel tank empty space, and fuel vapor pressure. Volatile organic compounds (VOCs) were measured using a Gas Chromatography Mass Spectrometer and Flame Ionization Detector (GC-MS/FID) to determine the Ozone Formation Potential (OFP) of after-breakthrough gas emitted to the atmosphere. Using the experimental results, we constructed a thermodynamic model for estimating the amount of evaporative emissions after a fully saturated canister breakthrough occurred, and a comparison between the thermodynamic model and previous models was made. Finally, the total annual evaporative emissions and OFP in Japan were determined and compared by each model. Copyright © 2017 Elsevier B.V. All rights reserved.

  9. Model-dependent estimate on the connection between fast radio bursts and ultra high energy cosmic rays

    International Nuclear Information System (INIS)

    Li, Xiang; Zhou, Bei; He, Hao-Ning; Fan, Yi-Zhong; Wei, Da-Ming

    2014-01-01

    The existence of fast radio bursts (FRBs), a new type of extragalactic transient, has recently been established, and quite a few models have been proposed. In this work, we discuss the possible connection between the FRB sources and ultra high energy (>10^18 eV) cosmic rays. We show that in the blitzar model and the model of merging binary neutron stars, which includes the huge energy release of each FRB central engine together with the rather high rate of FRBs, the accelerated EeV cosmic rays may contribute significantly to the observed ones. In other FRB models, including, for example, the merger of double white dwarfs and the energetic magnetar radio flares, no significant EeV cosmic ray is expected. We also suggest that the mergers of double neutron stars, even if they are irrelevant to FRBs, may play a nonignorable role in producing EeV cosmic ray protons if supramassive neutron stars are formed in a sufficient fraction of mergers and the merger rate is ≳ 10^3 yr^-1 Gpc^-3. Such a possibility will be unambiguously tested in the era of gravitational wave astronomy.

  10. Model-based decoding, information estimation, and change-point detection techniques for multineuron spike trains.

    Science.gov (United States)

    Pillow, Jonathan W; Ahmadian, Yashar; Paninski, Liam

    2011-01-01

    One of the central problems in systems neuroscience is to understand how neural spike trains convey sensory information. Decoding methods, which provide an explicit means for reading out the information contained in neural spike responses, offer a powerful set of tools for studying the neural coding problem. Here we develop several decoding methods based on point-process neural encoding models, or forward models that predict spike responses to stimuli. These models have concave log-likelihood functions, which allow efficient maximum-likelihood model fitting and stimulus decoding. We present several applications of the encoding model framework to the problem of decoding stimulus information from population spike responses: (1) a tractable algorithm for computing the maximum a posteriori (MAP) estimate of the stimulus, the most probable stimulus to have generated an observed single- or multiple-neuron spike train response, given some prior distribution over the stimulus; (2) a gaussian approximation to the posterior stimulus distribution that can be used to quantify the fidelity with which various stimulus features are encoded; (3) an efficient method for estimating the mutual information between the stimulus and the spike trains emitted by a neural population; and (4) a framework for the detection of change-point times (the time at which the stimulus undergoes a change in mean or variance) by marginalizing over the posterior stimulus distribution. We provide several examples illustrating the performance of these estimators with simulated and real neural data.

  11. Parameter Estimation of Partial Differential Equation Models.

    Science.gov (United States)

    Xun, Xiaolei; Cao, Jiguo; Mallick, Bani; Carroll, Raymond J; Maity, Arnab

    2013-01-01

    Partial differential equation (PDE) models are commonly used to model complex dynamic systems in applied sciences such as biology and finance. The forms of these PDE models are usually proposed by experts based on their prior knowledge and understanding of the dynamic system. Parameters in PDE models often have interesting scientific interpretations, but their values are often unknown, and need to be estimated from the measurements of the dynamic system in the presence of measurement errors. Most PDEs used in practice have no analytic solutions, and can only be solved with numerical methods. Currently, methods for estimating PDE parameters require repeatedly solving PDEs numerically under thousands of candidate parameter values, and thus the computational load is high. In this article, we propose two methods to estimate parameters in PDE models: a parameter cascading method and a Bayesian approach. In both methods, the underlying dynamic process modeled with the PDE model is represented via basis function expansion. For the parameter cascading method, we develop two nested levels of optimization to estimate the PDE parameters. For the Bayesian method, we develop a joint model for data and the PDE, and develop a novel hierarchical model allowing us to employ Markov chain Monte Carlo (MCMC) techniques to make posterior inference. Simulation studies show that the Bayesian method and parameter cascading method are comparable, and both outperform other available methods in terms of estimation accuracy. The two methods are demonstrated by estimating parameters in a PDE model from LIDAR data.

  12. Comparisons of Modeling and State of Charge Estimation for Lithium-Ion Battery Based on Fractional Order and Integral Order Methods

    Directory of Open Access Journals (Sweden)

    Renxin Xiao

    2016-03-01

    Full Text Available In order to properly manage lithium-ion batteries of electric vehicles (EVs), it is essential to build the battery model and estimate the state of charge (SOC). In this paper, the fractional order forms of Thevenin and partnership for a new generation of vehicles (PNGV) models are built, of which the model parameters, including the fractional orders and the corresponding resistance and capacitance values, are simultaneously identified based on a genetic algorithm (GA). The relationships between different model parameters and SOC are established and analyzed. The calculation precisions of the fractional order model (FOM) and integral order model (IOM) are validated and compared under hybrid test cycles. Finally, an extended Kalman filter (EKF) is employed to estimate the SOC based on different models. The results prove that the FOMs can simulate the output voltage more accurately and the fractional order EKF (FOEKF) can estimate the SOC more precisely under dynamic conditions.
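
    For orientation, the sketch below shows one predict/update step of an integer-order (first-order Thevenin) EKF for SOC; the fractional-order variant studied above replaces the integer-order RC dynamics with fractional-order ones. The OCV curve, RC values and noise covariances are invented.

    ```python
    # One EKF predict/update step for SOC with a first-order Thevenin model.
    import numpy as np

    dt = 1.0                                  # s
    Q_cap = 3600.0 * 2.0                      # capacity in A*s (2 Ah, invented)
    R0, R1, C1 = 0.05, 0.03, 1000.0           # ohm, ohm, farad (invented)
    a1 = np.exp(-dt / (R1 * C1))              # RC relaxation over one step

    def ocv(soc):                             # toy open-circuit-voltage curve (assumption)
        return 3.2 + 0.7 * soc

    def docv(soc):                            # its derivative w.r.t. SOC
        return 0.7

    x = np.array([0.5, 0.0])                  # state: [SOC, RC voltage]
    P = np.diag([0.1, 0.01])
    Qn = np.diag([1e-7, 1e-6])                # process noise (invented)
    Rn = 1e-3                                 # measurement noise (invented)

    def ekf_step(x, P, current, v_meas):
        # predict: coulomb counting + RC relaxation (discharge current positive)
        x_pred = np.array([x[0] - dt * current / Q_cap,
                           a1 * x[1] + R1 * (1 - a1) * current])
        F = np.array([[1.0, 0.0], [0.0, a1]])
        P_pred = F @ P @ F.T + Qn
        # update with the terminal-voltage measurement
        v_pred = ocv(x_pred[0]) - x_pred[1] - R0 * current
        H = np.array([docv(x_pred[0]), -1.0])
        S = H @ P_pred @ H + Rn
        K = P_pred @ H / S
        x_new = x_pred + K * (v_meas - v_pred)
        P_new = (np.eye(2) - np.outer(K, H)) @ P_pred
        return x_new, P_new

    x, P = ekf_step(x, P, current=1.0, v_meas=3.47)   # one 1 A discharge step
    print("SOC estimate:", x[0])
    ```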

  13. Working covariance model selection for generalized estimating equations.

    Science.gov (United States)

    Carey, Vincent J; Wang, You-Gan

    2011-11-20

    We investigate methods for data-based selection of working covariance models in the analysis of correlated data with generalized estimating equations. We study two selection criteria: Gaussian pseudolikelihood and a geodesic distance based on discrepancy between model-sensitive and model-robust regression parameter covariance estimators. The Gaussian pseudolikelihood is found in simulation to be reasonably sensitive for several response distributions and noncanonical mean-variance relations for longitudinal data. Application is also made to a clinical dataset. Assessment of adequacy of both correlation and variance models for longitudinal data should be routine in applications, and we describe open-source software supporting this practice. Copyright © 2011 John Wiley & Sons, Ltd.

  14. Risk Probability Estimating Based on Clustering

    DEFF Research Database (Denmark)

    Chen, Yong; Jensen, Christian D.; Gray, Elizabeth

    2003-01-01

    of prior experiences, recommendations from a trusted entity or the reputation of the other entity. In this paper we propose a dynamic mechanism for estimating the risk probability of a certain interaction in a given environment using hybrid neural networks. We argue that traditional risk assessment models...... from the insurance industry do not directly apply to ubiquitous computing environments. Instead, we propose a dynamic mechanism for risk assessment, which is based on pattern matching, classification and prediction procedures. This mechanism uses an estimator of risk probability, which is based...

  15. Life cycle assessment based environmental impact estimation model for pre-stressed concrete beam bridge in the early design phase

    International Nuclear Information System (INIS)

    Kim, Kyong Ju; Yun, Won Gun; Cho, Namho; Ha, Jikwang

    2017-01-01

    The late rise in global concern for environmental issues such as global warming and air pollution is accentuating the need for environmental assessments in the construction industry. Promptly evaluating the environmental loads of the various design alternatives during the early stages of a construction project and adopting the most environmentally sustainable candidate is therefore of large importance. Yet, research on the early evaluation of a construction project's environmental load in order to aid the decision making process is hitherto lacking. In light of this dilemma, this study proposes a model for estimating the environmental load by employing only the most basic information accessible during the early design phases of a project for the pre-stressed concrete (PSC) beam bridge, the most common bridge structure. Firstly, a life cycle assessment (LCA) was conducted on the data from 99 bridges by integrating the bills of quantities (BOQ) with a life cycle inventory (LCI) database. The processed data was then utilized to construct a case based reasoning (CBR) model for estimating the environmental load. The accuracy of the estimation model was then validated using five test cases; the model's mean absolute error rates (MAER) for the total environmental load was calculated as 7.09%. Such test results were shown to be superior compared to those obtained from a multiple-regression based model and a slab area base-unit analysis model. Henceforth application of this model during the early stages of a project is expected to highly complement environmentally friendly designs and construction by facilitating the swift evaluation of the environmental load from multiple standpoints. - Highlights: • This study is to develop the model of assessing the environmental impacts on LCA. • Bills of quantity from completed designs of PSC Beam were linked with the LCI DB. • Previous cases were used to estimate the environmental load of new case by CBR model. • CBR

  16. Life cycle assessment based environmental impact estimation model for pre-stressed concrete beam bridge in the early design phase

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Kyong Ju, E-mail: kjkim@cau.ac.kr; Yun, Won Gun, E-mail: ogun78@naver.com; Cho, Namho, E-mail: nhc51@cau.ac.kr; Ha, Jikwang, E-mail: wlrhkd29@gmail.com

    2017-05-15

    The late rise in global concern for environmental issues such as global warming and air pollution is accentuating the need for environmental assessments in the construction industry. Promptly evaluating the environmental loads of the various design alternatives during the early stages of a construction project and adopting the most environmentally sustainable candidate is therefore of large importance. Yet, research on the early evaluation of a construction project's environmental load in order to aid the decision making process is hitherto lacking. In light of this dilemma, this study proposes a model for estimating the environmental load by employing only the most basic information accessible during the early design phases of a project for the pre-stressed concrete (PSC) beam bridge, the most common bridge structure. Firstly, a life cycle assessment (LCA) was conducted on the data from 99 bridges by integrating the bills of quantities (BOQ) with a life cycle inventory (LCI) database. The processed data was then utilized to construct a case based reasoning (CBR) model for estimating the environmental load. The accuracy of the estimation model was then validated using five test cases; the model's mean absolute error rates (MAER) for the total environmental load was calculated as 7.09%. Such test results were shown to be superior compared to those obtained from a multiple-regression based model and a slab area base-unit analysis model. Henceforth application of this model during the early stages of a project is expected to highly complement environmentally friendly designs and construction by facilitating the swift evaluation of the environmental load from multiple standpoints. - Highlights: • This study is to develop the model of assessing the environmental impacts on LCA. • Bills of quantity from completed designs of PSC Beam were linked with the LCI DB. • Previous cases were used to estimate the environmental load of new case by CBR model. • CBR

  17. Linking lifestyle factors and insulin resistance, based on fasting plasma insulin and HOMA-IR in middle-aged Japanese men: a cross-sectional study.

    Science.gov (United States)

    Otake, Toshie; Fukumoto, Jin; Abe, Masao; Takemura, Shigeki; Mihn, Pham Ngoc; Mizoue, Tetsuya; Kiyohara, Chikako

    2014-09-01

    Insulin resistance (IR) is regarded as one of the earliest features of many metabolic diseases, and major efforts are aimed at improving insulin function to confront this issue. The aim of this study was to investigate the relationship of body mass index (BMI), cigarette smoking, alcohol intake, physical activity, green tea and coffee consumption to IR. We performed a cross-sectional study of 1542 male self-defense officials. IR was defined as the highest quartile of the fasting plasma insulin (≥ 50 pmol/L) or the homeostasis model assessment-estimated IR (HOMA-IR ≥ 1.81). An unconditional logistic model was used to estimate the odds ratio (OR) and 95% confidence interval (CI) for the association between IR and influential factors. Analyses were also stratified by obesity status (BMI). IR was significantly positively related to BMI and glucose tolerance, and negatively related to alcohol use. Independent of obesity status, significant trends were observed between IR and alcohol use. Drinking 30 mL or more of ethanol per day reduced IR by less than 40%. Strong physical activity was associated with a decreased risk of IR based on fasting plasma insulin only in the obese. Coffee consumption was inversely associated with the risk of IR based on HOMA-IR in the non-obese group. Higher coffee consumption may be protective against IR among only the non-obese. Further studies are warranted to examine the effect modification of the obesity status on the coffee-IR association.
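
    For reference, the HOMA-IR index used above is computed from fasting glucose and insulin with the standard Matthews et al. formula (example values invented):

    ```python
    # HOMA-IR = fasting glucose (mmol/L) * fasting insulin (microU/mL) / 22.5
    def homa_ir(glucose_mmol_per_l, insulin_micro_u_per_ml):
        return glucose_mmol_per_l * insulin_micro_u_per_ml / 22.5

    print(homa_ir(5.2, 8.0))   # ~1.85, just above the 1.81 cutoff used in the study
    ```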

  18. Static models, recursive estimators and the zero-variance approach

    KAUST Repository

    Rubino, Gerardo

    2016-01-07

    When evaluating dependability aspects of complex systems, most models belong to the static world, where time is not an explicit variable. These models suffer from the same problems as dynamic ones (stochastic processes), such as the frequent combinatorial explosion of the state spaces. In the Monte Carlo domain, one of the most significant difficulties is the rare event situation. In this talk, we describe this context and a recent technique that appears to be at the top performance level in the area, in which we combined ideas that lead to very fast estimation procedures with another approach called zero-variance approximation. Both ideas produced a very efficient method that has the right theoretical property concerning robustness, the Bounded Relative Error one. Some examples illustrate the results.

  19. A Novel Physiology-Based Mathematical Model to Estimate Red Blood Cell Lifespan in Different Human Age Groups.

    Science.gov (United States)

    An, Guohua; Widness, John A; Mock, Donald M; Veng-Pedersen, Peter

    2016-09-01

    Direct measurement of red blood cell (RBC) survival in humans has improved from the original accurate but limited differential agglutination technique to the current reliable, safe, and accurate biotin method. Despite this, all of these methods are time consuming and require blood sampling over several months to determine the RBC lifespan. For situations in which RBC survival information must be obtained quickly, these methods are not suitable. With the exception of adults and infants, RBC survival has not been extensively investigated in other age groups. To address this need, we developed a novel, physiology-based mathematical model that quickly estimates RBC lifespan in healthy individuals at any age. The model is based on the assumption that the total number of RBC recirculations during the lifespan of each RBC (denoted by N_max) is relatively constant for all age groups. The model was initially validated using the data from our prior infant and adult biotin-labeled red blood cell studies and then extended to the other age groups. The model generated the following estimated RBC lifespans in 2-year-old, 5-year-old, 8-year-old, and 10-year-old children: 62, 74, 82, and 86 days, respectively. We speculate that this model has useful clinical applications. For example, HbA1c testing is not reliable in identifying children with diabetes because HbA1c is directly affected by RBC lifespan. Because our model can estimate RBC lifespan in children at any age, corrections to HbA1c values based on the model-generated RBC lifespan could improve diabetes diagnosis as well as therapy in children.

  20. A Model-Driven Approach for Hybrid Power Estimation in Embedded Systems Design

    Directory of Open Access Journals (Sweden)

    Ben Atitallah Rabie

    2011-01-01

    Full Text Available As technology scales for increased circuit density and performance, the management of power consumption in system-on-chip (SoC) is becoming critical. Today, having the appropriate electronic system level (ESL) tools for power estimation in the design flow is mandatory. The main challenge for the design of such dedicated tools is to achieve a better tradeoff between accuracy and speed. This paper presents a consumption estimation approach allowing the consumption criterion to be taken into account early in the design flow, during system cosimulation. The originality of this approach is that it allows power estimation for both white-box intellectual properties (IPs) using annotated power models and black-box IPs using standalone power estimators. In order to obtain accurate power estimates, our simulations were performed at the cycle-accurate bit-accurate (CABA) level, using SystemC. To make our approach fast and not tedious for users, the simulated architectures, including standalone power estimators, were generated automatically using a model driven engineering (MDE) approach. Both annotated power models and standalone power estimators can be used together to estimate the consumption of the same architecture, which makes them complementary. The simulation results showed that the power estimates given by both estimation techniques for a hardware component are very close, with a difference that does not exceed 0.3%. This proves that, even when the IP code is not accessible or not modifiable, our approach allows obtaining quite accurate power estimates early in the design flow thanks to the automation offered by the MDE approach.

  1. Optimal covariance selection for estimation using graphical models

    OpenAIRE

    Vichik, Sergey; Oshman, Yaakov

    2011-01-01

    We consider a problem encountered when trying to estimate a Gaussian random field using a distributed estimation approach based on Gaussian graphical models. Because of constraints imposed by estimation tools used in Gaussian graphical models, the a priori covariance of the random field is constrained to embed conditional independence constraints among a significant number of variables. The problem is, then: given the (unconstrained) a priori covariance of the random field, and the conditiona...

  2. Adjusting for overdispersion in piecewise exponential regression models to estimate excess mortality rate in population-based research.

    Science.gov (United States)

    Luque-Fernandez, Miguel Angel; Belot, Aurélien; Quaresma, Manuela; Maringe, Camille; Coleman, Michel P; Rachet, Bernard

    2016-10-01

    In population-based cancer research, piecewise exponential regression models are used to derive adjusted estimates of excess mortality due to cancer using the Poisson generalized linear modelling framework. However, the assumption that the conditional mean and variance of the rate parameter given the set of covariates x_i are equal is strong and may fail to account for overdispersion given the variability of the rate parameter (the variance exceeds the mean). Using an empirical example, we aimed to describe simple methods to test and correct for overdispersion. We used a regression-based score test for overdispersion under the relative survival framework and proposed different approaches to correct for overdispersion, including a quasi-likelihood, robust standard errors estimation, negative binomial regression and flexible piecewise modelling. All piecewise exponential regression models showed the presence of significant inherent overdispersion. Flexible piecewise regression modelling, with either a quasi-likelihood or robust standard errors, was the best approach as it deals with both overdispersion due to model misspecification and true or inherent overdispersion.

  3. Estimation of beam material random field properties via sensitivity-based model updating using experimental frequency response functions

    Science.gov (United States)

    Machado, M. R.; Adhikari, S.; Dos Santos, J. M. C.; Arruda, J. R. F.

    2018-03-01

    Structural parameter estimation is affected not only by measurement noise but also by unknown uncertainties which are present in the system. Deterministic structural model updating methods minimise the difference between experimentally measured data and computational prediction. Sensitivity-based methods are very efficient in solving structural model updating problems. Material and geometrical parameters of the structure such as Poisson's ratio, Young's modulus, mass density, modal damping, etc. are usually considered deterministic and homogeneous. In this paper, the distributed and non-homogeneous characteristics of these parameters are considered in the model updating. The parameters are taken as spatially correlated random fields and are expanded in a spectral Karhunen-Loève (KL) decomposition. Using the KL expansion, the spectral dynamic stiffness matrix of the beam is expanded as a series in terms of discretized parameters, which can be estimated using sensitivity-based model updating techniques. Numerical and experimental tests involving a beam with distributed bending rigidity and mass density are used to verify the proposed method. This extension of standard model updating procedures can enhance the dynamic description of structural dynamic models.

  4. On the economic benefit of utility based estimation of a volatility model

    OpenAIRE

    Adam Clements; Annastiina Silvennoinen

    2009-01-01

    Forecasts of asset return volatility are necessary for many financial applications, including portfolio allocation. Traditionally, the parameters of econometric models used to generate volatility forecasts are estimated in a statistical setting and subsequently used in an economic setting such as portfolio allocation. Differences in the criteria under which the model is estimated and applied may reduce the overall economic benefit of a model in the context of portfolio allocation. Thi

  5. An artificial retina for fast track finding

    International Nuclear Information System (INIS)

    Ristori, Luciano

    2000-01-01

    A new approach is proposed for fast track finding in position-sensitive detectors. The basic working principle is modeled on what is widely believed to be the low-level mechanism used by the eye to recognize straight edges. A number of receptors are tuned such that each one responds to a different range of track orientations; each track actually fires several receptors, and an estimate of the orientation is obtained through interpolation. The feasibility of a practical device based on this principle, and its possible implementation using currently available digital logic, is discussed.
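
    A toy sketch of the receptor principle: a bank of receptors with overlapping Gaussian tuning curves responds to a track, and the orientation estimate is obtained by interpolating (response-weighting) the receptor centers. Tuning angles and width are invented for illustration.

    ```python
    # Receptor bank with interpolated orientation estimate.
    import numpy as np

    centers = np.linspace(-0.5, 0.5, 11)      # receptor tuning angles (rad, invented)
    sigma = 0.07                              # tuning width: several receptors fire per track

    def responses(track_angle):
        return np.exp(-0.5 * ((track_angle - centers) / sigma) ** 2)

    r = responses(0.13)                       # a track fires a few nearby receptors
    estimate = np.sum(r * centers) / np.sum(r)  # interpolate between receptor centers
    print(f"estimated angle: {estimate:.3f} rad")
    ```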

  6. Co-estimation of state-of-charge, capacity and resistance for lithium-ion batteries based on a high-fidelity electrochemical model

    International Nuclear Information System (INIS)

    Zheng, Linfeng; Zhang, Lei; Zhu, Jianguo; Wang, Guoxiu; Jiang, Jiuchun

    2016-01-01

    Highlights: • The numerical solution for an electrochemical model is presented. • Trinal PI observers are used to concurrently estimate SOC, capacity and resistance. • An iteration-approaching method is incorporated to enhance estimation performance. • The robustness against aging and temperature variations is experimentally verified. - Abstract: Lithium-ion batteries have been widely used as enabling energy storage in many industrial fields. Accurate modeling and state estimation play fundamental roles in ensuring safe, reliable and efficient operation of lithium-ion battery systems. A physics-based electrochemical model (EM) is highly desirable for its inherent ability to push batteries to operate at their physical limits. For state-of-charge (SOC) estimation, the continuous capacity fade and resistance deterioration are more prone to erroneous estimation results. In this paper, trinal proportional-integral (PI) observers with a reduced physics-based EM are proposed to simultaneously estimate SOC, capacity and resistance for lithium-ion batteries. Firstly, a numerical solution for the employed model is derived. PI observers are then developed to realize the co-estimation of battery SOC, capacity and resistance. The moving-window ampere-hour counting technique and the iteration-approaching method are also incorporated for the estimation accuracy improvement. The robustness of the proposed approach against erroneous initial values, different battery cell aging levels and ambient temperatures is systematically evaluated, and the experimental results verify the effectiveness of the proposed method.

  7. Ant-Based Phylogenetic Reconstruction (ABPR: A new distance algorithm for phylogenetic estimation based on ant colony optimization

    Directory of Open Access Journals (Sweden)

    Karla Vittori

    2008-12-01

    Full Text Available We propose a new distance algorithm for phylogenetic estimation based on Ant Colony Optimization (ACO), named Ant-Based Phylogenetic Reconstruction (ABPR). ABPR joins two taxa iteratively based on evolutionary distance among sequences, while also accounting for the quality of the phylogenetic tree built according to the total length of the tree. Similar to optimization algorithms for phylogenetic estimation, the algorithm allows exploration of a larger set of nearly optimal solutions. We applied the algorithm to four empirical data sets of mitochondrial DNA ranging from 12 to 186 sequences, and from 898 to 16,608 base pairs, and covering taxonomic levels from populations to orders. We show that ABPR performs better than the commonly used Neighbor-Joining algorithm, except when sequences are too closely related (e.g., population-level sequences). The phylogenetic relationships recovered at and above species level by ABPR agree with conventional views. However, like other algorithms of phylogenetic estimation, the proposed algorithm failed to recover expected relationships when distances are too similar or when rates of evolution are very variable, leading to the problem of long-branch attraction. ABPR, as well as other ACO-based algorithms, is emerging as a fast and accurate alternative method of phylogenetic estimation for large data sets.

  8. Inverse modelling for real-time estimation of radiological consequences in the early stage of an accidental radioactivity release.

    Science.gov (United States)

    Pecha, Petr; Šmídl, Václav

    2016-11-01

    A stepwise sequential assimilation algorithm is proposed based on an optimisation approach for recursive parameter estimation and tracking of radioactive plume propagation in the early stage of a radiation accident. Predictions of the radiological situation in each time step of the plume propagation are driven by an existing short-term meteorological forecast and the assimilation procedure manipulates the model parameters to match the observations incoming concurrently from the terrain. Mathematically, the task is a typical ill-posed inverse problem of estimating the parameters of the release. The proposed method is designated as a stepwise re-estimation of the source term release dynamics and an improvement of several input model parameters. It results in a more precise determination of the adversely affected areas in the terrain. The nonlinear least-squares regression methodology is applied for estimation of the unknowns. The fast and adequately accurate segmented Gaussian plume model (SGPM) is used in the first stage of direct (forward) modelling. The subsequent inverse procedure infers (re-estimates) the values of important model parameters from the actual observations. Accuracy and sensitivity of the proposed method for real-time forecasting of the accident propagation is studied. First, a twin experiment generating noiseless simulated "artificial" observations is studied to verify the minimisation algorithm. Second, the impact of the measurement noise on the re-estimated source release rate is examined. In addition, the presented method can be used as a proposal for more advanced statistical techniques using, e.g., importance sampling. Copyright © 2016 Elsevier Ltd. All rights reserved.
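
    A hedged sketch of the inverse step: an unknown release rate is re-estimated by nonlinear least squares so that a simplified Gaussian-plume prediction matches observations from the terrain. The plume formula, dispersion growth, receptor layout and noise below are heavily simplified assumptions, not the paper's segmented Gaussian plume model.

    ```python
    # Nonlinear least-squares re-estimation of a release rate Q.
    import numpy as np
    from scipy.optimize import least_squares

    def plume(Q, x, y, u=3.0, H=50.0):
        """Ground-level Gaussian plume concentration (crude sketch)."""
        sy, sz = 0.08 * x, 0.06 * x          # dispersion growing with downwind distance
        return (Q / (2 * np.pi * u * sy * sz)
                * np.exp(-0.5 * (y / sy) ** 2) * np.exp(-0.5 * (H / sz) ** 2))

    # receptor locations (m) and noisy observations from the terrain
    xs = np.array([500.0, 800.0, 1200.0])
    ys = np.array([-40.0, 10.0, 60.0])
    Q_true = 1.0e9                           # Bq/s, unknown in practice
    rng = np.random.default_rng(4)
    obs = plume(Q_true, xs, ys) * (1 + 0.05 * rng.standard_normal(3))

    fit = least_squares(lambda q: plume(q[0], xs, ys) - obs, x0=[1.0e8])
    print(f"re-estimated release rate: {fit.x[0]:.3e} Bq/s")
    ```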

  9. Estimation of single plane unbalance parameters of a rotor-bearing system using Kalman filtering based force estimation technique

    Science.gov (United States)

    Shrivastava, Akash; Mohanty, A. R.

    2018-03-01

    This paper proposes a model-based method to estimate single plane unbalance parameters (amplitude and phase angle) in a rotor using Kalman filter and recursive least square based input force estimation technique. Kalman filter based input force estimation technique requires state-space model and response measurements. A modified system equivalent reduction expansion process (SEREP) technique is employed to obtain a reduced-order model of the rotor system so that limited response measurements can be used. The method is demonstrated using numerical simulations on a rotor-disk-bearing system. Results are presented for different measurement sets including displacement, velocity, and rotational response. Effects of measurement noise level, filter parameters (process noise covariance and forgetting factor), and modeling error are also presented and it is observed that the unbalance parameter estimation is robust with respect to measurement noise.

  10. Efficient estimation of an additive quantile regression model

    NARCIS (Netherlands)

    Cheng, Y.; de Gooijer, J.G.; Zerom, D.

    2009-01-01

    In this paper two kernel-based nonparametric estimators are proposed for estimating the components of an additive quantile regression model. The first estimator is a computationally convenient approach which can be viewed as a viable alternative to the method of De Gooijer and Zerom (2003). By

  11. Efficient estimation of an additive quantile regression model

    NARCIS (Netherlands)

    Cheng, Y.; de Gooijer, J.G.; Zerom, D.

    2010-01-01

    In this paper two kernel-based nonparametric estimators are proposed for estimating the components of an additive quantile regression model. The first estimator is a computationally convenient approach which can be viewed as a viable alternative to the method of De Gooijer and Zerom (2003). By

  12. A physics-based fractional order model and state of energy estimation for lithium ion batteries. Part II: Parameter identification and state of energy estimation for LiFePO4 battery

    Science.gov (United States)

    Li, Xiaoyu; Pan, Ke; Fan, Guodong; Lu, Rengui; Zhu, Chunbo; Rizzoni, Giorgio; Canova, Marcello

    2017-11-01

    State of energy (SOE) is an important index for the electrochemical energy storage system in electric vehicles. In this paper, a robust state of energy estimation method in combination with a physical model parameter identification method is proposed to achieve accurate battery state estimation at different operating conditions and different aging stages. A physics-based fractional order model with variable solid-state diffusivity (FOM-VSSD) is used to characterize the dynamic performance of a LiFePO4/graphite battery. In order to update the model parameters automatically at different aging stages, a multi-step model parameter identification method based on lexicographic optimization is specifically designed for electric vehicle operating conditions. As the available battery energy changes with different applied load current profiles, the relationship between the remaining energy loss and the state of charge, the average current, as well as the average squared current is modeled. The SOE at different operating conditions and different aging stages is estimated based on an adaptive fractional order extended Kalman filter (AFEKF). Validation results show that the overall SOE estimation error is within ±5%. The proposed method is suitable for electric vehicle online applications.

  13. Estimating animal abundance with N-mixture models using the R-INLA package for R

    KAUST Repository

    Meehan, Timothy D.

    2017-05-03

    Successful management of wildlife populations requires accurate estimates of abundance. Abundance estimates can be confounded by imperfect detection during wildlife surveys. N-mixture models enable quantification of detection probability and often produce abundance estimates that are less biased. The purpose of this study was to demonstrate the use of the R-INLA package to analyze N-mixture models and to compare performance of R-INLA to two other common approaches -- JAGS (via the runjags package), which uses Markov chain Monte Carlo and allows Bayesian inference, and unmarked, which uses Maximum Likelihood and allows frequentist inference. We show that R-INLA is an attractive option for analyzing N-mixture models when (1) familiar model syntax and data format (relative to other R packages) are desired, (2) survey level covariates of detection are not essential, (3) fast computing times are necessary (R-INLA is 10 times faster than unmarked, 300 times faster than JAGS), and (4) Bayesian inference is preferred.
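
    Whichever engine does the fitting, the quantity at the heart of an N-mixture model is the site-level likelihood with the latent abundance summed out. A minimal numpy/scipy sketch of that marginal likelihood (sum truncated at K; counts and parameters are made up), illustrative rather than a substitute for R-INLA, JAGS, or unmarked:

        import numpy as np
        from scipy.stats import binom, poisson

        def nmixture_loglik(counts, lam, p, K=200):
            """Log-likelihood of repeated counts at one site:
            y_j | N ~ Binomial(N, p), N ~ Poisson(lam),
            with the latent abundance N summed out up to K."""
            N = np.arange(0, K + 1)
            log_terms = poisson.logpmf(N, lam)
            for y in counts:
                log_terms = log_terms + binom.logpmf(y, N, p)
            return np.logaddexp.reduce(log_terms)

        # three repeat visits to one site
        print(nmixture_loglik([3, 5, 4], lam=10.0, p=0.4))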

  14. The estimation of future surface water bodies at Olkiluoto area based on statistical terrain and land uplift models

    Energy Technology Data Exchange (ETDEWEB)

    Pohjola, J.; Turunen, J.; Lipping, T. [Tampere Univ. of Technology (Finland)]; Ikonen, A.

    2014-03-15

    In this working report the modelling effort of future landscape development and surface water body formation at the modelling area in the vicinity of the Olkiluoto Island is presented. Estimation of the features of future surface water bodies is based on probabilistic terrain and land uplift models presented in previous working reports. The estimation is done using a GIS-based toolbox called UNTAMO. The future surface water bodies are estimated in a 10 000 years' time span with 1000 years' intervals for the safety assessment of disposal of spent nuclear fuel at the Olkiluoto site. In the report a brief overview of the techniques used for probabilistic terrain modelling, land uplift modelling and hydrological modelling is presented first. The latter part of the report describes the results of the modelling effort. The main features of the future landscape - the four lakes forming in the vicinity of the Olkiluoto Island - are identified and the probabilistic model of the shoreline displacement is presented. The area and volume of the four lakes are modelled in a probabilistic manner. All the simulations have been performed for three scenarios, two of which are based on 10 realizations of the probabilistic digital terrain model (DTM) and 10 realizations of the probabilistic land uplift model. These two scenarios differ from each other by the eustatic curve used in the land uplift model. The third scenario employs 50 realizations of the probabilistic DTM while a deterministic land uplift model, derived solely from the current land uplift rate, is used. The results indicate that the two scenarios based on the probabilistic land uplift model behave in a similar manner while the third model overestimates past and future land uplift rates. The main features of the landscape are nevertheless similar also for the third scenario. Prediction results for the volumes of the future lakes indicate that a couple of highly probable lake formation scenarios can be identified.

  15. The estimation of future surface water bodies at Olkiluoto area based on statistical terrain and land uplift models

    International Nuclear Information System (INIS)

    Pohjola, J.; Turunen, J.; Lipping, T.; Ikonen, A.

    2014-03-01

    In this working report the modelling effort of future landscape development and surface water body formation at the modelling area in the vicinity of the Olkiluoto Island is presented. Estimation of the features of future surface water bodies is based on probabilistic terrain and land uplift models presented in previous working reports. The estimation is done using a GIS-based toolbox called UNTAMO. The future surface water bodies are estimated in a 10 000 years' time span with 1000 years' intervals for the safety assessment of disposal of spent nuclear fuel at the Olkiluoto site. In the report a brief overview of the techniques used for probabilistic terrain modelling, land uplift modelling and hydrological modelling is presented first. The latter part of the report describes the results of the modelling effort. The main features of the future landscape - the four lakes forming in the vicinity of the Olkiluoto Island - are identified and the probabilistic model of the shoreline displacement is presented. The area and volume of the four lakes are modelled in a probabilistic manner. All the simulations have been performed for three scenarios, two of which are based on 10 realizations of the probabilistic digital terrain model (DTM) and 10 realizations of the probabilistic land uplift model. These two scenarios differ from each other by the eustatic curve used in the land uplift model. The third scenario employs 50 realizations of the probabilistic DTM while a deterministic land uplift model, derived solely from the current land uplift rate, is used. The results indicate that the two scenarios based on the probabilistic land uplift model behave in a similar manner while the third model overestimates past and future land uplift rates. The main features of the landscape are nevertheless similar also for the third scenario. Prediction results for the volumes of the future lakes indicate that a couple of highly probable lake formation scenarios can be identified.

  16. Remaining useful life estimation based on stochastic deterioration models: A comparative study

    International Nuclear Information System (INIS)

    Le Son, Khanh; Fouladirad, Mitra; Barros, Anne; Levrat, Eric; Iung, Benoît

    2013-01-01

    Prognostics of system lifetime is a basic requirement for condition-based maintenance in many application domains where safety, reliability, and availability are considered of first importance. This paper presents a probabilistic method for prognostics applied to the 2008 PHM Conference Challenge data. A stochastic process (Wiener process) combined with a data analysis method (Principal Component Analysis) is proposed to model the deterioration of the components and to estimate the remaining useful life (RUL) on a case study. The advantages of our probabilistic approach are pointed out and a comparison with existing results on the same data is made.
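
    For the Wiener-process part, the remaining useful life is a first-passage time: if the (PCA-reduced) health indicator follows X(t) = mu*t + sigma*B(t) and fails at threshold w, the hitting time from the current level x0 is inverse-Gaussian distributed with mean d/mu and shape d^2/sigma^2, where d = w - x0. A small sketch with assumed parameter values (not those fitted to the PHM Challenge data):

        import numpy as np
        from scipy.stats import invgauss

        mu_drift, sigma, w, x0 = 0.05, 0.1, 10.0, 6.5   # assumed values
        d = w - x0                  # remaining distance to the threshold
        lam = d**2 / sigma**2       # inverse-Gaussian shape
        mean = d / mu_drift         # inverse-Gaussian mean
        rul = invgauss(mu=mean/lam, scale=lam)   # scipy parameterization
        print("mean RUL:", rul.mean())           # equals d/mu_drift
        print("90% RUL quantile:", rul.ppf(0.9))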

  17. Population-based absolute risk estimation with survey data

    Science.gov (United States)

    Kovalchik, Stephanie A.; Pfeiffer, Ruth M.

    2013-01-01

    Absolute risk is the probability that a cause-specific event occurs in a given time interval in the presence of competing events. We present methods to estimate population-based absolute risk from a complex survey cohort that can accommodate multiple exposure-specific competing risks. The hazard function for each event type consists of an individualized relative risk multiplied by a baseline hazard function, which is modeled nonparametrically or parametrically with a piecewise exponential model. An influence method is used to derive a Taylor-linearized variance estimate for the absolute risk estimates. We introduce novel measures of the cause-specific influences that can guide modeling choices for the competing event components of the model. To illustrate our methodology, we build and validate cause-specific absolute risk models for cardiovascular and cancer deaths using data from the National Health and Nutrition Examination Survey. Our applications demonstrate the usefulness of survey-based risk prediction models for predicting health outcomes and quantifying the potential impact of disease prevention programs at the population level. PMID:23686614
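
    With piecewise-exponential baseline hazards, the absolute risk is the integral of the cause-of-interest hazard against all-cause survival, and it has a closed form on each interval. A sketch of that computation (hazards and relative risks below are invented, not the NHANES estimates):

        import numpy as np

        # Piecewise-constant yearly hazards for two competing causes.
        h_cause = np.array([0.002, 0.003, 0.004, 0.005, 0.006])  # event of interest
        h_comp = np.array([0.010, 0.012, 0.015, 0.018, 0.022])   # competing events
        rr_cause, rr_comp = 1.8, 1.1   # individualized relative risks

        def absolute_risk(h1, h2, width=1.0):
            """P(event of interest occurs in [0, T] before competing events)."""
            h1, h2 = rr_cause * h1, rr_comp * h2
            total = h1 + h2
            # all-cause survival to the start of each interval
            S = np.exp(-np.concatenate(([0.0], np.cumsum(total * width)[:-1])))
            # closed-form integral of h1(t)*S(t) within each interval
            return np.sum(S * h1 / total * (1.0 - np.exp(-total * width)))

        print("5-year absolute risk:", absolute_risk(h_cause, h_comp))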

  18. 3-D seismic response of a base-isolated fast reactor

    International Nuclear Information System (INIS)

    Kitamura, S.; Morishita, M.; Iwata, K.

    1992-01-01

    This paper describes a 3-D response analysis methodology development and its application to a base-isolated fast breeder reactor (FBR) plant. At first, studies on application of a base-isolation system to an FBR plant were performed to identify a range of appropriate characteristics of the system. A response analysis method was developed based on mathematical models for the restoring force characteristics of several types of the systems. A series of shaking table tests using a small scale model was carried out to verify the analysis method. A good agreement was seen between the test and analysis results in terms of the horizontal and vertical responses. Parametric studies were then made to assess the effects of various factors which might be influential to the seismic response of the system. Moreover, the method was applied to evaluate three-dimensional response of the base-isolated FBR. (author)

  19. 0-d modeling of fast radiative shutdown of Tokamak discharges following massive gas injection

    International Nuclear Information System (INIS)

    Hollmann, E.M.; Parks, P.B.; Scott, H.A.

    2008-01-01

    0-D modeling of fast radiative shutdowns of tokamak discharges following massive gas injection is presented. Realistic neutral deposition rates are used together with a 1-D diffusive model to estimate impurity deposition into the plasma. Non-coronal radiation rates including opacity are used, as are induced wall currents, wall impurity radiation, and neutral and neoclassical corrections to plasma resistivity. The 0-D modeling is found to reproduce the shutdown timescale and free electron density rise seen in DIII-D argon injection experiments well. Opacity, wall currents, and wall impurities can all have a significant (>10%) impact on simulated timescales. (copyright 2008 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)

  20. Evaluation of Fast-Time Wake Models Using Denver 2006 Field Experiment Data

    Science.gov (United States)

    Ahmad, Nash’at N.; Pruis, Matthew J.

    2015-01-01

    The National Aeronautics and Space Administration conducted a series of wake vortex field experiments at Denver in 2003, 2005, and 2006. This paper describes the lidar wake vortex measurements and associated meteorological data collected during the 2006 deployment, and includes results of recent reprocessing of the lidar data using a new wake vortex algorithm and estimates of the atmospheric turbulence using a new algorithm to estimate eddy dissipation rate from the lidar data. The configuration and set-up of the 2006 field experiment allowed out-of-ground-effect vortices to be tracked in lateral transport further than in any previous campaign, thereby providing an opportunity to study long-lived wake vortices in moderate to low crosswinds. An evaluation of NASA's fast-time wake vortex transport and decay models using the dataset shows performance similar to that of previous studies using other field data.

  1. Estimating random transverse velocities in the fast solar wind from EISCAT Interplanetary Scintillation measurements

    Directory of Open Access Journals (Sweden)

    A. Canals

    2002-09-01

    Interplanetary scintillation measurements can yield estimates of a large number of solar wind parameters, including bulk flow speed, variation in bulk velocity along the observing path through the solar wind and random variation in transverse velocity. This last parameter is of particular interest, as it can indicate the flux of low-frequency Alfvén waves, and the dissipation of these waves has been proposed as an acceleration mechanism for the fast solar wind. Analysis of IPS data is, however, a significantly unresolved problem and a variety of a priori assumptions must be made in interpreting the data. Furthermore, the results may be affected by the physical structure of the radio source and by variations in the solar wind along the scintillation ray path. We have used observations of simple point-like radio sources made with EISCAT between 1994 and 1998 to obtain estimates of random transverse velocity in the fast solar wind. The results obtained with various a priori assumptions made in the analysis are compared, and we hope thereby to be able to provide some indication of the reliability of our estimates of random transverse velocity and the variation of this parameter with distance from the Sun.

    Key words. Interplanetary physics (MHD waves and turbulence; solar wind plasma; instruments and techniques)

  2. Estimating random transverse velocities in the fast solar wind from EISCAT Interplanetary Scintillation measurements

    Directory of Open Access Journals (Sweden)

    A. Canals

    Interplanetary scintillation measurements can yield estimates of a large number of solar wind parameters, including bulk flow speed, variation in bulk velocity along the observing path through the solar wind and random variation in transverse velocity. This last parameter is of particular interest, as it can indicate the flux of low-frequency Alfvén waves, and the dissipation of these waves has been proposed as an acceleration mechanism for the fast solar wind. Analysis of IPS data is, however, a significantly unresolved problem and a variety of a priori assumptions must be made in interpreting the data. Furthermore, the results may be affected by the physical structure of the radio source and by variations in the solar wind along the scintillation ray path. We have used observations of simple point-like radio sources made with EISCAT between 1994 and 1998 to obtain estimates of random transverse velocity in the fast solar wind. The results obtained with various a priori assumptions made in the analysis are compared, and we hope thereby to be able to provide some indication of the reliability of our estimates of random transverse velocity and the variation of this parameter with distance from the Sun.

    Key words. Interplanetary physics (MHD waves and turbulence; solar wind plasma; instruments and techniques)

  3. New statistical model of inelastic fast neutron scattering

    International Nuclear Information System (INIS)

    Stancicj, V.

    1975-07-01

    A new statistical model for treating fast neutron inelastic scattering has been proposed by using the general expressions of the double differential cross section in the impulse approximation. The use of the Fermi-Dirac distribution of nucleons makes it possible to derive an analytical expression for the fast neutron inelastic scattering kernel, including the angular momenta coupling. The obtained values of the inelastic fast neutron cross section calculated from the derived expression of the scattering kernel are in good agreement with the experiments. A main advantage of the derived expressions is their simplicity for practical calculations.

  4. Simulating polar bear energetics during a seasonal fast using a mechanistic model.

    Directory of Open Access Journals (Sweden)

    Paul D Mathewson

    In this study we tested the ability of a mechanistic model (Niche Mapper™) to accurately model adult, non-denning polar bear (Ursus maritimus) energetics while fasting during the ice-free season in the western Hudson Bay. The model uses a steady state heat balance approach, which calculates the metabolic rate that will allow an animal to maintain its core temperature in its particular microclimate conditions. Predicted weight loss for a 120 day fast typical of the 1990s was comparable to empirical studies of the population, and the model was able to reach a heat balance at the target metabolic rate for the entire fast, supporting use of the model to explore the impacts of climate change on polar bears. Niche Mapper predicted that all but the poorest condition bears would survive a 120 day fast under current climate conditions. When the fast extended to 180 days, Niche Mapper predicted mortality of up to 18% for males. Our results illustrate how environmental conditions, variation in animal properties, and thermoregulation processes may impact survival during extended fasts because polar bears were predicted to require additional energetic expenditure for thermoregulation during a 180 day fast. A uniform 3°C temperature increase reduced male mortality during a 180 day fast from 18% to 15%. Niche Mapper explicitly links an animal's energetics to environmental conditions and thus can be a valuable tool to help inform predictions of climate-related population changes. Since Niche Mapper is a generic model, it can make energetic predictions for other species threatened by climate change.

  5. Simulating polar bear energetics during a seasonal fast using a mechanistic model.

    Science.gov (United States)

    Mathewson, Paul D; Porter, Warren P

    2013-01-01

    In this study we tested the ability of a mechanistic model (Niche Mapper™) to accurately model adult, non-denning polar bear (Ursus maritimus) energetics while fasting during the ice-free season in the western Hudson Bay. The model uses a steady state heat balance approach, which calculates the metabolic rate that will allow an animal to maintain its core temperature in its particular microclimate conditions. Predicted weight loss for a 120 day fast typical of the 1990s was comparable to empirical studies of the population, and the model was able to reach a heat balance at the target metabolic rate for the entire fast, supporting use of the model to explore the impacts of climate change on polar bears. Niche Mapper predicted that all but the poorest condition bears would survive a 120 day fast under current climate conditions. When the fast extended to 180 days, Niche Mapper predicted mortality of up to 18% for males. Our results illustrate how environmental conditions, variation in animal properties, and thermoregulation processes may impact survival during extended fasts because polar bears were predicted to require additional energetic expenditure for thermoregulation during a 180 day fast. A uniform 3°C temperature increase reduced male mortality during a 180 day fast from 18% to 15%. Niche Mapper explicitly links an animal's energetics to environmental conditions and thus can be a valuable tool to help inform predictions of climate-related population changes. Since Niche Mapper is a generic model, it can make energetic predictions for other species threatened by climate change.

  6. Fast joint detection-estimation of evoked brain activity in event-related FMRI using a variational approach

    Science.gov (United States)

    Chaari, Lotfi; Vincent, Thomas; Forbes, Florence; Dojat, Michel; Ciuciu, Philippe

    2013-01-01

    In standard within-subject analyses of event-related fMRI data, two steps are usually performed separately: detection of brain activity and estimation of the hemodynamic response. Because these two steps are inherently linked, we adopt the so-called region-based Joint Detection-Estimation (JDE) framework that addresses this joint issue using a multivariate inference for detection and estimation. JDE is built by making use of a regional bilinear generative model of the BOLD response and constraining the parameter estimation by physiological priors using temporal and spatial information in a Markovian model. In contrast to previous works that use Markov Chain Monte Carlo (MCMC) techniques to sample the resulting intractable posterior distribution, we recast the JDE into a missing data framework and derive a Variational Expectation-Maximization (VEM) algorithm for its inference. A variational approximation is used to approximate the Markovian model in the unsupervised spatially adaptive JDE inference, which allows automatic fine-tuning of spatial regularization parameters. It provides a new algorithm that exhibits interesting properties in terms of estimation error and computational cost compared to the previously used MCMC-based approach. Experiments on artificial and real data show that VEM-JDE is robust to model mis-specification and provides computational gain while maintaining good performance in terms of activation detection and hemodynamic shape recovery. PMID:23096056

  7. A fast color image enhancement algorithm based on Max Intensity Channel

    Science.gov (United States)

    Sun, Wei; Han, Long; Guo, Baolong; Jia, Wenyan; Sun, Mingui

    2014-03-01

    In this paper, we extend image enhancement techniques based on the retinex theory imitating human visual perception of scenes containing high illumination variations. This extension achieves simultaneous dynamic range modification, color consistency, and lightness rendition without multi-scale Gaussian filtering which has a certain halo effect. The reflection component is analyzed based on the illumination and reflection imaging model. A new prior named Max Intensity Channel (MIC) is implemented assuming that the reflections of some points in the scene are very high in at least one color channel. Using this prior, the illumination of the scene is obtained directly by performing a gray-scale closing operation and a fast cross-bilateral filtering on the MIC of the input color image. Consequently, the reflection component of each RGB color channel can be determined from the illumination and reflection imaging model. The proposed algorithm estimates the illumination component which is relatively smooth and maintains the edge details in different regions. A satisfactory color rendition is achieved for a class of images that do not satisfy the gray-world assumption implicit to the theoretical foundation of the retinex. Experiments are carried out to compare the new method with several spatial and transform domain methods. Our results indicate that the new method is superior in enhancement applications, improves computation speed, and performs better for images with high illumination variations than other methods. Further comparisons of images from the National Aeronautics and Space Administration and a wearable camera, eButton, have shown a high performance of the new method with better color restoration and preservation of image details.
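
    A rough sketch of that pipeline, assuming an input file scene.jpg: take the per-pixel maximum over the color channels (the MIC prior), apply a grey-scale closing, smooth with an edge-preserving filter to estimate illumination (a plain bilateral filter stands in here for the paper's fast cross-bilateral filter), then divide it out to recover per-channel reflectance:

        import cv2
        import numpy as np
        from scipy.ndimage import grey_closing

        img = cv2.imread("scene.jpg").astype(np.float32) / 255.0
        mic = img.max(axis=2)                      # Max Intensity Channel
        closed = grey_closing(mic, size=(15, 15))  # grey-scale closing
        illum = cv2.bilateralFilter(closed, 9, 0.1, 15.0)
        illum = np.clip(illum, 1e-3, 1.0)          # avoid division by zero
        reflect = img / illum[..., None]           # per-channel reflection
        out = np.clip(reflect, 0.0, 1.0)
        cv2.imwrite("enhanced.jpg", (out * 255).astype(np.uint8))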

  8. Fast and accurate exercise policies for Bermudan swaptions in the LIBOR market model

    NARCIS (Netherlands)

    P.K. Karlsson (Patrik); S. Jain (Shashi); C.W. Oosterlee (Kees)

    2016-01-01

    This paper describes an American Monte Carlo approach for obtaining fast and accurate exercise policies for pricing of callable LIBOR Exotics (e.g., Bermudan swaptions) in the LIBOR market model using the Stochastic Grid Bundling Method (SGBM). SGBM is a bundling and regression based

  9. Estimation of daily reference evapotranspiration (ETo) using artificial intelligence methods: Offering a new approach for lagged ETo data-based modeling

    Science.gov (United States)

    Mehdizadeh, Saeid

    2018-04-01

    Evapotranspiration (ET) is considered as a key factor in hydrological and climatological studies, agricultural water management, irrigation scheduling, etc. It can be directly measured using lysimeters. Moreover, other methods such as empirical equations and artificial intelligence methods can be used to model ET. In recent years, artificial intelligence methods have been widely utilized to estimate reference evapotranspiration (ETo). In the present study, the local and external performances of multivariate adaptive regression splines (MARS) and gene expression programming (GEP) were assessed for estimating daily ETo. For this aim, daily weather data of six stations with different climates in Iran, namely Urmia and Tabriz (semi-arid), Isfahan and Shiraz (arid), and Yazd and Zahedan (hyper-arid), were employed during 2000-2014. Two types of input patterns consisting of weather data-based and lagged ETo data-based scenarios were considered to develop the models. Four statistical indicators including root mean square error (RMSE), mean absolute error (MAE), coefficient of determination (R2), and mean absolute percentage error (MAPE) were used to check the accuracy of the models. The local performance of the models revealed that the MARS and GEP approaches have the capability to estimate daily ETo using the meteorological parameters and the lagged ETo data as inputs. Nevertheless, MARS had the best performance in the weather data-based scenarios. On the other hand, considerable differences were not observed in the models' accuracy for the lagged ETo data-based scenarios. As an innovation of this study, novel hybrid models were proposed in the lagged ETo data-based scenarios through combination of the MARS and GEP models with the autoregressive conditional heteroscedasticity (ARCH) time series model. It was concluded that the proposed novel models, named MARS-ARCH and GEP-ARCH, improved the performance of ETo modeling compared to the single MARS and GEP. In addition, the external

  10. Integral equation models for image restoration: high accuracy methods and fast algorithms

    International Nuclear Information System (INIS)

    Lu, Yao; Shen, Lixin; Xu, Yuesheng

    2010-01-01

    Discrete models are consistently used as practical models for image restoration. They are piecewise constant approximations of true physical (continuous) models, and hence, inevitably impose bottleneck model errors. We propose to work directly with continuous models for image restoration aiming at suppressing the model errors caused by the discrete models. A systematic study is conducted in this paper for the continuous out-of-focus image models which can be formulated as an integral equation of the first kind. The resulting integral equation is regularized by the Lavrentiev method and the Tikhonov method. We develop fast multiscale algorithms having high accuracy to solve the regularized integral equations of the second kind. Numerical experiments show that the methods based on the continuous model perform much better than those based on discrete models, in terms of PSNR values and visual quality of the reconstructed images

  11. Model instruments of effective segmentation of the fast food market

    OpenAIRE

    Mityaeva Tetyana L.

    2013-01-01

    The article presents results of optimisation step-type calculations of the economic effectiveness of fast food promotion with consideration of the key parameters for assessing the efficiency of the marketing segmentation strategy. The article justifies development of a mathematical model on the basis of 3D presentations and a three-dimensional system of management variables. The modern applied mathematical packages allow formation not only of one-dimensional and two-dimensional arrays and analyse ...

  12. Model parameters conditioning on regional hydrologic signatures for process-based design flood estimation in ungauged basins.

    Science.gov (United States)

    Biondi, Daniela; De Luca, Davide Luciano

    2015-04-01

    The use of rainfall-runoff models represents an alternative to statistical approaches (such as at-site or regional flood frequency analysis) for design flood estimation, and constitutes an answer to the increasing need for synthetic design hydrographs (SDHs) associated to a specific return period. However, the lack of streamflow observations and the consequent high uncertainty associated with parameter estimation, usually pose serious limitations to the use of process-based approaches in ungauged catchments, which in contrast represent the majority in practical applications. This work presents the application of a Bayesian procedure that, for a predefined rainfall-runoff model, allows for the assessment of posterior parameters distribution, using the limited and uncertain information available for the response of an ungauged catchment (Bulygina et al. 2009; 2011). The use of regional estimates of river flow statistics, interpreted as hydrological signatures that measure theoretically relevant system process behaviours (Gupta et al. 2008), within this framework represents a valuable option and has shown significant developments in recent literature to constrain the plausible model response and to reduce the uncertainty in ungauged basins. In this study we rely on the first three L-moments of annual streamflow maxima, for which regressions are available from previous studies (Biondi et al. 2012; Laio et al. 2011). The methodology was carried out for a catchment located in southern Italy, and used within a Monte Carlo scheme (MCs) considering both event-based and continuous simulation approaches for design flood estimation. The applied procedure offers promising perspectives to perform model calibration and uncertainty analysis in ungauged basins; moreover, in the context of design flood estimation, process-based methods coupled with MCs approach have the advantage of providing simulated floods uncertainty analysis that represents an asset in risk-based decision

  13. Probabilistic Model-based Background Subtraction

    DEFF Research Database (Denmark)

    Krüger, Volker; Anderson, Jakob; Prehn, Thomas

    2005-01-01

    is the correlation between pixels. In this paper we introduce a model-based background subtraction approach which facilitates prior knowledge of pixel correlations for clearer and better results. Model knowledge is learned from good training video data; the data is stored for fast access in a hierarchical...

  14. Estimation of unemployment rates using small area estimation model by combining time series and cross-sectional data

    Science.gov (United States)

    Muchlisoh, Siti; Kurnia, Anang; Notodiputro, Khairil Anwar; Mangku, I. Wayan

    2016-02-01

    Labor force surveys conducted over time by the rotating panel design have been carried out in many countries, including Indonesia. The labor force survey in Indonesia is regularly conducted by Statistics Indonesia (Badan Pusat Statistik-BPS) and is known as the National Labor Force Survey (Sakernas). The main purpose of Sakernas is to obtain information about unemployment rates and their changes over time. Sakernas is a quarterly survey. The quarterly survey is designed only for estimating parameters at the provincial level. The quarterly unemployment rate published by BPS (official statistics) is calculated based only on cross-sectional methods, despite the fact that the data are collected under a rotating panel design. The purpose of this study was to estimate the quarterly unemployment rate at the district level using a small area estimation (SAE) model that combines time series and cross-sectional data. The study focused on the application and comparison of the Rao-Yu model and the dynamic model in the context of estimating the unemployment rate based on a rotating panel survey. The goodness of fit of both models was almost similar. Both models produced almost similar estimates that were better than the direct estimates, but the dynamic model was more capable than the Rao-Yu model of capturing heterogeneity across areas, although this was reduced over time.

  15. GPU-based fast cone beam CT reconstruction from undersampled and noisy projection data via total variation

    International Nuclear Information System (INIS)

    Jia Xun; Lou Yifei; Li Ruijiang; Song, William Y.; Jiang, Steve B.

    2010-01-01

    Purpose: Cone-beam CT (CBCT) plays an important role in image guided radiation therapy (IGRT). However, the large radiation dose from serial CBCT scans in most IGRT procedures raises a clinical concern, especially for pediatric patients who are essentially excluded from receiving IGRT for this reason. The goal of this work is to develop a fast GPU-based algorithm to reconstruct CBCT from undersampled and noisy projection data so as to lower the imaging dose. Methods: The CBCT is reconstructed by minimizing an energy functional consisting of a data fidelity term and a total variation regularization term. The authors developed a GPU-friendly version of the forward-backward splitting algorithm to solve this model. A multigrid technique is also employed. Results: It is found that 20-40 x-ray projections are sufficient to reconstruct images with satisfactory quality for IGRT. The reconstruction time ranges from 77 to 130 s on an NVIDIA Tesla C1060 (NVIDIA, Santa Clara, CA) GPU card, depending on the number of projections used, which is estimated to be about 100 times faster than similar iterative reconstruction approaches. Moreover, phantom studies indicate that the algorithm enables the CBCT to be reconstructed under a scanning protocol with as low as 0.1 mA s/projection. Compared with the currently widely used full-fan head and neck scanning protocol of ∼360 projections with 0.4 mA s/projection, it is estimated that an overall 36-72 times dose reduction has been achieved in our fast CBCT reconstruction algorithm. Conclusions: This work indicates that the developed GPU-based CBCT reconstruction algorithm is capable of lowering imaging dose considerably. The high computation efficiency in this algorithm makes the iterative CBCT reconstruction approach applicable in real clinical environments.

  16. GPU-based fast cone beam CT reconstruction from undersampled and noisy projection data via total variation.

    Science.gov (United States)

    Jia, Xun; Lou, Yifei; Li, Ruijiang; Song, William Y; Jiang, Steve B

    2010-04-01

    Cone-beam CT (CBCT) plays an important role in image guided radiation therapy (IGRT). However, the large radiation dose from serial CBCT scans in most IGRT procedures raises a clinical concern, especially for pediatric patients who are essentially excluded from receiving IGRT for this reason. The goal of this work is to develop a fast GPU-based algorithm to reconstruct CBCT from undersampled and noisy projection data so as to lower the imaging dose. The CBCT is reconstructed by minimizing an energy functional consisting of a data fidelity term and a total variation regularization term. The authors developed a GPU-friendly version of the forward-backward splitting algorithm to solve this model. A multigrid technique is also employed. It is found that 20-40 x-ray projections are sufficient to reconstruct images with satisfactory quality for IGRT. The reconstruction time ranges from 77 to 130 s on an NVIDIA Tesla C1060 (NVIDIA, Santa Clara, CA) GPU card, depending on the number of projections used, which is estimated to be about 100 times faster than similar iterative reconstruction approaches. Moreover, phantom studies indicate that the algorithm enables the CBCT to be reconstructed under a scanning protocol with as low as 0.1 mA s/projection. Compared with the currently widely used full-fan head and neck scanning protocol of approximately 360 projections with 0.4 mA s/projection, it is estimated that an overall 36-72 times dose reduction has been achieved in our fast CBCT reconstruction algorithm. This work indicates that the developed GPU-based CBCT reconstruction algorithm is capable of lowering imaging dose considerably. The high computation efficiency in this algorithm makes the iterative CBCT reconstruction approach applicable in real clinical environments.
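
    The minimised energy is ||Ax - b||^2 + lambda*TV(x), and forward-backward splitting alternates a gradient step on the fidelity term with a TV proximal (denoising) step. A two-dimensional toy sketch, with a small random matrix standing in for the cone-beam projector and scikit-image's Chambolle denoiser standing in for the GPU kernels:

        import numpy as np
        from skimage.restoration import denoise_tv_chambolle

        def fbs_reconstruct(A, b, shape, lam=0.05, step=0.1, iters=200):
            """Forward-backward splitting for min 0.5*||Ax-b||^2 + lam*TV(x)."""
            x = np.zeros(shape)
            for _ in range(iters):
                grad = (A.T @ (A @ x.ravel() - b)).reshape(shape)
                x = x - step * grad                             # gradient step
                x = denoise_tv_chambolle(x, weight=step * lam)  # TV prox
            return x

        # toy example: 16x16 phantom, undersampled random "projections"
        rng = np.random.default_rng(1)
        true = np.zeros((16, 16)); true[5:11, 5:11] = 1.0
        A = rng.standard_normal((100, 256)) / 16.0
        b = A @ true.ravel() + 0.01 * rng.standard_normal(100)
        rec = fbs_reconstruct(A, b, (16, 16))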

  17. A fast and automatically paired 2-D direction-of-arrival estimation with and without estimating the mutual coupling coefficients

    Science.gov (United States)

    Filik, Tansu; Tuncer, T. Engin

    2010-06-01

    A new technique is proposed for the solution of the pairing problem which is observed when fast algorithms are used for two-dimensional (2-D) direction-of-arrival (DOA) estimation. The proposed method is integrated with array interpolation for efficient use of antenna elements. Two virtual arrays are generated which are positioned accordingly with respect to the real array. The ESPRIT algorithm is used by employing both the real and virtual arrays. The eigenvalues of the rotational transformation matrix carry the angle information in both magnitude and phase, which allows the estimation of azimuth and elevation angles by using closed-form expressions. This idea is used to obtain the paired interpolated ESPRIT algorithm which can be applied for arbitrary arrays when there is no mutual coupling. When there is mutual coupling, two approaches are proposed in order to obtain 2-D paired DOA estimates. These blind methods can be applied for array geometries which have mutual coupling matrices with a Toeplitz structure. The first approach finds the 2-D paired DOA angles without estimating the mutual coupling coefficients. The second approach estimates the coupling coefficients and iteratively improves both the coupling coefficients and the DOA estimates. It is shown that the proposed techniques solve the pairing problem for uniform circular arrays and effectively estimate the DOA angles in case of unknown mutual coupling.
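
    The rotational-invariance machinery is easiest to see in one dimension. A sketch of standard 1-D ESPRIT on a uniform linear array (numpy only); the paper's contribution is to run this jointly on the real array and two interpolated virtual arrays so that the resulting azimuth and elevation estimates come out automatically paired:

        import numpy as np

        def esprit_ula(X, d_over_lambda=0.5, n_src=2):
            """1-D ESPRIT: snapshots X (sensors x N) from a uniform
            linear array with element spacing d_over_lambda wavelengths."""
            R = X @ X.conj().T / X.shape[1]        # sample covariance
            eigvec = np.linalg.eigh(R)[1]
            Es = eigvec[:, -n_src:]                # signal subspace
            # rotational invariance between the two shifted subarrays
            Phi = np.linalg.lstsq(Es[:-1], Es[1:], rcond=None)[0]
            psi = np.angle(np.linalg.eigvals(Phi))
            return np.degrees(np.arcsin(psi / (2*np.pi*d_over_lambda)))

        # two sources at -10 and 25 degrees, 8-element half-wavelength ULA
        rng = np.random.default_rng(3)
        m, N = 8, 500
        doas = np.radians([-10.0, 25.0])
        A = np.exp(1j*2*np.pi*0.5*np.outer(np.arange(m), np.sin(doas)))
        S = rng.standard_normal((2, N)) + 1j*rng.standard_normal((2, N))
        noise = 0.1*(rng.standard_normal((m, N)) + 1j*rng.standard_normal((m, N)))
        print(np.sort(esprit_ula(A @ S + noise)))  # close to [-10, 25]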

  18. A matlab framework for estimation of NLME models using stochastic differential equations: applications for estimation of insulin secretion rates.

    Science.gov (United States)

    Mortensen, Stig B; Klim, Søren; Dammann, Bernd; Kristensen, Niels R; Madsen, Henrik; Overgaard, Rune V

    2007-10-01

    The non-linear mixed-effects model based on stochastic differential equations (SDEs) provides an attractive residual error model, that is able to handle serially correlated residuals typically arising from structural mis-specification of the true underlying model. The use of SDEs also opens up for new tools for model development and easily allows for tracking of unknown inputs and parameters over time. An algorithm for maximum likelihood estimation of the model has earlier been proposed, and the present paper presents the first general implementation of this algorithm. The implementation is done in Matlab and also demonstrates the use of parallel computing for improved estimation times. The use of the implementation is illustrated by two examples of application which focus on the ability of the model to estimate unknown inputs facilitated by the extension to SDEs. The first application is a deconvolution-type estimation of the insulin secretion rate based on a linear two-compartment model for C-peptide measurements. In the second application the model is extended to also give an estimate of the time varying liver extraction based on both C-peptide and insulin measurements.

  19. Fast Estimation of Strains for Cross-Beams Six-Axis Force/Torque Sensors by Mechanical Modeling

    Directory of Open Access Journals (Sweden)

    Junqing Ma

    2013-05-01

    Strain distributions are crucial criteria of cross-beams six-axis force/torque sensors. The conventional method for calculating the criteria is to utilize Finite Element Analysis (FEA) to get numerical solutions. This paper aims to obtain analytical solutions of strains under the effect of external force/torque in each dimension. Generic mechanical models for cross-beams six-axis force/torque sensors are proposed, in which deformable cross elastic beams and compliant beams are modeled as quasi-static Timoshenko beams. A detailed description of model assumptions, model idealizations, application scope and model establishment is presented. The results are validated by both numerical FEA simulations and calibration experiments, and test results are found to be compatible with each other for a wide range of geometric properties. The proposed analytical solutions are demonstrated to be an accurate estimation algorithm with higher efficiency.

  20. State of Charge Estimation Using the Extended Kalman Filter for Battery Management Systems Based on the ARX Battery Model

    Directory of Open Access Journals (Sweden)

    Hongjie Wu

    2013-01-01

    State of charge (SOC) is a critical factor to guarantee that a battery system is operating in a safe and reliable manner. Many uncertainties and noises, such as fluctuating current, sensor measurement accuracy and bias, temperature effects, calibration errors or even sensor failure, etc. pose a challenge to the accurate estimation of SOC in real applications. This paper adds two contributions to the existing literature. First, the auto regressive exogenous (ARX) model is proposed here to simulate the battery nonlinear dynamics. Due to its discrete form and ease of implementation, this straightforward approach could be more suitable for real applications. Second, its order selection principle and parameter identification method is illustrated in detail in this paper. The hybrid pulse power characterization (HPPC) cycles are implemented on the 60AH LiFePO4 battery module for the model identification and validation. Based on the proposed ARX model, SOC estimation is pursued using the extended Kalman filter. Evaluation of the adaptability of the battery models and robustness of the SOC estimation algorithm are also verified. The results indicate that the SOC estimation method using the Kalman filter based on the ARX model shows great performance. It increases the model output voltage accuracy, thereby having the potential to be used in real applications, such as EVs and HEVs.
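
    Because the ARX form is linear in its parameters, identification reduces to ordinary least squares on lagged voltage and current regressors, which is part of what makes it attractive for embedded battery management. A minimal sketch with synthetic data and placeholder orders (the paper's HPPC-based identification and EKF stage are more involved):

        import numpy as np

        def fit_arx(v, i, na=2, nb=2):
            """Least-squares fit of v[k] = sum_j a_j*v[k-j] + sum_j b_j*i[k-j]
            (terminal voltage v, load current i, orders na and nb)."""
            n = max(na, nb)
            rows = [np.concatenate((v[k-na:k][::-1], i[k-nb:k+1][::-1]))
                    for k in range(n, len(v))]
            theta, *_ = np.linalg.lstsq(np.array(rows), v[n:], rcond=None)
            return theta   # [a_1..a_na, b_0..b_nb]

        # synthetic check: data generated from a known ARX system
        rng = np.random.default_rng(7)
        i = rng.standard_normal(2000)
        v = np.zeros(2000)
        for k in range(2, 2000):
            v[k] = 0.7*v[k-1] + 0.2*v[k-2] + 0.05*i[k] + 0.03*i[k-1] + 0.01*i[k-2]
        print(fit_arx(v, i))   # close to [0.7, 0.2, 0.05, 0.03, 0.01]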

  1. A tesselation-based model for intensity estimation and laser plasma interactions calculations in three dimensions

    Science.gov (United States)

    Colaïtis, A.; Chapman, T.; Strozzi, D.; Divol, L.; Michel, P.

    2018-03-01

    A three-dimensional laser propagation model for computation of laser-plasma interactions is presented. It is focused on indirect drive geometries in inertial confinement fusion and formulated for use at large temporal and spatial scales. A modified tesselation-based estimator and a relaxation scheme are used to estimate the intensity distribution in plasma from geometrical optics rays. Comparisons with reference solutions show that this approach is well-suited to reproduce realistic 3D intensity field distributions of beams smoothed by phase plates. It is shown that the method requires a reduced number of rays compared to traditional rigid-scale intensity estimation. Using this field estimator, we have implemented laser refraction, inverse-bremsstrahlung absorption, and steady-state crossed-beam energy transfer with a linear kinetic model in the numerical code Vampire. Probe beam amplification and laser spot shapes are compared with experimental results and pf3d paraxial simulations. These results are promising for the efficient and accurate computation of laser intensity distributions in hohlraums, which is of importance for determining the capsule implosion shape and risks of laser-plasma instabilities such as hot electron generation and backscatter in multi-beam configurations.

  2. Fasting glucose, obesity, and coronary artery calcification in community-based people without diabetes.

    Science.gov (United States)

    Rutter, Martin K; Massaro, Joseph M; Hoffmann, Udo; O'Donnell, Christopher J; Fox, Caroline S

    2012-09-01

    Our objective was to assess whether impaired fasting glucose (IFG) and obesity are independently related to coronary artery calcification (CAC) in a community-based population. We assessed CAC using multidetector computed tomography in 3,054 Framingham Heart Study participants (mean [SD] age was 50 [10] years, 49% were women, 29% had IFG, and 25% were obese) free from known vascular disease or diabetes. We tested the hypothesis that IFG (5.6-6.9 mmol/L) and obesity (BMI ≥30 kg/m²) were independently associated with high CAC (>90th percentile for age and sex) after adjusting for hypertension, lipids, smoking, and medication. High CAC was significantly related to IFG in an age- and sex-adjusted model (odds ratio 1.4 [95% CI 1.1-1.7], P = 0.002; referent: normal fasting glucose) and after further adjustment for obesity (1.3 [1.0-1.6], P = 0.045). However, IFG was not associated with high CAC in multivariable-adjusted models before (1.2 [0.9-1.4], P = 0.20) or after adjustment for obesity. Obesity was associated with high CAC in age- and sex-adjusted models (1.6 [1.3-2.0], P < 0.0001) and in multivariable-adjusted models that included fasting glucose. In this community-based cohort, CAC was associated with obesity, but not IFG, after adjusting for important confounders. With the increasing worldwide prevalence of obesity and nondiabetic hyperglycemia, these data underscore the importance of obesity in the pathogenesis of CAC.

  3. Substitution models for overlapping technologies - an application to fast reactor deployment

    International Nuclear Information System (INIS)

    Lehtinen, R.; Silvennoinen, P.; Vira, J.

    1982-01-01

    In this paper market penetration models are discussed in the context of interacting technologies. An increased confidence credit is proposed for a technology that can draw on other overlapping technologies. The model is also reduced to a numerically tractable form. As an application, scenarios of fast reactor deployment are derived under different assumptions on the uranium and fast reactor investment costs and by varying model parameters for the penetration of fusion and solar technologies. The market share of fast reactors in electricity generation is expected to lie between zero and 40 per cent in 2050 depending on the market parameters. (orig.) [de

  4. Bayes estimation of the general hazard rate model

    International Nuclear Information System (INIS)

    Sarhan, A.

    1999-01-01

    In reliability theory and life testing models, the lifetime distributions are often specified by choosing a relevant hazard rate function. Here a general hazard rate function h(t) = a + bt^(c-1), where c, a, b are constants greater than zero, is considered. The parameter c is assumed to be known. The Bayes estimators of (a,b) based on data from type II/item-censored testing without replacement are obtained. A large simulation study using the Monte Carlo method is done to compare the performance of the Bayes and regression estimators of (a,b). The criterion for comparison is the Bayes risk associated with the respective estimator. Also, the influence of the number of failed items on the accuracy of the estimators (Bayes and regression) is investigated. Estimates for the parameters (a,b) of the linearly increasing hazard rate model h(t) = a + bt, where a, b are greater than zero, can be obtained as the special case, letting c = 2.
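
    For reference, the survival function and density implied by this hazard, from which the type-II censored likelihood is built (standard reliability identities written in LaTeX; they are not quoted from the paper):

        S(t) = \exp\Big(-\int_0^t h(u)\,du\Big)
             = \exp\Big(-a t - \tfrac{b}{c}\, t^{c}\Big),
        \qquad
        f(t) = h(t)\, S(t)
             = \big(a + b\, t^{c-1}\big) \exp\Big(-a t - \tfrac{b}{c}\, t^{c}\Big).

    Setting c = 2 recovers the linearly increasing hazard special case, S(t) = exp(-at - bt^2/2), mentioned at the end of the abstract.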

  5. Fast Component Pursuit for Large-Scale Inverse Covariance Estimation.

    Science.gov (United States)

    Han, Lei; Zhang, Yu; Zhang, Tong

    2016-08-01

    The maximum likelihood estimation (MLE) for the Gaussian graphical model, which is also known as the inverse covariance estimation problem, has gained increasing interest recently. Most existing works assume that inverse covariance estimators contain sparse structure and then construct models with the ℓ1 regularization. In this paper, different from existing works, we study the inverse covariance estimation problem from another perspective by efficiently modeling the low-rank structure in the inverse covariance, which is assumed to be a combination of a low-rank part and a diagonal matrix. One motivation for this assumption is that the low-rank structure is common in many applications including climate and financial analysis, and another one is that such an assumption can reduce the computational complexity when computing its inverse. Specifically, we propose an efficient COmponent Pursuit (COP) method to obtain the low-rank part, where each component can be sparse. For optimization, the COP method greedily learns a rank-one component in each iteration by maximizing the log-likelihood. Moreover, the COP algorithm enjoys several appealing properties including the existence of an efficient solution in each iteration and the theoretical guarantee on the convergence of this greedy approach. Experiments on large-scale synthetic and real-world datasets including thousands of millions of variables show that the COP method is faster than the state-of-the-art techniques for the inverse covariance estimation problem when achieving comparable log-likelihood on test data.
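
    The computational payoff of the low-rank-plus-diagonal assumption Omega = D + V*V^T is that solves and log-determinants cost O(p*k^2) through the Woodbury and matrix determinant lemma identities instead of O(p^3). A small numpy check of those identities (notation mine; this is not the COP update itself):

        import numpy as np

        rng = np.random.default_rng(5)
        p, k = 1000, 5
        d = 1.0 + rng.random(p)             # positive diagonal part D
        V = rng.standard_normal((p, k))     # k rank-one components
        # Omega = diag(d) + V V^T, never formed explicitly below.

        M = np.eye(k) + (V.T / d) @ V       # k x k core matrix
        logdet = np.linalg.slogdet(M)[1] + np.sum(np.log(d))

        x = rng.standard_normal(p)          # solve Omega y = x via Woodbury
        y = x / d
        y -= (V @ np.linalg.solve(M, V.T @ y)) / d

        Omega = np.diag(d) + V @ V.T        # dense check
        assert np.isclose(logdet, np.linalg.slogdet(Omega)[1])
        assert np.allclose(y, np.linalg.solve(Omega, x))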

  6. A fast mass spring model solver for high-resolution elastic objects

    Science.gov (United States)

    Zheng, Mianlun; Yuan, Zhiyong; Zhu, Weixu; Zhang, Guian

    2017-03-01

    Real-time simulation of elastic objects is of great importance for computer graphics and virtual reality applications. The fast mass spring model solver can achieve visually realistic simulation in an efficient way. Unfortunately, this method suffers from resolution limitations and lack of mechanical realism for a surface geometry model, which greatly restricts its application. To tackle these problems, in this paper we propose a fast mass spring model solver for high-resolution elastic objects. First, we project the complex surface geometry model into a set of uniform grid cells serving as cages, through the mean value coordinates method, to reflect its internal structure and mechanical properties. Then, we replace the original Cholesky decomposition method in the fast mass spring model solver with a conjugate gradient method, which can make the fast mass spring model solver more efficient for detailed surface geometry models. Finally, we propose a graphics processing unit accelerated parallel algorithm for the conjugate gradient method. Experimental results show that our method can realize efficient deformation simulation of 3D elastic objects with visual realism and physical fidelity, which has great potential for applications in computer animation.
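
    In this family of solvers, the per-frame bottleneck is one sparse symmetric positive-definite solve of the form (M + h^2*L)x = b; swapping a precomputed Cholesky factorization for conjugate gradients, as proposed, trades a direct solve for memory scalability and GPU-friendly parallelism. A sketch of that solve for a toy spring chain (matrix and right-hand side are illustrative):

        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.linalg import cg

        # Chain of n masses joined by springs of stiffness kspring;
        # each frame solves the SPD system (M + h^2 L) x = b.
        n, h, kspring, mass = 1000, 1e-2, 1e4, 1.0
        main = 2.0 * np.ones(n); main[0] = main[-1] = 1.0
        L = kspring * sp.diags([-np.ones(n-1), main, -np.ones(n-1)],
                               offsets=[-1, 0, 1], format="csr")
        A = sp.eye(n) * mass + h**2 * L

        b = np.random.default_rng(2).standard_normal(n)  # stand-in RHS
        x, info = cg(A, b)        # CG replaces the Cholesky factorization
        assert info == 0          # converged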

  7. Fasting time and vitamin B12 levels in a community-based population.

    Science.gov (United States)

    Orton, Dennis J; Naugler, Christopher; Sadrzadeh, S M Hossein

    2016-07-01

    Vitamin B12, also known as cobalamin (Cbl), is an essential vitamin that manifests with numerous severe but non-specific symptoms in cases of deficiency. Assessing Cbl status often requires fasting, although this requirement is not standard between institutions. This study evaluated the impact of fasting on Cbl levels in a large community-based cohort in an effort to promote standardization of Cbl testing between sites. Laboratory data for Cbl, fasting time, patient age and sex were obtained from the laboratory information service of Calgary Laboratory Services (CLS) for the period of April 2011 to June 2015. CLS is the sole supplier of laboratory services in the Southern Alberta region in Canada (population, approximately 1.4 million). To investigate potential sex-specific effects of fasting on Cbl levels, males and females were analyzed separately using linear regression models. A total of 346,957 individual patient results (196,849 females, 146,085 males) were obtained. The mean plasma Cbl level was 386.5 (±195.6) pmol/L and 412.0 (±220.8) pmol/L for males and females, respectively. Linear regression analysis showed that fasting had no significant association with Cbl levels in females; however, a statistically significant decrease of 0.9 pmol/L per hour of fasting was observed in males, suggesting that fasting has the potential to contribute to higher rates of Cbl deficiency in men. Together, these data suggest fasting should be excluded as a requirement for evaluating plasma Cbl. Copyright © 2016 Elsevier B.V. All rights reserved.

  8. Isozymes variability in rice mutants induced by fast neutrons and gamma rays

    International Nuclear Information System (INIS)

    Fuentes, J.L.; Alvarez, A.; Gutierrez, L.; Deus, J.E.

    2001-01-01

    The isozyme variability of a group of rice mutants induced through gamma and fast neutron (14 MeV) irradiation was studied. Polymorphisms were detected using esterase, peroxidase, polyphenol oxidase and alcohol dehydrogenase systems. The mean value of genetic similarity among the different cultivars, derived from the isozymes, was 0.75. The dendrogram was constructed from the genetic similarity matrices obtained from the isozyme data, using the unweighted pair group method with arithmetic mean (UPGMA). The efficiency of the UPGMA model for the estimation of genetic relationships among cultivars was supported by cophenetic correlation coefficients. Such values indicate that the distortion degree for the estimated similarities was minimal. It was found that both gamma rays and fast neutrons generated a wide range of variability which can be detected by means of isozyme patterns, even in closely related cultivars. (author)

  9. Isozymes variability in rice mutants induced by fast neutrons and gamma rays

    Energy Technology Data Exchange (ETDEWEB)

    Fuentes, J L; Alvarez, A [Centro de Estudios Aplicados al Desarrollo Nuclear (CEADEN), Miramar, Playa, Havana (Cuba)]; Gutierrez, L; Deus, J E [Instituto de Investigaciones del Arroz, Bauta, Havana (Cuba)]

    2001-05-01

    The isozyme variability of a group of rice mutants induced through gamma and fast neutron (14 MeV) irradiation was studied. Polymorphisms were detected using esterase, peroxidase, polyphenol oxidase and alcohol dehydrogenase systems. The mean value of genetic similarity among the different cultivars, derived from the isozymes, was 0.75. The dendrogram was constructed from the genetic similarity matrices obtained from the isozyme data, using the unweighted pair group method with arithmetic mean (UPGMA). The efficiency of the UPGMA model for the estimation of genetic relationships among cultivars was supported by cophenetic correlation coefficients. Such values indicate that the distortion degree for the estimated similarities was minimal. It was found that both gamma rays and fast neutrons generated a wide range of variability which can be detected by means of isozyme patterns, even in closely related cultivars. (author)
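
    The clustering step described above is compact with scipy: similarities become distances via d = 1 - s, UPGMA is average-linkage hierarchical clustering, and the cophenetic correlation coefficient quantifies how much the dendrogram distorts the original similarities. A sketch with an invented similarity matrix for four cultivars:

        import numpy as np
        from scipy.cluster.hierarchy import linkage, cophenet
        from scipy.spatial.distance import squareform

        # Hypothetical isozyme-based genetic similarities among 4 cultivars.
        S = np.array([[1.00, 0.80, 0.72, 0.65],
                      [0.80, 1.00, 0.75, 0.70],
                      [0.72, 0.75, 1.00, 0.68],
                      [0.65, 0.70, 0.68, 1.00]])
        D = squareform(1.0 - S, checks=False)  # condensed distance vector
        Z = linkage(D, method="average")       # UPGMA
        coph_corr, _ = cophenet(Z, D)          # dendrogram distortion measure
        print("cophenetic correlation:", coph_corr)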

  10. Estimation of a simple agent-based model of financial markets: An application to Australian stock and foreign exchange data

    Science.gov (United States)

    Alfarano, Simone; Lux, Thomas; Wagner, Friedrich

    2006-10-01

    Following Alfarano et al. [Estimation of agent-based models: the case of an asymmetric herding model, Comput. Econ. 26 (2005) 19-49; Excess volatility and herding in an artificial financial market: analytical approach and estimation, in: W. Franz, H. Ramser, M. Stadler (Eds.), Funktionsfähigkeit und Stabilität von Finanzmärkten, Mohr Siebeck, Tübingen, 2005, pp. 241-254], we consider a simple agent-based model of a highly stylized financial market. The model takes Kirman's ant process [A. Kirman, Epidemics of opinion and speculative bubbles in financial markets, in: M.P. Taylor (Ed.), Money and Financial Markets, Blackwell, Cambridge, 1991, pp. 354-368; A. Kirman, Ants, rationality, and recruitment, Q. J. Econ. 108 (1993) 137-156] of mimetic contagion as its starting point, but allows for asymmetry in the attractiveness of both groups. Embedding the contagion process into a standard asset-pricing framework, and identifying the abstract groups of the herding model as chartists and fundamentalist traders, a market with periodic bubbles and bursts is obtained. Taking stock of the availability of a closed-form solution for the stationary distribution of returns for this model, we can estimate its parameters via maximum likelihood. Expanding our earlier work, this paper presents pertinent estimates for the Australian dollar/US dollar exchange rate and the Australian stock market index. As it turns out, our model indicates dominance of fundamentalist behavior in both the stock and foreign exchange market.
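
    The underlying ant process is straightforward to simulate, which is useful for sanity-checking the closed-form stationary distribution before taking it to data. A sketch of the (possibly asymmetric) herding dynamics as an embedded jump chain, with agents switching through autonomous conversion (eps1, eps2) or pairwise recruitment (delta); parameter values are illustrative:

        import numpy as np

        def simulate_kirman(N=100, eps1=0.2, eps2=0.2, delta=0.01,
                            steps=100_000, seed=4):
            """n of N agents are chartists; switching rates are
            fundamentalist -> chartist: (N-n)*(eps1 + delta*n),
            chartist -> fundamentalist: n*(eps2 + delta*(N-n)).
            Embedded jump chain only; holding times are ignored."""
            rng = np.random.default_rng(seed)
            n = N // 2
            path = np.empty(steps)
            for s in range(steps):
                up = (N - n) * (eps1 + delta * n)
                down = n * (eps2 + delta * (N - n))
                n += 1 if rng.random() < up / (up + down) else -1
                path[s] = n / N
            return path

        x = simulate_kirman()
        print("mean chartist fraction:", x.mean())   # near 0.5 when symmetric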

  11. Optimization-Based Calibration of FAST.Farm Parameters Against SOWFA

    Energy Technology Data Exchange (ETDEWEB)

    Doubrawa Moreira, Paula [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Annoni, Jennifer [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Jonkman, Jason [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Ghate, Aditya [Stanford University]

    2018-01-12

    FAST.Farm is a medium-fidelity wind farm modeling tool that can be used to assess the power and loads contributions of wind turbines in a wind farm. The objective of this paper is to undertake a calibration procedure that sets the user parameters of FAST.Farm to accurately represent results from large-eddy simulations. The results provide an in-depth analysis of the comparison between FAST.Farm and large-eddy simulations before and after calibration. The comparison of FAST.Farm and large-eddy simulation results is presented with respect to streamwise and radial velocity components as well as wake-meandering statistics (mean and standard deviation) in the lateral and vertical directions under different atmospheric and turbine operating conditions.

  12. Parameter Estimation of Partial Differential Equation Models

    KAUST Repository

    Xun, Xiaolei

    2013-09-01

    Partial differential equation (PDE) models are commonly used to model complex dynamic systems in applied sciences such as biology and finance. The forms of these PDE models are usually proposed by experts based on their prior knowledge and understanding of the dynamic system. Parameters in PDE models often have interesting scientific interpretations, but their values are often unknown and need to be estimated from the measurements of the dynamic system in the presence of measurement errors. Most PDEs used in practice have no analytic solutions, and can only be solved with numerical methods. Currently, methods for estimating PDE parameters require repeatedly solving PDEs numerically under thousands of candidate parameter values, and thus the computational load is high. In this article, we propose two methods to estimate parameters in PDE models: a parameter cascading method and a Bayesian approach. In both methods, the underlying dynamic process modeled with the PDE model is represented via basis function expansion. For the parameter cascading method, we develop two nested levels of optimization to estimate the PDE parameters. For the Bayesian method, we develop a joint model for data and the PDE and develop a novel hierarchical model allowing us to employ Markov chain Monte Carlo (MCMC) techniques to make posterior inference. Simulation studies show that the Bayesian method and parameter cascading method are comparable, and both outperform other available methods in terms of estimation accuracy. The two methods are demonstrated by estimating parameters in a PDE model from long-range infrared light detection and ranging data. Supplementary materials for this article are available online. © 2013 American Statistical Association.
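
    To make the basis-expansion idea concrete, here is a minimal two-stage sketch for the heat equation u_t = theta * u_xx: the noisy field is smoothed with a local polynomial basis (a stand-in for the B-spline expansion), and theta is then obtained by least squares without repeatedly re-solving the PDE for candidate parameter values. The grid sizes, noise level, and the use of Savitzky-Golay smoothing are assumptions for illustration, not the article's parameter cascading or Bayesian machinery.

```python
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(1)
theta_true = 0.1
nx, nt, dt = 64, 400, 2e-4
x = np.linspace(0, 1, nx)
dx = x[1] - x[0]                           # explicit scheme is stable here
u = np.sin(np.pi * x)                      # initial condition, u = 0 at ends

U = [u.copy()]
for _ in range(nt - 1):                    # generate data by solving the PDE once
    u[1:-1] += theta_true * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])
    U.append(u.copy())
U = np.array(U) + 1e-3 * rng.standard_normal((nt, nx))  # noisy observations

# Stage 1: local polynomial smoothing yields u_t and u_xx from the data alone
u_t = savgol_filter(U, 21, 3, deriv=1, delta=dt, axis=0)
u_xx = savgol_filter(U, 21, 3, deriv=2, delta=dx, axis=1)

# Stage 2: least-squares slope of u_t against u_xx estimates theta directly
m = np.s_[20:-20, 5:-5]                    # trim smoothing edge artifacts
theta_hat = (u_t[m] * u_xx[m]).sum() / (u_xx[m] ** 2).sum()
print(f"theta_true = {theta_true}, theta_hat = {theta_hat:.4f}")
```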

  13. The never ending road: improving, adapting and refining a needs-based model to estimate future general practitioner requirements in two Australian states.

    Science.gov (United States)

    Laurence, Caroline O; Heywood, Troy; Bell, Janice; Atkinson, Kaye; Karnon, Jonathan

    2018-03-27

    Health workforce planning models have been developed to estimate the future health workforce requirements for the population they serve and have been used to inform policy decisions. The aim was to adapt and further develop a needs-based GP workforce simulation model to incorporate the current and estimated geographic distribution of patients and GPs. A needs-based simulation model that estimates the supply of GPs and the levels of services required in South Australia (SA) was adapted and applied to the Western Australian (WA) workforce. The main outcome measure was the difference between the number of full-time equivalent (FTE) GPs supplied and required from 2013 to 2033. The base scenario estimated a shortage of GPs in WA from 2019 onwards, reaching 493 FTE GPs in 2033, while for SA, estimates showed an oversupply over the projection period. The WA urban and rural models estimated an urban shortage of GPs over this period. A reduced international medical graduate recruitment scenario resulted in estimated shortfalls of GPs by 2033 for both WA and SA. The WA-specific scenarios of lower population projections and registrar work value resulted in a reduced shortage of FTE GPs in 2033, while unfilled training places increased the shortfall of FTE GPs in 2033. The simulation model incorporates contextual differences in its structure that allow within- and cross-jurisdictional comparisons of workforce estimations. It also provides greater insight into the drivers of supply and demand and the impact of changes in workforce policy, promoting more informed decision-making.

  14. A Study on Parametric Wave Estimation Based on Measured Ship Motions

    DEFF Research Database (Denmark)

    Nielsen, Ulrik Dam; Iseki, Toshio

    2011-01-01

    The paper studies parametric wave estimation based on the ‘wave buoy analogy’, and data and results obtained from the training ship Shioji-maru are compared with estimates of the sea states obtained from other measurements and observations. Furthermore, the estimating characteristics of the parametric model are discussed by considering the results of a similar estimation concept based on Bayesian modelling. The purpose of the latter comparison is not to favour one estimation approach over the other but rather to highlight some of the advantages and disadvantages of the two approaches.

  15. Comparison Study on Two Model-Based Adaptive Algorithms for SOC Estimation of Lithium-Ion Batteries in Electric Vehicles

    Directory of Open Access Journals (Sweden)

    Yong Tian

    2014-12-01

    Full Text Available State of charge (SOC) estimation is essential to battery management systems in electric vehicles (EVs) to ensure the safe operation of batteries and to provide drivers with the remaining range of the EV. A number of estimation algorithms have been developed to obtain an accurate SOC value, because the SOC cannot be directly measured with sensors and is closely related to various factors, such as ambient temperature, current rate and battery aging. In this paper, two model-based adaptive algorithms, the adaptive unscented Kalman filter (AUKF) and the adaptive sliding mode observer (ASMO), are applied and compared in terms of convergence behavior, tracking accuracy, computational cost and estimation robustness against parameter uncertainties of the battery model in SOC estimation. Two typical driving cycles, the Dynamic Stress Test (DST) and the New European Driving Cycle (NEDC), are applied to evaluate the performance of the two algorithms. Comparison results show that the AUKF has merits in convergence ability and tracking accuracy given an accurate battery model, while the ASMO has lower computational cost and better estimation robustness against parameter uncertainties of the battery model.

  16. Sparse estimation of model-based diffuse thermal dust emission

    Science.gov (United States)

    Irfan, Melis O.; Bobin, Jérôme

    2018-03-01

    Component separation for the Planck High Frequency Instrument (HFI) data is primarily concerned with the estimation of thermal dust emission, which requires the separation of thermal dust from the cosmic infrared background (CIB). For that purpose, current estimation methods rely on filtering techniques to decouple thermal dust emission from CIB anisotropies, which tend to yield a smooth, low-resolution, estimation of the dust emission. In this paper, we present a new parameter estimation method, premise: Parameter Recovery Exploiting Model Informed Sparse Estimates. This method exploits the sparse nature of thermal dust emission to calculate all-sky maps of thermal dust temperature, spectral index, and optical depth at 353 GHz. premise is evaluated and validated on full-sky simulated data. We find the percentage difference between the premise results and the true values to be 2.8, 5.7, and 7.2 per cent at the 1σ level across the full sky for thermal dust temperature, spectral index, and optical depth at 353 GHz, respectively. A comparison between premise and a GNILC-like method over selected regions of our sky simulation reveals that both methods perform comparably within high signal-to-noise regions. However, outside of the Galactic plane, premise is seen to outperform the GNILC-like method with increasing success as the signal-to-noise ratio worsens.

  17. Core Power Control of the fast nuclear reactors with estimation of the delayed neutron precursor density using Sliding Mode method

    International Nuclear Information System (INIS)

    Ansarifar, G.R.; Nasrabadi, M.N.; Hassanvand, R.

    2016-01-01

    Highlights: • We present a sliding mode control (SMC) system based on a sliding mode observer (SMO) for the control of fast reactor power. • An SMO has been developed to estimate the density of the delayed neutron precursor. • The stability analysis is given by means of the Lyapunov approach. • The control system is guaranteed to be stable within a large range. • A comparison between the SMC and a conventional PID controller has been carried out. - Abstract: In this paper, a nonlinear controller using the sliding mode method, a robust nonlinear control technique, is designed to control a fast nuclear reactor. The reactor core is simulated based on the point kinetics equations with one delayed neutron group. Considering the limitations of measuring the delayed neutron precursor density, a sliding mode observer is designed to estimate it, and a sliding mode control based on this observer is then presented. The stability analysis is given by means of the Lyapunov approach, so the control system is guaranteed to be stable within a large range. Sliding mode control (SMC) is a robust nonlinear method which has several advantages, such as robustness against matched external disturbances and parameter uncertainties. The employed method is easy to implement in practical applications; moreover, the sliding mode control exhibits the desired dynamic properties during the entire output-tracking process, independent of perturbations. Simulation results are presented to demonstrate the effectiveness of the proposed controller in terms of performance, robustness and stability.
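
    A minimal sketch of the observer idea follows, assuming one-group point kinetics with illustrative reactor constants and switching gains; the paper's full design also includes the sliding mode controller itself and a Lyapunov-based gain selection, both omitted here. The observer copies the model and is driven by the measured neutron density through a switching term, so its precursor estimate slides onto the true value.

```python
import numpy as np

beta, Lam, lam = 0.0065, 1e-6, 0.08  # delayed fraction, generation time [s], decay [1/s]
rho = 0.2 * beta                     # small positive reactivity step (illustrative)
k1, k2 = 5e3, 5e5                    # observer switching gains (tuning assumption)
dt, T = 1e-5, 1.0

n = 1.0
c = beta * n / (Lam * lam)           # precursor density at equilibrium
nh, ch = n, 0.5 * c                  # observer starts with a wrong precursor guess

for _ in range(int(T / dt)):
    # true one-group point kinetics
    dn = ((rho - beta) / Lam) * n + lam * c
    dc = (beta / Lam) * n - lam * c
    # observer: same model, driven by the measured n via the switching term
    s = np.sign(n - nh)
    dnh = ((rho - beta) / Lam) * n + lam * ch + k1 * s
    dch = (beta / Lam) * n - lam * ch + k2 * s
    n, c = n + dt * dn, c + dt * dc
    nh, ch = nh + dt * dnh, ch + dt * dch

print(f"relative error of estimated precursor density: {abs(ch - c) / c:.2e}")
```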

  18. Safeprops: A Software for Fast and Reliable Estimation of Safety and Environmental Properties for Organic Compounds

    DEFF Research Database (Denmark)

    Jones, Mark Nicholas; Frutiger, Jerome; Abildskov, Jens

    We present a new software tool called SAFEPROPS which is able to estimate major safety-related and environmental properties for organic compounds. SAFEPROPS provides accurate, reliable and fast predictions using the Marrero-Gani group contribution (MG-GC) method. It is implemented using Python as the main programming language, while the necessary parameters together with their correlation matrix are obtained from an SQLite database which has been populated using off-line parameter and error estimation routines (Eq. 3-8).
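
    The sketch below shows the general shape of a first-order group contribution estimate of the kind the MG-GC method generalizes; the group set, contribution values, and intercept are hypothetical placeholders, not SAFEPROPS parameters or actual Marrero-Gani tables.

```python
# Hypothetical first-order group contributions for some property of interest
CONTRIB_FLASH_POINT = {"CH3": 10.5, "CH2": 12.1, "OH": 45.0}  # placeholder values

def estimate_property(groups, contrib, intercept=170.0):
    """First-order GC estimate: property = intercept + sum(n_i * C_i)."""
    return intercept + sum(n * contrib[g] for g, n in groups.items())

# 1-propanol decomposed into first-order groups: 1x CH3, 2x CH2, 1x OH
print(estimate_property({"CH3": 1, "CH2": 2, "OH": 1}, CONTRIB_FLASH_POINT))
```

    Higher-order MG-GC corrections add further sums over second- and third-order groups to the same linear form before applying the property-specific transformation.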

  19. Remaining lifetime modeling using State-of-Health estimation

    Science.gov (United States)

    Beganovic, Nejra; Söffker, Dirk

    2017-08-01

    Technical systems and system components undergo gradual degradation over time. Continuous degradation in a system is reflected in decreased reliability and unavoidably leads to system failure. Therefore, continuous evaluation of State-of-Health (SoH) is essential to provide at least the predefined lifetime of the system specified by the manufacturer or, even better, to extend it. However, a precondition for lifetime extension is accurate estimation of SoH as well as estimation and prediction of the Remaining Useful Lifetime (RUL). For this purpose, lifetime models describing the relation between system/component degradation and consumed lifetime have to be established. In this contribution, the modeling and selection of suitable lifetime models from a database based on current SoH conditions are discussed. The main contribution of this paper is the development of new modeling strategies capable of describing complex relations between measurable system variables, the related system degradation, and RUL. Two approaches, with their accompanying advantages and disadvantages, are introduced and compared. Both approaches are capable of modeling stochastic aging processes of a system by simultaneous adaptation of RUL models to the current SoH. The first approach requires a priori knowledge about aging processes in the system and accurate estimation of SoH. The estimation of SoH here is conditioned on tracking the actual accumulated damage in the system, so that particular model parameters are defined according to a priori known assumptions about the system's aging. Prediction accuracy in this case is highly dependent on accurate estimation of SoH but involves a high number of degrees of freedom. The second approach does not require a priori knowledge about the system's aging, as particular model parameters are defined according to a multi-objective optimization procedure. The prediction accuracy of this model does not depend strongly on the estimated SoH. This model

  20. Estimation of biological parameters of marine organisms using linear and nonlinear acoustic scattering model-based inversion methods.

    Science.gov (United States)

    Chu, Dezhang; Lawson, Gareth L; Wiebe, Peter H

    2016-05-01

    The linear inversion commonly used in fisheries and zooplankton acoustics assumes a constant inversion kernel and ignores the uncertainties associated with the shape and behavior of the scattering targets, as well as other relevant animal parameters. Here, the errors of the linear inversion due to uncertainty associated with the inversion kernel are quantified. A scattering model-based nonlinear inversion method is presented that takes into account the nonlinearity of the inverse problem and is able to estimate simultaneously the animal abundance and the parameters associated with the scattering model inherent to the kernel. It uses sophisticated scattering models to estimate, first, the abundance and, second, the relevant shape and behavioral parameters of the target organisms. Numerical simulations demonstrate that the abundance, size, and behavior (tilt angle) parameters of marine animals (fish or zooplankton) can be accurately inferred from the inversion by using multi-frequency acoustic data. The influence of the singularity and uncertainty in the inversion kernel on the inversion results can be mitigated by examining the singular values for linear inverse problems and by employing a nonlinear inversion involving a scattering model-based kernel.
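
    A minimal sketch of the nonlinear inversion idea: abundance N and a size parameter L are estimated jointly from multi-frequency data by nonlinear least squares, so the kernel itself depends on the unknowns. The toy cross-section model (a Rayleigh-to-geometric transition) and all numbers are assumptions standing in for the sophisticated scattering models used in the paper.

```python
import numpy as np
from scipy.optimize import least_squares

def sigma_bs(f_khz, L_mm):
    """Toy backscattering cross-section with a Rayleigh-to-geometric rolloff."""
    x = (f_khz * L_mm) / 500.0          # dimensionless size-frequency product
    return 1e-8 * L_mm**2 * x**4 / (1.0 + x**4)

freqs = np.array([38.0, 70.0, 120.0, 200.0])   # kHz, typical survey bands
N_true, L_true = 250.0, 20.0
rng = np.random.default_rng(2)
data = N_true * sigma_bs(freqs, L_true) * (1 + 0.05 * rng.standard_normal(4))

def residuals(p):
    N, L = p
    return N * sigma_bs(freqs, L) - data        # nonlinear in L, linear in N

fit = least_squares(residuals, x0=[100.0, 10.0], bounds=([0, 1], [1e6, 100]))
print("estimated N, L:", fit.x)
```

    The level of the spectrum constrains N while its shape across frequencies constrains L, which is why multi-frequency data are essential to the joint estimate.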

  1. A deep learning approach to estimate stress distribution: a fast and accurate surrogate of finite-element analysis.

    Science.gov (United States)

    Liang, Liang; Liu, Minliang; Martin, Caitlin; Sun, Wei

    2018-01-01

    Structural finite-element analysis (FEA) has been widely used to study the biomechanics of human tissues and organs, as well as tissue-medical device interactions and treatment strategies. However, patient-specific FEA models usually require complex procedures to set up and long computing times to obtain final simulation results, preventing prompt feedback to clinicians in time-sensitive clinical applications. In this study, using machine learning techniques, we developed a deep learning (DL) model to directly estimate the stress distributions of the aorta. The DL model was designed and trained to take the input of FEA and directly output the aortic wall stress distributions, bypassing the FEA calculation process. The trained DL model is capable of predicting the stress distributions with average errors of 0.492% and 0.891% in the von Mises stress distribution and peak von Mises stress, respectively. To our knowledge, this is the first study to demonstrate the feasibility and great potential of using the DL technique as a fast and accurate surrogate of FEA for stress analysis. © 2018 The Author(s).
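
    The surrogate idea can be sketched with any regressor that maps solver inputs to a discretized output field; below, a small multilayer perceptron is trained on data from a toy analytic "solver". The stand-in solver, parameter ranges, and network size are assumptions for illustration, not the authors' aortic FEA setup or network architecture.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(8)

def fake_fea(params):
    """Placeholder for an expensive FE solve: 2 inputs -> 50-node stress field."""
    p, r = params                             # pressure-like, radius-like inputs
    s = np.linspace(0, 1, 50)
    return p * r * (1 + 0.3 * np.sin(2 * np.pi * s * r))

X = rng.uniform([0.5, 0.5], [2.0, 2.0], size=(400, 2))
Y = np.array([fake_fea(x) for x in X])        # generated offline, once

# train the surrogate on 350 cases, test on the remaining 50
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=3000,
                     random_state=0).fit(X[:350], Y[:350])
err = np.abs(model.predict(X[350:]) - Y[350:]).mean() / np.abs(Y[350:]).mean()
print(f"mean relative surrogate error: {err:.3%}")  # queries now cost milliseconds
```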

  2. WALS Estimation and Forecasting in Factor-based Dynamic Models with an Application to Armenia

    OpenAIRE

    Poghosyan, Karen; Magnus, Jan R.

    2012-01-01

    Two model averaging approaches are used and compared in estimating and forecasting dynamic factor models: the well-known Bayesian model averaging (BMA) and the recently developed weighted-average least squares (WALS). Both methods propose to combine frequentist estimators using Bayesian weights. We apply our framework to the Armenian economy using quarterly data from 2000-2010, and we estimate and forecast real GDP growth and inflation.

  3. Estimating renal function in children: a new GFR-model based on serum cystatin C and body cell mass.

    Science.gov (United States)

    Andersen, Trine Borup

    2012-07-01

    This PhD thesis is based on four individual studies including 131 children aged 2-14 years with nephro-urologic disorders. The majority (72%) of children had a normal renal function (GFR > 82 ml/min/1.73 square metres), and only 8% had reduced renal function. The thesis' main aims were: 1) to develop a more accurate GFR model based on a novel theory of body cell mass (BCM) and cystatin C (CysC); 2) to investigate the diagnostic performance in comparison to other models as well as serum CysC and creatinine; 3) to validate the new model's precision and validity. The model's diagnostic performance was investigated in study I as the ability to detect changes in renal function (total day-to-day variation), and in study IV as the ability to discriminate between normal and reduced function. The model's precision and validity were indirectly evaluated in studies II and III, and in study I accuracy was estimated by comparison to reference GFR. Several prediction models based on CysC or a combination of CysC and serum creatinine have been developed for predicting GFR in children. Despite these efforts to improve GFR estimates, no alternative to exogenous methods has been found, and Schwartz's formula based on height, creatinine and an empirically derived constant is still recommended for GFR estimation in children. However, the inclusion of BCM as a possible variable in a CysC-based prediction model has not yet been explored. As CysC is produced at a constant rate from all nucleated cells, we hypothesize that including BCM in a new prediction model will increase the accuracy of the GFR estimate. Study I aimed at deriving the new GFR prediction model based on the novel theory of CysC and BCM and comparing its performance to previously published models. The BCM model took the form GFR (mL/min) = 10.2 × (BCM/CysC)^0.40 × (height × body surface area/Crea)^0.65. The model predicted 99% within ±30% of reference GFR, and 67% within ±10%. This was higher than any other model. The

  4. Time Domain Frequency Stability Estimation Based On FFT Measurements

    National Research Council Canada - National Science Library

    Chang, P

    2004-01-01

    .... In this paper, the biases of the Fast Fourier transform (FFT) spectral estimate with a Hanning window are checked, and the resulting unbiased spectral density is used to calculate the Allan variance...

  5. Coupling Hydrologic and Hydrodynamic Models to Estimate PMF

    Science.gov (United States)

    Felder, G.; Weingartner, R.

    2015-12-01

    Most sophisticated probable maximum flood (PMF) estimations derive the PMF from the probable maximum precipitation (PMP) by applying deterministic hydrologic models calibrated with observed data. This method is based on the assumption that the hydrological system is stationary, meaning that the system behaviour during the calibration period or the calibration event is presumed to be the same as it is during the PMF. However, as soon as a catchment-specific threshold is reached, the system is no longer stationary. At or beyond this threshold, retention areas, new flow paths, and changing runoff processes can strongly affect downstream peak discharge. These effects can be accounted for by coupling hydrologic and hydrodynamic models, a technique that is particularly promising when the expected peak discharge may considerably exceed the observed maximum discharge. In such cases, the coupling of hydrologic and hydraulic models has the potential to significantly increase the physical plausibility of PMF estimations. This procedure ensures both that the estimated extreme peak discharge does not exceed the physical limit based on riverbed capacity and that the dampening effect of inundation processes on peak discharge is considered. Our study discusses the prospect of considering retention effects on PMF estimations by coupling hydrologic and hydrodynamic models. This method is tested by forcing PREVAH, a semi-distributed deterministic hydrological model, with randomly generated, physically plausible extreme precipitation patterns. The resulting hydrographs are then used to externally force the hydraulic model BASEMENT-ETH (riverbed in 1D, potential inundation areas in 2D). Finally, the PMF estimation results obtained using the coupled modelling approach are compared to the results obtained using ordinary hydrologic modelling.

  6. Aggregate modeling of fast-acting demand response and control under real-time pricing

    International Nuclear Information System (INIS)

    Chassin, David P.; Rondeau, Daniel

    2016-01-01

    Highlights: • Demand elasticity for fast-acting demand response load under real-time pricing. • Validated first-principles logistic demand curve matches the random utility model. • Logistic demand curve suitable for diversified aggregate loads in market-based transactive control systems. - Abstract: This paper develops and assesses the performance of a short-term demand response (DR) model for utility load control with applications to resource planning and control design. Long-term response models tend to underestimate short-term demand response when induced by prices. This has two important consequences. First, planning studies tend to undervalue DR and often overlook its benefits in utility demand management program development. Second, when DR is not overlooked, the open-loop DR control gain estimate may be too low. This can result in overuse of load resources, control instability and excessive price volatility. Our objective is therefore to develop a more accurate and better-performing short-term demand response model. We construct the model from first principles about the nature of thermostatic load control and show that the resulting formulation corresponds exactly to the random utility model employed in economics to study consumer choice. The model is tested against empirical data collected from field demonstration projects and is shown to perform better than alternative models commonly used to forecast demand under normal operating conditions. The results suggest that (1) existing utility tariffs appear to be inadequate to incentivize demand response, particularly in the presence of high renewables, and (2) existing load control systems run the risk of becoming unstable if utilities close the loop on real-time prices.
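
    A minimal sketch of the logistic demand curve implied by the random-utility formulation, with illustrative (not field-calibrated) parameters: aggregate responsive load falls as a logistic function of the real-time price.

```python
import numpy as np

def demand(p, q_max=100.0, p_star=0.10, eta=25.0):
    """Aggregate load (MW) as a logistic function of real-time price ($/kWh).

    q_max  : total responsive load, p_star : indifference price,
    eta    : sensitivity (steepness of the demand curve). All values assumed.
    """
    return q_max / (1.0 + np.exp(eta * (p - p_star)))

prices = np.array([0.05, 0.10, 0.15, 0.30])
print(demand(prices))   # load falls smoothly as price rises past p_star
# The implied short-term price elasticity at p is -eta * p * (1 - q / q_max),
# which vanishes at both price extremes and peaks near the indifference price.
```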

  7. Fast Conceptual Cost Estimating of Aerospace Projects Using Historical Information

    Science.gov (United States)

    Butts, Glenn

    2007-01-01

    Accurate estimates can be created in less than a minute by applying powerful techniques and algorithms to create an Excel-based parametric cost model. In five easy steps you will learn how to normalize your company's historical cost data to the new project parameters. This paper provides a complete, easy-to-understand, step-by-step how-to guide. Such a guide does not seem to currently exist. Over 2,000 hours of research, data collection, and trial and error, and thousands of lines of Excel Visual Basic for Applications (VBA) code were invested in developing these methods. While VBA is not required to use this information, it increases the power and aesthetics of the model. Implementing all of the steps described, while not required, will increase the accuracy of the results.

  8. A new license plate extraction framework based on fast mean shift

    Science.gov (United States)

    Pan, Luning; Li, Shuguang

    2010-08-01

    License plate extraction is considered the most crucial step of an automatic license plate recognition (ALPR) system. In this paper, a region-based hybrid license plate detection method is proposed to solve practical problems under complex backgrounds containing a large quantity of distracting information. In this method, coarse license plate location is carried out first to obtain the head part of a vehicle. Then a new fast mean shift method, based on random sampling of the kernel density estimate (KDE), is adopted to segment the color vehicle images in order to obtain candidate license plate regions. The remarkable speed-up it brings makes mean shift segmentation more suitable for this application. Feature extraction and classification are used to accurately separate the license plate from other candidate regions. Finally, tilted license plates are rectified for subsequent recognition steps.
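
    A minimal sketch of the speed-up idea: mean shift iterations are run against a kernel density estimate supported on a random subsample of the data rather than on every pixel, so each iteration costs O(n·m) with m << n. The sample fraction, bandwidth, and synthetic color data are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

rng = np.random.default_rng(3)
# synthetic "pixel colors" drawn from three color clusters
colors = np.vstack([rng.normal(m, 8, (500, 3)) for m in (40, 128, 220)])

def mean_shift(points, sample_frac=0.1, bandwidth=20.0, iters=30):
    # the KDE is supported on a small random sample only (source of the speed-up)
    sample = points[rng.choice(len(points), int(sample_frac * len(points)),
                               replace=False)]
    modes = points.copy()
    for _ in range(iters):
        # Gaussian-kernel weights of every point w.r.t. the sampled KDE
        d2 = ((modes[:, None, :] - sample[None, :, :]) ** 2).sum(-1)
        w = np.exp(-0.5 * d2 / bandwidth**2)
        modes = (w @ sample) / w.sum(axis=1, keepdims=True)
    return modes

modes = mean_shift(colors)
print(np.unique(np.round(modes / 10) * 10, axis=0)[:5])  # ~3 color modes
```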

  9. Fast Rotation-Free Feature-Based Image Registration Using Improved N-SIFT and GMM-Based Parallel Optimization.

    Science.gov (United States)

    Yu, Dongdong; Yang, Feng; Yang, Caiyun; Leng, Chengcai; Cao, Jian; Wang, Yining; Tian, Jie

    2016-08-01

    Image registration is a key problem in a variety of applications, such as computer vision, medical image processing, and pattern recognition, but its application is limited by time consumption and by accuracy in the case of large pose differences. Aiming at these two kinds of problems, we propose a fast rotation-free feature-based rigid registration method based on our proposed accelerated-NSIFT and GMM registration-based parallel optimization (PO-GMMREG). Our method is accelerated by using GPU/CUDA programming and by preserving only the location information without constructing the descriptor of each interest point, while its robustness to missing correspondences and outliers is improved by converting the interest point matching to Gaussian mixture model alignment. The accuracy in the case of large pose differences is addressed by our proposed PO-GMMREG algorithm, which constructs a set of initial transformations. Experimental results demonstrate that our proposed algorithm can rapidly and rigidly register 3-D medical images and is reliable for aligning 3-D scans even when they exhibit a poor initialization.

  10. A fast dynamic mode in rare earth based glasses

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, L. Z.; Xue, R. J.; Zhu, Z. G.; Wang, W. H.; Bai, H. Y., E-mail: hybai@iphy.ac.cn [Institute of Physics, Chinese Academy of Sciences, Beijing 100190 (China); Ngai, K. L. [Dipartimento di Fisica, Università di Pisa, Largo B. Pontecorvo 3, I-56127 Pisa (Italy)

    2016-05-28

    Metallic glasses (MGs) usually exhibit only a slow β-relaxation peak, and the signature of fast dynamics is challenging to observe experimentally in MGs. We report a general and unusual fast dynamic mode in a series of rare earth based MGs, manifested as a distinct fast β′-relaxation peak in addition to the slow β-relaxation and α-relaxation peaks. We show that the activation energy of the fast β′-relaxation is about 12RT_g (where T_g is the glass transition temperature) and is equivalent to the activation energy of a localized flow event. The coupling of these dynamic processes, as well as their relationship with the glass transition and structural heterogeneity, is discussed.

  11. Expediting model-based optoacoustic reconstructions with tomographic symmetries

    International Nuclear Information System (INIS)

    Lutzweiler, Christian; Deán-Ben, Xosé Luís; Razansky, Daniel

    2014-01-01

    Purpose: Image quantification in optoacoustic tomography implies the use of accurate forward models of excitation, propagation, and detection of optoacoustic signals, while inversions with high spatial resolution usually involve very large matrices, leading to unreasonably long computation times. The development of fast and memory-efficient model-based approaches therefore represents an important challenge for advancing the quantitative and dynamic imaging capabilities of tomographic optoacoustic imaging. Methods: Herein, a method for the simplification and acceleration of model-based inversions, relying on inherent symmetries present in common tomographic acquisition geometries, is introduced. The method is showcased for the case of cylindrical symmetries by using a polar image discretization of the time-domain optoacoustic forward model combined with efficient storage and inversion strategies. Results: The suggested methodology is shown to render fast and accurate model-based inversions in both numerical simulations and post mortem small animal experiments. In the case of a full-view detection scheme, the memory requirements are reduced by one order of magnitude while high-resolution reconstructions are achieved at video rate. Conclusions: By considering the rotational symmetry present in many tomographic optoacoustic imaging systems, the proposed methodology allows exploiting the advantages of model-based algorithms with feasible computational requirements and fast reconstruction times, so that its convenience and general applicability in optoacoustic imaging systems with tomographic symmetries is anticipated.

  12. Neighborhood fast food restaurants and fast food consumption: A national study

    OpenAIRE

    Richardson, Andrea S; Boone-Heinonen, Janne; Popkin, Barry M; Gordon-Larsen, Penny

    2011-01-01

    Background: Recent studies suggest that neighborhood fast food restaurant availability is related to greater obesity, yet few studies have investigated whether neighborhood fast food restaurant availability promotes fast food consumption. Our aim was to estimate the effect of neighborhood fast food availability on the frequency of fast food consumption in a national sample of young adults, a population at high risk for obesity. Methods: We used national data from U.S. young adults enrolled...

  13. Basic Investigations of Dynamic Travel Time Estimation Model for Traffic Signals Control Using Information from Optical Beacons

    Science.gov (United States)

    Okutani, Iwao; Mitsui, Tatsuro; Nakada, Yusuke

    This paper puts forward neuron-type models, i.e., a neural network model, a wavelet neuron model, and a three-layered wavelet neuron model (WV3), for estimating the travel time between signalized intersections in order to facilitate the adaptive setting of traffic signal parameters such as green time and offset. Model validation tests using simulated data reveal that, compared to the other models, the WV3 model learns very fast and can produce more accurate estimates of travel time. It is also shown that up-link information obtainable from optical beacons, i.e., in this case the travel time observed during the previous cycle, constitutes a crucial input variable to the models: when up-link information is employed as input, there is no substantial difference between the change of estimated and simulated travel time as green time or offset changes, whereas a large discrepancy appears between them when it is not employed.

  14. Optimal Model-Based Fault Estimation and Correction for Particle Accelerators and Industrial Plants Using Combined Support Vector Machines and First Principles Models

    International Nuclear Information System (INIS)

    2010-01-01

    Timely estimation of deviations from optimal performance in complex systems and the ability to identify corrective measures in response to the estimated parameter deviations has been the subject of extensive research over the past four decades. The implications in terms of lost revenue from costly industrial processes, operation of large-scale public works projects and the volume of the published literature on this topic clearly indicates the significance of the problem. Applications range from manufacturing industries (integrated circuits, automotive, etc.), to large-scale chemical plants, pharmaceutical production, power distribution grids, and avionics. In this project we investigated a new framework for building parsimonious models that are suited for diagnosis and fault estimation of complex technical systems. We used Support Vector Machines (SVMs) to model potentially time-varying parameters of a First-Principles (FP) description of the process. The combined SVM and FP model was built (i.e. model parameters were trained) using constrained optimization techniques. We used the trained models to estimate faults affecting simulated beam lifetime. In the case where a large number of process inputs are required for model-based fault estimation, the proposed framework performs an optimal nonlinear principal component analysis of the large-scale input space, and creates a lower dimension feature space in which fault estimation results can be effectively presented to the operation personnel. To fulfill the main technical objectives of the Phase I research, our Phase I efforts have focused on: (1) SVM Training in a Combined Model Structure - We developed the software for the constrained training of the SVMs in a combined model structure, and successfully modeled the parameters of a first-principles model for beam lifetime with support vectors. (2) Higher-order Fidelity of the Combined Model - We used constrained training to ensure that the output of the SVM (i.e. the

  15. Optimal Model-Based Fault Estimation and Correction for Particle Accelerators and Industrial Plants Using Combined Support Vector Machines and First Principles Models

    Energy Technology Data Exchange (ETDEWEB)

    Sayyar-Rodsari, Bijan; Schweiger, Carl; /SLAC /Pavilion Technologies, Inc., Austin, TX

    2010-08-25

    Timely estimation of deviations from optimal performance in complex systems and the ability to identify corrective measures in response to the estimated parameter deviations has been the subject of extensive research over the past four decades. The implications in terms of lost revenue from costly industrial processes, operation of large-scale public works projects and the volume of the published literature on this topic clearly indicates the significance of the problem. Applications range from manufacturing industries (integrated circuits, automotive, etc.), to large-scale chemical plants, pharmaceutical production, power distribution grids, and avionics. In this project we investigated a new framework for building parsimonious models that are suited for diagnosis and fault estimation of complex technical systems. We used Support Vector Machines (SVMs) to model potentially time-varying parameters of a First-Principles (FP) description of the process. The combined SVM & FP model was built (i.e. model parameters were trained) using constrained optimization techniques. We used the trained models to estimate faults affecting simulated beam lifetime. In the case where a large number of process inputs are required for model-based fault estimation, the proposed framework performs an optimal nonlinear principal component analysis of the large-scale input space, and creates a lower dimension feature space in which fault estimation results can be effectively presented to the operation personnel. To fulfill the main technical objectives of the Phase I research, our Phase I efforts have focused on: (1) SVM Training in a Combined Model Structure - We developed the software for the constrained training of the SVMs in a combined model structure, and successfully modeled the parameters of a first-principles model for beam lifetime with support vectors. (2) Higher-order Fidelity of the Combined Model - We used constrained training to ensure that the output of the SVM (i.e. the

  16. A postprocessing method based on high-resolution spectral estimation for FDTD calculation of phononic band structures

    Energy Technology Data Exchange (ETDEWEB)

    Su Xiaoxing, E-mail: xxsu@bjtu.edu.c [School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing 100044 (China); Li Jianbao; Wang Yuesheng [Institute of Engineering Mechanics, Beijing Jiaotong University, Beijing 100044 (China)

    2010-05-15

    If the energy bands of a phononic crystal are calculated by the finite difference time domain (FDTD) method combined with the fast Fourier transform (FFT), good estimation of the eigenfrequencies can only be ensured by postprocessing sufficiently long time series generated by a large number of FDTD iterations. In this paper, a postprocessing method based on high-resolution spectral estimation via the Yule-Walker method is proposed to overcome this difficulty. Numerical simulation results for three-dimensional acoustic and two-dimensional elastic systems show that, compared with the classic FFT-based postprocessing method, the proposed method can give much better estimation of the eigenfrequencies when the FDTD is run with relatively few iterations.
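
    A minimal sketch of the postprocessing idea, using a synthetic two-mode signal in place of a short FDTD record: an autoregressive (Yule-Walker) spectrum can resolve two closely spaced eigenfrequencies where the FFT periodogram of the same short record struggles. The sampling rate, record length, mode frequencies, and AR order are illustrative choices, not values from the paper.

```python
import numpy as np
from scipy.signal import find_peaks
from statsmodels.regression.linear_model import yule_walker

fs = 1000.0
t = np.arange(128) / fs                       # deliberately short record
x = (np.sin(2 * np.pi * 123.0 * t) + 0.8 * np.sin(2 * np.pi * 131.0 * t)
     + 0.05 * np.random.default_rng(4).standard_normal(t.size))

rho, sigma = yule_walker(x, order=24)         # AR(24) model of the record
f = np.linspace(0.0, fs / 2, 4096)
z = np.exp(-2j * np.pi * f / fs)
A = 1 - sum(rho[k] * z ** (k + 1) for k in range(len(rho)))
psd = sigma**2 / np.abs(A) ** 2               # high-resolution AR spectrum

pk, _ = find_peaks(psd)
print(np.sort(f[pk[np.argsort(psd[pk])[-2:]]]))  # should lie near 123 and 131 Hz
# The raw FFT bin width here is fs/128 ~ 7.8 Hz, comparable to the 8 Hz spacing.
```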

  17. A postprocessing method based on high-resolution spectral estimation for FDTD calculation of phononic band structures

    International Nuclear Information System (INIS)

    Su Xiaoxing; Li Jianbao; Wang Yuesheng

    2010-01-01

    If the energy bands of a phononic crystal are calculated by the finite difference time domain (FDTD) method combined with the fast Fourier transform (FFT), good estimation of the eigenfrequencies can only be ensured by the postprocessing of sufficiently long time series generated by a large number of FDTD iterations. In this paper, a postprocessing method based on the high-resolution spectral estimation via the Yule-Walker method is proposed to overcome this difficulty. Numerical simulation results for three-dimensional acoustic and two-dimensional elastic systems show that, compared with the classic FFT-based postprocessing method, the proposed method can give much better estimation of the eigenfrequencies when the FDTD is run with relatively few iterations.

  18. WALS estimation and forecasting in factor-based dynamic models with an application to Armenia

    NARCIS (Netherlands)

    Poghosyan, K.; Magnus, J.R.

    2012-01-01

    Two model averaging approaches are used and compared in estimating and forecasting dynamic factor models, the well-known Bayesian model averaging (BMA) and the recently developed weighted average least squares (WALS). Both methods propose to combine frequentist estimators using Bayesian weights. We

  19. BN800: The advanced sodium cooled fast reactor plant based on closed fuel cycle

    International Nuclear Information System (INIS)

    Wu Xingman

    2011-01-01

    As one of the countries with the most advanced fast reactor technology, Russia has always been at the forefront of the development of fast reactor technology. After successful operation of the BN600 fast reactor nuclear power station, with an electric power capacity of 600 MW, for nearly 30 years, and after several decades of design optimization and improvement completed on its basis, it was finally decided to build Unit 4 of the Beloyarsk nuclear power station (the BN800 fast reactor power station). The BN800 fast reactor nuclear power station is considered the world's most advanced fast reactor nuclear power project under implementation. Fast reactor technology in China has been developed for decades. With the Chinese pilot fast reactor to be put into operation soon, a Chinese model fast reactor power station has been put on the agenda. Meanwhile, the closed fuel cycle development strategy, with the fast reactor as its key aspect, has attracted the attention of experts and decision-makers in relevant areas. Based on the experience accumulated over many years of Sino-Russian cooperation in fast reactor technology, and with reference to the latest authoritative Russian publications on the BN800 fast reactor nuclear power station, the author compiled this comprehensive introduction for reference by leaders and experts working in related fields such as nuclear fuel cycle strategy and fast reactor technology development research. (authors)

  20. Development of models for fast fluid pathways through unsaturated heterogeneous porous media

    International Nuclear Information System (INIS)

    Robey, T.H.

    1994-11-01

    The pre-waste-emplacement ground water travel time requirement is a regulatory criterion that specifies ground water travel time to the accessible environment shall be greater than 1,000 years. Satisfying the ground water travel time criterion for the potential repository at Yucca Mountain requires the study of fast travel path formation in the unsaturated zone and development of models that simulate the formation of fast paths. Conceptual models for unsaturated flow that have been used for total-systems performance assessment generally fall into the categories of composite-porosity or fracture models. The actual hydrologic conditions at Yucca Mountain are thought to lie somewhere between the extremes of these two types of models. The current study considers the effects of heterogeneities on composite-porosity models and seeks to develop numerical methods (and models) that can produce locally saturated zones where fracture flow can occur. The credibility of the model and numerical methods is investigated by using test data from the INTRAVAL project (Swedish Nuclear Inspectorate, 1992) to attempt to predict in-situ volumetric water content at specific locations in Yucca Mountain. Work based on the numerical methods presented in this study is eventually intended to allow the calculation of ground water travel times in heterogeneous media. 60 refs

  1. Conditional shape models for cardiac motion estimation

    DEFF Research Database (Denmark)

    Metz, Coert; Baka, Nora; Kirisli, Hortense

    2010-01-01

    We propose a conditional statistical shape model to predict patient-specific cardiac motion from the 3D end-diastolic CTA scan. The model is built from 4D CTA sequences by combining atlas-based segmentation and 4D registration. Cardiac motion estimation is, for example, relevant in the dynamic

  2. A fuel-based approach to estimating motor vehicle exhaust emissions

    Science.gov (United States)

    Singer, Brett Craig

    Motor vehicles contribute significantly to air pollution problems; accurate motor vehicle emission inventories are therefore essential to air quality planning. Current travel-based inventory models use emission factors measured from potentially biased vehicle samples and predict fleet-average emissions which are often inconsistent with on-road measurements. This thesis presents a fuel-based inventory approach which uses emission factors derived from remote sensing or tunnel-based measurements of on-road vehicles. Vehicle activity is quantified by statewide monthly fuel sales data resolved to the air basin level. Development of the fuel-based approach includes (1) a method for estimating cold start emission factors, (2) an analysis showing that fuel-normalized emission factors are consistent over a range of positive vehicle loads and that most fuel use occurs during loaded-mode driving, (3) scaling factors relating infrared hydrocarbon measurements to total exhaust volatile organic compound (VOC) concentrations, and (4) an analysis showing that economic factors should be considered when selecting on-road sampling sites. The fuel-based approach was applied to estimate carbon monoxide (CO) emissions from warmed-up vehicles in the Los Angeles area in 1991, and CO and VOC exhaust emissions for Los Angeles in 1997. The fuel-based CO estimate for 1991 was higher by a factor of 2.3 +/- 0.5 than emissions predicted by California's MVEI 7F model. Fuel-based inventory estimates for 1997 were higher than those of California's updated MVEI 7G model by factors of 2.4 +/- 0.2 for CO and 3.5 +/- 0.6 for VOC. Fuel-based estimates indicate a 20% decrease in the mass of CO emitted, despite an 8% increase in fuel use between 1991 and 1997; official inventory models predict a 50% decrease in CO mass emissions during the same period. Cold start CO and VOC emission factors derived from parking garage measurements were lower than those predicted by the MVEI 7G model. Current inventories

  3. Response-based estimation of sea state parameters - Influence of filtering

    DEFF Research Database (Denmark)

    Nielsen, Ulrik Dam

    2007-01-01

    Reliable estimation of the on-site sea state parameters is essential to decision support systems for safe navigation of ships. The wave spectrum can be estimated from procedures based on measured ship responses. The paper deals with two procedures—Bayesian Modelling and Parametric Modelling...

  4. Estimation of group means when adjusting for covariates in generalized linear models.

    Science.gov (United States)

    Qu, Yongming; Luo, Junxiang

    2015-01-01

    Generalized linear models are commonly used to analyze categorical data such as binary, count, and ordinal outcomes. Adjusting for important prognostic factors or baseline covariates in generalized linear models may improve the estimation efficiency. The model-based mean for a treatment group produced by most software packages estimates the response at the mean covariate, not the mean response for this treatment group for the studied population. Although this is not an issue for linear models, the model-based group mean estimates in generalized linear models could be seriously biased for the true group means. We propose a new method to estimate the group mean consistently with the corresponding variance estimation. Simulation showed the proposed method produces an unbiased estimator for the group means and provided the correct coverage probability. The proposed method was applied to analyze hypoglycemia data from clinical trials in diabetes. Copyright © 2014 John Wiley & Sons, Ltd.
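
    The distinction the paper addresses can be sketched in a logistic model: the model-based mean evaluated at the mean covariate differs from the population-averaged group mean whenever the link is nonlinear. The simulation below illustrates this gap via marginal standardization (predicting for every subject under each treatment and averaging); it is a sketch of the idea on simulated data, not the authors' exact estimator or its variance formula.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 2000
trt = rng.integers(0, 2, n)                  # treatment group indicator
x = rng.normal(0, 2, n)                      # prognostic baseline covariate
p = 1 / (1 + np.exp(-(-1.0 + 1.2 * trt + 0.9 * x)))
y = rng.binomial(1, p)

X = sm.add_constant(np.column_stack([trt, x]))
fit = sm.GLM(y, X, family=sm.families.Binomial()).fit()

def predict(t, xv):
    Xnew = sm.add_constant(np.column_stack(
        [np.full_like(xv, t, dtype=float), xv]), has_constant='add')
    return fit.predict(Xnew)

for t in (0, 1):
    at_mean = predict(t, np.array([x.mean()]))[0]  # response at the mean covariate
    marginal = predict(t, x).mean()                # mean response over the population
    print(f"group {t}: at-mean-covariate={at_mean:.3f}, marginal={marginal:.3f}")
```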

  5. A Geometrical-Based Model for Cochannel Interference Analysis and Capacity Estimation of CDMA Cellular Systems

    Directory of Open Access Journals (Sweden)

    Konstantinos B. Baltzis

    2008-10-01

    Full Text Available A common assumption in cellular communications is the circular-cell approximation. In this paper, an alternative analysis based on the hexagonal shape of the cells is presented. A geometrical-based stochastic model is proposed to describe the angle of arrival of the interfering signals in the reverse link of a cellular system. Explicit closed-form expressions are derived, and the simulations performed exhibit the characteristics and validate the accuracy of the proposed model. Applications to the capacity estimation of WCDMA cellular networks are presented. The dependence of system capacity on the sectorization of the cells and on the base station antenna radiation pattern is explored. Comparisons with data in the literature validate the accuracy of the proposed model. The degree of error of the hexagonal and circular-cell approaches has been investigated, indicating the validity of the proposed model. Results have also shown that, in many cases, the two approaches give similar results when the radius of the circle equals the hexagon inradius. A brief discussion of how the proposed technique may be applied to broadband access networks is finally given.

  6. Vehicle Sideslip Angle Estimation Based on Hybrid Kalman Filter

    Directory of Open Access Journals (Sweden)

    Jing Li

    2016-01-01

    Full Text Available Vehicle sideslip angle is essential for active safety control systems. This paper presents a new hybrid Kalman filter to estimate the vehicle sideslip angle based on a 3-DoF nonlinear vehicle dynamic model combined with the Magic Formula tire model. The hybrid Kalman filter is realized by combining the square-root cubature Kalman filter (SCKF), which has quick convergence and numerical stability, with the square-root cubature based receding horizon Kalman FIR filter (SCRHKF), which has robustness against model uncertainty and temporary noise. Moreover, the SCKF and SCRHKF work in parallel, and the estimation outputs of the two filters are merged by an interacting multiple model (IMM) approach. Experimental results show the accuracy and robustness of the hybrid Kalman filter.

  7. Nonparametric volatility density estimation for discrete time models

    NARCIS (Netherlands)

    Es, van Bert; Spreij, P.J.C.; Zanten, van J.H.

    2005-01-01

    We consider discrete time models for asset prices with a stationary volatility process. We aim at estimating the multivariate density of this process at a set of consecutive time instants. A Fourier-type deconvolution kernel density estimator based on the logarithm of the squared process is proposed

  8. An Energy-Based Limit State Function for Estimation of Structural Reliability in Shock Environments

    Directory of Open Access Journals (Sweden)

    Michael A. Guthrie

    2013-01-01

    Full Text Available A limit state function is developed for the estimation of structural reliability in shock environments. This limit state function uses peak modal strain energies to characterize environmental severity and modal strain energies at failure to characterize the structural capacity. The Hasofer-Lind reliability index is briefly reviewed and its computation for the energy-based limit state function is discussed. Applications to two-degree-of-freedom mass-spring systems and to a simple finite element model are considered. For these examples, computation of the reliability index requires little effort beyond a modal analysis, but still accounts for relevant uncertainties in both the structure and the environment. For both examples, the reliability index is observed to agree well with the results of Monte Carlo analysis. In situations where fast, qualitative comparison of several candidate designs is required, the reliability index based on the proposed limit state function provides an attractive metric which can be used to compare and control reliability.
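
    For a linear, energy-based limit state with independent normal variables, the Hasofer-Lind index reduces to beta = mu_g / sigma_g, which the sketch below cross-checks against Monte Carlo. The limit state form g = C - (E_1 + E_2) and all statistics are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.stats import norm

mu = np.array([10.0, 3.0, 2.5])   # means of [C, E_1, E_2] (capacity, peak energies)
sd = np.array([1.5, 0.6, 0.5])    # standard deviations, all independent normals
a = np.array([1.0, -1.0, -1.0])   # linear limit state: g = a @ [C, E_1, E_2]

# Hasofer-Lind index, exact for a linear limit state in normal variables
beta = (a @ mu) / np.sqrt(((a * sd) ** 2).sum())
print(f"reliability index beta = {beta:.3f}, Pf ~ {norm.cdf(-beta):.2e}")

# Monte Carlo cross-check of the failure probability P(g < 0)
rng = np.random.default_rng(6)
s = rng.normal(mu, sd, size=(1_000_000, 3))
print("Monte Carlo Pf =", (s @ a < 0).mean())
```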

  9. Fast-track Orthognathic Surgery: An Evidence-based Review

    Science.gov (United States)

    Otero, Joel Joshi; Detriche, Olivier; Mommaerts, Maurice Yves

    2017-01-01

    The aim of this study was to establish a fast-track protocol for bimaxillary orthognathic surgery (OGS). Fast-track surgery (FTS) is a multidisciplinary approach in which the pre-, intra-, and postoperative management focuses maximally on a quick patient recovery and early discharge. To enable this, the patient's presurgical stress and postsurgical discomfort should be maximally reduced. Both referral patterns and expenses within the health-care system are positively influenced by FTS. A university hospital literature review was performed through Medline, Embase, and the Cochrane Library (January 2000–July 2016) using the following words – “fast track, enhanced recovery, multimodal, and perioperative care” – to define an evidence-based protocol for OGS, together with an evidence-based medicine search for every term added to the protocol during the same period. The process has resulted in an OGS protocol that may improve the outcome of the patient through several nonoperative and operative measures, such as preoperative patient education and intra/postoperative measures that should improve overall patient satisfaction, decrease morbidity such as postoperative nausea, headache, dizziness, pain, and intubation discomfort, and shorten hospital stay. A literature review allowed us to fine-tune a fast-track protocol for uncomplicated OGS that can be prospectively studied against currently applied ones. PMID:29264281

  10. A model-based adaptive state of charge estimator for a lithium-ion battery using an improved adaptive particle filter

    International Nuclear Information System (INIS)

    Ye, Min; Guo, Hui; Cao, Binggang

    2017-01-01

    Highlights: • Propose an improved adaptive particle swarm filter method. • The SoC estimation method for the battery based on the adaptive particle swarm filter is presented. • The algorithm is validated by case studies of batteries with different extents of aging. • The effectiveness and applicability of the algorithm are validated using LiPB batteries. - Abstract: Obtaining accurate parameters, state of charge (SoC) and capacity of a lithium-ion battery is crucial for a battery management system, and establishing a battery model online is complex. In addition, the errors and perturbations of the battery model dramatically increase throughout the battery lifetime, making it more challenging to model the battery online. To overcome these difficulties, this paper provides three contributions: (1) To improve the robustness of the adaptive particle filter algorithm, an error analysis method is added to the traditional adaptive particle swarm algorithm. (2) An online adaptive SoC estimator based on the improved adaptive particle filter is presented; this estimator can eliminate the estimation error due to battery degradation and initial SoC errors. (3) The effectiveness of the proposed method is verified using various initial states of lithium nickel manganese cobalt oxide (NMC) cells and lithium-ion polymer (LiPB) batteries. The experimental analysis shows that the maximum errors are less than 1% for both the voltage and SoC estimations and that the convergence time of the SoC estimation decreased to 120 s.
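
    A minimal sketch of particle-filter SoC estimation with a linearized OCV-R cell model follows; the error-analysis and adaptation steps that distinguish the paper's improved filter are omitted, and all cell parameters (OCV curve, resistance, capacity, noise levels) are illustrative, not those of the NMC/LiPB cells.

```python
import numpy as np

rng = np.random.default_rng(7)
Q, R, dt = 2.0 * 3600, 0.05, 1.0             # capacity [As], resistance [ohm], step [s]
ocv = lambda soc: 3.2 + 0.9 * soc            # linearized open-circuit voltage (assumed)

# simulate "measured" terminal voltage during a 1 A discharge
steps, I = 1800, 1.0
soc_true = 0.9 - I * dt * np.arange(steps) / Q
v_meas = ocv(soc_true) - I * R + 0.01 * rng.standard_normal(steps)

Np = 500
particles = rng.uniform(0.2, 1.0, Np)        # deliberately vague initial SoC belief
est = np.empty(steps)
for k in range(steps):
    particles -= I * dt / Q + 1e-4 * rng.standard_normal(Np)  # coulomb-count + noise
    # weight each particle by the likelihood of the measured voltage
    w = np.exp(-0.5 * ((v_meas[k] - (ocv(particles) - I * R)) / 0.01) ** 2)
    w /= w.sum()
    est[k] = particles @ w                   # posterior-mean SoC estimate
    particles = particles[rng.choice(Np, Np, p=w)]            # resample

print(f"final SoC: true={soc_true[-1]:.3f}, estimated={est[-1]:.3f}")
```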

  11. Test models for improving filtering with model errors through stochastic parameter estimation

    International Nuclear Information System (INIS)

    Gershgorin, B.; Harlim, J.; Majda, A.J.

    2010-01-01

    The filtering skill for turbulent signals from nature is often limited by model errors created by utilizing an imperfect model for filtering. Updating the parameters in the imperfect model through stochastic parameter estimation is one way to increase filtering skill and model performance. Here a suite of stringent test models for filtering with stochastic parameter estimation is developed based on the Stochastic Parameterization Extended Kalman Filter (SPEKF). These new SPEKF-algorithms systematically correct both multiplicative and additive biases and involve exact formulas for propagating the mean and covariance including the parameters in the test model. A comprehensive study is presented of robust parameter regimes for increasing filtering skill through stochastic parameter estimation for turbulent signals as the observation time and observation noise are varied and even when the forcing is incorrectly specified. The results here provide useful guidelines for filtering turbulent signals in more complex systems with significant model errors.

  12. Fast Multipole-Based Elliptic PDE Solver and Preconditioner

    KAUST Repository

    Ibeid, Huda

    2016-12-07

    Exascale systems are predicted to have approximately one billion cores, assuming Gigahertz cores. Limitations on affordable network topologies for distributed memory systems of such massive scale bring new challenges to the currently dominant parallel programming model. Currently, there are many efforts to evaluate the hardware and software bottlenecks of exascale designs. It is therefore of interest to model application performance and to understand what changes need to be made to ensure extrapolated scalability. Fast multipole methods (FMM) were originally developed for accelerating N-body problems for particle-based methods in astrophysics and molecular dynamics. FMM is more than an N-body solver, however. Recent efforts to view the FMM as an elliptic PDE solver have opened the possibility to use it as a preconditioner for even a broader range of applications. In this thesis, we (i) discuss the challenges for FMM on current parallel computers and future exascale architectures, with a focus on inter-node communication, and develop a performance model that considers the communication patterns of the FMM for spatially quasi-uniform distributions, (ii) employ this performance model to guide performance and scaling improvement of FMM for all-atom molecular dynamics simulations of uniformly distributed particles, and (iii) demonstrate that, beyond its traditional use as a solver in problems for which explicit free-space kernel representations are available, the FMM has applicability as a preconditioner in finite domain elliptic boundary value problems, by equipping it with boundary integral capability for satisfying conditions at finite boundaries and by wrapping it in a Krylov method for extensibility to more general operators. Compared with multilevel methods, FMM is capable of comparable algebraic convergence rates down to the truncation error of the discretized PDE, and it has superior multicore and distributed memory scalability properties on commodity

  13. Factors Affecting the Consumption of Fast Foods Among Women Based on the Social Cognitive Theory

    Directory of Open Access Journals (Sweden)

    Nooshin Beiranvandpour

    2014-06-01

    Introduction: Fast-food consumption among Iranian families appears to be increasing, probably due to urbanization, the popularization of western-style diets and increased women's labor force participation. Few theory-based investigations have assessed the determinants of fast food consumption. Therefore, the aim of this study was to determine the predictors of fast food consumption, based on the social cognitive theory (SCT), among women referred to health centers in Hamadan, west of Iran. Materials and Methods: This cross-sectional study was conducted using structured self-administered questionnaires on 384 women referred to 10 health centers in Hamadan city, western Iran. Each health center was considered a sampling unit, and a systematic random sampling method was applied to select the health centers. Participants filled in a questionnaire covering SCT constructs, an eight-item food frequency questionnaire, and demographic characteristics. Data were analyzed by independent t-test, one-way ANOVA, and multiple linear regression using SPSS-16. Results: The model could explain 21% of the variance in the frequency of fast food consumption. Outcome expectations (p = 0.04) and availability (p < 0.001) were the significant predictors. The career status of women was the only related demographic characteristic (p < 0.001). Conclusion: Interventions aimed at changing outcome expectations and introducing nutritious alternatives to fast food could be promising in decreasing the rate of fast-food consumption.

  14. A fast fiducial marker tracking model for fully automatic alignment in electron tomography

    KAUST Repository

    Han, Renmin; Zhang, Fa; Gao, Xin

    2017-01-01

    Automatic alignment, especially fiducial marker-based alignment, has become increasingly important due to the high demand for subtomogram averaging and the rapid development of large-field electron microscopy. Among the alignment steps, fiducial marker tracking is a crucial one that determines the quality of the final alignment. Yet, it is still a challenging problem to track the fiducial markers accurately and effectively in a fully automatic manner. In this paper, we propose a robust and efficient scheme for fiducial marker tracking. Firstly, we theoretically prove the upper bound of the transformation deviation of aligning the positions of fiducial markers on two micrographs by an affine transformation. Secondly, we design an automatic algorithm based on the Gaussian mixture model to accelerate the procedure of fiducial marker tracking. Thirdly, we propose a divide-and-conquer strategy against lens distortions to ensure the reliability of our scheme. To our knowledge, this is the first attempt to theoretically relate the projection model with the tracking model. The real-world experimental results further support our theoretical bound and demonstrate the effectiveness of our algorithm. This work facilitates fully automatic tracking for datasets with a massive number of fiducial markers. The C/C++ source code that implements the fast fiducial marker tracking is available at https://github.com/icthrm/gmm-marker-tracking. Markerauto 1.6 version or later (also integrated in the AuTom platform at http://ear.ict.ac.cn/) offers a complete implementation for fast alignment, in which fast fiducial marker tracking is available by the
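
    The flavour of GMM-based marker matching can be conveyed with a soft-assignment affine registration sketch: an EM loop alternates Gaussian responsibilities between two marker sets with a weighted least-squares fit of the affine map. This is a toy stand-in for the paper's algorithm (closer in spirit to coherent-point-drift-style registration); the iteration count, variance update, and synthetic data are assumptions.

```python
# GMM-style soft-assignment affine registration of two 2-D marker sets
# (an illustrative sketch, not the record's tracking algorithm).
import numpy as np

rng = np.random.default_rng(2)

def affine_register(src, dst, n_iter=50):
    """Estimate A (2x2), t (2,) so that src @ A.T + t ~ dst, via an EM loop."""
    A, t = np.eye(2), np.zeros(2)
    sigma2 = np.mean((dst[:, None, :] - src[None, :, :]) ** 2)  # broad start
    for _ in range(n_iter):
        moved = src @ A.T + t
        # E-step: Gaussian responsibilities of source points for each target
        d2 = np.sum((dst[:, None, :] - moved[None, :, :]) ** 2, axis=2)
        R = np.exp(-0.5 * d2 / sigma2)
        R /= R.sum(axis=1, keepdims=True) + 1e-12
        # M-step: weighted least-squares affine fit in homogeneous coordinates
        w = R.sum(axis=0)                              # weight per source point
        X = np.hstack([src, np.ones((len(src), 1))])   # (N, 3)
        Y = (R.T @ dst) / (w[:, None] + 1e-12)         # weighted mean target
        W = np.diag(w + 1e-12)
        M = np.linalg.solve(X.T @ W @ X, X.T @ W @ Y)  # (3, 2)
        A, t = M[:2].T, M[2]
        sigma2 = max(float(np.sum(R * d2) / (2.0 * R.sum())), 1e-6)
    return A, t

# Synthetic markers related by an affine map plus localization noise
src = rng.uniform(0.0, 100.0, (30, 2))
A_true = np.array([[0.9, -0.2], [0.15, 1.1]])
t_true = np.array([5.0, -3.0])
dst = src @ A_true.T + t_true + rng.normal(0.0, 0.3, src.shape)
A_hat, t_hat = affine_register(src, dst)
print("recovered translation:", np.round(t_hat, 2))
```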

  16. Pitch control of wind turbines using model free adaptive control based on wind turbine code

    DEFF Research Database (Denmark)

    Zhang, Yunqian; Chen, Zhe; Cheng, Ming

    2011-01-01

    As the wind turbine is a nonlinear high-order system, to achieve good pitch control performance, a model free adaptive control (MFAC) approach, which does not need a mathematical model of the wind turbine, is adopted in the pitch control system in this paper. A pseudo gradient vector, whose estimated value is based only on I/O data of the wind turbine, is identified, and the wind turbine system is then replaced by a dynamic linear time-varying model. In order to verify the correctness and robustness of the proposed model free adaptive pitch controller, the wind turbine code FAST, which can predict the wind turbine loads and response with high accuracy, is used. The results show that the controller produces good dynamic performance, good robustness and adaptability.
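
    A compact-form MFAC loop is short enough to sketch in full: the pseudo gradient is re-estimated at every step from input/output increments alone, and the control law uses it to track the reference. The toy scalar plant below stands in for the turbine/FAST simulation, and the tuning constants are assumptions.

```python
# Compact-form model-free adaptive control (CFDL-MFAC) sketch for a SISO loop.
# The plant and all gains are illustrative assumptions, not a turbine model.
import numpy as np

rho, lam = 0.6, 1.0     # control step size and penalty
eta, mu = 0.5, 1.0      # pseudo-gradient estimation step size and penalty

def plant(y, u):
    """Toy nonlinear SISO plant (assumption)."""
    return 0.6 * y + 0.5 * np.tanh(u) + 0.1 * u

y_ref = 1.0
y_prev, u_prev, u_prev2 = 0.0, 0.0, 0.0
phi = 1.0               # pseudo-gradient initial guess

y = 0.0
for k in range(200):
    # Update the pseudo-gradient from the latest I/O increments only
    du = u_prev - u_prev2
    dy = y - y_prev
    phi = phi + eta * du * (dy - phi * du) / (mu + du * du)
    if abs(phi) < 1e-5:
        phi = 1.0       # standard reset to keep the estimate well-conditioned
    # MFAC control law: one-step-ahead tracking of y_ref
    u = u_prev + rho * phi * (y_ref - y) / (lam + phi * phi)
    # Step the plant
    y_prev, u_prev2, u_prev = y, u_prev, u
    y = plant(y, u)

print(f"output after 200 steps: {y:.3f} (reference {y_ref})")
```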

  17. Aircraft Engine Thrust Estimator Design Based on GSA-LSSVM

    Science.gov (United States)

    Sheng, Hanlin; Zhang, Tianhong

    2017-08-01

    In view of the need for a highly precise and reliable thrust estimator to achieve direct thrust control of aircraft engines, a GSA-LSSVM-based thrust estimator design is proposed, built on support vector regression (SVR) in the form of the least squares support vector machine (LSSVM) combined with a new optimization algorithm, the gravitational search algorithm (GSA), which performs integrated modelling and parameter optimization. The results show that, compared to the particle swarm optimization (PSO) algorithm, GSA finds the unknown optimization parameters better and yields a model with better prediction and generalization ability. The model can better predict aircraft engine thrust and thus fulfills the need of direct thrust control of aircraft engines.
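
    The combination can be sketched compactly: LSSVM regression reduces to one linear solve, and a bare-bones GSA searches over the two hyperparameters (regularization gamma, kernel width sigma). The synthetic data, population size, iteration counts, and fitness definition below are assumptions, not the record's setup.

```python
# LSSVM regression with RBF kernel, hyperparameters tuned by a minimal
# gravitational search algorithm (an illustrative sketch of GSA-LSSVM).
import numpy as np

rng = np.random.default_rng(3)

def rbf(X1, X2, sigma):
    d2 = np.sum((X1[:, None, :] - X2[None, :, :]) ** 2, axis=2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def lssvm_fit(X, y, gamma, sigma):
    """LSSVM training: a single (n+1) x (n+1) linear solve."""
    n = len(y)
    K = rbf(X, X, sigma)
    A = np.block([[np.zeros((1, 1)), np.ones((1, n))],
                  [np.ones((n, 1)),  K + np.eye(n) / gamma]])
    sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
    return sol[0], sol[1:]                    # bias b, dual weights alpha

def lssvm_predict(X_train, b, alpha, sigma, X_new):
    return rbf(X_new, X_train, sigma) @ alpha + b

def fitness(params, X_tr, y_tr, X_va, y_va):
    gamma, sigma = np.exp(params)             # search in log space, keep positive
    b, alpha = lssvm_fit(X_tr, y_tr, gamma, sigma)
    return np.mean((lssvm_predict(X_tr, b, alpha, sigma, X_va) - y_va) ** 2)

def gsa(obj, dim=2, n_agents=12, n_iter=40, g0=2.0):
    """Bare-bones GSA: fitness-derived masses pull agents toward good regions."""
    X = rng.uniform(-3.0, 3.0, (n_agents, dim))
    V = np.zeros_like(X)
    for t in range(n_iter):
        f = np.array([obj(x) for x in X])
        g = g0 * np.exp(-4.0 * t / n_iter)    # decaying gravitational constant
        m = (f.max() - f) / (f.max() - f.min() + 1e-12)
        M = m / (m.sum() + 1e-12)
        acc = np.zeros_like(X)
        for i in range(n_agents):             # randomized pull from every agent
            diff = X - X[i]
            dist = np.linalg.norm(diff, axis=1) + 1e-9
            acc[i] = np.sum(rng.random((n_agents, 1)) * g
                            * (M[:, None] * diff) / dist[:, None], axis=0)
        V = rng.random(X.shape) * V + acc
        X = np.clip(X + V, -3.5, 3.5)
    f = np.array([obj(x) for x in X])
    return X[np.argmin(f)]

# Synthetic "thrust" data: a smooth nonlinear function of two inputs
X_all = rng.uniform(-1.0, 1.0, (120, 2))
y_all = np.sin(2.0 * X_all[:, 0]) + 0.5 * X_all[:, 1] ** 2 + rng.normal(0, 0.02, 120)
X_tr, y_tr, X_va, y_va = X_all[:80], y_all[:80], X_all[80:], y_all[80:]

best = gsa(lambda p: fitness(p, X_tr, y_tr, X_va, y_va))
print("best (log gamma, log sigma):", np.round(best, 2))
```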

  18. Forest height estimation from mountain forest areas using general model-based decomposition for polarimetric interferometric synthetic aperture radar images

    Science.gov (United States)

    Minh, Nghia Pham; Zou, Bin; Cai, Hongjun; Wang, Chengyi

    2014-01-01

    The estimation of forest parameters over mountain forest areas using polarimetric interferometric synthetic aperture radar (PolInSAR) images is of great interest in remote sensing applications. In mountain forest areas, scattering mechanisms are strongly affected by variations in the ground topography, while most previous studies modeling the microwave backscattering signatures of forest areas have been carried out over relatively flat terrain. Therefore, a new algorithm is proposed for forest height estimation over mountain forest areas using a general model-based decomposition (GMBD) for PolInSAR images. This algorithm enables the retrieval of not only the forest parameters, but also the magnitude associated with each scattering mechanism. In addition, general double- and single-bounce scattering models are proposed to fit the cross-polarization and off-diagonal terms by separating their independent orientation angles, which remained unachieved in previous model-based decompositions. The efficiency of the proposed approach is demonstrated with simulated data from the PolSARProSim software and ALOS-PALSAR spaceborne PolInSAR datasets over the Kalimantan area, Indonesia. Experimental results indicate that forest height can be effectively estimated by GMBD.
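
    While GMBD itself is involved, the basic PolInSAR height-from-coherence step can be illustrated with the classic zero-extinction random-volume ("sinc") inversion: the volume coherence magnitude for a uniform canopy of height hv is |sin(kz*hv/2)/(kz*hv/2)|, which is monotone below the first null and can be inverted by bisection. This is a far simpler model than the record's decomposition; kz and the test height are assumptions.

```python
# Toy coherence-to-height inversion under a zero-extinction uniform-volume
# assumption (a sketch in the PolInSAR spirit, not the record's GMBD).
import numpy as np

def coherence_magnitude(hv, kz):
    """Zero-extinction volume coherence: |sin(kz*hv/2) / (kz*hv/2)|."""
    x = kz * hv / 2.0
    return np.abs(np.sinc(x / np.pi))   # np.sinc(t) = sin(pi t)/(pi t)

def invert_height(gamma_obs, kz):
    """Invert |gamma| for hv by bisection on the monotone branch."""
    lo, hi = 1e-3, 2.0 * np.pi / kz * 0.999   # stay below the first sinc null
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if coherence_magnitude(mid, kz) > gamma_obs:
            lo = mid                    # coherence too high -> canopy taller
        else:
            hi = mid
    return 0.5 * (lo + hi)

kz = 0.1          # assumed vertical wavenumber (rad/m)
h_true = 18.0     # metres
gamma = coherence_magnitude(h_true, kz)
print(f"inverted height: {invert_height(gamma, kz):.1f} m (true {h_true} m)")
```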

  19. A fast infrared radiative transfer model based on the adding-doubling method for hyperspectral remote-sensing applications

    International Nuclear Information System (INIS)

    Zhang Zhibo; Yang Ping; Kattawar, George; Huang, H.-L.; Greenwald, Thomas; Li Jun; Baum, Bryan A.; Zhou, Daniel K.; Hu Yongxiang

    2007-01-01

    A fast infrared radiative transfer (RT) model is developed on the basis of the adding-doubling principle, hereafter referred to as FIRTM-AD, to facilitate the forward RT simulations involved in hyperspectral remote-sensing applications under cloudy-sky conditions. A pre-computed look-up table (LUT) of the bidirectional reflection and transmission functions and emissivities of ice clouds in conjunction with efficient interpolation schemes is used in FIRTM-AD to alleviate the computational burden of the doubling process. FIRTM-AD is applicable to a variety of cloud conditions, including vertically inhomogeneous or multilayered clouds. In particular, this RT model is suitable for the computation of high-spectral-resolution radiance and brightness temperature (BT) spectra at both the top-of-atmosphere and surface, and thus is useful for satellite and ground-based hyperspectral sensors. In terms of computer CPU time, FIRTM-AD is approximately 100-250 times faster than the well-known discrete-ordinate (DISORT) RT model for the same conditions. The errors of FIRTM-AD, specified as root-mean-square (RMS) BT differences with respect to their DISORT counterparts, are generally smaller than 0.1 K
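
    The adding-doubling principle itself is compact, as the scalar two-stream-flavoured sketch below shows: a very thin layer's reflection and transmission are doubled repeatedly, and two layers are combined with the adding equations, whose denominator sums the multiple reflections between them. This only hints at the machinery inside a model such as FIRTM-AD; the thin-layer initialization and all coefficients are assumptions.

```python
# Scalar adding-doubling sketch (non-emitting, azimuthally averaged toy;
# the thin-layer initialization and coefficients are assumptions).
import math

def add_layers(r1, t1, r2, t2):
    """Adding equations; 1/(1 - r1*r2) sums the bounces between the layers."""
    denom = 1.0 - r1 * r2
    return r1 + t1 * r2 * t1 / denom, t1 * t2 / denom

def doubling(tau_total, omega, g=0.0):
    """Reflection/transmission of a homogeneous layer by repeated doubling."""
    n = max(0, math.ceil(math.log2(tau_total / 1e-8)))
    dtau = tau_total / 2 ** n
    # Single-scattering thin-layer initialization (two-stream flavour)
    r = 0.5 * omega * (1.0 - g) * dtau
    t = 1.0 - dtau + 0.5 * omega * (1.0 + g) * dtau
    for _ in range(n):
        r, t = add_layers(r, t, r, t)
    return r, t

r, t = doubling(tau_total=2.0, omega=0.9, g=0.5)
print(f"R = {r:.3f}, T = {t:.3f}, absorbed = {1.0 - r - t:.3f}")
```

    A useful sanity check: with omega = 1 the adding equations conserve r + t = 1 exactly, so any drift signals a bug. A production model like the one described additionally carries thermal emission and full angular dependence, which this scalar toy omits.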

  20. Fatigue Damage Estimation and Data-based Control for Wind Turbines

    DEFF Research Database (Denmark)

    Barradas Berglind, Jose de Jesus; Wisniewski, Rafal; Soltani, Mohsen

    2015-01-01

    The focus of this work is on fatigue estimation and data-based controller design for wind turbines. The main purpose is to include a model of the fatigue damage of the wind turbine components in the controller design and synthesis process. This study addresses an online fatigue estimation method based on hysteresis operators, which can be used in control loops. The authors propose a data-based model predictive control (MPC) strategy that incorporates an online fatigue estimation method through the objective function, where the ultimate goal is to reduce the fatigue damage of the wind turbine components. The outcome is an adaptive or self-tuning MPC strategy for wind turbine fatigue damage reduction, which relies on parameter identification from previous measurement data. The results of the proposed strategy are compared with a baseline model predictive controller.
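
    The fatigue quantity such controllers try to limit is typically a rainflow-counted, Palmgren-Miner damage sum; the hysteresis operators mentioned in the record approximate this online. The sketch below shows the offline reference computation: a simplified three-point rainflow count followed by a Miner sum under an assumed S-N curve (s_ref, n_ref and the exponent m are illustrative).

```python
# Simplified rainflow counting plus Palmgren-Miner damage sum (an offline
# reference sketch; the S-N curve parameters are assumptions).
def turning_points(series):
    """Reduce a load history to its local extrema."""
    tp = [series[0]]
    for x in series[1:]:
        if x == tp[-1]:
            continue
        if len(tp) >= 2 and (tp[-1] - tp[-2]) * (x - tp[-1]) > 0:
            tp[-1] = x              # still rising/falling: extend the excursion
        else:
            tp.append(x)
    return tp

def rainflow(series):
    """Simplified three-point rainflow count; residue counted as half cycles."""
    stack, cycles = [], []          # cycles holds (range, count) pairs
    for x in turning_points(series):
        stack.append(x)
        while len(stack) >= 3:
            rng_x = abs(stack[-2] - stack[-1])
            rng_y = abs(stack[-3] - stack[-2])
            if rng_x < rng_y:
                break
            if len(stack) == 3:     # range includes the start: half cycle
                cycles.append((rng_y, 0.5))
                stack.pop(0)
            else:                   # interior range: one full cycle
                cycles.append((rng_y, 1.0))
                del stack[-3:-1]
    for a, b in zip(stack, stack[1:]):
        cycles.append((abs(a - b), 0.5))
    return cycles

def miner_damage(cycles, s_ref=50.0, n_ref=1e6, m=4.0):
    """Palmgren-Miner sum under an assumed S-N curve N(S) = n_ref*(s_ref/S)^m."""
    return sum(c / (n_ref * (s_ref / max(s, 1e-9)) ** m) for s, c in cycles)

load = [0, 5, -3, 8, -5, 6, -8, 7, 0]
cycles = rainflow(load)
print("cycles (range, count):", cycles)
print(f"Miner damage: {miner_damage(cycles):.3e}")
```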