WorldWideScience

Sample records for fast model-based estimation

  1. Statistical Model-Based Face Pose Estimation

    Institute of Scientific and Technical Information of China (English)

    GE Xinliang; YANG Jie; LI Feng; WANG Huahua

    2007-01-01

    A robust face pose estimation approach is proposed that uses a statistical face shape model and represents the pose parameters by trigonometric functions. The face shape statistical model is first built by analyzing face shapes from different people under varying poses. Shape alignment is vital in the process of building the statistical model. Then, six trigonometric functions are employed to represent the face pose parameters. Lastly, a mapping function between face image and face pose is constructed by linearly relating the different parameters. The proposed approach is able to estimate different face poses using only a few face training samples. Experimental results are provided to demonstrate its efficiency and accuracy.

  2. Fast fundamental frequency estimation

    DEFF Research Database (Denmark)

    Nielsen, Jesper Kjær; Jensen, Tobias Lindstrøm; Jensen, Jesper Rindom

    2017-01-01

    Modelling signals as being periodic is common in many applications. Such periodic signals can be represented by a weighted sum of sinusoids with frequencies being an integer multiple of the fundamental frequency. Due to its widespread use, numerous methods have been proposed to estimate the funda...
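
    As an editorial illustration of the harmonic model described above (not the authors' code; the signal, candidate grid and harmonic count are invented), a crude harmonic-summation estimator scores each candidate fundamental frequency by summing spectral energy at its integer multiples:

      import numpy as np

      def harmonic_summation_f0(x, fs, f0_grid, n_harmonics=5):
          # Score each candidate f0 by the spectral energy at its harmonics.
          spectrum = np.abs(np.fft.rfft(x))**2
          freqs = np.fft.rfftfreq(len(x), d=1.0/fs)
          scores = [sum(spectrum[np.argmin(np.abs(freqs - k*f0))]
                        for k in range(1, n_harmonics + 1))
                    for f0 in f0_grid]
          return f0_grid[int(np.argmax(scores))]

      # Example: a 100 Hz periodic signal with three harmonics in white noise.
      fs = 8000
      t = np.arange(0, 0.5, 1/fs)
      x = sum(np.sin(2*np.pi*100*k*t)/k for k in (1, 2, 3))
      x = x + 0.1*np.random.randn(t.size)
      print(harmonic_summation_f0(x, fs, np.linspace(60, 400, 1000)))  # ~100 Hz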

  3. A model-based approach to estimating forest area

    Science.gov (United States)

    Ronald E. McRoberts

    2006-01-01

    A logistic regression model based on forest inventory plot data and transformations of Landsat Thematic Mapper satellite imagery was used to predict the probability of forest for 15 study areas in Indiana, USA, and 15 in Minnesota, USA. Within each study area, model-based estimates of forest area were obtained for circular areas with radii of 5 km, 10 km, and 15 km and...
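
    As a rough sketch of this kind of estimator (synthetic stand-in data, not the study's inventory plots or imagery), a logistic model can be fit to plot observations and its predicted probabilities summed into a model-based area estimate:

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      # Hypothetical training data: rows are inventory plots, columns stand in
      # for transformed Landsat TM band values; y is 1 for forested plots.
      rng = np.random.default_rng(0)
      X = rng.normal(size=(500, 3))
      y = (X[:, 0] + 0.5*X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

      model = LogisticRegression().fit(X, y)
      p_forest = model.predict_proba(X)[:, 1]   # per-pixel forest probability

      # Model-based area estimate: sum of predicted probabilities times pixel area.
      pixel_area_ha = 0.09                      # one 30 m x 30 m Landsat pixel
      print("Estimated forest area (ha):", p_forest.sum() * pixel_area_ha)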

  4. An Approach to Quality Estimation in Model-Based Development

    DEFF Research Database (Denmark)

    Holmegaard, Jens Peter; Koch, Peter; Ravn, Anders Peter

    2004-01-01

    We present an approach to estimation of parameters for design space exploration in Model-Based Development, where synthesis of a system is done in two stages. Component qualities like space, execution time or power consumption are defined in a repository by platform dependent values. Connectors...

  5. Line impedance estimation using model based identification technique

    DEFF Research Database (Denmark)

    Ciobotaru, Mihai; Agelidis, Vassilios; Teodorescu, Remus

    2011-01-01

    The estimation of the line impedance can be used in the control of numerous grid-connected systems, such as active filters, islanding detection techniques, non-linear current controllers, and detection of the on/off grid operation mode. Therefore, estimating the line impedance can add extra functions...... into the operation of the grid-connected power converters. This paper describes a quasi-passive method for estimating the line impedance of the distribution electricity network. The method uses the model based identification technique to obtain the resistive and inductive parts of the line impedance. The quasi...
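
    For intuition, one way such an identification can be posed (a minimal sketch assuming sampled voltage and current and a series R-L line model, not the paper's exact quasi-passive procedure) is a least-squares fit of v = R*i + L*di/dt:

      import numpy as np

      def estimate_rl(v, i, dt):
          # Least-squares fit of v[k] = R*i[k] + L*(i[k] - i[k-1])/dt.
          di = np.diff(i) / dt
          A = np.column_stack([i[1:], di])
          r, l = np.linalg.lstsq(A, v[1:], rcond=None)[0]
          return r, l

      # Synthetic check with R = 0.5 ohm, L = 2 mH.
      dt = 1e-4
      t = np.arange(0, 0.1, dt)
      i = np.sin(2*np.pi*50*t)
      v = 0.5*i + 2e-3*np.gradient(i, dt) + 1e-4*np.random.randn(t.size)
      print(estimate_rl(v, i, dt))   # approximately (0.5, 0.002)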

  6. Model-based estimation for dynamic cardiac studies using ECT

    International Nuclear Information System (INIS)

    Chiao, P.C.; Rogers, W.L.; Clinthorne, N.H.; Fessler, J.A.; Hero, A.O.

    1994-01-01

    In this paper, the authors develop a strategy for joint estimation of physiological parameters and myocardial boundaries using ECT (Emission Computed Tomography). The authors construct an observation model to relate parameters of interest to the projection data and to account for limited ECT system resolution and measurement noise. The authors then use a maximum likelihood (ML) estimator to jointly estimate all the parameters directly from the projection data without reconstruction of intermediate images. The authors also simulate myocardial perfusion studies based on a simplified heart model to evaluate the performance of the model-based joint ML estimator and compare this performance to the Cramer-Rao lower bound. Finally, model assumptions and potential uses of the joint estimation strategy are discussed

  7. Model-based estimation for dynamic cardiac studies using ECT.

    Science.gov (United States)

    Chiao, P C; Rogers, W L; Clinthorne, N H; Fessler, J A; Hero, A O

    1994-01-01

    The authors develop a strategy for joint estimation of physiological parameters and myocardial boundaries using ECT (emission computed tomography). They construct an observation model to relate parameters of interest to the projection data and to account for limited ECT system resolution and measurement noise. The authors then use a maximum likelihood (ML) estimator to jointly estimate all the parameters directly from the projection data without reconstruction of intermediate images. They also simulate myocardial perfusion studies based on a simplified heart model to evaluate the performance of the model-based joint ML estimator and compare this performance to the Cramer-Rao lower bound. Finally, the authors discuss model assumptions and potential uses of the joint estimation strategy.

  8. Estimation of pump operational state with model-based methods

    International Nuclear Information System (INIS)

    Ahonen, Tero; Tamminen, Jussi; Ahola, Jero; Viholainen, Juha; Aranto, Niina; Kestilae, Juha

    2010-01-01

    Pumps are widely used in industry, and they account for 20% of the industrial electricity consumption. Since the speed variation is often the most energy-efficient method to control the head and flow rate of a centrifugal pump, frequency converters are used with induction motor-driven pumps. Although a frequency converter can estimate the operational state of an induction motor without external measurements, the state of a centrifugal pump or other load machine is not typically considered. The pump is, however, usually controlled on the basis of the required flow rate or output pressure. As the pump operational state can be estimated with a general model having adjustable parameters, external flow rate or pressure measurements are not necessary to determine the pump flow rate or output pressure. Hence, external measurements could be replaced with an adjustable model for the pump that uses estimates of the motor operational state. Besides control purposes, modelling the pump operation can provide useful information for energy auditing and optimization purposes. In this paper, two model-based methods for pump operation estimation are presented. Factors affecting the accuracy of the estimation methods are analyzed. The applicability of the methods is verified by laboratory measurements and tests in two pilot installations. Test results indicate that the estimation methods can be applied to the analysis and control of pump operation. The accuracy of the methods is sufficient for auditing purposes, and the methods can inform the user if the pump is driven inefficiently.
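
    A minimal sketch of the idea (the QP-curve coefficients below are invented; a real curve comes from the pump manufacturer or from identification measurements) estimates the flow rate from motor speed and shaft torque via the affinity laws:

      import numpy as np

      n0 = 1450.0                          # nominal speed, rpm
      qp = np.poly1d([0.002, 0.05, 3.0])   # shaft power P(Q) in kW, Q in m^3/h

      def estimate_flow(n, torque):
          # Estimate flow rate from speed (rpm) and shaft torque (Nm).
          p_shaft = torque * n * 2*np.pi/60 / 1000   # shaft power, kW
          p_at_n0 = p_shaft * (n0/n)**3              # scale to nominal speed
          roots = (qp - p_at_n0).roots               # solve P(Q) = p_at_n0
          q_n0 = max(r.real for r in roots if abs(r.imag) < 1e-9)
          return q_n0 * (n/n0)                       # scale flow back

      print(estimate_flow(n=1200.0, torque=40.0))    # flow estimate, m^3/h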

  9. Model-based Small Area Estimates of Cancer Risk Factors and Screening Behaviors - Small Area Estimates

    Science.gov (United States)

    These model-based estimates use two surveys, the Behavioral Risk Factor Surveillance System (BRFSS) and the National Health Interview Survey (NHIS). The two surveys are combined using novel statistical methodology.

  10. Fast and Statistically Efficient Fundamental Frequency Estimation

    DEFF Research Database (Denmark)

    Nielsen, Jesper Kjær; Jensen, Tobias Lindstrøm; Jensen, Jesper Rindom

    2016-01-01

    Fundamental frequency estimation is a very important task in many applications involving periodic signals. For computational reasons, fast autocorrelation-based estimation methods are often used despite parametric estimation methods having superior estimation accuracy. However, these parametric...... a recursive solver. Via benchmarks, we demonstrate that the computation time is reduced by approximately two orders of magnitude. The proposed fast algorithm is available for download online....

  11. Small Area Model-Based Estimators Using Big Data Sources

    Directory of Open Access Journals (Sweden)

    Marchetti Stefano

    2015-06-01

    The timely, accurate monitoring of social indicators, such as poverty or inequality, on a fine-grained spatial and temporal scale is a crucial tool for understanding social phenomena and policymaking, but poses a great challenge to official statistics. This article argues that an interdisciplinary approach, combining the body of statistical research in small area estimation with the body of research in social data mining based on Big Data, can provide novel means to tackle this problem successfully. Big Data derived from the digital crumbs that humans leave behind in their daily activities are in fact providing ever more accurate proxies of social life. Social data mining from these data, coupled with advanced model-based techniques for fine-grained estimates, has the potential to provide a novel microscope through which to view and understand social complexity. This article suggests three ways to use Big Data together with small area estimation techniques, and shows how Big Data has the potential to mirror aspects of well-being and other socioeconomic phenomena.

  12. Design of Model-based Controller with Disturbance Estimation in Steer-by-wire System

    Directory of Open Access Journals (Sweden)

    Jung Sanghun

    2018-01-01

    The steer-by-wire system is a next-generation steering control technology that has been actively studied because it has many advantages, such as fast response, space efficiency due to the removal of redundant mechanical elements, and high connectivity with vehicle chassis control functions such as active steering. A steer-by-wire system is subject to disturbances composed of tire friction torque and self-aligning torque. These disturbances vary widely with changes in weight or friction coefficient, so disturbance compensation logic is strongly required to obtain the desired performance. This paper proposes a model-based controller with disturbance compensation to achieve robust control performance. The targeted steer-by-wire system is identified through experiments and a system identification method, and the model-based controller is designed using the identified plant model. The disturbance of the targeted steer-by-wire system is estimated using a disturbance observer (DOB), and the estimated disturbance is compensated in the control input. Experiments under various scenarios are conducted to validate the robust performance of the proposed model-based controller.
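
    A minimal discrete-time DOB sketch (an invented first-order plant and gain, not the identified steer-by-wire model) shows the basic mechanism: invert the nominal model to get a raw disturbance sample, then low-pass filter it:

      import numpy as np

      class DisturbanceObserver:
          # Minimal DOB for the nominal model x[k+1] = a*x[k] + b*(u[k] + d[k]).
          def __init__(self, a, b, gain):
              self.a, self.b, self.g = a, b, gain
              self.d_hat, self.x_prev = 0.0, 0.0

          def update(self, x_meas, u_prev):
              # Invert the nominal model, then low-pass filter (the Q-filter).
              d_raw = (x_meas - self.a*self.x_prev - self.b*u_prev) / self.b
              self.d_hat += self.g * (d_raw - self.d_hat)
              self.x_prev = x_meas
              return self.d_hat

      dob = DisturbanceObserver(a=0.95, b=0.1, gain=0.2)
      # In the control loop: u = u_model_based - dob.update(x_meas, u_applied)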

  13. A Dynamic Travel Time Estimation Model Based on Connected Vehicles

    Directory of Open Access Journals (Sweden)

    Daxin Tian

    2015-01-01

    With advances in connected vehicle technology, dynamic vehicle route guidance models gradually become indispensable equipment for drivers. Traditional route guidance models are designed to direct a vehicle along the shortest path from the origin to the destination without considering the dynamic traffic information. In this paper a dynamic travel time estimation model is presented which can collect and distribute traffic data based on the connected vehicles. To estimate the real-time travel time more accurately, a road link dynamic dividing algorithm is proposed. The efficiency of the model is confirmed by simulations, and the experiment results prove the effectiveness of the travel time estimation method.

  14. Sparse estimation of model-based diffuse thermal dust emission

    Science.gov (United States)

    Irfan, Melis O.; Bobin, Jérôme

    2018-03-01

    Component separation for the Planck High Frequency Instrument (HFI) data is primarily concerned with the estimation of thermal dust emission, which requires the separation of thermal dust from the cosmic infrared background (CIB). For that purpose, current estimation methods rely on filtering techniques to decouple thermal dust emission from CIB anisotropies, which tend to yield a smooth, low-resolution, estimation of the dust emission. In this paper, we present a new parameter estimation method, premise: Parameter Recovery Exploiting Model Informed Sparse Estimates. This method exploits the sparse nature of thermal dust emission to calculate all-sky maps of thermal dust temperature, spectral index, and optical depth at 353 GHz. premise is evaluated and validated on full-sky simulated data. We find the percentage difference between the premise results and the true values to be 2.8, 5.7, and 7.2 per cent at the 1σ level across the full sky for thermal dust temperature, spectral index, and optical depth at 353 GHz, respectively. A comparison between premise and a GNILC-like method over selected regions of our sky simulation reveals that both methods perform comparably within high signal-to-noise regions. However, outside of the Galactic plane, premise is seen to outperform the GNILC-like method with increasing success as the signal-to-noise ratio worsens.

  15. Model-Based Estimation of Ankle Joint Stiffness

    Directory of Open Access Journals (Sweden)

    Berno J. E. Misgeld

    2017-03-01

    We address the estimation of biomechanical parameters with wearable measurement technologies. In particular, we focus on the estimation of sagittal plane ankle joint stiffness in dorsiflexion/plantar flexion. For this estimation, a novel nonlinear biomechanical model of the lower leg was formulated that is driven by electromyographic signals. The model incorporates a two-dimensional kinematic description in the sagittal plane for the calculation of muscle lever arms and torques. To reduce estimation errors due to model uncertainties, a filtering algorithm is necessary that employs segmental orientation sensor measurements. Because of the model's inherent nonlinearities and nonsmooth dynamics, a square-root cubature Kalman filter was developed. The performance of the novel estimation approach was evaluated in silico and in an experimental procedure. The experimental study was conducted with body-worn sensors and a test-bench that was specifically designed to obtain reference angle and torque measurements for a single joint. Results show that the filter is able to reconstruct joint angle positions, velocities and torque, as well as joint stiffness during experimental test bench movements.
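
    The cubature rule at the heart of such a filter can be sketched as below (a plain, non-square-root time update with an invented two-state model; the square-root form used in the paper instead propagates a Cholesky factor of the covariance for numerical robustness):

      import numpy as np

      def ckf_predict(x, P, f, Q):
          # One cubature Kalman filter time update: propagate 2n cubature
          # points through the (possibly nonlinear, nonsmooth) dynamics f.
          n = x.size
          S = np.linalg.cholesky(P)
          xi = np.sqrt(n) * np.hstack([np.eye(n), -np.eye(n)])
          pts = x[:, None] + S @ xi                  # the 2n cubature points
          prop = np.array([f(p) for p in pts.T]).T
          x_pred = prop.mean(axis=1)
          dev = prop - x_pred[:, None]
          P_pred = dev @ dev.T / (2*n) + Q
          return x_pred, P_pred

      # Example with a mildly nonlinear two-state model (illustrative only).
      f = lambda p: np.array([p[0] + 0.01*p[1], p[1] - 0.1*np.sin(p[0])])
      print(ckf_predict(np.zeros(2), np.eye(2), f, 0.01*np.eye(2)))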

  16. Model-Based Estimation of Ankle Joint Stiffness.

    Science.gov (United States)

    Misgeld, Berno J E; Zhang, Tony; Lüken, Markus J; Leonhardt, Steffen

    2017-03-29

    We address the estimation of biomechanical parameters with wearable measurement technologies. In particular, we focus on the estimation of sagittal plane ankle joint stiffness in dorsiflexion/plantar flexion. For this estimation, a novel nonlinear biomechanical model of the lower leg was formulated that is driven by electromyographic signals. The model incorporates a two-dimensional kinematic description in the sagittal plane for the calculation of muscle lever arms and torques. To reduce estimation errors due to model uncertainties, a filtering algorithm is necessary that employs segmental orientation sensor measurements. Because of the model's inherent nonlinearities and nonsmooth dynamics, a square-root cubature Kalman filter was developed. The performance of the novel estimation approach was evaluated in silico and in an experimental procedure. The experimental study was conducted with body-worn sensors and a test-bench that was specifically designed to obtain reference angle and torque measurements for a single joint. Results show that the filter is able to reconstruct joint angle positions, velocities and torque, as well as joint stiffness during experimental test bench movements.

  17. Model-Based Estimation of Ankle Joint Stiffness

    Science.gov (United States)

    Misgeld, Berno J. E.; Zhang, Tony; Lüken, Markus J.; Leonhardt, Steffen

    2017-01-01

    We address the estimation of biomechanical parameters with wearable measurement technologies. In particular, we focus on the estimation of sagittal plane ankle joint stiffness in dorsiflexion/plantar flexion. For this estimation, a novel nonlinear biomechanical model of the lower leg was formulated that is driven by electromyographic signals. The model incorporates a two-dimensional kinematic description in the sagittal plane for the calculation of muscle lever arms and torques. To reduce estimation errors due to model uncertainties, a filtering algorithm is necessary that employs segmental orientation sensor measurements. Because of the model's inherent nonlinearities and nonsmooth dynamics, a square-root cubature Kalman filter was developed. The performance of the novel estimation approach was evaluated in silico and in an experimental procedure. The experimental study was conducted with body-worn sensors and a test-bench that was specifically designed to obtain reference angle and torque measurements for a single joint. Results show that the filter is able to reconstruct joint angle positions, velocities and torque, as well as joint stiffness during experimental test bench movements. PMID:28353683

  18. Model-based state estimator for an intelligent tire

    NARCIS (Netherlands)

    Goos, J.; Teerhuis, A. P.; Schmeitz, A. J.C.; Besselink, I.; Nijmeijer, H.

    2017-01-01

    In this work a Tire State Estimator (TSE) is developed and validated using data from a tri-axial accelerometer, installed at the inner liner of the tire. The Flexible Ring Tire (FRT) model is proposed to calculate the tire deformation. For a rolling tire, this deformation is transformed into

  19. Model-based State Estimator for an Intelligent Tire

    NARCIS (Netherlands)

    Goos, J.; Teerhuis, A.P.; Schmeitz, A.J.C.; Besselink, I.J.M.; Nijmeijer, H.

    2016-01-01

    In this work a Tire State Estimator (TSE) is developed and validated using data from a tri-axial accelerometer, installed at the inner liner of the tire. The Flexible Ring Tire (FRT) model is proposed to calculate the tire deformation. For a rolling tire, this deformation is transformed into

  20. Data Sources for the Model-based Small Area Estimates of Cancer Risk Factors and Screening Behaviors - Small Area Estimates

    Science.gov (United States)

    The model-based estimates of important cancer risk factors and screening behaviors are obtained by combining the responses to the Behavioral Risk Factor Surveillance System (BRFSS) and the National Health Interview Survey (NHIS).

  21. Data Sources for the Model-based Small Area Estimates of Cancer-Related Knowledge - Small Area Estimates

    Science.gov (United States)

    The model-based estimates of important cancer risk factors and screening behaviors are obtained by combining the responses to the Behavioral Risk Factor Surveillance System (BRFSS) and the National Health Interview Survey (NHIS).

  22. Model Based Optimal Control, Estimation, and Validation of Lithium-Ion Batteries

    Science.gov (United States)

    Perez, Hector Eduardo

    notion of interval observers to PDE models using a sensitivity-based approach. Practically, this chapter quantifies the sensitivity of battery state estimates to parameter variations, enabling robust battery management schemes. The effectiveness of the proposed sensitivity-based interval observers is verified via a numerical study for the range of uncertain parameters. Chapter 4: This chapter seeks to derive insight on battery charging control using electrochemistry models. Directly using full order complex multi-partial differential equation (PDE) electrochemical battery models is difficult and sometimes impossible to implement. This chapter develops an approach for obtaining optimal charge control schemes, while ensuring safety through constraint satisfaction. An optimal charge control problem is mathematically formulated via a coupled reduced order electrochemical-thermal model which conserves key electrochemical and thermal state information. The Legendre-Gauss-Radau (LGR) pseudo-spectral method with adaptive multi-mesh-interval collocation is employed to solve the resulting nonlinear multi-state optimal control problem. Minimum time charge protocols are analyzed in detail subject to solid and electrolyte phase concentration constraints, as well as temperature constraints. The optimization scheme is examined using different input current bounds, and an insight on battery design for fast charging is provided. Experimental results are provided to compare the tradeoffs between an electrochemical-thermal model based optimal charge protocol and a traditional charge protocol. Chapter 5: Fast and safe charging protocols are crucial for enhancing the practicality of batteries, especially for mobile applications such as smartphones and electric vehicles. This chapter proposes an innovative approach to devising optimally health-conscious fast-safe charge protocols. A multi-objective optimal control problem is mathematically formulated via a coupled electro

  23. Evaluation of Model Based State of Charge Estimation Methods for Lithium-Ion Batteries

    Directory of Open Access Journals (Sweden)

    Zhongyue Zou

    2014-08-01

    Four model-based State of Charge (SOC) estimation methods for lithium-ion (Li-ion) batteries are studied and evaluated in this paper. Unlike existing literature, this work evaluates different aspects of the SOC estimation, such as the estimation error distribution, the estimation rise time, and the estimation time consumption. The equivalent model of the battery is introduced and the state function of the model is deduced. The four model-based SOC estimation methods are analyzed first. Simulations and experiments are then established to evaluate the four methods. Urban dynamometer driving schedule (UDDS) current profiles are applied to simulate the drive situations of an electrified vehicle, and a genetic algorithm is utilized to identify the model parameters to find the optimal parameters of the model of the Li-ion battery. The simulations with and without disturbance are carried out and the results are analyzed. A battery test workbench is established and a Li-ion battery is used in a hardware-in-the-loop experiment. Experimental results are plotted and analyzed according to the four aspects to evaluate the four model-based SOC estimation methods.
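
    For readers unfamiliar with these estimators, a minimal EKF-based SOC sketch on a first-order Thevenin model (all parameters and the linear OCV curve are invented for illustration; real values come from characterization tests) looks roughly like this:

      import numpy as np

      # Hypothetical first-order Thevenin parameters.
      R0, R1, C1, Cn, dt = 0.05, 0.015, 2400.0, 2.0*3600, 1.0
      ocv  = lambda s: 3.4 + 0.8*s           # toy linear OCV(SOC) curve
      docv = lambda s: 0.8                   # its slope

      a1 = np.exp(-dt/(R1*C1))
      A  = np.diag([1.0, a1])
      Q, R = np.diag([1e-8, 1e-6]), 1e-4

      x, P = np.array([0.9, 0.0]), np.diag([0.01, 1e-4])   # [SOC, RC voltage]

      def ekf_soc_step(i_meas, v_meas):
          # One EKF step; i_meas > 0 means discharge current (A).
          global x, P
          # Predict with the Thevenin state equations.
          x = np.array([x[0] - dt*i_meas/Cn, a1*x[1] + R1*(1 - a1)*i_meas])
          P = A @ P @ A.T + Q
          # Update with the terminal voltage v = OCV(SOC) - u1 - R0*i.
          H = np.array([docv(x[0]), -1.0])
          v_pred = ocv(x[0]) - x[1] - R0*i_meas
          K = P @ H / (H @ P @ H + R)
          x = x + K*(v_meas - v_pred)
          P = (np.eye(2) - np.outer(K, H)) @ P
          return x[0]

      print(ekf_soc_step(1.0, 4.05))   # updated SOC estimate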

  24. Methodology for the Model-based Small Area Estimates of Cancer Risk Factors and Screening Behaviors - Small Area Estimates

    Science.gov (United States)

    This model-based approach uses data from both the Behavioral Risk Factor Surveillance System (BRFSS) and the National Health Interview Survey (NHIS) to produce estimates of the prevalence rates of cancer risk factors and screening behaviors at the state, health service area, and county levels.

  25. Correlation between the model accuracy and model-based SOC estimation

    International Nuclear Information System (INIS)

    Wang, Qianqian; Wang, Jiao; Zhao, Pengju; Kang, Jianqiang; Yan, Few; Du, Changqing

    2017-01-01

    State-of-charge (SOC) estimation is a core technology for battery management systems. Considerable progress has been achieved in the study of SOC estimation algorithms, especially algorithms based on the Kalman filter, to meet the increasing demand of model-based battery management systems. The Kalman filter weakens the influence of white noise and initial error during SOC estimation but cannot eliminate the inherent error of the battery model itself. As such, the accuracy of SOC estimation is directly related to the accuracy of the battery model. Thus far, the quantitative relationship between model accuracy and model-based SOC estimation remains unknown. This study summarizes three equivalent circuit lithium-ion battery models, namely, the Thevenin, PNGV, and DP models. The model parameters are identified through a hybrid pulse power characterization test. The three models are evaluated, and SOC estimation conducted with the EKF-Ah method under three operating conditions is quantitatively studied. The regression and correlation between the model error and the SOC estimation error are studied and compared in terms of the standard deviation and normalized RMSE. These parameters exhibit a strong linear relationship. Results indicate that the model accuracy affects the SOC estimation accuracy mainly in two ways: the dispersion of the frequency distribution of the error and the overall level of the error. On the basis of the relationship between model error and SOC estimation error, our study provides a strategy for selecting a suitable cell model to meet the requirements of SOC precision using the Kalman filter.
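
    The kind of regression and correlation analysis described can be reproduced in a few lines (the error values below are invented placeholders, not the study's data):

      import numpy as np

      def nrmse(y_true, y_est):
          # Normalized root-mean-square error.
          return np.sqrt(np.mean((y_est - y_true)**2)) / (y_true.max() - y_true.min())

      # Hypothetical per-condition error summaries (one value per model/test).
      model_nrmse = np.array([0.012, 0.020, 0.031, 0.045, 0.052])
      soc_nrmse   = np.array([0.008, 0.015, 0.021, 0.033, 0.036])

      slope, intercept = np.polyfit(model_nrmse, soc_nrmse, 1)
      r = np.corrcoef(model_nrmse, soc_nrmse)[0, 1]
      print(f"SOC error ~ {slope:.2f} * model error + {intercept:.4f}, r = {r:.3f}")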

  26. Maximum profile likelihood estimation of differential equation parameters through model based smoothing state estimates.

    Science.gov (United States)

    Campbell, D A; Chkrebtii, O

    2013-12-01

    Statistical inference for biochemical models often faces a variety of characteristic challenges. In this paper we examine state and parameter estimation for the JAK-STAT intracellular signalling mechanism, which exemplifies the implementation intricacies common in many biochemical inference problems. We introduce an extension to the Generalized Smoothing approach for estimating delay differential equation models, addressing selection of complexity parameters, choice of the basis system, and appropriate optimization strategies. Motivated by the JAK-STAT system, we further extend the generalized smoothing approach to consider a nonlinear observation process with additional unknown parameters, and highlight how the approach handles unobserved states and unevenly spaced observations. The methodology developed is generally applicable to problems of estimation for differential equation models with delays, unobserved states, nonlinear observation processes, and partially observed histories. Crown Copyright © 2013. Published by Elsevier Inc. All rights reserved.

  27. A Fast Iterative Bayesian Inference Algorithm for Sparse Channel Estimation

    DEFF Research Database (Denmark)

    Pedersen, Niels Lovmand; Manchón, Carles Navarro; Fleury, Bernard Henri

    2013-01-01

    representation of the Bessel K probability density function; a highly efficient, fast iterative Bayesian inference method is then applied to the proposed model. The resulting estimator outperforms other state-of-the-art Bayesian and non-Bayesian estimators, either by yielding lower mean squared estimation error...

  28. Autoregressive-model-based missing value estimation for DNA microarray time series data.

    Science.gov (United States)

    Choong, Miew Keen; Charbit, Maurice; Yan, Hong

    2009-01-01

    Missing value estimation is important in DNA microarray data analysis. A number of algorithms have been developed to solve this problem, but they have several limitations. Most existing algorithms are not able to deal with the situation where a particular time point (column) of the data is missing entirely. In this paper, we present an autoregressive-model-based missing value estimation method (ARLSimpute) that takes into account the dynamic property of microarray temporal data and the local similarity structures in the data. ARLSimpute is especially effective for the situation where a particular time point contains many missing values or where the entire time point is missing. Experiment results suggest that our proposed algorithm is an accurate missing value estimator in comparison with other imputation methods on simulated as well as real microarray time series datasets.
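
    A minimal sketch of the general idea (a plain Yule-Walker AR fit predicting one missing sample; ARLSimpute itself is more elaborate, combining local similarity across genes with least squares):

      import numpy as np

      def fit_ar(x, order):
          # Yule-Walker estimate of AR coefficients from a complete series.
          x = x - x.mean()
          acf = np.array([np.dot(x[:x.size - k], x[k:]) for k in range(order + 1)])
          acf = acf / acf[0]
          R = np.array([[acf[abs(i - j)] for j in range(order)] for i in range(order)])
          return np.linalg.solve(R, acf[1:order + 1])

      def impute_next(history, coeffs):
          # Predict a missing value from the p preceding (demeaned) samples.
          p = len(coeffs)
          return float(np.dot(coeffs, history[-1:-p - 1:-1]))

      # Toy expression time series with a missing final time point.
      t = np.arange(20)
      series = np.sin(0.4*t) + 0.05*np.random.randn(20)
      mu = series[:-1].mean()
      coeffs = fit_ar(series[:-1], order=3)
      est = impute_next(series[:-1] - mu, coeffs) + mu
      print("imputed:", est, "actual:", series[-1])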

  29. Model-based estimation with boundary side information or boundary regularization

    International Nuclear Information System (INIS)

    Chiao, P.C.; Rogers, W.L.; Fessler, J.A.; Clinthorne, N.H.; Hero, A.O.

    1994-01-01

    The authors have previously developed a model-based strategy for joint estimation of myocardial perfusion and boundaries using ECT (Emission Computed Tomography). The authors have also reported difficulties with boundary estimation in low contrast and low count rate situations. In this paper, the authors propose using boundary side information (obtainable from high resolution MRI and CT images) or boundary regularization to improve both perfusion and boundary estimation in these situations. To fuse boundary side information into the emission measurements, the authors formulate a joint log-likelihood function to include auxiliary boundary measurements as well as ECT projection measurements. In addition, the authors introduce registration parameters to align auxiliary boundary measurements with ECT measurements and jointly estimate these parameters with other parameters of interest from the composite measurements. In simulated PET O-15 water myocardial perfusion studies using a simplified model, the authors show that the joint estimation improves perfusion estimation performance and gives boundary alignment accuracy of <0.5 mm even at 0.2 million counts. The authors implement boundary regularization through formulating a penalized log-likelihood function. The authors also demonstrate in simulations that simultaneous regularization of the epicardial boundary and myocardial thickness gives comparable perfusion estimation accuracy with the use of boundary side information

  30. Model-based estimation with boundary side information or boundary regularization [cardiac emission CT].

    Science.gov (United States)

    Chiao, P C; Rogers, W L; Fessler, J A; Clinthorne, N H; Hero, A O

    1994-01-01

    The authors have previously developed a model-based strategy for joint estimation of myocardial perfusion and boundaries using ECT (emission computed tomography). They have also reported difficulties with boundary estimation in low contrast and low count rate situations. Here they propose using boundary side information (obtainable from high resolution MRI and CT images) or boundary regularization to improve both perfusion and boundary estimation in these situations. To fuse boundary side information into the emission measurements, the authors formulate a joint log-likelihood function to include auxiliary boundary measurements as well as ECT projection measurements. In addition, they introduce registration parameters to align auxiliary boundary measurements with ECT measurements and jointly estimate these parameters with other parameters of interest from the composite measurements. In simulated PET O-15 water myocardial perfusion studies using a simplified model, the authors show that the joint estimation improves perfusion estimation performance and gives boundary alignment accuracy of <0.5 mm even at 0.2 million counts. They implement boundary regularization through formulating a penalized log-likelihood function. They also demonstrate in simulations that simultaneous regularization of the epicardial boundary and myocardial thickness gives comparable perfusion estimation accuracy with the use of boundary side information.

  31. Uncertainty Representation and Interpretation in Model-Based Prognostics Algorithms Based on Kalman Filter Estimation

    Science.gov (United States)

    Galvan, Jose Ramon; Saxena, Abhinav; Goebel, Kai Frank

    2012-01-01

    This article discusses several aspects of uncertainty representation and management for model-based prognostics methodologies based on our experience with Kalman filters when applied to prognostics for electronics components. In particular, it explores the implications of modeling remaining useful life prediction as a stochastic process, and how it relates to uncertainty representation, management, and the role of prognostics in decision-making. A distinction between the interpretations of the estimated remaining useful life probability density function is explained, and a cautionary argument is provided against mixing the two interpretations when prognostics is considered in making critical decisions.

  32. Model-Based Load Estimation for Predictive Condition Monitoring of Wind Turbines

    DEFF Research Database (Denmark)

    Perisic, Nevena; Pederen, Bo Juul; Grunnet, Jacob Deleuran

    signal is performed online, and a Load Indicator Signal (LIS) is formulated as a ratio between current estimated accumulated fatigue loads and its expected value based only on a priori knowledge (WTG dynamics and wind climate). LOT initialisation is based on a priori knowledge and can be obtained using...... programme for pre-maintenance actions. The performance of LOT is demonstrated by applying it to one of the most critical WTG components, the gearbox. Model-based load CMS for gearbox requires only standard WTG SCADA data. Direct measuring of gearbox fatigue loads requires high cost and low reliability...... measurement equipment. Thus, LOT can significantly reduce the price of load monitoring....

  33. CHIRP-Like Signals: Estimation, Detection and Processing - A Sequential Model-Based Approach

    Energy Technology Data Exchange (ETDEWEB)

    Candy, J. V. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-08-04

    Chirp signals have evolved primarily from radar/sonar signal processing applications, specifically attempting to estimate the location of a target in a surveillance/tracking volume. The chirp, which is essentially a sinusoidal signal whose phase changes instantaneously at each time sample, has an interesting property in that its correlation approximates an impulse function. It is well-known that a matched-filter detector in radar/sonar estimates the target range by cross-correlating a replica of the transmitted chirp with the measurement data reflected from the target back to the radar/sonar receiver, yielding a maximum peak corresponding to the echo time and therefore enabling the desired range estimate. In this application, we perform the same operation as a radar or sonar system, that is, we transmit a “chirp-like pulse” into the target medium and attempt to first detect its presence and second estimate its location or range. Our problem is complicated by the presence of disturbance signals from surrounding broadcast stations as well as extraneous sources of interference in our frequency bands and, of course, the ever-present random noise from instrumentation. First, we discuss the chirp signal itself and illustrate its inherent properties, and then develop a model-based processing scheme enabling both the detection and estimation of the signal from noisy measurement data.
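
    A minimal matched-filter sketch (synthetic chirp; the noise level and echo delay are invented) illustrates the detection and echo-time estimation described:

      import numpy as np

      fs, T = 1e4, 0.1
      t = np.arange(0, T, 1/fs)
      chirp = np.sin(2*np.pi*(100*t + 0.5*(2000/T)*t**2))   # 100 -> 2100 Hz sweep

      # Noisy measurement with the chirp echo buried at 0.25 s.
      rx = 0.5*np.random.randn(int(0.5*fs))
      delay = int(0.25*fs)
      rx[delay:delay + chirp.size] += 0.3*chirp

      # Matched filter: cross-correlate the replica with the measurement;
      # the peak location gives the echo time (and hence the range estimate).
      mf = np.correlate(rx, chirp, mode="valid")
      t_hat = np.argmax(np.abs(mf)) / fs
      print(f"estimated echo time: {t_hat:.3f} s")   # ~0.250 s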

  34. Facial motion parameter estimation and error criteria in model-based image coding

    Science.gov (United States)

    Liu, Yunhai; Yu, Lu; Yao, Qingdong

    2000-04-01

    Model-based image coding has been given extensive attention due to its high subjective image quality and low bit-rates. But the estimation of object motion parameters is still a difficult problem, and there are no proper error criteria for quality assessment that are consistent with visual properties. This paper presents an algorithm for facial motion parameter estimation based on feature point correspondence and gives motion parameter error criteria. The facial motion model comprises three parts. The first part is the global 3-D rigid motion of the head, the second part is non-rigid translation motion in the jaw area, and the third part consists of local non-rigid expression motion in the eyes and mouth areas. The feature points are automatically selected by a function of edges, brightness and end-nodes outside the blocks of eyes and mouth. The number of feature points is adjusted adaptively. The jaw translation motion is tracked by the changes of the feature point positions of the jaw. The areas of non-rigid expression motion can be rebuilt by using a block-pasting method. An estimation approach for the motion parameter error based on the quality of the reconstructed image is suggested, and an area error function and an error function of contour transition-turn rate are used as quality criteria. The criteria properly reflect the image geometric distortion caused by the error of the estimated motion parameters.

  35. Model-Based Evolution of a Fast Hybrid Fuzzy Adaptive Controller for a Pneumatic Muscle Actuator

    Directory of Open Access Journals (Sweden)

    Alexander Hošovský

    2012-07-01

    Pneumatic artificial muscle-based robotic systems usually necessitate the use of various nonlinear control techniques in order to improve their performance. Their robustness to parameter variation, which is generally difficult to predict, should also be tested. Here a fast hybrid adaptive control is proposed, where a conventional PD controller is placed into the feedforward branch and a fuzzy controller is placed into the adaptation branch. The fuzzy controller compensates for the actions of the PD controller under conditions of inertia moment variation. The fuzzy controller of Takagi-Sugeno type is evolved through a genetic algorithm using the dynamic model of a pneumatic muscle actuator. The results confirm the capability of the designed system to provide robust performance under the conditions of varying inertia.

  36. Fast and accurate estimation of the covariance between pairwise maximum likelihood distances

    Directory of Open Access Journals (Sweden)

    Manuel Gil

    2014-09-01

    Pairwise evolutionary distances are a model-based summary statistic for a set of molecular sequences. They represent the leaf-to-leaf path lengths of the underlying phylogenetic tree. Estimates of pairwise distances with overlapping paths covary because of shared mutation events. It is desirable to take this covariance structure into account to increase precision in any process that compares or combines distances. This paper introduces a fast estimator for the covariance of two pairwise maximum likelihood distances, estimated under general Markov models. The estimator is based on a conjecture (going back to Nei & Jin, 1989) which links the covariance to path lengths. It is proven here under a simple symmetric substitution model. A simulation shows that the estimator outperforms previously published ones in terms of the mean squared error.

  37. Fast and accurate estimation of the covariance between pairwise maximum likelihood distances.

    Science.gov (United States)

    Gil, Manuel

    2014-01-01

    Pairwise evolutionary distances are a model-based summary statistic for a set of molecular sequences. They represent the leaf-to-leaf path lengths of the underlying phylogenetic tree. Estimates of pairwise distances with overlapping paths covary because of shared mutation events. It is desirable to take this covariance structure into account to increase precision in any process that compares or combines distances. This paper introduces a fast estimator for the covariance of two pairwise maximum likelihood distances, estimated under general Markov models. The estimator is based on a conjecture (going back to Nei & Jin, 1989) which links the covariance to path lengths. It is proven here under a simple symmetric substitution model. A simulation shows that the estimator outperforms previously published ones in terms of the mean squared error.

  38. Uncertainties in neural network model based on carbon dioxide concentration for occupancy estimation

    Energy Technology Data Exchange (ETDEWEB)

    Alam, Azimil Gani; Rahman, Haolia; Kim, Jung-Kyung; Han, Hwataik [Kookmin University, Seoul (Korea, Republic of)

    2017-05-15

    Demand control ventilation is employed to save energy by adjusting the airflow rate according to the ventilation load of a building. This paper investigates a method for occupancy estimation by using a dynamic neural network model based on the carbon dioxide concentration in an occupied zone. The method can be applied to most commercial and residential buildings where human effluents are to be ventilated. The indoor simulation program CONTAMW is used to generate indoor CO2 data corresponding to various occupancy schedules and airflow patterns to train the neural network models. Coefficients of variation are obtained depending on the complexities of the physical parameters as well as the system parameters of the neural networks, such as the numbers of hidden neurons and tapped delay lines. We intend to identify the uncertainties caused by the model parameters themselves, by excluding uncertainties in input data inherent in measurement. Our results show that the estimation accuracy is highly influenced by the frequency of occupancy variation but not significantly influenced by fluctuation in the airflow rate. Furthermore, we discuss the applicability and validity of the present method based on passive environmental conditions for estimating occupancy in a room from the viewpoint of demand control ventilation applications.

  39. A Fast Segmentation Algorithm for C-V Model Based on Exponential Image Sequence Generation

    Directory of Open Access Journals (Sweden)

    J. Hu

    2017-09-01

    For island coastline segmentation, a fast segmentation algorithm for the C-V model based on exponential image sequence generation is proposed in this paper. An exponential multi-scale C-V model with level set inheritance and boundary inheritance is developed. The main research contributions are as follows: 1) the problems of "holes" and "gaps" are solved when extracting the coastline, through small-scale shrinkage, low-pass filtering and area sorting of regions; 2) the initial value of the SDF (Signed Distance Function) and the level set are given by Otsu segmentation, based on the difference in SAR reflection between land and sea, which are finely close to the coastline; 3) the computational complexity of the continuous transition between different scales is successfully reduced by the inheritance of the SDF and of the level set. Experiment results show that the method accelerates the formation of the initial level set, shortens the time for the extraction of the coastline and, at the same time, removes non-coastline body parts and improves the identification precision of the main body coastline, which automates the process of coastline segmentation.

  40. A Fast Segmentation Algorithm for C-V Model Based on Exponential Image Sequence Generation

    Science.gov (United States)

    Hu, J.; Lu, L.; Xu, J.; Zhang, J.

    2017-09-01

    For island coastline segmentation, a fast segmentation algorithm for the C-V model based on exponential image sequence generation is proposed in this paper. An exponential multi-scale C-V model with level set inheritance and boundary inheritance is developed. The main research contributions are as follows: 1) the problems of "holes" and "gaps" are solved when extracting the coastline, through small-scale shrinkage, low-pass filtering and area sorting of regions; 2) the initial value of the SDF (Signed Distance Function) and the level set are given by Otsu segmentation, based on the difference in SAR reflection between land and sea, which are finely close to the coastline; 3) the computational complexity of the continuous transition between different scales is successfully reduced by the inheritance of the SDF and of the level set. Experiment results show that the method accelerates the formation of the initial level set, shortens the time for the extraction of the coastline and, at the same time, removes non-coastline body parts and improves the identification precision of the main body coastline, which automates the process of coastline segmentation.

  41. A Fast LMMSE Channel Estimation Method for OFDM Systems

    Directory of Open Access Journals (Sweden)

    Zhou Wen

    2009-01-01

    A fast linear minimum mean square error (LMMSE) channel estimation method is proposed for Orthogonal Frequency Division Multiplexing (OFDM) systems. In comparison with conventional LMMSE channel estimation, the proposed channel estimation method does not require statistical knowledge of the channel in advance and avoids the inversion of a large-dimension matrix by using the fast Fourier transform (FFT) operation. Therefore, the computational complexity can be reduced significantly. The normalized mean square errors (NMSEs) of the proposed method and the conventional LMMSE estimation have been derived. Numerical results show that the NMSE of the proposed method is very close to that of the conventional LMMSE method, which is also verified by computer simulation. In addition, computer simulation shows that the performance of the proposed method is almost the same as that of the conventional LMMSE method in terms of bit error rate (BER).
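
    One common form of the FFT trick, shown here as a simplified stand-in for the paper's derivation, denoises a least-squares estimate by keeping only the leading delay-domain taps:

      import numpy as np

      def fft_denoise_channel(h_ls, n_taps):
          # Transform the LS estimate to the delay domain, keep the first
          # n_taps taps (where the channel energy lives), transform back.
          g = np.fft.ifft(h_ls)
          g[n_taps:] = 0.0
          return np.fft.fft(g)

      # Toy OFDM setup: 64 subcarriers, a 4-tap channel, noisy LS estimate.
      N, L = 64, 4
      h_time = (np.random.randn(L) + 1j*np.random.randn(L)) / np.sqrt(2*L)
      H = np.fft.fft(h_time, N)
      H_ls = H + 0.1*(np.random.randn(N) + 1j*np.random.randn(N))
      H_hat = fft_denoise_channel(H_ls, n_taps=8)
      print("LS NMSE: ", np.mean(abs(H_ls - H)**2) / np.mean(abs(H)**2))
      print("FFT NMSE:", np.mean(abs(H_hat - H)**2) / np.mean(abs(H)**2))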

  42. Burden of Severe Pneumonia, Pneumococcal Pneumonia and Pneumonia Deaths in Indian States: Modelling Based Estimates.

    Science.gov (United States)

    Farooqui, Habib; Jit, Mark; Heymann, David L; Zodpey, Sanjay

    2015-01-01

    The burden of severe pneumonia in terms of morbidity and mortality is unknown in India, especially at the sub-national level. In this context, we aimed to estimate the number of severe pneumonia episodes, pneumococcal pneumonia episodes and pneumonia deaths in children younger than 5 years in 2010. We adapted and parameterized a mathematical model based on the epidemiological concept of potential impact fraction developed by CHERG for this analysis. The key parameters that determine the distribution of severe pneumonia episodes across Indian states were state-specific under-5 population, state-specific prevalence of selected definite pneumonia risk factors and meta-estimates of relative risks for each of these risk factors. We applied the incidence estimates and attributable fraction of risk factors to population estimates for 2010 of each Indian state. We then estimated the number of pneumococcal pneumonia cases by applying the vaccine probe methodology to an existing trial. We estimated mortality due to severe pneumonia and pneumococcal pneumonia by combining incidence estimates with case fatality ratios from multi-centric hospital-based studies. Our results suggest that in 2010, 3.6 million (3.3-3.9 million) episodes of severe pneumonia and 0.35 million (0.31-0.40 million) all-cause pneumonia deaths occurred in children younger than 5 years in India. The states that merit special mention include Uttar Pradesh, where 18.1% of children reside but which contributes 24% of pneumonia cases and 26% of pneumonia deaths, Bihar (11.3% children, 16% cases, 22% deaths), Madhya Pradesh (6.6% children, 9% cases, 12% deaths), and Rajasthan (6.6% children, 8% cases, 11% deaths). Further, we estimated that 0.56 million (0.49-0.64 million) severe episodes of pneumococcal pneumonia and 105 thousand (92-119 thousand) pneumococcal deaths occurred in India. The top contributors to India's pneumococcal pneumonia burden were Uttar Pradesh, Bihar, Madhya Pradesh and Rajasthan, in that order. Our results
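
    The attributable-fraction arithmetic underlying such models can be illustrated in a few lines (a single binary risk factor with invented prevalences and relative risk; the actual model combines several factors per state):

      import numpy as np

      def paf(prevalence, rr):
          # Population attributable fraction for one binary risk factor.
          return prevalence*(rr - 1.0) / (prevalence*(rr - 1.0) + 1.0)

      # Hypothetical inputs for two states: under-5 population and prevalence.
      states = {"State A": (30e6, 0.40), "State B": (10e6, 0.25)}
      rr = 1.8                 # relative risk of the factor
      base_incidence = 0.03    # episodes per child-year without the factor

      for name, (pop, prev) in states.items():
          inc = base_incidence * (1 + prev*(rr - 1))   # prevalence-adjusted
          episodes = pop * inc
          print(f"{name}: {episodes/1e6:.2f}M episodes, PAF = {paf(prev, rr):.2%}")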

  43. Burden of Severe Pneumonia, Pneumococcal Pneumonia and Pneumonia Deaths in Indian States: Modelling Based Estimates.

    Directory of Open Access Journals (Sweden)

    Habib Farooqui

    The burden of severe pneumonia in terms of morbidity and mortality is unknown in India, especially at the sub-national level. In this context, we aimed to estimate the number of severe pneumonia episodes, pneumococcal pneumonia episodes and pneumonia deaths in children younger than 5 years in 2010. We adapted and parameterized a mathematical model based on the epidemiological concept of potential impact fraction developed by CHERG for this analysis. The key parameters that determine the distribution of severe pneumonia episodes across Indian states were state-specific under-5 population, state-specific prevalence of selected definite pneumonia risk factors and meta-estimates of relative risks for each of these risk factors. We applied the incidence estimates and attributable fraction of risk factors to population estimates for 2010 of each Indian state. We then estimated the number of pneumococcal pneumonia cases by applying the vaccine probe methodology to an existing trial. We estimated mortality due to severe pneumonia and pneumococcal pneumonia by combining incidence estimates with case fatality ratios from multi-centric hospital-based studies. Our results suggest that in 2010, 3.6 million (3.3-3.9 million) episodes of severe pneumonia and 0.35 million (0.31-0.40 million) all-cause pneumonia deaths occurred in children younger than 5 years in India. The states that merit special mention include Uttar Pradesh, where 18.1% of children reside but which contributes 24% of pneumonia cases and 26% of pneumonia deaths, Bihar (11.3% children, 16% cases, 22% deaths), Madhya Pradesh (6.6% children, 9% cases, 12% deaths), and Rajasthan (6.6% children, 8% cases, 11% deaths). Further, we estimated that 0.56 million (0.49-0.64 million) severe episodes of pneumococcal pneumonia and 105 thousand (92-119 thousand) pneumococcal deaths occurred in India. The top contributors to India's pneumococcal pneumonia burden were Uttar Pradesh, Bihar, Madhya Pradesh and Rajasthan, in that order. Our

  44. Burden of Severe Pneumonia, Pneumococcal Pneumonia and Pneumonia Deaths in Indian States: Modelling Based Estimates

    Science.gov (United States)

    Farooqui, Habib; Jit, Mark; Heymann, David L.; Zodpey, Sanjay

    2015-01-01

    The burden of severe pneumonia in terms of morbidity and mortality is unknown in India, especially at the sub-national level. In this context, we aimed to estimate the number of severe pneumonia episodes, pneumococcal pneumonia episodes and pneumonia deaths in children younger than 5 years in 2010. We adapted and parameterized a mathematical model based on the epidemiological concept of potential impact fraction developed by CHERG for this analysis. The key parameters that determine the distribution of severe pneumonia episodes across Indian states were state-specific under-5 population, state-specific prevalence of selected definite pneumonia risk factors and meta-estimates of relative risks for each of these risk factors. We applied the incidence estimates and attributable fraction of risk factors to population estimates for 2010 of each Indian state. We then estimated the number of pneumococcal pneumonia cases by applying the vaccine probe methodology to an existing trial. We estimated mortality due to severe pneumonia and pneumococcal pneumonia by combining incidence estimates with case fatality ratios from multi-centric hospital-based studies. Our results suggest that in 2010, 3.6 million (3.3–3.9 million) episodes of severe pneumonia and 0.35 million (0.31–0.40 million) all-cause pneumonia deaths occurred in children younger than 5 years in India. The states that merit special mention include Uttar Pradesh, where 18.1% of children reside but which contributes 24% of pneumonia cases and 26% of pneumonia deaths, Bihar (11.3% children, 16% cases, 22% deaths), Madhya Pradesh (6.6% children, 9% cases, 12% deaths), and Rajasthan (6.6% children, 8% cases, 11% deaths). Further, we estimated that 0.56 million (0.49–0.64 million) severe episodes of pneumococcal pneumonia and 105 thousand (92–119 thousand) pneumococcal deaths occurred in India. The top contributors to India’s pneumococcal pneumonia burden were Uttar Pradesh, Bihar, Madhya Pradesh and Rajasthan, in that order. Our

  45. Direct diffusion tensor estimation using a model-based method with spatial and parametric constraints.

    Science.gov (United States)

    Zhu, Yanjie; Peng, Xi; Wu, Yin; Wu, Ed X; Ying, Leslie; Liu, Xin; Zheng, Hairong; Liang, Dong

    2017-02-01

    The purpose of this work was to develop a new model-based method with spatial and parametric constraints (MB-SPC) aimed at accelerating diffusion tensor imaging (DTI) by directly estimating the diffusion tensor from highly undersampled k-space data. The MB-SPC method effectively incorporates the prior information on the joint sparsity of different diffusion-weighted images using an L1-L2 norm and the smoothness of the diffusion tensor using a total variation seminorm. The undersampled k-space datasets were obtained from fully sampled DTI datasets of a simulated phantom and an ex-vivo experimental rat heart with acceleration factors ranging from 2 to 4. The diffusion tensor was directly reconstructed by solving a minimization problem with a nonlinear conjugate gradient descent algorithm. The reconstruction performance was quantitatively assessed using the normalized root mean square error (nRMSE) of the DTI indices. The MB-SPC method achieves acceptable DTI measures at an acceleration factor up to 4. Experimental results demonstrate that the proposed method can estimate the diffusion tensor more accurately than most existing methods operating at higher net acceleration factors. The proposed method can significantly reduce artifacts, particularly at higher acceleration factors or lower SNRs. This method can easily be adapted to MR relaxometry parameter mapping and is thus useful in the characterization of biological tissue such as nerves, muscle, and heart tissue. © 2016 American Association of Physicists in Medicine.

  46. Influence function method for fast estimation of BWR core performance

    International Nuclear Information System (INIS)

    Rahnema, F.; Martin, C.L.; Parkos, G.R.; Williams, R.D.

    1993-01-01

    The model, which is based on the influence function method, provides rapid estimates of important quantities such as margins to fuel operating limits, the effective multiplication factor, nodal power, void and bundle flow distributions, as well as the traversing in-core probe (TIP) and local power range monitor (LPRM) readings. The fast model has been incorporated into GE's three-dimensional core monitoring system (3D Monicore). In addition to its predictive capability, the model adapts to LPRM readings in the monitoring mode. Comparisons have shown that the agreement between the results of the fast method and those of the standard 3D Monicore is within a few percent. (orig.)

  47. Fast LCMV-based Methods for Fundamental Frequency Estimation

    DEFF Research Database (Denmark)

    Jensen, Jesper Rindom; Glentis, George-Othon; Christensen, Mads Græsbøll

    2013-01-01

    peaks and require matrix inversions for each point in the search grid. In this paper, we therefore consider fast implementations of LCMV-based fundamental frequency estimators, exploiting the estimators' inherently low displacement rank of the used Toeplitz-like data covariance matrices, using...... with several orders of magnitude, but, as we show, further computational savings can be obtained by the adoption of an approximative IAA-based data covariance matrix estimator, reminiscent of the recently proposed Quasi-Newton IAA technique. Furthermore, it is shown how the considered pitch estimators can...... as such either the classic time domain averaging covariance matrix estimator, or, if aiming for an increased spectral resolution, the covariance matrix resulting from the application of the recent iterative adaptive approach (IAA). The proposed exact implementations reduce the required computational complexity...

  48. Fast image interpolation for motion estimation using graphics hardware

    Science.gov (United States)

    Kelly, Francis; Kokaram, Anil

    2004-05-01

    Motion estimation and compensation is the key to high quality video coding. Block matching motion estimation is used in most video codecs, including MPEG-2, MPEG-4, H.263 and H.26L. Motion estimation is also a key component in the digital restoration of archived video and for post-production and special effects in the movie industry. Sub-pixel accurate motion vectors can improve the quality of the vector field and lead to more efficient video coding. However, sub-pixel accuracy requires interpolation of the image data. Image interpolation is a key requirement of many image processing algorithms. Often interpolation can be a bottleneck in these applications, especially in motion estimation, due to the large number of pixels involved. In this paper we propose using commodity computer graphics hardware for fast image interpolation. We use the full search block matching algorithm to illustrate the problems and limitations of using graphics hardware in this way.
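
    For reference, this is the bilinear interpolation that sub-pixel motion search needs at every candidate position (a plain CPU version of the operation the paper offloads to graphics hardware):

      import numpy as np

      def bilinear(img, y, x):
          # Sample the image intensity at a non-integer (y, x) position.
          y0, x0 = int(np.floor(y)), int(np.floor(x))
          dy, dx = y - y0, x - x0
          return ((1 - dy)*(1 - dx)*img[y0, x0] + (1 - dy)*dx*img[y0, x0 + 1] +
                  dy*(1 - dx)*img[y0 + 1, x0] + dy*dx*img[y0 + 1, x0 + 1])

      img = np.arange(16.0).reshape(4, 4)
      print(bilinear(img, 1.5, 2.25))   # 8.25, between rows 1-2, columns 2-3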

  49. A meta-model based approach for rapid formability estimation of continuous fibre reinforced components

    Science.gov (United States)

    Zimmerling, Clemens; Dörr, Dominik; Henning, Frank; Kärger, Luise

    2018-05-01

    Due to their high mechanical performance, continuous fibre reinforced plastics (CoFRP) become increasingly important for load bearing structures. In many cases, manufacturing CoFRPs comprises a forming process of textiles. To predict and optimise the forming behaviour of a component, numerical simulations are applied. However, for maximum part quality, both the geometry and the process parameters must match in mutual regard, which in turn requires numerous numerically expensive optimisation iterations. In both textile and metal forming, a lot of research has focused on determining optimum process parameters, whilst regarding the geometry as invariable. In this work, a meta-model based approach on component level is proposed, that provides a rapid estimation of the formability for variable geometries based on pre-sampled, physics-based draping data. Initially, a geometry recognition algorithm scans the geometry and extracts a set of doubly-curved regions with relevant geometry parameters. If the relevant parameter space is not part of an underlying data base, additional samples via Finite-Element draping simulations are drawn according to a suitable design-table for computer experiments. Time saving parallel runs of the physical simulations accelerate the data acquisition. Ultimately, a Gaussian Regression meta-model is built from the data base. The method is demonstrated on a box-shaped generic structure. The predicted results are in good agreement with physics-based draping simulations. Since evaluations of the established meta-model are numerically inexpensive, any further design exploration (e.g. robustness analysis or design optimisation) can be performed in short time. It is expected that the proposed method also offers great potential for future applications along virtual process chains: For each process step along the chain, a meta-model can be set-up to predict the impact of design variations on manufacturability and part performance. Thus, the method is
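
    A minimal sketch of such a meta-model (the geometry parameters and formability values are invented; the paper's pipeline adds geometry recognition and adaptive sampling) using Gaussian process regression:

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF

      # Hypothetical pre-sampled draping data: geometry parameters of a
      # doubly-curved region (e.g. radius, depth) versus a formability
      # measure such as maximum shear angle from FE draping simulations.
      X = np.array([[30., 10.], [30., 20.], [50., 10.], [50., 20.], [40., 15.]])
      y = np.array([42.0, 55.0, 31.0, 40.0, 38.5])     # max shear angle, deg

      gp = GaussianProcessRegressor(kernel=RBF(length_scale=10.0),
                                    normalize_y=True).fit(X, y)

      # Cheap evaluation for a new geometry candidate, with uncertainty.
      mean, std = gp.predict(np.array([[45., 12.]]), return_std=True)
      print(f"predicted max shear: {mean[0]:.1f} deg (+/- {std[0]:.1f})")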

  10. Fast analytical scatter estimation using graphics processing units.

    Science.gov (United States)

    Ingleby, Harry; Lippuner, Jonas; Rickey, Daniel W; Li, Yue; Elbakri, Idris

    2015-01-01

    The authors aimed to develop a fast patient-specific analytical estimator of first-order Compton and Rayleigh scatter in cone-beam computed tomography, implemented using graphics processing units. The authors developed an analytical estimator for first-order Compton and Rayleigh scatter in a cone-beam computed tomography geometry. The estimator was coded using NVIDIA's CUDA environment for execution on an NVIDIA graphics processing unit. Performance of the analytical estimator was validated by comparison with high-count Monte Carlo simulations for two different numerical phantoms. Monoenergetic analytical simulations were compared with monoenergetic and polyenergetic Monte Carlo simulations. Analytical and Monte Carlo scatter estimates were compared both qualitatively, from visual inspection of images and profiles, and quantitatively, using a scaled root-mean-square difference metric. Reconstruction of simulated cone-beam projection data of an anthropomorphic breast phantom illustrated the potential of this method as a component of a scatter correction algorithm. The monoenergetic analytical and Monte Carlo scatter estimates showed very good agreement. The monoenergetic analytical estimates showed good agreement for Compton single scatter and reasonable agreement for Rayleigh single scatter when compared with polyenergetic Monte Carlo estimates. For a voxelized phantom with dimensions 128 × 128 × 128 voxels and a detector with 256 × 256 pixels, the analytical estimator required 669 seconds for a single projection, using a single NVIDIA 9800 GX2 video card. Accounting for first-order scatter in cone-beam image reconstruction improves the contrast to noise ratio of the reconstructed images. The analytical scatter estimator, implemented using graphics processing units, provides rapid and accurate estimates of single scatter and, with further acceleration and a method to account for multiple scatter, may be useful for practical scatter correction schemes.

  11. Model-based dynamic multi-parameter method for peak power estimation of lithium-ion batteries

    NARCIS (Netherlands)

    Sun, F.; Xiong, R.; He, H.; Li, W.; Aussems, J.E.E.

    2012-01-01

    A model-based dynamic multi-parameter method for peak power estimation is proposed for batteries and battery management systems (BMSs) used in hybrid electric vehicles (HEVs). The available power must be accurately calculated in order to not damage the battery by over charging or over discharging or

  12. A model based estimate of the geometrical acceptance of the e+e- experiment on the HYPERON spectrometer

    International Nuclear Information System (INIS)

    Cerny, V.

    1983-01-01

    A model based estimate is presented of the geometrical acceptance of the HYPERON spectrometer for the detection of the e⁺e⁻ pairs in the proposed lepton experiment. The results of the Monte Carlo calculation show that the expected acceptance is fairly high. (author)

  13. Estimation of sulfur in coal by fast neutron activation

    International Nuclear Information System (INIS)

    Das, G.C.; Bhattacharyya, P.K.

    1995-01-01

    A simple method is described for estimation of sulfur in coal using fast neutron activation of sulfur, i.e. the ³²S(n,p)³²P reaction, and subsequent measurement of the ³²P β-activity (1.72 MeV) by a Geiger-Mueller counter. Since the sulfur content of Indian coal ranges from 0.25 to 3%, simulated samples of coal containing sulfur in the range from 0.25 to 3% and common impurities like oxides of aluminium, calcium, iron and silicon have been used to establish the method. (author). 6 refs., 2 figs., 1 tab

  14. Fast human pose estimation using 3D Zernike descriptors

    Science.gov (United States)

    Berjón, Daniel; Morán, Francisco

    2012-03-01

    Markerless video-based human pose estimation algorithms face a high-dimensional problem that is frequently broken down into several lower-dimensional ones by estimating the pose of each limb separately. However, in order to do so they need to reliably locate the torso, for which they typically rely on time coherence and tracking algorithms. Losing track usually results in catastrophic failure of the process, requiring human intervention and thus precluding their usage in real-time applications. We propose a very fast rough pose estimation scheme based on global shape descriptors built on 3D Zernike moments. Using an articulated model that we configure in many poses, a large database of descriptor/pose pairs can be computed off-line. Thus, the only steps that must be done on-line are the extraction of the descriptors for each input volume and a search against the database to get the most likely poses. While the result of such a process is not a fine pose estimation, it can be useful to help more sophisticated algorithms to regain track or make more educated guesses when creating new particles in particle-filter-based tracking schemes. We have achieved a performance of about ten fps on a single computer using a database of about one million entries.
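
    A sketch of the on-line lookup step: nearest-neighbour search of a query descriptor against a precomputed descriptor/pose database. Random vectors stand in for the 3D Zernike descriptors, and the database is smaller than the paper's million entries.

        import numpy as np
        from scipy.spatial import cKDTree

        rng = np.random.default_rng(1)
        descriptors = rng.standard_normal((100_000, 16))  # off-line database
        poses = rng.standard_normal((100_000, 10))        # matching pose parameters

        tree = cKDTree(descriptors)                       # built once, off-line
        query = rng.standard_normal(16)                   # descriptor of the input volume
        dist, idx = tree.query(query, k=5)                # five most likely candidate poses
        print(poses[idx].shape)                           # (5, 10)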

  15. Model-based estimation of finite population total in stratified sampling

    African Journals Online (AJOL)

    The work presented in this paper concerns the estimation of the finite population total under a model-based framework. A nonparametric regression approach as a method of estimating the finite population total is explored. The asymptotic properties of the estimators based on nonparametric regression are also developed under ...

  16. A Fast DOA Estimation Algorithm Based on Polarization MUSIC

    Directory of Open Access Journals (Sweden)

    R. Guo

    2015-04-01

    A fast DOA estimation algorithm developed from MUSIC, which also benefits from the processing of the signals' polarization information, is presented. Besides performance enhancement in precision and resolution, the proposed algorithm can be applied to various forms of polarization sensitive arrays, without specific requirements on the array's pattern. Relying on the continuity property of the space spectrum, a huge amount of the computation incurred in the calculation of the 4-D space spectrum is averted. Performance and computation complexity analyses of the proposed algorithm are discussed and the simulation results are presented. Compared with conventional MUSIC, it is indicated that the proposed algorithm has a considerable advantage in precision and resolution, with a low computation complexity proportional to that of a conventional 2-D MUSIC.
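
    For reference, a standard (non-polarization) narrowband MUSIC pseudospectrum for a uniform linear array, i.e. the baseline the proposed algorithm extends; array size, noise level, and source angles are illustrative.

        import numpy as np
        from scipy.signal import find_peaks

        def music_spectrum(R, n_sources, angles_deg, n_sensors, d=0.5):
            """MUSIC pseudospectrum from covariance R; d is spacing in wavelengths."""
            eigval, eigvec = np.linalg.eigh(R)           # eigenvalues ascending
            En = eigvec[:, :n_sensors - n_sources]       # noise subspace
            p = []
            for th in np.deg2rad(angles_deg):
                a = np.exp(-2j * np.pi * d * np.arange(n_sensors) * np.sin(th))
                p.append(1.0 / np.real(a.conj() @ En @ En.conj().T @ a))
            return np.asarray(p)

        rng = np.random.default_rng(2)
        n, snaps = 8, 200                                # sensors, snapshots
        A = np.stack([np.exp(-2j * np.pi * 0.5 * np.arange(n) * np.sin(np.deg2rad(a)))
                      for a in (-20.0, 30.0)], axis=1)
        S = rng.standard_normal((2, snaps)) + 1j * rng.standard_normal((2, snaps))
        X = A @ S + 0.1 * (rng.standard_normal((n, snaps)) + 1j * rng.standard_normal((n, snaps)))
        R = X @ X.conj().T / snaps
        grid = np.arange(-90.0, 90.0, 0.5)
        spec = music_spectrum(R, 2, grid, n)
        peaks, _ = find_peaks(spec)
        print(grid[peaks[np.argsort(spec[peaks])[-2:]]])  # ~[-20. 30.] (order may vary)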

  17. Fast and Robust Nanocellulose Width Estimation Using Turbidimetry.

    Science.gov (United States)

    Shimizu, Michiko; Saito, Tsuguyuki; Nishiyama, Yoshiharu; Iwamoto, Shinichiro; Yano, Hiroyuki; Isogai, Akira; Endo, Takashi

    2016-10-01

    The dimensions of nanocelluloses are important factors in controlling their material properties. The present study reports a fast and robust method for estimating the widths of individual nanocellulose particles based on the turbidities of their water dispersions. Seven types of nanocellulose, including short and rigid cellulose nanocrystals and long and flexible cellulose nanofibers, are prepared via different processes. Their widths are calculated from the respective turbidity plots of their water dispersions, based on the theory of light scattering by thin and long particles. The turbidity-derived widths of the seven nanocelluloses range from 2 to 10 nm, and show good correlations with the thicknesses of nanocellulose particles spread on flat mica surfaces determined using atomic force microscopy. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  18. Fast focus estimation using frequency analysis in digital holography.

    Science.gov (United States)

    Oh, Seungtaik; Hwang, Chi-Young; Jeong, Il Kwon; Lee, Sung-Keun; Park, Jae-Hyeung

    2014-11-17

    A novel fast frequency-based method to estimate the focus distance of a digital hologram for a single object is proposed. The focus distance is computed by analyzing the distribution of intersections of smoothed rays. The smoothed rays are determined by the directions of energy flow, which are computed from the local spatial frequency spectrum based on the windowed Fourier transform. Our method therefore uses only the intrinsic frequency information of the optical field on the hologram and does not require any sequential numerical reconstructions or the focus detection techniques of conventional photography, both of which are essential parts of previous methods. To show the effectiveness of our method, numerical results and analysis are presented as well.

  19. Allometric Models Based on Bayesian Frameworks Give Better Estimates of Aboveground Biomass in the Miombo Woodlands

    Directory of Open Access Journals (Sweden)

    Shem Kuyah

    2016-02-01

    The miombo woodland is the most extensive dry forest in the world, with the potential to store substantial amounts of biomass carbon. Efforts to obtain accurate estimates of carbon stocks in the miombo woodlands are limited by a general lack of biomass estimation models (BEMs). This study aimed to evaluate the accuracy of the most commonly employed allometric models for estimating aboveground biomass (AGB) in miombo woodlands, and to develop new models that enable more accurate estimation of biomass in the miombo woodlands. A generalizable mixed-species allometric model was developed from 88 trees belonging to 33 species ranging in diameter at breast height (DBH) from 5 to 105 cm using Bayesian estimation. A power law model with DBH alone performed better than both a polynomial model with DBH and the square of DBH, and models including height and crown area as additional variables along with DBH. The accuracy of estimates from published models varied across different sites and trees of different diameter classes, and was lower than that of estimates from our model. The model developed in this study can be used to establish conservative carbon stocks required to determine avoided emissions in performance-based payment schemes, for example in afforestation and reforestation activities.
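
    A sketch of fitting the power-law form AGB = a * DBH^b on mock data; the paper fits this form with Bayesian estimation, whereas the illustration below uses ordinary log-log least squares.

        import numpy as np

        rng = np.random.default_rng(3)
        dbh = rng.uniform(5.0, 105.0, 88)   # cm, the diameter range of the study
        agb = 0.11 * dbh ** 2.5 * np.exp(0.1 * rng.standard_normal(88))  # mock biomass, kg

        b, log_a = np.polyfit(np.log(dbh), np.log(agb), 1)  # slope, intercept
        print(f"AGB ~ {np.exp(log_a):.3f} * DBH^{b:.2f}")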

  20. Fast estimate of Hartley entropy in image sharpening

    Science.gov (United States)

    Krbcová, Zuzana; Kukal, Jaromír.; Svihlik, Jan; Fliegel, Karel

    2016-09-01

    Two classes of linear IIR filters, Laplacian of Gaussian (LoG) and Difference of Gaussians (DoG), are frequently used as high pass filters for contextual vision and edge detection. They are also used for image sharpening when linearly combined with the original image. The resulting sharpening filters are radially symmetric in the spatial and frequency domains. Our approach is based on the radial approximation of an unknown optimal filter, which is designed as a weighted sum of Gaussian filters with various radii. The novel filter is designed for MRI image enhancement, where the image intensity represents anatomical structure plus additive noise. We prefer the gradient norm of the Hartley entropy of the whole image intensity as the measure to be maximized for the best sharpening. The entropy estimation procedure is as fast as the FFT used in the filtering, and the estimate is a continuous function of the enhanced image intensities. A physically motivated heuristic is used for optimum sharpening filter design via parameter tuning. Our approach is compared with the Wiener filter on MRI images.
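
    A minimal difference-of-Gaussians sharpening step of the kind described; the radii and sharpening strength are illustrative and would, per the abstract, be tuned by maximising the entropy-based measure.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def dog_sharpen(img, sigma_small=1.0, sigma_large=2.0, strength=1.5):
            """Add a band-pass (DoG) component back onto the image."""
            dog = gaussian_filter(img, sigma_small) - gaussian_filter(img, sigma_large)
            return img + strength * dog

        img = np.random.default_rng(4).random((64, 64))  # stand-in for an MRI slice
        print(dog_sharpen(img).shape)                    # (64, 64)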

  1. A model-based initial guess for estimating parameters in systems of ordinary differential equations.

    Science.gov (United States)

    Dattner, Itai

    2015-12-01

    The inverse problem of parameter estimation from noisy observations is a major challenge in statistical inference for dynamical systems. Parameter estimation is usually carried out by optimizing some criterion function over the parameter space. Unless the optimization process starts with a good initial guess, the estimation may take an unreasonable amount of time, and may converge to local solutions, if at all. In this article, we introduce a novel technique for generating good initial guesses that can be used by any estimation method. We focus on the fairly general and often applied class of systems linear in the parameters. The new methodology bypasses numerical integration and can handle partially observed systems. We illustrate the performance of the method using simulations and apply it to real data. © 2015, The International Biometric Society.
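
    An illustrative variant of the idea for a system linear in its parameters: smooth the observations, approximate the derivative, and solve a linear least-squares problem for the parameters (a rough initial guess, not the paper's exact estimator).

        import numpy as np

        # Logistic growth x' = theta1*x + theta2*x^2 with true theta = (1, -1).
        t = np.linspace(0.0, 5.0, 200)
        x_true = 0.1 * np.exp(t) / (1.0 + 0.1 * (np.exp(t) - 1.0))
        x_obs = x_true + 0.005 * np.random.default_rng(5).standard_normal(t.size)

        k = 9                                      # moving-average width
        x_s = np.convolve(x_obs, np.ones(k) / k, mode="same")
        dxdt = np.gradient(x_s, t)

        G = np.column_stack([x_s, x_s ** 2])[k:-k]  # regressors g(x), edges trimmed
        theta, *_ = np.linalg.lstsq(G, dxdt[k:-k], rcond=None)
        print(theta)                                # roughly (1, -1)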

  2. Fast Emission Estimates in China Constrained by Satellite Observations (Invited)

    Science.gov (United States)

    Mijling, B.; van der A, R.

    2013-12-01

    Emission inventories of air pollutants are crucial information for policy makers and form important input data for air quality models. Unfortunately, bottom-up emission inventories, compiled from large quantities of statistical data, are easily outdated for an emerging economy such as China, where rapid economic growth changes emissions accordingly. Alternatively, top-down emission estimates from satellite observations of air constituents have the important advantages of being spatially consistent, having high temporal resolution, and enabling emission updates shortly after the satellite data become available. Constraining emissions from concentration measurements is, however, computationally challenging. Within the GlobEmission project of the European Space Agency (ESA) a new algorithm has been developed, specifically designed for fast daily emission estimates of short-lived atmospheric species on a mesoscopic scale (0.25 × 0.25 degree) from satellite observations of column concentrations. The algorithm needs only one forward model run from a chemical transport model to calculate the sensitivity of concentration to emission, using trajectory analysis to account for transport away from the source. By using a Kalman filter in the inverse step, optimal use of the a priori knowledge and the newly observed data is made. We apply the algorithm for NOx emission estimates in East China, using the CHIMERE model together with tropospheric NO2 column retrievals of the OMI and GOME-2 satellite instruments. The observations are used to construct a monthly emission time series, which reveals important emission trends such as the emission reduction measures during the Beijing Olympic Games, and the impact of and recovery from the global economic crisis. The algorithm is also able to detect emerging sources (e.g. new power plants) and improve emission information for areas where proxy data are unknown or poorly known (e.g. shipping emissions). The new emission estimates result in a better
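
    A toy scalar Kalman update of the kind used in the inverse step, for a single grid cell and a single observation; the sensitivity links emission to column concentration via one forward model run, and all numbers are illustrative.

        e_prior, P = 10.0, 4.0        # prior emission (kt/yr) and its variance
        s = 0.8                       # sensitivity of column concentration to emission
        c_obs, R = 9.5, 1.0           # observed column and observation variance
        c_pred = s * e_prior          # forward-model prediction

        K = P * s / (s * P * s + R)   # Kalman gain
        e_post = e_prior + K * (c_obs - c_pred)
        P_post = (1.0 - K * s) * P
        print(e_post, P_post)         # updated emission and reduced variance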

  3. A Model Based Approach to Sample Size Estimation in Recent Onset Type 1 Diabetes

    Science.gov (United States)

    Bundy, Brian; Krischer, Jeffrey P.

    2016-01-01

    The area under the curve C-peptide following a 2-hour mixed meal tolerance test from 481 individuals enrolled on 5 prior TrialNet studies of recent onset type 1 diabetes from baseline to 12 months after enrollment was modelled to produce estimates of its rate of loss and variance. Age at diagnosis and baseline C-peptide were found to be significant predictors and adjusting for these in an ANCOVA resulted in estimates with lower variance. Using these results as planning parameters for new studies results in a nearly 50% reduction in the target sample size. The modelling also produces an expected C-peptide that can be used in Observed vs. Expected calculations to estimate the presumption of benefit in ongoing trials. PMID:26991448

  4. A model-based approach to sample size estimation in recent onset type 1 diabetes.

    Science.gov (United States)

    Bundy, Brian N; Krischer, Jeffrey P

    2016-11-01

    The area under the curve C-peptide following a 2-h mixed meal tolerance test from 498 individuals enrolled on five prior TrialNet studies of recent onset type 1 diabetes from baseline to 12 months after enrolment was modelled to produce estimates of its rate of loss and variance. Age at diagnosis and baseline C-peptide were found to be significant predictors, and adjusting for these in an ANCOVA resulted in estimates with lower variance. Using these results as planning parameters for new studies results in a nearly 50% reduction in the target sample size. The modelling also produces an expected C-peptide that can be used in observed versus expected calculations to estimate the presumption of benefit in ongoing trials. Copyright © 2016 John Wiley & Sons, Ltd.

  5. The cost of universal health care in India: a model based estimate.

    Directory of Open Access Journals (Sweden)

    Shankar Prinja

    INTRODUCTION: As high out-of-pocket healthcare expenses pose heavy financial burden on the families, Government of India is considering a variety of financing and delivery options to universalize health care services. Hence, an estimate of the cost of delivering universal health care services is needed. METHODS: We developed a model to estimate recurrent and annual costs for providing health services through a mix of public and private providers in Chandigarh located in northern India. Necessary health services required to deliver good quality care were defined by the Indian Public Health Standards. National Sample Survey data was utilized to estimate disease burden. In addition, morbidity and treatment data was collected from two secondary and two tertiary care hospitals. The unit cost of treatment was estimated from the published literature. For diseases where data on treatment cost was not available, we collected data on standard treatment protocols and cost of care from local health providers. RESULTS: We estimate that the cost of universal health care delivery through the existing mix of public and private health institutions would be INR 1713 (USD 38, 95%CI USD 18-73) per person per annum in India. This cost would be 24% higher, if branded drugs are used. Extrapolation of these costs to the entire country indicates that the Indian government needs to spend 3.8% (2.1%-6.8%) of the GDP for universalizing health care services. CONCLUSION: The cost of universal health care delivered through a combination of public and private providers is estimated to be INR 1713 per capita per year in India. Important issues such as delivery strategy for ensuring quality, reducing inequities in access, and managing the growth of health care demand need to be explored.

  6. The cost of universal health care in India: a model based estimate.

    Science.gov (United States)

    Prinja, Shankar; Bahuguna, Pankaj; Pinto, Andrew D; Sharma, Atul; Bharaj, Gursimer; Kumar, Vishal; Tripathy, Jaya Prasad; Kaur, Manmeet; Kumar, Rajesh

    2012-01-01

    As high out-of-pocket healthcare expenses pose heavy financial burden on the families, Government of India is considering a variety of financing and delivery options to universalize health care services. Hence, an estimate of the cost of delivering universal health care services is needed. We developed a model to estimate recurrent and annual costs for providing health services through a mix of public and private providers in Chandigarh located in northern India. Necessary health services required to deliver good quality care were defined by the Indian Public Health Standards. National Sample Survey data was utilized to estimate disease burden. In addition, morbidity and treatment data was collected from two secondary and two tertiary care hospitals. The unit cost of treatment was estimated from the published literature. For diseases where data on treatment cost was not available, we collected data on standard treatment protocols and cost of care from local health providers. We estimate that the cost of universal health care delivery through the existing mix of public and private health institutions would be INR 1713 (USD 38, 95%CI USD 18-73) per person per annum in India. This cost would be 24% higher, if branded drugs are used. Extrapolation of these costs to the entire country indicates that the Indian government needs to spend 3.8% (2.1%-6.8%) of the GDP for universalizing health care services. The cost of universal health care delivered through a combination of public and private providers is estimated to be INR 1713 per capita per year in India. Important issues such as delivery strategy for ensuring quality, reducing inequities in access, and managing the growth of health care demand need to be explored.

  7. Methodology for the Model-based Small Area Estimates of Cancer-Related Knowledge - Small Area Estimates

    Science.gov (United States)

    The HINTS is designed to produce reliable estimates at the national and regional levels. GIS maps using HINTS data have been used to provide a visual representation of possible geographic relationships in HINTS cancer-related variables.

  8. Geostatistical model-based estimates of schistosomiasis prevalence among individuals aged ≤ 20 years in West Africa

    DEFF Research Database (Denmark)

    Schur, Nadine; Hürlimann, Eveline; Garba, Amadou

    2011-01-01

    Schistosomiasis is a water-based disease that is believed to affect over 200 million people with an estimated 97% of the infections concentrated in Africa. However, these statistics are largely based on population re-adjusted data originally published by Utroska and colleagues more than 20 years ago. Hence, these estimates are outdated due to large-scale preventive chemotherapy programs, improved sanitation, water resources development and management, among other reasons. For planning, coordination, and evaluation of control activities, it is essential to possess reliable schistosomiasis...

  9. Model-based Estimation of Gas Leakage for Fluid Power Accumulators in Wind Turbines

    DEFF Research Database (Denmark)

    Liniger, Jesper; Pedersen, Henrik Clemmensen; N. Soltani, Mohsen

    2017-01-01

    for accumulators, namely gas leakage. The method utilizes an Extended Kalman Filter for joint state and parameter estimation, with special attention to limiting the use of sensors to those commonly used in wind turbines. The precision of the method is investigated on an experimental setup which allows for operation of the accumulator similar to the conditions in a turbine. The results show that gas leakage is indeed detectable during start-up of the turbine and that robust behavior is achieved in a multi-fault environment where both gas and external fluid leakage occur simultaneously. The estimation precision is shown to be sensitive to initial conditions for the gas temperature and volume.

  10. A review of global potentially available cropland estimates and their consequences for model-based assessments

    NARCIS (Netherlands)

    Eitelberg, D.A.; van Vliet, J.; Verburg, P.H.

    2015-01-01

    The world's population is growing and demand for food, feed, fiber, and fuel is increasing, placing greater demand on land and its resources for crop production. We review previously published estimates of global scale cropland availability, discuss the underlying assumptions that lead to

  11. Estimating parameters of speciation models based on refined summaries of the joint site-frequency spectrum.

    Directory of Open Access Journals (Sweden)

    Aurélien Tellier

    Understanding the processes and conditions under which populations diverge to give rise to distinct species is a central question in evolutionary biology. Since recently diverged populations have high levels of shared polymorphisms, it is challenging to distinguish between recent divergence with no (or very low) inter-population gene flow and older splitting events with subsequent gene flow. Recently published methods to infer speciation parameters under the isolation-migration framework are based on summarizing polymorphism data at multiple loci in two species using the joint site-frequency spectrum (JSFS). We have developed two improvements of these methods based on a more extensive use of the JSFS classes of polymorphisms for species with high intra-locus recombination rates. First, using a likelihood based method, we demonstrate that taking into account low-frequency polymorphisms shared between species significantly improves the joint estimation of the divergence time and gene flow between species. Second, we introduce a local linear regression algorithm that considerably reduces the computational time and allows for the estimation of unequal rates of gene flow between species. We also investigate which summary statistics from the JSFS allow the greatest estimation accuracy for divergence time and migration rates for low (around 10) and high (around 100) numbers of loci. Focusing on cases with low numbers of loci and high intra-locus recombination rates, we show that our methods for the estimation of divergence time and migration rates are more precise than existing approaches.

  12. Reference Evapotranspiration Retrievals from a Mesoscale Model Based Weather Variables for Soil Moisture Deficit Estimation

    Directory of Open Access Journals (Sweden)

    Prashant K. Srivastava

    2017-10-01

    Reference Evapotranspiration (ETo) and soil moisture deficit (SMD) are vital for understanding the hydrological processes, particularly in the context of sustainable water use efficiency in the globe. Precise estimation of ETo and SMD are required for developing appropriate forecasting systems, in hydrological modeling and also in precision agriculture. In this study, the surface temperature downscaled from the Weather Research and Forecasting (WRF) model is used to estimate ETo using the boundary conditions that are provided by the European Center for Medium Range Weather Forecast (ECMWF). In order to understand the performance, Hamon's method is employed to estimate the ETo using the temperature from the meteorological station and the WRF derived variables. After estimating the ETo, a range of linear and non-linear models is utilized to retrieve SMD. The performance statistics such as RMSE, %Bias, and Nash Sutcliffe Efficiency (NSE) indicate that the exponential model (RMSE = 0.226; %Bias = −0.077; NSE = 0.616) is efficient for SMD estimation by using the observed ETo in comparison to the other linear and non-linear models (RMSE range = 0.019–0.667; %Bias range = 2.821–6.894; NSE = 0.013–0.419) used in this study. On the other hand, in the scenario where SMD is estimated using WRF downscaled meteorological variables based ETo, the linear model is found promising (RMSE = 0.017; %Bias = 5.280; NSE = 0.448) as compared to the non-linear models (RMSE range = 0.022–0.707; %Bias range = −0.207–−6.088; NSE range = 0.013–0.149). Our findings also suggest that all the models perform better during the growing season (RMSE range = 0.024–0.025; %Bias range = −4.982–−3.431; r = 0.245–0.281) than the non-growing season (RMSE range = 0.011–0.12; %Bias range = 33.073–32.701; r = 0.161–0.244) for SMD estimation.
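
    One common formulation of Hamon's temperature-based ETo (coefficients as given in widely used references such as Haith and Shoemaker; they are an assumption here and should be checked against the paper before reuse):

        import math

        def eto_hamon(t_mean_c, daylight_hours):
            """Hamon reference evapotranspiration in mm/day."""
            esat = 6.108 * math.exp(17.27 * t_mean_c / (t_mean_c + 237.3))  # hPa
            rho_sat = 216.7 * esat / (t_mean_c + 273.3)                     # g/m3
            return 0.1651 * (daylight_hours / 12.0) * rho_sat

        print(eto_hamon(20.0, 14.0))  # roughly 3.3 mm/day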

  13. Model-based PSF and MTF estimation and validation from skeletal clinical CT images.

    Science.gov (United States)

    Pakdel, Amirreza; Mainprize, James G; Robert, Normand; Fialkov, Jeffery; Whyne, Cari M

    2014-01-01

    A method was developed to correct for systematic errors in estimating the thickness of thin bones due to image blurring in CT images using bone interfaces to estimate the point-spread-function (PSF). This study validates the accuracy of the PSFs estimated using said method from various clinical CT images featuring cortical bones. Gaussian PSFs, characterized by a different extent in the z (scan) direction than in the x and y directions, were obtained using our method from 11 clinical CT scans of a cadaveric craniofacial skeleton. These PSFs were estimated for multiple combinations of scanning parameters and reconstruction methods. The actual PSF for each scan setting was measured using the slanted-slit technique within the image slice plane and the longitudinal axis. The Gaussian PSF and the corresponding modulation transfer function (MTF) are compared against the actual PSF and MTF for validation. The differences (errors) between the actual and estimated full-width half-max (FWHM) of the PSFs were 0.09 ± 0.05 and 0.14 ± 0.11 mm for the xy and z axes, respectively. The overall errors in the predicted frequencies measured at 75%, 50%, 25%, 10%, and 5% MTF levels were 0.06 ± 0.07 and 0.06 ± 0.04 cycles/mm for the xy and z axes, respectively. The accuracy of the estimates was dependent on whether they were reconstructed with a standard kernel (Toshiba's FC68, mean error of 0.06 ± 0.05 mm, MTF mean error 0.02 ± 0.02 cycles/mm) or a high resolution bone kernel (Toshiba's FC81, PSF FWHM error 0.12 ± 0.03 mm, MTF mean error 0.09 ± 0.08 cycles/mm). The method is accurate in 3D for an image reconstructed using a standard reconstruction kernel, which conforms to the Gaussian PSF assumption, but less accurate when using a high resolution bone kernel. The method is a practical and self-contained means of estimating the PSF in clinical CT images featuring cortical bones, without the need for phantoms or any prior knowledge about the scanner-specific parameters.
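
    The standard Gaussian-PSF relations such a validation relies on, FWHM = 2*sqrt(2*ln 2)*sigma and MTF(f) = exp(-2*pi^2*sigma^2*f^2); the sigma value below is illustrative.

        import numpy as np

        sigma = 0.30                           # mm, illustrative PSF width
        fwhm = 2.0 * np.sqrt(2.0 * np.log(2.0)) * sigma
        f = np.linspace(0.0, 2.0, 201)         # cycles/mm
        mtf = np.exp(-2.0 * np.pi ** 2 * sigma ** 2 * f ** 2)

        f50 = f[np.argmin(np.abs(mtf - 0.5))]  # frequency where the MTF drops to 50%
        print(f"FWHM = {fwhm:.3f} mm, f(MTF=50%) = {f50:.2f} cycles/mm")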

  14. Model-based PSF and MTF estimation and validation from skeletal clinical CT images

    International Nuclear Information System (INIS)

    Pakdel, Amirreza; Mainprize, James G.; Robert, Normand; Fialkov, Jeffery; Whyne, Cari M.

    2014-01-01

    Purpose: A method was developed to correct for systematic errors in estimating the thickness of thin bones due to image blurring in CT images using bone interfaces to estimate the point-spread-function (PSF). This study validates the accuracy of the PSFs estimated using said method from various clinical CT images featuring cortical bones. Methods: Gaussian PSFs, characterized by a different extent in the z (scan) direction than in the x and y directions, were obtained using our method from 11 clinical CT scans of a cadaveric craniofacial skeleton. These PSFs were estimated for multiple combinations of scanning parameters and reconstruction methods. The actual PSF for each scan setting was measured using the slanted-slit technique within the image slice plane and the longitudinal axis. The Gaussian PSF and the corresponding modulation transfer function (MTF) are compared against the actual PSF and MTF for validation. Results: The differences (errors) between the actual and estimated full-width half-max (FWHM) of the PSFs were 0.09 ± 0.05 and 0.14 ± 0.11 mm for the xy and z axes, respectively. The overall errors in the predicted frequencies measured at 75%, 50%, 25%, 10%, and 5% MTF levels were 0.06 ± 0.07 and 0.06 ± 0.04 cycles/mm for the xy and z axes, respectively. The accuracy of the estimates was dependent on whether they were reconstructed with a standard kernel (Toshiba's FC68, mean error of 0.06 ± 0.05 mm, MTF mean error 0.02 ± 0.02 cycles/mm) or a high resolution bone kernel (Toshiba's FC81, PSF FWHM error 0.12 ± 0.03 mm, MTF mean error 0.09 ± 0.08 cycles/mm). Conclusions: The method is accurate in 3D for an image reconstructed using a standard reconstruction kernel, which conforms to the Gaussian PSF assumption, but less accurate when using a high resolution bone kernel. The method is a practical and self-contained means of estimating the PSF in clinical CT images featuring cortical bones, without the need for phantoms or any prior knowledge about the

  15. Model-based PSF and MTF estimation and validation from skeletal clinical CT images

    Energy Technology Data Exchange (ETDEWEB)

    Pakdel, Amirreza [Sunnybrook Research Institute, Toronto, Ontario M4N 3M5, Canada and Institute of Biomaterials and Biomedical Engineering, University of Toronto, Toronto, Ontario M5S 3M2 (Canada); Mainprize, James G.; Robert, Normand [Sunnybrook Research Institute, Toronto, Ontario M4N 3M5 (Canada); Fialkov, Jeffery [Division of Plastic Surgery, Sunnybrook Health Sciences Center, Toronto, Ontario M4N 3M5, Canada and Department of Surgery, University of Toronto, Toronto, Ontario M5S 3M2 (Canada); Whyne, Cari M., E-mail: cari.whyne@sunnybrook.ca [Sunnybrook Research Institute, Toronto, Ontario M4N 3M5, Canada and Department of Surgery, Institute of Biomaterials and Biomedical Engineering, University of Toronto, Toronto, Ontario M5S 3M2 (Canada)

    2014-01-15

    Purpose: A method was developed to correct for systematic errors in estimating the thickness of thin bones due to image blurring in CT images using bone interfaces to estimate the point-spread-function (PSF). This study validates the accuracy of the PSFs estimated using said method from various clinical CT images featuring cortical bones. Methods: Gaussian PSFs, characterized by a different extent in the z (scan) direction than in the x and y directions, were obtained using our method from 11 clinical CT scans of a cadaveric craniofacial skeleton. These PSFs were estimated for multiple combinations of scanning parameters and reconstruction methods. The actual PSF for each scan setting was measured using the slanted-slit technique within the image slice plane and the longitudinal axis. The Gaussian PSF and the corresponding modulation transfer function (MTF) are compared against the actual PSF and MTF for validation. Results: The differences (errors) between the actual and estimated full-width half-max (FWHM) of the PSFs were 0.09 ± 0.05 and 0.14 ± 0.11 mm for the xy and z axes, respectively. The overall errors in the predicted frequencies measured at 75%, 50%, 25%, 10%, and 5% MTF levels were 0.06 ± 0.07 and 0.06 ± 0.04 cycles/mm for the xy and z axes, respectively. The accuracy of the estimates was dependent on whether they were reconstructed with a standard kernel (Toshiba's FC68, mean error of 0.06 ± 0.05 mm, MTF mean error 0.02 ± 0.02 cycles/mm) or a high resolution bone kernel (Toshiba's FC81, PSF FWHM error 0.12 ± 0.03 mm, MTF mean error 0.09 ± 0.08 cycles/mm). Conclusions: The method is accurate in 3D for an image reconstructed using a standard reconstruction kernel, which conforms to the Gaussian PSF assumption, but less accurate when using a high resolution bone kernel. The method is a practical and self-contained means of estimating the PSF in clinical CT images featuring cortical bones, without the need for phantoms or any prior knowledge

  16. A novel Gaussian model based battery state estimation approach: State-of-Energy

    International Nuclear Information System (INIS)

    He, HongWen; Zhang, YongZhi; Xiong, Rui; Wang, Chun

    2015-01-01

    Highlights: • The Gaussian model is employed to construct a novel battery model. • The genetic algorithm is used to implement model parameter identification. • The AIC is used to decide the best hysteresis order of the battery model. • A novel battery SoE estimator is proposed and verified by two kinds of batteries. - Abstract: State-of-energy (SoE) is a very important index for battery management systems (BMS) used in electric vehicles (EVs); it is indispensable for ensuring safe and reliable operation of batteries. To estimate battery SoE accurately, the main work can be summarized in three aspects. (1) Considering that different kinds of batteries show different open circuit voltage behaviors, the Gaussian model is employed to construct the battery model. Moreover, the genetic algorithm is employed to locate the optimal parameters of the selected battery model. (2) To determine an optimal tradeoff between battery model complexity and prediction precision, the Akaike information criterion (AIC) is used to determine the best hysteresis order of the combined battery model. Results from a comparative analysis show that the first-order hysteresis battery model is found to be the best based on the AIC values. (3) The central difference Kalman filter (CDKF) is used to estimate the real-time SoE, and an erroneous initial SoE is considered to evaluate the robustness of the SoE estimator. Lastly, two kinds of lithium-ion batteries are used to verify the proposed SoE estimation approach. The results show that the maximum SoE estimation error is within 1% for both LiFePO₄ and LiMn₂O₄ battery datasets.

  17. Forest height estimation from mountain forest areas using general model-based decomposition for polarimetric interferometric synthetic aperture radar images

    Science.gov (United States)

    Minh, Nghia Pham; Zou, Bin; Cai, Hongjun; Wang, Chengyi

    2014-01-01

    The estimation of forest parameters over mountain forest areas using polarimetric interferometric synthetic aperture radar (PolInSAR) images is of great interest in remote sensing applications. For mountain forest areas, scattering mechanisms are strongly affected by ground topography variations. Most of the previous studies modeling microwave backscattering signatures of forest areas have been carried out over relatively flat areas. Therefore, a new algorithm for forest height estimation over mountain forest areas using the general model-based decomposition (GMBD) for PolInSAR images is proposed. This algorithm enables the retrieval of not only the forest parameters, but also the magnitude associated with each mechanism. In addition, general double- and single-bounce scattering models are proposed to fit the cross-polarization and off-diagonal terms by separating their independent orientation angles, which remained unachieved in previous model-based decompositions. The efficiency of the proposed approach is demonstrated with simulated data from PolSARProSim software and ALOS-PALSAR spaceborne PolInSAR datasets over the Kalimantan areas, Indonesia. Experimental results indicate that forest height can be effectively estimated by GMBD.

  18. Feasibility Study on Tension Estimation Technique for Hanger Cables Using the FE Model-Based System Identification Method

    Directory of Open Access Journals (Sweden)

    Kyu-Sik Park

    2015-01-01

    Hanger cables in suspension bridges are partly constrained by horizontal clamps, so existing tension estimation methods based on a single cable model are prone to higher errors as the cable gets shorter and becomes more sensitive to flexural rigidity. Therefore, inverse analysis and system identification methods based on finite element models have been suggested recently. In this paper, the applicability of system identification methods is investigated using the hanger cables of Gwang-An bridge. The test results show that the inverse analysis and system identification methods based on finite element models are more reliable than the existing string theory and linear regression method for calculating the tension in terms of natural frequency errors. However, the estimation error of tension can vary according to the accuracy of the finite element model in model-based methods. In particular, the boundary conditions affect the results more profoundly when the cable gets shorter. Therefore, it is important to identify the boundary conditions through experiment if possible. The FE model-based tension estimation method using the system identification method can take various boundary conditions into account. Also, since it is not sensitive to the number of natural frequency inputs, the availability of this system is high.
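
    The taut-string baseline that the FE model-based method improves upon: tension from the n-th measured natural frequency of a cable pinned at both ends, T = 4*m*L^2*(f_n/n)^2 (the cable properties below are illustrative).

        def string_tension(mass_per_length, length, f_n, n=1):
            """Cable tension in N from taut string theory (kg/m, m, Hz)."""
            return 4.0 * mass_per_length * length ** 2 * (f_n / n) ** 2

        print(string_tension(50.0, 12.0, 4.2))  # ~508,032 N for this hypothetical hanger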

  19. Developing a new solar radiation estimation model based on Buckingham theorem

    Science.gov (United States)

    Ekici, Can; Teke, Ismail

    2018-06-01

    While solar radiation can be expressed physically on days without clouds, this becomes difficult in cloudy and complicated weather conditions. In addition, solar radiation measurements are often not taken in developing countries. In such cases, solar radiation estimation models are used. Solar radiation prediction models estimate solar radiation using other measured meteorological parameters that are available at the stations. In this study, a solar radiation estimation model was obtained using the Buckingham theorem, which is shown to be useful in predicting solar radiation. The Buckingham theorem is used to express the solar radiation through the derivation of dimensionless pi parameters. The derived model is compared with temperature based models in the literature. MPE, RMSE, MBE and NSE error analysis methods are used in this comparison. The Allen, Hargreaves, Chen and Bristow-Campbell models in the literature are used for comparison. North Dakota's meteorological data were used to compare the models. Error analyses were applied in the comparisons between the models in the literature and the model derived in this study. These comparisons were made using data obtained from North Dakota's agricultural climate network. In these applications, the model obtained within the scope of the study gives better results. Especially in terms of short-term performance, the obtained model has been found to give satisfactory results, and it gives better accuracy in comparison with the other models, as the RMSE analysis results show. The Buckingham theorem was found useful in estimating solar radiation. In terms of long term performances and percentage errors, the model has given good results.

  20. Type-specific human papillomavirus biological features: validated model-based estimates.

    Directory of Open Access Journals (Sweden)

    Iacopo Baussano

    Infection with high-risk (hr) human papillomavirus (HPV) is considered the necessary cause of cervical cancer. Vaccination against HPV16 and 18 types, which are responsible for about 75% of cervical cancer worldwide, is expected to have a major global impact on cervical cancer occurrence. Valid estimates of the parameters that regulate the natural history of hrHPV infections are crucial to draw reliable projections of the impact of vaccination. We devised a mathematical model to estimate the probability of infection transmission, the rate of clearance, and the patterns of immune response following the clearance of infection of 13 hrHPV types. To test the validity of our estimates, we fitted the same transmission model to two large independent datasets from Italy and Sweden and assessed finding consistency. The two populations, both unvaccinated, differed substantially by sexual behaviour, age distribution, and study setting (screening for cervical cancer or Chlamydia trachomatis infection). Estimated transmission probability of hrHPV types (80% for HPV16, 73%-82% for HPV18, and above 50% for most other types); clearance rates decreasing as a function of time since infection; and partial protection against re-infection with the same hrHPV type (approximately 20% for HPV16 and 50% for the other types) were similar in the two countries. The model could accurately predict the HPV16 prevalence observed in Italy among women who were not infected three years before. In conclusion, our models inform on biological parameters that cannot at the moment be measured directly from any empirical data but are essential to forecast the impact of HPV vaccination programmes.

  1. Model-based decoding, information estimation, and change-point detection techniques for multineuron spike trains.

    Science.gov (United States)

    Pillow, Jonathan W; Ahmadian, Yashar; Paninski, Liam

    2011-01-01

    One of the central problems in systems neuroscience is to understand how neural spike trains convey sensory information. Decoding methods, which provide an explicit means for reading out the information contained in neural spike responses, offer a powerful set of tools for studying the neural coding problem. Here we develop several decoding methods based on point-process neural encoding models, or forward models that predict spike responses to stimuli. These models have concave log-likelihood functions, which allow efficient maximum-likelihood model fitting and stimulus decoding. We present several applications of the encoding model framework to the problem of decoding stimulus information from population spike responses: (1) a tractable algorithm for computing the maximum a posteriori (MAP) estimate of the stimulus, the most probable stimulus to have generated an observed single- or multiple-neuron spike train response, given some prior distribution over the stimulus; (2) a gaussian approximation to the posterior stimulus distribution that can be used to quantify the fidelity with which various stimulus features are encoded; (3) an efficient method for estimating the mutual information between the stimulus and the spike trains emitted by a neural population; and (4) a framework for the detection of change-point times (the time at which the stimulus undergoes a change in mean or variance) by marginalizing over the posterior stimulus distribution. We provide several examples illustrating the performance of these estimators with simulated and real neural data.

  2. Subgrid-scale scalar flux modelling based on optimal estimation theory and machine-learning procedures

    Science.gov (United States)

    Vollant, A.; Balarac, G.; Corre, C.

    2017-09-01

    New procedures are explored for the development of models in the context of large eddy simulation (LES) of a passive scalar. They rely on the combination of optimal estimator theory with machine-learning algorithms. The concept of the optimal estimator makes it possible to identify the most accurate set of parameters to be used when deriving a model. The model itself can then be defined by training an artificial neural network (ANN) on a database derived from the filtering of direct numerical simulation (DNS) results. This procedure leads to a subgrid scale model displaying good structural performance, which makes it possible to perform LESs very close to the filtered DNS results. However, this first procedure does not control the functional performance, so the model can fail when the flow configuration differs from the training database. Another procedure is then proposed, where the model functional form is imposed and the ANN is used only to define the model coefficients. The training step is a bi-objective optimisation in order to control both structural and functional performances. The model derived from this second procedure proves to be more robust. It also provides stable LESs for a turbulent plane jet flow configuration very far from the training database but over-estimates the mixing process in that case.

  3. Estimating the costs of induced abortion in Uganda: A model-based analysis

    Science.gov (United States)

    2011-01-01

    Background The demand for induced abortions in Uganda is high despite legal and moral proscriptions. Abortion seekers usually go to illegal, hidden clinics where procedures are performed in unhygienic environments by under-trained practitioners. These abortions, which are usually unsafe, lead to a high rate of severe complications and use of substantial, scarce healthcare resources. This study was performed to estimate the costs associated with induced abortions in Uganda. Methods A decision tree was developed to represent the consequences of induced abortion and estimate the costs of an average case. Data were obtained from a primary chart abstraction study, an on-going prospective study, and the published literature. Societal costs, direct medical costs, direct non-medical costs, indirect (productivity) costs, costs to patients, and costs to the government were estimated. Monte Carlo simulation was used to account for uncertainty. Results The average societal cost per induced abortion (95% credibility range) was $177 ($140-$223). This is equivalent to $64 million in annual national costs. Of this, the average direct medical cost was $65 ($49-86) and the average direct non-medical cost was $19 ($16-$23). The average indirect cost was $92 ($57-$139). Patients incurred $62 ($46-$83) on average while government incurred $14 ($10-$20) on average. Conclusion Induced abortions are associated with substantial costs in Uganda, and patients incur the bulk of the healthcare costs. This reinforces the case made by other researchers: efforts by the government to reduce unsafe abortions by increasing contraceptive coverage or providing safe, legal abortions are critical. PMID:22145859

  4. A Gaussian mixture model based cost function for parameter estimation of chaotic biological systems

    Science.gov (United States)

    Shekofteh, Yasser; Jafari, Sajad; Sprott, Julien Clinton; Hashemi Golpayegani, S. Mohammad Reza; Almasganj, Farshad

    2015-02-01

    Many biological systems, such as neurons or the heart, can exhibit chaotic behavior. Conventional methods for parameter estimation in models of these systems have some limitations caused by sensitivity to initial conditions. In this paper, a novel cost function is proposed to overcome those limitations by building a statistical model of the distribution of the real system's attractor in state space. This cost function is defined by the use of a likelihood score in a Gaussian mixture model (GMM) which is fitted to the observed attractor generated by the real system. Using that learned GMM, a similarity score can be defined by the computed likelihood score of the model time series. We have applied the proposed method to the parameter estimation of two important biological systems, a neuron and a cardiac pacemaker, which show chaotic behavior. Some simulated experiments are given to verify the usefulness of the proposed approach in clean and noisy conditions. The results show the adequacy of the proposed cost function.
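
    A sketch of the GMM-based cost: fit a mixture to delay-embedded points of the observed series, then score a candidate model's output by its negative mean log-likelihood (the embedding, data, and component count here are illustrative, not from the paper).

        import numpy as np
        from sklearn.mixture import GaussianMixture

        def embed(x, dim=2, lag=1):
            """Delay embedding of a scalar series into dim-dimensional points."""
            return np.column_stack([x[i * lag: len(x) - (dim - 1 - i) * lag]
                                    for i in range(dim)])

        rng = np.random.default_rng(6)
        observed = np.sin(np.linspace(0.0, 60.0, 3000)) + 0.05 * rng.standard_normal(3000)
        gmm = GaussianMixture(n_components=8, random_state=0).fit(embed(observed))

        candidate = np.sin(np.linspace(0.0, 60.0, 3000) + 0.3)  # model output to score
        cost = -gmm.score(embed(candidate))  # lower cost = more similar attractor
        print(cost)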

  5. Biomechanical model-based displacement estimation in micro-sensor motion capture

    International Nuclear Information System (INIS)

    Meng, X L; Sun, S Y; Wu, J K; Zhang, Z Q; Wong, W C

    2012-01-01

    In micro-sensor motion capture systems, the estimation of the body displacement in the global coordinate system remains a challenge due to lack of external references. This paper proposes a self-contained displacement estimation method based on a human biomechanical model to track the position of walking subjects in the global coordinate system without any additional supporting infrastructures. The proposed approach makes use of the biomechanics of the lower body segments and the assumption that during walking there is always at least one foot in contact with the ground. The ground contact joint is detected based on walking gait characteristics and used as the external references of the human body. The relative positions of the other joints are obtained from hierarchical transformations based on the biomechanical model. Anatomical constraints are proposed to apply to some specific joints of the lower body to further improve the accuracy of the algorithm. Performance of the proposed algorithm is compared with an optical motion capture system. The method is also demonstrated in outdoor and indoor long distance walking scenarios. The experimental results demonstrate clearly that the biomechanical model improves the displacement accuracy within the proposed framework. (paper)

  6. α-Decomposition for estimating parameters in common cause failure modeling based on causal inference

    International Nuclear Information System (INIS)

    Zheng, Xiaoyu; Yamaguchi, Akira; Takata, Takashi

    2013-01-01

    The traditional α-factor model has focused on the occurrence frequencies of common cause failure (CCF) events. Global α-factors in the α-factor model are defined as fractions of failure probability for particular groups of components. However, there are unknown uncertainties in CCF parameter estimation owing to the scarcity of available failure data. Joint distributions of CCF parameters are actually determined by a set of possible causes, which are characterized by CCF-triggering abilities and occurrence frequencies. In the present paper, the process of α-decomposition (Kelly-CCF method) is developed to learn about sources of uncertainty in CCF parameter estimation. Moreover, it aims to evaluate the CCF risk significance of different causes, which are named decomposed α-factors. Firstly, a Hybrid Bayesian Network is adopted to reveal the relationship between potential causes and failures. Secondly, because all potential causes have different occurrence frequencies and abilities to trigger dependent failures or independent failures, a regression model is provided and proved by conditional probability. Global α-factors are expressed by explanatory variables (causes' occurrence frequencies) and parameters (decomposed α-factors). Finally, an example is provided to illustrate the process of hierarchical Bayesian inference for the α-decomposition process. This study shows that the α-decomposition method can integrate failure information from the cause, component, and system levels. It can parameterize the CCF risk significance of possible causes and can update the probability distributions of global α-factors. It can also provide a reliable way to evaluate uncertainty sources and reduce the uncertainty in probabilistic risk assessment. It is recommended to build databases including CCF parameters and the corresponding occurrence frequencies of causes for each targeted system.

  7. Estimation of biological parameters of marine organisms using linear and nonlinear acoustic scattering model-based inversion methods.

    Science.gov (United States)

    Chu, Dezhang; Lawson, Gareth L; Wiebe, Peter H

    2016-05-01

    The linear inversion commonly used in fisheries and zooplankton acoustics assumes a constant inversion kernel and ignores the uncertainties associated with the shape and behavior of the scattering targets, as well as other relevant animal parameters. Here, errors of the linear inversion due to uncertainty associated with the inversion kernel are quantified. A scattering model-based nonlinear inversion method is presented that takes into account the nonlinearity of the inverse problem and is able to estimate simultaneously animal abundance and the parameters associated with the scattering model inherent to the kernel. It uses sophisticated scattering models to estimate first, the abundance, and second, the relevant shape and behavioral parameters of the target organisms. Numerical simulations demonstrate that the abundance, size, and behavior (tilt angle) parameters of marine animals (fish or zooplankton) can be accurately inferred from the inversion by using multi-frequency acoustic data. The influence of the singularity and uncertainty in the inversion kernel on the inversion results can be mitigated by examining the singular values for linear inverse problems and employing a non-linear inversion involving a scattering model-based kernel.

  8. Comparison Study on Two Model-Based Adaptive Algorithms for SOC Estimation of Lithium-Ion Batteries in Electric Vehicles

    Directory of Open Access Journals (Sweden)

    Yong Tian

    2014-12-01

    State of charge (SOC) estimation is essential to battery management systems in electric vehicles (EVs) to ensure the safe operation of batteries and to provide drivers with the remaining range of the EVs. A number of estimation algorithms have been developed to get an accurate SOC value because the SOC cannot be directly measured with sensors and is closely related to various factors, such as ambient temperature, current rate and battery aging. In this paper, two model-based adaptive algorithms, including the adaptive unscented Kalman filter (AUKF) and adaptive slide mode observer (ASMO), are applied and compared in terms of convergence behavior, tracking accuracy, computational cost and estimation robustness against parameter uncertainties of the battery model in SOC estimation. Two typical driving cycles, including the Dynamic Stress Test (DST) and New European Driving Cycle (NEDC), are applied to evaluate the performance of the two algorithms. Comparison results show that the AUKF has merits in convergence ability and tracking accuracy with an accurate battery model, while the ASMO has lower computational cost and better estimation robustness against parameter uncertainties of the battery model.

  9. Hardware architecture design of a fast global motion estimation method

    Science.gov (United States)

    Liang, Chaobing; Sang, Hongshi; Shen, Xubang

    2015-12-01

    VLSI implementation of gradient-based global motion estimation (GME) faces two main challenges: irregular data access and a high off-chip memory bandwidth requirement. We previously proposed a fast GME method that reduces computational complexity by choosing a certain number of small patches containing corners and using them in a gradient-based framework. A hardware architecture is designed to implement this method and further reduce the off-chip memory bandwidth requirement. On-chip memories are used to store coordinates of the corners and template patches, while the Gaussian pyramids of both the template and reference frame are stored in off-chip SDRAMs. By performing the geometric transform only on the coordinates of the center pixel of a 3-by-3 patch in the template image, a 5-by-5 area containing the warped 3-by-3 patch in the reference image is extracted from the SDRAMs by burst read. Patch-based and burst mode data access helps to keep the off-chip memory bandwidth requirement at the minimum. Although patch size varies at different pyramid levels, all patches are processed in terms of 3-by-3 patches, so the utilization of the patch-processing circuit reaches 100%. FPGA implementation results show that the design utilizes 24,080 bits of on-chip memory and, for a sequence with a resolution of 352x288 and a frame rate of 60 Hz, the off-chip bandwidth requirement is only 3.96 Mbyte/s, compared with 243.84 Mbyte/s for the original gradient-based GME method. This design can be used in applications like video codecs, video stabilization, and super-resolution, where real-time GME is a necessity and a minimum memory bandwidth requirement is appreciated.

  10. Model based estimation of sediment erosion in groyne fields along the River Elbe

    International Nuclear Information System (INIS)

    Prohaska, Sandra; Jancke, Thomas; Westrich, Bernhard

    2008-01-01

    River water quality is still a vital environmental issue, even though ongoing emissions of contaminants are being reduced in several European rivers. The mobility of historically contaminated deposits is a key issue in sediment management strategy and remediation planning. Resuspension of contaminated sediments impacts the water quality and thus is important for river engineering and ecological rehabilitation. The erodibility of the sediments and associated contaminants is difficult to predict because of complex time-dependent physical, chemical, and biological processes, as well as a lack of information. Therefore, in engineering practice the values of the erosion parameters are usually assumed to be constant despite their high spatial and temporal variability, which leads to a large uncertainty in the erosion parameters. The goal of the presented study is to compare the deterministic approach, which assumes a constant critical erosion shear stress, with an innovative approach which treats the critical erosion shear stress as a random variable. Furthermore, the effective value of the critical erosion shear stress, its applicability in numerical models, and the erosion probability are estimated. The results presented here are based on field measurements and numerical modelling of the River Elbe groyne fields.
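
    A minimal Monte Carlo sketch of the probabilistic idea: treat the critical erosion shear stress as a lognormal random variable instead of a constant, and compare the resulting erosion probability and mean flux with the deterministic case. The erosion law and all parameter values are illustrative, not the Elbe calibration.

```python
import numpy as np

rng = np.random.default_rng(2)
tau_bed = 1.2                                  # acting bed shear stress (N/m^2)
M = 5e-4                                       # erodibility constant (kg/m^2/s)
tau_c = rng.lognormal(mean=np.log(1.0), sigma=0.4, size=100_000)

# Partheniades-type excess-shear erosion law, zero below the critical stress
erosion = M * np.maximum(tau_bed / tau_c - 1.0, 0.0)
print("erosion probability:", np.mean(tau_bed > tau_c))
print("mean flux (random tau_c):", erosion.mean())
print("flux with constant tau_c=1:", M * (tau_bed / 1.0 - 1.0))
```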

  11. Model-Based Estimation of Collision Risks of Predatory Birds with Wind Turbines

    Directory of Open Access Journals (Sweden)

    Marcus Eichhorn

    2012-06-01

    Full Text Available The expansion of renewable energies, such as wind power, is a promising way of mitigating climate change. Because of the risk of collision with rotor blades, wind turbines have negative effects on local bird populations, particularly on raptors such as the Red Kite (Milvus milvus). Appropriate assessment tools for these effects have been lacking. To close this gap, we have developed an agent-based, spatially explicit model that simulates the foraging behavior of the Red Kite around its aerie in a landscape consisting of different land-use types. We determined the collision risk of the Red Kite with the turbine as a function of the distance between the wind turbine and the aerie and other parameters. The impact function comprises the synergistic effects of species-specific foraging behavior and landscape structure. The collision risk declines exponentially with increasing distance. The strength of this decline depends on the raptor's foraging behavior, its ability to avoid wind turbines, and the mean wind speed in the region. The collision risks, which are estimated by the simulation model, are in the range of values observed in the field. The derived impact function shows that the collision risk can be described as an aggregated function of distance between the wind turbine and the raptor's aerie. This allows an easy and rapid assessment of the ecological impacts of (existing or planned wind turbines in relation to their spatial location. Furthermore, it implies that minimum buffer zones for different landscapes can be determined in a defensible way. This modeling approach can be extended to other bird species with central-place foraging behavior. It provides a helpful tool for landscape planning aimed at minimizing the impacts of wind power on biodiversity.
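
    The exponential decline of risk with distance lends itself to a very small sketch: an impact function and its inversion into a buffer distance for a target risk level. The coefficients r0 and k are hypothetical placeholders, not the fitted values from the simulation model.

```python
import numpy as np

def collision_risk(d_km, r0=0.05, k=1.1):
    """Annual collision probability at distance d_km from the aerie (toy fit)."""
    return r0 * np.exp(-k * d_km)

def buffer_distance(target, r0=0.05, k=1.1):
    """Smallest distance at which the risk falls below `target`."""
    return np.log(r0 / target) / k

print(collision_risk(1.0))        # risk 1 km from the aerie
print(buffer_distance(0.005))     # ~2.1 km buffer for a 0.5% target risk
```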

  12. Observation- and model-based estimates of particulate dry nitrogen deposition to the oceans

    Directory of Open Access Journals (Sweden)

    A. R. Baker

    2017-07-01

    expected to be more robust than TM4, while TM4 gives access to speciated parameters (NO3− and NH4+) that are more relevant to the observed parameters and which are not available in ACCMIP. Dry deposition fluxes (CalDep) were calculated from the observed concentrations using estimates of dry deposition velocities. Model–observation ratios (RA, n), weighted by grid-cell area and number of observations, were used to assess the performance of the models. Comparison in the three study regions suggests that TM4 overestimates NO3− concentrations (RA, n = 1.4–2.9) and underestimates NH4+ concentrations (RA, n = 0.5–0.7), with spatial distributions in the tropical Atlantic and northern Indian Ocean not being reproduced by the model. In the case of NH4+ in the Indian Ocean, this discrepancy was probably due to seasonal biases in the sampling. Similar patterns were observed in the various comparisons of CalDep to ModDep (RA, n = 0.6–2.6 for NO3−, 0.6–3.1 for NH4+). Values of RA, n for NHx CalDep–ModDep comparisons were approximately double the corresponding values for NH4+ CalDep–ModDep comparisons due to the significant fraction of gas-phase NH3 deposition incorporated in the TM4 and ACCMIP NHx model products. All of the comparisons suffered from the scarcity of observational data and the large uncertainty in the dry deposition velocities used to derive deposition fluxes from concentrations. These uncertainties have been a major limitation on estimates of the flux of material to the oceans for several decades. Recommendations are made for improvements in N deposition estimation through changes in observations, modelling and model–observation comparison procedures. Validation of modelled dry deposition requires effective comparisons to observable aerosol-phase species' concentrations, and this cannot be achieved if model products only report dry deposition flux over the ocean.
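
    As one plausible reading of the weighted model-observation ratio used here, the sketch below computes a ratio of weighted means with weights given by grid-cell area times the number of observations per cell; the data are made up for illustration.

```python
import numpy as np

def ratio_An(model, obs, area, n_obs):
    """Model-observation ratio weighted by cell area and observation count."""
    w = np.asarray(area) * np.asarray(n_obs)
    return np.sum(w * model) / np.sum(w * obs)

model = np.array([0.8, 1.5, 2.2])   # modelled NO3- concentration per grid cell
obs   = np.array([0.5, 1.0, 1.1])   # observed concentration per grid cell
area  = np.array([1.0, 0.9, 0.8])   # relative grid-cell areas
n_obs = np.array([3, 12, 5])        # observations per cell
print(ratio_An(model, obs, area, n_obs))   # >1 means the model overestimates
```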

  13. Changes in Nature's Balance Sheet: Model-based Estimates of Future Worldwide Ecosystem Services

    Directory of Open Access Journals (Sweden)

    Joseph Alcamo

    2005-12-01

    Full Text Available Four quantitative scenarios are presented that describe changes in worldwide ecosystem services up to 2050-2100. A set of soft-linked global models of human demography, economic development, climate, and biospheric processes is used to quantify these scenarios. The global demand for ecosystem services substantially increases up to 2050: cereal consumption by a factor of 1.5 to 1.7, fish consumption (up to the 2020s) by a factor of 1.3 to 1.4, water withdrawals by a factor of 1.3 to 2.0, and biofuel production by a factor of 5.1 to 11.3. The ranges for these estimates reflect differences between the socio-economic assumptions of the scenarios. In all simulations, Sub-Saharan Africa continues to lag behind other parts of the world. Although the demand side of these scenarios presents an overall optimistic view of the future, the supply side is less optimistic: the risk of higher soil erosion (especially in Sub-Saharan Africa) and lower water availability (especially in the Middle East) could slow down an increase in food production. Meanwhile, increasing wastewater discharges during the same period, especially in Latin America (factor of 2 to 4) and Sub-Saharan Africa (factor of 3.6 to 5.6), could interfere with the delivery of freshwater services. Marine fisheries (despite the growth of aquaculture) may not have the ecological capacity to provide for the increased global demand for fish. Our simulations also show an intensification of present tradeoffs between ecosystem services, e.g., expansion of agricultural land (between 2000 and 2050) may be one of the main causes of a 10%-20% loss of total current grassland and forest land and the ecosystem services associated with this land (e.g., genetic resources, wood production, habitat for terrestrial biota and fauna). The scenarios also show that certain hot-spot regions may experience especially rapid changes in ecosystem services: the central part of Africa, southern Asia, and the Middle East. In general

  14. New Software for the Fast Estimation of Population Recombination Rates (FastEPRR in the Genomic Era

    Directory of Open Access Journals (Sweden)

    Feng Gao

    2016-06-01

    Full Text Available Genetic recombination is a very important evolutionary mechanism that mixes parental haplotypes and produces new raw material for organismal evolution. As a result, information on recombination rates is critical for biological research. In this paper, we introduce a new, extremely fast open-source software package (FastEPRR) that uses machine learning to estimate the recombination rate ρ (=4Ner) from intraspecific DNA polymorphism data. When ρ>10 and the number of sampled diploid individuals is large enough (≥50), the variance of ρFastEPRR remains slightly smaller than that of ρLDhat. The new estimate ρcomb (calculated by averaging ρFastEPRR and ρLDhat) has the smallest variance of all cases. When estimating ρFastEPRR, the finite-site model was employed to analyze cases with a high rate of recurrent mutations, and an additional method is proposed to account for the effect of variable recombination rates within windows. Simulations encompassing a wide range of parameters demonstrate that different evolutionary factors, such as demography and selection, may not increase the false positive rate of recombination hotspots. Overall, the accuracy of FastEPRR is similar to that of the well-known method LDhat, but it requires far less computation time. Genetic maps for each human population (YRI, CEU, and CHB) extracted from the 1000 Genomes OMNI data set were obtained in less than 3 d using just a single CPU core. The Pearson pairwise correlation coefficient between the ρFastEPRR and ρLDhat maps is very high, ranging between 0.929 and 0.987 at a 5-Mb scale. Considering that sample sizes for these kinds of data are increasing dramatically with advances in next-generation sequencing technologies, FastEPRR (freely available at http://www.picb.ac.cn/evolgen/) is expected to become a widely used tool for establishing genetic maps and studying recombination hotspots in the population genomic era.

  15. Intent-Estimation- and Motion-Model-Based Collision Avoidance Method for Autonomous Vehicles in Urban Environments

    Directory of Open Access Journals (Sweden)

    Rulin Huang

    2017-04-01

    Full Text Available Existing collision avoidance methods for autonomous vehicles ignore the driving intent of detected vehicles and thus cannot satisfy the requirements of autonomous driving in urban environments, because of their high false detection rates for collisions with vehicles on winding roads and their missed detections of collisions with maneuvering vehicles. This study introduces an intent-estimation- and motion-model-based (IEMMB) method to address these disadvantages. First, a state vector is constructed by combining the road structure and the moving state of detected vehicles. A Gaussian mixture model is used to learn the maneuvering patterns of vehicles from collected data, and the patterns are used to estimate the driving intent of the detected vehicles. Then, a desirable long-term trajectory is obtained by weighting time and comfort. The long-term trajectory and the short-term trajectory, which is predicted using a constant yaw rate motion model, are fused to achieve an accurate trajectory. Finally, considering the moving state of the autonomous vehicle, collisions can be detected and avoided. Experiments have shown that the intent estimation method performed well, achieving an accuracy of 91.7% on straight roads and 90.5% on winding roads, which is much higher than that achieved by a method that ignores the road structure. The average collision detection distance is increased by more than 8 m. In addition, the maximum yaw rate and acceleration during an evasive maneuver are decreased, indicating an improvement in driving comfort.
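
    A minimal sketch of the GMM-based intent step, assuming one mixture per maneuver class fitted on simple kinematic features and classification by maximum likelihood; the features, class set, and data are synthetic stand-ins for the collected driving data.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# features: [lateral offset (m), heading relative to lane (rad), speed (m/s)]
rng = np.random.default_rng(0)
keep = rng.normal([0.0, 0.00, 15], [0.3, 0.02, 2], size=(300, 3))  # lane keeping
left = rng.normal([1.2, 0.10, 14], [0.4, 0.03, 2], size=(300, 3))  # left change

models = {}
for name, data in [("keep", keep), ("left", left)]:
    models[name] = GaussianMixture(n_components=2, random_state=0).fit(data)

def intent(x):
    """Return the maneuver class whose GMM gives the highest likelihood."""
    x = np.atleast_2d(x)
    return max(models, key=lambda m: models[m].score(x))

print(intent([0.9, 0.08, 14.5]))   # -> 'left'
```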

  16. A Model-Based Bayesian Estimation of the Rate of Evolution of VNTR Loci in Mycobacterium tuberculosis

    Science.gov (United States)

    Aandahl, R. Zachariah; Reyes, Josephine F.; Sisson, Scott A.; Tanaka, Mark M.

    2012-01-01

    Variable numbers of tandem repeats (VNTR) typing is widely used for studying the bacterial cause of tuberculosis. Knowledge of the rate of mutation of VNTR loci facilitates the study of the evolution and epidemiology of Mycobacterium tuberculosis. Previous studies have applied population genetic models to estimate the mutation rate, leading to estimates varying widely from around to per locus per year. Resolving this issue using more detailed models and statistical methods would lead to improved inference in the molecular epidemiology of tuberculosis. Here, we use a model-based approach that incorporates two alternative forms of a stepwise mutation process for VNTR evolution within an epidemiological model of disease transmission. Using this model in a Bayesian framework, we estimate the mutation rate of VNTR in M. tuberculosis from four published data sets of VNTR profiles from Albania, Iran, Morocco and Venezuela. In the first variant, the mutation rate increases linearly with respect to repeat numbers (linear model); in the second, the mutation rate is constant across repeat numbers (constant model). We find that under the constant model, the mean mutation rate per locus is (95% CI: , ) and under the linear model, the mean mutation rate per locus per repeat unit is (95% CI: , ). These new estimates represent a high rate of mutation at VNTR loci compared to previous estimates. To compare the two models we use posterior predictive checks to ascertain which of the two models is better able to reproduce the observed data. From this procedure we find that the linear model performs better than the constant model. The general framework we use allows the possibility of extending the analysis to more complex models in the future. PMID:22761563
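
    A toy Gillespie-style simulation of the two stepwise mutation processes the paper compares: a constant rate per locus versus a rate that grows linearly with the current repeat number. The rate values are placeholders, not the paper's posterior estimates.

```python
import numpy as np

rng = np.random.default_rng(2)

def evolve(repeats, years, mu=1e-3, linear=False):
    """Repeat count at a VNTR locus after `years` of stepwise mutation."""
    r, t = int(repeats), 0.0
    while True:
        rate = mu * r if linear else mu        # mutations per year at this locus
        t += rng.exponential(1.0 / rate)       # waiting time to the next mutation
        if t > years:
            return r
        r = max(1, r + rng.choice([-1, 1]))    # one repeat gained or lost

print([evolve(10, 500) for _ in range(8)])              # constant model
print([evolve(10, 500, linear=True) for _ in range(8)]) # linear model mutates more
```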

  17. MIDAS-FAST: Design and Validation of a Model-Based Tool to Predict Operator Performance with Robotic Arm Automation

    Science.gov (United States)

    Sebok, Angelia (Principal Investigator); Wickens, Christopher; Gacy, Marc; Brehon, Mark; Scott-Nash, Shelly; Sarter, Nadine; Li, Huiyang; Gore, Brian; Hooey, Becky

    2017-01-01

    The Coalition for Aerospace and Science (CAS) is hosting an exhibition on Capitol Hill on June 14, 2017, to highlight the contributions of CAS members to NASA's portfolio of activities. This exhibition represents an opportunity for an HFES member's groundbreaking work to be displayed and to build support within Congress for NASA's human research program, including in those areas that are of specific interest to the HFE community. The intent of this poster presentation is to demonstrate the positive outcome that comes from funding HFE-related research on a project like the one exemplified by MIDAS-FAST.

  18. A Method for Assessing the Quality of Model-Based Estimates of Ground Temperature and Atmospheric Moisture Using Satellite Data

    Science.gov (United States)

    Wu, Man Li C.; Schubert, Siegfried; Lin, Ching I.; Stajner, Ivanka; Einaudi, Franco (Technical Monitor)

    2000-01-01

    A method is developed for validating model-based estimates of atmospheric moisture and ground temperature using satellite data. The approach relates errors in estimates of clear-sky longwave fluxes at the top of the Earth-atmosphere system to errors in geophysical parameters. The fluxes include clear-sky outgoing longwave radiation (CLR) and radiative flux in the window region between 8 and 12 microns (RadWn). The approach capitalizes on the availability of satellite estimates of CLR and RadWn and other auxiliary satellite data, and multiple global four-dimensional data assimilation (4-DDA) products. The basic methodology employs off-line forward radiative transfer calculations to generate synthetic clear-sky longwave fluxes from two different 4-DDA data sets. Simple linear regression is used to relate the clear-sky longwave flux discrepancies to discrepancies in ground temperature (δTg) and broad-layer integrated atmospheric precipitable water (δpw). The slopes of the regression lines define sensitivity parameters which can be exploited to help interpret mismatches between satellite observations and model-based estimates of clear-sky longwave fluxes. For illustration we analyze the discrepancies in the clear-sky longwave fluxes between an early implementation of the Goddard Earth Observing System Data Assimilation System (GEOS2) and a recent operational version of the European Centre for Medium-Range Weather Forecasts data assimilation system. The analysis of the synthetic clear-sky flux data shows that simple linear regression employing δTg and broad-layer δpw provides a good approximation to the full radiative transfer calculations, typically explaining more than 90% of the 6-hourly variance in the flux differences. These simple regression relations can be inverted to "retrieve" the errors in the geophysical parameters. Uncertainties (normalized by standard deviation) in the monthly mean retrieved parameters range from 7% for
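
    A small sketch of the regression-then-inversion idea: fit the flux differences to δTg and δpw, stack the fitted slopes into a sensitivity matrix, and invert it to "retrieve" geophysical errors from a pair of observed flux discrepancies. The sensitivities and noise levels are synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500
dTg, dpw = rng.normal(0, 2, n), rng.normal(0, 3, n)          # K, kg/m^2
dCLR   = 1.8 * dTg - 1.1 * dpw + rng.normal(0, 0.3, n)       # W/m^2
dRadWn = 1.2 * dTg - 0.3 * dpw + rng.normal(0, 0.3, n)

X = np.column_stack([dTg, dpw])
a = np.linalg.lstsq(X, dCLR, rcond=None)[0]    # sensitivity of CLR
b = np.linalg.lstsq(X, dRadWn, rcond=None)[0]  # sensitivity of RadWn
S = np.vstack([a, b])                          # 2x2 sensitivity matrix

# invert the two flux discrepancies into parameter errors
retrieved = np.linalg.solve(S, np.array([3.0, 2.0]))
print("retrieved dTg, dpw:", retrieved)
```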

  20. Calculational model based on influence function method for power distribution and control rod worth in fast reactors

    International Nuclear Information System (INIS)

    Sanda, T.; Azekura, K.

    1983-01-01

    A model for calculating the power distribution and the control rod worth in fast reactors has been developed. This model is based on the influence function method. The characteristics of the model are as follows: Influence functions for any changes in the control rod insertion ratio are expressed by using an influence function for an appropriate control rod insertion in order to reduce the computer memory size required for the method. A control rod worth is calculated on the basis of a one-group approximation in which cross sections are generated by bilinear (flux-adjoint) weighting, not the usual flux weighting, in order to reduce the collapse error. An effective neutron multiplication factor is calculated by adjoint weighting in order to reduce the effect of the error in the one-group flux distribution. The results obtained in numerical examinations of a prototype fast reactor indicate that this method is suitable for on-line core performance evaluation because of a short computing time and a small memory size
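
    The bilinear collapse the abstract describes is compact enough to show directly: one-group cross sections formed with flux-adjoint weighting versus the usual flux weighting. The group data below are illustrative.

```python
import numpy as np

phi   = np.array([1.0, 3.0, 2.0, 0.5])   # group fluxes
adj   = np.array([1.4, 1.1, 0.9, 0.8])   # adjoint fluxes
sigma = np.array([2.1, 1.6, 1.2, 0.9])   # group cross sections (barn)

sig_flux     = np.sum(phi * sigma) / np.sum(phi)               # usual weighting
sig_bilinear = np.sum(phi * adj * sigma) / np.sum(phi * adj)   # reduces collapse error
print(sig_flux, sig_bilinear)
```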

  1. A Production Efficiency Model-Based Method for Satellite Estimates of Corn and Soybean Yields in the Midwestern US

    Directory of Open Access Journals (Sweden)

    Andrew E. Suyker

    2013-11-01

    Full Text Available Remote sensing techniques that provide synoptic and repetitive observations over large geographic areas have become increasingly important in studying the role of agriculture in global carbon cycles. However, it is still challenging to model crop yields based on remotely sensed data due to the variation in radiation use efficiency (RUE) across crop types and the effects of spatial heterogeneity. In this paper, we propose a production efficiency model-based method to estimate corn and soybean yields with MODerate Resolution Imaging Spectroradiometer (MODIS) data by explicitly handling the following two issues: (1) field-measured RUE values for corn and soybean are applied to relatively pure pixels instead of the biome-wide RUE value prescribed in the MODIS vegetation productivity product (MOD17); and (2) contributions to productivity from vegetation other than crops in mixed pixels are deducted at the level of MODIS resolution. Our estimated yields statistically correlate with the national survey data for rainfed counties in the Midwestern US with low errors for both corn (R2 = 0.77; RMSE = 0.89 MT/ha) and soybeans (R2 = 0.66; RMSE = 0.38 MT/ha). Because the proposed algorithm does not require any retrospective analysis that constructs empirical relationships between the reported yields and remotely sensed data, it could monitor crop yields over large areas.
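
    A toy version of the two steps the paper emphasizes: a crop-specific RUE applied in a production-efficiency model, followed by removal of the non-crop contribution in a mixed pixel. All values (RUE, crop fraction, conversion factors) are illustrative assumptions.

```python
import numpy as np

par  = np.full(120, 8.0)                 # daily incident PAR (MJ/m^2)
fpar = np.linspace(0.2, 0.8, 120)        # satellite-derived fraction absorbed
rue_corn = 3.0                           # field-measured RUE (g C/MJ), crop-specific

gpp_pixel = np.sum(rue_corn * fpar * par)        # seasonal g C/m^2, whole pixel

f_crop, gpp_noncrop = 0.7, 250.0                 # crop fraction; grass/tree GPP
gpp_crop = (gpp_pixel - (1 - f_crop) * gpp_noncrop) / f_crop

npp_crop = 0.45 * gpp_crop                       # assumed carbon-use efficiency
harvest_index, c_frac = 0.5, 0.45                # grain share; carbon fraction
yield_t_ha = npp_crop * harvest_index / c_frac * 0.01   # g/m^2 -> t/ha
print(round(yield_t_ha, 1))
```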

  2. Geostatistical Model-Based Estimates of Schistosomiasis Prevalence among Individuals Aged ≤20 Years in West Africa

    Science.gov (United States)

    Schur, Nadine; Hürlimann, Eveline; Garba, Amadou; Traoré, Mamadou S.; Ndir, Omar; Ratard, Raoult C.; Tchuem Tchuenté, Louis-Albert; Kristensen, Thomas K.; Utzinger, Jürg; Vounatsou, Penelope

    2011-01-01

    Background Schistosomiasis is a water-based disease that is believed to affect over 200 million people with an estimated 97% of the infections concentrated in Africa. However, these statistics are largely based on population re-adjusted data originally published by Utroska and colleagues more than 20 years ago. Hence, these estimates are outdated due to large-scale preventive chemotherapy programs, improved sanitation, water resources development and management, among other reasons. For planning, coordination, and evaluation of control activities, it is essential to possess reliable schistosomiasis prevalence maps. Methodology We analyzed survey data compiled on a newly established open-access global neglected tropical diseases database (i) to create smooth empirical prevalence maps for Schistosoma mansoni and S. haematobium for individuals aged ≤20 years in West Africa, including Cameroon, and (ii) to derive country-specific prevalence estimates. We used Bayesian geostatistical models based on environmental predictors to take into account potential clustering due to common spatially structured exposures. Prediction at unobserved locations was facilitated by joint kriging. Principal Findings Our models revealed that 50.8 million individuals aged ≤20 years in West Africa are infected with either S. mansoni, or S. haematobium, or both species concurrently. The country prevalence estimates ranged between 0.5% (The Gambia) and 37.1% (Liberia) for S. mansoni, and between 17.6% (The Gambia) and 51.6% (Sierra Leone) for S. haematobium. We observed that the combined prevalence for both schistosome species is two-fold lower in Gambia than previously reported, while we found an almost two-fold higher estimate for Liberia (58.3%) than reported before (30.0%). Our predictions are likely to overestimate overall country prevalence, since modeling was based on children and adolescents up to the age of 20 years who are at highest risk of infection. Conclusion/Significance We

  3. Statistical estimation of fast-reactor fuel-element lifetime

    International Nuclear Information System (INIS)

    Proshkin, A.A.; Likhachev, Yu.I.; Tuzov, A.N.; Zabud'ko, L.M.

    1980-01-01

    On the basis of a statistical analysis, the main parameters having a significant influence on the theoretical determination of fuel-element lifetimes in fast power reactors operating under steady power conditions are isolated. These include the creep and swelling of the fuel and shell materials, prolonged-plasticity lag, shell-material corrosion, gap contact conductivity, and the strain diagrams of the shell and fuel materials obtained for irradiated materials at the corresponding strain rates. Through deeper investigation of these material properties, it is possible to significantly increase the reliability of fuel-element lifetime predictions in designing fast reactors and to optimize the structure of fuel elements more correctly. The results of such calculations must obviously be taken into account in the cost-benefit analysis of projected new reactors and in choosing the optimal fuel burnup. 9 refs

  4. A fast infrared radiative transfer model based on the adding-doubling method for hyperspectral remote-sensing applications

    International Nuclear Information System (INIS)

    Zhang Zhibo; Yang Ping; Kattawar, George; Huang, H.-L.; Greenwald, Thomas; Li Jun; Baum, Bryan A.; Zhou, Daniel K.; Hu Yongxiang

    2007-01-01

    A fast infrared radiative transfer (RT) model is developed on the basis of the adding-doubling principle, hereafter referred to as FIRTM-AD, to facilitate the forward RT simulations involved in hyperspectral remote-sensing applications under cloudy-sky conditions. A pre-computed look-up table (LUT) of the bidirectional reflection and transmission functions and emissivities of ice clouds in conjunction with efficient interpolation schemes is used in FIRTM-AD to alleviate the computational burden of the doubling process. FIRTM-AD is applicable to a variety of cloud conditions, including vertically inhomogeneous or multilayered clouds. In particular, this RT model is suitable for the computation of high-spectral-resolution radiance and brightness temperature (BT) spectra at both the top-of-atmosphere and surface, and thus is useful for satellite and ground-based hyperspectral sensors. In terms of computer CPU time, FIRTM-AD is approximately 100-250 times faster than the well-known discrete-ordinate (DISORT) RT model for the same conditions. The errors of FIRTM-AD, specified as root-mean-square (RMS) BT differences with respect to their DISORT counterparts, are generally smaller than 0.1 K

  5. Consumers’ estimation of calorie content at fast food restaurants: cross sectional observational study

    OpenAIRE

    Block, Jason Perry; Condon, Suzanne K; Kleinman, Ken Paul; Mullen, Jewel; Linakis, Stephanie; Rifas-Shiman, Sheryl Lynn; Gillman, Matthew William

    2013-01-01

    Objective: To investigate estimation of calorie (energy) content of meals from fast food restaurants in adults, adolescents, and school age children. Design: Cross sectional study of repeated visits to fast food restaurant chains. Setting: 89 fast food restaurants in four cities in New England, United States: McDonald’s, Burger King, Subway, Wendy’s, KFC, Dunkin’ Donuts. Participants: 1877 adults and 330 school age children visiting restaurants at dinnertime (evening meal) in 2010 and 2011; 1...

  6. A Fast Soft Bit Error Rate Estimation Method

    Directory of Open Access Journals (Sweden)

    Ait-Idir Tarik

    2010-01-01

    Full Text Available We have suggested in a previous publication a method to estimate the Bit Error Rate (BER) of a digital communications system instead of using the famous Monte Carlo (MC) simulation. This method was based on the estimation of the probability density function (pdf) of soft observed samples. The kernel method was used for the pdf estimation. In this paper, we suggest using a Gaussian Mixture (GM) model. The Expectation Maximisation algorithm is used to estimate the parameters of this mixture. The optimal number of Gaussians is computed by using Mutual Information Theory. The analytical expression of the BER is therefore simply given by using the different estimated parameters of the Gaussian Mixture. Simulation results are presented to compare the three mentioned methods: Monte Carlo, Kernel and Gaussian Mixture. We analyze the performance of the proposed BER estimator in the framework of a multiuser code division multiple access system and show that attractive performance is achieved compared with conventional MC or Kernel aided techniques. The results show that the GM method can drastically reduce the number of samples needed to estimate the BER, reducing the required simulation run-time even at very low BER.
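
    A minimal sketch of the GM idea, assuming BPSK-like soft samples for a transmitted +1 bit: fit a one-dimensional Gaussian mixture and read the BER analytically as the mixture mass below the decision threshold, rather than counting rare Monte Carlo errors. The channel values are synthetic; the mixture order is fixed rather than chosen by mutual information as in the paper.

```python
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)
soft = rng.choice([0.7, 1.3], 5000) + 0.45 * rng.standard_normal(5000)  # toy channel

gm = GaussianMixture(n_components=2, random_state=0).fit(soft.reshape(-1, 1))
w  = gm.weights_
mu = gm.means_.ravel()
sd = np.sqrt(gm.covariances_.ravel())

# P(sample < 0 | bit = +1), evaluated analytically from the fitted mixture
ber = np.sum(w * norm.cdf((0.0 - mu) / sd))
print(f"GM-based BER estimate: {ber:.2e}")
```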

  7. Estimating renal function in children: a new GFR-model based on serum cystatin C and body cell mass.

    Science.gov (United States)

    Andersen, Trine Borup

    2012-07-01

    This PhD thesis is based on four individual studies including 131 children aged 2-14 years with nephro-urologic disorders. The majority (72%) of children had a normal renal function (GFR > 82 ml/min/1.73 square metres), and only 8% had a reduced renal function. The thesis' main aims were: 1) to develop a more accurate GFR model based on a novel theory of body cell mass (BCM) and cystatin C (CysC); 2) to investigate the diagnostic performance in comparison to other models as well as serum CysC and creatinine; 3) to validate the new model's precision and validity. The model's diagnostic performance was investigated in study I as the ability to detect changes in renal function (total day-to-day variation), and in study IV as the ability to discriminate between normal and reduced function. The model's precision and validity were indirectly evaluated in studies II and III, and in study I accuracy was estimated by comparison to reference GFR. Several prediction models based on CysC or a combination of CysC and serum creatinine have been developed for predicting GFR in children. Despite these efforts to improve GFR estimates, no alternative to exogenous methods has been found, and Schwartz's formula based on height, creatinine and an empirically derived constant is still recommended for GFR estimation in children. However, the inclusion of BCM as a possible variable in a CysC-based prediction model had not yet been explored. As CysC is produced at a constant rate by all nucleated cells, we hypothesized that including BCM in a new prediction model would increase the accuracy of the GFR estimate. Study I aimed at deriving the new GFR-prediction model based on the novel theory of CysC and BCM and comparing its performance to previously published models. The BCM-model took the form GFR (mL/min) = 10.2 × (BCM/CysC)^0.40 × (height × body surface area/Crea)^0.65. The model predicted 99% within ±30% of reference GFR, and 67% within ±10%. This was higher than any other model. The
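
    The BCM-model is a direct formula, transcribed below as reported in the abstract. The input units are not stated in the abstract, so the ones suggested in the comments are assumptions for illustration only.

```python
def gfr_bcm(bcm, cysc, height, bsa, crea):
    """GFR (mL/min) = 10.2 * (BCM/CysC)^0.40 * (height*BSA/Crea)^0.65.

    Assumed units (not given in the abstract): BCM in kg, CysC in mg/L,
    height in cm, BSA in m^2, creatinine in micromol/L.
    """
    return 10.2 * (bcm / cysc) ** 0.40 * (height * bsa / crea) ** 0.65

print(round(gfr_bcm(bcm=20, cysc=0.9, height=140, bsa=1.2, crea=45), 1))
```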

  8. Evaluation of pipeline defect's characteristic axial length via model-based parameter estimation in ultrasonic guided wave-based inspection

    International Nuclear Information System (INIS)

    Wang, Xiaojuan; Tse, Peter W; Dordjevich, Alexandar

    2011-01-01

    The reflection signal from a defect in the process of guided wave-based pipeline inspection usually includes sufficient information to detect and define the defect. In previous research, it has been found that the reflection of guided waves from even a complex defect primarily results from the interference between reflection components generated at the front and the back edges of the defect. The respective contribution of different parameters of a defect to the overall reflection can be affected by the features of the two primary reflection components. The identification of these components embedded in the reflection signal is therefore useful in characterizing the defect concerned. In this research, we propose a model-based parameter estimation method, aided by the Hilbert–Huang transform technique, to decompose a reflection signal and enable characterization of the pipeline defect. Once the two primary edge reflection components are decomposed and identified, the distance between the reflection positions, which closely relates to the axial length of the defect, can be easily and accurately determined. Considering the irregular profiles of complex pipeline defects at their two edges, which is often the case in real situations, the average of the varied axial lengths of such a defect along the circumference of the pipeline is used in this paper as the characteristic value of the actual axial length for comparison purposes. Experimental results from artificial defects and real corrosion in sample pipes are presented to demonstrate the effectiveness of the proposed method
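
    Once the two edge reflections are separated in time, the geometry reduces to a one-line relation: the back-edge reflection travels an extra round trip across the defect. A sketch, with an illustrative (not mode-specific) group velocity:

```python
def axial_length(dt_s, c_g=3200.0):
    """Defect axial length (m) from the edge-reflection separation dt_s (s).

    c_g is the group velocity of the guided-wave mode; 3200 m/s is a
    placeholder value, not taken from the paper.
    """
    return c_g * dt_s / 2.0

print(axial_length(31e-6))   # ~50 mm for a 31 microsecond separation
```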

  9. A regional mass balance model based on total ammoniacal nitrogen for estimating ammonia emissions from beef cattle in Alberta Canada

    Science.gov (United States)

    Chai, Lilong; Kröbel, Roland; Janzen, H. Henry; Beauchemin, Karen A.; McGinn, Sean M.; Bittman, Shabtai; Atia, Atta; Edeogu, Ike; MacDonald, Douglas; Dong, Ruilan

    2014-08-01

    Animal feeding operations are primary contributors of anthropogenic ammonia (NH3) emissions in North America and Europe. Mathematical modeling of NH3 volatilization from each stage of livestock manure management allows comprehensive quantitative estimates of emission sources and nutrient losses. A regionally-specific mass balance model based on total ammoniacal nitrogen (TAN) content in animal manure was developed for estimating NH3 emissions from beef farming operations in western Canada. Total N excretion in urine and feces was estimated from animal diet composition, feed dry matter intake and N utilization for beef cattle categories and production stages. Mineralization of organic N, immobilization of TAN, nitrification, and denitrification of N compounds in manure were incorporated into the model to account for quantities of TAN at each stage of manure handling. Ammonia emission factors were specified for different animal housing (feedlots, barns), grazing, manure storage (including composting and stockpiling) and land spreading (tilled and untilled land), and were modified for temperature. The model computed NH3 emissions from all beef cattle sub-classes including cows, calves, breeding bulls, steers for slaughter, and heifers for slaughter and replacement. Estimated NH3 emissions were about 1.11 × 10⁵ Mg NH3 in Alberta in 2006, with a mean of 18.5 kg animal⁻¹ yr⁻¹ (15.2 kg NH3-N animal⁻¹ yr⁻¹), which is 23.5% of the annual N intake of beef cattle (64.7 kg animal⁻¹ yr⁻¹). The percentage of N intake volatilized as NH3-N was 50% for steers and heifers for slaughter, and between 11 and 14% for all other categories. Steers and heifers for slaughter were the two largest contributors (3.5 × 10⁴ and 3.9 × 10⁴ Mg, respectively) at 31.5 and 32.7% of total NH3 emissions because most growing animals were finished in feedlots. Animal housing and grazing contributed roughly 63% of the total NH3 emissions (feedlots, barns and pastures contributed 54.4, 0.2 and 8.1% of

  10. Fast Component Pursuit for Large-Scale Inverse Covariance Estimation.

    Science.gov (United States)

    Han, Lei; Zhang, Yu; Zhang, Tong

    2016-08-01

    The maximum likelihood estimation (MLE) for the Gaussian graphical model, which is also known as the inverse covariance estimation problem, has gained increasing interest recently. Most existing works assume that inverse covariance estimators contain sparse structure and then construct models with the ℓ1 regularization. In this paper, in contrast to existing works, we study the inverse covariance estimation problem from another perspective by efficiently modeling the low-rank structure in the inverse covariance, which is assumed to be a combination of a low-rank part and a diagonal matrix. One motivation for this assumption is that the low-rank structure is common in many applications including climate and financial analysis, and another is that such an assumption can reduce the computational complexity when computing its inverse. Specifically, we propose an efficient COmponent Pursuit (COP) method to obtain the low-rank part, where each component can be sparse. For optimization, the COP method greedily learns a rank-one component in each iteration by maximizing the log-likelihood. Moreover, the COP algorithm enjoys several appealing properties including the existence of an efficient solution in each iteration and a theoretical guarantee on the convergence of this greedy approach. Experiments on large-scale synthetic and real-world datasets including thousands of millions of variables show that the COP method is faster than the state-of-the-art techniques for the inverse covariance estimation problem when achieving comparable log-likelihood on test data.

  11. Estimate of an environmental magnetic field of fast radio bursts

    International Nuclear Information System (INIS)

    Lin, Wei-Li; Dai, Zi-Gao

    2016-01-01

    Fast radio bursts (FRBs) are a type of newly-discovered transient astronomical phenomenon. They have short durations, high dispersion measures and a high event rate. However, due to unknown distances and undetected electromagnetic counterparts at other wavebands, it is difficult to investigate FRBs further. Here we propose a method to study their environmental magnetic field using an indirect method. Starting with dispersion measures and rotation measures (RMs), we try to obtain the parallel magnetic field component B̄∥, the average value along the line of sight in the host galaxy. Because both RMs and redshifts are now unavailable, we demonstrate the dependence of B̄∥ on these two separate quantities. This result, if the RM and redshift of an FRB are measured, would be expected to provide a clue towards understanding the environmental magnetic field of an FRB. (paper)
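
    The underlying relation is the standard pulsar-style combination of the two integrals RM = 0.81 ∫ n_e B∥ dl and DM = ∫ n_e dl. The sketch below evaluates the electron-density-weighted mean field from them, ignoring the redshift corrections that the paper's host-galaxy analysis requires.

```python
def b_parallel_uG(rm_rad_m2, dm_pc_cm3):
    """Mean line-of-sight field (microgauss) from RM (rad/m^2) and DM (pc/cm^3)."""
    return 1.232 * rm_rad_m2 / dm_pc_cm3

print(b_parallel_uG(rm_rad_m2=100.0, dm_pc_cm3=500.0))  # ~0.25 microgauss
```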

  12. Fast Rate Estimation for RDO Mode Decision in HEVC

    Directory of Open Access Journals (Sweden)

    Maxim P. Sharabayko

    2014-12-01

    Full Text Available The latest H.265/HEVC video compression standard is able to provide two times higher compression efficiency than the current industrial standard, H.264/AVC. However, coding complexity has also increased. The main bottleneck of the compression process is the rate-distortion optimization (RDO) stage, as it involves numerous sequential syntax-based binary arithmetic coding (SBAC) loops. In this paper, we present an entropy-based RDO estimation technique for H.265/HEVC compression, instead of the common approach based on the SBAC. Our RDO implementation reduces RDO complexity at an average bit rate overhead of 1.54%. At the same time, elimination of the SBAC from the RDO estimation reduces block interdependencies, thus providing an opportunity for the development of a compression system with parallel processing of multiple blocks of a video frame.
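
    The entropy-based shortcut can be shown in a few lines: instead of running the arithmetic-coding engine for every candidate, approximate a block's rate as the information content of its symbols under their probabilities. The symbol model below is hypothetical.

```python
import numpy as np

def estimated_bits(symbols, prob):
    """Approximate rate as the sum of -log2 p(s) over the block's symbols."""
    return -np.sum(np.log2([prob[s] for s in symbols]))

prob = {"zero": 0.6, "small": 0.3, "large": 0.1}   # hypothetical symbol model
block = ["zero", "zero", "small", "zero", "large", "small"]
print(f"{estimated_bits(block, prob):.2f} bits")   # feeds the cost J = D + lambda*R
```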

  13. Fast skin dose estimation system for interventional radiology.

    Science.gov (United States)

    Takata, Takeshi; Kotoku, Jun'ichi; Maejima, Hideyuki; Kumagai, Shinobu; Arai, Norikazu; Kobayashi, Takenori; Shiraishi, Kenshiro; Yamamoto, Masayoshi; Kondo, Hiroshi; Furui, Shigeru

    2018-03-01

    To minimise the radiation dermatitis related to interventional radiology (IR), rapid and accurate dose estimation has been sought for all procedures. We propose a technique for estimating the patient skin dose rapidly and accurately using Monte Carlo (MC) simulation with a graphics processing unit (GPU, GTX 1080; Nvidia Corp.). The skin dose distribution is simulated based on an individual patient's computed tomography (CT) dataset for fluoroscopic conditions after the CT dataset has been segmented into air, water and bone based on pixel values. The skin is assumed to be one layer at the outer surface of the body. Fluoroscopic conditions are obtained from the log file of a fluoroscopic examination. Estimating the absorbed skin dose distribution requires calibration of the dose simulated by our system. For this purpose, a linear function was used to approximate the relation between the simulated dose and the dose measured using radiophotoluminescence (RPL) glass dosimeters in a water-equivalent phantom. Differences in maximum skin dose between our system and the Particle and Heavy Ion Transport code System (PHITS) were at most 6.1%. The relative statistical error (2σ) of the simulated dose obtained using our system was ≤3.5%. Using the GPU, the simulation on a chest CT dataset aimed at the heart took 3.49 s on average: the GPU is 122 times faster than a CPU (Core i7-7700K; Intel Corp.). Our system (using the GPU, the log file, and the CT dataset) estimated the skin dose more rapidly and more accurately than conventional methods.

  14. Fast covariance estimation for innovations computed from a spatial Gibbs point process

    DEFF Research Database (Denmark)

    Coeurjolly, Jean-Francois; Rubak, Ege

    In this paper, we derive an exact formula for the covariance of two innovations computed from a spatial Gibbs point process and suggest a fast method for estimating this covariance. We show how this methodology can be used to estimate the asymptotic covariance matrix of the maximum pseudolikelihood estimator.

  15. Fast Parabola Detection Using Estimation of Distribution Algorithms

    Directory of Open Access Journals (Sweden)

    Jose de Jesus Guerrero-Turrubiates

    2017-01-01

    Full Text Available This paper presents a new method based on Estimation of Distribution Algorithms (EDAs) to detect parabolic shapes in synthetic and medical images. The method computes a virtual parabola using three random boundary pixels to calculate the constant values of the generic parabola equation. The resulting parabola is evaluated by matching it with the parabolic shape in the input image, using the Hadamard product as the fitness function. The proposed method is evaluated in terms of computational time and compared with two implementations of the generalized Hough transform and the RANSAC method for parabola detection. Experimental results show that the proposed method outperforms the comparative methods in execution time by about 93.61% on synthetic images and 89% on retinal fundus and human plantar arch images. In addition, experimental results have also shown that the proposed method can be highly suitable for different medical applications.
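
    The core of the approach fits in a short sketch: three boundary pixels determine a candidate parabola y = ax² + bx + c, and the Hadamard (elementwise) product of the rendered curve with the edge image scores the match. For brevity, the EDA search is replaced here by plain random sampling of candidate triples.

```python
import numpy as np

rng = np.random.default_rng(5)

def fit_parabola(pts):
    x, y = pts[:, 0], pts[:, 1]
    return np.linalg.solve(np.vander(x, 3), y)        # coefficients [a, b, c]

def fitness(coef, edge_img):
    h, w = edge_img.shape
    xs = np.arange(w)
    ys = np.rint(np.polyval(coef, xs)).astype(int)
    ok = (ys >= 0) & (ys < h)
    mask = np.zeros_like(edge_img)
    mask[ys[ok], xs[ok]] = 1.0
    return np.sum(mask * edge_img)                    # Hadamard-product overlap

# synthetic edge image containing one parabola
edge = np.zeros((100, 100))
xs = np.arange(100)
edge[np.rint(0.02 * (xs - 50) ** 2 + 20).astype(int), xs] = 1.0

ys_e, xs_e = np.nonzero(edge)
best, best_f = None, -1.0
for _ in range(300):
    idx = rng.choice(xs_e.size, 3, replace=False)
    pts = np.column_stack([xs_e[idx], ys_e[idx]]).astype(float)
    try:
        coef = fit_parabola(pts)
    except np.linalg.LinAlgError:                     # degenerate (repeated x) sample
        continue
    f = fitness(coef, edge)
    if f > best_f:
        best, best_f = coef, f
print(best, best_f)   # approaches [0.02, -2, 70] with overlap 100
```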

  16. Fast Conceptual Cost Estimating of Aerospace Projects Using Historical Information

    Science.gov (United States)

    Butts, Glenn

    2007-01-01

    Accurate estimates can be created in less than a minute by applying powerful techniques and algorithms to create an Excel-based parametric cost model. In five easy steps you will learn how to normalize your company's historical cost data to the new project parameters. This paper provides a complete, easy-to-understand, step-by-step how-to guide. Such a guide does not seem to currently exist. Over 2,000 hours of research, data collection, and trial and error, and thousands of lines of Excel Visual Basic for Applications (VBA) code were invested in developing these methods. While VBA is not required to use this information, it increases the power and aesthetics of the model. Implementing all of the steps described, while not required, will increase the accuracy of the results.

  17. Model-based mean square error estimators for k-nearest neighbour predictions and applications using remotely sensed data for forest inventories

    Science.gov (United States)

    Steen Magnussen; Ronald E. McRoberts; Erkki O. Tomppo

    2009-01-01

    New model-based estimators of the uncertainty of pixel-level and areal k-nearest neighbour (knn) predictions of attribute Y from remotely-sensed ancillary data X are presented. Non-parametric functions predict Y from scalar 'Single Index Model' transformations of X. Variance functions generated...

  18. Infrared thermography method for fast estimation of phase diagrams

    Energy Technology Data Exchange (ETDEWEB)

    Palomo Del Barrio, Elena [Université de Bordeaux, Institut de Mécanique et d’Ingénierie, Esplanade des Arts et Métiers, 33405 Talence (France); Cadoret, Régis [Centre National de la Recherche Scientifique, Institut de Mécanique et d’Ingénierie, Esplanade des Arts et Métiers, 33405 Talence (France); Daranlot, Julien [Solvay, Laboratoire du Futur, 178 Av du Dr Schweitzer, 33608 Pessac (France); Achchaq, Fouzia, E-mail: fouzia.achchaq@u-bordeaux.fr [Université de Bordeaux, Institut de Mécanique et d’Ingénierie, Esplanade des Arts et Métiers, 33405 Talence (France)

    2016-02-10

    Highlights: • Infrared thermography is proposed to determine phase diagrams in record time. • Phase boundaries are detected by means of emissivity changes during heating. • Transition lines are identified by using Singular Value Decomposition techniques. • Different binary systems have been used for validation purposes. - Abstract: Phase change materials (PCM) are widely used today in thermal energy storage applications. Pure PCMs are rarely used because of non-adapted melting points. Instead, mixtures are preferred. The search for suitable mixtures, preferably eutectics, is often a tedious and time-consuming task which requires the determination of phase diagrams. In order to accelerate this screening step, a new method for estimating phase diagrams in record time (1–3 h) has been established and validated. A sample composed of small droplets of mixtures with different compositions (as many as necessary to have a good coverage of the phase diagram) deposited on a flat substrate is first prepared and cooled down to ambient temperature so that all droplets crystallize. The plate is then heated at a constant heating rate up to a temperature sufficiently high to melt all the small crystals. The heating process is imaged using an infrared camera. An appropriate method based on the singular value decomposition technique has been developed to analyze the recorded images and to determine the transition lines of the phase diagram. The method has been applied to determine several simple eutectic phase diagrams, and the results have been validated by comparison with the phase diagrams obtained by Differential Scanning Calorimeter measurements and by thermodynamic modelling.

  19. Total decay heat estimates in a proto-type fast reactor

    International Nuclear Information System (INIS)

    Sridharan, M.S.

    2003-01-01

    Full text: In this paper, total decay heat values generated in a prototype fast reactor are estimated. These values are compared with those of certain other fast reactors. Simple analytical fits are also obtained for these values, which can serve as a handy and convenient tool in engineering design studies. These decay heat values, expressed as a ratio to the nominal operating power, are in general applicable to any typical plutonium-based fast reactor and are useful inputs to the design of decay-heat removal systems
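
    A classic analytical fit of this kind is the Way-Wigner form, shown below as a sketch of what such a handy engineering expression looks like; the 0.066 constant is the textbook value, not the reactor-specific fit from this paper.

```python
def decay_heat_fraction(t_s, T_s):
    """P/P0 for cooling time t_s (s) after operating T_s (s) at nominal power."""
    return 0.066 * (t_s ** -0.2 - (t_s + T_s) ** -0.2)

one_year = 3.15e7
print(decay_heat_fraction(3600.0, one_year))   # ~1% of nominal power 1 h after shutdown
```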

  20. Detection-Guided Fast Affine Projection Channel Estimator for Speech Applications

    Directory of Open Access Journals (Sweden)

    Yan Wu Jennifer

    2007-04-01

    Full Text Available In various adaptive estimation applications, such as acoustic echo cancellation within teleconferencing systems, the input signal is highly correlated speech. This, in general, leads to extremely slow convergence of the NLMS adaptive FIR estimator. As a result, for such applications, the affine projection algorithm (APA) or its low-complexity version, the fast affine projection (FAP) algorithm, is commonly employed instead of the NLMS algorithm. In such applications, the signal propagation channel may have a relatively low-dimensional impulse response structure, that is, the number m of active or significant taps within the (discrete-time) modelled channel impulse response is much less than the overall tap length n of the channel impulse response. For such cases, we investigate the inclusion of an active-parameter detection-guided concept within the fast affine projection FIR channel estimator. Simulation results indicate that the proposed detection-guided fast affine projection channel estimator has improved convergence speed and better steady-state performance than the standard fast affine projection channel estimator, especially in the important case of highly correlated speech input signals.

  1. Variable disparity-motion estimation based fast three-view video coding

    Science.gov (United States)

    Bae, Kyung-Hoon; Kim, Seung-Cheol; Hwang, Yong Seok; Kim, Eun-Soo

    2009-02-01

    In this paper, variable disparity-motion estimation (VDME) based 3-view video coding is proposed. In the encoding, key-frame coding (KFC) based motion estimation and variable disparity estimation (VDE) are processed for effectively fast three-view video encoding. These proposed algorithms enhance the performance of the 3-D video encoding/decoding system in terms of the accuracy of disparity estimation and computational overhead. From experiments on the stereo sequences 'Pot Plant' and 'IVO', it is shown that the proposed algorithm's PSNRs are 37.66 and 40.55 dB, and the processing times are 0.139 and 0.124 sec/frame, respectively.

  2. A comparative study of three model-based algorithms for estimating state-of-charge of lithium-ion batteries under a new combined dynamic loading profile

    International Nuclear Information System (INIS)

    Yang, Fangfang; Xing, Yinjiao; Wang, Dong; Tsui, Kwok-Leung

    2016-01-01

    Highlights: • Three different model-based filtering algorithms for SOC estimation are compared. • A combined dynamic loading profile is proposed to evaluate the three algorithms. • Robustness against uncertainty in the initial states of SOC estimators is investigated. • Battery capacity degradation is considered in SOC estimation. - Abstract: Accurate state-of-charge (SOC) estimation is critical for the safety and reliability of battery management systems in electric vehicles. Because SOC cannot be directly measured and SOC estimation is affected by many factors, such as ambient temperature, battery aging, and current rate, a robust SOC estimation approach needs to be developed to deal with time-varying and nonlinear battery systems. In this paper, three popular model-based filtering algorithms, the extended Kalman filter, the unscented Kalman filter, and the particle filter, are used to estimate SOC, and their performance regarding tracking accuracy, computation time, robustness against uncertainty in the initial value of SOC, and battery degradation is compared. To evaluate the performance of these algorithms, a new combined dynamic loading profile composed of the dynamic stress test, the federal urban driving schedule and the US06 is proposed. The comparison results show that the unscented Kalman filter is the most robust to different initial values of SOC, while the particle filter converges fastest when the initial guess of SOC is far from the true initial SOC.

  3. Safeprops: A Software for Fast and Reliable Estimation of Safety and Environmental Properties for Organic Compounds

    DEFF Research Database (Denmark)

    Jones, Mark Nicholas; Frutiger, Jerome; Abildskov, Jens

    We present a new software tool called SAFEPROPS which is able to estimate major safety-related and environmental properties for organic compounds. SAFEPROPS provides accurate, reliable and fast predictions using the Marrero-Gani group contribution (MG-GC) method. It is implemented using Python as the main programming language, while the necessary parameters together with their correlation matrix are obtained from a SQLite database which has been populated using off-line parameter and error estimation routines (Eq. 3-8).
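
    A minimal sketch of what a group-contribution evaluation looks like: a property computed from summed group counts times contribution values. The groups, contributions, and linear functional form below are placeholders for illustration, not SAFEPROPS database entries or the actual MG-GC equations.

```python
contrib_flash = {"CH3": -2.3, "CH2": 1.7, "OH": 18.0}   # hypothetical contributions

def flash_point_K(groups, f0=170.0):
    """Tf = f0 + sum(N_i * C_i) over occurrences of each group (toy linear form)."""
    return f0 + sum(n * contrib_flash[g] for g, n in groups.items())

# 1-butanol as CH3 + 3xCH2 + OH under this toy decomposition
print(flash_point_K({"CH3": 1, "CH2": 3, "OH": 1}))
```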

  4. Fast Reactor Fuel Cycle Cost Estimates for Advanced Fuel Cycle Studies

    International Nuclear Information System (INIS)

    Harrison, Thomas

    2013-01-01

    Presentation Outline: • Why Do I Need a Cost Basis?; • History of the Advanced Fuel Cycle Cost Basis; • Description of the Cost Basis; • Current Work; • Fast Reactor Fuel Cycle Applications; • Sample Fuel Cycle Cost Estimate Analysis; • Future Work

  5. A fast pulse phase estimation method for X-ray pulsar signals based on epoch folding

    Directory of Open Access Journals (Sweden)

    Xue Mengfan

    2016-06-01

    Full Text Available X-ray pulsar-based navigation (XPNAV) is an attractive method for autonomous deep-space navigation in the future. Pulse phase estimation is a key task in XPNAV and its accuracy directly determines the navigation accuracy. State-of-the-art pulse phase estimation techniques either suffer from poor estimation accuracy, or involve the maximization of a generally non-convex objective function, resulting in a large computational cost. In this paper, a fast pulse phase estimation method based on epoch folding is presented. The statistical properties of the observed profile obtained through epoch folding are developed. Based on this, we take the joint probability distribution of the observed profile as the likelihood function and utilize a fast Fourier transform-based procedure to estimate the pulse phase. The computational complexity of the proposed estimator is analyzed as well. Experimental results show that the proposed estimator significantly outperforms the currently used cross-correlation (CC) and nonlinear least squares (NLS) estimators, while significantly reducing the computational complexity compared with the NLS and maximum likelihood (ML) estimators.
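
    A sketch of the epoch-folding-plus-FFT pipeline under simplifying assumptions: photon phases are folded into a profile, and the shift relative to a template is read off from the phase of the fundamental of the cross-spectrum (a simpler stand-in for the paper's full likelihood-based procedure). All pulsar parameters are synthetic.

```python
import numpy as np

rng = np.random.default_rng(6)
nbins, true_shift = 64, 0.25                    # profile bins, phase shift

template = 1 + 0.8 * np.exp(-0.5 * ((np.arange(nbins) / nbins - 0.5) / 0.05) ** 2)
prob = np.roll(template, int(true_shift * nbins))
prob /= prob.sum()
phases = rng.choice(nbins, size=20000, p=prob)  # binned photon phases
profile = np.bincount(phases, minlength=nbins)  # epoch-folded observed profile

X, T = np.fft.rfft(profile), np.fft.rfft(template)
shift = (-np.angle(X[1] * np.conj(T[1])) / (2 * np.pi)) % 1.0
print(f"estimated phase shift: {shift:.3f} (true {true_shift})")
```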

  6. Assessing the external validity of model-based estimates of the incidence of heart attack in England: a modelling study

    Directory of Open Access Journals (Sweden)

    Peter Scarborough

    2016-11-01

    Full Text Available Abstract Background The DisMod II model is designed to estimate epidemiological parameters on diseases where measured data are incomplete and has been used to provide estimates of disease incidence for the Global Burden of Disease study. We assessed the external validity of the DisMod II model by comparing modelled estimates of the incidence of first acute myocardial infarction (AMI) in England in 2010 with estimates derived from a linked dataset of hospital records and death certificates. Methods Inputs for DisMod II were prevalence rates of ever having had an AMI taken from a population health survey, total mortality rates and AMI mortality rates taken from death certificates. By definition, remission rates were zero. We estimated first AMI incidence in an external dataset from England in 2010 using a linked dataset including all hospital admissions and death certificates since 1998. 95% confidence intervals were derived around estimates from the external dataset and DisMod II estimates based on sampling variance and reported uncertainty in prevalence estimates respectively. Results Estimates of the incidence rate for the whole population were higher in the DisMod II results than the external dataset (+54% for men and +26% for women). Age-specific results showed that the DisMod II results over-estimated incidence for all but the oldest age groups. Confidence intervals for the DisMod II and external dataset estimates did not overlap for most age groups. Conclusion By comparison with AMI incidence rates in England, DisMod II did not achieve external validity for age-specific incidence rates, but did provide global estimates of incidence that are of similar magnitude to measured estimates. The model should be used with caution when estimating age-specific incidence rates.

  7. Consumers' estimation of calorie content at fast food restaurants: cross sectional observational study.

    Science.gov (United States)

    Block, Jason P; Condon, Suzanne K; Kleinman, Ken; Mullen, Jewel; Linakis, Stephanie; Rifas-Shiman, Sheryl; Gillman, Matthew W

    2013-05-23

    To investigate estimation of calorie (energy) content of meals from fast food restaurants in adults, adolescents, and school age children. Cross sectional study of repeated visits to fast food restaurant chains. 89 fast food restaurants in four cities in New England, United States: McDonald's, Burger King, Subway, Wendy's, KFC, Dunkin' Donuts. 1877 adults and 330 school age children visiting restaurants at dinnertime (evening meal) in 2010 and 2011; 1178 adolescents visiting restaurants after school or at lunchtime in 2010 and 2011. Estimated calorie content of purchased meals. Among adults, adolescents, and school age children, the mean actual calorie content of meals was 836 calories (SD 465), 756 calories (SD 455), and 733 calories (SD 359), respectively. A calorie is equivalent to 4.18 kJ. Compared with the actual figures, participants underestimated calorie content by means of 175 calories (95% confidence interval 145 to 205), 259 calories (227 to 291), and 175 calories (108 to 242), respectively. In multivariable linear regression models, underestimation of calorie content increased substantially as the actual meal calorie content increased. Adults and adolescents eating at Subway estimated 20% and 25% lower calorie content than McDonald's diners (relative change 0.80, 95% confidence interval 0.66 to 0.96; 0.75, 0.57 to 0.99). People eating at fast food restaurants underestimate the calorie content of meals, especially large meals. Education of consumers through calorie menu labeling and other outreach efforts might reduce the large degree of underestimation.

  8. Use of models in large-area forest surveys: comparing model-assisted, model-based and hybrid estimation

    Science.gov (United States)

    Goran Stahl; Svetlana Saarela; Sebastian Schnell; Soren Holm; Johannes Breidenbach; Sean P. Healey; Paul L. Patterson; Steen Magnussen; Erik Naesset; Ronald E. McRoberts; Timothy G. Gregoire

    2016-01-01

    This paper focuses on the use of models for increasing the precision of estimators in large-area forest surveys. It is motivated by the increasing availability of remotely sensed data, which facilitates the development of models predicting the variables of interest in forest surveys. We present, review and compare three different estimation frameworks where...
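
    To make the contrast concrete, here is a minimal numerical sketch (all data simulated, not from the paper): a model-based estimate is the population mean of model predictions, while a model-assisted difference estimator corrects that mean with the average residual observed on the field sample.

        import numpy as np

        rng = np.random.default_rng(0)
        # Hypothetical population: remotely sensed covariate x, true biomass y.
        N = 10_000
        x = rng.uniform(0, 50, N)
        y = 2.0 * x + rng.normal(0, 8, N)

        # Simple random sample of field plots.
        n = 200
        s = rng.choice(N, n, replace=False)

        # Fit a working model on the sample.
        b1, b0 = np.polyfit(x[s], y[s], 1)
        y_hat = b0 + b1 * x

        # Model-based estimate: population mean of model predictions.
        mb = y_hat.mean()
        # Model-assisted (difference) estimate: add the mean sample residual.
        ma = y_hat.mean() + (y[s] - y_hat[s]).mean()

        print(f"true {y.mean():.2f}  model-based {mb:.2f}  model-assisted {ma:.2f}")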

  9. A fast and automatically paired 2-D direction-of-arrival estimation with and without estimating the mutual coupling coefficients

    Science.gov (United States)

    Filik, Tansu; Tuncer, T. Engin

    2010-06-01

    A new technique is proposed for the solution of the pairing problem that arises when fast algorithms are used for two-dimensional (2-D) direction-of-arrival (DOA) estimation. The proposed method is integrated with array interpolation for efficient use of antenna elements. Two virtual arrays are generated, positioned appropriately with respect to the real array. The ESPRIT algorithm is used, employing both the real and virtual arrays. The eigenvalues of the rotational transformation matrix carry the angle information in both magnitude and phase, which allows the estimation of azimuth and elevation angles using closed-form expressions. This idea is used to obtain the paired interpolated ESPRIT algorithm, which can be applied to arbitrary arrays when there is no mutual coupling. When there is mutual coupling, two approaches are proposed in order to obtain 2-D paired DOA estimates. These blind methods can be applied to array geometries whose mutual coupling matrices have a Toeplitz structure. The first approach finds the 2-D paired DOA angles without estimating the mutual coupling coefficients. The second approach estimates the coupling coefficients and iteratively improves both the coupling coefficients and the DOA estimates. It is shown that the proposed techniques solve the pairing problem for uniform circular arrays and effectively estimate the DOA angles in the case of unknown mutual coupling.

  10. A Novel Data-Driven Fast Capacity Estimation of Spent Electric Vehicle Lithium-ion Batteries

    Directory of Open Access Journals (Sweden)

    Caiping Zhang

    2014-12-01

    Fast capacity estimation is a key enabling technique for the second life of lithium-ion batteries, given the effort involved in determining the capacity of a large number of used electric vehicle (EV) batteries. This paper makes three contributions to the existing literature through a robust and advanced algorithm: (1) a three-layer back propagation artificial neural network (BP ANN) model is developed to estimate battery capacity. The model employs internal resistance, which expresses the battery's kinetics, as the model input, enabling fast capacity estimation; (2) an estimation error model is established to investigate the relationship between the robustness coefficient and the regression coefficient. It is revealed that the commonly used ANN capacity estimation algorithm lacks robustness to parameter measurement uncertainties; (3) the law of large numbers is used as the basis for a proposed robust estimation approach, which optimally balances estimation accuracy and disturbance rejection. An optimal range for the robustness coefficient threshold is also discussed and proposed. Experimental results demonstrate the efficacy and robustness of the BP ANN model together with the proposed identification approach, which can provide an important basis for large-scale second-life applications of batteries.
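
    A minimal sketch of the kind of input-output mapping described, using scikit-learn's MLPRegressor as a stand-in for the paper's three-layer BP ANN; the resistance-to-capacity relationship and all data below are invented for illustration.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(1)
        # Hypothetical training set: internal resistance (mOhm) vs capacity (Ah).
        r = rng.uniform(1.0, 3.0, (500, 1))
        cap = 2.5 - 0.6 * (r[:, 0] - 1.0) + rng.normal(0, 0.05, 500)

        # One hidden layer ~ a "three-layer" (input, hidden, output) BP network.
        net = MLPRegressor(hidden_layer_sizes=(16,), solver="lbfgs", max_iter=2000, random_state=0)
        net.fit(r, cap)

        # Fast capacity estimate for a fresh resistance measurement.
        print(net.predict([[2.2]]))  # predicted capacity in Ah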

  11. The global burden of snakebite: a literature analysis and modelling based on regional estimates of envenoming and deaths.

    Directory of Open Access Journals (Sweden)

    Anuradhani Kasturiratne

    2008-11-01

    BACKGROUND: Envenoming resulting from snakebites is an important public health problem in many tropical and subtropical countries. Few attempts have been made to quantify the burden, and recent estimates all suffer from the lack of an objective and reproducible methodology. In an attempt to provide an accurate, up-to-date estimate of the scale of the global problem, we developed a new method to estimate the disease burden due to snakebites. METHODS AND FINDINGS: The global estimates were based on regional estimates that were, in turn, derived from data available for countries within a defined region. Three main strategies were used to obtain primary data: electronic searching for publications on snakebite, extraction of relevant country-specific mortality data from databases maintained by United Nations organizations, and identification of grey literature by discussion with key informants. Countries were grouped into 21 distinct geographic regions that are as epidemiologically homogeneous as possible, in line with the Global Burden of Disease 2005 study (Global Burden Project of the World Bank). Incidence rates for envenoming were extracted from publications and used to estimate the number of envenomings for individual countries; if no data were available for a particular country, the lowest incidence rate within a neighbouring country was used. Where death registration data were reliable, reported deaths from snakebite were used; in other countries, deaths were estimated on the basis of observed mortality rates and the at-risk population. We estimate that, globally, at least 421,000 envenomings and 20,000 deaths occur each year due to snakebite. These figures may be as high as 1,841,000 envenomings and 94,000 deaths. Based on the fact that envenoming occurs in about one in every four snakebites, between 1.2 million and 5.5 million snakebites could occur annually. CONCLUSIONS: Snakebites cause considerable morbidity and mortality worldwide. ...

  12. Model-based estimation of breast percent density in raw and processed full-field digital mammography images from image-acquisition physics and patient-image characteristics

    Science.gov (United States)

    Keller, Brad M.; Nathan, Diane L.; Conant, Emily F.; Kontos, Despina

    2012-03-01

    Breast percent density (PD%), as measured mammographically, is one of the strongest known risk factors for breast cancer. While the majority of studies to date have focused on PD% assessment from digitized film mammograms, digital mammography (DM) is becoming increasingly common, and allows for direct PD% assessment at the time of imaging. This work investigates the accuracy of a generalized linear model-based (GLM) estimation of PD% from raw and postprocessed digital mammograms, utilizing image acquisition physics, patient characteristics and gray-level intensity features of the specific image. The model is trained in a leave-one-woman-out fashion on a series of 81 cases for which bilateral, mediolateral-oblique DM images were available in both raw and post-processed format. Baseline continuous and categorical density estimates were provided by a trained breast-imaging radiologist. Regression analysis is performed and Pearson's correlation, r, and Cohen's kappa, κ, are computed. The GLM PD% estimation model performed well on both processed (r=0.89, p<0.001) and raw (r=0.75, p<0.001) images. Model agreement with radiologist assigned density categories was also high for processed (κ=0.79, p<0.001) and raw (κ=0.76, p<0.001) images. Model-based prediction of breast PD% could allow for a reproducible estimation of breast density, providing a rapid risk assessment tool for clinical practice.
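
    A sketch of the leave-one-out evaluation loop described, with invented features standing in for the paper's acquisition-physics and gray-level descriptors; an ordinary linear model plays the role of the GLM and the result is scored with Pearson's r.

        import numpy as np
        from scipy.stats import pearsonr
        from sklearn.linear_model import LinearRegression
        from sklearn.model_selection import LeaveOneOut

        rng = np.random.default_rng(2)
        # Hypothetical case matrix: [kVp, exposure (mAs), mean gray level].
        X = rng.normal(size=(81, 3))
        pd_true = 30 + 8 * X[:, 2] - 3 * X[:, 0] + rng.normal(0, 2, 81)

        preds = np.empty(81)
        for train, test in LeaveOneOut().split(X):
            model = LinearRegression().fit(X[train], pd_true[train])
            preds[test] = model.predict(X[test])

        r, p = pearsonr(preds, pd_true)
        print(f"leave-one-woman-out Pearson r = {r:.2f} (p = {p:.1e})")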

  13. Fast Estimation Method of Space-Time Two-Dimensional Positioning Parameters Based on Hadamard Product

    Directory of Open Access Journals (Sweden)

    Haiwen Li

    2018-01-01

    The estimation speed of positioning parameters determines the effectiveness of a positioning system. The time of arrival (TOA) and direction of arrival (DOA) parameters can be estimated by the space-time two-dimensional multiple signal classification (2D-MUSIC) algorithm for an array antenna. However, this algorithm needs considerable time to complete the two-dimensional pseudo-spectral peak search, which makes it difficult to apply in practice. To address this problem, a fast estimation method for space-time two-dimensional positioning parameters based on the Hadamard product is proposed for the orthogonal frequency division multiplexing (OFDM) system, and the Cramer-Rao bound (CRB) is also presented. Firstly, according to the channel frequency domain response vector of each array, the channel frequency domain estimation vector is constructed using the Hadamard product form containing location information. Then, the autocorrelation matrix of the channel response vector for the extended array element in the frequency domain and the noise subspace are calculated successively. Finally, by combining the closed-form solution and parameter pairing, fast joint estimation of time delay and arrival direction is accomplished. The theoretical analysis and simulation results show that the proposed algorithm significantly reduces the computational complexity while achieving estimation accuracy that is not only better than the estimation of signal parameters via rotational invariance techniques (ESPRIT) algorithm and the 2D matrix pencil (MP) algorithm but also close to that of 2D-MUSIC. Moreover, the proposed algorithm adapts well to multipath environments and effectively improves the speed of acquisition of location parameters.
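
    An illustrative construction of the structure the method exploits (not code from the paper): the frequency-domain channel vector of one antenna element can be written as an element-wise, i.e. Hadamard, product of a delay steering vector and an angle steering vector. The carrier frequency, subcarrier spacing, and geometry below are assumptions.

        import numpy as np

        K = 64                     # OFDM subcarriers
        df = 15e3                  # subcarrier spacing (Hz), assumed
        c = 3e8
        fc = 2.4e9                 # carrier frequency (Hz), assumed
        d = c / fc / 2.0           # half-wavelength element spacing (m)
        m = 3                      # index of one array element

        tau = 1.2e-6               # true time of arrival (s)
        theta = np.deg2rad(25.0)   # true direction of arrival

        k = np.arange(K)
        # Delay steering vector across subcarriers.
        a_tau = np.exp(-2j * np.pi * k * df * tau)
        # Angle steering phase across subcarriers for element m (wideband form).
        a_ang = np.exp(-2j * np.pi * (fc + k * df) * m * d * np.sin(theta) / c)
        # Channel frequency-domain vector of element m: a Hadamard product of the two.
        h_m = a_tau * a_ang
        print(np.round(h_m[:4], 3))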

  14. A dynamic model-based estimate of the value of a vanadium redox flow battery for frequency regulation in Texas

    International Nuclear Information System (INIS)

    Fares, Robert L.; Meyers, Jeremy P.; Webber, Michael E.

    2014-01-01

    Highlights: • A model is implemented to describe the dynamic voltage of a vanadium flow battery. • The model is used with optimization to maximize the utility of the battery. • A vanadium flow battery’s value for regulation service is approximately $1500/kW. - Abstract: Building on past work seeking to value emerging energy storage technologies in grid-based applications, this paper introduces a dynamic model-based framework to value a vanadium redox flow battery (VRFB) participating in Texas’ organized electricity market. Our model describes the dynamic behavior of a VRFB system’s voltage and state of charge based on the instantaneous charging or discharging power required from the battery. We formulate an optimization problem that incorporates the model to show the potential value of a VRFB used for frequency regulation service in Texas. The optimization is implemented in Matlab using a large-scale interior-point nonlinear optimization algorithm, with the objective function gradient, nonlinear constraint gradients, and Hessian matrix specified analytically. Utilizing market prices and other relevant data from the Electric Reliability Council of Texas (ERCOT), we find that a VRFB system used for frequency regulation service could be worth approximately $1500/kW.

  15. On Channel Estimation for OFDM/TDM Using MMSE-FDE in a Fast Fading Channel

    Directory of Open Access Journals (Sweden)

    Gacanin Haris

    2009-01-01

    MMSE-FDE can improve the transmission performance of OFDM combined with time division multiplexing (OFDM/TDM), but knowledge of the channel state information and the noise variance is required to compute the MMSE weight. In this paper, a performance evaluation of OFDM/TDM using MMSE-FDE with pilot-assisted channel estimation over a fast fading channel is presented. To improve the tracking ability against fast fading, a robust pilot-assisted channel estimation scheme is presented that uses time-domain filtering on a slot-by-slot basis and frequency-domain interpolation. We derive the mean square error (MSE) of the channel estimator and then discuss the tradeoff between improving the tracking ability against fading and reducing noise. The achievable bit error rate (BER) performance is evaluated by computer simulation and compared with conventional OFDM. It is shown that OFDM/TDM using MMSE-FDE achieves a lower BER and better tracking against fast fading than conventional OFDM.
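
    A minimal sketch of the MMSE frequency-domain equalization step the abstract builds on, under standard assumptions (unit-energy QPSK symbols, known per-subcarrier channel estimate H, known noise variance): the textbook weight W = conj(H) / (|H|^2 + sigma^2) is applied. All signal values are invented.

        import numpy as np

        rng = np.random.default_rng(3)
        K = 64
        # Hypothetical per-subcarrier channel estimate and noise variance.
        H = (rng.normal(size=K) + 1j * rng.normal(size=K)) / np.sqrt(2)
        sigma2 = 0.1

        # Transmit QPSK, pass through the channel, add noise.
        bits = rng.integers(0, 2, (K, 2))
        s = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)
        noise = np.sqrt(sigma2 / 2) * (rng.normal(size=K) + 1j * rng.normal(size=K))
        r = H * s + noise

        # MMSE-FDE weight and equalized symbols.
        W = np.conj(H) / (np.abs(H) ** 2 + sigma2)
        s_hat = W * r
        print(np.mean(np.sign(s_hat.real) == np.sign(s.real)))  # fraction of correct I bits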

  16. SAD PROCESSOR FOR MULTIPLE MACROBLOCK MATCHING IN FAST SEARCH VIDEO MOTION ESTIMATION

    Directory of Open Access Journals (Sweden)

    Nehal N. Shah

    2015-02-01

    Motion estimation is a very important but computationally complex task in video coding. The process of determining motion vectors based on the temporal correlation of consecutive frames is used for video compression. In order to reduce the computational complexity of motion estimation and maintain the quality of encoding during motion compensation, different fast search techniques are available. These block-based motion estimation algorithms use the sum of absolute differences (SAD) between the corresponding macroblock in the current frame and all candidate macroblocks in the reference frame to identify the best match. Existing implementations can compute the SAD between two blocks using a sequential or pipelined approach, but performing a multi-operand SAD in a single clock cycle with optimized resources is the state of the art. In this paper, various parallel architectures for computing the fixed-block-size SAD are evaluated, and a fast parallel SAD architecture with optimized resources is proposed. A SAD processor is then described with 9 processing elements, which can be configured for any existing fast search block matching algorithm. The proposed SAD processor consumes 7% fewer adders than an existing implementation with one processing element. Using nine processing elements, it can process 84 HD frames per second in the worst case, which is a good outcome for real-time implementation; in the average case, the architecture processes 325 HD frames per second.
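
    The core operation is easy to state in a few lines; the sketch below is a plain software reference for what the paper's processing elements compute in parallel hardware: the SAD between a current-frame block and every candidate block in a search region. Block size, window size, and pixel data are assumed.

        import numpy as np

        rng = np.random.default_rng(4)
        ref = rng.integers(0, 256, (64, 64), dtype=np.int32)          # reference-frame region
        cur_block = ref[20:28, 24:32] + rng.integers(-3, 4, (8, 8))   # 8x8 block, slightly perturbed

        best = (None, np.inf)
        for dy in range(64 - 8):
            for dx in range(64 - 8):
                cand = ref[dy:dy + 8, dx:dx + 8]
                sad = np.abs(cur_block - cand).sum()   # sum of absolute differences
                if sad < best[1]:
                    best = ((dy, dx), sad)

        print("best match at", best[0], "with SAD", best[1])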

  17. An automated multi-model based evapotranspiration estimation framework for understanding crop-climate interactions in India

    Science.gov (United States)

    Bhattarai, N.; Jain, M.; Mallick, K.

    2017-12-01

    A remote sensing based multi-model evapotranspiration (ET) estimation framework is developed using MODIS and NASA MERRA-2 reanalysis data for data-poor regions, and we apply this framework to the Indian subcontinent. The framework eliminates the need for in-situ calibration data, hence estimates ET entirely from space, and is replicable across all regions of the world. Currently, six surface energy balance models, ranging from the widely used SEBAL, METRIC, and SEBS to the moderately used S-SEBI and SSEBop and a relatively new model, STIC1.2, are being integrated and validated. Preliminary analysis suggests good predictability of the models for estimating near-real-time ET under clear sky conditions for various crop types in India, with coefficients of determination of 0.32-0.55 and percent bias of -15% to 28% when compared against Bowen ratio based ET estimates. The results are particularly encouraging given that no direct ground input data were used in the analysis. The framework is currently being extended to estimate seasonal ET across the Indian subcontinent using a model-ensemble approach that uses all available MODIS 8-day datasets since 2000. These ET products are being used to monitor inter-seasonal and inter-annual dynamics of ET and crop water use across different crop and irrigation practices in India. In particular, the potential impacts of changes in precipitation patterns and extreme heat (e.g., extreme degree days) on seasonal crop water consumption are being studied. Our ET products are able to locate the water stress hotspots that need to be targeted with water-saving interventions to maintain agricultural production in the face of climate variability and change.

  18. Optimal Orientation Planning and Control Deviation Estimation on FAST Cable-Driven Parallel Robot

    Directory of Open Access Journals (Sweden)

    Hui Li

    2014-03-01

    This paper presents a theoretical treatment of optimal orientation planning and control deviation estimation for the FAST cable-driven parallel robot. Given the robot's characteristics, the solutions are obtained from two constrained optimizations, both of which are based on the equilibrium of the cabin and careful force allocation among the 6 cable tensions. A control algorithm is proposed based on position and force feedback. The analysis proves that the orientation control depends on the force feedback and on the optimal tension solution corresponding to the planned orientation. Finally, the orientation deviation is estimated under the limiting range of tension errors.
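
    A sketch of the kind of force-allocation subproblem described, with an invented geometry: find six bounded cable tensions that balance the cabin's weight while minimizing the tension norm. For brevity only the three force-balance equations are imposed (torque balance is omitted), so this is not FAST's actual model.

        import numpy as np
        from scipy.optimize import minimize

        # Hypothetical geometry: 6 cables pulling upward and outward on the cabin.
        angles = np.linspace(0.0, 2.0 * np.pi, 6, endpoint=False)
        u = np.stack([0.4 * np.cos(angles), 0.4 * np.sin(angles), np.ones(6)])
        u /= np.linalg.norm(u, axis=0)            # unit direction of each cable (3x6)

        W = 9.81 * 30.0                           # assumed cabin weight (N)
        target = np.array([0.0, 0.0, W])          # cable forces must balance gravity

        res = minimize(
            lambda t: t @ t,                      # minimize the squared tension norm
            x0=np.full(6, W / 6.0),
            constraints={"type": "eq", "fun": lambda t: u @ t - target},
            bounds=[(10.0, 5000.0)] * 6,          # assumed min/max cable tensions
            method="SLSQP",
        )
        print("tensions (N):", np.round(res.x, 1))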

  19. Model-based stochastic-deterministic State and Force Estimation using Kalman filtering with Application to Hanko-1 Channel Marker

    OpenAIRE

    Petersen, Øyvind Wiig

    2014-01-01

    Force identification in structural dynamics is an inverse problem concerned with finding loads from measured structural response. The main objective of this thesis is to perform and study state (displacement and velocity) and force estimation by Kalman filtering. Theory on optimal control and state-space models is presented, adapted to linear structural dynamics. Accommodation of measurement noise and model inaccuracies is attained by stochastic-deterministic coupling. Explicit requirem...
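
    A compact sketch of joint state and force estimation by Kalman filtering in the spirit of the abstract: the unknown force is appended to the state vector with a random-walk model, so the standard filter recursions estimate displacement, velocity, and force together. The single-DOF system and all numbers are assumptions.

        import numpy as np

        rng = np.random.default_rng(6)
        dt, m, c, k = 0.01, 1.0, 4.0, 100.0        # assumed single-DOF structure
        # State [displacement, velocity, force]; the force follows a random walk.
        Ac = np.array([[0.0, 1.0, 0.0],
                       [-k / m, -c / m, 1.0 / m],
                       [0.0, 0.0, 0.0]])
        A = np.eye(3) + dt * Ac                    # simple Euler discretization
        H = np.array([[1.0, 0.0, 0.0]])            # only displacement is measured
        Q = np.diag([1e-10, 1e-10, 1e-1])          # large process noise on the force state
        R = np.array([[1e-8]])

        x_true = np.zeros(3)
        x_hat, P = np.zeros(3), np.eye(3)
        for step in range(3000):
            f_ext = 5.0 if step > 1000 else 0.0    # hidden step load (N)
            x_true = A @ np.array([x_true[0], x_true[1], f_ext])
            y = H @ x_true + rng.normal(0.0, 1e-4)
            # Predict.
            x_hat = A @ x_hat
            P = A @ P @ A.T + Q
            # Update with the displacement measurement.
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x_hat = x_hat + (K @ (y - H @ x_hat)).ravel()
            P = (np.eye(3) - K @ H) @ P

        print(f"estimated force: {x_hat[2]:.2f} N (true 5.0 N)")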

  20. A Discussion on Uncertainty Representation and Interpretation in Model-Based Prognostics Algorithms based on Kalman Filter Estimation Applied to Prognostics of Electronics Components

    Science.gov (United States)

    Celaya, Jose R.; Saxena, Abhinav; Goebel, Kai

    2012-01-01

    This article discusses several aspects of uncertainty representation and management for model-based prognostics methodologies based on our experience with Kalman Filters when applied to prognostics for electronics components. In particular, it explores the implications of modeling remaining useful life prediction as a stochastic process and how it relates to uncertainty representation, management, and the role of prognostics in decision-making. A distinction between the interpretations of estimated remaining useful life probability density function and the true remaining useful life probability density function is explained and a cautionary argument is provided against mixing interpretations for the two while considering prognostics in making critical decisions.

  1. A Memory Hierarchy Model Based on Data Reuse for Full-Search Motion Estimation on High-Definition Digital Videos

    Directory of Open Access Journals (Sweden)

    Alba Sandyra Bezerra Lopes

    2012-01-01

    Motion estimation is the most complex module in a video encoder, requiring high processing throughput and high memory bandwidth, especially when the focus is high-definition videos. The throughput problem can be solved by increasing the parallelism of the internal operations, and the external memory bandwidth may be reduced using a memory hierarchy. This work presents a memory hierarchy model for a full-search motion estimation core. The proposed memory hierarchy model is based on a data reuse scheme that exploits the features of the full search algorithm. The proposed memory hierarchy substantially reduces the external memory bandwidth required for the motion estimation process, and it provides a very high data throughput for the ME core. This throughput is necessary to achieve real-time processing of high-definition videos. In the worst-bandwidth scenario, this memory hierarchy reduces the external memory bandwidth by a factor of 578. A case study of the proposed hierarchy, using a 32×32 search window and an 8×8 block size, was implemented and prototyped on a Virtex 4 FPGA. The results show that it is possible to reach 38 frames per second when processing full HD frames (1920×1080 pixels) using nearly 299 Mbytes per second of external memory bandwidth.

  2. Trip Energy Estimation Methodology and Model Based on Real-World Driving Data for Green Routing Applications: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Holden, Jacob [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Van Til, Harrison J [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Wood, Eric W [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Gonder, Jeffrey D [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Zhu, Lei [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2018-02-09

    A data-informed model to predict energy use for a proposed vehicle trip has been developed in this paper. The methodology leverages nearly 1 million miles of real-world driving data to generate the estimation model. Driving is categorized at the sub-trip level by average speed, road gradient, and road network geometry, then aggregated by category. An average energy consumption rate is determined for each category, creating an energy rates look-up table. Proposed vehicle trips are then categorized in the same manner, and estimated energy rates are appended from the look-up table. The methodology is robust and applicable to almost any type of driving data. The model has been trained on vehicle global positioning system data from the Transportation Secure Data Center at the National Renewable Energy Laboratory and validated against on-road fuel consumption data from testing in Phoenix, Arizona. The estimation model has demonstrated an error range of 8.6% to 13.8%. The model results can be used to inform control strategies in routing tools, such as change in departure time, alternate routing, and alternate destinations to reduce energy consumption. This work provides a highly extensible framework that allows the model to be tuned to a specific driver or vehicle type.
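
    A toy version of the look-up-table methodology described, with invented bins and data: historical segments are categorized by average speed and road gradient, an average energy rate is computed per category, and a proposed trip is scored by appending rates from the table.

        import numpy as np
        import pandas as pd

        rng = np.random.default_rng(7)
        # Hypothetical historical segments: speed (mph), gradient (%), rate (kWh/mi).
        hist = pd.DataFrame({
            "speed": rng.uniform(5, 75, 5000),
            "grade": rng.uniform(-6, 6, 5000),
        })
        hist["rate"] = 0.25 + 0.002 * hist.speed + 0.03 * hist.grade + rng.normal(0, 0.02, 5000)

        def categorize(df):
            """Bin segments by speed and gradient (bin edges are assumptions)."""
            df = df.copy()
            df["speed_bin"] = pd.cut(df.speed, bins=range(0, 90, 10)).astype(str)
            df["grade_bin"] = pd.cut(df.grade, bins=range(-8, 10, 2)).astype(str)
            return df

        # Energy-rates look-up table: mean rate per (speed, gradient) category.
        table = categorize(hist).groupby(["speed_bin", "grade_bin"]).rate.mean().reset_index()

        # Categorize a proposed trip the same way and append estimated rates.
        trip = pd.DataFrame({"speed": [30, 55, 65], "grade": [1.0, -2.0, 3.0], "miles": [2.0, 5.0, 8.0]})
        trip = categorize(trip).merge(table, on=["speed_bin", "grade_bin"], how="left")
        print("estimated trip energy (kWh):", (trip.rate * trip.miles).sum())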

  3. The association between estimated average glucose levels and fasting plasma glucose levels

    Directory of Open Access Journals (Sweden)

    Giray Bozkaya

    2010-01-01

    OBJECTIVE: The level of hemoglobin A1c (HbA1c), also known as glycated hemoglobin, indicates how well a patient's blood glucose level has been controlled over the previous 8-12 weeks. HbA1c levels help patients and doctors understand whether a particular diabetes treatment is working and whether adjustments need to be made to the treatment. Because the HbA1c level is a marker of blood glucose for the previous 120 days, average blood glucose levels can be estimated using HbA1c levels. Our aim in the present study was to investigate the relationship between estimated average glucose levels, as calculated from HbA1c levels, and fasting plasma glucose levels. METHODS: The fasting plasma glucose levels of 3891 diabetic patient samples (1497 male, 2394 female) were obtained from the laboratory information system used for HbA1c testing by the Department of Internal Medicine at the Izmir Bozyaka Training and Research Hospital in Turkey. These samples were selected from patient samples that had hemoglobin levels between 12 and 16 g/dL. The estimated average glucose levels were calculated using the following formula: 28.7 x HbA1c - 46.7. Glucose and HbA1c levels were determined using the hexokinase and high performance liquid chromatography (HPLC) methods, respectively. RESULTS: A strong positive correlation between fasting plasma glucose levels and estimated average blood glucose levels (r=0.757, p<0.05) was observed, and this correlation was statistically significant. CONCLUSION: Reporting the estimated average glucose level together with the HbA1c level is believed to help patients and doctors assess the effectiveness of blood glucose control measures.
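
    The conversion used in the study is a one-line formula; a direct transcription (estimated average glucose in mg/dL, HbA1c in %):

        def estimated_average_glucose(hba1c_percent: float) -> float:
            """Estimated average glucose (mg/dL) from HbA1c (%): eAG = 28.7 x HbA1c - 46.7."""
            return 28.7 * hba1c_percent - 46.7

        # Example: an HbA1c of 7.0% corresponds to roughly 154 mg/dL.
        print(estimated_average_glucose(7.0))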

  4. FAST

    DEFF Research Database (Denmark)

    Zuidmeer-Jongejan, Laurian; Fernandez-Rivas, Montserrat; Poulsen, Lars K.

    2012-01-01

    ABSTRACT: The FAST project (Food Allergy Specific Immunotherapy) aims at the development of safe and effective treatment of food allergies, targeting prevalent, persistent and severe allergy to fish and peach. Classical allergen-specific immunotherapy (SIT), using subcutaneous injections with aqu...

  5. Consumers’ estimation of calorie content at fast food restaurants: cross sectional observational study

    Science.gov (United States)

    Condon, Suzanne K; Kleinman, Ken; Mullen, Jewel; Linakis, Stephanie; Rifas-Shiman, Sheryl; Gillman, Matthew W

    2013-01-01

    Objective To investigate estimation of calorie (energy) content of meals from fast food restaurants in adults, adolescents, and school age children. Design Cross sectional study of repeated visits to fast food restaurant chains. Setting 89 fast food restaurants in four cities in New England, United States: McDonald’s, Burger King, Subway, Wendy’s, KFC, Dunkin’ Donuts. Participants 1877 adults and 330 school age children visiting restaurants at dinnertime (evening meal) in 2010 and 2011; 1178 adolescents visiting restaurants after school or at lunchtime in 2010 and 2011. Main outcome measure Estimated calorie content of purchased meals. Results Among adults, adolescents, and school age children, the mean actual calorie content of meals was 836 calories (SD 465), 756 calories (SD 455), and 733 calories (SD 359), respectively. A calorie is equivalent to 4.18 kJ. Compared with the actual figures, participants underestimated calorie content by means of 175 calories (95% confidence interval 145 to 205), 259 calories (227 to 291), and 175 calories (108 to 242), respectively. In multivariable linear regression models, underestimation of calorie content increased substantially as the actual meal calorie content increased. Adults and adolescents eating at Subway estimated 20% and 25% lower calorie content than McDonald’s diners (relative change 0.80, 95% confidence interval 0.66 to 0.96; 0.75, 0.57 to 0.99). Conclusions People eating at fast food restaurants underestimate the calorie content of meals, especially large meals. Education of consumers through calorie menu labeling and other outreach efforts might reduce the large degree of underestimation. PMID:23704170

  6. Consumer estimation of recommended and actual calories at fast food restaurants.

    Science.gov (United States)

    Elbel, Brian

    2011-10-01

    Recently, localities across the United States have passed laws requiring the mandatory labeling of calories in all chain restaurants, including fast food restaurants. This policy is set to be implemented at the federal level. Early studies have found these policies to be at best minimally effective in altering food choice at a population level. This paper uses receipt and survey data collected from consumers outside fast food restaurants in low-income communities in New York City (NYC) (which implemented labeling) and a comparison community (which did not) to examine two fundamental assumptions necessary (though not sufficient) for calorie labeling to be effective: that consumers know how many calories they should be eating throughout the course of a day and that customers currently misestimate the number of calories in their fast food order. We then examine whether mandatory menu labeling influences either of these assumptions. We find that approximately one-third of consumers properly estimate the number of calories an adult should consume daily. Few (8% on average) believe adults should be eating over 2,500 calories daily, and approximately one-third believe adults should eat less than 1,500 calories daily. Mandatory labeling in NYC did not change these findings. However, labeling did increase the number of low-income consumers who correctly estimated (within 100 calories) the number of calories in their fast food meal, from 15% before labeling in NYC to 24% after labeling. Overall knowledge remains low even with labeling. Additional public policies likely need to be considered to influence obesity on a large scale.

  7. Estimating severity of sideways fall using a generic multi linear regression model based on kinematic input variables.

    Science.gov (United States)

    van der Zijden, A M; Groen, B E; Tanck, E; Nienhuis, B; Verdonschot, N; Weerdesteyn, V

    2017-03-21

    Many research groups have studied fall impact mechanics to understand how fall severity can be reduced to prevent hip fractures. Yet, direct impact force measurements with force plates are restricted to a very limited repertoire of experimental falls. The purpose of this study was to develop a generic model for estimating hip impact forces (i.e. fall severity) in in vivo sideways falls without the use of force plates. Twelve experienced judokas performed sideways Martial Arts (MA) and Block ('natural') falls on a force plate, both with and without a mat on top. Data were analyzed to determine the hip impact force and to derive 11 selected (subject-specific and kinematic) variables. Falls from kneeling height were used to perform a stepwise regression procedure to assess the effects of these input variables and build the model. The final model includes four input variables, involving one subject-specific measure and three kinematic variables: maximum upper body deceleration, body mass, shoulder angle at the instant of 'maximum impact' and maximum hip deceleration. The results showed that estimated and measured hip impact forces were linearly related (explained variances ranging from 46 to 63%). Hip impact forces of MA falls onto the mat from a standing position (3650±916N) estimated by the final model were comparable with measured values (3698±689N), even though these data were not used for training the model. In conclusion, a generic linear regression model was developed that enables the assessment of fall severity through kinematic measures of sideways falls, without using force plates. Copyright © 2017 Elsevier Ltd. All rights reserved.
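
    A sketch of fitting and applying such a generic multiple linear regression on simulated falls: hip impact force is regressed on body mass and three kinematic inputs, mirroring the four-variable structure described. The coefficients and data are invented, not the study's.

        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(8)
        n = 60
        # Hypothetical predictors: upper-body decel (m/s^2), mass (kg),
        # shoulder angle (deg), hip decel (m/s^2).
        X = np.column_stack([
            rng.uniform(20, 80, n),
            rng.uniform(55, 95, n),
            rng.uniform(0, 60, n),
            rng.uniform(20, 80, n),
        ])
        force = 800 + 25 * X[:, 0] + 18 * X[:, 1] - 6 * X[:, 2] + 10 * X[:, 3] + rng.normal(0, 300, n)

        model = LinearRegression().fit(X, force)
        print(f"explained variance R^2 = {model.score(X, force):.2f}")
        print("predicted hip impact force (N):", model.predict([[50, 80, 30, 55]])[0].round())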

  8. Model-based inverse estimation for active contraction stresses of tongue muscles using 3D surface shape in speech production.

    Science.gov (United States)

    Koike, Narihiko; Ii, Satoshi; Yoshinaga, Tsukasa; Nozaki, Kazunori; Wada, Shigeo

    2017-11-07

    This paper presents a novel inverse estimation approach for the active contraction stresses of tongue muscles during speech. The proposed method is based on variational data assimilation using a mechanical tongue model and 3D tongue surface shapes for speech production. The mechanical tongue model considers nonlinear hyperelasticity, finite deformation, actual geometry from computed tomography (CT) images, and anisotropic active contraction by muscle fibers, the orientations of which are ideally determined using anatomical drawings. The tongue deformation is obtained by solving a stationary force-equilibrium equation using a finite element method. An inverse problem is established to find the combination of muscle contraction stresses that minimizes the Euclidean distance of the tongue surfaces between the mechanical analysis and CT results of speech production, where a signed-distance function represents the tongue surface. Our approach is validated through an ideal numerical example and extended to the real-world case of two Japanese vowels, /ʉ/ and /ɯ/. The results capture the target shape completely and provide an excellent estimation of the active contraction stresses in the ideal case, and exhibit similar tendencies as in previous observations and simulations for the actual vowel cases. The present approach can reveal the relative relationship among the muscle contraction stresses in similar utterances with different tongue shapes, and enables the investigation of the coordination of tongue muscles during speech using only the deformed tongue shape obtained from medical images. This will enhance our understanding of speech motor control. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. A Fast Elitism Gaussian Estimation of Distribution Algorithm and Application for PID Optimization

    Directory of Open Access Journals (Sweden)

    Qingyang Xu

    2014-01-01

    Estimation of distribution algorithm (EDA) is an intelligent optimization algorithm based on probability statistics theory. A fast elitism Gaussian estimation of distribution algorithm (FEGEDA) is proposed in this paper. A Gaussian probability model is used to model the solution distribution, and its parameters come from the statistical information of the best individuals, updated by a fast learning rule. The fast learning rule enhances the efficiency of the algorithm, and an elitism strategy is used to maintain convergence performance. The performance of the algorithm is examined on several benchmarks. In the simulations, a one-dimensional benchmark is used to visualize the optimization process and the probability-model learning process during the evolution, and several two-dimensional and higher-dimensional benchmarks are used to test the performance of FEGEDA. The experimental results indicate the capability of FEGEDA, especially on the higher-dimensional problems, where FEGEDA outperforms some other algorithms and EDAs. Finally, FEGEDA is used in PID controller optimization for a PMSM and compared with classical PID tuning and GA.
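
    A minimal Gaussian EDA loop illustrating the mechanics the abstract names: a Gaussian model fitted to the best individuals, a learning rate blending old and new parameters, and elitism. The objective, learning rate, and population sizes are assumptions, not the paper's settings.

        import numpy as np

        rng = np.random.default_rng(9)

        def sphere(x):                     # benchmark objective to minimize
            return (x ** 2).sum(axis=-1)

        dim, pop, elite_frac, lr = 10, 100, 0.3, 0.7
        mu, sigma = np.full(dim, 5.0), np.full(dim, 2.0)
        best = None

        for gen in range(100):
            X = rng.normal(mu, sigma, size=(pop, dim))
            if best is not None:
                X[0] = best                # elitism: keep the best-so-far individual
            f = sphere(X)
            order = np.argsort(f)
            best = X[order[0]].copy()
            top = X[order[: int(elite_frac * pop)]]
            # Fast learning rule: blend old Gaussian parameters with elite statistics.
            mu = (1 - lr) * mu + lr * top.mean(axis=0)
            sigma = (1 - lr) * sigma + lr * top.std(axis=0)

        print("best objective after 100 generations:", sphere(best))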

  10. A fast elitism Gaussian estimation of distribution algorithm and application for PID optimization.

    Science.gov (United States)

    Xu, Qingyang; Zhang, Chengjin; Zhang, Li

    2014-01-01

    Estimation of distribution algorithm (EDA) is an intelligent optimization algorithm based on the probability statistics theory. A fast elitism Gaussian estimation of distribution algorithm (FEGEDA) is proposed in this paper. The Gaussian probability model is used to model the solution distribution. The parameters of Gaussian come from the statistical information of the best individuals by fast learning rule. A fast learning rule is used to enhance the efficiency of the algorithm, and an elitism strategy is used to maintain the convergent performance. The performances of the algorithm are examined based upon several benchmarks. In the simulations, a one-dimensional benchmark is used to visualize the optimization process and probability model learning process during the evolution, and several two-dimensional and higher dimensional benchmarks are used to testify the performance of FEGEDA. The experimental results indicate the capability of FEGEDA, especially in the higher dimensional problems, and the FEGEDA exhibits a better performance than some other algorithms and EDAs. Finally, FEGEDA is used in PID controller optimization of PMSM and compared with the classical-PID and GA.

  11. Estimation of energy saving thanks to a reduced-model-based approach: Example of bread baking by jet impingement

    International Nuclear Information System (INIS)

    Alamir, M.; Witrant, E.; Della Valle, G.; Rouaud, O.; Josset, Ch.; Boillereaux, L.

    2013-01-01

    In this paper, a reduced-order mechanistic model is proposed for the evolution of temperature and humidity during French bread baking. The model parameters are identified using experimental data. The resulting model is then used to estimate the potential energy saving that can be obtained when jet impingement technology is used to increase the heat transfer efficiency. Results show up to 16% potential energy saving under certain assumptions. - Highlights: ► We developed a mechanistic model of heat and mass transfer in bread including different and multiple energy sources. ► An optimal control system makes it possible to track reference trajectories while minimizing energy consumption. ► The methodology is evaluated with the jet impingement technique. ► Results show a significant energy saving of about 17% with reasonable actuator variations.

  12. Estimating solid waste generation by hospitality industry during major festivals: A quantification model based on multiple regression.

    Science.gov (United States)

    Abdulredha, Muhammad; Al Khaddar, Rafid; Jordan, David; Kot, Patryk; Abdulridha, Ali; Hashim, Khalid

    2018-04-26

    Major religious festivals hosted in the city of Kerbala, Iraq, annually generate large quantities of Municipal Solid Waste (MSW), which negatively impacts the environment and human health when poorly managed. The hospitality sector, specifically hotels, is one of the major sources of MSW generated during these festivals. Because it is essential to establish a proper waste management system for such festivals, accurate information regarding MSW generation is required. This study therefore investigated the rate of production of MSW from hotels in Kerbala during major festivals. A field questionnaire survey was conducted with 150 hotels during the Arba'een festival, one of the largest festivals in the world, attended by about 18 million participants, to identify how much MSW is produced and which features of hotels affect this. Hotel managers responded to questions regarding features of the hotel such as size (Hs), expenditure (Hex), area (Ha) and number of staff (Hst). An on-site audit was also carried out with all participating hotels to estimate the mass of MSW generated. The results indicate that the MSW produced by hotels varies widely. In general, it was found that each hotel guest produces an estimated 0.89 kg of MSW per day. However, this figure varies according to the hotels' rating: average rates of MSW production for one and four star hotels were 0.83 and 1.22 kg per guest per day, respectively. Statistically, it was found that the relationship between MSW production and hotel features can be modelled with an R² of 0.799, where the influence of hotel features on MSW production followed the order Hs > Hex > Hst. Copyright © 2018 Elsevier Ltd. All rights reserved.

  13. Comparisons of Crosswind Velocity Profile Estimates Used in Fast-Time Wake Vortex Prediction Models

    Science.gov (United States)

    Pruis, Mathew J.; Delisi, Donald P.; Ahmad, Nashat N.

    2011-01-01

    Five methods for estimating crosswind profiles used in fast-time wake vortex prediction models are compared in this study. Previous investigations have shown that temporal and spatial variations in the crosswind vertical profile have a large impact on the transport and time evolution of the trailing vortex pair. The most important crosswind parameters are the magnitude of the crosswind and the gradient in the crosswind shear. It is known that pulsed and continuous wave lidar measurements can provide good estimates of the wind profile in the vicinity of airports. In this study comparisons are made between estimates of the crosswind profiles from a priori information on the trajectory of the vortex pair as well as crosswind profiles derived from different sensors and a regional numerical weather prediction model.

  14. Online model-based estimation of state-of-charge and open-circuit voltage of lithium-ion batteries in electric vehicles

    International Nuclear Information System (INIS)

    He, Hongwen; Zhang, Xiaowei; Xiong, Rui; Xu, Yongli; Guo, Hongqiang

    2012-01-01

    This paper presents a method to estimate the state-of-charge (SOC) of a lithium-ion battery, based on an online identification of its open-circuit voltage (OCV), according to the battery’s intrinsic relationship between the SOC and the OCV, for application in electric vehicles. Firstly, an equivalent circuit model with n RC networks is employed to model the polarization characteristic and the dynamic behavior of the lithium-ion battery; the corresponding equations are built to describe its electric behavior and a recursive function is deduced for the online identification of the OCV, which is implemented by a recursive least squares (RLS) algorithm with an optimal forgetting factor. The models with different RC networks are evaluated based on terminal voltage comparisons between the model-based simulation and the experiment. Then the OCV-SOC lookup table is built from the experimental data by a linear interpolation of the battery voltages at the same SOC during two consecutive discharge and charge cycles. Finally, a verifying experiment is carried out based on nine Urban Dynamometer Driving Schedules. It indicates that the proposed method can ensure an acceptable accuracy of SOC estimation for online application, with a maximum error of less than 5.0%. -- Highlights: ► An equivalent circuit model with n RC networks is built for lithium-ion batteries. ► A recursive function is deduced for the online estimation of the model parameters such as the OCV and the ohmic resistance R_O. ► The relationship between SOC and OCV is built with a linear interpolation method by experiments. ► The experiments show the online model-based SOC estimation is reasonable with enough accuracy.
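
    A compact sketch of the recursive least squares identification step described, on a deliberately simplified model (terminal voltage = OCV - R0*I, ignoring the RC networks): the parameter vector [OCV, R0] is updated with a forgetting factor. All values are invented.

        import numpy as np

        rng = np.random.default_rng(10)
        lam = 0.98                      # forgetting factor
        theta = np.zeros(2)             # parameter vector [OCV, R0]
        P = np.eye(2) * 1e3

        ocv_true, r0_true = 3.7, 0.05   # assumed battery parameters
        for k in range(500):
            i_k = rng.uniform(-2, 2)                               # load current (A)
            u_k = ocv_true - r0_true * i_k + rng.normal(0, 2e-3)   # terminal voltage (V)
            phi = np.array([1.0, -i_k])
            # RLS update with forgetting factor lam.
            g = P @ phi / (lam + phi @ P @ phi)
            theta = theta + g * (u_k - phi @ theta)
            P = (P - np.outer(g, phi) @ P) / lam

        print(f"identified OCV = {theta[0]:.3f} V, R0 = {theta[1]:.4f} ohm")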

  15. SpotCaliper: fast wavelet-based spot detection with accurate size estimation.

    Science.gov (United States)

    Püspöki, Zsuzsanna; Sage, Daniel; Ward, John Paul; Unser, Michael

    2016-04-15

    SpotCaliper is a novel wavelet-based image-analysis software package providing a fast automatic detection scheme for circular patterns (spots), combined with precise estimation of their size. It is implemented as an ImageJ plugin with a friendly user interface. The user is allowed to edit the results by modifying the measurements (in a semi-automated way) and to extract data for further analysis. The fine tuning of the detections includes the possibility of adjusting or removing the original detections, as well as adding further spots. The main advantage of the software is its ability to capture the size of spots in a fast and accurate way. Availability: http://bigwww.epfl.ch/algorithms/spotcaliper/. Contact: zsuzsanna.puspoki@epfl.ch. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  16. Model based on GRID-derived descriptors for estimating CYP3A4 enzyme stability of potential drug candidates

    Science.gov (United States)

    Crivori, Patrizia; Zamora, Ismael; Speed, Bill; Orrenius, Christian; Poggesi, Italo

    2004-03-01

    A number of computational approaches are being proposed for an early optimization of ADME (absorption, distribution, metabolism and excretion) properties to increase the success rate in drug discovery. The present study describes the development of an in silico model able to estimate, from the three-dimensional structure of a molecule, the stability of a compound with respect to the human cytochrome P450 (CYP) 3A4 enzyme activity. Stability data were obtained by measuring the amount of unchanged compound remaining after a standardized incubation with human cDNA-expressed CYP3A4. The computational method transforms the three-dimensional molecular interaction fields (MIFs) generated from the molecular structure into descriptors (VolSurf and Almond procedures). The descriptors were correlated to the experimental metabolic stability classes by a partial least squares discriminant procedure. The model was trained using a set of 1800 compounds from the Pharmacia collection and was validated using two test sets: the first one including 825 compounds from the Pharmacia collection and the second one consisting of 20 known drugs. This model correctly predicted 75% of the first and 85% of the second test set and showed a precision above 86% to correctly select metabolically stable compounds. The model appears a valuable tool in the design of virtual libraries to bias the selection toward more stable compounds. Abbreviations: ADME - absorption, distribution, metabolism and excretion; CYP - cytochrome P450; MIFs - molecular interaction fields; HTS - high throughput screening; DDI - drug-drug interactions; 3D - three-dimensional; PCA - principal components analysis; CPCA - consensus principal components analysis; PLS - partial least squares; PLSD - partial least squares discriminant; GRIND - grid independent descriptors; GRID - software originally created and developed by Professor Peter Goodford.

  17. Fast maximum likelihood estimation of mutation rates using a birth-death process.

    Science.gov (United States)

    Wu, Xiaowei; Zhu, Hongxiao

    2015-02-07

    Since fluctuation analysis was first introduced by Luria and Delbrück in 1943, it has been widely used to make inference about spontaneous mutation rates in cultured cells. Under certain model assumptions, the probability distribution of the number of mutants that appear in a fluctuation experiment can be derived explicitly, which provides the basis of mutation rate estimation. It has been shown that, among various existing estimators, the maximum likelihood estimator usually demonstrates some desirable properties such as consistency and lower mean squared error. However, its application in real experimental data is often hindered by slow computation of likelihood due to the recursive form of the mutant-count distribution. We propose a fast maximum likelihood estimator of mutation rates, MLE-BD, based on a birth-death process model with non-differential growth assumption. Simulation studies demonstrate that, compared with the conventional maximum likelihood estimator derived from the Luria-Delbrück distribution, MLE-BD achieves substantial improvement on computational speed and is applicable to arbitrarily large number of mutants. In addition, it still retains good accuracy on point estimation. Published by Elsevier Ltd.
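
    For context, a sketch of the classical computation the paper speeds up: the Luria-Delbrück mutant-count distribution evaluated with the Ma-Sandri-Sarkar recursion and maximized over the expected number of mutations m. The observed counts are invented; the paper's MLE-BD replaces this recursion with a birth-death formulation.

        import numpy as np
        from scipy.optimize import minimize_scalar

        def ld_pmf(m, kmax):
            """Luria-Delbruck pmf p_0..p_kmax via the Ma-Sandri-Sarkar recursion."""
            p = np.zeros(kmax + 1)
            p[0] = np.exp(-m)
            for k in range(1, kmax + 1):
                j = np.arange(k)
                p[k] = (m / k) * np.sum(p[j] / (k - j + 1))
            return p

        counts = np.array([0, 1, 0, 3, 12, 0, 2, 1, 0, 5])  # hypothetical mutant counts

        def neg_log_lik(m):
            p = ld_pmf(m, counts.max())
            return -np.sum(np.log(p[counts]))

        res = minimize_scalar(neg_log_lik, bounds=(1e-3, 20), method="bounded")
        print(f"MLE of expected mutations m = {res.x:.2f}")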

  18. Improved target detection and bearing estimation utilizing fast orthogonal search for real-time spectral analysis

    International Nuclear Information System (INIS)

    Osman, Abdalla; El-Sheimy, Naser; Noureldin, Aboelmagd; Theriault, Jim; Campbell, Scott

    2009-01-01

    The problem of target detection and tracking in the ocean environment has attracted considerable attention due to its importance in military and civilian applications. Sonobuoys are among the capable passive sonar systems used in underwater target detection. Target detection and bearing estimation are mainly obtained through spectral analysis of received signals. The frequency resolution offered by current techniques is limited, which affects the accuracy of target detection and bearing estimation at relatively low signal-to-noise ratio (SNR). This research investigates the development of a bearing estimation method using fast orthogonal search (FOS) for enhanced spectral estimation. FOS is employed in this research in order to improve both target detection and bearing estimation in the case of low-SNR inputs. The proposed methods were tested using simulated data developed for two different scenarios under different underwater environmental conditions. The results show that the proposed method is capable of enhancing the accuracy of target detection as well as bearing estimation, especially in cases of very low SNR.
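
    A bare-bones sketch of the greedy idea behind fast orthogonal search as a spectral estimator: from a dense dictionary of candidate sinusoids, repeatedly pick the term that most reduces the residual mean squared error. For clarity this uses plain least-squares refits rather than FOS's fast orthogonalization bookkeeping; the signal and candidate grid are invented.

        import numpy as np

        rng = np.random.default_rng(11)
        fs, n = 1000.0, 512
        t = np.arange(n) / fs
        # Hypothetical noisy tone at 123.4 Hz, off the FFT bin grid.
        x = np.sin(2 * np.pi * 123.4 * t + 0.7) + rng.normal(0, 1.0, n)

        freqs = np.arange(1.0, 500.0, 0.2)     # dense candidate grid (finer than FFT bins)
        residual = x.copy()
        chosen = []
        for _ in range(3):                     # pick a few dominant components
            best = None
            for f in freqs:
                basis = np.column_stack([np.cos(2 * np.pi * f * t), np.sin(2 * np.pi * f * t)])
                coef, res_ss, *_ = np.linalg.lstsq(basis, residual, rcond=None)
                explained = residual @ residual - (res_ss[0] if res_ss.size else 0.0)
                if best is None or explained > best[1]:
                    best = (f, explained, basis @ coef)
            chosen.append(best[0])
            residual = residual - best[2]      # subtract the fitted component

        print("estimated frequencies (Hz):", chosen)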

  19. Estimating random transverse velocities in the fast solar wind from EISCAT Interplanetary Scintillation measurements

    Directory of Open Access Journals (Sweden)

    A. Canals

    2002-09-01

    Interplanetary scintillation measurements can yield estimates of a large number of solar wind parameters, including bulk flow speed, variation in bulk velocity along the observing path through the solar wind and random variation in transverse velocity. This last parameter is of particular interest, as it can indicate the flux of low-frequency Alfvén waves, and the dissipation of these waves has been proposed as an acceleration mechanism for the fast solar wind. Analysis of IPS data is, however, a significantly unresolved problem and a variety of a priori assumptions must be made in interpreting the data. Furthermore, the results may be affected by the physical structure of the radio source and by variations in the solar wind along the scintillation ray path. We have used observations of simple point-like radio sources made with EISCAT between 1994 and 1998 to obtain estimates of random transverse velocity in the fast solar wind. The results obtained with various a priori assumptions made in the analysis are compared, and we hope thereby to be able to provide some indication of the reliability of our estimates of random transverse velocity and the variation of this parameter with distance from the Sun. Key words. Interplanetary physics (MHD waves and turbulence; solar wind plasma; instruments and techniques)

  1. A fast algorithm for estimating transmission probabilities in QTL detection designs with dense maps

    Directory of Open Access Journals (Sweden)

    Gilbert Hélène

    2009-11-01

    Background: In the case of an autosomal locus, four transmission events from the parents to progeny are possible, specified by the grand-parental origin of the alleles inherited by this individual. Computing the probabilities of these transmission events is essential in QTL detection methods. Results: A fast algorithm for the estimation of these probabilities conditional on parental phases has been developed. It is adapted to classical QTL detection designs applied to outbred populations, in particular designs composed of half- and/or full-sib families. It assumes the absence of interference. Conclusion: The theory is fully developed and an example is given.
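
    A toy illustration of the quantity being computed, under the stated no-interference assumption and the Haldane map function: the probability that a progeny's allele at a test position is grand-paternal, conditional on the origins observed at two flanking informative markers. This two-marker case is textbook material; the paper's algorithm handles full designs efficiently.

        import numpy as np

        def haldane_r(d_cm):
            """Recombination fraction from map distance (cM), Haldane, no interference."""
            return 0.5 * (1 - np.exp(-2 * d_cm / 100.0))

        def p_grandpaternal(d_left, d_right, origin_left, origin_right):
            """P(test-locus allele is grand-paternal | flanking origins); origins: 1=GP, 0=GM."""
            r1, r2 = haldane_r(d_left), haldane_r(d_right)

            def leg(r, same):   # probability of keeping (or switching) origin over an interval
                return 1 - r if same else r

            num = leg(r1, origin_left == 1) * leg(r2, origin_right == 1)
            den = num + leg(r1, origin_left == 0) * leg(r2, origin_right == 0)
            return num / den

        # Test position 5 cM from the left marker and 15 cM from the right one;
        # both flanking markers show the grand-paternal allele.
        print(p_grandpaternal(5.0, 15.0, 1, 1))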

  2. Optimal Model-Based Fault Estimation and Correction for Particle Accelerators and Industrial Plants Using Combined Support Vector Machines and First Principles Models

    International Nuclear Information System (INIS)

    2010-01-01

    Timely estimation of deviations from optimal performance in complex systems and the ability to identify corrective measures in response to the estimated parameter deviations has been the subject of extensive research over the past four decades. The implications in terms of lost revenue from costly industrial processes, operation of large-scale public works projects and the volume of the published literature on this topic clearly indicates the significance of the problem. Applications range from manufacturing industries (integrated circuits, automotive, etc.), to large-scale chemical plants, pharmaceutical production, power distribution grids, and avionics. In this project we investigated a new framework for building parsimonious models that are suited for diagnosis and fault estimation of complex technical systems. We used Support Vector Machines (SVMs) to model potentially time-varying parameters of a First-Principles (FP) description of the process. The combined SVM and FP model was built (i.e. model parameters were trained) using constrained optimization techniques. We used the trained models to estimate faults affecting simulated beam lifetime. In the case where a large number of process inputs are required for model-based fault estimation, the proposed framework performs an optimal nonlinear principal component analysis of the large-scale input space, and creates a lower dimension feature space in which fault estimation results can be effectively presented to the operation personnel. To fulfill the main technical objectives of the Phase I research, our Phase I efforts have focused on: (1) SVM Training in a Combined Model Structure - We developed the software for the constrained training of the SVMs in a combined model structure, and successfully modeled the parameters of a first-principles model for beam lifetime with support vectors. (2) Higher-order Fidelity of the Combined Model - We used constrained training to ensure that the output of the SVM (i.e. the

  3. Optimal Model-Based Fault Estimation and Correction for Particle Accelerators and Industrial Plants Using Combined Support Vector Machines and First Principles Models

    Energy Technology Data Exchange (ETDEWEB)

    Sayyar-Rodsari, Bijan; Schweiger, Carl; /SLAC /Pavilion Technologies, Inc., Austin, TX

    2010-08-25

    Timely estimation of deviations from optimal performance in complex systems and the ability to identify corrective measures in response to the estimated parameter deviations has been the subject of extensive research over the past four decades. The implications in terms of lost revenue from costly industrial processes, operation of large-scale public works projects and the volume of the published literature on this topic clearly indicates the significance of the problem. Applications range from manufacturing industries (integrated circuits, automotive, etc.), to large-scale chemical plants, pharmaceutical production, power distribution grids, and avionics. In this project we investigated a new framework for building parsimonious models that are suited for diagnosis and fault estimation of complex technical systems. We used Support Vector Machines (SVMs) to model potentially time-varying parameters of a First-Principles (FP) description of the process. The combined SVM & FP model was built (i.e. model parameters were trained) using constrained optimization techniques. We used the trained models to estimate faults affecting simulated beam lifetime. In the case where a large number of process inputs are required for model-based fault estimation, the proposed framework performs an optimal nonlinear principal component analysis of the large-scale input space, and creates a lower dimension feature space in which fault estimation results can be effectively presented to the operation personnel. To fulfill the main technical objectives of the Phase I research, our Phase I efforts have focused on: (1) SVM Training in a Combined Model Structure - We developed the software for the constrained training of the SVMs in a combined model structure, and successfully modeled the parameters of a first-principles model for beam lifetime with support vectors. (2) Higher-order Fidelity of the Combined Model - We used constrained training to ensure that the output of the SVM (i.e. the
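
    A miniature of the combined-model idea, with invented physics: an SVM (scikit-learn's SVR here) learns a time-varying parameter of a first-principles relation from an operating condition, and the first-principles model then uses the SVM output. The lifetime relation and data are hypothetical, and the project's constrained-training machinery is not reproduced.

        import numpy as np
        from sklearn.svm import SVR

        rng = np.random.default_rng(12)
        # Hypothetical first-principles model: beam lifetime tau = k / pressure,
        # with k drifting as an unknown function of an operating condition u.
        u = rng.uniform(0, 1, (300, 1))
        k_true = 2.0 + np.sin(3 * u[:, 0])
        pressure = rng.uniform(0.5, 2.0, 300)
        tau_obs = k_true / pressure + rng.normal(0, 0.02, 300)

        # Train the SVM on the parameter implied by the FP model (k = tau * pressure).
        svm = SVR(kernel="rbf").fit(u, tau_obs * pressure)

        # Combined-model prediction at a new operating point.
        u_new, p_new = np.array([[0.4]]), 1.3
        tau_pred = svm.predict(u_new)[0] / p_new
        print(f"predicted beam lifetime: {tau_pred:.3f}")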

  4. Survival modeling for the estimation of transition probabilities in model-based economic evaluations in the absence of individual patient data: a tutorial.

    Science.gov (United States)

    Diaby, Vakaramoko; Adunlin, Georges; Montero, Alberto J

    2014-02-01

    Survival modeling techniques are increasingly being used as part of decision modeling for health economic evaluations. As many models are available, it is imperative for interested readers to know about the steps in selecting and using the most suitable ones. The objective of this paper is to propose a tutorial for the application of appropriate survival modeling techniques to estimate transition probabilities, for use in model-based economic evaluations, in the absence of individual patient data (IPD). An illustration of the use of the tutorial is provided based on the final progression-free survival (PFS) analysis of the BOLERO-2 trial in metastatic breast cancer (mBC). An algorithm was adopted from Guyot and colleagues, and was then run in the statistical package R to reconstruct IPD, based on the final PFS analysis of the BOLERO-2 trial. It should be emphasized that the reconstructed IPD represent an approximation of the original data. Afterwards, we fitted parametric models to the reconstructed IPD in the statistical package Stata. Both statistical and graphical tests were conducted to verify the relative and absolute validity of the findings. Finally, the equations for transition probabilities were derived using the general equation for transition probabilities used in model-based economic evaluations, and the parameters were estimated from fitted distributions. The results of the application of the tutorial suggest that the log-logistic model best fits the reconstructed data from the latest published Kaplan-Meier (KM) curves of the BOLERO-2 trial. Results from the regression analyses were confirmed graphically. An equation for transition probabilities was obtained for each arm of the BOLERO-2 trial. In this paper, a tutorial was proposed and used to estimate the transition probabilities for model-based economic evaluation, based on the results of the final PFS analysis of the BOLERO-2 trial in mBC. The results of our study can serve as a basis for any model
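
    A sketch of the final step the tutorial describes, under one common log-logistic parameterization S(t) = 1 / (1 + (t/alpha)^beta): the time-dependent transition probability for a model cycle of length u is tp(t) = 1 - S(t) / S(t - u). Parameter values are invented.

        import numpy as np

        alpha, beta = 11.0, 1.6   # hypothetical log-logistic scale/shape (months)
        u = 1.0                   # model cycle length (months)

        def S(t):
            """Log-logistic survival function."""
            return 1.0 / (1.0 + (t / alpha) ** beta)

        def transition_prob(t):
            """Probability of progressing during the cycle ending at time t."""
            return 1.0 - S(t) / S(t - u)

        for t in [1.0, 6.0, 12.0, 24.0]:
            print(f"t = {t:4.1f} months: tp = {transition_prob(t):.3f}")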

  5. Estimation of the radial force on the tokamak vessel wall during fast transient events

    Energy Technology Data Exchange (ETDEWEB)

    Pustovitov, V. D., E-mail: pustovitov-vd@nrcki.ru [National Research Center Kurchatov Institute (Russian Federation)

    2016-11-15

    The radial force balance in a tokamak during fast transient events with a duration much shorter than the resistive time of the vacuum vessel wall is analyzed. The aim of the work is to analytically estimate the resulting integral radial force on the wall. In contrast to the preceding study [Plasma Phys. Rep. 41, 952 (2015)], where a similar problem was considered for thermal quench, simultaneous changes in the profiles and values of the pressure and plasma current are allowed here. Thereby, the current quench and various methods of disruption mitigation used in the existing tokamaks and considered for future applications are also covered. General formulas for the force at an arbitrary sequence or combination of events are derived, and estimates for the standard tokamak model are made. The earlier results and conclusions are confirmed, and it is shown that, in the disruption mitigation scenarios accepted for ITER, the radial forces can be as high as in uncontrolled disruptions.

  6. Fast state estimation subject to random data loss in discrete-time nonlinear stochastic systems

    Science.gov (United States)

    Mahdi Alavi, S. M.; Saif, Mehrdad

    2013-12-01

    This paper focuses on the design of the standard observer in discrete-time nonlinear stochastic systems subject to random data loss. Under the assumption that the system response is incrementally bounded, two sufficient conditions are subsequently derived that guarantee exponential mean-square stability and fast convergence of the estimation error for the problem at hand. An efficient algorithm is also presented to obtain the observer gain. Finally, the proposed methodology is employed for monitoring a Continuous Stirred Tank Reactor (CSTR) via a wireless communication network. The effectiveness of the designed observer is extensively assessed using an experimental test-bed that was fabricated for performance evaluation of estimation techniques operating over wireless networks under realistic radio channel conditions.

  7. Development and validation of a two-dimensional fast-response flood estimation model

    Energy Technology Data Exchange (ETDEWEB)

    Judi, David R [Los Alamos National Laboratory]; Mcpherson, Timothy N [Los Alamos National Laboratory]; Burian, Steven J [Univ. of Utah]

    2009-01-01

    A finite difference formulation of the shallow water equations using an upwind differencing method was developed maintaining computational efficiency and accuracy such that it can be used as a fast-response flood estimation tool. The model was validated using both laboratory controlled experiments and an actual dam breach. Through the laboratory experiments, the model was shown to give good estimations of depth and velocity when compared to the measured data, as well as when compared to a more complex two-dimensional model. Additionally, the model was compared to high water mark data obtained from the failure of the Taum Sauk dam. The simulated inundation extent agreed well with the observed extent, with the most notable differences resulting from the inability to model sediment transport. The results of these validation studies show that a relatively simple numerical scheme used to solve the complete shallow water equations can be used to accurately estimate flood inundation. Future work will focus on further reducing the computation time needed to provide flood inundation estimates for fast-response analyses. This will be accomplished through the efficient use of multi-core, multi-processor computers coupled with an efficient domain-tracking algorithm, as well as an understanding of the impacts of grid resolution on model results.

  8. Estimation of post disruption plasma temperature for fast current quench Aditya plasma shots

    International Nuclear Information System (INIS)

    Purohit, S.; Chowdhuri, M.B.; Joisa, Y.S.; Raval, J.V.; Ghosh, J.; Jha, R.

    2013-01-01

    Characteristics of tokamak current quenches are an important issue for the determination of the electromagnetic forces that act on the in-vessel components and vacuum vessel during major disruptions. It is observed that the thermal quench is followed by a sharp current decay. Fast current quench disruptive plasma shots were investigated for the ADITYA tokamak. The current decay time was determined for the selected shots and was in the range of 0.8 msec to 2.5 msec. This current decay information was then applied to the L/R model, frequently employed for the estimation of the current decay time in tokamak plasmas, which considers the plasma inductance and plasma resistivity. This methodology was adopted for the estimation of the post-disruption plasma temperature using the experimentally observed current decay time for the fast current quench disruptive ADITYA plasma shots. The study reveals that, for the identified shots, the current decay time increases steadily with the post-disruption plasma temperature. The investigation also explores the behavior of the post-disruption plasma temperature and the current decay time as functions of the edge safety factor, Q; both quantities decrease as the value of Q increases. (author)
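
    A hedged sketch of the L/R inversion described above: given a measured decay time tau, invert tau = L_p/R_p with Spitzer resistivity to recover the post-disruption electron temperature. The machine parameters, Z_eff, and Coulomb logarithm below are rough ADITYA-like values chosen for illustration, not the paper's inputs.

```python
# Invert the L/R current-quench model: measured decay time tau ->
# plasma resistance -> Spitzer resistivity -> electron temperature.
import numpy as np

mu0 = 4e-7 * np.pi
R0, a = 0.75, 0.25          # major/minor radius (m), illustrative values
li, Zeff, lnLambda = 1.0, 2.0, 15.0

# Plasma self-inductance (standard large-aspect-ratio estimate)
Lp = mu0 * R0 * (np.log(8 * R0 / a) - 2.0 + li / 2.0)

def Te_from_decay_time(tau):
    """Electron temperature (eV) implied by an L/R decay time tau (s)."""
    Rp = Lp / tau                     # plasma resistance from tau = L/R
    eta = Rp * a**2 / (2.0 * R0)      # resistivity: Rp = eta * 2*pi*R0 / (pi*a^2)
    # Spitzer: eta ~ 5.2e-5 * Zeff * lnLambda / Te[eV]^(3/2)  (ohm*m)
    return (5.2e-5 * Zeff * lnLambda / eta) ** (2.0 / 3.0)

for tau_ms in (0.8, 1.5, 2.5):
    print(f"tau = {tau_ms} ms  ->  Te ~ {Te_from_decay_time(tau_ms * 1e-3):.1f} eV")
```

    Consistent with the abstract, the recovered temperature increases monotonically with the observed decay time.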

  9. Global Performance of a Fast Parameterization Scheme for Estimating Surface Solar Radiation from MODIS data

    Science.gov (United States)

    Tang, W.; Yang, K.; Sun, Z.; Qin, J.; Niu, X.

    2016-12-01

    A fast parameterization scheme named SUNFLUX is used in this study to estimate instantaneous surface solar radiation (SSR) based on products from the Moderate Resolution Imaging Spectroradiometer (MODIS) sensor onboard both Terra and Aqua platforms. The scheme mainly takes into account the absorption and scattering processes due to clouds, aerosols and gas in the atmosphere. The estimated instantaneous SSR is evaluated against surface observations obtained from seven stations of the Surface Radiation Budget Network (SURFRAD), four stations in the North China Plain (NCP) and 40 stations of the Baseline Surface Radiation Network (BSRN). The statistical results for evaluation against these three datasets show that the relative root-mean-square error (RMSE) values of SUNFLUX are less than 15%, 16% and 17%, respectively. Daily SSR is derived through temporal upscaling from the MODIS-based instantaneous SSR estimates, and is validated against surface observations. The relative RMSE values for daily SSR estimates are about 16% at the seven SURFRAD stations, four NCP stations, 40 BSRN stations and 90 China Meteorological Administration (CMA) radiation stations.

  10. Fast and accurate spectral estimation for online detection of partial broken bar in induction motors

    Science.gov (United States)

    Samanta, Anik Kumar; Naha, Arunava; Routray, Aurobinda; Deb, Alok Kanti

    2018-01-01

    In this paper, an online and real-time system is presented for detecting a partial broken rotor bar (BRB) of inverter-fed squirrel cage induction motors under light load conditions. This system, with minor modifications, can detect any fault that affects the stator current. A fast and accurate spectral estimator based on the theory of the Rayleigh quotient is proposed for detecting the spectral signature of the BRB. The proposed spectral estimator can precisely determine the relative amplitude of fault sidebands and has low complexity compared to available high-resolution subspace-based spectral estimators. Detection of low-amplitude fault components has been improved by removing the high-amplitude fundamental frequency using an extended-Kalman based signal conditioner. Slip is estimated from the stator current spectrum for accurate localization of the fault component. Complexity and the cost of sensors are minimal as only a single-phase stator current is required. The hardware implementation has been carried out on an Intel i7 based embedded target ported through Simulink Real-Time. The evaluation of the detection threshold and the detectability of faults under different conditions of load and fault severity are carried out using the empirical cumulative distribution function.

  11. Fast simulation of transport and adaptive permeability estimation in porous media

    Energy Technology Data Exchange (ETDEWEB)

    Berre, Inga

    2005-07-01

    The focus of the thesis is twofold: Both fast simulation of transport in porous media and adaptive estimation of permeability are considered. A short introduction that motivates the work on these topics is given in Chapter 1. In Chapter 2, the governing equations for one- and two-phase flow in porous media are presented. Overall numerical solution strategies for the two-phase flow model are also discussed briefly. The concepts of streamlines and time-of-flight are introduced in Chapter 3. Methods for computing streamlines and time-of-flight are also presented in this chapter. Subsequently, in Chapters 4 and 5, the focus is on simulation of transport in a time-of-flight perspective. In Chapter 4, transport of fluids along streamlines is considered. Chapter 5 introduces a different viewpoint based on the evolution of isocontours of the fluid saturation. While the first chapters focus on the forward problem, which consists in solving a mathematical model given the reservoir parameters, Chapters 6, 7 and 8 are devoted to the inverse problem of permeability estimation. An introduction to the problem of identifying spatial variability in reservoir permeability by inversion of dynamic production data is given in Chapter 6. In Chapter 7, adaptive multiscale strategies for permeability estimation are discussed. Subsequently, Chapter 8 presents a level-set approach for improving piecewise constant permeability representations. Finally, Chapter 9 summarizes the results obtained in the thesis; in addition, the chapter gives some recommendations and suggests directions for future work. Part II In Part II, the following papers are included in the order they were completed: Paper A: A Streamline Front Tracking Method for Two- and Three-Phase Flow Including Capillary Forces. I. Berre, H. K. Dahle, K. H. Karlsen, and H. F. Nordhaug. In Fluid flow and transport in porous media: mathematical and numerical treatment (South Hadley, MA, 2001), volume 295 of Contemp. Math., pages 49

  12. DMPDS: A Fast Motion Estimation Algorithm Targeting High Resolution Videos and Its FPGA Implementation

    Directory of Open Access Journals (Sweden)

    Gustavo Sanchez

    2012-01-01

    Full Text Available This paper presents a new fast motion estimation (ME) algorithm targeting high resolution digital videos and its efficient hardware architecture design. The new Dynamic Multipoint Diamond Search (DMPDS) algorithm is a fast algorithm which increases the ME quality when compared with other fast ME algorithms. The DMPDS achieves a better digital video quality by reducing the occurrence of falls into local minima, especially in high definition videos. The quality results show that the DMPDS is able to reach an average PSNR gain of 1.85 dB when compared with the well-known Diamond Search (DS) algorithm. When compared to the optimum results generated by the Full Search (FS) algorithm the DMPDS shows a loss of only 1.03 dB in PSNR. On the other hand, the DMPDS achieved a complexity reduction of more than 45 times compared to FS. The quality gains over DS come with an expected increase in the DMPDS complexity, which uses 6.4 times more calculations than DS. The DMPDS architecture was designed focused on high performance and low cost, targeting the processing of Quad Full High Definition (QFHD) videos in real time (30 frames per second). The architecture was described in VHDL and synthesized to Altera Stratix 4 and Xilinx Virtex 5 FPGAs. The synthesis results show that the architecture is able to achieve processing rates higher than 53 QFHD fps, meeting the real-time requirement. The DMPDS architecture achieved the highest processing rate when compared to related works in the literature. This high processing rate was obtained by designing an architecture with a high operation frequency and a low number of cycles to process each block.
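
    For readers unfamiliar with the baseline the DMPDS improves on, here is a small sketch of the classic Diamond Search: a large diamond pattern is recentred while it keeps improving the SAD cost, then a small diamond refines the final vector. The synthetic frames and block size are illustrative, and the DMPDS-specific dynamic multipoint logic is not reproduced.

```python
# Classic Diamond Search (DS) block motion estimation, for illustration only.
import numpy as np

LDSP = [(0, 0), (0, 2), (0, -2), (2, 0), (-2, 0), (1, 1), (1, -1), (-1, 1), (-1, -1)]
SDSP = [(0, 0), (0, 1), (0, -1), (1, 0), (-1, 0)]

def sad(cur, ref, y, x, dy, dx, B):
    """Sum of absolute differences; inf for out-of-frame candidates."""
    yy, xx = y + dy, x + dx
    if yy < 0 or xx < 0 or yy + B > ref.shape[0] or xx + B > ref.shape[1]:
        return np.inf
    return np.abs(cur[y:y+B, x:x+B].astype(int) - ref[yy:yy+B, xx:xx+B].astype(int)).sum()

def diamond_search(cur, ref, y, x, B=16):
    """Motion vector (dy, dx) of the BxB block of `cur` at (y, x)."""
    dy = dx = 0
    while True:  # large diamond: recentre while the best point is not the center
        costs = [sad(cur, ref, y, x, dy + py, dx + px, B) for py, px in LDSP]
        best = int(np.argmin(costs))
        if best == 0:
            break
        dy += LDSP[best][0]
        dx += LDSP[best][1]
    costs = [sad(cur, ref, y, x, dy + py, dx + px, B) for py, px in SDSP]
    best = int(np.argmin(costs))  # small diamond: final one-pixel refinement
    return dy + SDSP[best][0], dx + SDSP[best][1]

# Smooth synthetic frame pair with a known (-3, +2) displacement
yy, xx = np.mgrid[0:64, 0:64]
ref = (128 + 60 * np.sin(yy / 5.0) * np.cos(xx / 7.0)).astype(np.uint8)
cur = np.roll(ref, (3, -2), axis=(0, 1))
print(diamond_search(cur, ref, 24, 24))  # should recover (-3, 2)
```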

  13. Estimation of fast neutron fluence in steel specimens type Laguna Verde in TRIGA Mark III reactor

    International Nuclear Information System (INIS)

    Galicia A, J.; Francois L, J. L.; Aguilar H, F.

    2015-09-01

    The main purpose of this work is to obtain the fluence of fast neutrons recorded within four specimens of carbon steel, similar to the material of the vessels of the BWR reactors of the Laguna Verde nuclear power plant, when subjected to the neutron flux in an experimental facility of the TRIGA Mark III reactor, calculating an irradiation time to age the material in an accelerated manner. For the calculation of the neutron flux in the specimens the Monte Carlo code MCNP5 was used. In an initial stage, three sheets of natural molybdenum and molybdenum trioxide (MoO3) were incorporated into a developed model of the TRIGA reactor operating at 1 MWth, to calculate the resulting activity for a given irradiation time. The results obtained were compared with activities measured experimentally in these same materials to validate the neutron flux calculated with the model used. Subsequently, the fast neutron flux received by the steel specimens when incorporated in the experimental facility E-16 of the reactor core model operating at nominal maximum power in steady state was calculated, and from these calculations the irradiation time required to reach neutron fluence values in the range of 10^18 n/cm^2 was obtained, which is the fluence estimated for the case of Laguna Verde after 32 years of effective operation at maximum power. (Author)

  14. Unbiased free energy estimates in fast nonequilibrium transformations using Gaussian mixtures

    International Nuclear Information System (INIS)

    Procacci, Piero

    2015-01-01

    In this paper, we present an improved method for obtaining unbiased estimates of the free energy difference between two thermodynamic states using the work distribution measured in nonequilibrium driven experiments connecting these states. The method is based on the assumption that any observed work distribution is given by a mixture of Gaussian distributions, whose normal components are identical in either direction of the nonequilibrium process, with weights regulated by the Crooks theorem. Using the prototypical example for the driven unfolding/folding of deca-alanine, we show that the predicted behavior of the forward and reverse work distributions, assuming a combination of only two Gaussian components with Crooks derived weights, explains surprisingly well the striking asymmetry in the observed distributions at fast pulling speeds. The proposed methodology opens the way for a perfectly parallel implementation of Jarzynski-based free energy calculations in complex systems

  15. A Generic Simulation Approach for the Fast and Accurate Estimation of the Outage Probability of Single Hop and Multihop FSO Links Subject to Generalized Pointing Errors

    KAUST Repository

    Ben Issaid, Chaouki; Park, Kihong; Alouini, Mohamed-Slim

    2017-01-01

    When assessing the performance of free space optical (FSO) communication systems, the outage probability encountered is generally very small, and thereby the use of naïve Monte Carlo simulations becomes prohibitively expensive. To estimate these rare event probabilities, we propose in this work an importance sampling approach which is based on the exponential twisting technique to offer fast and accurate results. In fact, we consider a variety of turbulence regimes, and we investigate the outage probability of FSO communication systems, under a generalized pointing error model based on the Beckmann distribution, for both single and multihop scenarios. Selected numerical simulations are presented to show the accuracy and the efficiency of our approach compared to naïve Monte Carlo.

  16. A Generic Simulation Approach for the Fast and Accurate Estimation of the Outage Probability of Single Hop and Multihop FSO Links Subject to Generalized Pointing Errors

    KAUST Repository

    Ben Issaid, Chaouki

    2017-07-28

    When assessing the performance of free space optical (FSO) communication systems, the outage probability encountered is generally very small, and thereby the use of naïve Monte Carlo simulations becomes prohibitively expensive. To estimate these rare event probabilities, we propose in this work an importance sampling approach which is based on the exponential twisting technique to offer fast and accurate results. In fact, we consider a variety of turbulence regimes, and we investigate the outage probability of FSO communication systems, under a generalized pointing error model based on the Beckmann distribution, for both single and multihop scenarios. Selected numerical simulations are presented to show the accuracy and the efficiency of our approach compared to naïve Monte Carlo.
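
    The twisting mechanism itself can be shown in a few lines. The toy below estimates a Gaussian tail probability by sampling from an exponentially twisted (mean-shifted) density and reweighting by the likelihood ratio; the actual FSO channel model (Beckmann pointing errors, turbulence, multihop) is far richer and is not reproduced here.

```python
# Rare-event estimation via exponential twisting: for X ~ N(0,1), tilting
# the density by exp(theta*x) shifts the mean to theta, and each sample is
# reweighted by the likelihood ratio exp(-theta*x + theta**2 / 2).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
gamma, N = -4.0, 100_000            # outage threshold and sample budget

# Naive Monte Carlo: almost no samples fall below gamma
x = rng.standard_normal(N)
p_naive = np.mean(x < gamma)

# Exponential twisting: sample from N(theta, 1) centered on the rare region
theta = gamma
y = rng.normal(theta, 1.0, N)
weights = np.exp(-theta * y + theta**2 / 2.0)
p_is = np.mean((y < gamma) * weights)

print(f"exact      {norm.cdf(gamma):.3e}")
print(f"naive MC   {p_naive:.3e}")
print(f"twisted IS {p_is:.3e}")
```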

  17. Estimation of interplate coupling along Nankai trough considering the block motion model based on onland GNSS and seafloor GPS/A observation data using MCMC method

    Science.gov (United States)

    Kimura, H.; Ito, T.; Tadokoro, K.

    2017-12-01

    Introduction In southwest Japan, the Philippine Sea plate is subducting under the overriding plate (the Amurian plate), and mega interplate earthquakes have occurred at intervals of about 100 years. About 70 years have passed since the last mega interplate earthquakes along the Nankai trough (1944 and 1946), meaning that strain has been accumulating at the plate interface. Therefore, it is essential to reveal the interplate coupling more precisely for predicting or understanding the mechanism of the next mega interplate earthquake. Recently, seafloor geodetic observation revealed the detailed interplate coupling distribution in the expected source region of the Nankai trough earthquake (e.g., Yokota et al. [2016]). In this study, we estimated the interplate coupling in southwest Japan, considering a block motion model and using seafloor geodetic observation data as well as onland GNSS observation data, based on a Markov Chain Monte Carlo (MCMC) method. Method Observed crustal deformation is assumed to be the sum of rigid block motion and elastic deformation due to coupling at block boundaries. We modeled this relationship as a nonlinear inverse problem in which the unknown parameters are the Euler pole of each block and the coupling at each subfault, and solved for them simultaneously with the MCMC method. The input data used in this study are 863 onland GNSS observations and 24 seafloor GPS/A observations. We constructed several block division models based on the map of active fault traces and selected the best model based on Akaike's Information Criterion (AIC); it consists of 12 blocks. Result We find that the interplate coupling along the Nankai trough has a heterogeneous spatial distribution, strong at depths of 0 to 20 km off the Tokai region and 0 to 30 km off the Shikoku region. Moreover, we find that the observed crustal deformation off the Tokai region is well explained by elastic deformation due to the subducting Izu Micro

  18. Application of GRACE to the assessment of model-based estimates of monthly Greenland Ice Sheet mass balance (2003-2012)

    Science.gov (United States)

    Schlegel, Nicole-Jeanne; Wiese, David N.; Larour, Eric Y.; Watkins, Michael M.; Box, Jason E.; Fettweis, Xavier; van den Broeke, Michiel R.

    2016-09-01

    Quantifying the Greenland Ice Sheet's future contribution to sea level rise is a challenging task that requires accurate estimates of ice sheet sensitivity to climate change. Forward ice sheet models are promising tools for estimating future ice sheet behavior, yet confidence is low because evaluation of historical simulations is challenging due to the scarcity of continental-wide data for model evaluation. Recent advancements in processing of Gravity Recovery and Climate Experiment (GRACE) data using Bayesian-constrained mass concentration ("mascon") functions have led to improvements in spatial resolution and noise reduction of monthly global gravity fields. Specifically, the Jet Propulsion Laboratory's JPL RL05M GRACE mascon solution (GRACE_JPL) offers an opportunity for the assessment of model-based estimates of ice sheet mass balance (MB) at ˜ 300 km spatial scales. Here, we quantify the differences between Greenland monthly observed MB (GRACE_JPL) and that estimated by state-of-the-art, high-resolution models, with respect to GRACE_JPL and model uncertainties. To simulate the years 2003-2012, we force the Ice Sheet System Model (ISSM) with anomalies from three different surface mass balance (SMB) products derived from regional climate models. Resulting MB is compared against GRACE_JPL within individual mascons. Overall, we find agreement in the northeast and southwest where MB is assumed to be primarily controlled by SMB. In the interior, we find a discrepancy in trend, which we presume to be related to millennial-scale dynamic thickening not considered by our model. In the northwest, seasonal amplitudes agree, but modeled mass trends are muted relative to GRACE_JPL. Here, discrepancies are likely controlled by temporal variability in ice discharge and other related processes not represented by our model simulations, i.e., hydrological processes and ice-ocean interaction. In the southeast, GRACE_JPL exhibits larger seasonal amplitude than predicted by the

  19. V2676 Oph: Estimating Physical Parameters of a Moderately Fast Nova

    Science.gov (United States)

    Raj, A.; Pavana, M.; Kamath, U. S.; Anupama, G. C.; Walter, F. M.

    2018-03-01

    Using our previously reported observations, we derive some physical parameters of the moderately fast nova V2676 Oph 2012 #1. The best-fit Cloudy model of the nebular spectrum obtained on 2015 May 8 shows a hot white dwarf source with T_BB ≈ 1.0×10^5 K having a luminosity of 1.0×10^38 erg/s. Our abundance analysis shows that the ejecta are significantly enhanced relative to solar: He/H = 2.14, O/H = 2.37, S/H = 6.62 and Ar/H = 3.25. The ejecta mass is estimated to be 1.42×10^-5 M⊙. The nova showed a pronounced dust formation phase after 90 d from discovery. The J-H and H-K colors were very large compared to those of other molecule- and dust-forming novae in recent years. The dust temperature and mass at two epochs have been estimated from spectral energy distribution fits to infrared photometry.

  20. FPSoC-Based Architecture for a Fast Motion Estimation Algorithm in H.264/AVC

    Directory of Open Access Journals (Sweden)

    Obianuju Ndili

    2009-01-01

    Full Text Available There is an increasing need for high quality video on low power, portable devices. Possible target applications range from entertainment and personal communications to security and health care. While H.264/AVC answers the need for high quality video at lower bit rates, it is significantly more complex than previous coding standards and thus results in greater power consumption in practical implementations. In particular, motion estimation (ME, in H.264/AVC consumes the largest power in an H.264/AVC encoder. It is therefore critical to speed-up integer ME in H.264/AVC via fast motion estimation (FME algorithms and hardware acceleration. In this paper, we present our hardware oriented modifications to a hybrid FME algorithm, our architecture based on the modified algorithm, and our implementation and prototype on a PowerPC-based Field Programmable System on Chip (FPSoC. Our results show that the modified hybrid FME algorithm on average, outperforms previous state-of-the-art FME algorithms, while its losses when compared with FSME, in terms of PSNR performance and computation time, are insignificant. We show that although our implementation platform is FPGA-based, our implementation results compare favourably with previous architectures implemented on ASICs. Finally we also show an improvement over some existing architectures implemented on FPGAs.

  1. Fast and Accurate Video PQoS Estimation over Wireless Networks

    Directory of Open Access Journals (Sweden)

    Emanuele Viterbo

    2008-06-01

    Full Text Available This paper proposes a curve fitting technique for fast and accurate estimation of the perceived quality of streaming media contents, delivered within a wireless network. The model accounts for the effects of various network parameters such as congestion, radio link power, and video transmission bit rate. The evaluation of the perceived quality of service (PQoS is based on the well-known VQM objective metric, a powerful technique which is highly correlated to the more expensive and time consuming subjective metrics. Currently, PQoS is used only for offline analysis after delivery of the entire video content. Thanks to the proposed simple model, we can estimate in real time the video PQoS and we can rapidly adapt the content transmission through scalable video coding and bit rates in order to offer the best perceived quality to the end users. The designed model has been validated through many different measurements in realistic wireless environments using an ad hoc WiFi test bed.

  2. Fast computation of statistical uncertainty for spatiotemporal distributions estimated directly from dynamic cone beam SPECT projections

    International Nuclear Information System (INIS)

    Reutter, Bryan W.; Gullberg, Grant T.; Huesman, Ronald H.

    2001-01-01

    spatiotemporal model parameter estimates, and use Monte Carlo simulations to validate a fast algorithm for computing the covariance matrix for the parameters. Given this covariance matrix, the covariance between the time-activity curve models for the blood input function and tissue volumes of interest can be calculated and used to estimate compartmental model kinetic parameters more precisely, using nonlinear weighted least squares [10,11

  3. TU-FG-BRB-03: Basis Vector Model Based Method for Proton Stopping Power Estimation From Experimental Dual Energy CT Data

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, S; Politte, D; O’Sullivan, J [Washington University in St. Louis, St. Louis, MO (United States); Han, D; Porras-Chaverri, M; Williamson, J [Virginia Commonwealth University, Richmond, VA (United States); Whiting, B [University of Pittsburgh, Pittsburgh, PA (United States)

    2016-06-15

    Purpose: This work aims at reducing the uncertainty in proton stopping power (SP) estimation by a novel combination of a linear, separable basis vector model (BVM) for stopping power calculation (Med Phys 43:600) and a statistical, model-based dual-energy CT (DECT) image reconstruction algorithm (TMI 35:685). The method was applied to experimental data. Methods: BVM assumes the photon attenuation coefficients, electron densities, and mean excitation energies (I-values) of unknown materials can be approximated by a combination of the corresponding quantities of two reference materials. The DECT projection data for a phantom with 5 different known materials was collected on a Philips Brilliance scanner using two scans at 90 kVp and 140 kVp. The line integral alternating minimization (LIAM) algorithm was used to recover the two BVM coefficient images using the measured source spectra. The proton stopping powers are then estimated from the Bethe-Bloch equation using electron densities and I-values derived from the BVM coefficients. The proton stopping powers and proton ranges for the phantom materials estimated via our BVM based DECT method are compared to ICRU reference values and a post-processing DECT analysis (Yang PMB 55:1343) applied to vendor-reconstructed images using the Torikoshi parametric fit model (tPFM). Results: For the phantom materials, the average stopping power estimations for 175 MeV protons derived from our method are within 1% of the ICRU reference values (except for Teflon with a 1.48% error), with an average standard deviation of 0.46% over pixels. The resultant proton ranges agree with the reference values within 2 mm. Conclusion: Our principled DECT iterative reconstruction algorithm, incorporating optimal beam hardening and scatter corrections, in conjunction with a simple linear BVM model, achieves more accurate and robust proton stopping power maps than the post-processing, nonlinear tPFM based DECT analysis applied to conventional
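
    To make the final step of the pipeline concrete, the sketch below evaluates the proton stopping-power ratio from an electron density and I-value of the kind the BVM coefficients yield, using the Bethe stopping number; the material values are illustrative placeholders, not the phantom's.

```python
# Stopping-power ratio (SPR) to water from (relative electron density, I-value)
# via the Bethe equation, as in the last stage of the BVM-based pipeline.
import numpy as np

ME_C2 = 0.511e6        # electron rest energy (eV)
MP_C2 = 938.272e6      # proton rest energy (eV)
I_WATER = 75.0         # water mean excitation energy (eV), a common choice

def beta2(T_MeV):
    """Squared speed (v/c)^2 of a proton with kinetic energy T."""
    gamma = 1.0 + T_MeV * 1e6 / MP_C2
    return 1.0 - 1.0 / gamma**2

def bethe_L(I_eV, b2):
    """Stopping number ln(2 m_e c^2 b2 / (I (1 - b2))) - b2."""
    return np.log(2.0 * ME_C2 * b2 / (I_eV * (1.0 - b2))) - b2

def spr(rho_e_rel, I_eV, T_MeV=175.0):
    """Stopping-power ratio to water for a proton of energy T."""
    b2 = beta2(T_MeV)
    return rho_e_rel * bethe_L(I_eV, b2) / bethe_L(I_WATER, b2)

# e.g. a soft-tissue-like voxel recovered from the BVM coefficients
print(f"SPR ~ {spr(rho_e_rel=1.04, I_eV=72.0):.4f}")
```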

  4. Model-based approach for quantitative estimates of skin, heart, and lung toxicity risk for left-side photon and proton irradiation after breast-conserving surgery.

    Science.gov (United States)

    Tommasino, Francesco; Durante, Marco; D'Avino, Vittoria; Liuzzi, Raffaele; Conson, Manuel; Farace, Paolo; Palma, Giuseppe; Schwarz, Marco; Cella, Laura; Pacelli, Roberto

    2017-05-01

    Proton beam therapy represents a promising modality for left-side breast cancer (BC) treatment, but concerns have been raised about skin toxicity and poor cosmesis. The aim of this study is to apply skin normal tissue complication probability (NTCP) model for intensity modulated proton therapy (IMPT) optimization in left-side BC. Ten left-side BC patients undergoing photon irradiation after breast-conserving surgery were randomly selected from our clinical database. Intensity modulated photon (IMRT) and IMPT plans were calculated with iso-tumor-coverage criteria and according to RTOG 1005 guidelines. Proton plans were computed with and without skin optimization. Published NTCP models were employed to estimate the risk of different toxicity endpoints for skin, lung, heart and its substructures. Acute skin NTCP evaluation suggests a lower toxicity level with IMPT compared to IMRT when the skin is included in proton optimization strategy (0.1% versus 1.7%, p < 0.001). Dosimetric results show that, with the same level of tumor coverage, IMPT attains significant heart and lung dose sparing compared with IMRT. By NTCP model-based analysis, an overall reduction in the cardiopulmonary toxicity risk prediction can be observed for all IMPT compared to IMRT plans: the relative risk reduction from protons varies between 0.1 and 0.7 depending on the considered toxicity endpoint. Our analysis suggests that IMPT might be safely applied without increasing the risk of severe acute radiation induced skin toxicity. The quantitative risk estimates also support the potential clinical benefits of IMPT for left-side BC irradiation due to lower risk of cardiac and pulmonary morbidity. The applied approach might be relevant on the long term for the setup of cost-effectiveness evaluation strategies based on NTCP predictions.

  5. Fast, accurate, and robust frequency offset estimation based on modified adaptive Kalman filter in coherent optical communication system

    Science.gov (United States)

    Yang, Yanfu; Xiang, Qian; Zhang, Qun; Zhou, Zhongqing; Jiang, Wen; He, Qianwen; Yao, Yong

    2017-09-01

    We propose a joint estimation scheme for fast, accurate, and robust frequency offset (FO) estimation along with phase estimation based on a modified adaptive Kalman filter (MAKF). The scheme consists of three key modules: an extended Kalman filter (EKF), a lock detector, and FO cycle slip recovery. The EKF module estimates the time-varying phase induced by both the FO and laser phase noise. The lock detector module makes the decision between acquisition mode and tracking mode and consequently sets the EKF tuning parameter in an adaptive manner. The third module can detect a possible cycle slip in the case of large FO and make the proper correction. Based on the simulation and experimental results, the proposed MAKF has shown excellent estimation performance featuring high accuracy and fast convergence, as well as the capability of cycle slip recovery.
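
    A drastically simplified sketch of the Kalman-tracking idea (not the authors' MAKF): the differential phase between consecutive samples, angle(r_k · conj(r_{k-1})), is a noisy measurement of 2π·FO·T, which a scalar random-walk Kalman filter can track. The signal model, noise levels, and the omission of modulation, lock detection, and cycle-slip recovery are all simplifying assumptions.

```python
# Scalar Kalman filter tracking the frequency offset of a toy unmodulated
# carrier corrupted by laser phase noise and measurement phase noise.
import numpy as np

rng = np.random.default_rng(0)
T, fo = 1e-9, 120e6                  # symbol period (s), true FO (Hz)
n = 2000
phase = 2 * np.pi * fo * T * np.arange(n) + np.cumsum(rng.normal(0, 0.01, n))
r = np.exp(1j * phase) * np.exp(1j * rng.normal(0, 0.05, n))

z = np.angle(r[1:] * np.conj(r[:-1]))        # measurements of 2*pi*fo*T
x, P, q, R = 0.0, 1.0, 1e-8, 2 * 0.05**2    # state = 2*pi*fo*T
for zk in z:
    P += q                                   # predict (random-walk state)
    K = P / (P + R)                          # Kalman gain
    x += K * (zk - x)                        # update
    P *= (1 - K)

print(f"estimated FO: {x / (2 * np.pi * T) / 1e6:.1f} MHz (true {fo / 1e6:.0f} MHz)")
```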

  6. Fast emission estimates in China and South Africa constrained by satellite observations

    Science.gov (United States)

    Mijling, Bas; van der A, Ronald

    2013-04-01

    Emission inventories of air pollutants are crucial information for policy makers and form important input data for air quality models. Unfortunately, bottom-up emission inventories, compiled from large quantities of statistical data, are easily outdated for emerging economies such as China and South Africa, where rapid economic growth changes emissions accordingly. Alternatively, top-down emission estimates from satellite observations of air constituents have the important advantages of being spatially consistent, having high temporal resolution, and enabling emission updates shortly after the satellite data become available. However, constraining emissions from observations of concentrations is computationally challenging. Within the GlobEmission project (part of the Data User Element programme of ESA) a new algorithm has been developed, specifically designed for fast daily emission estimates of short-lived atmospheric species on a mesoscopic scale (0.25 × 0.25 degree) from satellite observations of column concentrations. The algorithm needs only one forward model run from a chemical transport model to calculate the sensitivity of concentration to emission, using trajectory analysis to account for transport away from the source. By using a Kalman filter in the inverse step, optimal use of the a priori knowledge and the newly observed data is made. We apply the algorithm for NOx emission estimates in East China and South Africa, using the CHIMERE chemical transport model together with tropospheric NO2 column retrievals of the OMI and GOME-2 satellite instruments. The observations are used to construct a monthly emission time series, which reveals important emission trends such as the emission reduction measures during the Beijing Olympic Games, and the impact of and recovery from the global economic crisis. The algorithm is also able to detect emerging sources (e.g. new power plants) and improve emission information for areas where proxy data are poorly known or unavailable (e

  7. A fast-running core prediction model based on neural networks for load-following operations in a soluble boron-free reactor

    Energy Technology Data Exchange (ETDEWEB)

    Jang, Jin-wook [Korea Atomic Energy Research Institute, P.O. Box 105, Yusong, Daejon 305-600 (Korea, Republic of)], E-mail: Jinwook@kaeri.re.kr; Seong, Seung-Hwan [Korea Atomic Energy Research Institute, P.O. Box 105, Yusong, Daejon 305-600 (Korea, Republic of)], E-mail: shseong@kaeri.re.kr; Lee, Un-Chul [Department of Nuclear Engineering, Seoul National University, Shinlim-Dong, Gwanak-Gu, Seoul 151-742 (Korea, Republic of)

    2007-09-15

    A fast prediction model for load-following operations in a soluble boron-free reactor has been proposed, which can predict the core status when three or more control rod groups are moved at a time. This prediction model consists of two multilayer feedforward neural network models that retrieve the axial offset and the reactivity, and compensation models that compensate for the reactivity and axial offset arising from the xenon transient. The neural network training data were generated by taking various overlaps among the control rod groups into consideration, and the accuracy of the constructed neural network models was verified. Validation results for predicting load-following operations in a soluble boron-free reactor show that this model can predict the control rod positions needed to sustain core criticality during load-following operations without exceeding the tolerable axial offset band, and that it provides enough time for the operators to take the necessary actions to prevent a deviation from the tolerable operating band.

  8. A fast-running core prediction model based on neural networks for load-following operations in a soluble boron-free reactor

    International Nuclear Information System (INIS)

    Jang, Jin-wook; Seong, Seung-Hwan; Lee, Un-Chul

    2007-01-01

    A fast prediction model for load-following operations in a soluble boron-free reactor has been proposed, which can predict the core status when three or more control rod groups are moved at a time. This prediction model consists of two multilayer feedforward neural network models that retrieve the axial offset and the reactivity, and compensation models that compensate for the reactivity and axial offset arising from the xenon transient. The neural network training data were generated by taking various overlaps among the control rod groups into consideration, and the accuracy of the constructed neural network models was verified. Validation results for predicting load-following operations in a soluble boron-free reactor show that this model can predict the control rod positions needed to sustain core criticality during load-following operations without exceeding the tolerable axial offset band, and that it provides enough time for the operators to take the necessary actions to prevent a deviation from the tolerable operating band
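
    A minimal sketch of the two-network mapping described in the two records above, assuming synthetic stand-in data: two feedforward networks regress axial offset and reactivity from control-rod group positions. The response functions, network sizes, and the omission of the xenon compensation models are illustrative assumptions, not the authors' configuration.

```python
# Two multilayer feedforward networks mapping rod-group positions to
# axial offset and reactivity, trained on hypothetical smooth responses.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
rods = rng.uniform(0.0, 1.0, size=(2000, 3))   # 3 rod-group positions (fraction withdrawn)

# Hypothetical smooth core responses used as training targets
axial_offset = 0.3 * (rods[:, 0] - rods[:, 2]) + 0.1 * np.sin(np.pi * rods[:, 1])
reactivity = 0.5 * rods.sum(axis=1) - 0.2 * rods[:, 1] ** 2

nn_ao = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=3000,
                     random_state=0).fit(rods, axial_offset)
nn_rho = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=3000,
                      random_state=0).fit(rods, reactivity)

query = np.array([[0.6, 0.4, 0.8]])            # a candidate rod configuration
print("predicted axial offset:", nn_ao.predict(query)[0])
print("predicted reactivity  :", nn_rho.predict(query)[0])
```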

  9. A Simple Plasma Retinol Isotope Ratio Method for Estimating β-Carotene Relative Bioefficacy in Humans: Validation with the Use of Model-Based Compartmental Analysis.

    Science.gov (United States)

    Ford, Jennifer Lynn; Green, Joanne Balmer; Lietz, Georg; Oxley, Anthony; Green, Michael H

    2017-09-01

    Background: Provitamin A carotenoids are an important source of dietary vitamin A for many populations. Thus, accurate and simple methods for estimating carotenoid bioefficacy are needed to evaluate the vitamin A value of test solutions and plant sources. β-Carotene bioefficacy is often estimated from the ratio of the areas under plasma isotope response curves after subjects ingest labeled β-carotene and a labeled retinyl acetate reference dose [isotope reference method (IRM)], but to our knowledge, the method has not yet been evaluated for accuracy. Objectives: Our objectives were to develop and test a physiologically based compartmental model that includes both absorptive and postabsorptive β-carotene bioconversion and to use the model to evaluate the accuracy of the IRM and a simple plasma retinol isotope ratio [(RIR), labeled β-carotene-derived retinol/labeled reference-dose-derived retinol in one plasma sample] for estimating relative bioefficacy. Methods: We used model-based compartmental analysis (Simulation, Analysis and Modeling software) to develop and apply a model that provided known values for β-carotene bioefficacy. Theoretical data for 10 subjects were generated by the model and used to determine bioefficacy by RIR and IRM; predictions were compared with known values. We also applied RIR and IRM to previously published data. Results: Plasma RIR accurately predicted β-carotene relative bioefficacy at 14 d or later. IRM also accurately predicted bioefficacy by 14 d, except that, when there was substantial postabsorptive bioconversion, IRM underestimated bioefficacy. Based on our model, 1-d predictions of relative bioefficacy include absorptive plus a portion of early postabsorptive conversion. Conclusion: The plasma RIR is a simple tracer method that accurately predicts β-carotene relative bioefficacy based on analysis of one blood sample obtained at ≥14 d after co-ingestion of labeled β-carotene and retinyl acetate. The method also provides

  10. Accurate and fast methods to estimate the population mutation rate from error prone sequences

    Directory of Open Access Journals (Sweden)

    Miyamoto Michael M

    2009-08-01

    Full Text Available Abstract Background The population mutation rate (θ) remains one of the most fundamental parameters in genetics, ecology, and evolutionary biology. However, its accurate estimation can be seriously compromised when working with error prone data such as expressed sequence tags, low coverage draft sequences, and other such unfinished products. This study is premised on the simple idea that a random sequence error due to a chance accident during data collection or recording will be distributed within a population dataset as a singleton (i.e., as a polymorphic site where one sampled sequence exhibits a unique base relative to the common nucleotide of the others). Thus, one can avoid these random errors by ignoring the singletons within a dataset. Results This strategy is implemented under an infinite sites model that focuses on only the internal branches of the sample genealogy where a shared polymorphism can arise (i.e., a variable site where each alternative base is represented by at least two sequences). This approach is first used to derive independently the same new Watterson and Tajima estimators of θ, as recently reported by Achaz [1] for error prone sequences. It is then used to modify the recent, full, maximum-likelihood model of Knudsen and Miyamoto [2], which incorporates various factors for experimental error and design with those for coalescence and mutation. These new methods are all accurate and fast according to evolutionary simulations and analyses of a real complex population dataset for the California sea hare. Conclusion In light of these results, we recommend the use of these three new methods for the determination of θ from error prone sequences. In particular, we advocate the new maximum likelihood model as a starting point for the further development of more complex coalescent/mutation models that also account for experimental error and design.
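
    The singleton-exclusion idea lends itself to a compact illustration. The sketch below computes classic Watterson's θ and a folded singleton-excluded variant in which only sites whose minor allele appears at least twice are counted, with the expectation E[S'] = θ(a_n − 1 − 1/(n−1)), where a_n = Σ_{i=1}^{n−1} 1/i, supplying the corrected denominator; this follows the idea of the estimators discussed above rather than reproducing their exact published forms.

```python
# Watterson's theta with and without (folded) singletons, for illustration.
import numpy as np

def theta_watterson(seqs, drop_singletons=True):
    """Estimate theta from a list of equal-length aligned sequences."""
    aln = np.array([list(s) for s in seqs])
    n = aln.shape[0]
    a_n = sum(1.0 / i for i in range(1, n))   # a_n = sum_{i=1}^{n-1} 1/i
    S = 0
    for col in aln.T:
        _, counts = np.unique(col, return_counts=True)
        if len(counts) < 2:
            continue                           # monomorphic site
        if drop_singletons and counts.min() == 1:
            continue                           # minor allele seen once: skip
        S += 1
    # folded correction: frequency classes i = 1 and i = n-1 are removed
    denom = a_n - 1.0 - 1.0 / (n - 1) if drop_singletons else a_n
    return S / denom

sample = ["ACGTACGT", "ACGTACGA", "ACCTACGA", "ACCTACCT"]
print("theta (singletons dropped):", theta_watterson(sample))
print("theta (classic Watterson) :", theta_watterson(sample, False))
```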

  11. Estimation of pathological tremor from recorded signals based on adaptive sliding fast Fourier transform

    Directory of Open Access Journals (Sweden)

    Shengxin Wang

    2016-06-01

    Full Text Available Pathological tremor is an approximately rhythmic movement that considerably affects patients' daily living activities. Biomechanical loading and functional electrical stimulation have been proposed as potential alternatives for canceling pathological tremor. However, the performance of these suppression methods depends on the separation of the tremor from the recorded signals. In this work, an algorithm incorporating a fast Fourier transform augmented with a sliding convolution window, an interpolation procedure, and a frequency damping module is presented to isolate the tremulous components from the measured signals and estimate the instantaneous tremor frequency. Meanwhile, a mechanical platform was designed to provide simulated tremor signals with different degrees of voluntary movement. The performance of the proposed algorithm and of existing procedures is compared on simulated signals and on experimental signals collected from patients. The results demonstrate that the proposed solution can detect the unknown dominant frequency and distinguish the tremor components with higher accuracy. Therefore, this algorithm is useful for actively compensating tremor by functional electrical stimulation without affecting the voluntary movement.
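
    A minimal sketch of the sliding-FFT component, assuming illustrative parameters: a short window moves along the signal, the spectrum is searched within a plausible tremor band, and the peak is refined by parabolic interpolation between FFT bins (the paper's interpolation and damping modules are more elaborate).

```python
# Sliding-window FFT tracking of a tremor frequency buried in slow movement.
import numpy as np

fs = 100.0                                   # sampling rate (Hz)
t = np.arange(0, 20, 1 / fs)
sig = np.sin(2 * np.pi * 0.4 * t)            # slow voluntary movement
sig += 0.5 * np.sin(2 * np.pi * (5 + 0.05 * t) * t)  # tremor sweeping ~5-7 Hz

def dominant_freq(frame, fs, band=(3.0, 12.0)):
    """Tremor-band peak frequency of one window, bin-interpolated."""
    w = frame * np.hanning(len(frame))
    spec = np.abs(np.fft.rfft(w))
    freqs = np.fft.rfftfreq(len(frame), 1 / fs)
    idx = np.where((freqs >= band[0]) & (freqs <= band[1]))[0]
    k = idx[np.argmax(spec[idx])]
    a, b, c = spec[k - 1], spec[k], spec[k + 1]   # parabolic refinement
    delta = 0.5 * (a - c) / (a - 2 * b + c)
    return (k + delta) * fs / len(frame)

win, hop = 256, 32                           # sliding window setup
est = [dominant_freq(sig[i:i + win], fs) for i in range(0, len(sig) - win, hop)]
print("first/last tremor frequency estimates:",
      round(est[0], 2), "Hz,", round(est[-1], 2), "Hz")
```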

  12. FAST LABEL: Easy and efficient solution of joint multi-label and estimation problems

    KAUST Repository

    Sundaramoorthi, Ganesh

    2014-06-01

    We derive an easy-to-implement and efficient algorithm for solving multi-label image partitioning problems in the form of the problem addressed by Region Competition. These problems jointly determine a parameter for each of the regions in the partition. Given an estimate of the parameters, a fast approximate solution to the multi-label sub-problem is derived by a global update that uses smoothing and thresholding. The method is empirically validated to be robust to fine details of the image that plague local solutions. Further, in comparison to global methods for the multi-label problem, the method is more efficient and it is easy for a non-specialist to implement. We give sample Matlab code for the multi-label Chan-Vese problem in this paper! Experimental comparison to the state-of-the-art in multi-label solutions to Region Competition shows that our method achieves equal or better accuracy, with the main advantage being speed and ease of implementation.

  13. The MusIC method: a fast and quasi-optimal solution to the muscle forces estimation problem

    OpenAIRE

    Muller , Antoine; Pontonnier , Charles; Dumont , Georges

    2018-01-01

    International audience; The present paper aims at presenting a fast and quasi-optimal method of muscle forces estimation: the MusIC method. It consists in interpolating a first estimation in a database generated offline thanks to a classical optimization problem, and then correcting it to respect the motion dynamics. Three different cost functions – two polynomial criteria and a min/max criterion – were tested on a planar musculoskeletal model. The MusIC method provides a computation frequenc...

  14. A Discussion on Uncertainty Representation and Interpretation in Model-based Prognostics Algorithms based on Kalman Filter Estimation Applied to Prognostics of Electronics Components

    Data.gov (United States)

    National Aeronautics and Space Administration — This article presented a discussion on uncertainty representation and management for model-based prognostics methodologies based on the Bayesian tracking framework...

  15. Consumer Estimation of Recommended and Actual Calories at Fast Food Restaurants

    OpenAIRE

    Elbel, Brian

    2011-01-01

    Recently, localities across the United States have passed laws requiring the mandatory labeling of calories in all chain restaurants, including fast food restaurants. This policy is set to be implemented at the federal level. Early studies have found these policies to be at best minimally effective in altering food choice at a population level. This paper uses receipt and survey data collected from consumers outside fast food restaurants in low-income communities in New York City (NYC) (which...

  16. Estimation of the cost-effectiveness of HIV prevention portfolios for people who inject drugs in the United States: A model-based analysis.

    Directory of Open Access Journals (Sweden)

    Cora L Bernard

    2017-05-01

    Full Text Available The risks of HIV transmission associated with the opioid epidemic make cost-effective programs for people who inject drugs (PWID) a public health priority. Some of these programs have benefits beyond prevention of HIV, a critical consideration given that injection drug use is increasing across most United States demographic groups. To identify high-value HIV prevention program portfolios for US PWID, we consider combinations of four interventions with demonstrated efficacy: opioid agonist therapy (OAT), needle and syringe programs (NSPs), HIV testing and treatment (Test & Treat), and oral HIV pre-exposure prophylaxis (PrEP). We adapted an empirically calibrated dynamic compartmental model and used it to assess the discounted costs (in 2015 US dollars), health outcomes (HIV infections averted, change in HIV prevalence, and discounted quality-adjusted life years [QALYs]), and incremental cost-effectiveness ratios (ICERs) of the four prevention programs, considered singly and in combination over a 20-y time horizon. We obtained epidemiologic, economic, and health utility parameter estimates from the literature, previously published models, and expert opinion. We estimate that expansions of OAT, NSPs, and Test & Treat implemented singly up to 50% coverage levels can be cost-effective relative to the next highest coverage level (low, medium, and high at 40%, 45%, and 50%, respectively), and that OAT, which we assume to have immediate and direct health benefits for the individual, has the potential to be the highest value investment, even under scenarios where it prevents fewer infections than other programs. Although a model-based analysis can provide only estimates of health outcomes, we project that, over 20 y, 50% coverage with OAT could avert up to 22,000 (95% CI: 5,200, 46,000) infections and cost US$18,000 (95% CI: US$14,000, US$24,000) per QALY gained, 50% NSP coverage could avert up to 35,000 (95% CI: 8,900, 43,000) infections and cost US$25,000 (95% CI: US

  17. The MusIC method: a fast and quasi-optimal solution to the muscle forces estimation problem.

    Science.gov (United States)

    Muller, A; Pontonnier, C; Dumont, G

    2018-02-01

    The present paper aims at presenting a fast and quasi-optimal method of muscle forces estimation: the MusIC method. It consists in interpolating a first estimation in a database generated offline thanks to a classical optimization problem, and then correcting it to respect the motion dynamics. Three different cost functions - two polynomial criteria and a min/max criterion - were tested on a planar musculoskeletal model. The MusIC method provides a computation frequency approximately 10 times higher compared to a classical optimization problem with a relative mean error of 4% on cost function evaluation.

  18. Fast estimation of defect profiles from the magnetic flux leakage signal based on a multi-power affine projection algorithm.

    Science.gov (United States)

    Han, Wenhua; Shen, Xiaohui; Xu, Jun; Wang, Ping; Tian, Guiyun; Wu, Zhengyang

    2014-09-04

    Magnetic flux leakage (MFL) inspection is one of the most important and sensitive nondestructive testing approaches. For online MFL inspection of a long-range railway track or oil pipeline, a fast and effective defect profile estimation method based on a multi-power affine projection algorithm (MAPA) is proposed, in which the depth of a sampling point is related not only to the MFL signals before it, but also to the ones after it, and all of the sampling points related to one point appear in series or at multiple powers. Defect profile estimation has two steps: regulating a weight vector in an MAPA filter and estimating a defect profile with the MAPA filter. Both simulation and experimental data are used to test the performance of the proposed method. The results demonstrate that the proposed method exhibits high speed while keeping the estimated profiles close to the desired ones in a noisy environment, thereby meeting the demand of accurate online inspection.
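
    For orientation, the sketch below implements the standard affine projection algorithm (APA) update that the multi-power MAPA builds on, applied to a toy system identification task; the MFL-specific mapping from signal to defect depth profile and the multi-power weighting are not reproduced.

```python
# Standard affine projection algorithm (APA): each iteration projects the
# weight vector toward consistency with the last K input/desired pairs.
import numpy as np

rng = np.random.default_rng(0)
M, K, mu, delta = 8, 4, 0.5, 1e-4            # taps, projection order, step, reg.
w_true = rng.standard_normal(M)              # unknown system to identify

w = np.zeros(M)
x = rng.standard_normal(5000)
for n in range(M + K, len(x)):
    # Last K input regressors stacked as rows of A (shape K x M)
    A = np.array([x[n - k - M + 1 : n - k + 1][::-1] for k in range(K)])
    d = A @ w_true + 0.01 * rng.standard_normal(K)   # noisy desired outputs
    e = d - A @ w
    w += mu * A.T @ np.linalg.solve(A @ A.T + delta * np.eye(K), e)

print("max tap error:", np.abs(w - w_true).max())
```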

  19. On the fast estimation of transit times application to BWR simulated data

    International Nuclear Information System (INIS)

    Antonopoulos-Domis, M.; Marseguerra, M.; Padovani, E.

    1996-01-01

    Real time estimators of transit times are proposed. BWR noise is simulated, including a global component due to rod vibration. The time series obtained from the simulation is used to investigate the robustness and noise immunity of the estimators. It is found that, in the presence of a coincident (global) signal, the cross-correlation function is the worst estimator. (authors)

  20. Finite Time Fault Tolerant Control for Robot Manipulators Using Time Delay Estimation and Continuous Nonsingular Fast Terminal Sliding Mode Control.

    Science.gov (United States)

    Van, Mien; Ge, Shuzhi Sam; Ren, Hongliang

    2016-04-28

    In this paper, a novel finite time fault tolerant control (FTC) is proposed for uncertain robot manipulators with actuator faults. First, a finite time passive FTC (PFTC) based on a robust nonsingular fast terminal sliding mode control (NFTSMC) is investigated. To address the disadvantages of the PFTC, an active FTC (AFTC) is then investigated by combining NFTSMC with a simple fault diagnosis scheme. In this scheme, an online fault estimation algorithm based on time delay estimation (TDE) is proposed to approximate actuator faults. The estimated fault information is used to detect, isolate, and accommodate the effect of the faults in the system. Then, a robust AFTC law is established by combining the obtained fault information and a robust NFTSMC. Finally, a high-order sliding mode (HOSM) control based on the super-twisting algorithm is employed to eliminate chattering. In comparison to the PFTC and other state-of-the-art approaches, the proposed AFTC scheme possesses several advantages, such as high precision, strong robustness, no singularity, less chattering, and fast finite-time convergence due to the combined NFTSMC and HOSM control, and it requires no prior knowledge of the fault thanks to the TDE-based fault estimation. Finally, simulation results are obtained to verify the effectiveness of the proposed strategy.

  1. Distributed weighted least-squares estimation with fast convergence for large-scale systems.

    Science.gov (United States)

    Marelli, Damián Edgardo; Fu, Minyue

    2015-01-01

    In this paper we study a distributed weighted least-squares estimation problem for a large-scale system consisting of a network of interconnected sub-systems. Each sub-system is concerned with a subset of the unknown parameters and has a measurement linear in the unknown parameters with additive noise. The distributed estimation task is for each sub-system to compute the globally optimal estimate of its own parameters using its own measurement and information shared with the network through neighborhood communication. We first provide a fully distributed iterative algorithm to asymptotically compute the global optimal estimate. The convergence rate of the algorithm will be maximized using a scaling parameter and a preconditioning method. This algorithm works for a general network. For a network without loops, we also provide a different iterative algorithm to compute the global optimal estimate which converges in a finite number of steps. We include numerical experiments to illustrate the performances of the proposed methods.

  2. Validation and uncertainty estimation of fast neutron activation analysis method for Cu, Fe, Al, Si elements in sediment samples

    International Nuclear Information System (INIS)

    Sunardi; Samin Prihatin

    2010-01-01

    Validation and uncertainty estimation of the Fast Neutron Activation Analysis (FNAA) method for Cu, Fe, Al, Si elements in sediment samples has been conducted. The aim of the research is to confirm whether the FNAA method still conforms to the ISO/IEC 17025:2005 standard. The research covered verification, performance testing, validation of FNAA, and uncertainty estimation. The SRM 8704 standard and sediment samples were weighed to certain weights, irradiated with 14 MeV fast neutrons and then counted using gamma spectrometry. The results of the method validation for the Cu, Fe, Al, Si elements showed that the accuracy was in the range of 95.89-98.68 %, while the precision was in the range of 1.13-2.29 %. The results of the uncertainty estimation for Cu, Fe, Al, and Si were 2.67, 1.46, 1.71 and 1.20 % respectively. From these data, it can be concluded that the FNAA method is still reliable and valid for analyzing element contents in samples, because the accuracy is above 95 % and the precision is under 5 %, while the uncertainties are relatively small and suitable for the 95 % level of confidence, where the maximum acceptable uncertainty is 5 %. (author)

  3. Very Fast Estimation of Epicentral Distance and Magnitude from a Single Three Component Seismic Station Using Machine Learning Techniques

    Science.gov (United States)

    Ochoa Gutierrez, L. H.; Niño Vasquez, L. F.; Vargas-Jimenez, C. A.

    2012-12-01

    To minimize adverse effects caused by high magnitude earthquakes, early warning has become a powerful tool to anticipate a seismic wave arrival at a specific location and to bring people and government agencies opportune information to initiate a fast response. To do this, a very fast and accurate characterization of the event must be done, but this process is often performed using seismograms recorded at no fewer than four stations, where the processing time is usually greater than the wave travel time to the area of interest, mainly in coarse networks. A faster process can be achieved if only one three component seismic station is used, namely the closest unsaturated station with respect to the epicenter. Here we present a Support Vector Regression algorithm which calculates Magnitude and Epicentral Distance using only 5 seconds of signal since the P wave onset. This algorithm was trained with 36 records of historical earthquakes, where the input consisted of regression parameters of an exponential function, estimated by least squares, corresponding to the waveform envelope, and the maximum value of the observed waveform for each component in one single station. A 10-fold Cross Validation was applied for a Normalized Polynomial Kernel, obtaining the mean absolute error for different exponents and complexity parameters. Magnitude could be estimated with 0.16 of mean absolute error and the distance with an error of 7.5 km for distances within 60 to 120 km. This kind of algorithm is easy to implement in hardware and can be used directly in the field station to make possible the broadcast of estimations of these values to generate fast decisions at seismological control centers, increasing the possibility of having an effective reaction.

  4. Fast Estimation of Epicentral Distance and Magnitude from a Single Three Component Seismic Station Using Machine Learning Techniques

    Science.gov (United States)

    Ochoa Gutierrez, L. H.; Niño, L. F.; Vargas-Jimenez, C. A.

    2013-05-01

    To minimize adverse effects caused by high magnitude earthquakes, early warning has become a powerful tool to anticipate a seismic wave arrival to a specific location, bringing opportune information to people and government agencies to initiate a fast response. To do this, a very fast and accurate characterization of the event must be done but this process is often made using seismograms recorded in at least four stations where processing time is usually greater than the wave arrival time to the interest area, mainly in seismological coarse networks. A faster process can be done if only one three component seismic station, the closest unsaturated station with respect to the epicenter, is used. Here, we present a Support Vector Regression algorithm which calculates Magnitude and Epicentral Distance using only five seconds of signal since P wave onset. This algorithm was trained with 36 records of historical earthquakes, where the input included regression parameters of an exponential function estimated by least squares, of the waveform envelope and the maximum value of the observed waveform for each component in a single station. A ten-fold Cross Validation was applied for a Normalized Polynomial Kernel, obtaining the mean absolute error for different exponents and complexity parameters. The Magnitude could be estimated with 0.16 units of mean absolute error and the distance with an error of 7.5 km for distances within 60 to 120 km. This kind of algorithm is easy to implement in hardware and can be used directly in the field seismological sensor, making it possible to broadcast these estimates and generate fast decisions at seismological control centers, increasing the possibility of having an effective reaction.

  5. Teach it Yourself - Fast Modeling of Industrial Objects for 6D Pose Estimation

    DEFF Research Database (Denmark)

    Sølund, Thomas; Rajeeth Savarimuthu, Thiusius; Glent Buch, Anders

    2015-01-01

    In this paper, we present a vision system that allows a human to create new 3D models of novel industrial parts by placing the part in two different positions in the scene. The two-shot modeling framework generates models with a precision that allows the model to be used for 6D pose estimation wi.... In addition, the models are applied in a pose estimation application, evaluated with 37 different scenes with 61 unique object poses. The pose estimation results show a mean translation error of 4.97 mm and a mean rotation error of 3.38 degrees....

  6. Evaluation Study of Fast Spectral Estimators Using In-vivo Data

    DEFF Research Database (Denmark)

    Hansen, Kristoffer Lindskov; Gran, Fredrik; Pedersen, Mads Møller

    2009-01-01

    Spectrograms in medical ultrasound are usually estimated with Welch's method (WM). To achieve sufficient spectral resolution and contrast, WM uses an observation window (OW) of up to 256 emissions per estimate. Two adaptive filterbank methods have been suggested to reduce the OW: Blood spectral...... Power Capon (BPC) and the Blood Amplitude and Phase EStimation method (BAPES). Ten volunteers were scanned over the carotid artery. From each dataset, 28 spectrograms were produced by combining four approaches (WM with a Hanning window (W.HAN), WM with a boxcar window (W.BOX), BPC and BAPES) and seven...
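    For orientation, a short sketch of the baseline the record above starts from: a Welch spectrum of one depth's slow-time IQ samples over an observation window of emissions. The PRF, Doppler shift, segment lengths and signal below are illustrative only; the BPC and BAPES filterbank estimators themselves are not reproduced.

```python
import numpy as np
from scipy.signal import welch

def doppler_spectrum_welch(iq_slow_time, prf, ow=256):
    """Welch estimate of the Doppler power spectrum from slow-time IQ data,
    using an observation window of `ow` emissions (baseline method sketch)."""
    seg = iq_slow_time[:ow]
    f, pxx = welch(seg, fs=prf, nperseg=64, noverlap=32,
                   window="hann", return_onesided=False)
    return np.fft.fftshift(f), np.fft.fftshift(pxx)

# Synthetic slow-time signal: one velocity component plus noise.
prf, n = 5000.0, 256
rng = np.random.default_rng(2)
fd = 800.0                                    # Doppler shift in Hz
sig = np.exp(2j * np.pi * fd * np.arange(n) / prf)
sig += 0.1 * (rng.normal(size=n) + 1j * rng.normal(size=n))
f, pxx = doppler_spectrum_welch(sig, prf)
print(f[np.argmax(pxx)])                      # peak near the 800 Hz shift
```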

  7. A novel directional asymmetric sampling search algorithm for fast block-matching motion estimation

    Science.gov (United States)

    Li, Yue-e.; Wang, Qiang

    2011-11-01

    This paper proposes a novel directional asymmetric sampling search (DASS) algorithm for video compression. Making full use of the error information (block distortions) of the search patterns, eight different directional search patterns are designed for various situations. A local sampling search strategy is employed for the search of large motion vectors. In order to further speed up the search, an early-termination strategy is adopted in the DASS procedure. Compared to conventional fast algorithms, the proposed method has the most satisfactory PSNR values for all test sequences.
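    A generic block-matching sketch with an early-termination test, to make the setting concrete; the eight direction-adaptive sampling patterns that define DASS are not reproduced here, and the block size, search range and termination threshold are illustrative choices.

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences between two equally sized blocks."""
    return np.abs(a.astype(np.int32) - b.astype(np.int32)).sum()

def block_match(ref, cur, top, left, bsize=16, srange=7):
    """Exhaustive block matching with early termination once the SAD drops
    below ~1 grey level per pixel (plain baseline, not the DASS patterns)."""
    block = cur[top:top + bsize, left:left + bsize]
    best, best_mv = np.inf, (0, 0)
    for dy in range(-srange, srange + 1):
        for dx in range(-srange, srange + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + bsize > ref.shape[0] or x + bsize > ref.shape[1]:
                continue
            cost = sad(block, ref[y:y + bsize, x:x + bsize])
            if cost < best:
                best, best_mv = cost, (dy, dx)
            if best < bsize * bsize:          # early termination
                return best_mv, best
    return best_mv, best

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (64, 64), dtype=np.uint8)
cur = np.roll(ref, (2, -3), axis=(0, 1))      # known global motion
print(block_match(ref, cur, 16, 16))          # recovers (-2, 3) with zero SAD
```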

  8. The association between estimated average glucose levels and fasting plasma glucose levels in a rural tertiary care centre

    Directory of Open Access Journals (Sweden)

    Raja Reddy P

    2013-01-01

    Full Text Available The level of hemoglobin A1c (HbA1c), also known as glycated hemoglobin, indicates how well a patient’s blood glucose level has been controlled over the previous 8-12 weeks. HbA1c levels help patients and doctors understand whether a particular diabetes treatment is working and whether adjustments need to be made to the treatment. Because the HbA1c level is a marker of blood glucose for the previous 60-90 days, average blood glucose levels can be estimated from HbA1c levels. The aim of the present study was to investigate the relationship between estimated average glucose levels, as calculated from HbA1c levels, and fasting plasma glucose levels. Methods: Type 2 diabetes patients attending the medicine outpatient department of RL Jalappa Hospital, Kolar, between March 2010 and July 2012 were included. The estimated average glucose levels (mg/dl) were calculated using the following formula: 28.7 x HbA1c - 46.7. Glucose levels were determined using the hexokinase method. HbA1c levels were determined using an HPLC method. Correlation and the independent t-test were the tests of significance for quantitative data. Results: A strong positive correlation between fasting plasma glucose levels and estimated average blood glucose levels (r=0.54, p=0.0001) was observed. The difference was statistically significant. Conclusion: Reporting the estimated average glucose level together with the HbA1c level is believed to help patients and doctors determine the effectiveness of blood glucose control measures.
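    The conversion formula quoted in the abstract, written out as a small function; the example value is for illustration only.

```python
def estimated_average_glucose(hba1c_percent: float) -> float:
    """Estimated average glucose (mg/dl) from HbA1c (%), using the formula
    stated in the study: eAG = 28.7 * HbA1c - 46.7."""
    return 28.7 * hba1c_percent - 46.7

print(estimated_average_glucose(7.0))   # -> 154.2 mg/dl
```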

  9. Fast noise level estimation algorithm based on principal component analysis transform and nonlinear rectification

    Science.gov (United States)

    Xu, Shaoping; Zeng, Xiaoxia; Jiang, Yinnan; Tang, Yiling

    2018-01-01

    We proposed a noniterative principal component analysis (PCA)-based noise level estimation (NLE) algorithm that addresses the problem of estimating the noise level with a two-step scheme. First, we randomly extracted a number of raw patches from a given noisy image and took the smallest eigenvalue of the covariance matrix of the raw patches as the preliminary estimation of the noise level. Next, the final estimation was directly obtained with a nonlinear mapping (rectification) function that was trained on some representative noisy images corrupted with different known noise levels. Compared with the state-of-the-art NLE algorithms, the experimental results show that the proposed NLE algorithm can reliably infer the noise level and has robust performance over a wide range of image contents and noise levels, showing a good compromise between speed and accuracy in general.
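    A sketch of the first step only: the smallest eigenvalue of the covariance of random raw patches as a preliminary noise estimate. The trained nonlinear rectification of the second step is omitted, and the patch size, patch count and test image are arbitrary choices.

```python
import numpy as np

def noise_level_pca(img, patch=7, n_patches=5000, seed=0):
    """Preliminary noise-level estimate from the smallest eigenvalue of the
    covariance matrix of randomly sampled patches (rectification omitted)."""
    rng = np.random.default_rng(seed)
    h, w = img.shape
    ys = rng.integers(0, h - patch, n_patches)
    xs = rng.integers(0, w - patch, n_patches)
    p = np.stack([img[y:y + patch, x:x + patch].ravel() for y, x in zip(ys, xs)])
    cov = np.cov(p, rowvar=False)
    sigma2 = np.linalg.eigvalsh(cov)[0]       # smallest eigenvalue ~ noise variance
    return np.sqrt(max(sigma2, 0.0))

rng = np.random.default_rng(1)
clean = np.tile(np.linspace(0, 255, 256), (256, 1))   # smooth test image
noisy = clean + rng.normal(0, 10.0, clean.shape)
print(noise_level_pca(noisy))                          # close to the true sigma = 10
```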

  10. A model-based adaptive state of charge estimator for a lithium-ion battery using an improved adaptive particle filter

    International Nuclear Information System (INIS)

    Ye, Min; Guo, Hui; Cao, Binggang

    2017-01-01

    Highlights: • Propose an improved adaptive particle swarm filter method. • The SoC estimation method for the battery based on the adaptive particle swarm filter is presented. • The algorithm is validated by the case study of different aged extent batteries. • The effectiveness and applicability of the algorithm are validated by the LiPB batteries. - Abstract: Obtaining accurate parameters, state of charge (SoC) and capacity of a lithium-ion battery is crucial for a battery management system, and establishing a battery model online is complex. In addition, the errors and perturbations of the battery model dramatically increase throughout the battery lifetime, making it more challenging to model the battery online. To overcome these difficulties, this paper provides three contributions: (1) To improve the robustness of the adaptive particle filter algorithm, an error analysis method is added to the traditional adaptive particle swarm algorithm. (2) An online adaptive SoC estimator based on the improved adaptive particle filter is presented; this estimator can eliminate the estimation error due to battery degradation and initial SoC errors. (3) The effectiveness of the proposed method is verified using various initial states of lithium nickel manganese cobalt oxide (NMC) cells and lithium-ion polymer (LiPB) batteries. The experimental analysis shows that the maximum errors are less than 1% for both the voltage and SoC estimations and that the convergence time of the SoC estimation decreased to 120 s.
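    For context, a plain bootstrap particle filter on a toy one-state SoC model with an OCV measurement; this is the baseline family the record's improved adaptive particle filter builds on, not the paper's algorithm. The OCV curve, capacity, resistance and noise levels are all invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def ocv(soc):
    """Toy open-circuit-voltage curve (placeholder, not from the paper)."""
    return 3.2 + 0.9 * soc

Q, R0, dt = 2.0 * 3600, 0.05, 1.0            # capacity (As), resistance, step (s)
n_p = 500
soc_true, i_load = 0.9, 1.0
particles = rng.uniform(0.0, 1.0, n_p)        # deliberately wrong initial SoC belief
for k in range(600):
    soc_true = soc_true - i_load * dt / Q
    v_meas = ocv(soc_true) - i_load * R0 + rng.normal(0, 0.005)

    # Propagate, weight by measurement likelihood, resample (bootstrap PF).
    particles = particles - i_load * dt / Q + rng.normal(0, 1e-4, n_p)
    w = np.exp(-0.5 * ((v_meas - (ocv(particles) - i_load * R0)) / 0.005) ** 2)
    w /= w.sum()
    particles = particles[rng.choice(n_p, n_p, p=w)]

print(abs(particles.mean() - soc_true))       # small despite the bad initial guess
```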

  11. Differences in Gaussian diffusion tensor imaging and non-Gaussian diffusion kurtosis imaging model-based estimates of diffusion tensor invariants in the human brain.

    Science.gov (United States)

    Lanzafame, S; Giannelli, M; Garaci, F; Floris, R; Duggento, A; Guerrisi, M; Toschi, N

    2016-05-01

    An increasing number of studies have aimed to compare diffusion tensor imaging (DTI)-related parameters [e.g., mean diffusivity (MD), fractional anisotropy (FA), radial diffusivity (RD), and axial diffusivity (AD)] to complementary new indexes [e.g., mean kurtosis (MK)/radial kurtosis (RK)/axial kurtosis (AK)] derived through diffusion kurtosis imaging (DKI) in terms of their discriminative potential about tissue disease-related microstructural alterations. Given that the DTI and DKI models provide conceptually and quantitatively different estimates of the diffusion tensor, which can also depend on fitting routine, the aim of this study was to investigate model- and algorithm-dependent differences in MD/FA/RD/AD and anisotropy mode (MO) estimates in diffusion-weighted imaging of human brain white matter. The authors employed (a) data collected from 33 healthy subjects (20-59 yr, F: 15, M: 18) within the Human Connectome Project (HCP) on a customized 3 T scanner, and (b) data from 34 healthy subjects (26-61 yr, F: 5, M: 29) acquired on a clinical 3 T scanner. The DTI model was fitted to b-value = 0 and b-value = 1000 s/mm2 data while the DKI model was fitted to data comprising b-value = 0, 1000 and 3000/2500 s/mm2 [for dataset (a)/(b), respectively] through nonlinear and weighted linear least squares algorithms. In addition to MK/RK/AK maps, MD/FA/MO/RD/AD maps were estimated from both models and both algorithms. Using tract-based spatial statistics, the authors tested the null hypothesis of zero difference between the two MD/FA/MO/RD/AD estimates in brain white matter for both datasets and both algorithms. DKI-derived MD/FA/RD/AD and MO estimates were significantly higher and lower, respectively, than corresponding DTI-derived estimates. All voxelwise differences extended over most of the white matter skeleton. Fractional differences between the two estimates [(DKI - DTI)/DTI] of most invariants were seen to vary with the invariant value itself as well as with MK

  12. The prevalence and incidence of active syphilis in women in Morocco, 1995-2016: Model-based estimation and implications for STI surveillance

    Science.gov (United States)

    Bennani, Aziza; El-Kettani, Amina; Hançali, Amina; El-Rhilani, Houssine; Alami, Kamal; Youbi, Mohamed; Rowley, Jane; Abu-Raddad, Laith; Smolak, Alex; Taylor, Melanie; Mahiané, Guy; Stover, John

    2017-01-01

    Background Evolving health priorities and resource constraints mean that countries require data on trends in sexually transmitted infections (STI) burden, to inform program planning and resource allocation. We applied the Spectrum STI estimation tool to estimate the prevalence and incidence of active syphilis in adult women in Morocco over 1995 to 2016. The results from the analysis are being used to inform Morocco’s national HIV/STI strategy, target setting and program evaluation. Methods Syphilis prevalence levels and trends were fitted through logistic regression to data from surveys in antenatal clinics, women attending family planning clinics and other general adult populations, as available post-1995. Prevalence data were adjusted for diagnostic test performance, and for the contribution of higher-risk populations not sampled in surveys. Incidence was inferred from prevalence by adjusting for the average duration of infection with active syphilis. Results In 2016, active syphilis prevalence was estimated to be 0.56% in women 15 to 49 years of age (95% confidence interval, CI: 0.3%-1.0%), and around 21,675 (10,612–37,198) new syphilis infections have occurred. The analysis shows a steady decline in prevalence from 1995, when the prevalence was estimated to be 1.8% (1.0–3.5%). The decline was consistent with decreasing prevalences observed in TB patients, fishermen and prisoners followed over 2000–2012 through sentinel surveillance, and with a decline since 2003 in national HIV incidence estimated earlier through independent modelling. Conclusions Periodic population-based surveys allowed Morocco to estimate syphilis prevalence and incidence trends. This first-ever undertaking engaged and focused national stakeholders, and confirmed the still considerable syphilis burden. The latest survey was done in 2012 and so the trends are relatively uncertain after 2012. From 2017 Morocco plans to implement a system to record data from routine antenatal

  13. The prevalence and incidence of active syphilis in women in Morocco, 1995-2016: Model-based estimation and implications for STI surveillance.

    Science.gov (United States)

    Bennani, Aziza; El-Kettani, Amina; Hançali, Amina; El-Rhilani, Houssine; Alami, Kamal; Youbi, Mohamed; Rowley, Jane; Abu-Raddad, Laith; Smolak, Alex; Taylor, Melanie; Mahiané, Guy; Stover, John; Korenromp, Eline L

    2017-01-01

    Evolving health priorities and resource constraints mean that countries require data on trends in sexually transmitted infections (STI) burden, to inform program planning and resource allocation. We applied the Spectrum STI estimation tool to estimate the prevalence and incidence of active syphilis in adult women in Morocco over 1995 to 2016. The results from the analysis are being used to inform Morocco's national HIV/STI strategy, target setting and program evaluation. Syphilis prevalence levels and trends were fitted through logistic regression to data from surveys in antenatal clinics, women attending family planning clinics and other general adult populations, as available post-1995. Prevalence data were adjusted for diagnostic test performance, and for the contribution of higher-risk populations not sampled in surveys. Incidence was inferred from prevalence by adjusting for the average duration of infection with active syphilis. In 2016, active syphilis prevalence was estimated to be 0.56% in women 15 to 49 years of age (95% confidence interval, CI: 0.3%-1.0%), and around 21,675 (10,612-37,198) new syphilis infections have occurred. The analysis shows a steady decline in prevalence from 1995, when the prevalence was estimated to be 1.8% (1.0-3.5%). The decline was consistent with decreasing prevalences observed in TB patients, fishermen and prisoners followed over 2000-2012 through sentinel surveillance, and with a decline since 2003 in national HIV incidence estimated earlier through independent modelling. Periodic population-based surveys allowed Morocco to estimate syphilis prevalence and incidence trends. This first-ever undertaking engaged and focused national stakeholders, and confirmed the still considerable syphilis burden. The latest survey was done in 2012 and so the trends are relatively uncertain after 2012. From 2017 Morocco plans to implement a system to record data from routine antenatal programmatic screening, which should help update and re

  14. Estimation of the yield of poplars in plantations of fast-growing species within current results

    Directory of Open Access Journals (Sweden)

    Martin Fajman

    2009-01-01

    Full Text Available Current results of allometric yield estimates for poplar short-rotation coppice are presented. According to a literature review, it is obvious that yield estimates based on measurable quantities of a growing stand depend not only on the selected tree species or clone, but also on the site location. The allometric relations of the Jap-105 poplar clone (P. nigra x P. maximowiczii) were analyzed by regression methods with the aim of creating a yield-estimation methodology at a testing site in Domanínek. Altogether, twelve polynomial dependences of the particular measured quantities confirmed the high conformity of the empirical data with the tested regression model (correlation index from 0.9033 to 0.9967). Within the forward stepwise regression, the factors that best explain the examined estimates of total biomass DM were selected, i.e. d.b.h. and stem height. Furthermore, the KESTEMONT (1971) model was also verified with satisfying conformity. Once the presented yield estimation methods are approved, the models will be checked in a large-scale field trial.

  15. Distributed weighted least-squares estimation with fast convergence for large-scale systems☆

    Science.gov (United States)

    Marelli, Damián Edgardo; Fu, Minyue

    2015-01-01

    In this paper we study a distributed weighted least-squares estimation problem for a large-scale system consisting of a network of interconnected sub-systems. Each sub-system is concerned with a subset of the unknown parameters and has a measurement linear in the unknown parameters with additive noise. The distributed estimation task is for each sub-system to compute the globally optimal estimate of its own parameters using its own measurement and information shared with the network through neighborhood communication. We first provide a fully distributed iterative algorithm to asymptotically compute the global optimal estimate. The convergence rate of the algorithm is maximized using a scaling parameter and a preconditioning method. This algorithm works for a general network. For a network without loops, we also provide a different iterative algorithm to compute the global optimal estimate which converges in a finite number of steps. We include numerical experiments to illustrate the performance of the proposed methods. PMID:25641976

  16. Fast Spectral Velocity Estimation Using Adaptive Techniques: In-Vivo Results

    DEFF Research Database (Denmark)

    Gran, Fredrik; Jakobsson, Andreas; Udesen, Jesper

    2007-01-01

    Adaptive spectral estimation techniques are known to provide good spectral resolution and contrast even when the observation window (OW) is very short. In this paper two adaptive techniques are tested and compared to the averaged periodogram (Welch) for blood velocity estimation. The Blood Power...... the blood process over slow-time and averaging over depth to find the power spectral density estimate. In this paper, the two adaptive methods are explained, and performance is assessed in controlled steady flow experiments and in-vivo measurements. The three methods were tested on a circulating flow rig...... with a blood mimicking fluid flowing in the tube. The scanning section is submerged in water to allow ultrasound data acquisition. Data was recorded using a BK8804 linear array transducer and the RASMUS ultrasound scanner. The controlled experiments showed that the OW could be significantly reduced when...

  17. In-vivo validation of fast spectral velocity estimation techniques – preliminary results

    DEFF Research Database (Denmark)

    Hansen, Kristoffer Lindskov; Gran, Fredrik; Pedersen, Mads Møller

    2008-01-01

    Spectral Doppler is a common way to estimate blood velocities in medical ultrasound (US). The standard way of estimating spectrograms is by using Welch's method (WM). WM is dependent on a long observation window (OW) (about 100 transmissions) to produce spectrograms with sufficient spectral...... resolution and contrast. Two adaptive filterbank methods have been suggested to circumvent this problem: the Blood spectral Power Capon method (BPC) and the Blood Amplitude and Phase EStimation method (BAPES). Previously, simulations and flow rig experiments have indicated that the two adaptive methods can...... was scanned using the experimental ultrasound scanner RASMUS and a B-K Medical 5 MHz linear array transducer with an angle of insonation not exceeding 60°. All 280 spectrograms were then randomised and presented to a radiologist blinded to method and OW for visual evaluation: useful or not useful. WMbw...

  18. Modifying the dissolved-in-water type natural gas field simulation model based on the distribution of estimated Young's modulus for the Kujukuri region, Japan

    Directory of Open Access Journals (Sweden)

    T. Nakagawa

    2015-11-01

    Full Text Available A simulation model, which covers the part of Southern-Kanto natural gas field in Chiba prefecture, was developed to perform studies and make predictions of land subsidence. However, because large differences between simulated and measured subsidence occurred in the northern modeled area of the gas field, the model was modified with an estimated Young's modulus distribution. This distribution was estimated by the yield value distribution and the correlation of yield value with Young's modulus. Consequently, the simulated subsidence in the north area was improved to some extent.

  19. A Fast Algorithm for Maximum Likelihood Estimation of Harmonic Chirp Parameters

    DEFF Research Database (Denmark)

    Jensen, Tobias Lindstrøm; Nielsen, Jesper Kjær; Jensen, Jesper Rindom

    2017-01-01

    The analysis of (approximately) periodic signals is an important element in numerous applications. One generalization of standard periodic signals often occurring in practice is harmonic chirp signals, where the instantaneous frequency increases/decreases linearly as a function of time. A statistically efficient estimator for extracting the parameters of the harmonic chirp model in additive white Gaussian noise is the maximum likelihood (ML) estimator which recently has been demonstrated to be robust to noise and accurate --- even when the model order is unknown. The main drawback of the ML...
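    To make the estimation problem concrete, here is a naive concentrated-ML grid search over the fundamental frequency and chirp rate of a harmonic chirp; the record's contribution is precisely a fast algorithm for this maximization, which is not reproduced. Grid ranges, harmonic count and the test signal are illustrative.

```python
import numpy as np

def hc_ml_grid(x, n_harm, f0_grid, k_grid):
    """Concentrated ML estimate of fundamental frequency f0 (cycles/sample)
    and chirp rate k: for each grid point, project x onto the harmonic-chirp
    basis and keep the pair maximizing the projected signal energy."""
    t = np.arange(x.size)
    ells = np.arange(1, n_harm + 1)
    best, best_fk = -np.inf, None
    for f0 in f0_grid:
        for k in k_grid:
            base = 2 * np.pi * (f0 * t + 0.5 * k * t ** 2)
            Z = np.exp(1j * np.outer(base, ells))     # harmonic chirp basis
            a, *_ = np.linalg.lstsq(Z, x, rcond=None)
            energy = np.linalg.norm(Z @ a) ** 2
            if energy > best:
                best, best_fk = energy, (f0, k)
    return best_fk

# Synthetic three-harmonic chirp in complex white Gaussian noise.
rng = np.random.default_rng(0)
n, f0, k = 200, 0.05, 1e-4
t = np.arange(n)
x = sum(np.exp(1j * 2 * np.pi * l * (f0 * t + 0.5 * k * t ** 2)) for l in (1, 2, 3))
x += 0.3 * (rng.normal(size=n) + 1j * rng.normal(size=n))
print(hc_ml_grid(x, 3, np.linspace(0.04, 0.06, 41), np.linspace(0, 2e-4, 21)))
```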

  20. Core Power Control of the fast nuclear reactors with estimation of the delayed neutron precursor density using Sliding Mode method

    International Nuclear Information System (INIS)

    Ansarifar, G.R.; Nasrabadi, M.N.; Hassanvand, R.

    2016-01-01

    Highlights: • We present a S.M.C. system based on a S.M.O. for the control of fast reactor power. • A S.M.O. has been developed to estimate the density of delayed neutron precursors. • The stability analysis has been given by means of a Lyapunov approach. • The control system is guaranteed to be stable within a large range. • A comparison between the S.M.C. and a conventional PID controller has been made. - Abstract: In this paper, a nonlinear controller using the sliding mode method, which is a robust nonlinear control technique, is designed to control a fast nuclear reactor. The reactor core is simulated based on the point kinetics equations and one delayed neutron group. Considering the limitations of measuring the delayed neutron precursor density, a sliding mode observer is designed to estimate it, and finally a sliding mode control based on the sliding mode observer is presented. The stability analysis is given by means of a Lyapunov approach, so the control system is guaranteed to be stable within a large range. Sliding Mode Control (SMC) is a robust nonlinear method which has several advantages such as robustness against matched external disturbances and parameter uncertainties. The employed method is easy to implement in practical applications and, moreover, the sliding mode control exhibits the desired dynamic properties during the entire output-tracking process, independent of perturbations. Simulation results are presented to demonstrate the effectiveness of the proposed controller in terms of performance, robustness and stability.
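    A minimal sketch of the observer idea: one-group point kinetics with a sliding mode observer that reconstructs the unmeasured precursor density from the measured neutron density, using a saturated (boundary-layer) switching term to limit chattering in discrete time. All parameter values and gains are toy, normalized choices, not the paper's design.

```python
import numpy as np

# One-group point kinetics: dn/dt = ((rho - beta)/Lam) n + lam c
#                           dc/dt = (beta/Lam) n - lam c
beta, Lam, lam, rho = 0.0065, 1e-4, 0.08, 0.0
dt, steps = 1e-3, 60000

n, c = 1.0, beta / (Lam * lam)        # plant starts at equilibrium
nh, ch = 1.0, 0.0                     # observer starts with wrong precursor density
k1, k2, phi = 200.0, 2000.0, 0.5      # switching gains and boundary-layer width

for _ in range(steps):
    # plant (n is the measured output)
    n, c = (n + dt * (((rho - beta) / Lam) * n + lam * c),
            c + dt * ((beta / Lam) * n - lam * c))
    # sliding mode observer driven by the measurement error n - nh
    sw = np.clip((n - nh) / phi, -1.0, 1.0)
    nh += dt * (((rho - beta) / Lam) * n + lam * ch + k1 * sw)
    ch += dt * ((beta / Lam) * n - lam * ch + k2 * sw)

print(abs(ch - c) / c)                # relative precursor estimation error -> small
```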

  1. In vitro quantification of the performance of model-based mono-planar and bi-planar fluoroscopy for 3D joint kinematics estimation.

    Science.gov (United States)

    Tersi, Luca; Barré, Arnaud; Fantozzi, Silvia; Stagni, Rita

    2013-03-01

    Model-based mono-planar and bi-planar 3D fluoroscopy methods can quantify intact joint kinematics with a performance/cost trade-off. The aim of this study was to compare the performances of mono- and bi-planar setups to a marker-based gold standard, during dynamic phantom knee acquisitions. Absolute pose errors for in-plane parameters were lower than 0.6 mm or 0.6° for both mono- and bi-planar setups. Mono-planar setups proved critical in quantifying the out-of-plane translation, and bi-planar setups in quantifying the rotation along the bone longitudinal axis; for the remaining parameters, mono-planar errors were comparable to bi-planar ones, but with halved computational costs, halved segmentation time and halved ionizing radiation dose. Bi-planar analysis better compensated for the out-of-plane uncertainty that is differently propagated to relative kinematics depending on the setup. To take full advantage of it, the motion task to be investigated should be designed to maintain the joint inside the visible volume, introducing constraints with respect to mono-planar analysis.

  2. The Finnish Diabetes Risk Score is associated with insulin resistance but not reduced beta-cell function, by classical and model-based estimates

    NARCIS (Netherlands)

    Brodovicz, K.G.; Dekker, J.M.; Rijkelijkhuizen, J.M.; Rhodes, T.; Mari, A.; Alssema, M.J.; Nijpels, G.; Williams-Herman, D.E.; Girman, C.J.

    2011-01-01

    Aims The Finnish Diabetes Risk Score (FINDRISC) is widely used for risk stratification in Type 2 diabetes prevention programmes. Estimates of β-cell function vary widely in people without diabetes, and reduced insulin secretion has been described in people at risk for diabetes. The aim of this

  3. Fast and unbiased estimator of the time-dependent Hurst exponent

    Science.gov (United States)

    Pianese, Augusto; Bianchi, Sergio; Palazzo, Anna Maria

    2018-03-01

    We combine two existing estimators of the local Hurst exponent to improve both the goodness of fit and the computational speed of the algorithm. An application with simulated time series is implemented, and a Monte Carlo simulation is performed to provide evidence of the improvement.

  4. Fast neutron reactor noise analysis: beginning failure detection and physical parameter estimation

    International Nuclear Information System (INIS)

    Le Guillou, G.

    1975-01-01

    The analysis of the signal fluctuations from a power nuclear reactor (a breeder) by correlation methods and spectral analysis has two principal applications: on-line estimation of physical parameters (reactivity coefficients), and detection of incipient failures (onset of boiling, abnormal mechanical vibrations). These two applications provide important information for reactor core control and permit a good diagnosis. [fr]

  5. Fast and nondestructive method for leaf level chlorophyll estimation using hyperspectral LiDAR

    NARCIS (Netherlands)

    Nevalainen, O.; Hakala, T.; Suomalainen, J.M.; Mäkipää, R.; Peltoniemi, M.; Krooks, A.; Kaasalainen, S.

    2014-01-01

    We propose an empirical method for nondestructive estimation of chlorophyll in tree canopies. The first prototype of a full waveform hyperspectral LiDAR instrument has been developed by the Finnish Geodetic Institute (FGI). The instrument efficiently combines the benefits of passive and active

  6. Fast estimation of space-robots inertia parameters: A modular mathematical formulation

    Science.gov (United States)

    Nabavi Chashmi, Seyed Yaser; Malaek, Seyed Mohammad-Bagher

    2016-10-01

    This work aims to propose a new technique that considerably improves the time and precision needed to identify "Inertia Parameters" (IPs) of a typical Autonomous Space-Robot (ASR). Operations might include capturing an unknown Target Space-Object (TSO), "active space-debris removal" or "automated in-orbit assemblies". In these operations, generating precise successive commands is essential to the success of the mission. We show how a generalized, repeatable estimation process could play an effective role in managing the operation. With the help of the well-known force-based approach, a new "modular formulation" has been developed to simultaneously identify the IPs of an ASR while it captures a TSO. The idea is to reorganize the equations with the associated IPs into a "modular set" of matrices instead of a single matrix representing the overall system dynamics. The devised modular matrix set then facilitates the estimation process. It provides a conjugate linear model in the mass and inertia terms. The new formulation is, therefore, well suited for "simultaneous estimation processes" using recursive algorithms like RLS. Further enhancements would be needed for cases where the effect of the center of mass location becomes important. Extensive case studies reveal that the estimation time is drastically reduced, which in turn paves the way to acquiring better results.
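    Since the abstract names recursive least squares (RLS) as the estimator, here is a generic RLS update on a toy model that is linear in the inertia parameters; the regressors, true parameter values and noise level are invented, and the paper's modular matrix set is not reproduced.

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=1.0):
    """One recursive-least-squares step for y = phi @ theta + noise,
    with forgetting factor lam (generic textbook form)."""
    k = P @ phi / (lam + phi @ P @ phi)          # gain vector
    theta = theta + k * (y - phi @ theta)        # parameter update
    P = (P - np.outer(k, phi @ P)) / lam         # covariance update
    return theta, P

# Toy identification: scalar force/torque samples linear in 4 inertia terms.
rng = np.random.default_rng(0)
theta_true = np.array([12.0, 0.4, -0.2, 1.5])    # e.g. mass and inertia entries
theta, P = np.zeros(4), 1e3 * np.eye(4)
for _ in range(300):
    phi = rng.normal(size=4)                     # regressor from motion data
    y = phi @ theta_true + rng.normal(0, 0.01)
    theta, P = rls_update(theta, P, phi, y)
print(theta)                                     # converges to theta_true
```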

  7. Advancing Methods for Estimating Soil Nitrous Oxide Emissions by Incorporating Freeze-Thaw Cycles into a Tier 3 Model-Based Assessment

    Science.gov (United States)

    Ogle, S. M.; DelGrosso, S.; Parton, W. J.

    2017-12-01

    Soil nitrous oxide emissions from agricultural management are a key source of greenhouse gas emissions in many countries, due to the widespread use of nitrogen fertilizers, manure amendments from livestock production, planting of legumes and other practices that affect N dynamics in soils. In the United States, soil nitrous oxide emissions have ranged from 250 to 280 Tg CO2 equivalent from 1990 to 2015, with uncertainties around 20-30 percent. A Tier 3 method has been used to estimate the emissions with the DayCent ecosystem model. While the Tier 3 approach is considerably more accurate than IPCC Tier 1 methods, there is still the possibility of biases in emission estimates if there are processes and drivers that are not represented in the modeling framework. Furthermore, a key principle of IPCC guidance is that inventory compilers estimate emissions as accurately as possible. Freeze-thaw cycles and the associated hot moments of nitrous oxide emissions are one of the key drivers influencing emissions in colder climates, such as the cold temperate climates of the upper Midwest and New England regions of the United States. Freeze-thaw activity interacts with management practices that increase N availability in the plant-soil system, leading to greater nitrous oxide emissions during transition periods from winter to spring. Given the importance of this driver, the DayCent model has been revised to incorporate freeze-thaw cycles, and the results suggest that including this driver can significantly modify the emission estimates in cold temperate climate regions. Consequently, future methodological development to improve estimation of nitrous oxide emissions from soils would benefit from incorporating freeze-thaw cycles into the modeling framework for national territories with a cold climate.

  8. Comparison of conventional, model-based quantitative planar, and quantitative SPECT image processing methods for organ activity estimation using In-111 agents

    International Nuclear Information System (INIS)

    He, Bin; Frey, Eric C

    2006-01-01

    Accurate quantification of organ radionuclide uptake is important for patient-specific dosimetry. The quantitative accuracy from conventional conjugate view methods is limited by overlap of projections from different organs and background activity, and attenuation and scatter. In this work, we propose and validate a quantitative planar (QPlanar) processing method based on maximum likelihood (ML) estimation of organ activities using 3D organ VOIs and a projector that models the image degrading effects. Both a physical phantom experiment and Monte Carlo simulation (MCS) studies were used to evaluate the new method. In these studies, the accuracies and precisions of organ activity estimates for the QPlanar method were compared with those from conventional planar (CPlanar) processing methods with various corrections for scatter, attenuation and organ overlap, and a quantitative SPECT (QSPECT) processing method. Experimental planar and SPECT projections and registered CT data from an RSD Torso phantom were obtained using a GE Millenium VH/Hawkeye system. The MCS data were obtained from the 3D NCAT phantom with organ activity distributions that modelled the uptake of 111In ibritumomab tiuxetan. The simulations were performed using parameters appropriate for the same system used in the RSD torso phantom experiment. The organ activity estimates obtained from the CPlanar, QPlanar and QSPECT methods from both experiments were compared. From the results of the MCS experiment, even with ideal organ overlap correction and background subtraction, CPlanar methods provided limited quantitative accuracy. The QPlanar method with accurate modelling of the physical factors increased the quantitative accuracy at the cost of requiring estimates of the organ VOIs in 3D. The accuracy of QPlanar approached that of QSPECT, but required much less acquisition and computation time. Similar results were obtained from the physical phantom experiment. We conclude that the QPlanar method, based

  9. Fast Estimation of Expected Information Gain for Bayesian Experimental Design Based on Laplace Approximation

    KAUST Repository

    Long, Quan; Scavino, Marco; Tempone, Raul; Wang, Suojin

    2014-01-01

    Shannon-type expected information gain is an important utility in evaluating the usefulness of a proposed experiment that involves uncertainty. Its estimation, however, cannot rely solely on Monte Carlo sampling methods, which are generally too computationally expensive for realistic physical models, especially for those involving the solution of stochastic partial differential equations. In this work we present a new methodology, based on the Laplace approximation of the posterior probability density function, to accelerate the estimation of expected information gain in the model parameters and predictive quantities of interest. Furthermore, in order to deal with the issue of dimensionality in a complex problem, we use sparse quadratures for the integration over the prior. We show the accuracy and efficiency of the proposed method via several nonlinear numerical examples, including a single-parameter design of a one-dimensional cubic polynomial function and the current pattern for impedance tomography.
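    A hedged single-loop sketch of the idea: the inner (posterior) integral is replaced by a Laplace/Gauss-Newton approximation, so the expected information gain reduces to a prior average of Gaussian KL divergences. The 1D cubic-style model, the assumption that the posterior mean sits at the prior draw (a small-noise simplification), and all numeric values are illustrative; the paper's sparse quadratures are replaced by plain Monte Carlo over the prior.

```python
import numpy as np

rng = np.random.default_rng(0)

def laplace_eig(xi, n_outer=2000, sig_n=0.1, mu0=0.3, sig0=0.5):
    """Single-loop Laplace approximation of the expected information gain
    for y = theta**3 * xi + noise at design point xi (toy 1D model)."""
    thetas = rng.normal(mu0, sig0, n_outer)      # outer integral over the prior
    J = 3.0 * thetas ** 2 * xi                   # dg/dtheta (Gauss-Newton)
    post_var = 1.0 / (J ** 2 / sig_n ** 2 + 1.0 / sig0 ** 2)
    # KL( N(theta, post_var) || N(mu0, sig0^2) ), averaged over prior draws.
    kl = 0.5 * (post_var / sig0 ** 2 + (thetas - mu0) ** 2 / sig0 ** 2
                - 1.0 + np.log(sig0 ** 2 / post_var))
    return kl.mean()

for xi in (0.2, 1.0, 5.0):
    print(xi, laplace_eig(xi))                   # larger xi -> more informative
```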

  10. Fast Kalman Filtering for Relative Spacecraft Position and Attitude Estimation for the Raven ISS Hosted Payload

    Science.gov (United States)

    Galante, Joseph M.; Van Eepoel, John; D'Souza, Chris; Patrick, Bryan

    2016-01-01

    The Raven ISS Hosted Payload will feature several pose measurement sensors on a pan/tilt gimbal which will be used to autonomously track resupply vehicles as they approach and depart the International Space Station. This paper discusses the derivation of a Relative Navigation Filter (RNF) to fuse measurements from the different pose measurement sensors to produce relative position and attitude estimates. The RNF relies on relative translation and orientation kinematics and careful pose sensor modeling to eliminate dependence on orbital position information and associated orbital dynamics models. The filter state is augmented with sensor biases to provide a mechanism for the filter to estimate and mitigate the offset between the measurements from different pose sensors

  11. Fast Estimation of Expected Information Gain for Bayesian Experimental Design Based on Laplace Approximation

    KAUST Repository

    Long, Quan

    2014-01-06

    Shannon-type expected information gain is an important utility in evaluating the usefulness of a proposed experiment that involves uncertainty. Its estimation, however, cannot rely solely on Monte Carlo sampling methods, which are generally too computationally expensive for realistic physical models, especially for those involving the solution of stochastic partial differential equations. In this work we present a new methodology, based on the Laplace approximation of the posterior probability density function, to accelerate the estimation of expected information gain in the model parameters and predictive quantities of interest. Furthermore, in order to deal with the issue of dimensionality in a complex problem, we use sparse quadratures for the integration over the prior. We show the accuracy and efficiency of the proposed method via several nonlinear numerical examples, including a single-parameter design of a one-dimensional cubic polynomial function and the current pattern for impedance tomography.

  12. Fast and robust estimation of spectro-temporal receptive fields using stochastic approximations.

    Science.gov (United States)

    Meyer, Arne F; Diepenbrock, Jan-Philipp; Ohl, Frank W; Anemüller, Jörn

    2015-05-15

    The receptive field (RF) represents the signal preferences of sensory neurons and is the primary analysis method for understanding sensory coding. While it is essential to estimate a neuron's RF, finding numerical solutions to increasingly complex RF models can become computationally intensive, in particular for high-dimensional stimuli or when many neurons are involved. Here we propose an optimization scheme based on stochastic approximations that facilitates this task. The basic idea is to derive solutions on a random subset rather than computing the full solution on the available data set. To test this, we applied different optimization schemes based on stochastic gradient descent (SGD) to both the generalized linear model (GLM) and a recently developed classification-based RF estimation approach. Using simulated and recorded responses, we demonstrate that RF parameter optimization based on state-of-the-art SGD algorithms produces robust estimates of the spectro-temporal receptive field (STRF). Results on recordings from the auditory midbrain demonstrate that stochastic approximations preserve both predictive power and tuning properties of STRFs. A correlation of 0.93 with the STRF derived from the full solution may be obtained in less than 10% of the full solution's estimation time. We also present an on-line algorithm that allows simultaneous monitoring of STRF properties of more than 30 neurons on a single computer. The proposed approach may not only prove helpful for large-scale recordings but also provide a more comprehensive characterization of neural tuning in experiments than standard tuning curves. Copyright © 2015 Elsevier B.V. All rights reserved.
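    A small sketch of the core trick, gradients on random subsets rather than the full data set, applied to a linear-Gaussian RF model as a simpler stand-in for the GLM and classification-based estimators discussed above; dimensions, learning rate and batch size are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear-Gaussian RF model: response = stimulus . w_true + noise.
n_feat, n_samp = 40 * 15, 10000          # e.g. 40 freq channels x 15 time lags
w_true = rng.normal(size=n_feat) * (rng.random(n_feat) < 0.1)   # sparse STRF
X = rng.normal(size=(n_samp, n_feat))    # stimulus patches (one per time bin)
y = X @ w_true + rng.normal(0, 1.0, n_samp)

# Mini-batch SGD on the squared-error loss: each step sees only `batch`
# random samples instead of the full data set.
w = np.zeros(n_feat)
lr, batch = 1e-3, 64
for it in range(4000):
    idx = rng.integers(0, n_samp, batch)
    grad = X[idx].T @ (X[idx] @ w - y[idx]) / batch
    w -= lr * grad

print(np.corrcoef(w, w_true)[0, 1])      # high correlation with the true RF
```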

  13. MATLAB-implemented estimation procedure for model-based assessment of hepatic insulin degradation from standard intravenous glucose tolerance test data.

    Science.gov (United States)

    Di Nardo, Francesco; Mengoni, Michele; Morettini, Micaela

    2013-05-01

    The present study provides a novel MATLAB-based parameter estimation procedure for the individual assessment of the hepatic insulin degradation (HID) process from standard frequently-sampled intravenous glucose tolerance test (FSIGTT) data. Direct access to the source code, offered by MATLAB, enabled us to design an optimization procedure based on the alternating use of the Gauss-Newton and Levenberg-Marquardt algorithms, which assures the full convergence of the process and the containment of computational time. Reliability was tested by direct comparison with the application, in eighteen non-diabetic subjects, of the well-known kinetic analysis software package SAAM II, and by application on different data. Agreement between MATLAB and SAAM II was warranted by intraclass correlation coefficients ≥0.73, no significant differences between corresponding mean parameter estimates and predictions of the HID rate, and consistent residual analysis. Moreover, the MATLAB optimization procedure resulted in a significant 51% reduction of the CV% of the parameter worst estimated by SAAM II, while containing the CV% of all model parameters. The MATLAB-based procedure was therefore suggested as a suitable tool for the individual assessment of the HID process. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  14. Fast estimation of expected information gains for Bayesian experimental designs based on Laplace approximations

    KAUST Repository

    Long, Quan

    2013-06-01

    Shannon-type expected information gain can be used to evaluate the relevance of a proposed experiment subjected to uncertainty. The estimation of such gain, however, relies on a double-loop integration. Moreover, its numerical integration in multi-dimensional cases, e.g., when using Monte Carlo sampling methods, is therefore computationally too expensive for realistic physical models, especially for those involving the solution of partial differential equations. In this work, we present a new methodology, based on the Laplace approximation for the integration of the posterior probability density function (pdf), to accelerate the estimation of the expected information gains in the model parameters and predictive quantities of interest. We obtain a closed-form approximation of the inner integral and the corresponding dominant error term in the cases where parameters are determined by the experiment, such that only a single-loop integration is needed to carry out the estimation of the expected information gain. To deal with the issue of dimensionality in a complex problem, we use a sparse quadrature for the integration over the prior pdf. We demonstrate the accuracy, efficiency and robustness of the proposed method via several nonlinear numerical examples, including the designs of the scalar parameter in a one-dimensional cubic polynomial function, the design of the same scalar in a modified function with two indistinguishable parameters, the resolution width and measurement time for a blurred single peak spectrum, and the boundary source locations for impedance tomography in a square domain. © 2013 Elsevier B.V.

  15. Fast estimation of expected information gains for Bayesian experimental designs based on Laplace approximations

    KAUST Repository

    Long, Quan; Scavino, Marco; Tempone, Raul; Wang, Suojin

    2013-01-01

    Shannon-type expected information gain can be used to evaluate the relevance of a proposed experiment subjected to uncertainty. The estimation of such gain, however, relies on a double-loop integration. Moreover, its numerical integration in multi-dimensional cases, e.g., when using Monte Carlo sampling methods, is therefore computationally too expensive for realistic physical models, especially for those involving the solution of partial differential equations. In this work, we present a new methodology, based on the Laplace approximation for the integration of the posterior probability density function (pdf), to accelerate the estimation of the expected information gains in the model parameters and predictive quantities of interest. We obtain a closed-form approximation of the inner integral and the corresponding dominant error term in the cases where parameters are determined by the experiment, such that only a single-loop integration is needed to carry out the estimation of the expected information gain. To deal with the issue of dimensionality in a complex problem, we use a sparse quadrature for the integration over the prior pdf. We demonstrate the accuracy, efficiency and robustness of the proposed method via several nonlinear numerical examples, including the designs of the scalar parameter in a one-dimensional cubic polynomial function, the design of the same scalar in a modified function with two indistinguishable parameters, the resolution width and measurement time for a blurred single peak spectrum, and the boundary source locations for impedance tomography in a square domain. © 2013 Elsevier B.V.

  16. Computationally fast estimation of muscle tension for realtime bio-feedback.

    Science.gov (United States)

    Murai, Akihiko; Kurosaki, Kosuke; Yamane, Katsu; Nakamura, Yoshihiko

    2009-01-01

    In this paper, we propose a method for the realtime estimation of whole-body muscle tensions. The main problem of muscle tension estimation is that there are an infinite number of solutions that realize a particular joint torque, due to actuation redundancy. Numerical optimization techniques, e.g. quadratic programming, are often employed to obtain a unique solution, but they are usually computationally expensive. For example, our implementation of quadratic programming takes about 0.17 s per frame on a musculoskeletal model with 274 elements, which is far from realtime computation. Here, we propose to reduce the computational cost by using EMG data and by reducing the number of unknowns in the optimization. First, we compute the tensions of muscles with surface EMG data based on biological muscle data, which is a very efficient process. We also assume that their synergists have the same activity levels and compute their tensions with the same model. Tensions of the remaining muscles are then computed using quadratic programming, but the number of unknowns is significantly reduced by assuming that the muscles in the same heteronymous group have the same activity level. The proposed method realizes realtime estimation and visualization of whole-body muscle tensions and can be applied to sports training and rehabilitation.
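    As a rough stand-in for the quadratic program over the remaining muscles, the sketch below solves a bounded least-squares torque-sharing problem with scipy's lsq_linear; moment arms, maximal forces and the target torque are all hypothetical, and the EMG-driven and synergist-grouped tensions that the paper fixes beforehand are simply omitted.

```python
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(0)

# Torque-sharing: find activations a in [0, 1] so that M @ (a * f_max) ~ tau.
n_musc, n_joint = 12, 3
M = rng.uniform(-0.05, 0.05, size=(n_joint, n_musc))  # signed moment arms (m)
f_max = rng.uniform(200, 1500, size=n_musc)           # max isometric forces (N)
tau = np.array([8.0, -3.0, 5.0])                      # target joint torques (Nm)

A = M * f_max                      # column-scale moment arms by muscle strength
res = lsq_linear(A, tau, bounds=(0.0, 1.0))           # bounded least squares
tensions = res.x * f_max

print(tensions)                    # non-negative muscle tensions (N)
print(A @ res.x - tau)             # torque residual, ~0 when tau is feasible
```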

  17. A helium-3 proportional counter technique for estimating fast and intermediate neutrons

    International Nuclear Information System (INIS)

    Kosako, Toshiso; Nakazawa, Masaharu; Sekiguchi, Akira; Wakabayashi, Hiroaki.

    1976-11-01

    A 3He proportional counter was employed to determine fast and intermediate neutron spectra over a wide energy region. The mixed-gas (3He, Kr) counter response and the spectrum unfolding code were prepared and applied to some neutron fields. The counter response calculation was performed using a Monte Carlo code, with particular attention to the particle range calculation in the mixed gas. An experiment was carried out using a Van de Graaff accelerator to check the response function. The spectrum unfolding code was designed to automatically evaluate the effect of the higher-energy spectrum on the pulse height distribution in the lower-energy region. The neutron spectra of various neutron fields were measured and compared with calculations such as discrete ordinates Sn calculations. It became clear that the technique developed here can be applied in practice in the neutron energy range from about 150 keV to 5 MeV. (auth.)

  18. Improved Atmospheric Correction Over the Indian Subcontinent Using Fast Radiative Transfer and Optimal Estimation

    Science.gov (United States)

    Natraj, V.; Thompson, D. R.; Mathur, A. K.; Babu, K. N.; Kindel, B. C.; Massie, S. T.; Green, R. O.; Bhattacharya, B. K.

    2017-12-01

    Remote Visible / ShortWave InfraRed (VSWIR) spectroscopy, typified by the Next-Generation Airborne Visible/Infrared Imaging Spectrometer (AVIRIS-NG), is a powerful tool to map the composition, health, and biodiversity of Earth's terrestrial and aquatic ecosystems. These studies must first estimate surface reflectance, removing the atmospheric effects of absorption and scattering by water vapor and aerosols. Since atmospheric state varies spatiotemporally, and is insufficiently constrained by climatological models, it is important to estimate it directly from the VSWIR data. However, water vapor and aerosol estimation is a significant ongoing challenge for existing atmospheric correction models. Conventional VSWIR atmospheric correction methods evolved from multi-band approaches and do not fully utilize the rich spectroscopic data available. We use spectrally resolved (line-by-line) radiative transfer calculations, coupled with optimal estimation theory, to demonstrate improved accuracy of surface retrievals. These spectroscopic techniques are already pervasive in atmospheric remote sounding disciplines but have not yet been applied to imaging spectroscopy. Our analysis employs a variety of scenes from the recent AVIRIS-NG India campaign, which spans various climes, elevation changes, a wide range of biomes and diverse aerosol scenarios. A key aspect of our approach is joint estimation of surface and aerosol parameters, which allows assessment of aerosol distortion effects using spectral shapes across the entire measured interval from 380-2500 nm. We expect that this method would outperform band ratio approaches, and enable evaluation of subtle aerosol parameters where in situ reference data is not available, or for extreme aerosol loadings, as is observed in the India scenarios. The results are validated using existing in-situ reference spectra, reflectance measurements from assigned partners in India, and objective spectral quality metrics for scenes without any

  19. READSCAN: A fast and scalable pathogen discovery program with accurate genome relative abundance estimation

    KAUST Repository

    Naeem, Raeece

    2012-11-28

    Summary: READSCAN is a highly scalable parallel program to identify non-host sequences (of potential pathogen origin) and estimate their genome relative abundance in high-throughput sequence datasets. READSCAN accurately classified human and viral sequences on a 20.1 million reads simulated dataset in <27 min using a small Beowulf compute cluster with 16 nodes (Supplementary Material). Availability: http://cbrc.kaust.edu.sa/readscan Contact: raeece.naeem@gmail.com Supplementary information: Supplementary data are available at Bioinformatics online. © 2012 The Author(s).

  20. SATe-II: very fast and accurate simultaneous estimation of multiple sequence alignments and phylogenetic trees.

    Science.gov (United States)

    Liu, Kevin; Warnow, Tandy J; Holder, Mark T; Nelesen, Serita M; Yu, Jiaye; Stamatakis, Alexandros P; Linder, C Randal

    2012-01-01

    Highly accurate estimation of phylogenetic trees for large data sets is difficult, in part because multiple sequence alignments must be accurate for phylogeny estimation methods to be accurate. Coestimation of alignments and trees has been attempted but currently only SATé estimates reasonably accurate trees and alignments for large data sets in practical time frames (Liu K., Raghavan S., Nelesen S., Linder C.R., Warnow T. 2009b. Rapid and accurate large-scale coestimation of sequence alignments and phylogenetic trees. Science. 324:1561-1564). Here, we present a modification to the original SATé algorithm that improves upon SATé (which we now call SATé-I) in terms of speed and of phylogenetic and alignment accuracy. SATé-II uses a different divide-and-conquer strategy than SATé-I and so produces smaller more closely related subsets than SATé-I; as a result, SATé-II produces more accurate alignments and trees, can analyze larger data sets, and runs more efficiently than SATé-I. Generally, SATé is a metamethod that takes an existing multiple sequence alignment method as an input parameter and boosts the quality of that alignment method. SATé-II-boosted alignment methods are significantly more accurate than their unboosted versions, and trees based upon these improved alignments are more accurate than trees based upon the original alignments. Because SATé-I used maximum likelihood (ML) methods that treat gaps as missing data to estimate trees and because we found a correlation between the quality of tree/alignment pairs and ML scores, we explored the degree to which SATé's performance depends on using ML with gaps treated as missing data to determine the best tree/alignment pair. We present two lines of evidence that using ML with gaps treated as missing data to optimize the alignment and tree produces very poor results. First, we show that the optimization problem where a set of unaligned DNA sequences is given and the output is the tree and alignment of

  1. READSCAN: A fast and scalable pathogen discovery program with accurate genome relative abundance estimation

    KAUST Repository

    Naeem, Raeece; Rashid, Mamoon; Pain, Arnab

    2012-01-01

    Summary: READSCAN is a highly scalable parallel program to identify non-host sequences (of potential pathogen origin) and estimate their genome relative abundance in high-throughput sequence datasets. READSCAN accurately classified human and viral sequences on a 20.1 million reads simulated dataset in <27 min using a small Beowulf compute cluster with 16 nodes (Supplementary Material). Availability: http://cbrc.kaust.edu.sa/readscan Contact: raeece.naeem@gmail.com Supplementary information: Supplementary data are available at Bioinformatics online. © 2012 The Author(s).

  2. A Novel OFDM Channel Estimation Algorithm with ICI Mitigation over Fast Fading Channels

    Directory of Open Access Journals (Sweden)

    C. Tao

    2010-06-01

    Full Text Available Orthogonal frequency-division multiplexing (OFDM) is well known as a high-bit-rate transmission technique, but the Doppler frequency offset due to high-speed movement destroys the orthogonality of the subcarriers, resulting in intercarrier interference (ICI) and degrading the performance of the system at the same time. In this paper, a novel OFDM channel estimation algorithm with ICI mitigation, based on the ICI self-cancellation scheme, is proposed. With this method, a more accurate channel estimate is obtained from comb-type double pilots, and the ICI coefficients can then be obtained to mitigate the ICI on each subcarrier, under the assumption that the channel impulse response (CIR) varies in a linear fashion. The theoretical analysis and simulation results show that the bit error rate (BER) and spectral efficiency performances are improved significantly under high-speed mobility conditions (350 km/h – 500 km/h) in comparison to ZHAO’s ICI self-cancellation scheme.

  3. Gas-cooled fast-breeder reactor. Helium Circulator Test Facility updated design cost estimate

    International Nuclear Information System (INIS)

    1979-04-01

    Costs which are included in the cost estimate are: Titles I, II, and III Architect-Engineering Services; Titles I, II, and III General Atomic Services; site clearing, grading, and excavation; bulk materials and labor of installation; mechanical and electrical equipment with installation; allowance for contractors' overhead, profit, and insurance; escalation on materials and labor; a contingency; and installation of GAC supplied equipment and materials. The total estimated cost of the facility in As Spent Dollars is $27,700,000. Also included is a cost comparison of the updated design and the previous conceptual design. There would be a considerable penalty for the direct-cooled system over the indirect-cooled system due to the excessive cost of the large diameter helium loop piping to an outdoor heat exchanger. The indirect cooled system which utilizes a helium/Dowtherm G heat exchanger and correspondingly smaller and lower pressure piping to its outdoor air cooler proved to be the more economical of the two systems

  4. An extended role for thermoluminescent phosphors in personnel, environmental and accident dosimetry using sensitisation, re-estimation and fast fading

    International Nuclear Information System (INIS)

    Charles, M.W.

    1983-01-01

    This paper summarises some techniques for extending the usefulness of conventional phosphors in personnel, environmental and accident dosimetry. An optimised procedure for utilising radiation sensitisation and UV re-estimation in thermoluminescent LiF is presented. In particular it is shown that optimum performance is achieved by using a UV wavelength of 250 ± 10 nm for both the UV/thermal anneal following sensitisation, and for the UV re-estimation procedure. In the case of Harshaw LiF chips (3×3×0.9 mm3) the sensitivity is increased by a factor of 4-5 to achieve a minimum detectable dose of ≈10 μGy (2σ) and a minimum re-estimable dose of 50-100 mGy (2σ), dependent on batch. Sensitised LiF also exhibits improved tissue equivalence, extended linearity and improved precision at low doses. The information from fast-fading glow peaks, which is normally rejected, is shown to have a useful application to the evaluation of short-term increases in environmental dose rates such as may occur following accidental releases of radioactivity. (orig.)

  5. Fast admixture analysis and population tree estimation for SNP and NGS data

    DEFF Research Database (Denmark)

    Cheng, Jade Yu; Mailund, Thomas; Nielsen, Rasmus

    2017-01-01

    … genotype-calling associated with Next Generation Sequencing (NGS) data. We also present a new method for estimating population trees from ancestry components using a Gaussian approximation. Using coalescence simulations of diverging populations, we explore the adequacy of the STRUCTURE-style models and the Gaussian assumption for identifying ancestry components correctly and for inferring the correct tree. In most cases, ancestry components are inferred correctly, although sample sizes and times since admixture can influence the results. We show that the popular Gaussian approximation tends to perform poorly under extreme divergence scenarios, e.g. with very long branch lengths, but the topologies of the population trees are accurately inferred in all scenarios explored. The new methods are implemented together with appropriate visualization tools in the software package Ohana.

  6. Estimated 55Mn and 90Zr cross section covariances in the fast neutron energy region

    International Nuclear Information System (INIS)

    Pigni, M.T.; Herman, M.; Oblozinsky, P.

    2008-01-01

    We completed estimates of neutron cross section covariances for ⁵⁵Mn and ⁹⁰Zr, from the keV range to 25 MeV, considering the most important reaction channels: total, elastic, inelastic, capture, and (n,2n). The nuclear reaction model code EMPIRE was used to calculate sensitivity to model parameters by perturbation of the parameters that define the optical model potential, nuclear level densities and strength of the pre-equilibrium emission. The sensitivity analysis was performed with the set of parameters which reproduces the ENDF/B-VII.0 cross sections. The experimental data were analyzed and both statistical and systematic uncertainties were extracted from almost 30 selected experiments. Then, the Bayesian code KALMAN was used to combine the sensitivity analysis and the experiments to obtain the evaluated covariance matrices.
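
    A minimal sketch of the generalized least-squares (Kalman) update that this kind of evaluation performs, combining a prior parameter covariance and a sensitivity matrix with experimental data. All matrices below are tiny stand-ins, not EMPIRE or KALMAN output.

        import numpy as np

        P = np.diag([0.04, 0.09])           # prior covariance of 2 model parameters
        S = np.array([[1.0, 0.3],           # sensitivities d(sigma_i)/d(p_j)
                      [0.2, 1.1],
                      [0.5, 0.5]])
        V = np.diag([0.01, 0.02, 0.01])     # experimental covariance (3 data points)
        r = np.array([0.05, -0.02, 0.01])   # experiment-minus-model residuals

        K = P @ S.T @ np.linalg.inv(S @ P @ S.T + V)   # gain
        p_adj = K @ r                       # parameter adjustment
        P_post = P - K @ S @ P              # posterior parameter covariance
        C_post = S @ P_post @ S.T           # evaluated cross-section covariance
        print(p_adj, np.diag(C_post))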

  7. A method for fast energy estimation and visualization of protein-ligand interaction

    Science.gov (United States)

    Tomioka, Nobuo; Itai, Akiko; Iitaka, Yoichi

    1987-10-01

    A new computational and graphical method for facilitating ligand-protein docking studies is developed on a three-dimensional computer graphics display. Various physical and chemical properties inside the ligand binding pocket of a receptor protein, whose structure is elucidated by X-ray crystal analysis, are calculated on three-dimensional grid points and are stored in advance. By utilizing those tabulated data, it is possible to estimate the non-bonded and electrostatic interaction energy and the number of possible hydrogen bonds between protein and ligand molecules in real time during an interactive docking operation. The method also provides a comprehensive visualization of the local environment inside the binding pocket. With this method, it becomes easier to find a roughly stable geometry of ligand molecules, and one can therefore make a rapid survey of the binding capability of many drug candidates. The method will be useful for drug design as well as for the examination of protein-ligand interactions.
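
    The speed of such grid methods comes from tabulating potentials once and interpolating at ligand atom positions during the interactive docking session. A minimal sketch with a stand-in potential grid and trilinear interpolation (grid size, spacing and values are hypothetical):

        import numpy as np

        spacing = 0.5                            # grid spacing, angstroms (assumed)
        grid = np.random.rand(32, 32, 32)        # stand-in precomputed potential

        def trilinear(grid, xyz, spacing):
            """Interpolate a tabulated potential at a continuous position."""
            g = np.asarray(xyz) / spacing
            i = np.floor(g).astype(int)          # lower corner of enclosing cell
            f = g - i                            # fractional position in the cell
            val = 0.0
            for dx in (0, 1):
                for dy in (0, 1):
                    for dz in (0, 1):
                        w = ((f[0] if dx else 1 - f[0])
                             * (f[1] if dy else 1 - f[1])
                             * (f[2] if dz else 1 - f[2]))
                        val += w * grid[i[0] + dx, i[1] + dy, i[2] + dz]
            return val

        # interaction energy contribution of one ligand atom at (x, y, z)
        print(trilinear(grid, (3.2, 7.7, 9.1), spacing))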

  8. Estimated 55Mn and 90Zr cross section covariances in the fast neutron energy region

    Energy Technology Data Exchange (ETDEWEB)

    Pigni, M.T.; Herman, M.; Oblozinsky, P.

    2008-06-24

    We completed estimates of neutron cross section covariances for ⁵⁵Mn and ⁹⁰Zr, from the keV range to 25 MeV, considering the most important reaction channels: total, elastic, inelastic, capture, and (n,2n). The nuclear reaction model code EMPIRE was used to calculate sensitivity to model parameters by perturbation of the parameters that define the optical model potential, nuclear level densities and strength of the pre-equilibrium emission. The sensitivity analysis was performed with the set of parameters which reproduces the ENDF/B-VII.0 cross sections. The experimental data were analyzed and both statistical and systematic uncertainties were extracted from almost 30 selected experiments. Then, the Bayesian code KALMAN was used to combine the sensitivity analysis and the experiments to obtain the evaluated covariance matrices.

  9. Empirical and model-based estimates of spatial and temporal variations in net primary productivity in semi-arid grasslands of Northern China.

    Directory of Open Access Journals (Sweden)

    Shengwei Zhang

    Spatiotemporal variations in net primary productivity (NPP) reflect the dynamics of water and carbon in the biosphere, and are often closely related to temperature and precipitation. We used the ecosystem model known as the Carnegie-Ames-Stanford Approach (CASA) to estimate the NPP of semiarid grassland counties in northern China between 2001 and 2013. Model estimates were strongly linearly correlated with observed values from different counties (slope = 0.76 (p < 0.001), intercept = 34.7 (p < 0.01), R² = 0.67, RMSE = 35 g C·m⁻²·year⁻¹, bias = -0.11 g C·m⁻²·year⁻¹). We also quantified inter-annual changes in NPP over the 13-year study period. NPP varied between 141 and 313 g C·m⁻²·year⁻¹, with a mean of 240 g C·m⁻²·year⁻¹. NPP increased from west to east each year, and mean precipitation in each county was significantly positively correlated with NPP annually, and in summer and autumn. Mean precipitation was positively related to NPP in spring, but not significantly so. Annual and summer temperatures were mostly negatively correlated with NPP, but temperature was positively correlated with spring and autumn NPP. Spatial correlation and partial correlation analyses at the pixel scale confirmed that precipitation is a major driver of NPP. Temperature was negatively correlated with NPP in 99% of the regions at the annual scale, but after removing the effect of precipitation, temperature was positively correlated with NPP in 77% of the regions. Our data show that temperature effects on production depend heavily on recent precipitation. These results have significant and far-reaching implications for natural resource management, given the enormous size of these grasslands and the number of people dependent on them.

  10. A software sensor model based on hybrid fuzzy neural network for rapid estimation of water quality in Guangzhou section of Pearl River, China.

    Science.gov (United States)

    Zhou, Chunshan; Zhang, Chao; Tian, Di; Wang, Ke; Huang, Mingzhi; Liu, Yanbiao

    2018-01-02

    In order to manage water resources, a software sensor model was designed to estimate water quality using a hybrid fuzzy neural network (FNN) in the Guangzhou section of the Pearl River, China. The software sensor system was composed of a data storage module, a fuzzy decision-making module, a neural network module and a fuzzy reasoning generator module. Fuzzy subtractive clustering was employed to capture the characteristics of the model and to optimize the network architecture for enhanced performance. The results indicate that, on the basis of available on-line measured variables, the software sensor model can accurately predict water quality according to the relationship between chemical oxygen demand (COD) and dissolved oxygen (DO), pH and NH₄⁺-N. Owing to its ability to recognize time series patterns and non-linear characteristics, the FNN-based software sensor is clearly superior to the traditional neural network model; its R (correlation coefficient), MAPE (mean absolute percentage error) and RMSE (root mean square error) are 0.8931, 10.9051 and 0.4634, respectively.

  11. An improved routine for the fast estimate of ion cyclotron heating efficiency in tokamak plasmas

    International Nuclear Information System (INIS)

    Brambilla, M.

    1992-02-01

    The subroutine ICEVAL for the rapid simulation of ion cyclotron heating in tokamak plasmas is based on analytic estimates of the wave behaviour near resonances, and on drastic but reasonable simplifications of the real geometry. The subroutine has been rewritten to improve the model and to facilitate its use as input to transport codes. In the new version the influence of quasilinear minority heating on the damping efficiency is taken into account using the well-known Stix analytic approximation. Among other improvements are: a) the possibility of considering plasmas with more than two ion species; b) inclusion of Landau, transit-time and collisional damping on the electrons not localised at resonances; c) better models for the antenna spectrum and for the construction of the power deposition profiles. The results of ICEVAL are compared in detail with those of the full-wave code FELICE for the case of hydrogen minority heating in a deuterium plasma; except for details which depend on the excitation of global eigenmodes, agreement is excellent. ICEVAL is also used to investigate the enhancement of the absorption efficiency due to quasilinear heating of the minority ions. The effect is a strongly non-linear function of the available power, and decreases rapidly with increasing concentration. For parameters typical of ASDEX Upgrade plasmas, about 4 MW are required to produce a significant increase of the single-pass absorption at concentrations between 10 and 20%. (orig.)

  12. A fast and reliable method for simultaneous waveform, amplitude and latency estimation of single-trial EEG/MEG data.

    Directory of Open Access Journals (Sweden)

    Wouter D Weeda

    The amplitude and latency of single-trial EEG/MEG signals may provide valuable information concerning human brain functioning. In this article we propose a new method to reliably estimate the single-trial amplitude and latency of EEG/MEG signals. The advantages of the method are fourfold. First, no a priori specified template function is required. Second, the method allows for multiple signals that may vary independently in amplitude and/or latency. Third, the method is less sensitive to noise as it models data with a parsimonious set of basis functions. Finally, the method is very fast since it is based on an iterative linear least squares algorithm. A simulation study shows that the method yields reliable estimates under different levels of latency variation and signal-to-noise ratios. Furthermore, it shows that the existence of multiple signals can be correctly determined. An application to empirical data from a choice reaction time study indicates that the method describes these data accurately.
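
    A minimal sketch of the alternating idea behind such estimators: for each candidate latency, the amplitude follows from a linear least-squares fit, and the latency is chosen to minimize the residual. The basis function, grid and noise level are illustrative; the published method handles multiple signals and a learned basis rather than a fixed template.

        import numpy as np

        t = np.linspace(0, 1, 200)
        basis = lambda tau: np.exp(-0.5 * ((t - 0.4 - tau) / 0.05) ** 2)

        rng = np.random.default_rng(1)
        trial = 2.5 * basis(0.07) + 0.3 * rng.standard_normal(t.size)

        best = None
        for tau in np.linspace(-0.15, 0.15, 61):     # latency grid search
            X = basis(tau)[:, None]
            amp, res, *_ = np.linalg.lstsq(X, trial, rcond=None)
            if best is None or res[0] < best[0]:
                best = (res[0], tau, amp[0])
        print("latency=%.3f amplitude=%.2f" % (best[1], best[2]))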

  13. Probabilistic Model-based Background Subtraction

    DEFF Research Database (Denmark)

    Krüger, Volker; Anderson, Jakob; Prehn, Thomas

    2005-01-01

    … is the correlation between pixels. In this paper we introduce a model-based background subtraction approach which facilitates prior knowledge of pixel correlations for clearer and better results. Model knowledge is learned from good training video data; the data are stored for fast access in a hierarchical …

  14. Error estimation and parameter dependence of the calculation of the fast ion distribution function, temperature, and density using data from the KF1 high energy neutral particle analyzer on Joint European Torus

    International Nuclear Information System (INIS)

    Schlatter, Christian; Testa, Duccio; Cecconello, Marco; Murari, Andrea; Santala, Marko

    2004-01-01

    The Joint European Torus high energy neutral particle analyzer measures the flux of fast neutrals originating from the plasma core. From these data, the fast ion distribution function f_i^fast, temperature T_i,⊥^fast, and density n_i^fast are derived using knowledge of various plasma parameters and of the cross sections for the required atomic processes. In this article, a systematic sensitivity study of the effect of uncertainties in these quantities on the evaluation of the neutral particle analyzer f_i^fast, T_i,⊥^fast, and n_i^fast is reported. The dominant parameter affecting n_i^fast is the impurity confinement time, and therefore a reasonable estimate of this quantity is necessary to reduce the uncertainties in n_i^fast below 50%. On the other hand, T_i,⊥^fast is much less sensitive and can certainly be provided with an accuracy of better than 10%.

  15. A fast position estimation method for a control rod guide tube inspection robot with a single camera

    International Nuclear Information System (INIS)

    Lee, Jae C.; Seop, Jun H.; Choi, Yu R.; Kim, Jae H.

    2004-01-01

    One of the problems in the inspection of control rod guide tubes using a mobile robot is the accurate estimation of the robot's position. The problem is usually summarised by the question 'Where am I?'. It can be addressed by dead reckoning using odometers, but this has inherent drawbacks: the position error grows without bound unless an independent reference is used periodically to reduce it. In this paper, we present a method to overcome this drawback by using a vision sensor. Our method is based on the classical Lucas-Kanade algorithm for image feature tracking. In this algorithm, an optical flow must be calculated at every image frame, which imposes an intensive computing load. In order to handle large motions, it is preferable to use a large integration window, but a small integration window is better for keeping the details contained in the images. We used the robot's movement information obtained from dead reckoning as an input parameter to the feature tracking algorithm in order to restrict the position of the integration window. By means of this method, we could reduce the size of the integration window without any loss of its ability to handle large motions, avoiding the trade-off in accuracy. We could thus estimate the position of our robot relatively quickly, without an intensive computing time or the inherent drawbacks mentioned above. We studied this algorithm for application to the control rod guide tube inspection robot and tried an inspection without an operator's intervention.
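
    A minimal sketch of seeding a pyramidal Lucas-Kanade tracker with a motion prediction, here via OpenCV's OPTFLOW_USE_INITIAL_FLOW flag; the synthetic frames and the odometry-derived displacement are stand-ins.

        import cv2
        import numpy as np

        rng = np.random.default_rng(0)
        prev_img = (rng.random((240, 320)) * 255).astype(np.uint8)  # synthetic frame
        next_img = np.roll(prev_img, (2, 4), axis=(0, 1))           # shift by (4, 2) px

        prev_pts = cv2.goodFeaturesToTrack(prev_img, maxCorners=50,
                                           qualityLevel=0.01, minDistance=8)

        dxdy = np.array([4.0, 2.0], dtype=np.float32)  # predicted shift (odometry)
        init_pts = prev_pts + dxdy                     # seed the search windows

        next_pts, status, err = cv2.calcOpticalFlowPyrLK(
            prev_img, next_img, prev_pts, init_pts.copy(),
            winSize=(15, 15), maxLevel=2,              # small window, thanks to the seed
            flags=cv2.OPTFLOW_USE_INITIAL_FLOW)

        ok = status.ravel() == 1
        print((next_pts[ok] - prev_pts[ok]).mean(axis=0))   # close to (4, 2)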

  16. Fast and accurate phylogenetic reconstruction from high-resolution whole-genome data and a novel robustness estimator.

    Science.gov (United States)

    Lin, Y; Rajan, V; Moret, B M E

    2011-09-01

    The rapid accumulation of whole-genome data has renewed interest in the study of genomic rearrangements. Comparative genomics, evolutionary biology, and cancer research all require models and algorithms to elucidate the mechanisms, history, and consequences of these rearrangements. However, even simple models lead to NP-hard problems, particularly in the area of phylogenetic analysis. Current approaches are limited to small collections of genomes and low-resolution data (typically a few hundred syntenic blocks). Moreover, whereas phylogenetic analyses from sequence data are deemed incomplete unless bootstrapping scores (a measure of confidence) are given for each tree edge, no equivalent to bootstrapping exists for rearrangement-based phylogenetic analysis. We describe a fast and accurate algorithm for rearrangement analysis that scales up, in both time and accuracy, to modern high-resolution genomic data. We also describe a novel approach to estimate the robustness of results, an equivalent to the bootstrapping analysis used in sequence-based phylogenetic reconstruction. We present the results of extensive testing on both simulated and real data showing that our algorithm returns very accurate results, while scaling linearly with the size of the genomes and cubically with their number. We also present extensive experimental results showing that our approach to robustness testing provides excellent estimates of confidence, which, moreover, can be tuned to trade off thresholds between false positives and false negatives. Together, these two novel approaches enable us to attack heretofore intractable problems, such as phylogenetic inference for high-resolution vertebrate genomes, as we demonstrate on a set of six vertebrate genomes with 8,380 syntenic blocks. A copy of the software is available on demand.

  17. A deep learning approach to estimate stress distribution: a fast and accurate surrogate of finite-element analysis.

    Science.gov (United States)

    Liang, Liang; Liu, Minliang; Martin, Caitlin; Sun, Wei

    2018-01-01

    Structural finite-element analysis (FEA) has been widely used to study the biomechanics of human tissues and organs, tissue-medical device interactions, and treatment strategies. However, patient-specific FEA models usually require complex procedures to set up and long computing times to obtain final simulation results, preventing prompt feedback to clinicians in time-sensitive clinical applications. In this study, using machine learning techniques, we developed a deep learning (DL) model to directly estimate the stress distributions of the aorta. The DL model was designed and trained to take the FEA input and directly output the aortic wall stress distributions, bypassing the FEA calculation process. The trained DL model is capable of predicting the stress distributions with average errors of 0.492% and 0.891% in the Von Mises stress distribution and peak Von Mises stress, respectively. To our knowledge, this is the first study to demonstrate the feasibility and great potential of using the DL technique as a fast and accurate surrogate of FEA for stress analysis. © 2018 The Author(s).

  18. Model-based testing for embedded systems

    CERN Document Server

    Zander, Justyna; Mosterman, Pieter J

    2011-01-01

    What the experts have to say about Model-Based Testing for Embedded Systems: "This book is exactly what is needed at the exact right time in this fast-growing area. From its beginnings over 10 years ago of deriving tests from UML statecharts, model-based testing has matured into a topic with both breadth and depth. Testing embedded systems is a natural application of MBT, and this book hits the nail exactly on the head. Numerous topics are presented clearly, thoroughly, and concisely in this cutting-edge book. The authors are world-class leading experts in this area and teach us well-used …"

  19. On the problem of negative dissipation of fast waves at the fundamental ion cyclotron resonance and the accuracy of absorption estimates

    International Nuclear Information System (INIS)

    Castejon, F.; Pavlov, S.S.; Swanson, D. G.

    2002-01-01

    Negative dissipation appears when ion cyclotron resonance (ICR) heating at the first harmonic in a thermal plasma is estimated using some numerical schemes. The causes of this problem are investigated analytically and numerically in this work, showing that it is connected with the accuracy with which the absorption coefficient at the first ICR harmonic is estimated. Corrections for the absorption estimate are presented for the case of quasiperpendicular propagation of the fast wave in this frequency range. A method to solve the problem of negative dissipation is presented and, as a result, an enhancement of absorption is found for reactor-size plasmas.

  20. Quantitative estimation of the pathways followed in the conversion to glycogen of glucose administered to the fasted rat

    International Nuclear Information System (INIS)

    Scofield, R.F.; Kosugi, K.; Schumann, W.C.; Kumaran, K.; Landau, B.R.

    1985-01-01

    When [6-³H,6-¹⁴C]glucose was given in glucose loads to fasted rats, the average ³H/¹⁴C ratios in the glycogens deposited in their livers, relative to that in the glucoses administered, were 0.85 and 0.88. When [3-³H,3-¹⁴C]lactate was given in trace quantity along with unlabeled glucose loads, the average ³H/¹⁴C ratio in the glycogens deposited was 0.08. This indicates that a major fraction of the carbons of the glucose loads was converted to liver glycogen without first being converted to lactate. When [3-³H,6-¹⁴C]glucose was given in glucose loads, the ³H/¹⁴C ratios in the glycogens deposited averaged 0.44. This indicates that a significant amount of H bound to C-3, but not C-6, of glucose is removed within the liver in the conversion of the carbons of the glucose to glycogen. This can occur in the pentose cycle and by cycling of glucose-6-P via triose phosphates. The contributions of these pathways were estimated by giving glucose loads labeled with [1-¹⁴C]glucose, [2-¹⁴C]glucose, [5-¹⁴C]glucose, and [6-¹⁴C]glucose and degrading the glucoses obtained by hydrolyzing the glycogens that deposited. Between 4 and 9% of the glucose utilized by the liver was utilized in the pentose cycle. While these are relatively small percentages, a major portion of the difference between the ratios obtained with [3-³H]glucose and with [6-³H]glucose is attributable to metabolism in the pentose cycle.

  1. Joint estimation of the fast and thermal components of a high neutron flux with a two on-line detector system

    International Nuclear Information System (INIS)

    Filliatre, P.; Oriol, L.; Jammes, C.; Vermeeren, L.

    2009-01-01

    A fission chamber with a ²⁴²Pu deposit is the best suited detector for on-line measurements of the fast component of a high neutron flux (∼10¹⁴ n·cm⁻²·s⁻¹ or more) with a significant thermal component. To get the fast flux, it is, however, necessary to subtract the contribution of the thermal neutrons, which increases with fluence because of the evolution of the isotopic content of the deposit. This paper presents an algorithm that uses the measurements provided by a ²⁴²Pu fission chamber and a detector for thermal neutrons to estimate the thermal and the fast flux at any time. An implementation allows it to be tested with simulated data.
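
    A minimal sketch of the unmixing step underneath such an algorithm: with known thermal and fast sensitivities for the two detectors, each time step reduces to a 2x2 linear solve. The sensitivity values, and the simple growth law standing in for the evolution of the deposit, are illustrative only.

        import numpy as np

        def sensitivity(t):
            # rows: (242Pu chamber, thermal detector); columns: (thermal, fast)
            # the chamber's thermal sensitivity grows with fluence as the
            # deposit's isotopic content evolves (stand-in law)
            return np.array([[0.02 + 1e-4 * t, 1.00],
                             [1.00,            0.01]])

        for t, counts in [(0.0, np.array([1.2, 5.0])),
                          (100.0, np.array([1.3, 5.0]))]:
            phi_th, phi_fast = np.linalg.solve(sensitivity(t), counts)
            print(t, phi_th, phi_fast)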

  2. Joint estimation of the fast and thermal components of a high neutron flux with a two on-line detector system

    Energy Technology Data Exchange (ETDEWEB)

    Filliatre, P. [CEA, DEN, SPEx/LDCI, F-13108 Saint-Paul-lez-Durance (France); Laboratoire Commun d'Instrumentation CEA-SCK-CEN (France)], E-mail: philippe.filliatre@cea.fr; Oriol, L.; Jammes, C. [CEA, DEN, SPEx/LDCI, F-13108 Saint-Paul-lez-Durance (France); Laboratoire Commun d'Instrumentation CEA-SCK-CEN (France); Vermeeren, L. [SCK-CEN, Boeretang 200, B-2400 Mol (Belgium); Laboratoire Commun d'Instrumentation CEA-SCK-CEN (France)

    2009-05-21

    A fission chamber with a ²⁴²Pu deposit is the best suited detector for on-line measurements of the fast component of a high neutron flux (∼10¹⁴ n·cm⁻²·s⁻¹ or more) with a significant thermal component. To get the fast flux, it is, however, necessary to subtract the contribution of the thermal neutrons, which increases with fluence because of the evolution of the isotopic content of the deposit. This paper presents an algorithm that uses the measurements provided by a ²⁴²Pu fission chamber and a detector for thermal neutrons to estimate the thermal and the fast flux at any time. An implementation allows it to be tested with simulated data.

  3. Estimation of Antarctic Land-Fast Sea Ice Algal Biomass and Snow Thickness From Under-Ice Radiance Spectra in Two Contrasting Areas

    Science.gov (United States)

    Wongpan, P.; Meiners, K. M.; Langhorne, P. J.; Heil, P.; Smith, I. J.; Leonard, G. H.; Massom, R. A.; Clementson, L. A.; Haskell, T. G.

    2018-03-01

    Fast ice is an important component of Antarctic coastal marine ecosystems, providing a prolific habitat for ice algal communities. This work examines the relationships between normalized difference indices (NDI) calculated from under-ice radiance measurements and sea ice algal biomass and snow thickness for Antarctic fast ice. While this technique has been calibrated to assess biomass in Arctic fast ice and pack ice, as well as Antarctic pack ice, relationships are currently lacking for Antarctic fast ice characterized by bottom ice algal communities with high algal biomass. We analyze measurements along transects at two contrasting Antarctic fast ice sites in terms of platelet ice presence: near and distant from an ice shelf, i.e., in McMurdo Sound and off Davis Station, respectively. Snow and ice thickness, and ice salinity and temperature measurements support our paired in situ optical and biological measurements. Analyses show that NDI wavelength pairs near the first chlorophyll a (chl a) absorption peak (≈440 nm) explain up to 70% of the total variability in algal biomass. Eighty-eight percent of snow thickness variability is explained using an NDI with a wavelength pair of 648 and 567 nm. Accounting for pigment packaging effects by including the ratio of chl a-specific absorption coefficients improved the NDI-based algal biomass estimation only slightly. Our new observation-based algorithms can be used to estimate Antarctic fast ice algal biomass and snow thickness noninvasively, for example, by using moored sensors (time series) or by mapping their spatial distributions using underwater vehicles.
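
    A minimal sketch of the normalized difference index and its calibration against biomass; the wavelength pair follows the one reported above for snow thickness, while the spectra and chl a values are synthetic stand-ins.

        import numpy as np

        wavelengths = np.arange(400, 701)            # nm

        def ndi(spectrum, w1, w2):
            """Normalized difference index from an under-ice radiance spectrum."""
            r1 = spectrum[wavelengths == w1][0]
            r2 = spectrum[wavelengths == w2][0]
            return (r1 - r2) / (r1 + r2)

        rng = np.random.default_rng(2)
        spectra = rng.uniform(0.1, 1.0, (20, wavelengths.size))  # stand-in spectra
        chl_a = rng.uniform(1, 50, 20)                           # stand-in biomass

        x = np.array([ndi(s, 648, 567) for s in spectra])
        slope, intercept = np.polyfit(x, chl_a, 1)   # calibration line NDI -> biomass
        print(slope, intercept)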

  4. ROBUST MOTION SEGMENTATION FOR HIGH DEFINITION VIDEO SEQUENCES USING A FAST MULTI-RESOLUTION MOTION ESTIMATION BASED ON SPATIO-TEMPORAL TUBES

    OpenAIRE

    Brouard , Olivier; Delannay , Fabrice; Ricordel , Vincent; Barba , Dominique

    2007-01-01

    Motion segmentation methods are effective for tracking video objects. However, object segmentation methods based on motion need to know the global motion of the video in order to back-compensate it before computing the segmentation. In this paper, we propose a method which estimates the global motion of a High Definition (HD) video shot and then segments it using the remaining motion information. First, we develop a fast method for multi-resolution motion estimation …

  5. Model Based Temporal Reasoning

    Science.gov (United States)

    Rabin, Marla J.; Spinrad, Paul R.; Fall, Thomas C.

    1988-03-01

    Systems that assess the real world must cope with evidence that is uncertain, ambiguous, and spread over time. Typically, the most important function of an assessment system is to identify when activities are occurring that are unusual or unanticipated. Model based temporal reasoning addresses both of these requirements. The differences among temporal reasoning schemes lie in the methods used to avoid computational intractability. If we had n pieces of data and we wanted to examine how they were related, the worst case would be where we had to examine every subset of these points to see if that subset satisfied the relations. This would be 2ⁿ subsets, which is intractable. Models compress this: if several data points are all compatible with a model, then that model represents all those data points. Data points are then considered related if they lie within the same model or if they lie in models that are related. Models thus address the intractability problem. They also address the problem of determining unusual activities: if the data do not agree with models that are indicated by earlier data, then something out of the norm is taking place. The models can summarize what we know up to that time, so when they are not predicting correctly, either something unusual is happening or we need to revise our models. The model based reasoner developed at Advanced Decision Systems is thus both intuitive and powerful. It is currently being used on one operational system and several prototype systems. It has enough power to be used in domains spanning the spectrum from manufacturing engineering and project management to low-intensity conflict and strategic assessment.

  6. Dependability estimation for non-Markov consecutive-k-out-of-n: F repairable systems by fast simulation

    International Nuclear Information System (INIS)

    Xiao Gang; Li Zhizhong; Li Ting

    2007-01-01

    A model of a consecutive-k-out-of-n: F repairable system with non-exponential repair time distribution and (k-1)-step Markov dependence is introduced in this paper, along with algorithms for three Monte Carlo methods, i.e. importance sampling, conditional expectation estimation and a combination of the two, to estimate the dependability of the non-Markov model, including reliability, transient unavailability, MTTF, and MTBF. A numerical example is presented to demonstrate the efficiencies of the above methods. The results show that the combined method has the highest efficiency for estimating unreliability and unavailability, while conditional expectation estimation is the most efficient method for estimating MTTF and MTBF. Conditional expectation estimation seems to have overall higher speedups in estimating the dependability of such systems.

  7. Bioequivalence of two lansoprazole delayed release capsules 30 mg in healthy male volunteers under fasting, fed and fasting-applesauce conditions: a partial replicate crossover study design to estimate the pharmacokinetics of highly variable drugs.

    Science.gov (United States)

    Thota, S; Khan, S M; Tippabhotla, S K; Battula, R; Gadiko, C; Vobalaboina, V

    2013-11-01

    Open-label, 2-treatment, 3-sequence, 3-period, single-dose, partial replicate crossover studies under fasting (n=48), fed (n=60) and fasting-applesauce (n=48) (sprinkled on one tablespoonful of applesauce) modalities were conducted in healthy adult male volunteers to evaluate bioequivalence between two formulations of lansoprazole delayed release capsules 30 mg. In all three studies, as per randomization, either test or reference formulations were administered in a crossover manner with a required washout period of at least 7 days. Blood samples were collected over an adequate interval (0-24 h) to determine lansoprazole plasma concentrations using a validated LC-MS/MS analytical method. To characterize the pharmacokinetic parameters (Cmax, AUC0-t, AUC0-∞, Tmax, Kel and T1/2) of lansoprazole, non-compartmental analysis and ANOVA were applied to ln-transformed values. Bioequivalence was tested based on the within-subject variability of the reference formulation. In the fasting and fed studies (within-subject variability > 30%), bioequivalence was evaluated with scaled average bioequivalence; hence, for the pharmacokinetic parameters Cmax, AUC0-t and AUC0-∞, the 95% upper confidence bound for (μT − μR)² − θσ²WR was ≤ 0, and the point estimates (test-to-reference ratio) were within the regulatory acceptance limit of 80.00-125.00%. In the fasting-applesauce study (within-subject variability < 30%), bioequivalence was evaluated with average bioequivalence; the 90% CI of ln-transformed data for Cmax, AUC0-t and AUC0-∞ were within the regulatory acceptance limit of 80.00-125.00%. Based on these statistical inferences, it was concluded that the test formulation is bioequivalent to the reference formulation. © Georg Thieme Verlag KG Stuttgart · New York.

  8. Coastal Amplification Laws for the French Tsunami Warning Center: Numerical Modeling and Fast Estimate of Tsunami Wave Heights Along the French Riviera

    Science.gov (United States)

    Gailler, A.; Hébert, H.; Schindelé, F.; Reymond, D.

    2018-04-01

    Tsunami modeling tools in the French tsunami warning center operational context rapidly provide warning levels using a dimensionless variable at basin scale. A new forecast method based on coastal amplification laws has been tested to estimate the tsunami onshore height, with a focus on the French Riviera test site (Nice area). This fast prediction tool provides a coastal tsunami height distribution, calculated from the numerical simulation of the deep ocean tsunami amplitude and using a transfer function derived from Green's law. Owing to a lack of tsunami observations in the western Mediterranean basin, coastal amplification parameters are defined here from high-resolution nested-grid simulations. Preliminary results for the Nice test site, on the basis of nine historical and synthetic sources, show good agreement with the time-consuming high-resolution modeling: the linear approximation is generally obtained within 1 min and provides estimates within a factor of two in amplitude, although resonance effects in harbors and bays are not reproduced. In Nice harbor especially, variation in tsunami amplitude cannot really be assessed because of the magnitude range and maximum energy azimuth of the possible events to account for. However, this method is well suited for a fast first estimate of the coastal tsunami threat.
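
    A minimal sketch of the Green's-law transfer that such coastal amplification laws are built on: the simulated deep-ocean amplitude is scaled by the fourth root of the depth ratio, with an empirical site coefficient absorbing unresolved bathymetry. All numbers are stand-ins.

        def coastal_amplitude(a_deep, h_deep, h_coast, alpha=1.0):
            """Green's law estimate of the nearshore tsunami amplitude.

            a_deep  : simulated deep-ocean amplitude (m)
            h_deep  : water depth at the offshore point (m)
            h_coast : reference depth at the coastal point (m)
            alpha   : empirical site coefficient (calibrated on nested-grid runs)
            """
            return alpha * a_deep * (h_deep / h_coast) ** 0.25

        # 5 cm offshore at 2500 m depth, extrapolated to a 5 m deep coastal point
        print(coastal_amplitude(0.05, 2500.0, 5.0, alpha=1.2))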

  9. Estimating neural background input with controlled and fast perturbations: A bandwidth comparison between inhibitory opsins and neural circuits

    Directory of Open Access Journals (Sweden)

    David Eriksson

    2016-08-01

    To test the importance of a certain cell type or brain area it is common to perform a lack-of-function experiment in which the neuronal population of interest is inhibited. Here we review physiological and methodological constraints for making controlled perturbations, using the corticothalamic circuit as an example. The brain, with its many types of cells and rich interconnectivity, offers many paths through which a perturbation can spread within a short time. To understand the side effects of the perturbation one should record from those paths. We find that ephaptic effects, gap junctions, and fast chemical synapses are so fast that they can react to the perturbation during the few milliseconds it takes for an opsin to change the membrane potential. The slow chemical synapses, astrocytes, extracellular ions and vascular signals will continue to give their physiological input for around 20 milliseconds before they also react to the perturbation. Although we show that some pathways can react within milliseconds, the strength/speed reported in this review should be seen as an upper bound, since we have omitted how polysynaptic signals are attenuated. Thus the number of additional recordings that have to be made to control for the perturbation side effects is expected to be smaller than proposed here. To summarize, the reviewed literature not only suggests that it is possible to make controlled lack-of-function experiments, but also that such experiments can be used to measure the context of local neural computations.

  10. A fast and simple method to estimate relative, hyphal tensile-strength of filamentous fungi used to assess the effect of autophagy

    DEFF Research Database (Denmark)

    Quintanilla, Daniela; Chelius, Cynthia; Iambamrung, Sirasa

    2018-01-01

    Fungal hyphal strength is an important phenotype which can have a profound impact on bioprocess behavior. Until now, there has been no efficient method for its characterization: currently available methods are very time consuming, compromising their applicability in strain selection and process development. To overcome this issue, a method for fast, easy and statistically verified quantification of relative hyphal tensile strength was developed. It involves off-line fragmentation in a high shear mixer followed by quantification of fragment size using laser diffraction. Particle size distribution (PSD) is determined, with analysis time on the order of minutes. Plots of the PSD 90th percentile versus time allow estimation of the specific fragmentation rate. This novel method is demonstrated by estimating relative hyphal strength during growth in control conditions and rapamycin …

  11. Fast joint detection-estimation of evoked brain activity in event-related FMRI using a variational approach

    Science.gov (United States)

    Chaari, Lotfi; Vincent, Thomas; Forbes, Florence; Dojat, Michel; Ciuciu, Philippe

    2013-01-01

    In standard within-subject analyses of event-related fMRI data, two steps are usually performed separately: detection of brain activity and estimation of the hemodynamic response. Because these two steps are inherently linked, we adopt the so-called region-based Joint Detection-Estimation (JDE) framework that addresses this joint issue using a multivariate inference for detection and estimation. JDE is built by making use of a regional bilinear generative model of the BOLD response and constraining the parameter estimation by physiological priors using temporal and spatial information in a Markovian model. In contrast to previous works that use Markov Chain Monte Carlo (MCMC) techniques to sample the resulting intractable posterior distribution, we recast the JDE into a missing data framework and derive a Variational Expectation-Maximization (VEM) algorithm for its inference. A variational approximation is used to approximate the Markovian model in the unsupervised spatially adaptive JDE inference, which allows automatic fine-tuning of spatial regularization parameters. It provides a new algorithm that exhibits interesting properties in terms of estimation error and computational cost compared to the previously used MCMC-based approach. Experiments on artificial and real data show that VEM-JDE is robust to model mis-specification and provides computational gain while maintaining good performance in terms of activation detection and hemodynamic shape recovery. PMID:23096056

  12. Principles of models based engineering

    Energy Technology Data Exchange (ETDEWEB)

    Dolin, R.M.; Hefele, J.

    1996-11-01

    This report describes a Models Based Engineering (MBE) philosophy and implementation strategy that has been developed at Los Alamos National Laboratory's Center for Advanced Engineering Technology. A major theme in this discussion is that models based engineering is an information management technology enabling the development of information driven engineering. Unlike other information management technologies, models based engineering encompasses the breadth of engineering information, from design intent through product definition to consumer application.

  13. Evaluation of fasting plasma insulin concentration as an estimate of insulin action in nondiabetic individuals: comparison with the homeostasis model assessment of insulin resistance (HOMA-IR).

    Science.gov (United States)

    Abbasi, Fahim; Okeke, QueenDenise; Reaven, Gerald M

    2014-04-01

    Insulin-mediated glucose disposal varies severalfold in apparently healthy individuals, and approximately one-third of the most insulin resistant of these individuals are at increased risk of developing various adverse clinical syndromes. Since direct measurements of insulin sensitivity are not practical in a clinical setting, several surrogate estimates of insulin action have been proposed, including fasting plasma insulin (FPI) concentration and the homeostasis model assessment of insulin resistance (HOMA-IR), calculated by a formula employing fasting plasma glucose (FPG) and FPI concentrations. The objective of this study was to compare FPI as an estimate of insulin-mediated glucose disposal with values generated by HOMA-IR in 758 apparently healthy nondiabetic individuals. Measurements were made of FPG, FPI, triglyceride (TG), and high-density lipoprotein cholesterol (HDL-C) concentrations, and insulin-mediated glucose uptake was quantified by determining the steady-state plasma glucose (SSPG) concentration during the insulin suppression test. FPI and HOMA-IR were highly correlated (r = 0.98, P < 0.001), as was the relationship of each with SSPG concentration (r = 0.64). Furthermore, the relationship between FPI and TG (r = 0.35) and HDL-C (r = -0.40) was comparable to that between HOMA-IR and TG (r = 0.39) and HDL-C (r = -0.41). In conclusion, FPI and HOMA-IR are highly correlated in nondiabetic individuals, with each estimate accounting for ~40% of the variability (variance) in a direct measure of insulin-mediated glucose disposal. Calculation of HOMA-IR does not provide a better surrogate estimate of insulin action, or of its associated dyslipidemia, than measurement of FPI.
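
    For reference, the standard HOMA-IR formula that the record refers to (a textbook definition, not something specific to this study):

        def homa_ir(fpg_mmol_per_l, fpi_uU_per_ml):
            """Homeostasis model assessment of insulin resistance."""
            return fpg_mmol_per_l * fpi_uU_per_ml / 22.5

        def homa_ir_mgdl(fpg_mg_per_dl, fpi_uU_per_ml):
            """Same formula with glucose in mg/dL (divisor 405 instead of 22.5)."""
            return fpg_mg_per_dl * fpi_uU_per_ml / 405.0

        print(homa_ir(5.0, 10.0))   # FPG 5.0 mmol/L, FPI 10 uU/mL -> ~2.2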

  14. Topological analysis of polymeric melts: chain-length effects and fast-converging estimators for entanglement length.

    Science.gov (United States)

    Hoy, Robert S; Foteinopoulou, Katerina; Kröger, Martin

    2009-09-01

    Primitive path analyses of entanglements are performed over a wide range of chain lengths for both bead-spring and atomistic polyethylene polymer melts. Estimators for the entanglement length N_e which operate on results for a single chain length N are shown to produce systematic O(1/N) errors. The mathematical roots of these errors are identified as (a) treating chain ends as entanglements and (b) neglecting non-Gaussian corrections to chain and primitive path dimensions. The prefactors for the O(1/N) errors may be large; in general their magnitude depends both on the polymer model and the method used to obtain primitive paths. We propose, derive, and test new estimators which eliminate these systematic errors using information obtainable from the variation in entanglement characteristics with chain length. The new estimators produce accurate results for N_e from marginally entangled systems. Formulas based on direct enumeration of entanglements appear to converge faster and are simpler to apply.
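
    A minimal sketch of the extrapolation idea: if a single-N estimator carries a systematic O(1/N) error, apparent values from several chain lengths can be regressed against 1/N and the intercept taken as the asymptotic entanglement length. The data below are synthetic; the paper's own estimators are derived rather than fit.

        import numpy as np

        N = np.array([50.0, 100.0, 200.0, 400.0, 800.0])   # chain lengths
        ne_true, c = 28.0, -150.0          # synthetic asymptote and O(1/N) prefactor
        ne_apparent = ne_true + c / N      # what a single-N estimator would report

        slope, intercept = np.polyfit(1.0 / N, ne_apparent, 1)
        print("extrapolated N_e =", intercept)             # recovers ~28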

  15. Summary of estimated doses and risks resulting from routine radionuclide releases from fast breeder reactor fuel cycle facilities

    International Nuclear Information System (INIS)

    Miller, C.W.; Meyer, H.R.

    1985-01-01

    A project is underway at Oak Ridge National Laboratory to assess the human health and environmental effects associated with operation of the Liquid Metal Fast Breeder Reactor fuel cycle. In this first phase of the work, emphasis was focused on routine radionuclide releases from reactor and reprocessing facilities. For this study, sites for fifty 1-GW(e) capacity reactors and three reprocessing plants were selected to develop scenarios representative of US power requirements. For both the reactor and reprocessing facility siting schemes selected, relatively small impacts were calculated for locality-specific populations residing within 100 km. The results of these analyses are also being used in the identification of research priorities. 13 refs., 2 figs., 3 tabs

  16. Gas Cooled Fast Breeder Reactor cost estimate for a circulator test facility (modified HTGR circulator test facility)

    International Nuclear Information System (INIS)

    1979-10-01

    This is a conceptual design cost estimate for a Helium Circulator Test Facility to be located at the General Atomic Company, San Diego, California. Installation costs for the circulator, drive motors, controllers, thermal barrier, and circulator service module are included as part of the construction cost.

  17. Complexity Control of Fast Motion Estimation in H.264/MPEG-4 AVC with Rate-Distortion-Complexity optimization

    DEFF Research Database (Denmark)

    Wu, Mo; Forchhammer, Søren; Aghito, Shankar Manuel

    2007-01-01

    A complexity control algorithm for H.264 advanced video coding is proposed. The algorithm can control the complexity of integer inter motion estimation for a given target complexity. The Rate-Distortion-Complexity performance is improved by a complexity prediction model, simple analysis of the past statistics and a control scheme. The algorithm also works well under scene change conditions. Test results for coding interlaced video (720x576 PAL) are reported.

  18. Model-dependent estimate on the connection between fast radio bursts and ultra high energy cosmic rays

    International Nuclear Information System (INIS)

    Li, Xiang; Zhou, Bei; He, Hao-Ning; Fan, Yi-Zhong; Wei, Da-Ming

    2014-01-01

    The existence of fast radio bursts (FRBs), a new type of extragalactic transient, has recently been established, and quite a few models have been proposed. In this work, we discuss the possible connection between the FRB sources and ultra high energy (>10¹⁸ eV) cosmic rays. We show that in the blitzar model and the model of merging binary neutron stars, the huge energy release of each FRB central engine together with the rather high rate of FRBs implies that the accelerated EeV cosmic rays may contribute significantly to the observed ones. In other FRB models, including, for example, the merger of double white dwarfs and energetic magnetar radio flares, no significant EeV cosmic ray flux is expected. We also suggest that the mergers of double neutron stars, even if they are irrelevant to FRBs, may play a non-negligible role in producing EeV cosmic ray protons if supramassive neutron stars are formed in a sufficient fraction of mergers and the merger rate is ≳10³ yr⁻¹ Gpc⁻³. Such a possibility will be unambiguously tested in the era of gravitational wave astronomy.

  19. New Method To Estimate Total Polyphenol Excretion: Comparison of Fast Blue BB versus Folin-Ciocalteu Performance in Urine.

    Science.gov (United States)

    Hinojosa-Nogueira, Daniel; Muros, Joaquín; Rufián-Henares, José A; Pastoriza, Silvia

    2017-05-24

    Polyphenols are bioactive substances of vegetal origin with a significant impact on human health. The assessment of polyphenol intake and excretion is therefore important. The Folin-Ciocalteu (F-C) method is the reference assay to measure polyphenols in foods as well as their excretion in urine. However, many substances can interfere with the method, making it necessary to conduct a prior cleanup using solid-phase extraction (SPE) cartridges. In this paper, we demonstrate the use of the Fast Blue BB reagent (FBBB) as a new tool to measure the excretion of polyphenols in urine. Contrary to F-C, FBBB showed no interference in urine, removing the need for the time-consuming and costly SPE cleanup. In addition, it showed excellent linearity (r² = 0.9997), with a recovery of 96.4% and a precision of 1.86-2.11%. The FBBB method was validated to measure the excretion of polyphenols in spot urine samples from Spanish children, showing a good correlation between polyphenol intake and excretion.

  20. Digital photography provides a fast, reliable, and noninvasive method to estimate anthocyanin pigment concentration in reproductive and vegetative plant tissues.

    Science.gov (United States)

    Del Valle, José C; Gallardo-López, Antonio; Buide, Mª Luisa; Whittall, Justen B; Narbona, Eduardo

    2018-03-01

    Anthocyanin pigments have become a model trait for evolutionary ecology, as they often provide adaptive benefits for plants. Anthocyanins have traditionally been quantified biochemically or, more recently, using spectral reflectance. However, both methods require destructive sampling and can be labor intensive and challenging with small samples. Recent advances in digital photography and image processing make it the method of choice for measuring color in the wild. Here, we use digital images as a quick, noninvasive method to estimate relative anthocyanin concentrations in species exhibiting color variation. Using a consumer-level digital camera and a free image processing toolbox, we extracted RGB values from digital images to generate color indices. We tested petals, stems, pedicels, and calyces of six species, which contain different types of anthocyanin pigments and exhibit different pigmentation patterns. Color indices were assessed by their correlation to biochemically determined anthocyanin concentrations. For comparison, we also calculated color indices from spectral reflectance and tested the correlation with anthocyanin concentration. Indices perform differently depending on the nature of the color variation. For both digital images and spectral reflectance, the most accurate estimates of anthocyanin concentration emerge from the anthocyanin content-chroma ratio, anthocyanin content-chroma basic, and strength of green indices. Color indices derived from both digital images and spectral reflectance strongly correlate with biochemically determined anthocyanin concentration; however, the estimates from digital images performed better than spectral reflectance in terms of r² and normalized root-mean-square error. This was particularly noticeable in a species with striped petals, but in the case of striped calyces, both methods showed a comparable relationship with anthocyanin concentration. Using digital images brings new opportunities to accurately quantify the …
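
    A minimal sketch of deriving one color index from a digital image: the "strength of green" style index below (green channel relative to total reflectance, which drops where anthocyanins absorb green light) is one plausible form, with the calibration against biochemical values following as a simple regression. The file name and background masking rule are hypothetical.

        import numpy as np
        from PIL import Image

        img = np.asarray(Image.open("petal.jpg"), dtype=float)  # hypothetical photo
        r, g, b = img[..., 0], img[..., 1], img[..., 2]

        mask = img.sum(axis=-1) > 0        # drop pure-black background pixels
        green_strength = (g / (r + g + b + 1e-9))[mask].mean()
        print("mean strength-of-green index:", green_strength)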

  1. Estimation of time-variable fast flow path chemical concentrations for application in tracer-based hydrograph separation analyses

    Science.gov (United States)

    Kronholm, Scott C.; Capel, Paul D.

    2016-01-01

    Mixing models are a commonly used method for hydrograph separation, but can be hindered by the subjective choice of the end-member tracer concentrations. This work tests a new variant of mixing model that uses high-frequency measures of two tracers and streamflow to separate total streamflow into water from slowflow and fastflow sources. The ratio between the concentrations of the two tracers is used to create a time-variable estimate of the concentration of each tracer in the fastflow end-member. Multiple synthetic data sets, and data from two hydrologically diverse streams, are used to test the performance and limitations of the new model (two-tracer ratio-based mixing model: TRaMM). When applied to the synthetic streams under many different scenarios, the TRaMM produces results that were reasonable approximations of the actual values of fastflow discharge (±0.1% of maximum fastflow) and fastflow tracer concentrations (±9.5% and ±16% of maximum fastflow nitrate concentration and specific conductance, respectively). With real stream data, the TRaMM produces high-frequency estimates of slowflow and fastflow discharge that align with expectations for each stream based on their respective hydrologic settings. The use of two tracers with the TRaMM provides an innovative and objective approach for estimating high-frequency fastflow concentrations and contributions of fastflow water to the stream. This provides useful information for tracking chemical movement to streams and allows for better selection and implementation of water quality management strategies.
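
    A minimal sketch of the two-component mass balance that any such mixing model inverts at each time step; the end-member concentrations are stand-ins, and the TRaMM's distinctive contribution, the tracer-ratio-based time-variable fastflow end-member, is only represented here by treating c_fast as an input.

        import numpy as np

        def separate(q_total, c_stream, c_slow, c_fast):
            """Solve Q = Qs + Qf and C*Q = Cs*Qs + Cf*Qf for (Qs, Qf)."""
            A = np.array([[1.0, 1.0],
                          [c_slow, c_fast]])
            b = np.array([q_total, c_stream * q_total])
            return np.linalg.solve(A, b)

        # streamflow 10 m3/s with 3 mg/L nitrate; assumed end-members 4 and 1 mg/L
        qs, qf = separate(10.0, 3.0, c_slow=4.0, c_fast=1.0)
        print(qs, qf)    # -> 6.67 slowflow, 3.33 fastflow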

  2. Min-max Extrapolation Scheme for Fast Estimation of 3D Potts Field Partition Functions. Application to the Joint Detection-Estimation of Brain Activity in fMRI

    International Nuclear Information System (INIS)

    Risser, L.; Vincent, T.; Ciuciu, P.; Risser, L.; Idier, J.; Risser, L.; Forbes, F.

    2011-01-01

    In this paper, we propose a fast numerical scheme to estimate Partition Functions (PF) of symmetric Potts fields. Our strategy is first validated on 2D two-color Potts fields and then on 3D two- and three-color Potts fields. It is then applied to the joint detection-estimation of brain activity from functional Magnetic Resonance Imaging (fMRI) data, where the goal is to automatically recover activated, deactivated and inactivated brain regions and to estimate region dependent hemodynamic filters. For any brain region, a specific 3D Potts field indeed embodies the spatial correlation over the hidden states of the voxels by modeling whether they are activated, deactivated or inactive. To make spatial regularization adaptive, the PFs of the Potts fields over all brain regions are computed prior to the brain activity estimation. Our approach is first based upon a classical path-sampling method to approximate a small subset of reference PFs corresponding to pre-specified regions. Then, we propose an extrapolation method that allows us to approximate the PFs associated to the Potts fields defined over the remaining brain regions. In comparison with preexisting methods either based on a path sampling strategy or mean-field approximations, our contribution strongly alleviates the computational cost and makes spatially adaptive regularization of whole brain fMRI datasets feasible. It is also robust against grid inhomogeneities and efficient irrespective of the topological configurations of the brain regions. (authors)

  3. Fast Estimation of Strains for Cross-Beams Six-Axis Force/Torque Sensors by Mechanical Modeling

    Directory of Open Access Journals (Sweden)

    Junqing Ma

    2013-05-01

    Strain distributions are crucial criteria for cross-beam six-axis force/torque sensors. The conventional way to calculate these criteria is to use Finite Element Analysis (FEA) to obtain numerical solutions. This paper aims to obtain analytical solutions for the strains under the effect of external force/torque in each dimension. Generic mechanical models for cross-beam six-axis force/torque sensors are proposed, in which the deformable cross elastic beams and compliant beams are modeled as quasi-static Timoshenko beams. A detailed description of the model assumptions, model idealizations, application scope and model establishment is presented. The results are validated by both numerical FEA simulations and calibration experiments, and the two sets of results are found to be compatible for a wide range of geometric properties. The proposed analytical solutions are demonstrated to be an accurate estimation algorithm with higher efficiency.

  4. A method for the fast estimation of a battery entropy-variation high-resolution curve - Application on a commercial LiFePO4/graphite cell

    Science.gov (United States)

    Damay, Nicolas; Forgez, Christophe; Bichat, Marie-Pierre; Friedrich, Guy

    2016-11-01

    The entropy-variation of a battery is responsible for heat generation or consumption during operation, and its prior measurement is mandatory for developing a thermal model. It is generally done through the potentiometric method, which is considered the reference. However, this requires several days or weeks to obtain a look-up table with a 5 or 10% SoC (State of Charge) resolution. In this study, a calorimetric method based on the inversion of a thermal model is proposed for the fast estimation of a nearly continuous curve of entropy-variation. This is achieved by separating the heats produced while charging and discharging the battery. The entropy-variation is then deduced from the extracted entropic heat. The proposed method is validated by comparing results obtained at several current rates with measurements made with the potentiometric method.
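
    A minimal sketch of the separation idea: at the same SoC and current magnitude, irreversible heat keeps its sign between charge and discharge while entropic heat flips sign, so half the difference of the two measured heats isolates the reversible term and dU/dT = Q_rev / (I*T). The heat values are stand-ins for what the thermal-model inversion would extract, and sign conventions vary between authors.

        q_charge, q_discharge = 0.80, 1.30    # heats at the same SoC (W, stand-ins)
        current, temperature = 2.0, 298.15    # A, K

        q_rev = 0.5 * (q_charge - q_discharge)   # entropic heat (sign per convention)
        q_irr = 0.5 * (q_charge + q_discharge)   # irreversible heat
        dU_dT = q_rev / (current * temperature)  # entropy-variation term, V/K
        print(q_irr, dU_dT)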

  5. Nonlinear Model-Based Fault Detection for a Hydraulic Actuator

    NARCIS (Netherlands)

    Van Eykeren, L.; Chu, Q.P.

    2011-01-01

    This paper presents a model-based fault detection algorithm for a specific fault scenario of the ADDSAFE project. The fault considered is the disconnection of a control surface from its hydraulic actuator. Detecting this type of fault as fast as possible helps to operate an aircraft more cost-effectively.

  6. Age estimation of juvenile European hake Merluccius merluccius based on otolith microstructure analysis: a slow or fast growth pattern?

    Science.gov (United States)

    Pattoura, P; Lefkaditou, E; Megalofonou, P

    2015-03-01

    The main goal of this study was to examine otolith microstructure and to estimate the age and growth of European hake Merluccius merluccius from the eastern Mediterranean Sea. One hundred and twenty-nine specimens ranging from 102 to 438 mm in total length (L_T) were used. Age estimations were based on the study of otolith microstructure, revealed after grinding both frontal sides of the otoliths. The enumeration of the daily growth increments (DGI) as well as the measurement of their widths (W_DGI) were made on calibrated digital images. The number of DGI in otoliths ranged between 163 and 717. Four phases in the W_DGI evolution were distinguished: (1) a larval-juvenile pelagic phase, with an increasing trend in W_DGI up to the 60th DGI; (2) a settlement phase, with a short-term deceleration in W_DGI between the 61st and 150th DGI; (3) a juvenile demersal phase, characterized by a stabilization of W_DGI from the 151st to the 400th DGI; and (4) an adult phase, with a decreasing trend in W_DGI after the 400th DGI. Age, sex and month of formation were found to affect W_DGI in all phases, with the exception of age at the juvenile demersal phase. The power curve with intercept model best described the relationship of M. merluccius L_T with age (T_DGI), according to Akaike criteria, revealing differences in growth between females [L_T = 65.36(T_DGI)^0.40 - 388.55] and males [L_T = 69.32(T_DGI)^0.37 - 352.88] for the sizes examined. The mean daily growth rates were 0.61 mm day⁻¹ for females and 0.52 mm day⁻¹ for males, resulting in an L_T of 283 and 265 mm at the end of the first year of life. In comparison with previous studies on the Mediterranean Sea, the results of this study showed a greater growth rate, similar to results from tagging experiments and otolith microstructure analyses for M. merluccius in other geographic areas. © 2014 The Fisheries Society of the British Isles.

  7. Test methods for estimating the efficacy of the fast-acting disinfectant peracetic acid on surfaces of personal protective equipment.

    Science.gov (United States)

    Lemmer, K; Howaldt, S; Heinrich, R; Roder, A; Pauli, G; Dorner, B G; Pauly, D; Mielke, M; Schwebke, I; Grunow, R

    2017-11-01

    The work aimed at developing and evaluating practically relevant methods for testing disinfectants on contaminated personal protective equipment (PPE). Carriers were prepared from PPE fabrics and contaminated with Bacillus subtilis spores. Peracetic acid (PAA) was applied as a suitable disinfectant. In method 1, the contaminated carrier was submerged in PAA solution; in method 2, the contaminated area was covered with PAA; and in method 3, PAA, preferentially combined with a surfactant, was dispersed as a thin layer. In each method, 0.5-1% PAA reduced the viability of spores by a factor of ≥6 log10 within 3 min. The technique of the most realistic method 3 proved to be effective at low temperatures and also with a high organic load. Vaccinia virus and Adenovirus were inactivated with 0.05-0.1% PAA by up to ≥6 log10 within 1 min. The cytotoxicity of ricin was considerably reduced by 2% PAA within 15 min of exposure. A PAA/detergent mixture made it possible to cover hydrophobic PPE surfaces with a thin and yet effective disinfectant layer. The test methods are objective tools for estimating the biocidal efficacy of disinfectants on hydrophobic flexible surfaces. © 2017 The Society for Applied Microbiology.
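
    The ≥6 log10 criterion above is simply the decimal log of the viability ratio; a one-line check, with invented counts:

```python
import math

def log10_reduction(cfu_before, cfu_after):
    """Log10 reduction factor from viable spore counts before/after exposure."""
    return math.log10(cfu_before / cfu_after)

# e.g. 3e7 CFU reduced to 20 CFU after 3 min in 1% PAA (illustrative numbers)
print(f"RF = {log10_reduction(3e7, 20):.1f} log10")   # 6.2, i.e. >= 6 log10
```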

  8. A fast-reliable methodology to estimate the concentration of rutile or anatase phases of TiO2

    Directory of Open Access Journals (Sweden)

    A. R. Zanatta

    2017-07-01

    Full Text Available Titanium dioxide (TiO2) is a low-cost, chemically inert material that has become the basis of many modern applications ranging from, for example, cosmetics to photovoltaics. TiO2 exists in three different crystal phases (Rutile, Anatase and, less commonly, Brookite) and, in most cases, the presence or relative amount of these phases is essential in deciding the final application of the TiO2 and its related efficiency. Traditionally, X-ray diffraction has been chosen to study TiO2, providing both phase identification and the Rutile-to-Anatase ratio. Similar information can be obtained from Raman scattering spectroscopy, which, additionally, is versatile and involves rather simple instrumentation. Motivated by these aspects, this work considered various TiO2 Rutile+Anatase powder mixtures and their corresponding Raman spectra. Essentially, the method described here is based upon the fact that the Rutile and Anatase crystal phases have distinctive phonon features, and therefore the composition of TiO2 mixtures can be readily assessed from their Raman spectra. The experimental results clearly demonstrate the suitability of Raman spectroscopy for estimating the concentration of Rutile or Anatase in TiO2 and are expected to influence the study of TiO2-related thin films, interfaces, systems with reduced dimensions, and devices like photocatalytic and solar cells.
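
    One way to turn the "distinctive phonon features" into a concentration estimate is to unmix a measured spectrum against pure-phase reference spectra; the sketch below does this by non-negative least squares on synthetic Gaussian bands placed near the strongest rutile (~447 and 612 cm^-1) and anatase (~144 and 639 cm^-1) modes. This is an assumption-laden stand-in for the paper's calibrated procedure, and the recovered weights are spectral weights, not mass fractions, until calibrated:

```python
import numpy as np
from scipy.optimize import nnls

def phase_weights(spectrum, ref_rutile, ref_anatase):
    """Non-negative least-squares weights of the two reference spectra.

    All spectra must be sampled on the same Raman-shift axis and measured
    under comparable conditions (assumptions of this sketch).
    """
    A = np.column_stack([ref_rutile, ref_anatase])
    w, _ = nnls(A, spectrum)
    return w / w.sum()                      # normalized (rutile, anatase)

shift = np.linspace(100.0, 800.0, 700)      # Raman shift axis, cm^-1
band = lambda c, s=10.0: np.exp(-0.5 * ((shift - c) / s) ** 2)
rutile = band(447.0) + band(612.0)
anatase = 3.0 * band(144.0) + band(639.0)
mixture = 0.3 * rutile + 0.7 * anatase      # synthetic 30/70 mixture
print(phase_weights(mixture, rutile, anatase))   # ~[0.3, 0.7] by construction
```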

  9. Use of a soft sensor for the fast estimation of dried cake resistance during a freeze-drying cycle.

    Science.gov (United States)

    Bosca, Serena; Barresi, Antonello A; Fissore, Davide

    2013-07-15

    This paper deals with the determination of dried cake resistance in a freeze-drying process using the Smart Soft Sensor, a process analytical technology recently proposed by the authors to monitor the primary drying stage of a freeze-drying process. This sensor uses the measurement of product temperature, a mathematical model of the process, and the Kalman filter algorithm to estimate the residual amount of ice in the vial as a function of time, as well as the coefficient of heat transfer between the shelf and the product and the resistance of the dried cake to vapor flow. It does not require expensive additional hardware in a freeze-dryer, provided that thermocouples are available. At first, the effect of inserting the thermocouple in a vial on the structure of the product is investigated by means of experimental tests, comparing both sublimation rate and cake structure in vials with and without a thermocouple. This is required to verify that the temperature measured by the thermocouple is the same as that of the product in the non-monitored vials, at least in a non-GMP environment, or when controlled nucleation methods are used. Then, results about cake resistance obtained in an extended experimental campaign with aqueous solutions containing different excipients (sucrose, mannitol and polyvinylpyrrolidone), processed in various operating conditions, are presented, with the goal of demonstrating the accuracy of the proposed methodology. Copyright © 2013 Elsevier B.V. All rights reserved.

  10. Model-Based Reasoning in Humans Becomes Automatic with Training.

    Directory of Open Access Journals (Sweden)

    Marcos Economides

    2015-09-01

    Full Text Available Model-based and model-free reinforcement learning (RL) have been suggested as algorithmic realizations of goal-directed and habitual action strategies. Model-based RL is more flexible than model-free but requires sophisticated calculations using a learnt model of the world. This has led model-based RL to be identified with slow, deliberative processing, and model-free RL with fast, automatic processing. In support of this distinction, it has recently been shown that model-based reasoning is impaired by placing subjects under cognitive load, a hallmark of non-automaticity. Here, using the same task, we show that cognitive load does not impair model-based reasoning if subjects receive prior training on the task. This finding is replicated across two studies and a variety of analysis methods. Thus, task familiarity permits use of model-based reasoning in parallel with other cognitive demands. The ability to deploy model-based reasoning in an automatic, parallelizable fashion has widespread theoretical implications, particularly for the learning and execution of complex behaviors. It also suggests a range of important failure modes in psychiatric disorders.

  11. A Comparative Study Based on the Least Square Parameter Identification Method for State of Charge Estimation of a LiFePO4 Battery Pack Using Three Model-Based Algorithms for Electric Vehicles

    Directory of Open Access Journals (Sweden)

    Taimoor Zahid

    2016-09-01

    Full Text Available Battery energy storage management for electric vehicles (EV) and hybrid EVs is the most critical and enabling technology since the dawn of electric vehicle commercialization. A battery system is a complex electrochemical phenomenon whose performance degrades with age and varying material design. Moreover, it is very tedious and computationally complex to monitor and control the internal states of a battery's electrochemical system. For the Thevenin battery model, we established a state-space model, which has the advantage of simplicity and can be easily implemented, and then applied the least square method to identify the battery model parameters. However, accurate state of charge (SoC) estimation of a battery, which depends not only on the battery model but also on highly accurate and efficient algorithms, is considered one of the most vital and critical issues for the energy management and power distribution control of EVs. In this paper three different estimation methods, i.e., the extended Kalman filter (EKF), particle filter (PF) and unscented Kalman filter (UKF), are presented to estimate the SoC of LiFePO4 batteries for an electric vehicle. The battery's experimental current and voltage data are analyzed to identify the Thevenin equivalent model parameters. Using different open circuit voltages, the SoC is estimated and compared with respect to estimation accuracy and initialization error recovery. The experimental results showed that these online SoC estimation methods, in combination with different open circuit voltage-state of charge (OCV-SoC) curves, can effectively limit the error, thus guaranteeing accuracy and robustness.
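
    As a concrete illustration of the model-based pipeline (state-space Thevenin model plus a filter), here is a compact EKF sketch for a one-RC Thevenin cell; the parameter values and OCV curve are invented placeholders for what least-squares identification would supply:

```python
import numpy as np

# Illustrative 1-RC Thevenin parameters (in practice identified by least
# squares from measured current/voltage data, as in the paper)
R0, R1, C1 = 0.010, 0.015, 2400.0      # ohm, ohm, farad
Q = 2.3 * 3600.0                       # capacity, coulomb
dt = 1.0
a = np.exp(-dt / (R1 * C1))

ocv = lambda s: 3.0 + 1.2 * s - 0.4 * s * s    # hypothetical OCV-SoC curve
docv = lambda s: 1.2 - 0.8 * s                 # its derivative

def ekf_soc(current, voltage, soc0=0.5):
    """EKF over x = [SoC, V_rc]; discharge current taken positive."""
    x = np.array([soc0, 0.0])
    P = np.diag([0.1, 1e-3])
    Qn, Rn = np.diag([1e-7, 1e-6]), 1e-3       # process / measurement noise
    F = np.array([[1.0, 0.0], [0.0, a]])
    estimates = []
    for i, v in zip(current, voltage):
        # predict: coulomb counting plus RC relaxation
        x = np.array([x[0] - i * dt / Q, a * x[1] + R1 * (1.0 - a) * i])
        P = F @ P @ F.T + Qn
        # update with the terminal-voltage measurement
        H = np.array([docv(x[0]), -1.0])
        innovation = v - (ocv(x[0]) - x[1] - R0 * i)
        S = H @ P @ H + Rn
        K = P @ H / S
        x = x + K * innovation
        P = P - np.outer(K, H @ P)
        estimates.append(x[0])
    return np.array(estimates)
```

The PF and UKF variants compared in the paper differ only in how they propagate the state distribution through this same model.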

  12. Expediting model-based optoacoustic reconstructions with tomographic symmetries

    International Nuclear Information System (INIS)

    Lutzweiler, Christian; Deán-Ben, Xosé Luís; Razansky, Daniel

    2014-01-01

    Purpose: Image quantification in optoacoustic tomography implies the use of accurate forward models of excitation, propagation, and detection of optoacoustic signals, while inversions with high spatial resolution usually involve very large matrices, leading to unreasonably long computation times. The development of fast and memory-efficient model-based approaches therefore represents an important challenge for advancing the quantitative and dynamic imaging capabilities of tomographic optoacoustic imaging. Methods: Herein, a method for simplification and acceleration of model-based inversions, relying on inherent symmetries present in common tomographic acquisition geometries, has been introduced. The method is showcased for the case of cylindrical symmetries by using polar image discretization of the time-domain optoacoustic forward model combined with efficient storage and inversion strategies. Results: The suggested methodology is shown to render fast and accurate model-based inversions in both numerical simulations and post mortem small animal experiments. In the case of a full-view detection scheme, the memory requirements are reduced by one order of magnitude while high-resolution reconstructions are achieved at video rate. Conclusions: By considering the rotational symmetry present in many tomographic optoacoustic imaging systems, the proposed methodology allows exploiting the advantages of model-based algorithms with feasible computational requirements and fast reconstruction times, so that its convenience and general applicability in optoacoustic imaging systems with tomographic symmetries is anticipated.
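
    A sketch of how such a cylindrical symmetry collapses storage, assuming a full-view circular array and a polar image grid: the model matrix of any detector equals that of detector 0 applied to an angularly rolled image, so only one matrix needs to be stored (the sizes and random stand-in matrix below are illustrative):

```python
import numpy as np

n_r, n_t, n_det, n_s = 64, 128, 128, 200        # radii, angles, detectors, samples
rng = np.random.default_rng(0)
A0 = rng.standard_normal((n_s, n_r * n_t))      # stand-in for detector 0's model

def forward_all(x_polar):
    """Signals of all detectors from the single stored matrix A0.

    Rotating the polar image by the detector spacing reproduces the
    geometry seen by each detector in turn.
    """
    shift = n_t // n_det                        # angular pixels per detector step
    return np.stack([A0 @ np.roll(x_polar, -k * shift, axis=1).ravel()
                     for k in range(n_det)])

x = rng.standard_normal((n_r, n_t))             # toy optoacoustic image
y = forward_all(x)                              # (n_det, n_s); memory ~1/n_det of full A
```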

  13. Model-based Software Engineering

    DEFF Research Database (Denmark)

    Kindler, Ekkart

    2010-01-01

    The vision of model-based software engineering is to make models the main focus of software development and to automatically generate software from these models. Part of that idea already works today. But there are still difficulties when it comes to behaviour. Actually, there is no lack in models...

  14. Graph Model Based Indoor Tracking

    DEFF Research Database (Denmark)

    Jensen, Christian Søndergaard; Lu, Hua; Yang, Bin

    2009-01-01

    The tracking of the locations of moving objects in large indoor spaces is important, as it enables a range of applications related to, e.g., security and indoor navigation and guidance. This paper presents a graph model based approach to indoor tracking that offers a uniform data management...

  15. Model-based security testing

    OpenAIRE

    Schieferdecker, Ina; Großmann, Jürgen; Schneider, Martin

    2012-01-01

    Security testing aims at validating software system requirements related to security properties like confidentiality, integrity, authentication, authorization, availability, and non-repudiation. Although security testing techniques have been available for many years, there have been few approaches that allow for the specification of test cases at a higher level of abstraction, for enabling guidance on test identification and specification, as well as for automated test generation. Model-based security...

  16. Model-based machine learning.

    Science.gov (United States)

    Bishop, Christopher M

    2013-02-13

    Several decades of research in the field of machine learning have resulted in a multitude of different algorithms for solving a broad range of problems. To tackle a new application, a researcher typically tries to map their problem onto one of these existing methods, often influenced by their familiarity with specific algorithms and by the availability of corresponding software implementations. In this study, we describe an alternative methodology for applying machine learning, in which a bespoke solution is formulated for each new application. The solution is expressed through a compact modelling language, and the corresponding custom machine learning code is then generated automatically. This model-based approach offers several major advantages, including the opportunity to create highly tailored models for specific scenarios, as well as rapid prototyping and comparison of a range of alternative models. Furthermore, newcomers to the field of machine learning do not have to learn about the huge range of traditional methods, but instead can focus their attention on understanding a single modelling environment. In this study, we show how probabilistic graphical models, coupled with efficient inference algorithms, provide a very flexible foundation for model-based machine learning, and we outline a large-scale commercial application of this framework involving tens of millions of users. We also describe the concept of probabilistic programming as a powerful software environment for model-based machine learning, and we discuss a specific probabilistic programming language called Infer.NET, which has been widely used in practical applications.

  17. Model-based design languages: A case study

    OpenAIRE

    Cibrario Bertolotti, Ivan; Hu, Tingting; Navet, Nicolas

    2017-01-01

    Fast-paced innovation in the embedded systems domain puts ever-increasing pressure on effective software development methods, leading to the growing popularity of Model-Based Design (MBD). In this context, a proper choice of modeling languages and related tools - depending on design goals and problem qualities - is crucial to make the most of MBD benefits. In this paper, a comparison between two dissimilar approaches to modeling is carried out, with the goal of highlighting their relative ...

  18. Model-Based Security Testing

    Directory of Open Access Journals (Sweden)

    Ina Schieferdecker

    2012-02-01

    Full Text Available Security testing aims at validating software system requirements related to security properties like confidentiality, integrity, authentication, authorization, availability, and non-repudiation. Although security testing techniques have been available for many years, there have been few approaches that allow for the specification of test cases at a higher level of abstraction, for enabling guidance on test identification and specification, as well as for automated test generation. Model-based security testing (MBST) is a relatively new field, especially dedicated to the systematic and efficient specification and documentation of security test objectives, security test cases and test suites, as well as to their automated or semi-automated generation. In particular, the combination of security modelling and test generation approaches is still a challenge in research and of high interest for industrial applications. MBST includes, e.g., security functional testing, model-based fuzzing, risk- and threat-oriented testing, and the usage of security test patterns. This paper provides a survey on MBST techniques and the related models as well as samples of new methods and tools that are under development in the European ITEA2 project DIAMONDS.

  19. Federal Air Pollutant Emission Regulations and Preliminary Estimates of Potential-to-Emit from Biorefineries, Pathway #2: Conversion of Lignocellulosic Biomass to Hydrocarbon Fuels: Fast Pyrolysis and Hydrotreating Bio-oil Pathway

    Energy Technology Data Exchange (ETDEWEB)

    Bhatt, Arpit [National Renewable Energy Lab. (NREL), Golden, CO (United States). Strategic Energy Analysis Center. Technology Systems and Sustainability Analysis Group; Zhang, Yimin [National Renewable Energy Lab. (NREL), Golden, CO (United States). Strategic Energy Analysis Center. Technology Systems and Sustainability Analysis Group; Heath, Garvin [National Renewable Energy Lab. (NREL), Golden, CO (United States). Strategic Energy Analysis Center. Technology Systems and Sustainability Analysis Group; Thomas, Mae [Eastern Research Group, Research Triangle Park, NC (United States); Renzaglia, Jason [Eastern Research Group, Research Triangle Park, NC (United States)

    2017-01-01

    Biorefineries are subject to environmental laws, including complex air quality regulations that aim to protect and improve the quality of the air. These regulations govern the amount of certain types of air pollutants that can be emitted from different types of emission sources. To determine which federal air emission regulations potentially apply to the fast pyrolysis biorefinery, we first identify the types of regulated air pollutants emitted to the ambient environment by the biorefinery or by specific equipment. Once the regulated air pollutants are identified, we review the applicability criteria of each federal air regulation to determine whether the fast pyrolysis biorefinery or specific equipment is subject to it. We then estimate the potential-to-emit of pollutants likely to be emitted from the fast pyrolysis biorefinery to understand the air permitting requirements.

  20. Promotion and Fast Food Demand

    OpenAIRE

    Timothy J. Richards; Luis Padilla

    2009-01-01

    Many believe that fast food promotion is a significant cause of the obesity epidemic in North America. Industry members argue that promotion only reallocates brand shares and does not increase overall demand. We study the effect of fast food promotion on market share and total demand by estimating a discrete / continuous model of fast food restaurant choice and food expenditure that explicitly accounts for both spatial and temporal determinants of demand. Estimates are obtained using a unique...

  1. Numerical Model based Reliability Estimation of Selective Laser Melting Process

    DEFF Research Database (Denmark)

    Mohanty, Sankhya; Hattel, Jesper Henri

    2014-01-01

    Selective laser melting is developing into a standard manufacturing technology with applications in various sectors. However, the process is still far from being at par with conventional processes such as welding and casting, the primary reason of which is the unreliability of the process. While...... of the selective laser melting process. A validated 3D finite-volume alternating-direction-implicit numerical technique is used to model the selective laser melting process, and is calibrated against results from single track formation experiments. Correlation coefficients are determined for process input...... parameters such as laser power, speed, beam profile, etc. Subsequently, uncertainties in the processing parameters are utilized to predict a range for the various outputs, using a Monte Carlo method based uncertainty analysis methodology, and the reliability of the process is established....
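
    A hedged Monte Carlo sketch of the uncertainty-propagation step, with a toy closed-form surrogate standing in for the calibrated finite-volume model, and with invented parameter distributions and acceptance band:

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples = 10_000

# Toy surrogate: melt-pool depth (um) vs. laser power P (W) and speed v (mm/s)
depth = lambda P, v: 4.2 * P / np.sqrt(v)

# Uncertain processing parameters (illustrative normal distributions)
P = rng.normal(200.0, 5.0, n_samples)
v = rng.normal(800.0, 40.0, n_samples)

d = depth(P, v)
lo, hi = np.percentile(d, [2.5, 97.5])
reliability = np.mean((d > 25.0) & (d < 40.0))   # hypothetical spec window
print(f"depth 95% interval [{lo:.1f}, {hi:.1f}] um; reliability ~ {reliability:.3f}")
```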

  2. Naive Probability: Model-based Estimates of Unique Events

    Science.gov (United States)

    2014-05-04

    [Abstract fragment] Sample elicitation items on the probabilities of unique events, e.g.: What is the probability that space tourism will achieve widespread popularity in the next 50 years? That governments dedicate more resources to contacting extra-terrestrials? (Cited: Khemlani, S., & Johnson-Laird, P.N. (2012). Theories of the syllogism: A meta-analysis.)

  3. Issues in practical model-based diagnosis

    NARCIS (Netherlands)

    Bakker, R.R.; van den Bempt, P.C.A.; Mars, Nicolaas; Out, D.J.; van Soest, D.C.

    1993-01-01

    The model-based diagnosis project at the University of Twente has been directed at improving the practical usefulness of model-based diagnosis. In cooperation with industrial partners, the research addressed the modeling problem and the efficiency problem in model-based reasoning. Main results of

  4. Model-based sensor diagnosis

    International Nuclear Information System (INIS)

    Milgram, J.; Dormoy, J.L.

    1994-09-01

    Running a nuclear power plant involves monitoring data provided by the installation's sensors. Operators and computerized systems then use these data to establish a diagnostic of the plant. However, the instrumentation system is complex, and is not immune to faults and failures. This paper presents a system for detecting sensor failures using a topological description of the installation and a set of component models. This model of the plant implicitly contains relations between sensor data. These relations must always hold if all the components are functioning correctly, so the failure detection task consists of checking these constraints. The constraints are extracted in two stages. Firstly, a qualitative model of their existence is built using structural analysis. Secondly, the models are formally handled according to the results of the structural analysis, in order to establish the constraints on the sensor data. This work constitutes an initial step in extending model-based diagnosis to cases where the information on which it is based is itself suspect. This work will be followed by surveillance of the detection system. When the instrumentation is assumed to be sound, the unverified constraints indicate errors in the plant model. (authors). 8 refs., 4 figs

  5. Model-Based Method for Sensor Validation

    Science.gov (United States)

    Vatan, Farrokh

    2012-01-01

    Fault detection, diagnosis, and prognosis are essential tasks in the operation of autonomous spacecraft, instruments, and in situ platforms. One of NASA's key mission requirements is robust state estimation. Sensing, using a wide range of sensors and sensor fusion approaches, plays a central role in robust state estimation, and there is a need to diagnose sensor failure as well as component failure. Sensor validation can be considered to be part of the larger effort of improving reliability and safety. The standard methods for solving the sensor validation problem are based on probabilistic analysis of the system, of which the method based on Bayesian networks is the most popular. Therefore, these methods can only predict the most probable faulty sensors, which are subject to the initial probabilities defined for the failures. The method developed in this work is based on a model-based approach and provides the faulty sensors (if any), which can be logically inferred from the model of the system and the sensor readings (observations). The method is also more suitable for systems where it is hard, or even impossible, to find the probability functions of the system. The method starts with a new mathematical description of the problem and develops a very efficient and systematic algorithm for its solution. The method builds on the concepts of analytical redundant relations (ARRs).
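
    In its simplest form, the ARR idea reduces to checking which residuals fire and intersecting their sensor supports; a toy sketch for a flow junction with one redundant inlet sensor (all names and the tolerance are invented):

```python
# Each ARR must evaluate to ~0 when the system and its sensors are healthy.
arrs = [
    lambda m: m["f1"] - m["f2"] - m["f3"],   # mass balance at the junction
    lambda m: m["f1"] - m["f1b"],            # hardware redundancy on the inlet
]
supports = [{"f1", "f2", "f3"}, {"f1", "f1b"}]   # sensors entering each ARR

def isolate(measurements, tol=0.05):
    fired = [abs(r(measurements)) > tol for r in arrs]
    if not any(fired):
        return set()
    suspects = set.intersection(*(supports[i] for i, f in enumerate(fired) if f))
    for i, f in enumerate(fired):
        if not f:                 # sensors in a quiescent ARR are exonerated
            suspects -= supports[i]
    return suspects

m = {"f1": 1.00, "f1b": 1.01, "f2": 0.60, "f3": 0.15}   # f3 reads low
print(isolate(m))     # {'f2', 'f3'}: f1 is exonerated by its redundant sensor
```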

  6. Minimal sampling protocol for accurate estimation of urea production: a study with oral [13C]urea in fed and fasted piglets

    NARCIS (Netherlands)

    Oosterveld, Michiel J. S.; Gemke, Reinoud J. B. J.; Dainty, Jack R.; Kulik, Willem; Jakobs, Cornelis; de Meer, Kees

    2005-01-01

    An oral [13C]urea protocol may provide a simple method for measurement of urea production. The validity of single pool calculations in relation to a reduced sampling protocol was assessed. In eight fed and five fasted piglets, plasma urea enrichments from a 10 h sampling protocol were measured

  7. Model-Based Reconstructive Elasticity Imaging Using Ultrasound

    Directory of Open Access Journals (Sweden)

    Salavat R. Aglyamov

    2007-01-01

    Full Text Available Elasticity imaging is a reconstructive imaging technique where tissue motion in response to mechanical excitation is measured using modern imaging systems, and the estimated displacements are then used to reconstruct the spatial distribution of Young's modulus. Here we present an ultrasound elasticity imaging method that utilizes the model-based technique for Young's modulus reconstruction. Based on the geometry of the imaged object, only one axial component of the strain tensor is used. The numerical implementation of the method is highly efficient because the reconstruction is based on an analytic solution of the forward elastic problem. The model-based approach is illustrated using two potential clinical applications: differentiation of liver hemangioma and staging of deep venous thrombosis. Overall, these studies demonstrate that model-based reconstructive elasticity imaging can be used in applications where the geometry of the object and the surrounding tissue is somewhat known and certain assumptions about the pathology can be made.

  8. Model-based engineering for medical-device software.

    Science.gov (United States)

    Ray, Arnab; Jetley, Raoul; Jones, Paul L; Zhang, Yi

    2010-01-01

    This paper demonstrates the benefits of adopting model-based design techniques for engineering medical device software. By using a patient-controlled analgesic (PCA) infusion pump as a candidate medical device, the authors show how using models to capture design information allows for i) fast and efficient construction of executable device prototypes, ii) creation of a standard, reusable baseline software architecture for a particular device family, iii) formal verification of the design against safety requirements, and iv) creation of a safety framework that reduces verification costs for future versions of the device software.

  9. Fast and efficient indexing approach for object recognition

    Science.gov (United States)

    Hefnawy, Alaa; Mashali, Samia A.; Rashwan, Mohsen; Fikri, Magdi

    1999-08-01

    This paper introduces a fast and efficient indexing approach for both 2D and 3D model-based object recognition in the presence of rotation, translation, and scale variations of objects. The indexing entries are computed after preprocessing the data by Haar wavelet decomposition. The scheme builds on a unified image feature detection approach based on Zernike moments. A set of low-level features, e.g. high-precision edges and gray-level corners, is estimated by a set of orthogonal Zernike moments, calculated locally around every image point. High-dimensional, highly descriptive indexing entries are then calculated based on the correlation of these local features and employed for fast access to the model database to generate hypotheses. A list of the most likely candidate models is then presented by evaluating the hypotheses. Experimental results are included to demonstrate the effectiveness of the proposed indexing approach.

  10. Diminishing musyarakah investment model based on equity

    Science.gov (United States)

    Jaffar, Maheran Mohd; Zain, Shaharir Mohamad; Jemain, Abdul Aziz

    2017-11-01

    Most of the mudharabah and musyarakah contract funds are involved in debt financing. This does not support the theory that a profit-sharing contract is better than debt financing, owing to the sharing of risks and ownership of equity. Indeed, it is believed that Islamic banking is a financial model based on equity, or musyarakah, which emphasizes the sharing of risks, profit and loss in the investment between the investor and entrepreneur. The focus of this paper is to introduce a mathematical model that internalizes diminishing musyarakah, the sharing of profit and equity between entrepreneur and investor. The entrepreneur makes monthly deferred payments to buy out the equity that belongs to the investor (bank), so that at the end of the specified period the entrepreneur owns the business and the investor (bank) exits the joint venture. The model is able to calculate the amount of equity at any time for both parties and hence serves as a guide for estimating the value of the investment should the entrepreneur or investor exit before the end of the specified period. The model is closer to the Islamic principles of justice and fairness.
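
    A minimal schedule computation in the spirit of the model, assuming a flat monthly rent rate on the investor's outstanding equity and a fixed total monthly payment (the names, rates, and amounts are illustrative, not the paper's formulation):

```python
def diminishing_musyarakah(price, investor_share, monthly_payment, rent_rate, months):
    """Monthly equity schedule: rent accrues on the investor's remaining
    share; the rest of the payment buys out part of that share, so the
    entrepreneur's equity grows until the investor exits."""
    investor_equity = price * investor_share
    rows = []
    for m in range(1, months + 1):
        rent = investor_equity * rent_rate
        buyout = monthly_payment - rent
        investor_equity = max(investor_equity - buyout, 0.0)
        rows.append((m, rent, buyout, investor_equity))
        if investor_equity == 0.0:
            break
    return rows

for m, rent, buyout, eq in diminishing_musyarakah(100_000, 0.9, 1_200, 0.005, 240)[:3]:
    print(f"month {m}: rent {rent:.2f}, buyout {buyout:.2f}, investor equity {eq:.2f}")
```

The same schedule gives the buy-back value if either party exits before the end of the specified period, which is the estimation use the abstract mentions.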

  11. Model based energy benchmarking for glass furnace

    International Nuclear Information System (INIS)

    Sardeshpande, Vishal; Gaitonde, U.N.; Banerjee, Rangan

    2007-01-01

    Energy benchmarking of processes is important for setting energy efficiency targets and planning energy management strategies. Most approaches used for energy benchmarking are based on statistical methods, comparing with a sample of existing plants. This paper presents a model-based approach for benchmarking of energy-intensive industrial processes and illustrates this approach for industrial glass furnaces. A simulation model for a glass furnace is developed using mass and energy balances, heat loss equations for the different zones, and empirical equations based on operating practices. The model is checked with field data from end-fired industrial glass furnaces in India. The simulation model enables calculation of the energy performance of a given furnace design. The model results show the potential for improvement and the impact of different operating and design preferences on specific energy consumption. A case study for a 100 TPD end-fired furnace is presented. An achievable minimum energy consumption of about 3830 kJ/kg is estimated for this furnace. The useful heat carried by the glass is about 53% of the heat supplied by the fuel. Actual furnaces operating at these production scales have a potential for reduction in energy consumption of about 20-25%

  12. Estimating the impact of various menu labeling formats on parents' demand for fast-food kids' meals for their children: An experimental auction.

    Science.gov (United States)

    Hobin, Erin; Lillico, Heather; Zuo, Fei; Sacco, Jocelyn; Rosella, Laura; Hammond, David

    2016-10-01

    This study experimentally tested whether parents' demand for fast-food kids' meals for their children is influenced by various menu labeling formats disclosing calorie and sodium information. The study also examined the effect of various menu labeling formats on parents' ability to identify fast-food kids' meals with higher calorie and sodium content. Online surveys were conducted among parents of children aged 3-12. Parents were randomized to view 1 of 5 menu conditions: 1) No Nutrition Information; 2) Calories-Only; 3) Calories + Contextual Statement (CS); 4) Calories, Sodium, + CS; and, 5) Calorie and Sodium in Traffic Lights + CS. Using an established experimental auction study design, parents viewed replicated McDonald's menus according to their assigned condition and were asked to bid on 4 Happy Meals. A randomly selected price was chosen; bids equal to or above this price "won" the auction, and bids less than this price "lost" the auction. After the auction, participants were asked to identify the Happy Meal with the highest calories and sodium content. Adjusting for multiple comparisons and covariates, the Calories, Sodium, + CS menu had a mean attributed value across all 4 Happy Meals which was 8% lower (-$0.31) than the Calories + CS menu (p < 0.05). Significantly more parents in the 4 menu conditions providing calories were able to correctly identify the Happy Meal with the highest calories (p < 0.0001) and significantly more parents in the 2 conditions providing sodium information were able to correctly identify the Happy Meal with the highest sodium content (p < 0.0001). Menus disclosing both calories and sodium information may reduce demand for fast-food kids' meals and better support parents in making more informed and healthier food choices for their children. Copyright © 2016. Published by Elsevier Ltd.

  13. Development of a robust model-based reactivity control system

    International Nuclear Information System (INIS)

    Rovere, L.A.; Otaduy, P.J.; Brittain, C.R.

    1990-01-01

    This paper describes the development and implementation of a digital model-based reactivity control system that incorporates knowledge of the plant physics into the control algorithm to improve system performance. This controller is composed of a model-based module and a modified proportional-integral-derivative (PID) module. The model-based module has an estimation component to synthesize unmeasurable process variables that are necessary for the control action computation. These estimated variables, besides being used within the control algorithm, will be used for diagnostic purposes by a supervisory control system under development. The PID module compensates for inaccuracies in model coefficients by supplementing the model-based output with a correction term that eliminates any demand tracking or steady-state errors. This control algorithm has been applied to develop controllers for a simulation of liquid metal reactors in a multimodular plant. It has shown its capability to track demands in neutron power much more accurately than conventional controllers, reducing overshoots to an almost negligible value while providing a good degree of robustness to unmodeled dynamics. 10 refs., 4 figs
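
    The controller structure described above can be sketched as a feedforward term from an inverse plant model plus a PID correction; everything below (gains, the inverse-model stub) is an illustrative assumption, not the paper's design:

```python
class ModelBasedPlusPID:
    """u = u_model + u_pid: a physics-based feedforward term computed from
    the demand (and, in the paper, from estimated unmeasurable variables),
    plus a PID term that removes the tracking and steady-state offsets
    left by model-coefficient inaccuracies."""

    def __init__(self, kp, ki, kd, dt, inverse_model):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.inverse_model = inverse_model     # demand -> nominal actuation
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, demand, measured):
        err = demand - measured
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        u_model = self.inverse_model(demand)
        u_pid = self.kp * err + self.ki * self.integral + self.kd * deriv
        return u_model + u_pid

# For a first-order power channel with unit steady-state gain, the inverse
# model of the steady state is simply the identity:
controller = ModelBasedPlusPID(0.5, 0.1, 0.0, dt=0.1, inverse_model=lambda p: p)
```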

  14. Common variation in LMNA increases susceptibility to type 2 diabetes and associates with elevated fasting glycemia and estimates of body fat and height in the general population

    DEFF Research Database (Denmark)

    Wegner, Lise; Andersen, Gitte; Sparsø, Thomas

    2007-01-01

    ... The minor T-allele of rs4641 was nominally associated with type 2 diabetes (odds ratio 1.14 [95% CI 1.03-1.26], P = 0.01) in a study of 1,324 type 2 diabetic patients and 4,386 glucose-tolerant subjects and with elevated fasting plasma glucose levels in a population-based study of 5,395 middle......-aged individuals (P = 0.008). The minor T-allele of rs955383 showed nominal association with obesity in a study of 5,693 treatment-naïve subjects (1.25 [1.07-1.64], P = 0.01), and after dichotomization of waist circumference, the minor alleles of rs955383 and rs11578696 were nominally associated with increased...... waist circumference (1.14 [1.04-1.23], P = 0.003; 1.12 [1.00-1.25], P = 0.04). The minor G-allele of rs577492 was associated with elevated fasting serum cholesterol and short stature (P = 3.0 × 10^-5 and P = 7.0 × 10^-4). The findings are not corrected for multiple comparisons and are by nature...

  15. Development of Mathematical Model and Analysis Code for Estimating Drop Behavior of the Control Rod Assembly in the Sodium Cooled Fast Reactor

    International Nuclear Information System (INIS)

    Oh, Se-Hong; Kang, SeungHoon; Choi, Choengryul; Yoon, Kyung Ho; Cheon, Jin Sik

    2016-01-01

    On receiving the scram signal, the control rod assemblies are released to fall into the reactor core by their own weight. Thus the drop time and falling velocity of the control rod assembly must be estimated for the safety evaluation. There are three typical ways to estimate the drop behavior of the control rod assembly in scram action: experimental, numerical and theoretical methods. However, experimental and numerical (CFD) methods require a lot of cost and time, which makes them difficult to apply in the initial design process. In this study, a mathematical model and a theoretical analysis code have been developed in order to estimate the drop behavior of the control rod assembly and to provide the underlying data for design optimization. A simplified control rod assembly model is considered to minimize the uncertainty in the development process, and the hydraulic circuit analysis technique is adopted to evaluate the internal/external flow distribution of the control rod assembly. Finally, the theoretical analysis code (named HEXCON) has been developed based on the mathematical model. To verify the reliability of the developed code, a CFD analysis was conducted, a calculation using the developed analysis code was carried out under the same conditions, and both results were compared
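
    A toy version of the drop-dynamics calculation, reducing the paper's hydraulic-circuit model to buoyancy plus a single lumped quadratic drag term (all parameter values are invented):

```python
import numpy as np
from scipy.integrate import solve_ivp

m, g = 80.0, 9.81            # assembly mass (kg), gravity (m/s^2)
rho = 850.0                  # sodium density, kg/m^3
v_disp = 0.010               # displaced volume, m^3 (buoyancy)
k_drag = 0.5 * rho * 1.2 * 0.004   # 0.5 * rho * Cd * A, lumped hydraulic drag
stroke = 1.0                 # drop length, m

def rhs(t, y):               # y = [position, velocity]
    drag = k_drag * y[1] * abs(y[1])
    return [y[1], (m * g - rho * v_disp * g - drag) / m]

reached = lambda t, y: y[0] - stroke   # stop at the end of the stroke
reached.terminal = True

sol = solve_ivp(rhs, (0.0, 5.0), [0.0, 0.0], events=reached, max_step=1e-3)
print(f"drop time ~ {sol.t[-1]:.3f} s, impact velocity ~ {sol.y[1, -1]:.2f} m/s")
```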

  16. Model-based Control of a Bottom Fired Marine Boiler

    DEFF Research Database (Denmark)

    Solberg, Brian; Karstensen, Claus M. S.; Andersen, Palle

    2005-01-01

    This paper focuses on applying model based MIMO control to minimize variations in water level for a specific boiler type. A first principles model is put up. The model is linearized and an LQG controller is designed. Furthermore the benefit of using a steam flow measurement is compared to a strategy...... relying on estimates of the disturbance. Preliminary tests at the boiler system show that the designed controller is able to control the boiler process. Furthermore it can be concluded that relying on estimates of the steam flow in the control strategy does not decrease the controller performance...

  17. Model-based Control of a Bottom Fired Marine Boiler

    DEFF Research Database (Denmark)

    Solberg, Brian; Karstensen, Claus M. S.; Andersen, Palle

    This paper focuses on applying model based MIMO control to minimize variations in water level for a specific boiler type. A first principles model is put up. The model is linearized and an LQG controller is designed. Furthermore the benefit of using a steam flow measurement is compared to a strategy...... relying on estimates of the disturbance. Preliminary tests at the boiler system show that the designed controller is able to control the boiler process. Furthermore it can be concluded that relying on estimates of the steam flow in the control strategy does not decrease the controller performance...

  18. Estimation of cross sections of hypotetical 8n0, 10He2, 13Li3 nuclei production in the framework of fast fragmentation model

    International Nuclear Information System (INIS)

    Lozhkin, O.V.; Oplavin, V.S.; Yakovlev, Yu.P.

    1983-01-01

    The possibilities of searching for the 8n0, 10He2 and 13Li3 nuclides among the products of nuclear fragmentation under the action of high energy particles are analysed. The following conclusions are drawn: the available experimental data on the upper boundary of the production cross section of 8n0 fragments exclude the existence of this nuclide in the form of a ''usual'' nuclear system; the available experimental estimates of the production cross sections of 10He and 13Li among fragmentation products are, for the present, insufficient to settle the question of the existence of the 13Li nucleus in a bound state, but testify to the nuclear instability of the 10He nucleus; serious model estimates of the wave functions and nuclide binding energies are necessary

  19. Erratum: Hansen, Lund, Sangill, and Jespersen. Experimentally and Computationally Fast Method for Estimation of a Mean Kurtosis. Magnetic Resonance in Medicine 69:1754–1760 (2013)

    DEFF Research Database (Denmark)

    Hansen, Brian; Lund, Torben Ellegaard; Sangill, Ryan

    2014-01-01

    PURPOSE: Results from several recent studies suggest the magnetic resonance diffusion-derived metric mean kurtosis (MK) to be a sensitive marker for tissue pathology; however, lengthy acquisition and postprocessing time hamper further exploration. The purpose of this study is to introduce...... and evaluate a new MK metric and a rapid protocol for its estimation. METHODS: The protocol requires acquisition of 13 standard diffusion-weighted images, followed by linear combination of log diffusion signals, thus avoiding nonlinear optimization. The method was evaluated on an ex vivo rat brain...... for full human brain coverage, with a postprocessing time of a few seconds. Scan-rescan reproducibility was comparable with MK. CONCLUSION: The framework offers a robust and rapid method for estimating MK, with a protocol easily adapted on commercial scanners, as it requires only minimal modification...

  20. A Fast Multimodal Ectopic Beat Detection Method Applied for Blood Pressure Estimation Based on Pulse Wave Velocity Measurements in Wearable Sensors.

    Science.gov (United States)

    Pflugradt, Maik; Geissdoerfer, Kai; Goernig, Matthias; Orglmeister, Reinhold

    2017-01-14

    Automatic detection of ectopic beats has become a thoroughly researched topic, with the literature providing manifold proposals typically incorporating morphological analysis of the electrocardiogram (ECG). Although well understood, its utilization is often neglected, especially in practical monitoring situations like the online evaluation of signals acquired by wearable sensors. Continuous blood pressure estimation based on pulse wave velocity considerations is a prominent example, which depends on careful fiducial point extraction and is therefore seriously affected during periods of increased occurrence of extrasystoles. In the scope of this work, a novel ectopic beat discriminator with low computational complexity has been developed, which takes advantage of multimodal features derived from the ECG and pulse-wave-related measurements, thereby providing additional information on the underlying cardiac activity. Moreover, the blood pressure estimation's vulnerability to ectopic beats is closely examined on records drawn from the Physionet database as well as signals recorded in a small field study conducted in a geriatric facility for the elderly. It turns out that reliable extrasystole identification is essential to unsupervised blood pressure estimation, having a significant impact on the overall accuracy. The proposed method is further distinguished by its applicability to battery-driven hardware systems with limited processing power and is a favorable choice when access to multimodal signal features is given anyway.
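
    To make the dependency concrete: PWV-based cuffless blood pressure rests on a per-subject calibration from pulse transit time (PTT), and any beat whose fiducial points are corrupted by an extrasystole must be masked before that regression is applied. A hedged sketch, with an assumed log-linear calibration form and invented numbers:

```python
import numpy as np

def calibrate(ptt_s, sbp_mmhg):
    """Fit SBP = a*ln(PTT) + b per subject (one common empirical form;
    the exact regression model is an assumption of this sketch)."""
    A = np.column_stack([np.log(ptt_s), np.ones_like(ptt_s)])
    coef, *_ = np.linalg.lstsq(A, sbp_mmhg, rcond=None)
    return coef

def estimate_sbp(ptt_s, coef, is_ectopic):
    """Apply the calibration, masking beats flagged as ectopic, whose
    PTT values are unreliable."""
    sbp = coef[0] * np.log(ptt_s) + coef[1]
    return np.where(is_ectopic, np.nan, sbp)

coef = calibrate(np.array([0.24, 0.22, 0.20]), np.array([110.0, 118.0, 127.0]))
ptt = np.array([0.22, 0.21, 0.25, 0.20])                  # s, per beat
print(estimate_sbp(ptt, coef, np.array([False, False, True, False])))
```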

  1. Robust extrapolation scheme for fast estimation of 3D Ising field partition functions: application to within subject fMRI data

    Energy Technology Data Exchange (ETDEWEB)

    Risser, L.; Vincent, T.; Ciuciu, Ph. [NeuroSpin CEA, F-91191 Gif sur Yvette (France); Risser, L.; Vincent, T. [Laboratoire de Neuroimagerie Assistee par Ordinateur (LNAO) CEA - DSV/I2BM/NEUROSPIN (France); Risser, L. [Institut de mecanique des fluides de Toulouse (IMFT), CNRS: UMR5502 - Universite Paul Sabatier - Toulouse III - Institut National Polytechnique de Toulouse - INPT (France); Idier, J. [Institut de Recherche en Communications et en Cybernetique de Nantes (IRCCyN) CNRS - UMR6597 - Universite de Nantes - ecole Centrale de Nantes - Ecole des Mines de Nantes - Ecole Polytechnique de l' Universite de Nantes (France)

    2009-07-01

    In this paper, we present a first numerical scheme to estimate Partition Functions (PF) of 3D Ising fields. Our strategy is applied in the context of the joint detection-estimation of brain activity from functional Magnetic Resonance Imaging (fMRI) data, where the goal is to automatically recover activated regions and estimate region-dependent hemodynamic filters. For any region, a specific binary Markov random field may embody spatial correlation over the hidden states of the voxels by modeling whether they are activated or not. To make this spatial regularization fully adaptive, our approach is first based upon a classical path-sampling method to approximate a small subset of reference PFs corresponding to pre-specified regions. Then, the proposed extrapolation method allows us to approximate the PFs associated with the Ising fields defined over the remaining brain regions. In comparison with preexisting approaches, our method is robust to topological inhomogeneities in the definition of the reference regions. As a result, it strongly alleviates the computational burden and makes spatially adaptive regularization of whole brain fMRI datasets feasible. (authors)

  2. Observer-Based and Regression Model-Based Detection of Emerging Faults in Coal Mills

    DEFF Research Database (Denmark)

    Odgaard, Peter Fogh; Lin, Bao; Jørgensen, Sten Bay

    2006-01-01

    In order to improve the reliability of power plants, it is important to detect faults as fast as possible. In doing so, it is of interest to find the most efficient method. Since modeling of large scale systems is time consuming, it is interesting to compare a model-based method with data driven ones....

  3. Estimation of the sub-criticality of the sodium-cooled fast reactor Monju using the modified neutron source multiplication method

    International Nuclear Information System (INIS)

    Truchet, G.; Van Rooijen, W. F. G.; Shimazu, Y.; Yamaguchi, K.

    2012-01-01

    The Modified Neutron Source Multiplication Method (MNSM) is applied to the Monju reactor. This static method for estimating sub-criticality has already given good results on commercial Pressurized Water Reactors. The MNSM consists of both the extraction of the fundamental mode seen by a detector, to avoid the effect of higher modes near the sources, and the correction of flux distortion effects due to control rod movement. Among Monju's particularities that have a large influence on the MNSM factors are the presence of two californium sources and the position of the detector, which is located far from the core, outside the reactor vessel. The importance of spontaneous fission and (α, n) reactions, which have increased during the 15-year shutdown period, is also discussed. The relative position of detectors and sources deeply affects the correction factors in some regions. In order to evaluate the detector count rate, an analytical propagation has been conducted from the reactor vessel. For two subcritical states, an estimate of the reactivity has been made and compared to experimental data obtained in the restart experiments at Monju (2010). (authors)
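
    The core of the (modified) source multiplication method is that, with a steady source, the detector count rate is inversely proportional to the subcriticality; the MNSM corrections enter as a multiplicative factor. A hedged sketch with invented numbers:

```python
def mnsm_reactivity(rho_ref, count_ref, count, f_correction=1.0):
    """(-rho) = f * (C_ref / C) * (-rho_ref).

    rho_ref is the known reactivity of a reference subcritical state;
    f_correction carries the MNSM corrections for higher modes near the
    sources and for control-rod flux distortion, and would come from
    transport calculations (taken as given here)."""
    return f_correction * (count_ref / count) * rho_ref

# Reference state at -2.0 $; count rate drops from 400 to 100 cps
print(mnsm_reactivity(-2.0, 400.0, 100.0, f_correction=0.95))   # ~ -7.6 $
```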

  4. Estimates of time-dependent fatigue behavior of Type 316 stainless steel subject to irradiation damage in fast breeder and fusion power reactor systems

    International Nuclear Information System (INIS)

    Brinkman, C.R.; Liu, K.C.; Grossbeck, M.L.

    1978-01-01

    Cyclic lives obtained from strain-controlled fatigue tests at 593°C of specimens irradiated in the Experimental Breeder Reactor II (EBR-II) to a fluence of 1 to 2.63×10^26 neutrons (n)/m^2 (E>0.1 MeV) were compared with predictions based on the method of strain-range partitioning. It was demonstrated that, when appropriate tensile and creep-rupture ductilities were employed, reasonably good estimates of the influence of hold periods and irradiation damage on the fully reversed fatigue life of Type 316 stainless steel could be made. After applicability of this method was demonstrated, ductility values for 20 percent cold-worked Type 316 stainless steel specimens irradiated in a mixed-spectrum fission reactor were used to estimate fusion reactor first-wall lifetime. The ductility values used were from irradiations that simulate the environment of the first wall of a fusion reactor. Neutron wall loadings ranging from 2 to 5 MW/m^2 were used. 27 refs

  5. Fast reactors

    International Nuclear Information System (INIS)

    Vasile, A.

    2001-01-01

    Fast reactors can spare natural uranium resources through their breeding property and offer solutions to the management of radioactive wastes by limiting the inventory of heavy nuclei. This article highlights the role that fast reactors could play in reducing the radiotoxicity of wastes. The conversion of 238U into 239Pu by neutron capture is more efficient in fast reactors than in light water reactors. In fast reactors, multi-recycling of U + Pu leads to fissioning up to 95% of the initial fuel (238U + 235U). Two strategies have been studied to burn actinides: multi-recycling of heavy nuclei within the fuel element itself (homogeneous option), or a single recycling in special irradiation targets placed inside the core or at its periphery (heterogeneous option). Simulations have shown that, for the same amount of energy produced (400 TWhe), the mass of transuranium elements (Pu + Np + Am + Cm) sent to waste disposal is 60.9 kg in the homogeneous option and 204.4 kg in the heterogeneous option. Experimental programs are carried out in the Phenix and BOR60 reactors in order to study the feasibility of such strategies. (A.C.)

  6. Fast ejendom

    DEFF Research Database (Denmark)

    Pagh, Peter

    The book comprises a review of legislation, case law and theory concerning the purchase of real property and its public-law and private-law regulation. Among other things, the book covers the private-law topics of the buyer's remedies for breach, easements, neighbour law, adverse possession and liability for environmental damage, as well as the...

  7. Model-based dispersive wave processing: A recursive Bayesian solution

    International Nuclear Information System (INIS)

    Candy, J.V.; Chambers, D.H.

    1999-01-01

    Wave propagation through dispersive media represents a significant problem in many acoustic applications, especially in ocean acoustics, seismology, and nondestructive evaluation. In this paper we propose a propagation model that can easily represent many classes of dispersive waves and proceed to develop the model-based solution to the wave processing problem. It is shown that the underlying wave system is nonlinear and time-variable requiring a recursive processor. Thus the general solution to the model-based dispersive wave enhancement problem is developed using a Bayesian maximum a posteriori (MAP) approach and shown to lead to the recursive, nonlinear extended Kalman filter (EKF) processor. The problem of internal wave estimation is cast within this framework. The specific processor is developed and applied to data synthesized by a sophisticated simulator demonstrating the feasibility of this approach. copyright 1999 Acoustical Society of America.

  8. Estimation of fast neutron fluence in Laguna Verde-type steel specimens in the TRIGA Mark III reactor

    Energy Technology Data Exchange (ETDEWEB)

    Galicia A, J.; Francois L, J. L. [UNAM, Facultad de Ingenieria, Departamento de Sistemas Energeticos, Ciudad Universitaria, 04510 Ciudad de Mexico (Mexico); Aguilar H, F., E-mail: blink19871@hotmail.com [ININ, Carretera Mexico-Toluca s/n, 52750 Ocoyoacac, Estado de Mexico (Mexico)

    2015-09-15

    The main purpose of this work is to obtain the fast neutron fluence accumulated in four carbon steel specimens, similar to the material of the vessels of the BWR reactors at the Laguna Verde nuclear power plant, when subjected to the neutron flux in an experimental facility of the TRIGA Mark III reactor, and to calculate the irradiation time needed to age the material in an accelerated manner. For the calculation of the neutron flux in the specimens, the Monte Carlo code MCNP5 was used. In an initial stage, three foils of natural molybdenum and molybdenum trioxide (MoO{sub 3}) were incorporated into a model of the TRIGA reactor operating at 1 MWth, to calculate the resulting activity for a given irradiation time. The results obtained were compared with experimentally measured activities in these same materials to validate the calculated neutron flux in the model used. Subsequently, the fast neutron flux received by the steel specimens when incorporated in the experimental facility E-16 of the reactor core model, operating at nominal maximum power in steady state, was calculated; from these calculations the irradiation time required to reach fluence values in the range of 10{sup 18} n/cm{sup 2} was obtained, which is the estimate for Laguna Verde after 32 years of effective operation at maximum power. (Author)
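
    The irradiation-time step in the abstract is a simple ratio of the target fluence to the calculated flux; with an assumed (purely illustrative) fast flux in the E-16 facility:

```python
target_fluence = 1.0e18        # n/cm^2, estimated vessel fluence after 32 y
fast_flux = 3.0e12             # n/cm^2/s, hypothetical MCNP result in E-16

t = target_fluence / fast_flux
print(f"required irradiation ~ {t:.2e} s = {t / 3600:.0f} h = {t / 86400:.1f} days")
```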

  9. A fast and simple approach for the estimation of a radiological source from localised measurements after the explosion of a radiological dispersal device

    International Nuclear Information System (INIS)

    Urso, L.; Kaiser, J.C.; Woda, C.; Helebrant, J.; Hulka, J.; Kuca, P.; Prouza, Z.

    2014-01-01

    After an explosion of a radiological dispersal device, decision-makers need to implement countermeasures as soon as possible to minimise the radiation-induced risks to the population. In this work, the authors present a tool which can help provide information about the approximate size of the source term and the radioactive contamination, based on a Gaussian plume model and using available measurements for liquid or aerosolised radioactivity. For two field tests, the source term and spatial distribution of deposited radioactivity are estimated. A sensitivity analysis of the dependence on deposition velocity is carried out. In the case of weak winds, a diffusive process along the wind direction is retained in the model. (authors)
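
    A sketch of the source-estimation step under the stated Gaussian plume assumption: fit the release rate to localized ground-level measurements by least squares. The dispersion parametrization, wind speed, and noisy synthetic measurements are all illustrative assumptions:

```python
import numpy as np
from scipy.optimize import least_squares

def plume(q, x, y, u=3.0, a=0.08, b=0.06):
    """Ground-level concentration from a ground release of strength q (Bq/s),
    with crude linear sigma_y = a*x, sigma_z = b*x dispersion growth."""
    sy, sz = a * x, b * x
    return q / (np.pi * u * sy * sz) * np.exp(-0.5 * (y / sy) ** 2)

xy = np.array([[50.0, 0.0], [100.0, 5.0], [200.0, -10.0], [400.0, 0.0]])
rng = np.random.default_rng(0)
true_q = 5.0e5
c_meas = plume(true_q, xy[:, 0], xy[:, 1]) * rng.normal(1.0, 0.1, len(xy))

fit = least_squares(lambda p: plume(p[0], xy[:, 0], xy[:, 1]) - c_meas, x0=[1.0e6])
print(f"estimated source term ~ {fit.x[0]:.3g} Bq/s (true {true_q:.3g})")
```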

  10. Neighborhood fast food restaurants and fast food consumption: A national study

    OpenAIRE

    Richardson, Andrea S; Boone-Heinonen, Janne; Popkin, Barry M; Gordon-Larsen, Penny

    2011-01-01

    Background: Recent studies suggest that neighborhood fast food restaurant availability is related to greater obesity, yet few studies have investigated whether neighborhood fast food restaurant availability promotes fast food consumption. Our aim was to estimate the effect of neighborhood fast food availability on frequency of fast food consumption in a national sample of young adults, a population at high risk for obesity. Methods: We used national data from U.S. young adults enrolled...

  11. Estimates of time-dependent fatigue behavior of type 316 stainless steel subject to irradiation damage in fast breeder and fusion power reactor systems

    International Nuclear Information System (INIS)

    Brinkman, C.R.; Liu, K.C.; Grossbeck, M.L.

    1979-01-01

    Cyclic lives obtained from strain-controlled fatigue tests at 593°C of specimens irradiated in the Experimental Breeder Reactor II (EBR-II) to a fluence of 1 to 2.63 x 10^26 neutrons (n)/m^2 (E > 0.1 MeV) were compared with predictions based on the method of strain-range partitioning. It was demonstrated that, when appropriate tensile and creep-rupture ductilities were employed, reasonably good estimates of the influence of hold periods and irradiation damage on the fully reversed fatigue life of Type 316 stainless steel could be made. After applicability of this method was demonstrated, ductility values for 20% cold-worked Type 316 stainless steel specimens irradiated in a mixed-spectrum fission reactor were used to estimate fusion reactor first-wall lifetime. The ductility values used were from irradiations that simulate the environment of the first wall of a fusion reactor. Neutron wall loadings ranging from 2 to 5 MW/m^2 were used. The results, although conjectural because of the many assumptions, tended to show that 20% cold-worked Type 316 stainless steel could be used as a first-wall material meeting a 7.5 to 8.5 MW-year/m^2 lifetime goal provided the neutron wall loading does not exceed about 2 MW/m^2. These results were obtained for an air environment, and it is expected that the actual vacuum environment will extend lifetime beyond 10 MW-year/m^2
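
    The strain-range partitioning predictions referenced above combine the four partitioned strain-range components through the linear interaction damage rule, 1/N_pred = sum_ij F_ij / N_ij; a small sketch with invented component lives and fractions:

```python
def srp_life(fractions, lives):
    """Interaction damage rule of strain-range partitioning:
    1/N_pred = sum_ij F_ij / N_ij, where F_ij are the fractions of the
    inelastic strain range in the pp, pc, cp, cc components and N_ij the
    lives each component alone would give (illustrative values below)."""
    return 1.0 / sum(f / n for f, n in zip(fractions, lives))

F = [0.6, 0.1, 0.25, 0.05]            # F_pp, F_pc, F_cp, F_cc (sum to 1)
N = [8000.0, 3000.0, 900.0, 2000.0]   # component lives, cycles
print(f"predicted life ~ {srp_life(F, N):.0f} cycles")
```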

  12. Traceability in Model-Based Testing

    Directory of Open Access Journals (Sweden)

    Mathew George

    2012-11-01

    Full Text Available The growing complexities of software and the demand for shorter time to market are two important challenges that face today's IT industry. These challenges demand the increase of both productivity and quality of software. Model-based testing is a promising technique for meeting these challenges. Traceability modeling is a key issue and challenge in model-based testing. Relationships between the different models will help to navigate from one model to another, and trace back to the respective requirements and the design model when the test fails. In this paper, we present an approach for bridging the gaps between the different models in model-based testing. We propose a relation definition markup language (RDML) for defining the relationships between models.

  13. Kalman Filtered Bio Heat Transfer Model Based Self-adaptive Hybrid Magnetic Resonance Thermometry.

    Science.gov (United States)

    Zhang, Yuxin; Chen, Shuo; Deng, Kexin; Chen, Bingyao; Wei, Xing; Yang, Jiafei; Wang, Shi; Ying, Kui

    2017-01-01

    To develop a self-adaptive and fast thermometry method by combining the original hybrid magnetic resonance thermometry method and the bio heat transfer equation (BHTE) model. The proposed Kalman-filtered bio heat transfer model based self-adaptive hybrid magnetic resonance thermometry, abbreviated as the KalBHT hybrid method, introduces the BHTE model to synthesize a window on the regularization term of the hybrid algorithm, which leads to a self-adaptive regularization both spatially and temporally with the change of temperature. Further, to decrease the sensitivity to the accuracy of the BHTE model, a Kalman filter is utilized to update the window at each iteration time. To investigate the effect of the proposed model, a computer heating simulation, a phantom microwave heating experiment and dynamic in-vivo model validation of the liver and thoracic tumor were conducted in this study. The heating simulation indicates that the KalBHT hybrid algorithm achieves more accurate results without adjusting λ to a proper value, in comparison to the hybrid algorithm. The results of the phantom heating experiment illustrate that the proposed model is able to follow temperature changes in the presence of motion, and the temperature estimated also shows less noise in the background and surrounding the hot spot. The dynamic in-vivo model validation with heating simulation demonstrates that the proposed model has a higher convergence rate, more robustness to the susceptibility problem surrounding the hot spot and more accurate temperature estimation. In the healthy liver experiment with heating simulation, the RMSE of the hot spot of the proposed model is reduced to about 50% of the RMSE of the original hybrid model, and the convergence time becomes only about one fifth of that of the hybrid model. The proposed model is able to improve the accuracy of the original hybrid algorithm and accelerate the convergence rate of MR temperature estimation.

  14. Model-based internal wave processing

    Energy Technology Data Exchange (ETDEWEB)

    Candy, J.V.; Chambers, D.H.

    1995-06-09

    A model-based approach is proposed to solve the oceanic internal wave signal processing problem that is based on state-space representations of the normal-mode vertical velocity and plane wave horizontal velocity propagation models. It is shown that these representations can be utilized to spatially propagate the modal (depth) vertical velocity functions given the basic parameters (wave numbers, Brunt-Väisälä frequency profile, etc.) developed from the solution of the associated boundary value problem, as well as the horizontal velocity components. Based on this framework, investigations are made of model-based solutions to the signal enhancement problem for internal waves.

  15. Model Based Control of Reefer Container Systems

    DEFF Research Database (Denmark)

    Sørensen, Kresten Kjær

    This thesis is concerned with the development of model based control for the Star Cool refrigerated container (reefer) with the objective of reducing energy consumption. This project has been carried out under the Danish Industrial PhD programme and has been financed by Lodam together with the Da...

  16. A model-based approach to predict muscle synergies using optimization: application to feedback control

    Directory of Open Access Journals (Sweden)

    Reza eSharif Razavian

    2015-10-01

    Full Text Available This paper presents a new model-based method to define muscle synergies. Unlike the conventional factorization approach, which extracts synergies from electromyographic data, the proposed method employs a biomechanical model and formally defines the synergies as the solution of an optimal control problem. As a result, the number of required synergies is directly related to the dimensions of the operational space. The estimated synergies are posture-dependent, which correlate well with the results of standard factorization methods. Two examples are used to showcase this method: a two-dimensional forearm model, and a three-dimensional driver arm model. It has been shown here that the synergies need to be task-specific (i.e., they are defined for the specific operational spaces: the elbow angle and the steering wheel angle in the two systems). This functional definition of synergies results in a low-dimensional control space, in which every force in the operational space is accurately created by a unique combination of synergies. As such, there is no need for extra criteria (e.g., minimizing effort) in the process of motion control. This approach is motivated by the need for fast and bio-plausible feedback control of musculoskeletal systems, and can have important implications in engineering, motor control, and biomechanics.

  17. A model-based approach to predict muscle synergies using optimization: application to feedback control.

    Science.gov (United States)

    Sharif Razavian, Reza; Mehrabi, Naser; McPhee, John

    2015-01-01

    This paper presents a new model-based method to define muscle synergies. Unlike the conventional factorization approach, which extracts synergies from electromyographic data, the proposed method employs a biomechanical model and formally defines the synergies as the solution of an optimal control problem. As a result, the number of required synergies is directly related to the dimensions of the operational space. The estimated synergies are posture-dependent, which correlate well with the results of standard factorization methods. Two examples are used to showcase this method: a two-dimensional forearm model, and a three-dimensional driver arm model. It has been shown here that the synergies need to be task-specific (i.e., they are defined for the specific operational spaces: the elbow angle and the steering wheel angle in the two systems). This functional definition of synergies results in a low-dimensional control space, in which every force in the operational space is accurately created by a unique combination of synergies. As such, there is no need for extra criteria (e.g., minimizing effort) in the process of motion control. This approach is motivated by the need for fast and bio-plausible feedback control of musculoskeletal systems, and can have important implications in engineering, motor control, and biomechanics.
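
    A minimal sketch of the central idea, under invented numbers: once the posture-dependent operational-space force of each synergy is known, the activation pattern reproducing a desired force is a nonnegative combination, found here with SciPy's non-negative least squares. The synergy matrix is hypothetical and stands in for what the paper derives from its biomechanical model.

    import numpy as np
    from scipy.optimize import nnls

    # Columns: operational-space force produced by each synergy at full
    # activation for the current posture (invented for illustration).
    S = np.array([[1.0, -0.2],
                  [0.1,  0.9]])      # 2-D operational space, 2 synergies
    f_desired = np.array([0.8, 0.5]) # desired operational-space force

    # Unique nonnegative combination reproducing the desired force.
    activations, residual = nnls(S, f_desired)
    print("synergy activations:", activations)
    print("reconstructed force:", S @ activations, "residual:", residual)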

  18. A novel, fast and efficient single-sensor automatic sleep-stage classification based on complementary cross-frequency coupling estimates.

    Science.gov (United States)

    Dimitriadis, Stavros I; Salis, Christos; Linden, David

    2018-04-01

    Limitations of the manual scoring of polysomnograms, which include data from electroencephalogram (EEG), electro-oculogram (EOG), electrocardiogram (ECG) and electromyogram (EMG) channels have long been recognized. Manual staging is resource intensive and time consuming, and thus considerable effort must be spent to ensure inter-rater reliability. As a result, there is a great interest in techniques based on signal processing and machine learning for a completely Automatic Sleep Stage Classification (ASSC). In this paper, we present a single-EEG-sensor ASSC technique based on the dynamic reconfiguration of different aspects of cross-frequency coupling (CFC) estimated between predefined frequency pairs over 5 s epoch lengths. The proposed analytic scheme is demonstrated using the PhysioNet Sleep European Data Format (EDF) Database with repeat recordings from 20 healthy young adults. We validate our methodology in a second sleep dataset. We achieved very high classification sensitivity, specificity and accuracy of 96.2 ± 2.2%, 94.2 ± 2.3%, and 94.4 ± 2.2% across 20 folds, respectively, and also a high mean F1 score (92%, range 90-94%) when a multi-class Naive Bayes classifier was applied. High classification performance has been achieved also in the second sleep dataset. Our method outperformed the accuracy of previous studies not only on different datasets but also on the same database. Single-sensor ASSC makes the entire methodology appropriate for longitudinal monitoring using wearable EEG in real-world and laboratory-oriented environments. Crown Copyright © 2018. Published by Elsevier B.V. All rights reserved.
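
    A sketch of the classification stage only, under assumed synthetic features: a multi-class Gaussian Naive Bayes classifier scored with 20-fold cross-validation, mirroring the fold count reported above. The feature generator is a stand-in; the actual cross-frequency-coupling estimation from EEG epochs is not reproduced.

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.naive_bayes import GaussianNB

    # Synthetic stand-in for per-epoch CFC features: rows = 5 s EEG epochs,
    # columns = coupling estimates between predefined frequency pairs.
    rng = np.random.default_rng(1)
    n_epochs, n_features, n_stages = 600, 12, 5
    y = rng.integers(0, n_stages, n_epochs)              # sleep-stage labels
    X = rng.normal(0, 1, (n_epochs, n_features)) + y[:, None] * 0.5

    scores = cross_val_score(GaussianNB(), X, y, cv=20)  # 20 folds
    print(f"accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")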

  19. Model based rib-cage unfolding for trauma CT

    Science.gov (United States)

    von Berg, Jens; Klinder, Tobias; Lorenz, Cristian

    2018-03-01

    A CT rib-cage unfolding method is proposed that does not require determining rib centerlines but instead determines the visceral cavity surface by model-based segmentation. Image intensities are sampled across this surface, which is flattened using a model-based 3D thin-plate-spline registration. An average rib centerline model projected onto this surface serves as a reference system for the registration. The flattening registration is designed so that ribs similar to the centerline model are mapped onto parallel lines, preserving their relative length. Ribs deviating from this model accordingly appear as deviations from straight parallel ribs in the unfolded view. As the mapping is continuous, the details in the intercostal space and those adjacent to the ribs are also rendered well. The most beneficial application area is trauma CT, where fast detection of rib fractures is a crucial task. Specifically in trauma, automatic rib centerline detection may not be guaranteed due to fractures and dislocations. Application by visual assessment on the large public LIDC database of lung CT proved the general feasibility of this early work.

  20. Binaural model-based dynamic-range compression.

    Science.gov (United States)

    Ernst, Stephan M A; Kortlang, Steffen; Grimm, Giso; Bisitz, Thomas; Kollmeier, Birger; Ewert, Stephan D

    2018-01-26

    Binaural cues such as interaural level differences (ILDs) are used to organise auditory perception and to segregate sound sources in complex acoustical environments. In bilaterally fitted hearing aids, dynamic-range compression operating independently at each ear potentially alters these ILDs, thus distorting binaural perception and sound source segregation. A binaurally-linked model-based fast-acting dynamic compression algorithm designed to approximate the normal-hearing basilar membrane (BM) input-output function in hearing-impaired listeners is suggested. A multi-center evaluation in comparison with an alternative binaural and two bilateral fittings was performed to assess the effect of binaural synchronisation on (a) speech intelligibility and (b) perceived quality in realistic conditions. 30 and 12 hearing-impaired (HI) listeners were aided individually with the algorithms for the two experimental parts, respectively. A small preference towards the proposed model-based algorithm in the direct quality comparison was found. However, no benefit of binaural synchronisation regarding speech intelligibility was found, suggesting a dominant role of the better ear in all experimental conditions. The suggested binaural synchronisation of compression algorithms showed a limited effect on the tested outcome measures; however, linking could be situationally beneficial to preserve a natural binaural perception of the acoustical environment.

  1. GPU-accelerated 3-D model-based tracking

    International Nuclear Information System (INIS)

    Brown, J Anthony; Capson, David W

    2010-01-01

    Model-based approaches to tracking the pose of a 3-D object in video are effective but computationally demanding. While statistical estimation techniques, such as the particle filter, are often employed to minimize the search space, real-time performance remains unachievable on current generation CPUs. Recent advances in graphics processing units (GPUs) have brought massively parallel computational power to the desktop environment and powerful developer tools, such as NVIDIA Compute Unified Device Architecture (CUDA), have provided programmers with a mechanism to exploit it. NVIDIA GPUs' single-instruction multiple-thread (SIMT) programming model is well-suited to many computer vision tasks, particularly model-based tracking, which requires several hundred 3-D model poses to be dynamically configured, rendered, and evaluated against each frame in the video sequence. Using 6 degree-of-freedom (DOF) rigid hand tracking as an example application, this work harnesses consumer-grade GPUs to achieve real-time, 3-D model-based, markerless object tracking in monocular video.

  2. Service creation: a model-based approach

    NARCIS (Netherlands)

    Quartel, Dick; van Sinderen, Marten J.; Ferreira Pires, Luis

    1999-01-01

    This paper presents a model-based approach to support service creation. In this approach, services are assumed to be created from (available) software components. The creation process may involve multiple design steps in which the requested service is repeatedly decomposed into more detailed

  3. Model based development of engine control algorithms

    NARCIS (Netherlands)

    Dekker, H.J.; Sturm, W.L.

    1996-01-01

    Model based development of engine control systems has several advantages. The development time and costs are strongly reduced because much of the development and optimization work is carried out by simulating both engine and control system. After optimizing the control algorithm it can be executed

  4. An acoustical model based monitoring network

    NARCIS (Netherlands)

    Wessels, P.W.; Basten, T.G.H.; Eerden, F.J.M. van der

    2010-01-01

    In this paper the approach for an acoustical model based monitoring network is demonstrated. This network is capable of reconstructing a noise map, based on the combination of measured sound levels and an acoustic model of the area. By pre-calculating the sound attenuation within the network the

  5. Approximation Algorithms for Model-Based Diagnosis

    NARCIS (Netherlands)

    Feldman, A.B.

    2010-01-01

    Model-based diagnosis is an area of abductive inference that uses a system model, together with observations about system behavior, to isolate sets of faulty components (diagnoses) that explain the observed behavior, according to some minimality criterion. This thesis presents greedy approximation

  6. Opinion dynamics model based on quantum formalism

    Energy Technology Data Exchange (ETDEWEB)

    Artawan, I. Nengah, E-mail: nengahartawan@gmail.com [Theoretical Physics Division, Department of Physics, Udayana University (Indonesia); Trisnawati, N. L. P., E-mail: nlptrisnawati@gmail.com [Biophysics, Department of Physics, Udayana University (Indonesia)

    2016-03-11

    An opinion dynamics model based on quantum formalism is proposed. The core of the quantum formalism is the half-spin dynamical system. In this research the implicit time evolution operators are derived. The analogy between this model and the Deffuant and Sznajd models is discussed.

  7. Model-based auditing using REA

    NARCIS (Netherlands)

    Weigand, H.; Elsas, P.

    2012-01-01

    The recent financial crisis has renewed interest in the value of the owner-ordered auditing tradition that starts from society's long-term interest rather than management interest. This tradition uses a model-based auditing approach in which control requirements are derived in a principled way. A

  8. Model-based testing for software safety

    NARCIS (Netherlands)

    Gurbuz, Havva Gulay; Tekinerdogan, Bedir

    2017-01-01

    Testing safety-critical systems is crucial since a failure or malfunction may result in death or serious injuries to people, equipment, or environment. An important challenge in testing is the derivation of test cases that can identify the potential faults. Model-based testing adopts models of a

  9. FAST goes underground

    International Nuclear Information System (INIS)

    Fridlund, P.S.

    1985-01-01

    The FAST-M Cost Estimating Model is a parametric model designed to determine the costs associated with mining and subterranean operations. It is part of the FAST (Freiman Analysis of Systems Techniques) series of parametric models developed by Freiman Parametric Systems, Inc. The rising cost of fossil fuels has created a need for a method which could be used to determine and control costs in mining and subterranean operations. FAST-M fills this need and also provides scheduling information. The model works equally well for a variety of situations including underground vaults for hazardous waste storage, highway tunnels, and mass transit tunnels. In addition, costs for above ground structures and equipment can be calculated. The input for the model may be on a macro or a micro level. This allows the model to be used at various stages in a project. On the macro level, only general conditions and specifications need to be known. On the micro level, the smallest details may be included. As with other FAST models, reference cases are used to more accurately predict costs and scheduling. This paper will address how the model can be used for a variety of subterranean purposes

  10. Fast tomosynthesis

    International Nuclear Information System (INIS)

    Klotz, E.; Linde, R.; Tiemens, U.; Weiss, H.

    1978-01-01

    A system has been constructed for fast tomosynthesis, whereby X-ray photographs are made of a single layer of an object. Twenty five X-ray tubes illuminate the object simultaneously at different angles. The resulting coded image is decoded by projecting it with a pattern of lenses that have the same form as the pattern of X-ray tubes. The coded image is optically correlated with the pattern of the sources. The scale of this can be adjusted so that the desired layer of the object is portrayed. Experimental results of its use in a hospital are presented. (C.F.)

  11. Model-based cartilage thickness measurement in the submillimeter range

    International Nuclear Information System (INIS)

    Streekstra, G. J.; Strackee, S. D.; Maas, M.; Wee, R. ter; Venema, H. W.

    2007-01-01

    Current methods of image-based thickness measurement in thin sheet structures utilize second derivative zero crossings to locate the layer boundaries. It is generally acknowledged that the nonzero width of the point spread function (PSF) limits the accuracy of this measurement procedure. We propose a model-based method that strongly reduces PSF-induced bias by incorporating the PSF into the thickness estimation method. We estimated the bias in thickness measurements in simulated thin sheet images as obtained from second derivative zero crossings. To gain insight into the range of sheet thickness where our method is expected to yield improved results, sheet thickness was varied between 0.15 and 1.2 mm with an assumed PSF as present in the high-resolution modes of current computed tomography (CT) scanners [full width at half maximum (FWHM) 0.5-0.8 mm]. Our model-based method was evaluated in practice by measuring layer thickness from CT images of a phantom mimicking two parallel cartilage layers in an arthrography procedure. CT arthrography images of cadaver wrists were also evaluated, and thickness estimates were compared to those obtained from high-resolution anatomical sections that served as a reference. The thickness estimates from the simulated images reveal that the method based on second derivative zero crossings shows considerable bias for layers in the submillimeter range. This bias is negligible for sheet thickness larger than 1 mm, where the size of the sheet is more than twice the FWHM of the PSF, but can be as large as 0.2 mm for a 0.5 mm sheet. The results of the phantom experiments show that the bias is effectively reduced by our method. The deviations from the true thickness, due to random fluctuations induced by quantum noise in the CT images, are of the order of 3% for a standard wrist imaging protocol. In the wrist, the submillimeter thickness estimates from the CT arthrography images correspond within 10% to those estimated from the anatomical sections.
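
    The PSF-induced bias that motivates the paper can be reproduced in a few lines: blur an ideal 0.5 mm sheet profile with a Gaussian PSF in the stated FWHM range and read the thickness off the second-derivative zero crossings. All numbers are assumed; the resulting overestimate of roughly 0.2 mm agrees with the figure quoted above.

    import numpy as np

    dx = 0.01                                 # mm per sample
    x = np.arange(-3.0, 3.0, dx)
    thickness = 0.5                           # mm, true sheet thickness
    profile = (np.abs(x) < thickness / 2).astype(float)

    fwhm = 0.7                                # mm, within high-res CT range
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    psf = np.exp(-0.5 * (x / sigma) ** 2)
    blurred = np.convolve(profile, psf / psf.sum(), mode="same")

    # Thickness as the distance between second-derivative zero crossings.
    d2 = np.gradient(np.gradient(blurred, dx), dx)
    s = np.sign(d2)
    idx = np.where((s[:-1] * s[1:] < 0) & (np.abs(x[:-1]) < 1.5))[0]
    est = x[idx].max() - x[idx].min()
    print(f"true 0.50 mm, zero-crossing estimate {est:.2f} mm")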

  12. GENERALISED MODEL BASED CONFIDENCE INTERVALS IN TWO STAGE CLUSTER SAMPLING

    Directory of Open Access Journals (Sweden)

    Christopher Ouma Onyango

    2010-09-01

    Full Text Available Chambers and Dorfman (2002) constructed bootstrap confidence intervals in model-based estimation for finite population totals, assuming that auxiliary values are available throughout a target population and that the auxiliary values are independent. They also assumed that the cluster sizes are known throughout the target population. We now extend this to two-stage sampling, in which the cluster sizes are known only for the sampled clusters, and we therefore predict the unobserved part of the population total. Jan and Elinor (2008) have done similar work, but unlike them, we use a general model in which the auxiliary values are not necessarily independent. We demonstrate that the asymptotic properties of our proposed estimator and its coverage rates are better than those constructed under the model-assisted local polynomial regression model.

  13. A probabilistic graphical model based stochastic input model construction

    International Nuclear Information System (INIS)

    Wan, Jiang; Zabaras, Nicholas

    2014-01-01

    Model reduction techniques have been widely used in modeling of high-dimensional stochastic input in uncertainty quantification tasks. However, the probabilistic modeling of random variables projected into reduced-order spaces presents a number of computational challenges. Due to the curse of dimensionality, the underlying dependence relationships between these random variables are difficult to capture. In this work, a probabilistic graphical model based approach is employed to learn the dependence by running a number of conditional independence tests using observation data. Thus a probabilistic model of the joint PDF is obtained and the PDF is factorized into a set of conditional distributions based on the dependence structure of the variables. The estimation of the joint PDF from data is then transformed to estimating conditional distributions under reduced dimensions. To improve the computational efficiency, a polynomial chaos expansion is further applied to represent the random field in terms of a set of standard random variables. This technique is combined with both linear and nonlinear model reduction methods. Numerical examples are presented to demonstrate the accuracy and efficiency of the probabilistic graphical model based stochastic input models.
    - Highlights:
    • Data-driven stochastic input models without the assumption of independence of the reduced random variables.
    • The problem is transformed to a Bayesian network structure learning problem.
    • Examples are given in flows in random media.

  14. Springer handbook of model-based science

    CERN Document Server

    Bertolotti, Tommaso

    2017-01-01

    The handbook offers the first comprehensive reference guide to the interdisciplinary field of model-based reasoning. It highlights the role of models as mediators between theory and experimentation, and as educational devices, as well as their relevance in testing hypotheses and explanatory functions. The Springer Handbook merges philosophical, cognitive and epistemological perspectives on models with the more practical needs related to the application of this tool across various disciplines and practices. The result is a unique, reliable source of information that guides readers toward an understanding of different aspects of model-based science, such as the theoretical and cognitive nature of models, as well as their practical and logical aspects. The inferential role of models in hypothetical reasoning, abduction and creativity once they are constructed, adopted, and manipulated for different scientific and technological purposes is also discussed. Written by a group of internationally renowned experts in ...

  15. Model-based version management system framework

    International Nuclear Information System (INIS)

    Mehmood, W.

    2016-01-01

    In this paper we present a model-based version management system. A version management system (VMS), a branch of software configuration management (SCM), aims to provide a controlling mechanism for the evolution of software artifacts created during the software development process. Controlling the evolution requires many activities, such as construction and creation of versions, identification of differences between versions, conflict detection and merging. Traditional VMS systems are file-based and consider software systems as a set of text files. File-based VMS systems are not adequate for performing software configuration management activities, such as version control, on software artifacts produced in earlier phases of the software life cycle. New challenges of model differencing, merging, and evolution control arise when using models as the central artifact. The goal of this work is to present a generic framework for model-based VMS which can be used to overcome the problems of traditional file-based VMS systems and provide model versioning services. (author)

  16. Online constrained model-based reinforcement learning

    CSIR Research Space (South Africa)

    Van Niekerk, B

    2017-08-01

    Full Text Available Online constrained model-based reinforcement learning. Benjamin van Niekerk (School of Computer Science, University of the Witwatersrand, South Africa), Andreas Damianou (Amazon.com, Cambridge, UK), Benjamin Rosman (Council for Scientific and Industrial Research, and School of Computer Science, University of the Witwatersrand). From the section on multiple shooting: using direct multiple shooting (Bock and Plitt, 1984), problem (1) can be transformed into a structured non-linear program (NLP). First, the time horizon [t0, t0 + T] is partitioned into N equal subintervals [tk, tk+1] for k = 0...
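
    For readers unfamiliar with the fragment above, the sketch below illustrates the multiple-shooting construction it describes: the horizon is split into N equal subintervals, each segment is integrated from its own guessed node state, and the boundary mismatches ("defects") become equality constraints of the resulting NLP. The dynamics, horizon and guesses are invented, and the NLP solve itself is omitted.

    import numpy as np
    from scipy.integrate import solve_ivp

    def f(t, x, u):
        return [-x[0] + u]              # simple controlled scalar dynamics

    t0, T, N = 0.0, 2.0, 4
    knots = np.linspace(t0, t0 + T, N + 1)      # [tk, tk+1] subintervals
    s = np.array([1.0, 0.7, 0.5, 0.3])          # guessed node states s_k
    u = np.array([0.0, 0.1, 0.2, 0.3])          # piecewise-constant controls

    defects = []
    for k in range(N - 1):
        sol = solve_ivp(f, (knots[k], knots[k + 1]), [s[k]],
                        args=(u[k],), rtol=1e-8)
        defects.append(sol.y[0, -1] - s[k + 1])  # to be driven to zero
    print("defect constraints:", np.round(defects, 4))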

  17. Statistical models based on conditional probability distributions

    International Nuclear Information System (INIS)

    Narayanan, R.S.

    1991-10-01

    We present a formulation of statistical mechanics models based on conditional probability distributions rather than a Hamiltonian. We show that it is possible to realize critical phenomena through this procedure. Closely linked with this formulation is a Monte Carlo algorithm, in which a configuration generated is guaranteed to be statistically independent from any other configuration for all values of the parameters, in particular near the critical point. (orig.)

  18. PV panel model based on datasheet values

    DEFF Research Database (Denmark)

    Sera, Dezso; Teodorescu, Remus; Rodriguez, Pedro

    2007-01-01

    This work presents the construction of a model for a PV panel using the single-diode five-parameter model, based exclusively on datasheet parameters. The model takes into account the series and parallel (shunt) resistance of the panel. The equivalent circuit and the basic equations of the PV cell are presented. Based on these equations, a PV panel model, which is able to predict the panel behavior in different temperature and irradiance conditions, is built and tested.
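
    A minimal sketch of the single-diode five-parameter model referred to above, with the implicit current equation solved per voltage point by bracketed root finding. The parameter values are invented placeholders, not the datasheet-extracted values of the paper.

    import numpy as np
    from scipy.optimize import brentq

    # Single-diode five-parameter model:
    #   I = Iph - I0*(exp((V + I*Rs)/(a*Ns*Vt)) - 1) - (V + I*Rs)/Rsh
    kB, qe = 1.380649e-23, 1.602176634e-19
    Iph, I0 = 8.2, 1.2e-7        # A, photo- and saturation current (assumed)
    Rs, Rsh = 0.35, 300.0        # ohm, series and shunt resistance (assumed)
    a, Ns = 1.3, 60              # ideality factor, cells in series (assumed)
    Vt = kB * 298.15 / qe        # thermal voltage at 25 deg C

    def current(V):
        g = lambda I: (Iph - I0 * np.expm1((V + I * Rs) / (a * Ns * Vt))
                       - (V + I * Rs) / Rsh - I)
        return brentq(g, -2.0, Iph + 2.0)   # g is monotone in I

    V = np.linspace(0.0, 36.5, 200)
    I = np.array([current(v) for v in V])
    P = V * I
    print(f"Isc = {I[0]:.2f} A, Pmax = {P.max():.0f} W at {V[P.argmax()]:.1f} V")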

  19. SLS Navigation Model-Based Design Approach

    Science.gov (United States)

    Oliver, T. Emerson; Anzalone, Evan; Geohagan, Kevin; Bernard, Bill; Park, Thomas

    2018-01-01

    The SLS Program chose to implement a Model-based Design and Model-based Requirements approach for managing component design information and system requirements. This approach differs from previous large-scale design efforts at Marshall Space Flight Center where design documentation alone conveyed information required for vehicle design and analysis and where extensive requirements sets were used to scope and constrain the design. The SLS Navigation Team has been responsible for the Program-controlled Design Math Models (DMMs) which describe and represent the performance of the Inertial Navigation System (INS) and the Rate Gyro Assemblies (RGAs) used by Guidance, Navigation, and Controls (GN&C). The SLS Navigation Team is also responsible for the navigation algorithms. The navigation algorithms are delivered for implementation on the flight hardware as a DMM. For the SLS Block 1-B design, the additional GPS Receiver hardware is managed as a DMM at the vehicle design level. This paper provides a discussion of the processes and methods used to engineer, design, and coordinate engineering trades and performance assessments using SLS practices as applied to the GN&C system, with a particular focus on the Navigation components. These include composing system requirements, requirements verification, model development, model verification and validation, and modeling and analysis approaches. The Model-based Design and Requirements approach does not reduce the effort associated with the design process versus previous processes used at Marshall Space Flight Center. Instead, the approach takes advantage of overlap between the requirements development and management process, and the design and analysis process by efficiently combining the control (i.e. the requirement) and the design mechanisms. The design mechanism is the representation of the component behavior and performance in design and analysis tools. The focus in the early design process shifts from the development and

  20. A model-based risk management framework

    Energy Technology Data Exchange (ETDEWEB)

    Gran, Bjoern Axel; Fredriksen, Rune

    2002-08-15

    The ongoing research activity addresses these issues through two co-operative activities. The first is the IST-funded research project CORAS, where Institutt for energiteknikk takes part as responsible for the work package for Risk Analysis. The main objective of the CORAS project is to develop a framework to support risk assessment of security-critical systems. The second, called the Halden Open Dependability Demonstrator (HODD), is established in cooperation between Oestfold University College, local companies and HRP. The objective of HODD is to provide an open-source test bed for testing, teaching and learning about risk analysis methods, risk analysis tools, and fault tolerance techniques. The Inverted Pendulum Control System (IPCON), whose main task is to keep a pendulum balanced and controlled, is the first system that has been established. In order to make a risk assessment one needs to know what a system does, or is intended to do. Furthermore, the risk assessment requires correct descriptions of the system, its context and all relevant features. A basic assumption is that a precise model of this knowledge, based on formal or semi-formal descriptions, such as UML, will facilitate a systematic risk assessment. It is also necessary to have a framework to integrate the different risk assessment methods. The experiences so far support this hypothesis. This report presents CORAS and the CORAS model-based risk management framework, including a preliminary guideline for model-based risk assessment. The CORAS framework for model-based risk analysis offers a structured and systematic approach to identify and assess security issues of ICT systems. From the initial assessment of IPCON, we also believe that the framework is applicable in a safety context. Further work on IPCON, as well as the experiences from the CORAS trials, will provide insight and feedback for further improvements. (Author)

  1. Least-squares model-based halftoning

    Science.gov (United States)

    Pappas, Thrasyvoulos N.; Neuhoff, David L.

    1992-08-01

    A least-squares model-based approach to digital halftoning is proposed. It exploits both a printer model and a model for visual perception. It attempts to produce an 'optimal' halftoned reproduction, by minimizing the squared error between the response of the cascade of the printer and visual models to the binary image and the response of the visual model to the original gray-scale image. Conventional methods, such as clustered ordered dither, use the properties of the eye only implicitly, and resist printer distortions at the expense of spatial and gray-scale resolution. In previous work we showed that our printer model can be used to modify error diffusion to account for printer distortions. The modified error diffusion algorithm has better spatial and gray-scale resolution than conventional techniques, but produces some well known artifacts and asymmetries because it does not make use of an explicit eye model. Least-squares model-based halftoning uses explicit eye models and relies on printer models that predict distortions and exploit them to increase, rather than decrease, both spatial and gray-scale resolution. We have shown that the one-dimensional least-squares problem, in which each row or column of the image is halftoned independently, can be implemented with the Viterbi algorithm. Unfortunately, no closed-form solution can be found in two dimensions. The two-dimensional least-squares solution is obtained by iterative techniques. Experiments show that least-squares model-based halftoning produces more gray levels and better spatial resolution than conventional techniques. We also show that the least-squares approach eliminates the problems associated with error diffusion. Model-based halftoning can be especially useful in transmission of high quality documents using high fidelity gray-scale image encoders. As we have shown, in such cases halftoning can be performed at the receiver, just before printing. Apart from coding efficiency, this approach
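
    A toy illustration of the least-squares criterion in one dimension: binary pixels are chosen to minimize the squared error between a low-pass ("eye"-filtered) halftone and the equally filtered gray-scale row. The paper solves this exactly with the Viterbi algorithm and includes a printer model; the greedy coordinate descent and three-tap filter below are simplifications assumed for brevity.

    import numpy as np

    rng = np.random.default_rng(0)
    eye = np.array([0.25, 0.5, 0.25])       # crude visual low-pass filter

    def perceived(img):
        return np.convolve(img, eye, mode="same")

    g = 0.5 + 0.4 * np.sin(np.linspace(0, 4 * np.pi, 64))  # gray-scale row
    target = perceived(g)

    def sq_err(b):
        return np.sum((perceived(b) - target) ** 2)

    b = (rng.random(g.size) < g).astype(float)  # random initial halftone
    improved = True
    while improved:                             # flip pixels while it helps
        improved = False
        for i in range(b.size):
            trial = b.copy()
            trial[i] = 1.0 - trial[i]
            if sq_err(trial) < sq_err(b):
                b, improved = trial, True
    print(f"final squared error: {sq_err(b):.4f}")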

  2. Model-Based Motion Tracking of Infants

    DEFF Research Database (Denmark)

    Olsen, Mikkel Damgaard; Herskind, Anna; Nielsen, Jens Bo

    2014-01-01

    Even though motion tracking is a widely used technique to analyze and measure human movements, only a few studies focus on motion tracking of infants. In recent years, a number of studies have emerged focusing on analyzing the motion pattern of infants, using computer vision. Most of these studies...... are based on 2D images, but few are based on 3D information. In this paper, we present a model-based approach for tracking infants in 3D. The study extends a novel study on graph-based motion tracking of infants and we show that the extension improves the tracking results. A 3D model is constructed...

  3. Model-Based Power Plant Master Control

    Energy Technology Data Exchange (ETDEWEB)

    Boman, Katarina; Thomas, Jean; Funkquist, Jonas

    2010-08-15

    The main goal of the project has been to evaluate the potential of a coordinated master control for a solid fuel power plant in terms of tracking capability, stability and robustness. The control strategy has been model-based predictive control (MPC) and the plant used in the case study has been the Vattenfall power plant Idbaecken in Nykoeping. A dynamic plant model based on nonlinear physical models was used to imitate the true plant in MATLAB/SIMULINK simulations. The basis for this model was already developed in previous Vattenfall internal projects, along with a simulation model of the existing control implementation with traditional PID controllers. The existing PID control is used as a reference performance, and it has been thoroughly studied and tuned in these previous Vattenfall internal projects. A turbine model was developed with characteristics based on the results of steady-state simulations of the plant using the software EBSILON. Using the derived model as a representative for the actual process, an MPC control strategy was developed using linearization and gain-scheduling. The control signal constraints (rate of change) and constraints on outputs were implemented to comply with plant constraints. After tuning the MPC control parameters, a number of simulation scenarios were performed to compare the MPC strategy with the existing PID control structure. The simulation scenarios also included cases highlighting the robustness properties of the MPC strategy. From the study, the main conclusions are:
    - The proposed Master MPC controller shows excellent set-point tracking performance even though the plant has strong interactions and non-linearity, and the controls and their rate of change are bounded.
    - The proposed Master MPC controller is robust, stable in the presence of disturbances and parameter variations. Even though the current study only considered a very small number of the possible disturbances and modelling errors, the considered cases are

  4. Fasting - the ultimate diet?

    Science.gov (United States)

    Johnstone, A M

    2007-05-01

    Adult humans often undertake acute fasts for cosmetic, religious or medical reasons. For example, an estimated 14% of US adults have reported using fasting as a means to control body weight and this approach has long been advocated as an intermittent treatment for gross refractory obesity. There are unique historical data sets on extreme forms of food restriction that give insight into the consequences of starvation or semi-starvation in previously healthy, but usually non-obese subjects. These include documented medical reports on victims of hunger strike, famine and prisoners of war. Such data provide a detailed account on how the body adapts to prolonged starvation. It has previously been shown that fasting for the biblical period of 40 days and 40 nights is well within the overall physiological capabilities of a healthy adult. However, the specific effects on the human body and mind are less clearly documented, either in the short term (hours) or in the longer term (days). This review asks the following three questions, pertinent to any weight-loss therapy, (i) how effective is the regime in achieving weight loss, (ii) what impact does it have on psychology? and finally, (iii) does it work long-term?

  5. ADT fast losses MD

    CERN Document Server

    Priebe, A; Dehning, B; Redaelli, S; Salvachua Ferrando, BM; Sapinski, M; Solfaroli Camillocci, M; Valuch, D

    2013-01-01

    Fast beam losses on the order of 1 ms are expected to be a potentially major luminosity limitation at higher beam energies after the LHC long shutdown (LS1). Therefore a quench test is planned for the winter of 2013 to estimate the quench limit on this timescale and revise the current models. This experiment was devoted to establishing the LHC Transverse Damper (ADT) as a system for inducing fast losses. A non-standard operation of the ADT was used to develop beam oscillations instead of suppressing them. The sign-flip method had allowed us to create fast losses within several LHC turns at 450 GeV during the previous test (26th March 2012). Thus, the ADT could potentially be used for studies of the impact of UFOs ("Unidentified Falling Objects") on the cold magnets. Verification of the system's capability and investigation of the disturbed beam properties were the main objectives of this MD. During the experiment, the pilot bunches of the proton beam were excited independently in the horizontal and vertical ...

  6. SLS Model Based Design: A Navigation Perspective

    Science.gov (United States)

    Oliver, T. Emerson; Anzalone, Evan; Park, Thomas; Geohagan, Kevin

    2018-01-01

    The SLS Program has implemented a Model-based Design (MBD) and Model-based Requirements approach for managing component design information and system requirements. This approach differs from previous large-scale design efforts at Marshall Space Flight Center where design documentation alone conveyed information required for vehicle design and analysis and where extensive requirements sets were used to scope and constrain the design. The SLS Navigation Team is responsible for the Program-controlled Design Math Models (DMMs) which describe and represent the performance of the Inertial Navigation System (INS) and the Rate Gyro Assemblies (RGAs) used by Guidance, Navigation, and Controls (GN&C). The SLS Navigation Team is also responsible for navigation algorithms. The navigation algorithms are delivered for implementation on the flight hardware as a DMM. For the SLS Block 1B design, the additional GPS Receiver hardware model is managed as a DMM at the vehicle design level. This paper describes the models, and discusses the processes and methods used to engineer, design, and coordinate engineering trades and performance assessments using SLS practices as applied to the GN&C system, with a particular focus on the navigation components.

  7. Sandboxes for Model-Based Inquiry

    Science.gov (United States)

    Brady, Corey; Holbert, Nathan; Soylu, Firat; Novak, Michael; Wilensky, Uri

    2015-04-01

    In this article, we introduce a class of constructionist learning environments that we call Emergent Systems Sandboxes (ESSs), which have served as a centerpiece of our recent work in developing curriculum to support scalable model-based learning in classroom settings. ESSs are a carefully specified form of virtual construction environment that support students in creating, exploring, and sharing computational models of dynamic systems that exhibit emergent phenomena. They provide learners with "entity"-level construction primitives that reflect an underlying scientific model. These primitives can be directly "painted" into a sandbox space, where they can then be combined, arranged, and manipulated to construct complex systems and explore the emergent properties of those systems. We argue that ESSs offer a means of addressing some of the key barriers to adopting rich, constructionist model-based inquiry approaches in science classrooms at scale. Situating the ESS in a large-scale science modeling curriculum we are implementing across the USA, we describe how the unique "entity-level" primitive design of an ESS facilitates knowledge system refinement at both an individual and social level, we describe how it supports flexible modeling practices by providing both continuous and discrete modes of executability, and we illustrate how it offers students a variety of opportunities for validating their qualitative understandings of emergent systems as they develop.

  8. Model-free and model-based reward prediction errors in EEG.

    Science.gov (United States)

    Sambrook, Thomas D; Hardwick, Ben; Wills, Andy J; Goslin, Jeremy

    2018-05-24

    Learning theorists posit two reinforcement learning systems: model-free and model-based. Model-based learning incorporates knowledge about structure and contingencies in the world to assign candidate actions with an expected value. Model-free learning is ignorant of the world's structure; instead, actions hold a value based on prior reinforcement, with this value updated by expectancy violation in the form of a reward prediction error. Because they use such different learning mechanisms, it has been previously assumed that model-based and model-free learning are computationally dissociated in the brain. However, recent fMRI evidence suggests that the brain may compute reward prediction errors to both model-free and model-based estimates of value, signalling the possibility that these systems interact. Because of its poor temporal resolution, fMRI risks confounding reward prediction errors with other feedback-related neural activity. In the present study, EEG was used to show the presence of both model-based and model-free reward prediction errors and their place in a temporal sequence of events including state prediction errors and action value updates. This demonstration of model-based prediction errors questions a long-held assumption that model-free and model-based learning are dissociated in the brain. Copyright © 2018 Elsevier Inc. All rights reserved.
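
    The model-free update described above is the standard delta rule; a minimal two-armed bandit sketch, with all task parameters invented, makes the role of the reward prediction error explicit.

    import numpy as np

    rng = np.random.default_rng(0)
    true_p = np.array([0.3, 0.7])   # reward probabilities of two actions
    Q = np.zeros(2)                 # model-free action values
    alpha, beta = 0.1, 3.0          # learning rate, softmax inverse temperature

    for trial in range(500):
        p = np.exp(beta * Q) / np.exp(beta * Q).sum()  # softmax policy
        a = rng.choice(2, p=p)
        r = float(rng.random() < true_p[a])
        rpe = r - Q[a]              # reward prediction error
        Q[a] += alpha * rpe         # expectancy-violation update

    print("learned values:", np.round(Q, 2), "true:", true_p)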

  9. Calculation of the neutron parameters of fast thermal reactor

    International Nuclear Information System (INIS)

    Kukuleanu, V.; Mocioiu, D.; Drutse, E.; Konstantinesku, E.

    1975-01-01

    The system of neutron calculations for fast reactors is given. This system was used to estimate the physical parameters of the fast-thermal reactors investigated. The results obtained and various problems specific to reactors of this type are described. (author)

  10. Muscles of mastication model-based MR image segmentation

    Energy Technology Data Exchange (ETDEWEB)

    Ng, H.P. [NUS Graduate School for Integrative Sciences and Engineering, Singapore (Singapore); Agency for Science Technology and Research, Singapore (Singapore). Biomedical Imaging Lab.; Ong, S.H. [National Univ. of Singapore (Singapore). Dept. of Electrical and Computer Engineering; National Univ. of Singapore (Singapore). Div. of Bioengineering; Hu, Q.; Nowinski, W.L. [Agency for Science Technology and Research, Singapore (Singapore). Biomedical Imaging Lab.; Foong, K.W.C. [NUS Graduate School for Integrative Sciences and Engineering, Singapore (Singapore); National Univ. of Singapore (Singapore). Dept. of Preventive Dentistry; Goh, P.S. [National Univ. of Singapore (Singapore). Dept. of Diagnostic Radiology

    2006-11-15

    Objective: The muscles of mastication play a major role in the orodigestive system as the principal motive force for the mandible. An algorithm for segmenting these muscles from magnetic resonance (MR) images was developed and tested. Materials and methods: Anatomical information about the muscles of mastication in MR images is used to obtain the spatial relationships relating the muscle region of interest (ROI) and head ROI. A model-based technique that involves the spatial relationships between head and muscle ROIs as well as muscle templates is developed. In the segmentation stage, the muscle ROI is derived from the model. Within the muscle ROI, anisotropic diffusion is applied to smooth the texture, followed by thresholding to exclude bone and fat. The muscle template and morphological operators are employed to obtain an initial estimate of the muscle boundary, which then serves as the input contour to the gradient vector flow snake that iterates to the final segmentation. Results: The method was applied to segmentation of the masseter, lateral pterygoid and medial pterygoid in 75 images. The overlap indices (K) achieved are 91.4, 92.1 and 91.2%, respectively. Conclusion: A model-based method for segmenting the muscles of mastication from MR images was developed and tested. The results show good agreement between manual and automatic segmentations. (orig.)

  11. Muscles of mastication model-based MR image segmentation

    International Nuclear Information System (INIS)

    Ng, H.P.; Agency for Science Technology and Research, Singapore; Ong, S.H.; National Univ. of Singapore; Hu, Q.; Nowinski, W.L.; Foong, K.W.C.; National Univ. of Singapore; Goh, P.S.

    2006-01-01

    Objective: The muscles of mastication play a major role in the orodigestive system as the principal motive force for the mandible. An algorithm for segmenting these muscles from magnetic resonance (MR) images was developed and tested. Materials and methods: Anatomical information about the muscles of mastication in MR images is used to obtain the spatial relationships relating the muscle region of interest (ROI) and head ROI. A model-based technique that involves the spatial relationships between head and muscle ROIs as well as muscle templates is developed. In the segmentation stage, the muscle ROI is derived from the model. Within the muscle ROI, anisotropic diffusion is applied to smooth the texture, followed by thresholding to exclude bone and fat. The muscle template and morphological operators are employed to obtain an initial estimate of the muscle boundary, which then serves as the input contour to the gradient vector flow snake that iterates to the final segmentation. Results: The method was applied to segmentation of the masseter, lateral pterygoid and medial pterygoid in 75 images. The overlap indices (K) achieved are 91.4, 92.1 and 91.2%, respectively. Conclusion: A model-based method for segmenting the muscles of mastication from MR images was developed and tested. The results show good agreement between manual and automatic segmentations. (orig.)

  12. Model-based energy monitoring and diagnosis of telecommunication cooling systems

    International Nuclear Information System (INIS)

    Sorrentino, Marco; Acconcia, Matteo; Panagrosso, Davide; Trifirò, Alena

    2016-01-01

    A methodology is proposed for on-line monitoring of the cooling load supplied by telecommunication (TLC) cooling systems. The sensible cooling load is estimated via a proportional-integral-controller-based input estimator, whereas a lumped-parameter model was developed to estimate the latent heat load removed by the air handling units (AHUs). The joint deployment of the above estimators enables accurate prediction of the total cooling load, as well as of the related AHU and free-cooler energy performance. The procedure was then proven effective when extended to cooling systems having a centralized chiller, through model-based estimation of a key performance metric, the energy efficiency ratio. The results and experimental validation presented throughout the paper confirm the suitability of the proposed procedure as a reliable and effective energy monitoring and diagnostic tool for TLC applications. Moreover, the proposed modeling approach, beyond its direct contribution towards smart use and conservation of energy, can be fruitfully deployed as a virtual sensor of removed heat load in a variety of residential and industrial applications.
    - Highlights:
    • Accurate cooling load prediction in telecommunication rooms.
    • Development of an input estimator for sensible cooling load simulation.
    • Model-based estimation of latent cooling load.
    • Model-based prediction of centralized chiller energy performance in central offices.
    • Diagnosis-oriented application of the proposed cooling load estimator.
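
    A rough sketch of the PI-controller-based input estimator idea: the unknown heat load is taken as the output of a PI controller that forces a simple lumped thermal model to track the measured room temperature. The thermal model, gains and noise levels are assumptions for illustration, not the paper's calibrated values.

    import numpy as np

    rng = np.random.default_rng(0)
    dt, C, G = 60.0, 5e6, 800.0     # s, J/K capacity, W/K losses (assumed)
    T_amb, n = 25.0, 400
    Q_true = 20e3 + 5e3 * np.sin(np.arange(n) * dt / 3600.0)  # W, true load

    # Simulate the "measured" room temperature under the true load.
    T = np.full(n, 24.0)
    for k in range(n - 1):
        T[k + 1] = T[k] + dt / C * (Q_true[k] - G * (T[k] - T_amb))
    T_meas = T + rng.normal(0, 0.02, n)

    # PI input estimator: the estimated load drives a copy of the model.
    Kp, Ki = 3e4, 50.0              # hand-tuned, illustrative gains
    T_hat, integ = 24.0, 0.0
    Q_hat = np.zeros(n)
    for k in range(n - 1):
        e = T_meas[k] - T_hat
        integ += e * dt
        Q_hat[k] = Kp * e + Ki * integ
        T_hat += dt / C * (Q_hat[k] - G * (T_hat - T_amb))

    err = np.abs(Q_hat[n // 2:-1] - Q_true[n // 2:-1]).mean()
    print(f"mean load error after settling: {err / 1e3:.2f} kW")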

  13. Unifying Model-Based and Reactive Programming within a Model-Based Executive

    Science.gov (United States)

    Williams, Brian C.; Gupta, Vineet; Norvig, Peter (Technical Monitor)

    1999-01-01

    Real-time, model-based, deduction has recently emerged as a vital component in AI's tool box for developing highly autonomous reactive systems. Yet one of the current hurdles towards developing model-based reactive systems is the number of methods simultaneously employed, and their corresponding melange of programming and modeling languages. This paper offers an important step towards unification. We introduce RMPL, a rich modeling language that combines probabilistic, constraint-based modeling with reactive programming constructs, while offering a simple semantics in terms of hidden state Markov processes. We introduce probabilistic, hierarchical constraint automata (PHCA), which allow Markov processes to be expressed in a compact representation that preserves the modularity of RMPL programs. Finally, a model-based executive, called Reactive Burton, is described that exploits this compact encoding to perform efficient simulation, belief state update and control sequence generation.

  14. Fast Convolution Module

    National Research Council Canada - National Science Library

    Bierens, L

    1997-01-01

    This report describes the design and realisation of a real-time range azimuth compression module, the so-called 'Fast Convolution Module', based on the fast convolution algorithm developed at TNO-FEL...
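
    The fast convolution algorithm at the core of such a module is convolution by pointwise multiplication in the frequency domain. A minimal sketch, not TNO-FEL's implementation:

    import numpy as np

    rng = np.random.default_rng(0)
    signal = rng.normal(size=4096)        # e.g. received range samples
    kernel = rng.normal(size=256)         # e.g. matched-filter reference

    n = signal.size + kernel.size - 1     # full linear convolution length
    nfft = 1 << (n - 1).bit_length()      # next power of two for the FFT
    fast = np.fft.irfft(np.fft.rfft(signal, nfft) *
                        np.fft.rfft(kernel, nfft), nfft)[:n]

    direct = np.convolve(signal, kernel)  # O(N*M) reference result
    print("max abs difference:", np.max(np.abs(fast - direct)))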

  15. Neuro-fuzzy system modeling based on automatic fuzzy clustering

    Institute of Scientific and Technical Information of China (English)

    Yuangang TANG; Fuchun SUN; Zengqi SUN

    2005-01-01

    A neuro-fuzzy system model based on automatic fuzzy clustering is proposed. A hybrid model identification algorithm is also developed to decide the model structure and model parameters. The algorithm mainly includes three parts: 1) automatic fuzzy C-means (AFCM), which is applied to generate fuzzy rules automatically and then fix the size of the neuro-fuzzy network, by which the complexity of system design is reduced greatly at the price of some fitting capability; 2) recursive least-squares estimation (RLSE), which is used to update the parameters of the Takagi-Sugeno model employed to describe the behavior of the system; 3) a gradient descent algorithm, proposed for the fuzzy membership values according to the back-propagation algorithm of neural networks. Finally, modeling the dynamical equation of a two-link manipulator with the proposed approach is illustrated to validate the feasibility of the method.
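
    The RLSE stage in part 2) is the standard recursive least-squares update. A generic linear-in-parameters sketch, with invented data standing in for the rule-weighted regressors of the Takagi-Sugeno consequents:

    import numpy as np

    rng = np.random.default_rng(0)
    theta_true = np.array([2.0, -1.0, 0.5])

    theta = np.zeros(3)                  # parameter estimate
    P = np.eye(3) * 1e3                  # inverse-correlation matrix
    for _ in range(200):
        x = rng.normal(size=3)           # regressor vector
        y = x @ theta_true + rng.normal(0, 0.05)
        K = P @ x / (1.0 + x @ P @ x)    # gain vector
        theta = theta + K * (y - x @ theta)
        P = P - np.outer(K, x @ P)       # covariance update

    print("estimated:", np.round(theta, 3), "true:", theta_true)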

  16. A General Accelerated Degradation Model Based on the Wiener Process.

    Science.gov (United States)

    Liu, Le; Li, Xiaoyang; Sun, Fuqiang; Wang, Ning

    2016-12-06

    Accelerated degradation testing (ADT) is an efficient tool to conduct material service reliability and safety evaluations by analyzing performance degradation data. Traditional stochastic process models are mainly for linear or linearization degradation paths. However, those methods are not applicable for the situations where the degradation processes cannot be linearized. Hence, in this paper, a general ADT model based on the Wiener process is proposed to solve the problem for accelerated degradation data analysis. The general model can consider the unit-to-unit variation and temporal variation of the degradation process, and is suitable for both linear and nonlinear ADT analyses with single or multiple acceleration variables. The statistical inference is given to estimate the unknown parameters in both constant stress and step stress ADT. The simulation example and two real applications demonstrate that the proposed method can yield reliable lifetime evaluation results compared with the existing linear and time-scale transformation Wiener processes in both linear and nonlinear ADT analyses.
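
    A compact sketch of the model family discussed here: a Wiener degradation process with a nonlinear time scale, X(t) = mu*L(t) + sigma*B(L(t)) with L(t) = t**b, simulated from its independent Gaussian increments, with closed-form estimates of drift and diffusion recovered from those increments. The parameter values are invented, and the link between parameters and acceleration variables in the full ADT model is omitted.

    import numpy as np

    rng = np.random.default_rng(0)
    mu, sigma, b = 0.8, 0.3, 1.4     # drift, diffusion, time-scale exponent
    t = np.linspace(0.0, 10.0, 201)
    L = t ** b                       # transformed (nonlinear) time scale
    dL = np.diff(L)
    dX = mu * dL + sigma * np.sqrt(dL) * rng.normal(size=dL.size)
    X = np.concatenate([[0.0], np.cumsum(dX)])   # one degradation path

    # With b known, increments are Gaussian and the MLEs are closed-form:
    mu_hat = (X[-1] - X[0]) / (L[-1] - L[0])
    sigma_hat = np.sqrt(np.mean((dX - mu_hat * dL) ** 2 / dL))
    print(f"mu_hat = {mu_hat:.3f}, sigma_hat = {sigma_hat:.3f}")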

  17. Combined Usage of TanDEM-X and CryoSat-2 for Generating a High Resolution Digital Elevation Model of Fast Moving Ice Stream and Its Application in Grounding Line Estimation

    Directory of Open Access Journals (Sweden)

    Seung Hee Kim

    2017-02-01

    Full Text Available A definite surface topography of ice provides fundamental information for glaciologists studying climate change. However, the topography at the marginal regions of ice sheets exhibits noticeable dynamic changes owing to fast flow velocities and large thinning rates; thus, it is difficult to determine the instantaneous topography. In this study, the surface topography of the marginal region of Thwaites Glacier in the Amundsen Sector of West Antarctica, where ice melting and thinning prevail, is extracted using TanDEM-X interferometry in combination with data from the near-coincident CryoSat-2 radar altimeter. The absolute height offset, which has been a persistent problem in applying the interferometry technique for generating DEMs, is determined by linear least-squares fitting between the uncorrected TanDEM-X heights and reliable reference heights from CryoSat-2. The reliable heights are rigorously selected at locations of high normalized cross-correlation and low RMS heights between segments of data points. The generated digital elevation model with the resolved absolute height offset is assessed against airborne laser altimeter data from Operation IceBridge acquired five months after TanDEM-X, showing high correlation with biases of 3.19 m and −4.31 m at the grounding zone and over the ice sheet surface, respectively. As a practical application of the generated DEM, grounding line estimation assuming hydrostatic equilibrium was carried out, and its feasibility was demonstrated through comparison with the previous grounding line. Finally, it is expected that the combination of interferometry and altimetry with similar datasets can be applied to regions even with a lack of ground control points.

  18. Model Based Control of Refrigeration Systems

    DEFF Research Database (Denmark)

    Larsen, Lars Finn Sloth

    for automation of these procedures, that is to incorporate some "intelligence" in the control system, this project was started up. The main emphasis of this work has been on model based methods for system optimizing control in supermarket refrigeration systems. The idea of implementing a system optimizing...... control is to let an optimization procedure take over the task of operating the refrigeration system and thereby replace the role of the operator in the traditional control structure. In the context of refrigeration systems, the idea is to divide the optimizing control structure into two parts: A part...... optimizing the steady state operation "set-point optimizing control" and a part optimizing dynamic behaviour of the system "dynamical optimizing control". A novel approach for set-point optimization will be presented. The general idea is to use a prediction of the steady state, for computation of the cost...

  19. Model-based processing for underwater acoustic arrays

    CERN Document Server

    Sullivan, Edmund J

    2015-01-01

    This monograph presents a unified approach to model-based processing for underwater acoustic arrays. The use of physical models in passive array processing is not a new idea, but it has been used on a case-by-case basis, and as such, lacks any unifying structure. This work views all such processing methods as estimation procedures, which then can be unified by treating them all as a form of joint estimation based on a Kalman-type recursive processor, which can be recursive either in space or time, depending on the application. This is done for three reasons. First, the Kalman filter provides a natural framework for the inclusion of physical models in a processing scheme. Second, it allows poorly known model parameters to be jointly estimated along with the quantities of interest. This is important, since in certain areas of array processing already in use, such as those based on matched-field processing, the so-called mismatch problem either degrades performance or, indeed, prevents any solution at all. Third...

  20. Model based risk assessment - the CORAS framework

    Energy Technology Data Exchange (ETDEWEB)

    Gran, Bjoern Axel; Fredriksen, Rune; Thunem, Atoosa P-J.

    2004-04-15

    Traditional risk analysis and assessment is based on failure-oriented models of the system. In contrast to this, model-based risk assessment (MBRA) utilizes success-oriented models describing all intended system aspects, including functional, operational and organizational aspects of the target. The target models are then used as input sources for complementary risk analysis and assessment techniques, as well as a basis for the documentation of the assessment results. The EU-funded CORAS project developed a tool-supported methodology for the application of MBRA in security-critical systems. The methodology has been tested with successful outcome through a series of seven trials within the telemedicine and e-commerce areas. The CORAS project in general and the CORAS application of MBRA in particular have contributed positively to the visibility of model-based risk assessment and thus to the disclosure of several potentials for further exploitation of various aspects within this important research field. In that connection, the CORAS methodology's possibilities for further improvement towards utilization in more complex architectures and also in other application domains such as the nuclear field can be addressed. The latter calls for adapting the framework to address nuclear standards such as IEC 60880 and IEC 61513. For this development we recommend applying a trial-driven approach within the nuclear field. The tool-supported approach for combining risk analysis and system development also fits well with the HRP proposal for developing an Integrated Design Environment (IDE) providing efficient methods and tools to support control room systems design. (Author)

  1. FAST: FAST Analysis of Sequences Toolbox

    Directory of Open Access Journals (Sweden)

    Travis J. Lawrence

    2015-05-01

    Full Text Available FAST (FAST Analysis of Sequences Toolbox) provides simple, powerful open source command-line tools to filter, transform, annotate and analyze biological sequence data. Modeled after the GNU (GNU’s Not Unix) Textutils such as grep, cut, and tr, FAST tools such as fasgrep, fascut, and fastr make it easy to rapidly prototype expressive bioinformatic workflows in a compact and generic command vocabulary. Compact combinatorial encoding of data workflows with FAST commands can simplify the documentation and reproducibility of bioinformatic protocols, supporting better transparency in biological data science. Interface self-consistency and conformity with conventions of GNU, Matlab, Perl, BioPerl, R and GenBank help make FAST easy and rewarding to learn. FAST automates numerical, taxonomic, and text-based sorting, selection and transformation of sequence records and alignment sites based on content, index ranges, descriptive tags, annotated features, and in-line calculated analytics, including composition and codon usage. Automated content- and feature-based extraction of sites and support for molecular population genetic statistics makes FAST useful for molecular evolutionary analysis. FAST is portable, easy to install and secure thanks to the relative maturity of its Perl and BioPerl foundations, with stable releases posted to CPAN. Development as well as a publicly accessible Cookbook and Wiki are available on the FAST GitHub repository at https://github.com/tlawrence3/FAST. The default data exchange format in FAST is Multi-FastA (specifically, a restriction of BioPerl FastA format). Sanger and Illumina 1.8+ FastQ formatted files are also supported. FAST makes it easier for non-programmer biologists to interactively investigate and control biological data at the speed of thought.

  2. Model-based bootstrapping when correcting for measurement error with application to logistic regression.

    Science.gov (United States)

    Buonaccorsi, John P; Romeo, Giovanni; Thoresen, Magne

    2018-03-01

    When fitting regression models, measurement error in any of the predictors typically leads to biased coefficients and incorrect inferences. A plethora of methods have been proposed to correct for this. Obtaining standard errors and confidence intervals using the corrected estimators can be challenging and, in addition, there is concern about remaining bias in the corrected estimators. The bootstrap, which is one option to address these problems, has received limited attention in this context. It has usually been employed by simply resampling observations, which, while suitable in some situations, is not always formally justified. In addition, the simple bootstrap does not allow for estimating bias in non-linear models, including logistic regression. Model-based bootstrapping, which can potentially estimate bias in addition to being robust to the original sampling or whether the measurement error variance is constant or not, has received limited attention. However, it faces challenges that are not present in handling regression models with no measurement error. This article develops new methods for model-based bootstrapping when correcting for measurement error in logistic regression with replicate measures. The methodology is illustrated using two examples, and a series of simulations are carried out to assess and compare the simple and model-based bootstrap methods, as well as other standard methods. While not always perfect, the model-based approaches offer some distinct improvements over the other methods. © 2017, The International Biometric Society.
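
    As a contrast to simply resampling observations, the following hedged sketch illustrates a model-based bootstrap for logistic regression with two replicate measures of an error-prone predictor. Regression calibration stands in for the paper's correction step, and the data-generating values and replicate structure are illustrative assumptions, not the authors' exact procedure.

        import numpy as np
        from scipy.special import expit
        from scipy.optimize import minimize

        rng = np.random.default_rng(1)
        n = 500
        X = rng.standard_normal(n)                      # true (unobserved) predictor
        W1 = X + 0.5 * rng.standard_normal(n)           # replicate measures of X
        W2 = X + 0.5 * rng.standard_normal(n)
        y = rng.binomial(1, expit(-0.5 + 1.0 * X))      # binary outcome

        def fit_logistic(x, y):
            def nll(b):
                p = np.clip(expit(b[0] + b[1] * x), 1e-9, 1 - 1e-9)  # avoid log(0)
                return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
            return minimize(nll, np.zeros(2), method="BFGS").x

        def calibrated(w_bar, s2_u):
            # regression calibration: shrink the averaged measure toward its mean
            lam = (np.var(w_bar) - s2_u) / np.var(w_bar)
            return w_bar.mean() + lam * (w_bar - w_bar.mean())

        w_bar = (W1 + W2) / 2
        s2_u = np.mean((W1 - W2) ** 2) / 4              # error variance of w_bar
        b_hat = fit_logistic(calibrated(w_bar, s2_u), y)

        boot = []
        for _ in range(200):                            # model-based bootstrap loop
            Xs = calibrated(w_bar, s2_u)                # plug-in "true" predictor
            ys = rng.binomial(1, expit(b_hat[0] + b_hat[1] * Xs))  # simulate outcomes
            W1s = Xs + np.sqrt(2 * s2_u) * rng.standard_normal(n)  # simulate replicates
            W2s = Xs + np.sqrt(2 * s2_u) * rng.standard_normal(n)
            wb = (W1s + W2s) / 2
            boot.append(fit_logistic(calibrated(wb, np.mean((W1s - W2s) ** 2) / 4), ys))
        boot = np.array(boot)
        print("bias estimate:", boot.mean(axis=0) - b_hat)
        print("bootstrap SE :", boot.std(axis=0))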

  3. 3-D model-based vehicle tracking.

    Science.gov (United States)

    Lou, Jianguang; Tan, Tieniu; Hu, Weiming; Yang, Hao; Maybank, Steven J

    2005-10-01

    This paper aims at tracking vehicles from monocular intensity image sequences and presents an efficient and robust approach to three-dimensional (3-D) model-based vehicle tracking. Under the weak perspective assumption and the ground-plane constraint, the movements of model projection in the two-dimensional image plane can be decomposed into two motions: translation and rotation. They are the results of the corresponding movements of 3-D translation on the ground plane (GP) and rotation around the normal of the GP, which can be determined separately. A new metric based on point-to-line segment distance is proposed to evaluate the similarity between an image region and an instantiation of a 3-D vehicle model under a given pose. Based on this, we provide an efficient pose refinement method to refine the vehicle's pose parameters. An improved EKF is also proposed to track and to predict vehicle motion with a precise kinematics model. Experimental results with both indoor and outdoor data show that the algorithm obtains desirable performance even under severe occlusion and clutter.
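
    The point-to-line-segment metric at the heart of the pose evaluation can be written down compactly; the sketch below is our own minimal implementation, not the authors' code, and the scoring function is a simplified stand-in for their similarity measure.

        import numpy as np

        def point_to_segment(p, a, b):
            """Distance from point p to the segment from a to b (2-D arrays)."""
            ab, ap = b - a, p - a
            t = np.clip(ap @ ab / (ab @ ab), 0.0, 1.0)   # clamp projection to segment
            return np.linalg.norm(p - (a + t * ab))

        def pose_score(edge_points, model_segments):
            """Mean distance from image edge points to the nearest projected model segment."""
            return np.mean([min(point_to_segment(p, a, b) for a, b in model_segments)
                            for p in edge_points])

        # a point one unit above a horizontal segment is at distance 1.0
        print(point_to_segment(np.array([1.0, 1.0]),
                               np.array([0.0, 0.0]), np.array([2.0, 0.0])))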

  4. Intellectual Model-Based Configuration Management Conception

    Directory of Open Access Journals (Sweden)

    Bartusevics Arturs

    2014-07-01

    Full Text Available Software configuration management is one of the most important disciplines within the software development project, which helps control the software evolution process and allows only tested and validated changes to be included in the end project. To achieve this, software configuration management completes certain tasks. Concrete tools, such as version control systems, continuous integration servers, compilers, etc., are used for the technical implementation of these tasks. A correct configuration management process usually requires several tools, which mutually exchange information by generating various kinds of transfers. When the configuration management process is introduced, there are often situations where tool installation is started before there is a general picture of the total process. The article offers a model-based configuration management concept, which foresees the development of an abstract model of the configuration management process that is later transformed into lower-abstraction-level models, with tools indicated to support the technical process. A solution of this kind allows a more rational introduction and configuration of tools.

  5. Zero-inflated Poisson model based likelihood ratio test for drug safety signal detection.

    Science.gov (United States)

    Huang, Lan; Zheng, Dan; Zalkikar, Jyoti; Tiwari, Ram

    2017-02-01

    In recent decades, numerous methods have been developed for data mining of large drug safety databases, such as the Food and Drug Administration's (FDA's) Adverse Event Reporting System, where data matrices are formed with drugs as columns and adverse events as rows. Often, a large number of cells in these data matrices have zero cell counts; some of these are "true zeros", indicating that the drug-adverse event pair cannot occur, and are distinguished from the remaining zero counts, which simply indicate that the drug-adverse event pair has not occurred yet or has not been reported yet. In this paper, a zero-inflated Poisson model based likelihood ratio test method is proposed to identify drug-adverse event pairs that have disproportionately high reporting rates, which are also called signals. The maximum likelihood estimates of the model parameters of the zero-inflated Poisson model based likelihood ratio test are obtained using the expectation and maximization algorithm. The zero-inflated Poisson model based likelihood ratio test is also modified to handle stratified analyses for binary and categorical covariates (e.g. gender and age) in the data. The proposed zero-inflated Poisson model based likelihood ratio test method is shown to asymptotically control the type I error and false discovery rate, and its finite sample performance for signal detection is evaluated through a simulation study. The simulation results show that the zero-inflated Poisson model based likelihood ratio test method performs similarly to the Poisson model based likelihood ratio test method when the estimated percentage of true zeros in the database is small. Both the zero-inflated Poisson model based likelihood ratio test and likelihood ratio test methods are applied to six selected drugs, from the 2006 to 2011 Adverse Event Reporting System database, with varying percentages of observed zero-count cells.
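
    The estimation step named in the abstract can be shown in miniature. The sketch below fits a zero-inflated Poisson model to a single vector of counts by expectation-maximization; the full likelihood ratio test over a drug-event matrix is omitted, and the simulated data are our own.

        import numpy as np

        def zip_em(y, n_iter=100):
            """EM fit of a zero-inflated Poisson: y ~ pi*delta_0 + (1-pi)*Poisson(lam)."""
            pi, lam = 0.5, max(y.mean(), 1e-6)
            for _ in range(n_iter):
                # E-step: posterior probability that an observed zero is a structural zero
                p0 = pi / (pi + (1 - pi) * np.exp(-lam))
                z = np.where(y == 0, p0, 0.0)
                # M-step: re-estimate mixing weight and Poisson mean
                pi = z.mean()
                lam = np.sum((1 - z) * y) / np.sum(1 - z)
            return pi, lam

        rng = np.random.default_rng(2)
        y = np.where(rng.random(5000) < 0.3, 0, rng.poisson(2.5, 5000))  # 30% structural zeros
        print(zip_em(y))  # -> roughly (0.30, 2.5)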

  6. Fast reactor programme

    International Nuclear Information System (INIS)

    Hoekstra, E.K.

    1976-11-01

    Estimated reactivity effects of fission products in the SNR-300 fast breeder are given. Neutron cross sections of 127I and 129I are also given. Results of the in-pile canning failure experiments on fuel pins R54-F35 and F39 are discussed. Sinter experiments using mixed UC-UN powders are reported. Results of tensile tests on high-dose and low-dose irradiated specimens of 18Cr11Ni stainless steel (DIN 1.4948) used in the SNR-300 reactor vessel are given. It is shown that the aerosol behaviour in condensing sodium vapour can be described by the same MADCA model developed for the decay of aerosols in condensing water vapour. Results of heat transfer measurements in the electrically heated 28-rod bundle under liquid-phase and subsequently under two-phase conditions are commented on.

  7. Fasting and rheumatic diseases

    OpenAIRE

    Mohammad Hassan Jokar

    2015-01-01

    Fasting is one of the important religious practices of Muslims, in which the individual abstains from eating and drinking from dawn to sunset. Fasting is not obligatory, and is even not allowed, in case it causes health problems to the fasting individual. Rheumatic diseases are a major group of chronic diseases which can bring about numerous problems during fasting. The aim of this article is to review the impact of Islamic fasting on rheumatic patients, based on the scientific evidence.

  8. Model based control of refrigeration systems

    Energy Technology Data Exchange (ETDEWEB)

    Sloth Larsen, L.F.

    2005-11-15

    The subject for this Ph.D. thesis is model based control of refrigeration systems. Model based control covers a variety of different types of control that incorporate mathematical models. In this thesis the main subject therefore has been restricted to deal with system optimizing control. The optimizing control is divided into two layers, where the system-oriented top layer deals with set-point optimizing control and the lower layer deals with dynamical optimizing control in the subsystems. The thesis has two main contributions, i.e. a novel approach for set-point optimization and a novel approach for desynchronization based on dynamical optimization. The focus in the development of the proposed set-point optimizing control has been on deriving a simple and general method that can easily be applied to various compositions of the same class of systems, such as refrigeration systems. The method is based on a set of parameter-dependent static equations describing the considered process. By adapting the parameters to the given process, predicting the steady state and computing a steady-state gradient of the cost function, the process can be driven continuously towards zero gradient, i.e. the optimum (if the cost function is convex). The method furthermore deals with system constraints by introducing barrier functions; hereby the best possible performance, taking the given constraints into account, can be obtained, e.g. under extreme operational conditions. The proposed method has been applied on a test refrigeration system, placed at Aalborg University, for minimization of the energy consumption. Here it was proved that by using general static parameter-dependent system equations it was possible to drive the set-points close to the optimum and thus reduce the power consumption by up to 20%. In the dynamical optimizing layer the idea is to optimize the operation of the subsystem or the groupings of subsystems that limit the obtainable system performance. In systems
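
    A toy version of the set-point optimizing idea, with an assumed quadratic power model standing in for the refrigeration system: the set-point is driven continuously toward zero cost gradient, and a log-barrier keeps it inside its operational constraints. All functions and values below are illustrative assumptions, not the thesis implementation.

        import numpy as np

        def power(u):                      # hypothetical steady-state power model
            return (u - 7.0) ** 2 + 3.0

        def barrier_cost(u, u_min=2.0, u_max=12.0, mu=0.05):
            # cost plus log-barrier terms enforcing u_min < u < u_max
            return power(u) - mu * (np.log(u - u_min) + np.log(u_max - u))

        u = 3.0                            # initial set-point
        for _ in range(200):               # continuous gradient-driven adaptation
            eps = 1e-5
            grad = (barrier_cost(u + eps) - barrier_cost(u - eps)) / (2 * eps)
            u -= 0.05 * grad               # step toward zero gradient (the optimum)
        print("optimized set-point:", round(u, 3))   # near 7, nudged by the barrier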

  9. Model Based Autonomy for Robust Mars Operations

    Science.gov (United States)

    Kurien, James A.; Nayak, P. Pandurang; Williams, Brian C.; Lau, Sonie (Technical Monitor)

    1998-01-01

    Space missions have historically relied upon a large ground staff, numbering in the hundreds for complex missions, to maintain routine operations. When an anomaly occurs, this small army of engineers attempts to identify and work around the problem. A piloted Mars mission, with its multiyear duration, cost pressures, half-hour communication delays and two-week blackouts cannot be closely controlled by a battalion of engineers on Earth. Flight crew involvement in routine system operations must also be minimized to maximize science return. It also may be unrealistic to require that the crew have the expertise in each mission subsystem needed to diagnose a system failure and effect a timely repair, as engineers did for Apollo 13. Enter model-based autonomy, which allows complex systems to autonomously maintain operation despite failures or anomalous conditions, contributing to safe, robust, and minimally supervised operation of spacecraft, life support, In Situ Resource Utilization (ISRU) and power systems. Autonomous reasoning is central to the approach. A reasoning algorithm uses a logical or mathematical model of a system to infer how to operate the system, diagnose failures and generate appropriate behavior to repair or reconfigure the system in response. The 'plug and play' nature of the models enables low cost development of autonomy for multiple platforms. Declarative, reusable models capture relevant aspects of the behavior of simple devices (e.g. valves or thrusters). Reasoning algorithms combine device models to create a model of the system-wide interactions and behavior of a complex, unique artifact such as a spacecraft. Rather than requiring engineers to anticipate all possible interactions and failures at design time or perform analysis during the mission, the reasoning engine generates the appropriate response to the current situation, taking into account its system-wide knowledge, the current state, and even sensor failures or unexpected behavior.

  10. A prospective audit of preprocedural fasting practices on a transplant ward: when fasting becomes starving.

    Science.gov (United States)

    Vidot, Helen; Teevan, Kate; Carey, Sharon; Strasser, Simone; Shackel, Nicholas

    2016-03-01

    To investigate the prevalence and duration of preprocedural medically ordered fasting during a period of hospitalisation in an Australian population of patients with hepatic cirrhosis or following liver transplantation and to identify potential solutions to reduce fasting times. Protein-energy malnutrition is a common finding in patients with hepatic cirrhosis and can impact significantly on survival and quality of life. Protein and energy requirements in patients with cirrhosis are higher than those of healthy individuals. A significant feature of cirrhosis is the induction of starvation metabolism following seven to eight hours of food deprivation. Many investigative and interventional procedures for patients with cirrhosis necessitate a period of fasting to comply with anaesthesia guidelines. An observational study of the fasting episodes for 34 hospitalised patients with hepatic cirrhosis or following liver transplantation was conducted. Nutritional status was estimated using subjective global assessment and handgrip strength. The prevalence and duration of fasting practices for diagnostic or investigational procedures were estimated using electronic records and patient notes. Thirty-three patients (97%) were malnourished. Twenty-two patients (65%) were fasted during the observation period. There were 43 occasions of fasting with a median fasting time of 13·5 hours. On 40 occasions fasting times exceeded the maximum six-hour guideline recommended prior to the administration of anaesthesia by the majority of Anaesthesiology Societies. The majority of procedures (77%) requiring fasting occurred after midday. Eating breakfast on the day of the procedure reduced fasting time by 45%. Medically ordered preprocedural fasting times almost always exceed existing guidelines in this nutritionally compromised group. Adherence to fasting guidelines and eating breakfast before the procedure can reduce fasting times significantly and avoid the potential induction of starvation metabolism.

  11. Uncertainties in model-based outcome predictions for treatment planning

    International Nuclear Information System (INIS)

    Deasy, Joseph O.; Chao, K.S. Clifford; Markman, Jerry

    2001-01-01

    Purpose: Model-based treatment-plan-specific outcome predictions (such as normal tissue complication probability [NTCP] or the relative reduction in salivary function) are typically presented without reference to underlying uncertainties. We provide a method to assess the reliability of treatment-plan-specific dose-volume outcome model predictions. Methods and Materials: A practical method is proposed for evaluating model prediction based on the original input data together with bootstrap-based estimates of parameter uncertainties. The general framework is applicable to continuous variable predictions (e.g., prediction of long-term salivary function) and dichotomous variable predictions (e.g., tumor control probability [TCP] or NTCP). Using bootstrap resampling, a histogram of the likelihood of alternative parameter values is generated. For a given patient and treatment plan we generate a histogram of alternative model results by computing the model predicted outcome for each parameter set in the bootstrap list. Residual uncertainty ('noise') is accounted for by adding a random component to the computed outcome values. The residual noise distribution is estimated from the original fit between model predictions and patient data. Results: The method is demonstrated using a continuous-endpoint model to predict long-term salivary function for head-and-neck cancer patients. Histograms represent the probabilities for the level of posttreatment salivary function based on the input clinical data, the salivary function model, and the three-dimensional dose distribution. For some patients there is significant uncertainty in the prediction of xerostomia, whereas for other patients the predictions are expected to be more reliable. In contrast, TCP and NTCP endpoints are dichotomous, and parameter uncertainties should be folded directly into the estimated probabilities, thereby improving the accuracy of the estimates. Using bootstrap parameter estimates, competing treatment
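
    A schematic version of the procedure on synthetic data (the dose-response model, parameter values and noise level below are our own assumptions, not the paper's fitted salivary-function model): bootstrap the patients, refit on each replicate, then turn one new plan's prediction into a histogram by evaluating every bootstrap parameter set and adding residual noise estimated from the original fit.

        import numpy as np
        from scipy.optimize import curve_fit

        rng = np.random.default_rng(3)
        dose = rng.uniform(10, 60, 80)                                # mean dose (Gy)
        y = np.exp(-0.04 * dose) + 0.08 * rng.standard_normal(80)     # relative saliva flow

        model = lambda d, b: np.exp(-b * d)
        b0, _ = curve_fit(model, dose, y, p0=[0.05])
        resid_sd = np.std(y - model(dose, *b0))                       # residual "noise"

        b_boot = []
        for _ in range(1000):                                         # bootstrap parameter list
            i = rng.integers(0, len(dose), len(dose))
            b_boot.append(curve_fit(model, dose[i], y[i], p0=[0.05])[0][0])

        plan_dose = 35.0                                              # a new treatment plan
        samples = model(plan_dose, np.array(b_boot)) + resid_sd * rng.standard_normal(1000)
        print("predicted function: %.2f (90%% band %.2f-%.2f)"
              % (samples.mean(), *np.percentile(samples, [5, 95])))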

  12. Fast determination of plasma parameters

    International Nuclear Information System (INIS)

    Wijnands, T.J.; Parlange, F.; Joffrin, E.

    1995-01-01

    Fast analysis of diagnostic signals of a tokamak discharge is demonstrated by using 4 fundamentally different techniques. A comparison between Function Parametrization (FP), Canonical Correlation Analysis (CCA) and a particular Neural Network (NN) configuration known as the Multi Layer Perceptron (MLP) is carried out, thereby taking a unique linear model based on a Singular Value Decomposition (SVD) as a reference. The various techniques all provide functional representations of characteristic plasma parameters in terms of the values of the measurements and are based on an analysis of a large, experimentally obtained database. A brief mathematical description of the various techniques is given, followed by two particular applications to Tore Supra diagnostic data. The first problem is concerned with the identification of the plasma boundary parameters using the poloidal field and differential poloidal flux measurements. A second application involves the interpretation of line-integrated data from the multichannel interfero-polarimeter to obtain the central value of the safety factor. (author) 4 refs.; 3 figs
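
    The SVD-based linear reference model can be sketched in a few lines: a linear map from diagnostic signals to a plasma parameter is fitted over a database of past discharges via the pseudo-inverse, then applied to a new discharge. The data below are synthetic stand-ins; only the technique, not Tore Supra's actual database, is illustrated.

        import numpy as np

        rng = np.random.default_rng(4)
        n_shots, n_signals = 2000, 12
        X = rng.standard_normal((n_shots, n_signals))          # database of measurements
        w_true = rng.standard_normal(n_signals)
        q0 = X @ w_true + 0.05 * rng.standard_normal(n_shots)  # e.g. central safety factor

        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        s_inv = np.where(s > 1e-8 * s[0], 1.0 / s, 0.0)        # truncate tiny singular values
        w_hat = Vt.T @ (s_inv * (U.T @ q0))                    # least-squares coefficients

        x_new = rng.standard_normal(n_signals)                 # a new discharge's signals
        print("fast estimate of q0:", x_new @ w_hat)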

  13. Mars 2020 Model Based Systems Engineering Pilot

    Science.gov (United States)

    Dukes, Alexandra Marie

    2017-01-01

    The pilot study is led by the Integration Engineering group in NASA's Launch Services Program (LSP). The Integration Engineering (IE) group is responsible for managing the interfaces between the spacecraft and launch vehicle. This pilot investigates the utility of Model-Based Systems Engineering (MBSE) with respect to managing and verifying interface requirements. The main objectives of the pilot are to model several key aspects of the Mars 2020 integrated operations and interface requirements based on the design and verification artifacts from Mars Science Laboratory (MSL) and to demonstrate how MBSE could be used by LSP to gain further insight on the interface between the spacecraft and launch vehicle as well as to enhance how LSP manages the launch service. The method used to accomplish this pilot started with familiarization with SysML, MagicDraw, and the Mars 2020 and MSL systems through books, tutorials, and NASA documentation. MSL was chosen as the focus of the model since its processes and verifications translate easily to the Mars 2020 mission. The study was further focused by modeling specialized systems and processes within MSL in order to demonstrate the utility of MBSE for the rest of the mission. The systems chosen were the In-Flight Disconnect (IFD) system and the Mass Properties process. The IFD was chosen as a system of focus since it is an interface between the spacecraft and launch vehicle which can demonstrate the usefulness of MBSE from a system perspective. The Mass Properties process was chosen as a process of focus since the verifications for mass properties occur throughout the lifecycle and can demonstrate the usefulness of MBSE from a multi-discipline perspective. Several iterations of both perspectives have been modeled and evaluated. While the pilot study will continue for another 2 weeks, pros and cons of using MBSE for LSP IE have been identified. A pro of using MBSE includes an integrated view of the disciplines, requirements, and

  14. Model-based accelerator controls: What, why and how

    International Nuclear Information System (INIS)

    Sidhu, S.S.

    1987-01-01

    Model-based control is defined as a gamut of techniques whose aim is to improve the reliability of an accelerator and enhance the capabilities of the operator, and therefore of the whole control system. The aim of model-based control is seen as gradually moving the function of model-reference from the operator to the computer. The role of the operator in accelerator control and the need for and application of model-based control are briefly summarized

  15. HCUP Fast Stats

    Data.gov (United States)

    U.S. Department of Health & Human Services — HCUP Fast Stats provides easy access to the latest HCUP-based statistics for health information topics. HCUP Fast Stats uses visual statistical displays in...

  16. Fast food (image)

    Science.gov (United States)

    Fast foods are quick, reasonably priced, and readily available alternatives to home cooking. While convenient and economical for a busy lifestyle, fast foods are typically high in calories, fat, saturated fat, ...

  17. Fast food tips (image)

    Science.gov (United States)

    ... challenge to eat healthy when going to a fast food place. In general, avoiding items that are deep ...

  18. Cognitive components underpinning the development of model-based learning.

    Science.gov (United States)

    Potter, Tracey C S; Bryce, Nessa V; Hartley, Catherine A

    2017-06-01

    Reinforcement learning theory distinguishes "model-free" learning, which fosters reflexive repetition of previously rewarded actions, from "model-based" learning, which recruits a mental model of the environment to flexibly select goal-directed actions. Whereas model-free learning is evident across development, recruitment of model-based learning appears to increase with age. However, the cognitive processes underlying the development of model-based learning remain poorly characterized. Here, we examined whether age-related differences in cognitive processes underlying the construction and flexible recruitment of mental models predict developmental increases in model-based choice. In a cohort of participants aged 9-25, we examined whether the abilities to infer sequential regularities in the environment ("statistical learning"), maintain information in an active state ("working memory") and integrate distant concepts to solve problems ("fluid reasoning") predicted age-related improvements in model-based choice. We found that age-related improvements in statistical learning performance did not mediate the relationship between age and model-based choice. Ceiling performance on our working memory assay prevented examination of its contribution to model-based learning. However, age-related improvements in fluid reasoning statistically mediated the developmental increase in the recruitment of a model-based strategy. These findings suggest that gradual development of fluid reasoning may be a critical component process underlying the emergence of model-based learning. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  19. A Neuro-Fuzzy Multi Swarm FastSLAM Framework

    OpenAIRE

    Havangi, R.; Teshnehlab, M.; Nekoui, M. A.

    2010-01-01

    FastSLAM is a framework for simultaneous localization and mapping using a Rao-Blackwellized particle filter. In FastSLAM, a particle filter is used for estimating the mobile robot pose (position and orientation), and an Extended Kalman Filter (EKF) is used for estimating the feature locations. However, FastSLAM degenerates over time. This degeneracy is due to the fact that the particle set estimating the pose of the robot loses its diversity. One of the main reasons for losing particle diversity in FastSLA...
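
    The diversity loss mentioned above is commonly monitored through the effective sample size; the generic sketch below (not the paper's neuro-fuzzy method) resamples the pose particles only when the effective sample size falls below a threshold, which slows the degeneracy.

        import numpy as np

        def maybe_resample(poses, weights, rng, threshold=0.5):
            """Multinomial resampling triggered by low effective sample size."""
            w = weights / weights.sum()
            n_eff = 1.0 / np.sum(w ** 2)               # effective sample size
            if n_eff < threshold * len(w):             # diversity too low
                idx = rng.choice(len(w), size=len(w), p=w)
                return poses[idx], np.full(len(w), 1.0 / len(w))
            return poses, w

        rng = np.random.default_rng(5)
        poses = rng.standard_normal((100, 3))          # particle poses [x, y, heading]
        weights = rng.random(100) ** 4                 # skewed weights -> low N_eff
        poses, weights = maybe_resample(poses, weights, rng)
        print("effective sample size now:", 1.0 / np.sum(weights ** 2))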

  20. Neighborhood fast food restaurants and fast food consumption: a national study.

    Science.gov (United States)

    Richardson, Andrea S; Boone-Heinonen, Janne; Popkin, Barry M; Gordon-Larsen, Penny

    2011-07-08

    Recent studies suggest that neighborhood fast food restaurant availability is related to greater obesity, yet few studies have investigated whether neighborhood fast food restaurant availability promotes fast food consumption. Our aim was to estimate the effect of neighborhood fast food availability on frequency of fast food consumption in a national sample of young adults, a population at high risk for obesity. We used national data from U.S. young adults enrolled in wave III (2001-02; ages 18-28) of the National Longitudinal Study of Adolescent Health (n = 13,150). Urbanicity-stratified multivariate negative binomial regression models were used to examine cross-sectional associations between neighborhood fast food availability and individual-level self-reported fast food consumption frequency, controlling for individual and neighborhood characteristics. In adjusted analysis, fast food availability was not associated with weekly frequency of fast food consumption in non-urban or low- or high-density urban areas. Policies aiming to reduce neighborhood availability as a means to reduce fast food consumption among young adults may be unsuccessful. Consideration of fast food outlets near school or workplace locations, factors specific to more or less urban settings, and the role of individual lifestyle attitudes and preferences are needed in future research.

  1. Neighborhood fast food restaurants and fast food consumption: A national study

    Directory of Open Access Journals (Sweden)

    Gordon-Larsen Penny

    2011-07-01

    Full Text Available Abstract Background Recent studies suggest that neighborhood fast food restaurant availability is related to greater obesity, yet few studies have investigated whether neighborhood fast food restaurant availability promotes fast food consumption. Our aim was to estimate the effect of neighborhood fast food availability on frequency of fast food consumption in a national sample of young adults, a population at high risk for obesity. Methods We used national data from U.S. young adults enrolled in wave III (2001-02; ages 18-28) of the National Longitudinal Study of Adolescent Health (n = 13,150). Urbanicity-stratified multivariate negative binomial regression models were used to examine cross-sectional associations between neighborhood fast food availability and individual-level self-reported fast food consumption frequency, controlling for individual and neighborhood characteristics. Results In adjusted analysis, fast food availability was not associated with weekly frequency of fast food consumption in non-urban or low- or high-density urban areas. Conclusions Policies aiming to reduce neighborhood availability as a means to reduce fast food consumption among young adults may be unsuccessful. Consideration of fast food outlets near school or workplace locations, factors specific to more or less urban settings, and the role of individual lifestyle attitudes and preferences are needed in future research.

  2. Physiology of Ramadan fasting

    OpenAIRE

    Shokoufeh Bonakdaran

    2016-01-01

    Considering the emphasis of Islam on the importance of fasting, Muslims attempt to fast from dawn until sunset during the holy month of Ramadan. Fasting is associated with several benefits for normal and healthy individuals. However, it could pose high risks to the health of diabetic patients due to certain physiological changes. This study aimed to compare the physiological changes associated with fasting in healthy individuals and diabetic patients during Ramadan. Furthermore, we reviewed t...

  3. Profile control simulations and experiments on TCV: a controller test environment and results using a model-based predictive controller

    Science.gov (United States)

    Maljaars, E.; Felici, F.; Blanken, T. C.; Galperti, C.; Sauter, O.; de Baar, M. R.; Carpanese, F.; Goodman, T. P.; Kim, D.; Kim, S. H.; Kong, M.; Mavkov, B.; Merle, A.; Moret, J. M.; Nouailletas, R.; Scheffer, M.; Teplukhina, A. A.; Vu, N. M. T.; The EUROfusion MST1-team; The TCV-team

    2017-12-01

    The successful performance of a model predictive profile controller is demonstrated in simulations and experiments on the TCV tokamak, employing a profile controller test environment. Stable high-performance tokamak operation in hybrid and advanced plasma scenarios requires control over the safety factor profile (q-profile) and kinetic plasma parameters such as the plasma beta. This demands that reliable profile control routines be established in presently operational tokamaks. We present a model predictive profile controller that controls the q-profile and plasma beta using power requests to two clusters of gyrotrons and the plasma current request. The performance of the controller is analyzed in both simulations and TCV L-mode discharges, where successful tracking of the estimated inverse q-profile as well as plasma beta is demonstrated under uncertain plasma conditions and in the presence of disturbances. The controller exploits the knowledge of the time-varying actuator limits in the actuator input calculation itself, such that fast transitions between targets are achieved without overshoot. A software environment is employed to prepare and test this and three other profile controllers in parallel in simulations and experiments on TCV. This set of tools includes the rapid plasma transport simulator RAPTOR and various algorithms to reconstruct the plasma equilibrium and plasma profiles by merging the available measurements with model-based predictions. In this work the estimated q-profile is merely based on RAPTOR model predictions due to the absence of internal current density measurements in TCV. These results encourage the further exploitation of model predictive profile control in experiments on TCV and other (future) tokamaks.
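
    The receding-horizon principle behind such a controller can be illustrated with a toy scalar plant in place of the q-profile and beta dynamics; the model, horizon and actuator limit below are arbitrary illustrative choices, with the constraint handled inside the input optimization just as the abstract describes for the actuator limits.

        import numpy as np
        from scipy.optimize import minimize

        A, B = 0.9, 0.5                                      # toy plant: x_{k+1} = A x_k + B u_k
        H, u_max = 10, 1.0                                   # horizon and actuator limit

        def predict_cost(u_seq, x0, target):
            x, cost = x0, 0.0
            for u in u_seq:                                  # simulate the model forward
                x = A * x + B * u
                cost += (x - target) ** 2 + 0.01 * u ** 2
            return cost

        x, target = 0.0, 2.0
        for k in range(20):                                  # receding-horizon loop
            res = minimize(predict_cost, np.zeros(H), args=(x, target),
                           bounds=[(-u_max, u_max)] * H)     # actuator limits in the optimizer
            u0 = res.x[0]                                    # apply only the first input
            x = A * x + B * u0                               # "real" plant step (here = model)
        print("state after 20 steps:", round(x, 3))          # tracks the target of 2.0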

  4. Model-based normalization for iterative 3D PET image

    International Nuclear Information System (INIS)

    Bai, B.; Li, Q.; Asma, E.; Leahy, R.M.; Holdsworth, C.H.; Chatziioannou, A.; Tai, Y.C.

    2002-01-01

    We describe a method for normalization in 3D PET for use with maximum a posteriori (MAP) or other iterative model-based image reconstruction methods. This approach is an extension of previous factored normalization methods in which we include separate factors for detector sensitivity, geometric response, block effects and deadtime. Since our MAP reconstruction approach already models some of the geometric factors in the forward projection, the normalization factors must be modified to account only for effects not already included in the model. We describe a maximum likelihood approach to joint estimation of the count-rate independent normalization factors, which we apply to data from a uniform cylindrical source. We then compute block-wise and block-profile deadtime correction factors using singles and coincidence data, respectively, from a multiframe cylindrical source. We have applied this method for reconstruction of data from the Concorde microPET P4 scanner. Quantitative evaluation of this method using well-counter measurements of activity in a multicompartment phantom compares favourably with normalization based directly on cylindrical source measurements. (author)

  5. A General Accelerated Degradation Model Based on the Wiener Process

    Directory of Open Access Journals (Sweden)

    Le Liu

    2016-12-01

    Full Text Available Accelerated degradation testing (ADT) is an efficient tool for conducting material service reliability and safety evaluations by analyzing performance degradation data. Traditional stochastic process models are mainly for linear or linearized degradation paths. However, those methods are not applicable in situations where the degradation process cannot be linearized. Hence, in this paper, a general ADT model based on the Wiener process is proposed to solve the problem of accelerated degradation data analysis. The general model can consider the unit-to-unit variation and temporal variation of the degradation process, and is suitable for both linear and nonlinear ADT analyses with single or multiple acceleration variables. The statistical inference is given to estimate the unknown parameters in both constant stress and step stress ADT. The simulation example and two real applications demonstrate that the proposed method can yield reliable lifetime evaluation results compared with the existing linear and time-scale transformation Wiener processes in both linear and nonlinear ADT analyses.
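
    A minimal sketch of the underlying model: a Wiener process with drift on a nonlinear time scale Lambda(t) = t^b (our choice, to mimic a non-linearizable path), with the drift and diffusion parameters recovered from increments by maximum likelihood; b is assumed known here for simplicity, whereas the paper estimates all parameters jointly.

        import numpy as np

        rng = np.random.default_rng(6)
        mu, sigma, b = 0.8, 0.2, 1.4
        t = np.linspace(0, 10, 201)
        lam = t ** b                                    # transformed (nonlinear) time scale
        dlam = np.diff(lam)
        increments = mu * dlam + sigma * np.sqrt(dlam) * rng.standard_normal(len(dlam))
        x = np.concatenate([[0.0], np.cumsum(increments)])   # simulated degradation path

        dx = np.diff(x)
        mu_hat = dx.sum() / dlam.sum()                  # MLE of the drift
        sigma2_hat = np.mean((dx - mu_hat * dlam) ** 2 / dlam)  # MLE of the diffusion
        print("mu_hat=%.3f sigma_hat=%.3f" % (mu_hat, np.sqrt(sigma2_hat)))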

  6. Model-based Quantile Regression for Discrete Data

    KAUST Repository

    Padellini, Tullia

    2018-04-10

    Quantile regression is a class of methods devoted to the modelling of conditional quantiles. In a Bayesian framework quantile regression has typically been carried out exploiting the Asymmetric Laplace Distribution as a working likelihood. Despite the fact that this leads to a proper posterior for the regression coefficients, the resulting posterior variance is affected by an unidentifiable parameter, hence any inferential procedure beside point estimation is unreliable. We propose a model-based approach for quantile regression that considers quantiles of the generating distribution directly, and thus allows for a proper uncertainty quantification. We then create a link between quantile regression and generalised linear models by mapping the quantiles to the parameter of the response variable, and we exploit it to fit the model with R-INLA. We also extend the approach to discrete responses, where there is no 1-to-1 relationship between quantiles and the distribution's parameter, by introducing continuous generalisations of the most common discrete variables (Poisson, Binomial and Negative Binomial) to be exploited in the fitting.

  7. Ratio-based vs. model-based methods to correct for urinary creatinine concentrations.

    Science.gov (United States)

    Jain, Ram B

    2016-08-01

    Creatinine-corrected urinary analyte concentration is usually computed as the ratio of the observed level of analyte concentration divided by the observed level of the urinary creatinine concentration (UCR). This ratio-based method is flawed since it implicitly assumes that hydration is the only factor that affects urinary creatinine concentrations. On the contrary, it has been shown in the literature that age, gender, race/ethnicity, and other factors also affect UCR. Consequently, an optimal method to correct for UCR should correct for hydration as well as other factors like age, gender, and race/ethnicity that affect UCR. Model-based creatinine correction, in which observed UCRs are used as an independent variable in regression models, has been proposed. This study was conducted to evaluate the performance of ratio-based and model-based creatinine correction methods when the effects of gender, age, and race/ethnicity are evaluated one factor at a time for selected urinary analytes and metabolites. It was observed that the ratio-based method leads to statistically significant pairwise differences, for example, between males and females or between non-Hispanic whites (NHW) and non-Hispanic blacks (NHB), more often than the model-based method. However, depending upon the analyte of interest, the reverse is also possible. The estimated ratios of geometric means (GM), for example, male to female or NHW to NHB, were also compared for the two methods. When estimated UCRs were higher for the group (for example, males) in the numerator of this ratio, these ratios were higher for the model-based method, for example, the male to female ratio of GMs. When estimated UCRs were lower for the group (for example, NHW) in the numerator of this ratio, these ratios were higher for the ratio-based method, for example, the NHW to NHB ratio of GMs. The model-based method is the method of choice if all factors that affect UCR are to be accounted for.

  8. The prototype fast reactor

    International Nuclear Information System (INIS)

    Broomfield, A.M.

    1985-01-01

    The paper concerns the Prototype Fast Reactor (PFR), which is a liquid metal cooled fast reactor power station, situated at Dounreay, Scotland. The principal design features of a Fast Reactor and the PFR are given, along with key points of operating history, and health and safety features. The role of the PFR in the development programme for commercial reactors is discussed. (U.K.)

  9. The fast reactor

    International Nuclear Information System (INIS)

    1980-02-01

    The subject is discussed as follows: brief description of fast reactors; advantage in conserving uranium resources; experience, in UK and elsewhere, in fast reactor design, construction and operation; safety; production of plutonium, security aspects; consideration of future UK fast reactor programme. (U.K.)

  10. Ramadan, fasting and pregnancy

    DEFF Research Database (Denmark)

    Ahmed, Urfan Zahoor; Lykke, Jacob Alexander

    2014-01-01

    In Islam, the month of Ramadan is a period of fasting lasting 29 or 30 days. Epidemiological studies among Muslims in Denmark have not been conducted, but studies show, that fasting among pregnant Muslim women is common. Fasting does not increase the risk of growth restriction or preterm delivery...

  11. Promotion and Fast Food Demand: Where's the Beef?

    OpenAIRE

    Richards, Timothy J.; Padilla, Luis

    2007-01-01

    Many believe that fast food promotion is a significant cause of the obesity epidemic in North America. Industry members argue that promotion only reallocates brand shares and does not increase overall demand. This study weighs in on the debate by specifying and estimating a discrete/continuous model of fast food restaurant choice and food expenditure that explicitly accounts for both spatial and temporal determinants of demand. Estimates are obtained using a unique panel of Canadian fast food ...

  12. Ramadan, fasting and pregnancy

    DEFF Research Database (Denmark)

    Ahmed, Urfan Zahoor; Lykke, Jacob Alexander

    2014-01-01

    In Islam, the month of Ramadan is a period of fasting lasting 29 or 30 days. Epidemiological studies among Muslims in Denmark have not been conducted, but studies show, that fasting among pregnant Muslim women is common. Fasting does not increase the risk of growth restriction or preterm delivery......, but there are reports of decreased foetal movements. Furthermore, the fasting may have long-term health consequences for the offspring, especially when they reach their middle age. According to Islam and the interpretation, pregnant and breast-feeding women are allowed to postpone the fasting of the month of Ramadan...

  13. Ramadan, fasting and pregnancy

    DEFF Research Database (Denmark)

    Ahmed, Urfan Zahoor; Lykke, Jacob Alexander

    2014-01-01

    In Islam, the month of Ramadan is a period of fasting lasting 29 or 30 days. Epidemiological studies among Muslims in Denmark have not been conducted, but studies show, that fasting among pregnant Muslim women is common. Fasting does not increase the risk of growth restriction or preterm delivery......, but there are reports of decreased foetal movements. Furthermore, the fasting may have long-term health consequences for the offspring, especially when they reach their middle age. According to Islam and the interpretation, pregnant and breast-feeding women are allowed to postpone the fasting of the month of Ramadan...

  14. HB-Line Special Nuclear Material Campaigns: Model-Based Project Management

    International Nuclear Information System (INIS)

    CHANG, ROBERT

    2004-01-01

    This study shows how a model was used to enable management to better estimate production capabilities, to ensure contract milestones/commitments are met, to cope with fast-changing project baselines and project missions, to ensure the project will meet the negotiated throughput, and to eliminate unnecessary but costly design changes.

  15. Enabling Accessibility Through Model-Based User Interface Development.

    Science.gov (United States)

    Ziegler, Daniel; Peissner, Matthias

    2017-01-01

    Adaptive user interfaces (AUIs) can increase the accessibility of interactive systems. They provide personalized display and interaction modes to fit individual user needs. Most AUI approaches rely on model-based development, which is considered relatively demanding. This paper explores strategies to make model-based development more attractive for mainstream developers.

  16. Model-Based Software Testing for Object-Oriented Software

    Science.gov (United States)

    Biju, Soly Mathew

    2008-01-01

    Model-based testing is one of the best solutions for testing object-oriented software. It has a better test coverage than other testing styles. Model-based testing takes into consideration behavioural aspects of a class, which are usually unchecked in other testing methods. An increase in the complexity of software has forced the software industry…

  17. Towards automatic model based controller design for reconfigurable plants

    DEFF Research Database (Denmark)

    Michelsen, Axel Gottlieb; Stoustrup, Jakob; Izadi-Zamanabadi, Roozbeh

    2008-01-01

    This paper introduces model-based Plug and Play Process Control, a novel concept for process control, which allows a model-based control system to be reconfigured when a sensor or an actuator is plugged into a controlled process. The work reported in this paper focuses on composing a monolithic m...

  18. Model based design introduction: modeling game controllers to microprocessor architectures

    Science.gov (United States)

    Jungwirth, Patrick; Badawy, Abdel-Hameed

    2017-04-01

    We present an introduction to model based design. Model based design is a visual representation, generally a block diagram, used to model and incrementally develop a complex system. Model based design is a commonly used design methodology for digital signal processing, control systems, and embedded systems. Model based design's philosophy is to solve a problem one step at a time. The approach can be compared to a series of steps that converge to a solution. A block diagram simulation tool allows a design to be simulated with real world measurement data. For example, if an analog control system is being upgraded to a digital control system, the analog sensor input signals can be recorded. The digital control algorithm can be simulated with the real world sensor data. The output from the simulated digital control system can then be compared to the old analog based control system. Model based design can be compared to Agile software development. The Agile software development goal is to develop working software in incremental steps. Progress is measured in completed and tested code units. Progress in model based design is measured in completed and tested blocks. We present a concept for a video game controller and then use model based design to iterate the design towards a working system. We will also describe a model based design effort to develop an OS Friendly Microprocessor Architecture based on the RISC-V.

  19. Learning of Chemical Equilibrium through Modelling-Based Teaching

    Science.gov (United States)

    Maia, Poliana Flavia; Justi, Rosaria

    2009-01-01

    This paper presents and discusses students' learning process of chemical equilibrium from a modelling-based approach developed from the use of the "Model of Modelling" diagram. The investigation was conducted in a regular classroom (students 14-15 years old) and aimed at discussing how modelling-based teaching can contribute to students…

  20. Mechanics and model-based control of advanced engineering systems

    CERN Document Server

    Irschik, Hans; Krommer, Michael

    2014-01-01

    Mechanics and Model-Based Control of Advanced Engineering Systems collects 32 contributions presented at the International Workshop on Advanced Dynamics and Model Based Control of Structures and Machines, which took place in St. Petersburg, Russia in July 2012. The workshop continued a series of international workshops, which started with a Japan-Austria Joint Workshop on Mechanics and Model Based Control of Smart Materials and Structures and a Russia-Austria Joint Workshop on Advanced Dynamics and Model Based Control of Structures and Machines. In the present volume, 10 full-length papers based on presentations from Russia, 9 from Austria, 8 from Japan, 3 from Italy, one from Germany and one from Taiwan are included, which represent the state of the art in the field of mechanics and model based control, with particular emphasis on the application of advanced structures and machines.

  1. A Time-Series Water Level Forecasting Model Based on Imputation and Variable Selection Method

    OpenAIRE

    Jun-He Yang; Ching-Hsue Cheng; Chia-Pan Chan

    2017-01-01

    Reservoirs are important for households and impact the national economy. This paper proposes a time-series forecasting model based on estimating a missing value followed by variable selection to forecast the reservoir's water level. This study collected data from the Taiwan Shimen Reservoir as well as daily atmospheric data from 2008 to 2015. The two datasets are concatenated, based on the ordering of the data, into an integrated research dataset. The proposed time-series forecasting m...

  2. Model-based PEEP optimisation in mechanical ventilation

    Directory of Open Access Journals (Sweden)

    Chiew Yeong Shiong

    2011-12-01

    Full Text Available Abstract Background Acute Respiratory Distress Syndrome (ARDS) patients require mechanical ventilation (MV) for breathing support. Patient-specific PEEP is encouraged for treating different patients but there is no well established method for optimal PEEP selection. Methods A study of 10 patients diagnosed with ALI/ARDS who underwent a recruitment manoeuvre is carried out. Airway pressure and flow data are used to identify patient-specific constant lung elastance (Elung) and time-variant dynamic lung elastance (Edrs) at each PEEP level (increments of 5 cmH2O), for a single-compartment linear lung model, using integral-based methods. Optimal PEEP is estimated using the Elung versus PEEP curve, the Edrs-Pressure curve and the Edrs Area, at minimum elastance (maximum compliance) and the inflection of the curves (diminishing return). Results are compared to clinically selected PEEP values. The trials and use of the data were approved by the New Zealand South Island Regional Ethics Committee. Results Median absolute percentage fitting error to the data when estimating time-variant Edrs is 0.9% (IQR: 0.5-2.4) and 5.6% (IQR: 1.8-11.3) when estimating constant Elung. Both Elung and Edrs decrease with PEEP to a minimum, before rising, indicating potential over-inflation. Median Edrs over all patients across all PEEP values was 32.2 cmH2O/l (IQR: 26.1-46.6), reflecting the heterogeneity of ALI/ARDS patients, and their response to PEEP, that complicates standard approaches to PEEP selection. All Edrs-Pressure curves have a clear inflection point before minimum Edrs, making PEEP selection straightforward. Model-based selected PEEP using the proposed metrics was higher than clinically selected values in 7/10 cases. Conclusion Continuous monitoring of the patient-specific Elung and Edrs and minimally invasive PEEP titration provide a unique, patient-specific and physiologically relevant metric to optimize PEEP selection with minimal disruption of MV therapy.
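
    The identification step can be sketched with a synthetic breath: the single-compartment model P = E·V + R·Q + P0 is linear in its parameters, so elastance E, resistance R and the offset pressure follow from least squares on recorded pressure and flow. The waveform and parameter values below are illustrative; the paper's integral-based method additionally integrates the equation to suppress noise.

        import numpy as np

        rng = np.random.default_rng(7)
        t = np.linspace(0, 2, 200)                     # one breath, seconds
        Q = np.where(t < 1, 0.5, -0.5)                 # square-wave flow (L/s)
        V = np.cumsum(Q) * (t[1] - t[0])               # volume by integration (L)
        E_true, R_true, P0 = 30.0, 5.0, 5.0            # cmH2O/L, cmH2O.s/L, PEEP
        P = E_true * V + R_true * Q + P0 + 0.3 * rng.standard_normal(len(t))

        A = np.column_stack([V, Q, np.ones_like(t)])   # regressors of the linear model
        (E_hat, R_hat, P0_hat), *_ = np.linalg.lstsq(A, P, rcond=None)
        print("Elung=%.1f cmH2O/L, R=%.1f, PEEP=%.1f" % (E_hat, R_hat, P0_hat))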

  3. Analytical model for fast-shock ignition

    International Nuclear Information System (INIS)

    Ghasemi, S. A.; Farahbod, A. H.; Sobhanian, S.

    2014-01-01

    A model and its improvements are introduced for a recently proposed approach to inertial confinement fusion, called fast-shock ignition (FSI). The analysis is based upon the gain models of fast ignition and shock ignition, and considerations for the fast electrons' penetration into the pre-compressed fuel, to examine the formation of an effective central hot spot. Calculations of fast electron penetration into the dense fuel show that if the initial electron kinetic energy is of the order of ∼4.5 MeV, the electrons effectively reach the central part of the fuel. To evaluate more realistically the performance of the FSI approach, we have used the quasi-two-temperature electron energy distribution function of Strozzi (2012) and the fast ignitor energy formula of Bellei (2013), which are consistent with 3D PIC simulations, for different values of fast ignitor laser wavelength and coupling efficiency. The overall advantage of fast-shock ignition in comparison with shock ignition can be estimated to be a factor of better than 1.3, and the best results are obtained for a fuel mass around 1.5 mg, a fast ignitor laser wavelength of ∼0.3 micron and a shock ignitor energy weight factor of about 0.25.

  4. A fast identification algorithm for Box-Cox transformation based radial basis function neural network.

    Science.gov (United States)

    Hong, Xia

    2006-07-01

    In this letter, a Box-Cox transformation-based radial basis function (RBF) neural network is introduced using the RBF neural network to represent the transformed system output. Initially a fixed and moderate sized RBF model base is derived based on a rank revealing orthogonal matrix triangularization (QR decomposition). Then a new fast identification algorithm is introduced using Gauss-Newton algorithm to derive the required Box-Cox transformation, based on a maximum likelihood estimator. The main contribution of this letter is to explore the special structure of the proposed RBF neural network for computational efficiency by utilizing the inverse of matrix block decomposition lemma. Finally, the Box-Cox transformation-based RBF neural network, with good generalization and sparsity, is identified based on the derived optimal Box-Cox transformation and a D-optimality-based orthogonal forward regression algorithm. The proposed algorithm and its efficacy are demonstrated with an illustrative example in comparison with support vector machine regression.
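
    The core idea in miniature (our own simplification, using a grid search in place of the letter's Gauss-Newton step): choose the Box-Cox exponent by maximum likelihood for a model that is linear in its parameters, as an RBF network is in its weights.

        import numpy as np

        def boxcox(y, lam):
            return np.log(y) if abs(lam) < 1e-8 else (y ** lam - 1.0) / lam

        rng = np.random.default_rng(8)
        x = np.linspace(0.1, 3, 300)
        y = (1.0 + 2.0 * x + 0.1 * rng.standard_normal(300)) ** 2   # true lambda = 0.5

        X = np.column_stack([np.ones_like(x), x])    # stand-in for an RBF design matrix
        lams = np.linspace(0.1, 1.5, 29)
        lls = []
        for lam in lams:
            z = boxcox(y, lam)
            beta = np.linalg.lstsq(X, z, rcond=None)[0]
            resid = z - X @ beta
            # profile log-likelihood: Gaussian fit term + Jacobian of the transform
            ll = -0.5 * len(y) * np.log(np.mean(resid ** 2)) + (lam - 1.0) * np.sum(np.log(y))
            lls.append(ll)
        print("lambda_hat ~", lams[int(np.argmax(lls))])   # close to 0.5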

  5. Model-Based Battery Management Systems: From Theory to Practice

    Science.gov (United States)

    Pathak, Manan

    Lithium-ion batteries are now extensively being used as the primary storage source. Capacity and power fade, and slow recharging times are key issues that restrict its use in many applications. Battery management systems are critical to address these issues, along with ensuring its safety. This dissertation focuses on exploring various control strategies using detailed physics-based electrochemical models developed previously for lithium-ion batteries, which could be used in advanced battery management systems. Optimal charging profiles for minimizing capacity fade based on SEI-layer formation are derived and the benefits of using such control strategies are shown by experimentally testing them on a 16 Ah NMC-based pouch cell. This dissertation also explores different time-discretization strategies for non-linear models, which gives an improved order of convergence for optimal control problems. Lastly, this dissertation also explores a physics-based model for predicting the linear impedance of a battery, and develops a freeware that is extremely robust and computationally fast. Such a code could be used for estimating transport, kinetic and material properties of the battery based on the linear impedance spectra.

  6. UAV remote sensing atmospheric degradation image restoration based on multiple scattering APSF estimation

    Science.gov (United States)

    Qiu, Xiang; Dai, Ming; Yin, Chuan-li

    2017-09-01

    Unmanned aerial vehicle (UAV) remote imaging is affected by bad weather, and the obtained images suffer from low contrast, complex texture and blurring. In this paper, we propose a blind deconvolution model based on multiple-scattering atmosphere point spread function (APSF) estimation to recover the remote sensing image. According to Narasimhan's analytical theory, a new multiple-scattering restoration model is established based on the improved dichromatic model. The L0-norm sparse priors of the gradient and dark channel are then used to estimate the APSF blur kernel, and the fast Fourier transform is used to recover the original clear image by Wiener filtering. By comparing with other state-of-the-art methods, the proposed method correctly estimates the blur kernel, effectively removes the atmospheric degradation, preserves image detail and improves the quality evaluation indexes.
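
    The final restoration step in isolation: once a blur kernel (here a Gaussian stand-in for the estimated APSF) is known, the sharp image is recovered by Wiener filtering in the frequency domain. The kernel and noise-to-signal ratio below are assumptions for illustration, not the paper's estimated quantities.

        import numpy as np

        def wiener_deconv(blurred, kernel, nsr=1e-2):
            K = np.fft.fft2(kernel, s=blurred.shape)     # kernel spectrum, zero-padded
            W = np.conj(K) / (np.abs(K) ** 2 + nsr)      # Wiener filter
            return np.real(np.fft.ifft2(np.fft.fft2(blurred) * W))

        rng = np.random.default_rng(9)
        img = rng.random((64, 64))
        g = np.exp(-0.5 * (np.arange(7) - 3.0) ** 2)     # 1-D Gaussian profile
        psf = np.outer(g, g)
        psf /= psf.sum()                                 # normalized blur kernel
        blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf, s=img.shape)))
        print("restoration error:", np.abs(wiener_deconv(blurred, psf) - img).mean())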

  7. The fast breeder reactor

    International Nuclear Information System (INIS)

    Collier, J.

    1990-01-01

    The arguments for and against the fast breeder reactor are debated. The case for the fast reactor is that world energy demand will increase with the growing population over the next forty years, and that burning fossil fuels, which contributes to the greenhouse effect, damages the global environment. Nuclear fission is the only large scale energy source which can achieve a cut in the use of carbon based fuels, although energy conservation and renewable sources will also be important. Fast reactors produce more energy from uranium than other (thermal) types of reactors such as AGRs and PWRs. Fast reactors would be important from about 2020 onwards, especially as by then many thermal reactors will need to be replaced. Fast reactors are also safer than normal reactors. The arguments against fast reactors are largely economic. The cost, especially the capital cost, is very high. The viability of the technology is also questioned. (UK)

  8. The fast breeder reactor

    International Nuclear Information System (INIS)

    Davis, D.A.; Baker, M.A.W.; Hall, R.S.

    1990-01-01

    Following submission of written evidence, the Energy Committee members asked questions of three witnesses from the Central Electricity Generating Board and Nuclear Electric (which will be the government owned company running nuclear power stations after privatisation). Both questions and answers are reported verbatim. The points raised include where the responsibility for the future fast reactor programme should lie, with government only, with private enterprise, or both, and the viability of fast breeder reactors in the future. The case for the fast reactor was stated as essentially strategic, not economic. This raised the issue of nuclear cost, which has both a construction and a decommissioning element. There was considerable discussion as to the cost of building a European Fast Reactor and the cost of the electricity it would generate compared with PWR-type reactors. The likely demand for fast reactors will not arrive for 20-30 years, and the need to build a fast reactor now is questioned. (UK)

  9. A satellite and model based flood inundation climatology of Australia

    Science.gov (United States)

    Schumann, G.; Andreadis, K.; Castillo, C. J.

    2013-12-01

    To date there is no coherent and consistent database on observed or simulated flood event inundation and magnitude at large scales (continental to global). The only compiled data set showing a consistent history of flood inundation area and extent at a near global scale is provided by the MODIS-based Dartmouth Flood Observatory. However, MODIS satellite imagery is only available from 2000 and is hampered by a number of issues associated with flood mapping using optical images (e.g. classification algorithms, cloud cover, vegetation). Here, we present for the first time a proof-of-concept study in which we employ a computationally efficient 2-D hydrodynamic model (LISFLOOD-FP) complemented with a sub-grid channel formulation to generate a complete flood inundation climatology of the past 40 years (1973-2012) for the entire Australian continent. The model was built completely from freely available SRTM-derived data, including channel widths, bank heights and floodplain topography, which was corrected for vegetation canopy height using a global ICESat canopy dataset. Channel hydraulics were resolved using actual channel data and bathymetry was estimated within the model using hydraulic geometry. On the floodplain, the model simulated the flow paths and inundation variables at a 1 km resolution. The developed model was run over a period of 40 years and a floodplain inundation climatology was generated and compared to satellite flood event observations. Our proof-of-concept study demonstrates that this type of model can reliably simulate past flood events with reasonable accuracies both in time and space. The Australian model was forced with both observed flow climatology and VIC-simulated flows in order to assess the feasibility of a model-based flood inundation climatology at the global scale.

  10. A human motion model based on maps for navigation systems

    Directory of Open Access Journals (Sweden)

    Kaiser Susanna

    2011-01-01

    Full Text Available Foot-mounted indoor positioning systems work remarkably well when the localization algorithm additionally uses knowledge of floor-plans. Walls and other structures naturally restrict the motion of pedestrians. No pedestrian can walk through walls or jump from one floor to another in a building with different floor levels. By incorporating known floor-plans in sequential Bayesian estimation processes such as particle filters (PFs), long-term error stability can be achieved as long as the map is sufficiently accurate and the environment sufficiently constrains pedestrians' motion. In this article, a new motion model based on maps and floor-plans is introduced that is capable of weighting the possible headings of the pedestrian as a function of the local environment. The motion model is derived from a diffusion algorithm that makes use of the principle of a source effusing gas and is used in the weighting step of a PF implementation. The diffusion algorithm is capable of including floor-plans as well as maps with areas of different degrees of accessibility. The motion model more effectively represents the probability density function of possible headings restricted by maps and floor-plans than a simple binary weighting of particles (i.e., eliminating those that crossed walls and keeping the rest). We will show that the motion model helps to obtain better performance in critical navigation scenarios where two or more modes may be competing for some of the time (multi-modal scenarios).
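    The weighting step can be pictured as follows: rather than a binary keep/kill rule, each particle receives a weight from a map-derived heading probability. In this sketch the probability map is a hypothetical precomputed array standing in for the article's gas-effusion diffusion algorithm.

    ```python
    # Sketch: map-based weighting of particle headings in a particle filter.
    import numpy as np

    def weight_particles(particles, headings, heading_prob_map):
        """particles: (N, 2) integer grid positions (x, y);
        headings: (N,) heading-bin indices;
        heading_prob_map[y, x, h]: map-derived probability of heading h at (x, y)."""
        w = heading_prob_map[particles[:, 1], particles[:, 0], headings]
        s = w.sum()
        return w / s if s > 0 else np.full(len(particles), 1.0 / len(particles))

    # Hypothetical 10x10 map with 8 heading bins, uniform except a "wall" column.
    prob_map = np.full((10, 10, 8), 1.0 / 8)
    prob_map[:, 5, :] = 0.0   # wall cells admit no heading
    particles = np.random.randint(0, 10, size=(100, 2))
    headings = np.random.randint(0, 8, size=100)
    weights = weight_particles(particles, headings, prob_map)
    ```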

  11. Fast reactors worldwide

    International Nuclear Information System (INIS)

    Hall, R.S.; Vignon, D.

    1985-01-01

    The paper concerns the evolution of fast reactors over the past 30 years and their present status. Fast reactor development in different countries is described, and the present position, with emphasis on cost reduction and collaboration, is examined. The French development of the fast breeder reactor is reviewed, including the acquisition of technical skills, the search for competitive costs and the SPX2 project, and more advanced designs. Future prospects are also discussed. (U.K.)

  12. Fast breeder reactors

    International Nuclear Information System (INIS)

    Heinzel, V.

    1975-01-01

    The author gives a survey of 'fast breeder reactors'. The process of breeding, the reasons for the development of fast breeders, possible breeder reactor types, design criteria, fuels, cladding, coolant, and safety aspects are reported on in detail. Design data of some experimental reactors already in operation are summarized in tabular form. The 300 MWe prototype reactors SNR-300 and PFR are explained in detail, and data on KWU helium-cooled fast breeder reactors are given. (HR) [de]

  13. Application of a two-and-a-half dimensional model-based algorithm to crosswell electromagnetic data inversion

    International Nuclear Information System (INIS)

    Li, Maokun; Abubakar, Aria; Habashy, Tarek M

    2010-01-01

    In this paper, we apply a model-based inversion scheme to the interpretation of crosswell electromagnetic data. In this approach, we use open and closed polygons to parameterize the unknown configuration. The parameters that define these polygons are then inverted for by minimizing the data-misfit cost function. Compared with the pixel-based inversion approach, the model-based inversion uses only a small number of parameters; hence, it is more efficient. Furthermore, with sufficient sensitivity in the data, the model-based approach can provide quantitative estimates of the inverted parameters such as the conductivity. The model-based inversion also provides a convenient way to incorporate a priori information from other independent measurements such as seismic, gravity and well logs.
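    Conceptually, inversion of this kind reduces to adjusting a small set of shape parameters so as to minimize the data-misfit cost. A minimal sketch follows, with a one-line analytic function standing in for the electromagnetic forward solver; the parameterization and all values are invented.

    ```python
    # Sketch of model-based inversion by data-misfit minimization.
    import numpy as np
    from scipy.optimize import least_squares

    def forward_model(params):
        """Placeholder for the forward solver that maps the polygon and
        conductivity parameters to predicted crosswell data."""
        depth, width, conductivity = params
        x = np.linspace(0.0, 1.0, 50)
        return conductivity * np.exp(-((x - depth) ** 2) / (2.0 * width ** 2))

    observed = forward_model([0.4, 0.1, 2.0]) + 0.01 * np.random.randn(50)
    result = least_squares(lambda p: forward_model(p) - observed,
                           x0=[0.5, 0.2, 1.0])
    print("Inverted parameters:", result.x)
    ```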

  14. Fast wave current drive

    International Nuclear Information System (INIS)

    Goree, J.; Ono, M.; Colestock, P.; Horton, R.; McNeill, D.; Park, H.

    1985-07-01

    Fast wave current drive is demonstrated in the Princeton ACT-I toroidal device. The fast Alfven wave, in the range of high ion-cyclotron harmonics, produced 40 A of current from 1 kW of rf power coupled into the plasma by a fast wave loop antenna. This wave excites a steady current by damping on the energetic tail of the electron distribution function in the same way as lower-hybrid current drive, except that fast wave current drive is appropriate for higher plasma densities

  15. A Full-Body Layered Deformable Model for Automatic Model-Based Gait Recognition

    Science.gov (United States)

    Lu, Haiping; Plataniotis, Konstantinos N.; Venetsanopoulos, Anastasios N.

    2007-12-01

    This paper proposes a full-body layered deformable model (LDM) inspired by manually labeled silhouettes for automatic model-based gait recognition from part-level gait dynamics in monocular video sequences. The LDM is defined for the fronto-parallel gait with 22 parameters describing the human body part shapes (widths and lengths) and dynamics (positions and orientations). There are four layers in the LDM and the limbs are deformable. Algorithms for LDM-based human body pose recovery are then developed to estimate the LDM parameters from both manually labeled and automatically extracted silhouettes, where the automatic silhouette extraction is through a coarse-to-fine localization and extraction procedure. The estimated LDM parameters are used for model-based gait recognition by employing the dynamic time warping for matching and adopting the combination scheme in AdaBoost.M2. While the existing model-based gait recognition approaches focus primarily on the lower limbs, the estimated LDM parameters enable us to study full-body model-based gait recognition by utilizing the dynamics of the upper limbs, the shoulders and the head as well. In the experiments, the LDM-based gait recognition is tested on gait sequences with differences in shoe-type, surface, carrying condition and time. The results demonstrate that the recognition performance benefits from not only the lower limb dynamics, but also the dynamics of the upper limbs, the shoulders and the head. In addition, the LDM can serve as an analysis tool for studying factors affecting the gait under various conditions.
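    The matching step, dynamic time warping between sequences of estimated LDM parameters, can be written in a few lines. This is a generic DTW distance on synthetic sequences, not the paper's full pipeline (which further combines classifiers with AdaBoost.M2).

    ```python
    # Generic dynamic time warping (DTW) distance between two sequences.
    import numpy as np

    def dtw_distance(a, b):
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = abs(a[i - 1] - b[j - 1])
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m]

    # Two hypothetical gait-parameter sequences of different lengths:
    probe = np.sin(np.linspace(0, 2 * np.pi, 40))
    gallery = np.sin(np.linspace(0, 2 * np.pi, 55))
    print(dtw_distance(probe, gallery))
    ```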

  16. A Model-based Avionic Prognostic Reasoner (MAPR)

    Data.gov (United States)

    National Aeronautics and Space Administration — The Model-based Avionic Prognostic Reasoner (MAPR) presented in this paper is an innovative solution for non-intrusively monitoring the state of health (SoH) and...

  17. A Model-Based Prognostics Approach Applied to Pneumatic Valves

    Data.gov (United States)

    National Aeronautics and Space Administration — Within the area of systems health management, the task of prognostics centers on predicting when components will fail. Model-based prognostics exploits domain...

  18. A Model-based Prognostics Approach Applied to Pneumatic Valves

    Data.gov (United States)

    National Aeronautics and Space Administration — Within the area of systems health management, the task of prognostics centers on predicting when components will fail. Model-based prognostics exploits domain...

  19. Model-based Prognostics with Concurrent Damage Progression Processes

    Data.gov (United States)

    National Aeronautics and Space Administration — Model-based prognostics approaches rely on physics-based models that describe the behavior of systems and their components. These models must account for the several...

  20. Model-based reasoning technology for the power industry

    International Nuclear Information System (INIS)

    Touchton, R.A.; Subramanyan, N.S.; Naser, J.A.

    1991-01-01

    This paper reports on model-based reasoning, which refers to an expert system implementation methodology that uses a model of the system being reasoned about. Model-based representation and reasoning techniques offer many advantages and are highly suitable for domains where the individual components, their interconnection, and their behavior are well known. Technology Applications, Inc. (TAI), under contract to the Electric Power Research Institute (EPRI), investigated the use of model-based reasoning in the power industry, including the nuclear power industry. During this project, a model-based monitoring and diagnostic tool, called ProSys, was developed. Also, an alarm prioritization system was developed as a demonstration prototype

  1. Model-based Prognostics with Fixed-lag Particle Filters

    Data.gov (United States)

    National Aeronautics and Space Administration — Model-based prognostics exploits domain knowledge of the system, its components, and how they fail by casting the underlying physical phenomena in a...

  2. Fuzzy model-based control of a nuclear reactor

    International Nuclear Information System (INIS)

    Van Den Durpel, L.; Ruan, D.

    1994-01-01

    The fuzzy model-based control of a nuclear power reactor is an emerging research topic world-wide. SCK-CEN is dealing with this research in a preliminary stage, including two aspects, namely fuzzy control and fuzzy modelling. The aim is to combine both methodologies in contrast to conventional model-based PID control techniques, and to state advantages of including fuzzy parameters as safety and operator feedback. This paper summarizes the general scheme of this new research project

  3. System Dynamics as Model-Based Theory Building

    OpenAIRE

    Schwaninger, Markus; Grösser, Stefan N.

    2008-01-01

    This paper introduces model-based theory building as a feature of system dynamics (SD) with large potential. It presents a systemic approach to actualizing that potential, thereby opening up a new perspective on theory building in the social sciences. The question addressed is if and how SD enables the construction of high-quality theories. This contribution is based on field experiment type projects which have been focused on model-based theory building, specifically the construction of a mi...

  4. Model-based Sensor Data Acquisition and Management

    OpenAIRE

    Aggarwal, Charu C.; Sathe, Saket; Papaioannou, Thanasis G.; Jeung, Ho Young; Aberer, Karl

    2012-01-01

    In recent years, due to the proliferation of sensor networks, there has been a genuine need of researching techniques for sensor data acquisition and management. To this end, a large number of techniques have emerged that advocate model-based sensor data acquisition and management. These techniques use mathematical models for performing various, day-to-day tasks involved in managing sensor data. In this chapter, we survey the state-of-the-art techniques for model-based sensor data acquisition...

  5. Fast multichannel analyser

    Energy Technology Data Exchange (ETDEWEB)

    Berry, A; Przybylski, M M; Sumner, I [Science Research Council, Daresbury (UK). Daresbury Lab.

    1982-10-01

    A fast multichannel analyser (MCA) capable of sampling at a rate of 10^7 s^-1 has been developed. The instrument is based on an 8 bit parallel encoding analogue to digital converter (ADC) reading into a fast histogramming random access memory (RAM) system, giving 256 channels of 64 k count capacity. The prototype unit is in CAMAC format.

  6. A fast multichannel analyser

    International Nuclear Information System (INIS)

    Berry, A.; Przybylski, M.M.; Sumner, I.

    1982-01-01

    A fast multichannel analyser (MCA) capable of sampling at a rate of 10^7 s^-1 has been developed. The instrument is based on an 8 bit parallel encoding analogue to digital converter (ADC) reading into a fast histogramming random access memory (RAM) system, giving 256 channels of 64 k count capacity. The prototype unit is in CAMAC format. (orig.)

  7. Sociodemographic differences in fast food price sensitivity

    Science.gov (United States)

    Meyer, Katie A.; Guilkey, David K.; Ng, Shu Wen; Duffey, Kiyah J.; Popkin, Barry M.; Kiefe, Catarina I.; Steffen, Lyn M.; Shikany, James M.; Gordon-Larsen, Penny

    2014-01-01

    Importance: Fiscal food policies (e.g., taxation) are increasingly proposed to improve population-level health, but their impact on health disparities is unknown. Objective: We estimated subgroup-specific effects of fast food price changes on fast food consumption and cardio-metabolic outcomes, hypothesizing inverse associations of fast food price with fast food consumption, BMI, and insulin resistance, and stronger associations among blacks (vs. whites) and participants with relatively lower education or income. Design: 20-year follow-up (5 exams) in a biracial U.S. prospective cohort: Coronary Artery Risk Development in Young Adults (CARDIA) (1985/86–2005/06, baseline n=5,115). Participants: Aged 18–30 at baseline; designed for equal recruitment by race (black/white), educational attainment, age, and gender. Exposures: Community-level price data from the Council for Community and Economic Research (C2ER), temporally and geographically linked to study participants’ home address at each exam. Main outcomes and measures: Participant-reported number of fast food eating occasions per week; BMI (kg/m2) from clinical assessment of weight and height; homeostatic model assessment insulin resistance (HOMA-IR) from fasting glucose and insulin. Covariates included individual- and community-level social and demographic factors. Results: In repeated measures regression, multivariable-adjusted associations between fast food price and consumption were non-linear (quadratic), with lower consumption at higher prices; estimates varied according to race (interaction term p=0.04), income (p=0.07), and education (p=0.03). For example, at the 10th percentile of price ($1.25/serving), blacks and whites had mean fast food consumption (times/week) of 2.2 (95% CI: 2.1–2.3) and 1.6 (1.5–1.7), respectively, while at the 90th percentile of price ($1.53/serving), respective mean consumption estimates were 1.9 (1.8–2.0) and 1.5 (1.4–1.6). We observed differential price effects on HOMA

  8. Islamic Fasting and Diabetes

    Directory of Open Access Journals (Sweden)

    Fereidoun Azizi

    2013-07-01

    Full Text Available The aim of this article is to review health-related aspects of Ramadan fasting in normal individuals and diabetics. During the fasting days of Ramadan, glucose homeostasis is maintained by the meal taken before dawn and by liver glycogen stores. Changes in serum lipids are variable and depend on the quality and quantity of food consumption and changes in weight. Compliant, well-controlled type 2 diabetics may observe Ramadan fasting, but fasting is not recommended for type 1, non-compliant, poorly controlled and pregnant diabetics. Although Ramadan fasting is safe for all healthy individuals and well-controlled diabetics, those with uncontrolled diabetes and diabetics with complications should consult physicians and follow scientific recommendations.

  9. Fast Spectrum Reactors

    CERN Document Server

    Todd, Donald; Tsvetkov, Pavel

    2012-01-01

    Fast Spectrum Reactors presents a detailed overview of world-wide technology contributing to the development of fast spectrum reactors. With a unique focus on the capabilities of fast spectrum reactors to address nuclear waste transmutation issues, in addition to the well-known capabilities of breeding new fuel, this volume describes how fast spectrum reactors contribute to the wide application of nuclear power systems to serve the global nuclear renaissance while minimizing nuclear proliferation concerns. Readers will find an introduction to the sustainable development of nuclear energy and the role of fast reactors, in addition to an economic analysis of nuclear reactors. A section devoted to neutronics offers the current trends in nuclear design, such as performance parameters and the optimization of advanced power systems. The latest findings on fuel management, partitioning and transmutation include the physics, efficiency and strategies of transmutation, homogeneous and heterogeneous recycling, in addit...

  10. Fast ejendom III

    DEFF Research Database (Denmark)

    Munk-Hansen, Carsten

    The book is the third of three planned volumes on real property: I The Conveyance, II The Home Purchase, and III The Owner's Powers. The account gives a thorough overview of central areas of the extensive regulation of real property, with references to literature where the reader can seek further information. An owner of real property is, in a great many respects, restricted in his disposition compared with the owner of an asset in general. The account takes the owner's perspective as its starting point (rather than that of society or the authorities). Both private-law and public-law regulation are treated, for example property formation, easements, neighbour law, adverse possession, zoning, physical planning, protection of nature, protection of culture, pollution from real property, compensation for pollution, soil contamination, expropriation, building, and access to real property.

  11. Fast fission phenomena

    International Nuclear Information System (INIS)

    Gregoire, Christian.

    1982-03-01

    Experimental studies of fast fission phenomena are presented. The paper is divided into three parts. In the first part, problems associated with fast fission processes are examined in terms of interaction potentials, and a dynamic model is presented in which highly inelastic collisions, the formation of compound nuclei and fast fission appear naturally. In the second part, a description is given of the experimental methods employed, the observations made and the preliminary interpretation of measurements suggesting the occurrence of fast fission processes. In the third part, our dynamic model is incorporated in a general theory of the dissipative processes studied. This theory enables fluctuations associated with collective variables to be calculated. It is applied to highly inelastic collisions, to fast fission and to the fission dynamics of compound nuclei (for which a schematic representation is given). It is with these calculations that the main results of the second part can be interpreted [fr]

  12. Cancer Related-Knowledge - Small Area Estimates

    Science.gov (United States)

    These model-based estimates are produced using statistical models that combine data from the Health Information National Trends Survey with auxiliary variables obtained from relevant sources, and that borrow strength from other areas with similar characteristics.

  13. Nonparametric methods for volatility density estimation

    NARCIS (Netherlands)

    Es, van Bert; Spreij, P.J.C.; Zanten, van J.H.

    2009-01-01

    Stochastic volatility modelling of financial processes has become increasingly popular. The proposed models usually contain a stationary volatility process. We will motivate and review several nonparametric methods for estimation of the density of the volatility process. Both models based on

  14. Yield loss prediction models based on early estimation of weed pressure

    DEFF Research Database (Denmark)

    Asif, Ali; Streibig, Jens Carl; Andreasen, Christian

    2013-01-01

    thresholds are more relevant for site-specific weed management, because weeds are unevenly distributed in fields. Precision of prediction of yield loss is influenced by various factors such as locations, yield potential at the site, variation in competitive ability of mix stands of weed species and emergence...

  15. Estimating Travel Time in Bank Filtration Systems from a Numerical Model Based on DTS Measurements.

    Science.gov (United States)

    des Tombe, Bas F; Bakker, Mark; Schaars, Frans; van der Made, Kees-Jan

    2018-03-01

    An approach is presented to determine the seasonal variations in travel time in a bank filtration system using a passive heat tracer test. The temperature in the aquifer varies seasonally because of temperature variations of the infiltrating surface water and at the soil surface. Temperature was measured with distributed temperature sensing along fiber optic cables that were inserted vertically into the aquifer with direct push equipment. The approach was applied to a bank filtration system consisting of a sequence of alternating, elongated recharge basins and rows of recovery wells. A SEAWAT model was developed to simulate coupled flow and heat transport. The model of a two-dimensional vertical cross section is able to simulate the temperature of the water at the well and the measured vertical temperature profiles reasonably well. MODPATH was used to compute flowpaths and the travel time distribution. At the study site, temporal variation of the pumping discharge was the dominant factor influencing the travel time distribution. For an equivalent system with a constant pumping rate, variations in the travel time distribution are caused by variations in the temperature-dependent viscosity. As a result, travel times increase in the winter, when a larger fraction of the water travels through the warmer, lower part of the aquifer, and decrease in the summer, when the upper part of the aquifer is warmer. © 2017 The Authors. Groundwater published by Wiley Periodicals, Inc. on behalf of National Ground Water Association.

  16. The Output Cost of Gender Discrimination: A Model-Based Macroeconomic Estimate

    OpenAIRE

    Cavalcanti, Tiago V. de V.; Tavares, José

    2008-01-01

    Gender-based discrimination is a pervasive and costly phenomenon. To a greater or lesser extent, all economies present a gender wage gap, associated with lower female labour force participation rates and higher fertility. This paper presents a growth model where saving, fertility and labour market participation are endogenously determined, and there is wage discrimination. The model is calibrated to mimic the performance of the U.S. economy, including the gender wage gap and relative female l...

  17. Model-based prognostics for batteries which estimates useful life and uses a probability density function

    Science.gov (United States)

    Saha, Bhaskar (Inventor); Goebel, Kai F. (Inventor)

    2012-01-01

    This invention develops a mathematical model to describe battery behavior during individual discharge cycles as well as over its cycle life. The basis for the form of the model has been linked to the internal processes of the battery and validated using experimental data. Effects of temperature and load current have also been incorporated into the model. Subsequently, the model has been used in a Particle Filtering framework to make predictions of remaining useful life for individual discharge cycles as well as for cycle life. The prediction performance was found to be satisfactory as measured by performance metrics customized for prognostics for a sample case. The work presented here provides initial steps towards a comprehensive health management solution for energy storage devices.
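    The prediction step of such a particle-filtering framework can be caricatured as follows: each particle carries a state-of-health value, is propagated through an assumed degradation model, and its threshold-crossing time contributes to a probability density over remaining useful life. The linear fade model, threshold, and all numbers are invented for illustration, not taken from the patent.

    ```python
    # Sketch: particle cloud propagated to a failure threshold yields an RUL pdf.
    import numpy as np

    rng = np.random.default_rng(0)
    N = 1000
    fade_rate = np.clip(rng.normal(2e-3, 5e-4, N), 1e-4, None)  # per-cycle fade
    threshold = 0.8                                             # end-of-life capacity

    rul = np.empty(N)
    for i in range(N):
        capacity, cycles = 1.0, 0
        while capacity > threshold:        # propagate assumed degradation model
            capacity -= fade_rate[i]
            cycles += 1
        rul[i] = cycles

    # The particle cloud approximates a probability density over RUL:
    print("median RUL:", np.median(rul),
          "90% interval:", np.percentile(rul, [5, 95]))
    ```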

  18. Uncertainty Representation and Interpretation in Model-based Prognostics Algorithms based on Kalman Filter Estimation

    Science.gov (United States)

    2012-09-01

    interpreting the state vector as the health indicator; a threshold on this variable is used to compute EOL (end-of-life) and RUL. Here, we... End-of-life (EOL) would match the true spread and would not change from one experiment to another. This is, however, in practice impossible to achieve

  19. Model-based Estimation of High Frequency Jump Diffusions with Microstructure Noise and Stochastic Volatility

    NARCIS (Netherlands)

    Bos, Charles S.

    2008-01-01

    When analysing the volatility related to high frequency financial data, mostly non-parametric approaches based on realised or bipower variation are applied. This article instead starts from a continuous time diffusion model and derives a parametric analog at high frequency for it, allowing

  20. Model based estimation for multi-modal user interface component selection

    CSIR Research Space (South Africa)

    Coetzee, L

    2009-12-01

    Full Text Available and literacy level of the user should be taken into account. This paper presents one approach to develop a cost-based model which can be used to derive appropriate mappings for specific user profiles. The model is explained through a number of small examples...

  1. Model-based crosstalk compensation for simultaneous 99mTc/123I dual-isotope brain SPECT imaging.

    Science.gov (United States)

    Du, Yong; Tsui, Benjamin M W; Frey, Eric C

    2007-09-01

    In this work, we developed a model-based method to estimate and compensate for the crosstalk contamination in simultaneous 123I and 99mTc dual-isotope brain single photon emission computed tomography (SPECT) imaging. The model-based crosstalk compensation (MBCC) includes detailed modeling of photon interactions inside both the object and the detector system. In the method, scatter in the object is modeled using the effective source scatter estimation technique, including contributions from all the photon emissions. The effects of the collimator-detector response, including the penetration and scatter components due to high-energy 123I photons, are modeled using precalculated tables of Monte Carlo simulated point-source response functions obtained from sources in air at various distances from the face of the collimator. The model-based crosstalk estimation method was combined with iterative reconstruction based compensation to reduce contamination due to crosstalk. The MBCC method was evaluated using Monte Carlo simulated and physical phantom experimentally acquired simultaneous dual-isotope data. Results showed that, for both experimental and simulation studies, the model-based method provided crosstalk estimates that were in good agreement with the true crosstalk. Compensation using MBCC improved image contrast and removed the artifacts for both Monte Carlo simulated and experimentally acquired data. The results were in good agreement with images acquired without any crosstalk contamination.
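    A common way such a model-based crosstalk estimate enters iterative reconstruction is as a known additive term in the forward projection of an MLEM update. The sketch below shows that pattern on a random toy system matrix; it is a generic illustration, not the authors' SPECT system model.

    ```python
    # Sketch: MLEM with a model-based additive contamination (crosstalk) term.
    import numpy as np

    def mlem_with_crosstalk(A, y, crosstalk, n_iter=50):
        """A: system matrix (bins x voxels); y: measured counts;
        crosstalk: model-based estimate of contaminating counts per bin."""
        x = np.ones(A.shape[1])
        sens = A.sum(axis=0)                      # sensitivity image
        for _ in range(n_iter):
            proj = A @ x + crosstalk              # forward projection + crosstalk
            x *= (A.T @ (y / np.maximum(proj, 1e-12))) / np.maximum(sens, 1e-12)
        return x

    rng = np.random.default_rng(1)
    A = rng.random((30, 10))
    x_true = rng.random(10)
    crosstalk = 0.1 * np.ones(30)
    y = rng.poisson(A @ x_true + crosstalk).astype(float)
    x_hat = mlem_with_crosstalk(A, y, crosstalk)
    ```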

  2. Model-based crosstalk compensation for simultaneous Tc99m/I123 dual-isotope brain SPECT imaging.

    Science.gov (United States)

    Du, Yong; Tsui, Benjamin M W; Frey, Eric C

    2007-09-01

    In this work, we developed a model-based method to estimate and compensate for the crosstalk contamination in simultaneous I123 and Tc99m dual-isotope brain single photon emission computed tomography (SPECT) imaging. The model-based crosstalk compensation (MBCC) includes detailed modeling of photon interactions inside both the object and the detector system. In the method, scatter in the object is modeled using the effective source scatter estimation technique, including contributions from all the photon emissions. The effects of the collimator-detector response, including the penetration and scatter components due to high-energy I123 photons, are modeled using pre-calculated tables of Monte Carlo simulated point-source response functions obtained from sources in air at various distances from the face of the collimator. The model-based crosstalk estimation method was combined with iterative reconstruction based compensation to reduce contamination due to crosstalk. The MBCC method was evaluated using Monte Carlo simulated and physical phantom experimentally acquired simultaneous dual-isotope data. Results showed that, for both experimental and simulation studies, the model-based method provided crosstalk estimates that were in good agreement with the true crosstalk. Compensation using MBCC improved image contrast and removed the artifacts for both Monte Carlo simulated and experimentally acquired data. The results were in good agreement with images acquired without any crosstalk contamination. © 2007 American Association of Physicists in Medicine.

  3. Application of model-based and knowledge-based measuring methods as analytical redundancy

    International Nuclear Information System (INIS)

    Hampel, R.; Kaestner, W.; Chaker, N.; Vandreier, B.

    1997-01-01

    The safe operation of nuclear power plants requires the application of modern and intelligent methods of signal processing, both for normal operation and for the management of accident conditions. Such modern and intelligent methods are model-based and knowledge-based ones, founded on analytical knowledge (mathematical models) as well as experience (fuzzy information). In addition to the existing hardware redundancies, analytical redundancies will be established with the help of these modern methods. These analytical redundancies support the operating staff during decision-making. The design of a hybrid model-based and knowledge-based measuring method will be demonstrated by the example of a fuzzy-supported observer. Within the fuzzy-supported observer a classical linear observer is connected with a fuzzy-supported adaptation of the model matrices of the observer model. This application is realized for the estimation of non-measurable variables such as steam content and mixture level within pressure vessels containing a water-steam mixture during accidental depressurizations. For this example the existing non-linearities will be classified and the verification of the model will be explained. The advantages of the hybrid method in comparison to classical model-based measuring methods will be demonstrated by the estimation results. The consideration of the parameters which have an important influence on the non-linearities requires the inclusion of high-dimensional structures of fuzzy logic within the model-based measuring methods. Therefore methods will be presented which allow the conversion of these high-dimensional structures to two-dimensional structures of fuzzy logic. As an efficient solution to this problem a method based on cascaded fuzzy controllers will be presented. (author). 2 refs, 12 figs, 5 tabs
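    The classical core that the fuzzy adaptation wraps, a linear observer reconstructing a non-measurable state from a measurable one, can be sketched as follows. The plant matrices and observer gain are invented toy values, and the fuzzy adaptation of the model matrices is deliberately omitted.

    ```python
    # Bare-bones linear (Luenberger) observer estimating an unmeasured state.
    import numpy as np

    A = np.array([[0.95, 0.05],
                  [0.00, 0.90]])      # hypothetical plant dynamics
    C = np.array([[1.0, 0.0]])        # only the first state is measured
    L = np.array([[0.5], [0.3]])      # observer gain (assumed pre-designed)

    x = np.array([1.0, 0.5])          # true state; x[1] is "non-measurable"
    x_hat = np.zeros(2)
    for _ in range(100):
        y = C @ x                                  # measurement
        x_hat = A @ x_hat + L @ (y - C @ x_hat)    # predict + correct
        x = A @ x                                  # plant evolves

    print("estimated non-measurable state:", x_hat[1])
    ```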

  4. Fast reactor core monitoring device

    International Nuclear Information System (INIS)

    Sanda, Toshio; Inoue, Kotaro; Azekura, Kazuo.

    1982-01-01

    Purpose: To enable rapid and accurate on-line identification of the state of a fast reactor core by effectively utilizing the measured data on coolant temperature and flow rate. Constitution: The spatial power distribution and average assembly power are quickly calculated using an approximate calculation method; the measured and calculated values of the inlet-outlet temperature difference, flow rate and coolant physical properties of each assembly are combined; and the most probable values and their errors are obtained by a least-squares method utilizing the relations between these values. The power distribution and the temperature distribution of the reactor core are estimated in this manner. Accordingly, even when the measuring accuracy and the calculating accuracy are comparable, as in a fast reactor, the power distribution and the temperature distribution can be accurately estimated on-line at high speed, the information required by the operator is provided, and the reactor can be operated safely and efficiently. (Yoshihara, H.)
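    When the measured and calculated values refer to the same quantity, the least-squares combination reduces to inverse-variance weighting, which conveys the idea in miniature. All numbers below are invented.

    ```python
    # Sketch: fusing a measured and a calculated assembly power by
    # inverse-variance weighted least squares.
    import numpy as np

    def fuse(values, sigmas):
        """Inverse-variance weighted estimate and its standard error."""
        w = 1.0 / np.asarray(sigmas, dtype=float) ** 2
        est = np.sum(w * np.asarray(values, dtype=float)) / np.sum(w)
        return est, np.sqrt(1.0 / np.sum(w))

    measured, sigma_meas = 5.02, 0.10      # MW, hypothetical measurement
    calculated, sigma_calc = 4.90, 0.08    # MW, hypothetical fast calculation
    power, sigma = fuse([measured, calculated], [sigma_meas, sigma_calc])
    print(f"fused power = {power:.3f} +/- {sigma:.3f} MW")
    ```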

  5. Embracing model-based designs for dose-finding trials.

    Science.gov (United States)

    Love, Sharon B; Brown, Sarah; Weir, Christopher J; Harbron, Chris; Yap, Christina; Gaschler-Markefski, Birgit; Matcham, James; Caffrey, Louise; McKevitt, Christopher; Clive, Sally; Craddock, Charlie; Spicer, James; Cornelius, Victoria

    2017-07-25

    Dose-finding trials are essential to drug development as they establish recommended doses for later-phase testing. We aim to motivate wider use of model-based designs for dose finding, such as the continual reassessment method (CRM). We carried out a literature review of dose-finding designs and conducted a survey to identify perceived barriers to their implementation. We describe the benefits of model-based designs (flexibility, superior operating characteristics, extended scope), their current uptake, and existing resources. The most prominent barriers to implementation of a model-based design were lack of suitable training, chief investigators' preference for algorithm-based designs (e.g., 3+3), and limited resources for study design before funding. We use a real-world example to illustrate how these barriers can be overcome. There is overwhelming evidence for the benefits of CRM. Many leading pharmaceutical companies routinely implement model-based designs. Our analysis identified barriers for academic statisticians and clinical academics in mirroring the progress industry has made in trial design. Unified support from funders, regulators, and journal editors could result in more accurate doses for later-phase testing, and increase the efficiency and success of clinical drug development. We give recommendations for increasing the uptake of model-based designs for dose-finding trials in academia.

  6. Fast Breeder Reactor studies

    International Nuclear Information System (INIS)

    Till, C.E.; Chang, Y.I.; Kittel, J.H.; Fauske, H.K.; Lineberry, M.J.; Stevenson, M.G.; Amundson, P.I.; Dance, K.D.

    1980-07-01

    This report is a compilation of Fast Breeder Reactor (FBR) resource documents prepared to provide the technical basis for the US contribution to the International Nuclear Fuel Cycle Evaluation. The eight separate parts deal with the alternative fast breeder reactor fuel cycles in terms of energy demand, resource base, technical potential and current status, safety, proliferation resistance, deployment, and nuclear safeguards. An Annex compares the cost of decommissioning light-water and fast breeder reactors. Separate abstracts are included for each of the parts

  7. Fast track-hoftealloplastik

    DEFF Research Database (Denmark)

    Hansen, Torben Bæk; Gromov, Kirill; Kristensen, Billy B

    2017-01-01

    Fast-track surgery implies a coordinated perioperative approach aimed at reducing surgical stress and facilitating post-operative recovery. The fast-track programme has reduced post-operative length of stay and has led to shorter convalescence with more rapid functional recovery and decreased morbidity and mortality in total hip arthroplasty. It should now be a standard total hip arthroplasty patient pathway, but fine-tuning of the multiple factors in the fast-track pathway is still needed in patients with special needs or a high comorbidity burden.

  8. Fast Breeder Reactor studies

    Energy Technology Data Exchange (ETDEWEB)

    Till, C.E.; Chang, Y.I.; Kittel, J.H.; Fauske, H.K.; Lineberry, M.J.; Stevenson, M.G.; Amundson, P.I.; Dance, K.D.

    1980-07-01

    This report is a compilation of Fast Breeder Reactor (FBR) resource documents prepared to provide the technical basis for the US contribution to the International Nuclear Fuel Cycle Evaluation. The eight separate parts deal with the alternative fast breeder reactor fuel cycles in terms of energy demand, resource base, technical potential and current status, safety, proliferation resistance, deployment, and nuclear safeguards. An Annex compares the cost of decommissioning light-water and fast breeder reactors. Separate abstracts are included for each of the parts.

  9. Attempt Quit Smoking 24+ Hours Maps and Data of Model-Based Small Area Estimates - Small Area Estimates

    Science.gov (United States)

    Attempt Quit Smoking 24+ Hours is defined for a person 18 years of age or older who has reported smoking at least 100 cigarettes in his/her life and who either does not smoke at all now but completely stopped smoking cigarettes less than 365 days ago, or smokes every day or some days but reports having made an attempt to quit for more than 24 hours in the past 12 months.

  10. Comparing model-based and model-free analysis methods for QUASAR arterial spin labeling perfusion quantification.

    Science.gov (United States)

    Chappell, Michael A; Woolrich, Mark W; Petersen, Esben T; Golay, Xavier; Payne, Stephen J

    2013-05-01

    Amongst the various implementations of arterial spin labeling MRI methods for quantifying cerebral perfusion, the QUASAR method is unique. By using a combination of labeling with and without flow suppression gradients, the QUASAR method offers the separation of macrovascular and tissue signals. This permits local arterial input functions to be defined and "model-free" analysis, using numerical deconvolution, to be used. However, it remains unclear whether arterial spin labeling data are best treated using model-free or model-based analysis. This work provides a critical comparison of these two approaches for QUASAR arterial spin labeling in the healthy brain. An existing two-component (arterial and tissue) model was extended to the mixed flow suppression scheme of QUASAR to provide an optimal model-based analysis. The model-based analysis was extended to incorporate dispersion of the labeled bolus, generally regarded as the major source of discrepancy between the two analysis approaches. Model-free and model-based analyses were compared for perfusion quantification including absolute measurements, uncertainty estimation, and spatial variation in cerebral blood flow estimates. Major sources of discrepancies between model-free and model-based analysis were attributed to the effects of dispersion and the degree to which the two methods can separate macrovascular and tissue signal. Copyright © 2012 Wiley Periodicals, Inc.
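    The model-free branch rests on numerical deconvolution of the tissue curve with the local arterial input function, commonly regularized by truncated singular value decomposition. The sketch below shows that generic step on synthetic curves; it is not the QUASAR-specific implementation, and the threshold and curves are assumptions.

    ```python
    # Sketch: truncated-SVD deconvolution of a tissue curve with an AIF.
    import numpy as np

    def deconvolve_svd(aif, tissue, dt, thresh=0.1):
        """Recover the flow-scaled residue function from AIF and tissue curves."""
        n = len(aif)
        # Discrete convolution matrix of the AIF (lower triangular).
        A = dt * np.array([[aif[i - j] if i >= j else 0.0 for j in range(n)]
                           for i in range(n)])
        U, s, Vt = np.linalg.svd(A)
        s_inv = np.where(s > thresh * s.max(), 1.0 / s, 0.0)  # truncate small SVs
        return Vt.T @ (s_inv * (U.T @ tissue))

    dt = 0.5
    t = np.arange(0, 30, dt)
    aif = np.exp(-((t - 8.0) ** 2) / 8.0)          # synthetic arterial input
    residue = np.exp(-t / 10.0)                    # synthetic residue function
    tissue = 0.6 * dt * np.convolve(aif, residue)[:len(t)]
    r = deconvolve_svd(aif, tissue, dt)            # peak of r ~ perfusion
    ```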

  11. Model based process-product design and analysis

    DEFF Research Database (Denmark)

    Gani, Rafiqul

    This paper gives a perspective on modelling and the important role it has within product-process design and analysis. Different modelling issues related to development and application of systematic model-based solution approaches for product-process design is discussed and the need for a hybrid...... model-based framework is highlighted. This framework should be able to manage knowledge-data, models, and associated methods and tools integrated with design work-flows and data-flows for specific product-process design problems. In particular, the framework needs to manage models of different types......, forms and complexity, together with their associated parameters. An example of a model-based system for design of chemicals based formulated products is also given....

  12. Model Based Mission Assurance: Emerging Opportunities for Robotic Systems

    Science.gov (United States)

    Evans, John W.; DiVenti, Tony

    2016-01-01

    The emergence of Model Based Systems Engineering (MBSE) in a Model Based Engineering framework has created new opportunities to improve effectiveness and efficiencies across the assurance functions. The MBSE environment supports not only system architecture development, but also concurrent support of systems safety, reliability and risk analysis in the same framework. Linking to detailed design will further improve assurance capabilities to support failure avoidance and mitigation in flight systems. This is also leading to new assurance functions, including model assurance and management of uncertainty in the modeling environment. Further, assurance cases, structured hierarchical arguments or models, are emerging as a basis for supporting a comprehensive viewpoint in which to support Model Based Mission Assurance (MBMA).

  13. Enhanced Engine Performance During Emergency Operation Using a Model-Based Engine Control Architecture

    Science.gov (United States)

    Csank, Jeffrey T.; Connolly, Joseph W.

    2016-01-01

    This paper discusses the design and application of model-based engine control (MBEC) for use during emergency operation of the aircraft. The MBEC methodology is applied to the Commercial Modular Aero-Propulsion System Simulation 40k (CMAPSS40k) and features an optimal tuner Kalman filter (OTKF) to estimate unmeasured engine parameters, which can then be used for control. During an emergency scenario, normally conservative engine operating limits may be relaxed to increase the performance of the engine and the overall survivability of the aircraft; this comes at the cost of additional risk of engine failure. The MBEC architecture offers the advantage of estimating key engine parameters that are not directly measurable. Estimating the unknown parameters allows for tighter control over these parameters and over the level of risk at which the engine operates. This allows the engine to achieve better performance than is possible when operating to more conservative limits on a related, measurable parameter.
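    The estimation idea can be illustrated with a toy Kalman filter in which the state is augmented with an unmeasured parameter that biases a measurable output. This scalar example is an assumption-laden stand-in, not the CMAPSS40k engine model or the actual optimal tuner Kalman filter design.

    ```python
    # Sketch: Kalman filter estimating a hidden parameter from a measured output.
    import numpy as np

    rng = np.random.default_rng(2)
    F = np.array([[0.9, 1.0],        # state: [measured output x, hidden param p]
                  [0.0, 1.0]])       # p acts as a slowly varying bias on x
    Hm = np.array([[1.0, 0.0]])      # only x is measured
    Q = np.diag([1e-3, 1e-6])
    R = np.array([[1e-2]])

    x_true = np.array([0.0, 0.05])   # true hidden parameter p = 0.05
    x_hat, P = np.zeros(2), np.eye(2)
    for _ in range(200):
        x_true = F @ x_true
        z = Hm @ x_true + rng.normal(0.0, 0.1, 1)
        x_hat = F @ x_hat                         # predict
        P = F @ P @ F.T + Q
        S = Hm @ P @ Hm.T + R                     # update
        K = P @ Hm.T @ np.linalg.inv(S)
        x_hat = x_hat + K @ (z - Hm @ x_hat)
        P = (np.eye(2) - K @ Hm) @ P

    print("estimated hidden parameter:", x_hat[1])
    ```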

  14. FastStats: Measles

    Science.gov (United States)

    Percent vaccinated against measles, mumps, rubella: 91.9% (2015). Percent of adolescents aged 13-17 years vaccinated against measles, mumps, ...

  15. Fast neutrons dosimetry

    International Nuclear Information System (INIS)

    Rzyski, B.M.

    1977-01-01

    A proton recoil technique has been developed for inducing thermoluminescence with incident fast neutrons. CaF2 was used as the TL phosphor, and cane sugar and polyethylene were used as proton radiators. The phosphor and the hydrogenous material powders were well mixed, encapsulated in glass tubes and exposed to Am-Be sources, resulting in recoils from incident fast neutrons of energy between 0.25 and 11.25 MeV. The intrinsic response of pure CaF2 to fast neutrons without a hydrogenous radiator was checked by using LiF (TLD-700). Glow curves were recorded from room temperature up to 350 °C after different doses of neutrons and gamma rays from 60Co. The first collision dose due to fast neutrons in tissue-like materials such as cane sugar and polyethylene was also calculated [pt]

  16. Dounreay fast reactor

    International Nuclear Information System (INIS)

    Maclennan, R.; Eggar, T.; Skeet, T.

    1992-01-01

    The short debate which followed a private notice question asking for a statement on Government policy on the future of the European fast breeder nuclear research programme is reported verbatim. In response to the question, the Minister for Energy said that the Government had decided in 1988 that the Dounreay prototype fast reactor would close in 1994. That decision had been confirmed. Funding of fast breeder research and development beyond 1993 is not a priority as commercialization is not expected until well into the next century. Dounreay will be supported financially until 1994 and then for its subsequent decommissioning and reprocessing of spent fuel. The debate raised issues such as Britain losing its lead in fast breeder research, loss of jobs and the Government's nuclear policy in general. However, the Government's position was that the research had reached a stage where it could be left and returned to in the future. (UK)

  17. CMS Fast Facts

    Data.gov (United States)

    U.S. Department of Health & Human Services — CMS has developed a new quick reference statistical summary on annual CMS program and financial data. CMS Fast Facts includes summary information on total program...

  18. Brug af faste vendinger

    DEFF Research Database (Denmark)

    Bergenholtz, Henning; Bjærge, Esben

    The dictionary contains text-production information for approximately 17,000 idioms, proverbs, winged words and other fixed expressions, including information on meaning, grammar, collocations, examples, synonyms and antonyms.

  19. Detection of Internal Short Circuit in Lithium Ion Battery Using Model-Based Switching Model Method

    Directory of Open Access Journals (Sweden)

    Minhwan Seo

    2017-01-01

    Full Text Available Early detection of an internal short circuit (ISCr) in a Li-ion battery can prevent it from undergoing thermal runaway, and thereby ensure battery safety. In this paper, a model-based switching model method (SMM) is proposed to detect the ISCr in the Li-ion battery. The SMM updates the model of the Li-ion battery with ISCr to improve the accuracy of the ISCr resistance R_ISCf estimates. The open circuit voltage (OCV) and the state of charge (SOC) are estimated by applying the equivalent circuit model, and by using the recursive least squares algorithm and the relation between OCV and SOC. As a fault index, R_ISCf is estimated from the estimated OCVs and SOCs to detect the ISCr, and used to update the model; this process yields accurate estimates of OCV and R_ISCf. Then the next R_ISCf is estimated and used to update the model iteratively. Simulation data from a MATLAB/Simulink model and experimental data verify that this algorithm shows high accuracy of R_ISCf estimates to detect the ISCr, thereby helping the battery management system to fulfill early detection of the ISCr.
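    The recursive least squares step mentioned above can be sketched generically: terminal voltage is regressed online on past voltage and current samples, and the fitted coefficients are then mapped back to equivalent-circuit parameters. The first-order regressor layout below is a common simplification assumed here, not necessarily the exact form used in the paper.

    ```python
    # Sketch: recursive least squares (RLS) with a forgetting factor.
    import numpy as np

    class RLS:
        def __init__(self, n, lam=0.999):
            self.theta = np.zeros(n)     # parameter vector
            self.P = np.eye(n) * 1e3     # inverse-correlation matrix
            self.lam = lam               # forgetting factor

        def update(self, phi, y):
            """phi: regressor vector; y: measured terminal voltage."""
            Pphi = self.P @ phi
            k = Pphi / (self.lam + phi @ Pphi)            # gain vector
            self.theta += k * (y - phi @ self.theta)      # parameter update
            self.P = (self.P - np.outer(k, Pphi)) / self.lam
            return self.theta

    # Usage: regress v[k] on [v[k-1], i[k], i[k-1], 1]; mapping the fitted
    # coefficients back to R0, R1, C1 and an OCV offset is not shown.
    rls = RLS(4)
    theta = rls.update(np.array([3.70, 1.0, 0.9, 1.0]), 3.68)
    ```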

  20. Identifiability study of the proteins degradation model, based on ADM1, using simultaneous batch experiments

    DEFF Research Database (Denmark)

    Flotats, X.; Palatsi, J.; Ahring, Birgitte Kiær

    2006-01-01

    are not inhibiting the hydrolysis process. The ADM1 model adequately expressed the consecutive steps of hydrolysis and acidogenesis, with estimated kinetic values corresponding to a fast acidogenesis and slower hydrolysis. The hydrolysis was found to be the rate-limiting step of anaerobic degradation. Estimation of yield coefficients based on the relative initial slopes of VFA profiles obtained in a simple batch experiment produced satisfactory results. From the identification study, it was concluded that it is possible to uniquely determine the related kinetic parameter values for protein degradation if the evolution of amino acids is measured in simultaneous batch experiments with different initial protein and amino acid concentrations.

  1. Model Based Analysis and Test Generation for Flight Software

    Science.gov (United States)

    Pasareanu, Corina S.; Schumann, Johann M.; Mehlitz, Peter C.; Lowry, Mike R.; Karsai, Gabor; Nine, Harmon; Neema, Sandeep

    2009-01-01

    We describe a framework for model-based analysis and test case generation in the context of a heterogeneous model-based development paradigm that uses and combines MathWorks and UML 2.0 models and the associated code generation tools. This paradigm poses novel challenges to analysis and test case generation that, to the best of our knowledge, have not been addressed before. The framework is based on a common intermediate representation for different modeling formalisms and leverages and extends model checking and symbolic execution tools for model analysis and test case generation, respectively. We discuss the application of our framework to software models for a NASA flight mission.

  2. Towards model-based testing of electronic funds transfer systems

    OpenAIRE

    Asaadi, H.R.; Khosravi, R.; Mousavi, M.R.; Noroozi, N.

    2010-01-01

    We report on our first experience with applying model-based testing techniques to an operational Electronic Funds Transfer (EFT) switch. The goal is to test the conformance of the EFT switch to the standard flows described by the ISO 8583 standard. To this end, we first make a formalization of the transaction flows specified in the ISO 8583 standard in terms of a Labeled Transition System (LTS). This formalization paves the way for model-based testing based on the formal notion of Input-Outpu...

  3. Fasting and Urinary Stones

    Directory of Open Access Journals (Sweden)

    Ali Shamsa

    2013-11-01

    Full Text Available Introduction: Fasting is considered one of the most important practices of Islam, and according to Prophet Mohammad, fasting is obligatory upon Muslims. The aim of this study is to evaluate the effects of fasting on urinary stones. Materials and Methods: Very few studies have been carried out on urinary stones and the effect of Ramadan fasting. The sources of the present study are Medline and articles presented by local and Muslim researchers. Meanwhile, since we are acquainted with three well-known researchers in the field of urology, we contacted them via email and asked for their professional opinions. Results: The results of studies on the relationship between urinary stones and their incidence in Ramadan are inconsistent, and sometimes even contradictory. Some believe that the increased incidence of urinary stones in Ramadan is related not to fasting, but to the rise of air temperature in the hot months and an increase in humidity. Conclusion: Numerous biological and behavioral changes occur in people who fast in Ramadan, and some researchers believe that urinary stone incidence increases during this month.

  4. Estimation of Motion Vector Fields

    DEFF Research Database (Denmark)

    Larsen, Rasmus

    1993-01-01

    This paper presents an approach to the estimation of 2-D motion vector fields from time-varying image sequences. We use a piecewise smooth model based on coupled vector/binary Markov random fields. We find the maximum a posteriori solution by simulated annealing. The algorithm generates sample fields by means of stochastic relaxation implemented via the Gibbs sampler.
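    The computational core, stochastic relaxation by Gibbs sampling under an annealing schedule, can be sketched for a plain binary (Ising-like) field; the actual method couples such a binary line field with a vector motion field, which is omitted here, and the schedule is an arbitrary assumption.

    ```python
    # Sketch: Gibbs sampler with annealing for a binary Markov random field.
    import numpy as np

    rng = np.random.default_rng(3)
    H, W, beta0, sweeps = 32, 32, 0.2, 100
    field = rng.choice([-1, 1], size=(H, W))

    for sweep in range(sweeps):
        beta = beta0 * (1.02 ** sweep)   # annealing: inverse temperature grows
        for i in range(H):
            for j in range(W):
                nb = (field[(i - 1) % H, j] + field[(i + 1) % H, j]
                      + field[i, (j - 1) % W] + field[i, (j + 1) % W])
                p_plus = 1.0 / (1.0 + np.exp(-2.0 * beta * nb))
                field[i, j] = 1 if rng.random() < p_plus else -1
    ```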

  5. Dynamic Chest Image Analysis: Model-Based Perfusion Analysis in Dynamic Pulmonary Imaging

    Directory of Open Access Journals (Sweden)

    Kiuru Aaro

    2003-01-01

    Full Text Available The "Dynamic Chest Image Analysis" project aims to develop model-based computer analysis and visualization methods for showing focal and general abnormalities of lung ventilation and perfusion based on a sequence of digital chest fluoroscopy frames collected with the dynamic pulmonary imaging technique. We have proposed and evaluated a multiresolutional method with an explicit ventilation model for ventilation analysis. This paper presents a new model-based method for pulmonary perfusion analysis. According to perfusion properties, we first devise a novel mathematical function to form a perfusion model. A simple yet accurate approach is further introduced to extract cardiac systolic and diastolic phases from the heart, so that this cardiac information may be utilized to accelerate the perfusion analysis and improve its sensitivity in detecting pulmonary perfusion abnormalities. This makes perfusion analysis not only fast but also robust in computation; consequently, perfusion analysis becomes computationally feasible without using contrast media. Our clinical case studies with 52 patients show that this technique is effective for pulmonary embolism even without using contrast media, demonstrating consistent correlations with computed tomography (CT and nuclear medicine (NM studies. This fluoroscopical examination takes only about 2 seconds for perfusion study with only low radiation dose to patient, involving no preparation, no radioactive isotopes, and no contrast media.

  6. Fuzzy model-based adaptive synchronization of time-delayed chaotic systems

    International Nuclear Information System (INIS)

    Vasegh, Nastaran; Majd, Vahid Johari

    2009-01-01

    In this paper, fuzzy model-based synchronization of a class of first-order chaotic systems described by delayed differential equations is addressed. To design the fuzzy controller, the chaotic system is modeled by a Takagi-Sugeno fuzzy system, taking into account the properties of the nonlinear part of the system. Assuming that the parameters of the chaotic system are unknown, an adaptive law is derived to estimate these unknown parameters, and the stability of the error dynamics is guaranteed by Lyapunov theory. Numerical examples are given to demonstrate the validity of the proposed adaptive synchronization approach.

  7. Model based on diffuse logic for the construction of indicators of urban vulnerability in natural phenomena

    International Nuclear Information System (INIS)

    Garcia L, Carlos Eduardo; Hurtado G, Jorge Eduardo

    2003-01-01

    Considering the vulnerability of an urban system in a holistic way and taking into account natural, technological and social factors, a model based upon fuzzy logic is proposed that allows the vulnerability of any system to potentially catastrophic natural phenomena to be estimated. The model incorporates quantitative and qualitative variables in a dynamic system, in which variations in one of them have a positive or negative impact on the rest. An urban system model and an indicator model to determine the vulnerability due to natural phenomena were designed

  8. A Probabilistic Short-Term Water Demand Forecasting Model Based on the Markov Chain

    Directory of Open Access Journals (Sweden)

    Francesca Gagliardi

    2017-07-01

    Full Text Available This paper proposes a short-term water demand forecasting method based on the use of the Markov chain. This method provides estimates of future demands by calculating probabilities that the future demand value will fall within pre-assigned intervals covering the expected total variability. More specifically, two models based on homogeneous and non-homogeneous Markov chains were developed and presented. These models, together with two benchmark models (based on artificial neural network and naïve methods), were applied to three real-life case studies for the purpose of forecasting the respective water demands from 1 to 24 h ahead. The results obtained show that the model based on a homogeneous Markov chain provides more accurate short-term forecasts than the one based on a non-homogeneous Markov chain, which is in line with the artificial neural network model. Both Markov chain models enable probabilistic information regarding the stochastic demand forecast to be easily obtained.
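    The homogeneous-chain forecaster can be sketched directly: demands are binned into intervals, a transition matrix is estimated from observed state pairs, and the probabilistic forecast is the matrix row for the current state. The synthetic series and bin count are invented for illustration.

    ```python
    # Sketch: probabilistic next-step demand forecast via a homogeneous
    # Markov chain over demand intervals.
    import numpy as np

    rng = np.random.default_rng(4)
    demand = (100 + 10 * np.sin(np.linspace(0, 40 * np.pi, 2000))
              + rng.normal(0, 2, 2000))

    n_bins = 8
    edges = np.quantile(demand, np.linspace(0, 1, n_bins + 1))
    states = np.clip(np.digitize(demand, edges[1:-1]), 0, n_bins - 1)

    T = np.zeros((n_bins, n_bins))            # transition-count matrix
    for a, b in zip(states[:-1], states[1:]):
        T[a, b] += 1
    T /= np.maximum(T.sum(axis=1, keepdims=True), 1)

    print("P(next demand interval):", T[states[-1]])
    ```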

  9. Model-based and model-free Pavlovian reward learning: revaluation, revision, and revelation.

    Science.gov (United States)

    Dayan, Peter; Berridge, Kent C

    2014-06-01

    Evidence supports at least two methods for learning about reward and punishment and making predictions for guiding actions. One method, called model-free, progressively acquires cached estimates of the long-run values of circumstances and actions from retrospective experience. The other method, called model-based, uses representations of the environment, expectations, and prospective calculations to make cognitive predictions of future value. Extensive attention has been paid to both methods in computational analyses of instrumental learning. By contrast, although a full computational analysis has been lacking, Pavlovian learning and prediction has typically been presumed to be solely model-free. Here, we revise that presumption and review compelling evidence from Pavlovian revaluation experiments showing that Pavlovian predictions can involve their own form of model-based evaluation. In model-based Pavlovian evaluation, prevailing states of the body and brain influence value computations, and thereby produce powerful incentive motivations that can sometimes be quite new. We consider the consequences of this revised Pavlovian view for the computational landscape of prediction, response, and choice. We also revisit differences between Pavlovian and instrumental learning in the control of incentive motivation.

  10. Non-frontal Model Based Approach to Forensic Face Recognition

    NARCIS (Netherlands)

    Dutta, A.; Veldhuis, Raymond N.J.; Spreeuwers, Lieuwe Jan

    2012-01-01

    In this paper, we propose a non-frontal model based approach which ensures that a face recognition system always gets to compare images having similar view (or pose). This requires a virtual suspect reference set that consists of non-frontal suspect images having pose similar to the surveillance...

  11. Model-Based GUI Testing Using Uppaal at Novo Nordisk

    DEFF Research Database (Denmark)

    H. Hjort, Ulrik; Rasmussen, Jacob Illum; Larsen, Kim Guldstrand

    2009-01-01

    This paper details a collaboration between Aalborg University and Novo Nordisk in developing an automatic model-based test generation tool for system testing of the graphical user interface of a medical device on an embedded platform. The tool takes as input a UML Statemachine model and generates...
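
    A hedged sketch of the underlying idea (a toy statemachine with invented GUI events, not the Uppaal/UML tooling of the paper): depth-bounded enumeration of event sequences through the model yields abstract test cases.

```python
import itertools

# transitions[state] = [(event, next_state), ...] -- an invented GUI model
transitions = {
    "Home":      [("open_menu", "Menu")],
    "Menu":      [("select_dose", "DoseEntry"), ("back", "Home")],
    "DoseEntry": [("confirm", "Home"), ("cancel", "Menu")],
}

def test_cases(start, depth):
    # Depth-bounded enumeration of event sequences from the model
    frontier = [(start, [])]
    for _ in range(depth):
        frontier = [(nxt, path + [evt])
                    for state, path in frontier
                    for evt, nxt in transitions.get(state, [])]
        yield from (path for _, path in frontier)

for case in itertools.islice(test_cases("Home", 3), 8):
    print(" -> ".join(case))   # each printed sequence is one abstract test case
```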

  12. Automated model-based testing of hybrid systems

    NARCIS (Netherlands)

    Osch, van M.P.W.J.

    2009-01-01

    In automated model-based input-output conformance testing, tests are automatically generated from a specification and automatically executed on an implementation. Input is applied to the implementation and output is observed from the implementation. If the observed output is allowed according to...

  13. Model Based Fault Detection in a Centrifugal Pump Application

    DEFF Research Database (Denmark)

    Kallesøe, Carsten; Cocquempot, Vincent; Izadi-Zamanabadi, Roozbeh

    2006-01-01

    A model based approach for fault detection in a centrifugal pump, driven by an induction motor, is proposed in this paper. The fault detection algorithm is derived using a combination of structural analysis, observer design and Analytical Redundancy Relation (ARR) design. Structural considerations...
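
    The observer side of such a scheme can be sketched as follows, for a generic first-order plant rather than the paper's pump model; the gains, fault size, and threshold are illustrative. A residual r = y - y_hat stays near zero in the fault-free case and crosses the threshold once an additive sensor fault appears.

```python
import numpy as np

dt, n = 0.01, 2000
a, b = -2.0, 1.5           # assumed plant: x' = a x + b u, y = x
L = 5.0                    # observer gain
u = np.ones(n)             # constant input command
x = x_hat = 0.0
residual = np.zeros(n)

for k in range(n):
    fault = 0.4 if k > 1200 else 0.0          # additive sensor fault from t = 12 s
    x += dt * (a * x + b * u[k])              # true plant
    y = x + fault                             # measurement
    # Observer: x_hat' = a x_hat + b u + L (y - x_hat)
    x_hat += dt * (a * x_hat + b * u[k] + L * (y - x_hat))
    residual[k] = y - x_hat

alarm = np.abs(residual) > 0.05               # illustrative detection threshold
print("fault flagged from step", np.argmax(alarm))   # fires shortly after step 1200
```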

  14. Perceptual decision neurosciences: a model-based review

    NARCIS (Netherlands)

    Mulder, M.J.; van Maanen, L.; Forstmann, B.U.

    2014-01-01

    In this review we summarize findings published over the past 10 years focusing on the neural correlates of perceptual decision-making. Importantly, this review highlights only studies that employ a model-based approach, i.e., they use quantitative cognitive models in combination with neuroscientific...

  15. A comparative study of independent particle model based ...

    Indian Academy of Sciences (India)

    We find that, among these three independent-particle-model-based methods, the ss-VSCF method provides the most accurate thermal averages, followed by t-SCF, while v-VSCF is the least accurate. However, ss-VSCF is found to be computationally very expensive for large molecules. The t-SCF gives ...

  16. Model-based safety architecture framework for complex systems

    NARCIS (Netherlands)

    Schuitemaker, Katja; Rajabali Nejad, Mohammadreza; Braakhuis, J.G.; Podofillini, Luca; Sudret, Bruno; Stojadinovic, Bozidar; Zio, Enrico; Kröger, Wolfgang

    2015-01-01

    The shift toward transparency and the general public's rising need for safety, together with the increasing complexity and interdisciplinarity of modern safety-critical Systems of Systems (SoS), have resulted in a Model-Based Safety Architecture Framework (MBSAF) for capturing and sharing architectural...

  17. Adopting a Models-Based Approach to Teaching Physical Education

    Science.gov (United States)

    Casey, Ashley; MacPhail, Ann

    2018-01-01

    Background: The popularised notion of models-based practice (MBP) is one that focuses on the delivery of a model, e.g. Cooperative Learning, Sport Education, Teaching Personal and Social Responsibility, Teaching Games for Understanding. Indeed, while an abundance of research studies have examined the delivery of a single model and some have…

  18. Model-based analysis and simulation of regenerative heat wheel

    DEFF Research Database (Denmark)

    Wu, Zhuang; Melnik, Roderick V. N.; Borup, F.

    2006-01-01

    The rotary regenerator (also called the heat wheel) is an important component in energy-intensive sectors and is used in many heat recovery systems. In this paper, a model-based analysis of a rotary regenerator is carried out, with major emphasis on the development and implementation of...
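
    For orientation, a classic shortcut from the heat-exchanger literature estimates rotary-regenerator effectiveness from the counterflow epsilon-NTU relation with a matrix-capacity correction. This stands in for, and is much simpler than, the paper's dynamic model; all numbers below are illustrative.

```python
import math

def regenerator_effectiveness(NTU, C_ratio, Cr_star):
    # Counterflow effectiveness for C_ratio = Cmin/Cmax < 1
    eps_cf = (1 - math.exp(-NTU * (1 - C_ratio))) / \
             (1 - C_ratio * math.exp(-NTU * (1 - C_ratio)))
    # Rotary-matrix correction, Cr_star = (matrix heat capacity rate) / Cmin
    return eps_cf * (1 - 1 / (9 * Cr_star**1.93))

print(f"{regenerator_effectiveness(NTU=4.0, C_ratio=0.95, Cr_star=5.0):.3f}")
```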

  19. Towards model-based testing of electronic funds transfer systems

    NARCIS (Netherlands)

    Asaadi, H.R.; Khosravi, R.; Mousavi, M.R.; Noroozi, N.; Arbab, F.; Sirjani, M.

    2012-01-01

    We report on our first experience with applying model-based testing techniques to an operational Electronic Funds Transfer (EFT) switch. The goal is to test the conformance of the EFT switch to the standard flows described by the ISO 8583 standard. To this end, we first make a formalization of the...
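
    A hedged sketch of the conformance-checking idea: a toy state machine over common ISO 8583 message-type indicators (0200/0210/0420/0430) replays an observed trace and flags any transition the model does not allow. The flow model below is a drastic abstraction, not the standard's specification.

```python
# model[state] = {observed_message_type: next_state}
model = {
    "idle":             {"0200": "auth_pending"},        # financial request
    "auth_pending":     {"0210": "idle",                 # request response
                         "0420": "reversal_pending"},    # reversal advice
    "reversal_pending": {"0430": "idle"},                # reversal response
}

def conforms(trace, state="idle"):
    # Replay the observed message trace against the model
    for msg in trace:
        if msg not in model.get(state, {}):
            return False, f"unexpected {msg} in state {state}"
        state = model[state][msg]
    return True, state

print(conforms(["0200", "0210", "0200", "0420", "0430"]))  # conforming trace
print(conforms(["0200", "0430"]))                          # non-conforming trace
```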

  20. Towards model-based testing of electronic funds transfer systems

    NARCIS (Netherlands)

    Asaadi, H.R.; Khosravi, R.; Mousavi, M.R.; Noroozi, N.

    2010-01-01

    We report on our first experience with applying model-based testing techniques to an operational Electronic Funds Transfer (EFT) switch. The goal is to test the conformance of the EFT switch to the standard flows described by the ISO 8583 standard. To this end, we first make a formalization of the...