WorldWideScience

Sample records for moving average-based estimators

  1. Compact and accurate linear and nonlinear autoregressive moving average model parameter estimation using Laguerre functions

    DEFF Research Database (Denmark)

    Chon, K H; Cohen, R J; Holstein-Rathlou, N H

    1997-01-01

    A linear and nonlinear autoregressive moving average (ARMA) identification algorithm is developed for modeling time series data. The algorithm uses Laguerre expansion of kernels (LEK) to estimate Volterra-Wiener kernels. However, instead of estimating linear and nonlinear system dynamics via moving...... average models, as is the case for the Volterra-Wiener analysis, we propose an ARMA model-based approach. The proposed algorithm is essentially the same as LEK, but this algorithm is extended to include past values of the output as well. Thus, all of the advantages associated with using the Laguerre...

  2. Self-similarity of higher-order moving averages

    Science.gov (United States)

    Arianos, Sergio; Carbone, Anna; Türk, Christian

    2011-10-01

    In this work, higher-order moving average polynomials are defined by straightforward generalization of the standard moving average. The self-similarity of the polynomials is analyzed for fractional Brownian series and quantified in terms of the Hurst exponent H by using the detrending moving average method. We prove that the exponent H of the fractional Brownian series and of the detrending moving average variance asymptotically agree for the first-order polynomial. These asymptotic values are compared with results obtained by simulation. The higher-order polynomials correspond to trend estimates at shorter time scales as the degree of the polynomial increases. Importantly, increasing the polynomial degree does not require changing the moving average window. Thus, trends at different time scales can be obtained on data sets of the same size. These polynomials could be interesting for applications relying on trend estimates over different time horizons (financial markets) or on filtering at different frequencies (image analysis).
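
    A minimal sketch of the first-order case: the detrending moving average (DMA) variance is computed around a trailing moving average, and the Hurst exponent is read off the power-law scaling of its square root with the window size. Function names and window choices are illustrative, not taken from the paper.

    ```python
    import numpy as np

    def dma_variance(y, window):
        """Variance of the series around its trailing moving average (first-order DMA)."""
        ma = np.convolve(y, np.ones(window) / window, mode="valid")
        resid = y[window - 1:] - ma              # align each point with its own moving average
        return np.mean(resid ** 2)

    def hurst_dma(y, windows):
        """Estimate H from the scaling sigma_DMA(n) ~ n^H across window sizes n."""
        sigmas = [np.sqrt(dma_variance(y, n)) for n in windows]
        slope, _ = np.polyfit(np.log(windows), np.log(sigmas), 1)
        return slope

    # Sanity check on a plain random walk, for which H should be close to 0.5:
    rng = np.random.default_rng(0)
    walk = np.cumsum(rng.standard_normal(100_000))
    print(hurst_dma(walk, windows=[8, 16, 32, 64, 128, 256]))
    ```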

  3. Estimation and Forecasting in Vector Autoregressive Moving Average Models for Rich Datasets

    DEFF Research Database (Denmark)

    Dias, Gustavo Fruet; Kapetanios, George

    We address the issue of modelling and forecasting macroeconomic variables using rich datasets, by adopting the class of Vector Autoregressive Moving Average (VARMA) models. We overcome the estimation issue that arises with this class of models by implementing an iterative ordinary least squares (...

  4. An Invariance Property for the Maximum Likelihood Estimator of the Parameters of a Gaussian Moving Average Process

    OpenAIRE

    Godolphin, E. J.

    1980-01-01

    It is shown that the estimation procedure of Walker leads to estimates of the parameters of a Gaussian moving average process which are asymptotically equivalent to the maximum likelihood estimates proposed by Whittle and represented by Godolphin.

  5. Robust nonlinear autoregressive moving average model parameter estimation using stochastic recurrent artificial neural networks

    DEFF Research Database (Denmark)

    Chon, K H; Hoyer, D; Armoundas, A A

    1999-01-01

    In this study, we introduce a new approach for estimating linear and nonlinear stochastic autoregressive moving average (ARMA) model parameters, given a corrupt signal, using artificial recurrent neural networks. This new approach is a two-step approach in which the parameters of the deterministic...... part of the stochastic ARMA model are first estimated via a three-layer artificial neural network (deterministic estimation step) and then reestimated using the prediction error as one of the inputs to the artificial neural networks in an iterative algorithm (stochastic estimation step). The prediction...... error is obtained by subtracting the corrupt signal of the estimated ARMA model obtained via the deterministic estimation step from the system output response. We present computer simulation examples to show the efficacy of the proposed stochastic recurrent neural network approach in obtaining accurate...

  6. Estimation of direction of arrival of a moving target using subspace based approaches

    Science.gov (United States)

    Ghosh, Ripul; Das, Utpal; Akula, Aparna; Kumar, Satish; Sardana, H. K.

    2016-05-01

    In this work, array processing techniques based on subspace decomposition of the signal have been evaluated for estimation of the direction of arrival of moving targets using acoustic signatures. Three subspace based approaches - Incoherent Wideband Multiple Signal Classification (IWM), Least Squares Estimation of Signal Parameters via Rotational Invariance Techniques (LS-ESPRIT) and Total Least Squares ESPRIT (TLS-ESPRIT) - are considered. Their performance is compared with conventional time delay estimation (TDE) approaches such as Generalized Cross Correlation (GCC) and Average Square Difference Function (ASDF). Performance evaluation has been conducted on experimentally generated data consisting of acoustic signatures of four different types of civilian vehicles moving in defined geometrical trajectories. Mean absolute error and standard deviation of the DOA estimates w.r.t. ground truth are used as performance evaluation metrics. Lower statistical values of mean error confirm the superiority of subspace based approaches over TDE based techniques. Amongst the compared methods, LS-ESPRIT indicated better performance.

  7. Spatial analysis based on variance of moving window averages

    OpenAIRE

    Wu, B M; Subbarao, K V; Ferrandino, F J; Hao, J J

    2006-01-01

    A new method for analysing spatial patterns was designed based on the variance of moving window averages (VMWA), which can be directly calculated in geographical information systems or a spreadsheet program (e.g. MS Excel). Different types of artificial data were generated to test the method. Regardless of data types, the VMWA method correctly determined the mean cluster sizes. This method was also employed to assess spatial patterns in historical plant disease survey data encompassing both a...
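
    The statistic itself is a one-liner per window size. A sketch under the assumption that VMWA is the variance of the window means and that its behaviour as a function of window size carries the cluster-size information (the interpretation in the comment is ours, not the paper's):

    ```python
    import numpy as np

    def vmwa(data, window):
        """Variance of moving window averages (VMWA) of a 1-D sample sequence."""
        means = np.convolve(data, np.ones(window) / window, mode="valid")
        return np.var(means)

    # Illustrative data: runs of diseased (1) and healthy (0) plants, cluster size ~10.
    # How VMWA decays as the window grows past the cluster size is the diagnostic.
    rng = np.random.default_rng(1)
    field = np.repeat(rng.integers(0, 2, size=200), 10)
    for w in (2, 5, 10, 20, 40):
        print(w, round(float(vmwa(field, w)), 4))
    ```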

  8. Identification of moving vehicle forces on bridge structures via moving average Tikhonov regularization

    Science.gov (United States)

    Pan, Chu-Dong; Yu, Ling; Liu, Huan-Lin

    2017-08-01

    Traffic-induced moving force identification (MFI) is a typical inverse problem in the field of bridge structural health monitoring. Many regularization-based methods have been proposed for MFI. However, the MFI accuracy obtained from the existing methods is low when the moving forces enter and exit a bridge deck, due to the low sensitivity of structural responses to the forces at these zones. To overcome this shortcoming, a novel moving average Tikhonov regularization method is proposed for MFI by combining Tikhonov regularization with moving average concepts. Firstly, the bridge-vehicle interaction moving force is assumed to be a discrete finite signal with stable average value (DFS-SAV). Secondly, the reasonable signal feature of DFS-SAV is quantified and introduced to improve the penalty function ($\|x\|_2^2$) defined in the classical Tikhonov regularization. Then, a feasible two-step strategy is proposed for selecting the regularization parameter and the balance coefficient defined in the improved penalty function. Finally, both numerical simulations on a simply-supported beam and laboratory experiments on a hollow tube beam are performed to assess the accuracy and feasibility of the proposed method. The illustrated results show that the moving forces can be accurately identified with strong robustness. Related issues, such as the selection of the moving window length and the effects of different penalty functions and car speeds, are discussed as well.
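
    The classical step can be written as ridge regression. One way to fold in a moving average, sketched below on assumed notation (A maps forces to responses, b is the measured response), is to penalize the deviation of the force history from its own moving average rather than its raw norm. This illustrates the idea only; it is not the paper's exact penalty or parameter-selection strategy.

    ```python
    import numpy as np

    def identify_forces(A, b, lam, window):
        """Tikhonov-type MFI sketch with a moving-average-based penalty.

        Solves  min ||A x - b||^2 + lam * ||(I - S) x||^2,  where S applies a
        centered moving average to x. Penalizing x - S x favors force histories
        that fluctuate little around their local average (a stand-in for the
        stable-average DFS-SAV assumption).
        """
        n = A.shape[1]
        S = np.zeros((n, n))
        half = window // 2
        for i in range(n):
            lo, hi = max(0, i - half), min(n, i + half + 1)
            S[i, lo:hi] = 1.0 / (hi - lo)            # row-normalized averaging operator
        L = np.eye(n) - S
        return np.linalg.solve(A.T @ A + lam * (L.T @ L), A.T @ b)
    ```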

  9. Capillary Electrophoresis Sensitivity Enhancement Based on Adaptive Moving Average Method.

    Science.gov (United States)

    Drevinskas, Tomas; Telksnys, Laimutis; Maruška, Audrius; Gorbatsova, Jelena; Kaljurand, Mihkel

    2018-06-05

    In the present work, we demonstrate a novel approach to improve the sensitivity of "out of lab" portable capillary electrophoretic measurements. Nowadays, many signal enhancement methods are (i) underused (nonoptimal), (ii) overused (distorting the data), or (iii) inapplicable in field-portable instrumentation because of a lack of computational power. The described innovative migration velocity-adaptive moving average method uses an optimal averaging window size and can be easily implemented with a microcontroller. Contactless conductivity detection was used as a model for the development of the signal processing method and the demonstration of its impact on sensitivity. The frequency characteristics of the recorded electropherograms and peaks were clarified. Higher electrophoretic mobility analytes exhibit higher-frequency peaks, whereas lower electrophoretic mobility analytes exhibit lower-frequency peaks. On the basis of the obtained data, a migration velocity-adaptive moving average algorithm was created, adapted, and programmed into capillary electrophoresis data-processing software. Employing the developed algorithm, each data point is processed depending on the migration time of the analyte. Because of the implemented migration velocity-adaptive moving average method, the signal-to-noise ratio improved up to 11 times for a sampling frequency of 4.6 Hz and up to 22 times for a sampling frequency of 25 Hz. This paper could potentially be used as a methodological guideline for the development of new smoothing algorithms that require adaptive conditions in capillary electrophoresis and other separation methods.
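
    A toy version of the velocity-adaptive idea: smooth each point with a window whose width grows with migration time, so slow, late, broad peaks receive heavier averaging than fast, early, sharp ones. The linear window-growth law and its constants are assumptions for illustration.

    ```python
    import numpy as np

    def adaptive_moving_average(signal, base_window=3, growth=0.002):
        """Migration-time-adaptive moving average for an electropherogram trace."""
        signal = np.asarray(signal, dtype=float)
        out = np.empty(len(signal))
        for i in range(len(signal)):
            w = base_window + int(growth * i)        # window widens with migration time
            lo, hi = max(0, i - w), min(len(signal), i + w + 1)
            out[i] = signal[lo:hi].mean()            # plain average inside the local window
        return out
    ```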

  10. Modeling methane emission via the infinite moving average process

    Czech Academy of Sciences Publication Activity Database

    Jordanova, D.; Dušek, Jiří; Stehlík, M.

    2013-01-01

    Roč. 122, - (2013), s. 40-49 ISSN 0169-7439 R&D Projects: GA MŠk(CZ) ED1.1.00/02.0073; GA ČR(CZ) GAP504/11/1151 Institutional support: RVO:67179843 Keywords : Environmental chemistry * Pareto tails * t-Hill estimator * Weak consistency * Moving average process * Methane emission model Subject RIV: EH - Ecology, Behaviour Impact factor: 2.381, year: 2013

  11. Kumaraswamy autoregressive moving average models for double bounded environmental data

    Science.gov (United States)

    Bayer, Fábio Mariano; Bayer, Débora Missio; Pumi, Guilherme

    2017-12-01

    In this paper we introduce the Kumaraswamy autoregressive moving average models (KARMA), which is a dynamic class of models for time series taking values in the double bounded interval (a,b) following the Kumaraswamy distribution. The Kumaraswamy family of distributions is widely applied in many areas, especially hydrology and related fields. Classical examples are time series representing rates and proportions observed over time. In the proposed KARMA model, the median is modeled by a dynamic structure containing autoregressive and moving average terms, time-varying regressors, unknown parameters and a link function. We introduce the new class of models and discuss conditional maximum likelihood estimation, hypothesis testing inference, diagnostic analysis and forecasting. In particular, we provide closed-form expressions for the conditional score vector and conditional Fisher information matrix. An application to environmental real data is presented and discussed.

  12. Quantified moving average strategy of crude oil futures market based on fuzzy logic rules and genetic algorithms

    Science.gov (United States)

    Liu, Xiaojia; An, Haizhong; Wang, Lijun; Guan, Qing

    2017-09-01

    The moving average strategy is a technical indicator that can generate trading signals to assist investment. While the trading signals tell traders when to buy or sell, the moving average cannot tell the trading volume, which is a crucial factor for investment. This paper proposes a fuzzy moving average strategy, in which fuzzy logic rules are used to determine the strength of trading signals, i.e., the trading volume. To compose one fuzzy logic rule, we use four types of moving averages, the length of the moving average period, the fuzzy extent, and the recommended value. Ten fuzzy logic rules form a fuzzy set, which generates a rating level that decides the trading volume. In this process, we apply genetic algorithms to identify an optimal fuzzy logic rule set and utilize crude oil futures prices from the New York Mercantile Exchange (NYMEX) as the experiment data. Each experiment is repeated 20 times. The results show that, first, the fuzzy moving average strategy obtains a more stable rate of return than the plain moving average strategies; second, the holding-amount series is highly sensitive to the price series; third, simple moving average methods are more efficient; and lastly, the fuzzy extents of extremely low, high, and very high are more popular. These results are helpful in investment decisions.
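
    A stripped-down sketch of the signal/strength split: the direction comes from a plain dual moving average crossover, while the position size plays the role of the paper's fuzzy rating level. Here the "strength" is just the normalized gap between the two averages — a crude stand-in for the ten GA-optimized fuzzy rules.

    ```python
    import pandas as pd

    def ma_signal_with_strength(prices: pd.Series, fast: int = 5, slow: int = 20):
        """Return trade direction (-1/0/+1) and a size factor in [0, 1]."""
        ma_fast = prices.rolling(fast).mean()
        ma_slow = prices.rolling(slow).mean()
        direction = (ma_fast > ma_slow).astype(int) - (ma_fast < ma_slow).astype(int)
        gap = (ma_fast - ma_slow).abs() / prices     # relative distance between the averages
        strength = (gap / gap.max()).fillna(0.0)     # stand-in for the fuzzy rating level
        return direction, strength
    ```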

  13. Power Based Phase-Locked Loop Under Adverse Conditions with Moving Average Filter for Single-Phase System

    Directory of Open Access Journals (Sweden)

    Menxi Xie

    2017-06-01

    Full Text Available A high performance synchronization method is critical for a grid connected power converter. For a single-phase system, the power based phase-locked loop (pPLL) uses a multiplier as the phase detector (PD). When the single-phase grid voltage is distorted, the phase error information contains AC disturbances oscillating at integer multiples of the fundamental frequency, which lead to detection error. This paper presents a new scheme based on a moving average filter (MAF) applied in-loop of the pPLL. The signal characteristic of the phase error is discussed in detail. A predictive rule is adopted to compensate the delay induced by the MAF, thus achieving fast dynamic response. When the frequency deviates from nominal, the estimated frequency is fed back to adjust the filter window length of the MAF and the buffer size of the predictive rule. Simulation and experimental results show that the proposed PLL achieves good performance under adverse grid conditions.
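
    The core of the scheme is the in-loop averaging itself: a MAF whose window spans exactly one fundamental period nulls the fundamental and all of its integer harmonics, which is what removes the multiplier PD's ripple. A sketch, with the sampling rate and nominal frequency as assumed values:

    ```python
    import numpy as np

    def maf(x, fs, f0=50.0):
        """Moving average filter over one fundamental period (N = fs / f0 samples).

        With this window length the filter has nulls at f0, 2*f0, 3*f0, ..., so
        double-frequency and harmonic ripple on the pPLL phase-error signal is removed.
        """
        n = int(round(fs / f0))
        return np.convolve(x, np.ones(n) / n, mode="same")

    # Ripple at 2*f0 riding on a slowly drifting phase error is strongly attenuated:
    fs = 10_000
    t = np.arange(0, 0.2, 1 / fs)
    phase_error = 0.01 * t + 0.5 * np.sin(2 * np.pi * 100.0 * t)
    print(np.abs(maf(phase_error, fs) - 0.01 * t)[100:-100].max())   # small residual
    ```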

  14. Relationship research between meteorological disasters and stock markets based on a multifractal detrending moving average algorithm

    Science.gov (United States)

    Li, Qingchen; Cao, Guangxi; Xu, Wei

    2018-01-01

    Based on a multifractal detrending moving average algorithm (MFDMA), this study uses the fractionally autoregressive integrated moving average process (ARFIMA) to demonstrate the effectiveness of MFDMA in the detection of auto-correlation at different sample lengths and to simulate some artificial time series with the same length as the actual sample interval. We analyze the effect of predictable and unpredictable meteorological disasters on the US and Chinese stock markets and the degree of long memory in different sectors. Furthermore, we conduct a preliminary investigation to determine whether the fluctuations of financial markets caused by meteorological disasters are derived from the normal evolution of the financial system itself or not. We also propose several reasonable recommendations.

  15. Assessing the Efficacy of Adjustable Moving Averages Using ASEAN-5 Currencies.

    Directory of Open Access Journals (Sweden)

    Jacinta Chan Phooi M'ng

    Full Text Available The objective of this research is to examine the trends in the exchange rate markets of the ASEAN-5 countries (Indonesia (IDR), Malaysia (MYR), the Philippines (PHP), Singapore (SGD), and Thailand (THB)) through the application of dynamic moving average trading systems. This research offers evidence of the usefulness of the time-varying volatility technical analysis indicator, Adjustable Moving Average (AMA'), in deciphering trends in these ASEAN-5 exchange rate markets. This time-varying volatility factor, referred to as the Efficacy Ratio in this paper, is embedded in AMA'. The Efficacy Ratio adjusts the AMA' to the prevailing market conditions by avoiding whipsaws (losses due, in part, to acting on wrong trading signals, which generally occur when there is no general direction in the market) in range trading and by entering early into new trends in trend trading. The efficacy of AMA' is assessed against other popular moving-average rules. Based on the January 2005 to December 2014 dataset, our findings show that the moving averages and AMA' are superior to the passive buy-and-hold strategy. Specifically, AMA' outperforms the other models for the United States Dollar against PHP (USD/PHP) and USD/THB currency pairs. The results show that different length moving averages perform better in different periods for the five currencies. This is consistent with our hypothesis that a dynamic adjustable technical indicator is needed to cater for different periods in different markets.

  16. Assessing the Efficacy of Adjustable Moving Averages Using ASEAN-5 Currencies.

    Science.gov (United States)

    Chan Phooi M'ng, Jacinta; Zainudin, Rozaimah

    2016-01-01

    The objective of this research is to examine the trends in the exchange rate markets of the ASEAN-5 countries (Indonesia (IDR), Malaysia (MYR), the Philippines (PHP), Singapore (SGD), and Thailand (THB)) through the application of dynamic moving average trading systems. This research offers evidence of the usefulness of the time-varying volatility technical analysis indicator, Adjustable Moving Average (AMA') in deciphering trends in these ASEAN-5 exchange rate markets. This time-varying volatility factor, referred to as the Efficacy Ratio in this paper, is embedded in AMA'. The Efficacy Ratio adjusts the AMA' to the prevailing market conditions by avoiding whipsaws (losses due, in part, to acting on wrong trading signals, which generally occur when there is no general direction in the market) in range trading and by entering early into new trends in trend trading. The efficacy of AMA' is assessed against other popular moving-average rules. Based on the January 2005 to December 2014 dataset, our findings show that the moving averages and AMA' are superior to the passive buy-and-hold strategy. Specifically, AMA' outperforms the other models for the United States Dollar against PHP (USD/PHP) and USD/THB currency pairs. The results show that different length moving averages perform better in different periods for the five currencies. This is consistent with our hypothesis that a dynamic adjustable technical indicator is needed to cater for different periods in different markets.

  17. Quantifying walking and standing behaviour of dairy cows using a moving average based on output from an accelerometer

    DEFF Research Database (Denmark)

    Nielsen, Lars Relund; Pedersen, Asger Roer; Herskin, Mette S

    2010-01-01

    in sequences of approximately 20 s for the period of 10 min. Afterwards the cows were stimulated to move/lift the legs while standing in a cubicle. The behaviour was video recorded, and the recordings were analysed second by second for walking and standing behaviour as well as the number of steps taken....... Various algorithms for predicting walking/standing status were compared. The algorithms were all based on a limit of a moving average calculated by using one of two outputs of the accelerometer, either a motion index or a step count, and applied over periods of 3 or 5 s. Furthermore, we investigated...... the effect of additionally applying the rule: a walking period must last at least 5 s. The results indicate that the lowest misclassification rate (10%) of walking and standing was obtained based on the step count with a moving average of 3 s and with the rule applied. However, the rate of misclassification...
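
    A sketch of the winning rule as described: a 3 s moving average of the accelerometer step count is thresholded, and any walking bout shorter than 5 s is relabelled as standing. The threshold value is an assumption; the abstract reports the structure of the rule, not this constant.

    ```python
    import numpy as np

    def classify_walking(step_counts, window=3, threshold=0.5, min_walk=5):
        """Second-by-second walking (True) / standing (False) classification."""
        step_counts = np.asarray(step_counts, dtype=float)
        ma = np.convolve(step_counts, np.ones(window) / window, mode="same")
        walking = ma > threshold
        i = 0
        while i < len(walking):                  # enforce: a walking period lasts >= 5 s
            if walking[i]:
                j = i
                while j < len(walking) and walking[j]:
                    j += 1
                if j - i < min_walk:
                    walking[i:j] = False         # too short to count as walking
                i = j
            else:
                i += 1
        return walking
    ```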

  18. Moving Average Filter-Based Phase-Locked Loops: Performance Analysis and Design Guidelines

    DEFF Research Database (Denmark)

    Golestan, Saeed; Ramezani, Malek; Guerrero, Josep M.

    2014-01-01

    this challenge, incorporating moving average filter(s) (MAF) into the PLL structure has been proposed in some recent literature. A MAF is a linear-phase finite impulse response filter which can act as an ideal low-pass filter, if certain conditions hold. The main aim of this paper is to present the control...... design guidelines for a typical MAF-based PLL. The paper starts with the general description of MAFs. The main challenge associated with using the MAFs is then explained, and its possible solutions are discussed. The paper then proceeds with a brief overview of the different MAF-based PLLs. In each case......, the PLL block diagram description is shown, the advantages and limitations are briefly discussed, and the tuning approach (if available) is evaluated. The paper then presents two systematic methods to design the control parameters of a typical MAF-based PLL: one for the case of using a proportional...

  19. A Two-Factor Autoregressive Moving Average Model Based on Fuzzy Fluctuation Logical Relationships

    Directory of Open Access Journals (Sweden)

    Shuang Guan

    2017-10-01

    Full Text Available Many of the existing autoregressive moving average (ARMA) forecast models are based on one main factor. In this paper, we proposed a new two-factor first-order ARMA forecast model based on the fuzzy fluctuation logical relationships of both a main factor and a secondary factor of a historical training time series. Firstly, we generated a fluctuation time series (FTS) for the two factors by calculating the difference of each data point with its previous day, then finding the absolute means of the two FTSs. We then constructed a fuzzy fluctuation time series (FFTS) according to the defined linguistic sets. The next step was establishing fuzzy fluctuation logical relation groups (FFLRGs) for a two-factor first-order autoregressive (AR(1)) model and forecasting the training data with the AR(1) model. Then we built FFLRGs for a two-factor first-order autoregressive moving average (ARMA(1,m)) model. Lastly, we forecasted test data with the ARMA(1,m) model. To illustrate the performance of our model, we used real Taiwan Stock Exchange Capitalization Weighted Stock Index (TAIEX) and Dow Jones datasets as a secondary factor to forecast TAIEX. The experimental results indicate that the proposed two-factor fluctuation ARMA method outperformed the one-factor method based on real historic data. The secondary factor may have some effect on the main factor and thereby impact the forecasting results. Using fuzzified fluctuations rather than fuzzified real data could avoid the influence of extreme values in historic data, which perform negatively in forecasting. To verify the accuracy and effectiveness of the model, we also employed our method to forecast the Shanghai Stock Exchange Composite Index (SHSECI) from 2001 to 2015 and the international gold price from 2000 to 2010.

  20. Autoregressive Moving Average Graph Filtering

    OpenAIRE

    Isufi, Elvin; Loukas, Andreas; Simonetto, Andrea; Leus, Geert

    2016-01-01

    One of the cornerstones of the field of signal processing on graphs is graph filters, direct analogues of classical filters, but intended for signals defined on graphs. This work brings forth new insights on the distributed graph filtering problem. We design a family of autoregressive moving average (ARMA) recursions, which (i) are able to approximate any desired graph frequency response, and (ii) give exact solutions for tasks such as graph signal denoising and interpolation. The design phi...

  1. Bivariate copulas on the exponentially weighted moving average control chart

    Directory of Open Access Journals (Sweden)

    Sasigarn Kuvattana

    2016-10-01

    Full Text Available This paper proposes four types of copulas on the Exponentially Weighted Moving Average (EWMA) control chart when observations are from an exponential distribution, using a Monte Carlo simulation approach. The performance of the control chart is based on the Average Run Length (ARL), which is compared for each copula. Copula functions for specifying dependence between random variables are used and measured by Kendall’s tau. The results show that the Normal copula can be used for almost all shifts.
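
    The ARL machinery itself is easy to sketch: run the EWMA recursion on simulated observations and record how many samples pass before the statistic crosses a control limit. The sketch below uses independent exponential data and illustrative constants; the copula-dependence structure that is the paper's actual subject is not modelled here.

    ```python
    import numpy as np

    def ewma_run_length(lam=0.1, L=2.7, scale=1.0, rng=None, max_n=1_000_000):
        """Samples until a one-sided EWMA chart on exponential(scale) data signals."""
        rng = rng or np.random.default_rng()
        z = 1.0                                       # start at the in-control mean
        ucl = 1.0 + L * np.sqrt(lam / (2.0 - lam))    # asymptotic limit, unit-variance input
        for n in range(1, max_n + 1):
            z = lam * rng.exponential(scale) + (1.0 - lam) * z
            if z > ucl:
                return n
        return max_n

    # Average Run Length after a shift in the process mean:
    rng = np.random.default_rng(7)
    print(np.mean([ewma_run_length(scale=1.5, rng=rng) for _ in range(2000)]))
    ```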

  2. A RED modified weighted moving average for soft real-time application

    Directory of Open Access Journals (Sweden)

    Domanśka Joanna

    2014-09-01

    Full Text Available The popularity of TCP/IP has resulted in an increase in usage of best-effort networks for real-time communication. Much effort has been spent to ensure quality of service for soft real-time traffic over IP networks. The Internet Engineering Task Force has proposed some architecture components, such as Active Queue Management (AQM). The paper investigates the influence of the weighted moving average on packet waiting time reduction for an AQM mechanism: the RED algorithm. The proposed method for computing the average queue length is based on a difference equation (a recursive equation). Depending on a particular optimality criterion, proper parameters of the modified weighted moving average function can be chosen. This change will allow reducing the number of violations of timing constraints and better use of this mechanism for soft real-time transmissions. The optimization problem is solved through simulations performed in OMNeT++ and later verified experimentally on a Linux implementation.
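
    For reference, the difference equation in question is RED's standard exponentially weighted average of the instantaneous queue length; the paper studies modified weightings of this recursion, but the skeleton looks like the following (the weight value is the classic RED default, used here as an assumption):

    ```python
    def red_average_queue(samples, w=0.002):
        """RED average queue length via the recursive EWMA: avg += w * (q - avg)."""
        avg, history = 0.0, []
        for q in samples:
            avg += w * (q - avg)      # difference-equation form of the weighted moving average
            history.append(avg)
        return history

    # Example: a queue that jumps from ~5 to ~50 packets; the average follows slowly.
    print(red_average_queue([5] * 1000 + [50] * 1000)[::500])
    ```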

  3. on the performance of Autoregressive Moving Average Polynomial

    African Journals Online (AJOL)

    Timothy Ademakinwa

    … Polynomial Distributed Lag (PDL) model, Autoregressive Polynomial Distributed Lag … Moving Average Polynomial Distributed Lag (ARMAPDL) model. …

  4. MOTION ARTIFACT REDUCTION IN FUNCTIONAL NEAR INFRARED SPECTROSCOPY SIGNALS BY AUTOREGRESSIVE MOVING AVERAGE MODELING BASED KALMAN FILTERING

    Directory of Open Access Journals (Sweden)

    MEHDI AMIAN

    2013-10-01

    Full Text Available Functional near infrared spectroscopy (fNIRS) is a technique that is used for noninvasive measurement of the oxyhemoglobin (HbO2) and deoxyhemoglobin (HHb) concentrations in brain tissue. Since the ratio of the concentrations of these two agents is correlated with neuronal activity, fNIRS can be used for monitoring and quantifying cortical activity. The portability of fNIRS makes it a good candidate for studies involving subject movement. The fNIRS measurements, however, are sensitive to artifacts generated by subject head motion, which makes fNIRS signals less effective in such applications. In this paper, autoregressive moving average (ARMA) modeling of the fNIRS signal is proposed for a state-space representation of the signal, which is then fed to a Kalman filter for estimating the motionless signal from the motion-corrupted signal. Results are compared to the previously reported autoregressive (AR) model based approach and show that the ARMA models outperform the AR models. We attribute this to the richer structure of ARMA models, which contain more terms than AR models. We show that the signal to noise ratio (SNR) is about 2 dB higher for the ARMA based method.

  5. A dynamic analysis of moving average rules

    NARCIS (Netherlands)

    Chiarella, C.; He, X.Z.; Hommes, C.H.

    2006-01-01

    The use of various moving average (MA) rules remains popular with financial market practitioners. These rules have recently become the focus of a number of empirical studies, but there have been very few studies of financial market models where some agents employ technical trading rules of the type

  6. Application of autoregressive moving average model in reactor noise analysis

    International Nuclear Information System (INIS)

    Tran Dinh Tri

    1993-01-01

    The application of an autoregressive (AR) model to estimating noise measurements has achieved many successes in reactor noise analysis in the last ten years. The physical processes that take place in a nuclear reactor, however, are described by an autoregressive moving average (ARMA) model rather than by an AR model. Consequently, more accurate results could be obtained by applying the ARMA model instead of the AR model to reactor noise analysis. In this paper the system of generalised Yule-Walker equations is derived from the equation of an ARMA model, and a method for its solution is given. Numerical results show the applications of the proposed method. (author)
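
    A quick way to see the AR-versus-ARMA point numerically, using statsmodels' likelihood-based estimator as a stand-in for the generalised Yule-Walker solution derived in the paper: simulate an ARMA(2,1) "noise" record, then fit both model classes and compare fit quality.

    ```python
    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA
    from statsmodels.tsa.arima_process import arma_generate_sample

    rng = np.random.default_rng(42)
    ar = np.array([1.0, -0.75, 0.25])      # AR polynomial, statsmodels sign convention
    ma = np.array([1.0, 0.4])              # MA polynomial
    y = arma_generate_sample(ar, ma, nsample=5000, distrvs=rng.standard_normal)

    arma_fit = ARIMA(y, order=(2, 0, 1)).fit()   # matches the true structure
    ar_fit = ARIMA(y, order=(2, 0, 0)).fit()     # AR-only approximation
    print("ARMA AIC:", arma_fit.aic, " AR AIC:", ar_fit.aic)
    ```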

  7. Short-term electricity prices forecasting based on support vector regression and Auto-regressive integrated moving average modeling

    International Nuclear Information System (INIS)

    Che Jinxing; Wang Jianzhou

    2010-01-01

    In this paper, we present the use of different mathematical models to forecast electricity prices in deregulated power markets. A successful prediction tool for electricity prices can help both power producers and consumers plan their bidding strategies. Noting that the support vector regression (SVR) model, with its ε-insensitive loss function, tolerates residuals within the boundary of the ε-tube, we propose a hybrid model that combines both SVR and Auto-regressive integrated moving average (ARIMA) models to take advantage of the unique strengths of SVR and ARIMA models in nonlinear and linear modeling; the hybrid is called SVRARIMA. A nonlinear analysis of the time-series indicates the suitability of nonlinear modeling, and the SVR is applied to capture the nonlinear patterns. ARIMA models have been successfully applied in solving residual regression estimation problems. The experimental results demonstrate that the proposed model outperforms the existing neural-network approaches, the traditional ARIMA models and other hybrid models, based on the root mean square error and mean absolute percentage error.
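
    A compact sketch of one plausible reading of the hybrid: fit SVR on lagged prices for the nonlinear part, fit ARIMA on the SVR residuals for the linear part, and sum the two forecasts. Lag depth, SVR settings and the ARIMA order are illustrative, not the paper's tuned values.

    ```python
    import numpy as np
    from sklearn.svm import SVR
    from statsmodels.tsa.arima.model import ARIMA

    def svr_arima_forecast(prices, lags=24, steps=24):
        """Hybrid SVR + ARIMA forecast sketch: nonlinear part + residual ARIMA."""
        prices = np.asarray(prices, dtype=float)
        X = np.column_stack([prices[i:len(prices) - lags + i] for i in range(lags)])
        y = prices[lags:]
        svr = SVR(kernel="rbf", C=10.0, epsilon=0.1).fit(X, y)
        resid_model = ARIMA(y - svr.predict(X), order=(1, 0, 1)).fit()

        window = list(prices[-lags:])             # roll SVR forward on its own output
        svr_part = []
        for _ in range(steps):
            nxt = float(svr.predict(np.array(window[-lags:])[None, :])[0])
            svr_part.append(nxt)
            window.append(nxt)
        return np.array(svr_part) + resid_model.forecast(steps)
    ```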

  8. The moving-window Bayesian maximum entropy framework: estimation of PM(2.5) yearly average concentration across the contiguous United States.

    Science.gov (United States)

    Akita, Yasuyuki; Chen, Jiu-Chiuan; Serre, Marc L

    2012-09-01

    Geostatistical methods are widely used in estimating long-term exposures for epidemiological studies on air pollution, despite their limited capabilities to handle spatial non-stationarity over large geographic domains and the uncertainty associated with missing monitoring data. We developed a moving-window (MW) Bayesian maximum entropy (BME) method and applied this framework to estimate fine particulate matter (PM(2.5)) yearly average concentrations over the contiguous US. The MW approach accounts for the spatial non-stationarity, while the BME method rigorously processes the uncertainty associated with data missingness in the air-monitoring system. In the cross-validation analyses conducted on a set of randomly selected complete PM(2.5) data in 2003 and on simulated data with different degrees of missing data, we demonstrate that the MW approach alone leads to at least 17.8% reduction in mean square error (MSE) in estimating the yearly PM(2.5). Moreover, the MWBME method further reduces the MSE by 8.4-43.7%, with the proportion of incomplete data increased from 18.3% to 82.0%. The MWBME approach leads to significant reductions in estimation error and thus is recommended for epidemiological studies investigating the effect of long-term exposure to PM(2.5) across large geographical domains with expected spatial non-stationarity.

  9. The moving-window Bayesian Maximum Entropy framework: Estimation of PM2.5 yearly average concentration across the contiguous United States

    Science.gov (United States)

    Akita, Yasuyuki; Chen, Jiu-Chiuan; Serre, Marc L.

    2013-01-01

    Geostatistical methods are widely used in estimating long-term exposures for air pollution epidemiological studies, despite their limited capabilities to handle spatial non-stationarity over large geographic domains and uncertainty associated with missing monitoring data. We developed a moving-window (MW) Bayesian Maximum Entropy (BME) method and applied this framework to estimate fine particulate matter (PM2.5) yearly average concentrations over the contiguous U.S. The MW approach accounts for the spatial non-stationarity, while the BME method rigorously processes the uncertainty associated with data missingness in the air monitoring system. In the cross-validation analyses conducted on a set of randomly selected complete PM2.5 data in 2003 and on simulated data with different degrees of missing data, we demonstrate that the MW approach alone leads to at least 17.8% reduction in mean square error (MSE) in estimating the yearly PM2.5. Moreover, the MWBME method further reduces the MSE by 8.4% to 43.7% with the proportion of incomplete data increased from 18.3% to 82.0%. The MWBME approach leads to significant reductions in estimation error and thus is recommended for epidemiological studies investigating the effect of long-term exposure to PM2.5 across large geographical domains with expected spatial non-stationarity. PMID:22739679

  10. Image compression using moving average histogram and RBF network

    International Nuclear Information System (INIS)

    Khowaja, S.; Ismaili, I.A.

    2015-01-01

    Modernization and globalization have made multimedia technology one of the fastest growing fields in recent times, but optimal use of bandwidth and storage remains a topic that attracts the research community. Considering that images have a lion's share in multimedia communication, efficient image compression techniques have become a basic need for optimal use of bandwidth and space. This paper proposes a novel method for image compression based on the fusion of a moving average histogram and an RBF (Radial Basis Function) network. The proposed technique employs the concept of reducing color intensity levels using a moving average histogram technique, followed by the correction of color intensity levels using RBF networks at the reconstruction phase. Existing methods have used low resolution images for testing purposes, but the proposed method has been tested on various image resolutions to give a clear assessment of the technique. The proposed method has been tested on 35 images with varying resolutions and compared with existing algorithms in terms of CR (Compression Ratio), MSE (Mean Square Error), PSNR (Peak Signal to Noise Ratio) and computational complexity. The outcome shows that the proposed methodology is a better trade-off technique in terms of compression ratio, PSNR, which determines the quality of the image, and computational complexity. (author)

  11. Multi-Model Estimation Based Moving Object Detection for Aerial Video

    Directory of Open Access Journals (Sweden)

    Yanning Zhang

    2015-04-01

    Full Text Available With the wide development of UAV (Unmanned Aerial Vehicle) technology, moving target detection for aerial video has become a popular research topic in computer vision. Most of the existing methods work under a registration-detection framework and can only deal with simple background scenes. They tend to fail in complex multi-background scenarios, such as viaducts, buildings and trees. In this paper, we break through the single-background constraint and perceive complex scenes accurately by automatically estimating multiple background models. First, we segment the scene into several color blocks and estimate the dense optical flow. Then, we calculate an affine transformation model for each block with large area and merge the consistent models. Finally, we calculate, pixel by pixel, the degree of membership to the multiple background models for all small-area blocks. Moving objects are segmented by means of an energy optimization method solved via Graph Cuts. Extensive experimental results on public aerial videos show that, owing to the multiple background model estimation and the per-pixel membership analysis by energy minimization, our method can effectively remove buildings, trees and other false alarms and detect moving objects correctly.

  12. A note on moving average models for Gaussian random fields

    DEFF Research Database (Denmark)

    Hansen, Linda Vadgård; Thorarinsdottir, Thordis L.

    The class of moving average models offers a flexible modeling framework for Gaussian random fields with many well known models such as the Matérn covariance family and the Gaussian covariance falling under this framework. Moving average models may also be viewed as a kernel smoothing of a Lévy...... basis, a general modeling framework which includes several types of non-Gaussian models. We propose a new one-parameter spatial correlation model which arises from a power kernel and show that the associated Hausdorff dimension of the sample paths can take any value between 2 and 3. As a result...

  13. Moving Horizon Estimation and Control

    DEFF Research Database (Denmark)

    Jørgensen, John Bagterp

    successful and applied methodology beyond PID-control for control of industrial processes. The main contribution of this thesis is introduction and definition of the extended linear quadratic optimal control problem for solution of numerical problems arising in moving horizon estimation and control...... problems. Chapter 1 motivates moving horizon estimation and control as a paradigm for control of industrial processes. It introduces the extended linear quadratic control problem and discusses its central role in moving horizon estimation and control. Introduction, application and efficient solution....... It provides an algorithm for computation of the maximal output admissible set for linear model predictive control. Appendix D provides results concerning linear regression. Appendix E discuss prediction error methods for identification of linear models tailored for model predictive control....

  14. Electricity demand loads modeling using AutoRegressive Moving Average (ARMA) models

    Energy Technology Data Exchange (ETDEWEB)

    Pappas, S.S. [Department of Information and Communication Systems Engineering, University of the Aegean, Karlovassi, 83 200 Samos (Greece); Ekonomou, L.; Chatzarakis, G.E. [Department of Electrical Engineering Educators, ASPETE - School of Pedagogical and Technological Education, N. Heraklion, 141 21 Athens (Greece); Karamousantas, D.C. [Technological Educational Institute of Kalamata, Antikalamos, 24100 Kalamata (Greece); Katsikas, S.K. [Department of Technology Education and Digital Systems, University of Piraeus, 150 Androutsou Srt., 18 532 Piraeus (Greece); Liatsis, P. [Division of Electrical Electronic and Information Engineering, School of Engineering and Mathematical Sciences, Information and Biomedical Engineering Centre, City University, Northampton Square, London EC1V 0HB (United Kingdom)

    2008-09-15

    This study addresses the problem of modeling the electricity demand loads in Greece. The provided actual load data is deseasonalized and an AutoRegressive Moving Average (ARMA) model is fitted on the data off-line, using the Akaike Corrected Information Criterion (AICC). The developed model fits the data in a successful manner. Difficulties occur when the provided data includes noise or errors and also when an on-line/adaptive modeling is required. In both cases and under the assumption that the provided data can be represented by an ARMA model, simultaneous order and parameter estimation of ARMA models under the presence of noise are performed. The produced results indicate that the proposed method, which is based on the multi-model partitioning theory, tackles successfully the studied problem. For validation purposes the produced results are compared with three other established order selection criteria, namely AICC, Akaike's Information Criterion (AIC) and Schwarz's Bayesian Information Criterion (BIC). The developed model could be useful in the studies that concern electricity consumption and electricity prices forecasts. (author)

  15. Generalized Jackknife Estimators of Weighted Average Derivatives

    DEFF Research Database (Denmark)

    Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael

    With the aim of improving the quality of asymptotic distributional approximations for nonlinear functionals of nonparametric estimators, this paper revisits the large-sample properties of an important member of that class, namely a kernel-based weighted average derivative estimator. Asymptotic...

  16. MARD—A moving average rose diagram application for the geosciences

    Science.gov (United States)

    Munro, Mark A.; Blenkinsop, Thomas G.

    2012-12-01

    MARD 1.0 is a computer program for generating smoothed rose diagrams by using a moving average, which is designed for use across the wide range of disciplines encompassed within the Earth Sciences. Available in MATLAB®, Microsoft® Excel and GNU Octave formats, the program is fully compatible with both Microsoft® Windows and Macintosh operating systems. Each version has been implemented in a user-friendly way that requires no prior experience in programming with the software. MARD conducts a moving average smoothing, a form of signal processing low-pass filter, upon the raw circular data according to a set of pre-defined conditions selected by the user. This form of signal processing filter smoothes the angular dataset, emphasising significant circular trends whilst reducing background noise. Customisable parameters include whether the data is uni- or bi-directional, the angular range (or aperture) over which the data is averaged, and whether an unweighted or weighted moving average is to be applied. In addition to the uni- and bi-directional options, the MATLAB® and Octave versions also possess a function for plotting 2-dimensional dips/pitches in a single, lower, hemisphere. The rose diagrams from each version are exportable as one of a selection of common graphical formats. Frequently employed statistical measures that determine the vector mean, mean resultant (or length), circular standard deviation and circular variance are also included. MARD's scope is demonstrated via its application to a variety of datasets within the Earth Sciences.
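
    The core smoothing step can be reproduced in a few lines: bin the directional data, then replace each bin count with an unweighted circular moving average over the chosen aperture, handling wrap-around by rolling the count vector. Parameter names and defaults are illustrative; MARD itself additionally offers weighted averaging and the circular statistics listed above.

    ```python
    import numpy as np

    def smoothed_rose(angles_deg, bin_width=10, aperture=30, axial=True):
        """Moving average rose diagram counts (unweighted, wrap-around aware)."""
        period = 180 if axial else 360            # axial (bi-directional) vs. directed data
        a = np.asarray(angles_deg, dtype=float) % period
        counts, _ = np.histogram(a, bins=period // bin_width, range=(0, period))
        k = (aperture // bin_width) // 2          # bins on each side of the centre bin
        window = range(-k, k + 1)
        return sum(np.roll(counts, s) for s in window) / len(window)

    # Fracture strikes clustered around 040-060 degrees:
    rng = np.random.default_rng(5)
    strikes = rng.normal(50, 8, size=500) % 180
    print(smoothed_rose(strikes).round(1))
    ```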

  17. Moving average rules as a source of market instability

    NARCIS (Netherlands)

    Chiarella, C.; He, X.Z.; Hommes, C.H.

    2006-01-01

    Despite the pervasiveness of the efficient markets paradigm in the academic finance literature, the use of various moving average (MA) trading rules remains popular with financial market practitioners. This paper proposes a stochastic dynamic financial market model in which demand for traded assets

  18. On the speed towards the mean for continuous time autoregressive moving average processes with applications to energy markets

    International Nuclear Information System (INIS)

    Benth, Fred Espen; Taib, Che Mohd Imran Che

    2013-01-01

    We extend the concept of half life of an Ornstein–Uhlenbeck process to Lévy-driven continuous-time autoregressive moving average processes with stochastic volatility. The half life becomes state dependent, and we analyze its properties in terms of the characteristics of the process. An empirical example based on daily temperatures observed in Petaling Jaya, Malaysia, is presented, where the proposed model is estimated and the distribution of the half life is simulated. The stationarity of the dynamics yields futures prices which asymptotically tend to a constant at an exponential rate when time to maturity goes to infinity. The rate is characterized by the eigenvalues of the dynamics. An alternative description of this convergence can be given in terms of our concept of half life. - Highlights: • The concept of half life is extended to Lévy-driven continuous time autoregressive moving average processes • The dynamics of Malaysian temperatures are modeled using a continuous time autoregressive model with stochastic volatility • Forward prices on temperature become constant when time to maturity tends to infinity • Convergence in time to maturity is at an exponential rate given by the eigenvalues of the temperature model
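
    For the plain Ornstein–Uhlenbeck special case the half life has the familiar closed form below; the paper's contribution is that for Lévy-driven CARMA dynamics with stochastic volatility the analogous quantity becomes state dependent. As a reminder of the OU computation:

    ```latex
    \mathrm{d}X_t = -\alpha\,(X_t - \mu)\,\mathrm{d}t + \sigma\,\mathrm{d}W_t
    \;\Longrightarrow\;
    \mathbb{E}\left[X_t - \mu \mid X_0\right] = (X_0 - \mu)\,e^{-\alpha t},
    \qquad
    e^{-\alpha t_{1/2}} = \tfrac{1}{2}
    \;\Longrightarrow\;
    t_{1/2} = \frac{\ln 2}{\alpha}.
    ```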

  19. Moving-Target Position Estimation Using GPU-Based Particle Filter for IoT Sensing Applications

    Directory of Open Access Journals (Sweden)

    Seongseop Kim

    2017-11-01

    Full Text Available A particle filter (PF) has been introduced for effective position estimation of moving targets in non-Gaussian and nonlinear systems. The time difference of arrival (TDOA) method using an acoustic sensor array has normally been used for estimating the location of a concealed moving target, especially underwater. In this paper, we propose a GPU-based acceleration of target position estimation using a PF and propose an efficient system and software architecture. The proposed graphics processing unit (GPU)-based algorithm has more advantages in applying PF signal processing to a target system that consists of large-scale Internet of Things (IoT)-driven sensors, because its parallelization is scalable. For the TDOA measurement from the acoustic sensor array, we use the generalized cross correlation phase transform (GCC-PHAT) method to obtain the correlation coefficient of the signal using the Fast Fourier Transform (FFT), and we accelerate the calculation of GCC-PHAT based TDOA measurements using FFT with GPU Compute Unified Device Architecture (CUDA). The proposed approach utilizes a parallelization method in the target position estimation algorithm using GPU-based PF processing. In addition, it can efficiently estimate sudden movement changes of the target using GPU-based parallel computing, which can also be used for multiple target tracking. It also provides scalability in extending the detection algorithm according to an increase in the number of sensors. Therefore, the proposed architecture can be applied in IoT sensing applications with a large number of sensors. The target estimation algorithm was verified using MATLAB and implemented using GPU CUDA. We implemented the proposed signal processing acceleration system using the target GPU to analyze execution time. The execution time of the algorithm is reduced by 55% compared with standalone CPU operation on the target embedded board, an NVIDIA Jetson TX1. Also, to apply large

  20. Monthly streamflow forecasting with auto-regressive integrated moving average

    Science.gov (United States)

    Nasir, Najah; Samsudin, Ruhaidah; Shabri, Ani

    2017-09-01

    Forecasting of streamflow is one of the many ways that can contribute to better decision making for water resource management. The auto-regressive integrated moving average (ARIMA) model was selected in this research for monthly streamflow forecasting with enhancement made by pre-processing the data using singular spectrum analysis (SSA). This study also proposed an extension of the SSA technique to include a step where clustering was performed on the eigenvector pairs before reconstruction of the time series. The monthly streamflow data of Sungai Muda at Jeniang, Sungai Muda at Jambatan Syed Omar and Sungai Ketil at Kuala Pegang was gathered from the Department of Irrigation and Drainage Malaysia. A ratio of 9:1 was used to divide the data into training and testing sets. The ARIMA, SSA-ARIMA and Clustered SSA-ARIMA models were all developed in R software. Results from the proposed model are then compared to a conventional auto-regressive integrated moving average model using the root-mean-square error and mean absolute error values. It was found that the proposed model can outperform the conventional model.

  1. Adaptive neuro-fuzzy based inferential sensor model for estimating the average air temperature in space heating systems

    Energy Technology Data Exchange (ETDEWEB)

    Jassar, S.; Zhao, L. [Department of Electrical and Computer Engineering, Ryerson University, 350 Victoria Street, Toronto, ON (Canada); Liao, Z. [Department of Architectural Science, Ryerson University (Canada)

    2009-08-15

    Heating systems are conventionally controlled by open-loop control systems because of the absence of practical methods for estimating the average air temperature in the built environment. An inferential sensor model, based on adaptive neuro-fuzzy inference system modeling, for estimating the average air temperature in multi-zone space heating systems is developed. This modeling technique combines the expert knowledge of fuzzy inference systems (FISs) with the learning capability of artificial neural networks (ANNs). A hybrid learning algorithm, which combines the least-square method and the back-propagation algorithm, is used to identify the parameters of the network. This paper describes an adaptive network based inferential sensor that can be used to design closed-loop control for space heating systems. The research aims to improve the overall performance of heating systems, in terms of energy efficiency and thermal comfort. The average air temperature results estimated by using the developed model are strongly in agreement with the experimental results. (author)

  2. Effect of parameters in moving average method for event detection enhancement using phase sensitive OTDR

    Science.gov (United States)

    Kwon, Yong-Seok; Naeem, Khurram; Jeon, Min Yong; Kwon, Il-bum

    2017-04-01

    We analyze the relations of the parameters in the moving average method to enhance the event detectability of a phase sensitive optical time domain reflectometer (OTDR). If an external event has a characteristic vibration frequency, the control parameters of the moving average method should be optimized to detect the event efficiently. A phase sensitive OTDR was implemented with a pulsed light source, which is composed of a laser diode, a semiconductor optical amplifier, an erbium-doped fiber amplifier and a fiber Bragg grating filter, and a light receiving part, which has a photo-detector and a high speed data acquisition system. The moving average method is operated with the control parameters: total number of raw traces, M, number of averaged traces, N, and step size of moving, n. The raw traces are obtained by the phase sensitive OTDR with sound signals generated by a speaker. Using these trace data, the relation of the control parameters is analyzed. The results show that, when the event signal has a single frequency, optimal values of N and n exist for efficient event detection.
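
    The three control parameters map directly onto a windowed averaging of the trace stack; a sketch of that operation, with array shapes assumed from the description above:

    ```python
    import numpy as np

    def moving_average_traces(raw, N, n):
        """Average N consecutive phi-OTDR traces, stepping the window by n traces.

        raw has shape (M, L): M raw traces of L samples each. Larger N raises the
        SNR but smears fast vibrations; smaller n gives finer time resolution of
        an event. Returns the stack of averaged traces.
        """
        M = raw.shape[0]
        return np.stack([raw[s:s + N].mean(axis=0) for s in range(0, M - N + 1, n)])

    # Example: M = 1000 traces of 4096 samples, averaged 16 at a time, stepping by 4.
    traces = np.random.default_rng(2).standard_normal((1000, 4096))
    print(moving_average_traces(traces, N=16, n=4).shape)   # (247, 4096)
    ```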

  3. FORECASTING INFUSION STOCK USING THE AUTOREGRESSIVE INTEGRATED MOVING AVERAGE (ARIMA) METHOD AT SANGLAH CENTRAL GENERAL HOSPITAL

    OpenAIRE

    I PUTU YUDI PRABHADIKA; NI KETUT TARI TASTRAWATI; LUH PUTU IDA HARINI

    2018-01-01

    Infusion supplies are an important item that must be considered by a hospital in meeting the needs of patients. This study aims to predict the need for 0.9% 500 ml NaCl infusions and 5% 500 ml glucose infusions at Sanglah Central General Hospital (RSUP Sanglah) so that the hospital can estimate the number of infusions needed for the next six months. The forecasting method used in this research is the autoregressive integrated moving average (ARIMA) time series method. The results of this study indi...

  4. Autoregressive-moving-average hidden Markov model for vision-based fall prediction-An application for walker robot.

    Science.gov (United States)

    Taghvaei, Sajjad; Jahanandish, Mohammad Hasan; Kosuge, Kazuhiro

    2017-01-01

    Population aging requires societies to provide the elderly with safe and dependable assistive technologies for daily life activities. Improving fall detection algorithms can play a major role in achieving this goal. This article proposes a real-time fall prediction algorithm based on visual data of a user of a walking assistive system, acquired from a depth sensor. In the absence of a coupled dynamic model of the human and the assistive walker, a hybrid "system identification-machine learning" approach is used. An autoregressive-moving-average (ARMA) model is fitted on the time-series walking data to forecast the upcoming states, and a hidden Markov model (HMM) based classifier is built on top of the ARMA model to predict falling in the upcoming time frames. The performance of the algorithm is evaluated through experiments with four subjects, including an experienced physiotherapist, while using a walker robot in five different falling scenarios; namely, fall forward, fall down, fall back, fall left, and fall right. The algorithm successfully predicts the fall with a rate of 84.72%.

  5. Statistical aspects of autoregressive-moving average models in the assessment of radon mitigation

    International Nuclear Information System (INIS)

    Dunn, J.E.; Henschel, D.B.

    1989-01-01

    Radon values, as reflected by hourly scintillation counts, seem dominated by major, pseudo-periodic, random fluctuations. This methodological paper reports a moderate degree of success in modeling these data using relatively simple autoregressive-moving average models to assess the effectiveness of radon mitigation techniques in existing housing. While accounting for the natural correlation of successive observations, familiar summary statistics such as steady state estimates, standard errors, confidence limits, and tests of hypothesis are produced. The Box-Jenkins approach is used throughout. In particular, intervention analysis provides an objective means of assessing the effectiveness of an active mitigation measure, such as a fan off/on cycle. Occasionally, failure to declare a significant intervention has suggested a means of remedial action in the data collection procedure

  6. Estimating 1970-99 average annual groundwater recharge in Wisconsin using streamflow data

    Science.gov (United States)

    Gebert, Warren A.; Walker, John F.; Kennedy, James L.

    2011-01-01

    Average annual recharge in Wisconsin for the period 1970-99 was estimated using streamflow data from U.S. Geological Survey continuous-record streamflow-gaging stations and partial-record sites. Partial-record sites have discharge measurements collected during low-flow conditions. The average annual base flow of a stream divided by the drainage area is a good approximation of the recharge rate; therefore, once average annual base flow is determined recharge can be calculated. Estimates of recharge for nearly 72 percent of the surface area of the State are provided. The results illustrate substantial spatial variability of recharge across the State, ranging from less than 1 inch to more than 12 inches per year. The average basin size for partial-record sites (50 square miles) was less than the average basin size for the gaging stations (305 square miles). Including results for smaller basins reveals a spatial variability that otherwise would be smoothed out using only estimates for larger basins. An error analysis indicates that the techniques used provide base flow estimates with standard errors ranging from 5.4 to 14 percent.

  7. A generalization of the preset count moving average algorithm for digital rate meters

    International Nuclear Information System (INIS)

    Arandjelovic, Vojislav; Koturovic, Aleksandar; Vukanovic, Radomir

    2002-01-01

    A generalized definition of the preset count moving average algorithm for digital rate meters has been introduced. The algorithm is based on the knowledge of time intervals between successive pulses in random-pulse sequences. The steady state and transient regimes of the algorithm have been characterized. A measure for statistical fluctuations of the successive measurement results has been introduced. The versatility of the generalized algorithm makes it suitable for application in the design of the software of modern measuring/control digital systems
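
    A sketch of the underlying idea (the paper's specific generalization is not reproduced here): each rate estimate divides the preset count by the time spanned by that many successive pulses, and sliding the block pulse-by-pulse yields the moving average sequence whose steady-state and transient behaviour the abstract characterizes.

    ```python
    import numpy as np

    def preset_count_rates(event_times, preset=100):
        """Preset count moving average rate meter on pulse arrival times.

        Each estimate is preset / T, where T is the time spanned by the most
        recent `preset` pulses; one new estimate is produced per incoming pulse.
        """
        t = np.asarray(event_times, dtype=float)
        spans = t[preset:] - t[:-preset]     # duration of each block of `preset` intervals
        return preset / spans

    # Poisson pulses at ~1000 counts/s:
    rng = np.random.default_rng(3)
    times = np.cumsum(rng.exponential(1e-3, size=10_000))
    print(preset_count_rates(times).mean())   # close to 1000
    ```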

  8. Edgeworth expansion for the pre-averaging estimator

    DEFF Research Database (Denmark)

    Podolskij, Mark; Veliyev, Bezirgen; Yoshida, Nakahiro

    In this paper, we study the Edgeworth expansion for a pre-averaging estimator of quadratic variation in the framework of continuous diffusion models observed with noise. More specifically, we obtain a second order expansion for the joint density of the estimators of quadratic variation and its...... asymptotic variance. Our approach is based on martingale embedding, Malliavin calculus and stable central limit theorems for continuous diffusions. Moreover, we derive the density expansion for the studentized statistic, which might be applied to construct asymptotic confidence regions....

  9. Forecasting Rice Productivity and Production of Odisha, India, Using Autoregressive Integrated Moving Average Models

    Directory of Open Access Journals (Sweden)

    Rahul Tripathi

    2014-01-01

    Full Text Available Forecasts of the rice area, production, and productivity of Odisha were made from the historical data of 1950-51 to 2008-09 by using univariate autoregressive integrated moving average (ARIMA) models and were compared with forecasts from the all-India data. The autoregressive (p) and moving average (q) parameters were identified based on the significant spikes in the plots of the partial autocorrelation function (PACF) and autocorrelation function (ACF) of the different time series. The ARIMA(2,1,0) model was found suitable for all-India rice productivity and production, whereas ARIMA(1,1,1) was best fitted for forecasting rice productivity and production in Odisha. Predictions were made for the immediate next three years, that is, 2007-08, 2008-09, and 2009-10, using the best fitted ARIMA models based on the minimum value of the selection criteria, that is, the Akaike information criterion (AIC) and Schwarz-Bayesian information criterion (SBC). The performance of the models was validated by comparing the percentage deviation from the actual values and the mean absolute percent error (MAPE), which was found to be 0.61 and 2.99% for the area under rice in Odisha and India, respectively. Similarly, for the prediction of rice production and productivity in Odisha and India, the MAPE was found to be less than 6%.
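
    As a hedged illustration of the Box-Jenkins order-selection step described above (not the authors' code), the sketch below fits candidate ARIMA(p,1,q) models with statsmodels and keeps the order with the smallest AIC; the series is a synthetic stand-in for the historical data.

        # Illustrative ARIMA order selection by AIC and a three-step-ahead forecast.
        import numpy as np
        from statsmodels.tsa.arima.model import ARIMA

        rng = np.random.default_rng(1)
        series = 1.0 + np.cumsum(rng.normal(0.02, 0.1, 59))   # stand-in for 59 seasons

        best = None
        for p in range(3):
            for q in range(3):
                fit = ARIMA(series, order=(p, 1, q)).fit()
                if best is None or fit.aic < best[0]:
                    best = (fit.aic, (p, 1, q), fit)
        print("selected order:", best[1])
        print(best[2].forecast(steps=3))                      # next three periods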

  10. Modified Exponential Weighted Moving Average (EWMA) Control Chart on Autocorrelation Data

    Science.gov (United States)

    Herdiani, Erna Tri; Fandrilla, Geysa; Sunusi, Nurtiti

    2018-03-01

    In general, observations in statistical process control are assumed to be mutually independent. However, this assumption is often violated in practice. Consequently, statistical process control charts have been developed for interrelated processes, including Shewhart, cumulative sum (CUSUM), and exponentially weighted moving average (EWMA) control charts for autocorrelated data. One researcher stated that these charts are not suitable if the same control limits are used as in the case of independent variables. For this reason, it is necessary to apply a time series model in building the control chart. A classical control chart for independent variables is usually applied to the residuals of the process model; this procedure is permitted provided that the residuals are independent. In 1978, a Shewhart modification for the autoregressive process was introduced, using the distance between the sample mean and the target value compared to the standard deviation of the autocorrelated process. In this paper we examine the mean of the EWMA for an autocorrelated process derived from Montgomery and Patel. Performance was investigated by examining the average run length (ARL) based on the Markov chain method.
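
    Below is a minimal sketch of the EWMA recursion and its control limits in textbook form (illustrative lambda and L); for autocorrelated data, the same recursion is typically applied to time-series residuals rather than to the raw observations, in line with the discussion above.

        # EWMA chart: z_t = lam*x_t + (1-lam)*z_{t-1}, limits mu0 +/- L*sigma*
        # sqrt(lam/(2-lam)*(1-(1-lam)^(2t))).
        import numpy as np

        def ewma_chart(x, mu0, sigma, lam=0.2, L=3.0):
            z, out = mu0, []
            for t, xt in enumerate(x, start=1):
                z = lam * xt + (1 - lam) * z
                half = L * sigma * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * t)))
                out.append((z, mu0 - half, mu0 + half, abs(z - mu0) > half))
            return out

        rng = np.random.default_rng(2)
        data = rng.normal(10, 1, 50)
        data[30:] += 1.5                                   # mean shift after t = 30
        for t, (z, lo, hi, alarm) in enumerate(ewma_chart(data, 10, 1), start=1):
            if alarm:
                print("signal at t =", t)
                break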

  11. Dual-component model of respiratory motion based on the periodic autoregressive moving average (periodic ARMA) method

    International Nuclear Information System (INIS)

    McCall, K C; Jeraj, R

    2007-01-01

    A new approach to the problem of modelling and predicting respiration motion has been implemented. This is a dual-component model, which describes the respiration motion as a non-periodic time series superimposed onto a periodic waveform. A periodic autoregressive moving average algorithm has been used to define a mathematical model of the periodic and non-periodic components of the respiration motion. The periodic components of the motion were found by projecting multiple inhale-exhale cycles onto a common subspace. The component of the respiration signal that is left after removing this periodicity is a partially autocorrelated time series and was modelled as an autoregressive moving average (ARMA) process. The accuracy of the periodic ARMA model with respect to fluctuation in amplitude and variation in length of cycles has been assessed. A respiration phantom was developed to simulate the inter-cycle variations seen in free-breathing and coached respiration patterns. At ±14% variability in cycle length and maximum amplitude of motion, the prediction errors were 4.8% of the total motion extent for a 0.5 s ahead prediction, and 9.4% at 1.0 s lag. The prediction errors increased to 11.6% at 0.5 s and 21.6% at 1.0 s when the respiration pattern had ±34% variations in both these parameters. Our results show that the accuracy of the periodic ARMA model is more strongly dependent on variations in cycle length than on the amplitude of the respiration cycles.

  12. Using exponentially weighted moving average algorithm to defend against DDoS attacks

    CSIR Research Space (South Africa)

    Machaka, P

    2016-11-01

    Full Text Available This paper seeks to investigate the performance of the Exponentially Weighted Moving Average (EWMA) for mining big data and detection of DDoS attacks in Internet of Things (IoT) infrastructure. The paper will investigate the tradeoff between...

  13. Generalized Empirical Likelihood-Based Focused Information Criterion and Model Averaging

    Directory of Open Access Journals (Sweden)

    Naoya Sueishi

    2013-07-01

    Full Text Available This paper develops model selection and averaging methods for moment restriction models. We first propose a focused information criterion based on the generalized empirical likelihood estimator. We address the issue of selecting an optimal model, rather than a correct model, for estimating a specific parameter of interest. Then, this study investigates a generalized empirical likelihood-based model averaging estimator that minimizes the asymptotic mean squared error. A simulation study suggests that our averaging estimator can be a useful alternative to existing post-selection estimators.

  14. Generalized Heteroskedasticity ACF for Moving Average Models in Explicit Forms

    Directory of Open Access Journals (Sweden)

    Samir Khaled Safi

    2014-02-01

    Full Text Available The autocorrelation function (ACF) measures the correlation between observations at different distances apart. We derive explicit equations for the generalized heteroskedasticity ACF for a moving average process of order q, MA(q). We consider two cases: firstly, when the disturbance terms follow the general covariance matrix structure Cov(wi, wj) = Σ with σij ≠ 0 for all i ≠ j; secondly, when the diagonal elements of Σ are not all identical but σij = 0 for all i ≠ j, i.e. Σ = diag(σ11, σ22, …, σtt). The forms of the explicit equations depend essentially on the moving average coefficients and the covariance structure of the disturbance terms.
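
    As a numerical companion (a sketch, not the paper's derivation), the MA(q) process can be written as a linear map x = Tw, so its full autocovariance matrix follows from Cov(x) = T Σ Tᵀ for any assumed disturbance covariance Σ; with a heteroskedastic Σ the lag-k autocovariances vary with t, as the explicit equations predict.

        # Autocovariances of an MA(2) process under a heteroskedastic disturbance
        # covariance, computed via the linear map x = T w (illustrative parameters).
        import numpy as np

        theta = np.array([1.0, 0.6, -0.3])          # 1, theta1, theta2
        n, q = 200, len(theta) - 1

        T = np.zeros((n, n + q))                    # banded map from w to x
        for t in range(n):
            T[t, t:t + q + 1] = theta[::-1]

        Sigma = np.diag(np.linspace(0.5, 2.0, n + q))   # diagonal, non-identical case
        cov_x = T @ Sigma @ T.T
        print("lag-1 autocovariances vary with t:", np.diag(cov_x, 1)[:5])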

  15. An Exponentially Weighted Moving Average Control Chart for Bernoulli Data

    DEFF Research Database (Denmark)

    Spliid, Henrik

    2010-01-01

    We consider a production process in which units are produced in a sequential manner. The units can, for example, be manufactured items or services provided to clients. Each unit produced can be a failure with probability p or a success (non-failure) with probability (1-p). A novel exponentially weighted moving average (EWMA) control chart intended for surveillance of the probability of failure, p, is described. The chart is based on counting the number of non-failures produced between failures in combination with a variance-stabilizing transformation. The distribution function of the transformation is given and its limit for small values of p is derived. Control of high yield processes is discussed and the chart is shown to perform very well in comparison with both the most common alternative EWMA chart and the CUSUM chart. The construction and the use of the proposed EWMA chart…

  16. An ML-Based Radial Velocity Estimation Algorithm for Moving Targets in Spaceborne High-Resolution and Wide-Swath SAR Systems

    Directory of Open Access Journals (Sweden)

    Tingting Jin

    2017-04-01

    Full Text Available Multichannel synthetic aperture radar (SAR) is a significant breakthrough against the inherent trade-off between high resolution and wide swath (HRWS) in conventional SAR. Moving target indication (MTI) is an important application of spaceborne HRWS SAR systems. In contrast to previous studies of SAR MTI, HRWS SAR mainly faces the problem of under-sampled data in each channel, which makes single-channel imaging and processing infeasible. In this study, the estimation of velocity is made equivalent to the estimation of the cone angle according to the relationship between them. A maximum likelihood (ML) based algorithm is proposed to estimate the radial velocity in the presence of Doppler ambiguities. After that, signal reconstruction and compensation for the phase offset caused by the radial velocity are performed for the moving target. Finally, a traditional imaging algorithm is applied to obtain a focused image of the moving target. Experiments are conducted to evaluate the accuracy and effectiveness of the estimator under different signal-to-noise ratios (SNR). Furthermore, the performance is analyzed with respect to a moving ship subject to interference from different distributions of sea clutter. The results verify that the proposed algorithm is accurate and efficient, with low computational complexity. This paper aims at providing a solution to the velocity estimation problem in future HRWS SAR systems with multiple receive channels.

  17. Estimation of time averages from irregularly spaced observations - With application to coastal zone color scanner estimates of chlorophyll concentration

    Science.gov (United States)

    Chelton, Dudley B.; Schlax, Michael G.

    1991-01-01

    The sampling error of an arbitrary linear estimate of a time-averaged quantity constructed from a time series of irregularly spaced observations at a fixed location is quantified through a formalism. The method is applied to satellite observations of chlorophyll from the coastal zone color scanner. The two specific linear estimates under consideration are the composite average, formed from the simple average of all observations within the averaging period, and the optimal estimate, formed by minimizing the mean squared error of the temporal average based on all the observations in the time series. The resulting suboptimal estimates are shown to be more accurate than composite averages. Suboptimal estimates are also found to be nearly as accurate as optimal estimates using the correct signal and measurement error variances and correlation functions for realistic ranges of these parameters, which makes the suboptimal estimate a viable practical alternative to the composite average method generally employed at present.

  18. Middle and long-term prediction of UT1-UTC based on combination of Gray Model and Autoregressive Integrated Moving Average

    Science.gov (United States)

    Jia, Song; Xu, Tian-he; Sun, Zhang-zhen; Li, Jia-jing

    2017-02-01

    UT1-UTC is an important part of the Earth Orientation Parameters (EOP). High-precision predictions of UT1-UTC play a key role in practical applications such as deep space exploration, spacecraft tracking, and satellite navigation and positioning. In this paper, a new prediction method combining the Gray Model (GM(1,1)) and the Autoregressive Integrated Moving Average (ARIMA) is developed. The main idea is as follows. Firstly, the UT1-UTC data are preprocessed by removing the leap seconds and the Earth's zonal harmonic tidal terms to get UT1R-TAI data. Periodic terms are estimated and removed by least squares to get UT2R-TAI. Then the linear terms of the UT2R-TAI data are modeled by GM(1,1), and the residual terms are modeled by ARIMA. Finally, the UT2R-TAI prediction is performed based on the combined GM(1,1) and ARIMA model, and the UT1-UTC predictions are obtained by adding back the corresponding periodic terms, the leap second correction, and the Earth's zonal harmonic tidal correction. The results show that the proposed model can predict UT1-UTC effectively, with higher middle- and long-term (from 32 to 360 days) accuracy than that of LS + AR, LS + MAR and WLS + MAR.
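
    A hedged sketch of the GM(1,1) step is given below (the residual step can then be fitted with any standard ARIMA routine, as in the records above); the input series is illustrative.

        # Classic GM(1,1) grey model: fit on a short positive series, forecast ahead.
        import numpy as np

        def gm11_forecast(x0, steps):
            x1 = np.cumsum(x0)                                 # accumulated series
            z = 0.5 * (x1[1:] + x1[:-1])                       # background values
            B = np.column_stack([-z, np.ones(len(z))])
            a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]   # least squares for a, b
            k = np.arange(len(x0) + steps)
            x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a  # time response function
            return np.diff(x1_hat, prepend=0.0)                # back to original scale

        x0 = np.array([2.87, 3.28, 3.34, 3.62, 3.87, 4.03])
        print(gm11_forecast(x0, steps=3)[-3:])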

  19. Averaging models: parameters estimation with the R-Average procedure

    Directory of Open Access Journals (Sweden)

    S. Noventa

    2010-01-01

    Full Text Available The Functional Measurement approach, proposed within the theoretical framework of Information Integration Theory (Anderson, 1981, 1982), can be a useful multi-attribute analysis tool. Compared to the majority of statistical models, the averaging model can account for interaction effects without adding complexity. The R-Average method (Vidotto & Vicentini, 2007) can be used to estimate the parameters of these models. By the use of multiple information criteria in the model selection procedure, R-Average allows for the identification of the best subset of parameters that account for the data. After a review of the general method, we present an implementation of the procedure in the framework of R-project, followed by some experiments using a Monte Carlo method.

  20. Autoregressive moving average fitting for real standard deviation in Monte Carlo power distribution calculation

    International Nuclear Information System (INIS)

    Ueki, Taro

    2010-01-01

    The noise propagation of tallies in the Monte Carlo power method can be represented by an autoregressive moving average process of orders p and p-1, ARMA(p,p-1), where p is an integer larger than or equal to two. The formula for the autocorrelation of ARMA(p,q), p≥q+1, indicates that ARMA(3,2) fitting is equivalent to lumping the eigenmodes of fluctuation propagation into three modes: slow, intermediate, and fast attenuation. Therefore, ARMA(3,2) fitting was applied to the real standard deviation estimation of fuel assemblies at particular heights. The numerical results show that straightforward ARMA(3,2) fitting is promising, but a stability issue must be resolved before incorporation into the distributed version of production Monte Carlo codes. The same numerical results reveal that the average performance of ARMA(3,2) fitting is equivalent to that of the batch method in MCNP with a batch size larger than one hundred and smaller than two hundred cycles for a 1100 MWe pressurized water reactor. The bias correction of low-lag autocovariances in MVP/GMVP is demonstrated to have the potential to improve the average performance of ARMA(3,2) fitting. (author)

  1. Moving-Horizon Modulating Functions-Based Algorithm for Online Source Estimation in a First Order Hyperbolic PDE

    KAUST Repository

    Asiri, Sharefa M.; Elmetennani, Shahrazed; Laleg-Kirati, Taous-Meriem

    2017-01-01

    In this paper, an on-line algorithm for estimating the source term in a first order hyperbolic PDE is proposed. This equation describes heat transport dynamics in concentrated solar collectors, where the source term represents the received energy. This energy depends on the solar irradiance intensity and on the collector characteristics, which are affected by environmental changes. Control strategies are usually used to enhance the efficiency of heat production; however, these strategies often depend on the source term, which is highly affected by the external working conditions. Hence, efficient source estimation methods are required. The proposed algorithm is based on the modulating functions method, in which a moving horizon strategy is introduced. Numerical results are provided to illustrate the performance of the proposed estimator in open and closed loops.

  2. Moving-Horizon Modulating Functions-Based Algorithm for Online Source Estimation in a First Order Hyperbolic PDE

    KAUST Repository

    Asiri, Sharefa M.

    2017-08-22

    In this paper, an on-line algorithm for estimating the source term in a first order hyperbolic PDE is proposed. This equation describes heat transport dynamics in concentrated solar collectors, where the source term represents the received energy. This energy depends on the solar irradiance intensity and on the collector characteristics, which are affected by environmental changes. Control strategies are usually used to enhance the efficiency of heat production; however, these strategies often depend on the source term, which is highly affected by the external working conditions. Hence, efficient source estimation methods are required. The proposed algorithm is based on the modulating functions method, in which a moving horizon strategy is introduced. Numerical results are provided to illustrate the performance of the proposed estimator in open and closed loops.

  3. A new approach on seismic mortality estimations based on average population density

    Science.gov (United States)

    Zhu, Xiaoxin; Sun, Baiqing; Jin, Zhanyong

    2016-12-01

    This study examines a new methodology to predict the final seismic mortality from earthquakes in China. Most studies have established an association between mortality estimation and seismic intensity without considering population density. In China, however, such data are not always available, especially in the very urgent relief situation following a disaster, and the population density varies greatly from region to region. This motivates the development of empirical models that use historical death data to analyze earthquake death tolls. The present paper employs the average population density to predict final death tolls in earthquakes using a case-based reasoning model from a realistic perspective. To validate the forecasting results, historical data from 18 large-scale earthquakes that occurred in China are used to estimate the seismic mortality of each case, and a typical earthquake that occurred in the northwest of Sichuan Province is employed to demonstrate the estimation of the final death toll. The strength of this paper is that it provides scientific methods with overall forecast errors lower than 20%, and opens the door to conducting final death forecasts with a qualitative and quantitative approach. Limitations and future research are also analyzed and discussed in the conclusion.

  4. Generalized Heteroskedasticity ACF for Moving Average Models in Explicit Forms

    OpenAIRE

    Samir Khaled Safi

    2014-01-01

    The autocorrelation function (ACF) measures the correlation between observations at different distances apart. We derive explicit equations for the generalized heteroskedasticity ACF for a moving average process of order q, MA(q). We consider two cases: firstly, when the disturbance terms follow the general covariance matrix structure Cov(wi, wj) = Σ with σij ≠ 0 for all i ≠ j; secondly, when the diagonal elements of Σ are not all identical but σij = 0 for all i ≠ j, i.e. Σ = diag(σ11, σ22, …

  5. Experimental investigation of a moving averaging algorithm for motion perpendicular to the leaf travel direction in dynamic MLC target tracking

    Energy Technology Data Exchange (ETDEWEB)

    Yoon, Jai-Woong; Sawant, Amit; Suh, Yelin; Cho, Byung-Chul; Suh, Tae-Suk; Keall, Paul [Department of Biomedical Engineering, College of Medicine, Catholic University of Korea, Seoul, Korea 131-700 and Research Institute of Biomedical Engineering, Catholic University of Korea, Seoul, 131-700 (Korea, Republic of); Department of Radiation Oncology, Stanford University, Stanford, California 94305 (United States); Department of Radiation Oncology, Stanford University, Stanford, California 94305 (United States) and Department of Radiation Oncology, Asan Medical Center, Seoul, 138-736 (Korea, Republic of); Department of Biomedical Engineering, College of Medicine, Catholic University of Korea, Seoul, 131-700 and Research Institute of Biomedical Engineering, Catholic University of Korea, Seoul, 131-700 (Korea, Republic of); Department of Radiation Oncology, Stanford University, Stanford, California 94305 (United States) and Radiation Physics Laboratory, Sydney Medical School, University of Sydney, 2006 (Australia)

    2011-07-15

    Purpose: In dynamic multileaf collimator (MLC) motion tracking with complex intensity-modulated radiation therapy (IMRT) fields, target motion perpendicular to the MLC leaf travel direction can cause beam holds, which increase beam delivery time by up to a factor of 4. As a means to balance delivery efficiency and accuracy, a moving average algorithm was incorporated into a dynamic MLC motion tracking system (i.e., moving average tracking) to account for target motion perpendicular to the MLC leaf travel direction. The experimental investigation of the moving average algorithm compared with real-time tracking and no compensation beam delivery is described. Methods: The properties of the moving average algorithm were measured and compared with those of real-time tracking (dynamic MLC motion tracking accounting for target motion both parallel and perpendicular to the leaf travel direction) and no compensation beam delivery. The algorithm was investigated using a synthetic motion trace with a baseline drift and four patient-measured 3D tumor motion traces representing regular and irregular motions with varying baseline drifts. Each motion trace was reproduced by a moving platform. The delivery efficiency, geometric accuracy, and dosimetric accuracy were evaluated for conformal, step-and-shoot IMRT, and dynamic sliding window IMRT treatment plans using the synthetic and patient motion traces. The dosimetric accuracy was quantified via a γ-test with a 3%/3 mm criterion. Results: The delivery efficiency ranged from 89 to 100% for moving average tracking, 26%-100% for real-time tracking, and 100% (by definition) for no compensation. The root-mean-square geometric error ranged from 3.2 to 4.0 mm for moving average tracking, 0.7-1.1 mm for real-time tracking, and 3.7-7.2 mm for no compensation. The percentage of dosimetric points failing the γ-test ranged from 4 to 30% for moving average tracking, 0%-23% for real-time tracking, and 10%-47% for no compensation.

  6. Experimental investigation of a moving averaging algorithm for motion perpendicular to the leaf travel direction in dynamic MLC target tracking.

    Science.gov (United States)

    Yoon, Jai-Woong; Sawant, Amit; Suh, Yelin; Cho, Byung-Chul; Suh, Tae-Suk; Keall, Paul

    2011-07-01

    In dynamic multileaf collimator (MLC) motion tracking with complex intensity-modulated radiation therapy (IMRT) fields, target motion perpendicular to the MLC leaf travel direction can cause beam holds, which increase beam delivery time by up to a factor of 4. As a means to balance delivery efficiency and accuracy, a moving average algorithm was incorporated into a dynamic MLC motion tracking system (i.e., moving average tracking) to account for target motion perpendicular to the MLC leaf travel direction. The experimental investigation of the moving average algorithm compared with real-time tracking and no compensation beam delivery is described. The properties of the moving average algorithm were measured and compared with those of real-time tracking (dynamic MLC motion tracking accounting for target motion both parallel and perpendicular to the leaf travel direction) and no compensation beam delivery. The algorithm was investigated using a synthetic motion trace with a baseline drift and four patient-measured 3D tumor motion traces representing regular and irregular motions with varying baseline drifts. Each motion trace was reproduced by a moving platform. The delivery efficiency, geometric accuracy, and dosimetric accuracy were evaluated for conformal, step-and-shoot IMRT, and dynamic sliding window IMRT treatment plans using the synthetic and patient motion traces. The dosimetric accuracy was quantified via a γ-test with a 3%/3 mm criterion. The delivery efficiency ranged from 89 to 100% for moving average tracking, 26%-100% for real-time tracking, and 100% (by definition) for no compensation. The root-mean-square geometric error ranged from 3.2 to 4.0 mm for moving average tracking, 0.7-1.1 mm for real-time tracking, and 3.7-7.2 mm for no compensation. The percentage of dosimetric points failing the γ-test ranged from 4 to 30% for moving average tracking, 0%-23% for real-time tracking, and 10%-47% for no compensation. The delivery efficiency of…
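
    To illustrate the central idea (not the clinical implementation), the sketch below applies a causal moving average to the motion component perpendicular to leaf travel; the MLC then follows the smoothed signal instead of the instantaneous position, trading geometric accuracy for fewer beam holds. The sampling rate, window length, and motion trace are assumptions.

        # Causal moving average of the perpendicular motion component.
        import numpy as np

        def moving_average_track(perp_pos, window=20):
            out = np.empty_like(perp_pos)
            for i in range(len(perp_pos)):
                out[i] = perp_pos[max(0, i - window + 1):i + 1].mean()
            return out

        t = np.arange(0, 30, 0.05)                        # 20 Hz samples over 30 s
        perp = 10 * np.sin(2 * np.pi * t / 4) + 0.2 * t   # 4 s cycle plus drift (mm)
        smoothed = moving_average_track(perp)
        print("rms lag behind true motion (mm):",
              np.sqrt(np.mean((smoothed - perp) ** 2)))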

  7. Exponentially Weighted Moving Average Chart as a Suitable Tool for Nuchal Translucency Quality Review

    Czech Academy of Sciences Publication Activity Database

    Hynek, M.; Smetanová, D.; Stejskal, D.; Zvárová, Jana

    2014-01-01

    Roč. 34, č. 4 (2014), s. 367-376 ISSN 0197-3851 Institutional support: RVO:67985807 Keywords : nuchal translucency * exponentially weighted moving average model * statistics Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 3.268, year: 2014

  8. Distributed State Estimation Using a Modified Partitioned Moving Horizon Strategy for Power Systems.

    Science.gov (United States)

    Chen, Tengpeng; Foo, Yi Shyh Eddy; Ling, K V; Chen, Xuebing

    2017-10-11

    In this paper, a distributed state estimation method based on moving horizon estimation (MHE) is proposed for large-scale power system state estimation. The proposed method partitions the power system into several local areas with non-overlapping states. Unlike the centralized approach, where all measurements are sent to a processing center, the proposed method distributes the state estimation task to local processing centers where local measurements are collected. Inspired by the partitioned moving horizon estimation (PMHE) algorithm, each local area solves a smaller optimization problem to estimate its own local states by using local measurements and estimated results from its neighboring areas. In contrast with PMHE, the error from the process model is ignored in our method. The proposed modified PMHE (mPMHE) approach can also take constraints on states into account during the optimization process, such that the influence of outliers can be further mitigated. Simulation results on the IEEE 14-bus and 118-bus systems verify that our method achieves comparable state estimation accuracy but with a significant reduction in the overall computation load.
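
    A generic moving horizon estimation sketch on a toy one-dimensional system is shown below (not the partitioned power-grid formulation of the record above): at each step, the states in a fixed window are re-estimated by least squares against the model and the window's measurements, with a simple arrival-cost anchor carrying information forward.

        # Toy MHE: scalar dynamics x_{k+1} = a*x_k + w, measurements y_k = c*x_k + v.
        import numpy as np
        from scipy.optimize import least_squares

        a, c, N = 0.95, 1.0, 10                  # dynamics, output map, horizon length

        def residuals(x, y_win, x_prior):
            r_meas = y_win - c * x               # measurement residuals
            r_dyn = x[1:] - a * x[:-1]           # process-model residuals
            r_arr = np.array([x[0] - x_prior])   # arrival-cost anchor
            return np.concatenate([r_meas, 5.0 * r_dyn, r_arr])

        rng = np.random.default_rng(3)
        x_true, xs, ys = 1.0, [], []
        for _ in range(100):
            x_true = a * x_true + rng.normal(0, 0.05)
            xs.append(x_true)
            ys.append(c * x_true + rng.normal(0, 0.3))

        x_prior, est = 0.0, []
        for k in range(N, len(ys)):
            y_win = np.array(ys[k - N:k])
            sol = least_squares(residuals, np.full(N, x_prior), args=(y_win, x_prior))
            est.append(sol.x[-1])                # estimate of the newest state in window
            x_prior = sol.x[1]                   # prior for the next window's first state
        print("rmse:", np.sqrt(np.mean((np.array(est) - np.array(xs[N - 1:-1])) ** 2)))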

  9. Moving Horizon Control and Estimation of Livestock Ventilation Systems and Indoor Climate

    DEFF Research Database (Denmark)

    Wu, Z.; Stoustrup, Jakob; Jørgensen, John Bagterp

    2008-01-01

    In this paper, a new control strategy that exploits actuator redundancy in a multivariable system is developed for rejecting fast-frequency disturbances and pursuing an optimal energy solution. This strategy enhances the resilience of the control system to disturbances beyond its bandwidth and reduces energy consumption through on-line optimization. The moving horizon estimation and control (also called predictive control) technology is applied and simulated. The design is based on a coupled mathematical model which combines the hybrid ventilation system and the associated indoor climate for poultry in barns. The comparative simulation results illustrate the significant potential and advancement of the moving horizon methodologies in estimation and control for nonlinear Multiple Input and Multiple Output systems with unknown noise covariance and actuator saturation.

  10. Autoregressive moving average (ARMA) model applied to quantification of cerebral blood flow using dynamic susceptibility contrast-enhanced magnetic resonance imaging

    International Nuclear Information System (INIS)

    Murase, Kenya; Yamazaki, Youichi; Shinohara, Masaaki

    2003-01-01

    The purpose of this study was to investigate the feasibility of the autoregressive moving average (ARMA) model for quantification of cerebral blood flow (CBF) with dynamic susceptibility contrast-enhanced magnetic resonance imaging (DSC-MRI) in comparison with deconvolution analysis based on singular value decomposition (DA-SVD). Using computer simulations, we generated a time-dependent concentration of the contrast agent in the volume of interest (VOI) from the arterial input function (AIF) modeled as a gamma-variate function under various CBFs, cerebral blood volumes and signal-to-noise ratios (SNRs) for three different types of residue function (exponential, triangular, and box-shaped). We also considered the effects of delay and dispersion in AIF. The ARMA model and DA-SVD were used to estimate CBF values from the simulated concentration-time curves in the VOI and AIFs, and the estimated values were compared with the assumed values. We found that the CBF value estimated by the ARMA model was more sensitive to the SNR and the delay in AIF than that obtained by DA-SVD. Although the ARMA model considerably overestimated CBF at low SNRs, it estimated the CBF more accurately than did DA-SVD at high SNRs for the exponential or triangular residue function. We believe this study will contribute to an understanding of the usefulness and limitations of the ARMA model when applied to quantification of CBF with DSC-MRI. (author)

  11. Experimental Verification of a Vehicle Localization based on Moving Horizon Estimation Integrating LRS and Odometry

    International Nuclear Information System (INIS)

    Sakaeta, Kuniyuki; Nonaka, Kenichiro; Sekiguchi, Kazuma

    2016-01-01

    Localization is an important function for robots to complete various tasks. For localization, both internal and external sensors are generally used. Odometry is widely used as a method based on internal sensors, but it suffers from cumulative errors. In methods using a laser range sensor (LRS), which is a kind of external sensor, the estimation accuracy is affected by the number of available measurement data. In our previous study, we applied moving horizon estimation (MHE) to vehicle localization, integrating the LRS measurement data and the odometry information, with their relative weightings adapted to the number of available LRS measurement data. In this paper, the effectiveness of the proposed localization method is verified through both numerical simulations and experiments using a 1/10 scale vehicle. The verification is conducted in situations where the vehicle position cannot be localized uniquely along a certain direction using the LRS measurement data only. We achieve accurate localization even in such situations by integrating the odometry and LRS based on MHE. We also show the superiority of the method through comparisons with a method using an extended Kalman filter (EKF). (paper)

  12. Analysis of nonlinear systems using ARMA [autoregressive moving average] models

    International Nuclear Information System (INIS)

    Hunter, N.F. Jr.

    1990-01-01

    While many vibration systems exhibit primarily linear behavior, a significant percentage of the systems encountered in vibration and modal testing are mildly to severely nonlinear. Analysis methods for such nonlinear systems are not yet well developed, and the response of such systems is not accurately predicted by linear models. Nonlinear ARMA (autoregressive moving average) models are one method for the analysis and response prediction of nonlinear vibratory systems. In this paper we review the background of linear and nonlinear ARMA models, and illustrate the application of these models to nonlinear vibration systems. We conclude by summarizing the advantages and disadvantages of ARMA models and emphasizing prospects for future development. 14 refs., 11 figs

  13. Medium term municipal solid waste generation prediction by autoregressive integrated moving average

    International Nuclear Information System (INIS)

    Younes, Mohammad K.; Nopiah, Z. M.; Basri, Noor Ezlin A.; Basri, Hassan

    2014-01-01

    Generally, solid waste handling and management are performed by the municipality or local authority. In most developing countries, local authorities suffer from serious solid waste management (SWM) problems as well as insufficient data and strategic planning. It is therefore important to develop a robust solid waste generation forecasting model, which helps to properly manage the generated solid waste and to develop future plans based on relatively accurate figures. In Malaysia, the solid waste generation rate is increasing rapidly due to population growth and the new consumption trends that characterize the modern life style. This paper aims to develop a monthly solid waste forecasting model using the Autoregressive Integrated Moving Average (ARIMA) method; such a model is applicable even when data are scarce and will help the municipality properly establish the annual service plan. The results show that the ARIMA(6,1,0) model predicts monthly municipal solid waste generation with a root mean square error of 0.0952, and the model forecast residuals are within the accepted 95% confidence interval.

  14. Medium term municipal solid waste generation prediction by autoregressive integrated moving average

    Science.gov (United States)

    Younes, Mohammad K.; Nopiah, Z. M.; Basri, Noor Ezlin A.; Basri, Hassan

    2014-09-01

    Generally, solid waste handling and management are performed by the municipality or local authority. In most developing countries, local authorities suffer from serious solid waste management (SWM) problems as well as insufficient data and strategic planning. It is therefore important to develop a robust solid waste generation forecasting model, which helps to properly manage the generated solid waste and to develop future plans based on relatively accurate figures. In Malaysia, the solid waste generation rate is increasing rapidly due to population growth and the new consumption trends that characterize the modern life style. This paper aims to develop a monthly solid waste forecasting model using the Autoregressive Integrated Moving Average (ARIMA) method; such a model is applicable even when data are scarce and will help the municipality properly establish the annual service plan. The results show that the ARIMA(6,1,0) model predicts monthly municipal solid waste generation with a root mean square error of 0.0952, and the model forecast residuals are within the accepted 95% confidence interval.

  15. Medium term municipal solid waste generation prediction by autoregressive integrated moving average

    Energy Technology Data Exchange (ETDEWEB)

    Younes, Mohammad K.; Nopiah, Z. M.; Basri, Noor Ezlin A.; Basri, Hassan [Department of Civil and Structural Engineering, Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia, 43600 Bangi, Selangor (Malaysia)

    2014-09-12

    Generally, solid waste handling and management are performed by the municipality or local authority. In most developing countries, local authorities suffer from serious solid waste management (SWM) problems as well as insufficient data and strategic planning. It is therefore important to develop a robust solid waste generation forecasting model, which helps to properly manage the generated solid waste and to develop future plans based on relatively accurate figures. In Malaysia, the solid waste generation rate is increasing rapidly due to population growth and the new consumption trends that characterize the modern life style. This paper aims to develop a monthly solid waste forecasting model using the Autoregressive Integrated Moving Average (ARIMA) method; such a model is applicable even when data are scarce and will help the municipality properly establish the annual service plan. The results show that the ARIMA(6,1,0) model predicts monthly municipal solid waste generation with a root mean square error of 0.0952, and the model forecast residuals are within the accepted 95% confidence interval.

  16. Offset-Free Model Predictive Control of Open Water Channel Based on Moving Horizon Estimation

    Science.gov (United States)

    Ekin Aydin, Boran; Rutten, Martine

    2016-04-01

    Model predictive control (MPC) is a powerful control option which is increasingly used by operational water managers for managing water systems. The explicit consideration of constraints and multi-objective management are important features of MPC. However, due to water loss in open water systems through seepage, leakage and evaporation, a mismatch between the model and the real system is created. This mismatch affects the performance of MPC and creates an offset from the reference set point of the water level. We present model predictive control based on moving horizon estimation (MHE-MPC) to achieve offset-free control of the water level in open water canals. MHE-MPC uses the past predictions of the model and the past measurements of the system to estimate unknown disturbances, and the offset in the controlled water level is systematically removed. We numerically tested MHE-MPC on an accurate hydrodynamic model of the laboratory canal UPC-PAC located in Barcelona. In addition, we also applied a well-known disturbance modeling offset-free control scheme to the same test case. Simulation experiments on a single canal reach show that MHE-MPC outperforms the disturbance modeling offset-free control scheme.

  17. The Grid Method in Estimating the Path Length of a Moving Animal

    NARCIS (Netherlands)

    Reddingius, J.; Schilstra, A.J.; Thomas, G.

    1983-01-01

    (1) The length of a path covered by a moving animal may be estimated by counting the number of times the animal crosses any line of a grid and applying a conversion factor. (2) Some factors are based on the expected distance through a randomly crossed square; another on the expected crossings of a

  18. Dosimetric consequences of planning lung treatments on 4DCT average reconstruction to represent a moving tumour

    International Nuclear Information System (INIS)

    Dunn, L.F.; Taylor, M.L.; Kron, T.; Franich, R.

    2010-01-01

    Full text: Anatomic motion during a radiotherapy treatment is one of the more significant challenges in contemporary radiation therapy. For tumours of the lung, motion due to patient respiration makes both accurate planning and dose delivery difficult. One approach is to use the maximum intensity projection (MIP) obtained from a 4D computed tomography (CT) scan and then use this to determine the treatment volume. The treatment is then planned on a 4D-CT average reconstruction, rather than assuming the entire ITV has a uniform tumour density. This raises the question: how well does planning on a 'blurred' distribution of density, with CT values greater than lung density but less than tumour density, match the true case of a tumour moving within lung tissue? The aim of this study was to answer this question by determining the dosimetric impact of using a 4D-CT average reconstruction as the basis for a radiotherapy treatment plan. To achieve this, Monte Carlo simulations were undertaken using GEANT4. The geometry consisted of a tumour (diameter 30 mm) moving with a sinusoidal pattern of amplitude 20 mm. The tumour's excursion occurs within a lung-equivalent volume beyond a chest wall interface. Motion was defined parallel to a 6 MV beam. This was then compared to a single oblate tumour of a magnitude determined by the extremes of the tumour motion. The variable density of the 4D-CT average tumour is simulated by a time-weighted average, to achieve the observed density gradient. The generic moving tumour geometry is illustrated in the Figure.

  19. An Estimation of the Likelihood of Significant Eruptions During 2000-2009 Using Poisson Statistics on Two-Point Moving Averages of the Volcanic Time Series

    Science.gov (United States)

    Wilson, Robert M.

    2001-01-01

    Since 1750, the number of cataclysmic volcanic eruptions (volcanic explosivity index (VEI)>=4) per decade spans 2-11, with 96 percent located in the tropics and extra-tropical Northern Hemisphere. A two-point moving average of the volcanic time series has higher values since the 1860's than before, being 8.00 in the 1910's (the highest value) and 6.50 in the 1980's, the highest since the 1910's peak. Because of the usual behavior of the first difference of the two-point moving averages, one infers that its value for the 1990's will measure approximately 6.50 +/- 1, implying that approximately 7 +/- 4 cataclysmic volcanic eruptions should be expected during the present decade (2000-2009). Because cataclysmic volcanic eruptions (especially those having VEI>=5) nearly always have been associated with short-term episodes of global cooling, the occurrence of even one might confuse our ability to assess the effects of global warming. Poisson probability distributions reveal that the probability of one or more events with a VEI>=4 within the next ten years is >99 percent. It is approximately 49 percent for an event with a VEI>=5, and 18 percent for an event with a VEI>=6. Hence, the likelihood that a climatically significant volcanic eruption will occur within the next ten years appears reasonably high.
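
    As a worked check of the quoted probabilities (assuming Poisson event counts, with decadal rates chosen to match the text: roughly 7 for VEI>=4, and back-of-envelope rates of about 0.67 and 0.20 for VEI>=5 and VEI>=6):

        # P(at least one event in a decade) = 1 - exp(-lambda) for assumed rates.
        import math

        for label, lam in [("VEI>=4", 7.0), ("VEI>=5", 0.67), ("VEI>=6", 0.20)]:
            print(f"{label}: P(>=1 in 10 yr) = {1.0 - math.exp(-lam):.2f}")

    The outputs (about 1.00, 0.49, and 0.18) reproduce the >99, ~49, and ~18 percent figures above.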

  20. Estimating glomerular filtration rate (GFR) in children. The average between a cystatin C- and a creatinine-based equation improves estimation of GFR in both children and adults and enables diagnosing Shrunken Pore Syndrome.

    Science.gov (United States)

    Leion, Felicia; Hegbrant, Josefine; den Bakker, Emil; Jonsson, Magnus; Abrahamson, Magnus; Nyman, Ulf; Björk, Jonas; Lindström, Veronica; Larsson, Anders; Bökenkamp, Arend; Grubb, Anders

    2017-09-01

    Estimating glomerular filtration rate (GFR) in adults by using the average of values obtained by a cystatin C-based (eGFRcystatin C) and a creatinine-based (eGFRcreatinine) equation shows at least the same diagnostic performance as GFR estimates obtained by equations using only one of these analytes or by complex equations using both analytes. Comparison of eGFRcystatin C and eGFRcreatinine plays a pivotal role in the diagnosis of Shrunken Pore Syndrome, where a low eGFRcystatin C compared to eGFRcreatinine has been associated with higher mortality in adults. The present study was undertaken to elucidate whether this concept can also be applied in children. Using iohexol and inulin clearance as gold standards in 702 children, we studied the diagnostic performance of 10 creatinine-based, 5 cystatin C-based and 3 combined cystatin C-creatinine eGFR equations and compared them to the result of the average of 9 pairs of an eGFRcystatin C and an eGFRcreatinine estimate. While creatinine-based GFR estimations are unsuitable in children unless calibrated in a pediatric or mixed pediatric-adult population, cystatin C-based estimations in general performed well in children. The average of a suitable creatinine-based and a cystatin C-based equation generally displayed a better diagnostic performance than estimates obtained by equations using only one of these analytes or by complex equations using both analytes. Comparing eGFRcystatin C and eGFRcreatinine may help identify pediatric patients with Shrunken Pore Syndrome.
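
    A minimal sketch of the averaging and comparison logic follows; the ratio cutoff is an assumed placeholder, not the study's criterion (research definitions of Shrunken Pore Syndrome typically use a cystatin C-based estimate below 60-70% of the creatinine-based estimate).

        # Average two eGFR estimates and flag a possible Shrunken Pore Syndrome.
        def mean_egfr(egfr_cys, egfr_crea):
            return 0.5 * (egfr_cys + egfr_crea)

        def shrunken_pore_flag(egfr_cys, egfr_crea, ratio_cutoff=0.6):
            # Assumed placeholder cutoff: cystatin C estimate < 60% of creatinine estimate.
            return egfr_cys < ratio_cutoff * egfr_crea

        print(mean_egfr(45.0, 80.0), shrunken_pore_flag(45.0, 80.0))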

  1. Application of Bayesian approach to estimate average level spacing

    International Nuclear Information System (INIS)

    Huang Zhongfu; Zhao Zhixiang

    1991-01-01

    A method to estimate the average level spacing from a set of resolved resonance parameters using a Bayesian approach is given. Using the information contained in the distributions of both level spacings and neutron widths, levels missing from the measured sample can be corrected for more precisely, so that a better estimate of the average level spacing can be obtained by this method. The calculation has been done for s-wave resonances and a comparison with other work was carried out.

  2. Estimation of the monthly average daily solar radiation using geographic information system and advanced case-based reasoning.

    Science.gov (United States)

    Koo, Choongwan; Hong, Taehoon; Lee, Minhyun; Park, Hyo Seon

    2013-05-07

    The photovoltaic (PV) system is considered an unlimited source of clean energy, whose amount of electricity generation changes according to the monthly average daily solar radiation (MADSR). It is revealed that the MADSR distribution in South Korea has very diverse patterns due to the country's climatic and geographical characteristics. This study aimed to develop a MADSR estimation model for locations without measured MADSR data, using an advanced case-based reasoning (CBR) model, which is a hybrid methodology combining CBR with artificial neural networks, multiregression analysis, and a genetic algorithm. The average prediction accuracy of the advanced CBR model was very high at 95.69%, and the standard deviation of the prediction accuracy was 3.67%, showing a significant improvement in prediction accuracy and consistency. A case study was conducted to verify the proposed model. The proposed model could be useful for an owner or construction manager in charge of deciding whether to introduce the PV system and where to install it. It would also benefit contractors in a competitive bidding process by allowing them to accurately estimate the electricity generation of the PV system in advance and to conduct an economic and environmental feasibility study from the life cycle perspective.

  3. Forecast of sea surface temperature off the Peruvian coast using an autoregressive integrated moving average model

    Directory of Open Access Journals (Sweden)

    Carlos Quispe

    2013-04-01

    Full Text Available El Niño connects climate, ecosystems, and socio-economic activities globally. Attempts to predict this event date back to 1980, but the statistical and dynamical models remain insufficient. Thus, the objective of the present work was to explore, using an autoregressive moving average model, the effect of El Niño on the sea surface temperature (SST) off the Peruvian coast. The work involved 5 stages: identification, estimation, diagnostic checking, forecasting, and validation. Simple and partial autocorrelation functions (ACF and PACF) were used to identify and reformulate the orders of the model parameters, and the Akaike information criterion (AIC) and Schwarz criterion (SC) were used to select the best models during diagnostic checking. Among the main results, ARIMA(12,0,11) models were proposed, which simulated monthly conditions in agreement with those observed off the Peruvian coast: cold conditions at the end of 2004, and neutral conditions at the beginning of 2005.

  4. Object Detection and Tracking-Based Camera Calibration for Normalized Human Height Estimation

    Directory of Open Access Journals (Sweden)

    Jaehoon Jung

    2016-01-01

    Full Text Available This paper presents a normalized human height estimation algorithm using an uncalibrated camera. To estimate the normalized human height, the proposed algorithm detects a moving object and performs tracking-based automatic camera calibration. The proposed method consists of three steps: (i) moving human detection and tracking, (ii) automatic camera calibration, and (iii) human height estimation and error correction. The proposed method automatically calibrates the camera by detecting moving humans and estimates the human height using error correction. The proposed method can be applied to object-based video surveillance systems and digital forensics.

  5. Applying Moving Objects Patterns towards Estimating Future Stocks Direction

    Directory of Open Access Journals (Sweden)

    Galal Dahab

    2016-01-01

    Full Text Available Stocks are gaining vast popularity as a strategic investment tool, not just among investment bankers but also among average workers. Large amounts of capital are traded within stock markets all around the world, making their impact not only macroeconomic but also of great direct social value. As a result, almost 66% of all American citizens strive in their respective fields every day to come up with better ways to predict and find patterns in stocks that could enhance estimation and visualization, so as to have the opportunity to make better investment decisions. Given the amount of effort that has been put into enhancing stock prediction techniques, there is still a factor that is almost completely neglected when handling stocks: the correlation existing between stocks of the same index or parent company. This paper proposes a distinct approach for studying the correlation between stocks that belong to the same index by modelling stocks as moving objects, so that their movements can be tracked while their relationships are considered. Furthermore, it studies one of the movement techniques applied to moving objects to predict stock movement. The results yielded that both the movement technique and the correlation coefficient technique are consistent in direction, with minor variations in values; the variations are attributed to the fact that the movement technique takes the sibling relationship into consideration.

  6. Reduced complexity FFT-based DOA and DOD estimation for moving target in bistatic MIMO radar

    KAUST Repository

    Ali, Hussain

    2016-06-24

    In this paper, we consider a bistatic multiple-input multiple-output (MIMO) radar. We propose a reduced-complexity algorithm to estimate the direction-of-arrival (DOA) and direction-of-departure (DOD) of a moving target. We show that the parameter estimation can be expressed in terms of one-dimensional fast Fourier transforms, which drastically reduces the complexity of the optimization algorithm. The performance of the proposed algorithm is compared with the two-dimensional multiple signal classification (2D-MUSIC) and reduced-dimension MUSIC (RD-MUSIC) algorithms. Simulations show that the proposed algorithm has better estimation performance and lower computational complexity than the 2D-MUSIC and RD-MUSIC algorithms. Moreover, simulation results also show that the proposed algorithm achieves the Cramer-Rao lower bound. © 2016 IEEE.
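
    A one-dimensional illustration of the FFT-based principle is sketched below (a uniform linear array and a single angle; not the paper's bistatic DOA/DOD algorithm): the array snapshot is a spatial sinusoid, so the peak of a zero-padded FFT yields the angle.

        # FFT-based angle estimation for a uniform linear array (illustrative).
        import numpy as np

        rng = np.random.default_rng(5)
        n_elem, d_over_lambda = 16, 0.5
        theta_true = np.deg2rad(20.0)
        n = np.arange(n_elem)
        snapshot = np.exp(2j * np.pi * d_over_lambda * n * np.sin(theta_true))
        snapshot += 0.05 * (rng.standard_normal(n_elem) + 1j * rng.standard_normal(n_elem))

        nfft = 4096                            # zero-padding refines the frequency grid
        spec = np.abs(np.fft.fft(snapshot, nfft))
        f_peak = np.fft.fftfreq(nfft)[np.argmax(spec)]   # cycles per element
        print("estimated angle (deg):", np.degrees(np.arcsin(f_peak / d_over_lambda)))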

  7. Statistical theory for estimating sampling errors of regional radiation averages based on satellite measurements

    Science.gov (United States)

    Smith, G. L.; Bess, T. D.; Minnis, P.

    1983-01-01

    The processes which determine the weather and climate are driven by the radiation received by the earth and the radiation subsequently emitted. A knowledge of the absorbed and emitted components of radiation is thus fundamental for the study of these processes. In connection with the desire to improve the quality of long-range forecasting, NASA is developing the Earth Radiation Budget Experiment (ERBE), consisting of a three-channel scanning radiometer and a package of nonscanning radiometers. A set of these instruments is to be flown on both the NOAA-F and NOAA-G spacecraft, in sun-synchronous orbits, and on an Earth Radiation Budget Satellite. The purpose of the scanning radiometer is to obtain measurements from which the average reflected solar radiant exitance and the average earth-emitted radiant exitance at a reference level can be established. The estimate of regional average exitance obtained will not exactly equal the true value of the regional average exitance, but will differ due to spatial sampling. A method is presented for evaluating this spatial sampling error.

  8. Reliability Estimates for Undergraduate Grade Point Average

    Science.gov (United States)

    Westrick, Paul A.

    2017-01-01

    Undergraduate grade point average (GPA) is a commonly employed measure in educational research, serving as a criterion or as a predictor depending on the research question. Over the decades, researchers have used a variety of reliability coefficients to estimate the reliability of undergraduate GPA, which suggests that there has been no consensus…

  9. Macroeconomic Forecasts in Models with Bayesian Averaging of Classical Estimates

    Directory of Open Access Journals (Sweden)

    Piotr Białowolski

    2012-03-01

    Full Text Available The aim of this paper is to construct a forecasting model oriented toward predicting basic macroeconomic variables, namely the GDP growth rate, the unemployment rate, and consumer price inflation. In order to select the set of the best regressors, Bayesian Averaging of Classical Estimators (BACE) is employed. The models are atheoretical (i.e. they do not reflect causal relationships postulated by macroeconomic theory) and the role of regressors is played by business and consumer tendency survey-based indicators. Additionally, the survey-based indicators are included with a lag that enables forecasting the variables of interest (GDP, unemployment, and inflation) for the four forthcoming quarters without the need to make any additional assumptions concerning the values of the predictor variables in the forecast period. Bayesian Averaging of Classical Estimators is a method allowing for a full and controlled overview of all econometric models which can be obtained from a particular set of regressors. In this paper the authors describe the method of generating a family of econometric models and the procedure for selecting a final forecasting model. Verification of the procedure is performed by means of out-of-sample forecasts of the main economic variables for the quarters of 2011. The accuracy of the forecasts implies that there is still a need to search for new solutions in atheoretical modelling.

  10. Estimating the average treatment effect on survival based on observational data and using partly conditional modeling.

    Science.gov (United States)

    Gong, Qi; Schaubel, Douglas E

    2017-03-01

    Treatments are frequently evaluated in terms of their effect on patient survival. In settings where randomization of treatment is not feasible, observational data are employed, necessitating correction for covariate imbalances. Treatments are usually compared using a hazard ratio. Most existing methods which quantify the treatment effect through the survival function are applicable to treatments assigned at time 0. In the data structure of our interest, subjects typically begin follow-up untreated; time-until-treatment, and the pretreatment death hazard are both heavily influenced by longitudinal covariates; and subjects may experience periods of treatment ineligibility. We propose semiparametric methods for estimating the average difference in restricted mean survival time attributable to a time-dependent treatment, the average effect of treatment among the treated, under current treatment assignment patterns. The pre- and posttreatment models are partly conditional, in that they use the covariate history up to the time of treatment. The pre-treatment model is estimated through recently developed landmark analysis methods. For each treated patient, fitted pre- and posttreatment survival curves are projected out, then averaged in a manner which accounts for the censoring of treatment times. Asymptotic properties are derived and evaluated through simulation. The proposed methods are applied to liver transplant data in order to estimate the effect of liver transplantation on survival among transplant recipients under current practice patterns. © 2016, The International Biometric Society.

  11. Real-time moving horizon estimation for a vibrating active cantilever

    Science.gov (United States)

    Abdollahpouri, Mohammad; Takács, Gergely; Rohaľ-Ilkiv, Boris

    2017-03-01

    Vibrating structures may be subject to changes throughout their operating lifetime due to a range of environmental and technical factors. These variations can be considered as parameter changes in the dynamic model of the structure, while their online estimates can be utilized in adaptive control strategies or in structural health monitoring. This paper implements the moving horizon estimation (MHE) algorithm on a low-cost embedded computing device that jointly observes the dynamic states and parameter variations of an active cantilever beam in real time. The practical behavior of this algorithm has been investigated in various experimental scenarios. It has been found that, for the given field of application, moving horizon estimation converges faster than the extended Kalman filter; moreover, it handles atypical measurement noise, sensor errors and other extreme changes reliably. Despite its improved performance, the experiments demonstrate that the disadvantage of solving the nonlinear optimization problem in MHE is that it naturally leads to an increase in computational effort.

  12. Estimation of average annual streamflows and power potentials for Alaska and Hawaii

    Energy Technology Data Exchange (ETDEWEB)

    Verdin, Kristine L. [Idaho National Lab. (INL), Idaho Falls, ID (United States). Idaho National Engineering and Environmental Lab. (INEEL)

    2004-05-01

    This paper describes the work done to develop average annual streamflow estimates and power potential for the states of Alaska and Hawaii. The Elevation Derivatives for National Applications (EDNA) database was used, along with climatic datasets, to develop flow and power estimates for every stream reach in the EDNA database. Estimates of average annual streamflows were derived using state-specific regression equations, which were functions of average annual precipitation, precipitation intensity, drainage area, and other elevation-derived parameters. Power potential was calculated through the use of the average annual streamflow and the hydraulic head of each reach, which is calculated from the EDNA digital elevation model. In all, estimates of streamflow and power potential were calculated for over 170,000 stream segments in the Alaskan and Hawaiian datasets.
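
    The power computation rests on the standard gross hydropower relation P = ρgQH; a one-line check with illustrative numbers (turbine efficiency omitted, since the record describes raw potential):

        # Gross hydropower potential of one reach: P = rho * g * Q * H.
        rho, g = 1000.0, 9.81      # water density (kg/m^3), gravity (m/s^2)
        Q, H = 12.0, 8.5           # average annual flow (m^3/s), head (m)
        P = rho * g * Q * H        # ~1.0e6 W, i.e. about 1 MW for this reach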

  13. Experimental Quasi-Microwave Whole-Body Averaged SAR Estimation Method Using Cylindrical-External Field Scanning

    Science.gov (United States)

    Kawamura, Yoshifumi; Hikage, Takashi; Nojima, Toshio

    The aim of this study is to develop a new whole-body averaged specific absorption rate (SAR) estimation method based on the external-cylindrical field scanning technique. This technique is adopted with the goal of simplifying the dosimetry estimation of human phantoms that have different postures or sizes. An experimental scaled model system is constructed. In order to examine the validity of the proposed method for realistic human models, we discuss the pros and cons of measurements and numerical analyses based on the finite-difference time-domain (FDTD) method. We consider anatomical European human phantoms and plane-wave exposure in the 2 GHz mobile phone frequency band. The measured whole-body averaged SAR results obtained by the proposed method are compared with the results of the FDTD analyses.

  14. Statistical properties of the anomalous scaling exponent estimator based on time-averaged mean-square displacement

    Science.gov (United States)

    Sikora, Grzegorz; Teuerle, Marek; Wyłomańska, Agnieszka; Grebenkov, Denis

    2017-08-01

    The most common way of estimating the anomalous scaling exponent from single-particle trajectories consists of a linear fit of the dependence of the time-averaged mean-square displacement on the lag time at the log-log scale. We investigate the statistical properties of this estimator in the case of fractional Brownian motion (FBM). We determine the mean value, the variance, and the distribution of the estimator. Our theoretical results are confirmed by Monte Carlo simulations. In the limit of long trajectories, the estimator is shown to be asymptotically unbiased, consistent, and with vanishing variance. These properties ensure an accurate estimation of the scaling exponent even from a single (long enough) trajectory. As a consequence, we prove that the usual way to estimate the diffusion exponent of FBM is correct from the statistical point of view. Moreover, the knowledge of the estimator distribution is the first step toward new statistical tests of FBM and toward a more reliable interpretation of the experimental histograms of scaling exponents in microbiology.
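
    The estimator under study is easy to state in code: compute the time-averaged MSD over a range of lag times and fit a straight line at log-log scale; the slope estimates the scaling exponent. A minimal sketch:

        import numpy as np

        def tamsd(x, lags):
            # Time-averaged mean-square displacement of one trajectory.
            return np.array([np.mean((x[m:] - x[:-m]) ** 2) for m in lags])

        def scaling_exponent(x, max_lag=20):
            lags = np.arange(1, max_lag + 1)
            slope, _ = np.polyfit(np.log(lags), np.log(tamsd(x, lags)), 1)
            return slope            # alpha = 1 for ordinary Brownian motion

        x = np.cumsum(np.random.randn(10000))   # Brownian path, alpha ~ 1
        print(scaling_exponent(x))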

  15. Accurate location estimation of moving object In Wireless Sensor network

    Directory of Open Access Journals (Sweden)

    Vinay Bhaskar Semwal

    2011-12-01

    One of the central issues in wireless sensor networks is tracking the location of a moving object, which carries the overhead of storing data while still requiring an accurate estimate of the target location under energy constraints; there is no built-in mechanism to control and maintain these data, and the wireless communication bandwidth is very limited. Fields that use this technique include flood and typhoon detection, forest fire detection, and temperature and humidity monitoring, where the collected information can be fed back to central air conditioning and ventilation systems. In this research paper, we propose a protocol based on prediction and an adaptive algorithm that reduces the number of sensor nodes needed through an accurate estimation of the target location. We show that our tracking method performs well in terms of energy saving regardless of the mobility pattern of the mobile target, and that it extends the lifetime of the network with fewer sensor nodes. Once a new object is detected, a mobile agent is initiated to track the roaming path of the object.

  16. GIS Tools to Estimate Average Annual Daily Traffic

    Science.gov (United States)

    2012-06-01

    This project presents five tools that were created for a geographical information system to estimate Annual Average Daily Traffic using linear regression. Three of the tools can be used to prepare spatial data for linear regression. One tool can be...

  17. A Pareto-optimal moving average multigene genetic programming model for daily streamflow prediction

    Science.gov (United States)

    Danandeh Mehr, Ali; Kahya, Ercan

    2017-06-01

    Genetic programming (GP) is able to systematically explore alternative model structures of different accuracy and complexity from observed input and output data. The effectiveness of GP in hydrological system identification has been recognized in recent studies. However, selecting a parsimonious (accurate and simple) model from such alternatives still remains a question. This paper proposes a Pareto-optimal moving average multigene genetic programming (MA-MGGP) approach to develop a parsimonious model for single-station streamflow prediction. The three main components of the approach that take us from observed data to a validated model are: (1) data pre-processing, (2) system identification and (3) system simplification. The data pre-processing ingredient uses a simple moving average filter to diminish the lagged prediction effect of stand-alone data-driven models. The multigene ingredient of the model tends to identify the underlying nonlinear system with expressions simpler than classical monolithic GP and, eventually, the simplification component exploits a Pareto front plot to select a parsimonious model through an interactive complexity-efficiency trade-off. The approach was tested using the daily streamflow records from a station on Senoz Stream, Turkey. Compared with the efficiency results of stand-alone GP, MGGP, and conventional multiple linear regression prediction models as benchmarks, the proposed Pareto-optimal MA-MGGP model put forward a parsimonious solution of noteworthy practical value. In addition, the approach allows the user to bring human insight into the problem, examine the evolved models and pick the best performing programs out for further analysis.
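
    The pre-processing ingredient is nothing more exotic than a moving average filter applied to the streamflow series before model building; a sketch (the window length is a user choice):

        import numpy as np

        def moving_average(q, window=3):
            # Smooths the series; output has len(q) - window + 1 values.
            return np.convolve(q, np.ones(window) / window, mode="valid")

        q = np.array([5.1, 4.8, 6.2, 9.7, 8.3, 7.0, 6.4])   # daily flows
        q_smooth = moving_average(q, window=3)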

  18. Estimating the population size and colony boundary of subterranean termites by using the density functions of directionally averaged capture probability.

    Science.gov (United States)

    Su, Nan-Yao; Lee, Sang-Hee

    2008-04-01

    Marked termites were released in a linear-connected foraging arena, and the spatial heterogeneity of their capture probabilities was averaged for both directions at distance r from the release point to obtain a symmetrical distribution, from which the density function of directionally averaged capture probability P(x) was derived. We hypothesized that as marked termites move into the population and given sufficient time, the directionally averaged capture probability may reach an equilibrium P(e) over the distance r and thus satisfy the equal mixing assumption of the mark-recapture protocol. The equilibrium capture probability P(e) was used to estimate the population size N. The hypothesis was tested in a 50-m extended foraging arena to simulate the distance factor of field colonies of subterranean termites. Over the 42-d test period, the density functions of directionally averaged capture probability P(x) exhibited four phases: exponential decline phase, linear decline phase, equilibrium phase, and postequilibrium phase. The equilibrium capture probability P(e), derived as the intercept of the linear regression during the equilibrium phase, correctly projected N estimates that were not significantly different from the known number of workers in the arena. Because the area beneath the probability density function is a constant (50% in this study), preequilibrium regression parameters and P(e) were used to estimate the population boundary distance l, which is the distance between the release point and the boundary beyond which the population is absent.

  19. Low Complexity Moving Target Parameter Estimation for MIMO Radar using 2D-FFT

    KAUST Repository

    Jardak, Seifallah

    2017-06-16

    In multiple-input multiple-output radar, to localize a target and estimate its reflection coefficient, a given cost function is usually optimized over a grid of points. The performance of such algorithms is directly affected by the grid resolution. Increasing the number of grid points enhances the resolution of the estimator but also increases its computational complexity exponentially. In this work, two reduced complexity algorithms are derived based on Capon and amplitude and phase estimation (APES) to estimate the reflection coefficient, angular location, and Doppler shift of multiple moving targets. By exploiting the structure of the terms, the cost function is brought into a form that allows us to apply the two-dimensional fast Fourier transform (2D-FFT) and reduce the computational complexity of estimation. Using a low resolution 2D-FFT, the proposed algorithm identifies sub-optimal estimates and feeds them as initial points to the derived Newton gradient algorithm. In contrast to grid-based search algorithms, the proposed algorithm can optimally estimate on- and off-the-grid targets with very low computational complexity. A new APES cost function with better estimation performance is also discussed. Generalized expressions of the Cramér-Rao lower bound are derived to assess the performance of the proposed algorithm.

  20. Low Complexity Moving Target Parameter Estimation for MIMO Radar using 2D-FFT

    KAUST Repository

    Jardak, Seifallah; Ahmed, Sajid; Alouini, Mohamed-Slim

    2017-01-01

    In multiple-input multiple-output radar, to localize a target and estimate its reflection coefficient, a given cost function is usually optimized over a grid of points. The performance of such algorithms is directly affected by the grid resolution. Increasing the number of grid points enhances the resolution of the estimator but also increases its computational complexity exponentially. In this work, two reduced complexity algorithms are derived based on Capon and amplitude and phase estimation (APES) to estimate the reflection coefficient, angular location, and Doppler shift of multiple moving targets. By exploiting the structure of the terms, the cost function is brought into a form that allows us to apply the two-dimensional fast Fourier transform (2D-FFT) and reduce the computational complexity of estimation. Using a low resolution 2D-FFT, the proposed algorithm identifies sub-optimal estimates and feeds them as initial points to the derived Newton gradient algorithm. In contrast to grid-based search algorithms, the proposed algorithm can optimally estimate on- and off-the-grid targets with very low computational complexity. A new APES cost function with better estimation performance is also discussed. Generalized expressions of the Cramér-Rao lower bound are derived to assess the performance of the proposed algorithm.

  1. Automatic Moving Object Segmentation for Freely Moving Cameras

    Directory of Open Access Journals (Sweden)

    Yanli Wan

    2014-01-01

    This paper proposes a new moving object segmentation algorithm for freely moving cameras, which are common in outdoor surveillance systems, car built-in surveillance systems, and robot navigation systems. A two-layer affine transformation model optimization method is proposed for camera motion compensation, where the outer-layer iteration filters out non-background feature points and the inner-layer iteration estimates a refined affine model based on the RANSAC method. The feature points are then classified into foreground and background according to the detected motion information. A geodesic-based graph cut algorithm is then employed to extract the moving foreground based on the classified features. Unlike existing methods based on global optimization or long-term feature point tracking, our algorithm operates only on two successive frames to segment the moving foreground, which makes it suitable for online video processing applications. The experimental results demonstrate the effectiveness of our algorithm in terms of both high accuracy and fast speed.

  2. On Averaging Rotations

    DEFF Research Database (Denmark)

    Gramkow, Claus

    1999-01-01

    In this article two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong to a non-linear manifold. It is shown that the two barycenter methods can be derived as approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation. Keywords: averaging rotations, Riemannian metric, matrix, quaternion

  3. Synchronized moving aperture radiation therapy (SMART): average tumour trajectory for lung patients

    International Nuclear Information System (INIS)

    Neicu, Toni; Shirato, Hiroki; Seppenwoolde, Yvette; Jiang, Steve B

    2003-01-01

    Synchronized moving aperture radiation therapy (SMART) is a new technique for treating mobile tumours under development at Massachusetts General Hospital (MGH). The basic idea of SMART is to synchronize the moving radiation beam aperture formed by a dynamic multileaf collimator (DMLC) with the tumour motion induced by respiration. SMART is based on the concept of the average tumour trajectory (ATT) exhibited by a tumour during respiration. During the treatment simulation stage, tumour motion is measured and the ATT is derived. Then, the original IMRT MLC leaf sequence is modified using the ATT to compensate for tumour motion. During treatment, the tumour motion is monitored. The treatment starts when leaf motion and tumour motion are synchronized at a specific breathing phase. The treatment will halt when the tumour drifts away from the ATT and will resume when the synchronization between tumour motion and radiation beam is re-established. In this paper, we present a method to derive the ATT from measured tumour trajectory data. We also investigate the validity of the ATT concept for lung tumours during normal breathing. The lung tumour trajectory data were acquired during actual radiotherapy sessions using a real-time tumour-tracking system. SMART treatment is simulated by assuming that the radiation beam follows the derived ATT and the tumour follows the measured trajectory. In simulation, the treatment starts at exhale phase. The duty cycle of SMART delivery was calculated for various treatment times and gating thresholds, as well as for various exhale phases where the treatment begins. The simulation results show that in the case of free breathing, for 4 out of 11 lung datasets with tumour motion greater than 1 cm from peak to peak, the error in tumour tracking can be controlled to within a couple of millimetres while maintaining a reasonable delivery efficiency. That is to say, without any breath coaching/control, the ATT is a valid concept for some lung

  4. Nonlinear Autoregressive Network with the Use of a Moving Average Method for Forecasting Typhoon Tracks

    OpenAIRE

    Tienfuan Kerh; Shin-Hung Wu

    2017-01-01

    Forecasting a typhoon's moving path may help evaluate the potential negative impacts in the neighbourhood areas along the path. This study proposes using both static and dynamic neural network models to link a time series of typhoon track parameters including longitude and latitude of the typhoon central location, cyclonic radius, central wind speed, and typhoon moving speed. Based on the historical records of 100 typhoons, the performances of neural network models are ev...

  5. A comprehensive method for evaluating precision of transfer alignment on a moving base

    Science.gov (United States)

    Yin, Hongliang; Xu, Bo; Liu, Dezheng

    2017-09-01

    In this study, we propose the use of the Degree of Alignment (DOA) in engineering applications for evaluating the precision of, and identifying, the transfer alignment on a moving base. First, we derive the statistical formula on the basis of the estimates. Next, we design a scheme for evaluating the transfer alignment on a moving base, for which the attitude error cannot be directly measured. Then, we build a mathematical estimation model and discuss Fixed Point Smoothing (FPS), Rauch-Tung-Striebel (RTS) smoothing, Inverted Sequence Recursive Estimation (ISRE), and Kalman filter estimation methods, which can be used when evaluating alignment accuracy. Our theoretical calculations and simulated analyses show that the DOA reflects not only the alignment time and accuracy but also differences in the maneuver schemes, and is suitable for use as an integrated evaluation index. Furthermore, all four of these algorithms can be used to identify the transfer alignment and evaluate its accuracy. We recommend RTS in particular for engineering applications. Generalized DOAs should be calculated according to the tactical requirements.

  6. Experimental Quasi-Microwave Whole-Body Averaged SAR Estimation Method Using Cylindrical-External Field Scanning

    OpenAIRE

    Kawamura, Yoshifumi; Hikage, Takashi; Nojima, Toshio

    2010-01-01

    The aim of this study is to develop a new whole-body averaged specific absorption rate (SAR) estimation method based on the external-cylindrical field scanning technique. This technique is adopted with the goal of simplifying the dosimetry estimation of human phantoms that have different postures or sizes. An experimental scaled model system is constructed. In order to examine the validity of the proposed method for realistic human models, we discuss the pros and cons of measurements and nume...

  7. Estimating average glandular dose by measuring glandular rate in mammograms

    International Nuclear Information System (INIS)

    Goto, Sachiko; Azuma, Yoshiharu; Sumimoto, Tetsuhiro; Eiho, Shigeru

    2003-01-01

    The glandular rate of the breast was objectively measured in order to calculate individual patient exposure dose (average glandular dose) in mammography. By employing image processing techniques and breast-equivalent phantoms with various glandular rate values, a conversion curve for pixel value to glandular rate can be determined by a neural network. Accordingly, the pixel values in clinical mammograms can be converted to the glandular rate value for each pixel. The individual average glandular dose can therefore be calculated using the individual glandular rates on the basis of the dosimetry method employed for quality control in mammography. In the present study, a data set of 100 craniocaudal mammograms from 50 patients was used to evaluate our method. The average glandular rate and average glandular dose of the data set were 41.2% and 1.79 mGy, respectively. The error in calculating the individual glandular rate can be estimated to be less than ±3%. When the calculation error of the glandular rate is taken into consideration, the error in the individual average glandular dose can be estimated to be 13% or less. We feel that our method for determining the glandular rate from mammograms is useful for minimizing subjectivity in the evaluation of patient breast composition. (author)

  8. Time Series ARIMA Models of Undergraduate Grade Point Average.

    Science.gov (United States)

    Rogers, Bruce G.

    The Auto-Regressive Integrated Moving Average (ARIMA) models, often referred to as Box-Jenkins models, are regression methods for analyzing sequentially dependent observations with large amounts of data. The Box-Jenkins approach, a three-stage procedure consisting of identification, estimation and diagnosis, was used to select the most appropriate…
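
    The three Box-Jenkins stages map directly onto a few lines of statsmodels; a sketch with an illustrative series and order (identification would normally be guided by ACF/PACF plots):

        import numpy as np
        from statsmodels.tsa.arima.model import ARIMA
        from statsmodels.stats.diagnostic import acorr_ljungbox

        y = np.cumsum(np.random.randn(120))          # stand-in for a GPA series
        fit = ARIMA(y, order=(1, 1, 1)).fit()        # estimation stage
        diag = acorr_ljungbox(fit.resid, lags=[10])  # diagnosis: white residuals?
        print(fit.summary(), diag)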

  9. Bayesian Model Averaging of Artificial Intelligence Models for Hydraulic Conductivity Estimation

    Science.gov (United States)

    Nadiri, A.; Chitsazan, N.; Tsai, F. T.; Asghari Moghaddam, A.

    2012-12-01

    This research presents a Bayesian artificial intelligence model averaging (BAIMA) method that incorporates multiple artificial intelligence (AI) models to estimate hydraulic conductivity and evaluate estimation uncertainties. Uncertainty in the AI model outputs stems from error in model input as well as non-uniqueness in selecting different AI methods. Using one single AI model tends to bias the estimation and underestimate uncertainty. BAIMA employs Bayesian model averaging (BMA) technique to address the issue of using one single AI model for estimation. BAIMA estimates hydraulic conductivity by averaging the outputs of AI models according to their model weights. In this study, the model weights were determined using the Bayesian information criterion (BIC) that follows the parsimony principle. BAIMA calculates the within-model variances to account for uncertainty propagation from input data to AI model output. Between-model variances are evaluated to account for uncertainty due to model non-uniqueness. We employed Takagi-Sugeno fuzzy logic (TS-FL), artificial neural network (ANN) and neurofuzzy (NF) to estimate hydraulic conductivity for the Tasuj plain aquifer, Iran. BAIMA combined three AI models and produced better fitting than individual models. While NF was expected to be the best AI model owing to its utilization of both TS-FL and ANN models, the NF model is nearly discarded by the parsimony principle. The TS-FL model and the ANN model showed equal importance although their hydraulic conductivity estimates were quite different. This resulted in significant between-model variances that are normally ignored by using one AI model.
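
    The averaging scheme the record describes reduces to BIC-based weights plus the usual within/between variance split; a sketch with hypothetical per-model outputs:

        import numpy as np

        def bma(means, within_vars, bics):
            # Weights follow the BIC approximation to posterior model odds.
            w = np.exp(-0.5 * (np.asarray(bics) - min(bics)))
            w /= w.sum()
            mean = w @ means
            within = w @ within_vars              # propagated input error
            between = w @ (means - mean) ** 2     # model non-uniqueness
            return mean, within + between, w

        # Hypothetical hydraulic-conductivity estimates from three AI models.
        K = np.array([2.1, 2.6, 2.4]); v = np.array([0.04, 0.06, 0.05])
        K_bma, var_bma, w = bma(K, v, [110.0, 112.5, 118.0])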

  10. The Moving Average Convergence-Divergence as a Tool for Investment Decisions in the Stock Market

    Directory of Open Access Journals (Sweden)

    Rodrigo Silva Vidotto

    2009-04-01

    The increase in the number of investors at Bovespa since 2000 is due to stabilized inflation and falling interest rates. The use of tools that assist investors in selling and buying stocks is very important in a competitive and risky market. The technical analysis of stocks is used to search for trends in the movements of share prices and therefore indicate a suitable moment to buy or sell stocks. Among these technical indicators is the Moving Average Convergence-Divergence [MACD], which uses the concept of the moving average in its equation and is considered by financial analysts a simple tool to operate and analyze. This article aims to assess the effectiveness of the use of the MACD to indicate the moment to purchase and sell stocks in five companies – selected at random from the ninety companies in the Bovespa New Market – and to analyze the profitability gained during 2006, taking as a reference the appreciation of the Ibovespa index in that year. The results show that the cumulative average return of the five companies was 26.7%, against a cumulative average return of 0.90% for the Ibovespa.
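
    MACD follows directly from its textbook definition: the difference between 12- and 26-period exponential moving averages, with a 9-period EMA of that difference as the signal line. A pandas sketch with a hypothetical price series:

        import pandas as pd

        def macd(price, fast=12, slow=26, signal=9):
            line = (price.ewm(span=fast, adjust=False).mean()
                    - price.ewm(span=slow, adjust=False).mean())
            sig = line.ewm(span=signal, adjust=False).mean()
            return line, sig, line - sig          # histogram = line - signal

        prices = pd.Series([10.0, 10.2, 10.1, 10.5, 10.8, 10.6, 11.0, 11.3])
        macd_line, signal_line, hist = macd(prices)
        # A MACD line crossing above its signal line is read as a buy cue.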

  11. On Averaging Rotations

    DEFF Research Database (Denmark)

    Gramkow, Claus

    2001-01-01

    In this paper two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong to a non-linear manifold. It is shown that the two barycenter methods can be derived as approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation.

  12. Forecasting Construction Tender Price Index in Ghana using Autoregressive Integrated Moving Average with Exogenous Variables Model

    Directory of Open Access Journals (Sweden)

    Ernest Kissi

    2018-03-01

    Prices of construction resources keep fluctuating due to the unstable economic situations experienced over the years. Clients' knowledge of their financial commitments toward their intended project remains the basis for their final decision. The use of a construction tender price index provides a realistic estimate at the early stage of the project. The tender price index (TPI) is influenced by various economic factors; hence, several statistical techniques have been employed in forecasting it. These include regression, time series, and vector error correction models, among others. However, in recent times the integrated modelling approach has been gaining popularity due to its strong predictive accuracy. Thus, in line with this assumption, the aim of this study is to apply an autoregressive integrated moving average with exogenous variables (ARIMAX) model to TPI. The results showed that the ARIMAX model has a better predictive ability than a single-model approach. The study further confirms the earlier position of previous research on the need to use integrated model techniques in forecasting TPI. This model will assist practitioners to forecast future values of the tender price index. Although the study focuses on the Ghanaian economy, the findings can be broadly applicable to other developing countries which share similar economic characteristics.
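
    ARIMAX is simply ARIMA with exogenous regressors, which is how the economic drivers enter the TPI model; a statsmodels sketch (series, regressors and order are illustrative only):

        import numpy as np
        from statsmodels.tsa.arima.model import ARIMA

        n = 80
        exog = np.column_stack([np.random.randn(n),    # e.g. inflation proxy
                                np.random.randn(n)])   # e.g. interest-rate proxy
        tpi = np.cumsum(0.5 + exog @ [0.8, -0.3] + np.random.randn(n))
        fit = ARIMA(tpi, exog=exog, order=(1, 1, 0)).fit()
        # Forecasting requires assumed future values of the exogenous drivers.
        fc = fit.forecast(steps=4, exog=np.zeros((4, 2)))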

  13. Estimation of pure moving average vector models | Usoro ...

    African Journals Online (AJOL)

    International Journal of Natural and Applied Sciences, Vol 3, No 3 (2007).
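
    As general background to the record's title: a pure vector moving average model is a VARMA model with autoregressive order zero; a minimal statsmodels sketch estimating a VMA(1) on simulated data:

        import numpy as np
        from statsmodels.tsa.statespace.varmax import VARMAX

        n, theta = 300, np.array([[0.6, 0.2], [-0.1, 0.4]])
        eps = np.random.randn(n, 2)
        y = eps.copy()
        y[1:] += eps[:-1] @ theta.T                     # bivariate VMA(1)
        fit = VARMAX(y, order=(0, 1)).fit(disp=False)   # p = 0, q = 1
        print(fit.summary())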

  14. Motion as a perturbation: Measurement-guided dose estimates to moving patient voxels during modulated arc deliveries

    Energy Technology Data Exchange (ETDEWEB)

    Feygelman, Vladimir; Zhang, Geoffrey; Hunt, Dylan; Opp, Daniel [Department of Radiation Oncology, Moffitt Cancer Center, Tampa, Florida 33612 (United States); Stambaugh, Cassandra [Department of Physics, University of South Florida, Tampa, Florida 33612 (United States); Wolf, Theresa K. [Live Oak Technologies LLC, Kirkwood, Missouri 63122 (United States); Nelms, Benjamin E. [Canis Lupus LLC, Merrimac, Wisconsin 53561 (United States)

    2013-02-15

    Purpose: To present a framework for measurement-guided VMAT dose reconstruction to moving patient voxels from a known motion kernel and the static phantom data, and to validate this perturbation-based approach with the proof-of-principle experiments. Methods: As described previously, the VMAT 3D dose to a static patient can be estimated by applying a phantom measurement-guided perturbation to the treatment planning system (TPS)-calculated dose grid. The fraction dose to any voxel in the presence of motion, assuming the motion kernel is known, can be derived in a similar fashion by applying a measurement-guided motion perturbation. The dose to the diodes in a helical phantom is recorded at 50 ms intervals and is transformed into a series of time-resolved high-density volumetric dose grids. A moving voxel is propagated through this 4D dose space and the fraction dose to that voxel in the phantom is accumulated. The ratio of this motion-perturbed, reconstructed dose to the TPS dose in the phantom serves as a perturbation factor, applied to the TPS fraction dose to the similarly situated voxel in the patient. This approach was validated by the ion chamber and film measurements on four phantoms of different shape and structure: homogeneous and inhomogeneous cylinders, a homogeneous cube, and an anthropomorphic thoracic phantom. A 2D motion stage was used to simulate the motion. The stage position was synchronized with the beam start time with the respiratory gating simulator. The motion patterns were designed such that the motion speed was in the upper range of the expected tumor motion (1-1.4 cm/s) and the range exceeded the normally observed limits (up to 5.7 cm). The conformal arc plans for X or Y motion (in the IEC 61217 coordinate system) consisted of manually created narrow (3 cm) rectangular strips moving in-phase (tracking) or phase-shifted by 90° (crossing) with respect to the phantom motion. The XY motion was tested with the computer-derived VMAT

  15. Motion as a perturbation: Measurement-guided dose estimates to moving patient voxels during modulated arc deliveries

    International Nuclear Information System (INIS)

    Feygelman, Vladimir; Zhang, Geoffrey; Hunt, Dylan; Opp, Daniel; Stambaugh, Cassandra; Wolf, Theresa K.; Nelms, Benjamin E.

    2013-01-01

    Purpose: To present a framework for measurement-guided VMAT dose reconstruction to moving patient voxels from a known motion kernel and the static phantom data, and to validate this perturbation-based approach with the proof-of-principle experiments. Methods: As described previously, the VMAT 3D dose to a static patient can be estimated by applying a phantom measurement-guided perturbation to the treatment planning system (TPS)-calculated dose grid. The fraction dose to any voxel in the presence of motion, assuming the motion kernel is known, can be derived in a similar fashion by applying a measurement-guided motion perturbation. The dose to the diodes in a helical phantom is recorded at 50 ms intervals and is transformed into a series of time-resolved high-density volumetric dose grids. A moving voxel is propagated through this 4D dose space and the fraction dose to that voxel in the phantom is accumulated. The ratio of this motion-perturbed, reconstructed dose to the TPS dose in the phantom serves as a perturbation factor, applied to the TPS fraction dose to the similarly situated voxel in the patient. This approach was validated by the ion chamber and film measurements on four phantoms of different shape and structure: homogeneous and inhomogeneous cylinders, a homogeneous cube, and an anthropomorphic thoracic phantom. A 2D motion stage was used to simulate the motion. The stage position was synchronized with the beam start time with the respiratory gating simulator. The motion patterns were designed such that the motion speed was in the upper range of the expected tumor motion (1–1.4 cm/s) and the range exceeded the normally observed limits (up to 5.7 cm). The conformal arc plans for X or Y motion (in the IEC 61217 coordinate system) consisted of manually created narrow (3 cm) rectangular strips moving in-phase (tracking) or phase-shifted by 90° (crossing) with respect to the phantom motion. The XY motion was tested with the computer-derived VMAT MLC

  16. Machine-Learning Based Channel Quality and Stability Estimation for Stream-Based Multichannel Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Waqas Rehan

    2016-09-01

    Wireless sensor networks (WSNs) have become more and more diversified and are today able to also support high data rate applications, such as multimedia. In this case, per-packet channel handshaking/switching may induce additional overheads, such as energy consumption, delays and, therefore, data loss. One of the solutions is to perform stream-based channel allocation, where channel handshaking is performed once before transmitting the whole data stream. Deciding stream-based channel allocation is more critical in the case of multichannel WSNs, where channels of different quality/stability are available and the wish for high performance requires sensor nodes to switch to the best among the available channels. In this work, we focus on devising mechanisms that perform channel quality/stability estimation in order to improve the accommodation of stream-based communication in multichannel wireless sensor networks. For performing channel quality assessment, we have formulated a composite metric, which we call channel rank measurement (CRM), that can demarcate channels into good, intermediate and bad quality on the basis of the standard deviation of the received signal strength indicator (RSSI) and the average of the link quality indicator (LQI) of the received packets. CRM is then used to generate a data set for training a supervised machine learning-based algorithm (which we call the Normal Equation based Channel quality prediction (NEC) algorithm) in such a way that it may perform instantaneous channel rank estimation of any channel. Subsequently, two robust extensions of the NEC algorithm are proposed (which we call the Normal Equation based Weighted Moving Average Channel quality prediction (NEWMAC) algorithm and the Normal Equation based Aggregate Maturity Criteria with Beta Tracking based Channel weight prediction (NEAMCBTC) algorithm), that can perform channel quality estimation on the basis of both current and past values of channel rank estimation

  17. Machine-Learning Based Channel Quality and Stability Estimation for Stream-Based Multichannel Wireless Sensor Networks.

    Science.gov (United States)

    Rehan, Waqas; Fischer, Stefan; Rehan, Maaz

    2016-09-12

    Wireless sensor networks (WSNs) have become more and more diversified and are today able to also support high data rate applications, such as multimedia. In this case, per-packet channel handshaking/switching may result in inducing additional overheads, such as energy consumption, delays and, therefore, data loss. One of the solutions is to perform stream-based channel allocation where channel handshaking is performed once before transmitting the whole data stream. Deciding stream-based channel allocation is more critical in case of multichannel WSNs where channels of different quality/stability are available and the wish for high performance requires sensor nodes to switch to the best among the available channels. In this work, we will focus on devising mechanisms that perform channel quality/stability estimation in order to improve the accommodation of stream-based communication in multichannel wireless sensor networks. For performing channel quality assessment, we have formulated a composite metric, which we call channel rank measurement (CRM), that can demarcate channels into good, intermediate and bad quality on the basis of the standard deviation of the received signal strength indicator (RSSI) and the average of the link quality indicator (LQI) of the received packets. CRM is then used to generate a data set for training a supervised machine learning-based algorithm (which we call Normal Equation based Channel quality prediction (NEC) algorithm) in such a way that it may perform instantaneous channel rank estimation of any channel. Subsequently, two robust extensions of the NEC algorithm are proposed (which we call Normal Equation based Weighted Moving Average Channel quality prediction (NEWMAC) algorithm and Normal Equation based Aggregate Maturity Criteria with Beta Tracking based Channel weight prediction (NEAMCBTC) algorithm), that can perform channel quality estimation on the basis of both current and past values of channel rank estimation. In the end
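
    A sketch of the two building blocks, with the caveat that the exact CRM combination is not spelled out in the record, so the score below is an assumption; the "normal equation" step is just closed-form linear regression:

        import numpy as np

        def channel_rank(rssi, lqi):
            # Assumed score: steadier RSSI and higher LQI -> better channel.
            return np.mean(lqi) - np.std(rssi)

        def normal_equation(X, y):
            # theta = (X^T X)^{-1} X^T y, with an intercept column added.
            X1 = np.column_stack([np.ones(len(X)), X])
            return np.linalg.solve(X1.T @ X1, X1.T @ y)

        X = np.random.randn(100, 2)                       # channel features
        y = X @ [1.5, -0.7] + 0.1 * np.random.randn(100)  # observed ranks
        theta = normal_equation(X, y)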

  18. Shape and depth determinations from second moving average residual self-potential anomalies

    International Nuclear Information System (INIS)

    Abdelrahman, E M; El-Araby, T M; Essa, K S

    2009-01-01

    We have developed a semi-automatic method to determine the depth and shape (shape factor) of a buried structure from second moving average residual self-potential anomalies obtained from observed data using filters of successive window lengths. The method involves using a relationship between the depth and the shape factor of the source and a combination of windowed observations. The relationship represents a parametric family of curves (window curves). For a fixed window length, the depth is determined for each shape factor. The computed depths are plotted against the shape factors, representing a continuous monotonically increasing curve. The solution for the shape and depth is read at the common intersection of the window curves. The validity of the method is tested on a synthetic example with and without random errors and on two field examples from Turkey and Germany. In all cases examined, the depth and the shape solutions obtained are in very good agreement with the true ones.

  19. Average equilibrium charge state of 278113 ions moving in a helium gas

    International Nuclear Information System (INIS)

    Kaji, D.; Morita, K.; Morimoto, K.

    2005-01-01

    The difficulty in identifying a new heavy element comes from the small production cross section. For example, the production cross section was about 0.5 pb in the case of the search for the 112th element produced by the cold fusion reaction 208Pb(70Zn,n)277112. In order to identify elements heavier than element 112, an experimental apparatus with a sensitivity at the sub-picobarn level is essential. A gas-filled recoil separator, in general, has a large collection efficiency compared with other recoil separators, as seen from its operation principle. One of the most important parameters for a gas-filled recoil separator is the average equilibrium charge state q_ave of ions moving in the gas used. This is because the recoil ion cannot be properly transported to the focal plane of the separator if the q_ave of the element of interest in the gas is unknown. We have systematically measured equilibrium charge state distributions of heavy ions (169Tm, 208Pb, 193,209Bi, 196Po, 200At, 203,204Fr, 212Ac, 234Bk, 245Fm, 254No, 255Lr, and 265Hs) moving in a helium gas by using the gas-filled recoil separator GARIS at RIKEN. An empirical formula for q_ave of heavy ions in a helium gas was then derived as a function of the velocity and the atomic number of the ion, on the basis of the Thomas-Fermi model of the atom. The formula was found to be applicable to the search for the transactinide nuclides 271Ds, 272Rg, and 277112 produced by cold fusion reactions. Using the formula for q_ave, we searched for a new isotope of element 113 produced by the cold fusion reaction 209Bi(70Zn,n)278113. As a result, a decay chain due to an evaporation residue of 278113 was observed. Recently, we have successfully observed the second decay chain due to an evaporation residue of 278113. In this report, we will present the experimental results in detail, and will also discuss the average equilibrium charge state of 278113 in a helium gas by

  20. FrFT-CSWSF: Estimating cross-range velocities of ground moving targets using multistatic synthetic aperture radar

    Directory of Open Access Journals (Sweden)

    Li Chenlei

    2014-10-01

    Estimating cross-range velocity is a challenging task for space-borne synthetic aperture radar (SAR), and it is important for ground moving target indication (GMTI). Because the velocity of a target is very small compared with that of the satellite, it is difficult to estimate it correctly using a conventional monostatic platform algorithm. To overcome this problem, a novel method employing multistatic SAR is presented in this letter. The proposed hybrid method, which is based on an extended space-time model (ESTIM) of the azimuth signal, has two steps: first, a set of finite impulse response (FIR) filter banks based on a fractional Fourier transform (FrFT) is used to separate multiple targets within a range gate; second, a cross-correlation spectrum weighted subspace fitting (CSWSF) algorithm is applied to each of the separated signals in order to estimate their respective parameters. As verified through computer simulation with the Cartwheel, Pendulum and Helix constellations, this proposed time-frequency-subspace method effectively improves the estimation precision of the cross-range velocities of multiple targets.

  1. Error estimates in horocycle averages asymptotics: challenges from string theory

    NARCIS (Netherlands)

    Cardella, M.A.

    2010-01-01

    For modular functions of rapid decay, a classical result connects the error estimate in their long horocycle average asymptotics to the Riemann hypothesis. We study similar asymptotics for modular functions with less mild growth conditions, such as polynomial growth and exponential growth

  2. The association between estimated average glucose levels and fasting plasma glucose levels

    Directory of Open Access Journals (Sweden)

    Giray Bozkaya

    2010-01-01

    OBJECTIVE: The level of hemoglobin A1c (HbA1c), also known as glycated hemoglobin, determines how well a patient's blood glucose level has been controlled over the previous 8-12 weeks. HbA1c levels help patients and doctors understand whether a particular diabetes treatment is working and whether adjustments need to be made to the treatment. Because the HbA1c level is a marker of blood glucose for the previous 120 days, average blood glucose levels can be estimated using HbA1c levels. Our aim in the present study was to investigate the relationship between estimated average glucose levels, as calculated from HbA1c levels, and fasting plasma glucose levels. METHODS: The fasting plasma glucose levels of 3891 diabetic patient samples (1497 male, 2394 female) were obtained from the laboratory information system used for HbA1c testing by the Department of Internal Medicine at the Izmir Bozyaka Training and Research Hospital in Turkey. These samples were selected from patient samples that had hemoglobin levels between 12 and 16 g/dL. The estimated average glucose levels were calculated using the following formula: eAG (mg/dL) = 28.7 x HbA1c - 46.7. Glucose and HbA1c levels were determined using hexokinase and high performance liquid chromatography (HPLC) methods, respectively. RESULTS: A strong positive correlation between fasting plasma glucose levels and estimated average blood glucose levels (r=0.757, p<0.05) was observed, and this correlation was statistically significant. CONCLUSION: Reporting the estimated average glucose level together with the HbA1c level is believed to assist patients and doctors in determining the effectiveness of blood glucose control measures.
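
    The conversion used in the study is a single linear formula, so a one-line helper suffices:

        def estimated_average_glucose(hba1c_percent):
            # eAG (mg/dL) = 28.7 * HbA1c (%) - 46.7, as used in the study.
            return 28.7 * hba1c_percent - 46.7

        estimated_average_glucose(7.0)   # -> 154.2 mg/dL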

  3. Estimation of pulses in ultrasound B-scan images

    DEFF Research Database (Denmark)

    Jensen, Jørgen Arendt

    1991-01-01

    It is shown, based on an expression for the received pressure field in pulsed medical ultrasound systems, that a common one-dimensional pulse can be estimated from individual A-lines. An autoregressive moving average (ARMA) model is suggested for the pulse, and an estimator based on the prediction error method is derived. The estimator is used on a segment of an A-line, assuming that the pulse does not change significantly inside the segment. Several examples of the use of the estimator are given, on synthetic data, on data measured from a tissue phantom, and on in vitro data measured from a calf's liver. They show that a pulse can be estimated even at moderate signal-to-noise ratios...

  4. Output-feedback control of combined sewer networks through receding horizon control with moving horizon estimation

    Science.gov (United States)

    Joseph-Duran, Bernat; Ocampo-Martinez, Carlos; Cembrano, Gabriela

    2015-10-01

    An output-feedback control strategy for pollution mitigation in combined sewer networks is presented. The proposed strategy provides means to apply model-based predictive control to large-scale sewer networks, in spite of the lack of measurements in most of the network's sewers. In previous works, the authors presented a hybrid linear control-oriented model for sewer networks, together with the formulation of Optimal Control Problems (OCP) and State Estimation Problems (SEP). By iteratively solving these problems, preliminary Receding Horizon Control with Moving Horizon Estimation (RHC/MHE) results, based on flow measurements, were also obtained. In this work, the RHC/MHE algorithm has been extended to take into account both flow and water level measurements, and the resulting control loop has been extensively simulated to assess the system performance under different measurement availability scenarios and rain events. All simulations were carried out using a detailed physically based model of a real case-study network as a virtual reality.

  5. Estimating the average grain size of metals - approved standard 1969

    International Nuclear Information System (INIS)

    Anon.

    1975-01-01

    These methods cover procedures for estimating, and rules for expressing, the average grain size of all metals consisting entirely, or principally, of a single phase. The methods may also be used for any structures having appearances similar to those of the metallic structures shown in the comparison charts. The three basic procedures discussed for grain size estimation are the comparison procedure, the intercept (or Heyn) procedure, and the planimetric (or Jeffries) procedure. For specimens consisting of equiaxed grains, the method of comparing the specimen with a standard chart is most convenient and is sufficiently accurate for most commercial purposes. For high degrees of accuracy in estimating grain size, the intercept or planimetric procedures may be used.
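
    The intercept (Heyn) procedure reduces to counting grain-boundary intersections along test lines; a sketch of the arithmetic, where the ASTM E112-style conversion constant is an assumption of this example, not quoted from the standard:

        import math

        def mean_intercept_mm(line_length_mm, intercepts, magnification):
            # Mean lineal intercept at true scale.
            return line_length_mm / (intercepts * magnification)

        def grain_size_number(l_mm):
            # Assumed ASTM E112-style relation: G = -6.6439*log10(l) - 3.288.
            return -6.6439 * math.log10(l_mm) - 3.288

        l = mean_intercept_mm(500.0, 62, magnification=100)   # ~0.081 mm
        G = grain_size_number(l)                              # ~4.0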

  6. Comparison of Two Methods for Estimating the Sampling-Related Uncertainty of Satellite Rainfall Averages Based on a Large Radar Data Set

    Science.gov (United States)

    Lau, William K. M. (Technical Monitor); Bell, Thomas L.; Steiner, Matthias; Zhang, Yu; Wood, Eric F.

    2002-01-01

    The uncertainty of rainfall estimated from averages of discrete samples collected by a satellite is assessed using a multi-year radar data set covering a large portion of the United States. The sampling-related uncertainty of rainfall estimates is evaluated for all combinations of 100 km, 200 km, and 500 km space domains, 1 day, 5 day, and 30 day rainfall accumulations, and regular sampling time intervals of 1 h, 3 h, 6 h, 8 h, and 12 h. These extensive analyses are combined to characterize the sampling uncertainty as a function of space and time domain, sampling frequency, and rainfall characteristics by means of a simple scaling law. Moreover, it is shown that both parametric and non-parametric statistical techniques of estimating the sampling uncertainty produce comparable results. Sampling uncertainty estimates, however, do depend on the choice of technique for obtaining them. They can also vary considerably from case to case, reflecting the great variability of natural rainfall, and should therefore be expressed in probabilistic terms. Rainfall calibration errors are shown to affect comparison of results obtained by studies based on data from different climate regions and/or observation platforms.
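
    The non-parametric flavor of the analysis can be mimicked on any dense rain record: subsample it at the satellite revisit interval over all sampling phases and take the spread of the subsampled means. A toy sketch with synthetic hourly data:

        import numpy as np

        rain = np.random.gamma(0.3, 2.0, size=30 * 24)   # hourly rain, 30 days
        dt = 12                                          # one visit every 12 h
        sub_means = [rain[phase::dt].mean() for phase in range(dt)]
        sampling_sd = np.std(sub_means)                  # spread of estimates
        relative_error = sampling_sd / rain.mean()       # cf. the scaling law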

  7. FPGA based computation of average neutron flux and e-folding period for start-up range of reactors

    International Nuclear Information System (INIS)

    Ram, Rajit; Borkar, S.P.; Dixit, M.Y.; Das, Debashis

    2013-01-01

    Pulse processing instrumentation channels used for reactor applications play a vital role in ensuring nuclear safety in the startup range of reactor operation and also during fuel loading and the first approach to criticality. These channels are intended for continuous run-time computation of the equivalent reactor core neutron flux and the e-folding period. This paper focuses only on the computational part of these instrumentation channels, which is implemented in a single FPGA using a 32-bit floating point arithmetic engine. The computations of average count rate, log of average count rate, log rate and reactor period are done in VHDL using a digital circuit realization approach. The computation of the average count rate uses a fully adaptive window size moving average method, while a Taylor series expansion for logarithms is implemented in the FPGA to compute the log of the count rate, the log rate and the reactor e-folding period. This paper describes the block diagrams of the digital logic realization in the FPGA and the advantage of the fully adaptive window size moving average technique over the conventional fixed window size technique for the pulse processing of reactor instrumentation. (author)

  8. Hybrid support vector regression and autoregressive integrated moving average models improved by particle swarm optimization for property crime rates forecasting with economic indicators.

    Science.gov (United States)

    Alwee, Razana; Shamsuddin, Siti Mariyam Hj; Sallehuddin, Roselina

    2013-01-01

    Crime forecasting is an important area in the field of criminology. Linear models, such as regression and econometric models, are commonly applied in crime forecasting. However, in real crime data, it is common that the data consist of both linear and nonlinear components. A single model may not be sufficient to identify all the characteristics of the data. The purpose of this study is to introduce a hybrid model that combines support vector regression (SVR) and autoregressive integrated moving average (ARIMA) to be applied in crime rate forecasting. SVR is very robust with small training data and high-dimensional problems. Meanwhile, ARIMA has the ability to model several types of time series. However, the accuracy of the SVR model depends on the values of its parameters, while ARIMA is not robust when applied to small data sets. Therefore, to overcome this problem, particle swarm optimization is used to estimate the parameters of the SVR and ARIMA models. The proposed hybrid model is used to forecast the property crime rates of the United States based on economic indicators. The experimental results show that the proposed hybrid model is able to produce more accurate forecasting results compared to the individual models.

  9. Hybrid Support Vector Regression and Autoregressive Integrated Moving Average Models Improved by Particle Swarm Optimization for Property Crime Rates Forecasting with Economic Indicators

    Directory of Open Access Journals (Sweden)

    Razana Alwee

    2013-01-01

    Crime forecasting is an important area in the field of criminology. Linear models, such as regression and econometric models, are commonly applied in crime forecasting. However, in real crime data, it is common that the data consist of both linear and nonlinear components. A single model may not be sufficient to identify all the characteristics of the data. The purpose of this study is to introduce a hybrid model that combines support vector regression (SVR) and autoregressive integrated moving average (ARIMA) to be applied in crime rate forecasting. SVR is very robust with small training data and high-dimensional problems. Meanwhile, ARIMA has the ability to model several types of time series. However, the accuracy of the SVR model depends on the values of its parameters, while ARIMA is not robust when applied to small data sets. Therefore, to overcome this problem, particle swarm optimization is used to estimate the parameters of the SVR and ARIMA models. The proposed hybrid model is used to forecast the property crime rates of the United States based on economic indicators. The experimental results show that the proposed hybrid model is able to produce more accurate forecasting results compared to the individual models.
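
    The hybrid decomposition is straightforward to sketch: ARIMA captures the linear component, and SVR is trained on the ARIMA residuals to capture the nonlinear remainder; for brevity, PSO parameter tuning is replaced here by a plain grid search (all series, orders and grids are illustrative):

        import numpy as np
        from sklearn.model_selection import GridSearchCV
        from sklearn.svm import SVR
        from statsmodels.tsa.arima.model import ARIMA

        y = np.cumsum(np.random.randn(120)) + np.sin(np.arange(120) / 6.0)
        arima = ARIMA(y, order=(1, 1, 1)).fit()          # linear component
        resid, lag = arima.resid, 3

        # Lagged-residual features for the nonlinear component.
        X = np.column_stack([resid[i:len(resid) - lag + i] for i in range(lag)])
        t = resid[lag:]
        svr = GridSearchCV(SVR(), {"C": [1, 10], "epsilon": [0.01, 0.1]})
        svr.fit(X, t)

        x_next = resid[-lag:].reshape(1, -1)             # latest lags
        y_hat = arima.forecast(1)[0] + svr.predict(x_next)[0]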

  10. Experimental validation of heterogeneity-corrected dose-volume prescription on respiratory-averaged CT images in stereotactic body radiotherapy for moving tumors

    International Nuclear Information System (INIS)

    Nakamura, Mitsuhiro; Miyabe, Yuki; Matsuo, Yukinori; Kamomae, Takeshi; Nakata, Manabu; Yano, Shinsuke; Sawada, Akira; Mizowaki, Takashi; Hiraoka, Masahiro

    2012-01-01

    The purpose of this study was to experimentally assess the validity of heterogeneity-corrected dose-volume prescription on respiratory-averaged computed tomography (RACT) images in stereotactic body radiotherapy (SBRT) for moving tumors. Four-dimensional computed tomography (CT) data were acquired while a dynamic anthropomorphic thorax phantom with a solitary target moved. Motion pattern was based on cos(t) with a constant respiration period of 4.0 sec along the longitudinal axis of the CT couch. The extent of motion (A1) was set in the range of 0.0–12.0 mm at 3.0-mm intervals. Treatment planning with the heterogeneity-corrected dose-volume prescription was designed on RACT images. A new commercially available Monte Carlo algorithm of well-commissioned 6-MV photon beam was used for dose calculation. Dosimetric effects of intrafractional tumor motion were then investigated experimentally under the same conditions as 4D CT simulation using the dynamic anthropomorphic thorax phantom, films, and an ionization chamber. The passing rate of γ index was 98.18%, with the criteria of 3 mm/3%. The dose error between the planned and the measured isocenter dose in moving condition was within ± 0.7%. From the dose area histograms on the film, the mean ± standard deviation of the dose covering 100% of the cross section of the target was 102.32 ± 1.20% (range, 100.59–103.49%). By contrast, the irradiated areas receiving more than 95% dose for A1 = 12 mm were 1.46 and 1.33 times larger than those for A1 = 0 mm in the coronal and sagittal planes, respectively. This phantom study demonstrated that the cross section of the target received 100% dose under moving conditions in both the coronal and sagittal planes, suggesting that the heterogeneity-corrected dose-volume prescription on RACT images is acceptable in SBRT for moving tumors.

  11. Benefits of Dominance over Additive Models for the Estimation of Average Effects in the Presence of Dominance

    Directory of Open Access Journals (Sweden)

    Pascal Duenk

    2017-10-01

    In quantitative genetics, the average effect at a single locus can be estimated by an additive (A) model or by an additive plus dominance (AD) model. In the presence of dominance, the AD-model is expected to be more accurate, because the A-model falsely assumes that residuals are independent and identically distributed. Our objective was to investigate the accuracy of an estimated average effect (α^) in the presence of dominance, using either a single-locus A-model or AD-model. Estimation was based on a finite sample from a large population in Hardy-Weinberg equilibrium (HWE), and the root mean squared error of α^ was calculated for several broad-sense heritabilities, sample sizes, and sizes of the dominance effect. Results show that with the A-model, both sampling deviations of genotype frequencies from HWE frequencies and sampling deviations of allele frequencies contributed to the error. With the AD-model, only sampling deviations of allele frequencies contributed to the error, provided that all three genotype classes were sampled. In the presence of dominance, the root mean squared error of α^ with the AD-model was always smaller than with the A-model, even when the heritability was less than one. Remarkably, in the absence of dominance, there was no disadvantage to fitting dominance. In conclusion, the AD-model yields more accurate estimates of average effects from a finite sample, because it is more robust against sampling deviations from HWE frequencies than the A-model. Genetic models that include dominance therefore yield higher accuracies of estimated average effects than purely additive models when dominance is present.
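
    The contrast in miniature, under the textbook single-locus parameterization (all values simulated and illustrative): regress the phenotype on allele count alone (A-model), or on count plus a heterozygote indicator (AD-model) and convert to the average effect via alpha = a + d(q - p):

        import numpy as np

        def ols(X, y):
            X1 = np.column_stack([np.ones(len(y)), X])
            return np.linalg.lstsq(X1, y, rcond=None)[0]

        p, n, a, d = 0.3, 500, 1.0, 0.6            # allele freq, size, effects
        x = np.random.binomial(2, p, n)            # genotype counts under HWE
        h = (x == 1).astype(float)                 # heterozygote indicator
        y = a * x + d * h + np.random.randn(n)     # phenotype with dominance

        alpha_A = ols(x, y)[1]                     # A-model slope ~ alpha
        b = ols(np.column_stack([x, h]), y)        # intercept, a_hat, d_hat
        p_hat = x.mean() / 2.0
        alpha_AD = b[1] + b[2] * (1.0 - 2.0 * p_hat)   # alpha = a + d(q - p)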

  12. SAR Ground Moving Target Indication Based on Relative Residue of DPCA Processing

    Directory of Open Access Journals (Sweden)

    Jia Xu

    2016-10-01

    For modern synthetic aperture radar (SAR), there are much more urgent demands on ground moving target indication (GMTI), which includes not only point moving targets like cars, trucks, or tanks but also distributed moving targets like river or ocean surfaces. Among the existing GMTI methods, displaced phase center antenna (DPCA) can effectively cancel the strong ground clutter and has been widely used. However, its detection performance is closely related to the target's signal-to-clutter ratio (SCR) as well as radial velocity, and it cannot effectively detect weak, large-sized river surfaces in strong ground clutter due to their low SCR caused by specular scattering. This paper proposes a novel method called relative residue of DPCA (RR-DPCA), which jointly utilizes the DPCA cancellation outputs and the multi-look images to improve the detection performance for weak river surfaces. Furthermore, based on statistical analysis of the RR-DPCA outputs on a homogeneous background, the cell averaging (CA) method can be readily applied for subsequent constant false alarm rate (CFAR) detection. The proposed RR-DPCA method can detect point moving targets and distributed moving targets simultaneously. Finally, results on both simulated and real data are provided to demonstrate the effectiveness of the proposed SAR/GMTI method.
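
    The core quantities can be sketched for two co-registered complex SAR channels: a DPCA residue normalized by a multi-look intensity image, followed by cell-averaging CFAR. The box-car multi-look, window sizes, and threshold below are illustrative assumptions, not the authors' implementation.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def rr_dpca_detect(ch1, ch2, look=5, guard=2, train=8, scale=5.0):
            """Relative residue of DPCA with cell-averaging CFAR (simplified sketch)."""
            residue = np.abs(ch1 - ch2)                         # DPCA clutter cancellation
            multilook = uniform_filter(np.abs(ch1), size=look)  # multi-look intensity image
            rr = residue / (multilook + 1e-12)                  # relative residue map
            # CA-CFAR: background level = mean of the training ring around each cell
            full = 2 * (guard + train) + 1
            inner = 2 * guard + 1
            ring_sum = (uniform_filter(rr, size=full) * full**2
                        - uniform_filter(rr, size=inner) * inner**2)
            background = ring_sum / (full**2 - inner**2)
            return rr > scale * background                      # detection mask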

  13. Maximum stress estimation model for multi-span waler beams with deflections at the supports using average strains.

    Science.gov (United States)

    Park, Sung Woo; Oh, Byung Kwan; Park, Hyo Seon

    2015-03-30

    The safety of a multi-span waler beam subjected simultaneously to a distributed load and deflections at its supports can be secured by limiting the maximum stress of the beam to a specific value to prevent the beam from reaching a limit state for failure or collapse. Despite the fact that the vast majority of accidents on construction sites occur at waler beams in retaining wall systems, no safety monitoring model that can consider deflections at the supports of the beam is available. In this paper, a maximum stress estimation model for a waler beam based on average strains measured from vibrating wire strain gauges (VWSGs), the most frequently used sensors in the construction field, is presented. The model is derived by defining the relationship between the maximum stress and the average strains measured from VWSGs. In addition to the maximum stress, support reactions, deflections at supports, and the magnitudes of distributed loads for the beam structure can be identified by the estimation model using the average strains. Using simulation tests on two multi-span beams, the performance of the model is evaluated by estimating maximum stress, deflections at supports, support reactions, and the magnitudes of distributed loads.

  14. Calculation of weighted averages approach for the estimation of Ping tolerance values

    Science.gov (United States)

    Silalom, S.; Carter, J.L.; Chantaramongkol, P.

    2010-01-01

    A biotic index was created and proposed as a tool to assess water quality in the Upper Mae Ping sub-watersheds. The Ping biotic index was calculated by utilizing Ping tolerance values. This paper presents the calculation of Ping tolerance values of the collected macroinvertebrates. Ping tolerance values were estimated by a weighted averages approach based on the abundance of macroinvertebrates and six chemical constituents that include conductivity, dissolved oxygen, biochemical oxygen demand, ammonia nitrogen, nitrate nitrogen and orthophosphate. Ping tolerance values range from 0 to 10. Macroinvertebrates assigned a 0 are very sensitive to organic pollution while macroinvertebrates assigned 10 are highly tolerant to pollution.
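
    Generically, the weighted-averages approach computes each taxon's optimum as the abundance-weighted mean of a gradient value across sites and rescales the optima to the 0–10 range. The sketch below simplifies to a single constituent (the study combines six), and all names and numbers are hypothetical.

        import numpy as np

        def ping_tolerance_values(abundance, gradient):
            """abundance: (n_taxa x n_sites) counts; gradient: per-site pollution measure."""
            optima = abundance @ gradient / abundance.sum(axis=1)  # abundance-weighted averages
            lo, hi = optima.min(), optima.max()
            return 10.0 * (optima - lo) / (hi - lo)                # rescale to the 0-10 range

        # toy usage: three taxa sampled at four sites, BOD as the single constituent
        ab = np.array([[10, 5, 0, 0], [2, 8, 6, 1], [0, 1, 7, 9]])
        bod = np.array([1.0, 2.5, 4.0, 6.0])
        print(ping_tolerance_values(ab, bod))   # sensitive taxa score near 0, tolerant near 10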

  15. Reduced order ARMA spectral estimation of ocean waves

    Digital Repository Service at National Institute of Oceanography (India)

    Mandal, S.; Witz, J.A.; Lyons, G.J.

    After selecting the initial model order based on the Akaike Information Criterion method, a novel model order reduction technique is applied to obtain the final reduced order ARMA model. First estimates of the higher order autoregressive coefficients... of the reduced order ARMA model are obtained. The moving average part is determined based on partial fraction and recursive methods. The above system identification models and model order reduction technique are shown here to be successfully applied...

  16. LARF: Instrumental Variable Estimation of Causal Effects through Local Average Response Functions

    Directory of Open Access Journals (Sweden)

    Weihua An

    2016-07-01

    LARF is an R package that provides instrumental variable estimation of treatment effects when both the endogenous treatment and its instrument (i.e., the treatment inducement) are binary. The method (Abadie 2003) involves two steps. First, pseudo-weights are constructed from the probability of receiving the treatment inducement. By default LARF estimates the probability by a probit regression. It also provides semiparametric power series estimation of the probability and allows users to employ other external methods to estimate the probability. Second, the pseudo-weights are used to estimate the local average response function conditional on treatment and covariates. LARF provides both least squares and maximum likelihood estimates of the conditional treatment effects.
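
    Although LARF itself is an R package, the two-step logic of Abadie (2003) is compact enough to sketch in Python: a probit for the inducement probability, kappa pseudo-weights, then weighted least squares for the local average response function. Variable names are ours; this is an outline of the method, not a substitute for the package.

        import numpy as np
        import statsmodels.api as sm

        def larf_sketch(y, D, Z, X):
            """y: outcome, D: binary treatment, Z: binary instrument, X: covariates (n x k)."""
            Xc = sm.add_constant(X)
            pi = sm.Probit(Z, Xc).fit(disp=0).predict(Xc)           # P(Z=1|X) via probit
            kappa = 1 - D * (1 - Z) / (1 - pi) - (1 - D) * Z / pi   # Abadie pseudo-weights
            W = np.column_stack([np.ones_like(D), D, X])            # intercept, treatment, covariates
            beta = np.linalg.solve(W.T @ (kappa[:, None] * W), W.T @ (kappa * y))
            return beta   # beta[1] is the conditional treatment effect (least squares version)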

  17. Focused information criterion and model averaging based on weighted composite quantile regression

    KAUST Repository

    Xu, Ganggang

    2013-08-13

    We study the focused information criterion and frequentist model averaging and their application to post-model-selection inference for weighted composite quantile regression (WCQR) in the context of the additive partial linear models. With the non-parametric functions approximated by polynomial splines, we show that, under certain conditions, the asymptotic distribution of the frequentist model averaging WCQR-estimator of a focused parameter is a non-linear mixture of normal distributions. This asymptotic distribution is used to construct confidence intervals that achieve the nominal coverage probability. With properly chosen weights, the focused information criterion based WCQR estimators are not only robust to outliers and non-normal residuals but also can achieve efficiency close to the maximum likelihood estimator, without assuming the true error distribution. Simulation studies and a real data analysis are used to illustrate the effectiveness of the proposed procedure. © 2013 Board of the Foundation of the Scandinavian Journal of Statistics.

  18. The GAAS Metagenomic Tool and Its Estimations of Viral and Microbial Average Genome Size in Four Major Biomes

    OpenAIRE

    Angly, Florent E.; Willner, Dana; Prieto-Davó, Alejandra; Edwards, Robert A.; Schmieder, Robert; Vega-Thurber, Rebecca; Antonopoulos, Dionysios A.; Barott, Katie; Cottrell, Matthew T.; Desnues, Christelle; Dinsdale, Elizabeth A.; Furlan, Mike; Haynes, Matthew; Henn, Matthew R.; Hu, Yongfei

    2009-01-01

    Metagenomic studies characterize both the composition and diversity of uncultured viral and microbial communities. BLAST-based comparisons have typically been used for such analyses; however, sampling biases, high percentages of unknown sequences, and the use of arbitrary thresholds to find significant similarities can decrease the accuracy and validity of estimates. Here, we present Genome relative Abundance and Average Size (GAAS), a complete software package that provides improved estimate...

  19. Maximum Stress Estimation Model for Multi-Span Waler Beams with Deflections at the Supports Using Average Strains

    Directory of Open Access Journals (Sweden)

    Sung Woo Park

    2015-03-01

    The safety of a multi-span waler beam subjected simultaneously to a distributed load and deflections at its supports can be secured by limiting the maximum stress of the beam to a specific value to prevent the beam from reaching a limit state for failure or collapse. Despite the fact that the vast majority of accidents on construction sites occur at waler beams in retaining wall systems, no safety monitoring model that can consider deflections at the supports of the beam is available. In this paper, a maximum stress estimation model for a waler beam based on average strains measured from vibrating wire strain gauges (VWSGs), the most frequently used sensors in the construction field, is presented. The model is derived by defining the relationship between the maximum stress and the average strains measured from VWSGs. In addition to the maximum stress, support reactions, deflections at supports, and the magnitudes of distributed loads for the beam structure can be identified by the estimation model using the average strains. Using simulation tests on two multi-span beams, the performance of the model is evaluated by estimating maximum stress, deflections at supports, support reactions, and the magnitudes of distributed loads.

  20. Protocol for the estimation of average indoor radon-daughter concentrations: Second edition

    International Nuclear Information System (INIS)

    Langner, G.H. Jr.; Pacer, J.C.

    1988-05-01

    The Technical Measurements Center has developed a protocol which specifies the procedures to be used for determining indoor radon-daughter concentrations in support of Department of Energy remedial action programs. This document is the central part of the protocol and is to be used in conjunction with the individual procedure manuals. The manuals contain the information and procedures required to implement the proven methods for estimating average indoor radon-daughter concentration. Proven in this case means that these methods have been determined to provide reasonable assurance that the average radon-daughter concentration within a structure is either above, at, or below the standards established for remedial action programs. This document contains descriptions of the generic aspects of methods used for estimating radon-daughter concentration and provides guidance with respect to method selection for a given situation. It is expected that the latter section of this document will be revised whenever another estimation method is proven to be capable of satisfying the criteria of reasonable assurance and cost minimization. 22 refs., 6 figs., 3 tabs

  1. Improved Multiscale Entropy Technique with Nearest-Neighbor Moving-Average Kernel for Nonlinear and Nonstationary Short-Time Biomedical Signal Analysis

    Directory of Open Access Journals (Sweden)

    S. P. Arunachalam

    2018-01-01

    Analysis of biomedical signals can yield invaluable information for prognosis, diagnosis, therapy evaluation, risk assessment, and disease prevention, but such signals are often recorded as short time series that challenge existing complexity classification algorithms such as Shannon entropy (SE) and other techniques. The purpose of this study was to improve the previously developed multiscale entropy (MSE) technique by incorporating a nearest-neighbor moving-average kernel, which can be used for analysis of nonlinear and nonstationary short time series physiological data. The approach was tested for robustness with respect to noise using simulated sinusoidal and ECG waveforms. The feasibility of MSE to discriminate between normal sinus rhythm (NSR) and atrial fibrillation (AF) was tested on a single-lead ECG. In addition, the MSE algorithm was applied to identify pivot points of rotors that were induced in ex vivo isolated rabbit hearts. The improved MSE technique robustly estimated the complexity of the signal compared to that of SE under various noises, discriminated NSR and AF on single-lead ECG, and precisely identified the pivot points of ex vivo rotors by providing better contrast between the rotor core and the peripheral region. The improved MSE technique can provide efficient complexity analysis of a variety of nonlinear and nonstationary short-time biomedical signals.
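
    Standard MSE coarse-grains the series at each scale and computes sample entropy on the result; the improvement described above replaces the non-overlapping mean with a moving-average-style kernel. The sketch below uses a plain moving average to stand in for the nearest-neighbor kernel (whose exact form is defined in the paper) and a naive O(N²) sample entropy.

        import numpy as np

        def sample_entropy(x, m=2, r_frac=0.2):
            """Naive sample entropy of a 1D series (tolerance r as a fraction of std)."""
            x = np.asarray(x, dtype=float)
            r = r_frac * np.std(x)
            def matches(mm):
                templ = np.lib.stride_tricks.sliding_window_view(x, mm)
                d = np.max(np.abs(templ[:, None] - templ[None, :]), axis=-1)
                return (np.sum(d <= r) - len(templ)) / 2      # pairs, excluding self-matches
            B, A = matches(m), matches(m + 1)
            return -np.log(A / B) if A > 0 and B > 0 else np.inf

        def mse_moving_average(x, scales=range(1, 11)):
            # moving-average coarse-graining instead of non-overlapping means
            return [sample_entropy(np.convolve(x, np.ones(s) / s, mode='valid'))
                    for s in scales]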

  2. Non-sky-averaged sensitivity curves for space-based gravitational-wave observatories

    International Nuclear Information System (INIS)

    Vallisneri, Michele; Galley, Chad R

    2012-01-01

    The signal-to-noise ratio (SNR) is used in gravitational-wave observations as the basic figure of merit for detection confidence and, together with the Fisher matrix, for the amount of physical information that can be extracted from a detected signal. SNRs are usually computed from a sensitivity curve, which describes the gravitational-wave amplitude needed by a monochromatic source of given frequency to achieve a threshold SNR. Although the term 'sensitivity' is used loosely to refer to the detector's noise spectral density, the two quantities are not the same: the sensitivity includes also the frequency- and orientation-dependent response of the detector to gravitational waves and takes into account the duration of observation. For interferometric space-based detectors similar to LISA, which are sensitive to long-lived signals and have constantly changing position and orientation, exact SNRs need to be computed on a source-by-source basis. For convenience, most authors prefer to work with sky-averaged sensitivities, accepting inaccurate SNRs for individual sources and giving up control over the statistical distribution of SNRs for source populations. In this paper, we describe a straightforward end-to-end recipe to compute the non-sky-averaged sensitivity of interferometric space-based detectors of any geometry. This recipe includes the effects of spacecraft motion and of seasonal variations in the partially subtracted confusion foreground from Galactic binaries, and it can be used to generate a sampling distribution of sensitivities for a given source population. In effect, we derive error bars for the sky-averaged sensitivity curve, which provide a stringent statistical interpretation for previously unqualified statements about sky-averaged SNRs. As a worked-out example, we consider isotropic and Galactic-disk populations of monochromatic sources, as observed with the 'classic LISA' configuration. We confirm that the (standard) inverse-rms average sensitivity

  3. Meteorological Research Institute multivariate ocean variational estimation (MOVE) system: Some early results

    Science.gov (United States)

    Usui, Norihisa; Ishizaki, Shiro; Fujii, Yosuke; Tsujino, Hiroyuki; Yasuda, Tamaki; Kamachi, Masafumi

    The Meteorological Research Institute multivariate ocean variational estimation (MOVE) system has been developed as the next-generation ocean data assimilation system at the Japan Meteorological Agency. A multivariate three-dimensional variational (3DVAR) analysis scheme with vertically coupled temperature–salinity empirical orthogonal function modes is adopted. The MOVE system has two varieties, the global (MOVE-G) and North Pacific (MOVE-NP) systems. The equatorial Pacific and western North Pacific are analyzed with assimilation experiments using MOVE-G and -NP, respectively. In each system, the salinity and velocity fields are well reproduced, even in cases without salinity data. Changes in surface and subsurface zonal currents during the 1997/98 El Niño event are captured well, and their transports are reasonably consistent with in situ observations. For example, the eastward transport in the upper layer around the equator is 70 Sv in spring 1997 and weakens in spring 1998. With MOVE-NP, the Kuroshio transport is 25 Sv in the East China Sea, and 40 Sv crossing the ASUKA (Affiliated Surveys of the Kuroshio off Cape Ashizuri) line south of Japan. The variations in the Kuroshio transports crossing the ASUKA line agree well with observations. The Ryukyu Current System has a transport ranging from 6 Sv east of Taiwan to 17 Sv east of Amami. The Oyashio transport crossing the OICE (Oyashio Intensive observation line off Cape Erimo) line south of Hokkaido is 14 Sv southwestward (nearshore) and 11 Sv northeastward (offshore). In the Kuroshio–Oyashio transition area east of Japan, the eastward transport is 41 Sv (32–36°N) and 12 Sv (36–39°N) crossing the 145°E line.

  4. NASA Software Cost Estimation Model: An Analogy Based Estimation Model

    Science.gov (United States)

    Hihn, Jairus; Juster, Leora; Menzies, Tim; Mathew, George; Johnson, James

    2015-01-01

    The cost estimation of software development activities is increasingly critical for large-scale integrated projects such as those at DOD and NASA, especially as the software systems become larger and more complex. As an example, MSL (Mars Science Laboratory), developed at the Jet Propulsion Laboratory, launched with over 2 million lines of code, making it the largest robotic spacecraft ever flown (based on the size of the software). Software development activities are also notorious for their cost growth, with NASA flight software averaging over 50% cost growth. All across the agency, estimators and analysts are increasingly being tasked to develop reliable cost estimates in support of program planning and execution. While there has been extensive work on improving parametric methods, there is very little focus on the use of models based on analogy and clustering algorithms. In this paper we summarize our findings on effort/cost model estimation and model development based on ten years of software effort estimation research using data mining and machine learning methods to develop estimation models based on analogy and clustering. The NASA Software Cost Model performance is evaluated by comparing it to COCOMO II, linear regression, and K-nearest neighbor prediction model performance on the same data set.
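
    In its simplest form, analogy-based estimation is k-nearest-neighbor prediction over historical projects. The sketch below uses made-up features (e.g., size, complexity); the actual NASA model's features, normalization, and clustering are more elaborate.

        import numpy as np

        def knn_effort(history_X, history_effort, project, k=3):
            """Predict effort as the mean of the k most similar historical projects."""
            mu, sd = history_X.mean(axis=0), history_X.std(axis=0)
            X = (history_X - mu) / sd              # normalize features
            q = (project - mu) / sd
            idx = np.argsort(np.linalg.norm(X - q, axis=1))[:k]   # k nearest analogies
            return history_effort[idx].mean()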

  5. 3D shape measurement of moving object with FFT-based spatial matching

    Science.gov (United States)

    Guo, Qinghua; Ruan, Yuxi; Xi, Jiangtao; Song, Limei; Zhu, Xinjun; Yu, Yanguang; Tong, Jun

    2018-03-01

    This work presents a new technique for 3D shape measurement of a moving object in translational motion, which finds applications in online inspection, quality control, etc. A low-complexity 1D fast Fourier transform (FFT)-based spatial matching approach is devised to obtain accurate object displacement estimates, and it is combined with single-shot fringe pattern profilometry (FPP) techniques to achieve high measurement performance with multiple captured images through coherent combining. The proposed technique overcomes some limitations of existing ones. Specifically, the placement of marks on the object surface and synchronization between projector and camera are not needed, the velocity of the moving object is not required to be constant, and there is no restriction on the movement trajectory. Both simulation and experimental results demonstrate the effectiveness of the proposed technique.
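
    The matching step rests on a standard identity: a spatial shift becomes a linear phase in the Fourier domain, so the peak of the FFT-based cross-correlation gives the displacement. A generic 1D sketch of this principle (not the authors' full pipeline):

        import numpy as np

        def estimate_shift(a, b):
            """Integer displacement s such that b[n] ~ a[n - s], via FFT cross-correlation."""
            A, B = np.fft.fft(a), np.fft.fft(b)
            xcorr = np.fft.ifft(np.conj(A) * B).real       # circular cross-correlation
            lag = int(np.argmax(xcorr))
            return lag if lag <= len(a) // 2 else lag - len(a)   # wrap to a signed shift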

  6. Areal rainfall estimation using moving cars - computer experiments including hydrological modeling

    OpenAIRE

    Rabiei, Ehsan; Haberlandt, Uwe; Sester, Monika; Fitzner, Daniel; Wallner, Markus

    2016-01-01

    The need for high temporal and spatial resolution precipitation data for hydrological analyses has been discussed in several studies. Although rain gauges provide valuable information, a very dense rain gauge network is costly. As a result, several new ideas have emerged to help estimate areal rainfall with higher temporal and spatial resolution. Rabiei et al. (2013) observed that moving cars, called RainCars (RCs), can potentially be a new source of data for measuring rainfall amounts...

  7. Actuator disk model of wind farms based on the rotor average wind speed

    DEFF Research Database (Denmark)

    Han, Xing Xing; Xu, Chang; Liu, De You

    2016-01-01

    Due to the difficulty of estimating the reference wind speed for wake modeling in wind farms, this paper proposes a new method to calculate the momentum source based on the rotor average wind speed. The proposed model applies a volume correction factor to reduce the influence of the mesh recognition of ...

  8. The application of moving average control charts for evaluating magnetic field quality on an individual magnet basis

    International Nuclear Information System (INIS)

    Pollock, D.A.; Gunst, R.F.; Schucany, W.R.

    1994-01-01

    SSC Collider Dipole Magnet field quality specifications define limits of variation for the population mean (systematic) and standard deviation (RMS deviation) of allowed and unallowed multipole coefficients generated by the full collection of dipole magnets throughout the Collider operating cycle. A fundamental quality control issue is how to determine the acceptability of individual magnets during production, in other words taken one at a time and compared to the population parameters. Provided that the normal distribution assumptions hold, the random variation of multipoles for individual magnets may be evaluated by comparing the measured results to a ±3 × RMS tolerance, centered on the design nominal. To evaluate the local and cumulative systematic variation of the magnets against the distribution tolerance, individual magnet results need to be combined with others that come before them. This paper demonstrates a statistical quality control method (the unweighted moving average control chart) to evaluate individual magnet performance and process stability against population tolerances. The DESY/HERA dipole cold skew quadrupole measurements for magnets in production order are used to evaluate non-stationarity of the mean over time for the cumulative set of magnets, as well as for a moving sample.
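
    The chart itself is simple: average the w most recent magnets and flag when that average leaves the systematic tolerance band. A generic sketch with placeholder tolerance values:

        import numpy as np

        def moving_average_chart(measurements, w=10, nominal=0.0, tol=1.0):
            """Return unweighted moving averages and out-of-control flags for a 1D series."""
            x = np.asarray(measurements, dtype=float)
            ma = np.convolve(x, np.ones(w) / w, mode='valid')   # average of the last w magnets
            flags = np.abs(ma - nominal) > tol                  # systematic-tolerance violation
            return ma, flags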

  9. Ultrasound image based visual servoing for moving target ablation by high intensity focused ultrasound.

    Science.gov (United States)

    Seo, Joonho; Koizumi, Norihiro; Mitsuishi, Mamoru; Sugita, Naohiko

    2017-12-01

    Although high intensity focused ultrasound (HIFU) is a promising technology for tumor treatment, a moving abdominal target is still a challenge in current HIFU systems. In particular, respiratory-induced organ motion can reduce the treatment efficiency and negatively influence the treatment result. In this research, we present: (1) a methodology for integration of ultrasound (US) image based visual servoing in a HIFU system; and (2) the experimental results obtained using the developed system. In the visual servoing system, target motion is monitored by biplane US imaging and tracked in real time (40 Hz) by registration with a preoperative 3D model. The distance between the target and the current HIFU focal position is calculated in every US frame and a three-axis robot physically compensates for differences. Because simultaneous HIFU irradiation disturbs US target imaging, a sophisticated interlacing strategy was constructed. In the experiments, respiratory-induced organ motion was simulated in a water tank with a linear actuator and kidney-shaped phantom model. Motion compensation with HIFU irradiation was applied to the moving phantom model. Based on the experimental results, visual servoing exhibited a motion compensation accuracy of 1.7 mm (RMS) on average. Moreover, the integrated system could make a spherical HIFU-ablated lesion in the desired position of the respiratory-moving phantom model. We have demonstrated the feasibility of our US image based visual servoing technique in a HIFU system for moving target treatment. © 2016 The Authors The International Journal of Medical Robotics and Computer Assisted Surgery Published by John Wiley & Sons Ltd.

  10. The effects of parameter estimation on minimizing the in-control average sample size for the double sampling X-bar chart

    Directory of Open Access Journals (Sweden)

    Michael B.C. Khoo

    2013-11-01

    The double sampling (DS) X-bar chart, one of the most widely used charting methods, is superior for detecting small and moderate shifts in the process mean. In a right-skewed run length distribution, the median run length (MRL) provides a more credible representation of the central tendency than the average run length (ARL), as the mean is greater than the median. In this paper, therefore, MRL is used as the performance criterion instead of the traditional ARL. Generally, the performance of the DS X-bar chart is investigated under the assumption of known process parameters. In practice, these parameters are usually estimated from an in-control reference Phase-I dataset. Since the performance of the DS X-bar chart is significantly affected by estimation errors, we study the effects of parameter estimation on the MRL-based DS X-bar chart when the in-control average sample size is minimised. This study reveals that more than 80 samples are required for the MRL-based DS X-bar chart with estimated parameters to perform more favourably than the corresponding chart with known parameters.
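
    MRL is the median of the run-length distribution, and the effect of estimated parameters can be approximated by simulating Phase-I estimation followed by Phase-II run lengths. The bare-bones sketch below uses an ordinary X-bar chart for brevity; the DS chart adds a second-stage sampling rule omitted here.

        import numpy as np

        def simulated_mrl(m_phase1=80, n=5, shift=0.0, L=3.0, reps=1000, seed=1):
            """Median run length of an X-bar chart whose limits use estimated parameters."""
            rng = np.random.default_rng(seed)
            run_lengths = []
            for _ in range(reps):
                phase1 = rng.normal(0, 1, (m_phase1, n))
                mu0, sd0 = phase1.mean(), phase1.std(ddof=1)   # Phase-I estimates
                rl = 0
                while True:
                    rl += 1
                    xbar = rng.normal(shift, 1, n).mean()       # Phase-II sample mean
                    if abs(xbar - mu0) > L * sd0 / np.sqrt(n):  # signal
                        break
                run_lengths.append(rl)
            return np.median(run_lengths)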

  11. Intent-Estimation- and Motion-Model-Based Collision Avoidance Method for Autonomous Vehicles in Urban Environments

    Directory of Open Access Journals (Sweden)

    Rulin Huang

    2017-04-01

    Existing collision avoidance methods for autonomous vehicles ignore the driving intent of detected vehicles and thus cannot satisfy the requirements for autonomous driving in urban environments, because of their high false detection rates for collisions with vehicles on winding roads and their missed detections of collisions with maneuvering vehicles. This study introduces an intent-estimation- and motion-model-based (IEMMB) method to address these disadvantages. First, a state vector is constructed by combining the road structure and the moving state of detected vehicles. A Gaussian mixture model is used to learn the maneuvering patterns of vehicles from collected data, and the patterns are used to estimate the driving intent of the detected vehicles. Then, a desirable long-term trajectory is obtained by weighting time and comfort. The long-term trajectory and the short-term trajectory, which is predicted using a constant yaw rate motion model, are fused to achieve an accurate trajectory. Finally, considering the moving state of the autonomous vehicle, collisions can be detected and avoided. Experiments have shown that the intent estimation method performed well, achieving an accuracy of 91.7% on straight roads and an accuracy of 90.5% on winding roads, which is much higher than that achieved by a method that ignores the road structure. The average collision detection distance is increased by more than 8 m. In addition, the maximum yaw rate and acceleration during an evasive maneuver are decreased, indicating an improvement in driving comfort.

  12. Anticipating the effects of gravity when intercepting moving objects: differentiating up and down based on nonvisual cues.

    Science.gov (United States)

    Senot, Patrice; Zago, Myrka; Lacquaniti, Francesco; McIntyre, Joseph

    2005-12-01

    Intercepting an object requires a precise estimate of its time of arrival at the interception point (time to contact or "TTC"). It has been proposed that knowledge about gravitational acceleration can be combined with first-order, visual-field information to provide a better estimate of TTC when catching falling objects. In this experiment, we investigated the relative role of visual and nonvisual information on motor-response timing in an interceptive task. Subjects were immersed in a stereoscopic virtual environment and asked to intercept with a virtual racket a ball falling from above or rising from below. The ball moved with different initial velocities and could accelerate, decelerate, or move at a constant speed. Depending on the direction of motion, the acceleration or deceleration of the ball could therefore be congruent or not with the acceleration that would be expected due to the force of gravity acting on the ball. Although the best success rate was observed for balls moving at a constant velocity, we systematically found a cross-effect of ball direction and acceleration on success rate and response timing. Racket motion was triggered on average 25 ms earlier when the ball fell from above than when it rose from below, whatever the ball's true acceleration. As visual-flow information was the same in both cases, this shift indicates an influence of the ball's direction relative to gravity on response timing, consistent with the anticipation of the effects of gravity on the flight of the ball.
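
    The two TTC estimates at issue can be written down directly. With current distance d and approach speed v (symbols ours), the first-order estimate ignores acceleration, while a gravity-consistent estimate solves the constant-acceleration equation, with the sign of a = ±g selected by the nonvisual up-versus-down cue:

        \mathrm{TTC}_{1} = \frac{d}{v}, \qquad
        d = v\,t + \tfrac{1}{2}\,a\,t^{2}
        \;\Rightarrow\;
        \mathrm{TTC}_{a} = \frac{-v + \sqrt{v^{2} + 2\,a\,d}}{a}

    Here a = +g for a ball accelerated toward the observer (falling from above) and a = -g for a rising, decelerating ball.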

  13. Application of Depth-Averaged Velocity Profile for Estimation of Longitudinal Dispersion in Rivers

    Directory of Open Access Journals (Sweden)

    Mohammad Givehchi

    2010-01-01

    River bed profiles and depth-averaged velocities are used as basic data in empirical and analytical equations for estimating the longitudinal dispersion coefficient, which has always been a topic of great interest for researchers. The simple model proposed by Maghrebi is capable of predicting the normalized isovel contours in the cross section of rivers and channels as well as the depth-averaged velocity profiles. The required data in Maghrebi's model are bed profile, shear stress, and roughness distributions. Comparison of depth-averaged velocities and longitudinal dispersion coefficients observed in the field data and those predicted by Maghrebi's model revealed that Maghrebi's model had an acceptable accuracy in predicting depth-averaged velocity.

  14. A Web-Based Model to Estimate the Impact of Best Management Practices

    Directory of Open Access Journals (Sweden)

    Youn Shik Park

    2014-03-01

    The Spreadsheet Tool for the Estimation of Pollutant Load (STEPL) can be used for Total Maximum Daily Load (TMDL) processes, since the model is capable of simulating the impacts of various best management practices (BMPs) and low impact development (LID) practices. The model computes average annual direct runoff using the Soil Conservation Service Curve Number (SCS-CN) method with average rainfall per event, which is not a typical use of the SCS-CN method. Five SCS-CN-based approaches to compute average annual direct runoff were investigated to explore estimated differences in average annual direct runoff computations using daily precipitation data collected from the National Climate Data Center and generated by the CLIGEN model for twelve stations in Indiana. Compared to the average annual direct runoff computed for the typical use of the SCS-CN method, the approaches to estimate average annual direct runoff within EPA STEPL showed large differences. A web-based model (STEPL WEB) was developed with a corrected approach to estimate average annual direct runoff. Moreover, the model was integrated with the Web-based Load Duration Curve Tool, which identifies the least-cost BMPs for each land use and optimizes BMP selection to identify the most cost-effective BMP implementations. The integrated tools provide an easy-to-use approach for performing TMDL analysis and identifying cost-effective approaches for controlling nonpoint source pollution.
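
    The per-event SCS-CN relation at the heart of the tool is standard; how event rainfall P is aggregated into an annual figure is exactly where the five investigated approaches differ. A sketch in US-customary units (the CN and rainfall values are hypothetical):

        def scs_cn_runoff(P, CN, ia_ratio=0.2):
            """Direct runoff Q (inches) for event rainfall P (inches) via the SCS-CN method."""
            S = 1000.0 / CN - 10.0        # potential maximum retention
            Ia = ia_ratio * S             # initial abstraction
            return 0.0 if P <= Ia else (P - Ia) ** 2 / (P - Ia + S)

        # annual direct runoff summed event by event, versus one call on mean event rainfall
        daily = [0.0, 1.2, 0.3, 2.5, 0.0, 0.8]   # hypothetical daily precipitation series
        per_event = sum(scs_cn_runoff(p, CN=75) for p in daily if p > 0)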

  15. Moving charged particles in lattice Boltzmann-based electrokinetics

    Science.gov (United States)

    Kuron, Michael; Rempfer, Georg; Schornbaum, Florian; Bauer, Martin; Godenschwager, Christian; Holm, Christian; de Graaf, Joost

    2016-12-01

    The motion of ionic solutes and charged particles under the influence of an electric field and the ensuing hydrodynamic flow of the underlying solvent is ubiquitous in aqueous colloidal suspensions. The physics of such systems is described by a coupled set of differential equations, along with boundary conditions, collectively referred to as the electrokinetic equations. Capuani et al. [J. Chem. Phys. 121, 973 (2004)] introduced a lattice-based method for solving this system of equations, which builds upon the lattice Boltzmann algorithm for the simulation of hydrodynamic flow and exploits computational locality. However, thus far, a description of how to incorporate moving boundary conditions into the Capuani scheme has been lacking. Moving boundary conditions are needed to simulate multiple arbitrarily moving colloids. In this paper, we detail how to introduce such a particle coupling scheme, based on an analogue to the moving boundary method for the pure lattice Boltzmann solver. The key ingredients in our method are mass and charge conservation for the solute species and a partial-volume smoothing of the solute fluxes to minimize discretization artifacts. We demonstrate our algorithm's effectiveness by simulating the electrophoresis of charged spheres in an external field; for a single sphere we compare to the equivalent electro-osmotic (co-moving) problem. Our method's efficiency and ease of implementation should prove beneficial to future simulations of the dynamics in a wide range of complex nanoscopic and colloidal systems that were previously inaccessible to lattice-based continuum algorithms.

  16. Estimation of muscle fatigue by ratio of mean frequency to average rectified value from surface electromyography.

    Science.gov (United States)

    Fernando, Jeffry Bonar; Yoshioka, Mototaka; Ozawa, Jun

    2016-08-01

    A new method to estimate muscle fatigue quantitatively from surface electromyography (EMG) is proposed. The ratio of mean frequency (MNF) to average rectified value (ARV) is used as the index of muscle fatigue, and muscle fatigue is detected when MNF/ARV falls below a pre-determined or pre-calculated baseline. MNF/ARV gives a larger distinction between fatigued and non-fatigued muscle. Experimental results show the effectiveness of our method in estimating muscle fatigue more accurately than conventional methods. An early evaluation based on the initial value of MNF/ARV and the subjective time when the subjects start feeling fatigue also indicates the possibility of calculating the baseline from the initial value of MNF/ARV.
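
    Both quantities are standard: MNF is the power-weighted mean frequency of the EMG spectrum and ARV is the mean absolute amplitude, so the index is their ratio tracked over time. A sketch using a Welch periodogram (sampling rate and window length are placeholders):

        import numpy as np
        from scipy.signal import welch

        def mnf_arv_ratio(emg, fs=1000):
            """Fatigue index from a surface-EMG window: mean frequency over rectified value."""
            f, pxx = welch(emg, fs=fs, nperseg=256)
            mnf = np.sum(f * pxx) / np.sum(pxx)   # power-weighted mean frequency
            arv = np.mean(np.abs(emg))            # average rectified value
            return mnf / arv                      # falls as the muscle fatigues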

  17. New approaches to improve a WCDMA SIR estimator by employing different post-processing stages

    Directory of Open Access Journals (Sweden)

    Amnart Chaichoet

    2008-09-01

    For effective control of transmission power in WCDMA mobile systems, a good estimate of the signal-to-interference ratio (SIR) is needed. Conventionally, an adaptive SIR estimator employs a moving average (MA) filter (Yoon et al., 2002) to counter fading-channel distortion. However, the resulting estimate tends to have high estimation error due to fluctuation in the channel variation. In this paper, an additional post-processing stage is proposed to improve the estimation accuracy by reducing the variation of the estimate. Four variations of post-processing stages, namely (1) a moving average (MA) post-filter, (2) an exponential moving average (EMA) post-filter, (3) an IIR post-filter, and (4) a least-mean-squared (LMS) adaptive post-filter, are proposed, and their optimal performance in terms of root-mean-square error (RMSE) is then compared by simulation. The results show comparable best performance when the MA and LMS post-filters are used. However, the MA post-filter requires a lookup table of filter order for optimal performance at different channel conditions, while the LMS post-filter can be used conveniently without a lookup table.
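
    The EMA variant, for instance, is a one-line recursion in which the smoothing factor trades tracking speed against variance reduction. A sketch (the α value is illustrative, not from the paper):

        def ema_postfilter(sir_estimates, alpha=0.1):
            """Exponential moving average post-processing of raw SIR estimates."""
            out, state = [], sir_estimates[0]
            for x in sir_estimates:
                state = alpha * x + (1 - alpha) * state   # EMA recursion
                out.append(state)
            return out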

  18. Consensus in averager-copier-voter networks of moving dynamical agents

    Science.gov (United States)

    Shang, Yilun

    2017-02-01

    This paper deals with hybrid opinion dynamics comprising averager, copier, and voter agents, which ramble as random walkers on a spatial network. Agents exchange information following some deterministic and stochastic protocols if they reside at the same site at the same time. Based on the stochastic stability of Markov chains, sufficient conditions guaranteeing consensus in the sense of almost sure convergence have been obtained. The ultimate consensus state is identified in the form of an ergodicity result. Simulation studies are performed to validate the effectiveness and availability of our theoretical results. The existence/non-existence of voters and their proportion are shown to play key roles in the consensus-reaching process.

  19. Estimating Gestational Age With Sonography: Regression-Derived Formula Versus the Fetal Biometric Average.

    Science.gov (United States)

    Cawyer, Chase R; Anderson, Sarah B; Szychowski, Jeff M; Neely, Cherry; Owen, John

    2018-03-01

    To compare the accuracy of a new regression-derived formula developed from the National Fetal Growth Studies data to the common alternative method that uses the average of the gestational ages (GAs) calculated for each fetal biometric measurement (biparietal diameter, head circumference, abdominal circumference, and femur length). This retrospective cross-sectional study identified nonanomalous singleton pregnancies that had a crown-rump length plus at least 1 additional sonographic examination with complete fetal biometric measurements. With the use of the crown-rump length to establish the referent estimated date of delivery, each method's error (National Institute of Child Health and Human Development regression versus Hadlock average [Radiology 1984; 152:497-501]) at every examination was computed. Error, defined as the difference between the crown-rump length-derived GA and each method's predicted GA (weeks), was compared in 3 GA intervals: 1 (14 weeks-20 weeks 6 days), 2 (21 weeks-28 weeks 6 days), and 3 (≥29 weeks). In addition, the proportion of each method's examinations that had errors outside prespecified (±) day ranges was computed by using odds ratios. A total of 16,904 sonograms were identified. The overall and prespecified GA range subset mean errors were significantly smaller for the regression compared to the average (P < .01), and the regression had significantly lower odds of observing examinations outside the specified range of error in GA intervals 2 (odds ratio, 1.15; 95% confidence interval, 1.01-1.31) and 3 (odds ratio, 1.24; 95% confidence interval, 1.17-1.32) than the average method. In a contemporary unselected population of women dated by a crown-rump length-derived GA, the National Institute of Child Health and Human Development regression formula produced fewer estimates outside a prespecified margin of error than the commonly used Hadlock average; the differences were most pronounced for GA estimates at 29 weeks and later.

  20. Bootstrapping Density-Weighted Average Derivatives

    DEFF Research Database (Denmark)

    Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael

    Employing the "small bandwidth" asymptotic framework of Cattaneo, Crump, and Jansson (2009), this paper studies the properties of a variety of bootstrap-based inference procedures associated with the kernel-based density-weighted averaged derivative estimator proposed by Powell, Stock, and Stoker...... (1989). In many cases validity of bootstrap-based inference procedures is found to depend crucially on whether the bandwidth sequence satisfies a particular (asymptotic linearity) condition. An exception to this rule occurs for inference procedures involving a studentized estimator employing a "robust...

  1. State-of-the-Art Mobile Intelligence: Enabling Robots to Move Like Humans by Estimating Mobility with Artificial Intelligence

    Directory of Open Access Journals (Sweden)

    Xue-Bo Jin

    2018-03-01

    Mobility is a significant robotic task. It is the most important function when robotics is applied to domains such as autonomous cars, home service robots, and autonomous underwater vehicles. Despite extensive research on this topic, robots still suffer from difficulties when moving in complex environments, especially in practical applications. Therefore, the ability to have enough intelligence while moving is a key issue for the success of robots. Researchers have proposed a variety of methods and algorithms, including navigation and tracking. To help readers swiftly understand the recent advances in methodology and algorithms for robot movement, we present this survey, which provides a detailed review of the existing methods of navigation and tracking. In particular, this survey features a relation-based architecture that enables readers to easily grasp the key points of mobile intelligence. We first outline the key problems in robot systems and point out the relationship among robotics, navigation, and tracking. We then illustrate navigation using different sensors and the fusion methods and detail the state estimation and tracking models for target maneuvering. Finally, we address several issues of deep learning as well as the mobile intelligence of robots as suggested future research topics. The contributions of this survey are threefold. First, we review the literature on navigation according to the applied sensors and fusion methods. Second, we detail the models for target maneuvering and the existing estimation-based tracking approaches, such as the Kalman filter and the variants developed from it, according to their model-construction mechanisms: linear, nonlinear, and non-Gaussian white noise. Third, we illustrate the artificial intelligence approach, especially deep learning methods, and discuss its combination with the estimation method.
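
    As a concrete instance of the estimation-based tracking that the survey reviews, the constant-velocity Kalman filter is the canonical linear-Gaussian case. The sketch below (1D position measurements, illustrative noise levels) is generic, not taken from the survey:

        import numpy as np

        def kalman_cv(zs, dt=0.1, q=1e-2, r=1.0):
            """Constant-velocity Kalman filter over a sequence of 1D position measurements."""
            F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition (position, velocity)
            H = np.array([[1.0, 0.0]])              # we observe position only
            Q, R = q * np.eye(2), np.array([[r]])
            x, P = np.zeros((2, 1)), np.eye(2)
            track = []
            for z in zs:
                x, P = F @ x, F @ P @ F.T + Q                        # predict
                K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)         # Kalman gain
                x = x + K @ (np.array([[z]]) - H @ x)                # update
                P = (np.eye(2) - K @ H) @ P
                track.append(x[0, 0])
            return track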

  2. Areal rainfall estimation using moving cars – computer experiments including hydrological modeling

    OpenAIRE

    E. Rabiei; U. Haberlandt; M. Sester; D. Fitzner; M. Wallner

    2016-01-01

    The need for high temporal and spatial resolution precipitation data for hydrological analyses has been discussed in several studies. Although rain gauges provide valuable information, a very dense rain gauge network is costly. As a result, several new ideas have emerged to help estimate areal rainfall with higher temporal and spatial resolution. Rabiei et al. (2013) observed that moving cars, called RainCars (RCs), can potentially be a new source of data for measuring rainfall...

  3. Modelling and analysis of turbulent datasets using Auto Regressive Moving Average processes

    International Nuclear Information System (INIS)

    Faranda, Davide; Dubrulle, Bérengère; Daviaud, François; Pons, Flavio Maria Emanuele; Saint-Michel, Brice; Herbert, Éric; Cortet, Pierre-Philippe

    2014-01-01

    We introduce a novel way to extract information from turbulent datasets by applying an Auto Regressive Moving Average (ARMA) statistical analysis. Such analysis goes well beyond the analysis of the mean flow and of the fluctuations and links the behavior of the recorded time series to a discrete version of a stochastic differential equation which is able to describe the correlation structure in the dataset. We introduce a new index Υ that measures the difference between the resulting analysis and the Obukhov model of turbulence, the simplest stochastic model reproducing both the Richardson law and the Kolmogorov spectrum. We test the method on datasets measured in a von Kármán swirling flow experiment. We find that the ARMA analysis is well correlated with spatial structures of the flow and can discriminate between two different flows with comparable mean velocities, obtained by changing the forcing. Moreover, we show that Υ is highest in regions where shear-layer vortices are present, thereby establishing a link between deviations from the Kolmogorov model and coherent structures. These deviations are consistent with the ones observed by computing the Hurst exponents for the same time series. We show that some salient features of the analysis are preserved when considering global instead of local observables. Finally, we analyze flow configurations with multistability features, where the ARMA technique is efficient in discriminating different stability branches of the system.
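
    The first step of such an analysis is simply fitting low-order ARMA models to the (detrended) velocity series and selecting (p, q) by an information criterion; the Υ index is then built from the fitted coefficients as defined in the paper. A sketch using statsmodels:

        from statsmodels.tsa.arima.model import ARIMA

        def best_arma(x, max_p=4, max_q=4):
            """Select an ARMA(p, q) model for a detrended series by BIC."""
            best = None
            for p in range(1, max_p + 1):
                for q in range(max_q + 1):
                    try:
                        fit = ARIMA(x, order=(p, 0, q)).fit()
                        if best is None or fit.bic < best[0]:
                            best = (fit.bic, p, q, fit)
                    except Exception:
                        continue   # some orders may fail to converge
            return best   # (bic, p, q, fitted model)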

  4. Illuminant direction estimation for a single image based on local region complexity analysis and average gray value.

    Science.gov (United States)

    Yi, Jizheng; Mao, Xia; Chen, Lijiang; Xue, Yuli; Compare, Angelo

    2014-01-10

    Illuminant direction estimation is an important research issue in the field of image processing. Due to the low cost of obtaining texture information from a single image, it is worthwhile to estimate illuminant direction by employing scenario texture information. This paper proposes a novel computational method to estimate illuminant direction on both color outdoor images and the extended Yale face database B. In our paper, the luminance component is separated from the resized YCbCr image and its edges are detected with the Canny edge detector. Then, we divide the binary edge image into 16 local regions and calculate the edge level percentage in each of them. Afterward, we use the edge level percentage to analyze the complexity of each local region included in the luminance component. Finally, according to the error function between the measured intensity and the calculated intensity, and the constraint function for an infinite light source model, we calculate the illuminant directions of the luminance component's three local regions, which meet the requirements of lower complexity and larger average gray value, and synthesize them as the final illuminant direction. Unlike previous works, the proposed method requires neither all of the information in the image nor textures included in a training set. Experimental results show that the proposed method outperforms existing ones in both correct rate and execution time.

  5. Extracting Credible Dependencies for Averaged One-Dependence Estimator Analysis

    Directory of Open Access Journals (Sweden)

    LiMin Wang

    2014-01-01

    Of the numerous proposals to improve the accuracy of naive Bayes (NB) by weakening its conditional independence assumption, the averaged one-dependence estimator (AODE) demonstrates remarkable zero-one loss performance. However, indiscriminate superparent attributes bring both considerable computational cost and a negative effect on classification accuracy. In this paper, to extract the most credible dependencies, we present a new type of semi-naive Bayesian operation, which selects superparent attributes by building a maximum weighted spanning tree and removes highly correlated children attributes by functional dependency and canonical cover analysis. Our extensive experimental comparison on UCI data sets shows that this operation efficiently identifies possible superparent attributes at training time and eliminates redundant children attributes at classification time.

  6. Ergodic averages via dominating processes

    DEFF Research Database (Denmark)

    Møller, Jesper; Mengersen, Kerrie

    2006-01-01

    We show how the mean of a monotone function (defined on a state space equipped with a partial ordering) can be estimated, using ergodic averages calculated from upper and lower dominating processes of a stationary irreducible Markov chain. In particular, we do not need to simulate the stationary...... Markov chain and we eliminate the problem of whether an appropriate burn-in is determined or not. Moreover, when a central limit theorem applies, we show how confidence intervals for the mean can be estimated by bounding the asymptotic variance of the ergodic average based on the equilibrium chain....

  7. Heterogeneous CPU-GPU moving targets detection for UAV video

    Science.gov (United States)

    Li, Maowen; Tang, Linbo; Han, Yuqi; Yu, Chunlei; Zhang, Chao; Fu, Huiquan

    2017-07-01

    Moving target detection is gaining popularity in civilian and military applications. On some motion-detection monitoring platforms, low-resolution stationary cameras are being replaced by moving HD cameras mounted on UAVs. Moving targets occupy only a small fraction of the pixels in HD video taken by a UAV, and the background of the frame is usually moving because of the motion of the UAV. The high computational cost of detection algorithms prevents running them at full frame resolution. Hence, to detect moving targets in UAV video, we propose a heterogeneous CPU-GPU moving target detection algorithm. More specifically, we use background registration to eliminate the impact of the moving background and frame differencing to detect small moving targets. To achieve real-time processing, we design a heterogeneous CPU-GPU framework for our method. The experimental results show that our method can detect the main moving targets in HD video taken by a UAV, with an average processing time of 52.16 ms per frame, which is fast enough to solve the problem.
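
    The registration-then-differencing idea can be sketched with OpenCV on grayscale frames; the ORB-plus-homography registration below stands in for the paper's background registration, and the GPU offload is omitted:

        import cv2
        import numpy as np

        def moving_target_mask(prev, curr, thresh=25):
            """Register prev to curr (ORB features + homography), then frame-difference."""
            orb = cv2.ORB_create()
            k1, d1 = orb.detectAndCompute(prev, None)
            k2, d2 = orb.detectAndCompute(curr, None)
            matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
            src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
            dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
            H, _ = cv2.findHomography(src, dst, cv2.RANSAC)
            warped = cv2.warpPerspective(prev, H, prev.shape[::-1])  # cancel camera motion
            diff = cv2.absdiff(curr, warped)
            return diff > thresh   # candidate moving-target pixels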

  8. Strips of hourly power options. Approximate hedging using average-based forward contracts

    International Nuclear Information System (INIS)

    Lindell, Andreas; Raab, Mikael

    2009-01-01

    We study approximate hedging strategies for a contingent claim consisting of a strip of independent hourly power options. The payoff of the contingent claim is a sum of the contributing hourly payoffs. As there is no forward market for specific hours, the fundamental problem is to find a reasonable hedge using exchange-traded forward contracts, e.g. average-based monthly contracts. The main result is a simple dynamic hedging strategy that reduces a significant part of the variance. The idea is to decompose the contingent claim into mathematically tractable components and to use empirical estimations to derive hedging deltas. Two benefits of the method are that the technique easily extends to more complex power derivatives and that only a few parameters need to be estimated. The hedging strategy based on the decomposition technique is compared with dynamic delta hedging strategies based on local minimum variance hedging, using a correlated traded asset. (author)

  9. Random Decrement Based FRF Estimation

    DEFF Research Database (Denmark)

    Brincker, Rune; Asmussen, J. C.

    to speed and quality. The basis of the new method is the Fourier transformation of the Random Decrement functions which can be used to estimate the frequency response functions. The investigations are based on load and response measurements of a laboratory model of a 3 span bridge. By applying both methods...... that the Random Decrement technique is based on a simple controlled averaging of time segments of the load and response processes. Furthermore, the Random Decrement technique is expected to produce reliable results. The Random Decrement technique will reduce leakage, since the Fourier transformation...

  10. Random Decrement Based FRF Estimation

    DEFF Research Database (Denmark)

    Brincker, Rune; Asmussen, J. C.

    1997-01-01

    to speed and quality. The basis of the new method is the Fourier transformation of the Random Decrement functions which can be used to estimate the frequency response functions. The investigations are based on load and response measurements of a laboratory model of a 3 span bridge. By applying both methods...... that the Random Decrement technique is based on a simple controlled averaging of time segments of the load and response processes. Furthermore, the Random Decrement technique is expected to produce reliable results. The Random Decrement technique will reduce leakage, since the Fourier transformation...
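
    The "simple controlled averaging" that records 9 and 10 refer to is easy to state: collect segments of the measured process that start whenever a trigger condition is met (here a level crossing, one common choice) and average them. A generic sketch, not the authors' implementation:

        import numpy as np

        def random_decrement(x, trigger_level, seg_len):
            """Average the segments of x that start wherever x crosses the trigger level."""
            x = np.asarray(x, dtype=float)
            starts = np.where(x[:-seg_len] >= trigger_level)[0]
            assert len(starts) > 0, "no trigger crossings found"
            return np.mean([x[i:i + seg_len] for i in starts], axis=0)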

  11. Comparing a recursive digital filter with the moving-average and sequential probability-ratio detection methods for SNM portal monitors

    International Nuclear Information System (INIS)

    Fehlau, P.E.

    1993-01-01

    The author compared a recursive digital filter proposed as a detection method for French special nuclear material monitors with the author's detection methods, which employ a moving-average scaler or a sequential probability-ratio test. Each of nine test subjects repeatedly carried a test source through a walk-through portal monitor that had the same nuisance-alarm rate with each method. He found that the average detection probability for the test source is also the same for each method. However, the recursive digital filter may have one drawback: its exponentially decreasing response to past radiation intensity prolongs the impact of any interference from radiation sources or radiation-producing machinery. He also examined the influence of each test subject on the monitor's operation by measuring individual attenuation factors for background and source radiation, then ranked the subjects' attenuation factors against their individual probabilities for detecting the test source. The one inconsistent ranking was probably caused by that subject's unusually long stride when passing through the portal.
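
    The two count-rate estimators being compared are both one-liners, and the drawback noted above follows from their memories: the box-car average forgets an interfering source after w intervals, while the recursive filter's response decays only geometrically. A sketch with illustrative parameters:

        import numpy as np

        def moving_average_scaler(counts, w=4):
            """Box-car average of the last w count intervals (finite memory)."""
            return np.convolve(counts, np.ones(w) / w, mode='valid')

        def recursive_filter(counts, alpha=0.3):
            """First-order recursive filter; past intensity decays as (1 - alpha)**n."""
            y, out = counts[0], []
            for c in counts:
                y = alpha * c + (1 - alpha) * y
                out.append(y)
            return out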

  12. Estimating evaporative vapor generation from automobiles based on parking activities

    International Nuclear Information System (INIS)

    Dong, Xinyi; Tschantz, Michael; Fu, Joshua S.

    2015-01-01

    A new approach is proposed to quantify the evaporative vapor generation based on real parking activity data. As compared to the existing methods, two improvements are applied in this new approach to reduce the uncertainties: first, evaporative vapor generation from diurnal parking events is usually calculated based on an estimated average parking duration for the whole fleet, while in this study the vapor generation rate is calculated based on the parking activity distribution. Second, rather than using the daily temperature gradient, this study uses hourly temperature observations to derive the hourly incremental vapor generation rates. The parking distribution and hourly incremental vapor generation rates are then adopted with Wade–Reddy's equation to estimate the weighted average evaporative generation. We find that hourly incremental rates can better describe the temporal variations of vapor generation, and the weighted vapor generation rate is 5–8% less than the calculation without considering parking activity. - Highlights: • We applied real parking distribution data to estimate evaporative vapor generation. • We applied real hourly temperature data to estimate the hourly incremental vapor generation rate. • Evaporative emission for Florence is estimated based on the parking distribution and hourly rate.

  13. Areal rainfall estimation using moving cars - computer experiments including hydrological modeling

    Science.gov (United States)

    Rabiei, Ehsan; Haberlandt, Uwe; Sester, Monika; Fitzner, Daniel; Wallner, Markus

    2016-09-01

    The need for high temporal and spatial resolution precipitation data for hydrological analyses has been discussed in several studies. Although rain gauges provide valuable information, a very dense rain gauge network is costly. As a result, several new ideas have emerged to help estimate areal rainfall with higher temporal and spatial resolution. Rabiei et al. (2013) observed that moving cars, called RainCars (RCs), can potentially be a new source of data for measuring rain rate. The optical sensors used in that study are designed for operating the windscreen wipers and showed promising results for rainfall measurement purposes. Their measurement accuracy has been quantified in laboratory experiments. Considering those errors explicitly, the main objective of this study is to investigate the benefit of using RCs for estimating areal rainfall. For that, computer experiments are carried out in which radar rainfall is considered as the reference and the other sources of data, i.e., RCs and rain gauges, are extracted from the radar data. Comparing the quality of areal rainfall estimation by RCs with that by rain gauges and the reference data helps to investigate the benefit of the RCs. The value of this additional source of data is assessed not only for areal rainfall estimation performance but also for use in hydrological modeling. Considering measurement errors derived from laboratory experiments, the results show that the RCs provide useful additional information for areal rainfall estimation as well as for hydrological modeling. Moreover, even when larger uncertainties are assumed for the RCs, they remain useful up to a certain level for areal rainfall estimation and discharge simulation.

  14. Fast LCMV-based Methods for Fundamental Frequency Estimation

    DEFF Research Database (Denmark)

    Jensen, Jesper Rindom; Glentis, George-Othon; Christensen, Mads Græsbøll

    2013-01-01

    peaks and require matrix inversions for each point in the search grid. In this paper, we therefore consider fast implementations of LCMV-based fundamental frequency estimators, exploiting the estimators' inherently low displacement rank of the used Toeplitz-like data covariance matrices, using...... with several orders of magnitude, but, as we show, further computational savings can be obtained by the adoption of an approximative IAA-based data covariance matrix estimator, reminiscent of the recently proposed Quasi-Newton IAA technique. Furthermore, it is shown how the considered pitch estimators can...... as such either the classic time domain averaging covariance matrix estimator, or, if aiming for an increased spectral resolution, the covariance matrix resulting from the application of the recent iterative adaptive approach (IAA). The proposed exact implementations reduce the required computational complexity...

  15. Accurate measurement of imaging photoplethysmographic signals based camera using weighted average

    Science.gov (United States)

    Pang, Zongguang; Kong, Lingqin; Zhao, Yuejin; Sun, Huijuan; Dong, Liquan; Hui, Mei; Liu, Ming; Liu, Xiaohua; Liu, Lingling; Li, Xiaohui; Li, Rongji

    2018-01-01

    Imaging Photoplethysmography (IPPG) is an emerging technique for the extraction of human vital signs from video recordings. With advantages such as non-contact measurement, low cost, and easy operation, IPPG has become a research hot spot in the field of biomedicine. However, noise from non-microarterial areas cannot be removed because of the uneven distribution of microarteries and the different signal strength of each region, which results in a low signal-to-noise ratio of the IPPG signal and low accuracy of the estimated heart rate. In this paper, we propose a method for improving the signal-to-noise ratio of camera-based IPPG signals by combining the sub-regions of the face with a weighted average. First, we obtain the regions of interest (ROI) of a subject's face from the camera. Second, each region of interest is tracked and feature-matched in each frame of the video, and each tracked region of the face is divided into 60×60 pixel blocks. Third, the weight of the PPG signal of each sub-region is calculated based on the signal-to-noise ratio of that sub-region. Finally, we combine the IPPG signals from all the tracked ROIs using the weighted average. Compared with existing approaches, the results show that the proposed method yields a modest but significant improvement in the signal-to-noise ratio of the camera-based PPG estimate and in the accuracy of heart rate measurement.
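
    As a rough illustration of the weighting step only, the sketch below combines per-block PPG traces using weights proportional to a crude in-band/out-of-band power ratio. The heart-rate band limits, sampling rate, and the SNR proxy itself are assumptions for illustration, not the paper's exact definitions.

    ```python
    import numpy as np

    def snr(sig, f_lo=0.7, f_hi=4.0, fs=30.0):
        """Crude SNR proxy: power in an assumed heart-rate band (0.7-4 Hz)
        relative to power outside it."""
        spec = np.abs(np.fft.rfft(sig - sig.mean())) ** 2
        freqs = np.fft.rfftfreq(len(sig), d=1.0 / fs)
        band = (freqs >= f_lo) & (freqs <= f_hi)
        return spec[band].sum() / (spec[~band].sum() + 1e-12)

    def weighted_ippg(block_signals):
        """Combine per-block traces (n_blocks x n_samples) with
        SNR-proportional weights, as the record describes."""
        w = np.array([snr(s) for s in block_signals])
        w /= w.sum()
        return w @ block_signals

    # Toy usage: 4 blocks, 10 s at 30 fps, with varying noise levels
    rng = np.random.default_rng(0)
    t = np.arange(300) / 30.0
    pulse = np.sin(2 * np.pi * 1.2 * t)                  # ~72 bpm component
    blocks = np.stack([pulse + rng.normal(0, s, t.size) for s in (0.2, 0.5, 1, 2)])
    combined = weighted_ippg(blocks)
    ```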

  16. On the Nature of SEM Estimates of ARMA Parameters.

    Science.gov (United States)

    Hamaker, Ellen L.; Dolan, Conor V.; Molenaar, Peter C. M.

    2002-01-01

    Reexamined the nature of structural equation modeling (SEM) estimates of autoregressive moving average (ARMA) models, replicated the simulation experiments of P. Molenaar, and examined the behavior of the log-likelihood ratio test. Simulation studies indicate that estimates of ARMA parameters observed with SEM software are identical to those…

  17. Translating HbA1c measurements into estimated average glucose values in pregnant women with diabetes

    DEFF Research Database (Denmark)

    Law, Graham R; Gilthorpe, Mark S; Secher, Anna L

    2017-01-01

    AIMS/HYPOTHESIS: This study aimed to examine the relationship between average glucose levels, assessed by continuous glucose monitoring (CGM), and HbA1c levels in pregnant women with diabetes to determine whether calculations of standard estimated average glucose (eAG) levels from HbA1c measurements…

  18. Time of arrival based location estimation for cooperative relay networks

    KAUST Repository

    Çelebi, Hasari Burak; Abdallah, Mohamed M.; Hussain, Syed Imtiaz; Qaraqe, Khalid A.; Alouini, Mohamed-Slim

    2010-01-01

    In this paper, we investigate the performance of a cooperative relay network performing location estimation through time of arrival (TOA). We derive Cramer-Rao lower bound (CRLB) for the location estimates using the relay network. The analysis is extended to obtain average CRLB considering the signal fluctuations in both relay and direct links. The effects of the channel fading of both relay and direct links and amplification factor and location of the relay node on average CRLB are investigated. Simulation results show that the channel fading of both relay and direct links and amplification factor and location of relay node affect the accuracy of TOA based location estimation. ©2010 IEEE.

  20. Minimum Delay Moving Object Detection

    KAUST Repository

    Lao, Dong

    2017-05-14

    This thesis presents a general framework and method for detecting an object in a video based on apparent motion. The object moves, at some unknown time, differently than the “background” motion, which can be induced by camera motion. The goal of the proposed method is to detect and segment the object, in an online manner, as soon as it moves. Since motion estimation can be unreliable between frames, more than two frames are needed to reliably detect the object. Observing more frames before declaring a detection may lead to a more accurate detection and segmentation, since more motion may be observed, leading to a stronger motion cue; however, this also leads to greater delay. The proposed method is designed to detect the object(s) with minimum delay, i.e., the fewest frames after the object moves, subject to a constraint on false alarms, defined as declarations of detection before the object moves or as incorrect or inaccurate segmentation at detection time. Experiments on a new extensive dataset for moving object detection show that our method achieves lower delay than the existing state of the art for all false-alarm constraints.

  1. Mobile Position Estimation using Artificial Neural Network in CDMA Cellular Systems

    Directory of Open Access Journals (Sweden)

    Omar Waleed Abdulwahhab

    2017-01-01

    Full Text Available This paper introduces the use of a neural network as a type of associative memory for the problem of mobile position estimation, in which a mobile station estimates its location from the signal strengths reaching it from several surrounding base stations; the neural network can be implemented inside the mobile. The traditional methods of time of arrival (TOA) and received signal strength (RSS) are used and compared with two analytical methods, the optimal positioning method and the average positioning method. The training data are ideal, since they can be obtained from the geometry of the CDMA cell topology. The TOA and RSS methods are tested in many cases along a nonlinear path through which the MS can move in that region. The results show that the neural network has good performance compared with the two analytical methods, the average positioning method and the optimal positioning method.

  2. Fast generation of video holograms of three-dimensional moving objects using a motion compensation-based novel look-up table.

    Science.gov (United States)

    Kim, Seung-Cheol; Dong, Xiao-Bin; Kwon, Min-Woo; Kim, Eun-Soo

    2013-05-06

    A novel approach for fast generation of video holograms of three-dimensional (3-D) moving objects using a motion compensation-based novel-look-up-table (MC-N-LUT) method is proposed. Motion compensation has been widely employed in the compression of conventional 2-D video data because of its ability to exploit the high temporal correlation between successive video frames. Here, this concept of motion compensation is applied to the N-LUT for the first time, based on its inherent property of shift-invariance. That is, motion vectors of the 3-D moving objects are extracted between two consecutive video frames, and with them the motions of the 3-D objects at each frame are compensated. Through this process, the amount of 3-D object data for which video holograms must be calculated is massively reduced, which results in a dramatic increase in the computational speed of the proposed method. Experimental results with three kinds of 3-D video scenarios reveal that the average number of calculated object points and the average calculation time per object point of the proposed method are reduced down to 86.95% and 86.53%, and to 34.99% and 32.30%, respectively, compared to those of the conventional N-LUT and temporal redundancy-based N-LUT (TR-N-LUT) methods.

  3. Identification and estimation of survivor average causal effects.

    Science.gov (United States)

    Tchetgen Tchetgen, Eric J

    2014-09-20

    In longitudinal studies, outcomes ascertained at follow-up are typically undefined for individuals who die prior to the follow-up visit. In such settings, outcomes are said to be truncated by death and inference about the effects of a point treatment or exposure, restricted to individuals alive at the follow-up visit, could be biased even if as in experimental studies, treatment assignment were randomized. To account for truncation by death, the survivor average causal effect (SACE) defines the effect of treatment on the outcome for the subset of individuals who would have survived regardless of exposure status. In this paper, the author nonparametrically identifies SACE by leveraging post-exposure longitudinal correlates of survival and outcome that may also mediate the exposure effects on survival and outcome. Nonparametric identification is achieved by supposing that the longitudinal data arise from a certain nonparametric structural equations model and by making the monotonicity assumption that the effect of exposure on survival agrees in its direction across individuals. A novel weighted analysis involving a consistent estimate of the survival process is shown to produce consistent estimates of SACE. A data illustration is given, and the methods are extended to the context of time-varying exposures. We discuss a sensitivity analysis framework that relaxes assumptions about independent errors in the nonparametric structural equations model and may be used to assess the extent to which inference may be altered by a violation of key identifying assumptions. © 2014 The Authors. Statistics in Medicine published by John Wiley & Sons, Ltd.

  4. The GAAS metagenomic tool and its estimations of viral and microbial average genome size in four major biomes.

    Science.gov (United States)

    Angly, Florent E; Willner, Dana; Prieto-Davó, Alejandra; Edwards, Robert A; Schmieder, Robert; Vega-Thurber, Rebecca; Antonopoulos, Dionysios A; Barott, Katie; Cottrell, Matthew T; Desnues, Christelle; Dinsdale, Elizabeth A; Furlan, Mike; Haynes, Matthew; Henn, Matthew R; Hu, Yongfei; Kirchman, David L; McDole, Tracey; McPherson, John D; Meyer, Folker; Miller, R Michael; Mundt, Egbert; Naviaux, Robert K; Rodriguez-Mueller, Beltran; Stevens, Rick; Wegley, Linda; Zhang, Lixin; Zhu, Baoli; Rohwer, Forest

    2009-12-01

    Metagenomic studies characterize both the composition and diversity of uncultured viral and microbial communities. BLAST-based comparisons have typically been used for such analyses; however, sampling biases, high percentages of unknown sequences, and the use of arbitrary thresholds to find significant similarities can decrease the accuracy and validity of estimates. Here, we present Genome relative Abundance and Average Size (GAAS), a complete software package that provides improved estimates of community composition and average genome length for metagenomes in both textual and graphical formats. GAAS implements a novel methodology to control for sampling bias via length normalization, to adjust for multiple BLAST similarities by similarity weighting, and to select significant similarities using relative alignment lengths. In benchmark tests, the GAAS method was robust to both high percentages of unknown sequences and to variations in metagenomic sequence read lengths. Re-analysis of the Sargasso Sea virome using GAAS indicated that standard methodologies for metagenomic analysis may dramatically underestimate the abundance and importance of organisms with small genomes in environmental systems. Using GAAS, we conducted a meta-analysis of microbial and viral average genome lengths in over 150 metagenomes from four biomes to determine whether genome lengths vary consistently between and within biomes, and between microbial and viral communities from the same environment. Significant differences between biomes and within aquatic sub-biomes (oceans, hypersaline systems, freshwater, and microbialites) suggested that average genome length is a fundamental property of environments driven by factors at the sub-biome level. The behavior of paired viral and microbial metagenomes from the same environment indicated that microbial and viral average genome sizes are independent of each other, but indicative of community responses to stressors and environmental conditions.

  5. Modeling an Application's Theoretical Minimum and Average Transactional Response Times

    Energy Technology Data Exchange (ETDEWEB)

    Paiz, Mary Rose [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-04-01

    The theoretical minimum transactional response time of an application serves as a basis for the expected response time. The lower threshold for the minimum response time represents the minimum amount of time that the application should take to complete a transaction. Knowing the lower threshold is beneficial in detecting anomalies that are the results of unsuccessful transactions. Conversely, when an application's response time falls above an upper threshold, there is likely an anomaly in the application that is causing unusual performance issues in the transaction. This report explains how the non-stationary Generalized Extreme Value distribution is used to estimate the lower threshold of an application's daily minimum transactional response time. It also explains how the seasonal Autoregressive Integrated Moving Average time series model is used to estimate the upper threshold for an application's average transactional response time.
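
    As a minimal sketch of the two threshold estimates, the code below fits a stationary GEV to synthetic daily minima (the report uses a non-stationary GEV, which adds time-dependent parameters) and a weekly seasonal ARIMA to synthetic average response times. All data, model orders, quantile levels, and the 2-sigma band are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.stats import genextreme
    from statsmodels.tsa.statespace.sarimax import SARIMAX

    rng = np.random.default_rng(1)

    # --- Lower threshold: GEV fit to daily minimum response times (ms) ---
    # Block minima are handled by negation, the standard trick for fitting
    # a (maxima-oriented) GEV to minima.
    daily_min = 40 + rng.gamma(2.0, 2.0, size=365)
    c, loc, scale = genextreme.fit(-daily_min)
    # 99th percentile of the negated minima = 1st percentile of the minima
    lower = -genextreme.ppf(0.99, c, loc=loc, scale=scale)

    # --- Upper threshold: seasonal ARIMA forecast of average response times ---
    avg_rt = 120 + 10 * np.sin(2 * np.pi * np.arange(365) / 7) + rng.normal(0, 3, 365)
    fit = SARIMAX(avg_rt, order=(1, 0, 1), seasonal_order=(1, 0, 1, 7)).fit(disp=False)
    fc = fit.get_forecast(steps=7)
    upper = fc.predicted_mean + 2 * fc.se_mean        # assumed 2-sigma band
    ```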

  6. Blood velocity estimation using spatio-temporal encoding based on frequency division approach

    DEFF Research Database (Denmark)

    Gran, Fredrik; Nikolov, Svetoslav; Jensen, Jørgen Arendt

    2005-01-01

    In this paper a feasibility study of using a spatial encoding technique based on frequency division for blood flow estimation is presented. The spatial encoding is carried out by dividing the available bandwidth of the transducer into a number of narrow frequency bands with approximately disjoint spectral support. By assigning one band to one virtual source, all virtual sources can be excited simultaneously. The received echoes are beamformed using Synthetic Transmit Aperture beamforming. The velocity of the moving blood is estimated using a cross-correlation estimator. The simulation tool Field...
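
    To illustrate the final estimation step only, here is a minimal cross-correlation lag estimator of the kind commonly used for this purpose; the mapping from sample lag to blood velocity (via pulse repetition interval and speed of sound) is omitted, and the signals are illustrative assumptions, not the paper's simulation setup.

    ```python
    import numpy as np

    def xcorr_lag(sig_a, sig_b):
        """Estimate the sample shift between two echo lines from the peak of
        their cross-correlation; the shift maps to velocity through the pulse
        repetition interval and speed of sound (not modeled here)."""
        corr = np.correlate(sig_b, sig_a, mode="full")
        return np.argmax(corr) - (len(sig_a) - 1)

    # Toy usage: the second emission sees the scatterers shifted by 3 samples
    rng = np.random.default_rng(2)
    a = rng.normal(size=256)
    b = np.roll(a, 3) + 0.1 * rng.normal(size=256)
    print(xcorr_lag(a, b))   # -> 3
    ```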

  7. on the performance of Autoregressive Moving Average Polynomial

    African Journals Online (AJOL)

    Timothy Ademakinwa

    estimated using least squares and Newton-Raphson iterative methods. To determine the order of the ... r is the degree of the polynomial while j is the number of lags of the ..... use a real time series dataset, monthly rainfall and temperature series ...

  8. Output-Only Modal Parameter Recursive Estimation of Time-Varying Structures via a Kernel Ridge Regression FS-TARMA Approach

    Directory of Open Access Journals (Sweden)

    Zhi-Sai Ma

    2017-01-01

    Full Text Available Modal parameter estimation plays an important role in vibration-based damage detection and is worth more attention and investigation, as changes in modal parameters are usually used as damage indicators. This paper focuses on the problem of output-only modal parameter recursive estimation of time-varying structures based upon parameterized representations of the time-dependent autoregressive moving average (TARMA) model. A kernel ridge regression functional series TARMA (FS-TARMA) recursive identification scheme is proposed and subsequently employed for the modal parameter estimation of a numerical three-degree-of-freedom time-varying structural system and a laboratory time-varying structure consisting of a simply supported beam and a moving mass sliding on it. The proposed method is comparatively assessed against an existing recursive pseudolinear regression FS-TARMA approach via Monte Carlo experiments and shown to be capable of accurately tracking the time-varying dynamics in a recursive manner.

  9. Variance of discharge estimates sampled using acoustic Doppler current profilers from moving boats

    Science.gov (United States)

    Garcia, Carlos M.; Tarrab, Leticia; Oberg, Kevin; Szupiany, Ricardo; Cantero, Mariano I.

    2012-01-01

    This paper presents a model for quantifying the random errors (i.e., variance) of acoustic Doppler current profiler (ADCP) discharge measurements from moving boats for different sampling times. The model focuses on the random processes in the sampled flow field and has been developed using statistical methods currently available for uncertainty analysis of velocity time series. Analysis of field data collected using ADCP from moving boats from three natural rivers of varying sizes and flow conditions shows that, even though the estimate of the integral time scale of the actual turbulent flow field is larger than the sampling interval, the integral time scale of the sampled flow field is on the order of the sampling interval. Thus, an equation for computing the variance error in discharge measurements associated with different sampling times, assuming uncorrelated flow fields is appropriate. The approach is used to help define optimal sampling strategies by choosing the exposure time required for ADCPs to accurately measure flow discharge.

  10. A moving blocker-based strategy for simultaneous megavoltage and kilovoltage scatter correction in cone-beam computed tomography image acquired during volumetric modulated arc therapy

    International Nuclear Information System (INIS)

    Ouyang, Luo; Lee, Huichen Pam; Wang, Jing

    2015-01-01

    Purpose: To evaluate a moving blocker-based approach in estimating and correcting megavoltage (MV) and kilovoltage (kV) scatter contamination in kV cone-beam computed tomography (CBCT) acquired during volumetric modulated arc therapy (VMAT). Methods and materials: During the concurrent CBCT/VMAT acquisition, a physical attenuator (i.e., “blocker”) consisting of equally spaced lead strips was mounted and moved constantly between the CBCT source and patient. Both kV and MV scatter signals were estimated from the blocked region of the imaging panel, and interpolated into the unblocked region. A scatter corrected CBCT was then reconstructed from the unblocked projections after scatter subtraction using an iterative image reconstruction algorithm based on constraint optimization. Experimental studies were performed on a Catphan® phantom and an anthropomorphic pelvis phantom to demonstrate the feasibility of using a moving blocker for kV–MV scatter correction. Results: Scatter induced cupping artifacts were substantially reduced in the moving blocker corrected CBCT images. Quantitatively, the root mean square error of Hounsfield units (HU) in seven density inserts of the Catphan phantom was reduced from 395 to 40. Conclusions: The proposed moving blocker strategy greatly improves the image quality of CBCT acquired with concurrent VMAT by reducing the kV–MV scatter induced HU inaccuracy and cupping artifacts
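
    The interpolation step at the heart of the scatter estimate can be sketched in one dimension: readings behind the lead strips are treated as (nearly) pure scatter and interpolated across the unblocked columns before subtraction. Strip pitch, signal levels, and the 1-D geometry are all illustrative assumptions; real implementations smooth the blocked readings and work in 2-D.

    ```python
    import numpy as np

    def estimate_scatter(row, blocked_cols):
        """Linearly interpolate scatter-only readings (behind the lead strips)
        across the full detector row."""
        cols = np.arange(row.size)
        return np.interp(cols, blocked_cols, row[blocked_cols])

    # Toy usage: primary + smooth scatter field, strips every 16 columns (assumed)
    cols = np.arange(512)
    primary = np.where((cols > 128) & (cols < 384), 1000.0, 200.0)
    scatter = 300 + 100 * np.sin(cols / 512 * np.pi)
    blocked = np.arange(0, 512, 16)
    row = primary + scatter
    row[blocked] = scatter[blocked]          # primary is absorbed by the strips
    corrected = row - estimate_scatter(row, blocked)
    ```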

  11. Data-based depth estimation of an incoming autonomous underwater vehicle.

    Science.gov (United States)

    Yang, T C; Xu, Wen

    2016-10-01

    The data-based method for estimating the depth of a moving source is demonstrated experimentally for an incoming autonomous underwater vehicle traveling toward a vertical line array (VLA) of receivers at constant speed/depth. The method assumes no information on the sound-speed and bottom profile. Performing a wavenumber analysis of a narrowband signal for each hydrophone, the energy of the (modal) spectral peaks as a function of the receiver depth is used to estimate the depth of the source, traveling within the depth span of the VLA. This paper reviews the theory, discusses practical implementation issues, and presents the data analysis results.

  12. Alternative Estimates of the Reliability of College Grade Point Averages. Professional File. Article 130, Spring 2013

    Science.gov (United States)

    Saupe, Joe L.; Eimers, Mardy T.

    2013-01-01

    The purpose of this paper is to explore differences in the reliabilities of cumulative college grade point averages (GPAs), estimated for unweighted and weighted, one-semester, 1-year, 2-year, and 4-year GPAs. Using cumulative GPAs for a freshman class at a major university, we estimate internal consistency (coefficient alpha) reliabilities for…
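
    The internal-consistency measure named in the record, coefficient alpha, is straightforward to compute from a students-by-semesters matrix of grades. The sketch below is the standard Cronbach's alpha on synthetic GPA data, not anything specific to this paper's dataset.

    ```python
    import numpy as np

    def cronbach_alpha(scores):
        """Coefficient alpha for an (n_students x k_semesters) score matrix:
        alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
        k = scores.shape[1]
        item_var = scores.var(axis=0, ddof=1).sum()
        total_var = scores.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_var / total_var)

    # Toy usage: semester GPAs for 100 students over 8 semesters (synthetic)
    rng = np.random.default_rng(3)
    ability = rng.normal(3.0, 0.4, size=(100, 1))
    gpas = np.clip(ability + rng.normal(0, 0.3, size=(100, 8)), 0, 4)
    print(f"alpha = {cronbach_alpha(gpas):.2f}")
    ```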

  13. The GAAS metagenomic tool and its estimations of viral and microbial average genome size in four major biomes.

    Directory of Open Access Journals (Sweden)

    Florent E Angly

    2009-12-01

    Full Text Available Metagenomic studies characterize both the composition and diversity of uncultured viral and microbial communities. BLAST-based comparisons have typically been used for such analyses; however, sampling biases, high percentages of unknown sequences, and the use of arbitrary thresholds to find significant similarities can decrease the accuracy and validity of estimates. Here, we present Genome relative Abundance and Average Size (GAAS), a complete software package that provides improved estimates of community composition and average genome length for metagenomes in both textual and graphical formats. GAAS implements a novel methodology to control for sampling bias via length normalization, to adjust for multiple BLAST similarities by similarity weighting, and to select significant similarities using relative alignment lengths. In benchmark tests, the GAAS method was robust to both high percentages of unknown sequences and to variations in metagenomic sequence read lengths. Re-analysis of the Sargasso Sea virome using GAAS indicated that standard methodologies for metagenomic analysis may dramatically underestimate the abundance and importance of organisms with small genomes in environmental systems. Using GAAS, we conducted a meta-analysis of microbial and viral average genome lengths in over 150 metagenomes from four biomes to determine whether genome lengths vary consistently between and within biomes, and between microbial and viral communities from the same environment. Significant differences between biomes and within aquatic sub-biomes (oceans, hypersaline systems, freshwater, and microbialites) suggested that average genome length is a fundamental property of environments driven by factors at the sub-biome level. The behavior of paired viral and microbial metagenomes from the same environment indicated that microbial and viral average genome sizes are independent of each other, but indicative of community responses to stressors and…

  14. Moving Zimbabwe Forward : an Evidence Based Policy Dialogue ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    Moving Zimbabwe Forward : an Evidence Based Policy Dialogue ... levels of poverty, unemployment, inflation and poor service provision in the areas of education, ... International Water Resources Association, in close collaboration with IDRC, ...

  15. Calculations of the electron-damping force on moving-edge dislocations

    International Nuclear Information System (INIS)

    Mohri, T.

    1982-11-01

    The dynamic effect of a moving dislocation has been recognized as one of the essential features of deformation behavior at very low temperatures, and damping mechanisms are the central problem in this field. Based on the free-electron-gas model, the electron-damping force (friction force) on a moving edge dislocation in the normal state is estimated. By applying classical MacKenzie-Sondheimer procedures, the electrical resistivity caused by a moving dislocation is first estimated, and the damping force is calculated as a Joule-heat energy dissipation. The calculated values are 3.63×10⁻⁶, 7.62×10⁻⁷, and 1.00×10⁻⁶ dyn·s/cm² for Al, Cu, and Pb, respectively. These values agree fairly well with experimental results. Numerical calculations are also carried out to estimate magnetic effects caused by a moving dislocation; the results are negative, and no magnetic effects are expected. In order to treat deformation behavior at very low temperatures, a unification of three important deformation problems is attempted and a fundamental equation is derived

  16. High-global warming potential F-gas emissions in California: comparison of ambient-based versus inventory-based emission estimates, and implications of refined estimates.

    Science.gov (United States)

    Gallagher, Glenn; Zhan, Tao; Hsu, Ying-Kuang; Gupta, Pamela; Pederson, James; Croes, Bart; Blake, Donald R; Barletta, Barbara; Meinardi, Simone; Ashford, Paul; Vetter, Arnie; Saba, Sabine; Slim, Rayan; Palandre, Lionel; Clodic, Denis; Mathis, Pamela; Wagner, Mark; Forgie, Julia; Dwyer, Harry; Wolf, Katy

    2014-01-21

    To provide information for greenhouse gas reduction policies, the California Air Resources Board (CARB) inventories annual emissions of high-global-warming potential (GWP) fluorinated gases, the fastest growing sector of greenhouse gas (GHG) emissions globally. Baseline 2008 F-gas emissions estimates for selected chlorofluorocarbons (CFC-12), hydrochlorofluorocarbons (HCFC-22), and hydrofluorocarbons (HFC-134a) made with an inventory-based methodology were compared to emissions estimates made by ambient-based measurements. Significant discrepancies were found, with the inventory-based emissions methodology resulting in a systematic 42% under-estimation of CFC-12 emissions from older refrigeration equipment and older vehicles, and a systematic 114% overestimation of emissions for HFC-134a, a refrigerant substitute for phased-out CFCs. Initial, inventory-based estimates for all F-gas emissions had assumed that equipment is no longer in service once it reaches its average lifetime of use. Revised emission estimates using improved models for equipment age at end-of-life, inventories, and leak rates specific to California resulted in F-gas emissions estimates in closer agreement to ambient-based measurements. The discrepancies between inventory-based estimates and ambient-based measurements were reduced from -42% to -6% for CFC-12, and from +114% to +9% for HFC-134a.

  17. Beamforming using subspace estimation from a diagonally averaged sample covariance.

    Science.gov (United States)

    Quijano, Jorge E; Zurk, Lisa M

    2017-08-01

    The potential benefit of a large-aperture sonar array for high resolution target localization is often challenged by the lack of sufficient data required for adaptive beamforming. This paper introduces a Toeplitz-constrained estimator of the clairvoyant signal covariance matrix corresponding to multiple far-field targets embedded in background isotropic noise. The estimator is obtained by averaging along subdiagonals of the sample covariance matrix, followed by covariance extrapolation using the method of maximum entropy. The sample covariance is computed from limited data snapshots, a situation commonly encountered with large-aperture arrays in environments characterized by short periods of local stationarity. Eigenvectors computed from the Toeplitz-constrained covariance are used to construct signal-subspace projector matrices, which are shown to reduce background noise and improve detection of closely spaced targets when applied to subspace beamforming. Monte Carlo simulations corresponding to increasing array aperture suggest convergence of the proposed projector to the clairvoyant signal projector, thereby outperforming the classic projector obtained from the sample eigenvectors. Beamforming performance of the proposed method is analyzed using simulated data, as well as experimental data from the Shallow Water Array Performance experiment.
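
    The diagonal-averaging step is easy to sketch: estimate each lag by averaging the corresponding subdiagonal of the sample covariance and rebuild a Hermitian Toeplitz matrix. The maximum-entropy extrapolation and the subspace beamforming that follow in the paper are omitted, and the plane-wave snapshot data are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.linalg import toeplitz

    def toeplitz_average(S):
        """Average the sample covariance S along its subdiagonals and rebuild
        a Hermitian Toeplitz matrix (extrapolation step omitted)."""
        n = S.shape[0]
        r = np.array([np.mean(np.diag(S, -k)) for k in range(n)])
        return toeplitz(r, r.conj())

    # Toy usage: few snapshots of a single far-field plane wave in noise
    rng = np.random.default_rng(4)
    n, snaps = 16, 5
    steer = np.exp(1j * np.pi * np.arange(n) * np.sin(0.3))
    X = (steer[:, None] * (rng.normal(size=snaps) + 1j * rng.normal(size=snaps))
         + 0.5 * (rng.normal(size=(n, snaps)) + 1j * rng.normal(size=(n, snaps))))
    S = X @ X.conj().T / snaps                # rank-deficient sample covariance
    S_toep = toeplitz_average(S)              # Toeplitz-constrained estimate
    ```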

  18. A Modelling Framework for estimating Road Segment Based On-Board Vehicle Emissions

    International Nuclear Information System (INIS)

    Lin-Jun, Yu; Ya-Lan, Liu; Yu-Huan, Ren; Zhong-Ren, Peng; Meng, Liu Meng

    2014-01-01

    Traditional traffic emission inventory models aim to provide overall emissions at the regional level, which cannot meet planners' demand for detailed and accurate traffic emission information at the road segment level. Therefore, a road segment-based emission model for estimating light-duty vehicle emissions is proposed, where floating car technology is used to collect information on the traffic condition of roads. The analysis framework consists of three major modules: the Average Speed and Average Acceleration Module (ASAAM), the Traffic Flow Estimation Module (TFEM), and the Traffic Emission Module (TEM). The ASAAM is used to obtain the average speed and average acceleration of the fleet on each road segment using floating car data (FCD). The TFEM is designed to estimate the traffic flow of each road segment in a given period, based on the speed-flow relationship and the spatial distribution of traffic flow. Finally, the TEM estimates emissions from each road segment based on the results of the previous two modules. Hourly on-road light-duty vehicle emissions for each road segment in Shenzhen's traffic network are obtained using this framework. The temporal-spatial distribution patterns of road-segment pollutant emissions are also summarized. The results show that high-emission road segments cluster in several important regions of Shenzhen, and that road segments emit more during rush hours than in other periods. The presented case study demonstrates that the proposed approach is feasible and easy to use, helping planners make informed decisions by providing detailed road segment-based emission information

  19. Estimating average shock pressures recorded by impactite samples based on universal stage investigations of planar deformation features in quartz - Sources of error and recommendations

    Science.gov (United States)

    Holm-Alwmark, S.; Ferrière, L.; Alwmark, C.; Poelchau, M. H.

    2018-01-01

    Planar deformation features (PDFs) in quartz are the most widely used indicator of shock metamorphism in terrestrial rocks. They can also be used for estimating average shock pressures that quartz-bearing rocks have been subjected to. Here we report on a number of observations and problems that we have encountered when performing universal stage measurements and crystallographically indexing of PDF orientations in quartz. These include a comparison between manual and automated methods of indexing PDFs, an evaluation of the new stereographic projection template, and observations regarding the PDF statistics related to the c-axis position and rhombohedral plane symmetry. We further discuss the implications that our findings have for shock barometry studies. Our study shows that the currently used stereographic projection template for indexing PDFs in quartz might induce an overestimation of rhombohedral planes with low Miller-Bravais indices. We suggest, based on a comparison of different shock barometry methods, that a unified method of assigning shock pressures to samples based on PDFs in quartz is necessary to allow comparison of data sets. This method needs to take into account not only the average number of PDF sets/grain but also the number of high Miller-Bravais index planes, both of which are important factors according to our study. Finally, we present a suggestion for such a method (which is valid for nonporous quartz-bearing rock types), which consists of assigning quartz grains into types (A-E) based on the PDF orientation pattern, and then calculation of a mean shock pressure for each sample.

  20. Bootstrap inference for pre-averaged realized volatility based on non-overlapping returns

    DEFF Research Database (Denmark)

    Gonçalves, Sílvia; Hounyo, Ulrich; Meddahi, Nour

    The main contribution of this paper is to propose bootstrap methods for realized volatility-like estimators defined on pre-averaged returns. In particular, we focus on the pre-averaged realized volatility estimator proposed by Podolskij and Vetter (2009). This statistic can be written (up to a bias correction)... The non-overlapping nature of the pre-averaged returns implies that these are asymptotically independent, but possibly heteroskedastic. This motivates the application of the wild bootstrap in this context. We provide a proof of the first order asymptotic validity of this method for percentile and percentile-t intervals. Our Monte Carlo simulations show that the wild bootstrap can improve the finite sample properties of the existing first order asymptotic theory provided we choose the external random variable appropriately. We use empirical work to illustrate its use in practice.

  1. Simple Moving Voltage Average Incremental Conductance MPPT Technique with Direct Control Method under Nonuniform Solar Irradiance Conditions

    Directory of Open Access Journals (Sweden)

    Amjad Ali

    2015-01-01

    Full Text Available A new simple moving voltage average (SMVA) technique with a fixed-step direct control incremental conductance method is introduced to reduce solar photovoltaic voltage (VPV) oscillation under nonuniform solar irradiation conditions. To evaluate and validate the performance of the proposed SMVA method in comparison with the conventional fixed-step direct control incremental conductance method under extreme conditions, different scenarios were simulated. Simulation results show that in most cases SMVA gives better results, with more stability than the traditional fixed-step direct control INC, faster tracking, a reduction in sustained oscillations, fast steady-state response, and robustness. The steady-state oscillations are almost eliminated because dP/dV is extremely small around the maximum power (MP) point, which verifies that the proposed method is suitable for standalone PV systems under extreme weather conditions, not only in terms of bus voltage stability but also in overall system efficiency.
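
    The core smoothing operation is a plain moving average of the sensed PV voltage before it enters the incremental conductance update. A minimal sketch, with the window length and the synthetic voltage trace as assumptions:

    ```python
    import numpy as np

    def smva(v_pv, window=8):
        """Simple moving average of the sensed PV voltage; the window length
        is an assumption, tuned in practice against tracking speed."""
        kernel = np.ones(window) / window
        return np.convolve(v_pv, kernel, mode="valid")

    # Toy usage: noisy PV voltage under fluctuating irradiance
    rng = np.random.default_rng(5)
    v = 30 + 2 * np.sin(np.linspace(0, 6, 500)) + rng.normal(0, 0.8, 500)
    v_smooth = smva(v)   # feed this, not v, to the fixed-step INC MPPT update
    ```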

  2. Bayesian model averaging and weighted average least squares : Equivariance, stability, and numerical issues

    NARCIS (Netherlands)

    De Luca, G.; Magnus, J.R.

    2011-01-01

    In this article, we describe the estimation of linear regression models with uncertainty about the choice of the explanatory variables. We introduce the Stata commands bma and wals, which implement, respectively, the exact Bayesian model-averaging estimator and the weighted-average least-squares estimator.

  3. The association between estimated average glucose levels and fasting plasma glucose levels in a rural tertiary care centre

    Directory of Open Access Journals (Sweden)

    Raja Reddy P

    2013-01-01

    Full Text Available The level of hemoglobin A1c (HbA1c), also known as glycated hemoglobin, indicates how well a patient's blood glucose has been controlled over the previous 8-12 weeks. HbA1c levels help patients and doctors understand whether a particular diabetes treatment is working and whether adjustments need to be made to the treatment. Because the HbA1c level is a marker of blood glucose over the previous 60-90 days, average blood glucose levels can be estimated from HbA1c levels. The aim of the present study was to investigate the relationship between estimated average glucose levels, as calculated from HbA1c levels, and fasting plasma glucose levels. Methods: Type 2 diabetes patients attending the medicine outpatient department of RL Jalappa hospital, Kolar, between March 2010 and July 2012 were included. The estimated average glucose levels (mg/dl) were calculated using the following formula: 28.7 × HbA1c − 46.7. Glucose levels were determined using the hexokinase method, and HbA1c levels using an HPLC method. Correlation and the independent t-test were used as tests of significance for quantitative data. Results: A strong positive correlation between fasting plasma glucose levels and estimated average blood glucose levels (r=0.54, p=0.0001) was observed, and the difference was statistically significant. Conclusion: Reporting the estimated average glucose level together with the HbA1c level is believed to assist patients and doctors in determining the effectiveness of blood glucose control measures.
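
    The conversion used in the study is a single linear formula; a one-line helper makes it concrete (the 28.7 and −46.7 coefficients come from the record itself):

    ```python
    def estimated_average_glucose(hba1c_percent: float) -> float:
        """eAG (mg/dL) from HbA1c (%), using the study's formula
        28.7 * HbA1c - 46.7."""
        return 28.7 * hba1c_percent - 46.7

    print(estimated_average_glucose(7.0))  # -> 154.2 mg/dL
    ```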

  4. Moving object detection using dynamic motion modelling from UAV aerial images.

    Science.gov (United States)

    Saif, A F M Saifuddin; Prabuwono, Anton Satria; Mahayuddin, Zainal Rasyid

    2014-01-01

    Motion-analysis-based moving object detection from UAV aerial images is still an unsolved issue because proper motion estimation is not taken into account. Existing moving object detection approaches for UAV aerial images do not use motion-based pixel intensity measurements to detect moving objects robustly. Moreover, current research on moving object detection from UAV aerial images mostly depends on either frame differencing or segmentation alone. This research has two main purposes: first, to develop a new motion model called DMM (dynamic motion model), and second, to apply the proposed segmentation approach SUED (segmentation using edge-based dilation) with frame differencing embedded together with the DMM model. The proposed DMM model provides effective search windows based on the highest pixel intensity, so that only specific areas are segmented for moving objects rather than searching the whole frame with SUED. At each stage of the proposed scheme, the experimental fusion of DMM and SUED extracts moving objects faithfully. The experimental results reveal that the proposed DMM and SUED successfully demonstrate the validity of the proposed methodology.

  5. Multi-pulse orbits and chaotic dynamics in motion of parametrically excited viscoelastic moving belt

    International Nuclear Information System (INIS)

    Zhang Wei; Yao Minghui

    2006-01-01

    In this paper, the Shilnikov-type multi-pulse orbits and chaotic dynamics of a parametrically excited viscoelastic moving belt are studied in detail. Using a Kelvin-type viscoelastic constitutive law, the equations of motion for the viscoelastic moving belt with external damping and parametric excitation are given. The four-dimensional averaged equation for the case of primary parametric resonance is obtained by directly applying the method of multiple scales and Galerkin's approach to the partial differential governing equation of the viscoelastic moving belt. From the averaged equations obtained here, the theory of normal forms is used to give the explicit expressions of the normal form with a double zero and a pair of pure imaginary eigenvalues. Based on the normal form, the energy-phase method is employed to analyze the global bifurcations and chaotic dynamics of the parametrically excited viscoelastic moving belt. The global bifurcation analysis indicates that there exist heteroclinic bifurcations and Shilnikov-type multi-pulse homoclinic orbits in the averaged equation, which implies the existence of chaos in the sense of the Smale horseshoe for the parametrically excited viscoelastic moving belt. The chaotic motions of viscoelastic moving belts are also found by numerical simulation, and a new phenomenon of multi-pulse jumping orbits is observed in three-dimensional phase space

  6. A landslide-quake detection algorithm with STA/LTA and diagnostic functions of moving average and scintillation index: A preliminary case study of the 2009 Typhoon Morakot in Taiwan

    Science.gov (United States)

    Wu, Yu-Jie; Lin, Guan-Wei

    2017-04-01

    Since 1999, Taiwan has experienced a rapid rise in the number of landslides, which peaked after the 2009 Typhoon Morakot. Although it has been shown that ground-motion signals induced by slope processes can be recorded by seismographs, they are difficult to distinguish in continuous seismic records because of the lack of distinct P and S waves. In this study, we combine three common seismic detectors: the short-term average/long-term average (STA/LTA) approach and two diagnostic functions, a moving average and the scintillation index. Based on these detectors, we have established an auto-detection algorithm for landslide-quakes, with detection thresholds defined to distinguish landslide-quakes from earthquakes and background noise. To further evaluate the proposed detection algorithm, we apply it to seismic archives recorded by the Broadband Array in Taiwan for Seismology (BATS) during the 2009 Typhoon Morakot, and the discrete landslide-quakes detected by the automatic algorithm are subsequently located. The detection results are consistent with those of visual inspection, and the algorithm can hence be used to automatically monitor landslide-quakes.
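
    Of the three detectors, STA/LTA is the simplest to sketch: the ratio of a short-term to a long-term average of signal energy, with a trigger when the ratio exceeds a threshold. Window lengths, sampling rate, threshold, and the synthetic trace below are assumptions, not the study's tuned values.

    ```python
    import numpy as np

    def sta_lta(x, fs, sta_win=1.0, lta_win=30.0):
        """Classic STA/LTA ratio on the squared trace, computed with a
        cumulative sum; window lengths are in seconds."""
        e = x.astype(float) ** 2
        csum = np.concatenate(([0.0], np.cumsum(e)))
        n_sta, n_lta = int(sta_win * fs), int(lta_win * fs)
        sta = (csum[n_lta:] - csum[n_lta - n_sta:-n_sta]) / n_sta
        lta = (csum[n_lta:] - csum[:-n_lta]) / n_lta
        return sta / (lta + 1e-12)

    # Toy usage: emergent landslide-quake-like burst in noise at 100 Hz
    rng = np.random.default_rng(6)
    fs = 100
    trace = rng.normal(0, 1, 60 * fs)
    trace[3500:4500] += rng.normal(0, 4, 1000)   # emergent energy burst
    ratio = sta_lta(trace, fs)
    trigger = ratio > 4.0                        # assumed detection threshold
    ```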

  7. Small-mammal density estimation: A field comparison of grid-based vs. web-based density estimators

    Science.gov (United States)

    Parmenter, R.R.; Yates, Terry L.; Anderson, D.R.; Burnham, K.P.; Dunnum, J.L.; Franklin, A.B.; Friggens, M.T.; Lubow, B.C.; Miller, M.; Olson, G.S.; Parmenter, Cheryl A.; Pollard, J.; Rexstad, E.; Shenk, T.M.; Stanley, T.R.; White, Gary C.

    2003-01-01

    …“blind” test allowed us to evaluate the influence of expertise and experience in calculating density estimates in comparison to simply using default values in programs CAPTURE and DISTANCE. While the rodent sample sizes were considerably smaller than the recommended minimum for good model results, we found that several models performed well empirically, including the web-based uniform and half-normal models in program DISTANCE, and the grid-based models Mb and Mbh in program CAPTURE (with Â adjusted by species-specific full mean maximum distance moved (MMDM) values). These models produced accurate D̂ values (with 95% confidence intervals that included the true D values) and exhibited acceptable bias but poor precision. However, in linear regression analyses comparing each model's D̂ values to the true D values over the range of observed test densities, only the web-based uniform model exhibited a regression slope near 1.0; all other models showed substantial slope deviations, indicating biased estimates at higher or lower density values. In addition, the grid-based D̂ analyses using full MMDM values for Ŵ area adjustments required a number of theoretical assumptions of uncertain validity, and we therefore viewed their empirical successes with caution. Finally, density estimates from the independent analysts were highly variable, but estimates from web-based approaches had smaller mean square errors and better achieved confidence-interval coverage of D than did grid-based approaches. Our results support the contention that web-based approaches for density estimation of small-mammal populations are both theoretically and empirically superior to grid-based approaches, even when sample size is far less than often recommended. In view of the increasing need for standardized environmental measures for comparisons among ecosystems and through time, analytical models based on distance sampling appear to offer accurate density estimation approaches for research…

  8. Procedure manual for the estimation of average indoor radon-daughter concentrations using the filtered alpha-track method

    International Nuclear Information System (INIS)

    George, J.L.

    1988-04-01

    One of the measurement needs of US Department of Energy (DOE) remedial action programs is the estimation of the annual-average indoor radon-daughter concentration (RDC) in structures. The filtered alpha-track method, using a 1-year exposure period, can be used to accomplish RDC estimation for the DOE remedial action programs. This manual describes the procedure used to obtain filtered alpha-track measurements and to derive average RDC estimates from the measurements. Appropriate quality-assurance and quality-control programs are also presented. The 'prompt' alpha-track method of exposing monitors for 2 to 6 months during specific periods of the year is also briefly discussed in this manual; however, the prompt alpha-track method has been validated only for use in the Mesa County, Colorado, area. 3 refs., 3 figs

  9. Grid occupancy estimation for environment perception based on belief functions and PCR6

    Science.gov (United States)

    Moras, Julien; Dezert, Jean; Pannetier, Benjamin

    2015-05-01

    In this contribution, we propose to improve the grid map occupancy estimation method developed so far, which is based on belief function modeling and the classical Dempster's rule of combination. Grid maps offer a useful representation of the perceived world for mobile robot navigation and will play a major role in the security (obstacle avoidance) of the next generations of terrestrial vehicles, as well as in future autonomous navigation systems. In a grid map, the occupancy of each cell, representing a small piece of the area surrounding the robot, must first be estimated from sensor measurements (typically LIDAR or camera), and the cells must then be classified into different classes in order to obtain a complete and precise perception of the dynamic environment in which the robot moves. So far, the estimation and the grid map updating have been done using fusion techniques based on the probabilistic framework or on the classical belief function framework, through an inverse model of the sensors, mainly because the latter offers an interesting way of managing uncertainties when the quality of the available information is low and the sources of information appear to conflict. To improve the performance of grid map estimation, we propose in this paper to replace Dempster's rule of combination with the PCR6 rule (Proportional Conflict Redistribution rule #6) proposed in DSmT (Dezert-Smarandache Theory). As an illustrating scenario, we consider a platform moving in a dynamic area, and we compare our new realistic simulation results (based on a LIDAR sensor) with those obtained by the probabilistic and the classical belief-based approaches.

  10. Estimation of Model's Marginal likelihood Using Adaptive Sparse Grid Surrogates in Bayesian Model Averaging

    Science.gov (United States)

    Zeng, X.

    2015-12-01

    A large number of model executions are required to obtain alternative conceptual models' predictions and their posterior probabilities in Bayesian model averaging (BMA). The posterior model probability is estimated through a model's marginal likelihood and prior probability, and this heavy computational burden hinders the implementation of BMA prediction, especially for elaborate marginal likelihood estimators. To overcome this burden, an adaptive sparse grid (SG) stochastic collocation method is used to build surrogates for alternative conceptual models through a numerical experiment on a synthetic groundwater model. BMA predictions depend on the model posterior weights (or marginal likelihoods), so this study also evaluated four marginal likelihood estimators: the arithmetic mean estimator (AME), the harmonic mean estimator (HME), the stabilized harmonic mean estimator (SHME), and the thermodynamic integration estimator (TIE). The results demonstrate that TIE is accurate in estimating the conceptual models' marginal likelihoods, and BMA-TIE has better predictive performance than the other BMA predictions. TIE is also highly stable: the marginal likelihoods repeatedly estimated by TIE have significantly less variability than those estimated by the other estimators. In addition, the SG surrogates are efficient in facilitating BMA predictions, especially for BMA-TIE: the number of model executions needed for building the surrogates is 4.13%, 6.89%, 3.44%, and 0.43% of the model executions required by BMA-AME, BMA-HME, BMA-SHME, and BMA-TIE, respectively.
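
    Two of the four estimators named here, AME and HME, are compact enough to sketch. The toy below checks them against a conjugate normal model whose exact marginal likelihood is known in closed form; TIE and SHME are omitted, and all numbers are illustrative.

    ```python
    import numpy as np
    from scipy.special import logsumexp

    def log_ame(loglik_prior_draws):
        """Arithmetic mean estimator: average likelihood over draws from the
        prior, computed in log space for numerical stability."""
        return logsumexp(loglik_prior_draws) - np.log(len(loglik_prior_draws))

    def log_hme(loglik_posterior_draws):
        """Harmonic mean estimator: harmonic mean of likelihoods over draws
        from the posterior (notoriously high-variance)."""
        n = len(loglik_posterior_draws)
        return np.log(n) - logsumexp(-np.asarray(loglik_posterior_draws))

    # Conjugate toy: y ~ N(theta, 1), theta ~ N(0, 1)  =>  marginal y ~ N(0, 2)
    rng = np.random.default_rng(7)
    y = 0.8
    ll = lambda th: -0.5 * np.log(2 * np.pi) - 0.5 * (y - th) ** 2
    print(log_ame(ll(rng.normal(0, 1, 100_000))))                 # prior draws
    print(log_hme(ll(rng.normal(y / 2, np.sqrt(0.5), 100_000))))  # posterior draws
    print(-0.5 * np.log(4 * np.pi) - y ** 2 / 4)                  # exact value
    ```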

  11. Time Series Forecasting Using Radial Basis Function (RBF) and Auto Regressive Integrated Moving Average (ARIMA) Models

    Directory of Open Access Journals (Sweden)

    DT Wiyanti

    2013-07-01

    Full Text Available One of the most developed forecasting methods today is time series analysis, a quantitative approach in which past data serve as the reference for forecasting the future. Various studies have proposed methods for solving time series problems, among them statistics, neural networks, wavelets, and fuzzy systems. These methods have different strengths and weaknesses, but real-world problems are complex, and a single method alone may not be able to handle them well. This article discusses the combination of two methods, Auto Regressive Integrated Moving Average (ARIMA) and Radial Basis Function (RBF). The reason for combining the two methods is the assumption that a single method cannot totally identify all the characteristics of a time series. The article covers forecasting of the Indonesian Wholesale Price Index (IHPB) and Indonesian commodity inflation data; both datasets span from 2006 to several months into 2012, and each has six variables. The forecasting results of the ARIMA-RBF method are compared with those of the ARIMA and RBF methods individually. The analysis shows that the combined ARIMA-RBF model gives more accurate results than either method alone, as seen in the visual plots, MAPE, and RMSE of all variables for the two test datasets.
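
    A common way to realize such a hybrid, sketched below under stated assumptions, is to let ARIMA capture the linear structure and fit a small Gaussian-RBF ridge regression on lagged ARIMA residuals to pick up leftover nonlinearity. Model orders, lag count, number of centers, kernel width, and the ridge penalty are all assumptions, not the paper's tuned values.

    ```python
    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(8)
    n = 300
    t = np.arange(n)
    y = 10 + 0.02 * t + np.sin(t / 6) + 0.3 * rng.normal(size=n)  # synthetic

    fit = ARIMA(y, order=(2, 1, 1)).fit()      # linear part
    resid = fit.resid

    # RBF ridge regression on lagged residuals (nonlinear part)
    lags, n_centers, lam = 4, 20, 1e-3
    X = np.column_stack([resid[i:n - lags + i] for i in range(lags)])
    target = resid[lags:]
    centers = X[rng.choice(len(X), n_centers, replace=False)]
    width = np.median(np.abs(X)) + 1e-9

    def design(X):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * width ** 2))

    Phi = design(X)
    w = np.linalg.solve(Phi.T @ Phi + lam * np.eye(n_centers), Phi.T @ target)
    hybrid_fitted = fit.predict()[lags:] + design(X) @ w   # ARIMA + RBF correction
    ```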

  12. Books average previous decade of economic misery.

    Science.gov (United States)

    Bentley, R Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios

    2014-01-01

    For the 20th century since the Depression, we find a strong correlation between a 'literary misery index' derived from English-language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in the German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade.

  13. Research on moving object detection based on frog's eyes

    Science.gov (United States)

    Fu, Hongwei; Li, Dongguang; Zhang, Xinyuan

    2008-12-01

    On the basis of the object-information processing mechanism of frog's eyes, this paper discusses a bionic detection technology suitable for processing object information based on frog vision. First, a bionic detection theory imitating frog vision is established; it is a parallel processing mechanism that includes the pick-up and pretreatment of object information, parallel separation of the digital image, parallel processing, and information synthesis. A computer vision detection system is described for detecting moving objects of specific color and shape; experiments indicate that such objects can be detected even against an interfering background. A moving-object detection electronic model imitating biological vision based on frog's eyes is established: the analog video signal is first digitized, and the digital signal is then separated in parallel by an FPGA. In the parallel processing, video information can be captured, processed, and displayed at the same time, and information fusion is performed via the DSP HPI ports in order to transmit the data processed by the DSP. This system can cover a larger visual field and achieve higher image resolution than ordinary monitoring systems. In summary, simulation experiments on edge detection of moving objects with the Canny algorithm based on this system indicate that it can detect the edges of moving objects in real time; the feasibility of the bionic model was fully demonstrated in the engineering system, laying a solid foundation for future studies of detection technology imitating biological vision.

  14. CHAOS: An SDN-Based Moving Target Defense System

    Directory of Open Access Journals (Sweden)

    Yuan Shi

    2017-01-01

    Full Text Available Moving target defense (MTD) provides a dynamic and proactive network defense that reduces or moves the attack surface available for exploitation. However, it is difficult for traditional networks to realize dynamic and proactive security defense effectively and comprehensively. Software-defined networking (SDN) points out a brand-new path for building such a dynamic and proactive defense system. In this paper, we propose CHAOS, an SDN-based MTD system. Utilizing the programmability and flexibility of SDN, CHAOS obfuscates the attack surface, including host mutation obfuscation, port obfuscation, and obfuscation based on decoy servers, thereby enhancing the unpredictability of the networking environment. We propose the Chaos Tower Obfuscation (CTO) method, which uses the Chaos Tower Structure (CTS) to depict the hierarchy of all the hosts in an intranet and to define expected and unexpected connections. Moreover, we develop fast CTO algorithms to achieve different degrees of obfuscation for the hosts in each layer. We design and implement CHAOS as an application of an SDN controller. Our approach makes it very easy to realize moving target defense in networks, and our experimental results show that a network protected by CHAOS is capable of decreasing the percentage of information disclosure effectively, thereby guaranteeing the normal flow of traffic.

  15. Analysis and comparison of safety models using average daily, average hourly, and microscopic traffic.

    Science.gov (United States)

    Wang, Ling; Abdel-Aty, Mohamed; Wang, Xuesong; Yu, Rongjie

    2018-02-01

    There have been plenty of traffic safety studies based on average daily traffic (ADT), average hourly traffic (AHT), or microscopic traffic at 5 min intervals. Nevertheless, not enough research has compared the performance of these three types of safety studies, and few previous studies have attempted to determine whether the results of one type of study are transferable to the other two. First, this study built three models: a Bayesian Poisson-lognormal model to estimate the daily crash frequency using ADT, a Bayesian Poisson-lognormal model to estimate the hourly crash frequency using AHT, and a Bayesian logistic regression model for real-time safety analysis using microscopic traffic. The model results showed that the crash contributing factors found by the different models were comparable but not the same. Four variables, i.e., the logarithm of volume, the standard deviation of speed, the logarithm of segment length, and the existence of a diverge segment, were positively significant in all three models. Additionally, weaving segments experienced higher daily and hourly crash frequencies than merge and basic segments. Then, each of the ADT-based, AHT-based, and real-time models was used to estimate safety conditions at the daily and hourly levels; the real-time model was also used at 5 min intervals. The results revealed that the ADT- and AHT-based safety models performed similarly in predicting daily and hourly crash frequencies, and that the real-time safety model was able to provide hourly crash frequency. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. A Semantic-Based Indexing for Indoor Moving Objects

    OpenAIRE

    Tingting Ben; Xiaolin Qin; Ning Wang

    2014-01-01

    The increasing availability of indoor positioning, driven by techniques like RFID, Bluetooth, and smart phones, enables a variety of indoor location-based services (LBSs). Efficient semantic-constraint-based queries in indoor spaces play an important role in supporting and boosting LBSs. However, existing indoor index techniques cannot support such queries. To solve this problem, this paper addresses the challenge of indexing moving objects in indoor spaces,...

  17. Coordinated control of micro-grid based on distributed moving horizon control.

    Science.gov (United States)

    Ma, Miaomiao; Shao, Liyang; Liu, Xiangjie

    2018-05-01

    This paper proposes a distributed moving horizon coordinated control scheme for the power balance and economic dispatch problems of a micro-grid based on distributed generation. We design a power coordinated controller for each subsystem via moving horizon control by minimizing a suitable objective function. The objective function of the distributed moving horizon coordinated controller is chosen based on the principle that the wind power subsystem has priority to generate electricity, while photovoltaic power generation coordinates with the wind power subsystem and the battery is activated only when necessary to meet the load demand. The simulation results illustrate that the proposed distributed moving horizon coordinated controller can allocate the output power of the two generation subsystems reasonably under varying environmental conditions, which not only satisfies the load demand but also limits excessive fluctuations of output power to protect the power generation equipment. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  18. A sampling strategy for estimating plot average annual fluxes of chemical elements from forest soils

    NARCIS (Netherlands)

    Brus, D.J.; Gruijter, de J.J.; Vries, de W.

    2010-01-01

    A sampling strategy for estimating spatially averaged annual element leaching fluxes from forest soils is presented and tested in three Dutch forest monitoring plots. In this method sampling locations and times (days) are selected by probability sampling. Sampling locations were selected by

  19. A Decentralized Eigenvalue Computation Method for Spectrum Sensing Based on Average Consensus

    Science.gov (United States)

    Mohammadi, Jafar; Limmer, Steffen; Stańczak, Sławomir

    2016-07-01

    This paper considers eigenvalue estimation for the decentralized inference problem in spectrum sensing. We propose a decentralized eigenvalue computation algorithm based on the power method, referred to as the generalized power method (GPM); it is capable of estimating the eigenvalues of a given covariance matrix under certain conditions. Furthermore, we have developed a decentralized implementation of GPM by splitting the iterative operations into local and global computation tasks. The global tasks require data exchange among the nodes. For this task, we apply an average consensus algorithm to perform the global computations efficiently. As a special case, we consider a structured graph that is a tree with clusters of nodes at its leaves. For an accelerated distributed implementation, we propose to use computation over multiple access channel (CoMAC) as a building block of the algorithm. Numerical simulations are provided to illustrate the performance of the two algorithms.
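
    A minimal sketch of the core idea, on synthetic data and an assumed ring network: power iteration in which the one global quantity (the norm of the iterate) is obtained via average consensus. In a fully decentralized setting the matrix-vector product would also be computed through local exchanges; it is kept centralized here for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: n nodes, each node i conceptually holds row i of a
# covariance matrix C and one entry x[i] of the eigenvector estimate.
n = 8
A = rng.standard_normal((n, n))
C = A @ A.T  # symmetric positive semi-definite "covariance"

# Doubly stochastic consensus weights for a ring graph (assumption).
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

def consensus_average(values, rounds=200):
    """Each node repeatedly averages with its neighbours; every entry
    converges to the network-wide mean of `values`."""
    v = values.copy()
    for _ in range(rounds):
        v = W @ v
    return v

x = rng.standard_normal(n)
for _ in range(50):
    y = C @ x  # local products: node i computes row_i(C) . x
    # Global quantity ||y||^2 via consensus: the mean of n*y_i^2
    # across nodes equals the sum of y_i^2.
    sq = consensus_average(n * y**2)
    x = y / np.sqrt(sq)  # each node normalizes with its own estimate

# Rayleigh quotient approximates the dominant eigenvalue.
lam = x @ C @ x / (x @ x)
print(lam, np.linalg.eigvalsh(C)[-1])
```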

  20. Books Average Previous Decade of Economic Misery

    Science.gov (United States)

    Bentley, R. Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios

    2014-01-01

    For the 20th century since the Depression, we find a strong correlation between a ‘literary misery index’ derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade. PMID:24416159
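
    A toy Python sketch of the analysis pattern (synthetic series, not the paper's indices): scan trailing moving-average window lengths of one series and pick the window that best correlates with the other series, mirroring the paper's peak at 11 years.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 80
economic = rng.standard_normal(n).cumsum()  # stand-in misery index

def trailing_ma(x, w):
    """Mean of the previous w values at each year (trailing window)."""
    return np.array([x[max(0, t - w):t].mean() if t else x[0]
                     for t in range(len(x))])

# Fabricate a "literary" index that really does track the past 11 years.
literary = trailing_ma(economic, 11) + 0.2 * rng.standard_normal(n)

corrs = {w: np.corrcoef(literary[20:], trailing_ma(economic, w)[20:])[0, 1]
         for w in range(2, 21)}
best = max(corrs, key=corrs.get)
print("best window:", best, "corr:", round(corrs[best], 3))
```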

  1. Averaging, not internal noise, limits the development of coherent motion processing

    Directory of Open Access Journals (Sweden)

    Catherine Manning

    2014-10-01

    Full Text Available The development of motion processing is a critical part of visual development, allowing children to interact with moving objects and navigate within a dynamic environment. However, global motion processing, which requires pooling motion information across space, develops late, reaching adult-like levels only by mid-to-late childhood. The reasons underlying this protracted development are not yet fully understood. In this study, we sought to determine whether the development of motion coherence sensitivity is limited by internal noise (i.e., imprecision in estimating the directions of individual elements) and/or by global pooling across local estimates. To this end, we presented equivalent noise direction discrimination tasks and motion coherence tasks at both slow (1.5°/s) and fast (6°/s) speeds to children aged 5, 7, 9 and 11 years, and adults. We show that, as children get older, their levels of internal noise reduce, and they are able to average across more local motion estimates. Regression analyses indicated, however, that age-related improvements in coherent motion perception are driven solely by improvements in averaging and not by reductions in internal noise. Our results suggest that the development of coherent motion sensitivity is primarily limited by developmental changes within brain regions involved in integrating motion signals (e.g., MT/V5).

  2. Estimation of average causal effect using the restricted mean residual lifetime as effect measure

    DEFF Research Database (Denmark)

    Mansourvar, Zahra; Martinussen, Torben

    2017-01-01

    Although mean residual lifetime is often of interest in biomedical studies, restricted mean residual lifetime must be considered in order to accommodate censoring. Differences in the restricted mean residual lifetime can be used as an appropriate quantity for comparing different treatment groups with respect to their survival times. In observational studies where the factor of interest is not randomized, covariate adjustment is needed to take into account imbalances in confounding factors. In this article, we develop an estimator for the average causal treatment difference using the restricted mean residual lifetime as target parameter. We account for confounding factors using the Aalen additive hazards model. Large sample properties of the proposed estimator are established, and simulation studies are conducted in order to assess its small sample performance. The method is also…

  3. Bounding quantum gate error rate based on reported average fidelity

    International Nuclear Information System (INIS)

    Sanders, Yuval R; Wallman, Joel J; Sanders, Barry C

    2016-01-01

    Remarkable experimental advances in quantum computing are exemplified by recent announcements of impressive average gate fidelities exceeding 99.9% for single-qubit gates and 99% for two-qubit gates. Although these high numbers engender optimism that fault-tolerant quantum computing is within reach, the connection of average gate fidelity with fault-tolerance requirements is not direct. Here we use reported average gate fidelity to determine an upper bound on the quantum-gate error rate, which is the appropriate metric for assessing progress towards fault-tolerant quantum computation, and we demonstrate that this bound is asymptotically tight for general noise. Although this bound is unlikely to be saturated by experimental noise, we demonstrate using explicit examples that the bound indicates a realistic deviation between the true error rate and the reported average fidelity. We introduce the Pauli distance as a measure of this deviation, and we show that knowledge of the Pauli distance enables tighter estimates of the error rate of quantum gates. (fast track communication)

  4. Tracking 3D Moving Objects Based on GPS/IMU Navigation Solution, Laser Scanner Point Cloud and GIS Data

    Directory of Open Access Journals (Sweden)

    Siavash Hosseinyalamdary

    2015-07-01

    Full Text Available Monitoring vehicular road traffic is a key component of any autonomous driving platform. Detecting moving objects, and tracking them, is crucial to navigating around objects and predicting their locations and trajectories. Laser sensors provide an excellent observation of the area around vehicles, but the point cloud of objects may be noisy, occluded, and prone to different errors. Consequently, object tracking is an open problem, especially for low-quality point clouds. This paper describes a pipeline to integrate various sensor data and prior information, such as a Geospatial Information System (GIS) map, to segment and track moving objects in a scene. We show that even a low-quality GIS map, such as OpenStreetMap (OSM), can improve the tracking accuracy, as well as decrease processing time. A bank of Kalman filters is used to track moving objects in a scene. In addition, we apply a non-holonomic constraint to provide a better orientation estimation of moving objects. The results show that moving objects can be correctly detected, and accurately tracked, over time, based on modest-quality Light Detection And Ranging (LiDAR) data, a coarse GIS map, and a fairly accurate Global Positioning System (GPS) and Inertial Measurement Unit (IMU) navigation solution.
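
    As a sketch of the tracking core, here is a minimal constant-velocity Kalman filter for a single object in 2D; the noise covariances and motion model are illustrative assumptions, not the paper's tuned values.

```python
import numpy as np

dt = 0.1
F = np.array([[1, 0, dt, 0],   # state: [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]])
H = np.array([[1, 0, 0, 0],    # we observe position only
              [0, 1, 0, 0]])
Q = 0.01 * np.eye(4)           # process noise (assumed)
R = 0.5 * np.eye(2)            # measurement noise (assumed)

x, P = np.zeros(4), np.eye(4)

def kf_step(x, P, z):
    # Predict.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with measurement z = [x_obs, y_obs].
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

rng = np.random.default_rng(0)
for t in range(100):
    true_pos = np.array([0.5 * t * dt, 0.2 * t * dt])
    z = true_pos + rng.normal(scale=0.5, size=2)
    x, P = kf_step(x, P, z)
print("estimated state:", np.round(x, 3))
```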

  5. Statistical comparison of models for estimating the monthly average daily diffuse radiation at a subtropical African site

    International Nuclear Information System (INIS)

    Bashahu, M.

    2003-01-01

    Nine correlations have been developed in this paper to estimate the monthly average diffuse radiation for Dakar, Senegal. Sixteen years of data on the global (H) and diffuse (H_d) radiation, together with data on the bright sunshine hours (N), the fractional cloud cover (Ne/8), the water vapour pressure in the air (e) and the ambient temperature (T), have been used for that purpose. A model inter-comparison based on the MBE, RMSE and t statistical tests has shown that estimates from any of the obtained correlations are not significantly different from their measured counterparts; thus all nine models are recommended for the aforesaid location. Three of them should be particularly selected for their simplicity, universal applicability and high accuracy. Those are simple linear correlations between K_d and N/N_d, Ne/8 or K_t. Even presenting adequate performance, the remaining correlations are either simple but less accurate, or multiple or nonlinear regressions needing one or two input variables. (author)

  6. Moving from Rule-based to Principle-based in Public Sector: Preparers' Perspective

    OpenAIRE

    Roshayani Arshad; Normah Omar; Siti Fatimah Awang

    2013-01-01

    The move from cash accounting to accrual accounting, or rule-based to principle-based accounting, by many governments is part of ongoing efforts to promote a more business-like and performance-focused public sector. Using questionnaire responses from preparers of financial statements of public universities in Malaysia, this study examines the implementation challenges and benefits of principle-based accounting. Results from these responses suggest that most respondents perceived signific...

  7. Phase difference estimation method based on data extension and Hilbert transform

    International Nuclear Information System (INIS)

    Shen, Yan-lin; Tu, Ya-qing; Chen, Lin-jun; Shen, Ting-ao

    2015-01-01

    To improve the precision and anti-interference performance of phase difference estimation for non-integer periods of sampling signals, a phase difference estimation method based on data extension and Hilbert transform is proposed. Estimated phase difference is obtained by means of data extension, Hilbert transform, cross-correlation, auto-correlation, and weighted phase average. Theoretical analysis shows that the proposed method suppresses the end effects of Hilbert transform effectively. The results of simulations and field experiments demonstrate that the proposed method improves the anti-interference performance of phase difference estimation and has better performance of phase difference estimation than the correlation, Hilbert transform, and data extension-based correlation methods, which contribute to improving the measurement precision of the Coriolis mass flowmeter. (paper)
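
    A minimal sketch of the underlying Hilbert-transform step, assuming two noisy same-frequency sinusoids sampled over a non-integer number of periods; the paper's data extension and weighted averaging, which further suppress end effects, are approximated here by simply discarding samples near the ends.

```python
import numpy as np
from scipy.signal import hilbert

fs, f0, true_phase = 1000.0, 50.0, 0.7   # Hz, Hz, radians (illustrative)
t = np.arange(0, 0.095, 1 / fs)          # non-integer number of periods
rng = np.random.default_rng(0)
x1 = np.sin(2 * np.pi * f0 * t) + 0.05 * rng.standard_normal(t.size)
x2 = (np.sin(2 * np.pi * f0 * t + true_phase)
      + 0.05 * rng.standard_normal(t.size))

z1, z2 = hilbert(x1), hilbert(x2)        # analytic signals
# Instantaneous phase difference; drop samples near both ends, where
# the Hilbert transform suffers end effects.
dphi = np.angle(z2 * np.conj(z1))
k = t.size // 8
print("estimate:", dphi[k:-k].mean(), "true:", true_phase)
```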

  8. Margin estimation and disturbances of irradiation field in layer-stacking carbon-ion beams for respiratory moving targets.

    Science.gov (United States)

    Tajiri, Shinya; Tashiro, Mutsumi; Mizukami, Tomohiro; Tsukishima, Chihiro; Torikoshi, Masami; Kanai, Tatsuaki

    2017-11-01

    Carbon-ion therapy by layer-stacking irradiation for static targets has been practised in clinical treatments. In order to apply this technique to a moving target, disturbances of carbon-ion dose distributions due to respiratory motion have been studied based on the measurement using a respiratory motion phantom, and the margin estimation given by √(internal margin² + setup margin²) has been assessed. We assessed the volume in which the variation in the ratio of the dose for a target moving due to respiration relative to the dose for a static target was within 5%. The margins were insufficient for use with layer-stacking irradiation of a moving target, and an additional margin was required. The lateral movement of a target converts to the range variation, as the thickness of the range compensator changes with the movement of the target. Although the additional margin changes according to the shape of the ridge filter, dose uniformity of 5% can be achieved for a spherical target 93 mm in diameter when the upward range variation is limited to 5 mm and the additional margin of 2.5 mm is applied in case of our ridge filter. Dose uniformity in a clinical target largely depends on the shape of the mini-peak as well as on the bolus shape. We have shown the relationship between range variation and dose uniformity. In actual therapy, the upper limit of target movement should be considered by assessing the bolus shape. © The Author 2017. Published by Oxford University Press on behalf of The Japan Radiation Research Society and Japanese Society for Radiation Oncology.

  9. I-MOVE multi-centre case control study 2010-11: overall and stratified estimates of influenza vaccine effectiveness in Europe.

    Directory of Open Access Journals (Sweden)

    Esther Kissling

    Full Text Available BACKGROUND: In the third season of I-MOVE (Influenza Monitoring Vaccine Effectiveness in Europe), we undertook a multicentre case-control study based on sentinel practitioner surveillance networks in eight European Union (EU) member states to estimate 2010/11 influenza vaccine effectiveness (VE) against medically-attended influenza-like illness (ILI) laboratory-confirmed as influenza. METHODS: Using systematic sampling, practitioners swabbed ILI/ARI patients within seven days of symptom onset. We compared influenza-positive to influenza laboratory-negative patients among those meeting the EU ILI case definition. A valid vaccination corresponded to > 14 days between receiving a dose of vaccine and symptom onset. We used multiple imputation with chained equations to estimate missing values. Using logistic regression with study as fixed effect, we calculated influenza VE adjusting for potential confounders. We estimated influenza VE overall, by influenza type, by age group and among the target group for vaccination. RESULTS: We included 2019 cases and 2391 controls in the analysis. Adjusted VE was 52% (95% CI 30-67) overall (N = 4410), 55% (95% CI 29-72) against A(H1N1) and 50% (95% CI 14-71) against influenza B. Adjusted VE against all influenza subtypes was 66% (95% CI 15-86), 41% (95% CI -3-66) and 60% (95% CI 17-81) among those aged 0-14, 15-59 and ≥60, respectively. Among target groups for vaccination (N = 1004), VE was 56% (95% CI 34-71) overall, 59% (95% CI 32-75) against A(H1N1) and 63% (95% CI 31-81) against influenza B. CONCLUSIONS: Results suggest moderate protection from 2010-11 trivalent influenza vaccines against medically-attended ILI laboratory-confirmed as influenza across Europe. Adjusted and stratified influenza VE estimates are possible with the large sample size of this multi-centre case-control study. I-MOVE shows how a network can provide precise summary VE measures across Europe.

  10. A CATALOG OF MOVING GROUP CANDIDATES IN THE SOLAR NEIGHBORHOOD

    International Nuclear Information System (INIS)

    Zhao Jingkun; Zhao Gang; Chen Yuqin

    2009-01-01

    Based on the kernel estimator and wavelet technique, we have identified 22 moving group candidates in the solar neighborhood from a sample which includes around 14,000 dwarfs and 6000 giants. Six of them were previously known as the Hercules stream, the Sirius-UMa stream, the Hyades stream, the Castor group, the Pleiades stream, and IC 2391; five of them have also been reported by other authors. 11 moving group candidates, not previously reported in the literature, show prominent structures in the dwarf or giant samples. A catalog of moving group candidates in the solar neighborhood is presented in this work.

  11. Estimated Daily Average Per Capita Water Ingestion by Child and Adult Age Categories Based on USDA's 1994-96 and 1998 Continuing Survey of Food Intakes by Individuals (Journal Article)

    Science.gov (United States)

    Current water ingestion estimates are important for the assessment of risk to human populations of exposure to water-borne pollutants. This paper reports mean and percentile estimates of the distributions of daily average per capita water ingestion for 12 age range groups. The a...

  12. Research on moving target defense based on SDN

    Science.gov (United States)

    Chen, Mingyong; Wu, Weimin

    2017-08-01

    An address mutation strategy is proposed. This strategy provides unpredictable address changes, replacing the real addresses used during packet forwarding and mutating the forwarding path, thus hiding the real address of the host and the real path. On this basis, a moving target defense technology based on spatio-temporal mutation is proposed, combining the centralized-control advantage of the software-defined networking architecture with sFlow traffic monitoring and moving target defense. The mutation period can be adjusted in real time according to network traffic, and the controller abruptly changes the destination address while packets are transferred between switches, constructing a moving target that confuses hosts within the network and thereby protects both the hosts and the network.
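
    To make the mutation mechanism concrete, here is a toy Python sketch of periodic random address remapping; the host list, virtual address pool, and mutation period are all illustrative assumptions, and a real deployment would install the mapping as switch flow rules via the SDN controller rather than print it.

```python
import random
import time

# Toy sketch of periodic address mutation (illustrative only).
REAL_HOSTS = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]
VIRTUAL_POOL = [f"172.16.{i}.{j}" for i in range(4) for j in range(1, 255)]

def new_mapping():
    """Assign each real host a fresh random virtual address."""
    virtual = random.sample(VIRTUAL_POOL, len(REAL_HOSTS))
    return dict(zip(REAL_HOSTS, virtual))

mutation_period = 1.0  # seconds; could shrink under suspicious traffic
for epoch in range(3):
    mapping = new_mapping()
    print(f"epoch {epoch}: {mapping}")
    # External scanners only ever see short-lived virtual addresses.
    time.sleep(mutation_period)
```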

  13. Average monthly and annual climate maps for Bolivia

    KAUST Repository

    Vicente-Serrano, Sergio M.

    2015-02-24

    This study presents monthly and annual climate maps for relevant hydroclimatic variables in Bolivia. We used the most complete network of precipitation and temperature stations available in Bolivia, which passed a careful quality control and temporal homogenization procedure. Monthly average maps at the spatial resolution of 1 km were modeled by means of a regression-based approach using topographic and geographic variables as predictors. The monthly average maximum and minimum temperatures, precipitation and potential exoatmospheric solar radiation under clear sky conditions are used to estimate the monthly average atmospheric evaporative demand by means of the Hargreaves model. Finally, the average water balance is estimated on a monthly and annual scale for each 1 km cell by means of the difference between precipitation and atmospheric evaporative demand. The digital layers used to create the maps are available in the digital repository of the Spanish National Research Council.
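
    The Hargreaves step is simple enough to sketch directly; the following Python function uses the standard Hargreaves equation with illustrative inputs (the paper applies it per 1 km cell with mapped temperatures and clear-sky radiation).

```python
import numpy as np

def hargreaves_pet(tmax, tmin, ra):
    """Reference evapotranspiration (mm/day) via the Hargreaves equation:
    PET = 0.0023 * Ra * (Tmean + 17.8) * sqrt(Tmax - Tmin),
    with Ra the extraterrestrial radiation in equivalent mm/day."""
    tmean = (tmax + tmin) / 2.0
    return 0.0023 * ra * (tmean + 17.8) * np.sqrt(tmax - tmin)

# Example for one grid cell (illustrative values).
pet = hargreaves_pet(tmax=24.0, tmin=9.0, ra=12.5)
precip = 2.1  # mm/day
print("PET:", round(pet, 2), "mm/day; water balance:",
      round(precip - pet, 2), "mm/day")
```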

  14. A novel power spectrum calculation method using phase-compensation and weighted averaging for the estimation of ultrasound attenuation.

    Science.gov (United States)

    Heo, Seo Weon; Kim, Hyungsuk

    2010-05-01

    An estimation of ultrasound attenuation in soft tissues is critical in quantitative ultrasound analysis, since it is not only related to the estimation of other ultrasound parameters, such as speed of sound, integrated scatterers, or scatterer size, but also provides pathological information about the scanned tissue. However, estimation performance for ultrasound attenuation is intimately tied to the accurate extraction of spectral information from the backscattered radiofrequency (RF) signals. In this paper, we propose two novel techniques for calculating a block power spectrum from the backscattered ultrasound signals. These are based on phase compensation of each RF segment using the normalized cross-correlation, to minimize estimation errors due to phase variations, and a weighted averaging technique, to maximize the signal-to-noise ratio (SNR). The simulation results with uniform numerical phantoms demonstrate that the proposed method estimates local attenuation coefficients within 1.57% of the actual values, while the conventional methods estimate them within 2.96%. The proposed method is especially effective when dealing with signals reflected from deeper depths, where the SNR level is lower, or when the gated window contains a small number of signal samples. Experimental results at 5 MHz, obtained with a one-dimensional 128-element array and tissue-mimicking phantoms, also show that the proposed method provides better estimation results (within 3.04% of the actual value) with smaller estimation variances compared to the conventional methods (within 5.93%) for all cases considered. Copyright 2009 Elsevier B.V. All rights reserved.
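
    A rough sketch of the two ideas, on synthetic segments: each RF segment is phase-compensated by shifting it to its best cross-correlation alignment with a reference segment, and the segment periodograms are then combined with energy-based weights (a stand-in for the paper's SNR weighting; all signals and parameters are fabricated).

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_seg = 256, 12
base = rng.standard_normal(n)
segments = [np.roll(base, rng.integers(-8, 9)) +
            0.3 * rng.standard_normal(n) for _ in range(n_seg)]

ref = segments[0]
spectra, weights = [], []
for seg in segments:
    # Phase compensation: shift the segment to best align with the reference.
    xc = np.correlate(seg, ref, mode="full")
    lag = xc.argmax() - (n - 1)
    aligned = np.roll(seg, -lag)
    spectra.append(np.abs(np.fft.rfft(aligned * np.hanning(n))) ** 2)
    weights.append(np.sum(aligned ** 2))  # energy-based weight

w = np.array(weights) / np.sum(weights)
block_spectrum = np.sum(w[:, None] * np.array(spectra), axis=0)
print(block_spectrum[:5])
```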

  15. Estimation of catchment averaged sensible heat fluxes using a large aperture scintillometer

    Directory of Open Access Journals (Sweden)

    Samain Bruno

    2012-05-01

    Full Text Available Evapotranspiration rates at the catchment scale are very difficult to quantify. One possible manner to continuously observe this variable could be the estimation of sensible heat fluxes (H) across large distances (in the order of kilometers) using a large aperture scintillometer (LAS), and inverting these observations into evapotranspiration rates, under the assumption that the LAS observations are representative for the entire catchment. The objective of this paper is to assess whether measured sensible heat fluxes from a LAS over a long distance (9.5 km) can be assumed to be valid for a 102.3 km² heterogeneous catchment. Therefore, a fully process-based water and energy balance model with a spatial resolution of 50 m has been thoroughly calibrated and validated for the Bellebeek catchment in Belgium. A footprint analysis has been performed. In general, the sensible heat fluxes from the LAS compared well with the modeled sensible heat fluxes within the footprint. Moreover, as the modeled H within the footprint has been found to be almost equal to the modeled catchment-averaged H, it can be concluded that the scintillometer measurements over a distance of 9.5 km and an effective height of 68 m are representative for the entire catchment.

  16. Long-Term Prediction of Emergency Department Revenue and Visitor Volume Using Autoregressive Integrated Moving Average Model

    Directory of Open Access Journals (Sweden)

    Chieh-Fan Chen

    2011-01-01

    Full Text Available This study analyzed meteorological, clinical and economic factors in terms of their effects on monthly ED revenue and visitor volume. Monthly data from January 1, 2005 to September 30, 2009 were analyzed. Spearman correlation and cross-correlation analyses were performed to identify the correlation between each independent variable, ED revenue, and visitor volume. An autoregressive integrated moving average (ARIMA) model was used to quantify the relationship between each independent variable, ED revenue, and visitor volume. The accuracies were evaluated by comparing model forecasts to actual values with mean absolute percentage error. Sensitivity of prediction errors to model training time was also evaluated. The ARIMA models indicated that mean maximum temperature, relative humidity, rainfall, non-trauma, and trauma visits may correlate positively with ED revenue, but mean minimum temperature may correlate negatively with ED revenue. Moreover, mean minimum temperature and stock market index fluctuation may correlate positively with trauma visitor volume. Mean maximum temperature, relative humidity and stock market index fluctuation may correlate positively with non-trauma visitor volume. Mean maximum temperature and relative humidity may correlate positively with pediatric visitor volume, but mean minimum temperature may correlate negatively with pediatric visitor volume. The model also performed well in forecasting revenue and visitor volume.
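
    For illustration, a minimal ARIMA forecast in Python with statsmodels on a synthetic monthly revenue series; the paper's models additionally include meteorological and economic regressors, and the order (1, 1, 1) here is an arbitrary assumption.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Synthetic monthly "revenue" with trend and yearly seasonality.
rng = np.random.default_rng(0)
idx = pd.date_range("2005-01-01", periods=57, freq="MS")
revenue = pd.Series(100 + 0.5 * np.arange(57)
                    + 5 * np.sin(np.arange(57) * 2 * np.pi / 12)
                    + rng.normal(scale=2, size=57), index=idx)

train, test = revenue[:-9], revenue[-9:]
model = ARIMA(train, order=(1, 1, 1)).fit()
forecast = model.forecast(steps=9)

# Mean absolute percentage error, as in the paper's evaluation.
mape = (np.abs(forecast - test) / test).mean() * 100
print(f"MAPE: {mape:.1f}%")
```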

  17. The Spin Move: A Reliable and Cost-Effective Gowning Technique for the 21st Century.

    Science.gov (United States)

    Ochiai, Derek H; Adib, Farshad

    2015-04-01

    Operating room efficiency (ORE) and utilization are considered among the most crucial components of quality improvement in every hospital. We introduce a new gowning technique that can optimize ORE. The Spin Move quickly and efficiently wraps a surgical gown around the surgeon's body, saving the operative time expended by traditional gowning techniques. In the Spin Move, while the surgeon is approaching the scrub nurse, he or she uses the left heel as the fulcrum. The torque generated by twisting the right leg around the left leg helps the surgeon close the gown as quickly and safely as possible. From 2003 to 2012, the Spin Move was performed in 1,725 consecutive procedures with no complication. The estimated average time was 5.3 and 7.8 seconds for the Spin Move and traditional gowning, respectively. The estimated time saving for the senior author during this period was 71.875 minutes. Approximately 20,000 orthopaedic surgeons practice in the United States. If this technique had been used, 23,958 hours could have been saved. The cost saving could have been $14,374,800.00 (23,958 hours × $600/operating room hour) during the past 10 years. The Spin Move is easy to perform and reproducible. It saves operating room time and increases ORE.

  18. Yearly, seasonal and monthly daily average diffuse sky radiation models

    International Nuclear Information System (INIS)

    Kassem, A.S.; Mujahid, A.M.; Turner, D.W.

    1993-01-01

    A daily average diffuse sky radiation regression model based on daily global radiation was developed utilizing two years of data taken near Blytheville, Arkansas (Lat. = 35.9°N, Long. = 89.9°W), U.S.A. The model has a determination coefficient of 0.91 and 0.092 standard error of estimate. The data were also analyzed for a seasonal dependence and four seasonal average daily models were developed for the spring, summer, fall and winter seasons. The coefficient of determination is 0.93, 0.81, 0.94 and 0.93, whereas the standard error of estimate is 0.08, 0.102, 0.042 and 0.075 for spring, summer, fall and winter, respectively. A monthly average daily diffuse sky radiation model was also developed. The coefficient of determination is 0.92 and the standard error of estimate is 0.083. A seasonal monthly average model was also developed which has 0.91 coefficient of determination and 0.085 standard error of estimate. The developed monthly daily average and daily models compare well with a selected number of previously developed models. (author). 11 ref., figs., tabs

  19. Value at risk estimation with entropy-based wavelet analysis in exchange markets

    Science.gov (United States)

    He, Kaijian; Wang, Lijun; Zou, Yingchao; Lai, Kin Keung

    2014-08-01

    In recent years, exchange markets have become increasingly integrated. Fluctuations and risks across different exchange markets exhibit co-moving and complex dynamics. In this paper we propose entropy-based multivariate wavelet approaches to analyze the multiscale characteristics in the multidimensional domain and further improve the reliability of Value at Risk estimation. Wavelet analysis is introduced to construct the entropy-based multiscale portfolio Value at Risk estimation algorithm, to account for the multiscale dynamic correlation. The entropy measure is proposed as the more effective measure, following the error minimization principle, for selecting the best basis when determining the wavelet families and the decomposition level to use. The empirical studies conducted in this paper provide positive evidence of the superior performance of the proposed approach, using the closely related Chinese renminbi and European euro exchange markets.

  20. Estimation of Thermal Sensation Based on Wrist Skin Temperatures

    Science.gov (United States)

    Sim, Soo Young; Koh, Myung Jun; Joo, Kwang Min; Noh, Seungwoo; Park, Sangyun; Kim, Youn Ho; Park, Kwang Suk

    2016-01-01

    Thermal comfort is an essential environmental factor related to quality of life and work effectiveness. We assessed the feasibility of wrist skin temperature monitoring for estimating subjective thermal sensation. We invented a wrist band that simultaneously monitors skin temperatures from the wrist (i.e., the radial artery and ulnar artery regions, and upper wrist) and the fingertip. Skin temperatures from eight healthy subjects were acquired while thermal sensation varied. To develop a thermal sensation estimation model, the mean skin temperature, temperature gradient, time differential of the temperatures, and average power of frequency band were calculated. A thermal sensation estimation model using temperatures of the fingertip and wrist showed the highest accuracy (mean root mean square error [RMSE]: 1.26 ± 0.31). An estimation model based on the three wrist skin temperatures showed a slightly better result than the model that used a single fingertip skin temperature (mean RMSE: 1.39 ± 0.18). When a personalized thermal sensation estimation model based on three wrist skin temperatures was used, the mean RMSE was 1.06 ± 0.29, and the correlation coefficient was 0.89. Thermal sensation estimation technology based on wrist skin temperatures, combined with wearable devices, may facilitate intelligent control of one's thermal environment. PMID:27023538

  1. A comparison of moving object detection methods for real-time moving object detection

    Science.gov (United States)

    Roshan, Aditya; Zhang, Yun

    2014-06-01

    Moving object detection has a wide variety of applications, from traffic monitoring, site monitoring, automatic theft identification, and face detection to military surveillance. Many methods have been developed for moving object detection, but it is very difficult to find one which works in all situations and with different types of videos. The purpose of this paper is to evaluate existing moving object detection methods which can be implemented in software on a desktop or laptop for real-time object detection. Several moving object detection methods are noted in the literature, but few of them are suitable for real-time moving object detection; most of the methods which provide real-time detection are further limited by the number of objects and the scene complexity. This paper evaluates the four most commonly used moving object detection methods: the background subtraction technique, the Gaussian mixture model, and wavelet-based and optical-flow-based methods. The work is based on an evaluation of these four methods using two different sets of cameras and two different scenes. The methods were implemented in MATLAB and the results are compared based on completeness of detected objects, noise, light-change sensitivity, processing time, etc. After comparison, it is observed that the optical-flow-based method took the least processing time and successfully detected the boundaries of moving objects, which implies that it can be implemented for real-time moving object detection.
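
    Of the four families compared, the Gaussian mixture model is the easiest to sketch, since OpenCV ships an implementation (MOG2); the video filename, thresholds, and minimum blob area below are illustrative assumptions.

```python
import cv2
import numpy as np

# Minimal background-subtraction sketch with OpenCV's Gaussian
# mixture model (MOG2); parameters are illustrative assumptions.
cap = cv2.VideoCapture("surveillance.mp4")
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    # Remove speckle noise, then outline the remaining moving regions.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) > 200:  # ignore tiny blobs
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("moving objects", frame)
    if cv2.waitKey(1) == 27:
        break

cap.release()
cv2.destroyAllWindows()
```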

  2. On a Bayesian estimation procedure for determining the average ore grade of a uranium deposit

    International Nuclear Information System (INIS)

    Heising, C.D.; Zamora-Reyes, J.A.

    1996-01-01

    A Bayesian procedure is applied to estimate the average ore grade of a specific uranium deposit (the Morrison formation in New Mexico). Experimental data taken from drilling tests for this formation constitute deposit-specific information, E_2. This information is combined, through a single-stage application of Bayes' theorem, with the more extensive and well-established information on all similar formations in the region, E_1. It is assumed that the best estimate for the deposit-specific case should include the relevant experimental evidence collected from other like formations giving incomplete information on the specific deposit. This follows traditional methods for resource estimation, which presume that previous collective experience obtained from similar formations in the geological region can be used to infer the geologic characteristics of a less well characterized formation. (Author)
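
    The single-stage update is easy to illustrate with a conjugate normal-normal model: regional experience E_1 supplies the prior and deposit-specific drilling data E_2 the likelihood. All numbers below are made up, and the paper's actual likelihood and data may differ.

```python
import numpy as np

# Prior from regional experience E_1: grade ~ N(mu0, tau0^2).
mu0, tau0 = 0.12, 0.04          # illustrative units (e.g., % U3O8)

# Deposit-specific drilling data E_2: n samples with known noise sd.
samples = np.array([0.10, 0.08, 0.15, 0.11, 0.09])
sigma, n = 0.05, len(samples)
xbar = samples.mean()

# Posterior for the mean grade: precision-weighted combination.
post_prec = 1 / tau0**2 + n / sigma**2
mu_post = (mu0 / tau0**2 + n * xbar / sigma**2) / post_prec
sd_post = np.sqrt(1 / post_prec)
print(f"posterior mean grade: {mu_post:.4f} +/- {sd_post:.4f}")
```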

  3. Query and Update Efficient B+-Tree Based Indexing of Moving Objects

    DEFF Research Database (Denmark)

    Jensen, Christian Søndergaard; Lin, Dan; Ooi, Beng Chin

    2004-01-01

    The positions of moving objects are streamed to a database. Indexes for moving objects must support queries efficiently, but must also support frequent updates. Indexes based on minimum bounding regions (MBRs) such as the R-tree exhibit high concurrency overheads during node splitting, and each individual update is known to be quite costly. This motivates the design of a solution that enables the B+-tree to manage moving objects. We represent moving-object locations as vectors that are timestamped based on their update time. By applying a novel linearization technique to these values, it is possible to index the resulting values using a single B+-tree that partitions values according to their timestamp and otherwise preserves spatial proximity. We develop algorithms for range and k nearest neighbor queries, as well as continuous queries. The proposal can be grafted into existing database systems cost effectively. An extensive experimental study explores…

  4. Concentration Sensing by the Moving Nucleus in Cell Fate Determination: A Computational Analysis.

    Directory of Open Access Journals (Sweden)

    Varun Aggarwal

    Full Text Available During development of the vertebrate neuroepithelium, the nucleus in neural progenitor cells (NPCs) moves from the apex toward the base and returns to the apex (called interkinetic nuclear migration), at which point the cell divides. The fate of the resulting daughter cells is thought to depend on the sampling by the moving nucleus of a spatial concentration profile of the cytoplasmic Notch intracellular domain (NICD). However, the nucleus executes complex stochastic motions including random waiting and back and forth motions, which can expose the nucleus to randomly varying levels of cytoplasmic NICD. How nuclear position can determine daughter cell fate despite the stochastic nature of nuclear migration is not clear. Here we derived a mathematical model for reaction, diffusion, and nuclear accumulation of NICD in NPCs during interkinetic nuclear migration (INM). Using experimentally measured trajectory-dependent probabilities of nuclear turning, nuclear waiting times and average nuclear speeds in NPCs in the developing zebrafish retina, we performed stochastic simulations to compute the nuclear trajectory-dependent probabilities of NPC differentiation. Comparison with experimentally measured nuclear NICD concentrations and trajectory-dependent probabilities of differentiation allowed estimation of the NICD cytoplasmic gradient. Spatially polarized production of NICD, rapid NICD cytoplasmic consumption and the time-averaging effect of nuclear import/export kinetics are sufficient to explain the experimentally observed differentiation probabilities. Our computational studies lend quantitative support to the feasibility of the nuclear concentration-sensing mechanism for NPC fate determination in zebrafish retina.

  5. Tracking of Multiple Moving Sources Using Recursive EM Algorithm

    Directory of Open Access Journals (Sweden)

    Böhme Johann F

    2005-01-01

    Full Text Available We deal with recursive direction-of-arrival (DOA) estimation of multiple moving sources. Based on the recursive EM algorithm, we develop two recursive procedures to estimate the time-varying DOA parameter for narrowband signals. The first procedure requires no prior knowledge about the source movement. The second procedure assumes that the motion of moving sources is described by a linear polynomial model. The proposed recursion updates the polynomial coefficients when new data arrive. The suggested approaches have two major advantages: simple implementation and easy extension to wideband signals. Numerical experiments show that both procedures provide excellent results in a slowly changing environment. When the DOA parameter changes fast or two source directions cross with each other, the procedure designed for a linear polynomial model has a better performance than the general procedure. Compared to the beamforming technique based on the same parameterization, our approach is computationally favorable and has a wider range of applications.

  6. Position Paper: Moving Task-Based Language Teaching Forward

    Science.gov (United States)

    Ellis, Rod

    2017-01-01

    The advocacy of task-based language teaching (TBLT) has met with resistance. The critiques of TBLT and the misconceptions that underlie them have already been addressed in Ellis (2009) and Long (2016). The purpose of this article is to move forward by examining a number of real problems that TBLT faces--such as how a "task" should be…

  7. Statistical comparison of models for estimating the monthly average daily diffuse radiation at a subtropical African site

    Energy Technology Data Exchange (ETDEWEB)

    Bashahu, M. [University of Burundi, Bujumbura (Burundi). Institute of Applied Pedagogy, Department of Physics and Technology

    2003-07-01

    Nine correlations have been developed in this paper to estimate the monthly average diffuse radiation for Dakar, Senegal. Sixteen years of data on the global (H) and diffuse (H_d) radiation, together with data on the bright sunshine hours (N), the fractional cloud cover (Ne/8), the water vapour pressure in the air (e) and the ambient temperature (T), have been used for that purpose. A model inter-comparison based on the MBE, RMSE and t statistical tests has shown that estimates from any of the obtained correlations are not significantly different from their measured counterparts; thus all nine models are recommended for the aforesaid location. Three of them should be particularly selected for their simplicity, universal applicability and high accuracy. Those are simple linear correlations between K_d and N/N_d, Ne/8 or K_t. Even presenting adequate performance, the remaining correlations are either simple but less accurate, or multiple or nonlinear regressions needing one or two input variables. (author)

  8. RSSI BASED LOCATION ESTIMATION IN A WI-FI ENVIRONMENT: AN EXPERIMENTAL STUDY

    Directory of Open Access Journals (Sweden)

    M. Ganesh Madhan

    2014-12-01

    Full Text Available In real-life situations, location estimation of moving objects and of armed personnel is of great importance. In this paper, we have attempted to locate mobile targets in a Wi-Fi environment. Radio frequency (RF) localization techniques based on Received Signal Strength Indication (RSSI) algorithms are used. This study utilizes the WirelessMon software tool to obtain complete technical information on the received signal strength from the different wireless access points available in the campus Wi-Fi environment considered for the study. All simulations have been done in MATLAB. The target location estimated by this approach agrees well with the actual GPS data.
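
    A minimal sketch of the RSSI-to-position chain, assuming the common log-distance path-loss model and three access points at known coordinates; all radio parameters and positions are illustrative, not the paper's measured values.

```python
import numpy as np

# Log-distance path-loss model: RSSI(d) = P0 - 10*n*log10(d/d0).
P0, d0, n_exp = -40.0, 1.0, 2.7   # dBm at d0, path-loss exponent (assumed)

def rssi_to_distance(rssi):
    return d0 * 10 ** ((P0 - rssi) / (10 * n_exp))

aps = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])  # AP positions (m)
rssi = np.array([-55.0, -62.0, -60.0])                   # measured (dBm)
d = rssi_to_distance(rssi)

# Linearized trilateration: subtract the first circle equation
# |p - a_0|^2 = d_0^2 from the others and solve the linear system.
A = 2 * (aps[1:] - aps[0])
b = (d[0] ** 2 - d[1:] ** 2
     + np.sum(aps[1:] ** 2, axis=1) - np.sum(aps[0] ** 2))
pos, *_ = np.linalg.lstsq(A, b, rcond=None)
print("estimated position:", np.round(pos, 2))
```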

  9. One method for life time estimation of a bucket wheel machine for coal moving

    Science.gov (United States)

    Vîlceanu, Fl; Iancu, C.

    2016-08-01

    Rehabilitation of outdated equipment whose lifetime has expired, or is in its final period, is rational given the high investment cost of replacement. Rehabilitation involves checking operational safety, based on relevant expertise regarding the effective resistance of the supporting metal structures, and assessing the residual lifetime. Bucket wheel machines for coal moving are basic machines in the coal yards of power plants. The remaining life can be estimated by checking the loading on the most stressed subassembly through finite element analysis of a welding detail. The paper presents, step by step, the method of calculus applied to establish the residual lifetime of a bucket wheel machine for coal moving using non-destructive methods of study (fatigue cracking analysis + FEA). In order to establish the actual state of the machine and the areas subject to study, FEA of this mining equipment was performed on the geometric model of the analyzed mechanical structures, with powerful CAD/FEA programs. By applying the method, the residual lifetime can be calculated by extending the results from the most stressed area of the equipment to the entire machine, thus saving time and money on expensive replacements.

  10. Estimation of Annual Average Soil Loss, Based on Rusle Model in Kallar Watershed, Bhavani Basin, Tamil Nadu, India

    Science.gov (United States)

    Rahaman, S. Abdul; Aruchamy, S.; Jegankumar, R.; Ajeez, S. Abdul

    2015-10-01

    Soil erosion is a widespread environmental challenge faced in the Kallar watershed nowadays. Erosion is defined as the movement of soil by water and wind, and it occurs in the Kallar watershed under a wide range of land uses. Erosion by water can be dramatic during storm events, resulting in wash-outs and gullies. It can also be insidious, occurring as sheet and rill erosion during heavy rains. Most of the soil lost by water erosion is through the processes of sheet and rill erosion. Land degradation and subsequent soil erosion and sedimentation play a significant role in impairing water resources within sub-watersheds, watersheds and basins. Using conventional methods to assess soil erosion risk is expensive and time consuming. A comprehensive methodology that integrates remote sensing and Geographic Information Systems (GIS), coupled with the use of an empirical model (the Revised Universal Soil Loss Equation, RUSLE), can identify and assess soil erosion potential and estimate the value of soil loss. GIS data layers including rainfall erosivity (R), soil erodibility (K), slope length and steepness (LS), cover management (C) and conservation practice (P) factors were computed to determine their effects on average annual soil loss in the study area. The final map of annual soil erosion shows a maximum soil loss of 398.58 t ha⁻¹ y⁻¹. Based on the result, soil erosion was classified into a severity map with five classes: very low, low, moderate, high and critical. Further, the RUSLE factors have been broken into two categories, soil erosion susceptibility (A = RKLS) and soil erosion hazard (A = RKLSCP), which have been computed. It is understood that C and P are factors that can be controlled and thus can greatly reduce soil loss through management and conservation measures.
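
    The RUSLE combination itself is a cell-wise product, as the short sketch below shows on toy raster layers; the real work in the paper lies in deriving each factor layer from rainfall, soil, terrain, and land-cover data (all values and class breaks here are illustrative).

```python
import numpy as np

# RUSLE: average annual soil loss A = R * K * LS * C * P, per cell.
shape = (4, 4)
rng = np.random.default_rng(0)
R = rng.uniform(300, 600, shape)    # rainfall erosivity
K = rng.uniform(0.1, 0.4, shape)    # soil erodibility
LS = rng.uniform(0.5, 8.0, shape)   # slope length and steepness
C = rng.uniform(0.05, 0.6, shape)   # cover management
P = rng.uniform(0.4, 1.0, shape)    # conservation practice

A = R * K * LS * C * P              # t/ha/yr per cell
# Classify into severity classes (break values are assumptions):
# 0 = very low, 1 = low, 2 = moderate, 3 = high, 4 = critical.
severity = np.digitize(A, bins=[10, 50, 150, 300])
print("max annual loss:", A.max().round(1), "t/ha/yr")
print(severity)
```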

  11. Influence of Averaging Preprocessing on Image Analysis with a Markov Random Field Model

    Science.gov (United States)

    Sakamoto, Hirotaka; Nakanishi-Ohno, Yoshinori; Okada, Masato

    2018-02-01

    This paper describes our investigations into the influence of averaging preprocessing on the performance of image analysis. Averaging preprocessing involves a trade-off: image averaging is often undertaken to reduce noise while the number of image data available for image analysis is decreased. We formulated a process of generating image data by using a Markov random field (MRF) model to achieve image analysis tasks such as image restoration and hyper-parameter estimation by a Bayesian approach. According to the notions of Bayesian inference, posterior distributions were analyzed to evaluate the influence of averaging. There are three main results. First, we found that the performance of image restoration with a predetermined value for hyper-parameters is invariant regardless of whether averaging is conducted. We then found that the performance of hyper-parameter estimation deteriorates due to averaging. Our analysis of the negative logarithm of the posterior probability, which is called the free energy based on an analogy with statistical mechanics, indicated that the confidence of hyper-parameter estimation remains higher without averaging. Finally, we found that when the hyper-parameters are estimated from the data, the performance of image restoration worsens as averaging is undertaken. We conclude that averaging adversely influences the performance of image analysis through hyper-parameter estimation.

  12. Neural networks prediction and fault diagnosis applied to stationary and non stationary ARMA (Autoregressive moving average) modeled time series

    International Nuclear Information System (INIS)

    Marseguerra, M.; Minoggio, S.; Rossi, A.; Zio, E.

    1992-01-01

    The correlated noise affecting many industrial plants under stationary or cyclo-stationary conditions - nuclear reactors included - has been successfully modeled by autoregressive moving average (ARMA) techniques, owing to their versatility. The relatively recent neural network methods have similar features, and much effort is being devoted to exploring their usefulness in forecasting and control. Identifying a signal by means of an ARMA model gives rise to the problem of selecting its correct order. Similar difficulties must be faced when applying neural network methods; specifically, particular care must be given to the setting up of the appropriate network topology, the data normalization procedure and the learning code. In the present paper the capability of some neural networks to learn ARMA and seasonal ARMA processes is investigated. The results of the tested cases look promising, since they indicate that the neural networks learn the underlying process with relative ease, so that their forecasting capability may represent a convenient fault diagnosis tool. (Author)

  13. Fitting a function to time-dependent ensemble averaged data

    DEFF Research Database (Denmark)

    Fogelmark, Karl; Lomholt, Michael A.; Irbäck, Anders

    2018-01-01

    Time-dependent ensemble averages, i.e., trajectory-based averages of some observable, are of importance in many fields of science. A crucial objective when interpreting such data is to fit these averages (for instance, squared displacements) with a function and extract parameters (such as diffusion … method, weighted least squares including correlation in error estimation (WLS-ICE), to particle tracking data. The WLS-ICE method is applicable to arbitrary fit functions, and we provide a publicly available WLS-ICE software.
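
    The closed form behind such a fit is generalized least squares; the generic numpy sketch below (with an assumed exponential error-correlation structure, not the WLS-ICE estimator itself) shows how correlated errors enter the parameter estimate and its uncertainty.

```python
import numpy as np

# GLS: beta = (X^T C^-1 X)^-1 X^T C^-1 y, with C the covariance
# matrix of the averaged observations (here an assumed structure).
rng = np.random.default_rng(0)
t = np.linspace(0.1, 5.0, 25)
X = np.column_stack([np.ones_like(t), t])     # model: y = a + b*t

# Toy covariance with exponentially decaying correlations in time.
C = 0.04 * np.exp(-np.abs(t[:, None] - t[None, :]) / 0.5)
y = rng.multivariate_normal(0.3 + 1.2 * t, C)

Ci = np.linalg.inv(C)
beta = np.linalg.solve(X.T @ Ci @ X, X.T @ Ci @ y)
cov_beta = np.linalg.inv(X.T @ Ci @ X)        # parameter covariance
print("a, b =", np.round(beta, 3))
print("std errors:", np.round(np.sqrt(np.diag(cov_beta)), 3))
```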

  14. A singularity theorem based on spatial averages

    Indian Academy of Sciences (India)

    Pramana – Journal of Physics, July 2007, pp. 31–47. In this paper I would like to present a result which confirms – at least partially – …

  15. Techniques for Efficient Tracking of Road-Network-Based Moving Objects

    DEFF Research Database (Denmark)

    Civilis, Alminas; Jensen, Christian Søndergaard; Saltenis, Simonas

    With the continued advances in wireless communications, geo-positioning, and consumer electronics, an infrastructure is emerging that enables location-based services that rely on the tracking of the continuously changing positions of entire populations of service users, termed moving objects. The main issue considered is how to represent the location of a moving object in a database so that tracking can be done with as few updates as possible. The paper proposes to use the road network within which the objects are assumed to move for predicting their future positions. The paper presents algorithms that modify an initial road-network representation, so that it works better as a basis for predicting an object's position; it proposes to use known movement patterns of the object, in the form of routes; and it proposes to use acceleration profiles together with the routes. Using real GPS…

  16. Techniques for efficient road-network-based tracking of moving objects

    DEFF Research Database (Denmark)

    Civilis, A.; Jensen, Christian Søndergaard; Pakalnis, Stardas

    2005-01-01

    With the continued advances in wireless communications, geo-positioning, and consumer electronics, an infrastructure is emerging that enables location-based services that rely on the tracking of the continuously changing positions of entire populations of service users, termed moving objects. The main issue considered is how to represent the location of a moving object in a database so that tracking can be done with as few updates as possible. The paper proposes to use the road network within which the objects are assumed to move for predicting their future positions. The paper presents algorithms that modify an initial road-network representation, so that it works better as a basis for predicting an object's position; it proposes to use known movement patterns of the object, in the form of routes; and it proposes to use acceleration profiles together with the routes. Using real GPS…

  17. Human rights literacy: Moving towards rights-based education and ...

    African Journals Online (AJOL)

    Our theoretical framework examines the continual process of moving towards an open and democratic society through the facilitation of human rights literacy, rights-based education and transformative action. We focus specifically on understandings of dignity, equality and freedom, as both rights (legal claims) and values ...

  18. Controllability for a Wave Equation with Moving Boundary

    Directory of Open Access Journals (Sweden)

    Lizhi Cui

    2014-01-01

    Full Text Available We investigate the controllability of a one-dimensional wave equation in domains with a moving boundary. This model characterizes small vibrations of a stretched elastic string when one of the two endpoints varies. When the speed of the moving endpoint is less than 1-1/e, by the Hilbert uniqueness method, the sidewise energy estimates method, and the multiplier method, we obtain partial Dirichlet boundary controllability. Moreover, we give a sharper estimate on the controllability time that depends only on the speed of the moving endpoint.

  19. Procedure manual for the estimation of average indoor radon-daughter concentrations using the radon grab-sampling method

    International Nuclear Information System (INIS)

    George, J.L.

    1986-04-01

    The US Department of Energy (DOE) Office of Remedial Action and Waste Technology established the Technical Measurements Center to provide standardization, calibration, comparability, verification of data, quality assurance, and cost-effectiveness for the measurement requirements of DOE remedial action programs. One of the remedial-action measurement needs is the estimation of average indoor radon-daughter concentration. One method for accomplishing such estimations in support of DOE remedial action programs is the radon grab-sampling method. This manual describes procedures for radon grab sampling, with the application specifically directed to the estimation of average indoor radon-daughter concentration (RDC) in highly ventilated structures. This particular application of the measurement method is for cases where RDC estimates derived from long-term integrated measurements under occupied conditions are below the standard and where the structure being evaluated is considered to be highly ventilated. The radon grab-sampling method requires that sampling be conducted under standard maximized conditions. Briefly, the procedure for radon grab sampling involves the following steps: selection of sampling and counting equipment; sample acquisition and processing, including data reduction; calibration of equipment, including provisions to correct for pressure effects when sampling at various elevations; and incorporation of quality-control and assurance measures. This manual describes each of the above steps in detail and presents an example of a step-by-step radon grab-sampling procedure using a scintillation cell

  20. THE VELOCITY DISTRIBUTION OF NEARBY STARS FROM HIPPARCOS DATA. II. THE NATURE OF THE LOW-VELOCITY MOVING GROUPS

    International Nuclear Information System (INIS)

    Bovy, Jo; Hogg, David W.

    2010-01-01

    The velocity distribution of nearby stars (≲100 pc) contains many overdensities or 'moving groups', clumps of comoving stars, that are inconsistent with the standard assumption of an axisymmetric, time-independent, and steady-state Galaxy. We study the age and metallicity properties of the low-velocity moving groups based on the reconstruction of the local velocity distribution in Paper I of this series. We perform stringent, conservative hypothesis testing to establish for each of these moving groups whether it could conceivably consist of a coeval population of stars. We conclude that they do not: the moving groups are neither trivially associated with their eponymous open clusters nor with any other inhomogeneous star formation event. Concerning a possible dynamical origin of the moving groups, we test whether any of the moving groups has a higher or lower metallicity than the background population of thin disk stars, as would generically be the case if the moving groups are associated with resonances of the bar or spiral structure. We find clear evidence that the Hyades moving group has higher than average metallicity and weak evidence that the Sirius moving group has lower than average metallicity, which could indicate that these two groups are related to the inner Lindblad resonance of the spiral structure. Further, we find weak evidence that the Hercules moving group has higher than average metallicity, as would be the case if it is associated with the bar's outer Lindblad resonance. The Pleiades moving group shows no clear metallicity anomaly, arguing against a common dynamical origin for the Hyades and Pleiades groups. Overall, however, the moving groups are barely distinguishable from the background population of stars, raising the likelihood that the moving groups are associated with transient perturbations.

  1. Robust preprocessing for stimulus-based functional MRI of the moving fetus.

    Science.gov (United States)

    You, Wonsang; Evangelou, Iordanis E; Zun, Zungho; Andescavage, Nickie; Limperopoulos, Catherine

    2016-04-01

    Fetal motion manifests as signal degradation and image artifact in the acquired time series of blood oxygen level dependent (BOLD) functional magnetic resonance imaging (fMRI) studies. We present a robust preprocessing pipeline to specifically address fetal and placental motion-induced artifacts in stimulus-based fMRI with slowly cycled block design in the living fetus. In the proposed pipeline, motion correction is optimized to the experimental paradigm, and it is performed separately in each phase as well as in each region of interest (ROI), recognizing that each phase and organ experiences different types of motion. To obtain the averaged BOLD signals for each ROI, both misaligned volumes and noisy voxels are automatically detected and excluded, and the missing data are then imputed by statistical estimation based on local polynomial smoothing. Our experimental results demonstrate that the proposed pipeline was effective in mitigating the motion-induced artifacts in stimulus-based fMRI data of the fetal brain and placenta.

  2. Attenuation correction for freely moving small animal brain PET studies based on a virtual scanner geometry

    International Nuclear Information System (INIS)

    Angelis, G I; Kyme, A Z; Ryder, W J; Fulton, R R; Meikle, S R

    2014-01-01

Attenuation correction in positron emission tomography brain imaging of freely moving animals is a very challenging problem, since the torso of the animal is often within the field of view and introduces a non-negligible attenuating factor that can degrade the quantitative accuracy of the reconstructed images. In the context of unrestrained small animal imaging, estimation of the attenuation correction factors without the need for a transmission scan is highly desirable. An attractive approach that avoids the need for a transmission scan involves the generation of the hull of the animal's head based on the reconstructed motion-corrected emission images. However, this approach ignores the attenuation introduced by the animal's torso. In this work, we propose a virtual scanner geometry which moves in synchrony with the animal's head and discriminates between those events that traversed only the animal's head (and therefore can be accurately compensated for attenuation) and those that might have also traversed the animal's torso. For each recorded pose of the animal's head a new virtual scanner geometry is defined, and therefore a new system matrix must be calculated, leading to a time-varying system matrix. This new approach was evaluated on phantom data acquired on the microPET Focus 220 scanner using a custom-made phantom and step-wise motion. Results showed that when the animal's torso is within the FOV and not appropriately accounted for during attenuation correction, it can lead to bias of up to 10%. Attenuation correction was more accurate when the virtual scanner was employed, leading to improved quantitative estimates (bias < 2%), without the need to account for the attenuation introduced by the extraneous compartment. Although the proposed method requires increased computational resources, it can provide a reliable approach towards quantitatively accurate attenuation correction for freely moving animal studies.

  3. Plans, Patterns, and Move Categories Guiding a Highly Selective Search

    Science.gov (United States)

    Trippen, Gerhard

    In this paper we present our ideas for an Arimaa-playing program (also called a bot) that uses plans and pattern matching to guide a highly selective search. We restrict move generation to moves in certain move categories to reduce the number of moves considered by the bot significantly. Arimaa is a modern board game that can be played with a standard Chess set. However, the rules of the game are not at all like those of Chess. Furthermore, Arimaa was designed to be as simple and intuitive as possible for humans, yet challenging for computers. While all established Arimaa bots use alpha-beta search with a variety of pruning techniques and other heuristics ending in an extensive positional leaf node evaluation, our new bot, Rat, starts with a positional evaluation of the current position. Based on features found in the current position - supported by pattern matching using a directed position graph - our bot Rat decides which of a given set of plans to follow. The plan then dictates what types of moves can be chosen. This is another major difference from bots that generate "all" possible moves for a particular position. Rat is only allowed to generate moves that belong to certain categories. Leaf nodes are evaluated only by a straightforward material evaluation to help avoid moves that lose material. This highly selective search looks, on average, at only 5 moves out of 5,000 to over 40,000 possible moves in a middle game position.

  4. Improved averaging for non-null interferometry

    Science.gov (United States)

    Fleig, Jon F.; Murphy, Paul E.

    2013-09-01

    Arithmetic averaging of interferometric phase measurements is a well-established method for reducing the effects of time varying disturbances, such as air turbulence and vibration. Calculating a map of the standard deviation for each pixel in the average map can provide a useful estimate of its variability. However, phase maps of complex and/or high density fringe fields frequently contain defects that severely impair the effectiveness of simple phase averaging and bias the variability estimate. These defects include large or small-area phase unwrapping artifacts, large alignment components, and voids that change in number, location, or size. Inclusion of a single phase map with a large area defect into the average is usually sufficient to spoil the entire result. Small-area phase unwrapping and void defects may not render the average map metrologically useless, but they pessimistically bias the variance estimate for the overwhelming majority of the data. We present an algorithm that obtains phase average and variance estimates that are robust against both large and small-area phase defects. It identifies and rejects phase maps containing large area voids or unwrapping artifacts. It also identifies and prunes the unreliable areas of otherwise useful phase maps, and removes the effect of alignment drift from the variance estimate. The algorithm has several run-time adjustable parameters to adjust the rejection criteria for bad data. However, a single nominal setting has been effective over a wide range of conditions. This enhanced averaging algorithm can be efficiently integrated with the phase map acquisition process to minimize the number of phase samples required to approach the practical noise floor of the metrology environment.

  5. Role of moving planes and moving spheres following Dupin cyclides

    KAUST Repository

    Jia, Xiaohong

    2014-03-01

    We provide explicit representations of three moving planes that form a μ-basis for a standard Dupin cyclide. We also show how to compute μ-bases for Dupin cyclides in general position and orientation from their implicit equations. In addition, we describe the role of moving planes and moving spheres in bridging between the implicit and rational parametric representations of these cyclides. © 2014 Elsevier B.V.

  7. Depth-Based Detection of Standing-Pigs in Moving Noise Environments

    Directory of Open Access Journals (Sweden)

    Jinseong Kim

    2017-11-01

In a surveillance camera environment, the detection of standing-pigs in real-time is an important issue towards the final goal of 24-h tracking of individual pigs. In this study, we focus on depth-based detection of standing-pigs with “moving noises”, which appear every night in a commercial pig farm, but have not been reported yet. We first apply a spatiotemporal interpolation technique to remove the moving noises occurring in the depth images. Then, we detect the standing-pigs by utilizing the undefined depth values around them. Our experimental results show that this method is effective for detecting standing-pigs at night, in terms of both cost-effectiveness (using a low-cost Kinect depth sensor) and accuracy (i.e., 94.47%), even with severe moving noises occluding up to half of an input depth image. Furthermore, without any time-consuming technique, the proposed method can be executed in real-time.

  8. Gaussian particle filter based pose and motion estimation

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

Determination of relative three-dimensional (3D) position, orientation, and relative motion between two reference frames is an important problem in robotic guidance, manipulation, and assembly as well as in other fields such as photogrammetry. A solution to the pose and motion estimation problem that uses two-dimensional (2D) intensity images from a single camera is desirable for real-time applications. The difficulty in performing this measurement is that the process of projecting 3D object features to 2D images is a nonlinear transformation. In this paper, the 3D transformation is modeled as a nonlinear stochastic system with the state estimation providing six degrees-of-freedom motion and position values, using line features in the image plane as measurement inputs and dual quaternions to represent both rotation and translation in a unified notation. A filtering method called the Gaussian particle filter (GPF), based on the particle filtering concept, is presented for 3D pose and motion estimation of a moving target from monocular image sequences. The method has been implemented with simulated data, and simulation results are provided along with comparisons to the extended Kalman filter (EKF) and the unscented Kalman filter (UKF) to show the relative advantages of the GPF. Simulation results showed that GPF is a superior alternative to EKF and UKF.
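
    As a concrete illustration of the filter family the abstract describes, the following is a minimal Gaussian particle filter step for a scalar nonlinear state-space model: particles are drawn from the current Gaussian posterior, propagated through the dynamics, reweighted by the measurement likelihood, and a new Gaussian is fitted by moment matching. The toy dynamics, measurement function, and noise levels are illustrative assumptions, not the paper's 6-DOF dual-quaternion model.

```python
import numpy as np

rng = np.random.default_rng(0)

def gpf_step(mean, var, z, f, h, q_var, r_var, n=500):
    """One Gaussian-particle-filter cycle: sample from the Gaussian posterior,
    propagate through the dynamics, reweight by the measurement likelihood,
    then refit a Gaussian by moment matching."""
    x = rng.normal(mean, np.sqrt(var), n)              # sample posterior
    x = f(x) + rng.normal(0.0, np.sqrt(q_var), n)      # propagate particles
    w = np.exp(-0.5 * (z - h(x)) ** 2 / r_var)         # measurement weights
    w /= w.sum()
    new_mean = np.sum(w * x)                           # Gaussian refit (mean)
    new_var = np.sum(w * (x - new_mean) ** 2)          # Gaussian refit (var)
    return new_mean, new_var

# Toy system (assumed): x_k = 0.9 x_{k-1} + w_k, z_k = x_k**2 / 20 + v_k
mean, var = 0.0, 1.0
for z in [0.4, 0.7, 1.1]:                              # hypothetical measurements
    mean, var = gpf_step(mean, var, z,
                         f=lambda x: 0.9 * x,
                         h=lambda x: x ** 2 / 20.0,
                         q_var=0.1, r_var=0.05)
print(mean, var)
```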

  9. Benefits of dominance over additive models for the estimation of average effects in the presence of dominance

    NARCIS (Netherlands)

    Duenk, Pascal; Calus, Mario P.L.; Wientjes, Yvonne C.J.; Bijma, Piter

    2017-01-01

    In quantitative genetics, the average effect at a single locus can be estimated by an additive (A) model, or an additive plus dominance (AD) model. In the presence of dominance, the AD-model is expected to be more accurate, because the A-model falsely assumes that residuals are independent and

  10. Stochastic Averaging and Stochastic Extremum Seeking

    CERN Document Server

    Liu, Shu-Jun

    2012-01-01

    Stochastic Averaging and Stochastic Extremum Seeking develops methods of mathematical analysis inspired by the interest in reverse engineering  and analysis of bacterial  convergence by chemotaxis and to apply similar stochastic optimization techniques in other environments. The first half of the text presents significant advances in stochastic averaging theory, necessitated by the fact that existing theorems are restricted to systems with linear growth, globally exponentially stable average models, vanishing stochastic perturbations, and prevent analysis over infinite time horizon. The second half of the text introduces stochastic extremum seeking algorithms for model-free optimization of systems in real time using stochastic perturbations for estimation of their gradients. Both gradient- and Newton-based algorithms are presented, offering the user the choice between the simplicity of implementation (gradient) and the ability to achieve a known, arbitrary convergence rate (Newton). The design of algorithms...

  11. A Simple and Robust Sliding Mode Velocity Observer for Moving Coil Actuators in Digital Hydraulic Valves

    DEFF Research Database (Denmark)

    Nørgård, Christian; Schmidt, Lasse; Bech, Michael Møller

    2016-01-01

This paper focuses on estimating the velocity and position of fast switching digital hydraulic valves actuated by electromagnetic moving coil actuators, based on measurements of the coil current and voltage. The velocity is estimated by a simple first-order sliding mode observer architecture and the position is estimated by integrating the estimated velocity. The binary operation of digi-valves enables limiting and resetting the position estimate since the moving member is switched between the mechanical end-stops of the valve. This enables accurate tracking since drifting effects due to measurement noise and integration of errors in the velocity estimate may be circumvented. The proposed observer architecture is presented along with stability proofs and initial experimental results. To reveal the optimal observer performance, an optimization of the observer parameters is carried out. Subsequently...

  12. Weighted estimates for the averaging integral operator

    Czech Academy of Sciences Publication Activity Database

    Opic, Bohumír; Rákosník, Jiří

    2010-01-01

    Roč. 61, č. 3 (2010), s. 253-262 ISSN 0010-0757 R&D Projects: GA ČR GA201/05/2033; GA ČR GA201/08/0383 Institutional research plan: CEZ:AV0Z10190503 Keywords : averaging integral operator * weighted Lebesgue spaces * weights Subject RIV: BA - General Mathematics Impact factor: 0.474, year: 2010 http://link.springer.com/article/10.1007%2FBF03191231

  13. I Move: systematic development of a web-based computer tailored physical activity intervention, based on motivational interviewing and self-determination theory

    Science.gov (United States)

    2014-01-01

Background This article describes the systematic development of the I Move intervention: a web-based computer tailored physical activity promotion intervention, aimed at increasing and maintaining physical activity among adults. This intervention is based on the theoretical insights and practical applications of self-determination theory and motivational interviewing. Methods/design Since developing interventions in a systematically planned way increases the likelihood of effectiveness, we used the Intervention Mapping protocol to develop the I Move intervention. In this article, we first describe how we proceeded through each of the six steps of the Intervention Mapping protocol. After that, we describe the content of the I Move intervention and elaborate on the planned randomized controlled trial. Discussion By integrating self-determination theory and motivational interviewing in web-based computer tailoring, the I Move intervention introduces a more participant-centered approach than traditional tailored interventions. Adopting this approach might enhance computer tailored physical activity interventions both in terms of intervention effectiveness and user appreciation. We will evaluate this in a randomized controlled trial, by comparing the I Move intervention to a more traditional web-based computer tailored intervention. Trial registration NTR4129 PMID:24580802

  14. A new algorithm for recursive estimation of ARMA parameters in reactor noise analysis

    International Nuclear Information System (INIS)

    Tran Dinh Tri

    1992-01-01

In this paper a new recursive algorithm for estimating the parameters of the autoregressive moving average (ARMA) model from measured data is presented. The Yule-Walker equations for the case of the ARMA model are derived from the ARMA equation with innovations. The recursive algorithm is based on choosing an appropriate form of the operator functions and a suitable representation of the (n + 1)-th order operator functions in terms of those of lower order. Two cases, when the order of the AR part is equal to that of the MA part, and the general case, were considered. (Author)
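
    For the pure AR special case, the Yule-Walker equations mentioned above reduce to a Toeplitz linear system in the sample autocovariances. The sketch below solves that system directly; it illustrates the equations, not the paper's recursive ARMA algorithm, and the function names are ours.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def yule_walker(x, p):
    """Estimate AR(p) coefficients and innovation variance from data."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    # biased sample autocovariances r[0..p]
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(p + 1)])
    phi = solve_toeplitz(r[:p], r[1:p + 1])   # Toeplitz system R phi = r
    sigma2 = r[0] - np.dot(phi, r[1:p + 1])   # innovation variance
    return phi, sigma2

# Example: recover the coefficients of x_t = 0.6 x_{t-1} - 0.3 x_{t-2} + e_t
rng = np.random.default_rng(1)
x = np.zeros(5000)
for t in range(2, 5000):
    x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + rng.normal()
print(yule_walker(x, p=2))   # close to ([0.6, -0.3], 1.0)
```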

  15. Estimation of daytime net ecosystem CO2 exchange over balsam fir forests in eastern Canada : combining averaged tower-based flux measurements with remotely sensed MODIS data

    International Nuclear Information System (INIS)

    Hassan, Q.K.; Bourque, C.P.A.; Meng, F-R.

    2006-01-01

Considerable attention has been placed on the unprecedented increases in atmospheric carbon dioxide (CO2) emissions and associated changes in global climate. This article developed a practical approach for estimating daytime net CO2 fluxes generated over balsam fir dominated forest ecosystems in the Atlantic Maritime ecozone of eastern Canada. The study objectives were to characterize the light use efficiency and ecosystem respiration for young to intermediate-aged balsam fir forest ecosystems in New Brunswick; relate tower-based measurements of daytime net ecosystem exchange (NEE) to absorbed photosynthetically active radiation (APAR); use a digital elevation model of the province to enhance spatial calculations of daily photosynthetically active radiation and APAR under cloud-free conditions; and generate a spatial calculation of daytime NEE for a balsam fir dominated region in northwestern New Brunswick. The article identified the study area and presented the data requirements and methodology. It was shown that the seasonally averaged daytime NEE and APAR values are strongly correlated. 36 refs., 2 tabs., 10 figs

  16. Fitting a function to time-dependent ensemble averaged data.

    Science.gov (United States)

    Fogelmark, Karl; Lomholt, Michael A; Irbäck, Anders; Ambjörnsson, Tobias

    2018-05-03

Time-dependent ensemble averages, i.e., trajectory-based averages of some observable, are of importance in many fields of science. A crucial objective when interpreting such data is to fit these averages (for instance, squared displacements) with a function and extract parameters (such as diffusion constants). A commonly overlooked challenge in such function fitting procedures is that fluctuations around mean values, by construction, exhibit temporal correlations. We show that the only available general-purpose function fitting methods, the correlated chi-square method and the weighted least squares method (which neglects correlation), fail at either robust parameter estimation or accurate error estimation. We remedy this by deriving a new closed-form error estimation formula for weighted least squares fitting. The new formula uses the full covariance matrix, i.e., rigorously includes temporal correlations, but is free of the robustness issues inherent to the correlated chi-square method. We demonstrate its accuracy in four examples of importance in many fields: Brownian motion, damped harmonic oscillation, fractional Brownian motion and continuous time random walks. We also successfully apply our method, weighted least squares including correlation in error estimation (WLS-ICE), to particle tracking data. The WLS-ICE method is applicable to arbitrary fit functions, and we provide publicly available WLS-ICE software.
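
    The core idea, propagating the full covariance matrix of the averaged data into the parameter error estimate, can be sketched with ordinary generalized least squares for a linear model. This is not the authors' WLS-ICE code (which keeps a weighted fit and uses the full covariance only in the error formula); the design matrix and covariance model below are assumptions for illustration.

```python
import numpy as np

def gls_fit(X, y, C):
    """Return beta_hat and its covariance for y = X beta + noise, Cov = C."""
    Ci = np.linalg.inv(C)
    A = X.T @ Ci @ X
    beta = np.linalg.solve(A, X.T @ Ci @ y)
    beta_cov = np.linalg.inv(A)      # rigorous errors incl. correlations
    return beta, beta_cov

# Toy example: MSD(t) = 2*D*t fitted from temporally correlated averages
t = np.arange(1, 6, dtype=float)
X = t[:, None]                                         # one-parameter design
C = 0.01 * np.exp(-np.abs(t[:, None] - t[None, :]))    # assumed noise model
rng = np.random.default_rng(2)
y = 2 * 0.5 * t + rng.multivariate_normal(np.zeros(5), C)
beta, beta_cov = gls_fit(X, y, C)
print(beta[0] / 2, np.sqrt(beta_cov[0, 0]) / 2)        # D estimate and std error
```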

  17. Seasonal adjustment methods and real time trend-cycle estimation

    CERN Document Server

    Bee Dagum, Estela

    2016-01-01

    This book explores widely used seasonal adjustment methods and recent developments in real time trend-cycle estimation. It discusses in detail the properties and limitations of X12ARIMA, TRAMO-SEATS and STAMP - the main seasonal adjustment methods used by statistical agencies. Several real-world cases illustrate each method and real data examples can be followed throughout the text. The trend-cycle estimation is presented using nonparametric techniques based on moving averages, linear filters and reproducing kernel Hilbert spaces, taking recent advances into account. The book provides a systematical treatment of results that to date have been scattered throughout the literature. Seasonal adjustment and real time trend-cycle prediction play an essential part at all levels of activity in modern economies. They are used by governments to counteract cyclical recessions, by central banks to control inflation, by decision makers for better modeling and planning and by hospitals, manufacturers, builders, transportat...

  18. Evaluating and improving count-based population inference: A case study from 31 years of monitoring Sandhill Cranes

    Science.gov (United States)

    Gerber, Brian D.; Kendall, William L.

    2017-01-01

    Monitoring animal populations can be difficult. Limited resources often force monitoring programs to rely on unadjusted or smoothed counts as an index of abundance. Smoothing counts is commonly done using a moving-average estimator to dampen sampling variation. These indices are commonly used to inform management decisions, although their reliability is often unknown. We outline a process to evaluate the biological plausibility of annual changes in population counts and indices from a typical monitoring scenario and compare results with a hierarchical Bayesian time series (HBTS) model. We evaluated spring and fall counts, fall indices, and model-based predictions for the Rocky Mountain population (RMP) of Sandhill Cranes (Antigone canadensis) by integrating juvenile recruitment, harvest, and survival into a stochastic stage-based population model. We used simulation to evaluate population indices from the HBTS model and the commonly used 3-yr moving average estimator. We found counts of the RMP to exhibit biologically unrealistic annual change, while the fall population index was largely biologically realistic. HBTS model predictions suggested that the RMP changed little over 31 yr of monitoring, but the pattern depended on assumptions about the observational process. The HBTS model fall population predictions were biologically plausible if observed crane harvest mortality was compensatory up to natural mortality, as empirical evidence suggests. Simulations indicated that the predicted mean of the HBTS model was generally a more reliable estimate of the true population than population indices derived using a moving 3-yr average estimator. Practitioners could gain considerable advantages from modeling population counts using a hierarchical Bayesian autoregressive approach. Advantages would include: (1) obtaining measures of uncertainty; (2) incorporating direct knowledge of the observational and population processes; (3) accommodating missing years of data; and (4
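
    For reference, the 3-yr moving-average index the abstract compares against is just a centered running mean of the raw counts; the numbers below are hypothetical, not the Sandhill Crane data.

```python
import numpy as np

counts = np.array([1800., 2100., 1750., 2300., 1950., 2050., 2200.])
index = np.convolve(counts, np.ones(3) / 3, mode="valid")  # centered 3-yr mean
print(index)   # smoothed index; two years shorter than the raw series
```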

  19. Adaptive Spontaneous Transitions between Two Mechanisms of Numerical Averaging.

    Science.gov (United States)

    Brezis, Noam; Bronfman, Zohar Z; Usher, Marius

    2015-06-04

    We investigated the mechanism with which humans estimate numerical averages. Participants were presented with 4, 8 or 16 (two-digit) numbers, serially and rapidly (2 numerals/second) and were instructed to convey the sequence average. As predicted by a dual, but not a single-component account, we found a non-monotonic influence of set-size on accuracy. Moreover, we observed a marked decrease in RT as set-size increases and RT-accuracy tradeoff in the 4-, but not in the 16-number condition. These results indicate that in accordance with the normative directive, participants spontaneously employ analytic/sequential thinking in the 4-number condition and intuitive/holistic thinking in the 16-number condition. When the presentation rate is extreme (10 items/sec) we find that, while performance still remains high, the estimations are now based on intuitive processing. The results are accounted for by a computational model postulating population-coding underlying intuitive-averaging and working-memory-mediated symbolic procedures underlying analytical-averaging, with flexible allocation between the two.

  20. Timescale Halo: Average-Speed Targets Elicit More Positive and Less Negative Attributions than Slow or Fast Targets

    Science.gov (United States)

    Hernandez, Ivan; Preston, Jesse Lee; Hepler, Justin

    2014-01-01

    Research on the timescale bias has found that observers perceive more capacity for mind in targets moving at an average speed, relative to slow or fast moving targets. The present research revisited the timescale bias as a type of halo effect, where normal-speed people elicit positive evaluations and abnormal-speed (slow and fast) people elicit negative evaluations. In two studies, participants viewed videos of people walking at a slow, average, or fast speed. We find evidence for a timescale halo effect: people walking at an average-speed were attributed more positive mental traits, but fewer negative mental traits, relative to slow or fast moving people. These effects held across both cognitive and emotional dimensions of mind and were mediated by overall positive/negative ratings of the person. These results suggest that, rather than eliciting greater perceptions of general mind, the timescale bias may reflect a generalized positivity toward average speed people relative to slow or fast moving people. PMID:24421882

  1. Graph Sampling for Covariance Estimation

    KAUST Repository

    Chepuri, Sundeep Prabhakar

    2017-04-25

    In this paper the focus is on subsampling as well as reconstructing the second-order statistics of signals residing on nodes of arbitrary undirected graphs. Second-order stationary graph signals may be obtained by graph filtering zero-mean white noise and they admit a well-defined power spectrum whose shape is determined by the frequency response of the graph filter. Estimating the graph power spectrum forms an important component of stationary graph signal processing and related inference tasks such as Wiener prediction or inpainting on graphs. The central result of this paper is that by sampling a significantly smaller subset of vertices and using simple least squares, we can reconstruct the second-order statistics of the graph signal from the subsampled observations, and more importantly, without any spectral priors. To this end, both a nonparametric approach as well as parametric approaches including moving average and autoregressive models for the graph power spectrum are considered. The results specialize for undirected circulant graphs in that the graph nodes leading to the best compression rates are given by the so-called minimal sparse rulers. A near-optimal greedy algorithm is developed to design the subsampling scheme for the non-parametric and the moving average models, whereas a particular subsampling scheme that allows linear estimation for the autoregressive model is proposed. Numerical experiments on synthetic as well as real datasets related to climatology and processing handwritten digits are provided to demonstrate the developed theory.

  2. Uncertainties of estimating average radon and radon decay product concentrations in occupied houses

    International Nuclear Information System (INIS)

    Ronca-Battista, M.; Magno, P.; Windham, S.

    1986-01-01

    Radon and radon decay product measurements made in up to 68 Butte, Montana homes over a period of 18 months were used to estimate the uncertainty in estimating long-term average radon and radon decay product concentrations from a short-term measurement. This analysis was performed in support of the development of radon and radon decay product measurement protocols by the Environmental Protection Agency (EPA). The results of six measurement methods were analyzed: continuous radon and working level monitors, radon progeny integrating sampling units, alpha-track detectors, and grab radon and radon decay product techniques. Uncertainties were found to decrease with increasing sampling time and to be smaller when measurements were conducted during the winter months. In general, radon measurements had a smaller uncertainty than radon decay product measurements. As a result of this analysis, the EPA measurements protocols specify that all measurements be made under closed-house (winter) conditions, and that sampling times of at least a 24 hour period be used when the measurement will be the basis for a decision about remedial action or long-term health risks. 13 references, 3 tables

  3. Moving State Marine SINS Initial Alignment Based on High Degree CKF

    Directory of Open Access Journals (Sweden)

    Yong-Gang Zhang

    2014-01-01

A new moving-state marine initial alignment method for strap-down inertial navigation systems (SINS) is proposed based on the high-degree cubature Kalman filter (CKF), which can capture higher-order Taylor expansion terms of the nonlinear alignment model than the existing third-degree CKF, unscented Kalman filter, and central difference Kalman filter, and improve the accuracy of initial alignment under large heading misalignment angle conditions. Simulation results show the efficiency and advantage of the proposed initial alignment method as compared with existing initial alignment methods for moving-state SINS initial alignment with a large heading misalignment angle.

  4. Multiple Moving Obstacles Avoidance of Service Robot using Stereo Vision

    Directory of Open Access Journals (Sweden)

    Achmad Jazidie

    2011-12-01

In this paper, we propose a multiple moving obstacle avoidance method using stereo vision for service robots in indoor environments. We assume that this model of service robot is used to deliver a cup to a recognized customer from the starting point to the destination. The contribution of this research is a new method for multiple moving obstacle avoidance with a Bayesian approach using a stereo camera. We have developed and introduced three main modules: to recognize faces, to identify multiple moving obstacles, and to maneuver the robot. A group of people who are walking is tracked as a multiple moving obstacle, and the speed, direction, and distance of the moving obstacles are estimated by the stereo camera so that the robot can maneuver to avoid a collision. To overcome the inaccuracies of the vision sensor, the Bayesian approach is used to estimate the absence and direction of obstacles. We present the results of experiments with the service robot, called Srikandi III, which uses our proposed method, and we also evaluate its performance. Experiments showed that our proposed method works well, and the Bayesian approach proved to increase the estimation performance for the absence and direction of moving obstacles.

  5. Moving target detection based on temporal-spatial information fusion for infrared image sequences

    Science.gov (United States)

    Toing, Wu-qin; Xiong, Jin-yu; Zeng, An-jun; Wu, Xiao-ping; Xu, Hao-peng

    2009-07-01

    Moving target detection and localization is one of the most fundamental tasks in visual surveillance. In this paper, through analyzing the advantages and disadvantages of the traditional approaches about moving target detection, a novel approach based on temporal-spatial information fusion is proposed for moving target detection. The proposed method combines the spatial feature in single frame and the temporal properties within multiple frames of an image sequence of moving target. First, the method uses the spatial image segmentation for target separation from background and uses the local temporal variance for extracting targets and wiping off the trail artifact. Second, the logical "and" operator is used to fuse the temporal and spatial information. In the end, to the fusion image sequence, the morphological filtering and blob analysis are used to acquire exact moving target. The algorithm not only requires minimal computation and memory but also quickly adapts to the change of background and environment. Comparing with other methods, such as the KDE, the Mixture of K Gaussians, etc., the simulation results show the proposed method has better validity and higher adaptive for moving target detection, especially in infrared image sequences with complex illumination change, noise change, and so on.
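
    A minimal sketch of the fusion pipeline described above: a spatial mask from intensity segmentation is combined by logical AND with a temporal mask from local temporal variance, followed by morphological filtering and blob analysis. The thresholds and the random stand-in frames are assumptions, not the paper's data.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(3)
frames = rng.random((5, 64, 64))            # stand-in IR image sequence

spatial_mask = frames[2] > 0.6              # crude segmentation of one frame
temporal_mask = frames.var(axis=0) > 0.07   # local temporal variance test
fused = np.logical_and(spatial_mask, temporal_mask)   # the logical "and" fusion
cleaned = ndimage.binary_opening(fused, structure=np.ones((3, 3)))
labels, n_blobs = ndimage.label(cleaned)    # blob analysis -> candidate targets
print(n_blobs)
```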

  6. Target Tracking in 3-D Using Estimation Based Nonlinear Control Laws for UAVs

    Directory of Open Access Journals (Sweden)

    Mousumi Ahmed

    2016-02-01

This paper presents an estimation-based, backstepping-like control law design for an Unmanned Aerial Vehicle (UAV) to track a moving target in 3-D space. A ground-based sensor or an onboard seeker antenna provides range, azimuth angle, and elevation angle measurements to a chaser UAV that implements an extended Kalman filter (EKF) to estimate the full state of the target. A nonlinear controller then utilizes this estimated target state and the chaser's state to provide speed, flight path, and course/heading angle commands to the chaser UAV. Tracking performance with respect to measurement uncertainty is evaluated for three cases: (1) stationary white noise; (2) stationary colored noise; and (3) non-stationary (range-correlated) white noise. Furthermore, in an effort to improve tracking performance, the measurement model is made more realistic by taking into consideration range-dependent uncertainties in the measurements, i.e., as the chaser closes in on the target, measurement uncertainties are reduced in the EKF, thus providing the UAV with more accurate control commands. Simulation results for these cases are shown to illustrate target state estimation and trajectory tracking performance.

  7. Estimating Stochastic Volatility Models using Prediction-based Estimating Functions

    DEFF Research Database (Denmark)

    Lunde, Asger; Brix, Anne Floor

In this paper prediction-based estimating functions (PBEFs), introduced in Sørensen (2000), are reviewed and PBEFs for the Heston (1993) stochastic volatility model are derived. The finite sample performance of the PBEF-based estimator is investigated in a Monte Carlo study, and compared to the performance of the GMM estimator based on conditional moments of integrated volatility from Bollerslev and Zhou (2002). The case where the observed log-price process is contaminated by i.i.d. market microstructure (MMS) noise is also investigated. First, the impact of MMS noise on the parameter estimates from... to correctly account for the noise are investigated. Our Monte Carlo study shows that the estimator based on PBEFs outperforms the GMM estimator, both in the setting with and without MMS noise. Finally, an empirical application investigates the possible challenges and general performance of applying the PBEF...

  8. Wavelet-based spectral finite element dynamic analysis for an axially moving Timoshenko beam

    Science.gov (United States)

    Mokhtari, Ali; Mirdamadi, Hamid Reza; Ghayour, Mostafa

    2017-08-01

    In this article, wavelet-based spectral finite element (WSFE) model is formulated for time domain and wave domain dynamic analysis of an axially moving Timoshenko beam subjected to axial pretension. The formulation is similar to conventional FFT-based spectral finite element (SFE) model except that Daubechies wavelet basis functions are used for temporal discretization of the governing partial differential equations into a set of ordinary differential equations. The localized nature of Daubechies wavelet basis functions helps to rule out problems of SFE model due to periodicity assumption, especially during inverse Fourier transformation and back to time domain. The high accuracy of WSFE model is then evaluated by comparing its results with those of conventional finite element and SFE results. The effects of moving beam speed and axial tensile force on vibration and wave characteristics, and static and dynamic stabilities of moving beam are investigated.

  9. Scene depth estimation using a moving camera

    International Nuclear Information System (INIS)

    Sune, Jean-Luc

    1995-01-01

This thesis presents a solution to the depth-from-motion problem. The movement of the monocular observer is known. We have focused our research on a direct method which avoids the optical flow estimation required by classical approaches. The direct application of this method is not exploitable: we need to define a validity domain to extract the set of image points where it is possible to get a correct depth value. Also, we use a multi-scale approach to improve the estimation of derivatives. The depth estimation for a given scale is obtained by the minimization of an energy function established in the context of statistical regularization. A fusion operator, merging the various spatial and temporal scales, has been used to estimate the final depth map. A correction-prediction scheme is used to integrate the temporal information from an image sequence. The predicted depth map is considered as an additional observation and integrated in the fusion process. At each time, an error depth map is associated with the estimated depth map. (author) [fr]

  10. Effect of Broadband Nature of Marine Mammal Echolocation Clicks on Click-Based Population Density Estimates

    Science.gov (United States)

    2014-09-30

...will be applied also to other species such as the sperm whale (Physeter macrocephalus), whose high source level assures long-range detection, and amplifies... improve the accuracy of marine mammal density estimation based on counting echolocation clicks, and will be applicable to density estimates obtained...

  11. A stepwise validation of a wearable system for estimating energy expenditure in field-based research

    International Nuclear Information System (INIS)

    Rumo, Martin; Mäder, Urs; Amft, Oliver; Tröster, Gerhard

    2011-01-01

Regular physical activity (PA) is an important contributor to a healthy lifestyle. Currently, standard sensor-based methods to assess PA in field-based research rely on a single accelerometer mounted near the body's center of mass. This paper introduces a wearable system that estimates energy expenditure (EE) based on seven recognized activity types. The system was developed with data from 32 healthy subjects and consists of a chest-mounted heart rate belt and two accelerometers attached to a thigh and the dominant upper arm. The system was validated with 12 other subjects under restricted lab conditions and simulated free-living conditions against indirect calorimetry, as well as in subjects' habitual environments for 2 weeks against the doubly labeled water method. Our stepwise validation methodology gradually trades reference information from the lab against realistic data from the field. The average accuracy for EE estimation was 88% under restricted lab conditions, 55% under simulated free-living conditions, and 87% and 91% for the estimation of average daily EE over periods of 1 and 2 weeks, respectively.

  12. Moving event and moving participant in aspectual conceptions

    Directory of Open Access Journals (Sweden)

    Izutsu Katsunobu

    2016-06-01

This study advances an analysis of the event conception of aspectual forms in four East Asian languages: Ainu, Japanese, Korean, and Ryukyuan. As earlier studies point out, event conceptions can be divided into two major types: the moving-event type and the moving-participant type. All aspectual forms in Ainu and Korean, and most forms in Japanese and Ryukyuan, are based on one of these types of event conception. Moving-participant-oriented Ainu and moving-event-oriented Japanese occupy the two extremes, between which Korean and Ryukyuan stand. Notwithstanding the geographical relationships among the four languages, Ryukyuan is closer to Ainu than to Korean, whereas Korean is closer to Ainu than to Japanese.

  13. SU-E-I-08: An Inpaint-Based Interpolation Technique to Recover Blocked Information for Cone Beam CT with a Synchronized Moving Grid (SMOG)

    International Nuclear Information System (INIS)

    Kong, V; Zhang, H; Jin, J; Ren, L

    2014-01-01

Purpose: Synchronized moving grid (SMOG) is a promising technique to reduce scatter and ghost artifacts in cone beam computed tomography (CBCT). However, the grid blocks part of the image information in each projection, and multiple projections at the same gantry angle have to be taken to obtain full information. Because of the continuity of a patient's anatomy in the projection, the blocked information may be estimated by interpolation. This study aims to evaluate an inpainting-based interpolation approach to recover the missing information for CBCT reconstruction. Method: We used a simple region-based inpainting approach to interpolate the missing information. For a pixel to be interpolated, we divided the nearby regions having image information into 6 sub-regions: up-left, up-middle, up-right, down-left, down-middle, and down-right, each with 9 pixels. The average pixel value of each sub-region was calculated. These average values, along with the pixel location, were used to determine the interpolated pixel value. We compared our approach with the Criminisi Exemplar (CE) and total variation (TV) based inpainting techniques. Projection images of a Catphan phantom and a head phantom were used for the comparison. The SMOG was simulated by erasing the information (filling with “0”) of the areas in each projection corresponding to the grid. Results: For the Catphan, the processing time was 178, 45 and 0.98 minutes for CE, TV and our approach, respectively. The signal to noise ratio (SNR) was 19.4, 18.5 and 26.4 dB, correspondingly. For the head phantom, the processing time was 222, 45 and 0.93 minutes for CE, TV and our approach, respectively. The SNR was 24.6, 20.2 and 26.2 dB, correspondingly. Conclusion: We have developed a simple inpainting-based interpolation approach, which can recover some of the image information for SMOG-based CBCT imaging. This study is supported by NIH/NCI grant 1R01CA166948-01

  14. Average Gait Differential Image Based Human Recognition

    Directory of Open Access Journals (Sweden)

    Jinyan Chen

    2014-01-01

The difference between adjacent frames of human walking contains useful information for human gait identification. Based on this idea, a silhouette-difference-based human gait recognition method named average gait differential image (AGDI) is proposed in this paper. The AGDI is generated by the accumulation of the silhouette differences between adjacent frames. The advantage of this method lies in that, as a feature image, it can preserve both the kinetic and static information of walking. Compared to the gait energy image (GEI), the AGDI is better suited to representing the variation of silhouettes during walking. Two-dimensional principal component analysis (2DPCA) is used to extract features from the AGDI. Experiments on the CASIA dataset show that the AGDI has better identification and verification performance than the GEI. Compared to PCA, 2DPCA is a more efficient feature extraction method with lower memory consumption in gait-based recognition.

  15. MOVES: A knowledge-based system for maintenance planning for motor-operated valves

    International Nuclear Information System (INIS)

    Winter, M.

    1987-01-01

Over the past several years, knowledge-based expert systems have emerged as an important part of the general research area known as artificial intelligence. This paper describes a cooperative effort between faculty members at Iowa State University and engineers at the Duane Arnold Energy Center [a 545-MW(electric) boiling water reactor operated by Iowa Electric Light and Power Company] to explore the development of an advisory system for valve maintenance planning. This knowledge-based program, known as the Motor-Operated Valves Expert System (MOVES), has a database that currently includes safety-related motor-operated valves (∼117 valves). Valve maintenance was selected as the subject for the expert system because of the importance of valves in nuclear plants and their impact on plant availability. MOVES is being developed using the microcomputer-based (IBM-compatible) expert system tool INSIGHT2+. The authors have found that the project benefits both the university and the utility.

  16. Evaluation of geostatistical parameters based on well tests; Estimation de parametres geostatistiques a partir de tests de puits

    Energy Technology Data Exchange (ETDEWEB)

    Gauthier, Y.

    1997-10-20

Geostatistical tools are increasingly used to model permeability fields in subsurface reservoirs, which are considered as particular realizations of a random variable depending on several geostatistical parameters, such as the variance and the correlation length. The first part of the thesis is devoted to the study of the relations existing between the transient well pressure (the well test) and the stochastic permeability field, using the apparent permeability concept. The well test performs a moving permeability average over larger and larger volumes with increasing time. In the second part, the geostatistical parameters are evaluated using well test data; a Bayesian framework is used and the parameters are estimated using the maximum likelihood principle, by maximizing the probability density function of the well test data with respect to these parameters. This method, involving a fast evaluation of the well test, provides an estimation of the correlation length and the variance over different realizations of a two-dimensional permeability field.

  17. Driving-forces model on individual behavior in scenarios considering moving threat agents

    Science.gov (United States)

    Li, Shuying; Zhuang, Jun; Shen, Shifei; Wang, Jia

    2017-09-01

    The individual behavior model is a contributory factor to improve the accuracy of agent-based simulation in different scenarios. However, few studies have considered moving threat agents, which often occur in terrorist attacks caused by attackers with close-range weapons (e.g., sword, stick). At the same time, many existing behavior models lack validation from cases or experiments. This paper builds a new individual behavior model based on seven behavioral hypotheses. The driving-forces model is an extension of the classical social force model considering scenarios including moving threat agents. An experiment was conducted to validate the key components of the model. Then the model is compared with an advanced Elliptical Specification II social force model, by calculating the fitting errors between the simulated and experimental trajectories, and being applied to simulate a specific circumstance. Our results show that the driving-forces model reduced the fitting error by an average of 33.9% and the standard deviation by an average of 44.5%, which indicates the accuracy and stability of the model in the studied situation. The new driving-forces model could be used to simulate individual behavior when analyzing the risk of specific scenarios using agent-based simulation methods, such as risk analysis of close-range terrorist attacks in public places.

  18. Cost-Sensitive Estimation of ARMA Models for Financial Asset Return Data

    Directory of Open Access Journals (Sweden)

    Minyoung Kim

    2015-01-01

The autoregressive moving average (ARMA) model is a simple but powerful model in financial engineering to represent time series with long-range statistical dependency. However, the traditional maximum likelihood (ML) estimator aims to minimize a loss function that is inherently symmetric due to Gaussianity. The consequence is that when the data of interest are asset returns, and the main goal is to maximize profit by accurate forecasting, the ML objective may be less appropriate, potentially leading to a suboptimal solution. Rather, it is more reasonable to adopt an asymmetric loss where the model's prediction, as long as it is in the same direction as the true return, is penalized less than a prediction in the opposite direction. We propose a quite sensible asymmetric cost-sensitive loss function and incorporate it into the ARMA model estimation. On the online portfolio selection problem with real stock return data, we demonstrate that the investment strategy based on predictions by the proposed estimator can be significantly more profitable than the traditional ML estimator.
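
    One plausible form of such an asymmetric loss penalizes the squared error more heavily whenever the predicted return has the wrong sign; the exact functional form and the weight c below are assumptions, not taken from the paper.

```python
import numpy as np

def asymmetric_loss(y_true, y_pred, c=2.0):
    """Squared error, scaled by c when the predicted direction is wrong."""
    err2 = (y_true - y_pred) ** 2
    wrong_direction = np.sign(y_true) != np.sign(y_pred)
    return np.mean(np.where(wrong_direction, c * err2, err2))

# Two forecasts with identical squared errors; the directionally correct
# one incurs the smaller loss.
returns = np.array([0.02, -0.01, 0.03])
print(asymmetric_loss(returns, np.array([0.03, -0.02, 0.02])))  # right signs
print(asymmetric_loss(returns, np.array([0.01, 0.00, 0.04])))   # one wrong sign
```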

  19. Moving object detection in video satellite image based on deep learning

    Science.gov (United States)

    Zhang, Xueyang; Xiang, Junhua

    2017-11-01

Moving object detection in video satellite images is studied, and a detection algorithm based on deep learning is proposed. The small-scale characteristics of remote sensing video objects are analyzed. Firstly, a background subtraction algorithm with an adaptive Gaussian mixture model is used to generate region proposals. Then the objects in the region proposals are classified via a deep convolutional neural network, and moving objects of interest are detected in combination with prior information on the sub-satellite point. The network is a 21-layer residual convolutional neural network whose parameters are trained by transfer learning. Experimental results on video from the Tiantuo-2 satellite demonstrate the effectiveness of the algorithm.
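
    The region-proposal stage described above can be sketched with OpenCV's adaptive Gaussian mixture background subtractor; the CNN classification stage and the sub-satellite-point prior are omitted, and "video.mp4" is a placeholder path.

```python
import cv2

cap = cv2.VideoCapture("video.mp4")
subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=16)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)                     # foreground mask
    mask = cv2.medianBlur(mask, 5)                     # suppress speckle noise
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    proposals = [cv2.boundingRect(c) for c in contours
                 if cv2.contourArea(c) > 4]            # small-object proposals
    # each (x, y, w, h) proposal would next be classified by the CNN
cap.release()
```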

20. Prediction of Tourist Arrivals to the Island of Bali with the Holt-Winters Method and Seasonal Autoregressive Integrated Moving Average (SARIMA)

    Directory of Open Access Journals (Sweden)

    Agus Supriatna

    2017-11-01

The tourism sector is one of the contributors of foreign exchange and is quite influential in improving the economy of Indonesia. The development of this sector has a positive impact, including employment opportunities and opportunities for entrepreneurship in various industries such as adventure tourism, crafts, or hospitality. The beauty and natural resources owned by Indonesia are a tourist attraction for domestic and foreign tourists, and one of the many tourist destinations is the island of Bali. The island of Bali is famous not only for its nature but also for its cultural diversity and arts, which add to its tourism value. In 2015 the number of tourist arrivals increased by 6.24% from the previous year. To improve the quality of services, handle surges of visitors, and prepare strategies for attracting tourists, a prediction of arrivals is needed so that planning can be more efficient and effective. This research used the Holt-Winters method and the Seasonal Autoregressive Integrated Moving Average (SARIMA) method to predict tourist arrivals. Based on data on foreign tourist arrivals to the island of Bali from January 2007 until June 2016, the Holt-Winters method with parameter values α=0.1, β=0.1, γ=0.3 has a MAPE of 6.171873, while the SARIMA method with the (0,1,1)(1,0,0)12 model has a MAPE of 5.788615; it can be concluded that the SARIMA method is better. Keywords: Foreign Tourist, Prediction, Bali Island, Holt-Winters, SARIMA.
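
    Both models quoted above are available in statsmodels; the sketch below fits a Holt-Winters model with the reported α, β, γ and a SARIMA(0,1,1)(1,0,0)12 model to a synthetic monthly series standing in for the actual arrivals data.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Synthetic monthly arrivals, Jan 2007 - Jun 2016 (placeholder data)
idx = pd.date_range("2007-01-01", periods=114, freq="MS")
rng = np.random.default_rng(4)
arrivals = pd.Series(200_000 + 1_500 * np.arange(114)
                     + 20_000 * np.sin(2 * np.pi * np.arange(114) / 12)
                     + rng.normal(0, 5_000, 114), index=idx)

# Holt-Winters with the alpha/beta/gamma reported in the abstract
hw = ExponentialSmoothing(arrivals, trend="add", seasonal="add",
                          seasonal_periods=12,
                          initialization_method="estimated").fit(
    smoothing_level=0.1, smoothing_trend=0.1, smoothing_seasonal=0.3)

# SARIMA(0,1,1)(1,0,0)12, the model the abstract found more accurate
sarima = SARIMAX(arrivals, order=(0, 1, 1),
                 seasonal_order=(1, 0, 0, 12)).fit(disp=False)

print(hw.forecast(12))        # 12-month Holt-Winters forecast
print(sarima.forecast(12))    # 12-month SARIMA forecast
```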

  1. Lithium-ion Battery Degradation Assessment and Remaining Useful Life Estimation in Hybrid Electric Vehicle

    Directory of Open Access Journals (Sweden)

    Nabil Laayouj

    2016-06-01

Prognostic activity deals with prediction of the remaining useful life (RUL) of physical systems based on their actual health state and their usage conditions. RUL estimation gives operators a potent tool for decision making by quantifying how much time is left until functionality is lost. In addition, it can be used to improve the characterization of the material properties that govern damage propagation for the structure being monitored. RUL can be estimated using three main approaches, namely model-based, data-driven, and hybrid approaches. The prognostic methods used later in this paper are a hybrid and a data-driven approach, which employ the particle filter in the first case and the autoregressive integrated moving average in the second. The performance of the suggested approaches is evaluated in a comparative study on data collected from the lithium-ion battery of a hybrid electric vehicle.

  2. Robust Detection of Moving Human Target in Foliage-Penetration Environment Based on Hough Transform

    Directory of Open Access Journals (Sweden)

    P. Lei

    2014-04-01

Attention has been focused on robust moving human target detection in foliage-penetration environments, which presents a formidable task in a radar system because foliage is a rich scattering environment with complex multipath propagation and time-varying clutter. Generally, multiple-bounce returns and clutter are additionally superposed on direct-scatter echoes. They obscure the true target echo and lead to a time-range image of poor visual quality, making target detection particularly difficult. Consequently, an innovative approach is proposed to suppress clutter and mitigate multipath effects. In particular, a clutter suppression technique based on range alignment is first applied to suppress the time-varying clutter and the unstable antenna coupling. Then an entropy weighted coherent integration (EWCI) algorithm is adopted to mitigate the multipath effects. In consequence, the proposed method reduces the clutter and ghosting artifacts considerably. Based on the high visual quality image, the target trajectory is detected robustly and the radial velocity is estimated accurately with the Hough transform (HT). Real data used in the experimental results are provided to verify the proposed method.

  3. Estimating the orientation of a rigid body moving in space using inertial sensors

    Energy Technology Data Exchange (ETDEWEB)

    He, Peng, E-mail: peng.he.1@ulaval.ca; Cardou, Philippe, E-mail: pcardou@gmc.ulaval.ca [Université Laval, Robotics Laboratory, Department of Mechanical Engineering (Canada); Desbiens, André, E-mail: andre.desbiens@gel.ulaval.ca [Université Laval, Department of Electrical and Computer Engineering (Canada); Gagnon, Eric, E-mail: Eric.Gagnon@drdc-rddc.gc.ca [RDDC Valcartier (Canada)

    2015-09-15

    This paper presents a novel method of estimating the orientation of a rigid body moving in space from inertial sensors, by discerning the gravitational and inertial components of the accelerations. In this method, both a rigid-body kinematics model and a stochastic model of the human-hand motion are formulated and combined in a nonlinear state-space system. The state equation represents the rigid body kinematics and stochastic model, and the output equation represents the inertial sensor measurements. It is necessary to mention that, since the output equation is a nonlinear function of the state, the extended Kalman filter (EKF) is applied. The absolute value of the error from the proposed method is shown to be less than 5 deg in simulation and in experiments. It is apparently stable, unlike the time-integration of gyroscope measurements, which is subjected to drift, and remains accurate under large accelerations, unlike the tilt-sensor method.
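
    The trade-off the abstract closes with, drifting gyro integration versus acceleration-sensitive tilt sensing, is what any orientation filter must resolve. The sketch below is a simple 1-D complementary filter, not the paper's EKF, with hypothetical gyro bias and accelerometer noise.

```python
import numpy as np

def complementary_filter(gyro_rate, accel_pitch, dt=0.01, k=0.98):
    """Blend integrated gyro rate with the accelerometer-derived pitch angle."""
    theta = accel_pitch[0]
    out = []
    for w, a in zip(gyro_rate, accel_pitch):
        theta = k * (theta + w * dt) + (1 - k) * a   # fuse the two estimates
        out.append(theta)
    return np.array(out)

# Hypothetical signals: biased gyro plus noisy accelerometer, true pitch 10 deg
rng = np.random.default_rng(5)
true_pitch = np.full(1000, 10.0)
gyro = rng.normal(0.5, 1.0, 1000)               # deg/s, with 0.5 deg/s bias
acc = true_pitch + rng.normal(0, 2.0, 1000)     # deg, noisy tilt estimate
print(complementary_filter(gyro, acc)[-5:])     # stays near 10 deg, small
                                                # residual offset from the bias
```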

5. Averaged Propulsive Body Acceleration (APBA) Can Be Calculated from Biologging Tags That Incorporate Gyroscopes and Accelerometers to Estimate Swimming Speed, Hydrodynamic Drag and Energy Expenditure for Steller Sea Lions.

    Directory of Open Access Journals (Sweden)

    Colin Ware

Forces due to propulsion should approximate forces due to hydrodynamic drag for animals horizontally swimming at a constant speed with negligible buoyancy forces. Propulsive forces should also correlate with energy expenditures associated with locomotion, an important cost of foraging. As such, biologging tags containing accelerometers are being used to generate proxies for animal energy expenditures despite being unable to distinguish rotational movements from linear movements. However, recent miniaturizations of gyroscopes offer the possibility of resolving this shortcoming and obtaining better estimates of body accelerations of swimming animals. We derived accelerations using gyroscope data for swimming Steller sea lions (Eumetopias jubatus), and determined how well the measured accelerations correlated with actual swimming speeds and with theoretical drag. We also compared dive-averaged dynamic body acceleration estimates that incorporate gyroscope data with the widely used Overall Dynamic Body Acceleration (ODBA) metric, which does not use gyroscope data. Four Steller sea lions equipped with biologging tags were trained to swim alongside a boat cruising at steady speeds in the range of 4 to 10 kph. At each speed, and for each dive, we computed a measure called Gyro-Informed Dynamic Acceleration (GIDA) using a method incorporating gyroscope data with accelerometer data. We derived a new metric, Averaged Propulsive Body Acceleration (APBA), which is the average gain in speed per flipper stroke divided by the mean stroke-cycle duration. Our results show that the gyro-based measure (APBA) is a better predictor of speed than ODBA. We also found that APBA can estimate average thrust production during a single stroke-glide cycle, and can be used to estimate energy expended during swimming. The gyroscope-derived methods we describe should be generally applicable to swimming animals where propulsive accelerations can be clearly identified in the signal.
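
    The APBA definition quoted above is a simple ratio; a worked toy example follows, with hypothetical stroke data rather than the sea lion trial measurements.

```python
import numpy as np

speed_gain_per_stroke = np.array([0.31, 0.28, 0.35, 0.30])  # m/s per stroke
stroke_durations = np.array([0.82, 0.79, 0.85, 0.80])       # s per stroke cycle

# APBA = average speed gain per stroke / mean stroke-cycle duration
apba = speed_gain_per_stroke.mean() / stroke_durations.mean()
print(apba)   # ~0.38 m/s^2, an acceleration attributable to propulsion
```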

  6. State-space dynamic model for estimation of radon entry rate, based on Kalman filtering

    International Nuclear Information System (INIS)

    Brabec, Marek; Jilek, Karel

    2007-01-01

    To predict the radon concentration in a house environment and to understand the role of all factors affecting its behavior, it is necessary to recognize time variation in both the air exchange rate and the radon entry rate into a house. This paper describes a new approach to the separation of their effects, which effectively allows continuous estimation of both radon entry rate and air exchange rate from simultaneous tracer gas (carbon monoxide) and radon gas measurement data. It is based on a state-space statistical model which permits quick and efficient calculations. Underlying computations are based on (extended) Kalman filtering, whose practical software implementation is easy. A key property is the model's flexibility, so that it can be easily adjusted to handle various artificial regimens of both radon gas and CO gas level manipulation. After introducing the statistical model formally, its performance is demonstrated on real data from measurements conducted in our experimental, naturally ventilated and unoccupied room. To verify our method, the radon entry rate calculated via the proposed statistical model was compared with its known reference value. The results from several days of measurement indicated fairly good agreement (on average, within 5% of the reference radon entry rate). The measured radon concentration fluctuated around approximately 600 Bq m^-3, whereas the air exchange rate ranged from 0.3 to 0.8 h^-1.

  7. A Web-Based Tool to Estimate Pollutant Loading Using LOADEST

    Directory of Open Access Journals (Sweden)

    Youn Shik Park

    2015-09-01

    Collecting and analyzing water quality samples is costly and typically requires significant effort compared to streamflow data, thus water quality data are typically collected at a low frequency. Regression models, identifying a relationship between streamflow and water quality data, are often used to estimate pollutant loads. A web-based tool using LOAD ESTimator (LOADEST) as a core engine with four modules was developed to provide user-friendly interfaces and input data collection via web access. The first module requests and receives streamflow and water quality data from the U.S. Geological Survey. The second module retrieves the watershed area for computation of pollutant loads per unit area. The third module examines potential errors in input datasets for LOADEST runs, and the last module computes estimated and allowable annual average pollutant loads and provides tabular and graphical LOADEST outputs. The web-based tool was applied to two watersheds in this study, one agriculturally dominated and one urban dominated. It was found that the annual sediment load at the urban-dominated watershed exceeded the target load; therefore, the web-based tool correctly identified the watershed requiring best management practices to reduce pollutant loads.
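
    LOADEST fits regression models between streamflow and load. A heavily simplified sketch of the idea, a plain log-log rating curve that ignores LOADEST's seasonal terms and retransformation-bias correction, is:

    ```python
    import numpy as np

    def fit_load_model(q_sampled, conc_sampled):
        """Fit ln(load) = a0 + a1*ln(Q) on days with water-quality samples.

        q_sampled, conc_sampled : NumPy arrays of streamflow and measured
        concentration on sampling days (load = flow * concentration).
        """
        load = q_sampled * conc_sampled
        A = np.column_stack([np.ones_like(q_sampled), np.log(q_sampled)])
        coef, *_ = np.linalg.lstsq(A, np.log(load), rcond=None)
        return coef

    def estimate_daily_loads(coef, q_daily):
        """Apply the fitted rating curve to the full daily flow record."""
        return np.exp(coef[0] + coef[1] * np.log(q_daily))
    ```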

  8. Monocular-Based 6-Degree of Freedom Pose Estimation Technology for Robotic Intelligent Grasping Systems

    Directory of Open Access Journals (Sweden)

    Tao Liu

    2017-02-01

    Industrial robots are expected to undertake ever more advanced tasks in the modern manufacturing industry, such as intelligent grasping, in which robots should be capable of recognizing the position and orientation of a part before grasping it. In this paper, a monocular-based 6-degree-of-freedom (DOF) pose estimation technology to enable robots to grasp large-size parts at informal poses is proposed. A camera was mounted on the robot end-flange and oriented to measure several featured points on the part before the robot moved to grasp it. In order to estimate the part pose, a nonlinear optimization model based on the camera object-space collinearity error in different poses is established, and the initial iteration value is estimated with the differential transformation. The measuring poses of the camera are optimized based on uncertainty analysis. The principle of the robotic intelligent grasping system was also developed, with which the robot could adjust its pose to grasp the part. In experimental tests, the part poses estimated with the method described in this paper were compared with those produced by a laser tracker, and the results show the RMS angle and position errors are about 0.0228° and 0.4603 mm, respectively. Robotic intelligent grasping tests were also successfully performed in the experiments.

  9. A Nonadaptive Window-Based PLL for Single-Phase Applications

    DEFF Research Database (Denmark)

    Golestan, Saeed; Guerrero, Josep M.; Quintero, Juan Carlos Vasquez

    2018-01-01

    The rectangular window filter, typically known as the moving average filter (MAF), is a quasi-ideal low-pass filter that has found wide application in designing advanced single-phase phase-locked loops (PLLs). Most often, the MAF is employed as an in-loop filter within the control loop of the single-phase PLL, … response is avoided. Nevertheless, the PLL implementation complexity considerably increases as MAFs are frequency-adaptive and, therefore, they require an additional frequency detector for estimating the grid frequency. To reduce the implementation complexity while maintaining a good performance, using a nonadaptive MAF-based QSG with some error compensators is suggested in this letter. The effectiveness of the resultant PLL, which is briefly called the nonadaptive MAF-based PLL, is verified using experimental results.
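
    For reference, a rectangular-window MAF is just a convolution with a constant kernel; the window sizing in the comment below is an illustrative assumption, not taken from the letter.

    ```python
    import numpy as np

    def maf(x, window):
        """Rectangular-window (moving average) filter: convolution with a
        constant kernel. It passes DC and nulls every harmonic whose
        period divides the window length, which is why it behaves as a
        quasi-ideal low-pass filter inside a PLL."""
        return np.convolve(x, np.ones(window) / window, mode="valid")

    # Illustrative sizing (not from the letter): with fs = 10 kHz and a
    # 100-Hz double-frequency ripple, window = 100 spans one ripple period
    # and cancels it exactly when the grid frequency is at its nominal value.
    ```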

  10. Imaging moving objects from multiply scattered waves and multiple sensors

    International Nuclear Information System (INIS)

    Miranda, Analee; Cheney, Margaret

    2013-01-01

    In this paper, we develop a linearized imaging theory that combines the spatial, temporal and spectral components of multiply scattered waves as they scatter from moving objects. In particular, we consider the case of multiple fixed sensors transmitting and receiving information from multiply scattered waves. We use a priori information about the multipath background. We use a simple model for multiple scattering, namely scattering from a fixed, perfectly reflecting (mirror) plane. We base our image reconstruction and velocity estimation technique on a modification of a filtered backprojection method that produces a phase-space image. We plot examples of point-spread functions for different geometries and waveforms, and from these plots, we estimate the resolution in space and velocity. Through this analysis, we are able to identify how the imaging system depends on parameters such as bandwidth and number of sensors. We ultimately show that enhanced phase-space resolution for a distribution of moving and stationary targets in a multipath environment may be achieved using multiple sensors. (paper)

  11. Average glandular dose in paired digital mammography and digital breast tomosynthesis acquisitions in a population based screening program: effects of measuring breast density, air kerma and beam quality

    Science.gov (United States)

    Helge Østerås, Bjørn; Skaane, Per; Gullien, Randi; Catrine Trægde Martinsen, Anne

    2018-02-01

    The main purpose was to compare average glandular dose (AGD) for same-compression digital mammography (DM) and digital breast tomosynthesis (DBT) acquisitions in a population-based screening program, with and without breast density stratification, as determined by automatically calculated breast density (Quantra™); and secondarily, to compare AGD estimates based on measured breast density, air kerma and half value layer (HVL) to DICOM metadata-based estimates. AGD was estimated for 3819 women participating in the screening trial. All received craniocaudal and mediolateral oblique views of each breast with paired DM and DBT acquisitions. Exposure parameters were extracted from DICOM metadata. Air kerma and HVL were measured for all beam qualities used to acquire the mammograms. Volumetric breast density was estimated using Quantra™. AGD was estimated using the Dance model. AGD reported directly from the DICOM metadata was also assessed. Mean AGD was 1.74 and 2.10 mGy for DM and DBT, respectively. The mean DBT/DM AGD ratio was 1.24. For fatty breasts, mean AGD was 1.74 and 2.27 mGy for DM and DBT, respectively. For dense breasts, mean AGD was 1.73 and 1.79 mGy for DM and DBT, respectively. For breasts of similar thickness, dense breasts had higher AGD for DM and similar AGD for DBT. The DBT/DM dose ratio was substantially lower for dense compared to fatty breasts (1.08 versus 1.33). The average c-factor was 1.16. Using previously published polynomials to estimate glandularity from thickness underestimated the c-factor by 5.9% on average. The mean AGD error between estimates based on measurements (air kerma and HVL) and those based on DICOM header data was 3.8%, but for one mammography unit it was as high as 7.9%. The mean error of using the AGD value reported in the DICOM header was 10.7 and 13.3%, respectively. Thus, measurement of breast density, radiation dose and beam quality can substantially affect AGD estimates.

  12. An Improved Method of Pose Estimation for Lighthouse Base Station Extension.

    Science.gov (United States)

    Yang, Yi; Weng, Dongdong; Li, Dong; Xun, Hang

    2017-10-22

    In 2015, HTC and Valve launched a virtual reality headset empowered by Lighthouse, a cutting-edge spatial positioning technology. Although Lighthouse is superior in terms of accuracy, latency and refresh rate, its algorithms do not support base station expansion and handle occlusion of moving targets poorly; that is, they are unable to calculate poses from a small set of sensors, resulting in the loss of optical tracking data. In view of these problems, this paper proposes an improved pose estimation algorithm for cases where occlusion is involved. Our algorithm calculates the pose of a given object from a unified dataset comprising inputs from the sensors recognized by all base stations, as long as three or more sensors detect a signal in total, no matter from which base station. To verify our algorithm, official HTC base stations and autonomously developed receivers were used for prototyping. The experimental results show that our pose calculation algorithm achieves precise positioning even when only a few sensors detect the signal.

  13. Measurement error in mobile source air pollution exposure estimates due to residential mobility during pregnancy.

    Science.gov (United States)

    Pennington, Audrey Flak; Strickland, Matthew J; Klein, Mitchel; Zhai, Xinxin; Russell, Armistead G; Hansen, Craig; Darrow, Lyndsey A

    2017-09-01

    Prenatal air pollution exposure is frequently estimated using maternal residential location at the time of delivery as a proxy for residence during pregnancy. We describe residential mobility during pregnancy among 19,951 children from the Kaiser Air Pollution and Pediatric Asthma Study, quantify measurement error in spatially resolved estimates of prenatal exposure to mobile source fine particulate matter (PM2.5) due to ignoring this mobility, and simulate the impact of this error on estimates of epidemiologic associations. Two exposure estimates were compared, one calculated using complete residential histories during pregnancy (weighted average based on time spent at each address) and the second calculated using only residence at birth. Estimates were computed using annual averages of primary PM2.5 from traffic emissions modeled using a Research LINE-source dispersion model for near-surface releases (RLINE) at 250 m resolution. In this cohort, 18.6% of children were born to mothers who moved at least once during pregnancy. Mobile source PM2.5 exposure estimates calculated using complete residential histories during pregnancy and only residence at birth were highly correlated (r_S > 0.9). Simulations indicated that ignoring residential mobility resulted in modest bias of epidemiologic associations toward the null, but this varied by maternal characteristics and prenatal exposure windows of interest (ranging from -2% to -10% bias).
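
    The "weighted average based on time spent at each address" can be illustrated with a short sketch (the numbers are invented):

    ```python
    def pregnancy_exposure(residences):
        """Time-weighted average exposure over pregnancy.

        residences: list of (days_at_address, annual_avg_pm25_at_address)
        """
        total_days = sum(d for d, _ in residences)
        return sum(d * c for d, c in residences) / total_days

    # Mother moved once: 120 days at 1.2 ug/m3, then 160 days at 0.8 ug/m3
    print(pregnancy_exposure([(120, 1.2), (160, 0.8)]))  # ~0.97 ug/m3
    ```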

  14. Group-Contribution based Property Estimation and Uncertainty analysis for Flammability-related Properties

    DEFF Research Database (Denmark)

    Frutiger, Jerome; Marcarie, Camille; Abildskov, Jens

    2016-01-01

    …regression and outlier treatment have been applied to achieve high accuracy. Furthermore, linear error propagation based on the covariance matrix of estimated parameters was performed. Therefore, every estimated property value of the flammability-related properties is reported together with its corresponding 95%-confidence interval of the prediction. Compared to existing models the developed ones have a higher accuracy, are simple to apply and provide uncertainty information on the calculated prediction. The average relative error and correlation coefficient are 11.5% and 0.99 for LFL, 15.9% and 0.91 for UFL, 2...

  15. A "1"3"7Cs erosion model with moving boundary

    International Nuclear Information System (INIS)

    Yin, Chuan; Ji, Hongbing

    2015-01-01

    A novel quantitative model of the relationship between diffused concentration changes and erosion rates for the assessment of soil losses was developed. It derives from the analysis of surface soil 137Cs flux variation under a persistent erosion effect and is based on the geochemical kinetics principle of the moving boundary. The new moving boundary model improves the basic simplified transport model (Zhang et al., 2008), and mainly applies to uniform-rainfall areas subject to long-term soil erosion. The simulation results show that, under long-term soil erosion, the influence on 137Cs concentration decreases exponentially with increasing depth. Fitting the new model to the measured 137Cs depth distribution data at the Zunyi site, Guizhou Province, China, which has typical uniform rainfall, provided a good fit with R^2 = 0.92. To compare the soil erosion rates calculated by the simple transport model and the new model, we take the Kaixian reference profile as an example. The soil losses estimated by the previous simplified transport model are greater than those estimated by the new moving boundary model, which is consistent with our expectations. - Highlights: • The diffused moving boundary principle analyses 137Cs flux variation. • The new erosion model applies to uniform-rainfall areas. • The erosion effect on 137Cs decreases exponentially with increasing depth. • The new model provides two methods of calculating the erosion rate.

  16. Average Nuclear properties based on statistical model

    International Nuclear Information System (INIS)

    El-Jaick, L.J.

    1974-01-01

    The rough properties of nuclei were investigated with a statistical model, in systems with equal and with different numbers of protons and neutrons, separately, considering the Coulomb energy in the latter system. Some average nuclear properties were calculated based on the energy density of nuclear matter, from the Weizsäcker-Bethe semiempirical mass formula, generalized for compressible nuclei. In the study of the surface energy coefficient a_s, the great influence exercised by the Coulomb energy and nuclear compressibility was verified. To obtain a good fit of the beta-stability lines and mass excesses, the surface symmetry energy was established. (M.C.K.) [pt

  17. Forecast of Frost Days Based on Monthly Temperatures

    Science.gov (United States)

    Castellanos, M. T.; Tarquis, A. M.; Morató, M. C.; Saa-Requejo, A.

    2009-04-01

    Although frost can cause considerable crop damage and mitigation practices against forecasted frost exist, frost forecasting technologies have not changed for many years. The paper reports a new method to forecast the monthly number of frost days (FD) for several meteorological stations in the Community of Madrid (Spain) based on the successive application of two models. The first is a stochastic model, the autoregressive integrated moving average (ARIMA), that forecasts the monthly minimum absolute temperature (tmin) and the monthly average of minimum temperature (tminav) following the Box-Jenkins methodology. The second model relates these monthly temperatures to the minimum daily temperature distribution within a month. Three ARIMA models were identified for the time series analyzed, with a seasonal period corresponding to one year. They share the same seasonal behavior (a moving average differenced model) and differ in the non-seasonal part: an autoregressive model (Model 1), a moving average differenced model (Model 2) and an autoregressive moving average model (Model 3). At the same time, the results point out that the minimum daily temperature (tdmin), for the meteorological stations studied, followed a normal distribution each month, with a very similar standard deviation across years. This standard deviation, obtained for each station and each month, could be used as a risk index for cold months. The application of Model 1 to predict minimum monthly temperatures gave the best FD forecast. This procedure provides a tool for crop managers and crop insurance companies to assess the risk of frost frequency and intensity, so that they can take steps to mitigate frost damage and estimate the losses that frost would cause. This research was supported by Comunidad de Madrid Research Project 076/92. The cooperation of the Spanish National Meteorological Institute and the Spanish Ministerio de Agricultura, Pesca y Alimentación (MAPA) is gratefully acknowledged.
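
    A hedged sketch of the two-step procedure, fitting a seasonal ARIMA to monthly minimum temperatures and then converting the forecast monthly mean into an expected frost-day count via a normal model of daily minima, could look like the following; the ARIMA orders and sigma_month are placeholders, not the paper's identified models.

    ```python
    import numpy as np
    from scipy.stats import norm
    from statsmodels.tsa.arima.model import ARIMA

    def forecast_frost_days(tminav, sigma_month, days_in_month=31):
        """Forecast next month's frost-day count from a monthly series
        (NumPy array) of average minimum temperature in deg C.

        The orders below are placeholders with a 12-month seasonal
        moving-average differenced part, in the spirit of the three
        models identified in the paper; sigma_month is the station/month
        standard deviation of daily minima discussed above.
        """
        fit = ARIMA(tminav, order=(1, 0, 0),
                    seasonal_order=(0, 1, 1, 12)).fit()
        mu = float(np.asarray(fit.forecast(steps=1))[0])
        p_frost = norm.cdf((0.0 - mu) / sigma_month)  # P(daily tmin < 0)
        return days_in_month * p_frost
    ```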

  18. General and Local: Averaged k-Dependence Bayesian Classifiers

    Directory of Open Access Journals (Sweden)

    Limin Wang

    2015-06-01

    The inference of a general Bayesian network has been shown to be an NP-hard problem, even for approximate solutions. Although the k-dependence Bayesian (KDB) classifier can be constructed at arbitrary points (values of k) along the attribute dependence spectrum, it cannot identify changes in interdependencies when attributes take different values. Local KDB, which learns in the framework of KDB, is proposed in this study to describe the local dependencies implicated in each test instance. Based on the analysis of functional dependencies, substitution-elimination resolution, a new type of semi-naive Bayesian operation, is proposed to substitute or eliminate generalization to achieve accurate estimation of the conditional probability distribution while reducing computational complexity. The final classifier, the averaged k-dependence Bayesian (AKDB) classifier, averages the outputs of KDB and local KDB. Experimental results on the repository of machine learning databases from the University of California, Irvine (UCI) showed that AKDB has significant advantages in zero-one loss and bias relative to naive Bayes (NB), tree-augmented naive Bayes (TAN), averaged one-dependence estimators (AODE), and KDB. Moreover, KDB and local KDB show mutually complementary characteristics with respect to variance.

  19. Variation estimation of the averaged cross sections in the direct and adjoint fluxes

    International Nuclear Information System (INIS)

    Cardoso, Carlos Eduardo Santos; Martinez, Aquilino Senra; Silva, Fernando Carvalho da

    1995-01-01

    There are several applications of perturbation theory to specific problems of reactor physics, such as nonuniform fuel burnup, nonuniform poison accumulation and evaluation of Doppler effects on reactivity. The neutron fluxes obtained from the solutions of the direct and adjoint diffusion equations are used in these applications. In the adjoint diffusion equation, the group constants averaged over the energy-dependent direct neutron flux have been used, which is not theoretically consistent. This paper presents a method to calculate the energy-dependent adjoint neutron flux, in order to obtain the averaged group constants that will be used in the adjoint diffusion equation. The method is based on the solution of the adjoint neutron balance equations, which were derived for a two-region cell. (author). 5 refs, 2 figs, 1 tab

  20. Low-resolution Airborne Radar Air/ground Moving Target Classification and Recognition

    Directory of Open Access Journals (Sweden)

    Wang Fu-you

    2014-10-01

    Radar Target Recognition (RTR) is one of the most important needs of modern and future airborne surveillance radars, and it is still one of the key technologies of radar. The majority of present algorithms are based on wide-band radar signals, which not only require a high-performance radar system and a high target Signal-to-Noise Ratio (SNR), but are also sensitive to the angle between radar and target. For Low-Resolution Airborne Surveillance Radar (LRASR) in downward-looking mode, slow-flying aircraft and ground-moving trucks have similar Doppler velocities and Radar Cross Sections (RCS), so that LRASR air/ground moving targets cannot be distinguished, which also disturbs the detection, tracking, and classification of low-altitude slow-flying aircraft. To solve these issues, an algorithm based on narrowband fractal features and phase modulation features is presented for LRASR air/ground moving target classification. Real measured data are used to verify the algorithm; the classification results validate the proposed method, helicopters and trucks can be well classified, and the average discrimination rate is more than 89% when SNR ≥ 15 dB.

  1. WE-EF-207-04: An Inter-Projection Sensor Fusion (IPSF) Approach to Estimate Missing Projection Signal in Synchronized Moving Grid (SMOG) System

    International Nuclear Information System (INIS)

    Zhang, H; Kong, V; Jin, J; Ren, L; Zhang, Y; Giles, W

    2015-01-01

    Purpose: A synchronized moving grid (SMOG) has been proposed to reduce scatter and lag artifacts in cone beam computed tomography (CBCT). However, information is missing in each projection because certain areas are blocked by the grid. A previous solution to this issue is acquiring 2 complementary projections at each position, which increases scanning time. This study reports our first results using an inter-projection sensor fusion (IPSF) method to estimate the missing projection signal in our prototype SMOG-based CBCT system. Methods: An in-house SMOG assembly with a 1:1 grid of 3-mm gap was installed in a CBCT benchtop. The grid moves back and forth with a 3-mm amplitude at up to 20-Hz frequency. A control program in LabView synchronizes the grid motion with the platform rotation and x-ray firing so that the grid patterns for any two neighboring projections are complementary. A Catphan was scanned with 360 projections. After scatter correction, the IPSF algorithm was applied to estimate the missing signal for each projection using the information from the 2 neighboring projections. The Feldkamp-Davis-Kress (FDK) algorithm was applied to reconstruct CBCT images. The CBCTs were compared to those reconstructed using normal projections without the SMOG system. Results: The SMOG-IPSF method may reduce imaging dose by half due to the radiation blocked by the grid. The method almost completely removed scatter-related artifacts, such as cupping artifacts. The evaluation of line pair patterns in the CatPhan suggested that spatial resolution degradation was minimal. Conclusion: The SMOG-IPSF is promising in reducing scatter artifacts and improving image quality while reducing radiation dose.

  2. WE-EF-207-04: An Inter-Projection Sensor Fusion (IPSF) Approach to Estimate Missing Projection Signal in Synchronized Moving Grid (SMOG) System

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, H; Kong, V; Jin, J [Georgia Regents University Cancer Center, Augusta, GA (United States); Ren, L; Zhang, Y; Giles, W [Duke University Medical Center, Durham, NC (United States)

    2015-06-15

    Purpose: A synchronized moving grid (SMOG) has been proposed to reduce scatter and lag artifacts in cone beam computed tomography (CBCT). However, information is missing in each projection because certain areas are blocked by the grid. A previous solution to this issue is acquiring 2 complementary projections at each position, which increases scanning time. This study reports our first results using an inter-projection sensor fusion (IPSF) method to estimate the missing projection signal in our prototype SMOG-based CBCT system. Methods: An in-house SMOG assembly with a 1:1 grid of 3-mm gap was installed in a CBCT benchtop. The grid moves back and forth with a 3-mm amplitude at up to 20-Hz frequency. A control program in LabView synchronizes the grid motion with the platform rotation and x-ray firing so that the grid patterns for any two neighboring projections are complementary. A Catphan was scanned with 360 projections. After scatter correction, the IPSF algorithm was applied to estimate the missing signal for each projection using the information from the 2 neighboring projections. The Feldkamp-Davis-Kress (FDK) algorithm was applied to reconstruct CBCT images. The CBCTs were compared to those reconstructed using normal projections without the SMOG system. Results: The SMOG-IPSF method may reduce imaging dose by half due to the radiation blocked by the grid. The method almost completely removed scatter-related artifacts, such as cupping artifacts. The evaluation of line pair patterns in the CatPhan suggested that spatial resolution degradation was minimal. Conclusion: The SMOG-IPSF is promising in reducing scatter artifacts and improving image quality while reducing radiation dose.
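
    One way to picture the IPSF step: because neighboring grid patterns are complementary, pixels blocked in one projection are exposed in its neighbors. The sketch below fills blocked pixels with a simple average of the two neighbors; the actual IPSF fusion may be more sophisticated than this.

    ```python
    import numpy as np

    def ipsf_fill(proj_prev, proj, proj_next, blocked_mask):
        """Fill grid-blocked pixels of one projection using the two
        neighboring projections, whose complementary grid patterns leave
        those pixels exposed. proj_* are 2-D detector arrays at adjacent
        gantry angles; blocked_mask is a boolean array marking grid
        shadows in the middle projection."""
        filled = proj.copy()
        filled[blocked_mask] = 0.5 * (proj_prev[blocked_mask]
                                      + proj_next[blocked_mask])
        return filled
    ```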

  3. Shadow detection of moving objects based on multisource information in Internet of things

    Science.gov (United States)

    Ma, Zhen; Zhang, De-gan; Chen, Jie; Hou, Yue-xian

    2017-05-01

    Moving object detection is an important part of intelligent video surveillance within the Internet of Things. The detection of a moving target's shadow is also an important step in moving object detection, and the accuracy of shadow detection directly affects the object detection results. Surveying the variety of shadow detection methods, we find that using only one feature cannot make the detection accurate. We therefore present a new method for shadow detection that combines colour information, optical invariance and texture features. Through comprehensive analysis of the detection results from the three kinds of information, the shadow is effectively determined. The method achieves good results in experiments by combining the advantages of the various approaches.

  4. Move-by-move dynamics of the advantage in chess matches reveals population-level learning of the game.

    Directory of Open Access Journals (Sweden)

    Haroldo V Ribeiro

    The complexity of chess matches has attracted broad interest since the game's invention. This complexity and the availability of a large number of recorded matches make chess an ideal model system for the study of population-level learning of a complex system. We systematically investigate the move-by-move dynamics of the white player's advantage from over seventy thousand high-level chess matches spanning over 150 years. We find that the average advantage of the white player is positive and that it has been increasing over time. Currently, the average advantage of the white player is 0.17 pawns, but it is exponentially approaching a value of 0.23 pawns with a characteristic time scale of 67 years. We also study the diffusion of the move dependence of the white player's advantage and find that it is non-Gaussian, has long-ranged anti-correlations and that, after an initial period with no diffusion, it becomes super-diffusive. We find that the duration of the non-diffusive period, corresponding to the opening stage of a match, is increasing in length and exponentially approaching a value of 15.6 moves with a characteristic time scale of 130 years. We interpret these two trends as resulting from learning of the features of the game. Additionally, we find that the exponent [Formula: see text] characterizing the super-diffusive regime is increasing toward a value of 1.9, close to the ballistic regime. We suggest that this trend is due to the increased broadening of the range of abilities of chess players participating in major tournaments.

  5. Move-by-move dynamics of the advantage in chess matches reveals population-level learning of the game.

    Science.gov (United States)

    Ribeiro, Haroldo V; Mendes, Renio S; Lenzi, Ervin K; del Castillo-Mussot, Marcelo; Amaral, Luís A N

    2013-01-01

    The complexity of chess matches has attracted broad interest since the game's invention. This complexity and the availability of a large number of recorded matches make chess an ideal model system for the study of population-level learning of a complex system. We systematically investigate the move-by-move dynamics of the white player's advantage from over seventy thousand high-level chess matches spanning over 150 years. We find that the average advantage of the white player is positive and that it has been increasing over time. Currently, the average advantage of the white player is 0.17 pawns, but it is exponentially approaching a value of 0.23 pawns with a characteristic time scale of 67 years. We also study the diffusion of the move dependence of the white player's advantage and find that it is non-Gaussian, has long-ranged anti-correlations and that, after an initial period with no diffusion, it becomes super-diffusive. We find that the duration of the non-diffusive period, corresponding to the opening stage of a match, is increasing in length and exponentially approaching a value of 15.6 moves with a characteristic time scale of 130 years. We interpret these two trends as resulting from learning of the features of the game. Additionally, we find that the exponent [Formula: see text] characterizing the super-diffusive regime is increasing toward a value of 1.9, close to the ballistic regime. We suggest that this trend is due to the increased broadening of the range of abilities of chess players participating in major tournaments.

  6. Estimation of permafrost thawing rates in a sub-arctic catchment using recession flow analysis

    Directory of Open Access Journals (Sweden)

    S. W. Lyon

    2009-05-01

    Permafrost thawing is likely to change the flow pathways taken by water as it moves through arctic and sub-arctic landscapes. The location and distribution of these pathways directly influence carbon and other biogeochemical cycling in northern-latitude catchments. While permafrost thawing due to climate change has been observed in the arctic and sub-arctic, direct observations of permafrost depth are difficult to perform at scales larger than the local scale. Using recession flow analysis, it may be possible to detect and estimate the rate of permafrost thawing from a long-term streamflow record. We demonstrate the application of this approach to the sub-arctic Abiskojokken catchment in northern Sweden. Based on recession flow analysis, we estimate that permafrost in this catchment may be thawing at an average rate of about 0.9 cm/yr during the past 90 years. This estimated thawing rate is consistent with direct observations of permafrost thawing rates, ranging from 0.7 to 1.3 cm/yr over the past 30 years in the region.
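
    Recession flow analysis of the Brutsaert-Nieber type fits -dQ/dt = a*Q^b to falling-limb streamflow; a minimal sketch (assuming daily discharge in a NumPy array) is given below. How the fitted intercept drifts over decades of record is what carries the thawing signal, since a deepening active layer changes the aquifer geometry.

    ```python
    import numpy as np

    def recession_parameters(q):
        """Fit -dQ/dt = a*Q^b on recession limbs of a daily discharge
        series q (Brutsaert-Nieber analysis). Returns (a, b)."""
        dq = np.diff(q)
        mask = dq < 0                          # keep falling periods only
        qm = 0.5 * (q[:-1] + q[1:])[mask]      # midpoint discharge
        A = np.column_stack([np.ones(qm.size), np.log(qm)])
        (ln_a, b), *_ = np.linalg.lstsq(A, np.log(-dq[mask]), rcond=None)
        return np.exp(ln_a), b
    ```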

  7. Segmentation of Moving Object Using Background Subtraction Method in Complex Environments

    Directory of Open Access Journals (Sweden)

    S. Kumar

    2016-06-01

    Background subtraction is an extensively used approach to localize the moving object in a video sequence. However, detecting an object under spatiotemporal background behavior such as rippling water, a moving curtain, illumination change or low resolution is not a straightforward task. To deal with this problem, we present a background maintenance scheme based on updating the background pixels by estimating the current spatial variance along the temporal line. The work focuses on making the scheme immune to local motion variation in the background. Finally, the most suitable label assignment to the motion field is estimated and optimized using iterated conditional modes (ICM) under a Markovian framework. Performance evaluation and comparisons with other well-known background subtraction methods show that the proposed method is unaffected by the problems of aperture distortion, ghost images, and high-frequency noise.
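
    A compact stand-in for the variance-based background maintenance described above: a per-pixel running mean and variance, updated only where the scene is judged background. The learning rate alpha and threshold k are assumed tuning constants, not the paper's values.

    ```python
    import numpy as np

    def update_background(mean, var, frame, alpha=0.02, k=2.5):
        """Per-pixel running mean/variance background model.

        A pixel is declared foreground when it deviates from the
        background mean by more than k standard deviations; statistics
        are updated only at background pixels, so transient foreground
        does not pollute the model.
        """
        foreground = np.abs(frame - mean) > k * np.sqrt(var)
        bg = ~foreground
        mean[bg] = (1 - alpha) * mean[bg] + alpha * frame[bg]
        var[bg] = (1 - alpha) * var[bg] + alpha * (frame[bg] - mean[bg]) ** 2
        return foreground, mean, var
    ```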

  8. Averaging in the presence of sliding errors

    International Nuclear Information System (INIS)

    Yost, G.P.

    1991-08-01

    In many cases the precision with which an experiment can measure a physical quantity depends on the value of that quantity. Not having access to the true value, experimental groups are forced to assign their errors based on their own measured value. Procedures which attempt to derive an improved estimate of the true value by a suitable average of such measurements usually weight each experiment's measurement according to the reported variance. However, one is in a position to derive improved error estimates for each experiment from the average itself, provided an approximate idea of the functional dependence of the error on the central value is known. Failing to do so can lead to substantial biases. Techniques which avoid these biases without loss of precision are proposed and their performance is analyzed with examples. These techniques are quite general and can bring about an improvement even when the behavior of the errors is not well understood. Perhaps the most important application of the technique is in fitting curves to histograms
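
    The remedy described, re-evaluating each experiment's error at the common average rather than at its own measured value and iterating, can be sketched as follows:

    ```python
    import numpy as np

    def sliding_error_average(x, sigma_funcs, n_iter=20):
        """Average measurements whose errors depend on the measured value.

        x           : array of reported central values, one per experiment
        sigma_funcs : one callable per experiment giving its expected
                      error as a function of the value being measured

        Weights are evaluated at the current estimate of the true value
        rather than at each experiment's own (noisy) measurement, and the
        average is iterated, which removes the bias discussed above.
        """
        mu = float(np.mean(x))
        for _ in range(n_iter):
            w = np.array([1.0 / s(mu) ** 2 for s in sigma_funcs])
            mu = float(np.sum(w * x) / np.sum(w))
        return mu
    ```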

  9. Through-the-Wall Localization of a Moving Target by Two Independent Ultra Wideband (UWB) Radar Systems

    Directory of Open Access Journals (Sweden)

    Jana Rovňáková

    2013-09-01

    In the case of through-the-wall localization of moving targets by ultra wideband (UWB) radars, there are applications in which handheld sensors equipped with only one transmitting and two receiving antennas are applied. Sometimes, a radar using such a small antenna array is not able to localize the target with the required accuracy. To improve through-the-wall target localization, cooperative positioning based on a fusion of data retrieved from two independent radar systems can be used. In this paper, a novel method of cooperative localization, referred to as joining intersections of the ellipses, is introduced. This method is based on a geometrical interpretation of target localization where the target position is estimated using a properly created cluster of the ellipse intersections representing potential positions of the target. The performance of the proposed method is compared with the direct calculation method and two alternative methods of cooperative localization, using data obtained by measurements with M-sequence UWB radars. The direct calculation method is applied for target localization by the particular radar systems. As alternative methods of cooperative localization, the arithmetic average of the target coordinates estimated by two independent UWB radars and the Taylor series method are considered.

  10. Online dynamic equalization adjustment of high-power lithium-ion battery packs based on the state of balance estimation

    International Nuclear Information System (INIS)

    Wang, Shunli; Shang, Liping; Li, Zhanfeng; Deng, Hu; Li, Jianchao

    2016-01-01

    Highlights: • A novel concept (SOB, State of Balance) is proposed for LIB pack equalization. • Core parameter detection and filtering are analyzed to identify LIB pack behavior. • The electrical UKF model is adopted for online dynamic estimation. • The equalization target model is built based on the optimum preference. • Comprehensive imbalance state calculation is implemented for the adjustment. - Abstract: A novel concept named state of balance (SOB) is proposed, and an online dynamic estimation method for it is presented for high-power lithium-ion battery (LIB) packs; based on this, online dynamic equalization adjustment is realized, aiming to protect the operational safety of the power supply application. A core parameter detection method based on a specific moving average algorithm is studied, because manufacturing variability and other factors give the individual cells near-identical varying characteristics that affect the performance of the high-power LIB pack. The SOB estimation method is derived in detail, using a dual filter consisting of the unscented Kalman filter (UKF), an equivalent circuit model (ECM) and the open circuit voltage (OCV) to predict the SOB state. This benefits energy operation, and the energy performance state can be evaluated online prior to the adjustment method based on terminal voltage consistency. Energy equalization is realized based on credibility reasoning together with the equalization model building process. Experiments covering core parameter detection, SOB estimation and equalization adjustment were carried out and the results analyzed. The experimental results show that the numerical Coulomb efficiency is greater than 95%. The cell voltage measurement error is less than 5 mV and the terminal voltage measurement error of the LIB pack is less than 1% FS. The measurement error of the battery discharge and charge

  11. Vehicle Speed Estimation and Forecasting Methods Based on Cellular Floating Vehicle Data

    Directory of Open Access Journals (Sweden)

    Wei-Kuang Lai

    2016-02-01

    Traffic information estimation and forecasting methods based on cellular floating vehicle data (CFVD) are proposed to analyze the signals (e.g., handovers (HOs), call arrivals (CAs), normal location updates (NLUs) and periodic location updates (PLUs)) from cellular networks. For traffic information estimation, analytic models are proposed to estimate the traffic flow in accordance with the numbers of HOs and NLUs, and to estimate the traffic density in accordance with the numbers of CAs and PLUs. The vehicle speeds can then be estimated from the estimated traffic flows and densities. For vehicle speed forecasting, a back-propagation neural network algorithm is used to predict the future vehicle speed from the current traffic information (i.e., the vehicle speeds estimated from CFVD). In the experimental environment, this study adopted practical traffic information (i.e., traffic flow and vehicle speed) from the Taiwan Area National Freeway Bureau as the input characteristics of the traffic simulation program, and referred to mobile station (MS) communication behaviors from Chunghwa Telecom to simulate the traffic information and communication records. The experimental results illustrate that the average accuracy of the vehicle speed forecasting method is 95.72%. Therefore, the proposed methods based on CFVD are suitable for an intelligent transportation system.
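
    The estimation step boils down to the fundamental traffic relation speed = flow / density, with flow proxied by HO+NLU counts and density by CA+PLU counts. In the sketch below, k_flow and k_density are hypothetical calibration constants mapping signal counts to vehicles/h and vehicles/km; they are not values from the paper.

    ```python
    def estimate_speed(handovers, nlus, cas, plus_, road_len_km,
                       k_flow=1.0, k_density=1.0):
        """CFVD-style speed estimate: flow from HO+NLU counts, density
        from CA+PLU counts, speed = flow / density."""
        flow = k_flow * (handovers + nlus)                  # vehicles/h
        density = k_density * (cas + plus_) / road_len_km   # vehicles/km
        return flow / density                               # km/h
    ```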

  12. Room Volume Estimation Based on Ambiguity of Short-Term Interaural Phase Differences Using Humanoid Robot Head

    Directory of Open Access Journals (Sweden)

    Ryuichi Shimoyama

    2016-07-01

    Humans can recognize approximate room size using only binaural audition. However, sound reverberation is not negligible in most environments. The reverberation causes temporal fluctuations in the short-term interaural phase differences (IPDs) of sound pressure. This study proposes a novel method for a binaural humanoid robot head to estimate room volume. The method is based on the statistical properties of the short-term IPDs of sound pressure. The humanoid robot turns its head toward a sound source, recognizes the sound source, and then estimates the ego-centric distance by its stereovision. By interpolating the relations between room volume, average standard deviation, and ego-centric distance experimentally obtained for various rooms in a prepared database, the room volume was estimated by the binaural audition of the robot from the average standard deviation of the short-term IPDs at the estimated distance.

  13. Translating HbA1c measurements into estimated average glucose values in pregnant women with diabetes.

    Science.gov (United States)

    Law, Graham R; Gilthorpe, Mark S; Secher, Anna L; Temple, Rosemary; Bilous, Rudolf; Mathiesen, Elisabeth R; Murphy, Helen R; Scott, Eleanor M

    2017-04-01

    This study aimed to examine the relationship between average glucose levels, assessed by continuous glucose monitoring (CGM), and HbA1c levels in pregnant women with diabetes to determine whether calculations of standard estimated average glucose (eAG) levels from HbA1c measurements are applicable to pregnant women with diabetes. CGM data from 117 pregnant women (89 women with type 1 diabetes; 28 women with type 2 diabetes) were analysed. Average glucose levels were calculated from 5-7 day CGM profiles (mean 1275 glucose values per profile) and paired with a corresponding (±1 week) HbA1c measure. In total, 688 average glucose-HbA1c pairs were obtained across pregnancy (mean six pairs per participant). Average glucose level was used as the dependent variable in a regression model. Covariates were gestational week, study centre and HbA1c. There was a strong association between HbA1c and average glucose values in pregnancy (coefficient 0.67 [95% CI 0.57, 0.78]), i.e. a 1% (11 mmol/mol) difference in HbA1c corresponded to a 0.67 mmol/l difference in average glucose. The random effects model that included gestational week as a curvilinear (quadratic) covariate fitted best, allowing calculation of a pregnancy-specific eAG (PeAG). This showed that an HbA1c of 8.0% (64 mmol/mol) gave a PeAG of 7.4-7.7 mmol/l (depending on gestational week), compared with a standard eAG of 10.2 mmol/l. The PeAG associated with maintaining an HbA1c level of 6.0% (42 mmol/mol) during pregnancy was between 6.4 and 6.7 mmol/l, depending on gestational week. The HbA1c-average glucose relationship is altered by pregnancy. Routinely generated standard eAG values do not account for this difference between pregnant and non-pregnant individuals and, thus, should not be used during pregnancy. Instead, the PeAG values deduced in the current study are recommended for antenatal clinical care.
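
    For context, the standard (non-pregnant) eAG conversion is the ADAG linear formula; a pregnancy-specific variant anchored on the numbers quoted above might be sketched as follows. The slope, the anchor point (midpoint of the reported 7.4-7.7 mmol/l range) and the omission of the gestational-week term are simplifying assumptions, not the published model.

    ```python
    def standard_eag_mmol(hba1c_pct):
        """Standard estimated average glucose, ADAG linear formula."""
        return 1.59 * hba1c_pct - 2.59

    def pregnancy_eag_mmol(hba1c_pct, slope=0.67,
                           hba1c_ref=8.0, eag_ref=7.55):
        """Illustrative pregnancy-specific eAG: anchored at ~7.4-7.7
        mmol/l for HbA1c 8% (midpoint assumed), with the reported slope
        of 0.67 mmol/l per 1% HbA1c; gestational-week dependence omitted."""
        return eag_ref + slope * (hba1c_pct - hba1c_ref)

    print(standard_eag_mmol(8.0))   # ~10.1 mmol/l (the paper quotes 10.2)
    print(pregnancy_eag_mmol(6.0))  # ~6.2 mmol/l vs the reported 6.4-6.7
    ```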

  14. Adaptive Spectral Doppler Estimation

    DEFF Research Database (Denmark)

    Gran, Fredrik; Jakobsson, Andreas; Jensen, Jørgen Arendt

    2009-01-01

    In this paper, 2 adaptive spectral estimation techniques are analyzed for spectral Doppler ultrasound. The purpose is to minimize the observation window needed to estimate the spectrogram, to provide a better temporal resolution and gain more flexibility when designing the data acquisition sequence. The methods can also provide better quality of the estimated power spectral density (PSD) of the blood signal. Adaptive spectral estimation techniques are known to provide good spectral resolution and contrast even when the observation window is very short. The 2 adaptive techniques are tested and compared with the averaged periodogram (Welch's method). The blood power spectral capon (BPC) method is based on a standard minimum variance technique adapted to account for both averaging over slow-time and depth. The blood amplitude and phase estimation technique (BAPES) is based on finding a set…
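
    The baseline the adaptive estimators are compared against, the averaged periodogram, is available off the shelf; a minimal example follows, where the pulse repetition frequency and the window split are arbitrary illustrative choices, not the paper's acquisition parameters.

    ```python
    import numpy as np
    from scipy.signal import welch

    fs = 5_000.0                        # pulse repetition frequency (assumed)
    slow_time = np.random.randn(128)    # slow-time samples at one depth
    # Welch's method: split into overlapping segments, average periodograms
    f, psd = welch(slow_time, fs=fs, nperseg=32, noverlap=16)
    ```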

  15. Pose and Motion Estimation Using Dual Quaternion-Based Extended Kalman Filtering

    Energy Technology Data Exchange (ETDEWEB)

    Goddard, J.S.; Abidi, M.A.

    1998-06-01

    A solution to the remote three-dimensional (3-D) measurement problem is presented for a dynamic system given a sequence of two-dimensional (2-D) intensity images of a moving object. The 3-D transformation is modeled as a nonlinear stochastic system with the state estimate providing the six-degree-of-freedom motion and position values as well as structure. The stochastic model uses the iterated extended Kalman filter (IEKF) as a nonlinear estimator and a screw representation of the 3-D transformation based on dual quaternions. Dual quaternions, whose elements are dual numbers, provide a means to represent both rotation and translation in a unified notation. Linear object features, represented as dual vectors, are transformed using the dual quaternion transformation and are then projected to linear features in the image plane. The method has been implemented and tested with both simulated and actual experimental data. Simulation results are provided, along with comparisons to a point-based IEKF method using rotation and translation, to show the relative advantages of this method. Experimental results from testing using a camera mounted on the end effector of a robot arm are also given.

  16. Moving Object Tracking and Avoidance Algorithm for Differential Driving AGV Based on Laser Measurement Technology

    Directory of Open Access Journals (Sweden)

    Pandu Sandi Pratama

    2012-12-01

    This paper proposes an algorithm to track the obstacle position and avoid moving objects for a differential-drive Automatic Guided Vehicle (AGV) in an industrial environment. The algorithm has several abilities: detecting moving objects, predicting the velocity and direction of moving objects, predicting collision possibility and planning the avoidance maneuver. For sensing the local environment and positioning, the laser measurement system LMS-151 and the laser navigation system NAV-200 are applied. Based on the measurement results of the sensors, stationary and moving obstacles are detected and the collision possibility is calculated. The velocity and direction of an obstacle are predicted using a Kalman filter algorithm. Collision possibility, time, and position can be calculated by comparing the AGV movement with the obstacle prediction obtained by the Kalman filter. Finally, the avoidance maneuver, using the well-known tangent Bug algorithm, is decided based on the calculated data. The effectiveness of the proposed algorithm is verified using simulation and experiment. Several experimental conditions are presented, using a stationary obstacle and moving obstacles. The simulation and experiment results show that the AGV can detect and avoid the obstacles successfully in all experimental conditions. [Keywords — Obstacle avoidance, AGV, differential drive, laser measurement system, laser navigation system].

  17. Modeling and query the uncertainty of network constrained moving objects based on RFID data

    Science.gov (United States)

    Han, Liang; Xie, Kunqing; Ma, Xiujun; Song, Guojie

    2007-06-01

    The management of network-constrained moving objects is more and more practical, especially in intelligent transportation systems. In the past, the location information of moving objects on a network was collected by GPS, which is costly and has problems with frequent updates and privacy. RFID (Radio Frequency IDentification) devices are used more and more widely to collect location information. They are cheaper, require fewer updates and intrude less on privacy. They detect the id of an object and the time when the moving object passes a node of the network. They do not detect the object's exact movement inside an edge, which leads to a problem of uncertainty. How to model and query the uncertainty of network-constrained moving objects based on RFID data thus becomes a research issue. In this paper, a model is proposed to describe the uncertainty of network-constrained moving objects. A two-level index is presented to provide efficient access to the network and the movement data. The processing of imprecise time-slice queries and spatio-temporal range queries is studied. The processing includes four steps: spatial filter, spatial refinement, temporal filter and probability calculation. Finally, experiments are conducted on simulated data. In the experiments, the performance of the index is studied, the precision and recall of the result set are defined, and how the query arguments affect the precision and recall of the result set is discussed.

  18. The impact of using weight estimated from mammographic images vs. self-reported weight on breast cancer risk calculation

    Science.gov (United States)

    Nair, Kalyani P.; Harkness, Elaine F.; Gadde, Soujanye; Lim, Yit Y.; Maxwell, Anthony J.; Moschidis, Emmanouil; Foden, Philip; Cuzick, Jack; Brentnall, Adam; Evans, D. Gareth; Howell, Anthony; Astley, Susan M.

    2017-03-01

    Personalised breast screening requires assessment of individual risk of breast cancer, of which one contributory factor is weight. Self-reported weight has been used for this purpose, but may be unreliable. We explore the use of volume of fat in the breast, measured from digital mammograms. Volumetric breast density measurements were used to determine the volume of fat in the breasts of 40,431 women taking part in the Predicting Risk Of Cancer At Screening (PROCAS) study. Tyrer-Cuzick risk using self-reported weight was calculated for each woman. Weight was also estimated from the relationship between self-reported weight and breast fat volume in the cohort, and used to re-calculate Tyrer-Cuzick risk. Women were assigned to risk categories according to 10-year risk (below average … ≥8%), and the original and re-calculated Tyrer-Cuzick risks were compared. Of the 716 women diagnosed with breast cancer during the study, 15 (2.1%) moved into a lower risk category, and 37 (5.2%) moved into a higher category when using weight estimated from breast fat volume. Of the 39,715 women without a cancer diagnosis, 1009 (2.5%) moved into a lower risk category, and 1721 (4.3%) into a higher risk category. The majority of changes were between below average and average risk categories (38.5% of those with a cancer diagnosis, and 34.6% of those without). No individual moved more than one risk group. Automated breast fat measures may provide a suitable alternative to self-reported weight for risk assessment in personalized screening.

  19. Evaluation of the reliability of transport networks based on the stochastic flow of moving objects

    International Nuclear Information System (INIS)

    Wu Weiwei; Ning, Angelika; Ning Xuanxi

    2008-01-01

    In transport networks, human beings are moving objects whose moving direction is stochastic in emergency situations. Based on this idea, a new model, the stochastic moving network (SMN), is proposed. It is different from binary-state networks and stochastic-flow networks. The flow of SMNs has multiple saturated states, which correspond to different flow values in each arc. In this paper, we evaluate the system reliability, defined as the probability that the saturated flow of the network is not less than a given demand d. Based on this new model, we obtain the flow probability distribution of every arc by simulation. An algorithm based on the blocking cutset of the SMN is proposed to evaluate the network reliability. An example is used to show how to calculate the corresponding reliabilities for different given demands of the SMN. Simulation experiments of different sizes were made and the precision of the system reliability estimate was calculated. The precision of the simulation results is also discussed.

  20. Permeability Estimation of Rock Reservoir Based on PCA and Elman Neural Networks

    Science.gov (United States)

    Shi, Ying; Jian, Shaoyong

    2018-03-01

    An intelligent method based on fuzzy neural networks with a PCA algorithm is proposed to estimate the permeability of rock reservoirs. First, dimensionality reduction is applied to the rock-slice characteristic parameters by the principal component analysis method. Then, the mapping relationship between rock-slice characteristic parameters and permeability is found through fuzzy neural networks. The validity and reliability of the estimation method were tested with practical data from the Yan'an region in the Ordos Basin. The results show that the average relative error of permeability estimation with this method is 6.25%, with better convergence speed and accuracy than alternatives. Therefore, using cheap rock-slice information, the permeability of a rock reservoir can be estimated efficiently and accurately, with high reliability, practicability and application prospects.

  1. Numerical estimation on balance coefficients of central difference averaging method for quench detection of the KSTAR PF coils

    International Nuclear Information System (INIS)

    Kim, Jin Sub; An, Seok Chan; Ko, Tae Kuk; Chu, Yong

    2016-01-01

    A quench detection system for the KSTAR poloidal field (PF) coils is indispensable for stable operation, because the normal zone overheats when a quench occurs. Recently, a new voltage-based quench detection method, a combination of central difference averaging (CDA) and mutual inductance compensation (MIK), has been suggested for compensating the mutual inductive voltage more effectively than the conventional voltage detection method. For better cancellation of the mutual induction from adjacent coils in the CDA+MIK method for the KSTAR coil system, the balance coefficients of the CDA must first be estimated and adjusted. In this paper, the balance coefficients of the CDA for the KSTAR PF coils were numerically estimated. The estimated result was adopted and tested in simulation. The CDA method adopting the balance coefficients effectively eliminated the mutual inductive voltage, and it is expected to improve the performance of the CDA+MIK method for quench detection of the KSTAR PF coils.
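
    The CDA principle can be written in one line: subtract a weighted average of the neighboring coil voltages from the middle coil's voltage so that inductive pickup common to the three coils cancels, while a resistive (quench) voltage in the middle coil survives. The coefficients below are the quantities the paper estimates numerically; their default values here are illustrative only.

    ```python
    def cda_voltage(v_left, v_mid, v_right, k_left=0.5, k_right=0.5):
        """Central difference averaging: mid-coil voltage minus a weighted
        average of its neighbours. With properly balanced coefficients,
        mutual inductive voltages cancel and a resistive quench voltage
        remains detectable."""
        return v_mid - (k_left * v_left + k_right * v_right)
    ```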

  2. A fuel-based approach to estimating motor vehicle exhaust emissions

    Science.gov (United States)

    Singer, Brett Craig

    Motor vehicles contribute significantly to air pollution problems; accurate motor vehicle emission inventories are therefore essential to air quality planning. Current travel-based inventory models use emission factors measured from potentially biased vehicle samples and predict fleet-average emissions which are often inconsistent with on-road measurements. This thesis presents a fuel-based inventory approach which uses emission factors derived from remote sensing or tunnel-based measurements of on-road vehicles. Vehicle activity is quantified by statewide monthly fuel sales data resolved to the air basin level. Development of the fuel-based approach includes (1) a method for estimating cold start emission factors, (2) an analysis showing that fuel-normalized emission factors are consistent over a range of positive vehicle loads and that most fuel use occurs during loaded-mode driving, (3) scaling factors relating infrared hydrocarbon measurements to total exhaust volatile organic compound (VOC) concentrations, and (4) an analysis showing that economic factors should be considered when selecting on-road sampling sites. The fuel-based approach was applied to estimate carbon monoxide (CO) emissions from warmed-up vehicles in the Los Angeles area in 1991, and CO and VOC exhaust emissions for Los Angeles in 1997. The fuel-based CO estimate for 1991 was higher by a factor of 2.3 +/- 0.5 than emissions predicted by California's MVEI 7F model. Fuel-based inventory estimates for 1997 were higher than those of California's updated MVEI 7G model by factors of 2.4 +/- 0.2 for CO and 3.5 +/- 0.6 for VOC. Fuel-based estimates indicate a 20% decrease in the mass of CO emitted, despite an 8% increase in fuel use between 1991 and 1997; official inventory models predict a 50% decrease in CO mass emissions during the same period. Cold start CO and VOC emission factors derived from parking garage measurements were lower than those predicted by the MVEI 7G model. Current inventories
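
    The core of the fuel-based approach is a single multiplication: regional fuel sales times a fleet-average emission factor per unit of fuel burned. A sketch with invented numbers:

    ```python
    def fuel_based_emissions(fuel_sales_kg, ef_g_per_kg):
        """Fuel-based inventory: pollutant mass = fleet-average emission
        factor (grams emitted per kg of fuel, from remote sensing or
        tunnel measurements) times regional fuel sales. Returns tonnes."""
        return fuel_sales_kg * ef_g_per_kg / 1e6

    # Hypothetical numbers: 5e9 kg of gasoline sold, CO factor 60 g/kg fuel
    print(fuel_based_emissions(5e9, 60.0))  # -> 300,000 t of CO
    ```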

  3. A Vision-Based Approach to Fire Detection

    Directory of Open Access Journals (Sweden)

    Pedro Gomes

    2014-09-01

    This paper presents a vision-based method for fire detection from fixed surveillance smart cameras. The method integrates several well-known techniques, properly adapted to cope with the challenges related to the actual deployment of the vision system. Concretely, background subtraction is performed with a context-based learning mechanism so as to attain higher accuracy and robustness. The computational cost of a frequency analysis of potential fire regions is reduced by focusing its operation with an attentive mechanism. For fast discrimination between fire regions and fire-coloured moving objects, a new colour-based model of fire's appearance and a new wavelet-based model of fire's frequency signature are proposed. To reduce the false alarm rate due to the presence of fire-coloured moving objects, the category and behaviour of each moving object are taken into account in the decision-making. To estimate the expected object size in the image plane and to generate geo-referenced alarms, the camera-world mapping is approximated with a GPS-based calibration process. Experimental results demonstrate the ability of the proposed method to detect fires with an average success rate of 93.1% at a processing rate of 10 Hz, which is often sufficient for real-life applications.

  4. WALS Estimation and Forecasting in Factor-based Dynamic Models with an Application to Armenia

    OpenAIRE

    Poghosyan, Karen; Magnus, Jan R.

    2012-01-01

    Two model averaging approaches are used and compared in estimating and forecasting dynamic factor models, the well-known Bayesian model averaging (BMA) and the recently developed weighted average least squares (WALS). Both methods propose to combine frequentist estimators using Bayesian weights. We apply our framework to the Armenian economy using quarterly data from 2000–2010, and we estimate and forecast real GDP growth and inflation.
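
    A minimal sketch of the "Bayesian weights" idea, assuming the common BIC approximation to posterior model probabilities; the authors' exact prior and weighting scheme may differ, and all numbers below are invented.

      import numpy as np

      bic = np.array([102.3, 100.1, 105.7])    # BIC of candidate models (mock)
      forecasts = np.array([2.1, 2.4, 1.8])    # each model's forecast (mock)

      # Approximate posterior model probabilities from BIC differences.
      w = np.exp(-0.5 * (bic - bic.min()))
      w /= w.sum()
      print("weights:", w, "model-averaged forecast:", w @ forecasts)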

  5. The average cost of measles cases and adverse events following vaccination in industrialised countries

    Directory of Open Access Journals (Sweden)

    Kou Ulla

    2002-09-01

    Abstract Background Even though the annual incidence rate of measles has dramatically decreased in industrialised countries since the implementation of universal immunisation programmes, cases continue to occur in countries where endemic measles transmission has been interrupted and in countries where adequate levels of immunisation coverage have not been maintained. The objective of this study is to develop a model to estimate the average cost per measles case and per adverse event following measles immunisation, using the Netherlands (NL), the United Kingdom (UK) and Canada as examples. Methods Parameter estimates were based on a review of the published literature. A decision tree was built to represent the complications associated with measles cases and adverse events following immunisation. Monte Carlo simulation techniques were used to account for uncertainty. Results From the perspective of society, we estimated the average cost per measles case to be US$276, US$307 and US$254 for the NL, the UK and Canada, respectively, and the average cost of adverse events following immunisation per vaccinee to be US$1.43, US$1.93 and US$1.51 for the NL, UK and Canada, respectively. Conclusions These average cost estimates could be combined with incidence estimates and costs of immunisation programmes to provide estimates of the cost of measles to industrialised countries. Such estimates could be used as a basis to estimate the potential economic gains of global measles eradication.
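
    A minimal sketch of the decision-tree/Monte-Carlo idea: draw uncertain branch probabilities and costs, then average the implied cost per case. The probabilities and costs below are invented for illustration and are not the study's parameter estimates.

      import numpy as np

      rng = np.random.default_rng(0)
      n = 10_000                                   # Monte Carlo draws
      p_hosp = rng.beta(2, 18, n)                  # uncertain hospitalisation risk
      cost_outpatient = rng.normal(150, 20, n)     # US$ per uncomplicated case
      cost_hospital = rng.normal(2500, 400, n)     # US$ per hospitalised case

      cost = (1 - p_hosp) * cost_outpatient + p_hosp * cost_hospital
      print(f"mean US${cost.mean():.0f}, "
            f"95% interval US${np.percentile(cost, 2.5):.0f}"
            f"-US${np.percentile(cost, 97.5):.0f}")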

  6. Treatment of petroleum refinery wastewater using a sequential anaerobic-aerobic moving-bed biofilm reactor system based on suspended ceramsite.

    Science.gov (United States)

    Lu, Mang; Gu, Li-Peng; Xu, Wen-Hao

    2013-01-01

    In this study, a novel suspended ceramsite was prepared, which has high strength, optimum density (close to water), and high porosity. The ceramsite was used to feed a moving-bed biofilm reactor (MBBR) system with an anaerobic-aerobic (A/O) arrangement to treat petroleum refinery wastewater for simultaneous removal of chemical oxygen demand (COD) and ammonium. The hydraulic retention time (HRT) of the anaerobic-aerobic MBBR system was varied from 72 to 18 h. The anaerobic-aerobic system had a strong tolerance to shock loading. Compared with the professional emission standard of China, the effluent concentrations of COD and NH3-N in the system could satisfy grade I at HRTs of 72 and 36 h, and grade II at an HRT of 18 h. The average sludge yield of the anaerobic reactor was estimated to be 0.0575 g suspended solids/g COD removed. This work demonstrated that the anaerobic-aerobic MBBR system using the suspended ceramsite as bio-carrier could be applied to achieve high wastewater treatment efficiency.

  7. WALS estimation and forecasting in factor-based dynamic models with an application to Armenia

    NARCIS (Netherlands)

    Poghosyan, K.; Magnus, J.R.

    2012-01-01

    Two model averaging approaches are used and compared in estimating and forecasting dynamic factor models, the well-known Bayesian model averaging (BMA) and the recently developed weighted average least squares (WALS). Both methods propose to combine frequentist estimators using Bayesian weights. We apply our framework to the Armenian economy using quarterly data from 2000–2010, and we estimate and forecast real GDP growth and inflation.

  8. The Effect of Direction on Cursor Moving Kinematics

    Directory of Open Access Journals (Sweden)

    Chiu-Ping Lu

    2012-02-01

    There have been only a few studies substantiating the kinematic characteristics of cursor movement. In this study, a quantitative experimental research method was used to explore the effect of moving direction on the kinematics of cursor movement in 24 typical young persons using our previously developed computerized measuring program. The results of multiple one-way repeated-measures ANOVAs and post hoc LSD tests demonstrated that the moving direction had effects on average velocity, movement time, movement unit and peak velocity. Moving leftward showed better efficiency than moving rightward, upward and downward, based on kinematic evidence such as velocity, movement unit and time. Moreover, the unique pattern of the power spectral density (PSD) of velocity (strategy for power application) explained why smoothness was still maintained while moving leftward even under an unstable situation with larger momentum. The information from this cursor moving study can also guide us in relocating the toolbars and icons in the window interface, especially for individuals with physical disabilities whose performances are easily interrupted while controlling the cursor in specific directions.

  9. Moving from Virtual Reality Exposure-Based Therapy to Augmented Reality Exposure-Based Therapy: A Review

    OpenAIRE

    Baus, Oliver; Bouchard, Stéphane

    2014-01-01

    This paper reviews the move from virtual reality exposure-based therapy to augmented reality exposure-based therapy (ARET). Unlike virtual reality (VR), which entails a complete virtual environment (VE), augmented reality (AR) limits itself to producing certain virtual elements and merging them into the view of the physical world. Although the general public may only have become aware of AR in the last few years, AR-type applications have been around since the beginning of the twentieth centur...

  10. Transport of the moving barrier driven by chiral active particles

    Science.gov (United States)

    Liao, Jing-jing; Huang, Xiao-qun; Ai, Bao-quan

    2018-03-01

    Transport of a moving V-shaped barrier exposed to a bath of chiral active particles is investigated in a two-dimensional channel. Due to the chirality of active particles and the transversal asymmetry of the barrier position, active particles can power and steer the directed transport of the barrier in the longitudinal direction. The transport of the barrier is determined by the chirality of active particles. The moving barrier and active particles move in opposite directions. The average velocity of the barrier is much larger than that of active particles. There exist optimal parameters (the chirality, the self-propulsion speed, the packing fraction, and the channel width) at which the average velocity of the barrier takes its maximal value. In particular, tailoring the geometry of the barrier and the active concentration provides novel strategies to control the transport properties of micro-objects or cargoes in an active medium.

  11. Automatic assessment of average diaphragm motion trajectory from 4DCT images through machine learning.

    Science.gov (United States)

    Li, Guang; Wei, Jie; Huang, Hailiang; Gaebler, Carl Philipp; Yuan, Amy; Deasy, Joseph O

    2015-12-01

    To automatically estimate the average diaphragm motion trajectory (ADMT) based on four-dimensional computed tomography (4DCT), facilitating clinical assessment of respiratory motion and motion variation and retrospective motion study. We have developed an effective motion extraction approach and a machine-learning-based algorithm to estimate the ADMT. Eleven patients with 22 sets of 4DCT images (4DCT1 at simulation and 4DCT2 at treatment) were studied. After automatically segmenting the lungs, the differential volume-per-slice (dVPS) curves of the left and right lungs were calculated as a function of slice number for each phase with respect to full exhalation. After a 5-slice moving average was performed, the discrete cosine transform (DCT) was applied to analyze the dVPS curves in the frequency domain. The dimensionality of the spectrum data was reduced by using the few lowest-frequency coefficients (f_v) that account for most of the spectrum energy (Σ f_v²). The multiple linear regression (MLR) method was then applied to determine the weights of these frequencies by fitting the ground truth, the measured ADMT, represented by three pivot points of the diaphragm on each side. The 'leave-one-out' cross-validation method was employed to analyze the statistical performance of the prediction results in three image sets: 4DCT1, 4DCT2, and 4DCT1 + 4DCT2. The seven lowest frequencies in the DCT domain were found to be sufficient to approximate the patient dVPS curves (R = 91%-96% in MLR fitting). The mean error in the predicted ADMT using the leave-one-out method was 0.3 ± 1.9 mm for the left-side diaphragm and 0.0 ± 1.4 mm for the right-side diaphragm. The prediction error is lower in 4DCT2 than in 4DCT1, and is lowest in 4DCT1 and 4DCT2 combined. This frequency-analysis-based machine learning technique was employed to predict the ADMT automatically with an acceptable error (0.2 ± 1.6 mm). This volumetric approach is not affected by the presence of the lung tumors
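
    A minimal sketch of the pipeline described above, with mock data standing in for the 4DCT-derived dVPS curves: smooth with a 5-slice moving average, keep the seven lowest DCT coefficients, and regress the measured ADMT on them.

      import numpy as np
      from scipy.fft import dct

      def features(dvps, n_keep=7, win=5):
          smoothed = np.convolve(dvps, np.ones(win) / win, mode="same")
          return dct(smoothed, norm="ortho")[:n_keep]  # low-frequency terms

      rng = np.random.default_rng(1)
      X = np.array([features(rng.random(80)) for _ in range(22)])  # 22 scans
      y = 20 * rng.random(22)                   # mock measured ADMT (mm)

      D = np.c_[np.ones(len(X)), X]             # MLR design matrix
      coef, *_ = np.linalg.lstsq(D, y, rcond=None)
      predicted_admt = D @ coef                 # MLR fit of the ADMT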

  12. Evaluation of physical sampling efficiency for cyclone-based personal bioaerosol samplers in moving air environments.

    Science.gov (United States)

    Su, Wei-Chung; Tolchinsky, Alexander D; Chen, Bean T; Sigaev, Vladimir I; Cheng, Yung Sung

    2012-09-01

    The need to determine occupational exposure to bioaerosols has notably increased in the past decade, especially for microbiology-related workplaces and laboratories. Recently, two new cyclone-based personal bioaerosol samplers were developed by the National Institute for Occupational Safety and Health (NIOSH) in the USA and the Research Center for Toxicology and Hygienic Regulation of Biopreparations (RCT & HRB) in Russia to monitor bioaerosol exposure in the workplace. Here, a series of wind tunnel experiments were carried out to evaluate the physical sampling performance of these two samplers in moving air conditions, which could provide information for personal biological monitoring in a moving air environment. The experiments were conducted in a small wind tunnel facility using three wind speeds (0.5, 1.0 and 2.0 m s⁻¹) and three sampling orientations (0°, 90°, and 180°) with respect to the wind direction. Monodispersed particles ranging from 0.5 to 10 μm were employed as the test aerosols. The evaluation of the physical sampling performance was focused on the aspiration efficiency and capture efficiency of the two samplers. The test results showed that the orientation-averaged aspiration efficiencies of the two samplers closely agreed with the American Conference of Governmental Industrial Hygienists (ACGIH) inhalable convention within the particle sizes used in the evaluation tests, and the effect of the wind speed on the aspiration efficiency was found negligible. The capture efficiencies of these two samplers ranged from 70% to 80%. These data offer important insight into the physical sampling characteristics of the two test samplers.

  13. Binary moving-blocker-based scatter correction in cone-beam computed tomography with width-truncated projections: proof of concept

    Science.gov (United States)

    Lee, Ho; Fahimian, Benjamin P.; Xing, Lei

    2017-03-01

    This paper proposes a binary moving-blocker (BMB)-based technique for scatter correction in cone-beam computed tomography (CBCT). In concept, a beam blocker consisting of lead strips, mounted in front of the x-ray tube, moves rapidly in and out of the beam during a single gantry rotation. The projections are acquired in alternating phases of blocked and unblocked cone beams, where the blocked phase results in a stripe pattern in the width direction. To derive the scatter map from the blocked projections, 1D B-Spline interpolation/extrapolation is applied by using the detected information in the shaded regions. The scatter map of the unblocked projections is corrected by averaging two scatter maps that correspond to their adjacent blocked projections. The scatter-corrected projections are obtained by subtracting the corresponding scatter maps from the projection data and are utilized to generate the CBCT image by a compressed-sensing (CS)-based iterative reconstruction algorithm. Catphan504 and pelvis phantoms were used to evaluate the method’s performance. The proposed BMB-based technique provided an effective method to enhance the image quality by suppressing scatter-induced artifacts, such as ring artifacts around the bowtie area. Compared to CBCT without a blocker, the spatial nonuniformity was reduced from 9.1% to 3.1%. The root-mean-square error of the CT numbers in the regions of interest (ROIs) was reduced from 30.2 HU to 3.8 HU. In addition to high resolution, comparable to that of the benchmark image, the CS-based reconstruction also led to a better contrast-to-noise ratio in seven ROIs. The proposed technique enables complete scatter-corrected CBCT imaging with width-truncated projections and allows reducing the acquisition time to approximately half. This work may have significant implications for image-guided or adaptive radiation therapy, where CBCT is often used.
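
    A minimal sketch of the scatter-estimation step for one detector row of a blocked projection: the signal detected under the lead strips is treated as scatter and carried across the open stripes with a 1D B-spline (extrapolated at the edges), then subtracted. The stripe pitch and data are mock values, not the authors' acquisition geometry.

      import numpy as np
      from scipy.interpolate import make_interp_spline

      width = 512
      row = 100 * np.random.rand(width)            # one detector row (mock)
      blocked = np.arange(width) % 64 < 32         # assumed stripe pattern

      idx = np.where(blocked)[0]                   # shaded (scatter-only) pixels
      spline = make_interp_spline(idx, row[idx], k=3)
      scatter = spline(np.arange(width))           # scatter map for the row
      corrected = row - scatter                    # scatter-corrected signal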

  14. Modulating Function-Based Method for Parameter and Source Estimation of Partial Differential Equations

    KAUST Repository

    Asiri, Sharefa M.

    2017-10-08

    Partial Differential Equations (PDEs) are commonly used to model complex systems that arise for example in biology, engineering, chemistry, and elsewhere. The parameters (or coefficients) and the source of PDE models are often unknown and are estimated from available measurements. Despite its importance, solving the estimation problem is mathematically and numerically challenging, especially when the measurements are corrupted by noise, which is often the case. Various methods have been proposed to solve estimation problems in PDEs, which can be classified into optimization methods and recursive methods. The optimization methods are usually computationally heavy, especially when the number of unknowns is large. In addition, they are sensitive to the initial guess and stop condition, and they suffer from a lack of robustness to noise. Recursive methods, such as observer-based approaches, are limited by their dependence on some structural properties such as observability and identifiability, which might be lost when approximating the PDE numerically. Moreover, most of these methods provide asymptotic estimates which might not be useful for control applications, for example. An alternative non-asymptotic approach with less computational burden has been proposed in engineering fields based on the so-called modulating functions. In this dissertation, we propose to mathematically and numerically analyze the modulating functions based approaches. We also propose to extend these approaches to different situations. The contributions of this thesis are as follows. (i) Provide a mathematical analysis of the modulating function-based method (MFBM) which includes: its well-posedness, statistical properties, and estimation errors. (ii) Provide a numerical analysis of the MFBM through some estimation problems, and study the sensitivity of the method to the modulating functions' parameters. (iii) Propose an effective algorithm for selecting the method's design parameters
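
    For readers unfamiliar with modulating functions, the key identity behind such methods is repeated integration by parts: with a modulating function \(\phi\) whose value and derivatives vanish at both ends of the interval, derivatives are transferred from the noisy measurement \(y\) onto the smooth, known \(\phi\):

      \[
      \int_0^T \phi(t)\,\dot{y}(t)\,dt
        = \big[\phi(t)\,y(t)\big]_0^T - \int_0^T \dot{\phi}(t)\,y(t)\,dt
        = -\int_0^T \dot{\phi}(t)\,y(t)\,dt .
      \]

    Repeating this for higher derivatives turns the unknown parameters into solutions of algebraic (typically linear) equations, so no derivative of the noisy data is ever computed.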

  15. A physics-based fractional order model and state of energy estimation for lithium ion batteries. Part II: Parameter identification and state of energy estimation for LiFePO4 battery

    Science.gov (United States)

    Li, Xiaoyu; Pan, Ke; Fan, Guodong; Lu, Rengui; Zhu, Chunbo; Rizzoni, Giorgio; Canova, Marcello

    2017-11-01

    State of energy (SOE) is an important index for the electrochemical energy storage system in electric vehicles. In this paper, a robust state of energy estimation method in combination with a physical model parameter identification method is proposed to achieve accurate battery state estimation at different operating conditions and different aging stages. A physics-based fractional order model with variable solid-state diffusivity (FOM-VSSD) is used to characterize the dynamic performance of a LiFePO4/graphite battery. In order to update the model parameter automatically at different aging stages, a multi-step model parameter identification method based on the lexicographic optimization is especially designed for the electric vehicle operating conditions. As the battery available energy changes with different applied load current profiles, the relationship between the remaining energy loss and the state of charge, the average current as well as the average squared current is modeled. The SOE with different operating conditions and different aging stages are estimated based on an adaptive fractional order extended Kalman filter (AFEKF). Validation results show that the overall SOE estimation error is within ±5%. The proposed method is suitable for the electric vehicle online applications.

  16. SAR Imaging of Ground Moving Targets with Non-ideal Motion Error Compensation

    Directory of Open Access Journals (Sweden)

    Zhou Hui

    2015-06-01

    Conventional ground moving target imaging algorithms mainly focus on range cell migration correction and motion parameter estimation of the moving target. However, in real Synthetic Aperture Radar (SAR) data processing, non-ideal motion error compensation is also a critical process, as it strongly affects the focusing and imaging quality of moving targets. Non-ideal motion error cannot be compensated by either stationary SAR motion error compensation algorithms or autofocus techniques. In this paper, two sorts of non-ideal motion errors that affect the Doppler centroid of the moving target are analyzed, and a novel non-ideal motion error compensation algorithm is proposed based on the Inertial Navigation System (INS) data and the range walk trajectory. Simulated and real data processing results are provided to demonstrate the effectiveness of the proposed algorithm.

  17. Are risk estimates biased in follow-up studies of psychosocial factors with low base-line participation?

    Directory of Open Access Journals (Sweden)

    Andersen Johan

    2011-07-01

    Abstract Background Low participation in population-based follow-up studies addressing psychosocial risk factors may cause biased estimation of health risk, but the issue has seldom been examined. We compared risk estimates for selected health outcomes among respondents and the entire source population. Methods In a Danish cohort study of associations between psychosocial characteristics of the work environment and mental health, the source population of public service workers comprised 10,036 employees in 502 work units, of which 4,489 participated (participation rate 45%). Data on the psychosocial work environment were obtained for each work unit by calculating the average of the employee self-reports. The average values were assigned to all employees and non-respondents at the work unit. Outcome data on sick leave and prescription of antidepressant medication during the follow-up period (1.4.2007-31.12.2008) were obtained by linkage to national registries. Results Respondents differed at baseline from non-respondents by gender, age, employment status, sick leave and hospitalization for affective disorders. However, risk estimates for sick leave and prescription of antidepressant medication during follow-up, based on the subset of participants, differed only marginally from risk estimates based upon the entire population. Conclusions We found no indications that low participation at baseline distorts the estimates of associations between the work unit level of psychosocial work environment and mental health outcomes during follow-up. These results may not be valid for other exposures or outcomes.

  18. Ergodic averages for monotone functions using upper and lower dominating processes

    DEFF Research Database (Denmark)

    Møller, Jesper; Mengersen, Kerrie

    We show how the mean of a monotone function (defined on a state space equipped with a partial ordering) can be estimated, using ergodic averages calculated from upper and lower dominating processes of a stationary irreducible Markov chain. In particular, we do not need to simulate the stationary...... Markov chain and we eliminate the problem of whether an appropriate burn-in is determined or not. Moreover, when a central limit theorem applies, we show how confidence intervals for the mean can be estimated by bounding the asymptotic variance of the ergodic average based on the equilibrium chain. Our...... methods are studied in detail for three models using Markov chain Monte Carlo methods and we also discuss various types of other models for which our methods apply....

  19. Ergodic averages for monotone functions using upper and lower dominating processes

    DEFF Research Database (Denmark)

    Møller, Jesper; Mengersen, Kerrie

    2007-01-01

    We show how the mean of a monotone function (defined on a state space equipped with a partial ordering) can be estimated, using ergodic averages calculated from upper and lower dominating processes of a stationary irreducible Markov chain. In particular, we do not need to simulate the stationary...... Markov chain and we eliminate the problem of whether an appropriate burn-in is determined or not. Moreover, when a central limit theorem applies, we show how confidence intervals for the mean can be estimated by bounding the asymptotic variance of the ergodic average based on the equilibrium chain. Our...... methods are studied in detail for three models using Markov chain Monte Carlo methods and we also discuss various types of other models for which our methods apply....

  20. Statistics on exponential averaging of periodograms

    Energy Technology Data Exchange (ETDEWEB)

    Peeters, T.T.J.M. [Netherlands Energy Research Foundation (ECN), Petten (Netherlands)]; Ciftcioglu, Oe. [Istanbul Technical Univ. (Turkey). Dept. of Electrical Engineering]

    1994-11-01

    The algorithm of exponential averaging applied to subsequent periodograms of a stochastic process is used to estimate the power spectral density (PSD). For an independent process, assuming the periodogram estimates to be distributed according to a χ² distribution with 2 degrees of freedom, the probability density function (PDF) of the PSD estimate is derived. A closed expression is obtained for the moments of the distribution. Surprisingly, the proof of this expression features some new insights into partitions and Euler's infinite product. For large values of the time constant of the averaging process, examination of the cumulant generating function shows that the PDF approximates the Gaussian distribution. Although restrictions for the statistics are seemingly tight, simulation of a real process indicates a wider applicability of the theory. (orig.).
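
    A minimal sketch of the estimator analyzed above: for each new data block, the periodogram is folded into the running PSD estimate with an exponential weight a (the reciprocal of the averaging time constant). White noise stands in for the independent process.

      import numpy as np

      rng = np.random.default_rng(2)
      nblock, a = 256, 0.1                     # block length, averaging weight
      psd = np.zeros(nblock // 2 + 1)

      for _ in range(200):                     # subsequent periodograms
          x = rng.standard_normal(nblock)      # independent (white) process
          pk = np.abs(np.fft.rfft(x)) ** 2 / nblock
          psd = (1 - a) * psd + a * pk         # exponential averaging update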

  1. Statistics on exponential averaging of periodograms

    International Nuclear Information System (INIS)

    Peeters, T.T.J.M.; Ciftcioglu, Oe.

    1994-11-01

    The algorithm of exponential averaging applied to subsequent periodograms of a stochastic process is used to estimate the power spectral density (PSD). For an independent process, assuming the periodogram estimates to be distributed according to a χ² distribution with 2 degrees of freedom, the probability density function (PDF) of the PSD estimate is derived. A closed expression is obtained for the moments of the distribution. Surprisingly, the proof of this expression features some new insights into partitions and Euler's infinite product. For large values of the time constant of the averaging process, examination of the cumulant generating function shows that the PDF approximates the Gaussian distribution. Although restrictions for the statistics are seemingly tight, simulation of a real process indicates a wider applicability of the theory. (orig.)

  2. A Moving Object Detection Algorithm Based on Color Information

    International Nuclear Information System (INIS)

    Fang, X H; Xiong, W; Hu, B J; Wang, L T

    2006-01-01

    This paper presents a new moving object detection algorithm aimed at fast detection and localization of moving objects. The algorithm uses a pixel and its neighbors as an image vector to represent that pixel, and models each chrominance component of a pixel as a mixture of Gaussians, setting up a separate Gaussian mixture model for each YUV chrominance component. In order to make full use of spatial information, color segmentation and the background model were combined. Simulation results show that the algorithm can detect intact moving objects even when the foreground has low contrast with the background.
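
    A simplified sketch of the per-channel background model: one running Gaussian (mean and variance) per pixel and per YUV channel, flagging pixels that deviate. The paper uses a full mixture of Gaussians per chrominance component; a single Gaussian per channel is shown only to keep the sketch short.

      import numpy as np

      def update(mean, var, frame, rho=0.05, k=2.5):
          d = frame - mean
          fg = (d ** 2 > (k ** 2) * var).any(axis=-1)   # outlier in any channel
          mean += rho * d                               # running mean update
          var += rho * (d ** 2 - var)                   # running variance update
          return fg

      h, w = 120, 160
      mean = np.zeros((h, w, 3))
      var = 10.0 * np.ones((h, w, 3))
      frame = 255 * np.random.rand(h, w, 3)             # mock YUV frame
      foreground_mask = update(mean, var, frame)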

  3. MAGNETO-CONVECTION AND LITHIUM AGE ESTIMATES OF THE β PICTORIS MOVING GROUP

    International Nuclear Information System (INIS)

    Macdonald, J.; Mullan, D. J.

    2010-01-01

    Although the means of the ages of stars in young groups determined from Li depletion often agree with mean ages determined from Hertzsprung-Russell (H-R) diagram isochrones, there are often statistically significant differences in the ages of individual stars determined by the two methods. We find that inclusion of the effects of inhibition of convection due to the presence of magnetic fields leads to consistent ages for the individual stars. We illustrate how age consistency arises by applying our results to the β Pictoris moving group (BPMG). We find that, although magnetic inhibition of convection leads to increased ages from the H-R diagram isochrones for all stars, Li ages are decreased for fully convective M stars and increased for stars with radiative cores. Our consistent age determination for BPMG of 40 Myr is larger than previous determinations by a factor of about two. We have also considered models in which the mixing length ratio is adjusted to give consistent ages. We find that our magneto-convection models, which give quantitative estimates of magnetic field strength, provide a viable alternative to models in which the effects of magnetic fields (and other processes) are accounted for by reducing the mixing length ratio.

  4. Estimation of Residential Heat Pump Consumption for Flexibility Market Applications

    DEFF Research Database (Denmark)

    Kouzelis, Konstantinos; Tan, Zheng-Hua; Bak-Jensen, Birgitte

    2015-01-01

    Recent technological advancements have facilitated the evolution of traditional distribution grids to smart grids. In a smart grid scenario, flexible devices are expected to aid the system in balancing the electric power in a technically and economically efficient way. To achieve this, the flexible load of a flexible device, namely a Heat Pump (HP), is estimated out of the aggregated energy consumption of a house. The main idea for accomplishing this is a comparison of the flexible consumer with electrically similar non-flexible consumers. The methodology is based on machine learning techniques, probability theory and statistics. After presenting this methodology, the general trend of the HP consumption is estimated and an hour-ahead forecast is conducted by employing Seasonal Autoregressive Integrated Moving Average modeling. In this manner, the flexible consumption is predicted, establishing the basis...
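
    A minimal sketch of the forecasting step, assuming hourly data with a daily season (s = 24); the SARIMA orders are illustrative, not the ones used in the paper, and the series is synthetic.

      import numpy as np
      from statsmodels.tsa.statespace.sarimax import SARIMAX

      rng = np.random.default_rng(3)
      t = np.arange(500)
      hp_load = (1.0 + 0.5 * np.sin(2 * np.pi * t / 24)
                 + 0.1 * rng.standard_normal(500))      # mock HP consumption

      model = SARIMAX(hp_load, order=(1, 0, 1), seasonal_order=(1, 0, 1, 24))
      fit = model.fit(disp=False)
      print(fit.forecast(steps=1))                      # hour-ahead forecast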

  5. A comparison of average wages with age-specific wages for assessing indirect productivity losses: analytic simplicity versus analytic precision.

    Science.gov (United States)

    Connolly, Mark P; Tashjian, Cole; Kotsopoulos, Nikolaos; Bhatt, Aomesh; Postma, Maarten J

    2017-07-01

    Numerous approaches are used to estimate indirect productivity losses, applying various wage estimates to poor health in working-aged adults. Considering the different wage estimation approaches observed in the published literature, we sought to assess variation in productivity loss estimates when using average wages compared with age-specific wages. Published estimates for average and age-specific combined male/female wages were obtained from the UK Office for National Statistics. A polynomial interpolation was used to convert 5-year age-banded wage data into annual age-specific wage estimates. To compare indirect cost estimates, average wages and age-specific wages were used to project productivity losses at various stages of life based on the human capital approach. Discount rates of 0, 3, and 6% were applied to projected age-specific and average wage losses. Using average wages was found to overestimate lifetime wages in conditions afflicting those aged 1-27 and 57-67, while underestimating lifetime wages in those aged 27-57. The difference was most significant for children, where average wages overestimated wages by 15%, and for 40-year-olds, where they underestimated wages by 14%. Large differences in projected productivity losses exist when the average wage is applied over a lifetime. Specifically, use of average wages overestimates productivity losses by between 8% and 15% for childhood illnesses. Furthermore, during prime working years, use of average wages will underestimate productivity losses by 14%. We suggest that, to achieve more precise estimates of productivity losses, age-specific wages should become the standard analytic approach.
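
    A minimal sketch of the two approaches being compared, with invented wage figures rather than ONS data: interpolate 5-year banded wages to annual ages, then compute the discounted lifetime loss from an onset age under the human capital approach.

      import numpy as np

      band_mid = np.arange(18, 68, 5)                   # band midpoints (mock)
      band_wage = 20000 + 900 * (band_mid - 18) - 12 * (band_mid - 18) ** 2

      coeffs = np.polyfit(band_mid, band_wage, deg=3)   # polynomial interpolation
      ages = np.arange(18, 68)
      age_wage = np.polyval(coeffs, ages)

      def pv_loss(onset, wages, r=0.03):
          """Discounted lifetime wage loss from age `onset` onwards."""
          keep = ages >= onset
          t = ages[keep] - onset
          return (wages[keep] / (1 + r) ** t).sum()

      flat = np.full_like(age_wage, age_wage.mean())
      print(pv_loss(40, age_wage), pv_loss(40, flat))   # age-specific vs average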

  6. Post-model selection inference and model averaging

    Directory of Open Access Journals (Sweden)

    Georges Nguefack-Tsague

    2011-07-01

    Although model selection is routinely used in practice nowadays, little is known about its precise effects on any subsequent inference that is carried out. The same goes for the effects induced by the closely related technique of model averaging. This paper is concerned with the use of the same data first to select a model and then to carry out inference, in particular point estimation and point prediction. The properties of the resulting estimator, called a post-model-selection estimator (PMSE), are hard to derive. Using selection criteria such as hypothesis testing, AIC, BIC, HQ and Cp, we illustrate that, in terms of risk function, no single PMSE dominates the others. The same conclusion holds more generally for any penalised likelihood information criterion. We also compare various model averaging schemes and show that no single one dominates the others in terms of risk function. Since PMSEs can be regarded as a special case of model averaging, with 0-1 random weights, we propose a connection between the two theories, in the frequentist approach, by taking account of the selection procedure when performing model averaging. We illustrate the point by simulating a simple linear regression model.

  7. Vigilance on the move: video game-based measurement of sustained attention.

    Science.gov (United States)

    Szalma, J L; Schmidt, T N; Teo, G W L; Hancock, P A

    2014-01-01

    Vigilance represents the capacity to sustain attention to any environmental source of information over prolonged periods on watch. Most stimuli used in vigilance research over the previous six decades have been relatively simple and often purport to represent important aspects of detection and discrimination tasks in real-world settings. Such displays are most frequently composed of single stimulus presentations in discrete trials against a uniform, often uncluttered background. The present experiment establishes a dynamic, first-person perspective vigilance task in motion using a video-game environment. 'Vigilance on the move' is thus a new paradigm for the study of sustained attention. We conclude that the stress of vigilance extends to the new paradigm, but whether the performance decrement emerges depends upon specific task parameters. The development of the task, the issues to be resolved and the pattern of performance, perceived workload and stress associated with performing such dynamic vigilance are reported. The present experiment establishes a dynamic, first-person perspective movement-based vigilance task using a video-game environment. 'Vigilance on the move' is thus a new paradigm for the evaluation of sustained attention in operational environments in which individuals move as they monitor their environment. Issues addressed in task development are described.

  8. The Efficiency of OLS Estimators of Structural Parameters in a Simple Linear Regression Model in the Calibration of the Averages Scheme

    Directory of Open Access Journals (Sweden)

    Kowal Robert

    2016-12-01

    The simple linear regression model is one of the pillars of classical econometrics, and multiple areas of research function within its scope. One of its many fundamental questions concerns proving the efficiency of the most commonly used OLS estimators and examining their properties. The literature offers references to this question and certain solutions in that regard, methodologically borrowed from the multiple regression model or from a boundary partial model; not everything there, however, is complete and consistent. In this paper a completely new scheme is proposed, based on applying the Cauchy-Schwarz inequality to a constraint aggregated from appropriately calibrated secondary unbiasedness constraints; choosing an appropriate calibrator for each variable then leads directly to showing the efficiency property. The choice of such a calibrator is a separate matter. On account of the volume and kinds of calibration involved, these deliberations were divided into a few parts. In this one, the efficiency of OLS estimators is proven in a mixed, preliminary scheme of calibration by averages, within the most basic framework of the proposed methodology; this framework also lays out the general premises underlying more distant generalizations.

  9. Clustering Batik Images using Fuzzy C-Means Algorithm Based on Log-Average Luminance

    Directory of Open Access Journals (Sweden)

    Ahmad Sanmorino

    2012-06-01

    Batik is a fabric or clothing made with a special staining technique called wax-resist dyeing and is a piece of cultural heritage with high artistic value. In order to improve efficiency and give better semantics to the images, some researchers apply clustering algorithms to manage images before they are retrieved. Image clustering is a process of grouping images based on their similarity. In this paper we provide an alternative method for grouping batik images using the fuzzy c-means (FCM) algorithm based on the log-average luminance of the batik. The FCM clustering algorithm works with fuzzy models in which every data point belongs to every cluster with a degree of membership between 0 and 1. Log-average luminance (LAL) is the average lighting level of an image; it allows the lighting of one image to be compared with another. From the experiments that have been made, it can be concluded that the fuzzy c-means algorithm can be used for batik image clustering based on the log-average luminance of each image.
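
    A minimal sketch of the two ingredients: the standard log-average luminance of an image, and a plain fuzzy c-means on the resulting one-dimensional features. The images are random arrays standing in for batik photographs, and the cluster count is arbitrary.

      import numpy as np

      def log_average_luminance(gray, eps=1e-4):
          # LAL = exp(mean(log(eps + L))); eps guards against log(0).
          return float(np.exp(np.log(eps + gray).mean()))

      def fcm_1d(x, c=3, m=2.0, iters=100):
          rng = np.random.default_rng(4)
          u = rng.dirichlet(np.ones(c), size=len(x))    # initial memberships
          for _ in range(iters):
              um = u ** m
              centers = (um * x[:, None]).sum(axis=0) / um.sum(axis=0)
              d = np.abs(x[:, None] - centers) + 1e-12
              inv = d ** (-2.0 / (m - 1.0))
              u = inv / inv.sum(axis=1, keepdims=True)  # standard FCM update
          return u, centers

      images = [np.random.rand(64, 64) for _ in range(20)]   # mock batik set
      feats = np.array([log_average_luminance(im) for im in images])
      u, centers = fcm_1d(feats)
      labels = u.argmax(axis=1)                              # hard assignment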

  10. Extreme Learning Machine and Moving Least Square Regression Based Solar Panel Vision Inspection

    Directory of Open Access Journals (Sweden)

    Heng Liu

    2017-01-01

    In recent years, learning-based machine intelligence has attracted considerable attention across science and engineering. Particularly in the field of automatic industry inspection, machine-learning-based vision inspection plays an increasingly important role in defect identification and feature extraction. Through learning from image samples, many features of industry objects, such as shapes, positions, and orientation angles, can be obtained and then utilized to determine whether a defect is present. However, robustness and speed are not easily achieved in such an inspection approach. In this work, for solar panel vision inspection, we present an extreme learning machine (ELM) and moving least square regression based approach to identify solder joint defects and detect the panel position. Firstly, histogram peaks distribution (HPD) and fractional calculus are applied for image preprocessing. Then ELM-based identification of defective solder joints is discussed in detail. Finally, the moving least square regression (MLSR) algorithm is introduced for solar panel position determination. Experimental results and comparisons show that the proposed ELM and MLSR based inspection method is efficient not only in detection accuracy but also in processing speed.
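
    A minimal sketch of the ELM classifier at the heart of the defect-identification step: fixed random input weights, a sigmoid hidden layer, and output weights solved in one shot with a pseudo-inverse. Random features stand in for the solder-joint image features.

      import numpy as np

      rng = np.random.default_rng(5)
      X = rng.standard_normal((200, 16))            # mock image features
      y = (X[:, :2].sum(axis=1) > 0).astype(float)  # mock defect labels

      W = rng.standard_normal((16, 64))             # random input weights (fixed)
      b = rng.standard_normal(64)
      H = 1.0 / (1.0 + np.exp(-(X @ W + b)))        # hidden-layer activations
      beta = np.linalg.pinv(H) @ y                  # output weights, one shot

      pred = (H @ beta) > 0.5
      print("training accuracy:", (pred == y.astype(bool)).mean())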

  11. A Real-Time Method to Estimate Speed of Object Based on Object Detection and Optical Flow Calculation

    Science.gov (United States)

    Liu, Kaizhan; Ye, Yunming; Li, Xutao; Li, Yan

    2018-04-01

    In recent years, Convolutional Neural Networks (CNNs) have been widely used in computer vision and have made great progress in tasks such as object detection and classification. Moreover, combining CNNs, i.e., running multiple CNN frameworks synchronously and sharing their output information, can extract useful information that none of them can provide alone. Here we introduce a method to estimate object speed in real time by combining two CNNs: YOLOv2 and FlowNet. In every frame, YOLOv2 provides object size, location, and type, while FlowNet provides the optical flow of the whole image. On one hand, object size and location help select the object's part of the optical flow image, from which the average optical flow of each object is calculated. On the other hand, object type and size help establish the relationship between optical flow and true speed by means of optics theory and prior knowledge. With these two key pieces of information, the speed of an object can be estimated. The method estimates the speed of multiple objects in real time using only an ordinary camera, even while the camera itself is moving, with an error acceptable in most application fields such as driverless driving or robot vision.
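
    A minimal sketch of the fusion step: average the optical flow inside a detected bounding box and convert pixels per frame to metres per second using an assumed physical size for the detected class. All numbers are illustrative, and the pixel-to-metre scale is the simplest possible stand-in for the paper's optics-based reasoning.

      import numpy as np

      flow = np.random.rand(480, 640, 2)        # mock FlowNet (u, v) field
      x, y, w, h = 200, 150, 80, 160            # mock YOLOv2 box for a person
      assumed_height_m = 1.7                    # prior size for class "person"
      fps = 30.0

      box_flow = flow[y:y + h, x:x + w]         # flow restricted to the object
      px_per_frame = np.linalg.norm(box_flow.mean(axis=(0, 1)))
      m_per_px = assumed_height_m / h           # crude scale from prior size
      print("speed ~", px_per_frame * m_per_px * fps, "m/s")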

  12. Modulating Functions Based Algorithm for the Estimation of the Coefficients and Differentiation Order for a Space-Fractional Advection-Dispersion Equation

    KAUST Repository

    Aldoghaither, Abeer

    2015-12-01

    In this paper, a new method, based on the so-called modulating functions, is proposed to estimate average velocity, dispersion coefficient, and differentiation order in a space-fractional advection-dispersion equation, where the average velocity and the dispersion coefficient are space-varying. First, the average velocity and the dispersion coefficient are estimated by applying the modulating functions method, where the problem is transformed into a linear system of algebraic equations. Then, the modulating functions method combined with a Newton's iteration algorithm is applied to estimate the coefficients and the differentiation order simultaneously. The local convergence of the proposed method is proved. Numerical results are presented with noisy measurements to show the effectiveness and robustness of the proposed method. It is worth mentioning that this method can be extended to general fractional partial differential equations.

  13. Modulating Functions Based Algorithm for the Estimation of the Coefficients and Differentiation Order for a Space-Fractional Advection-Dispersion Equation

    KAUST Repository

    Aldoghaither, Abeer; Liu, Da-Yan; Laleg-Kirati, Taous-Meriem

    2015-01-01

    In this paper, a new method, based on the so-called modulating functions, is proposed to estimate average velocity, dispersion coefficient, and differentiation order in a space-fractional advection-dispersion equation, where the average velocity and the dispersion coefficient are space-varying. First, the average velocity and the dispersion coefficient are estimated by applying the modulating functions method, where the problem is transformed into a linear system of algebraic equations. Then, the modulating functions method combined with a Newton's iteration algorithm is applied to estimate the coefficients and the differentiation order simultaneously. The local convergence of the proposed method is proved. Numerical results are presented with noisy measurements to show the effectiveness and robustness of the proposed method. It is worth mentioning that this method can be extended to general fractional partial differential equations.

  14. Application of the Total Least Square ESPRIT Method to Estimation of Angular Coordinates of Moving Objects

    Directory of Open Access Journals (Sweden)

    Wojciech Rosloniec

    2010-01-01

    The TLS ESPRIT method is investigated in application to the estimation of angular coordinates (angles of arrival) of two moving objects in the presence of an external, relatively strong uncorrelated signal. As the radar antenna system, a 32-element uniform linear array (ULA) is used. Various computer simulations have been carried out in order to demonstrate the good accuracy and high spatial resolution of the TLS ESPRIT method in the scenario outlined above. It is also shown that accuracy and angular resolution can be significantly increased by using the proposed preprocessing (beamforming). Most of the simulation results, presented in graphical form, have been compared to the corresponding results obtained by using the ESPRIT method and the conventional amplitude monopulse method aided by coherent Doppler filtration.
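
    A minimal sketch of (least-squares) ESPRIT on a half-wavelength ULA with two sources; the TLS variant solves the subarray rotation with a total-least-squares step instead of the pseudo-inverse used here, and the scenario below is synthetic.

      import numpy as np

      rng = np.random.default_rng(6)
      M, snap = 32, 200
      doas = np.deg2rad([-5.0, 8.0])                      # true angles (mock)

      A = np.exp(1j * np.pi * np.outer(np.arange(M), np.sin(doas)))
      S = rng.standard_normal((2, snap)) + 1j * rng.standard_normal((2, snap))
      N = rng.standard_normal((M, snap)) + 1j * rng.standard_normal((M, snap))
      X = A @ S + 0.1 * N                                 # array snapshots

      R = X @ X.conj().T / snap                           # sample covariance
      _, V = np.linalg.eigh(R)
      Es = V[:, -2:]                                      # signal subspace
      phi = np.linalg.pinv(Es[:-1]) @ Es[1:]              # subarray rotation
      est = np.degrees(np.arcsin(np.angle(np.linalg.eigvals(phi)) / np.pi))
      print(np.sort(est))                                 # estimated DOAs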

  15. Neighborhoods on the move: a community-based participatory research approach to promoting physical activity.

    Science.gov (United States)

    Suminski, Richard R; Petosa, Rick L; Jones, Larry; Hall, Lisa; Poston, Carlos W

    2009-01-01

    There is a scientific and practical need for high-quality effectiveness studies of physical activity interventions in "real-world" settings. To use a community-based participatory research (CBPR) approach to develop, implement, operate, and evaluate an intervention for promoting physical activity called Neighborhoods on the Move. Two communities with similar physical and social characteristics participated in this study. One community was involved in Neighborhoods on the Move; the other (comparison community) participated only in the assessments. Academic personnel and residents/organizations in the Neighborhoods on the Move community worked together to create a community environment that was more conducive for physical activity. Pre- and posttest data on new initiatives promoting physical activity, existing physical activity initiatives, and business policies supporting physical activity were collected simultaneously in both communities. The success of the CBPR approach was evidenced by several developments, including substantial resident involvement and the formation of a leadership committee, marketing campaign, and numerous community partnerships. The number of businesses with policies promoting physical activity and breadth of existing physical activity initiatives (participants, activities, hours) increased substantially more in the Neighborhoods on the Move community than in the comparison community. A total of sixty new initiatives promoting physical activity were implemented in the Neighborhoods on the Move community during the intervention. The CBPR approach is an effective strategy for inducing environmental changes that promote physical activity. Additional research is needed to assess the portability and sustainability of Neighborhoods on the Move.

  16. Estimation of average burnup of damaged fuels loaded in Fukushima Dai-ichi reactors by using the 134Cs/137Cs ratio method

    International Nuclear Information System (INIS)

    Endo, T.; Sato, S.; Yamamoto, A.

    2012-01-01

    The average burnup of damaged fuels loaded in the Fukushima Dai-ichi reactors is estimated using the 134Cs/137Cs ratio method applied to measured radioactivities of 134Cs and 137Cs in contaminated soils within the range of 100 km from the Fukushima Dai-ichi nuclear power plants. As a result, the measured 134Cs/137Cs ratio from the contaminated soil is 0.996±0.07 as of March 11, 2011. Based on the 134Cs/137Cs ratio method, the estimated burnup of damaged fuels is approximately 17.2±1.5 GWd/tHM. It is noted that various calculation codes (SRAC2006/PIJ, SCALE6.0/TRITON, and MVP-BURN) give almost the same evaluated 134Cs/137Cs ratios when the same evaluated nuclear data library (ENDF/B-VII.0) is used. The void fraction effect in the depletion calculation has a larger impact on the 134Cs/137Cs ratio than the differences between JENDL-4.0 and ENDF/B-VII.0. (authors)
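
    A minimal sketch of the decay-correction arithmetic behind the method: the measured ratio is corrected back to shutdown with the two half-lives (134Cs about 2.065 y, 137Cs about 30.17 y) and then converted to burnup via a depletion-code calibration curve. The calibration pairs below are invented placeholders, not SRAC/SCALE/MVP output.

      import numpy as np

      T12_CS134, T12_CS137 = 2.065, 30.17               # half-lives (years)

      def ratio_at_shutdown(measured_ratio, years_since):
          lam = np.log(2) / T12_CS134 - np.log(2) / T12_CS137
          return measured_ratio * np.exp(lam * years_since)

      burnup_grid = np.array([5.0, 10.0, 15.0, 20.0])   # GWd/tHM (assumed)
      ratio_grid = np.array([0.30, 0.60, 0.88, 1.15])   # assumed code output

      r0 = ratio_at_shutdown(0.95, 0.2)                 # measured 0.2 y later
      print(np.interp(r0, ratio_grid, burnup_grid), "GWd/tHM")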

  17. 3D fluoroscopic image estimation using patient-specific 4DCBCT-based motion models

    International Nuclear Information System (INIS)

    Dhou, S; Hurwitz, M; Cai, W; Rottmann, J; Williams, C; Wagar, M; Berbeco, R; Lewis, J H; Mishra, P; Li, R; Ionascu, D

    2015-01-01

    3D fluoroscopic images represent volumetric patient anatomy during treatment with high spatial and temporal resolution. 3D fluoroscopic images estimated using motion models built using 4DCT images, taken days or weeks prior to treatment, do not reliably represent patient anatomy during treatment. In this study we developed and performed initial evaluation of techniques to build patient-specific motion models from 4D cone-beam CT (4DCBCT) images, taken immediately before treatment, and used these models to estimate 3D fluoroscopic images based on 2D kV projections captured during treatment. We evaluate the accuracy of 3D fluoroscopic images by comparison to ground truth digital and physical phantom images. The performance of 4DCBCT-based and 4DCT-based motion models are compared in simulated clinical situations representing tumor baseline shift or initial patient positioning errors. The results of this study demonstrate the ability of 4DCBCT imaging to generate motion models that can account for changes that cannot be accounted for with 4DCT-based motion models. When simulating tumor baseline shift and patient positioning errors of up to 5 mm, the average tumor localization error and the 95th percentile error in six datasets were 1.20 and 2.2 mm, respectively, for 4DCBCT-based motion models. 4DCT-based motion models applied to the same six datasets resulted in average tumor localization error and the 95th percentile error of 4.18 and 5.4 mm, respectively. Analysis of voxel-wise intensity differences was also conducted for all experiments. In summary, this study demonstrates the feasibility of 4DCBCT-based 3D fluoroscopic image generation in digital and physical phantoms and shows the potential advantage of 4DCBCT-based 3D fluoroscopic image estimation when there are changes in anatomy between the time of 4DCT imaging and the time of treatment delivery. (paper)

  18. The redistributive effect of the move from age-based to income-based prescription drug coverage in British Columbia, Canada.

    Science.gov (United States)

    Hanley, Gillian E; Morgan, Steve; Barer, Morris; Reid, Robert J

    2011-07-01

    To explore the redistributive impact of two different pharmaceutical financing policies (age-based versus income-based pharmacare) on the distribution of income in British Columbia (B.C.), Canada. Using household-level data on all payments that are used to finance prescription drugs in B.C. (including taxation and private payments), we performed a redistributive analysis to indicate how much income inequality in the province changed as a result of payments made for prescription drugs. We also illustrated changes in vertical equity (different treatment according to ability-to-pay) and horizontal equity (equals, according to ability-to-pay, being treated equally) between the two years separately through a pre-post policy examination. We found that payments made to finance prescription drugs increased overall income inequality in the province. This negative impact was larger after the move to income-based pharmacare. Our results also show increasing horizontal inequity after the policy change, and suggest that the increased reliance on out-of-pocket payments was a major source of the negative impact on B.C.'s overall income distribution. We also show that the consequences of the move to income-based pharmacare would have been less severe had the level of public financing not decreased substantially between the two years. The increase in income inequality in B.C. following the policy change was an unintended consequence of the move to income-based pharmacare. This finding is worth consideration as countries and jurisdictions weigh pharmaceutical policy alternatives. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  19. Object Based Building Extraction and Building Period Estimation from Unmanned Aerial Vehicle Data

    Science.gov (United States)

    Comert, Resul; Kaplan, Onur

    2018-04-01

    The aim of this study is to examine whether building periods can be estimated from building heights retrieved from unmanned aerial vehicle (UAV) data in urban-scale seismic performance assessment studies. For this purpose, a small area, which includes eight residential reinforced concrete buildings, was selected in Eskisehir (Turkey) city center. In this paper, the possibility of obtaining the building heights used for period estimation from UAV-based data is investigated. The investigation was carried out in three stages: (i) building boundary extraction with Object-Based Image Analysis (OBIA); (ii) height calculation for the buildings of interest from the nDSM, with accuracy assessed against a terrestrial survey; and (iii) estimation of building periods using the height information. The average difference between the periods estimated from heights obtained by field measurements and from the UAV data is 2.86%, and the maximum difference is 13.2%. The results of this study show that building heights retrieved from UAV data can be used for building period estimation in urban-scale vulnerability assessments.
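
    A minimal sketch of the final stage, using the ASCE 7-style approximate-period formula T = Ct·H^x for reinforced concrete moment frames (H in metres, Ct = 0.0466, x = 0.9) as a stand-in; the paper may rely on a different national-code formula.

      def approximate_period(height_m, ct=0.0466, x=0.9):
          # Approximate fundamental period from building height (metres).
          return ct * height_m ** x

      for h in (9.0, 15.0, 24.0):        # hypothetical UAV-derived heights
          print(h, "m ->", round(approximate_period(h), 2), "s")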

  20. A Divergence Median-based Geometric Detector with A Weighted Averaging Filter

    Science.gov (United States)

    Hua, Xiaoqiang; Cheng, Yongqiang; Li, Yubo; Wang, Hongqiang; Qin, Yuliang

    2018-01-01

    To overcome the performance degradation of the classical fast Fourier transform (FFT)-based constant false alarm rate detector with limited sample data, a divergence median-based geometric detector on the Riemannian manifold of Hermitian positive definite matrices is proposed in this paper. In particular, an autocorrelation matrix is used to model the correlation of the sample data. This modeling avoids the poor Doppler resolution and the energy spread across the Doppler filter banks that result from the FFT. Moreover, a weighted averaging filter, inspired by the philosophy of bilateral filtering in image denoising, is proposed and combined within the geometric detection framework. Because the weighted averaging filter acts as clutter suppression, the performance of the geometric detector is improved. Numerical experiments are given to validate the effectiveness of our proposed method.

  1. Improvement of force-sensor-based heart rate estimation using multichannel data fusion.

    Science.gov (United States)

    Bruser, Christoph; Kortelainen, Juha M; Winter, Stefan; Tenhunen, Mirja; Parkka, Juha; Leonhardt, Steffen

    2015-01-01

    The aim of this paper is to present and evaluate algorithms for heartbeat interval estimation from multiple spatially distributed force sensors integrated into a bed. Moreover, the benefit of using multichannel systems as opposed to a single sensor is investigated. While it might seem intuitive that multiple channels are superior to a single channel, the main challenge lies in finding suitable methods to actually leverage this potential. To this end, two algorithms for heart rate estimation from multichannel vibration signals are presented and compared against a single-channel sensing solution. The first method operates by analyzing the cepstrum computed from the average spectra of the individual channels, while the second method applies Bayesian fusion to three interval estimators, such as the autocorrelation, which are applied to each channel. This evaluation is based on 28 night-long sleep lab recordings during which an eight-channel polyvinylidene fluoride-based sensor array was used to acquire cardiac vibration signals. The recruited patients suffered from different sleep disorders of varying severity. From the sensor array data, a virtual single-channel signal was also derived for comparison by averaging the channels. The single-channel results achieved a beat-to-beat interval error of 2.2% with a coverage (i.e., percentage of the recording which could be analyzed) of 68.7%. In comparison, the best multichannel results attained a mean error and coverage of 1.0% and 81.0%, respectively. These results present statistically significant improvements of both metrics over the single-channel results (p < 0.05).
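
    A minimal sketch of the first fusion method: average the magnitude spectra of all channels, take the cepstrum, and read the beat-to-beat interval from the dominant quefrency peak. The eight-channel data is synthetic, with a mock 1.2 Hz pulse train in place of real BCG signals.

      import numpy as np

      fs = 100.0                                   # assumed sampling rate (Hz)
      t = np.arange(0, 30, 1 / fs)
      pulse = np.maximum(np.sin(2 * np.pi * 1.2 * t), 0.0) ** 20
      chans = [pulse + 0.5 * np.random.randn(t.size) for _ in range(8)]

      spec = np.mean([np.abs(np.fft.rfft(c)) for c in chans], axis=0)
      ceps = np.abs(np.fft.irfft(np.log(spec + 1e-12)))
      lo, hi = int(0.4 * fs), int(2.0 * fs)        # search 0.4-2.0 s intervals
      ibi = (lo + np.argmax(ceps[lo:hi])) / fs
      print("estimated beat-to-beat interval:", ibi, "s")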

  2. Data base of system-average dose rates at nuclear power plants: Final report

    International Nuclear Information System (INIS)

    Beal, S.K.; Britz, W.L.; Cohen, S.C.; Goldin, A.S.; Goldin, D.J.

    1987-10-01

    In this work, a data base of area dose rates is derived for systems and components listed in the Energy Economic Data Base (EEDB). The data base is derived from area surveys obtained during outages at four boiling water reactors (BWRs) at three stations and eight pressurized water reactors (PWRs) at four stations. Separate tables are given for BWRs and PWRs. These tables may be combined with estimates of labor hours to provide order-of-magnitude estimates of exposure for purposes of regulatory analysis. They are only valid for work involving entire systems or components. The estimates of labor hours used in conjunction with the dose rates to estimate exposure must be adjusted to account for in-field time. Finally, the dose rates given in the data base do not reflect ALARA considerations. 11 refs., 2 figs., 3 tabs

  3. Force balance on two-dimensional superconductors with a single moving vortex

    Science.gov (United States)

    Chung, Chun Kit; Arahata, Emiko; Kato, Yusuke

    2014-03-01

    We study forces on two-dimensional superconductors with a single moving vortex based on a recent fully self-consistent calculation of DC conductivity in an s-wave superconductor (E. Arahata and Y. Kato, arXiv:1310.0566). By considering momentum balance of the whole liquid, we attempt to identify various contributions to the total transverse force on the vortex. This provides an estimation of the effective Magnus force based on the quasiclassical theory generalized by Kita [T. Kita, Phys. Rev. B, 64, 054503 (2001)], which allows for the Hall effect in vortex states.

  4. An interprojection sensor fusion approach to estimate blocked projection signal in synchronized moving grid-based CBCT system

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Hong; Kong, Vic [Department of Radiation Oncology, Georgia Regents University, Augusta, Georgia 30912 (United States)]; Ren, Lei; Giles, William; Zhang, You [Department of Radiation Oncology, Duke University, Durham, North Carolina 27710 (United States)]; Jin, Jian-Yue, E-mail: jjin@gru.edu [Department of Radiation Oncology, Georgia Regents University, Augusta, Georgia 30912 and Department of Radiology, Georgia Regents University, Augusta, Georgia 30912 (United States)]

    2016-01-15

    Purpose: A preobject grid can reduce and correct scatter in cone beam computed tomography (CBCT). However, half of the signal in each projection is blocked by the grid. A synchronized moving grid (SMOG) has been proposed to acquire two complementary projections at each gantry position and merge them into one complete projection. That approach, however, suffers from increased scanning time and the technical difficulty of accurately merging the two projections per gantry angle. Herein, the authors present a new SMOG approach which acquires a single projection per gantry angle, with complementary grid patterns for any two adjacent projections, and use an interprojection sensor fusion (IPSF) technique to estimate the blocked signal in each projection. The method may have the additional benefit of reduced imaging dose due to the grid blocking half of the incident radiation. Methods: The IPSF considers multiple paired observations from two adjacent gantry angles as approximations of the blocked signal and uses a weighted least square regression of these observations to finally determine the blocked signal. The method was first tested with a simulated SMOG on a head phantom. The signal to noise ratio (SNR), which represents the difference of the recovered CBCT image to the original image without the SMOG, was used to evaluate the ability of the IPSF in recovering the missing signal. The IPSF approach was then tested using a Catphan phantom on a prototype SMOG assembly installed in a bench top CBCT system. Results: In the simulated SMOG experiment, the SNRs were increased from 15.1 and 12.7 dB to 35.6 and 28.9 dB comparing with a conventional interpolation method (inpainting method) for a projection and the reconstructed 3D image, respectively, suggesting that IPSF successfully recovered most of blocked signal. In the prototype SMOG experiment, the authors have successfully reconstructed a CBCT image using the IPSF-SMOG approach. The detailed geometric features in the

  6. On the construction of a time base and the elimination of averaging errors in proxy records

    Science.gov (United States)

    Beelaerts, V.; De Ridder, F.; Bauwens, M.; Schmitz, N.; Pintelon, R.

    2009-04-01

    The measured averaged proxy signal is modeled by the following signal model: $\bar{y}(n,\theta) = \frac{1}{\Delta}\int_{n-\Delta/2}^{n+\Delta/2} y(m,\theta)\,dm$, where m is the position, x(m) = Δm, θ are the unknown parameters, and y(m, θ) is the proxy signal we want to identify (the proxy signal as found in the natural archive), which we model as $y(m,\theta) = A_0 + \sum_{k=1}^{H} [A_k \sin(k\omega t(m)) + A_{k+H} \cos(k\omega t(m))]$ with $t(m) = m T_S + g(m) T_S$. Here $T_S = 1/f_S$ is the sampling period, $f_S$ the sampling frequency, and g(m) the unknown time base distortion (TBD). In this work a spline approximation of the TBD is chosen: $g(m) = \sum_{l=1}^{b} b_l \phi_l(m)$, where b is the vector of unknown time base distortion parameters and φ is a set of splines. The estimates of the unknown parameters were obtained with a nonlinear least-squares algorithm. The vessel density measured in the mangrove tree R. mucronata was used to illustrate the method; vessel density is a proxy for rainfall in tropical regions. The proxy data on the newly constructed time base showed the expected yearly periodicity, and the correction for the averaging effect increased the amplitude by 11.18%.
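
    As a rough illustration of this estimation scheme, the sketch below fits a one-harmonic version of the model with a nonlinear least-squares solver; for brevity the time base distortion g(m) is approximated by a quadratic rather than splines, and all signal values are synthetic.

      import numpy as np
      from scipy.optimize import least_squares

      m = np.arange(200, dtype=float)
      Ts = 1.0

      def model(params, m):
          A0, A1, B1, omega, c1, c2 = params
          g = c1 * m + c2 * m**2              # quadratic stand-in for the spline TBD
          t = m * Ts + g * Ts                 # distorted time base t(m)
          return A0 + A1 * np.sin(omega * t) + B1 * np.cos(omega * t)

      true = [1.0, 2.0, 0.5, 2 * np.pi / 50, 1e-3, -2e-6]
      y = model(true, m) + 0.1 * np.random.default_rng(0).normal(size=m.size)

      fit = least_squares(lambda p: model(p, m) - y,
                          x0=[0.5, 1.0, 1.0, 2 * np.pi / 45, 0.0, 0.0])
      print(fit.x)                            # recovered amplitudes, frequency, TBD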

  7. Channel Estimation in DCT-Based OFDM

    Science.gov (United States)

    Wang, Yulin; Zhang, Gengxin; Xie, Zhidong; Hu, Jing

    2014-01-01

    This paper derives the channel estimation of a discrete cosine transform- (DCT-) based orthogonal frequency-division multiplexing (OFDM) system over a frequency-selective multipath fading channel. Channel estimation has been proved to improve system throughput and performance by allowing for coherent demodulation. Pilot-aided methods are traditionally used to learn the channel response. Least-squares (LS) and minimum mean square error (MMSE) estimators are investigated. We also study a compressed sensing (CS) based channel estimation, which takes the sparsity of the wireless channel into account. Simulation results show that the CS based channel estimation is expected to have better performance than LS, while MMSE can achieve optimal performance because of its prior knowledge of the channel statistics. PMID:24757439
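
    A minimal sketch of pilot-aided LS channel estimation is given below for a generic per-subcarrier flat model (known pilots x, received samples y, so the LS estimate is y/x). It does not reproduce the DCT-OFDM specifics of the paper; the pilot constellation and noise level are assumptions.

      import numpy as np

      rng = np.random.default_rng(1)
      N = 64                                               # pilot subcarriers
      x = np.exp(1j * np.pi / 2 * rng.integers(0, 4, N))   # known QPSK pilots
      h = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)
      n = 0.1 * (rng.normal(size=N) + 1j * rng.normal(size=N))
      y = h * x + n                                        # received pilot samples

      h_ls = y / x                                         # per-pilot LS estimate
      print(np.mean(np.abs(h_ls - h) ** 2))                # LS mean-square error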

  8. Online Wavelet Complementary velocity Estimator.

    Science.gov (United States)

    Righettini, Paolo; Strada, Roberto; KhademOlama, Ehsan; Valilou, Shirin

    2018-02-01

    In this paper, we propose a new online Wavelet Complementary velocity Estimator (WCE) operating on position and acceleration data gathered from an electro-hydraulic servo shaking table. This is a batch-type estimator based on wavelet filter banks, which extract the high- and low-resolution content of the data. The proposed complementary estimator combines the two velocity estimates obtained from numerical differentiation of the position sensor and integration of the acceleration sensor, using a fixed moving-horizon window as input to the wavelet filter. Because wavelet filters are used, the method can be implemented in a parallel procedure. In this way the velocity is estimated without the high-frequency noise of numerical differentiation or the drifting bias of integration, and with less delay, which makes it suitable for active vibration control in high-precision mechatronic systems using Direct Velocity Feedback (DVF) methods. The method also allows velocity sensing with fewer mechanically moving parts, which makes it suitable for fast miniature structures. We compared this method with Kalman and Butterworth filters in terms of stability and delay, and benchmarked them by long-time integration of the estimated velocity to recover the initial position data. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
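
    The complementary idea, low-passing the differentiated position (noisy at high frequency) and high-passing the integrated acceleration (drifting at low frequency) before adding them, can be sketched with simple first-order filters in place of the wavelet filter banks used in the paper. All signals and the crossover frequency below are synthetic assumptions.

      import numpy as np
      from scipy.signal import butter, filtfilt

      fs, dt = 1000.0, 1e-3
      t = np.arange(0, 5, dt)
      v_true = np.sin(2 * np.pi * t)
      rng = np.random.default_rng(2)
      pos = -np.cos(2 * np.pi * t) / (2 * np.pi) + 1e-4 * rng.normal(size=t.size)
      acc = np.gradient(v_true, dt) + 0.05 * rng.normal(size=t.size)

      fc = 5.0                                   # complementary crossover (Hz)
      b_lo, a_lo = butter(1, fc / (fs / 2), "low")
      b_hi, a_hi = butter(1, fc / (fs / 2), "high")

      v_diff = np.gradient(pos, dt)              # noisy at high frequencies
      v_int = np.cumsum(acc) * dt                # drifts at low frequencies
      v_est = filtfilt(b_lo, a_lo, v_diff) + filtfilt(b_hi, a_hi, v_int)
      print(np.sqrt(np.mean((v_est - v_true) ** 2)))   # RMS velocity error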

  9. Moving force identification based on redundant concatenated dictionary and weighted l1-norm regularization

    Science.gov (United States)

    Pan, Chu-Dong; Yu, Ling; Liu, Huan-Lin; Chen, Ze-Peng; Luo, Wen-Feng

    2018-01-01

    Moving force identification (MFI) is an important inverse problem in the field of bridge structural health monitoring (SHM). Reasonable signal structures of moving forces are rarely considered in the existing MFI methods. Interaction forces are complex because they contain both slowly-varying harmonic and impact signals due to bridge vibration and bumps on a bridge deck, respectively. Therefore, the interaction forces are usually hard to express completely and sparsely by using a single basis function set. Based on a redundant concatenated dictionary and a weighted l1-norm regularization method, a hybrid method is proposed for MFI in this study. The redundant dictionary consists of both trigonometric functions and rectangular functions, used for matching the harmonic and impact signal features of the unknown moving forces. The weighted l1-norm regularization method is introduced for the formulation of the MFI equation, so that the signal features of the moving forces can be accurately extracted. The fast iterative shrinkage-thresholding algorithm (FISTA) is used for solving the MFI problem. The optimal regularization parameter is chosen by the Bayesian information criterion (BIC) method. In order to assess the accuracy and the feasibility of the proposed method, a simply-supported beam bridge subjected to a moving force is taken as an example for numerical simulations. Finally, a series of experimental studies on MFI of a steel beam are performed in the laboratory. Both numerical and experimental results show that the proposed method can accurately identify the moving forces with strong robustness, and it has a better performance than the Tikhonov regularization method. Some related issues are discussed as well.
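
    The sketch below illustrates the dictionary idea on a toy signal: trigonometric atoms capture the slowly-varying harmonic part and rectangular (boxcar) atoms capture an impact, and a sparse coefficient vector is recovered with a plain l1 (Lasso) solver standing in for the weighted-l1/FISTA scheme and BIC-based parameter choice of the paper. Atom counts and the regularization weight are assumptions.

      import numpy as np
      from sklearn.linear_model import Lasso

      n = 256
      t = np.arange(n) / n
      # Redundant concatenated dictionary: trigonometric atoms for the
      # harmonic content, rectangular atoms for impact-like content
      trig = np.column_stack([f(2 * np.pi * k * t)
                              for k in range(1, 21) for f in (np.sin, np.cos)])
      rect = np.column_stack([((t >= i / 32) & (t < (i + 1) / 32)).astype(float)
                              for i in range(32)])
      D = np.hstack([trig, rect])

      f_true = 2 * np.sin(2 * np.pi * 3 * t) + 5 * ((t >= 0.5) & (t < 0.5 + 1 / 32))
      y = f_true + 0.1 * np.random.default_rng(3).normal(size=n)

      coef = Lasso(alpha=0.01, fit_intercept=False, max_iter=20000).fit(D, y).coef_
      f_hat = D @ coef                           # sparse reconstruction of the force
      print(np.linalg.norm(f_hat - f_true) / np.linalg.norm(f_true))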

  10. Simple algorithm to estimate mean-field effects from minor differential permeability curves based on the Preisach model

    International Nuclear Information System (INIS)

    Perevertov, Oleksiy

    2003-01-01

    The classical Preisach model (PM) of magnetic hysteresis requires that any minor differential permeability curve lies under minor curves with larger field amplitude. Measurements of ferromagnetic materials show that very often this is not true. By applying the classical PM formalism to measured minor curves one can discover that it leads to an oval-shaped region on each half of the Preisach plane where the calculations produce negative values in the Preisach function. Introducing an effective field, which differs from the applied one by a mean-field term proportional to the magnetization, usually solves this problem. Complex techniques exist to estimate the minimum necessary proportionality constant (the moving parameter). In this paper we propose a simpler way to estimate the mean-field effects for use in nondestructive testing, which is based on experience from the measurements of industrial steels. A new parameter (parameter of shift) is introduced, which monitors the mean-field effects. The relation between the shift parameter and the moving one was studied for a number of steels. From preliminary experiments no correlation was found between the shift parameter and the classical magnetic ones such as the coercive field, maximum differential permeability and remanent magnetization

  11. A Bayesian model averaging approach for estimating the relative risk of mortality associated with heat waves in 105 U.S. cities.

    Science.gov (United States)

    Bobb, Jennifer F; Dominici, Francesca; Peng, Roger D

    2011-12-01

    Estimating the risks heat waves pose to human health is a critical part of assessing the future impact of climate change. In this article, we propose a flexible class of time series models to estimate the relative risk of mortality associated with heat waves and conduct Bayesian model averaging (BMA) to account for the multiplicity of potential models. Applying these methods to data from 105 U.S. cities for the period 1987-2005, we identify those cities having a high posterior probability of increased mortality risk during heat waves, examine the heterogeneity of the posterior distributions of mortality risk across cities, assess sensitivity of the results to the selection of prior distributions, and compare our BMA results to a model selection approach. Our results show that no single model best predicts risk across the majority of cities, and that for some cities heat-wave risk estimation is sensitive to model choice. Although model averaging leads to posterior distributions with increased variance as compared to statistical inference conditional on a model obtained through model selection, we find that the posterior mean of heat wave mortality risk is robust to accounting for model uncertainty over a broad class of models. © 2011, The International Biometric Society.
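
    One common large-sample shortcut to model averaging weights candidate models by exp(-BIC/2); the sketch below shows that mechanic on hypothetical city-level risk estimates. It is only an approximation of the full BMA machinery used in the article.

      import numpy as np

      def bma_weights(bics):
          """Approximate posterior model probabilities from BIC values."""
          b = np.asarray(bics, dtype=float)
          w = np.exp(-0.5 * (b - b.min()))       # shift for numerical stability
          return w / w.sum()

      bics = [1012.3, 1010.1, 1015.8]            # hypothetical candidate models
      betas = [0.042, 0.051, 0.038]              # each model's heat-wave risk estimate
      w = bma_weights(bics)
      print(w, float(np.dot(w, betas)))          # weights and model-averaged risk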

  12. Depth-averaged instantaneous currents in a tidally dominated shelf sea from glider observations

    Science.gov (United States)

    Merckelbach, Lucas

    2016-12-01

    Ocean gliders have become ubiquitous observation platforms in the ocean in recent years. They are also increasingly used in coastal environments. The coastal observatory system COSYNA has pioneered the use of gliders in the North Sea, a shallow tidally energetic shelf sea. For operational reasons, the gliders operated in the North Sea are programmed to resurface every 3-5 h. The glider's dead-reckoning algorithm yields depth-averaged currents, averaged in time over each subsurface interval. Under operational conditions these averaged currents are a poor approximation of the instantaneous tidal current. In this work an algorithm is developed that estimates the instantaneous current (tidal and residual) from glider observations only. The algorithm uses a first-order Butterworth low pass filter to estimate the residual current component, and a Kalman filter based on the linear shallow water equations for the tidal component. A comparison of data from a glider experiment with current data from an acoustic Doppler current profiler deployed nearby shows that the standard deviations for the east and north current components are better than 7 cm/s in near-real-time mode and improve to better than 6 cm/s in delayed mode, where the filters can be run forward and backward. In the near-real-time mode the algorithm provides estimates of the currents that the glider is expected to encounter during its next few dives. Combined with a behavioural and dynamic model of the glider, this yields predicted trajectories, the information of which is incorporated in warning messages issued to ships by the (German) authorities. In delayed mode the algorithm produces useful estimates of the depth-averaged currents, which can be used in (process-based) analyses in case no other source of measured current information is available.
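
    The residual (sub-tidal) part of the algorithm can be sketched as a first-order Butterworth low pass applied to the per-dive depth-averaged currents, with the remainder handed to the tidal (Kalman) stage. The sampling interval, cutoff, and current series below are synthetic assumptions.

      import numpy as np
      from scipy.signal import butter, filtfilt

      dt_h = 4.0                                  # hours between surfacings
      t = np.arange(0, 30 * 24, dt_h)
      rng = np.random.default_rng(4)
      u = (0.10 + 0.30 * np.sin(2 * np.pi * t / 12.42)   # M2 tidal component
           + 0.03 * rng.normal(size=t.size))             # measurement noise

      fc = 1.0 / 48.0                             # cutoff: 48 h, below the tidal band
      b, a = butter(1, fc / (0.5 / dt_h), "low")  # first-order Butterworth low pass
      u_residual = filtfilt(b, a, u)              # residual current (delayed mode)
      u_tidal = u - u_residual                    # left for the Kalman-filter stage
      print(u_residual.mean())                    # should sit near the 0.10 m/s offset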

  13. Portfolio Value at Risk Estimate for Crude Oil Markets: A Multivariate Wavelet Denoising Approach

    Directory of Open Access Journals (Sweden)

    Kin Keung Lai

    2012-04-01

    Full Text Available In the increasingly globalized economy these days, the major crude oil markets worldwide are seeing a higher level of integration, which results in a higher level of dependency and transmission of risks among different markets. Thus the risk of the typical multi-asset crude oil portfolio is influenced by the dynamic correlation among different assets, which has both normal and transient behaviors. This paper proposes a novel multivariate wavelet denoising based approach for estimating Portfolio Value at Risk (PVaR). The multivariate wavelet analysis is introduced to analyze the multi-scale behaviors of the correlation among different markets and the portfolio volatility behavior in the higher dimensional time scale domain. The heterogeneous data and noise behavior are addressed in the proposed multi-scale denoising based PVaR estimation algorithm, which also incorporates mainstream time series models to address other well-known data features such as autocorrelation and volatility clustering. Empirical studies suggest that the proposed algorithm outperforms the benchmark Exponentially Weighted Moving Average (EWMA) and DCC-GARCH models in terms of conventional performance evaluation criteria for model reliability.
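
    For reference, the EWMA benchmark can be sketched in a few lines: a RiskMetrics-style recursion updates the variance estimate and a normal quantile converts volatility into a one-day VaR. The decay factor, confidence level, and return series are the usual textbook assumptions, not values from the paper.

      import numpy as np

      def ewma_var(returns, lam=0.94, z=2.3263):
          """EWMA volatility and one-day 99% Value at Risk (RiskMetrics style)."""
          r = np.asarray(returns, dtype=float)
          var = r[0] ** 2
          for x in r[1:]:
              var = lam * var + (1 - lam) * x ** 2   # exponentially weighted update
          sigma = np.sqrt(var)
          return sigma, z * sigma                    # volatility, VaR (as a return)

      rng = np.random.default_rng(5)
      ret = 0.02 * rng.normal(size=500)              # hypothetical daily oil returns
      print(ewma_var(ret))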

  14. A New Approach to Image-Based Estimation of Food Volume

    Directory of Open Access Journals (Sweden)

    Hamid Hassannejad

    2017-06-01

    Full Text Available A balanced diet is the key to a healthy lifestyle and is crucial for preventing or dealing with many chronic diseases such as diabetes and obesity. Therefore, monitoring diet can be an effective way of improving people’s health. However, manual reporting of food intake has been shown to be inaccurate and often impractical. This paper presents a new approach to food intake quantity estimation using image-based modeling. The modeling method consists of three steps: firstly, a short video of the food is taken by the user’s smartphone. From such a video, six frames are selected based on the pictures’ viewpoints as determined by the smartphone’s orientation sensors. Secondly, the user marks one of the frames to seed an interactive segmentation algorithm. Segmentation is based on a Gaussian Mixture Model alongside the graph-cut algorithm. Finally, a customized image-based modeling algorithm generates a point-cloud to model the food. At the same time, a stochastic object-detection method locates a checkerboard used as size/ground reference. The modeling algorithm is optimized such that the use of six input images still results in an acceptable computation cost. In our evaluation procedure, we achieved an average accuracy of 92 % on a test set that includes images of different kinds of pasta and bread, with an average processing time of about 23 s.

  15. Estimating Seebeck Coefficient of a p-Type High Temperature Thermoelectric Material Using Bee Algorithm Multi-layer Perception

    Science.gov (United States)

    Uysal, Fatih; Kilinc, Enes; Kurt, Huseyin; Celik, Erdal; Dugenci, Muharrem; Sagiroglu, Selami

    2017-08-01

    Thermoelectric generators (TEGs) convert heat into electrical energy. These energy-conversion systems do not involve any moving parts and are made of thermoelectric (TE) elements connected electrically in series and thermally in parallel; however, they are currently not suitable for regular operation due to their low efficiency levels. In order to produce high-efficiency TEGs, there is a need for highly heat-resistant thermoelectric materials (TEMs) with an improved figure of merit (ZT). The production and test methods used for TEMs today are highly expensive. This study attempts to estimate the Seebeck coefficient of TEMs by using the values of existing materials in the literature. The estimation is made with an artificial neural network (ANN) based on the amount of doping and the production method. The results show that the estimated Seebeck coefficient approximates the real values with an average accuracy of 94.4%. In addition, the ANN detected that any change in production method is followed by a change in the Seebeck coefficient.
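
    A stand-in for such an estimator is sketched below using a small multilayer perceptron on synthetic features (doping fraction plus a one-hot production method); the paper's bee-algorithm training and real materials data are not reproduced here.

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(9)
      doping = rng.uniform(0.0, 0.1, size=(200, 1))       # doping fraction
      method = np.eye(3)[rng.integers(0, 3, size=200)]    # one-hot production method
      X = np.hstack([doping, method])
      seebeck = (150 + 800 * doping[:, 0] + method @ np.array([0.0, 10.0, -5.0])
                 + rng.normal(0, 5, size=200))            # synthetic uV/K targets

      mlp = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000,
                         random_state=0).fit(X[:160], seebeck[:160])
      pred = mlp.predict(X[160:])
      print(np.mean(np.abs((pred - seebeck[160:]) / seebeck[160:])))  # rel. error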

  16. Average effect estimates remain similar as evidence evolves from single trials to high-quality bodies of evidence: a meta-epidemiologic study.

    Science.gov (United States)

    Gartlehner, Gerald; Dobrescu, Andreea; Evans, Tammeka Swinson; Thaler, Kylie; Nussbaumer, Barbara; Sommer, Isolde; Lohr, Kathleen N

    2016-01-01

    The objective of our study was to use a diverse sample of medical interventions to assess empirically whether first trials rendered substantially different treatment effect estimates than reliable, high-quality bodies of evidence. We used a meta-epidemiologic study design using 100 randomly selected bodies of evidence from Cochrane reports that had been graded as high quality of evidence. To determine the concordance of effect estimates between first and subsequent trials, we applied both quantitative and qualitative approaches. For quantitative assessment, we used Lin's concordance correlation and calculated z-scores; to determine the magnitude of differences of treatment effects, we calculated standardized mean differences (SMDs) and ratios of relative risks. We determined qualitative concordance based on a two-tiered approach incorporating changes in statistical significance and magnitude of effect. First trials both overestimated and underestimated the true treatment effects in no discernible pattern. Nevertheless, depending on the definition of concordance, effect estimates of first trials were concordant with pooled subsequent studies in at least 33% but up to 50% of comparisons. The pooled magnitude of change as bodies of evidence advanced from single trials to high-quality bodies of evidence was 0.16 SMD [95% confidence interval (CI): 0.12, 0.21]. In 80% of comparisons, the difference in effect estimates was smaller than 0.5 SMDs. In first trials with large treatment effects (>0.5 SMD), however, estimates of effect substantially changed as new evidence accrued (mean change 0.68 SMD; 95% CI: 0.50, 0.86). Results of first trials often change, but the magnitude of change, on average, is small. Exceptions are first trials that present large treatment effects, which often dissipate as new evidence accrues. Copyright © 2016 Elsevier Inc. All rights reserved.

  17. A Web-Based System for Bayesian Benchmark Dose Estimation.

    Science.gov (United States)

    Shao, Kan; Shapiro, Andrew J

    2018-01-11

    Benchmark dose (BMD) modeling is an important step in human health risk assessment and is used as the default approach to identify the point of departure for risk assessment. A probabilistic framework for dose-response assessment has been proposed and advocated by various institutions and organizations; therefore, a reliable tool is needed to provide distributional estimates for BMD and other important quantities in dose-response assessment. We developed an online system for Bayesian BMD (BBMD) estimation and compared results from this software with U.S. Environmental Protection Agency's (EPA's) Benchmark Dose Software (BMDS). The system is built on a Bayesian framework featuring the application of Markov chain Monte Carlo (MCMC) sampling for model parameter estimation and BMD calculation, which makes the BBMD system fundamentally different from the currently prevailing BMD software packages. In addition to estimating the traditional BMDs for dichotomous and continuous data, the developed system is also capable of computing model-averaged BMD estimates. A total of 518 dichotomous and 108 continuous data sets extracted from the U.S. EPA's Integrated Risk Information System (IRIS) database (and similar databases) were used as testing data to compare the estimates from the BBMD and BMDS programs. The results suggest that the BBMD system may outperform the BMDS program in a number of aspects, including fewer failed BMD and BMDL calculations and estimates. The BBMD system is a useful alternative tool for estimating BMD with additional functionalities for BMD analysis based on most recent research. Most importantly, the BBMD has the potential to incorporate prior information to make dose-response modeling more reliable and can provide distributional estimates for important quantities in dose-response assessment, which greatly facilitates the current trend for probabilistic risk assessment. https://doi.org/10.1289/EHP1289.

  18. [The trial of business data analysis at the Department of Radiology by constructing the auto-regressive integrated moving-average (ARIMA) model].

    Science.gov (United States)

    Tani, Yuji; Ogasawara, Katsuhiko

    2012-01-01

    This study aimed to contribute to the management of a healthcare organization by providing management information through time-series analysis of business data accumulated in the hospital information system, which had not been utilized thus far. We examined the performance of a prediction method using the auto-regressive integrated moving-average (ARIMA) model on business data obtained at the Radiology Department. We built the model using the number of radiological examinations over the past 9 years, predicted the number of examinations in the final year, and compared the predicted values with the actual values. We established that the prediction method was simple and cost-effective, using free software, and that a simple model could be built after pre-processing to remove trend components from the data. The difference between predicted and actual values was about 10%; however, understanding the chronological change was more important than the individual time-series values. Furthermore, the method is highly versatile and adaptable, so different healthcare organizations can use it for the analysis and forecasting of their own business data.
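
    A minimal version of such a workflow, fitting an ARIMA model on all but the last year of monthly counts, forecasting the held-out year, and reporting the percentage error, is sketched below with synthetic data; the order (1, 1, 1) and the series itself are assumptions, not the department's actual figures.

      import numpy as np
      import pandas as pd
      from statsmodels.tsa.arima.model import ARIMA

      rng = np.random.default_rng(6)
      idx = pd.date_range("2003-01", periods=108, freq="MS")   # 9 years, monthly
      y = pd.Series(2000 + 5 * np.arange(108) + 50 * rng.normal(size=108),
                    index=idx)                                 # trending exam counts

      train, test = y[:-12], y[-12:]              # hold out the final year
      fit = ARIMA(train, order=(1, 1, 1)).fit()   # d=1 removes the trend component
      pred = fit.forecast(steps=12)
      print(np.mean(np.abs((pred - test) / test)))  # mean absolute percent error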

  19. Novel Diagonal Reloading Based Direction of Arrival Estimation in Unknown Non-Uniform Noise

    Directory of Open Access Journals (Sweden)

    Hao Zhou

    2018-01-01

    Full Text Available A nested array can expand the degrees of freedom (DOF) from the difference-coarray perspective, but suffers from degraded direction of arrival (DOA) estimation performance in unknown non-uniform noise. In this paper, a novel diagonal reloading (DR) based DOA estimation algorithm is proposed using a recently developed nested MIMO array. The elements on the main diagonal of the sample covariance matrix are eliminated; next, the smallest MN-K eigenvalues of the revised matrix are obtained and averaged to estimate the sum of the signal powers. This estimated sum is then filled into the main diagonal of the revised matrix to estimate the signal covariance matrix. In this way, the negative effect of the noise is eliminated without losing the useful information of the signal matrix, and the degrees of freedom are clearly expanded, resulting in improved performance. Several simulations are conducted to demonstrate the effectiveness of the proposed algorithm.
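
    Following that description, a sketch of the diagonal reloading step is given below. The sign convention for recovering the power sum from the smallest eigenvalues of the hollowed matrix is our assumption, and the array/source setup is synthetic.

      import numpy as np

      def diagonal_reload(R, K):
          """Diagonal reloading of an MN x MN sample covariance R with K sources."""
          MN = R.shape[0]
          R0 = R - np.diag(np.diag(R))        # eliminate the main diagonal
          eig = np.linalg.eigvalsh(R0)        # ascending, real (R0 is Hermitian)
          s = -eig[: MN - K].mean()           # smallest MN-K eigenvalues -> power sum
          return R0 + s * np.eye(MN)          # refill the main diagonal

      rng = np.random.default_rng(11)
      A = rng.normal(size=(8, 2)) + 1j * rng.normal(size=(8, 2))   # steering matrix
      S = rng.normal(size=(2, 500)) + 1j * rng.normal(size=(2, 500))
      N = rng.uniform(0.5, 2.0, size=(8, 1)) * rng.normal(size=(8, 500))
      X = A @ S + N                            # snapshots with non-uniform noise
      R_hat = diagonal_reload(X @ X.conj().T / 500, K=2)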

  20. Research on measurement method of optical camouflage effect of moving object

    Science.gov (United States)

    Wang, Juntang; Xu, Weidong; Qu, Yang; Cui, Guangzhen

    2016-10-01

    Camouflage effectiveness measurement is an important part of camouflage technology: it tests and measures the camouflage effect of a target and the performance of camouflage equipment according to tactical and technical requirements. Current optical-band camouflage effectiveness measurement is mainly aimed at static targets and cannot objectively reflect the dynamic camouflage effect of a moving target. This paper combines moving-object detection with camouflage effect detection. Taking the digital camouflage of a moving object as the research object, the adaptive background update algorithm of Surendra was improved, and a method of optical camouflage effect detection using the Lab color space for moving-object detection is presented. The binary image of the moving object is extracted, and in the image sequence characteristic parameters such as dispersion, eccentricity, complexity and moment invariants are used to construct the feature vector space. The Euclidean distance of the moving target with digital camouflage was calculated; the results show that the average Euclidean distance over 375 frames was 189.45, indicating that the dispersion, eccentricity, complexity and moment invariants of the digital camouflage pattern differ greatly from those of the moving target without digital camouflage. The measurement results showed that the camouflage effect was good. Meanwhile, with the performance evaluation module, the correlation coefficient of the dynamic target image ranged from 0.0035 to 0.1275, with some ups and downs, reflecting the adaptability of target and background under dynamic conditions. In view of existing infrared camouflage technology, the next step is to carry out camouflage effect measurement of moving targets in the infrared band.

  1. Global stereo matching algorithm based on disparity range estimation

    Science.gov (United States)

    Li, Jing; Zhao, Hong; Gu, Feifei

    2017-09-01

    The global stereo matching algorithms are of high accuracy for the estimation of disparity maps, but the optimization process remains time-consuming, especially for image pairs with high resolution and large baseline settings. To improve the computational efficiency of the global algorithms, a disparity range estimation scheme for global stereo matching is proposed to estimate the disparity map of rectified stereo images in this paper. The projective geometry of a parallel binocular stereo vision system is investigated to reveal a relationship between the two disparities at each pixel in rectified stereo images with different baselines, which can be used to quickly obtain a predicted disparity map in a long baseline setting from the map estimated in the small one. The drastically reduced disparity range at each pixel under the long baseline setting can then be determined from the predicted disparity map. Furthermore, the disparity range estimation scheme is introduced into graph cuts with expansion moves to estimate the precise disparity map, which can greatly reduce the computational cost without loss of accuracy in stereo matching, especially for dense global stereo matching, compared to the traditional algorithm. Experimental results with the Middlebury stereo datasets are presented to demonstrate the validity and efficiency of the proposed algorithm.

  2. Dog days of summer: Influences on decision of wolves to move pups

    Science.gov (United States)

    Ausband, David E.; Mitchell, Michael S.; Bassing, Sarah B.; Nordhagen, Matthew; Smith, Douglas W.; Stahler, Daniel R.

    2016-01-01

    For animals that forage widely, protecting young from predation can span relatively long time periods due to the inability of young to travel with and be protected by their parents. Moving relatively immobile young to improve access to important resources, limit detection of concentrated scent by predators, and decrease infestations by ectoparasites can be advantageous. Moving young, however, can also expose them to increased mortality risks (e.g., accidents, getting lost, predation). For group-living animals that live in variable environments and care for young over extended time periods, the influence of biotic factors (e.g., group size, predation risk) and abiotic factors (e.g., temperature and precipitation) on the decision to move young is unknown. We used data from 25 satellite-collared wolves (Canis lupus) in Idaho, Montana, and Yellowstone National Park to evaluate how these factors could influence the decision to move pups during the pup-rearing season. We hypothesized that litter size, the number of adults in a group, and perceived predation risk would positively affect the number of times gray wolves moved pups. We further hypothesized that wolves would move their pups more often when it was hot and dry to ensure sufficient access to water. Contrary to our hypothesis, monthly temperature above the 30-year average was negatively related to the number of times wolves moved their pups. Monthly precipitation above the 30-year average, however, was positively related to the amount of time wolves spent at pup-rearing sites after leaving the natal den. We found little relationship between risk of predation (by grizzly bears, humans, or conspecifics) or group and litter sizes and number of times wolves moved their pups. Our findings suggest that abiotic factors most strongly influence the decision of wolves to move pups, although responses to unpredictable biotic events (e.g., a predator encountering pups) cannot be ruled out.

  3. Plant Distribution Data Show Broader Climatic Limits than Expert-Based Climatic Tolerance Estimates.

    Directory of Open Access Journals (Sweden)

    Caroline A Curtis

    Full Text Available Although increasingly sophisticated environmental measures are being applied to species distribution models, the focus remains on using climatic data to provide estimates of habitat suitability. Climatic tolerance estimates based on expert knowledge are available for a wide range of plants via the USDA PLANTS database. We aim to test how climatic tolerance inferred from plant distribution records relates to tolerance estimated by experts. Further, we use this information to identify circumstances when species distributions are more likely to approximate climatic tolerance. We compiled expert knowledge estimates of minimum and maximum precipitation and minimum temperature tolerance for over 1800 conservation plant species from the 'plant characteristics' information in the USDA PLANTS database. We derived climatic tolerance from distribution data downloaded from the Global Biodiversity Information Facility (GBIF) and corresponding climate from WorldClim. We compared expert-derived climatic tolerance to empirical estimates to find the difference between their inferred climate niches (ΔCN), and tested whether ΔCN was influenced by growth form or range size. Climate niches calculated from distribution data were significantly broader than expert-based tolerance estimates (Mann-Whitney p values << 0.001). The average plant could tolerate 24 mm lower minimum precipitation, 14 mm higher maximum precipitation, and 7 °C lower minimum temperatures based on distribution data relative to expert-based tolerance estimates. Species with larger ranges had greater ΔCN for minimum precipitation and minimum temperature. For maximum precipitation and minimum temperature, forbs and grasses tended to have larger ΔCN, while grasses and trees had larger ΔCN for minimum precipitation. Our results show that distribution data are consistently broader than USDA PLANTS experts' knowledge and likely provide more robust estimates of climatic tolerance, especially for

  4. Trajectory averaging for stochastic approximation MCMC algorithms

    KAUST Repository

    Liang, Faming

    2010-10-01

    The subject of stochastic approximation was founded by Robbins and Monro [Ann. Math. Statist. 22 (1951) 400-407]. After five decades of continual development, it has developed into an important area in systems control and optimization, and it has also served as a prototype for the development of adaptive algorithms for on-line estimation and control of stochastic systems. Recently, it has been used in statistics with Markov chain Monte Carlo for solving maximum likelihood estimation problems and for general simulation and optimizations. In this paper, we first show that the trajectory averaging estimator is asymptotically efficient for the stochastic approximation MCMC (SAMCMC) algorithm under mild conditions, and then apply this result to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximation MCMC algorithms, for example, a stochastic approximation MLE algorithm for missing data problems, is also considered in the paper. © Institute of Mathematical Statistics, 2010.
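
    The trajectory averaging idea itself is simple to sketch: run a stochastic approximation recursion with a slowly decaying gain and keep a running average of the iterates, which is typically the more efficient estimator. The toy root-finding problem below is an illustration, not the SAMCMC setting of the paper.

      import numpy as np

      # Find the root of f(x) = x - 1 from noisy evaluations, then average
      # the trajectory (Polyak-Ruppert style averaging).
      rng = np.random.default_rng(7)
      x, xbar = 0.0, 0.0
      for n in range(1, 10001):
          noisy_f = (x - 1.0) + rng.normal()   # noisy observation of f(x)
          x -= noisy_f / n ** 0.7              # slowly decaying gain sequence
          xbar += (x - xbar) / n               # running trajectory average
      print(x, xbar)                           # the average is the smoother estimate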

  5. Modified parity space averaging approaches for online cross-calibration of redundant sensors in nuclear reactors

    Directory of Open Access Journals (Sweden)

    Moath Kassim

    2018-05-01

    Full Text Available To maintain the safety and reliability of reactors, redundant sensors are usually used to measure critical variables and estimate their averaged time-dependency. Unhealthy sensors can badly influence the estimation result for the process variable. Since online condition monitoring was introduced, the online cross-calibration method has been widely used to detect any anomaly in sensor readings within the redundant group. The cross-calibration method has four main averaging techniques: simple averaging, band averaging, weighted averaging, and parity space averaging (PSA). PSA weighs redundant signals based on their error bounds and their band consistency. Using the consistency weighting factor (C), PSA assigns more weight to consistent signals that have shared bands, based on how many bands they share, and gives inconsistent signals very low weight. In this article, three approaches are introduced for improving the PSA technique: the first adds another consistency factor, so-called trend consistency (TC), to account for the preservation of any characteristic edge that reflects the behavior of the equipment/component measured by the process parameter; the second replaces the error bound/accuracy based weighting factor (Wa) with a weighting factor based on the Euclidean distance (Wd); and the third applies Wd, TC, and C all together. Cold neutron source data sets of four redundant hydrogen pressure transmitters from a research reactor were used to perform the validation and verification. Results showed that the second and third modified approaches lead to reasonable improvement of the PSA technique. All approaches implemented in this study were similar in that they have the capability to (1) identify and isolate a drifted sensor that should undergo calibration, (2) identify a faulty sensor due to a long and continuous range of missing data, and (3) identify a healthy sensor.
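
    A much-simplified sketch of consistency-weighted averaging of redundant readings is shown below: each sensor's consistency factor counts how many other sensors fall within its error band, and a weighted average follows. The Wd and TC factors of the article are not reproduced, and all readings are hypothetical.

      import numpy as np

      def psa_estimate(readings, accuracies):
          """Consistency-weighted average of redundant sensor readings."""
          x = np.asarray(readings, dtype=float)
          a = np.asarray(accuracies, dtype=float)
          # C: number of other sensors inside each sensor's error band
          C = np.array([np.sum(np.abs(x - xi) <= ai) - 1 for xi, ai in zip(x, a)])
          w = (C + 1e-9) / a                  # consistent, accurate sensors dominate
          return np.sum(w * x) / np.sum(w)

      # Three consistent transmitters and one outlier
      print(psa_estimate([10.1, 10.0, 10.2, 12.5], [0.2, 0.2, 0.2, 0.2]))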

  6. Mixed convection from a discrete heat source in enclosures with two adjacent moving walls and filled with micropolar nanofluids

    Directory of Open Access Journals (Sweden)

    Sameh E. Ahmed

    2016-03-01

    Full Text Available This paper examines numerically the thermal and flow field characteristics of the laminar steady mixed convection flow in a square lid-driven enclosure filled with water-based micropolar nanofluids by using the finite volume method. While a uniform heat source is located on a part of the bottom of the enclosure, both the right and left sidewalls are considered adiabatic together with the remaining parts of the bottom wall. The upper wall is maintained at a relatively low temperature. Both the upper and left sidewalls move at a uniform lid-driven velocity, and four different cases of the moving lid orientations are considered. The fluid inside the enclosure is a water-based micropolar nanofluid containing different types of solid spherical nanoparticles: Cu, Ag, Al2O3, and TiO2. Based on the numerical results, the effects of the dominant parameters such as the Richardson number, nanofluid type, length and location of the heat source, solid volume fraction, moving lid orientation and dimensionless viscosity are examined. Comparisons with previous numerical works are performed and good agreement between the results is observed. It is found that the average Nusselt number along the heat source decreases as the heat source length increases, while it increases when the solid volume fraction increases. Also, the results of the present study indicate that both the local and the average Nusselt numbers along the heat source have the highest value for the fourth case (C4). Moreover, it is observed that both the Richardson number and the moving lid orientations have a significant effect on the flow and thermal fields in the enclosure.

  7. Attitude Estimation Based on the Spherical Simplex Transformation Modified Unscented Kalman Filter

    Directory of Open Access Journals (Sweden)

    Jianwei Zhao

    2014-01-01

    Full Text Available An antenna attitude estimation algorithm is proposed to improve the antenna pointing accuracy for satellite communication on-the-move. The extrapolated angular acceleration is adopted to improve the time response. The states of the system are modified according to the modification rules. The spherical simplex transformation unscented Kalman filter is used to improve the precision of the estimated attitude and to decrease the computational load of the unscented Kalman filter. The experimental results show that the proposed algorithm can improve the responsiveness of the attitude estimate and the precision of the antenna pointing, meeting the antenna pointing requirement.

  8. Student-Centered Coaching: The Moves

    Science.gov (United States)

    Sweeney, Diane; Harris, Leanna S.

    2017-01-01

    Student-centered coaching is a highly-effective, evidence-based coaching model that shifts the focus from "fixing" teachers to collaborating with them to design instruction that targets student outcomes. But what does this look like in practice? "Student-Centered Coaching: The Moves" shows you the day-to-day coaching moves that…

  9. Analysis of rainfall and SWAT model discharge with the moving average method in the Ciliwung Hulu watershed

    Directory of Open Access Journals (Sweden)

    Defri Satiya Zuma

    2017-09-01

    Full Text Available A watershed can be regarded as a hydrological system that transforms rainwater as an input into outputs such as flow and sediment. The transformation of inputs into outputs has specific forms and properties and involves many processes, including processes occurring on the land surface, in river basins, and in soil and aquifers. This study aimed to apply the SWAT model in the Ciliwung Hulu watershed and to assess the effect of 3-day, 5-day, 7-day and 10-day average rainfall on the hydrological characteristics of the watershed. The correlation coefficient (r) between rainfall and discharge was positive, indicating a unidirectional relationship between rainfall and discharge in the upstream, midstream and downstream parts of the watershed. The upper limit ratio of discharge had a downward trend from upstream to downstream, while the lower limit ratio of discharge had an upward trend from upstream to downstream. This shows that the discharge peak in the Ciliwung Hulu watershed has a downward trend from upstream to downstream, while the baseflow has an upward trend. The upstream part of the Ciliwung Hulu watershed had the highest ratio of discharge peak to baseflow, so it needs soil and water conservation and civil engineering measures. The discussion concluded that the SWAT model could be applied well in the Ciliwung Hulu watershed, and that the average rainfall most affecting the hydrological characteristics was the 10-day average; for the 10-day average rainfall, all components contributed maximally to river discharge.
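
    Computing such moving-average rainfall series and their correlation with discharge is a one-liner per window with a dataframe library; the sketch below uses synthetic daily rainfall and discharge series (the Ciliwung Hulu data are not reproduced).

      import numpy as np
      import pandas as pd

      rng = np.random.default_rng(8)
      days = pd.date_range("2015-01-01", periods=365, freq="D")
      rain = pd.Series(rng.gamma(2.0, 5.0, size=365), index=days)     # mm/day
      flow = (0.6 * rain.rolling(10).mean().fillna(rain.mean())       # toy response
              + rng.normal(0, 2, size=365) + 20)                      # m3/s

      for w in (3, 5, 7, 10):
          r = rain.rolling(window=w).mean().corr(flow)
          print(f"{w}-day moving average rainfall vs discharge: r = {r:.2f}")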

  10. Can We Study Autonomous Driving Comfort in Moving-Base Driving Simulators? A Validation Study.

    Science.gov (United States)

    Bellem, Hanna; Klüver, Malte; Schrauf, Michael; Schöner, Hans-Peter; Hecht, Heiko; Krems, Josef F

    2017-05-01

    To lay the basis of studying autonomous driving comfort using driving simulators, we assessed the behavioral validity of two moving-base simulator configurations by contrasting them with a test-track setting. With increasing level of automation, driving comfort becomes increasingly important. Simulators provide a safe environment to study perceived comfort in autonomous driving. To date, however, no studies were conducted in relation to comfort in autonomous driving to determine the extent to which results from simulator studies can be transferred to on-road driving conditions. Participants (N = 72) experienced six differently parameterized lane-change and deceleration maneuvers and subsequently rated the comfort of each scenario. One group of participants experienced the maneuvers on a test-track setting, whereas two other groups experienced them in one of two moving-base simulator configurations. We could demonstrate relative and absolute validity for one of the two simulator configurations. Subsequent analyses revealed that the validity of the simulator highly depends on the parameterization of the motion system. Moving-base simulation can be a useful research tool to study driving comfort in autonomous vehicles. However, our results point at a preference for subunity scaling factors for both lateral and longitudinal motion cues, which might be explained by an underestimation of speed in virtual environments. In line with previous studies, we recommend lateral- and longitudinal-motion scaling factors of approximately 50% to 60% in order to obtain valid results for both active and passive driving tasks.

  11. Columnar transmitter based wireless power delivery system for implantable device in freely moving animals.

    Science.gov (United States)

    Eom, Kyungsik; Jeong, Joonsoo; Lee, Tae Hyung; Lee, Sung Eun; Jun, Sang Bum; Kim, Sung June

    2013-01-01

    A wireless power delivery system is developed to deliver electrical power to the neuroprosthetic devices that are implanted into animals freely moving inside the cage. The wireless powering cage is designed for long-term animal experiments without cumbersome wires for power supply or the replacement of batteries. In the present study, we propose a novel wireless power transmission system using resonator-based inductive links to increase power efficiency and to minimize the efficiency variations. A columnar transmitter coil is proposed to provide lateral uniformity of power efficiency. Using this columnar transmitter coil, only 7.2% efficiency fluctuation occurs from the maximum transmission efficiency of 25.9%. A flexible polymer-based planar type receiver coil is fabricated and assembled with a neural stimulator and an electrode. Using the designed columnar transmitter coil, the implantable device successfully operates while it moves freely inside the cage.

  12. Reduced complexity FFT-based DOA and DOD estimation for moving target in bistatic MIMO radar

    KAUST Repository

    Ali, Hussain; Ahmed, Sajid; Al-Naffouri, Tareq Y.; Alouini, Mohamed-Slim

    2016-01-01

    classification (2D-MUSIC) and reduced-dimension MUSIC (RD-MUSIC) algorithms. It is shown by simulations that our proposed algorithm has better estimation performance and lower computational complexity compared to the 2D-MUSIC and RD-MUSIC algorithms. Moreover

  13. Equivalence-point electromigration acid-base titration via moving neutralization boundary electrophoresis.

    Science.gov (United States)

    Yang, Qing; Fan, Liu-Yin; Huang, Shan-Sheng; Zhang, Wei; Cao, Cheng-Xi

    2011-04-01

    In this paper, we developed a novel method of acid-base titration, viz. electromigration acid-base titration (EABT), via a moving neutralization boundary (MNB). With HCl and NaOH as the model strong acid and base, respectively, we conducted experiments on the EABT via the method of the moving neutralization boundary for the first time. The experiments revealed that (i) the concentration of agarose gel, the voltage used and the content of background electrolyte (KCl) had an evident influence on the boundary movement; (ii) the movement length was a function of the running time under constant acid and base concentrations; and (iii) there was good linearity between the length and the natural logarithm of the HCl concentration under the optimized conditions, and this linearity could be used to determine the acid concentration. The experiments further showed that (i) the RSD values of intra-day and inter-day runs were less than 1.59% and 3.76%, respectively, indicating precision and stability similar to capillary electrophoresis or HPLC; (ii) indicators with different pKa values had no obvious effect on the EABT, in contrast to their strong influence on the judgment of the equivalence point in classic titration; and (iii) a constant equivalence-point titration always existed in the EABT, rather than in the classic volumetric analysis. Additionally, the EABT could be put to good use for the determination of actual acid concentrations. The results achieved herein provide new general guidance for the development of classic volumetric analysis and element (e.g. nitrogen) content analysis in protein chemistry. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. Commodity-based Approach for Evaluating the Value of Freight Moving on Texas’ Roadway Network

    Science.gov (United States)

    2017-12-10

    The researchers took a commodity-based approach to evaluate the value of a list of selected commodities moved on the Texas freight network. This approach takes advantage of commodity-specific data sources and modeling processes. It provides a unique ...

  15. A Lagrange-Eulerian formulation of an axially moving beam based on the absolute nodal coordinate formulation

    Energy Technology Data Exchange (ETDEWEB)

    Pechstein, Astrid, E-mail: astrid.pechstein@jku.at [Johannes Kepler University Linz, Institute of Technical Mechanics (Austria); Gerstmayr, Johannes, E-mail: johannes.gerstmayr@accm.co.at [Austrian Center of Competence in Mechatronics (Austria)

    2013-10-15

    In the scope of this paper, a finite-element formulation for an axially moving beam is presented. The beam element is based on the absolute nodal coordinate formulation, where position and slope vectors are used as degrees of freedom instead of rotational parameters. The equations of motion for an axially moving beam are derived from generalized Lagrange equations in a Lagrange-Eulerian sense. This procedure yields equations which can be implemented as a straightforward augmentation to the standard equations of motion for a Bernoulli-Euler beam. Moreover, a contact model for frictional contact between an axially moving strip and rotating rolls is presented. To show the efficiency of the method, simulations of a belt drive are presented.

  16. Estimating the costs of induced abortion in Uganda: A model-based analysis

    Science.gov (United States)

    2011-01-01

    Background The demand for induced abortions in Uganda is high despite legal and moral proscriptions. Abortion seekers usually go to illegal, hidden clinics where procedures are performed in unhygienic environments by under-trained practitioners. These abortions, which are usually unsafe, lead to a high rate of severe complications and use of substantial, scarce healthcare resources. This study was performed to estimate the costs associated with induced abortions in Uganda. Methods A decision tree was developed to represent the consequences of induced abortion and estimate the costs of an average case. Data were obtained from a primary chart abstraction study, an on-going prospective study, and the published literature. Societal costs, direct medical costs, direct non-medical costs, indirect (productivity) costs, costs to patients, and costs to the government were estimated. Monte Carlo simulation was used to account for uncertainty. Results The average societal cost per induced abortion (95% credibility range) was $177 ($140-$223). This is equivalent to $64 million in annual national costs. Of this, the average direct medical cost was $65 ($49-86) and the average direct non-medical cost was $19 ($16-$23). The average indirect cost was $92 ($57-$139). Patients incurred $62 ($46-$83) on average while government incurred $14 ($10-$20) on average. Conclusion Induced abortions are associated with substantial costs in Uganda and patients incur the bulk of the healthcare costs. This reinforces the case made by other researchers--that efforts by the government to reduce unsafe abortions by increasing contraceptive coverage or providing safe, legal abortions are critical. PMID:22145859

  17. An Indoor Continuous Positioning Algorithm on the Move by Fusing Sensors and Wi-Fi on Smartphones.

    Science.gov (United States)

    Li, Huaiyu; Chen, Xiuwan; Jing, Guifei; Wang, Yuan; Cao, Yanfeng; Li, Fei; Zhang, Xinlong; Xiao, Han

    2015-12-11

    Wi-Fi indoor positioning algorithms experience large positioning errors and low stability when continuously positioning terminals that are on the move. This paper proposes a novel indoor continuous positioning algorithm for terminals on the move, fusing sensors and Wi-Fi on smartphones. The main innovative points include an improved Wi-Fi positioning algorithm and a novel positioning fusion algorithm named the Trust Chain Positioning Fusion (TCPF) algorithm. The improved Wi-Fi positioning algorithm was designed based on the properties of Wi-Fi signals on the move, which were found in a novel "quasi-dynamic" Wi-Fi signal experiment. The TCPF algorithm is proposed to realize the "process-level" fusion of Wi-Fi and Pedestrian Dead Reckoning (PDR) positioning, including three parts: trusted point determination, trust state and positioning fusion algorithm. An experiment was carried out for verification in a typical indoor environment; the average positioning error on the move is 1.36 m, a decrease of 28.8% compared to an existing algorithm. The results show that the proposed algorithm can effectively reduce the influence caused by unstable Wi-Fi signals, and improve the accuracy and stability of indoor continuous positioning on the move.

  18. Estimation of the proteomic cancer co-expression sub networks by using association estimators.

    Directory of Open Access Journals (Sweden)

    Cihat Erdoğan

    Full Text Available In this study, the association estimators, which have a significant influence on gene network inference methods and are used for determining molecular interactions, were examined within the co-expression network inference concept. By using proteomic data from five different cancer types, the hub genes/proteins within the disease-associated gene-gene/protein-protein interaction sub-networks were identified. Proteomic data from various cancer types was collected from The Cancer Proteome Atlas (TCPA). Nine correlation- and mutual information (MI-) based association estimators that are commonly used in the literature were compared in this study. As the gold standard to measure the association estimators' performance, a multi-layer data integration platform on gene-disease associations (DisGeNET) and the Molecular Signatures Database (MSigDB) were used. Fisher's exact test was used to evaluate the performance of the association estimators by comparing the created co-expression networks with the disease-associated pathways. It was observed that the MI-based estimators provided more successful results than the Pearson and Spearman correlation approaches, which are used for the estimation of biological networks in the weighted correlation network analysis (WGCNA) package. In correlation-based methods, the best average success rate for five cancer types was 60%, while in MI-based methods the average success ratio was 71% for the James-Stein Shrinkage (Shrink) and 64% for the Schurmann-Grassberger (SG) association estimators, respectively. Moreover, the hub genes and the inferred sub-networks are presented for the consideration of researchers and experimentalists.
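
    The difference between correlation and MI estimators is easy to demonstrate on a nonlinear toy relationship; the sketch below uses generic estimators from scipy and scikit-learn (a kNN-based MI estimator), not the nine estimators benchmarked in the study.

      import numpy as np
      from scipy.stats import pearsonr, spearmanr
      from sklearn.feature_selection import mutual_info_regression

      rng = np.random.default_rng(10)
      x = rng.normal(size=500)
      y = x ** 2 + 0.2 * rng.normal(size=500)       # nonlinear association

      # Correlation misses the symmetric dependence; MI detects it
      print(pearsonr(x, y)[0], spearmanr(x, y)[0])  # both near zero
      print(mutual_info_regression(x.reshape(-1, 1), y)[0])  # clearly positive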

  19. On critical cases in limit theory for stationary increments Lévy driven moving averages

    DEFF Research Database (Denmark)

    Basse-O'Connor, Andreas; Podolskij, Mark

    averages. The limit theory heavily depends on the interplay between the given order of the increments, the considered power, the Blumenthal-Getoor index of the driving pure jump Lévy process L and the behavior of the kernel function g at 0. In this work we will study the critical cases, which were...

  20. Rocket Based Combined Cycle Exchange Inlet Performance Estimation at Supersonic Speeds

    Science.gov (United States)

    Murzionak, Aliaksandr

    A method to estimate the performance of an exchange inlet for a Rocket Based Combined Cycle engine is developed. The method is intended for exchange inlet geometry optimization and as such should predict properties that can be used in the design process within a reasonable amount of time, so that multiple configurations can be evaluated. The method is based on a curve fit of the shocks developed around the major components of the inlet, using solutions for shocks around sharp cones and 2D estimates of the shocks around wedges with blunt leading edges. The total pressure drop across the estimated shocks as well as the mass flow rate through the exchange inlet are calculated. The estimates for a selected range of free-stream Mach numbers between 1.1 and 7 are compared against finite volume method simulations performed using available commercial software (Ansys-CFX). The total pressure difference between the two methods is within 10% for the tested Mach numbers of 5 and below, while for the Mach 7 test case the difference is 30%. The mass flow rate on average differs by less than 5% for all tested cases, with the maximum difference not exceeding 10%. The estimation method takes less than 3 seconds on a 3.0 GHz single-core processor to complete the calculations for a single flight condition, as opposed to over 5 days on 8 cores of a 2.4 GHz system for a 3D finite volume method simulation with a 1.5-million-element mesh. This makes the estimation method suitable for use with an exchange inlet geometry optimization algorithm.

  1. SEM based CARMA time series modeling for arbitrary N

    NARCIS (Netherlands)

    Oud, J.H.L.; Völkle, M.C.; Driver, C.C.

    2018-01-01

    This article explains in detail the state space specification and estimation of first and higher-order autoregressive moving-average models in continuous time (CARMA) in an extended structural equation modeling (SEM) context for N = 1 as well as N > 1. To illustrate the approach, simulations will be

  2. Canada’s 2010 Tax Competitiveness Ranking: Moving to the Average but Biased Against Services

    Directory of Open Access Journals (Sweden)

    Duanjie Chen

    2011-02-01

    Full Text Available For the first time since 1975 (the year Canada's marginal effective tax rates were first measured), Canada has become the most tax-competitive country among G-7 states with respect to taxation of capital investment. Even more remarkably, Canada accomplished this feat within a mere six years, having previously been the least tax-competitive G-7 member. Even in comparison to strongly growing emerging economies, Canada's 2010 marginal effective tax rate on capital is still above average. The planned reductions in federal and provincial corporate taxes by 2013 will reduce Canada's effective tax rate on new investments to 18.4 percent, below the Organization for Economic Co-operation and Development (OECD) 2010 average and close to the average of the 50 non-OECD countries studied. This remarkable change in Canada's tax competitiveness must be maintained in the coming years, as countries are continually reducing their business taxation despite the recent fiscal pressures arising from the 2008-9 downturn in the world economy. Many countries have forged ahead with significant reforms designed to increase tax competitiveness and improve tax neutrality, including Greece, Israel, Japan, New Zealand, Taiwan and the United Kingdom. The continuing bias in Canada's corporate income tax structure favouring manufacturing and processing business warrants close scrutiny. Measured by the difference in the marginal effective tax rate on capital between manufacturing and the broad range of service sectors, Canada has the greatest gap in tax burdens between manufacturing and services among OECD countries. Surprisingly, preferential tax treatment (such as fast write-offs and investment tax credits) favouring only manufacturing and processing activities has become the norm in Canada, although it does not exist in most developed economies.

  3. Error Analysis of Fast Moving Target Geo-location in Wide Area Surveillance Ground Moving Target Indication Mode

    Directory of Open Access Journals (Sweden)

    Zheng Shi-chao

    2013-12-01

    Full Text Available As an important mode in airborne radar systems, the Wide Area Surveillance Ground Moving Target Indication (WAS-GMTI) mode has the ability to monitor a large area in a short time, so that detected moving targets can be located quickly. However, in real environments, many factors introduce considerable errors into the location of moving targets. In this paper, a fast location method based on the characteristics of moving targets in WAS-GMTI mode is utilized. In order to improve the location performance, the factors that introduce location errors are analyzed and the moving targets are relocated. Finally, the analysis of those factors is shown to be reasonable by simulations and real-data experiments.

  4. View Estimation Based on Value System

    Science.gov (United States)

    Takahashi, Yasutake; Shimada, Kouki; Asada, Minoru

    Estimation of a caregiver's view is one of the most important capabilities for a child to understand the behavior demonstrated by the caregiver, that is, to infer the intention of the behavior and/or to learn the observed behavior efficiently. We hypothesize that the child develops this ability in the same way as behavior learning motivated by an intrinsic reward: he/she updates the model of his/her own estimated view during the behavior imitated from observation of the caregiver's demonstration, based on minimizing the estimation error of the reward during the behavior. From this view, this paper presents a method for acquiring such a capability based on a value system from which values can be obtained by reinforcement learning. The parameters of the view estimation are updated based on the temporal difference error (hereafter TD error: the estimation error of the state value), analogous to the way the parameters of the state value of the behavior are updated based on the TD error. Experiments with simple humanoid robots show the validity of the method, and the developmental process, parallel to young children's estimation of their own view during imitation of the observed behavior of the caregiver, is discussed.
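
    The shared update rule is the standard temporal-difference error; a generic sketch (the robots' actual view-model parameterization is not given in the record, so grad_view is a hypothetical gradient):

        import numpy as np

        alpha, gamma = 0.1, 0.95      # learning rate and discount (assumed)
        V = np.zeros(16)              # state values for a toy 16-state world
        theta = np.zeros(8)           # hypothetical view-estimation parameters

        def td_step(s, r, s_next, grad_view):
            # One TD(0) step: the same scalar error drives both the state
            # values and, per the paper's analogy, the view-model parameters.
            td_error = r + gamma * V[s_next] - V[s]
            V[s] += alpha * td_error
            theta[:] += alpha * td_error * grad_view
            return td_error

        td_step(0, 1.0, 1, grad_view=np.full(8, 0.1))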

  5. Sunshine-based estimation of global solar radiation on horizontal surface at Lake Van region (Turkey)

    International Nuclear Information System (INIS)

    Duzen, Hacer; Aydin, Harun

    2012-01-01

    Highlights: ► The global solar radiation at the Lake Van region is estimated. ► This study is unique for the Lake Van region. ► Solar radiation around Lake Van has the highest value in the east-southeast region. ► The annual average solar energy potential is obtained as 750–2458 kWh/m². ► The results can be used to estimate evaporation. - Abstract: In this study several sunshine-based regression models were evaluated to estimate the monthly average daily global solar radiation on a horizontal surface in the Lake Van region of Eastern Anatolia, Turkey, using data obtained from seven different meteorological stations. These models are derived from the Angström–Prescott linear regression model and its derivatives, such as the quadratic, cubic, logarithmic and exponential forms. The performance of these regression models was evaluated by comparing the calculated clearness index with the measured clearness index. Several statistical tests were used to check the validity and goodness of fit of the regression models in terms of the coefficient of determination, mean percent error, mean absolute percent error, mean biased error, mean absolute biased error, root mean square error and t-statistic. The results of all the regression models are within acceptable limits according to the statistical tests. However, the best performances are obtained by the cubic regression model for the Bitlis, Gevaş, Hakkari and Muş stations and by the quadratic regression model for the Malazgirt, Tatvan and Van stations. The spatial distributions of the monthly average daily global solar radiation around the Lake Van region were obtained by interpolation of the solar radiation data calculated from the best-fit models of the stations. The annual average solar energy potential for the Lake Van region is between 750 kWh/m² and 2485 kWh/m², with an annual average of 1610 kWh/m².
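
    The fitted family is easy to reproduce: regress the clearness index H/H0 on the relative sunshine duration S/S0 with polynomials of increasing order. A sketch with synthetic stand-ins for the monthly station data:

        import numpy as np

        # Relative sunshine duration and measured clearness index
        # (synthetic placeholders for the monthly station records).
        s = np.array([0.35, 0.45, 0.55, 0.65, 0.75, 0.85])
        k = np.array([0.42, 0.48, 0.54, 0.59, 0.63, 0.66])

        lin = np.polyfit(s, k, 1)   # H/H0 = a + b*(S/S0), Angström–Prescott
        quad = np.polyfit(s, k, 2)  # H/H0 = a + b*(S/S0) + c*(S/S0)^2

        rmse = lambda c: np.sqrt(np.mean((np.polyval(c, s) - k) ** 2))
        print(rmse(lin), rmse(quad))  # compare the model variants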

  6. Minimum Delay Moving Object Detection

    KAUST Repository

    Lao, Dong

    2017-11-09

    We present a general framework and method for detection of an object in a video based on apparent motion. The object moves relative to the background motion at some unknown time in the video, and the goal is to detect and segment the object as soon as it moves, in an online manner. Due to the unreliability of motion between frames, more than two frames are needed to reliably detect the object. Our method is designed to detect the object(s) with minimum delay, i.e., the number of frames after the object moves, while constraining the false alarms. Experiments on a new extensive dataset for moving object detection show that our method achieves less delay for all false alarm constraints than the existing state-of-the-art.

  7. Minimum Delay Moving Object Detection

    KAUST Repository

    Lao, Dong

    2017-01-08

    We present a general framework and method for detection of an object in a video based on apparent motion. The object moves relative to the background motion at some unknown time in the video, and the goal is to detect and segment the object as soon as it moves, in an online manner. Due to the unreliability of motion between frames, more than two frames are needed to reliably detect the object. Our method is designed to detect the object(s) with minimum delay, i.e., the number of frames after the object moves, while constraining the false alarms. Experiments on a new extensive dataset for moving object detection show that our method achieves less delay for all false alarm constraints than the existing state-of-the-art.

  8. Minimum Delay Moving Object Detection

    KAUST Repository

    Lao, Dong; Sundaramoorthi, Ganesh

    2017-01-01

    We present a general framework and method for detection of an object in a video based on apparent motion. The object moves relative to the background motion at some unknown time in the video, and the goal is to detect and segment the object as soon as it moves, in an online manner. Due to the unreliability of motion between frames, more than two frames are needed to reliably detect the object. Our method is designed to detect the object(s) with minimum delay, i.e., the number of frames after the object moves, while constraining the false alarms. Experiments on a new extensive dataset for moving object detection show that our method achieves less delay for all false alarm constraints than the existing state-of-the-art.

  9. Estimation of tool wear during CNC milling using neural network-based sensor fusion

    Science.gov (United States)

    Ghosh, N.; Ravi, Y. B.; Patra, A.; Mukhopadhyay, S.; Paul, S.; Mohanty, A. R.; Chattopadhyay, A. B.

    2007-01-01

    Cutting tool wear degrades product quality in manufacturing processes. Online monitoring of tool wear is therefore needed to prevent degradation in machining quality. Unfortunately, there is no direct way of measuring tool wear online. Therefore one has to adopt an indirect method wherein the tool wear is estimated from several sensors measuring related process variables. In this work, a neural network-based sensor fusion model has been developed for tool condition monitoring (TCM). Features extracted from a number of machining zone signals, namely cutting forces, spindle vibration, spindle current, and sound pressure level, have been fused to estimate the average flank wear of the main cutting edge. Novel strategies, such as signal-level segmentation for temporal registration, feature space filtering, outlier removal, and estimation space filtering, have been proposed. The proposed approach has been validated by both laboratory and industrial implementations.
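
    A minimal sketch of the fusion idea: features from the force, vibration, current and sound channels regressed onto flank wear with a small neural network (scikit-learn stands in for the authors' network, and the data are synthetic):

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        # Rows: machining passes; columns: features extracted from the force,
        # vibration, current and sound channels (synthetic placeholders).
        X = rng.normal(size=(200, 8))
        wear = 0.1 + 0.05 * X[:, 0] + 0.02 * X[:, 3] + rng.normal(0, 0.01, 200)

        model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
        model.fit(X[:150], wear[:150])            # train on early passes
        print(model.score(X[150:], wear[150:]))   # R^2 on held-out passes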

  10. UD-WCMA: An Energy Estimation and Forecast Scheme for Solar Powered Wireless Sensor Networks

    KAUST Repository

    Dehwah, Ahmad H.

    2017-04-11

    Energy estimation and forecasting play an important role in energy management for solar-powered wireless sensor networks (WSNs). In general, the energy in such networks is managed over a finite time horizon, based on input solar power forecasts, to enable continuous operation of the WSNs and achieve the sensing objectives while ensuring that no node runs out of energy. In this article, we propose a dynamic version of the weather-conditioned moving average technique (UD-WCMA) to estimate and predict the variations of the solar power in a wireless sensor network. The presented approach combines information from real-time measurement data and a set of stored profiles representing the energy patterns at the WSNs' location to update the prediction model. The UD-WCMA scheme is based on adaptive weighting parameters that depend on the weather changes, which makes it flexible compared with existing estimation schemes, without any precalibration. A performance analysis was carried out on real irradiance profiles to assess the UD-WCMA prediction accuracy. Comparative numerical tests against standard forecasting schemes (EWMA, WCMA, and Pro-Energy) show that the new algorithm outperforms them. Experimental validation demonstrates the usefulness of UD-WCMA on real-time low-power sensor nodes.
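
    A simplified weather-conditioned moving average in the spirit of WCMA, to show the structure that the dynamic UD-WCMA adapts; the fixed alpha and the GAP form below are assumptions, since the record does not give the exact update:

        import numpy as np

        def wcma_predict(history, today, alpha=0.5):
            # Predict solar power for the next slot from D stored daily
            # profiles (history: D x N) and today's partial profile.
            # Simplified WCMA-style estimate; UD-WCMA additionally adapts
            # alpha to the observed weather changes.
            n = len(today) - 1                    # current slot index
            mean_next = history[:, n + 1].mean()  # same-slot mean over past days
            # GAP: how today's conditions compare with the stored profiles.
            gap = today[n] / max(history[:, n].mean(), 1e-9)
            return alpha * today[n] + (1 - alpha) * gap * mean_next

        history = np.array([[2.0, 3.0, 4.0], [2.2, 3.1, 4.2], [1.8, 2.9, 3.9]])
        print(wcma_predict(history, today=np.array([1.0, 1.5])))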

  11. Improved frame-based estimation of head motion in PET brain imaging

    International Nuclear Information System (INIS)

    Mukherjee, J. M.; Lindsay, C.; King, M. A.; Licho, R.; Mukherjee, A.; Olivier, P.; Shao, L.

    2016-01-01

    Purpose: Head motion during PET brain imaging can cause significant degradation of image quality. Several authors have proposed ways to compensate for PET brain motion to restore image quality and improve quantitation. Head restraints can reduce movement but are unreliable; hence the need for alternative strategies such as data-driven motion estimation or external motion tracking. Herein, the authors present a data-driven motion estimation method using a preprocessing technique that allows the usage of very short duration frames, thus reducing the intraframe motion problem commonly observed in the multiple frame acquisition method. Methods: The list mode data for PET acquisition is uniformly divided into 5-s frames and images are reconstructed without attenuation correction. Interframe motion is estimated using a 3D multiresolution registration algorithm and subsequently compensated for. For this study, the authors used 8 PET brain studies that used F-18 FDG as the tracer and contained minor or no initial motion. After reconstruction and prior to motion estimation, known motion was introduced to each frame to simulate head motion during a PET acquisition. To investigate the trade-off in motion estimation and compensation with respect to frames of different lengths, the authors summed 5-s frames to produce 10- and 60-s frames. Summed images generated from the motion-compensated reconstructed frames were then compared to the original PET image reconstruction without motion compensation. Results: The authors found that their method is able to compensate for both gradual and step-like motions using frame times as short as 5 s with a spatial accuracy of 0.2 mm on average. Complex volunteer motion involving all six degrees of freedom was estimated with lower accuracy (0.3 mm on average) than the other types investigated. Preprocessing of 5-s images was necessary for successful image registration. Since their method utilizes nonattenuation-corrected frames, it is

  12. Improved frame-based estimation of head motion in PET brain imaging

    Energy Technology Data Exchange (ETDEWEB)

    Mukherjee, J. M., E-mail: joyeeta.mitra@umassmed.edu; Lindsay, C.; King, M. A.; Licho, R. [Department of Radiology, University of Massachusetts Medical School, Worcester, Massachusetts 01655 (United States); Mukherjee, A. [Aware, Inc., Bedford, Massachusetts 01730 (United States); Olivier, P. [Philips Medical Systems, Cleveland, Ohio 44143 (United States); Shao, L. [ViewRay, Oakwood Village, Ohio 44146 (United States)

    2016-05-15

    Purpose: Head motion during PET brain imaging can cause significant degradation of image quality. Several authors have proposed ways to compensate for PET brain motion to restore image quality and improve quantitation. Head restraints can reduce movement but are unreliable; hence the need for alternative strategies such as data-driven motion estimation or external motion tracking. Herein, the authors present a data-driven motion estimation method using a preprocessing technique that allows the usage of very short duration frames, thus reducing the intraframe motion problem commonly observed in the multiple frame acquisition method. Methods: The list mode data for PET acquisition is uniformly divided into 5-s frames and images are reconstructed without attenuation correction. Interframe motion is estimated using a 3D multiresolution registration algorithm and subsequently compensated for. For this study, the authors used 8 PET brain studies that used F-18 FDG as the tracer and contained minor or no initial motion. After reconstruction and prior to motion estimation, known motion was introduced to each frame to simulate head motion during a PET acquisition. To investigate the trade-off in motion estimation and compensation with respect to frames of different lengths, the authors summed 5-s frames to produce 10- and 60-s frames. Summed images generated from the motion-compensated reconstructed frames were then compared to the original PET image reconstruction without motion compensation. Results: The authors found that their method is able to compensate for both gradual and step-like motions using frame times as short as 5 s with a spatial accuracy of 0.2 mm on average. Complex volunteer motion involving all six degrees of freedom was estimated with lower accuracy (0.3 mm on average) than the other types investigated. Preprocessing of 5-s images was necessary for successful image registration. Since their method utilizes nonattenuation-corrected frames, it is

  13. Improving satellite-based post-fire evapotranspiration estimates in semi-arid regions

    Science.gov (United States)

    Poon, P.; Kinoshita, A. M.

    2017-12-01

    Climate change and anthropogenic factors contribute to the increased frequency, duration, and size of wildfires, which can alter ecosystem and hydrological processes. The loss of vegetation canopy and ground cover reduces interception and alters evapotranspiration (ET) dynamics in riparian areas, which can impact rainfall-runoff partitioning. Previous research evaluated the spatial and temporal trends of ET based on burn severity and observed an annual decrease of 120 mm on average for three years after fire. Building upon these results, this research focuses on the Coyote Fire in San Diego, California (USA), which burned a total of 76 km² in 2003, to calibrate and improve satellite-based ET estimates in semi-arid regions affected by wildfire. The current work utilizes satellite-based products and techniques such as the Google Earth Engine Application Programming Interface (API). Various ET models (e.g., the Operational Simplified Surface Energy Balance model (SSEBop)) are compared to the latent heat flux from two AmeriFlux eddy covariance towers, Sky Oaks Young (US-SO3) and Old Stand (US-SO2), from 2000 to 2015. The Old Stand tower has a low burn severity and the Young Stand tower has a moderate to high burn severity. Both towers are used to validate spatial ET estimates. Furthermore, variables and indices such as the Enhanced Vegetation Index (EVI), Normalized Difference Moisture Index (NDMI), and Normalized Burn Ratio (NBR) are utilized to evaluate satellite-based ET through a multivariate statistical analysis at both sites. This point-scale study will help improve ET estimates in spatially diverse regions. Results from this research will contribute to the development of a post-wildfire ET model for semi-arid regions. Accurate estimates of post-fire ET will provide a better representation of vegetation and hydrologic recovery, which can be used to improve hydrologic models and predictions.

  14. Convolution-based estimation of organ dose in tube current modulated CT

    Science.gov (United States)

    Tian, Xiaoyu; Segars, W. Paul; Dixon, Robert L.; Samei, Ehsan

    2016-05-01

    discrepancy between the estimated organ dose and the dose simulated using the TCM Monte Carlo program was quantified. We further compared the convolution-based organ dose estimation method with two other strategies that quantify the irradiation field differently. The proposed convolution-based estimation method showed good agreement with the organ dose simulated using the TCM Monte Carlo simulation. The average percentage error (normalized by CTDIvol) was generally within 10% across all organs and modulation profiles, except for organs located in the pelvic and shoulder regions. This study developed an improved method that accurately quantifies the irradiation field under TCM scans. The results suggest that organ dose could be estimated in real time both prospectively (with the localizer information only) and retrospectively (with acquired CT data).

  15. How robust are the estimated effects of air pollution on health? Accounting for model uncertainty using Bayesian model averaging.

    Science.gov (United States)

    Pannullo, Francesca; Lee, Duncan; Waclawski, Eugene; Leyland, Alastair H

    2016-08-01

    The long-term impact of air pollution on human health can be estimated from small-area ecological studies in which the health outcome is regressed against air pollution concentrations and other covariates, such as socio-economic deprivation. Socio-economic deprivation is multi-factorial and difficult to measure, and includes aspects of income, education, and housing, among others. However, these variables are potentially highly correlated, meaning one can either create an overall deprivation index or use the individual characteristics, which can result in a variety of pollution-health effect estimates. Other aspects of model choice may affect the pollution-health estimate, such as the pollution estimation method and the spatial autocorrelation model. Therefore, we propose a Bayesian model averaging approach to combine the results from multiple statistical models and produce a more robust representation of the overall pollution-health effect. We investigate the relationship between nitrogen dioxide concentrations and cardio-respiratory mortality in West Central Scotland between 2006 and 2012.
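
    The BMA combination itself has a simple closed form: a weighted average of model-specific effect estimates, with the between-model spread added to the variance. A sketch with hypothetical per-model estimates and BIC-based approximate weights:

        import numpy as np

        # Hypothetical per-model pollution-health effect estimates (log relative
        # risk), their variances, and each model's BIC (illustrative values).
        beta = np.array([0.012, 0.018, 0.009])
        var = np.array([0.004, 0.005, 0.003]) ** 2
        bic = np.array([1502.3, 1500.1, 1505.7])

        w = np.exp(-0.5 * (bic - bic.min()))
        w /= w.sum()                        # approximate posterior model probabilities

        beta_bma = np.sum(w * beta)         # model-averaged effect
        # Total variance = within-model + between-model components.
        var_bma = np.sum(w * (var + (beta - beta_bma) ** 2))
        print(beta_bma, np.sqrt(var_bma))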

  16. Observer-Based Human Knee Stiffness Estimation.

    Science.gov (United States)

    Misgeld, Berno J E; Luken, Markus; Riener, Robert; Leonhardt, Steffen

    2017-05-01

    We consider the problem of stiffness estimation for the human knee joint during motion in the sagittal plane. The new stiffness estimator uses a nonlinear reduced-order biomechanical model and a body sensor network (BSN). The developed model is based on a two-dimensional knee kinematics approach to calculate the angle-dependent lever arms and torques of the muscle-tendon complex. To minimize errors in the knee stiffness estimation procedure that result from model uncertainties, a nonlinear observer is developed. The observer uses the electromyogram (EMG) of the involved muscles as input signals and the segmental orientation as the output signal to correct the observer-internal states. Because of the dominant model nonlinearities and the nonsmoothness of the corresponding nonlinear functions, an unscented Kalman filter is designed to compute and update the observer feedback (Kalman) gain matrix. The observer-based stiffness estimation algorithm is subsequently evaluated in simulations and on a test bench specifically designed to provide robotic movement support for the human knee joint. In silico and experimental validation underline the good performance of the knee stiffness estimation, even in cases of knee stiffening due to antagonistic coactivation. We have shown the principal function of an observer-based approach to knee stiffness estimation that employs EMG signals and segmental orientation provided by our own IPANEMA BSN. The presented approach makes real-time, model-based estimation of knee stiffness with minimal instrumentation possible.

  17. Tree-level imputation techniques to estimate current plot-level attributes in the Pacific Northwest using paneled inventory data

    Science.gov (United States)

    Bianca Eskelson; Temesgen Hailemariam; Tara Barrett

    2009-01-01

    The Forest Inventory and Analysis program (FIA) of the US Forest Service conducts a nationwide annual inventory. One panel (20% or 10% of all plots in the eastern and western United States, respectively) is measured each year. The precision of the estimates for any given year from one panel is low, and the moving average (MA), which is considered to be the default...
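
    The MA referred to here simply averages the most recent panel estimates; a sketch with illustrative numbers (the weighted variant shown is one commonly discussed alternative, not necessarily the article's):

        import numpy as np

        # Annual plot-level estimates from five successive FIA panels
        # (illustrative values, e.g. volume in m^3/ha).
        panels = np.array([212.0, 220.5, 208.3, 231.9, 225.4])

        ma = panels.mean()                 # equal-weight moving average
        w = np.array([1, 1, 2, 3, 5.0])    # hypothetical recency weighting
        wma = np.average(panels, weights=w)
        print(ma, wma)                     # smoother than any single panel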

  18. Estimation of inhaled airborne particle number concentration by subway users in Seoul, Korea

    International Nuclear Information System (INIS)

    Kim, Minhae; Park, Sechan; Namgung, Hyeong-Gyu; Kwon, Soon-Bark

    2017-01-01

    Exposure to airborne particulate matter (PM) causes several diseases in the human body. Smaller particles, which have relatively large surface areas, are actually more harmful to the human body, since they can penetrate deeper parts of the lungs or become secondary pollutants by bonding with other atmospheric pollutants such as nitrogen oxides. The purpose of this study is to present the number of particles inhaled by subway users as a possible reference for analyses of the hazards to the human body arising from the inhalation of such PM. Two transfer stations in Seoul, Korea, which have the greatest number of users, were selected for this study. For 0.3–0.422 μm PM, the particle number concentration (PNC) was highest outdoors but decreased as the tester moved deeper underground. On the other hand, the PNC between 1 and 10 μm increased as the tester moved deeper underground and was high inside the subway train as well. An analysis of the particles to which subway users are actually exposed (the inhaled particle number), using the particle concentration at each measurement location, the average inhalation rate of an adult, and the average stay time at each location, showed that particles sized 0.01–0.422 μm are mostly inhaled from the outdoor air, whereas particles sized 1–10 μm are inhaled as the passengers move deeper underground. Based on these findings, we expect that the inhaled particle number of subway users can be used as reference data for evaluating the health hazards caused by PM inhalation. - Highlights: • Size-dependent aerosol number was measured along the path of subway users. • Particles less than 0.4 μm were inhaled mostly outdoors, less so deeper underground. • Coarse particles were inhaled significantly as users moved deeper underground. - We estimated the inhaled aerosol number concentration depending on particle size along the path of subway users.
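
    The inhaled-number calculation is a sum over locations of concentration times inhalation rate times stay time; a sketch with illustrative values (the adult inhalation rate and the stay times are assumptions):

        # Inhaled particle number: sum over locations of
        # PNC (#/cm^3) x inhalation rate (cm^3/min) x stay time (min).
        adult_rate = 1.3e4            # ~13 L/min resting adult, in cm^3/min (assumed)

        locations = [                 # (label, PNC #/cm^3, stay time min), illustrative
            ("outdoor", 1.2e4, 5.0),
            ("concourse", 8.0e3, 4.0),
            ("platform", 6.5e3, 6.0),
            ("train", 5.0e3, 20.0),
        ]

        inhaled = sum(pnc * adult_rate * t for _, pnc, t in locations)
        print(f"{inhaled:.2e} particles inhaled along the path")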

  19. Rearrangement moves on rooted phylogenetic networks.

    Science.gov (United States)

    Gambette, Philippe; van Iersel, Leo; Jones, Mark; Lafond, Manuel; Pardi, Fabio; Scornavacca, Celine

    2017-08-01

    Phylogenetic tree reconstruction is usually done by local search heuristics that explore the space of the possible tree topologies via simple rearrangements of their structure. Tree rearrangement heuristics have been used in combination with practically all optimization criteria in use, from maximum likelihood and parsimony to distance-based principles, and in a Bayesian context. Their basic components are rearrangement moves that specify all possible ways of generating alternative phylogenies from a given one, and whose fundamental property is to be able to transform, by repeated application, any phylogeny into any other phylogeny. Despite their long tradition in tree-based phylogenetics, very little research has gone into studying similar rearrangement operations for phylogenetic networks, that is, phylogenies explicitly representing scenarios that include reticulate events such as hybridization, horizontal gene transfer, population admixture, and recombination. To fill this gap, we propose "horizontal" moves that ensure that every network of a certain complexity can be reached from any other network of the same complexity, and "vertical" moves that ensure reachability between networks of different complexities. When applied to phylogenetic trees, our horizontal moves, named rNNI and rSPR, reduce to the best-known moves on rooted phylogenetic trees: nearest-neighbor interchange and rooted subtree pruning and regrafting. Besides a number of reachability results, separating the contributions of horizontal and vertical moves, we prove that rNNI moves are local versions of rSPR moves, and provide bounds on the sizes of the rNNI neighborhoods. The paper focuses on the most biologically meaningful versions of phylogenetic networks, where edges are oriented and reticulation events are clearly identified. Moreover, our rearrangement moves are robust to the fact that networks with higher complexity usually allow a better fit with the data. Our goal is to provide a solid basis for

  20. Rearrangement moves on rooted phylogenetic networks.

    Directory of Open Access Journals (Sweden)

    Philippe Gambette

    2017-08-01

    Full Text Available Phylogenetic tree reconstruction is usually done by local search heuristics that explore the space of the possible tree topologies via simple rearrangements of their structure. Tree rearrangement heuristics have been used in combination with practically all optimization criteria in use, from maximum likelihood and parsimony to distance-based principles, and in a Bayesian context. Their basic components are rearrangement moves that specify all possible ways of generating alternative phylogenies from a given one, and whose fundamental property is to be able to transform, by repeated application, any phylogeny into any other phylogeny. Despite their long tradition in tree-based phylogenetics, very little research has gone into studying similar rearrangement operations for phylogenetic networks, that is, phylogenies explicitly representing scenarios that include reticulate events such as hybridization, horizontal gene transfer, population admixture, and recombination. To fill this gap, we propose "horizontal" moves that ensure that every network of a certain complexity can be reached from any other network of the same complexity, and "vertical" moves that ensure reachability between networks of different complexities. When applied to phylogenetic trees, our horizontal moves, named rNNI and rSPR, reduce to the best-known moves on rooted phylogenetic trees: nearest-neighbor interchange and rooted subtree pruning and regrafting. Besides a number of reachability results, separating the contributions of horizontal and vertical moves, we prove that rNNI moves are local versions of rSPR moves, and provide bounds on the sizes of the rNNI neighborhoods. The paper focuses on the most biologically meaningful versions of phylogenetic networks, where edges are oriented and reticulation events are clearly identified. Moreover, our rearrangement moves are robust to the fact that networks with higher complexity usually allow a better fit with the data. Our goal is to provide

  1. Determination of hydrologic properties needed to calculate average linear velocity and travel time of ground water in the principal aquifer underlying the southeastern part of Salt Lake Valley, Utah

    Science.gov (United States)

    Freethey, G.W.; Spangler, L.E.; Monheiser, W.J.

    1994-01-01

    be underlain by similar deposits. Delineation of the zones was based on the depositional history of the area and the distribution of sediments shown on a surficial geologic map. Water levels in wells were measured twice in 1990: during late winter, when ground-water withdrawals were the least and water levels the highest, and again in late summer, when ground-water withdrawals were the greatest and water levels the lowest. These water levels were used to construct potentiometric-contour maps and subsequently to determine the variability of the slope in the potentiometric surface in the area. Values for the three properties, derived from the described sources of information, were used to produce a map showing the general distribution of average linear velocity of ground water moving through the principal aquifer of the study area. Derived velocities ranged from 0.06 to 144 feet per day, with a median of about 3 feet per day. Values were slightly faster for late summer 1990 than for late winter 1990, mainly because increased withdrawal of water during the summer created slightly steeper hydraulic-head gradients between the recharge area near the mountain front and the well fields farther to the west. The fastest average linear-velocity values were located at the mouth of Little Cottonwood Canyon and south of Dry Creek near the mountain front, where the hydraulic conductivity was estimated to be the largest because the drillers described the sediments as predominantly clean and coarse grained. Both of these areas also had steep slopes in the potentiometric surface. Other areas where average linear velocity was fast included small areas near pumping wells where the slope in the potentiometric surface was locally steepened. No apparent relation between average linear velocity and porosity could be seen in the mapped distributions of these two properties. Calculation of travel time along a flow line to a well in the southwestern part of the study area during the sum
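
    The average linear velocity combines the three mapped properties as v = K·i/n, the Darcy flux (hydraulic conductivity times potentiometric gradient) divided by effective porosity; a worked example with illustrative values:

        # Average linear velocity of ground water: v = K * i / n.
        K = 100.0   # hydraulic conductivity, ft/day (illustrative)
        i = 0.009   # slope of the potentiometric surface (illustrative)
        n = 0.30    # effective porosity (illustrative)

        v = K * i / n
        print(f"{v:.1f} ft/day")  # ~3 ft/day, the median reported above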

  2. Inverse methods for estimating primary input signals from time-averaged isotope profiles

    Science.gov (United States)

    Passey, Benjamin H.; Cerling, Thure E.; Schuster, Gerard T.; Robinson, Todd F.; Roeder, Beverly L.; Krueger, Stephen K.

    2005-08-01

    Mammalian teeth are invaluable archives of ancient seasonality because they record along their growth axes an isotopic record of temporal change in environment, plant diet, and animal behavior. A major problem with the intra-tooth method is that intra-tooth isotope profiles can be extremely time-averaged compared to the actual pattern of isotopic variation experienced by the animal during tooth formation. This time-averaging is a result of the temporal and spatial characteristics of amelogenesis (tooth enamel formation), and also results from laboratory sampling. This paper develops and evaluates an inverse method for reconstructing original input signals from time-averaged intra-tooth isotope profiles. The method requires that the temporal and spatial patterns of amelogenesis are known for the specific tooth and uses a minimum length solution of the linear system Am = d, where d is the measured isotopic profile, A is a matrix describing temporal and spatial averaging during amelogenesis and sampling, and m is the input vector that is sought. Accuracy is dependent on several factors, including the total measurement error and the isotopic structure of the measured profile. The method is shown to accurately reconstruct known input signals for synthetic tooth enamel profiles and the known input signal for a rabbit that underwent controlled dietary changes. Application to carbon isotope profiles of modern hippopotamus canines reveals detailed dietary histories that are not apparent from the measured data alone. Inverse methods show promise as an effective means of dealing with the time-averaging problem in studies of intra-tooth isotopic variation.
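
    The minimum length solution of Am = d is the minimum-norm least-squares solution, obtainable with the pseudoinverse; a sketch with a toy three-sample averaging kernel standing in for the amelogenesis matrix A:

        import numpy as np

        # Toy averaging kernel: each measured sample is a running mean of
        # three consecutive input values (stand-in for the true matrix A).
        n = 12
        A = np.zeros((n - 2, n))
        for r in range(n - 2):
            A[r, r:r + 3] = 1.0 / 3.0

        true_m = np.sin(np.linspace(0, 2 * np.pi, n))  # hypothetical input signal
        d = A @ true_m                                 # time-averaged "measured" profile

        m_est = np.linalg.pinv(A) @ d                  # minimum-length solution
        print(np.round(m_est - true_m, 2))             # recovery error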

  3. Line-averaging measurement methods to estimate the gap in the CO2 balance closure - possibilities, challenges, and uncertainties

    Science.gov (United States)

    Ziemann, Astrid; Starke, Manuela; Schütze, Claudia

    2017-11-01

    An imbalance of surface energy fluxes using the eddy covariance (EC) method is observed in global measurement networks, although all necessary corrections and conversions are applied to the raw data. Mainly during nighttime, advection can occur, resulting in a closing gap that consequently should also affect the CO2 balances. There is a crucial need for representative concentration and wind data to measure advective fluxes. Ground-based remote sensing techniques are an ideal tool, as they provide spatially representative CO2 concentrations together with wind components within the same voxel structure. For this purpose, the presented SQuAd (Spatially resolved Quantification of the Advection influence on the balance closure of greenhouse gases) approach applies an integrated combination of acoustic and optical remote sensing. The innovative combination of acoustic travel-time tomography (A-TOM) and open-path Fourier-transform infrared spectroscopy (OP-FTIR) will enable an upscaling and enhancement of EC measurements. OP-FTIR instrumentation offers the significant advantage of real-time simultaneous measurements of line-averaged concentrations of CO2 and other greenhouse gases (GHGs). A-TOM is a scalable method to remotely resolve 3-D wind and temperature fields. The paper gives an overview of the proposed SQuAd approach and first results of experimental tests at the FLUXNET site Grillenburg in Germany. Preliminary results of the comprehensive experiments reveal a mean nighttime horizontal advection of CO2 of about 10 µmol m-2 s-1 estimated by the spatially integrating and representative SQuAd method. Additionally, uncertainties in determining CO2 concentrations using passive OP-FTIR and wind speed using A-TOM are systematically quantified. The maximum uncertainty for the CO2 concentration, due to environmental parameters, instrumental characteristics, and the retrieval procedure, was estimated at a total of approximately 30% for a single

  4. Estimated average annual rate of change of CD4(+) T-cell counts in patients on combination antiretroviral therapy

    DEFF Research Database (Denmark)

    Mocroft, Amanda; Phillips, Andrew N; Ledergerber, Bruno

    2010-01-01

    BACKGROUND: Patients receiving combination antiretroviral therapy (cART) might continue treatment with a virologically failing regimen. We sought to identify annual change in CD4(+) T-cell count according to levels of viraemia in patients on cART. METHODS: A total of 111,371 CD4(+) T-cell counts and viral load measurements in 8,227 patients were analysed. Annual change in CD4(+) T-cell numbers was estimated using mixed models. RESULTS: After adjustment, the estimated average annual change in CD4(+) T-cell count significantly increased when viral load was below 500 copies/ml (95% confidence interval [CI] 26.6-34.3 cells/mm(3)), was stable when viral load was 500-9,999 copies/ml (3.1 cells/mm(3), 95% CI -5.3-11.5) and decreased when viral load was >=10,000 copies/ml (-14.8 cells/mm(3), 95% CI -25.1 to -4.5). Patients taking a boosted protease inhibitor (PI) regimen had more positive annual CD4(+) T-cell

  5. Study on moving target detection to passive radar based on FM broadcast transmitter

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Target detection by a noncooperative illuminator is a topic of general interest in the electronic warfare field. First, direct-path interference (DPI) suppression, the bottleneck technique in moving target detection with a noncooperative frequency modulation (FM) broadcast transmitter, is analyzed in this article. Second, a space-time-frequency domain synthetic solution to this problem is introduced: adaptive nulling array processing is considered in the space domain, DPI cancellation based on the adaptive fractional delay interpolation (AFDI) technique is used in the time domain, and long-time coherent integration is utilized in the frequency domain. Finally, an experimental system is planned using an FM broadcast transmitter as a noncooperative illuminator. Simulation results on real collected data show that the proposed method achieves better moving target detection performance.

  6. Sedimentological time-averaging and 14C dating of marine shells

    International Nuclear Information System (INIS)

    Fujiwara, Osamu; Kamataki, Takanobu; Masuda, Fujio

    2004-01-01

    The radiocarbon dating of sediments using marine shells involves uncertainties due to the mixed ages of the shells, mainly attributed to depositional processes, also known as 'sedimentological time-averaging'. This stratigraphic disorder can be removed by selecting well-preserved indigenous shells based on ecological and taphonomic criteria. These criteria for sample selection are recommended for accurate estimation of the depositional age of geologic strata from 14C dating of marine shells

  7. Development of deformable moving lung phantom to simulate respiratory motion in radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jina [Department of Biomedical Engineering, College of Medicine, The Catholic University of Korea, Seoul 137-701 (Korea, Republic of); Lee, Youngkyu [Department of Radiation Oncology, Seoul St. Mary's Hospital, College of Medicine, The Catholic University of Korea, 137-701, Seoul (Korea, Republic of); Shin, Hunjoo [Department of Radiation Oncology, Incheon St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Incheon 403-720 (Korea, Republic of); Ji, Sanghoon [Field Robot R&D Group, Korea Institute of Industrial Technology, Ansan 426-910 (Korea, Republic of); Park, Sungkwang [Department of Radiation Oncology, Busan Paik Hospital, Inje University, Busan 614-735 (Korea, Republic of); Kim, Jinyoung [Department of Radiation Oncology, Haeundae Paik Hospital, Inje University, Busan 612-896 (Korea, Republic of); Jang, Hongseok [Department of Radiation Oncology, Seoul St. Mary's Hospital, College of Medicine, The Catholic University of Korea, 137-701, Seoul (Korea, Republic of); Kang, Youngnam, E-mail: ynkang33@gmail.com [Department of Radiation Oncology, Seoul St. Mary's Hospital, College of Medicine, The Catholic University of Korea, 137-701, Seoul (Korea, Republic of)

    2016-07-01

    Radiation treatment requires high accuracy to protect healthy organs while destroying the tumor. However, tumors located near the diaphragm move constantly during treatment. Respiration-gated radiotherapy has significant potential to improve the irradiation of tumor sites affected by respiratory motion, such as lung and liver tumors. To measure and minimize the effects of respiratory motion, a realistic deformable phantom is required for use as a gold standard. The purpose of this study was to develop a deformable moving lung (DML) phantom and study its characteristics, such as simulation fidelity, tissue equivalence, and rate of deformation. The rate of change of lung volume, target deformation, and respiratory signals were accurately measured using the phantom. The measured volume difference was 31%, which closely corresponds to the average difference in human respiration, and the target movement was −30 to +32 mm. The measured signals accurately reproduced human respiratory signals. This DML phantom would be useful for evaluating deformable image registration and for respiration-gated radiotherapy. This study shows that the developed DML phantom can accurately simulate a patient's respiratory signal and acts as a deformable 4-dimensional simulation of a patient's lung with sufficient volume change.

  8. Development of deformable moving lung phantom to simulate respiratory motion in radiotherapy

    International Nuclear Information System (INIS)

    Kim, Jina; Lee, Youngkyu; Shin, Hunjoo; Ji, Sanghoon; Park, Sungkwang; Kim, Jinyoung; Jang, Hongseok; Kang, Youngnam

    2016-01-01

    Radiation treatment requires high accuracy to protect healthy organs while destroying the tumor. However, tumors located near the diaphragm move constantly during treatment. Respiration-gated radiotherapy has significant potential to improve the irradiation of tumor sites affected by respiratory motion, such as lung and liver tumors. To measure and minimize the effects of respiratory motion, a realistic deformable phantom is required for use as a gold standard. The purpose of this study was to develop a deformable moving lung (DML) phantom and study its characteristics, such as simulation fidelity, tissue equivalence, and rate of deformation. The rate of change of lung volume, target deformation, and respiratory signals were accurately measured using the phantom. The measured volume difference was 31%, which closely corresponds to the average difference in human respiration, and the target movement was −30 to +32 mm. The measured signals accurately reproduced human respiratory signals. This DML phantom would be useful for evaluating deformable image registration and for respiration-gated radiotherapy. This study shows that the developed DML phantom can accurately simulate a patient's respiratory signal and acts as a deformable 4-dimensional simulation of a patient's lung with sufficient volume change.

  9. Extrapolated HPGe efficiency estimates based on a single calibration measurement

    International Nuclear Information System (INIS)

    Winn, W.G.

    1994-01-01

    Gamma spectroscopists often must analyze samples with geometries for which their detectors are not calibrated. The effort to experimentally recalibrate a detector for a new geometry can be quite time consuming, causing delay in reporting useful results. Such concerns have motivated development of a method for extrapolating HPGe efficiency estimates from an existing single measured efficiency. Overall, the method provides useful preliminary results for analyses that do not require exceptional accuracy, while reliably bracketing the credible range. The estimated efficiency ε for a uniform sample in a geometry with volume V is extrapolated from the measured ε0 of the base sample of volume V0. Assuming all samples are centered atop the detector for maximum efficiency, ε decreases monotonically as V increases about V0, and vice versa. Extrapolation of high and low efficiency estimates εh and εL provides an average estimate of ε = (1/2)[εh + εL] ± (1/2)[εh − εL] (general), where the uncertainty Δε = (1/2)[εh − εL] brackets the limits for the maximum possible error. Both εh and εL diverge from ε0 as V deviates from V0, causing Δε to increase accordingly. The above concepts guided the development of both conservative and refined estimates for ε

  10. The First Result of Relative Positioning and Velocity Estimation Based on CAPS

    Science.gov (United States)

    Zhao, Jiaojiao; Ge, Jian; Wang, Liang; Wang, Ningbo; Zhou, Kai; Yuan, Hong

    2018-01-01

    The Chinese Area Positioning System (CAPS) is a new positioning system developed by the Chinese Academy of Sciences based on the communication satellites in geosynchronous orbit. The CAPS has been regarded as a pilot system to test the new technology for the design, construction and update of the BeiDou Navigation Satellite System (BDS). The system structure of CAPS, including the space, ground control station and user segments, is almost like the traditional Global Navigation Satellite Systems (GNSSs), but with the clock on the ground, the navigation signal in C waveband, and different principles of operation. The major difference is that the CAPS navigation signal is first generated at the ground control station, before being transmitted to the satellite in orbit and finally forwarded by the communication satellite transponder to the user. This design moves the clock from the satellite in orbit to the ground. The clock error can therefore be easily controlled and mitigated to improve the positioning accuracy. This paper will present the performance of CAPS-based relative positioning and velocity estimation as assessed in Beijing, China. The numerical results show that, (1) the accuracies of relative positioning, using only code measurements, are 1.25 and 1.8 m in the horizontal and vertical components, respectively; (2) meanwhile, they are about 2.83 and 3.15 cm in static mode and 6.31 and 10.78 cm in kinematic mode, respectively, when using the carrier-phase measurements with ambiguities fixed; and (3) the accuracy of the velocity estimation is about 0.04 and 0.11 m/s in static and kinematic modes, respectively. These results indicate the potential application of CAPS for high-precision positioning and velocity estimation and the availability of a new navigation mode based on communication satellites. PMID:29757204

  11. Gaze Estimation for Off-Angle Iris Recognition Based on the Biometric Eye Model

    Energy Technology Data Exchange (ETDEWEB)

    Karakaya, Mahmut [ORNL; Barstow, Del R [ORNL; Santos-Villalobos, Hector J [ORNL; Thompson, Joseph W [ORNL; Bolme, David S [ORNL; Boehnen, Chris Bensing [ORNL

    2013-01-01

    Iris recognition is among the highest-accuracy biometrics. However, its accuracy relies on controlled, high-quality capture data and is negatively affected by several factors such as angle, occlusion, and dilation. Non-ideal iris recognition is a new research focus in biometrics. In this paper, we present a gaze estimation method designed for use in an off-angle iris recognition framework based on the ANONYMIZED biometric eye model. Gaze estimation is an important prerequisite step for correcting off-angle iris images. To achieve an accurate frontal reconstruction of an off-angle iris image, we first need to estimate the eye gaze direction from the elliptical features of the iris image. Typically, additional information such as well-controlled light sources, head-mounted equipment, and multiple cameras is not available. Our approach utilizes only the iris and pupil boundary segmentation, allowing it to be applicable to all iris capture hardware. We compare the boundaries with a look-up table generated using our biologically inspired biometric eye model and find the closest feature point in the look-up table to estimate the gaze. Based on results from real images, the proposed method shows effective gaze estimation accuracy for our biometric eye model, with an average error of approximately 3.5 degrees over a 50-degree range.

  12. PDV-based estimation of ejecta particles' mass-velocity function from shock-loaded tin experiment

    Science.gov (United States)

    Franzkowiak, J.-E.; Prudhomme, G.; Mercier, P.; Lauriot, S.; Dubreuil, E.; Berthe, L.

    2018-03-01

    A metallic tin plate with a given surface finish of wavelength λ ≃ 60 μm and amplitude h ≃ 8 μm is explosively driven by an electro-detonator with a shock-induced breakout pressure P_SB = 28 GPa (unsupported). The resulting dynamic fragmentation process, the so-called "micro-jetting," is the creation of high-speed jets of matter moving faster than the bulk metallic surface. Hydrodynamic instabilities result in the fragmentation of these jets into micron-sized metallic particles constituting a self-expanding cloud of droplets, whose areal mass, velocity, and particle size distributions are unknown. A lithium-niobate piezoelectric sensor measured areal mass, and Photonic Doppler Velocimetry (PDV) was used to obtain a time-velocity spectrogram of the cloud. In this article, we present both experimental mass and velocity results, and we relate the integrated areal mass of the cloud to the PDV power spectral density under the assumption of a power-law particle size distribution. Two models of PDV spectrograms are described. The first accounts for the speckle statistics of the spectrum, and the second describes an average spectrum for which speckle fluctuations are removed. Finally, the second model is used for a maximum likelihood estimation of the cloud's parameters from PDV data. The integrated areal mass estimated from PDV data is found to agree well with the piezoelectric results. We highlight the relevance of analyzing PDV data and correlating different diagnostics to retrieve the physical properties of ejecta particles.

  13. Deblurring of class-averaged images in single-particle electron microscopy

    International Nuclear Information System (INIS)

    Park, Wooram; Chirikjian, Gregory S; Madden, Dean R; Rockmore, Daniel N

    2010-01-01

    This paper proposes a method for the deblurring of class-averaged images in single-particle electron microscopy (EM). Since EM images of biological samples are very noisy, nominally identical projection images are often grouped, aligned, and averaged in order to cancel or reduce the background noise. However, the noise in the individual EM images generates errors in the alignment process, which creates an inherent limit on the accuracy of the resulting class averages. This inaccurate class average due to alignment errors can be viewed as the result of a convolution of an underlying clear image with a blurring function. In this work, we develop a deconvolution method that gives an estimate of the underlying clear image from a blurred class-averaged image using precomputed statistics of misalignment. Since this convolution is over the group of rigid-body motions of the plane, SE(2), we use the Fourier transform for SE(2) in order to convert the convolution into a matrix multiplication in the corresponding Fourier space. For practical implementation we use a Hermite-function-based image modeling technique, because Hermite expansions enable lossless Cartesian-polar coordinate conversion using the Laguerre–Fourier expansions, and both the Hermite and Laguerre–Fourier expansions retain their structures under the Fourier transform. Based on these mathematical properties, we can obtain the deconvolution of the blurred class average using simple matrix multiplication. Tests of the proposed deconvolution method using synthetic and experimental EM images confirm the performance of our method.

  14. Effect of moving distance of temperature distribution on thermal ratchetting behavior of a FBR reactor vessel

    International Nuclear Information System (INIS)

    Ueta, Masahiro; Douzaki, Kouji; Takahashi, Yukio; Ooka, Yuji; Osaki, Toshio; Take, Kouji.

    1992-01-01

    In the design of an FBR reactor vessel, it must be considered that thermal ratchetting might be caused by a moving axial thermal gradient, in other words, a moving sodium level. The behavior and mechanism of this ratchetting have largely been clarified by studies over the past several years, and a simplified evaluation method for ratchetting behavior has been proposed. However, testing results have shown the evaluation method to be excessively conservative. In this paper, the effect of the moving distance of axial temperature distributions, one of the main factors to be considered in a precise estimation of ratchetting behavior, is studied by inelastic analyses. Based on this study, it is proposed to introduce a strain-reducing factor, taking account of residual stresses in the region of the moving axial temperature distribution, into the original evaluation method. The new method has been validated by comparing its predictions with results from both testing and the original method. (author)

  15. Kinetic parametric estimation in animal PET molecular imaging based on artificial immune network

    International Nuclear Information System (INIS)

    Chen Yuting; Ding Hong; Lu Rui; Huang Hongbo; Liu Li

    2011-01-01

    Objective: To develop an accurate, reliable method for estimating tracer kinetic parameters in animal PET modeling, based on an artificial immune network and without the need for initialization. Methods: The hepatic and left ventricular time activity curves (TACs) were obtained by drawing ROIs on liver tissue and the left ventricle in dynamic 18F-FDG PET images of small mice. Meanwhile, the blood TAC was obtained by sampling the tail vein blood at different time points after injection. The artificial immune network for parametric optimization of pharmacokinetics (PKAIN) was adapted to estimate the model parameters, and the metabolic rate of glucose (Ki) was calculated. Results: TACs of the liver, left ventricle and tail vein blood were obtained. Based on the artificial immune network, Ki in 3 mice was estimated as 0.0024, 0.0417 and 0.0047, respectively. The average weighted residual sum of squares of the output model generated by PKAIN was less than 0.0745, with a maximum standard deviation of 0.0084, which indicates that the proposed PKAIN method can provide accurate and reliable parameter estimation. Conclusion: The PKAIN method can provide accurate and reliable tracer kinetic modeling in animal PET imaging without the need for initialization of model parameters. (authors)

  16. Reduction of Averaging Time for Evaluation of Human Exposure to Radiofrequency Electromagnetic Fields from Cellular Base Stations

    Science.gov (United States)

    Kim, Byung Chan; Park, Seong-Ook

    In order to determine exposure compliance with the electromagnetic fields from a base station's antenna in the far-field region, we should calculate the spatially averaged field value in a defined space. This value is calculated from the measured values obtained at several points within the restricted space. According to the ICNIRP guidelines, at each point in the space, the reference levels are averaged over any 6 min (from 100 kHz to 10 GHz) for the general public. Therefore, the more points we use, the longer the measurement time becomes. For practical application, it is very advantageous to spend less time on measurement. In this paper, we analyzed the difference between average values over 6 min and over shorter periods, and compared it with the standard uncertainty for measurement drift. Based on the standard deviation from the 6-min averaging value, the proposed minimum averaging time is 1 min.
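
    The comparison at issue can be mimicked by averaging the same field record over 6-min and shorter windows and comparing the spread of the short-window averages against the drift uncertainty; a sketch on synthetic samples (the sampling rate and field statistics are assumptions):

        import numpy as np

        rng = np.random.default_rng(1)
        fs = 1.0                                      # one field sample per second (assumed)
        field = 1.0 + 0.05 * rng.normal(size=3600)    # synthetic E-field samples, V/m

        def window_average(x, seconds):
            # Mean over consecutive non-overlapping windows of the given length.
            n = int(seconds * fs)
            return x[: len(x) // n * n].reshape(-1, n).mean(axis=1)

        avg6 = window_average(field, 360)   # reference 6-min averages
        avg1 = window_average(field, 60)    # candidate 1-min averages
        # Spread of the short-window averages, to be weighed against the
        # standard uncertainty for measurement drift.
        print(avg1.std(), avg6.std())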

  17. Crowd density estimation based on convolutional neural networks with mixed pooling

    Science.gov (United States)

    Zhang, Li; Zheng, Hong; Zhang, Ying; Zhang, Dongming

    2017-09-01

    Crowd density estimation is an important topic in the fields of machine learning and video surveillance. Existing methods do not provide satisfactory classification accuracy; moreover, they have difficulty adapting to complex scenes. Therefore, we propose a method based on convolutional neural networks (CNNs). The proposed method improves the performance of crowd density estimation in two key ways. First, we propose a feature pooling method named mixed pooling to regularize the CNNs. It replaces deterministic pooling operations with a learned combination of conventional max pooling and average pooling. Second, we present a classification strategy in which an image is divided into two cells that are categorized separately. The proposed approach was evaluated on three datasets: two ground truth image sequences and the University of California, San Diego, anomaly detection dataset. The results demonstrate that the proposed approach is more effective and easier to apply than other methods.
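
    Mixed pooling as described combines the two conventional operators; a sketch over non-overlapping windows (in the CNN the mixing weight would be learned, here it is fixed for illustration):

        import numpy as np

        def mixed_pool(x, k=2, lam=0.6):
            # Mixed pooling over non-overlapping k x k windows:
            # lam * max-pool + (1 - lam) * average-pool.
            h, w = x.shape[0] // k * k, x.shape[1] // k * k
            blocks = x[:h, :w].reshape(h // k, k, w // k, k).swapaxes(1, 2)
            return lam * blocks.max(axis=(2, 3)) + (1 - lam) * blocks.mean(axis=(2, 3))

        fmap = np.arange(16, dtype=float).reshape(4, 4)   # toy feature map
        print(mixed_pool(fmap))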

  18. Strong solutions for the Navier-Stokes equations on bounded and unbounded domains with a moving boundary

    Directory of Open Access Journals (Sweden)

    Juergen Saal

    2007-02-01

    Full Text Available It is proved, under mild regularity assumptions on the data, that the Navier-Stokes equations in bounded and unbounded noncylindrical regions admit a unique local-in-time strong solution. The result is based on maximal regularity estimates for the Stokes equations in spatial regions with a moving boundary, obtained in [16], and the contraction mapping principle.

  19. Moving force identification based on modified preconditioned conjugate gradient method

    Science.gov (United States)

    Chen, Zhen; Chan, Tommy H. T.; Nguyen, Andy

    2018-06-01

    This paper develops a modified preconditioned conjugate gradient (M-PCG) method for moving force identification (MFI) by improving the conjugate gradient (CG) and preconditioned conjugate gradient (PCG) methods with a modified Gram-Schmidt algorithm. The method aims to obtain more accurate and more efficient identification results from the responses of a bridge deck caused by passing vehicles, a problem known to be sensitive to the ill-posedness of the inverse problem. A simply supported beam model with biaxial time-varying forces is used to generate numerical simulations with various analysis scenarios to assess the effectiveness of the method. Evaluation results show that the regularization matrix L and the number of iterations j strongly influence the identification accuracy and noise immunity of M-PCG. Compared with its conventional counterparts, SVD embedded in the time domain method (TDM) and the standard form of CG, M-PCG with a proper regularization matrix has many advantages, such as better adaptability and greater robustness to ill-posed problems. More importantly, it is shown that the average optimal number of iterations of M-PCG can be reduced by more than 70% compared with PCG, which makes M-PCG a preferred choice for field MFI applications.
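
    A plain conjugate-gradient solver for the Tikhonov-regularized normal equations shows where the regularization matrix L and the iteration count enter; M-PCG additionally preconditions and re-orthogonalizes the search directions with modified Gram-Schmidt, which this sketch omits:

        import numpy as np

        def cg_regularized(A, b, L, lam=1e-2, iters=50):
            # Conjugate gradients on (A^T A + lam * L^T L) f = A^T b;
            # a plain-CG sketch of the regularized identification step.
            K = A.T @ A + lam * (L.T @ L)
            rhs = A.T @ b
            f = np.zeros(K.shape[0])
            r = rhs - K @ f
            p = r.copy()
            for _ in range(iters):
                Kp = K @ p
                alpha = (r @ r) / (p @ Kp)
                f += alpha * p
                r_new = r - alpha * Kp
                beta = (r_new @ r_new) / (r @ r)
                p = r_new + beta * p
                r = r_new
            return f

        # Toy use: recover a force history from noisy responses.
        rng = np.random.default_rng(0)
        A = rng.normal(size=(120, 40))           # stand-in response matrix
        f_true = np.sin(np.linspace(0, 3, 40))   # hypothetical moving force
        b = A @ f_true + 0.05 * rng.normal(size=120)
        f_hat = cg_regularized(A, b, L=np.eye(40))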

  20. 7 CFR 5.5 - Publication of season average, calendar year, and parity price data.

    Science.gov (United States)

    2010-01-01

    ... cases where preliminary marketing season average price data are used in estimating the adjusted base... Statistics Service, after consultation with the Agricultural Marketing Service, the Farm Service Agency, and... parity price data. 5.5 Section 5.5 Agriculture Office of the Secretary of Agriculture DETERMINATION OF...

  1. Estimation of a multivariate mean under model selection uncertainty

    Directory of Open Access Journals (Sweden)

    Georges Nguefack-Tsague

    2014-05-01

    Full Text Available Model selection uncertainty occurs when we select a model based on one data set and subsequently apply it for statistical inference, because the "correct" model is not selected with certainty. When the selection and inference are based on the same dataset, additional problems arise due to the correlation of the two stages (selection and inference). In this paper model selection uncertainty is considered and model averaging is proposed. The proposal is related to the James-Stein theory of estimating more than three parameters from independent normal observations. We suggest that a model averaging scheme taking into account the selection procedure could be more appropriate than model selection alone. Some properties of this model averaging estimator are investigated; in particular, we show using Stein's results that it is a minimax estimator and can outperform Stein-type estimators.
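
    A concrete, if simpler, instance of the idea (an information-criterion weighting sketch; it is not the authors' James-Stein-based construction, and the numbers are made up) replaces "keep only the selected model's estimate" with a weighted combination over all candidate models:

    ```python
    import numpy as np

    def akaike_weights(aic):
        """Turn AIC values of candidate models into model-averaging weights."""
        d = np.asarray(aic) - np.min(aic)
        w = np.exp(-0.5 * d)
        return w / w.sum()

    # estimates of the same mean parameter from three candidate models (assumed)
    estimates = np.array([1.02, 0.95, 1.10])
    aic = np.array([210.3, 211.1, 214.8])

    w = akaike_weights(aic)
    print("weights:", np.round(w, 3))
    print("model-averaged estimate:", float(w @ estimates))
    ```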

  2. An energy estimation framework for event-based methods in Non-Intrusive Load Monitoring

    International Nuclear Information System (INIS)

    Giri, Suman; Bergés, Mario

    2015-01-01

    Highlights: • Energy estimation in NILM has not yet accounted for the complexity of appliance models. • We present a data-driven framework for appliance modeling in supervised NILM. • We test the framework on 3 houses and report average accuracies of 5.9–22.4%. • Appliance models facilitate the estimation of energy consumed by the appliance. - Abstract: Non-Intrusive Load Monitoring (NILM) is a set of techniques used to estimate the electricity consumed by individual appliances in a building from measurements of the total electrical consumption. Most commonly, NILM works by first attributing any significant change in the total power consumption (also known as an event) to a specific load and subsequently using these attributions (i.e., the labels for the events) to estimate energy for each load. For this last step, most published work in the field makes simplifying assumptions to make the problem more tractable. In this paper, we present a framework for creating appliance models based on classification labels and aggregate power measurements that can help to relax many of these assumptions. Our framework automatically builds models for appliances to perform energy estimation. The model relies on feature extraction, clustering via affinity propagation, perturbation of extracted states to ensure that they mimic appliance behavior, creation of finite state models, correction of any classification errors that might violate the model, and estimation of energy based on the corrected labels. We evaluate our framework on 3 houses from standard datasets in the field and show that the framework can learn data-driven models based on event labels and use them to estimate energy with lower error margins (e.g., 1.1–42.3%) than when using the heuristic models used by others.

  3. An indirect adaptive neural control of a visual-based quadrotor robot for pursuing a moving target.

    Science.gov (United States)

    Shirzadeh, Masoud; Amirkhani, Abdollah; Jalali, Aliakbar; Mosavi, Mohammad R

    2015-11-01

    This paper aims to use a visual-based control mechanism to control a quadrotor-type aerial robot in pursuit of a moving target. The nonlinear nature of a quadrotor, on the one hand, and the difficulty of obtaining an exact model for it, on the other, constitute two serious challenges in designing a controller for this UAV. A potential solution for such problems is the use of intelligent control methods, such as those that rely on artificial neural networks and similar approaches. In addition to the two problems mentioned, another problem that emerges due to the moving nature of the target is the uncertainty in the target image. By employing an artificial neural network with a Radial Basis Function (RBF), an indirect adaptive neural controller has been designed for a quadrotor robot in pursuit of a moving target. The simulation results for different paths show that the quadrotor tracks the moving target efficiently. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.

  4. A Statistical Mechanics Approach to Approximate Analytical Bootstrap Averages

    DEFF Research Database (Denmark)

    Malzahn, Dorthe; Opper, Manfred

    2003-01-01

    We apply the replica method of Statistical Physics combined with a variational method to the approximate analytical computation of bootstrap averages for estimating the generalization error. We demonstrate our approach on regression with Gaussian processes and compare our results with averages...

  5. PDV-based estimation of high-speed ejecta particles density from shock-loaded tin plate

    Science.gov (United States)

    Franzkowiak, Jean-Eloi; Prudhomme, Gabriel; Mercier, Patrick; Lauriot, Séverine; Dubreuil, Estelle; Berthe, Laurent

    2017-06-01

    A machine-grooved metallic tin surface is explosively driven by a detonator with a shock-induced pressure of 25 GPa. The resulting dynamic fragmentation process, called micro-jetting, is the creation of high-speed jets of matter moving faster than the bulk metallic surface. The subsequent fragmentation into micron-sized metallic particles generates a self-expanding cloud of droplets, whose areal mass, velocity and size distributions are unknown. A lithium-niobate (LN) piezoelectric pin measured the areal mass, and Photonic Doppler Velocimetry (PDV) was employed to obtain a time-velocity spectrogram of the cloud. We present both experimental mass and velocity results and relate the integrated areal mass of the cloud to the PDV power spectral density under the assumption of a power-law distribution for particle sizes. A model of PDV spectrograms is described, for which speckle fluctuations are averaged out. Finally, we use our model for a Maximum Likelihood Estimation of the cloud's parameters from PDV data. The integrated areal mass deduced from the PDV analysis is in good agreement with the piezoelectric results. We underline the relevance of analyzing PDV data and correlating different diagnostics to retrieve the macro-physical properties of ejecta particles.

  6. Anomaly Detection and Life Pattern Estimation for the Elderly Based on Categorization of Accumulated Data

    Science.gov (United States)

    Mori, Taketoshi; Ishino, Takahito; Noguchi, Hiroshi; Shimosaka, Masamichi; Sato, Tomomasa

    2011-06-01

    We propose a life pattern estimation method and an anomaly detection method for elderly people living alone. In our observation system for such people, we deploy pyroelectric sensors in the house and measure the person's activities continuously in order to grasp the person's life pattern. The data are transferred successively to the operation center and displayed precisely to the nurses in the center, who then decide whether the data indicate an anomaly. In the system, people whose life features resemble each other are categorized into the same group. Anomalies that occurred in the past are shared within the group and utilized in the anomaly detection algorithm. This algorithm is based on an "anomaly score" computed from the activeness of the person, which is approximately proportional to the frequency of the sensor response in a minute. The "anomaly score" is calculated from the difference between the present activeness and its long-term average in the past. Thus, the score is positive if the present activeness is higher than the past average, and negative if it is lower. If the score exceeds a certain threshold, an anomaly event has occurred. Moreover, we developed an activity estimation algorithm that estimates the residents' basic activities, such as rising and going out, and shows them to the nurses together with the residents' "anomaly score". By combining these two pieces of information, the nurses can understand the residents' health conditions.
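
    The scoring rule described above reduces to a difference between the current activeness and its long-term average; a minimal sketch (window length, threshold and the synthetic data are assumptions, not values from the paper):

    ```python
    import numpy as np

    def anomaly_score(counts, long_term_days=30):
        """Today's mean activeness minus the long-term average of past days.

        counts: per-minute sensor response counts, one row per day.
        Positive = more active than usual, negative = less active.
        """
        activeness = counts.mean(axis=1)  # mean responses per minute, per day
        baseline = activeness[:-1][-long_term_days:].mean()
        return activeness[-1] - baseline

    rng = np.random.default_rng(2)
    history = rng.poisson(3.0, size=(31, 1440))  # 31 days of synthetic counts
    history[-1] = rng.poisson(0.5, size=1440)    # today: unusually inactive

    score = anomaly_score(history)
    THRESHOLD = 1.0  # assumed alert threshold on |score|
    print(f"anomaly score: {score:.2f}")
    print("anomaly!" if abs(score) > THRESHOLD else "normal")
    ```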

  7. Moving horizon estimation for assimilating H-SAF remote sensing data into the HBV hydrological model

    Science.gov (United States)

    Montero, Rodolfo Alvarado; Schwanenberg, Dirk; Krahe, Peter; Lisniak, Dmytro; Sensoy, Aynur; Sorman, A. Arda; Akkol, Bulut

    2016-06-01

    Remote sensing information has developed extensively over the past few years, including spatially distributed data for hydrological applications at high resolution. The implementation of these products in operational flow forecasting systems is still an active field of research, wherein data assimilation plays a vital role in the improvement of the initial conditions of streamflow forecasts. We present a novel implementation of a variational method based on Moving Horizon Estimation (MHE), applied to the conceptual rainfall-runoff model HBV, to simultaneously assimilate remotely sensed snow covered area (SCA), snow water equivalent (SWE), soil moisture (SM) and in situ measurements of streamflow data using large assimilation windows of up to one year. This innovative application of the MHE approach allows precipitation, temperature, soil moisture, and the upper and lower zone water storages of the conceptual model to be updated simultaneously within the assimilation window, without an explicit formulation of error covariance matrices, and it enables a highly flexible formulation of distance metrics for the agreement of simulated and observed variables. The framework is tested in two data-dense sites in Germany and one data-sparse environment in Turkey. Results show a potential improvement of the lead-time performance of streamflow forecasts by using perfect time series of state variables generated by the simulation of the conceptual rainfall-runoff model itself. The framework is also tested using new operational data products from the Satellite Application Facility on Support to Operational Hydrology and Water Management (H-SAF) of EUMETSAT. This study is the first application of H-SAF products to hydrological forecasting systems and it verifies their added value. Results from assimilating H-SAF observations lead to a slight reduction of the streamflow forecast skill in all three cases compared to the assimilation of streamflow data only. On the other hand

  8. Temperature-based estimation of global solar radiation using soft computing methodologies

    Science.gov (United States)

    Mohammadi, Kasra; Shamshirband, Shahaboddin; Danesh, Amir Seyed; Abdullah, Mohd Shahidan; Zamani, Mazdak

    2016-07-01

    Precise knowledge of solar radiation is essential in different technological and scientific applications of solar energy. Temperature-based estimation of global solar radiation is appealing owing to the broad availability of measured air temperatures. In this study, the potentials of soft computing techniques are evaluated to estimate daily horizontal global solar radiation (DHGSR) from measured maximum, minimum, and average air temperatures (Tmax, Tmin, and Tavg) in an Iranian city. For this purpose, a comparative evaluation between three methodologies of adaptive neuro-fuzzy inference system (ANFIS), radial basis function support vector regression (SVR-rbf), and polynomial basis function support vector regression (SVR-poly) is performed. Five combinations of Tmax, Tmin, and Tavg serve as inputs to develop the ANFIS, SVR-rbf, and SVR-poly models. The attained results show that all ANFIS, SVR-rbf, and SVR-poly models provide favorable accuracy. For all techniques, the higher accuracies are achieved by models (5) using Tmax − Tmin and Tmax as inputs. According to the statistical results, SVR-rbf outperforms SVR-poly and ANFIS. For SVR-rbf (5), the mean absolute bias error, root mean square error, and correlation coefficient are 1.1931 MJ/m2, 2.0716 MJ/m2, and 0.9380, respectively. The results confirm that SVR-rbf can be used efficiently to estimate DHGSR from air temperatures.
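
    The general approach is easy to reproduce with scikit-learn; the sketch below fits an RBF-kernel support vector regression on the inputs of model (5), Tmax − Tmin and Tmax. The synthetic data and the hyperparameters are assumptions, not the authors' tuned model or measurements.

    ```python
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVR

    rng = np.random.default_rng(3)
    n = 365
    t_min = 10 + 8 * rng.standard_normal(n)
    t_max = t_min + 5 + 3 * rng.random(n)
    # toy daily radiation loosely tied to the diurnal temperature range
    dhgsr = 12 + 0.9 * (t_max - t_min) + 0.3 * t_max + rng.standard_normal(n)

    X = np.column_stack([t_max - t_min, t_max])  # inputs of "model (5)"
    svr = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, gamma="scale"))
    svr.fit(X[:300], dhgsr[:300])

    pred = svr.predict(X[300:])
    rmse = np.sqrt(np.mean((pred - dhgsr[300:]) ** 2))
    print(f"RMSE on held-out days: {rmse:.3f} MJ/m2")
    ```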

  9. A Synthetic Algorithm for Tracking a Moving Object in a Multiple-Dynamic Obstacles Environment Based on Kinematically Planar Redundant Manipulators

    Directory of Open Access Journals (Sweden)

    Hongzhe Jin

    2017-01-01

    Full Text Available This paper presents a synthetic algorithm for tracking a moving object in a multiple-dynamic-obstacle environment based on kinematically planar manipulators. By observing the motions of the object and obstacles, a spline filter combined with polynomial fitting is utilized to predict their moving paths over a future time period. Several feasible paths for the manipulator in Cartesian space can be planned according to the predicted moving paths and the defined feasibility criterion. The shortest of these feasible paths is selected as the optimized path, and the real-time path along it is then planned for the manipulator to track the moving object in real time. To improve the convergence rate of tracking, a virtual controller based on a PD controller is designed to adaptively adjust the real-time path. In the process of tracking, the null space of the inverse kinematics and the local rotation coordinate method (LRCM) are utilized for the arms and the end-effector, respectively, to avoid obstacles. Finally, the moving object is tracked by updating the joint angles of the manipulator iteratively in real time. Simulation results show that the proposed algorithm is feasible for tracking a moving object in a multiple-dynamic-obstacle environment.

  10. Extended Kalman filter-based methods for pose estimation using visual, inertial and magnetic sensors: comparative analysis and performance evaluation.

    Science.gov (United States)

    Ligorio, Gabriele; Sabatini, Angelo Maria

    2013-02-04

    In this paper, measurements from a monocular vision system are fused with inertial/magnetic measurements from an Inertial Measurement Unit (IMU) rigidly connected to the camera. Two Extended Kalman filters (EKFs) were developed to estimate the pose of the IMU/camera sensor moving relative to a rigid scene (ego-motion), based on a set of fiducials. The two filters were identical in the state equation and the measurement equations of the inertial/magnetic sensors. The DLT-based EKF exploited visual estimates of the ego-motion using a variant of the Direct Linear Transformation (DLT) method; the error-driven EKF exploited pseudo-measurements based on the projection errors from measured two-dimensional point features to the corresponding three-dimensional fiducials. The two filters were analyzed off-line in different experimental conditions and compared to a purely IMU-based EKF used for estimating the orientation of the IMU/camera sensor. The DLT-based EKF was more accurate than the error-driven EKF, less robust against loss of visual features, and equivalent in terms of computational complexity. Orientation root mean square errors (RMSEs) of 1° (1.5°) and position RMSEs of 3.5 mm (10 mm) were achieved in our experiments by the DLT-based EKF (error-driven EKF); by contrast, orientation RMSEs of 1.6° were achieved by the purely IMU-based EKF.

  11. Using groundwater levels to estimate recharge

    Science.gov (United States)

    Healy, R.W.; Cook, P.G.

    2002-01-01

    Accurate estimation of groundwater recharge is extremely important for proper management of groundwater systems. Many different approaches exist for estimating recharge. This paper presents a review of methods that are based on groundwater-level data. The water-table fluctuation method may be the most widely used technique for estimating recharge; it requires knowledge of specific yield and of changes in water levels over time. Advantages of this approach include its simplicity and its insensitivity to the mechanism by which water moves through the unsaturated zone. Uncertainty in estimates generated by this method relates to the limited accuracy with which specific yield can be determined and to the extent to which the assumptions inherent in the method are valid. Other methods that use water levels (mostly based on the Darcy equation) are also described, and the theory underlying the methods is explained. Examples from the literature are used to illustrate applications of the different methods.
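
    The water-table fluctuation method mentioned above amounts to a one-line calculation, R = Sy · Δh/Δt; a minimal worked example with assumed numbers:

    ```python
    # Water-table fluctuation method: recharge = specific yield * water-level
    # rise / time interval. All values below are assumed for illustration.
    specific_yield = 0.15        # dimensionless
    water_level_rise_m = 0.40    # rise attributed to a recharge event
    interval_days = 10.0

    recharge_m_per_day = specific_yield * water_level_rise_m / interval_days
    print(f"recharge: {recharge_m_per_day * 1000:.1f} mm/day")  # 6.0 mm/day
    ```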

  12. Thyroid doses in Belarus resulting from the Chernobyl accident: comparison of the estimates based on direct thyroid measurements and on measurements of 131I in milk

    International Nuclear Information System (INIS)

    Shinkarev, Sergey; Gavrilin, Yury; Khrouch, Valery; Savkin, Mikhail; Bouville, Andre; Luckyanov, Nicholas

    2008-01-01

    A substantial increase in childhood cancer cases observed in Belarus, Ukraine and Russia after the Chernobyl accident has been associated with thyroid exposure to radioiodines following the accident. A large number of direct thyroid measurements (i.e. measurements of the exposure rate near the thyroid of the subject) were conducted in Belarus during the few weeks after the accident. Individual thyroid doses based on the results of the direct thyroid measurements were estimated for about 126,000 Belarusian residents, and settlement-average thyroid doses for adults were calculated for 426 contaminated settlements in Gomel and Mogilev Oblasts. Another set of settlement-average thyroid doses for adults was estimated based on the results of activity measurements in milk samples for 28 settlements (with not less than 2 spectrometric measurements) and 155 settlements (with not less than 5 total beta-activity measurements) in Gomel and Mogilev Oblasts. Concentrations of 131I in milk were derived from these measurements. In the estimation of this set of thyroid doses, it was assumed that adults consumed 0.5 L d−1 of locally produced milk. The two sets of dose estimates were compared for 47 settlements for which a dose estimate based on thyroid measurements and a dose estimate based on either spectrometric or radiometric milk data were simultaneously available. The settlement-average thyroid doses based on milk activity measurements were higher than those based on direct thyroid measurements by a factor of 1.8 for total beta-activity measurements (30 settlements compared) and by a factor of 2.4 for spectrometric measurements (17 settlements). This systematic difference can be explained by overestimation of the milk consumption rate used in the calculation of the milk-based thyroid doses and/or by the application of individual countermeasures by people. (author)

  13. Optimization of Moving Coil Actuators for Digital Displacement Machines

    DEFF Research Database (Denmark)

    Nørgård, Christian; Bech, Michael Møller; Roemer, Daniel Beck

    2016-01-01

    This paper focuses on deriving an optimal moving coil actuator design, used as the force-producing element in hydraulic on/off valves for Digital Displacement machines. Different moving coil actuator geometry topologies (permanent magnet placement and magnetization direction) are optimized for actuating annular seat valves in a digital displacement machine. The optimization objectives are to minimize the actuator power, the valve flow losses and the height of the actuator. Evaluation of the objective function involves static finite element simulation and simulation of an entire operation … designs requires approximately 20 W on average and may be realized in 20 mm × Ø 22.5 mm (height × diameter) for a 20 kW pressure chamber. The optimization is carried out using the multi-objective Generalized Differential Evolution optimization algorithm GDE3, which successfully handles constrained multi-objective

  14. Assessment of Antarctic Ice-Sheet Mass Balance Estimates: 1992 - 2009

    Science.gov (United States)

    Zwally, H. Jay; Giovinetto, Mario B.

    2011-01-01

    Published mass balance estimates for the Antarctic Ice Sheet (AIS) lie between approximately +50 and -250 Gt/year for 1992 to 2009, which spans a range equivalent to 15% of the annual mass input and 0.8 mm/year Sea Level Equivalent (SLE). Two estimates from radar-altimeter measurements of elevation change by European Remote-sensing Satellites (ERS) (+28 and -31 Gt/year) lie in the upper part, whereas estimates from the Input-minus-Output Method (IOM) and the Gravity Recovery and Climate Experiment (GRACE) lie in the lower part (-40 to -246 Gt/year). We compare the various estimates, discuss the methodology used, and critically assess the results. Although recent reports of large and accelerating rates of mass loss from GRACE-based studies cite agreement with IOM results, our evaluation does not support that conclusion. We find that the extrapolation used in the published IOM estimates for the 15% of the periphery for which discharge velocities are not observed gives twice the rate of discharge per unit of associated ice-sheet area as the 85% in faster-moving parts. Our calculations show that the published extrapolation overestimates the ice discharge by 282 Gt/year compared to our assumption that the slower-moving areas have 70% as much discharge per area as the faster-moving parts. Also, published data on the time series of discharge velocities and accumulation/precipitation do not support mass output increases or input decreases with time, respectively. Our modified IOM estimate, using the 70% discharge assumption and substituting input from a field-data compilation for input from an atmospheric model over 6% of the area, gives a loss of only 13 Gt/year (versus 136 Gt/year) for the period around 2000. Two ERS-based estimates, our modified IOM, and a GRACE-based estimate for observations within 1992 to 2005 lie in a narrowed range of +27 to -40 Gt/year, which is about 3% of the annual mass input and only 0.2 mm/year SLE. Our preferred estimate for 1992-2001 is -47 Gt

  15. Accurate phenotyping: Reconciling approaches through Bayesian model averaging.

    Directory of Open Access Journals (Sweden)

    Carla Chia-Ming Chen

    Full Text Available Genetic research into complex diseases is frequently hindered by a lack of clear biomarkers for phenotype ascertainment. Phenotypes for such diseases are often identified on the basis of clinically defined criteria; however, such criteria may not be suitable for understanding the genetic composition of the diseases. Various statistical approaches have been proposed for phenotype definition; however, our previous studies have shown that differences in phenotypes estimated using different approaches have a substantial impact on subsequent analyses. Instead of obtaining results based upon a single model, we propose a new method, using Bayesian model averaging to overcome problems associated with phenotype definition. Although Bayesian model averaging has been used in other fields of research, this is the first study that uses Bayesian model averaging to reconcile phenotypes obtained using multiple models. We illustrate the new method by applying it to simulated genetic and phenotypic data for Kofendred personality disorder, an imaginary disease with several sub-types. Two separate statistical methods were used to identify clusters of individuals with distinct phenotypes: latent class analysis and grade of membership. Bayesian model averaging was then used to combine the two clusterings for the purpose of subsequent linkage analyses. We found that causative genetic loci for the disease produced higher LOD scores using model averaging than under either individual model separately. We attribute this improvement to consolidation of the cores of the phenotype clusters identified using each individual method.

  16. An Indoor Continuous Positioning Algorithm on the Move by Fusing Sensors and Wi-Fi on Smartphones

    Directory of Open Access Journals (Sweden)

    Huaiyu Li

    2015-12-01

    Full Text Available Wi-Fi indoor positioning algorithms experience large positioning error and low stability when continuously positioning terminals that are on the move. This paper proposes a novel indoor continuous positioning algorithm for terminals on the move, fusing sensors and Wi-Fi on smartphones. The main innovations are an improved Wi-Fi positioning algorithm and a novel positioning fusion algorithm named the Trust Chain Positioning Fusion (TCPF) algorithm. The improved Wi-Fi positioning algorithm was designed based on the properties of Wi-Fi signals on the move, which were found in a novel "quasi-dynamic" Wi-Fi signal experiment. The TCPF algorithm is proposed to realize the "process-level" fusion of Wi-Fi and Pedestrian Dead Reckoning (PDR) positioning, and includes three parts: trusted point determination, trust state, and the positioning fusion algorithm. An experiment was carried out for verification in a typical indoor environment, and the average positioning error on the move is 1.36 m, a decrease of 28.8% compared to an existing algorithm. The results show that the proposed algorithm can effectively reduce the influence of unstable Wi-Fi signals and improve the accuracy and stability of indoor continuous positioning on the move.

  17. Co-estimation of state-of-charge, capacity and resistance for lithium-ion batteries based on a high-fidelity electrochemical model

    International Nuclear Information System (INIS)

    Zheng, Linfeng; Zhang, Lei; Zhu, Jianguo; Wang, Guoxiu; Jiang, Jiuchun

    2016-01-01

    Highlights: • The numerical solution for an electrochemical model is presented. • Trinal PI observers are used to concurrently estimate SOC, capacity and resistance. • An iteration-approaching method is incorporated to enhance estimation performance. • The robustness against aging and temperature variations is experimentally verified. - Abstract: Lithium-ion batteries have been widely used as enabling energy storage in many industrial fields. Accurate modeling and state estimation play fundamental roles in ensuring the safe, reliable and efficient operation of lithium-ion battery systems. A physics-based electrochemical model (EM) is highly desirable for its inherent ability to push batteries to operate at their physical limits. For state-of-charge (SOC) estimation, continuous capacity fade and resistance deterioration make erroneous estimation results more likely. In this paper, trinal proportional-integral (PI) observers with a reduced physics-based EM are proposed to simultaneously estimate SOC, capacity and resistance for lithium-ion batteries. Firstly, a numerical solution for the employed model is derived. PI observers are then developed to realize the co-estimation of battery SOC, capacity and resistance. The moving-window ampere-hour counting technique and the iteration-approaching method are also incorporated to improve the estimation accuracy. The robustness of the proposed approach against erroneous initial values, different battery cell aging levels and ambient temperatures is systematically evaluated, and the experimental results verify the effectiveness of the proposed method.

  18. Effects of sampling conditions on DNA-based estimates of American black bear abundance

    Science.gov (United States)

    Laufenberg, Jared S.; Van Manen, Frank T.; Clark, Joseph D.

    2013-01-01

    DNA-based capture-mark-recapture techniques are commonly used to estimate American black bear (Ursus americanus) population abundance (N). Although the technique is well established, many questions remain regarding study design. In particular, relationships among N, capture probability of heterogeneity mixtures A and B (pA and pB, respectively, or p, collectively), the proportion of each mixture (π), number of capture occasions (k), and probability of obtaining reliable estimates of N are not fully understood. We investigated these relationships using 1) an empirical dataset of DNA samples for which true N was unknown and 2) simulated datasets with known properties that represented a broader array of sampling conditions. For the empirical data analysis, we used the full closed population with heterogeneity data type in Program MARK to estimate N for a black bear population in Great Smoky Mountains National Park, Tennessee. We systematically reduced the number of those samples used in the analysis to evaluate the effect that changes in capture probabilities may have on parameter estimates. Model-averaged N for females and males were 161 (95% CI = 114–272) and 100 (95% CI = 74–167), respectively (pooled N = 261, 95% CI = 192–419), and the average weekly p was 0.09 for females and 0.12 for males. When we reduced the number of samples of the empirical data, support for heterogeneity models decreased. For the simulation analysis, we generated capture data with individual heterogeneity covering a range of sampling conditions commonly encountered in DNA-based capture-mark-recapture studies and examined the relationships between those conditions and accuracy (i.e., probability of obtaining an estimated N that is within 20% of true N), coverage (i.e., probability that 95% confidence interval includes true N), and precision (i.e., probability of obtaining a coefficient of variation ≤20%) of estimates using logistic regression. The capture probability

  19. Call Arrival Rate Prediction and Blocking Probability Estimation for Infrastructure based Mobile Cognitive Radio Personal Area Network

    Directory of Open Access Journals (Sweden)

    Neeta Nathani

    2017-08-01

    Full Text Available Cognitive Radio usage has been estimated as a non-emergency service with low-volume traffic. The present work proposes an infrastructure-based Cognitive Radio network and the probability of success of CR traffic in a licensed band. The Cognitive Radio nodes form clusters, whose nodes communicate on the Industrial, Scientific and Medical band using an IPv6 over Low-Power Wireless Personal Area Network based protocol from sensor to Gateway Cluster Head. The Cognitive Radio Media Access Control protocol for Gateway to Cognitive Radio Base Station communication uses vacant channels of the licensed band. Standalone secondary users of the Cognitive Radio Network are considered as a Gateway with one user. The Gateway handles multi-channel multi-radio communication with the Base Station. Cognitive Radio Network operators define various traffic data accumulation counters at the Base Station for storing signal strength, Carrier-to-Interference and Noise Ratio, and other parameters, and record channel occupied/vacant status. Research done so far has used the hour as the polling interval, which is too long for parameters like holding time that are expressed in minutes; hence the channel vacant/occupied status time can only be calculated probabilistically. In the present work, an infrastructure-based architecture is proposed which polls channel status every minute, in contrast to hourly polling of data. The Gateways of the Cognitive Radio Network monitor the status of each Primary User periodically inside their working range and inform the Cognitive Radio Base Station for preparation of a minute-wise database. For simulation, the occupancy data for all primary user channels were pulled at one-minute intervals from a live mobile network. Hourly traffic data and minute-wise holding times were analyzed to optimize the parameters of a Seasonal Autoregressive Integrated Moving Average prediction model. The blocking probability of an incoming Cognitive Radio call has been
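
    The blocking probability of an incoming call given a number of vacant channels is classically computed with the Erlang B formula; a small sketch using its standard recursion (the offered load and channel counts are assumptions, and the paper's traffic model may differ):

    ```python
    def erlang_b(channels: int, offered_load: float) -> float:
        """Erlang B blocking probability via the numerically stable recursion."""
        b = 1.0
        for n in range(1, channels + 1):
            b = offered_load * b / (n + offered_load * b)
        return b

    # e.g. a predicted arrival rate of 2 calls/min and a mean holding time of
    # 3 min give an offered load of 6 Erlangs (assumed numbers)
    offered = 2.0 * 3.0
    for c in (6, 8, 10):
        print(f"{c} vacant channels: blocking probability = {erlang_b(c, offered):.3f}")
    ```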

  20. Dynamic Response of a Beam Resting on a Nonlinear Foundation to a Moving Load: Coiflet-Based Solution

    Directory of Open Access Journals (Sweden)

    Piotr Koziol

    2012-01-01

    Full Text Available This paper presents a new semi-analytical solution for the Timoshenko beam subjected to a moving load in the case of a nonlinear medium underneath. A finite series of distributed moving loads harmonically varying in time is considered as a representation of a moving train. The solution for vibrations is obtained by using the Adomian decomposition combined with the Fourier transform and a wavelet-based procedure for its computation. The adopted approximating method uses wavelet filters of Coiflet type, which have proved a very effective tool for vibration analysis in a few earlier papers. The developed approach provides solutions for both the transverse displacement and the angular rotation of the beam, which allows parametric analysis of the investigated dynamic system to be conducted in an efficient manner. The aim of this article is to present an effective method of approximation for the analysis of complex dynamic nonlinear models related to moving load problems.

  1. Comparison of wintertime CO to NOx ratios to MOVES and MOBILE6.2 on-road emissions inventories

    Science.gov (United States)

    Wallace, H. W.; Jobson, B. T.; Erickson, M. H.; McCoskey, J. K.; VanReken, T. M.; Lamb, B. K.; Vaughan, J. K.; Hardy, R. J.; Cole, J. L.; Strachan, S. M.; Zhang, W.

    2012-12-01

    The CO-to-NOx molar emission ratios from the US EPA vehicle emissions models MOVES and MOBILE6.2 were compared to urban wintertime measurements of CO and NOx. Measurements of CO, NOx, and volatile organic compounds were made at a regional air monitoring site in Boise, Idaho for 2 months from December 2008 to January 2009. The site is impacted by roadway emissions from nearby busy urban arterial roads and a highway. The measured CO-to-NOx ratio for morning rush hour periods was 4.2 ± 0.6. The average CO-to-NOx ratio during weekdays between the hours of 08:00 and 18:00, when vehicle miles travelled were highest, was 5.2 ± 0.5. For this time period, MOVES yields an average hourly CO-to-NOx ratio of 9.1, compared to 20.2 for MOBILE6.2. Off-network emissions are a significant fraction of the CO and NOx emissions in MOVES, accounting for 65% of total CO emissions, and significantly increase the CO-to-NOx molar ratio. The observed ratios were more similar to the average hourly running-emission ratio for urban roads, determined by MOVES to be 4.3.

  2. Design-based stereological estimation of the total number of cardiac myocytes in histological sections

    DEFF Research Database (Denmark)

    Brüel, Annemarie; Nyengaard, Jens Randel

    2005-01-01

    BACKGROUND: Counting the total number of cardiac myocytes has not previously been possible in ordinary histological sections using light microscopy (LM) due to difficulties in defining the myocyte borders properly. AIM: To describe a method by which the total number of cardiac myocytes is estimated in LM sections using design-based stereology. MATERIALS AND METHODS: From formalin-fixed left rat ventricles (LV), isotropic uniformly random sections were cut. The total number of myocyte nuclei per LV was estimated using the optical disector. Two-µm-thick serial paraffin sections were stained with antibodies against cadherin and type IV collagen to visualise the intercalated discs and the myocyte membranes, respectively. Using the physical disector in "local vertical windows" of the serial sections, the average number of nuclei per myocyte was estimated. RESULTS: The total number of myocyte nuclei

  3. A NEM diffusion code for fuel management and time average core calculation

    International Nuclear Information System (INIS)

    Mishra, Surendra; Ray, Sherly; Kumar, A.N.

    2005-01-01

    A computer code based on the nodal expansion method has been developed for solving the two-group, three-dimensional diffusion equation. This code can be used for fuel management and time-average core calculation. Explicit xenon and fuel temperature estimation are also incorporated in this code. TAPP-4 phase-B physics experimental results were analyzed using this code and a code based on the finite difference (FD) method. This paper gives a comparison of the observed data and the results obtained with this code and the FD code. (author)

  4. Probabilistic multiobjective wind-thermal economic emission dispatch based on point estimated method

    International Nuclear Information System (INIS)

    Azizipanah-Abarghooee, Rasoul; Niknam, Taher; Roosta, Alireza; Malekpour, Ahmad Reza; Zare, Mohsen

    2012-01-01

    In this paper, wind power generators are being incorporated in the multiobjective economic emission dispatch problem which minimizes wind-thermal electrical energy cost and emissions produced by fossil-fueled power plants, simultaneously. Large integration of wind energy sources necessitates an efficient model to cope with uncertainty arising from random wind variation. Hence, a multiobjective stochastic search algorithm based on 2m point estimated method is implemented to analyze the probabilistic wind-thermal economic emission dispatch problem considering both overestimation and underestimation of available wind power. 2m point estimated method handles the system uncertainties and renders the probability density function of desired variables efficiently. Moreover, a new population-based optimization algorithm called modified teaching-learning algorithm is proposed to determine the set of non-dominated optimal solutions. During the simulation, the set of non-dominated solutions are kept in an external memory (repository). Also, a fuzzy-based clustering technique is implemented to control the size of the repository. In order to select the best compromise solution from the repository, a niching mechanism is utilized such that the population will move toward a smaller search space in the Pareto-optimal front. In order to show the efficiency and feasibility of the proposed framework, three different test systems are represented as case studies. -- Highlights: ► WPGs are being incorporated in the multiobjective economic emission dispatch problem. ► 2m PEM handles the system uncertainties. ► A MTLBO is proposed to determine the set of non-dominated (Pareto) optimal solutions. ► A fuzzy-based clustering technique is implemented to control the size of the repository.

  5. Kinect-Based Moving Human Tracking System with Obstacle Avoidance

    Directory of Open Access Journals (Sweden)

    Abdel Mehsen Ahmad

    2017-04-01

    Full Text Available This paper is an extension of work originally presented and published at the IEEE International Multidisciplinary Conference on Engineering Technology (IMCET). This work presents the design and implementation of a moving human tracking system with obstacle avoidance. The system scans the environment using Kinect, a 3D sensor, and tracks the center of mass of a specific user using Processing, an open source computer programming language. An Arduino microcontroller is used to drive motors, enabling the system to move towards the tracked user and avoid obstacles hampering the trajectory. The implemented system is tested under different lighting conditions and the performance is analyzed using several generated depth images.

  6. Nonparametric autocovariance estimation from censored time series by Gaussian imputation.

    Science.gov (United States)

    Park, Jung Wook; Genton, Marc G; Ghosh, Sujit K

    2009-02-01

    One of the most frequently used methods to model the autocovariance function of a second-order stationary time series is to use the parametric framework of autoregressive and moving average models developed by Box and Jenkins. However, such parametric models, though very flexible, may not always be adequate to model autocovariance functions with sharp changes. Furthermore, if the data do not follow the parametric model and are censored at a certain value, the estimation results may not be reliable. We develop a Gaussian imputation method to estimate an autocovariance structure via nonparametric estimation of the autocovariance function in order to address both censoring and incorrect model specification. We demonstrate the effectiveness of the technique in terms of bias and efficiency with simulations under various rates of censoring and underlying models. We describe its application to a time series of silicon concentrations in the Arctic.

  7. Using autoregressive integrated moving average (ARIMA models to predict and monitor the number of beds occupied during a SARS outbreak in a tertiary hospital in Singapore

    Directory of Open Access Journals (Sweden)

    Earnest Arul

    2005-05-01

    Full Text Available Background: The main objective of this study is to apply autoregressive integrated moving average (ARIMA) models to make real-time predictions on the number of beds occupied in Tan Tock Seng Hospital during the recent SARS outbreak. Methods: This is a retrospective study design. Hospital admission and occupancy data for isolation beds were collected from Tan Tock Seng hospital for the period 14th March 2003 to 31st May 2003. The main outcome measure was the daily number of isolation beds occupied by SARS patients. Among the covariates considered were the daily number of people screened, the daily number of people admitted (including observation, suspect and probable cases) and days from the most recent significant event discovery. We utilized the following strategy for the analysis. Firstly, we split the outbreak data into two. Data from 14th March to 21st April 2003 were used for model development. We used structural ARIMA models in an attempt to model the number of beds occupied. Estimation is via the maximum likelihood method using the Kalman filter. For the ARIMA model parameters, we considered the simplest parsimonious lowest-order model. Results: We found that the ARIMA (1,0,3) model was able to describe and predict the number of beds occupied during the SARS outbreak well. The mean absolute percentage error (MAPE) for the training set and validation set were 5.7% and 8.6% respectively, which we found reasonable for use in the hospital setting. Furthermore, the model also provided three-day forecasts of the number of beds required. The total number of admissions and probable cases admitted on the previous day were also found to be independent prognostic factors of bed occupancy. Conclusion: ARIMA models provide useful tools for administrators and clinicians in planning for real-time bed capacity during an outbreak of an infectious disease such as SARS. The model could well be used in planning for bed capacity during outbreaks of other infectious

  8. Using autoregressive integrated moving average (ARIMA) models to predict and monitor the number of beds occupied during a SARS outbreak in a tertiary hospital in Singapore.

    Science.gov (United States)

    Earnest, Arul; Chen, Mark I; Ng, Donald; Sin, Leo Yee

    2005-05-11

    The main objective of this study is to apply autoregressive integrated moving average (ARIMA) models to make real-time predictions on the number of beds occupied in Tan Tock Seng Hospital, during the recent SARS outbreak. This is a retrospective study design. Hospital admission and occupancy data for isolation beds was collected from Tan Tock Seng hospital for the period 14th March 2003 to 31st May 2003. The main outcome measure was daily number of isolation beds occupied by SARS patients. Among the covariates considered were daily number of people screened, daily number of people admitted (including observation, suspect and probable cases) and days from the most recent significant event discovery. We utilized the following strategy for the analysis. Firstly, we split the outbreak data into two. Data from 14th March to 21st April 2003 was used for model development. We used structural ARIMA models in an attempt to model the number of beds occupied. Estimation is via the maximum likelihood method using the Kalman filter. For the ARIMA model parameters, we considered the simplest parsimonious lowest order model. We found that the ARIMA (1,0,3) model was able to describe and predict the number of beds occupied during the SARS outbreak well. The mean absolute percentage error (MAPE) for the training set and validation set were 5.7% and 8.6% respectively, which we found was reasonable for use in the hospital setting. Furthermore, the model also provided three-day forecasts of the number of beds required. Total number of admissions and probable cases admitted on the previous day were also found to be independent prognostic factors of bed occupancy. ARIMA models provide useful tools for administrators and clinicians in planning for real-time bed capacity during an outbreak of an infectious disease such as SARS. The model could well be used in planning for bed-capacity during outbreaks of other infectious diseases as well.
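
    Fitting an ARIMA(1,0,3) of the kind used in this study and producing short-term forecasts takes a few lines with statsmodels; the sketch below uses a synthetic occupancy series, since the hospital data and the study's covariates are not reproduced here.

    ```python
    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(4)
    # synthetic daily bed-occupancy series standing in for the isolation-ward data
    beds = 40 + np.cumsum(rng.normal(0, 2, size=70)).clip(-30, 60)

    train, valid = beds[:55], beds[55:]
    fit = ARIMA(train, order=(1, 0, 3)).fit()

    pred = fit.forecast(steps=valid.size)  # out-of-sample forecasts
    print("3-day forecast:", np.round(pred[:3], 1))

    # mean absolute percentage error on the validation block, as in the study
    mape = np.mean(np.abs((valid - pred) / valid)) * 100
    print(f"validation MAPE: {mape:.1f}%")
    ```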

  9. Saccadic interception of a moving visual target after a spatiotemporal perturbation.

    Science.gov (United States)

    Fleuriet, Jérome; Goffart, Laurent

    2012-01-11

    Animals can make saccadic eye movements to intercept a moving object at the right place and time. Such interceptive saccades indicate that, despite variable sensorimotor delays, the brain is able to estimate the current spatiotemporal (hic et nunc) coordinates of a target at saccade end. The present work further tests the robustness of this estimate in the monkey when a change in eye position and a delay are experimentally added before the onset of the saccade and in the absence of visual feedback. These perturbations are induced by brief microstimulation in the deep superior colliculus (dSC). When the microstimulation moves the eyes in the direction opposite to the target motion, a correction saccade brings gaze back on the target path or very near. When it moves the eye in the same direction, the performance is more variable and depends on the stimulated sites. Saccades fall ahead of the target with an error that increases when the stimulation is applied more caudally in the dSC. The numerous cases of compensation indicate that the brain is able to maintain an accurate and robust estimate of the location of the moving target. The inaccuracies observed when stimulating the dSC that encodes the visual field traversed by the target indicate that dSC microstimulation can interfere with signals encoding the target motion path. The results are discussed within the framework of the dual-drive and the remapping hypotheses.

  10. Grip Force and 3D Push-Pull Force Estimation Based on sEMG and GRNN

    Directory of Open Access Journals (Sweden)

    Changcheng Wu

    2017-06-01

    Full Text Available The estimation of the grip force and the 3D push-pull force (push and pull force in three-dimensional space) from the electromyogram (EMG) signal is of great importance in the dexterous control of an EMG prosthetic hand. In this paper, an action force estimation method based on eight channels of surface EMG (sEMG) and a Generalized Regression Neural Network (GRNN) is proposed to meet the requirements of force control of an intelligent EMG prosthetic hand. Firstly, the experimental platform, the acquisition of the sEMG, the feature extraction of the sEMG and the construction of the GRNN are described. Then, the multiple channels of sEMG during hand movement are captured by EMG sensors attached at eight different positions on the arm skin surface. Meanwhile, a grip force sensor and a three-dimensional force sensor are adopted to measure the output force of the human hand. The characteristic matrix of the sEMG and the force signals are used to construct the GRNN. The mean absolute value and the root mean square of the estimation errors, and the correlation coefficients between the actual force and the estimated force, are employed to assess the accuracy of the estimation. Analysis of variance (ANOVA) is also employed to test the difference of the force estimation. Experiments were implemented to verify the effectiveness of the proposed estimation method, and the results show that the output force of the human hand can be correctly estimated using the sEMG and GRNN method.
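
    A GRNN is essentially Nadaraya-Watson kernel regression: every training sample becomes a pattern unit, and a prediction is a Gaussian-weighted average of the training targets. The sketch below maps an 8-dimensional feature vector to a force value; the synthetic data and bandwidth are assumptions, not the authors' sEMG features.

    ```python
    import numpy as np

    def grnn_predict(X_train, y_train, X_query, sigma=0.3):
        """Generalized Regression Neural Network (Nadaraya-Watson form)."""
        d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
        w = np.exp(-d2 / (2 * sigma**2))      # Gaussian pattern-unit weights
        return (w @ y_train) / w.sum(axis=1)  # weighted average of targets

    rng = np.random.default_rng(5)
    X = rng.random((200, 8))                  # stand-in for 8-channel sEMG features
    force = X @ np.linspace(1, 8, 8) + 0.1 * rng.standard_normal(200)

    pred = grnn_predict(X[:150], force[:150], X[150:])
    rmse = np.sqrt(np.mean((pred - force[150:]) ** 2))
    print(f"RMSE on held-out samples: {rmse:.3f}")
    ```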

  11. Jump Variation Estimation with Noisy High Frequency Financial Data via Wavelets

    Directory of Open Access Journals (Sweden)

    Xin Zhang

    2016-08-01

    Full Text Available This paper develops a method to improve the estimation of jump variation using high frequency data in the presence of market microstructure noise. Accurate estimation of jump variation is in high demand, as it is an important component of volatility in finance for portfolio allocation, derivative pricing and risk management. The method is a two-step procedure of detection and estimation. In Step 1, we detect the jump locations by performing a wavelet transformation on the observed noisy price processes. Since wavelet coefficients are significantly larger at the jump locations than elsewhere, we calibrate the wavelet coefficients through a threshold and declare jump points where the absolute wavelet coefficients exceed the threshold. In Step 2, we estimate the jump variation by averaging the noisy price processes on each side of a declared jump point and taking the difference between the two averages. Specifically, for each jump location detected in Step 1, we form two averages from the observed noisy price processes, one before the detected jump location and one after it, and take their difference to estimate the jump variation. Theoretically, we show that the two-step procedure based on average realized volatility processes can achieve a convergence rate close to OP(n−4/9), which is better than the convergence rate OP(n−1/4) for the procedure based on the original noisy process, where n is the sample size. Numerically, the method based on average realized volatility processes indeed performs better than that based on the price processes. Empirically, we study the distribution of jump variation using Dow Jones Industrial Average stocks and compare the results using the original price process and the average realized volatility processes.
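
    Both steps can be sketched with PyWavelets on simulated noisy prices. This is a simplified single-level version with an assumed universal threshold; the paper's calibration and its use of average realized volatility processes are more involved.

    ```python
    import numpy as np
    import pywt

    rng = np.random.default_rng(6)
    n = 1024
    price = np.cumsum(0.01 * rng.standard_normal(n))  # diffusive part
    price[401:] += 0.5                                # one jump of size 0.5
    price += 0.005 * rng.standard_normal(n)           # microstructure noise

    # Step 1: level-1 Haar detail coefficients spike at the jump location
    _, detail = pywt.dwt(price, "haar")
    sigma = np.median(np.abs(detail)) / 0.6745        # robust noise scale
    thresh = sigma * np.sqrt(2 * np.log(n))           # universal threshold (assumed)
    jump_idx = 2 * np.flatnonzero(np.abs(detail) > thresh)  # back to sample index
    print("declared jump locations:", jump_idx)

    # Step 2: jump size = difference of local averages on each side of the jump
    for j in jump_idx:
        size = price[j + 1 : j + 51].mean() - price[max(j - 50, 0) : j].mean()
        print(f"jump near sample {j}: estimated size {size:.3f}")
    ```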

  12. Stoichiometry-based estimates of ferric iron in calcic, sodic-calcic and sodic amphiboles: a comparison of various methods

    Directory of Open Access Journals (Sweden)

    Gualda Guilherme A.R.

    2005-01-01

    Full Text Available An important drawback of the electron microprobe is its inability to quantify Fe3+/Fe2+ ratios in routine work. Although these ratios can be calculated, there is no unique criterion that can be applied to all amphiboles. Using a large data set of calcic, sodic-calcic, and sodic amphibole analyses from A-type granites and syenites from southern Brazil, we assess the choices made by the method of Schumacher (1997, Canadian Mineralogist, 35: 238-246), which uses the average between selected maximum and minimum estimates. The maximum estimates selected most frequently are: 13 cations excluding Ca, Na, and K (13eCNK; 66%); sum of Si and Al equal to 8 (8SiAl; 17%); 15 cations excluding K (15eK; 8%). These selections are appropriate based on crystallochemical considerations. Minimum estimates are mostly all iron as Fe2+ (all Fe2; 71%), and are clearly inadequate. Hence, maximum estimates should better approximate the actual values. To test this, complete analyses were selected from the literature, and calculated and measured values were compared. 13eCNK and maximum estimates are precise and accurate (concordance correlation coefficient rc ≥ 0.85). As expected, averages yield poor estimates (rc = 0.56). We recommend, thus, that maximum estimates be used for calcic, sodic-calcic, and sodic amphiboles.

  13. Estimating misclassification error: a closer look at cross-validation based methods

    Directory of Open Access Journals (Sweden)

    Ounpraseuth Songthip

    2012-11-01

    Full Text Available Background: To estimate a classifier's error in predicting future observations, bootstrap methods have been proposed as reduced-variation alternatives to traditional cross-validation (CV) methods based on sampling without replacement. Monte Carlo (MC) simulation studies aimed at estimating the true misclassification error conditional on the training set are commonly used to compare CV methods. We conducted an MC simulation study to compare a new method of bootstrap CV (BCV) to k-fold CV for estimating classification error. Findings: For the low-dimensional conditions simulated, the modest positive bias of k-fold CV contrasted sharply with the substantial negative bias of the new BCV method. This behavior was corroborated using a real-world dataset of prognostic gene-expression profiles in breast cancer patients. Our simulation results demonstrate some extreme characteristics of variance and bias that can occur due to a fault in the design of CV exercises aimed at estimating the true conditional error of a classifier, and that appear not to have been fully appreciated in previous studies. Although CV is a sound practice for estimating a classifier's generalization error, using CV to estimate the fixed misclassification error of a trained classifier conditional on the training set is problematic. While MC simulation of this estimation exercise can correctly represent the average bias of a classifier, it will overstate the between-run variance of the bias. Conclusions: We recommend k-fold CV over the new BCV method for estimating a classifier's generalization error. The extreme negative bias of BCV is too high a price to pay for its reduced variance.

  14. Estimation After a Group Sequential Trial.

    Science.gov (United States)

    Milanzi, Elasma; Molenberghs, Geert; Alonso, Ariel; Kenward, Michael G; Tsiatis, Anastasios A; Davidian, Marie; Verbeke, Geert

    2015-10-01

    simulations can give the false impression of bias in the sample average when considered conditional upon the sample size. The consequence is that no corrections need to be made to estimators following sequential trials. When small-sample bias is of concern, the conditional likelihood estimator provides a relatively straightforward modification to the sample average. Finally, it is shown that classical likelihood-based standard errors and confidence intervals can be applied, obviating the need for technical corrections.

  15. Color quality improvement of reconstructed images in color digital holography using speckle method and spectral estimation

    Science.gov (United States)

    Funamizu, Hideki; Onodera, Yusei; Aizu, Yoshihisa

    2018-05-01

    In this study, we report color quality improvement of reconstructed images in color digital holography using the speckle method and spectral estimation. In this technique, an object is illuminated by a speckle field and an object wave is produced, while a plane wave is used as a reference wave. For three wavelengths, the interference patterns of the two coherent waves are recorded as digital holograms on an image sensor. The speckle fields are changed by moving a ground glass plate in an in-plane direction, and a number of holograms are acquired to average the reconstructed images. After the averaging process over the images reconstructed from multiple holograms, we use the Wiener estimation method to obtain spectral transmittance curves in the reconstructed images. The color reproducibility of this method is demonstrated and evaluated using a Macbeth color chart film and stained onion cells.

  16. A proposed selection index for feedlot profitability based on estimated breeding values.

    Science.gov (United States)

    van der Westhuizen, R R; van der Westhuizen, J

    2009-04-22

    It is generally accepted that feed intake and growth (gain) are the most important economic components when calculating profitability in a growth test or feedlot. We developed a single post-weaning growth (feedlot) index based on the economic values of different components. Variance components, heritabilities and genetic correlations for and between initial weight (IW), final weight (FW), feed intake (FI), and shoulder height (SHD) were estimated by multitrait restricted maximum likelihood procedures. The estimated breeding values (EBVs) and the economic values for IW, FW and FI were used in a selection index to estimate a post-weaning or feedlot profitability value. Heritabilities for IW, FW, FI, and SHD were 0.41, 0.40, 0.33, and 0.51, respectively. The highest genetic correlations were 0.78 (between IW and FW) and 0.70 (between FI and FW). EBVs were used in a selection index to calculate a single economical value for each animal. This economic value is an indication of the gross profitability value or the gross test value (GTV) of the animal in a post-weaning growth test. GTVs varied between -R192.17 and R231.38 with an average of R9.31 and a standard deviation of R39.96. The Pearson correlations between EBVs (for production and efficiency traits) and GTV ranged from -0.51 to 0.68. The lowest correlation (closest to zero) was 0.26 between the Kleiber ratio and GTV. Correlations of 0.68 and -0.51 were estimated between average daily gain and GTV and feed conversion ratio and GTV, respectively. These results showed that it is possible to select for GTV. The selection index can benefit feedlotting in selecting offspring of bulls with high GTVs to maximize profitability.

  17. Maintenance of order in a moving strong condensate

    International Nuclear Information System (INIS)

    Whitehouse, Justin; Costa, André; Blythe, Richard A; Evans, Martin R

    2014-01-01

    We investigate the conditions under which a moving condensate may exist in a driven mass transport system. Our paradigm is a minimal mass transport model in which n − 1 particles move simultaneously from a site containing n > 1 particles to the neighbouring site in a preferred direction. In the spirit of a zero-range process, the rate u(n) of this move depends only on the occupation of the departure site. We study a hopping rate u(n) = 1 + b/n^α numerically and find a moving strong condensate phase for b > b_c(α) for all α > 0. This phase is characterised by a condensate that moves through the system and comprises a fraction of the system's mass that tends to unity. The mass lost by the condensate as it moves is constantly replenished from the trailing tail of low-occupancy sites that collectively comprise a vanishing fraction of the mass. We formulate an approximate analytical treatment of the model that allows a reasonable estimate of b_c(α) to be obtained. We show numerically (for α = 1) that the transition is of mixed order, exhibiting a discontinuity in the order parameter as well as a diverging length scale as b ↘ b_c.
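
    A minimal random-sequential simulation of the dynamics sketched above: a site with n > 1 particles fires with rate u(n) = 1 + b/n^α and sends n − 1 particles to its right-hand neighbour. System size and parameters are illustrative, not those used in the paper:

    import numpy as np

    rng = np.random.default_rng(2)
    L, N, b, alpha = 50, 100, 6.0, 1.0
    occ = rng.multinomial(N, [1.0 / L] * L)        # random initial occupations

    for _ in range(200_000):
        rates = np.where(occ > 1, 1.0 + b / np.maximum(occ, 1) ** alpha, 0.0)
        total = rates.sum()
        if total == 0.0:                           # no site can move (all n <= 1)
            break
        site = rng.choice(L, p=rates / total)      # pick a site proportional to its rate
        move = occ[site] - 1                       # n - 1 particles hop together...
        occ[site] -= move
        occ[(site + 1) % L] += move                # ...to the neighbour in the preferred direction

    # Fraction of the total mass held by the most occupied site; a value near
    # unity signals a strong condensate.
    print("largest occupation fraction:", occ.max() / N)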

  18. [Motion control of moving mirror based on fixed-mirror adjustment in FTIR spectrometer].

    Science.gov (United States)

    Li, Zhong-bing; Xu, Xian-ze; Le, Yi; Xu, Feng-qiu; Li, Jun-wei

    2012-08-01

    The uniform motion of the moving mirror, which is the only continuously moving part in an FTIR spectrometer, and the alignment of the fixed mirror play a key role in instrument performance: they affect the interference and the quality of the spectrogram, and may directly limit the precision and resolution of the instrument. The present article focuses on the uniform motion of the moving mirror and the alignment of the fixed mirror. To improve the FTIR spectrometer, a maglev support system was designed for the moving mirror, and phase detection technology was adopted to adjust the tilt angle between the moving mirror and the fixed mirror. This paper also introduces an improved fuzzy PID control algorithm to achieve accurate speed control of the moving mirror, realizing the control strategy in both the hardware design and the algorithm. The results show that the moving-mirror motion control system achieves sufficient accuracy and real-time performance, ensuring the uniform motion of the moving mirror and the alignment of the fixed mirror.
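
    A minimal velocity-loop sketch in the spirit of the fuzzy PID scheme the paper describes: a PID controller whose proportional and integral gains are scaled by a crude error-magnitude rule. The gains, the rule base, and the first-order mirror model are all illustrative assumptions:

    class FuzzyPID:
        def __init__(self, kp=2.0, ki=2.0, kd=0.001, dt=1e-3):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.integral, self.prev_error = 0.0, None

        def _scale(self, e):
            # Stand-in for the fuzzy rule base: large errors strengthen the
            # proportional action and weaken the integral action.
            return (1.5, 0.5) if abs(e) > 0.1 else (1.0, 1.0)

        def update(self, setpoint, measured):
            e = setpoint - measured
            sp, si = self._scale(e)
            self.integral += e * self.dt
            d = 0.0 if self.prev_error is None else (e - self.prev_error) / self.dt
            self.prev_error = e
            return sp * self.kp * e + si * self.ki * self.integral + self.kd * d

    # Drive a toy first-order model of the mirror velocity towards 1.0 mm/s.
    ctrl, v, tau = FuzzyPID(), 0.0, 0.05   # time constant 50 ms (assumed)
    for _ in range(5000):                  # 5 s of simulated time
        u = ctrl.update(1.0, v)
        v += (u - v) * ctrl.dt / tau
    print(f"final velocity: {v:.3f} mm/s")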

  19. Line-averaging measurement methods to estimate the gap in the CO2 balance closure – possibilities, challenges, and uncertainties

    Directory of Open Access Journals (Sweden)

    A. Ziemann

    2017-11-01

    An imbalance of surface energy fluxes measured using the eddy covariance (EC) method is observed in global measurement networks, even though all necessary corrections and conversions are applied to the raw data. Mainly during nighttime, advection can occur, resulting in a gap in closure that consequently should also affect the CO2 balances. There is a crucial need for representative concentration and wind data to measure advective fluxes. Ground-based remote sensing techniques are an ideal tool, as they provide the spatially representative CO2 concentration together with wind components within the same voxel structure. For this purpose, the presented SQuAd (Spatially resolved Quantification of the Advection influence on the balance closure of greenhouse gases) approach applies an integrated combination of acoustic and optical remote sensing methods. The innovative combination of acoustic travel-time tomography (A-TOM) and open-path Fourier-transform infrared spectroscopy (OP-FTIR) will enable an upscaling and enhancement of EC measurements. OP-FTIR instrumentation offers the significant advantage of real-time simultaneous measurements of line-averaged concentrations for CO2 and other greenhouse gases (GHGs). A-TOM is a scalable method to remotely resolve 3-D wind and temperature fields. The paper gives an overview of the proposed SQuAd approach and first results of experimental tests at the FLUXNET site Grillenburg in Germany. Preliminary results of the comprehensive experiments reveal a mean nighttime horizontal advection of CO2 of about 10 µmol m⁻² s⁻¹ estimated by the spatially integrating and representative SQuAd method. Additionally, uncertainties in determining CO2 concentrations using passive OP-FTIR and wind speed applying A-TOM are systematically quantified. The maximum uncertainty for the CO2 concentration was estimated, due to environmental parameters, instrumental characteristics and the retrieval procedure, with a total amount of approximately
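
    A back-of-envelope sketch of a horizontal advection term from line-averaged data: two parallel OP-FTIR paths a distance dx apart give path-averaged CO2 concentrations, and A-TOM supplies a layer-averaged along-gradient wind. All numbers are invented for illustration; the actual SQuAd processing chain is considerably more involved:

    dx = 100.0                 # horizontal separation of the two paths (m)
    dz = 4.0                   # depth of the probed air layer (m)
    c_up, c_down = 16.8, 17.4  # path-averaged CO2 (mmol m^-3), upwind/downwind
    u_mean = 0.4               # layer-averaged horizontal wind (m s^-1)

    grad = (c_down - c_up) / dx      # horizontal gradient (mmol m^-4)
    adv = u_mean * grad * dz * 1e3   # layer-integrated advection (umol m^-2 s^-1)
    print(f"horizontal advection ~ {adv:.1f} umol m^-2 s^-1")   # ~9.6 here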

  20. A novel velocity estimator using multiple frequency carriers

    DEFF Research Database (Denmark)

    Zhang, Zhuo; Jakobsson, Andreas; Nikolov, Svetoslav

    2004-01-01

    Most modern ultrasound scanners use the so-called pulsed-wave Doppler technique to estimate blood velocities. Among the narrowband-based methods, the autocorrelation estimator and the Fourier-based method are the most commonly used approaches. Due to the low level of the blood echo, the signal-to-noise ratio is low, and some averaging in depth is applied to improve the estimate. Further, due to velocity gradients in space and time, the spectrum may get smeared. An alternative approach is to use a pulse with multiple frequency carriers and do some form of averaging in the frequency domain. However ... In this paper, we propose a nonlinear least squares (NLS) estimator. Typically, NLS estimators are computationally cumbersome, in general requiring the minimization of a multidimensional and often multimodal cost function. Here, by noting that the unknown velocity will result in a common known frequency ...
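
    A minimal sketch of the idea (the signal model and parameters are illustrative, not the paper's): each carrier f_i sees a Doppler shift 2·v·f_i/c, so the NLS criterion can be evaluated by a single one-dimensional search over v that pools all carriers:

    import numpy as np

    rng = np.random.default_rng(3)
    c_sound = 1540.0                       # speed of sound in tissue (m/s)
    prf = 5e3                              # pulse repetition frequency (Hz)
    carriers = np.array([3e6, 5e6, 7e6])   # carrier frequencies (Hz)
    v_true, n = 0.35, 64                   # simulated velocity (m/s), slow-time samples
    t = np.arange(n) / prf

    # One complex exponential per carrier at its Doppler frequency, plus noise.
    sig = [np.exp(2j * np.pi * (2 * v_true * f / c_sound) * t)
           + 0.3 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
           for f in carriers]

    def nls_cost(v):
        # Pool matched-filter powers at each carrier's predicted Doppler
        # frequency; the carriers alias differently, so only the true velocity
        # lines up across all of them.
        e = [np.exp(2j * np.pi * (2 * v * f / c_sound) * t) for f in carriers]
        return sum(abs(np.vdot(ei, xi)) ** 2     # vdot conjugates its first argument
                   for ei, xi in zip(e, sig))

    v_grid = np.linspace(-0.5, 0.5, 2001)
    v_hat = v_grid[np.argmax([nls_cost(v) for v in v_grid])]
    print(f"estimated velocity: {v_hat:.3f} m/s (true: {v_true} m/s)")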