WorldWideScience

Sample records for regressive moving average

  1. Monthly streamflow forecasting with auto-regressive integrated moving average

    Science.gov (United States)

    Nasir, Najah; Samsudin, Ruhaidah; Shabri, Ani

    2017-09-01

    Forecasting of streamflow is one of the many ways that can contribute to better decision making for water resource management. The auto-regressive integrated moving average (ARIMA) model was selected in this research for monthly streamflow forecasting, with enhancement made by pre-processing the data using singular spectrum analysis (SSA). This study also proposed an extension of the SSA technique to include a step where clustering is performed on the eigenvector pairs before reconstruction of the time series. The monthly streamflow data of Sungai Muda at Jeniang, Sungai Muda at Jambatan Syed Omar and Sungai Ketil at Kuala Pegang were gathered from the Department of Irrigation and Drainage Malaysia. A ratio of 9:1 was used to divide the data into training and testing sets. The ARIMA, SSA-ARIMA and Clustered SSA-ARIMA models were all developed in R software. Results from the proposed model were then compared to a conventional auto-regressive integrated moving average model using the root-mean-square error and mean absolute error values. The proposed model was found to outperform the conventional model.
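A minimal sketch of the forecasting workflow in the record above, leaving out the SSA/clustering pre-processing: a plain AR(1) stand-in (not the authors' SSA-ARIMA model) is fitted to a synthetic series, the study's 9:1 train/test split is applied, and forecasts are scored with RMSE and MAE. The series and coefficient are assumptions for illustration.

```python
import numpy as np

def fit_ar1(x):
    # Least-squares estimate of phi in x[t] = phi * x[t-1] + e[t]
    return np.dot(x[1:], x[:-1]) / np.dot(x[:-1], x[:-1])

def one_step_forecasts(x_train, x_test, phi):
    # One-step-ahead forecasts over the test period
    history = np.concatenate([x_train[-1:], x_test[:-1]])
    return phi * history

rng = np.random.default_rng(0)
# Synthetic stand-in for a (deseasonalized) monthly streamflow series
x = rng.normal(size=240)
for t in range(1, 240):
    x[t] += 0.6 * x[t - 1]

# 9:1 train/test split, as in the study
split = int(0.9 * len(x))
train, test = x[:split], x[split:]

phi = fit_ar1(train)
pred = one_step_forecasts(train, test, phi)

rmse = np.sqrt(np.mean((test - pred) ** 2))
mae = np.mean(np.abs(test - pred))
print(f"phi={phi:.2f}  RMSE={rmse:.3f}  MAE={mae:.3f}")
```

RMSE and MAE are the two scores the study uses to compare models; RMSE always dominates MAE, so both are reported.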

  2. Short-term electricity prices forecasting based on support vector regression and Auto-regressive integrated moving average modeling

    International Nuclear Information System (INIS)

    Che Jinxing; Wang Jianzhou

    2010-01-01

    In this paper, we present the use of different mathematical models to forecast electricity prices in deregulated power markets. A successful electricity-price prediction tool can help both power producers and consumers plan their bidding strategies. Because the support vector regression (SVR) model with the ε-insensitive loss function admits residuals within the boundary of the ε-tube, we propose a hybrid model, called SVRARIMA, that combines SVR and Auto-regressive integrated moving average (ARIMA) models to take advantage of their respective strengths in nonlinear and linear modeling. A nonlinear analysis of the time series indicates that nonlinear modeling is appropriate, so SVR is applied to capture the nonlinear patterns, while ARIMA models, which have been successfully applied to residual regression estimation, model the remaining structure. The experimental results demonstrate that the proposed model outperforms existing neural-network approaches, traditional ARIMA models and other hybrid models in terms of root mean square error and mean absolute percentage error.

  3. Electricity demand loads modeling using AutoRegressive Moving Average (ARMA) models

    Energy Technology Data Exchange (ETDEWEB)

    Pappas, S.S. [Department of Information and Communication Systems Engineering, University of the Aegean, Karlovassi, 83 200 Samos (Greece); Ekonomou, L.; Chatzarakis, G.E. [Department of Electrical Engineering Educators, ASPETE - School of Pedagogical and Technological Education, N. Heraklion, 141 21 Athens (Greece); Karamousantas, D.C. [Technological Educational Institute of Kalamata, Antikalamos, 24100 Kalamata (Greece); Katsikas, S.K. [Department of Technology Education and Digital Systems, University of Piraeus, 150 Androutsou Srt., 18 532 Piraeus (Greece); Liatsis, P. [Division of Electrical Electronic and Information Engineering, School of Engineering and Mathematical Sciences, Information and Biomedical Engineering Centre, City University, Northampton Square, London EC1V 0HB (United Kingdom)

    2008-09-15

    This study addresses the problem of modeling the electricity demand loads in Greece. The provided actual load data are deseasonalized and an AutoRegressive Moving Average (ARMA) model is fitted to the data off-line, using the corrected Akaike Information Criterion (AICC). The developed model fits the data well. Difficulties occur when the provided data include noise or errors, or when on-line/adaptive modeling is required. In both cases, and under the assumption that the data can be represented by an ARMA model, simultaneous order and parameter estimation of ARMA models in the presence of noise is performed. The results indicate that the proposed method, which is based on multi-model partitioning theory, successfully tackles the studied problem. For validation purposes the results are compared with three other established order selection criteria, namely AICC, Akaike's Information Criterion (AIC) and Schwarz's Bayesian Information Criterion (BIC). The developed model could be useful in studies concerning electricity consumption and electricity price forecasts. (author)
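The order-selection ingredient of the record above can be illustrated in isolation: AR(p) candidates are fitted by least squares to a synthetic AR(2) series and compared under common textbook forms of AIC, AICC and BIC. This is a sketch of criterion-based order selection, not the multi-model partitioning method the study actually proposes, and the exact criterion constants vary across references.

```python
import numpy as np

def fit_ar(x, p):
    # Ordinary least squares fit of an AR(p) model
    X = np.column_stack([x[p - i - 1:len(x) - i - 1] for i in range(p)])
    y = x[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    return coef, np.mean(resid ** 2)

def criteria(sigma2, n, k):
    # Common textbook definitions (constants differ between references)
    aic = n * np.log(sigma2) + 2 * k
    aicc = aic + 2 * k * (k + 1) / (n - k - 1)
    bic = n * np.log(sigma2) + k * np.log(n)
    return aic, aicc, bic

rng = np.random.default_rng(1)
n = 500
x = np.zeros(n)
e = rng.normal(size=n)
for t in range(2, n):          # true order: AR(2)
    x[t] = 0.5 * x[t - 1] - 0.3 * x[t - 2] + e[t]

scores = {}
for p in range(1, 6):
    _, sigma2 = fit_ar(x, p)
    scores[p] = criteria(sigma2, n - p, p)

best_aicc = min(scores, key=lambda p: scores[p][1])
best_bic = min(scores, key=lambda p: scores[p][2])
print("AICC picks order", best_aicc, "| BIC picks order", best_bic)
```

With 500 samples, both criteria reliably reject the underfitted AR(1); BIC's heavier penalty discourages overfitting to higher orders.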

  4. Modelling and analysis of turbulent datasets using Auto Regressive Moving Average processes

    International Nuclear Information System (INIS)

    Faranda, Davide; Dubrulle, Bérengère; Daviaud, François; Pons, Flavio Maria Emanuele; Saint-Michel, Brice; Herbert, Éric; Cortet, Pierre-Philippe

    2014-01-01

    We introduce a novel way to extract information from turbulent datasets by applying an Auto Regressive Moving Average (ARMA) statistical analysis. Such an analysis goes well beyond the analysis of the mean flow and of the fluctuations and links the behavior of the recorded time series to a discrete version of a stochastic differential equation which is able to describe the correlation structure in the dataset. We introduce a new index Υ that measures the difference between the resulting analysis and the Obukhov model of turbulence, the simplest stochastic model reproducing both the Richardson law and the Kolmogorov spectrum. We test the method on datasets measured in a von Kármán swirling flow experiment. We find that the ARMA analysis is well correlated with spatial structures of the flow and can discriminate between two different flows with comparable mean velocities, obtained by changing the forcing. Moreover, we show that Υ is highest in regions where shear layer vortices are present, thereby establishing a link between deviations from the Kolmogorov model and coherent structures. These deviations are consistent with the ones observed by computing the Hurst exponents for the same time series. We show that some salient features of the analysis are preserved when considering global instead of local observables. Finally, we analyze flow configurations with multistability features, where the ARMA technique is efficient in discriminating the different stability branches of the system.

  5. Self-similarity of higher-order moving averages

    Science.gov (United States)

    Arianos, Sergio; Carbone, Anna; Türk, Christian

    2011-10-01

    In this work, higher-order moving average polynomials are defined by straightforward generalization of the standard moving average. The self-similarity of the polynomials is analyzed for fractional Brownian series and quantified in terms of the Hurst exponent H by using the detrending moving average method. We prove that the exponent H of the fractional Brownian series and of the detrending moving average variance asymptotically agree for the first-order polynomial. These asymptotic values are compared with results obtained from simulations. The higher-order polynomials correspond to trend estimates at shorter time scales as the degree of the polynomial increases. Importantly, increasing the polynomial degree does not require changing the moving average window, so trends at different time scales can be obtained from data sets of the same size. These polynomials could be of interest for applications relying on trend estimates over different time horizons (financial markets) or on filtering at different frequencies (image analysis).
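A rough sketch of the standard (first-order) detrending moving average method referenced above, applied to an ordinary Brownian walk, for which the estimated Hurst exponent should come out near H = 0.5. The window sizes and series length are arbitrary assumptions; the paper's higher-order polynomial generalization is not implemented here.

```python
import numpy as np

def dma_variance(y, n):
    # Variance of the series around its n-point (backward) moving average
    kernel = np.ones(n) / n
    ma = np.convolve(y, kernel, mode="valid")   # ma[i] averages y[i:i+n]
    return np.mean((y[n - 1:] - ma) ** 2)

rng = np.random.default_rng(2)
# Brownian walk: a fractional Brownian series with H = 0.5
y = np.cumsum(rng.normal(size=20000))

windows = np.array([8, 16, 32, 64, 128])
sigma2 = np.array([dma_variance(y, n) for n in windows])

# sigma^2(n) ~ n^(2H), so the slope of log sigma^2 vs log n equals 2H
slope, _ = np.polyfit(np.log(windows), np.log(sigma2), 1)
H = slope / 2
print(f"estimated Hurst exponent: {H:.2f}")
```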

  6. Time series forecasting using radial basis function (RBF) and autoregressive integrated moving average (ARIMA) models

    Directory of Open Access Journals (Sweden)

    DT Wiyanti

    2013-07-01

    Full Text Available One of the most widely developed forecasting methods today is time series analysis, a quantitative approach in which past data serve as the basis for forecasting the future. Various studies have proposed methods for solving time series problems, among them statistics, neural networks, wavelets, and fuzzy systems. These methods have different strengths and weaknesses, but real-world problems are complex, and a single method may not be able to handle them well. This article discusses the combination of two methods, Auto Regressive Integrated Moving Average (ARIMA) and Radial Basis Function (RBF), motivated by the assumption that a single method cannot fully identify all the characteristics of a time series. Forecasts are made for the Indonesian Wholesale Price Index (IHPB) and commodity inflation data, both spanning 2006 to the first months of 2012, with six variables each. The forecasts of the ARIMA-RBF method are compared with those of the ARIMA and RBF methods individually. The analysis shows that the combined ARIMA-RBF model yields more accurate results than either method alone, as seen in the visual plots, MAPE, and RMSE of all variables on the two test datasets.

  7. Identification of moving vehicle forces on bridge structures via moving average Tikhonov regularization

    Science.gov (United States)

    Pan, Chu-Dong; Yu, Ling; Liu, Huan-Lin

    2017-08-01

    Traffic-induced moving force identification (MFI) is a typical inverse problem in the field of bridge structural health monitoring. Many regularization-based methods have been proposed for MFI. However, the MFI accuracy obtained from the existing methods is low when the moving forces enter and exit a bridge deck, due to the low sensitivity of structural responses to the forces at these zones. To overcome this shortcoming, a novel moving average Tikhonov regularization method is proposed for MFI by incorporating moving average concepts. Firstly, the bridge-vehicle interaction moving force is assumed to be a discrete finite signal with stable average value (DFS-SAV). Secondly, the reasonable signal feature of DFS-SAV is quantified and introduced to improve the penalty function (‖x‖₂²) defined in classical Tikhonov regularization. Then, a feasible two-step strategy is proposed for selecting the regularization parameter and balance coefficient defined in the improved penalty function. Finally, both numerical simulations on a simply-supported beam and laboratory experiments on a hollow tube beam are performed to assess the accuracy and feasibility of the proposed method. The illustrated results show that the moving forces can be accurately identified with strong robustness. Some related issues, such as the selection of moving window length, the effect of different penalty functions, and the effect of different car speeds, are discussed as well.
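Classical Tikhonov regularization, the starting point the record above improves upon, can be sketched on a synthetic ill-posed problem. The bridge response matrix, the moving-average-modified penalty and the two-step parameter selection of the paper are not reproduced; the matrix below is a hypothetical stand-in with one nearly unobservable input direction.

```python
import numpy as np

def tikhonov(A, b, lam):
    # Minimise ||A x - b||^2 + lam * ||x||^2 (classical penalty ||x||_2^2)
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

rng = np.random.default_rng(3)
# Ill-conditioned forward model standing in for the bridge response matrix:
# the last input direction barely influences the measured response.
scales = np.ones(20)
scales[-1] = 1e-6
A = rng.normal(size=(60, 20)) * scales
x_true = np.ones(20)                      # stand-in for a force with a stable average value
b = A @ x_true + 0.01 * rng.normal(size=60)

x_naive = np.linalg.lstsq(A, b, rcond=None)[0]
x_reg = tikhonov(A, b, lam=1e-4)

err_naive = np.linalg.norm(x_naive - x_true)
err_reg = np.linalg.norm(x_reg - x_true)
print(f"plain least squares error {err_naive:.1f} vs Tikhonov error {err_reg:.2f}")
```

The penalty damps the noise-amplified component at the cost of a small bias, which is exactly the trade-off the regularization parameter controls.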

  8. Interpreting Bivariate Regression Coefficients: Going beyond the Average

    Science.gov (United States)

    Halcoussis, Dennis; Phillips, G. Michael

    2010-01-01

    Statistics, econometrics, investment analysis, and data analysis classes often review the calculation of several types of averages, including the arithmetic mean, geometric mean, harmonic mean, and various weighted averages. This note shows how each of these can be computed using a basic regression framework. By recognizing when a regression model…
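The idea in the record above can be sketched directly: an intercept-only least-squares fit recovers the arithmetic mean, weights turn it into a weighted average, and transforming the data first yields the geometric and harmonic means. The numbers below are arbitrary illustration values.

```python
import math

def ols_intercept(y, w=None):
    # Intercept-only (weighted) least squares: argmin_b sum_i w_i * (y_i - b)^2
    w = w or [1.0] * len(y)
    return sum(wi * yi for wi, yi in zip(w, y)) / sum(w)

y = [2.0, 4.0, 8.0]

arithmetic = ols_intercept(y)                                  # plain OLS intercept
weighted = ols_intercept(y, w=[1, 1, 2])                       # WLS intercept
geometric = math.exp(ols_intercept([math.log(v) for v in y]))  # OLS on log data
harmonic = 1.0 / ols_intercept([1.0 / v for v in y])           # OLS on reciprocals
print(arithmetic, weighted, geometric, harmonic)
```

For these values the four "averages" are 14/3, 5.5, 4 and 24/7, matching the usual closed-form definitions.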

  9. Autoregressive Moving Average Graph Filtering

    OpenAIRE

    Isufi, Elvin; Loukas, Andreas; Simonetto, Andrea; Leus, Geert

    2016-01-01

    Graph filters, direct analogues of classical filters intended for signals defined on graphs, are one of the cornerstones of the field of signal processing on graphs. This work brings forth new insights on the distributed graph filtering problem. We design a family of autoregressive moving average (ARMA) recursions, which (i) are able to approximate any desired graph frequency response, and (ii) give exact solutions for tasks such as graph signal denoising and interpolation. The design phi...
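A minimal numeric sketch of an ARMA(1) graph filter recursion on a 4-node path graph: iterating y ← ψ·L·y + φ·x converges to the solution of (I − ψL)y = φx, i.e. a rational (ARMA-type) graph frequency response. The graph, coefficients ψ and φ, and the signal are assumptions for illustration, not the paper's design procedure.

```python
import numpy as np

# Small undirected path graph as the graph shift operator
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = A / np.abs(np.linalg.eigvalsh(A)).max()   # scale so the spectral radius is 1

psi, phi = 0.5, 1.0                  # |psi| * rho(L) < 1 guarantees convergence
x = np.array([1.0, 0.0, 0.0, -1.0])  # graph signal

# ARMA(1) recursion: each step uses only neighbor values (L @ y), so it is distributable
y = np.zeros_like(x)
for _ in range(100):
    y = psi * (L @ y) + phi * x

# The steady state solves (I - psi L) y = phi x: a rational graph frequency response
y_exact = np.linalg.solve(np.eye(4) - psi * L, phi * x)
print(np.max(np.abs(y - y_exact)))
```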

  10. Hybrid support vector regression and autoregressive integrated moving average models improved by particle swarm optimization for property crime rates forecasting with economic indicators.

    Science.gov (United States)

    Alwee, Razana; Shamsuddin, Siti Mariyam Hj; Sallehuddin, Roselina

    2013-01-01

    Crime forecasting is an important area in the field of criminology. Linear models, such as regression and econometric models, are commonly applied in crime forecasting. However, real crime data commonly consist of both linear and nonlinear components, and a single model may not be sufficient to identify all the characteristics of the data. The purpose of this study is to introduce a hybrid model that combines support vector regression (SVR) and autoregressive integrated moving average (ARIMA) for crime rate forecasting. SVR is very robust with small training data and high-dimensional problems, while ARIMA has the ability to model several types of time series. However, the accuracy of the SVR model depends on the values of its parameters, and ARIMA is not robust when applied to small data sets. Therefore, particle swarm optimization is used to estimate the parameters of the SVR and ARIMA models. The proposed hybrid model is used to forecast the property crime rates of the United States based on economic indicators. The experimental results show that the proposed hybrid model is able to produce more accurate forecasting results than the individual models.
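The particle swarm optimization ingredient shared by this and the following record can be sketched in isolation: a bare-bones global-best PSO minimizing a hypothetical two-parameter error surface (standing in for, say, SVR's C and gamma). The actual SVR/ARIMA fitting of the paper is not reproduced, and the swarm constants are conventional textbook choices.

```python
import random

def pso(objective, bounds, n_particles=20, iters=60, seed=42):
    # Minimal particle swarm optimiser (global-best topology)
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]

    w, c1, c2 = 0.7, 1.5, 1.5          # inertia and acceleration coefficients
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Hypothetical error surface over two hyperparameters, minimised at (1, 2)
surface = lambda p: (p[0] - 1.0) ** 2 + (p[1] - 2.0) ** 2
best, best_val = pso(surface, bounds=[(0, 10), (0, 10)])
print(best, best_val)
```

In the papers the objective would be a cross-validated forecast error rather than this closed-form surface.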

  11. Hybrid Support Vector Regression and Autoregressive Integrated Moving Average Models Improved by Particle Swarm Optimization for Property Crime Rates Forecasting with Economic Indicators

    Directory of Open Access Journals (Sweden)

    Razana Alwee

    2013-01-01

    Full Text Available Crime forecasting is an important area in the field of criminology. Linear models, such as regression and econometric models, are commonly applied in crime forecasting. However, real crime data commonly consist of both linear and nonlinear components, and a single model may not be sufficient to identify all the characteristics of the data. The purpose of this study is to introduce a hybrid model that combines support vector regression (SVR) and autoregressive integrated moving average (ARIMA) for crime rate forecasting. SVR is very robust with small training data and high-dimensional problems, while ARIMA has the ability to model several types of time series. However, the accuracy of the SVR model depends on the values of its parameters, and ARIMA is not robust when applied to small data sets. Therefore, particle swarm optimization is used to estimate the parameters of the SVR and ARIMA models. The proposed hybrid model is used to forecast the property crime rates of the United States based on economic indicators. The experimental results show that the proposed hybrid model is able to produce more accurate forecasting results than the individual models.

  12. On the performance of Autoregressive Moving Average Polynomial

    African Journals Online (AJOL)

    Timothy Ademakinwa

    Distributed Lag (PDL) model, Autoregressive Polynomial Distributed Lag … Moving Average Polynomial Distributed Lag (ARMAPDL) model. … Global Journal of Mathematics and Statistics. Vol. 1. … Business and Economic Research Center.

  13. A dynamic analysis of moving average rules

    NARCIS (Netherlands)

    Chiarella, C.; He, X.Z.; Hommes, C.H.

    2006-01-01

    The use of various moving average (MA) rules remains popular with financial market practitioners. These rules have recently become the focus of a number of empirical studies, but there have been very few studies of financial market models where some agents employ technical trading rules of the type

  14. Assessing the Efficacy of Adjustable Moving Averages Using ASEAN-5 Currencies.

    Directory of Open Access Journals (Sweden)

    Jacinta Chan Phooi M'ng

    Full Text Available The objective of this research is to examine the trends in the exchange rate markets of the ASEAN-5 countries (Indonesia (IDR), Malaysia (MYR), the Philippines (PHP), Singapore (SGD), and Thailand (THB)) through the application of dynamic moving average trading systems. This research offers evidence of the usefulness of the time-varying volatility technical analysis indicator, Adjustable Moving Average (AMA') in deciphering trends in these ASEAN-5 exchange rate markets. This time-varying volatility factor, referred to as the Efficacy Ratio in this paper, is embedded in AMA'. The Efficacy Ratio adjusts the AMA' to the prevailing market conditions by avoiding whipsaws (losses due, in part, to acting on wrong trading signals, which generally occur when there is no general direction in the market) in range trading and by entering early into new trends in trend trading. The efficacy of AMA' is assessed against other popular moving-average rules. Based on the January 2005 to December 2014 dataset, our findings show that the moving averages and AMA' are superior to the passive buy-and-hold strategy. Specifically, AMA' outperforms the other models for the United States Dollar against PHP (USD/PHP) and USD/THB currency pairs. The results show that different length moving averages perform better in different periods for the five currencies. This is consistent with our hypothesis that a dynamic adjustable technical indicator is needed to cater for different periods in different markets.

  15. Assessing the Efficacy of Adjustable Moving Averages Using ASEAN-5 Currencies.

    Science.gov (United States)

    Chan Phooi M'ng, Jacinta; Zainudin, Rozaimah

    2016-01-01

    The objective of this research is to examine the trends in the exchange rate markets of the ASEAN-5 countries (Indonesia (IDR), Malaysia (MYR), the Philippines (PHP), Singapore (SGD), and Thailand (THB)) through the application of dynamic moving average trading systems. This research offers evidence of the usefulness of the time-varying volatility technical analysis indicator, Adjustable Moving Average (AMA') in deciphering trends in these ASEAN-5 exchange rate markets. This time-varying volatility factor, referred to as the Efficacy Ratio in this paper, is embedded in AMA'. The Efficacy Ratio adjusts the AMA' to the prevailing market conditions by avoiding whipsaws (losses due, in part, to acting on wrong trading signals, which generally occur when there is no general direction in the market) in range trading and by entering early into new trends in trend trading. The efficacy of AMA' is assessed against other popular moving-average rules. Based on the January 2005 to December 2014 dataset, our findings show that the moving averages and AMA' are superior to the passive buy-and-hold strategy. Specifically, AMA' outperforms the other models for the United States Dollar against PHP (USD/PHP) and USD/THB currency pairs. The results show that different length moving averages perform better in different periods for the five currencies. This is consistent with our hypothesis that a dynamic adjustable technical indicator is needed to cater for different periods in different markets.
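A sketch in the spirit of the adjustable moving average described above, using Kaufman's well-known adaptive moving average recipe as a stand-in: the paper's AMA' and its Efficacy Ratio are the authors' own definitions, while the efficiency ratio below is the analogous textbook quantity. On a clean trend the ratio is near 1 and the average tracks quickly; in range trading it is near 0 and the average barely moves, avoiding whipsaws.

```python
def adaptive_moving_average(prices, er_window=10, fast=2, slow=30):
    # Kaufman-style adaptive moving average: the smoothing constant slides
    # between a fast and a slow EMA according to an efficiency ratio.
    fast_sc, slow_sc = 2.0 / (fast + 1), 2.0 / (slow + 1)
    ama = [prices[0]]
    for t in range(1, len(prices)):
        lo = max(0, t - er_window)
        change = abs(prices[t] - prices[lo])
        volatility = sum(abs(prices[i] - prices[i - 1]) for i in range(lo + 1, t + 1))
        er = change / volatility if volatility > 0 else 0.0  # ~1 in a trend, ~0 in a range
        sc = (er * (fast_sc - slow_sc) + slow_sc) ** 2
        ama.append(ama[-1] + sc * (prices[t] - ama[-1]))
    return ama

trend = [float(t) for t in range(30)]   # clean trend: fast tracking expected
chop = [0.0, 1.0] * 15                  # range trading: the average should barely move
ama_trend = adaptive_moving_average(trend)
ama_chop = adaptive_moving_average(chop)
print(ama_trend[-1], ama_chop[-1])
```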

  16. [The trial of business data analysis at the Department of Radiology by constructing the auto-regressive integrated moving-average (ARIMA) model].

    Science.gov (United States)

    Tani, Yuji; Ogasawara, Katsuhiko

    2012-01-01

    This study aimed to contribute to the management of a healthcare organization by providing management information through time-series analysis of business data accumulated in the hospital information system, which had not been utilized thus far. We examined the performance of a prediction method using the auto-regressive integrated moving-average (ARIMA) model on business data obtained at the Radiology Department: the model was built from the number of radiological examinations in the past 9 years, and the number of examinations in the final year was predicted and compared with the actual values. The prediction method proved simple and cost-effective, as it used free software, and a simple model could be built by removing trend components from the data during pre-processing. The difference between predicted and actual values was 10%; however, understanding the chronological change was more important than the individual time-series values. Furthermore, the method is highly versatile and adaptable to general time-series data, so different healthcare organizations can use it for the analysis and forecasting of their business data.

  17. A note on moving average models for Gaussian random fields

    DEFF Research Database (Denmark)

    Hansen, Linda Vadgård; Thorarinsdottir, Thordis L.

    The class of moving average models offers a flexible modeling framework for Gaussian random fields, with many well known models, such as the Matérn covariance family and the Gaussian covariance, falling under this framework. Moving average models may also be viewed as a kernel smoothing of a Lévy basis, a general modeling framework which includes several types of non-Gaussian models. We propose a new one-parameter spatial correlation model which arises from a power kernel and show that the associated Hausdorff dimension of the sample paths can take any value between 2 and 3. As a result...

  18. MARD—A moving average rose diagram application for the geosciences

    Science.gov (United States)

    Munro, Mark A.; Blenkinsop, Thomas G.

    2012-12-01

    MARD 1.0 is a computer program for generating smoothed rose diagrams by using a moving average, which is designed for use across the wide range of disciplines encompassed within the Earth Sciences. Available in MATLAB®, Microsoft® Excel and GNU Octave formats, the program is fully compatible with both Microsoft® Windows and Macintosh operating systems. Each version has been implemented in a user-friendly way that requires no prior experience in programming with the software. MARD conducts a moving average smoothing, a form of signal processing low-pass filter, upon the raw circular data according to a set of pre-defined conditions selected by the user. This form of signal processing filter smoothes the angular dataset, emphasising significant circular trends whilst reducing background noise. Customisable parameters include whether the data is uni- or bi-directional, the angular range (or aperture) over which the data is averaged, and whether an unweighted or weighted moving average is to be applied. In addition to the uni- and bi-directional options, the MATLAB® and Octave versions also possess a function for plotting 2-dimensional dips/pitches in a single, lower, hemisphere. The rose diagrams from each version are exportable as one of a selection of common graphical formats. Frequently employed statistical measures that determine the vector mean, mean resultant (or length), circular standard deviation and circular variance are also included. MARD's scope is demonstrated via its application to a variety of datasets within the Earth Sciences.
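The core smoothing operation of a moving average rose diagram can be sketched as a circular (wrap-around) weighted moving average over angular histogram bins. This is an illustrative reimplementation rather than MARD's own code; the bin count and aperture below are assumptions.

```python
def smooth_rose(counts, aperture_bins=3, weights=None):
    # Weighted circular moving average over angular histogram bins;
    # the averaging window wraps around 360 degrees.
    k = aperture_bins // 2
    weights = weights or [1.0] * aperture_bins   # unweighted by default
    wsum = sum(weights)
    n = len(counts)
    return [sum(weights[j] * counts[(i + j - k) % n] for j in range(aperture_bins)) / wsum
            for i in range(n)]

# 36 bins of 10 degrees; a single spiky direction at bin 0 (north)
counts = [0.0] * 36
counts[0] = 9.0
smoothed = smooth_rose(counts, aperture_bins=3)
print(smoothed[35], smoothed[0], smoothed[1])   # the peak spreads across the wrap
```

The wrap-around index keeps the low-pass behavior continuous across 360/0 degrees, and the total count is preserved by the normalized weights.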

  19. Capillary Electrophoresis Sensitivity Enhancement Based on Adaptive Moving Average Method.

    Science.gov (United States)

    Drevinskas, Tomas; Telksnys, Laimutis; Maruška, Audrius; Gorbatsova, Jelena; Kaljurand, Mihkel

    2018-06-05

    In the present work, we demonstrate a novel approach to improve the sensitivity of the "out of lab" portable capillary electrophoretic measurements. Nowadays, many signal enhancement methods are (i) underused (nonoptimal), (ii) overused (distorts the data), or (iii) inapplicable in field-portable instrumentation because of a lack of computational power. The described innovative migration velocity-adaptive moving average method uses an optimal averaging window size and can be easily implemented with a microcontroller. The contactless conductivity detection was used as a model for the development of a signal processing method and the demonstration of its impact on the sensitivity. The frequency characteristics of the recorded electropherograms and peaks were clarified. Higher electrophoretic mobility analytes exhibit higher-frequency peaks, whereas lower electrophoretic mobility analytes exhibit lower-frequency peaks. On the basis of the obtained data, a migration velocity-adaptive moving average algorithm was created, adapted, and programmed into capillary electrophoresis data-processing software. Employing the developed algorithm, each data point is processed depending on a certain migration time of the analyte. Because of the implemented migration velocity-adaptive moving average method, the signal-to-noise ratio improved up to 11 times for sampling frequency of 4.6 Hz and up to 22 times for sampling frequency of 25 Hz. This paper could potentially be used as a methodological guideline for the development of new smoothing algorithms that require adaptive conditions in capillary electrophoresis and other separation methods.

  20. Moving average rules as a source of market instability

    NARCIS (Netherlands)

    Chiarella, C.; He, X.Z.; Hommes, C.H.

    2006-01-01

    Despite the pervasiveness of the efficient markets paradigm in the academic finance literature, the use of various moving average (MA) trading rules remains popular with financial market practitioners. This paper proposes a stochastic dynamic financial market model in which demand for traded assets

  1. Quantified moving average strategy of crude oil futures market based on fuzzy logic rules and genetic algorithms

    Science.gov (United States)

    Liu, Xiaojia; An, Haizhong; Wang, Lijun; Guan, Qing

    2017-09-01

    The moving average strategy is a technical indicator that can generate trading signals to assist investment. While the trading signals tell traders the timing to buy or sell, the moving average cannot tell the trading volume, which is a crucial factor for investment. This paper proposes a fuzzy moving average strategy in which fuzzy logic rules are used to determine the strength of the trading signals, i.e., the trading volume. To compose one fuzzy logic rule, we use four types of moving averages, the length of the moving average period, the fuzzy extent, and the recommended value. Ten fuzzy logic rules form a fuzzy set, which generates a rating level that decides the trading volume. In this process, we apply genetic algorithms to identify an optimal fuzzy logic rule set and use crude oil futures prices from the New York Mercantile Exchange (NYMEX) as the experimental data. Each experiment is repeated 20 times. The results show that, first, the fuzzy moving average strategy obtains a more stable rate of return than the plain moving average strategies. Second, the holding-amount series is highly sensitive to the price series. Third, simple moving average methods are more efficient. Last, the fuzzy extents of extremely low, high, and very high are more popular. These results are helpful in investment decisions.
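A toy version of the idea above: a moving-average crossover gives the direction of the trading signal, and triangular fuzzy memberships of the crossover gap decide the trading volume. The membership bands and recommendation weights below are invented for illustration and are not the GA-optimized rule set of the paper.

```python
def sma(series, n, t):
    # Simple moving average of the n values ending at index t
    return sum(series[t - n + 1:t + 1]) / n

def triangular(x, a, b, c):
    # Triangular fuzzy membership on [a, c], peaking at b
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_signal(prices, short=3, long=8, t=None):
    # Crossover sign gives direction; fuzzy strength of the gap gives volume.
    t = len(prices) - 1 if t is None else t
    gap = (sma(prices, short, t) - sma(prices, long, t)) / prices[t]
    direction = 1 if gap > 0 else -1
    # Rating level: weighted membership of |gap| in "low", "medium", "high" bands
    strength = (0.2 * triangular(abs(gap), 0.000, 0.005, 0.010)
                + 0.6 * triangular(abs(gap), 0.005, 0.010, 0.020)
                + 1.0 * triangular(abs(gap), 0.010, 0.020, 0.050))
    return direction, min(strength, 1.0)

prices = [50.0 + 0.5 * t for t in range(20)]      # steadily rising market
direction, volume = fuzzy_signal(prices)
print(direction, round(volume, 3))
```

A strong upward crossover produces a buy signal with volume near the maximum, while a marginal crossover would produce the same direction at a much smaller volume.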

  2. Kumaraswamy autoregressive moving average models for double bounded environmental data

    Science.gov (United States)

    Bayer, Fábio Mariano; Bayer, Débora Missio; Pumi, Guilherme

    2017-12-01

    In this paper we introduce the Kumaraswamy autoregressive moving average (KARMA) models, a dynamic class of models for time series taking values in the double bounded interval (a,b) and following the Kumaraswamy distribution. The Kumaraswamy family of distributions is widely applied in many areas, especially hydrology and related fields; classical examples are time series representing rates and proportions observed over time. In the proposed KARMA model, the median is modeled by a dynamic structure containing autoregressive and moving average terms, time-varying regressors, unknown parameters and a link function. We introduce the new class of models and discuss conditional maximum likelihood estimation, hypothesis testing, diagnostic analysis and forecasting. In particular, we provide closed-form expressions for the conditional score vector and conditional Fisher information matrix. An application to real environmental data is presented and discussed.

  3. A Pareto-optimal moving average multigene genetic programming model for daily streamflow prediction

    Science.gov (United States)

    Danandeh Mehr, Ali; Kahya, Ercan

    2017-06-01

    Genetic programming (GP) is able to systematically explore alternative model structures of different accuracy and complexity from observed input and output data. The effectiveness of GP in hydrological system identification has been recognized in recent studies. However, selecting a parsimonious (accurate and simple) model from such alternatives still remains a question. This paper proposes a Pareto-optimal moving average multigene genetic programming (MA-MGGP) approach to develop a parsimonious model for single-station streamflow prediction. The three main components of the approach that take us from observed data to a validated model are: (1) data pre-processing, (2) system identification and (3) system simplification. The data pre-processing ingredient uses a simple moving average filter to diminish the lagged prediction effect of stand-alone data-driven models. The multigene ingredient tends to identify the underlying nonlinear system with expressions simpler than classical monolithic GP, and the simplification component exploits a Pareto front plot to select a parsimonious model through an interactive complexity-efficiency trade-off. The approach was tested using daily streamflow records from a station on Senoz Stream, Turkey. Compared to the efficiency results of stand-alone GP, MGGP and conventional multiple linear regression prediction models as benchmarks, the proposed Pareto-optimal MA-MGGP model yields a parsimonious solution of noteworthy practical importance. In addition, the approach allows the user to bring human insight into the problem, examining the evolved models and picking out the best-performing programs for further analysis.

  4. A RED modified weighted moving average for soft real-time application

    Directory of Open Access Journals (Sweden)

    Domańska, Joanna

    2014-09-01

    Full Text Available The popularity of TCP/IP has resulted in an increase in the usage of best-effort networks for real-time communication. Much effort has been spent on ensuring quality of service for soft real-time traffic over IP networks. The Internet Engineering Task Force has proposed architecture components such as Active Queue Management (AQM). The paper investigates the influence of the weighted moving average on packet waiting time reduction for an AQM mechanism, the RED algorithm. The proposed method for computing the average queue length is based on a difference equation (a recursive equation). Depending on a particular optimality criterion, proper parameters of the modified weighted moving average function can be chosen. This change allows a reduction in the number of violations of timing constraints and better use of this mechanism for soft real-time transmissions. The optimization problem is solved through simulations performed in OMNeT++ and later verified experimentally on a Linux implementation.
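The recursive difference equation referred to above is, in its unmodified form, the standard RED exponentially weighted moving average of the queue length; a minimal sketch (the weight w and the queue samples are arbitrary illustration values):

```python
def ewma_queue(samples, w=0.002):
    # RED-style recursive update: avg_k = (1 - w) * avg_{k-1} + w * q_k
    avg = 0.0
    history = []
    for q in samples:
        avg = (1.0 - w) * avg + w * q
        history.append(avg)
    return history

# Step change in the instantaneous queue length: 0 packets, then 100 packets
samples = [0.0] * 100 + [100.0] * 5000
history = ewma_queue(samples, w=0.002)
print(round(history[-1], 1))   # slowly approaches the new level of 100
```

The small weight makes the average track persistent congestion while ignoring short bursts, which is exactly the trade-off the modified weighted moving average of the paper tunes.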

  5. Modeling methane emission via the infinite moving average process

    Czech Academy of Sciences Publication Activity Database

    Jordanova, D.; Dušek, Jiří; Stehlík, M.

    2013-01-01

    Roč. 122, - (2013), s. 40-49 ISSN 0169-7439 R&D Projects: GA MŠk(CZ) ED1.1.00/02.0073; GA ČR(CZ) GAP504/11/1151 Institutional support: RVO:67179843 Keywords : Environmental chemistry * Pareto tails * t-Hill estimator * Weak consistency * Moving average process * Methane emission model Subject RIV: EH - Ecology, Behaviour Impact factor: 2.381, year: 2013

  6. Bivariate copulas on the exponentially weighted moving average control chart

    Directory of Open Access Journals (Sweden)

    Sasigarn Kuvattana

    2016-10-01

    Full Text Available This paper proposes four types of copulas on the Exponentially Weighted Moving Average (EWMA control chart when observations are from an exponential distribution using a Monte Carlo simulation approach. The performance of the control chart is based on the Average Run Length (ARL which is compared for each copula. Copula functions for specifying dependence between random variables are used and measured by Kendall’s tau. The results show that the Normal copula can be used for almost all shifts.

  7. Effect of parameters in moving average method for event detection enhancement using phase sensitive OTDR

    Science.gov (United States)

    Kwon, Yong-Seok; Naeem, Khurram; Jeon, Min Yong; Kwon, Il-bum

    2017-04-01

    We analyze the relations of the parameters in the moving average method to enhance the event detectability of a phase sensitive optical time domain reflectometer (OTDR). If the external events have a unique vibration frequency, then the control parameters of the moving average method should be optimized in order to detect these events efficiently. A phase sensitive OTDR was implemented by a pulsed light source, which is composed of a laser diode, a semiconductor optical amplifier, an erbium-doped fiber amplifier, a fiber Bragg grating filter, and a light receiving part, which has a photo-detector and a high speed data acquisition system. The moving average method operates with three control parameters: the total number of raw traces, M, the number of averaged traces, N, and the moving step size, n. The raw traces are obtained by the phase sensitive OTDR with sound signals generated by a speaker. Using these trace data, the relation of the control parameters is analyzed. The results show that, if the event signal has a single frequency, optimal values of N and n exist for detecting the event efficiently.
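The M/N/n averaging scheme described above can be sketched directly; the trace values are illustrative placeholders for raw OTDR samples:

```python
def moving_average_traces(traces, N, n):
    """Average N consecutive raw traces, sliding the window by n
    traces each step.  traces is a list of M equal-length sample
    lists; returns the list of averaged traces."""
    M = len(traces)
    out = []
    for start in range(0, M - N + 1, n):
        group = traces[start:start + N]
        out.append([sum(col) / N for col in zip(*group)])
    return out
```

With M raw traces, the scheme yields floor((M - N) / n) + 1 averaged traces, which is the basic trade-off between noise suppression (larger N) and temporal resolution (smaller n) that the parameter study explores.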

  8. Image compression using moving average histogram and RBF network

    International Nuclear Information System (INIS)

    Khowaja, S.; Ismaili, I.A.

    2015-01-01

    Modernization and globalization have made multimedia technology one of the fastest growing fields in recent times, but optimal use of bandwidth and storage has been one of the topics which attract the research community to work on. Considering that images have the lion's share in multimedia communication, efficient image compression techniques have become a basic need for optimal use of bandwidth and space. This paper proposes a novel method for image compression based on a fusion of a moving average histogram and an RBF (Radial Basis Function) network. The proposed technique employs the concept of reducing color intensity levels using the moving average histogram technique, followed by the correction of color intensity levels using RBF networks at the reconstruction phase. Existing methods have used low resolution images for testing purposes, but the proposed method has been tested on various image resolutions to give a clear assessment of the technique. The proposed method has been tested on 35 images with varying resolution and has been compared with existing algorithms in terms of CR (Compression Ratio), MSE (Mean Square Error), PSNR (Peak Signal to Noise Ratio) and computational complexity. The outcome shows that the proposed methodology is a better trade-off technique in terms of compression ratio, PSNR, which determines the quality of the image, and computational complexity. (author)

  9. Spatial analysis based on variance of moving window averages

    OpenAIRE

    Wu, B M; Subbarao, K V; Ferrandino, F J; Hao, J J

    2006-01-01

    A new method for analysing spatial patterns was designed based on the variance of moving window averages (VMWA), which can be directly calculated in geographical information systems or a spreadsheet program (e.g. MS Excel). Different types of artificial data were generated to test the method. Regardless of data types, the VMWA method correctly determined the mean cluster sizes. This method was also employed to assess spatial patterns in historical plant disease survey data encompassing both a...
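A one-dimensional sketch of the VMWA statistic, computable in a spreadsheet exactly as the abstract notes; the 2-D map windowing of the original method is omitted:

```python
def vmwa(counts, window):
    """Variance of moving-window averages over a 1-D sequence of
    counts.  How this variance changes as the window size grows is
    what the method uses to infer mean cluster size."""
    means = []
    for i in range(len(counts) - window + 1):
        w = counts[i:i + window]
        means.append(sum(w) / window)
    m = sum(means) / len(means)
    return sum((x - m) ** 2 for x in means) / len(means)
```

For a spatially uniform pattern the window means barely vary, so VMWA is near zero; clustered patterns keep the variance high until the window exceeds the cluster size.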

  10. A Monte Carlo simulation study comparing linear regression, beta regression, variable-dispersion beta regression and fractional logit regression at recovering average difference measures in a two sample design.

    Science.gov (United States)

    Meaney, Christopher; Moineddin, Rahim

    2014-01-24

    In biomedical research, response variables are often encountered which have bounded support on the open unit interval (0,1). Traditionally, researchers have attempted to estimate covariate effects on these types of response data using linear regression. Alternative modelling strategies may include: beta regression, variable-dispersion beta regression, and fractional logit regression models. This study employs a Monte Carlo simulation design to compare the statistical properties of the linear regression model to those of the more novel beta regression, variable-dispersion beta regression, and fractional logit regression models. In the Monte Carlo experiment we assume a simple two sample design. We assume observations are realizations of independent draws from their respective probability models. The randomly simulated draws from the various probability models are chosen to emulate average proportion/percentage/rate differences of pre-specified magnitudes. Following simulation of the experimental data we estimate average proportion/percentage/rate differences. We compare the estimators in terms of bias, variance, type-1 error and power. Estimates of Monte Carlo error associated with these quantities are provided. If response data are beta distributed with constant dispersion parameters across the two samples, then all models are unbiased and have reasonable type-1 error rates and power profiles. If the response data in the two samples have different dispersion parameters, then the simple beta regression model is biased. When the sample size is small (N0 = N1 = 25) linear regression has superior type-1 error rates compared to the other models. Small sample type-1 error rates can be improved in beta regression models using bias correction/reduction methods. In the power experiments, variable-dispersion beta regression and fractional logit regression models have slightly elevated power compared to linear regression models. Similar results were observed if the
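The two-sample beta simulation can be sketched with the standard library's beta sampler. The shape parameters below are illustrative, and only the difference-of-means estimator (the two-sample analogue of linear regression with a group dummy) is shown, not the beta/fractional-logit fits:

```python
import random

def simulate_mean_diff(a0, b0, a1, b1, n, reps, seed=1):
    """Monte Carlo sketch of the two-sample design: draw two beta
    samples of size n and estimate the average proportion difference
    by the difference of sample means.  Returns the Monte Carlo mean
    of the estimates, which should be near the true difference
    a1/(a1+b1) - a0/(a0+b0) when the estimator is unbiased."""
    rng = random.Random(seed)
    est = []
    for _ in range(reps):
        g0 = [rng.betavariate(a0, b0) for _ in range(n)]
        g1 = [rng.betavariate(a1, b1) for _ in range(n)]
        est.append(sum(g1) / n - sum(g0) / n)
    return sum(est) / reps
```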

  11. Using exponentially weighted moving average algorithm to defend against DDoS attacks

    CSIR Research Space (South Africa)

    Machaka, P

    2016-11-01

    Full Text Available This paper seeks to investigate the performance of the Exponentially Weighted Moving Average (EWMA) for mining big data and detection of DDoS attacks in Internet of Things (IoT) infrastructure. The paper will investigate the tradeoff between...

  12. Generalized Heteroskedasticity ACF for Moving Average Models in Explicit Forms

    Directory of Open Access Journals (Sweden)

    Samir Khaled Safi

    2014-02-01

    Full Text Available The autocorrelation function (ACF) measures the correlation between observations at different distances apart. We derive explicit equations for the generalized heteroskedasticity ACF for a moving average of order q, MA(q). We consider two cases. Firstly, when the disturbance terms follow the general covariance matrix structure Cov(wi, wj) = S with sij ≠ 0 for all i ≠ j. Secondly, when the diagonal elements of S are not all identical but sij = 0 for all i ≠ j, i.e. S = diag(s11, s22, …, stt). The forms of the explicit equations depend essentially on the moving average coefficients and the covariance structure of the disturbance terms.
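For the classical special case of iid homoskedastic disturbances (S = σ²I), the MA(q) ACF reduces to a closed form that can be sketched directly; the heteroskedastic generalizations derived in the paper are not reproduced here:

```python
def ma_acf(theta, max_lag):
    """ACF of an MA(q) process x_t = e_t + theta_1 e_{t-1} + ... +
    theta_q e_{t-q} with iid homoskedastic disturbances:
        rho(k) = sum_i c_i c_{i+k} / sum_i c_i^2,
    where c = (1, theta_1, ..., theta_q), and rho(k) = 0 for k > q."""
    c = [1.0] + list(theta)
    denom = sum(x * x for x in c)
    acf = []
    for k in range(max_lag + 1):
        num = sum(c[i] * c[i + k] for i in range(len(c) - k)) if k < len(c) else 0.0
        acf.append(num / denom)
    return acf
```

The cut-off at lag q (rho(k) = 0 for k > q) is the signature property of moving average processes that the generalized heteroskedastic forms modify.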

  13. Estimation and Forecasting in Vector Autoregressive Moving Average Models for Rich Datasets

    DEFF Research Database (Denmark)

    Dias, Gustavo Fruet; Kapetanios, George

    We address the issue of modelling and forecasting macroeconomic variables using rich datasets, by adopting the class of Vector Autoregressive Moving Average (VARMA) models. We overcome the estimation issue that arises with this class of models by implementing an iterative ordinary least squares (...

  14. Estimating Gestational Age With Sonography: Regression-Derived Formula Versus the Fetal Biometric Average.

    Science.gov (United States)

    Cawyer, Chase R; Anderson, Sarah B; Szychowski, Jeff M; Neely, Cherry; Owen, John

    2018-03-01

    To compare the accuracy of a new regression-derived formula developed from the National Fetal Growth Studies data to the common alternative method that uses the average of the gestational ages (GAs) calculated for each fetal biometric measurement (biparietal diameter, head circumference, abdominal circumference, and femur length). This retrospective cross-sectional study identified nonanomalous singleton pregnancies that had a crown-rump length plus at least 1 additional sonographic examination with complete fetal biometric measurements. With the use of the crown-rump length to establish the referent estimated date of delivery, each method's error (National Institute of Child Health and Human Development regression versus Hadlock average [Radiology 1984; 152:497-501]) at every examination was computed. Error, defined as the difference between the crown-rump length-derived GA and each method's predicted GA (weeks), was compared in 3 GA intervals: 1 (14 weeks-20 weeks 6 days), 2 (21 weeks-28 weeks 6 days), and 3 (≥29 weeks). In addition, the proportion of each method's examinations that had errors outside prespecified (±) day ranges was computed by using odds ratios. A total of 16,904 sonograms were identified. The overall and prespecified GA range subset mean errors were significantly smaller for the regression compared to the average (P < .01), and the regression had significantly lower odds of observing examinations outside the specified range of error in GA intervals 2 (odds ratio, 1.15; 95% confidence interval, 1.01-1.31) and 3 (odds ratio, 1.24; 95% confidence interval, 1.17-1.32) than the average method. In a contemporary unselected population of women dated by a crown-rump length-derived GA, the National Institute of Child Health and Human Development regression formula produced fewer estimates outside a prespecified margin of error than the commonly used Hadlock average; the differences were most pronounced for GA estimates at 29 weeks and later.

  15. Estimating Loess Plateau Average Annual Precipitation with Multiple Linear Regression Kriging and Geographically Weighted Regression Kriging

    Directory of Open Access Journals (Sweden)

    Qiutong Jin

    2016-06-01

    Full Text Available Estimating the spatial distribution of precipitation is an important and challenging task in hydrology, climatology, ecology, and environmental science. In order to generate a highly accurate distribution map of average annual precipitation for the Loess Plateau in China, multiple linear regression Kriging (MLRK) and geographically weighted regression Kriging (GWRK) methods were employed using precipitation data from the period 1980–2010 from 435 meteorological stations. The predictors in regression Kriging were selected by stepwise regression analysis from many auxiliary environmental factors, such as elevation (DEM), normalized difference vegetation index (NDVI), solar radiation, slope, and aspect. All predictor distribution maps had a 500 m spatial resolution. Validation precipitation data from 130 hydrometeorological stations were used to assess the prediction accuracies of the MLRK and GWRK approaches. Results showed that both prediction maps with a 500 m spatial resolution interpolated by MLRK and GWRK had a high accuracy and captured detailed spatial distribution data; however, MLRK produced a lower prediction error and a higher variance explanation than GWRK, although the differences were small, in contrast to conclusions from similar studies.

  16. Compact and accurate linear and nonlinear autoregressive moving average model parameter estimation using laguerre functions

    DEFF Research Database (Denmark)

    Chon, K H; Cohen, R J; Holstein-Rathlou, N H

    1997-01-01

    A linear and nonlinear autoregressive moving average (ARMA) identification algorithm is developed for modeling time series data. The algorithm uses Laguerre expansion of kernels (LEK) to estimate Volterra-Wiener kernels. However, instead of estimating linear and nonlinear system dynamics via moving average models, as is the case for the Volterra-Wiener analysis, we propose an ARMA model-based approach. The proposed algorithm is essentially the same as LEK, but is extended to include past values of the output as well. Thus, all of the advantages associated with using the Laguerre…

  17. Time Series ARIMA Models of Undergraduate Grade Point Average.

    Science.gov (United States)

    Rogers, Bruce G.

    The Auto-Regressive Integrated Moving Average (ARIMA) Models, often referred to as Box-Jenkins models, are regression methods for analyzing sequential dependent observations with large amounts of data. The Box-Jenkins approach, a three-stage procedure consisting of identification, estimation and diagnosis, was used to select the most appropriate…

  18. Generalized Heteroskedasticity ACF for Moving Average Models in Explicit Forms

    OpenAIRE

    Samir Khaled Safi

    2014-01-01

    The autocorrelation function (ACF) measures the correlation between observations at different distances apart. We derive explicit equations for the generalized heteroskedasticity ACF for a moving average of order q, MA(q). We consider two cases. Firstly, when the disturbance terms follow the general covariance matrix structure Cov(wi, wj) = S with sij ≠ 0 for all i ≠ j. Secondly, when the diagonal elements of S are not all identical but sij = 0 for all i ≠ j, i.e. S = diag(s11, s22, …

  19. Focused information criterion and model averaging based on weighted composite quantile regression

    KAUST Repository

    Xu, Ganggang; Wang, Suojin; Huang, Jianhua Z.

    2013-01-01

    We study the focused information criterion and frequentist model averaging and their application to post-model-selection inference for weighted composite quantile regression (WCQR) in the context of the additive partial linear models. With the non

  20. Experimental investigation of a moving averaging algorithm for motion perpendicular to the leaf travel direction in dynamic MLC target tracking

    Energy Technology Data Exchange (ETDEWEB)

    Yoon, Jai-Woong; Sawant, Amit; Suh, Yelin; Cho, Byung-Chul; Suh, Tae-Suk; Keall, Paul [Department of Biomedical Engineering, College of Medicine, Catholic University of Korea, Seoul, Korea 131-700 and Research Institute of Biomedical Engineering, Catholic University of Korea, Seoul, 131-700 (Korea, Republic of); Department of Radiation Oncology, Stanford University, Stanford, California 94305 (United States); Department of Radiation Oncology, Stanford University, Stanford, California 94305 (United States) and Department of Radiation Oncology, Asan Medical Center, Seoul, 138-736 (Korea, Republic of); Department of Biomedical Engineering, College of Medicine, Catholic University of Korea, Seoul, 131-700 and Research Institute of Biomedical Engineering, Catholic University of Korea, Seoul, 131-700 (Korea, Republic of); Department of Radiation Oncology, Stanford University, Stanford, California 94305 (United States) and Radiation Physics Laboratory, Sydney Medical School, University of Sydney, 2006 (Australia)

    2011-07-15

    Purpose: In dynamic multileaf collimator (MLC) motion tracking with complex intensity-modulated radiation therapy (IMRT) fields, target motion perpendicular to the MLC leaf travel direction can cause beam holds, which increase beam delivery time by up to a factor of 4. As a means to balance delivery efficiency and accuracy, a moving average algorithm was incorporated into a dynamic MLC motion tracking system (i.e., moving average tracking) to account for target motion perpendicular to the MLC leaf travel direction. The experimental investigation of the moving average algorithm compared with real-time tracking and no compensation beam delivery is described. Methods: The properties of the moving average algorithm were measured and compared with those of real-time tracking (dynamic MLC motion tracking accounting for both target motion parallel and perpendicular to the leaf travel direction) and no compensation beam delivery. The algorithm was investigated using a synthetic motion trace with a baseline drift and four patient-measured 3D tumor motion traces representing regular and irregular motions with varying baseline drifts. Each motion trace was reproduced by a moving platform. The delivery efficiency, geometric accuracy, and dosimetric accuracy were evaluated for conformal, step-and-shoot IMRT, and dynamic sliding window IMRT treatment plans using the synthetic and patient motion traces. The dosimetric accuracy was quantified via a γ-test with a 3%/3 mm criterion. Results: The delivery efficiency ranged from 89 to 100% for moving average tracking, 26%-100% for real-time tracking, and 100% (by definition) for no compensation. The root-mean-square geometric error ranged from 3.2 to 4.0 mm for moving average tracking, 0.7-1.1 mm for real-time tracking, and 3.7-7.2 mm for no compensation. The percentage of dosimetric points failing the γ-test ranged from 4 to 30% for moving average tracking, 0%-23% for real-time tracking, and 10%-47% for no compensation.
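The γ-test used to quantify dosimetric accuracy can be illustrated with a simplified one-dimensional global version; real evaluations are 2-D or 3-D and interpolate between grid points, which this sketch omits:

```python
def gamma_index(ref, meas, spacing, dose_tol=0.03, dist_tol=3.0):
    """Simplified 1-D global gamma test (3%/3 mm by default): for each
    reference point, search the measured points for the minimum
    combined dose/distance deviation.  A point passes when gamma <= 1.
    ref and meas are dose samples on the same grid, spacing in mm;
    the dose criterion is taken relative to the reference maximum."""
    d_max = max(ref)
    gammas = []
    for i, dr in enumerate(ref):
        best = float("inf")
        for j, dm in enumerate(meas):
            dd = (dm - dr) / (dose_tol * d_max)       # dose deviation
            dx = (j - i) * spacing / dist_tol         # distance deviation
            best = min(best, (dd * dd + dx * dx) ** 0.5)
        gammas.append(best)
    return gammas
```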

  1. Experimental investigation of a moving averaging algorithm for motion perpendicular to the leaf travel direction in dynamic MLC target tracking.

    Science.gov (United States)

    Yoon, Jai-Woong; Sawant, Amit; Suh, Yelin; Cho, Byung-Chul; Suh, Tae-Suk; Keall, Paul

    2011-07-01

    In dynamic multileaf collimator (MLC) motion tracking with complex intensity-modulated radiation therapy (IMRT) fields, target motion perpendicular to the MLC leaf travel direction can cause beam holds, which increase beam delivery time by up to a factor of 4. As a means to balance delivery efficiency and accuracy, a moving average algorithm was incorporated into a dynamic MLC motion tracking system (i.e., moving average tracking) to account for target motion perpendicular to the MLC leaf travel direction. The experimental investigation of the moving average algorithm compared with real-time tracking and no compensation beam delivery is described. The properties of the moving average algorithm were measured and compared with those of real-time tracking (dynamic MLC motion tracking accounting for both target motion parallel and perpendicular to the leaf travel direction) and no compensation beam delivery. The algorithm was investigated using a synthetic motion trace with a baseline drift and four patient-measured 3D tumor motion traces representing regular and irregular motions with varying baseline drifts. Each motion trace was reproduced by a moving platform. The delivery efficiency, geometric accuracy, and dosimetric accuracy were evaluated for conformal, step-and-shoot IMRT, and dynamic sliding window IMRT treatment plans using the synthetic and patient motion traces. The dosimetric accuracy was quantified via a γ-test with a 3%/3 mm criterion. The delivery efficiency ranged from 89 to 100% for moving average tracking, 26%-100% for real-time tracking, and 100% (by definition) for no compensation. The root-mean-square geometric error ranged from 3.2 to 4.0 mm for moving average tracking, 0.7-1.1 mm for real-time tracking, and 3.7-7.2 mm for no compensation. The percentage of dosimetric points failing the γ-test ranged from 4 to 30% for moving average tracking, 0%-23% for real-time tracking, and 10%-47% for no compensation. The delivery efficiency of

  2. Application of a Combined Model with Autoregressive Integrated Moving Average (ARIMA) and Generalized Regression Neural Network (GRNN) in Forecasting Hepatitis Incidence in Heng County, China.

    Directory of Open Access Journals (Sweden)

    Wudi Wei

    Full Text Available Hepatitis is a serious public health problem with increasing cases and property damage in Heng County. It is necessary to develop a model to predict the hepatitis epidemic that could be useful for preventing this disease. The autoregressive integrated moving average (ARIMA) model and the generalized regression neural network (GRNN) model were used to fit the incidence data from the Heng County CDC (Center for Disease Control and Prevention) from January 2005 to December 2012. Then, the ARIMA-GRNN hybrid model was developed. The incidence data from January 2013 to December 2013 were used to validate the models. Several parameters, including mean absolute error (MAE), root mean square error (RMSE), mean absolute percentage error (MAPE) and mean square error (MSE), were used to compare the performance among the three models. The morbidity of hepatitis from Jan 2005 to Dec 2012 has seasonal variation and a slightly rising trend. The ARIMA(0,1,2)(1,1,1)12 model was the most appropriate one, with the residual test showing a white noise sequence. The smoothing factor of the basic GRNN model and the combined model was 1.8 and 0.07, respectively. The four parameters of the hybrid model were lower than those of the two single models in the validation. The parameter values of the GRNN model were the lowest in the fitting of the three models. The hybrid ARIMA-GRNN model showed better hepatitis incidence forecasting in Heng County than the single ARIMA model and the basic GRNN model. It is a potential decision-supportive tool for controlling hepatitis in Heng County.

  3. Forecasting daily meteorological time series using ARIMA and regression models

    Science.gov (United States)

    Murat, Małgorzata; Malinowska, Iwona; Gos, Magdalena; Krzyszczak, Jaromir

    2018-04-01

    The daily air temperature and precipitation time series recorded between January 1, 1980 and December 31, 2010 in four European sites (Jokioinen, Dikopshof, Lleida and Lublin) from different climatic zones were modeled and forecasted. In our forecasting we used the Box-Jenkins and Holt-Winters seasonal autoregressive integrated moving average methods, the autoregressive integrated moving average with external regressors in the form of Fourier terms, and time series regression including trend and seasonality components, implemented with R software. It was demonstrated that the obtained models are able to capture the dynamics of the time series data and to produce sensible forecasts.

  4. Exponentially Weighted Moving Average Chart as a Suitable Tool for Nuchal Translucency Quality Review

    Czech Academy of Sciences Publication Activity Database

    Hynek, M.; Smetanová, D.; Stejskal, D.; Zvárová, Jana

    2014-01-01

    Roč. 34, č. 4 (2014), s. 367-376 ISSN 0197-3851 Institutional support: RVO:67985807 Keywords : nuchal translucency * exponentially weighted moving average model * statistics Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 3.268, year: 2014

  5. Application of autoregressive moving average model in reactor noise analysis

    International Nuclear Information System (INIS)

    Tran Dinh Tri

    1993-01-01

    The application of an autoregressive (AR) model to estimating noise measurements has achieved many successes in reactor noise analysis in the last ten years. The physical processes that take place in the nuclear reactor, however, are described by an autoregressive moving average (ARMA) model rather than by an AR model. Consequently more correct results could be obtained by applying the ARMA model instead of the AR model to reactor noise analysis. In this paper the system of the generalised Yule-Walker equations is derived from the equation of an ARMA model, then a method for its solution is given. Numerical results show the applications of the method proposed. (author)
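The ordinary Yule-Walker system, of which the paper derives a generalized ARMA version, can be sketched for the pure AR(p) case using sample autocovariances and a small Gaussian-elimination solver:

```python
import random

def yule_walker_ar(series, p):
    """Estimate AR(p) coefficients by solving the ordinary
    Yule-Walker system R a = r built from sample autocovariances;
    the generalized system for full ARMA models extends these
    equations with moving average terms."""
    n = len(series)
    m = sum(series) / n
    def acov(k):
        return sum((series[t] - m) * (series[t + k] - m) for t in range(n - k)) / n
    # Toeplitz matrix R and right-hand side r
    A = [[acov(abs(i - j)) for j in range(p)] for i in range(p)]
    b = [acov(k + 1) for k in range(p)]
    # Gaussian elimination with partial pivoting
    for col in range(p):
        piv = max(range(col, p), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, p):
            f = A[r][col] / A[col][col]
            for c in range(col, p):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    a = [0.0] * p
    for r in range(p - 1, -1, -1):
        a[r] = (b[r] - sum(A[r][c] * a[c] for c in range(r + 1, p))) / A[r][r]
    return a

# Demo: recover phi of a simulated AR(1) process with phi = 0.7.
rng = random.Random(0)
x = [0.0]
for _ in range(5000):
    x.append(0.7 * x[-1] + rng.gauss(0.0, 1.0))
phi_hat = yule_walker_ar(x[1:], 1)[0]
```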

  6. Extreme Learning Machine and Moving Least Square Regression Based Solar Panel Vision Inspection

    Directory of Open Access Journals (Sweden)

    Heng Liu

    2017-01-01

    Full Text Available In recent years, learning based machine intelligence has aroused a lot of attention across science and engineering. Particularly in the field of automatic industry inspection, machine learning based vision inspection plays a more and more important role in defect identification and feature extraction. Through learning from image samples, many features of industry objects, such as shapes, positions, and orientation angles, can be obtained and then well utilized to determine whether there is a defect or not. However, robustness and quickness are not easily achieved in such an inspection way. In this work, for solar panel vision inspection, we present an extreme learning machine (ELM) and moving least square regression based approach to identify solder joint defects and detect the panel position. Firstly, histogram peaks distribution (HPD) and fractional calculus are applied for image preprocessing. Then an ELM-based defective solder joints identification is discussed in detail. Finally, the moving least square regression (MLSR) algorithm is introduced for solar panel position determination. Experimental results and comparisons show that the proposed ELM and MLSR based inspection method is efficient not only in detection accuracy but also in processing speed.

  7. Relationship research between meteorological disasters and stock markets based on a multifractal detrending moving average algorithm

    Science.gov (United States)

    Li, Qingchen; Cao, Guangxi; Xu, Wei

    2018-01-01

    Based on a multifractal detrending moving average algorithm (MFDMA), this study uses the fractionally autoregressive integrated moving average process (ARFIMA) to demonstrate the effectiveness of MFDMA in the detection of auto-correlation at different sample lengths and to simulate some artificial time series with the same length as the actual sample interval. We analyze the effect of predictable and unpredictable meteorological disasters on the US and Chinese stock markets and the degree of long memory in different sectors. Furthermore, we conduct a preliminary investigation to determine whether the fluctuations of financial markets caused by meteorological disasters are derived from the normal evolution of the financial system itself or not. We also propose several reasonable recommendations.

  8. Analysis of nonlinear systems using ARMA [autoregressive moving average] models

    International Nuclear Information System (INIS)

    Hunter, N.F. Jr.

    1990-01-01

    While many vibration systems exhibit primarily linear behavior, a significant percentage of the systems encountered in vibration and model testing are mildly to severely nonlinear. Analysis methods for such nonlinear systems are not yet well developed and the response of such systems is not accurately predicted by linear models. Nonlinear ARMA (autoregressive moving average) models are one method for the analysis and response prediction of nonlinear vibratory systems. In this paper we review the background of linear and nonlinear ARMA models, and illustrate the application of these models to nonlinear vibration systems. We conclude by summarizing the advantages and disadvantages of ARMA models and emphasizing prospects for future development. 14 refs., 11 figs

  9. Dosimetric consequences of planning lung treatments on 4DCT average reconstruction to represent a moving tumour

    International Nuclear Information System (INIS)

    Dunn, L.F.; Taylor, M.L.; Kron, T.; Franich, R.

    2010-01-01

    Full text: Anatomic motion during a radiotherapy treatment is one of the more significant challenges in contemporary radiation therapy. For tumours of the lung, motion due to patient respiration makes both accurate planning and dose delivery difficult. One approach is to use the maximum intensity projection (MIP) obtained from a 4D computed tomography (CT) scan and then use this to determine the treatment volume. The treatment is then planned on a 4DCT average reconstruction, rather than assuming the entire ITV has a uniform tumour density. This raises the question: how well does planning on a 'blurred' distribution of density with CT values greater than lung density but less than tumour density match the true case of a tumour moving within lung tissue? The aim of this study was to answer this question, determining the dosimetric impact of using a 4D-CT average reconstruction as the basis for a radiotherapy treatment plan. To achieve this, Monte-Carlo simulations were undertaken using GEANT4. The geometry consisted of a tumour (diameter 30 mm) moving with a sinusoidal pattern of amplitude = 20 mm. The tumour's excursion occurs within a lung equivalent volume beyond a chest wall interface. Motion was defined parallel to a 6 MV beam. This was then compared to a single oblate tumour of a magnitude determined by the extremes of the tumour motion. The variable density of the 4DCT average tumour is simulated by a time-weighted average, to achieve the observed density gradient. The generic moving tumour geometry is illustrated in the Figure.

  10. Modified Exponential Weighted Moving Average (EWMA) Control Chart on Autocorrelation Data

    Science.gov (United States)

    Herdiani, Erna Tri; Fandrilla, Geysa; Sunusi, Nurtiti

    2018-03-01

    In general, observations of a statistical process control are assumed to be mutually independent. However, this assumption is often violated in practice. Consequently, statistical process controls were developed for interrelated processes, including Shewhart, cumulative sum (CUSUM), and exponentially weighted moving average (EWMA) control charts for data that are autocorrelated. One researcher stated that these charts are not suitable if the same control limits are used as in the case of independent variables. For this reason, it is necessary to apply a time series model in building the control chart. A classical control chart for independent variables is usually applied to residual processes. This procedure is permitted provided that the residuals are independent. In 1978, a Shewhart modification for the autoregressive process was introduced, using the distance between the sample mean and the target value compared to the standard deviation of the autocorrelated process. In this paper we examine the mean of the EWMA for an autocorrelated process derived from Montgomery and Patel. Performance was investigated by examining the Average Run Length (ARL) based on the Markov chain method.
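The ARL examination can be sketched as a Monte Carlo experiment on in-control AR(1) data. Scaling the control limits by the process standard deviation σ/√(1-φ²) is one simple way to account for autocorrelation; it is not necessarily the modification studied in the paper, which evaluates ARL via a Markov chain method:

```python
import random

def arl_ewma_ar1(phi, lam, L, reps=500, max_n=5000, seed=7):
    """Monte Carlo ARL of an EWMA chart applied to in-control AR(1)
    data x_t = phi*x_{t-1} + e_t (unit innovation variance), with
    limits scaled by the AR(1) process standard deviation
    1/sqrt(1 - phi^2) to account for autocorrelation."""
    rng = random.Random(seed)
    sigma_x = 1.0 / (1.0 - phi * phi) ** 0.5
    half = L * sigma_x * (lam / (2.0 - lam)) ** 0.5
    total = 0
    for _ in range(reps):
        x, z, n = 0.0, 0.0, max_n
        for i in range(1, max_n + 1):
            x = phi * x + rng.gauss(0.0, 1.0)
            z = lam * x + (1.0 - lam) * z
            if abs(z) > half:
                n = i
                break
        total += n
    return total / reps
```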

  11. A generalization of the preset count moving average algorithm for digital rate meters

    International Nuclear Information System (INIS)

    Arandjelovic, Vojislav; Koturovic, Aleksandar; Vukanovic, Radomir

    2002-01-01

    A generalized definition of the preset count moving average algorithm for digital rate meters has been introduced. The algorithm is based on the knowledge of time intervals between successive pulses in random-pulse sequences. The steady state and transient regimes of the algorithm have been characterized. A measure for statistical fluctuations of the successive measurement results has been introduced. The versatility of the generalized algorithm makes it suitable for application in the design of the software of modern measuring/control digital systems
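A minimal sketch of the preset-count idea the abstract describes: with a preset count of m pulses, each successive rate estimate divides m by the time spanned by the last m inter-pulse intervals. The window length and pulse stream below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def preset_count_rate(arrival_times, m=16):
    """Rate estimate after each pulse: m pulses divided by the elapsed time
    covered by the most recent m inter-pulse intervals."""
    t = np.asarray(arrival_times, dtype=float)
    intervals = np.diff(t)
    # moving sum of the last m intervals (defined once m intervals exist)
    window_sums = np.convolve(intervals, np.ones(m), mode="valid")
    return m / window_sums

# a constant 100 pulses/s stream: every window spans m/100 seconds
times = np.arange(0, 2, 0.01)
rates = preset_count_rate(times, m=16)
```

For a random-pulse sequence, the statistical fluctuation of successive estimates shrinks as the preset count m grows, at the cost of a slower transient response, which is the trade-off the generalized algorithm characterizes.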

  12. Quantifying walking and standing behaviour of dairy cows using a moving average based on output from an accelerometer

    DEFF Research Database (Denmark)

    Nielsen, Lars Relund; Pedersen, Asger Roer; Herskin, Mette S

    2010-01-01

    …in sequences of approximately 20 s for the period of 10 min. Afterwards the cows were stimulated to move/lift the legs while standing in a cubicle. The behaviour was video recorded, and the recordings were analysed second by second for walking and standing behaviour as well as the number of steps taken. Various algorithms for predicting walking/standing status were compared. The algorithms were all based on a limit of a moving average calculated by using one of two outputs of the accelerometer, either a motion index or a step count, and applied over periods of 3 or 5 s. Furthermore, we investigated the effect of additionally applying the rule: a walking period must last at least 5 s. The results indicate that the lowest misclassification rate (10%) of walking and standing was obtained based on the step count with a moving average of 3 s and with the rule applied. However, the rate of misclassification…
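The best-performing rule reported above (3 s moving average of the step count, plus the minimum 5 s walking-bout rule) can be sketched directly. The threshold value and synthetic step counts are assumptions for illustration:

```python
import numpy as np

def classify_walking(steps_per_sec, window=3, limit=0.5, min_bout=5):
    """Label each second as walking (True) when the moving average of the
    step count exceeds `limit`; then drop walking bouts shorter than min_bout s."""
    s = np.asarray(steps_per_sec, dtype=float)
    ma = np.convolve(s, np.ones(window) / window, mode="same")
    walking = ma > limit
    # enforce the rule: a walking period must last at least min_bout seconds
    out = walking.copy()
    i, n = 0, len(out)
    while i < n:
        if out[i]:
            j = i
            while j < n and out[j]:
                j += 1
            if j - i < min_bout:
                out[i:j] = False   # too-short bout reclassified as standing
            i = j
        else:
            i += 1
    return out

# one isolated leg lift, then a 7 s walking bout (steps per second)
signal = np.array([0, 0, 0, 1, 0, 0, 2, 2, 2, 2, 2, 2, 2, 0, 0, 0], dtype=float)
labels = classify_walking(signal)
```

The smoothing suppresses the single leg lift while the sustained bout survives both the threshold and the minimum-duration rule.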

  13. An Exponentially Weighted Moving Average Control Chart for Bernoulli Data

    DEFF Research Database (Denmark)

    Spliid, Henrik

    2010-01-01

    We consider a production process in which units are produced in a sequential manner. The units can, for example, be manufactured items or services, provided to clients. Each unit produced can be a failure with probability p or a success (non-failure) with probability (1-p). A novel exponentially weighted moving average (EWMA) control chart intended for surveillance of the probability of failure, p, is described. The chart is based on counting the number of non-failures produced between failures in combination with a variance-stabilizing transformation. The distribution function of the transformation is given and its limit for small values of p is derived. Control of high yield processes is discussed and the chart is shown to perform very well in comparison with both the most common alternative EWMA chart and the CUSUM chart. The construction and the use of the proposed EWMA chart…

  14. An Invariance Property for the Maximum Likelihood Estimator of the Parameters of a Gaussian Moving Average Process

    OpenAIRE

    Godolphin, E. J.

    1980-01-01

    It is shown that the estimation procedure of Walker leads to estimates of the parameters of a Gaussian moving average process which are asymptotically equivalent to the maximum likelihood estimates proposed by Whittle and represented by Godolphin.

  15. Forecasting Rice Productivity and Production of Odisha, India, Using Autoregressive Integrated Moving Average Models

    Directory of Open Access Journals (Sweden)

    Rahul Tripathi

    2014-01-01

    Full Text Available Forecasting of rice area, production, and productivity of Odisha was made from the historical data of 1950-51 to 2008-09 by using univariate autoregressive integrated moving average (ARIMA) models and was compared with the forecasted all Indian data. The autoregressive (p) and moving average (q) parameters were identified based on the significant spikes in the plots of partial autocorrelation function (PACF) and autocorrelation function (ACF) of the different time series. ARIMA (2, 1, 0) model was found suitable for all Indian rice productivity and production, whereas ARIMA (1, 1, 1) was best fitted for forecasting of rice productivity and production in Odisha. Prediction was made for the immediate next three years, that is, 2007-08, 2008-09, and 2009-10, using the best fitted ARIMA models based on minimum value of the selection criterion, that is, Akaike information criteria (AIC) and Schwarz-Bayesian information criteria (SBC). The performances of models were validated by comparing with percentage deviation from the actual values and mean absolute percent error (MAPE), which was found to be 0.61 and 2.99% for the area under rice in Odisha and India, respectively. Similarly for prediction of rice production and productivity in Odisha and India, the MAPE was found to be less than 6%.
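The identification step described above (choosing p and q from ACF/PACF spikes after differencing) can be sketched with a hand-rolled sample ACF. The synthetic trend-plus-AR(1) series below merely stands in for the rice data:

```python
import numpy as np

def sample_acf(x, nlags=10):
    """Sample autocorrelation r_k = c_k / c_0, c_k being the lag-k autocovariance."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    n = len(x)
    c0 = np.dot(x, x) / n
    return np.array([1.0] + [np.dot(x[:-k], x[k:]) / (n * c0)
                             for k in range(1, nlags + 1)])

rng = np.random.default_rng(1)
e = rng.normal(size=300)
ar = np.zeros(300)
for t in range(1, 300):
    ar[t] = 0.7 * ar[t - 1] + e[t]          # AR(1) noise component
series = np.linspace(1.0, 3.0, 300) + 0.1 * ar   # trend + AR(1), a stand-in series

diffed = np.diff(series)                     # d = 1 removes the linear trend
acf = sample_acf(diffed, nlags=10)
band = 1.96 / np.sqrt(len(diffed))           # approx. 95% band under white noise
```

Lags whose sample ACF (or PACF) falls outside the band suggest candidate q (or p) orders; the final model is then chosen by minimizing AIC/SBC, as the paper does.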

  16. Forecasting Construction Tender Price Index in Ghana using Autoregressive Integrated Moving Average with Exogenous Variables Model

    Directory of Open Access Journals (Sweden)

    Ernest Kissi

    2018-03-01

    Full Text Available Prices of construction resources keep fluctuating due to the unstable economic situations experienced over the years. Clients' knowledge of their financial commitments toward their intended project remains the basis for their final decision. The use of a construction tender price index provides a realistic estimate at the early stage of the project. Tender price index (TPI) is influenced by various economic factors, hence several statistical techniques have been employed in forecasting it. Some of these include regression, time series, and vector error correction, among others. However, in recent times the integrated modelling approach has been gaining popularity due to its ability to give powerful predictive accuracy. Thus, in line with this assumption, the aim of this study is to apply autoregressive integrated moving average with exogenous variables (ARIMAX) in modelling TPI. The results showed that the ARIMAX model has a better predictive ability than the single approach. The study further confirms the earlier position of previous research on the need to use the integrated model technique in forecasting TPI. This model will assist practitioners to forecast the future values of the tender price index. Although the study focuses on the Ghanaian economy, the findings can be broadly applicable to other developing countries which share similar economic characteristics.

  17. Focused information criterion and model averaging based on weighted composite quantile regression

    KAUST Repository

    Xu, Ganggang

    2013-08-13

    We study the focused information criterion and frequentist model averaging and their application to post-model-selection inference for weighted composite quantile regression (WCQR) in the context of the additive partial linear models. With the non-parametric functions approximated by polynomial splines, we show that, under certain conditions, the asymptotic distribution of the frequentist model averaging WCQR-estimator of a focused parameter is a non-linear mixture of normal distributions. This asymptotic distribution is used to construct confidence intervals that achieve the nominal coverage probability. With properly chosen weights, the focused information criterion based WCQR estimators are not only robust to outliers and non-normal residuals but also can achieve efficiency close to the maximum likelihood estimator, without assuming the true error distribution. Simulation studies and a real data analysis are used to illustrate the effectiveness of the proposed procedure. © 2013 Board of the Foundation of the Scandinavian Journal of Statistics.

  18. A Two-Factor Autoregressive Moving Average Model Based on Fuzzy Fluctuation Logical Relationships

    Directory of Open Access Journals (Sweden)

    Shuang Guan

    2017-10-01

    Full Text Available Many of the existing autoregressive moving average (ARMA) forecast models are based on one main factor. In this paper, we proposed a new two-factor first-order ARMA forecast model based on fuzzy fluctuation logical relationships of both a main factor and a secondary factor of a historical training time series. Firstly, we generated a fluctuation time series (FTS) for two factors by calculating the difference of each data point with its previous day, then finding the absolute means of the two FTSs. We then constructed a fuzzy fluctuation time series (FFTS) according to the defined linguistic sets. The next step was establishing fuzzy fluctuation logical relation groups (FFLRGs) for a two-factor first-order autoregressive (AR(1)) model and forecasting the training data with the AR(1) model. Then we built FFLRGs for a two-factor first-order autoregressive moving average (ARMA(1,m)) model. Lastly, we forecasted test data with the ARMA(1,m) model. To illustrate the performance of our model, we used real Taiwan Stock Exchange Capitalization Weighted Stock Index (TAIEX) and Dow Jones datasets as a secondary factor to forecast TAIEX. The experiment results indicate that the proposed two-factor fluctuation ARMA method outperformed the one-factor method based on real historic data. The secondary factor may have some effects on the main factor and thereby impact the forecasting results. Using fuzzified fluctuations rather than fuzzified real data could avoid the influence of extreme values in historic data, which affects forecasting negatively. To verify the accuracy and effectiveness of the model, we also employed our method to forecast the Shanghai Stock Exchange Composite Index (SHSECI) from 2001 to 2015 and the international gold price from 2000 to 2010.

  19. The Moving Average Convergence-Divergence as a Tool for Investment Decisions in the Stock Market

    Directory of Open Access Journals (Sweden)

    Rodrigo Silva Vidotto

    2009-04-01

    Full Text Available The increase in the number of investors at Bovespa since 2000 is due to stabilized inflation and falling interest rates. The use of tools that assist investors in selling and buying stocks is very important in a competitive and risky market. The technical analysis of stocks is used to search for trends in the movements of share prices and therefore indicate a suitable moment to buy or sell stocks. Among these technical indicators is the Moving Average Convergence-Divergence [MACD], which uses the concept of moving average in its equation and is considered by financial analysts a simple tool to operate and analyze. This article aims to assess the effectiveness of the use of the MACD to indicate the moment to purchase and sell stocks in five companies, selected at random from a total of ninety companies in the Bovespa New Market, and to analyze the profitability gained during 2006, taking as a reference the appreciation of the Ibovespa index in that year. The results show that the cumulative average return of the five companies was 26.7% against a cumulative average return of 0.90% for the Ibovespa.
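The MACD indicator evaluated above has a standard construction: the difference between a fast and a slow exponential moving average, with an EMA signal line. A minimal pandas sketch with the conventional 12/26/9 spans (the price series is synthetic):

```python
import pandas as pd

def macd(close, fast=12, slow=26, signal=9):
    """Standard MACD: fast EMA minus slow EMA, plus an EMA signal line.
    A positive histogram (line above signal) is read as a bullish indication."""
    ema_fast = close.ewm(span=fast, adjust=False).mean()
    ema_slow = close.ewm(span=slow, adjust=False).mean()
    line = ema_fast - ema_slow
    sig = line.ewm(span=signal, adjust=False).mean()
    return line, sig, line - sig

prices = pd.Series(range(1, 101), dtype=float)   # a steadily rising price series
line, sig, hist = macd(prices)
```

Buy/sell signals are typically taken where the MACD line crosses the signal line; on a steadily rising series the MACD line settles at a positive value, reflecting the uptrend.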

  20. Statistical aspects of autoregressive-moving average models in the assessment of radon mitigation

    International Nuclear Information System (INIS)

    Dunn, J.E.; Henschel, D.B.

    1989-01-01

    Radon values, as reflected by hourly scintillation counts, seem dominated by major, pseudo-periodic, random fluctuations. This methodological paper reports a moderate degree of success in modeling these data using relatively simple autoregressive-moving average models to assess the effectiveness of radon mitigation techniques in existing housing. While accounting for the natural correlation of successive observations, familiar summary statistics such as steady state estimates, standard errors, confidence limits, and tests of hypotheses are produced. The Box-Jenkins approach is used throughout. In particular, intervention analysis provides an objective means of assessing the effectiveness of an active mitigation measure, such as a fan off/on cycle. Occasionally, failure to declare a significant intervention has suggested a means of remedial action in the data collection procedure.

  1. Autoregressive moving average fitting for real standard deviation in Monte Carlo power distribution calculation

    International Nuclear Information System (INIS)

    Ueki, Taro

    2010-01-01

    The noise propagation of tallies in the Monte Carlo power method can be represented by the autoregressive moving average process of orders p and p-1 (ARMA(p,p-1)), where p is an integer larger than or equal to two. The formula of the autocorrelation of ARMA(p,q), p≥q+1, indicates that ARMA(3,2) fitting is equivalent to lumping the eigenmodes of fluctuation propagation into three modes: the slow, intermediate and fast attenuation modes. Therefore, ARMA(3,2) fitting was applied to the real standard deviation estimation of fuel assemblies at particular heights. The numerical results show that straightforward ARMA(3,2) fitting is promising, but a stability issue must be resolved before incorporation into the distributed version of production Monte Carlo codes. The same numerical results reveal that the average performance of ARMA(3,2) fitting is equivalent to that of the batch method in MCNP with a batch size larger than one hundred and smaller than two hundred cycles for a 1100 MWe pressurized water reactor. The bias correction of low lag autocovariances in MVP/GMVP is demonstrated to have the potential of improving the average performance of ARMA(3,2) fitting. (author)

  2. Medium term municipal solid waste generation prediction by autoregressive integrated moving average

    International Nuclear Information System (INIS)

    Younes, Mohammad K.; Nopiah, Z. M.; Basri, Noor Ezlin A.; Basri, Hassan

    2014-01-01

    Generally, solid waste handling and management are performed by the municipality or local authority. In most developing countries, local authorities suffer from serious solid waste management (SWM) problems and from insufficient data and strategic planning. Thus it is important to develop a robust solid waste generation forecasting model. It helps to properly manage the generated solid waste and to develop future plans based on relatively accurate figures. In Malaysia, the solid waste generation rate increases rapidly due to population growth and the new consumption trends that characterize the modern life style. This paper aims to develop a monthly solid waste forecasting model using Autoregressive Integrated Moving Average (ARIMA); such a model is applicable even when there is a lack of data and will help the municipality properly establish the annual service plan. The results show that the ARIMA (6,1,0) model predicts monthly municipal solid waste generation with a root mean square error equal to 0.0952, and the model forecast residuals are within the accepted 95% confidence interval.

  3. Medium term municipal solid waste generation prediction by autoregressive integrated moving average

    Science.gov (United States)

    Younes, Mohammad K.; Nopiah, Z. M.; Basri, Noor Ezlin A.; Basri, Hassan

    2014-09-01

    Generally, solid waste handling and management are performed by the municipality or local authority. In most developing countries, local authorities suffer from serious solid waste management (SWM) problems and from insufficient data and strategic planning. Thus it is important to develop a robust solid waste generation forecasting model. It helps to properly manage the generated solid waste and to develop future plans based on relatively accurate figures. In Malaysia, the solid waste generation rate increases rapidly due to population growth and the new consumption trends that characterize the modern life style. This paper aims to develop a monthly solid waste forecasting model using Autoregressive Integrated Moving Average (ARIMA); such a model is applicable even when there is a lack of data and will help the municipality properly establish the annual service plan. The results show that the ARIMA (6,1,0) model predicts monthly municipal solid waste generation with a root mean square error equal to 0.0952, and the model forecast residuals are within the accepted 95% confidence interval.

  4. Shape and depth determinations from second moving average residual self-potential anomalies

    International Nuclear Information System (INIS)

    Abdelrahman, E M; El-Araby, T M; Essa, K S

    2009-01-01

    We have developed a semi-automatic method to determine the depth and shape (shape factor) of a buried structure from second moving average residual self-potential anomalies obtained from observed data using filters of successive window lengths. The method involves using a relationship between the depth and the shape of the source together with a combination of windowed observations. The relationship represents a parametric family of curves (window curves). For a fixed window length, the depth is determined for each shape factor. The computed depths are plotted against the shape factors, representing a continuous monotonically increasing curve. The solution for the shape and depth is read at the common intersection of the window curves. The validity of the method is tested on a synthetic example with and without random errors and on two field examples from Turkey and Germany. In all cases examined, the depth and the shape solutions obtained are in very good agreement with the true ones.

  5. Moving Average Filter-Based Phase-Locked Loops: Performance Analysis and Design Guidelines

    DEFF Research Database (Denmark)

    Golestan, Saeed; Ramezani, Malek; Guerrero, Josep M.

    2014-01-01

    …this challenge, incorporating moving average filter(s) (MAF) into the PLL structure has been proposed in some recent literature. A MAF is a linear-phase finite impulse response filter which can act as an ideal low-pass filter, if certain conditions hold. The main aim of this paper is to present the control design guidelines for a typical MAF-based PLL. The paper starts with the general description of MAFs. The main challenge associated with using the MAFs is then explained, and its possible solutions are discussed. The paper then proceeds with a brief overview of the different MAF-based PLLs. In each case, the PLL block diagram description is shown, the advantages and limitations are briefly discussed, and the tuning approach (if available) is evaluated. The paper then presents two systematic methods to design the control parameters of a typical MAF-based PLL: one for the case of using a proportional…
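The "ideal low-pass filter" property mentioned above follows from the MAF nulling every frequency whose period divides the window length. A small numerical sketch (sample rate, grid frequency, and ripple are illustrative assumptions): a MAF spanning one grid period removes the double-frequency ripple a PLL typically sees, leaving only the DC component:

```python
import numpy as np

fs = 1200.0            # sample rate in Hz (assumed)
f_grid = 60.0          # grid frequency; window spans one grid period
N = int(fs / f_grid)   # MAF window length = 20 samples

t = np.arange(0, 0.5, 1 / fs)
ripple = np.sin(2 * np.pi * 120.0 * t + 0.3)   # 2*f_grid ripple seen by the PLL
x = 1.5 + ripple                               # DC term plus ripple

# moving average filter; 'valid' keeps only the steady-state output
maf = np.convolve(x, np.ones(N) / N, mode="valid")
```

The window covers an integer number of ripple periods, so the sinusoid averages to zero exactly and the filter output sits at the DC value 1.5; frequencies that are not multiples of 1/(N·Ts) would only be attenuated, which is why the conditions matter.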

  6. Medium term municipal solid waste generation prediction by autoregressive integrated moving average

    Energy Technology Data Exchange (ETDEWEB)

    Younes, Mohammad K.; Nopiah, Z. M.; Basri, Noor Ezlin A.; Basri, Hassan [Department of Civil and Structural Engineering, Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia, 43600 Bangi, Selangor (Malaysia)

    2014-09-12

    Generally, solid waste handling and management are performed by the municipality or local authority. In most developing countries, local authorities suffer from serious solid waste management (SWM) problems and from insufficient data and strategic planning. Thus it is important to develop a robust solid waste generation forecasting model. It helps to properly manage the generated solid waste and to develop future plans based on relatively accurate figures. In Malaysia, the solid waste generation rate increases rapidly due to population growth and the new consumption trends that characterize the modern life style. This paper aims to develop a monthly solid waste forecasting model using Autoregressive Integrated Moving Average (ARIMA); such a model is applicable even when there is a lack of data and will help the municipality properly establish the annual service plan. The results show that the ARIMA (6,1,0) model predicts monthly municipal solid waste generation with a root mean square error equal to 0.0952, and the model forecast residuals are within the accepted 95% confidence interval.

  7. FORECASTING INFUSION SUPPLIES USING THE AUTOREGRESSIVE INTEGRATED MOVING AVERAGE (ARIMA) METHOD AT SANGLAH CENTRAL GENERAL HOSPITAL

    OpenAIRE

    I PUTU YUDI PRABHADIKA; NI KETUT TARI TASTRAWATI; LUH PUTU IDA HARINI

    2018-01-01

    Infusion supplies are an important thing that must be considered by the hospital in meeting the needs of patients. This study aims to predict the need for 0.9% 500 ml NaCl infusion and 5% 500 ml glucose infusion at Sanglah Central General Hospital (RSUP Sanglah) so that the hospital can estimate the number of infusions needed for the next six months. The forecasting method used in this research is the autoregressive integrated moving average (ARIMA) time series method. The results of this study indi...

  8. On the speed towards the mean for continuous time autoregressive moving average processes with applications to energy markets

    International Nuclear Information System (INIS)

    Benth, Fred Espen; Taib, Che Mohd Imran Che

    2013-01-01

    We extend the concept of half life of an Ornstein–Uhlenbeck process to Lévy-driven continuous-time autoregressive moving average processes with stochastic volatility. The half life becomes state dependent, and we analyze its properties in terms of the characteristics of the process. An empirical example based on daily temperatures observed in Petaling Jaya, Malaysia, is presented, where the proposed model is estimated and the distribution of the half life is simulated. The stationarity of the dynamics yields futures prices which asymptotically tend to a constant at an exponential rate when time to maturity goes to infinity. The rate is characterized by the eigenvalues of the dynamics. An alternative description of this convergence can be given in terms of our concept of half life. - Highlights: • The concept of half life is extended to Lévy-driven continuous time autoregressive moving average processes. • The dynamics of Malaysian temperatures are modeled using a continuous time autoregressive model with stochastic volatility. • Forward prices on temperature become constant when time to maturity tends to infinity. • Convergence in time to maturity is at an exponential rate given by the eigenvalues of the temperature model.
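For the classical Ornstein–Uhlenbeck benchmark that the paper generalizes, the half life has a closed form; a short standard-textbook sketch (not taken from the paper) makes the concept concrete:

```latex
% Classical OU case with mean-reversion speed \kappa:
\mathrm{d}X_t = \kappa(\mu - X_t)\,\mathrm{d}t + \sigma\,\mathrm{d}W_t
\quad\Longrightarrow\quad
\mathbb{E}[X_t - \mu \mid X_0] = (X_0 - \mu)\,e^{-\kappa t},
\qquad
t_{1/2} = \frac{\ln 2}{\kappa}.
```

The expected deviation from the mean halves after ln 2/κ; in the Lévy-driven CARMA setting with stochastic volatility this quantity becomes state dependent, which is the paper's extension.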

  9. Dual-component model of respiratory motion based on the periodic autoregressive moving average (periodic ARMA) method

    International Nuclear Information System (INIS)

    McCall, K C; Jeraj, R

    2007-01-01

    A new approach to the problem of modelling and predicting respiration motion has been implemented. This is a dual-component model, which describes the respiration motion as a non-periodic time series superimposed onto a periodic waveform. A periodic autoregressive moving average algorithm has been used to define a mathematical model of the periodic and non-periodic components of the respiration motion. The periodic components of the motion were found by projecting multiple inhale-exhale cycles onto a common subspace. The component of the respiration signal that is left after removing this periodicity is a partially autocorrelated time series and was modelled as an autoregressive moving average (ARMA) process. The accuracy of the periodic ARMA model with respect to fluctuation in amplitude and variation in length of cycles has been assessed. A respiration phantom was developed to simulate the inter-cycle variations seen in free-breathing and coached respiration patterns. At ±14% variability in cycle length and maximum amplitude of motion, the prediction errors were 4.8% of the total motion extent for a 0.5 s ahead prediction, and 9.4% at 1.0 s lag. The prediction errors increased to 11.6% at 0.5 s and 21.6% at 1.0 s when the respiration pattern had ±34% variations in both these parameters. Our results have shown that the accuracy of the periodic ARMA model is more strongly dependent on the variations in cycle length than the amplitude of the respiration cycles

  10. Average equilibrium charge state of 278113 ions moving in a helium gas

    International Nuclear Information System (INIS)

    Kaji, D.; Morita, K.; Morimoto, K.

    2005-01-01

    The difficulty in identifying a new heavy element comes from the small production cross section. For example, the production cross section was about 0.5 pb in the case of searching for the 112th element produced by the cold fusion reaction 208Pb(70Zn,n)277112. In order to identify elements heavier than element 112, an experimental apparatus with sensitivity at the sub-picobarn level is essentially needed. A gas-filled recoil separator, in general, has a large collection efficiency compared with other recoil separators, as seen from its operation principle. One of the most important parameters for a gas-filled recoil separator is the average equilibrium charge state q_ave of ions moving in the gas used. This is because the recoil ion cannot be properly transported to the focal plane of the separator if the q_ave of the element of interest in the gas is unknown. We have systematically measured equilibrium charge state distributions of heavy ions (169Tm, 208Pb, 193,209Bi, 196Po, 200At, 203,204Fr, 212Ac, 234Bk, 245Fm, 254No, 255Lr, and 265Hs) moving in a helium gas by using the gas-filled recoil separator GARIS at RIKEN. And then, the empirical formula for q_ave of heavy ions in a helium gas was derived as a function of the velocity and the atomic number of the ion on the basis of the Thomas-Fermi model of the atom. The formula was found to be applicable to the search for the transactinide nuclides 271Ds, 272Rg, and 277112 produced by cold fusion reactions. Using the formula for q_ave, we searched for a new isotope of element 113 produced by the cold fusion reaction 209Bi(70Zn,n)278113. As a result, a decay chain due to an evaporation residue of 278113 was observed. Recently, we have successfully observed the 2nd decay chain due to an evaporation residue of 278113. In this report, we will present experimental results in detail, and will also discuss the average equilibrium charge state of 278113 in a helium gas by…

  11. Human Capital Theory and Internal Migration: Do Average Outcomes Distort Our View of Migrant Motives?

    Science.gov (United States)

    Korpi, Martin; Clark, William A W

    2017-05-01

    By modelling the distribution of percentage income gains for movers in Sweden, using multinomial logistic regression, this paper shows that those receiving large pecuniary returns from migration are primarily those moving to the larger metropolitan areas and those with higher education, and that there is much more variability in income gains than what is often assumed in models of average gains to migration. This suggests that human capital models of internal migration often overemphasize the job and income motive for moving, and fail to explore where and when human capital motivated migration occurs.

  12. Robust nonlinear autoregressive moving average model parameter estimation using stochastic recurrent artificial neural networks

    DEFF Research Database (Denmark)

    Chon, K H; Hoyer, D; Armoundas, A A

    1999-01-01

    In this study, we introduce a new approach for estimating linear and nonlinear stochastic autoregressive moving average (ARMA) model parameters, given a corrupt signal, using artificial recurrent neural networks. This new approach is a two-step approach in which the parameters of the deterministic part of the stochastic ARMA model are first estimated via a three-layer artificial neural network (deterministic estimation step) and then reestimated using the prediction error as one of the inputs to the artificial neural networks in an iterative algorithm (stochastic estimation step). The prediction error is obtained by subtracting the corrupt signal of the estimated ARMA model obtained via the deterministic estimation step from the system output response. We present computer simulation examples to show the efficacy of the proposed stochastic recurrent neural network approach in obtaining accurate…

  13. Dynamic Regression Intervention Modeling for the Malaysian Daily Load

    Directory of Open Access Journals (Sweden)

    Fadhilah Abdrazak

    2014-05-01

    Full Text Available Malaysia is a unique country due to having both fixed and moving holidays. These moving holidays may overlap with other fixed holidays and therefore increase the complexity of load forecasting activities. The errors due to holidays' effects in load forecasting are known to be higher than those due to other factors. If these effects can be estimated and removed, the behavior of the series can be better viewed. Thus, the aim of this paper is to improve the forecasting errors by using a dynamic regression model with intervention analysis. Based on the linear transfer function method, a daily load model, consisting of either peak or average load, is developed. The developed model outperformed the seasonal ARIMA model in estimating the fixed and moving holidays' effects and achieved a smaller Mean Absolute Percentage Error (MAPE) in load forecast.
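The intervention idea described above can be illustrated in its simplest form: encode each holiday as a dummy regressor alongside the load's baseline dynamics, so its effect is estimated jointly and can be removed. A minimal least-squares sketch with an invented trend, holiday dates, and effect size (the paper's actual model is a dynamic regression with a transfer function, which this does not reproduce):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 120
baseline = 1000.0 + 2.0 * np.arange(n)        # slowly rising daily peak load (MW)
holiday = np.zeros(n)
holiday[[30, 65, 100]] = 1.0                  # three hypothetical moving-holiday dates
load = baseline - 150.0 * holiday + rng.normal(0, 5.0, n)   # holidays depress load

# regress load on intercept, trend, and the holiday intervention dummy
X = np.column_stack([np.ones(n), np.arange(n), holiday])
beta, *_ = np.linalg.lstsq(X, load, rcond=None)
intercept, trend, holiday_effect = beta       # holiday_effect estimates the -150 MW dip
```

Subtracting `holiday_effect * holiday` from the series removes the estimated holiday dip, after which the remaining dynamics can be modelled as usual.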

  14. Wavelet regression model in forecasting crude oil price

    Science.gov (United States)

    Hamid, Mohd Helmie; Shabri, Ani

    2017-05-01

    This study presents the performance of the wavelet multiple linear regression (WMLR) technique in daily crude oil forecasting. The WMLR model was developed by integrating the discrete wavelet transform (DWT) and the multiple linear regression (MLR) model. The original time series was decomposed into sub-time series with different scales by wavelet theory. Correlation analysis was conducted to assist in the selection of optimal decomposed components as inputs for the WMLR model. The daily WTI crude oil price series has been used in this study to test the prediction capability of the proposed model. The forecasting performance of the WMLR model was also compared with regular multiple linear regression (MLR), autoregressive integrated moving average (ARIMA) and generalized autoregressive conditional heteroscedasticity (GARCH) using root mean square error (RMSE) and mean absolute error (MAE). Based on the experimental results, it appears that the WMLR model performs better than the other forecasting techniques tested in this study.
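The decomposition step can be illustrated with the simplest wavelet, the Haar transform: one level splits the series into a smooth approximation and a high-frequency detail, which would then feed the regression as separate inputs. A self-contained numpy sketch (the paper does not specify Haar; the price values are invented):

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar wavelet transform: approximation and detail."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # pairwise smooth component
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # pairwise fluctuation component
    return a, d

def haar_idwt(a, d):
    """Inverse of the single-level Haar transform."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

prices = np.array([70.1, 70.9, 71.4, 70.7, 69.8, 70.2, 71.0, 71.6])  # illustrative
approx, detail = haar_dwt(prices)
recon = haar_idwt(approx, detail)   # perfect reconstruction of the original series
```

The transform is orthogonal, so the sub-series carry all the information of the original; selecting the most correlated components as regressors is the WMLR step the paper describes.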

  15. Synchronized moving aperture radiation therapy (SMART): average tumour trajectory for lung patients

    International Nuclear Information System (INIS)

    Neicu, Toni; Shirato, Hiroki; Seppenwoolde, Yvette; Jiang, Steve B

    2003-01-01

    Synchronized moving aperture radiation therapy (SMART) is a new technique for treating mobile tumours under development at Massachusetts General Hospital (MGH). The basic idea of SMART is to synchronize the moving radiation beam aperture formed by a dynamic multileaf collimator (DMLC) with the tumour motion induced by respiration. SMART is based on the concept of the average tumour trajectory (ATT) exhibited by a tumour during respiration. During the treatment simulation stage, tumour motion is measured and the ATT is derived. Then, the original IMRT MLC leaf sequence is modified using the ATT to compensate for tumour motion. During treatment, the tumour motion is monitored. The treatment starts when leaf motion and tumour motion are synchronized at a specific breathing phase. The treatment will halt when the tumour drifts away from the ATT and will resume when the synchronization between tumour motion and radiation beam is re-established. In this paper, we present a method to derive the ATT from measured tumour trajectory data. We also investigate the validity of the ATT concept for lung tumours during normal breathing. The lung tumour trajectory data were acquired during actual radiotherapy sessions using a real-time tumour-tracking system. SMART treatment is simulated by assuming that the radiation beam follows the derived ATT and the tumour follows the measured trajectory. In simulation, the treatment starts at the exhale phase. The duty cycle of SMART delivery was calculated for various treatment times and gating thresholds, as well as for various exhale phases where the treatment begins. The simulation results show that in the case of free breathing, for 4 out of 11 lung datasets with tumour motion greater than 1 cm from peak to peak, the error in tumour tracking can be controlled to within a couple of millimetres while maintaining a reasonable delivery efficiency. That is to say, without any breath coaching/control, the ATT is a valid concept for some lung…
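One common way to derive an average trajectory from recorded cycles is to resample each breathing cycle onto a common phase grid and average across cycles. The paper's exact derivation method is not reproduced here; this is a hedged sketch with synthetic cycles standing in for tracked-marker data:

```python
import numpy as np

# Synthetic breathing cycles: a raised-cosine motion pattern whose amplitude
# varies slightly from cycle to cycle (values are illustrative, in mm).
phase = np.linspace(0.0, 1.0, 100, endpoint=False)   # common phase grid, 0 = exhale
rng = np.random.default_rng(3)
cycles = []
for _ in range(20):
    amp = 10.0 + rng.normal(0, 0.5)                  # cycle-to-cycle amplitude variation
    cycles.append(amp * 0.5 * (1 - np.cos(2 * np.pi * phase)))

# the ATT is the phase-wise mean over all recorded cycles
att = np.mean(cycles, axis=0)
```

During delivery, the leaf sequence would track this ATT, with gating whenever the measured tumour position drifts beyond a threshold from the ATT at the current phase.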

  16. The application of moving average control charts for evaluating magnetic field quality on an individual magnet basis

    International Nuclear Information System (INIS)

    Pollock, D.A.; Gunst, R.F.; Schucany, W.R.

    1994-01-01

SSC Collider Dipole Magnet field quality specifications define limits of variation for the population mean (systematic) and standard deviation (RMS deviation) of allowed and unallowed multipole coefficients generated by the full collection of dipole magnets throughout the Collider operating cycle. A fundamental Quality Control issue is how to determine the acceptability of individual magnets during production, in other words, taken one at a time and compared to the population parameters. Provided that the normal distribution assumptions hold, the random variation of multipoles for individual magnets may be evaluated by comparing the measured results to a ±3 × RMS tolerance centered on the design nominal. To evaluate the local and cumulative systematic variation of the magnets against the distribution tolerance, individual magnet results need to be combined with those that come before them. This paper demonstrates a Statistical Quality Control method (the unweighted moving average control chart) to evaluate individual magnet performance and process stability against population tolerances. The DESY/HERA dipole cold skew quadrupole measurements for magnets in production order are used to evaluate non-stationarity of the mean over time for the cumulative set of magnets, as well as for a moving sample

  17. Moving Low-Carbon Transportation in Xinjiang: Evidence from STIRPAT and Rigid Regression Models

    Directory of Open Access Journals (Sweden)

    Jiefang Dong

    2016-12-01

Full Text Available With the rapid economic development of the Xinjiang Uygur Autonomous Region, the area’s transport sector has witnessed significant growth, which in turn has led to a large increase in carbon dioxide emissions. As such, calculating the carbon footprint of Xinjiang’s transportation sector and probing the driving factors of carbon dioxide emissions are of great significance to the region’s energy conservation and environmental protection. This paper provides an account of the growth in the carbon emissions of Xinjiang’s transportation sector during the period from 1989 to 2012. We also analyze the transportation sector’s trends and historical evolution. Combined with the STIRPAT (Stochastic Impacts by Regression on Population, Affluence and Technology) model and ridge regression, this study further quantitatively analyzes the factors that influence the carbon emissions of Xinjiang’s transportation sector. The results indicate the following: (1) the total carbon emissions and per capita carbon emissions of Xinjiang’s transportation sector both continued to rise rapidly during this period; their average annual growth rates were 10.8% and 9.1%, respectively; (2) the carbon emissions of the transportation sector come mainly from the consumption of diesel and gasoline, which accounted for an average of 36.2% and 2.6% of carbon emissions, respectively; in addition, the overall carbon emission intensity of the transportation sector showed an “S”-pattern trend within the study period; (3) population density plays a dominant role in increasing carbon dioxide emissions. Population is then followed by per capita GDP and, finally, energy intensity. Cargo turnover has a more significant potential impact on and role in emission reduction than do private vehicles. This is because road freight is the primary form of transportation used across Xinjiang, and this form of transportation has low energy efficiency. These findings have important

  18. Comparing a recursive digital filter with the moving-average and sequential probability-ratio detection methods for SNM portal monitors

    International Nuclear Information System (INIS)

    Fehlau, P.E.

    1993-01-01

The author compared a recursive digital filter proposed as a detection method for French special nuclear material monitors with the author's detection methods, which employ a moving-average scaler or a sequential probability-ratio test. Nine test subjects each repeatedly carried a test source through a walk-through portal monitor that had the same nuisance-alarm rate with each method. He found that the average detection probability for the test source is also the same for each method. However, the recursive digital filter may have one drawback: its exponentially decreasing response to past radiation intensity prolongs the impact of any interference from radiation sources or radiation-producing machinery. He also examined the influence of each test subject on the monitor's operation by measuring individual attenuation factors for background and source radiation, then ranked the subjects' attenuation factors against their individual probabilities for detecting the test source. The one inconsistent ranking was probably caused by that subject's unusually long stride when passing through the portal

  19. Power Based Phase-Locked Loop Under Adverse Conditions with Moving Average Filter for Single-Phase System

    Directory of Open Access Journals (Sweden)

    Menxi Xie

    2017-06-01

Full Text Available A high-performance synchronization method is critical for grid-connected power converters. For single-phase systems, the power-based phase-locked loop (pPLL) uses a multiplier as a phase detector (PD). When the single-phase grid voltage is distorted, the phase error information contains AC disturbances oscillating at integer multiples of the fundamental frequency, which lead to detection errors. This paper presents a new scheme based on a moving average filter (MAF) applied in-loop of the pPLL. The signal characteristics of the phase error are discussed in detail. A predictive rule is adopted to compensate for the delay induced by the MAF, thus achieving fast dynamic response. When the frequency deviates from nominal, the estimated frequency is fed back to adjust the filter window length of the MAF and the buffer size of the predictive rule. Simulation and experimental results show that the proposed PLL achieves good performance under adverse grid conditions.
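To illustrate the in-loop moving average filter this record describes, here is a minimal causal MAF sketch in Python. This is a generic illustration, not the authors' pPLL implementation; in a pPLL the window length would be chosen to span one full period of the ripple component and re-sized as the estimated frequency changes.

```python
def moving_average_filter(samples, window):
    """Causal moving average filter: each output is the mean of the
    last `window` inputs (fewer during start-up while the buffer fills)."""
    out, buf = [], []
    for x in samples:
        buf.append(x)
        if len(buf) > window:
            buf.pop(0)           # drop the oldest sample
        out.append(sum(buf) / len(buf))
    return out


# A DC level of 1 plus a ripple of period 2 samples: once the buffer is
# full, a window spanning one ripple period nulls the ripple entirely.
filtered = moving_average_filter([2.0, 0.0, 2.0, 0.0, 2.0, 0.0], window=2)
```

The same cancellation property is why the window must track the grid frequency: if the frequency drifts, the window no longer spans an integer number of ripple periods and residual oscillation leaks through.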

  20. Bayesian model averaging and weighted average least squares : Equivariance, stability, and numerical issues

    NARCIS (Netherlands)

    De Luca, G.; Magnus, J.R.

    2011-01-01

    In this article, we describe the estimation of linear regression models with uncertainty about the choice of the explanatory variables. We introduce the Stata commands bma and wals, which implement, respectively, the exact Bayesian model-averaging estimator and the weighted-average least-squares

  1. Nonlinear Autoregressive Network with the Use of a Moving Average Method for Forecasting Typhoon Tracks

    OpenAIRE

    Tienfuan Kerh; Shin-Hung Wu

    2017-01-01

Forecasting of a typhoon's moving path may help to evaluate the potential negative impacts in the neighbourhood areas along the moving path. This study proposes using both static and dynamic neural network models to link a time series of typhoon track parameters, including longitude and latitude of the typhoon central location, cyclonic radius, central wind speed, and typhoon moving speed. Based on the historical records of 100 typhoons, the performances of neural network models are ev...

  2. Forecast of sea surface temperature off the Peruvian coast using an autoregressive integrated moving average model

    Directory of Open Access Journals (Sweden)

    Carlos Quispe

    2013-04-01

Full Text Available El Niño links climate, ecosystems and socio-economic activities globally. Attempts to predict this event date back to 1980, but the statistical and dynamical models remain insufficient. Thus, the objective of the present work was to explore, using an autoregressive moving average model, the effect of El Niño on the sea surface temperature (SST) off the Peruvian coast. The work involved 5 stages: identification, estimation, diagnostic checking, forecasting and validation. Simple and partial autocorrelation functions (ACF and PACF) were used to identify and reformulate the orders of the model parameters, as well as the Akaike information criterion (AIC) and Schwarz criterion (SC) for the selection of the best models during the diagnostic checking. Among the main results, ARIMA(12,0,11) models were proposed, which simulated monthly conditions in agreement with the observed conditions off the Peruvian coast: cold conditions at the end of 2004, and neutral conditions at the beginning of 2005.
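ARIMA fitting in studies like this one is normally done with a statistics package; purely to show the mechanics of the "I" (differencing) step, the sketch below fits the simplest possible ARIMA(1,1,0) by least squares on the differenced series and produces a one-step forecast. It is an illustration only; the ARIMA(12,0,11) models of this record involve many more parameters and maximum-likelihood estimation.

```python
def arima_110_forecast(series):
    """One-step forecast from a minimal ARIMA(1,1,0) fit:
    difference the series, estimate the AR(1) coefficient phi by
    least squares on (d[t-1], d[t]) pairs, then undo the difference."""
    d = [b - a for a, b in zip(series, series[1:])]   # first differences
    num = sum(x * y for x, y in zip(d, d[1:]))
    den = sum(x * x for x in d[:-1])
    phi = num / den if den else 0.0                   # AR(1) coefficient
    next_diff = phi * d[-1]                           # forecast next difference
    return series[-1] + next_diff                     # integrate back


# A series with a constant trend: the fitted phi is 1 and the
# forecast simply extends the trend.
forecast = arima_110_forecast([1.0, 2.0, 3.0, 4.0, 5.0])  # → 6.0
```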

  3. Experimental validation of heterogeneity-corrected dose-volume prescription on respiratory-averaged CT images in stereotactic body radiotherapy for moving tumors

    International Nuclear Information System (INIS)

    Nakamura, Mitsuhiro; Miyabe, Yuki; Matsuo, Yukinori; Kamomae, Takeshi; Nakata, Manabu; Yano, Shinsuke; Sawada, Akira; Mizowaki, Takashi; Hiraoka, Masahiro

    2012-01-01

The purpose of this study was to experimentally assess the validity of heterogeneity-corrected dose-volume prescription on respiratory-averaged computed tomography (RACT) images in stereotactic body radiotherapy (SBRT) for moving tumors. Four-dimensional computed tomography (CT) data were acquired while a dynamic anthropomorphic thorax phantom with a solitary target moved. Motion pattern was based on cos(t) with a constant respiration period of 4.0 sec along the longitudinal axis of the CT couch. The extent of motion (A1) was set in the range of 0.0–12.0 mm at 3.0-mm intervals. Treatment planning with the heterogeneity-corrected dose-volume prescription was designed on RACT images. A new commercially available Monte Carlo algorithm of a well-commissioned 6-MV photon beam was used for dose calculation. Dosimetric effects of intrafractional tumor motion were then investigated experimentally under the same conditions as the 4D CT simulation using the dynamic anthropomorphic thorax phantom, films, and an ionization chamber. The passing rate of the γ index was 98.18%, with the criteria of 3 mm/3%. The dose error between the planned and the measured isocenter dose under moving conditions was within ±0.7%. From the dose area histograms on the film, the mean ± standard deviation of the dose covering 100% of the cross section of the target was 102.32 ± 1.20% (range, 100.59–103.49%). By contrast, the irradiated areas receiving more than 95% dose for A1 = 12 mm were 1.46 and 1.33 times larger than those for A1 = 0 mm in the coronal and sagittal planes, respectively. This phantom study demonstrated that the cross section of the target received 100% dose under moving conditions in both the coronal and sagittal planes, suggesting that the heterogeneity-corrected dose-volume prescription on RACT images is acceptable in SBRT for moving tumors.

  4. Output-Only Modal Parameter Recursive Estimation of Time-Varying Structures via a Kernel Ridge Regression FS-TARMA Approach

    Directory of Open Access Journals (Sweden)

    Zhi-Sai Ma

    2017-01-01

    Full Text Available Modal parameter estimation plays an important role in vibration-based damage detection and is worth more attention and investigation, as changes in modal parameters are usually being used as damage indicators. This paper focuses on the problem of output-only modal parameter recursive estimation of time-varying structures based upon parameterized representations of the time-dependent autoregressive moving average (TARMA. A kernel ridge regression functional series TARMA (FS-TARMA recursive identification scheme is proposed and subsequently employed for the modal parameter estimation of a numerical three-degree-of-freedom time-varying structural system and a laboratory time-varying structure consisting of a simply supported beam and a moving mass sliding on it. The proposed method is comparatively assessed against an existing recursive pseudolinear regression FS-TARMA approach via Monte Carlo experiments and shown to be capable of accurately tracking the time-varying dynamics in a recursive manner.
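Kernel ridge regression, the estimator at the heart of the proposed FS-TARMA scheme, can be sketched compactly: with a kernel matrix K built from an RBF kernel, the dual coefficients solve (K + λI)α = y, and a prediction is a kernel-weighted sum of the α's. The self-contained toy version below (batch, scalar inputs) is a generic illustration, not the authors' recursive identification code.

```python
import math

def rbf(x, y, gamma=1.0):
    """Radial basis function kernel on scalars."""
    return math.exp(-gamma * (x - y) ** 2)

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def kernel_ridge_fit(xs, ys, lam=1e-3, gamma=1.0):
    """Dual coefficients alpha = (K + lam*I)^{-1} y."""
    K = [[rbf(a, b, gamma) + (lam if i == j else 0.0)
          for j, b in enumerate(xs)] for i, a in enumerate(xs)]
    return solve(K, ys)

def kernel_ridge_predict(xs, alpha, x, gamma=1.0):
    """Prediction: kernel-weighted sum over the training points."""
    return sum(a * rbf(xi, x, gamma) for a, xi in zip(alpha, xs))
```

With λ = 0 and distinct training points the model interpolates the training data exactly; the ridge term λ trades that interpolation for stability, which matters in the recursive, noisy setting of the record above.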

  5. Autoregressive-moving-average hidden Markov model for vision-based fall prediction-An application for walker robot.

    Science.gov (United States)

    Taghvaei, Sajjad; Jahanandish, Mohammad Hasan; Kosuge, Kazuhiro

    2017-01-01

Population aging of societies requires providing the elderly with safe and dependable assistive technologies for daily life activities. Improving fall detection algorithms can play a major role in achieving this goal. This article proposes a real-time fall prediction algorithm based on visual data, acquired from a depth sensor, of a user of a walking assistive system. In the absence of a coupled dynamic model of the human and the assistive walker, a hybrid "system identification-machine learning" approach is used. An autoregressive-moving-average (ARMA) model is fitted on the time-series walking data to forecast the upcoming states, and a hidden Markov model (HMM) based classifier is built on top of the ARMA model to predict falling in the upcoming time frames. The performance of the algorithm is evaluated through experiments with four subjects, including an experienced physiotherapist, using a walker robot in five different falling scenarios; namely, fall forward, fall down, fall back, fall left, and fall right. The algorithm successfully predicts the fall with a rate of 84.72%.

  6. Simple Moving Voltage Average Incremental Conductance MPPT Technique with Direct Control Method under Nonuniform Solar Irradiance Conditions

    Directory of Open Access Journals (Sweden)

    Amjad Ali

    2015-01-01

Full Text Available A new simple moving voltage average (SMVA) technique with a fixed-step direct control incremental conductance method is introduced to reduce solar photovoltaic voltage (VPV) oscillation under nonuniform solar irradiation conditions. To evaluate and validate the performance of the proposed SMVA method in comparison with the conventional fixed-step direct control incremental conductance method under extreme conditions, different scenarios were simulated. Simulation results show that in most cases SMVA gives better results with more stability as compared to the traditional fixed-step direct control INC method, with a faster tracking system, reduced sustained oscillations, fast steady-state response and robustness. The steady-state oscillations are almost eliminated because of the extremely small dP/dV around the maximum power (MP) point, which verifies that the proposed method is suitable for standalone PV systems under extreme weather conditions, not only in terms of bus voltage stability but also in overall system efficiency.

  7. A robust combination approach for short-term wind speed forecasting and analysis – Combination of the ARIMA (Autoregressive Integrated Moving Average), ELM (Extreme Learning Machine), SVM (Support Vector Machine) and LSSVM (Least Square SVM) forecasts using a GPR (Gaussian Process Regression) model

    International Nuclear Information System (INIS)

    Wang, Jianzhou; Hu, Jianming

    2015-01-01

    With the increasing importance of wind power as a component of power systems, the problems induced by the stochastic and intermittent nature of wind speed have compelled system operators and researchers to search for more reliable techniques to forecast wind speed. This paper proposes a combination model for probabilistic short-term wind speed forecasting. In this proposed hybrid approach, EWT (Empirical Wavelet Transform) is employed to extract meaningful information from a wind speed series by designing an appropriate wavelet filter bank. The GPR (Gaussian Process Regression) model is utilized to combine independent forecasts generated by various forecasting engines (ARIMA (Autoregressive Integrated Moving Average), ELM (Extreme Learning Machine), SVM (Support Vector Machine) and LSSVM (Least Square SVM)) in a nonlinear way rather than the commonly used linear way. The proposed approach provides more probabilistic information for wind speed predictions besides improving the forecasting accuracy for single-value predictions. The effectiveness of the proposed approach is demonstrated with wind speed data from two wind farms in China. The results indicate that the individual forecasting engines do not consistently forecast short-term wind speed for the two sites, and the proposed combination method can generate a more reliable and accurate forecast. - Highlights: • The proposed approach can make probabilistic modeling for wind speed series. • The proposed approach adapts to the time-varying characteristic of the wind speed. • The hybrid approach can extract the meaningful components from the wind speed series. • The proposed method can generate adaptive, reliable and more accurate forecasting results. • The proposed model combines four independent forecasting engines in a nonlinear way.

  8. Books average previous decade of economic misery.

    Science.gov (United States)

    Bentley, R Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios

    2014-01-01

For the 20th century since the Depression, we find a strong correlation between a 'literary misery index' derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade.

  9. Books Average Previous Decade of Economic Misery

    Science.gov (United States)

    Bentley, R. Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios

    2014-01-01

    For the 20th century since the Depression, we find a strong correlation between a ‘literary misery index’ derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade. PMID:24416159
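The core computation in these two records is a trailing moving average of the economic misery index (inflation plus unemployment) followed by a correlation against the literary misery index. A minimal sketch of both pieces, on hypothetical numbers (the studies use real annual series and a roughly decade-long window):

```python
def trailing_mean(xs, window):
    """Moving average over the previous `window` values: entry k
    summarizes xs[k:k+window] (the 'previous decade' at each point)."""
    return [sum(xs[t - window:t]) / window
            for t in range(window, len(xs) + 1)]

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5
```

In the studies, the goodness of fit is evaluated for a range of window lengths, with the peak reported at 11 years; the same loop would simply call `pearson` once per candidate window.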

  10. An Estimation of the Likelihood of Significant Eruptions During 2000-2009 Using Poisson Statistics on Two-Point Moving Averages of the Volcanic Time Series

    Science.gov (United States)

    Wilson, Robert M.

    2001-01-01

    Since 1750, the number of cataclysmic volcanic eruptions (volcanic explosivity index (VEI)>=4) per decade spans 2-11, with 96 percent located in the tropics and extra-tropical Northern Hemisphere. A two-point moving average of the volcanic time series has higher values since the 1860's than before, being 8.00 in the 1910's (the highest value) and 6.50 in the 1980's, the highest since the 1910's peak. Because of the usual behavior of the first difference of the two-point moving averages, one infers that its value for the 1990's will measure approximately 6.50 +/- 1, implying that approximately 7 +/- 4 cataclysmic volcanic eruptions should be expected during the present decade (2000-2009). Because cataclysmic volcanic eruptions (especially those having VEI>=5) nearly always have been associated with short-term episodes of global cooling, the occurrence of even one might confuse our ability to assess the effects of global warming. Poisson probability distributions reveal that the probability of one or more events with a VEI>=4 within the next ten years is >99 percent. It is approximately 49 percent for an event with a VEI>=5, and 18 percent for an event with a VEI>=6. Hence, the likelihood that a climatically significant volcanic eruption will occur within the next ten years appears reasonably high.
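The Poisson computation used here is direct: if eruptions arrive at a rate of λ per decade, the probability of at least one event is 1 − e^(−λ). With λ ≈ 7 for VEI ≥ 4, that probability exceeds 99 percent, as the abstract states. The smaller rates below for VEI ≥ 5 and VEI ≥ 6 are back-solved from the quoted 49 and 18 percent figures and are therefore assumptions, not values from the source.

```python
import math

def prob_at_least_one(rate):
    """P(N >= 1) for N ~ Poisson(rate): 1 - P(N = 0) = 1 - exp(-rate)."""
    return 1.0 - math.exp(-rate)


p_vei4 = prob_at_least_one(7.0)    # > 0.99, matching the abstract
p_vei5 = prob_at_least_one(0.67)   # ~ 0.49 (rate back-solved, assumed)
p_vei6 = prob_at_least_one(0.20)   # ~ 0.18 (rate back-solved, assumed)
```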

  11. Nonlinear regression and ARIMA models for precipitation chemistry in East Central Florida from 1978 to 1997

    International Nuclear Information System (INIS)

    Nickerson, David M.; Madsen, Brooks C.

    2005-01-01

Continuous monitoring of precipitation in East Central Florida has occurred since 1978 at a sampling site located on the University of Central Florida (UCF) campus. Monthly volume-weighted average (VWA) concentration for several major analytes that are present in precipitation samples was calculated from samples collected daily. Monthly VWA concentration and wet deposition of H+, NH4+, Ca2+, Mg2+, NO3-, Cl- and SO4^2- were evaluated by a nonlinear regression (NLR) model that considered 10-year data (from 1978 to 1987) and 20-year data (from 1978 to 1997). Little change in the NLR parameter estimates was indicated among the 10-year and 20-year evaluations except for general decreases in the predicted trends from the 10-year to the 20-year fits. Box-Jenkins autoregressive integrated moving average (ARIMA) models with linear trend were considered as an alternative to the NLR models for these data. The NLR and ARIMA model forecasts for 1998 were compared to the actual 1998 data. For monthly VWA concentration values, the two models gave similar results. For the wet deposition values, the ARIMA models performed considerably better. - Autoregressive integrated moving average models of precipitation data are an improvement over nonlinear models for the prediction of precipitation chemistry composition

  12. Reconstruction of missing daily streamflow data using dynamic regression models

    Science.gov (United States)

    Tencaliec, Patricia; Favre, Anne-Catherine; Prieur, Clémentine; Mathevet, Thibault

    2015-12-01

River discharge is one of the most important quantities in hydrology. It provides fundamental records for water resources management and climate change monitoring. Even very short gaps in these records can lead to markedly different analysis outputs. Therefore, reconstructing the missing data of incomplete data sets is an important step for the performance of environmental models, engineering, and research applications, and it presents a great challenge. The objective of this paper is to introduce an effective technique for reconstructing missing daily discharge data when one has access only to daily streamflow data. The proposed procedure uses a combination of regression and autoregressive integrated moving average (ARIMA) models, called a dynamic regression model. This model uses the linear relationship between neighbouring, correlated stations and then adjusts the residual term by fitting an ARIMA structure. Application of the model to eight daily streamflow series for the Durance river watershed showed that the model yields reliable estimates for the missing data in the time series. Simulation studies were also conducted to evaluate the performance of the procedure.
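A dynamic regression of this kind can be sketched as ordinary least squares against a correlated neighbour station, followed by an autoregressive adjustment of the residuals. The version below uses a single regressor and an AR(1) residual term as a simplified stand-in for the paper's full regression-with-ARIMA-errors procedure.

```python
def fit_dynamic_regression(x, y):
    """Fit y[t] = a + b*x[t] + e[t], with e[t] = phi*e[t-1] + noise.
    Returns (a, b, phi): simple linear regression of y on the neighbour
    station x, then an AR(1) coefficient fitted on the residuals."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
    a = my - b * mx
    e = [yi - (a + b * xi) for xi, yi in zip(x, y)]   # regression residuals
    den = sum(v * v for v in e[:-1])
    phi = sum(u * v for u, v in zip(e, e[1:])) / den if den else 0.0
    return a, b, phi

def impute(x_t, e_prev, a, b, phi):
    """Reconstruct a missing y[t] from the neighbour value x[t]
    and the previous residual."""
    return a + b * x_t + phi * e_prev
```

The AR(1) correction is what distinguishes this from plain regression: when the last observed residual was high, the imputed value is nudged up accordingly, mimicking the persistence of streamflow anomalies.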

  13. Moving Horizon Estimation and Control

    DEFF Research Database (Denmark)

    Jørgensen, John Bagterp

    successful and applied methodology beyond PID-control for control of industrial processes. The main contribution of this thesis is introduction and definition of the extended linear quadratic optimal control problem for solution of numerical problems arising in moving horizon estimation and control...... problems. Chapter 1 motivates moving horizon estimation and control as a paradigm for control of industrial processes. It introduces the extended linear quadratic control problem and discusses its central role in moving horizon estimation and control. Introduction, application and efficient solution....... It provides an algorithm for computation of the maximal output admissible set for linear model predictive control. Appendix D provides results concerning linear regression. Appendix E discuss prediction error methods for identification of linear models tailored for model predictive control....

  14. The Prediction of Exchange Rates with the Use of Auto-Regressive Integrated Moving-Average Models

    Directory of Open Access Journals (Sweden)

    Daniela Spiesová

    2014-10-01

Full Text Available The currency market is currently the largest world market; over its existence there have been many theories regarding the prediction of the development of exchange rates based on macroeconomic, microeconomic, statistical and other models. The aim of this paper is to identify an adequate model for the prediction of non-stationary time series of exchange rates and then use this model to predict the trend of the development of European currencies against the Euro. The uniqueness of this paper lies in the fact that, while there are many expert studies dealing with the prediction of exchange rates for currency pairs involving the US dollar, only a limited number of scientific studies are concerned with the long-term prediction of European currencies with the help of integrated ARMA models, even though the development of exchange rates has a crucial impact on all levels of the economy and its prediction is an important indicator for individual countries, banks, companies and businessmen as well as for investors. The results of this study confirm that, to predict the conditional variance and then to estimate the future values of exchange rates, it is adequate to use the ARIMA(1,1,1) model without constant, or the ARIMA[(1,7),1,(1,7)] model, where in the long term the square root of the conditional variance inclines towards a stable value.

  15. Long-Term Prediction of Emergency Department Revenue and Visitor Volume Using Autoregressive Integrated Moving Average Model

    Directory of Open Access Journals (Sweden)

    Chieh-Fan Chen

    2011-01-01

    Full Text Available This study analyzed meteorological, clinical and economic factors in terms of their effects on monthly ED revenue and visitor volume. Monthly data from January 1, 2005 to September 30, 2009 were analyzed. Spearman correlation and cross-correlation analyses were performed to identify the correlation between each independent variable, ED revenue, and visitor volume. Autoregressive integrated moving average (ARIMA model was used to quantify the relationship between each independent variable, ED revenue, and visitor volume. The accuracies were evaluated by comparing model forecasts to actual values with mean absolute percentage of error. Sensitivity of prediction errors to model training time was also evaluated. The ARIMA models indicated that mean maximum temperature, relative humidity, rainfall, non-trauma, and trauma visits may correlate positively with ED revenue, but mean minimum temperature may correlate negatively with ED revenue. Moreover, mean minimum temperature and stock market index fluctuation may correlate positively with trauma visitor volume. Mean maximum temperature, relative humidity and stock market index fluctuation may correlate positively with non-trauma visitor volume. Mean maximum temperature and relative humidity may correlate positively with pediatric visitor volume, but mean minimum temperature may correlate negatively with pediatric visitor volume. The model also performed well in forecasting revenue and visitor volume.

  16. Time series regression model for infectious disease and weather.

    Science.gov (United States)

    Imai, Chisato; Armstrong, Ben; Chalabi, Zaid; Mangtani, Punam; Hashizume, Masahiro

    2015-10-01

    Time series regression has been developed and long used to evaluate the short-term associations of air pollution and weather with mortality or morbidity of non-infectious diseases. The application of the regression approaches from this tradition to infectious diseases, however, is less well explored and raises some new issues. We discuss and present potential solutions for five issues often arising in such analyses: changes in immune population, strong autocorrelations, a wide range of plausible lag structures and association patterns, seasonality adjustments, and large overdispersion. The potential approaches are illustrated with datasets of cholera cases and rainfall from Bangladesh and influenza and temperature in Tokyo. Though this article focuses on the application of the traditional time series regression to infectious diseases and weather factors, we also briefly introduce alternative approaches, including mathematical modeling, wavelet analysis, and autoregressive integrated moving average (ARIMA) models. Modifications proposed to standard time series regression practice include using sums of past cases as proxies for the immune population, and using the logarithm of lagged disease counts to control autocorrelation due to true contagion, both of which are motivated from "susceptible-infectious-recovered" (SIR) models. The complexity of lag structures and association patterns can often be informed by biological mechanisms and explored by using distributed lag non-linear models. For overdispersed models, alternative distribution models such as quasi-Poisson and negative binomial should be considered. Time series regression can be used to investigate dependence of infectious diseases on weather, but may need modifying to allow for features specific to this context. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.

  17. MOTION ARTIFACT REDUCTION IN FUNCTIONAL NEAR INFRARED SPECTROSCOPY SIGNALS BY AUTOREGRESSIVE MOVING AVERAGE MODELING BASED KALMAN FILTERING

    Directory of Open Access Journals (Sweden)

    MEHDI AMIAN

    2013-10-01

Full Text Available Functional near infrared spectroscopy (fNIRS) is a technique used for noninvasive measurement of oxyhemoglobin (HbO2) and deoxyhemoglobin (HHb) concentrations in brain tissue. Since the ratio of the concentrations of these two agents is correlated with neuronal activity, fNIRS can be used for monitoring and quantifying cortical activity. The portability of fNIRS makes it a good candidate for studies involving subject movement. The fNIRS measurements, however, are sensitive to artifacts generated by the subject's head motion, which makes fNIRS signals less effective in such applications. In this paper, autoregressive moving average (ARMA) modeling of the fNIRS signal is proposed for a state-space representation of the signal, which is then fed to a Kalman filter for estimating the motion-free signal from the motion-corrupted signal. Results are compared to the previously used autoregressive (AR) model based approach and show that the ARMA models outperform the AR models. We attribute this to the richer structure of ARMA models, which contain more terms. We show that the signal-to-noise ratio (SNR) is about 2 dB higher for the ARMA-based method.
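The second half of this pipeline, Kalman filtering of a state-space model, reduces in the scalar AR(1) case to a few lines. This sketch assumes a first-order state model with known parameters; the paper fits higher-order ARMA models, whose state-space form is multidimensional, and estimates the parameters from data.

```python
def kalman_ar1(z, phi, q, r, x0=0.0, p0=1.0):
    """Scalar Kalman filter for the state-space model
    x[t] = phi*x[t-1] + w[t] (process noise variance q),
    z[t] = x[t] + v[t]       (measurement noise variance r).
    Returns the filtered state estimates."""
    x, p, out = x0, p0, []
    for zt in z:
        # predict
        x = phi * x
        p = phi * phi * p + q
        # update with the new (possibly motion-corrupted) measurement
        k = p / (p + r)          # Kalman gain
        x = x + k * (zt - x)
        p = (1.0 - k) * p
        out.append(x)
    return out


# A constant underlying signal (phi=1, q=0) observed with noise variance 1:
# the estimate converges monotonically toward the true level.
estimates = kalman_ar1([1.0, 1.0, 1.0, 1.0], phi=1.0, q=0.0, r=1.0)
```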

  18. Failure and reliability prediction by support vector machines regression of time series data

    International Nuclear Information System (INIS)

    Chagas Moura, Marcio das; Zio, Enrico; Lins, Isis Didier; Droguett, Enrique

    2011-01-01

    Support Vector Machines (SVMs) are kernel-based learning methods, which have been successfully adopted for regression problems. However, their use in reliability applications has not been widely explored. In this paper, a comparative analysis is presented in order to evaluate the SVM effectiveness in forecasting time-to-failure and reliability of engineered components based on time series data. The performance on literature case studies of SVM regression is measured against other advanced learning methods such as the Radial Basis Function, the traditional MultiLayer Perceptron model, Box-Jenkins autoregressive-integrated-moving average and the Infinite Impulse Response Locally Recurrent Neural Networks. The comparison shows that in the analyzed cases, SVM outperforms or is comparable to other techniques. - Highlights: → Realistic modeling of reliability demands complex mathematical formulations. → SVM is proper when the relation input/output is unknown or very costly to be obtained. → Results indicate the potential of SVM for reliability time series prediction. → Reliability estimates support the establishment of adequate maintenance strategies.
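What distinguishes SVM regression from least-squares methods is its ε-insensitive loss: residuals inside a ±ε tube around the prediction cost nothing, and residuals outside it are penalized linearly. A one-line illustration:

```python
def eps_insensitive_loss(y_true, y_pred, eps=0.1):
    """SVR's epsilon-insensitive loss: zero inside the eps-tube,
    linear beyond it."""
    return max(0.0, abs(y_true - y_pred) - eps)


inside = eps_insensitive_loss(1.0, 1.05, eps=0.1)   # within the tube: 0
outside = eps_insensitive_loss(1.0, 1.30, eps=0.1)  # 0.3 residual, 0.2 loss
```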

  19. Neural networks prediction and fault diagnosis applied to stationary and non stationary ARMA (Autoregressive moving average) modeled time series

    International Nuclear Information System (INIS)

    Marseguerra, M.; Minoggio, S.; Rossi, A.; Zio, E.

    1992-01-01

    The correlated noise affecting many industrial plants under stationary or cyclo-stationary conditions - nuclear reactors included - has been successfully modeled by autoregressive moving average (ARMA) techniques due to their versatility. The relatively recent neural network methods have similar features, and much effort is being devoted to exploring their usefulness in forecasting and control. Identifying a signal by means of an ARMA model gives rise to the problem of selecting its correct order. Similar difficulties must be faced when applying neural network methods; specifically, particular care must be given to setting up the appropriate network topology, the data normalization procedure and the learning code. In the present paper the capability of some neural networks to learn ARMA and seasonal ARMA processes is investigated. The results of the tested cases look promising, since they indicate that the neural networks learn the underlying process with relative ease, so that their forecasting capability may represent a convenient fault diagnosis tool. (Author)
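
    The order-selection problem mentioned above can be illustrated with a hand-rolled sketch; this is an assumption-laden stand-in for the paper's setup, not its method: simulate an ARMA process, fit autoregressive models of increasing order by least squares, and pick the order that minimizes an AIC-style criterion.

    ```python
    import numpy as np

    def simulate_arma(phi, theta, n, seed=0):
        """Simulate an ARMA(p, q) series driven by unit-variance white noise."""
        rng = np.random.default_rng(seed)
        p, q = len(phi), len(theta)
        burn = max(p, q)
        e = rng.standard_normal(n + burn)
        x = np.zeros(n + burn)
        for t in range(burn, len(x)):
            ar = sum(phi[i] * x[t - 1 - i] for i in range(p))
            ma = sum(theta[j] * e[t - 1 - j] for j in range(q))
            x[t] = ar + ma + e[t]
        return x[burn:]

    def fit_ar(x, p):
        """Least-squares AR(p) fit; returns coefficients and residual variance."""
        X = np.column_stack([x[p - 1 - i:len(x) - 1 - i] for i in range(p)])
        y = x[p:]
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ coef
        return coef, resid.var()

    def select_order(x, max_p=8):
        """Pick the AR order minimizing AIC = n*log(sigma2) + 2p."""
        n = len(x)
        aics = [n * np.log(fit_ar(x, p)[1]) + 2 * p for p in range(1, max_p + 1)]
        return 1 + int(np.argmin(aics))
    ```

    On a strongly autocorrelated AR(2) process the criterion reliably rejects underfitted orders; choosing between the true order and slightly larger ones is where the difficulty described in the abstract lives.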

  20. Abstract Expression Grammar Symbolic Regression

    Science.gov (United States)

    Korns, Michael F.

    This chapter examines the use of Abstract Expression Grammars to perform the entire Symbolic Regression process without the use of Genetic Programming per se. The techniques explored produce a symbolic regression engine which has absolutely no bloat, which allows total user control of the search space and output formulas, and which is faster and more accurate than the engines produced in our previous papers using Genetic Programming. The genome is an all-vector structure with four chromosomes plus additional epigenetic and constraint vectors, allowing total user control of the search space and the final output formulas. A combination of specialized compiler techniques, genetic algorithms, particle swarm, age-layered populations, plus discrete and continuous differential evolution is used to produce an improved symbolic regression system. Nine base test cases from the literature are used to test the improvement in speed and accuracy. The improved results indicate that these techniques move us a big step closer toward future industrial-strength symbolic regression systems.

  1. GIS Tools to Estimate Average Annual Daily Traffic

    Science.gov (United States)

    2012-06-01

    This project presents five tools that were created for a geographical information system to estimate Annual Average Daily Traffic using linear regression. Three of the tools can be used to prepare spatial data for linear regression. One tool can be...

  2. An integrated fuzzy regression algorithm for energy consumption estimation with non-stationary data: A case study of Iran

    Energy Technology Data Exchange (ETDEWEB)

    Azadeh, A; Seraj, O [Department of Industrial Engineering and Research Institute of Energy Management and Planning, Center of Excellence for Intelligent-Based Experimental Mechanics, College of Engineering, University of Tehran, P.O. Box 11365-4563 (Iran); Saberi, M [Department of Industrial Engineering, University of Tafresh (Iran); Institute for Digital Ecosystems and Business Intelligence, Curtin University of Technology, Perth (Australia)

    2010-06-15

    This study presents an integrated fuzzy regression and time series framework to estimate and predict electricity demand for seasonal and monthly changes in electricity consumption, especially in developing countries such as China and Iran with non-stationary data. It is difficult to model the uncertain behavior of energy consumption with only conventional fuzzy regression (FR) or time series models, and the integrated algorithm can be an ideal substitute in such cases. First, the preferred time series model is selected from linear or nonlinear candidates: after selecting the preferred Auto Regression Moving Average (ARMA) model, the McLeod-Li test is applied to determine whether the nonlinearity condition holds. When the nonlinearity condition is satisfied, the preferred nonlinear model is selected and defined as the preferred time series model. Finally, the preferred model is chosen from the fuzzy regression and time series models by the Granger-Newbold test. The impact of data preprocessing on fuzzy regression performance is also considered. Monthly electricity consumption of Iran from March 1994 to January 2005 is used as the case study. The superiority of the proposed algorithm is shown by comparing its results with other intelligent tools such as Genetic Algorithms (GA) and Artificial Neural Networks (ANN). (author)

  3. Classification and regression trees

    CERN Document Server

    Breiman, Leo; Olshen, Richard A; Stone, Charles J

    1984-01-01

    The methodology used to construct tree structured rules is the focus of this monograph. Unlike many other statistical procedures, which moved from pencil and paper to calculators, this text's use of trees was unthinkable before computers. Both the practical and theoretical sides have been developed in the authors' study of tree methods. Classification and Regression Trees reflects these two sides, covering the use of trees as a data analysis method, and in a more mathematical framework, proving some of their fundamental properties.

  4. Forecasting systems reliability based on support vector regression with genetic algorithms

    International Nuclear Information System (INIS)

    Chen, K.-Y.

    2007-01-01

    This study applies a novel neural-network technique, support vector regression (SVR), to forecast reliability in engine systems. The aim of this study is to examine the feasibility of SVR in systems reliability prediction by comparing it with the existing neural-network approaches and the autoregressive integrated moving average (ARIMA) model. To build an effective SVR model, SVR's parameters must be set carefully. This study proposes a novel approach, known as GA-SVR, which searches for SVR's optimal parameters using real-valued genetic algorithms, and then adopts the optimal parameters to construct the SVR models. Real reliability data for 40 sets of turbochargers were employed as the data set. The experimental results demonstrate that SVR outperforms the existing neural-network approaches and the traditional ARIMA models based on the normalized root mean square error and mean absolute percentage error.
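
    The parameter-search idea can be sketched with two substitutions hedged up front: kernel ridge regression stands in for epsilon-SVR (both are kernel regressors, but the loss differs), and plain random search stands in for the real-valued genetic algorithm. Parameter ranges are arbitrary illustration values.

    ```python
    import numpy as np

    def rbf_kernel(A, B, gamma):
        """Gaussian RBF kernel matrix between row-vector sets A and B."""
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    def fit_krr(X, y, gamma, lam):
        """Kernel ridge regression: solve (K + lam*I) alpha = y."""
        K = rbf_kernel(X, X, gamma)
        return np.linalg.solve(K + lam * np.eye(len(X)), y)

    def predict_krr(X_train, alpha, X_new, gamma):
        return rbf_kernel(X_new, X_train, gamma) @ alpha

    def random_search(X_tr, y_tr, X_val, y_val, n_iter=40, seed=0):
        """Random search over (gamma, lambda): a simple stand-in for the
        GA-driven parameter search described in the abstract."""
        rng = np.random.default_rng(seed)
        best = (None, np.inf)
        for _ in range(n_iter):
            gamma = 10 ** rng.uniform(-2, 1)     # kernel width, log-uniform
            lam = 10 ** rng.uniform(-6, 0)       # regularization, log-uniform
            alpha = fit_krr(X_tr, y_tr, gamma, lam)
            err = np.mean((predict_krr(X_tr, alpha, X_val, gamma) - y_val) ** 2)
            if err < best[1]:
                best = ((gamma, lam), err)
        return best
    ```

    A GA would replace the independent random draws with selection, crossover, and mutation over the same (gamma, lambda) encoding; the fitness evaluation (validation error of the fitted model) stays the same.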

  5. Averaging, not internal noise, limits the development of coherent motion processing

    Directory of Open Access Journals (Sweden)

    Catherine Manning

    2014-10-01

    Full Text Available The development of motion processing is a critical part of visual development, allowing children to interact with moving objects and navigate within a dynamic environment. However, global motion processing, which requires pooling motion information across space, develops late, reaching adult-like levels only by mid-to-late childhood. The reasons underlying this protracted development are not yet fully understood. In this study, we sought to determine whether the development of motion coherence sensitivity is limited by internal noise (i.e., imprecision in estimating the directions of individual elements) and/or global pooling across local estimates. To this end, we presented equivalent noise direction discrimination tasks and motion coherence tasks at both slow (1.5°/s) and fast (6°/s) speeds to children aged 5, 7, 9 and 11 years, and adults. We show that, as children get older, their levels of internal noise reduce, and they are able to average across more local motion estimates. Regression analyses indicated, however, that age-related improvements in coherent motion perception are driven solely by improvements in averaging and not by reductions in internal noise. Our results suggest that the development of coherent motion sensitivity is primarily limited by developmental changes within brain regions involved in integrating motion signals (e.g., MT/V5).

  6. Auto Regressive Moving Average (ARMA) Modeling Method for Gyro Random Noise Using a Robust Kalman Filter

    Science.gov (United States)

    Huang, Lei

    2015-01-01

    To solve the problem in which conventional ARMA modeling methods for gyro random noise require a large number of samples and converge slowly, an ARMA modeling method using robust Kalman filtering is developed. The ARMA model parameters are employed as state arguments, and unknown time-varying estimators of the observation noise are used to obtain its estimated mean and variance. Using robust Kalman filtering, the ARMA model parameters are estimated accurately. The developed ARMA modeling method has the advantages of rapid convergence and high accuracy, so the required sample size is reduced. It can be applied in modeling applications for gyro random noise in which a fast and accurate ARMA modeling method is required. PMID:26437409

  7. Estimating the population size and colony boundary of subterranean termites by using the density functions of directionally averaged capture probability.

    Science.gov (United States)

    Su, Nan-Yao; Lee, Sang-Hee

    2008-04-01

    Marked termites were released in a linear-connected foraging arena, and the spatial heterogeneity of their capture probabilities was averaged for both directions at distance r from the release point to obtain a symmetrical distribution, from which the density function of directionally averaged capture probability P(x) was derived. We hypothesized that as marked termites move into the population and given sufficient time, the directionally averaged capture probability may reach an equilibrium P(e) over the distance r and thus satisfy the equal-mixing assumption of the mark-recapture protocol. The equilibrium capture probability P(e) was used to estimate the population size N. The hypothesis was tested in a 50-m extended foraging arena to simulate the distance factor of field colonies of subterranean termites. Over the 42-d test period, the density functions of directionally averaged capture probability P(x) exhibited four phases: an exponential decline phase, a linear decline phase, an equilibrium phase, and a postequilibrium phase. The equilibrium capture probability P(e), derived as the intercept of the linear regression during the equilibrium phase, correctly projected N estimates that were not significantly different from the known number of workers in the arena. Because the area beneath the probability density function is a constant (50% in this study), preequilibrium regression parameters and P(e) were used to estimate the population boundary distance l, which is the distance between the release point and the boundary beyond which the population is absent.
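
    The regression step can be sketched as follows. The simulated capture-probability curve and the Lincoln-index style estimator N = M / P(e) are assumptions for illustration only; the paper's exact estimator and data differ.

    ```python
    import numpy as np

    def equilibrium_capture(r, p, fit_range):
        """Linear regression of capture probability p on distance r over the
        equilibrium phase; returns (intercept, slope).  The intercept plays
        the role of the equilibrium capture probability P(e)."""
        lo, hi = fit_range
        mask = (r >= lo) & (r <= hi)
        slope, intercept = np.polyfit(r[mask], p[mask], 1)
        return intercept, slope

    def estimate_population(n_marked, p_e):
        """Hypothetical Lincoln-index style estimate N = M / P(e)."""
        return n_marked / p_e
    ```

    On a synthetic curve with an exponential decline followed by a flat equilibrium phase, regressing only over the flat segment recovers the plateau level, and the population estimate follows directly from it.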

  8. Timescale Halo: Average-Speed Targets Elicit More Positive and Less Negative Attributions than Slow or Fast Targets

    Science.gov (United States)

    Hernandez, Ivan; Preston, Jesse Lee; Hepler, Justin

    2014-01-01

    Research on the timescale bias has found that observers perceive more capacity for mind in targets moving at an average speed, relative to slow or fast moving targets. The present research revisited the timescale bias as a type of halo effect, where normal-speed people elicit positive evaluations and abnormal-speed (slow and fast) people elicit negative evaluations. In two studies, participants viewed videos of people walking at a slow, average, or fast speed. We find evidence for a timescale halo effect: people walking at an average speed were attributed more positive mental traits, but fewer negative mental traits, relative to slow or fast moving people. These effects held across both cognitive and emotional dimensions of mind and were mediated by overall positive/negative ratings of the person. These results suggest that, rather than eliciting greater perceptions of general mind, the timescale bias may reflect a generalized positivity toward average-speed people relative to slow or fast moving people. PMID:24421882

  9. Consensus in averager-copier-voter networks of moving dynamical agents

    Science.gov (United States)

    Shang, Yilun

    2017-02-01

    This paper deals with a hybrid opinion dynamics comprising averager, copier, and voter agents, which ramble as random walkers on a spatial network. Agents exchange information following some deterministic and stochastic protocols if they reside at the same site at the same time. Based on the stochastic stability of Markov chains, sufficient conditions guaranteeing consensus in the sense of almost sure convergence are obtained. The ultimate consensus state is identified in the form of an ergodicity result. Simulation studies are performed to validate the effectiveness and applicability of the theoretical results. The existence or non-existence of voters, and their proportion, are shown to play key roles in the consensus-reaching process.

  10. Watershed regressions for pesticides (WARP) for predicting atrazine concentration in Corn Belt streams

    Science.gov (United States)

    Stone, Wesley W.; Gilliom, Robert J.

    2011-01-01

    Watershed Regressions for Pesticides (WARP) models, previously developed for atrazine at the national scale, can be improved for application to the U.S. Corn Belt region by developing region-specific models that include important watershed characteristics that are influential in predicting atrazine concentration statistics within the Corn Belt. WARP models for the Corn Belt (WARP-CB) were developed for predicting annual maximum moving-average (14-, 21-, 30-, 60-, and 90-day durations) and annual 95th-percentile atrazine concentrations in streams of the Corn Belt region. All streams used in development of WARP-CB models drain watersheds with atrazine use intensity greater than 17 kilograms per square kilometer (kg/km2). The WARP-CB models accounted for 53 to 62 percent of the variability in the various concentration statistics among the model-development sites.

  11. and Multinomial Logistic Regression

    African Journals Online (AJOL)

    This work presented the results of an experimental comparison of two models: Multinomial Logistic Regression (MLR) and Artificial Neural Network (ANN) for classifying students based on their academic performance. The predictive accuracy for each model was measured by their average Classification Correct Rate (CCR).

  12. A Statistical Mechanics Approach to Approximate Analytical Bootstrap Averages

    DEFF Research Database (Denmark)

    Malzahn, Dorthe; Opper, Manfred

    2003-01-01

    We apply the replica method of Statistical Physics combined with a variational method to the approximate analytical computation of bootstrap averages for estimating the generalization error. We demonstrate our approach on regression with Gaussian processes and compare our results with averages...

  13. The Efficiency of OLS Estimators of Structural Parameters in a Simple Linear Regression Model in the Calibration of the Averages Scheme

    Directory of Open Access Journals (Sweden)

    Kowal Robert

    2016-12-01

    Full Text Available A simple linear regression model is one of the pillars of classic econometrics, and multiple areas of research function within its scope. One of the many fundamental questions in the model concerns proving the efficiency of the most commonly used OLS estimators and examining their properties. In the literature on the subject one can find approaches to this question and certain solutions in that regard; methodically, they are borrowed from the multiple regression model or from a boundary partial model. Not everything here, however, is complete and consistent. In this paper a completely new scheme is proposed, based on applying the Cauchy-Schwarz inequality to a constraint aggregated from appropriately calibrated secondary unbiasedness constraints, which, as a result of choosing the appropriate calibrator for each variable, leads directly to showing this property. The choice of such a calibrator is a separate issue. These deliberations, on account of the volume and kinds of calibration, were divided into a few parts. In this one, the efficiency of OLS estimators is proven in a mixed scheme of calibration by averages, that is, a preliminary scheme, within the most basic frames of the proposed methodology. In these frames the future outlines and general premises constituting the base of more distant generalizations are created.

  14. Ethnicity at the individual and neighborhood level as an explanation for moving out of the neighborhood

    NARCIS (Netherlands)

    Schaake, K.; Burgers, J.; Mulder, C.H.

    2010-01-01

    We address the influence of both the ethnic composition of the neighborhood and the ethnicity of individual residents on moving out of neighborhoods in the Netherlands. Using the Housing Research Netherlands survey and multinomial logistic regression analyses of moving out versus not moving or

  15. Regression and regression analysis time series prediction modeling on climate data of quetta, pakistan

    International Nuclear Information System (INIS)

    Jafri, Y.Z.; Kamal, L.

    2007-01-01

    Various statistical techniques were used on five-year data (1998-2002) of average humidity, rainfall, and maximum and minimum temperatures. Relationships in regression analysis time series (RATS) were developed for determining the overall trend of these climate parameters, on the basis of which forecast models can be corrected and modified. We computed the coefficient of determination as a measure of goodness of fit for our polynomial regression analysis time series (PRATS). Correlations for multiple linear regression (MLR) and multiple linear regression analysis time series (MLRATS) were also developed for deciphering the interdependence of weather parameters. Spearman's rank correlation and the Goldfeld-Quandt test were used to check the uniformity or non-uniformity of variances in our fit to polynomial regression (PR). The Breusch-Pagan test was applied to MLR and MLRATS, respectively, which yielded homoscedasticity. We also employed Bartlett's test for homogeneity of variances on the five-year data of rainfall and humidity, which showed that the variances in the rainfall data were not homogeneous, while those for humidity were. Our results on regression and regression analysis time series show the best fit for prediction modeling of the climatic data of Quetta, Pakistan. (author)

  16. FPGA based computation of average neutron flux and e-folding period for start-up range of reactors

    International Nuclear Information System (INIS)

    Ram, Rajit; Borkar, S.P.; Dixit, M.Y.; Das, Debashis

    2013-01-01

    Pulse processing instrumentation channels used for reactor applications play a vital role in ensuring nuclear safety in the startup range of reactor operation and also during fuel loading and the first approach to criticality. These channels are intended for continuous run-time computation of the equivalent reactor core neutron flux and e-folding period. This paper focuses only on the computational part of these instrumentation channels, which is implemented in a single FPGA using a 32-bit floating point arithmetic engine. The computations of average count rate, log of average count rate, log rate and reactor period are done in VHDL using a digital circuit realization approach. The computation of average count rate uses a fully adaptive window size moving average method, while a Taylor series expansion for logarithms is implemented in the FPGA to compute the log of count rate, log rate and reactor e-folding period. This paper describes the block diagrams of the digital logic realization in the FPGA and the advantage of the fully adaptive window size moving average technique over the conventional fixed size moving average technique for pulse processing in reactor instrumentation. (author)
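
    The behavioural difference between a fixed and a fully adaptive window can be sketched in software. The FPGA design itself is VHDL; this floating-point Python sketch, with assumed window limits and an assumed step-change test, only illustrates why adapting the window helps: wide windows smooth a steady count rate, but a collapsed window tracks a flux step quickly.

    ```python
    import numpy as np

    def fixed_moving_average(counts, window):
        """Fixed-size trailing moving average of a pulse-count stream."""
        out = np.empty(len(counts))
        for t in range(len(counts)):
            out[t] = counts[max(0, t - window + 1):t + 1].mean()
        return out

    def adaptive_moving_average(counts, min_w=4, max_w=64, jump=3.0):
        """Moving average whose window grows while the signal is steady and
        collapses when a step change is detected.  A simplified sketch of a
        fully adaptive scheme; the thresholds are assumptions."""
        out = np.empty(len(counts))
        w = min_w
        for t in range(len(counts)):
            seg = counts[max(0, t - w + 1):t + 1]
            mu, sd = seg.mean(), seg.std() + 1e-9
            if abs(counts[t] - mu) > jump * sd:
                w = min_w              # step change: shrink for fast response
            else:
                w = min(w + 1, max_w)  # steady signal: widen for smoothing
            out[t] = counts[max(0, t - w + 1):t + 1].mean()
        return out
    ```

    After a simulated step in count rate, the adaptive estimate converges to the new level within a few samples, while the wide fixed window is still dominated by pre-step counts.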

  17. An approximate analytical approach to resampling averages

    DEFF Research Database (Denmark)

    Malzahn, Dorthe; Opper, M.

    2004-01-01

    Using a novel reformulation, we develop a framework to compute approximate resampling data averages analytically. The method avoids multiple retraining of statistical models on the samples. Our approach uses a combination of the replica "trick" of statistical physics and the TAP approach for approximate Bayesian inference. We demonstrate our approach on regression with Gaussian processes. A comparison with averages obtained by Monte-Carlo sampling shows that our method achieves good accuracy.

  18. Improved Multiscale Entropy Technique with Nearest-Neighbor Moving-Average Kernel for Nonlinear and Nonstationary Short-Time Biomedical Signal Analysis

    Directory of Open Access Journals (Sweden)

    S. P. Arunachalam

    2018-01-01

    Full Text Available Analysis of biomedical signals can yield invaluable information for prognosis, diagnosis, therapy evaluation, risk assessment, and disease prevention, but such signals are often recorded as short time series that challenge existing complexity classification algorithms such as Shannon entropy (SE) and other techniques. The purpose of this study was to improve the previously developed multiscale entropy (MSE) technique by incorporating a nearest-neighbor moving-average kernel, which can be used for analysis of nonlinear and nonstationary short time series of physiological data. The approach was tested for robustness with respect to noise using simulated sinusoidal and ECG waveforms. The feasibility of MSE for discriminating between normal sinus rhythm (NSR) and atrial fibrillation (AF) was tested on a single-lead ECG. In addition, the MSE algorithm was applied to identify the pivot points of rotors that were induced in ex vivo isolated rabbit hearts. The improved MSE technique robustly estimated the complexity of the signal compared to SE under various noises, discriminated NSR and AF on single-lead ECG, and precisely identified the pivot points of ex vivo rotors by providing better contrast between the rotor core and the peripheral region. The improved MSE technique can thus provide efficient complexity analysis of a variety of nonlinear and nonstationary short-time biomedical signals.
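
    A minimal sketch of the underlying idea: sample entropy computed on coarse-grained copies of the signal, with a moving-average kernel at each scale. The plain boxcar kernel and parameter choices here are assumptions for illustration; the paper's nearest-neighbor variant may differ in detail.

    ```python
    import numpy as np

    def sample_entropy(x, m=2, r=0.2):
        """Sample entropy SampEn(m, r) of a 1-D series (r relative to std)."""
        x = np.asarray(x, float)
        tol = r * x.std()
        def matched_pairs(mm):
            n = len(x) - mm + 1
            emb = np.array([x[i:i + mm] for i in range(n)])       # embeddings
            d = np.abs(emb[:, None, :] - emb[None, :, :]).max(-1)  # Chebyshev
            return ((d <= tol).sum() - n) / 2                      # drop self-matches
        b, a = matched_pairs(m), matched_pairs(m + 1)
        return -np.log(a / b) if a > 0 and b > 0 else np.inf

    def mse_moving_average(x, scales=(1, 2, 3, 4), m=2, r=0.2):
        """Multiscale entropy: smooth with a moving-average kernel at each
        scale, then compute sample entropy of the coarse-grained series."""
        out = []
        for s in scales:
            if s == 1:
                coarse = np.asarray(x, float)
            else:
                coarse = np.convolve(x, np.ones(s) / s, mode='valid')
            out.append(sample_entropy(coarse, m, r))
        return out
    ```

    A regular signal (a sinusoid) should score markedly lower entropy than white noise at the base scale, which is the basic discrimination the technique relies on.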

  19. Identification of Civil Engineering Structures using Vector ARMA Models

    DEFF Research Database (Denmark)

    Andersen, P.

    The dissertation treats the matter of systems identification and modelling of load-bearing constructions using Auto-Regressive Moving Average Vector (ARMAV) models.

  20. Move-by-move dynamics of the advantage in chess matches reveals population-level learning of the game.

    Directory of Open Access Journals (Sweden)

    Haroldo V Ribeiro

    Full Text Available The complexity of chess matches has attracted broad interest since the game's invention. This complexity and the availability of a large number of recorded matches make chess an ideal model system for the study of population-level learning of a complex system. We systematically investigate the move-by-move dynamics of the white player's advantage from over seventy thousand high-level chess matches spanning over 150 years. We find that the average advantage of the white player is positive and that it has been increasing over time. Currently, the average advantage of the white player is 0.17 pawns, but it is exponentially approaching a value of 0.23 pawns with a characteristic time scale of 67 years. We also study the diffusion of the move dependence of the white player's advantage and find that it is non-Gaussian, has long-ranged anti-correlations, and that after an initial period with no diffusion it becomes super-diffusive. We find that the duration of the non-diffusive period, corresponding to the opening stage of a match, is increasing in length and exponentially approaching a value of 15.6 moves with a characteristic time scale of 130 years. We interpret these two trends as resulting from learning of the features of the game. Additionally, we find that the exponent [Formula: see text] characterizing the super-diffusive regime is increasing toward a value of 1.9, close to the ballistic regime. We suggest that this trend is due to the increased broadening of the range of abilities of chess players participating in major tournaments.

  1. Move-by-move dynamics of the advantage in chess matches reveals population-level learning of the game.

    Science.gov (United States)

    Ribeiro, Haroldo V; Mendes, Renio S; Lenzi, Ervin K; del Castillo-Mussot, Marcelo; Amaral, Luís A N

    2013-01-01

    The complexity of chess matches has attracted broad interest since the game's invention. This complexity and the availability of a large number of recorded matches make chess an ideal model system for the study of population-level learning of a complex system. We systematically investigate the move-by-move dynamics of the white player's advantage from over seventy thousand high-level chess matches spanning over 150 years. We find that the average advantage of the white player is positive and that it has been increasing over time. Currently, the average advantage of the white player is 0.17 pawns, but it is exponentially approaching a value of 0.23 pawns with a characteristic time scale of 67 years. We also study the diffusion of the move dependence of the white player's advantage and find that it is non-Gaussian, has long-ranged anti-correlations, and that after an initial period with no diffusion it becomes super-diffusive. We find that the duration of the non-diffusive period, corresponding to the opening stage of a match, is increasing in length and exponentially approaching a value of 15.6 moves with a characteristic time scale of 130 years. We interpret these two trends as resulting from learning of the features of the game. Additionally, we find that the exponent [Formula: see text] characterizing the super-diffusive regime is increasing toward a value of 1.9, close to the ballistic regime. We suggest that this trend is due to the increased broadening of the range of abilities of chess players participating in major tournaments.

  2. Inflation, Forecast Intervals and Long Memory Regression Models

    NARCIS (Netherlands)

    C.S. Bos (Charles); Ph.H.B.F. Franses (Philip Hans); M. Ooms (Marius)

    2001-01-01

    textabstractWe examine recursive out-of-sample forecasting of monthly postwar U.S. core inflation and log price levels. We use the autoregressive fractionally integrated moving average model with explanatory variables (ARFIMAX). Our analysis suggests a significant explanatory power of leading

  3. Inflation, Forecast Intervals and Long Memory Regression Models

    NARCIS (Netherlands)

    Ooms, M.; Bos, C.S.; Franses, P.H.

    2003-01-01

    We examine recursive out-of-sample forecasting of monthly postwar US core inflation and log price levels. We use the autoregressive fractionally integrated moving average model with explanatory variables (ARFIMAX). Our analysis suggests a significant explanatory power of leading indicators

  4. Multi-pulse orbits and chaotic dynamics in motion of parametrically excited viscoelastic moving belt

    International Nuclear Information System (INIS)

    Zhang Wei; Yao Minghui

    2006-01-01

    In this paper, the Shilnikov type multi-pulse orbits and chaotic dynamics of a parametrically excited viscoelastic moving belt are studied in detail. Using a Kelvin-type viscoelastic constitutive law, the equations of motion for the viscoelastic moving belt with external damping and parametric excitation are given. The four-dimensional averaged equation for the case of primary parametric resonance is obtained by directly applying the method of multiple scales and Galerkin's approach to the partial differential governing equation of the viscoelastic moving belt. From the averaged equations obtained here, the theory of normal forms is used to give explicit expressions of the normal form with a double zero and a pair of pure imaginary eigenvalues. Based on the normal form, the energy-phase method is employed to analyze the global bifurcations and chaotic dynamics of the parametrically excited viscoelastic moving belt. The global bifurcation analysis indicates that there exist heteroclinic bifurcations and Shilnikov type multi-pulse homoclinic orbits in the averaged equation. These results imply the existence of chaos in the sense of the Smale horseshoe in the parametrically excited viscoelastic moving belt. The chaotic motions of viscoelastic moving belts are also found by numerical simulation, and a new phenomenon of multi-pulse jumping orbits is observed in three-dimensional phase space.

  5. Stochastic modelling of the monthly average maximum and minimum temperature patterns in India 1981-2015

    Science.gov (United States)

    Narasimha Murthy, K. V.; Saravana, R.; Vijaya Kumar, K.

    2018-04-01

    The paper investigates the stochastic modelling and forecasting of monthly average maximum and minimum temperature patterns through a suitable seasonal auto-regressive integrated moving average (SARIMA) model for the period 1981-2015 in India. The variations and distributions of monthly maximum and minimum temperatures are analyzed through box plots and cumulative distribution functions. The time series plot indicates that the maximum temperature series contains sharp peaks in almost all years, while this is not true for the minimum temperature series, so the two series are modelled separately. The candidate SARIMA model is chosen by inspecting the autocorrelation function (ACF), partial autocorrelation function (PACF), and inverse autocorrelation function (IACF) of the logarithmically transformed temperature series. The SARIMA (1, 0, 0) × (0, 1, 1)12 model is selected for the monthly average maximum and minimum temperature series based on the minimum Bayesian information criterion. The model parameters are obtained by the maximum-likelihood method with the help of the standard errors of the residuals. The adequacy of the selected model is determined through correlation diagnostics (ACF, PACF, IACF, and p values of the Ljung-Box test statistic of the residuals) and through normality diagnostics (kernel and normal density curves of the histogram and the Q-Q plot). Finally, monthly maximum and minimum temperature patterns of India for the next 3 years are forecast with the selected model.
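
    The seasonal-model workflow can be caricatured without a SARIMA library: difference at lag 12 to remove the annual cycle, fit a small autoregressive term on the differenced series, forecast recursively, and invert the differencing. This hand-rolled AR(1)-on-seasonal-differences sketch is a simplified stand-in for the SARIMA (1, 0, 0) × (0, 1, 1)12 model, not the paper's procedure.

    ```python
    import numpy as np

    def seasonal_ar_forecast(y, season=12, horizon=12):
        """Seasonal differencing at lag `season`, an AR(1) fit on the
        differenced series, then recursive forecasting with the seasonal
        differencing inverted."""
        y = np.asarray(y, float)
        d = y[season:] - y[:-season]             # seasonal difference
        x, z = d[:-1], d[1:]
        phi = (x @ z) / (x @ x)                  # AR(1) coefficient, least squares
        hist = list(y)
        last_d = d[-1]
        for _ in range(horizon):
            last_d = phi * last_d                # AR(1) step on differenced scale
            hist.append(hist[-season] + last_d)  # invert the seasonal difference
        return np.array(hist[len(y):])
    ```

    On a synthetic monthly series with a clean annual cycle, the forecast essentially repeats the last observed season plus a decaying AR correction, which is the qualitative behaviour a low-order seasonal model encodes.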

  6. The Effect of Direction on Cursor Moving Kinematics

    Directory of Open Access Journals (Sweden)

    Chiu-Ping Lu

    2012-02-01

    Full Text Available There have been only a few studies substantiating the kinematic characteristics of cursor movement. In this study, a quantitative experimental research method was used to explore the effect of moving direction on the kinematics of cursor movement in 24 typical young persons using our previously developed computerized measuring program. The results of multiple one-way repeated-measures ANOVAs and post hoc LSD tests demonstrated that the moving direction had effects on average velocity, movement time, movement unit and peak velocity. Moving leftward showed better efficiency than moving rightward, upward and downward, judging from kinematic evidence such as velocity, movement unit and time. The unique pattern of the power spectral density (PSD) of velocity (the strategy for power application) explained why smoothness was still maintained while moving leftward, even in an unstable situation with larger momentum. Moreover, the information from this cursor moving study can guide us in relocating the toolbars and icons in the window interface, especially for individuals with physical disabilities whose performance is easily interrupted while controlling the cursor in specific directions.

  7. A landslide-quake detection algorithm with STA/LTA and diagnostic functions of moving average and scintillation index: A preliminary case study of the 2009 Typhoon Morakot in Taiwan

    Science.gov (United States)

    Wu, Yu-Jie; Lin, Guan-Wei

    2017-04-01

    Since 1999, Taiwan has experienced a rapid rise in the number of landslides, which reached a peak after the 2009 Typhoon Morakot. Although it has been shown that ground-motion signals induced by slope processes can be recorded by seismographs, such signals are difficult to distinguish from continuous seismic records due to the lack of distinct P and S waves. In this study, we combine three common seismic detectors: the short-term average/long-term average (STA/LTA) approach and two diagnostic functions, the moving average and the scintillation index. Based on these detectors, we have established an auto-detection algorithm for landslide-quakes, with detection thresholds defined to distinguish landslide-quakes from earthquakes and background noise. To further evaluate the proposed algorithm, we apply it to seismic archives recorded by the Broadband Array in Taiwan for Seismology (BATS) during the 2009 Typhoon Morakot, and the discrete landslide-quakes detected by the automatic algorithm are then located. The detection results are consistent with those of visual inspection, and hence the algorithm can be used to automatically monitor landslide-quakes.
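The STA/LTA detector at the core of such an algorithm can be sketched in a few lines of numpy; the trace, window lengths and threshold below are illustrative stand-ins, not the study's values.

```python
import numpy as np

def sta_lta(signal, n_sta, n_lta):
    """Short-term-average / long-term-average ratio of signal energy,
    with both windows ending at the same sample (causal detector)."""
    energy = signal.astype(float) ** 2
    csum = np.concatenate(([0.0], np.cumsum(energy)))
    sta = (csum[n_sta:] - csum[:-n_sta]) / n_sta   # mean over last n_sta samples
    lta = (csum[n_lta:] - csum[:-n_lta]) / n_lta   # mean over last n_lta samples
    return sta[n_lta - n_sta:] / np.maximum(lta, 1e-12)

# Synthetic trace: background noise plus an emergent burst without sharp
# P/S onsets, mimicking a landslide-quake.
rng = np.random.default_rng(1)
trace = rng.normal(0.0, 1.0, 5000)
trace[3000:3600] += rng.normal(0.0, 6.0, 600)

ratio = sta_lta(trace, n_sta=50, n_lta=500)
j = int(np.argmax(ratio > 4.0))   # first ratio index exceeding the threshold
trigger = j + 500 - 1             # ratio index j ends at trace sample j + n_lta - 1
print(trigger)
```

The paper's moving-average and scintillation-index diagnostics would then be applied to the triggered windows to separate landslide-quakes from ordinary earthquakes.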

  8. Heterogeneous CPU-GPU moving targets detection for UAV video

    Science.gov (United States)

    Li, Maowen; Tang, Linbo; Han, Yuqi; Yu, Chunlei; Zhang, Chao; Fu, Huiquan

    2017-07-01

    Moving target detection is gaining popularity in civilian and military applications. On some motion-detection monitoring platforms, low-resolution stationary cameras are being replaced by moving HD cameras mounted on UAVs. The pixels of moving targets in HD video taken by a UAV are always in the minority, and the background of the frame is usually moving because of the motion of the UAV. The high computational cost of the detection algorithm prevents running it at full frame resolution in real time. Hence, to solve the problem of moving target detection in UAV video, we propose a heterogeneous CPU-GPU moving target detection algorithm. More specifically, we use background registration to eliminate the impact of the moving background and frame differencing to detect small moving targets. To achieve real-time processing, we design a heterogeneous CPU-GPU framework for our method. The experimental results show that our method can detect the main moving targets in HD video taken by a UAV, with an average processing time of 52.16 ms per frame, which is fast enough for real-time use.
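The frame-difference step (after background registration, which is assumed already done) reduces to thresholding the absolute difference of consecutive frames; a CPU-only numpy sketch with a synthetic target:

```python
import numpy as np

def frame_difference_targets(prev, curr, thresh=30):
    """Flag candidate moving-target pixels by differencing two registered
    consecutive frames and thresholding the absolute difference."""
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    return diff > thresh  # boolean mask of candidate moving pixels

# Two synthetic 8-bit frames: static background plus a small target that moves.
prev = np.full((120, 160), 100, dtype=np.uint8)
curr = prev.copy()
prev[40:44, 50:54] = 220   # target at its old position
curr[40:44, 56:60] = 220   # target at its new position

mask = frame_difference_targets(prev, curr)
print(mask.sum())  # pixels flagged at both the old and new target positions
```

In the paper's pipeline this per-pixel operation is the part offloaded to the GPU, while the CPU handles control flow and registration.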

  9. Transport of the moving barrier driven by chiral active particles

    Science.gov (United States)

    Liao, Jing-jing; Huang, Xiao-qun; Ai, Bao-quan

    2018-03-01

    Transport of a moving V-shaped barrier exposed to a bath of chiral active particles is investigated in a two-dimensional channel. Due to the chirality of active particles and the transversal asymmetry of the barrier position, active particles can power and steer the directed transport of the barrier in the longitudinal direction. The transport of the barrier is determined by the chirality of active particles. The moving barrier and active particles move in the opposite directions. The average velocity of the barrier is much larger than that of active particles. There exist optimal parameters (the chirality, the self-propulsion speed, the packing fraction, and the channel width) at which the average velocity of the barrier takes its maximal value. In particular, tailoring the geometry of the barrier and the active concentration provides novel strategies to control the transport properties of micro-objects or cargoes in an active medium.

  10. The moving-window Bayesian maximum entropy framework: estimation of PM(2.5) yearly average concentration across the contiguous United States.

    Science.gov (United States)

    Akita, Yasuyuki; Chen, Jiu-Chiuan; Serre, Marc L

    2012-09-01

    Geostatistical methods are widely used in estimating long-term exposures for epidemiological studies on air pollution, despite their limited capabilities to handle spatial non-stationarity over large geographic domains and the uncertainty associated with missing monitoring data. We developed a moving-window (MW) Bayesian maximum entropy (BME) method and applied this framework to estimate fine particulate matter (PM(2.5)) yearly average concentrations over the contiguous US. The MW approach accounts for the spatial non-stationarity, while the BME method rigorously processes the uncertainty associated with data missingness in the air-monitoring system. In the cross-validation analyses conducted on a set of randomly selected complete PM(2.5) data in 2003 and on simulated data with different degrees of missing data, we demonstrate that the MW approach alone leads to at least a 17.8% reduction in mean square error (MSE) in estimating the yearly PM(2.5). Moreover, the MWBME method further reduces the MSE by 8.4-43.7% as the proportion of incomplete data increases from 18.3% to 82.0%. The MWBME approach leads to significant reductions in estimation error and thus is recommended for epidemiological studies investigating the effect of long-term exposure to PM(2.5) across large geographical domains with expected spatial non-stationarity.

  11. The moving-window Bayesian Maximum Entropy framework: Estimation of PM2.5 yearly average concentration across the contiguous United States

    Science.gov (United States)

    Akita, Yasuyuki; Chen, Jiu-Chiuan; Serre, Marc L.

    2013-01-01

    Geostatistical methods are widely used in estimating long-term exposures for air pollution epidemiological studies, despite their limited capabilities to handle spatial non-stationarity over large geographic domains and uncertainty associated with missing monitoring data. We developed a moving-window (MW) Bayesian Maximum Entropy (BME) method and applied this framework to estimate fine particulate matter (PM2.5) yearly average concentrations over the contiguous U.S. The MW approach accounts for the spatial non-stationarity, while the BME method rigorously processes the uncertainty associated with data missingness in the air monitoring system. In the cross-validation analyses conducted on a set of randomly selected complete PM2.5 data in 2003 and on simulated data with different degrees of missing data, we demonstrate that the MW approach alone leads to at least a 17.8% reduction in mean square error (MSE) in estimating the yearly PM2.5. Moreover, the MWBME method further reduces the MSE by 8.4% to 43.7% as the proportion of incomplete data increases from 18.3% to 82.0%. The MWBME approach leads to significant reductions in estimation error and thus is recommended for epidemiological studies investigating the effect of long-term exposure to PM2.5 across large geographical domains with expected spatial non-stationarity. PMID:22739679

  12. Short-term load forecasting with increment regression tree

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Jingfei; Stenzel, Juergen [Darmstadt University of Techonology, Darmstadt 64283 (Germany)

    2006-06-15

    This paper presents a new regression tree method for short-term load forecasting. Both increment and non-increment trees are built from the historical data to provide the data-space partition and input variable selection. A support vector machine is then applied to the samples at each regression tree node for further fine regression. Results from the different tree nodes are integrated through a weighted average to obtain the comprehensive forecasting result. The effectiveness of the proposed method is demonstrated through its application to an actual system. (author)
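A rough sketch of the node-wise scheme with scikit-learn, using a single ordinary regression tree and synthetic load data as stand-ins for the paper's increment tree and actual system: the tree partitions the data space, an SVR refines each node, and a query is routed to its node's SVR.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.svm import SVR

# Hypothetical load data: load as a function of two normalized inputs
# (think temperature and time of day).
rng = np.random.default_rng(2)
X = rng.uniform(0, 1, (400, 2))
y = 100 + 30 * X[:, 0] + 20 * np.sin(2 * np.pi * X[:, 1]) + rng.normal(0, 1, 400)

# Step 1: a tree partitions the data space (stand-in for the increment tree).
tree = DecisionTreeRegressor(max_leaf_nodes=8, random_state=0).fit(X, y)
leaf = tree.apply(X)

# Step 2: fit an SVR on the samples of each leaf for fine regression.
svrs = {lf: SVR(C=100.0).fit(X[leaf == lf], y[leaf == lf])
        for lf in np.unique(leaf)}

# Step 3: predict by routing a query to its leaf's SVR; a weighted average
# over several trees' outputs would follow the same pattern.
Xq = np.array([[0.5, 0.25]])
pred = float(svrs[tree.apply(Xq)[0]].predict(Xq)[0])
print(pred)
```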

  13. Autoregressive moving average (ARMA) model applied to quantification of cerebral blood flow using dynamic susceptibility contrast-enhanced magnetic resonance imaging

    International Nuclear Information System (INIS)

    Murase, Kenya; Yamazaki, Youichi; Shinohara, Masaaki

    2003-01-01

    The purpose of this study was to investigate the feasibility of the autoregressive moving average (ARMA) model for quantification of cerebral blood flow (CBF) with dynamic susceptibility contrast-enhanced magnetic resonance imaging (DSC-MRI) in comparison with deconvolution analysis based on singular value decomposition (DA-SVD). Using computer simulations, we generated a time-dependent concentration of the contrast agent in the volume of interest (VOI) from the arterial input function (AIF) modeled as a gamma-variate function under various CBFs, cerebral blood volumes and signal-to-noise ratios (SNRs) for three different types of residue function (exponential, triangular, and box-shaped). We also considered the effects of delay and dispersion in AIF. The ARMA model and DA-SVD were used to estimate CBF values from the simulated concentration-time curves in the VOI and AIFs, and the estimated values were compared with the assumed values. We found that the CBF value estimated by the ARMA model was more sensitive to the SNR and the delay in AIF than that obtained by DA-SVD. Although the ARMA model considerably overestimated CBF at low SNRs, it estimated the CBF more accurately than did DA-SVD at high SNRs for the exponential or triangular residue function. We believe this study will contribute to an understanding of the usefulness and limitations of the ARMA model when applied to quantification of CBF with DSC-MRI. (author)

  14. Middle and long-term prediction of UT1-UTC based on combination of Gray Model and Autoregressive Integrated Moving Average

    Science.gov (United States)

    Jia, Song; Xu, Tian-he; Sun, Zhang-zhen; Li, Jia-jing

    2017-02-01

    UT1-UTC is an important part of the Earth Orientation Parameters (EOP). High-precision predictions of UT1-UTC play a key role in practical applications such as deep space exploration, spacecraft tracking and satellite navigation and positioning. In this paper, a new prediction method combining the Gray Model (GM(1, 1)) and the Autoregressive Integrated Moving Average (ARIMA) model is developed. The main idea is as follows. Firstly, the UT1-UTC data are preprocessed by removing the leap seconds and the Earth's zonal harmonic tides to get UT1R-TAI data. Periodic terms are estimated and removed by least squares to get UT2R-TAI. Then the linear terms of the UT2R-TAI data are modeled by GM(1, 1), and the residual terms are modeled by ARIMA. Finally, the UT2R-TAI prediction is performed based on the combined GM(1, 1) and ARIMA model, and the UT1-UTC predictions are obtained by adding back the corresponding periodic terms, the leap second correction and the Earth's zonal harmonic tidal correction. The results show that the proposed model can predict UT1-UTC effectively, with higher middle- and long-term (32 to 360 days) accuracy than LS + AR, LS + MAR and WLS + MAR.
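The GM(1,1) component is well defined and small enough to sketch in numpy; the input series below is a hypothetical stand-in for the UT2R-TAI trend part, not real Earth-rotation data.

```python
import numpy as np

def gm11_forecast(x0, horizon):
    """GM(1,1) grey model: fit the whitened equation dx1/dt + a*x1 = b on the
    accumulated series x1 = cumsum(x0), then forecast the original series."""
    n = len(x0)
    x1 = np.cumsum(x0)
    z1 = 0.5 * (x1[1:] + x1[:-1])                  # background (mean) values
    B = np.column_stack([-z1, np.ones(n - 1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    k = np.arange(n + horizon)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x0_hat = np.concatenate([[x1_hat[0]], np.diff(x1_hat)])
    return x0_hat[n:]                              # out-of-sample forecast

# Hypothetical slowly growing series (a stand-in for the UT2R-TAI trend part).
x0 = np.array([2.0, 2.2, 2.45, 2.7, 3.0, 3.3])
forecast = gm11_forecast(x0, horizon=3)
print(forecast)
```

In the paper's scheme an ARIMA model would then be fitted to the residuals x0 minus the in-sample GM(1,1) fit, and the two forecasts summed.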

  15. Application of Fourier transform infrared spectroscopy and orthogonal projections to latent structures/partial least squares regression for estimation of procyanidins average degree of polymerisation.

    Science.gov (United States)

    Passos, Cláudia P; Cardoso, Susana M; Barros, António S; Silva, Carlos M; Coimbra, Manuel A

    2010-02-28

    Fourier transform infrared (FTIR) spectroscopy has been emphasised as a widespread technique for the quick assessment of food components. In this work, procyanidins were extracted with methanol and acetone/water from the seeds of white and red grape varieties. Fractionation by graded methanol/chloroform precipitations yielded 26 samples, which were characterised using thiolysis as a pre-treatment followed by HPLC-UV and MS detection. The average degree of polymerisation (DPn) of the procyanidins in the samples ranged from 2 to 11 flavan-3-ol residues. FTIR spectroscopy within the wavenumber region of 1800-700 cm(-1) allowed a partial least squares (PLS1) regression model with 8 latent variables (LVs) to be built for the estimation of the DPn, giving an RMSECV of 11.7%, an R(2) of 0.91 and an RMSEP of 2.58. The application of orthogonal projections to latent structures (O-PLS1) clarifies the interpretation of the regression model vectors. Moreover, the O-PLS procedure removed 88% of the variation not correlated with the DPn, allowing the increase of the absorbance peaks at 1203 and 1099 cm(-1) to be related to the increase of the DPn, due to the higher proportion of substitutions in the aromatic ring of the polymerised procyanidin molecules. Copyright 2009 Elsevier B.V. All rights reserved.

  16. Dog days of summer: Influences on decision of wolves to move pups

    Science.gov (United States)

    Ausband, David E.; Mitchell, Michael S.; Bassing, Sarah B.; Nordhagen, Matthew; Smith, Douglas W.; Stahler, Daniel R.

    2016-01-01

    For animals that forage widely, protecting young from predation can span relatively long time periods due to the inability of young to travel with and be protected by their parents. Moving relatively immobile young to improve access to important resources, limit detection of concentrated scent by predators, and decrease infestations by ectoparasites can be advantageous. Moving young, however, can also expose them to increased mortality risks (e.g., accidents, getting lost, predation). For group-living animals that live in variable environments and care for young over extended time periods, the influence of biotic factors (e.g., group size, predation risk) and abiotic factors (e.g., temperature and precipitation) on the decision to move young is unknown. We used data from 25 satellite-collared wolves (Canis lupus) in Idaho, Montana, and Yellowstone National Park to evaluate how these factors could influence the decision to move pups during the pup-rearing season. We hypothesized that litter size, the number of adults in a group, and perceived predation risk would positively affect the number of times gray wolves moved pups. We further hypothesized that wolves would move their pups more often when it was hot and dry to ensure sufficient access to water. Contrary to our hypothesis, monthly temperature above the 30-year average was negatively related to the number of times wolves moved their pups. Monthly precipitation above the 30-year average, however, was positively related to the amount of time wolves spent at pup-rearing sites after leaving the natal den. We found little relationship between risk of predation (by grizzly bears, humans, or conspecifics) or group and litter sizes and number of times wolves moved their pups. Our findings suggest that abiotic factors most strongly influence the decision of wolves to move pups, although responses to unpredictable biotic events (e.g., a predator encountering pups) cannot be ruled out.

  17. THE VELOCITY DISTRIBUTION OF NEARBY STARS FROM HIPPARCOS DATA. II. THE NATURE OF THE LOW-VELOCITY MOVING GROUPS

    International Nuclear Information System (INIS)

    Bovy, Jo; Hogg, David W.

    2010-01-01

    The velocity distribution of nearby stars (≲100 pc) contains many overdensities or 'moving groups', clumps of comoving stars, that are inconsistent with the standard assumption of an axisymmetric, time-independent, and steady-state Galaxy. We study the age and metallicity properties of the low-velocity moving groups based on the reconstruction of the local velocity distribution in Paper I of this series. We perform stringent, conservative hypothesis testing to establish for each of these moving groups whether it could conceivably consist of a coeval population of stars. We conclude that they do not: the moving groups are neither trivially associated with their eponymous open clusters nor with any other inhomogeneous star formation event. Concerning a possible dynamical origin of the moving groups, we test whether any of the moving groups has a higher or lower metallicity than the background population of thin disk stars, as would generically be the case if the moving groups are associated with resonances of the bar or spiral structure. We find clear evidence that the Hyades moving group has higher than average metallicity and weak evidence that the Sirius moving group has lower than average metallicity, which could indicate that these two groups are related to the inner Lindblad resonance of the spiral structure. Further, we find weak evidence that the Hercules moving group has higher than average metallicity, as would be the case if it is associated with the bar's outer Lindblad resonance. The Pleiades moving group shows no clear metallicity anomaly, arguing against a common dynamical origin for the Hyades and Pleiades groups. Overall, however, the moving groups are barely distinguishable from the background population of stars, raising the likelihood that the moving groups are associated with transient perturbations.

  18. ANALISIS CURAH HUJAN DAN DEBIT MODEL SWAT DENGAN METODE MOVING AVERAGE DI DAS CILIWUNG HULU

    Directory of Open Access Journals (Sweden)

    Defri Satiya Zuma

    2017-09-01

    Full Text Available A watershed can be regarded as a hydrological system that transforms rainwater as an input into outputs such as flow and sediment. The transformation of inputs into outputs has specific forms and properties and involves many processes, including processes occurring on the land surface, in river basins, and in the soil and aquifer. This study aimed to apply the SWAT model in the Ciliwung Hulu Watershed and to assess the effect of 3-day, 5-day, 7-day and 10-day average rainfall on the hydrological characteristics of the watershed. The correlation coefficient (r) between rainfall and discharge was positive, indicating a unidirectional relationship between rainfall and discharge in the upstream, midstream and downstream parts of the watershed. The upper limit ratio of discharge had a downward trend from upstream to downstream, while the lower limit ratio of discharge had an upward trend, showing that the discharge peak in the Ciliwung Hulu Watershed decreases from upstream to downstream while the baseflow increases. The upstream part of the watershed had the highest ratio of discharge peak to baseflow, so it requires soil and water conservation and civil engineering measures. The discussion concluded that the SWAT model could be applied well in the Ciliwung Hulu Watershed and that the average rainfall most affecting the hydrological characteristics was the 10-day average; at 10 days, all components contributed maximally to river discharge.
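The moving-average preprocessing compared in the study is a one-liner with pandas rolling windows; the daily rainfall series below is simulated, not the Ciliwung Hulu record.

```python
import numpy as np
import pandas as pd

# Hypothetical daily rainfall for a watershed (mm/day).
rng = np.random.default_rng(4)
rain = pd.Series(rng.gamma(0.6, 12.0, 365),
                 index=pd.date_range("2016-01-01", periods=365))

# Moving averages over 3, 5, 7 and 10 days, as compared in the study.
smoothed = {w: rain.rolling(window=w, min_periods=w).mean() for w in (3, 5, 7, 10)}

# Correlating each smoothed series with observed discharge would then
# identify which averaging window best explains the hydrograph.
print(smoothed[10].dropna().head(3))
```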

  19. The Value and Feasibility of Farming Differently Than the Local Average

    OpenAIRE

    Morris, Cooper; Dhuyvetter, Kevin; Yeager, Elizabeth A; Regier, Greg

    2018-01-01

    The purpose of this research is to quantify the value of being different than the local average and feasibility of distinguishing particular parts of an operation from the local average. Kansas crop farms are broken down by their farm characteristics, production practices, and management performances. An ordinary least squares regression model is used to quantify the value of having different than average characteristics, practices, and management performances. The degree farms have distingui...

  20. Optimizing Prediction Using Bayesian Model Averaging: Examples Using Large-Scale Educational Assessments.

    Science.gov (United States)

    Kaplan, David; Lee, Chansoon

    2018-01-01

    This article provides a review of Bayesian model averaging as a means of optimizing the predictive performance of common statistical models applied to large-scale educational assessments. The Bayesian framework recognizes that in addition to parameter uncertainty, there is uncertainty in the choice of models themselves. A Bayesian approach to addressing the problem of model uncertainty is the method of Bayesian model averaging. Bayesian model averaging searches the space of possible models for a set of submodels that satisfy certain scientific principles and then averages the coefficients across these submodels weighted by each model's posterior model probability (PMP). Using the weighted coefficients for prediction has been shown to yield optimal predictive performance according to certain scoring rules. We demonstrate the utility of Bayesian model averaging for prediction in education research with three examples: Bayesian regression analysis, Bayesian logistic regression, and a recently developed approach for Bayesian structural equation modeling. In each case, the model-averaged estimates are shown to yield better prediction of the outcome of interest than any submodel based on predictive coverage and the log-score rule. Implications for the design of large-scale assessments when the goal is optimal prediction in a policy context are discussed.
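A compact sketch of BMA for linear regression, using the common BIC approximation to the posterior model probabilities (the article's machinery is richer, covering logistic and structural equation models); the data and predictors are simulated, not assessment data.

```python
import itertools
import numpy as np

# Hypothetical data: the outcome is driven by two of three candidate predictors.
rng = np.random.default_rng(5)
X = rng.normal(size=(200, 3))
y = 1.0 + 2.0 * X[:, 0] + 1.5 * X[:, 1] + rng.normal(0, 1, 200)

def bic_fit(cols):
    """OLS fit on a predictor subset; returns BIC and full-length coefficients."""
    Xd = np.column_stack([np.ones(len(y))] + [X[:, c] for c in cols])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    resid = y - Xd @ beta
    bic = len(y) * np.log(np.mean(resid ** 2)) + Xd.shape[1] * np.log(len(y))
    full = np.zeros(4)                  # intercept + 3 slopes
    full[0] = beta[0]
    for j, c in enumerate(cols):
        full[1 + c] = beta[1 + j]
    return bic, full

# Enumerate the model space (all 8 predictor subsets) and average coefficients
# weighted by each model's posterior model probability (PMP, BIC-approximated).
models = [m for r in range(4) for m in itertools.combinations(range(3), r)]
bics, coefs = zip(*(bic_fit(m) for m in models))
w = np.exp(-0.5 * (np.array(bics) - min(bics)))
pmp = w / w.sum()
beta_bma = pmp @ np.array(coefs)        # model-averaged coefficients
print(np.round(beta_bma, 2))
```

Prediction then uses `beta_bma` directly, which is the sense in which BMA-weighted coefficients are claimed to be optimal under proper scoring rules.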

  1. On critical cases in limit theory for stationary increments Lévy driven moving averages

    DEFF Research Database (Denmark)

    Basse-O'Connor, Andreas; Podolskij, Mark

    averages. The limit theory heavily depends on the interplay between the given order of the increments, the considered power, the Blumenthal-Getoor index of the driving pure jump Lévy process L and the behavior of the kernel function g at 0. In this work we will study the critical cases, which were...

  2. Canada’s 2010 Tax Competitiveness Ranking: Moving to the Average but Biased Against Services

    Directory of Open Access Journals (Sweden)

    Duanjie Chen

    2011-02-01

    Full Text Available For the first time since 1975 (the year Canada’s marginal effective tax rates were first measured), Canada has become the most tax-competitive country among G-7 states with respect to taxation of capital investment. Even more remarkably, Canada accomplished this feat within a mere six years, having previously been the least tax-competitive G-7 member. Even in comparison to strongly growing emerging economies, Canada’s 2010 marginal effective tax rate on capital is still above average. The planned reductions in federal and provincial corporate taxes by 2013 will reduce Canada’s effective tax rate on new investments to 18.4 percent, below the Organization for Economic Co-operation and Development (OECD) 2010 average and close to the average of the 50 non-OECD countries studied. This remarkable change in Canada’s tax competitiveness must be maintained in the coming years, as countries are continually reducing their business taxation despite the recent fiscal pressures arising from the 2008-9 downturn in the world economy. Many countries have forged ahead with significant reforms designed to increase tax competitiveness and improve tax neutrality including Greece, Israel, Japan, New Zealand, Taiwan and the United Kingdom. The continuing bias in Canada’s corporate income tax structure favouring manufacturing and processing business warrants close scrutiny. Measured by the difference in the marginal effective tax rate on capital between manufacturing and the broad range of service sectors, Canada has the greatest gap in tax burdens between manufacturing and services among OECD countries. Surprisingly, preferential tax treatment (such as fast write-off and investment tax credits favouring only manufacturing and processing activities has become the norm in Canada, although it does not exist in most developed economies.

  3. Plans, Patterns, and Move Categories Guiding a Highly Selective Search

    Science.gov (United States)

    Trippen, Gerhard

    In this paper we present our ideas for an Arimaa-playing program (also called a bot) that uses plans and pattern matching to guide a highly selective search. We restrict move generation to moves in certain move categories to reduce the number of moves considered by the bot significantly. Arimaa is a modern board game that can be played with a standard Chess set. However, the rules of the game are not at all like those of Chess. Furthermore, Arimaa was designed to be as simple and intuitive as possible for humans, yet challenging for computers. While all established Arimaa bots use alpha-beta search with a variety of pruning techniques and other heuristics ending in an extensive positional leaf node evaluation, our new bot, Rat, starts with a positional evaluation of the current position. Based on features found in the current position - supported by pattern matching using a directed position graph - our bot Rat decides which of a given set of plans to follow. The plan then dictates what types of moves can be chosen. This is another major difference from bots that generate "all" possible moves for a particular position. Rat is only allowed to generate moves that belong to certain categories. Leaf nodes are evaluated only by a straightforward material evaluation to help avoid moves that lose material. This highly selective search looks, on average, at only 5 moves out of 5,000 to over 40,000 possible moves in a middle game position.

  4. Average monthly and annual climate maps for Bolivia

    KAUST Repository

    Vicente-Serrano, Sergio M.

    2015-02-24

    This study presents monthly and annual climate maps for relevant hydroclimatic variables in Bolivia. We used the most complete network of precipitation and temperature stations available in Bolivia, which passed a careful quality control and temporal homogenization procedure. Monthly average maps at the spatial resolution of 1 km were modeled by means of a regression-based approach using topographic and geographic variables as predictors. The monthly average maximum and minimum temperatures, precipitation and potential exoatmospheric solar radiation under clear sky conditions are used to estimate the monthly average atmospheric evaporative demand by means of the Hargreaves model. Finally, the average water balance is estimated on a monthly and annual scale for each 1 km cell by means of the difference between precipitation and atmospheric evaporative demand. The digital layers used to create the maps are available in the digital repository of the Spanish National Research Council.

  5. Recursive and non-linear logistic regression: moving on from the original EuroSCORE and EuroSCORE II methodologies.

    Science.gov (United States)

    Poullis, Michael

    2014-11-01

    EuroSCORE II, despite improving on the original EuroSCORE system, has not solved all the calibration and predictability issues. Recursive, non-linear, and mixed recursive and non-linear regression analyses were assessed with regard to the sensitivity, specificity and predictability of the original EuroSCORE and EuroSCORE II systems. The original logistic EuroSCORE, EuroSCORE II and the recursive, non-linear and mixed recursive and non-linear regression versions of these risk models were assessed via receiver operating characteristic (ROC) curves and the Hosmer-Lemeshow statistic with regard to the accuracy of predicting in-hospital mortality. Analysis was performed for isolated coronary artery bypass grafts (CABGs) (n = 2913), aortic valve replacement (AVR) (n = 814), mitral valve surgery (n = 340), combined AVR and CABG (n = 517), aortic (n = 350), miscellaneous cases (n = 642), and combinations of the above cases (n = 5576). The original EuroSCORE had an area under the ROC curve below 0.7 for isolated AVR and combined AVR and CABG, and none of the methods described raised it above 0.7. The EuroSCORE II risk model had an ROC area below 0.7 for isolated AVR only; recursive regression, non-linear regression, and mixed recursive and non-linear regression all raised it above 0.7. The original EuroSCORE had a Hosmer-Lemeshow statistic that was above 0.05 for all patients and the subgroups analysed, and all of the techniques markedly increased it. The EuroSCORE II risk model had a Hosmer-Lemeshow statistic that was significant for all patients, and non-linear regression failed to improve on the original Hosmer-Lemeshow statistic. The mixed recursive and non-linear regression using the EuroSCORE II risk model was the only model that produced an ROC of 0.7 or above for all patients and procedures and had a highly non-significant Hosmer-Lemeshow statistic. The original EuroSCORE and EuroSCORE II risk models do not have adequate ROC and Hosmer-Lemeshow statistics for all patients and procedures.
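The two assessment criteria used throughout this abstract are easy to compute from scratch; a numpy/scipy sketch on simulated risk scores (not the surgical data), with a rank-based AUC and a standard Hosmer-Lemeshow chi-square over risk deciles:

```python
import numpy as np
from scipy import stats

def roc_auc(y, p):
    """Rank-based area under the ROC curve (Mann-Whitney U formulation)."""
    r = stats.rankdata(p)
    n1 = int(y.sum())
    n0 = len(y) - n1
    return (r[y == 1].sum() - n1 * (n1 + 1) / 2) / (n0 * n1)

def hosmer_lemeshow(y, p, g=10):
    """HL chi-square over g risk groups; a high p-value indicates calibration."""
    edges = np.quantile(p, np.linspace(0, 1, g + 1))
    idx = np.clip(np.searchsorted(edges, p, side="right") - 1, 0, g - 1)
    chi2 = 0.0
    for k in range(g):
        m = idx == k
        if m.any():
            o, e = y[m].sum(), p[m].sum()   # observed vs expected deaths
            chi2 += (o - e) ** 2 / (e * (1 - p[m].mean()) + 1e-12)
    return chi2, 1 - stats.chi2.cdf(chi2, g - 2)

# Simulated, well-calibrated mortality risk scores and outcomes.
rng = np.random.default_rng(6)
p = rng.uniform(0.01, 0.3, 2000)
y = (rng.uniform(size=2000) < p).astype(int)
auc = roc_auc(y, p)
hl, pv = hosmer_lemeshow(y, p)
print(auc, hl, pv)
```

The abstract's acceptance thresholds correspond to `auc >= 0.7` (discrimination) and a non-significant HL p-value (calibration).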

  6. A Gaussian mixture copula model based localized Gaussian process regression approach for long-term wind speed prediction

    International Nuclear Information System (INIS)

    Yu, Jie; Chen, Kuilin; Mori, Junichi; Rashid, Mudassir M.

    2013-01-01

    Optimizing wind power generation and controlling the operation of wind turbines to efficiently harness the renewable wind energy is a challenging task due to the intermittency and unpredictable nature of wind speed, which has significant influence on wind power production. A new approach for long-term wind speed forecasting is developed in this study by integrating GMCM (Gaussian mixture copula model) and localized GPR (Gaussian process regression). The time series of wind speed is first classified into multiple non-Gaussian components through the Gaussian mixture copula model and then Bayesian inference strategy is employed to incorporate the various non-Gaussian components using the posterior probabilities. Further, the localized Gaussian process regression models corresponding to different non-Gaussian components are built to characterize the stochastic uncertainty and non-stationary seasonality of the wind speed data. The various localized GPR models are integrated through the posterior probabilities as the weightings so that a global predictive model is developed for the prediction of wind speed. The proposed GMCM–GPR approach is demonstrated using wind speed data from various wind farm locations and compared against the GMCM-based ARIMA (auto-regressive integrated moving average) and SVR (support vector regression) methods. In contrast to GMCM–ARIMA and GMCM–SVR methods, the proposed GMCM–GPR model is able to well characterize the multi-seasonality and uncertainty of wind speed series for accurate long-term prediction. - Highlights: • A novel predictive modeling method is proposed for long-term wind speed forecasting. • Gaussian mixture copula model is estimated to characterize the multi-seasonality. • Localized Gaussian process regression models can deal with the random uncertainty. • Multiple GPR models are integrated through Bayesian inference strategy. • The proposed approach shows higher prediction accuracy and reliability
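A loose sketch of the localized-model idea with scikit-learn, substituting a plain Gaussian mixture for the paper's copula-based classification and hard assignments plus empirical frequency weights for the full Bayesian integration; the wind-speed-like series is synthetic.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Synthetic wind-speed-like series with a smooth seasonal component.
rng = np.random.default_rng(7)
t = np.linspace(0.0, 4.0, 200)[:, None]
y = 6 + 2 * np.sin(2 * np.pi * t.ravel()) + rng.normal(0, 0.3, 200)

# Step 1: classify observations into components (a plain Gaussian mixture,
# standing in for the copula-based classification of the paper).
labels = GaussianMixture(n_components=2, random_state=0).fit_predict(y[:, None])

# Step 2: one localized GPR per component, fit on that component's samples.
kernel = 1.0 * RBF(length_scale=0.5) + WhiteKernel(noise_level=0.1)
gprs = [GaussianProcessRegressor(kernel=kernel).fit(t[labels == k], y[labels == k])
        for k in range(2)]

# Step 3: integrate the component predictions, here with membership
# frequencies as weights (the paper uses posterior probabilities).
w = np.bincount(labels, minlength=2) / len(labels)
t_new = np.array([[4.1]])
pred = float(sum(wk * g.predict(t_new)[0] for wk, g in zip(w, gprs)))
print(pred)
```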

  7. Optimization of Moving Coil Actuators for Digital Displacement Machines

    DEFF Research Database (Denmark)

    Nørgård, Christian; Bech, Michael Møller; Roemer, Daniel Beck

    2016-01-01

    This paper focuses on deriving an optimal moving coil actuator design, used as the force-producing element in hydraulic on/off valves for Digital Displacement machines. Different moving coil actuator geometry topologies (permanent magnet placement and magnetization direction) are optimized for actuating annular seat valves in a digital displacement machine. The optimization objectives are to minimize the actuator power, the valve flow losses and the height of the actuator. Evaluation of the objective function involves static finite element simulation and simulation of an entire operation cycle. The resulting designs require approximately 20 W on average and may be realized in 20 mm × Ø 22.5 mm (height × diameter) for a 20 kW pressure chamber. The optimization is carried out using the multi-objective Generalized Differential Evolution optimization algorithm GDE3, which successfully handles constrained multi-objective problems.

  8. Differentiating regressed melanoma from regressed lichenoid keratosis.

    Science.gov (United States)

    Chan, Aegean H; Shulman, Kenneth J; Lee, Bonnie A

    2017-04-01

    Distinguishing regressed lichen planus-like keratosis (LPLK) from regressed melanoma can be difficult on histopathologic examination, potentially resulting in mismanagement of patients. We aimed to identify histopathologic features by which regressed melanoma can be differentiated from regressed LPLK. Twenty actively inflamed LPLK, 12 LPLK with regression and 15 melanomas with regression were compared and evaluated by hematoxylin and eosin staining as well as Melan-A, microphthalmia transcription factor (MiTF) and cytokeratin (AE1/AE3) immunostaining. (1) A total of 40% of regressed melanomas showed complete or near complete loss of melanocytes within the epidermis with Melan-A and MiTF immunostaining, while 8% of regressed LPLK exhibited this finding. (2) Necrotic keratinocytes were seen in the epidermis in 33% of regressed melanomas as opposed to all of the regressed LPLK. (3) A dense infiltrate of melanophages in the papillary dermis was seen in 40% of regressed melanomas, a feature not seen in regressed LPLK. In summary, our findings suggest that a complete or near complete loss of melanocytes within the epidermis strongly favors a regressed melanoma over a regressed LPLK. In addition, necrotic epidermal keratinocytes and the presence of a dense band-like distribution of dermal melanophages can be helpful in differentiating these lesions. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  9. Comparison of wintertime CO to NOx ratios to MOVES and MOBILE6.2 on-road emissions inventories

    Science.gov (United States)

    Wallace, H. W.; Jobson, B. T.; Erickson, M. H.; McCoskey, J. K.; VanReken, T. M.; Lamb, B. K.; Vaughan, J. K.; Hardy, R. J.; Cole, J. L.; Strachan, S. M.; Zhang, W.

    2012-12-01

    The CO-to-NOx molar emission ratios from the US EPA vehicle emissions models MOVES and MOBILE6.2 were compared to urban wintertime measurements of CO and NOx. Measurements of CO, NOx, and volatile organic compounds were made at a regional air monitoring site in Boise, Idaho for 2 months from December 2008 to January 2009. The site is impacted by roadway emissions from nearby busy urban arterial roads and a highway. The measured CO-to-NOx ratio for morning rush hour periods was 4.2 ± 0.6. The average CO-to-NOx ratio during weekdays between the hours of 08:00 and 18:00, when vehicle miles travelled were highest, was 5.2 ± 0.5. For this time period, MOVES yields an average hourly CO-to-NOx ratio of 9.1 compared to 20.2 for MOBILE6.2. Off-network emissions are a significant fraction of the CO and NOx emissions in MOVES, accounting for 65% of total CO emissions, and significantly increase the CO-to-NOx molar ratio. Observed ratios were more similar to the average hourly running-emissions CO-to-NOx ratio for urban roads, which MOVES determined to be 4.3.

  10. GRID PRICING VERSUS AVERAGE PRICING FOR SLAUGHTER CATTLE: AN EMPIRICAL ANALYSIS

    OpenAIRE

    Fausti, Scott W.; Qasmi, Bashir A.

    1999-01-01

    The paper compares weekly producer revenue under grid pricing and average dressed weight pricing methods for 2560 cattle over a period of 102 weeks. Regression analysis is applied to identify factors affecting the revenue differential.

  11. Post-model selection inference and model averaging

    Directory of Open Access Journals (Sweden)

    Georges Nguefack-Tsague

    2011-07-01

    Full Text Available Although model selection is routinely used in practice nowadays, little is known about its precise effects on any subsequent inference that is carried out. The same goes for the effects induced by the closely related technique of model averaging. This paper is concerned with the use of the same data first to select a model and then to carry out inference, in particular point estimation and point prediction. The properties of the resulting estimator, called a post-model-selection estimator (PMSE), are hard to derive. Using selection criteria such as hypothesis testing, AIC, BIC, HQ and Cp, we illustrate that, in terms of risk function, no single PMSE dominates the others. The same conclusion holds more generally for any penalised likelihood information criterion. We also compare various model averaging schemes and show that no single one dominates the others in terms of risk function. Since PMSEs can be regarded as a special case of model averaging, with 0-1 random weights, we propose a connection between the two theories, in the frequentist approach, by taking account of the selection procedure when performing model averaging. We illustrate the point by simulating a simple linear regression model.
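    The model-averaging idea discussed above can be illustrated with one common smooth weighting scheme, AIC weights, in contrast to the 0-1 random weights of a PMSE. This is a minimal sketch with made-up AIC values and predictions, not the paper's simulation.

```python
import math

# Akaike weights: each candidate model is weighted by exp(-Delta_i / 2),
# where Delta_i is its AIC difference from the best model, then the
# weights are normalized to sum to one.

def aic_weights(aics):
    best = min(aics)
    raw = [math.exp(-0.5 * (a - best)) for a in aics]
    total = sum(raw)
    return [r / total for r in raw]

aics = [100.0, 101.0, 105.0]     # illustrative AIC values for 3 models
w = aic_weights(aics)
preds = [2.3, 2.1, 2.9]          # each model's point prediction
averaged = sum(wi * p for wi, p in zip(w, preds))
```

    Selection (a PMSE) would put weight 1 on the first model; the averaged estimator instead blends all three, which is the special-case relationship the paper exploits.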

  12. Moving In, Moving Through, and Moving Out: The Transitional Experiences of Foster Youth College Students

    Science.gov (United States)

    Gamez, Sara I.

    2017-01-01

    The purpose of this qualitative study was to explore the transitional experiences of foster youth college students. The study explored how foster youth experienced moving into, moving through, and moving out of the college environment and what resources and strategies they used to thrive during their college transitions. In addition, this study…

  13. Seemingly Unrelated Regression Approach for GSTARIMA Model to Forecast Rain Fall Data in Malang Southern Region Districts

    Directory of Open Access Journals (Sweden)

    Siti Choirun Nisak

    2016-06-01

    Full Text Available Time series forecasting models can be used to predict phenomena that occur in nature. Generalized Space Time Autoregressive (GSTAR is one of time series model used to forecast the data consisting the elements of time and space. This model is limited to the stationary and non-seasonal data. Generalized Space Time Autoregressive Integrated Moving Average (GSTARIMA is GSTAR development model that accommodates the non-stationary and seasonal data. Ordinary Least Squares (OLS is method used to estimate parameter of GSTARIMA model. Estimation parameter of GSTARIMA model using OLS will not produce efficiently estimator if there is an error correlation between spaces. Ordinary Least Square (OLS assumes the variance-covariance matrix has a constant error ~(, but in fact, the observatory spaces are correlated so that variance-covariance matrix of the error is not constant. Therefore, Seemingly Unrelated Regression (SUR approach is used to accommodate the weakness of the OLS. SUR assumption is ~(, for estimating parameters GSTARIMA model. The method to estimate parameter of SUR is Generalized Least Square (GLS. Applications GSTARIMA-SUR models for rainfall data in the region Malang obtained GSTARIMA models ((1(1,12,36,(0,(1-SUR with determination coefficient generated with the average of 57.726%.

  14. Developing a comprehensive measure of mobility: mobility over varied environments scale (MOVES).

    Science.gov (United States)

    Hirsch, Jana A; Winters, Meghan; Sims-Gould, Joanie; Clarke, Philippa J; Ste-Marie, Nathalie; Ashe, Maureen; McKay, Heather A

    2017-05-25

    While recent work emphasizes the multi-dimensionality of mobility, no current measure incorporates multiple domains of mobility. Using existing conceptual frameworks we identified four domains of mobility (physical, cognitive, social, transportation) to create a "Mobility Over Varied Environments Scale" (MOVES). We then assessed expected patterns of MOVES in the Canadian population. An expert panel identified survey items within each MOVES domain from the Canadian Community Health Survey- Healthy Aging Cycle (2008-2009) for 28,555 (weighted population n = 12,805,067) adults (≥45 years). We refined MOVES using principal components analysis and Cronbach's alpha and weighted items so each domain was 10 points. Expected mobility trends, as assessed by average MOVES, were examined by sociodemographic and health factors, and by province, using Analysis of Variance (ANOVA). MOVES ranged from 0 to 40, where 0 represents individuals who are immobile and 40 those who are fully mobile. Mean MOVES was 29.58 (95% confidence interval (CI) 29.49, 29.67) (10th percentile: 24.17 (95% CI 23.96, 24.38), 90th percentile: 34.70 (CI 34.55, 34.85)). MOVES scores were lower for older, female, and non-white Canadians with worse health and lower socioeconomic status. MOVES was also lower for those who live in less urban areas. MOVES is a holistic measure of mobility for characterizing older adult mobility across populations. Future work should examine individual or neighborhood predictors of MOVES and its relationship to broader health outcomes. MOVES holds utility for research, surveillance, evaluation, and interventions around the broad factors influencing mobility in older adults.

  15. Random walk of passive tracers among randomly moving obstacles.

    Science.gov (United States)

    Gori, Matteo; Donato, Irene; Floriani, Elena; Nardecchia, Ilaria; Pettini, Marco

    2016-04-14

    This study is mainly motivated by the need of understanding how the diffusion behavior of a biomolecule (or even of a larger object) is affected by other moving macromolecules, organelles, and so on, inside a living cell, whence the possibility of understanding whether or not a randomly walking biomolecule is also subject to a long-range force field driving it to its target. By means of the Continuous Time Random Walk (CTRW) technique the topic of random walk in random environment is here considered in the case of a passively diffusing particle among randomly moving and interacting obstacles. The relevant physical quantity which is worked out is the diffusion coefficient of the passive tracer which is computed as a function of the average inter-obstacles distance. The results reported here suggest that if a biomolecule, let us call it a test molecule, moves towards its target in the presence of other independently interacting molecules, its motion can be considerably slowed down.
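    As a toy illustration of the central quantity above — the diffusion coefficient of the tracer — one can recover D from the mean-squared displacement, MSD(t) = 2Dt, of a plain 1-D random walk. This is not the paper's CTRW machinery; all parameters below are illustrative.

```python
import random

# Simulate many independent unbiased 1-D lattice walks and estimate the
# diffusion coefficient from the ensemble mean-squared displacement.

def estimate_diffusion(n_walkers=2000, n_steps=500, step=1.0, seed=7):
    rng = random.Random(seed)
    msd = 0.0
    for _ in range(n_walkers):
        x = 0.0
        for _ in range(n_steps):
            x += step if rng.random() < 0.5 else -step
        msd += x * x
    msd /= n_walkers
    return msd / (2.0 * n_steps)   # D = MSD / (2 t) in one dimension

D = estimate_diffusion()           # expected near 0.5 for unit steps
```

    In the paper the interesting effect is how D shrinks as the inter-obstacle distance decreases; here the walk is free, so D simply matches the step statistics.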

  16. Prediction of Tourist Arrivals to the Island of Bali with Holt Method of Winter and Seasonal Autoregressive Integrated Moving Average (SARIMA

    Directory of Open Access Journals (Sweden)

    Agus Supriatna

    2017-11-01

    Full Text Available The tourism sector is one of the contributors of foreign exchange that is quite influential in improving the economy of Indonesia. The development of this sector will have a positive impact, including employment opportunities and opportunities for entrepreneurship in various industries such as adventure tourism, craft or hospitality. The beauty and natural resources owned by Indonesia are a tourist attraction for domestic and foreign tourists. One of the many tourist destinations is the island of Bali. The island of Bali is famous for its nature, cultural diversity and arts, which also add to its tourism value. In 2015 the number of tourist arrivals increased by 6.24% from the previous year. To improve the quality of services, face a surge of visitors, or prepare a strategy for attracting tourists, a prediction of arrivals is needed so that planning can be more efficient and effective. This research used the Holt-Winters method and the Seasonal Autoregressive Integrated Moving Average (SARIMA) method to predict tourist arrivals. Based on data on foreign tourist arrivals who visited the island of Bali from January 2007 until June 2016, the Holt-Winters method with parameter values α=0.1, β=0.1, γ=0.3 has a MAPE of 6.171873, while the SARIMA method with a (0,1,1)(1,0,0)₁₂ model has a MAPE of 5.788615, and it can be concluded that the SARIMA method is better. Keywords: Foreign Tourist, Prediction, Bali Island, Holt-Winters, SARIMA.
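    The MAPE criterion used above to rank the two methods can be sketched as follows; the numbers are invented, not the Bali arrivals data.

```python
# Mean absolute percentage error: average of |actual - forecast| / actual,
# expressed in percent. Lower MAPE means a better forecast.

def mape(actual, forecast):
    return 100.0 * sum(abs((a - f) / a) for a, f in zip(actual, forecast)) / len(actual)

actual = [100.0, 110.0, 120.0]           # made-up monthly arrivals
holt_winters = [94.0, 104.0, 113.0]      # hypothetical HW forecasts
sarima = [97.0, 106.0, 116.0]            # hypothetical SARIMA forecasts

# The model with the lower MAPE is preferred, as the abstract concludes.
assert mape(actual, sarima) < mape(actual, holt_winters)
```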

  17. Complex regression Doppler optical coherence tomography

    Science.gov (United States)

    Elahi, Sahar; Gu, Shi; Thrane, Lars; Rollins, Andrew M.; Jenkins, Michael W.

    2018-04-01

    We introduce a new method to measure Doppler shifts more accurately and extend the dynamic range of Doppler optical coherence tomography (OCT). The two-point estimate of the conventional Doppler method is replaced with a regression that is applied to high-density B-scans in polar coordinates. We built a high-speed OCT system using a 1.68-MHz Fourier domain mode locked laser to acquire high-density B-scans (16,000 A-lines) at high enough frame rates (~100 fps) to accurately capture the dynamics of the beating embryonic heart. Flow phantom experiments confirm that the complex regression lowers the minimum detectable velocity from 12.25 mm/s to 374 μm/s, whereas the maximum velocity of 400 mm/s is measured without phase wrapping. Complex regression Doppler OCT also demonstrates higher accuracy and precision compared with the conventional method, particularly when signal-to-noise ratio is low. The extended dynamic range allows monitoring of blood flow over several stages of development in embryos without adjusting the imaging parameters. In addition, applying complex averaging recovers hidden features in structural images.

  18. Radiation regression patterns after cobalt plaque insertion for retinoblastoma

    International Nuclear Information System (INIS)

    Buys, R.J.; Abramson, D.H.; Ellsworth, R.M.; Haik, B.

    1983-01-01

    An analysis of 31 eyes of 30 patients who had been treated with cobalt plaques for retinoblastoma disclosed that a type I radiation regression pattern developed in 15 patients; type II, in one patient, and type III, in five patients. Nine patients had a regression pattern characterized by complete destruction of the tumor, the surrounding choroid, and all of the vessels in the area into which the plaque was inserted. This resulting white scar, corresponding to the sclerae only, was classified as a type IV radiation regression pattern. There was no evidence of tumor recurrence in patients with type IV regression patterns, with an average follow-up of 6.5 years, after receiving cobalt plaque therapy. Twenty-nine of these 30 patients had been unsuccessfully treated with at least one other modality (i.e., light coagulation, cryotherapy, external beam radiation, or chemotherapy).

  19. Radiation regression patterns after cobalt plaque insertion for retinoblastoma

    Energy Technology Data Exchange (ETDEWEB)

    Buys, R.J.; Abramson, D.H.; Ellsworth, R.M.; Haik, B.

    1983-08-01

    An analysis of 31 eyes of 30 patients who had been treated with cobalt plaques for retinoblastoma disclosed that a type I radiation regression pattern developed in 15 patients; type II, in one patient, and type III, in five patients. Nine patients had a regression pattern characterized by complete destruction of the tumor, the surrounding choroid, and all of the vessels in the area into which the plaque was inserted. This resulting white scar, corresponding to the sclerae only, was classified as a type IV radiation regression pattern. There was no evidence of tumor recurrence in patients with type IV regression patterns, with an average follow-up of 6.5 years, after receiving cobalt plaque therapy. Twenty-nine of these 30 patients had been unsuccessfully treated with at least one other modality (i.e., light coagulation, cryotherapy, external beam radiation, or chemotherapy).

  20. Job Surfing: Move On to Move Up.

    Science.gov (United States)

    Martin, Justin

    1997-01-01

    Looks at the process of switching jobs and changing careers. Discusses when to consider options and make the move as well as the need to be flexible and open minded. Provides a test for determining the chances of promotion and when to move on. (JOW)

  1. Moving standard deviation and moving sum of outliers as quality tools for monitoring analytical precision.

    Science.gov (United States)

    Liu, Jiakai; Tan, Chin Hon; Badrick, Tony; Loh, Tze Ping

    2018-02-01

    An increase in analytical imprecision (expressed as CVa) can introduce additional variability (i.e. noise) into patient results, which poses a challenge to the optimal management of patients. Relatively little work has been done to address the need for continuous monitoring of analytical imprecision. Through numerical simulations, we describe the use of the moving standard deviation (movSD) and a recently described moving sum of outlier (movSO) patient results as means for detecting increased analytical imprecision, and compare their performance against internal quality control (QC) and average of normals (AoN) approaches. The power to detect an increase in CVa is suboptimal under routine internal QC procedures. The AoN technique almost always had the highest average number of patient results affected before error detection (ANPed), indicating that it generally had the worst capability for detecting an increased CVa. On the other hand, the movSD and movSO approaches were able to detect an increased CVa at significantly lower ANPed, particularly for measurands that displayed a relatively small ratio of biological variation to CVa. In conclusion, the movSD and movSO approaches are effective in detecting an increase in CVa for high-risk measurands with small biological variation. Their performance is relatively poor when the biological variation is large. However, the clinical risk of an increase in analytical imprecision is attenuated for these measurands, as increased analytical imprecision will only add marginally to the total variation and is less likely to impact clinical care. Copyright © 2017 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
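    A minimal sketch of the movSD idea, assuming a simple sliding-window standard deviation over consecutive patient results; the window size and data are illustrative, not the authors' tuned settings.

```python
import statistics

# movSD sketch: compute the sample standard deviation over a sliding
# window of consecutive patient results. A sustained rise in this trace
# signals increased analytical imprecision (CVa).

def moving_sd(results, window=5):
    return [statistics.stdev(results[i - window:i])
            for i in range(window, len(results) + 1)]

stable = [4.0, 4.1, 3.9, 4.0, 4.2, 4.1, 3.9]       # tight, in-control results
noisy = stable + [5.5, 2.4, 5.8, 2.1, 6.0]          # imprecision increases here
sd_trace = moving_sd(noisy)
# Windows covering the noisy stretch show clearly inflated SDs, which a
# control limit on the trace would flag.
```

    A real implementation would compare the trace against an upper control limit derived from the measurand's in-control distribution; that tuning step is omitted here.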

  2. The Initial Regression Statistical Characteristics of Intervals Between Zeros of Random Processes

    Directory of Open Access Journals (Sweden)

    V. K. Hohlov

    2014-01-01

    Full Text Available The article substantiates the initial regression statistical characteristics of intervals between zeros of realizations of random processes and studies their properties, allowing the use of these features in autonomous information systems (AIS) of near location (NL). Coefficients of the initial regression (CIR) that minimize the residual sum of squares of multiple initial regressions are justified on the basis of vector representations associated with the notion of a random vector of the analyzed signal parameters. It is shown that even with no covariance, partial CIR make it possible to predict one random variable through another with respect to the deterministic components. The paper studies the dependence of the CIR of interval sizes between zeros of a narrowband wide-sense stationary random process on its energy spectrum. Particular CIR for random processes with Gaussian and rectangular energy spectra are obtained. It is shown that the considered CIR do not depend on the average frequency of the spectra, are determined by the relative bandwidth of the energy spectra, and depend weakly on the type of spectrum. The properties of the CIR enable their use as informative parameters when implementing temporal regression methods of signal processing that are invariant to the average rate and variance of the input realizations. We consider estimates of the average energy-spectrum frequency of a stationary random process obtained by calculating the length of the time interval corresponding to a specified number of intervals between zeros. It is shown that the relative variance of the estimate of the average energy-spectrum frequency of a stationary random process with increasing relative bandwidth ceases to depend on the particular process realization when more than ten intervals between zeros are processed.
The obtained results can be used in the AIS NL to solve the tasks of detection and signal recognition, when a decision is made in conditions of unknown mathematical expectations on a limited observation

  3. Retro-regression--another important multivariate regression improvement.

    Science.gov (United States)

    Randić, M

    2001-01-01

    We review the serious problem associated with instabilities of the coefficients of regression equations, referred to as the MRA (multivariate regression analysis) "nightmare of the first kind". This is manifested when in a stepwise regression a descriptor is included or excluded from a regression. The consequence is an unpredictable change of the coefficients of the descriptors that remain in the regression equation. We follow with consideration of an even more serious problem, referred to as the MRA "nightmare of the second kind", arising when optimal descriptors are selected from a large pool of descriptors. This process typically causes at different steps of the stepwise regression a replacement of several previously used descriptors by new ones. We describe a procedure that resolves these difficulties. The approach is illustrated on boiling points of nonanes which are considered (1) by using an ordered connectivity basis; (2) by using an ordering resulting from application of greedy algorithm; and (3) by using an ordering derived from an exhaustive search for optimal descriptors. A novel variant of multiple regression analysis, called retro-regression (RR), is outlined showing how it resolves the ambiguities associated with both "nightmares" of the first and the second kind of MRA.
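    The "nightmare of the first kind" can be reproduced on synthetic data: adding a second, correlated descriptor visibly shifts the coefficient of the descriptor already in the regression. A toy demonstration, not Randić's boiling-point data.

```python
import numpy as np

# Two strongly correlated descriptors: including or excluding x2 changes
# the coefficient of x1 in an unpredictable way, which is exactly the
# instability the review describes.

rng = np.random.default_rng(0)
x1 = rng.normal(size=50)
x2 = x1 + 0.1 * rng.normal(size=50)          # strongly correlated with x1
y = 2.0 * x1 + 1.0 * x2 + 0.05 * rng.normal(size=50)

# One-descriptor regression: y ~ x1
b_one, *_ = np.linalg.lstsq(np.column_stack([x1]), y, rcond=None)
# Two-descriptor regression: y ~ x1 + x2
b_two, *_ = np.linalg.lstsq(np.column_stack([x1, x2]), y, rcond=None)

# The coefficient of x1 shifts markedly (here by roughly 1) between fits.
shift = abs(b_one[0] - b_two[0])
```

    Retro-regression and the use of an ordered (e.g. orthogonalized) descriptor basis are ways of removing exactly this sensitivity.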

  4. Modified Regression Correlation Coefficient for Poisson Regression Model

    Science.gov (United States)

    Kaengthong, Nattacha; Domthong, Uthumporn

    2017-09-01

    This study considers indicators of the predictive power of the Generalized Linear Model (GLM), which are widely used but often subject to some restrictions. We are interested in the regression correlation coefficient for a Poisson regression model. This is a measure of predictive power defined by the relationship between the dependent variable (Y) and the expected value of the dependent variable given the independent variables [E(Y|X)] for the Poisson regression model. The dependent variable is distributed as Poisson. The purpose of this research was to modify the regression correlation coefficient for the Poisson regression model. We also compare the proposed modified regression correlation coefficient with the traditional regression correlation coefficient in the case of two or more independent variables and in the presence of multicollinearity among the independent variables. The result shows that the proposed regression correlation coefficient is better than the traditional regression correlation coefficient based on Bias and the Root Mean Square Error (RMSE).
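    The regression correlation coefficient described above — the correlation between Y and E(Y|X) — can be sketched directly. The fitted means below are assumed to come from an already estimated Poisson model; the fitting step itself is omitted.

```python
import math

# Pearson correlation between observed counts y and fitted conditional
# means mu = E(Y|X); this is the (traditional) regression correlation
# coefficient the abstract refers to.

def corr(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / math.sqrt(va * vb)

y = [1, 0, 2, 3, 5, 4]                # observed Poisson counts
mu = [0.8, 1.1, 1.9, 2.7, 4.4, 4.1]   # E(Y|X) from a hypothetical fitted model
r = corr(y, mu)                       # close to 1 for a well-fitting model
```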

  5. Anticipatory Cyber Security Research: An Ultimate Technique for the First-Move Advantage

    Directory of Open Access Journals (Sweden)

    Bharat S.Rawal

    2016-02-01

    Full Text Available Across all industry segments, 96 percent of systems could be breached on average. In the game of cyber security, at every moment a new player (attacker) enters the game with new skill sets. An attacker only needs to be effective once, while defenders of cyberspace have to be successful all of the time. There is a first-mover advantage in such a chasing game, which means that the first move often wins. In this paper, in order to face the security challenges brought about by the attacker's first-move advantage, we analyze the past ten years of cyber-attacks, study the patterns of immediate attacks, and offer tools to predict the next move of a cyber attacker.

  6. Moving beyond regression techniques in cardiovascular risk prediction: applying machine learning to address analytic challenges.

    Science.gov (United States)

    Goldstein, Benjamin A; Navar, Ann Marie; Carter, Rickey E

    2017-06-14

    Risk prediction plays an important role in clinical cardiology research. Traditionally, most risk models have been based on regression models. While useful and robust, these statistical methods are limited to using a small number of predictors which operate in the same way on everyone, and uniformly throughout their range. The purpose of this review is to illustrate the use of machine-learning methods for development of risk prediction models. Typically presented as black box approaches, most machine-learning methods are aimed at solving particular challenges that arise in data analysis that are not well addressed by typical regression approaches. To illustrate these challenges, as well as how different methods can address them, we consider trying to predict mortality after diagnosis of acute myocardial infarction. We use data derived from our institution's electronic health record and abstract data on 13 regularly measured laboratory markers. We walk through different challenges that arise in modelling these data and then introduce different machine-learning approaches. Finally, we discuss general issues in the application of machine-learning methods including tuning parameters, loss functions, variable importance, and missing data. Overall, this review serves as an introduction for those working on risk modelling to approach the diffuse field of machine learning. © The Author 2016. Published by Oxford University Press on behalf of the European Society of Cardiology.

  7. Engineering Women’s Attitudes and Goals in Choosing Disciplines with above and Below Average Female Representation

    OpenAIRE

    Dina Verdín; Allison Godwin; Adam Kirn; Lisa Benson; Geoff Potvin

    2018-01-01

    Women’s participation in engineering remains well below that of men at all degree levels. However, despite the low enrollment of women in engineering as a whole, some engineering disciplines report above average female enrollment. We used multiple linear regression to examine the attitudes, beliefs, career outcome expectations, and career choice of first-year female engineering students enrolled in below average, average, and above average female representation disciplines in engineering. Our...

  8. Forecasting carbon dioxide emissions based on a hybrid of mixed data sampling regression model and back propagation neural network in the USA.

    Science.gov (United States)

    Zhao, Xin; Han, Meng; Ding, Lili; Calin, Adrian Cantemir

    2018-01-01

    The accurate forecast of carbon dioxide emissions is critical for policy makers to take proper measures to establish a low carbon society. This paper discusses a hybrid of the mixed data sampling (MIDAS) regression model and BP (back propagation) neural network (MIDAS-BP model) to forecast carbon dioxide emissions. Such analysis uses mixed frequency data to study the effects of quarterly economic growth on annual carbon dioxide emissions. The forecasting ability of MIDAS-BP is remarkably better than MIDAS, ordinary least squares (OLS), polynomial distributed lags (PDL), autoregressive distributed lags (ADL), and auto-regressive moving average (ARMA) models. The MIDAS-BP model is suitable for forecasting carbon dioxide emissions for both the short and longer term. This research is expected to influence the methodology for forecasting carbon dioxide emissions by improving the forecast accuracy. Empirical results show that economic growth has both negative and positive effects on carbon dioxide emissions that last 15 quarters. Carbon dioxide emissions are also affected by their own change within 3 years. Therefore, there is a need for policy makers to explore an alternative way to develop the economy, especially applying new energy policies to establish a low carbon society.

  9. Improving sub-pixel imperviousness change prediction by ensembling heterogeneous non-linear regression models

    Directory of Open Access Journals (Sweden)

    Drzewiecki Wojciech

    2016-12-01

    Full Text Available In this work nine non-linear regression models were compared for sub-pixel impervious surface area mapping from Landsat images. The comparison was done in three study areas, both for the accuracy of imperviousness coverage evaluation at individual points in time and for the accuracy of imperviousness change assessment. The performance of individual machine learning algorithms (Cubist, Random Forest, stochastic gradient boosting of regression trees, k-nearest neighbors regression, random k-nearest neighbors regression, Multivariate Adaptive Regression Splines, averaged neural networks, and support vector machines with polynomial and radial kernels) was also compared with the performance of heterogeneous model ensembles constructed from the best models trained using particular techniques.
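    The ensembling idea can be sketched by averaging two structurally different regressors, here a linear least-squares fit and a k-nearest-neighbour mean. Models and data are illustrative, not the study's Landsat setup.

```python
import numpy as np

# Heterogeneous ensemble sketch: two dissimilar base learners are fit to
# the same data and their predictions averaged with equal weights.

def linear_predict(x_train, y_train, x_new):
    X = np.column_stack([np.ones_like(x_train), x_train])
    beta, *_ = np.linalg.lstsq(X, y_train, rcond=None)
    return beta[0] + beta[1] * x_new

def knn_predict(x_train, y_train, x_new, k=3):
    idx = np.argsort(np.abs(x_train - x_new))[:k]   # k nearest in 1-D
    return y_train[idx].mean()

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.1, 0.9, 2.2, 2.8, 4.1])

x0 = 2.6
ensemble = 0.5 * (linear_predict(x, y, x0) + knn_predict(x, y, x0))
```

    The study goes further by weighting and selecting among nine base learners; equal-weight averaging of two is the simplest instance of the same construction.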

  10. Moving related to separation : who moves and to what distance

    NARCIS (Netherlands)

    Mulder, Clara H.; Malmberg, Gunnar

    We address the issue of moving from the joint home on the occasion of separation. Our research question is: To what extent can the occurrence of moves related to separation, and the distance moved, be explained by ties to the location, resources, and other factors influencing the likelihood of

  11. Determination of the diagnostic x-ray tube practical peak voltage (PPV) from average or average peak voltage measurements

    Energy Technology Data Exchange (ETDEWEB)

    Hourdakis, C J, E-mail: khour@gaec.gr [Ionizing Radiation Calibration Laboratory-Greek Atomic Energy Commission, PO Box 60092, 15310 Agia Paraskevi, Athens, Attiki (Greece)

    2011-04-07

    The practical peak voltage (PPV) has been adopted as the reference measuring quantity for the x-ray tube voltage. However, the majority of commercial kV-meter models measure the average peak, Ū_P, the average, Ū, the effective, U_eff, or the maximum peak, U_P, tube voltage. This work proposes a method for determining the PPV from measurements with a kV-meter that measures the average, Ū, or the average peak, Ū_P, voltage. The kV-meter reading can be converted to the PPV by applying appropriate calibration coefficients and conversion factors. The average peak, k_(PPV,kVp), and the average, k_(PPV,Uav), conversion factors were calculated from virtual voltage waveforms for conventional diagnostic radiology (50-150 kV) and mammography (22-35 kV) tube voltages and for voltage ripples from 0% to 100%. Regression equations and coefficients provide the appropriate conversion factors at any given tube voltage and ripple. The influence of voltage waveform irregularities, like 'spikes' and pulse amplitude variations, on the conversion factors was investigated and discussed. The proposed method and the conversion factors were tested using six commercial kV-meters at several x-ray units. The deviations between the reference PPV values and those calculated according to the proposed method were less than 2%. Practical aspects of the voltage ripple measurement were addressed and discussed. The proposed method provides a rigorous basis for determining the PPV with kV-meters from Ū_P and Ū measurements. Users can benefit, since all kV-meters, irrespective of their measuring quantity, can be used to determine the PPV, complying with the IEC standard requirements.

  12. Modelo digital do terreno através de diferentes interpolações do programa Surfer 12 | Digital terrain model through different interpolations in the surfer 12 software

    Directory of Open Access Journals (Sweden)

    José Machado

    2016-04-01

    The interpolation of measured points is required to build the DTM (digital terrain model). The use of DTMs, 3D surfaces and contours in fast computer programs can create some problems, such as the choice of the interpolation method used. This work aims to analyze the interpolation methods applied to quoted points of an irregular geometric figure generated in the Surfer program. The 12 available interpolators were used (Data Metrics, Inverse Distance, Kriging, Local Polynomial, Minimum Curvature, Modified Shepard's Method, Moving Average, Natural Neighbor, Nearest Neighbor, Polynomial Regression, Radial Function and Triangulation with Linear Interpolation) and the generated topographic maps were analyzed. The graphical representation of the relief was generated via the DTM. The relief representations were rated excellent, great, good, regular, average or bad and discussed according to the reference geometric image. Data Metrics, Polynomial Regression and Local Polynomial rated bad; Moving Average and Modified Shepard's Method, regular; Nearest Neighbor, average; Inverse Distance, good; Kriging and Radial Function, great; and Triangulation with Linear Interpolation and Natural Neighbor, excellent, for representing the data.
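    Of the interpolators compared above, inverse distance weighting is the simplest to sketch: the value at an unsampled point is the distance-weighted average of the measured points. This is a generic textbook formulation, not Surfer's exact implementation.

```python
# Inverse distance weighting (IDW): weights fall off as 1 / d^power,
# so nearby samples dominate the interpolated value.

def idw(points, x, y, power=2.0):
    """points: list of (px, py, value) measurements."""
    num = den = 0.0
    for px, py, v in points:
        d2 = (x - px) ** 2 + (y - py) ** 2
        if d2 == 0.0:
            return v                       # exact hit on a sample point
        w = 1.0 / d2 ** (power / 2.0)
        num += w * v
        den += w
    return num / den

samples = [(0.0, 0.0, 10.0), (1.0, 0.0, 20.0)]
mid = idw(samples, 0.5, 0.0)   # equidistant from both → plain average, 15.0
```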

  13. Evaluation of a multiple linear regression model and SARIMA model in forecasting heat demand for district heating system

    International Nuclear Information System (INIS)

    Fang, Tingting; Lahdelma, Risto

    2016-01-01

    Highlights: • A social factor is considered for the linear regression models besides weather variables. • All coefficients of the linear regression models are optimized simultaneously. • SARIMA combined with linear regression is used to forecast the heat demand. • The accuracy of both linear regression and time series models is evaluated. - Abstract: Forecasting heat demand is necessary for production and operation planning of district heating (DH) systems. In this study we first propose a simple regression model in which hourly outdoor temperature and wind speed forecast the heat demand. The weekly rhythm of heat consumption, as a social component, is added to the model and significantly improves the accuracy. The second type of model is the seasonal autoregressive integrated moving average (SARIMA) model with exogenous variables, which combines weather factors with the historical heat consumption data as the dependent variable. One outstanding advantage of this model is that it pursues high accuracy for both long-term and short-term forecasts by considering both exogenous factors and the time series. The forecasting performance of both the linear regression models and the time series model is evaluated on real-life heat demand data for the city of Espoo in Finland, by out-of-sample tests for the last 20 full weeks of the year. The results indicate that the proposed linear regression model (T168h), using a 168-h demand pattern with midweek holidays classified as Saturdays or Sundays, gives the highest accuracy and strong robustness among all the tested models on the tested forecasting horizon and corresponding data. Considering the parsimony of the input, the ease of use and the high accuracy, the proposed T168h model is the best in practice. The heat demand forecasting model can also be developed for individual buildings if automated meter reading customer measurements are available. This would allow forecasting the heat demand based on more accurate heat consumption data.
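A minimal sketch of the first model type described above: hourly demand regressed on outdoor temperature plus a 168-hour weekly rhythm, encoded here as hour-of-week dummy variables. The data are synthetic and the coefficients illustrative; this is not the Espoo model, only its general form.

```python
import numpy as np

# Synthetic hourly demand = base - 2.5*temperature + weekly rhythm + noise.
rng = np.random.default_rng(0)
hours = np.arange(24 * 7 * 8)                            # 8 weeks, hourly
temp = 5 + 10 * np.sin(2 * np.pi * hours / (24 * 365))   # crude temperature
rhythm = np.tile(rng.normal(0, 1, 168), 8)               # weekly pattern
demand = 100 - 2.5 * temp + 5 * rhythm + rng.normal(0, 1, hours.size)

# Design matrix: intercept, temperature, one dummy per hour-of-week.
how = hours % 168
X = np.zeros((hours.size, 2 + 168))
X[:, 0] = 1.0
X[:, 1] = temp
X[np.arange(hours.size), 2 + how] = 1.0

beta, *_ = np.linalg.lstsq(X, demand, rcond=None)        # least squares fit
rmse = np.sqrt(np.mean((demand - X @ beta) ** 2))
```

Because the rhythm is a pure function of hour-of-week, the dummies absorb it exactly and the in-sample error approaches the noise level.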

  14. Optimization of Game Formats in U-10 Soccer Using Logistic Regression Analysis

    Directory of Open Access Journals (Sweden)

    Amatria Mario

    2016-12-01

    Full Text Available Small-sided games provide young soccer players with better opportunities to develop their skills and progress as individual and team players. There is, however, little evidence on the effectiveness of different game formats in different age groups, and furthermore, these formats can vary between and even within countries. The Royal Spanish Soccer Association replaced the traditional grassroots 7-a-side format (F-7) with the 8-a-side format (F-8) in the 2011-12 season, and the country’s regional federations gradually followed suit. The aim of this observational methodology study was to investigate which of these formats best suited the learning needs of U-10 players transitioning from 5-a-side futsal. We built a multiple logistic regression model to predict the success of offensive moves depending on the game format and the area of the pitch in which the move was initiated. Success was defined as a shot at the goal. We also built two simple logistic regression models to evaluate how the game format influenced the acquisition of technical-tactical skills. It was found that the probability of a shot at the goal was higher in F-7 than in F-8 for moves initiated in the Creation Sector-Own Half (0.08 vs 0.07) and the Creation Sector-Opponent's Half (0.18 vs 0.16). The probability was the same (0.04) in the Safety Sector. Children also had more opportunities to control the ball and pass or take a shot in the F-7 format (0.24 vs 0.20), and these were also more likely to be successful in this format (0.28 vs 0.19).

  15. An empirical investigation on the forecasting ability of mallows model averaging in a macro economic environment

    Science.gov (United States)

    Yin, Yip Chee; Hock-Eam, Lim

    2012-09-01

    This paper investigates the forecasting ability of Mallows Model Averaging (MMA) by conducting an empirical analysis of the GDP growth rates of five Asian countries: Malaysia, Thailand, the Philippines, Indonesia and China. Results reveal that MMA shows no noticeable difference in predictive ability compared to the autoregressive fractionally integrated moving average (ARFIMA) model, and that its predictive ability is sensitive to the effect of financial crises. MMA could be an alternative forecasting method for samples without recent outliers such as financial crises.

  16. Move up,Move out

    Institute of Scientific and Technical Information of China (English)

    Guo Yan

    2007-01-01

    China has already become the world's largest manufacturer of cement, copper and steel. Chinese producers have moved onto the world stage and dominated the global consumer market, from textiles to electronics, with amazing speed and efficiency.

  17. Occlusal consequence of using average condylar guidance settings: An in vitro study.

    Science.gov (United States)

    Lee, Wonsup; Lim, Young-Jun; Kim, Myung-Joo; Kwon, Ho-Beom

    2017-04-01

    A simplified mounting technique that adopts an average condylar guidance has been advocated. Despite this, the experimental explanation of how average settings differ from individual condylar guidance remains unclear. The purpose of this in vitro study was to examine potential occlusal error by using average condylar guidance settings during nonworking side movement of the articulator. Three-dimensional positions of the nonworking side maxillary first molar at various condylar and incisal settings were traced using a laser displacement sensor attached to the motorized stages with biaxial freedom of movement. To examine clinically relevant occlusal consequences of condylar guidance setting errors, the vertical occlusal error was defined as the vertical-axis positional difference between the average setting trace and the other condylar guidance setting trace. In addition, the respective contribution of the condylar and incisal guidance to the position of the maxillary first molar area was analyzed by multiple regression analysis using the resultant coordinate data. Alteration from individual to average settings led to a positional difference in the maxillary first molar nonworking side movement. When the individual setting was lower than average, vertical occlusal error occurred, which might cause occlusal interference. The vertical occlusal error ranged from -2964 to 1711 μm. In addition, the occlusal effect of incisal guidance was measured as a partial regression coefficient of 0.882, which exceeded the effect of condylar guidance, 0.431. Potential occlusal error as a result of adopting an average condylar guidance setting was observed. The occlusal effect of incisal guidance doubled the effect of condylar guidance. Copyright © 2016 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.

  18. Dual Regression

    OpenAIRE

    Spady, Richard; Stouli, Sami

    2012-01-01

    We propose dual regression as an alternative to the quantile regression process for the global estimation of conditional distribution functions under minimal assumptions. Dual regression provides all the interpretational power of the quantile regression process while avoiding the need for repairing the intersecting conditional quantile surfaces that quantile regression often produces in practice. Our approach introduces a mathematical programming characterization of conditional distribution f...

  19. Estimating the exceedance probability of rain rate by logistic regression

    Science.gov (United States)

    Chiu, Long S.; Kedem, Benjamin

    1990-01-01

    Recent studies have shown that the fraction of an area with rain intensity above a fixed threshold is highly correlated with the area-averaged rain rate. To estimate the fractional rainy area, a logistic regression model, which estimates the conditional probability that rain rate over an area exceeds a fixed threshold given the values of related covariates, is developed. The problem of dependency in the data in the estimation procedure is bypassed by the method of partial likelihood. Analyses of simulated scanning multichannel microwave radiometer and observed electrically scanning microwave radiometer data during the Global Atlantic Tropical Experiment period show that the use of logistic regression in pixel classification is superior to multiple regression in predicting whether rain rate at each pixel exceeds a given threshold, even in the presence of noisy data. The potential of the logistic regression technique in satellite rain rate estimation is discussed.
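The logistic model the abstract describes can be illustrated with a small fit. The data here are synthetic (the study uses SMMR/ESMR radiometer data and partial likelihood); this sketch simply shows the model form, P(rain > threshold | x) = 1 / (1 + exp(-(b0 + b1*x))), estimated by gradient ascent on the log-likelihood.

```python
import numpy as np

# Simulate exceedance indicators from a known logistic law (b0=-1, b1=2).
rng = np.random.default_rng(1)
x = rng.uniform(-2, 2, 2000)
p_true = 1 / (1 + np.exp(-(-1.0 + 2.0 * x)))
y = (rng.uniform(size=2000) < p_true).astype(float)

# Fit by gradient ascent on the average log-likelihood.
X = np.column_stack([np.ones_like(x), x])
b = np.zeros(2)
for _ in range(5000):
    p = 1 / (1 + np.exp(-(X @ b)))
    b += 0.5 * X.T @ (y - p) / len(y)

b0_hat, b1_hat = b
```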

  20. Role of moving planes and moving spheres following Dupin cyclides

    KAUST Repository

    Jia, Xiaohong

    2014-03-01

    We provide explicit representations of three moving planes that form a μ-basis for a standard Dupin cyclide. We also show how to compute μ-bases for Dupin cyclides in general position and orientation from their implicit equations. In addition, we describe the role of moving planes and moving spheres in bridging between the implicit and rational parametric representations of these cyclides. © 2014 Elsevier B.V.

  2. Estimation of average annual streamflows and power potentials for Alaska and Hawaii

    Energy Technology Data Exchange (ETDEWEB)

    Verdin, Kristine L. [Idaho National Lab. (INL), Idaho Falls, ID (United States). Idaho National Engineering and Environmental Lab. (INEEL)

    2004-05-01

    This paper describes the work done to develop average annual streamflow estimates and power potential for the states of Alaska and Hawaii. The Elevation Derivatives for National Applications (EDNA) database was used, along with climatic datasets, to develop flow and power estimates for every stream reach in the EDNA database. Estimates of average annual streamflows were derived using state-specific regression equations, which were functions of average annual precipitation, precipitation intensity, drainage area, and other elevation-derived parameters. Power potential was calculated through the use of the average annual streamflow and the hydraulic head of each reach, which is calculated from the EDNA digital elevation model. In all, estimates of streamflow and power potential were calculated for over 170,000 stream segments in the Alaskan and Hawaiian datasets.
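The power-potential step described above reduces to the standard hydraulic power relation P = ρ·g·Q·H, with Q the average annual streamflow and H the hydraulic head taken from the elevation model. A sketch with illustrative numbers (not values from the EDNA dataset):

```python
# Gross hydraulic power from streamflow and head: P = rho * g * Q * H.
RHO = 1000.0   # water density, kg/m^3
G = 9.81       # gravitational acceleration, m/s^2

def hydraulic_power_kw(flow_m3s, head_m, efficiency=1.0):
    """Hydraulic power in kilowatts (efficiency=1 gives gross potential)."""
    return RHO * G * flow_m3s * head_m * efficiency / 1000.0

# Example: a reach carrying 12 m^3/s over a 25 m drop.
p_kw = hydraulic_power_kw(12.0, 25.0)
```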

  3. Automatic Moving Object Segmentation for Freely Moving Cameras

    Directory of Open Access Journals (Sweden)

    Yanli Wan

    2014-01-01

    Full Text Available This paper proposes a new moving object segmentation algorithm for freely moving cameras, which are very common in outdoor surveillance systems, car built-in surveillance systems, and robot navigation systems. A two-layer affine transformation model optimization method is proposed for camera compensation, where the outer-layer iteration filters out non-background feature points and the inner-layer iteration estimates a refined affine model based on the RANSAC method. The feature points are then classified into foreground and background according to the detected motion information. A geodesic-based graph cut algorithm is then employed to extract the moving foreground based on the classified features. Unlike existing methods based on global optimization or long-term feature point tracking, our algorithm operates on only two successive frames to segment the moving foreground, which makes it suitable for online video processing applications. The experimental results demonstrate the effectiveness of our algorithm in terms of both high accuracy and fast speed.

  4. Moving event and moving participant in aspectual conceptions

    Directory of Open Access Journals (Sweden)

    Izutsu Katsunobu

    2016-06-01

    Full Text Available This study advances an analysis of the event conception of aspectual forms in four East Asian languages: Ainu, Japanese, Korean, and Ryukyuan. As earlier studies point out, event conceptions can be divided into two major types: the moving-event type and the moving-participant type. All aspectual forms in Ainu and Korean, and most forms in Japanese and Ryukyuan, are based on one of these two types of event conception. Moving-participant oriented Ainu and moving-event oriented Japanese occupy two extremes, between which Korean and Ryukyuan stand. Notwithstanding the geographical relationships among the four languages, Ryukyuan is closer to Ainu than to Korean, whereas Korean is closer to Ainu than to Japanese.

  5. Methodology for the AutoRegressive Planet Search (ARPS) Project

    Science.gov (United States)

    Feigelson, Eric; Caceres, Gabriel; ARPS Collaboration

    2018-01-01

    The detection of periodic signals of transiting exoplanets is often impeded by the presence of aperiodic photometric variations. This variability is intrinsic to the host star in space-based observations (typically arising from magnetic activity) and to observational conditions in ground-based observations. The most common statistical procedures to remove stellar variations are nonparametric, such as wavelet decomposition or Gaussian Processes regression. However, many stars display variability with autoregressive properties, wherein later flux values are correlated with previous ones. Provided the time series is evenly spaced, parametric autoregressive models can prove very effective. Here we present the methodology of the Autoregressive Planet Search (ARPS) project, which uses Autoregressive Integrated Moving Average (ARIMA) models to treat a wide variety of stochastic short-memory processes, as well as nonstationarity. Additionally, we introduce a planet-search algorithm to detect periodic transits in the time-series residuals after application of ARIMA models. Our matched-filter algorithm, the Transit Comb Filter (TCF), replaces the traditional box-fitting step. We construct a periodogram based on the TCF to concentrate the signal of these periodic spikes. Various features of the original light curves, the ARIMA fits, the TCF periodograms, and folded light curves at peaks of the TCF periodogram can then be collected to provide constraints for planet detection. These features provide input into a multivariate classifier when a training set is available. The ARPS procedure has been applied to NASA's Kepler mission observations of ~200,000 stars (Caceres, Dissertation Talk, this meeting) and will be applied in the future to other datasets.
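A toy version of the two-step pipeline, under simplifying assumptions: an AR(1) fit stands in for the full ARIMA family, and a folded "deepest phase bin" statistic stands in for the Transit Comb Filter periodogram. Note that after autoregressive differencing a box-shaped transit leaves spike pairs in the residuals, which is exactly why a comb-style statistic is searched for rather than a box.

```python
import numpy as np

# Simulate AR(1) stellar variability with injected 3-sample transits.
rng = np.random.default_rng(2)
n, period, depth = 2000, 97, 0.8
flux = np.zeros(n)
for t in range(1, n):
    flux[t] = 0.9 * flux[t - 1] + rng.normal(0, 0.2)
flux[np.arange(n) % period < 3] -= depth

# Step 1: fit AR(1) by least squares and form residuals.
phi = np.dot(flux[1:], flux[:-1]) / np.dot(flux[:-1], flux[:-1])
resid = flux[1:] - phi * flux[:-1]

# Step 2: fold residuals at trial periods; ingress spikes align only
# at the true period, producing one very deep phase bin.
def deepest_bin(res, p):
    phase = np.arange(res.size) % p
    return min(res[phase == k].mean() for k in range(p))

trial = np.arange(50, 150)
scores = np.array([deepest_bin(resid, p) for p in trial])
best_period = trial[np.argmin(scores)]
```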

  6. Yearly, seasonal and monthly daily average diffuse sky radiation models

    International Nuclear Information System (INIS)

    Kassem, A.S.; Mujahid, A.M.; Turner, D.W.

    1993-01-01

    A daily average diffuse sky radiation regression model based on daily global radiation was developed utilizing two years of data taken near Blytheville, Arkansas (Lat. = 35.9°N, Long. = 89.9°W), U.S.A. The model has a determination coefficient of 0.91 and a standard error of estimate of 0.092. The data were also analyzed for a seasonal dependence, and four seasonal average daily models were developed for the spring, summer, fall and winter seasons. The coefficient of determination is 0.93, 0.81, 0.94 and 0.93, and the standard error of estimate is 0.08, 0.102, 0.042 and 0.075, for spring, summer, fall and winter, respectively. A monthly average daily diffuse sky radiation model was also developed. The coefficient of determination is 0.92 and the standard error of estimate is 0.083. A seasonal monthly average model was also developed, with a 0.91 coefficient of determination and a 0.085 standard error of estimate. The developed monthly daily average and daily models compare well with a selected number of previously developed models. (author). 11 ref., figs., tabs

  7. Hybrid Heat Capacity - Moving Slab Laser Concept

    International Nuclear Information System (INIS)

    Stappaerts, E A

    2002-01-01

    A hybrid configuration of a heat capacity laser (HCL) and a moving slab laser (MSL) has been studied. Multiple volumes of solid-state laser material are sequentially diode-pumped and their energy extracted. When a volume reaches a maximum temperature after a "sub-magazine depth", it is moved out of the pumping region into a cooling region, and a new volume is introduced. The total magazine depth equals the sub-magazine depth times the number of volumes. The design parameters are chosen to provide high duty factor operation, resulting in effective use of the diode arrays. The concept significantly reduces diode array cost over conventional heat capacity lasers, and it is considered enabling for many potential applications. A conceptual design study of the hybrid configuration has been carried out. Three concepts were evaluated using CAD tools. The concepts are described and their relative merits discussed. Because of reduced disk size and diode cost, the hybrid concept may allow scaling to average powers on the order of 0.5 MW/module.

  8. Journal of Chemical Sciences | Indian Academy of Sciences

    Indian Academy of Sciences (India)

    Decision tree, random forest, moving average analysis (MAA), multiple linear regression (MLR), partial least square regression (PLSR) and principal component regression (PCR) were used to develop models for prediction of CDK4 inhibitory activity. The statistical significance of models was assessed through specificity, ...

  9. One-dimensional quantum walk with a moving boundary

    International Nuclear Information System (INIS)

    Kwek, Leong Chuan; Setiawan

    2011-01-01

    Quantum walks are interesting models with potential applications to quantum algorithms and physical processes such as photosynthesis. In this paper, we study two models of one-dimensional quantum walks, namely, quantum walks with a moving absorbing wall and quantum walks with one stationary and one moving absorbing wall. For the former, we calculate numerically the survival probability, the rate of change of average position, and the rate of change of the standard deviation of the particle's position in the long-time limit for different wall velocities; we also study the asymptotic behavior and the dependence of the survival probability on the particle's initial state. For the latter, we compute the absorption probability of the right stationary wall for different velocities and initial positions of the left wall boundary. The results for these two models are compared with those obtained for the classical model. The difference between the results obtained for the quantum and classical models can be attributed to the difference in the probability distributions.

  10. What is new about covered interest parity condition in the European Union? Evidence from fractal cross-correlation regressions

    Science.gov (United States)

    Ferreira, Paulo; Kristoufek, Ladislav

    2017-11-01

    We analyse the covered interest parity (CIP) using two novel regression frameworks based on cross-correlation analysis (detrended cross-correlation analysis and detrending moving-average cross-correlation analysis), which allow for studying the relationships at different scales and work well under non-stationarity and heavy tails. CIP is a measure of capital mobility commonly used to analyse financial integration, and it remains an interesting object of study in the context of the European Union. Its importance is related to the fact that the adoption of a common currency brings countries certain benefits, but also involves risks such as the loss of economic instruments with which to face possible asymmetric shocks. While studying the Eurozone members could explain some problems in the common currency, studying the non-Euro countries is important for analysing whether they are fit to reap the possible benefits. Our results point to verification of the CIP mainly in the Central European countries, while in the remaining countries the verification of the parity is only residual.
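The second of the two frameworks can be sketched compactly: detrend both series with a centered moving average of window n, then correlate the residuals (a scale-dependent correlation coefficient in the spirit of detrending moving-average cross-correlation analysis). The series below are synthetic, sharing a common stochastic trend; the window choice is illustrative.

```python
import numpy as np

def moving_avg(x, n):
    """Centered moving average of odd window n (valid part only)."""
    return np.convolve(x, np.ones(n) / n, mode="valid")

def dmca_coefficient(x, y, n):
    """Correlation of the two series after moving-average detrending."""
    half = (n - 1) // 2
    xd = x[half:half + len(x) - n + 1] - moving_avg(x, n)
    yd = y[half:half + len(y) - n + 1] - moving_avg(y, n)
    return np.sum(xd * yd) / np.sqrt(np.sum(xd ** 2) * np.sum(yd ** 2))

# Two non-stationary series sharing a common random-walk component.
rng = np.random.default_rng(5)
common = np.cumsum(rng.normal(size=4000))
x = common + rng.normal(scale=0.5, size=4000)
y = common + rng.normal(scale=0.5, size=4000)
rho = dmca_coefficient(x, y, n=21)
```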

  11. Domestic Multinationals and Foreign-Owned Firms in Italy: Evidence from Quantile Regression

    Directory of Open Access Journals (Sweden)

    Grasseni, Mara

    2010-06-01

    Full Text Available This paper investigates the performance differences across and within foreign-owned firms and domestic multinationals in Italy. Used for the empirical analysis are non-parametric tests based on the concept of first-order stochastic dominance and the quantile regression technique. The firm-level analysis distinguishes between foreign-owned firms of different nationalities and domestic MNEs according to the location of their FDI, and it focuses not only on productivity but also on differences in average wages, capital intensity, and financial and non-financial indicators, namely ROS, ROI and debt leverage. Overall, the results provide evidence of remarkable heterogeneity across and within multinationals. In particular, it does not seem possible to identify a clear foreign advantage, at least in terms of productivity, because foreign-owned firms do not outperform domestic multinationals. Interesting results are obtained when focusing on ROS and ROI, where the profitability gaps change as one moves from the bottom to the top of the conditional distribution. Domestic multinationals investing only in developed countries present higher ROS and ROI compared with the subgroups of foreign-owned firms, but only at the lower quantiles, while at the upper quantiles the advantage seems to favour foreign firms. Finally, in regard to domestic multinationals, there is strong evidence that those active only in less developed countries persistently exhibit the worst performances.

  12. Regression: A Bibliography.

    Science.gov (United States)

    Pedrini, D. T.; Pedrini, Bonnie C.

    Regression, another mechanism studied by Sigmund Freud, has had much research, e.g., hypnotic regression, frustration regression, schizophrenic regression, and infra-human-animal regression (often directly related to fixation). Many investigators worked with hypnotic age regression, which has a long history, going back to Russian reflexologists.…

  13. A comparison of moving object detection methods for real-time moving object detection

    Science.gov (United States)

    Roshan, Aditya; Zhang, Yun

    2014-06-01

    Moving object detection has a wide variety of applications, from traffic monitoring, site monitoring, automatic theft identification and face detection to military surveillance. Many methods have been developed across the globe for moving object detection, but it is very difficult to find one which can work globally in all situations and with different types of videos. The purpose of this paper is to evaluate existing moving object detection methods which can be implemented in software on a desktop or laptop, for real-time object detection. There are several moving object detection methods noted in the literature, but few of them are suitable for real-time moving object detection. Most of the methods which provide for real-time movement are further limited by the number of objects and the scene complexity. This paper evaluates the four most commonly used moving object detection methods: the background subtraction technique, the Gaussian mixture model, and wavelet-based and optical flow-based methods. The work is based on evaluation of these four moving object detection methods using two different sets of cameras and two different scenes. The moving object detection methods have been implemented using MATLAB and the results are compared based on completeness of detected objects, noise, light change sensitivity, processing time, etc. After comparison, it is observed that the optical flow-based method took the least processing time and successfully detected the boundaries of moving objects, which also implies that it can be implemented for real-time moving object detection.
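The simplest of the four families compared, background subtraction, can be sketched in a few lines: maintain a running-average background and flag pixels whose difference from it exceeds a threshold. This is a generic textbook version, not the paper's MATLAB implementations; parameters are illustrative.

```python
import numpy as np

def detect_moving(frames, alpha=0.05, thresh=25.0):
    """Return a boolean foreground mask per frame (grayscale frames)."""
    bg = frames[0].astype(float)
    masks = []
    for f in frames[1:]:
        f = f.astype(float)
        masks.append(np.abs(f - bg) > thresh)   # flag large deviations
        bg = (1 - alpha) * bg + alpha * f       # slowly adapt background
    return masks

# Toy sequence: a bright 3x3 block moving right across a dark scene.
frames = [np.zeros((20, 20)) for _ in range(5)]
for i, f in enumerate(frames):
    f[8:11, 2 + 3 * i:5 + 3 * i] = 200.0
masks = detect_moving(frames)
```

The first mask flags both the block's new position and the "ghost" it left behind, a well-known artifact of background subtraction that adaptive updating gradually removes.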

  14. Watershed regressions for pesticides (warp) models for predicting atrazine concentrations in Corn Belt streams

    Science.gov (United States)

    Stone, Wesley W.; Gilliom, Robert J.

    2012-01-01

    Watershed Regressions for Pesticides (WARP) models, previously developed for atrazine at the national scale, are improved for application to the United States (U.S.) Corn Belt region by developing region-specific models that include watershed characteristics that are influential in predicting atrazine concentration statistics within the Corn Belt. WARP models for the Corn Belt (WARP-CB) were developed for annual maximum moving-average (14-, 21-, 30-, 60-, and 90-day durations) and annual 95th-percentile atrazine concentrations in streams of the Corn Belt region. The WARP-CB models accounted for 53 to 62% of the variability in the various concentration statistics among the model-development sites. Model predictions were within a factor of 5 of the observed concentration statistic for over 90% of the model-development sites. The WARP-CB residuals and uncertainty are lower than those of the National WARP model for the same sites. Although atrazine-use intensity is the most important explanatory variable in the National WARP models, it is not a significant variable in the WARP-CB models. The WARP-CB models provide improved predictions for Corn Belt streams draining watersheds with atrazine-use intensities of 17 kg/km2 of watershed area or greater.
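The concentration statistics that the WARP-CB models predict can be computed directly from a daily series. A sketch with synthetic daily data standing in for observed atrazine concentrations:

```python
import numpy as np

# Toy daily concentration record (lognormal noise, one year).
rng = np.random.default_rng(3)
daily = rng.lognormal(mean=0.0, sigma=1.0, size=365)

def max_moving_average(series, ndays):
    """Annual maximum of the ndays-long moving average."""
    window = np.ones(ndays) / ndays
    return np.convolve(series, window, mode="valid").max()

# The duration statistics modeled by WARP-CB, plus the 95th percentile.
stats = {n: max_moving_average(daily, n) for n in (14, 21, 30, 60, 90)}
p95 = np.percentile(daily, 95)
```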

  15. Advanced statistics: linear regression, part I: simple linear regression.

    Science.gov (United States)

    Marill, Keith A

    2004-01-01

    Simple linear regression is a mathematical technique used to model the relationship between a single independent predictor variable and a single dependent outcome variable. In this, the first of a two-part series exploring concepts in linear regression analysis, the four fundamental assumptions and the mechanics of simple linear regression are reviewed. The most common technique used to derive the regression line, the method of least squares, is described. The reader will be acquainted with other important concepts in simple linear regression, including: variable transformations, dummy variables, relationship to inference testing, and leverage. Simplified clinical examples with small datasets and graphic models are used to illustrate the points. This will provide a foundation for the second article in this series: a discussion of multiple linear regression, in which there are multiple predictor variables.
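The method of least squares described above has a closed form for a single predictor; a minimal worked example:

```python
# Fit y = a + b*x by minimizing the sum of squared residuals.
def least_squares(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

# Points lying exactly on y = 1 + 2x are recovered exactly.
a, b = least_squares([1, 2, 3, 4], [3, 5, 7, 9])
```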

  16. An Investigation of the Fit of Linear Regression Models to Data from an SAT[R] Validity Study. Research Report 2011-3

    Science.gov (United States)

    Kobrin, Jennifer L.; Sinharay, Sandip; Haberman, Shelby J.; Chajewski, Michael

    2011-01-01

    This study examined the adequacy of a multiple linear regression model for predicting first-year college grade point average (FYGPA) using SAT[R] scores and high school grade point average (HSGPA). A variety of techniques, both graphical and statistical, were used to examine if it is possible to improve on the linear regression model. The results…
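The model form examined in the study, FYGPA predicted from SAT score and HSGPA, can be illustrated with a multiple linear regression fit. The data below are synthetic (not College Board data) and the coefficients are arbitrary; only the fitting step is shown.

```python
import numpy as np

# Synthetic students: FYGPA = 0.5 + 0.001*SAT + 0.3*HSGPA + noise.
rng = np.random.default_rng(4)
sat = rng.uniform(800, 1600, 500)
hsgpa = rng.uniform(2.0, 4.0, 500)
fygpa = 0.5 + 0.001 * sat + 0.3 * hsgpa + rng.normal(0, 0.2, 500)

# Least-squares fit of the two-predictor linear model.
X = np.column_stack([np.ones(500), sat, hsgpa])
beta, *_ = np.linalg.lstsq(X, fygpa, rcond=None)
r2 = 1 - np.sum((fygpa - X @ beta) ** 2) / np.sum((fygpa - fygpa.mean()) ** 2)
```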

  17. Multiple linear regression and regression with time series error models in forecasting PM10 concentrations in Peninsular Malaysia.

    Science.gov (United States)

    Ng, Kar Yong; Awang, Norhashidah

    2018-01-06

    Frequent haze occurrences in Malaysia have made the management of PM10 (particulate matter with aerodynamic diameter less than 10 μm) pollution a critical task. This requires knowledge of the factors associated with PM10 variation and good forecasts of PM10 concentrations. Hence, this paper demonstrates the prediction of 1-day-ahead daily average PM10 concentrations based on predictor variables including meteorological parameters and gaseous pollutants. Three different models were built: a multiple linear regression (MLR) model with lagged predictor variables (MLR1), an MLR model with lagged predictor variables and PM10 concentrations (MLR2), and a regression with time series error (RTSE) model. The findings revealed that humidity, temperature, wind speed, wind direction, carbon monoxide and ozone were the main factors explaining the PM10 variation in Peninsular Malaysia. Comparison among the three models showed that the MLR2 model was on the same level as the RTSE model in terms of forecasting accuracy, while the MLR1 model was the worst.

  18. Advanced colorectal neoplasia risk stratification by penalized logistic regression.

    Science.gov (United States)

    Lin, Yunzhi; Yu, Menggang; Wang, Sijian; Chappell, Richard; Imperiale, Thomas F

    2016-08-01

    Colorectal cancer is the second leading cause of death from cancer in the United States. To facilitate the efficiency of colorectal cancer screening, there is a need to stratify risk for colorectal cancer among the 90% of US residents who are considered "average risk." In this article, we investigate such risk stratification rules for advanced colorectal neoplasia (colorectal cancer and advanced, precancerous polyps). We use a recently completed large cohort study of subjects who underwent a first screening colonoscopy. Logistic regression models have been used in the literature to estimate the risk of advanced colorectal neoplasia based on quantifiable risk factors. However, logistic regression may be prone to overfitting and instability in variable selection. Since most of the risk factors in our study have several categories, it was tempting to collapse these categories into fewer risk groups. We propose a penalized logistic regression method that automatically and simultaneously selects variables, groups categories, and estimates their coefficients by penalizing the [Formula: see text]-norm of both the coefficients and their differences. Hence, it encourages sparsity in the categories, i.e. grouping of the categories, and sparsity in the variables, i.e. variable selection. We apply the penalized logistic regression method to our data. The important variables are selected, with close categories simultaneously grouped, by penalized regression models with and without the interactions terms. The models are validated with 10-fold cross-validation. The receiver operating characteristic curves of the penalized regression models dominate the receiver operating characteristic curve of naive logistic regressions, indicating a superior discriminative performance. © The Author(s) 2013.
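The paper's penalty acts on both the coefficients and their pairwise differences, which groups categories as well as selecting variables. As a simpler stand-in that illustrates only the selection effect, the sketch below fits a plain L1-penalized (lasso) logistic regression by proximal gradient descent (ISTA) on synthetic data; the penalty, data, and tuning values are all illustrative.

```python
import numpy as np

# Synthetic data: 10 predictors, only the first two truly informative.
rng = np.random.default_rng(6)
n, p = 1000, 10
X = rng.normal(size=(n, p))
true_beta = np.array([1.5, -2.0, 0, 0, 0, 0, 0, 0, 0, 0])
prob = 1 / (1 + np.exp(-(X @ true_beta)))
y = (rng.uniform(size=n) < prob).astype(float)

# ISTA: gradient step on the logistic loss, then soft-thresholding.
lam, step = 0.04, 0.5
beta = np.zeros(p)
for _ in range(2000):
    grad = X.T @ (1 / (1 + np.exp(-(X @ beta))) - y) / n
    z = beta - step * grad
    beta = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)

selected = np.nonzero(beta)[0]   # indices of retained predictors
```

The soft-thresholding step zeroes out weakly informative coefficients exactly, which is the sparsity-in-variables behavior the abstract describes (the paper's fused-type penalty additionally merges adjacent category coefficients).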

  19. A land use regression model for ambient ultrafine particles in Montreal, Canada: A comparison of linear regression and a machine learning approach.

    Science.gov (United States)

    Weichenthal, Scott; Ryswyk, Keith Van; Goldstein, Alon; Bagg, Scott; Shekkarizfard, Maryam; Hatzopoulou, Marianne

    2016-04-01

    Existing evidence suggests that ambient ultrafine particles (UFPs) may have adverse health effects. We developed a land use regression model for UFPs in Montreal, Canada, using mobile monitoring data collected from 414 road segments during the summer and winter months between 2011 and 2012. Two different approaches were examined for model development: standard multivariable linear regression and a machine learning approach (kernel-based regularized least squares (KRLS)) that learns the functional form of covariate impacts on ambient UFP concentrations from the data. The final models included parameters for population density, ambient temperature and wind speed, land use parameters (park space and open space), length of local roads and rail, and estimated annual average NOx emissions from traffic. The final multivariable linear regression model explained 62% of the spatial variation in ambient UFP concentrations, whereas the KRLS model explained 79% of the variance. The KRLS model performed slightly better than the linear regression model when evaluated using an external dataset (R2 = 0.58 vs. 0.55) or a cross-validation procedure (R2 = 0.67 vs. 0.60). In general, our findings suggest that the KRLS approach may offer modest improvements in predictive performance compared to standard multivariable linear regression models used to estimate spatial variations in ambient UFPs. However, differences in predictive performance were not statistically significant when evaluated using the cross-validation procedure. Crown Copyright © 2015. Published by Elsevier Inc. All rights reserved.

  20. Research on measurement method of optical camouflage effect of moving object

    Science.gov (United States)

    Wang, Juntang; Xu, Weidong; Qu, Yang; Cui, Guangzhen

    2016-10-01

    Camouflage effectiveness measurement is an important part of camouflage technology: it tests and measures the camouflage effect of a target and the performance of camouflage equipment against tactical and technical requirements. Current optical-band camouflage effectiveness measurement is aimed mainly at static targets and cannot objectively reflect the dynamic camouflage effect of a moving target. This paper combines moving-object detection with camouflage effect detection, taking the digital camouflage of a moving object as the research object. The adaptive background update algorithm of Surendra was improved, and a method of optical camouflage effect detection using the Lab color space for moving-object detection was presented. The binary image of the moving object is extracted with this measurement technique, and in the image sequence the characteristic parameters such as dispersion, eccentricity, complexity and moment invariants are used to construct the feature vector space. The Euclidean distance for the moving target treated with digital camouflage was calculated; the average Euclidean distance over 375 frames was 189.45, indicating that the dispersion, eccentricity, complexity and moment invariants of the digital camouflage graphics differ greatly from those of the moving target without digital camouflage. The measurement results showed that the camouflage effect was good. Meanwhile, with the performance evaluation module, the correlation coefficient of the dynamic target image ranged from 0.0035 to 0.1275, with some fluctuation, reflecting the adaptability of target and background under dynamic conditions. In view of existing infrared camouflage technology, the next step is to extend the camouflage effect measurement of moving targets to the infrared band.

  1. Evidence of redshifts in the average solar line profiles of C IV and Si IV from OSO-8 observations

    Science.gov (United States)

    Roussel-Dupre, D.; Shine, R. A.

    1982-01-01

    Line profiles of C IV and Si IV obtained by the Colorado spectrometer on OSO-8 are presented. It is shown that the mean profiles are redshifted, with a magnitude varying from 6-20 km/s and a mean of 12 km/s. An apparent average downflow of material in the 50,000-100,000 K temperature range is measured. The redshifts are observed in the line center positions of spatially and temporally averaged profiles and are measured either relative to chromospheric Si I lines or from a comparison of sun center and limb profiles. The observed 6-20 km/s redshifts place constraints on the mechanisms that dominate EUV line emission, since they require a strong weighting of the emission toward regions of downward moving material, and since there is little evidence for corresponding upward moving material in these lines.

  2. Estimation of Geographically Weighted Regression Case Study on Wet Land Paddy Productivities in Tulungagung Regency

    Directory of Open Access Journals (Sweden)

    Danang Ariyanto

    2017-11-01

    Full Text Available Regression is a method connecting independent variables to a dependent variable, with estimated parameters as output. A principal problem with this method is its application to spatial data. The Geographically Weighted Regression (GWR) method is used to solve this problem. GWR is a regression technique that extends the traditional regression framework by allowing the estimation of local rather than global parameters. In other words, GWR runs a regression for each location, instead of a single regression for the entire study area. The purpose of this research is to analyze the factors influencing wet land paddy productivity in Tulungagung Regency. The method used in this research is GWR with a cross-validation bandwidth and weights from an adaptive Gaussian kernel function. This research uses 4 variables presumed to affect wet land paddy productivity: the rate of rainfall (X1), the average cost of fertilizer per hectare (X2), the average cost of pesticides per hectare (X3), and the allocation of subsidized NPK fertilizer in the food crops sub-sector (X4). Based on the results, X1, X2, X3 and X4 each have a different effect in each district. Therefore, improving wet land paddy productivity in Tulungagung Regency requires district-specific policies based on the GWR model.
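
The GWR mechanics described above, one weighted least-squares fit per location with Gaussian kernel weights, can be sketched as follows. This is a minimal illustration on synthetic spatial data with a fixed bandwidth rather than the paper's cross-validated adaptive bandwidth; all names and values are assumptions:

```python
import numpy as np

def gwr_coefficients(coords, X, y, bandwidth):
    """Local WLS at every location i: beta_i = (X' W_i X)^-1 X' W_i y."""
    n = len(y)
    betas = np.empty((n, X.shape[1]))
    for i in range(n):
        d = np.linalg.norm(coords - coords[i], axis=1)
        w = np.exp(-0.5 * (d / bandwidth) ** 2)   # Gaussian kernel weights
        XtW = X.T * w
        betas[i] = np.linalg.solve(XtW @ X, XtW @ y)
    return betas

# Synthetic data: intercept plus one covariate whose effect drifts across space
rng = np.random.default_rng(2)
coords = rng.uniform(0, 10, size=(150, 2))
x1 = rng.normal(size=150)
slope = 0.5 + 0.1 * coords[:, 0]          # spatially varying true coefficient
y = 2.0 + slope * x1 + rng.normal(scale=0.1, size=150)
X = np.column_stack([np.ones(150), x1])
betas = gwr_coefficients(coords, X, y, bandwidth=2.0)
```

Each row of `betas` is a location-specific coefficient vector, which is exactly the "different effect in each district" output the abstract describes.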

  3. Correlation between tumor regression grade and rectal volume in neoadjuvant concurrent chemoradiotherapy for rectal cancer

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Hong Seok; Choi, Doo Ho; Park, Hee Chul; Park, Won; Yu, Jeong Il; Chung, Kwang Zoo [Dept. of Radiation Oncology, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul (Korea, Republic of)

    2016-09-15

    To determine whether large rectal volume on planning computed tomography (CT) results in a lower tumor regression grade (TRG) after neoadjuvant concurrent chemoradiotherapy (CCRT) in rectal cancer patients. We reviewed medical records of 113 patients treated with surgery following neoadjuvant CCRT for rectal cancer between January and December 2012. Rectal volume was contoured on the axial images in which gross tumor volume was included. Average axial rectal area (ARA) was defined as rectal volume divided by longitudinal tumor length. The impact of rectal volume and ARA on TRG was assessed. Average rectal volume and ARA were 11.3 mL and 2.9 cm². After completion of neoadjuvant CCRT in 113 patients, pathologic results revealed total regression (TRG 4) in 28 patients (25%), good regression (TRG 3) in 25 patients (22%), moderate regression (TRG 2) in 34 patients (30%), minor regression (TRG 1) in 24 patients (21%), and no regression (TRG 0) in 2 patients (2%). No difference in rectal volume or ARA was found among the TRG groups. A linear correlation existed between rectal volume and TRG (p = 0.036) but not between ARA and TRG (p = 0.058). Rectal volume on planning CT has no significance for TRG in patients receiving neoadjuvant CCRT for rectal cancer. These results indicate that maintaining minimal rectal volume before each treatment may not be necessary.

  4. Correlation between tumor regression grade and rectal volume in neoadjuvant concurrent chemoradiotherapy for rectal cancer

    International Nuclear Information System (INIS)

    Lee, Hong Seok; Choi, Doo Ho; Park, Hee Chul; Park, Won; Yu, Jeong Il; Chung, Kwang Zoo

    2016-01-01

    To determine whether large rectal volume on planning computed tomography (CT) results in a lower tumor regression grade (TRG) after neoadjuvant concurrent chemoradiotherapy (CCRT) in rectal cancer patients. We reviewed medical records of 113 patients treated with surgery following neoadjuvant CCRT for rectal cancer between January and December 2012. Rectal volume was contoured on the axial images in which gross tumor volume was included. Average axial rectal area (ARA) was defined as rectal volume divided by longitudinal tumor length. The impact of rectal volume and ARA on TRG was assessed. Average rectal volume and ARA were 11.3 mL and 2.9 cm². After completion of neoadjuvant CCRT in 113 patients, pathologic results revealed total regression (TRG 4) in 28 patients (25%), good regression (TRG 3) in 25 patients (22%), moderate regression (TRG 2) in 34 patients (30%), minor regression (TRG 1) in 24 patients (21%), and no regression (TRG 0) in 2 patients (2%). No difference in rectal volume or ARA was found among the TRG groups. A linear correlation existed between rectal volume and TRG (p = 0.036) but not between ARA and TRG (p = 0.058). Rectal volume on planning CT has no significance for TRG in patients receiving neoadjuvant CCRT for rectal cancer. These results indicate that maintaining minimal rectal volume before each treatment may not be necessary.

  5. SAR Ground Moving Target Indication Based on Relative Residue of DPCA Processing

    Directory of Open Access Journals (Sweden)

    Jia Xu

    2016-10-01

    Full Text Available For modern synthetic aperture radar (SAR), there are increasingly urgent demands on ground moving target indication (GMTI), covering not only point moving targets such as cars, trucks or tanks but also distributed moving targets such as river or ocean surfaces. Among the existing GMTI methods, displaced phase center antenna (DPCA) processing can effectively cancel the strong ground clutter and has been widely used. However, its detection performance is closely related to the target's signal-to-clutter ratio (SCR) as well as its radial velocity, and it cannot effectively detect weak, large-sized river surfaces in strong ground clutter because of their low SCR caused by specular scattering. This paper proposes a novel method called relative residue of DPCA (RR-DPCA), which jointly utilizes the DPCA cancellation outputs and the multi-look images to improve the detection performance for weak river surfaces. Furthermore, based on a statistical analysis of the RR-DPCA outputs on a homogeneous background, the cell averaging (CA) method can be readily applied for subsequent constant false alarm rate (CFAR) detection. The proposed RR-DPCA method can detect point moving targets and distributed moving targets simultaneously. Finally, results on both simulated and real data are provided to demonstrate the effectiveness of the proposed SAR/GMTI method.
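
The cell averaging (CA) CFAR step mentioned above can be illustrated in isolation: each cell under test is compared against a scaled estimate of the local noise level taken from surrounding training cells, with guard cells excluded. A minimal one-dimensional sketch on simulated exponential clutter; all parameter values are assumptions, not the paper's:

```python
import numpy as np

def ca_cfar(x, guard=2, train=8, scale=4.0):
    """Cell-averaging CFAR: flag cells exceeding a scaled mean of the
    training cells on both sides, excluding guard cells around the test cell."""
    n = len(x)
    detections = np.zeros(n, dtype=bool)
    for i in range(train + guard, n - train - guard):
        left = x[i - guard - train : i - guard]
        right = x[i + guard + 1 : i + guard + train + 1]
        noise = np.mean(np.concatenate([left, right]))  # local clutter estimate
        detections[i] = x[i] > scale * noise
    return detections

rng = np.random.default_rng(3)
power = rng.exponential(1.0, size=200)   # homogeneous clutter background
power[100] += 30.0                       # injected moving-target return
hits = ca_cfar(power)
```

Because the threshold adapts to the local clutter estimate, the false alarm rate stays roughly constant on a homogeneous background, which is why the statistics of the RR-DPCA output matter for this step.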

  6. Arcuate Fasciculus in Autism Spectrum Disorder Toddlers with Language Regression

    Directory of Open Access Journals (Sweden)

    Zhang Lin

    2018-03-01

    Full Text Available Language regression is observed in a subset of toddlers with autism spectrum disorder (ASD) as an initial symptom. However, this phenomenon has not been fully explored, partly due to the lack of definite diagnostic evaluation methods and criteria. Materials and Methods: Fifteen toddlers with ASD exhibiting language regression and fourteen age-matched typically developing (TD) controls underwent diffusion tensor imaging (DTI). DTI parameters including fractional anisotropy (FA), average fiber length (AFL), tract volume (TV) and number of voxels (NV) were analyzed with Neuro 3D on a Siemens syngo workstation. Subsequently, the data were analyzed using IBM SPSS Statistics 22. Results: Compared with TD children, a significant reduction of FA along with an increase in TV and NV was observed in ASD children with language regression. Note that there were no significant differences between ASD and TD children in AFL of the arcuate fasciculus (AF). Conclusions: These DTI changes in the AF suggest that microstructural anomalies of the AF white matter may be associated with language deficits in ASD children exhibiting language regression starting from an early age.

  7. Ordinary least square regression, orthogonal regression, geometric mean regression and their applications in aerosol science

    International Nuclear Information System (INIS)

    Leng Ling; Zhang Tianyi; Kleinman, Lawrence; Zhu Wei

    2007-01-01

    Regression analysis, especially the ordinary least squares method which assumes that errors are confined to the dependent variable, has seen a fair share of its applications in aerosol science. The ordinary least squares approach, however, could be problematic due to the fact that atmospheric data often does not lend itself to calling one variable independent and the other dependent. Errors often exist for both measurements. In this work, we examine two regression approaches available to accommodate this situation. They are orthogonal regression and geometric mean regression. Comparisons are made theoretically as well as numerically through an aerosol study examining whether the ratio of organic aerosol to CO would change with age
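
The contrast among the three estimators can be made concrete on synthetic data where both variables carry measurement error: ordinary least squares attenuates the slope toward zero, while orthogonal regression and geometric mean regression do not. A minimal sketch; the error scales and data are arbitrary assumptions:

```python
import numpy as np

def slopes(x, y):
    sx, sy = x.std(ddof=1), y.std(ddof=1)
    r = np.corrcoef(x, y)[0, 1]
    ols = r * sy / sx                       # assumes error only in y
    gmr = np.sign(r) * sy / sx              # geometric mean of y|x and x|y slopes
    # Orthogonal regression: first principal axis of the centered data
    X = np.column_stack([x - x.mean(), y - y.mean()])
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    orth = vt[0, 1] / vt[0, 0]
    return ols, orth, gmr

# Both variables measured with error around a common true signal (true slope = 1)
rng = np.random.default_rng(4)
t = rng.normal(size=500)
x = t + rng.normal(scale=0.3, size=500)
y = t + rng.normal(scale=0.3, size=500)
ols, orth, gmr = slopes(x, y)
```

Here OLS is biased low by the error in x, while the symmetric estimators recover a slope near the true value of 1; this is the situation with atmospheric measurements that the abstract describes.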

  8. Polynomial regression analysis and significance test of the regression function

    International Nuclear Information System (INIS)

    Gao Zhengming; Zhao Juan; He Shengping

    2012-01-01

    In order to analyze the decay heating power of a certain radioactive isotope per kilogram with the polynomial regression method, the paper first demonstrates the broad usage of the polynomial function and estimates its parameters by ordinary least squares. A significance test method for the polynomial regression function is then derived, exploiting the similarity between the polynomial regression model and the multivariable linear regression model. Finally, polynomial regression analysis and a significance test of the polynomial function are applied to the decay heating power of the isotope per kilogram in accordance with the authors' real work. (authors)
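
A minimal sketch of the approach described: fit the polynomial by ordinary least squares on a Vandermonde design (which is exactly a multivariable linear regression on powers of x) and form the overall regression F statistic. The decay-heat-like curve below is synthetic, not the paper's isotope data:

```python
import numpy as np

def polyfit_ols(x, y, degree):
    """Polynomial regression as linear regression on powers of x."""
    X = np.vander(x, degree + 1, increasing=True)  # columns: 1, x, x^2, ...
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef, X

def regression_f_stat(X, y, coef):
    """Overall significance test: regression mean square over residual mean square."""
    yhat = X @ coef
    p = X.shape[1] - 1                 # non-intercept terms
    ssr = np.sum((yhat - y.mean()) ** 2)
    sse = np.sum((y - yhat) ** 2)
    return (ssr / p) / (sse / (len(y) - p - 1))

rng = np.random.default_rng(5)
x = np.linspace(0, 10, 50)
y = 5.0 - 0.8 * x + 0.03 * x**2 + rng.normal(scale=0.05, size=50)  # synthetic decay-heat curve
coef, X = polyfit_ols(x, y, 2)
F = regression_f_stat(X, y, coef)
```

The F value would be compared against an F(p, n-p-1) critical value, mirroring the multivariable linear regression test the paper adapts.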

  9. Regression of oral lichenoid lesions after replacement of dental restorations.

    Science.gov (United States)

    Mårell, L; Tillberg, A; Widman, L; Bergdahl, J; Berglund, A

    2014-05-01

    The aim of the study was to determine the prognosis and to evaluate the regression of lichenoid contact reactions (LCR) and oral lichen planus (OLP) after replacement of dental restorative materials suspected as causing the lesions. Forty-four referred patients with oral lesions participated in a follow-up study that was initiated an average of 6 years after the first examination at the Department of Odontology, i.e. the baseline examination. The patients underwent odontological clinical examination and answered a questionnaire with questions regarding dental health, medical and psychological health, and treatments undertaken from baseline to follow-up. After exchange of dental materials, regression of oral lesions was significantly higher among patients with LCR than with OLP. As no cases with OLP regressed after an exchange of materials, a proper diagnosis has to be made to avoid unnecessary exchanges of intact restorations on patients with OLP.

  10. Embodied affectivity: On moving and being moved

    Directory of Open Access Journals (Sweden)

    Thomas eFuchs

    2014-06-01

    Full Text Available There is a growing body of research indicating that bodily sensation and behaviour strongly influence one’s emotional reaction towards certain situations or objects. On this background, a framework model of embodied affectivity is suggested: we regard emotions as resulting from the circular interaction between affective qualities or affordances in the environment and the subject’s bodily resonance, be it in the form of sensations, postures, expressive movements or movement tendencies. Motion and emotion are thus intrinsically connected: one is moved by movement (perception; impression; affection and moved to move (action; expression; e-motion. Through its resonance, the body functions as a medium of emotional perception: it colours or charges self-experience and the environment with affective valences while it remains itself in the background of one’s own awareness. This model is then applied to emotional social understanding or interaffectivity which is regarded as an intertwinement of two cycles of embodied affectivity, thus continuously modifying each partner’s affective affordances and bodily resonance. We conclude with considerations of how embodied affectivity is altered in psychopathology and can be addressed in psychotherapy of the embodied self.

  11. Reduced Rank Regression

    DEFF Research Database (Denmark)

    Johansen, Søren

    2008-01-01

    The reduced rank regression model is a multivariate regression model with a coefficient matrix with reduced rank. The reduced rank regression algorithm is an estimation procedure, which estimates the reduced rank regression model. It is related to canonical correlations and involves calculating...

  12. Method and apparatus for a combination moving bed thermal treatment reactor and moving bed filter

    Energy Technology Data Exchange (ETDEWEB)

    Badger, Phillip C.; Dunn, Jr., Kenneth J.

    2015-09-01

    A moving bed gasification/thermal treatment reactor includes a geometry in which moving bed reactor particles serve as both a moving bed filter and a heat carrier to provide thermal energy for thermal treatment reactions, such that the moving bed filter and the heat carrier are one and the same to remove solid particulates or droplets generated by thermal treatment processes or injected into the moving bed filter from other sources.

  13. Modelling infant mortality rate in Central Java, Indonesia use generalized poisson regression method

    Science.gov (United States)

    Prahutama, Alan; Sudarno

    2018-05-01

    The infant mortality rate is the number of deaths under one year of age occurring among the live births in a given geographical area during a given year, per 1,000 live births occurring among the population of the given geographical area during the same year. This problem needs to be addressed because it is an important element of a country’s economic development. A high infant mortality rate will disrupt the stability of a country as it relates to the sustainability of the population in the country. One regression model that can be used to analyze the relationship between a discrete dependent variable Y and independent variables X is the Poisson regression model. Regression models for a discrete dependent variable include, among others, Poisson regression, negative binomial regression and generalized Poisson regression. In this research, generalized Poisson regression modeling gives a better AIC value than Poisson regression. The most significant variable is the number of health facilities (X1), while the variable with the greatest influence on the infant mortality rate is average breastfeeding (X9).
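
For illustration, ordinary Poisson regression with a log link can be fitted by iteratively reweighted least squares; note this is the plain Poisson GLM, not the generalized Poisson model the paper prefers, and the covariates below are synthetic stand-ins for the paper's predictors:

```python
import numpy as np

def poisson_irls(X, y, iters=50):
    """Poisson regression (log link) via iteratively reweighted least squares."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = np.exp(X @ beta)             # mean (= variance under Poisson)
        z = X @ beta + (y - mu) / mu      # working response
        XtW = X.T * mu                    # IRLS working weights
        beta = np.linalg.solve(XtW @ X, XtW @ z)
    return beta

rng = np.random.default_rng(6)
n = 500
x1 = rng.normal(size=n)   # stand-in for e.g. number of health facilities (standardized)
x2 = rng.normal(size=n)   # stand-in for e.g. average breastfeeding (standardized)
X = np.column_stack([np.ones(n), x1, x2])
y = rng.poisson(np.exp(0.5 + 0.3 * x1 - 0.2 * x2))
beta = poisson_irls(X, y)
```

The generalized Poisson model adds a dispersion parameter on top of this, relaxing the Poisson mean-equals-variance restriction, which is what improves the AIC in the paper's data.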

  14. Analysis and comparison of safety models using average daily, average hourly, and microscopic traffic.

    Science.gov (United States)

    Wang, Ling; Abdel-Aty, Mohamed; Wang, Xuesong; Yu, Rongjie

    2018-02-01

    There have been plenty of traffic safety studies based on average daily traffic (ADT), average hourly traffic (AHT), or microscopic traffic at 5 min intervals. Nevertheless, not enough research has compared the performance of these three types of safety studies, and few previous studies have examined whether the results of one type of study are transferable to the other two. First, this study built three models: a Bayesian Poisson-lognormal model to estimate the daily crash frequency using ADT, a Bayesian Poisson-lognormal model to estimate the hourly crash frequency using AHT, and a Bayesian logistic regression model for real-time safety analysis using microscopic traffic. The model results showed that the crash contributing factors found by the different models were comparable but not the same. Four variables, i.e., the logarithm of volume, the standard deviation of speed, the logarithm of segment length, and the existence of a diverge segment, were positively significant in all three models. Additionally, weaving segments experienced higher daily and hourly crash frequencies than merge and basic segments. Then, each of the ADT-based, AHT-based, and real-time models was used to estimate safety conditions at different levels: daily and hourly; meanwhile, the real-time model was also used at 5 min intervals. The results uncovered that the ADT- and AHT-based safety models performed similarly in predicting daily and hourly crash frequencies, and the real-time safety model was able to provide hourly crash frequency. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. Analysis of the Dose Distribution of Moving Organ using a Moving Phantom System

    International Nuclear Information System (INIS)

    Kim, Yon Lae; Park, Byung Moon; Bae, Yong Ki; Kang, Min Young; Bang, Dong Wan; Lee, Gui Won

    2006-01-01

    Little research has so far been performed on the dose distribution of moving organs in radiotherapy. In order to simulate the organ motion caused by respiration, a multipurpose phantom and a moving device were used, and dosimetric measurements of the dose distribution of moving organs were conducted in this study. The purpose of our study was to evaluate how dose distributions are changed by respiratory motion. A multipurpose phantom and a moving device were developed for the measurement of the dose distribution of an organ moving with respiration. Acrylic was considered the most obvious choice of phantom material. For construction of the phantom, we used acrylic and cork with densities of 1.14 g/cm³ and 0.32 g/cm³, respectively. The acrylic and cork slabs in the phantom were used to simulate normal organs and lung, respectively. The moving phantom system was composed of the moving device, a motion control system, and the acrylic and cork phantom. Gafchromic film and EDR2 film were used to measure dose distributions. The moving device can be driven by two directional step motors and is able to perform 2-dimensional movements (x, z axes), but only 1-dimensional movement (z axis) was used for this study. A larger penumbra was observed in the cork phantom than in the acrylic phantom. The dose profile and isodose curve of the Gafchromic EBT film were not uniform, since the film has a small optical density response to dose. As the organ motion increased, the blurring of penumbra, flatness, and symmetry increased. In most measurements of dose distributions, Gafchromic EBT film showed poorer flatness and symmetry than EDR2 film, but the penumbra distributions were more or less comparable. The Gafchromic EBT film is more useful as it does not need development, and a higher radiation dose can be delivered than with EDR2 film without losing film characteristics.
But as response of the optical density of Gafchromic EBT film to dose is low, beam profiles

  16. Do Nondomestic Undergraduates Choose a Major Field in Order to Maximize Grade Point Averages?

    Science.gov (United States)

    Bergman, Matthew E.; Fass-Holmes, Barry

    2016-01-01

    The authors investigated whether undergraduates attending an American West Coast public university who were not U.S. citizens (nondomestic) maximized their grade point averages (GPA) through their choice of major field. Multiple regression hierarchical linear modeling analyses showed that major field's effect size was small for these…

  17. Quantile Regression Methods

    DEFF Research Database (Denmark)

    Fitzenberger, Bernd; Wilke, Ralf Andreas

    2015-01-01

    Quantile regression is emerging as a popular statistical approach, which complements the estimation of conditional mean models. While the latter only focuses on one aspect of the conditional distribution of the dependent variable, the mean, quantile regression provides more detailed insights by modeling conditional quantiles. Quantile regression can therefore detect whether the partial effect of a regressor on the conditional quantiles is the same for all quantiles or differs across quantiles. Quantile regression can provide evidence for a statistical relationship between two variables even if the mean regression model does not. We provide a short informal introduction into the principle of quantile regression which includes an illustrative application from empirical labor market research. This is followed by briefly sketching the underlying statistical model for linear quantile regression based…
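
The defining property behind quantile regression is that the check (pinball) loss is minimized at the conditional quantile; in the constant-only case the minimizer is simply the empirical quantile. A small numerical illustration on synthetic skewed data (all settings are assumptions):

```python
import numpy as np

def pinball_loss(y, q, tau):
    """Check (pinball) loss: the objective quantile regression minimizes."""
    e = y - q
    return np.mean(np.maximum(tau * e, (tau - 1) * e))

rng = np.random.default_rng(7)
y = rng.lognormal(size=2000)           # skewed "wage-like" outcome
tau = 0.9

# Minimize the pinball loss over a grid of constants; the minimizer should
# coincide with the empirical 90th percentile.
grid = np.linspace(y.min(), y.max(), 4000)
losses = [pinball_loss(y, q, tau) for q in grid]
best = grid[int(np.argmin(losses))]
```

Replacing the constant `q` with a linear function of covariates and minimizing the same loss yields linear quantile regression, where each choice of tau traces out a different part of the conditional distribution.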

  18. Comparing lagged linear correlation, lagged regression, Granger causality, and vector autoregression for uncovering associations in EHR data.

    Science.gov (United States)

    Levine, Matthew E; Albers, David J; Hripcsak, George

    2016-01-01

    Time series analysis methods have been shown to reveal clinical and biological associations in data collected in the electronic health record. We wish to develop reliable high-throughput methods for identifying adverse drug effects that are easy to implement and produce readily interpretable results. To move toward this goal, we used univariate and multivariate lagged regression models to investigate associations between twenty pairs of drug orders and laboratory measurements. Multivariate lagged regression models exhibited higher sensitivity and specificity than univariate lagged regression in the 20 examples, and incorporating autoregressive terms for labs and drugs produced more robust signals in cases of known associations among the 20 example pairings. Moreover, including inpatient admission terms in the model attenuated the signals for some cases of unlikely associations, demonstrating how multivariate lagged regression models' explicit handling of context-based variables can provide a simple way to probe for health-care processes that confound analyses of EHR data.
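
A univariate lagged regression of the kind described can be sketched by stacking lagged copies of the exposure series into a design matrix and fitting OLS. The example below uses a synthetic series with a known two-step delay, standing in for a drug order and a laboratory measurement, not actual EHR data:

```python
import numpy as np

def lagged_design(x, lags):
    """Design matrix with an intercept and columns x_{t-1}, ..., x_{t-lags}."""
    n = len(x)
    cols = [np.ones(n - lags)]
    cols += [x[lags - k : n - k] for k in range(1, lags + 1)]
    return np.column_stack(cols)

def lagged_regression(y, x, lags):
    X = lagged_design(x, lags)
    coef, *_ = np.linalg.lstsq(X, y[lags:], rcond=None)
    return coef  # [intercept, lag1, lag2, ...]

# Synthetic exposure/outcome pair: y responds to x with a 2-step delay
rng = np.random.default_rng(8)
x = rng.normal(size=1000)
y = np.zeros(1000)
y[2:] = 0.8 * x[:-2]
y += rng.normal(scale=0.1, size=1000)
coef = lagged_regression(y, x, lags=4)
```

The multivariate variant in the abstract additionally includes autoregressive lags of y and context variables (such as inpatient admission) as extra columns of the same design matrix.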

  19. Detection of moving objects from a moving platform in urban scenes

    NARCIS (Netherlands)

    Haar, F.B. ter; Hollander, R.J.M. den; Dijk, J.

    2010-01-01

    Moving object detection in urban scenes is important for the guidance of autonomous vehicles, robot navigation, and monitoring. In this paper moving objects are automatically detected using three sequential frames and tracked over a longer period. To this extend we modify the plane+parallax,

  20. Regression Phalanxes

    OpenAIRE

    Zhang, Hongyang; Welch, William J.; Zamar, Ruben H.

    2017-01-01

    Tomal et al. (2015) introduced the notion of "phalanxes" in the context of rare-class detection in two-class classification problems. A phalanx is a subset of features that work well for classification tasks. In this paper, we propose a different class of phalanxes for application in regression settings. We define a "Regression Phalanx" - a subset of features that work well together for prediction. We propose a novel algorithm which automatically chooses Regression Phalanxes from high-dimensi...

  1. Financial Aid and First-Year Collegiate GPA: A Regression Discontinuity Approach

    Science.gov (United States)

    Curs, Bradley R.; Harper, Casandra E.

    2012-01-01

    Using a regression discontinuity design, we investigate whether a merit-based financial aid program has a causal effect on the first-year grade point average of first-time out-of-state freshmen at the University of Oregon. Our results indicate that merit-based financial aid has a positive and significant effect on first-year collegiate grade point…

  2. Changes of signal intensity on precontrast magnetic resonance imaging in spontaneously regressed lumbar disc herniation

    International Nuclear Information System (INIS)

    Okushima, Yuichiro; Chiba, Kazuhiro; Matsumoto, Morio; Maruiwa, Hirofumi; Nishizawa, Takashi; Toyama, Yoshiaki

    2001-01-01

    To see whether MRI can provide a criterion for conservative therapy of lumbar disc herniation, temporal changes in the images were studied retrospectively in 41 cases of spontaneous regression. The patients underwent imaging 3 times on average until regression, with a GE Signa unit (1.5 T). Images were evaluated by 2 experts. Certain cases showed signal intensity changes during the process of regression. The period to the disappearance of melosalgia and to the regression tended to be short in cases in which the center of the herniated mass was brighter than the disc and/or the margin was less bright than the center. Further studies were thought necessary for a clear conclusion. (K.H.)

  3. Low-resolution Airborne Radar Air/ground Moving Target Classification and Recognition

    Directory of Open Access Journals (Sweden)

    Wang Fu-you

    2014-10-01

    Full Text Available Radar Target Recognition (RTR) is one of the most important needs of modern and future airborne surveillance radars, and it is still one of the key technologies of radar. The majority of present algorithms are based on wide-band radar signals, which not only require a high-performance radar system and a high target Signal-to-Noise Ratio (SNR), but are also sensitive to the angle between radar and target. For Low-Resolution Airborne Surveillance Radar (LRASR) in downward-looking mode, a slow-flying aircraft and a ground moving truck have similar Doppler velocity and Radar Cross Section (RCS), leading to the problem that LRASR air/ground moving targets cannot be distinguished, which also disturbs the detection, tracking, and classification of low-altitude slow-flying aircraft. To solve these issues, an algorithm based on narrowband fractal features and phase modulation features is presented for LRASR air/ground moving target classification. Real measured data are applied to verify the algorithm; the classification results validate the proposed method: helicopters and trucks can be well classified, and the average discrimination rate is more than 89% when SNR ≥ 15 dB.

  4. Modeling an Application's Theoretical Minimum and Average Transactional Response Times

    Energy Technology Data Exchange (ETDEWEB)

    Paiz, Mary Rose [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-04-01

    The theoretical minimum transactional response time of an application serves as a basis for the expected response time. The lower threshold for the minimum response time represents the minimum amount of time that the application should take to complete a transaction. Knowing the lower threshold is beneficial in detecting anomalies that are results of unsuccessful transactions. Conversely, when an application's response time falls above an upper threshold, there is likely an anomaly in the application that is causing unusual performance issues in the transaction. This report explains how the non-stationary Generalized Extreme Value distribution is used to estimate the lower threshold of an application's daily minimum transactional response time. It also explains how the seasonal Autoregressive Integrated Moving Average time series model is used to estimate the upper threshold for an application's average transactional response time.
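
As a simplified stand-in for the seasonal ARIMA step (illustration only, not the report's model), one can fit an AR(1) by least squares and place the upper threshold a few residual standard deviations above the one-step forecast; the series, parameter values, and threshold rule below are all assumptions:

```python
import numpy as np

def ar1_upper_threshold(series, z=3.0):
    """Fit AR(1) by least squares; set the upper threshold at the
    one-step-ahead forecast plus z residual standard deviations."""
    x, y = series[:-1], series[1:]
    A = np.column_stack([np.ones(len(x)), x])
    (c, phi), *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - (c + phi * x)
    forecast = c + phi * series[-1]
    return forecast + z * resid.std(ddof=1)

# Synthetic daily average response times (ms) with mild autocorrelation
rng = np.random.default_rng(9)
rt = np.empty(365)
rt[0] = 200.0
for t in range(1, 365):
    rt[t] = 50.0 + 0.75 * rt[t - 1] + rng.normal(scale=5.0)

threshold = ar1_upper_threshold(rt)
```

A full seasonal ARIMA model would additionally difference the series and capture weekly seasonality, but the anomaly rule is the same: flag days whose average response time exceeds the model-based upper threshold.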

  5. MOVES regional level sensitivity analysis

    Science.gov (United States)

    2012-01-01

    The MOVES Regional Level Sensitivity Analysis was conducted to increase understanding of the operations of the MOVES Model in regional emissions analysis and to highlight the following: : the relative sensitivity of selected MOVES Model input paramet...

  6. Regression Methods for Virtual Metrology of Layer Thickness in Chemical Vapor Deposition

    DEFF Research Database (Denmark)

    Purwins, Hendrik; Barak, Bernd; Nagi, Ahmed

    2014-01-01

    The quality of wafer production in semiconductor manufacturing cannot always be monitored by a costly physical measurement. Instead of measuring a quantity directly, it can be predicted by a regression method (Virtual Metrology). In this paper, a survey on regression methods is given to predict average Silicon Nitride cap layer thickness for the Plasma Enhanced Chemical Vapor Deposition (PECVD) dual-layer metal passivation stack process. Process and production equipment Fault Detection and Classification (FDC) data are used as predictor variables. Various variable sets are compared: one most … algorithm, and Support Vector Regression (SVR). On a test set, SVR outperforms the other methods by a large margin, being more robust towards changes in the production conditions. The method performs better on high-dimensional multivariate input data than on the most predictive variables alone. Process…

  7. Moving Field Guides

    Science.gov (United States)

    Cassie Meador; Mark Twery; Meagan. Leatherbury

    2011-01-01

    The Moving Field Guides (MFG) project is a creative take on site interpretation. Moving Field Guides provide an example of how scientific and artistic endeavors work in parallel. Both begin with keen observations that produce information that must be analyzed, understood, and interpreted. That interpretation then needs to be communicated to others to complete the...

  8. Effect of 3 Key Factors on Average End to End Delay and Jitter in MANET

    Directory of Open Access Journals (Sweden)

    Saqib Hakak

    2015-01-01

    Full Text Available A mobile ad-hoc network (MANET is a self-configuring infrastructure-less network of mobile devices connected by wireless links where each node or mobile device is independent to move in any desired direction and thus the links keep moving from one node to another. In such a network, the mobile nodes are equipped with CSMA/CA (carrier sense multiple access with collision avoidance transceivers and communicate with each other via radio. In MANETs, routing is considered one of the most difficult and challenging tasks. Because of this, most studies on MANETs have focused on comparing protocols under varying network conditions. But to the best of our knowledge no one has studied the effect of other factors on network performance indicators like throughput, jitter and so on, revealing how much influence a particular factor or group of factors has on each network performance indicator. Thus, in this study the effects of three key factors, i.e. routing protocol, packet size and DSSS rate, were evaluated on key network performance metrics, i.e. average delay and average jitter, as these parameters are crucial for network performance and directly affect the buffering requirements for all video devices and downstream networks.

  9. Code Red: Explaining Average Age of Death in the City of Hamilton

    Directory of Open Access Journals (Sweden)

    Patrick F. DeLuca

    2015-11-01

    Full Text Available The aim of this study is to identify the underlying factors that explain the average age of death in the City of Hamilton, Ontario, Canada, as identified in the Code Red Series of articles that were published in the city's local newspaper in 2010. Using a combination of data from the Canadian Census, the Government of Ontario and the Canadian Institute for Health Information, factor analysis was performed yielding three factors relating to poverty, working class, and health and aging. In a regression analysis these factors account for 42% of the total variability in the average ages of death observed at the census tract level of geography within the city.

  10. Advanced statistics: linear regression, part II: multiple linear regression.

    Science.gov (United States)

    Marill, Keith A

    2004-01-01

    The applications of simple linear regression in medical research are limited, because in most situations, there are multiple relevant predictor variables. Univariate statistical techniques such as simple linear regression use a single predictor variable, and they often may be mathematically correct but clinically misleading. Multiple linear regression is a mathematical technique used to model the relationship between multiple independent predictor variables and a single dependent outcome variable. It is used in medical research to model observational data, as well as in diagnostic and therapeutic studies in which the outcome is dependent on more than one factor. Although the technique generally is limited to data that can be expressed with a linear function, it benefits from a well-developed mathematical framework that yields unique solutions and exact confidence intervals for regression coefficients. Building on Part I of this series, this article acquaints the reader with some of the important concepts in multiple regression analysis. These include multicollinearity, interaction effects, and an expansion of the discussion of inference testing, leverage, and variable transformations to multivariate models. Examples from the first article in this series are expanded on using a primarily graphic, rather than mathematical, approach. The importance of the relationships among the predictor variables and the dependence of the multivariate model coefficients on the choice of these variables are stressed. Finally, concepts in regression model building are discussed.
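    The mechanics behind multiple linear regression can be sketched by solving the normal equations directly. The toy data and coefficients below are invented; a real analysis would use a statistics package that also reports confidence intervals for the coefficients, as the article stresses.

```python
def fit_multiple_regression(X, y):
    """Ordinary least squares via the normal equations (X'X)b = X'y.

    Minimal sketch for illustration only (no inference, no diagnostics).
    """
    rows = [[1.0] + list(r) for r in X]  # prepend an intercept column
    p = len(rows[0])
    # Build X'X and X'y.
    xtx = [[sum(r[i] * r[j] for r in rows) for j in range(p)] for i in range(p)]
    xty = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(p)]
    # Solve by Gaussian elimination with partial pivoting.
    for c in range(p):
        piv = max(range(c, p), key=lambda r: abs(xtx[r][c]))
        xtx[c], xtx[piv] = xtx[piv], xtx[c]
        xty[c], xty[piv] = xty[piv], xty[c]
        for r in range(c + 1, p):
            f = xtx[r][c] / xtx[c][c]
            for k in range(c, p):
                xtx[r][k] -= f * xtx[c][k]
            xty[r] -= f * xty[c]
    beta = [0.0] * p
    for r in range(p - 1, -1, -1):
        beta[r] = (xty[r] - sum(xtx[r][k] * beta[k]
                                for k in range(r + 1, p))) / xtx[r][r]
    return beta  # [intercept, b1, b2, ...]

# Noiseless toy data: y = 2 + 3*x1 - 1*x2, two predictor variables.
X = [(0, 0), (1, 0), (0, 1), (1, 1), (2, 3)]
y = [2 + 3 * a - b for a, b in X]
beta = fit_multiple_regression(X, y)
```

    With noiseless data the fitted coefficients recover the generating model exactly, which is a useful sanity check before moving to real, noisy data.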

  11. Boosted beta regression.

    Directory of Open Access Journals (Sweden)

    Matthias Schmid

    Full Text Available Regression analysis with a bounded outcome is a common problem in applied statistics. Typical examples include regression models for percentage outcomes and the analysis of ratings that are measured on a bounded scale. In this paper, we consider beta regression, which is a generalization of logit models to situations where the response is continuous on the interval (0,1). Consequently, beta regression is a convenient tool for analyzing percentage responses. The classical approach to fitting a beta regression model is to use maximum likelihood estimation with subsequent AIC-based variable selection. As an alternative to this established - yet unstable - approach, we propose a new estimation technique called boosted beta regression. With boosted beta regression, estimation and variable selection can be carried out simultaneously in a highly efficient way. Additionally, both the mean and the variance of a percentage response can be modeled using flexible nonlinear covariate effects. As a consequence, the new method accounts for common problems such as overdispersion and non-binomial variance structures.

  12. Use of multiple linear regression and logistic regression models to investigate changes in birthweight for term singleton infants in Scotland.

    Science.gov (United States)

    Bonellie, Sandra R

    2012-10-01

    To illustrate the use of regression and logistic regression models to investigate changes over time in the size of babies, particularly in relation to social deprivation, age of the mother and smoking. Mean birthweight has been found to be increasing in many countries in recent years, but there is still a group of babies who are born with low birthweights. Population-based retrospective cohort study. Multiple linear regression and logistic regression models are used to analyse data on term 'singleton births' from Scottish hospitals between 1994 and 2003. Mothers who smoke are shown to give birth to lighter babies on average, a difference of approximately 0.57 standard deviations (95% confidence interval 0.55-0.58) when adjusted for sex and parity. These mothers are also more likely to have babies that are low birthweight (odds ratio 3.46, 95% confidence interval 3.30-3.63) compared with non-smokers. Low birthweight is 30% more likely where the mother lives in the most deprived areas compared with the least deprived (odds ratio 1.30, 95% confidence interval 1.21-1.40). Smoking during pregnancy is shown to have a detrimental effect on the size of infants at birth. This effect explains some, though not all, of the observed socioeconomic differences in birthweight. It also explains much of the observed birthweight differences by the age of the mother. Identifying mothers at greater risk of having a low birthweight baby has important implications for the care and advice this group receives. © 2012 Blackwell Publishing Ltd.
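    Odds ratios like those quoted above come from a logistic model, but the same quantity, with a Wald confidence interval, can be computed directly from a 2x2 table. The counts below are invented for illustration and are not the study's data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table.

    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases.
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # se of log odds ratio
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# E.g. smoking vs low birthweight with invented counts:
# 120/880 low-birthweight among smokers, 60/1440 among non-smokers.
or_, lo, hi = odds_ratio_ci(120, 880, 60, 1440)
```

    An adjusted odds ratio from a fitted logistic regression differs from this crude table-based value whenever confounders such as deprivation are included.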

  13. Regression to Causality : Regression-style presentation influences causal attribution

    DEFF Research Database (Denmark)

    Bordacconi, Mats Joe; Larsen, Martin Vinæs

    2014-01-01

    of equivalent results presented as either regression models or as a test of two sample means. Our experiment shows that the subjects who were presented with results as estimates from a regression model were more inclined to interpret these results causally. Our experiment implies that scholars using regression...... models – one of the primary vehicles for analyzing statistical results in political science – encourage causal interpretation. Specifically, we demonstrate that presenting observational results in a regression model, rather than as a simple comparison of means, makes causal interpretation of the results...... more likely. Our experiment drew on a sample of 235 university students from three different social science degree programs (political science, sociology and economics), all of whom had received substantial training in statistics. The subjects were asked to compare and evaluate the validity...

  14. Estimation of pure moving average vector models | Usoro ...

    African Journals Online (AJOL)

    International Journal of Natural and Applied Sciences, Vol 3, No 3 (2007).

  15. on the performance of Autoregressive Moving Average Polynomial

    African Journals Online (AJOL)

    Timothy Ademakinwa

    estimated using least squares and Newton Raphson iterative methods. To determine the order of the ... r is the degree of polynomial while j is the number of lag of the ..... use a real time series dataset, monthly rainfall and temperature series ...

  16. Predicting College Success: Achievement, Demographic, and Psychosocial Predictors of First-Semester College Grade Point Average

    Science.gov (United States)

    Saltonstall, Margot

    2013-01-01

    This study seeks to advance and expand research on college student success. Using multinomial logistic regression analysis, the study investigates the contribution of psychosocial variables above and beyond traditional achievement and demographic measures to predicting first-semester college grade point average (GPA). It also investigates if…

  17. Adaptive kernel regression for freehand 3D ultrasound reconstruction

    Science.gov (United States)

    Alshalalfah, Abdel-Latif; Daoud, Mohammad I.; Al-Najar, Mahasen

    2017-03-01

    Freehand three-dimensional (3D) ultrasound imaging enables low-cost and flexible 3D scanning of arbitrary-shaped organs, where the operator can freely move a two-dimensional (2D) ultrasound probe to acquire a sequence of tracked cross-sectional images of the anatomy. Often, the acquired 2D ultrasound images are irregularly and sparsely distributed in the 3D space. Several 3D reconstruction algorithms have been proposed to synthesize 3D ultrasound volumes based on the acquired 2D images. A challenging task during the reconstruction process is to preserve the texture patterns in the synthesized volume and ensure that all gaps in the volume are correctly filled. This paper presents an adaptive kernel regression algorithm that can effectively reconstruct high-quality freehand 3D ultrasound volumes. The algorithm employs a kernel regression model that enables nonparametric interpolation of the voxel gray-level values. The kernel size of the regression model is adaptively adjusted based on the characteristics of the voxel that is being interpolated. In particular, when the algorithm is employed to interpolate a voxel located in a region with dense ultrasound data samples, the size of the kernel is reduced to preserve the texture patterns. On the other hand, the size of the kernel is increased in areas that include large gaps to enable effective gap filling. The performance of the proposed algorithm was compared with seven previous interpolation approaches by synthesizing freehand 3D ultrasound volumes of a benign breast tumor. The experimental results show that the proposed algorithm outperforms the other interpolation approaches.
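    A one-dimensional sketch of the adaptive-kernel idea, assuming a Gaussian kernel whose bandwidth grows with the distance to the k-th nearest sample: dense regions get a narrow kernel (preserving texture), gaps get a wide one (filling holes). The paper's 3D reconstruction follows the same density-driven principle; the function name, defaults, and data here are all illustrative.

```python
import math

def adaptive_kernel_regression(xs, ys, x0, base_h=0.5, k=3):
    """Nadaraya-Watson estimate at x0 with an adaptive Gaussian kernel.

    The bandwidth h is the distance to the k-th nearest sample,
    floored at base_h, so it adapts to the local sample density.
    """
    dists = sorted(abs(x - x0) for x in xs)
    h = max(base_h, dists[min(k, len(dists) - 1)])
    w = [math.exp(-((x - x0) ** 2) / (2 * h * h)) for x in xs]
    return sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)

xs = [0.0, 0.1, 0.2, 0.3, 2.0]  # dense cluster plus one far sample
ys = [1.0, 1.0, 1.0, 1.0, 5.0]
near = adaptive_kernel_regression(xs, ys, 0.15)  # inside the cluster
gap = adaptive_kernel_regression(xs, ys, 1.0)    # inside the gap
```

    Inside the cluster the estimate stays close to the local value 1.0; inside the gap the widened kernel blends both regions, which is the intended gap-filling behavior.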

  18. Cellular automaton model in the fundamental diagram approach reproducing the synchronized outflow of wide moving jams

    International Nuclear Information System (INIS)

    Tian, Jun-fang; Yuan, Zhen-zhou; Jia, Bin; Fan, Hong-qiang; Wang, Tao

    2012-01-01

    Velocity effect and critical velocity are incorporated into the average space gap cellular automaton model [J.F. Tian, et al., Phys. A 391 (2012) 3129], which was able to reproduce many spatiotemporal dynamics reported by the three-phase theory except the synchronized outflow of wide moving jams. The physics of traffic breakdown has been explained. Various congested patterns induced by the on-ramp are reproduced. It is shown that the occurrence of synchronized outflow and free outflow of wide moving jams is closely related to drivers' time delay in acceleration at the downstream jam front and to the critical velocity, respectively. -- Highlights: ► Velocity effect is added into the average space gap cellular automaton model. ► The physics of traffic breakdown has been explained. ► The probabilistic nature of traffic breakdown is simulated. ► Various congested patterns induced by the on-ramp are reproduced. ► The occurrence of synchronized outflow of jams depends on drivers' time delay.

  19. Driving-forces model on individual behavior in scenarios considering moving threat agents

    Science.gov (United States)

    Li, Shuying; Zhuang, Jun; Shen, Shifei; Wang, Jia

    2017-09-01

    The individual behavior model is a contributory factor to improve the accuracy of agent-based simulation in different scenarios. However, few studies have considered moving threat agents, which often occur in terrorist attacks caused by attackers with close-range weapons (e.g., sword, stick). At the same time, many existing behavior models lack validation from cases or experiments. This paper builds a new individual behavior model based on seven behavioral hypotheses. The driving-forces model is an extension of the classical social force model considering scenarios including moving threat agents. An experiment was conducted to validate the key components of the model. Then the model is compared with an advanced Elliptical Specification II social force model, by calculating the fitting errors between the simulated and experimental trajectories, and being applied to simulate a specific circumstance. Our results show that the driving-forces model reduced the fitting error by an average of 33.9% and the standard deviation by an average of 44.5%, which indicates the accuracy and stability of the model in the studied situation. The new driving-forces model could be used to simulate individual behavior when analyzing the risk of specific scenarios using agent-based simulation methods, such as risk analysis of close-range terrorist attacks in public places.

  20. Resident characterization of better-than- and worse-than-average clinical teaching.

    Science.gov (United States)

    Haydar, Bishr; Charnin, Jonathan; Voepel-Lewis, Terri; Baker, Keith

    2014-01-01

    Clinical teachers and trainees share a common view of what constitutes excellent clinical teaching, but associations between these behaviors and high teaching scores have not been established. This study used residents' written feedback to their clinical teachers, to identify themes associated with above- or below-average teaching scores. All resident evaluations of their clinical supervisors in a single department were collected from January 1, 2007 until December 31, 2008. A mean teaching score assigned by each resident was calculated. Evaluations that were 20% higher or 15% lower than the resident's mean score were used. A subset of these evaluations was reviewed, generating a list of 28 themes for further study. Two researchers then, independently coded the presence or absence of these themes in each evaluation. Interrater reliability of the themes and logistic regression were used to evaluate the predictive associations of the themes with above- or below-average evaluations. Five hundred twenty-seven above-average and 285 below-average evaluations were evaluated for the presence or absence of 15 positive themes and 13 negative themes, which were divided into four categories: teaching, supervision, interpersonal, and feedback. Thirteen of 15 positive themes correlated with above-average evaluations and nine had high interrater reliability (Intraclass Correlation Coefficient >0.6). Twelve of 13 negative themes correlated with below-average evaluations, and all had high interrater reliability. On the basis of these findings, the authors developed 13 recommendations for clinical educators. The authors developed 13 recommendations for clinical teachers using the themes identified from the above- and below-average clinical teaching evaluations submitted by anesthesia residents.

  1. State Averages

    Data.gov (United States)

    U.S. Department of Health & Human Services — A list of a variety of averages for each state or territory as well as the national average, including each quality measure, staffing, fine amount and number of...

  2. An Indoor Continuous Positioning Algorithm on the Move by Fusing Sensors and Wi-Fi on Smartphones.

    Science.gov (United States)

    Li, Huaiyu; Chen, Xiuwan; Jing, Guifei; Wang, Yuan; Cao, Yanfeng; Li, Fei; Zhang, Xinlong; Xiao, Han

    2015-12-11

    Wi-Fi indoor positioning algorithms experience large positioning error and low stability when continuously positioning terminals that are on the move. This paper proposes a novel indoor continuous positioning algorithm for terminals on the move, fusing sensors and Wi-Fi on smartphones. The main innovative points include an improved Wi-Fi positioning algorithm and a novel positioning fusion algorithm named the Trust Chain Positioning Fusion (TCPF) algorithm. The improved Wi-Fi positioning algorithm was designed based on the properties of Wi-Fi signals on the move, which were found in a novel "quasi-dynamic" Wi-Fi signal experiment. The TCPF algorithm is proposed to realize the "process-level" fusion of Wi-Fi and Pedestrian Dead Reckoning (PDR) positioning, including three parts: trusted point determination, trust state and positioning fusion algorithm. An experiment is carried out for verification in a typical indoor environment, and the average positioning error on the move is 1.36 m, a decrease of 28.8% compared to an existing algorithm. The results show that the proposed algorithm can effectively reduce the influence caused by unstable Wi-Fi signals, and improve the accuracy and stability of indoor continuous positioning on the move.
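    The TCPF algorithm itself is not detailed in the abstract, but its underlying intuition, weighting each positioning source by how much it is trusted, can be sketched with a generic inverse-variance fusion of a PDR estimate and a Wi-Fi fix. This is a baseline, not the paper's method, and all coordinates and variances below are invented.

```python
def fuse(pdr, wifi, var_pdr, var_wifi):
    """Inverse-variance weighted fusion of two 2D position estimates.

    The estimate with lower variance (higher trust) gets more weight;
    TCPF refines the same intuition with its trust-chain mechanism.
    """
    w_p = 1.0 / var_pdr
    w_w = 1.0 / var_wifi
    return tuple((w_p * p + w_w * q) / (w_p + w_w)
                 for p, q in zip(pdr, wifi))

# PDR drifts slowly (low variance); Wi-Fi jumps around (high variance),
# so the fused position stays closer to the PDR estimate.
fused = fuse(pdr=(3.0, 4.0), wifi=(5.0, 2.0), var_pdr=0.5, var_wifi=2.0)
```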

  3. Moving Magnetic Features Around a Pore

    Energy Technology Data Exchange (ETDEWEB)

    Kaithakkal, A. J.; Riethmüller, T. L.; Solanki, S. K.; Lagg, A.; Barthol, P.; Gandorfer, A.; Gizon, L.; Hirzberger, J.; VanNoort, M. [Max Planck Institute for Solar System Research, Justus-von-Liebig-Weg 3, Göttingen D-37077 (Germany); Rodríguez, J. Blanco [Grupo de Astronomía y Ciencias del Espacio, Universidad de Valencia, E-46980 Paterna, Valencia (Spain); Iniesta, J. C. Del Toro; Suárez, D. Orozco [Instituto de Astrofísica de Andalucía (CSIC), Apartado de Correos 3004, E-18080 Granada (Spain); Schmidt, W. [Kiepenheuer-Institut für Sonnenphysik, Schöneckstr. 6, D-79104 Freiburg (Germany); Pillet, V. Martínez [National Solar Observatory, 3665 Discovery Drive, Boulder, CO 80303 (United States); Knölker, M., E-mail: anjali@mps.mpg.de [High Altitude Observatory, National Center for Atmospheric Research, P.O. Box 3000, Boulder, CO 80307-3000 (United States)

    2017-03-01

    Spectropolarimetric observations from Sunrise/IMaX, obtained in 2013 June, are used for a statistical analysis to determine the physical properties of moving magnetic features (MMFs) observed near a pore. MMFs of the same and opposite polarity, with respect to the pore, are found to stream from its border at average speeds of 1.3 km s⁻¹ and 1.2 km s⁻¹, respectively, with mainly same-polarity MMFs found further away from the pore. MMFs of both polarities are found to harbor rather weak, inclined magnetic fields. Opposite-polarity MMFs are blueshifted, whereas same-polarity MMFs do not show any preference for up- or downflows. Most of the MMFs are found to be of sub-arcsecond size and carry a mean flux of ∼1.2 × 10¹⁷ Mx.

  4. Intermediate and advanced topics in multilevel logistic regression analysis.

    Science.gov (United States)

    Austin, Peter C; Merlo, Juan

    2017-09-10

    Multilevel data occur frequently in health services, population and public health, and epidemiologic research. In such research, binary outcomes are common. Multilevel logistic regression models allow one to account for the clustering of subjects within clusters of higher-level units when estimating the effect of subject and cluster characteristics on subject outcomes. A search of the PubMed database demonstrated that the use of multilevel or hierarchical regression models is increasing rapidly. However, our impression is that many analysts simply use multilevel regression models to account for the nuisance of within-cluster homogeneity that is induced by clustering. In this article, we describe a suite of analyses that can complement the fitting of multilevel logistic regression models. These ancillary analyses permit analysts to estimate the marginal or population-average effect of covariates measured at the subject and cluster level, in contrast to the within-cluster or cluster-specific effects arising from the original multilevel logistic regression model. We describe the interval odds ratio and the proportion of opposed odds ratios, which are summary measures of effect for cluster-level covariates. We describe the variance partition coefficient and the median odds ratio, which are measures of components of variance and heterogeneity in outcomes. These measures allow one to quantify the magnitude of the general contextual effect. We describe an R² measure that allows analysts to quantify the proportion of variation explained by different multilevel logistic regression models. We illustrate the application and interpretation of these measures by analyzing mortality in patients hospitalized with a diagnosis of acute myocardial infarction. © 2017 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.

  5. A comparison of multiple regression and neural network techniques for mapping in situ pCO2 data

    International Nuclear Information System (INIS)

    Lefevre, Nathalie; Watson, Andrew J.; Watson, Adam R.

    2005-01-01

    Using about 138,000 measurements of surface pCO₂ in the Atlantic subpolar gyre (50-70 deg N, 60-10 deg W) during 1995-1997, we compare two methods of interpolation in space and time: a monthly distribution of surface pCO₂ constructed using multiple linear regressions on position and temperature, and a self-organizing neural network approach. Both methods confirm characteristics of the region found in previous work, i.e. the subpolar gyre is a sink for atmospheric CO₂ throughout the year, and exhibits a strong seasonal variability with the highest undersaturations occurring in spring and summer due to biological activity. As an annual average the surface pCO₂ is higher than estimates based on available syntheses of surface pCO₂. This supports earlier suggestions that the sink of CO₂ in the Atlantic subpolar gyre has decreased over the last decade instead of increasing as previously assumed. The neural network is able to capture a more complex distribution than can be well represented by linear regressions, but both techniques agree relatively well on the average values of pCO₂ and derived fluxes. However, when both techniques are used with a subset of the data, the neural network predicts the remaining data to a much better accuracy than the regressions, with a residual standard deviation ranging from 3 to 11 μatm. The subpolar gyre is a net sink of CO₂ of 0.13 Gt-C/yr using the multiple linear regressions and 0.15 Gt-C/yr using the neural network, on average between 1995 and 1997. Both calculations were made with the NCEP monthly wind speeds converted to 10 m height and averaged between 1995 and 1997, and using the gas exchange coefficient of Wanninkhof

  6. Moving toroidal limiter

    International Nuclear Information System (INIS)

    Ikuta, Kazunari; Miyahara, Akira.

    1983-06-01

    The concept of the limiter-divertor proposed by Mirnov is extended to a toroidal limiter-divertor (which we call a moving toroidal limiter) using a stream of ferromagnetic balls coated with low-Z materials such as plastics, graphite and ceramics. An important advantage of the use of ferromagnetic materials would be the possibility of soft landing of the balls on a catcher, provided that the temperature of the balls is below the Curie point. Moreover, the moving toroidal limiter would work as a protector of the first wall not only against the vertical movement of the plasma ring but also against the violent inward motion driven by major disruption, because the orbit of the balls in the moving toroidal limiter is distributed over the small-major-radius side of the toroidal plasma. (author)

  7. Relative distance between tracers as a measure of diffusivity within moving aggregates

    Science.gov (United States)

    Pönisch, Wolfram; Zaburdaev, Vasily

    2018-02-01

    Tracking of particles, be it a passive tracer or an actively moving bacterium in the growing bacterial colony, is a powerful technique to probe the physical properties of the environment of the particles. One of the most common measures of particle motion driven by fluctuations and random forces is its diffusivity, which is routinely obtained by measuring the mean squared displacement of the particles. However, often the tracer particles may be moving in a domain or an aggregate which itself experiences some regular or random motion and thus masks the diffusivity of tracers. Here we provide a method for assessing the diffusivity of tracer particles within mobile aggregates by measuring the so-called mean squared relative distance (MSRD) between two tracers. We provide analytical expressions for both the ensemble and time averaged MSRD allowing for direct identification of diffusivities from experimental data.
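    The drift-cancellation argument can be checked numerically: in a simple 1D model where two tracers diffuse inside an aggregate moving at constant speed, the drift inflates the single-tracer mean squared displacement but drops out of the relative distance between the tracers. All parameter values below are illustrative.

```python
import random

def msd_and_msrd(n_trials=2000, n_steps=100, drift=0.5, sigma=1.0, seed=1):
    """Ensemble-averaged MSD of one tracer vs MSRD of a tracer pair.

    Each step adds a shared drift plus independent Gaussian noise to
    both tracers; the drift cancels in their relative distance.
    """
    rng = random.Random(seed)
    msd = msrd = 0.0
    for _ in range(n_trials):
        x1 = x2 = 0.0
        for _ in range(n_steps):
            x1 += drift + rng.gauss(0, sigma)  # common drift + own noise
            x2 += drift + rng.gauss(0, sigma)
        msd += x1 ** 2
        msrd += (x1 - x2) ** 2
    return msd / n_trials, msrd / n_trials

msd, msrd = msd_and_msrd()
# Theory: MSD ≈ (drift*n)^2 + n*sigma^2 = 2600,
#         MSRD ≈ 2*n*sigma^2 = 200 (drift-free).
```

    The simulated MSRD stays near the drift-free diffusive value, which is exactly why the paper proposes it as a diffusivity measure inside mobile aggregates.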

  8. A Mixed-Methods Study Investigating the Relationship between Media Multitasking Orientation and Grade Point Average

    Science.gov (United States)

    Lee, Jennifer

    2012-01-01

    The intent of this study was to examine the relationship between media multitasking orientation and grade point average. The study utilized a mixed-methods approach to investigate the research questions. In the quantitative section of the study, the primary method of statistical analyses was multiple regression. The independent variables for the…

  9. Moving Matters: The Causal Effect of Moving Schools on Student Performance. Working Paper #01-15

    Science.gov (United States)

    Schwartz, Amy Ellen; Stiefel, Leanna; Cordes, Sarah A.

    2015-01-01

    The majority of existing research on mobility indicates that students do worse in the year of a school move. This research, however, has been unsuccessful in isolating the causal effects of mobility and often fails to distinguish the heterogeneous impacts of moves, conflating structural moves (mandated by a school's terminal grade) and…

  10. Percentiles of the run-length distribution of the Exponentially Weighted Moving Average (EWMA) median chart

    Science.gov (United States)

    Tan, K. L.; Chong, Z. L.; Khoo, M. B. C.; Teoh, W. L.; Teh, S. Y.

    2017-09-01

    Quality control is crucial in a wide variety of fields, as it can help to satisfy customers' needs and requirements by enhancing and improving products and services to a superior quality level. The EWMA median chart was proposed as a useful alternative to the EWMA X̄ chart because the median-type chart is robust against contamination, outliers or small deviations from the normality assumption compared to the traditional X̄-type chart. To provide a complete understanding of the run-length distribution, the percentiles of the run-length distribution should be investigated rather than depending solely on the average run length (ARL) performance measure. This is because interpretation depending on the ARL alone can be misleading, as the skewness and shape of the run-length distribution change with the process mean shift, varying from almost symmetric when the magnitude of the mean shift is large, to highly right-skewed when the process is in-control (IC) or only slightly out-of-control (OOC). Before computing the percentiles of the run-length distribution, optimal parameters of the EWMA median chart are obtained by minimizing the OOC ARL, while retaining the IC ARL at a desired value.
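    The run-length distribution of such a chart is easy to explore by Monte Carlo. The sketch below simulates an EWMA chart of sample medians and reports the 10th, 50th and 90th run-length percentiles alongside the ARL; the chart parameters lam, L and n are illustrative choices, not the optimized values from the paper.

```python
import random
import statistics

def ewma_median_run_lengths(lam=0.2, L=2.5, n=5, shift=0.0,
                            trials=300, cap=5000, seed=7):
    """Monte Carlo run-length distribution of an EWMA median chart.

    Each period draws a sample of size n from N(shift, 1), computes its
    median m_t, and updates z_t = lam*m_t + (1-lam)*z_{t-1}; the chart
    signals when |z_t| exceeds L times the asymptotic sd of z.
    """
    rng = random.Random(seed)
    # Estimate the variance of the in-control sample median empirically.
    var_med = statistics.fmean(
        statistics.median(rng.gauss(0, 1) for _ in range(n)) ** 2
        for _ in range(4000))
    limit = L * (lam / (2 - lam) * var_med) ** 0.5
    rls = []
    for _ in range(trials):
        z, t = 0.0, 0
        while t < cap:
            t += 1
            m = statistics.median(rng.gauss(shift, 1) for _ in range(n))
            z = lam * m + (1 - lam) * z
            if abs(z) > limit:
                break  # chart signals at time t
        rls.append(t)
    rls.sort()
    pct = {p: rls[int(p / 100 * (len(rls) - 1))] for p in (10, 50, 90)}
    return statistics.fmean(rls), pct

arl_ic, pct_ic = ewma_median_run_lengths(shift=0.0)          # in-control
arl_oc, pct_oc = ewma_median_run_lengths(shift=1.0, seed=8)  # shifted mean
```

    Comparing the in-control and out-of-control percentiles makes the abstract's point concrete: the ARL alone hides how strongly right-skewed the in-control run-length distribution is.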

  11. Predicting the Number of Guests Staying at Hotel Karlita International, Tegal, Central Java

    Directory of Open Access Journals (Sweden)

    Haryadi Sarjono

    2013-11-01

    Full Text Available This article is a comparative forecasting analysis of guest room occupancy at Hotel Karlita International, Tegal, Central Java, using 11 forecasting methods: linear regression, moving average, weighted moving average, exponential smoothing, exponential smoothing with trend, naïve method, trend analysis, additive decomposition – CMA, additive decomposition – average all, multiplicative decomposition – CMA, and multiplicative decomposition – average all. The article used 17 data points from January 2012 to May 2013; across the 11 methods, the smallest MAD was 101.69 and the smallest MSE was 15,163.95. Based on the additive decomposition – average all method, the forecast of guest room occupancy at Hotel Karlita International for June 2013 is 960 guests.
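    Two of the simpler methods from that list, moving average and weighted moving average, can be compared on MAD and MSE in a few lines. The monthly series below is an invented occupancy-style example, not the hotel's data.

```python
def moving_average_forecasts(series, window=3):
    """One-step-ahead forecasts from a simple moving average."""
    return [sum(series[i - window:i]) / window
            for i in range(window, len(series))]

def weighted_moving_average_forecasts(series, weights=(1, 2, 3)):
    """Weighted moving average; the most recent value gets weight 3."""
    w, s = len(weights), sum(weights)
    return [sum(x * wt for x, wt in zip(series[i - w:i], weights)) / s
            for i in range(w, len(series))]

def mad(actual, forecast):
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(forecast)

def mse(actual, forecast):
    return sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(forecast)

# Invented upward-trending monthly series.
y = [820, 860, 900, 880, 940, 960, 1000, 980]
f_ma = moving_average_forecasts(y)
f_wma = weighted_moving_average_forecasts(y)
actual = y[3:]  # both methods start forecasting at period 4
mad_ma, mad_wma = mad(actual, f_ma), mad(actual, f_wma)
```

    On a trending series the weighted variant lags less and so scores a lower MAD, which illustrates why comparing many methods on MAD/MSE, as the article does, is worthwhile.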

  12. Numerical analysis of droplet impingement using the moving particle semi-implicit method

    International Nuclear Information System (INIS)

    Xiong, Jinbiao; Koshizuka, Seiichi; Sakai, Mikio

    2010-01-01

    Droplet impingement onto a rigid wall is simulated in two and three dimensions using the moving particle semi-implicit method. In two-dimensional calculations, the convergence is achieved and the propagation of a shockwave in a droplet is captured. The average pressure on the contact area decreases gradually after the maximum value. The numerically obtained maximum average impact pressure agrees with the Heymann correlation. A large shear stress appears at the contact edge due to jetting. A parametric study shows that the droplet diameter has only a minor effect on the pressure load due to droplet impingement. When the impingement takes place from an impact angle of π/4 rad, the pressure load and shear stress show a dependence only on the normal velocity to the wall. A comparison between the three-dimensional and two-dimensional results shows that consideration of the three-dimensional effect can decrease the average impact pressure by about 12%. (author)

  13. Movements of a Sphere Moving Over Smooth and Rough Inclines

    Science.gov (United States)

    Jan, Chyan-Deng

    1992-01-01

    The steady movements of a sphere over a rough incline in air, and over smooth and rough inclines in a liquid, were studied theoretically and experimentally. The principle of energy conservation was used to analyze the translation velocities, rolling resistances, and drag coefficients of a sphere moving over the inclines. The rolling resistance to the movement of a sphere on the rough incline was presumed to be caused by collisions and frictional sliding. A varnished wooden board was placed on the bottom of an experimental tilting flume to form a smooth incline, and a layer of spheres identical to the sphere moving over them was placed on the smooth wooden board to form a rough incline. Spheres used in the experiments were glass spheres, steel spheres, and golf balls. Experiments show that a sphere moving over a rough incline with negligible fluid drag in air can reach a constant translation velocity. This constant velocity was found to be proportional to the bed inclination (between 11° and 21°) and the square root of the sphere's diameter, but seemingly independent of the sphere's specific gravity. Two empirical coefficients in the theoretical expression of the sphere's translation velocity were determined by experiments. The collision and friction parts of the shear stress exerted on the interface between the moving sphere and the rough incline were determined. The ratio of the collision part to the friction part appears to increase with increasing bed inclination, and the two parts seem to be of the same order of magnitude. The rolling resistances, and the relations between the drag coefficient and Reynolds number, for a sphere moving over smooth and rough inclines in a liquid, such as water or salad oil, were determined by a regression analysis based on experimental data. It was found that the drag coefficient for a sphere over the rough incline is larger than that for a sphere over the smooth incline, and both are much larger than that for a sphere in free

  14. Regression analysis with categorized regression calibrated exposure: some interesting findings

    Directory of Open Access Journals (Sweden)

    Hjartåker Anette

    2006-07-01

    Full Text Available Abstract Background Regression calibration as a method for handling measurement error is becoming increasingly well-known and used in epidemiologic research. However, the standard version of the method is not appropriate for exposure analyzed on a categorical (e.g. quintile) scale, an approach commonly used in epidemiologic studies. A tempting solution could then be to use the predicted continuous exposure obtained through the regression calibration method and treat it as an approximation to the true exposure, that is, include the categorized calibrated exposure in the main regression analysis. Methods We use semi-analytical calculations and simulations to evaluate the performance of the proposed approach compared to the naive approach of not correcting for measurement error, in situations where analyses are performed on the quintile scale and when incorporating the original scale into the categorical variables, respectively. We also present analyses of real data, containing measures of folate intake and depression, from the Norwegian Women and Cancer study (NOWAC). Results In cases where extra information is available through replicated measurements and not validation data, regression calibration does not maintain important qualities of the true exposure distribution, thus estimates of variance and percentiles can be severely biased. We show that the outlined approach maintains much, in some cases all, of the misclassification found in the observed exposure. For that reason, regression analysis with the corrected variable included on a categorical scale is still biased. In some cases the corrected estimates are analytically equal to those obtained by the naive approach. Regression calibration is however vastly superior to the naive method when applying the medians of each category in the analysis. Conclusion Regression calibration in its most well-known form is not appropriate for measurement error correction when the exposure is analyzed on a
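The distributional distortion described in the Results can be seen in a small simulation. The sketch below (replicate-based calibration with assumed normal exposure and error, two replicates per subject) is purely illustrative and is not the NOWAC analysis:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x = rng.normal(0.0, 1.0, n)         # true exposure
w1 = x + rng.normal(0.0, 1.0, n)    # replicate 1, classical measurement error
w2 = x + rng.normal(0.0, 1.0, n)    # replicate 2

# Regression calibration from replicates: E[x | wbar] = mu + lam * (wbar - mu)
wbar = (w1 + w2) / 2
var_err = np.var(w1 - w2) / 2                       # error variance, from replicates
lam = (np.var(wbar) - var_err / 2) / np.var(wbar)   # reliability of the replicate mean
x_hat = wbar.mean() + lam * (wbar - wbar.mean())    # calibrated exposure

# The calibrated exposure is shrunk toward the mean: its variance understates
# the true exposure variance, so quintile cut points computed from it are
# distorted relative to the true distribution.
print(np.var(x), np.var(x_hat))
```

This shrinkage is why categorizing the calibrated exposure can retain much of the original misclassification.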

  15. Application of Geostatistical Methods and Wavelets to the Analysis of Hyperspectral Imagery and the Testing of a Moving Variogram

    National Research Council Canada - National Science Library

    2000-01-01

    This is a first report of the project. It incorporates the report on an analysis completed in the previous project on moving averages, variances and variograms for NIR from a SPOT image of part of Fort A. P. Hill...

  16. Comparison of stochastic and regression based methods for quantification of predictive uncertainty of model-simulated wellhead protection zones in heterogeneous aquifers

    DEFF Research Database (Denmark)

    Christensen, Steen; Moore, C.; Doherty, J.

    2006-01-01

    For a synthetic case we computed three types of individual prediction intervals for the location of the aquifer entry point of a particle that moves through a heterogeneous aquifer and ends up in a pumping well. (a) The nonlinear regression-based interval (Cooley, 2004) was found to be nearly accurate and required a few hundred model calls to be computed. (b) The linearized regression-based interval (Cooley, 2004) required just over a hundred model calls and also appeared to be nearly correct. (c) The calibration-constrained Monte-Carlo interval (Doherty, 2003) was found to be narrower than the regression-based intervals but required about half a million model calls. It is unclear whether or not this type of prediction interval is accurate.

  17. Linear Multivariable Regression Models for Prediction of Eddy Dissipation Rate from Available Meteorological Data

    Science.gov (United States)

    MCKissick, Burnell T. (Technical Monitor); Plassman, Gerald E.; Mall, Gerald H.; Quagliano, John R.

    2005-01-01

    Linear multivariable regression models for predicting day and night Eddy Dissipation Rate (EDR) from available meteorological data sources are defined and validated. Model definition is based on a combination of 1997-2000 Dallas/Fort Worth (DFW) data sources, EDR from Aircraft Vortex Spacing System (AVOSS) deployment data, and regression variables primarily from corresponding Automated Surface Observation System (ASOS) data. Model validation is accomplished through EDR predictions on a similar combination of 1994-1995 Memphis (MEM) AVOSS and ASOS data. Model forms include an intercept plus a single term of fixed optimal power for each of these regression variables: 30-minute forward averaged mean and variance of near-surface wind speed and temperature, variance of wind direction, and a discrete cloud cover metric. Distinct day and night models, regressing on EDR and the natural log of EDR respectively, yield the best performance and avoid model discontinuity over day/night data boundaries.
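A model of the stated form (intercept plus one fixed-power term per regressor) reduces to ordinary least squares once the powers are chosen. The sketch below uses synthetic data; the powers, coefficients, and variable names are illustrative assumptions, not the fitted DFW model:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
# Hypothetical regressors loosely mirroring the ASOS-derived variables
wind_mean = rng.uniform(1.0, 10.0, n)
wind_var = rng.uniform(0.1, 2.0, n)
temp_var = rng.uniform(0.1, 1.0, n)

# Assumed fixed "optimal" powers, one per regressor (illustrative values)
X = np.column_stack([
    np.ones(n),          # intercept
    wind_mean ** 1.5,
    wind_var ** 0.5,
    temp_var ** 1.0,
])
# Synthetic EDR generated from the same model form plus noise
edr = 0.02 * X[:, 1] + 0.10 * X[:, 2] + 0.05 * X[:, 3] + rng.normal(0, 0.01, n)

coef, *_ = np.linalg.lstsq(X, edr, rcond=None)   # intercept + one term per variable
print(coef)
```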

  18. Modeling of the Monthly Rainfall-Runoff Process Through Regressions

    Directory of Open Access Journals (Sweden)

    Campos-Aranda Daniel Francisco

    2014-10-01

    Full Text Available To solve the problems associated with the assessment of the water resources of a river, modeling of the rainfall-runoff process (RRP) allows the deduction of missing runoff data and the extension of its record, since the information available on precipitation is generally larger. It also enables the estimation of inputs to reservoirs when their construction led to the suppression of the gauging station. The simplest mathematical model that can be set up for the RRP is the linear regression, or curve, on a monthly basis. Such a model is described in detail and is calibrated with the simultaneous record of monthly rainfall and runoff at the Ballesmi hydrometric station, which covers 35 years. Since the runoff of this station has an important contribution from spring discharge, the record is first corrected by removing that contribution. To do this, a procedure was developed based either on the monthly average regional runoff coefficients or on a nearby and similar watershed; in this case the Tancuilín gauging station was used. Both stations belong to Partial Hydrologic Region No. 26 (Lower Rio Panuco) and are located within the state of San Luis Potosí, México. The study performed indicates that the monthly regression model, due to its conceptual approach, faithfully reproduces monthly average runoff volumes and achieves an excellent approximation in relation to the dispersion, proved by calculation of the means and standard deviations.
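Calibrating such a monthly linear regression can be sketched on synthetic rainfall and runoff series; all distributions and coefficients below are assumptions, not the Ballesmi record:

```python
import numpy as np

rng = np.random.default_rng(7)
months = 35 * 12                                    # a 35-year simultaneous record
rainfall = rng.gamma(2.0, 50.0, months)             # monthly rainfall, mm (assumed)
runoff = 5.0 + 0.30 * rainfall + rng.normal(0, 8, months)  # assumed linear RRP

# Calibrate the monthly linear regression: runoff = a + b * rainfall
b, a = np.polyfit(rainfall, runoff, 1)
predicted = a + b * rainfall

# An OLS fit reproduces the mean runoff volume exactly, the property
# the abstract emphasizes for monthly average volumes
print(a, b, runoff.mean(), predicted.mean())
```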

  19. PARALLEL MOVING MECHANICAL SYSTEMS

    Directory of Open Access Journals (Sweden)

    Florian Ion Tiberius Petrescu

    2014-09-01

    Full Text Available Moving mechanical systems with parallel structures are solid, fast, and accurate. Among parallel systems, the Stewart platform stands out as the oldest: fast, solid, and precise. This work outlines a few main elements of Stewart platforms, beginning with the platform geometry and its kinematic elements, and then presenting a few items of dynamics. The primary dynamic element is the determination of the kinetic energy of the entire Stewart platform. The kinematics of the mobile platform is then recorded using a rotation-matrix method. If a structural motor element consists of two moving elements that translate relative to each other, it is more convenient, for the drive train and especially for the dynamics, to represent the motor element as a single moving component. We thus have seven moving parts: the six motor elements (the feet), to which the mobile platform is added as the seventh, plus one fixed part.

  20. On the Relationship Between Confidence Sets and Exchangeable Weights in Multiple Linear Regression.

    Science.gov (United States)

    Pek, Jolynn; Chalmers, R Philip; Monette, Georges

    2016-01-01

    When statistical models are employed to provide a parsimonious description of empirical relationships, the extent to which strong conclusions can be drawn rests on quantifying the uncertainty in parameter estimates. In multiple linear regression (MLR), regression weights carry two kinds of uncertainty represented by confidence sets (CSs) and exchangeable weights (EWs). Confidence sets quantify uncertainty in estimation whereas the set of EWs quantify uncertainty in the substantive interpretation of regression weights. As CSs and EWs share certain commonalities, we clarify the relationship between these two kinds of uncertainty about regression weights. We introduce a general framework describing how CSs and the set of EWs for regression weights are estimated from the likelihood-based and Wald-type approach, and establish the analytical relationship between CSs and sets of EWs. With empirical examples on posttraumatic growth of caregivers (Cadell et al., 2014; Schneider, Steele, Cadell & Hemsworth, 2011) and on graduate grade point average (Kuncel, Hezlett & Ones, 2001), we illustrate the usefulness of CSs and EWs for drawing strong scientific conclusions. We discuss the importance of considering both CSs and EWs as part of the scientific process, and provide an Online Appendix with R code for estimating Wald-type CSs and EWs for k regression weights.

  1. Linear regression metamodeling as a tool to summarize and present simulation model results.

    Science.gov (United States)

    Jalal, Hawre; Dowd, Bryan; Sainfort, François; Kuntz, Karen M

    2013-10-01

    Modelers lack a tool to systematically and clearly present complex model results, including those from sensitivity analyses. The objective was to propose linear regression metamodeling as a tool to increase transparency of decision analytic models and better communicate their results. We used a simplified cancer cure model to demonstrate our approach. The model computed the lifetime cost and benefit of 3 treatment options for cancer patients. We simulated 10,000 cohorts in a probabilistic sensitivity analysis (PSA) and regressed the model outcomes on the standardized input parameter values in a set of regression analyses. We used the regression coefficients to describe measures of sensitivity analyses, including threshold and parameter sensitivity analyses. We also compared the results of the PSA to deterministic full-factorial and one-factor-at-a-time designs. The regression intercept represented the estimated base-case outcome, and the other coefficients described the relative parameter uncertainty in the model. We defined simple relationships that compute the average and incremental net benefit of each intervention. Metamodeling produced outputs similar to traditional deterministic 1-way or 2-way sensitivity analyses but was more reliable since it used all parameter values. Linear regression metamodeling is a simple, yet powerful, tool that can assist modelers in communicating model characteristics and sensitivity analyses.
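The core idea, regressing PSA outcomes on standardized inputs so that the intercept estimates the base-case outcome and the slopes rank parameter importance, can be sketched as follows. The toy decision model and all numbers are assumptions, not the cancer cure model from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10_000
# Hypothetical PSA draws for two input parameters of a decision model
p_cure = rng.beta(20, 80, n)           # probability of cure
cost_tx = rng.normal(5000, 500, n)     # treatment cost

# Toy model output: net benefit at some willingness-to-pay (illustrative only)
nb = 100_000 * p_cure - cost_tx + rng.normal(0, 100, n)

# Metamodel: regress the outcome on standardized inputs
Z = np.column_stack([np.ones(n),
                     (p_cure - p_cure.mean()) / p_cure.std(),
                     (cost_tx - cost_tx.mean()) / cost_tx.std()])
coef, *_ = np.linalg.lstsq(Z, nb, rcond=None)
# coef[0] estimates the base-case (mean) outcome; |coef[1:]| rank the
# relative contribution of each parameter's uncertainty
print(coef)
```

Because the regressors are mean-centered, the intercept equals the mean simulated outcome, which is what makes the metamodel a compact summary of the PSA.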

  2. An Indoor Continuous Positioning Algorithm on the Move by Fusing Sensors and Wi-Fi on Smartphones

    Directory of Open Access Journals (Sweden)

    Huaiyu Li

    2015-12-01

    Full Text Available Wi-Fi indoor positioning algorithms experience large positioning error and low stability when continuously positioning terminals that are on the move. This paper proposes a novel indoor continuous positioning algorithm for terminals on the move, fusing sensors and Wi-Fi on smartphones. The main innovative points include an improved Wi-Fi positioning algorithm and a novel positioning fusion algorithm named the Trust Chain Positioning Fusion (TCPF) algorithm. The improved Wi-Fi positioning algorithm was designed based on the properties of Wi-Fi signals on the move, which were found in a novel “quasi-dynamic” Wi-Fi signal experiment. The TCPF algorithm is proposed to realize the “process-level” fusion of Wi-Fi and Pedestrian Dead Reckoning (PDR) positioning, and includes three parts: trusted point determination, trust state, and the positioning fusion algorithm. An experiment is carried out for verification in a typical indoor environment, and the average positioning error on the move is 1.36 m, a decrease of 28.8% compared to an existing algorithm. The results show that the proposed algorithm can effectively reduce the influence caused by unstable Wi-Fi signals, and improve the accuracy and stability of indoor continuous positioning on the move.

  3. The importance of the sampling frequency in determining short-time-averaged irradiance and illuminance for rapidly changing cloud cover

    International Nuclear Information System (INIS)

    Delaunay, J.J.; Rommel, M.; Geisler, J.

    1994-01-01

    The sampling interval is an important parameter which must be chosen carefully if measurements of the direct, global, and diffuse irradiance or illuminance are carried out to determine their averages over a given period. Using measurements from a day with rapidly moving clouds, we investigated the influence of the sampling interval on the uncertainty of the calculated 15-min averages. We conclude, for this averaging period, that the sampling interval should not exceed 60 s and 10 s for measurement of the diffuse and global components respectively, to reduce the influence of the sampling interval below 2%. For the direct component, even a 5 s sampling interval is too long to reach this influence level for days with extremely quickly changing insolation conditions. (author)
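The effect of the sampling interval on a 15-min average can be illustrated with a synthetic irradiance trace; the two-level cloud model and all numbers below are assumptions, not the measured data:

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic 1 s global irradiance trace for one 15 min period (900 s):
# 30 s cloud passages switching between shaded (200 W/m²) and clear (800 W/m²)
cloudy = rng.random(30) < 0.5            # one sky state per 30 s block
base = np.where(cloudy.repeat(30), 200.0, 800.0)
signal = base + rng.normal(0, 20, 900)

true_avg = signal.mean()                 # reference: averaging every 1 s sample
for dt in (10, 60, 300):                 # coarser sampling intervals, in seconds
    est = signal[::dt].mean()
    print(dt, abs(est - true_avg) / true_avg)  # relative error of the estimate
```

Sparser sampling leaves fewer samples per cloud passage, so the estimated average becomes increasingly sensitive to which instants happen to be sampled.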

  4. The Spin Move: A Reliable and Cost-Effective Gowning Technique for the 21st Century.

    Science.gov (United States)

    Ochiai, Derek H; Adib, Farshad

    2015-04-01

    Operating room efficiency (ORE) and utilization are considered among the most crucial components of quality improvement in every hospital. We introduce a new gowning technique that could optimize ORE. The Spin Move quickly and efficiently wraps a surgical gown around the surgeon's body, saving the operative time expended by traditional gowning techniques. In the Spin Move, while the surgeon is approaching the scrub nurse, he or she uses the left heel as the fulcrum. The torque generated by twisting the right leg around the left leg helps the surgeon close the gown as quickly and safely as possible. From 2003 to 2012, the Spin Move was performed in 1,725 consecutive procedures with no complication. The estimated average time was 5.3 and 7.8 seconds for the Spin Move and traditional gowning, respectively. The estimated time saving for the senior author during this period was 71.875 minutes. Approximately 20,000 orthopaedic surgeons practice in the United States. If this technique had been used, 23,958 hours could have been saved, and the monetary savings could have been $14,374,800.00 (23,958 hours × $600/operating room hour) during the past 10 years. The Spin Move is easy to perform and reproducible. It saves operating room time and increases ORE.

  5. The Moving image

    DEFF Research Database (Denmark)

    Hansen, Lennard Højbjerg

    2014-01-01

    Every day we are presented with bodily expressions in audiovisual media – by anchors, journalists and characters in films for instance. This article explores how body language in the moving image has been and can be approached in a scholarly manner.

  6. Moving to Jobs?

    OpenAIRE

    Dave Maré; Jason Timmins

    2003-01-01

    This paper examines whether New Zealand residents move from low-growth to high-growth regions, using New Zealand census data from the past three inter-censal periods (covering 1986-2001). We focus on the relationship between employment growth and migration flows to gauge the strength of the relationship and the stability of the relationship over the business cycle. We find that people move to areas of high employment growth, but that the probability of leaving a region is less strongly relate...

  7. Time-adaptive quantile regression

    DEFF Research Database (Denmark)

    Møller, Jan Kloppenborg; Nielsen, Henrik Aalborg; Madsen, Henrik

    2008-01-01

    An algorithm for time-adaptive quantile regression is presented. The algorithm is based on the simplex algorithm, and the linear optimization formulation of the quantile regression problem is given. The observations have been split to allow a direct use of the simplex algorithm. The simplex method and an updating procedure are combined into a new algorithm for time-adaptive quantile regression, which generates new solutions on the basis of the old solution, leading to savings in computation time. The suggested algorithm is tested against a static quantile regression model on a data set with wind power production, where the models combine splines and quantile regression. The comparison indicates superior performance for the time-adaptive quantile regression in all the performance parameters considered.

  8. Isolating and moving single atoms using silicon nanocrystals

    Science.gov (United States)

    Carroll, Malcolm S.

    2010-09-07

    A method is disclosed for isolating single atoms of an atomic species of interest by locating the atoms within silicon nanocrystals. This can be done by implanting, on the average, a single atom of the atomic species of interest into each nanocrystal, and then measuring an electrical charge distribution on the nanocrystals with scanning capacitance microscopy (SCM) or electrostatic force microscopy (EFM) to identify and select those nanocrystals having exactly one atom of the atomic species of interest therein. The nanocrystals with the single atom of the atomic species of interest therein can be sorted and moved using an atomic force microscope (AFM) tip. The method is useful for forming nanoscale electronic and optical devices including quantum computers and single-photon light sources.

  9. Adaptive Linear and Normalized Combination of Radial Basis Function Networks for Function Approximation and Regression

    Directory of Open Access Journals (Sweden)

    Yunfeng Wu

    2014-01-01

    Full Text Available This paper presents a novel adaptive linear and normalized combination (ALNC) method that can be used to combine the component radial basis function networks (RBFNs) to implement better function approximation and regression tasks. The optimization of the fusion weights is obtained by solving a constrained quadratic programming problem. According to the instantaneous errors generated by the component RBFNs, the ALNC is able to perform the selective ensemble of multiple learners by adaptively adjusting the fusion weights from one instance to another. The results of the experiments on eight synthetic function approximation and six benchmark regression data sets show that the ALNC method can effectively help the ensemble system achieve higher accuracy (measured in terms of mean-squared error) and better fidelity (characterized by the normalized correlation coefficient) of approximation, in relation to the popular simple average, weighted average, and Bagging methods.
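As a simplified stand-in for the constrained quadratic program that optimizes the fusion weights, the sketch below solves the equality-constrained least-squares problem (weights summing to one) in closed form via a Lagrange multiplier; the component "learners" are just noisy copies of the target, and the per-instance adaptivity of ALNC is not modeled:

```python
import numpy as np

def fuse_weights(P, y):
    """Minimize ||P @ w - y||^2 subject to sum(w) = 1 (Lagrangian closed form).

    P: (n_samples, n_learners) component predictions; y: target values.
    """
    A = P.T @ P
    b = P.T @ y
    ones = np.ones(P.shape[1])
    Ainv_b = np.linalg.solve(A, b)
    Ainv_1 = np.linalg.solve(A, ones)
    lam = (ones @ Ainv_b - 1.0) / (ones @ Ainv_1)   # multiplier for the constraint
    return Ainv_b - lam * Ainv_1

rng = np.random.default_rng(4)
y = rng.normal(0, 1, 200)
# Three component predictors with increasing noise levels
P = np.column_stack([y + rng.normal(0, s, 200) for s in (0.1, 0.5, 1.0)])
w = fuse_weights(P, y)
print(w, w.sum())   # weights sum to 1
```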

  10. Time series forecasting using ERNN and QR based on Bayesian model averaging

    Science.gov (United States)

    Pwasong, Augustine; Sathasivam, Saratha

    2017-08-01

    Bayesian model averaging is a multi-model combination technique. Here it was employed to amalgamate the Elman recurrent neural network (ERNN) technique with the quadratic regression (QR) technique, producing a hybrid known as the ERNN-QR technique. The forecasting potential of the hybrid technique is compared with the forecasting capabilities of the individual ERNN and QR techniques. The outcome revealed that the hybrid technique is superior to the individual techniques in the mean square error sense.

  11. Averaging models: parameters estimation with the R-Average procedure

    Directory of Open Access Journals (Sweden)

    S. Noventa

    2010-01-01

    Full Text Available The Functional Measurement approach, proposed within the theoretical framework of Information Integration Theory (Anderson, 1981, 1982), can be a useful multi-attribute analysis tool. Compared to the majority of statistical models, the averaging model can account for interaction effects without adding complexity. The R-Average method (Vidotto & Vicentini, 2007) can be used to estimate the parameters of these models. By the use of multiple information criteria in the model selection procedure, R-Average allows for the identification of the best subset of parameters that account for the data. After a review of the general method, we present an implementation of the procedure in the framework of R-project, followed by some experiments using a Monte Carlo method.

  12. A comparative study on generating simulated Landsat NDVI images using data fusion and regression method-the case of the Korean Peninsula.

    Science.gov (United States)

    Lee, Mi Hee; Lee, Soo Bong; Eo, Yang Dam; Kim, Sun Woong; Woo, Jung-Hun; Han, Soo Hee

    2017-07-01

    Landsat optical images have enough spatial and spectral resolution to analyze vegetation growth characteristics. However, clouds and water vapor quite often degrade the image quality, which limits the availability of usable images for time-series vegetation vitality measurement. To overcome this shortcoming, simulated images are used as an alternative. In this study, the weighted average method, the spatial and temporal adaptive reflectance fusion model (STARFM) method, and the multilinear regression analysis method have been tested to produce simulated Landsat normalized difference vegetation index (NDVI) images of the Korean Peninsula. The test results showed that the weighted average method produced the images most similar to the actual images, provided that the images were available within 1 month before and after the target date. The STARFM method gives good results when the input image date is close to the target date. Careful regional and seasonal consideration is required in selecting input images. During the summer season, due to clouds, it is very difficult to get images close enough to the target date. Multilinear regression analysis gives meaningful results even when the input image date is not so close to the target date. Average R² values for the weighted average method, STARFM, and multilinear regression analysis were 0.741, 0.70, and 0.61, respectively.
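A weighted average method of this kind is often formulated by weighting each input image by the inverse of its temporal distance to the target date. The sketch below assumes that formulation (it is not necessarily the study's exact scheme) and uses toy 4×4 NDVI arrays:

```python
import numpy as np

def weighted_average(img_before, img_after, d_before, d_after):
    """Blend two NDVI images, weighting each by the inverse of its day
    offset from the target date (closer image gets the larger weight)."""
    w_b, w_a = 1.0 / d_before, 1.0 / d_after
    return (w_b * img_before + w_a * img_after) / (w_b + w_a)

before = np.full((4, 4), 0.30)    # NDVI image 10 days before the target date
after = np.full((4, 4), 0.60)     # NDVI image 20 days after the target date
sim = weighted_average(before, after, 10, 20)
print(sim[0, 0])   # 0.4: the closer-in-time image receives twice the weight
```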

  13. Minimum Delay Moving Object Detection

    KAUST Repository

    Lao, Dong

    2017-11-09

    We present a general framework and method for detection of an object in a video based on apparent motion. The object moves relative to the background motion at some unknown time in the video, and the goal is to detect and segment the object as soon as it moves, in an online manner. Due to the unreliability of motion between frames, more than two frames are needed to reliably detect the object. Our method is designed to detect the object(s) with minimum delay, i.e., frames after the object moves, while constraining the false alarms. Experiments on a new extensive dataset for moving object detection show that our method achieves less delay for all false alarm constraints than existing state-of-the-art methods.

  16. Minimum Delay Moving Object Detection

    KAUST Repository

    Lao, Dong

    2017-05-14

    This thesis presents a general framework and method for detection of an object in a video based on apparent motion. The object moves, at some unknown time, differently than the “background” motion, which can be induced by camera motion. The goal of the proposed method is to detect and segment the object as soon as it moves, in an online manner. Since motion estimation can be unreliable between frames, more than two frames are needed to reliably detect the object. Observing more frames before declaring a detection may lead to a more accurate detection and segmentation, since more motion may be observed, leading to a stronger motion cue. However, this leads to greater delay. The proposed method is designed to detect the object(s) with minimum delay, i.e., frames after the object moves, constraining the false alarms, defined as declarations of detection before the object moves or incorrect or inaccurate segmentation at the detection time. Experiments on a new extensive dataset for moving object detection show that our method achieves less delay for all false alarm constraints than existing state-of-the-art methods.

  17. Regression analysis by example

    CERN Document Server

    Chatterjee, Samprit

    2012-01-01

    Praise for the Fourth Edition: ""This book is . . . an excellent source of examples for regression analysis. It has been and still is readily readable and understandable."" -Journal of the American Statistical Association Regression analysis is a conceptually simple method for investigating relationships among variables. Carrying out a successful application of regression analysis, however, requires a balance of theoretical results, empirical rules, and subjective judgment. Regression Analysis by Example, Fifth Edition has been expanded

  18. Applied logistic regression

    CERN Document Server

    Hosmer, David W; Sturdivant, Rodney X

    2013-01-01

     A new edition of the definitive guide to logistic regression modeling for health science and other applications This thoroughly expanded Third Edition provides an easily accessible introduction to the logistic regression (LR) model and highlights the power of this model by examining the relationship between a dichotomous outcome and a set of covariables. Applied Logistic Regression, Third Edition emphasizes applications in the health sciences and handpicks topics that best suit the use of modern statistical software. The book provides readers with state-of-

  19. Non-sky-averaged sensitivity curves for space-based gravitational-wave observatories

    International Nuclear Information System (INIS)

    Vallisneri, Michele; Galley, Chad R

    2012-01-01

    The average sensitivity for the isotropic population remains the same whether or not the LISA orbits are included in the computation. However, detector motion tightens the distribution of sensitivities, so for 50% of sources the sensitivity is within 30% of its average. For the Galactic-disk population, the average and the distribution of the sensitivity for a moving detector turn out to be similar to the isotropic case. (paper)

  20. Study of the average charge states of 188Pb and 252,254No ions at the gas-filled separator TASCA

    International Nuclear Information System (INIS)

    Khuyagbaatar, J.; Ackermann, D.; Andersson, L.-L.; Ballof, J.; Brüchle, W.; Düllmann, Ch.E.; Dvorak, J.; Eberhardt, K.; Even, J.; Gorshkov, A.; Graeger, R.; Heßberger, F.-P.; Hild, D.; Hoischen, R.; Jäger, E.; Kindler, B.

    2012-01-01

    The average charge states of 188Pb and 252,254No ions in dilute helium gas were measured at the gas-filled recoil separator TASCA. Hydrogen gas was also used as a filling gas for measurements of the average charge state of 254No. Helium and hydrogen gases at pressures from 0.2 mbar to 2.0 mbar were used. A strong dependence of the average charge state on the pressure of the filling gas was observed for both helium and hydrogen. The influence of this dependence, classically attributed to the so-called “density effect”, on the performance of TASCA was investigated. The average charge states of 254No ions were also measured in mixtures of helium and hydrogen gases at low gas pressures around 1.0 mbar. From the experimental results, simple expressions were derived for the prediction of the average charge states of heavy ions moving in rarefied helium gas, hydrogen gas, and their mixture.

  1. Error Analysis of Fast Moving Target Geo-location in Wide Area Surveillance Ground Moving Target Indication Mode

    Directory of Open Access Journals (Sweden)

    Zheng Shi-chao

    2013-12-01

    Full Text Available As an important mode in airborne radar systems, the Wide Area Surveillance Ground Moving Target Indication (WAS-GMTI) mode has the ability to monitor a large area in a short time, so that detected moving targets can be located quickly. However, in a real environment, many factors introduce considerable errors into the location of moving targets. In this paper, a fast location method based on the characteristics of the moving targets in WAS-GMTI mode is utilized. In order to improve the location performance, the factors that introduce location errors are analyzed and moving targets are relocated. Finally, the analysis of those factors is proved to be reasonable by simulations and real data experiments.

  2. Undergraduate grade point average is a poor predictor of scientific productivity later in career.

    Science.gov (United States)

    Polasek, Ozren; Mavrinac, Martina; Jović, Alan; Dzono Boban, Ankica; Biocina-Lukenda, Dolores; Glivetić, Tatjana; Vasilj, Ivan; Petrovecki, Mladen

    2010-03-01

    The aim of this study was to investigate the usefulness of the undergraduate grade point average in predicting the scientific production of research trainees during their fellowship and later in their career. The study was performed on 1320 research trainees whose fellowships from the Croatian Ministry of Science, Education and Sports were terminated between 1999 and 2005. The data were analyzed using logistic regression. The results indicated that the undergraduate grade point average was negatively associated with scientific productivity both during and after the fellowship. Other indicators, such as undergraduate scientific productivity, exhibited a much stronger positive association with scientific productivity later in the career and should be given more weight in the candidate selection process in science and research.
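A logistic regression of this kind can be sketched on synthetic data. The data-generating coefficients below merely echo the reported signs (negative for GPA, positive for undergraduate productivity) and are assumptions, as is the Newton-Raphson fitting loop; this is not the Croatian trainee data:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 1320
gpa = rng.normal(4.0, 0.5, n)    # hypothetical undergraduate grade point averages
pubs = rng.poisson(1.0, n)       # hypothetical undergraduate publication counts

# Assumed data-generating process: GPA mildly negative, productivity positive
logit = -1.0 - 0.3 * (gpa - 4.0) + 0.8 * pubs
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))   # "productive later" indicator

# Fit logistic regression by Newton-Raphson on [intercept, gpa, pubs]
X = np.column_stack([np.ones(n), gpa, pubs])
beta = np.zeros(3)
for _ in range(25):
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    W = p * (1.0 - p)                               # IRLS weights
    beta += np.linalg.solve(X.T @ (X * W[:, None]), X.T @ (y - p))
print(beta)   # slopes should land near the generating values (-0.3, 0.8)
```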

  3. Normalization Ridge Regression in Practice I: Comparisons Between Ordinary Least Squares, Ridge Regression and Normalization Ridge Regression.

    Science.gov (United States)

    Bulcock, J. W.

    The problem of model estimation when the data are collinear was examined. Though ridge regression (RR) outperforms ordinary least squares (OLS) regression in the presence of acute multicollinearity, it is not a problem-free technique for reducing the variance of the estimates. It is a stochastic procedure when it should be nonstochastic and it…

  4. 30 CFR 56.14107 - Moving machine parts.

    Science.gov (United States)

    2010-07-01

    ... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Moving machine parts. 56.14107 Section 56.14107... Safety Devices and Maintenance Requirements § 56.14107 Moving machine parts. (a) Moving machine parts... takeup pulleys, flywheels, couplings, shafts, fan blades, and similar moving parts that can cause injury...

  5. Vector regression introduced

    Directory of Open Access Journals (Sweden)

    Mok Tik

    2014-06-01

    Full Text Available This study formulates regression of vector data that will enable statistical analysis of various geodetic phenomena such as polar motion, ocean currents, typhoon/hurricane tracking, crustal deformations, and precursory earthquake signals. The observed vector variable of an event (the dependent vector variable) is expressed as a function of a number of hypothesized phenomena realized also as vector variables (independent vector variables) and/or scalar variables that are likely to impact the dependent vector variable. The proposed representation has the unique property of solving the coefficients of independent vector variables (explanatory variables) also as vectors, hence it supersedes multivariate multiple regression models, in which the unknown coefficients are scalar quantities. For the solution, complex numbers are used to represent vector information, and the method of least squares is deployed to estimate the vector model parameters after transforming the complex vector regression model into a real vector regression model through isomorphism. Various operational statistics for testing the predictive significance of the estimated vector parameter coefficients are also derived. A simple numerical example demonstrates the use of the proposed vector regression analysis in modeling typhoon paths.
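    The core idea above, encoding 2-D vectors as complex numbers so that the regression coefficients are themselves vectors (rotations plus scalings), can be sketched as follows. This is a hypothetical illustration on synthetic data, not the author's implementation; NumPy's least squares solver handles the complex-to-real isomorphism internally.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    n = 200
    # Independent vector variable (e.g., a wind or current vector) encoded as z = x + iy
    u = rng.normal(size=n) + 1j * rng.normal(size=n)

    # True complex coefficients: intercept b0 and slope b1 (scale 1.2, rotate 30 degrees)
    b0_true = 0.5 - 0.2j
    b1_true = 1.2 * np.exp(1j * np.pi / 6)

    noise = 0.05 * (rng.normal(size=n) + 1j * rng.normal(size=n))
    z = b0_true + b1_true * u + noise  # dependent vector variable

    # Complex least squares: each fitted coefficient is itself a vector
    A = np.column_stack([np.ones(n, dtype=complex), u])
    coef, *_ = np.linalg.lstsq(A, z, rcond=None)
    print(np.round(coef, 3))
    ```

    The slope coefficient recovered here both rescales and rotates the explanatory vector, which a scalar coefficient in an ordinary multivariate regression cannot do.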

  6. An Optimization of Inventory Demand Forecasting in University Healthcare Centre

    Science.gov (United States)

    Bon, A. T.; Ng, T. K.

    2017-01-01

    The healthcare industry has become an important field nowadays as it concerns people's health. Forecasting demand for health services is thus an important step in managerial decision making for all healthcare organizations. Hence, a case study was conducted in a University Health Centre to collect historical demand data of Panadol 650 mg for 68 months, from January 2009 until August 2014. The aim of the research is to optimize overall inventory demand through forecasting techniques. A quantitative, time series forecasting model was used in the case study to forecast future data as a function of past data. The data pattern needs to be identified before applying the forecasting techniques; here the data exhibit a trend, and ten forecasting techniques were applied using Risk Simulator software. The best forecasting technique is then identified as the one with the smallest forecasting error. The ten forecasting techniques are single moving average, single exponential smoothing, double moving average, double exponential smoothing, regression, Holt-Winters additive, seasonal additive, Holt-Winters multiplicative, seasonal multiplicative and autoregressive integrated moving average (ARIMA). According to the forecasting accuracy measurement, the best forecasting technique is regression analysis.
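    As a rough illustration of the comparison described above, three of the ten techniques can be scored on one-step-ahead mean absolute error. Everything here is an assumption for the sketch: the synthetic trended series stands in for the demand data, the window and smoothing parameters are arbitrary, and Risk Simulator is not used.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    t = np.arange(68)
    demand = 100 + 2.0 * t + rng.normal(0, 5, size=68)  # upward trend + noise
    test = demand[60:]  # hold out the last 8 months

    # 1) Single moving average: forecast = mean of the last `window` observations
    def sma_forecast(history, window=3):
        return np.mean(history[-window:])

    # 2) Single exponential smoothing with a fixed alpha
    def ses_forecast(history, alpha=0.3):
        level = history[0]
        for y in history[1:]:
            level = alpha * y + (1 - alpha) * level
        return level

    # 3) Linear trend regression: fit y = a + b*t, extrapolate one step
    def reg_forecast(history):
        x = np.arange(len(history))
        b, a = np.polyfit(x, history, 1)
        return a + b * len(history)

    errors = {"SMA": [], "SES": [], "Regression": []}
    for i in range(len(test)):
        history = demand[: 60 + i]
        errors["SMA"].append(abs(test[i] - sma_forecast(history)))
        errors["SES"].append(abs(test[i] - ses_forecast(history)))
        errors["Regression"].append(abs(test[i] - reg_forecast(history)))

    mae = {k: np.mean(v) for k, v in errors.items()}
    print(mae)  # on trended data, the trend-aware regression should typically win
    ```

    On a trending series the moving average and exponential smoothing forecasts lag behind the trend, which is consistent with regression coming out best in the study.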

  7. People on the Move

    Science.gov (United States)

    Mohan, Audrey

    2018-01-01

    The purpose of this 2-3 day lesson is to introduce students in Grades 2-4 to the idea that people move around the world for a variety of reasons. In this activity, students explore why people move through class discussion, a guided reading, and interviews. The teacher elicits student ideas using the compelling question (Dimension 1 of the C3…

  8. Price Sensitivity of Demand for Prescription Drugs: Exploiting a Regression Kink Design

    DEFF Research Database (Denmark)

    Simonsen, Marianne; Skipper, Lars; Skipper, Niels

    This paper investigates price sensitivity of demand for prescription drugs using drug purchase records for a 20% random sample of the Danish population. We identify price responsiveness by exploiting exogenous variation in prices caused by kinked reimbursement schemes and implement a regression kink design. Thus, within a unifying framework we uncover price sensitivity for different subpopulations and types of drugs. The results suggest low average price responsiveness, with corresponding price elasticities ranging from -0.08 to -0.25, implying that demand is inelastic. Individuals with lower education and income are, however, more responsive to the price. Also, essential drugs that prevent deterioration in health and prolong life have lower associated average price sensitivity.

  9. Applied linear regression

    CERN Document Server

    Weisberg, Sanford

    2013-01-01

    Praise for the Third Edition: "…this is an excellent book which could easily be used as a course text…" --International Statistical Institute The Fourth Edition of Applied Linear Regression provides a thorough update of the basic theory and methodology of linear regression modeling. Demonstrating the practical applications of linear regression analysis techniques, the Fourth Edition uses interesting, real-world exercises and examples. Stressing central concepts such as model building, understanding parameters, assessing fit and reliability, and drawing conclusions, the new edition illus

  10. Automatic assessment of average diaphragm motion trajectory from 4DCT images through machine learning.

    Science.gov (United States)

    Li, Guang; Wei, Jie; Huang, Hailiang; Gaebler, Carl Philipp; Yuan, Amy; Deasy, Joseph O

    2015-12-01

    To automatically estimate the average diaphragm motion trajectory (ADMT) based on four-dimensional computed tomography (4DCT), facilitating clinical assessment of respiratory motion and motion variation and retrospective motion study. We have developed an effective motion extraction approach and a machine-learning-based algorithm to estimate the ADMT. Eleven patients with 22 sets of 4DCT images (4DCT1 at simulation and 4DCT2 at treatment) were studied. After automatically segmenting the lungs, the differential volume-per-slice (dVPS) curves of the left and right lungs were calculated as a function of slice number for each phase with respect to full exhalation. After a 5-slice moving average was performed, the discrete cosine transform (DCT) was applied to analyze the dVPS curves in the frequency domain. The dimensionality of the spectrum data was reduced by using the several lowest frequency coefficients f_v that account for most of the spectrum energy (Σf_v²). The multiple linear regression (MLR) method was then applied to determine the weights of these frequencies by fitting the ground truth, the measured ADMT, represented by three pivot points of the diaphragm on each side. The leave-one-out cross-validation method was employed to analyze the statistical performance of the prediction results in three image sets: 4DCT1, 4DCT2, and 4DCT1 + 4DCT2. The seven lowest frequencies in the DCT domain were found to be sufficient to approximate the patient dVPS curves (R = 91%-96% in MLR fitting). The mean error in the predicted ADMT using the leave-one-out method was 0.3 ± 1.9 mm for the left-side diaphragm and 0.0 ± 1.4 mm for the right-side diaphragm. The prediction error is lower in 4DCT2 than in 4DCT1, and is lowest in 4DCT1 and 4DCT2 combined. This frequency-analysis-based machine learning technique was employed to predict the ADMT automatically with an acceptable error (0.2 ± 1.6 mm). This volumetric approach is not affected by the presence of the lung tumors
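    The pipeline described (5-slice moving average, DCT, keep the lowest frequencies, multiple linear regression) can be sketched on synthetic curves. Everything below is illustrative: the curves, the stand-in "pivot position" target, and the hand-rolled orthonormal DCT-II are assumptions, not the authors' data or code.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def dct2(x):
        """Orthonormal DCT-II, written out directly from its definition."""
        n = len(x)
        k = np.arange(n)[:, None]
        i = np.arange(n)[None, :]
        basis = np.cos(np.pi * k * (2 * i + 1) / (2 * n))
        scale = np.where(k == 0, np.sqrt(1.0 / n), np.sqrt(2.0 / n))
        return (scale * basis) @ x

    def features(curve, n_coef=7):
        smoothed = np.convolve(curve, np.ones(5) / 5, mode="valid")  # 5-slice MA
        return dct2(smoothed)[:n_coef]  # keep the 7 lowest frequencies

    # 40 synthetic dVPS-like curves; the target is a stand-in pivot position
    # that depends linearly on the two smooth shape amplitudes
    slices = np.linspace(0.0, 1.0, 80)
    X, y = [], []
    for _ in range(40):
        a = rng.normal(1.0, 0.2)   # main lobe amplitude
        c = rng.normal(0.3, 0.1)   # secondary lobe amplitude
        curve = a * np.sin(np.pi * slices) + c * np.sin(3 * np.pi * slices)
        curve += rng.normal(0.0, 0.02, size=slices.size)
        X.append(features(curve))
        y.append(2.0 * a + 0.5 * c + rng.normal(0.0, 0.05))
    X, y = np.array(X), np.array(y)

    # Multiple linear regression from the low-frequency DCT coefficients to the target
    A = np.column_stack([np.ones(len(X)), X])
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    r = np.corrcoef(A @ w, y)[0, 1]
    print(f"in-sample correlation R = {r:.2f}")
    ```

    Because the moving average and the DCT are both linear, a target that is linear in the smooth shape of the curve is well captured by a handful of low-frequency coefficients, which is the premise of the paper's dimensionality reduction.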

  11. Mortality at older ages and moves in residential and sheltered housing: evidence from the UK

    Science.gov (United States)

    Robards, James; Evandrou, Maria; Falkingham, Jane; Vlachantoni, Athina

    2014-01-01

    Background The study examines the relationship between transitions to residential and sheltered housing and mortality. Past research has focused on housing moves over extended time periods and subsequent mortality. In this paper, annual housing transitions allow the identification of the patterning of housing moves, the duration of stay in each sector and the assessment of the relationship of preceding moves to a heightened risk of dying. Methods The study uses longitudinal data constructed from pooled observations from the British Household Panel Survey (waves 1993–2008). Records were pooled for all cases where the survey member is 65 years or over and living in private housing at baseline and observed at three consecutive time points, including baseline (N=23 727). Binary logistic regression (death as outcome three waves after baseline) explored the relative strength of different housing transitions, controlling for sociodemographic predictors. Results (1) Transition to residential housing within the previous 12 months was associated with the highest mortality risk. (2) Results support existing findings showing an interaction between marital status and mortality, whereby unmarried persons were more likely to die. (3) Higher male mortality was observed across all housing transitions. Conclusions An older person's move to residential housing is associated with a higher risk of mortality within 12 months of the move. Survivors living in residential housing for more than a year show a similar probability of dying to those living in sheltered housing. Results highlight that it is the type of accommodation that affects an older person's mortality risk, and the length of time they spend there. PMID:24638058

  12. P1-25: Filling-in the Blind Spot with the Average Direction

    Directory of Open Access Journals (Sweden)

    Sang-Ah Yoo

    2012-10-01

    Full Text Available Previous studies have shown that the visual system integrates local motions and perceives the average direction (Watamaniuk & Duchon, 1992, Vision Research, 32, 931–941). We investigated whether the surface of the blind spot is filled in with the average direction of the surrounding local motions. To test this, we varied the direction of a random-dot kinematogram (RDK) both in adaptation and test. Motion aftereffects (MAE) were defined as the difference in motion coherence thresholds with and without adaptation. The participants were initially adapted to an annular RDK surrounding the blind spot for 30 s in their dominant eye. The direction of each dot in this RDK was selected equally and randomly from a normal distribution with a mean of 15° clockwise from vertical, one with a mean of 15° counterclockwise from vertical, or a mixture of the two. Immediately after the adaptation, a disk-shaped test RDK was presented for 1 s to the corresponding blind-spot location in the opposite eye. This RDK moved either 15° clockwise, 15° counterclockwise, or vertically (the average of the two directions). The participants' task was to discriminate the direction of the test RDK across different coherence levels. We found significant MAE when the test RDK had the same directions as the adaptor. More importantly, equally strong MAE was observed even when the direction of the test RDK was vertical, which was not physically present during adaptation. The result demonstrates that the visual system uses the average direction of the local surrounding motions to fill in the blind spot.

  13. Method and apparatus for ultrasonic characterization through the thickness direction of a moving web

    Science.gov (United States)

    Jackson, Theodore; Hall, Maclin S.

    2001-01-01

    A method and apparatus for determining the caliper and/or the ultrasonic transit time through the thickness direction of a moving web of material using ultrasonic pulses generated by a rotatable wheel ultrasound apparatus. The apparatus includes a first liquid-filled tire and either a second liquid-filled tire forming a nip or a rotatable cylinder that supports a thin moving web of material such as a moving web of paper and forms a nip with the first liquid-filled tire. The components of ultrasonic transit time through the tires and fluid held within the tires may be resolved and separately employed to determine the separate contributions of the two tire thicknesses and the two fluid paths to the total path length that lies between two ultrasonic transducer surfaces contained within the tires in support of caliper measurements. The present invention provides the benefit of obtaining a transit time and caliper measurement at any point in time as a specimen passes through the nip of rotating tires and eliminates inaccuracies arising from nonuniform tire circumferential thickness by accurately retaining point-to-point specimen transit time and caliper variation information, rather than an average obtained through one or more tire rotations. Moreover, ultrasonic transit time through the thickness direction of a moving web may be determined independent of small variations in the wheel axle spacing, tire thickness, and liquid and tire temperatures.

  14. Moving and Being Moved: Implications for Practice.

    Science.gov (United States)

    Kretchmar, R. Scott

    2000-01-01

    Uses philosophical writings, a novel about baseball, and a nonfiction work on rowing to analyze levels of meaning in physical activity, showing why three popular methods for enhancing meaning have not succeeded and may have moved some students away from deeper levels of meaning. The paper suggests that using hints taken from the three books could…

  15. Modified Regression Rate Formula of PMMA Combustion by a Single Plane Impinging Jet

    Directory of Open Access Journals (Sweden)

    Tsuneyoshi Matsuoka

    2017-01-01

    Full Text Available A modified regression rate formula for the uppermost stage of a CAMUI-type hybrid rocket motor is proposed in this study. Assuming quasi-steady, one-dimensional combustion, an energy balance on a control volume near the fuel surface is considered. Accordingly, a regression rate formula that calculates the local regression rate from the quenching distance between the flame and the regressing surface is derived. An experimental setup which simulates the combustion phenomenon involved in the uppermost stage of a CAMUI-type hybrid rocket motor was constructed and burning tests with various flow velocities and impinging distances were performed. A PMMA slab of 20 mm height, 60 mm width, and 20 mm thickness was chosen as the sample specimen, and pure oxygen and an O2/N2 mixture (50/50 vol.%) were employed as the oxidizers. The time-averaged regression rate along the fuel surface was measured by a laser displacement sensor. The quenching distance during the combustion event was also identified from the observation. The comparison between the experimental and calculated values showed good agreement, although a large systematic error was expected due to the difficulty in accurately identifying the quenching distance.

  16. Rearrangement moves on rooted phylogenetic networks.

    Science.gov (United States)

    Gambette, Philippe; van Iersel, Leo; Jones, Mark; Lafond, Manuel; Pardi, Fabio; Scornavacca, Celine

    2017-08-01

    Phylogenetic tree reconstruction is usually done by local search heuristics that explore the space of the possible tree topologies via simple rearrangements of their structure. Tree rearrangement heuristics have been used in combination with practically all optimization criteria in use, from maximum likelihood and parsimony to distance-based principles, and in a Bayesian context. Their basic components are rearrangement moves that specify all possible ways of generating alternative phylogenies from a given one, and whose fundamental property is to be able to transform, by repeated application, any phylogeny into any other phylogeny. Despite their long tradition in tree-based phylogenetics, very little research has gone into studying similar rearrangement operations for phylogenetic networks, that is, phylogenies explicitly representing scenarios that include reticulate events such as hybridization, horizontal gene transfer, population admixture, and recombination. To fill this gap, we propose "horizontal" moves that ensure that every network of a certain complexity can be reached from any other network of the same complexity, and "vertical" moves that ensure reachability between networks of different complexities. When applied to phylogenetic trees, our horizontal moves, named rNNI and rSPR, reduce to the best-known moves on rooted phylogenetic trees: nearest-neighbor interchange and rooted subtree pruning and regrafting. Besides a number of reachability results, separating the contributions of horizontal and vertical moves, we prove that rNNI moves are local versions of rSPR moves, and provide bounds on the sizes of the rNNI neighborhoods. The paper focuses on the most biologically meaningful versions of phylogenetic networks, where edges are oriented and reticulation events clearly identified. Moreover, our rearrangement moves are robust to the fact that networks with higher complexity usually allow a better fit with the data.
Our goal is to provide a solid basis for


  18. Reasons for Moving in Nonmetro Iowa

    OpenAIRE

    Burke, Sandra Charvat; Edelman, Mark

    2007-01-01

    This study highlights the experiences of people who have recently moved to or from 19 selected nonmetropolitan counties of Iowa. This report highlights work, family, community, and housing reasons for moving. The purpose is to increase understanding about why people move so community leaders and citizens can develop actionable strategies for attracting and retaining population.

  19. Understanding poisson regression.

    Science.gov (United States)

    Hayat, Matthew J; Higgins, Melinda

    2014-04-01

    Nurse investigators often collect study data in the form of counts. Traditional methods of data analysis have historically approached analysis of count data either as if the count data were continuous and normally distributed or with dichotomization of the counts into the categories of occurred or did not occur. These outdated methods for analyzing count data have been replaced with more appropriate statistical methods that make use of the Poisson probability distribution, which is useful for analyzing count data. The purpose of this article is to provide an overview of the Poisson distribution and its use in Poisson regression. Assumption violations for the standard Poisson regression model are addressed with alternative approaches, including addition of an overdispersion parameter or negative binomial regression. An illustrative example is presented with an application from the ENSPIRE study, and regression modeling of comorbidity data is included for illustrative purposes. Copyright 2014, SLACK Incorporated.
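    A minimal sketch of the ideas in this overview: fitting a Poisson regression by Newton-Raphson on synthetic counts, then computing a Pearson dispersion statistic to screen for the overdispersion that would motivate a quasi-Poisson or negative binomial model. The data and variable names are illustrative, not from the ENSPIRE study.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    n = 1000
    x = rng.normal(size=n)
    X = np.column_stack([np.ones(n), x])
    beta_true = np.array([0.5, 0.8])
    y = rng.poisson(np.exp(X @ beta_true))  # counts generated with a log link

    beta = np.zeros(2)
    for _ in range(25):
        mu = np.exp(X @ beta)           # fitted means under the log link
        grad = X.T @ (y - mu)           # score vector
        hess = X.T @ (X * mu[:, None])  # Fisher information
        step = np.linalg.solve(hess, grad)
        beta = beta + step
        if np.max(np.abs(step)) < 1e-10:
            break

    # Overdispersion check: Pearson chi-square / df is near 1 for well-specified
    # Poisson data; values well above 1 point to quasi-Poisson or negative binomial.
    mu = np.exp(X @ beta)
    dispersion = np.sum((y - mu) ** 2 / mu) / (n - 2)
    print(np.round(beta, 2), round(float(dispersion), 2))
    ```

    Here the data really are Poisson, so the dispersion statistic comes out close to 1; with real count data it is worth computing before trusting the Poisson standard errors.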

  20. Alternative Methods of Regression

    CERN Document Server

    Birkes, David

    2011-01-01

    Of related interest. Nonlinear Regression Analysis and its Applications Douglas M. Bates and Donald G. Watts "…an extraordinary presentation of concepts and methods concerning the use and analysis of nonlinear regression models…highly recommend[ed]…for anyone needing to use and/or understand issues concerning the analysis of nonlinear regression models." --Technometrics This book provides a balance between theory and practice supported by extensive displays of instructive geometrical constructs. Numerous in-depth case studies illustrate the use of nonlinear regression analysis--with all data s

  1. Neutron resonance averaging

    International Nuclear Information System (INIS)

    Chrien, R.E.

    1986-10-01

    The principles of resonance averaging as applied to neutron capture reactions are described. Several illustrations of resonance averaging to problems of nuclear structure and the distribution of radiative strength in nuclei are provided. 30 refs., 12 figs

  2. Introduction to regression graphics

    CERN Document Server

    Cook, R Dennis

    2009-01-01

    Covers the use of dynamic and interactive computer graphics in linear regression analysis, focusing on analytical graphics. Features new techniques like plot rotation. The authors have composed their own regression code, using Xlisp-Stat language called R-code, which is a nearly complete system for linear regression analysis and can be utilized as the main computer program in a linear regression course. The accompanying disks, for both Macintosh and Windows computers, contain the R-code and Xlisp-Stat. An Instructor's Manual presenting detailed solutions to all the problems in the book is ava

  3. Slow light in moving media

    Science.gov (United States)

    Leonhardt, U.; Piwnicki, P.

    2001-06-01

    We review the theory of light propagation in moving media with extremely low group velocity. We intend to clarify the most elementary features of monochromatic slow light in a moving medium and, whenever possible, to give an instructive simplified picture.

  4. Nitrification in moving bed and fixed bed biofilters treating effluent water from a large commercial outdoor rainbow trout RAS

    DEFF Research Database (Denmark)

    Suhr, Karin; Pedersen, Per Bovbjerg

    2010-01-01

    The nitrification performance of two fixed bed (FB) biofilters and two moving bed (MB) biofilters was evaluated. They received the same cold (8 degrees C) influent water from a commercial outdoor RAS facility producing rainbow trout (average density 32 kg m⁻³). The filters were constructed as f...

  5. Nordic Seniors on the Move

    DEFF Research Database (Denmark)

    ”I believe that all people need to move about. Actually, some have difficulties in doing so. They stay in their home neighbourhoods where they’ve grown up and feel safe. I can understand that, but my wife and I, we didn’t want that. We are more open to new ideas.” This anthology is about seniors on the move. In seven chapters, Nordic researchers from various disciplines, by means of ethnographic methods, attempt to comprehend the phenomenon of Nordic seniors who move to leisure areas in their own or in other countries. The number of people involved in this kind of migratory movement has grown. The quotation above gives voice to one of these seniors, stressing the necessity of moving. The anthology contributes to the international body of literature about later life migration, specifically representing experiences made by Nordic seniors. As shown here, mobility and migration in later life have implications

  6. Developing and testing a global-scale regression model to quantify mean annual streamflow

    Science.gov (United States)

    Barbarossa, Valerio; Huijbregts, Mark A. J.; Hendriks, A. Jan; Beusen, Arthur H. W.; Clavreul, Julie; King, Henry; Schipper, Aafke M.

    2017-01-01

    Quantifying mean annual flow of rivers (MAF) at ungauged sites is essential for assessments of global water supply, ecosystem integrity and water footprints. MAF can be quantified with spatially explicit process-based models, which might be overly time-consuming and data-intensive for this purpose, or with empirical regression models that predict MAF based on climate and catchment characteristics. Yet, regression models have mostly been developed at a regional scale and the extent to which they can be extrapolated to other regions is not known. In this study, we developed a global-scale regression model for MAF based on a dataset unprecedented in size, using observations of discharge and catchment characteristics from 1885 catchments worldwide, measuring between 2 and 10⁶ km². In addition, we compared the performance of the regression model with the predictive ability of the spatially explicit global hydrological model PCR-GLOBWB by comparing results from both models to independent measurements. We obtained a regression model explaining 89% of the variance in MAF based on catchment area and catchment-averaged mean annual precipitation and air temperature, slope and elevation. The regression model performed better than PCR-GLOBWB for the prediction of MAF, as root-mean-square error (RMSE) values were lower (0.29-0.38 compared to 0.49-0.57) and the modified index of agreement (d) was higher (0.80-0.83 compared to 0.72-0.75). Our regression model can be applied globally to estimate MAF at any point of the river network, thus providing a feasible alternative to spatially explicit process-based global hydrological models.
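    For reference, the two evaluation scores used in the comparison can be computed as follows. The implementation of Willmott's modified index of agreement d and the use of log10-transformed flows are assumptions (a common practice for MAF values spanning orders of magnitude), not details taken from the paper; the toy observations are invented.

    ```python
    import numpy as np

    def rmse(obs, pred):
        """Root-mean-square error."""
        return np.sqrt(np.mean((pred - obs) ** 2))

    def modified_index_of_agreement(obs, pred):
        """Willmott's modified index d (absolute-value form); 1 = perfect agreement."""
        obar = np.mean(obs)
        return 1.0 - np.sum(np.abs(pred - obs)) / np.sum(
            np.abs(pred - obar) + np.abs(obs - obar)
        )

    # Toy example: MAF observations and model predictions in m^3/s
    obs = np.array([12.0, 150.0, 3.4, 980.0, 45.0])
    pred = np.array([10.0, 170.0, 4.0, 900.0, 52.0])

    log_obs, log_pred = np.log10(obs), np.log10(pred)
    print(round(float(rmse(log_obs, log_pred)), 3))
    print(round(float(modified_index_of_agreement(log_obs, log_pred)), 3))
    ```

    The modified index uses absolute rather than squared deviations, so it is less dominated by the largest rivers than RMSE.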

  7. Slow collisions between identical atoms in a laser field: Application of the Born and Markov approximations to the system of moving atoms

    International Nuclear Information System (INIS)

    Trippenbach, M.; Gao, B.; Cooper, J.; Burnett, K.

    1992-01-01

    We have derived reduced-density-matrix equations of motion for a pair of two identical atoms moving in the radiation field as the first step in establishing a theory of collisional redistribution of light from neutral-atom traps. We use the Zwanzig projection-operator technique to average over spontaneous field modes and establish the conditions under which Born and Markov approximations can be applied to the system of moving atoms. It follows from these considerations that when these conditions hold, the reduced-density-matrix equation for moving atoms has the same form as that for the stationary case: time dependence is introduced into the decay rates and interaction potentials by making the substitution R=R(t)

  8. Data-Driven Jump Detection Thresholds for Application in Jump Regressions

    Directory of Open Access Journals (Sweden)

    Robert Davies

    2018-03-01

    Full Text Available This paper develops a method to select the threshold in threshold-based jump detection methods. The method is motivated by an analysis of threshold-based jump detection methods in the context of jump-diffusion models. We show that, over the range of sampling frequencies a researcher is most likely to encounter, the usual in-fill asymptotics provide a poor guide for selecting the jump threshold. Because of this, we develop a sample-based method. Our method estimates the number of jumps over a grid of thresholds and selects the optimal threshold at what we term the ‘take-off’ point in the estimated number of jumps. We show that this method consistently estimates the jumps and their indices as the sampling interval goes to zero. In several Monte Carlo studies we evaluate the performance of our method based on its ability to accurately locate jumps and its ability to distinguish between true jumps and large diffusive moves. In one of these Monte Carlo studies we evaluate the performance of our method in a jump regression context. Finally, we apply our method in two empirical studies. In one we estimate the number of jumps and report the jump threshold our method selects for three commonly used market indices. In the other empirical application we perform a series of jump regressions using our method to select the jump threshold.
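    The object the method operates on, the estimated-jump-count curve over a grid of thresholds, is easy to illustrate. The simulation below is purely a sketch (a constant-volatility jump-diffusion with invented parameters), and the paper's actual take-off selection rule is not reproduced; we only show the exceedance-count curve flattening once diffusive moves are screened out.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    n = 78 * 252                   # ~1 year of 5-minute returns
    sigma = 0.01 / np.sqrt(78)     # diffusive scale per interval
    returns = rng.normal(0, sigma, size=n)

    true_jumps = rng.choice(n, size=20, replace=False)  # 20 genuine jumps
    returns[true_jumps] += rng.choice([-1, 1], size=20) * 0.01

    grid = np.linspace(2, 10, 33) * sigma  # candidate thresholds, in units of sigma
    counts = [int((np.abs(returns) > u).sum()) for u in grid]

    # At small thresholds the count is dominated by diffusive moves and falls fast;
    # once those are excluded it flattens near the true jump count (20 here) --
    # the flattening is where a 'take-off'-style rule would place the threshold.
    for u, c in zip(grid[::8], counts[::8]):
        print(f"threshold = {u / sigma:4.1f} sigma -> {c} exceedances")

    detected = np.flatnonzero(np.abs(returns) > 5 * sigma)
    ```

    With jumps of roughly nine diffusive standard deviations, a 5-sigma threshold recovers essentially all of them while admitting almost no diffusive moves, which is the separation the count curve makes visible.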

  9. A moving experience !

    CERN Document Server

    2005-01-01

    The Transport Service pulled out all the stops and, more specifically, its fleet of moving and lifting equipment for the Discovery Monday on 6 June - a truly moving experience for all the visitors who took part! Visitors could play at being machine operator, twiddling the controls of a lift truck fitted with a jib to lift a dummy magnet into a wooden mock-up of a beam-line. They had to show even greater dexterity for this game of lucky dip... CERN-style. Those with a head for heights took to the skies 20 m above ground in a telescopic boom lift. Children were allowed to climb up into the operator's cabin - this is one of the cranes used to move the LHC magnets around. Warm thanks to all members of the Transport Service for their participation, especially B. Goicoechea, T. Ilkei, R. Bihery, S. Prodon, S. Pelletier, Y. Bernard, A. Sallot, B. Pigeard, S. Guinchard, B. Bulot, J. Berrez, Y. Grandjean, A. Bouakkaz, M. Bois, F. Stach, T. Mazzarino and S. Fumey.

  10. Adaptive moving mesh methods for simulating one-dimensional groundwater problems with sharp moving fronts

    Science.gov (United States)

    Huang, W.; Zheng, Lingyun; Zhan, X.

    2002-01-01

    Accurate modelling of groundwater flow and transport with sharp moving fronts often involves high computational cost, when a fixed/uniform mesh is used. In this paper, we investigate the modelling of groundwater problems using a particular adaptive mesh method called the moving mesh partial differential equation approach. With this approach, the mesh is dynamically relocated through a partial differential equation to capture the evolving sharp fronts with a relatively small number of grid points. The mesh movement and physical system modelling are realized by solving the mesh movement and physical partial differential equations alternately. The method is applied to the modelling of a range of groundwater problems, including advection dominated chemical transport and reaction, non-linear infiltration in soil, and the coupling of density dependent flow and transport. Numerical results demonstrate that sharp moving fronts can be accurately and efficiently captured by the moving mesh approach. Also addressed are important implementation strategies, e.g. the construction of the monitor function based on the interpolation error, control of mesh concentration, and two-layer mesh movement. Copyright © 2002 John Wiley and Sons, Ltd.
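    The equidistribution principle behind mesh relocation can be illustrated in one dimension with a simple static remesh: grid points are placed so that each cell carries an equal share of a monitor function that is large near the sharp front. This is a simpler relative of the moving mesh PDE approach described above, with an invented tanh front and arc-length monitor.

    ```python
    import numpy as np

    def equidistribute(n_points, monitor, a=0.0, b=1.0, n_fine=10_000):
        """Place n_points in [a, b] so each interval holds equal monitor 'mass'."""
        xf = np.linspace(a, b, n_fine)
        m = monitor(xf)
        # cumulative monitor integral via the trapezoidal rule, normalized to [0, 1]
        cdf = np.concatenate([[0.0], np.cumsum(0.5 * (m[1:] + m[:-1]) * np.diff(xf))])
        cdf /= cdf[-1]
        # invert the cumulative monitor at equally spaced levels
        return np.interp(np.linspace(0.0, 1.0, n_points), cdf, xf)

    # Arc-length monitor for a steep tanh front centred at x = 0.7
    def dfront(x):
        return 50.0 / np.cosh(50.0 * (x - 0.7)) ** 2

    def monitor(x):
        return np.sqrt(1.0 + dfront(x) ** 2)

    mesh = equidistribute(41, monitor)
    spacing = np.diff(mesh)
    # the smallest cell should sit near the front at x = 0.7
    print(round(float(mesh[np.argmin(spacing)]), 2))
    ```

    In the moving mesh PDE approach this equidistribution is enforced continuously in time by an auxiliary PDE, solved alternately with the physical equations, rather than by a one-shot inversion as here.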

  11. Differential Effects for Sexual Risk Behavior: An Application of Finite Mixture Regression

    OpenAIRE

    Lanza, Stephanie T.; Kugler, Kari C.; Mathur, Charu

    2011-01-01

    Understanding the multiple factors that place individuals at risk for sexual risk behavior is critical for developing effective intervention programs. Regression-based methods are commonly used to estimate the average effects of risk factors; however, such results can be difficult to translate into prevention implications at the individual level. Although differential effects can be examined to some extent by including interaction terms, as risk factors and moderators are added to the model inte...

  12. Prospective in-patient cohort study of moves between levels of therapeutic security: the DUNDRUM-1 triage security, DUNDRUM-3 programme completion and DUNDRUM-4 recovery scales and the HCR-20

    Directory of Open Access Journals (Sweden)

    Davoren Mary

    2012-07-01

    Full Text Available Abstract Background We examined whether new structured professional judgment instruments for assessing need for therapeutic security, treatment completion and recovery in forensic settings were related to moves from higher to lower levels of therapeutic security and added anything to assessment of risk. Methods This was a prospective naturalistic twelve month observational study of a cohort of patients in a forensic hospital placed according to their need for therapeutic security along a pathway of moves from high to progressively less secure units in preparation for discharge. Patients were assessed using the DUNDRUM-1 triage security scale, the DUNDRUM-3 programme completion scale and the DUNDRUM-4 recovery scale and assessments of risk of violence, self harm and suicide, symptom severity and global function. Patients were subsequently observed for positive moves to less secure units and negative moves to more secure units. Results There were 86 male patients at baseline with mean follow-up 0.9 years, 11 positive and 9 negative moves. For positive moves, logistic regression indicated that along with location at baseline, the DUNDRUM-1, HCR-20 dynamic and PANSS general symptom scores were associated with subsequent positive moves. The receiver operating characteristic was significant for the DUNDRUM-1 while ANOVA co-varying for both location at baseline and HCR-20 dynamic score was significant for DUNDRUM-1. For negative moves, logistic regression showed DUNDRUM-1 and HCR-20 dynamic scores were associated with subsequent negative moves, along with DUNDRUM-3 and PANSS negative symptoms in some models. The receiver operating characteristic was significant for the DUNDRUM-4 recovery and HCR-20 dynamic scores with DUNDRUM-1, DUNDRUM-3, PANSS general and GAF marginal. ANOVA co-varying for both location at baseline and HCR-20 dynamic scores showed only DUNDRUM-1 and PANSS negative symptoms associated with subsequent negative moves. Conclusions

  13. Prospective in-patient cohort study of moves between levels of therapeutic security: the DUNDRUM-1 triage security, DUNDRUM-3 programme completion and DUNDRUM-4 recovery scales and the HCR-20.

    Science.gov (United States)

    Davoren, Mary; O'Dwyer, Sarah; Abidin, Zareena; Naughton, Leena; Gibbons, Olivia; Doyle, Elaine; McDonnell, Kim; Monks, Stephen; Kennedy, Harry G

    2012-07-13

    We examined whether new structured professional judgment instruments for assessing need for therapeutic security, treatment completion and recovery in forensic settings were related to moves from higher to lower levels of therapeutic security and added anything to assessment of risk. This was a prospective naturalistic twelve month observational study of a cohort of patients in a forensic hospital placed according to their need for therapeutic security along a pathway of moves from high to progressively less secure units in preparation for discharge. Patients were assessed using the DUNDRUM-1 triage security scale, the DUNDRUM-3 programme completion scale and the DUNDRUM-4 recovery scale and assessments of risk of violence, self harm and suicide, symptom severity and global function. Patients were subsequently observed for positive moves to less secure units and negative moves to more secure units. There were 86 male patients at baseline with mean follow-up 0.9 years, 11 positive and 9 negative moves. For positive moves, logistic regression indicated that along with location at baseline, the DUNDRUM-1, HCR-20 dynamic and PANSS general symptom scores were associated with subsequent positive moves. The receiver operating characteristic was significant for the DUNDRUM-1 while ANOVA co-varying for both location at baseline and HCR-20 dynamic score was significant for DUNDRUM-1. For negative moves, logistic regression showed DUNDRUM-1 and HCR-20 dynamic scores were associated with subsequent negative moves, along with DUNDRUM-3 and PANSS negative symptoms in some models. The receiver operating characteristic was significant for the DUNDRUM-4 recovery and HCR-20 dynamic scores with DUNDRUM-1, DUNDRUM-3, PANSS general and GAF marginal. ANOVA co-varying for both location at baseline and HCR-20 dynamic scores showed only DUNDRUM-1 and PANSS negative symptoms associated with subsequent negative moves. Clinicians appear to decide moves based on combinations of current and

  14. Prediction of unwanted pregnancies using logistic regression, probit regression and discriminant analysis.

    Science.gov (United States)

    Ebrahimzadeh, Farzad; Hajizadeh, Ebrahim; Vahabi, Nasim; Almasian, Mohammad; Bakhteyar, Katayoon

    2015-01-01

    Unwanted pregnancy, not intended by at least one of the parents, has undesirable consequences for the family and the society. In the present study, three classification models were used and compared to predict unwanted pregnancies in an urban population. In this cross-sectional study, 887 pregnant mothers referring to health centers in Khorramabad, Iran, in 2012 were selected by stratified and cluster sampling; relevant variables were measured, and logistic regression, discriminant analysis, and probit regression models were fitted in SPSS software version 21 to predict unwanted pregnancy. To compare these models, indicators such as sensitivity, specificity, the area under the ROC curve, and the percentage of correct predictions were used. The prevalence of unwanted pregnancies was 25.3%. The logistic and probit regression models indicated that parity, pregnancy spacing, contraceptive methods, household income and number of living male children were related to unwanted pregnancy. The performance of the models based on the area under the ROC curve was 0.735, 0.733, and 0.680 for logistic regression, probit regression, and linear discriminant analysis, respectively. Given the relatively high prevalence of unwanted pregnancies in Khorramabad, it seems necessary to revise family planning programs. Despite the similar accuracy of the models, if the researcher is interested in the interpretability of the results, the use of the logistic regression model is recommended.
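The three classifiers above are ranked by the area under the ROC curve. As a reminder of what that number means, AUC equals the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one, which gives a simple rank-based way to compute it. The function and toy data below are illustrative, not the study's:

```python
def roc_auc(scores, labels):
    """AUC as the Mann-Whitney U statistic: the probability that a random
    positive case outscores a random negative case, counting ties as 1/2."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Perfectly separated scores give AUC = 1.0.
auc = roc_auc([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0])
```

An AUC of 0.5 corresponds to random scoring, which is why values such as 0.680 versus 0.735 represent a meaningful, if modest, difference.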

  15. Estimation of direction of arrival of a moving target using subspace based approaches

    Science.gov (United States)

    Ghosh, Ripul; Das, Utpal; Akula, Aparna; Kumar, Satish; Sardana, H. K.

    2016-05-01

    In this work, array processing techniques based on subspace decomposition of the signal have been evaluated for estimation of the direction of arrival of moving targets using acoustic signatures. Three subspace based approaches - Incoherent Wideband Multiple Signal Classification (IWM), Least Squares Estimation of Signal Parameters via Rotational Invariance Techniques (LS-ESPRIT) and Total Least Squares ESPRIT (TLS-ESPRIT) - are considered. Their performance is compared with conventional time delay estimation (TDE) approaches such as Generalized Cross Correlation (GCC) and Average Square Difference Function (ASDF). Performance evaluation has been conducted on experimentally generated data consisting of acoustic signatures of four different types of civilian vehicles moving in defined geometrical trajectories. Mean absolute error and standard deviation of the DOA estimates w.r.t. ground truth are used as performance evaluation metrics. Lower statistical values of mean error confirm the superiority of subspace based approaches over TDE based techniques. Amongst the compared methods, LS-ESPRIT indicated better performance.
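Of the baseline TDE techniques mentioned, the Average Square Difference Function picks the inter-sensor lag that minimizes the mean squared difference between the two channels. A minimal single-pair sketch follows; the signals and names are illustrative stand-ins, whereas the paper's evaluation uses real vehicle acoustic signatures:

```python
import math

def asdf_delay(x, y, max_lag):
    """Estimate the delay of y relative to x as the integer lag that
    minimizes the Average Square Difference Function (ASDF).
    A positive result means y lags x."""
    best_lag, best_cost = 0, float("inf")
    for lag in range(-max_lag, max_lag + 1):
        pairs = [(x[i], y[i + lag]) for i in range(len(x))
                 if 0 <= i + lag < len(y)]
        cost = sum((a - b) ** 2 for a, b in pairs) / len(pairs)
        if cost < best_cost:
            best_lag, best_cost = lag, cost
    return best_lag

# y is x delayed by 3 samples, so the minimizing lag should be 3.
x = [math.sin(0.3 * i) for i in range(100)]
y = [0.0] * 3 + x[:-3]
estimated = asdf_delay(x, y, 10)
```

In a DOA setting the estimated delay between a sensor pair is then converted to a bearing angle using the array geometry and the speed of sound.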

  16. Move of ground water

    International Nuclear Information System (INIS)

    Kimura, Shigehiko

    1983-01-01

    As an example of ground water flow that is difficult to explain by Darcy's theory, there is stagnant water in strata, which moves under pumping and leads to land subsidence; this is now a major problem in Japan. Such movement on an extensive scale has been investigated in detail by means of tritium (³H), such as that delivered by rainfall, in addition to ordinary measurement. The movement of ground water is divided broadly into that in the unsaturated zone, from the ground surface to the water-table, and that in the saturated zone below the water-table. The course of the analyses made so far using ³H contained in water, and the future trend of its usage, are described. A flow model regarding water as a plastic fluid and its flow as a channel assembly may be applicable to some flow mechanisms that Darcy's theory cannot explain. (Mori, K.)

  17. Improved model of the retardance in citric acid coated ferrofluids using stepwise regression

    Science.gov (United States)

    Lin, J. F.; Qiu, X. R.

    2017-06-01

    Citric acid (CA) coated Fe3O4 ferrofluids (FFs) have been developed for biomedical applications. The magneto-optical retardance of CA coated FFs was measured by a Stokes polarimeter. Optimization and multiple regression of retardance in FFs were previously carried out with the Taguchi method and Microsoft Excel, and the F value of the regression model was large enough. However, the model built in Excel was not systematic, so we instead adopted stepwise regression to model the retardance of CA coated FFs. From the results of stepwise regression in MATLAB, the developed model had highly predictable ability owing to an F value of 2.55897e+7 and a correlation coefficient of one. The average absolute error of predicted retardances relative to measured retardances was just 0.0044%. Using the genetic algorithm (GA) in MATLAB, the optimized parametric combination was determined as [4.709 0.12 39.998 70.006], corresponding to the pH of the suspension, the molar ratio of CA to Fe3O4, the CA volume, and the coating temperature. The maximum retardance was found to be 31.712°, close to that obtained by the evolutionary solver in Excel, with a relative error of -0.013%. In summary, the stepwise regression method was successfully used to model the retardance of CA coated FFs, and the maximum global retardance was determined by the use of GA.
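Stepwise regression, as used above, builds the model term by term. A minimal forward-selection sketch follows: it greedily adds the predictor whose simple regression on the current residuals most reduces the residual sum of squares. Real stepwise procedures, including MATLAB's, also refit all terms jointly and drop terms by F-tests; the names and data here are illustrative:

```python
def forward_select(X, y, n_terms):
    """Greedy forward selection on residuals: a simplified sketch of
    stepwise regression. X is a list of rows; returns chosen column indices."""
    resid = list(y)
    chosen = []
    for _ in range(n_terms):
        best = None
        for j in range(len(X[0])):
            if j in chosen:
                continue
            col = [row[j] for row in X]
            sxx = sum(c * c for c in col)
            if sxx == 0:
                continue
            # Slope of a simple no-intercept regression of the residuals on column j.
            beta = sum(c * r for c, r in zip(col, resid)) / sxx
            rss = sum((r - beta * c) ** 2 for c, r in zip(col, resid))
            if best is None or rss < best[0]:
                best = (rss, j, beta)
        _, j, beta = best
        chosen.append(j)
        col = [row[j] for row in X]
        resid = [r - beta * c for c, r in zip(col, resid)]
    return chosen

# y depends only on column 0, so forward selection should pick it first.
X = [[float(i), float((-1) ** i), float(i % 3)] for i in range(10)]
y = [2.0 * i for i in range(10)]
picked = forward_select(X, y, 1)
```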

  18. Student-Centered Coaching: The Moves

    Science.gov (United States)

    Sweeney, Diane; Harris, Leanna S.

    2017-01-01

    Student-centered coaching is a highly-effective, evidence-based coaching model that shifts the focus from "fixing" teachers to collaborating with them to design instruction that targets student outcomes. But what does this look like in practice? "Student-Centered Coaching: The Moves" shows you the day-to-day coaching moves that…

  19. Mixed convection from a discrete heat source in enclosures with two adjacent moving walls and filled with micropolar nanofluids

    Directory of Open Access Journals (Sweden)

    Sameh E. Ahmed

    2016-03-01

    Full Text Available This paper examines numerically the thermal and flow field characteristics of the laminar steady mixed convection flow in a square lid-driven enclosure filled with water-based micropolar nanofluids by using the finite volume method. While a uniform heat source is located on a part of the bottom of the enclosure, both the right and left sidewalls are considered adiabatic together with the remaining parts of the bottom wall. The upper wall is maintained at a relatively low temperature. Both the upper and left sidewalls move at a uniform lid-driven velocity and four different cases of the moving lid orientations are considered. The fluid inside the enclosure is a water based micropolar nanofluid containing different types of solid spherical nanoparticles: Cu, Ag, Al2O3, and TiO2. Based on the numerical results, the effects of the dominant parameters such as Richardson number, nanofluid type, length and location of the heat source, solid volume fractions, moving lid orientations and dimensionless viscosity are examined. Comparisons with previously numerical works are performed and good agreements between the results are observed. It is found that the average Nusselt number along the heat source decreases as the heat source length increases while it increases when the solid volume fraction increases. Also, the results of the present study indicate that both the local and the average Nusselt numbers along the heat source have the highest value for the fourth case (C4). Moreover, it is observed that both the Richardson number and moving lid orientations have a significant effect on the flow and thermal fields in the enclosure.

  20. Adjuvant corneal crosslinking to prevent hyperopic LASIK regression.

    Science.gov (United States)

    Aslanides, Ioannis M; Mukherjee, Achyut N

    2013-01-01

    To report the long term outcomes, safety, stability, and efficacy in a pilot series of simultaneous hyperopic laser assisted in situ keratomileusis (LASIK) and corneal crosslinking (CXL). A small cohort series of five eyes, with clinically suboptimal topography and/or thickness, underwent LASIK surgery with immediate riboflavin application under the flap, followed by UV light irradiation. Postoperative assessment was performed at 1, 3, 6, and 12 months, with late follow up at 4 years, and results were compared with a matched cohort that received LASIK only. The average age of the LASIK-CXL group was 39 years (26-46), and the average spherical equivalent hyperopic refractive error was +3.45 diopters (standard deviation 0.76; range 2.5 to 4.5). All eyes maintained refractive stability over the 4 years. There were no complications related to CXL, and topographic and clinical outcomes were as expected for standard LASIK. This limited series suggests that simultaneous LASIK and CXL for hyperopia is safe. Outcomes of the small cohort suggest that this technique may be promising for ameliorating hyperopic regression, presumed to be biomechanical in origin, and may also address ectasia risk.

  1. Moving Target Photometry Using WISE and NEOWISE

    Science.gov (United States)

    Wright, Edward L.

    2015-01-01

    WISE band 1 observations have a significant noise contribution from confusion. The image subtraction done on W0855-0714 by Wright et al. (2014) shows that this noise source can be eliminated for sources that move by much more than the beamsize. This paper describes an analysis that includes a pattern of celestially fixed flux plus a source moving with a known trajectory. This technique allows the confusion noise to be modeled with nuisance parameters and removed even for sources that have not moved by many beamwidths. However, the detector noise is magnified if the motion is too small. Examples of the method applied to fast moving Y dwarfs and slow moving planets will be shown.

  2. Concentration Sensing by the Moving Nucleus in Cell Fate Determination: A Computational Analysis.

    Directory of Open Access Journals (Sweden)

    Varun Aggarwal

    Full Text Available During development of the vertebrate neuroepithelium, the nucleus in neural progenitor cells (NPCs) moves from the apex toward the base and returns to the apex (called interkinetic nuclear migration), at which point the cell divides. The fate of the resulting daughter cells is thought to depend on the sampling by the moving nucleus of a spatial concentration profile of the cytoplasmic Notch intracellular domain (NICD). However, the nucleus executes complex stochastic motions including random waiting and back and forth motions, which can expose the nucleus to randomly varying levels of cytoplasmic NICD. How nuclear position can determine daughter cell fate despite the stochastic nature of nuclear migration is not clear. Here we derived a mathematical model for reaction, diffusion, and nuclear accumulation of NICD in NPCs during interkinetic nuclear migration (INM). Using experimentally measured trajectory-dependent probabilities of nuclear turning, nuclear waiting times and average nuclear speeds in NPCs in the developing zebrafish retina, we performed stochastic simulations to compute the nuclear trajectory-dependent probabilities of NPC differentiation. Comparison with experimentally measured nuclear NICD concentrations and trajectory-dependent probabilities of differentiation allowed estimation of the NICD cytoplasmic gradient. Spatially polarized production of NICD, rapid NICD cytoplasmic consumption and the time-averaging effect of nuclear import/export kinetics are sufficient to explain the experimentally observed differentiation probabilities. Our computational studies lend quantitative support to the feasibility of the nuclear concentration-sensing mechanism for NPC fate determination in zebrafish retina.

  3. Linear regression in astronomy. I

    Science.gov (United States)

    Isobe, Takashi; Feigelson, Eric D.; Akritas, Michael G.; Babu, Gutti Jogesh

    1990-01-01

    Five methods for obtaining linear regression fits to bivariate data with unknown or insignificant measurement errors are discussed: ordinary least-squares (OLS) regression of Y on X, OLS regression of X on Y, the bisector of the two OLS lines, orthogonal regression, and 'reduced major-axis' regression. These methods have been used by various researchers in observational astronomy, most importantly in cosmic distance scale applications. Formulas for calculating the slope and intercept coefficients and their uncertainties are given for all the methods, including a new general form of the OLS variance estimates. The accuracy of the formulas was confirmed using numerical simulations. The applicability of the procedures is discussed with respect to their mathematical properties, the nature of the astronomical data under consideration, and the scientific purpose of the regression. It is found that, for problems needing symmetrical treatment of the variables, the OLS bisector performs significantly better than orthogonal or reduced major-axis regression.
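For reference, the OLS(Y|X), OLS(X|Y), and bisector slopes compared above can all be computed from the usual sums of squares; the bisector formula below follows Isobe et al. (1990), though this sketch omits the paper's uncertainty estimates for the coefficients:

```python
import math

def ols_slopes(x, y):
    """Return three slopes for bivariate data: OLS(Y|X), OLS(X|Y)
    expressed in y-on-x form, and their bisector (Isobe et al. 1990)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    b1 = sxy / sxx  # OLS regression of Y on X
    b2 = syy / sxy  # OLS regression of X on Y, as a y-on-x slope
    b3 = (b1 * b2 - 1.0
          + math.sqrt((1.0 + b1 ** 2) * (1.0 + b2 ** 2))) / (b1 + b2)
    return b1, b2, b3

# For an exact linear relation all three slopes agree.
b1, b2, b3 = ols_slopes(list(range(10)), [2 * v + 1 for v in range(10)])
```

For scattered data the bisector always lies between the two OLS slopes, which is the symmetric-treatment property the paper recommends for problems like the cosmic distance scale.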

  4. Logic regression and its extensions.

    Science.gov (United States)

    Schwender, Holger; Ruczinski, Ingo

    2010-01-01

    Logic regression is an adaptive classification and regression procedure, initially developed to reveal interacting single nucleotide polymorphisms (SNPs) in genetic association studies. In general, this approach can be used in any setting with binary predictors, when the interaction of these covariates is of primary interest. Logic regression searches for Boolean (logic) combinations of binary variables that best explain the variability in the outcome variable, and thus, reveals variables and interactions that are associated with the response and/or have predictive capabilities. The logic expressions are embedded in a generalized linear regression framework, and thus, logic regression can handle a variety of outcome types, such as binary responses in case-control studies, numeric responses, and time-to-event data. In this chapter, we provide an introduction to the logic regression methodology, list some applications in public health and medicine, and summarize some of the direct extensions and modifications of logic regression that have been proposed in the literature. Copyright © 2010 Elsevier Inc. All rights reserved.

  5. On the energy flux of a signal in a moving magnetized plasma

    International Nuclear Information System (INIS)

    Gavrilenko, V.G.; Zelekson, L.A.

    1980-01-01

    Energy exchange of an electromagnetic signal with a homogeneous plasma moving along a strong magnetic field, provided that the initial signal is given in a plane parallel or normal to the drift velocity, has been analyzed. In the first case, expressions for the fields excited in the long-range zone are obtained by the stationary phase method. It follows from the expressions that, starting from some moment of time, the direction of the energy flux and the sign of the energy density change into their opposites. This is caused by the fact that the fast harmonic components (with a phase velocity exceeding the drift velocity) of the initial signal reach the point of observation first, followed by the slow ones, the energy density of the slow waves being negative. For longitudinal propagation of perturbations excited by a quasimonochromatic source, the averaged flux and energy density in the weakly relativistic approximation have been shown to be zero. In conclusion, electromagnetic waves moving with superluminal velocity in a non-dispersive medium are studied, the energy of these waves changing sign with time [ru]

  6. Tumor regression patterns in retinoblastoma

    International Nuclear Information System (INIS)

    Zafar, S.N.; Siddique, S.N.; Zaheer, N.

    2016-01-01

    To observe the types of tumor regression after treatment, and identify the common pattern of regression in our patients. Study Design: Descriptive study. Place and Duration of Study: Department of Pediatric Ophthalmology and Strabismus, Al-Shifa Trust Eye Hospital, Rawalpindi, Pakistan, from October 2011 to October 2014. Methodology: Children with unilateral and bilateral retinoblastoma were included in the study. Patients were referred to Pakistan Institute of Medical Sciences, Islamabad, for chemotherapy. After every cycle of chemotherapy, dilated fundus examination under anesthesia was performed to record the response to treatment. Regression patterns were recorded on RetCam II. Results: Seventy-four tumors were included in the study. Out of 74 tumors, 3 were ICRB group A tumors, 43 were ICRB group B tumors, 14 tumors belonged to ICRB group C, and the remaining 14 were ICRB group D tumors. Type IV regression was seen in 39.1% (n=29) tumors, type II in 29.7% (n=22), type III in 25.6% (n=19), and type I in 5.4% (n=4). All group A tumors (100%) showed type IV regression. Seventeen (39.5%) group B tumors showed type IV regression. In group C, 5 tumors (35.7%) showed type II regression and 5 tumors (35.7%) showed type IV regression. In group D, 6 tumors (42.9%) regressed to type II non-calcified remnants. Conclusion: The response and success of the focal and systemic treatment, as judged by the appearance of different patterns of tumor regression, varies with the ICRB grouping of the tumor. (author)

  7. Trading Fees and Slow-Moving Capital

    OpenAIRE

    Buss, Adrian; Dumas, Bernard J

    2015-01-01

    In some situations, investment capital seems to move slowly towards profitable trades. We develop a model of a financial market in which capital moves slowly simply because there is a proportional cost to moving capital. We incorporate trading fees in an infinite-horizon dynamic general-equilibrium model in which investors optimally and endogenously decide when and how much to trade. We determine the steady-state equilibrium no-trade zone, study the dynamics of equilibrium trades and prices a...

  8. Combining Alphas via Bounded Regression

    Directory of Open Access Journals (Sweden)

    Zura Kakushadze

    2015-11-01

    Full Text Available We give an explicit algorithm and source code for combining alpha streams via bounded regression. In practical applications, typically, there is insufficient history to compute a sample covariance matrix (SCM) for a large number of alphas. To compute alpha allocation weights, one then resorts to (weighted) regression over SCM principal components. Regression often produces alpha weights with insufficient diversification and/or a skewed distribution against, e.g., turnover. This can be rectified by imposing bounds on alpha weights within the regression procedure. Bounded regression can also be applied to stock and other asset portfolio construction. We discuss illustrative examples.
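Imposing bounds on the alpha weights turns the regression into a box-constrained least-squares problem. The paper supplies its own algorithm and source code; the projected-gradient sketch below is only a generic illustration of the constrained problem, with the step-size rule a crude assumption rather than anything from the paper:

```python
def bounded_regression(A, y, lo, hi, iters=5000, step=None):
    """Box-constrained least squares by projected gradient descent:
    minimize ||A w - y||^2 subject to lo[j] <= w[j] <= hi[j].
    A is a list of rows; a toy stand-in for alpha-weight allocation."""
    m, n = len(A), len(A[0])
    if step is None:
        # Crude step size from an upper bound on the gradient's Lipschitz constant.
        step = 1.0 / (2.0 * sum(a * a for row in A for a in row))
    # Start from zero projected into the box.
    w = [min(max(0.0, lo[j]), hi[j]) for j in range(n)]
    for _ in range(iters):
        r = [sum(A[i][j] * w[j] for j in range(n)) - y[i] for i in range(m)]
        g = [2.0 * sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        # Gradient step followed by projection onto the box constraints.
        w = [min(hi[j], max(lo[j], w[j] - step * g[j])) for j in range(n)]
    return w

# Unconstrained optimum [5, -5] gets clipped to the box [0, 1] x [0, 1].
A = [[1.0, 0.0], [0.0, 1.0]]
w = bounded_regression(A, [5.0, -5.0], [0.0, 0.0], [1.0, 1.0])
```

In production one would use a dedicated solver (e.g. an active-set or interior-point bounded least-squares routine) rather than fixed-step projected gradient, but the constraint structure is the same.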

  9. 30 CFR 57.14107 - Moving machine parts.

    Science.gov (United States)

    2010-07-01

    ... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Moving machine parts. 57.14107 Section 57.14107... Equipment Safety Devices and Maintenance Requirements § 57.14107 Moving machine parts. (a) Moving machine parts shall be guarded to protect persons from contacting gears, sprockets, chains, drive, head, tail...

  10. DRREP: deep ridge regressed epitope predictor.

    Science.gov (United States)

    Sher, Gene; Zhi, Degui; Zhang, Shaojie

    2017-10-03

    The ability to predict epitopes plays an enormous role in vaccine development in terms of our ability to zero in on where to do a more thorough in-vivo analysis of the protein in question. Though for the past decade there have been numerous advancements and improvements in epitope prediction, on average the best benchmark prediction accuracies are still only around 60%. New machine learning algorithms have arisen within the domains of deep learning, text mining, and convolutional networks. This paper presents a novel analytically trained, string-kernel-based deep neural network tailored for continuous epitope prediction, called the Deep Ridge Regressed Epitope Predictor (DRREP). DRREP was tested on long protein sequences from the following datasets: SARS, Pellequer, HIV, AntiJen, and SEQ194. DRREP was compared to numerous state-of-the-art epitope predictors, including the most recently published predictors, LBtope and DMNLBE. Using area under the ROC curve (AUC), DRREP achieved a performance improvement over the best performing predictors on SARS (13.7%), HIV (8.9%), Pellequer (1.5%), and SEQ194 (3.1%), with its performance being matched only on the AntiJen dataset, by the LBtope predictor, where both DRREP and LBtope achieved an AUC of 0.702. DRREP is an analytically trained deep neural network, thus capable of learning in a single step through regression. By combining the features of deep learning, string kernels, and convolutional networks, the system is able to perform residue-by-residue prediction of continuous epitopes with higher accuracy than the current state-of-the-art predictors.

  11. Lattice Boltzmann methods for moving boundary flows

    International Nuclear Information System (INIS)

    Inamuro, Takaji

    2012-01-01

    The lattice Boltzmann methods (LBMs) for moving boundary flows are presented. The LBM for two-phase fluid flows with the same density and the LBM combined with the immersed boundary method are described. In addition, the LBM on a moving multi-block grid is explained. Three numerical examples (a droplet moving in a constricted tube, the lift generation of a flapping wing and the sedimentation of an elliptical cylinder) are shown in order to demonstrate the applicability of the LBMs to moving boundary problems. (invited review)

  12. Lattice Boltzmann methods for moving boundary flows

    Energy Technology Data Exchange (ETDEWEB)

    Inamuro, Takaji, E-mail: inamuro@kuaero.kyoto-u.ac.jp [Department of Aeronautics and Astronautics, and Advanced Research Institute of Fluid Science and Engineering, Graduate School of Engineering, Kyoto University, Kyoto 606-8501 (Japan)

    2012-04-01

    The lattice Boltzmann methods (LBMs) for moving boundary flows are presented. The LBM for two-phase fluid flows with the same density and the LBM combined with the immersed boundary method are described. In addition, the LBM on a moving multi-block grid is explained. Three numerical examples (a droplet moving in a constricted tube, the lift generation of a flapping wing and the sedimentation of an elliptical cylinder) are shown in order to demonstrate the applicability of the LBMs to moving boundary problems. (invited review)

  13. riskRegression

    DEFF Research Database (Denmark)

    Ozenne, Brice; Sørensen, Anne Lyngholm; Scheike, Thomas

    2017-01-01

    In the presence of competing risks a prediction of the time-dynamic absolute risk of an event can be based on cause-specific Cox regression models for the event and the competing risks (Benichou and Gail, 1990). We present computationally fast and memory optimized C++ functions with an R interface...... for predicting the covariate specific absolute risks, their confidence intervals, and their confidence bands based on right censored time to event data. We provide explicit formulas for our implementation of the estimator of the (stratified) baseline hazard function in the presence of tied event times. As a by...... functionals. The software presented here is implemented in the riskRegression package....

  14. MODELLING THE INTERACTION IN GAME SPORTS - RELATIVE PHASE AND MOVING CORRELATIONS

    Directory of Open Access Journals (Sweden)

    Martin Lames

    2006-12-01

    Full Text Available Model building in game sports should maintain the constitutive feature of this group of sports, the dynamic interaction process between the two parties. For single net/wall games, relative phase is suggested to describe the positional interaction between the two players. 30 baseline rallies in tennis were examined and relative phase was calculated by Hilbert transform from the two time-series of lateral displacement and trajectory in the court respectively. Results showed that relative phase indicates some aspects of the tactical interaction in tennis. At a more abstract level, the interaction between two teams in handball was studied by examining the relationship of the two scoring processes. Each process can be conceived as a random walk. Moving averages of the scoring probabilities indicate something like a momentary strength. A moving correlation (length = 20 ball possessions) describes the momentary relationship between the teams' strength. Evidence was found that this correlation is heavily time-dependent; in almost every single game among the 40 examined ones we found phases with a significant positive as well as a significant negative relationship. This underlines the importance of a dynamic view on the interaction in these games.
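The moving correlation of the two scoring processes can be sketched as a windowed Pearson correlation, with the window length of 20 ball possessions used above. The series below are synthetic, not match data:

```python
def moving_correlation(a, b, window=20):
    """Pearson correlation of two series over a sliding window, mirroring
    the moving correlation (length = 20 ball possessions) of the two
    teams' smoothed scoring probabilities."""
    out = []
    for t in range(len(a) - window + 1):
        xa, xb = a[t:t + window], b[t:t + window]
        ma, mb = sum(xa) / window, sum(xb) / window
        cov = sum((p - ma) * (q - mb) for p, q in zip(xa, xb))
        va = sum((p - ma) ** 2 for p in xa)
        vb = sum((q - mb) ** 2 for q in xb)
        out.append(cov / (va * vb) ** 0.5)
    return out

# Two linearly related series give a rolling correlation of +1 in every window.
series = [i % 7 for i in range(60)]
corr = moving_correlation(series, [2 * s + 1 for s in series])
```

Applied to real scoring-probability series, stretches of positive values mark phases where the teams' strengths rise and fall together, and negative stretches mark phases where one team's gain is the other's loss.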

  15. Improving sub-pixel imperviousness change prediction by ensembling heterogeneous non-linear regression models

    Science.gov (United States)

    Drzewiecki, Wojciech

    2016-12-01

    In this work nine non-linear regression models were compared for sub-pixel impervious surface area mapping from Landsat images. The comparison was done in three study areas both for accuracy of imperviousness coverage evaluation in individual points in time and accuracy of imperviousness change assessment. The performance of individual machine learning algorithms (Cubist, Random Forest, stochastic gradient boosting of regression trees, k-nearest neighbors regression, random k-nearest neighbors regression, Multivariate Adaptive Regression Splines, averaged neural networks, and support vector machines with polynomial and radial kernels) was also compared with the performance of heterogeneous model ensembles constructed from the best models trained using particular techniques. The results proved that in case of sub-pixel evaluation the most accurate prediction of change may not necessarily be based on the most accurate individual assessments. When single methods are considered, based on obtained results Cubist algorithm may be advised for Landsat based mapping of imperviousness for single dates. However, Random Forest may be endorsed when the most reliable evaluation of imperviousness change is the primary goal. It gave lower accuracies for individual assessments, but better prediction of change due to more correlated errors of individual predictions. Heterogeneous model ensembles performed for individual time points assessments at least as well as the best individual models. In case of imperviousness change assessment the ensembles always outperformed single model approaches. It means that it is possible to improve the accuracy of sub-pixel imperviousness change assessment using ensembles of heterogeneous non-linear regression models.
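A heterogeneous ensemble of the kind described combines member predictions, most simply by averaging them; by convexity, the averaged prediction's squared error at each point never exceeds the mean of the members' squared errors, which is why weakly correlated individual errors help, as the abstract notes for Random Forest. A toy sketch with made-up numbers, not the study's data:

```python
def ensemble_average(predictions):
    """Combine the outputs of heterogeneous regression models by simple
    averaging. Partly offsetting member errors cancel in the mean."""
    n_models = len(predictions)
    return [sum(p[i] for p in predictions) / n_models
            for i in range(len(predictions[0]))]

def mse(p, y):
    """Mean squared error between a prediction vector and the truth."""
    return sum((a - b) ** 2 for a, b in zip(p, y)) / len(y)

# Three toy imperviousness predictions with partly offsetting errors.
truth = [0.2, 0.5, 0.8]
preds = [[0.25, 0.45, 0.85], [0.15, 0.55, 0.75], [0.20, 0.50, 0.90]]
combined = ensemble_average(preds)
```

Weighted averaging or stacking (fitting a meta-model on member predictions) are the usual refinements, but even the unweighted mean illustrates why the ensembles in the study never did worse than their best member.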

  16. Oil and gas pipeline construction cost analysis and developing regression models for cost estimation

    Science.gov (United States)

    Thaduri, Ravi Kiran

    In this study, cost data for 180 pipelines and 136 compressor stations have been analyzed. On the basis of the distribution analysis, regression models have been developed. Material, Labor, ROW and miscellaneous costs make up the total cost of a pipeline construction. The pipelines are analyzed based on different pipeline lengths, diameter, location, pipeline volume and year of completion. In a pipeline construction, labor costs dominate the total costs with a share of about 40%. Multiple non-linear regression models are developed to estimate the component costs of pipelines for various cross-sectional areas, lengths and locations. The Compressor stations are analyzed based on the capacity, year of completion and location. Unlike the pipeline costs, material costs dominate the total costs in the construction of compressor station, with an average share of about 50.6%. Land costs have very little influence on the total costs. Similar regression models are developed to estimate the component costs of compressor station for various capacities and locations.

  17. 77 FR 16566 - Submission for OMB Review, Comment Request, Proposed Collection: Let's Move Museums, Let's Move...

    Science.gov (United States)

    2012-03-21

    ..., Proposed Collection: Let's Move Museums, Let's Move Gardens AGENCY: Institute of Museum and Library..., comment request. SUMMARY: The Institute of Museum and Library Services announces that the following... functions of the agency, including whether the information will have practical utility; Evaluate the...

  18. Estimation of Natural Frequencies During Earthquakes

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning; Rytter, A

    1997-01-01

    This paper presents two different recursive prediction error method (RPEM) implementations of multivariate Auto-Regressive Moving-Average (ARMAV) models for identification of a time-variant civil engineering structure subject to an earthquake. The two techniques are tested on measurements made...

  19. On Averaging Rotations

    DEFF Research Database (Denmark)

    Gramkow, Claus

    1999-01-01

    In this article two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong to a curved manifold; they amount to approximations to the Riemannian metric, and the subsequent corrections are inherent in the least squares estimation. Keywords: averaging rotations, Riemannian metric, matrix, quaternion
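A minimal sketch of the barycenter approach this record critiques, on a hypothetical set of rotations; for small, symmetric spreads like this one, the renormalized barycenter coincides with the Riemannian mean:

```python
import numpy as np

def quat_mean_barycenter(quats):
    """Naive barycenter mean: sign-align, average, renormalize.
    Only a first-order approximation to the Riemannian mean for spread-out rotations."""
    quats = np.asarray(quats, float)
    ref = quats[0]
    # q and -q represent the same rotation, so align signs before averaging
    aligned = np.where((quats @ ref)[:, None] < 0, -quats, quats)
    m = aligned.mean(axis=0)
    return m / np.linalg.norm(m)

# Rotations about the z-axis by -10, 0, +10 degrees: the mean is the identity
angles = np.radians([-10.0, 0.0, 10.0])
quats = [[np.cos(a / 2), 0.0, 0.0, np.sin(a / 2)] for a in angles]
m = quat_mean_barycenter(quats)
print(np.round(m, 3))   # ≈ [1, 0, 0, 0], the identity rotation
```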

  20. Regression in autistic spectrum disorders.

    Science.gov (United States)

    Stefanatos, Gerry A

    2008-12-01

    A significant proportion of children diagnosed with Autistic Spectrum Disorder experience a developmental regression characterized by a loss of previously acquired skills. This may involve a loss of speech or social responsiveness, but often entails both. This paper critically reviews the phenomenon of regression in autistic spectrum disorders, highlighting the characteristics of regression, age of onset, temporal course, and long-term outcome. Important considerations for diagnosis are discussed and multiple etiological factors currently hypothesized to underlie the phenomenon are reviewed. It is argued that regressive autistic spectrum disorders can be conceptualized on a spectrum with other regressive disorders that may share common pathophysiological features. The implications of this viewpoint are discussed.

  1. Understanding logistic regression analysis

    OpenAIRE

    Sperandei, Sandro

    2014-01-01

    Logistic regression is used to obtain odds ratios in the presence of more than one explanatory variable. The procedure is quite similar to multiple linear regression, with the exception that the response variable is binomial. The result is the impact of each variable on the odds ratio of the observed event of interest. The main advantage is the avoidance of confounding effects by analyzing the association of all variables together. In this article, we explain the logistic regression procedure using ex...
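A self-contained sketch of the procedure on synthetic data (the variable names and effect sizes below are invented for illustration). The fitted coefficient for the `smoker` indicator, exponentiated, is the age-adjusted odds ratio:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
smoker = rng.integers(0, 2, n)            # explanatory variable of interest
age = rng.normal(50, 10, n)               # second explanatory variable
# True model: log-odds = -3 + 0.7*smoker + 0.03*age  → odds ratio for smoker = e^0.7
logit = -3 + 0.7 * smoker + 0.03 * age
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([np.ones(n), smoker, age])
beta = np.zeros(3)
for _ in range(25):                        # Newton-Raphson for the MLE
    p = 1 / (1 + np.exp(-X @ beta))
    grad = X.T @ (y - p)
    hess = X.T @ (X * (p * (1 - p))[:, None])
    beta += np.linalg.solve(hess, grad)

odds_ratio_smoker = float(np.exp(beta[1]))   # adjusted for age
print(round(odds_ratio_smoker, 2))           # should be near e^0.7 ≈ 2.01
```

Because age is included in the model, the estimated odds ratio for smoking is free of confounding by age, which is the "main advantage" the abstract refers to.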

  2. Regression-based model of skin diffuse reflectance for skin color analysis

    Science.gov (United States)

    Tsumura, Norimichi; Kawazoe, Daisuke; Nakaguchi, Toshiya; Ojima, Nobutoshi; Miyake, Yoichi

    2008-11-01

    A simple regression-based model of skin diffuse reflectance is developed based on reflectance samples calculated by Monte Carlo simulation of light transport in a two-layered skin model. This reflectance model includes the values of spectral reflectance in the visible spectrum for Japanese women. The modified Lambert–Beer law holds in the proposed model with a modified mean free path length in non-linear density space. The averaged RMS and maximum errors of the proposed model were 1.1 and 3.1%, respectively, in the above range.

  3. Average-energy games

    Directory of Open Access Journals (Sweden)

    Patricia Bouyer

    2015-09-01

    Full Text Available Two-player quantitative zero-sum games provide a natural framework to synthesize controllers with performance guarantees for reactive systems within an uncontrollable environment. Classical settings include mean-payoff games, where the objective is to optimize the long-run average gain per action, and energy games, where the system has to avoid running out of energy. We study average-energy games, where the goal is to optimize the long-run average of the accumulated energy. We show that this objective arises naturally in several applications, and that it yields interesting connections with previous concepts in the literature. We prove that deciding the winner in such games is in NP ∩ coNP and at least as hard as solving mean-payoff games, and we establish that memoryless strategies suffice to win. We also consider the case where the system has to minimize the average energy while maintaining the accumulated energy within predefined bounds at all times: this corresponds to operating with a finite-capacity storage for energy. We give results for one-player and two-player games, and establish complexity bounds and memory requirements.
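To make the objective concrete, a tiny stdlib-Python sketch with a hypothetical play that repeats a cycle of energy deltas: the cycle's mean payoff (long-run average gain per action) is 0, yet the long-run average of the accumulated energy is strictly positive, which is what average-energy games optimize:

```python
from itertools import accumulate

cycle = [+3, -1, -1, +2, -3]                 # edge weights of a hypothetical cycle; sum = 0
levels = list(accumulate(cycle * 1000))      # accumulated energy along the repeated play
mean_payoff = sum(cycle) / len(cycle)        # long-run average gain per action
average_energy = sum(levels) / len(levels)   # long-run average of accumulated energy
print(mean_payoff, average_energy)           # → 0.0 1.8
```

Two plays can thus have identical mean payoff but very different average energy, depending on how the gains and losses are ordered along the cycle.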

  4. FOCUS FORECASTING IN SUPPLY CHAIN: THE CASE STUDY OF FAST MOVING CONSUMER GOODS COMPANY IN SERBIA

    Directory of Open Access Journals (Sweden)

    Zoran Rakićević

    2015-04-01

    Full Text Available This paper presents an application of focus forecasting in a fast moving consumer goods (FMCG) supply chain. Focus forecasting is tested in a real business case in a Serbian enterprise. The data used in the simulation refers to the historical sales of two types of FMCG with several different products. The data were collected and summarized across the whole distribution channel in the Serbian market from January 2012 to December 2013. We applied several well-known time series forecasting models using the focus forecasting approach, where for the future time period we used the method which had the best performance in the past. The focus forecasting approach mixes different standard forecasting methods on the data sets in order to find the one that was the most accurate during the past period. The accuracy of forecasting methods is defined through different measures of errors. In this paper we implemented the following forecasting models in Microsoft Excel: last period, all average, moving average, exponential smoothing with constant and variable parameter α, exponential smoothing with trend, exponential smoothing with trend and seasonality. The main purpose was not to evaluate different forecasting methods but to show a practical application of the focus forecasting approach in a real business case.
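The approach can be sketched as follows (the method set is a subset of the one listed above, and the sales series is illustrative, not the paper's data): each candidate method is scored by its MAPE over a recent backtest window, and the winner produces the next forecast.

```python
import numpy as np

def last_period(h):            return h[-1]
def all_average(h):            return np.mean(h)
def moving_average(h, k=3):    return np.mean(h[-k:])
def exp_smoothing(h, alpha=0.3):
    level = h[0]
    for x in h[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

METHODS = {"last period": last_period, "all average": all_average,
           "moving average": moving_average, "exp. smoothing": exp_smoothing}

def focus_forecast(history, eval_window=6):
    """Score each method by MAPE over the recent past; use the winner for t+1."""
    history = list(history)
    mape = {}
    for name, method in METHODS.items():
        ape = [abs(history[t] - method(np.asarray(history[:t]))) / history[t]
               for t in range(len(history) - eval_window, len(history))]
        mape[name] = 100 * float(np.mean(ape))
    best = min(mape, key=mape.get)
    return best, float(METHODS[best](np.asarray(history)))

sales = [120, 132, 128, 140, 152, 149, 161, 173, 170, 184]  # hypothetical units sold
best, forecast = focus_forecast(sales)
print(best, round(forecast, 1))
```

On a trending series like this one the naive "last period" method tends to win the backtest; on seasonal data a different candidate would be selected, which is exactly the point of the approach.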

  5. Linear regression in astronomy. II

    Science.gov (United States)

    Feigelson, Eric D.; Babu, Gutti J.

    1992-01-01

    A wide variety of least-squares linear regression procedures used in observational astronomy, particularly investigations of the cosmic distance scale, are presented and discussed. The classes of linear models considered are (1) unweighted regression lines, with bootstrap and jackknife resampling; (2) regression solutions when measurement error, in one or both variables, dominates the scatter; (3) methods to apply a calibration line to new data; (4) truncated regression models, which apply to flux-limited data sets; and (5) censored regression models, which apply when nondetections are present. For the calibration problem we develop two new procedures: a formula for the intercept offset between two parallel data sets, which propagates slope errors from one regression to the other; and a generalization of the Working-Hotelling confidence bands to nonstandard least-squares lines. They can provide improved error analysis for Faber-Jackson, Tully-Fisher, and similar cosmic distance scale relations.

  6. One-dimensional Fermi accelerator model with moving wall described by a nonlinear van der Pol oscillator.

    Science.gov (United States)

    Botari, Tiago; Leonel, Edson D

    2013-01-01

    A modification of the one-dimensional Fermi accelerator model is considered in this work. The dynamics of a classical particle of mass m, confined to bounce elastically between two rigid walls where one is described by a nonlinear van der Pol type oscillator while the other one is fixed, working as a reinjection mechanism of the particle for a next collision, is carefully made by the use of a two-dimensional nonlinear mapping. Two cases are considered: (i) the situation where the particle has mass negligible as compared to the mass of the moving wall and does not affect the motion of it; and (ii) the case where collisions of the particle do affect the movement of the moving wall. For case (i) the phase space is of mixed type leading us to observe a scaling of the average velocity as a function of the parameter (χ) controlling the nonlinearity of the moving wall. For large χ, a diffusion on the velocity is observed leading to the conclusion that Fermi acceleration is taking place. On the other hand, for case (ii), the motion of the moving wall is affected by collisions with the particle. However, due to the properties of the van der Pol oscillator, the moving wall relaxes again to a limit cycle. Such kind of motion absorbs part of the energy of the particle leading to a suppression of the unlimited energy gain as observed in case (i). The phase space shows a set of attractors of different periods whose basin of attraction has a complicated organization.

  7. Engineering Women’s Attitudes and Goals in Choosing Disciplines with above and Below Average Female Representation

    Directory of Open Access Journals (Sweden)

    Dina Verdín

    2018-03-01

    Full Text Available Women’s participation in engineering remains well below that of men at all degree levels. However, despite the low enrollment of women in engineering as a whole, some engineering disciplines report above average female enrollment. We used multiple linear regression to examine the attitudes, beliefs, career outcome expectations, and career choice of first-year female engineering students enrolled in below average, average, and above average female representation disciplines in engineering. Our work begins to understand how the socially constructed masculine cultural norms of engineering may attract women differentially into specific engineering disciplines. This study used future time perspective, psychological personality traits, grit, various measures of STEM identities, and items related to career outcome expectations as theoretical frameworks. The results of this study indicate that women who are interested in engineering disciplines with different representations of women (i.e., more or less male-dominated) have significantly different attitudes and beliefs, career goals, and career plans. This study provides information about the perceptions that women may have and attitudes that they bring with them into particular engineering pathways.

  8. A Matlab program for stepwise regression

    Directory of Open Access Journals (Sweden)

    Yanhong Qi

    2016-03-01

    Full Text Available Stepwise linear regression is a multi-variable regression technique for identifying statistically significant variables in the linear regression equation. In the present study, we present a Matlab program for stepwise regression.
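The Matlab program itself is not reproduced in the record. As a rough Python analogue (hypothetical data; the F-to-enter threshold of 4 is a common default, not taken from the paper), forward stepwise selection can be sketched as:

```python
import numpy as np

def fit_ols(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(np.sum((y - X @ beta) ** 2))
    return beta, rss

def forward_stepwise(X, y, f_enter=4.0):
    """Greedy forward selection: repeatedly add the variable with the
    largest F-to-enter statistic, stopping when none exceeds f_enter."""
    n, p = X.shape
    selected, remaining = [], list(range(p))
    ones = np.ones((n, 1))
    _, rss = fit_ols(ones, y)                  # intercept-only baseline
    while remaining:
        best = None                            # (F statistic, column, new RSS)
        for j in remaining:
            Xj = np.column_stack([ones] + [X[:, [k]] for k in selected + [j]])
            _, rss_j = fit_ols(Xj, y)
            f = (rss - rss_j) / (rss_j / (n - Xj.shape[1]))
            if best is None or f > best[0]:
                best = (f, j, rss_j)
        if best[0] < f_enter:
            break
        selected.append(best[1])
        remaining.remove(best[1])
        rss = best[2]
    return selected

rng = np.random.default_rng(1)
n = 200
X = rng.normal(size=(n, 5))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(scale=0.5, size=n)  # only x0, x3 matter
selected = forward_stepwise(X, y)
print(sorted(selected))   # x0 and x3 should enter first
```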

  9. Moving carbonation fronts in concrete: a moving-sharp-interface approach

    NARCIS (Netherlands)

    Muntean, A.; Böhm, M.; Kropp, J.

    2011-01-01

    We present a new modeling strategy for predicting the penetration of carbonation reaction fronts in concrete. The approach relies on the assumption that carbonation reaction concentrates macroscopically on an a priori unknown narrow strip (called reaction front) moving into concrete gradually

  10. Zero inflated Poisson and negative binomial regression models: application in education.

    Science.gov (United States)

    Salehi, Masoud; Roudbari, Masoud

    2015-01-01

    The numbers of failed courses and semesters are indicators of students' performance. These counts have zero-inflated (ZI) distributions. Using ZI Poisson and negative binomial distributions, we can model such count data to find the associated factors and estimate the parameters. This study aims to investigate the important factors related to the educational performance of students. This cross-sectional study was performed in 2008-2009 at Iran University of Medical Sciences (IUMS); from a population of almost 6000 students, 670 students were selected using stratified random sampling. The educational and demographic data were collected from the University records. The study design was approved at IUMS and the students' data were kept confidential. Descriptive statistics and ZI Poisson and negative binomial regressions were used to analyze the data, using STATA. For the number of failed semesters, in both the ZI Poisson and negative binomial models, students' total average and the quota system played the largest roles. For the number of failed courses, total average and being at the undergraduate or master's level had the largest effects in both models. In all models the total average has the greatest effect on the number of failed courses or semesters; the next most important factors are the quota system for failed semesters and the undergraduate or master's level for failed courses. Therefore, the average has an important inverse effect on the numbers of failed courses and semesters.
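As a minimal illustration of the zero-inflated Poisson idea (synthetic "failed courses" counts with invented parameters; a real analysis would add regression covariates as in the study): a ZI Poisson mixes structural zeros with a Poisson component, and its first two moments identify both parameters.

```python
import numpy as np

rng = np.random.default_rng(42)
pi_true, lam_true = 0.4, 2.5          # 40% structural zeros, Poisson mean 2.5
n = 20000
is_zero = rng.random(n) < pi_true
counts = np.where(is_zero, 0, rng.poisson(lam_true, n))   # e.g. failed courses

# Method-of-moments recovery:  E[Y] = (1-pi)*lam,  Var[Y] = (1-pi)*lam*(1 + pi*lam)
m, v = counts.mean(), counts.var()
s = v / m - 1.0                        # equals pi * lam
lam_hat = float(m + s)
pi_hat = float(s / lam_hat)
print(round(pi_hat, 2), round(lam_hat, 2))
```

The variance exceeding the mean (v/m > 1) is the overdispersion signature that motivates ZI models over plain Poisson regression for such data.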

  11. Averaging of nonlinearity-managed pulses

    International Nuclear Information System (INIS)

    Zharnitsky, Vadim; Pelinovsky, Dmitry

    2005-01-01

    We consider the nonlinear Schroedinger equation with the nonlinearity management which describes Bose-Einstein condensates under Feshbach resonance. By using an averaging theory, we derive the Hamiltonian averaged equation and compare it with other averaging methods developed for this problem. The averaged equation is used for analytical approximations of nonlinearity-managed solitons

  12. Time-localized wavelet multiple regression and correlation

    Science.gov (United States)

    Fernández-Macho, Javier

    2018-02-01

    This paper extends wavelet methodology to handle comovement dynamics of multivariate time series via moving weighted regression on wavelet coefficients. The concept of wavelet local multiple correlation is used to produce one single set of multiscale correlations along time, in contrast with the large number of wavelet correlation maps that need to be compared when using standard pairwise wavelet correlations with rolling windows. Also, the spectral properties of weight functions are investigated and it is argued that some common time windows, such as the usual rectangular rolling window, are not satisfactory on these grounds. The method is illustrated with a multiscale analysis of the comovements of Eurozone stock markets during this century. It is shown how the evolution of the correlation structure in these markets has been far from homogeneous both along time and across timescales featuring an acute divide across timescales at about the quarterly scale. At longer scales, evidence from the long-term correlation structure can be interpreted as stable perfect integration among Euro stock markets. On the other hand, at intramonth and intraweek scales, the short-term correlation structure has been clearly evolving along time, experiencing a sharp increase during financial crises which may be interpreted as evidence of financial 'contagion'.
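The core construction, a moving weighted correlation with a smooth window instead of the rectangular rolling window criticized above, can be sketched directly on two series. The series here are synthetic stand-ins for wavelet coefficients at one scale:

```python
import numpy as np

def weighted_corr(x, y, w):
    w = w / w.sum()
    mx, my = np.sum(w * x), np.sum(w * y)
    cov = np.sum(w * (x - mx) * (y - my))
    return cov / np.sqrt(np.sum(w * (x - mx) ** 2) * np.sum(w * (y - my) ** 2))

def moving_correlation(x, y, half_width=32):
    """Time-localized correlation with a Gaussian weight (smoother spectral
    response than the usual rectangular rolling window)."""
    t_all = np.arange(len(x))
    out = np.empty(len(x))
    for t in t_all:
        w = np.exp(-0.5 * ((t_all - t) / half_width) ** 2)
        out[t] = weighted_corr(x, y, w)
    return out

# Two series whose comovement flips sign halfway through
n = 512
common = np.random.default_rng(3).normal(size=n).cumsum()
x = common + np.random.default_rng(4).normal(scale=0.5, size=n)
sign = np.where(np.arange(n) < n // 2, 1.0, -1.0)
y = sign * common + np.random.default_rng(5).normal(scale=0.5, size=n)
rho = moving_correlation(x, y)
print(round(float(rho[100]), 2), round(float(rho[400]), 2))  # ≈ +1 early, ≈ -1 late
```

The Gaussian window tapers smoothly to zero, avoiding the sidelobe artifacts of a rectangular window while still localizing the correlation in time.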

  13. Ultrasound image based visual servoing for moving target ablation by high intensity focused ultrasound.

    Science.gov (United States)

    Seo, Joonho; Koizumi, Norihiro; Mitsuishi, Mamoru; Sugita, Naohiko

    2017-12-01

    Although high intensity focused ultrasound (HIFU) is a promising technology for tumor treatment, a moving abdominal target is still a challenge in current HIFU systems. In particular, respiratory-induced organ motion can reduce the treatment efficiency and negatively influence the treatment result. In this research, we present: (1) a methodology for integration of ultrasound (US) image based visual servoing in a HIFU system; and (2) the experimental results obtained using the developed system. In the visual servoing system, target motion is monitored by biplane US imaging and tracked in real time (40 Hz) by registration with a preoperative 3D model. The distance between the target and the current HIFU focal position is calculated in every US frame and a three-axis robot physically compensates for differences. Because simultaneous HIFU irradiation disturbs US target imaging, a sophisticated interlacing strategy was constructed. In the experiments, respiratory-induced organ motion was simulated in a water tank with a linear actuator and kidney-shaped phantom model. Motion compensation with HIFU irradiation was applied to the moving phantom model. Based on the experimental results, visual servoing exhibited a motion compensation accuracy of 1.7 mm (RMS) on average. Moreover, the integrated system could make a spherical HIFU-ablated lesion in the desired position of the respiratory-moving phantom model. We have demonstrated the feasibility of our US image based visual servoing technique in a HIFU system for moving target treatment. © 2016 The Authors The International Journal of Medical Robotics and Computer Assisted Surgery Published by John Wiley & Sons Ltd.

  14. Understanding the Gap between Cognitive Abilities and Daily Living Skills in Adolescents with Autism Spectrum Disorders with Average Intelligence

    Science.gov (United States)

    Duncan, Amie W.; Bishop, Somer L.

    2015-01-01

    Daily living skills standard scores on the Vineland Adaptive Behavior Scales-2nd edition were examined in 417 adolescents from the Simons Simplex Collection. All participants had at least average intelligence and a diagnosis of autism spectrum disorder. Descriptive statistics and binary logistic regressions were used to examine the prevalence and…

  15. Multivariate Frequency-Severity Regression Models in Insurance

    Directory of Open Access Journals (Sweden)

    Edward W. Frees

    2016-02-01

    Full Text Available In insurance and related industries including healthcare, it is common to have several outcome measures that the analyst wishes to understand using explanatory variables. For example, in automobile insurance, an accident may result in payments for damage to one’s own vehicle, damage to another party’s vehicle, or personal injury. It is also common to be interested in the frequency of accidents in addition to the severity of the claim amounts. This paper synthesizes and extends the literature on multivariate frequency-severity regression modeling with a focus on insurance industry applications. Regression models for understanding the distribution of each outcome continue to be developed yet there now exists a solid body of literature for the marginal outcomes. This paper contributes to this body of literature by focusing on the use of a copula for modeling the dependence among these outcomes; a major advantage of this tool is that it preserves the body of work established for marginal models. We illustrate this approach using data from the Wisconsin Local Government Property Insurance Fund. This fund offers insurance protection for (i) property; (ii) motor vehicle; and (iii) contractors’ equipment claims. In addition to several claim types and frequency-severity components, outcomes can be further categorized by time and space, requiring complex dependency modeling. We find significant dependencies for these data; specifically, we find that dependencies among lines are stronger than the dependencies between the frequency and average severity within each line.

  16. Quantile regression theory and applications

    CERN Document Server

    Davino, Cristina; Vistocco, Domenico

    2013-01-01

    A guide to the implementation and interpretation of Quantile Regression models This book explores the theory and numerous applications of quantile regression, offering empirical data analysis as well as the software tools to implement the methods. The main focus of this book is to provide the reader with a comprehensive description of the main issues concerning quantile regression; these include basic modeling, geometrical interpretation, estimation and inference for quantile regression, as well as issues on validity of the model, diagnostic tools. Each methodological aspect is explored and
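The basic estimation idea behind quantile regression is that the check (pinball) loss is minimized by the conditional quantile. A quick sketch with a skewed synthetic outcome verifies this for the 0.9 quantile of an intercept-only model:

```python
import numpy as np

def pinball_loss(y, q_pred, tau):
    """Check loss: asymmetric absolute error, minimized by the tau-quantile."""
    u = y - q_pred
    return float(np.mean(np.where(u >= 0, tau * u, (tau - 1) * u)))

rng = np.random.default_rng(7)
y = rng.exponential(scale=2.0, size=20000)   # skewed outcome variable

tau = 0.9
grid = np.arange(0.0, 15.0, 0.01)            # candidate constant predictions
losses = [pinball_loss(y, c, tau) for c in grid]
best = float(grid[int(np.argmin(losses))])
print(round(best, 2), round(float(np.quantile(y, tau)), 2))  # both ≈ -2*ln(0.1) ≈ 4.6
```

In full quantile regression the constant is replaced by a linear predictor and the same loss is minimized over its coefficients.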

  17. The cost determinants of routine infant immunization services: a meta-regression analysis of six country studies.

    Science.gov (United States)

    Menzies, Nicolas A; Suharlim, Christian; Geng, Fangli; Ward, Zachary J; Brenzel, Logan; Resch, Stephen C

    2017-10-06

    Evidence on immunization costs is a critical input for cost-effectiveness analysis and budgeting, and can describe variation in site-level efficiency. The Expanded Program on Immunization Costing and Financing (EPIC) Project represents the largest investigation of immunization delivery costs, collecting empirical data on routine infant immunization in Benin, Ghana, Honduras, Moldova, Uganda, and Zambia. We developed a pooled dataset from individual EPIC country studies (316 sites). We regressed log total costs against explanatory variables describing service volume, quality, access, other site characteristics, and income level. We used Bayesian hierarchical regression models to combine data from different countries and account for the multi-stage sample design. We calculated output elasticity as the percentage increase in outputs (service volume) for a 1% increase in inputs (total costs), averaged across the sample in each country, and reported first differences to describe the impact of other predictors. We estimated average and total cost curves for each country as a function of service volume. Across countries, average costs per dose ranged from $2.75 to $13.63. Average costs per child receiving diphtheria, tetanus, and pertussis ranged from $27 to $139. Within countries costs per dose varied widely-on average, sites in the highest quintile were 440% more expensive than those in the lowest quintile. In each country, higher service volume was strongly associated with lower average costs. A doubling of service volume was associated with a 19% (95% interval, 4.0-32) reduction in costs per dose delivered, (range 13% to 32% across countries), and the largest 20% of sites in each country realized costs per dose that were on average 61% lower than those for the smallest 20% of sites, controlling for other factors. 
Other factors associated with higher costs included hospital status, provision of outreach services, share of effort to management, level of staff training
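The reported scale effect can be reproduced mechanically with a log-log regression on synthetic site data (the coefficients below are invented, and the paper's pooled Bayesian hierarchical model is much richer): with cost elasticity b, cost per dose scales as volume^(b-1), so a doubling of volume changes it by 2^(b-1) - 1.

```python
import numpy as np

rng = np.random.default_rng(11)
n = 300
log_volume = rng.normal(8, 1, n)              # hypothetical doses per site (log scale)
# Assume economies of scale: total cost ~ volume^0.7  (elasticity b = 0.7 < 1)
log_cost = 2.0 + 0.7 * log_volume + rng.normal(scale=0.3, size=n)

# OLS of log cost on log volume: the slope b is the elasticity
X = np.column_stack([np.ones(n), log_volume])
b = float(np.linalg.lstsq(X, log_cost, rcond=None)[0][1])

# Cost per dose scales as volume^(b-1); effect of doubling service volume:
change = 100 * (2 ** (b - 1) - 1)
print(round(change, 1))   # ≈ -18.8% per doubling when b ≈ 0.7
```

A value near -19% mirrors the pooled estimate quoted above; elasticities below 1 are exactly the economies of scale the study documents.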

  18. Fungible weights in logistic regression.

    Science.gov (United States)

    Jones, Jeff A; Waller, Niels G

    2016-06-01

    In this article we develop methods for assessing parameter sensitivity in logistic regression models. To set the stage for this work, we first review Waller's (2008) equations for computing fungible weights in linear regression. Next, we describe 2 methods for computing fungible weights in logistic regression. To demonstrate the utility of these methods, we compute fungible logistic regression weights using data from the Centers for Disease Control and Prevention's (2010) Youth Risk Behavior Surveillance Survey, and we illustrate how these alternate weights can be used to evaluate parameter sensitivity. To make our work accessible to the research community, we provide R code (R Core Team, 2015) that will generate both kinds of fungible logistic regression weights. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  19. Using autoregressive integrated moving average (ARIMA) models to predict and monitor the number of beds occupied during a SARS outbreak in a tertiary hospital in Singapore

    Directory of Open Access Journals (Sweden)

    Earnest Arul

    2005-05-01

    Full Text Available Abstract Background The main objective of this study is to apply autoregressive integrated moving average (ARIMA) models to make real-time predictions on the number of beds occupied in Tan Tock Seng Hospital, during the recent SARS outbreak. Methods This is a retrospective study design. Hospital admission and occupancy data for isolation beds was collected from Tan Tock Seng hospital for the period 14th March 2003 to 31st May 2003. The main outcome measure was daily number of isolation beds occupied by SARS patients. Among the covariates considered were daily number of people screened, daily number of people admitted (including observation, suspect and probable cases) and days from the most recent significant event discovery. We utilized the following strategy for the analysis. Firstly, we split the outbreak data into two. Data from 14th March to 21st April 2003 was used for model development. We used structural ARIMA models in an attempt to model the number of beds occupied. Estimation is via the maximum likelihood method using the Kalman filter. For the ARIMA model parameters, we considered the simplest parsimonious lowest order model. Results We found that the ARIMA (1,0,3) model was able to describe and predict the number of beds occupied during the SARS outbreak well. The mean absolute percentage error (MAPE) for the training set and validation set were 5.7% and 8.6% respectively, which we found was reasonable for use in the hospital setting. Furthermore, the model also provided three-day forecasts of the number of beds required. Total number of admissions and probable cases admitted on the previous day were also found to be independent prognostic factors of bed occupancy. Conclusion ARIMA models provide useful tools for administrators and clinicians in planning for real-time bed capacity during an outbreak of an infectious disease such as SARS. The model could well be used in planning for bed-capacity during outbreaks of other infectious

  20. Using autoregressive integrated moving average (ARIMA) models to predict and monitor the number of beds occupied during a SARS outbreak in a tertiary hospital in Singapore.

    Science.gov (United States)

    Earnest, Arul; Chen, Mark I; Ng, Donald; Sin, Leo Yee

    2005-05-11

    The main objective of this study is to apply autoregressive integrated moving average (ARIMA) models to make real-time predictions on the number of beds occupied in Tan Tock Seng Hospital, during the recent SARS outbreak. This is a retrospective study design. Hospital admission and occupancy data for isolation beds was collected from Tan Tock Seng hospital for the period 14th March 2003 to 31st May 2003. The main outcome measure was daily number of isolation beds occupied by SARS patients. Among the covariates considered were daily number of people screened, daily number of people admitted (including observation, suspect and probable cases) and days from the most recent significant event discovery. We utilized the following strategy for the analysis. Firstly, we split the outbreak data into two. Data from 14th March to 21st April 2003 was used for model development. We used structural ARIMA models in an attempt to model the number of beds occupied. Estimation is via the maximum likelihood method using the Kalman filter. For the ARIMA model parameters, we considered the simplest parsimonious lowest order model. We found that the ARIMA (1,0,3) model was able to describe and predict the number of beds occupied during the SARS outbreak well. The mean absolute percentage error (MAPE) for the training set and validation set were 5.7% and 8.6% respectively, which we found was reasonable for use in the hospital setting. Furthermore, the model also provided three-day forecasts of the number of beds required. Total number of admissions and probable cases admitted on the previous day were also found to be independent prognostic factors of bed occupancy. ARIMA models provide useful tools for administrators and clinicians in planning for real-time bed capacity during an outbreak of an infectious disease such as SARS. The model could well be used in planning for bed-capacity during outbreaks of other infectious diseases as well.
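A full re-implementation of this study needs an ARIMA library (e.g. statsmodels' `ARIMA(series, order=(1, 0, 3))`). The evaluation logic, though, is easy to sketch with a stand-in AR(1) forecaster and hypothetical bed counts; the train/validation split and the MAPE criterion mirror the ones described above:

```python
import numpy as np

def mape(actual, forecast):
    """Mean absolute percentage error, the criterion used for the bed model."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return 100 * float(np.mean(np.abs((actual - forecast) / actual)))

# Hypothetical daily counts of occupied isolation beds (rise, then fall)
beds = np.array([5, 8, 13, 20, 26, 30, 33, 35, 34, 32, 29, 27, 24, 22, 20], float)
train, valid = beds[:10], beds[10:]

# Stand-in forecaster: AR(1) fitted on the training set by least squares
slope, intercept = np.polyfit(train[:-1], train[1:], 1)
one_step = [intercept + slope * beds[t - 1] for t in range(10, len(beds))]

print(round(mape(valid, one_step), 1))   # validation-set MAPE, in percent
```

A MAPE of well under 10%, like the 5.7%/8.6% reported, would indicate the forecasts are precise enough for day-to-day bed planning.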

  1. Can We Use Regression Modeling to Quantify Mean Annual Streamflow at a Global-Scale?

    Science.gov (United States)

    Barbarossa, V.; Huijbregts, M. A. J.; Hendriks, J. A.; Beusen, A.; Clavreul, J.; King, H.; Schipper, A.

    2016-12-01

    Quantifying mean annual flow of rivers (MAF) at ungauged sites is essential for a number of applications, including assessments of global water supply, ecosystem integrity and water footprints. MAF can be quantified with spatially explicit process-based models, which might be overly time-consuming and data-intensive for this purpose, or with empirical regression models that predict MAF based on climate and catchment characteristics. Yet, regression models have mostly been developed at a regional scale and the extent to which they can be extrapolated to other regions is not known. In this study, we developed a global-scale regression model for MAF using observations of discharge and catchment characteristics from 1,885 catchments worldwide, ranging from 2 to 10⁶ km² in size. In addition, we compared the performance of the regression model with the predictive ability of the spatially explicit global hydrological model PCR-GLOBWB [van Beek et al., 2011] by comparing results from both models to independent measurements. We obtained a regression model explaining 89% of the variance in MAF based on catchment area, mean annual precipitation and air temperature, average slope and elevation. The regression model performed better than PCR-GLOBWB for the prediction of MAF, as root-mean-square error values were lower (0.29 - 0.38 compared to 0.49 - 0.57) and the modified index of agreement was higher (0.80 - 0.83 compared to 0.72 - 0.75). Our regression model can be applied globally at any point of the river network, provided that the input parameters are within the range of values employed in the calibration of the model. The performance is reduced for water scarce regions and further research should focus on improving such an aspect for regression-based global hydrological models.
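Both reported criteria are simple to compute. A sketch with made-up observed/predicted values (the modified index of agreement here is Willmott's absolute-value form, consistent with the 0-1, higher-is-better reading in the abstract):

```python
import numpy as np

def rmse(obs, pred):
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    return float(np.sqrt(np.mean((obs - pred) ** 2)))

def modified_index_of_agreement(obs, pred):
    """Willmott's modified index d1 (absolute-value form), in [0, 1]."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    o_bar = obs.mean()
    return float(1 - np.sum(np.abs(obs - pred))
                   / np.sum(np.abs(pred - o_bar) + np.abs(obs - o_bar)))

obs = np.array([3.2, 1.1, 4.8, 2.0, 6.5])    # hypothetical observed values
pred = np.array([3.0, 1.4, 4.5, 2.3, 6.1])   # hypothetical model predictions
print(round(rmse(obs, pred), 2), round(modified_index_of_agreement(obs, pred), 2))  # → 0.31 0.91
```

Unlike RMSE, the modified index uses absolute rather than squared deviations, so it is less dominated by the largest errors.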

  2. Spatio-Temporal Queries for moving objects Data warehousing

    OpenAIRE

    Esheiba, Leila; Mokhtar, Hoda M. O.; El-Sharkawi, Mohamed

    2013-01-01

    In the last decade, Moving Object Databases (MODs) have attracted a lot of attention from researchers. Several research works were conducted to extend traditional database techniques to accommodate the new requirements imposed by the continuous change in location information of moving objects. Managing, querying, storing, and mining moving objects were the key research directions. This extensive interest in moving objects is a natural consequence of the recent ubiquitous location-aware device...

  3. Adjuvant corneal crosslinking to prevent hyperopic LASIK regression

    Directory of Open Access Journals (Sweden)

    Aslanides IM

    2013-03-01

    Full Text Available Ioannis M Aslanides, Achyut N Mukherjee; Emmetropia Mediterranean Eye Clinic, Heraklion, Crete, Greece. Purpose: To report the long-term outcomes, safety, stability, and efficacy in a pilot series of simultaneous hyperopic laser-assisted in situ keratomileusis (LASIK) and corneal crosslinking (CXL). Method: A small cohort series of five eyes, with clinically suboptimal topography and/or thickness, underwent LASIK surgery with immediate riboflavin application under the flap, followed by UV light irradiation. Postoperative assessment was performed at 1, 3, 6, and 12 months, with late follow-up at 4 years, and results were compared with a matched cohort that received LASIK only. Results: The average age of the LASIK-CXL group was 39 years (26–46), and the average spherical equivalent hyperopic refractive error was +3.45 diopters (standard deviation 0.76; range 2.5 to 4.5). All eyes maintained refractive stability over the 4 years. There were no complications related to CXL, and topographic and clinical outcomes were as expected for standard LASIK. Conclusion: This limited series suggests that simultaneous LASIK and CXL for hyperopia is safe. Outcomes of the small cohort suggest that this technique may be promising for ameliorating hyperopic regression, presumed to be biomechanical in origin, and may also address ectasia risk. Keyword: CXL

  4. Principal component regression analysis with SPSS.

    Science.gov (United States)

    Liu, R X; Kuang, J; Gong, Q; Hou, X L

    2003-06-01

    The paper introduces the indices used in multicollinearity diagnosis, the basic principle of principal component regression, and the method for determining the 'best' equation. A worked example shows how to perform principal component regression analysis with SPSS 10.0, covering the full calculation process and the associated operations: the linear regression, factor analysis, descriptives, compute variable, and bivariate correlations procedures in SPSS 10.0. Principal component regression analysis can be used to overcome the disturbance of multicollinearity; carrying it out with SPSS is simple, fast, and accurate.
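    The mechanics behind the SPSS procedure can be reproduced directly. The sketch below (synthetic data, numpy only, not the SPSS implementation) follows the standard PCR recipe: standardize the predictors, extract principal components from the correlation matrix, regress on the leading components, and map the component coefficients back to the original predictors.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Two highly collinear predictors plus one independent predictor.
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.05, size=n)   # near-duplicate of x1
x3 = rng.normal(size=n)
X = np.column_stack([x1, x2, x3])
y = 2 * x1 + 1 * x3 + rng.normal(scale=0.5, size=n)

# 1. Standardize predictors and centre the response.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
yc = y - y.mean()

# 2. Principal components of the predictor correlation matrix.
eigval, eigvec = np.linalg.eigh(np.corrcoef(Xs, rowvar=False))
order = np.argsort(eigval)[::-1]           # sort by explained variance
eigval, eigvec = eigval[order], eigvec[:, order]

# 3. Keep the leading components (here: drop the smallest, which carries
#    almost no variance because x1 and x2 are nearly identical).
k = 2
Z = Xs @ eigvec[:, :k]

# 4. Regress y on the retained components ...
gamma, *_ = np.linalg.lstsq(Z, yc, rcond=None)

# 5. ... and map back to coefficients of the standardized predictors.
beta_std = eigvec[:, :k] @ gamma
print("standardized PCR coefficients:", beta_std)
```

    With all components retained, PCR reproduces ordinary least squares exactly; dropping the smallest component is what stabilizes the estimates under multicollinearity.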

  5. Energy flux correlations and moving mirrors

    International Nuclear Information System (INIS)

    Ford, L.H.; Roman, Thomas A.

    2004-01-01

    We study the quantum stress tensor correlation function for a massless scalar field in a flat two-dimensional spacetime containing a moving mirror. We construct the correlation functions for right-moving and left-moving fluxes for an arbitrary trajectory, and then specialize them to the case of a mirror trajectory for which the expectation value of the stress tensor describes a pair of delta-function pulses, one of negative energy and one of positive energy. The flux correlation function describes the fluctuations around this mean stress tensor, and reveals subtle changes in the correlations between regions where the mean flux vanishes

  6. The difference between alternative averages

    Directory of Open Access Journals (Sweden)

    James Vaupel

    2012-09-01

    Full Text Available BACKGROUND Demographers have long been interested in how compositional change, e.g., change in age structure, affects population averages. OBJECTIVE We want to deepen understanding of how compositional change affects population averages. RESULTS The difference between two averages of a variable, calculated using alternative weighting functions, equals the covariance between the variable and the ratio of the weighting functions, divided by the average of the ratio. We compare weighted and unweighted averages and also provide examples of use of the relationship in analyses of fertility and mortality. COMMENTS Other uses of covariances in formal demography are worth exploring.
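    The stated relationship is an exact identity and can be checked numerically. In the sketch below (arbitrary synthetic values, numpy only), the covariance is computed with the first weighting function, and the ratio is the second weighting function over the first.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(50, 10, 1000)      # a variable, e.g. an age-specific rate
w1 = rng.uniform(0.5, 1.5, 1000)  # first weighting function
w2 = rng.uniform(0.5, 1.5, 1000)  # second weighting function

avg1 = np.average(x, weights=w1)
avg2 = np.average(x, weights=w2)

# Ratio of the weighting functions, and its w1-weighted average.
r = w2 / w1
r_bar = np.average(r, weights=w1)

# w1-weighted covariance between the variable and the ratio.
cov_xr = np.average((x - avg1) * (r - r_bar), weights=w1)

# The identity: difference of averages = covariance / average ratio.
lhs = avg2 - avg1
rhs = cov_xr / r_bar
print(lhs, rhs)
```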

  7. Logistic regression models

    CERN Document Server

    Hilbe, Joseph M

    2009-01-01

    This book really does cover everything you ever wanted to know about logistic regression … with updates available on the author's website. Hilbe, a former national athletics champion, philosopher, and expert in astronomy, is a master at explaining statistical concepts and methods. Readers familiar with his other expository work will know what to expect-great clarity.The book provides considerable detail about all facets of logistic regression. No step of an argument is omitted so that the book will meet the needs of the reader who likes to see everything spelt out, while a person familiar with some of the topics has the option to skip "obvious" sections. The material has been thoroughly road-tested through classroom and web-based teaching. … The focus is on helping the reader to learn and understand logistic regression. The audience is not just students meeting the topic for the first time, but also experienced users. I believe the book really does meet the author's goal … .-Annette J. Dobson, Biometric...

  8. Small sample GEE estimation of regression parameters for longitudinal data.

    Science.gov (United States)

    Paul, Sudhir; Zhang, Xuemao

    2014-09-28

    Longitudinal (clustered) response data arise in many bio-statistical applications which, in general, cannot be assumed to be independent. Generalized estimating equation (GEE) is a widely used method to estimate marginal regression parameters for correlated responses. The advantage of the GEE is that the estimates of the regression parameters are asymptotically unbiased even if the correlation structure is misspecified, although their small sample properties are not known. In this paper, two bias adjusted GEE estimators of the regression parameters in longitudinal data are obtained when the number of subjects is small. One is based on a bias correction, and the other is based on a bias reduction. Simulations show that the performances of both the bias-corrected methods are similar in terms of bias, efficiency, coverage probability, average coverage length, impact of misspecification of correlation structure, and impact of cluster size on bias correction. Both these methods show superior properties over the GEE estimates for small samples. Further, analysis of data involving a small number of subjects also shows improvement in bias, MSE, standard error, and length of the confidence interval of the estimates by the two bias adjusted methods over the GEE estimates. For small to moderate sample sizes (N ≤50), either of the bias-corrected methods GEEBc and GEEBr can be used. However, the method GEEBc should be preferred over GEEBr, as the former is computationally easier. For large sample sizes, the GEE method can be used. Copyright © 2014 John Wiley & Sons, Ltd.

  9. Comparison of regression coefficient and GIS-based methodologies for regional estimates of forest soil carbon stocks

    International Nuclear Information System (INIS)

    Elliott Campbell, J.; Moen, Jeremie C.; Ney, Richard A.; Schnoor, Jerald L.

    2008-01-01

    Estimates of forest soil organic carbon (SOC) have applications in carbon science, soil quality studies, carbon sequestration technologies, and carbon trading. Forest SOC has been modeled using a regression coefficient methodology that applies mean SOC densities (mass/area) to broad forest regions. A higher resolution model is based on an approach that employs a geographic information system (GIS) with soil databases and satellite-derived landcover images. Despite this advancement, the regression approach remains the basis of current state and federal level greenhouse gas inventories. Both approaches are analyzed in detail for Wisconsin forest soils from 1983 to 2001, applying rigorous error-fixing algorithms to soil databases. Resulting SOC stock estimates are 20% larger when determined using the GIS method rather than the regression approach. Average annual rates of increase in SOC stocks are 3.6 and 1.0 million metric tons of carbon per year for the GIS and regression approaches respectively. - Large differences in estimates of soil organic carbon stocks and annual changes in stocks for Wisconsin forestlands indicate a need for validation from forthcoming forest surveys

  10. Reliable classification of moving waste materials with LIBS in concrete recycling.

    Science.gov (United States)

    Xia, Han; Bakker, M C M

    2014-03-01

    Effective discrimination between different waste materials is of paramount importance for inline quality inspection of recycle concrete aggregates from demolished buildings. The moving targeted materials in the concrete waste stream are wood, PVC, gypsum block, glass, brick, steel rebar, aggregate and cement paste. For each material, up to three different types were considered, while thirty particles of each material were selected. Proposed is a reliable classification methodology based on integration of the LIBS spectral emissions in a fixed time window, starting from the deployment of the laser shot. PLS-DA (multi class) and the hybrid combination PCA-Adaboost (binary class) were investigated as efficient classifiers. In addition, mean centre and auto scaling approaches were compared for both classifiers. Using 72 training spectra and 18 test spectra per material, each averaged by ten shots, only PLS-DA achieved full discrimination, and the mean centre approach made it slightly more robust. Continuing with PLS-DA, the relation between data averaging and convergence to 0.3% average error was investigated using 9-fold cross-validations. Single-shot PLS-DA presented the highest challenge and most desirable methodology, which converged with 59 PC. The degree of success in practical testing will depend on the quality of the training set and the implications of the possibly remaining false positives. © 2013 Published by Elsevier B.V.

  11. Valuing avoided morbidity using meta-regression analysis: what can health status measures and QALYs tell us about WTP?

    Science.gov (United States)

    Van Houtven, George; Powers, John; Jessup, Amber; Yang, Jui-Chen

    2006-08-01

    Many economists argue that willingness-to-pay (WTP) measures are most appropriate for assessing the welfare effects of health changes. Nevertheless, the health evaluation literature is still dominated by studies estimating nonmonetary health status measures (HSMs), which are often used to assess changes in quality-adjusted life years (QALYs). Using meta-regression analysis, this paper combines results from both WTP and HSM studies applied to acute morbidity, and it tests whether a systematic relationship exists between HSM and WTP estimates. We analyze over 230 WTP estimates from 17 different studies and find evidence that QALY-based estimates of illness severity--as measured by the Quality of Well-Being (QWB) Scale--are significant factors in explaining variation in WTP, as are changes in the duration of illness and the average income and age of the study populations. In addition, we test and reject the assumption of a constant WTP per QALY gain. We also demonstrate how the estimated meta-regression equations can serve as benefit transfer functions for policy analysis. By specifying the change in duration and severity of the acute illness and the characteristics of the affected population, we apply the regression functions to predict average WTP per case avoided. Copyright 2006 John Wiley & Sons, Ltd.

  12. Logistic regression applied to natural hazards: rare event logistic regression with replications

    Science.gov (United States)

    Guns, M.; Vanacker, V.

    2012-06-01

    Statistical analysis of natural hazards needs particular attention, as most of these phenomena are rare events. This study shows that the ordinary rare event logistic regression, as it is now commonly used in geomorphologic studies, does not always lead to a robust detection of controlling factors, as the results can be strongly sample-dependent. In this paper, we introduce some concepts of Monte Carlo simulations in rare event logistic regression. This technique, so-called rare event logistic regression with replications, combines the strength of probabilistic and statistical methods, and allows overcoming some of the limitations of previous developments through robust variable selection. This technique was here developed for the analyses of landslide controlling factors, but the concept is widely applicable for statistical analyses of natural hazards.
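    The core idea, refitting the rare-event logistic model on many resamples and keeping only factors that are robustly significant, can be sketched as follows. This is an illustrative numpy implementation on synthetic data, not the authors' exact procedure; the Newton-Raphson fitter, the bootstrap resampling scheme, and the |z| > 1.96 selection rule are all assumptions made for the example.

```python
import numpy as np

def logit_fit(X, y, iters=25):
    """Maximum-likelihood logistic regression via Newton-Raphson."""
    beta = np.zeros(X.shape[1])
    H = np.eye(X.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-np.clip(X @ beta, -30, 30)))
        W = p * (1 - p)
        H = X.T @ (X * W[:, None])               # observed information
        beta = beta + np.linalg.solve(H, X.T @ (y - p))
    se = np.sqrt(np.diag(np.linalg.inv(H)))      # Wald standard errors
    return beta, se

rng = np.random.default_rng(7)
n = 600
x1 = rng.normal(size=n)          # genuine controlling factor
x2 = rng.normal(size=n)          # irrelevant candidate factor
X = np.column_stack([np.ones(n), x1, x2])
p_true = 1 / (1 + np.exp(-(-3.0 + 1.5 * x1)))    # rare events, low base rate
y = (rng.uniform(size=n) < p_true).astype(float)

# Replications: refit on bootstrap resamples and record how often each
# candidate factor comes out significant (|z| > 1.96).
B = 200
hits = np.zeros(2)
for _ in range(B):
    idx = rng.integers(0, n, n)
    beta, se = logit_fit(X[idx], y[idx])
    hits += (np.abs(beta[1:] / se[1:]) > 1.96)

freq = hits / B
print("selection frequencies (x1, x2):", freq)
```

    A factor selected in nearly all replications is robust to sampling variability; one selected only occasionally is the kind of sample-dependent result the paper warns about.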

  13. Undergraduate grade point average and graduate record examination scores: the experience of one graduate nursing program.

    Science.gov (United States)

    Newton, Sarah E; Moore, Gary

    2007-01-01

    Graduate nursing programs frequently use undergraduate grade point average (UGPA) and Graduate Record Examination (GRE) scores for admission decisions. The literature indicates that both UGPA and GRE scores are predictive of graduate school success, but that UGPA may be the better predictor. If that is so, one must ask if both are necessary for graduate nursing admission decisions. This article presents research on one graduate nursing program's experience with UGPA and GRE scores and offers a perspective regarding their continued usefulness for graduate admission decisions. Data from 120 graduate students were examined, and regression analysis indicated that UGPA significantly predicted GRE verbal and quantitative scores (p < .05). Regression analysis also determined a UGPA score above which the GRE provided little additional useful data for graduate nursing admission decisions.

  14. Testing and modelling autoregressive conditional heteroskedasticity of streamflow processes

    Directory of Open Access Journals (Sweden)

    W. Wang

    2005-01-01

    Full Text Available Conventional streamflow models operate under the assumption of constant variance or season-dependent variances (e.g. ARMA (AutoRegressive Moving Average models for deseasonalized streamflow series and PARMA (Periodic AutoRegressive Moving Average models for seasonal streamflow series. However, with McLeod-Li test and Engle's Lagrange Multiplier test, clear evidences are found for the existence of autoregressive conditional heteroskedasticity (i.e. the ARCH (AutoRegressive Conditional Heteroskedasticity effect, a nonlinear phenomenon of the variance behaviour, in the residual series from linear models fitted to daily and monthly streamflow processes of the upper Yellow River, China. It is shown that the major cause of the ARCH effect is the seasonal variation in variance of the residual series. However, while the seasonal variation in variance can fully explain the ARCH effect for monthly streamflow, it is only a partial explanation for daily flow. It is also shown that while the periodic autoregressive moving average model is adequate in modelling monthly flows, no model is adequate in modelling daily streamflow processes because none of the conventional time series models takes the seasonal variation in variance, as well as the ARCH effect in the residuals, into account. Therefore, an ARMA-GARCH (Generalized AutoRegressive Conditional Heteroskedasticity error model is proposed to capture the ARCH effect present in daily streamflow series, as well as to preserve seasonal variation in variance in the residuals. The ARMA-GARCH error model combines an ARMA model for modelling the mean behaviour and a GARCH model for modelling the variance behaviour of the residuals from the ARMA model. Since the GARCH model is not followed widely in statistical hydrology, the work can be a useful addition in terms of statistical modelling of daily streamflow processes for the hydrological community.
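    The two-stage logic, fit a linear model for the mean and then test the residuals for conditional heteroskedasticity, can be sketched without any specialist library. Below is a minimal numpy illustration on simulated data (the AR and GARCH parameter values are arbitrary assumptions), with Engle's Lagrange Multiplier test implemented directly as m·R² from an auxiliary regression of squared residuals on their own lags.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2000

# Simulate an AR(1) series whose shocks follow GARCH(1,1), i.e. the
# conditional variance clusters in time.
h = np.empty(n); e = np.empty(n); x = np.empty(n)
h[0], e[0], x[0] = 1.0, 0.0, 0.0
for t in range(1, n):
    h[t] = 0.05 + 0.25 * e[t - 1] ** 2 + 0.70 * h[t - 1]
    e[t] = np.sqrt(h[t]) * rng.normal()
    x[t] = 0.6 * x[t - 1] + e[t]

# Step 1: fit the conditional mean, here an AR(1), by least squares.
X = np.column_stack([np.ones(n - 1), x[:-1]])
phi, *_ = np.linalg.lstsq(X, x[1:], rcond=None)
resid = x[1:] - X @ phi

# Step 2: Engle's LM test -- regress squared residuals on q of their own
# lags; under homoskedasticity LM = m * R^2 is chi-squared with q df.
q = 4
r2s = resid ** 2
m = len(r2s) - q
Z = np.column_stack([np.ones(m)] +
                    [r2s[q - j:len(r2s) - j] for j in range(1, q + 1)])
target = r2s[q:]
g, *_ = np.linalg.lstsq(Z, target, rcond=None)
ssr = np.sum((target - Z @ g) ** 2)
sst = np.sum((target - target.mean()) ** 2)
LM = m * (1 - ssr / sst)
print(f"LM = {LM:.1f} vs chi2(4) 5% critical value 9.49")
```

    A large LM statistic, as here, signals the ARCH effect that motivates adding a GARCH error model on top of the fitted ARMA mean.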

  15. A theory of traffic congestion at moving bottlenecks

    Energy Technology Data Exchange (ETDEWEB)

    Kerner, Boris S [Daimler AG, GR/PTF, HPC: G021, 71059 Sindelfingen (Germany); Klenov, Sergey L, E-mail: boris.kerner@daimler.co [Department of Physics, Moscow Institute of Physics and Technology, 141700 Dolgoprudny, Moscow Region (Russian Federation)

    2010-10-22

    The physics of traffic congestion occurring at a moving bottleneck on a multi-lane road is revealed based on the numerical analyses of vehicular traffic with a discrete stochastic traffic flow model in the framework of three-phase traffic theory. We find that there is a critical speed of a moving bottleneck at which traffic breakdown, i.e. a first-order phase transition from free flow to synchronized flow, occurs spontaneously at the moving bottleneck, if the flow rate upstream of the bottleneck is great enough. The greater the flow rate, the higher the critical speed of the moving bottleneck. A diagram of congested traffic patterns at the moving bottleneck is found, which shows regions in the flow-rate-moving-bottleneck-speed plane in which congested patterns emerge spontaneously or can be induced through large enough disturbances in an initial free flow. A comparison of features of traffic breakdown and resulting congested patterns at the moving bottleneck with known ones at an on-ramp (and other motionless) bottleneck is made. Nonlinear features of complex interactions and transformations of congested traffic patterns occurring at on- and off-ramp bottlenecks due to the existence of the moving bottleneck are found. The physics of the phenomenon of traffic congestion due to 'elephant racing' on a multi-lane road is revealed.

  16. A theory of traffic congestion at moving bottlenecks

    International Nuclear Information System (INIS)

    Kerner, Boris S; Klenov, Sergey L

    2010-01-01

    The physics of traffic congestion occurring at a moving bottleneck on a multi-lane road is revealed based on the numerical analyses of vehicular traffic with a discrete stochastic traffic flow model in the framework of three-phase traffic theory. We find that there is a critical speed of a moving bottleneck at which traffic breakdown, i.e. a first-order phase transition from free flow to synchronized flow, occurs spontaneously at the moving bottleneck, if the flow rate upstream of the bottleneck is great enough. The greater the flow rate, the higher the critical speed of the moving bottleneck. A diagram of congested traffic patterns at the moving bottleneck is found, which shows regions in the flow-rate-moving-bottleneck-speed plane in which congested patterns emerge spontaneously or can be induced through large enough disturbances in an initial free flow. A comparison of features of traffic breakdown and resulting congested patterns at the moving bottleneck with known ones at an on-ramp (and other motionless) bottleneck is made. Nonlinear features of complex interactions and transformations of congested traffic patterns occurring at on- and off-ramp bottlenecks due to the existence of the moving bottleneck are found. The physics of the phenomenon of traffic congestion due to 'elephant racing' on a multi-lane road is revealed.

  17. Modal Analysis of an Offshore Platform using Two Different ARMA Approaches

    DEFF Research Database (Denmark)

    Brincker, Rune; Andersen, P.; Martinez, M. E.

    1996-01-01

    In the present investigation, multi-channel response measurements on an offshore platform subjected to wave loads are analysed using Auto Regressive Moving Average (ARMA) models. Two different estimation schemes are used and the results are compared. In the first approach, a scalar ARMA model...

  18. Use of Time-Series, ARIMA Designs to Assess Program Efficacy.

    Science.gov (United States)

    Braden, Jeffery P.; And Others

    1990-01-01

    Illustrates use of time-series designs for determining efficacy of interventions with fictitious data describing drug-abuse prevention program. Discusses problems and procedures associated with time-series data analysis using Auto Regressive Integrated Moving Averages (ARIMA) models. Example illustrates application of ARIMA analysis for…

  19. Understanding logistic regression analysis.

    Science.gov (United States)

    Sperandei, Sandro

    2014-01-01

    Logistic regression is used to obtain odds ratios in the presence of more than one explanatory variable. The procedure is quite similar to multiple linear regression, with the exception that the response variable is binomial. The result is the impact of each variable on the odds ratio of the observed event of interest. The main advantage is that confounding effects are avoided by analyzing the association of all variables together. In this article, we explain the logistic regression procedure using examples to make it as simple as possible. After a definition of the technique, the basic interpretation of the results is highlighted, and then some special issues are discussed.
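    The link between logistic regression coefficients and odds ratios is easiest to see with a single binary exposure, where the maximum-likelihood fit has a closed form: the slope is exactly the log of the odds ratio from the 2×2 table. The counts below are hypothetical, chosen only for illustration.

```python
import math

# Hypothetical 2x2 table: rows = exposed/unexposed, cols = event yes/no.
a, b = 30, 70    # exposed:   30 events, 70 non-events
c, d = 10, 90    # unexposed: 10 events, 90 non-events

# Odds ratio straight from the table: (a*d) / (b*c).
odds_exposed = a / b
odds_unexposed = c / d
OR = odds_exposed / odds_unexposed

# With one binary covariate, the ML logistic fit is available in closed
# form: intercept = log-odds in the unexposed group, slope = log(OR).
beta0 = math.log(odds_unexposed)
beta1 = math.log(OR)

print(f"OR = {OR:.3f}, exp(beta1) = {math.exp(beta1):.3f}")
```

    With several explanatory variables there is no closed form, but the interpretation carries over: exponentiating each fitted coefficient gives that variable's odds ratio adjusted for the others.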

  20. Robust Determinants of Growth in Asian Developing Economies: A Bayesian Panel Data Model Averaging Approach

    OpenAIRE

    LEON-GONZALEZ, Roberto; VINAYAGATHASAN, Thanabalasingam

    2013-01-01

    This paper investigates the determinants of growth in the Asian developing economies. We use Bayesian model averaging (BMA) in the context of a dynamic panel data growth regression to overcome the uncertainty over the choice of control variables. In addition, we use a Bayesian algorithm to analyze a large number of competing models. Among the explanatory variables, we include a non-linear function of inflation that allows for threshold effects. We use an unbalanced panel data set of 27 Asian ...

  1. Predicting respiratory tumor motion with multi-dimensional adaptive filters and support vector regression

    International Nuclear Information System (INIS)

    Riaz, Nadeem; Wiersma, Rodney; Mao Weihua; Xing Lei; Shanker, Piyush; Gudmundsson, Olafur; Widrow, Bernard

    2009-01-01

    Intra-fraction tumor tracking methods can improve radiation delivery during radiotherapy sessions. Image acquisition for tumor tracking and subsequent adjustment of the treatment beam with gating or beam tracking introduces time latency and necessitates predicting the future position of the tumor. This study evaluates the use of multi-dimensional linear adaptive filters and support vector regression to predict the motion of lung tumors tracked at 30 Hz. We expand on the prior work of other groups who have looked at adaptive filters by using a general framework of a multiple-input single-output (MISO) adaptive system that uses multiple correlated signals to predict the motion of a tumor. We compare the performance of these two novel methods to conventional methods like linear regression and single-input, single-output adaptive filters. At 400 ms latency the average root-mean-square-errors (RMSEs) for the 14 treatment sessions studied using no prediction, linear regression, single-output adaptive filter, MISO and support vector regression are 2.58, 1.60, 1.58, 1.71 and 1.26 mm, respectively. At 1 s, the RMSEs are 4.40, 2.61, 3.34, 2.66 and 1.93 mm, respectively. We find that support vector regression most accurately predicts the future tumor position of the methods studied and can provide a RMSE of less than 2 mm at 1 s latency. Also, a multi-dimensional adaptive filter framework provides improved performance over single-dimension adaptive filters. Work is underway to combine these two frameworks to improve performance.

  2. Minimax Regression Quantiles

    DEFF Research Database (Denmark)

    Bache, Stefan Holst

    A new and alternative quantile regression estimator is developed and it is shown that the estimator is root n-consistent and asymptotically normal. The estimator is based on a minimax ‘deviance function’ and has asymptotically equivalent properties to the usual quantile regression estimator. It is, however, a different and therefore new estimator. It allows for both linear and nonlinear model specifications. A simple algorithm for computing the estimates is proposed. It seems to work quite well in practice, but whether it has theoretical justification is still an open question.

  3. THE GENDER PAY GAP IN VIETNAM, 1993-2002: A QUANTILE REGRESSION APPROACH

    OpenAIRE

    Pham, Hung T; Reilly, Barry

    2007-01-01

    This paper uses mean and quantile regression analysis to investigate the gender pay gap for the wage employed in Vietnam over the period 1993 to 2002. It finds that the Doi moi reforms appear to have been associated with a sharp reduction in gender pay gap disparities for the wage employed. The average gender pay gap in this sector halved between 1993 and 2002 with most of the contraction evident by 1998. There has also been a narrowing in the gender pay gap at most selected points of the con...

  4. The Gender Pay Gap In Vietnam, 1993-2002: A Quantile Regression Approach

    OpenAIRE

    Barry Reilly & T. Hung Pham

    2006-01-01

    This paper uses mean and quantile regression analysis to investigate the gender pay gap for the wage employed in Vietnam over the period 1993 to 2002. It finds that the Doi moi reforms have been associated with a sharp reduction in gender wage disparities for the wage employed. The average gender pay gap in this sector halved between 1993 and 2002 with most of the contraction evident by 1998. There has also been a contraction in the gender pay gap at most selected points of the conditional wage d...

  5. Regression with Sparse Approximations of Data

    DEFF Research Database (Denmark)

    Noorzad, Pardis; Sturm, Bob L.

    2012-01-01

    We propose sparse approximation weighted regression (SPARROW), a method for local estimation of the regression function that uses sparse approximation with a dictionary of measurements. SPARROW estimates the regression function at a point with a linear combination of a few regressands selected by a sparse approximation of the point in terms of the regressors. We show SPARROW can be considered a variant of \(k\)-nearest neighbors regression (\(k\)-NNR), and more generally, local polynomial kernel regression. Unlike \(k\)-NNR, however, SPARROW can adapt the number of regressors to use based...

  6. Development of planning level transportation safety tools using Geographically Weighted Poisson Regression.

    Science.gov (United States)

    Hadayeghi, Alireza; Shalaby, Amer S; Persaud, Bhagwant N

    2010-03-01

    A common technique used for the calibration of collision prediction models is the Generalized Linear Modeling (GLM) procedure with the assumption of Negative Binomial or Poisson error distribution. In this technique, fixed coefficients that represent the average relationship between the dependent variable and each explanatory variable are estimated. However, the stationary relationship assumed may hide some important spatial factors of the number of collisions at a particular traffic analysis zone. Consequently, the accuracy of such models for explaining the relationship between the dependent variable and the explanatory variables may be suspected since collision frequency is likely influenced by many spatially defined factors such as land use, demographic characteristics, and traffic volume patterns. The primary objective of this study is to investigate the spatial variations in the relationship between the number of zonal collisions and potential transportation planning predictors, using the Geographically Weighted Poisson Regression modeling technique. The secondary objective is to build on knowledge comparing the accuracy of Geographically Weighted Poisson Regression models to that of Generalized Linear Models. The results show that the Geographically Weighted Poisson Regression models are useful for capturing spatially dependent relationships and generally perform better than the conventional Generalized Linear Models. Copyright 2009 Elsevier Ltd. All rights reserved.
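    The essence of geographically weighted Poisson regression is to refit a Poisson GLM at each focal location, down-weighting distant observations with a distance kernel, so the coefficients are allowed to vary over space. The sketch below is a simplified one-dimensional illustration on synthetic data, not the calibration used in the study: the IRLS fitter, the Gaussian kernel, and the bandwidth are all assumptions made for the example.

```python
import numpy as np

def local_poisson(X, y, w, iters=30):
    """Poisson regression (log link) by kernel-weighted IRLS."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        eta = np.clip(X @ beta, -15, 15)
        mu = np.exp(eta)
        W = w * mu                        # IRLS working weights
        z = eta + (y - mu) / mu           # IRLS working response
        H = X.T @ (X * W[:, None])
        beta = np.linalg.solve(H, X.T @ (W * z))
    return beta

rng = np.random.default_rng(11)
n = 400
u = rng.uniform(0, 1, n)                  # zone location along one axis
x = rng.normal(size=n)                    # a planning predictor
# Spatially varying data-generating process: the slope drifts with location.
lam = np.exp(0.5 + (0.2 + 1.0 * u) * x)
y = rng.poisson(lam).astype(float)

X = np.column_stack([np.ones(n), x])

def gwpr_at(u0, bandwidth=0.15):
    """Local Poisson fit at focal location u0 with Gaussian kernel weights."""
    w = np.exp(-0.5 * ((u - u0) / bandwidth) ** 2)
    return local_poisson(X, y, w)

b_west, b_east = gwpr_at(0.1), gwpr_at(0.9)
print("local slope west:", b_west[1], "east:", b_east[1])
```

    A global (fixed-coefficient) Poisson GLM would report a single slope for the whole region, hiding exactly this kind of spatial variation.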

  7. Multiple regression technique for Pth degree polynomials with and without linear cross products

    Science.gov (United States)

    Davis, J. W.

    1973-01-01

    A multiple regression technique was developed by which the nonlinear behavior of specified independent variables can be related to a given dependent variable. The polynomial expression can be of Pth degree and can incorporate N independent variables. Two cases are treated such that mathematical models can be studied both with and without linear cross products. The resulting surface fits can be used to summarize trends for a given phenomenon and provide a mathematical relationship for subsequent analysis. To implement this technique, separate computer programs were developed for the case without linear cross products and for the case incorporating such cross products; these programs evaluate the various constants in the model regression equation. In addition, the significance of the estimated regression equation is considered, and the standard deviation, the F statistic, the maximum absolute percent error, and the average absolute percent error are evaluated. The computer programs and their manner of utilization are described. Sample problems are included to illustrate the use and capability of the technique, showing the output formats and typical plots comparing computer results to each set of input data.
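    The with/without cross-product distinction amounts to including or omitting interaction columns in the design matrix. The sketch below is a modern numpy illustration on synthetic data (the true surface and coefficient values are arbitrary assumptions): the same degree-2 fit is done twice, and the fit degrades noticeably when the x1·x2 column is omitted from a surface that genuinely contains it.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 300
x1 = rng.uniform(-1, 1, n)
x2 = rng.uniform(-1, 1, n)

# Assumed true surface: it contains a linear cross product (x1*x2) term.
y = (1.0 + 2.0 * x1 + 3.0 * x2 + 1.5 * x1 * x2 + 0.5 * x1 ** 2
     + rng.normal(0, 0.05, n))

# Degree-2 design matrices: without and with the x1*x2 cross product.
X_no_cross = np.column_stack([np.ones(n), x1, x2, x1 ** 2, x2 ** 2])
X_cross = np.column_stack([X_no_cross, x1 * x2])

def fit_rmse(X, y):
    """Least-squares fit; return root-mean-square error of the surface fit."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.sqrt(np.mean((y - X @ beta) ** 2))

r_no = fit_rmse(X_no_cross, y)
r_with = fit_rmse(X_cross, y)
print("RMSE without cross product:", r_no)
print("RMSE with cross product:   ", r_with)
```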

  8. Logistic regression applied to natural hazards: rare event logistic regression with replications

    Directory of Open Access Journals (Sweden)

    M. Guns

    2012-06-01

    Full Text Available Statistical analysis of natural hazards needs particular attention, as most of these phenomena are rare events. This study shows that the ordinary rare event logistic regression, as it is now commonly used in geomorphologic studies, does not always lead to a robust detection of controlling factors, as the results can be strongly sample-dependent. In this paper, we introduce some concepts of Monte Carlo simulations in rare event logistic regression. This technique, so-called rare event logistic regression with replications, combines the strength of probabilistic and statistical methods, and allows overcoming some of the limitations of previous developments through robust variable selection. This technique was here developed for the analyses of landslide controlling factors, but the concept is widely applicable for statistical analyses of natural hazards.


  9. Model averaging in the analysis of leukemia mortality among Japanese A-bomb survivors

    International Nuclear Information System (INIS)

    Richardson, David B.; Cole, Stephen R.

    2012-01-01

    Epidemiological studies often include numerous covariates, with a variety of possible approaches to control for confounding of the association of primary interest, as well as a variety of possible models for the exposure-response association of interest. Walsh and Kaiser (Radiat Environ Biophys 50:21-35, 2011) advocate a weighted averaging of the models, where the weights are a function of overall model goodness of fit and degrees of freedom. They apply this method to analyses of radiation-leukemia mortality associations among Japanese A-bomb survivors. We caution against such an approach, noting that the proposed model averaging approach prioritizes the inclusion of covariates that are strong predictors of the outcome, but which may be irrelevant as confounders of the association of interest, and penalizes adjustment for covariates that are confounders of the association of interest, but may contribute little to overall model goodness of fit. We offer a simple illustration of how this approach can lead to biased results. The proposed model averaging approach may also be suboptimal as a way to handle competing model forms for an exposure-response association of interest, given adjustment for the same set of confounders; alternative approaches, such as hierarchical regression, may provide a more useful way to stabilize risk estimates in this setting. (orig.)

  10. A simple approach to power and sample size calculations in logistic regression and Cox regression models.

    Science.gov (United States)

    Vaeth, Michael; Skovlund, Eva

    2004-06-15

    For a given regression problem it is possible to identify a suitably defined equivalent two-sample problem such that the power or sample size obtained for the two-sample problem also applies to the regression problem. For a standard linear regression model the equivalent two-sample problem is easily identified, but for generalized linear models and for Cox regression models the situation is more complicated. An approximately equivalent two-sample problem may, however, also be identified here. In particular, we show that for logistic regression and Cox regression models the equivalent two-sample problem is obtained by selecting two equally sized samples for which the parameters differ by a value equal to the slope times twice the standard deviation of the independent variable and further requiring that the overall expected number of events is unchanged. In a simulation study we examine the validity of this approach to power calculations in logistic regression and Cox regression models. Several different covariate distributions are considered for selected values of the overall response probability and a range of alternatives. For the Cox regression model we consider both constant and non-constant hazard rates. The results show that in general the approach is remarkably accurate even in relatively small samples. Some discrepancies are, however, found in small samples with few events and a highly skewed covariate distribution. Comparison with results based on alternative methods for logistic regression models with a single continuous covariate indicates that the proposed method is at least as good as its competitors. The method is easy to implement and therefore provides a simple way to extend the range of problems that can be covered by the usual formulas for power and sample size determination. Copyright 2004 John Wiley & Sons, Ltd.
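
The equivalent two-sample construction described in this record lends itself to a compact calculation. The sketch below (stdlib Python, normal approximation, hypothetical inputs) finds two group probabilities whose logits differ by the slope times twice the covariate standard deviation, holds the overall event probability fixed, and applies a standard two-proportion power formula; it illustrates the idea, not the authors' exact procedure.

```python
import math
from statistics import NormalDist

def expit(t):
    return 1.0 / (1.0 + math.exp(-t))

def logistic_power(beta, sd_x, p_overall, n, alpha=0.05):
    """Approximate power for the slope test in logistic regression via the
    equivalent two-sample problem: two equally sized groups whose logits
    differ by beta * 2 * sd_x, with the overall event probability held at
    p_overall."""
    delta = 2.0 * beta * sd_x
    # Bisect for the midpoint logit m so that the average event
    # probability of the two groups equals p_overall.
    lo, hi = -20.0, 20.0
    for _ in range(100):
        m = 0.5 * (lo + hi)
        if 0.5 * (expit(m - delta / 2) + expit(m + delta / 2)) < p_overall:
            lo = m
        else:
            hi = m
    p1, p2 = expit(m - delta / 2), expit(m + delta / 2)
    # Two-sample comparison of proportions with n/2 subjects per group.
    se = math.sqrt((p1 * (1 - p1) + p2 * (1 - p2)) / (n / 2.0))
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    return NormalDist().cdf(abs(p2 - p1) / se - z_crit)

power = logistic_power(beta=0.5, sd_x=1.0, p_overall=0.3, n=200)
```

With a null slope the function returns roughly alpha/2, as expected for a one-tail rejection probability under this approximation.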

  11. Moving in Circles

    DEFF Research Database (Denmark)

    Simonsen, Gunvor

    2008-01-01

    The article examines the development of African diaspora history during the last fifty years. It outlines the move from a focus on African survivals to a focus on deep-rooted cultural principles and back again to a revived interest in concrete cultural transfers from Africa to the Americas... This circular movement can be explained by a combination of elements characterizing African Atlantic and black Atlantic history. Among them is a lack of attention to questions of periodisation and change. Likewise, it has proven difficult to conceptualize Africa and America at one and the same time... as characterized by cultural diversity and variation. Moreover, the field has been haunted by a tendency of moving too easily from descriptive evidence to conclusions about African identity in the Americas. A promising way to overcome these problems, it is suggested, is to develop research that focuses on single...

  12. Peak and averaged bicoherence for different EEG patterns during general anaesthesia

    Directory of Open Access Journals (Sweden)

    Myles Paul

    2010-11-01

    Background: Changes in nonlinear neuronal mechanisms of EEG generation in the course of general anaesthesia have been extensively investigated in the research literature. A number of EEG signal properties capable of tracking these changes have been reported and employed in anaesthetic depth monitors. The degree of phase coupling between different spectral components is a marker of nonlinear EEG generators and is claimed to be an important aspect of BIS. While bicoherence is the most direct measure of phase coupling, according to published research it is not directly used in the calculation of BIS, and only limited studies of its association with anaesthetic depth and level of consciousness have been published. This paper investigates bicoherence parameters across equal-band and unequal-band bifrequency regions during different states of anaesthetic depth relating to routine clinical anaesthesia, as determined by visual inspection of EEG. Methods: 41 subjects scheduled for day surgery under general anaesthesia were recruited into this study. EEG bicoherence was analysed using average and smoothed-peak estimates calculated over different regions on the bifrequency plane. Statistical analysis of associations between anaesthetic depth/state of consciousness and bicoherence estimates included linear regression using generalised linear mixed effects models (GLMs), ROC curves and prediction probability (Pk). Results: Bicoherence estimates for the δ_θ region on the bifrequency plane were more sensitive to anaesthetic depth changes than other bifrequency regions. Smoothed-peak bicoherence displayed stronger associations than average bicoherence. Excluding burst suppression and large transients, the δ_θ peak bicoherence was significantly associated with level of anaesthetic depth (z = 25.74, p < 0.001, R2 = 0.191). Estimates of Pk for this parameter were 0.889 (0.867-0.911) and 0.709 (0.689-0.729) respectively for conscious states and anaesthetic depth...
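
As an illustration of the phase-coupling idea in this record, a minimal bicoherence estimator can be written with a naive single-bin DFT (stdlib Python, synthetic signals, hypothetical bin choices; real EEG pipelines use windowed FFTs and many bifrequency pairs).

```python
import cmath, math, random

def dft_bin(x, k):
    """Single DFT coefficient at integer bin k (naive O(N) evaluation)."""
    n = len(x)
    return sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))

def bicoherence(segments, k1, k2):
    """Normalized bispectrum magnitude at (k1, k2), averaged over segments.

    Equals 1 when the phase at bin k1+k2 is the sum of the phases at k1
    and k2 in every segment (perfect quadratic phase coupling), and tends
    towards 0 when the three components are phase-independent."""
    num = 0.0 + 0.0j
    den = 0.0
    for seg in segments:
        a, b, c = dft_bin(seg, k1), dft_bin(seg, k2), dft_bin(seg, k1 + k2)
        num += a * b * c.conjugate()
        den += abs(a * b) * abs(c)
    return abs(num) / den

random.seed(1)
N, M, K1, K2 = 64, 40, 5, 8   # segment length, segment count, test bins

def make_segments(coupled):
    segs = []
    for _ in range(M):
        p1, p2 = random.uniform(0, 2 * math.pi), random.uniform(0, 2 * math.pi)
        p3 = p1 + p2 if coupled else random.uniform(0, 2 * math.pi)
        segs.append([math.cos(2 * math.pi * K1 * t / N + p1)
                     + math.cos(2 * math.pi * K2 * t / N + p2)
                     + math.cos(2 * math.pi * (K1 + K2) * t / N + p3)
                     for t in range(N)])
    return segs

b_coupled = bicoherence(make_segments(True), K1, K2)
b_random = bicoherence(make_segments(False), K1, K2)
```

The phase-coupled signal yields bicoherence at (K1, K2) close to 1, while independently phased components average towards 0 as the number of segments grows.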

  13. Codimension-two bifurcation of axial loaded beam bridge subjected to an infinite series of moving loads

    International Nuclear Information System (INIS)

    Yang Xin-Wei; Tian Rui-Lan; Li Hai-Tao

    2013-01-01

    A novel model is proposed which comprises a beam bridge subjected to an axial load and an infinite series of moving loads. The moving loads, whose spacing between neighbouring ones equals the length of the beam bridge, coupled with the axial force, can lead the vibration of the beam bridge to a codimension-two bifurcation. Of particular concern is a parameter regime where non-persistence set regions undergo a transition to persistence regions. The boundary of each stripe represents a bifurcation which can drive the system away from one kind of dynamics into another, causing damage due to the resulting amplitude jumps. The Galerkin method, averaging method, invertible linear transformation, and near-identity nonlinear transformations are used to obtain the universal unfolding for the codimension-two bifurcation of the mid-span deflection. The efficiency of the theoretical analysis obtained in this paper is verified via numerical simulations. (general)

  14. On the XFEL Schrödinger Equation: Highly Oscillatory Magnetic Potentials and Time Averaging

    KAUST Repository

    Antonelli, Paolo

    2014-01-14

    We analyse a nonlinear Schrödinger equation for the time-evolution of the wave function of an electron beam, interacting self-consistently through a Hartree-Fock nonlinearity and through the repulsive Coulomb interaction of an atomic nucleus. The electrons are supposed to move under the action of a time dependent, rapidly periodically oscillating electromagnetic potential. This can be considered a simplified effective single particle model for an X-ray free electron laser. We prove the existence and uniqueness for the Cauchy problem and the convergence of wave-functions to corresponding solutions of a Schrödinger equation with a time-averaged Coulomb potential in the high frequency limit for the oscillations of the electromagnetic potential. © 2014 Springer-Verlag Berlin Heidelberg.

  15. Moving ring reactor 'Karin-1'

    International Nuclear Information System (INIS)

    1983-12-01

    The conceptual design of a moving ring reactor ''Karin-1'' has been carried out to advance fusion system design, to clarify the research and development problems, and to decide their priority. In order to attain these objectives, a D-T reactor with a tritium-breeding blanket is designed, a commercial reactor with a net power output of 500 MWe is designed, the compatibility of plasma physics with fusion engineering is demonstrated, and other design guidelines are indicated. A moving ring reactor is composed mainly of three parts. In the first, the formation section, a plasma ring is formed and heated up to ignition temperature. The compact-torus plasma ring is transported from the formation section through the next, the burning section, to generate fusion power. The plasma ring then moves into the last, the recovery section, where the energy and particles of the plasma ring are recovered. The outline of the moving ring reactor ''Karin-1'' is described. As a candidate material for the first wall, SiC was adopted to reduce the MHD effect and to minimize the interaction with neutrons and charged particles. A thin metal lining was applied to the SiC surface to solve the problem of compatibility with the lithium blanket. Plasma physics, the engineering aspects and the items of research and development are described. (Kako, I.)

  16. Determination of benzo(a)pyrene content in PM10 using regression methods

    Directory of Open Access Journals (Sweden)

    Jacek Gębicki

    2015-12-01

    The paper presents an attempt to apply multidimensional linear regression to the estimation of an empirical model describing the factors influencing B(a)P content in suspended dust PM10 in the Olsztyn and Elbląg city regions between 2010 and 2013. During this period the annual average concentration of B(a)P in PM10 exceeded the admissible level 1.5-3 times. The conducted investigations confirm that the reasons for the B(a)P concentration increase are low-efficiency individual home heating stations or low-temperature heat sources, which are responsible for so-called low emission during the heating period. Dependencies between the following quantities were analysed: concentration of PM10 dust in air, air temperature, wind velocity, and air humidity. A measure of model fitting to the actual B(a)P concentration in PM10 was the coefficient of determination of the model. Application of multidimensional linear regression yielded equations characterized by high values of the coefficient of determination, especially during the heating season. This parameter ranged from 0.54 to 0.80 during the analyzed period.
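
A minimal version of such a multidimensional (multiple) linear regression with its coefficient of determination can be sketched as follows; the predictor names and coefficients are hypothetical stand-ins, not the paper's fitted values.

```python
import random

def ols(X, y):
    """Ordinary least squares via normal equations and Gaussian elimination."""
    p = len(X[0])
    A = [[sum(row[i] * row[j] for row in X) for j in range(p)] for i in range(p)]
    b = [sum(row[i] * yi for row, yi in zip(X, y)) for i in range(p)]
    # Gaussian elimination with partial pivoting.
    for c in range(p):
        piv = max(range(c, p), key=lambda r: abs(A[r][c]))
        A[c], A[piv] = A[piv], A[c]
        b[c], b[piv] = b[piv], b[c]
        for r in range(c + 1, p):
            f = A[r][c] / A[c][c]
            for k in range(c, p):
                A[r][k] -= f * A[c][k]
            b[r] -= f * b[c]
    beta = [0.0] * p
    for c in reversed(range(p)):
        beta[c] = (b[c] - sum(A[c][k] * beta[k] for k in range(c + 1, p))) / A[c][c]
    return beta

def r_squared(X, y, beta):
    """Coefficient of determination of the fitted model."""
    fitted = [sum(b * v for b, v in zip(beta, row)) for row in X]
    ybar = sum(y) / len(y)
    ss_res = sum((yi - fi) ** 2 for yi, fi in zip(y, fitted))
    ss_tot = sum((yi - ybar) ** 2 for yi in y)
    return 1.0 - ss_res / ss_tot

# Hypothetical B(a)P model: intercept, PM10 concentration, temperature, wind.
random.seed(0)
X, y = [], []
for _ in range(200):
    pm10 = random.uniform(20, 80)
    temp = random.uniform(-10, 25)
    wind = random.uniform(0, 10)
    X.append([1.0, pm10, temp, wind])
    y.append(0.5 + 0.04 * pm10 - 0.03 * temp - 0.05 * wind + random.gauss(0, 0.1))

beta = ols(X, y)
r2 = r_squared(X, y, beta)
```

On this synthetic data the regression recovers the generating coefficients and a high coefficient of determination, mirroring the heating-season fits reported above.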

  17. Moving vertices to make drawings plane

    NARCIS (Netherlands)

    Goaoc, X.; Kratochvil, J.; Okamoto, Y.; Shin, C.S.; Wolff, A.; Hong, S.K.; Nishizeki, T.; Quan, W.

    2008-01-01

    In John Tantalo’s on-line game Planarity the player is given a non-plane straight-line drawing of a planar graph. The aim is to make the drawing plane as quickly as possible by moving vertices. In this paper we investigate the related problem MinMovedVertices which asks for the minimum number of

  18. Representativeness of single lidar stations for zonally averaged ozone profiles, their trends and attribution to proxies

    Directory of Open Access Journals (Sweden)

    C. Zerefos

    2018-05-01

    This paper focuses on the representativeness of single lidar stations for zonally averaged ozone profile variations over the middle and upper stratosphere. From the lower to the upper stratosphere, ozone profiles from single or grouped lidar stations correlate well with zonal means calculated from the Solar Backscatter Ultraviolet Radiometer (SBUV) satellite overpasses. The best representativeness with significant correlation coefficients is found within ±15° of latitude circles north or south of any lidar station. This paper also includes a multivariate linear regression (MLR) analysis on the relative importance of proxy time series for explaining variations in the vertical ozone profiles. Studied proxies represent variability due to influences outside of the earth system (solar cycle) and within the earth system, i.e. dynamic processes (the Quasi Biennial Oscillation, QBO; the Arctic Oscillation, AO; the Antarctic Oscillation, AAO; the El Niño Southern Oscillation, ENSO), those due to volcanic aerosol (aerosol optical depth, AOD), tropopause height changes (including global warming) and those influences due to anthropogenic contributions to atmospheric chemistry (equivalent effective stratospheric chlorine, EESC). Ozone trends are estimated, with and without removal of proxies, from the total available 1980 to 2015 SBUV record. Except for the chemistry related proxy (EESC) and its orthogonal function, the removal of the other proxies does not alter the significance of the estimated long-term trends. At heights above 15 hPa an inflection point between 1997 and 1999 marks the end of significant negative ozone trends, followed by a recent period between 1998 and 2015 with positive ozone trends. At heights between 15 and 40 hPa the pre-1998 negative ozone trends tend to become less significant as we move towards 2015, below which the lower stratosphere ozone decline continues in agreement with findings of recent literature.

  19. Representativeness of single lidar stations for zonally averaged ozone profiles, their trends and attribution to proxies

    Science.gov (United States)

    Zerefos, Christos; Kapsomenakis, John; Eleftheratos, Kostas; Tourpali, Kleareti; Petropavlovskikh, Irina; Hubert, Daan; Godin-Beekmann, Sophie; Steinbrecht, Wolfgang; Frith, Stacey; Sofieva, Viktoria; Hassler, Birgit

    2018-05-01

    This paper focuses on the representativeness of single lidar stations for zonally averaged ozone profile variations over the middle and upper stratosphere. From the lower to the upper stratosphere, ozone profiles from single or grouped lidar stations correlate well with zonal means calculated from the Solar Backscatter Ultraviolet Radiometer (SBUV) satellite overpasses. The best representativeness with significant correlation coefficients is found within ±15° of latitude circles north or south of any lidar station. This paper also includes a multivariate linear regression (MLR) analysis on the relative importance of proxy time series for explaining variations in the vertical ozone profiles. Studied proxies represent variability due to influences outside of the earth system (solar cycle) and within the earth system, i.e. dynamic processes (the Quasi Biennial Oscillation, QBO; the Arctic Oscillation, AO; the Antarctic Oscillation, AAO; the El Niño Southern Oscillation, ENSO), those due to volcanic aerosol (aerosol optical depth, AOD), tropopause height changes (including global warming) and those influences due to anthropogenic contributions to atmospheric chemistry (equivalent effective stratospheric chlorine, EESC). Ozone trends are estimated, with and without removal of proxies, from the total available 1980 to 2015 SBUV record. Except for the chemistry related proxy (EESC) and its orthogonal function, the removal of the other proxies does not alter the significance of the estimated long-term trends. At heights above 15 hPa an inflection point between 1997 and 1999 marks the end of significant negative ozone trends, followed by a recent period between 1998 and 2015 with positive ozone trends. At heights between 15 and 40 hPa the pre-1998 negative ozone trends tend to become less significant as we move towards 2015, below which the lower stratosphere ozone decline continues in agreement with findings of recent literature.
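
The proxy-removal step behind such trend estimates can be illustrated with a toy series (hypothetical trend, proxy and noise values; the paper's MLR fits many proxies simultaneously rather than sequentially).

```python
import math, random

def slope(x, y):
    """Simple least-squares slope of y on x."""
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    sxx = sum((xi - xbar) ** 2 for xi in x)
    return sxy / sxx

random.seed(3)
years = [1980 + i / 12 for i in range(432)]                      # monthly, 1980-2015
qbo = [math.sin(2 * math.pi * (t - 1980) / 2.3) for t in years]  # ~28-month proxy
# Synthetic ozone anomaly: linear trend + proxy signal + noise (arbitrary units).
ozone = [-0.30 * (t - 1980) + 1.5 * q + random.gauss(0, 0.5)
         for t, q in zip(years, qbo)]

trend_raw = slope(years, ozone)
# Remove the proxy signal first, then re-estimate the trend.
gamma = slope(qbo, ozone)
residual = [o - gamma * q for o, q in zip(ozone, qbo)]
trend_deproxied = slope(years, residual)
```

Because the oscillating proxy is nearly orthogonal to the linear trend, its removal barely changes the trend estimate, which is the behaviour the record reports for the dynamic proxies.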

  20. Autowaves in moving excitable media

    Directory of Open Access Journals (Sweden)

    V.A.Davydov

    2004-01-01

    Within the framework of the kinematic theory of autowaves we suggest a method for the analytic description of stationary autowave structures appearing at the boundary between moving and fixed excitable media. The front breakdown phenomenon is predicted for such structures. Autowave refraction and, particularly, one-sided "total reflection" at the boundary are considered. The obtained analytical results are confirmed by computer simulations. Prospects of the proposed method for further studies of autowave dynamics in moving excitable media are discussed.

  1. Infinite games with uncertain moves

    Directory of Open Access Journals (Sweden)

    Nicholas Asher

    2013-03-01

    We study infinite two-player games where one of the players is unsure about the set of moves available to the other player. In particular, the set of moves of the other player is a strict superset of what she assumes it to be. We explore what happens to sets in various levels of the Borel hierarchy under such a situation. We show that the sets at every alternate level of the hierarchy jump to the next higher level.

  2. Post-processing through linear regression

    Science.gov (United States)

    van Schaeybroeck, B.; Vannitsem, S.

    2011-03-01

    Various post-processing techniques are compared for both deterministic and ensemble forecasts, all based on linear regression between forecast data and observations. In order to evaluate the quality of the regression methods, three criteria are proposed, related to the effective correction of forecast error, the optimal variability of the corrected forecast and multicollinearity. The regression schemes under consideration include the ordinary least-square (OLS) method, a new time-dependent Tikhonov regularization (TDTR) method, the total least-square method, a new geometric-mean regression (GM), a recently introduced error-in-variables (EVMOS) method and, finally, a "best member" OLS method. The advantages and drawbacks of each method are clarified. These techniques are applied in the context of the Lorenz 63 system, whose model version is affected by both initial condition and model errors. For short forecast lead times, the number and choice of predictors plays an important role. Contrary to the other techniques, GM degrades when the number of predictors increases. At intermediate lead times, linear regression is unable to provide corrections to the forecast and can sometimes degrade the performance (GM and the best member OLS with noise). At long lead times the regression schemes (EVMOS, TDTR), which yield the correct variability and the largest correlation between ensemble error and spread, should be preferred.
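
A time-independent Tikhonov (ridge) correction, used here as a simplified stand-in for the TDTR scheme named in this record, can be contrasted with OLS on nearly collinear predictors; all data and the penalty value are hypothetical.

```python
import random

def solve2(a11, a12, a22, b1, b2):
    """Solve the symmetric 2x2 system [[a11, a12], [a12, a22]] x = [b1, b2]."""
    det = a11 * a22 - a12 * a12
    return ((b1 * a22 - b2 * a12) / det, (b2 * a11 - b1 * a12) / det)

def fit(x1, x2, y, lam=0.0):
    """Least-squares fit of y on (x1, x2); lam > 0 adds a Tikhonov penalty
    lam * ||beta||^2 (no standardization, which a real scheme would add)."""
    a11 = sum(v * v for v in x1) + lam
    a22 = sum(v * v for v in x2) + lam
    a12 = sum(u * v for u, v in zip(x1, x2))
    b1 = sum(u * v for u, v in zip(x1, y))
    b2 = sum(u * v for u, v in zip(x2, y))
    return solve2(a11, a12, a22, b1, b2)

random.seed(7)
n = 100
x1 = [random.gauss(0, 1) for _ in range(n)]
x2 = [v + random.gauss(0, 0.05) for v in x1]   # nearly collinear second predictor
y = [u + random.gauss(0, 0.3) for u in x1]     # truth: y depends on x1 only

b_ols = fit(x1, x2, y)
b_ridge = fit(x1, x2, y, lam=5.0)

norm = lambda b: b[0] ** 2 + b[1] ** 2
```

OLS still estimates the combined effect of the two predictors accurately, but splits it between them erratically; the Tikhonov penalty shrinks the coefficient vector, which is why regularization helps when predictors are multicollinear.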

  3. Can math beat gamers in Quantum Moves?

    OpenAIRE

    Sels, Dries

    2017-01-01

    Abstract: In a recent work on quantum state preparation, Sørensen and co-workers [Nature (London) 532, 210 (2016)] explore the possibility of using video games to help design quantum control protocols. The authors present a game called Quantum Moves (https://www.scienceathome.org/games/quantum-moves/) in which gamers have to move an atom from A to B by means of optical tweezers. They report that players succeed where purely numerical optimization fails. Moreover, by harnessing the player str...

  4. Monthly reservoir inflow forecasting using a new hybrid SARIMA ...

    Indian Academy of Sciences (India)

    Seasonal autoregressive integrated moving average (SARIMA) models have been frequently ... studied the perfor- mance of stochastic models against ANN models in ..... the model parameters and Se is the standard error. 2.1.3 Nonlinear term ...... ison of regression, ARIMA and ANN models for reservoir inflow forecasting ...

  5. Modal Analysis of an Offshore Platform Using Two Different ARMA Approaches

    DEFF Research Database (Denmark)

    Brincker, Rune; Andersen, P.; Martinez, M. E.

    In the present investigation, multi-channel response measurements on an offshore platform subjected to wave loads are analysed using Auto-Regressive Moving Average (ARMA) models. Two different estimation schemes are used and the results are compared. In the first approach, a scalar ARMA model is us...
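
The idea behind such ARMA-based modal analysis, reading modal parameters off fitted autoregressive coefficients, can be sketched for a single mode (AR(2) only, no MA part, simulated data with hypothetical pole values; a real analysis would use multivariate models and measured responses).

```python
import cmath, math, random

random.seed(2)
# True modal parameters of a single-mode system, expressed as an AR(2) pole.
freq, radius = 0.4, 0.97   # pole angle (rad/sample) and pole radius
a1, a2 = 2 * radius * math.cos(freq), -radius ** 2

# Simulate the ambient-excited response x[t] = a1*x[t-1] + a2*x[t-2] + e[t].
x = [0.0, 0.0]
for _ in range(5000):
    x.append(a1 * x[-1] + a2 * x[-2] + random.gauss(0, 1))
x = x[2:]

# Least-squares estimate of (a1, a2) from the "measured" response.
s11 = sum(x[t - 1] ** 2 for t in range(2, len(x)))
s22 = sum(x[t - 2] ** 2 for t in range(2, len(x)))
s12 = sum(x[t - 1] * x[t - 2] for t in range(2, len(x)))
r1 = sum(x[t] * x[t - 1] for t in range(2, len(x)))
r2 = sum(x[t] * x[t - 2] for t in range(2, len(x)))
det = s11 * s22 - s12 * s12
a1_hat = (r1 * s22 - r2 * s12) / det
a2_hat = (r2 * s11 - r1 * s12) / det

# Modal frequency from the AR characteristic roots of z^2 - a1*z - a2 = 0.
pole = (a1_hat + cmath.sqrt(a1_hat ** 2 + 4 * a2_hat)) / 2
freq_hat = abs(cmath.phase(pole))
```

The pole angle recovered from the fitted coefficients matches the modal frequency used to generate the response; the pole radius likewise encodes the damping.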

  6. System Identification of Civil Engineering Structures using State Space and ARMAV Models

    DEFF Research Database (Denmark)

    Andersen, P.; Kirkegaard, Poul Henning; Brincker, Rune

    In this paper the relations between an ambient excited structural system, represented by an innovation state space system, and the Auto-Regressive Moving Average Vector (ARMAV) model are considered. It is shown how to obtain a multivariate estimate of the ARMAV model from output measurements, usi...

  7. Regression modeling methods, theory, and computation with SAS

    CERN Document Server

    Panik, Michael

    2009-01-01

    Regression Modeling: Methods, Theory, and Computation with SAS provides an introduction to a diverse assortment of regression techniques using SAS to solve a wide variety of regression problems. The author fully documents the SAS programs and thoroughly explains the output produced by the programs.The text presents the popular ordinary least squares (OLS) approach before introducing many alternative regression methods. It covers nonparametric regression, logistic regression (including Poisson regression), Bayesian regression, robust regression, fuzzy regression, random coefficients regression,

  8. Better Autologistic Regression

    Directory of Open Access Journals (Sweden)

    Mark A. Wolters

    2017-11-01

    Autologistic regression is an important probability model for dichotomous random variables observed along with covariate information. It has been used in various fields for analyzing binary data possessing spatial or network structure. The model can be viewed as an extension of the autologistic model (also known as the Ising model, quadratic exponential binary distribution, or Boltzmann machine) to include covariates. It can also be viewed as an extension of logistic regression to handle responses that are not independent. Not all authors use exactly the same form of the autologistic regression model. Variations of the model differ in two respects. First, the variable coding—the two numbers used to represent the two possible states of the variables—might differ. Common coding choices are (zero, one) and (minus one, plus one). Second, the model might appear in either of two algebraic forms: a standard form, or a recently proposed centered form. Little attention has been paid to the effect of these differences, and the literature shows ambiguity about their importance. It is shown here that changes to either coding or centering in fact produce distinct, non-nested probability models. Theoretical results, numerical studies, and analysis of an ecological data set all show that the differences among the models can be large and practically significant. Understanding the nature of the differences and making appropriate modeling choices can lead to significantly improved autologistic regression analyses. The results strongly suggest that the standard model with plus/minus coding, which we call the symmetric autologistic model, is the most natural choice among the autologistic variants.
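
The claim that coding changes produce genuinely different models can be checked by brute-force enumeration on a two-node toy model (hypothetical parameter values; the paper's analysis is far more general).

```python
import math
from itertools import product

def prob_high(coding, h=0.5, lam=0.8):
    """Marginal P(node 1 takes the 'high' state) for a two-node autologistic
    model P(y) ∝ exp(h*(y1 + y2) + lam*y1*y2) under coding = (low, high)."""
    hi = coding[1]
    weights = {}
    for y1, y2 in product(coding, repeat=2):
        weights[(y1, y2)] = math.exp(h * (y1 + y2) + lam * y1 * y2)
    z = sum(weights.values())
    return sum(w for (y1, _), w in weights.items() if y1 == hi) / z

p_standard = prob_high((0.0, 1.0))    # (zero, one) coding
p_symmetric = prob_high((-1.0, 1.0))  # (minus one, plus one) coding
```

Identical parameter values give clearly different marginal probabilities under the two codings, so the codings parameterize distinct probability models rather than re-labelings of the same one.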

  9. Soil organic carbon distribution in Mediterranean areas under a climate change scenario via multiple linear regression analysis.

    Science.gov (United States)

    Olaya-Abril, Alfonso; Parras-Alcántara, Luis; Lozano-García, Beatriz; Obregón-Romero, Rafael

    2017-08-15

    Over time, interest in soil studies has increased due to their role in carbon sequestration in terrestrial ecosystems, which could contribute to decreasing atmospheric CO2 rates. In many studies, independent variables were related to soil organic carbon (SOC) alone; however, the degree to which each variable contributed to the experimentally determined SOC content was not considered. In this study, samples from 612 soil profiles were obtained in a natural protected area (Red Natura 2000) of Sierra Morena (Mediterranean area, South Spain), considering only the topsoil (0-25 cm) for better comparison between results. 24 independent variables were used to define their relationship with SOC content. Subsequently, using a multiple linear regression analysis, the effects of these variables on the SOC correlation were considered. Finally, the best parameters determined with the regression analysis were used in a climate change scenario. The model indicated that SOC in a future scenario of climate change depends on the average temperature of the coldest quarter (41.9%), the average temperature of the warmest quarter (34.5%), annual precipitation (22.2%) and annual average temperature (1.3%). When the current and future situations were compared, the SOC content in the study area was reduced by 35.4%, and a trend towards migration to higher latitude and altitude was observed. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. Semiparametric regression during 2003–2007

    KAUST Repository

    Ruppert, David; Wand, M.P.; Carroll, Raymond J.

    2009-01-01

    Semiparametric regression is a fusion between parametric regression and nonparametric regression that integrates low-rank penalized splines, mixed model and hierarchical Bayesian methodology – thus allowing more streamlined handling of longitudinal and spatial correlation. We review progress in the field over the five-year period between 2003 and 2007. We find semiparametric regression to be a vibrant field with substantial involvement and activity, continual enhancement and widespread application.

  11. Unbalanced Regressions and the Predictive Equation

    DEFF Research Database (Denmark)

    Osterrieder, Daniela; Ventosa-Santaulària, Daniel; Vera-Valdés, J. Eduardo

    Predictive return regressions with persistent regressors are typically plagued by (asymptotically) biased/inconsistent estimates of the slope, non-standard or potentially even spurious statistical inference, and regression unbalancedness. We alleviate the problem of unbalancedness in the theoreti...

  12. LARF: Instrumental Variable Estimation of Causal Effects through Local Average Response Functions

    Directory of Open Access Journals (Sweden)

    Weihua An

    2016-07-01

    LARF is an R package that provides instrumental variable estimation of treatment effects when both the endogenous treatment and its instrument (i.e., the treatment inducement) are binary. The method (Abadie 2003) involves two steps. First, pseudo-weights are constructed from the probability of receiving the treatment inducement. By default LARF estimates the probability by a probit regression. It also provides semiparametric power series estimation of the probability and allows users to employ other external methods to estimate the probability. Second, the pseudo-weights are used to estimate the local average response function conditional on treatment and covariates. LARF provides both least squares and maximum likelihood estimates of the conditional treatment effects.
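
The two-step construction can be sketched in Python as follows, with the inducement probability assumed known (0.5) rather than probit-estimated, and with hypothetical complier shares and effect sizes; this illustrates the Abadie (2003) kappa-weighting idea, not the LARF package itself.

```python
import random

random.seed(4)
n = 20000
p = 0.5   # assumed-known probability of receiving the treatment inducement

# Simulate compliers, never-takers and always-takers (hypothetical shares).
data = []
for _ in range(n):
    z = 1 if random.random() < p else 0
    kind = random.choices(["complier", "never", "always"], [0.6, 0.2, 0.2])[0]
    d = {"complier": z, "never": 0, "always": 1}[kind]
    y = (2.0 * d + {"complier": 0.0, "never": -1.0, "always": 1.0}[kind]
         + random.gauss(0, 1))
    data.append((z, d, y))

def kappa(z, d):
    """Step 1: pseudo-weight from the inducement probability."""
    return 1.0 - d * (1 - z) / (1 - p) - (1 - d) * z / p

# Step 2: kappa-weighted least squares of y on (1, d); the slope estimates
# the average treatment effect among compliers.
sw = swd = swd2 = swy = swdy = 0.0
for z, d, y in data:
    k = kappa(z, d)
    sw += k
    swd += k * d
    swd2 += k * d * d
    swy += k * y
    swdy += k * d * y
det = sw * swd2 - swd * swd
effect = (sw * swdy - swd * swy) / det
```

Non-compliers receive weights that average to zero, so the weighted regression recovers the complier effect (2.0 here) despite the confounded naive comparison.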

  13. Limit theorems for stationary increments Lévy driven moving averages

    DEFF Research Database (Denmark)

    Basse-O'Connor, Andreas; Lachièze-Rey, Raphaël; Podolskij, Mark

    ... of the kernel function g at 0. First order asymptotic theory essentially comprises three cases: stable convergence towards a certain infinitely divisible distribution, an ergodic type limit theorem and convergence in probability towards an integrated random process. We also prove the second order limit theorem...

  14. Human-animal interactions and safety during dairy cattle handling--Comparing moving cows to milking and hoof trimming.

    Science.gov (United States)

    Lindahl, C; Pinzke, S; Herlin, A; Keeling, L J

    2016-03-01

    Cattle handling is a dangerous activity on dairy farms, and cows are a major cause of injuries to livestock handlers. Even if dairy cows are generally tranquil and docile, when situations occur that they perceive or remember as aversive, they may become agitated and hazardous to handle. This study aimed to compare human-animal interactions, cow behavior, and handler safety when moving cows to daily milking and moving cows to more rarely occurring and possibly aversive hoof trimming. These processes were observed on 12 Swedish commercial dairy farms. The study included behavioral observations of handler and cows and cow heart rate recordings, as well as recording frequencies of situations and incidents related to an increased injury risk to the handler. At milking, cows were quite easily moved using few interactions. As expected, the cows showed no behavioral signs of stress, fear, or resistance and their heart rate only rose slightly from the baseline (i.e., the average heart rate during an undisturbed period before handling). Moving cows to hoof trimming involved more forceful and gentle interactions compared with moving cows to milking. Furthermore, the cows showed much higher frequencies of behaviors indicative of aversion and fear (e.g., freezing, balking, and resistance), as well as a higher increase in heart rate. The risk of injury to which handlers were exposed also increased when moving cows to hoof trimming rather than to routine milking. Some interactions (such as forceful tactile interactions with an object and pulling a neck strap or halter) appeared to be related to potentially dangerous incidents where the handler was being kicked, head-butted, or run over by a cow. In conclusion, moving cows to hoof trimming resulted in higher frequencies of behaviors indicating fear, more forceful interactions, and increased injury risks to the handler than moving cows to milking. Improving potentially stressful handling procedures (e.g., by better animal handling...

  15. Comparison of multinomial logistic regression and logistic regression: which is more efficient in allocating land use?

    Science.gov (United States)

    Lin, Yingzhi; Deng, Xiangzheng; Li, Xing; Ma, Enjun

    2014-12-01

    Spatially explicit simulation of land use change is the basis for estimating the effects of land use and cover change on energy fluxes, ecology and the environment. At the pixel level, logistic regression is one of the most common approaches used in spatially explicit land use allocation models to determine the relationship between land use and its causal factors in driving land use change, and thereby to evaluate land use suitability. However, these models have a drawback in that they do not determine/allocate land use based on the direct relationship between land use change and its driving factors. Consequently, a multinomial logistic regression method was introduced to address this flaw, and thereby to judge the suitability of a type of land use in any given pixel in a case study area of the Jiangxi Province, China. A comparison of the two regression methods indicated that the proportion of correctly allocated pixels using multinomial logistic regression was 92.98%, which was 8.47% higher than that obtained using logistic regression. Paired t-test results also showed that pixels were more clearly distinguished by multinomial logistic regression than by logistic regression. In conclusion, multinomial logistic regression is a more efficient and accurate method for the spatial allocation of land use changes. The application of this method in future land use change studies may improve the accuracy of predicting the effects of land use and cover change on energy fluxes, ecology, and environment.

  16. Disease Mapping and Regression with Count Data in the Presence of Overdispersion and Spatial Autocorrelation: A Bayesian Model Averaging Approach

    Science.gov (United States)

    Mohebbi, Mohammadreza; Wolfe, Rory; Forbes, Andrew

    2014-01-01

    This paper applies the generalised linear model for modelling geographical variation to esophageal cancer incidence data in the Caspian region of Iran. The data have a complex and hierarchical structure that makes them suitable for hierarchical analysis using Bayesian techniques, but with care required to deal with problems arising from counts of events observed in small geographical areas when overdispersion and residual spatial autocorrelation are present. These considerations lead to nine regression models derived from using three probability distributions for count data: Poisson, generalised Poisson and negative binomial, and three different autocorrelation structures. We employ the framework of Bayesian variable selection and a Gibbs sampling based technique to identify significant cancer risk factors. The framework deals with situations where the number of possible models based on different combinations of candidate explanatory variables is large enough such that calculation of posterior probabilities for all models is difficult or infeasible. The evidence from applying the modelling methodology suggests that modelling strategies based on the use of generalised Poisson and negative binomial with spatial autocorrelation work well and provide a robust basis for inference. PMID:24413702

  17. Logistic regression analysis to predict Medical Licensing Examination of Thailand (MLET) Step1 success or failure.

    Science.gov (United States)

    Wanvarie, Samkaew; Sathapatayavongs, Boonmee

    2007-09-01

    The aim of this paper was to assess factors that predict students' performance in the Medical Licensing Examination of Thailand (MLET) Step1 examination. The hypothesis was that demographic factors and academic records would predict the students' performance in the Step1 Licensing Examination. A logistic regression analysis of demographic factors (age, sex and residence) and academic records [high school grade point average (GPA), National University Entrance Examination Score and GPAs of the pre-clinical years] with the MLET Step1 outcome was performed using the data of 117 third-year Ramathibodi medical students. Twenty-three (19.7%) students failed the MLET Step1 examination. Stepwise logistic regression analysis showed that the significant predictors of MLET Step1 success/failure were residence background and GPAs of the second and third preclinical years. For students whose sophomore and third-year GPAs increased by an average of 1 point, the odds of passing the MLET Step1 examination increased by a factor of 16.3 and 12.8 respectively. The minimum GPAs for students from urban and rural backgrounds to pass the examination were estimated from the equation (2.35 vs 2.65 on a 4.00 scale). Students from rural backgrounds and/or with low grade point averages in their second and third preclinical years of medical school are at risk of failing the MLET Step1 examination. They should be given intensive tutorials during the second and third pre-clinical years.
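    As a sketch of the kind of model used here, the following fits a logistic regression of a pass/fail outcome on a single GPA predictor and converts the slope into an odds ratio. The data and effect sizes are simulated for illustration (they are not the study's data), and the fit uses Newton's method (IRLS) rather than any particular statistics package.

```python
import math
import numpy as np

rng = np.random.default_rng(1)

# Simulated records: preclinical GPA (on a 4-point scale) and exam outcome,
# with pass probability rising with GPA. All coefficients are invented.
n = 500
gpa = rng.uniform(1.5, 4.0, size=n)
p_pass = 1 / (1 + np.exp(-(-6.0 + 2.5 * gpa)))
passed = (rng.uniform(size=n) < p_pass).astype(float)

# Fit logistic regression by Newton's method (IRLS).
X = np.column_stack([np.ones(n), gpa])
beta = np.zeros(2)
for _ in range(25):
    mu = 1 / (1 + np.exp(-(X @ beta)))      # fitted pass probabilities
    W = mu * (1 - mu)                       # IRLS weights
    H = X.T @ (X * W[:, None])              # Fisher information
    beta += np.linalg.solve(H, X.T @ (passed - mu))

# exp(slope) is the factor by which passing odds change per 1-point GPA rise,
# the same kind of quantity as the 16.3 and 12.8 reported in the abstract.
odds_ratio = math.exp(beta[1])
print(f"estimated odds ratio per 1-point GPA increase: {odds_ratio:.1f}")
```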

  18. Interpretation of commonly used statistical regression models.

    Science.gov (United States)

    Kasza, Jessica; Wolfe, Rory

    2014-01-01

    A review of some regression models commonly used in respiratory health applications is provided in this article. Simple linear regression, multiple linear regression, logistic regression and ordinal logistic regression are considered. The focus of this article is on the interpretation of the regression coefficients of each model, which are illustrated through the application of these models to a respiratory health research study. © 2013 The Authors. Respirology © 2013 Asian Pacific Society of Respirology.

  19. Linear regression

    CERN Document Server

    Olive, David J

    2017-01-01

    This text covers both multiple linear regression and some experimental design models. The text uses the response plot to visualize the model and to detect outliers, does not assume that the error distribution has a known parametric distribution, develops prediction intervals that work when the error distribution is unknown, suggests bootstrap hypothesis tests that may be useful for inference after variable selection, and develops prediction regions and large sample theory for the multivariate linear regression model that has m response variables. A relationship between multivariate prediction regions and confidence regions provides a simple way to bootstrap confidence regions. These confidence regions often provide a practical method for testing hypotheses. There is also a chapter on generalized linear models and generalized additive models. There are many R functions to produce response and residual plots, to simulate prediction intervals and hypothesis tests, to detect outliers, and to choose response trans...

  20. Regression modeling of ground-water flow

    Science.gov (United States)

    Cooley, R.L.; Naff, R.L.

    1985-01-01

    Nonlinear multiple regression methods are developed to model and analyze groundwater flow systems. Complete descriptions of regression methodology as applied to groundwater flow models allow scientists and engineers engaged in flow modeling to apply the methods to a wide range of problems. Organization of the text proceeds from an introduction that discusses the general topic of groundwater flow modeling, to a review of basic statistics necessary to properly apply regression techniques, and then to the main topic: exposition and use of linear and nonlinear regression to model groundwater flow. Statistical procedures are given to analyze and use the regression models. A number of exercises and answers are included to exercise the student on nearly all the methods that are presented for modeling and statistical analysis. Three computer programs implement the more complex methods. These three are a general two-dimensional, steady-state regression model for flow in an anisotropic, heterogeneous porous medium, a program to calculate a measure of model nonlinearity with respect to the regression parameters, and a program to analyze model errors in computed dependent variables such as hydraulic head. (USGS)
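    The core of nonlinear regression as used in such models is the Gauss-Newton iteration: linearize the model at the current parameter estimate and solve the resulting linear least-squares problem for the update. A minimal sketch on a hypothetical exponential "head" model (a stand-in, not the groundwater flow model itself):

```python
import numpy as np

rng = np.random.default_rng(10)

# Hypothetical calibration data: observations following a nonlinear model
# h(x; p) = p0 * exp(-p1 * x), plus noise. Parameters are illustrative.
x = np.linspace(0, 5, 30)
p_true = np.array([10.0, 0.7])
h_obs = p_true[0] * np.exp(-p_true[1] * x) + rng.normal(0, 0.1, x.size)

def model(p):
    return p[0] * np.exp(-p[1] * x)

def jacobian(p):
    e = np.exp(-p[1] * x)
    return np.column_stack([e, -p[0] * x * e])   # dh/dp0, dh/dp1

# Gauss-Newton: repeatedly solve the linearized least-squares problem
# J * delta = residual for the parameter update delta.
p = np.array([8.0, 0.5])                         # rough initial guess
for _ in range(20):
    J = jacobian(p)
    r = h_obs - model(p)
    p += np.linalg.lstsq(J, r, rcond=None)[0]

print(f"estimated parameters: {p.round(2)}")
```

    The residuals at the converged estimate are what the statistical procedures mentioned in the abstract (e.g. analysis of model error in computed heads) operate on.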

  1. Ready, set, move!

    CERN Multimedia

    Anaïs Schaeffer

    2012-01-01

    This year, the CERN Medical Service is launching a new public health campaign. Advertised by the catchphrase “Move! & Eat Better”, the particular aim of the campaign is to encourage people at CERN to take more regular exercise, of whatever kind.   The CERN annual relay race is scheduled on 24 May this year. The CERN Medical Service will officially launch its “Move! & Eat Better” campaign at this popular sporting event. “We shall be on hand on the day of the race to strongly advocate regular physical activity,” explains Rachid Belkheir, one of the Medical Service doctors. "We really want to pitch our campaign and answer any questions people may have. Above all we want to set an example. So we are going to walk the same circuit as the runners to underline to people that they can easily incorporate movement into their daily routine.” An underlying concern has prompted this campaign: during their first few year...

  2. IMPROVING CORRELATION FUNCTION FITTING WITH RIDGE REGRESSION: APPLICATION TO CROSS-CORRELATION RECONSTRUCTION

    International Nuclear Information System (INIS)

    Matthews, Daniel J.; Newman, Jeffrey A.

    2012-01-01

    Cross-correlation techniques provide a promising avenue for calibrating photometric redshifts and determining redshift distributions using spectroscopy which is systematically incomplete (e.g., current deep spectroscopic surveys fail to obtain secure redshifts for 30%-50% or more of the galaxies targeted). In this paper, we improve on the redshift distribution reconstruction methods from our previous work by incorporating full covariance information into our correlation function fits. Correlation function measurements are strongly covariant between angular or spatial bins, and accounting for this in fitting can yield a substantial reduction in errors. However, frequently the covariance matrices used in these calculations are determined from a relatively small set (dozens rather than hundreds) of subsamples or mock catalogs, resulting in noisy covariance matrices whose inversion is ill-conditioned and numerically unstable. We present here a method of conditioning the covariance matrix known as ridge regression, which results in a better-behaved inversion than other techniques common in large-scale structure studies. We demonstrate that ridge regression significantly improves the determination of correlation function parameters. We then apply these improved techniques to the problem of reconstructing redshift distributions. By incorporating full covariance information, applying ridge regression, and changing the weighting of fields in obtaining average correlation functions, we obtain reductions in the mean redshift distribution reconstruction error of as much as ∼40% compared to previous methods. We provide a description of POWERFIT, an IDL code for performing power-law fits to correlation functions with ridge regression conditioning that we are making publicly available.
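    The conditioning step can be illustrated directly: estimate a covariance matrix from a small number of mock realizations, then add a ridge term to the diagonal before inverting it. Everything below (bin count, mock count, the AR(1)-style true covariance, the ridge amplitude) is an illustrative assumption, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical true covariance of correlation-function bins, with strong
# bin-to-bin correlations.
nbins = 20
i = np.arange(nbins)
C_true = 0.9 ** np.abs(i[:, None] - i[None, :])

# Estimate the covariance from only a few mocks, as is common when mocks are
# expensive -- the estimate is noisy and nearly singular.
nmocks = 25
L = np.linalg.cholesky(C_true)
mocks = (L @ rng.normal(size=(nbins, nmocks))).T
C_hat = np.cov(mocks, rowvar=False)

# Ridge conditioning: add a small constant to the diagonal before inversion.
lam = 0.1 * np.mean(np.diag(C_hat))
C_ridge = C_hat + lam * np.eye(nbins)

cond_raw = np.linalg.cond(C_hat)
cond_ridge = np.linalg.cond(C_ridge)
print(f"condition number: {cond_raw:.1e} -> {cond_ridge:.1e}")
```

    Because the sample covariance is positive definite, adding lam to every eigenvalue strictly lowers the condition number, so the chi-squared fit that uses the inverse becomes numerically stable (at the cost of a small, controlled bias).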

  3. Detection of sensor degradation using K-means clustering and support vector regression in nuclear power plant

    International Nuclear Information System (INIS)

    Seo, Inyong; Ha, Bokam; Lee, Sungwoo; Shin, Changhoon; Lee, Jaeyong; Kim, Seongjun

    2011-01-01

    In a nuclear power plant (NPP), periodic sensor calibrations are required to assure sensors are operating correctly. However, only a few faulty sensors are typically found and rectified. For the safe operation of an NPP and the reduction of unnecessary calibration, on-line calibration monitoring is needed. In this study, an on-line calibration monitoring technique called KPCSVR, which combines k-means clustering with principal-component-based auto-associative support vector regression (PCSVR), is proposed for nuclear power plants. To reduce the training time of the model, the k-means clustering method was used. Response surface methodology is employed to efficiently determine the optimal values of support vector regression hyperparameters. The proposed KPCSVR model was confirmed with actual plant data of Kori Nuclear Power Plant Unit 3, which were measured from the primary and secondary systems of the plant, and compared with the PCSVR model. By using data clustering, the average accuracy of PCSVR improved from 1.228×10⁻⁴ to 0.472×10⁻⁴ and the average sensitivity of PCSVR from 0.0930 to 0.0909, which results in good detection of sensor drift. Moreover, the training time is greatly reduced from 123.5 to 31.5 sec. (author)
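    A simplified sketch of the clustering-then-regression idea: partition the operating data with k-means, then fit one regression model per cluster so each model only has to cover one regime. A plain linear fit stands in for the auto-associative SVR here, and the two-regime "sensor" data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic sensor data with two operating regimes; within each regime the
# monitored signal depends linearly on a reference signal.
n = 400
x = np.concatenate([rng.uniform(0, 1, n // 2), rng.uniform(3, 4, n // 2)])
y = np.where(x < 2, 2.0 * x + 1.0, -1.0 * x + 10.0) + rng.normal(0, 0.05, n)

def kmeans(data, k, iters=50):
    """Plain Lloyd's algorithm on 1-D data."""
    centers = data[rng.choice(len(data), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(np.abs(data[:, None] - centers[None, :]), axis=1)
        centers = np.array([data[labels == j].mean() for j in range(k)])
    return labels, centers

labels, centers = kmeans(x, 2)

# One regression model per cluster (a linear fit stands in for the SVR).
models = {}
for j in range(2):
    A = np.column_stack([np.ones((labels == j).sum()), x[labels == j]])
    models[j], *_ = np.linalg.lstsq(A, y[labels == j], rcond=None)

resid = np.concatenate([
    y[labels == j] - np.column_stack([np.ones((labels == j).sum()),
                                      x[labels == j]]) @ models[j]
    for j in range(2)])
print(f"mean absolute residual: {np.abs(resid).mean():.3f}")
```

    Training several small models on clustered data is also what cuts training time in the study: each regressor sees only a fraction of the samples.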

  4. Camouflage, detection and identification of moving targets.

    Science.gov (United States)

    Hall, Joanna R; Cuthill, Innes C; Baddeley, Roland; Shohet, Adam J; Scott-Samuel, Nicholas E

    2013-05-07

    Nearly all research on camouflage has investigated its effectiveness for concealing stationary objects. However, animals have to move, and patterns that only work when the subject is static will heavily constrain behaviour. We investigated the effects of different camouflages on the three stages of predation (detection, identification and capture) in a computer-based task with humans. An initial experiment tested seven camouflage strategies on static stimuli. In line with previous literature, background-matching and disruptive patterns were found to be most successful. Experiment 2 showed that if stimuli move, an isolated moving object on a stationary background cannot avoid detection or capture regardless of the type of camouflage. Experiment 3 used an identification task and showed that while camouflage is unable to slow detection or capture, camouflaged targets are harder to identify than uncamouflaged targets when similar background objects are present. The specific details of the camouflage patterns have little impact on this effect. If one has to move, camouflage cannot impede detection; but if one is surrounded by similar targets (e.g. other animals in a herd, or moving background distractors), then camouflage can slow identification. Despite previous assumptions, motion does not entirely 'break' camouflage.

  5. Post-processing through linear regression

    Directory of Open Access Journals (Sweden)

    B. Van Schaeybroeck

    2011-03-01

    Full Text Available Various post-processing techniques are compared for both deterministic and ensemble forecasts, all based on linear regression between forecast data and observations. In order to evaluate the quality of the regression methods, three criteria are proposed, related to the effective correction of forecast error, the optimal variability of the corrected forecast and multicollinearity. The regression schemes under consideration include the ordinary least-square (OLS) method, a new time-dependent Tikhonov regularization (TDTR) method, the total least-square method, a new geometric-mean regression (GM), a recently introduced error-in-variables (EVMOS) method and, finally, a "best member" OLS method. The advantages and drawbacks of each method are clarified.

    These techniques are applied in the context of the Lorenz '63 system, whose model version is affected by both initial condition and model errors. For short forecast lead times, the number and choice of predictors plays an important role. Contrary to the other techniques, GM degrades when the number of predictors increases. At intermediate lead times, linear regression is unable to provide corrections to the forecast and can sometimes degrade the performance (GM and the best member OLS with noise). At long lead times the regression schemes (EVMOS, TDTR), which yield the correct variability and the largest correlation between ensemble error and spread, should be preferred.
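    The basic post-processing step shared by these schemes is a linear regression of observations on forecasts. A minimal sketch with synthetic forecast/observation pairs, showing an OLS correction and a Tikhonov-regularized variant (the bias, scale, and regularization weight are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic training pairs: the raw forecast is biased (+0.8) and
# over-amplified (x1.5) relative to the observation.
n = 200
truth = rng.normal(0, 1, n)
forecast = 1.5 * truth + 0.8 + rng.normal(0, 0.3, n)

# OLS post-processing: regress observations on forecasts, then correct.
A = np.column_stack([np.ones(n), forecast])
coef_ols, *_ = np.linalg.lstsq(A, truth, rcond=None)
corrected = A @ coef_ols

# Tikhonov-regularized variant: shrink the coefficients toward zero.
lam = 1.0
coef_tik = np.linalg.solve(A.T @ A + lam * np.eye(2), A.T @ truth)

rmse_raw = np.sqrt(np.mean((forecast - truth) ** 2))
rmse_cor = np.sqrt(np.mean((corrected - truth) ** 2))
print(f"RMSE: raw {rmse_raw:.2f} -> OLS-corrected {rmse_cor:.2f}")
```

    In-sample, the OLS correction can never do worse than the raw forecast, since the identity map is one of the affine corrections it searches over; the regularized variant trades a little of that fit for stability when predictors are collinear.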

  6. A comparison of random forest regression and multiple linear regression for prediction in neuroscience.

    Science.gov (United States)

    Smith, Paul F; Ganesh, Siva; Liu, Ping

    2013-10-30

    Regression is a common statistical tool for prediction in neuroscience. However, linear regression is by far the most common form of regression used, with regression trees receiving comparatively little attention. In this study, the results of conventional multiple linear regression (MLR) were compared with those of random forest regression (RFR), in the prediction of the concentrations of 9 neurochemicals in the vestibular nucleus complex and cerebellum that are part of the l-arginine biochemical pathway (agmatine, putrescine, spermidine, spermine, l-arginine, l-ornithine, l-citrulline, glutamate and γ-aminobutyric acid (GABA)). The R² values for the MLRs were higher than the proportion of variance explained values for the RFRs: 6/9 of them were ≥ 0.70 compared to 4/9 for RFRs. Even the variables that had the lowest R² values for the MLRs, e.g. ornithine (0.50) and glutamate (0.61), had much lower proportion of variance explained values for the RFRs (0.27 and 0.49, respectively). The RSE values for the MLRs were lower than those for the RFRs in all but two cases. For this data set, then, MLR appeared to be superior to RFR in terms of explanatory and predictive value and error. This result suggests that MLR may have advantages over RFR for prediction in neuroscience with this kind of data set, but that RFR can still have good predictive value in some cases. Copyright © 2013 Elsevier B.V. All rights reserved.
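    For reference, the two quantities compared in this study, R² and the residual standard error (RSE), can be computed for a multiple linear regression as follows. The "neurochemical" data are simulated; only the formulas reflect the abstract.

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulated data: predict one "neurochemical" from three others through an
# underlying linear relationship plus noise (all coefficients invented).
n, p = 60, 3
X = rng.normal(size=(n, p))
y = X @ np.array([1.0, -0.5, 0.8]) + rng.normal(0, 0.5, n)

A = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
resid = y - A @ beta

# R^2: proportion of variance explained by the linear model.
r2 = 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))
# Residual standard error, with n - p - 1 degrees of freedom.
rse = np.sqrt(resid @ resid / (n - p - 1))
print(f"R^2 = {r2:.2f}, RSE = {rse:.2f}")
```

    The RFR analogue of R², the "proportion of variance explained", is computed the same way but from out-of-bag predictions, which is one reason the two measures are not perfectly comparable.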

  7. Logistic regression applied to natural hazards: rare event logistic regression with replications

    OpenAIRE

    Guns, M.; Vanacker, Veerle

    2012-01-01

    Statistical analysis of natural hazards needs particular attention, as most of these phenomena are rare events. This study shows that the ordinary rare event logistic regression, as it is now commonly used in geomorphologic studies, does not always lead to a robust detection of controlling factors, as the results can be strongly sample-dependent. In this paper, we introduce some concepts of Monte Carlo simulations in rare event logistic regression. This technique, so-called rare event logisti...

  8. Assessing the reliability of the borderline regression method as a standard setting procedure for objective structured clinical examination

    Directory of Open Access Journals (Sweden)

    Sara Mortaz Hejri

    2013-01-01

    Full Text Available Background: One of the methods used for standard setting is the borderline regression method (BRM). This study aims to assess the reliability of the BRM when the pass-fail standard in an objective structured clinical examination (OSCE) was calculated by averaging the BRM standards obtained for each station separately. Materials and Methods: In nine stations of the OSCE with direct observation, the examiners gave each student a checklist score and a global score. Using a linear regression model for each station, we calculated the checklist score cut-off on the regression equation for the global scale cut-off set at 2. The OSCE pass-fail standard was defined as the average of all stations' standards. To determine the reliability, the root mean square error (RMSE) was calculated. The R² coefficient and the inter-grade discrimination were calculated to assess the quality of the OSCE. Results: The mean total test score was 60.78. The OSCE pass-fail standard and its RMSE were 47.37 and 0.55, respectively. The R² coefficients ranged from 0.44 to 0.79. The inter-grade discrimination score varied greatly among stations. Conclusion: The RMSE of the standard was very small, indicating that the BRM is a reliable method of setting standards for an OSCE, which has the advantage of providing data for quality assurance.
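    The BRM procedure described above — per-station regression of checklist scores on global ratings, evaluation at the borderline rating, then averaging across stations — can be sketched as follows. All scores are simulated; only the borderline global rating of 2 follows the abstract.

```python
import numpy as np

rng = np.random.default_rng(6)

# Simulated OSCE: per station, each examinee gets a checklist score and an
# examiner global rating (1-5); both track an underlying ability.
n_stations, n_students = 9, 80
cutoffs = []
for _ in range(n_stations):
    ability = rng.normal(0, 1, n_students)
    checklist = 12 + 3 * ability + rng.normal(0, 1.5, n_students)
    global_r = np.clip(np.round(3 + ability + rng.normal(0, 0.5, n_students)),
                       1, 5)
    # Regress checklist score on global rating, then read off the checklist
    # score at the borderline global rating of 2.
    A = np.column_stack([np.ones(n_students), global_r])
    b, *_ = np.linalg.lstsq(A, checklist, rcond=None)
    cutoffs.append(b[0] + b[1] * 2)

# The OSCE pass mark is the average of the station-level standards.
pass_mark = float(np.mean(cutoffs))
print(f"station cutoffs: {np.round(cutoffs, 1)}")
print(f"OSCE pass-fail standard: {pass_mark:.1f}")
```

    Averaging the station standards is what makes the overall pass mark stable: noise in any one station's regression line is diluted by the other eight.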

  9. Averaging in spherically symmetric cosmology

    International Nuclear Information System (INIS)

    Coley, A. A.; Pelavas, N.

    2007-01-01

    The averaging problem in cosmology is of fundamental importance. When applied to study cosmological evolution, the theory of macroscopic gravity (MG) can be regarded as a long-distance modification of general relativity. In the MG approach to the averaging problem in cosmology, the Einstein field equations on cosmological scales are modified by appropriate gravitational correlation terms. We study the averaging problem within the class of spherically symmetric cosmological models. That is, we shall take the microscopic equations and effect the averaging procedure to determine the precise form of the correlation tensor in this case. In particular, by working in volume-preserving coordinates, we calculate the form of the correlation tensor under some reasonable assumptions on the form for the inhomogeneous gravitational field and matter distribution. We find that the correlation tensor in a Friedmann-Lemaitre-Robertson-Walker (FLRW) background must be of the form of a spatial curvature. Inhomogeneities and spatial averaging, through this spatial curvature correction term, can have a very significant dynamical effect on the dynamics of the Universe and cosmological observations; in particular, we discuss whether spatial averaging might lead to a more conservative explanation of the observed acceleration of the Universe (without the introduction of exotic dark matter fields). We also find that the correlation tensor for a non-FLRW background can be interpreted as the sum of a spatial curvature and an anisotropic fluid. This may lead to interesting effects of averaging on astrophysical scales. We also discuss the results of averaging an inhomogeneous Lemaitre-Tolman-Bondi solution as well as calculations of linear perturbations (that is, the backreaction) in an FLRW background, which support the main conclusions of the analysis

  10. A Seemingly Unrelated Poisson Regression Model

    OpenAIRE

    King, Gary

    1989-01-01

    This article introduces a new estimator for the analysis of two contemporaneously correlated endogenous event count variables. This seemingly unrelated Poisson regression model (SUPREME) estimator combines the efficiencies created by single equation Poisson regression model estimators and insights from "seemingly unrelated" linear regression models.

  11. Applying Moving Objects Patterns towards Estimating Future Stocks Direction

    Directory of Open Access Journals (Sweden)

    Galal Dahab

    2016-01-01

    Full Text Available Stock is gaining vast popularity as a strategic investment tool not just by investor bankers, but also by the average worker. Large capitals are being traded within the stock market all around the world, making its impact not only macroeconomically focused, but also greatly valued, taking into consideration its direct social impact. As a result, almost 66% of all American citizens are striving in their respective fields every day, trying to come up with better ways to predict and find patterns in stocks that could enhance their estimation and visualization so as to make better investment decisions. Given the amount of effort that has been put into enhancing stock prediction techniques, there is still a factor that is almost completely neglected when handling stocks. The factor that has been overlooked for so long is in fact the effect of a correlation existing between stocks of the same index or parent company. This paper proposes a distinct approach for studying the correlation between stocks that belong to the same index by modelling stocks as moving objects to be able to track their movements while considering their relationships. Furthermore, it studies one of the movement techniques applied to moving objects to predict stock movement. The results showed that both the movement technique and the correlation coefficient technique are consistent in direction, with minor variations in values. The variations are attributed to the fact that the movement technique takes into consideration the sibling relationship

  12. Designing components using smartMOVE electroactive polymer technology

    Science.gov (United States)

    Rosenthal, Marcus; Weaber, Chris; Polyakov, Ilya; Zarrabi, Al; Gise, Peter

    2008-03-01

    Designing components using SmartMOVE™ electroactive polymer technology requires an understanding of the basic operation principles and the necessary design tools for integration into actuator, sensor and energy generation applications. Artificial Muscle, Inc. is collaborating with OEMs to develop customized solutions for their applications using smartMOVE. SmartMOVE is an advanced and elegant way to obtain almost any kind of movement using dielectric elastomer electroactive polymers. Integration of this technology offers the unique capability to create highly precise and customized motion for devices and systems that require actuation. Applications of SmartMOVE include linear actuators for medical, consumer and industrial applications, such as pumps, valves, optical or haptic devices. This paper will present design guidelines for selecting a smartMOVE actuator design to match the stroke, force, power, size, speed, environmental and reliability requirements for a range of applications. Power supply and controller design and selection will also be introduced. An overview of some of the most versatile configuration options will be presented with performance comparisons. A case example will include the selection, optimization, and performance overview of a smartMOVE actuator for the cell phone camera auto-focus and proportional valve applications.

  13. Moving backwards, moving forward: the experiences of older Filipino migrants adjusting to life in New Zealand

    OpenAIRE

    Montayre, Jed; Neville, Stephen; Holroyd, Eleanor

    2017-01-01

    ABSTRACT Purpose: To explore the experiences of older Filipino migrants adjusting to living permanently in New Zealand. Method: The qualitative descriptive approach taken in this study involved 17 individual face-to-face interviews of older Filipino migrants in New Zealand. Results: Three main themes emerged from the data. The first theme was 'moving backwards and moving forward', which described how these older Filipino migrants adjusted to challenges they experienced with migration. The sec...

  14. A comparison of the Angstrom-type correlations and the estimation of monthly average daily global irradiation

    International Nuclear Information System (INIS)

    Jain, S.; Jain, P.C.

    1985-12-01

    Linear regression analysis of the monthly average daily global irradiation and the sunshine duration data of 8 Zambian locations has been performed using the least square technique. Good correlation (r>0.95) is obtained in all the cases showing that the Angstrom equation is valid for Zambian locations. The values of the correlation parameters thus obtained show substantial unsystematic scatter. The analysis was repeated after incorporating the effects of (i) multiple reflections of radiation between the ground and the atmosphere, and (ii) not burning of the sunshine recorder chart, into the Angstrom equation. The surface albedo measurements at Lusaka were used. The scatter in the correlation parameters was investigated by graphical representation, by regression analysis of the data of the individual stations as well as the combined data of the 8 stations. The results show that the incorporation of none of the two effects reduces the scatter significantly. A single linear equation obtained from the regression analysis of the combined data of the 8 stations is found to be appropriate for estimating the global irradiation over Zambian locations with reasonable accuracy from the sunshine duration data. (author)
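    The underlying fit is the Angstrom equation H/H0 = a + b(S/S0), estimated by least squares. A sketch on synthetic monthly data (the coefficients and noise level are invented; actual Zambian values would differ):

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic monthly records: relative sunshine duration S/S0 and clearness
# index H/H0 following the Angstrom relation H/H0 = a + b*(S/S0).
a_true, b_true = 0.25, 0.50
s_ratio = rng.uniform(0.4, 0.9, 96)            # 8 years of monthly values
h_ratio = a_true + b_true * s_ratio + rng.normal(0, 0.02, 96)

# Least-squares fit of the Angstrom coefficients a and b.
A = np.column_stack([np.ones_like(s_ratio), s_ratio])
(a_hat, b_hat), *_ = np.linalg.lstsq(A, h_ratio, rcond=None)

r = np.corrcoef(s_ratio, h_ratio)[0, 1]        # correlation coefficient
print(f"a = {a_hat:.3f}, b = {b_hat:.3f}, r = {r:.3f}")
```

    Once a and b are fitted, monthly average daily global irradiation is estimated from sunshine duration alone as H = H0 * (a + b*S/S0), which is the practical use the abstract recommends.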

  15. Meta-regression analysis of commensal and pathogenic Escherichia coli survival in soil and water.

    Science.gov (United States)

    Franz, Eelco; Schijven, Jack; de Roda Husman, Ana Maria; Blaak, Hetty

    2014-06-17

    The extent to which pathogenic and commensal E. coli (respectively PEC and CEC) can survive, and which factors predominantly determine the rate of decline, are crucial issues from a public health point of view. The goal of this study was to provide a quantitative summary of the variability in E. coli survival in soil and water over a broad range of individual studies and to identify the most important sources of variability. To that end, a meta-regression analysis on available literature data was conducted. The considerable variation in reported decline rates indicated that the persistence of E. coli is not easily predictable. The meta-analysis demonstrated that for soil and water, the type of experiment (laboratory or field), the matrix subtype (type of water and soil), and temperature were the main factors included in the regression analysis. A higher average decline rate in soil of PEC compared with CEC was observed. The regression models explained at best 57% of the variation in decline rate in soil and 41% of the variation in decline rate in water. This indicates that additional factors, not included in the current meta-regression analysis, are of importance but rarely reported. More complete reporting of experimental conditions may allow future inference on the global effects of these variables on the decline rate of E. coli.
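    A minimal fixed-effect meta-regression can be written as weighted least squares with study weights equal to inverse within-study variances. The decline-rate data below are simulated, with temperature as the only moderator; the effect size is illustrative, not the study's estimate.

```python
import numpy as np

rng = np.random.default_rng(8)

# Simulated study-level data: an E. coli decline rate (log10/day) per study,
# a temperature moderator, and a per-study variance (study precision).
k = 40
temp = rng.uniform(5, 30, k)
var_i = rng.uniform(0.005, 0.05, k)            # within-study variances
rate = 0.1 + 0.02 * temp + rng.normal(0, np.sqrt(var_i))

# Fixed-effect meta-regression: weighted least squares, weights = 1/variance.
w = 1 / var_i
A = np.column_stack([np.ones(k), temp])
Aw = A * w[:, None]
beta = np.linalg.solve(A.T @ Aw, Aw.T @ rate)  # (A'WA)^-1 A'Wy

print(f"decline rate increases {beta[1]:.3f} log10/day per degree C")
```

    The low explained-variance figures quoted in the abstract (57% and 41%) correspond to how much of the between-study spread such moderators account for; the rest would be attributed to unreported factors or modelled as residual heterogeneity.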

  16. [Multiple linear regression analysis of X-ray measurement and WOMAC scores of knee osteoarthritis].

    Science.gov (United States)

    Ma, Yu-Feng; Wang, Qing-Fu; Chen, Zhao-Jun; Du, Chun-Lin; Li, Jun-Hai; Huang, Hu; Shi, Zong-Ting; Yin, Yue-Shan; Zhang, Lei; A-Di, Li-Jiang; Dong, Shi-Yu; Wu, Ji

    2012-05-01

    To perform multiple linear regression analysis of X-ray measurements and WOMAC scores of knee osteoarthritis, and to analyze their relationship with clinical and biomechanical concepts. From March 2011 to July 2011, 140 patients (250 knees) were reviewed, including 132 knees in the left and 118 knees in the right; ranging in age from 40 to 71 years, with an average of 54.68 years. The MB-RULER measurement software was applied to measure the femoral angle, tibial angle, femorotibial angle and joint gap angle on antero-posterior and lateral X-rays. The WOMAC scores were also collected. Then multiple regression equations were applied for linear regression analysis of the correlation between the X-ray measurements and WOMAC scores. There was statistical significance in the regression equation of AP X-ray values and WOMAC scores (P<0.05), but not in the regression equation of lateral X-ray values and WOMAC scores (P>0.05). 1) X-ray measurement of the knee joint can reflect the WOMAC scores to a certain extent. 2) It is necessary to measure the X-ray mechanical axis of the knee, which is important for diagnosis and treatment of osteoarthritis. 3) The correlation between tibial angle, joint gap angle on antero-posterior X-ray and WOMAC scores is significant, which can be used to assess the functional recovery of patients before and after treatment.

  17. Numerical investigation on the regression rate of hybrid rocket motor with star swirl fuel grain

    Science.gov (United States)

    Zhang, Shuai; Hu, Fan; Zhang, Weihua

    2016-10-01

    Although the hybrid rocket motor is expected to have distinct advantages over liquid and solid rocket motors, low regression rate and insufficient efficiency are two major disadvantages which have prevented it from being commercially viable. In recent years, complex fuel grain configurations have become attractive for overcoming these disadvantages with the help of rapid prototyping technology. In this work, an attempt has been made to numerically investigate the flow field characteristics and local regression rate distribution inside a hybrid rocket motor with a complex star swirl grain. A propellant combination of GOX and HTPB has been chosen. The numerical model is established based on the three-dimensional Navier-Stokes equations with turbulence, combustion, and coupled gas/solid phase formulations. The calculated fuel regression rate is compared with the experimental data to validate the accuracy of the numerical model. The results indicate that, comparing the star swirl grain with the tube grain under the conditions of the same port area and the same grain length, the burning surface area rises about 200%, the spatially averaged regression rate rises by as much as about 60%, and the oxidizer can combust sufficiently due to the big vortex around the axis in the aft-mixing chamber. The combustion efficiency of the star swirl grain is better and more stable than that of the tube grain.

  18. Comparing daily temperature averaging methods: the role of surface and atmosphere variables in determining spatial and seasonal variability

    Science.gov (United States)

    Bernhardt, Jase; Carleton, Andrew M.

    2018-05-01

    The two main methods for determining the average daily near-surface air temperature, twice-daily averaging (i.e., [Tmax+Tmin]/2) and hourly averaging (i.e., the average of 24 hourly temperature measurements), typically show differences associated with the asymmetry of the daily temperature curve. To quantify the relative influence of several land surface and atmosphere variables on the two temperature averaging methods, we correlate data for 215 weather stations across the Contiguous United States (CONUS) for the period 1981-2010 with the differences between the two temperature-averaging methods. The variables are land use-land cover (LULC) type, soil moisture, snow cover, cloud cover, atmospheric moisture (i.e., specific humidity, dew point temperature), and precipitation. Multiple linear regression models explain the spatial and monthly variations in the difference between the two temperature-averaging methods. We find statistically significant correlations between both the land surface and atmosphere variables studied with the difference between temperature-averaging methods, especially for the extreme (i.e., summer, winter) seasons (adjusted R² > 0.50). Models considering stations with certain LULC types, particularly forest and developed land, have adjusted R² values > 0.70, indicating that both surface and atmosphere variables control the daily temperature curve and its asymmetry. This study improves our understanding of the role of surface and near-surface conditions in modifying thermal climates of the CONUS for a wide range of environments, and their likely importance as anthropogenic forcings—notably LULC changes and greenhouse gas emissions—continues.
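    The difference between the two averaging methods follows directly from the asymmetry of the daily temperature curve, which a few lines make concrete. The diurnal curve below is synthetic (flat at the minimum overnight, sinusoidal by day), chosen only so the asymmetry is visible:

```python
import math

# Synthetic asymmetric diurnal curve: temperature sits at its 10-degree
# minimum all night and follows a sine arc up to 20 degrees during the day.
temps = [10 + 10 * math.sin(math.pi * (h - 7) / 12) if 7 <= h <= 19 else 10.0
         for h in range(24)]

twice_daily = (max(temps) + min(temps)) / 2      # (Tmax + Tmin) / 2
hourly = sum(temps) / len(temps)                 # mean of 24 hourly readings

print(f"twice-daily average: {twice_daily:.2f} C")
print(f"hourly average:      {hourly:.2f} C")
print(f"difference:          {twice_daily - hourly:+.2f} C")
```

    Because the curve spends far more hours near Tmin than near Tmax, the hourly mean falls well below (Tmax+Tmin)/2; surface and atmosphere variables that reshape the diurnal curve shift this gap, which is what the regression models in the study explain.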

  19. Attention problems and hyperactivity as predictors of college grade point average.

    Science.gov (United States)

    Schwanz, Kerry A; Palm, Linda J; Brallier, Sara A

    2007-11-01

    This study examined the relative contributions of measures of attention problems and hyperactivity to the prediction of college grade point average (GPA). A sample of 316 students enrolled in introductory psychology and sociology classes at a southeastern university completed the BASC-2 Self-Report of Personality College Form. Scores on the attention problems scale and the hyperactivity scale of the BASC-2 were entered into a regression equation as predictors of cumulative GPA. Each of the independent variables made a significant contribution to the prediction of GPA. Attention problem scores alone explained 7% of the variability in GPAs. The addition of hyperactivity scores to the equation produced a 2% increase in explanatory power. The implications of these results for assessing symptoms of inattention and hyperactivity in college students are discussed.
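    The incremental contribution of a second predictor, as reported here, corresponds to the change in R² between nested regression models. A sketch on simulated scores (all effect sizes invented; only the 316-student sample size echoes the abstract):

```python
import numpy as np

rng = np.random.default_rng(9)

# Simulated student records: attention-problem and hyperactivity scores each
# contribute (negatively) to GPA, plus noise.
n = 316
attention = rng.normal(50, 10, n)
hyper = rng.normal(50, 10, n)
gpa = 4.5 - 0.03 * attention - 0.015 * hyper + rng.normal(0, 0.4, n)

def r_squared(X, y):
    """R^2 of an OLS fit with intercept."""
    A = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    return 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

r2_attention = r_squared(attention[:, None], gpa)
r2_both = r_squared(np.column_stack([attention, hyper]), gpa)
print(f"attention alone: R^2 = {r2_attention:.3f}")
print(f"adding hyperactivity: Delta R^2 = {r2_both - r2_attention:.3f}")
```

    The "7% then +2%" figures in the abstract are exactly this kind of pair: R² for the first predictor alone, then the increase when the second enters the equation.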

  20. SU-E-J-137: Incorporating Tumor Regression Into Robust Plan Optimization for Head and Neck Radiotherapy

    International Nuclear Information System (INIS)

    Zhang, P; Hu, J; Tyagi, N; Mageras, G; Lee, N; Hunt, M

    2014-01-01

    Purpose: To develop a robust planning paradigm which incorporates a tumor regression model into the optimization process to ensure tumor coverage in head and neck radiotherapy. Methods: Simulation and weekly MR images were acquired for a group of head and neck patients to characterize tumor regression during radiotherapy. For each patient, the tumor and parotid glands were segmented on the MR images and the weekly changes were formulated as an affine transformation, in which morphological shrinkage and positional changes are modeled by a scaling factor and centroid shifts, respectively. The tumor and parotid contours were also transferred to the planning CT via rigid registration. To perform the robust planning, weekly predicted PTV and parotid structures were created by transforming the corresponding simulation structures according to the weekly affine transformation matrix averaged over all patients other than the one being planned. Next, robust PTV and parotid structures were generated as the union of the simulation and weekly prediction contours. In the subsequent robust optimization process, attainment of the clinical dose objectives was required for the robust PTV and parotids, as well as for other organs at risk (OAR). The resulting robust plans were evaluated by examining the weekly and total accumulated dose to the actual weekly PTV and parotid structures. Each robust plan was compared with the original plan based on the planning CT to determine its potential clinical benefit. Results: For four patients, the average weekly change in tumor volume and position was −4% and 1.2 mm laterally-posteriorly, respectively. Due to these temporal changes, the robust plans resulted in an accumulated PTV D95 that was, on average, 2.7 Gy higher than that of the plan created from the planning CT. OAR doses were similar. Conclusion: Integration of a tumor regression model into target delineation and robust plan optimization is feasible and may yield improved tumor coverage. Part of this research is supported
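
A minimal sketch of the affine weekly-change model described in the abstract, applied to a synthetic 2-D contour: scaling about the centroid models shrinkage, and an added vector models the centroid shift. The scale and shift values here are illustrative, not the patient-derived averages.

```python
import numpy as np

# Synthetic tumor contour points (mm) in one plane.
theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
contour = np.column_stack([30 + 12 * np.cos(theta), -10 + 9 * np.sin(theta)])

def weekly_transform(points, scale, shift):
    """Scale the contour about its centroid (shrinkage), then shift
    the centroid (positional change), as in the affine model above."""
    c = points.mean(axis=0)
    return (points - c) * scale + c + np.asarray(shift)

week1 = weekly_transform(contour, scale=0.96, shift=(0.8, -0.9))
print(week1.mean(axis=0))  # centroid moved by exactly the shift
```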

  1. Recursive Algorithm For Linear Regression

    Science.gov (United States)

    Varanasi, S. V.

    1988-01-01

    Order of model determined easily. Linear-regression algorithm includes recursive equations for coefficients of model of increased order. Algorithm eliminates duplicative calculations, facilitates search for minimum order of linear-regression model that fits set of data satisfactorily.
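
The brief gives no source listing; below is a rough stand-in for the order search it describes, assuming polynomial terms and re-fitting at each order rather than using the brief's recursive coefficient updates: the order is increased until the residual stops improving meaningfully.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-1, 1, 50)
y = 2 - x + 3 * x**2 + rng.normal(0, 0.05, x.size)  # truly quadratic data

def min_satisfactory_order(x, y, tol=1e-2, max_order=8):
    """Increase model order until the RMS residual stops improving
    by more than `tol` (a simple stand-in for the brief's search)."""
    prev_rms = np.inf
    for order in range(max_order + 1):
        X = np.vander(x, order + 1, increasing=True)
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        rms = np.sqrt(np.mean((y - X @ beta) ** 2))
        if prev_rms - rms < tol:
            return order - 1, beta_prev  # last order that still helped
        prev_rms, beta_prev = rms, beta
    return max_order, beta

order, coeffs = min_satisfactory_order(x, y)
print(order, coeffs)
```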

  2. How to average logarithmic retrievals?

    Directory of Open Access Journals (Sweden)

    B. Funke

    2012-04-01

    Calculation of mean trace gas contributions from profiles obtained by retrievals of the logarithm of the abundance, rather than retrievals of the abundance itself, is prone to biases. By means of a system simulator, biases of linear versus logarithmic averaging were evaluated for both maximum likelihood and maximum a posteriori retrievals, for various signal-to-noise ratios and atmospheric variabilities. These biases can easily reach ten percent or more. As a rule of thumb, we found for maximum likelihood retrievals that linear averaging better represents the true mean value in cases of large local natural variability and high signal-to-noise ratios, while for small local natural variability logarithmic averaging is often superior. In the case of maximum a posteriori retrievals, the mean is dominated by the a priori information used in the retrievals, and the method of averaging is of minor concern. For larger natural variabilities, the appropriateness of one method of averaging or the other depends on the particular case, because the various biasing mechanisms partly compensate in an unpredictable manner. This complication arises mainly because, in logarithmic retrievals, the weight of the prior information depends on the abundance of the gas itself. No simple rule was found for which kind of averaging is superior; instead of suggesting simple recipes, we can do little more than create awareness of the traps involved in averaging mixing ratios obtained from logarithmic retrievals.
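
The core trap can be seen directly: averaging in log space yields the geometric mean, which by Jensen's inequality never exceeds the arithmetic mean, and the gap grows with variability. The sketch below uses synthetic lognormal abundances with assumed parameters, not retrieved profiles.

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic "retrieved" trace-gas abundances with large natural variability.
abundance = rng.lognormal(mean=0.0, sigma=0.8, size=10000)

linear_mean = abundance.mean()
log_mean = np.exp(np.log(abundance).mean())  # average in log space, then exponentiate

# The log-space (geometric) mean is biased low relative to the linear mean.
print(linear_mean, log_mean)
```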

  3. Applied regression analysis a research tool

    CERN Document Server

    Pantula, Sastry; Dickey, David

    1998-01-01

    Least squares estimation, when used appropriately, is a powerful research tool. A deeper understanding of the regression concepts is essential for achieving optimal benefits from a least squares analysis. This book builds on the fundamentals of statistical methods and provides appropriate concepts that will allow a scientist to use least squares as an effective research tool. Applied Regression Analysis is aimed at the scientist who wishes to gain a working knowledge of regression analysis. The basic purpose of this book is to develop an understanding of least squares and related statistical methods without becoming excessively mathematical. It is the outgrowth of more than 30 years of consulting experience with scientists and many years of teaching an applied regression course to graduate students. Applied Regression Analysis serves as an excellent text for a service course on regression for non-statisticians and as a reference for researchers. It also provides a bridge between a two-semester introduction to...

  4. Capacity gains of buffer-aided moving relays

    KAUST Repository

    Zafar, Ammar

    2017-03-14

    This work investigates the gain due to reduction in path loss achieved by deploying buffer-aided moving relays. In particular, the increase in gain due to moving relays is studied for dual-hop broadcast channels and the bidirectional relay channel. It is shown that the gains exploited in these channels through buffer-aided relaying can be enhanced by utilizing the fact that a moving relay can communicate with the terminal closest to it, store the data in its buffer, and then forward the data to the intended destination when it comes into close proximity with that destination. Numerical results show that for both considered channels the achievable rates are increased compared to the case of stationary relays. Numerical results also show that the increase in performance is more significant when the relay moves to-and-fro between the source and the destination.

  5. Capacity gains of buffer-aided moving relays

    KAUST Repository

    Zafar, Ammar; Shaqfeh, Mohammad; Alnuweiri, Hussein; Alouini, Mohamed-Slim

    2017-01-01

    This work investigates the gain due to reduction in path loss achieved by deploying buffer-aided moving relays. In particular, the increase in gain due to moving relays is studied for dual-hop broadcast channels and the bidirectional relay channel. It is shown that the gains exploited in these channels through buffer-aided relaying can be enhanced by utilizing the fact that a moving relay can communicate with the terminal closest to it, store the data in its buffer, and then forward the data to the intended destination when it comes into close proximity with that destination. Numerical results show that for both considered channels the achievable rates are increased compared to the case of stationary relays. Numerical results also show that the increase in performance is more significant when the relay moves to-and-fro between the source and the destination.

  6. Identifying the Safety Factors over Traffic Signs in State Roads using a Panel Quantile Regression Approach.

    Science.gov (United States)

    Šarić, Željko; Xu, Xuecai; Duan, Li; Babić, Darko

    2018-06-20

    This study intended to investigate the interactions between accident rate and traffic signs on state roads in Croatia, while accommodating the heterogeneity attributed to unobserved factors. Data for 130 state roads between 2012 and 2016 were collected from the Traffic Accident Database System maintained by the Republic of Croatia Ministry of the Interior. To address the heterogeneity, a panel quantile regression model was proposed: the quantile regression component offers a more complete and comprehensive view of the relationship between accident rate and traffic signs, while the panel data component accommodates the heterogeneity attributed to unobserved factors. Results revealed that (1) low sign visibility increased the rate of accidents involving material damage (MD) and death or injury (DI); (2) the number of mandatory signs and the number of warning signs were more likely to reduce the accident rate; and (3) average speed limit and the number of invalid traffic signs per km were associated with a high accident rate. To our knowledge, this is the first attempt to analyze the interactions between accident consequences and traffic signs with a panel quantile regression model. By involving visibility, the present study demonstrates that low visibility causes a relatively higher risk of MD and DI accidents; notably, average speed limit corresponds positively with accident rate, the numbers of mandatory and warning signs are more likely to reduce the accident rate, and the number of invalid traffic signs per km is significant for accident rate, so regular maintenance should be kept up for a safer roadway environment.
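
The defining ingredient of quantile regression is the pinball (check) loss. As a minimal, self-contained illustration on synthetic skewed data (not the Croatian road records): minimizing the pinball loss over a constant recovers the sample quantile, which is exactly what the regression does conditionally on covariates.

```python
import numpy as np

def pinball_loss(y, q_hat, tau):
    """Check (pinball) loss that quantile regression minimizes."""
    u = y - q_hat
    return np.mean(np.where(u >= 0, tau * u, (tau - 1) * u))

rng = np.random.default_rng(3)
y = rng.exponential(scale=2.0, size=5001)  # skewed stand-in for accident rates
tau = 0.9

# Minimizing the pinball loss over a constant recovers the tau-quantile;
# a full quantile regression does the same with a covariate-dependent q_hat.
grid = np.linspace(y.min(), y.max(), 4001)
losses = [pinball_loss(y, g, tau) for g in grid]
best = grid[int(np.argmin(losses))]
print(best, np.quantile(y, tau))
```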

  7. Standards for Standardized Logistic Regression Coefficients

    Science.gov (United States)

    Menard, Scott

    2011-01-01

    Standardized coefficients in logistic regression analysis have the same utility as standardized coefficients in linear regression analysis. Although there has been no consensus on the best way to construct standardized logistic regression coefficients, there is now sufficient evidence to suggest a single best approach to the construction of a…

  8. [Application of negative binomial regression and modified Poisson regression in the research of risk factors for injury frequency].

    Science.gov (United States)

    Cao, Qingqing; Wu, Zhenqiang; Sun, Ying; Wang, Tiezhu; Han, Tengwei; Gu, Chaomei; Sun, Yehuan

    2011-11-01

    To explore the application of negative binomial regression and modified Poisson regression in analyzing the factors that influence injury frequency and the risk factors leading to increased injury frequency. A total of 2917 primary and secondary school students were selected from Hefei by a cluster random sampling method and surveyed by questionnaire. The count data on event-based injuries were used to fit modified Poisson regression and negative binomial regression models. The risk factors increasing the frequency of unintentional injury among juvenile students were explored, so as to probe the efficiency of these two models in studying the influential factors for injury frequency. The Poisson model showed over-dispersion, so the modified Poisson regression and negative binomial regression models were fitted instead and fit the data better. Both showed that male gender, younger age, a father working away from the hometown, a guardian with education above junior high school level, and smoking might be associated with higher injury frequencies. For clustered frequency data on injury events, both modified Poisson regression and negative binomial regression analyses can be used. However, based on our data, the modified Poisson regression fitted better, and this model could give a more accurate interpretation of the relevant factors affecting the frequency of injury.
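
The over-dispersion that disqualifies the plain Poisson model is easy to demonstrate with simulated clustered counts: a gamma mixture of Poissons (i.e., negative binomial) makes the variance exceed the mean. All parameters below are assumed for illustration; only the sample size matches the study.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 2917  # matches the study's sample size; the counts below are simulated

# Clustered injury counts: Poisson with gamma-distributed individual rates,
# i.e. negative binomial, which produces over-dispersion (variance > mean).
rates = rng.gamma(shape=2.0, scale=0.5, size=n)  # mean individual rate = 1.0
injuries = rng.poisson(rates)

m, v = injuries.mean(), injuries.var()
dispersion = v / m  # > 1 signals over-dispersion, ruling out plain Poisson
print(m, v, dispersion)
```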

  9. Logistic regression for dichotomized counts.

    Science.gov (United States)

    Preisser, John S; Das, Kalyan; Benecha, Habtamu; Stamm, John W

    2016-12-01

    Sometimes there is interest in a dichotomized outcome indicating whether a count variable is positive or zero. Under this scenario, the application of ordinary logistic regression may result in efficiency loss, which is quantifiable under an assumed model for the counts. In such situations, a shared-parameter hurdle model is investigated for more efficient estimation of regression parameters relating to overall effects of covariates on the dichotomous outcome, while handling count data with many zeroes. One model part provides a logistic regression containing marginal log odds ratio effects of primary interest, while an ancillary model part describes the mean count of a Poisson or negative binomial process in terms of nuisance regression parameters. Asymptotic efficiency of the logistic model parameter estimators of the two-part models is evaluated with respect to ordinary logistic regression. Simulations are used to assess the properties of the models with respect to power and Type I error, the latter investigated under both misspecified and correctly specified models. The methods are applied to data from a randomized clinical trial of three toothpaste formulations to prevent incident dental caries in a large population of Scottish schoolchildren. © The Author(s) 2014.
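
The two parts of a hurdle model can be sketched without any modelling library. Below, synthetic data: in the paper the first part is a logistic regression on covariates; here it is reduced to a single hurdle-crossing probability, and the second part's zero-truncated Poisson rate is recovered by inverting its mean formula. All parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 20000
p_positive, lam = 0.3, 2.0  # assumed hurdle-crossing probability and rate

# Hurdle data: zero with probability 1 - p_positive, otherwise a draw
# from a zero-truncated Poisson(lam).
positive = rng.random(n) < p_positive
k = int(positive.sum())
draws = rng.poisson(lam, size=4 * k)
draws = draws[draws > 0][:k]          # rejection-sample the truncation
counts = np.zeros(n, dtype=int)
counts[positive] = draws

# First part (the logistic part's job): hurdle-crossing probability.
p_hat = (counts > 0).mean()
# Second part: mean of a zero-truncated Poisson is lam / (1 - exp(-lam));
# invert it by bisection to recover lam from the positive counts alone.
target = counts[counts > 0].mean()
lo, hi = 1e-6, 20.0
for _ in range(60):
    mid = (lo + hi) / 2
    if mid / (1 - np.exp(-mid)) < target:
        lo = mid
    else:
        hi = mid
lam_hat = (lo + hi) / 2
print(p_hat, lam_hat)
```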

  10. THE INFLUENCE OF THE MOVING CLASS SYSTEM AND LEARNING MOTIVATION ON LEARNING OUTCOMES IN THE INTRODUCTORY OFFICE ADMINISTRATION SUBJECT FOR GRADE XI OFFICE ADMINISTRATION STUDENTS AT SMK NEGERI 9 SEMARANG, ACADEMIC YEAR 2014/2015

    Directory of Open Access Journals (Sweden)

    Stefhani Tantra Sintara

    2015-11-01

    This study aimed to determine whether there is an influence of the implementation of the moving class system and of learning motivation on eleventh-grade students' achievement in the introductory office administration subject, both simultaneously and partially. The population studied comprised the 105 eleventh-grade students of the Office Administration program in the academic year of 2014/2015. The researcher used a census technique, taking the whole population as the object of the research. The variables examined were the implementation of the moving class system (X1), learning motivation (X2), and learning outcomes (Y). Data were collected by questionnaire and documentation, and analyzed using descriptive analysis and multiple linear regression. The multiple linear regression analysis yielded the equation Y = 6.164 + 0.883X1 + 0.300X2 + e. The moving class system and learning motivation together influenced learning outcomes by 51.8%, while the partial contributions were 28.84% for the moving class system and 10.63% for learning motivation.

  11. Development of deformable moving lung phantom to simulate respiratory motion in radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jina [Department of Biomedical Engineering, College of Medicine, The Catholic University of Korea, Seoul 137-701 (Korea, Republic of); Lee, Youngkyu [Department of Radiation Oncology, Seoul St. Mary's Hospital, College of Medicine, The Catholic University of Korea, 137-701, Seoul (Korea, Republic of); Shin, Hunjoo [Department of Radiation Oncology, Incheon St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Incheon 403-720 (Korea, Republic of); Ji, Sanghoon [Field Robot R&D Group, Korea Institute of Industrial Technology, Ansan 426-910 (Korea, Republic of); Park, Sungkwang [Department of Radiation Oncology, Busan Paik Hospital, Inje University, Busan 614-735 (Korea, Republic of); Kim, Jinyoung [Department of Radiation Oncology, Haeundae Paik Hospital, Inje University, Busan 612-896 (Korea, Republic of); Jang, Hongseok [Department of Radiation Oncology, Seoul St. Mary's Hospital, College of Medicine, The Catholic University of Korea, 137-701, Seoul (Korea, Republic of); Kang, Youngnam, E-mail: ynkang33@gmail.com [Department of Radiation Oncology, Seoul St. Mary's Hospital, College of Medicine, The Catholic University of Korea, 137-701, Seoul (Korea, Republic of)

    2016-07-01

    Radiation treatment requires high accuracy to protect healthy organs and destroy the tumor. However, tumors located near the diaphragm constantly move during treatment. Respiration-gated radiotherapy has significant potential for the improvement of the irradiation of tumor sites affected by respiratory motion, such as lung and liver tumors. To measure and minimize the effects of respiratory motion, a realistic deformable phantom is required for use as a gold standard. The purpose of this study was to develop and study the characteristics of a deformable moving lung (DML) phantom, such as simulation, tissue equivalence, and rate of deformation. The rate of change of the lung volume, target deformation, and respiratory signals were measured in this study; they were accurately measured using a realistic deformable phantom. The measured volume difference was 31%, which closely corresponds to the average difference in human respiration, and the target movement was − 30 to + 32 mm. The measured signals accurately described human respiratory signals. This DML phantom would be useful for the estimation of deformable image registration and in respiration-gated radiotherapy. This study shows that the developed DML phantom can exactly simulate the patient's respiratory signal and it acts as a deformable 4-dimensional simulation of a patient's lung with sufficient volume change.

  12. Development of deformable moving lung phantom to simulate respiratory motion in radiotherapy

    International Nuclear Information System (INIS)

    Kim, Jina; Lee, Youngkyu; Shin, Hunjoo; Ji, Sanghoon; Park, Sungkwang; Kim, Jinyoung; Jang, Hongseok; Kang, Youngnam

    2016-01-01

    Radiation treatment requires high accuracy to protect healthy organs and destroy the tumor. However, tumors located near the diaphragm constantly move during treatment. Respiration-gated radiotherapy has significant potential for the improvement of the irradiation of tumor sites affected by respiratory motion, such as lung and liver tumors. To measure and minimize the effects of respiratory motion, a realistic deformable phantom is required for use as a gold standard. The purpose of this study was to develop and study the characteristics of a deformable moving lung (DML) phantom, such as simulation, tissue equivalence, and rate of deformation. The rate of change of the lung volume, target deformation, and respiratory signals were measured in this study; they were accurately measured using a realistic deformable phantom. The measured volume difference was 31%, which closely corresponds to the average difference in human respiration, and the target movement was − 30 to + 32 mm. The measured signals accurately described human respiratory signals. This DML phantom would be useful for the estimation of deformable image registration and in respiration-gated radiotherapy. This study shows that the developed DML phantom can exactly simulate the patient's respiratory signal and it acts as a deformable 4-dimensional simulation of a patient's lung with sufficient volume change.

  13. Modeling and simulation of dust behaviors behind a moving vehicle

    Science.gov (United States)

    Wang, Jingfang

    Simulation of physically realistic complex dust behaviors is a difficult and attractive problem in computer graphics. A fast, interactive and visually convincing model of dust behaviors behind moving vehicles is very useful in computer simulation, training, education, art, advertising, and entertainment. In my dissertation, an experimental interactive system has been implemented for the simulation of dust behaviors behind moving vehicles. The system includes physically-based models, particle systems, rendering engines and a graphical user interface (GUI). I have employed several vehicle models, including tanks, cars, and jeeps, to test and simulate different scenarios and conditions: calm weather, windy conditions, the vehicle turning left or right, and vehicle motion controlled by users from the GUI. I have also tested the factors that influence the physical behaviors and graphical appearance of the dust particles through the GUI or off-line scripts. The simulations are done on a Silicon Graphics Octane station. The animation of dust behaviors is achieved by physically-based modeling and simulation. The flow around a moving vehicle is modeled using computational fluid dynamics (CFD) techniques. I implement a primitive-variable, pressure-correction approach to solve the three-dimensional incompressible Navier-Stokes equations in a volume covering the moving vehicle. An alternating-direction implicit (ADI) method is used for the solution of the momentum equations, with a successive over-relaxation (SOR) method for the solution of the Poisson pressure equation. Boundary conditions are defined and simplified according to their dynamic properties. The dust particle dynamics is modeled using particle systems, statistics, and procedural modeling techniques. Graphics and real-time simulation techniques, such as dynamics synchronization, motion blur, blending, and clipping, have been employed in the rendering to achieve realistic-appearing dust
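
The SOR step used for the Poisson pressure equation can be sketched in isolation. This is a model 2-D problem with Dirichlet boundaries and a constant source term, not the dissertation's vehicle-flow solver; it shows the over-relaxed Gauss-Seidel update on a uniform grid.

```python
import numpy as np

# Solve the 2-D Poisson equation  lap(p) = f  on a unit square with
# p = 0 on the boundary, via successive over-relaxation (SOR).
n = 25                      # grid points per side
h = 1.0 / (n - 1)
f = np.full((n, n), -10.0)  # illustrative constant source term
p = np.zeros((n, n))
omega = 1.7                 # over-relaxation factor (1 < omega < 2)

for sweep in range(400):
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            gauss_seidel = 0.25 * (p[i + 1, j] + p[i - 1, j] + p[i, j + 1]
                                   + p[i, j - 1] - h * h * f[i, j])
            p[i, j] += omega * (gauss_seidel - p[i, j])

# Residual of the discrete Laplacian against f on interior points.
lap = (p[2:, 1:-1] + p[:-2, 1:-1] + p[1:-1, 2:] + p[1:-1, :-2]
       - 4 * p[1:-1, 1:-1]) / (h * h)
residual = np.abs(lap - f[1:-1, 1:-1]).max()
print(residual)
```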

  14. Independent variable complexity for regional regression of the flow duration curve in ungauged basins

    Science.gov (United States)

    Fouad, Geoffrey; Skupin, André; Hope, Allen

    2016-04-01

    The flow duration curve (FDC) is one of the most widely used tools to quantify streamflow. Its percentile flows are often required for water resource applications, but these values must be predicted for ungauged basins with insufficient or no streamflow data. Regional regression is a commonly used approach for predicting percentile flows that involves identifying hydrologic regions and calibrating regression models to each region. The independent variables used to describe the physiographic and climatic setting of the basins are a critical component of regional regression, yet few studies have investigated their effect on resulting predictions. In this study, the complexity of the independent variables needed for regional regression is investigated. Different levels of variable complexity are applied for a regional regression consisting of 918 basins in the US. Both the hydrologic regions and regression models are determined according to the different sets of variables, and the accuracy of resulting predictions is assessed. The different sets of variables include (1) a simple set of three variables strongly tied to the FDC (mean annual precipitation, potential evapotranspiration, and baseflow index), (2) a traditional set of variables describing the average physiographic and climatic conditions of the basins, and (3) a more complex set of variables extending the traditional variables to include statistics describing the distribution of physiographic data and temporal components of climatic data. The latter set of variables is not typically used in regional regression, and is evaluated for its potential to predict percentile flows. The simplest set of only three variables performed similarly to the other more complex sets of variables. Traditional variables used to describe climate, topography, and soil offered little more to the predictions, and the experimental set of variables describing the distribution of basin data in more detail did not improve predictions
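
The paper's finding, that extra descriptors add little once a few FDC-relevant variables are included, can be mimicked with synthetic basins. All distributions and coefficients below are assumptions; only the basin count matches the study. A nested ordinary-least-squares comparison shows the in-sample R-squared barely moving when irrelevant variables are appended.

```python
import numpy as np

rng = np.random.default_rng(6)
n_basins = 918  # number of basins in the study; the data here are synthetic

# The "simple" set: three variables strongly tied to the FDC.
precip = rng.normal(1000, 250, n_basins)  # mean annual precipitation (mm)
pet = rng.normal(700, 150, n_basins)      # potential evapotranspiration (mm)
bfi = rng.uniform(0.2, 0.8, n_basins)     # baseflow index (-)

# A hypothetical median flow driven mainly by the simple variables.
q50 = 0.4 * precip - 0.3 * pet + 200 * bfi + rng.normal(0, 40, n_basins)

def r_squared(X, y):
    """In-sample R^2 of an ordinary least-squares fit with intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return 1 - ((y - X @ beta) ** 2).sum() / ((y - y.mean()) ** 2).sum()

simple = np.column_stack([precip, pet, bfi])
# An "extended" set: the simple variables plus ten descriptors unrelated
# to flow, standing in for the more complex variable sets.
extended = np.column_stack([simple, rng.normal(size=(n_basins, 10))])

r2_simple, r2_extended = r_squared(simple, q50), r_squared(extended, q50)
print(r2_simple, r2_extended)
```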

  15. Socio-demographic predictors and average annual rates of caesarean section in Bangladesh between 2004 and 2014.

    Directory of Open Access Journals (Sweden)

    Md Nuruzzaman Khan

    Globally the rates of caesarean section (CS) have steadily increased in recent decades. This rise is not fully accounted for by increases in clinical factors which indicate the need for CS. We investigated the socio-demographic predictors of CS and the average annual rates of CS in Bangladesh between 2004 and 2014. Data were derived from four waves of the nationally representative Bangladesh Demographic and Health Survey (BDHS) conducted between 2004 and 2014. Rate of change analysis was used to calculate the average annual rate of increase in CS from 2004 to 2014, by socio-demographic categories. Multi-level logistic regression was used to identify the socio-demographic predictors of CS in a cross-sectional analysis of the 2014 BDHS data. CS rates increased from 3.5% in 2004 to 23% in 2014. The average annual rate of increase in CS was higher among women of advanced maternal age (≥35 years), urban areas, and relatively high socio-economic status; with higher education, and who regularly accessed antenatal services. The multi-level logistic regression model indicated that lower (≤19) and advanced maternal age (≥35), urban location, relatively high socio-economic status, higher education, birth of few children (≤2), antenatal healthcare visits, and being overweight or obese were the key factors associated with increased utilization of CS. Underweight was a protective factor for CS. The use of CS has increased considerably in Bangladesh over the survey years. This rising trend and the risk of having CS vary significantly across regions and socio-economic status. Very high use of CS among women of relatively high socio-economic status and substantial urban-rural differences call for public awareness and practice guideline enforcement aimed at optimizing the use of CS.

  16. Socio-demographic predictors and average annual rates of caesarean section in Bangladesh between 2004 and 2014.

    Science.gov (United States)

    Khan, Md Nuruzzaman; Islam, M Mofizul; Shariff, Asma Ahmad; Alam, Md Mahmudul; Rahman, Md Mostafizur

    2017-01-01

    Globally the rates of caesarean section (CS) have steadily increased in recent decades. This rise is not fully accounted for by increases in clinical factors which indicate the need for CS. We investigated the socio-demographic predictors of CS and the average annual rates of CS in Bangladesh between 2004 and 2014. Data were derived from four waves of nationally representative Bangladesh Demographic and Health Survey (BDHS) conducted between 2004 and 2014. Rate of change analysis was used to calculate the average annual rate of increase in CS from 2004 to 2014, by socio-demographic categories. Multi-level logistic regression was used to identify the socio-demographic predictors of CS in a cross-sectional analysis of the 2014 BDHS data. CS rates increased from 3.5% in 2004 to 23% in 2014. The average annual rate of increase in CS was higher among women of advanced maternal age (≥35 years), urban areas, and relatively high socio-economic status; with higher education, and who regularly accessed antenatal services. The multi-level logistic regression model indicated that lower (≤19) and advanced maternal age (≥35), urban location, relatively high socio-economic status, higher education, birth of few children (≤2), antenatal healthcare visits, overweight or obese were the key factors associated with increased utilization of CS. Underweight was a protective factor for CS. The use of CS has increased considerably in Bangladesh over the survey years. This rising trend and the risk of having CS vary significantly across regions and socio-economic status. Very high use of CS among women of relatively high socio-economic status and substantial urban-rural difference call for public awareness and practice guideline enforcement aimed at optimizing the use of CS.
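
The headline figures (3.5% in 2004 to 23% in 2014) pin down what an "average annual rate of increase" can mean. The abstract does not specify which definition was used, so both common readings are computed below.

```python
# Headline figures from the abstract: CS rate rose from 3.5% (2004)
# to 23% (2014) over 10 years.
start, end, years = 3.5, 23.0, 10

# Compound (geometric) average annual growth rate of the CS rate itself.
cagr = (end / start) ** (1 / years) - 1
# Arithmetic average annual increase, in percentage points per year.
arithmetic = (end - start) / years

print(round(cagr * 100, 1), arithmetic)
```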

  17. Sense of moving

    DEFF Research Database (Denmark)

    Christensen, Mark Schram; Grünbaum, Thor

    2017-01-01

    In this chapter, we assume the existence of a sense of “movement activity” that arises when a person actively moves a body part. This sense is usually supposed to be part of sense of agency (SoA). The purpose of the chapter is to determine whether the already existing experimental paradigms can...

  18. CUSUM-Logistic Regression analysis for the rapid detection of errors in clinical laboratory test results.

    Science.gov (United States)

    Sampson, Maureen L; Gounden, Verena; van Deventer, Hendrik E; Remaley, Alan T

    2016-02-01

    The main drawback of the periodic analysis of quality control (QC) material is that test performance is not monitored in time periods between QC analyses, potentially leading to the reporting of faulty test results. The objective of this study was to develop a patient based QC procedure for the more timely detection of test errors. Results from a Chem-14 panel measured on the Beckman LX20 analyzer were used to develop the model. Each test result was predicted from the other 13 members of the panel by multiple regression, which resulted in correlation coefficients between the predicted and measured result of >0.7 for 8 of the 14 tests. A logistic regression model, which utilized the measured test result, the predicted test result, the day of the week and time of day, was then developed for predicting test errors. The output of the logistic regression was tallied by a daily CUSUM approach and used to predict test errors, with a fixed specificity of 90%. The mean average run length (ARL) before error detection by CUSUM-Logistic Regression (CSLR) was 20 with a mean sensitivity of 97%, which was considerably shorter than the mean ARL of 53 (sensitivity 87.5%) for a simple prediction model that only used the measured result for error detection. A CUSUM-Logistic Regression analysis of patient laboratory data can be an effective approach for the rapid and sensitive detection of clinical laboratory errors. Published by Elsevier Inc.
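
The tallying step, a one-sided CUSUM over per-result model scores, can be sketched on synthetic scores. The score distributions, reference value `k`, and threshold `h` below are all assumptions for illustration, not the paper's tuned values; the point is how the tally stays near zero in control and climbs quickly after a fault.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical per-result error scores, as if produced by the logistic
# model: low on average while in control, elevated after a fault at t=200.
in_control = rng.normal(0.10, 0.05, 200).clip(0, 1)
faulty = rng.normal(0.40, 0.05, 100).clip(0, 1)
scores = np.concatenate([in_control, faulty])

k, h = 0.2, 1.0  # reference value and decision threshold (illustrative)
s, alarm_at = 0.0, None
for t, x in enumerate(scores):
    s = max(0.0, s + (x - k))  # one-sided CUSUM tally of the scores
    if alarm_at is None and s > h:
        alarm_at = t

print(alarm_at)  # index of the first flagged result, shortly after the fault
```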

  19. Bayesian ARTMAP for regression.

    Science.gov (United States)

    Sasu, L M; Andonie, R

    2013-10-01

    Bayesian ARTMAP (BA) is a recently introduced neural architecture which uses a combination of Fuzzy ARTMAP competitive learning and Bayesian learning. Training is generally performed online, in a single-epoch. During training, BA creates input data clusters as Gaussian categories, and also infers the conditional probabilities between input patterns and categories, and between categories and classes. During prediction, BA uses Bayesian posterior probability estimation. So far, BA was used only for classification. The goal of this paper is to analyze the efficiency of BA for regression problems. Our contributions are: (i) we generalize the BA algorithm using the clustering functionality of both ART modules, and name it BA for Regression (BAR); (ii) we prove that BAR is a universal approximator with the best approximation property. In other words, BAR approximates arbitrarily well any continuous function (universal approximation) and, for every given continuous function, there is one in the set of BAR approximators situated at minimum distance (best approximation); (iii) we experimentally compare the online trained BAR with several neural models, on the following standard regression benchmarks: CPU Computer Hardware, Boston Housing, Wisconsin Breast Cancer, and Communities and Crime. Our results show that BAR is an appropriate tool for regression tasks, both for theoretical and practical reasons. Copyright © 2013 Elsevier Ltd. All rights reserved.

  20. Mechanisms of neuroblastoma regression

    Science.gov (United States)

    Brodeur, Garrett M.; Bagatell, Rochelle

    2014-01-01

    Recent genomic and biological studies of neuroblastoma have shed light on the dramatic heterogeneity in the clinical behaviour of this disease, which spans from spontaneous regression or differentiation in some patients, to relentless disease progression in others, despite intensive multimodality therapy. This evidence also suggests several possible mechanisms to explain the phenomena of spontaneous regression in neuroblastomas, including neurotrophin deprivation, humoral or cellular immunity, loss of telomerase activity and alterations in epigenetic regulation. A better understanding of the mechanisms of spontaneous regression might help to identify optimal therapeutic approaches for patients with these tumours. Currently, the most druggable mechanism is the delayed activation of developmentally programmed cell death regulated by the tropomyosin receptor kinase A pathway. Indeed, targeted therapy aimed at inhibiting neurotrophin receptors might be used in lieu of conventional chemotherapy or radiation in infants with biologically favourable tumours that require treatment. Alternative approaches consist of breaking immune tolerance to tumour antigens or activating neurotrophin receptor pathways to induce neuronal differentiation. These approaches are likely to be most effective against biologically favourable tumours, but they might also provide insights into treatment of biologically unfavourable tumours. We describe the different mechanisms of spontaneous neuroblastoma regression and the consequent therapeutic approaches. PMID:25331179

  1. Accounting for Zero Inflation of Mussel Parasite Counts Using Discrete Regression Models

    Directory of Open Access Journals (Sweden)

    Emel Çankaya

    2017-06-01

    Full Text Available In many ecological applications, the absences of species are inevitable due to either detection faults in samples or uninhabitable conditions for their existence, resulting in a high number of zero counts or abundances. The usual practice for modelling such data is regression modelling of log(abundance+1), and it is well known that the resulting model is inadequate for prediction purposes. Newer discrete models accounting for zero abundances, namely zero-inflated Poisson and negative binomial regression (ZIP and ZINB), Hurdle-Poisson (HP) and Hurdle-Negative Binomial (HNB), amongst others, are widely preferred to the classical regression models. Because mussels are one of the economically most important aquatic products of Turkey, the purpose of this study is to examine the performances of these four models in determining the significant biotic and abiotic factors affecting the occurrence of the Nematopsis legeri parasite, which harms Mediterranean mussels (Mytilus galloprovincialis L.). The data collected from three coastal regions of Sinop city in Turkey showed that more than 50% of parasite counts on average are zero-valued, and model comparisons were based on information criteria. The results showed that the probability of occurrence of this parasite is here best formulated by the ZINB or HNB models, and the influential factors of the models were found to correspond with the ecological differences of the regions.
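    The zero-inflation idea behind the ZIP model can be illustrated with a minimal probability mass function; the parameter values below are hypothetical, not fitted to the mussel data:

    ```python
    import math

    def zip_pmf(k, lam, pi):
        """Zero-inflated Poisson: with probability pi an observation is a
        structural zero; otherwise it is drawn from Poisson(lam)."""
        pois = math.exp(-lam) * lam**k / math.factorial(k)
        return pi * (k == 0) + (1 - pi) * pois

    # Zero inflation raises P(0) well above the plain Poisson value.
    lam, pi = 2.0, 0.5          # illustrative parameters
    p0_zip  = zip_pmf(0, lam, pi)
    p0_pois = zip_pmf(0, lam, 0.0)
    print(p0_zip, p0_pois)
    ```

    Hurdle models differ from ZIP in that they model all zeros with one process and the strictly positive counts with a truncated count distribution, which is why the two families can rank differently under information criteria.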

  2. Scattering characteristics of relativistically moving concentrically layered spheres

    Science.gov (United States)

    Garner, Timothy J.; Lakhtakia, Akhlesh; Breakall, James K.; Bohren, Craig F.

    2018-02-01

    The energy extinction cross section of a concentrically layered sphere varies with velocity as the Doppler shift moves the spectral content of the incident signal in the sphere's co-moving inertial reference frame toward or away from resonances of the sphere. Computations for hollow gold nanospheres show that the energy extinction cross section is high when the Doppler shift moves the incident signal's spectral content in the co-moving frame near the wavelength of the sphere's localized surface plasmon resonance. The energy extinction cross section of a three-layer sphere consisting of an olivine-silicate core surrounded by a porous and a magnetite layer, which is used to explain extinction caused by interstellar dust, also depends strongly on velocity. For this sphere, computations show that the energy extinction cross section is high when the Doppler shift moves the spectral content of the incident signal near either of olivine-silicate's two localized surface phonon resonances at 9.7 μm and 18 μm.

  3. Using the Ridge Regression Procedures to Estimate the Multiple Linear Regression Coefficients

    Science.gov (United States)

    Gorgees, HazimMansoor; Mahdi, FatimahAssim

    2018-05-01

    This article compares the performance of different types of ordinary ridge regression estimators that have been proposed for estimating the regression parameters when near-exact linear relationships among the explanatory variables are present. For this situation we employ data obtained from the tagi gas filling company during the period (2008-2010). The main result is that the method based on the condition number performs better than the other stated methods, since it has a smaller mean square error (MSE).
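    A minimal sketch of a ridge estimator whose shrinkage constant is switched on by the condition number of X'X; the threshold, the rule for choosing k, and the data are illustrative assumptions, not the article's procedure:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100
    x1 = rng.normal(size=n)
    x2 = x1 + 0.001 * rng.normal(size=n)     # near-exact linear dependence
    X = np.column_stack([np.ones(n), x1, x2])
    y = 1.0 + 2.0 * x1 + 3.0 * x2 + rng.normal(size=n)

    XtX = X.T @ X
    cond = np.linalg.cond(XtX)               # very large => multicollinearity
    k = 1.0 if cond > 1e4 else 0.0           # hypothetical condition-number rule
    beta_ridge = np.linalg.solve(XtX + k * np.eye(3), X.T @ y)
    beta_ols = np.linalg.solve(XtX, X.T @ y)
    print(np.linalg.norm(beta_ols), np.linalg.norm(beta_ridge))
    ```

    With near-collinear columns the OLS coefficients are wildly inflated by noise, while the ridge penalty shrinks the coefficient vector, which is the mechanism behind the smaller MSE reported above.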

  4. Multicollinearity and Regression Analysis

    Science.gov (United States)

    Daoud, Jamal I.

    2017-12-01

    In regression analysis a correlation between the response and the predictor(s) is expected, but correlation among the predictors themselves is undesirable. The number of predictors included in a regression model depends on many factors, among them historical data and experience. In the end, the selection of the most important predictors is a subjective choice of the researcher. Multicollinearity is a phenomenon in which two or more predictors are correlated; when this happens, the standard errors of the coefficients increase [8]. Increased standard errors mean that the coefficients for some or all independent variables may not be found to be significantly different from zero. In other words, by overinflating the standard errors, multicollinearity makes some variables statistically insignificant when they should be significant. In this paper we focus on multicollinearity, its causes, and its consequences for the reliability of the regression model.
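    One standard way to quantify the multicollinearity described above is the variance inflation factor (VIF); the sketch below uses synthetic data and is illustrative, not taken from the paper:

    ```python
    import numpy as np

    def vif(X):
        """Variance inflation factor for each column of X (no intercept):
        VIF_j = 1 / (1 - R_j^2), where R_j^2 comes from regressing
        column j on all the other columns."""
        out = []
        for j in range(X.shape[1]):
            others = np.delete(X, j, axis=1)
            A = np.column_stack([np.ones(len(X)), others])
            coef, *_ = np.linalg.lstsq(A, X[:, j], rcond=None)
            resid = X[:, j] - A @ coef
            r2 = 1 - resid.var() / X[:, j].var()
            out.append(1.0 / (1.0 - r2))
        return out

    rng = np.random.default_rng(1)
    a = rng.normal(size=200)
    b = rng.normal(size=200)
    c = a + b + 0.1 * rng.normal(size=200)   # nearly a linear combination
    print(vif(np.column_stack([a, b, c])))
    ```

    A common rule of thumb flags VIF values above 5 or 10 as problematic; in this example all three predictors are flagged because any one of them is almost perfectly explained by the other two.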

  5. Lattice Boltzmann simulation of behaviour of particles moving in blood vessels under the rolling massage

    International Nuclear Information System (INIS)

    Hou-Hui, Yi; Cai-Feng, Wang; Xiao-Feng, Yang; Hua-Bing, Li

    2009-01-01

    The rolling massage is one of the most important manipulations in Chinese massage, which is expected to eliminate many diseases. Here, the effect of the rolling massage on a pair of particles moving in blood vessels under rolling massage manipulation is studied by the lattice Boltzmann simulation. The simulated results show that the motion of each particle is considerably modified by the rolling massage, and it depends on the relative rolling velocity, the rolling depth, and the distance between particle position and rolling position. Both particles' translational average velocities increase almost linearly as the rolling velocity increases, and obey the same law. The increment of the average relative angular velocity for the leading particle is smaller than that of the trailing one. The result is helpful for understanding the mechanism of the massage and to further develop the rolling techniques. (classical areas of phenomenology)

  6. Panel Smooth Transition Regression Models

    DEFF Research Database (Denmark)

    González, Andrés; Terasvirta, Timo; Dijk, Dick van

    We introduce the panel smooth transition regression model. This new model is intended for characterizing heterogeneous panels, allowing the regression coefficients to vary both across individuals and over time. Specifically, heterogeneity is allowed for by assuming that these coefficients are bou...

  7. PEMODELAN JUMLAH ANAK PUTUS SEKOLAH DI PROVINSI BALI DENGAN PENDEKATAN SEMI-PARAMETRIC GEOGRAPHICALLY WEIGHTED POISSON REGRESSION

    Directory of Open Access Journals (Sweden)

    GUSTI AYU RATIH ASTARI

    2013-11-01

    Full Text Available The dropout number is one of the important indicators for measuring human resource progress in the education sector. This research uses a Semi-parametric Geographically Weighted Poisson Regression approach to obtain the best model and to determine the factors influencing the dropout number for primary education in Bali. The analysis results show that there are no significant differences between the Poisson regression model, GWPR, and Semi-parametric GWPR. Factors that significantly influence the dropout number for primary education in Bali are the ratio of students to schools, the ratio of students to teachers, the number of families in which the father's highest education is elementary or junior high school, the illiteracy rate, and the average number of family members.
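    A plain (non-geographically-weighted) Poisson regression of the kind underlying GWPR can be fitted by iteratively reweighted least squares; the following sketch uses synthetic data and assumed true coefficients, and is not the paper's model:

    ```python
    import numpy as np

    def poisson_irls(X, y, n_iter=25):
        """Fit a log-linear Poisson regression by iteratively reweighted
        least squares (Fisher scoring)."""
        beta = np.zeros(X.shape[1])
        for _ in range(n_iter):
            mu = np.exp(X @ beta)                    # mean under current fit
            W = mu                                   # Poisson working weights
            z = X @ beta + (y - mu) / mu             # working response
            beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
        return beta

    rng = np.random.default_rng(2)
    n = 500
    x = rng.normal(size=n)
    X = np.column_stack([np.ones(n), x])
    y = rng.poisson(np.exp(0.5 + 0.8 * x))           # assumed true model
    beta_hat = poisson_irls(X, y)
    print(beta_hat)
    ```

    GWPR extends this by solving a weighted version of the same problem at each location, with weights that decay with geographic distance.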

  8. Credit Scoring Problem Based on Regression Analysis

    OpenAIRE

    Khassawneh, Bashar Suhil Jad Allah

    2014-01-01

    ABSTRACT: This thesis provides an explanatory introduction to the regression models of data mining and contains basic definitions of key terms in the linear, multiple and logistic regression models. Meanwhile, the aim of this study is to illustrate fitting models for the credit scoring problem using simple linear, multiple linear and logistic regression models and also to analyze the found model functions by statistical tools. Keywords: Data mining, linear regression, logistic regression....

  9. Why do superconducting vortices follow a moving hot spot?

    Science.gov (United States)

    Sergeev, Andrei; Michael, Reizer

    Recent experiments reported in Nature Comm. 7, 12801 (2016) show that superconducting vortices follow a moving hot spot created by a focused laser beam, i.e. vortices move from the cold area toward the moving hot area. This behavior is opposite to the vortex motion observed in numerous measurements of the vortex Nernst effect, where vortices always move against the temperature gradient. Taking into account that superconducting magnetization currents do not transfer entropy, we analyze the balance of forces acting on a vortex in stationary and dynamic temperature gradients. We show that the dynamic measurements may be described by a single-vortex approximation, while in stationary measurements the interaction between vortices is critical. Supported by NRC.

  10. Prediction of protein binding sites using physical and chemical descriptors and the support vector machine regression method

    International Nuclear Information System (INIS)

    Sun Zhong-Hua; Jiang Fan

    2010-01-01

    In this paper a new continuous variable called core-ratio is defined to describe the probability for a residue to be in a binding site, thereby replacing the previous binary description of the interface residue using 0 and 1. So we can use the support vector machine regression method to fit the core-ratio value and predict the protein binding sites. We also design a new group of physical and chemical descriptors to characterize the binding sites. The new descriptors are more effective when an averaging procedure is used. Our test shows that much better prediction results can be obtained by the support vector regression (SVR) method than by the support vector classification method. (rapid communication)
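    The ε-insensitive loss that distinguishes SVR from ordinary least-squares regression can be written in a few lines; the residuals and tube width below are hypothetical:

    ```python
    def eps_insensitive(y_true, y_pred, eps=0.1):
        """SVR's epsilon-insensitive loss: residuals inside the eps-tube
        incur no penalty; outside the tube, the penalty grows linearly."""
        return [max(0.0, abs(t - p) - eps) for t, p in zip(y_true, y_pred)]

    # A prediction within the tube costs nothing; larger errors cost linearly.
    losses = eps_insensitive([0.50, 0.50, 0.50], [0.55, 0.62, 0.30], eps=0.1)
    print(losses)
    ```

    For a continuous target such as the core-ratio, this tolerance of small residuals is what makes the regression formulation more natural than a 0/1 classification of interface residues.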

  11. On Averaging Rotations

    DEFF Research Database (Denmark)

    Gramkow, Claus

    2001-01-01

    In this paper two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong ... approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation.

  12. Flows around a moving flat plate simulated by the method of cellular automata. Seru outoman ho ni yoru ido heiban mawari no nagare

    Energy Technology Data Exchange (ETDEWEB)

    Tsutahara, M; Tomiyama, A; Kimura, T; Murata, H [Kobe University, Kobe (Japan). Faculty of Engineering

    1993-08-25

    In order to analyze a flow field containing a moving boundary by the cellular automaton method, the way of imposing boundary conditions when a wall moves at a constant velocity in the normal direction was examined. The method simulates the motion of a continuous fluid by statistically treating the motion of many discrete particles that repeatedly translate and collide. The collision law for particles at grid points is formulated so as to conserve mass (number of particles) and momentum, in order to satisfy the governing equations of the flow. The object of study is the flow in which a flat plate moves in the normal direction inside fluid enclosed by rectangular walls; the plate is assumed to be initially at rest, then to start moving from left to right at a speed V, and finally to stop in front of the right wall. Three boundary conditions were considered: the surrounding walls, the plate at rest, and the moving plate. Flow rates were calculated for the translation and collision steps and for each mean-field-approximation region (a space large enough to permit an averaging operation over the particles). The effectiveness of the proposed boundary conditions was confirmed by a visualization experiment. 3 refs., 14 figs.

  13. Being Moved: Linguistic Representation and Conceptual Structure

    Directory of Open Access Journals (Sweden)

    Milena Kuehnast

    2014-11-01

    Full Text Available This study explored the organisation of the semantic field and the conceptual structure of moving experiences by investigating German-language expressions referring to the emotional state of being moved. We used present and past participles of eight psychological verbs as primes in a free word-association task, as these grammatical forms place their conceptual focus on the eliciting situation and on the felt emotional state, respectively. By applying a taxonomy of basic knowledge types and computing the Cognitive Salience Index, we identified joy and sadness as key emotional ingredients of being moved, and significant life events and art experiences as main elicitors of this emotional state. Metric multidimensional scaling analyses of the semantic field revealed that the core terms designate a cluster of emotional states characterised by low degrees of arousal and slightly positive valence, the latter due to a nearly balanced representation of positive and negative elements in the conceptual structure of being moved.

  14. Fast generation of video holograms of three-dimensional moving objects using a motion compensation-based novel look-up table.

    Science.gov (United States)

    Kim, Seung-Cheol; Dong, Xiao-Bin; Kwon, Min-Woo; Kim, Eun-Soo

    2013-05-06

    A novel approach for fast generation of video holograms of three-dimensional (3-D) moving objects using a motion compensation-based novel-look-up-table (MC-N-LUT) method is proposed. Motion compensation has been widely employed in compression of conventional 2-D video data because of its ability to exploit high temporal correlation between successive video frames. Here, this concept of motion compensation is first applied to the N-LUT based on its inherent property of shift-invariance. That is, motion vectors of 3-D moving objects are extracted between two consecutive video frames, and with them the motions of the 3-D objects at each frame are compensated. Through this process, the 3-D object data to be calculated for the video holograms are massively reduced, which results in a dramatic increase in the computational speed of the proposed method. Experimental results with three kinds of 3-D video scenarios reveal that the average number of calculated object points and the average calculation time for one object point of the proposed method are reduced to 86.95% and 86.53%, and to 34.99% and 32.30%, respectively, compared with those of the conventional N-LUT and temporal redundancy-based N-LUT (TR-N-LUT) methods.

  15. Simultaneous multifractal decompositions for the spectra of local entropies and ergodic averages

    International Nuclear Information System (INIS)

    Meson, Alejandro; Vericat, Fernando

    2009-01-01

    We consider different multifractal decompositions of the form K_{α_i} = {x : g_i(x) = α_i}, i = 1, 2, ..., d, and we study the dimension spectrum corresponding to the multiparameter decomposition K_α = ∩_{i=1}^{d} K_{α_i}, α = (α_1, ..., α_d). Then for a homeomorphism f : X → X and potentials φ, ψ : X → R we analyze the decompositions K_α^+ = {x : lim_{n→∞} (1/n)(S_n^+(φ))(x) = α}, K_β^- = {x : lim_{n→∞} (1/n)(S_n^-(ψ))(x) = β}, where (1/n)S_n^+(φ) and (1/n)S_n^-(ψ) are ergodic averages using forward and backward orbits of f, respectively. We must emphasize that the analysis, in any case, is done without requiring conditions of hyperbolicity for the dynamical system or Hölder continuity of the potentials. We illustrate with an application to galactic dynamics: a set of stars (which do not interact among themselves) moving in a galactic field.

  16. Relative Importance for Linear Regression in R: The Package relaimpo

    Directory of Open Access Journals (Sweden)

    Ulrike Grömping

    2006-09-01

    Full Text Available Relative importance is a topic that has seen a lot of interest in recent years, particularly in applied work. The R package relaimpo implements six different metrics for assessing the relative importance of regressors in the linear model, two of which are recommended: averaging over orderings of regressors, and a newly proposed metric (Feldman 2005) called pmvd. Apart from delivering the metrics themselves, relaimpo also provides (exploratory) bootstrap confidence intervals. This paper offers a brief tutorial introduction to the package. The methods and relaimpo's functionality are illustrated using the data set swiss that is generally available in R. The paper targets readers who have a basic understanding of multiple linear regression. For the background of more advanced aspects, references are provided.
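    The recommended "averaging over orderings" metric (known as LMG) can be sketched directly: average each regressor's increment to R² over all orderings in which the regressors enter the model. The data below are synthetic, not the swiss data set:

    ```python
    import itertools
    import numpy as np

    def r2(X, y, cols):
        """R^2 of regressing y on the given columns (with intercept)."""
        A = np.column_stack([np.ones(len(y))] + [X[:, c] for c in cols])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ coef
        return 1 - resid.var() / y.var()

    def lmg(X, y):
        """Average each regressor's R^2 increment over all entry orderings."""
        p = X.shape[1]
        share = np.zeros(p)
        perms = list(itertools.permutations(range(p)))
        for perm in perms:
            used, prev = [], 0.0
            for c in perm:
                used.append(c)
                cur = r2(X, y, used)
                share[c] += cur - prev       # credit for this regressor
                prev = cur
        return share / len(perms)

    rng = np.random.default_rng(3)
    X = rng.normal(size=(300, 3))
    y = 2 * X[:, 0] + 1 * X[:, 1] + rng.normal(size=300)
    print(lmg(X, y))
    ```

    By construction the shares sum to the full-model R², which is the decomposition property that makes LMG attractive for applied work.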

  17. Unbalanced Regressions and the Predictive Equation

    DEFF Research Database (Denmark)

    Osterrieder, Daniela; Ventosa-Santaulària, Daniel; Vera-Valdés, J. Eduardo

    Predictive return regressions with persistent regressors are typically plagued by (asymptotically) biased/inconsistent estimates of the slope, non-standard or potentially even spurious statistical inference, and regression unbalancedness. We alleviate the problem of unbalancedness in the theoretical predictive equation by suggesting a data generating process, where returns are generated as linear functions of a lagged latent I(0) risk process. The observed predictor is a function of this latent I(0) process, but it is corrupted by a fractionally integrated noise. Such a process may arise due to aggregation or unexpected level shifts. In this setup, the practitioner estimates a misspecified, unbalanced, and endogenous predictive regression. We show that the OLS estimate of this regression is inconsistent, but standard inference is possible. To obtain a consistent slope estimate, we then suggest ...

  18. [From clinical judgment to linear regression model].

    Science.gov (United States)

    Palacios-Cruz, Lino; Pérez, Marcela; Rivas-Ruiz, Rodolfo; Talavera, Juan O

    2013-01-01

    When we think about mathematical models, such as the linear regression model, we assume that these terms are only used by those engaged in research, a notion that is far from the truth. Legendre described the first mathematical model in 1805, and Galton introduced the formal term in 1886. Linear regression is one of the most commonly used regression models in clinical practice. It is useful for predicting or showing the relationship between two or more variables as long as the dependent variable is quantitative and has a normal distribution. Stated another way, regression is used to predict a measure based on the knowledge of at least one other variable. Linear regression has as its first objective to determine the slope or inclination of the regression line: Y = a + bx, where "a" is the intercept or regression constant, equivalent to the value of "Y" when "X" equals 0, and "b" (also called the slope) indicates the increase or decrease that occurs in "Y" when the variable "x" increases or decreases by one unit. In the regression line, "b" is called the regression coefficient. The coefficient of determination (R^2) indicates the importance of the independent variables in the outcome.
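    The quantities a, b, and R² defined above can be computed directly from data; the height/weight numbers below are hypothetical, purely for illustration:

    ```python
    import numpy as np

    # Hypothetical data: height (cm) as X, weight (kg) as Y.
    x = np.array([150., 160., 170., 180., 190.])
    y = np.array([52., 58., 66., 74., 79.])

    # Slope b = cov(x, y) / var(x); intercept a = mean(Y) - b * mean(X).
    b = ((x - x.mean()) * (y - y.mean())).sum() / ((x - x.mean())**2).sum()
    a = y.mean() - b * x.mean()              # value of Y when X = 0
    y_hat = a + b * x
    # Coefficient of determination R^2 = 1 - SSE / SST.
    r2 = 1 - ((y - y_hat)**2).sum() / ((y - y.mean())**2).sum()
    print(a, b, r2)
    ```

    Here b = 0.7 means each additional centimetre of height is associated with 0.7 kg of weight, and R² close to 1 indicates the line explains almost all the variation in Y.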

  19. Autistic Regression

    Science.gov (United States)

    Matson, Johnny L.; Kozlowski, Alison M.

    2010-01-01

    Autistic regression is one of the many mysteries in the developmental course of autism and pervasive developmental disorders not otherwise specified (PDD-NOS). Various definitions of this phenomenon have been used, further clouding the study of the topic. Despite this problem, some efforts at establishing prevalence have been made. The purpose of…

  20. Ridge regression estimator: combining unbiased and ordinary ridge regression methods of estimation

    Directory of Open Access Journals (Sweden)

    Sharad Damodar Gore

    2009-10-01

    Full Text Available Statistical literature has several methods for coping with multicollinearity. This paper introduces a new shrinkage estimator, called modified unbiased ridge (MUR). This estimator is obtained from unbiased ridge regression (URR) in the same way that ordinary ridge regression (ORR) is obtained from ordinary least squares (OLS). Properties of MUR are derived. Results on its matrix mean squared error (MMSE) are obtained. MUR is compared with ORR and URR in terms of MMSE. These results are illustrated with an example based on data generated by Hoerl and Kennard (1975).

  1. Can we predict podiatric medical school grade point average using an admission screen?

    Science.gov (United States)

    Shaw, Graham P; Velis, Evelio; Molnar, David

    2012-01-01

    Most medical school admission committees use cognitive and noncognitive measures to inform their final admission decisions. We evaluated using admission data to predict academic success for podiatric medical students using first-semester grade point average (GPA) and cumulative GPA at graduation as outcome measures. In this study, we used linear multiple regression to examine the predictive power of an admission screen. A cross-validation technique was used to assess how the results of the regression model would generalize to an independent data set. Undergraduate GPA and Medical College Admission Test score accounted for only 22% of the variance in cumulative GPA at graduation. Undergraduate GPA, Medical College Admission Test score, and a time trend variable accounted for only 24% of the variance in first-semester GPA. Seventy-five percent of the individual variation in cumulative GPA at graduation and first-semester GPA remains unaccounted for by admission screens that rely on only cognitive measures, such as undergraduate GPA and Medical College Admission Test score. A reevaluation of admission screens is warranted, and medical educators should consider broadening the criteria used to select the podiatric physicians of the future.

  2. Detection of Moving Targets Using Soliton Resonance Effect

    Science.gov (United States)

    Kulikov, Igor K.; Zak, Michail

    2013-01-01

    The objective of this research was to develop a fundamentally new method for detecting hidden moving targets within noisy and cluttered data-streams using a novel "soliton resonance" effect in nonlinear dynamical systems. The technique uses an inhomogeneous Korteweg de Vries (KdV) equation containing moving-target information. Solution of the KdV equation will describe a soliton propagating with the same kinematic characteristics as the target. The approach uses the time-dependent data stream obtained with a sensor in the form of a "forcing function," which is incorporated in an inhomogeneous KdV equation. When a hidden moving target (which in many ways resembles a soliton) encounters the natural "probe" soliton solution of the KdV equation, a strong resonance phenomenon results that makes the location and motion of the target apparent. The soliton resonance method will amplify the moving target signal, suppressing the noise. The method will be a very effective tool for locating and identifying diverse, highly dynamic targets with ill-defined characteristics in a noisy environment. The soliton resonance method for the detection of moving targets was developed in one and two dimensions. Computer simulations proved that the method could be used for detection of single point-like targets moving with constant velocities and accelerations in 1D and along straight lines or curved trajectories in 2D. The method also allows estimation of the kinematic characteristics of moving targets, and reconstruction of target trajectories in 2D. The method could be very effective for target detection in the presence of clutter and for the case of target obscurations.

  3. Discriminative Elastic-Net Regularized Linear Regression.

    Science.gov (United States)

    Zhang, Zheng; Lai, Zhihui; Xu, Yong; Shao, Ling; Wu, Jian; Xie, Guo-Sen

    2017-03-01

    In this paper, we aim at learning compact and discriminative linear regression models. Linear regression has been widely used in different problems. However, most of the existing linear regression methods exploit the conventional zero-one matrix as the regression targets, which greatly narrows the flexibility of the regression model. Another major limitation of these methods is that the learned projection matrix fails to precisely project the image features to the target space due to their weak discriminative capability. To this end, we present an elastic-net regularized linear regression (ENLR) framework, and develop two robust linear regression models which possess the following special characteristics. First, our methods exploit two particular strategies to enlarge the margins of different classes by relaxing the strict binary targets into a more feasible variable matrix. Second, a robust elastic-net regularization of singular values is introduced to enhance the compactness and effectiveness of the learned projection matrix. Third, the resulting optimization problem of ENLR has a closed-form solution in each iteration, which can be solved efficiently. Finally, rather than directly exploiting the projection matrix for recognition, our methods employ the transformed features as the new discriminative representations for final image classification. Compared with the traditional linear regression model and some of its variants, our method is much more accurate in image classification. Extensive experiments conducted on publicly available data sets well demonstrate that the proposed framework can outperform the state-of-the-art methods. The MATLAB codes of our methods are available at http://www.yongxu.org/lunwen.html.
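    The elastic-net penalty at the core of ENLR combines l1 and l2 regularization; a generic coordinate-descent sketch of elastic-net regression (not the authors' ENLR algorithm, which adds discriminative targets and regularizes singular values) looks like this:

    ```python
    import numpy as np

    def elastic_net(X, y, lam=0.1, alpha=0.5, n_iter=200):
        """Coordinate descent for (1/2n)||y - Xb||^2
        + lam * (alpha * ||b||_1 + (1 - alpha)/2 * ||b||^2)."""
        n, p = X.shape
        b = np.zeros(p)
        col_sq = (X**2).sum(axis=0) / n
        for _ in range(n_iter):
            for j in range(p):
                r = y - X @ b + X[:, j] * b[j]      # residual with b_j backed out
                rho = X[:, j] @ r / n
                # soft-thresholding handles the l1 part; the denominator the l2
                b[j] = np.sign(rho) * max(abs(rho) - lam * alpha, 0.0) \
                       / (col_sq[j] + lam * (1 - alpha))
        return b

    rng = np.random.default_rng(4)
    X = rng.normal(size=(200, 5))
    y = 3 * X[:, 0] + rng.normal(size=200) * 0.1    # only feature 0 matters
    b = elastic_net(X, y)
    print(b)
    ```

    The l1 part zeroes out irrelevant coefficients while the l2 part keeps the solution stable under correlated features, which is the compactness-plus-robustness trade-off the abstract refers to.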

  4. How efficient are referral hospitals in Uganda? A data envelopment analysis and tobit regression approach.

    Science.gov (United States)

    Mujasi, Paschal N; Asbu, Eyob Z; Puig-Junoy, Jaume

    2016-07-08

    Hospitals represent a significant proportion of health expenditures in Uganda, accounting for about 26 % of total health expenditure. Improving the technical efficiency of hospitals in Uganda can result in large savings which can be devoted to expand access to services and improve quality of care. This paper explores the technical efficiency of referral hospitals in Uganda during the 2012/2013 financial year. This was a cross sectional study using secondary data. Input and output data were obtained from the Uganda Ministry of Health annual health sector performance report for the period July 1, 2012 to June 30, 2013 for the 14 public sector regional referral and 4 large private not for profit hospitals. We assumed an output-oriented model with Variable Returns to Scale to estimate the efficiency score for each hospital using Data Envelopment Analysis (DEA) with STATA13. Using a Tobit model, the DEA efficiency scores were regressed against selected institutional and contextual/environmental factors to estimate their impacts on efficiency. The average variable returns to scale (Pure) technical efficiency score was 91.4 % and the average scale efficiency score was 87.1 % while the average constant returns to scale technical efficiency score was 79.4 %. Technically inefficient hospitals could have become more efficient by increasing the outpatient department visits by 45,943; and inpatient days by 31,425 without changing the total number of inputs. Alternatively, they would achieve efficiency by for example transferring the excess 216 medical staff and 454 beds to other levels of the health system without changing the total number of outputs. Tobit regression indicates that significant factors in explaining hospital efficiency are: hospital size (p Uganda.

  5. Moving backwards, moving forward: the experiences of older Filipino migrants adjusting to life in New Zealand.

    Science.gov (United States)

    Montayre, Jed; Neville, Stephen; Holroyd, Eleanor

    2017-12-01

    To explore the experiences of older Filipino migrants adjusting to living permanently in New Zealand. The qualitative descriptive approach taken in this study involved 17 individual face-to-face interviews of older Filipino migrants in New Zealand. Three main themes emerged from the data. The first theme was "moving backwards and moving forward", which described how these older Filipino migrants adjusted to challenges they experienced with migration. The second theme was "engaging with health services" and presented challenges relating to the New Zealand healthcare system, including a lack of knowledge of the nature of health services, language barriers, and differences in cultural views. The third theme, "new-found home", highlighted establishing a Filipino identity in New Zealand and adjusting to the challenges of relocation. Adjustment to life in New Zealand for these older Filipino migrants meant starting over again by building new values through learning the basics and then moving forward from there.

  6. Lagrangian averaging with geodesic mean.

    Science.gov (United States)

    Oliver, Marcel

    2017-11-01

    This paper revisits the derivation of the Lagrangian averaged Euler (LAE), or Euler-α, equations in the light of an intrinsic definition of the averaged flow map as the geodesic mean on the volume-preserving diffeomorphism group. Under the additional assumption that first-order fluctuations are statistically isotropic and transported by the mean flow as a vector field, averaging of the kinetic energy Lagrangian of an ideal fluid yields the LAE Lagrangian. The derivation presented here assumes a Euclidean spatial domain without boundaries.

  7. Online Risk Prediction for Indoor Moving Objects

    DEFF Research Database (Denmark)

    Ahmed, Tanvir; Pedersen, Torben Bach; Calders, Toon

    2016-01-01

    Technologies such as RFID and Bluetooth have received considerable attention for tracking indoor moving objects. In a time-critical indoor tracking scenario such as airport baggage handling, a bag has to move through a sequence of locations until it is loaded into the aircraft. Inefficiency or in...... reduce the operation cost....

  8. Categorical regression dose-response modeling

    Science.gov (United States)

    The goal of this training is to provide participants with training on the use of the U.S. EPA’s Categorical Regression software (CatReg) and its application to risk assessment. Categorical regression fits mathematical models to toxicity data that have been assigned ord...

  9. So, You Want to Move out?!--An Awareness Program of the Real Costs of Moving Away from Home

    Science.gov (United States)

    Hines, Steven L.; Hansen, Lyle; Falen, Christi

    2011-01-01

    The So, You Want To Move Out?! program was developed to help teens explore the financial realities of moving away from home. This 3-day camp program allows youth the opportunity to interview for a job, work, earn a paycheck, and pay financial obligations. After paying expenses and trying to put some money away in savings, the participants begin to…

  10. Comparison of Classical Linear Regression and Orthogonal Regression According to the Sum of Squares Perpendicular Distances

    OpenAIRE

    KELEŞ, Taliha; ALTUN, Murat

    2016-01-01

    Regression analysis is a statistical technique for investigating and modeling the relationship between variables. The purpose of this study was to present the equation for orthogonal regression (OR) and to compare the classical linear regression (CLR) and OR techniques with respect to the sum of squared perpendicular distances. For that purpose, the analyses were illustrated with an example. It was found that the sum of squared perpendicular distances of OR is smaller. Thus, it wa...
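
    The comparison in this abstract, that CLR (OLS) minimises vertical distances while OR minimises perpendicular ones, can be checked numerically with a small pure-Python sketch (hypothetical data; the closed-form OR slope below is the standard 2-D total-least-squares formula):

```python
import math

# Toy data with noise in both x and y (hypothetical values for illustration).
xs = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
ys = [1.2, 1.9, 3.4, 3.8, 5.3, 5.9]

n = len(xs)
xbar = sum(xs) / n
ybar = sum(ys) / n
sxx = sum((x - xbar) ** 2 for x in xs)
syy = sum((y - ybar) ** 2 for y in ys)
sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))

# Classical linear regression (OLS): minimises vertical distances.
b_ols = sxy / sxx
a_ols = ybar - b_ols * xbar

# Orthogonal regression (total least squares): minimises perpendicular
# distances; closed-form slope for the two-variable case.
b_or = (syy - sxx + math.sqrt((syy - sxx) ** 2 + 4 * sxy ** 2)) / (2 * sxy)
a_or = ybar - b_or * xbar

def perp_ss(a, b):
    """Sum of squared perpendicular distances to the line y = a + b*x."""
    return sum((b * x - y + a) ** 2 / (b ** 2 + 1) for x, y in zip(xs, ys))

# OR is, by construction, never worse on this criterion.
print(perp_ss(a_or, b_or) <= perp_ss(a_ols, b_ols))  # True
```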

  11. Maintenance of order in a moving strong condensate

    International Nuclear Information System (INIS)

    Whitehouse, Justin; Costa, André; Blythe, Richard A; Evans, Martin R

    2014-01-01

    We investigate the conditions under which a moving condensate may exist in a driven mass transport system. Our paradigm is a minimal mass transport model in which n − 1 particles move simultaneously from a site containing n > 1 particles to the neighbouring site in a preferred direction. In the spirit of a zero-range process the rate u(n) of this move depends only on the occupation of the departure site. We study a hopping rate u(n) = 1 + b/n^α numerically and find a moving strong condensate phase for b > b_c(α) for all α > 0. This phase is characterised by a condensate that moves through the system and comprises a fraction of the system's mass that tends to unity. The mass lost by the condensate as it moves is constantly replenished from the trailing tail of low-occupancy sites that collectively comprise a vanishing fraction of the mass. We formulate an approximate analytical treatment of the model that allows a reasonable estimate of b_c(α) to be obtained. We show numerically (for α = 1) that the transition is of mixed order, exhibiting a discontinuity in the order parameter as well as a diverging length scale as b ↘ b_c.
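
    The hopping dynamics described in the abstract can be sketched as a small Gillespie-style simulation (a toy with illustrative parameter values, not the authors' numerics): each site with n > 1 particles chips n − 1 of them to its neighbour on a ring at rate u(n) = 1 + b/n^α.

```python
import random

random.seed(0)

L, N = 20, 100          # sites on a ring, total number of particles
b, alpha = 4.0, 1.0     # hopping-rate parameters (illustrative values)
occ = [N // L] * L      # start from a uniform configuration

def rate(n):
    """Chipping rate u(n) = 1 + b / n**alpha; only sites with n > 1 move."""
    return 1.0 + b / n ** alpha if n > 1 else 0.0

def step(occ):
    """One Gillespie move: pick a site with probability proportional to
    u(n) and transfer n - 1 particles to the next site on the ring."""
    rates = [rate(n) for n in occ]
    total = sum(rates)          # always > 0 here since N > L
    r = random.random() * total
    acc = 0.0
    for i, w in enumerate(rates):
        acc += w
        if r < acc:
            moved = occ[i] - 1  # n - 1 particles hop, one stays behind
            occ[i] -= moved
            occ[(i + 1) % L] += moved
            break

for _ in range(5000):
    step(occ)

print(sum(occ))  # mass is conserved: still 100
```

    Tracking `max(occ)` over time in such a run is a simple way to watch the condensate form and drift around the ring.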

  12. Pathological assessment of liver fibrosis regression

    Directory of Open Access Journals (Sweden)

    WANG Bingqiong

    2017-03-01

    Full Text Available Hepatic fibrosis is the common pathological outcome of chronic hepatic diseases. An accurate assessment of fibrosis degree provides an important reference for a definite diagnosis of diseases, treatment decision-making, treatment outcome monitoring, and prognostic evaluation. At present, many clinical studies have proven that regression of hepatic fibrosis and early-stage liver cirrhosis can be achieved by effective treatment, and a correct evaluation of fibrosis regression has become a hot topic in clinical research. Liver biopsy has long been regarded as the gold standard for the assessment of hepatic fibrosis, and thus it plays an important role in the evaluation of fibrosis regression. This article reviews the clinical application of current pathological staging systems in the evaluation of fibrosis regression from the perspectives of semi-quantitative scoring system, quantitative approach, and qualitative approach, in order to propose a better pathological evaluation system for the assessment of fibrosis regression.

  13. Occupational injuries and sick leaves in household moving works.

    Science.gov (United States)

    Hwan Park, Myoung; Jeong, Byung Yong

    2017-09-01

    This study is concerned with household moving works and the characteristics of occupational injuries and sick leaves in each step of the moving process. Accident data for 392 occupational accidents were categorized by the moving processes in which the accidents occurred, and possible incidents and sick leaves were assessed for each moving process and hazard factor. Accidents occurring during specific moving processes showed different characteristics depending on the type of accident and agency of accidents. The most critical form in the level of risk management was falls from a height in the 'lifting by ladder truck' process. Incidents ranked as a 'High' level of risk management were in the forms of slips, being struck by objects and musculoskeletal disorders in the 'manual materials handling' process. Also, falls in 'loading/unloading', being struck by objects during 'lifting by ladder truck' and driving accidents in the process of 'transport' were ranked 'High'. The findings of this study can be used to develop more effective accident prevention policy reflecting different circumstances and conditions to reduce occupational accidents in household moving works.

  14. 25 CFR 700.157 - Actual reasonable moving and related expenses-nonresidential moves.

    Science.gov (United States)

    2010-04-01

    ... of § 700.151(c) a certified eligible business, farm operation or nonprofit organization is entitled...-moves his business, farm operation, or nonprofit organization, the Commission may approve a payment for... and modifications necessary to adapt such property to the replacement structure or to the utilities or...

  15. Logistic Regression: Concept and Application

    Science.gov (United States)

    Cokluk, Omay

    2010-01-01

    The main focus of logistic regression analysis is classification of individuals in different groups. The aim of the present study is to explain basic concepts and processes of binary logistic regression analysis intended to determine the combination of independent variables which best explain the membership in certain groups called dichotomous…
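
    The classification idea sketched in this abstract can be illustrated with a minimal binary logistic regression fitted by gradient descent (a self-contained sketch on hypothetical data, not the study's analysis):

```python
import math

# Toy two-group data: one predictor x and a dichotomous group label y
# (hypothetical scores; group 1 tends to have larger x).
data = [(0.5, 0), (1.0, 0), (1.5, 0), (2.0, 0),
        (3.0, 1), (3.5, 1), (4.0, 1), (4.5, 1)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Fit P(y = 1 | x) = sigmoid(w0 + w1 * x) by gradient descent
# on the negative log-likelihood.
w0, w1, lr = 0.0, 0.0, 0.1
for _ in range(2000):
    g0 = g1 = 0.0
    for x, y in data:
        err = sigmoid(w0 + w1 * x) - y   # residual on the probability scale
        g0 += err
        g1 += err * x
    w0 -= lr * g0
    w1 -= lr * g1

# Classify by thresholding the predicted probability at 0.5.
preds = [1 if sigmoid(w0 + w1 * x) >= 0.5 else 0 for x, _ in data]
print(preds == [y for _, y in data])  # separable toy data: all correct
```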

  16. The influence of walkability on broader mobility for Canadian middle aged and older adults: An examination of Walk Score™ and the Mobility Over Varied Environments Scale (MOVES).

    Science.gov (United States)

    Hirsch, Jana A; Winters, Meghan; Clarke, Philippa J; Ste-Marie, Nathalie; McKay, Heather A

    2017-02-01

    Neighborhood built environments may play an important role in shaping mobility and subsequent health outcomes. However, little work includes broader mobility considerations such as cognitive ability to be mobile, social connections with community, or transportation choices. We used a population-based sample of Canadian middle-aged and older adults (aged 45 and older) from the Canadian Community Health Survey-Healthy Aging (CCHS-HA, 2008-2009) to create a holistic mobility measure: the Mobility over Varied Environments Scale (MOVES). MOVES data for CCHS-HA respondents from British Columbia were linked with Street Smart Walk Score™ data by postal code (n=2046). Mean MOVES was estimated across sociodemographic and health characteristics. Linear regression, adjusted for relevant covariates, was used to estimate the association between Street Smart Walk Score™ and MOVES. The mean MOVES was 30.67 (95% confidence interval (CI) 30.36, 30.99); the 5th percentile was 23.27 (CI 22.16, 24.38) and the 95th percentile was 36.93 (CI 35.98, 37.87). MOVES was higher for those who were younger, married, of higher socioeconomic status, and in better health. In unadjusted models, for every 10-point increase in Street Smart Walk Score™, MOVES increased by 4.84 points (CI 4.52, 5.15). However, results attenuated after adjustment for sociodemographic covariates: each 10-point increase in Street Smart Walk Score™ was associated with a 0.10-point (CI 0.00, 0.20) increase in MOVES. The modest but important link we observed between walkability and mobility highlights the implications of neighborhood design for the health of middle-aged and older adults. Copyright © 2016 Elsevier Inc. All rights reserved.

  17. Predictors of course in obsessive-compulsive disorder: logistic regression versus Cox regression for recurrent events.

    Science.gov (United States)

    Kempe, P T; van Oppen, P; de Haan, E; Twisk, J W R; Sluis, A; Smit, J H; van Dyck, R; van Balkom, A J L M

    2007-09-01

    Two methods for predicting remissions in obsessive-compulsive disorder (OCD) treatment are evaluated. Y-BOCS measurements of 88 patients with a primary OCD (DSM-III-R) diagnosis were performed over a 16-week treatment period, and during three follow-ups. Remission at any measurement was defined as a Y-BOCS score lower than thirteen combined with a reduction of seven points when compared with baseline. Logistic regression models were compared with a Cox regression for recurrent events model. Logistic regression yielded different models at different evaluation times. The recurrent events model remained stable when fewer measurements were used. Higher baseline levels of neuroticism and more severe OCD symptoms were associated with a lower chance of remission, early age of onset and more depressive symptoms with a higher chance. Choice of outcome time affects logistic regression prediction models. Recurrent events analysis uses all information on remissions and relapses. Short- and long-term predictors for OCD remission show overlap.
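
    The study's remission criterion is fully operational and translates directly into code (a minimal sketch; the function name is ours):

```python
# Operational remission criterion used in the study: a Y-BOCS score
# below 13 AND at least a 7-point reduction from baseline.
def remitted(baseline, score):
    return score < 13 and (baseline - score) >= 7

print(remitted(20, 12))   # True: score 12 < 13 and an 8-point drop
print(remitted(20, 14))   # False: score still >= 13
print(remitted(15, 12))   # False: below 13 but only a 3-point drop
```

    Evaluating this criterion at every measurement occasion is what lets the recurrent-events Cox model use all remissions and relapses, while a logistic model must commit to a single outcome time.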

  18. Sparse reduced-rank regression with covariance estimation

    KAUST Repository

    Chen, Lisha

    2014-12-08

    Improving the predicting performance of the multiple response regression compared with separate linear regressions is a challenging question. On the one hand, it is desirable to seek model parsimony when facing a large number of parameters. On the other hand, for certain applications it is necessary to take into account the general covariance structure for the errors of the regression model. We assume a reduced-rank regression model and work with the likelihood function with general error covariance to achieve both objectives. In addition we propose to select relevant variables for reduced-rank regression by using a sparsity-inducing penalty, and to estimate the error covariance matrix simultaneously by using a similar penalty on the precision matrix. We develop a numerical algorithm to solve the penalized regression problem. In a simulation study and real data analysis, the new method is compared with two recent methods for multivariate regression and exhibits competitive performance in prediction and variable selection.
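
    The reduced-rank idea behind this abstract can be illustrated with a noiseless toy in pure Python: fit separate OLS regressions, then truncate the coefficient matrix to rank 1 by power iteration. This is only the unpenalized special case (no sparsity or covariance estimation as in the paper), and on exactly rank-1 data the truncated OLS fit coincides with the reduced-rank solution:

```python
import math

# Toy multi-response data: two responses driven by the same latent
# combination of two predictors, so the true coefficient matrix has rank 1.
X = [[1.0, 2.0], [2.0, 1.0], [3.0, 4.0], [4.0, 3.0], [5.0, 5.0]]
# y_k = (x1 + 2*x2) * scale_k with hypothetical scales 1.0 and 0.5.
Y = [[x1 + 2 * x2, 0.5 * (x1 + 2 * x2)] for x1, x2 in X]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def solve2(A, b):
    """Solve a 2x2 linear system by Cramer's rule (enough for this sketch)."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - b[1] * A[0][1]) / det,
            (A[0][0] * b[1] - A[1][0] * b[0]) / det]

# Separate OLS fits: C = (X'X)^{-1} X'Y, one column per response.
XtX = matmul(transpose(X), X)
XtY = matmul(transpose(X), Y)
C = transpose([solve2(XtX, col) for col in transpose(XtY)])

# Reduce C to rank 1: power iteration on C C' finds the leading left
# singular direction v; projecting C onto it gives the best rank-1 fit.
v = [1.0, 1.0]
for _ in range(50):
    w = matmul([v], matmul(C, transpose(C)))[0]
    norm = math.sqrt(sum(x * x for x in w))
    v = [x / norm for x in w]
u = matmul([v], C)[0]                      # row vector v' C
C1 = [[vi * uj for uj in u] for vi in v]   # rank-1 matrix v (v' C)

# On this noiseless rank-1 toy, C1 recovers C exactly.
print([[round(c, 6) for c in row] for row in C1])
```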

  19. Sparse reduced-rank regression with covariance estimation

    KAUST Repository

    Chen, Lisha; Huang, Jianhua Z.

    2014-01-01

    Improving the predicting performance of the multiple response regression compared with separate linear regressions is a challenging question. On the one hand, it is desirable to seek model parsimony when facing a large number of parameters. On the other hand, for certain applications it is necessary to take into account the general covariance structure for the errors of the regression model. We assume a reduced-rank regression model and work with the likelihood function with general error covariance to achieve both objectives. In addition we propose to select relevant variables for reduced-rank regression by using a sparsity-inducing penalty, and to estimate the error covariance matrix simultaneously by using a similar penalty on the precision matrix. We develop a numerical algorithm to solve the penalized regression problem. In a simulation study and real data analysis, the new method is compared with two recent methods for multivariate regression and exhibits competitive performance in prediction and variable selection.

  20. Theses "Discussion" Sections: A Structural Move Analysis

    Science.gov (United States)

    Nodoushan, Mohammad Ali Salmani; Khakbaz, Nafiseh

    2011-01-01

    The current study aimed at finding the probable differences between the move structure of Iranian MA graduates' thesis discussion subgenres and those of their non-Iranian counterparts, on the one hand, and those of journal paper authors, on the other. It also aimed at identifying the moves that are considered obligatory, conventional, or optional…