WorldWideScience

Sample records for Bayesian smoothing spline

  1. Smoothing quadratic and cubic splines

    OpenAIRE

    Oukropcová, Kateřina

    2014-01-01

    Title: Smoothing quadratic and cubic splines Author: Kateřina Oukropcová Department: Department of Numerical Mathematics Supervisor: RNDr. Václav Kučera, Ph.D., Department of Numerical Mathematics Abstract: The aim of this bachelor thesis is to study the topic of smoothing quadratic and cubic splines on uniform partitions. First, we define the basic concepts in the field of splines, next we introduce interpolating splines with a focus on their minimizing properties for odd degree and quadra...

  2. A smoothing algorithm using cubic spline functions

    Science.gov (United States)

    Smith, R. E., Jr.; Price, J. M.; Howser, L. M.

    1974-01-01

    Two algorithms are presented for smoothing arbitrary sets of data. They are the explicit variable algorithm and the parametric variable algorithm. The former would be used where large gradients are not encountered because of the smaller amount of calculation required. The latter would be used if the data being smoothed were double valued or experienced large gradients. Both algorithms use a least-squares technique to obtain a cubic spline fit to the data. The advantage of the spline fit is that the first and second derivatives are continuous. This method is best used in an interactive graphics environment so that the junction values for the spline curve can be manipulated to improve the fit.
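The least-squares cubic spline fit described above can be sketched with SciPy (a minimal illustration on synthetic data; the hand-picked interior knots stand in for the interactively manipulated junction values):

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

# Noisy samples of a smooth curve (synthetic data for illustration).
rng = np.random.default_rng(0)
x = np.linspace(0.0, 2.0 * np.pi, 100)
y = np.sin(x) + rng.normal(scale=0.1, size=x.size)

# Interior knots play the role of the "junction values"; in an interactive
# graphics environment they would be adjusted by hand to improve the fit.
knots = np.linspace(0.5, 2.0 * np.pi - 0.5, 6)

# Least-squares cubic spline: first and second derivatives are continuous
# by construction.
spline = LSQUnivariateSpline(x, y, knots, k=3)

# RMS deviation from the underlying noise-free curve.
residual = np.sqrt(np.mean((spline(x) - np.sin(x)) ** 2))
```

Moving or adding knots where the residual is large mimics the interactive workflow the abstract describes.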

  3. Some splines produced by smooth interpolation

    Czech Academy of Sciences Publication Activity Database

    Segeth, Karel

    2018-01-01

    Roč. 319, 15 February (2018), s. 387-394 ISSN 0096-3003 R&D Projects: GA ČR GA14-02067S Institutional support: RVO:67985840 Keywords : smooth data approximation * smooth data interpolation * cubic spline Subject RIV: BA - General Mathematics OBOR OECD: Applied mathematics Impact factor: 1.738, year: 2016 http://www.sciencedirect.com/science/article/pii/S0096300317302746?via%3Dihub

  5. Efficient computation of smoothing splines via adaptive basis sampling

    KAUST Repository

    Ma, Ping

    2015-06-24

    © 2015 Biometrika Trust. Smoothing splines provide flexible nonparametric regression estimators. However, the high computational cost of smoothing splines for large datasets has hindered their wide application. In this article, we develop a new method, named adaptive basis sampling, for efficient computation of smoothing splines in super-large samples. Except for the univariate case where the Reinsch algorithm is applicable, a smoothing spline for a regression problem with sample size n can be expressed as a linear combination of n basis functions and its computational complexity is generally O(n³). We achieve a more scalable computation in the multivariate case by evaluating the smoothing spline using a smaller set of basis functions, obtained by an adaptive sampling scheme that uses values of the response variable. Our asymptotic analysis shows that smoothing splines computed via adaptive basis sampling converge to the true function at the same rate as full basis smoothing splines. Using simulation studies and a large-scale deep earth core-mantle boundary imaging study, we show that the proposed method outperforms a sampling method that does not use the values of response variables.
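The reduced-basis idea can be caricatured in a few lines. This is a toy sketch of the concept only, not the paper's algorithm: the sampling weights below are a crude absolute-gradient proxy for a scheme that uses the response values, and a truncated-power basis with ridge penalty stands in for a proper smoothing spline.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, lam = 2000, 40, 1e-3          # n observations, m << n basis centers
x = np.sort(rng.uniform(0, 1, n))
y = np.sin(8 * x**2) + rng.normal(scale=0.1, size=n)

# Sampling weights from local response variability (assumed proxy for the
# paper's response-driven adaptive sampling scheme).
w = np.abs(np.gradient(y)) + 1e-6
centers = np.sort(rng.choice(x, size=m, replace=False, p=w / w.sum()))

# Truncated-power cubic basis at the sampled centers, plus a linear part.
B = np.hstack([np.ones((n, 1)), x[:, None],
               np.maximum(x[:, None] - centers[None, :], 0.0) ** 3])

# Penalized least squares on an (m+2)-dimensional system instead of n.
coef = np.linalg.solve(B.T @ B + lam * np.eye(B.shape[1]), B.T @ y)
fit = B @ coef

rmse = np.sqrt(np.mean((fit - np.sin(8 * x**2)) ** 2))
```

The linear algebra now scales with m rather than n, which is the computational point of basis sampling.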

  6. Genetic and environmental smoothing of lactation curves with cubic splines.

    Science.gov (United States)

    White, I M; Thompson, R; Brotherstone, S

    1999-03-01

    Most approaches to modeling lactation curves involve parametric curves with fixed or random coefficients. In either case, the resulting models require the specification of an underlying parametric curve. The fitting of splines represents a semiparametric approach to the problem. In the context of animal breeding, cubic smoothing splines are particularly convenient because they can be incorporated into a suitably constructed mixed model. The potential for the use of splines in modeling lactation curves is explored with a simple example, and the results are compared with those using a random regression model. The spline model provides greater flexibility at the cost of additional computation. Splines are shown to be capable of picking up features of the lactation curve that are missed by the random regression model.

  7. Comparative Analysis for Robust Penalized Spline Smoothing Methods

    Directory of Open Access Journals (Sweden)

    Bin Wang

    2014-01-01

    Full Text Available Smoothing noisy data is a common task in the engineering domain, and robust penalized regression spline models are currently perceived to be the most promising methods for this task, owing to their flexibility in capturing nonlinear trends in the data and effectively alleviating the disturbance from outliers. Against this background, this paper conducts a thorough comparative analysis of two popular robust smoothing techniques, the M-type estimator and S-estimation for penalized regression splines, both of which are re-elaborated from their origins, with their derivations reformulated and the corresponding algorithms reorganized under a unified framework. The performance of the two estimators is evaluated in terms of fitting accuracy, robustness, and execution time on the MATLAB platform. Comparative experiments demonstrate that robust penalized spline smoothing methods resist noise better than the nonrobust penalized LS spline regression method. Furthermore, the M-estimator performs stably only for observations with moderate perturbation error, whereas the S-estimator behaves well even for heavily contaminated observations, at the cost of more execution time. These findings can serve as guidance for selecting an appropriate approach for smoothing noisy data.
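As a rough illustration of the M-type idea (a sketch only; the paper's derivation and algorithms are more elaborate), an M-type penalized spline can be computed by iteratively reweighted least squares with Huber weights:

```python
import numpy as np

rng = np.random.default_rng(2)
n, lam, c = 200, 1e-2, 1.345          # c: standard Huber tuning constant
x = np.linspace(0, 1, n)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.1, size=n)
y[::25] += 3.0                        # inject gross outliers

# Truncated-power cubic spline basis with a ridge penalty (a simple stand-in
# for the penalized regression spline in the paper).
knots = np.linspace(0.1, 0.9, 15)
B = np.hstack([np.ones((n, 1)), x[:, None],
               np.maximum(x[:, None] - knots[None, :], 0.0) ** 3])
P = lam * np.eye(B.shape[1])

beta = np.linalg.solve(B.T @ B + P, B.T @ y)   # nonrobust penalized LS start
for _ in range(20):
    r = y - B @ beta
    s = np.median(np.abs(r)) / 0.6745          # robust scale via MAD
    w = np.clip(c * s / np.maximum(np.abs(r), 1e-12), None, 1.0)  # Huber weights
    W = B.T * w                                # B' diag(w)
    beta = np.linalg.solve(W @ B + P, W @ y)

robust_rmse = np.sqrt(np.mean((B @ beta - np.sin(2 * np.pi * x)) ** 2))
```

The Huber weights downweight the outlying observations, so the refitted spline tracks the underlying sine curve despite the contamination.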

  8. Testing for cubic smoothing splines under dependent data.

    Science.gov (United States)

    Nummi, Tapio; Pan, Jianxin; Siren, Tarja; Liu, Kun

    2011-09-01

    In most research on smoothing splines the focus has been on estimation, while inference, especially hypothesis testing, has received less attention. By defining design matrices for fixed and random effects and the structure of the covariance matrices of random errors in an appropriate way, the cubic smoothing spline admits a mixed model formulation, which places this nonparametric smoother firmly in a parametric setting. Thus nonlinear curves can be included with random effects and random coefficients. The smoothing parameter is the ratio of the random-coefficient and error variances and tests for linear regression reduce to tests for zero random-coefficient variances. We propose an exact F-test for the situation and investigate its performance in a real pine stem data set and by simulation experiments. Under certain conditions the suggested methods can also be applied when the data are dependent. © 2010, The International Biometric Society.

  9. Characterizing vaccine-associated risks using cubic smoothing splines.

    Science.gov (United States)

    Brookhart, M Alan; Walker, Alexander M; Lu, Yun; Polakowski, Laura; Li, Jie; Paeglow, Corrie; Puenpatom, Tosmai; Izurieta, Hector; Daniel, Gregory W

    2012-11-15

    Estimating risks associated with the use of childhood vaccines is challenging. The authors propose a new approach for studying short-term vaccine-related risks. The method uses a cubic smoothing spline to flexibly estimate the daily risk of an event after vaccination. The predicted incidence rates from the spline regression are then compared with the expected rates under a log-linear trend that excludes the days surrounding vaccination. The 2 models are then used to estimate the excess cumulative incidence attributable to the vaccination during the 42-day period after vaccination. Confidence intervals are obtained using a model-based bootstrap procedure. The method is applied to a study of known effects (positive controls) and expected noneffects (negative controls) of the measles, mumps, and rubella and measles, mumps, rubella, and varicella vaccines among children who are 1 year of age. The splines revealed well-resolved spikes in fever, rash, and adenopathy diagnoses, with the maximum incidence occurring between 9 and 11 days after vaccination. For the negative control outcomes, the spline model yielded a predicted incidence more consistent with the modeled day-specific risks, although there was evidence of increased risk of diagnoses of congenital malformations after vaccination, possibly because of a "provider visit effect." The proposed approach may be useful for vaccine safety surveillance.

  10. Modeling and testing treated tumor growth using cubic smoothing splines.

    Science.gov (United States)

    Kong, Maiying; Yan, Jun

    2011-07-01

    Human tumor xenograft models are often used in preclinical study to evaluate the therapeutic efficacy of a certain compound or a combination of certain compounds. In a typical human tumor xenograft model, human carcinoma cells are implanted to subjects such as severe combined immunodeficient (SCID) mice. Treatment with test compounds is initiated after tumor nodule has appeared, and continued for a certain time period. Tumor volumes are measured over the duration of the experiment. It is well known that untreated tumor growth may follow certain patterns, which can be described by certain mathematical models. However, the growth patterns of the treated tumors with multiple treatment episodes are quite complex, and the usage of parametric models is limited. We propose using cubic smoothing splines to describe tumor growth for each treatment group and for each subject, respectively. The proposed smoothing splines are quite flexible in modeling different growth patterns. In addition, using this procedure, we can obtain tumor growth and growth rate over time for each treatment group and for each subject, and examine whether tumor growth follows certain growth pattern. To examine the overall treatment effect and group differences, the scaled chi-squared test statistics based on the fitted group-level growth curves are proposed. A case study is provided to illustrate the application of this method, and simulations are carried out to examine the performances of the scaled chi-squared tests. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. Smoothing noisy spectroscopic data with many-knot spline method

    Energy Technology Data Exchange (ETDEWEB)

    Zhu, M.H. [Space Exploration Laboratory, Macau University of Science and Technology, Taipa, Macau (China)], E-mail: peter_zu@163.com; Liu, L.G.; Qi, D.X.; You, Z.; Xu, A.A. [Space Exploration Laboratory, Macau University of Science and Technology, Taipa, Macau (China)

    2008-05-15

    In this paper, we present the development of a many-knot spline method derived to remove the statistical noise in the spectroscopic data. This method is an expansion of the B-spline method. Compared to the B-spline method, the many-knot spline method is significantly faster.

  12. Smoothing two-dimensional Malaysian mortality data using P-splines indexed by age and year

    Science.gov (United States)

    Kamaruddin, Halim Shukri; Ismail, Noriszura

    2014-06-01

    Nonparametric regression derives the best-fitting model from a large class of flexible functions rather than from a fixed parametric form. Eilers and Marx (1996) introduced P-splines as a method of smoothing in generalized linear models (GLMs), in which ordinary B-splines with a difference roughness penalty on the coefficients are fitted to one-dimensional mortality data. Modeling and forecasting mortality rates is a problem of fundamental importance in insurance calculations, where the accuracy of models and forecasts is the main concern of the industry. Here, the original idea of P-splines is extended to two-dimensional mortality data, indexed by age of death and year of death, with the large data set supplied by the Department of Statistics Malaysia. This extension constructs the best-fitted surface and provides sensible predictions of the underlying mortality rates in the Malaysian case.
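The P-spline construction of Eilers and Marx can be sketched in one dimension: a B-spline basis on equally spaced knots plus a discrete difference penalty on the coefficients. (Synthetic data; the paper's two-dimensional extension adds a second basis and penalty for the year index.)

```python
import numpy as np
from scipy.interpolate import BSpline

rng = np.random.default_rng(3)
n, k, lam = 150, 3, 0.1
x = np.linspace(0, 1, n)
y = np.exp(-3 * x) * np.sin(6 * x) + rng.normal(scale=0.05, size=n)

# Equally spaced knots with the usual boundary padding for a cubic basis.
nseg = 20
inner = np.linspace(0, 1, nseg + 1)
t = np.concatenate([np.zeros(k), inner, np.ones(k)])
nb = len(t) - k - 1

# Evaluate each B-spline basis function via an indicator coefficient vector.
B = np.column_stack([
    BSpline(t, (np.arange(nb) == j).astype(float), k)(x) for j in range(nb)
])

# Second-order difference penalty D2' D2 on the coefficients.
D2 = np.diff(np.eye(nb), n=2, axis=0)
alpha = np.linalg.solve(B.T @ B + lam * D2.T @ D2, B.T @ y)
fit = B @ alpha

rmse = np.sqrt(np.mean((fit - np.exp(-3 * x) * np.sin(6 * x)) ** 2))
```

The penalty weight lam controls smoothness directly, so the number of knots can be generous without overfitting.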

  13. Exact sampling of the unobserved covariates in Bayesian spline models for measurement error problems.

    Science.gov (United States)

    Bhadra, Anindya; Carroll, Raymond J

    2016-07-01

    In truncated polynomial spline or B-spline models where the covariates are measured with error, a fully Bayesian approach to model fitting requires the covariates and model parameters to be sampled at every Markov chain Monte Carlo iteration. Sampling the unobserved covariates poses a major computational problem and usually Gibbs sampling is not possible. This forces the practitioner to use a Metropolis-Hastings step which might suffer from unacceptable performance due to poor mixing and might require careful tuning. In this article we show for the cases of truncated polynomial spline or B-spline models of degree equal to one, the complete conditional distribution of the covariates measured with error is available explicitly as a mixture of double-truncated normals, thereby enabling a Gibbs sampling scheme. We demonstrate via a simulation study that our technique performs favorably in terms of computational efficiency and statistical performance. Our results indicate up to 62 and 54 % increase in mean integrated squared error efficiency when compared to existing alternatives while using truncated polynomial splines and B-splines respectively. Furthermore, there is evidence that the gain in efficiency increases with the measurement error variance, indicating the proposed method is a particularly valuable tool for challenging applications that present high measurement error. We conclude with a demonstration on a nutritional epidemiology data set from the NIH-AARP study and by pointing out some possible extensions of the current work.

  14. Clustering Time-Series Gene Expression Data Using Smoothing Spline Derivatives

    Directory of Open Access Journals (Sweden)

    Martin PGP

    2007-01-01

    Full Text Available Microarray data acquired during time-course experiments allow the temporal variations in gene expression to be monitored. An original postprandial fasting experiment was conducted in the mouse and the expression of 200 genes was monitored with a dedicated macroarray at 11 time points between 0 and 72 hours of fasting. The aim of this study was to provide a relevant clustering of gene expression temporal profiles. This was achieved by focusing on the shapes of the curves rather than on the absolute level of expression. To this end, we combined spline smoothing and first derivative computation with hierarchical and partitioning clustering. A heuristic approach was proposed to tune the spline smoothing parameter using both statistical and biological considerations. Clusters are illustrated a posteriori through principal component analysis and heatmap visualization. Most results were found to be in agreement with the literature on the effects of fasting on the mouse liver and provide promising directions for future biological investigations.
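The smooth-then-differentiate-then-cluster pipeline can be sketched as follows (synthetic profiles of two known shapes stand in for the 200 genes; the study's heuristic smoothing-parameter tuning is replaced by a fixed value):

```python
import numpy as np
from scipy.interpolate import UnivariateSpline
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(4)
t = np.linspace(0, 72, 11)                      # hours of fasting

# Ten noisy "rising" and ten noisy "falling" expression profiles.
rising = [np.tanh((t - 30) / 10) + rng.normal(scale=0.05, size=11)
          for _ in range(10)]
falling = [-np.tanh((t - 30) / 10) + rng.normal(scale=0.05, size=11)
           for _ in range(10)]
profiles = np.array(rising + falling)

# Smooth each profile, differentiate the spline, and sample the derivative
# on a fine grid: clustering then compares curve shapes, not levels.
grid = np.linspace(0, 72, 50)
derivs = np.array([
    UnivariateSpline(t, p, s=0.1).derivative()(grid) for p in profiles
])

# Hierarchical (Ward) clustering of the derivative curves into two groups.
labels = fcluster(linkage(derivs, method="ward"), t=2, criterion="maxclust")
```

Because the derivative of a rising profile is positive where a falling profile's is negative, the two shape families separate cleanly even when their expression levels overlap.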

  16. Evaluation of Two New Smoothing Methods in Equating: The Cubic B-Spline Presmoothing Method and the Direct Presmoothing Method

    Science.gov (United States)

    Cui, Zhongmin; Kolen, Michael J.

    2009-01-01

    This article considers two new smoothing methods in equipercentile equating, the cubic B-spline presmoothing method and the direct presmoothing method. Using a simulation study, these two methods are compared with established methods, the beta-4 method, the polynomial loglinear method, and the cubic spline postsmoothing method, under three sample…

  17. Kinetic energy classification and smoothing for compact B-spline basis sets in quantum Monte Carlo

    Science.gov (United States)

    Krogel, Jaron T.; Reboredo, Fernando A.

    2018-01-01

    Quantum Monte Carlo calculations of defect properties of transition metal oxides have become feasible in recent years due to increases in computing power. As the system size has grown, availability of on-node memory has become a limiting factor. Saving memory while minimizing computational cost is now a priority. The main growth in memory demand stems from the B-spline representation of the single particle orbitals, especially for heavier elements such as transition metals where semi-core states are present. Despite the associated memory costs, splines are computationally efficient. In this work, we explore alternatives to reduce the memory usage of splined orbitals without significantly affecting numerical fidelity or computational efficiency. We make use of the kinetic energy operator to both classify and smooth the occupied set of orbitals prior to splining. By using a partitioning scheme based on the per-orbital kinetic energy distributions, we show that memory savings of about 50% are possible for select transition metal oxide systems. For production supercells of practical interest, our scheme incurs a performance penalty of less than 5%.

  18. A B-Spline Framework for Smooth Derivative Computation in Well Test Analysis Using Diagnostic Plots.

    Science.gov (United States)

    Tago, Josué; Hernández-Espriú, Antonio

    2018-01-01

    In the oil and gas industry, well test analysis using derivative plots has been the core technique for examining reservoir and well behavior over the last three decades. Recently, diagnostic plots have gained recognition in the field of hydrogeology; however, this tool is still underused by groundwater professionals. The foremost drawback is that the derivative function must be computed from noisy field measurements, usually with finite-difference schemes, which complicates the analysis. We propose a B-spline framework for smooth derivative computation, referred to as Constrained Quartic B-Splines with Free Knots. The approach presents the following novelties in relation to methodological precedents: (1) the use of automatic equality derivative constraints, (2) a knot removal strategy and (3) the introduction of a Boolean shape parameter that defines the number of initial knots to choose. These allow both simple (manually recorded drawdown measurements) and complex (transducer-measured records) datasets to be evaluated. Furthermore, we propose an additional shape-preserving smoothing preprocessing procedure as a simple, fast and robust method to deal with extremely noisy signals. Our framework was tested on four pumping tests by comparing the spline derivative against the Bourdet algorithm; the latter is rather noisy (even for large differentiation intervals) and its second derivative response is basically unreadable. In contrast, the spline first and second derivatives led to smoother responses, which are more suitable for interpretation. We conclude that the proposed framework is a welcome contribution for evaluating aquifer tests reliably using derivative-diagnostic plots. © 2017, National Ground Water Association.
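The core idea, differentiating a fitted spline instead of finite-differencing noisy data, can be sketched simply (synthetic late-time radial-flow drawdown; a plain quartic smoothing spline stands in for the paper's constrained quartic B-splines with free knots):

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(5)
logt = np.linspace(-2, 3, 120)                      # log10 of time
# Straight-line drawdown in log-time (idealized radial flow) plus noise.
drawdown = 1.15 * (logt + 2.0) + rng.normal(scale=0.05, size=logt.size)

# Quartic smoothing spline; s is set from the assumed noise level.
spline = UnivariateSpline(logt, drawdown, k=4, s=logt.size * 0.05**2)
dd = spline.derivative()(logt)                      # smooth log-derivative

# For radial flow the log-derivative of the diagnostic plot is ~constant.
late_mean = dd[-40:].mean()
```

A finite-difference derivative of the same data would scatter strongly around this plateau, which is exactly the readability problem the abstract describes.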

  19. Smoothing data series by means of cubic splines: quality of approximation and introduction of a repeating spline approach

    Science.gov (United States)

    Wüst, Sabine; Wendt, Verena; Linz, Ricarda; Bittner, Michael

    2017-09-01

    Cubic splines with equidistant spline sampling points are a common method in atmospheric science, used for the approximation of background conditions by means of filtering superimposed fluctuations from a data series. What is defined as background or superimposed fluctuation depends on the specific research question. The latter also determines whether the spline or the residuals - the subtraction of the spline from the original time series - are further analysed. Based on test data sets, we show that the quality of approximation of the background state does not increase continuously with an increasing number of spline sampling points and/or decreasing distance between two spline sampling points. Splines can generate considerable artificial oscillations in the background and the residuals. We introduce a repeating spline approach which is able to significantly reduce this phenomenon. We apply it not only to the test data but also to TIMED-SABER temperature data and choose the distance between two spline sampling points in a way that is sensitive for a large spectrum of gravity waves.

  1. Magnetotelluric (MT) data smoothing based on B-Spline algorithm and qualitative spectral analysis

    Science.gov (United States)

    Handyarso, Accep; Grandis, Hendra

    2017-07-01

    Data processing is one of the essential steps in obtaining the optimum response function of the Earth's subsurface. MT data processing is based on the Fast Fourier Transform (FFT) algorithm, which converts the time series data into its frequency domain counterpart. The FFT combined with a statistical algorithm constitutes the Robust Processing algorithm, which is widely implemented in MT data processing software. Robust Processing has three variants, i.e. No Weight (NW), Rho Variance (RV), and Ordinary Coherency (OC). The RV and OC options allow for denoising the data, but in many cases Robust Processing still results in a sounding curve that is not smooth due to strong noise present during measurement, so that Crosspower (XPR) analysis must be conducted in the data processing. XPR analysis is a very time-consuming step. Combining the B-Spline algorithm with Qualitative Spectral Analysis in the frequency domain offers an advantageous alternative to these steps. The technique starts from the best coherency among the Robust Processing results. In the Qualitative Spectral Analysis, one determines, by frequency, which parts of the data are more or less reliable; the B-Spline algorithm is then invoked for data smoothing, selecting the best fit to the data trend in the frequency domain. The resulting smooth apparent resistivity and phase sounding curves can be considered more appropriate representations of the subsurface. The algorithm has been applied to real MT data from several surveys and gives satisfactory results.

  2. Polynomial estimation of the smoothing splines for the new Finnish reference values for spirometry.

    Science.gov (United States)

    Kainu, Annette; Timonen, Kirsi

    2016-07-01

    Background: Discontinuity of spirometry reference values from childhood into adulthood has been a problem with traditional reference values; modern modelling approaches using smoothing spline functions to better depict the transition during growth and ageing have therefore been introduced recently. Following the publication of the new international Global Lung Initiative (GLI2012) reference values, new national Finnish reference values have also been calculated using similar GAMLSS modelling, with spline estimates for the mean (Mspline) and standard deviation (Sspline) provided in lookup tables. The aim of this study was to produce polynomial estimates of these spline functions for use in lieu of the lookup tables and to assess their validity in the reference population of healthy non-smokers. Methods: Linear regression modelling was used to approximate the estimated values of Mspline and Sspline using polynomial functions similar to those in the international GLI2012 reference values. Estimated values were compared with the original calculations in terms of absolute values, the derived predicted mean, and individually calculated z-scores. Results: Polynomial functions were estimated for all 10 spirometry variables. The agreement between the original lookup-table values and the polynomial estimates was very good, with no significant differences found. The variation increased slightly at larger predicted volumes, but the maximum difference ranged only from -0.018 to +0.022 litres of FEV1, representing ±0.4% of the predicted mean. Conclusions: The polynomial approximations were very close to the original lookup tables and are recommended for use in clinical practice to facilitate the use of the new reference values.
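The approximation step can be sketched generically: fit a polynomial in log-age to tabulated spline values by linear regression and check the residual error. (The Mspline values below are a synthetic stand-in, not the published Finnish tables; the GLI-style log-age regressor is an assumption.)

```python
import numpy as np

# Synthetic "lookup table" of spline values over an adult age range.
age = np.linspace(18, 90, 200)
mspline = 0.5 * np.log(age) - 0.002 * (age - 40) ** 2 / 50

# Polynomial estimate of the spline, fitted by least squares in log(age).
coeffs = np.polyfit(np.log(age), mspline, deg=4)
approx = np.polyval(coeffs, np.log(age))

# Agreement between the lookup table and its polynomial replacement.
max_abs_err = np.max(np.abs(approx - mspline))
```

A handful of coefficients then replaces the whole table, which is the practical point of the study.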

  3. SPLINE, Spline Interpolation Function

    International Nuclear Information System (INIS)

    Allouard, Y.

    1977-01-01

    1 - Nature of physical problem solved: The problem is to obtain an interpolated function, as smooth as possible, that passes through given points. The derivatives of these functions are continuous up to the (2Q-1) order. The program consists of the following two subprograms: ASPLERQ. Transport of relations method for the spline functions of interpolation. SPLQ. Spline interpolation. 2 - Method of solution: The methods are described in the reference under item 10

  4. Average course approximation of measured subsidence and inclinations of mining area by smooth splines

    Directory of Open Access Journals (Sweden)

    Justyna Orwat

    2017-01-01

    Full Text Available The article presents the results of determining the average courses of subsidence measured at the points of measuring line no. 1 of the "Budryk" Hard Coal Mine, set approximately perpendicular to the face run of four consecutively mined longwalls in coal bed 338/2. Smoothing splines were used to approximate the average course of the measured subsidence after subsequent exploitation stages. Minimising the sum of squared differences between the average and the forecasted subsidence, computed with J. Bialek's formula, served as the criterion for selecting the smoothing parameters of the approximating function. The parameter values of this formula were chosen to match the forecasted subsidence to the measured values. The average inclinations were calculated on the basis of the approximated values of the observed subsidence. It is shown that in this way the average values of the extreme measured inclinations can be obtained in almost the same manner as the extreme observed inclinations, without dividing the whole profile of the subsidence basin into parts. The obtained variability coefficients of random scattering for subsidence and inclinations are smaller than the values reported in the literature.

  5. FUSED KERNEL-SPLINE SMOOTHING FOR REPEATEDLY MEASURED OUTCOMES IN A GENERALIZED PARTIALLY LINEAR MODEL WITH FUNCTIONAL SINGLE INDEX.

    Science.gov (United States)

    Jiang, Fei; Ma, Yanyuan; Wang, Yuanjia

    We propose a generalized partially linear functional single index risk score model for repeatedly measured outcomes where the index itself is a function of time. We fuse the nonparametric kernel method and the regression spline method, and modify the generalized estimating equation to facilitate estimation and inference. We use a local smoothing kernel to estimate the unspecified coefficient functions of time, and use B-splines to estimate the unspecified function of the single index component. The covariance structure is taken into account via a working model, which provides a valid estimation and inference procedure whether or not it captures the true covariance. The estimation method is applicable to both continuous and discrete outcomes. We derive large sample properties of the estimation procedure and show the different convergence rates of each component of the model. The asymptotic properties of the kernel and regression spline methods combined in a nested fashion have not been studied prior to this work, even in the independent data case.

  6. An ultrasound study of Canadian French rhotic vowels with polar smoothing spline comparisons.

    Science.gov (United States)

    Mielke, Jeff

    2015-05-01

    This is an acoustic and articulatory study of Canadian French rhotic vowels, i.e., mid front rounded vowels /ø œ̃ œ/ produced with a rhotic perceptual quality, much like English [ɚ] or [ɹ], leading heureux, commun, and docteur to sound like [ɚʁɚ], [kɔmɚ̃], and [dɔktaɹʁ]. Ultrasound, video, and acoustic data from 23 Canadian French speakers are analyzed using several measures of mid-sagittal tongue contours, showing that the low F3 of rhotic vowels is achieved using bunched and retroflex tongue postures and that the articulatory-acoustic mapping of F1 and F2 are rearranged in systems with rhotic vowels. A subset of speakers' French vowels are compared with their English [ɹ]/[ɚ], revealing that the French vowels are consistently less extreme in low F3 and its articulatory correlates, even for the most rhotic speakers. Polar coordinates are proposed as a replacement for Cartesian coordinates in calculating smoothing spline comparisons of mid-sagittal tongue shapes, because they enable comparisons to be roughly perpendicular to the tongue surface, which is critical for comparisons involving tongue root position but appropriate for all comparisons involving mid-sagittal tongue contours.

  7. Smoothing X-ray spectra with regression splines and fast Fourier transform; Wygladzanie widm promieniowania X metodami regresyjnych funkcji sklejanych i szybkiej transformaty Fouriera

    Energy Technology Data Exchange (ETDEWEB)

    Antoniak, W.; Urbanski, P. [Institute of Nuclear Chemistry and Technology, Warsaw (Poland)

    1996-12-31

    Regression splines and fast Fourier transform (FFT) methods were used for smoothing X-ray spectra obtained from proportional counters. The programs for computation and optimization of the smoothed spectra were written in the MATLAB language. It was shown that application of the smoothed spectra in multivariate calibration can result in a considerable reduction of measurement errors. (author). 8 refs, 9 figs.
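
The FFT half of the record's approach amounts to low-pass filtering: transform the spectrum, zero the high-frequency coefficients, and transform back. A minimal NumPy sketch; the synthetic "spectrum" and the retained fraction are illustrative, not the authors' settings:

```python
import numpy as np

def fft_smooth(y, keep_frac=0.1):
    """Smooth a spectrum by zeroing all but the lowest-frequency
    `keep_frac` of its Fourier coefficients (a crude low-pass filter)."""
    Y = np.fft.rfft(y)
    cutoff = max(1, int(len(Y) * keep_frac))
    Y[cutoff:] = 0.0
    return np.fft.irfft(Y, n=len(y))

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 512)
clean = np.exp(-0.5 * ((x - 0.5) / 0.05) ** 2)   # a single spectral "peak"
noisy = clean + rng.normal(scale=0.05, size=x.size)
smoothed = fft_smooth(noisy, keep_frac=0.08)
```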

  8. Cubic smoothing splines background correction in on-line liquid chromatography-Fourier transform infrared spectrometry.

    Science.gov (United States)

    Kuligowski, Julia; Carrión, David; Quintás, Guillermo; Garrigues, Salvador; de la Guardia, Miguel

    2010-10-22

    A background correction method for the on-line coupling of gradient liquid chromatography and Fourier transform infrared spectrometry (LC-FTIR) is proposed. The developed approach applies univariate background correction to each variable (i.e. each wave number) individually. Spectra measured in the region before and after each peak cluster are used as knots to model the variation of the eluent absorption intensity with time using cubic smoothing spline (CSS) functions. The new approach has been successfully tested on simulated as well as on real data sets obtained from injections of standard mixtures of polyethylene glycols with four different molecular weights in methanol:water, 2-propanol:water and ethanol:water gradients ranging from 30 to 90, 10 to 25 and from 10 to 40% (v/v) of organic modifier, respectively. Calibration lines showed high linearity with coefficients of determination higher than 0.98 and limits of detection between 0.4 and 1.4, 0.9 and 1.8, and 1.1 and 2.7 mg mL⁻¹ in methanol:water, 2-propanol:water and ethanol:water, respectively. Furthermore, the method's performance has been compared with a univariate background correction approach based on the use of a reference spectra matrix (UBC-RSM) to discuss the potential as well as the pitfalls and drawbacks of the proposed approach. This method works without previous variable selection and requires minimal user interaction, thus drastically increasing the feasibility of on-line coupling of gradient LC-FTIR. Copyright © 2010 Elsevier B.V. All rights reserved.
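
The knot-based correction described above can be sketched per wavenumber channel: fit a cubic smoothing spline through the regions before and after the peak cluster and subtract its prediction across the peaks. A toy single-channel example using SciPy's `UnivariateSpline`; the chromatogram, knot windows and smoothing factor are invented for illustration:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Hypothetical single-channel signal: eluent background drifting with the
# gradient, plus one analyte peak cluster between t = 40 and t = 60.
t = np.arange(0.0, 100.0)
background = 0.02 * t + 0.5 * np.sin(t / 15.0)
peak = 3.0 * np.exp(-0.5 * ((t - 50.0) / 4.0) ** 2)
signal = background + peak

# Knot regions before and after the peak cluster, as in the paper's scheme
baseline = (t < 35) | (t > 65)

# Cubic smoothing spline through the baseline points only, evaluated
# across the peak region to model the eluent absorption with time.
spl = UnivariateSpline(t[baseline], signal[baseline], k=3, s=0.01)
corrected = signal - spl(t)
```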

  9. Smooth ROC curves and surfaces for markers subject to a limit of detection using monotone natural cubic splines.

    Science.gov (United States)

    Bantis, Leonidas E; Tsimikas, John V; Georgiou, Stelios D

    2013-09-01

    The use of ROC curves in evaluating a continuous or ordinal biomarker for the discrimination of two populations is commonplace. However, in many settings, marker measurements above or below a certain value cannot be obtained. In this paper, we study the construction of a smooth ROC curve (or surface in the case of three populations) when there is a lower or upper limit of detection. We propose the use of spline models that incorporate monotonicity constraints for the cumulative hazard function of the marker distribution. The proposed technique is computationally stable and simulation results showed a satisfactory performance. Other observed covariates can also be accommodated by this spline-based approach. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  10. Extraction of Airways with Probabilistic State-Space Models and Bayesian Smoothing

    DEFF Research Database (Denmark)

    Raghavendra, Selvan; Petersen, Jens; Pedersen, Jesper Johannes Holst

    of elongated branches using probabilistic state-space models and Bayesian smoothing. Unlike most existing methods that proceed with sequential tracking of branches, we present an exploratory method that is less sensitive to local anomalies in the data due to acquisition noise and/or interfering structures. … The evolution of individual branches is modelled using a process model, and the observed data is incorporated into the update step of the Bayesian smoother using a measurement model that is based on a multi-scale blob detector. Bayesian smoothing is performed using the RTS (Rauch-Tung-Striebel) smoother, which …
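
For reference, the RTS smoother named in this record is the standard forward-filter/backward-smoother recursion for linear-Gaussian state-space models. A minimal scalar sketch (a 1-D random walk observed in noise, not the paper's airway model; all numbers illustrative):

```python
import numpy as np

# Forward Kalman filter, then the backward RTS smoothing recursion.
rng = np.random.default_rng(1)
n, q, r = 200, 0.01, 1.0                   # steps, process var, obs var
x_true = np.cumsum(rng.normal(scale=np.sqrt(q), size=n))
y = x_true + rng.normal(scale=np.sqrt(r), size=n)

# Forward pass: store predicted and filtered moments (F = H = 1)
m_f = np.zeros(n); P_f = np.zeros(n)       # filtered mean / variance
m_p = np.zeros(n); P_p = np.zeros(n)       # predicted mean / variance
m, P = 0.0, 10.0                           # diffuse-ish prior
for k in range(n):
    m_p[k], P_p[k] = m, P + q              # predict
    K = P_p[k] / (P_p[k] + r)              # Kalman gain
    m = m_p[k] + K * (y[k] - m_p[k])       # update
    P = (1 - K) * P_p[k]
    m_f[k], P_f[k] = m, P

# Backward RTS pass
m_s = m_f.copy(); P_s = P_f.copy()
for k in range(n - 2, -1, -1):
    G = P_f[k] / P_p[k + 1]                # smoother gain
    m_s[k] = m_f[k] + G * (m_s[k + 1] - m_p[k + 1])
    P_s[k] = P_f[k] + G ** 2 * (P_s[k + 1] - P_p[k + 1])
```

The smoothed estimate uses all observations, so it is both more accurate and less jittery than the forward filter alone.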

  11. Data assimilation using Bayesian filters and B-spline geological models

    KAUST Repository

    Duan, Lian

    2011-04-01

    This paper proposes a new approach to problems of data assimilation, also known as history matching, of oilfield production data by adjustment of the location and sharpness of patterns of geological facies. Traditionally, this problem has been addressed using gradient based approaches with a level set parameterization of the geology. Gradient-based methods are robust, but computationally demanding with real-world reservoir problems and insufficient for reservoir management uncertainty assessment. Recently, the ensemble filter approach has been used to tackle this problem because of its high efficiency from the standpoint of implementation, computational cost, and performance. Incorporation of level set parameterization in this approach could further deal with the lack of differentiability with respect to facies type, but its practical implementation is based on some assumptions that are not easily satisfied in real problems. In this work, we propose to describe the geometry of the permeability field using B-spline curves. This transforms history matching of the discrete facies type to the estimation of continuous B-spline control points. As the filtering scheme, we use the ensemble square-root filter (EnSRF). The efficacy of the EnSRF with the B-spline parameterization is investigated through three numerical experiments, in which the reservoir contains a curved channel, a disconnected channel or a 2-dimensional closed feature. It is found that the application of the proposed method to the problem of adjusting facies edges to match production data is relatively straightforward and provides statistical estimates of the distribution of geological facies and of the state of the reservoir.
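
The B-spline parameterization means a facies edge is fully described by a handful of control points, which the ensemble filter can then adjust continuously. A sketch with SciPy's `BSpline`; the control points and knot vector below are illustrative, not from the paper:

```python
import numpy as np
from scipy.interpolate import BSpline

# A facies edge as a cubic B-spline curve: history matching then adjusts
# the continuous control points instead of a discrete facies field.
degree = 3
ctrl = np.array([[0.0, 0.0], [1.0, 2.0], [2.5, 2.5], [4.0, 1.0], [5.0, 0.5]])
n = len(ctrl)

# Clamped knot vector (length n + degree + 1), so the curve starts and
# ends exactly at the first and last control points.
knots = np.concatenate(([0.0] * degree,
                        np.linspace(0.0, 1.0, n - degree + 1),
                        [1.0] * degree))
curve = BSpline(knots, ctrl, degree)

u = np.linspace(0.0, 1.0, 100)
pts = curve(u)                      # (100, 2) points along the channel edge
```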

  12. Estimated breeding values and association mapping for persistency and total milk yield using natural cubic smoothing splines.

    Science.gov (United States)

    Verbyla, Klara L; Verbyla, Arunas P

    2009-11-05

    For dairy producers, a reliable description of lactation curves is a valuable tool for management and selection. From a breeding and production viewpoint, milk yield persistency and total milk yield are important traits. Understanding the genetic drivers for the phenotypic variation of both these traits could provide a means for improving these traits in commercial production. It has been shown that Natural Cubic Smoothing Splines (NCSS) can model the features of lactation curves with greater flexibility than the traditional parametric methods. NCSS were used to model the sire effect on the lactation curves of cows. The sire solutions for persistency and total milk yield were derived using NCSS and a whole-genome approach based on a hierarchical model was developed for a large association study using single nucleotide polymorphisms (SNP). Estimated sire breeding values (EBV) for persistency and milk yield were calculated using NCSS. Persistency EBV were correlated with peak yield but not with total milk yield. Several SNP were found to be associated with both traits and these were used to identify candidate genes for further investigation. NCSS can be used to estimate EBV for lactation persistency and total milk yield, which in turn can be used in whole-genome association studies.

  13. Application of SCM with Bayesian B-Spline to Spatio-Temporal Analysis of Hypertension in China.

    Science.gov (United States)

    Ye, Zirong; Xu, Li; Zhou, Zi; Wu, Yafei; Fang, Ya

    2018-01-02

    Most previous research on the disparities of hypertension risk has neither simultaneously explored the spatio-temporal disparities nor considered the spatial information contained in the samples, thus the estimated results may be unreliable. Our study was based on the China Health and Nutrition Survey (CHNS), including residents over 12 years old in seven provinces from 1991 to 2011. Bayesian B-spline was used in the extended shared component model (SCM) for fitting temporal-related variation to explore spatio-temporal distribution in the odds ratio (OR) of hypertension, reveal gender variation, and explore latent risk factors. Our results revealed that the prevalence of hypertension increased from 14.09% in 1991 to 32.37% in 2011, with men experiencing a more obvious change than women. From a spatial perspective, a standardized prevalence ratio (SPR) remaining at a high level was found in Henan and Shandong for both men and women. Meanwhile, before 1997, the temporal distribution of hypertension risk for both men and women remained low. After that, notably since 2004, the OR of hypertension in each province increased to a relatively high level, especially in Northern China. Notably, the OR of hypertension in Shandong and Jiangsu, which was over 1.2, continuously stood out after 2004 for males, while that in Shandong and Guangxi was relatively high for females. The findings suggested that obvious spatial-temporal patterns for hypertension exist in the regions under research and this pattern was quite different between men and women.

  14. Smoothness as a failure mode of Bayesian mixture models in brain-machine interfaces.

    Science.gov (United States)

    Yousefi, Siamak; Wein, Alex; Kowalski, Kevin C; Richardson, Andrew G; Srinivasan, Lakshminarayan

    2015-01-01

    Various recursive Bayesian filters based on reach state equations (RSE) have been proposed to convert neural signals into reaching movements in brain-machine interfaces. When the target is known, RSE produce exquisitely smooth trajectories relative to the random walk prior in the basic Kalman filter. More realistically, the target is unknown, and gaze analysis or other side information is expected to provide a discrete set of potential targets. In anticipation of this scenario, various groups have implemented RSE-based mixture (hybrid) models, which define a discrete random variable to represent target identity. While principled, this approach sacrifices the smoothness of RSE with known targets. This paper combines empirical spiking data from primary motor cortex and mathematical analysis to explain this loss in performance. We focus on angular velocity as a meaningful and convenient measure of smoothness. Our results demonstrate that angular velocity in the trajectory is approximately proportional to change in target probability. The constant of proportionality equals the difference in heading between parallel filters from the two most probable targets, suggesting a smoothness benefit to more narrowly spaced targets. Simulation confirms that measures to smooth the data likelihood also improve the smoothness of hybrid trajectories, including increased ensemble size and uniformity in preferred directions. We speculate that closed-loop training or neuronal subset selection could be used to shape the user's tuning curves towards this end.

  15. Possibility of using the smoothed spline functions in approximation of average course of terrain inclinations caused by underground mining exploitation conducted at medium depth

    Science.gov (United States)

    Orwat, J.

    2018-01-01

    This paper presents a way of obtaining the average values of terrain inclinations caused by exploitation of the 338/2 coal bed, conducted at medium depth by four longwalls. The inclinations were measured at sections of a measuring line established over the excavations, perpendicular to their runways, after the termination of subsequent exploitation stages. The average courses of the measured inclinations were calculated on the basis of the average values of measured subsidence, obtained by mean-square approximation with smooth splines, with reference to their theoretical values calculated via the formulas of S. Knothe and J. Bialek using typical parameter values. Two average courses were thus obtained after the end of each exploitation period. The standard deviations between average and measured inclinations σI and the variability coefficients of random scattering of inclinations MI were calculated and compared with values reported in the literature, and on this basis the suitability of smooth splines for determining the average course of the observed inclinations of the mining area was evaluated.
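
The core computation, recovering inclinations as the analytical derivative of a spline fitted to subsidence, can be sketched as follows. The subsidence trough and smoothing factor below are synthetic (a Gaussian-like profile standing in for a Knothe-type curve), not the 338/2 coal bed data:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Average subsidence along a measuring line, approximated by a smoothing
# spline; terrain inclination is read off as the spline's first derivative.
x = np.linspace(-400.0, 400.0, 81)            # distance along line [m]
w_true = -1.5 * np.exp(-(x / 150.0) ** 2)     # subsidence [m]
rng = np.random.default_rng(2)
w_obs = w_true + rng.normal(scale=0.02, size=x.size)

# Quintic smoothing spline so the first derivative is itself very smooth
spl = UnivariateSpline(x, w_obs, k=5, s=x.size * 0.02 ** 2)
inclination = spl.derivative()(x)             # terrain slope [m/m]
```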

  16. Improving mouse controlling and movement for people with Parkinson's disease and involuntary tremor using adaptive path smoothing technique via B-spline.

    Science.gov (United States)

    Hashem, Seyed Yashar Bani; Zin, Nor Azan Mat; Yatim, Noor Faezah Mohd; Ibrahim, Norlinah Mohamed

    2014-01-01

    Many input devices are available for interacting with computers, but the computer mouse is still the most popular device for interaction. People who suffer from involuntary tremor have difficulty using the mouse in the normal way. The target participants of this research were individuals who suffer from Parkinson's disease. Tremor in the limbs makes accurate mouse movements difficult or impossible without assistive technologies. This study explores a new assistive technique, adaptive path smoothing via B-spline (APSS), to enhance mouse control based on the user's tremor level and type. APSS uses mean filtering and B-splines to provide a smoothed mouse trajectory. Seven participants who have unwanted tremor evaluated APSS. Results show that APSS is very promising and greatly increases their control of the computer mouse. Results of a user acceptance test also show that users perceived APSS as easy to use. They also believed it to be a useful tool and intended to use it once it becomes available. Future studies could explore the possibility of integrating APSS with an assistive pointing technique, such as the Bubble cursor or the Sticky target technique, to provide an all-in-one solution for motor-disabled users.
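
The APSS pipeline described above, a mean filter followed by B-spline smoothing of the pointer path, can be sketched with SciPy's parametric spline routines (`splprep`/`splev`). The tremor signal, window size and smoothing factor below are invented for illustration:

```python
import numpy as np
from scipy.interpolate import splprep, splev

def smooth_path(xs, ys, window=5, s=None):
    """Mean-filter + parametric B-spline path smoothing (APSS-style sketch).

    A moving-average filter suppresses high-frequency tremor, then a
    parametric B-spline gives a smooth on-screen trajectory.
    """
    kernel = np.ones(window) / window
    xf = np.convolve(xs, kernel, mode="valid")
    yf = np.convolve(ys, kernel, mode="valid")
    tck, _ = splprep([xf, yf], s=s)
    u = np.linspace(0.0, 1.0, len(xs))
    sx, sy = splev(u, tck)
    return np.asarray(sx), np.asarray(sy)

# Intended straight-line drag corrupted by synthetic "tremor"
rng = np.random.default_rng(3)
t = np.linspace(0.0, 1.0, 200)
x = 300 * t + 4 * np.sin(60 * t) + rng.normal(scale=2, size=t.size)
y = 150 * t + 4 * np.cos(60 * t) + rng.normal(scale=2, size=t.size)
sx, sy = smooth_path(x, y, window=9, s=200.0)
```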

  17. Spatial Variation of Seismic B-Values of the Empirical Law of the Magnitude-Frequency Distribution from a Bayesian Approach Based On Spline (B-Spline) Function in the North Anatolian Fault Zone, North of Turkey

    Science.gov (United States)

    Türker, Tugba; Bayrak, Yusuf

    2017-12-01

    In this study, a Bayesian approach based on spline (B-spline) functions is used to estimate the spatial variations of the seismic b-values of the empirical Gutenberg-Richter (G-R) law in the North Anatolian Fault Zone (NAFZ), northern Turkey. The B-spline function method is developed for estimation and interpolation of b-values. Spatial variations in b-values are known to reflect the stress field and can be used in earthquake hazard analysis, so we relate the b-values to the seismicity and tectonic background. The function β = b·ln(10) (derived from the G-R law) is used in a Bayesian framework to estimate the b-values and their standard deviations. A homogeneous instrumental catalog covering the period 1900-2017 is used. The zone is divided into ten seismic source regions based on epicenter distribution, tectonics, seismicity, and faults in the NAFZ. Three historical earthquakes (1343, Ms = 7.5; 1766, Ms = 7.3; 1894, Ms = 7.0) are included in region 2 (Marmara Sea: Tekirdağ, Merkez, Kumburgaz, and Çınarcık basins), where a large earthquake is expected in the near future because none has been observed during the instrumental period. The spatial variations in the ten seismogenic regions of the NAFZ are estimated. The estimated b-values range between 0.52±0.07 and 0.86±0.13. High b-values are estimated for the southern branch of the NAFZ (Edremit Fault Zones, Yenice-Gönen, Mustafa Kemal Paşa, and Ulubat faults), indicating low stress; low b-values are estimated in the Tokat-Erzincan region, indicating high stress. Maps of 2D and 3D spatial variations (2D contour maps, classed post maps grouping the data into discrete classes, image maps based on grid files, 3D wireframes, and 3D surfaces) are plotted for the b-values. The spatial variations of the b-values can be used in earthquake hazard analysis for the NAFZ.
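
For context, the b-value that the record's Bayesian B-spline method smooths in space is classically estimated per region by the Aki/Utsu maximum-likelihood formula. A sketch on a synthetic Gutenberg-Richter catalogue; the completeness magnitude, binning and catalogue are illustrative, and this is the standard point estimate, not the paper's Bayesian estimator:

```python
import numpy as np

def b_value_mle(mags, m_c, dm=0.1):
    """Aki maximum-likelihood b-value with Utsu's binning correction.

    b = log10(e) / (mean(M) - (m_c - dm/2)); sigma = b / sqrt(N) is the
    first-order standard error of the estimate.
    """
    mags = np.asarray(mags, dtype=float)
    mags = mags[mags >= m_c]
    b = np.log10(np.e) / (mags.mean() - (m_c - dm / 2.0))
    return b, b / np.sqrt(mags.size)

# Synthetic G-R catalogue with true b = 1.0 above completeness m_c = 2.0,
# binned to 0.1 magnitude units as instrumental catalogues usually are.
rng = np.random.default_rng(4)
beta = 1.0 * np.log(10)
raw = (2.0 - 0.05) + rng.exponential(scale=1.0 / beta, size=5000)
mags = np.round(raw / 0.1) * 0.1
b_hat, se = b_value_mle(mags, m_c=2.0)
```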

  18. Interpolating cubic splines

    CERN Document Server

    Knott, Gary D

    2000-01-01

    A spline is a thin flexible strip composed of a material such as bamboo or steel that can be bent to pass through or near given points in the plane, or in 3-space, in a smooth manner. Mechanical engineers and drafting specialists find such (physical) splines useful in designing and in drawing plans for a wide variety of objects, such as hulls of boats or bodies of automobiles, where smooth curves need to be specified. These days, physical splines are largely replaced by computer software that can compute the desired curves (with appropriate encouragement). The same mathematical ideas used for computing "spline" curves can be extended to allow us to compute "spline" surfaces. The application of these mathematical ideas is rather widespread. Spline functions are central to computer graphics disciplines. Spline curves and surfaces are used in computer graphics renderings for both real and imaginary objects. Computer-aided-design (CAD) systems depend on algorithms for computing spline func...
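
The interpolating cubic splines this book describes are directly available in scientific libraries; a minimal SciPy example (the points are chosen arbitrarily):

```python
import numpy as np
from scipy.interpolate import CubicSpline

# A cubic interpolating spline passes exactly through the given points
# with continuous first and second derivatives -- the mathematical
# analogue of the drafting spline described above.
x = np.array([0.0, 1.0, 2.5, 4.0, 5.0])
y = np.array([0.0, 0.8, -0.2, 1.5, 1.0])
spline = CubicSpline(x, y)          # default 'not-a-knot' end conditions

xs = np.linspace(0.0, 5.0, 201)
ys = spline(xs)                     # smooth curve through all five points
```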

  19. B-Spline potential function for maximum a-posteriori image reconstruction in fluorescence microscopy

    Directory of Open Access Journals (Sweden)

    Shilpa Dilipkumar

    2015-03-01

    An iterative image reconstruction technique employing a B-spline potential function in a Bayesian framework is proposed for fluorescence microscopy images. B-splines are piecewise polynomials with smooth transitions and compact support, and are the shortest polynomial splines. Incorporation of the B-spline potential function in the maximum-a-posteriori reconstruction technique resulted in improved contrast, enhanced resolution and substantial background reduction. The proposed technique is validated on simulated data as well as on images acquired from fluorescence microscopes (widefield, confocal laser scanning fluorescence and super-resolution 4Pi microscopy). A comparative study of the proposed technique with the state-of-the-art maximum likelihood (ML) and maximum-a-posteriori (MAP) with quadratic potential function shows its superiority over the others. The B-spline MAP technique can find applications in several imaging modalities of fluorescence microscopy, like selective plane illumination microscopy, localization microscopy and STED.

  20. Bayesian multi-scale smoothing of photon-limited images with applications to astronomy and medicine

    Science.gov (United States)

    White, John

    Multi-scale models for smoothing Poisson signals or images have gained much attention over the past decade. A new Bayesian model is developed using the concept of the Chinese restaurant process to find structures in two-dimensional images when performing image reconstruction or smoothing. This new model performs very well when compared to other leading methodologies for the same problem. It is developed and evaluated theoretically and empirically throughout Chapter 2. The newly developed Bayesian model is extended to three-dimensional images in Chapter 3. The third dimension has numerous different applications, such as different energy spectra, another spatial index, or possibly a temporal dimension. Empirically, this method shows promise in reducing error with the use of simulation studies. A further development removes background noise in the image. This removal can further reduce the error and is done using a modeling adjustment and post-processing techniques. These details are given in Chapter 4. Applications to real-world problems are given throughout. Photon-based images are common in astronomical imaging due to the collection of different types of energy such as X-rays. Applications to real astronomical images are given, and these consist of X-ray images from the Chandra X-ray observatory satellite. Diagnostic medicine uses many types of imaging, such as magnetic resonance imaging and computed tomography, that can also benefit from smoothing techniques such as the one developed here. Reducing the radiation dose a patient receives makes images noisier, but this can be mitigated through the use of image smoothing techniques. Both types of images represent the potential real-world use for these methods.

  1. Mathematical modelling for the drying method and smoothing drying rate using cubic spline for seaweed Kappaphycus Striatum variety Durian in a solar dryer

    International Nuclear Information System (INIS)

    M Ali, M. K.; Ruslan, M. H.; Muthuvalu, M. S.; Wong, J.; Sulaiman, J.; Yasir, S. Md.

    2014-01-01

    The solar drying experiment of seaweed using the Green V-Roof Hybrid Solar Drier (GVRHSD) was conducted in Semporna, Sabah under the meteorological conditions of Malaysia. Drying of sample seaweed in the GVRHSD reduced the moisture content from about 93.4% to 8.2% in 4 days at an average solar radiation of about 600 W/m² and a mass flow rate of about 0.5 kg/s. Generally, the plots of drying rate need more smoothing than the moisture content data, and special care is needed at low drying rates and moisture contents. Cubic splines (CS) were found to be effective for moisture-time curves. The idea of this method consists of approximating the data by a CS regression with continuous first and second derivatives; analytical differentiation of the spline regression then permits the determination of the instantaneous rate. The method of minimization of the functional of average risk was used successfully to solve the problem, allowing the instantaneous rate to be obtained directly from the experimental data. The drying kinetics was fitted with six published exponential thin-layer drying models, which were evaluated using the coefficient of determination (R²) and root mean square error (RMSE). The results showed that the Two Term model best described the drying behavior. In addition, the drying rate smoothed using CS proved to be a good estimator for moisture-time curves, as well as for missing moisture content data of seaweed Kappaphycus Striatum variety Durian in the solar dryer under the conditions tested.

  2. Mathematical modelling for the drying method and smoothing drying rate using cubic spline for seaweed Kappaphycus Striatum variety Durian in a solar dryer

    Energy Technology Data Exchange (ETDEWEB)

    M Ali, M. K., E-mail: majidkhankhan@ymail.com, E-mail: eutoco@gmail.com; Ruslan, M. H., E-mail: majidkhankhan@ymail.com, E-mail: eutoco@gmail.com [Solar Energy Research Institute (SERI), Universiti Kebangsaan Malaysia, 43600 UKM Bangi, Selangor (Malaysia); Muthuvalu, M. S., E-mail: sudaram-@yahoo.com, E-mail: jumat@ums.edu.my; Wong, J., E-mail: sudaram-@yahoo.com, E-mail: jumat@ums.edu.my [Unit Penyelidikan Rumpai Laut (UPRL), Sekolah Sains dan Teknologi, Universiti Malaysia Sabah, 88400 Kota Kinabalu, Sabah (Malaysia); Sulaiman, J., E-mail: ysuhaimi@ums.edu.my, E-mail: hafidzruslan@eng.ukm.my; Yasir, S. Md., E-mail: ysuhaimi@ums.edu.my, E-mail: hafidzruslan@eng.ukm.my [Program Matematik dengan Ekonomi, Sekolah Sains dan Teknologi, Universiti Malaysia Sabah, 88400 Kota Kinabalu, Sabah (Malaysia)

    2014-06-19

    The solar drying experiment of seaweed using the Green V-Roof Hybrid Solar Drier (GVRHSD) was conducted in Semporna, Sabah under the meteorological conditions of Malaysia. Drying of sample seaweed in the GVRHSD reduced the moisture content from about 93.4% to 8.2% in 4 days at an average solar radiation of about 600 W/m² and a mass flow rate of about 0.5 kg/s. Generally, the plots of drying rate need more smoothing than the moisture content data, and special care is needed at low drying rates and moisture contents. Cubic splines (CS) were found to be effective for moisture-time curves. The idea of this method consists of approximating the data by a CS regression with continuous first and second derivatives; analytical differentiation of the spline regression then permits the determination of the instantaneous rate. The method of minimization of the functional of average risk was used successfully to solve the problem, allowing the instantaneous rate to be obtained directly from the experimental data. The drying kinetics was fitted with six published exponential thin-layer drying models, which were evaluated using the coefficient of determination (R²) and root mean square error (RMSE). The results showed that the Two Term model best described the drying behavior. In addition, the drying rate smoothed using CS proved to be a good estimator for moisture-time curves, as well as for missing moisture content data of seaweed Kappaphycus Striatum variety Durian in the solar dryer under the conditions tested.

  3. Mathematical modelling for the drying method and smoothing drying rate using cubic spline for seaweed Kappaphycus Striatum variety Durian in a solar dryer

    Science.gov (United States)

    M Ali, M. K.; Ruslan, M. H.; Muthuvalu, M. S.; Wong, J.; Sulaiman, J.; Yasir, S. Md.

    2014-06-01

    The solar drying experiment of seaweed using the Green V-Roof Hybrid Solar Drier (GVRHSD) was conducted in Semporna, Sabah under the meteorological conditions of Malaysia. Drying of sample seaweed in the GVRHSD reduced the moisture content from about 93.4% to 8.2% in 4 days at an average solar radiation of about 600 W/m² and a mass flow rate of about 0.5 kg/s. Generally, the plots of drying rate need more smoothing than the moisture content data, and special care is needed at low drying rates and moisture contents. Cubic splines (CS) were found to be effective for moisture-time curves. The idea of this method consists of approximating the data by a CS regression with continuous first and second derivatives; analytical differentiation of the spline regression then permits the determination of the instantaneous rate. The method of minimization of the functional of average risk was used successfully to solve the problem, allowing the instantaneous rate to be obtained directly from the experimental data. The drying kinetics was fitted with six published exponential thin-layer drying models, which were evaluated using the coefficient of determination (R²) and root mean square error (RMSE). The results showed that the Two Term model best described the drying behavior. In addition, the drying rate smoothed using CS proved to be a good estimator for moisture-time curves, as well as for missing moisture content data of seaweed Kappaphycus Striatum variety Durian in the solar dryer under the conditions tested.
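
The CS smoothing-and-differentiation step shared by the three records above can be sketched as follows: fit a cubic smoothing spline to moisture-content data, then read the drying rate off its analytical derivative. The data below are synthetic, echoing only the quoted figures (roughly 93.4% to 8.2% over 4 days):

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Synthetic moisture-content observations over a 4-day (96 h) drying run
t = np.linspace(0.0, 96.0, 25)                  # drying time [h]
mc_true = 8.2 + (93.4 - 8.2) * np.exp(-t / 24.0)
rng = np.random.default_rng(5)
mc_obs = mc_true + rng.normal(scale=0.8, size=t.size)

# Cubic smoothing-spline regression; the instantaneous drying rate is
# minus the analytical first derivative of the spline.
spl = UnivariateSpline(t, mc_obs, k=3, s=t.size * 0.8 ** 2)
drying_rate = -spl.derivative()(t)              # [% moisture / h]
```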

  4. Spatial smoothing in Bayesian models: a comparison of weights matrix specifications and their impact on inference.

    Science.gov (United States)

    Duncan, Earl W; White, Nicole M; Mengersen, Kerrie

    2017-12-16

    When analysing spatial data, it is important to account for spatial autocorrelation. In Bayesian statistics, spatial autocorrelation is commonly modelled by the intrinsic conditional autoregressive prior distribution. At the heart of this model is a spatial weights matrix which controls the behaviour and degree of spatial smoothing. The purpose of this study is to review the main specifications of the spatial weights matrix found in the literature, and together with some new and less common specifications, compare the effect that they have on smoothing and model performance. The popular BYM model is described, and a simple solution for addressing the identifiability issue among the spatial random effects is provided. Seventeen different definitions of the spatial weights matrix are defined, which are classified into four classes: adjacency-based weights, and weights based on geographic distance, distance between covariate values, and a hybrid of geographic and covariate distances. These last two definitions embody the main novelty of this research. Three synthetic data sets are generated, each representing a different underlying spatial structure. These data sets together with a real spatial data set from the literature are analysed using the models. The models are evaluated using the deviance information criterion and Moran's I statistic. The deviance information criterion indicated that the model which uses binary, first-order adjacency weights to perform spatial smoothing is generally an optimal choice for achieving a good model fit. Distance-based weights also generally perform quite well and offer similar parameter interpretations. The less commonly explored options for performing spatial smoothing generally provided a worse model fit than models with more traditional approaches to smoothing, but usually outperformed the benchmark model which did not conduct spatial smoothing. The specification of the spatial weights matrix can have a colossal impact on model …
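
A concrete version of the study's "binary, first-order adjacency" specification, together with the Moran's I statistic used for evaluation, can be sketched as follows. The regular-grid regions and synthetic smooth surface are illustrative; the study's data and BYM model are not reproduced:

```python
import numpy as np

def grid_adjacency(nrow, ncol):
    """Binary first-order (rook) adjacency matrix for a regular grid --
    the weights specification the study found generally optimal."""
    n = nrow * ncol
    W = np.zeros((n, n))
    for r in range(nrow):
        for c in range(ncol):
            i = r * ncol + c
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < nrow and 0 <= cc < ncol:
                    W[i, rr * ncol + cc] = 1.0
    return W

def morans_i(y, W):
    """Moran's I statistic for spatial autocorrelation under weights W."""
    z = y - y.mean()
    return (len(y) / W.sum()) * (z @ W @ z) / (z @ z)

W = grid_adjacency(6, 6)
smooth_y = np.repeat(np.arange(6.0), 6)     # a smooth north-south trend
i_smooth = morans_i(smooth_y, W)            # strongly positive for a trend
```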

  5. Numerical discretization-based estimation methods for ordinary differential equation models via penalized spline smoothing with applications in biomedical research.

    Science.gov (United States)

    Wu, Hulin; Xue, Hongqi; Kumar, Arun

    2012-06-01

    Differential equations are extensively used for modeling dynamics of physical processes in many scientific fields such as engineering, physics, and biomedical sciences. Parameter estimation of differential equation models is a challenging problem because of high computational cost and high-dimensional parameter space. In this article, we propose a novel class of methods for estimating parameters in ordinary differential equation (ODE) models, which is motivated by HIV dynamics modeling. The new methods exploit the form of numerical discretization algorithms for an ODE solver to formulate estimating equations. First, a penalized-spline approach is employed to estimate the state variables, and the estimated state variables are then plugged into a discretization formula of an ODE solver to obtain the ODE parameter estimates via a regression approach. We consider three discretization methods of different orders: Euler's method, the trapezoidal rule, and the Runge-Kutta method. A higher-order numerical algorithm reduces numerical error in the approximation of the derivative, which produces a more accurate estimate, but its computational cost is higher. To balance computational cost and estimation accuracy, we demonstrate, via simulation studies, that the trapezoidal discretization-based estimate is the best and is recommended for practical use. The asymptotic properties of the proposed numerical discretization-based estimators are established. Comparisons between the proposed methods and existing methods show a clear benefit of the proposed methods with regard to the trade-off between computational cost and estimation accuracy. We apply the proposed methods to an HIV study to further illustrate the usefulness of the proposed approaches. © 2012, The International Biometric Society.
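
The two-step scheme described above can be sketched for the simplest ODE, x'(t) = -θx(t): smooth the noisy states with a smoothing spline, then plug the smoothed states into the trapezoidal-rule discretization and solve the resulting linear regression for θ. All numbers are illustrative, and `UnivariateSpline` stands in for the paper's penalized-spline estimator:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Noisy observations of x'(t) = -theta * x(t), x(0) = 1
theta_true = 0.5
t = np.linspace(0.0, 8.0, 81)
x_true = np.exp(-theta_true * t)
rng = np.random.default_rng(6)
x_obs = x_true + rng.normal(scale=0.02, size=t.size)

# Step 1: spline estimate of the state variable
spl = UnivariateSpline(t, x_obs, k=3, s=t.size * 0.02 ** 2)
xh = spl(t)

# Step 2: trapezoidal rule  x_{k+1} - x_k = -theta * dt * (x_k + x_{k+1}) / 2,
# solved for theta by least squares.
dt = t[1] - t[0]
dx = np.diff(xh)
xm = dt * (xh[:-1] + xh[1:]) / 2.0
theta_hat = -np.sum(xm * dx) / np.sum(xm ** 2)
```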

  6. Profiling MS proteomics data using smoothed non-linear energy operator and Bayesian additive regression trees.

    Science.gov (United States)

    He, Shan; Li, Xiaoli; Viant, Mark R; Yao, Xin

    2009-09-01

    This paper proposes a novel profiling method for SELDI-TOF and MALDI-TOF MS data that integrates a novel peak detection method based on a modified smoothed non-linear energy operator, correlation-based peak selection, and Bayesian additive regression trees. The peak detection and classification performance of the proposed approach is validated on two publicly available MS data sets, namely MALDI-TOF simulation data and high-resolution SELDI-TOF ovarian cancer data. The results compared favorably with three state-of-the-art peak detection algorithms and four machine-learning algorithms. For the high-resolution ovarian cancer data set, seven biomarkers (m/z windows) were found by our method, which achieved 97.30 and 99.10% accuracy at the 25th and 75th percentiles, respectively, over 50 independent cross-validation samples; this is significantly better than other profiling and dimensionality reduction methods. The results show that the method is capable of finding parsimonious sets of biologically meaningful biomarkers with better accuracy than existing methods. Supporting Information material and MATLAB/R scripts to implement the methods described in the article are available at: http://www.cs.bham.ac.uk/szh/SourceCode-for-Proteomics.zip.
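The smoothed non-linear energy operator at the core of such a peak detector can be sketched in a few lines. The toy signal, window length, and threshold below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

# Toy sketch of SNEO peak detection (signal, window and threshold are our
# own assumptions): the Teager energy operator
# psi[n] = x[n]^2 - x[n-1]*x[n+1] responds strongly to sharp, peak-like
# events, and smoothing psi suppresses isolated noise responses.
n = np.arange(400)
x = 0.05 * np.sin(2 * np.pi * n / 150.0)   # slowly varying baseline
x[120] += 1.0                              # spike-like "peak" 1
x[300] += 0.8                              # spike-like "peak" 2

psi = np.zeros_like(x)
psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]  # non-linear energy operator
window = np.bartlett(7)
sneo = np.convolve(psi, window / window.sum(), mode="same")

peaks = np.where(sneo > 0.5 * sneo.max())[0]
```

The operator's output scales with the square of peak amplitude, which is why it separates narrow peaks from a slowly varying baseline so cleanly.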

  7. Capillary electrophoresis enhanced by automatic two-way background correction using cubic smoothing splines and multivariate data analysis applied to the characterisation of mixtures of surfactants.

    Science.gov (United States)

    Bernabé-Zafón, Virginia; Torres-Lapasió, José R; Ortega-Gadea, Silvia; Simó-Alfonso, Ernesto F; Ramis-Ramos, Guillermo

    2005-02-18

    Mixtures of the surfactant classes coconut diethanolamide, cocamido propyl betaine and alkylbenzene sulfonate were separated by capillary electrophoresis in several media containing organic solvents and anionic solvophobic agents. Good resolution between both the surfactant classes and the homologues within the classes was achieved in a BGE containing 80 mM borate buffer of pH 8.5, 20% n-propanol and 40 mM sodium deoxycholate. Full resolution, assistance in peak assignment to the classes (including the recognition of solutes not belonging to the classes), and improvement of the signal-to-noise ratio were achieved by multivariate data analysis of the time-wavelength electropherograms. Cubic smoothing splines were used to develop an algorithm capable of automatically modelling the two-way background, which increased the sensitivity and reliability of the multivariate analysis of the corrected signal. The exclusion of significant signals from the background model was guaranteed by the conservativeness of the criteria used and the safeguards adopted all along the point selection process, where the CSS algorithm supported the addition of new points to the initially reduced background sample. Efficient background modelling made the application of multivariate deconvolution within extensive time windows possible. This increased the probability of finding quality spectra for each solute class by the orthogonal projection approach. The concentration profiles of the classes were improved by subsequent application of alternating least squares. The two-way electropherograms were automatically processed, with minimal supervision by the user, in less than 2 min. The procedure was successfully applied to the identification and quantification of the surfactants in household cleaners.
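The idea of modelling a background with smoothing splines can be sketched in one dimension. The clip-and-refit loop below is a simplified stand-in for the authors' conservative point-selection process, and the synthetic signal is our own:

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

# One-dimensional sketch of spline background modelling (synthetic data and
# the clip-and-refit rule are our own simplifications): fit a stiff cubic
# spline, clip the signal above the fit so peaks lose influence, and refit.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 500)
background = 0.5 + 0.3 * np.sin(t / 3.0)
spikes = (2.0 * np.exp(-0.5 * ((t - 3.0) / 0.05) ** 2)
          + 1.5 * np.exp(-0.5 * ((t - 7.0) / 0.05) ** 2))
y = background + spikes + rng.normal(0.0, 0.01, t.size)

knots = np.linspace(1.0, 9.0, 9)      # fixed interior knots -> stiff spline
y_work = y.copy()
for _ in range(20):
    spline = LSQUnivariateSpline(t, y_work, knots, k=3)
    y_work = np.minimum(y_work, spline(t))  # points above the fit are clipped
baseline = spline(t)
corrected = y - baseline                    # background-corrected signal
```

The wide knot spacing keeps the spline too stiff to follow narrow peaks, so repeated clipping relaxes the fit onto the slowly varying background while leaving the analyte peaks intact in the corrected signal.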

  8. Diffeomorphism Spline

    Directory of Open Access Journals (Sweden)

    Wei Zeng

    2015-04-01

    Full Text Available Conventional splines offer powerful means for modeling surfaces and volumes in three-dimensional Euclidean space. One-dimensional quaternion splines have been applied for animation purposes, where the splines are defined to model a one-dimensional submanifold in the three-dimensional Lie group. Given two surfaces, all of the diffeomorphisms between them form an infinite-dimensional manifold, the so-called diffeomorphism space. In this work, we propose a novel scheme to model finite-dimensional submanifolds in the diffeomorphism space by generalizing conventional splines. By quasiconformal geometry, each diffeomorphism determines a Beltrami differential on the source surface. Conversely, the diffeomorphism is determined by its Beltrami differential under normalization conditions. Therefore, the diffeomorphism space is in one-to-one correspondence with the space of a special differential form. A convex combination of Beltrami differentials is still a Beltrami differential. Therefore, the conventional spline scheme can be generalized to the Beltrami differential space and, consequently, to the diffeomorphism space. Our experiments demonstrate the efficiency and efficacy of diffeomorphism splines. The diffeomorphism spline has many potential applications, such as surface registration, tracking and animation.

  9. Schwarz and multilevel methods for quadratic spline collocation

    Energy Technology Data Exchange (ETDEWEB)

    Christara, C.C. [Univ. of Toronto, Ontario (Canada); Smith, B. [Univ. of California, Los Angeles, CA (United States)

    1994-12-31

    Smooth spline collocation methods offer an alternative to Galerkin finite element methods, as well as to Hermite spline collocation methods, for the solution of linear elliptic Partial Differential Equations (PDEs). Recently, spline collocation methods with optimal order of convergence have been developed for splines of certain degrees. Convergence proofs for smooth spline collocation methods are generally more difficult than for Galerkin finite elements or Hermite spline collocation, and they require stronger assumptions and more restrictions. However, numerical tests indicate that spline collocation methods are applicable to a wider class of problems than the analysis requires, and are very competitive with finite element methods with respect to efficiency. The authors will discuss Schwarz and multilevel methods for the solution of elliptic PDEs using quadratic spline collocation, and compare these with domain decomposition methods using substructuring. Numerical tests on a variety of parallel machines will also be presented. In addition, preliminary convergence analysis using Schwarz and/or maximum principle techniques will be presented.

  10. Poles tracking of weakly nonlinear structures using a Bayesian smoothing method

    Science.gov (United States)

    Stephan, Cyrille; Festjens, Hugo; Renaud, Franck; Dion, Jean-Luc

    2017-02-01

    This paper describes a method for the identification and tracking of the poles of a weakly nonlinear structure from its free responses. The method is based on a model of multichannel damped sines whose parameters evolve over time. Their variations are approximated in discrete time by a nonlinear state space model. States are estimated by an iterative process which couples a two-pass Bayesian smoother with an Expectation-Maximization (EM) algorithm. The method is applied to numerical and experimental cases. As a result, accurate frequency and damping estimates are obtained as a function of amplitude.
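The two-pass smoother at the heart of such a method can be illustrated with a scalar toy problem. This is a minimal Kalman/Rauch-Tung-Striebel sketch under our own assumptions (random-walk state, known noise levels), not the authors' full EM scheme:

```python
import numpy as np

# Scalar sketch of a two-pass Bayesian smoother (all settings are toy
# assumptions): a slowly drifting quantity, e.g. a modal frequency, follows
# a random walk and is observed in noise; a forward Kalman pass is then
# refined by the backward Rauch-Tung-Striebel pass.
rng = np.random.default_rng(2)
T, q, r = 300, 1e-4, 0.05 ** 2          # steps, process var, measurement var
truth = 1.0 + 0.001 * np.arange(T)      # slowly increasing "frequency"
z = truth + rng.normal(0.0, 0.05, T)    # noisy per-block estimates

m = np.zeros(T); P = np.zeros(T)        # filtered mean and variance
m_pred = np.zeros(T); P_pred = np.zeros(T)
m[0], P[0] = z[0], r
for k in range(1, T):                   # forward (filtering) pass
    m_pred[k], P_pred[k] = m[k - 1], P[k - 1] + q
    K = P_pred[k] / (P_pred[k] + r)
    m[k] = m_pred[k] + K * (z[k] - m_pred[k])
    P[k] = (1.0 - K) * P_pred[k]

ms = m.copy()
for k in range(T - 2, -1, -1):          # backward (smoothing) pass
    G = P[k] / P_pred[k + 1]
    ms[k] = m[k] + G * (ms[k + 1] - m_pred[k + 1])
```

Because the backward pass uses data from both before and after each time step, the smoothed track is less noisy than either the raw estimates or the forward filter alone.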

  11. Monitoring county-level chlamydia incidence in Texas, 2004 – 2005: application of empirical Bayesian smoothing and Exploratory Spatial Data Analysis (ESDA) methods

    Directory of Open Access Journals (Sweden)

    Owens Chantelle J

    2009-02-01

    Full Text Available Abstract Background Chlamydia continues to be the most prevalent disease in the United States. Effective spatial monitoring of chlamydia incidence is important for successful implementation of control and prevention programs. The objective of this study is to apply Bayesian smoothing and exploratory spatial data analysis (ESDA) methods to monitor Texas county-level chlamydia incidence rates by examining spatiotemporal patterns. We used county-level data on chlamydia incidence (for all ages, genders and races) from the National Electronic Telecommunications System for Surveillance (NETSS) for 2004 and 2005. Results Bayesian-smoothed chlamydia incidence rates were spatially dependent both in levels and in relative changes. Erath county had significantly higher rates (more than 300 cases per 100,000 residents) than its contiguous neighbors (195 or fewer) in both years. Gaines county experienced the highest relative increase in smoothed rates (173%, from 139 to 379). The relative change in smoothed chlamydia rates in Newton county was also statistically significant. Conclusion Bayesian smoothing and ESDA methods can assist programs in using chlamydia surveillance data to identify outliers, as well as relevant changes in chlamydia incidence, in specific geographic units. Secondly, they may also indirectly help in assessing existing differences and changes in chlamydia surveillance systems over time.
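A minimal sketch of what empirical Bayesian smoothing of area-level rates does. The counts, populations, and the method-of-moments prior (a Marshall-style global shrinkage estimator) are illustrative assumptions, not the study's data or exact model:

```python
# Sketch of empirical Bayesian (shrinkage) smoothing of county rates: raw
# rates are pulled toward the global rate, and small, unstable counties are
# shrunk the most. Counts and populations are made up for illustration.
cases = [40, 150, 9, 30]
pop = [4_000, 60_000, 900, 25_000]

global_rate = sum(cases) / sum(pop)
raw = [c / p for c, p in zip(cases, pop)]

# Method-of-moments estimate of the between-county variance (floored at 0)
mean_pop = sum(pop) / len(pop)
var_between = sum(p * (r - global_rate) ** 2
                  for p, r in zip(pop, raw)) / sum(pop)
prior_var = max(var_between - global_rate / mean_pop, 0.0)

smoothed = []
for r, p in zip(raw, pop):
    w = prior_var / (prior_var + global_rate / p)  # data weight grows with p
    smoothed.append(w * r + (1.0 - w) * global_rate)
```

Each smoothed rate lies between the raw county rate and the global rate, which is why smoothing stabilizes the extreme values that sparsely populated counties would otherwise produce.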

  12. Age-specific probability of childbirth. Smoothing via Bayesian nonparametric mixture of rounded kernels

    Directory of Open Access Journals (Sweden)

    Antonio Canale

    2015-03-01

    Full Text Available The municipality of Milan is one of the most important areas in Italy, being the center of many economic activities and the destination of strong national and international immigration. In this context, policy makers are interested in understanding socio-demographic and economic differences among the different urban areas. In this paper we concentrate on estimating differences in fertility among the nine areas of Milan. Knowledge of age-specific fertility indicators is extremely useful in deciding where to build a new nursery school, where to expand obstetrics departments in hospitals, or which kinds of services can be offered to families. To estimate the age-specific probabilities of childbirth in the municipality of Milan, we use open data on births to residents of Milan in 2011. It has recently been observed that the fertility patterns of developed countries deviate from the classic right-skewed shape because women tend to have children later. Also, when a large immigrant component is present, the age-specific fertility rate exhibits an almost bimodal shape: the curve shows a little hump between ages 20 and 25, presumably due to the presence of subpopulations. To deal with these phenomena and to compare fertility among the nine urban areas of the municipality of Milan, we apply a Bayesian nonparametric mixture model which can account for skewness and multimodality, and we estimate the age-specific probability of childbirth.

  13. Image edges detection through B-Spline filters

    International Nuclear Information System (INIS)

    Mastropiero, D.G.

    1997-01-01

    B-Spline signal processing was used to detect the edges of a digital image. The technique is based upon processing the image in the Spline transform domain instead of the space domain (classical processing). Transformation to the Spline transform domain means finding the real coefficients that make it possible to interpolate the grey levels of the original image with a B-Spline polynomial. There are basically two methods of carrying out this interpolation, giving rise to two different Spline transforms: exact interpolation of the grey values (the direct Spline transform), and approximate interpolation (the smoothing Spline transform). The latter results in a smoother grey distribution function defined by the Spline transform coefficients, and is carried out with the aim of obtaining an edge detection algorithm with higher immunity to noise. Finally, the transformed image was processed in order to detect the edges of the original image (the gradient method was used), and the results of the three methods (classical, direct Spline transform and smoothing Spline transform) were compared. As expected, the smoothing Spline transform technique produced a detection algorithm more immune to external noise. On the other hand, the direct Spline transform technique emphasizes the edges more, even more than the classical method. As far as computing time is concerned, the classical method is clearly the fastest, and may be applied whenever the presence of noise is not important and edges with high detail are not required in the final image. (author). 9 refs., 17 figs., 1 tab
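The smooth-then-differentiate pipeline can be sketched on a synthetic image. This is a rough illustration, not the authors' implementation: a Gaussian smoother stands in for the smoothing Spline transform, and the test image is our own:

```python
import numpy as np
from scipy import ndimage

# Rough sketch of the smooth-then-differentiate pipeline (a Gaussian
# smoother is used here as a stand-in for the smoothing Spline transform):
# smooth a noisy image, then locate edges at maxima of the gradient
# magnitude.
rng = np.random.default_rng(3)
img = np.zeros((64, 64))
img[:, 32:] = 1.0                        # vertical step edge at column 32
img += rng.normal(0.0, 0.1, img.shape)   # additive noise

smoothed = ndimage.gaussian_filter(img, sigma=2.0)
gy, gx = np.gradient(smoothed)           # per-axis finite differences
grad_mag = np.hypot(gx, gy)

edge_cols = grad_mag.argmax(axis=1)      # strongest edge response per row
```

Smoothing before differentiation is what buys the noise immunity: differentiation amplifies high-frequency noise, so the stiffer the smoothing stage, the more reliable (but less detailed) the detected edges.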

  14. Control of the strength of visual-motor transmission as the mechanism of rapid adaptation of priors for Bayesian inference in smooth pursuit eye movements.

    Science.gov (United States)

    Darlington, Timothy R; Tokiyama, Stefanie; Lisberger, Stephen G

    2017-08-01

    Bayesian inference provides a cogent account of how the brain combines sensory information with "priors" based on past experience to guide many behaviors, including smooth pursuit eye movements. We now demonstrate very rapid adaptation of the pursuit system's priors for target direction and speed. We go on to leverage that adaptation to outline possible neural mechanisms that could cause pursuit to show features consistent with Bayesian inference. Adaptation of the prior causes changes in the eye speed and direction at the initiation of pursuit. The adaptation appears after a single trial and accumulates over repeated exposure to a given history of target speeds and directions. The influence of the priors depends on the reliability of visual motion signals: priors are more effective against the visual motion signals provided by low-contrast vs. high-contrast targets. Adaptation of the direction prior generalizes to eye speed and vice versa, suggesting that both priors could be controlled by a single neural mechanism. We conclude that the pursuit system can learn the statistics of visual motion rapidly and use those statistics to guide future behavior. Furthermore, a model that adjusts the gain of visual-motor transmission predicts the effects of recent experience on pursuit direction and speed, as well as the specifics of the generalization between the priors for speed and direction. We suggest that Bayesian inference in pursuit behavior is implemented by distinctly non-Bayesian internal mechanisms that use the smooth eye movement region of the frontal eye fields to control the gain of visual-motor transmission. NEW & NOTEWORTHY Bayesian inference can account for the interaction between sensory data and past experience in many behaviors. Here, we show, using smooth pursuit eye movements, that the priors based on past experience can be adapted over a very short time frame.
We also show that a single model based on direction-specific adaptation of the strength of visual-motor transmission can account for the adaptation of both the speed and direction priors.

  15. LOCALLY REFINED SPLINES REPRESENTATION FOR GEOSPATIAL BIG DATA

    Directory of Open Access Journals (Sweden)

    T. Dokken

    2015-08-01

    Full Text Available When viewed from a distance, large parts of the topography of landmasses and the bathymetry of the sea and ocean floor can be regarded as a smooth background with local features. Consequently, a digital elevation model combining a compact smooth representation of the background with locally added features has the potential of providing a compact and accurate representation of topography and bathymetry. The recent introduction of Locally Refined B-splines (LR B-splines) allows the granularity of spline representations to be locally adapted to the complexity of the smooth shape approximated. This allows few degrees of freedom to be used in areas with little variation, while adding extra degrees of freedom in areas in need of more modelling flexibility. In the EU FP7 Integrating Project IQmulus we exploit LR B-splines for approximating large point clouds representing the bathymetry of the smooth sea and ocean floor. A drastic reduction in the bulk of the data representation, compared to the size of the input point clouds, is demonstrated. The representation is very well suited to exploiting the power of GPUs for visualization, as the spline format is transferred to the GPU and the triangulation needed for visualization is generated on the GPU according to the viewing parameters. LR B-splines are interoperable with other elevation model representations such as LIDAR data, raster representations and triangulated irregular networks, as these can be used as input to the LR B-spline approximation algorithms. Output to these formats can be generated from the LR B-spline applications according to the resolution criteria required. The spline models are well suited for change detection, as new sensor data can efficiently be compared to the compact LR B-spline representation.

  16. Color management with a hammer: the B-spline fitter

    Science.gov (United States)

    Bell, Ian E.; Liu, Bonny H. P.

    2003-01-01

    To paraphrase Abraham Maslow: If the only tool you have is a hammer, every problem looks like a nail. We have a B-spline fitter customized for 3D color data, and many problems in color management can be solved with this tool. Whereas color devices were once modeled with extensive measurement, look-up tables and trilinear interpolation, recent improvements in hardware have made B-spline models an affordable alternative. Such device characterizations require fewer color measurements than piecewise linear models, and have uses beyond simple interpolation. A B-spline fitter, for example, can act as a filter to remove noise from measurements, leaving a model with guaranteed smoothness. Inversion of the device model can then be carried out consistently and efficiently, as the spline model is well behaved and its derivatives easily computed. Spline-based algorithms also exist for gamut mapping, the composition of maps, and the extrapolation of a gamut. Trilinear interpolation---a degree-one spline---can still be used after nonlinear spline smoothing for high-speed evaluation with robust convergence. Using data from several color devices, this paper examines the use of B-splines as a generic tool for modeling devices and mapping one gamut to another, and concludes with applications to high-dimensional and spectral data.

  17. APLIKASI SPLINE ESTIMATOR TERBOBOT

    Directory of Open Access Journals (Sweden)

    I Nyoman Budiantara

    2001-01-01

    Full Text Available We consider the nonparametric regression model: Zj = X(tj) + ej, j = 1,2,…,n, where X(tj) is the regression curve. The random errors ej are independently normally distributed with zero mean and variance s2/bj, bj > 0. The estimate of X is obtained by minimizing a weighted penalized least squares criterion; the solution of this optimization is a weighted natural polynomial spline. Further, we give an application of the weighted spline estimator in nonparametric regression. Keywords: weighted spline, nonparametric regression, penalized least squares.

  18. Straight-sided Spline Optimization

    DEFF Research Database (Denmark)

    Pedersen, Niels Leergaard

    2011-01-01

    Spline connection of shaft and hub is commonly applied when large torque capacity is needed together with the possibility of disassembly. The designs of these splines are generally controlled by different standards. In view of the common use of splines, it seems that few papers deal with splines ...

  19. Smooth GERBS, orthogonal systems and energy minimization

    Energy Technology Data Exchange (ETDEWEB)

    Dechevsky, Lubomir T., E-mail: ltd@hin.no; Zanaty, Peter, E-mail: pza@hin.no [Faculty of Technology, Narvik University College, 2 Lodve Lange's St., P.O.Box 385, Narvik N-8505 (Norway)

    2013-12-18

    New results are obtained in three mutually related directions of the rapidly developing theory of generalized expo-rational B-splines (GERBS) [7, 6]: closed-form computability of C∞-smooth GERBS in terms of elementary and special functions, Hermite interpolation and least-squares best approximation via smooth GERBS, and energy minimizing properties of smooth GERBS similar to those of the classical cubic polynomial B-splines.

  20. Designing interactively with elastic splines

    DEFF Research Database (Denmark)

    Brander, David; Bærentzen, Jakob Andreas; Fisker, Ann-Sofie

    2018-01-01

    We present an algorithm for designing interactively with C1 elastic splines. The idea is to design the elastic spline using a C1 cubic polynomial spline where each polynomial segment is so close to satisfying the Euler-Lagrange equation for elastic curves that the visual difference becomes negligible. Using a database of cubic Bézier curves we are able to interactively modify the cubic spline such that it remains visually close to an elastic spline.

  1. BS Methods: A New Class of Spline Collocation BVMs

    Science.gov (United States)

    Mazzia, Francesca; Sestini, Alessandra; Trigiante, Donato

    2008-09-01

    BS methods are a recently introduced class of Boundary Value Methods which is based on B-splines. They can also be interpreted as spline collocation methods. For uniform meshes, the coefficients defining the k-step BS method are just the values of the (k+1)-degree uniform B-spline and B-spline derivative at its integer active knots; for general nonuniform meshes they are computed by solving local linear systems whose dimension depends on k. An important specific feature of BS methods is the possibility to associate a spline of degree k+1 and smoothness Ck to the numerical solution produced by the k-step method of this class. Such spline collocates the differential equation at the knots, shares the convergence order with the numerical solution, and can be computed with negligible additional computational cost. Here a survey on such methods is given, presenting the general definition, the convergence and stability features, and introducing the strategy for the computation of the coefficients in the B-spline basis which define the associated spline. Finally, some related numerical results are also presented.

  2. Non polynomial B-splines

    Science.gov (United States)

    Lakså, Arne

    2015-11-01

    B-splines are the de facto industrial standard for surface modelling in Computer Aided Design. The technique is comparable to bending flexible rods of wood or metal: a flexible rod minimizes its energy when bending, and a third-degree polynomial spline curve minimizes its second derivatives. The B-spline form is a nice way of representing polynomial splines; it connects polynomial splines to corner-cutting techniques, which induces many nice and useful properties. However, the B-spline representation can be expanded to something we can call general B-splines, i.e. both polynomial and non-polynomial splines. We will show how this expansion can be done, the properties it induces, and examples of non-polynomial B-splines.
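The polynomial B-spline machinery that this record generalizes rests on the Cox-de Boor recursion. A minimal evaluation of the cubic basis, on an arbitrary uniform knot vector of our own choosing, looks like this:

```python
# Minimal sketch of the Cox-de Boor recursion behind polynomial B-splines
# (the knot vector below is an arbitrary uniform example).
def bspline_basis(i, k, t, knots):
    """Value of the degree-k B-spline basis function N_{i,k} at t."""
    if k == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + k] > knots[i]:
        left = ((t - knots[i]) / (knots[i + k] - knots[i])
                * bspline_basis(i, k - 1, t, knots))
    if knots[i + k + 1] > knots[i + 1]:
        right = ((knots[i + k + 1] - t) / (knots[i + k + 1] - knots[i + 1])
                 * bspline_basis(i + 1, k - 1, t, knots))
    return left + right

# Degree-3 basis on a uniform knot vector: the basis functions are
# nonnegative, and at any parameter in the full-support span their values
# sum to one (partition of unity).
knots = [0, 1, 2, 3, 4, 5, 6, 7]
vals = [bspline_basis(i, 3, 3.5, knots) for i in range(4)]
```

Nonnegativity and partition of unity are exactly what make the corner-cutting (convex-combination) interpretation work, and they are the properties a generalized, non-polynomial B-spline construction aims to preserve.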

  3. Interpolation of natural cubic spline

    Directory of Open Access Journals (Sweden)

    Arun Kumar

    1992-01-01

    Full Text Available From the result in [1] it follows that there is a unique quadratic spline which bounds the same area as that of the function. The matching of the area for the cubic spline does not follow from the corresponding result proved in [2]. We obtain cubic splines which preserve the area of the function.

  4. SPLPKG WFCMPR WFAPPX, Wilson-Fowler Spline Generator for Computer Aided Design And Manufacturing (CAD/CAM) Systems

    International Nuclear Information System (INIS)

    Fletcher, S.K.

    2002-01-01

    1 - Description of program or function: The three programs SPLPKG, WFCMPR, and WFAPPX provide the capability for interactively generating, comparing and approximating Wilson-Fowler splines. The Wilson-Fowler spline is widely used in Computer Aided Design and Manufacturing (CAD/CAM) systems. It is favored for many applications because it produces a smooth, low-curvature fit to planar data points. Program SPLPKG generates a Wilson-Fowler spline passing through given nodes (with given end conditions) and also generates a piecewise linear approximation to that spline within a user-defined tolerance. The program may be used to generate a 'desired' spline against which to compare other splines generated by CAD/CAM systems. It may also be used to generate an acceptable approximation to a desired spline in the event that an acceptable spline cannot be generated by the receiving CAD/CAM system. SPLPKG writes an IGES file of points evaluated on the spline and/or a file containing the spline description. Program WFCMPR computes the maximum difference between two Wilson-Fowler splines and may be used to verify the spline recomputed by a receiving system. It compares two Wilson-Fowler splines with common nodes and reports the maximum distance between curves (measured perpendicular to segments) and the maximum difference of their tangents (or normals), both computed along the entire length of the splines. Program WFAPPX computes the maximum difference between a Wilson-Fowler spline and a piecewise linear curve. It may be used to accept or reject a proposed approximation to a desired Wilson-Fowler spline, even if the origin of the approximation is unknown. The maximum deviation between these two curves, and the parameter value on the spline where it occurs, are reported. 2 - Restrictions on the complexity of the problem - Maxima of: 1600 evaluation points (SPLPKG), 1000 evaluation points (WFAPPX), 1000 linear curve breakpoints (WFAPPX), 100 spline nodes

  5. On Characterization of Quadratic Splines

    DEFF Research Database (Denmark)

    Chen, B. T.; Madsen, Kaj; Zhang, Shuzhong

    2005-01-01

    A quadratic spline is a differentiable piecewise quadratic function. Many problems in the numerical analysis and optimization literature can be reformulated as unconstrained minimizations of quadratic splines. However, only special cases of quadratic splines are studied in the existing literature, and algorithms are developed on a case-by-case basis. There lacks an analytical representation of a general, or even a convex, quadratic spline. The current paper fills this gap by providing an analytical representation of a general quadratic spline. Furthermore, for convex quadratic splines, a connection is shown between the convexity of a quadratic spline function and the monotonicity of the corresponding LCP problem. It is shown that, although both conditions lead to easy solvability of the problem, they are different in general.

  6. Connecting the Dots Parametrically: An Alternative to Cubic Splines.

    Science.gov (United States)

    Hildebrand, Wilbur J.

    1990-01-01

    Discusses a method of cubic splines to determine a curve through a series of points and a second method for obtaining parametric equations for a smooth curve that passes through a sequence of points. Procedures for determining the curves and results of each of the methods are compared. (YP)

  7. Hilbertian kernels and spline functions

    CERN Document Server

    Atteia, M

    1992-01-01

    In this monograph, which is an extensive study of Hilbertian approximation, the emphasis is placed on spline function theory. The origin of the book was an effort to show that spline theory parallels Hilbertian kernel theory, not only for splines derived from the minimization of a quadratic functional but more generally for splines considered as piecewise functions. Being as far as possible self-contained, the book may be used as a reference, with information about developments in linear approximation, convex optimization, mechanics and partial differential equations.

  8. Control theoretic splines optimal control, statistical, and path planning

    CERN Document Server

    Egerstedt, Magnus

    2010-01-01

    Splines, both interpolatory and smoothing, have a long and rich history that has largely been application driven. This book unifies these constructions in a comprehensive and accessible way, drawing from the latest methods and applications to show how they arise naturally in the theory of linear control systems. Magnus Egerstedt and Clyde Martin are leading innovators in the use of control theoretic splines to bring together many diverse applications within a common framework. In this book, they begin with a series of problems ranging from path planning to statistics to approximation.

  9. Splines and variational methods

    CERN Document Server

    Prenter, P M

    2008-01-01

    One of the clearest available introductions to variational methods, this text requires only a minimal background in calculus and linear algebra. Its self-contained treatment explains the application of theoretic notions to the kinds of physical problems that engineers regularly encounter. The text's first half concerns approximation theoretic notions, exploring the theory and computation of one- and two-dimensional polynomial and other spline functions. Later chapters examine variational methods in the solution of operator equations, focusing on boundary value problems in one and two dimensions.

  10. Piecewise linear regression splines with hyperbolic covariates

    International Nuclear Information System (INIS)

    Cologne, John B.; Sposto, Richard

    1992-09-01

    Consider the problem of fitting a curve to data that exhibit a multiphase linear response with smooth transitions between phases. We propose substituting hyperbolas as covariates in piecewise linear regression splines to obtain curves that are smoothly joined. The method provides an intuitive and easy way to extend the two-phase linear hyperbolic response model of Griffiths and Miller and of Watts and Bacon to accommodate more than two linear segments. The resulting regression spline with hyperbolic covariates may be fitted by nonlinear regression methods to estimate the degree of curvature between adjoining linear segments. The added complexity of fitting nonlinear, as opposed to linear, regression models is not great. The extra effort is particularly worthwhile when investigators are unwilling to assume that the slope of the response changes abruptly at the join points. We can also estimate the join points (the values of the abscissas where the linear segments would intersect if extrapolated) if their number and approximate locations may be presumed known. An example using data on changing age at menarche in a cohort of Japanese women illustrates the use of the method for exploratory data analysis. (author)
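The hyperbolic-covariate idea can be sketched with a smooth stand-in for the hinge function. The parameterization below is our own illustration, not the authors' exact formulation:

```python
import math

# Sketch of a hyperbolic smooth hinge (our own parameterization): the
# hyperbola matches max(0, x - k) away from the join point k, while gamma
# controls how gradually the slope changes near k.
def hinge(x, k):
    return max(0.0, x - k)

def smooth_hinge(x, k, gamma):
    u = x - k
    return 0.5 * (u + math.sqrt(u * u + 4.0 * gamma * gamma))

k = 2.0
# Far from the join point the hyperbola follows the linear segments...
far = abs(smooth_hinge(5.0, k, 0.1) - hinge(5.0, k))
# ...and as gamma -> 0 it converges to the sharp hinge everywhere.
near = abs(smooth_hinge(k, k, 0.001) - hinge(k, k))
```

A multiphase model then uses b0 + b1*x + sum over j of bj * smooth_hinge(x, kj, gammaj) as the regression function; fitting it by nonlinear least squares estimates each gammaj, i.e. the degree of curvature at each join point, rather than assuming an abrupt slope change.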

  11. The EH Interpolation Spline and Its Approximation

    Directory of Open Access Journals (Sweden)

    Jin Xie

    2014-01-01

    Full Text Available A new interpolation spline with two parameters, called the EH interpolation spline, is presented in this paper. It extends the standard cubic Hermite interpolation spline and inherits its properties. Given fixed interpolation conditions, the shape of the proposed spline can be adjusted by changing the values of the parameters. Moreover, with a new algorithm, the introduced spline can approximate the interpolated function better than the standard cubic Hermite interpolation spline and the quartic Hermite interpolation spline with a single parameter.

  12. Shape Preserving Interpolation Using C2 Rational Cubic Spline

    Directory of Open Access Journals (Sweden)

    Samsul Ariffin Abdul Karim

    2016-01-01

    Full Text Available This paper discusses the construction of a new C2 rational cubic spline interpolant with cubic numerator and quadratic denominator. The idea has been extended to shape-preserving interpolation for positive data using the constructed rational cubic spline interpolation. The rational cubic spline has three parameters αi, βi, and γi. The sufficient conditions for positivity are derived on one parameter, γi, while the other two parameters, αi and βi, are free parameters that can be used to change the final shape of the resulting interpolating curves. This enables the user to produce many varieties of positive interpolating curves. Cubic spline interpolation with C2 continuity is not able to preserve the shape of positive data. Notably, our scheme is easy to use and does not require knot insertion, and C2 continuity can be achieved by solving tridiagonal systems of linear equations for the unknown first derivatives di, i=1,…,n-1. Comparisons with existing schemes have also been done in detail. From all presented numerical results, the new C2 rational cubic spline gives very smooth interpolating curves compared with some established rational cubic schemes. An error analysis when the function to be interpolated is f(t) ∈ C3[t0, tn] is also investigated in detail.

  13. Modelling Childhood Growth Using Fractional Polynomials and Linear Splines

    Science.gov (United States)

    Tilling, Kate; Macdonald-Wallis, Corrie; Lawlor, Debbie A.; Hughes, Rachael A.; Howe, Laura D.

    2014-01-01

    Background There is increasing emphasis in medical research on modelling growth across the life course and identifying factors associated with growth. Here, we demonstrate multilevel models for childhood growth either as a smooth function (using fractional polynomials) or a set of connected linear phases (using linear splines). Methods We related parental social class to height from birth to 10 years of age in 5,588 girls from the Avon Longitudinal Study of Parents and Children (ALSPAC). Multilevel fractional polynomial modelling identified the best-fitting model as being of degree 2 with powers of the square root of age, and the square root of age multiplied by the log of age. The multilevel linear spline model identified knot points at 3, 12 and 36 months of age. Results Both the fractional polynomial and linear spline models show an initially fast rate of growth, which slowed over time. Both models also showed that there was a disparity in length between manual and non-manual social class infants at birth, which decreased in magnitude until approximately 1 year of age and then increased. Conclusions Multilevel fractional polynomials give a more realistic smooth function, and linear spline models are easily interpretable. Each can be used to summarise individual growth trajectories and their relationships with individual-level exposures. PMID:25413651
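
    As a sketch of the linear-spline growth model with knots at 3, 12 and 36 months (as identified in the abstract), an ordinary least-squares fit of a connected piecewise-linear curve to hypothetical height data for a single child; the actual analysis uses multilevel models across 5,588 children:

    ```python
    import numpy as np

    def linear_spline_basis(age, knots):
        """Design matrix [1, age, (age-k)_+ ...] for a connected
        piecewise-linear growth curve with fixed knots."""
        cols = [np.ones_like(age), age]
        cols += [np.maximum(age - k, 0.0) for k in knots]
        return np.column_stack(cols)

    knots = [3.0, 12.0, 36.0]                  # months, as in the abstract
    # hypothetical single-child data (age in months, height in cm)
    age = np.array([0, 2, 6, 12, 24, 48, 90, 120], float)
    height = np.array([50, 56, 66, 74, 86, 102, 125, 138], float)

    X = linear_spline_basis(age, knots)
    beta, *_ = np.linalg.lstsq(X, height, rcond=None)
    print("slope change at 3 months (cm/month):", beta[2])
    ```

    The hinge terms (age-k)_+ guarantee the fitted phases join continuously at the knots, which is what makes linear-spline trajectories easy to interpret as phase-specific growth rates.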

  14. Optimization of straight-sided spline design

    DEFF Research Database (Denmark)

    Pedersen, Niels Leergaard

    2011-01-01

    Spline connection of shaft and hub is commonly applied when large torque capacity is needed together with the possibility of disassembly. The designs of these splines are generally controlled by different standards. In view of the common use of splines, it seems that few papers deal with splines ...

  15. Weighted cubic and biharmonic splines

    Science.gov (United States)

    Kvasov, Boris; Kim, Tae-Wan

    2017-01-01

    In this paper we discuss the design of algorithms for interpolating discrete data by using weighted cubic and biharmonic splines in such a way that the monotonicity and convexity of the data are preserved. We formulate the problem as a differential multipoint boundary value problem and consider its finite-difference approximation. Two algorithms for automatic selection of the shape control parameters (weights) are presented. For weighted biharmonic splines the resulting system of linear equations can be efficiently solved by combining Gaussian elimination with the successive over-relaxation method or finite-difference schemes in fractional steps. We consider basic computational aspects and illustrate the main features of this original approach.

  16. A Blossoming Development of Splines

    CERN Document Server

    Mann, Stephen

    2006-01-01

    In this lecture, we study Bezier and B-spline curves and surfaces, mathematical representations for free-form curves and surfaces that are common in CAD systems and are used to design aircraft and automobiles, as well as in modeling packages used by the computer animation industry. Bezier/B-splines represent polynomials and piecewise polynomials in a geometric manner using sets of control points that define the shape of the surface. The primary analysis tool used in this lecture is blossoming, which gives an elegant labeling of the control points that allows us to analyze their properties geometrically.

  17. Symmetric, discrete fractional splines and Gabor systems

    DEFF Research Database (Denmark)

    Søndergaard, Peter Lempel

    2006-01-01

    In this paper we consider fractional splines as windows for Gabor frames. We introduce two new types of symmetric, fractional splines in addition to one found by Unser and Blu. For the finite, discrete case we present two families of splines: one is created by sampling and periodizing the continuous splines, and one is a truly finite, discrete construction. We discuss the properties of these splines and their usefulness as windows for Gabor frames and Wilson bases.

  18. Numerical Methods Using B-Splines

    Science.gov (United States)

    Shariff, Karim; Merriam, Marshal (Technical Monitor)

    1997-01-01

    The seminar will discuss (1) The current range of applications for which B-spline schemes may be appropriate (2) The property of high-resolution and the relationship between B-spline and compact schemes (3) Comparison between finite-element, Hermite finite element and B-spline schemes (4) Mesh embedding using B-splines (5) A method for the incompressible Navier-Stokes equations in curvilinear coordinates using divergence-free expansions.
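
    To make the B-spline machinery concrete, a small sketch that builds a clamped cubic B-spline basis with SciPy and verifies the partition-of-unity property on which B-spline schemes rely:

    ```python
    import numpy as np
    from scipy.interpolate import BSpline

    k = 3                                      # cubic
    interior = np.linspace(0.0, 1.0, 7)        # breakpoints incl. endpoints
    t = np.r_[[0.0] * k, interior, [1.0] * k]  # open (clamped) knot vector
    n = len(t) - k - 1                         # number of basis functions

    x = np.linspace(0.0, 1.0, 200)
    # evaluate each basis function B_i by giving it a unit coefficient vector
    B = np.column_stack([BSpline(t, np.eye(n)[i], k)(x) for i in range(n)])
    print("basis functions:", n)
    print("max partition-of-unity error:", np.abs(B.sum(axis=1) - 1).max())
    ```

    The columns of `B` form the design matrix used by B-spline collocation and least-squares schemes alike.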

  19. Isogeometric analysis using T-splines

    KAUST Repository

    Bazilevs, Yuri

    2010-01-01

    We explore T-splines, a generalization of NURBS enabling local refinement, as a basis for isogeometric analysis. We review T-splines as a surface design methodology and then develop it for engineering analysis applications. We test T-splines on some elementary two-dimensional and three-dimensional fluid and structural analysis problems and attain good results in all cases. We summarize the current status of T-splines, their limitations, and future possibilities. © 2009 Elsevier B.V.

  20. Cubic spline functions for curve fitting

    Science.gov (United States)

    Young, J. D.

    1972-01-01

    FORTRAN cubic spline routine mathematically fits curve through given ordered set of points so that fitted curve nearly approximates curve generated by passing infinite thin spline through set of points. Generalized formulation includes trigonometric, hyperbolic, and damped cubic spline fits of third order.

  1. Density Deconvolution With EPI Splines

    Science.gov (United States)

    2015-09-01

    (Abstract not available: the record text consists of table-of-contents fragments referring to a comparison of deconvolution methods, high-fidelity and low-fidelity hydrofoil simulation output, epi-spline estimates, and notes on computation time.)

  2. Data reduction using cubic rational B-splines

    Science.gov (United States)

    Chou, Jin J.; Piegl, Les A.

    1992-01-01

    A geometric method is proposed for fitting rational cubic B-spline curves to data that represent smooth curves, including intersection or silhouette lines. The algorithm is based on the convex hull and the variation diminishing properties of Bezier/B-spline curves. It has the following structure: it tries to fit one Bezier segment to the entire data set; if this is impossible, it subdivides the data set and reconsiders the subset. After accepting a subset, the algorithm tries to find the longest run of points within a tolerance and then approximates this set with a cubic Bezier segment. The algorithm applies this procedure repeatedly to the remaining data points until all points are fitted. It is concluded that the algorithm delivers fitting curves which approximate the data with high accuracy, even in cases with large tolerances.
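
    The adaptive subdivision logic is more involved than can be shown here, but the core step, a least-squares fit of one cubic Bezier segment with fixed end points and chord-length parameters plus a tolerance check, can be sketched as follows (the quarter-circle test data is an illustrative choice, not from the paper):

    ```python
    import numpy as np

    def bernstein3(u):
        """Cubic Bernstein basis evaluated at parameters u, shape (m, 4)."""
        u = np.asarray(u)[:, None]
        return np.hstack([(1-u)**3, 3*u*(1-u)**2, 3*u**2*(1-u), u**3])

    def fit_cubic_bezier(pts):
        """Least-squares cubic Bezier with end points fixed at the data ends."""
        d = np.r_[0.0, np.cumsum(np.linalg.norm(np.diff(pts, axis=0), axis=1))]
        u = d / d[-1]                              # chord-length parameters
        A = bernstein3(u)
        # fix P0, P3; solve only for the interior control points P1, P2
        rhs = pts - np.outer(A[:, 0], pts[0]) - np.outer(A[:, 3], pts[-1])
        inner, *_ = np.linalg.lstsq(A[:, 1:3], rhs, rcond=None)
        ctrl = np.vstack([pts[0], inner, pts[-1]])
        err = np.linalg.norm(A @ ctrl - pts, axis=1).max()
        return ctrl, err

    theta = np.linspace(0, np.pi / 2, 20)
    pts = np.column_stack([np.cos(theta), np.sin(theta)])  # quarter circle
    ctrl, err = fit_cubic_bezier(pts)
    print("max deviation: %.2e" % err)
    ```

    In the full algorithm, `err` would be compared against the user tolerance to decide whether to accept the segment or subdivide the point run.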

  3. Monotonicity preserving splines using rational cubic Timmer interpolation

    Science.gov (United States)

    Zakaria, Wan Zafira Ezza Wan; Alimin, Nur Safiyah; Ali, Jamaludin Md

    2017-08-01

    In scientific applications and Computer Aided Design (CAD), users usually need to generate a spline passing through a given set of data which preserves certain shape properties of the data, such as positivity, monotonicity or convexity. The required curve has to be a smooth shape-preserving interpolant. In this paper a rational cubic spline in Timmer representation is developed to generate an interpolant that preserves monotonicity with a visually pleasing curve. To control the shape of the interpolant, three parameters are introduced. The shape parameters in the description of the rational cubic interpolant are subject to monotonicity constraints. The necessary and sufficient conditions for the rational cubic interpolant are derived, and visually the proposed rational cubic Timmer interpolant gives very pleasing results.

  4. Spline and spline wavelet methods with applications to signal and image processing

    CERN Document Server

    Averbuch, Amir Z; Zheludev, Valery A

    This volume provides universal methodologies, accompanied by Matlab software, for manipulating numerous signal and image processing applications. It is done with discrete and polynomial periodic splines. Various contributions of splines to signal and image processing are presented from a unified perspective. This presentation is based on the Zak transform and on the Spline Harmonic Analysis (SHA) methodology. SHA combines the approximation capabilities of splines with the computational efficiency of the Fast Fourier Transform. SHA reduces the design of different spline types, such as splines, spline wavelets (SW), wavelet frames (SWF) and wavelet packets (SWP), and their manipulation to simple operations. Digital filters produced by the wavelet design process give rise to subdivision schemes. Subdivision schemes enable fast explicit computation of spline values at dyadic and triadic rational points. This is used for signal and image upsampling. In addition to the design of a diverse library of splines, SW, SWP a...

  5. A comparison of several practical smoothing methods applied to Auger electron energy distributions and line scans

    International Nuclear Information System (INIS)

    Yu, K.S.; Prutton, M.; Larson, L.A.; Pate, B.B.; Poppa, H.

    1982-01-01

    Data-fitting routines utilizing nine-point least-squares quadratic, stiff spline, and piecewise least-squares polynomial methods have been compared on noisy Auger spectra and line scans. The spline-smoothing technique has been found to be the most useful and practical, allowing information to be extracted with excellent integrity from model Auger data having close to unity signal-to-noise ratios. Automatic determination of stiffness parameters is described. A comparison of the relative successes of these smoothing methods, using artificial data, is given. Applications of spline smoothing are presented to illustrate its effectiveness for difference spectra and for noisy Auger line scans. (orig.)
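
    In the same spirit, a sketch comparing a nine-point least-squares quadratic smoother (the Savitzky-Golay filter) with a smoothing spline on a synthetic noisy peak; the spline stiffness is set here through SciPy's residual-based `s` parameter rather than the automatic stiffness determination described above:

    ```python
    import numpy as np
    from scipy.interpolate import UnivariateSpline
    from scipy.signal import savgol_filter

    rng = np.random.default_rng(0)
    x = np.linspace(-5, 5, 201)
    clean = np.exp(-x**2)                          # synthetic "Auger peak"
    noisy = clean + rng.normal(0.0, 0.05, x.size)  # near-unity SNR regime

    sg = savgol_filter(noisy, window_length=9, polyorder=2)  # 9-pt quadratic
    spl = UnivariateSpline(x, noisy, s=x.size * 0.05**2)     # smoothing spline

    def rms(a, b):
        return np.sqrt(np.mean((a - b)**2))

    print("raw    rms error: %.4f" % rms(noisy, clean))
    print("savgol rms error: %.4f" % rms(sg, clean))
    print("spline rms error: %.4f" % rms(spl(x), clean))
    ```

    The `s` value is chosen so the residual sum of squares matches the assumed noise variance times the sample count, a common heuristic for smoothing splines.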

  6. Approximation and geometric modeling with simplex B-splines associated with irregular triangles

    NARCIS (Netherlands)

    Auerbach, S.; Gmelig Meyling, R.H.J.; Neamtu, M.; Neamtu, M.; Schaeben, H.

    1991-01-01

    Bivariate quadratic simplicial B-splines defined by their corresponding set of knots derived from a (suboptimal) constrained Delaunay triangulation of the domain are employed to obtain a C1-smooth surface. The generation of triangle vertices is adjusted to the areal distribution of the data in the

  7. Practical box splines for reconstruction on the body centered cubic lattice.

    Science.gov (United States)

    Entezari, Alireza; Van De Ville, Dimitri; Möller, Torsten

    2008-01-01

    We introduce a family of box splines for efficient, accurate and smooth reconstruction of volumetric data sampled on the Body Centered Cubic (BCC) lattice, which is the favorable volumetric sampling pattern due to its optimal spectral sphere packing property. First, we construct a box spline based on the four principal directions of the BCC lattice that allows for a linear C(0) reconstruction. Then, the design is extended for higher degrees of continuity. We derive the explicit piecewise polynomial representation of the C(0) and C(2) box splines that are useful for practical reconstruction applications. We further demonstrate that approximation in the shift-invariant space generated by BCC-lattice shifts of these box splines is twice as efficient as using the tensor-product B-spline solutions on the Cartesian lattice (with comparable smoothness and approximation order, and with the same sampling density). Practical evidence is provided demonstrating that the BCC lattice is not only a generally more accurate sampling pattern, but also allows for extremely efficient reconstructions that outperform tensor-product Cartesian reconstructions.

  8. A new relational method for smoothing and projecting age-specific fertility rates: TOPALS

    NARCIS (Netherlands)

    de Beer, J.A.A.

    2011-01-01

    Age-specific fertility rates can be smoothed using parametric models or splines. Alternatively a relational model can be used which relates the age profile to be fitted or projected to a standard age schedule. This paper introduces TOPALS (tool for projecting age patterns using linear splines), a

  9. Quadrotor system identification using the multivariate multiplex b-spline

    NARCIS (Netherlands)

    Visser, T.; De Visser, C.C.; Van Kampen, E.J.

    2015-01-01

    A novel method for aircraft system identification is presented that is based on a new multivariate spline type: the multivariate multiplex B-spline. The multivariate multiplex B-spline is a generalization of the recently introduced tensor-simplex B-spline. Multivariate multiplex splines obtain

  10. Construction of local integro quintic splines

    Directory of Open Access Journals (Sweden)

    T. Zhanlav

    2016-06-01

    Full Text Available In this paper, we show that integro quintic splines can be constructed locally without solving any systems of equations. The new construction does not require any additional end conditions. By virtue of these advantages the proposed algorithm is easy to implement and effective. At the same time, the local integro quintic splines possess approximation properties as good as those of the integro quintic splines. In this paper, we have proved that our local integro quintic spline has superconvergence properties at the knots for the first and third derivatives. The orders of convergence at the knots are six (not five) for the first derivative and four (not three) for the third derivative.

  11. Bayesian inference of local geomagnetic secular variation curves: application to archaeomagnetism

    Science.gov (United States)

    Lanos, Philippe

    2014-05-01

    The errors that occur at different stages of the archaeomagnetic calibration process are combined using Bayesian hierarchical modelling. The archaeomagnetic data obtained from archaeological structures such as hearths, kilns or sets of bricks and tiles exhibit considerable experimental errors and are generally more or less well dated by archaeological context, history or chronometric methods (14C, TL, dendrochronology, etc.). They can also be associated with stratigraphic observations which provide prior relative chronological information. The modelling we propose allows all these observations and errors to be linked together through appropriate prior probability densities. The model also includes penalized cubic splines for estimating the univariate, spherical or three-dimensional curves for the secular variation of the geomagnetic field (inclination, declination, intensity) over time at a local place. The mean smooth curve we obtain, with its posterior Bayesian envelope, adapts to the effects of variability in the density of reference points over time. Moreover, the hierarchical modelling also provides an efficient way to penalize outliers automatically. With this new posterior estimate of the curve, the Bayesian statistical framework then allows the calendar dates of undated archaeological features (such as kilns) to be estimated from one, two or three geomagnetic parameters (inclination, declination and/or intensity). Date estimates are presented in the same way as those that arise from radiocarbon dating. In order to illustrate the model and the inference method used, we present results based on recently published French, Bulgarian and Austrian datasets.
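
    The curve-fitting core of the abstract, penalized cubic splines, can be sketched in its simplest frequentist form as a P-spline fit: a cubic B-spline basis with a second-difference penalty on the coefficients. The Bayesian hierarchical treatment of dating errors and outliers is not reproduced here, and the data below are synthetic:

    ```python
    import numpy as np
    from scipy.interpolate import BSpline

    def pspline_fit(x, y, n_basis=20, lam=1.0, k=3):
        """Penalized B-spline fit: min ||y - B c||^2 + lam ||D2 c||^2."""
        xs, xe = x.min(), x.max()
        t = np.r_[[xs]*k, np.linspace(xs, xe, n_basis - k + 1), [xe]*k]
        B = np.column_stack([BSpline(t, np.eye(n_basis)[i], k)(x)
                             for i in range(n_basis)])
        D2 = np.diff(np.eye(n_basis), n=2, axis=0)   # second differences
        c = np.linalg.solve(B.T @ B + lam * D2.T @ D2, B.T @ y)
        return BSpline(t, c, k)

    rng = np.random.default_rng(1)
    x = np.sort(rng.uniform(0, 10, 120))
    y = np.sin(x) + rng.normal(0, 0.3, x.size)   # synthetic "declination"
    f = pspline_fit(x, y, lam=1.0)
    rmse = np.sqrt(np.mean((f(x) - np.sin(x))**2))
    print("rms error vs truth: %.3f" % rmse)
    ```

    Increasing `lam` stiffens the curve; in the Bayesian version this trade-off is governed by a prior on the coefficient differences rather than a fixed penalty weight.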

  12. Spline methods for conservation equations

    International Nuclear Information System (INIS)

    Bottcher, C.; Strayer, M.R.

    1991-01-01

    We consider the numerical solution of physical theories, in particular hydrodynamics, which can be formulated as systems of conservation laws. To this end we briefly describe the Basis Spline and collocation methods, paying particular attention to representation theory, which provides discrete analogues of the continuum conservation and dispersion relations, and hence a rigorous understanding of errors and instabilities. On this foundation we propose an algorithm for hydrodynamic problems in which most linear and nonlinear instabilities are brought under control. Numerical examples are presented from one-dimensional relativistic hydrodynamics. 9 refs., 10 figs

  13. Quadratic spline finite element method

    Directory of Open Access Journals (Sweden)

    A. R. Bahadir

    2002-01-01

    Full Text Available The problem of heat transfer in a Positive Temperature Coefficient (PTC) thermistor, which may form one element of an electric circuit, is solved numerically by a finite element method. The approach used is based on the Galerkin finite element method using quadratic splines as shape functions. The resulting system of ordinary differential equations is solved by the finite difference method. Comparison is made with numerical and analytical solutions, and the accuracy of the computed solutions indicates that the method is well suited for the solution of the PTC thermistor problem.

  14. Trajectory control of an articulated robot with a parallel drive arm based on splines under tension

    Science.gov (United States)

    Yi, Seung-Jong

    Today's industrial robots controlled by mini/micro computers are basically simple positioning devices. The positioning accuracy depends on the mathematical description of the robot configuration to place the end-effector at the desired position and orientation within the workspace and on following the specified path which requires the trajectory planner. In addition, the consideration of joint velocity, acceleration, and jerk trajectories are essential for trajectory planning of industrial robots to obtain smooth operation. The newly designed 6 DOF articulated robot with a parallel drive arm mechanism which permits the joint actuators to be placed in the same horizontal line to reduce the arm inertia and to increase load capacity and stiffness is selected. First, the forward kinematic and inverse kinematic problems are examined. The forward kinematic equations are successfully derived based on Denavit-Hartenberg notation with independent joint angle constraints. The inverse kinematic problems are solved using the arm-wrist partitioned approach with independent joint angle constraints. Three types of curve fitting methods used in trajectory planning, i.e., certain degree polynomial functions, cubic spline functions, and cubic spline functions under tension, are compared to select the best possible method to satisfy both smooth joint trajectories and positioning accuracy for a robot trajectory planner. Cubic spline functions under tension is the method selected for the new trajectory planner. This method is implemented for a 6 DOF articulated robot with a parallel drive arm mechanism to improve the smoothness of the joint trajectories and the positioning accuracy of the manipulator. Also, this approach is compared with existing trajectory planners, 4-3-4 polynomials and cubic spline functions, via circular arc motion simulations. The new trajectory planner using cubic spline functions under tension is implemented into the microprocessor based robot controller and

  15. Univariate Cubic L1 Interpolating Splines: Spline Functional, Window Size and Analysis-based Algorithm

    Directory of Open Access Journals (Sweden)

    Shu-Cherng Fang

    2010-08-01

    Full Text Available We compare univariate L1 interpolating splines calculated on 5-point windows, on 7-point windows and on global data sets using four different spline functionals, namely, ones based on the second derivative, the first derivative, the function value and the antiderivative. Computational results indicate that second-derivative-based 5-point-window L1 splines preserve shape as well as or better than the other types of L1 splines. To calculate second-derivative-based 5-point-window L1 splines, we introduce an analysis-based, parallelizable algorithm. This algorithm is orders of magnitude faster than the previously widely used primal affine algorithm.

  16. Spline fitting for multi-set data

    International Nuclear Information System (INIS)

    Zhou Hongmo; Liu Renqiu; Liu Tingjin

    1987-01-01

    A spline fit method and program for multi-set data have been developed. Improvements have been made to provide new capabilities: a spline of any order as the basis, knot optimization, and accurate calculation of the error of the fitted values. The program has been used for practical evaluation of nuclear data.

  17. Bayesian Nonparametric Regression Analysis of Data with Random Effects Covariates from Longitudinal Measurements

    KAUST Repository

    Ryu, Duchwan

    2010-09-28

    We consider nonparametric regression analysis in a generalized linear model (GLM) framework for data with covariates that are the subject-specific random effects of longitudinal measurements. The usual assumption that the effects of the longitudinal covariate processes are linear in the GLM may be unrealistic and if this happens it can cast doubt on the inference of observed covariate effects. Allowing the regression functions to be unknown, we propose to apply Bayesian nonparametric methods including cubic smoothing splines or P-splines for the possible nonlinearity and use an additive model in this complex setting. To improve computational efficiency, we propose the use of data-augmentation schemes. The approach allows flexible covariance structures for the random effects and within-subject measurement errors of the longitudinal processes. The posterior model space is explored through a Markov chain Monte Carlo (MCMC) sampler. The proposed methods are illustrated and compared to other approaches, the "naive" approach and the regression calibration, via simulations and by an application that investigates the relationship between obesity in adulthood and childhood growth curves. © 2010, The International Biometric Society.
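
    For the Gaussian-response special case, the Bayesian smoothing-spline idea reduces to a conjugate normal posterior over spline coefficients, from which pointwise credible envelopes can be drawn without MCMC. The truncated-power basis, fixed hyperparameters and data below are illustrative assumptions, not the authors' GLM/data-augmentation implementation:

    ```python
    import numpy as np

    def truncated_power_basis(x, knots):
        """Cubic truncated-power spline basis: 1, x, x^2, x^3, (x-k)_+^3."""
        cols = [np.ones_like(x), x, x**2, x**3]
        cols += [np.clip(x - k, 0, None)**3 for k in knots]
        return np.column_stack(cols)

    rng = np.random.default_rng(2)
    x = np.sort(rng.uniform(0, 1, 100))
    y = np.sin(2*np.pi*x) + rng.normal(0, 0.2, x.size)

    B = truncated_power_basis(x, np.linspace(0.1, 0.9, 9))
    sigma2, lam = 0.2**2, 1e-6          # known noise var; weak smoothness prior
    P = np.eye(B.shape[1]); P[:2, :2] = 0    # leave constant/linear unpenalized
    cov = np.linalg.inv(B.T @ B / sigma2 + lam * P / sigma2)
    cov = (cov + cov.T) / 2                  # symmetrize for sampling
    mean = cov @ B.T @ y / sigma2

    draws = rng.multivariate_normal(mean, cov, size=200) @ B.T  # posterior curves
    lo, hi = np.percentile(draws, [2.5, 97.5], axis=0)
    truth = np.sin(2*np.pi*x)
    inside = np.mean((truth >= lo) & (truth <= hi))
    print("fraction of truth inside 95%% band: %.2f" % inside)
    ```

    In the full model, `sigma2` and `lam` would themselves carry priors and be updated by the MCMC sampler rather than fixed.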

  18. Bayesian modeling using WinBUGS

    CERN Document Server

    Ntzoufras, Ioannis

    2009-01-01

    A hands-on introduction to the principles of Bayesian modeling using WinBUGS Bayesian Modeling Using WinBUGS provides an easily accessible introduction to the use of WinBUGS programming techniques in a variety of Bayesian modeling settings. The author provides an accessible treatment of the topic, offering readers a smooth introduction to the principles of Bayesian modeling with detailed guidance on the practical implementation of key principles. The book begins with a basic introduction to Bayesian inference and the WinBUGS software and goes on to cover key topics, including: Markov Chain Monte Carlo algorithms in Bayesian inference Generalized linear models Bayesian hierarchical models Predictive distribution and model checking Bayesian model and variable evaluation Computational notes and screen captures illustrate the use of both WinBUGS as well as R software to apply the discussed techniques. Exercises at the end of each chapter allow readers to test their understanding of the presented concepts and all ...

  19. Spline Trajectory Algorithm Development: Bezier Curve Control Point Generation for UAVs

    Science.gov (United States)

    Howell, Lauren R.; Allen, B. Danette

    2016-01-01

    A greater need for sophisticated autonomous piloting systems has arisen in direct correlation with the ubiquity of Unmanned Aerial Vehicle (UAV) technology. Whether surveying unknown or unexplored areas of the world, collecting scientific data from regions in which humans are typically incapable of entering, locating lost or wanted persons, or delivering emergency supplies, an unmanned vehicle moving in close proximity to people and other vehicles should fly smoothly and predictably. The mathematical application of spline interpolation can play an important role in autopilots' on-board trajectory planning. Spline interpolation allows for the connection of three-dimensional Euclidean space coordinates through a continuous set of smooth curves. This paper explores the motivation, application, and methodology used to compute the spline control points, which shape the curves in such a way that the autopilot trajectory is able to meet vehicle-dynamics limitations. The spline algorithms developed to generate these curves supply autopilots with the information necessary to compute vehicle paths through a set of coordinate waypoints.
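
    The control-point generation itself depends on the vehicle dynamics, but the evaluation step behind any such planner is De Casteljau's algorithm; a sketch with hypothetical 2-D control points:

    ```python
    import numpy as np

    def de_casteljau(ctrl, u):
        """Evaluate a Bezier curve at parameter u by repeated linear
        interpolation of the control points."""
        pts = np.asarray(ctrl, float)
        while len(pts) > 1:
            pts = (1 - u) * pts[:-1] + u * pts[1:]
        return pts[0]

    # hypothetical waypoint-derived control points for one cubic segment
    ctrl = np.array([[0.0, 0.0], [1.0, 2.0], [3.0, 2.0], [4.0, 0.0]])
    mid = de_casteljau(ctrl, 0.5)
    print("curve midpoint:", mid)   # (P0 + 3 P1 + 3 P2 + P3) / 8 = (2.0, 1.5)
    ```

    Because the curve stays inside the convex hull of its control points, bounding the control points is one simple way to respect clearance constraints along the path.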

  20. Positivity Preserving Interpolation Using Rational Bicubic Spline

    Directory of Open Access Journals (Sweden)

    Samsul Ariffin Abdul Karim

    2015-01-01

    Full Text Available This paper discusses positivity preserving interpolation for positive surface data by extending the C1 rational cubic spline interpolant of Karim and Kong to the bivariate case. The partially blended rational bicubic spline has 12 parameters in its description, 8 of which are free parameters. The sufficient conditions for positivity are derived on each of the four boundary curves of the network on the rectangular patch. Numerical comparison with existing schemes has also been carried out in detail. Based on Root Mean Square Error (RMSE), our partially blended rational bicubic spline is on a par with the established methods.

  1. Optimal Approximation of Biquartic Polynomials by Bicubic Splines

    Directory of Open Access Journals (Sweden)

    Kačala Viliam

    2018-01-01

    The goal of this paper is to resolve this problem. Unlike the case of spline curves, for spline surfaces it is insufficient to assume that the grid is uniform and that the spline derivatives are computed from a biquartic polynomial. We show that the biquartic polynomial coefficients have to satisfy some additional constraints to achieve optimal approximation by bicubic splines.

  2. Spline interpolations besides wood model widely used in lactation

    Science.gov (United States)

    Korkmaz, Mehmet

    2017-04-01

    In this study, spline interpolations (linear, quadratic and cubic), which pass exactly through all data points, are discussed as alternative models to the widely used Wood model for lactation curves. The observed and estimated values from the spline interpolations and the Wood model are given with their error sums of squares, and the lactation curves from the spline interpolations and the Wood model are shown on the same graph so that the differences can be observed. Estimates for intermediate values were made using both the spline interpolations and the Wood model; the spline interpolations give more precise estimates of intermediate values, and their predictions for missing or incorrect observations were very successful compared with the values from the Wood model. Spline interpolations thus offer investigators new ideas and interpretations in addition to the information from the well-known classical analysis.
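
    A sketch of the comparison described above, with hypothetical monthly milk-yield data: the Wood model y(t) = a·t^b·e^(-ct) is fitted by nonlinear least squares, while the cubic spline interpolant reproduces every observation exactly:

    ```python
    import numpy as np
    from scipy.interpolate import CubicSpline
    from scipy.optimize import curve_fit

    def wood(t, a, b, c):
        """Wood lactation curve: a * t**b * exp(-c*t)."""
        return a * t**b * np.exp(-c * t)

    # hypothetical monthly test-day milk yields (kg/day)
    t = np.arange(1, 11, dtype=float)
    y = np.array([18.0, 22.5, 23.8, 23.0, 21.6, 20.1, 18.5, 17.0, 15.4, 13.9])

    params, _ = curve_fit(wood, t, y, p0=(15.0, 0.3, 0.05))
    spline = CubicSpline(t, y)

    sse_wood = np.sum((wood(t, *params) - y)**2)
    print("Wood SSE at the data points: %.3f" % sse_wood)
    print("spline SSE at the data points: 0 (interpolates exactly)")
    print("estimate at t=4.5: wood=%.2f, spline=%.2f"
          % (wood(4.5, *params), float(spline(4.5))))
    ```

    The interpolating spline's zero residual is also its weakness: it reproduces measurement errors exactly, whereas the three-parameter Wood curve smooths over them.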

  3. Biomechanical Analysis with Cubic Spline Functions

    Science.gov (United States)

    McLaughlin, Thomas M.; And Others

    1977-01-01

    Results of experimentation suggest that the cubic spline is a convenient and consistent method for providing an accurate description of displacement-time data and for obtaining the corresponding time derivatives. (MJB)

  4. On convexity and Schoenberg's variation diminishing splines

    International Nuclear Information System (INIS)

    Feng, Yuyu; Kozak, J.

    1992-11-01

    In the paper we characterize a convex function by the monotonicity of a particular variation diminishing spline sequence. The result extends the property known for the Bernstein polynomial sequence. (author). 4 refs

  5. Spline Variational Theory for Composite Bolted Joints

    National Research Council Canada - National Science Library

    Iarve, E

    1997-01-01

    .... Two approaches were implemented. A conventional mesh overlay method in the crack region to satisfy the crack face boundary conditions and a novel spline basis partitioning method were compared...

  6. Flexible regression models with cubic splines.

    Science.gov (United States)

    Durrleman, S; Simon, R

    1989-05-01

    We describe the use of cubic splines in regression models to represent the relationship between the response variable and a vector of covariates. This simple method can help prevent the problems that result from inappropriate linearity assumptions. We compare restricted cubic spline regression to non-parametric procedures for characterizing the relationship between age and survival in the Stanford Heart Transplant data. We also provide an illustrative example in cancer therapeutics.
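
    The restricted cubic spline can be sketched with the standard natural-spline basis construction (linear beyond the boundary knots); the data, knots and fitted relationship below are illustrative, not the Stanford Heart Transplant analysis:

    ```python
    import numpy as np

    def rcs_basis(x, knots):
        """Restricted cubic spline design matrix [1, x, N_1, ..., N_{K-2}]
        (standard construction; linear tails beyond the boundary knots)."""
        x = np.asarray(x, float)
        k = np.asarray(knots, float)
        K = len(k)
        def d(j):
            p = lambda a: np.clip(x - a, 0, None)**3
            return (p(k[j]) - p(k[K-1])) / (k[K-1] - k[j])
        cols = [np.ones_like(x), x]
        cols += [d(j) - d(K-2) for j in range(K-2)]
        return np.column_stack(cols)

    rng = np.random.default_rng(3)
    x = np.sort(rng.uniform(0, 10, 150))       # stand-in for "age"
    y = np.log1p(x) + rng.normal(0, 0.1, x.size)
    X = rcs_basis(x, knots=[1, 3, 5, 7, 9])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rms_res = np.sqrt(np.mean((X @ beta - y)**2))
    print("rms residual: %.3f" % rms_res)
    ```

    In a survival setting these same basis columns would simply enter a Cox or parametric regression as covariates in place of the raw predictor.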

  7. Smooth Phase Interpolated Keying

    Science.gov (United States)

    Borah, Deva K.

    2007-01-01

    that result in equal numbers of clockwise and counter-clockwise phase rotations for equally likely symbols. The purpose served by assigning phase values in this way is to prevent unnecessary generation of spectral lines and prevent net shifts of the carrier signal. In the phase-interpolation step, the smooth phase values are interpolated over a number, n, of consecutive symbols (including the present symbol) by means of an unconventional spline curve fit.

  8. Optimal Knot Selection for Least-squares Fitting of Noisy Data with Spline Functions

    Energy Technology Data Exchange (ETDEWEB)

    Jerome Blair

    2008-05-15

    An automatic data-smoothing algorithm for data from digital oscilloscopes is described. The algorithm adjusts the bandwidth of the filtering as a function of time to provide minimum mean squared error at each time. It produces an estimate of the root-mean-square error as a function of time and does so without any statistical assumptions about the unknown signal. The algorithm is based on least-squares fitting to the data of cubic spline functions.

  9. P-Splines Using Derivative Information

    KAUST Repository

    Calderon, Christopher P.

    2010-01-01

    Time series associated with single-molecule experiments and/or simulations contain a wealth of multiscale information about complex biomolecular systems. We demonstrate how a collection of Penalized-splines (P-splines) can be useful in quantitatively summarizing such data. In this work, functions estimated using P-splines are associated with stochastic differential equations (SDEs). It is shown how quantities estimated in a single SDE summarize fast-scale phenomena, whereas variation between curves associated with different SDEs partially reflects noise induced by motion evolving on a slower time scale. P-splines assist in "semiparametrically" estimating nonlinear SDEs in situations where a time-dependent external force is applied to a single-molecule system. The P-splines introduced simultaneously use function and derivative scatterplot information to refine curve estimates. We refer to the approach as the PuDI (P-splines using Derivative Information) method. It is shown how generalized least squares ideas fit seamlessly into the PuDI method. Applications demonstrating how utilizing uncertainty information/approximations along with generalized least squares techniques improve PuDI fits are presented. Although the primary application here is in estimating nonlinear SDEs, the PuDI method is applicable to situations where both unbiased function and derivative estimates are available.
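
    A minimal sketch of the PuDI idea: fit a cubic B-spline to noisy function values and noisy derivative values in one stacked least-squares problem. The basis size, weight and synthetic data are illustrative assumptions, not the paper's estimator (which targets SDE drift and diffusion functions with generalized least squares):

    ```python
    import numpy as np
    from scipy.interpolate import BSpline

    k, n_basis = 3, 12
    t = np.r_[[0.0]*k, np.linspace(0, 1, n_basis - k + 1), [1.0]*k]

    def design(x, deriv=0):
        """Design matrix of the B-spline basis (or its deriv-th derivative)."""
        cols = []
        for i in range(n_basis):
            b = BSpline(t, np.eye(n_basis)[i], k)
            cols.append(b.derivative(deriv)(x) if deriv else b(x))
        return np.column_stack(cols)

    rng = np.random.default_rng(4)
    x = np.sort(rng.uniform(0, 1, 60))
    f_obs = np.sin(2*np.pi*x) + rng.normal(0, 0.1, x.size)           # values
    df_obs = 2*np.pi*np.cos(2*np.pi*x) + rng.normal(0, 0.5, x.size)  # slopes

    w = 0.05                              # relative weight on derivative data
    A = np.vstack([design(x), np.sqrt(w) * design(x, deriv=1)])
    b = np.r_[f_obs, np.sqrt(w) * df_obs]
    c, *_ = np.linalg.lstsq(A, b, rcond=None)
    fit = BSpline(t, c, k)
    rms_val = np.sqrt(np.mean((fit(x) - np.sin(2*np.pi*x))**2))
    print("rms value error: %.3f" % rms_val)
    ```

    The weight `w` plays the role that the noise-covariance estimate plays in the generalized least squares refinement described in the abstract.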

  10. An enhanced splined saddle method

    Science.gov (United States)

    Ghasemi, S. Alireza; Goedecker, Stefan

    2011-07-01

    We present modifications for the method recently developed by Granot and Baer [J. Chem. Phys. 128, 184111 (2008)], 10.1063/1.2916716. These modifications significantly enhance the efficiency and reliability of the method. In addition, we discuss some specific features of this method. These features provide important flexibilities which are crucial for a double-ended saddle point search method to be applicable to complex reaction mechanisms. Furthermore, we discuss under what circumstances this method might fail to find the transition state and provide remedies to avoid such situations. We demonstrate the performance of the enhanced splined saddle method on several examples with increasing complexity: isomerization of ammonia, ethane and cyclopropane molecules, tautomerization of cytosine, the ring opening of cyclobutene, the Stone-Wales transformation of the C60 fullerene, and finally rolling a small NaCl cube on a NaCl(001) surface. All of these calculations are based on density functional theory. The efficiency of the method is remarkable with regard to the reduction of the total computational time.

  11. Placing Spline Knots in Neural Networks Using Splines as Activation Functions

    Czech Academy of Sciences Publication Activity Database

    Hlaváčková, Kateřina; Verleysen, M.

    1997-01-01

    Roč. 17, 3/4 (1997), s. 159-166 ISSN 0925-2312 R&D Projects: GA ČR GA201/93/0427; GA ČR GA201/96/0971 Keywords : cubic -spline function * approximation error * knots of spline function * feedforward neural network Impact factor: 0.422, year: 1997

  12. Locally-Based Kernel PLS Smoothing to Non-Parametric Regression Curve Fitting

    Science.gov (United States)

    Rosipal, Roman; Trejo, Leonard J.; Wheeler, Kevin; Korsmeyer, David (Technical Monitor)

    2002-01-01

    We present a novel smoothing approach to non-parametric regression curve fitting. This is based on kernel partial least squares (PLS) regression in reproducing kernel Hilbert space. Our aim is to apply the methodology to smoothing experimental data where some knowledge about the approximate shape, local inhomogeneities, or points where the desired function changes its curvature is known a priori or can be derived from the observed noisy data. We propose locally-based kernel PLS regression that extends the previous kernel PLS methodology by incorporating this knowledge. We compare our approach with existing smoothing splines, hybrid adaptive splines and wavelet shrinkage techniques on two generated data sets.

  13. Scripted Bodies and Spline Driven Animation

    DEFF Research Database (Denmark)

    Erleben, Kenny; Henriksen, Knud

    2002-01-01

    In this paper we will take a close look at the details and technicalities in applying spline driven animation to scripted bodies in the context of dynamic simulation. The main contributions presented in this paper are methods for computing velocities and accelerations in the time domain of the sp...

  14. Bayesian biostatistics

    CERN Document Server

    Lesaffre, Emmanuel

    2012-01-01

    The growth of biostatistics has been phenomenal in recent years and has been marked by considerable technical innovation in both methodology and computational practicality. One area that has experienced significant growth is Bayesian methods. The growing use of Bayesian methodology has taken place partly due to an increasing number of practitioners valuing the Bayesian paradigm as matching that of scientific discovery. In addition, computational advances have allowed for more complex models to be fitted routinely to realistic data sets. Through examples, exercises and a combination of introd

  15. Smooth approximations

    Czech Academy of Sciences Publication Activity Database

    Hájek, Petr Pavel; Johanis, M.

    2010-01-01

    Roč. 259, č. 3 (2010), s. 561-582 ISSN 0022-1236 R&D Projects: GA AV ČR IAA100190801 Institutional research plan: CEZ:AV0Z10190503 Keywords : C-P-smooth * Banach spaces * Lipschitz Subject RIV: BA - General Mathematics Impact factor: 1.196, year: 2010 http://www.sciencedirect.com/science/article/pii/S0022123610001795

  16. Limit Stress Spline Models for GRP Composites | Ihueze | Nigerian ...

    African Journals Online (AJOL)

    Spline functions were established on the assumption of three intervals and fitting of quadratic and cubic splines to critical stress-strain response data. Quadratic ... of data points. Spline model is therefore recommended as it evaluates the function at subintervals, eliminating the error associated with wide range interpolation.

  17. The use of splines to analyze scanning tunneling microscopy data

    NARCIS (Netherlands)

    Wormeester, Herbert; Kip, Gerhardus A.M.; Sasse, A.G.B.M.; van Midden, H.J.P.

    1990-01-01

    Scanning tunneling microscopy (STM) requires a two‐dimensional (2D) image displaying technique for its interpretation. The flexibility and global approximation properties of splines, characteristic of a solid data reduction method as known from cubic spline interpolation, are called for. Splines were

  18. LIMIT STRESS SPLINE MODELS FOR GRP COMPOSITES

    African Journals Online (AJOL)

    ES OBE

    Department of Mechanical Engineering, Anambra State. University of Science and Technology, Uli ... 12 were established. The optimization of quadratic and cubic models by gradient search optimization gave the critical strain as 0.024, .... 2.2.1 Derivation of Cubic Spline Equation. The basic assumptions to be used are: 1.

  19. Weighted thin-plate spline image denoising

    Czech Academy of Sciences Publication Activity Database

    Kašpar, Roman; Zitová, Barbara

    2003-01-01

    Roč. 36, č. 12 (2003), s. 3027-3030 ISSN 0031-3203 R&D Projects: GA ČR GP102/01/P065 Institutional research plan: CEZ:AV0Z1075907 Keywords : image denoising * thin-plate splines Subject RIV: JD - Computer Applications, Robotics Impact factor: 1.611, year: 2003

  20. Survival estimation through the cumulative hazard function with monotone natural cubic splines.

    Science.gov (United States)

    Bantis, Leonidas E; Tsimikas, John V; Georgiou, Stelios D

    2012-07-01

    In this paper we explore the estimation of survival probabilities via a smoothed version of the survival function, in the presence of censoring. We investigate the fit of a natural cubic spline on the cumulative hazard function under appropriate constraints. Under the proposed technique the problem reduces to a restricted least squares one, leading to convex optimization. The approach taken in this paper is evaluated and compared via simulations to other known methods such as the Kaplan-Meier and the logspline estimator. Our approach is easily extended to address estimation of survival probabilities in the presence of covariates when the proportional hazards model assumption holds. In this case the method is compared to a restricted cubic spline approach that involves maximum likelihood. The proposed approach can also be adjusted to accommodate left censoring.
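    The paper's estimator is a restricted least-squares natural cubic spline; as a rough stand-in, the sketch below smooths a Nelson-Aalen cumulative hazard with a monotone (PCHIP) cubic, which likewise guarantees a non-increasing survival curve. The censored sample is hypothetical:

    ```python
    import numpy as np
    from scipy.interpolate import PchipInterpolator

    # Hypothetical right-censored sample: times and status (1 = event, 0 = censored).
    times = np.array([2., 3., 3., 5., 6., 7., 9., 11., 12., 14.])
    status = np.array([1, 1, 0, 1, 1, 0, 1, 1, 0, 1])

    # Nelson-Aalen step estimate of the cumulative hazard H(t).
    order = np.argsort(times)
    times, status = times[order], status[order]
    at_risk = times.size - np.arange(times.size)   # subjects at risk just before each time
    H_step = np.cumsum(status / at_risk)

    # Monotone (PCHIP) cubic through (0, 0) and the step values; keeping H
    # non-decreasing guarantees the implied survival curve is non-increasing.
    kt = np.concatenate(([0.0], times))
    kH = np.concatenate(([0.0], H_step))
    keep = np.append(np.diff(kt) > 0, True)        # drop ties, keep last value at each time
    H_spline = PchipInterpolator(kt[keep], kH[keep])

    def survival(t):
        return np.exp(-H_spline(t))
    ```

    The monotone interpolant plays the role of the paper's shape constraint; the restricted least-squares fit of the actual method would replace the interpolation step.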

  1. Selected Aspects of Wear Affecting Keyed Joints and Spline Connections During Operation of Aircrafts

    Directory of Open Access Journals (Sweden)

    Gębura Andrzej

    2014-12-01

    Full Text Available The paper deals with selected deficiencies of spline connections, such as angular or parallel misalignment (eccentricity) and excessive play. It is emphasized how important these deficiencies are for smooth operation of the entire driving units. The aim of the study is to provide a reference list of such deficiencies together with visual symptoms of wear, specification of mechanical measurements for mating surfaces, mathematical description of waveforms for dynamic variability of motion in such connections, and visualizations of the connection behaviour acquired with the use of the FAM-C and FDM-A methods. Attention is paid to hazards to flight safety when excessively worn spline connections are operated for long periods of time.

  2. Modeling of type-2 fuzzy cubic B-spline surface for flood data problem in Malaysia

    Science.gov (United States)

    Bidin, Mohd Syafiq; Wahab, Abd. Fatah

    2017-08-01

    Malaysia possesses low and sloping land areas which may cause floods. The flood phenomenon can be analyzed if the surface data of the study area can be modeled by geometric modeling. Type-2 fuzzy data for the flood data is defined using type-2 fuzzy set theory in order to handle the uncertainty of complex data. Then, a cubic B-spline surface function is used to produce a smooth surface. Three main processes are carried out to obtain a crisp surface from type-2 fuzzy data: fuzzification (α-cut operation), type-reduction and defuzzification. Upon conducting these processes, the Type-2 Fuzzy Cubic B-Spline Surface Model is applied to visualize the surface data of the flood areas that involve complex uncertainty.

  3. Bayesian spatial semi-parametric modeling of HIV variation in Kenya.

    Directory of Open Access Journals (Sweden)

    Oscar Ngesa

    Spatial statistics has seen rapid application in many fields, especially epidemiology and public health. Many studies, nonetheless, make limited use of the geographical location information and also usually assume that the covariates, which are related to the response variable, have linear effects. We develop a Bayesian semi-parametric regression model for HIV prevalence data. Model estimation and inference is based on a fully Bayesian approach via Markov chain Monte Carlo (MCMC). The model is applied to HIV prevalence data among men in Kenya, derived from the Kenya AIDS indicator survey, with n = 3,662. Past studies have concluded that HIV infection has a nonlinear association with age. In this study a smooth function based on penalized regression splines is used to estimate this nonlinear effect. Other covariates were assumed to have a linear effect. Spatial references to the counties were modeled as both structured and unstructured spatial effects. We observe that circumcision reduces the risk of HIV infection. The results also indicate that men in urban areas were more likely to be infected by HIV than their rural counterparts. Men with higher education had the lowest risk of HIV infection. A nonlinear relationship between HIV infection and age was established: risk of HIV infection increases with age up to the age of 40 and then declines. Men who had an STI in the last 12 months were more likely to be infected with HIV. Men who had ever used a condom were also found to have a higher likelihood of being infected by HIV. A significant spatial variation of HIV infection in Kenya was also established. The study shows the practicality and flexibility of Bayesian semi-parametric regression models in analyzing epidemiological data.

  4. Detection and correction of laser induced breakdown spectroscopy spectral background based on spline interpolation method

    Science.gov (United States)

    Tan, Bing; Huang, Min; Zhu, Qibing; Guo, Ya; Qin, Jianwei

    2017-12-01

    Laser-induced breakdown spectroscopy (LIBS) is an analytical technique that has gained increasing attention because of its many applications. The production of a continuous background in LIBS is inevitable because of factors associated with laser energy, gate width, time delay, and experimental environment. The continuous background significantly influences the analysis of the spectrum. Researchers have proposed several background correction methods, such as polynomial fitting, Lorentz fitting and model-free methods, but few of them have been applied in the field of LIBS technology, particularly in qualitative and quantitative analyses. This study proposes a method based on spline interpolation for detecting and estimating the continuous background spectrum according to its smoothness characteristic. Experiments on simulated background correction indicated that the spline interpolation method acquired the largest signal-to-background ratio (SBR), ahead of polynomial fitting, Lorentz fitting and the model-free method. All of these background correction methods acquire larger SBR values than before background correction (the SBR value before background correction is 10.0992, whereas the SBR values after background correction by spline interpolation, polynomial fitting, Lorentz fitting, and model-free methods are 26.9576, 24.6828, 18.9770, and 25.6273 respectively). After adding random noise with different signal-to-noise ratios to the spectrum, the spline interpolation method acquires large SBR values, whereas polynomial fitting and the model-free method obtain low SBR values. All of the background correction methods exhibit improved quantitative results for Cu compared with those acquired before background correction (the linear correlation coefficient value before background correction is 0.9776; the linear correlation coefficient values after background correction using spline interpolation, polynomial fitting, Lorentz
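    A toy version of spline-based background estimation: anchor a cubic spline at per-window minima of the spectrum, where emission peaks contribute least, then subtract. The synthetic continuum, peak shapes, and window width are assumptions for illustration, not the paper's procedure:

    ```python
    import numpy as np
    from scipy.interpolate import CubicSpline

    x = np.linspace(0.0, 100.0, 1001)
    background = 5.0 + 0.05 * x                    # slowly varying continuum
    peaks = 8.0 * np.exp(-0.5 * ((x - 30) / 0.8) ** 2) \
          + 6.0 * np.exp(-0.5 * ((x - 70) / 0.8) ** 2)
    spectrum = background + peaks

    # Anchor the spline at the minimum of each window, where the signal is most
    # likely pure background (the peaks only ever raise the spectrum).
    width = 100                                    # samples per window
    idx = [i + int(np.argmin(spectrum[i:i + width])) for i in range(0, x.size, width)]
    bg_est = CubicSpline(x[idx], spectrum[idx])(x)

    corrected = spectrum - bg_est
    ```

    The per-window minimum is a simple stand-in for the smoothness-based anchor selection of the actual method.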

  5. Satellite Video Point-target Tracking in Combination with Motion Smoothness Constraint and Grayscale Feature

    OpenAIRE

    WU Jiaqi; ZHANG Guo; WANG Taoyang; JIANG Yonghua

    2017-01-01

    In view of the problem of satellite video point-target tracking, a method of Bayesian classification for tracking under a motion smoothness constraint is proposed, named Bayesian MoST. The idea of naive Bayesian classification without relying on any prior probability of the target is introduced. Under the constraint of motion smoothness, the gray level similarity feature is used to describe the likelihood of the target. And then, the simplified conditional probability correction model o...

  6. Spline models of contemporary, 2030, 2060, and 2090 climates for Mexico and their use in understanding climate-change impacts on the vegetation

    Science.gov (United States)

    Cuauhtemoc Saenz-Romero; Gerald E. Rehfeldt; Nicholas L. Crookston; Pierre Duval; Remi St-Amant; Jean Beaulieu; Bryce A. Richardson

    2010-01-01

    Spatial climate models were developed for Mexico and its periphery (southern USA, Cuba, Belize and Guatemala) for monthly normals (1961-1990) of average, maximum and minimum temperature and precipitation using thin plate smoothing splines of ANUSPLIN software on ca. 3,800 observations. The fit of the model was generally good: the signal was considerably less than one-...

  7. Marginal longitudinal semiparametric regression via penalized splines

    KAUST Repository

    Al Kadiri, M.

    2010-08-01

    We study the marginal longitudinal nonparametric regression problem and some of its semiparametric extensions. We point out that, while several elaborate proposals for efficient estimation have been proposed, a relative simple and straightforward one, based on penalized splines, has not. After describing our approach, we then explain how Gibbs sampling and the BUGS software can be used to achieve quick and effective implementation. Illustrations are provided for nonparametric regression and additive models.
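    The Bayesian/BUGS machinery aside, the penalized-spline estimate at the heart of such models can be sketched as ridge-type least squares on a B-spline basis with a second-order difference penalty. The data, knot count, and penalty weight below are arbitrary illustrative choices:

    ```python
    import numpy as np
    from scipy.interpolate import BSpline

    rng = np.random.default_rng(1)
    x = np.linspace(0.0, 1.0, 150)
    y = np.sin(2 * np.pi * x) + 0.2 * rng.standard_normal(x.size)

    # Cubic B-spline basis with 12 equally spaced interior knots.
    k, n_interior = 3, 12
    t = np.concatenate((np.zeros(k + 1),
                        np.linspace(0, 1, n_interior + 2)[1:-1],
                        np.ones(k + 1)))
    n_basis = len(t) - k - 1
    B = np.column_stack([BSpline(t, np.eye(n_basis)[j], k)(x) for j in range(n_basis)])

    # P-spline: least squares plus a second-order difference penalty.
    lam = 1.0
    D2 = np.diff(np.eye(n_basis), n=2, axis=0)
    coef = np.linalg.solve(B.T @ B + lam * D2.T @ D2, B.T @ y)
    fit = B @ coef
    ```

    In the paper's fully Bayesian treatment the penalty weight becomes a variance component sampled by Gibbs steps rather than a fixed `lam`.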

  8. Application of multivariate splines to discrete mathematics

    OpenAIRE

    Xu, Zhiqiang

    2005-01-01

    Using methods developed in multivariate splines, we present an explicit formula for discrete truncated powers, which are defined as the number of non-negative integer solutions of linear Diophantine equations. We further use the formula to study some classical problems in discrete mathematics as follows. First, we extend the partition function of integers in number theory. Second, we exploit the relation between the relative volume of convex polytopes and multivariate truncated powers and giv...

  9. A quadratic spline maximum entropy method for the computation of invariant densities

    Directory of Open Access Journals (Sweden)

    DING Jiu

    2015-06-01

    The numerical recovery of an invariant density of the Frobenius-Perron operator corresponding to a nonsingular transformation is carried out using quadratic spline functions. We implement a maximum entropy method to approximate the invariant density. The proposed method removes the ill-conditioning in the maximum entropy method which arises from the use of polynomials. Due to the smoothness of the functions and a good convergence rate, the accuracy of the numerical calculation increases rapidly as the number of moment functions increases. The numerical results from the proposed method are supported by the theoretical analysis.

  10. Pseudo-cubic thin-plate type Spline method for analyzing experimental data

    International Nuclear Information System (INIS)

    Crecy, F. de.

    1993-01-01

    A mathematical tool, using pseudo-cubic thin-plate type splines, has been developed for the analysis of experimental data points. The main purpose is to obtain, without any a priori given model, a mathematical predictor with related uncertainties, usable at any point in the multidimensional parameter space. The smoothing parameter is determined by a generalized cross validation method. The residual standard deviation obtained is significantly smaller than that of a least squares regression. An example of use is given with critical heat flux data, showing a significant decrease of the design criterion (minimum allowable value of the DNB ratio). (author) 4 figs., 1 tab., 7 refs

  11. Solving Buckmaster equation using cubic B-spline and cubic trigonometric B-spline collocation methods

    Science.gov (United States)

    Chanthrasuwan, Maveeka; Asri, Nur Asreenawaty Mohd; Hamid, Nur Nadiah Abd; Majid, Ahmad Abd.; Azmi, Amirah

    2017-08-01

    The cubic B-spline and cubic trigonometric B-spline functions are used to set up the collocation in finding solutions for the Buckmaster equation. These splines are applied as interpolating functions in the spatial dimension while the finite difference method (FDM) is used to discretize the time derivative. The Buckmaster equation is linearized using Taylor's expansion and solved using two schemes, namely Crank-Nicolson and fully implicit. The von Neumann stability analysis is carried out on the two schemes and they are shown to be conditionally stable. In order to demonstrate the capability of the schemes, some problems are solved and compared with analytical and FDM solutions. The proposed methods are found to generate more accurate results than the FDM.
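    The Buckmaster solver itself is not reproduced here, but the spatial ingredient, cubic B-spline collocation, can be shown on a toy boundary-value problem with a known solution. The problem, knot layout, and collocation points are assumptions for illustration only:

    ```python
    import numpy as np
    from scipy.interpolate import BSpline

    # Toy steady problem: u''(x) = -pi^2 sin(pi x) on [0, 1], u(0) = u(1) = 0,
    # whose exact solution is u(x) = sin(pi x).
    k, n_interior = 3, 20
    t = np.concatenate((np.zeros(k + 1),
                        np.linspace(0.0, 1.0, n_interior + 2)[1:-1],
                        np.ones(k + 1)))
    n = len(t) - k - 1

    def basis(x, nu=0):
        x = np.atleast_1d(x)
        return np.column_stack([BSpline(t, np.eye(n)[j], k)(x, nu=nu) for j in range(n)])

    # Collocate u'' at the mesh points; the first and last rows impose the
    # boundary conditions, giving a square nonsingular system.
    xs = np.linspace(0.0, 1.0, n_interior + 2)
    A = np.vstack([basis(0.0), basis(xs, nu=2), basis(1.0)])
    b = np.concatenate([[0.0], -np.pi**2 * np.sin(np.pi * xs), [0.0]])
    u = BSpline(t, np.linalg.solve(A, b), k)
    ```

    In the time-dependent Buckmaster setting, a Crank-Nicolson or fully implicit step would produce a system of this shape at every time level.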

  12. PSPLINE: Princeton Spline and Hermite cubic interpolation routines

    Science.gov (United States)

    McCune, Doug

    2017-10-01

    PSPLINE is a collection of Spline and Hermite interpolation tools for 1D, 2D, and 3D datasets on rectilinear grids. Spline routines give full control over boundary conditions, including periodic, 1st or 2nd derivative match, or divided difference-based boundary conditions on either end of each grid dimension. Hermite routines take the function value and derivatives at each grid point as input, giving back a representation of the function between grid points. Routines are provided for creating Hermite datasets, with appropriate boundary conditions applied. The 1D spline and Hermite routines are based on standard methods; the 2D and 3D spline or Hermite interpolation functions are constructed from 1D spline or Hermite interpolation functions in a straightforward manner. Spline and Hermite interpolation functions are often much faster to evaluate than other representations using e.g. Fourier series or otherwise involving transcendental functions.
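    PSPLINE itself is Fortran, but the kind of boundary-condition control it describes can be illustrated with SciPy's `CubicSpline`, which offers natural, clamped (first-derivative match), and periodic end conditions; the test function here is an arbitrary choice:

    ```python
    import numpy as np
    from scipy.interpolate import CubicSpline

    x = np.linspace(0.0, 2.0 * np.pi, 9)
    y = np.cos(x)
    y[-1] = y[0]      # 'periodic' requires exactly matching endpoint values

    natural = CubicSpline(x, y, bc_type='natural')              # S'' = 0 at both ends
    clamped = CubicSpline(x, y, bc_type=((1, 0.0), (1, 0.0)))   # match S' = -sin(0) = 0
    periodic = CubicSpline(x, y, bc_type='periodic')

    xf = np.linspace(0.0, 2.0 * np.pi, 201)
    errs = {name: float(np.max(np.abs(s(xf) - np.cos(xf))))
            for name, s in (('natural', natural), ('clamped', clamped), ('periodic', periodic))}
    ```

    For cos(x) the true curvature at the ends is -1, so the natural condition's artificial S'' = 0 costs accuracy near the boundary, while the derivative-matched ends do not.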

  13. Higher-order numerical solutions using cubic splines

    Science.gov (United States)

    Rubin, S. G.; Khosla, P. K.

    1976-01-01

    A cubic spline collocation procedure was developed for the numerical solution of partial differential equations. This spline procedure is reformulated so that the accuracy of the second-derivative approximation is improved and parallels that previously obtained for lower derivative terms. The final result is a numerical procedure having overall third-order accuracy on a nonuniform mesh. Solutions using both spline procedures, as well as three-point finite difference methods, are presented for several model problems.

  14. Optimal Approximation of Biquartic Polynomials by Bicubic Splines

    Science.gov (United States)

    Kačala, Viliam; Török, Csaba

    2018-02-01

    Recently an unexpected approximation property between polynomials of degree three and four was revealed within the framework of two-part approximation models in 2-norm, Chebyshev norm and Holladay seminorm. Namely, it was proved that if a two-component cubic Hermite spline's first derivative at the shared knot is computed from the first derivative of a quartic polynomial, then the spline is a clamped spline of class C2 and also the best approximant to the polynomial. Although it was known that a 2 × 2 component uniform bicubic Hermite spline is a clamped spline of class C2 if the derivatives at the shared knots are given by the first derivatives of a biquartic polynomial, the optimality of such approximation remained an open question. The goal of this paper is to resolve this problem. Unlike the spline curves, in the case of spline surfaces it is insufficient to suppose that the grid should be uniform and the spline derivatives computed from a biquartic polynomial. We show that the biquartic polynomial coefficients have to satisfy some additional constraints to achieve optimal approximation by bicubic splines.

  15. Recursive B-spline approximation using the Kalman filter

    Directory of Open Access Journals (Sweden)

    Jens Jauch

    2017-02-01

    This paper proposes a novel recursive B-spline approximation (RBA) algorithm which approximates an unbounded number of data points with a B-spline function and achieves lower computational effort compared with previous algorithms. Conventional recursive algorithms based on the Kalman filter (KF) restrict the approximation to a bounded and predefined interval. Conversely, RBA includes a novel shift operation that makes it possible to shift estimated B-spline coefficients in the state vector of a KF. This allows the interval in which the B-spline function approximates data points to be adapted during run-time.
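    A stripped-down version of the underlying idea, a Kalman filter whose state vector holds the B-spline coefficients and whose measurement row is the basis evaluated at each incoming sample, can be sketched as follows. The shift operation that makes RBA unbounded is omitted, and all data, knots, and noise levels are illustrative assumptions:

    ```python
    import numpy as np
    from scipy.interpolate import BSpline

    # Cubic B-spline basis on [0, 1] with 7 interior knots.
    k = 3
    t = np.concatenate((np.zeros(k + 1), np.linspace(0, 1, 9)[1:-1], np.ones(k + 1)))
    n = len(t) - k - 1
    basis = [BSpline(t, np.eye(n)[j], k) for j in range(n)]

    # Kalman filter with the coefficients as a static state:
    # c_t = c_{t-1},  y_t = h(x_t)^T c_t + noise.
    c_est = np.zeros(n)
    P = 10.0 * np.eye(n)          # prior coefficient covariance
    r = 0.05 ** 2                 # measurement noise variance

    rng = np.random.default_rng(2)
    for _ in range(400):
        xt = rng.uniform(0.0, 1.0)
        yt = np.sin(2 * np.pi * xt) + 0.05 * rng.standard_normal()
        h = np.array([float(f(xt)) for f in basis])
        s = float(h @ P @ h) + r              # innovation variance
        gain = (P @ h) / s
        c_est = c_est + gain * (yt - h @ c_est)
        P = P - np.outer(gain, h @ P)

    fit = BSpline(t, c_est, k)
    ```

    RBA's contribution is precisely what this sketch lacks: shifting coefficients out of (and into) the state so the knot window can follow the data stream.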

  16. Direct Numerical Simulation of Incompressible Pipe Flow Using a B-Spline Spectral Method

    Science.gov (United States)

    Loulou, Patrick; Moser, Robert D.; Mansour, Nagi N.; Cantwell, Brian J.

    1997-01-01

    A numerical method based on b-spline polynomials was developed to study incompressible flows in cylindrical geometries. A b-spline method has the advantages of possessing spectral accuracy and the flexibility of standard finite element methods. Using this method it was possible to ensure regularity of the solution near the origin, i.e. smoothness and boundedness. Because b-splines have compact support, it is also possible to remove b-splines near the center to alleviate the constraint placed on the time step by an overly fine grid. Using the natural periodicity in the azimuthal direction and approximating the streamwise direction as periodic, so-called time evolving flow, greatly reduced the cost and complexity of the computations. A direct numerical simulation of pipe flow was carried out using the method described above at a Reynolds number of 5600 based on diameter and bulk velocity. General knowledge of pipe flow and the availability of experimental measurements make pipe flow the ideal test case with which to validate the numerical method. Results indicated that high flatness levels of the radial component of velocity in the near wall region are physical; regions of high radial velocity were detected and appear to be related to high speed streaks in the boundary layer. Budgets of Reynolds stress transport equations showed close similarity with those of channel flow. However contrary to channel flow, the log layer of pipe flow is not homogeneous for the present Reynolds number. A topological method based on a classification of the invariants of the velocity gradient tensor was used. Plotting iso-surfaces of the discriminant of the invariants proved to be a good method for identifying vortical eddies in the flow field.

  17. Gaussian quadrature for splines via homotopy continuation: Rules for C2 cubic splines

    KAUST Repository

    Barton, Michael

    2015-10-24

    We introduce a new concept for generating optimal quadrature rules for splines. To generate an optimal quadrature rule in a given (target) spline space, we build an associated source space with known optimal quadrature and transfer the rule from the source space to the target one, while preserving the number of quadrature points and therefore optimality. The quadrature nodes and weights, considered as a higher-dimensional point, are a zero of a particular system of polynomial equations. As the space is continuously deformed by changing the source knot vector, the quadrature rule gets updated using polynomial homotopy continuation. For example, starting with C1 cubic splines with uniform knot sequences, we demonstrate the methodology by deriving the optimal rules for uniform C2 cubic spline spaces, where the rule was only conjectured to date. We validate our algorithm by showing that the resulting quadrature rule is independent of the path chosen between the target and the source knot vectors as well as of the source rule chosen.

  18. MATLAB programs for smoothing X-ray spectra; Programy w jezyku MATLAB do wygladzania widm promieniowania X

    Energy Technology Data Exchange (ETDEWEB)

    Antoniak, W.

    1997-12-31

    Two MATLAB 4.0 programs for smoothing X-ray spectra are presented: wekskl.m, using polynomial regression splines, and wekfft.m, using the fast Fourier transform. The wekskl.m program accomplishes smoothing for optimal distances between the knots, whereas wekfft.m uses an optimal spectral window width. The smoothed spectra are available in the form of vectors and are presented in graphical form as well. (author). 9 refs, 12 figs.
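    The FFT-based smoothing in a program like wekfft.m amounts to low-pass filtering in the Fourier domain. A minimal sketch, with a fixed cutoff instead of the program's optimal spectral window width and a synthetic spectral line in place of real X-ray data:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    x = np.linspace(0.0, 1.0, 512)
    clean = np.exp(-0.5 * ((x - 0.5) / 0.05) ** 2)   # one synthetic spectral line
    noisy = clean + 0.1 * rng.standard_normal(x.size)

    # Low-pass smoothing: transform, zero everything above the cutoff, invert.
    F = np.fft.rfft(noisy)
    cutoff = 40                                      # spectral window width (fixed here)
    F[cutoff:] = 0.0
    smoothed = np.fft.irfft(F, n=x.size)
    ```

    Choosing the cutoff optimally, as the original program does, balances residual noise against distortion of narrow peaks.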

  19. Input point distribution for regular stem form spline modeling

    Directory of Open Access Journals (Sweden)

    Karel Kuželka

    2015-04-01

    Aim of study: To optimize an interpolation method and distribution of measured diameters to represent the regular stem form of coniferous trees using a set of discrete points. Area of study: Central-Bohemian highlands, Czech Republic; a region that represents average stand conditions of production forests of Norway spruce (Picea abies [L.] Karst.) in central Europe. Material and methods: The accuracy of stem curves modeled using natural cubic splines from a set of measured diameters was evaluated for 85 closely measured stems of Norway spruce using five statistical indicators and compared to the accuracy of three additional models based on different spline types selected for their ability to represent stem curves. The optimal positions to measure diameters were identified using an aggregate objective function approach. Main results: The optimal positions of the input points vary depending on the properties of each spline type. If the optimal input points for each spline are used, then all spline types are able to give reasonable results with higher numbers of input points. The commonly used natural cubic spline was outperformed by other spline types. The lowest errors occur when interpolating the points using the Catmull-Rom spline, which gives accurate and unbiased volume estimates, even with only five input points. Research highlights: The study contributes to more accurate representation of stem form and therefore more accurate estimation of stem volume using data obtained from terrestrial imagery or other close-range remote sensing methods.
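    To illustrate the modeling task (not the study's evaluation), the sketch below interpolates a hypothetical set of height-diameter measurements with a natural cubic spline and integrates the resulting profile for a stem volume estimate; the Catmull-Rom interpolation the study found best is not built into SciPy and is not shown:

    ```python
    import numpy as np
    from scipy.interpolate import CubicSpline
    from scipy.integrate import quad

    # Hypothetical stem measurements: height above ground (m) vs diameter (cm).
    h = np.array([0.0, 2.0, 4.0, 8.0, 12.0, 16.0])
    d = np.array([40.0, 34.0, 31.0, 26.0, 18.0, 6.0])

    curve = CubicSpline(h, d, bc_type='natural')   # stem profile d(h)

    # Stem volume as a solid of revolution: V = pi * integral of r(h)^2 dh,
    # converting the diameter in cm to a radius in m.
    volume, _ = quad(lambda z: np.pi * (curve(z) / 200.0) ** 2, 0.0, 16.0)
    ```

    With real taper data the choice of spline type and of measurement heights drives the volume error, which is exactly what the study optimizes.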

  20. About some properties of bivariate splines with shape parameters

    Science.gov (United States)

    Caliò, F.; Marchetti, E.

    2017-07-01

    The paper presents and proves geometrical properties of a particular bivariate spline function, built and algorithmically implemented in previous papers. The properties typical of this family of splines impact the field of computer graphics, in particular that of reverse engineering.

  1. Bayesian programming

    CERN Document Server

    Bessiere, Pierre; Ahuactzin, Juan Manuel; Mekhnacha, Kamel

    2013-01-01

    Probability as an Alternative to Boolean LogicWhile logic is the mathematical foundation of rational reasoning and the fundamental principle of computing, it is restricted to problems where information is both complete and certain. However, many real-world problems, from financial investments to email filtering, are incomplete or uncertain in nature. Probability theory and Bayesian computing together provide an alternative framework to deal with incomplete and uncertain data. Decision-Making Tools and Methods for Incomplete and Uncertain DataEmphasizing probability as an alternative to Boolean

  2. Error bounds for two even degree tridiagonal splines

    Directory of Open Access Journals (Sweden)

    Gary W. Howell

    1990-01-01

    We study a C(1) parabolic and a C(2) quartic spline which are determined by solution of a tridiagonal matrix and which interpolate subinterval midpoints. In contrast to the cubic C(2) spline, both of these algorithms converge to any continuous function as the length of the largest subinterval goes to zero, regardless of “mesh ratios”. For parabolic splines, this convergence property was discovered by Marsden [1974]. The quartic spline introduced here achieves this convergence by choosing the second derivative zero at the breakpoints. Many of Marsden's bounds are substantially tightened here. We show that for functions with two or fewer continuous derivatives the quartic spline gives yet better bounds. Several of the bounds given here are optimal.

  3. The smoothing and fast Fourier transformation of experimental X-ray and neutron data from amorphous materials

    International Nuclear Information System (INIS)

    Dixon, M.; Wright, A.C.; Hutchinson, P.

    1977-01-01

    The application of fast Fourier transformation techniques to the analysis of experimental X-ray and neutron diffraction patterns from amorphous materials is discussed and compared with conventional techniques using Filon's quadrature. The fast Fourier transform package described also includes cubic spline smoothing and has been extensively tested, using model data to which statistical errors have been added by means of a pseudo-random number generator with Gaussian shape. Neither cubic spline nor hand smoothing has much effect on the resulting transform since the noise removed is of too high a frequency. (Auth.)

  4. Solving Dym equation using quartic B-spline and quartic trigonometric B-spline collocation methods

    Science.gov (United States)

    Anuar, Hanis Safirah Saiful; Mafazi, Nur Hidayah; Hamid, Nur Nadiah Abd; Majid, Ahmad Abd.; Azmi, Amirah

    2017-08-01

    The nonlinear Dym equation is solved numerically using the quartic B-spline (QuBS) and quartic trigonometric B-spline (QuTBS) collocation methods. The QuBS and QuTBS are utilized as interpolating functions in the spatial dimension while the finite difference method (FDM) is applied to discretize the temporal space with the help of a theta-weighted scheme. The nonlinear term in the Dym equation is linearized using Taylor's expansion. Two schemes are performed on both methods, namely Crank-Nicolson and fully implicit. Applying the von Neumann stability analysis, these schemes are found to be conditionally stable. Several numerical examples of different forms are discussed and compared in terms of errors with exact solutions and results from the FDM.

  5. Non-stationary hydrologic frequency analysis using B-spline quantile regression

    Science.gov (United States)

    Nasri, B.; Bouezmarni, T.; St-Hilaire, A.; Ouarda, T. B. M. J.

    2017-11-01

    Hydrologic frequency analysis is commonly used by engineers and hydrologists to provide the basic information for planning, design and management of hydraulic and water resources systems under the assumption of stationarity. However, with increasing evidence of climate change, the assumption of stationarity, which is a prerequisite for traditional frequency analysis, may no longer hold, and hence the results of conventional analysis become questionable. In this study, we consider a framework for frequency analysis of extremes based on B-spline quantile regression, which allows modelling data in the presence of non-stationarity and/or dependence on covariates, with linear or non-linear dependence. A Markov Chain Monte Carlo (MCMC) algorithm was used to estimate quantiles and their posterior distributions. A coefficient of determination and the Bayesian information criterion (BIC) for quantile regression are used to select the best model, i.e. for each quantile we choose the degree and number of knots of the adequate B-spline quantile regression model. The method is applied to annual maximum and minimum streamflow records in Ontario, Canada. Climate indices are considered to describe the non-stationarity in the variable of interest and to estimate the quantiles in this case. The results show large differences between the non-stationary quantiles and their stationary equivalents for annual maximum and minimum discharges with high annual non-exceedance probabilities.
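The paper's Bayesian (MCMC) estimation is not reproduced here; as a minimal frequentist sketch of B-spline quantile regression, the pinball-loss fit for a single quantile can be posed as a linear program over a B-spline basis expansion (all names and data below are illustrative assumptions):

```python
import numpy as np
from scipy.interpolate import BSpline
from scipy.optimize import linprog

def bspline_design(x, knots, k):
    """Design matrix whose columns are the B-spline basis functions."""
    n_basis = len(knots) - k - 1
    return np.column_stack(
        [BSpline(knots, np.eye(n_basis)[i], k)(x) for i in range(n_basis)]
    )

def quantile_spline_fit(x, y, tau, knots, k=3):
    """Fit a degree-k B-spline to the tau-th conditional quantile.

    Minimizes the pinball loss sum_i rho_tau(y_i - B(x_i) beta) as an LP:
    variables (beta, u, v) with y = X beta + u - v and u, v >= 0.
    """
    X = bspline_design(x, knots, k)
    m, n = X.shape
    c = np.concatenate([np.zeros(n), np.full(m, tau), np.full(m, 1 - tau)])
    A_eq = np.hstack([X, np.eye(m), -np.eye(m)])
    bounds = [(None, None)] * n + [(0, None)] * (2 * m)
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=bounds, method="highs")
    return BSpline(knots, res.x[:n], k)

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0.0, 1.0, 200))
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.1, 200)
interior = np.linspace(0.0, 1.0, 6)[1:-1]
knots = np.concatenate([[0, 0, 0, 0], interior, [1, 1, 1, 1]])   # clamped, cubic
median_fit = quantile_spline_fit(x, y, tau=0.5, knots=knots)
```

Repeating the fit over a grid of quantile levels, degrees and knot counts, and scoring each by an information criterion, mirrors the model-selection step described in the abstract.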

  6. Shape Designing of Engineering Images Using Rational Spline Interpolation

    Directory of Open Access Journals (Sweden)

    Muhammad Sarfraz

    2015-01-01

    Full Text Available In modern days, engineers encounter a remarkable range of different engineering problems like the study of structures, structure properties, and the design of different engineering images, for example, automotive images, aerospace industrial images, architectural designs, shipbuilding, and so forth. This paper proposes an interactive curve scheme for designing engineering images. The proposed scheme furnishes object design not just in the area of engineering, but is equally useful for other areas including image processing (IP), Computer Graphics (CG), Computer-Aided Engineering (CAE), Computer-Aided Manufacturing (CAM), and Computer-Aided Design (CAD). As a method, a piecewise rational cubic spline interpolant with four shape parameters has been proposed. The method provides effective results, together with the effects of derivatives and shape parameters on the shape of the curves, in a local and global manner. The spline method, due to its most generalized description, recovers various existing rational spline methods and serves as an alternative to various other methods including v-splines, gamma splines, weighted splines, and beta splines.

  7. Detrending of non-stationary noise data by spline techniques

    International Nuclear Information System (INIS)

    Behringer, K.

    1989-11-01

    An off-line method for detrending non-stationary noise data has been investigated. It uses a least squares spline approximation of the noise data with equally spaced breakpoints. Subtraction of the spline approximation from the noise signal at each data point gives a residual noise signal. The method acts as a high-pass filter with very sharp frequency cutoff. The cutoff frequency is determined by the breakpoint distance. The steepness of the cutoff is controlled by the spline order. (author) 12 figs., 1 tab., 5 refs
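A minimal sketch of the detrending scheme, assuming scipy's least-squares spline with equally spaced interior breakpoints: subtracting the spline from the signal acts as a high-pass filter whose cutoff frequency is set by the breakpoint distance.

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

rng = np.random.default_rng(2)
t = np.linspace(0.0, 10.0, 2000)
trend = np.sin(2 * np.pi * 0.1 * t)         # slow drift (period 10, below cutoff)
fast = 0.2 * np.sin(2 * np.pi * 2.0 * t)    # component well above the cutoff
data = trend + fast

# Equally spaced breakpoints: the breakpoint distance (here 1.0) sets the
# cutoff frequency; the spline order controls the steepness of the cutoff.
breakpoints = np.arange(1.0, 10.0, 1.0)
spl = LSQUnivariateSpline(t, data, breakpoints, k=3)
residual = data - spl(t)                    # high-pass filtered noise signal
```

The cubic spline tracks the slow drift almost perfectly but cannot follow the 2 Hz component, so the residual is essentially the fast component alone.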

  8. Identification of Hammerstein models with cubic spline nonlinearities.

    Science.gov (United States)

    Dempsey, Erika J; Westwick, David T

    2004-02-01

    This paper considers the use of cubic splines, instead of polynomials, to represent the static nonlinearities in block structured models. It introduces a system identification algorithm for the Hammerstein structure, a static nonlinearity followed by a linear filter, where cubic splines represent the static nonlinearity and the linear dynamics are modeled using a finite impulse response filter. The algorithm uses a separable least squares Levenberg-Marquardt optimization to identify Hammerstein cascades whose nonlinearities are modeled by either cubic splines or polynomials. These algorithms are compared in simulation, where the effects of variations in the input spectrum and distribution, and those of the measurement noise are examined. The two algorithms are used to fit Hammerstein models to stretch reflex electromyogram (EMG) data recorded from a spinal cord injured patient. The model with the cubic spline nonlinearity provides more accurate predictions of the reflex EMG than the polynomial based model, even in novel data.
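The separable least squares Levenberg-Marquardt optimization with spline nonlinearities is not reproduced here; as a rough sketch of block-structured Hammerstein identification, an alternating least squares loop with a polynomial (rather than cubic spline) nonlinearity illustrates the idea on synthetic, noiseless data. All system values below are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(3)

# True Hammerstein system: static polynomial nonlinearity, then FIR filter.
u = rng.normal(0.0, 1.0, 1000)
f_u = u + 0.5 * u**2                        # static nonlinearity
h_true = np.array([1.0, 0.5, 0.25])         # FIR impulse response
y = np.convolve(f_u, h_true)[: len(u)]      # measured output

U = np.column_stack([u, u**2])              # polynomial basis of the input

# Alternating least squares: the model y = H(f(u)) is linear in the FIR
# taps for fixed nonlinearity coefficients, and vice versa.  (There is a
# gain ambiguity between the blocks, so we judge fit by output prediction.)
a = np.array([1.0, 0.0])                    # initial nonlinearity: identity
for _ in range(10):
    w = U @ a
    W = np.column_stack([np.convolve(w, np.eye(3)[j])[: len(u)] for j in range(3)])
    h, *_ = np.linalg.lstsq(W, y, rcond=None)
    V = np.column_stack(
        [np.convolve(U[:, p], h)[: len(u)] for p in range(U.shape[1])]
    )
    a, *_ = np.linalg.lstsq(V, y, rcond=None)

y_hat = np.convolve(U @ a, h)[: len(u)]     # predicted output
```

With noiseless data and a model class containing the truth, the alternating fit drives the normalized prediction error close to zero within a few iterations.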

  9. Segmented Regression Based on B-Splines with Solved Examples

    Directory of Open Access Journals (Sweden)

    Miloš Kaňka

    2015-12-01

    Full Text Available The subject of the paper is segmented linear, quadratic, and cubic regression based on B-spline basis functions. In this article we expose the formulas for the computation of B-splines of order one, two, and three that are needed to construct linear, quadratic, and cubic regression. We list some interesting properties of these functions. For a clearer understanding we give the solutions of a couple of elementary exercises regarding these functions.
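The basis-function formulas the article exposes can be evaluated with the Cox-de Boor recursion; a sketch (assuming "order" here corresponds to polynomial degree, so degree 1, 2, 3 give linear, quadratic, cubic segmented regression):

```python
import numpy as np

def bspline_basis(i, d, knots, x):
    """Cox-de Boor recursion for the i-th B-spline of degree d."""
    if d == 0:
        return 1.0 if knots[i] <= x < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + d] > knots[i]:
        left = (x - knots[i]) / (knots[i + d] - knots[i]) \
            * bspline_basis(i, d - 1, knots, x)
    if knots[i + d + 1] > knots[i + 1]:
        right = (knots[i + d + 1] - x) / (knots[i + d + 1] - knots[i + 1]) \
            * bspline_basis(i + 1, d - 1, knots, x)
    return left + right

# Clamped (open uniform) knot vector on [0, 3] for cubic basis functions.
knots = [0, 0, 0, 0, 1, 2, 3, 3, 3, 3]
degree = 3
n_basis = len(knots) - degree - 1           # 6 basis functions

def design_row(x):
    """One row of the segmented-regression design matrix at point x."""
    return [bspline_basis(i, degree, knots, x) for i in range(n_basis)]
```

Two of the "interesting properties" mentioned are immediate from the recursion: the basis functions are non-negative and sum to one at every interior point, which keeps the segmented regression fit well conditioned.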

  10. Spline Smoothing for Estimation of Circular Probability Distributions via Spectral Isomorphism and its Spatial Adaptation

    OpenAIRE

    Basu, Kinjal; Sengupta, Debapriya

    2012-01-01

    Consider the problem when $X_1,X_2,..., X_n$ are distributed on a circle following an unknown distribution $F$ on $S^1$. In this article we consider the fully general set-up where the density can have local features such as discontinuities and edges. Furthermore, there can be outlying data following some discrete distributions. Traditional kernel density estimation methods fail to identify such local features in the data. Here we devise a non-parametric density estimate on ...

  11. Growth curve analysis for plasma profiles using smoothing splines. Final report, January 1993--January 1995

    International Nuclear Information System (INIS)

    Imre, K.

    1995-07-01

    In this project, we parameterize the shape and magnitude of the temperature and density profiles on JET and the temperature profiles on TFTR. The key control variables for the profiles were tabulated and the response functions were estimated. A sophisticated statistical analysis code was developed to fit the plasma profiles. Our analysis indicates that the JET density shape depends primarily on n̄/B_t for Ohmic heating, n̄ for L-mode and I_p for H-mode. The temperature profiles for JET are mainly determined by q_95 for the case of Ohmic heating, and by B_t and P/n̄ for the L-mode. For the H-mode the shape depends on the type of auxiliary heating, Z_eff, N, n̄, q_95, and P

  12. Estimation of Posterior Probabilities Using Multivariate Smoothing Splines and Generalized Cross-Validation.

    Science.gov (United States)

    1983-09-01

    (The scanned abstract for this report is garbled beyond recovery; only the acknowledgments are legible: support from the Consejo Nacional de Ciencia y Tecnologia - Mexico, ONR Contract No. N00014-77-C-0675, ARO Contract No. DAAG29-80-K-0042, and the Department of Statistics.)

  13. Introduction to Bayesian statistics

    CERN Document Server

    Bolstad, William M

    2017-01-01

    There is a strong upsurge in the use of Bayesian methods in applied statistical analysis, yet most introductory statistics texts only present frequentist methods. Bayesian statistics has many important advantages that students should learn about if they are going into fields where statistics will be used. In this Third Edition, four newly-added chapters address topics that reflect the rapid advances in the field of Bayesian statistics. The author continues to provide a Bayesian treatment of introductory statistical topics, such as scientific data gathering, discrete random variables, robust Bayesian methods, and Bayesian approaches to inference for discrete random variables, binomial proportion, Poisson, normal mean, and simple linear regression. In addition, newly-developing topics in the field are presented in four new chapters: Bayesian inference with unknown mean and variance; Bayesian inference for the Multivariate Normal mean vector; Bayesian inference for the Multiple Linear Regression Model; and Computati...

  14. A smooth local path planning algorithm based on modified visibility graph

    Science.gov (United States)

    Lv, Taizhi; Feng, Maoyan

    2017-07-01

    Path planning is an essential and inevitable problem in robotics. Trapping in local minima and discontinuities often exist in local path planning. To overcome these drawbacks, this paper presents a smooth path planning algorithm based on a modified visibility graph. This algorithm consists of three steps: (1) polygons are generated from detected obstacles; (2) a collision-free path is found by simultaneous visibility graph construction and path search by A∗ (SVGA); (3) the path is smoothed by B-spline curves and particle swarm optimization (PSO). Simulation results show the effectiveness of this algorithm: a smooth path can be found quickly.

  15. Automatic smoothing parameter selection in GAMLSS with an application to centile estimation.

    Science.gov (United States)

    Rigby, Robert A; Stasinopoulos, Dimitrios M

    2014-08-01

    A method for automatic selection of the smoothing parameters in a generalised additive model for location, scale and shape (GAMLSS) model is introduced. The method uses a P-spline representation of the smoothing terms to express them as random effect terms, with an internal (or local) maximum likelihood estimation on the predictor scale of each distribution parameter to estimate its smoothing parameters. This provides a fast method for estimating multiple smoothing parameters. The method is applied to centile estimation, where all four parameters of a distribution for the response variable are modelled as smooth functions of a transformed explanatory variable x. This allows smooth modelling of the location, scale, skewness and kurtosis parameters of the response variable distribution as functions of x. © The Author(s) 2013 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.

  16. B-spline solver for one-electron Schrödinger equation

    Science.gov (United States)

    Romanowski, Zbigniew

    2011-11-01

    A numerical algorithm for solving the one-electron Schrödinger equation is presented. The algorithm is based on the Finite Element method, and the basis functions are tensor products of univariate B-splines. The application of cubic or higher order B-splines guarantees that the computed solution belongs to a space of continuous, once-differentiable functions, which is a desirable property in the context of the Kohn-Sham equation from Density Functional Theory with the pseudopotential approximation. The theoretical background of the numerical algorithm is presented, and additionally, the implementation on parallel computers with distributed memory is described. The current implementation of the algorithm uses the MPI, HYPRE and ParMETIS libraries to distribute matrices on processing units. Additionally, the LOBPCG algorithm from the HYPRE library is used to solve the generalized algebraic eigenvalue problem. The proposed algorithm works for any smooth interaction potential, where the domain of the problem is a bounded subset of ℝ³. The accuracy of the algorithm is demonstrated for a selected interaction potential. In its current stage, the algorithm can be applied to solve the linearized Kohn-Sham equation for molecular systems.

  17. Bayesian artificial intelligence

    CERN Document Server

    Korb, Kevin B

    2003-01-01

    As the power of Bayesian techniques has become more fully realized, the field of artificial intelligence has embraced Bayesian methodology and integrated it to the point where an introduction to Bayesian techniques is now a core course in many computer science programs. Unlike other books on the subject, Bayesian Artificial Intelligence keeps mathematical detail to a minimum and covers a broad range of topics. The authors integrate Bayesian network technology and Bayesian network learning, and apply both to knowledge engineering. They emphasize understanding and intuition but also provide the algorithms and technical background needed for applications. Software, exercises, and solutions are available on the authors' website.

  18. Bayesian artificial intelligence

    CERN Document Server

    Korb, Kevin B

    2010-01-01

    Updated and expanded, Bayesian Artificial Intelligence, Second Edition provides a practical and accessible introduction to the main concepts, foundation, and applications of Bayesian networks. It focuses on both the causal discovery of networks and Bayesian inference procedures. Adopting a causal interpretation of Bayesian networks, the authors discuss the use of Bayesian networks for causal modeling. They also draw on their own applied research to illustrate various applications of the technology.New to the Second EditionNew chapter on Bayesian network classifiersNew section on object-oriente

  19. Determination of transients and compensation capacities of breath-by-breath analysis by cubic splines.

    Science.gov (United States)

    von Golitschek, M; Schardt, F W

    1997-07-01

    The development of breath-by-breath analysis during ergospirometry improved the precision of the measurement. However, the abundance of data yields oscillating curves which make it very difficult to detect exactly the breakpoints, maxima and minima. By using cubic splines one is able to smooth the curve of the primary data without falsifying or distorting it. A breakpoint marks the beginning of hyperventilation with a nonlinear increase of VE or the beginning of an excess value of CO2. Furthermore, the amount of CO2 required to compensate for the acid-base balance, as well as the oxygen debt in the recovery phase, can be calculated from the area under the curve.
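A rough sketch of the idea with a synthetic curve in place of real ergospirometry data: a smoothing spline of degree four is fitted so that its derivative (a cubic spline, for which scipy can locate roots) gives the extrema of the smoothed curve without the oscillation of the raw breath-by-breath samples.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(4)
t = np.linspace(0.0, 10.0, 300)                  # time (arbitrary units)
vco2 = 1.0 - 0.04 * (t - 6.0) ** 2               # idealized curve, peak at t = 6
noisy = vco2 + rng.normal(0.0, 0.05, t.size)     # breath-by-breath scatter

# Degree-4 smoothing spline: its derivative is cubic, and scipy's
# roots() method is implemented for cubic splines.
spl = UnivariateSpline(t, noisy, k=4, s=t.size * 0.05**2)
extrema = spl.derivative().roots()               # candidate maxima/minima
peak = extrema[np.argmax(spl(extrema))]          # location of the maximum
```

On the raw samples the maximum would jump around with the noise; on the smoothed curve it lands close to the true peak location.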

  20. Local Convexity-Preserving C 2 Rational Cubic Spline for Convex Data

    Science.gov (United States)

    Abd Majid, Ahmad; Ali, Jamaludin Md.

    2014-01-01

    We present a smooth and visually pleasant display of 2D convex data, which is a contribution towards improving existing methods. This improvement can be used to obtain more accurate results. An attempt has been made to develop a local convexity-preserving interpolant for convex data using a C 2 rational cubic spline. It involves three families of shape parameters in its representation. Data-dependent sufficient constraints are imposed on a single shape parameter to preserve the inherent shape features of the data. The remaining two shape parameters are used for the modification of the convex curve to obtain a visually pleasing curve according to industrial demand. The scheme is tested through several numerical examples, showing that it is local, computationally economical, and visually pleasing. PMID:24757421

  1. A spline-based regression parameter set for creating customized DARTEL MRI brain templates from infancy to old age

    Directory of Open Access Journals (Sweden)

    Marko Wilke

    2018-02-01

    Full Text Available This dataset contains the regression parameters derived by analyzing segmented brain MRI images (gray matter and white matter) from a large population of healthy subjects, using a multivariate adaptive regression splines approach. A total of 1919 MRI datasets ranging in age from 1–75 years from four publicly available datasets (NIH, C-MIND, fCONN, and IXI) were segmented using the CAT12 segmentation framework, writing out gray matter and white matter images normalized using an affine-only spatial normalization approach. These images were then subjected to a six-step DARTEL procedure, employing an iterative non-linear registration approach and yielding increasingly crisp intermediate images. The resulting six datasets per tissue class were then analyzed using multivariate adaptive regression splines, using the CerebroMatic toolbox. This approach allows for flexibly modelling smoothly varying trajectories while taking into account demographic (age, gender) as well as technical (field strength, data quality) predictors. The resulting regression parameters described here can be used to generate matched DARTEL or SHOOT templates for a given population under study, from infancy to old age. The dataset and the algorithm used to generate it are publicly available at https://irc.cchmc.org/software/cerebromatic.php. Keywords: MRI template creation, Multivariate adaptive regression splines, DARTEL, Structural MRI

  2. Smooth extrapolation of unknown anatomy via statistical shape models

    Science.gov (United States)

    Grupp, R. B.; Chiang, H.; Otake, Y.; Murphy, R. J.; Gordon, C. R.; Armand, M.; Taylor, R. H.

    2015-03-01

    Several methods to perform extrapolation of unknown anatomy were evaluated. The primary application is to enhance surgical procedures that may use partial medical images or medical images of incomplete anatomy. Le Fort-based, face-jaw-teeth transplant is one such procedure. From CT data of 36 skulls and 21 mandibles separate Statistical Shape Models of the anatomical surfaces were created. Using the Statistical Shape Models, incomplete surfaces were projected to obtain complete surface estimates. The surface estimates exhibit non-zero error in regions where the true surface is known; it is desirable to keep the true surface and seamlessly merge the estimated unknown surface. Existing extrapolation techniques produce non-smooth transitions from the true surface to the estimated surface, resulting in additional error and a less aesthetically pleasing result. The three extrapolation techniques evaluated were: copying and pasting of the surface estimate (non-smooth baseline), a feathering between the patient surface and surface estimate, and an estimate generated via a Thin Plate Spline trained from displacements between the surface estimate and corresponding vertices of the known patient surface. Feathering and Thin Plate Spline approaches both yielded smooth transitions. However, feathering corrupted known vertex values. Leave-one-out analyses were conducted, with 5% to 50% of known anatomy removed from the left-out patient and estimated via the proposed approaches. The Thin Plate Spline approach yielded smaller errors than the other two approaches, with an average vertex error improvement of 1.46 mm and 1.38 mm for the skull and mandible respectively, over the baseline approach.
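A rough sketch of the Thin Plate Spline approach, assuming scipy's RBFInterpolator as the TPS engine and a 2D toy surface in place of mesh vertices (all data below are invented for illustration): displacements known on the patient surface are interpolated so that the estimated region blends smoothly, without corrupting known vertex values.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(5)

# Vertices of the "known" patient surface region (2D for illustration)
# and the displacement of each vertex from the statistical-shape estimate.
known_pts = rng.uniform(0.0, 1.0, (100, 2))
displacement = np.sin(known_pts[:, 0]) + known_pts[:, 1] ** 2

# Thin plate spline trained on the known displacements; with zero
# smoothing it reproduces them exactly, so known vertices are untouched.
tps = RBFInterpolator(known_pts, displacement, kernel="thin_plate_spline")

# Evaluating the TPS on the estimated (unknown) region warps the shape
# model estimate smoothly toward the patient surface.
unknown_pts = rng.uniform(0.0, 1.0, (20, 2))
extrapolated = tps(unknown_pts)
```

This is the property the abstract contrasts with feathering: the TPS transition is smooth while the training (known) values are preserved exactly.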

  3. Contour propagation using non-uniform cubic B-splines for lung tumor delineation in 4D-CT.

    Science.gov (United States)

    Liu, Yongchuan; Jin, Renchao; Chen, Mi; Song, Enmin; Xu, Xiangyang; Zhang, Sheng; Hung, Chih-Cheng

    2016-12-01

    Accurate target delineation is a critical step in radiotherapy. In this study, a robust contour propagation method is proposed to help physicians delineate lung tumors in four-dimensional computer tomography (4D-CT) images efficiently and accurately. The proposed method starts with manually delineated contours on the reference phase. Each contour is fitted by a non-uniform cubic B-spline curve, and its deformation on the target phase is achieved by moving its control vertexes such that the intensity similarity between the two contours is maximized. Since a contour is usually the boundary of a lesion or tissue which may deform quite differently from the tissues outside the boundary, the proposed method treats each contour as a deformable entity, a non-uniform cubic B-spline curve, and focuses on the registration of the contour entity instead of the entire image. This avoids the deformation of the contour being smoothed by its surrounding tissues, and greatly reduces the time consumption while keeping the accuracy of the contour propagation. Eighteen 4D-CT cases with 444 gross tumor volume (GTV) contours manually delineated slice by slice on the maximal inhale and exhale phases are used to verify the proposed method. The Jaccard similarity coefficient (JSC) between the propagated GTV and the manually delineated GTV is 0.885 ± 0.026, and the Hausdorff distance (HD) is [Formula: see text] mm. In addition, the time for propagating the GTV to all the phases is 3.67 ± 3.41 minutes. The results are better than the fast adaptive stochastic gradient descent (FASGD) B-spline method, the 3D+t B-spline method and the diffeomorphic Demons method. The proposed method is useful to help physicians delineate target volumes efficiently and accurately.

  4. Smoothed square well potential

    Energy Technology Data Exchange (ETDEWEB)

    Salamon, P. [Institute for Nuclear Research Hungarian Academy of Sciences (ATOMKI), Debrecen (Hungary); Vertse, T. [Institute for Nuclear Research Hungarian Academy of Sciences (ATOMKI), Debrecen (Hungary); University of Debrecen, Faculty of Informatics, Debrecen (Hungary)

    2017-07-15

    The classical square well potential is smoothed with a finite range smoothing function in order to get a new simple strictly finite range form for the phenomenological nuclear potential. The smoothed square well form becomes exactly zero smoothly at a finite distance, in contrast to the Woods-Saxon form. If the smoothing range is four times the diffuseness of the Woods-Saxon shape both the central and the spin-orbit terms of the Woods-Saxon shape are reproduced reasonably well. The bound single-particle energies in a Woods-Saxon potential can be well reproduced with those in the smoothed square well potential. The same is true for the complex energies of the narrow resonances. (orig.)

  5. Comparing smoothing techniques in Cox models for exposure-response relationships.

    Science.gov (United States)

    Govindarajulu, Usha S; Spiegelman, Donna; Thurston, Sally W; Ganguli, Bhaswati; Eisen, Ellen A

    2007-09-10

    To allow for non-linear exposure-response relationships, we applied flexible non-parametric smoothing techniques to models of time to lung cancer mortality in two occupational cohorts with skewed exposure distributions. We focused on three different smoothing techniques in Cox models: penalized splines, restricted cubic splines, and fractional polynomials. We compared standard software implementations of these three methods based on their visual representation and criterion for model selection. We propose a measure of the difference between a pair of curves based on the area between them, standardized by the average of the areas under the pair of curves. To capture the variation in the difference over the range of exposure, the area between curves was also calculated at percentiles of exposure and expressed as a percentage of the total difference. The dose-response curves from the three methods were similar in both studies over the denser portion of the exposure range, with the difference between curves up to the 50th percentile less than 1 per cent of the total difference. A comparison of inverse variance weighted areas applied to the data set with a more skewed exposure distribution allowed us to estimate area differences with more precision by reducing the proportion attributed to the upper 1 per cent tail region. Overall, the penalized spline and the restricted cubic spline were closer to each other than either was to the fractional polynomial. (c) 2007 John Wiley & Sons, Ltd.
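The proposed curve-comparison measure, the area between a pair of dose-response curves standardized by the average of the areas under them, is straightforward to compute numerically. A minimal sketch with a worked pair of curves, assuming both are sampled on a common exposure grid:

```python
import numpy as np
from scipy.integrate import trapezoid

def standardized_area_difference(x, f, g):
    """Area between two curves f and g (sampled on grid x),
    standardized by the average of the areas under the pair."""
    between = trapezoid(np.abs(f - g), x)
    average = 0.5 * (trapezoid(np.abs(f), x) + trapezoid(np.abs(g), x))
    return between / average

# Worked check: f(x) = x and g(x) = x^2 on [0, 1] give
# area between = 1/6 and average area = (1/2 + 1/3)/2 = 5/12,
# so the standardized difference is (1/6)/(5/12) = 0.4.
x = np.linspace(0.0, 1.0, 10001)
d = standardized_area_difference(x, x, x**2)
```

Restricting the grid to percentile ranges of the exposure distribution, as the abstract describes, just means calling the same function on sub-intervals of x.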

  6. Forecasting Multivariate Road Traffic Flows Using Bayesian Dynamic Graphical Models, Splines and Other Traffic Variables

    NARCIS (Netherlands)

    Anacleto, Osvaldo; Queen, Catriona; Albers, Casper J.

    Traffic flow data are routinely collected for many networks worldwide. These invariably large data sets can be used as part of a traffic management system, for which good traffic flow forecasting models are crucial. The linear multiregression dynamic model (LMDM) has been shown to be promising for

  7. Computing global minimizers to a constrained B-spline image registration problem from optimal l1 perturbations to block match data.

    Science.gov (United States)

    Castillo, Edward; Castillo, Richard; Fuentes, David; Guerrero, Thomas

    2014-04-01

    Block matching is a well-known strategy for estimating corresponding voxel locations between a pair of images according to an image similarity metric. Though robust to issues such as image noise and large magnitude voxel displacements, the estimated point matches are not guaranteed to be spatially accurate. However, the underlying optimization problem solved by the block matching procedure is similar in structure to the class of optimization problem associated with B-spline based registration methods. By exploiting this relationship, the authors derive a numerical method for computing a global minimizer to a constrained B-spline registration problem that incorporates the robustness of block matching with the global smoothness properties inherent to B-spline parameterization. The method reformulates the traditional B-spline registration problem as a basis pursuit problem describing the minimal l1-perturbation to block match pairs required to produce a B-spline fitting error within a given tolerance. The sparsity pattern of the optimal perturbation then defines a voxel point cloud subset on which the B-spline fit is a global minimizer to a constrained variant of the B-spline registration problem. As opposed to traditional B-spline algorithms, the optimization step involving the actual image data is addressed by block matching. The performance of the method is measured in terms of spatial accuracy using ten inhale/exhale thoracic CT image pairs (available for download at www.dir-lab.com) obtained from the COPDgene dataset and corresponding sets of expert-determined landmark point pairs. The results of the validation procedure demonstrate that the method can achieve a high spatial accuracy on a significantly complex image set. The proposed methodology is demonstrated to achieve a high spatial accuracy and is generalizable in that it can employ any displacement field parameterization described as a least squares fit to block match generated estimates.
Thus, the framework

  8. [Multimodal medical image registration using cubic spline interpolation method].

    Science.gov (United States)

    He, Yuanlie; Tian, Lianfang; Chen, Ping; Wang, Lifei; Ye, Guangchun; Mao, Zongyuan

    2007-12-01

    Based on the characteristics of PET-CT multimodal image series, a novel image registration and fusion method is proposed, in which the cubic spline interpolation method is applied to realize the interpolation of the PET-CT image series, registration is then carried out using a mutual information algorithm, and finally an improved principal component analysis method is used for the fusion of the PET-CT multimodal images to enhance the visual effect of the PET image; satisfactory registration and fusion results are thus obtained. The cubic spline interpolation method is used for reconstruction to restore the missing information between image slices, which compensates for the shortcomings of previous registration methods, improves the accuracy of the registration, and makes the fused multimodal images more similar to the real image. Finally, the cubic spline interpolation method has been successfully applied in developing a 3D-CRT (3D Conformal Radiation Therapy) system.
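A minimal sketch of the slice-interpolation step, assuming scipy's CubicSpline applied along the slice axis of a toy volume (not the PET-CT pipeline itself): one cubic spline per in-plane position reconstructs intensities between the acquired slices.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Stack of 2D slices indexed (z, y, x); slice spacing along z is coarse.
z = np.arange(0.0, 5.0)                              # acquired slice positions
yy, xx = np.mgrid[0:8, 0:8]
volume = np.stack([xx + yy + 2.0 * zi for zi in z])  # intensity linear in z

# One cubic spline per (y, x) column, evaluated on a finer z grid to
# restore the information missing between slices before registration.
spline = CubicSpline(z, volume, axis=0)
z_fine = np.linspace(0.0, 4.0, 17)
volume_fine = spline(z_fine)
```

Because the toy intensities vary linearly in z, the interpolated mid-slices can be checked exactly; on real data the spline supplies a smooth estimate instead.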

  9. A cubic spline approximation for problems in fluid mechanics

    Science.gov (United States)

    Rubin, S. G.; Graves, R. A., Jr.

    1975-01-01

    A cubic spline approximation is presented which is suited for many fluid-mechanics problems. This procedure provides a high degree of accuracy, even with a nonuniform mesh, and leads to an accurate treatment of derivative boundary conditions. The truncation errors and stability limitations of several implicit and explicit integration schemes are presented. For two-dimensional flows, a spline-alternating-direction-implicit method is evaluated. The spline procedure is assessed, and results are presented for the one-dimensional nonlinear Burgers' equation, as well as the two-dimensional diffusion equation and the vorticity-stream function system describing the viscous flow in a driven cavity. Comparisons are made with analytic solutions for the first two problems and with finite-difference calculations for the cavity flow.

  10. Viscous flow solutions with a cubic spline approximation

    Science.gov (United States)

    Rubin, S. G.; Graves, R. A., Jr.

    1975-01-01

    A cubic spline approximation is used for the solution of several problems in fluid mechanics. This procedure provides a high degree of accuracy even with a nonuniform mesh, and leads to a more accurate treatment of derivative boundary conditions. The truncation errors and stability limitations of several typical integration schemes are presented. For two-dimensional flows a spline-alternating-direction-implicit (SADI) method is evaluated. The spline procedure is assessed and results are presented for the one-dimensional nonlinear Burgers' equation, as well as the two-dimensional diffusion equation and the vorticity-stream function system describing the viscous flow in a driven cavity. Comparisons are made with analytic solutions for the first two problems and with finite-difference calculations for the cavity flow.

  11. Stability of Spline-Type Systems in the Abelian Case

    Directory of Open Access Journals (Sweden)

    Darian Onchis

    2017-12-01

    Full Text Available In this paper, the stability of translation-invariant spaces of distributions over locally compact groups is stated as boundedness of synthesis and projection operators. At first, a characterization of the stability of spline-type spaces is given, in the standard sense of the stability for shift-invariant spaces, that is, linear independence characterizes lower boundedness of the synthesis operator in Banach spaces of distributions. The constructive nature of the proof for Theorem 2 enabled us to constructively realize the biorthogonal system of a given one. Then, inspired by the multiresolution analysis and the Lax equivalence for general discretization schemes, we approached the stability of a sequence of spline-type spaces as uniform boundedness of projection operators. Through Theorem 3, we characterize stable sequences of stable spline-type spaces.

  12. Stability of Spline-Type Systems in the Abelian Case.

    Science.gov (United States)

    Onchis, Darian; Zappalà, Simone

    2017-12-27

    In this paper, the stability of translation-invariant spaces of distributions over locally compact groups is stated as boundedness of synthesis and projection operators. At first, a characterization of the stability of spline-type spaces is given, in the standard sense of the stability for shift-invariant spaces, that is, linear independence characterizes lower boundedness of the synthesis operator in Banach spaces of distributions. The constructive nature of the proof for Theorem 2 enabled us to constructively realize the biorthogonal system of a given one. Then, inspired by the multiresolution analysis and the Lax equivalence for general discretization schemes, we approached the stability of a sequence of spline-type spaces as uniform boundedness of projection operators. Through Theorem 3, we characterize stable sequences of stable spline-type spaces.

  13. Point based interactive image segmentation using multiquadrics splines

    Science.gov (United States)

    Meena, Sachin; Duraisamy, Prakash; Palniappan, Kannappan; Seetharaman, Guna

    2017-05-01

Multiquadrics (MQ) are radial basis spline functions that can provide an efficient interpolation of data points located in a high-dimensional space. MQ were developed by Hardy for approximating geographical surfaces and terrain modelling. In this paper we frame the task of interactive image segmentation as a semi-supervised interpolation in which an interpolating function, learned from user-provided seed points, is used to predict the labels of unlabeled pixels; the spline function used in the semi-supervised interpolation is MQ. This semi-supervised interpolation framework has a closed-form solution, which, together with the fact that MQ is a radial basis spline function, leads to a very fast interactive image segmentation process. Quantitative and qualitative results on the standard datasets show that MQ outperforms other regression-based methods (GEBS, Ridge Regression and Logistic Regression) and popular methods like Graph Cut, Random Walk and Random Forest.
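The closed-form interpolation step described above can be sketched as plain multiquadric RBF interpolation (a toy illustration, not the authors' segmentation pipeline; the seed coordinates, labels and `eps` shape parameter below are made up):

```python
import numpy as np

def mq_interpolant(seeds, labels, eps=1.0):
    # multiquadric kernel matrix between all pairs of seed points
    d2 = ((seeds[:, None, :] - seeds[None, :, :]) ** 2).sum(-1)
    w = np.linalg.solve(np.sqrt(d2 + eps ** 2), labels)  # closed-form weights
    def predict(pts):
        d2p = ((pts[:, None, :] - seeds[None, :, :]) ** 2).sum(-1)
        return np.sqrt(d2p + eps ** 2) @ w
    return predict

# four hand-picked "seed" pixels with +1 (object) / -1 (background) labels
seeds = np.array([[0.2, 0.2], [0.8, 0.8], [0.2, 0.8], [0.8, 0.2]])
labels = np.array([1.0, 1.0, -1.0, -1.0])
f = mq_interpolant(seeds, labels)
recovered = f(seeds)   # an interpolant reproduces the labels at the seeds
```

Micchelli's theorem guarantees the multiquadric kernel matrix is nonsingular for distinct seed points, so the weight solve always succeeds; unlabeled pixels would then be classified by the sign of the interpolant.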

  14. Bayesian non parametric modelling of Higgs pair production

    Directory of Open Access Journals (Sweden)

    Scarpa Bruno

    2017-01-01

Full Text Available Statistical classification models are commonly used to separate a signal from a background. In this talk we face the problem of isolating the signal of Higgs pair production using the decay channel in which each boson decays into a pair of b-quarks. Typically in this context non-parametric methods are used, such as Random Forests or different types of boosting tools. We remain in the same non-parametric framework, but we propose to face the problem following a Bayesian approach. A Dirichlet process is used as prior for the random effects in a logit model, which is fitted by leveraging the Polya-Gamma data augmentation. Refinements of the model include the insertion into the simple model of P-splines to relate explanatory variables with the response, and the use of Bayesian trees (BART) to describe the atoms in the Dirichlet process.

  15. Smoothness of limit functors

    Indian Academy of Sciences (India)

We show that Xλ is representable by a smooth closed subscheme of X. This result generalizes a theorem of Conrad et al. (Pseudo-reductive groups (2010) Cambridge Univ. Press) where the case when X is an affine smooth group and Gm,S acts as a group of automorphisms of X is considered. It also occurs as a special ...

  16. Smoothness of limit functors

    Indian Academy of Sciences (India)

We show that Xλ is representable by a smooth closed subscheme of X. This result generalizes a theorem of Conrad et al. (Pseudo-reductive groups. (2010) Cambridge Univ. Press) where the case when X is an affine smooth group and Gm,S acts as a group of automorphisms of X is considered. It also occurs as a special case.

  17. Formation of Reflecting Surfaces Based on Spline Methods

    Science.gov (United States)

    Zamyatin, A. V.; Zamyatina, E. A.

    2017-11-01

The article deals with the problem of generating reflecting barrier surfaces by spline methods. Cases of reflection in which a geometric model applies are considered. The surfaces of reflecting barriers are formed so that they contain given points, and the rays reflected at these points hit defined points of a specified surface. The reflecting barrier surface is formed by cubic splines, which enables a comparatively simple implementation of the proposed algorithms as software applications. The algorithms developed in the article can be applied in architectural and construction design for generating reflecting surfaces in optics and acoustics, provided the geometric model of the reflection process is used correctly.

  18. Interpolation in numerical optimization. [by cubic spline generation

    Science.gov (United States)

    Hall, K. R.; Hull, D. G.

    1975-01-01

    The present work discusses the generation of the cubic-spline interpolator in numerical optimization methods which use a variable-step integrator with step size control based on local relative truncation error. An algorithm for generating the cubic spline with successive over-relaxation is presented which represents an improvement over that given by Ralston and Wilf (1967). Rewriting the code reduces the number of N-vectors from eight to one. The algorithm is formulated in such a way that the solution of the linear system set up yields the first derivatives at the nodal points. This method is as accurate as other schemes but requires the minimum amount of storage.
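The formulation described above, a linear system whose solution is the first derivatives at the nodal points, can be sketched for a clamped C2 cubic spline on a uniform grid (a direct tridiagonal solve here, rather than the successive over-relaxation of the paper; the cubic test data are made up):

```python
import numpy as np

def spline_first_derivatives(x, y, d0, dn):
    # C2 continuity of a clamped cubic spline on a uniform grid yields the
    # tridiagonal relations m[i-1] + 4*m[i] + m[i+1] = 3*(y[i+1]-y[i-1])/h
    n = len(x) - 1
    h = x[1] - x[0]
    A = np.zeros((n + 1, n + 1))
    b = np.zeros(n + 1)
    A[0, 0] = A[n, n] = 1.0
    b[0], b[n] = d0, dn                 # prescribed end slopes
    for i in range(1, n):
        A[i, i - 1], A[i, i], A[i, i + 1] = 1.0, 4.0, 1.0
        b[i] = 3.0 * (y[i + 1] - y[i - 1]) / h
    return np.linalg.solve(A, b)        # nodal first derivatives m_i

x = np.linspace(0.0, 1.0, 6)
m = spline_first_derivatives(x, x ** 3, 0.0, 3.0)  # end slopes of x^3
```

On cubic data with exact end slopes the clamped spline reproduces the cubic, so the solved derivatives equal 3x² at every node, which makes a convenient check of the assembly.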

  19. Shape preserving rational cubic spline for positive and convex data

    Directory of Open Access Journals (Sweden)

    Malik Zawwar Hussain

    2011-11-01

Full Text Available In this paper, a shape-preserving C2 rational cubic spline is proposed. The shapes of positive and convex data are the subject of the proposed spline solutions. A C2 rational cubic function with two families of free parameters is introduced to attain C2 positive curves from positive data and C2 convex curves from convex data. Simple data-dependent constraints on the free parameters are derived in the description of the rational cubic function to obtain the desired shape of the data. The rational cubic schemes have unique representations.

  20. Solution of higher order boundary value problems by spline methods

    Science.gov (United States)

    Chaurasia, Anju; Srivastava, P. C.; Gupta, Yogesh

    2017-10-01

Spline solutions of boundary value problems have received much attention in recent years. They have proven to be a powerful tool due to their ease of use and quality of results. This paper surveys methods that approximate the solution of higher-order BVPs using various spline functions. The purpose of this article is to set out the problems, as well as the conclusions reached by numerous authors in the related field. We critically assess many important relevant papers published in reputed journals during the last six years.

  1. Quantum State Smoothing.

    Science.gov (United States)

    Guevara, Ivonne; Wiseman, Howard

    2015-10-30

    Smoothing is an estimation method whereby a classical state (probability distribution for classical variables) at a given time is conditioned on all-time (both earlier and later) observations. Here we define a smoothed quantum state for a partially monitored open quantum system, conditioned on an all-time monitoring-derived record. We calculate the smoothed distribution for a hypothetical unobserved record which, when added to the real record, would complete the monitoring, yielding a pure-state "quantum trajectory." Averaging the pure state over this smoothed distribution yields the (mixed) smoothed quantum state. We study how the choice of actual unraveling affects the purity increase over that of the conventional (filtered) state conditioned only on the past record.

  2. Smooth polyhedral surfaces

    KAUST Repository

    Günther, Felix

    2017-03-15

    Polyhedral surfaces are fundamental objects in architectural geometry and industrial design. Whereas closeness of a given mesh to a smooth reference surface and its suitability for numerical simulations were already studied extensively, the aim of our work is to find and to discuss suitable assessments of smoothness of polyhedral surfaces that only take the geometry of the polyhedral surface itself into account. Motivated by analogies to classical differential geometry, we propose a theory of smoothness of polyhedral surfaces including suitable notions of normal vectors, tangent planes, asymptotic directions, and parabolic curves that are invariant under projective transformations. It is remarkable that seemingly mild conditions significantly limit the shapes of faces of a smooth polyhedral surface. Besides being of theoretical interest, we believe that smoothness of polyhedral surfaces is of interest in the architectural context, where vertices and edges of polyhedral surfaces are highly visible.

  3. Bayesian microsaccade detection

    Science.gov (United States)

    Mihali, Andra; van Opheusden, Bas; Ma, Wei Ji

    2017-01-01

    Microsaccades are high-velocity fixational eye movements, with special roles in perception and cognition. The default microsaccade detection method is to determine when the smoothed eye velocity exceeds a threshold. We have developed a new method, Bayesian microsaccade detection (BMD), which performs inference based on a simple statistical model of eye positions. In this model, a hidden state variable changes between drift and microsaccade states at random times. The eye position is a biased random walk with different velocity distributions for each state. BMD generates samples from the posterior probability distribution over the eye state time series given the eye position time series. Applied to simulated data, BMD recovers the “true” microsaccades with fewer errors than alternative algorithms, especially at high noise. Applied to EyeLink eye tracker data, BMD detects almost all the microsaccades detected by the default method, but also apparent microsaccades embedded in high noise—although these can also be interpreted as false positives. Next we apply the algorithms to data collected with a Dual Purkinje Image eye tracker, whose higher precision justifies defining the inferred microsaccades as ground truth. When we add artificial measurement noise, the inferences of all algorithms degrade; however, at noise levels comparable to EyeLink data, BMD recovers the “true” microsaccades with 54% fewer errors than the default algorithm. Though unsuitable for online detection, BMD has other advantages: It returns probabilities rather than binary judgments, and it can be straightforwardly adapted as the generative model is refined. We make our algorithm available as a software package. PMID:28114483
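The default method that BMD is compared against, a smoothed eye velocity exceeding a threshold, can be sketched as follows (a minimal illustration on synthetic one-dimensional data; the moving-average width and the threshold multiplier are arbitrary choices, not the defaults of any particular eye tracker):

```python
import numpy as np

def velocity_threshold_detect(pos, dt, lam=6.0, width=5):
    # differentiate, smooth with a short moving average, then threshold at
    # lam times a robust (median-based) estimate of the velocity spread
    vel = np.gradient(pos, dt)
    vel = np.convolve(vel, np.ones(width) / width, mode="same")
    spread = np.sqrt(np.median(vel ** 2) - np.median(vel) ** 2)
    return np.abs(vel) > lam * spread

rng = np.random.default_rng(0)
drift = rng.normal(0.0, 0.01, 500).cumsum()              # fixational drift
ramp = np.clip((np.arange(500) - 200) / 10.0, 0.0, 1.0)  # 10-sample step
flags = velocity_threshold_detect(drift + 0.5 * ramp, dt=0.001)
```

The injected high-velocity step around sample 200 is flagged while ordinary drift mostly is not; as the abstract notes, at high noise this velocity-threshold scheme degrades, which is the regime where BMD's model-based inference helps.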

  4. Bayesian Kernel Mixtures for Counts.

    Science.gov (United States)

    Canale, Antonio; Dunson, David B

    2011-12-01

    Although Bayesian nonparametric mixture models for continuous data are well developed, there is a limited literature on related approaches for count data. A common strategy is to use a mixture of Poissons, which unfortunately is quite restrictive in not accounting for distributions having variance less than the mean. Other approaches include mixing multinomials, which requires finite support, and using a Dirichlet process prior with a Poisson base measure, which does not allow smooth deviations from the Poisson. As a broad class of alternative models, we propose to use nonparametric mixtures of rounded continuous kernels. An efficient Gibbs sampler is developed for posterior computation, and a simulation study is performed to assess performance. Focusing on the rounded Gaussian case, we generalize the modeling framework to account for multivariate count data, joint modeling with continuous and categorical variables, and other complications. The methods are illustrated through applications to a developmental toxicity study and marketing data. This article has supplementary material online.

  5. Smoothing internal migration age profiles for comparative research

    Directory of Open Access Journals (Sweden)

    Aude Bernard

    2015-05-01

Full Text Available Background: Age patterns are a key dimension to compare migration between countries and over time. Comparative metrics can be reliably computed only if data capture the underlying age distribution of migration. Model schedules, the prevailing smoothing method, fit a composite exponential function, but are sensitive to function selection and initial parameter setting. Although non-parametric alternatives exist, their performance is yet to be established. Objective: We compare cubic splines and kernel regressions against model schedules by assessing which method provides an accurate representation of the age profile and best performs on metrics for comparing aggregate age patterns. Methods: We use full population microdata for Chile to perform 1,000 Monte-Carlo simulations for nine sample sizes and two spatial scales. We use residual and graphic analysis to assess model performance on the age and intensity at which migration peaks and the evolution of migration age patterns. Results: Model schedules generate a better fit when (1) the expected distribution of the age profile is known a priori, (2) the pre-determined shape of the model schedule adequately describes the true age distribution, and (3) the component curves and initial parameter values can be correctly set. When any of these conditions is not met, kernel regressions and cubic splines offer more reliable alternatives. Conclusions: Smoothing models should be selected according to research aims, age profile characteristics, and sample size. Kernel regressions and cubic splines enable a precise representation of aggregate migration age profiles for most sample sizes, without requiring parameter setting or imposing a pre-determined distribution, and therefore facilitate objective comparison.
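A Nadaraya-Watson kernel regression, one of the two non-parametric smoothers compared in the study, can be sketched on a stylised migration age profile (synthetic data, not the Chilean microdata; the bandwidth and the Gaussian peak shape are arbitrary illustration choices):

```python
import numpy as np

def kernel_smooth(ages, rates, bandwidth=2.0):
    # Nadaraya-Watson estimate: Gaussian-weighted local average of rates
    diff = ages[:, None] - ages[None, :]
    w = np.exp(-0.5 * (diff / bandwidth) ** 2)
    return (w * rates[None, :]).sum(axis=1) / w.sum(axis=1)

ages = np.arange(81)
# stylised profile: low base rate plus a young-adult migration peak at 25
truth = 0.02 + 0.08 * np.exp(-0.5 * ((ages - 25) / 6.0) ** 2)
rng = np.random.default_rng(1)
noisy = truth + rng.normal(0.0, 0.005, ages.size)
smooth = kernel_smooth(ages, noisy)
```

Unlike a model schedule, no functional form or initial parameters are imposed: the smoother recovers the age at which migration peaks directly from the data, which is the property the comparison above highlights.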

  6. Deficiencies in the Theory of Free-Knot and Variable-Knot Spline ...

    African Journals Online (AJOL)

    This paper revisits the theory and practical implementation of graduation of mortality rates using spline functions, and in particular, variable-knot cubic spline graduation. The paper contrasts the actuarial literature on free-knot splines with the mathematical literature. It finds that the practical difficulties of implementing ...

  7. Bayesian Recovery of the Initial Condition for the Heat Equation

    NARCIS (Netherlands)

    Knapik, B.T.; van der Vaart, A.W.; van Zanten, J.H.

    2013-01-01

    We study a Bayesian approach to recovering the initial condition for the heat equation from noisy observations of the solution at a later time. We consider a class of prior distributions indexed by a parameter quantifying "smoothness" and show that the corresponding posterior distributions contract

  8. C2-rational cubic spline involving tension parameters

    Indian Academy of Sciences (India)

    preferred which preserves some of the characteristics of the function to be interpolated. In order to tackle such ... Shape preserving properties of the rational (cubic/quadratic) spline interpolant have been studied ... tension parameters which is used to interpolate the given monotonic data is described in. [6]. Shape preserving ...

  9. Differential constraints for bounded recursive identification with multivariate splines

    NARCIS (Netherlands)

    De Visser, C.C.; Chu, Q.P.; Mulder, J.A.

    2011-01-01

    The ability to perform online model identification for nonlinear systems with unknown dynamics is essential to any adaptive model-based control system. In this paper, a new differential equality constrained recursive least squares estimator for multivariate simplex splines is presented that is able

  10. Approximate Implicitization of Parametric Curves Using Cubic Algebraic Splines

    Directory of Open Access Journals (Sweden)

    Xiaolei Zhang

    2009-01-01

    Full Text Available This paper presents an algorithm to solve the approximate implicitization of planar parametric curves using cubic algebraic splines. It applies piecewise cubic algebraic curves to give a global G2 continuity approximation to planar parametric curves. Approximation error on approximate implicitization of rational curves is given. Several examples are provided to prove that the proposed method is flexible and efficient.

  11. Cubic spline approximation techniques for parameter estimation in distributed systems

    Science.gov (United States)

    Banks, H. T.; Crowley, J. M.; Kunisch, K.

    1983-01-01

    Approximation schemes employing cubic splines in the context of a linear semigroup framework are developed for both parabolic and hyperbolic second-order partial differential equation parameter estimation problems. Convergence results are established for problems with linear and nonlinear systems, and a summary of numerical experiments with the techniques proposed is given.

  12. C2-rational cubic spline involving tension parameters

    Indian Academy of Sciences (India)

    In the present paper, 1-piecewise rational cubic spline function involving tension parameters is considered which produces a monotonic interpolant to a given monotonic data set. It is observed that under certain conditions the interpolant preserves the convexity property of the data set. The existence and uniqueness of a ...

  13. Counterexamples to the B-spline Conjecture for Gabor Frames

    DEFF Research Database (Denmark)

    Lemvig, Jakob; Nielsen, Kamilla Haahr

    2016-01-01

    The frame set conjecture for B-splines Bn, n≥2, states that the frame set is the maximal set that avoids the known obstructions. We show that any hyperbola of the form ab=r, where r is a rational number smaller than one and a and b denote the sampling and modulation rates, respectively, has infin...

  14. Kriging and thin plate splines for mapping climate variables

    NARCIS (Netherlands)

    Boer, E.P.J.; Beurs, de K.M.; Hartkamp, A.D.

    2001-01-01

    Four forms of kriging and three forms of thin plate splines are discussed in this paper to predict monthly maximum temperature and monthly mean precipitation in Jalisco State of Mexico. Results show that techniques using elevation as additional information improve the prediction results
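A thin plate spline surface fit of the kind compared in this study can be sketched with SciPy's `RBFInterpolator` (synthetic station data with an affine temperature field, not the Jalisco records; assumes SciPy >= 1.7):

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# scattered synthetic "stations" carrying a smooth temperature-like field
rng = np.random.default_rng(2)
xy = rng.uniform(0.0, 1.0, (30, 2))
temp = 20.0 - 10.0 * xy[:, 1] + 2.0 * xy[:, 0]

# exact thin plate spline interpolation (smoothing=0.0 is the default)
tps = RBFInterpolator(xy, temp, kernel="thin_plate_spline")
pred = tps(np.array([[0.5, 0.5], [0.1, 0.9]]))
```

Because a thin plate spline includes a full linear polynomial part, it reproduces this affine field exactly; real climate surfaces deviate, and adding elevation as a covariate, as the paper does, is what improves the prediction.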

  15. Spline function fit for multi-sets of correlative data

    International Nuclear Information System (INIS)

    Liu Tingjin; Zhou Hongmo

    1992-01-01

The spline fit method for multiple sets of correlative data is developed. The properties of correlative data fits are investigated. The data of the ²³Na(n, 2n) cross section are fitted for the cases with and without correlation.

  16. Understanding Computational Bayesian Statistics

    CERN Document Server

    Bolstad, William M

    2011-01-01

    A hands-on introduction to computational statistics from a Bayesian point of view Providing a solid grounding in statistics while uniquely covering the topics from a Bayesian perspective, Understanding Computational Bayesian Statistics successfully guides readers through this new, cutting-edge approach. With its hands-on treatment of the topic, the book shows how samples can be drawn from the posterior distribution when the formula giving its shape is all that is known, and how Bayesian inferences can be based on these samples from the posterior. These ideas are illustrated on common statistic

  17. Bayesian statistics an introduction

    CERN Document Server

    Lee, Peter M

    2012-01-01

    Bayesian Statistics is the school of thought that combines prior beliefs with the likelihood of a hypothesis to arrive at posterior beliefs. The first edition of Peter Lee’s book appeared in 1989, but the subject has moved ever onwards, with increasing emphasis on Monte Carlo based techniques. This new fourth edition looks at recent techniques such as variational methods, Bayesian importance sampling, approximate Bayesian computation and Reversible Jump Markov Chain Monte Carlo (RJMCMC), providing a concise account of the way in which the Bayesian approach to statistics develops as wel

  18. A Quadratic Spline based Interface (QUASI) reconstruction algorithm for accurate tracking of two-phase flows

    Science.gov (United States)

    Diwakar, S. V.; Das, Sarit K.; Sundararajan, T.

    2009-12-01

    A new Quadratic Spline based Interface (QUASI) reconstruction algorithm is presented which provides an accurate and continuous representation of the interface in a multiphase domain and facilitates the direct estimation of local interfacial curvature. The fluid interface in each of the mixed cells is represented by piecewise parabolic curves and an initial discontinuous PLIC approximation of the interface is progressively converted into a smooth quadratic spline made of these parabolic curves. The conversion is achieved by a sequence of predictor-corrector operations enforcing function ( C0) and derivative ( C1) continuity at the cell boundaries using simple analytical expressions for the continuity requirements. The efficacy and accuracy of the current algorithm has been demonstrated using standard test cases involving reconstruction of known static interface shapes and dynamically evolving interfaces in prescribed flow situations. These benchmark studies illustrate that the present algorithm performs excellently as compared to the other interface reconstruction methods available in literature. Quadratic rate of error reduction with respect to grid size has been observed in all the cases with curved interface shapes; only in situations where the interface geometry is primarily flat, the rate of convergence becomes linear with the mesh size. The flow algorithm implemented in the current work is designed to accurately balance the pressure gradients with the surface tension force at any location. As a consequence, it is able to minimize spurious flow currents arising from imperfect normal stress balance at the interface. This has been demonstrated through the standard test problem of an inviscid droplet placed in a quiescent medium. Finally, the direct curvature estimation ability of the current algorithm is illustrated through the coupled multiphase flow problem of a deformable air bubble rising through a column of water.

  19. Human airway smooth muscle

    NARCIS (Netherlands)

    J.C. de Jongste (Johan)

    1987-01-01

The function of airway smooth muscle in normal subjects is not evident. Possible physiological roles include maintenance of optimal regional ventilation/perfusion ratios, reduction of anatomic dead space, stabilisation of cartilaginous bronchi, defense against impurities and, less

  20. Laplacians on smooth distributions

    Science.gov (United States)

    Kordyukov, Yu. A.

    2017-10-01

Let M be a compact smooth manifold equipped with a positive smooth density μ and let H be a smooth distribution endowed with a fibrewise inner product g. We define the Laplacian Δ_H associated with (H, μ, g) and prove that it gives rise to an unbounded self-adjoint operator in L²(M, μ). Then, assuming that H generates a singular foliation ℱ, we prove that, for any function φ in the Schwartz space 𝒮(ℝ), the operator φ(Δ_H) is a smoothing operator in the scale of longitudinal Sobolev spaces associated with ℱ. The proofs are based on pseudodifferential calculus on singular foliations, which was developed by Androulidakis and Skandalis, and on subelliptic estimates for Δ_H. Bibliography: 35 titles.

  1. Human airway smooth muscle

    OpenAIRE

    Jongste, Johan

    1987-01-01

The function of airway smooth muscle in normal subjects is not evident. Possible physiological roles include maintenance of optimal regional ventilation/perfusion ratios, reduction of anatomic dead space, stabilisation of cartilaginous bronchi, defense against impurities and, less likely, squeezing mucus out of mucous glands and pulling open the alveoli next to the airways. Any role of airway smooth muscle is necessarily limited, because an important degree of contraction will l...

  2. Bayesian Graphical Models

    DEFF Research Database (Denmark)

    Jensen, Finn Verner; Nielsen, Thomas Dyhre

    2016-01-01

    is largely due to the availability of efficient inference algorithms for answering probabilistic queries about the states of the variables in the network. Furthermore, to support the construction of Bayesian network models, learning algorithms are also available. We give an overview of the Bayesian network...

  3. The Bayesian Score Statistic

    NARCIS (Netherlands)

    Kleibergen, F.R.; Kleijn, R.; Paap, R.

    2000-01-01

We propose a novel Bayesian test under a (noninformative) Jeffreys' prior specification. We check whether the fixed scalar value of the so-called Bayesian Score Statistic (BSS) under the null hypothesis is a plausible realization from its known and standardized distribution under the alternative. Unlike

  4. Bayesian Mediation Analysis

    Science.gov (United States)

    Yuan, Ying; MacKinnon, David P.

    2009-01-01

    In this article, we propose Bayesian analysis of mediation effects. Compared with conventional frequentist mediation analysis, the Bayesian approach has several advantages. First, it allows researchers to incorporate prior information into the mediation analysis, thus potentially improving the efficiency of estimates. Second, under the Bayesian…

  5. Television images identification in the vision system basis on the mathematical apparatus of cubic normalized B-splines

    Directory of Open Access Journals (Sweden)

    Krutov Vladimir

    2017-01-01

Full Text Available The task of television image identification arises in industry when creating autonomous robots and systems of technical vision. A similar problem also arises in the development of image analysis systems that must function under the influence of various interfering factors, in complex observation conditions that complicate the registration process, and when a priori information is absent, as with background noise. One of the most important operators is the contour selection operator. Methods and algorithms for processing information from image sensors must take into account the different character of the noise associated with image and signal registration. Isolating contours, in effect the digital differentiation of two-dimensional signals registered against background noise of varying character, is far from trivial, because the task is ill-posed. In modern information systems, numerical differentiation or masks are usually used to isolate contours. The paper considers a new method of differentiating measurement results against a noise background using the modern mathematical apparatus of cubic smoothing B-splines. The new high-precision method of digital differentiation of signals using splines is proposed for the first time: the values of the derivatives are computed with high accuracy without using standard numerical differentiation procedures. In effect, a method has been developed for calculating the image gradient module using spline differentiation. As experimental studies and computational experiments have shown, the method has higher noise immunity than algorithms based on standard differentiation procedures using masks.
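The core idea, differentiating a noisy signal through a smoothing spline instead of a finite-difference mask, can be sketched in one dimension (a generic smoothing-spline sketch using SciPy's `UnivariateSpline`, not the authors' B-spline implementation; the edge profile and noise level are made up):

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# noisy 1-D "edge" profile: a steep smooth ramp plus measurement noise
x = np.linspace(0.0, 1.0, 200)
edge = 1.0 / (1.0 + np.exp(-(x - 0.5) / 0.02))
rng = np.random.default_rng(3)
y = edge + rng.normal(0.0, 0.02, x.size)

# cubic smoothing spline; s ~ n * noise_variance is a common heuristic
spl = UnivariateSpline(x, y, k=3, s=x.size * 0.02 ** 2)
dy = spl.derivative()(x)       # analytic derivative of the fitted spline
edge_pos = x[np.argmax(dy)]    # gradient peak locates the edge
```

Differentiating the fitted spline analytically avoids amplifying the noise the way a direct difference mask would, which is the noise-immunity advantage the abstract claims; in two dimensions the same idea yields the gradient module for contour selection.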

  6. Consistency and Monte Carlo Simulation of a Data Driven Version of smooth Goodness-of-Fit Tests

    NARCIS (Netherlands)

    Kallenberg, W.C.M.; Ledwina, Teresa

    1995-01-01

    The data driven method of selecting the number of components in Neyman's smooth test for uniformity, introduced by Ledwina, is extended. The resulting tests consist of a combination of Schwarz's Bayesian information criterion (BIC) procedure and smooth tests. The upper bound of the dimension of the

  7. High-order numerical solutions using cubic splines

    Science.gov (United States)

    Rubin, S. G.; Khosla, P. K.

    1975-01-01

The cubic spline collocation procedure for the numerical solution of partial differential equations was reformulated so that the accuracy of the second-derivative approximation is improved and parallels that previously obtained for lower derivative terms. The final result is a numerical procedure having overall third-order accuracy for a nonuniform mesh and overall fourth-order accuracy for a uniform mesh. Application of the technique was made to Burgers' equation, to the flow around a linear corner, to the potential flow over a circular cylinder, and to boundary layer problems. The results confirmed the higher-order accuracy of the spline method and suggest that accurate solutions for more practical flow problems can be obtained with relatively coarse nonuniform meshes.

  8. Data interpolation using rational cubic Ball spline with three parameters

    Science.gov (United States)

    Karim, Samsul Ariffin Abdul

    2016-11-01

Data interpolation is an important task in scientific visualization. This research introduces a new rational cubic Ball spline scheme with three parameters. The rational cubic Ball spline is used for data interpolation with or without true derivative values. Error estimation shows that the proposed scheme works well and is a very good interpolant for approximating the function. All graphical examples are presented using Mathematica software.

  9. Cubic Splines for Trachea and Bronchial Tubes Grid Generation

    Directory of Open Access Journals (Sweden)

    Eliandro Rodrigues Cirilo

    2006-02-01

Full Text Available Grid generation plays an important role in the development of efficient numerical techniques for solving complex flows. The present work therefore develops a method for two-dimensional block-structured grid generation for geometries such as the trachea and bronchial tubes. A set of 55 blocks completes the geometry, whose contours are defined by cubic splines. Moreover, this technique builds on earlier ones through its simplicity and efficiency in generating grids for very complex geometries.
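Defining a closed block contour by cubic splines, as described above, can be sketched with periodic parametric splines through a few boundary points (a toy loop, not the actual trachea geometry; block decomposition and grid filling are omitted):

```python
import numpy as np
from scipy.interpolate import CubicSpline

# closed contour through a few boundary points: periodic parametric
# cubic splines x(t), y(t), with the first point repeated to close the loop
pts = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0], [1.0, 0.0]])
t = np.linspace(0.0, 1.0, len(pts))
sx = CubicSpline(t, pts[:, 0], bc_type="periodic")
sy = CubicSpline(t, pts[:, 1], bc_type="periodic")
tt = np.linspace(0.0, 1.0, 200)
contour = np.column_stack([sx(tt), sy(tt)])  # smooth closed boundary
```

The periodic boundary condition enforces matching values and derivatives where the contour closes, so each block edge can then be discretized along a smooth curve before the interior grid is generated.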

  10. Uncertainty Quantification using Epi-Splines and Soft Information

    Science.gov (United States)

    2012-06-01

prediction of the behavior of constructed models of phenomena in physics, biology, chemistry, ecology, engineered systems, politics, etc. ... Results ... spline framework being applied to one of the most common, yet most complex, systems known: the human body. Chapter 5 concludes the thesis by ... complex a system known to man than that of the human body. The number of variables impacting the performance of one human over another in a given ...

  11. Numerical simulation of Burgers' equation using cubic B-splines

    Science.gov (United States)

    Lakshmi, C.; Awasthi, Ashish

    2017-03-01

In this paper, a numerical θ scheme is proposed for solving the nonlinear Burgers' equation. By employing the Hopf-Cole transformation, the nonlinear Burgers' equation is linearized to the linear heat equation. The resulting heat equation is then solved by cubic B-splines. The time discretization of the linear heat equation is carried out using the Crank-Nicolson scheme (θ = 1/2) as well as the backward Euler scheme (θ = 1). Accuracy in the temporal direction is improved by using Richardson extrapolation. This method hence possesses fourth-order accuracy both in space and time. The matrix system that arises from the cubic splines is always tridiagonal. Therefore, working with splines has the advantage of reduced computational cost and easy implementation. Stability of the schemes has been discussed in detail and shown to be unconditional. Three examples have been examined and the L2 and L∞ error norms have been calculated to establish the performance of the method. The numerical results obtained on applying this method are more accurate than the existing works of Kutluay et al. [1], Ozis et al. [2], Dag et al. [3], Salkuyeh et al. [4] and Korkmaz et al. [5].
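The Hopf-Cole route can be sketched on the resulting heat equation with a Crank-Nicolson (θ = 1/2) step; the sketch below uses second-order finite differences in space where the paper uses cubic B-splines, and checks against the exact decay of a sine mode:

```python
import numpy as np

def crank_nicolson_heat(u0, nu, dx, dt, steps):
    # Crank-Nicolson for u_t = nu*u_xx with homogeneous Dirichlet ends:
    # (I + r*L) u^{n+1} = (I - r*L) u^n with r = nu*dt/(2*dx^2)
    n = u0.size
    r = nu * dt / (2.0 * dx * dx)
    A = (1 + 2*r) * np.eye(n) - r * np.eye(n, k=1) - r * np.eye(n, k=-1)
    B = (1 - 2*r) * np.eye(n) + r * np.eye(n, k=1) + r * np.eye(n, k=-1)
    A[0], A[-1] = 0.0, 0.0
    A[0, 0] = A[-1, -1] = 1.0       # pin the boundary values
    B[0], B[-1] = 0.0, 0.0
    B[0, 0] = B[-1, -1] = 1.0
    u = u0.copy()
    for _ in range(steps):
        u = np.linalg.solve(A, B @ u)
    return u

x = np.linspace(0.0, np.pi, 101)
u0 = np.sin(x)                      # exact solution: exp(-nu*t) * sin(x)
u = crank_nicolson_heat(u0, nu=1.0, dx=x[1] - x[0], dt=1e-3, steps=1000)
err = np.abs(u - np.exp(-1.0) * np.sin(x)).max()
```

The scheme is unconditionally stable and second-order in time, consistent with the θ = 1/2 case above; the full method would additionally map the Burgers solution back through u = -2ν φ_x/φ.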

  12. Testing the Performance of Cubic Splines and Nelson-Siegel Model for Estimating the Zero-coupon Yield Curve

    Directory of Open Access Journals (Sweden)

    Lorenčič Eva

    2016-06-01

Full Text Available Understanding the relationship between interest rates and term to maturity of securities is a prerequisite for developing financial theory and evaluating whether it holds up in the real world; therefore, such an understanding lies at the heart of monetary and financial economics. Accurately fitting the term structure of interest rates is the backbone of a smoothly functioning financial market, which is why the testing of various models for estimating and predicting the term structure of interest rates is an important topic in finance that has received considerable attention for many decades. In this paper, we empirically contrast the performance of cubic splines and the Nelson-Siegel model by estimating the zero-coupon yields of Austrian government bonds. The main conclusion that can be drawn from the results of the calculations is that the Nelson-Siegel model outperforms cubic splines at the short end of the yield curve (up to 2 years), whereas for medium-term maturities (2 to 10 years) the fitting performance of both models is comparable.

  13. A scalable block-preconditioning strategy for divergence-conforming B-spline discretizations of the Stokes problem

    KAUST Repository

    Cortes, Adriano Mauricio

    2016-10-01

    The recently introduced divergence-conforming B-spline discretizations allow the construction of smooth discrete velocity-pressure pairs for viscous incompressible flows that are at the same time inf-sup stable and pointwise divergence-free. When applied to the discretized Stokes problem, these spaces generate a symmetric, indefinite saddle-point linear system. The iterative method of choice for such a system is the Generalized Minimum Residual Method. This method lacks robustness, and one remedy is to use preconditioners. For linear systems of saddle-point type, a large family of preconditioners can be obtained by using a block factorization of the system. In this paper, we show how the nesting of "black-box" solvers and preconditioners can be put together in a block triangular strategy to build a scalable block preconditioner for the Stokes system discretized by divergence-conforming B-splines. Besides the well-known cavity flow problem, we used as benchmarks flows defined on complex geometries: an eccentric annulus and a hollow torus with an eccentric annular cross-section.

  14. The modeling of quadratic B-splines surfaces for the tomographic reconstruction in the FCC-type-riser

    International Nuclear Information System (INIS)

    Vasconcelos, Geovane Vitor; Dantas, Carlos Costa; Melo, Silvio de Barros; Pires, Renan Ferraz

    2009-01-01

    The 3D tomography reconstruction has been a profitable alternative in the analysis of the FCC-type riser (Fluid Catalytic Cracking), for appropriately keeping track of the sectional catalyst concentration distribution in the process of oil refining. The method of tomography reconstruction proposed by M. Azzi and colleagues (1991) uses a relatively small number of trajectories (from 3 to 5) and projections (from 5 to 7) of gamma rays, a desirable feature in industrial process tomography. Compared to more popular methods, such as FBP (Filtered Back Projection), which demands a much higher number of gamma-ray projections, the method by Azzi et al. is more appropriate for the industrial process, where physical limitations and cost require more economical arrangements. The use of few projections and trajectories facilitates the diagnosis of the dynamical flow process. This article proposes an improvement of the basis functions introduced by Azzi et al. through the use of quadratic B-spline functions. The use of B-spline functions makes possible a smoother surface reconstruction of the density distribution, since the functions are continuous and smooth. This work describes how the modeling can be done. (author)

  15. USING SPLINE FUNCTIONS FOR THE SUBSTANTIATION OF TAX POLICIES BY LOCAL AUTHORITIES

    Directory of Open Access Journals (Sweden)

    Otgon Cristian

    2011-07-01

    Full Text Available The paper aims to approach innovative financial instruments for the management of public resources. Among these innovative tools are polynomial spline functions, used for budgetary sizing in the substantiation of fiscal and budgetary policies. Using polynomial spline functions involves a number of steps: establishing the nodes, calculating the coefficients of the spline functions, development, and determination of the approximation errors. The paper also extrapolates a property tax data series using polynomial spline functions of order I. For the spline implementation two data series were taken, one referring to property tax as the dependent variable and the second referring to building tax, resulting in a correlation indicator R = 0.95. Moreover, the spline functions are easy to compute and, owing to their small approximation errors, have great predictive power, much better than the ordinary least squares method. The research proceeded in several steps, namely observation, construction of the data series, and processing of the data with spline functions. The data are a daily series gathered from the budget account, referring to building tax and property tax. The added value of this paper lies in the possibility of avoiding deficits by using spline functions as innovative instruments in public finance; the original contribution is the averaging of the splines resulting from the data series. The research results lead to the conclusion that polynomial spline functions are recommended for the elaboration of fiscal and budgetary policies, due to the relatively small errors obtained in the extrapolation of economic processes and phenomena. Future research directions include the study of polynomial spline functions of second and third order.
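
The order-I (piecewise linear) spline extrapolation used for the tax series can be sketched as follows; the monthly revenue figures are invented for illustration.

```python
import numpy as np
from scipy.interpolate import make_interp_spline

# Order-I spline through a made-up monthly revenue series; evaluating
# beyond the last node continues the last linear segment, which is the
# extrapolation mechanism referred to in the abstract.
months = np.arange(1.0, 9.0)
revenue = np.array([10.0, 11.5, 11.2, 12.8, 13.1, 14.0, 14.6, 15.3])

s1 = make_interp_spline(months, revenue, k=1)   # piecewise linear spline
forecast = float(s1(9.0))                       # 15.3 + (15.3 - 14.6) = 16.0
```

The spline interpolates the observed nodes exactly and extends the final segment's slope into the forecast horizon.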

  16. Solving nonlinear Benjamin-Bona-Mahony equation using cubic B-spline and cubic trigonometric B-spline collocation methods

    Science.gov (United States)

    Rahan, Nur Nadiah Mohd; Ishak, Siti Noor Shahira; Hamid, Nur Nadiah Abd; Majid, Ahmad Abd.; Azmi, Amirah

    2017-04-01

    In this research, the nonlinear Benjamin-Bona-Mahony (BBM) equation is solved numerically using the cubic B-spline (CuBS) and cubic trigonometric B-spline (CuTBS) collocation methods. The CuBS and CuTBS are utilized as interpolating functions in the spatial dimension while the standard finite difference method (FDM) is applied to discretize the temporal domain. In order to solve the nonlinear problem, the BBM equation is linearized using Taylor's expansion. Applying von Neumann stability analysis, the proposed techniques are shown to be unconditionally stable under the Crank-Nicolson scheme. Several numerical examples are discussed and compared with exact solutions and results from the FDM.

  17. Bayesian data analysis for newcomers.

    Science.gov (United States)

    Kruschke, John K; Liddell, Torrin M

    2018-02-01

    This article explains the foundational concepts of Bayesian data analysis using virtually no mathematical notation. Bayesian ideas already match your intuitions from everyday reasoning and from traditional data analysis. Simple examples of Bayesian data analysis are presented that illustrate how the information delivered by a Bayesian analysis can be directly interpreted. Bayesian approaches to null-value assessment are discussed. The article clarifies misconceptions about Bayesian methods that newcomers might have acquired elsewhere. We discuss prior distributions and explain how they are not a liability but an important asset. We discuss the relation of Bayesian data analysis to Bayesian models of mind, and we briefly discuss what methodological problems Bayesian data analysis is not meant to solve. After you have read this article, you should have a clear sense of how Bayesian data analysis works and the sort of information it delivers, and why that information is so intuitive and useful for drawing conclusions from data.
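
A minimal concrete instance of the kind of Bayesian updating the article explains, using the closed-form Beta-binomial model (this example is ours, not the article's):

```python
# Conjugate Beta-binomial updating: a Beta(a, b) prior on a success
# probability combined with k successes in n trials gives a
# Beta(a + k, b + n - k) posterior, entirely in closed form.
def beta_binomial_posterior(a, b, k, n):
    return a + k, b + n - k

a_post, b_post = beta_binomial_posterior(1, 1, k=7, n=10)  # uniform prior
post_mean = a_post / (a_post + b_post)                     # 8 / 12
```

The posterior parameters can be read directly as "prior pseudo-counts plus observed counts", which is exactly the intuition-matching behaviour the article emphasizes.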

  18. Generalizing smooth transition autoregressions

    DEFF Research Database (Denmark)

    Chini, Emilio Zanetti

    We introduce a variant of the smooth transition autoregression - the GSTAR model - capable of parametrizing the asymmetry in the tails of the transition equation by using a particular generalization of the logistic function. A General-to-Specific modelling strategy is discussed in detail...

  19. Smoothed Complexity Theory

    NARCIS (Netherlands)

    Bläser, Markus; Manthey, Bodo

    Smoothed analysis is a new way of analyzing algorithms introduced by Spielman and Teng. Classical methods like worst-case or average-case analysis have accompanying complexity classes, such as P and Avg-P, respectively. Whereas worst-case or average-case analysis give us a means to talk about the

  20. Bayesian methods for data analysis

    CERN Document Server

    Carlin, Bradley P.

    2009-01-01

    Approaches for statistical inference Introduction Motivating Vignettes Defining the Approaches The Bayes-Frequentist Controversy Some Basic Bayesian Models The Bayes approach Introduction Prior Distributions Bayesian Inference Hierarchical Modeling Model Assessment Nonparametric Methods Bayesian computation Introduction Asymptotic Methods Noniterative Monte Carlo Methods Markov Chain Monte Carlo Methods Model criticism and selection Bayesian Modeling Bayesian Robustness Model Assessment Bayes Factors via Marginal Density Estimation Bayes Factors

  1. Statistics: a Bayesian perspective

    National Research Council Canada - National Science Library

    Berry, Donald A

    1996-01-01

    ...: it is the only introductory textbook based on Bayesian ideas, it combines concepts and methods, it presents statistics as a means of integrating data into the scientific process, it develops ideas...

  2. Noncausal Bayesian Vector Autoregression

    DEFF Research Database (Denmark)

    Lanne, Markku; Luoto, Jani

    We propose a Bayesian inferential procedure for the noncausal vector autoregressive (VAR) model that is capable of capturing nonlinearities and incorporating effects of missing variables. In particular, we devise a fast and reliable posterior simulator that yields the predictive distribution...

  3. Practical Bayesian tomography

    Science.gov (United States)

    Granade, Christopher; Combes, Joshua; Cory, D. G.

    2016-03-01

    In recent years, Bayesian methods have been proposed as a solution to a wide range of issues in quantum state and process tomography. State-of-the-art Bayesian tomography solutions suffer from three problems: numerical intractability, a lack of informative prior distributions, and an inability to track time-dependent processes. Here, we address all three problems. First, we use modern statistical methods, as pioneered by Huszár and Houlsby (2012 Phys. Rev. A 85 052120) and by Ferrie (2014 New J. Phys. 16 093035), to make Bayesian tomography numerically tractable. Our approach allows for practical computation of Bayesian point and region estimators for quantum states and channels. Second, we propose the first priors on quantum states and channels that allow for including useful experimental insight. Finally, we develop a method that allows tracking of time-dependent states and estimates the drift and diffusion processes affecting a state. We provide source code and animated visual examples for our methods.

  4. Variational Bayesian Filtering

    Czech Academy of Sciences Publication Activity Database

    Šmídl, Václav; Quinn, A.

    2008-01-01

    Roč. 56, č. 10 (2008), s. 5020-5030 ISSN 1053-587X R&D Projects: GA MŠk 1M0572 Institutional research plan: CEZ:AV0Z10750506 Keywords : Bayesian filtering * particle filtering * Variational Bayes Subject RIV: BC - Control Systems Theory Impact factor: 2.335, year: 2008 http://library.utia.cas.cz/separaty/2008/AS/smidl-variational bayesian filtering.pdf

  5. Bayesian Networks An Introduction

    CERN Document Server

    Koski, Timo

    2009-01-01

    Bayesian Networks: An Introduction provides a self-contained introduction to the theory and applications of Bayesian networks, a topic of interest and importance for statisticians, computer scientists and those involved in modelling complex data sets. The material has been extensively tested in classroom teaching and assumes a basic knowledge of probability, statistics and mathematics. All notions are carefully explained and feature exercises throughout. Features include:.: An introduction to Dirichlet Distribution, Exponential Families and their applications.; A detailed description of learni

  6. A baseline correction algorithm for Raman spectroscopy by adaptive knots B-spline

    International Nuclear Information System (INIS)

    Wang, Xin; Fan, Xian-guang; Xu, Ying-jie; Wang, Xiu-fen; He, Hao; Zuo, Yong

    2015-01-01

    The Raman spectroscopy technique is a powerful and non-invasive technique for molecular fingerprint detection which has been widely used in many areas, such as food safety, drug safety, and environmental testing. However, Raman signals can easily be corrupted by a fluorescent background; we therefore present a baseline correction algorithm to suppress the fluorescent background in this paper. In this algorithm, the background of the Raman signal is suppressed by fitting a curve called a baseline using a cyclic approximation method. Instead of the traditional polynomial fitting, we use the B-spline as the fitting algorithm due to its advantages of low order and smoothness, which can avoid under-fitting and over-fitting effectively. In addition, we also present an automatic adaptive knot generation method to replace traditional uniform knots. This algorithm can obtain the desired performance for most Raman spectra with varying baselines without any user input or preprocessing step. In the simulation, three kinds of fluorescent background lines were introduced to test the effectiveness of the proposed method. We show that two real Raman spectra (parathion-methyl and colza oil) can be detected and their baselines corrected by the proposed method. (paper)
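
The cyclic approximation idea (fit a smooth curve, clip the signal down to it, refit) can be sketched with a generic smoothing spline; SciPy's UnivariateSpline and a synthetic spectrum stand in for the paper's adaptive-knot B-spline and real Raman data, and the smoothing factor is hand-tuned to this example.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Cyclic approximation: fit a smoothing spline, clip the working signal
# down to the fit so peaks stop pulling the baseline up, and repeat.
def spline_baseline(x, y, s, n_iter=20):
    work = y.copy()
    fit = work
    for _ in range(n_iter):
        fit = UnivariateSpline(x, work, s=s)(x)
        work = np.minimum(work, fit)    # suppress everything above the fit
    return fit

x = np.linspace(0.0, 10.0, 401)
background = 2.0 + 0.3 * x                                  # slow fluorescence drift
peak = 5.0 * np.exp(-0.5 * ((x - 5.0) / 0.1) ** 2)          # narrow Raman band
baseline = spline_baseline(x, background + peak, s=200.0)   # s tuned by hand
corrected = background + peak - baseline
```

After a few iterations the fitted baseline tracks the slowly varying background while the narrow band survives in the corrected signal.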

  7. Spline-based high-accuracy piecewise-polynomial phase-to-sinusoid amplitude converters.

    Science.gov (United States)

    Petrinović, Davor; Brezović, Marko

    2011-04-01

    We propose a method for direct digital frequency synthesis (DDS) using a cubic spline piecewise-polynomial model for a phase-to-sinusoid amplitude converter (PSAC). This method offers maximum smoothness of the output signal. Closed-form expressions for the cubic polynomial coefficients are derived in the spectral domain and the performance analysis of the model is given in the time and frequency domains. We derive the closed-form performance bounds of such DDS using conventional metrics: rms and maximum absolute errors (MAE) and maximum spurious free dynamic range (SFDR) measured in the discrete time domain. The main advantages of the proposed PSAC are its simplicity, analytical tractability, and inherent numerical stability for high table resolutions. Detailed guidelines for a fixed-point implementation are given, based on the algebraic analysis of all quantization effects. The results are verified on 81 PSAC configurations with the output resolutions from 5 to 41 bits by using a bit-exact simulation. The VHDL implementation of a high-accuracy DDS based on the proposed PSAC with 28-bit input phase word and 32-bit output value achieves SFDR of its digital output signal between 180 and 207 dB, with a signal-to-noise ratio of 192 dB. Its implementation requires only one 18 kB block RAM and three 18-bit embedded multipliers in a typical field-programmable gate array (FPGA) device. © 2011 IEEE
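
A software sketch of the spline-based PSAC idea: one period of sine is reduced to a small table of cubic-spline segments, and arbitrary phases are evaluated from the piecewise polynomial. The table size and error check are illustrative only and do not reproduce the paper's fixed-point design.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# One period of sine stored as 64 cubic-spline segments; arbitrary phases
# are then evaluated from the piecewise polynomial instead of a large ROM.
phase_knots = np.linspace(0.0, 2.0 * np.pi, 65)
table = np.sin(phase_knots)
table[-1] = table[0]                     # close the period exactly
psac = CubicSpline(phase_knots, table, bc_type='periodic')

phase = np.random.default_rng(0).uniform(0.0, 2.0 * np.pi, 10000)
max_abs_err = np.max(np.abs(psac(phase) - np.sin(phase)))
```

Even this coarse 64-segment table keeps the worst-case amplitude error in the microvolt-per-unit range, which illustrates why a spline PSAC needs so little memory.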

  8. Spline Approximation-Based Optimization of Multi-component Disperse Reinforced Composites

    Directory of Open Access Journals (Sweden)

    Yu. I. Dimitrienko

    2015-01-01

    Full Text Available The paper suggests an algorithm for solving problems of optimal design of multicomponent disperse-reinforced composite materials whose properties are defined by filler concentrations and are independent of filler shape. It formulates the problem of constrained optimization of a composite, with restrictions on its effective parameters (the elasticity modulus, tension and compression strengths, and heat-conductivity coefficient) and with the composite density minimized. The effective characteristics of the composite were computed by finite-element solution of the auxiliary local problems of elasticity and heat-conductivity theory that appear when the asymptotic averaging method is applied. The suggested optimization algorithm includes the following main stages: 1) finding a set of solutions of the direct problem to calculate the effective characteristics; 2) constructing the curves of effective characteristics versus filler concentrations by means of approximating functions, for which a thin-plate spline with smoothing is proposed; 3) constructing the set of points satisfying the restrictions and the boundary of that set, obtaining as a result a contour which can be parameterized; 4) finding the global density minimum over the contour through psi-transformation. A numerical example of the optimization problem is given for a disperse-reinforced composite with two types of fillers, both hollow microspheres: glass and phenolic. It is shown that the suggested algorithm finds optimal filler concentrations efficiently.
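
Stage 2 of the algorithm, building smooth response surfaces of an effective property over the filler concentrations, can be sketched as below. SciPy's SmoothBivariateSpline stands in for the thin-plate smoothing spline, and the "effective modulus" samples come from a made-up analytic surface rather than finite-element runs.

```python
import numpy as np
from scipy.interpolate import SmoothBivariateSpline

rng = np.random.default_rng(0)
c1 = rng.uniform(0.05, 0.4, 60)            # concentration of filler 1
c2 = rng.uniform(0.05, 0.4, 60)            # concentration of filler 2
modulus = 1.0 + 2.0 * c1 + 3.0 * c2        # surrogate effective property

# Smooth response surface over the two concentrations; a query anywhere in
# the sampled region then supports the restriction/optimization stages.
surf = SmoothBivariateSpline(c1, c2, modulus, kx=2, ky=2)
err = abs(surf(0.2, 0.2)[0, 0] - (1.0 + 2.0 * 0.2 + 3.0 * 0.2))
```

Once such surfaces exist for every effective characteristic, the feasible set and its boundary contour (stages 3 and 4) can be evaluated cheaply without further finite-element solves.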

  9. A baseline correction algorithm for Raman spectroscopy by adaptive knots B-spline

    Science.gov (United States)

    Wang, Xin; Fan, Xian-guang; Xu, Ying-jie; Wang, Xiu-fen; He, Hao; Zuo, Yong

    2015-11-01

    The Raman spectroscopy technique is a powerful and non-invasive technique for molecular fingerprint detection which has been widely used in many areas, such as food safety, drug safety, and environmental testing. But Raman signals can be easily corrupted by a fluorescent background, therefore we presented a baseline correction algorithm to suppress the fluorescent background in this paper. In this algorithm, the background of the Raman signal was suppressed by fitting a curve called a baseline using a cyclic approximation method. Instead of the traditional polynomial fitting, we used the B-spline as the fitting algorithm due to its advantages of low-order and smoothness, which can avoid under-fitting and over-fitting effectively. In addition, we also presented an automatic adaptive knot generation method to replace traditional uniform knots. This algorithm can obtain the desired performance for most Raman spectra with varying baselines without any user input or preprocessing step. In the simulation, three kinds of fluorescent background lines were introduced to test the effectiveness of the proposed method. We showed that two real Raman spectra (parathion-methyl and colza oil) can be detected and their baselines were also corrected by the proposed method.

  10. Contour propagation for lung tumor delineation in 4D-CT using tensor-product surface of uniform and non-uniform closed cubic B-splines

    Science.gov (United States)

    Jin, Renchao; Liu, Yongchuan; Chen, Mi; Zhang, Sheng; Song, Enmin

    2018-01-01

    A robust contour propagation method is proposed to help physicians delineate lung tumors on all phase images of four-dimensional computed tomography (4D-CT) by only manually delineating the contours on a reference phase. The proposed method models the trajectory surface swept by a contour in a respiratory cycle as a tensor-product surface of two closed cubic B-spline curves: a non-uniform B-spline curve which models the contour and a uniform B-spline curve which models the trajectory of a point on the contour. The surface is treated as a deformable entity, and is optimized from an initial surface by moving its control vertices such that the sum of the intensity similarities between the sampling points on the manually delineated contour and their corresponding ones on different phases is maximized. The initial surface is constructed by fitting the manually delineated contour on the reference phase with a closed B-spline curve. In this way, the proposed method can focus the registration on the contour instead of the entire image to prevent the deformation of the contour from being smoothed by its surrounding tissues, and greatly reduce the time consumption while keeping the accuracy of the contour propagation as well as the temporal consistency of the estimated respiratory motions across all phases in 4D-CT. Eighteen 4D-CT cases with 235 gross tumor volume (GTV) contours on the maximal inhale phase and 209 GTV contours on the maximal exhale phase were manually delineated slice by slice. The maximal inhale phase is used as the reference phase, which provides the initial contours. On the maximal exhale phase, the Jaccard similarity coefficient between the propagated GTV and the manually delineated GTV is 0.881 ± 0.026, and the Hausdorff distance is 3.07 ± 1.08 mm. The time for propagating the GTV to all phases is 5.55 ± 6.21 min. The results are better than those of the fast adaptive stochastic gradient descent B-spline method, the 3D+t B-spline

  11. Revealed smooth nontransitive preferences

    DEFF Research Database (Denmark)

    Keiding, Hans; Tvede, Mich

    2013-01-01

    In the present paper, we are concerned with the behavioural consequences of consumers having nontransitive preference relations. Data sets consist of finitely many observations of price vectors and consumption bundles. A preference relation rationalizes a data set provided that for every observed consumption bundle, all strictly preferred bundles are more expensive than the observed bundle. Our main result is that data sets can be rationalized by a smooth nontransitive preference relation if and only if prices can be normalized such that the law of demand is satisfied. Market data sets consist of finitely many observations of price vectors, lists of individual incomes and aggregate demands. We apply our main result to characterize market data sets consistent with equilibrium behaviour of pure-exchange economies with smooth nontransitive consumers.

  12. Analysis of ambulatory blood pressure monitor data using a hierarchical model incorporating restricted cubic splines and heterogeneous within-subject variances.

    Science.gov (United States)

    Lambert, P C; Abrams, K R; Jones, D R; Halligan, A W; Shennan, A

    2001-12-30

    Hypertensive disorders of pregnancy are associated with significant maternal and foetal morbidity. Measurement of blood pressure remains the standard way of identifying individuals at risk. There is growing interest in the use of ambulatory blood pressure monitors (ABPM), which can record an individual's blood pressure many times over a 24-hour period. From a clinical perspective interest lies in the shape of the blood pressure profile over a 24-hour period and any differences in the profile between groups. We propose a two-level hierarchical linear model incorporating all ABPM data into a single model. We contrast a classical approach with a Bayesian approach using the results of a study of 206 pregnant women who were asked to wear an ABPM for 24 hours after referral to an obstetric day unit with high blood pressure. As the main interest lies in the shape of the profile, we use restricted cubic splines to model the mean profiles. The use of restricted cubic splines provides a flexible way to model the mean profiles and to make comparisons between groups. From examining the data and the fit of the model it is apparent that there were heterogeneous within-subject variances in that some women tend to have more variable blood pressure than others. Within the Bayesian framework it is relatively easy to incorporate a random effect to model the between-subject variation in the within-subject variances. Although there is substantial heterogeneity in the within-subject variances, allowing for this in the model has surprisingly little impact on the estimates of the mean profiles or their confidence/credible intervals. We thus demonstrate a powerful method for analysis of ABPM data and also demonstrate how heterogeneous within-subject variances can be modelled from a Bayesian perspective. Copyright 2001 John Wiley & Sons, Ltd.
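
The restricted cubic spline basis used for the 24-hour profiles can be written down directly with the standard truncated-power construction: cubic between knots and, by construction, linear beyond the boundary knots. The knot positions below are arbitrary illustrative choices, not those of the study.

```python
import numpy as np

# Truncated-power basis of a restricted (natural) cubic spline: an
# intercept, a linear term, and k-2 cubic terms constrained so the
# function is linear beyond the first and last knots.
def rcs_basis(x, knots):
    tk, tk1 = knots[-1], knots[-2]
    cols = [np.ones_like(x), x]
    for tj in knots[:-2]:
        cols.append(np.maximum(x - tj, 0.0) ** 3
                    - np.maximum(x - tk1, 0.0) ** 3 * (tk - tj) / (tk - tk1)
                    + np.maximum(x - tk, 0.0) ** 3 * (tk1 - tj) / (tk - tk1))
    return np.column_stack(cols)

hours = np.linspace(0.0, 24.0, 97)
X = rcs_basis(hours, np.array([2.0, 8.0, 14.0, 20.0]))  # 4 knots -> 4 columns
# beyond the last knot every column is linear, so second differences vanish
tail = X[hours > 20.5]
second_diff = np.diff(tail, n=2, axis=0)
```

Regressing the blood pressure readings on these columns (with subject-level random effects added) yields a flexible yet low-dimensional 24-hour mean profile.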

  13. Design Evaluation of Wind Turbine Spline Couplings Using an Analytical Model: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Y.; Keller, J.; Wallen, R.; Errichello, R.; Halse, C.; Lambert, S.

    2015-02-01

    Articulated splines are commonly used in the planetary stage of wind turbine gearboxes for transmitting the driving torque and improving load sharing. Direct measurement of spline loads and performance is extremely challenging because of limited accessibility. This paper presents an analytical model for the analysis of articulated spline coupling designs. For a given torque and shaft misalignment, this analytical model quickly yields insights into relationships between the spline design parameters and resulting loads; bending, contact, and shear stresses; and safety factors considering various heat treatment methods. Comparisons of this analytical model against previously published computational approaches are also presented.

  14. MBIS: multivariate Bayesian image segmentation tool.

    Science.gov (United States)

    Esteban, Oscar; Wollny, Gert; Gorthi, Subrahmanyam; Ledesma-Carbayo, María-J; Thiran, Jean-Philippe; Santos, Andrés; Bach-Cuadra, Meritxell

    2014-07-01

    We present MBIS (Multivariate Bayesian Image Segmentation tool), a clustering tool based on the mixture of multivariate normal distributions model. MBIS supports multichannel bias field correction based on a B-spline model. A second methodological novelty is the inclusion of graph-cuts optimization for the stationary anisotropic hidden Markov random field model. Along with MBIS, we release an evaluation framework that contains three different experiments on multi-site data. We first validate the accuracy of segmentation and the estimated bias field for each channel. MBIS outperforms a widely used segmentation tool in a cross-comparison evaluation. The second experiment demonstrates the robustness of results on atlas-free segmentation of two image sets from scan-rescan protocols on 21 healthy subjects. Multivariate segmentation is more replicable than the monospectral counterpart on T1-weighted images. Finally, we provide a third experiment to illustrate how MBIS can be used in a large-scale study of tissue volume change with increasing age in 584 healthy subjects. This last result is meaningful as multivariate segmentation performs robustly without the need for prior knowledge. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  15. Semisupervised feature selection via spline regression for video semantic recognition.

    Science.gov (United States)

    Han, Yahong; Yang, Yi; Yan, Yan; Ma, Zhigang; Sebe, Nicu; Zhou, Xiaofang

    2015-02-01

    To improve both the efficiency and accuracy of video semantic recognition, we can perform feature selection on the extracted video features to select a subset of features from the high-dimensional feature set for a compact and accurate video data representation. Provided the number of labeled videos is small, supervised feature selection could fail to identify the relevant features that are discriminative to target classes. In many applications, abundant unlabeled videos are easily accessible. This motivates us to develop semisupervised feature selection algorithms to better identify the relevant video features, which are discriminative to target classes, by effectively exploiting the information underlying the huge amount of unlabeled video data. In this paper, we propose a framework of video semantic recognition by semisupervised feature selection via spline regression (S²FS²R). Two scatter matrices are combined to capture both the discriminative information and the local geometry structure of labeled and unlabeled training videos: a within-class scatter matrix encoding discriminative information of labeled training videos and a spline scatter output from a local spline regression encoding data distribution. An l2,1-norm is imposed as a regularization term on the transformation matrix to ensure it is sparse in rows, making it particularly suitable for feature selection. To efficiently solve S²FS²R, we develop an iterative algorithm and prove its convergence. In the experiments, three typical tasks of video semantic recognition, such as video concept detection, video classification, and human action recognition, are used to demonstrate that the proposed S²FS²R achieves better performance compared with the state-of-the-art methods.
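
The row-sparsity-inducing l2,1-norm mentioned above is simply the sum of the Euclidean norms of the matrix rows; a tiny sketch with an arbitrary matrix:

```python
import numpy as np

# l_{2,1} norm: sum of Euclidean row norms. Penalizing it drives whole
# rows of the transformation matrix to zero, i.e. discards features.
def l21_norm(W):
    return float(np.sum(np.linalg.norm(W, axis=1)))

W = np.array([[3.0, 4.0],     # useful feature, row norm 5
              [0.0, 0.0],     # discarded feature, row norm 0
              [1.0, 0.0]])    # weak feature, row norm 1
val = l21_norm(W)             # 5 + 0 + 1 = 6
```

Because each row contributes through its whole Euclidean norm, the penalty zeroes out a feature across all classes at once, unlike an entrywise l1 penalty.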

  16. Thin-plate spline analysis of mandibular growth.

    Science.gov (United States)

    Franchi, L; Baccetti, T; McNamara, J A

    2001-04-01

    The analysis of mandibular growth changes around the pubertal spurt in humans has several important implications for the diagnosis and orthopedic correction of skeletal disharmonies. The purpose of this study was to evaluate mandibular shape and size growth changes around the pubertal spurt in a longitudinal sample of subjects with normal occlusion by means of an appropriate morphometric technique (thin-plate spline analysis). Ten mandibular landmarks were identified on lateral cephalograms of 29 subjects at 6 different developmental phases. The 6 phases corresponded to 6 different maturational stages in cervical vertebrae during accelerative and decelerative phases of the pubertal growth curve of the mandible. Differences in shape between average mandibular configurations at the 6 developmental stages were visualized by means of thin-plate spline analysis and subjected to permutation test. Centroid size was used as the measure of the geometric size of each mandibular specimen. Differences in size at the 6 developmental phases were tested statistically. The results of graphical analysis indicated a statistically significant change in mandibular shape only for the growth interval from stage 3 to stage 4 in cervical vertebral maturation. Significant increases in centroid size were found at all developmental phases, with evidence of a prepubertal minimum and of a pubertal maximum. The existence of a pubertal peak in human mandibular growth, therefore, is confirmed by thin-plate spline analysis. Significant morphological changes in the mandible during the growth interval from stage 3 to stage 4 in cervical vertebral maturation may be described as an upward-forward direction of condylar growth determining an overall "shrinkage" of the mandibular configuration along the measurement of total mandibular length. This biological mechanism is particularly efficient in compensating for major increments in mandibular size at the adolescent spurt.
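
A thin-plate spline deformation between two landmark configurations, the basic machinery behind the shape analysis above, can be sketched with SciPy's RBF interpolator; the ten "landmarks" here are synthetic points, not cephalometric data.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(1)
src = rng.uniform(0.0, 10.0, size=(10, 2))            # source landmarks
dst = src + rng.normal(0.0, 0.3, size=(10, 2))        # deformed configuration

# A thin-plate spline (with zero smoothing) interpolates the landmark
# correspondence exactly and extends it smoothly to the whole plane.
warp = RBFInterpolator(src, dst, kernel='thin_plate_spline')
max_landmark_err = np.max(np.abs(warp(src) - dst))
```

Evaluating the warp on a regular grid of points produces the deformation grids used to visualize shape change between developmental stages.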

  17. Smooth bandpass empirical mode decomposition with rolling ball sifting for extracting carotid bruits and heart sounds.

    Science.gov (United States)

    Huang, Adam; Min-Yin Liu; Chung-Wei Lee; Hon-Man Liu

    2017-07-01

    Carotid bruits are systolic sounds associated with turbulent blood flow through atherosclerotic stenosis in the neck. They are audible intermittent high-frequency sounds mixed with low-frequency heart sounds that wax and wane periodically. It is a nontrivial problem to extract both bruits and heart sounds with high fidelity for further computer-aided analysis. In this paper we propose a smooth bandpass empirical mode decomposition (EMD) method to tackle the problem in the time domain. First, bandpass EMD is achieved by using a rolling ball algorithm to sift the local extrema of chosen time-scales. Second, the local zero is smoothed by interpolation with a monotone piecewise cubic spline. Preliminary results indicate that the new method is able to extract both carotid bruits and heart sounds as visually smooth oscillating components.
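
The monotone piecewise cubic spline used in the smoothing step has a shape-preserving property worth illustrating: unlike an unconstrained cubic spline, a PCHIP interpolant never overshoots monotone data. The sample points are illustrative.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.0, 0.0, 1.0, 1.0, 1.0])   # monotone, step-like samples

# PCHIP is a monotone piecewise cubic: on monotone data it never
# overshoots, unlike an unconstrained interpolating cubic spline.
pchip = PchipInterpolator(t, y)
vals = pchip(np.linspace(0.0, 4.0, 401))
```

This lack of spurious ripples is what makes a monotone cubic a safe choice when smoothing the local zero line of an oscillating physiological signal.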

  18. Achieving high data reduction with integral cubic B-splines

    Science.gov (United States)

    Chou, Jin J.

    1993-01-01

    During geometry processing, tangent directions at the data points are frequently readily available from the computation process that generates the points. It is desirable to utilize this information to improve the accuracy of curve fitting and to improve data reduction. This paper presents a curve fitting method which utilizes both position and tangent direction data. This method produces G¹ non-rational B-spline curves. In the examples, the method demonstrates very good data reduction rates while maintaining high accuracy in both position and tangent direction.
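
The local building block of a fit that honors both positions and tangent directions is the cubic Hermite segment, which reproduces endpoint positions and tangents exactly; this sketch shows only that building block, not the paper's G¹ B-spline fitting method.

```python
import numpy as np

# A cubic Hermite segment matches two endpoint positions p0, p1 and two
# endpoint tangents m0, m1 exactly (here in 2D, with invented data).
def hermite(p0, p1, m0, m1, t):
    t = np.asarray(t)[:, None]
    h00 = 2.0 * t ** 3 - 3.0 * t ** 2 + 1.0
    h10 = t ** 3 - 2.0 * t ** 2 + t
    h01 = -2.0 * t ** 3 + 3.0 * t ** 2
    h11 = t ** 3 - t ** 2
    return h00 * p0 + h10 * m0 + h01 * p1 + h11 * m1

p0, p1 = np.array([0.0, 0.0]), np.array([1.0, 0.5])
m0, m1 = np.array([1.0, 2.0]), np.array([1.0, -1.0])
curve = hermite(p0, p1, m0, m1, np.linspace(0.0, 1.0, 5))
end_err = max(np.max(np.abs(curve[0] - p0)), np.max(np.abs(curve[-1] - p1)))
# finite-difference check of the start tangent
d0 = (hermite(p0, p1, m0, m1, np.array([1e-6]))[0] - p0) / 1e-6
```

Joining such segments with shared endpoint tangent directions (magnitudes free) is what gives a composite curve G¹ continuity.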

  19. Preference learning with evolutionary Multivariate Adaptive Regression Spline model

    DEFF Research Database (Denmark)

    Abou-Zleikha, Mohamed; Shaker, Noor; Christensen, Mads Græsbøll

    2015-01-01

    This paper introduces a novel approach for pairwise preference learning through combining an evolutionary method with Multivariate Adaptive Regression Spline (MARS). Collecting users' feedback through pairwise preferences is recommended over other ranking approaches as this method is more appealing … for function approximation as well as being relatively easy to interpret. MARS models are evolved based on their efficiency in learning pairwise data. The method is tested on two datasets that collectively provide pairwise preference data of five cognitive states expressed by users. The method is analysed …

  20. Gravity Aided Navigation Precise Algorithm with Gauss Spline Interpolation

    Directory of Open Access Journals (Sweden)

    WEN Chaobin

    2015-01-01

    Full Text Available The gravity compensation term in the error equation should be thoroughly resolved before gravity-aided navigation with high precision can be studied. A gravity-aided navigation model construction algorithm is proposed that approximates the local grid gravity anomaly field with 2D Gauss spline interpolation. The gravity disturbance vector, the standard gravity value error, and the Eötvös effect are all compensated in this precision model. The experimental results show that positioning accuracy is doubled, attitude and velocity accuracy improve by a factor of 1 to 2, and the positional error is maintained within 100 to 200 m.

  1. C2-rational cubic spline involving tension parameters

    Indian Academy of Sciences (India)

    the impact of variation of parameters ri and ti on the shape of the interpolant. Some remarks are given in §6. 2. The rational spline interpolant. Let P = {xi}, i = 1, ..., n, where a = x1 < x2 < ... < xn = b, be a partition of the interval [a, b], and let fi, i = 1, ..., n, be the function values at the data points. We set hi = x(i+1) - xi, Δi = ...

  2. Smooth functors vs. differential forms

    NARCIS (Netherlands)

    Schreiber, U.; Waldorf, K.

    2011-01-01

    We establish a relation between smooth 2-functors defined on the path 2-groupoid of a smooth manifold and differential forms on this manifold. This relation can be understood as a part of a dictionary between fundamental notions from category theory and differential geometry. We show that smooth

  3. Bayesian Exploratory Factor Analysis

    DEFF Research Database (Denmark)

    Conti, Gabriella; Frühwirth-Schnatter, Sylvia; Heckman, James J.

    2014-01-01

    This paper develops and applies a Bayesian approach to Exploratory Factor Analysis that improves on ad hoc classical approaches. Our framework relies on dedicated factor models and simultaneously determines the number of factors, the allocation of each measurement to a unique factor, and the corresponding factor loadings. Classical identification criteria are applied and integrated into our Bayesian procedure to generate models that are stable and clearly interpretable. A Monte Carlo study confirms the validity of the approach. The method is used to produce interpretable low dimensional aggregates...

  4. Smoothly Varying Bright Blazars

    Science.gov (United States)

    Van Alfen, Nicholas; Hindman, Lauren; Moody, Joseph Ward; Biancardi, Rochelle; Whipple, Parkes; Gaunt, Caleb

    2018-01-01

    It is becoming increasingly apparent that blazar light can vary sinusoidally with periods of hundreds of days to tens of years. Such behavior is expected of, among other things, jets coming from binary black holes. To look for general variability in lesser-known blazars and AGN, in 2015-2016 we monitored 182 objects with Johnson V-band magnitudes reported as being < 16. In all, this campaign generated 22,000 frames from 2,000 unique pointings. We find that approximately one dozen of these objects show evidence of smooth variability consistent with sinusoidal periods. We report on the entire survey sample, highlighting those that show sinusoidal variations.

  5. Analyzing Single Molecule Localization Microscopy Data Using Cubic Splines.

    Science.gov (United States)

    Babcock, Hazen P; Zhuang, Xiaowei

    2017-04-03

    The resolution of super-resolution microscopy based on single molecule localization is in part determined by the accuracy of the localization algorithm. In most published approaches to date, this localization is done by fitting an analytical function that approximates the point spread function (PSF) of the microscope. However, particularly for localization in 3D, analytical functions such as a Gaussian, which are computationally inexpensive, may not accurately capture the PSF shape, leading to reduced fitting accuracy. On the other hand, analytical functions that can accurately capture the PSF shape, such as those based on pupil functions, can be computationally expensive. Here we investigate the use of cubic splines as an alternative fitting approach. We demonstrate that cubic splines can capture the shape of any PSF with high accuracy and that they can be used for fitting the PSF with only a 2-3x increase in computation time as compared to Gaussian fitting. We provide an open-source software package that measures the PSF of any microscope and uses the measured PSF to perform 3D single molecule localization microscopy analysis with reasonable accuracy and speed.
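The core idea, representing a measured PSF by a cubic spline so it can be evaluated cheaply during fitting, can be sketched in one dimension. This is an illustration under stated assumptions, not the authors' open-source package; the Gaussian "measured" profile below is invented:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Stand-in "measured" 1D PSF: 25 samples of a unit Gaussian profile.
z = np.linspace(-3.0, 3.0, 25)      # sample positions (pixels)
psf_samples = np.exp(-0.5 * z**2)

# A cubic spline through the samples gives a cheap, smooth PSF model.
spline = CubicSpline(z, psf_samples)

# The spline reproduces the underlying profile at intermediate positions.
z_fine = np.linspace(-3.0, 3.0, 301)
err = np.max(np.abs(spline(z_fine) - np.exp(-0.5 * z_fine**2)))
assert err < 1e-2
```

The same construction extends to 2D/3D with tensor-product splines; the point is that spline evaluation costs about the same as an analytical function while following an arbitrary measured shape.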

  6. Bayesian Decision Support

    Science.gov (United States)

    Berliner, M.

    2017-12-01

    Bayesian statistical decision theory offers a natural framework for decision-policy making in the presence of uncertainty. Key advantages of the approach include efficient incorporation of information and observations. However, in complicated settings it is very difficult, perhaps essentially impossible, to formalize the mathematical inputs needed in the approach. Nevertheless, using the approach as a template is useful for decision support; that is, organizing and communicating our analyses. Bayesian hierarchical modeling is valuable in quantifying and managing uncertainty such cases. I review some aspects of the idea emphasizing statistical model development and use in the context of sea-level rise.

  7. Bayesian Exploratory Factor Analysis

    Science.gov (United States)

    Conti, Gabriella; Frühwirth-Schnatter, Sylvia; Heckman, James J.; Piatek, Rémi

    2014-01-01

    This paper develops and applies a Bayesian approach to Exploratory Factor Analysis that improves on ad hoc classical approaches. Our framework relies on dedicated factor models and simultaneously determines the number of factors, the allocation of each measurement to a unique factor, and the corresponding factor loadings. Classical identification criteria are applied and integrated into our Bayesian procedure to generate models that are stable and clearly interpretable. A Monte Carlo study confirms the validity of the approach. The method is used to produce interpretable low dimensional aggregates from a high dimensional set of psychological measurements. PMID:25431517

  8. Momentum analysis by using a quintic spline model for the track

    CERN Document Server

    Wind, H

    1974-01-01

    A method is described to determine the momentum of a particle when the (inhomogeneous) analysing magnetic field and the position of at least three points on the track are known. The model of the field is essentially a cubic spline and that of the track a quintic spline. (8 refs).
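A quintic spline track model can be sketched with SciPy. This is a hedged illustration of the idea only: the circular-arc data and the curvature read-out below are assumptions for demonstration, not the paper's field model:

```python
import numpy as np
from scipy.interpolate import make_interp_spline

# Hypothetical track points sampled from a circular arc of radius R,
# y = R - sqrt(R^2 - x^2); in a uniform field, curvature ~ 1/momentum.
R = 10.0
x = np.linspace(-1.0, 1.0, 9)
y = R - np.sqrt(R**2 - x**2)

track = make_interp_spline(x, y, k=5)  # quintic spline model of the track

# Curvature at x = 0 from first and second derivatives of the spline.
kappa = track(0.0, nu=2) / (1.0 + track(0.0, nu=1) ** 2) ** 1.5
assert abs(kappa - 1.0 / R) < 1e-4
```

A quintic track model keeps the curvature (second derivative) smooth, which is why the paper pairs a quintic track spline with a cubic field spline.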

  9. A Multidimensional Spline Based Global Nonlinear Aerodynamic Model for the Cessna Citation II

    NARCIS (Netherlands)

    De Visser, C.C.; Mulder, J.A.

    2010-01-01

    A new method is proposed for the identification of global nonlinear models of aircraft non-dimensional force and moment coefficients. The method is based on a recent type of multivariate spline, the multivariate simplex spline, which can accurately approximate very large, scattered nonlinear

  10. B-spline solution of a singularly perturbed boundary value problem arising in biology

    International Nuclear Information System (INIS)

    Lin Bin; Li Kaitai; Cheng Zhengxing

    2009-01-01

    We use B-spline functions to develop a numerical method for solving a singularly perturbed boundary value problem associated with biology science. We use B-spline collocation method, which leads to a tridiagonal linear system. The accuracy of the proposed method is demonstrated by test problems. The numerical result is found in good agreement with exact solution.

  11. Item Response Theory with Estimation of the Latent Population Distribution Using Spline-Based Densities

    Science.gov (United States)

    Woods, Carol M.; Thissen, David

    2006-01-01

    The purpose of this paper is to introduce a new method for fitting item response theory models with the latent population distribution estimated from the data using splines. A spline-based density estimation system provides a flexible alternative to existing procedures that use a normal distribution, or a different functional form, for the…

  12. B-LUT: Fast and low memory B-spline image interpolation.

    Science.gov (United States)

    Sarrut, David; Vandemeulebroucke, Jef

    2010-08-01

    We propose a fast alternative to B-splines in image processing based on an approximate calculation using precomputed B-spline weights. During B-spline indirect transformation, these weights are efficiently retrieved in a nearest-neighbor fashion from a look-up table, greatly reducing overall computation time. Depending on the application, calculating a B-spline using a look-up table, called B-LUT, will result in an exact or approximate B-spline calculation. In the case of the latter, the obtained accuracy can be controlled by the user. The method is applicable to a wide range of B-spline applications and has very low memory requirements compared to other proposed accelerations. The performance of the proposed B-LUTs was compared to conventional B-splines as implemented in the popular ITK toolkit for the general case of image intensity interpolation. Experiments illustrated that highly accurate B-spline approximation can be obtained while computation time is reduced by a factor of 5-6. The B-LUT source code, compatible with the ITK toolkit, has been made freely available to the community. © 2009 Elsevier Ireland Ltd. All rights reserved.
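The central B-LUT idea, precomputing B-spline kernel weights at finely spaced fractional offsets and fetching the nearest entry instead of evaluating the kernel, might be sketched as follows. This is a simplified 1D illustration, not the released ITK-compatible code; the LUT resolution `N` is an assumed parameter:

```python
import numpy as np

def cubic_bspline(t):
    """Cubic B-spline kernel, support |t| < 2."""
    t = np.abs(np.asarray(t, dtype=float))
    out = np.zeros_like(t)
    m1 = t < 1
    m2 = (t >= 1) & (t < 2)
    out[m1] = (4 - 6 * t[m1] ** 2 + 3 * t[m1] ** 3) / 6
    out[m2] = (2 - t[m2]) ** 3 / 6
    return out

N = 1000  # LUT resolution; this choice controls the approximation accuracy
frac = np.arange(N) / N
# For a fractional position f in [0, 1), the 4 taps sit at offsets
# f+1, f, 1-f, 2-f from the neighbouring samples.
LUT = np.stack([cubic_bspline(frac + 1), cubic_bspline(frac),
                cubic_bspline(1 - frac), cubic_bspline(2 - frac)], axis=1)

def blut_weights(f):
    """Nearest-neighbour lookup of the 4 cubic B-spline weights for fraction f."""
    return LUT[int(round(f * N)) % N]

# Cubic B-spline weights form a partition of unity at any fraction.
assert abs(blut_weights(0.37).sum() - 1.0) < 1e-12
```

Interpolation then becomes four multiply-adds with table entries per output sample; the accuracy/memory trade-off is set entirely by `N`.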

  13. Performance of Smoothing Methods for Reconstructing NDVI Time-Series and Estimating Vegetation Phenology from MODIS Data

    Directory of Open Access Journals (Sweden)

    Zhanzhang Cai

    2017-12-01

    Full Text Available Many time-series smoothing methods can be used for reducing noise and extracting plant phenological parameters from remotely-sensed data, but there is still no conclusive evidence in favor of one method over others. Here we use moderate-resolution imaging spectroradiometer (MODIS derived normalized difference vegetation index (NDVI to investigate five smoothing methods: Savitzky-Golay fitting (SG, locally weighted regression scatterplot smoothing (LO, spline smoothing (SP, asymmetric Gaussian function fitting (AG, and double logistic function fitting (DL. We use ground tower measured NDVI (10 sites and gross primary productivity (GPP, 4 sites to evaluate the smoothed satellite-derived NDVI time-series, and elevation data to evaluate phenology parameters derived from smoothed NDVI. The results indicate that all smoothing methods can reduce noise and improve signal quality, but that no single method always performs better than others. Overall, the local filtering methods (SG and LO can generate very accurate results if smoothing parameters are optimally calibrated. If local calibration cannot be performed, cross validation is a way to automatically determine the smoothing parameter. However, this method may in some cases generate poor fits, and when calibration is not possible the function fitting methods (AG and DL provide the most robust description of the seasonal dynamics.
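One of the compared methods, Savitzky-Golay fitting (SG), can be illustrated on a synthetic NDVI series. The seasonal curve, noise level, and smoothing parameters below are assumptions for demonstration; as the paper notes, the window length and polynomial order are exactly the parameters that need calibration:

```python
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(0)
doy = np.arange(0, 365, 8)  # MODIS-like 8-day composite dates
ndvi_true = 0.35 + 0.3 * np.exp(-((doy - 200) / 60.0) ** 2)  # assumed seasonal curve
ndvi_noisy = ndvi_true + rng.normal(0.0, 0.05, doy.size)

# Savitzky-Golay: local least-squares polynomial fit in a sliding window.
ndvi_smooth = savgol_filter(ndvi_noisy, window_length=11, polyorder=2)

# Smoothing should bring the series closer to the underlying signal.
assert np.mean((ndvi_smooth - ndvi_true) ** 2) < np.mean((ndvi_noisy - ndvi_true) ** 2)
```

Cross validation over `window_length` and `polyorder`, as suggested in the abstract, is one way to pick these parameters automatically when ground data for local calibration are unavailable.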

  14. Comparison of fractional splines with polynomial splines; An Application on under-five year’s child mortality data in Pakistan (1960-2012

    Directory of Open Access Journals (Sweden)

    Saira Esar Esar

    2017-06-01

    Full Text Available Cubic splines are commonly used for capturing changes in economic analysis. This is because traditional regression, including polynomial regression, fails to capture the underlying changes in the corresponding response variables. Moreover, these variables do not change monotonically, i.e., there are discontinuities in their trends over time. The objective of this research is to explain the movement of under-five child mortality in Pakistan over the past few decades through a combination of statistical techniques. While cubic splines explain the movement of under-five child mortality to a large extent, we cannot deny the possibility that splines with fractional powers might better explain the underlying movement. Hence, we estimated the value of the fractional power by the nonlinear regression method and used it to develop the fractional splines. Although the fractional spline model may have the potential to improve upon the cubic spline model, it does not demonstrate a real improvement in this case, though it might with a different data set.

  15. Univariate Cubic L1 Interpolating Splines: Analytical Results for Linearity, Convexity and Oscillation on 5-PointWindows

    Directory of Open Access Journals (Sweden)

    Shu-Cherng Fang

    2010-07-01

    Full Text Available We analytically investigate univariate C1 continuous cubic L1 interpolating splines calculated by minimizing an L1 spline functional based on the second derivative on 5-point windows. Specifically, we link geometric properties of the data points in the windows with linearity, convexity and oscillation properties of the resulting L1 spline. These analytical results provide the basis for a computationally efficient algorithm for calculation of L1 splines on 5-point windows.

  16. Bayesian methods for hackers probabilistic programming and Bayesian inference

    CERN Document Server

    Davidson-Pilon, Cameron

    2016-01-01

    Bayesian methods of inference are deeply natural and extremely powerful. However, most discussions of Bayesian inference rely on intensely complex mathematical analyses and artificial examples, making it inaccessible to anyone without a strong mathematical background. Now, though, Cameron Davidson-Pilon introduces Bayesian inference from a computational perspective, bridging theory to practice–freeing you to get results using computing power. Bayesian Methods for Hackers illuminates Bayesian inference through probabilistic programming with the powerful PyMC language and the closely related Python tools NumPy, SciPy, and Matplotlib. Using this approach, you can reach effective solutions in small increments, without extensive mathematical intervention. Davidson-Pilon begins by introducing the concepts underlying Bayesian inference, comparing it with other techniques and guiding you through building and training your first Bayesian model. Next, he introduces PyMC through a series of detailed examples a...

  17. Bayesian logistic regression analysis

    NARCIS (Netherlands)

    Van Erp, H.R.N.; Van Gelder, P.H.A.J.M.

    2012-01-01

    In this paper we present a Bayesian logistic regression analysis. It is found that if one wishes to derive the posterior distribution of the probability of some event, then, together with the traditional Bayes Theorem and the integrating out of nuissance parameters, the Jacobian transformation is an

  18. Bayesian statistical inference

    Directory of Open Access Journals (Sweden)

    Bruno De Finetti

    2017-04-01

    Full Text Available This work was translated into English and published in the volume: Bruno De Finetti, Induction and Probability, Biblioteca di Statistica, eds. P. Monari, D. Cocchi, Clueb, Bologna, 1993.Bayesian statistical Inference is one of the last fundamental philosophical papers in which we can find the essential De Finetti's approach to the statistical inference.

  19. Smooth functions statistics

    International Nuclear Information System (INIS)

    Arnold, V.I.

    2006-03-01

    To describe the topological structure of a real smooth function one associates to it the graph, formed by the topological variety, whose points are the connected components of the level hypersurface of the function. For a Morse function, such a graph is a tree. Generically, it has T triple vertices, T + 2 endpoints, 2T + 2 vertices and 2T + 1 arrows. The main goal of the present paper is to study the statistics of the graphs, corresponding to T triple points: what is the growth rate of the number φ(T) of different graphs? Which part of these graphs is representable by the polynomial functions of corresponding degree? A generic polynomial of degree n has at most (n - 1)^2 critical points on R^2, corresponding to 2T + 2 = (n - 1)^2 + 1, that is to T = 2k(k - 1) saddle-points for degree n = 2k

  20. Bayesian optimization for materials science

    CERN Document Server

    Packwood, Daniel

    2017-01-01

    This book provides a short and concise introduction to Bayesian optimization specifically for experimental and computational materials scientists. After explaining the basic idea behind Bayesian optimization and some applications to materials science in Chapter 1, the mathematical theory of Bayesian optimization is outlined in Chapter 2. Finally, Chapter 3 discusses an application of Bayesian optimization to a complicated structure optimization problem in computational surface science. Bayesian optimization is a promising global optimization technique that originates in the field of machine learning and is starting to gain attention in materials science. For the purpose of materials design, Bayesian optimization can be used to predict new materials with novel properties without extensive screening of candidate materials. For the purpose of computational materials science, Bayesian optimization can be incorporated into first-principles calculations to perform efficient, global structure optimizations. While re...

  1. Adaptive Basis Selection for Exponential Family Smoothing Splines with Application in Joint Modeling of Multiple Sequencing Samples

    OpenAIRE

    Ma, Ping; Zhang, Nan; Huang, Jianhua Z.; Zhong, Wenxuan

    2017-01-01

    Second-generation sequencing technologies have replaced array-based technologies and become the default method for genomics and epigenomics analysis. Second-generation sequencing technologies sequence tens of millions of DNA/cDNA fragments in parallel. After the resulting sequences (short reads) are mapped to the genome, one gets a sequence of short read counts along the genome. Effective extraction of signals in these short read counts is the key to the success of sequencing technologies. No...

  2. Random regression models using Legendre polynomials or linear splines for test-day milk yield of dairy Gyr (Bos indicus) cattle.

    Science.gov (United States)

    Pereira, R J; Bignardi, A B; El Faro, L; Verneque, R S; Vercesi Filho, A E; Albuquerque, L G

    2013-01-01

    Studies investigating the use of random regression models for genetic evaluation of milk production in Zebu cattle are scarce. In this study, 59,744 test-day milk yield records from 7,810 first lactations of purebred dairy Gyr (Bos indicus) and crossbred (dairy Gyr × Holstein) cows were used to compare random regression models in which additive genetic and permanent environmental effects were modeled using orthogonal Legendre polynomials or linear spline functions. Residual variances were modeled considering 1, 5, or 10 classes of days in milk. Five classes fitted the changes in residual variances over the lactation adequately and were used for model comparison. The model that fitted linear spline functions with 6 knots provided the lowest sum of residual variances across lactation. On the other hand, according to the deviance information criterion (DIC) and bayesian information criterion (BIC), a model using third-order and fourth-order Legendre polynomials for additive genetic and permanent environmental effects, respectively, provided the best fit. However, the high rank correlation (0.998) between this model and that applying third-order Legendre polynomials for additive genetic and permanent environmental effects, indicates that, in practice, the same bulls would be selected by both models. The last model, which is less parameterized, is a parsimonious option for fitting dairy Gyr breed test-day milk yield records. Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
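The Legendre covariables used in such random regression models can be sketched with NumPy. This is an illustrative basis-matrix construction; the days-in-milk range and the rescaling to [-1, 1] are standard but assumed here:

```python
import numpy as np
from numpy.polynomial import legendre

# Days in milk (DIM) rescaled to the Legendre domain [-1, 1].
dim = np.arange(5, 306)  # assumed test-day range, DIM 5..305
x = 2.0 * (dim - dim.min()) / (dim.max() - dim.min()) - 1.0

order = 3  # third-order Legendre polynomials, as in the selected model
# Column j holds P_j(x): legval with a unit coefficient vector picks out P_j.
Phi = np.stack([legendre.legval(x, np.eye(order + 1)[j])
                for j in range(order + 1)], axis=1)

assert Phi.shape == (dim.size, order + 1)
assert np.allclose(Phi[:, 0], 1.0)  # P_0 is the constant 1
```

In the genetic evaluation, each animal's additive genetic and permanent environmental effects are regression coefficients on these columns, so the fitted lactation curve is a linear combination of the basis functions.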

  3. Improving the River Discharge Calculation Method Using Cubic Spline Interpolation (Perbaikan Metode Penghitungan Debit Sungai Menggunakan Cubic Spline Interpolation)

    Directory of Open Access Journals (Sweden)

    Budi I. Setiawan

    2007-09-01

    Full Text Available This paper presents an improved method for computing river discharge using a cubic spline interpolation function. The function is used to describe the river cross-section profile continuously, constructed from measurements of distance and river depth. With this new method, the cross-sectional area and wetted perimeter of the river are computed more easily, quickly, and accurately. Likewise, the inverse function is made available through the Newton-Raphson method, which simplifies computing the area and perimeter when the river water level is known. The new method can compute river discharge directly using the Manning formula and produce a rating curve. This paper presents one example of discharge measurement on the Rudeng River, Aceh. The river is about 120 m wide and 7 m deep and, at the time of measurement, had a discharge of 41.3 m3/s; its rating curve follows the formula Q = 0.1649 × H^2.884, where Q is the discharge (m3/s) and H is the water level above the river bed (m).
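The reported rating curve, Q = 0.1649 × H^2.884, and a Newton-Raphson inversion of it can be sketched directly (the paper inverts the spline profile the same way; the starting guess and tolerance below are assumptions):

```python
def discharge(H):
    """Rating curve reported for the Rudeng River: Q in m^3/s, H in m."""
    return 0.1649 * H ** 2.884

def stage_from_discharge(Q, H0=1.0, tol=1e-10):
    """Invert the rating curve for H by Newton-Raphson."""
    H = H0
    for _ in range(100):
        f = discharge(H) - Q
        if abs(f) < tol:
            break
        dfdH = 0.1649 * 2.884 * H ** 1.884  # derivative of the rating curve
        H -= f / dfdH
    return H

# The measured discharge of 41.3 m^3/s corresponds to a stage near 6.8 m,
# consistent with the ~7 m depth reported at the time of measurement.
H = stage_from_discharge(41.3)
assert abs(discharge(H) - 41.3) < 1e-8
```

Because the rating curve is smooth, monotone, and convex in H, Newton-Raphson converges reliably even from a poor initial guess.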

  4. TPSLVM: a dimensionality reduction algorithm based on thin plate splines.

    Science.gov (United States)

    Jiang, Xinwei; Gao, Junbin; Wang, Tianjiang; Shi, Daming

    2014-10-01

    Dimensionality reduction (DR) has been considered as one of the most significant tools for data analysis. One type of DR algorithms is based on latent variable models (LVM). LVM-based models can handle the preimage problem easily. In this paper we propose a new LVM-based DR model, named thin plate spline latent variable model (TPSLVM). Compared to the well-known Gaussian process latent variable model (GPLVM), our proposed TPSLVM is more powerful especially when the dimensionality of the latent space is low. Also, TPSLVM is robust to shift and rotation. This paper investigates two extensions of TPSLVM, i.e., the back-constrained TPSLVM (BC-TPSLVM) and TPSLVM with dynamics (TPSLVM-DM) as well as their combination BC-TPSLVM-DM. Experimental results show that TPSLVM and its extensions provide better data visualization and more efficient dimensionality reduction compared to PCA, GPLVM, ISOMAP, etc.
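For background, an ordinary thin plate spline interpolant, the building block that TPSLVM generalizes, can be sketched with SciPy's RBF interpolator (the scattered 2D data below are invented):

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(1)
pts = rng.uniform(0, 1, size=(40, 2))             # scattered 2D sites
vals = np.sin(2 * np.pi * pts[:, 0]) * pts[:, 1]  # values to interpolate

# Thin plate spline: RBF kernel r^2 * log(r) plus a linear polynomial term.
tps = RBFInterpolator(pts, vals, kernel='thin_plate_spline')

# With the default smoothing of 0, the TPS interpolates the data exactly.
assert np.allclose(tps(pts), vals, atol=1e-6)
```

The thin plate spline minimizes a bending-energy functional among all interpolants, which is the smoothness property a TPS-based latent variable model inherits.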

  5. Intensity-based hierarchical elastic registration using approximating splines.

    Science.gov (United States)

    Serifovic-Trbalic, Amira; Demirovic, Damir; Cattin, Philippe C

    2014-01-01

    We introduce a new hierarchical approach for elastic medical image registration using approximating splines. In order to obtain the dense deformation field, we employ Gaussian elastic body splines (GEBS) that incorporate anisotropic landmark errors and rotation information. Since the GEBS approach is based on a physical model in the form of analytical solutions of the Navier equation, it can cope very well with both the local and the global deformations present in the images by varying the standard deviation of the Gaussian forces. The proposed GEBS approximating model is integrated into the elastic hierarchical image registration framework, which decomposes a nonrigid registration problem into numerous local rigid transformations. The approximating GEBS registration scheme incorporates anisotropic landmark errors as well as rotation information. The anisotropic landmark localization uncertainties can be estimated directly from the image data, and in this case they represent the minimal stochastic localization error, i.e., the Cramér-Rao bound. The rotation information of each landmark obtained from the hierarchical procedure is transposed into an additional angular landmark, doubling the number of landmarks in the GEBS model. The modified hierarchical registration using the approximating GEBS model is applied to register 161 image pairs from a digital mammogram database. The obtained results are very encouraging: the proposed approach significantly improved all registrations in terms of mean-square error relative to approximating TPS with rotation information. On artificially deformed breast images, the newly proposed method performed better than the state-of-the-art registration algorithm introduced by Rueckert et al. (IEEE Trans Med Imaging 18:712-721, 1999). The average error per breast tissue pixel was less than 2.23 pixels, compared to 2.46 pixels for Rueckert's method. The proposed hierarchical elastic image registration approach incorporates the GEBS

  6. Inference of Gene Regulatory Networks Using Bayesian Nonparametric Regression and Topology Information

    Science.gov (United States)

    2017-01-01

    Gene regulatory networks (GRNs) play an important role in cellular systems and are important for understanding biological processes. Many algorithms have been developed to infer the GRNs. However, most algorithms only pay attention to the gene expression data but do not consider the topology information in their inference process, while incorporating this information can partially compensate for the lack of reliable expression data. Here we develop a Bayesian group lasso with spike and slab priors to perform gene selection and estimation for nonparametric models. B-spline basis functions are used to capture the nonlinear relationships flexibly and penalties are used to avoid overfitting. Further, we incorporate the topology information into the Bayesian method as a prior. We present the application of our method on DREAM3 and DREAM4 datasets and two real biological datasets. The results show that our method performs better than existing methods and the topology information prior can improve the result. PMID:28133490

  7. Inference of Gene Regulatory Networks Using Bayesian Nonparametric Regression and Topology Information

    Directory of Open Access Journals (Sweden)

    Yue Fan

    2017-01-01

    Full Text Available Gene regulatory networks (GRNs play an important role in cellular systems and are important for understanding biological processes. Many algorithms have been developed to infer the GRNs. However, most algorithms only pay attention to the gene expression data but do not consider the topology information in their inference process, while incorporating this information can partially compensate for the lack of reliable expression data. Here we develop a Bayesian group lasso with spike and slab priors to perform gene selection and estimation for nonparametric models. B-spline basis functions are used to capture the nonlinear relationships flexibly and penalties are used to avoid overfitting. Further, we incorporate the topology information into the Bayesian method as a prior. We present the application of our method on DREAM3 and DREAM4 datasets and two real biological datasets. The results show that our method performs better than existing methods and the topology information prior can improve the result.
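The B-spline basis functions used for the nonparametric terms in such models can be sketched as a design matrix. The knot vector and evaluation grid below are assumptions for illustration; group-lasso penalties would then act on each gene's block of basis coefficients:

```python
import numpy as np
from scipy.interpolate import BSpline

k = 3                                # cubic B-splines
interior = np.linspace(0.2, 0.8, 4)  # hypothetical interior knots
# Clamped knot vector: repeat the boundary knots k+1 times.
t = np.r_[[0.0] * (k + 1), interior, [1.0] * (k + 1)]
n_basis = len(t) - k - 1

# Identity coefficient matrix -> evaluating yields every basis function.
basis = BSpline(t, np.eye(n_basis), k)
x = np.linspace(0.0, 1.0, 50)
Phi = basis(x)                       # design matrix, shape (50, n_basis)

assert Phi.shape == (50, n_basis)
assert np.allclose(Phi.sum(axis=1), 1.0)  # B-splines form a partition of unity
```

Each smooth regulatory effect f_j(x) is then modeled as Phi @ beta_j, so selecting or zeroing the whole coefficient block beta_j removes gene j from the model.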

  8. Meshing Force of Misaligned Spline Coupling and the Influence on Rotor System

    Directory of Open Access Journals (Sweden)

    Guang Zhao

    2008-01-01

    Full Text Available The meshing force of a misaligned spline coupling is derived, a dynamic equation of the rotor-spline coupling system is established based on finite element analysis, and the influence of the meshing force on the rotor-spline coupling system is simulated by a numerical integration method. According to the theoretical analysis, the meshing force of a spline coupling is related to the coupling parameters, misalignment, transmitted torque, static misalignment, dynamic vibration displacement, and so on. The meshing force increases nonlinearly with increasing spline thickness and static misalignment or decreasing alignment meshing distance (AMD). The stiffness of the coupling is related to the dynamic vibration displacement and static misalignment, and is not a constant. The dynamic behaviors of the rotor-spline coupling system reveal the following: 1X rotating speed is the main response frequency of the system when there is no misalignment, while 2X rotating speed appears when misalignment is present. Moreover, when misalignment increases, the vibration of the system becomes intricate; the shaft orbit departs from the origin, and the magnitudes of all frequencies increase. These results can provide important criteria for both the optimization design of spline couplings and the troubleshooting of rotor systems.

  9. Bayesian Independent Component Analysis

    DEFF Research Database (Denmark)

    Winther, Ole; Petersen, Kaare Brandt

    2007-01-01

    In this paper we present an empirical Bayesian framework for independent component analysis. The framework provides estimates of the sources, the mixing matrix and the noise parameters, and is flexible with respect to choice of source prior and the number of sources and sensors. Inside the engine … in a Matlab toolbox, is demonstrated for non-negative decompositions and compared with non-negative matrix factorization.

  10. Bayesian coronal seismology

    Science.gov (United States)

    Arregui, Iñigo

    2018-01-01

    In contrast to the situation in a laboratory, the study of the solar atmosphere has to be pursued without direct access to the physical conditions of interest. Information is therefore incomplete and uncertain, and inference methods need to be employed to diagnose the physical conditions and processes. One such method, solar atmospheric seismology, makes use of observed and theoretically predicted properties of waves to infer plasma and magnetic field properties. A recent development in solar atmospheric seismology consists in the use of inversion and model comparison methods based on Bayesian analysis. In this paper, the philosophy and methodology of Bayesian analysis are first explained. Then, we provide an account of what has been achieved so far from the application of these techniques to solar atmospheric seismology, and a prospect of possible future extensions.

  11. Bayesian community detection.

    Science.gov (United States)

    Mørup, Morten; Schmidt, Mikkel N

    2012-09-01

    Many networks of scientific interest naturally decompose into clusters or communities with comparatively fewer external than internal links; however, current Bayesian models of network communities do not enforce this intuitive notion of communities. We formulate a nonparametric Bayesian model for community detection consistent with an intuitive definition of communities and present a Markov chain Monte Carlo procedure for inferring the community structure. A Matlab toolbox with the proposed inference procedure is available for download. On synthetic and real networks, our model detects communities consistent with ground truth, and on real networks, it outperforms existing approaches in predicting missing links. This suggests that community structure is an important structural property of networks that should be explicitly modeled.

  12. Probability and Bayesian statistics

    CERN Document Server

    1987-01-01

    This book contains selected and refereed contributions to the "International Symposium on Probability and Bayesian Statistics", which was organized to celebrate the 80th birthday of Professor Bruno de Finetti at his birthplace Innsbruck in Austria. Since Professor de Finetti died in 1985, the symposium was dedicated to the memory of Bruno de Finetti and took place at Igls near Innsbruck from 23 to 26 September 1986. Some of the papers are included especially because of their relationship to Bruno de Finetti's scientific work. The evolution of stochastics shows the growing importance of probability as a coherent assessment of numerical values as degrees of belief in certain events. This is the basis for Bayesian inference in the sense of modern statistics. The contributions in this volume cover a broad spectrum ranging from foundations of probability across psychological aspects of formulating subjective probability statements, abstract measure-theoretical considerations, contributions to theoretical statistics an...

  13. Bayesian Hypothesis Testing

    Energy Technology Data Exchange (ETDEWEB)

    Andrews, Stephen A. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Sigeti, David E. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-11-15

    These are a set of slides about Bayesian hypothesis testing, where many hypotheses are tested. The conclusions are the following: The value of the Bayes factor obtained when using the median of the posterior marginal is almost the minimum value of the Bayes factor. The value of τ² which minimizes the Bayes factor is a reasonable choice for this parameter. This allows a likelihood ratio to be computed which is the least favorable to H0.
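
    The slides' τ²-minimization procedure is not reproducible from this abstract. As a minimal illustration of the Bayes factor idea only (the data, the two point hypotheses, and all numbers below are hypothetical, not taken from the slides), a Bayes factor is simply the ratio of the likelihoods of the observed data under two competing hypotheses:

```python
from math import comb

def binomial_likelihood(k, n, p):
    """Likelihood of observing k successes in n Bernoulli trials with rate p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Hypothetical data: 7 successes in 10 trials.
k, n = 7, 10
# H0: p = 0.5 (point null) versus H1: p = 0.7 (point alternative).
bf_10 = binomial_likelihood(k, n, 0.7) / binomial_likelihood(k, n, 0.5)
print(round(bf_10, 3))  # → 2.277, i.e. the data mildly favor H1
```

    A value above 1 favors H1; conventional scales treat values below about 3 as weak evidence.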

  14. Bayesian networks in reliability

    Energy Technology Data Exchange (ETDEWEB)

    Langseth, Helge [Department of Mathematical Sciences, Norwegian University of Science and Technology, N-7491 Trondheim (Norway)]. E-mail: helgel@math.ntnu.no; Portinale, Luigi [Department of Computer Science, University of Eastern Piedmont 'Amedeo Avogadro', 15100 Alessandria (Italy)]. E-mail: portinal@di.unipmn.it

    2007-01-15

    Over the last decade, Bayesian networks (BNs) have become a popular tool for modelling many kinds of statistical problems. We have also seen a growing interest in using BNs in the reliability analysis community. In this paper we discuss the properties of the modelling framework that make BNs particularly well suited for reliability applications, and point to ongoing research that is relevant for practitioners in reliability.

  15. Subjective Bayesian Beliefs

    DEFF Research Database (Denmark)

    Antoniou, Constantinos; Harrison, Glenn W.; Lau, Morten I.

    2015-01-01

    A large literature suggests that many individuals do not apply Bayes’ Rule when making decisions that depend on them correctly pooling prior information and sample data. We replicate and extend a classic experimental study of Bayesian updating from psychology, employing the methods of experimental economics, with careful controls for the confounding effects of risk aversion. Our results show that risk aversion significantly alters inferences on deviations from Bayes’ Rule.

  16. Approximate Bayesian recursive estimation

    Czech Academy of Sciences Publication Activity Database

    Kárný, Miroslav

    2014-01-01

    Roč. 285, č. 1 (2014), s. 100-111 ISSN 0020-0255 R&D Projects: GA ČR GA13-13502S Institutional support: RVO:67985556 Keywords : Approximate parameter estimation * Bayesian recursive estimation * Kullback–Leibler divergence * Forgetting Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 4.038, year: 2014 http://library.utia.cas.cz/separaty/2014/AS/karny-0425539.pdf

  17. Cubic spline function interpolation in atmosphere models for the software development laboratory: Formulation and data

    Science.gov (United States)

    Kirkpatrick, J. C.

    1976-01-01

    A tabulation of selected altitude-correlated values of pressure, density, speed of sound, and coefficient of viscosity for each of six models of the atmosphere is presented in block data format. Interpolation for the desired atmospheric parameters is performed by using cubic spline functions. The recursive relations necessary to compute the cubic spline function coefficients are derived and implemented in subroutine form. Three companion subprograms, which form the preprocessor and processor, are also presented. These subprograms, together with the data element, compose the spline fit atmosphere package. Detailed FLOWGM flow charts and FORTRAN listings of the atmosphere package are presented in the appendix.
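
    The report's FORTRAN subroutines and FLOWGM flow charts are not reproduced here. As an illustrative sketch only (pure Python, not the spline fit atmosphere package itself), the recursive relations for natural cubic spline coefficients reduce to a tridiagonal system for the knot second derivatives, solvable with the Thomas algorithm, after which any altitude can be interpolated piecewise:

```python
def natural_cubic_spline(xs, ys):
    """Build an evaluator for the natural cubic spline through (xs, ys).

    The knot second derivatives M[i] satisfy the standard tridiagonal
    recursive relations; natural end conditions set M[0] = M[n] = 0.
    """
    n = len(xs) - 1
    h = [xs[i + 1] - xs[i] for i in range(n)]
    # Tridiagonal coefficients and right-hand side for M[1]..M[n-1].
    sub = [h[i - 1] for i in range(1, n)]
    diag = [2.0 * (h[i - 1] + h[i]) for i in range(1, n)]
    sup = [h[i] for i in range(1, n)]
    rhs = [6.0 * ((ys[i + 1] - ys[i]) / h[i] - (ys[i] - ys[i - 1]) / h[i - 1])
           for i in range(1, n)]
    # Thomas algorithm: forward elimination, then back substitution.
    for i in range(1, n - 1):
        w = sub[i] / diag[i - 1]
        diag[i] -= w * sup[i - 1]
        rhs[i] -= w * rhs[i - 1]
    M = [0.0] * (n + 1)
    for i in range(n - 2, -1, -1):
        M[i + 1] = (rhs[i] - sup[i] * M[i + 2]) / diag[i]

    def evaluate(x):
        # Locate the containing interval, then evaluate the local cubic.
        i = 0
        while i < n - 1 and x > xs[i + 1]:
            i += 1
        t = x - xs[i]
        c1 = (ys[i + 1] - ys[i]) / h[i] - h[i] * (2.0 * M[i] + M[i + 1]) / 6.0
        return (ys[i] + c1 * t + M[i] / 2.0 * t ** 2
                + (M[i + 1] - M[i]) / (6.0 * h[i]) * t ** 3)

    return evaluate

# Linear data is reproduced exactly; tabulated atmospheric values would be
# interpolated the same way at arbitrary altitudes.
f = natural_cubic_spline([0.0, 1.0, 2.0, 3.0], [0.0, 2.0, 4.0, 6.0])
print(f(1.5))  # → 3.0
```
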

  18. Higher-order numerical solutions using cubic splines. [for partial differential equations

    Science.gov (United States)

    Rubin, S. G.; Khosla, P. K.

    1975-01-01

    A cubic spline collocation procedure has recently been developed for the numerical solution of partial differential equations. In the present paper, this spline procedure is reformulated so that the accuracy of the second-derivative approximation is improved and parallels that previously obtained for lower derivative terms. The final result is a numerical procedure having overall third-order accuracy for a non-uniform mesh and overall fourth-order accuracy for a uniform mesh. Solutions using both spline procedures, as well as three-point finite difference methods, are presented for several model problems.

  19. Bayesian theory and applications

    CERN Document Server

    Dellaportas, Petros; Polson, Nicholas G; Stephens, David A

    2013-01-01

    The development of hierarchical models and Markov chain Monte Carlo (MCMC) techniques forms one of the most profound advances in Bayesian analysis since the 1970s and provides the basis for advances in virtually all areas of applied and theoretical Bayesian statistics. This volume guides the reader along a statistical journey that begins with the basic structure of Bayesian theory, and then provides details on most of the past and present advances in this field. The book has a unique format. There is an explanatory chapter devoted to each conceptual advance followed by journal-style chapters that provide applications or further advances on the concept. Thus, the volume is both a textbook and a compendium of papers covering a vast range of topics. It is appropriate for a well-informed novice interested in understanding the basic approach, methods and recent applications. Because of its advanced chapters and recent work, it is also appropriate for a more mature reader interested in recent applications and devel...

  20. Smoothness in Binomial Edge Ideals

    Directory of Open Access Journals (Sweden)

    Hamid Damadi

    2016-06-01

    In this paper we study some geometric properties, in particular singularity and smoothness, of the algebraic set associated to the binomial edge ideal of a graph. Some of these algebraic sets are irreducible and some of them are reducible. If every irreducible component of the algebraic set is smooth we call the graph an edge smooth graph, otherwise it is called an edge singular graph. We show that complete graphs are edge smooth and introduce two conditions such that the graph G is edge singular if and only if it satisfies these conditions. Then, it is shown that cycles and most trees are edge singular. In addition, it is proved that complete bipartite graphs are edge smooth.

  1. An Adaptive B-Spline Method for Low-order Image Reconstruction Problems - Final Report - 09/24/1997 - 09/24/2000

    Energy Technology Data Exchange (ETDEWEB)

    Li, Xin; Miller, Eric L.; Rappaport, Carey; Silevich, Michael

    2000-04-11

    A common problem in signal processing is to estimate the structure of an object from noisy measurements linearly related to the desired image. These problems are broadly known as inverse problems. A key feature which complicates the solution to such problems is their ill-posedness. That is, small perturbations in the data arising e.g. from noise can and do lead to severe, non-physical artifacts in the recovered image. The process of stabilizing these problems is known as regularization, of which Tikhonov regularization is one of the most common. While this approach leads to a simple linear least squares problem to solve for generating the reconstruction, it has the unfortunate side effect of producing smooth images, thereby obscuring important features such as edges. Therefore, over the past decade there has been much work in the development of edge-preserving regularizers. This technique leads to image estimates in which the important features are retained, but computationally they require the solution of a nonlinear least squares problem, a daunting task in many practical multi-dimensional applications. In this thesis we explore low-order models for reducing the complexity of the reconstruction process. Specifically, B-Splines are used to approximate the object. If a 'proper' collection of B-Splines is chosen, such that the object can be efficiently represented using a few basis functions, the dimensionality of the underlying problem will be significantly decreased. Consequently, an optimum distribution of splines needs to be determined. Here, an adaptive refining and pruning algorithm is developed to solve the problem. The refining part is based on curvature information, in which the intuition is that a relatively dense set of fine-scale basis elements should cluster near regions of high curvature, while a sparse collection of basis vectors is required to adequately represent the object over spatially smooth areas. The pruning part is a greedy...

  2. Bayesian analysis in plant pathology.

    Science.gov (United States)

    Mila, A L; Carriquiry, A L

    2004-09-01

    Bayesian methods are currently much discussed and applied in several disciplines from molecular biology to engineering. Bayesian inference is the process of fitting a probability model to a set of data and summarizing the results via probability distributions on the parameters of the model and unobserved quantities such as predictions for new observations. In this paper, after a short introduction to Bayesian inference, we present the basic features of Bayesian methodology using examples from sequencing genomic fragments and analyzing microarray gene-expression levels, reconstructing disease maps, and designing experiments.
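
    As a minimal illustration of the inference process the abstract describes (fitting a probability model and summarizing via posterior distributions), a conjugate beta-binomial update suffices; the prior and the survey counts below are hypothetical, not from the paper:

```python
# Conjugate beta-binomial update: prior Beta(a, b) on a disease-incidence
# rate, data k diseased plants out of n surveyed.
a, b = 2.0, 8.0          # hypothetical prior: incidence believed to be ~20%
k, n = 9, 30             # hypothetical survey: 9 diseased plants out of 30
post_a, post_b = a + k, b + (n - k)
posterior_mean = post_a / (post_a + post_b)
print(posterior_mean)    # → 0.275: the raw rate 0.3 shrunk toward the prior 0.2
```

    The posterior Beta(11, 29) summarizes everything known after the survey; predictions for new observations follow from it directly.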

  3. Numerical solution of system of boundary value problems using B-spline with free parameter

    Science.gov (United States)

    Gupta, Yogesh

    2017-01-01

    This paper deals with a B-spline method for the solution of a system of boundary value problems. Differential equations are useful in various fields of science and engineering, and some interesting real-life problems involve more than one unknown function, resulting in systems of simultaneous differential equations. Such systems have been applied to many problems in mathematics, physics, engineering, etc. In the present paper, B-spline and B-spline-with-free-parameter methods for the solution of a linear system of second-order boundary value problems are presented. The methods utilize the values of the cubic B-spline and its derivatives at nodal points together with the equations of the given system and boundary conditions, resulting in a linear matrix equation.

  4. Numerical treatment of Hunter Saxton equation using cubic trigonometric B-spline collocation method

    Science.gov (United States)

    Hashmi, M. S.; Awais, Muhammad; Waheed, Ammarah; Ali, Qutab

    2017-09-01

    In this article, the authors propose a computational model based on the cubic trigonometric B-spline collocation method to solve the Hunter Saxton equation. This nonlinear second-order partial differential equation arises in the modeling of nematic liquid crystals and describes some aspects of orientation waves. The problem is decomposed into a system of linear equations using the cubic trigonometric B-spline collocation method with quasilinearization. To show the efficiency of the proposed method, two numerical examples have been tested for different values of t. The results are described using error tables and graphs and compared with results existing in the literature. It is evident that the results are in good agreement with the analytical solution and better than those of Arbabi, Nazari, and Davishi, Optik 127, 5255-5258 (2016). For the current problem, it is also observed that the cubic trigonometric B-spline gives better results than the cubic B-spline.

  5. Cubic B-spline solution for two-point boundary value problem with AOR iterative method

    Science.gov (United States)

    Suardi, M. N.; Radzuan, N. Z. F. M.; Sulaiman, J.

    2017-09-01

    In this study, the cubic B-spline approximation equation is derived by using the cubic B-spline discretization scheme to solve two-point boundary value problems. In addition, a system of cubic B-spline approximation equations is generated from this spline approximation equation in order to obtain the numerical solutions. To do this, the Accelerated Over-Relaxation (AOR) iterative method is used to solve the generated linear system. For the purpose of comparison, the Gauss-Seidel (GS) iterative method is designated as the control method against which the SOR and AOR iterative methods are compared. Two example problems are considered to examine the efficiency of these iterative methods via three criteria: number of iterations, computational time and maximum absolute error. From the numerical results, it can be concluded that the AOR iterative method is slightly more efficient than the SOR iterative method.
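
    The paper's cubic B-spline discretization is not reproduced here, but the AOR sweep applied to the resulting linear system can be sketched. The `omega` and `r` values below are illustrative defaults, not the paper's tuned parameters; with r = omega the iteration reduces to SOR, and with r = omega = 1 to Gauss-Seidel:

```python
def aor_solve(A, b, omega=1.0, r=0.5, tol=1e-10, max_iter=500):
    """Accelerated Over-Relaxation (AOR) iteration for A x = b.

    Solves (D - rL) x_new = [(1-omega)D + (omega-r)L + omega*U] x + omega*b
    by forward substitution; convergence is assumed here via strict
    diagonal dominance of A.
    """
    n = len(b)
    x = [0.0] * n
    for _ in range(max_iter):
        x_new = x[:]
        for i in range(n):
            s_new = sum(A[i][j] * x_new[j] for j in range(i))    # updated values
            s_old = sum(A[i][j] * x[j] for j in range(i))        # previous values
            s_up = sum(A[i][j] * x[j] for j in range(i + 1, n))  # upper triangle
            x_new[i] = ((1 - omega) * x[i]
                        + (omega * b[i] - omega * s_up
                           - (omega - r) * s_old - r * s_new) / A[i][i])
        if max(abs(x_new[i] - x[i]) for i in range(n)) < tol:
            return x_new
        x = x_new
    return x

# Small diagonally dominant test system with exact solution (1, 1, 1).
A = [[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]]
b = [3.0, 2.0, 3.0]
x = aor_solve(A, b)
print([round(v, 6) for v in x])  # → [1.0, 1.0, 1.0]
```
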

  6. SPLINE-FUNCTIONS IN THE TASK OF THE FLOW AIRFOIL PROFILE

    Directory of Open Access Journals (Sweden)

    Mikhail Lopatjuk

    2013-12-01

    A method and algorithm for solving the airfoil flow problem are presented. The Neumann boundary problem is reduced to the solution of integral equations with the given boundary conditions using cubic spline functions.

  7. Vibration Analysis of Suspension Cable with Attached Masses by Non-linear Spline Function Method

    Directory of Open Access Journals (Sweden)

    Qin Jian

    2016-01-01

    The nonlinear strain and stress expressions of a suspension cable are established from the basic conditions of the suspension structure in Lagrange coordinates, and the equilibrium equation of the suspension structure is obtained. The dynamic equations of motion of the suspended cable with attached masses are derived according to the virtual work principle. Using spline functions as interpolation functions for displacement and spatial position, a spline-function form of the cable dynamics equation is obtained in which the stiffness matrix is expressed by spline functions, and a solution method for the stiffness matrix, a matrix assembly method based on spline integration, is put forward, which reduces computation time. The vibration frequency of the suspension cable is calculated for different attached masses, which provides a theoretical basis for choosing the safety coefficient of the load-bearing cable of a cableway.

  8. Acoustic Emission Signatures of Fatigue Damage in Idealized Bevel Gear Spline for Localized Sensing

    Directory of Open Access Journals (Sweden)

    Lu Zhang

    2017-06-01

    In many rotating machinery applications, such as helicopters, the splines of an externally-splined steel shaft that emerges from the gearbox engage with the reverse geometry of an internally splined driven shaft for the delivery of power. The splined section of the shaft is a critical and non-redundant element which is prone to cracking due to complex loading conditions. Thus, early detection of flaws is required to prevent catastrophic failures. The acoustic emission (AE) method is a direct way of detecting such active flaws, but its application to detect flaws in a splined shaft in a gearbox is difficult due to the interference of background noise and uncertainty about the effects of the wave propagation path on the received AE signature. Here, to model how AE may detect fault propagation in a hollow cylindrical splined shaft, the splined section is essentially unrolled into a metal plate of the same thickness as the cylinder wall. Spline ridges are cut into this plate, a through-notch is cut perpendicular to the spline to model fatigue crack initiation, and tensile cyclic loading is applied parallel to the spline to propagate the crack. In this paper, a new piezoelectric sensor array is introduced with the purpose of placing the sensors within the gearbox to minimize the wave propagation path. The fatigue crack growth of a notched and flattened gearbox spline component is monitored using the new piezoelectric sensor array and conventional sensors in a laboratory environment, with the purpose of developing source models and testing the new sensor performance. The AE data is continuously collected together with strain gauges strategically positioned on the structure. A significant amount of continuous emission due to the plastic deformation accompanying the crack growth is observed. The frequency spectra of continuous emissions and burst emissions are compared to understand the differences between plastic deformation and sudden crack jumps. The...

  9. Effects of slope smoothing in river channel modeling

    Science.gov (United States)

    Kim, Kyungmin; Liu, Frank; Hodges, Ben R.

    2017-04-01

    In extending dynamic river modeling with the 1D Saint-Venant equations from a single reach to a large watershed, there are critical questions as to how much bathymetric knowledge is necessary and how it should be represented parsimoniously. The ideal model will include the detail necessary to provide realism, but not include extraneous detail that should not exert a control on a 1D (cross-section averaged) solution. In a Saint-Venant model, the overall complexity of the river channel morphometry is typically abstracted into metrics for the channel slope, cross-sectional area, hydraulic radius, and roughness. In stream segments where cross-section surveys are closely spaced, it is not uncommon to have sharp changes in slope or even negative values (where positive slope is taken in the downstream direction). However, solving river flow with the Saint-Venant equations requires a degree of smoothness in the equation parameters, and the equation set with directly measured channel slopes may not be Lipschitz continuous. Non-smoothness typically results in extended computational time to converge solutions (or complete failure to converge) and/or numerical instabilities under transient conditions. We have investigated using cubic splines to smooth the bottom slope and ensure always-positive reference slopes within a 1D model. This method has been implemented in the Simulation Program for River Networks (SPRNT) and is compared to the standard HEC-RAS river solver. It is shown that the reformulation of the reference slope is both in keeping with the underlying derivation of the Saint-Venant equations and provides practical numerical stability without altering the realism of the simulation. This research was supported in part by the National Science Foundation under grant number CCF-1331610.
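
    SPRNT's cubic-spline slope reformulation is not detailed in the abstract. As a rough stand-in only (an assumed simplification, not the authors' method), the idea of smoothing surveyed slopes and enforcing an always-positive reference slope can be sketched with a centered moving average plus a small positive floor:

```python
def smooth_positive_slopes(slopes, window=3, floor=1e-5):
    """Smooth surveyed channel slopes and enforce a positive reference slope.

    A simplified stand-in for a spline-based reformulation: a centered
    moving average removes sharp jumps between closely spaced surveys,
    and a small positive floor replaces any remaining adverse (negative)
    slopes so the 1D solver sees smooth, positive parameters.
    """
    half = window // 2
    out = []
    for i in range(len(slopes)):
        lo, hi = max(0, i - half), min(len(slopes), i + half + 1)
        avg = sum(slopes[lo:hi]) / (hi - lo)
        out.append(max(avg, floor))
    return out

raw = [0.0010, 0.0008, -0.0002, 0.0011, 0.0009]  # one adverse survey slope
print(smooth_positive_slopes(raw))               # all entries now positive
```
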

  10. Smooth individual level covariates adjustment in disease mapping.

    Science.gov (United States)

    Huque, Md Hamidul; Anderson, Craig; Walton, Richard; Woolford, Samuel; Ryan, Louise

    2018-03-25

    Spatial models for disease mapping should ideally account for covariates measured both at individual and area levels. The newly available "indiCAR" model fits the popular conditional autoregressive (CAR) model by accommodating both individual and group level covariates while adjusting for spatial correlation in the disease rates. This algorithm has been shown to be effective but assumes log-linear associations between individual level covariates and outcome. In many studies, the relationship between individual level covariates and the outcome may be non-log-linear, and methods to capture such nonlinearity between individual level covariates and outcome in spatial regression modeling are not well developed. In this paper, we propose a new algorithm, smooth-indiCAR, to fit an extension to the popular conditional autoregressive model that can accommodate both linear and nonlinear individual level covariate effects while adjusting for group level covariates and spatial correlation in the disease rates. In this formulation, the effect of a continuous individual level covariate is accommodated via penalized splines. We describe a two-step estimation procedure to obtain reliable estimates of individual and group level covariate effects where both individual and group level covariate effects are estimated separately. This distributed computing framework enhances its application in the Big Data domain with a large number of individual/group level covariates. We evaluate the performance of smooth-indiCAR through simulation. Our results indicate that the smooth-indiCAR method provides reliable estimates of all regression and random effect parameters. We illustrate our proposed methodology with an analysis of data on neutropenia admissions in New South Wales (NSW), Australia. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. Numerical Solutions for Convection-Diffusion Equation through Non-Polynomial Spline

    Directory of Open Access Journals (Sweden)

    Ravi Kanth A.S.V.

    2016-01-01

    In this paper, numerical solutions of the convection-diffusion equation via non-polynomial splines are studied. We propose an implicit method based on non-polynomial spline functions for solving the convection-diffusion equation. The method is proven to be unconditionally stable using the von Neumann technique. Numerical results are presented to demonstrate the efficiency and stability of the proposed method.
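
    The paper's non-polynomial spline scheme is not reproduced here. To illustrate what a von Neumann proof of unconditional stability checks, the sketch below computes the amplification factor of a plain fully implicit central-difference scheme for u_t + a u_x = ν u_xx (a stand-in, not the authors' method) and verifies its magnitude never exceeds 1 for any Fourier mode:

```python
import math

def amplification_factor(theta, c, d):
    """Von Neumann amplification factor of the fully implicit (backward Euler)
    central-difference scheme for u_t + a*u_x = nu*u_xx, with Courant number
    c = a*dt/dx and diffusion number d = nu*dt/dx**2."""
    return 1.0 / (1.0 + 2.0 * d * (1.0 - math.cos(theta)) + 1j * c * math.sin(theta))

# Scan Fourier modes theta in [0, 2*pi) for deliberately large c and d:
thetas = [2.0 * math.pi * k / 100 for k in range(100)]
g_max = max(abs(amplification_factor(t, c=5.0, d=10.0)) for t in thetas)
print(g_max <= 1.0)  # → True: unconditional stability for this scheme
```

    Unconditional stability means this bound holds for every c and d, i.e. for any time step; an explicit scheme would fail the same check for large enough c or d.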

  12. Very smooth points of spaces of operators

    Indian Academy of Sciences (India)


    ...ball has a very smooth point then the space has the Radon–Nikodým property. We give an example of a smooth Banach space without any very smooth points. Keywords: very smooth points; spaces of operators; M-ideals. A Banach space X is said to be very smooth if every unit vector has a unique norming...

  13. Approximate Bayesian MLP regularization for regression in the presence of noise.

    Science.gov (United States)

    Park, Jung-Guk; Jo, Sungho

    2016-11-01

    We present a novel regularization method for a multilayer perceptron (MLP) that learns a regression function in the presence of noise regardless of how smooth the function is. Unlike general MLP regularization methods, which assume that a regression function is smooth, the proposed regularization method is also valid when a regression function has discontinuities (non-smoothness). Since the true regression function to be learned is unknown, we examine the training set with our Bayesian approach, which identifies non-smooth data by analyzing discontinuities in the regression function. The use of a Bayesian probability distribution identifies the non-smooth data. These identified data are used in a proposed objective function to fit an MLP response to the desired regression function regardless of its smoothness and noise. Experimental simulations show that an MLP trained with our method yields more accurate fits to non-smooth functions than other MLP training methods. Further, we show that the suggested training methodology can be incorporated with deep learning models. Copyright © 2016 Elsevier Ltd. All rights reserved.

  14. Smooth analysis in Banach spaces

    CERN Document Server

    Hájek, Petr

    2014-01-01

    This book is about the subject of higher smoothness in separable real Banach spaces. It brings together several angles of view on polynomials, both in the finite and infinite setting. A rather thorough and systematic view of the more recent results, and of the authors' work, is given. The book revolves around two main broad questions: What is the best smoothness of a given Banach space, and its structural consequences? How large is a supply of smooth functions in the sense of approximating continuous functions in the uniform topology, i.e. how does the Stone-Weierstrass theorem generalize into in...

  15. A surrogate-based sensitivity quantification and Bayesian inversion of a regional groundwater flow model

    Science.gov (United States)

    Chen, Mingjie; Izady, Azizallah; Abdalla, Osman A.; Amerjeed, Mansoor

    2018-02-01

    Bayesian inference using Markov Chain Monte Carlo (MCMC) provides an explicit framework for stochastic calibration of hydrogeologic models accounting for uncertainties; however, MCMC sampling entails a large number of model calls, and could easily become computationally unwieldy if the high-fidelity hydrogeologic model simulation is time consuming. This study proposes a surrogate-based Bayesian framework to address this notorious issue, and illustrates the methodology by inverse modeling a regional MODFLOW model. The high-fidelity groundwater model is approximated by a fast statistical model using the Bagging Multivariate Adaptive Regression Spline (BMARS) algorithm, and hence the MCMC sampling can be efficiently performed. In this study, the MODFLOW model is developed to simulate groundwater flow in an arid region of Oman consisting of mountain-coast aquifers, and used to run representative simulations to generate a training dataset for BMARS model construction. A BMARS-based Sobol' method is also employed to efficiently calculate input parameter sensitivities, which are used to evaluate and rank their importance for the groundwater flow model system. According to the sensitivity analysis, insensitive parameters are screened out of the Bayesian inversion of the MODFLOW model, further saving computing effort. The posterior probability distribution of input parameters is efficiently inferred from the prescribed prior distribution using observed head data, demonstrating that the presented BMARS-based Bayesian framework is an efficient tool to reduce parameter uncertainties of a groundwater system.

  16. Random regression analyses using B-splines to model growth of Australian Angus cattle

    Directory of Open Access Journals (Sweden)

    Meyer Karin

    2005-09-01

    Regression on basis functions of B-splines has been advocated as an alternative to orthogonal polynomials in random regression analyses. Basic theory of splines in mixed model analyses is reviewed, and estimates from analyses of weights of Australian Angus cattle from birth to 820 days of age are presented. Data comprised 84 533 records on 20 731 animals in 43 herds, with a high proportion of animals with 4 or more weights recorded. Changes in weights with age were modelled through B-splines of age at recording. A total of thirteen analyses, considering different combinations of linear, quadratic and cubic B-splines and up to six knots, were carried out. Results showed good agreement for all ages with many records, but fluctuated where data were sparse. On the whole, analyses using B-splines appeared more robust against "end-of-range" problems and yielded more consistent and accurate estimates of the first eigenfunctions than previous, polynomial analyses. A model fitting quadratic B-splines, with knots at 0, 200, 400, 600 and 821 days and a total of 91 covariance components, appeared to be a good compromise between detail of the model, number of parameters to be estimated, plausibility of results, and fit, measured as residual mean square error.
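
    The mixed-model machinery is not reproduced here, but the B-spline regressors themselves are easy to sketch via the Cox-de Boor recursion. The knot layout below echoes the abstract's 0, 200, 400, 600 and 821 days (with repeated end knots for a clamped basis); everything else is illustrative only:

```python
def bspline_basis(i, p, t, knots):
    """Cox-de Boor recursion: value of the i-th degree-p B-spline at t."""
    if p == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left, right = 0.0, 0.0
    if knots[i + p] != knots[i]:
        left = ((t - knots[i]) / (knots[i + p] - knots[i])
                * bspline_basis(i, p - 1, t, knots))
    if knots[i + p + 1] != knots[i + 1]:
        right = ((knots[i + p + 1] - t) / (knots[i + p + 1] - knots[i + 1])
                 * bspline_basis(i + 1, p - 1, t, knots))
    return left + right

# Quadratic (p = 2) clamped basis on the abstract's knot ages (days).
knots = [0, 0, 0, 200, 400, 600, 821, 821, 821]
p = 2
n_basis = len(knots) - p - 1           # 6 basis functions
age = 300.0
values = [bspline_basis(i, p, age, knots) for i in range(n_basis)]
print(round(sum(values), 10))          # → 1.0 (partition of unity in-domain)
```

    In a random regression model, each animal's weight trajectory is a linear combination of these basis values, with the combination coefficients treated as random effects.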

  17. Signal-to-noise ratio enhancement on SEM images using a cubic spline interpolation with Savitzky-Golay filters and weighted least squares error.

    Science.gov (United States)

    Kiani, M A; Sim, K S; Nia, M E; Tso, C P

    2015-05-01

    A new technique based on cubic spline interpolation with Savitzky-Golay smoothing using a weighted least squares error filter is enhanced for scanning electron microscope (SEM) images. A diversity of sample images is captured, and the performance is found to be better when compared with the moving average and the standard median filters with respect to eliminating noise. This technique can be implemented efficiently on real-time SEM images, with all mandatory data for processing obtained from a single image. Noise in images, and particularly in SEM images, is undesirable. A new noise reduction technique, based on cubic spline interpolation with Savitzky-Golay and a weighted least squares error method, is developed. We apply the combined technique to single-image signal-to-noise ratio estimation and noise reduction for an SEM imaging system. This autocorrelation-based technique requires image details to be correlated over a few pixels, whereas the noise is assumed to be uncorrelated from pixel to pixel. The noise component is derived from the difference between the image autocorrelation at zero offset and the estimation of the corresponding original autocorrelation. In the few test cases involving different images, the efficiency of the developed noise reduction filter is proved to be significantly better than that obtained from the other methods. Noise can be reduced efficiently with an appropriate choice of scan rate from real-time SEM images, without generating corruption or increasing scanning time. © 2015 The Authors Journal of Microscopy © 2015 Royal Microscopical Society.
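
    The paper's combined pipeline (cubic spline interpolation plus weighted least squares) is not reproduced here. As a sketch of the Savitzky-Golay stage alone, with window 5 and degree 2 chosen for illustration rather than taken from the paper, the classic convolution coefficients arise from a least-squares quadratic fit over each window:

```python
def savgol5(y):
    """Savitzky-Golay smoothing, window 5, polynomial degree 2.

    The coefficients (-3, 12, 17, 12, -3)/35 come from a least-squares
    quadratic fit over each 5-point window; endpoints are left unfiltered
    in this sketch.
    """
    c = [-3.0, 12.0, 17.0, 12.0, -3.0]
    out = list(y)
    for i in range(2, len(y) - 2):
        out[i] = sum(ck * y[i + k - 2] for k, ck in enumerate(c)) / 35.0
    return out

# A degree-2 filter reproduces quadratic data exactly at interior points,
# which is why it smooths noise without flattening genuine image features.
quad = [x * x for x in range(8)]
print(savgol5(quad))
```
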

  18. Applied Bayesian modelling

    CERN Document Server

    Congdon, Peter

    2014-01-01

    This book provides an accessible approach to Bayesian computing and data analysis, with an emphasis on the interpretation of real data sets. Following in the tradition of the successful first edition, this book aims to make a wide range of statistical modeling applications accessible using tested code that can be readily adapted to the reader's own applications. The second edition has been thoroughly reworked and updated to take account of advances in the field. A new set of worked examples is included. The novel aspect of the first edition was the coverage of statistical modeling using WinBU

  19. Bayesian nonparametric data analysis

    CERN Document Server

    Müller, Peter; Jara, Alejandro; Hanson, Tim

    2015-01-01

    This book reviews nonparametric Bayesian methods and models that have proven useful in the context of data analysis. Rather than providing an encyclopedic review of probability models, the book’s structure follows a data analysis perspective. As such, the chapters are organized by traditional data analysis problems. In selecting specific nonparametric models, simpler and more traditional models are favored over specialized ones. The discussed methods are illustrated with a wealth of examples, including applications ranging from stylized examples to case studies from recent literature. The book also includes an extensive discussion of computational methods and details on their implementation. R code for many examples is included in on-line software pages.

  20. Micropolar Fluids Using B-spline Divergence Conforming Spaces

    KAUST Repository

    Sarmiento, Adel

    2014-06-06

    We discretized the two-dimensional linear momentum, microrotation, energy and mass conservation equations from micropolar fluids theory with the finite element method, creating divergence conforming spaces based on B-spline basis functions to obtain pointwise divergence free solutions [8]. Weak boundary conditions were imposed using Nitsche's method for tangential conditions, while normal conditions were imposed strongly. Once exact mass conservation was provided by the divergence free formulation, we focused on evaluating the differences between micropolar fluids and conventional fluids, to show the advantages of using the micropolar fluid model to capture the features of complex fluids. Square and arc heat-driven cavities were solved as test cases. The model parameters and the Rayleigh number were varied for a better understanding of the system. The divergence free formulation was used to guarantee an accurate solution of the flow. This formulation was implemented using the framework PetIGA as a basis, using its parallel structures to achieve high scalability. The results of the square heat-driven cavity test case are in good agreement with those reported earlier.

  1. Classification using Bayesian neural nets

    NARCIS (Netherlands)

    J.C. Bioch (Cor); O. van der Meer; R. Potharst (Rob)

    1995-01-01

    Recently, Bayesian methods have been proposed for neural networks to solve regression and classification problems. These methods claim to overcome some difficulties encountered in the standard approach, such as overfitting. However, an implementation of the full Bayesian approach to

  2. Bayesian Data Analysis (lecture 1)

    CERN Multimedia

    CERN. Geneva

    2018-01-01

    framework but we will also go into more detail and discuss for example the role of the prior. The second part of the lecture will cover further examples and applications that heavily rely on the Bayesian approach, as well as some computational tools needed to perform a Bayesian analysis.

  3. Bayesian Data Analysis (lecture 2)

    CERN Multimedia

    CERN. Geneva

    2018-01-01

    framework but we will also go into more detail and discuss for example the role of the prior. The second part of the lecture will cover further examples and applications that heavily rely on the Bayesian approach, as well as some computational tools needed to perform a Bayesian analysis.

  4. The Bayesian Covariance Lasso.

    Science.gov (United States)

    Khondker, Zakaria S; Zhu, Hongtu; Chu, Haitao; Lin, Weili; Ibrahim, Joseph G

    2013-04-01

    Estimation of sparse covariance matrices and their inverses subject to positive definiteness constraints has drawn a lot of attention in recent years. The abundance of high-dimensional data, where the sample size (n) is less than the dimension (d), requires shrinkage estimation methods since the maximum likelihood estimator is not positive definite in this case. Furthermore, when n is larger than d but not sufficiently larger, shrinkage estimation is more stable than maximum likelihood as it reduces the condition number of the precision matrix. Frequentist methods have utilized penalized likelihood methods, whereas Bayesian approaches rely on matrix decompositions or Wishart priors for shrinkage. In this paper we propose a new method, called the Bayesian Covariance Lasso (BCLASSO), for the shrinkage estimation of a precision (covariance) matrix. We consider a class of priors for the precision matrix that leads to the popular frequentist penalties as special cases, develop a Bayes estimator for the precision matrix, and propose an efficient sampling scheme that does not precalculate boundaries for positive definiteness. The proposed method is permutation invariant and performs shrinkage and estimation simultaneously for non-full-rank data. Simulations show that the proposed BCLASSO performs similarly to frequentist methods for non-full-rank data.
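
    The core motivation above — that the sample covariance is not invertible when n < d — is easy to see directly. As a minimal numerical sketch (a plain ridge-type regularization, not the authors' BCLASSO sampler; the function name and ridge value are illustrative), adding a small multiple of the identity before inverting restores positive definiteness:

```python
import numpy as np

def shrunk_precision(X, ridge):
    """Ridge-regularized precision estimate: invert S + ridge*I, which is
    positive definite even when n < d (unlike the raw sample covariance)."""
    S = np.cov(X, rowvar=False, bias=True)   # d x d sample covariance
    d = S.shape[0]
    return np.linalg.inv(S + ridge * np.eye(d))

rng = np.random.default_rng(4)
X = rng.standard_normal((10, 25))   # n = 10 < d = 25: sample covariance is singular
Omega = shrunk_precision(X, ridge=0.5)
```

    Because the sample covariance is positive semidefinite, every eigenvalue of the regularized matrix is at least the ridge value, so the inverse always exists.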

  5. Approximate Bayesian computation.

    Directory of Open Access Journals (Sweden)

    Mikael Sunnåker

    Full Text Available Approximate Bayesian computation (ABC) constitutes a class of computational methods rooted in Bayesian statistics. In all model-based statistical inference, the likelihood function is of central importance, since it expresses the probability of the observed data under a particular statistical model, and thus quantifies the support data lend to particular values of parameters and to choices among different models. For simple models, an analytical formula for the likelihood function can typically be derived. However, for more complex models, an analytical formula might be elusive or the likelihood function might be computationally very costly to evaluate. ABC methods bypass the evaluation of the likelihood function. In this way, ABC methods widen the realm of models for which statistical inference can be considered. ABC methods are mathematically well-founded, but they inevitably make assumptions and approximations whose impact needs to be carefully assessed. Furthermore, the wider application domain of ABC exacerbates the challenges of parameter estimation and model selection. ABC has rapidly gained popularity in recent years, in particular for the analysis of complex problems arising in biological sciences (e.g., in population genetics, ecology, epidemiology, and systems biology).
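
    The likelihood-free idea described above can be sketched with the simplest ABC variant, rejection sampling: draw parameters from the prior, simulate data under the model, and keep only draws whose summary statistic lands close to the observed one. This toy example (a Gaussian mean with a uniform prior; all names and tolerances are illustrative choices, not from the record) shows the mechanics:

```python
import random

def abc_rejection(observed_mean, n_samples, eps=0.1, n_data=50, seed=0):
    """Rejection ABC: keep prior draws whose simulated summary statistic
    falls within eps of the observed summary."""
    rng = random.Random(seed)
    accepted = []
    while len(accepted) < n_samples:
        mu = rng.uniform(-5, 5)                              # draw from flat prior
        sim = [rng.gauss(mu, 1.0) for _ in range(n_data)]    # simulate data
        sim_mean = sum(sim) / n_data                         # summary statistic
        if abs(sim_mean - observed_mean) < eps:              # likelihood-free accept step
            accepted.append(mu)
    return accepted

post = abc_rejection(observed_mean=1.3, n_samples=500)
post_mean = sum(post) / len(post)
```

    The accepted draws approximate the posterior without ever evaluating the likelihood; shrinking eps trades acceptance rate for approximation quality.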

  6. Bayesian inference with ecological applications

    CERN Document Server

    Link, William A

    2009-01-01

    This text is written to provide a mathematically sound but accessible and engaging introduction to Bayesian inference specifically for environmental scientists, ecologists and wildlife biologists. It emphasizes the power and usefulness of Bayesian methods in an ecological context. The advent of fast personal computers and easily available software has simplified the use of Bayesian and hierarchical models . One obstacle remains for ecologists and wildlife biologists, namely the near absence of Bayesian texts written specifically for them. The book includes many relevant examples, is supported by software and examples on a companion website and will become an essential grounding in this approach for students and research ecologists. Engagingly written text specifically designed to demystify a complex subject Examples drawn from ecology and wildlife research An essential grounding for graduate and research ecologists in the increasingly prevalent Bayesian approach to inference Companion website with analyt...

  7. Bayesian Inference on Gravitational Waves

    Directory of Open Access Journals (Sweden)

    Asad Ali

    2015-12-01

    Full Text Available The Bayesian approach is increasingly becoming popular among the astrophysics data analysis communities. However, the Pakistan statistics communities are unaware of this fertile interaction between the two disciplines. Bayesian methods have been in use to address astronomical problems since the very birth of Bayes' probability in the eighteenth century. Today the Bayesian methods for the detection and parameter estimation of gravitational waves have solid theoretical grounds with a strong promise for realistic applications. This article aims to introduce the Pakistan statistics communities to the applications of Bayesian Monte Carlo methods in the analysis of gravitational wave data, with an overview of Bayesian signal detection and estimation methods and a demonstration through a couple of simplified examples.

  8. Bayesian tomography and integrated data analysis in fusion diagnostics

    Science.gov (United States)

    Li, Dong; Dong, Y. B.; Deng, Wei; Shi, Z. B.; Fu, B. Z.; Gao, J. M.; Wang, T. B.; Zhou, Yan; Liu, Yi; Yang, Q. W.; Duan, X. R.

    2016-11-01

    In this article, a Bayesian tomography method using a non-stationary Gaussian process prior has been introduced. The Bayesian formalism allows quantities which bear uncertainty to be expressed in probabilistic form, so that the uncertainty of a final solution can be fully resolved from the confidence interval of the posterior probability. Moreover, a consistency check of that solution can be performed by checking whether the misfits between predicted and measured data lie reasonably within the assumed data error. In particular, the accuracy of reconstructions is significantly improved by using the non-stationary Gaussian process, which can adapt to the varying smoothness of the emission distribution. The implementation of this method for a soft X-ray diagnostic on HL-2A has been used to explore relevant physics in equilibrium and MHD instability modes. This project is carried out within a large-scale inference framework, aiming at an integrated analysis of heterogeneous diagnostics.
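
    The role of a Gaussian process prior in such reconstructions can be illustrated with a stationary GP regression sketch (not the non-stationary variant used on HL-2A): the posterior mean and variance follow in closed form from the kernel matrices. The kernel and hyperparameter choices below are illustrative:

```python
import numpy as np

def sq_exp_kernel(x1, x2, length=1.0, amp=1.0):
    """Squared-exponential covariance between two sets of 1-D inputs."""
    d = x1[:, None] - x2[None, :]
    return amp**2 * np.exp(-0.5 * (d / length) ** 2)

def gp_posterior(x_train, y_train, x_test, noise, length=1.0):
    """Posterior mean/variance of a zero-mean GP given noisy observations."""
    K = sq_exp_kernel(x_train, x_train, length) + noise**2 * np.eye(len(x_train))
    Ks = sq_exp_kernel(x_test, x_train, length)
    Kss = sq_exp_kernel(x_test, x_test, length)
    alpha = np.linalg.solve(K, y_train)                 # K^{-1} y
    mean = Ks @ alpha
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.diag(cov)

x = np.linspace(0, 2 * np.pi, 20)
y = np.sin(x)                                            # noise-free training signal
xs = np.array([np.pi / 2])
mean, var = gp_posterior(x, y, xs, noise=0.05)
```

    The posterior variance plays the same role as the confidence interval mentioned in the abstract: it quantifies how well the data constrain the reconstruction at each point.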

  9. Performance evaluation of block-diagonal preconditioners for the divergence-conforming B-spline discretization of the Stokes system

    KAUST Repository

    Côrtes, A.M.A.

    2015-02-20

    The recently introduced divergence-conforming B-spline discretizations allow the construction of smooth discrete velocity–pressure pairs for viscous incompressible flows that are at the same time inf-sup stable and pointwise divergence-free. When applied to the discretized Stokes equations, these spaces generate a symmetric and indefinite saddle-point linear system. Krylov subspace methods are usually the most efficient procedures to solve such systems. One such method for symmetric systems is the Minimum Residual Method (MINRES). However, the efficiency and robustness of Krylov subspace methods are closely tied to appropriate preconditioning strategies. For the discrete Stokes system, in particular, block-diagonal strategies provide efficient preconditioners. In this article, we compare the performance of block-diagonal preconditioners for several block choices. We verify how the eigenvalue clustering promoted by the preconditioning strategies affects MINRES convergence. We also compare the number of iterations and wall-clock timings. We conclude that among the building blocks we tested, the strategy with relaxed inner conjugate gradients preconditioned with incomplete Cholesky provided the best results.

  10. TPS-HAMMER: improving HAMMER registration algorithm by soft correspondence matching and thin-plate splines based deformation interpolation.

    Science.gov (United States)

    Wu, Guorong; Yap, Pew-Thian; Kim, Minjeong; Shen, Dinggang

    2010-02-01

    We present an improved MR brain image registration algorithm, called TPS-HAMMER, which is based on the concepts of attribute vectors and hierarchical landmark selection scheme proposed in the highly successful HAMMER registration algorithm. We demonstrate that TPS-HAMMER algorithm yields better registration accuracy, robustness, and speed over HAMMER owing to (1) the employment of soft correspondence matching and (2) the utilization of thin-plate splines (TPS) for sparse-to-dense deformation field generation. These two aspects can be integrated into a unified framework to refine the registration iteratively by alternating between soft correspondence matching and dense deformation field estimation. Compared with HAMMER, TPS-HAMMER affords several advantages: (1) unlike the Gaussian propagation mechanism employed in HAMMER, which can be slow and often leaves unreached blotches in the deformation field, the deformation interpolation in the non-landmark points can be obtained immediately with TPS in our algorithm; (2) the smoothness of deformation field is preserved due to the nice properties of TPS; (3) possible misalignments can be alleviated by allowing the matching of the landmarks with a number of possible candidate points and enforcing more exact matches in the final stages of the registration. Extensive experiments have been conducted, using the original HAMMER as a comparison baseline, to validate the merits of TPS-HAMMER. The results show that TPS-HAMMER yields significant improvement in both accuracy and speed, indicating high applicability for the clinical scenario. Copyright (c) 2009 Elsevier Inc. All rights reserved.
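
    The sparse-to-dense interpolation step that TPS-HAMMER relies on is the classical thin-plate spline solve: radial basis terms U(r) = r^2 log r plus an affine part, with all coefficients obtained from a single linear system over the landmarks. A minimal 2-D sketch (the landmark coordinates and values are made up for illustration):

```python
import numpy as np

def tps_kernel(r):
    """Thin-plate spline radial basis U(r) = r^2 log r (defined as 0 at r = 0)."""
    out = np.zeros_like(r)
    nz = r > 0
    out[nz] = r[nz] ** 2 * np.log(r[nz])
    return out

def tps_fit(points, values):
    """Solve for TPS coefficients interpolating scalar values at 2-D landmarks."""
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    K = tps_kernel(d)
    P = np.hstack([np.ones((n, 1)), points])      # affine part: 1, x, y
    A = np.zeros((n + 3, n + 3))                  # bordered system [[K, P], [P^T, 0]]
    A[:n, :n] = K
    A[:n, n:] = P
    A[n:, :n] = P.T
    b = np.concatenate([values, np.zeros(3)])
    return np.linalg.solve(A, b)

def tps_eval(points, coefs, query):
    n = len(points)
    d = np.linalg.norm(query[:, None, :] - points[None, :, :], axis=-1)
    affine = np.hstack([np.ones((len(query), 1)), query])
    return tps_kernel(d) @ coefs[:n] + affine @ coefs[n:]

pts = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.], [0.5, 0.5]])
vals = np.array([0., 1., 1., 2., 1.])
w = tps_fit(pts, vals)
recon = tps_eval(pts, w, pts)     # exact at the landmarks
```

    The smoothness property cited in the abstract comes for free: away from the landmarks, the TPS is the minimum-bending-energy interpolant, and one dense evaluation replaces iterative propagation.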

  11. Modeling positional effects of regulatory sequences with spline transformations increases prediction accuracy of deep neural networks.

    Science.gov (United States)

    Avsec, Žiga; Barekatain, Mohammadamin; Cheng, Jun; Gagneur, Julien

    2017-11-16

    Regulatory sequences are not solely defined by their nucleic acid sequence but also by their relative distances to genomic landmarks such as transcription start site, exon boundaries, or polyadenylation site. Deep learning has become the approach of choice for modeling regulatory sequences because of its strength to learn complex sequence features. However, modeling relative distances to genomic landmarks in deep neural networks has not been addressed. Here we developed spline transformation, a neural network module based on splines to flexibly and robustly model distances. Modeling distances to various genomic landmarks with spline transformations significantly increased state-of-the-art prediction accuracy of in vivo RNA-binding protein binding sites for 120 out of 123 proteins. We also developed a deep neural network for human splice branchpoint based on spline transformations that outperformed the current best, already distance-based, machine learning model. Compared to piecewise linear transformation, as obtained by composition of rectified linear units, spline transformation yields higher prediction accuracy as well as faster and more robust training. As spline transformation can be applied to further quantities beyond distances, such as methylation or conservation, we foresee it as a versatile component in the genomics deep learning toolbox. Spline transformation is implemented as a Keras layer in the CONCISE python package: https://github.com/gagneurlab/concise. Analysis code is available at goo.gl/3yMY5w. avsec@in.tum.de; gagneur@in.tum.de. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.
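
    The spline transformation module described above amounts to mapping a scalar distance through a set of B-spline basis functions whose outputs become learnable features. A hedged sketch of the basis evaluation via the Cox-de Boor recursion (the knot vector below is an illustrative choice, not the one used in CONCISE):

```python
import numpy as np

def bspline_basis(x, knots, degree):
    """Evaluate all B-spline basis functions of a given degree at points x
    using the Cox-de Boor recursion."""
    x = np.asarray(x, dtype=float)
    n_basis = len(knots) - degree - 1
    # degree-0 basis: indicators of the half-open knot spans
    B = np.zeros((len(x), len(knots) - 1))
    for i in range(len(knots) - 1):
        B[:, i] = (x >= knots[i]) & (x < knots[i + 1])
    spans = np.where(np.diff(knots) > 0)[0]
    B[x == knots[-1], spans[-1]] = 1.0          # close the final interval
    for d in range(1, degree + 1):
        B_new = np.zeros((len(x), len(knots) - d - 1))
        for i in range(len(knots) - d - 1):
            left = right = 0.0
            denom1 = knots[i + d] - knots[i]
            denom2 = knots[i + d + 1] - knots[i + 1]
            if denom1 > 0:
                left = (x - knots[i]) / denom1 * B[:, i]
            if denom2 > 0:
                right = (knots[i + d + 1] - x) / denom2 * B[:, i + 1]
            B_new[:, i] = left + right
        B = B_new
    return B[:, :n_basis]

# clamped cubic knot vector on [0, 1] with a few interior knots (illustrative)
knots = np.array([0, 0, 0, 0, 0.25, 0.5, 0.75, 1, 1, 1, 1], dtype=float)
dist = np.linspace(0, 1, 11)                    # normalized distances to a landmark
features = bspline_basis(dist, knots, degree=3)
```

    In a network, a linear layer on top of these features yields a smooth, learnable function of distance, which is the smoothness and robustness advantage the abstract attributes to splines over compositions of rectified linear units.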

  12. Evaluation of the spline reconstruction technique for PET

    Energy Technology Data Exchange (ETDEWEB)

    Kastis, George A., E-mail: gkastis@academyofathens.gr; Kyriakopoulou, Dimitra [Research Center of Mathematics, Academy of Athens, Athens 11527 (Greece); Gaitanis, Anastasios [Biomedical Research Foundation of the Academy of Athens (BRFAA), Athens 11527 (Greece); Fernández, Yolanda [Centre d’Imatge Molecular Experimental (CIME), CETIR-ERESA, Barcelona 08950 (Spain); Hutton, Brian F. [Institute of Nuclear Medicine, University College London, London NW1 2BU (United Kingdom); Fokas, Athanasios S. [Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Cambridge CB30WA (United Kingdom)

    2014-04-15

    Purpose: The spline reconstruction technique (SRT), based on the analytic formula for the inverse Radon transform, has been presented earlier in the literature. In this study, the authors present an improved formulation and numerical implementation of this algorithm and evaluate it in comparison to filtered backprojection (FBP). Methods: The SRT is based on the numerical evaluation of the Hilbert transform of the sinogram via an approximation in terms of “custom made” cubic splines. By restricting reconstruction only within object pixels and by utilizing certain mathematical symmetries, the authors achieve a reconstruction time comparable to that of FBP. The authors have implemented SRT in STIR and have evaluated this technique using simulated data from a clinical positron emission tomography (PET) system, as well as real data obtained from clinical and preclinical PET scanners. For the simulation studies, the authors have simulated sinograms of a point-source and three digital phantoms. Using these sinograms, the authors have created realizations of Poisson noise at five noise levels. In addition to visual comparisons of the reconstructed images, the authors have determined contrast and bias for different regions of the phantoms as a function of noise level. For the real-data studies, sinograms of an {sup 18}F-FDG injected mouse, a NEMA NU 4-2008 image quality phantom, and a Derenzo phantom have been acquired from a commercial PET system. The authors have determined: (a) coefficients of variation (COV) and contrast from the NEMA phantom, (b) contrast for the various sections of the Derenzo phantom, and (c) line profiles for the Derenzo phantom. Furthermore, the authors have acquired sinograms from a whole-body PET scan of an {sup 18}F-FDG injected cancer patient, using the GE Discovery ST PET/CT system. SRT and FBP reconstructions of the thorax have been visually evaluated. Results: The results indicate an improvement in FWHM and FWTM in both simulated and real

  13. A chord error conforming tool path B-spline fitting method for NC machining based on energy minimization and LSPIA

    OpenAIRE

    He, Shanshan; Ou, Daojiang; Yan, Changya; Lee, Chen-Han

    2015-01-01

    Piecewise linear (G01-based) tool paths generated by CAM systems lack G1 and G2 continuity. The discontinuity causes vibration and unnecessary hesitation during machining. To ensure efficient high-speed machining, a method to improve the continuity of the tool paths is required, such as B-spline fitting that approximates G01 paths with B-spline curves. Conventional B-spline fitting approaches cannot be directly used for tool path B-spline fitting, because they have shortcomings such as numerical...

  14. An evaluation of prefiltered B-spline reconstruction for quasi-interpolation on the Body-Centered Cubic lattice.

    Science.gov (United States)

    Csébfalvi, Balázs

    2010-01-01

    In this paper, we demonstrate that quasi-interpolation of orders two and four can be efficiently implemented on the Body-Centered Cubic (BCC) lattice by using tensor-product B-splines combined with appropriate discrete prefilters. Unlike the nonseparable box-spline reconstruction previously proposed for the BCC lattice, the prefiltered B-spline reconstruction can utilize the fast trilinear texture-fetching capability of the recent graphics cards. Therefore, it can be applied for rendering BCC-sampled volumetric data interactively. Furthermore, we show that a separable B-spline filter can suppress the postaliasing effect much more isotropically than a nonseparable box-spline filter of the same approximation power. Although prefilters that make the B-splines interpolating on the BCC lattice do not exist, we demonstrate that quasi-interpolating prefiltered linear and cubic B-spline reconstructions can still provide similar or higher image quality than the interpolating linear box-spline and prefiltered quintic box-spline reconstructions, respectively.

  15. Bayesian nonparametric hierarchical modeling.

    Science.gov (United States)

    Dunson, David B

    2009-04-01

    In biomedical research, hierarchical models are very widely used to accommodate dependence in multivariate and longitudinal data and for borrowing of information across data from different sources. A primary concern in hierarchical modeling is sensitivity to parametric assumptions, such as linearity and normality of the random effects. Parametric assumptions on latent variable distributions can be challenging to check and are typically unwarranted, given available prior knowledge. This article reviews some recent developments in Bayesian nonparametric methods motivated by complex, multivariate and functional data collected in biomedical studies. The author provides a brief review of flexible parametric approaches relying on finite mixtures and latent class modeling. Dirichlet process mixture models are motivated by the need to generalize these approaches to avoid assuming a fixed finite number of classes. Focusing on an epidemiology application, the author illustrates the practical utility and potential of nonparametric Bayes methods.
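
    The Dirichlet process mixtures mentioned above avoid fixing the number of classes by placing a prior over infinitely many mixture weights; the stick-breaking construction makes this concrete. A truncated sketch (the truncation level and concentration parameter are illustrative):

```python
import numpy as np

def stick_breaking(alpha, n_atoms, rng):
    """Truncated stick-breaking weights of a Dirichlet process with
    concentration alpha: w_k = beta_k * prod_{j<k} (1 - beta_j)."""
    betas = rng.beta(1.0, alpha, size=n_atoms)               # stick fractions
    remaining = np.concatenate([[1.0], np.cumprod(1.0 - betas[:-1])])
    w = betas * remaining
    w[-1] = 1.0 - w[:-1].sum()   # fold leftover mass into the last atom
    return w

rng = np.random.default_rng(0)
weights = stick_breaking(alpha=2.0, n_atoms=50, rng=rng)
```

    Small alpha concentrates mass on a few atoms (few effective clusters); large alpha spreads it out, which is how the prior lets the data determine the number of occupied classes.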

  16. Bayesian grid matching

    DEFF Research Database (Denmark)

    Hartelius, Karsten; Carstensen, Jens Michael

    2003-01-01

    A method for locating distorted grid structures in images is presented. The method is based on the theories of template matching and Bayesian image restoration. The grid is modeled as a deformable template. Prior knowledge of the grid is described through a Markov random field (MRF) model which...... represents the spatial coordinates of the grid nodes. Knowledge of how grid nodes are depicted in the observed image is described through the observation model. The prior consists of a node prior and an arc (edge) prior, both modeled as Gaussian MRFs. The node prior models variations in the positions of grid...... nodes and the arc prior models variations in row and column spacing across the grid. Grid matching is done by placing an initial rough grid over the image and applying an ensemble annealing scheme to maximize the posterior distribution of the grid. The method can be applied to noisy images with missing...

  17. Bayesian supervised dimensionality reduction.

    Science.gov (United States)

    Gönen, Mehmet

    2013-12-01

    Dimensionality reduction is commonly used as a preprocessing step before training a supervised learner. However, coupled training of dimensionality reduction and supervised learning steps may improve the prediction performance. In this paper, we introduce a simple and novel Bayesian supervised dimensionality reduction method that combines linear dimensionality reduction and linear supervised learning in a principled way. We present both Gibbs sampling and variational approximation approaches to learn the proposed probabilistic model for multiclass classification. We also extend our formulation toward model selection using automatic relevance determination in order to find the intrinsic dimensionality. Classification experiments on three benchmark data sets show that the new model significantly outperforms seven baseline linear dimensionality reduction algorithms on very low dimensions in terms of generalization performance on test data. The proposed model also obtains the best results on an image recognition task in terms of classification and retrieval performances.

  18. Bayesian Geostatistical Design

    DEFF Research Database (Denmark)

    Diggle, Peter; Lophaven, Søren Nymand

    2006-01-01

    This paper describes the use of model-based geostatistics for choosing the set of sampling locations, collectively called the design, to be used in a geostatistical analysis. Two types of design situation are considered. These are retrospective design, which concerns the addition of sampling...... locations to, or deletion of locations from, an existing design, and prospective design, which consists of choosing positions for a new set of sampling locations. We propose a Bayesian design criterion which focuses on the goal of efficient spatial prediction whilst allowing for the fact that model...... parameter values are unknown. The results show that in this situation a wide range of interpoint distances should be included in the design, and the widely used regular design is often not the best choice....

  19. Bayesian Age-Period-Cohort Modeling and Prediction - BAMP

    Directory of Open Access Journals (Sweden)

    Volker J. Schmid

    2007-10-01

    Full Text Available The software package BAMP provides a method of analyzing incidence or mortality data on the Lexis diagram, using a Bayesian version of an age-period-cohort model. A hierarchical model is assumed, with a binomial model in the first stage. As smoothing priors for the age, period, and cohort parameters, random walks of first and second order, with and without an additional unstructured component, are available. Unstructured heterogeneity can also be included in the model. In order to evaluate the model fit, the posterior deviance, DIC, and predictive deviances are computed. By projecting the random walk prior into the future, future death rates can be predicted.
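
    The random-walk smoothing priors used in BAMP can be illustrated in a simplified Gaussian setting (BAMP itself uses a binomial first stage and MCMC; this closed-form sketch is only an analogy): an RW(2) prior penalizes second differences of the latent trend, and the posterior mean then solves a single linear system:

```python
import numpy as np

def rw2_posterior_mean(y, precision_ratio):
    """Posterior mean of a latent trend under a Gaussian likelihood and an
    RW(2) smoothing prior; precision_ratio is the prior-to-observation
    precision ratio (larger => smoother)."""
    n = len(y)
    D = np.zeros((n - 2, n))                  # second-difference operator
    for i in range(n - 2):
        D[i, i:i + 3] = [1.0, -2.0, 1.0]
    Q = precision_ratio * D.T @ D             # (improper) RW2 prior precision
    return np.linalg.solve(np.eye(n) + Q, y)  # (I + Q)^{-1} y

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 60)
y = t**2 + 0.05 * rng.standard_normal(60)     # noisy "rates" over time
smooth = rw2_posterior_mean(y, precision_ratio=50.0)
```

    Because the RW(2) prior leaves linear trends unpenalized, projecting it forward extrapolates the current trend, which is exactly the prediction mechanism the abstract describes.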

  20. Ensemble Kalman filtering with one-step-ahead smoothing

    KAUST Repository

    Raboudi, Naila F.

    2018-01-11

    The ensemble Kalman filter (EnKF) is widely used for sequential data assimilation. It operates as a succession of forecast and analysis steps. In realistic large-scale applications, EnKFs are implemented with small ensembles and poorly known model error statistics. This limits their representativeness of the background error covariances and, thus, their performance. This work explores the efficiency of the one-step-ahead (OSA) smoothing formulation of the Bayesian filtering problem to enhance the data assimilation performance of EnKFs. Filtering with OSA smoothing introduces an update step with future observations, conditioning the ensemble sampling with more information. This should provide an improved background ensemble in the analysis step, which may help to mitigate the suboptimal character of EnKF-based methods. Here, the authors demonstrate the efficiency of a stochastic EnKF with OSA smoothing for state estimation. They then introduce a deterministic-like EnKF-OSA based on the singular evolutive interpolated ensemble Kalman (SEIK) filter. The authors show that the proposed SEIK-OSA outperforms both SEIK, as it efficiently exploits the data twice, and the stochastic EnKF-OSA, as it avoids observational error undersampling. They present extensive assimilation results from numerical experiments conducted with the Lorenz-96 model to demonstrate SEIK-OSA’s capabilities.
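
    The stochastic EnKF analysis step that the OSA-smoothing variants build on can be sketched directly: each ensemble member is updated with perturbed observations using a gain computed from the sample covariance. A minimal linear-Gaussian example (dimensions and noise levels are illustrative; this is the plain EnKF analysis, not the OSA formulation):

```python
import numpy as np

def enkf_analysis(ensemble, y_obs, H, obs_var, rng):
    """Stochastic EnKF analysis: update each member with perturbed observations.
    ensemble: (n_members, n_state); H: (n_obs, n_state) linear observation operator."""
    n, _ = ensemble.shape
    X = ensemble - ensemble.mean(axis=0)              # ensemble anomalies
    P = X.T @ X / (n - 1)                             # sample background covariance
    S = H @ P @ H.T + obs_var * np.eye(H.shape[0])    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)                    # Kalman gain
    y_pert = y_obs + np.sqrt(obs_var) * rng.standard_normal((n, H.shape[0]))
    return ensemble + (y_pert - ensemble @ H.T) @ K.T

rng = np.random.default_rng(2)
truth = np.array([1.0, -0.5])
ens = truth + 0.5 * rng.standard_normal((100, 2))     # prior (background) ensemble
H = np.eye(2)                                         # observe the full state
obs = truth + 0.1 * rng.standard_normal(2)
analysis = enkf_analysis(ens, obs, H, obs_var=0.01, rng=rng)
```

    The perturbed observations keep the analysis spread statistically consistent; the OSA variants discussed above additionally use the next observation to condition the background ensemble before this step.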

  1. LD-Spline: Mapping SNPs on genotyping platforms to genomic regions using patterns of linkage disequilibrium

    Directory of Open Access Journals (Sweden)

    Bush William S

    2009-12-01

    Full Text Available Abstract Background Gene-centric analysis tools for genome-wide association study data are being developed both to annotate single locus statistics and to prioritize or group single nucleotide polymorphisms (SNPs) prior to analysis. These approaches require knowledge about the relationships between SNPs on a genotyping platform and genes in the human genome. SNPs in the genome can represent broader genomic regions via linkage disequilibrium (LD), and population-specific patterns of LD can be exploited to generate a data-driven map of SNPs to genes. Methods In this study, we implemented LD-Spline, a database routine that defines the genomic boundaries a particular SNP represents using linkage disequilibrium statistics from the International HapMap Project. We compared the LD-Spline haplotype block partitioning approach to that of the four-gamete rule and the Gabriel et al. approach using simulated data; in addition, we processed two commonly used genome-wide association study platforms. Results We illustrate that LD-Spline performs comparably to the four-gamete rule and the Gabriel et al. approach; however as a SNP-centric approach LD-Spline has the added benefit of systematically identifying a genomic boundary for each SNP, where the global block partitioning approaches may falter due to sampling variation in LD statistics. Conclusion LD-Spline is an integrated database routine that quickly and effectively defines the genomic region marked by a SNP using linkage disequilibrium, with a SNP-centric block definition algorithm.

  2. Stabilized Discretization in Spline Element Method for Solution of Two-Dimensional Navier-Stokes Problems

    Directory of Open Access Journals (Sweden)

    Neng Wan

    2014-01-01

    Full Text Available To address the poor geometric adaptability of the spline element method, a geometrically exact spline method, which uses rational Bézier patches to represent the solution domain, is proposed for the two-dimensional viscous incompressible Navier-Stokes equations. Besides fewer unknowns, higher accuracy, and computational efficiency, it possesses such advantages as the accurate isogeometric representation of the object boundary and the unity of geometry and analysis modeling. Meanwhile, the selection of B-spline basis functions and the grid definition is studied, and a stable discretization scheme satisfying the inf-sup conditions is proposed. The degree of the spline functions approximating the velocity field is one order higher than that approximating the pressure field, and these functions are defined on a once-refined grid. The Dirichlet boundary conditions are imposed in weak form through the Nitsche variational principle, due to the lack of interpolation properties of the B-spline functions. Finally, the validity of the proposed method is verified with some examples.

  3. Gearbox Reliability Collaborative Analytic Formulation for the Evaluation of Spline Couplings

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Yi [National Renewable Energy Lab. (NREL), Golden, CO (United States); Keller, Jonathan [National Renewable Energy Lab. (NREL), Golden, CO (United States); Errichello, Robert [GEARTECH, Houston, TX (United States); Halse, Chris [Romax Technology, Nottingham (United Kingdom)

    2013-12-01

    Gearboxes in wind turbines have not been achieving their expected design life; however, they commonly meet and exceed the design criteria specified in current standards in the gear, bearing, and wind turbine industry as well as third-party certification criteria. The cost of gearbox replacements and rebuilds, as well as the down time associated with these failures, has elevated the cost of wind energy. The National Renewable Energy Laboratory (NREL) Gearbox Reliability Collaborative (GRC) was established by the U.S. Department of Energy in 2006; its key goal is to understand the root causes of premature gearbox failures and improve their reliability using a combined approach of dynamometer testing, field testing, and modeling. As part of the GRC program, this paper investigates the design of the spline coupling often used in modern wind turbine gearboxes to connect the planetary and helical gear stages. Aside from transmitting the driving torque, another common function of the spline coupling is to allow the sun to float between the planets. The amount the sun can float is determined by the spline design and the sun shaft flexibility subject to the operational loads. Current standards address spline coupling design requirements in varying detail. This report provides additional insight beyond these current standards to quickly evaluate spline coupling designs.

  4. Ozone profile smoothness as a priori information in the inversion of limb measurements

    Directory of Open Access Journals (Sweden)

    V. F. Sofieva

    2004-11-01

    Full Text Available In this work we discuss the inclusion of a priori information about the smoothness of atmospheric profiles in inversion algorithms. The smoothness requirement can be formulated as Tikhonov-type regularization, where the smoothness of atmospheric profiles is considered as a constraint, or in the form of Bayesian optimal estimation (maximum a posteriori method, MAP), where the smoothness of profiles can be included as a priori information. We develop further two recently proposed retrieval methods. One of them, Tikhonov-type regularization according to the target resolution, develops the classical Tikhonov regularization. The second method, the maximum a posteriori method with smoothness a priori, effectively combines the ideas of the classical MAP method and Tikhonov-type regularization. We discuss a grid-independent formulation for the proposed inversion methods, thus isolating the choice of calculation grid from the question of how strong the smoothing should be. The discussed approaches are applied to the problem of ozone profile retrieval from stellar occultation measurements by the GOMOS instrument on board the Envisat satellite. Realistic simulations for typical measurement conditions, with smoothness a priori information created from a 10-year analysis of ozone soundings at Sodankylä, and analysis of the total retrieval error illustrate the advantages of the proposed methods. The proposed methods are equally applicable to other profile retrieval problems from remote sensing measurements.
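
    The Tikhonov-type smoothness constraint discussed above has a compact linear-algebra form: minimize the data misfit plus a penalty on the second differences of the profile. A hedged toy retrieval (the forward model and noise level are invented for illustration, not GOMOS specifics):

```python
import numpy as np

def tikhonov_smooth_retrieval(A, y, lam):
    """Tikhonov-regularized solution of min ||A x - y||^2 + lam ||L x||^2,
    with L the second-difference operator enforcing profile smoothness."""
    n = A.shape[1]
    L = np.zeros((n - 2, n))
    for i in range(n - 2):
        L[i, i:i + 3] = [1.0, -2.0, 1.0]
    return np.linalg.solve(A.T @ A + lam * L.T @ L, A.T @ y)

rng = np.random.default_rng(3)
n = 40
x_true = np.exp(-((np.arange(n) - 20.0) / 6.0) ** 2)   # smooth "ozone profile"
A = np.tril(np.ones((n, n)))                           # crude cumulative forward model
y = A @ x_true + 0.1 * rng.standard_normal(n)          # noisy measurements
x_hat = tikhonov_smooth_retrieval(A, y, lam=10.0)
x_ls = np.linalg.solve(A.T @ A, A.T @ y)               # unregularized least squares
```

    The unregularized solution amplifies the measurement noise, while the penalized one trades a small bias for a much smoother profile; choosing lam per grid resolution is the grid-independence question the abstract raises.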

  5. Ozone profile smoothness as a priori information in the inversion of limb measurements

    Directory of Open Access Journals (Sweden)

    V. F. Sofieva

    2004-11-01

    Full Text Available In this work we discuss inclusion of a priori information about the smoothness of atmospheric profiles in inversion algorithms. The smoothness requirement can be formulated in the form of Tikhonov-type regularization, where the smoothness of atmospheric profiles is considered as a constraint or in the form of Bayesian optimal estimation (maximum a posteriori method, MAP, where the smoothness of profiles can be included as a priori information. We develop further two recently proposed retrieval methods. One of them - Tikhonov-type regularization according to the target resolution - develops the classical Tikhonov regularization. The second method - maximum a posteriori method with smoothness a priori - effectively combines the ideas of the classical MAP method and Tikhonov-type regularization. We discuss a grid-independent formulation for the proposed inversion methods, thus isolating the choice of calculation grid from the question of how strong the smoothing should be.

    The discussed approaches are applied to the problem of ozone profile retrieval from stellar occultation measurements by the GOMOS instrument on board the Envisat satellite. Realistic simulations for typical measurement conditions, with smoothness a priori information created from a 10-year analysis of ozone soundings at Sodankylä, together with an analysis of the total retrieval error, illustrate the advantages of the proposed methods.

    The proposed methods are equally applicable to other profile retrieval problems from remote sensing measurements.
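In its simplest discrete form, the Tikhonov-type smoothness regularization discussed above reduces to penalized least squares with a second-difference operator. The sketch below uses an invented averaging kernel and profile, not GOMOS quantities, and a hand-picked regularization parameter rather than the paper's target-resolution rule.

```python
import numpy as np

def tikhonov_smooth_inversion(K, y, alpha):
    """Solve min_x ||y - K x||^2 + alpha ||L x||^2,
    where L is the second-difference (smoothness) operator."""
    n = K.shape[1]
    L = np.zeros((n - 2, n))
    for i in range(n - 2):
        L[i, i:i + 3] = [1.0, -2.0, 1.0]
    # normal equations of the penalized least-squares problem
    return np.linalg.solve(K.T @ K + alpha * (L.T @ L), K.T @ y)

# toy retrieval: smooth profile observed through an averaging kernel plus noise
rng = np.random.default_rng(0)
n = 50
z = np.linspace(0.0, 1.0, n)
x_true = np.exp(-((z - 0.5) / 0.15) ** 2)            # smooth "profile"
K = np.eye(n) + 0.3 * np.eye(n, k=1) + 0.3 * np.eye(n, k=-1)
y = K @ x_true + 0.05 * rng.standard_normal(n)
x_naive = np.linalg.solve(K, y)                      # unregularized inversion
x_reg = tikhonov_smooth_inversion(K, y, alpha=1.0)   # smoothness-regularized
```

Increasing `alpha` trades data fidelity for smoothness; a grid-independent formulation as in the paper would additionally scale the penalty with the grid spacing.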

  6. Bayesian adaptive methods for clinical trials

    CERN Document Server

    Berry, Scott M; Muller, Peter

    2010-01-01

    Already popular in the analysis of medical device trials, adaptive Bayesian designs are increasingly being used in drug development for a wide variety of diseases and conditions, from Alzheimer's disease and multiple sclerosis to obesity, diabetes, hepatitis C, and HIV. Written by leading pioneers of Bayesian clinical trial designs, Bayesian Adaptive Methods for Clinical Trials explores the growing role of Bayesian thinking in the rapidly changing world of clinical trial analysis. The book first summarizes the current state of clinical trial design and analysis and introduces the main ideas and potential benefits of a Bayesian alternative. It then gives an overview of basic Bayesian methodological and computational tools needed for Bayesian clinical trials. With a focus on Bayesian designs that achieve good power and Type I error, the next chapters present Bayesian tools useful in early (Phase I) and middle (Phase II) clinical trials as well as two recent Bayesian adaptive Phase II studies: the BATTLE and ISP...

  7. Satellite Video Point-target Tracking in Combination with Motion Smoothness Constraint and Grayscale Feature

    Directory of Open Access Journals (Sweden)

    WU Jiaqi

    2017-09-01

    Full Text Available In view of the problem of satellite video point-target tracking, a Bayesian classification tracking method with a motion smoothness constraint, named Bayesian MoST, is proposed. The idea of naive Bayesian classification, which does not rely on any prior probability of the target, is introduced. Under the motion smoothness constraint, grayscale similarity features are used to describe the likelihood of the target. A simplified conditional probability correction model of the classifier is then created according to the independence assumption of Bayes' theorem, and the tracking target position is determined by estimating the target posterior probability on the basis of this model. Meanwhile, a Kalman filter is used as an assistance and optimization step to enhance the robustness of the tracking process. The proposed method is validated in six experiments using SkySat and JL1H video, each with two segments. The experimental results show that the proposed Bayesian MoST method performs well: the tracking precision is about 90% and the tracking trajectory is smooth. The method can satisfy the needs of subsequent advanced processing of satellite video.

  8. Chaotic behaviour from smooth and non-smooth optical solitons ...

    Indian Academy of Sciences (India)

    2016-07-14

    Jul 14, 2016 ... obtain the preferable media to reduce the influence of perturbation of solitons in optical fibre propagation. This paper is organized as follows. In §2, we give the smooth and compacton solitons of the perturbation system by phase diagram analysis. In §3, we discuss the chaotic behaviour of the perturbed ...

  9. Current trends in Bayesian methodology with applications

    CERN Document Server

    Upadhyay, Satyanshu K; Dey, Dipak K; Loganathan, Appaia

    2015-01-01

    Collecting Bayesian material scattered throughout the literature, Current Trends in Bayesian Methodology with Applications examines the latest methodological and applied aspects of Bayesian statistics. The book covers biostatistics, econometrics, reliability and risk analysis, spatial statistics, image analysis, shape analysis, Bayesian computation, clustering, uncertainty assessment, high-energy astrophysics, neural networking, fuzzy information, objective Bayesian methodologies, empirical Bayes methods, small area estimation, and many more topics.Each chapter is self-contained and focuses on

  10. MERGING DIGITAL SURFACE MODELS IMPLEMENTING BAYESIAN APPROACHES

    Directory of Open Access Journals (Sweden)

    H. Sadeq

    2016-06-01

    Full Text Available In this research, DSMs from different sources have been merged. The merging is based on a probabilistic model using a Bayesian approach. The implemented data have been sourced from very high resolution satellite imagery sensors (e.g. WorldView-1 and Pleiades. It is deemed preferable to use a Bayesian approach when the data obtained from the sensors are limited and it is difficult or very costly to obtain many measurements; the problem of the lack of data can then be solved by introducing a priori estimates. To infer the prior data, the roofs of the buildings are assumed to be smooth, and for that purpose local entropy has been implemented. In addition to the a priori estimates, GNSS RTK measurements have been collected in the field and are used as check points to assess the quality of the DSMs and to validate the merging result. The model has been applied in the West End of Glasgow, which contains different kinds of buildings, such as flat-roofed and hipped-roofed buildings. Both quantitative and qualitative methods have been employed to validate the merged DSM. The validation results have shown that the model successfully improved the quality of the DSMs and some characteristics such as the roof surfaces, which consequently led to better representations. In addition, the developed model has been compared with the well-established maximum likelihood model and showed similar quantitative statistical results and better qualitative results. Although the proposed model has been applied to DSMs derived from satellite imagery, it can be applied to DSMs from any other source.

  11. BSR: B-spline atomic R-matrix codes

    Science.gov (United States)

    Zatsarinny, Oleg

    2006-02-01

    BSR is a general program to calculate atomic continuum processes using the B-spline R-matrix method, including electron-atom and electron-ion scattering, and radiative processes such as bound-bound transitions, photoionization and polarizabilities. The calculations can be performed in LS-coupling or in an intermediate-coupling scheme by including terms of the Breit-Pauli Hamiltonian. New version program summaryTitle of program: BSR Catalogue identifier: ADWY Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADWY Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Computers on which the program has been tested: Microway Beowulf cluster; Compaq Beowulf cluster; DEC Alpha workstation; DELL PC Operating systems under which the new version has been tested: UNIX, Windows XP Programming language used: FORTRAN 95 Memory required to execute with typical data: Typically 256-512 Mwords. Since all the principal dimensions are allocatable, the available memory defines the maximum complexity of the problem No. of bits in a word: 8 No. of processors used: 1 Has the code been vectorized or parallelized?: no No. of lines in distributed program, including test data, etc.: 69 943 No. of bytes in distributed program, including test data, etc.: 746 450 Peripherals used: scratch disk store; permanent disk store Distribution format: tar.gz Nature of physical problem: This program uses the R-matrix method to calculate electron-atom and electron-ion collision processes, with options to calculate radiative data, photoionization, etc. The calculations can be performed in LS-coupling or in an intermediate-coupling scheme, with options to include Breit-Pauli terms in the Hamiltonian. Method of solution: The R-matrix method is used [P.G. Burke, K.A. Berrington, Atomic and Molecular Processes: An R-Matrix Approach, IOP Publishing, Bristol, 1993; P.G. Burke, W.D. Robb, Adv. At. Mol. Phys. 11 (1975) 143; K.A. Berrington, W.B. Eissner, P.H. 
Norrington, Comput

  12. Bayesian Inference: with ecological applications

    Science.gov (United States)

    Link, William A.; Barker, Richard J.

    2010-01-01

    This text provides a mathematically rigorous yet accessible and engaging introduction to Bayesian inference, with relevant examples that will be of interest to biologists working in the fields of ecology, wildlife management and environmental studies, as well as students in advanced undergraduate statistics. This text opens the door to Bayesian inference, taking advantage of modern computational efficiencies and easily accessible software to evaluate complex hierarchical models.

  13. Bayesian image restoration, using configurations

    OpenAIRE

    Thorarinsdottir, Thordis

    2006-01-01

    In this paper, we develop a Bayesian procedure for removing noise from images that can be viewed as noisy realisations of random sets in the plane. The procedure utilises recent advances in configuration theory for noise free random sets, where the probabilities of observing the different boundary configurations are expressed in terms of the mean normal measure of the random set. These probabilities are used as prior probabilities in a Bayesian image restoration approach. Estimation of the re...

  14. Smoothing and projecting age-specific probabilities of death by TOPALS

    Directory of Open Access Journals (Sweden)

    Joop de Beer

    2012-10-01

    Full Text Available BACKGROUND TOPALS is a new relational model for smoothing and projecting age schedules. The model is operationally simple, flexible, and transparent. OBJECTIVE This article demonstrates how TOPALS can be used for both smoothing and projecting age-specific mortality for 26 European countries and compares the results of TOPALS with those of other smoothing and projection methods. METHODS TOPALS uses a linear spline to describe the ratios between the age-specific death probabilities of a given country and a standard age schedule. For smoothing purposes I use the average of death probabilities over 15 Western European countries as standard, whereas for projection purposes I use an age schedule of 'best practice' mortality. A partial adjustment model projects how quickly the death probabilities move in the direction of the best-practice level of mortality. RESULTS On average, TOPALS performs better than the Heligman-Pollard model and the Brass relational method in smoothing mortality age schedules. TOPALS can produce projections that are similar to those of the Lee-Carter method, but can easily be used to produce alternative scenarios as well. This article presents three projections of life expectancy at birth for the year 2060 for 26 European countries. The Baseline scenario assumes a continuation of the past trend in each country, the Convergence scenario assumes that there is a common trend across European countries, and the Acceleration scenario assumes that the future decline of death probabilities will exceed that in the past. The Baseline scenario projects that average European life expectancy at birth will increase to 80 years for men and 87 years for women in 2060, whereas the Acceleration scenario projects an increase to 90 and 93 years respectively. CONCLUSIONS TOPALS is a useful new tool for demographers for both smoothing age schedules and making scenarios.
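TOPALS's central device, describing the log ratio between observed death probabilities and a standard schedule with a linear spline, can be sketched with hat functions built from `np.interp`. The Gompertz-type standard schedule, the deviation, and the knot positions below are invented for illustration, not the European mortality data of the study.

```python
import numpy as np

def hat_basis(x, knots):
    """Piecewise-linear ("hat") spline basis: one basis function per knot."""
    B = np.empty((len(x), len(knots)))
    for j in range(len(knots)):
        e = np.zeros(len(knots))
        e[j] = 1.0
        B[:, j] = np.interp(x, knots, e)
    return B

def topals_smooth(ages, log_q_obs, log_q_std, knots):
    """Smooth log death probabilities: standard schedule + linear spline in the ratio."""
    B = hat_basis(ages, knots)
    coef, *_ = np.linalg.lstsq(B, log_q_obs - log_q_std, rcond=None)
    return log_q_std + B @ coef

# toy schedule: Gompertz-like standard, a smooth country-specific deviation, noise
rng = np.random.default_rng(1)
ages = np.arange(0, 101, dtype=float)
log_q_std = -9.0 + 0.085 * ages                      # standard log death probabilities
deviation = 0.3 * np.sin(ages / 30.0)                # smooth deviation from the standard
log_q_obs = log_q_std + deviation + 0.15 * rng.standard_normal(ages.size)
knots = np.array([0.0, 25.0, 50.0, 75.0, 100.0])
log_q_fit = topals_smooth(ages, log_q_obs, log_q_std, knots)
```

Because the spline only models the ratio to a plausible standard, a handful of knots suffices to smooth a full age schedule.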

  15. T-Spline Based Unifying Registration Procedure for Free-Form Surface Workpieces in Intelligent CMM

    Directory of Open Access Journals (Sweden)

    Zhenhua Han

    2017-10-01

    Full Text Available With the development of the modern manufacturing industry, free-form surfaces are widely used in various fields, and the automatic detection of free-form surfaces is an important function of future intelligent three-coordinate measuring machines (CMMs). To improve the intelligence of CMMs, a new visual system is designed based on the characteristics of CMMs. A unified model of the free-form surface is proposed based on T-splines, and a discretization method for the T-spline surface model is given. Under this discretization, the position and orientation of the workpiece are recognized by point cloud registration. A high-accuracy evaluation method between the measured point cloud and the T-spline surface model is also proposed. The experimental results demonstrate that the proposed method has the potential to realize the automatic detection of different free-form surfaces and to improve the intelligence of CMMs.

  16. Modeling Seismic Wave Propagation Using Time-Dependent Cauchy-Navier Splines

    Science.gov (United States)

    Kammann, P.

    2005-12-01

    Our intention is the modeling of seismic wave propagation from displacement measurements by seismographs at the Earth's surface. The elastic behaviour of the Earth is usually described by the Cauchy-Navier equation. A system of fundamental solutions for the Fourier-transformed Cauchy-Navier equation is given by the Hansen vectors L, M and N. We apply an inverse Fourier transform to obtain an orthonormal function system depending on time and space. By means of this system we construct certain splines, which are then used for interpolating the given data. Compared to polynomial interpolation, splines have the advantage that they minimize a curvature measure and are therefore smoother. First, we test this method on a synthetic wave function. Afterwards, we apply it to realistic earthquake data. (P. Kammann, Modelling Seismic Wave Propagation Using Time-Dependent Cauchy-Navier Splines, Diploma Thesis, Geomathematics Group, Department of Mathematics, University of Kaiserslautern, 2005)

  17. Cubic spline interpolation of functions with high gradients in boundary layers

    Science.gov (United States)

    Blatov, I. A.; Zadorin, A. I.; Kitaeva, E. V.

    2017-01-01

    The cubic spline interpolation of grid functions with high-gradient regions is considered. Uniform meshes are proved to be inefficient for this purpose. In the case of widely applied piecewise uniform Shishkin meshes, asymptotically sharp two-sided error estimates are obtained in the class of functions with an exponential boundary layer. It is proved that the error estimates of traditional spline interpolation are not uniform with respect to a small parameter, and the error can increase indefinitely as the small parameter tends to zero, while the number of nodes N is fixed. A modified cubic interpolation spline is proposed, for which O((ln N/N)4) error estimates that are uniform with respect to the small parameter are obtained.
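The phenomenon described in this abstract is easy to reproduce numerically: an ordinary cubic spline on a uniform mesh fails inside the layer, while a piecewise-uniform Shishkin mesh with transition point σ = min(1/2, 2ε ln N) resolves it. This sketch uses scipy's standard cubic spline, not the modified spline of the paper, and an assumed boundary-layer function exp(-x/ε).

```python
import numpy as np
from scipy.interpolate import CubicSpline

eps = 1e-2
f = lambda x: np.exp(-x / eps)          # exponential boundary layer at x = 0
N = 16                                  # number of mesh intervals
xx = np.linspace(0.0, 1.0, 2001)        # fine evaluation grid

# uniform mesh: the layer falls inside a single interval
x_uni = np.linspace(0.0, 1.0, N + 1)
err_uni = np.max(np.abs(CubicSpline(x_uni, f(x_uni))(xx) - f(xx)))

# Shishkin mesh: N/2 intervals in [0, sigma], N/2 intervals in [sigma, 1]
sigma = min(0.5, 2.0 * eps * np.log(N))
x_shi = np.concatenate([np.linspace(0.0, sigma, N // 2 + 1),
                        np.linspace(sigma, 1.0, N // 2 + 1)[1:]])
err_shi = np.max(np.abs(CubicSpline(x_shi, f(x_shi))(xx) - f(xx)))
```

`err_shi` comes out far below `err_uni`; the abstract's further point is that even the Shishkin-mesh error of the traditional spline is not uniform in ε, which motivates the modified spline.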

  18. B-spline design of digital FIR filter using evolutionary computation techniques

    Science.gov (United States)

    Swain, Manorama; Panda, Rutuparna

    2011-10-01

    In the forthcoming era, digital filters are becoming a true replacement for analog filter designs. In this paper we examine a design method for FIR filters using global search optimization techniques known as evolutionary computation, via genetic algorithms and bacterial foraging, where the filter design is considered as an optimization problem. An effort is made to design maximally flat filters using a generalized B-spline window. The key to our success is the fact that the bandwidth of the filter response can be modified by changing tuning parameters incorporated within the B-spline function. A direct approach has been deployed to design B-spline-window-based FIR digital filters. Four parameters (order, width, length and tuning parameter) have been optimized using GA and EBFS. It is observed that the desired response can be obtained with lower-order FIR filters with optimal width and tuning parameters.
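A hedged sketch of the window construction: a B-spline window of order p can be produced by convolving a rectangular window with itself p − 1 times (order 1 gives rectangular, order 2 triangular, and so on), and windowing an ideal sinc yields a linear-phase lowpass FIR filter. The parameters below are fixed by hand rather than searched with GA or bacterial foraging.

```python
import numpy as np

def bspline_window(order, m):
    """Order-p B-spline window: (p-1)-fold self-convolution of a length-m boxcar."""
    w = np.ones(m)
    for _ in range(order - 1):
        w = np.convolve(w, np.ones(m))
    return w / w.max()          # length order*(m-1) + 1, peak normalized to 1

def bspline_fir_lowpass(order, m, cutoff):
    """Windowed-sinc lowpass FIR (cutoff as a fraction of the sampling rate)."""
    w = bspline_window(order, m)
    n = np.arange(w.size) - (w.size - 1) / 2.0
    h = 2.0 * cutoff * np.sinc(2.0 * cutoff * n) * w
    return h / h.sum()          # normalize to unit gain at DC

def freq_response(h, f):
    n = np.arange(h.size)
    return np.sum(h * np.exp(-2j * np.pi * f * n))

h = bspline_fir_lowpass(order=3, m=21, cutoff=0.1)   # a 61-tap filter
```

Changing `order` and `m` here plays the role of the tuning parameters in the abstract: higher order lowers the window's sidelobes and widens the transition band.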

  19. Development of Technology Parameter Towards Shipbuilding Productivity Predictor Using Cubic Spline Approach

    Directory of Open Access Journals (Sweden)

    Bagiyo Suwasono

    2011-05-01

    Full Text Available The ability of production processes associated with state-of-the-art technology allows shipbuilding to be customized with modern equipment, which affects the level of productivity and competitiveness. This study proposes a nonparametric regression cubic spline approach with 1 knot, 2 knots, and 3 knots. The application program Tibco Spotfire S+ showed that a cubic spline with 2 knots (4.25 and 4.50) gave the best result, with GCV = 56.21556 and R2 = 94.03%. Estimation results of the cubic spline with 2 knots: PT. Batamec shipyard = 35.61 MH/CGT, PT. Dok & Perkapalan Surabaya = 27.49 MH/CGT, PT. Karimun Sembawang Shipyard = 27.49 MH/CGT, and PT. PAL Indonesia = 19.89 MH/CGT.
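A regression spline with fixed knots, and the GCV score used for knot selection in this study, can be reproduced with a truncated power basis. The data and knot locations below are synthetic, not the shipyard productivity data.

```python
import numpy as np

def cubic_spline_basis(x, knots):
    """Truncated power basis for a cubic regression spline: 1, x, x^2, x^3, (x-k)+^3."""
    cols = [np.ones_like(x), x, x ** 2, x ** 3]
    cols += [np.clip(x - k, 0.0, None) ** 3 for k in knots]
    return np.column_stack(cols)

def fit_with_gcv(x, y, knots):
    X = cubic_spline_basis(x, knots)
    H = X @ np.linalg.pinv(X)            # hat matrix: y_hat = H y
    y_hat = H @ y
    n = len(y)
    rss = np.sum((y - y_hat) ** 2)
    gcv = (rss / n) / (1.0 - np.trace(H) / n) ** 2   # generalized cross-validation
    r2 = 1.0 - rss / np.sum((y - y.mean()) ** 2)
    return y_hat, gcv, r2

rng = np.random.default_rng(2)
x = np.linspace(0.0, 10.0, 80)
y = np.sin(x) + 0.1 * rng.standard_normal(x.size)    # synthetic response
y_hat, gcv, r2 = fit_with_gcv(x, y, knots=[2.5, 5.0, 7.5])
```

Comparing `gcv` across candidate knot sets, as the study does for 1, 2, and 3 knots, selects the model with the best bias-variance trade-off.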

  20. Error Estimates Derived from the Data for Least-Squares Spline Fitting

    Energy Technology Data Exchange (ETDEWEB)

    Jerome Blair

    2007-06-25

    The use of least-squares fitting by cubic splines for the purpose of noise reduction in measured data is studied. Splines with variable mesh size are considered. The error, the difference between the input signal and its estimate, is divided into two sources: the R-error, which depends only on the noise and increases with decreasing mesh size, and the F-error, which depends only on the signal and decreases with decreasing mesh size. The estimation of both errors as a function of time is demonstrated. The R-error estimation requires knowledge of the statistics of the noise and uses well-known methods. The primary contribution of the paper is a method for estimating the F-error that requires no prior knowledge of the signal except that it has four derivatives. It is calculated from the difference between two different spline fits to the data and is illustrated with Monte Carlo simulations and with an example.
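The paper's two-fit idea can be illustrated directly: fit the same noisy data with least-squares cubic splines at mesh sizes h and h/2, and take their difference as a data-only estimate of the F-error of the coarser fit. The signal, noise level, and knot spacings below are illustrative choices.

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

rng = np.random.default_rng(3)
x = np.linspace(0.0, 1.0, 401)
signal = np.sin(2.0 * np.pi * x) + 0.5 * np.sin(6.0 * np.pi * x)
y = signal + 0.02 * rng.standard_normal(x.size)      # noisy measurement

def lsq_spline_fit(mesh):
    """Least-squares cubic spline fit with uniform interior knot spacing `mesh`."""
    knots = np.arange(mesh, 1.0 - mesh / 2.0, mesh)
    return LSQUnivariateSpline(x, y, knots, k=3)(x)

fit_coarse = lsq_spline_fit(0.10)   # mesh size h
fit_fine = lsq_spline_fit(0.05)     # mesh size h / 2
# data-only estimate of the coarse fit's F-error (signal truncation error):
f_error_est = fit_coarse - fit_fine
```

Because the F-error shrinks rapidly with the mesh size while the R-error grows, the coarse-minus-fine difference is dominated by the coarse fit's F-error, which is the effect the paper exploits.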

  1. Adaptive spline autoregression threshold method in forecasting Mitsubishi car sales volume at PT Srikandi Diamond Motors

    Science.gov (United States)

    Susanti, D.; Hartini, E.; Permana, A.

    2017-01-01

    Growing sales competition between companies in Indonesia means that every company needs proper planning in order to win the competition with other companies. One thing that can be done in designing such a plan is to forecast car sales for the next few periods, since the inventory of cars to be sold should be proportional to the number of cars needed. One of the methods that can be used to obtain a correct forecast is Adaptive Spline Threshold Autoregression (ASTAR). Therefore, this discussion focuses on the use of the ASTAR method in forecasting the volume of car sales at PT. Srikandi Diamond Motors using time series data. In this research, forecasting with the ASTAR method produces approximately correct values.

  2. Nonlinear smoothing for random fields

    NARCIS (Netherlands)

    Aihara, ShinIchi; Bagchi, Arunabha

    1995-01-01

    Stochastic nonlinear elliptic partial differential equations with white noise disturbances are studied in the countably additive measure set up. Introducing the Onsager-Machlup function to the system model, the smoothing problem for maximizing the modified likelihood functional is solved and the

  3. Preconditioning cubic spline collocation method by FEM and FDM for elliptic equations

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Sang Dong [KyungPook National Univ., Taegu (Korea, Republic of)

    1996-12-31

    In this talk we discuss finite element and finite difference techniques for the cubic spline collocation method. For this purpose, we consider the uniformly elliptic operator A defined by Au := -Δu + a_1 u_x + a_2 u_y + a_0 u in Ω (the unit square) with Dirichlet or Neumann boundary conditions, and its discretization based on Hermite cubic spline spaces and collocation at the Gauss points. Using an interpolatory basis with support on the Gauss points, one obtains the matrix A_N (h = 1/N).

  4. Cubic B-spline calibration for 3D super-resolution measurements using astigmatic imaging.

    Science.gov (United States)

    Proppert, Sven; Wolter, Steve; Holm, Thorge; Klein, Teresa; van de Linde, Sebastian; Sauer, Markus

    2014-05-05

    In recent years three-dimensional (3D) super-resolution fluorescence imaging by single-molecule localization (localization microscopy) has gained considerable interest because of its simple implementation and high optical resolution. Astigmatic and biplane imaging are experimentally simple methods to engineer a 3D-specific point spread function (PSF), but existing evaluation methods have proven problematic in practical application. Here we introduce the use of cubic B-splines to model the relationship of axial position and PSF width in the above mentioned approaches and compare the performance with existing methods. We show that cubic B-splines are the first method that can combine precision, accuracy and simplicity.
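A minimal sketch of B-spline-based axial calibration, with invented defocus curves rather than measured PSF widths: represent width_x(z) and width_y(z) as cubic B-splines, then localize a molecule by finding the z whose calibrated widths best match a measured pair.

```python
import numpy as np
from scipy.interpolate import splrep, splev

# synthetic astigmatic calibration: PSF widths in x and y versus z (toy defocus curves)
z_cal = np.linspace(-500.0, 500.0, 41)                  # axial positions, nm
wx_cal = np.sqrt(1.0 + ((z_cal - 150.0) / 300.0) ** 2)  # width_x, minimal at z = +150
wy_cal = np.sqrt(1.0 + ((z_cal + 150.0) / 300.0) ** 2)  # width_y, minimal at z = -150
tck_x = splrep(z_cal, wx_cal)    # cubic B-spline representation of width_x(z)
tck_y = splrep(z_cal, wy_cal)    # cubic B-spline representation of width_y(z)

def localize_z(wx, wy):
    """Axial position whose calibrated widths best match the measured pair."""
    zz = np.linspace(z_cal[0], z_cal[-1], 2001)
    d2 = (splev(zz, tck_x) - wx) ** 2 + (splev(zz, tck_y) - wy) ** 2
    return zz[np.argmin(d2)]

# a "measurement" generated from the same toy curves at a known depth
z_true = 87.0
wx_meas = np.sqrt(1.0 + ((z_true - 150.0) / 300.0) ** 2)
wy_meas = np.sqrt(1.0 + ((z_true + 150.0) / 300.0) ** 2)
z_hat = localize_z(wx_meas, wy_meas)
```

The spline makes no assumption about the functional form of the defocus curves, which is the flexibility the abstract credits for combining precision, accuracy, and simplicity.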

  5. Natural spline interpolation and exponential parameterization for length estimation of curves

    Science.gov (United States)

    Kozera, R.; Wilkołazka, M.

    2017-07-01

    This paper tackles the problem of estimating the length of a regular parameterized curve γ from an ordered sample of interpolation points in arbitrary Euclidean space by a natural spline. The corresponding tabular parameters are not given and are approximated by the so-called exponential parameterization (depending on λ ∈ [0, 1]). The respective convergence orders α(λ) for estimating the length of γ are established for curves sampled more-or-less uniformly. The numerical experiments confirm a slow convergence order α(λ) = 2 for all λ ∈ [0, 1) and a cubic order α(1) = 3 once the natural spline is used.
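A sketch of the construction, with an invented non-uniform sampling of a quarter circle: build knots by exponential parameterization, interpolate with a natural cubic spline, and integrate the speed to estimate length. λ = 1 is cumulative chord length; λ = 0 is uniform parameterization.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def natural_spline_length(points, lam):
    """Length of a natural cubic spline through `points`, with knot spacing
    from exponential parameterization: t_{i+1} - t_i = |p_{i+1} - p_i|^lam."""
    points = np.asarray(points, dtype=float)
    chords = np.linalg.norm(np.diff(points, axis=0), axis=1)
    t = np.concatenate([[0.0], np.cumsum(chords ** lam)])
    cs = CubicSpline(t, points, bc_type='natural')
    tt = np.linspace(t[0], t[-1], 20001)
    speed = np.linalg.norm(cs(tt, 1), axis=1)               # |gamma'(t)|
    return np.sum(0.5 * (speed[:-1] + speed[1:]) * np.diff(tt))  # trapezoid rule

# quarter unit circle sampled non-uniformly; true length is pi/2
u = np.linspace(0.0, 1.0, 17) ** 1.5 * (np.pi / 2.0)
pts = np.column_stack([np.cos(u), np.sin(u)])
len_chord = natural_spline_length(pts, lam=1.0)   # cumulative chord (lambda = 1)
len_unif = natural_spline_length(pts, lam=0.0)    # uniform parameterization
```

Note that the sampling must be non-uniform for λ to matter: with equal chords, every λ yields the same knots up to scale.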

  6. About a family of C2 splines with one free generating function

    Directory of Open Access Journals (Sweden)

    Igor Verlan

    2005-01-01

    Full Text Available The problem of interpolation of a discrete set of data on the interval [a, b] representing the function f is investigated. A family of C² splines with one free generating function is introduced in order to solve this problem. Cubic C² splines belong to this family. The conditions which the generating function must satisfy in order to obtain explicit interpolants are presented, and examples of generating functions are given. Mathematics Subject Classification: 2000: 65D05, 65D07, 41A05, 41A15.

  7. GA Based Rational cubic B-Spline Representation for Still Image Interpolation

    Directory of Open Access Journals (Sweden)

    Samreen Abbas

    2016-12-01

    Full Text Available In this paper, an image interpolation scheme is designed for 2D natural images. A local-support rational cubic spline with control parameters, used as the interpolatory function, is optimized using a Genetic Algorithm (GA). The GA is applied to determine the appropriate values of the control parameters used in the description of the rational cubic spline. Three state-of-the-art Image Quality Assessment (IQA) models, along with a traditional one, are employed for comparison with existing image interpolation schemes and for a perceptual quality check of the resulting images. The results show that the proposed scheme outperforms the existing ones.

  8. A Discontinuous Unscented Kalman Filter for Non-Smooth Dynamic Problems

    Directory of Open Access Journals (Sweden)

    Manolis N. Chatzis

    2017-10-01

    Full Text Available For a number of applications, including real-time damage diagnostics as well as control, online methods, i.e., methods which may be implemented on-the-fly, are necessary. Within a system identification context, this implies the adoption of filtering algorithms, typically of the Kalman or Bayesian class. For engineered structures, damage or deterioration may often manifest in relation to phenomena such as fracture, plasticity, impact, or friction. Despite their different nature, these phenomena share a common denominator: switching behavior upon the occurrence of discrete events. Such events include, for example, crack initiation, transitions between elastic and plastic response, or between stick and slide modes. Typically, the state-space equations of such models are non-differentiable at these events, rendering the corresponding systems non-smooth. Identification of non-smooth systems poses greater difficulties than smooth problems of similar computational complexity. Up to a certain extent, this may be attributed to the varying identifiability of such systems, which violates a basic requirement of online Bayesian identification algorithms, thus affecting their convergence for non-smooth problems. Herein, a treatment to this problem is proposed by the authors, termed the Discontinuous D-modification, where unidentifiable parameters are acknowledged and temporarily excluded from the problem formulation. In this work, the D-modification is illustrated for the case of the Unscented Kalman Filter (UKF), resulting in a method termed DUKF, which proves superior in performance to the conventional and widely adopted alternative.

  9. Lectures on constructive approximation Fourier, spline, and wavelet methods on the real line, the sphere, and the ball

    CERN Document Server

    Michel, Volker

    2013-01-01

    Lectures on Constructive Approximation: Fourier, Spline, and Wavelet Methods on the Real Line, the Sphere, and the Ball focuses on spherical problems as they occur in the geosciences and medical imaging. It comprises the author’s lectures on classical approximation methods based on orthogonal polynomials and selected modern tools such as splines and wavelets. Methods for approximating functions on the real line are treated first, as they provide the foundations for the methods on the sphere and the ball and are useful for the analysis of time-dependent (spherical) problems. The author then examines the transfer of these spherical methods to problems on the ball, such as the modeling of the Earth’s or the brain’s interior. Specific topics covered include: * the advantages and disadvantages of Fourier, spline, and wavelet methods * theory and numerics of orthogonal polynomials on intervals, spheres, and balls * cubic splines and splines based on reproducing kernels * multiresolution analysis using wavelet...

  10. Bayesian Age-Period-Cohort Model of Lung Cancer Mortality

    Directory of Open Access Journals (Sweden)

    Bhikhari P. Tharu

    2015-09-01

    Full Text Available Background: The objective of this study was to analyze the time trend of lung cancer mortality in the population of the USA in 5-year intervals based on the most recent available data, namely up to 2010. Knowledge of mortality rates and their temporal trends is necessary to understand the cancer burden. Methods: A Bayesian Age-Period-Cohort model was fitted using Poisson regression with a histogram smoothing prior to decompose mortality rates by age at death, period at death, and birth cohort. Results: Mortality rates from lung cancer increased more rapidly from age 52 years, reaching on average 325 deaths annually at age 82. The mortality of younger cohorts was lower than that of older cohorts. The risk of lung cancer was lowered from the 1993 period to recent periods. Conclusions: The fitted Bayesian Age-Period-Cohort model with a histogram smoothing prior is capable of explaining the mortality rate of lung cancer. The reduction in carcinogens in cigarettes and the increase in smoking cessation from around 1960 might have led to the decreasing trend in lung cancer mortality after the 1993 calendar period.

  11. A Bayesian joint model of menstrual cycle length and fecundity.

    Science.gov (United States)

    Lum, Kirsten J; Sundaram, Rajeshwari; Buck Louis, Germaine M; Louis, Thomas A

    2016-03-01

    Menstrual cycle length (MCL) has been shown to play an important role in couple fecundity, which is the biologic capacity for reproduction irrespective of pregnancy intentions. However, a comprehensive assessment of its role requires a fecundity model that accounts for male and female attributes and the couple's intercourse pattern relative to the ovulation day. To this end, we employ a Bayesian joint model for MCL and pregnancy. MCLs follow a scale multiplied (accelerated) mixture model with Gaussian and Gumbel components; the pregnancy model includes MCL as a covariate and computes the cycle-specific probability of pregnancy in a menstrual cycle conditional on the pattern of intercourse and no previous fertilization. Day-specific fertilization probability is modeled using natural cubic splines. We analyze data from the Longitudinal Investigation of Fertility and the Environment Study (the LIFE Study), a couple-based prospective pregnancy study, and find a statistically significant quadratic relation between fecundity and menstrual cycle length, after adjustment for intercourse pattern and other attributes, including male semen quality, both partners' age, and active smoking status (determined by baseline cotinine level 100 ng/mL). We compare results to those produced by a more basic model and show the advantages of a more comprehensive approach. © 2015, The International Biometric Society.

  12. Bayesian seismic AVO inversion

    Energy Technology Data Exchange (ETDEWEB)

    Buland, Arild

    2002-07-01

    A new linearized AVO inversion technique is developed in a Bayesian framework. The objective is to obtain posterior distributions for P-wave velocity, S-wave velocity and density. Distributions for other elastic parameters can also be assessed, for example acoustic impedance, shear impedance and P-wave to S-wave velocity ratio. The inversion algorithm is based on the convolutional model and a linearized weak contrast approximation of the Zoeppritz equation. The solution is represented by a Gaussian posterior distribution with explicit expressions for the posterior expectation and covariance, hence exact prediction intervals for the inverted parameters can be computed under the specified model. The explicit analytical form of the posterior distribution provides a computationally fast inversion method. Tests on synthetic data show that all inverted parameters were almost perfectly retrieved when the noise approached zero. With realistic noise levels, acoustic impedance was the best determined parameter, while the inversion provided practically no information about the density. The inversion algorithm has also been tested on a real 3-D dataset from the Sleipner Field. The results show good agreement with well logs but the uncertainty is high. The stochastic model includes uncertainties of both the elastic parameters, the wavelet and the seismic and well log data. The posterior distribution is explored by Markov chain Monte Carlo simulation using the Gibbs sampler algorithm. The inversion algorithm has been tested on a seismic line from the Heidrun Field with two wells located on the line. The uncertainty of the estimated wavelet is low. In the Heidrun examples the effect of including uncertainty of the wavelet and the noise level was marginal with respect to the AVO inversion results. We have developed a 3-D linearized AVO inversion method with spatially coupled model parameters where the objective is to obtain posterior distributions for P-wave velocity, S
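For a linear forward model with Gaussian prior and Gaussian noise, the posterior referred to in this abstract has the standard closed form sketched below. The operator G, the prior, and the noise level are generic placeholders, not the convolutional AVO operator of the paper.

```python
import numpy as np

def linear_gaussian_posterior(G, d, m0, Cm, Ce):
    """Posterior mean and covariance for d = G m + e,
    with prior m ~ N(m0, Cm) and noise e ~ N(0, Ce)."""
    Cm_inv = np.linalg.inv(Cm)
    Ce_inv = np.linalg.inv(Ce)
    C_post = np.linalg.inv(G.T @ Ce_inv @ G + Cm_inv)
    m_post = C_post @ (G.T @ Ce_inv @ d + Cm_inv @ m0)
    return m_post, C_post

# toy linear inversion with nearly noise-free data
rng = np.random.default_rng(4)
n_m, n_d = 3, 40
m_true = np.array([1.0, -0.5, 0.25])
G = rng.standard_normal((n_d, n_m))                 # generic forward operator
d = G @ m_true + 1e-4 * rng.standard_normal(n_d)
m0 = np.zeros(n_m)
Cm = np.eye(n_m)                                    # prior covariance
Ce = 1e-8 * np.eye(n_d)                             # noise covariance
m_post, C_post = linear_gaussian_posterior(G, d, m0, Cm, Ce)
```

The explicit `C_post` is what gives exact prediction intervals without sampling; the MCMC machinery mentioned later in the abstract is needed only once wavelet and noise uncertainties are added to the model.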

  13. Smooth paths of conditional expectations

    OpenAIRE

    Andruchow, Esteban; Larotonda, Gabriel

    2010-01-01

    Let A be a von Neumann algebra with a finite trace $\tau$, represented in $H=L^2(A,\tau)$, and let $B_t\subset A$ be sub-algebras, for $t$ in an interval $I$. Let $E_t:A\to B_t$ be the unique $\tau$-preserving conditional expectation. We say that the path $t\mapsto E_t$ is smooth if for every $a\in A$ and $v \in H$, the map $$ I\

  14. Kernel Bayesian ART and ARTMAP.

    Science.gov (United States)

    Masuyama, Naoki; Loo, Chu Kiong; Dawood, Farhan

    2018-02-01

    Adaptive Resonance Theory (ART) is one of the successful approaches to resolving "the plasticity-stability dilemma" in neural networks, and its supervised learning model, ARTMAP, is a powerful tool for classification. Among several improvements, such as Fuzzy- or Gaussian-based models, the state-of-the-art model is the Bayesian-based one, which resolves the drawbacks of the others. However, the Bayesian approach is known to require high computational cost for high-dimensional data and large numbers of samples, and the covariance matrix in the likelihood becomes unstable. This paper introduces Kernel Bayesian ART (KBA) and ARTMAP (KBAM) by integrating Kernel Bayes' Rule (KBR) and the Correntropy Induced Metric (CIM) into Bayesian ART (BA) and ARTMAP (BAM), respectively, while maintaining the properties of BA and BAM. The kernel frameworks in KBA and KBAM avoid the curse of dimensionality. In addition, the covariance-free Bayesian computation by KBR provides efficient and stable computation in KBA and KBAM. Furthermore, the correntropy-based similarity measurement improves the noise-reduction ability even in high-dimensional spaces. Simulation experiments show that KBA has better self-organizing capability than BA, and KBAM provides superior classification ability compared to BAM. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. Bayesian exploration of recent Chilean earthquakes

    Science.gov (United States)

    Duputel, Zacharie; Jiang, Junle; Jolivet, Romain; Simons, Mark; Rivera, Luis; Ampuero, Jean-Paul; Liang, Cunren; Agram, Piyush; Owen, Susan; Ortega, Francisco; Minson, Sarah

    2016-04-01

    The South-American subduction zone is an exceptional natural laboratory for investigating the behavior of large faults over the earthquake cycle. It is also a playground for developing novel modeling techniques that combine different datasets. Coastal Chile was impacted by two major earthquakes in the last two years: the 2015 M 8.3 Illapel earthquake in central Chile and the 2014 M 8.1 Iquique earthquake that ruptured the central portion of the 1877 seismic gap in northern Chile. To gain a better understanding of the distribution of co-seismic slip for these two earthquakes, we derive joint kinematic finite fault models using a combination of static GPS offsets, radar interferograms, tsunami measurements, high-rate GPS waveforms and strong motion data. Our modeling approach follows a Bayesian formulation devoid of a priori smoothing, thereby allowing us to maximize the spatial resolution of the inferred family of models. The adopted approach also attempts to account for major sources of uncertainty in the Green's functions. The results reveal different rupture behaviors for the 2014 Iquique and 2015 Illapel earthquakes. The 2014 Iquique earthquake involved a sharp slip zone and did not rupture to the trench. The 2015 Illapel earthquake nucleated close to the coast and propagated toward the trench, with significant slip apparently reaching the trench or at least very close to it. At the inherent resolution of our models, we also present the relationship of the co-seismic models to the spatial distribution of foreshocks, aftershocks and fault coupling models.

  16. Bayesian estimation of dynamic matching function for U-V analysis in Japan

    Science.gov (United States)

    Kyo, Koki; Noda, Hideo; Kitagawa, Genshiro

    2012-05-01

    In this paper we propose a Bayesian method for analyzing unemployment dynamics. We derive a Beveridge curve for unemployment and vacancy (U-V) analysis from a Bayesian model based on a labor market matching function. In our framework, the efficiency of matching and the elasticities of new hiring with respect to unemployment and vacancy are regarded as time-varying parameters. To construct a flexible model and obtain reasonable estimates in an underdetermined estimation problem, we treat the time-varying parameters as random variables and introduce smoothness priors. The model is then described in a state space representation, enabling the parameter estimation to be carried out using the Kalman filter and fixed-interval smoothing. In such a representation, dynamic features of the cyclic unemployment rate and the structural-frictional unemployment rate can be accurately captured.
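
    The estimation scheme described above (a random-walk smoothness prior on a time-varying parameter, Kalman filtering forward, fixed-interval smoothing backward) can be sketched for a scalar local-level model; the variances q and r below are illustrative, not estimates from the paper:

```python
import numpy as np

# Local-level state-space sketch: latent parameter follows a random walk
# (smoothness prior) and is observed with noise. Kalman filter forward,
# Rauch-Tung-Striebel fixed-interval smoother backward.
rng = np.random.default_rng(1)
T, q, r = 200, 0.01, 1.0
x = np.cumsum(rng.normal(0, np.sqrt(q), T))   # latent random walk
y = x + rng.normal(0, np.sqrt(r), T)          # noisy observations

m = np.zeros(T); P = np.zeros(T)
m_pred = np.zeros(T); P_pred = np.zeros(T)
mp, Pp = 0.0, 10.0                            # vague initial state
for t in range(T):
    m_pred[t], P_pred[t] = mp, Pp
    k = Pp / (Pp + r)                         # Kalman gain
    m[t] = mp + k * (y[t] - mp)               # filtered mean
    P[t] = (1 - k) * Pp                       # filtered variance
    mp, Pp = m[t], P[t] + q                   # one-step prediction

ms = m.copy(); Ps = P.copy()
for t in range(T - 2, -1, -1):                # fixed-interval (RTS) smoothing
    J = P[t] / P_pred[t + 1]
    ms[t] = m[t] + J * (ms[t + 1] - m_pred[t + 1])
    Ps[t] = P[t] + J**2 * (Ps[t + 1] - P_pred[t + 1])

rmse_filter = np.sqrt(np.mean((m - x) ** 2))
rmse_smooth = np.sqrt(np.mean((ms - x) ** 2))
```

    The smoother uses future observations as well as past ones, so its error is typically below the filter's.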

  17. Physically Based Modeling and Simulation with Dynamic Spherical Volumetric Simplex Splines

    Science.gov (United States)

    Tan, Yunhao; Hua, Jing; Qin, Hong

    2009-01-01

    In this paper, we present a novel computational modeling and simulation framework based on dynamic spherical volumetric simplex splines. The framework can handle the modeling and simulation of genus-zero objects with real physical properties. In this framework, we first develop an accurate and efficient algorithm to reconstruct the high-fidelity digital model of a real-world object with spherical volumetric simplex splines which can represent with accuracy geometric, material, and other properties of the object simultaneously. With the tight coupling of Lagrangian mechanics, the dynamic volumetric simplex splines representing the object can accurately simulate its physical behavior because it can unify the geometric and material properties in the simulation. The visualization can be directly computed from the object’s geometric or physical representation based on the dynamic spherical volumetric simplex splines during simulation without interpolation or resampling. We have applied the framework for biomechanic simulation of brain deformations, such as brain shifting during the surgery and brain injury under blunt impact. We have compared our simulation results with the ground truth obtained through intra-operative magnetic resonance imaging and the real biomechanic experiments. The evaluations demonstrate the excellent performance of our new technique. PMID:20161636

  18. A modified linear algebraic approach to electron scattering using cubic splines

    International Nuclear Information System (INIS)

    Kinney, R.A.

    1986-01-01

    A modified linear algebraic approach to the solution of the Schrödinger equation for low-energy electron scattering is presented. The method uses a piecewise cubic-spline approximation of the wavefunction. Results in the static-potential and the static-exchange approximations for e⁻ + H s-wave scattering are compared with unmodified linear algebraic and variational linear algebraic methods. (author)

  19. Quadratic vs cubic spline-wavelets for image representations and compression

    NARCIS (Netherlands)

    P.C. Marais; E.H. Blake; A.A.M. Kuijk (Fons)

    1997-01-01

    The Wavelet Transform generates a sparse multi-scale signal representation which may be readily compressed. To implement such a scheme in hardware, one must have a computationally cheap method of computing the necessary transform data. The use of semi-orthogonal quadratic spline wavelets

  20. Quadratic vs cubic spline-wavelets for image representation and compression

    NARCIS (Netherlands)

    P.C. Marais; E.H. Blake; A.A.M. Kuijk (Fons)

    1994-01-01

    The Wavelet Transform generates a sparse multi-scale signal representation which may be readily compressed. To implement such a scheme in hardware, one must have a computationally cheap method of computing the necessary transform data. The use of semi-orthogonal quadratic spline wavelets

  1. Two Dimensional Complex Wavenumber Dispersion Analysis using B-Spline Finite Elements Method

    Directory of Open Access Journals (Sweden)

    Y. Mirbagheri

    2016-01-01

    Full Text Available  Grid dispersion is one of the criteria for validating the finite element method (FEM) in simulating acoustic or elastic wave propagation. The difficulty that usually arises when using this method for wave propagation problems is rooted in the discontinuous field, which causes the magnitude and direction of the wave speed vector to vary from one element to the adjacent one. To solve this problem and improve the response accuracy, two approaches are usually suggested: changing the integration method and changing the shape functions. Isogeometric analysis (IGA) is used in this research. In IGA, B-spline or non-uniform rational B-spline (NURBS) basis functions are used, which improve the response accuracy, especially in one-dimensional structural dynamics problems. At the boundary of two adjacent elements, the degree of continuity of the shape functions used in IGA can be higher than zero. In this research, for the first time, a two-dimensional grid dispersion analysis of wave propagation in plane strain problems using B-spline FEM is presented. Results indicate that, for the same number of degrees of freedom, the grid dispersion of B-spline FEM is about half that of classic FEM.

  2. Fractional and complex pseudo-splines and the construction of Parseval frames

    DEFF Research Database (Denmark)

    Massopust, Peter; Forster, Brigitte; Christensen, Ole

    2017-01-01

    in complex transform techniques for signal and image analyses. We also show that in analogue to the integer case, the generalized pseudo-splines lead to constructions of Parseval wavelet frames via the unitary extension principle. The regularity and approximation order of this new class of generalized...

  3. Groundwater head responses due to random stream stage fluctuations using basis splines

    Science.gov (United States)

    Knight, J. H.; Rassam, D. W.

    2007-06-01

    Stream-aquifer interactions are becoming increasingly important processes in water resources and riparian management. The linearized Boussinesq equation describes the transient movement of a groundwater free surface in unconfined flow. Some standard solutions are those corresponding to an input which is a delta function impulse, or to its integral, a unit step function in the time domain. For more complicated inputs, the response can be expressed as a convolution integral, which must be evaluated numerically. When the input is a time series of measured data, a spline function or piecewise polynomial can easily be fitted to the data. Any such spline function can be expressed in terms of a finite series of basis splines with numerical coefficients. The analytical groundwater response functions corresponding to these basis splines are presented, thus giving a direct and accurate way to calculate the groundwater response for a random time series input representing the stream stage. We use the technique to estimate responses due to a random stream stage time series and show that the predicted heads compare favorably to those obtained from numerical simulations using the Modular Three-Dimensional Finite-Difference Ground-Water Flow Model (MODFLOW); we then demonstrate how to calculate residence times used for estimating riparian denitrification during bank storage.
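
    Expressing a fitted spline as a finite series of basis splines with numerical coefficients, as this record does before pairing each basis function with an analytical response, can be sketched with the Cox-de Boor recursion; the knot layout and the synthetic stage series below are illustrative:

```python
import numpy as np

# Build a cubic B-spline basis matrix by the Cox-de Boor recursion and fit
# basis coefficients to a noisy "stream stage" series by least squares.
def bspline_basis(x, knots, degree):
    """Return B with B[i, j] = B_j(x[i]) for the B-splines on `knots`."""
    x = np.asarray(x)
    n = len(knots) - degree - 1                 # number of basis functions
    B = np.zeros((len(x), len(knots) - 1))
    for j in range(len(knots) - 1):             # degree 0: interval indicators
        B[:, j] = (x >= knots[j]) & (x < knots[j + 1])
    for d in range(1, degree + 1):              # Cox-de Boor recursion
        Bn = np.zeros((len(x), len(knots) - d - 1))
        for j in range(len(knots) - d - 1):
            den1 = knots[j + d] - knots[j]
            den2 = knots[j + d + 1] - knots[j + 1]
            if den1 > 0:
                Bn[:, j] += (x - knots[j]) / den1 * B[:, j]
            if den2 > 0:
                Bn[:, j] += (knots[j + d + 1] - x) / den2 * B[:, j + 1]
        B = Bn
    return B[:, :n]

t = np.linspace(0, 10, 200, endpoint=False)     # times (endpoint excluded)
stage = np.sin(t) + 0.05 * np.random.default_rng(2).normal(size=t.size)
degree = 3
inner = np.linspace(0, 10, 11)
knots = np.r_[[0.0] * degree, inner, [10.0] * degree]   # clamped knot vector
B = bspline_basis(t, knots, degree)
coef, *_ = np.linalg.lstsq(B, stage, rcond=None)        # basis coefficients
fit = B @ coef
```

    Each column of `B` is one basis spline; once the coefficients are known, any linear response (such as a convolution with an aquifer impulse response) can be applied basis-by-basis.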

  4. B-Spline Approximations of the Gaussian, their Gabor Frame Properties, and Approximately Dual Frames

    DEFF Research Database (Denmark)

    Christensen, Ole; Kim, Hong Oh; Kim, Rae Young

    2017-01-01

    of a very simple form, leading to "almost perfect reconstruction" within any desired error tolerance whenever the product ab is sufficiently small. In contrast, the known (exact) dual windows have a very complicated form. A similar analysis is sketched with the scaled B-splines replaced by certain

  5. Integration by cell algorithm for Slater integrals in a spline basis

    International Nuclear Information System (INIS)

    Qiu, Y.; Fischer, C.F.

    1999-01-01

    An algorithm for evaluating Slater integrals in a B-spline basis is introduced. Based on the piecewise property of the B-splines, the algorithm divides the two-dimensional (r1, r2) region into a number of rectangular cells according to the chosen grid and implements the two-dimensional integration over each individual cell using Gaussian quadrature. Over the off-diagonal cells, the integrands are separable so that each two-dimensional cell-integral is reduced to a product of two one-dimensional integrals. Furthermore, the scaling invariance of the B-splines in the logarithmic region of the chosen grid is fully exploited such that only some of the cell integrations need to be implemented. The values of given Slater integrals are obtained by assembling the cell integrals. This algorithm significantly improves the efficiency and accuracy of the traditional method that relies on the solution of differential equations and renders the B-spline method more effective when applied to multi-electron atomic systems
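
    The key step above, that a separable integrand over an off-diagonal cell reduces to a product of two one-dimensional Gauss-Legendre integrals, can be sketched as follows (the integrand is an illustrative exponential, not an actual Slater-integral kernel):

```python
import numpy as np

# Product Gauss-Legendre quadrature over one rectangular off-diagonal cell:
# for a separable integrand f(r1, r2) = g(r1) h(r2), the 2-D cell integral
# is the product of two 1-D quadratures.
def gauss_1d(f, a, b, n=8):
    x, w = np.polynomial.legendre.leggauss(n)   # nodes/weights on [-1, 1]
    xm, xr = 0.5 * (a + b), 0.5 * (b - a)       # map to [a, b]
    return xr * np.sum(w * f(xm + xr * x))

g = lambda r: r**2 * np.exp(-r)                 # illustrative radial factor
h = lambda r: np.exp(-2.0 * r)
cell = (1.0, 2.0, 3.0, 4.0)                     # r1 in [1,2], r2 in [3,4]
I_cell = gauss_1d(g, cell[0], cell[1]) * gauss_1d(h, cell[2], cell[3])

# closed-form check: ∫ r^2 e^{-r} dr = -(r^2 + 2r + 2) e^{-r},
#                    ∫ e^{-2r} dr = -e^{-2r} / 2
F = lambda r: -(r**2 + 2 * r + 2) * np.exp(-r)
H = lambda r: -0.5 * np.exp(-2.0 * r)
I_exact = (F(2.0) - F(1.0)) * (H(4.0) - H(3.0))
```

    Eight Gauss-Legendre points per dimension already reproduce the smooth exponential integrand to near machine precision on a unit-width cell.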

  6. Validating the Multidimensional Spline Based Global Aerodynamic Model for the Cessna Citation II

    NARCIS (Netherlands)

    De Visser, C.C.; Mulder, J.A.

    2011-01-01

    The validation of aerodynamic models created using flight test data is a time consuming and often costly process. In this paper a new method for the validation of global nonlinear aerodynamic models based on multivariate simplex splines is presented. This new method uses the unique properties of the

  7. Application of Cubic Box Spline Wavelets in the Analysis of Signal Singularities

    Directory of Open Access Journals (Sweden)

    Rakowski Waldemar

    2015-12-01

    Full Text Available In the subject literature, wavelets such as the Mexican hat (the second derivative of a Gaussian or the quadratic box spline are commonly used for the task of singularity detection. The disadvantage of the Mexican hat, however, is its unlimited support; the disadvantage of the quadratic box spline is a phase shift introduced by the wavelet, making it difficult to locate singular points. The paper deals with the construction and properties of wavelets in the form of cubic box splines which have compact and short support and which do not introduce a phase shift. The digital filters associated with cubic box wavelets that are applied in implementing the discrete dyadic wavelet transform are defined. The filters and the algorithme à trous of the discrete dyadic wavelet transform are used in detecting signal singularities and in calculating the measures of signal singularities in the form of a Lipschitz exponent. The article presents examples illustrating the use of cubic box spline wavelets in the analysis of signal singularities.
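
    The algorithme à trous mentioned above can be sketched with the cubic B-spline smoothing kernel [1, 4, 6, 4, 1]/16; this is the common starlet-style variant (details taken as differences of successive smooths), not necessarily the exact filter pair constructed in the paper:

```python
import numpy as np

# Algorithme à trous (undecimated dyadic wavelet transform): at level j the
# smoothing kernel taps are spaced 2^j apart ("holes"), and the wavelet
# coefficients are differences of successive smooths.
def atrous(signal, levels):
    h = np.array([1, 4, 6, 4, 1]) / 16.0        # cubic B-spline kernel
    a = np.asarray(signal, float)
    details = []
    for j in range(levels):
        step = 2 ** j
        k = np.zeros(4 * step + 1)
        k[::step] = h                           # insert 2^j - 1 zeros
        pad = 2 * step
        ext = np.pad(a, pad, mode="reflect")    # symmetric boundary handling
        a_next = np.convolve(ext, k, mode="same")[pad:-pad]
        details.append(a - a_next)              # wavelet coefficients, level j
        a = a_next
    return details, a                           # details + coarse residual

rng = np.random.default_rng(3)
x = np.sin(np.linspace(0, 4 * np.pi, 256)) + 0.1 * rng.normal(size=256)
details, coarse = atrous(x, 4)
recon = coarse + sum(details)                   # reconstruction is exact here
```

    Because each detail band is defined as the difference of two smooths, summing all bands plus the coarse residual telescopes back to the input, and singular points show up as large detail coefficients across scales.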

  8. Modelling subject-specific childhood growth using linear mixed-effect models with cubic regression splines.

    Science.gov (United States)

    Grajeda, Laura M; Ivanescu, Andrada; Saito, Mayuko; Crainiceanu, Ciprian; Jaganath, Devan; Gilman, Robert H; Crabtree, Jean E; Kelleher, Dermott; Cabrera, Lilia; Cama, Vitaliano; Checkley, William

    2016-01-01

    Childhood growth is a cornerstone of pediatric research. Statistical models need to consider individual trajectories to adequately describe growth outcomes. Specifically, well-defined longitudinal models are essential to characterize both population and subject-specific growth. Linear mixed-effect models with cubic regression splines can account for the nonlinearity of growth curves and provide reasonable estimators of population and subject-specific growth, velocity and acceleration. We provide a stepwise approach that builds from simple to complex models, and accounts for the intrinsic complexity of the data. We start with standard cubic spline regression models and build up to a model that includes subject-specific random intercepts and slopes and residual autocorrelation. We then compared cubic regression splines with linear piecewise splines, and with varying numbers and positions of knots. Statistical code is provided to ensure reproducibility and improve dissemination of methods. Models are applied to longitudinal height measurements in a cohort of 215 Peruvian children followed from birth until their fourth year of life. Unexplained variability, as measured by the variance of the regression model, was reduced from 7.34 when using ordinary least squares to 0.81 when using linear mixed-effect models with random slopes and a first-order continuous autoregressive error term. There was substantial heterogeneity in both the intercept and the slopes, and the fitted model yields a linear regression equation for both estimation and prediction of population- and individual-level growth in height. We show that cubic regression splines are superior to linear regression splines for the case of a small number of knots in both estimation and prediction with the full linear mixed-effect model (AIC 19,352 vs. 19,598, respectively). While the regression parameters are more complex to interpret in the former, we argue that inference for any problem depends more on the estimated curve or differences in curves rather
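
    A cubic regression spline of the kind described can be sketched in a truncated-power basis fitted by ordinary least squares; the simulated heights below are illustrative (not the Peruvian cohort data), and the sketch omits the random effects and autocorrelation of the full mixed model:

```python
import numpy as np

# Population growth curve via a cubic regression spline: a truncated-power
# basis {1, t, t^2, t^3, (t - k)_+^3} with a few interior knots, fit by OLS.
rng = np.random.default_rng(4)
age = np.sort(rng.uniform(0, 4, 300))                        # years
height = 50 + 30 * np.log1p(age) + rng.normal(0, 1.5, 300)   # cm, synthetic

knots = [1.0, 2.0, 3.0]                                      # interior knots
X = np.column_stack([np.ones_like(age), age, age**2, age**3] +
                    [np.clip(age - k, 0, None) ** 3 for k in knots])
beta, *_ = np.linalg.lstsq(X, height, rcond=None)
fitted = X @ beta
resid_sd = np.std(height - fitted)
```

    The truncated-power terms keep the fitted curve and its first two derivatives continuous at the knots, which is what lets the spline deliver smooth velocity and acceleration estimates.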

  9. Bayesian analysis of CCDM models

    Energy Technology Data Exchange (ETDEWEB)

    Jesus, J.F. [Universidade Estadual Paulista (Unesp), Câmpus Experimental de Itapeva, Rua Geraldo Alckmin 519, Vila N. Sra. de Fátima, Itapeva, SP, 18409-010 Brazil (Brazil); Valentim, R. [Departamento de Física, Instituto de Ciências Ambientais, Químicas e Farmacêuticas—ICAQF, Universidade Federal de São Paulo (UNIFESP), Unidade José Alencar, Rua São Nicolau No. 210, Diadema, SP, 09913-030 Brazil (Brazil); Andrade-Oliveira, F., E-mail: jfjesus@itapeva.unesp.br, E-mail: valentim.rodolfo@unifesp.br, E-mail: felipe.oliveira@port.ac.uk [Institute of Cosmology and Gravitation—University of Portsmouth, Burnaby Road, Portsmouth, PO1 3FX United Kingdom (United Kingdom)

    2017-09-01

    Creation of Cold Dark Matter (CCDM), in the context of the Einstein field equations, produces a negative pressure term which can be used to explain the accelerated expansion of the Universe. In this work we tested six different spatially flat models for matter creation using statistical criteria, in light of SNe Ia data: the Akaike Information Criterion (AIC), the Bayesian Information Criterion (BIC) and the Bayesian Evidence (BE). These criteria allow one to compare models considering goodness of fit and number of free parameters, penalizing excess complexity. We find that the JO model is slightly favoured over the LJO/ΛCDM model; however, neither of these, nor the Γ = 3αH₀ model, can be discarded by the current analysis. Three other scenarios are discarded either because of poor fit or because of an excess of free parameters. A method of increasing the Bayesian evidence through reparameterization, in order to reduce parameter degeneracy, is also developed.
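
    The AIC and BIC used above follow directly from a model's maximized log-likelihood; a sketch with two illustrative linear fits (not the CCDM scenarios), assuming Gaussian residuals:

```python
import numpy as np

# AIC = -2 ln L + 2k and BIC = -2 ln L + k ln n; lower is better, and BIC
# penalizes each extra free parameter more strongly once n > e^2.
rng = np.random.default_rng(5)
n = 100
x = np.linspace(0, 1, n)
y = 2.0 + rng.normal(0, 0.3, n)                 # truth: constant model

def gaussian_ic(y, yhat, k):
    resid = y - yhat
    sigma2 = np.mean(resid**2)                  # MLE of residual variance
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    return (-2 * loglik + 2 * k,                # AIC
            -2 * loglik + k * np.log(n))        # BIC

aic1, bic1 = gaussian_ic(y, np.full(n, y.mean()), k=2)   # mean + variance
X = np.column_stack([np.ones(n), x])
b, *_ = np.linalg.lstsq(X, y, rcond=None)
aic2, bic2 = gaussian_ic(y, X @ b, k=3)         # slope adds one parameter
```

    The gap between a model's BIC and AIC is exactly k(ln n − 2), so the more complex model always pays a larger BIC surcharge regardless of fit.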

  10. Polygonal approximation and energy of smooth knots

    OpenAIRE

    Rawdon, Eric J.; Simon, Jonathan K.

    2003-01-01

    We establish a fundamental connection between smooth and polygonal knot energies, showing that the Minimum Distance Energy for polygons inscribed in a smooth knot converges to the Moebius Energy of the smooth knot as the polygons converge to the smooth knot. However, the polygons must converge in a ``nice'' way, and the energies must be correctly regularized. We determine an explicit error bound between the energies in terms of the number of the edges of the polygon and the Ropelength of the ...

  11. 3D Bayesian contextual classifiers

    DEFF Research Database (Denmark)

    Larsen, Rasmus

    2000-01-01

    We extend a series of multivariate Bayesian 2-D contextual classifiers to 3-D by specifying a simultaneous Gaussian distribution for the feature vectors as well as a prior distribution of the class variables of a pixel and its 6 nearest 3-D neighbours.

  12. Bayesian image restoration, using configurations

    DEFF Research Database (Denmark)

    Thorarinsdottir, Thordis Linda

    2006-01-01

    In this paper, we develop a Bayesian procedure for removing noise from images that can be viewed as noisy realisations of random sets in the plane. The procedure utilises recent advances in configuration theory for noise free random sets, where the probabilities of observing the different boundary configurations are expressed in terms of the mean normal measure of the random set. These probabilities are used as prior probabilities in a Bayesian image restoration approach. Estimation of the remaining parameters in the model is outlined for the salt and pepper noise. The inference in the model is discussed.

  13. Bayesian image restoration, using configurations

    DEFF Research Database (Denmark)

    Thorarinsdottir, Thordis

    In this paper, we develop a Bayesian procedure for removing noise from images that can be viewed as noisy realisations of random sets in the plane. The procedure utilises recent advances in configuration theory for noise free random sets, where the probabilities of observing the different boundary configurations are expressed in terms of the mean normal measure of the random set. These probabilities are used as prior probabilities in a Bayesian image restoration approach. Estimation of the remaining parameters in the model is outlined for salt and pepper noise. The inference in the model is discussed.

  14. Bayesian variable selection in regression

    Energy Technology Data Exchange (ETDEWEB)

    Mitchell, T.J.; Beauchamp, J.J.

    1987-01-01

    This paper is concerned with the selection of subsets of "predictor" variables in a linear regression model for the prediction of a "dependent" variable. We take a Bayesian approach and assign a probability distribution to the dependent variable through a specification of prior distributions for the unknown parameters in the regression model. The appropriate posterior probabilities are derived for each submodel and methods are proposed for evaluating the family of prior distributions. Examples are given that show the application of the Bayesian methodology. 23 refs., 3 figs.
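
    A common large-sample sketch of posterior probabilities over predictor subsets uses exp(−BIC/2) weights under a uniform model prior; this is an approximation, not the paper's exact conjugate analysis, and the data are synthetic:

```python
import numpy as np
from itertools import combinations

# Score every subset of p predictors with BIC, then convert the scores to
# approximate posterior model probabilities via exp(-BIC/2) weights.
rng = np.random.default_rng(6)
n, p = 200, 3
X = rng.normal(size=(n, p))
y = 1.5 * X[:, 0] + rng.normal(0, 1.0, n)       # only predictor 0 matters

scores = {}
for r in range(p + 1):
    for subset in combinations(range(p), r):
        Xs = np.column_stack([np.ones(n)] + [X[:, j] for j in subset])
        b, *_ = np.linalg.lstsq(Xs, y, rcond=None)
        sigma2 = np.mean((y - Xs @ b) ** 2)
        loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
        k = len(subset) + 2                     # coefficients + intercept + variance
        scores[subset] = -2 * loglik + k * np.log(n)

bics = np.array(list(scores.values()))
w = np.exp(-0.5 * (bics - bics.min()))          # stable exponentiation
post = dict(zip(scores.keys(), w / w.sum()))    # posterior subset probabilities
best = max(post, key=post.get)
```

    With a strong signal on one predictor, nearly all posterior mass concentrates on subsets that include it.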

  15. Inference in hybrid Bayesian networks

    DEFF Research Database (Denmark)

    Lanseth, Helge; Nielsen, Thomas Dyhre; Rumí, Rafael

    2009-01-01

    Since the 1980s, Bayesian Networks (BNs) have become increasingly popular for building statistical models of complex systems. This is particularly true for boolean systems, where BNs often prove to be a more efficient modelling framework than traditional reliability techniques (like fault trees and reliability block diagrams). However, limitations in the BNs' calculation engine have prevented BNs from becoming equally popular for domains containing mixtures of both discrete and continuous variables (so-called hybrid domains). In this paper we focus on these difficulties, and summarize some of the last decade's research on inference in hybrid Bayesian networks. The discussions are linked to an example model for estimating human reliability.

  16. Bayesian methods for proteomic biomarker development

    Directory of Open Access Journals (Sweden)

    Belinda Hernández

    2015-12-01

    In this review we provide an introduction to Bayesian inference and demonstrate some of the advantages of using a Bayesian framework. We summarize how Bayesian methods have been used previously in proteomics and other areas of bioinformatics. Finally, we describe some popular and emerging Bayesian models from the statistical literature and provide a worked tutorial including code snippets to show how these methods may be applied for the evaluation of proteomic biomarkers.

  17. Bayesian variable order Markov models: Towards Bayesian predictive state representations

    NARCIS (Netherlands)

    Dimitrakakis, C.

    2009-01-01

    We present a Bayesian variable order Markov model that shares many similarities with predictive state representations. The resulting models are compact and much easier to specify and learn than classical predictive state representations. Moreover, we show that they significantly outperform a more

  18. The humble Bayesian : Model checking from a fully Bayesian perspective

    NARCIS (Netherlands)

    Morey, Richard D.; Romeijn, Jan-Willem; Rouder, Jeffrey N.

    Gelman and Shalizi (2012) criticize what they call the usual story in Bayesian statistics: that the distribution over hypotheses or models is the sole means of statistical inference, thus excluding model checking and revision, and that inference is inductivist rather than deductivist. They present

  19. Weight Smoothing for Generalized Linear Models Using a Laplace Prior

    Science.gov (United States)

    Xia, Xi; Elliott, Michael R.

    2017-01-01

    When analyzing data sampled with unequal inclusion probabilities, correlations between the probability of selection and the sampled data can induce bias if the inclusion probabilities are ignored in the analysis. Weights equal to the inverse of the probability of inclusion are commonly used to correct possible bias. When weights are uncorrelated with the descriptive or model estimators of interest, highly disproportional sample designs resulting in large weights can introduce unnecessary variability, leading to an overall larger mean square error compared to unweighted methods. We describe an approach we term ‘weight smoothing’ that models the interactions between the weights and the estimators as random effects, reducing the root mean square error (RMSE) by shrinking interactions toward zero when such shrinkage is allowed by the data. This article adapts a flexible Laplace prior distribution for the hierarchical Bayesian model to gain a more robust bias-variance tradeoff than previous approaches using normal priors. Simulation and application suggest that under a linear model setting, weight-smoothing models with Laplace priors yield robust results when weighting is necessary, and provide considerable reduction in RMSE otherwise. In logistic regression models, estimates using weight-smoothing models with Laplace priors are robust, but with less gain in efficiency than in linear regression settings. PMID:29225401
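
    The bias-variance tension that weight smoothing targets can be illustrated with a simple precision-based compromise between the weighted and unweighted means; note the paper's actual estimator is a hierarchical Bayes model with Laplace priors, not this ad hoc shrinkage:

```python
import numpy as np

# When weights are uninformative, the inverse-probability weighted mean is
# unbiased but noisy; shrinking it toward the unweighted mean in proportion
# to the estimated variances illustrates the gain weight smoothing targets.
rng = np.random.default_rng(7)
n = 500
w = np.exp(rng.normal(0, 1.2, n))               # highly variable weights
y = 10.0 + rng.normal(0, 1.0, n)                # outcome unrelated to weights

y_unw = y.mean()
y_w = np.sum(w * y) / np.sum(w)                 # weighted (Hajek) estimator
var_unw = y.var(ddof=1) / n                     # variance of unweighted mean
var_w = np.sum(w**2 * (y - y_w) ** 2) / np.sum(w) ** 2   # approx. weighted var
lam = var_unw / (var_unw + var_w)               # shrink more when weights noisy
y_smooth = lam * y_w + (1 - lam) * y_unw
```

    When the weights carry no information about the outcome, as here, the shrinkage leans heavily on the unweighted mean and recovers most of the lost efficiency.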

  20. Bayesian Model Averaging for Propensity Score Analysis

    Science.gov (United States)

    Kaplan, David; Chen, Jianshen

    2013-01-01

    The purpose of this study is to explore Bayesian model averaging in the propensity score context. Previous research on Bayesian propensity score analysis does not take into account model uncertainty. In this regard, an internally consistent Bayesian framework for model building and estimation must also account for model uncertainty. The…

  1. Bayesian models in cognitive neuroscience: A tutorial

    NARCIS (Netherlands)

    O'Reilly, J.X.; Mars, R.B.

    2015-01-01

    This chapter provides an introduction to Bayesian models and their application in cognitive neuroscience. The central feature of Bayesian models, as opposed to other classes of models, is that Bayesian models represent the beliefs of an observer as probability distributions, allowing them to

  2. A Bayesian framework for risk perception

    NARCIS (Netherlands)

    van Erp, H.R.N.

    2017-01-01

    We present here a Bayesian framework of risk perception. This framework encompasses plausibility judgments, decision making, and question asking. Plausibility judgments are modeled by way of Bayesian probability theory, decision making is modeled by way of a Bayesian decision theory, and relevancy

  3. Differentiated Bayesian Conjoint Choice Designs

    NARCIS (Netherlands)

    Z. Sándor (Zsolt); M. Wedel (Michel)

    2003-01-01

    textabstractPrevious conjoint choice design construction procedures have produced a single design that is administered to all subjects. This paper proposes to construct a limited set of different designs. The designs are constructed in a Bayesian fashion, taking into account prior uncertainty about

  4. Bayesian networks in levee reliability

    NARCIS (Netherlands)

    Roscoe, K.; Hanea, A.

    2015-01-01

    We applied a Bayesian network to a system of levees for which the results of traditional reliability analysis showed high failure probabilities, which conflicted with the intuition and experience of those managing the levees. We made use of forty proven strength observations - high water levels with

  5. Bayesian Classification of Image Structures

    DEFF Research Database (Denmark)

    Goswami, Dibyendu; Kalkan, Sinan; Krüger, Norbert

    2009-01-01

    In this paper, we describe work on Bayesian classifiers for distinguishing between homogeneous structures, textures, edges and junctions. We build semi-local classifiers from hand-labeled images to distinguish between these four different kinds of structures based on the concept of intrinsic dimensionality. The built classifier is tested on standard and non-standard images.

  6. Computational Neuropsychology and Bayesian Inference.

    Science.gov (United States)

    Parr, Thomas; Rees, Geraint; Friston, Karl J

    2018-01-01

    Computational theories of brain function have become very influential in neuroscience. They have facilitated the growth of formal approaches to disease, particularly in psychiatric research. In this paper, we provide a narrative review of the body of computational research addressing neuropsychological syndromes, and focus on those that employ Bayesian frameworks. Bayesian approaches to understanding brain function formulate perception and action as inferential processes. These inferences combine 'prior' beliefs with a generative (predictive) model to explain the causes of sensations. Under this view, neuropsychological deficits can be thought of as false inferences that arise due to aberrant prior beliefs (that are poor fits to the real world). This draws upon the notion of a Bayes optimal pathology - optimal inference with suboptimal priors - and provides a means for computational phenotyping. In principle, any given neuropsychological disorder could be characterized by the set of prior beliefs that would make a patient's behavior appear Bayes optimal. We start with an overview of some key theoretical constructs and use these to motivate a form of computational neuropsychology that relates anatomical structures in the brain to the computations they perform. Throughout, we draw upon computational accounts of neuropsychological syndromes. These are selected to emphasize the key features of a Bayesian approach, and the possible types of pathological prior that may be present. They range from visual neglect through hallucinations to autism. Through these illustrative examples, we review the use of Bayesian approaches to understand the link between biology and computation that is at the heart of neuropsychology.

  7. Income and Consumption Smoothing among US States

    DEFF Research Database (Denmark)

    Sørensen, Bent; Yosha, Oved

    We quantify the amount of cross-sectional income and consumption smoothing achieved within subgroups of states, such as regions or clubs, e.g. the club of rich states. We find that there is much income smoothing between as well as within regions. By contrast, consumption smoothing occurs mainly … states. The fraction of a shock to gross state product smoothed by the federal tax-transfer system is the same for various regions and other clubs of states. We calculate the scope for consumption smoothing within various regions and clubs, finding that most gains from risk sharing can be achieved within US regions. Since a considerable fraction of shocks to gross state product are smoothed within regions, we conclude that existing markets achieve a substantial fraction of the potential welfare gains from interstate income and consumption smoothing. Nonetheless, non-negligible welfare gains may

  8. Bayesian Alternation During Tactile Augmentation

    Directory of Open Access Journals (Sweden)

    Caspar Mathias Goeke

    2016-10-01

    Full Text Available A large number of studies suggest that the integration of multisensory signals by humans is well described by Bayesian principles. However, there are very few reports on cue combination between a native and an augmented sense. In particular, we asked whether adult participants are able to integrate an augmented sensory cue with existing native sensory information. For the purpose of this study we built a tactile augmentation device, and compared different hypotheses of how untrained adult participants combine information from a native and an augmented sense. In a two-interval forced choice (2IFC) task, while subjects were blindfolded and seated on a rotating platform, our sensory augmentation device translated information on whole-body yaw rotation into tactile stimulation. Three conditions were realized: tactile stimulation only (augmented condition), rotation only (native condition), and both augmented and native information (bimodal condition). Participants had to choose the one of two consecutive rotations with the higher angular rotation. For the analysis, we fitted the participants' responses with a probit model and calculated the just noticeable difference (JND). We then compared several models for predicting bimodal from unimodal responses. An objective Bayesian alternation model yielded a better prediction (reduced χ² = 1.67) than the Bayesian integration model (reduced χ² = 4.34). A non-Bayesian winner-takes-all model, which used either only native or only augmented values per subject for prediction, showed slightly higher accuracy (reduced χ² = 1.64). However, the performance of the Bayesian alternation model could be substantially improved (reduced χ² = 1.09) by utilizing subjective weights obtained from a questionnaire. As a result, the subjective Bayesian alternation model predicted bimodal performance most accurately among all tested models. These results suggest that information from augmented and existing sensory modalities in
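The model comparison in this record rests on simple closed-form JND predictions. A minimal sketch, with hypothetical JND values and under the usual assumption of independent Gaussian cue noise, of how the integration and alternation models map unimodal JNDs to a predicted bimodal JND:

```python
import math

def predicted_bimodal_jnd(jnd_native, jnd_augmented):
    """Bayes-optimal integration of two independent Gaussian cues:
    inverse variances add, so the bimodal JND drops below both unimodal JNDs."""
    return math.sqrt(jnd_native**2 * jnd_augmented**2
                     / (jnd_native**2 + jnd_augmented**2))

def predicted_alternation_jnd(jnd_native, jnd_augmented, w_native=0.5):
    """Alternation: each trial uses only one cue, so the effective variance is
    a weighted mixture of the unimodal variances (w_native would come from a
    questionnaire in the subjective variant)."""
    return math.sqrt(w_native * jnd_native**2 + (1 - w_native) * jnd_augmented**2)

# Hypothetical unimodal JNDs in degrees of rotation:
jnd_rot, jnd_tactile = 4.0, 6.0
print(predicted_bimodal_jnd(jnd_rot, jnd_tactile))      # below both unimodal JNDs
print(predicted_alternation_jnd(jnd_rot, jnd_tactile))  # between the unimodal JNDs
```

The qualitative signature is what the study exploits: integration predicts a bimodal JND below both unimodal ones, alternation predicts a value between them.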

  9. Topics in Bayesian statistics and maximum entropy

    International Nuclear Information System (INIS)

    Mutihac, R.; Cicuttin, A.; Cerdeira, A.; Stanciulescu, C.

    1998-12-01

    Notions of Bayesian decision theory and maximum entropy methods are reviewed with particular emphasis on probabilistic inference and Bayesian modeling. The axiomatic approach is considered as the best justification of Bayesian analysis and maximum entropy principle applied in natural sciences. Particular emphasis is put on solving the inverse problem in digital image restoration and Bayesian modeling of neural networks. Further topics addressed briefly include language modeling, neutron scattering, multiuser detection and channel equalization in digital communications, genetic information, and Bayesian court decision-making. (author)

  10. Decomposition of LiDAR waveforms by B-spline-based modeling

    Science.gov (United States)

    Shen, Xiang; Li, Qing-Quan; Wu, Guofeng; Zhu, Jiasong

    2017-06-01

    Waveform decomposition is a widely used technique for extracting echoes from full-waveform LiDAR data. Most previous studies recommended the Gaussian decomposition approach, which employs the Gaussian function in laser pulse modeling. As the Gaussian-shape assumption is not always satisfied for real LiDAR waveforms, some other probability distributions (e.g., the lognormal distribution, the generalized normal distribution, and the Burr distribution) have also been introduced by researchers to fit sharply-peaked and/or heavy-tailed pulses. However, these models cannot be universally used, because they are only suitable for processing LiDAR waveforms of particular shapes. In this paper, we present a new waveform decomposition algorithm based on the B-spline modeling technique. LiDAR waveforms are not assumed to have a priori shapes but rather are modeled by B-splines, and the shape of a received waveform is treated as the mixture of finitely many transmitted pulses after translation and scaling transformations. The performance of the new model was tested on two full-waveform data sets acquired by a Riegl LMS-Q680i laser scanner and an Optech Aquarius laser bathymeter, and compared with three classical waveform decomposition approaches: the Gaussian, generalized normal, and lognormal distribution-based models. The experimental results show that the B-spline model performed the best in terms of waveform fitting accuracy, while the generalized normal model yielded the worst performance on the two test data sets. Riegl waveforms have nearly Gaussian pulse shapes and were well fitted by the Gaussian mixture model, while the B-spline-based modeling algorithm produced a slightly better result by further reducing fitting residuals by 6.4%, largely benefiting from alleviating the adverse impact of the ringing effect. The pulse shapes of Optech waveforms, on the other hand, are noticeably right-skewed. The Gaussian modeling results deviated significantly from original signals, and
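A minimal sketch of the core idea, shape-free pulse modeling with smoothing B-splines. The waveform, noise level, and smoothing factor are invented for illustration, and scipy's `splrep` stands in for the paper's full decomposition machinery:

```python
import numpy as np
from scipy.interpolate import splrep, splev

# Synthetic "received waveform": two overlapping echoes plus noise.
rng = np.random.default_rng(0)
t = np.linspace(0, 100, 400)
echo = lambda mu, sig, a: a * np.exp(-0.5 * ((t - mu) / sig) ** 2)
waveform = echo(35, 4, 1.0) + echo(55, 6, 0.6) + 0.02 * rng.standard_normal(t.size)

# Smoothing cubic B-spline fit: no parametric pulse shape is assumed; the
# smoothing factor s trades fidelity against wiggliness (here set from the
# assumed noise level, sd = 0.02).
tck = splrep(t, waveform, k=3, s=t.size * 0.02 ** 2)
fitted = splev(t, tck)

rmse = np.sqrt(np.mean((fitted - waveform) ** 2))
print(f"fit RMSE ≈ {rmse:.3f}")  # should sit near the noise level
```

A shape-free fit like this is what lets the B-spline model absorb ringing and skew that defeat fixed-form Gaussian or lognormal pulses.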

  11. Nonlinear bias compensation of ZiYuan-3 satellite imagery with cubic splines

    Science.gov (United States)

    Cao, Jinshan; Fu, Jianhong; Yuan, Xiuxiao; Gong, Jianya

    2017-11-01

    Like many high-resolution satellites such as the ALOS, MOMS-2P, QuickBird, and ZiYuan1-02C satellites, the ZiYuan-3 satellite suffers from different levels of attitude oscillations. As a result of such oscillations, the rational polynomial coefficients (RPCs) obtained using a terrain-independent scenario often have nonlinear biases. In the sensor orientation of ZiYuan-3 imagery based on a rational function model (RFM), these nonlinear biases cannot be effectively compensated by an affine transformation. The sensor orientation accuracy is thereby worse than expected. In order to eliminate the influence of attitude oscillations on the RFM-based sensor orientation, a feasible nonlinear bias compensation approach for ZiYuan-3 imagery with cubic splines is proposed. In this approach, no actual ground control points (GCPs) are required to determine the cubic splines. First, the RPCs are calculated using a three-dimensional virtual control grid generated based on a physical sensor model. Second, one cubic spline is used to model the residual errors of the virtual control points in the row direction and another cubic spline is used to model the residual errors in the column direction. Then, the estimated cubic splines are used to compensate the nonlinear biases in the RPCs. Finally, the affine transformation parameters are used to compensate the residual biases in the RPCs. Three ZiYuan-3 images were tested. The experimental results showed that before the nonlinear bias compensation, the residual errors of the independent check points were nonlinearly biased. Even if the number of GCPs used to determine the affine transformation parameters was increased from 4 to 16, these nonlinear biases could not be effectively compensated. After the nonlinear bias compensation with the estimated cubic splines, the influence of the attitude oscillations could be eliminated. 
The RFM-based sensor orientation accuracies of the three ZiYuan-3 images reached 0.981 pixels, 0.890 pixels, and 1
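The compensation step can be sketched as follows; the bias curve and control-point spacing are hypothetical stand-ins for the residual errors of an actual virtual control grid:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical nonlinear bias (in pixels) left in the RPCs by attitude
# oscillations; real residuals would come from a virtual control grid.
def bias(row):
    return 0.8 * np.sin(2 * np.pi * row / 8000) + 0.1 * np.cos(2 * np.pi * row / 3000)

rows = np.linspace(0, 24000, 49)         # virtual control points, row direction
spline = CubicSpline(rows, bias(rows))   # one cubic spline per image direction

# Compensating means subtracting the spline prediction at any image row.
query = np.linspace(0, 24000, 1000)
remaining = bias(query) - spline(query)
print(f"max bias before: {np.abs(bias(query)).max():.2f} px, "
      f"after: {np.abs(remaining).max():.4f} px")
```

Because the spline is fitted to virtual control points, no actual GCPs are needed for this step, matching the approach described above.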

  12. Bayesian analysis of rare events

    Science.gov (United States)

    Straub, Daniel; Papaioannou, Iason; Betz, Wolfgang

    2016-06-01

    In many areas of engineering and science there is an interest in predicting the probability of rare events, in particular in applications related to safety and security. Increasingly, such predictions are made through computer models of physical systems in an uncertainty quantification framework. Additionally, with advances in IT, monitoring and sensor technology, an increasing amount of data on the performance of the systems is collected. This data can be used to reduce uncertainty, improve the probability estimates and consequently enhance the management of rare events and associated risks. Bayesian analysis is the ideal method to include the data into the probabilistic model. It ensures a consistent probabilistic treatment of uncertainty, which is central in the prediction of rare events, where extrapolation from the domain of observation is common. We present a framework for performing Bayesian updating of rare event probabilities, termed BUS. It is based on a reinterpretation of the classical rejection-sampling approach to Bayesian analysis, which enables the use of established methods for estimating probabilities of rare events. By drawing upon these methods, the framework makes use of their computational efficiency. These methods include the First-Order Reliability Method (FORM), tailored importance sampling (IS) methods and Subset Simulation (SuS). In this contribution, we briefly review these methods in the context of the BUS framework and investigate their applicability to Bayesian analysis of rare events in different settings. We find that, for some applications, FORM can be highly efficient and is surprisingly accurate, enabling Bayesian analysis of rare events with just a few model evaluations. In a general setting, BUS implemented through IS and SuS is more robust and flexible.
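The rejection-sampling view of Bayesian updating that BUS builds on can be sketched in a toy one-parameter setting. The Gaussian prior, likelihood, and observation are invented for illustration; real BUS couples this acceptance rule with FORM, importance sampling, or Subset Simulation rather than plain Monte Carlo:

```python
import math
import random

# Toy rejection-sampling Bayesian update: prior N(0, 1), one noisy observation.
random.seed(1)
obs, noise_sd = 1.5, 0.5

def likelihood(theta):
    # Gaussian likelihood, already scaled so that its maximum is 1.
    return math.exp(-0.5 * ((obs - theta) / noise_sd) ** 2)

posterior = []
while len(posterior) < 20000:
    theta = random.gauss(0.0, 1.0)            # sample from the prior
    if random.random() < likelihood(theta):   # accept with probability L / L_max
        posterior.append(theta)

mean = sum(posterior) / len(posterior)
print(f"posterior mean ≈ {mean:.2f}")  # conjugate answer: 1.5 / (1 + 0.25) = 1.2
```

The accepted samples are draws from the posterior; BUS's insight is that the acceptance event can be phrased as a rare-event probability and handed to reliability methods.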

  13. Polytomies and Bayesian phylogenetic inference.

    Science.gov (United States)

    Lewis, Paul O; Holder, Mark T; Holsinger, Kent E

    2005-04-01

    Bayesian phylogenetic analyses are now very popular in systematics and molecular evolution because they allow the use of much more realistic models than currently possible with maximum likelihood methods. There are, however, a growing number of examples in which large Bayesian posterior clade probabilities are associated with very short branch lengths and low values for non-Bayesian measures of support such as nonparametric bootstrapping. For the four-taxon case when the true tree is the star phylogeny, Bayesian analyses become increasingly unpredictable in their preference for one of the three possible resolved tree topologies as data set size increases. This leads to the prediction that hard (or near-hard) polytomies in nature will cause unpredictable behavior in Bayesian analyses, with arbitrary resolutions of the polytomy receiving very high posterior probabilities in some cases. We present a simple solution to this problem involving a reversible-jump Markov chain Monte Carlo (MCMC) algorithm that allows exploration of all of tree space, including unresolved tree topologies with one or more polytomies. The reversible-jump MCMC approach allows prior distributions to place some weight on less-resolved tree topologies, which eliminates misleadingly high posteriors associated with arbitrary resolutions of hard polytomies. Fortunately, assigning some prior probability to polytomous tree topologies does not appear to come with a significant cost in terms of the ability to assess the level of support for edges that do exist in the true tree. Methods are discussed for applying arbitrary prior distributions to tree topologies of varying resolution, and an empirical example showing evidence of polytomies is analyzed and discussed.

  14. Bayesian methods for measures of agreement

    CERN Document Server

    Broemeling, Lyle D

    2009-01-01

    Using WinBUGS to implement Bayesian inferences of estimation and testing hypotheses, Bayesian Methods for Measures of Agreement presents useful methods for the design and analysis of agreement studies. It focuses on agreement among the various players in the diagnostic process.The author employs a Bayesian approach to provide statistical inferences based on various models of intra- and interrater agreement. He presents many examples that illustrate the Bayesian mode of reasoning and explains elements of a Bayesian application, including prior information, experimental information, the likelihood function, posterior distribution, and predictive distribution. The appendices provide the necessary theoretical foundation to understand Bayesian methods as well as introduce the fundamentals of programming and executing the WinBUGS software.Taking a Bayesian approach to inference, this hands-on book explores numerous measures of agreement, including the Kappa coefficient, the G coefficient, and intraclass correlation...

  15. Extraction of airways with probabilistic state-space models and Bayesian smoothing

    DEFF Research Database (Denmark)

    Raghavendra, Selvan; Petersen, Jens; Pedersen, Jesper Johannes Holst

    2017-01-01

    Segmenting tree structures is common in several image processing applications. In medical image analysis, reliable segmentations of airways, vessels, neurons and other tree structures can enable important clinical applications. We present a framework for tracking tree structures comprising of el...

  16. Mobile real-time EEG imaging: Bayesian inference with sparse, temporally smooth source priors

    DEFF Research Database (Denmark)

    Hansen, Lars Kai; Hansen, Sofie Therese; Stahlhut, Carsten

    2013-01-01

    EEG based real-time imaging of human brain function has many potential applications including quality control, in-line experimental design, brain state decoding, and neuro-feedback. In mobile applications these possibilities are attractive as elements in systems for personal state monitoring...

  17. A Novel Method for Gearbox Fault Detection Based on Biorthogonal B-spline Wavelet

    Directory of Open Access Journals (Sweden)

    Guangbin ZHANG

    2011-10-01

    Full Text Available Localized gearbox defects tend to produce periodic impulses in the vibration signal, which contain important information for system dynamics analysis. Parameter identification of these impulses therefore provides an effective approach to gearbox fault diagnosis. The biorthogonal B-spline wavelet has the properties of compact support, high vanishing moments and symmetry, which suit it to signal de-noising, fast calculation, and reconstruction. Thus, a novel time-frequency distribution method based on the biorthogonal B-spline wavelet is presented for gear fault diagnosis. A simulation study on a singularity signal shows that this wavelet is effective in identifying the fault feature from the coefficient map and coefficient lines. Furthermore, an integrated approach consisting of wavelet decomposition, the Hilbert transform and power spectral density is used in applications. The results indicate that this method can extract gearbox fault characteristics and diagnose fault patterns effectively.
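The Hilbert-transform/power-spectrum stage of the integrated approach can be sketched as follows. The vibration signal is synthetic, with an invented 13 Hz fault rate, and the wavelet de-noising step is omitted:

```python
import numpy as np
from scipy.signal import hilbert

# Synthetic gearbox vibration: a gear-mesh tone amplitude-modulated by a
# hypothetical 13 Hz fault impulse rate (all numbers invented).
fs = 2000
t = np.arange(0, 2.0, 1 / fs)
fault_hz, mesh_hz = 13.0, 300.0
vib = (1 + 0.8 * np.cos(2 * np.pi * fault_hz * t)) * np.sin(2 * np.pi * mesh_hz * t)

envelope = np.abs(hilbert(vib))                      # demodulate the carrier
spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
freqs = np.fft.rfftfreq(envelope.size, 1 / fs)

peak = freqs[np.argmax(spectrum)]
print(f"dominant envelope frequency: {peak:.1f} Hz")
```

The envelope spectrum exposes the modulation frequency (the fault rate) rather than the much higher mesh frequency, which is why this demodulation step is standard in gearbox diagnosis.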

  18. Spline based iterative phase retrieval algorithm for X-ray differential phase contrast radiography.

    Science.gov (United States)

    Nilchian, Masih; Wang, Zhentian; Thuering, Thomas; Unser, Michael; Stampanoni, Marco

    2015-04-20

    Differential phase contrast imaging using a grating interferometer is a promising alternative to conventional X-ray radiographic methods. It provides the absorption, differential phase and scattering information of the underlying sample simultaneously. Phase retrieval from the differential phase signal is an essential problem for quantitative analysis in medical imaging. In this paper, we formalize phase retrieval as a regularized inverse problem, and propose a novel discretization scheme for the derivative operator based on B-spline calculus. The inverse problem is then solved by a constrained regularized weighted-norm algorithm (CRWN) which exploits the properties of B-splines and ensures a fast implementation. The method is evaluated with a tomographic dataset and differential phase contrast mammography data. We demonstrate that the proposed method is able to produce phase images with enhanced and higher soft-tissue contrast compared to the conventional absorption-based approach, which can potentially provide useful information for mammographic investigations.

  19. High Accuracy Spline Explicit Group (SEG) Approximation for Two-Dimensional Elliptic Boundary Value Problems.

    Directory of Open Access Journals (Sweden)

    Joan Goh

    Full Text Available Over the last few decades, cubic splines have been widely used to approximate differential equations due to their ability to produce highly accurate solutions. In this paper, the numerical solution of a two-dimensional elliptic partial differential equation is treated by a specific cubic spline approximation in the x-direction and finite differences in the y-direction. A four-point explicit group (EG) iterative scheme with an acceleration tool is then applied to the obtained system. The formulation and implementation of the method for solving physical problems are presented in detail. The computational complexity is also discussed, and comparative results are tabulated to illustrate the efficiency of the proposed method.

  20. Finite nucleus Dirac mean field theory and random phase approximation using finite B splines

    International Nuclear Information System (INIS)

    McNeil, J.A.; Furnstahl, R.J.; Rost, E.; Shepard, J.R.; Department of Physics, University of Maryland, College Park, Maryland 20742; Department of Physics, University of Colorado, Boulder, Colorado 80309

    1989-01-01

    We calculate the finite nucleus Dirac mean field spectrum in a Galerkin approach using finite basis splines. We review the method and present results for the relativistic σ-ω model for the closed-shell nuclei ¹⁶O and ⁴⁰Ca. We study the convergence of the method as a function of the size of the basis and the closure properties of the spectrum using an energy-weighted dipole sum rule. We apply the method to the Dirac random-phase-approximation response and present results for the isoscalar 1⁻ and 3⁻ longitudinal form factors of ¹⁶O and ⁴⁰Ca. We also use a B-spline spectral representation of the positive-energy projector to evaluate partial energy-weighted sum rules and compare with nonrelativistic sum rule results

  1. Inverting travel times with a triplication. [spline fitting technique applied to lunar seismic data reduction

    Science.gov (United States)

    Jarosch, H. S.

    1982-01-01

    A method based on the use of constrained spline fits is used to overcome the difficulties arising when body-wave data in the form of T-delta are reduced to the tau-p form in the presence of cusps. In comparison with unconstrained spline fits, the method proposed here tends to produce much smoother models which lie approximately in the middle of the bounds produced by the extremal method. The method is noniterative and, therefore, computationally efficient. The method is applied to the lunar seismic data, where at least one triplication is presumed to occur in the P-wave travel-time curve. It is shown, however, that because of an insufficient number of data points for events close to the antipode of the center of the lunar network, the present analysis is not accurate enough to resolve the problem of a possible lunar core.

  2. Investigation of confined hydrogen atom in spherical cavity, using B-splines basis set

    Directory of Open Access Journals (Sweden)

    M Barezi

    2011-03-01

    Full Text Available Studying confined quantum systems (CQS) is very important in nanotechnology. One of the basic CQS is a hydrogen atom confined in a spherical cavity. In this article, the eigenenergies and eigenfunctions of a hydrogen atom in a spherical cavity are calculated using the linear variational method. B-splines are used as basis functions, from which trial wave functions with the appropriate boundary conditions can easily be constructed. The main characteristics of B-splines are their high localization and flexibility. Moreover, these functions are numerically stable and can handle a large volume of calculation with good accuracy. The energy levels are analyzed as a function of cavity radius. To check the validity and efficiency of the proposed method, extensive convergence tests of the eigenenergies at different cavity sizes have been carried out.
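Building trial functions with the appropriate boundary conditions from B-splines can be sketched with scipy. The cavity radius and knot count below are arbitrary; dropping the first and last clamped-basis splines is what enforces u(0) = u(R) = 0:

```python
import numpy as np
from scipy.interpolate import BSpline

# Clamped cubic B-spline basis on a cavity [0, R] (radius and knot count
# arbitrary). Dropping the first and last basis splines enforces the
# confinement boundary conditions u(0) = u(R) = 0 on the trial functions.
R, k = 10.0, 3
interior = np.linspace(0.0, R, 10)[1:-1]
t = np.concatenate(([0.0] * (k + 1), interior, [R] * (k + 1)))  # clamped knots
n = len(t) - k - 1                                              # basis size

def basis_at(x):
    """Design matrix: one column per B-spline basis function, evaluated at x."""
    return np.column_stack([BSpline(t, np.eye(n)[i], k)(x) for i in range(n)])

walls = basis_at(np.array([0.0, R]))
# Only the very first (resp. last) spline is nonzero at r = 0 (resp. r = R),
# so columns 1 .. n-2 form a trial basis vanishing at both cavity walls.
print(walls[0, 0], walls[1, -1])
```

The high localization mentioned in the abstract shows up here as the banded structure of the resulting variational matrices: each basis spline overlaps only its few neighbors.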

  3. Numerical simulation of reaction-diffusion systems by modified cubic B-spline differential quadrature method

    International Nuclear Information System (INIS)

    Mittal, R.C.; Rohila, Rajni

    2016-01-01

    In this paper, we have applied a modified cubic B-spline based differential quadrature method to obtain numerical solutions of one-dimensional reaction-diffusion systems such as the linear reaction-diffusion system, the Brusselator system, the isothermal system and the Gray-Scott system. The models represented by these systems have important applications in different areas of science and engineering. The most striking and interesting part of the work is the set of solution patterns obtained for the Gray-Scott model, reminiscent of those often seen in nature. We have used cubic B-spline functions for space discretization to get a system of ordinary differential equations. This system of ODEs is solved by the highly stable SSP-RK43 method to obtain the solution at the knots. The computed results are very accurate and shown to be better than those available in the literature. The method is simple to apply and gives solutions with less computational effort.

  4. Numerical solution of fractional differential equations using cubic B-spline wavelet collocation method

    Science.gov (United States)

    Li, Xinxiu

    2012-10-01

    Physical processes with memory and hereditary properties can be best described by fractional differential equations due to the memory effect of fractional derivatives. For that reason reliable and efficient techniques for the solution of fractional differential equations are needed. Our aim is to generalize the wavelet collocation method to fractional differential equations using cubic B-spline wavelet. Analytical expressions of fractional derivatives in Caputo sense for cubic B-spline functions are presented. The main characteristic of the approach is that it converts such problems into a system of algebraic equations which is suitable for computer programming. It not only simplifies the problem but also speeds up the computation. Numerical results demonstrate the validity and applicability of the method to solve fractional differential equation.

  5. Spline- and wavelet-based models of neural activity in response to natural visual stimulation.

    Science.gov (United States)

    Gerhard, Felipe; Szegletes, Luca

    2012-01-01

    We present a comparative study of the performance of different basis functions for the nonparametric modeling of neural activity in response to natural stimuli. Based on naturalistic video sequences, a generative model of neural activity was created using a stochastic linear-nonlinear-spiking cascade. The temporal dynamics of the spiking response is well captured with cubic splines with equidistant knot spacings. Whereas a sym4-wavelet decomposition performs competitively or only slightly worse than the spline basis, Haar wavelets (or histogram-based models) seem unsuitable for faithfully describing the temporal dynamics of the sensory neurons. This tendency was confirmed with an application to a real data set of spike trains recorded from visual cortex of the awake monkey.

  6. Radial smoothing and closed orbit

    International Nuclear Information System (INIS)

    Burnod, L.; Cornacchia, M.; Wilson, E.

    1983-11-01

    A complete simulation leading to a description of one of the error curves must involve four phases: (a) random drawing of the six set-up points within a normal population having a standard deviation of 1.3 mm; (b) random drawing of the six vertices of the curve in the sextant mode within a normal population having a standard deviation of 1.2 mm, these vertices being set with respect to the axis of the error lunes, while this axis has as its origin the positions defined by the preceding drawing; (c) mathematical definition of six parabolic curves and their junctions, which may be curves with very slight curvatures, or segments of a straight line passing through the set-up point and having lengths no longer than one LSS, thus giving a mean curve for the absolute errors; (d) plotting of the actually observed radial positions with respect to the mean curve (results of smoothing)

  7. Evolution on a smooth landscape

    Science.gov (United States)

    Kessler, David A.; Levine, Herbert; Ridgway, Douglas; Tsimring, Lev

    1997-05-01

    We study in detail a recently proposed simple discrete model for evolution on smooth landscapes. An asymptotic solution of this model for long times is constructed. We find that the dynamics of the population is governed by correlation functions that although being formally down by powers of N (the population size), nonetheless control the evolution process after a very short transient. The long-time behavior can be found analytically since only one of these higher order correlators (the two-point function) is relevant. We compare and contrast the exact findings derived herein with a previously proposed phenomenological treatment employing mean-field theory supplemented with a cutoff at small population density. Finally, we relate our results to the recently studied case of mutation on a totally flat landscape.

  8. Trivariate Local Lagrange Interpolation and Macro Elements of Arbitrary Smoothness

    CERN Document Server

    Matt, Michael Andreas

    2012-01-01

    Michael A. Matt constructs two trivariate local Lagrange interpolation methods which yield optimal approximation order and Cr macro-elements based on the Alfeld and the Worsey-Farin split of a tetrahedral partition. The first interpolation method is based on cubic C1 splines over type-4 cube partitions, for which numerical tests are given. The second is the first trivariate Lagrange interpolation method using C2 splines. It is based on arbitrary tetrahedral partitions using splines of degree nine. The author constructs trivariate macro-elements based on the Alfeld split, where each tetrahedron

  9. Mechanics of Vascular Smooth Muscle.

    Science.gov (United States)

    Ratz, Paul H

    2015-12-15

    Vascular smooth muscle (VSM; see Table 1 for a list of abbreviations) is a heterogeneous biomaterial comprised of cells and extracellular matrix. By surrounding tubes of endothelial cells, VSM forms a regulated network, the vasculature, through which oxygenated blood supplies specialized organs, permitting the development of large multicellular organisms. VSM cells, the engine of the vasculature, house a set of regulated nanomotors that permit rapid stress-development, sustained stress-maintenance and vessel constriction. Viscoelastic materials within, surrounding and attached to VSM cells, comprised largely of polymeric proteins with complex mechanical characteristics, assist the engine with countering loads imposed by the heart pump, and with control of relengthening after constriction. The complexity of this smart material can be reduced by classical mechanical studies combined with circuit modeling using spring and dashpot elements. Evaluation of the mechanical characteristics of VSM requires a more complete understanding of the mechanics and regulation of its biochemical parts, and ultimately, an understanding of how these parts work together to form the machinery of the vascular tree. Current molecular studies provide detailed mechanical data about single polymeric molecules, revealing viscoelasticity and plasticity at the protein domain level, the unique biological slip-catch bond, and a regulated two-step actomyosin power stroke. At the tissue level, new insight into acutely dynamic stress-strain behavior reveals smooth muscle to exhibit adaptive plasticity. At its core, physiology aims to describe the complex interactions of molecular systems, clarifying structure-function relationships and regulation of biological machines. The intent of this review is to provide a comprehensive presentation of one biomachine, VSM. Copyright © 2015 John Wiley & Sons, Inc.

  10. A Galerkin Solution for Burgers' Equation Using Cubic B-Spline Finite Elements

    OpenAIRE

    Soliman, A. A.

    2012-01-01

    Numerical solutions for Burgers’ equation based on the Galerkin method, using cubic B-splines as both weight and interpolation functions, are set up. It is shown that this method is capable of solving Burgers’ equation accurately for values of viscosity ranging from very small to large. Three standard problems are used to validate the proposed algorithm. A linear stability analysis shows that a numerical scheme based on a Crank-Nicolson approximation in time is unconditionally stable.

  11. A Galerkin Solution for Burgers' Equation Using Cubic B-Spline Finite Elements

    Directory of Open Access Journals (Sweden)

    A. A. Soliman

    2012-01-01

    Full Text Available Numerical solutions for Burgers’ equation based on the Galerkin method, using cubic B-splines as both weight and interpolation functions, are set up. It is shown that this method is capable of solving Burgers’ equation accurately for values of viscosity ranging from very small to large. Three standard problems are used to validate the proposed algorithm. A linear stability analysis shows that a numerical scheme based on a Crank-Nicolson approximation in time is unconditionally stable.

  12. Cubic spline reflectance estimates using the Viking lander camera multispectral data

    Science.gov (United States)

    Park, S. K.; Huck, F. O.

    1976-01-01

    A technique was formulated for constructing spectral reflectance estimates from multispectral data obtained with the Viking lander cameras. The output of each channel was expressed as a linear function of the unknown spectral reflectance producing a set of linear equations which were used to determine the coefficients in a representation of the spectral reflectance estimate as a natural cubic spline. The technique was used to produce spectral reflectance estimates for a variety of actual and hypothetical spectral reflectances.
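A simplified sketch of the idea: if each camera channel is idealized as sampling the reflectance at its band centre (the paper instead solves a linear system tying each channel's output to the spline coefficients), a natural cubic spline through hypothetical channel values gives the estimate:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical band centres (nm) and channel readings; a real lander-camera
# channel integrates reflectance over its spectral response instead.
band_nm = np.array([450.0, 520.0, 600.0, 680.0, 750.0, 850.0])
channel_vals = np.array([0.08, 0.10, 0.18, 0.30, 0.34, 0.36])

# "Natural" boundary conditions give zero curvature at both ends of the band,
# matching the natural-cubic-spline representation used in the paper.
estimate = CubicSpline(band_nm, channel_vals, bc_type='natural')

wavelengths = np.linspace(450, 850, 9)
print(np.round(estimate(wavelengths), 3))   # spectral reflectance estimate
```

The natural spline keeps the estimate smooth between the sparse channel samples without imposing any particular spectral shape.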

  13. Assessing time-by-covariate interactions in relative survival models using restrictive cubic spline functions.

    Science.gov (United States)

    Bolard, P; Quantin, C; Abrahamowicz, M; Esteve, J; Giorgi, R; Chadha-Boreham, H; Binquet, C; Faivre, J

    2002-01-01

    The Cox model is widely used in the evaluation of prognostic factors in clinical research. However, in population-based studies, which assess long-term survival of unselected populations, relative-survival models are often considered more appropriate. In both approaches, the validity of proportional hazards hypothesis should be evaluated. We propose a new method in which restricted cubic spline functions are employed to model time-by-covariate interactions in relative survival analyses. The method allows investigation of the shape of possible dependence of the covariate effect on time without having to specify a particular functional form. Restricted cubic spline functions allow graphing of such time-by-covariate interactions, to test formally the proportional hazards assumption, and also to test the linearity of the time-by-covariate interaction. Application of our new method to assess mortality in colon cancer provides strong evidence against the proportional hazards hypothesis, which is rejected for all prognostic factors. The results corroborate previous analyses of similar data-sets, suggesting the importance of both modelling of non-proportional hazards and relative survival approach. We also demonstrate the advantages of using restricted cubic spline functions for modelling non-proportional hazards in relative-survival analysis. The results provide new insights in the estimated impact of older age and of period of diagnosis. Using restricted cubic splines in a relative survival model allows the representation of both simple and complex patterns of changes in relative risks over time, with a single parsimonious model without a priori assumptions about the functional form of these changes.
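The restricted cubic spline basis the method relies on can be sketched directly (Harrell-style truncated-power form; the knot locations here are arbitrary). The key property is cubic behavior between knots with linear tails:

```python
import numpy as np

def rcs_basis(x, knots):
    """Restricted cubic spline basis (Harrell's truncated-power form):
    returns columns [x, s_1(x), ..., s_{k-2}(x)] for k knots; each s_j is
    cubic between knots and, by construction, linear beyond the outer knots."""
    x = np.asarray(x, dtype=float)
    t = np.asarray(knots, dtype=float)
    k = len(t)
    p = lambda u: np.clip(u, 0.0, None) ** 3          # truncated cubic (u)_+^3
    cols = [x]
    for j in range(k - 2):
        cols.append(p(x - t[j])
                    - p(x - t[k - 2]) * (t[k - 1] - t[j]) / (t[k - 1] - t[k - 2])
                    + p(x - t[k - 1]) * (t[k - 2] - t[j]) / (t[k - 1] - t[k - 2]))
    return np.column_stack(cols)

knots = [0.5, 2.0, 5.0, 10.0]                       # arbitrary example knots
X = rcs_basis(np.linspace(12.0, 20.0, 5), knots)    # points beyond the last knot
# Linearity in the right tail: second differences of each column vanish.
print(np.abs(np.diff(X, 2, axis=0)).max())
```

In the application above, columns like these (in follow-up time) would be interacted with a covariate so the hazard ratio can vary flexibly over time while remaining linear beyond the outer knots.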

  14. Free vibration of symmetric angle ply truncated conical shells under different boundary conditions using spline method

    Energy Technology Data Exchange (ETDEWEB)

Viswanathan, K. K.; Aziz, Z. A.; Javed, Saira; Yaacob, Y. [Universiti Teknologi Malaysia, Johor Bahru (Malaysia)]; Pullepu, Babuji [S R M University, Chennai (India)]

    2015-05-15

Free vibration of symmetric angle-ply laminated truncated conical shells is analyzed to determine the effect of boundary conditions, ply angles, material properties and other parameters on the frequency parameter and angular frequencies. The governing equations of motion for the truncated conical shell are obtained in terms of displacement functions. The displacement functions are approximated by cubic and quintic splines, resulting in a generalized eigenvalue problem. Parametric studies have been made and are discussed.

  15. Cyclic reduction and FACR methods for piecewise hermite bicubic orthogonal spline collocation

    Science.gov (United States)

    Bialecki, Bernard

    1994-09-01

Cyclic reduction and Fourier analysis-cyclic reduction (FACR) methods are presented for the solution of the linear systems which arise when orthogonal spline collocation with piecewise Hermite bicubics is applied to boundary value problems for certain separable partial differential equations on a rectangle. On an N×N uniform partition, the cyclic reduction and Fourier analysis-cyclic reduction methods require O(N^2 log_2 N) and O(N^2 log_2 log_2 N) arithmetic operations, respectively.
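The cyclic reduction idea can be sketched on a single scalar tridiagonal system; the collocation systems of the record are block-structured, so this toy version (with an invented 7×7 diagonally dominant system) only shows the odd/even elimination pattern behind the operation counts:

```python
import numpy as np

def cyclic_reduction(a, b, c, d):
    """Solve a tridiagonal system by cyclic reduction; n must be 2**k - 1.
    a: sub-diagonal (a[0] ignored), b: diagonal, c: super-diagonal
    (c[-1] ignored), d: right-hand side."""
    a, b, c, d = (np.array(v, dtype=float) for v in (a, b, c, d))
    n = len(b)
    a[0] = 0.0
    c[-1] = 0.0
    levels = int(round(np.log2(n + 1)))
    # Forward phase: each sweep eliminates every other remaining unknown.
    stride = 1
    for _ in range(levels - 1):
        for i in range(2 * stride - 1, n, 2 * stride):
            lo, hi = i - stride, i + stride
            al = -a[i] / b[lo]
            be = -c[i] / b[hi]
            d[i] += al * d[lo] + be * d[hi]
            b[i] += al * c[lo] + be * a[hi]
            a[i] = al * a[lo]
            c[i] = be * c[hi]
        stride *= 2
    # Back-substitution from the single central equation outwards.
    x = np.zeros(n)
    mid = (n - 1) // 2
    x[mid] = d[mid] / b[mid]
    for m in range(levels - 2, -1, -1):
        s = 2 ** m
        for i in range(s - 1, n, 2 * s):
            left = x[i - s] if i - s >= 0 else 0.0
            right = x[i + s] if i + s < n else 0.0
            x[i] = (d[i] - a[i] * left - c[i] * right) / b[i]
    return x

# Check against a dense solve on a 7x7 diagonally dominant system.
rng = np.random.default_rng(0)
n = 7
sub, sup = rng.random(n), rng.random(n)
diag = 4.0 + rng.random(n)
rhs = rng.random(n)
x = cyclic_reduction(sub, diag, sup, rhs)
A = np.diag(diag) + np.diag(sub[1:], -1) + np.diag(sup[:-1], 1)
print(np.allclose(A @ x, rhs))   # True
```

The eliminations within each sweep are mutually independent, which is also what makes cyclic reduction attractive for parallel implementation.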

  16. Explicit Gaussian quadrature rules for C^1 cubic splines with symmetrically stretched knot sequence

    KAUST Repository

    Ait-Haddou, Rachid

    2015-06-19

We provide explicit expressions for quadrature rules on the space of C^1 cubic splines with non-uniform, symmetrically stretched knot sequences. The quadrature nodes and weights are derived via an explicit recursion that avoids the intervention of any numerical solver, and the rule is optimal, that is, it requires a minimal number of nodes. Numerical experiments validating the theoretical results and the error estimates of the quadrature rules are also presented.
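For context, a non-optimal baseline is easy to state: a two-node Gauss-Legendre rule on each knot span is exact for polynomials of degree 3, hence for any cubic spline; the optimal rules of the record need fewer nodes by exploiting C^1 continuity across knots. A sketch with invented knots and integrand:

```python
import numpy as np

# Two-node Gauss-Legendre per knot span integrates any cubic exactly
# (exact through degree 2*2 - 1 = 3): a baseline for spline quadrature.
def gauss2_piecewise(f, knots):
    nodes = np.array([-1.0, 1.0]) / np.sqrt(3.0)   # reference nodes on [-1, 1]
    total = 0.0
    for lo, hi in zip(knots[:-1], knots[1:]):
        half, mid = 0.5 * (hi - lo), 0.5 * (hi + lo)
        total += half * sum(f(mid + half * t) for t in nodes)  # weights are 1
    return total

f = lambda x: x**3 - 2.0 * x**2 + 0.5 * x + 1.0    # a single cubic piece
knots = np.array([0.0, 0.7, 1.3, 2.0])
exact = 2.0**4 / 4 - 2.0 * 2.0**3 / 3 + 0.5 * 2.0**2 / 2 + 2.0
print(abs(gauss2_piecewise(f, knots) - exact))     # ~0 up to round-off
```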

  17. A fourth order spline collocation approach for a business cycle model

    Science.gov (United States)

    Sayfy, A.; Khoury, S.; Ibdah, H.

    2013-10-01

A collocation approach based on fourth-order cubic B-splines is presented for the numerical solution of a Kaleckian business cycle model formulated as a nonlinear delay differential equation. The equation is approximated and the nonlinearity is handled by employing an iterative scheme arising from Newton's method. It is shown that the model exhibits a conditionally dynamically stable cycle. The fourth-order rate of convergence of the scheme is verified numerically for different special cases.

  18. Discrete quintic spline for boundary value problem in plate deflation theory

    Science.gov (United States)

    Wong, Patricia J. Y.

    2017-07-01

We propose a numerical scheme for a fourth-order boundary value problem arising from plate deflation theory. The scheme involves a discrete quintic spline; it is of order 4 if a parameter takes a specific value, and of order 2 otherwise. We also present a well-known numerical example to illustrate the efficiency of our method and to compare it with other numerical methods proposed in the literature.

  19. Nonlinear Multivariate Spline-Based Control Allocation for High-Performance Aircraft

    OpenAIRE

    Tol, H.J.; De Visser, C.C.; Van Kampen, E.; Chu, Q.P.

    2014-01-01

    High performance flight control systems based on the nonlinear dynamic inversion (NDI) principle require highly accurate models of aircraft aerodynamics. In general, the accuracy of the internal model determines to what degree the system nonlinearities can be canceled; the more accurate the model, the better the cancellation, and with that, the higher the performance of the controller. In this paper a new control system is presented that combines NDI with multivariate simplex spline based con...

  20. A splitting algorithm for the wavelet transform of cubic splines on a nonuniform grid

    Science.gov (United States)

    Sulaimanov, Z. M.; Shumilov, B. M.

    2017-10-01

For cubic splines with nonuniform nodes, splitting with respect to the even and odd nodes is used to obtain a wavelet expansion algorithm in the form of the solution of a tridiagonal system of linear algebraic equations for the coefficients. Hand computations are used to investigate the application of this algorithm to numerical differentiation. The results are illustrated by solving a prediction problem.
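The even/odd splitting at the heart of such algorithms can be sketched with a much-simplified lifting step: a linear predictor stands in for the record's cubic-spline prediction (which would require the tridiagonal solve), but the split/predict/reconstruct structure is the same. All signals here are invented:

```python
import numpy as np

def split_predict(values):
    """One lifting-style analysis step: split samples into even/odd nodes,
    predict each odd sample from its even neighbours, and keep the
    prediction error as the detail (wavelet) coefficients."""
    even, odd = values[0::2], values[1::2]
    prediction = 0.5 * (even[:-1] + even[1:])   # assumes len(values) is odd
    return even, odd - prediction

def merge(even, details):
    """Exact inverse of split_predict: rebuild odd samples, re-interleave."""
    odd = details + 0.5 * (even[:-1] + even[1:])
    out = np.empty(len(even) + len(odd))
    out[0::2], out[1::2] = even, odd
    return out

x = np.linspace(0.0, 1.0, 9)
sig = np.sin(2 * np.pi * x)
even, det = split_predict(sig)
print(np.allclose(merge(even, det), sig))   # perfect reconstruction: True
```

With the linear predictor the details vanish exactly on linear signals; the cubic-spline predictor of the record plays the same role one polynomial degree higher.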

  1. Resolving the mass-anisotropy degeneracy of the spherically symmetric Jeans equation - II. Optimum smoothing and model validation

    Science.gov (United States)

    Diakogiannis, Foivos I.; Lewis, Geraint F.; Ibata, Rodrigo A.

    2014-09-01

The spherical Jeans equation is widely used to estimate the mass content of stellar systems with apparent spherical symmetry. However, this method suffers from a degeneracy between the assumed mass density and the kinematic anisotropy profile, β(r). In a previous work, we laid the theoretical foundations for an algorithm that combines smoothing B splines with equations from dynamics to remove this degeneracy. Specifically, our method reconstructs a unique kinematic profile of σ_rr^2 and σ_tt^2 for an assumed free functional form of the potential and mass density (Φ, ρ) and given a set of observed line-of-sight velocity dispersion measurements, σ_los^2. In Paper I, we demonstrated the efficiency of our algorithm with a very simple example and commented on the need for optimum smoothing of the B-spline representation, in order to avoid unphysical variational behaviour when there is large uncertainty in the data. In the current contribution, we present a process for finding the optimum smoothing for a given data set by using information on the behaviour of known ideal theoretical models. Markov chain Monte Carlo methods are used to explore the degeneracy in the dynamical modelling process. We validate our model through applications to synthetic data for systems with constant or variable mass-to-light ratio Υ. In all cases, we recover excellent fits of theoretical functions to observables and unique solutions. Our algorithm is a robust method for the removal of the mass-anisotropy degeneracy of the spherically symmetric Jeans equation for an assumed functional form of the mass density.
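The role of "optimum smoothing" can be illustrated with a discrete penalized least squares (Whittaker-type) smoother, a deliberately simplified stand-in for the paper's smoothing B splines; the data, penalty values and sine test function are all invented:

```python
import numpy as np

def whittaker_smooth(y, lam):
    """Penalized least squares: minimize ||y - z||^2 + lam * ||D2 z||^2,
    where D2 is the second-difference operator. Larger lam = smoother fit."""
    n = len(y)
    D = np.diff(np.eye(n), n=2, axis=0)     # rows: z[i] - 2 z[i+1] + z[i+2]
    return np.linalg.solve(np.eye(n) + lam * D.T @ D, y)

rng = np.random.default_rng(1)
x = np.linspace(0, 2 * np.pi, 60)
y = np.sin(x) + 0.3 * rng.standard_normal(60)
for lam in (0.1, 10.0, 1e4):                # under-, reasonably, over-smoothed
    z = whittaker_smooth(y, lam)
    print(lam, np.sqrt(np.mean((z - np.sin(x)) ** 2)))
```

Too little penalty chases the noise ("unphysical variational behaviour"), too much flattens the signal; the record's contribution is a principled way to pick the middle value.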

  2. B-spline algebraic diagrammatic construction: Application to photoionization cross-sections and high-order harmonic generation

    Energy Technology Data Exchange (ETDEWEB)

    Ruberti, M.; Averbukh, V. [Department of Physics, Imperial College London, Prince Consort Road, London SW7 2AZ (United Kingdom); Decleva, P. [Dipartimento di Scienze Chimiche, Universita’ di Trieste, Via Giorgieri 1, I-34127 Trieste (Italy)

    2014-10-28

We present the first implementation of the ab initio many-body Green's function method, algebraic diagrammatic construction (ADC), in the B-spline single-electron basis. B-spline versions of the first order [ADC(1)] and second order [ADC(2)] schemes for the polarization propagator are developed and applied to the ab initio calculation of static (photoionization cross-sections) and dynamic (high-order harmonic generation spectra) quantities. We show that the cross-section features that pose a challenge for Gaussian basis calculations, such as Cooper minima and high-energy tails, are reproduced by the B-spline ADC in very good agreement with experiment. We also present the first dynamic B-spline ADC results, showing that the effect of the Cooper minimum on the high-order harmonic generation spectrum of Ar is correctly predicted by the time-dependent ADC calculation in the B-spline basis. The present development paves the way for the application of the B-spline ADC to both energy- and time-resolved theoretical studies of many-electron phenomena in atoms, molecules, and clusters.

  3. Nonlinear registration using B-spline feature approximation and image similarity

    Science.gov (United States)

    Kim, June-Sic; Kim, Jae Seok; Kim, In Young; Kim, Sun Il

    2001-07-01

Warping methods are broadly classified into image-matching methods based on similar pixel intensity distributions and feature-matching methods using distinct anatomical features. Feature-based methods may fail to match local variations between two images, although they match global features well. Similarity-based methods, in turn, can produce false matches corresponding to local minima of the underlying energy functions. To avoid the local minima problem, we propose a non-linear deformable registration method utilizing the global information of feature matching and the local information of image matching. To define the features, the gray matter and white matter of brain tissue are segmented by the Fuzzy C-Means (FCM) algorithm. A B-spline approximation technique is used for feature matching. We use a multi-resolution B-spline approximation method which modifies the multilevel B-spline interpolation method: it locally changes the resolution of the control lattice in proportion to the distance between features of the two images. Mutual information is used as the similarity measure, and the deformation fields are locally refined until the similarity is maximized. In tests on two 3D T1-weighted MRIs, this method maintained the accuracy of conventional image-matching methods without the local minimum problem.
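The mutual information similarity measure mentioned above is typically estimated from a joint intensity histogram; a sketch with invented random images (the bin count is an arbitrary choice, not from the paper):

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Mutual information between two images, estimated from the joint
    intensity histogram: MI = sum p(a,b) log[ p(a,b) / (p(a) p(b)) ]."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p_ab = joint / joint.sum()
    p_a = p_ab.sum(axis=1, keepdims=True)      # marginal of img_a
    p_b = p_ab.sum(axis=0, keepdims=True)      # marginal of img_b
    nz = p_ab > 0                              # 0 log 0 := 0
    return float(np.sum(p_ab[nz] * np.log(p_ab[nz] / (p_a @ p_b)[nz])))

rng = np.random.default_rng(2)
img = rng.random((64, 64))
noise = rng.random((64, 64))
print(mutual_information(img, img) > mutual_information(img, noise))  # True
```

A registration driven by this measure perturbs the deformation and keeps changes that increase MI, exactly the refinement loop the abstract describes.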

  4. On developing B-spline registration algorithms for multi-core processors.

    Science.gov (United States)

    Shackleford, J A; Kandasamy, N; Sharp, G C

    2010-11-07

    Spline-based deformable registration methods are quite popular within the medical-imaging community due to their flexibility and robustness. However, they require a large amount of computing time to obtain adequate results. This paper makes two contributions towards accelerating B-spline-based registration. First, we propose a grid-alignment scheme and associated data structures that greatly reduce the complexity of the registration algorithm. Based on this grid-alignment scheme, we then develop highly data parallel designs for B-spline registration within the stream-processing model, suitable for implementation on multi-core processors such as graphics processing units (GPUs). Particular attention is focused on an optimal method for performing analytic gradient computations in a data parallel fashion. CPU and GPU versions are validated for execution time and registration quality. Performance results on large images show that our GPU algorithm achieves a speedup of 15 times over the single-threaded CPU implementation whereas our multi-core CPU algorithm achieves a speedup of 8 times over the single-threaded implementation. The CPU and GPU versions achieve near-identical registration quality in terms of RMS differences between the generated vector fields.

  5. On developing B-spline registration algorithms for multi-core processors

    International Nuclear Information System (INIS)

    Shackleford, J A; Kandasamy, N; Sharp, G C

    2010-01-01

    Spline-based deformable registration methods are quite popular within the medical-imaging community due to their flexibility and robustness. However, they require a large amount of computing time to obtain adequate results. This paper makes two contributions towards accelerating B-spline-based registration. First, we propose a grid-alignment scheme and associated data structures that greatly reduce the complexity of the registration algorithm. Based on this grid-alignment scheme, we then develop highly data parallel designs for B-spline registration within the stream-processing model, suitable for implementation on multi-core processors such as graphics processing units (GPUs). Particular attention is focused on an optimal method for performing analytic gradient computations in a data parallel fashion. CPU and GPU versions are validated for execution time and registration quality. Performance results on large images show that our GPU algorithm achieves a speedup of 15 times over the single-threaded CPU implementation whereas our multi-core CPU algorithm achieves a speedup of 8 times over the single-threaded implementation. The CPU and GPU versions achieve near-identical registration quality in terms of RMS differences between the generated vector fields.

  6. Bayesian inversion of refraction seismic traveltime data

    Science.gov (United States)

    Ryberg, T.; Haberland, Ch

    2018-03-01

We apply a Bayesian Markov chain Monte Carlo (McMC) formalism to the inversion of refraction seismic traveltime data sets to derive 2-D velocity models below linear arrays (i.e. profiles) of sources and seismic receivers. Typical refraction data sets, especially when using the far-offset observations, are known for experimental geometries which are very poor, highly ill-posed and far from ideal. As a consequence, the structural resolution quickly degrades with depth. Conventional inversion techniques, based on regularization, potentially suffer from the choice of the inversion parameters (i.e. number and distribution of cells, starting velocity models, damping and smoothing constraints, data noise level, etc.) and explore the model space only locally. McMC techniques are used for exhaustive sampling of the model space without the need for prior knowledge (or assumptions) about the inversion parameters, resulting in a large number of models fitting the observations. Statistical analysis of these models allows us to derive an average (reference) solution and its standard deviation, thus providing uncertainty estimates for the inversion result. The highly non-linear character of the inversion problem, mainly caused by the experiment geometry, does not allow deriving a reference solution and error map by a simple averaging procedure. We present a modified averaging technique, which excludes parts of the prior distribution from the posterior values where ray coverage is poor, thus providing reliable estimates of inversion model properties even in those parts of the models. The model is discretized by a set of Voronoi polygons (with constant slowness cells) or a triangulated mesh (with interpolation within the triangles). Forward traveltime calculations are performed by a fast, finite-difference-based eikonal solver. The method is applied to a data set from a refraction seismic survey from Northern Namibia and compared to conventional tomography. An inversion test
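The Bayesian McMC machinery can be illustrated on a deliberately tiny inversion: a single layer velocity estimated from noisy traveltimes by random-walk Metropolis. All geometry, noise levels and tuning constants below are invented; the paper's 2-D sampler over Voronoi/triangulated models is far more elaborate:

```python
import numpy as np

rng = np.random.default_rng(3)
offsets = np.linspace(1.0, 10.0, 20)        # source-receiver offsets (km)
v_true, sigma = 3.0, 0.01                   # true velocity (km/s), noise (s)
t_obs = offsets / v_true + sigma * rng.standard_normal(offsets.size)

def log_post(v):
    """Gaussian traveltime likelihood, flat prior on [1, 6] km/s."""
    if not 1.0 < v < 6.0:
        return -np.inf
    r = t_obs - offsets / v
    return -0.5 * np.sum((r / sigma) ** 2)

v, lp, samples = 2.0, log_post(2.0), []
for _ in range(20000):                      # random-walk Metropolis
    v_prop = v + 0.05 * rng.standard_normal()
    lp_prop = log_post(v_prop)
    if np.log(rng.random()) < lp_prop - lp:
        v, lp = v_prop, lp_prop
    samples.append(v)
post = np.array(samples[5000:])             # discard burn-in
print(post.mean(), post.std())              # mean near 3.0, small spread
```

The posterior standard deviation is the single-parameter analogue of the error maps derived from the model ensemble in the record.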

  7. Bayesian Model Averaging for Propensity Score Analysis.

    Science.gov (United States)

    Kaplan, David; Chen, Jianshen

    2014-01-01

    This article considers Bayesian model averaging as a means of addressing uncertainty in the selection of variables in the propensity score equation. We investigate an approximate Bayesian model averaging approach based on the model-averaged propensity score estimates produced by the R package BMA but that ignores uncertainty in the propensity score. We also provide a fully Bayesian model averaging approach via Markov chain Monte Carlo sampling (MCMC) to account for uncertainty in both parameters and models. A detailed study of our approach examines the differences in the causal estimate when incorporating noninformative versus informative priors in the model averaging stage. We examine these approaches under common methods of propensity score implementation. In addition, we evaluate the impact of changing the size of Occam's window used to narrow down the range of possible models. We also assess the predictive performance of both Bayesian model averaging propensity score approaches and compare it with the case without Bayesian model averaging. Overall, results show that both Bayesian model averaging propensity score approaches recover the treatment effect estimates well and generally provide larger uncertainty estimates, as expected. Both Bayesian model averaging approaches offer slightly better prediction of the propensity score compared with the Bayesian approach with a single propensity score equation. Covariate balance checks for the case study show that both Bayesian model averaging approaches offer good balance. The fully Bayesian model averaging approach also provides posterior probability intervals of the balance indices.
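Posterior model probabilities of the kind averaged over here are often approximated from BIC values; a sketch with invented BICs for three candidate propensity-score models (the article itself uses the R package BMA and full MCMC, not this shortcut):

```python
import numpy as np

def bma_weights(bics):
    """Approximate posterior model probabilities from BIC values under
    equal prior model odds: w_m proportional to exp(-BIC_m / 2)."""
    b = np.asarray(bics, dtype=float)
    w = np.exp(-0.5 * (b - b.min()))        # subtract min for stability
    return w / w.sum()

# Three candidate propensity-score models, identified only by their BICs.
w = bma_weights([1012.4, 1010.1, 1025.9])
print(np.round(w, 3))                       # best-BIC model dominates
```

A model-averaged propensity score is then the weighted combination of the per-model scores; Occam's window corresponds to dropping models whose weight falls below a cutoff before renormalizing.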

  8. Pedestrian dynamics via Bayesian networks

    Science.gov (United States)

    Venkat, Ibrahim; Khader, Ahamad Tajudin; Subramanian, K. G.

    2014-06-01

Studies on pedestrian dynamics have vital applications in crowd control management relevant to organizing safer large-scale gatherings, including pilgrimages. Reasoning about pedestrian motion via computational intelligence techniques can be posed as a potential research problem within the realms of Artificial Intelligence. In this contribution, we propose a "Bayesian Network Model for Pedestrian Dynamics" (BNMPD) to reason about the vast uncertainty imposed by pedestrian motion. With reference to key findings from the literature, which include simulation studies, we systematically identify: what are the various factors that could contribute to the prediction of crowd flow status? The proposed model unifies these factors in a cohesive manner using Bayesian Networks (BNs) and serves as a sophisticated probabilistic tool to simulate vital cause-and-effect relationships entailed in the pedestrian domain.
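A Bayesian network reduces to repeated applications of the chain rule and Bayes' rule; the two-node toy network below is in the spirit of BNMPD, with all states and probabilities invented for illustration:

```python
# A toy network: crowd Density -> Flow status. Probabilities are invented.
p_density = {"low": 0.6, "high": 0.4}
p_flow_given_density = {
    "low":  {"smooth": 0.9, "congested": 0.1},
    "high": {"smooth": 0.3, "congested": 0.7},
}

def p_flow(flow):
    """Marginalize the parent: P(Flow) = sum_d P(d) P(Flow | d)."""
    return sum(p_density[d] * p_flow_given_density[d][flow] for d in p_density)

def p_density_given_flow(density, flow):
    """Bayes' rule on the same tables: P(Density | Flow)."""
    return p_density[density] * p_flow_given_density[density][flow] / p_flow(flow)

print(p_flow("congested"))                        # 0.6*0.1 + 0.4*0.7 = 0.34
print(p_density_given_flow("high", "congested"))  # 0.28 / 0.34 ~ 0.824
```

A full BNMPD-style model would add further parent nodes (events, geometry, demographics) and use the same enumeration, or an approximate inference engine, over the larger joint distribution.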

  9. Bayesian Networks and Influence Diagrams

    DEFF Research Database (Denmark)

    Kjærulff, Uffe Bro; Madsen, Anders Læsø

Probabilistic networks, also known as Bayesian networks and influence diagrams, have become one of the most promising technologies in the area of applied artificial intelligence, offering intuitive, efficient, and reliable methods for diagnosis, prediction, decision making, classification, troubleshooting, and data mining under uncertainty. Bayesian Networks and Influence Diagrams: A Guide to Construction and Analysis provides a comprehensive guide for practitioners who wish to understand, construct, and analyze intelligent systems for decision support based on probabilistic networks. Intended primarily for practitioners, this book does not require sophisticated mathematical skills or deep understanding of the underlying theory and methods, nor does it discuss alternative technologies for reasoning under uncertainty. The theory and methods presented are illustrated through more than 140 examples ...

  10. BAYESIAN IMAGE RESTORATION, USING CONFIGURATIONS

    Directory of Open Access Journals (Sweden)

    Thordis Linda Thorarinsdottir

    2011-05-01

In this paper, we develop a Bayesian procedure for removing noise from images that can be viewed as noisy realisations of random sets in the plane. The procedure utilises recent advances in configuration theory for noise-free random sets, where the probabilities of observing the different boundary configurations are expressed in terms of the mean normal measure of the random set. These probabilities are used as prior probabilities in a Bayesian image restoration approach. Estimation of the remaining parameters in the model is outlined for salt-and-pepper noise. The inference in the model is discussed in detail for 3 × 3 and 5 × 5 configurations, and examples of the performance of the procedure are given.
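A simplified version of such Bayesian restoration can be sketched with an Ising-style smoothness prior and iterated conditional modes in place of the paper's configuration-based prior; the image, noise level and parameters below are invented:

```python
import numpy as np

rng = np.random.default_rng(4)
clean = np.zeros((32, 32), dtype=int)
clean[8:24, 8:24] = 1                          # a simple "random set": a square
flip = rng.random(clean.shape) < 0.1           # salt-and-pepper noise
noisy = np.where(flip, 1 - clean, clean)

def icm_restore(img, p_flip=0.1, beta=1.5, sweeps=5):
    """Iterated conditional modes: maximize, pixel by pixel, a posterior
    combining a flip-noise likelihood with an Ising smoothness prior."""
    x = img.copy()
    log_lik = np.log([p_flip, 1.0 - p_flip])   # [mismatch, match] with data
    for _ in range(sweeps):
        for i in range(x.shape[0]):
            for j in range(x.shape[1]):
                nbrs = [x[a, b] for a, b in ((i-1, j), (i+1, j), (i, j-1), (i, j+1))
                        if 0 <= a < x.shape[0] and 0 <= b < x.shape[1]]
                score = [log_lik[int(img[i, j] == v)]
                         + beta * sum(n == v for n in nbrs) for v in (0, 1)]
                x[i, j] = int(score[1] > score[0])
    return x

restored = icm_restore(noisy)
print((noisy != clean).mean(), (restored != clean).mean())  # error drops
```

The configuration-based prior of the record refines the crude Ising term by scoring local boundary configurations according to the mean normal measure of the underlying random set.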

  11. Bayesian Inference on Proportional Elections

    Science.gov (United States)

    Brunello, Gabriel Hideki Vatanabe; Nakano, Eduardo Yoshio

    2015-01-01

Polls for majoritarian voting systems usually show estimates of the percentage of votes for each candidate. However, proportional vote systems do not necessarily guarantee that the candidate with the largest percentage of votes will be elected. Thus, traditional methods used in majoritarian elections cannot be applied to proportional elections. In this context, the purpose of this paper was to perform Bayesian inference on proportional elections considering the Brazilian system of seat distribution. More specifically, a methodology was developed to estimate the probability that a given party will have representation in the chamber of deputies. Inferences were made in a Bayesian scenario using the Monte Carlo simulation technique, and the developed methodology was applied to data from the Brazilian elections for Members of the Legislative Assembly and Federal Chamber of Deputies in 2010. A performance rate was also presented to evaluate the efficiency of the methodology. Calculations and simulations were carried out using the free R statistical software. PMID:25786259
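The Monte Carlo approach can be sketched as: draw vote shares from a Dirichlet posterior, push each draw through a seat-allocation rule, and count how often each party is represented. The sketch below uses a plain D'Hondt allocation (the Brazilian system adds an electoral-quotient step) and invented poll counts:

```python
import numpy as np

rng = np.random.default_rng(5)

def dhondt(votes, n_seats):
    """D'Hondt (highest averages) seat allocation; a simplified stand-in
    for the Brazilian rule, which also uses an electoral quotient."""
    votes = np.asarray(votes, dtype=float)
    seats = np.zeros(len(votes), dtype=int)
    for _ in range(n_seats):
        quotients = votes / (seats + 1)
        seats[np.argmax(quotients)] += 1
    return seats

# Posterior over vote shares: Dirichlet(1 + counts) from an invented poll.
poll = np.array([420, 330, 160, 90])           # 1000 respondents, 4 parties
draws = rng.dirichlet(1 + poll, size=4000)     # posterior samples of shares
seats = np.array([dhondt(p, 10) for p in draws])
p_rep = (seats > 0).mean(axis=0)               # P(party wins >= 1 seat)
print(p_rep)
```

Large parties get a seat with probability essentially 1, while the smallest party's probability of representation is genuinely uncertain, which is exactly the quantity the paper's methodology reports.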

  12. Bayesian analyses of cognitive architecture.

    Science.gov (United States)

    Houpt, Joseph W; Heathcote, Andrew; Eidels, Ami

    2017-06-01

The question of cognitive architecture (how cognitive processes are temporally organized) has arisen in many areas of psychology. This question has proved difficult to answer, with many proposed solutions turning out to be spurious. Systems factorial technology (Townsend & Nozawa, 1995) provided the first rigorous empirical and analytical method of identifying cognitive architecture, using the survivor interaction contrast (SIC) to determine when people are using multiple sources of information in parallel or in series. Although the SIC is based on rigorous nonparametric mathematical modeling of response time distributions, for many years inference about cognitive architecture has relied solely on visual assessment. Houpt and Townsend (2012) recently introduced null hypothesis significance tests, and here we develop both parametric and nonparametric (encompassing prior) Bayesian inference. We show that the Bayesian approaches can have considerable advantages. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  13. Deep Learning and Bayesian Methods

    Directory of Open Access Journals (Sweden)

    Prosper Harrison B.

    2017-01-01

A revolution is underway in which deep neural networks are routinely used to solve difficult problems such as face recognition and natural language understanding. Particle physicists have taken notice and have started to deploy these methods, achieving results that suggest a potentially significant shift in how data might be analyzed in the not too distant future. We discuss a few recent developments in the application of deep neural networks and then indulge in speculation about how such methods might be used to automate certain aspects of data analysis in particle physics. Next, the connection to Bayesian methods is discussed and the paper ends with thoughts on a significant practical issue, namely, how, from a Bayesian perspective, one might optimize the construction of deep neural networks.

  14. Bayesian inference on proportional elections.

    Directory of Open Access Journals (Sweden)

    Gabriel Hideki Vatanabe Brunello

Polls for majoritarian voting systems usually show estimates of the percentage of votes for each candidate. However, proportional vote systems do not necessarily guarantee that the candidate with the largest percentage of votes will be elected. Thus, traditional methods used in majoritarian elections cannot be applied to proportional elections. In this context, the purpose of this paper was to perform Bayesian inference on proportional elections considering the Brazilian system of seat distribution. More specifically, a methodology was developed to estimate the probability that a given party will have representation in the chamber of deputies. Inferences were made in a Bayesian scenario using the Monte Carlo simulation technique, and the developed methodology was applied to data from the Brazilian elections for Members of the Legislative Assembly and Federal Chamber of Deputies in 2010. A performance rate was also presented to evaluate the efficiency of the methodology. Calculations and simulations were carried out using the free R statistical software.

  15. Very smooth points of spaces of operators

    Indian Academy of Sciences (India)

Our notation and terminology are standard and can be found in [HWW]. For a Banach space X, we denote by ∂_e X_1 the set of extreme points of the unit ball. 2. Very smooth points. Let M ⊂ X be a closed subspace. It was observed in [MR] that if x ∈ M is a smooth point of X then it is a smooth point of M. It is easy to see that if every continuous.

  16. Smoothed analysis of complex conic condition numbers

    OpenAIRE

    Buergisser, Peter; Cucker, Felipe; Lotz, Martin

    2006-01-01

    Smoothed analysis of complexity bounds and condition numbers has been done, so far, on a case by case basis. In this paper we consider a reasonably large class of condition numbers for problems over the complex numbers and we obtain smoothed analysis estimates for elements in this class depending only on geometric invariants of the corresponding sets of ill-posed inputs. These estimates are for a version of smoothed analysis proposed in this paper which, to the best of our knowledge, appears ...

  17. Space Shuttle RTOS Bayesian Network

    Science.gov (United States)

    Morris, A. Terry; Beling, Peter A.

    2001-01-01

With shrinking budgets and the requirements to increase reliability and operational life of the existing orbiter fleet, NASA has proposed various upgrades for the Space Shuttle that are consistent with national space policy. The cockpit avionics upgrade (CAU), a high priority item, has been selected as the next major upgrade. The primary functions of cockpit avionics include flight control, guidance and navigation, communication, and orbiter landing support. Secondary functions include the provision of operational services for non-avionics systems such as data handling for the payloads and caution and warning alerts to the crew. Recently, a process to select the optimal commercial-off-the-shelf (COTS) real-time operating system (RTOS) for the CAU was conducted by United Space Alliance (USA) Corporation, a joint venture between Boeing and Lockheed Martin, the prime contractor for space shuttle operations. In order to independently assess the RTOS selection, NASA has used the Bayesian network-based scoring methodology described in this paper. Our two-stage methodology addresses the issue of RTOS acceptability by incorporating functional, performance and non-functional software measures related to reliability, interoperability, certifiability, efficiency, correctness, business, legal, product history, cost and life cycle. The first stage of the methodology involves obtaining scores for the various measures using a Bayesian network. The Bayesian network incorporates the causal relationships between the various and often competing measures of interest while also assisting the inherently complex decision analysis process with its ability to reason under uncertainty. The structure and selection of prior probabilities for the network is elicited from experts in the field of real-time operating systems. Scores for the various measures are computed using Bayesian probability. In the second stage, multi-criteria trade-off analyses are performed between the scores

  18. Multiview Bayesian Correlated Component Analysis

    DEFF Research Database (Denmark)

    Kamronn, Simon Due; Poulsen, Andreas Trier; Hansen, Lars Kai

    2015-01-01

are identical. Here we propose a hierarchical probabilistic model that can infer the level of universality in such multiview data, from completely unrelated representations, corresponding to canonical correlation analysis, to identical representations as in correlated component analysis. This new model, which we denote Bayesian correlated component analysis, evaluates favorably against three relevant algorithms in simulated data. A well-established benchmark EEG data set is used to further validate the new model and infer the variability of spatial representations across multiple subjects.

  19. Reliability analysis with Bayesian networks

    OpenAIRE

    Zwirglmaier, Kilian Martin

    2017-01-01

    Bayesian networks (BNs) represent a probabilistic modeling tool with large potential for reliability engineering. While BNs have been successfully applied to reliability engineering, there are remaining issues, some of which are addressed in this work. Firstly a classification of BN elicitation approaches is proposed. Secondly two approximate inference approaches, one of which is based on discretization and the other one on sampling, are proposed. These approaches are applicable to hybrid/con...

  20. Interim Bayesian Persuasion: First Steps

    OpenAIRE

    Perez, Eduardo

    2015-01-01

    This paper makes a first attempt at building a theory of interim Bayesian persuasion. I work in a minimalist model where a low or high type sender seeks validation from a receiver who is willing to validate high types exclusively. After learning her type, the sender chooses a complete conditional information structure for the receiver from a possibly restricted feasible set. I suggest a solution to this game that takes into account the signaling potential of the sender's choice.

  1. Bayesian Sampling using Condition Indicators

    DEFF Research Database (Denmark)

    Faber, Michael H.; Sørensen, John Dalsgaard

    2002-01-01

This allows for a Bayesian formulation of the indicators whereby the experience and expertise of the inspection personnel may be fully utilized and consistently updated as frequentistic information is collected. The approach is illustrated on an example considering a concrete structure subject to corrosion. It is shown how half-cell potential measurements may be utilized to update the probability of excessive repair after 50 years.

  2. Computational Neuropsychology and Bayesian Inference

    Directory of Open Access Journals (Sweden)

    Thomas Parr

    2018-02-01

Computational theories of brain function have become very influential in neuroscience. They have facilitated the growth of formal approaches to disease, particularly in psychiatric research. In this paper, we provide a narrative review of the body of computational research addressing neuropsychological syndromes, and focus on those that employ Bayesian frameworks. Bayesian approaches to understanding brain function formulate perception and action as inferential processes. These inferences combine 'prior' beliefs with a generative (predictive) model to explain the causes of sensations. Under this view, neuropsychological deficits can be thought of as false inferences that arise due to aberrant prior beliefs (that are poor fits to the real world). This draws upon the notion of a Bayes optimal pathology – optimal inference with suboptimal priors – and provides a means for computational phenotyping. In principle, any given neuropsychological disorder could be characterized by the set of prior beliefs that would make a patient's behavior appear Bayes optimal. We start with an overview of some key theoretical constructs and use these to motivate a form of computational neuropsychology that relates anatomical structures in the brain to the computations they perform. Throughout, we draw upon computational accounts of neuropsychological syndromes. These are selected to emphasize the key features of a Bayesian approach, and the possible types of pathological prior that may be present. They range from visual neglect through hallucinations to autism. Through these illustrative examples, we review the use of Bayesian approaches to understand the link between biology and computation that is at the heart of neuropsychology.
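The "Bayes optimal pathology" idea is a one-line application of Bayes' rule: the same likelihood combined with a different prior yields a different, yet internally optimal, posterior. A toy illustration with invented numbers:

```python
# Two agents see the same evidence about a binary hidden state
# ("threat" vs "safe") but hold different priors. Under the aberrant
# prior, an objectively unlikely interpretation is Bayes optimal.
def posterior(prior_threat, lik_threat=0.3, lik_safe=0.7):
    """P(threat | data) for one observation, by Bayes' rule.
    lik_* are P(data | state); all values are invented."""
    num = prior_threat * lik_threat
    return num / (num + (1 - prior_threat) * lik_safe)

print(posterior(0.5))    # typical prior:  P(threat | data) = 0.30
print(posterior(0.95))   # aberrant prior: same data, P ~ 0.89
```

Computational phenotyping, in this caricature, amounts to inferring the prior (0.95 rather than 0.5) that makes the observed behavior the optimal response.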

  3. Bayesian Semiparametric Density Deconvolution in the Presence of Conditionally Heteroscedastic Measurement Errors

    KAUST Repository

    Sarkar, Abhra

    2014-10-02

    We consider the problem of estimating the density of a random variable when precise measurements on the variable are not available, but replicated proxies contaminated with measurement error are available for sufficiently many subjects. Under the assumption of additive measurement errors this reduces to a problem of deconvolution of densities. Deconvolution methods often make restrictive and unrealistic assumptions about the density of interest and the distribution of measurement errors, e.g., normality and homoscedasticity and thus independence from the variable of interest. This article relaxes these assumptions and introduces novel Bayesian semiparametric methodology based on Dirichlet process mixture models for robust deconvolution of densities in the presence of conditionally heteroscedastic measurement errors. In particular, the models can adapt to asymmetry, heavy tails and multimodality. In simulation experiments, we show that our methods vastly outperform a recent Bayesian approach based on estimating the densities via mixtures of splines. We apply our methods to data from nutritional epidemiology. Even in the special case when the measurement errors are homoscedastic, our methodology is novel and dominates other methods that have been proposed previously. Additional simulation results, instructions on getting access to the data set and R programs implementing our methods are included as part of online supplemental materials.

  4. Bayesian Semiparametric Density Deconvolution in the Presence of Conditionally Heteroscedastic Measurement Errors.

    Science.gov (United States)

    Sarkar, Abhra; Mallick, Bani K; Staudenmayer, John; Pati, Debdeep; Carroll, Raymond J

    2014-10-01

    We consider the problem of estimating the density of a random variable when precise measurements on the variable are not available, but replicated proxies contaminated with measurement error are available for sufficiently many subjects. Under the assumption of additive measurement errors this reduces to a problem of deconvolution of densities. Deconvolution methods often make restrictive and unrealistic assumptions about the density of interest and the distribution of measurement errors, e.g., normality and homoscedasticity and thus independence from the variable of interest. This article relaxes these assumptions and introduces novel Bayesian semiparametric methodology based on Dirichlet process mixture models for robust deconvolution of densities in the presence of conditionally heteroscedastic measurement errors. In particular, the models can adapt to asymmetry, heavy tails and multimodality. In simulation experiments, we show that our methods vastly outperform a recent Bayesian approach based on estimating the densities via mixtures of splines. We apply our methods to data from nutritional epidemiology. Even in the special case when the measurement errors are homoscedastic, our methodology is novel and dominates other methods that have been proposed previously. Additional simulation results, instructions on getting access to the data set and R programs implementing our methods are included as part of online supplemental materials.

  5. Semiparametric Bayesian Analysis of Nutritional Epidemiology Data in the Presence of Measurement Error

    KAUST Repository

    Sinha, Samiran

    2009-08-10

    We propose a semiparametric Bayesian method for handling measurement error in nutritional epidemiological data. Our goal is to estimate nonparametrically the form of association between a disease and exposure variable while the true values of the exposure are never observed. Motivated by nutritional epidemiological data, we consider the setting where a surrogate covariate is recorded in the primary data, and a calibration data set contains information on the surrogate variable and repeated measurements of an unbiased instrumental variable of the true exposure. We develop a flexible Bayesian method where not only is the relationship between the disease and exposure variable treated semiparametrically, but also the relationship between the surrogate and the true exposure is modeled semiparametrically. The two nonparametric functions are modeled simultaneously via B-splines. In addition, we model the distribution of the exposure variable as a Dirichlet process mixture of normal distributions, thus making its modeling essentially nonparametric and placing this work into the context of functional measurement error modeling. We apply our method to the NIH-AARP Diet and Health Study and examine its performance in a simulation study.

  6. Income and Consumption Smoothing among US States

    DEFF Research Database (Denmark)

    Sørensen, Bent; Yosha, Oved

    within regions but not between regions. This suggests that capital markets transcend regional barriers while credit markets are regional in their nature. Smoothing within the club of rich states is accomplished mainly via capital markets whereas consumption smoothing is dominant within the club of poor...... states. The fraction of a shock to gross state products smoothed by the federal tax-transfer system is the same for various regions and other clubs of states. We calculate the scope for consumption smoothing within various regions and clubs, finding that most gains from risk sharing can be achieved...

  7. Frequency offset estimation in OFDM systems using Bayesian filtering

    Science.gov (United States)

    Yu, Yihua

    2011-10-01

    Orthogonal frequency division multiplexing (OFDM) is sensitive to carrier frequency offset (CFO), which causes inter-carrier interference (ICI). In this paper, we present two schemes for CFO estimation, based respectively on rejection sampling (RS) and on a form of particle filtering (PF) known as the kernel smoothing technique. The first scheme is offline estimation, where the observations contained in the OFDM training symbol are processed in batch mode. The second scheme is online estimation, where the observations in the OFDM training symbol are processed sequentially. Simulations are provided to illustrate the performance of the schemes. Performance comparisons of the two schemes with each other and with other Bayesian methods are provided. Simulation results show that the two schemes are effective in estimating the CFO and can effectively combat the effect of ICI in OFDM systems.

  8. Bayesian methods applied to GWAS.

    Science.gov (United States)

    Fernando, Rohan L; Garrick, Dorian

    2013-01-01

    Bayesian multiple-regression methods are being successfully used for genomic prediction and selection. These regression models simultaneously fit many more markers than the number of observations available for the analysis. Thus, Bayes' theorem is used to combine prior beliefs of marker effects, which are expressed in terms of prior distributions, with information from data for inference. Often, the analyses are too complex for closed-form solutions and Markov chain Monte Carlo (MCMC) sampling is used to draw inferences from posterior distributions. This chapter describes how these Bayesian multiple-regression analyses can be used for GWAS. In most GWAS, false positives are controlled by limiting the genome-wise error rate, which is the probability of one or more false-positive results, to a small value. As the number of tests in GWAS is very large, this results in very low power. Here we show how in Bayesian GWAS false positives can be controlled by limiting the proportion of false-positive results among all positives to some small value. The advantage of this approach is that the power of detecting associations is not inversely related to the number of markers.
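
The Bayesian control of the proportion of false positives among declared associations can be sketched as follows; the posterior probabilities here are invented, whereas in a real analysis they would come from MCMC (e.g., the fraction of posterior samples in which a marker has a nonzero effect):

```python
import numpy as np

def select_by_bayes_fdr(post_prob, fdr_target=0.05):
    """Select markers so that the estimated proportion of false positives
    among those selected (the Bayesian false discovery rate) stays below
    fdr_target.  post_prob[i] is the posterior probability that marker i
    is truly associated with the trait."""
    order = np.argsort(post_prob)[::-1]          # most probable first
    # Expected number of false positives among the top k, divided by k:
    fdr = np.cumsum(1.0 - post_prob[order]) / np.arange(1, len(post_prob) + 1)
    # fdr is nondecreasing, so take the largest prefix with fdr <= target.
    k = np.searchsorted(fdr, fdr_target, side="right")
    return order[:k]

# Hypothetical posterior probabilities of association for 8 markers:
probs = np.array([0.99, 0.97, 0.95, 0.60, 0.40, 0.10, 0.05, 0.01])
print(select_by_bayes_fdr(probs, fdr_target=0.05))   # indices of selected markers
```

Unlike a genome-wise error rate, the threshold here does not become more stringent as the number of markers grows, which is the power advantage the chapter describes.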

  9. Image interpolation via graph-based Bayesian label propagation.

    Science.gov (United States)

    Xianming Liu; Debin Zhao; Jiantao Zhou; Wen Gao; Huifang Sun

    2014-03-01

    In this paper, we propose a novel image interpolation algorithm via graph-based Bayesian label propagation. The basic idea is to first create a graph with known and unknown pixels as vertices and with edge weights encoding the similarity between vertices; the interpolation problem then becomes one of effectively propagating the label information from known points to unknown ones. This process can be posed as Bayesian inference, in which we try to combine the principles of local adaptation and global consistency to obtain accurate and robust estimation. Specifically, our algorithm first constructs a set of local interpolation models, which predict the intensity labels of all image samples, and a loss term is minimized to keep the predicted labels of the available low-resolution (LR) samples sufficiently close to the original ones. Then, all of the losses evaluated in local neighborhoods are accumulated together to measure the global consistency on all samples. Moreover, a graph-Laplacian-based manifold regularization term is incorporated to penalize the global smoothness of intensity labels; such smoothing can alleviate the insufficient training of the local models and make them more robust. Finally, we construct a unified objective function that combines the global loss of the locally linear regression, the squared error of prediction bias on the available LR samples, and the manifold regularization term. As a convex optimization problem, it admits a closed-form solution. Experimental results demonstrate that the proposed method achieves competitive performance with the state-of-the-art image interpolation algorithms.
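
As a toy illustration of the graph-Laplacian ingredient of such a formulation (not the authors' full algorithm, which also learns local regression models), interpolation with a Laplacian smoothness penalty has a closed-form solution; here is a drastically simplified 1-D chain-graph version:

```python
import numpy as np

def graph_interpolate(y, known, lam=0.01):
    """Fill unknown entries of a 1-D signal by solving the convex problem
         min_f  sum_{i in known} (f_i - y_i)^2  +  lam * f^T L f,
    where L is the Laplacian of the chain graph linking neighbouring samples.
    Setting the gradient to zero gives the linear system (M + lam*L) f = M y,
    with M a 0/1 diagonal mask of the known positions."""
    n = len(y)
    L = np.zeros((n, n))
    for i in range(n - 1):          # chain graph: node i connected to i+1
        L[i, i] += 1; L[i + 1, i + 1] += 1
        L[i, i + 1] -= 1; L[i + 1, i] -= 1
    M = np.zeros((n, n))
    M[known, known] = 1.0
    return np.linalg.solve(M + lam * L, M @ y)

y = np.array([0.0, 0.0, 0.0, 0.0, 4.0])   # only indices 0 and 4 are observed
f = graph_interpolate(y, known=[0, 4])
print(np.round(f, 2))   # unknown samples ramp smoothly from ~0 to ~4
```

The unknown interior values satisfy the discrete Laplace equation, so they interpolate linearly between the observed endpoints, which is the smoothness behaviour the regularizer enforces.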

  10. Modelling subject-specific childhood growth using linear mixed-effect models with cubic regression splines

    Directory of Open Access Journals (Sweden)

    Laura M. Grajeda

    2016-01-01

    Full Text Available Abstract Background Childhood growth is a cornerstone of pediatric research. Statistical models need to consider individual trajectories to adequately describe growth outcomes. Specifically, well-defined longitudinal models are essential to characterize both population and subject-specific growth. Linear mixed-effect models with cubic regression splines can account for the nonlinearity of growth curves and provide reasonable estimators of population and subject-specific growth, velocity and acceleration. Methods We provide a stepwise approach that builds from simple to complex models, and account for the intrinsic complexity of the data. We start with standard cubic splines regression models and build up to a model that includes subject-specific random intercepts and slopes and residual autocorrelation. We then compared cubic regression splines vis-à-vis linear piecewise splines, and with varying number of knots and positions. Statistical code is provided to ensure reproducibility and improve dissemination of methods. Models are applied to longitudinal height measurements in a cohort of 215 Peruvian children followed from birth until their fourth year of life. Results Unexplained variability, as measured by the variance of the regression model, was reduced from 7.34 when using ordinary least squares to 0.81 (p < 0.001) when using a linear mixed-effect model with random slopes and a first order continuous autoregressive error term. There was substantial heterogeneity in both the intercept (p < 0.001) and slopes (p < 0.001) of the individual growth trajectories. We also identified important serial correlation within the structure of the data (ρ = 0.66; 95 % CI 0.64 to 0.68; p < 0.001), which we modeled with a first order continuous autoregressive error term as evidenced by the variogram of the residuals and by a lack of association among residuals. The final model provides a parametric linear regression equation for both estimation and
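
The paper's statistical code is in R; a minimal Python sketch of the fixed-effects part (an ordinary least-squares cubic regression spline with a truncated-power basis, invented growth-like data, and none of the random effects or autocorrelation structure) looks like this:

```python
import numpy as np

def cubic_spline_basis(x, knots):
    """Design matrix for a cubic regression spline using the
    truncated-power basis: [1, x, x^2, x^3, (x-k1)_+^3, ..., (x-kq)_+^3]."""
    cols = [np.ones_like(x), x, x**2, x**3]
    cols += [np.clip(x - k, 0, None) ** 3 for k in knots]
    return np.column_stack(cols)

rng = np.random.default_rng(0)
age = np.sort(rng.uniform(0, 4, 200))                      # ages 0-4 years
height = 50 + 25 * np.log1p(age) + rng.normal(0, 1, 200)   # growth-like curve

knots = [1.0, 2.0, 3.0]                             # interior knots
X = cubic_spline_basis(age, knots)
coef, *_ = np.linalg.lstsq(X, height, rcond=None)   # least-squares fit
fitted = X @ coef
print(round(float(np.mean((height - fitted) ** 2)), 2))  # residual variance near 1
```

The spline basis makes the fitted curve nonlinear in age while the coefficients remain linear, which is what lets the same design matrix slot into a mixed-effects model.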

  11. 12th Brazilian Meeting on Bayesian Statistics

    CERN Document Server

    Louzada, Francisco; Rifo, Laura; Stern, Julio; Lauretto, Marcelo

    2015-01-01

    Through refereed papers, this volume focuses on the foundations of the Bayesian paradigm; their comparison to objectivistic or frequentist Statistics counterparts; and the appropriate application of Bayesian foundations. This research in Bayesian Statistics is applicable to data analysis in biostatistics, clinical trials, law, engineering, and the social sciences. EBEB, the Brazilian Meeting on Bayesian Statistics, is held every two years by the ISBrA, the International Society for Bayesian Analysis, one of the most active chapters of the ISBA. The 12th meeting took place March 10-14, 2014 in Atibaia. Interest in foundations of inductive Statistics has grown recently in accordance with the increasing availability of Bayesian methodological alternatives. Scientists need to deal with the ever more difficult choice of the optimal method to apply to their problem. This volume shows how Bayes can be the answer. The examination and discussion on the foundations work towards the goal of proper application of Bayesia...

  12. Compiling Relational Bayesian Networks for Exact Inference

    DEFF Research Database (Denmark)

    Jaeger, Manfred; Chavira, Mark; Darwiche, Adnan

    2004-01-01

    We describe a system for exact inference with relational Bayesian networks as defined in the publicly available \\primula\\ tool. The system is based on compiling propositional instances of relational Bayesian networks into arithmetic circuits and then performing online inference by evaluating...... and differentiating these circuits in time linear in their size. We report on experimental results showing the successful compilation, and efficient inference, on relational Bayesian networks whose {\\primula}--generated propositional instances have thousands of variables, and whose jointrees have clusters...

  13. Bayesian Posterior Distributions Without Markov Chains

    OpenAIRE

    Cole, Stephen R.; Chu, Haitao; Greenland, Sander; Hamra, Ghassan; Richardson, David B.

    2012-01-01

    Bayesian posterior parameter distributions are often simulated using Markov chain Monte Carlo (MCMC) methods. However, MCMC methods are not always necessary and do not help the uninitiated understand Bayesian inference. As a bridge to understanding Bayesian inference, the authors illustrate a transparent rejection sampling method. In example 1, they illustrate rejection sampling using 36 cases and 198 controls from a case-control study (1976–1983) assessing the relation between residential ex...
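
A rejection sampler of the kind the authors use as a teaching bridge can be sketched for a binomial proportion with a uniform prior; the 36/198 split below echoes the case-control counts in the abstract, but the single-proportion binomial model is a simplification for illustration:

```python
import math
import random

def rejection_sample_posterior(n_draws, successes, trials):
    """Posterior for a binomial proportion p under a uniform(0, 1) prior.
    Draw p from the prior; accept with probability L(p) / L_max, where
    L(p) is the binomial likelihood and L_max its value at the MLE."""
    mle = successes / trials
    log_lmax = successes * math.log(mle) + (trials - successes) * math.log(1 - mle)
    accepted = []
    while len(accepted) < n_draws:
        p = random.random()                      # candidate from the uniform prior
        if p == 0.0:
            continue                             # avoid log(0)
        log_l = successes * math.log(p) + (trials - successes) * math.log(1 - p)
        if random.random() < math.exp(log_l - log_lmax):
            accepted.append(p)                   # kept with probability L(p)/L_max
    return accepted

random.seed(1)
draws = rejection_sample_posterior(2000, successes=36, trials=234)
print(round(sum(draws) / len(draws), 2))         # posterior mean near 37/236, i.e. ~0.16
```

Because the accepted draws are exact samples from the posterior (here a Beta(37, 199) distribution), no Markov chain, burn-in, or convergence diagnostics are needed, which is the pedagogical point of the article.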

  14. Local Transfer Coefficient, Smooth Channel

    Directory of Open Access Journals (Sweden)

    R. T. Kukreja

    1998-01-01

    Full Text Available Naphthalene sublimation technique and the heat/mass transfer analogy are used to determine the detailed local heat/mass transfer distributions on the leading and trailing walls of a two-pass square channel with smooth walls that rotates about a perpendicular axis. Since the variation of density is small in the flow through the channel, the buoyancy effect is negligible. Results show that, in both the stationary and rotating channel cases, very large spanwise variations of the mass transfer exist in the turn and in the region immediately downstream of the turn in the second straight pass. In the first straight pass, the rotation-induced Coriolis forces reduce the mass transfer on the leading wall and increase the mass transfer on the trailing wall. In the turn, rotation significantly increases the mass transfer on the leading wall, especially in the upstream half of the turn. Rotation also increases the mass transfer on the trailing wall, more in the downstream half of the turn than in the upstream half of the turn. Immediately downstream of the turn, rotation causes the mass transfer to be much higher on the trailing wall near the downstream corner of the tip of the inner wall than on the opposite leading wall. The mass transfer in the second pass is higher on the leading wall than on the trailing wall. A slower flow causes higher mass transfer enhancement in the turn on both the leading and trailing walls.

  15. Excitation of Mytilus smooth muscle.

    Science.gov (United States)

    Twarog, B M

    1967-10-01

    1. Membrane potentials and tension were recorded during nerve stimulation and direct stimulation of smooth muscle cells of the anterior byssus retractor muscle of Mytilus edulis L.2. The resting potential averaged 65 mV (range 55-72 mV).3. Junction potentials reached 25 mV and decayed to one half maximum amplitude in 500 msec. Spatial summation and facilitation of junction potentials were observed.4. Action potentials, 50 msec in duration and up to 50 mV in amplitude were fired at a membrane potential of 35-40 mV. No overshoot was observed.5. Contraction in response to neural stimulation was associated with spike discharge. Measurement of tension and depolarization in muscle bundles at high K(+) indicated that tension is only produced at membrane potentials similar to those achieved by spike discharge.6. Blocking of junction potentials, spike discharge and contraction by methantheline, an acetylcholine antagonist, supports the hypothesis that the muscle is excited by cholinergic nerves. However, evidence of a presynaptic action of methantheline complicates this argument.

  16. Smooth horizons and quantum ripples

    International Nuclear Information System (INIS)

    Golovnev, Alexey

    2015-01-01

    Black holes are unique objects which allow for meaningful theoretical studies of strong gravity and even quantum gravity effects. An infalling and a distant observer would have very different views on the structure of the world. However, a careful analysis has shown that it entails no genuine contradictions for physics, and the paradigm of observer complementarity has been coined. Recently this picture was put into doubt. In particular, it was argued that in old black holes a firewall must form in order to protect the basic principles of quantum mechanics. This AMPS paradox has already been discussed in a vast number of papers with different attitudes and conclusions. Here we want to argue that a possible source of confusion is the neglect of quantum gravity effects. Contrary to widespread perception, it does not necessarily mean that effective field theory is inapplicable in rather smooth neighbourhoods of large black hole horizons. The real offender might be an attempt to consistently use it over the huge distances from the near-horizon zone of old black holes to the early radiation. We give simple estimates to support this viewpoint and show how the Page time and (somewhat more speculative) scrambling time do appear. (orig.)

  17. Modeling relationship between mean years of schooling and household expenditure at Central Sulawesi using constrained B-splines (COBS) in quantile regression

    Science.gov (United States)

    Hudoyo, Luhur Partomo; Andriyana, Yudhie; Handoko, Budhi

    2017-03-01

    Quantile regression describes the conditional distribution of a response variable at various desired quantile levels. Each quantile characterizes a certain point (center or tail) of a conditional distribution. This analysis is very useful for asymmetric conditional distributions, e.g., distributions that are heavy in the tail, truncated distributions, and data with outliers. One nonparametric approach to estimating the conditional quantile function is Constrained B-Splines (COBS). COBS is a smoothing technique that accommodates additional constraints such as monotonicity, convexity and periodicity. In this study, we reformulate the conditional quantile minimization objective in COBS as a linear programming problem. A linear programming problem is defined as the problem of minimizing or maximizing a linear function subject to linear constraints; the constraints may be equalities or inequalities. This research discusses the relationship between education (mean years of schooling) and economic (household expenditure) levels in Central Sulawesi Province in 2014, for which household-level data provide more systematic evidence of a positive relationship. Monotonicity (increasing) constraints are therefore used in the COBS quantile regression model.
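
The LP reformulation can be sketched as follows: writing each residual as a difference of nonnegative parts u - v, minimizing the check loss becomes a linear program. This sketch uses a plain intercept-and-slope design matrix rather than a B-spline basis, and omits the monotonicity constraints, which would enter as additional inequality rows:

```python
import numpy as np
from scipy.optimize import linprog

def quantile_regression(X, y, tau):
    """Solve min_b sum_i rho_tau(y_i - x_i b) as the linear program
         min  tau*1'u + (1-tau)*1'v   s.t.  X b + u - v = y,  u, v >= 0,
    where u and v are the positive and negative parts of the residuals."""
    n, p = X.shape
    c = np.concatenate([np.zeros(p), tau * np.ones(n), (1 - tau) * np.ones(n)])
    A_eq = np.hstack([X, np.eye(n), -np.eye(n)])
    bounds = [(None, None)] * p + [(0, None)] * (2 * n)
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=bounds, method="highs")
    return res.x[:p]

x = np.arange(10, dtype=float)
y = 2 * x
y[5] += 10.0                               # one gross outlier
X = np.column_stack([np.ones_like(x), x])  # intercept and slope columns
beta = quantile_regression(X, y, tau=0.5)  # tau = 0.5: least absolute deviations
print(np.round(beta, 2))                   # close to [0, 2]: the outlier barely moves the median fit
```

Replacing the columns of X with a B-spline basis and appending inequality rows that force successive spline coefficients to be nondecreasing would give a monotone COBS-style fit within the same LP framework.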

  18. 3rd Bayesian Young Statisticians Meeting

    CERN Document Server

    Lanzarone, Ettore; Villalobos, Isadora; Mattei, Alessandra

    2017-01-01

    This book is a selection of peer-reviewed contributions presented at the third Bayesian Young Statisticians Meeting, BAYSM 2016, Florence, Italy, June 19-21. The meeting provided a unique opportunity for young researchers, M.S. students, Ph.D. students, and postdocs dealing with Bayesian statistics to connect with the Bayesian community at large, to exchange ideas, and to network with others working in the same field. The contributions develop and apply Bayesian methods in a variety of fields, ranging from the traditional (e.g., biostatistics and reliability) to the most innovative ones (e.g., big data and networks).

  19. Learning dynamic Bayesian networks with mixed variables

    DEFF Research Database (Denmark)

    Bøttcher, Susanne Gammelgaard

    This paper considers dynamic Bayesian networks for discrete and continuous variables. We only treat the case, where the distribution of the variables is conditional Gaussian. We show how to learn the parameters and structure of a dynamic Bayesian network and also how the Markov order can be learn....... An automated procedure for specifying prior distributions for the parameters in a dynamic Bayesian network is presented. It is a simple extension of the procedure for the ordinary Bayesian networks. Finally the Wölfer sunspot numbers are analyzed....

  20. Very smooth points of spaces of operators

    Indian Academy of Sciences (India)

    Home; Journals; Proceedings – Mathematical Sciences; Volume 113; Issue 1 ... We show that when the space of compact operators is an M-ideal in the space of bounded operators, a very smooth operator attains its norm at a unique vector (up to a constant multiple) and ( ) is a very smooth point of the range space.

  1. Adams operations in smooth K-theory

    OpenAIRE

    Bunke, Ulrich

    2009-01-01

    We show that the Adams operations in complex K-theory lift to operations in smooth K-theory. The main result is a Riemann-Roch type theorem about the compatibility of the Adams operations and the integration in smooth K-theory.

  2. Smoothed Analysis of Local Search Algorithms

    NARCIS (Netherlands)

    Manthey, Bodo; Dehne, Frank; Sack, Jörg-Rüdiger; Stege, Ulrike

    2015-01-01

    Smoothed analysis is a method for analyzing the performance of algorithms for which classical worst-case analysis fails to explain the performance observed in practice. Smoothed analysis has been applied to explain the performance of a variety of algorithms in recent years. One particular class of

  3. Smoothing a Piecewise-Smooth: An Example from Plankton Population Dynamics

    DEFF Research Database (Denmark)

    Piltz, Sofia Helena

    2016-01-01

    In this work we discuss a piecewise-smooth dynamical system inspired by plankton observations and constructed for one predator switching its diet between two different types of prey. We then discuss two smooth formulations of the piecewise-smooth model obtained by using a hyperbolic tangent funct...

  4. Mediators on human airway smooth muscle.

    Science.gov (United States)

    Armour, C; Johnson, P; Anticevich, S; Ammit, A; McKay, K; Hughes, M; Black, J

    1997-01-01

    1. Bronchial hyperresponsiveness in asthma may be due to several abnormalities, but must include alterations in the airway smooth muscle responsiveness and/or volume. 2. Increased responsiveness of airway smooth muscle in vitro can be induced by certain inflammatory cell products and by induction of sensitization (atopy). 3. Increased airway smooth muscle growth can also be induced by inflammatory cell products and atopic serum. 4. Mast cell numbers are increased in the airways of asthmatics and, in our studies, in airway smooth muscle that is sensitized and hyperresponsive. 5. We propose that there is a relationship between mast cells and airway smooth muscle cells which, once an allergic process has been initiated, results in the development of critical features in the lungs in asthma.

  5. Bayesian phylogeography finds its roots.

    Directory of Open Access Journals (Sweden)

    Philippe Lemey

    2009-09-01

    Full Text Available As a key factor in endemic and epidemic dynamics, the geographical distribution of viruses has been frequently interpreted in the light of their genetic histories. Unfortunately, inference of historical dispersal or migration patterns of viruses has mainly been restricted to model-free heuristic approaches that provide little insight into the temporal setting of the spatial dynamics. The introduction of probabilistic models of evolution, however, offers unique opportunities to engage in this statistical endeavor. Here we introduce a Bayesian framework for inference, visualization and hypothesis testing of phylogeographic history. By implementing character mapping in a Bayesian software that samples time-scaled phylogenies, we enable the reconstruction of timed viral dispersal patterns while accommodating phylogenetic uncertainty. Standard Markov model inference is extended with a stochastic search variable selection procedure that identifies the parsimonious descriptions of the diffusion process. In addition, we propose priors that can incorporate geographical sampling distributions or characterize alternative hypotheses about the spatial dynamics. To visualize the spatial and temporal information, we summarize inferences using virtual globe software. We describe how Bayesian phylogeography compares with previous parsimony analysis in the investigation of the influenza A H5N1 origin and H5N1 epidemiological linkage among sampling localities. Analysis of rabies in West African dog populations reveals how virus diffusion may enable endemic maintenance through continuous epidemic cycles. From these analyses, we conclude that our phylogeographic framework will be an important asset in molecular epidemiology that can be easily generalized to infer biogeography from genetic data for many organisms.

  6. Bayesian flood forecasting methods: A review

    Science.gov (United States)

    Han, Shasha; Coulibaly, Paulin

    2017-08-01

    Over the past few decades, floods have been seen as one of the most common and largely distributed natural disasters in the world. If floods could be accurately forecasted in advance, then their negative impacts could be greatly minimized. It is widely recognized that quantification and reduction of uncertainty associated with the hydrologic forecast is of great importance for flood estimation and rational decision making. Bayesian forecasting system (BFS) offers an ideal theoretic framework for uncertainty quantification that can be developed for probabilistic flood forecasting via any deterministic hydrologic model. It provides suitable theoretical structure, empirically validated models and reasonable analytic-numerical computation method, and can be developed into various Bayesian forecasting approaches. This paper presents a comprehensive review of Bayesian forecasting approaches applied in flood forecasting from 1999 till now. The review starts with an overview of the fundamentals of BFS and recent advances in BFS, followed by BFS applications in river stage forecasting and real-time flood forecasting, then moves to a critical analysis evaluating the advantages and limitations of Bayesian forecasting methods and other predictive uncertainty assessment approaches in flood forecasting, and finally discusses future research directions in Bayesian flood forecasting. Results show that the Bayesian flood forecasting approach is an effective and advanced way for flood estimation; it considers all sources of uncertainties and produces a predictive distribution of the river stage, river discharge or runoff, and thus gives more accurate and reliable flood forecasts. Some emerging Bayesian forecasting methods (e.g. ensemble Bayesian forecasting system, Bayesian multi-model combination) were shown to overcome limitations of a single model or fixed model weights and effectively reduce predictive uncertainty. In recent years, various Bayesian flood forecasting approaches have been ...

  7. Bayesian inference for Hawkes processes

    DEFF Research Database (Denmark)

    Rasmussen, Jakob Gulddahl

    2013-01-01

    The Hawkes process is a practically and theoretically important class of point processes, but parameter-estimation for such a process can pose various problems. In this paper we explore and compare two approaches to Bayesian inference. The first approach is based on the so-called conditional...... intensity function, while the second approach is based on an underlying clustering and branching structure in the Hawkes process. For practical use, MCMC (Markov chain Monte Carlo) methods are employed. The two approaches are compared numerically using three examples of the Hawkes process....
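
As background for either inference approach, a Hawkes process with an exponentially decaying conditional intensity can be simulated by Ogata's thinning algorithm; this generic sketch (with invented parameters) is not taken from the paper:

```python
import math
import random

def simulate_hawkes(mu, alpha, beta, T, seed=0):
    """Ogata thinning for a Hawkes process with conditional intensity
       lambda(t) = mu + alpha * sum_{t_i < t} exp(-beta * (t - t_i)).
    Between events the intensity only decays, so its current value is a
    valid upper bound for thinning.  Requires alpha/beta < 1 (subcritical)."""
    rng = random.Random(seed)
    events, t = [], 0.0
    while True:
        lam_bar = mu + alpha * sum(math.exp(-beta * (t - ti)) for ti in events)
        t += rng.expovariate(lam_bar)            # candidate waiting time
        if t >= T:
            break
        lam_t = mu + alpha * sum(math.exp(-beta * (t - ti)) for ti in events)
        if rng.random() * lam_bar <= lam_t:      # accept with prob lam_t / lam_bar
            events.append(t)
    return events

ev = simulate_hawkes(mu=0.5, alpha=0.8, beta=1.2, T=100.0)
print(len(ev))   # expected count is roughly T * mu / (1 - alpha/beta)
```

Each accepted event raises the intensity, producing the clustering that the branching-structure formulation of Bayesian inference exploits directly.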

  9. Attention in a bayesian framework

    DEFF Research Database (Denmark)

    Whiteley, Louise Emma; Sahani, Maneesh

    2012-01-01

    , and include both selective phenomena, where attention is invoked by cues that point to particular stimuli, and integrative phenomena, where attention is invoked dynamically by endogenous processing. However, most previous Bayesian accounts of attention have focused on describing relatively simple experimental...... settings, where cues shape expectations about a small number of upcoming stimuli and thus convey "prior" information about clearly defined objects. While operationally consistent with the experiments it seeks to describe, this view of attention as prior seems to miss many essential elements of both its...

  10. Revision Total Hip Arthroplasty With a Monoblock Splined Tapered Grit-Blasted Titanium Stem.

    Science.gov (United States)

    Hellman, Michael D; Kearns, Sean M; Bohl, Daniel D; Haughom, Bryan D; Levine, Brett R

    2017-12-01

    In revision total hip arthroplasty (THA), proximal femoral bone loss creates a challenge of achieving adequate stem fixation. The purpose of this study was to examine the outcomes of a monoblock, splined, tapered femoral stem in revision THA. Outcomes of revision THA using a nonmodular, splined, tapered femoral stem from a single surgeon were reviewed. With a minimum of 2-year follow-up, there were 68 cases (67 patients). Paprosky classification was 3A or greater in 85% of the cases. Preoperative and postoperative Harris Hip Scores (HHS), radiographic subsidence and osseointegration, limb length discrepancy, complications, and reoperations were analyzed. The Harris Hip Score improved from 37.4 ± SD 19.4 preoperatively to 64.6 ± SD 21.8 at final follow-up (P < .001). There were 16 revision procedures: 8 for septic indications and 8 for aseptic indications. Subsidence occurred at a rate of 3.0% and dislocation at 7.4%. Limb length discrepancy of more than 1 cm after revision was noted in 13.6% of patients. Bone ingrowth was observed in all but 4 patients (94.1%). At 4-year follow-up, Kaplan-Meier estimated survival was 72.9% (95% confidence interval [CI] 57.0-83.8) for all causes of revision, 86.6% (95% CI 72.0-93.9) for all aseptic revision, and 95.5% (95% CI 86.8-98.5) for aseptic femoral revision. Although complications were significant, revision for femoral aseptic loosening occurred in only 3 patients. Given the ability of this monoblock splined tapered stem to adequately provide fixation during complex revision THA, it remains a viable option in the setting of substantial femoral bone defects. Copyright © 2017 Elsevier Inc. All rights reserved.

  11. Interpolating Spline Curve-Based Perceptual Encryption for 3D Printing Models

    Directory of Open Access Journals (Sweden)

    Giao N. Pham

    2018-02-01

    Full Text Available With the development of 3D printing technology, 3D printing has recently been applied to many areas of life including healthcare and the automotive industry. Due to the benefit of 3D printing, 3D printing models are often attacked by hackers and distributed without agreement from the original providers. Furthermore, certain special models and anti-weapon models in 3D printing must be protected against unauthorized users. Therefore, in order to prevent attacks and illegal copying and to ensure that all access is authorized, 3D printing models should be encrypted before being transmitted and stored. A novel perceptual encryption algorithm for 3D printing models for secure storage and transmission is presented in this paper. A facet of the 3D printing model is extracted to interpolate a spline curve of degree 2 in three-dimensional space that is determined by three control points, the curvature coefficients of degree 2, and an interpolating vector. The three control points, the curvature coefficients, and the interpolating vector of the degree-2 spline curve are encrypted by a secret key. The encrypted features of the spline curve are then used to obtain the encrypted 3D printing model by inverse interpolation and geometric distortion. The results of experiments and evaluations prove that the entire 3D triangle model is altered and deformed after the perceptual encryption process. The proposed algorithm is responsive to the various formats of 3D printing models. The results of the perceptual encryption process are superior to those of previous methods. The proposed algorithm also provides a better method and more security than previous methods.
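
As a hedged sketch of the general idea (not the paper's actual algorithm), the following evaluates a degree-2 curve from three control points and masks those control points with a key-seeded pseudorandom stream; applying the same stream with the same key decrypts. All function names here are illustrative.

```python
import numpy as np

def eval_degree2(p0, p1, p2, t):
    """Evaluate a degree-2 (quadratic Bezier-like) curve at parameter t in [0, 1]."""
    return (1 - t) ** 2 * p0 + 2 * (1 - t) * t * p1 + t ** 2 * p2

def mask_points(points, key):
    """Additively mask control points with a stream seeded by the secret key."""
    stream = np.random.default_rng(key).normal(size=points.shape)
    return points + stream

def unmask_points(masked, key):
    """Invert mask_points by regenerating the same key-seeded stream."""
    stream = np.random.default_rng(key).normal(size=masked.shape)
    return masked - stream
```

Encrypting the curve features rather than the raw vertices is what makes the scheme "perceptual": the geometry is deformed but the file structure stays valid.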

  12. Smooth halos in the cosmic web

    International Nuclear Information System (INIS)

    Gaite, José

    2015-01-01

    Dark matter halos can be defined as smooth distributions of dark matter placed in a non-smooth cosmic web structure. This definition of halos demands a precise definition of smoothness and a characterization of the manner in which the transition from smooth halos to the cosmic web takes place. We introduce entropic measures of smoothness, related to measures of inequality previously used in economics and with the advantage of being connected with standard methods of multifractal analysis already used for characterizing the cosmic web structure in cold dark matter N-body simulations. These entropic measures provide us with a quantitative description of the transition from the small scales portrayed as a distribution of halos to the larger scales portrayed as a cosmic web and, therefore, allow us to assign definite sizes to halos. However, these "smoothness sizes" have no direct relation to the virial radii. Finally, we discuss the influence of N-body discreteness parameters on smoothness.
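
The entropic measures are only described abstractly here. One assumed, minimal example of the kind of inequality measure borrowed from economics is the Theil index, which is zero for a perfectly smooth (uniform) mass distribution and grows as mass concentrates into fewer cells:

```python
import numpy as np

def theil_index(masses):
    """Theil inequality index of a nonnegative mass distribution.

    Returns 0 for a perfectly uniform distribution and log(n) when all
    mass sits in a single one of the n cells.
    """
    p = np.asarray(masses, dtype=float)
    p = p / p.sum()                  # normalize to a probability distribution
    n = len(p)
    nonzero = p > 0                  # 0 * log(0) is taken as 0
    return float(np.sum(p[nonzero] * np.log(n * p[nonzero])))
```

This is a generic illustration; the paper's own entropic measures and their multifractal connection are defined in the full text.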

  13. Radiation dose reduction in computed tomography perfusion using spatial-temporal Bayesian methods

    Science.gov (United States)

    Fang, Ruogu; Raj, Ashish; Chen, Tsuhan; Sanelli, Pina C.

    2012-03-01

    In current computed tomography (CT) examinations, the associated X-ray radiation dose is of significant concern to patients and operators, especially in CT perfusion (CTP) imaging, which has a higher radiation dose due to its cine scanning technique. A simple and cost-effective means to perform the examinations is to lower the milliampere-seconds (mAs) parameter to as low as reasonably achievable in data acquisition. However, lowering the mAs parameter will unavoidably increase data noise and degrade CT perfusion maps greatly if no adequate noise control is applied during image reconstruction. To capture the essential dynamics of CT perfusion, a simple spatial-temporal Bayesian method that uses a piecewise parametric model of the residual function is used, and then the model parameters are estimated from a Bayesian formulation of prior smoothness constraints on perfusion parameters. From the fitted residual function, reliable CTP parameter maps are obtained from low-dose CT data. The merit of this scheme lies in the combination of an analytical piecewise residual function with a Bayesian framework using a simpler spatial prior constraint for the CT perfusion application. On a dataset of 22 patients, this dynamic spatial-temporal Bayesian model yielded an increase in signal-to-noise ratio (SNR) of 78% and a decrease in mean-square error (MSE) of 40% at a low radiation dose of 43 mA.
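
A minimal sketch of the kind of Bayesian smoothing involved (a generic Gaussian first-difference prior, not the paper's piecewise residual-function model): under Gaussian noise, the MAP estimate reduces to a Tikhonov-regularized least-squares solve.

```python
import numpy as np

def map_smooth(A, b, lam):
    """MAP estimate of x under b = Ax + Gaussian noise with a Gaussian
    smoothness prior on first differences:
        argmin_x ||Ax - b||^2 + lam * ||Dx||^2
    """
    n = A.shape[1]
    D = np.diff(np.eye(n), axis=0)        # first-difference operator
    lhs = A.T @ A + lam * (D.T @ D)       # normal equations of the penalized fit
    return np.linalg.solve(lhs, A.T @ b)
```

With lam = 0 this recovers ordinary least squares; larger lam trades data fidelity for smoothness, which is what suppresses low-mAs noise in the perfusion maps.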

  14. Robust bayesian inference of generalized Pareto distribution ...

    African Journals Online (AJOL)

    Abstract. In this work, robust Bayesian estimation of the generalized Pareto distribution is proposed. The methodology is presented in terms of oscillation of posterior risks of the Bayesian estimators. By using a Monte Carlo simulation study, we show that, under a suitable generalized loss function, we can obtain a robust ...

  15. Bayesian Decision Theoretical Framework for Clustering

    Science.gov (United States)

    Chen, Mo

    2011-01-01

    In this thesis, we establish a novel probabilistic framework for the data clustering problem from the perspective of Bayesian decision theory. The Bayesian decision theory view justifies the important questions: what is a cluster and what a clustering algorithm should optimize. We prove that the spectral clustering (to be specific, the…

  16. Using Bayesian belief networks in adaptive management.

    Science.gov (United States)

    J.B. Nyberg; B.G. Marcot; R. Sulyma

    2006-01-01

    Bayesian belief and decision networks are relatively new modeling methods that are especially well suited to adaptive-management applications, but they appear not to have been widely used in adaptive management to date. Bayesian belief networks (BBNs) can serve many purposes for practitioners of adaptive management, from illustrating system relations conceptually to...

  17. Calibration in a Bayesian modelling framework

    NARCIS (Netherlands)

    Jansen, M.J.W.; Hagenaars, T.H.J.

    2004-01-01

    Bayesian statistics may constitute the core of a consistent and comprehensive framework for the statistical aspects of modelling complex processes that involve many parameters whose values are derived from many sources. Bayesian statistics holds great promise for model calibration, provides the

  18. Particle identification in ALICE: a Bayesian approach

    NARCIS (Netherlands)

    Adam, J.; Adamova, D.; Aggarwal, M. M.; Rinella, G. Aglieri; Agnello, M.; Agrawal, N.; Ahammed, Z.; Ahn, S. U.; Aiola, S.; Akindinov, A.; Alam, S. N.; Albuquerque, D. S. D.; Aleksandrov, D.; Alessandro, B.; Alexandre, D.; Alfaro Molina, R.; Alici, A.; Alkin, A.; Almaraz, J. R. M.; Alme, J.; Alt, T.; Altinpinar, S.; Altsybeev, I.; Alves Garcia Prado, C.; Andrei, C.; Andronic, A.; Anguelov, V.; Anticic, T.; Antinori, F.; Antonioli, P.; Aphecetche, L.; Appelshaeuser, H.; Arcelli, S.; Arnaldi, R.; Arnold, O. W.; Arsene, I. C.; Arslandok, M.; Audurier, B.; Augustinus, A.; Averbeck, R.; Azmi, M. D.; Badala, A.; Baek, Y. W.; Bagnasco, S.; Bailhache, R.; Bala, R.; Balasubramanian, S.; Baldisseri, A.; Baral, R. C.; Barbano, A. M.; Barbera, R.; Barile, F.; Barnafoeldi, G. G.; Barnby, L. S.; Barret, V.; Bartalini, P.; Barth, K.; Bartke, J.; Bartsch, E.; Basile, M.; Bastid, N.; Bathen, B.; Batigne, G.; Camejo, A. Batista; Batyunya, B.; Batzing, P. C.; Bearden, I. G.; Beck, H.; Bedda, C.; Behera, N. K.; Belikov, I.; Bellini, F.; Bello Martinez, H.; Bellwied, R.; Belmont, R.; Belmont-Moreno, E.; Belyaev, V.; Benacek, P.; Bencedi, G.; Beole, S.; Berceanu, I.; Bercuci, A.; Berdnikov, Y.; Berenyi, D.; Bertens, R. A.; Berzano, D.; Betev, L.; Bhasin, A.; Bhat, I. R.; Bhati, A. K.; Bhattacharjee, B.; Bhom, J.; Bianchi, L.; Bianchi, N.; Bianchin, C.; Bielcik, J.; Bielcikova, J.; Bilandzic, A.; Biro, G.; Biswas, R.; Biswas, S.; Bjelogrlic, S.; Blair, J. T.; Blau, D.; Blume, C.; Bock, F.; Bogdanov, A.; Boggild, H.; Boldizsar, L.; Bombara, M.; Book, J.; Borel, H.; Borissov, A.; Borri, M.; Bossu, F.; Botta, E.; Bourjau, C.; Braun-Munzinger, P.; Bregant, M.; Breitner, T.; Broker, T. A.; Browning, T. A.; Broz, M.; Brucken, E. J.; Bruna, E.; Bruno, G. E.; Budnikov, D.; Buesching, H.; Bufalino, S.; Buncic, P.; Busch, O.; Buthelezi, Z.; Butt, J. B.; Buxton, J. T.; Cabala, J.; Caffarri, D.; Cai, X.; Caines, H.; Diaz, L. 
Calero; Caliva, A.; Calvo Villar, E.; Camerini, P.; Carena, F.; Carena, W.; Carnesecchi, F.; Castellanos, J. Castillo; Castro, A. J.; Casula, E. A. R.; Sanchez, C. Ceballos; Cepila, J.; Cerello, P.; Cerkala, J.; Chang, B.; Chapeland, S.; Chartier, M.; Charvet, J. L.; Chattopadhyay, S.; Chattopadhyay, S.; Chauvin, A.; Chelnokov, V.; Cherney, M.; Cheshkov, C.; Cheynis, B.; Barroso, V. Chibante; Chinellato, D. D.; Cho, S.; Chochula, P.; Choi, K.; Chojnacki, M.; Choudhury, S.; Christakoglou, P.; Christensen, C. H.; Christiansen, P.; Chujo, T.; Cicalo, C.; Cifarelli, L.; Cindolo, F.; Cleymans, J.; Colamaria, F.; Colella, D.; Collu, A.; Colocci, M.; Balbastre, G. Conesa; del Valle, Z. Conesa; Connors, M. E.; Contreras, J. G.; Cormier, T. M.; Morales, Y. Corrales; Cortes Maldonado, I.; Cortese, P.; Cosentino, M. R.; Costa, F.; Crochet, P.; Cruz Albino, R.; Cuautle, E.; Cunqueiro, L.; Dahms, T.; Dainese, A.; Danisch, M. C.; Danu, A.; Das, I.; Das, S.; Dash, A.; Dash, S.; De, S.; De Caro, A.; de Cataldo, G.; de Conti, C.; de Cuveland, J.; De Falco, A.; De Gruttola, D.; De Marco, N.; De Pasquale, S.; Deisting, A.; Deloff, A.; Denes, E.; Deplano, C.; Dhankher, P.; Di Bari, D.; Di Mauro, A.; Di Nezza, P.; Corchero, M. A. Diaz; Dietel, T.; Dillenseger, P.; Divia, R.; Djuvsland, O.; Dobrin, A.; Gimenez, D. Domenicis; Doenigus, B.; Dordic, O.; Drozhzhova, T.; Dubey, A. K.; Dubla, A.; Ducroux, L.; Dupieux, P.; Ehlers, R. J.; Elia, D.; Endress, E.; Engel, H.; Epple, E.; Erazmus, B.; Erdemir, I.; Erhardt, F.; Espagnon, B.; Estienne, M.; Esumi, S.; Eum, J.; Evans, D.; Evdokimov, S.; Eyyubova, G.; Fabbietti, L.; Fabris, D.; Faivre, J.; Fantoni, A.; Fasel, M.; Feldkamp, L.; Feliciello, A.; Feofilov, G.; Ferencei, J.; Fernandez Tellez, A.; Ferreiro, E. G.; Ferretti, A.; Festanti, A.; Feuillard, V. J. G.; Figiel, J.; Figueredo, M. A. S.; Filchagin, S.; Finogeev, D.; Fionda, F. M.; Fiore, E. M.; Fleck, M. 
G.; Floris, M.; Foertsch, S.; Foka, P.; Fokin, S.; Fragiacomo, E.; Francescon, A.; Frankenfeld, U.; Fronze, G. G.; Fuchs, U.; Furget, C.; Furs, A.; Girard, M. Fusco; Gaardhoje, J. J.; Gagliardi, M.; Gago, A. M.; Gallio, M.; Gangadharan, D. R.; Ganoti, P.; Gao, C.; Garabatos, C.; Garcia-Solis, E.; Gargiulo, C.; Gasik, P.; Gauger, E. F.; Germain, M.; Gheata, A.; Gheata, M.; Gianotti, P.; Giubellino, P.; Giubilato, P.; Gladysz-Dziadus, E.; Glaessel, P.; Gomez Coral, D. M.; Ramirez, A. Gomez; Gonzalez, A. S.; Gonzalez, V.; Gonzalez-Zamora, P.; Gorbunov, S.; Goerlich, L.; Gotovac, S.; Grabski, V.; Grachov, O. A.; Graczykowski, L. K.; Graham, K. L.; Grelli, A.; Grigoras, A.; Grigoras, C.; Grigoriev, V.; Grigoryan, A.; Grigoryan, S.; Grinyov, B.; Grion, N.; Gronefeld, J. M.; Grosse-Oetringhaus, J. F.; Grosso, R.; Guber, F.; Guernane, R.; Guerzoni, B.; Gulbrandsen, K.; Gunji, T.; Gupta, A.; Haake, R.; Haaland, O.; Hadjidakis, C.; Haiduc, M.; Hamagaki, H.; Hamar, G.; Hamon, J. C.; Harris, J. W.; Harton, A.; Hatzifotiadou, D.; Hayashi, S.; Heckel, S. T.; Hellbaer, E.; Helstrup, H.; Herghelegiu, A.; Herrera Corral, G.; Hess, B. A.; Hetland, K. F.; Hillemanns, H.; Hippolyte, B.; Horak, D.; Hosokawa, R.; Hristov, P.; Humanic, T. J.; Hussain, N.; Hussain, T.; Hutter, D.; Hwang, D. S.; Ilkaev, R.; Inaba, M.; Incani, E.; Ippolitov, M.; Irfan, M.; Ivanov, M.; Ivanov, V.; Izucheev, V.; Jacazio, N.; Jadhav, M. B.; Jadlovska, S.; Jadlovsky, J.; Jahnke, C.; Jakubowska, M. J.; Jang, H. J.; Janik, M. A.; Jayarathna, P. H. S. Y.; Jena, C.; Jena, S.; Bustamante, R. T. Jimenez; Jones, P. G.; Jusko, A.; Kalinak, P.; Kalweit, A.; Kamin, J.; Kaplin, V.; Kar, S.; Uysal, A. Karasu; Karavichev, O.; Karavicheva, T.; Karayan, L.; Karpechev, E.; Kebschull, U.; Keidel, R.; Keijdener, D. L. D.; Keil, M.; Khan, M. Mohisin; Khan, P.; Khan, S. A.; Khanzadeev, A.; Kharlov, Y.; Kileng, B.; Kim, D. W.; Kim, D. J.; Kim, D.; Kim, J. 
S.; Kim, M.; Kim, T.; Kirsch, S.; Kisel, I.; Kiselev, S.; Kisiel, A.; Kiss, G.; Klay, J. L.; Klein, C.; Klein-Boesing, C.; Klewin, S.; Kluge, A.; Knichel, M. L.; Knospe, A. G.; Kobdaj, C.; Kofarago, M.; Kollegger, T.; Kolojvari, A.; Kondratiev, V.; Kondratyeva, N.; Kondratyuk, E.; Konevskikh, A.; Kopcik, M.; Kostarakis, P.; Kour, M.; Kouzinopoulos, C.; Kovalenko, O.; Kovalenko, V.; Kowalski, M.; Meethaleveedu, G. Koyithatta; Kralik, I.; Kravcakova, A.; Krivda, M.; Krizek, F.; Kryshen, E.; Krzewicki, M.; Kubera, A. M.; Kucera, V.; Kuijer, P. G.; Kumar, J.; Kumar, L.; Kumar, S.; Kurashvili, P.; Kurepin, A.; Kurepin, A. B.; Kuryakin, A.; Kweon, M. J.; Kwon, Y.; La Pointe, S. L.; La Rocca, P.; Ladron de Guevara, P.; Lagana Fernandes, C.; Lakomov, I.; Langoy, R.; Lara, C.; Lardeux, A.; Lattuca, A.; Laudi, E.; Lea, R.; Leardini, L.; Lee, G. R.; Lee, S.; Lehas, F.; Lemmon, R. C.; Lenti, V.; Leogrande, E.; Monzon, I. Leon; Leon Vargas, H.; Leoncino, M.; Levai, P.; Lien, J.; Lietava, R.; Lindal, S.; Lindenstruth, V.; Lippmann, C.; Lisa, M. A.; Ljunggren, H. M.; Lodato, D. F.; Loenne, P. I.; Loginov, V.; Loizides, C.; Lopez, X.; Torres, E. Lopez; Lowe, A.; Luettig, P.; Lunardon, M.; Luparello, G.; Lutz, T. H.; Maevskaya, A.; Mager, M.; Mahajan, S.; Mahmood, S. M.; Maire, A.; Majka, R. D.; Malaev, M.; Maldonado Cervantes, I.; Malinina, L.; Mal'Kevich, D.; Malzacher, P.; Mamonov, A.; Manko, V.; Manso, F.; Manzari, V.; Marchisone, M.; Mares, J.; Margagliotti, G. V.; Margotti, A.; Margutti, J.; Marin, A.; Markert, C.; Marquard, M.; Martin, N. A.; Blanco, J. Martin; Martinengo, P.; Martinez, M. I.; Garcia, G. Martinez; Pedreira, M. Martinez; Mas, A.; Masciocchi, S.; Masera, M.; Masoni, A.; Mastroserio, A.; Matyja, A.; Mayer, C.; Mazer, J.; Mazzoni, M. A.; Mcdonald, D.; Meddi, F.; Melikyan, Y.; Menchaca-Rocha, A.; Meninno, E.; Perez, J. Mercado; Meres, M.; Miake, Y.; Mieskolainen, M. M.; Mikhaylov, K.; Milano, L.; Milosevic, J.; Mischke, A.; Mishra, A. 
N.; Miskowiec, D.; Mitra, J.; Mitu, C. M.; Mohammadi, N.; Mohanty, B.; Molnar, L.; Montano Zetina, L.; Montes, E.; De Godoy, D. A. Moreira; Moreno, L. A. P.; Moretto, S.; Morreale, A.; Morsch, A.; Muccifora, V.; Mudnic, E.; Muehlheim, D.; Muhuri, S.; Mukherjee, M.; Mulligan, J. D.; Munhoz, M. G.; Munzer, R. H.; Murakami, H.; Murray, S.; Musa, L.; Musinsky, J.; Naik, B.; Nair, R.; Nandi, B. K.; Nania, R.; Nappi, E.; Naru, M. U.; Natal da Luz, H.; Nattrass, C.; Navarro, S. R.; Nayak, K.; Nayak, R.; Nayak, T. K.; Nazarenko, S.; Nedosekin, A.; Nellen, L.; Ng, F.; Nicassio, M.; Niculescu, M.; Niedziela, J.; Nielsen, B. S.; Nikolaev, S.; Nikulin, S.; Nikulin, V.; Noferini, F.; Nomokonov, P.; Nooren, G.; Noris, J. C. C.; Norman, J.; Nyanin, A.; Nystrand, J.; Oeschler, H.; Oh, S.; Oh, S. K.; Ohlson, A.; Okatan, A.; Okubo, T.; Olah, L.; Oleniacz, J.; Oliveira Da Silva, A. C.; Oliver, M. H.; Onderwaater, J.; Oppedisano, C.; Orava, R.; Oravec, M.; Ortiz Velasquez, A.; Oskarsson, A.; Otwinowski, J.; Oyama, K.; Ozdemir, M.; Pachmayer, Y.; Pagano, D.; Pagano, P.; Paic, G.; Pal, S. K.; Pan, J.; Papikyan, V.; Pappalardo, G. S.; Pareek, P.; Park, W. J.; Parmar, S.; Passfeld, A.; Paticchio, V.; Patra, R. N.; Paul, B.; Pei, H.; Peitzmann, T.; Da Costa, H. Pereira; Peresunko, D.; Lara, C. E. Perez; Lezama, E. Perez; Peskov, V.; Pestov, Y.; Petracek, V.; Petrov, V.; Petrovici, M.; Petta, C.; Piano, S.; Pikna, M.; Pillot, P.; Pimentel, L. O. D. L.; Pinazza, O.; Pinsky, L.; Piyarathna, D. B.; Ploskon, M.; Planinic, M.; Pluta, J.; Pochybova, S.; Podesta-Lerma, P. L. M.; Poghosyan, M. G.; Polichtchouk, B.; Poljak, N.; Poonsawat, W.; Pop, A.; Porteboeuf-Houssais, S.; Porter, J.; Pospisil, J.; Prasad, S. K.; Preghenella, R.; Prino, F.; Pruneau, C. A.; Pshenichnov, I.; Puccio, M.; Puddu, G.; Pujahari, P.; Punin, V.; Putschke, J.; Qvigstad, H.; Rachevski, A.; Raha, S.; Rajput, S.; Rak, J.; Rakotozafindrabe, A.; Ramello, L.; Rami, F.; Raniwala, R.; Raniwala, S.; Raesaenen, S. S.; Rascanu, B. 
T.; Rathee, D.; Read, K. F.; Redlich, K.; Reed, R. J.; Reichelt, P.; Reidt, F.; Ren, X.; Renfordt, R.; Reolon, A. R.; Reshetin, A.; Reygers, K.; Riabov, V.; Ricci, R. A.; Richert, T.; Richter, M.; Riedler, P.; Riegler, W.; Riggi, F.; Ristea, C.; Rocco, E.; Rodriguez Cahuantzi, M.; Manso, A. Rodriguez; Roed, K.; Rogochaya, E.; Rohr, D.; Roehrich, D.; Ronchetti, F.; Ronflette, L.; Rosnet, P.; Rossi, A.; Roukoutakis, F.; Roy, A.; Roy, C.; Roy, P.; Montero, A. J. Rubio; Rui, R.; Russo, R.; Ryabinkin, E.; Ryabov, Y.; Rybicki, A.; Saarinen, S.; Sadhu, S.; Sadovsky, S.; Safarik, K.; Sahlmuller, B.; Sahoo, P.; Sahoo, R.; Sahoo, S.; Sahu, P. K.; Saini, J.; Sakai, S.; Saleh, M. A.; Salzwedel, J.; Sambyal, S.; Samsonov, V.; Sandor, L.; Sandoval, A.; Sano, M.; Sarkar, D.; Sarkar, N.; Sarma, P.; Scapparone, E.; Scarlassara, F.; Schiaua, C.; Schicker, R.; Schmidt, C.; Schmidt, H. R.; Schuchmann, S.; Schukraft, J.; Schulc, M.; Schutz, Y.; Schwarz, K.; Schweda, K.; Scioli, G.; Scomparin, E.; Scott, R.; Sefcik, M.; Seger, J. E.; Sekiguchi, Y.; Sekihata, D.; Selyuzhenkov, I.; Senosi, K.; Senyukov, S.; Serradilla, E.; Sevcenco, A.; Shabanov, A.; Shabetai, A.; Shadura, O.; Shahoyan, R.; Shahzad, M. I.; Shangaraev, A.; Sharma, M.; Sharma, M.; Sharma, N.; Sheikh, A. I.; Shigaki, K.; Shou, Q.; Shtejer, K.; Sibiriak, Y.; Siddhanta, S.; Sielewicz, K. M.; Siemiarczuk, T.; Silvermyr, D.; Silvestre, C.; Simatovic, G.; Simonetti, G.; Singaraju, R.; Singh, R.; Singha, S.; Singhal, V.; Sinha, B. C.; Sinha, T.; Sitar, B.; Sitta, M.; Skaali, T. B.; Slupecki, M.; Smirnov, N.; Snellings, R. J. M.; Snellman, T. W.; Song, J.; Song, M.; Song, Z.; Soramel, F.; Sorensen, S.; de Souza, R. D.; Sozzi, F.; Spacek, M.; Spiriti, E.; Sputowska, I.; Spyropoulou-Stassinaki, M.; Stachel, J.; Stan, I.; Stankus, P.; Stenlund, E.; Steyn, G.; Stiller, J. H.; Stocco, D.; Strmen, P.; Suaide, A. A. 
P.; Sugitate, T.; Suire, C.; Suleymanov, M.; Suljic, M.; Sultanov, R.; Sumbera, M.; Sumowidagdo, S.; Szabo, A.; Szanto de Toledo, A.; Szarka, I.; Szczepankiewicz, A.; Szymanski, M.; Tabassam, U.; Takahashi, J.; Tambave, G. J.; Tanaka, N.; Tarhini, M.; Tariq, M.; Tarzila, M. G.; Tauro, A.; Tejeda Munoz, G.; Telesca, A.; Terasaki, K.; Terrevoli, C.; Teyssier, B.; Thaeder, J.; Thakur, D.; Thomas, D.; Tieulent, R.; Timmins, A. R.; Toia, A.; Trogolo, S.; Trombetta, G.; Trubnikov, V.; Trzaska, W. H.; Tsuji, T.; Tumkin, A.; Turrisi, R.; Tveter, T. S.; Ullaland, K.; Uras, A.; Usai, G. L.; Utrobicic, A.; Vala, M.; Palomo, L. Valencia; Vallero, S.; Van Der Maarel, J.; Van Hoorne, J. W.; van Leeuwen, M.; Vanat, T.; Vyvre, P. Vande; Varga, D.; Vargas, A.; Vargyas, M.; Varma, R.; Vasileiou, M.; Vasiliev, A.; Vauthier, A.; Vechernin, V.; Veen, A. M.; Veldhoen, M.; Velure, A.; Vercellin, E.; Vergara Limon, S.; Vernet, R.; Verweij, M.; Vickovic, L.; Viesti, G.; Viinikainen, J.; Vilakazi, Z.; Baillie, O. Villalobos; Villatoro Tello, A.; Vinogradov, A.; Vinogradov, L.; Vinogradov, Y.; Virgili, T.; Vislavicius, V.; Viyogi, Y. P.; Vodopyanov, A.; Voelkl, M. A.; Voloshin, K.; Voloshin, S. A.; Volpe, G.; von Haller, B.; Vorobyev, I.; Vranic, D.; Vrlakova, J.; Vulpescu, B.; Wagner, B.; Wagner, J.; Wang, H.; Watanabe, D.; Watanabe, Y.; Weiser, D. F.; Westerhoff, U.; Whitehead, A. M.; Wiechula, J.; Wikne, J.; Wilk, G.; Wilkinson, J.; Williams, M. C. S.; Windelband, B.; Winn, M.; Yang, H.; Yano, S.; Yasin, Z.; Yokoyama, H.; Yoo, I. -K.; Yoon, J. H.; Yurchenko, V.; Yushmanov, I.; Zaborowska, A.; Zaccolo, V.; Zaman, A.; Zampolli, C.; Zanoli, H. J. C.; Zaporozhets, S.; Zardoshti, N.; Zarochentsev, A.; Zavada, P.; Zaviyalov, N.; Zbroszczyk, H.; Zgura, I. S.; Zhalov, M.; Zhang, C.; Zhao, C.; Zhigareva, N.; Zhou, Y.; Zhou, Z.; Zhu, H.; Zichichi, A.; Zimmermann, A.; Zimmermann, M. B.; Zinovjev, G.; Zyzak, M.; Collaboration, ALICE

    2016-01-01

    We present a Bayesian approach to particle identification (PID) within the ALICE experiment. The aim is to more effectively combine the particle identification capabilities of its various detectors. After a brief explanation of the adopted methodology and formalism, the performance of the Bayesian
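
Schematically (an assumed toy version, not ALICE's actual implementation), the Bayesian combination multiplies a prior over particle species by the per-detector signal likelihoods and renormalizes:

```python
def bayes_pid(priors, detector_likelihoods):
    """Combine per-detector likelihoods with species priors via Bayes' rule.

    priors: dict mapping species -> P(species)
    detector_likelihoods: list of dicts, one per detector,
        mapping species -> P(observed signal | species)
    Returns the normalized posterior over species.
    """
    post = dict(priors)
    for lik in detector_likelihoods:
        post = {s: post[s] * lik[s] for s in post}   # accumulate evidence
    z = sum(post.values())
    return {s: p / z for s, p in post.items()}       # normalize
```

The benefit over per-detector cuts is that weak, consistent evidence from several detectors combines multiplicatively into a confident identification.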

  19. Bayesian Network for multiple hypothesis tracking

    NARCIS (Netherlands)

    Zajdel, W.P.; Kröse, B.J.A.; Blockeel, H.; Denecker, M.

    2002-01-01

    For a flexible camera-to-camera tracking of multiple objects we model the objects' behavior with a Bayesian network and combine it with the multiple hypothesis framework that associates observations with objects. Bayesian networks offer a possibility to factor complex, joint distributions into a

  20. Bayesian learning theory applied to human cognition.

    Science.gov (United States)

    Jacobs, Robert A; Kruschke, John K

    2011-01-01

    Probabilistic models based on Bayes' rule are an increasingly popular approach to understanding human cognition. Bayesian models allow immense representational latitude and complexity. Because they use normative Bayesian mathematics to process those representations, they define optimal performance on a given task. This article focuses on key mechanisms of Bayesian information processing, and provides numerous examples illustrating Bayesian approaches to the study of human cognition. We start by providing an overview of Bayesian modeling and Bayesian networks. We then describe three types of information processing operations (inference, parameter learning, and structure learning) in both Bayesian networks and human cognition. This is followed by a discussion of the important roles of prior knowledge and of active learning. We conclude by outlining some challenges for Bayesian models of human cognition that will need to be addressed by future research. WIREs Cogn Sci 2011 2 8-21 DOI: 10.1002/wcs.80 For further resources related to this article, please visit the WIREs website. Copyright © 2010 John Wiley & Sons, Ltd.

  1. Properties of the Bayesian Knowledge Tracing Model

    Science.gov (United States)

    van de Sande, Brett

    2013-01-01

    Bayesian Knowledge Tracing is used very widely to model student learning. It comes in two different forms: The first form is the Bayesian Knowledge Tracing "hidden Markov model" which predicts the probability of correct application of a skill as a function of the number of previous opportunities to apply that skill and the model…
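
The hidden Markov form of Bayesian Knowledge Tracing follows a standard two-step update, sketched here from the conventional published equations (the slip/guess/learn parameter names are the usual ones, not specific to this thesis):

```python
def bkt_update(p_known, correct, slip, guess, learn):
    """One Bayesian Knowledge Tracing step.

    First condition P(known) on the observed response via Bayes' rule,
    then apply the learning transition P(not known -> known) = learn.
    """
    if correct:
        evidence = p_known * (1 - slip) + (1 - p_known) * guess
        conditioned = p_known * (1 - slip) / evidence
    else:
        evidence = p_known * slip + (1 - p_known) * (1 - guess)
        conditioned = p_known * slip / evidence
    return conditioned + (1 - conditioned) * learn
```

Iterating this update over a student's sequence of responses yields the predicted probability of correct application as a function of practice opportunities.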

  2. Plug & Play object oriented Bayesian networks

    DEFF Research Database (Denmark)

    Bangsø, Olav; Flores, J.; Jensen, Finn Verner

    2003-01-01

    and secondly, to gain efficiency during modification of an object oriented Bayesian network. To accomplish these two goals we have exploited a mechanism allowing local triangulation of instances to develop a method for updating the junction trees associated with object oriented Bayesian networks in highly...

  3. Using Bayesian Networks to Improve Knowledge Assessment

    Science.gov (United States)

    Millan, Eva; Descalco, Luis; Castillo, Gladys; Oliveira, Paula; Diogo, Sandra

    2013-01-01

    In this paper, we describe the integration and evaluation of an existing generic Bayesian student model (GBSM) into an existing computerized testing system within the Mathematics Education Project (PmatE--Projecto Matematica Ensino) of the University of Aveiro. This generic Bayesian student model had been previously evaluated with simulated…

  4. Bayesian models: A statistical primer for ecologists

    Science.gov (United States)

    Hobbs, N. Thompson; Hooten, Mevin B.

    2015-01-01

    Bayesian modeling has become an indispensable tool for ecological research because it is uniquely suited to deal with complexity in a statistically coherent way. This textbook provides a comprehensive and accessible introduction to the latest Bayesian methods, in language ecologists can understand. Unlike other books on the subject, this one emphasizes the principles behind the computations, giving ecologists a big-picture understanding of how to implement this powerful statistical approach. Bayesian Models is an essential primer for non-statisticians. It begins with a definition of probability and develops a step-by-step sequence of connected ideas, including basic distribution theory, network diagrams, hierarchical models, Markov chain Monte Carlo, and inference from single and multiple models. This unique book places less emphasis on computer coding, favoring instead a concise presentation of the mathematical statistics needed to understand how and why Bayesian analysis works. It also explains how to write out properly formulated hierarchical Bayesian models and use them in computing, research papers, and proposals. This primer enables ecologists to understand the statistical principles behind Bayesian modeling and apply them to research, teaching, policy, and management. Presents the mathematical and statistical foundations of Bayesian modeling in language accessible to non-statisticians. Covers basic distribution theory, network diagrams, hierarchical models, Markov chain Monte Carlo, and more. Deemphasizes computer coding in favor of basic principles. Explains how to write out properly factored statistical expressions representing Bayesian models.
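
As a tiny illustration of the Markov chain Monte Carlo machinery such primers cover, here is a generic random-walk Metropolis sampler written from first principles (not taken from the book):

```python
import math
import random

def metropolis(log_post, x0, steps, scale=0.5, seed=0):
    """Random-walk Metropolis sampling from an unnormalized log posterior."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(steps):
        prop = x + rng.gauss(0.0, scale)          # symmetric proposal
        lp_prop = log_post(prop)
        if math.log(rng.random()) < lp_prop - lp:  # accept with prob min(1, ratio)
            x, lp = prop, lp_prop
        samples.append(x)
    return samples

# Sampling a Gaussian posterior centered at 2 (unnormalized log density):
draws = metropolis(lambda x: -0.5 * (x - 2.0) ** 2, x0=0.0, steps=20000)
```

Discarding the early draws as burn-in, the sample mean approximates the posterior mean; real analyses use vastly more capable samplers, but the accept/reject core is the same.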

  5. Modeling Diagnostic Assessments with Bayesian Networks

    Science.gov (United States)

    Almond, Russell G.; DiBello, Louis V.; Moulder, Brad; Zapata-Rivera, Juan-Diego

    2007-01-01

    This paper defines Bayesian network models and examines their applications to IRT-based cognitive diagnostic modeling. These models are especially suited to building inference engines designed to be synchronous with the finer grained student models that arise in skills diagnostic assessment. Aspects of the theory and use of Bayesian network models…

  6. A Bayesian localized conditional autoregressive model for estimating the health effects of air pollution.

    Science.gov (United States)

    Lee, Duncan; Rushworth, Alastair; Sahu, Sujit K

    2014-06-01

    Estimation of the long-term health effects of air pollution is a challenging task, especially when modeling spatial small-area disease incidence data in an ecological study design. The challenge comes from the unobserved underlying spatial autocorrelation structure in these data, which is accounted for using random effects modeled by a globally smooth conditional autoregressive model. These smooth random effects confound the effects of air pollution, which are also globally smooth. To avoid this collinearity a Bayesian localized conditional autoregressive model is developed for the random effects. This localized model is flexible spatially, in the sense that it is not only able to model areas of spatial smoothness, but also it is able to capture step changes in the random effects surface. This methodological development allows us to improve the estimation performance of the covariate effects, compared to using traditional conditional auto-regressive models. These results are established using a simulation study, and are then illustrated with our motivating study on air pollution and respiratory ill health in Greater Glasgow, Scotland in 2011. The model shows substantial health effects of particulate matter air pollution and nitrogen dioxide, whose effects have been consistently attenuated by the currently available globally smooth models. © 2014, The Authors Biometrics published by Wiley Periodicals, Inc. on behalf of International Biometric Society.
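
For readers unfamiliar with conditional autoregressive priors, a minimal generic (proper) CAR precision matrix over a neighbourhood graph looks like the following; this is the standard construction, not the paper's localized variant:

```python
import numpy as np

def car_precision(W, rho, tau):
    """Precision matrix of a proper CAR prior: Q = tau * (D - rho * W).

    W   symmetric 0/1 adjacency matrix of the areal units
    D   diagonal matrix of neighbour counts (row sums of W)
    rho spatial dependence parameter in [0, 1); tau precision scale
    """
    D = np.diag(W.sum(axis=1))
    return tau * (D - rho * W)
```

The localized model of the paper effectively lets elements of W be estimated rather than fixed, so the random-effects surface can have step changes between neighbours.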

  7. A cubic B-spline Galerkin approach for the numerical simulation of the GEW equation

    Directory of Open Access Journals (Sweden)

    S. Battal Gazi Karakoç

    2016-02-01

    Full Text Available The generalized equal width (GEW) wave equation is solved numerically by using a lumped Galerkin approach with cubic B-spline functions. The proposed numerical scheme is tested on two problems: a single solitary wave and the interaction of two solitary waves. In order to determine the performance of the algorithm, the error norms L2 and L∞ and the invariants I1, I2 and I3 are calculated. For the linear stability analysis of the numerical algorithm, the von Neumann approach is used. As a result, the obtained findings show that the presented numerical scheme is preferable to some recent numerical methods.

  8. Bivariate spline solution of time dependent nonlinear PDE for a population density over irregular domains.

    Science.gov (United States)

    Gutierrez, Juan B; Lai, Ming-Jun; Slavov, George

    2015-12-01

    We study a time dependent partial differential equation (PDE) which arises from classic models in ecology involving logistic growth with Allee effect by introducing a discrete weak solution. Existence, uniqueness and stability of the discrete weak solutions are discussed. We use bivariate splines to approximate the discrete weak solution of the nonlinear PDE. A computational algorithm is designed to solve this PDE. A convergence analysis of the algorithm is presented. We present some simulations of population development over some irregular domains. Finally, we discuss applications in epidemiology and other ecological problems. Copyright © 2015 Elsevier Inc. All rights reserved.

  9. Numerical solution of the controlled Duffing oscillator by semi-orthogonal spline wavelets

    International Nuclear Information System (INIS)

    Lakestani, M; Razzaghi, M; Dehghan, M

    2006-01-01

    This paper presents a numerical method for solving the controlled Duffing oscillator. The method can be extended to nonlinear calculus of variations and optimal control problems. The method is based upon compactly supported linear semi-orthogonal B-spline wavelets. The differential and integral expressions which arise in the system dynamics, the performance index and the boundary conditions are converted into some algebraic equations which can be solved for the unknown coefficients. Illustrative examples are included to demonstrate the validity and applicability of the technique

  10. Complex wavenumber Fourier analysis of the B-spline based finite element method

    Czech Academy of Sciences Publication Activity Database

    Kolman, Radek; Plešek, Jiří; Okrouhlík, Miloslav

    2014-01-01

    Roč. 51, č. 2 (2014), s. 348-359 ISSN 0165-2125 R&D Projects: GA ČR(CZ) GAP101/11/0288; GA ČR(CZ) GAP101/12/2315; GA ČR GPP101/10/P376; GA ČR GA101/09/1630 Institutional support: RVO:61388998 Keywords : elastic wave propagation * dispersion errors * B-spline * finite element method * isogeometric analysis Subject RIV: JR - Other Machinery Impact factor: 1.513, year: 2014 http://www.sciencedirect.com/science/article/pii/S0165212513001479

  11. Gaussian quadrature rules for C 1 quintic splines with uniform knot vectors

    KAUST Repository

    Bartoň, Michael

    2017-03-21

    We provide explicit quadrature rules for spaces of C1 quintic splines with uniform knot sequences over finite domains. The quadrature nodes and weights are derived via an explicit recursion that avoids numerical solvers. Each rule is optimal, that is, it requires the minimal number of nodes for a given function space. For each of n subintervals, generically, only two nodes are required, which reduces the evaluation cost by 2/3 when compared to the classical Gaussian quadrature for polynomials over each knot span. Numerical experiments show fast convergence, as n grows, to the "two-third" quadrature rule of Hughes et al. (2010) for infinite domains.
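
For comparison, the classical per-span Gauss-Legendre rule that the abstract measures against: m nodes integrate polynomials of degree up to 2m - 1 exactly, so each quintic knot span needs three nodes (an illustrative sketch using NumPy, not the paper's spline-exact rule):

```python
import numpy as np

def gauss_integrate(f, a, b, m):
    """m-node Gauss-Legendre quadrature on [a, b];
    exact for polynomials of degree <= 2m - 1."""
    x, w = np.polynomial.legendre.leggauss(m)      # nodes/weights on [-1, 1]
    xm = 0.5 * (b - a) * x + 0.5 * (a + b)         # map to [a, b]
    return 0.5 * (b - a) * float(np.sum(w * f(xm)))
```

The paper's rules exploit the C1 continuity across knots to get away with roughly two nodes per subinterval instead of three, hence the quoted 2/3 cost reduction.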

  12. Steady-state solution of the PTC thermistor problem using a quadratic spline finite element method

    Directory of Open Access Journals (Sweden)

    Bahadir A. R.

    2002-01-01

    Full Text Available The problem of heat transfer in a Positive Temperature Coefficient (PTC) thermistor, which may form one element of an electric circuit, is solved numerically by a finite element method. The approach used is based on the Galerkin finite element method using quadratic splines as shape functions. The resulting system of ordinary differential equations is solved by the finite difference method. Comparison is made with numerical and analytical solutions, and the accuracy of the computed solutions indicates that the method is well suited for the solution of the PTC thermistor problem.

  13. Solid T-spline Construction from Boundary Triangulations with Arbitrary Genus Topology

    Science.gov (United States)

    2012-04-01

    (Only figure and table fragments of this record's abstract survive extraction.) Fig. 11: the solid T-spline construction result for the “Eight” model. Fig. 14: sculpture model with genus two; (a) the input boundary triangle mesh.

  14. Modelling stunting in LiST: the effect of applying smoothing to linear growth data

    Directory of Open Access Journals (Sweden)

    Simon Cousens

    2017-11-01

    Full Text Available Abstract Background The Lives Saved Tool (LiST) is a widely used resource for evidence-based decision-making regarding health program scale-up in low- and middle-income countries. LiST estimates the impact of specified changes in intervention coverage on mortality and stunting among children under 5 years of age. We aimed to improve the estimates of the parameters in LiST that determine the rate at which the effects of interventions to prevent stunting attenuate as children get older. Methods We identified datasets with serial measurements of children’s lengths or heights and used random effects models and restricted cubic splines to model the growth trajectories of children with at least six serial length/height measurements. We applied WHO growth standards to both measured and modelled (smoothed) lengths/heights to determine children’s stunting status at multiple ages (1, 6, 12, and 24 months). We then calculated the odds ratios for the association of stunting at one age point with stunting at the next (“stunting-to-stunting ORs”) using both measured and smoothed data points. We ran analyses in LiST to compare the impact on intervention effect attenuation of using smoothed rather than measured stunting-to-stunting ORs. Results A total of 21,786 children with 178,786 length/height measurements between them contributed to our analysis. The odds of stunting at a given age were strongly related to whether a child was stunted at an earlier age, using both measured and smoothed lengths/heights, although the relationship was stronger for smoothed than measured lengths/heights. Using smoothed lengths/heights, we estimated that children stunted at 1 month have 45 times the odds of being stunted at 6 months, with corresponding odds ratios of 362 for the period 6 to 12 months and 175 for the period 12 to 24 months. Using the odds ratios derived from the smoothed data in LiST resulted in a somewhat slower attenuation of intervention effects over time
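The smoothing step described in the Methods rests on restricted (natural) cubic splines, which are cubic between their knots and constrained to be linear beyond the boundary knots. A minimal sketch using Harrell's truncated-power parameterization, with a made-up child, ages, knots, and growth curve standing in for the real serial measurements:

```python
import numpy as np

def rcs_basis(x, knots):
    """Restricted (natural) cubic spline design matrix: intercept,
    linear term, and k-2 truncated-power terms whose cubic and
    quadratic parts cancel beyond the boundary knots."""
    x = np.asarray(x, dtype=float)
    t = np.asarray(knots, dtype=float)
    k = len(t)
    pos3 = lambda u: np.clip(u, 0.0, None) ** 3
    cols = [np.ones_like(x), x]
    for j in range(k - 2):
        cols.append(pos3(x - t[j])
                    - pos3(x - t[k - 2]) * (t[k - 1] - t[j]) / (t[k - 1] - t[k - 2])
                    + pos3(x - t[k - 1]) * (t[k - 2] - t[j]) / (t[k - 1] - t[k - 2]))
    return np.column_stack(cols)

# Hypothetical child: length (cm) measured at 8 ages (months), with noise.
rng = np.random.default_rng(0)
ages = np.array([0.5, 1, 3, 6, 9, 12, 18, 24], dtype=float)
true_curve = lambda a: 50.0 + 18.0 * np.log1p(a / 2.0)   # assumed growth shape
lengths = true_curve(ages) + rng.normal(0.0, 0.8, ages.size)

knots = np.array([1.0, 6.0, 12.0, 24.0])
X = rcs_basis(ages, knots)
beta, *_ = np.linalg.lstsq(X, lengths, rcond=None)

# "Smoothed" length at any age comes from the fitted curve, not raw data;
# stunting status at 1, 6, 12, 24 months would be read off this curve.
smoothed_at_12 = (rcs_basis(np.array([12.0]), knots) @ beta)[0]
print(round(smoothed_at_12, 1))
```

In the paper this fit is embedded in a random effects model across thousands of children; the single-child least-squares fit here only illustrates the spline machinery.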

  15. Flexible Bayesian Human Fecundity Models.

    Science.gov (United States)

    Kim, Sungduk; Sundaram, Rajeshwari; Buck Louis, Germaine M; Pyper, Cecilia

    2012-12-01

    Human fecundity is an issue of considerable interest for both epidemiological and clinical audiences, and is dependent upon a couple's biologic capacity for reproduction coupled with behaviors that place a couple at risk for pregnancy. Bayesian hierarchical models have been proposed to better model the conception probabilities by accounting for the acts of intercourse around the day of ovulation, i.e., during the fertile window. These models can be viewed in the framework of a generalized nonlinear model with an exponential link. However, a fixed choice of link function may not always provide the best fit, leading to potentially biased estimates for probability of conception. Motivated by this, we propose a general class of models for fecundity by relaxing the choice of the link function under the generalized nonlinear model framework. We use a sample from the Oxford Conception Study (OCS) to illustrate the utility and fit of this general class of models for estimating human conception. Our findings reinforce the need for attention to be paid to the choice of link function in modeling conception, as it may bias the estimation of conception probabilities. Various properties of the proposed models are examined, and a Markov chain Monte Carlo sampling algorithm is developed for implementing the Bayesian computations. The deviance information criterion and the logarithm of the pseudo marginal likelihood are used for guiding the choice of links. The supplemental material section contains technical details of the proof of the theorem stated in the paper, together with further simulation results and analysis.
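The sensitivity to the link function that motivates this paper is easy to see numerically: for the same linear predictor, different inverse links imply different probabilities, especially away from 0.5, which is where day-specific conception probabilities typically live. A small generic illustration (the link names and grid below are standard choices, not the paper's specific model):

```python
import numpy as np
from scipy.stats import norm

# Three common inverse links mapping a linear predictor eta to a probability.
eta = np.linspace(-3.0, 3.0, 7)
links = {
    "logit":   1.0 / (1.0 + np.exp(-eta)),
    "probit":  norm.cdf(eta),
    "cloglog": 1.0 - np.exp(-np.exp(eta)),
}
for name, p in links.items():
    print(name, np.round(p, 3))

# The links roughly agree near p = 0.5 but diverge in the tails, so fixing
# one a priori can shift the estimated probabilities substantially.
spread = np.max(np.abs(links["logit"] - links["cloglog"]))
print("max logit-vs-cloglog gap:", round(spread, 3))
```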

  16. Bayesian Nonparametric Longitudinal Data Analysis.

    Science.gov (United States)

    Quintana, Fernando A; Johnson, Wesley O; Waetjen, Elaine; Gold, Ellen

    2016-01-01

    Practical Bayesian nonparametric methods have been developed across a wide variety of contexts. Here, we develop a novel statistical model that generalizes standard mixed models for longitudinal data that include flexible mean functions as well as combined compound symmetry (CS) and autoregressive (AR) covariance structures. AR structure is often specified through the use of a Gaussian process (GP) with covariance functions that allow longitudinal data to be more correlated if they are observed closer in time than if they are observed farther apart. We allow for AR structure by considering a broader class of models that incorporates a Dirichlet Process Mixture (DPM) over the covariance parameters of the GP. We are able to take advantage of modern Bayesian statistical methods in making full predictive inferences and about characteristics of longitudinal profiles and their differences across covariate combinations. We also take advantage of the generality of our model, which provides for estimation of a variety of covariance structures. We observe that models that fail to incorporate CS or AR structure can result in very poor estimation of a covariance or correlation matrix. In our illustration using hormone data observed on women through the menopausal transition, biology dictates the use of a generalized family of sigmoid functions as a model for time trends across subpopulation categories.
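The combined covariance structure described above, compound symmetry plus a GP/AR term whose correlation decays with time separation, can be written down directly. A sketch with made-up variance parameters and visit times; the sigmoid mean echoes the hormone illustration, and the exponential AR kernel is one standard choice among those the model class admits:

```python
import numpy as np

def cs_ar_cov(times, var_cs, var_ar, phi, var_err):
    """Covariance of one subject's longitudinal profile combining
    compound symmetry (a shared random intercept), an AR-style GP term
    exp(-phi * |t_i - t_j|), and independent measurement error."""
    times = np.asarray(times, dtype=float)
    d = np.abs(times[:, None] - times[None, :])
    return (var_cs * np.ones_like(d)
            + var_ar * np.exp(-phi * d)
            + var_err * np.eye(len(times)))

times = np.array([0.0, 0.5, 1.0, 2.0, 4.0])       # hypothetical visit times
S = cs_ar_cov(times, var_cs=1.0, var_ar=0.5, phi=1.2, var_err=0.1)

# Correlation decays with time separation but plateaus at the CS floor,
# which pure-AR models cannot reproduce.
corr = S / np.sqrt(np.outer(np.diag(S), np.diag(S)))
print(np.round(corr[0], 3))

# Simulate a few profiles around a sigmoid mean trend.
rng = np.random.default_rng(1)
mean = 10.0 / (1.0 + np.exp(-(times - 1.5)))
profiles = rng.multivariate_normal(mean, S, size=3)
```

In the paper the AR covariance parameters additionally receive a Dirichlet process mixture prior; the fixed values here only show the structure being mixed over.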

  17. BELM: Bayesian extreme learning machine.

    Science.gov (United States)

    Soria-Olivas, Emilio; Gómez-Sanchis, Juan; Martín, José D; Vila-Francés, Joan; Martínez, Marcelino; Magdalena, José R; Serrano, Antonio J

    2011-03-01

    The theory of extreme learning machine (ELM) has become very popular in the last few years. ELM is a new approach for learning the parameters of the hidden layers of a multilayer neural network (as the multilayer perceptron or the radial basis function neural network). Its main advantage is the lower computational cost, which is especially relevant when dealing with many patterns defined in a high-dimensional space. This brief proposes a Bayesian approach to ELM, which presents some advantages over other approaches: it allows the introduction of a priori knowledge; obtains the confidence intervals (CIs) without the need of applying methods that are computationally intensive, e.g., bootstrap; and presents high generalization capabilities. Bayesian ELM is benchmarked against classical ELM in several artificial and real datasets that are widely used for the evaluation of machine learning algorithms. Achieved results show that the proposed approach produces a competitive accuracy with some additional advantages, namely, automatic production of CIs, reduction of probability of model overfitting, and use of a priori knowledge.
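The construction sketched in this abstract, a random untrained hidden layer followed by Bayesian linear regression on the hidden features, yields predictive uncertainty (hence CIs) in closed form, with no bootstrap. The sizes, priors, and toy dataset below are assumptions for illustration, not the brief's experimental setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# --- synthetic 1-D regression problem (assumed data) ---
X = rng.uniform(-3.0, 3.0, size=(200, 1))
y = np.sin(X[:, 0]) + rng.normal(0.0, 0.1, size=200)

# --- ELM hidden layer: random, never trained ---
H = 50
W = rng.normal(size=(1, H))
b = rng.normal(size=H)
hidden = lambda X: np.tanh(X @ W + b)            # (n, H) feature map

# --- Bayesian output layer: Gaussian prior, Gaussian noise ---
alpha, beta = 1.0, 1.0 / 0.1**2                  # prior and noise precisions
Phi = hidden(X)
A = alpha * np.eye(H) + beta * Phi.T @ Phi       # posterior precision
m = beta * np.linalg.solve(A, Phi.T @ y)         # posterior mean weights

def predict(Xnew):
    """Predictive mean and standard deviation; CIs follow directly."""
    P = hidden(Xnew)
    mean = P @ m
    var = 1.0 / beta + np.einsum('ij,ij->i', P, np.linalg.solve(A, P.T).T)
    return mean, np.sqrt(var)

Xg = np.linspace(-3.0, 3.0, 7)[:, None]
mean, sd = predict(Xg)
print(np.round(mean, 2))   # approximates sin on the grid
print(np.round(sd, 3))     # per-point predictive uncertainty
```

The a priori knowledge enters through the prior precision `alpha`; classical ELM corresponds to the ridge point estimate `m` with no `sd`.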

  18. Bifurcations of non-smooth systems

    Science.gov (United States)

    Angulo, Fabiola; Olivar, Gerard; Osorio, Gustavo A.; Escobar, Carlos M.; Ferreira, Jocirei D.; Redondo, Johan M.

    2012-12-01

    Non-smooth systems (namely piecewise-smooth systems) have received much attention in the last decade. Many contributions in this area show that theory and applications (to electronic circuits, mechanical systems, …) are relevant to problems in science and engineering. In particular, new bifurcations have been reported in the literature, and this was the topic of this minisymposium. Thus both bifurcation theory and its applications were included. Several contributions from different fields show that non-smooth bifurcations are a hot topic in research, so in this paper the reader can find contributions from electronics, energy markets and population dynamics. Also, a carefully-written specific algebraic software tool is presented.
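A border-collision bifurcation, one signature phenomenon of the piecewise-smooth systems discussed here, can be reproduced in a few lines with a continuous piecewise-linear map; the slopes and offsets below are chosen purely for illustration:

```python
import numpy as np

def pw_map(x, mu, a=0.5, b=-1.5):
    """A continuous but non-smooth 1-D map with a border at x = 0.
    Border-collision bifurcations occur as a fixed point crosses it."""
    return a * x + mu if x < 0 else b * x + mu

def attractor(mu, n_transient=500, n_keep=50, x0=0.1):
    """Iterate past the transient and return a sample of the attractor."""
    x = x0
    for _ in range(n_transient):
        x = pw_map(x, mu)
    out = []
    for _ in range(n_keep):
        x = pw_map(x, mu)
        out.append(x)
    return np.array(out)

# For mu < 0 the stable fixed point x* = mu / (1 - a) lies on the left
# branch (|a| < 1). As mu crosses 0 it collides with the border and,
# since |b| > 1, the dynamics jump abruptly to a period-2 cycle --
# a transition with no smooth-system analogue.
left = attractor(-0.1)
right = attractor(0.1)
print(np.ptp(left), np.ptp(right))   # ~zero spread vs finite spread
```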

  19. 2nd Bayesian Young Statisticians Meeting

    CERN Document Server

    Bitto, Angela; Kastner, Gregor; Posekany, Alexandra

    2015-01-01

    The Second Bayesian Young Statisticians Meeting (BAYSM 2014) and the research presented here facilitate connections among researchers using Bayesian Statistics by providing a forum for the development and exchange of ideas. WU Vienna University of Business and Economics hosted BAYSM 2014 from September 18th to 19th. The guidance of renowned plenary lecturers and senior discussants is a critical part of the meeting and this volume, which follows publication of contributions from BAYSM 2013. The meeting's scientific program reflected the variety of fields in which Bayesian methods are currently employed or could be introduced in the future. Three brilliant keynote lectures by Chris Holmes (University of Oxford), Christian Robert (Université Paris-Dauphine), and Mike West (Duke University), were complemented by 24 plenary talks covering the major topics Dynamic Models, Applications, Bayesian Nonparametrics, Biostatistics, Bayesian Methods in Economics, and Models and Methods, as well as a lively poster session ...

  20. Bayesian natural language semantics and pragmatics

    CERN Document Server

    Zeevat, Henk

    2015-01-01

    The contributions in this volume focus on the Bayesian interpretation of natural languages, which is widely used in areas of artificial intelligence, cognitive science, and computational linguistics. This is the first volume to take up topics in Bayesian Natural Language Interpretation and make proposals based on information theory, probability theory, and related fields. The methodologies offered here extend to the target semantic and pragmatic analyses of computational natural language interpretation. Bayesian approaches to natural language semantics and pragmatics are based on methods from signal processing and the causal Bayesian models pioneered especially by Pearl. In signal processing, the Bayesian method finds the most probable interpretation by finding the one that maximizes the product of the prior probability and the likelihood of the interpretation. It thus stresses the importance of a production model for interpretation as in Grice's contributions to pragmatics or in interpretation by abduction.