Efficient computation of smoothing splines via adaptive basis sampling
Ma, Ping
2015-06-24
© 2015 Biometrika Trust. Smoothing splines provide flexible nonparametric regression estimators. However, the high computational cost of smoothing splines for large datasets has hindered their wide application. In this article, we develop a new method, named adaptive basis sampling, for efficient computation of smoothing splines in super-large samples. Except for the univariate case where the Reinsch algorithm is applicable, a smoothing spline for a regression problem with sample size n can be expressed as a linear combination of n basis functions and its computational complexity is generally O(n^{3}). We achieve a more scalable computation in the multivariate case by evaluating the smoothing spline using a smaller set of basis functions, obtained by an adaptive sampling scheme that uses values of the response variable. Our asymptotic analysis shows that smoothing splines computed via adaptive basis sampling converge to the true function at the same rate as full basis smoothing splines. Using simulation studies and a large-scale deep earth core-mantle boundary imaging study, we show that the proposed method outperforms a sampling method that does not use the values of response variables.
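As a rough illustration of the reduced-basis idea (not the paper's adaptive sampling scheme, which selects basis points using the response values), the sketch below fits a penalized smoother built from kernel sections at q ≪ n sampled points. The Gaussian kernel, bandwidth, ridge penalty, and plain uniform sampling are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, q, lam = 1000, 50, 1e-3          # sample size, sampled basis size, ridge penalty

x = np.sort(rng.uniform(0, 1, n))
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, n)

# Sample q "representer" points. The paper samples these adaptively using the
# response values; plain uniform sampling is used here as a simplified stand-in.
knots = x[rng.choice(n, q, replace=False)]

# Gaussian kernel sections as the reduced basis (a stand-in for the RKHS
# representers of a full smoothing spline); the bandwidth is an assumption.
B = np.exp(-((x[:, None] - knots[None, :]) ** 2) / (2 * 0.1 ** 2))

coef = np.linalg.solve(B.T @ B + lam * np.eye(q), B.T @ y)   # penalized LS
fitted = B @ coef
rmse = np.sqrt(np.mean((fitted - np.sin(2 * np.pi * x)) ** 2))
```

Solving a q x q system instead of an n x n one is what makes the reduced-basis computation scale.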
An L1 smoothing spline algorithm with cross validation
Bosworth, Ken W.; Lall, Upmanu
1993-08-01
We propose an algorithm for the computation of L1 (LAD) smoothing splines in the spaces W_M(D). We assume one is given data of the form y_i = f(t_i) + ε_i, i = 1,...,N, with {t_i}_{i=1}^N ⊂ D, where the ε_i are errors with E(ε_i) = 0 and f is assumed to be in W_M. The LAD smoothing spline, for fixed smoothing parameter λ ≥ 0, is defined as the solution, s_λ, of the optimization problem min_g (1/N)∑_{i=1}^N |y_i − g(t_i)| + λ J_M(g), where J_M(g) is the seminorm consisting of the sum of the squared L2 norms of the Mth partial derivatives of g. Such an LAD smoothing spline, s_λ, would be expected to give robust smoothed estimates of f in situations where the ε_i are from a distribution with heavy tails. The solution to such a problem is a "thin plate spline" of known form. An algorithm for computing s_λ is given which is based on considering a sequence of quadratic programming problems whose structure is guided by the optimality conditions for the above convex minimization problem, and which are solved readily if a good initial point is available. The "data driven" selection of the smoothing parameter is achieved by minimizing a CV(λ) score. The combined LAD-CV smoothing spline algorithm is a continuation scheme in λ ↘ 0 applied to the above sequence of quadratic programs parametrized in λ, with the optimal smoothing parameter taken to be that value of λ at which the CV(λ) score first begins to increase. The feasibility of constructing the LAD-CV smoothing spline is illustrated by an application to a problem in environmental data interpretation.
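The objective above can be imitated in a simple discrete setting: an absolute-error data term plus a squared second-difference roughness penalty, solved by iteratively reweighted least squares. This is a hedged stand-in for the paper's quadratic-programming continuation scheme, not the authors' algorithm, and all data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200
t = np.linspace(0, 1, N)
y = np.cos(2 * np.pi * t) + rng.standard_t(df=2, size=N) * 0.1   # heavy-tailed errors

# Discrete analogue of the LAD smoothing spline: minimize
#   (1/N) * sum_i |y_i - g_i| + lam * sum (second differences of g)^2
# by iteratively reweighted least squares (IRLS), a simplified stand-in
# for the paper's quadratic-programming continuation scheme.
D = np.diff(np.eye(N), n=2, axis=0)          # second-difference operator
lam = 1.0
# Initialize with the ordinary least-squares smoother, then reweight toward L1.
g = np.linalg.solve(np.eye(N) / N + 2 * lam * D.T @ D, y / N)
for _ in range(50):
    w = 1.0 / np.maximum(np.abs(y - g), 1e-6)            # IRLS weights for |r|
    g = np.linalg.solve(np.diag(w) / N + 2 * lam * D.T @ D, w * y / N)
```

As the abstract suggests, the absolute-error data term keeps the fit close to the underlying curve even under heavy-tailed noise.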
Matuschek, Hannes; Kliegl, Reinhold; Holschneider, Matthias
2015-01-01
The Smoothing Spline ANOVA (SS-ANOVA) requires a specialized construction of basis and penalty terms in order to incorporate prior knowledge about the data to be fitted. Typically, one resorts to the most general approach using tensor product splines. This implies severe constraints on the correlation structure; i.e., the assumption of isotropy of smoothness cannot be incorporated in general. This may increase the variance of the spline fit, especially if only a relatively small set of observations is given. In this article, we propose an alternative method that allows prior knowledge to be incorporated without the need to construct specialized bases and penalties, allowing the researcher to choose the spline basis and penalty according to prior knowledge of the observations rather than according to the analysis to be done. The two approaches are compared on an artificial example and on analyses of fixation durations during reading.
Comparative Analysis for Robust Penalized Spline Smoothing Methods
Directory of Open Access Journals (Sweden)
Bin Wang
2014-01-01
Smoothing noisy data is commonly encountered in engineering, and robust penalized regression spline models are currently perceived to be among the most promising methods for coping with this issue, owing to their flexibility in capturing nonlinear trends in the data and effectively alleviating disturbance from outliers. Against this background, this paper conducts a thorough comparative analysis of two popular robust smoothing techniques, M-type estimation and S-estimation for penalized regression splines, both of which are re-derived from their origins, with the derivation process reformulated and the corresponding algorithms reorganized under a unified framework. The performance of the two estimators is evaluated in terms of fitting accuracy, robustness, and execution time on the MATLAB platform. Careful comparative experiments demonstrate that robust penalized spline smoothing methods resist noise effects better than the non-robust penalized least-squares spline regression method. Furthermore, the M-estimator performs stably only for observations with moderate perturbation error, whereas the S-estimator behaves well even for heavily contaminated observations, at the cost of more execution time. These findings can serve as guidance for selecting an appropriate approach for smoothing noisy data.
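An M-type estimate of the kind compared here can be sketched with Huber-weighted IRLS applied to a difference-penalized smoother. This is a simplified stand-in (the paper works with penalized regression splines in MATLAB), and the penalty weight, tuning constant, and data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 200
x = np.linspace(0, 1, n)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.1, n)
y[::25] += 3.0                                    # inject gross outliers

# M-type penalized smoothing: Huber-weighted IRLS with a discrete
# second-difference penalty, a simplified stand-in for the paper's
# B-spline-based M-estimator.
D = np.diff(np.eye(n), n=2, axis=0)
lam, c = 20.0, 0.15                               # penalty weight, Huber constant
g = y.copy()
for _ in range(30):
    r = y - g
    w = np.where(np.abs(r) <= c, 1.0, c / np.maximum(np.abs(r), 1e-12))
    g = np.linalg.solve(np.diag(w) + lam * D.T @ D, w * y)
```

The Huber weights bound each outlier's influence, which is exactly the resistance property the abstract attributes to robust penalized spline smoothing.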
Quintic spline smooth semi-supervised support vector classification machine
Institute of Scientific and Technical Information of China (English)
Xiaodan Zhang; Jinggai Ma; Aihua Li; Ang Li
2015-01-01
A semi-supervised support vector machine is a relatively new learning method that uses both labeled and unlabeled data in classification. Since the objective function of the model for an unconstrained semi-supervised support vector machine is not smooth, many fast optimization algorithms cannot be applied to solve the model. To overcome the difficulty of dealing with non-smooth objective functions, new methods that can solve the semi-supervised support vector machine with the desired classification accuracy are in great demand. A quintic spline function that is three-times differentiable at the origin is constructed by a general three-moment method and can be used to approximate the symmetric hinge loss function. The approximation accuracy of the quintic spline function is estimated. Moreover, a quintic spline smooth semi-supervised support vector machine is obtained, and the convergence accuracy of the smooth model to the non-smooth one is analyzed. Three experiments are performed to test the efficiency of the model. The experimental results show that the new model outperforms other smooth models in terms of classification performance. Furthermore, the new model is not sensitive to an increasing number of labeled samples, which means that the new model is more efficient.
P-splines with derivative based penalties and tensor product smoothing of unevenly distributed data
Wood, Simon N
2016-01-01
The P-splines of Eilers and Marx (Stat Sci 11:89–121, 1996) combine a B-spline basis with a discrete quadratic penalty on the basis coefficients, to produce a reduced-rank spline-like smoother. P-splines have three properties that make them very popular as reduced-rank smoothers: (i) the basis and the penalty are sparse, enabling efficient computation, especially for Bayesian stochastic simulation; (ii) it is possible to flexibly 'mix-and-match' the order of the B-spline basis and penalty, rather...
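The Eilers–Marx construction itself is short enough to sketch directly: a modest B-spline basis combined with a discrete second-order difference penalty on its coefficients. The knot count, spline degree, and penalty weight below are illustrative choices.

```python
import numpy as np
from scipy.interpolate import BSpline

rng = np.random.default_rng(2)
x = np.linspace(0, 1, 150)
y = np.sin(3 * np.pi * x) + rng.normal(0, 0.2, 150)

k, n_inner = 3, 20                       # cubic B-splines, 20 knot positions
t = np.r_[[0.0] * (k + 1), np.linspace(0, 1, n_inner)[1:-1], [1.0] * (k + 1)]
nb = len(t) - k - 1                      # number of basis functions

# Evaluate each B-spline basis function column by column.
B = np.column_stack([
    BSpline(t, (np.arange(nb) == j).astype(float), k)(x) for j in range(nb)
])

# Eilers-Marx P-spline: discrete second-order difference penalty on the
# basis coefficients instead of a derivative-based penalty.
D = np.diff(np.eye(nb), n=2, axis=0)
lam = 0.1
coef = np.linalg.solve(B.T @ B + lam * D.T @ D, B.T @ y)
fitted = B @ coef
```

Because both B and D are banded, the penalized normal equations are sparse, which is property (i) noted in the abstract.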
Smoothing spline ANOVA frailty model for recurrent event data.
Du, Pang; Jiang, Yihua; Wang, Yuedong
2011-12-01
Gap time hazard estimation is of particular interest in recurrent event data. This article proposes a fully nonparametric approach for estimating the gap time hazard. Smoothing spline analysis of variance (ANOVA) decompositions are used to model the log gap time hazard as a joint function of gap time and covariates, and general frailty is introduced to account for between-subject heterogeneity and within-subject correlation. We estimate the nonparametric gap time hazard function and the parameters in the frailty distribution using a combination of the Newton-Raphson procedure, the stochastic approximation algorithm (SAA), and the Markov chain Monte Carlo (MCMC) method. The convergence of the algorithm is guaranteed by decreasing the step size of the parameter update and/or increasing the MCMC sample size along iterations. A model selection procedure is also developed to identify negligible components in a functional ANOVA decomposition of the log gap time hazard. We evaluate the proposed method with simulation studies and illustrate its use through the analysis of bladder tumor data.
CLOSED SMOOTH SURFACE DEFINED FROM CUBIC TRIANGULAR SPLINES
Institute of Scientific and Technical Information of China (English)
Ren-zhong Feng; Ren-hong Wang
2005-01-01
In order to construct closed surfaces with continuous unit normal, we introduce a new spline space on an arbitrary closed mesh of three-sided faces. Our approach generalizes an idea of Goodman and is based on the concept of 'Geometric continuity' for piecewise polynomial parametrizations. The functions in the spline space restricted to the faces are cubic triangular polynomials. A basis of the spline space is constructed of positive functions which sum to 1. It is also shown that the space is suitable for interpolating data at the midpoints of the faces.
MortalitySmooth: An R Package for Smoothing Poisson Counts with P-Splines
Directory of Open Access Journals (Sweden)
Carlo G. Camarda
2012-07-01
The MortalitySmooth package provides a framework for smoothing count data in both one- and two-dimensional settings. Although general in its purposes, the package is specifically tailored to demographers, actuaries, epidemiologists, and geneticists who may be interested in a practical tool for smoothing mortality data over ages and/or years. The total number of deaths over a specified age- and year-interval is assumed to be Poisson-distributed, and P-splines and generalized linear array models are employed as a suitable regression methodology. Extra-Poisson variation can also be accommodated. Structured in an S3 object orientation system, MortalitySmooth has two main functions which fit the data and define two classes of objects: Mort1Dsmooth and Mort2Dsmooth. The methods for these classes (print, summary, plot, predict, and residuals) are also included. These features make it easy for users to extract and manipulate the outputs. In addition, a collection of mortality data is provided. This paper gives an overview of the design, aims, and principles of MortalitySmooth, as well as strategies for applying it and extending its use.
NEW PROOF OF DIMENSION FORMULA OF SPLINE SPACES OVER T-MESHES VIA SMOOTHING COFACTORS
Institute of Scientific and Technical Information of China (English)
Zhang-jin Huang; Jian-song Deng; Yu-yu Feng; Fa-lai Chen
2006-01-01
A T-mesh is basically a rectangular grid that allows T-junctions. Recently, Deng et al. introduced splines over T-meshes, which are generalizations of T-splines invented by Sederberg et al., and proposed a dimension formula based on the B-net method. In this paper, we derive an equivalent dimension formula in a different form with the smoothing cofactor method.
Material approximation of data smoothing and spline curves inspired by slime mould
International Nuclear Information System (INIS)
The giant single-celled slime mould Physarum polycephalum is known to approximate a number of network problems via growth and adaptation of its protoplasmic transport network and can serve as an inspiration towards unconventional, material-based computation. In Physarum, predictable morphological adaptation is prevented by its adhesion to the underlying substrate. We investigate what possible computations could be achieved if these limitations were removed and the organism was free to completely adapt its morphology in response to changing stimuli. Using a particle model of Physarum displaying emergent morphological adaptation behaviour, we demonstrate how a minimal approach to collective material computation may be used to transform and summarise properties of spatially represented datasets. We find that the virtual material relaxes more strongly to high-frequency changes in data, which can be used for the smoothing (or filtering) of data by approximating moving average and low-pass filters in 1D datasets. The relaxation and minimisation properties of the model enable the spatial computation of B-spline curves (approximating splines) in 2D datasets. Both clamped and unclamped spline curves of open and closed shapes can be represented, and the degree of spline curvature corresponds to the relaxation time of the material. The material computation of spline curves also includes novel quasi-mechanical properties, including unwinding of the shape between control points and a preferential adhesion to longer, straighter paths. Interpolating splines could not directly be approximated due to the formation and evolution of Steiner points at narrow vertices, but were approximated after rectilinear pre-processing of the source data. This pre-processing was further simplified by transforming the original data to contain the material inside the polyline. These exemplary results expand the repertoire of spatially represented unconventional computing devices by demonstrating a
Directory of Open Access Journals (Sweden)
Zhang Xiaohua
2003-11-01
In the search for genetic determinants of complex disease, two approaches to association analysis are most often employed: testing single loci, or testing a small group of loci jointly via haplotypes for their relationship to disease status. It is still debatable which of these approaches is more favourable, and under what conditions. The former has the advantage of simplicity but suffers severely when alleles at the tested loci are not in linkage disequilibrium (LD) with liability alleles; the latter should capture more of the signal encoded in LD, but is far from simple. The complexity of haplotype analysis could be especially troublesome for association scans over large genomic regions, which, in fact, is becoming the standard design. For these reasons, the authors have been evaluating statistical methods that bridge the gap between single-locus and haplotype-based tests. In this article, they present one such method, which uses non-parametric regression techniques embodied by Bayesian adaptive regression splines (BARS). For a set of markers falling within a common genomic region and a corresponding set of single-locus association statistics, the BARS procedure integrates these results into a single test by examining the class of smooth curves consistent with the data. The non-parametric BARS procedure generally finds no signal when no liability allele exists in the tested region (i.e., it achieves the specified size of the test) and is sensitive enough to pick up signals when a liability allele is present. The BARS procedure provides a robust and potentially powerful alternative to classical tests of association, diminishes the multiple testing problem inherent in those tests, and can be applied to a wide range of data types, including genotype frequencies estimated from pooled samples.
Zhang, X.; Liang, S.; Wang, G.
2015-12-01
Incident solar radiation (ISR) over the Earth's surface plays an important role in determining the Earth's climate and environment. Generally, ISR can be obtained from direct measurements, remotely sensed data, or reanalysis and general circulation model (GCM) data. Each type of product has advantages and limitations: surface direct measurements are accurate but provide sparse spatial coverage, whereas other global products may have large uncertainties. Ground measurements have normally been used for validation and occasionally for calibration, but transforming their "true values" spatially to improve the satellite products is still a new and challenging topic. In this study, an improved thin-plate smoothing spline approach is presented to locally "calibrate" the Global LAnd Surface Satellite (GLASS) ISR product using ISR data reconstructed from surface meteorological measurements. The influence of surface elevation on ISR estimation was also considered in the proposed method. The point-based surface reconstructed ISR was used as the response variable, with the GLASS ISR product and the surface elevation data at the corresponding locations as explanatory variables to train the thin-plate spline model. We evaluated the performance of the approach using cross-validation at both daily and monthly time scales over China. We also evaluated the estimated ISR based on the thin-plate spline method using independent ground measurements at 10 sites from the Coordinated Enhanced Observation Network (CEON). These validation results indicate that the thin-plate smoothing spline method can be effectively used to calibrate satellite-derived ISR products using ground measurements to achieve better accuracy.
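A minimal sketch of thin-plate spline smoothing of station-minus-satellite residuals (not the authors' full procedure, which also uses the satellite product and elevation as explanatory variables) can be written with SciPy's RBFInterpolator. The station layout, bias surface, and smoothing parameter are hypothetical.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(3)

# Hypothetical setup: 80 ground stations with a smooth spatial bias between
# station ISR and the satellite product; all numbers are illustrative.
stations = rng.uniform(0, 10, (80, 2))               # (lon, lat)
true_bias = lambda p: 5 * np.sin(p[:, 0] / 3) + 0.5 * p[:, 1]
residual = true_bias(stations) + rng.normal(0, 0.5, 80)

# Thin-plate smoothing spline of station-minus-satellite residuals; the
# smoothing parameter trades fidelity against roughness.
tps = RBFInterpolator(stations, residual,
                      kernel="thin_plate_spline", smoothing=1.0)

grid = rng.uniform(0, 10, (200, 2))                  # satellite pixel centres
correction = tps(grid)                               # local calibration field
err = np.sqrt(np.mean((correction - true_bias(grid)) ** 2))
```

Adding the interpolated correction field to the satellite product is the "local calibration" step the abstract describes.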
Polynomial estimation of the smoothing splines for the new Finnish reference values for spirometry.
Kainu, Annette; Timonen, Kirsi
2016-07-01
Background: Discontinuity of spirometry reference values from childhood into adulthood has been a problem with traditional reference values; thus modern modelling approaches using smoothing spline functions, which better depict the transition during growth and ageing, have recently been introduced. Following the publication of the new international Global Lung Initiative (GLI2012) reference values, new national Finnish reference values have also been calculated using similar GAMLSS modelling, with spline estimates for the mean (Mspline) and standard deviation (Sspline) provided in tables. The aim of this study was to produce polynomial estimates for these spline functions to use in lieu of lookup tables and to assess their validity in the reference population of healthy non-smokers. Methods: Linear regression modelling was used to approximate the estimated values for Mspline and Sspline using polynomial functions similar to those in the international GLI2012 reference values. Estimated values were compared to the original calculations in terms of absolute values, the derived predicted mean, and individually calculated z-scores using both sets of values. Results: Polynomial functions were estimated for all 10 spirometry variables. The agreement between the original lookup-table values and the polynomial estimates was very good, with no significant differences found. The variation increased slightly at larger predicted volumes, but the range of -0.018 to +0.022 litres for FEV1 represents at most ±0.4% of the predicted mean. Conclusions: Polynomial approximations were very close to the original lookup tables and are recommended for use in clinical practice to facilitate the use of the new reference values.
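The paper's core operation, replacing a tabulated spline by a fitted polynomial, can be sketched as follows. The lookup table below is a hypothetical stand-in, not the Finnish reference data, and the agreement here is essentially exact because the toy table is itself polynomial.

```python
import numpy as np

# Hypothetical stand-in for an Mspline lookup table: a smooth age-dependent
# curve tabulated at discrete ages (not the actual Finnish/GLI2012 data).
age = np.arange(18, 85)
mspline = 0.02 * (age - 50) - 1e-4 * (age - 50) ** 2

# Approximate the tabulated spline with a low-order polynomial to use in
# lieu of the lookup table; centring on age 50 keeps the fit well conditioned.
coeffs = np.polyfit(age - 50, mspline, deg=4)
approx = np.polyval(coeffs, age - 50)
max_abs_err = np.max(np.abs(approx - mspline))
```

In practice one would report the maximum absolute discrepancy over the table, as the paper does for each of its 10 spirometry variables.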
Heyne, Matthias; Derrick, Donald
2015-12-01
Tongue surface measurements from midsagittal ultrasound scans are effectively arcs with deviations representing tongue shape, but smoothing-spline analyses of variance (SSANOVAs) assume variance around a horizontal line. Therefore, calculating SSANOVA average curves of tongue traces in Cartesian coordinates [Davidson, J. Acoust. Soc. Am. 120(1), 407-415 (2006)] creates errors that are compounded at the tongue tip and root, where average tongue shape deviates most from a horizontal line. This paper introduces a method for transforming data into polar coordinates, similar to the technique of Mielke [J. Acoust. Soc. Am. 137(5), 2858-2869 (2015)], but using the virtual origin of a radial ultrasound transducer as the polar origin, allowing data conversion in a manner that is robust against between-subject and between-session variability.
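The coordinate transformation at the heart of this method is straightforward to sketch: convert contour points to polar coordinates about the transducer's virtual origin, so an arc-shaped tongue trace becomes a nearly flat function of angle. The origin position and contour below are hypothetical.

```python
import numpy as np

def to_polar(points, origin):
    """Convert Cartesian contour points (x, y) to polar (theta, r) about the
    virtual origin of the radial ultrasound transducer."""
    d = points - origin
    theta = np.arctan2(d[:, 1], d[:, 0])
    r = np.hypot(d[:, 0], d[:, 1])
    return np.column_stack([theta, r])

# Hypothetical tongue contour: an arc of radius 60 mm about an origin
# placed 20 mm below the coordinate origin (illustrative values).
origin = np.array([0.0, -20.0])
angles = np.linspace(np.pi / 4, 3 * np.pi / 4, 50)
contour = origin + 60 * np.column_stack([np.cos(angles), np.sin(angles)])

polar = to_polar(contour, origin)
# In polar form the arc has constant radius, so an SSANOVA that models
# variance around a horizontal line is no longer mis-specified.
```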
An Implementation of Bayesian Adaptive Regression Splines (BARS) in C with S and R Wrappers
Directory of Open Access Journals (Sweden)
Garrick Wallstrom
2007-02-01
BARS (DiMatteo, Genovese, and Kass 2001) uses the powerful reversible-jump MCMC engine to perform spline-based generalized nonparametric regression. It has been shown to work well in terms of having small mean-squared error in many examples (smaller than known competitors), as well as producing visually appealing fits that are smooth (filtering out high-frequency noise) while adapting to sudden changes (retaining high-frequency signal). However, BARS is computationally intensive. The original implementation in S was too slow to be practical in certain situations, and was found to handle some data sets incorrectly. We have implemented BARS in C for the normal and Poisson cases, the latter being important in neurophysiological and other point-process applications. The C implementation includes all needed subroutines for fitting Poisson regression, manipulating B-splines (using code created by Bates and Venables), and finding starting values for Poisson regression (using code for density estimation created by Kooperberg). The code utilizes only freely available external libraries (LAPACK and BLAS) and is otherwise self-contained. We have also provided wrappers so that BARS can be used easily within S or R.
Directory of Open Access Journals (Sweden)
D. Reclik
2008-08-01
Purpose: The main purpose of this paper was to prepare a system that tests the use of the elastic band method for smoothing collision-free trajectories. The aided robot off-line programming system is based on NURBS and B-spline curves. Because the literature contains much information about the elastic band algorithm, the authors decided to compare these two methods. The most important criterion in robotics is having the smoothest possible robot trajectory, so NURBS curves (of C2 smoothness class) were used as the standard. Design/methodology/approach: A Pascal compiler was used for the research. All algorithms were coded in this programming language and compiled. Results were collected in a Microsoft Excel worksheet. Findings: The results show that calculations made with the B-spline method took less time than calculations based on elastic band curves. Moreover, the elastic band method gave the smoothest curves only in the geometric sense, which is less important here: the first and second derivatives are not continuous, which is the most important issue in the presented case. That is why the B-spline algorithm was found to be the better solution, as it takes less time and gives better-quality results. Research limitations/implications: An MS Windows application was created which generates geometrically smooth curves by marking the interpolation base points calculated by the collision-free movement planner. This application generates curves using both presented methods, B-spline and elastic band, and the two were compared with regard to standard deviation and variance. Practical implications: Because the elastic band algorithm takes much longer (three times longer than B-spline), it is not used in the final application. The authors used the B-spline method to produce smoother, optimized trajectories in the application for off-line collision-free robot programming. Originality/value: This is a new
Wahba, Grace
2004-01-01
Smoothing Spline ANOVA (SS-ANOVA) models in reproducing kernel Hilbert spaces (RKHS) provide a very general framework for data analysis, modeling and learning in a variety of fields. Discrete, noisy scattered, direct and indirect observations can be accommodated with multiple inputs and multiple possibly correlated outputs and a variety of meaningful structures. The purpose of this paper is to give a brief overview of the approach and describe and contrast a series of applications, while noti...
Directory of Open Access Journals (Sweden)
Saad Bakkali
2010-04-01
This paper presents a method that is able to filter out noise and suppress outliers of sampled real functions under fairly general conditions. The automatic optimal spline-smoothing approach (AOSSA) automatically determines how a cubic spline should be adjusted in a least-squares optimal sense from an a priori selection of the number of points defining the adjusting spline, but not their location on that curve. The method is fast and easily allows for selecting several knots, thereby adding desirable flexibility to the procedure. As an illustration, we apply the AOSSA method to a map of "disturbances" in Moroccan phosphate-deposit resistivity data. The AOSSA smoothing method is an efficient tool in interpreting geophysical potential field data and is particularly suitable for denoising, filtering, and analysing singularities in resistivity data. The AOSSA smoothing and filtering approach was found to be consistently useful when applied to modeling surface phosphate "disturbances."
Morrissey, Edward R; Juárez, Miguel A; Denby, Katherine J; Burroughs, Nigel J
2011-10-01
We propose a semiparametric Bayesian model, based on penalized splines, for the recovery of the time-invariant topology of a causal interaction network from longitudinal data. Our motivation is inference of gene regulatory networks from low-resolution microarray time series, where existence of nonlinear interactions is well known. Parenthood relations are mapped by augmenting the model with kinship indicators and providing these with either an overall or gene-wise hierarchical structure. Appropriate specification of the prior is crucial to control the flexibility of the splines, especially under circumstances of scarce data; thus, we provide an informative, proper prior. Substantive improvement in network inference over a linear model is demonstrated using synthetic data drawn from ordinary differential equation models and gene expression from an experimental data set of the Arabidopsis thaliana circadian rhythm.
Penalized Splines for Smooth Representation of High-dimensional Monte Carlo Datasets
Whitehorn, Nathan; Lafebre, Sven
2013-01-01
Detector response to a high-energy physics process is often estimated by Monte Carlo simulation. For purposes of data analysis, the results of this simulation are typically stored in large multi-dimensional histograms, which can quickly become both too large to easily store and manipulate and numerically problematic due to unfilled bins or interpolation artifacts. We describe here an application of the penalized spline technique to efficiently compute B-spline representations of such tables and discuss aspects of the resulting B-spline fits that simplify many common tasks in handling tabulated Monte Carlo data in high-energy physics analysis, in particular their use in maximum-likelihood fitting.
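A minimal sketch of the idea, fitting a smoothing B-spline to a binned Monte Carlo sample so that a compact knot/coefficient representation replaces the raw histogram table, can use SciPy's FITPACK wrappers. The bin layout and the smoothing-factor choice (set near the Poisson noise level of the counts) are illustrative assumptions, not the authors' penalized-spline machinery.

```python
import numpy as np
from scipy.interpolate import splev, splrep

rng = np.random.default_rng(7)
sample = rng.normal(0, 1, 50_000)             # toy "Monte Carlo" output
counts, edges = np.histogram(sample, bins=60, range=(-4, 4))
centers = 0.5 * (edges[:-1] + edges[1:])

# Smoothing B-spline through the binned counts; the smoothing factor s is
# set to roughly the total Poisson variance of the histogram.
tck = splrep(centers, counts, s=float(counts.sum()))

# The (knots, coefficients, degree) triple tck now stands in for the table
# and can be evaluated anywhere, e.g. inside a likelihood fit.
dense = splev(np.linspace(-3.5, 3.5, 500), tck)
```

Evaluating the spline instead of interpolating raw bins avoids the unfilled-bin and interpolation artifacts the abstract mentions.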
Bayesian Smoothing Algorithms in Partially Observed Markov Chains
Ait-el-Fquih, Boujemaa; Desbouvries, François
2006-11-01
Let x = {xn}n∈N be a hidden process, y = {yn}n∈N an observed process and r = {rn}n∈N some auxiliary process. We assume that t = {tn}n∈N with tn = (xn, rn, yn-1) is a (Triplet) Markov Chain (TMC). TMC are more general than Hidden Markov Chains (HMC) and yet enable the development of efficient restoration and parameter estimation algorithms. This paper is devoted to Bayesian smoothing algorithms for TMC. We first propose twelve algorithms for general TMC. In the Gaussian case, these smoothers reduce to a set of algorithms which include, among other solutions, extensions to TMC of classical Kalman-like smoothing algorithms (originally designed for HMC) such as the RTS algorithms, the Two-Filter algorithms or the Bryson and Frazier algorithm.
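Of the classical Gaussian smoothers this paper extends to TMC, the Rauch-Tung-Striebel (RTS) smoother is the easiest to sketch: a Kalman filter forward pass followed by a backward correction pass. The model below is a toy 1-D random walk observed in noise, not a triplet Markov chain, and all parameters are illustrative.

```python
import numpy as np

def rts_smoother(y, F, H, Q, R, x0, P0):
    """Kalman forward pass + Rauch-Tung-Striebel backward pass for the
    linear-Gaussian model x_n = F x_{n-1} + w_n, y_n = H x_n + v_n."""
    n, dx = len(y), len(x0)
    xf = np.zeros((n, dx)); Pf = np.zeros((n, dx, dx))   # filtered
    xp = np.zeros((n, dx)); Pp = np.zeros((n, dx, dx))   # predicted
    x, P = x0, P0
    for k in range(n):                       # forward (filtering) pass
        x, P = F @ x, F @ P @ F.T + Q        # predict
        xp[k], Pp[k] = x, P
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
        x = x + K @ (y[k] - H @ x)           # update
        P = P - K @ H @ P
        xf[k], Pf[k] = x, P
    xs = xf.copy()
    for k in range(n - 2, -1, -1):           # backward (RTS) pass
        G = Pf[k] @ F.T @ np.linalg.inv(Pp[k + 1])
        xs[k] = xf[k] + G @ (xs[k + 1] - xp[k + 1])
    return xs

# Toy 1-D random walk observed in noise.
rng = np.random.default_rng(4)
F = np.array([[1.0]]); H = np.array([[1.0]])
Q = np.array([[0.01]]); R = np.array([[1.0]])
truth = np.cumsum(rng.normal(0, 0.1, 100))
y = truth[:, None] + rng.normal(0, 1.0, (100, 1))
xs = rts_smoother(y, F, H, Q, R, np.zeros(1), np.eye(1))
```

Using all observations (past and future) is what distinguishes smoothing from filtering, and the TMC algorithms in the paper generalize exactly this structure.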
Data assimilation using Bayesian filters and B-spline geological models
Duan, Lian
2011-04-01
This paper proposes a new approach to problems of data assimilation, also known as history matching, of oilfield production data by adjustment of the location and sharpness of patterns of geological facies. Traditionally, this problem has been addressed using gradient based approaches with a level set parameterization of the geology. Gradient-based methods are robust, but computationally demanding with real-world reservoir problems and insufficient for reservoir management uncertainty assessment. Recently, the ensemble filter approach has been used to tackle this problem because of its high efficiency from the standpoint of implementation, computational cost, and performance. Incorporation of level set parameterization in this approach could further deal with the lack of differentiability with respect to facies type, but its practical implementation is based on some assumptions that are not easily satisfied in real problems. In this work, we propose to describe the geometry of the permeability field using B-spline curves. This transforms history matching of the discrete facies type to the estimation of continuous B-spline control points. As filtering scheme, we use the ensemble square-root filter (EnSRF). The efficacy of the EnSRF with the B-spline parameterization is investigated through three numerical experiments, in which the reservoir contains a curved channel, a disconnected channel or a 2-dimensional closed feature. It is found that the application of the proposed method to the problem of adjusting facies edges to match production data is relatively straightforward and provides statistical estimates of the distribution of geological facies and of the state of the reservoir.
Bayesian Penalized Spline Models for the Analysis of Spatio-Temporal Count Data
Bauer, Cici; Wakefield, Jon; Rue, Håvard; Self, Steve; Feng, Zijian; Wang, Yu
2016-01-01
In recent years, the availability of infectious disease counts in time and space has increased, and consequently there has been renewed interest in model formulation for such data. In this paper, we describe a model that was motivated by the need to analyze hand, foot and mouth disease (HFMD) surveillance data in China. The data are aggregated by geographical areas and by week, with the aims of the analysis being to gain insight into the space-time dynamics and to make short-term prediction to implement public health campaigns in those areas with a large predicted disease burden. The model we develop decomposes disease risk into marginal spatial and temporal components, and a space-time interaction piece. The latter is the crucial element, and we use a tensor product spline model with a Markov random field prior on the coefficients of the basis functions. The model can be formulated as a Gaussian Markov random field and so fast computation can be carried out using the integrated nested Laplace approximation (INLA) approach. A simulation study shows that the model can pick up complex space-time structure and our analysis of HFMD data in the central north region of China provides new insights into the dynamics of the disease. PMID:26530705
International Nuclear Information System (INIS)
With the development of an implantable radio transmitter system, direct measurement of cardiac autonomic nervous activities (CANAs) became possible for ambulatory animals for a couple of months. However, measured CANAs include not only CANA but also cardiac electric activity (CEA), which can affect the quantification of CANAs. In this study, we propose a novel CEA removal method using a moving standard deviation and a cubic smoothing spline. This method consists of two steps: detecting CEA segments and eliminating CEAs in the detected segments. Using implanted devices, we recorded stellate ganglion nerve activity (SGNA), vagal nerve activity (VNA) and superior left ganglionated plexi nerve activity (SLGPNA) directly from four ambulatory dogs. The CEA-removal performance of the proposed method was evaluated and compared with commonly used high-pass filtration (HPF) for various heart rates and CANA amplitudes. Results tested with simulated CEA and simulated true CANA revealed stable and excellent performance of the suggested method compared to the HPF method. The averaged relative error percentages of the proposed method were less than 0.67%, 0.65% and 1.76% for SGNA, VNA and SLGPNA, respectively. (paper)
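The two-step procedure described in this abstract (segment detection by moving standard deviation, then elimination via a cubic spline over the clean samples) can be sketched roughly as follows; the window length, threshold factor and function names are illustrative assumptions, not the authors' exact implementation:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def remove_artifact(signal, fs, win=0.02, k=3.0):
    """Two-step artifact removal sketch: (1) flag samples where a moving
    standard deviation exceeds k times its median, (2) drop the flagged
    segments and fill them with a cubic spline fitted to the remaining
    clean samples. `win` is the moving window length in seconds."""
    n = len(signal)
    w = max(1, int(win * fs))
    # moving standard deviation via a sliding window
    pad = np.pad(signal, (w // 2, w - w // 2 - 1), mode="edge")
    mov_std = np.array([pad[i:i + w].std() for i in range(n)])
    bad = mov_std > k * np.median(mov_std)
    good = ~bad
    t = np.arange(n)
    # cubic spline through the clean samples bridges the artifact gaps
    spline = CubicSpline(t[good], signal[good])
    out = signal.copy()
    out[bad] = spline(t[bad])
    return out, bad
```

A usage example: a sinusoidal "nerve activity" with one injected spike has the spike detected and replaced by the spline bridge, while clean samples pass through unchanged.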
Rate-optimal Bayesian intensity smoothing for inhomogeneous Poisson processes
E. Belitser; P. Serra; H. van Zanten
2015-01-01
We apply nonparametric Bayesian methods to study the problem of estimating the intensity function of an inhomogeneous Poisson process. To motivate our results we start by analyzing count data coming from a call center which we model as a Poisson process. This analysis is carried out using a certain
Knott, Gary D
2000-01-01
A spline is a thin flexible strip composed of a material such as bamboo or steel that can be bent to pass through or near given points in the plane, or in 3-space, in a smooth manner. Mechanical engineers and drafting specialists find such (physical) splines useful in designing and in drawing plans for a wide variety of objects, such as for hulls of boats or for the bodies of automobiles where smooth curves need to be specified. These days, physical splines are largely replaced by computer software that can compute the desired curves (with appropriate encouragement). The same mathematical ideas used for computing "spline" curves can be extended to allow us to compute "spline" surfaces. The application of these mathematical ideas is rather widespread. Spline functions are central to computer graphics disciplines. Spline curves and surfaces are used in computer graphics renderings for both real and imaginary objects. Computer-aided-design (CAD) systems depend on algorithms for computing spline func...
Energy Technology Data Exchange (ETDEWEB)
M Ali, M. K., E-mail: majidkhankhan@ymail.com, E-mail: eutoco@gmail.com; Ruslan, M. H., E-mail: majidkhankhan@ymail.com, E-mail: eutoco@gmail.com [Solar Energy Research Institute (SERI), Universiti Kebangsaan Malaysia, 43600 UKM Bangi, Selangor (Malaysia); Muthuvalu, M. S., E-mail: sudaram-@yahoo.com, E-mail: jumat@ums.edu.my; Wong, J., E-mail: sudaram-@yahoo.com, E-mail: jumat@ums.edu.my [Unit Penyelidikan Rumpai Laut (UPRL), Sekolah Sains dan Teknologi, Universiti Malaysia Sabah, 88400 Kota Kinabalu, Sabah (Malaysia); Sulaiman, J., E-mail: ysuhaimi@ums.edu.my, E-mail: hafidzruslan@eng.ukm.my; Yasir, S. Md., E-mail: ysuhaimi@ums.edu.my, E-mail: hafidzruslan@eng.ukm.my [Program Matematik dengan Ekonomi, Sekolah Sains dan Teknologi, Universiti Malaysia Sabah, 88400 Kota Kinabalu, Sabah (Malaysia)
2014-06-19
The solar drying experiment of seaweed using the Green V-Roof Hybrid Solar Drier (GVRHSD) was conducted in Semporna, Sabah under the meteorological conditions of Malaysia. Drying of sample seaweed in the GVRHSD reduced the moisture content from about 93.4% to 8.2% in 4 days at an average solar radiation of about 600 W/m{sup 2} and a mass flow rate of about 0.5 kg/s. Generally the plots of drying rate need more smoothing than the moisture content data, and special care is needed at low drying rates and moisture contents. The cubic spline (CS) is shown to be effective for moisture-time curves. The idea of this method consists of an approximation of the data by a CS regression having continuous first and second derivatives. The analytical differentiation of the spline regression permits the determination of the instantaneous drying rate. The method of minimization of the functional of average risk was used successfully to solve the problem, permitting the instantaneous rate to be obtained directly from the experimental data. The drying kinetics was fitted with six published exponential thin-layer drying models. The models were assessed using the coefficient of determination (R{sup 2}) and the root mean square error (RMSE). The result showed that the Two-Term model best describes the drying behavior. Besides that, smoothing the drying rate using the CS proved to be an effective method for moisture-time curves, providing good estimators as well as estimates of missing moisture content data of the seaweed Kappaphycus Striatum Variety Durian in the solar dryer under the conditions tested.
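The core idea of smoothing a moisture-time curve with a cubic spline and differentiating it analytically to obtain the instantaneous drying rate can be sketched with SciPy's smoothing spline; the data below are synthetic stand-ins for the seaweed measurements, and the smoothing factor is an illustrative choice:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Hypothetical moisture-content readings (%) at hourly intervals,
# an exponential decay plus measurement noise
t = np.arange(0.0, 10.0)                      # time, h
mc = 93.4 * np.exp(-0.3 * t) + np.random.default_rng(0).normal(0, 0.5, t.size)

# Fit a cubic smoothing spline; `s` budgets the sum of squared residuals
spl = UnivariateSpline(t, mc, k=3, s=t.size * 0.25)

# The instantaneous drying rate is the analytic first derivative
rate = spl.derivative()(t)
```

Because the spline is differentiated in closed form, the rate curve inherits the spline's smoothness instead of amplifying measurement noise as finite differences would.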
The impact of spatial scales and spatial smoothing on the outcome of bayesian spatial model.
Directory of Open Access Journals (Sweden)
Su Yun Kang
Discretization of a geographical region is quite common in spatial analysis. There have been few studies into the impact of different geographical scales on the outcome of spatial models for different spatial patterns. This study aims to investigate the impact of spatial scales and spatial smoothing on the outcomes of modelling spatial point-based data. Given a spatial point-based dataset (such as the occurrence of a disease), we study the geographical variation of residual disease risk using regular grid cells. The individual disease risk is modelled using a logistic model with the inclusion of spatially unstructured and/or spatially structured random effects. Three spatial smoothness priors for the spatially structured component are employed in modelling, namely an intrinsic Gaussian Markov random field, a second-order random walk on a lattice, and a Gaussian field with Matérn correlation function. We investigate how changes in grid cell size affect model outcomes under different spatial structures and different smoothness priors for the spatial component. A realistic example (the Humberside data) is analyzed, and a simulation study is described. Bayesian computation is carried out using an integrated nested Laplace approximation. The results suggest that the performance and predictive capacity of the spatial models improve as the grid cell size decreases for certain spatial structures. It also appears that different spatial smoothness priors should be applied for different patterns of point data.
Directory of Open Access Journals (Sweden)
Wei Zeng
2015-04-01
Conventional splines offer powerful means for modeling surfaces and volumes in three-dimensional Euclidean space. A one-dimensional quaternion spline has been applied for animation purposes, where the splines are defined to model a one-dimensional submanifold in the three-dimensional Lie group. Given two surfaces, all of the diffeomorphisms between them form an infinite-dimensional manifold, the so-called diffeomorphism space. In this work, we propose a novel scheme to model finite-dimensional submanifolds in the diffeomorphism space by generalizing conventional splines. By quasiconformal geometry, each diffeomorphism determines a Beltrami differential on the source surface. Inversely, the diffeomorphism is determined by its Beltrami differential with normalization conditions. Therefore, the diffeomorphism space has a one-to-one correspondence to the space of a special differential form. The convex combination of Beltrami differentials is still a Beltrami differential. Therefore, the conventional spline scheme can be generalized to the Beltrami differential space and, consequently, to the diffeomorphism space. Our experiments demonstrate the efficiency and efficacy of diffeomorphism splines. The diffeomorphism spline has many potential applications, such as surface registration, tracking and animation.
Comparing measures of model selection for penalized splines in Cox models
Malloy, Elizabeth J.; Spiegelman, Donna; Eisen, Ellen A
2009-01-01
This article presents an application and a simulation study of model fit criteria for selecting the optimal degree of smoothness for penalized splines in Cox models. The criteria considered were the Akaike information criterion, the corrected AIC, two formulations of the Bayesian information criterion, and a generalized cross-validation method. The estimated curves selected by the five methods were compared to each other in a study of rectal cancer mortality in autoworkers. In the simulation...
THE STRUCTURAL CHARACTERIZATION AND LOCALLY SUPPORTED BASES FOR BIVARIATE SUPER SPLINES
Institute of Scientific and Technical Information of China (English)
Zhi-qiang Xu; Ren-hong Wang
2004-01-01
Super splines are bivariate splines defined on triangulations, where the smoothness enforced at the vertices is larger than the smoothness enforced across the edges. In this paper, the smoothness conditions and conformality conditions for super splines are presented. Three locally supported super splines on type-1 triangulation are presented. Moreover, the criteria to select local bases are also given. By using locally supported super spline functions, a variation-diminishing operator is built. The approximation properties of the operator are also presented.
Schwarz and multilevel methods for quadratic spline collocation
Energy Technology Data Exchange (ETDEWEB)
Christara, C.C. [Univ. of Toronto, Ontario (Canada); Smith, B. [Univ. of California, Los Angeles, CA (United States)
1994-12-31
Smooth spline collocation methods offer an alternative to Galerkin finite element methods, as well as to Hermite spline collocation methods, for the solution of linear elliptic Partial Differential Equations (PDEs). Recently, spline collocation methods with optimal order of convergence have been developed for splines of certain degrees. Convergence proofs for smooth spline collocation methods are generally more difficult than for Galerkin finite elements or Hermite spline collocation, and they require stronger assumptions and more restrictions. However, numerical tests indicate that spline collocation methods are applicable to a wider class of problems than the analysis requires, and are very competitive with finite element methods with respect to efficiency. The authors will discuss Schwarz and multilevel methods for the solution of elliptic PDEs using quadratic spline collocation, and compare these with domain decomposition methods using substructuring. Numerical tests on a variety of parallel machines will also be presented. In addition, preliminary convergence analysis using Schwarz and/or maximum principle techniques will be presented.
Number systems, α-splines and refinement
Zube, Severinas
2004-12-01
This paper is concerned with smooth refinable functions on a plane relative to a complex scaling factor. Characteristic functions of certain self-affine tiles related to a given scaling factor are the simplest examples of such refinable functions. We study the smooth refinable functions obtained by a convolution power of such characteristic functions. Dahlke, Dahmen, and Latour obtained some explicit estimates for the smoothness of the resulting convolution products. In the case α=1+i, we prove better results. We introduce α-splines in two variables, which are linear combinations of shifted basic functions. We derive basic properties of α-splines and proceed with a detailed presentation of refinement methods. We illustrate the application of α-splines to subdivision with several examples. It turns out that α-splines produce well-known subdivision algorithms which are based on box splines: Doo-Sabin, Catmull-Clark, Loop, Midedge and some -subdivision schemes with good continuity. The main geometric ingredient in the definition of α-splines is the fundamental domain (a fractal set or a self-affine tile). The properties of the fractal obtained in number theory are important and necessary in order to determine two basic properties of α-splines: partition of unity and the refinement equation.
Owusu-Edusei, Kwame; Owens, Chantelle J
2009-01-01
Background Chlamydia continues to be the most prevalent disease in the United States. Effective spatial monitoring of chlamydia incidence is important for successful implementation of control and prevention programs. The objective of this study is to apply Bayesian smoothing and exploratory spatial data analysis (ESDA) methods to monitor Texas county-level chlamydia incidence rates by examining spatiotemporal patterns. We used county-level data on chlamydia incidence (for all ages, gender and races) from the National Electronic Telecommunications System for Surveillance (NETSS) for 2004 and 2005. Results Bayesian-smoothed chlamydia incidence rates were spatially dependent both in levels and in relative changes. Erath county had significantly (p < 0.05) higher rates (> 300 cases per 100,000 residents) than its contiguous neighbors (195 or less) in both years. Gaines county experienced the highest relative increase in smoothed rates (173%; from 139 to 379). The relative change in smoothed chlamydia rates in Newton county was significantly (p < 0.05) higher than its contiguous neighbors. Conclusion Bayesian smoothing and ESDA methods can assist programs in using chlamydia surveillance data to identify outliers, as well as relevant changes in chlamydia incidence in specific geographic units. Secondly, it may also indirectly help in assessing existing differences and changes in chlamydia surveillance systems over time. PMID:19245686
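As a rough illustration of the kind of rate smoothing described (a generic empirical Bayes Poisson-Gamma smoother, not the study's actual model), each area's crude rate is shrunk toward the global rate, with small-population areas shrunk the most:

```python
import numpy as np

def eb_smooth(cases, pop):
    """Empirical Bayes (Poisson-Gamma) rate smoothing sketch: shrink each
    area's crude rate toward the global rate, with more shrinkage for
    small populations. Method-of-moments estimate of the prior variance;
    illustrative only."""
    cases, pop = np.asarray(cases, float), np.asarray(pop, float)
    crude = cases / pop
    m = cases.sum() / pop.sum()                  # global rate
    # method-of-moments estimate of between-area rate variance
    w = pop / pop.sum()
    var = np.maximum((w * (crude - m) ** 2).sum() - m / pop.mean(), 0.0)
    shrink = var / (var + m / pop)               # per-area weight on the data
    return shrink * crude + (1 - shrink) * m
```

A tiny county with 3 cases in 100 residents (crude rate 0.03) is pulled close to the global rate, while a large county's rate is left nearly unchanged.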
Image edges detection through B-Spline filters
International Nuclear Information System (INIS)
B-Spline signal processing was used to detect the edges of a digital image. This technique is based upon processing the image in the Spline transform domain, instead of in the space domain (classical processing). The transformation to the Spline transform domain means finding the real coefficients that make it possible to interpolate the grey levels of the original image with a B-Spline polynomial. There exist basically two methods of carrying out this interpolation, which gives rise to two different Spline transforms: an exact interpolation of the grey values (direct Spline transform), and an approximated interpolation (smoothing Spline transform). The latter results in a higher smoothness of the grey distribution function defined by the Spline transform coefficients, and is carried out with the aim of obtaining an edge detection algorithm with higher immunity to noise. Finally the transformed image was processed in order to detect the edges of the original image (the gradient method was used), and the results of the three methods (classical, direct Spline transform and smoothing Spline transform) were compared. As expected, the smoothing Spline transform technique produced a detection algorithm more immune to external noise. On the other hand, the direct Spline transform technique emphasizes the edges more, even more than the classical method. As far as computing time is concerned, the classical method is clearly the fastest one, and may be applied whenever the presence of noise is not important and whenever edges with high detail are not required in the final image. (author). 9 refs., 17 figs., 1 tab
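The spline-transform-then-gradient pipeline can be sketched in a few lines with SciPy's bivariate spline: `s=0` corresponds to the exact (direct) transform and `s>0` to the smoothing transform. The function below is an illustrative reconstruction, not the authors' code:

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

def spline_edges(img, s=0.0):
    """Edge map via a bivariate spline fit of the grey levels followed by
    the gradient method. s=0 gives exact interpolation (direct transform);
    s>0 gives the smoothing transform, trading edge sharpness for noise
    immunity."""
    ny, nx = img.shape
    y, x = np.arange(ny), np.arange(nx)
    spl = RectBivariateSpline(y, x, img.astype(float), s=s)
    gy = spl(y, x, dx=1)   # partial derivative along rows, on the grid
    gx = spl(y, x, dy=1)   # partial derivative along columns
    return np.hypot(gx, gy)  # gradient magnitude as the edge map
```

On a synthetic step image, the gradient magnitude peaks at the step, recovering the edge location.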
Average of Distribution and Remarks on Box-Splines
Institute of Scientific and Technical Information of China (English)
LI Yue-sheng
2001-01-01
A class of generalized moving average operators is introduced, and the integral representations of an average function are provided. It has been shown that the average of Dirac δ-distribution is just the well known box-spline. Some remarks on box-splines, such as their smoothness and the corresponding partition of unity, are made. The factorization of average operators is derived. Then, the subdivision algorithm for efficient computing of box-splines and their linear combinations follows.
Directory of Open Access Journals (Sweden)
Hannu Olkkonen
2013-01-01
In this work we introduce a new family of splines, termed gamma splines, for continuous signal approximation and multiresolution analysis. The gamma splines are born by -times convolution of the exponential by itself. We study the properties of the discrete gamma splines in signal interpolation and approximation. We prove that the gamma splines obey the two-scale equation based on the polyphase decomposition, which allows us to introduce the shift-invariant gamma spline wavelet transform for tree-structured subscale analysis of asymmetric signal waveforms and for systems with asymmetric impulse response. Especially we consider the applications in biomedical signal analysis (EEG, ECG, and EMG). Finally, we discuss the suitability of gamma spline signal processing in an embedded VLSI environment.
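The construction of a gamma spline as a repeated convolution of a one-sided exponential with itself can be sketched numerically; the sampling step and truncation length below are arbitrary choices, and the function name is illustrative:

```python
import numpy as np

def gamma_spline(n, lam=1.0, dt=0.01, T=20.0):
    """Discrete gamma spline sketch: convolve a sampled one-sided
    exponential with itself n times. The result is proportional to the
    gamma density t^(n-1) exp(-lam t), asymmetric by construction."""
    t = np.arange(0.0, T, dt)
    e = np.exp(-lam * t)
    g = e.copy()
    for _ in range(n - 1):
        g = np.convolve(g, e)[: t.size] * dt    # truncate to the support
    return t, g / (g.sum() * dt)                # normalize to unit area
```

For n = 3 and lam = 1 the peak sits near t = (n-1)/lam = 2, matching the mode of the gamma density, and the asymmetry that motivates the biomedical applications is visible in the long right tail.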
International Nuclear Information System (INIS)
We present, in this paper, a new unsupervised method for joint image super-resolution and separation between smooth and point sources. For this purpose, we propose a Bayesian approach with a Markovian model for the smooth part and Student's t-distribution for point sources. All model and noise parameters are considered unknown and should be estimated jointly with the images. However, joint estimators (joint MAP or posterior mean) are intractable and an approximation is needed. Therefore, a new gradient-like variational Bayesian method is applied to approximate the true posterior by a free-form separable distribution. A parametric form is obtained by approximating marginals, but with form parameters that are mutually dependent. Their optimal values are achieved by iterating them till convergence. The method was tested on model-generated data and a real dataset from the Herschel space observatory. (paper)
SOME RECURRENCE FORMULAS FOR BOX SPLINES AND CONE SPLINES
Institute of Scientific and Technical Information of China (English)
Patrick J. Van Fleet
2004-01-01
A degree elevation formula for multivariate simplex splines was given by Micchelli[6] and extended to hold for multivariate Dirichlet splines in [8]. We report similar formulae for multivariate cone splines and box splines. To this end, we utilize a relation due to Dahmen and Micchelli[4] that connects box splines and cone splines and a degree reduction formula given by Cohen, Lyche, and Riesenfeld in [2].
LOCALLY REFINED SPLINES REPRESENTATION FOR GEOSPATIAL BIG DATA
Directory of Open Access Journals (Sweden)
T. Dokken
2015-08-01
When viewed from a distance, large parts of the topography of landmasses and the bathymetry of the sea and ocean floor can be regarded as a smooth background with local features. Consequently a digital elevation model combining a compact smooth representation of the background with locally added features has the potential of providing a compact and accurate representation for topography and bathymetry. The recent introduction of Locally Refined B-Splines (LR B-splines) allows the granularity of spline representations to be locally adapted to the complexity of the smooth shape approximated. This allows few degrees of freedom to be used in areas with little variation, while adding extra degrees of freedom in areas in need of more modelling flexibility. In the EU FP7 Integrating Project IQmulus we exploit LR B-splines for approximating large point clouds representing the bathymetry of the smooth sea and ocean floor. A drastic reduction in the bulk of the data representation, compared to the size of the input point clouds, is demonstrated. The representation is very well suited for exploiting the power of GPUs for visualization, as the spline format is transferred to the GPU and the triangulation needed for the visualization is generated on the GPU according to the viewing parameters. The LR B-splines are interoperable with other elevation model representations such as LIDAR data, raster representations and triangulated irregular networks, as these can be used as input to the LR B-spline approximation algorithms. Output to these formats can be generated from the LR B-spline applications according to the resolution criteria required. The spline models are well suited for change detection, as new sensor data can efficiently be compared to the compact LR B-spline representation.
Color management with a hammer: the B-spline fitter
Bell, Ian E.; Liu, Bonny H. P.
2003-01-01
To paraphrase Abraham Maslow: If the only tool you have is a hammer, every problem looks like a nail. We have a B-spline fitter customized for 3D color data, and many problems in color management can be solved with this tool. Whereas color devices were once modeled with extensive measurement, look-up tables and trilinear interpolation, recent improvements in hardware have made B-spline models an affordable alternative. Such device characterizations require fewer color measurements than piecewise linear models, and have uses beyond simple interpolation. A B-spline fitter, for example, can act as a filter to remove noise from measurements, leaving a model with guaranteed smoothness. Inversion of the device model can then be carried out consistently and efficiently, as the spline model is well behaved and its derivatives easily computed. Spline-based algorithms also exist for gamut mapping, the composition of maps, and the extrapolation of a gamut. Trilinear interpolation---a degree-one spline---can still be used after nonlinear spline smoothing for high-speed evaluation with robust convergence. Using data from several color devices, this paper examines the use of B-splines as a generic tool for modeling devices and mapping one gamut to another, and concludes with applications to high-dimensional and spectral data.
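The noise-filtering role of a B-spline fitter can be illustrated in one dimension with SciPy's smoothing B-spline; the tone-curve data here are synthetic stand-ins, not measurements from a real color device, and the smoothing factor is chosen from the assumed noise level:

```python
import numpy as np
from scipy.interpolate import splrep, splev

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 50)
true = x ** 2.2                       # stand-in for a device's tone curve
noisy = true + rng.normal(0, 0.02, x.size)

# A smoothing B-spline acts as a filter: s budgets the squared residuals
# (here m * sigma^2), so the fit tracks the curve but not the noise.
tck = splrep(x, noisy, k=3, s=50 * 0.02 ** 2)
fitted = splev(x, tck)
```

The fitted curve lands closer to the underlying tone curve than the raw measurements do, which is exactly the guaranteed-smoothness filtering the abstract describes; the same spline coefficients can then be differentiated or inverted cheaply.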
Institute of Scientific and Technical Information of China (English)
QIN GuoYou; ZHU ZhongYi
2009-01-01
In this paper, we study the local asymptotic behavior of the regression spline estimator in the framework of marginal semiparametric model. Similarly to Zhu, Fung and He (2008), we give explicit expression for the asymptotic bias of regression spline estimator for nonparametric function f. Our results also show that the asymptotic bias of the regression spline estimator does not depend on the working covariance matrix, which distinguishes the regression splines from the smoothing splines and the seemingly unrelated kernel. To understand the local bias result of the regression spline estimator, we show that the regression spline estimator can be obtained iteratively by applying the standard weighted least squares regression spline estimator to pseudo-observations. At each iteration, the bias of the estimator is unchanged and only the variance is updated.
Institute of Scientific and Technical Information of China (English)
2009-01-01
In this paper, we study the local asymptotic behavior of the regression spline estimator in the framework of marginal semiparametric model. Similarly to Zhu, Fung and He (2008), we give explicit expression for the asymptotic bias of regression spline estimator for nonparametric function f. Our results also show that the asymptotic bias of the regression spline estimator does not depend on the working covariance matrix, which distinguishes the regression splines from the smoothing splines and the seemingly unrelated kernel. To understand the local bias result of the regression spline estimator, we show that the regression spline estimator can be obtained iteratively by applying the standard weighted least squares regression spline estimator to pseudo-observations. At each iteration, the bias of the estimator is unchanged and only the variance is updated.
A B-spline active contour model based on finite element method
Institute of Scientific and Technical Information of China (English)
(no author listed)
2003-01-01
A B-spline active contour model based on the finite element method is presented, which combines the advantages of a B-spline active contour, namely its fewer parameters and its smoothness, with the reduced computational complexity and better numerical stability that result from the finite element method. In this model, a cubic B-spline segment is taken as an element, and the finite element method is adopted to solve the energy minimization problem of the B-spline active contour, thus implementing image segmentation. Experimental results verify that this method is efficient for the B-spline active contour, attaining stable, accurate and faster convergence.
Multivariate Spline Algorithms for CAGD
Boehm, W.
1985-01-01
Two special polyhedra present themselves for the definition of B-splines: a simplex S and a box or parallelepiped B, where the edges of S project into an irregular grid, while the edges of B project into the edges of a regular grid. More general splines may be found by forming linear combinations of these B-splines, where the three-dimensional coefficients are called the spline control points. Univariate splines are simplex splines, where s = 1, whereas splines over a regular triangular grid are box splines, where s = 2. Two simple facts underlie the construction of B-splines: (1) any face of a simplex or a box is again a simplex or box, but of lower dimension; and (2) any simplex or box can be easily subdivided into smaller simplices or boxes. The first fact gives a geometric approach to Mansfield-like recursion formulas that express a B-spline in terms of B-splines of lower order, where the coefficients depend on x. By repeated recursion, the B-spline can be expressed in B-splines of order 1, i.e., piecewise constants. In the case of a simplex spline, the second fact gives a so-called insertion algorithm that constructs the new control points if an additional knot is inserted.
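In the univariate case the Mansfield-like recursion mentioned above is the Cox-de Boor recursion: an order-k B-spline is expressed through two order k-1 B-splines with x-dependent coefficients, bottoming out at piecewise constants. A minimal sketch:

```python
def bspline_basis(i, k, t, knots):
    """Cox-de Boor recursion: the order-k (degree k-1) B-spline N_{i,k}
    at t, written as a combination of two order k-1 B-splines with
    x-dependent coefficients; order 1 is the piecewise constant."""
    if k == 1:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = right = 0.0
    d1 = knots[i + k - 1] - knots[i]
    if d1 > 0:
        left = (t - knots[i]) / d1 * bspline_basis(i, k - 1, t, knots)
    d2 = knots[i + k] - knots[i + 1]
    if d2 > 0:
        right = (knots[i + k] - t) / d2 * bspline_basis(i + 1, k - 1, t, knots)
    return left + right
```

On uniform knots [0, 1, 2, 3, 4] the cubic (order 4) B-spline evaluates to 2/3 at its center, and over interior intervals the basis functions sum to one (partition of unity).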
Single authentication: exposing weighted splining artifacts
Ciptasari, Rimba W.
2016-05-01
A common form of manipulation is to combine parts of an image fragment into another, different image, either to remove or to blend the objects. Inspired by this situation, we propose a single authentication technique for detecting traces of the weighted average splining technique. In this paper, we assume that an image composite could be created by joining two images so that the edge between them is imperceptible. The weighted average technique is constructed from overlapped images, so that it is possible to compute the grey level value of points within a transition zone. This approach works on the assumption that although the splining process leaves the transition zone smooth, it may nevertheless alter the underlying statistics of an image. In other words, it introduces specific correlations into the image. The proposed idea for identifying these correlations is to generate an original model of both weighting functions, the left and right functions, as references to their synthetic models. The overall process of the authentication is divided into two main stages: pixel predictive coding and weighting function estimation. In the former stage, the set of intensity pairs {Il, Ir} is computed by exploiting a pixel extrapolation technique. The least-squares estimation method is then employed to yield the weighting coefficients. We show the efficacy of the proposed scheme in revealing the splining artifacts. We believe that this is the first work that exposes the image splining artifact as evidence of digital tampering.
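If the composite in the transition zone is modeled as Ic = w*Il + (1-w)*Ir, the weighting function can be recovered column-wise by least squares. This is a simplified sketch of the estimation stage only, with hypothetical array shapes, not the paper's full predictive-coding pipeline:

```python
import numpy as np

def estimate_weights(Il, Ir, Ic):
    """Least-squares estimate of the splining weight per transition-zone
    column, under the model Ic = w * Il + (1 - w) * Ir. Il, Ir, Ic are
    (rows, cols) arrays of left-source, right-source and composite
    intensities over the zone. Since Ic - Ir = w * (Il - Ir), each
    column's w is the 1-D least-squares slope."""
    num = ((Ic - Ir) * (Il - Ir)).sum(axis=0)
    den = ((Il - Ir) ** 2).sum(axis=0)
    return num / den
```

With a noise-free composite built from a linear weighting ramp, the estimator recovers the ramp exactly; deviations of the recovered curve from a plausible weighting function are what flag tampering.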
THE INSTABILITY DEGREE IN THE DIMENSION OF SPACES OF BIVARIATE SPLINES
Institute of Scientific and Technical Information of China (English)
Zhiqiang Xu; Renhong Wang
2002-01-01
In this paper, the dimension of spaces of bivariate splines with degree less than 2r and smoothness order r on the Morgan-Scott triangulation is considered. The concept of the instability degree in the dimension of spaces of bivariate splines is presented. The results in the paper lead us to conjecture that the instability degree in the dimension of spaces of bivariate splines is infinite.
How to fly an aircraft with control theory and splines
Karlsson, Anders
1994-01-01
When trying to fly an aircraft as smoothly as possible, it is a good idea to use the derivatives of the pilot command instead of the actual control. This idea was implemented with splines and control theory in a system that models an aircraft. Computer calculations in Matlab show that it is impossible to obtain sufficiently smooth control signals this way. This is because the splines try to approximate not only the test function but also its derivatives. Perfect tracking is achieved, but at the price of very peaky control signals and accelerations.
Adaptive Parametrization of Multivariate B-splines for Image Registration
DEFF Research Database (Denmark)
Hansen, Michael Sass; Glocker, Benjamin; Navab, Nassir;
2008-01-01
We present an adaptive parametrization scheme for dynamic mesh refinement in the application of parametric image registration. The scheme is based on a refinement measure ensuring that the control points give an efficient representation of the warp fields, in terms of minimizing the registration cost function. In the current work we introduce multivariate B-splines as a novel alternative to the widely used tensor B-splines, enabling us to make efficient use of the derived measure. The multivariate B-splines of order n are C^(n-1) smooth and are based on Delaunay configurations of arbitrary 2D or 3D point sets, whereas the knots of tensor B-splines must reside on a regular grid. In contrast, by efficient, non-constrained placement of the knots, the multivariate B-splines are shown to give a good representation of inhomogeneous objects in natural settings. The wide applicability of the method is illustrated through its application to medical data.
International Nuclear Information System (INIS)
1 - Description of program or function: The three programs SPLPKG, WFCMPR, and WFAPPX provide the capability for interactively generating, comparing and approximating Wilson-Fowler splines. The Wilson-Fowler spline is widely used in Computer Aided Design and Manufacturing (CAD/CAM) systems. It is favored for many applications because it produces a smooth, low-curvature fit to planar data points. Program SPLPKG generates a Wilson-Fowler spline passing through given nodes (with given end conditions) and also generates a piecewise linear approximation to that spline within a user-defined tolerance. The program may be used to generate a 'desired' spline against which to compare other splines generated by CAD/CAM systems. It may also be used to generate an acceptable approximation to a desired spline in the event that an acceptable spline cannot be generated by the receiving CAD/CAM system. SPLPKG writes an IGES file of points evaluated on the spline and/or a file containing the spline description. Program WFCMPR computes the maximum difference between two Wilson-Fowler splines and may be used to verify the spline recomputed by a receiving system. It compares two Wilson-Fowler splines with common nodes and reports the maximum distance between curves (measured perpendicular to segments) and the maximum difference of their tangents (or normals), both computed along the entire length of the splines. Program WFAPPX computes the maximum difference between a Wilson-Fowler spline and a piecewise linear curve. It may be used to accept or reject a proposed approximation to a desired Wilson-Fowler spline, even if the origin of the approximation is unknown. The maximum deviation between these two curves, and the parameter value on the spline where it occurs, are reported. 2 - Restrictions on the complexity of the problem - Maxima of: 1600 evaluation points (SPLPKG), 1000 evaluation points (WFAPPX), 1000 linear curve breakpoints (WFAPPX), 100 spline nodes
Directory of Open Access Journals (Sweden)
S. Abhishek
2016-07-01
It is well understood that in any data acquisition system, reducing the amount of data reduces time and energy, but the major trade-off is the quality of the outcome: normally, the less data sensed, the lower the quality. Compressed Sensing (CS) offers a solution for sampling below the Nyquist rate. The challenging problem of increasing reconstruction quality with fewer samples from an unprocessed data set is addressed here by the use of representative coordinates selected from splines of different orders. We have made a detailed comparison with 10 orthogonal and 6 biorthogonal wavelets on two sets of data from the MIT Arrhythmia database, and our results show that the spline coordinates work better than the wavelets. The generation of two new types of splines, exponential and double exponential, is also described. We believe this is one of the first attempts at Compressed Sensing-based ECG reconstruction using raw data.
Some extremal properties of multivariate polynomial splines in the metric Lp (Rd )
Institute of Scientific and Technical Information of China (English)
刘永平; 许贵桥
2001-01-01
We constructed a kind of continuous multivariate spline operator as an approximation tool for multivariate functions on B^d, instead of the usual multivariate cardinal interpolation operators of splines, and obtained the approximation error of this kind of spline operator. Meanwhile, from these results we also obtained that the spaces of multivariate polynomial splines are weakly asymptotically optimal for the Kolmogorov widths and the linear widths of some anisotropic Sobolev classes of smooth functions on B^d in the metric Lp(B^d).
Connecting the Dots Parametrically: An Alternative to Cubic Splines.
Hildebrand, Wilbur J.
1990-01-01
Discusses a method of cubic splines to determine a curve through a series of points and a second method for obtaining parametric equations for a smooth curve that passes through a sequence of points. Procedures for determining the curves and results of each of the methods are compared. (YP)
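The parametric alternative sketched in the abstract can be illustrated with chord-length parametrization: assign each point a parameter value from the cumulative chord length, then fit one cubic spline per coordinate against that common parameter. A minimal sketch using SciPy's CubicSpline (the article's exact construction may differ):

```python
import numpy as np
from scipy.interpolate import CubicSpline

def parametric_spline(points):
    """Smooth parametric curve through a sequence of 2D points:
    chord-length parametrization plus one cubic spline per coordinate."""
    pts = np.asarray(points, dtype=float)
    d = np.linalg.norm(np.diff(pts, axis=0), axis=1)   # chord lengths
    t = np.concatenate(([0.0], np.cumsum(d)))          # curve parameter
    # axis=0 fits a spline for each coordinate against the shared parameter t
    return CubicSpline(t, pts, axis=0), t

pts = [(0, 0), (1, 2), (3, 3), (4, 1)]
curve, t = parametric_spline(pts)
```

Unlike a function-form spline y = f(x), this parametric curve can double back or be vertical, which is the point of the parametric approach.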
Spline screw payload fastening system
Vranish, John M. (Inventor)
1993-01-01
A system for coupling an orbital replacement unit (ORU) to a space station structure via the actions of a robot and/or astronaut is described. This system provides mechanical and electrical connections both between the ORU and the space station structure and between the ORU and the robot/astronaut hand tool. Alignment and timing features ensure safe, sure handling and precision coupling. This includes a first female type spline connector selectively located on the space station structure, a male type spline connector positioned on the orbital replacement unit so as to mate with and connect to the first female type spline connector, and a second female type spline connector located on the orbital replacement unit. A compliant drive rod interconnects the second female type spline connector and the male type spline connector. A robotic special end effector is used for mating with and driving the second female type spline connector. Also included are alignment tabs exteriorly located on the orbital replacement unit for berthing with the space station structure. The first and second female type spline connectors each include a threaded bolt member having a captured nut member located thereon which can translate up and down the bolt but is constrained from rotation thereabout, the nut member having a mounting surface with at least one first type electrical connector located on the mounting surface for translating with the nut member. At least one complementary second type electrical connector on the orbital replacement unit mates with at least one first type electrical connector on the mounting surface of the nut member. When the driver on the robotic end effector mates with the second female type spline connector and rotates, the male type spline connector and the first female type spline connector lock together, the driver and the second female type spline connector lock together, and the nut members translate up the threaded bolt members carrying the
Generalized fairing algorithm of parametric cubic splines
Institute of Scientific and Technical Information of China (English)
WANG Yuan-jun; CAO Yuan
2006-01-01
Kjellander has reported an algorithm for fairing uniform parametric cubic splines. Poliakoff extended Kjellander's algorithm to the non-uniform case. However, they merely changed the bad point's position and neglected the smoothing of the tangent at the bad point. In this paper, we present a fairing algorithm that changes both the point's position and its corresponding tangent vector. The new algorithm possesses the minimum-energy property. We also prove that Poliakoff's fairing algorithm is a special case of our fairing algorithm. Several fairing examples are given in this paper.
Spline techniques for magnetic fields
International Nuclear Information System (INIS)
This report is an overview of B-spline techniques, oriented toward magnetic field computation. These techniques form a powerful mathematical approximating method for many physics and engineering calculations. In section 1, the concept of a polynomial spline is introduced. Section 2 shows how a particular spline with well-chosen properties, the B-spline, can be used to build any spline. In section 3, the description of how to solve a simple spline approximation problem is completed, and some practical examples of using splines are shown. All these sections deal exclusively with scalar functions of one variable, for simplicity. Section 4 is partly a digression. Techniques that are not B-spline techniques, but are closely related, are covered. These methods are not needed for what follows, until the last section on errors. Sections 5, 6, and 7 form a second group which works toward the final goal of using B-splines to approximate a magnetic field. Section 5 demonstrates how to approximate a scalar function of many variables. The necessary mathematics is completed in section 6, where the problems of approximating a vector function in general, and a magnetic field in particular, are examined. Finally, some algorithms and data organization are shown in section 7. Section 8 deals with error analysis.
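The kind of simple one-variable spline approximation problem described in sections 1-3 can be sketched as a least-squares cubic B-spline fit. The sketch below uses SciPy's make_lsq_spline; the knot layout and test function are illustrative assumptions, not the report's algorithm:

```python
import numpy as np
from scipy.interpolate import make_lsq_spline

# least-squares cubic B-spline fit to noisy samples of a smooth function
rng = np.random.default_rng(0)
x = np.linspace(0.0, 2.0 * np.pi, 200)
y = np.sin(x) + 0.05 * rng.standard_normal(x.size)

k = 3                                          # cubic
t_int = np.linspace(0.0, 2.0 * np.pi, 9)[1:-1]  # 7 interior knots
# clamped knot vector: boundary knots repeated k+1 times
t = np.r_[[x[0]] * (k + 1), t_int, [x[-1]] * (k + 1)]
spl = make_lsq_spline(x, y, t, k)

err = np.max(np.abs(spl(x) - np.sin(x)))       # error against the noise-free truth
```

The number and placement of interior knots control the bias-variance trade-off; the report's later sections extend the same idea to several variables and to vector fields.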
Institute of Scientific and Technical Information of China (English)
吴泽福
2012-01-01
Based on a comparison of basic static estimation methods for the term structure of interest rates (TSIR), we improved the B-spline function estimation method with respect to the estimation programme, the choice of the number of nodes, and node placement design. To overcome the subjectivity of B-spline node distribution and the C2 smoothness requirement on the discount function, we introduced a negative exponential smoothing cubic L1-spline optimization technique, changing the minimization of the estimation error from a sum of squares to an absolute-value criterion and minimizing the volatility of the discount function, so as to increase estimation reliability and the ability to predict structural breaks in short-term interest rate volatility, preserve the advantage of depicting the long-term interest rate volatility trend, and reduce excessive volatility of the discount function.
Control theoretic splines optimal control, statistical, and path planning
Egerstedt, Magnus
2010-01-01
Splines, both interpolatory and smoothing, have a long and rich history that has largely been application driven. This book unifies these constructions in a comprehensive and accessible way, drawing from the latest methods and applications to show how they arise naturally in the theory of linear control systems. Magnus Egerstedt and Clyde Martin are leading innovators in the use of control theoretic splines to bring together many diverse applications within a common framework. In this book, they begin with a series of problems ranging from path planning to statistics to approximation.
Piecewise linear regression splines with hyperbolic covariates
International Nuclear Information System (INIS)
Consider the problem of fitting a curve to data that exhibit a multiphase linear response with smooth transitions between phases. We propose substituting hyperbolas as covariates in piecewise linear regression splines to obtain curves that are smoothly joined. The method provides an intuitive and easy way to extend the two-phase linear hyperbolic response model of Griffiths and Miller and Watts and Bacon to accommodate more than two linear segments. The resulting regression spline with hyperbolic covariates may be fit by nonlinear regression methods to estimate the degree of curvature between adjoining linear segments. The added complexity of fitting nonlinear, as opposed to linear, regression models is not great. The extra effort is particularly worthwhile when investigators are unwilling to assume that the slope of the response changes abruptly at the join points. We can also estimate the join points (the values of the abscissas where the linear segments would intersect if extrapolated) if their number and approximate locations may be presumed known. An example using data on changing age at menarche in a cohort of Japanese women illustrates the use of the method for exploratory data analysis. (author)
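The smoothed broken-stick idea above can be sketched by replacing the kink |x - c| of a two-phase linear model with the hyperbola sqrt((x - c)^2 + gamma^2) and fitting by nonlinear least squares. The model function and parametrization below are our own illustration, not the authors' exact formulation:

```python
import numpy as np
from scipy.optimize import curve_fit

def two_phase(x, a, b, d, c, gamma):
    """Two linear phases joined smoothly at c: the hyperbola
    sqrt((x-c)^2 + gamma^2) replaces |x-c|; gamma controls the curvature
    at the join (gamma -> 0 recovers an abrupt change of slope)."""
    return a + b * x + d * np.sqrt((x - c) ** 2 + gamma ** 2)

rng = np.random.default_rng(1)
x = np.linspace(0.0, 10.0, 120)
y_true = two_phase(x, 1.0, 0.5, 0.8, 5.0, 0.3)
y = y_true + 0.05 * rng.standard_normal(x.size)

# nonlinear least squares estimates the join point c and curvature gamma
popt, _ = curve_fit(two_phase, x, y, p0=[0.0, 0.0, 1.0, 4.0, 1.0])
mse = float(np.mean((two_phase(x, *popt) - y_true) ** 2))
```

As the abstract notes, the extra cost over linear regression is modest, and gamma quantifies how gradually the slope changes at the join.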
Free-Form Deformation with Rational DMS-Spline Volumes
Institute of Scientific and Technical Information of China (English)
Gang Xu; Guo-Zhao Wang; Xiao-Diao Chen
2008-01-01
In this paper, we propose a novel free-form deformation (FFD) technique, RDMS-FFD (Rational DMS-FFD), based on rational DMS-spline volumes. RDMS-FFD inherits some good properties of rational DMS-spline volumes and combines more deformation techniques than previous FFD methods in a consistent framework, such as local deformation, control lattices of arbitrary topology, smooth deformation, multiresolution deformation, and direct manipulation of deformation. We first introduce the rational DMS-spline volume by directly generalizing previous results related to DMS-splines. How to generate a tetrahedral domain that approximates the shape of the object to be deformed is also introduced in this paper. Unlike traditional FFD techniques, we manipulate the vertices of the tetrahedral domain to achieve deformation results. Our system demonstrates that RDMS-FFD is powerful and intuitive in geometric modeling.
Penalized Spline: a General Robust Trajectory Model for ZIYUAN-3 Satellite
Pan, H.; Zou, Z.
2016-06-01
Owing to the dynamic imaging system, the trajectory model plays a very important role in the geometric processing of high-resolution satellite imagery. However, establishing a trajectory model is difficult when only discrete and noisy data are available. In this manuscript, we propose a general robust trajectory model, the penalized spline model, which fits trajectory data well and smooths noise. The penalty parameter λ, which controls the trade-off between smoothness and fitting accuracy, can be estimated by generalized cross-validation. Five other trajectory models, including third-order polynomials, Chebyshev polynomials, linear interpolation, Lagrange interpolation and cubic splines, are compared with the penalized spline model. Both the sophisticated ephemeris and the on-board ephemeris are used to compare the orbit models. The penalized spline model smooths part of the noise, and accuracy decreases as the orbit length increases. The band-to-band misregistration of ZiYuan-3 Dengfeng and Faizabad multispectral images is used to evaluate the proposed method. With the Dengfeng dataset, the third-order polynomials and Chebyshev approximation could not model the oscillation, introducing misregistration of 0.57 pixels in the across-track direction and 0.33 pixels in the along-track direction. With the Faizabad dataset, the linear interpolation, Lagrange interpolation and cubic spline models suffer from noise, introducing larger misregistration than the approximation models. Experimental results suggest the penalized spline model can model the oscillation and smooth noise.
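Penalized spline smoothing with GCV-selected λ can be sketched in the P-spline style of Eilers and Marx: a B-spline basis, a difference penalty on the coefficients, and a grid search minimizing the GCV score. This is analogous in spirit to, not identical with, the trajectory model above (basis size, penalty order, and λ grid are our assumptions):

```python
import numpy as np
from scipy.interpolate import BSpline

def pspline_gcv(x, y, n_basis=20, lambdas=np.logspace(-4, 4, 50)):
    """Penalized cubic B-spline regression with the penalty weight
    chosen by generalized cross-validation (GCV)."""
    k = 3
    xl, xr = x.min(), x.max()
    # clamped knot vector giving exactly n_basis cubic basis functions
    t = np.r_[[xl] * k, np.linspace(xl, xr, n_basis - k + 1), [xr] * k]
    B = BSpline(t, np.eye(n_basis), k)(x)      # basis matrix, one column per basis
    D = np.diff(np.eye(n_basis), n=2, axis=0)  # 2nd-order difference penalty
    best = None
    for lam in lambdas:
        A = B.T @ B + lam * (D.T @ D)
        H = B @ np.linalg.solve(A, B.T)        # hat matrix of the ridge solve
        yhat = H @ y
        gcv = len(y) * np.sum((y - yhat) ** 2) / (len(y) - np.trace(H)) ** 2
        if best is None or gcv < best[0]:
            best = (gcv, lam, yhat)
    return best[1], best[2]

rng = np.random.default_rng(2)
x = np.linspace(0.0, 2.0 * np.pi, 150)
y = np.sin(x) + 0.1 * rng.standard_normal(x.size)
lam_hat, yhat = pspline_gcv(x, y)
```

Large λ pulls the fit toward a straight line; small λ follows the noise, and GCV picks the compromise automatically, as in the trajectory-fitting application.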
Straight-sided Spline Optimization
DEFF Research Database (Denmark)
Pedersen, Niels Leergaard
2011-01-01
and the subject of improving the design. The present paper concentrates on the optimization of splines and the prediction of stress concentrations, which are determined by finite element analysis (FEA). Using design modifications that do not change the spline load-carrying capacity, it is shown that large reductions in the maximum stress are possible. Design modifications are given as simple analytical functions (a modified super-elliptical shape) with only two active design parameters, and the designs are practically realizable.
Splines, contours and SVD subroutines
International Nuclear Information System (INIS)
Portability of Fortran code is a major concern these days, since hardware and commercial software change faster than the codes themselves. Hence, using public-domain, portable mathematical subroutines is imperative. Here we present a collection of subroutines that we have used in the past and found particularly useful. They are: 2-dimensional splines, contour tracing of flux surfaces (based on the 2-D spline), and Singular Value Decomposition (for chi-square minimization).
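The 2-D spline component can be sketched in a modern setting with SciPy's RectBivariateSpline standing in for the original Fortran subroutines (illustrative only; grid, test function, and evaluation point are our assumptions):

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

# bicubic 2-D spline fit on a regular grid, as used for flux-surface work
x = np.linspace(0.0, 1.0, 30)
y = np.linspace(0.0, 1.0, 30)
X, Y = np.meshgrid(x, y, indexing="ij")
Z = np.sin(np.pi * X) * np.cos(np.pi * Y)

spl = RectBivariateSpline(x, y, Z)        # bicubic interpolation by default
val = spl(0.5, 0.25)[0, 0]                # evaluate off the grid nodes
```

A contour tracer would then follow level sets of spl(x, y), and an SVD-based solver (e.g. numpy.linalg.svd) would handle the chi-square minimization mentioned above.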
RECONSTRUCTION OF LAYER DATA WITH DEFORMABLE B-SPLINES
Institute of Scientific and Technical Information of China (English)
Cheng Siyuan; Zhang Xiangwei; Xiong Hanwei
2005-01-01
A new B-spline surface reconstruction method from layer data, based on a deformable model, is presented. An initial deformable surface, represented as a closed cylinder, is first given. The surface is subject to internal forces describing its implicit smoothness property and to external forces attracting it toward the layer data points. The finite element method is then adopted to solve the energy minimization problem, which results in a bicubic closed B-spline surface with C2 continuity. The proposed method can provide a smooth and accurate surface model directly from the layer data, without the need to fit cross-sectional curves and make them compatible. The feasibility of the proposed method is verified by the experimental results.
A Large Sample Study of the Bayesian Bootstrap
Lo, Albert Y.
1987-01-01
An asymptotic justification of the Bayesian bootstrap is given. Large-sample Bayesian bootstrap probability intervals for the mean, the variance and bands for the distribution, the smoothed density and smoothed rate function are also provided.
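The mechanism behind the Bayesian bootstrap interval for the mean can be sketched directly: each replicate reweights the observations with Dirichlet(1, ..., 1) weights rather than resampling them (a sketch of Rubin's scheme, not of the paper's asymptotic analysis):

```python
import numpy as np

def bayesian_bootstrap_mean(data, n_rep=4000, seed=0):
    """Bayesian bootstrap of the mean: each replicate draws
    Dirichlet(1,...,1) weights over the n observations and computes the
    weighted mean; the 2.5/97.5 percentiles give a probability interval."""
    rng = np.random.default_rng(seed)
    data = np.asarray(data, dtype=float)
    w = rng.dirichlet(np.ones(data.size), size=n_rep)  # (n_rep, n) weights
    means = w @ data                                   # one weighted mean per replicate
    return np.percentile(means, [2.5, 97.5])

lo, hi = bayesian_bootstrap_mean(np.arange(1, 21))
```

Compared with the classical bootstrap's multinomial counts, the Dirichlet weights are continuous, which is what the large-sample equivalence in the abstract formalizes.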
Isogeometric analysis using T-splines
Bazilevs, Yuri
2010-01-01
We explore T-splines, a generalization of NURBS enabling local refinement, as a basis for isogeometric analysis. We review T-splines as a surface design methodology and then develop it for engineering analysis applications. We test T-splines on some elementary two-dimensional and three-dimensional fluid and structural analysis problems and attain good results in all cases. We summarize the current status of T-splines, their limitations, and future possibilities. © 2009 Elsevier B.V.
Symmetric, discrete fractional splines and Gabor systems
DEFF Research Database (Denmark)
Søndergaard, Peter Lempel
2006-01-01
In this paper we consider fractional splines as windows for Gabor frames. We introduce two new types of symmetric, fractional splines in addition to one found by Unser and Blu. For the finite, discrete case we present two families of splines: One is created by sampling and periodizing the...
HILBERTIAN APPROACH FOR UNIVARIATE SPLINE WITH TENSION
Institute of Scientific and Technical Information of China (English)
A.Bouhamidi
2001-01-01
In this work, a new approach is proposed for constructing splines with tension. The basic idea is the use of distribution theory, which allows us to define suitable Hilbert spaces in which the tension spline minimizes some energy functional. Classical orthogonality conditions and characterizations of the spline in terms of a fundamental solution of a differential operator are provided. An explicit representation of the tension spline is given. The tension spline can be computed by solving a linear system. Some numerical examples are given to illustrate this approach.
Rounaghi, Mohammad Mahdi; Abbaszadeh, Mohammad Reza; Arashi, Mohammad
2015-11-01
One of the most important topics of interest to investors is stock price changes. Investors with long-term goals are sensitive to stock prices and their changes and react to them. In this study, we used the multivariate adaptive regression splines (MARS) model and a semi-parametric splines technique for predicting stock prices. The MARS model is a nonparametric, adaptive regression method well suited to high-dimensional problems with several variables. Smoothing splines, on which the semi-parametric technique is based, is a nonparametric regression method. We used 40 variables (30 accounting variables and 10 economic variables) to predict stock prices with both approaches. After investigating the models, we selected 4 accounting variables (book value per share, predicted earnings per share, P/E ratio, and risk) as influential variables for predicting stock prices with the MARS model. After fitting the semi-parametric splines technique, only 4 accounting variables (dividends, net EPS, EPS forecast, and P/E ratio) were selected as effective variables for forecasting stock prices.
von der Linden, Wolfgang; Dose, Volker; von Toussaint, Udo
2014-06-01
Preface; Part I. Introduction: 1. The meaning of probability; 2. Basic definitions; 3. Bayesian inference; 4. Combinatorics; 5. Random walks; 6. Limit theorems; 7. Continuous distributions; 8. The central limit theorem; 9. Poisson processes and waiting times; Part II. Assigning Probabilities: 10. Transformation invariance; 11. Maximum entropy; 12. Qualified maximum entropy; 13. Global smoothness; Part III. Parameter Estimation: 14. Bayesian parameter estimation; 15. Frequentist parameter estimation; 16. The Cramer-Rao inequality; Part IV. Testing Hypotheses: 17. The Bayesian way; 18. The frequentist way; 19. Sampling distributions; 20. Bayesian vs frequentist hypothesis tests; Part V. Real World Applications: 21. Regression; 22. Inconsistent data; 23. Unrecognized signal contributions; 24. Change point problems; 25. Function estimation; 26. Integral equations; 27. Model selection; 28. Bayesian experimental design; Part VI. Probabilistic Numerical Techniques: 29. Numerical integration; 30. Monte Carlo methods; 31. Nested sampling; Appendixes; References; Index.
Spline and spline wavelet methods with applications to signal and image processing
Averbuch, Amir Z; Zheludev, Valery A
This volume provides universal methodologies, accompanied by Matlab software, to manipulate numerous signal and image processing applications. It is done with discrete and polynomial periodic splines. Various contributions of splines to signal and image processing are presented from a unified perspective. This presentation is based on the Zak transform and on the Spline Harmonic Analysis (SHA) methodology. SHA combines the approximation capabilities of splines with the computational efficiency of the Fast Fourier transform. SHA reduces the design of different spline types, such as splines, spline wavelets (SW), wavelet frames (SWF) and wavelet packets (SWP), and their manipulations, to simple operations. Digital filters produced by the wavelet design process give birth to subdivision schemes. Subdivision schemes enable fast explicit computation of splines' values at dyadic and triadic rational points. This is used for upsampling signals and images. In addition to the design of a diverse library of splines, SW, SWP a...
Institute of Scientific and Technical Information of China (English)
朱慧明; 周峰; 曾昭法; 李荣; 游万海
2015-01-01
In smooth transition cointegration testing, parameter estimation is uncertain and the cointegration test is complex. This paper proposes a smooth transition regression model and conducts a Bayesian nonlinear cointegration analysis. Based on the choice of parameter priors and the characteristics of the posterior conditional distributions of the parameters, a Metropolis-Hastings-within-Gibbs sampling algorithm is designed to estimate the parameters, and a Bayesian unit root test is used to test the stationarity of the regression residuals, addressing the uncertainty of parameter estimation and the complexity of the cointegration test. The method is applied to the RMB/US dollar exchange rate and the interest rate differential between China and the US. The results indicate that the MH-within-Gibbs scheme can effectively estimate the parameters of the smooth transition model, and that there is a smooth transition cointegration relationship between exchange rate fluctuations and the interest rate differential.
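The Metropolis-Hastings step at the heart of such an MH-within-Gibbs scheme can be sketched in its simplest random-walk form (a toy illustration with a standard-normal target; the paper's conditional posteriors for the smooth transition model are far more involved):

```python
import numpy as np

def metropolis(logpost, x0, n_iter=5000, step=1.0, seed=0):
    """Minimal random-walk Metropolis sampler: propose x + step*N(0,1),
    accept with probability min(1, exp(logpost(prop) - logpost(x)))."""
    rng = np.random.default_rng(seed)
    x, lp = x0, logpost(x0)
    out = np.empty(n_iter)
    for i in range(n_iter):
        prop = x + step * rng.standard_normal()
        lp_prop = logpost(prop)
        if np.log(rng.random()) < lp_prop - lp:   # accept/reject step
            x, lp = prop, lp_prop
        out[i] = x
    return out

# toy target: log-density of a standard normal (up to a constant)
draws = metropolis(lambda z: -0.5 * z * z, 0.0)
```

In the within-Gibbs variant, one such update is applied to each parameter block in turn, conditioning on the current values of the others.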
Directory of Open Access Journals (Sweden)
Ilyasov R. H.
2014-10-01
The energy market shows strong exposure to seasonal fluctuations. A striking example of the impact of seasonality is the dynamics of natural and associated gas production in Russia. We use two approaches to the identification and analysis of seasonality: a classical econometric approach based on various smoothing procedures, and a spline method that approximates the economic dynamics by cubic splines together with phase analysis. The comparison of the two methods identifies the benefits of using spline functions and phase analysis when modeling the seasonality of economic dynamics.
2-rational Cubic Spline Involving Tension Parameters
Indian Academy of Sciences (India)
M Shrivastava; J Joseph
2000-08-01
In the present paper, 1-piecewise rational cubic spline function involving tension parameters is considered which produces a monotonic interpolant to a given monotonic data set. It is observed that under certain conditions the interpolant preserves the convexity property of the data set. The existence and uniqueness of a 2-rational cubic spline interpolant are established. The error analysis of the spline interpolant is also given.
B-spline techniques for volatility modeling
Corlay, Sylvain
2013-01-01
This paper is devoted to the application of B-splines to volatility modeling, specifically the calibration of the leverage function in stochastic local volatility models and the parameterization of an arbitrage-free implied volatility surface calibrated to sparse option data. We use an extension of classical B-splines obtained by including basis functions with infinite support. We first come back to the application of shape-constrained B-splines to the estimation of conditional expectations, ...
Quadrotor system identification using the multivariate multiplex b-spline
Visser, T.; De Visser, C.C.; Van Kampen, E.J.
2015-01-01
A novel method for aircraft system identification is presented that is based on a new multivariate spline type; the multivariate multiplex B-spline. The multivariate multiplex B-spline is a generalization of the recently introduced tensor-simplex B-spline. Multivariate multiplex splines obtain simil
Comparison of CSC method and the B-net method for deducing smoothness condition
Institute of Scientific and Technical Information of China (English)
Renhong Wang; Kai Qu
2009-01-01
The first author of this paper established an approach to study the multivariate spline over arbitrary partition,and presented the so-called conformality method of smoothing cofactor (the CSC method).Farin introduced the B-net method which is suitable for studying the multivariate spline over simplex partitions.This paper indicates that the smoothness conditions obtained in terms of the B-net method can be derived by the CSC method for the spline spaces over simplex partitions,and the CSC method is more capable in some sense than the B-net method in studying the multivariate spline.
Construction of local integro quintic splines
Directory of Open Access Journals (Sweden)
T. Zhanlav
2016-06-01
In this paper, we show that integro quintic splines can be constructed locally without solving any systems of equations. The new construction does not require any additional end conditions. By virtue of these advantages the proposed algorithm is easy to implement and effective. At the same time, the local integro quintic splines possess approximation properties as good as those of integro quintic splines. We prove that our local integro quintic spline has superconvergence properties at the knots for the first and third derivatives. The orders of convergence at the knots are six (not five) for the first derivative and four (not three) for the third derivative.
Quadratic spline finite element method
Directory of Open Access Journals (Sweden)
A. R. Bahadir
2002-01-01
The problem of heat transfer in a Positive Temperature Coefficient (PTC) thermistor, which may form one element of an electric circuit, is solved numerically by a finite element method. The approach used is based on the Galerkin finite element method using quadratic splines as shape functions. The resulting system of ordinary differential equations is solved by the finite difference method. Comparison is made with numerical and analytical solutions, and the accuracy of the computed solutions indicates that the method is well suited for the solution of the PTC thermistor problem.
Directory of Open Access Journals (Sweden)
Are eLosnegård
2013-07-01
Image-based tractography of white matter (WM) fiber bundles in the brain using diffusion-weighted MRI (DW-MRI) has become a useful tool in basic and clinical neuroscience. However, proper tracking is challenging due to the anatomical complexity of fiber pathways, the coarse resolution of clinically applicable whole-brain in vivo imaging techniques, and the difficulties associated with verification. In this study we introduce a new tractography algorithm using splines (denoted Spline). Spline reconstructs smooth fiber trajectories iteratively, in contrast to most other tractography algorithms, which create piecewise linear fiber tract segments followed by spline fitting. Using DW-MRI recordings from eight healthy elderly people participating in a longitudinal study of cognitive aging, we compare our Spline algorithm to two state-of-the-art tracking methods from the TrackVis software suite. The comparison is done quantitatively using diffusion metrics (fractional anisotropy, FA), with (i) tract averaging, (ii) longitudinal linear mixed-effects model fitting, and (iii) detailed along-tract analysis. Further validation is done on recordings from a diffusion hardware phantom, mimicking a coronal brain slice, with a known ground truth. Results from the longitudinal aging study showed high sensitivity of Spline tracking to individual aging patterns of mean FA when combined with linear mixed-effects modelling, and moderately strong differences in the along-tract analysis of specific tracts, whereas the tract-averaged comparison using simple linear OLS regression revealed fewer differences between Spline and the two other tractography algorithms. In the brain phantom experiments with a ground truth, we demonstrated improved tracking ability of Spline compared to the two reference tractography algorithms tested.
Inference in dynamic systems using B-splines and quasilinearized ODE penalties.
Frasso, Gianluca; Jaeger, Jonathan; Lambert, Philippe
2016-05-01
Nonlinear (systems of) ordinary differential equations (ODEs) are common tools in the analysis of complex one-dimensional dynamic systems. We propose a smoothing approach regularized by a quasilinearized ODE-based penalty. Within the quasilinearized spline-based framework, the estimation reduces to a conditionally linear problem for the optimization of the spline coefficients. Furthermore, standard ODE compliance parameter selection criteria are applicable. We evaluate the performance of the proposed strategy through simulated and real data examples. Simulation studies suggest that the proposed procedure ensures more accurate estimates than standard nonlinear least squares approaches when the state (initial and/or boundary) conditions are not known. PMID:26602190
Uniform trigonometric polynomial B-spline curves
Institute of Scientific and Technical Information of China (English)
吕勇刚; 汪国昭; 杨勋年
2002-01-01
This paper presents a new kind of uniform spline curve, named trigonometric polynomial B-splines, over the space Ω = span{sin t, cos t, t^{k-3}, t^{k-4}, …, t, 1}, where k is an arbitrary integer larger than or equal to 3. We show that trigonometric polynomial B-spline curves have many properties similar to traditional B-splines. Based on the explicit representation of the curve we also present the subdivision formulae for this new kind of curve. Since the new spline can include both polynomial curves and trigonometric curves as special cases without rational form, it can be used as an efficient new model for geometric design in the fields of CAD/CAM.
Trajectory control of an articulated robot with a parallel drive arm based on splines under tension
Yi, Seung-Jong
Today's industrial robots controlled by mini/micro computers are basically simple positioning devices. The positioning accuracy depends on the mathematical description of the robot configuration to place the end-effector at the desired position and orientation within the workspace, and on following the specified path, which requires a trajectory planner. In addition, the consideration of joint velocity, acceleration, and jerk trajectories is essential for trajectory planning of industrial robots to obtain smooth operation. The newly designed 6 DOF articulated robot with a parallel drive arm mechanism, which permits the joint actuators to be placed in the same horizontal line to reduce the arm inertia and to increase load capacity and stiffness, is selected. First, the forward kinematic and inverse kinematic problems are examined. The forward kinematic equations are successfully derived based on Denavit-Hartenberg notation with independent joint angle constraints. The inverse kinematic problems are solved using the arm-wrist partitioned approach with independent joint angle constraints. Three types of curve fitting methods used in trajectory planning, i.e., certain degree polynomial functions, cubic spline functions, and cubic spline functions under tension, are compared to select the best possible method to satisfy both smooth joint trajectories and positioning accuracy for a robot trajectory planner. Cubic spline functions under tension is the method selected for the new trajectory planner. This method is implemented for a 6 DOF articulated robot with a parallel drive arm mechanism to improve the smoothness of the joint trajectories and the positioning accuracy of the manipulator. Also, this approach is compared with existing trajectory planners, 4-3-4 polynomials and cubic spline functions, via circular arc motion simulations. The new trajectory planner using cubic spline functions under tension is implemented into the microprocessor based robot controller and
Institute of Scientific and Technical Information of China (English)
Juan Chen; Chong-Jun Li; Wan-Ji Chen
2011-01-01
In this paper, a 13-node pyramid spline element is derived by using the tetrahedron volume coordinates and the B-net method, which achieves second order completeness in Cartesian coordinates. Some appropriate examples were employed to evaluate the performance of the proposed element. The numerical results show that the spline element has much better performance compared with the isoparametric serendipity element Q20 and its degenerate pyramid element P13, especially when the mesh is distorted, and it is comparable to the Lagrange element Q27. It has been demonstrated that the spline finite element method is an efficient tool for developing high accuracy elements.
Optimization of straight-sided spline design
DEFF Research Database (Denmark)
Pedersen, Niels Leergaard
2011-01-01
and the subject of improving the design. The present paper concentrates on the optimization of splines and the predictions of stress concentrations, which are determined by finite element analysis (FEA). Using different design modifications, that do not change the spline load carrying capacity, it is shown...... that large reductions in the maximum stress are possible. Fatigue life of a spline can be greatly improved with up to a 25% reduction in the maximum stress level. Design modifications are given as simple analytical functions (modified super elliptical shape) with only two active design parameters...
Fuzzy B-Spline Surface Modeling
Directory of Open Access Journals (Sweden)
Rozaimi Zakaria
2014-01-01
Full Text Available This paper discusses the construction of a fuzzy B-spline surface model. The construction of this model is based on fuzzy set theory, in particular the concepts of fuzzy numbers and fuzzy relations. The proposed theories and concepts define uncertain data sets represented by fuzzy data/control points, allowing uncertain data points to be modeled, visualized and analyzed. The fuzzification and defuzzification processes are also defined in detail in order to obtain the crisp model of the fuzzy B-spline surface. The final section shows an application of fuzzy B-spline surface modeling to terrain modeling, which demonstrates its usability in handling uncertain data.
Positivity Preserving Interpolation Using Rational Bicubic Spline
Directory of Open Access Journals (Sweden)
Samsul Ariffin Abdul Karim
2015-01-01
Full Text Available This paper discusses positivity preserving interpolation for positive surface data by extending the C1 rational cubic spline interpolant of Karim and Kong to the bivariate case. The partially blended rational bicubic spline has 12 parameters in its description, of which 8 are free parameters. The sufficient conditions for positivity are derived on every four-boundary-curve network on the rectangular patch. Numerical comparison with existing schemes has also been done in detail. Based on Root Mean Square Error (RMSE), our partially blended rational bicubic spline is on a par with the established methods.
Ryu, Duchwan
2010-09-28
We consider nonparametric regression analysis in a generalized linear model (GLM) framework for data with covariates that are the subject-specific random effects of longitudinal measurements. The usual assumption that the effects of the longitudinal covariate processes are linear in the GLM may be unrealistic and if this happens it can cast doubt on the inference of observed covariate effects. Allowing the regression functions to be unknown, we propose to apply Bayesian nonparametric methods including cubic smoothing splines or P-splines for the possible nonlinearity and use an additive model in this complex setting. To improve computational efficiency, we propose the use of data-augmentation schemes. The approach allows flexible covariance structures for the random effects and within-subject measurement errors of the longitudinal processes. The posterior model space is explored through a Markov chain Monte Carlo (MCMC) sampler. The proposed methods are illustrated and compared to other approaches, the "naive" approach and the regression calibration, via simulations and by an application that investigates the relationship between obesity in adulthood and childhood growth curves. © 2010, The International Biometric Society.
Fretting damage parameters in splined couplings
Cuffaro, Vincenzo; Mura, Andrea; Cura', Francesca Maria
2013-01-01
This work focuses on the analysis of the debris found in the lubrication oil produced by wear abrasion during wear tests conducted on crowned splined couplings. During each test the presence and the dimensions of the debris in the oil have been monitored. Tests have been performed by means of a dedicated splined couplings test rig, with an angular misalignment imposed on the axes of the components. Results show that when these components work in misaligned cond...
P-Splines Using Derivative Information
Calderon, Christopher P.
2010-01-01
Time series associated with single-molecule experiments and/or simulations contain a wealth of multiscale information about complex biomolecular systems. We demonstrate how a collection of Penalized-splines (P-splines) can be useful in quantitatively summarizing such data. In this work, functions estimated using P-splines are associated with stochastic differential equations (SDEs). It is shown how quantities estimated in a single SDE summarize fast-scale phenomena, whereas variation between curves associated with different SDEs partially reflects noise induced by motion evolving on a slower time scale. P-splines assist in "semiparametrically" estimating nonlinear SDEs in situations where a time-dependent external force is applied to a single-molecule system. The P-splines introduced simultaneously use function and derivative scatterplot information to refine curve estimates. We refer to the approach as the PuDI (P-splines using Derivative Information) method. It is shown how generalized least squares ideas fit seamlessly into the PuDI method. Applications demonstrating how utilizing uncertainty information/approximations along with generalized least squares techniques improve PuDI fits are presented. Although the primary application here is in estimating nonlinear SDEs, the PuDI method is applicable to situations where both unbiased function and derivative estimates are available.
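The core numerical idea above, fitting a spline by least squares to function and derivative scatterplot data simultaneously, can be sketched as follows. This is an illustrative reconstruction, not the authors' PuDI code; the basis, knot grid, signal and noise levels are arbitrary choices.

```python
import numpy as np
from scipy.interpolate import BSpline

# Cubic B-spline basis on [0, 1] with a modest interior-knot grid.
k = 3
interior = np.linspace(0, 1, 8)[1:-1]
t = np.concatenate(([0] * (k + 1), interior, [1] * (k + 1)))
n_coef = len(t) - k - 1

def design(x, deriv=0):
    # Rows: each basis function (or its derivative) evaluated at x.
    cols = []
    for j in range(n_coef):
        c = np.zeros(n_coef); c[j] = 1.0
        b = BSpline(t, c, k)
        cols.append(b.derivative(deriv)(x) if deriv else b(x))
    return np.column_stack(cols)

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 60)
f = np.sin(2 * np.pi * x)                # true function
df = 2 * np.pi * np.cos(2 * np.pi * x)   # true derivative
y = f + 0.05 * rng.standard_normal(x.size)
dy = df + 0.5 * rng.standard_normal(x.size)

# Stack function and derivative observations into one linear system.
A = np.vstack([design(x, 0), design(x, 1)])
b = np.concatenate([y, dy])
coef, *_ = np.linalg.lstsq(A, b, rcond=None)

fit = BSpline(t, coef, k)
rmse = np.sqrt(np.mean((fit(x) - f) ** 2))
```

Because the spline is linear in its coefficients, the derivative observations simply contribute extra rows to the same least-squares system, which is the sense in which derivative information "refines" the curve estimate.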
Bayesian modeling using WinBUGS
Ntzoufras, Ioannis
2009-01-01
A hands-on introduction to the principles of Bayesian modeling using WinBUGS. Bayesian Modeling Using WinBUGS provides an easily accessible introduction to the use of WinBUGS programming techniques in a variety of Bayesian modeling settings. The author provides an accessible treatment of the topic, offering readers a smooth introduction to the principles of Bayesian modeling with detailed guidance on the practical implementation of key principles. The book begins with a basic introduction to Bayesian inference and the WinBUGS software and goes on to cover key topics, including: Markov chain Monte Carlo algorithms in Bayesian inference; generalized linear models; Bayesian hierarchical models; predictive distribution and model checking; Bayesian model and variable evaluation. Computational notes and screen captures illustrate the use of both WinBUGS and R software to apply the discussed techniques. Exercises at the end of each chapter allow readers to test their understanding of the presented concepts and all ...
Non-Rigid Image Registration Algorithm Based on B-Splines Approximation
Institute of Scientific and Technical Information of China (English)
ZHANG Hongying; ZHANG Jiawan; SUN Jizhou; SUN Yigang
2007-01-01
An intensity-based non-rigid registration algorithm is discussed, which uses Gaussian smoothing to constrain the transformation to be smooth, and thus preserves the topology of images. In view of the insufficiency of the uniform Gaussian filtering of the deformation field, an automatic and accurate non-rigid image registration method based on B-splines approximation is proposed. The regularization strategy is adopted by using multi-level B-splines approximation to regularize the displacement fields in a coarse-to-fine manner. Moreover, it assigns different weights to the estimated displacements according to their reliabilities. In this way, the level of regularity can be adapted locally. Experiments were performed on both synthetic and real medical images of brain, and the results show that the proposed method improves the registration accuracy and robustness.
Optimal Knot Selection for Least-squares Fitting of Noisy Data with Spline Functions
Energy Technology Data Exchange (ETDEWEB)
Jerome Blair
2008-05-15
An automatic data-smoothing algorithm for data from digital oscilloscopes is described. The algorithm adjusts the bandwidth of the filtering as a function of time to provide minimum mean squared error at each time. It produces an estimate of the root-mean-square error as a function of time and does so without any statistical assumptions about the unknown signal. The algorithm is based on least-squares fitting to the data of cubic spline functions.
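The final step described above, least-squares fitting of cubic spline functions to noisy data, is available off the shelf in SciPy. The following sketch uses a made-up signal and fixed hand-placed knots, not the paper's adaptive bandwidth selection, purely to show the basic mechanism.

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

rng = np.random.default_rng(1)
x = np.linspace(0, 2 * np.pi, 200)
y = np.sin(x) + 0.1 * rng.standard_normal(x.size)

# Interior knots control the smoothing: fewer knots -> smoother fit.
knots = np.linspace(x[0], x[-1], 8)[1:-1]
spl = LSQUnivariateSpline(x, y, knots, k=3)

rmse = np.sqrt(np.mean((spl(x) - np.sin(x)) ** 2))
```

In the paper's setting the knot density would effectively vary with time; here it is uniform for simplicity.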
Timo Teuber
2013-01-01
The two-dimensional circular structure model of Kauermann, Teuber, and Flaschel (2011) is extended to estimate more than two time series simultaneously. It is assumed that the multivariate time series follow a cycle over time. However, the radius and the angle are allowed to change smoothly over time and are estimated using a penalized spline regression technique. The model is put to life using the Leading, Coincident and Lagging Indicators provided by the Conferenc...
MATLAB programs for smoothing X-ray spectra
International Nuclear Information System (INIS)
Two MATLAB 4.0 programs for smoothing X-ray spectra are presented: wekskl.m, using polynomial regression splines, and wekfft.m, using the fast Fourier transform. The wekskl.m program accomplishes smoothing for optimal distances between the knots, whereas wekfft.m uses an optimal spectral window width. The smoothed spectra are available in the form of vectors and are presented in graphical form as well. (author)
Knot Insertion Algorithms for ECT B-spline Curves
Institute of Scientific and Technical Information of China (English)
SONG Huan-huan; TANG Yue-hong; LI Yu-juan
2013-01-01
Knot insertion is one of the most important technologies of the B-spline method. By inserting a knot, the local properties of a B-spline curve and the control flexibility of its shape can be further improved, and segmentation of the curve can be realized. An ECT spline curve is drawn from a multi-knot spline curve with an associated matrix in ECT spline space; Muehlbach G, Tang Y and others have established the existence and uniqueness of the ECT spline function and developed many of its important properties. This paper mainly focuses on the knot insertion algorithm for ECT B-spline curves, a broad generalization of the Boehm algorithm and theory for B-splines. Inspired by the Boehm algorithm, generalized Pólya polynomials and the generalized de Boor-Fix dual functional are constructed in the ECT spline space, expressing the new control points that arise after knot insertion as linear combinations of the original control vertices; two cases are treated, the single knot and the double knot. This yields the knot insertion algorithm for ECT spline curves. By applying the knot insertion algorithm, the paper also gives knot insertion algorithms for fourth order geometrically continuous piecewise polynomial B-splines and algebraic trigonometric B-splines, consistent with previous results.
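For ordinary polynomial B-splines, the effect described above can be checked directly with SciPy: inserting a knot changes the knot vector and control points but leaves the curve itself unchanged. This is Boehm-style insertion for the classical case, not the ECT generalization; the knot vector and control points below are invented.

```python
import numpy as np
from scipy.interpolate import insert, splev

# A cubic B-spline curve (ordinary polynomial case).
k = 3
t = np.array([0, 0, 0, 0, 1, 2, 3, 4, 4, 4, 4], float)
c = np.array([0.0, 1.0, -1.0, 2.0, 0.5, 1.5, 0.0])
tck = (t, c, k)

# Insert a knot at u = 1.5; the new control points are convex
# combinations of the old ones, and the curve is unchanged.
tck2 = insert(1.5, tck)

u = np.linspace(0, 4, 101)
before = np.asarray(splev(u, tck))
after = np.asarray(splev(u, tck2))
max_diff = np.max(np.abs(before - after))
```

The knot vector grows by one while every point on the curve stays fixed, which is exactly what makes knot insertion safe for curve segmentation and refinement.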
Scripted Bodies and Spline Driven Animation
DEFF Research Database (Denmark)
Erleben, Kenny; Henriksen, Knud
2002-01-01
In this paper we will take a close look at the details and technicalities in applying spline driven animation to scripted bodies in the context of dynamic simulation. The main contributions presented in this paper are methods for computing velocities and accelerations in the time domain of the spline...
Determinants of Low Birth Weight in Malawi: Bayesian Geo-Additive Modelling.
Directory of Open Access Journals (Sweden)
Alfred Ngwira
Full Text Available Studies on factors of low birth weight in Malawi have neglected the flexible approach of using smooth functions for some covariates in models. Such a flexible approach reveals the detailed relationship of covariates with the response. The study aimed at investigating risk factors of low birth weight in Malawi by assuming a flexible approach for continuous covariates and a geographical random effect. A Bayesian geo-additive model for birth weight in kilograms and size of the child at birth (less than average, or average and higher), with district as a spatial effect, using the 2010 Malawi demographic and health survey data was adopted. A Gaussian model for birth weight in kilograms and a binary logistic model for the binary outcome (size of child at birth) were fitted. Continuous covariates were modelled by penalized (P-) splines and spatial effects were smoothed by the two-dimensional P-spline. The study found that child birth order, mother weight and height are significant predictors of birth weight. Secondary education for the mother, birth order categories 2-3 and 4-5, wealth index of richer family and mother height were significant predictors of child size at birth. The area associated with low birth weight was Chitipa, and areas with increased risk of less than average size at birth were Chitipa and Mchinji. The study found support for the flexible modelling of some covariates that clearly have nonlinear influences. Nevertheless there is no strong support for inclusion of geographical spatial analysis. The spatial patterns, though, point to the influence of omitted variables with some spatial structure or possibly epidemiological processes that account for this spatial structure, and the maps generated could be used for targeting development efforts at a glance.
International Nuclear Information System (INIS)
A new method is presented to subtract the background from the energy dispersive X-ray fluorescence (EDXRF) spectrum using cubic spline interpolation. To accurately obtain the interpolation nodes, a smooth fitting and a set of discriminant formulations were adopted. From these interpolation nodes, the background is estimated by a calculated cubic spline function. The method has been tested on spectra measured from a coin and an oil painting using a confocal MXRF setup. In addition, the method has been tested on an existing sample spectrum. The result confirms that the method can properly subtract the background.
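A minimal sketch of the background-subtraction idea follows, with a synthetic spectrum and hand-picked nodes standing in for the paper's automatic node selection (the peak positions, widths and node locations are all invented for illustration).

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Synthetic EDXRF-like spectrum: smooth background plus Gaussian peaks.
e = np.linspace(0, 20, 400)
background = 50 * np.exp(-e / 10)
peaks = (200 * np.exp(-0.5 * ((e - 6) / 0.2) ** 2)
         + 120 * np.exp(-0.5 * ((e - 13) / 0.3) ** 2))
spectrum = background + peaks

# Interpolation nodes picked in peak-free regions (chosen by hand here;
# the paper selects them via smooth fitting and discriminant formulas).
nodes = np.array([0.5, 2.5, 4.5, 8.5, 10.5, 15.5, 19.0])
node_vals = np.interp(nodes, e, spectrum)

# Cubic spline through the nodes estimates the background everywhere.
bg_est = CubicSpline(nodes, node_vals)(e)
err = np.max(np.abs(bg_est - background))

net_spectrum = spectrum - bg_est  # background-subtracted spectrum
```

Because the nodes sit where the peaks are negligible, the spline through them tracks only the slowly varying background, which is then subtracted from the raw spectrum.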
A spline-regularized minimal residual algorithm for iterative attenuation correction in SPECT
International Nuclear Information System (INIS)
In SPECT, regularization is necessary to avoid divergence of the iterative algorithms used for non-uniform attenuation compensation. In this paper, we propose a spline-based regularization method for the minimal residual algorithm. First, the acquisition noise is filtered using a statistical model involving spline smoothing so that the filtered projections belong to a Sobolev space with specific continuity and derivability properties. Then, during the iterative reconstruction procedure, the continuity of the inverse Radon transform between Sobolev spaces is used to design a spline-regularized filtered backprojection method, by which the known regularity properties of the projections determine those of the corresponding reconstructed slices. This ensures that the activity distributions estimated at each iteration present regularity properties, which avoids computational noise amplification, thus stabilizing the iterative process. Analytical and Monte Carlo simulations are used to show that the proposed spline-regularized minimal residual algorithm converges to a satisfactory stable solution in terms of restored activity and homogeneity, using at most 25 iterations, whereas the non-regularized version of the algorithm diverges. Choosing the number of iterations is therefore no longer a critical issue for this reconstruction procedure. (author)
REAL ROOT ISOLATION OF SPLINE FUNCTIONS
Institute of Scientific and Technical Information of China (English)
Renhong Wang; Jinming Wu
2008-01-01
In this paper, we propose an algorithm for isolating the real roots of a given univariate spline function, which is based on the use of Descartes' rule of signs and the de Casteljau algorithm. Numerical examples illustrate the flexibility and effectiveness of the algorithm.
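SciPy offers a different route to the same end: rather than Descartes' rule with de Casteljau subdivision, it converts the spline to piecewise-polynomial form and solves each piece. As a quick illustration of real-root finding for a univariate spline (the test function is invented):

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Spline interpolating sin(x) on [0, 2*pi]; its real roots should sit
# near 0, pi and 2*pi.
x = np.linspace(0, 2 * np.pi, 30)
spl = CubicSpline(x, np.sin(x))

# CubicSpline is a PPoly subclass, so roots() solves each cubic piece.
roots = np.sort(spl.roots(extrapolate=False))
```

Both approaches exploit the fact that a spline is polynomial on each knot interval, so root isolation reduces to per-interval polynomial root finding.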
FORMATION OF SHAFT SPLINES USING ROLLING METHOD
Directory of Open Access Journals (Sweden)
M. Sidorenko
2012-01-01
Full Text Available The paper describes the design of rolling heads used for cold rolling of straight-sided splines on shafts and presents the theoretical principles of this process. These principles make it possible to calculate the force required for pushing the billet through the rolling rolls, with due account of metal hardening during deformation.
Lesaffre, Emmanuel
2012-01-01
The growth of biostatistics has been phenomenal in recent years and has been marked by considerable technical innovation in both methodology and computational practicality. One area that has experienced significant growth is Bayesian methods. The growing use of Bayesian methodology has taken place partly due to an increasing number of practitioners valuing the Bayesian paradigm as matching that of scientific discovery. In addition, computational advances have allowed for more complex models to be fitted routinely to realistic data sets. Through examples, exercises and a combination of introd
A Geometric Approach for Multi-Degree Spline
Institute of Scientific and Technical Information of China (English)
Xin Li; Zhang-Jin Huang; Zhao Liu
2012-01-01
Multi-degree spline (MD-spline for short) is a generalization of B-spline which comprises polynomial segments of various degrees. The present paper provides a new definition for MD-spline curves in a geometrically intuitive way, based on an efficient and simple evaluation algorithm. MD-spline curves maintain various desirable properties of B-spline curves, such as the convex hull, local support and variation diminishing properties. They can also be refined exactly with knot insertion. The continuity between two adjacent segments with different degrees is at least C1, and that between two adjacent segments of the same degree d is C^(d-1). Benefiting from the exact refinement algorithm, we also provide several operators for MD-spline curves, such as converting each curve segment into Bézier form, an efficient merging algorithm and a new curve subdivision scheme which allows different degrees for each segment.
Geometry Modeling of Ship Hull Based on Non-uniform B-spline
Institute of Scientific and Technical Information of China (English)
WANG Hu; ZOU Zao-jian
2008-01-01
In order to generate the three-dimensional (3-D) hull surface accurately and smoothly, a mixed method which is made up of non-uniform B-spline together with an iterative procedure was developed. By using the iterative method the data points on each section curve are calculated and the generalized waterlines and transverse section curves are determined. Then using the non-uniform B-spline expression, the control vertex net of the hull is calculated based on the generalized waterlines and section curves. A ship with tunnel stern was taken as test case. The numerical results prove that the proposed approach for geometry modeling of 3-D ship hull surface is accurate and effective.
Left ventricular motion reconstruction with a prolate spheroidal B-spline model
Energy Technology Data Exchange (ETDEWEB)
Li Jin; Denney, Thomas S Jr [Electrical and Computer Engineering Department, 200 Broun Hall, Auburn University, AL 36849-5201 (United States)
2006-02-07
Tagged cardiac magnetic resonance (MR) imaging can non-invasively image deformation of the left ventricular (LV) wall. Three-dimensional (3D) analysis of tag data requires fitting a deformation model to tag lines in the image data. In this paper, we present a 3D myocardial displacement and strain reconstruction method based on a B-spline deformation model defined in prolate spheroidal coordinates, which more closely matches the shape of the LV wall than existing Cartesian or cylindrical coordinate models. The prolate spheroidal B-spline (PSB) deformation model also enforces smoothness across, and can compute strain at, the apex. The PSB reconstruction algorithm was evaluated on a previously published data set to allow head-to-head comparison of the PSB model with existing LV deformation reconstruction methods. We conclude that the PSB method can accurately reconstruct deformation and strain in the LV wall from tagged MR images and has several advantages relative to existing techniques.
B-spline parameterization of spatial response in a monolithic scintillation camera
Solovov, V; Chepel, V; Domingos, V; Martins, R
2016-01-01
A framework for parameterization of the light response functions (LRFs) in a scintillation camera was developed. It is based on approximation of the measured or simulated photosensor response with weighted sums of uniform cubic B-splines or their tensor products. The LRFs represented in this way are smooth, computationally inexpensive to evaluate and require much less memory than non-parametric alternatives. The parameters are found in a straightforward way by the linear least squares method. The use of a linear fit makes the fitting process stable and predictable enough to be used in non-supervised mode. Several techniques that reduce the storage and processing power requirements were developed. A software library for fitting simulated and measured light response with spline functions was developed and integrated into the open source software package ANTS2, designed for simulation and data processing for Anger camera-type detectors.
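The key point above, that a B-spline representation makes the fit linear in its parameters, can be illustrated in 1D with an ordinary least-squares solve over a B-spline design matrix. This is a sketch only, not the ANTS2 implementation; `BSpline.design_matrix` requires SciPy 1.8+, and the response shape and knot grid are invented.

```python
import numpy as np
from scipy.interpolate import BSpline

rng = np.random.default_rng(2)
x = np.sort(rng.uniform(0, 1, 300))
# A bump-shaped "light response" plus measurement noise.
true = np.exp(-((x - 0.4) / 0.15) ** 2)
y = true + 0.02 * rng.standard_normal(x.size)

# Fixed uniform cubic B-spline basis; the fit is linear in the weights.
k = 3
t = np.concatenate(([0] * (k + 1), np.linspace(0, 1, 10)[1:-1], [1] * (k + 1)))
B = BSpline.design_matrix(x, t, k).toarray()  # returns sparse; densified here

coef, *_ = np.linalg.lstsq(B, y, rcond=None)
fit = BSpline(t, coef, k)
rmse = np.sqrt(np.mean((fit(x) - true) ** 2))
```

Because the system is linear, the solve is deterministic and cheap, which is what makes this kind of fit robust enough to run unsupervised.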
B-Spline Active Contour with Handling of Topology Changes for Fast Video Segmentation
Directory of Open Access Journals (Sweden)
Frederic Precioso
2002-06-01
Full Text Available This paper deals with video segmentation for MPEG-4 and MPEG-7 applications. Region-based active contour is a powerful technique for segmentation. However, most of these methods are implemented using level sets. Although level-set methods provide accurate segmentation, they suffer from large computational cost. We propose to use a regular B-spline parametric method to provide fast and accurate segmentation. Our B-spline interpolation is based on a fixed number of points 2^j, depending on the level of the desired details. Through this spatial multiresolution approach, the computational cost of the segmentation is reduced. We introduce a length penalty, which improves both smoothness and accuracy. We then show some experiments on real video sequences.
Draper, D.
2001-01-01
© 2012 Springer Science+Business Media, LLC. All rights reserved. Article outline: Glossary; Definition of the Subject and Introduction; The Bayesian Statistical Paradigm; Three Examples; Comparison with the Frequentist Statistical Paradigm; Future Directions; Bibliography.
DIRECT MANIPULATION OF B-SPLINE SURFACES
Institute of Scientific and Technical Information of China (English)
Wang Zhiguo; Zhou Laishui; Wang Xiaoping
2005-01-01
Engineering design and geometric modeling often require the ability to modify the shape of parametric curves and surfaces so that their shapes satisfy given geometric constraints, including point, normal vector, curve and surface constraints. Two approaches are presented to directly manipulate the shape of a B-spline surface. The former is based on least squares, whereas the latter is based on minimizing the bending energy of the surface. For each method, unified and explicit formulae are derived to compute the new control points of the modified surface, so these methods are simple, fast and applicable in CAD systems. Algebraic techniques are used to simplify the computation of B-spline composition and multiplication. Comparisons and examples are also given.
COMPACT SUPPORT THIN PLATE SPLINE ALGORITHM
Institute of Scientific and Technical Information of China (English)
Li Jing; Yang Xuan; Yu Jianping
2007-01-01
Common landmark-based tools in elastic medical image registration are the Thin Plate Spline (TPS) and Compactly Supported Radial Basis Functions (CSRBF). TPS forces the corresponding landmarks to match each other exactly and minimizes the bending energy of the whole image. However, in real applications, such a scheme deforms the image globally even when the deformation is only local. CSRBF requires the support size to be determined manually, although its deformation is limited to a local region. Therefore, to limit the effect of the deformation, a new Compact Support Thin Plate Spline (CSTPS) algorithm is proposed, analyzed and applied. The new approach attains optimal mutual information, showing satisfactory registration results. The experiments also show that it can be applied to both local and global elastic registration.
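As a point of reference for the TPS half of the comparison, a globally supported thin plate spline landmark warp (the classical method, not the proposed CSTPS) can be set up in a few lines with SciPy. The landmark coordinates below are invented; note how moving one landmark perturbs the whole mapping, which is exactly the global behaviour the paper seeks to limit.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Source landmarks and targets: only the centre point is displaced.
src = np.array([[0, 0], [1, 0], [0, 1], [1, 1], [0.5, 0.5]], float)
dst = src.copy()
dst[4] += [0.1, -0.05]

# Thin plate spline warp; smoothing=0 (default) interpolates exactly.
warp = RBFInterpolator(src, dst, kernel='thin_plate_spline')

moved = warp(src)
err = np.max(np.abs(moved - dst))  # landmarks matched exactly
```

With `smoothing=0` the warp reproduces every landmark pair exactly while minimizing the bending energy between them.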
On spline approximation of sliced inverse regression
Institute of Scientific and Technical Information of China (English)
Li-ping ZHU; Zhou YU
2007-01-01
Dimension reduction is helpful and often necessary in exploring nonparametric regression structure. In this area, sliced inverse regression (SIR) is a promising tool for estimating the central dimension reduction (CDR) space. To estimate the kernel matrix of the SIR, we herein suggest spline approximation using least squares regression. Heteroscedasticity can be incorporated well by introducing an appropriate weight function. Root-n asymptotic normality can be achieved for a wide range of knot choices. This is essentially analogous to kernel estimation. Moreover, we also propose a modified Bayes information criterion (BIC) based on the eigenvalues of the SIR matrix. This modified BIC can be applied to any form of SIR and other related methods. The methodology and some of the practical issues are illustrated through the horse mussel data. Empirical studies evidence the performance of our proposed spline approximation by comparison with existing estimators.
Interaction Spline Models and Their Convergence Rates
Chen, Zehua
1991-01-01
We consider interaction splines which model a multivariate regression function $f$ as a constant plus the sum of functions of one variable (main effects), plus the sum of functions of two variables (two-factor interactions), and so on. The estimation of $f$ by the penalized least squares method and the asymptotic properties of the models are studied in this article. It is shown that, under some regularity conditions on the data points, the expected squared error averaged over the data points ...
Bernuau spline wavelets and Sturmian sequences
Andrle, Miroslav; Burdik, Cestmir; Gazeau, Jean-Pierre
2003-01-01
A spline wavelets construction of class C^n(R) supported by sequences of aperiodic discretizations of R is presented. The construction is based on multiresolution analysis recently elaborated by G. Bernuau. At a given scale, we consider discretizations that are sets of left-hand ends of tiles in a self-similar tiling of the real line with finite local complexity. Corresponding tilings are determined by two-letter Sturmian substitution sequences. We illustrate the construction with examples ha...
Marginal longitudinal semiparametric regression via penalized splines.
Kadiri, M Al; Carroll, R J; Wand, M P
2010-08-01
We study the marginal longitudinal nonparametric regression problem and some of its semiparametric extensions. We point out that, while several elaborate proposals for efficient estimation exist, a relatively simple and straightforward one based on penalized splines has not been considered. After describing our approach, we explain how Gibbs sampling and the BUGS software can be used to achieve quick and effective implementation. Illustrations are provided for nonparametric regression and additive models.
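As a rough sketch of the penalized-spline idea (not the authors' Bayesian/BUGS implementation), one can fit a truncated-power basis and ridge-penalize only the knot coefficients; the data, knot grid, and smoothing parameter below are made up:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 1, 200))
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, 200)

# linear truncated-power basis: [1, x, (x - k)_+ for each knot]
knots = np.linspace(0.05, 0.95, 20)
X = np.column_stack([np.ones_like(x), x] +
                    [np.clip(x - k, 0.0, None) for k in knots])

# ridge penalty on the knot coefficients only -> a penalized-spline fit
lam = 1.0
D = np.diag([0.0, 0.0] + [1.0] * knots.size)
beta = np.linalg.solve(X.T @ X + lam * D, X.T @ y)
fit = X @ beta
```

Because the penalty matrix D leaves the intercept and slope unpenalized, shrinking `lam` toward 0 recovers an ordinary regression spline, while large `lam` flattens the fit toward a straight line.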
Bayesian spatial semi-parametric modeling of HIV variation in Kenya.
Directory of Open Access Journals (Sweden)
Oscar Ngesa
Full Text Available Spatial statistics has seen rapid application in many fields, especially epidemiology and public health. Many studies, nonetheless, make limited use of the geographical location information and also usually assume that the covariates, which are related to the response variable, have linear effects. We develop a Bayesian semi-parametric regression model for HIV prevalence data. Model estimation and inference are based on a fully Bayesian approach via Markov chain Monte Carlo (MCMC). The model is applied to HIV prevalence data among men in Kenya, derived from the Kenya AIDS indicator survey, with n = 3,662. Past studies have concluded that HIV infection has a nonlinear association with age. In this study a smooth function based on penalized regression splines is used to estimate this nonlinear effect. Other covariates were assumed to have linear effects. Spatial references to the counties were modeled as both structured and unstructured spatial effects. We observe that circumcision reduces the risk of HIV infection. The results also indicate that men in urban areas were more likely to be infected by HIV than their rural counterparts. Men with higher education had the lowest risk of HIV infection. A nonlinear relationship between HIV infection and age was established: the risk of HIV infection increases with age up to the age of 40 and then declines. Men who had an STI in the last 12 months were more likely to be infected with HIV, and men who had ever used a condom were also found to have a higher likelihood of HIV infection. A significant spatial variation of HIV infection in Kenya was also established. The study shows the practicality and flexibility of the Bayesian semi-parametric regression model in analyzing epidemiological data.
Sinha, Rajnikant
2014-01-01
This book offers an introduction to the theory of smooth manifolds, helping students to familiarize themselves with the tools they will need for mathematical research on smooth manifolds and differential geometry. The book primarily focuses on topics concerning differential manifolds, tangent spaces, multivariable differential calculus, topological properties of smooth manifolds, embedded submanifolds, Sard’s theorem and Whitney embedding theorem. It is clearly structured, amply illustrated and includes solved examples for all concepts discussed. Several difficult theorems have been broken into many lemmas and notes (equivalent to sub-lemmas) to enhance the readability of the book. Further, once a concept has been introduced, it reoccurs throughout the book to ensure comprehension. Rank theorem, a vital aspect of smooth manifolds theory, occurs in many manifestations, including rank theorem for Euclidean space and global rank theorem. Though primarily intended for graduate students of mathematics, the book ...
Scalable low-complexity B-spline discretewavelet transform architecture
Martina, Maurizio; Masera, Guido; Piccinini, Gianluca
2010-01-01
A scalable discrete wavelet transform architecture based on the B-spline factorisation is presented. In particular, it is shown that several wavelet filters of practical interest have a common structure in the distributed part of their B-spline factorisation. This common structure is effectively exploited to achieve scalability and to save multipliers compared with a direct polyphase B-spline implementation. Since the proposed solution is more robust to coefficient quantisation than direct po...
An Areal Isotropic Spline Filter for Surface Metrology
Zhang, Hao; Tong, Mingsi; Chu, Wei
2015-01-01
This paper deals with the application of the spline filter as an areal filter for surface metrology. A profile (2D) filter is often applied in orthogonal directions to yield an areal filter for a three-dimensional (3D) measurement. Unlike the Gaussian filter, the spline filter presents an anisotropic characteristic when used as an areal filter. This disadvantage hampers the wide application of spline filters for evaluation and analysis of areal surface topography. An approximation method is p...
Pseudo-cubic thin-plate type Spline method for analyzing experimental data
International Nuclear Information System (INIS)
A mathematical tool, using pseudo-cubic thin-plate type Spline, has been developed for analysis of experimental data points. The main purpose is to obtain, without any a priori given model, a mathematical predictor with related uncertainties, usable at any point in the multidimensional parameter space. The smoothing parameter is determined by a generalized cross validation method. The residual standard deviation obtained is significantly smaller than that of a least square regression. An example of use is given with critical heat flux data, showing a significant decrease of the conception criterion (minimum allowable value of the DNB ratio). (author) 4 figs., 1 tab., 7 refs
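The generalized cross-validation step described above can be illustrated with a discrete (Whittaker-type) analogue of the smoothing spline, penalizing second differences; this is a simplified one-dimensional stand-in for the paper's pseudo-cubic thin-plate spline, with made-up data:

```python
import numpy as np

def smooth_gcv(y, lambdas):
    # Discrete smoothing-spline fit: minimize ||y - f||^2 + lam * ||D2 f||^2,
    # with the smoothing parameter chosen by generalized cross-validation.
    n = len(y)
    D2 = np.diff(np.eye(n), 2, axis=0)                  # second-difference operator
    best = None
    for lam in lambdas:
        H = np.linalg.inv(np.eye(n) + lam * D2.T @ D2)  # smoother ("hat") matrix
        f = H @ y
        gcv = n * np.sum((y - f) ** 2) / (n - np.trace(H)) ** 2
        if best is None or gcv < best[0]:
            best = (gcv, lam, f)
    return best[1], best[2]

# noisy synthetic observations; GCV picks the smoothing level automatically
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 100)
truth = np.sin(2 * np.pi * t)
noisy = truth + rng.normal(0, 0.3, t.size)
lam, fhat = smooth_gcv(noisy, [1.0, 10.0, 100.0, 1000.0])
```

The GCV score trades off the residual sum of squares against the effective degrees of freedom tr(H), so no a priori model or noise estimate is needed, mirroring the paper's selection of the smoothing parameter.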
Faired MISO B-Spline Fuzzy Systems and Its Applications
Directory of Open Access Journals (Sweden)
Tan Yanhua
2013-01-01
Full Text Available We construct two classes of faired MISO B-spline fuzzy systems using the fairing method in computer-aided geometric design (CAGD) to reduce the adverse effects of inexact data. Towards this goal, we generalize the fairing method to high-dimensional cases, so that the fairing method, previously available only for SISO and DISO B-spline fuzzy systems, is extended to fair MISO ones. The problem of constructing a faired MISO B-spline fuzzy system is then transformed into an optimization problem with a strictly convex quadratic objective function, and the unique optimal solution vector is taken as the linear combination coefficients of the basis functions of a certain B-spline fuzzy system, yielding a faired MISO B-spline fuzzy system. Furthermore, we design variable universe adaptive fuzzy controllers, based on both B-spline fuzzy systems and faired B-spline fuzzy systems, to stabilize the double inverted pendulum. The simulation results show that the controllers based on faired B-spline fuzzy systems perform better than those based on B-spline fuzzy systems, especially when the data for the fuzzy systems are inexact.
Application of spline wavelet transform in differential of electroanalytical signal
Institute of Scientific and Technical Information of China (English)
(no author listed)
2001-01-01
Investigating the characteristics of spline wavelets, we found that if the two-order spline function, the derivative of the three-order B-spline function, is used as the wavelet base function, the spline wavelet transform has both a denoising and a differentiating property. The relation between the spline wavelet transform and differentiation was therefore studied theoretically. Experimental results show that the spline wavelet transform is well suited to the differentiation of electroanalytical signals. Compared with other kinds of wavelet transform, the spline wavelet transform has a differential characteristic. Compared with digital differentiation and simulated differentiation with an electronic circuit, the spline wavelet transform not only carries out denoising and differentiation of a signal, but also has the advantages of simple operation and a small amount of calculation, because the step length, RC constant and other parameters need not be selected. Compared with Alexander Kai-man Leung's differential method, the differential method based on the spline wavelet transform has the characteristic that the differential order does not depend on the number of data points in the original signal.
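The denoise-and-differentiate property can be demonstrated directly: convolving a noisy signal with the derivative of a dilated cubic B-spline (a close relative of the quadratic spline wavelet discussed above) smooths and differentiates in one pass, since (f * B)' = f * B'. The signal and dilation factor below are made up for illustration:

```python
import numpy as np

def cubic_bspline_deriv(t):
    # analytic derivative of the cubic B-spline (support [-2, 2])
    a = np.abs(t)
    d = np.where(a < 1, -2 * a + 1.5 * a ** 2,
        np.where(a < 2, -0.5 * (2 - a) ** 2, 0.0))
    return np.sign(t) * d

rng = np.random.default_rng(0)
x = np.linspace(0, 2 * np.pi, 400)
h = x[1] - x[0]
f = np.sin(x) + rng.normal(0, 0.02, x.size)

# sample the dilated derivative kernel; the 1/w^2 factor accounts for the
# dilation of both the spline and its derivative
w = 8                                           # dilation in samples
m = np.arange(-2 * w, 2 * w + 1)
kernel = cubic_bspline_deriv(m / w) / w ** 2
df = np.convolve(f, kernel, mode="same") / h    # smoothed estimate of f'(x)
```

Away from the boundary, `df` tracks cos(x) despite the added noise, with no step length or RC constant to tune beyond the dilation `w`.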
Nielsen, J D; Dean, C B
2008-09-01
A flexible semiparametric model for analyzing longitudinal panel count data arising from mixtures is presented. Panel count data refers here to count data on recurrent events collected as the number of events that have occurred within specific follow-up periods. The model assumes that the counts for each subject are generated by mixtures of nonhomogeneous Poisson processes with smooth intensity functions modeled with penalized splines. Time-dependent covariate effects are also incorporated into the process intensity using splines. Discrete mixtures of these nonhomogeneous Poisson process spline models extract functional information from underlying clusters representing hidden subpopulations. The motivating application is an experiment to test the effectiveness of pheromones in disrupting the mating pattern of the cherry bark tortrix moth. Mature moths arise from hidden, but distinct, subpopulations and monitoring the subpopulation responses was of interest. Within-cluster random effects are used to account for correlation structures and heterogeneity common to this type of data. An estimating equation approach to inference requiring only low moment assumptions is developed and the finite sample properties of the proposed estimating functions are investigated empirically by simulation.
Bayesian Kernel Mixtures for Counts
Canale, Antonio; David B Dunson
2011-01-01
Although Bayesian nonparametric mixture models for continuous data are well developed, there is a limited literature on related approaches for count data. A common strategy is to use a mixture of Poissons, which unfortunately is quite restrictive in not accounting for distributions having variance less than the mean. Other approaches include mixing multinomials, which requires finite support, and using a Dirichlet process prior with a Poisson base measure, which does not allow smooth deviatio...
Gaussian quadrature for splines via homotopy continuation: Rules for C2 cubic splines
Bartoň, Michael
2015-10-24
We introduce a new concept for generating optimal quadrature rules for splines. To generate an optimal quadrature rule in a given (target) spline space, we build an associated source space with known optimal quadrature and transfer the rule from the source space to the target one, while preserving the number of quadrature points and therefore optimality. The quadrature nodes and weights are, considered as a higher-dimensional point, a zero of a particular system of polynomial equations. As the space is continuously deformed by changing the source knot vector, the quadrature rule gets updated using polynomial homotopy continuation. For example, starting with C1 cubic splines with uniform knot sequences, we demonstrate the methodology by deriving the optimal rules for uniform C2 cubic spline spaces, where the rule was only conjectured to date. We validate our algorithm by showing that the resulting quadrature rule is independent of the path chosen between the target and the source knot vectors as well as of the source rule chosen.
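The notion of optimality used here (exactness on a whole function space with the fewest possible nodes) is the spline analogue of classical Gauss-Legendre quadrature, which the polynomial case illustrates in a few lines:

```python
import numpy as np

# 2-point Gauss-Legendre: exact for all polynomials of degree <= 3 on [-1, 1].
# Optimal spline rules generalize this "fewest nodes for exactness" property
# from polynomial spaces to spline spaces.
nodes, weights = np.polynomial.legendre.leggauss(2)

def gauss2(f):
    return float(np.sum(weights * f(nodes)))

# compare against the exact monomial integrals over [-1, 1]
errors = [abs(gauss2(lambda t, k=k: t ** k) - (2.0 / (k + 1) if k % 2 == 0 else 0.0))
          for k in range(4)]
```

By linearity, exactness on the monomial basis implies exactness on the full space; checking a rule on a basis of the target spline space works the same way.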
Survival Analysis with Multivariate adaptive Regression Splines
Kriner, Monika
2007-01-01
Multivariate adaptive regression splines (MARS) are a useful tool to identify linear and nonlinear effects and interactions between two covariates. In this dissertation a new proposal to model survival-type data with MARS is introduced. Martingale and deviance residuals of a Cox PH model are used as response in a common MARS approach to model functional forms of covariate effects as well as possible interactions in a data-driven way. Simulation studies prove that the new method yields a bett...
Campanelli, L
2016-01-01
In the Ratra scenario of inflationary magnetogenesis, the kinematic coupling between the photon and the inflaton undergoes a nonanalytical jump at the end of inflation. Using smooth interpolating analytical forms of the coupling function, we show that this unphysical jump does not invalidate the main prediction of the model, which still represents a viable mechanism for explaining cosmic magnetization. Nevertheless, there is a spurious result associated with the nonanalyticity of the coupling, to wit, the prediction that the spectrum of created photons has a power-law decay in the ultraviolet regime. This issue is discussed using both the semiclassical approximation and smooth coupling functions.
Cylindrical Helix Spline Approximation of Spatial Curves
Institute of Scientific and Technical Information of China (English)
(no author listed)
2007-01-01
In this paper, we present a new method for approximating spatial curves with a G1 cylindrical helix spline within a prescribed tolerance. We deduce the general formulation of a cylindrical helix, which has 11 degrees of freedom; this means that 11 restrictions are needed to determine a cylindrical helix. Given a spatial parametric curve segment, including the start point and the end point of the segment and the tangent and the principal normal at the start point, we can always find a cylindrical helix segment to interpolate the given direction and position vectors. In order to approximate the known parametric curve within the prescribed tolerance, we proceed by trial, step by step. First, we ensure that the helix segment interpolates the given two end points and matches the principal normal and tangent at the start point; then, we keep the deviation between the cylindrical helix segment and the known curve segment within the prescribed tolerance everywhere. After the first segment has been formed, we construct the next segment. Proceeding in this way, we construct a G1 cylindrical helix spline that approximates the whole spatial parametric curve within the prescribed tolerance. Several examples are given to show the efficiency of this method.
Wavefront reconstruction in adaptive optics systems using nonlinear multivariate splines
De Visser, C.C.; Verhaegen, M.H.G.
2012-01-01
This paper presents a new method for zonal wavefront reconstruction (WFR) with application to adaptive optics systems. This new method, indicated as Spline based ABerration REconstruction (SABRE), uses bivariate simplex B-spline basis functions to reconstruct the wavefront using local wavefront slop
Exponential B-splines and the partition of unity property
DEFF Research Database (Denmark)
Christensen, Ole; Massopust, Peter
2012-01-01
We provide an explicit formula for a large class of exponential B-splines. Also, we characterize the cases where the integer-translates of an exponential B-spline form a partition of unity up to a multiplicative constant. As an application of this result we construct explicitly given pairs of dual...
Trigonometric polynomial B-spline with shape parameter
Institute of Scientific and Technical Information of China (English)
WANG Wentao; WANG Guozhao
2004-01-01
The basis function of the nth-order trigonometric polynomial B-spline with a shape parameter is constructed by an integral approach. The shape of the constructed curve can be adjusted by changing the shape parameter, and the curve has most of the properties of B-splines. The ellipse and the circle can be represented exactly by this basis function.
Nonlinear and fault-tolerant flight control using multivariate splines
Tol, H.J.; De Visser, C.C.; Van Kampen, E.J.; Chu, Q.P.
2015-01-01
This paper presents a study on fault tolerant flight control of a high performance aircraft using multivariate splines. The controller is implemented by making use of spline model based adaptive nonlinear dynamic inversion (NDI). This method, indicated as SANDI, combines NDI control with nonlinear c
A family of quasi-cubic blended splines and applications
Institute of Scientific and Technical Information of China (English)
SU Ben-yue; TAN Jie-qing
2006-01-01
A class of quasi-cubic B-spline basis functions based on trigonometric polynomials is established, inheriting properties similar to those of cubic B-spline bases. The corresponding curves with a shape parameter α, defined by the introduced basis functions, include the B-spline curves and can approximate the B-spline curves from both sides. The curves can be adjusted easily by using the shape parameter α, where dpi(α,t) is linear with respect to dα for fixed t. With the shape parameter chosen properly, the defined curves can represent straight line segments, parabola segments, circular arcs and some transcendental curves exactly, and the corresponding tensor-product surfaces can also represent spherical surfaces, cylindrical surfaces and some transcendental surfaces exactly. By abandoning the positivity property, this paper proposes a new C2 continuous blended interpolation spline based on piecewise trigonometric polynomials associated with a sequence of local parameters. Illustrations show that the curves and surfaces constructed by the blended spline can be adjusted easily and freely. The blended interpolation spline curves can be shape-preserving with proper local parameters, since these local parameters can be considered the magnification ratio of the length of the tangent vectors at the interpolating points. The idea is extended to produce blended spline surfaces.
Positivity and Monotonicity Preserving Biquartic Rational Interpolation Spline Surface
Directory of Open Access Journals (Sweden)
Xinru Liu
2014-01-01
Full Text Available A biquartic rational interpolation spline surface over rectangular domain is constructed in this paper, which includes the classical bicubic Coons surface as a special case. Sufficient conditions for generating shape preserving interpolation splines for positive or monotonic surface data are deduced. The given numeric experiments show our method can deal with surface construction from positive or monotonic data effectively.
MOTION VELOCITY SMOOTH LINK IN HIGH SPEED MACHINING
Institute of Scientific and Technical Information of China (English)
REN Kun; FU Jianzhong; CHEN Zichen
2007-01-01
To deal with overshooting and gouging in high-speed machining, a novel approach for velocity smooth linking is proposed. Considering the discrete tool path, cubic spline curve fitting is used to find dangerous points, and maximum optimal velocities at these points are obtained according to the spatial geometric properties of the tool path and kinematics theory. Based on the velocity control characteristics stored in the control system, a fast algorithm for velocity smooth linking is analyzed and formulated. On-line implementation results show that the proposed approach makes velocity changes smoother than traditional velocity control methods and greatly improves productivity.
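A sketch of the curvature-based velocity limit underlying such schemes (not the paper's algorithm): fit a cubic spline to discrete tool-path points, evaluate curvature from its derivatives, and cap the feed rate by a lateral-acceleration bound. The path and acceleration limit below are hypothetical:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# hypothetical discrete tool path: a quarter arc of radius 2, so the true
# curvature is 0.5 everywhere
t = np.linspace(0, np.pi / 2, 30)
pts = np.column_stack([2 * np.cos(t), 2 * np.sin(t)])

s = np.linspace(0, 1, len(pts))
cs = CubicSpline(s, pts)                       # fit x(s) and y(s) together
d1, d2 = cs(s, 1), cs(s, 2)                    # first and second derivatives
kappa = (np.abs(d1[:, 0] * d2[:, 1] - d1[:, 1] * d2[:, 0])
         / (d1[:, 0] ** 2 + d1[:, 1] ** 2) ** 1.5)

a_max = 4.0                                    # assumed lateral-acceleration limit
v_limit = np.sqrt(a_max / kappa)               # velocity bound at each path point
```

Points where `v_limit` dips below the programmed feed rate are the "dangerous points" at which the velocity profile must be smoothed down.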
Variational splines on Riemannian manifolds with applications to integral geometry
Pesenson, Isaac
2011-01-01
We extend the classical theory of variational interpolating splines to the case of compact Riemannian manifolds. Our consideration includes in particular such problems as interpolation of a function by its values on a discrete set of points and interpolation by values of integrals over a family of submanifolds. The existence and uniqueness of an interpolating variational spline on a Riemannian manifold are proven. Optimal properties of such splines are shown. Explicit formulas for variational splines in terms of the eigenfunctions of the Laplace-Beltrami operator are found. It is also shown that in the case of interpolation on discrete sets of points, variational splines converge to a function in $C^{k}$ norms on manifolds. Applications of these results to the hemispherical and Radon transforms on the unit sphere are given.
Testing for additivity with B-splines
Institute of Scientific and Technical Information of China (English)
Heng-jian CUI; Xu-ming HE; Li LIU
2007-01-01
Regression splines are often used for fitting nonparametric functions, and they work especially well for additive models. In this paper, we consider two simple tests of additivity: an adaptation of Tukey's one degree of freedom test and a nonparametric version of Rao's score test. While the Tukey-type test can detect most forms of local non-additivity at the parametric rate of O(n-1/2), the score test is consistent for all alternatives at a nonparametric rate. The asymptotic distributions of these test statistics are derived under both the null and local alternative hypotheses. A simulation study is conducted to compare their finite-sample performance with some existing kernel-based tests. The score test is found to have good overall performance.
Institute of Scientific and Technical Information of China (English)
Joong-Hyun Rhim; Doo-Yeoun Cho; Kyu-Yeul Lee; Tae-Wan Kim
2006-01-01
We propose a method that automatically generates discrete bicubic G1 continuous B-spline surfaces that interpolate the curve network of a ship hullform. First, the curves in the network are classified into two types: boundary curves and "reference curves". The boundary curves correspond to a set of rectangular (or triangular) topological type that can be represented with tensor-product (or degenerate) B-spline surface patches. Next, in the interior of the patches, surface fitting points and cross-boundary derivatives are estimated from the reference curves by constructing "virtual" isoparametric curves. Finally, a discrete G1 continuous B-spline surface is generated by a surface fitting algorithm. Several smooth ship hullform surfaces generated from curve networks corresponding to actual ship hullforms demonstrate the quality of the method.
Bessiere, Pierre; Ahuactzin, Juan Manuel; Mekhnacha, Kamel
2013-01-01
Probability as an Alternative to Boolean Logic. While logic is the mathematical foundation of rational reasoning and the fundamental principle of computing, it is restricted to problems where information is both complete and certain. However, many real-world problems, from financial investments to email filtering, are incomplete or uncertain in nature. Probability theory and Bayesian computing together provide an alternative framework to deal with incomplete and uncertain data. Decision-Making Tools and Methods for Incomplete and Uncertain Data. Emphasizing probability as an alternative to Boolean
Numerical simulation of involutes spline shaft in cold rolling forming
Institute of Scientific and Technical Information of China (English)
王志奎; 张庆
2008-01-01
The design of forming dies and the whole simulation process of cold rolling of involute splines can be realized using the CAD software PRO-E and the CAE software DEFORM-3D. DEFORM-3D provides an automatic, optimized remeshing function, especially for large deformations. In order to use this function fully, the simulation of cold rolling of involute splines can be implemented indirectly. The relationship between die and workpiece, the forming force, and the characteristics of deformation in the cold rolling process of involute splines are analyzed and researched. Meanwhile, reliable evidence is provided for the design of dies and deforming equipment.
A New Local Control Spline with Shape Parameters for CAD/CAM
Institute of Scientific and Technical Information of China (English)
秦开怀; 孙家广
1993-01-01
A new local control spline based on shape parameters with G3 continuity, called the BLC-spline, is proposed. Not only is the BLC-spline very smooth, but the spline curve's characteristic polygon has only three control vertices and the characteristic polyhedron has only nine control vertices. The local control behavior of the BLC-spline is better than that of other splines such as the cubic Bezier, B-spline and Beta-spline. The three shape parameters β0, β1 and β2 of the BLC-spline, which are independent of the control vertices, may be altered to change the shape of the curve or surface. It is shown that the BLC-spline may be used to construct a space arc spline for DNC machining directly, providing a powerful tool for the design and manufacture of curves and surfaces in integrated CAD/CAM systems.
Modeling terminal ballistics using blending-type spline surfaces
Pedersen, Aleksander; Bratlie, Jostein; Dalmo, Rune
2014-12-01
We explore using GERBS, a blending-type spline construction, to represent deformable thin plates and model terminal ballistics. Strategies to construct geometry for different scenarios of terminal ballistics are proposed.
A note on linear B-spline copulas
Erdely, Arturo
2016-01-01
In this brief note we prove that linear B-spline copulas are not a new family of copulas, since they are equivalent to checkerboard copulas, and we discuss in particular how they are used to extend empirical subcopulas to copulas.
Estimating Financial Trends by Spline Fitting via Fisher Algorithm
BARAN, Mehmet; SÖNMEZER, Sıtkı; UÇAR, Abdulvahid
2015-01-01
Trends have a crucial role in finance, for example in setting investment strategies and in technical analysis. Determining trend changes in an optimal way is the main aim of this study. The model of this study improves optimality by spline fitting the equations to reduce the error terms. The results show that spline fitting is more efficient than line fitting by % and than the Fisher method by %. This method may be used to determine regime switches as well.
Representative fretting fatigue testing and prediction for splined couplings
Houghton, Dean
2009-01-01
Spline couplings are a compact and efficient means for transferring torque between shafts in gas turbine aeroengines. With competition in the aerospace market and the need to reduce fuel burn from the flight carriers, there is an ever-present requirement for enhanced performance. Spline couplings are complex components that can fail from a variety of mechanisms, and are susceptible to fretting wear and fretting fatigue (FF). Due to the expensive nature of full-scale testing, this thesis inves...
G1 Continuity Conditions of B-spline Surfaces
Institute of Scientific and Technical Information of China (English)
车翔玖; 梁学章
2002-01-01
Based on B-spline theory and the Boehm algorithm, this paper presents several necessary and sufficient G1 continuity conditions between two adjacent B-spline surfaces. To meet the needs of applications, a set of sufficient conditions for G1 continuity is developed, and sufficient conditions for G1 continuity among N (N>2) B-spline surface patches meeting at a common corner are given at the end.
The 'clamshell problem' - Interpolation of shaped reflectors and other smooth surfaces
Baker, Lynn A.
1988-11-01
An interpolation method for shaped reflector antennas and similar smooth surfaces is developed using cubic spline interpolants in a parametric representation. Ordinary one-dimensional splines are combined in tensor products to give a two-dimensional interpolant for each cartesian component of points of the surface. The two independent variables are the polar coordinates of rays traced from an aperture disk. This ray tracing maps a grid of lines from the aperture to the surface being interpolated and gives a general-purpose method for interpolating smooth surfaces. The polar coordinates are partitioned into uniform intervals, which simplifies the calculations. The interpolant is differentiated to provide partial derivatives of the surface coordinates, and these derivatives are combined to give surface normals and Jacobians. The bicubic spline is also integrated to give a general-purpose two-dimensional integration routine. The parametric form makes it easy to find a variety of cross sections, boundaries, inflection points, and other characteristics of the surface.
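The tensor-product construction described above is what `scipy.interpolate.RectBivariateSpline` implements; a minimal sketch of obtaining partial derivatives and a surface normal from a bicubic interpolant (the test surface is chosen arbitrarily, not a reflector profile):

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

# sample an arbitrary smooth surface on a grid of the two parameters
x = np.linspace(0, 1, 20)
y = np.linspace(0, 1, 20)
X, Y = np.meshgrid(x, y, indexing="ij")
Z = np.sin(np.pi * X) * np.cos(np.pi * Y)

spl = RectBivariateSpline(x, y, Z, kx=3, ky=3)  # bicubic tensor-product spline

# partial derivatives of the interpolant give the surface normal
u, v = 0.3, 0.6
zu = float(spl.ev(u, v, dx=1))                  # dz/dx at (u, v)
zv = float(spl.ev(u, v, dy=1))                  # dz/dy at (u, v)
normal = np.array([-zu, -zv, 1.0])
normal /= np.linalg.norm(normal)
```

In the paper's setting each Cartesian component of the surface gets its own such interpolant in the two polar ray coordinates, and the derivatives are combined into normals and Jacobians exactly as above.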
Stable Local Volatility Calibration Using Kernel Splines
Coleman, Thomas F.; Li, Yuying; Wang, Cheng
2010-09-01
We propose an optimization formulation using L1 norm to ensure accuracy and stability in calibrating a local volatility function for option pricing. Using a regularization parameter, the proposed objective function balances the calibration accuracy with the model complexity. Motivated by the support vector machine learning, the unknown local volatility function is represented by a kernel function generating splines and the model complexity is controlled by minimizing the 1-norm of the kernel coefficient vector. In the context of the support vector regression for function estimation based on a finite set of observations, this corresponds to minimizing the number of support vectors for predictability. We illustrate the ability of the proposed approach to reconstruct the local volatility function in a synthetic market. In addition, based on S&P 500 market index option data, we demonstrate that the calibrated local volatility surface is simple and resembles the observed implied volatility surface in shape. Stability is illustrated by calibrating local volatility functions using market option data from different dates.
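A stripped-down analogue of the 1-norm idea (not the paper's calibration procedure): represent the unknown function as a kernel expansion and minimize squared error plus an L1 penalty on the coefficients by proximal gradient (ISTA), which drives most coefficients to exactly zero. The kernel, data, and penalty weight are made up:

```python
import numpy as np

def soft(z, t):
    # soft-thresholding: the proximal operator of the L1 norm
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 80)
y = np.sin(3 * x) + rng.normal(0, 0.05, x.size)

# kernel expansion f(x) = sum_j w_j k(x, x_j), Gaussian kernel
K = np.exp(-((x[:, None] - x[None, :]) ** 2) / 0.1)

# ISTA: proximal-gradient descent on ||Kw - y||^2 / 2 + lam * ||w||_1
lam = 0.05
eta = 1.0 / np.linalg.norm(K, 2) ** 2          # step size from the Lipschitz bound
w = np.zeros(x.size)
for _ in range(2000):
    w = soft(w - eta * K.T @ (K @ w - y), eta * lam)
fit = K @ w
```

The surviving nonzero coefficients play the role of the support vectors in the paper, so minimizing the 1-norm directly controls model complexity.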
Railroad inspection based on ACFM employing a non-uniform B-spline approach
Chacón Muñoz, J. M.; García Márquez, F. P.; Papaelias, M.
2013-11-01
The stresses sustained by rails have increased in recent years due to the use of higher train speeds and heavier axle loads. For this reason surface and near-surface defects generated by Rolling Contact Fatigue (RCF) have become particularly significant, as they can cause unexpected structural failure of the rail, resulting in severe derailments. The accident that took place in Hatfield, UK (2000), is an example of a derailment caused by the structural failure of a rail section due to RCF. Early detection of RCF rail defects is therefore of paramount importance to the rail industry. The performance of existing ultrasonic and magnetic flux leakage techniques in detecting rail surface-breaking defects, such as head checks and gauge corner cracking, is inadequate during high-speed inspection, while eddy current sensors suffer from lift-off effects. The results obtained through rail inspection experiments under simulated conditions using Alternating Current Field Measurement (ACFM) probes suggest that this technique can be applied for the accurate and reliable detection of surface-breaking defects at high inspection speeds. This paper presents the B-spline approach used to accurately filter the noise of the raw ACFM signal obtained during high-speed tests, improving the reliability of the measurements. A non-uniform B-spline approximation is employed to calculate the exact positions and dimensions of the defects. This method generates a smooth approximation of the ACFM data points related to the rail surface-breaking defect.
Introduction to Bayesian statistics
Bolstad, William M
2016-01-01
There is a strong upsurge in the use of Bayesian methods in applied statistical analysis, yet most introductory statistics texts only present frequentist methods. Bayesian statistics has many important advantages that students should learn about if they are going into fields where statistics will be used. In this Third Edition, four newly added chapters address topics that reflect the rapid advances in the field of Bayesian statistics. The author continues to provide a Bayesian treatment of introductory statistical topics, such as scientific data gathering, discrete random variables, robust Bayesian methods, and Bayesian approaches to inference for discrete random variables, binomial proportion, Poisson, normal mean, and simple linear regression. In addition, newly developing topics in the field are presented in four new chapters: Bayesian inference with unknown mean and variance; Bayesian inference for the Multivariate Normal mean vector; Bayesian inference for the Multiple Linear Regression Model; and Computati...
Bayesian artificial intelligence
Korb, Kevin B
2003-01-01
As the power of Bayesian techniques has become more fully realized, the field of artificial intelligence has embraced Bayesian methodology and integrated it to the point where an introduction to Bayesian techniques is now a core course in many computer science programs. Unlike other books on the subject, Bayesian Artificial Intelligence keeps mathematical detail to a minimum and covers a broad range of topics. The authors integrate Bayesian network technology and Bayesian network learning, and apply both to knowledge engineering. They emphasize understanding and intuition but also provide the algorithms and technical background needed for applications. Software, exercises, and solutions are available on the authors' website.
Bayesian artificial intelligence
Korb, Kevin B
2010-01-01
Updated and expanded, Bayesian Artificial Intelligence, Second Edition provides a practical and accessible introduction to the main concepts, foundation, and applications of Bayesian networks. It focuses on both the causal discovery of networks and Bayesian inference procedures. Adopting a causal interpretation of Bayesian networks, the authors discuss the use of Bayesian networks for causal modeling. They also draw on their own applied research to illustrate various applications of the technology. New to the Second Edition: a new chapter on Bayesian network classifiers and a new section on object-oriente...
An accurate spline polynomial cubature formula for double integration with logarithmic singularity
Bichi, Sirajo Lawan; Eshkuvatov, Z. K.; Long, N. M. A. Nik; Bello, M. Y.
2016-06-01
The paper studies the integration of the logarithmic singularity problem J(ȳ) = ∬_∇ ζ(ȳ) log|ȳ − ȳ₀| dA, where ȳ = (α, β) and ȳ₀ = (α₀, β₀); the domain ∇ is the rectangle ∇ = [r₁, r₂] × [r₃, r₄], with ȳ ∈ ∇ an arbitrary point and ȳ₀ ∈ ∇ a fixed point. The given density function ζ(ȳ) is smooth on the rectangular domain ∇ and belongs to the function class C^{2,τ}(∇). A cubature formula (CF) for double integration with logarithmic singularities (LS) on the rectangle ∇ is constructed by applying the type (0, 2) modified spline function D_Γ(P). Tests with linear and absolute-value density functions ζ(ȳ) show that the constructed CF is highly accurate.
Adaptive non-uniform B-spline dictionaries on a compact interval
Rebollo-Neira, Laura
2009-01-01
Non-uniform B-spline dictionaries on a compact interval are discussed. For each given partition, dictionaries of B-spline functions for the corresponding spline space are constructed. It is asserted that, by dividing the given partition into subpartitions and joining together the bases for the concomitant subspaces, slightly redundant dictionaries of B-spline functions are obtained. Such dictionaries are proved to span the spline space associated with the given partition. The proposed construction is shown to be potentially useful for the purpose of sparse signal representation. With that goal in mind, spline spaces specially adapted to produce a sparse representation of a given signal are considered.
Nonequilibrium flows with smooth particle applied mechanics
Energy Technology Data Exchange (ETDEWEB)
Kum, O.
1995-07-01
Smooth particle methods are relatively new methods for simulating solid and fluid flows, though they have a 20-year history of solving complex hydrodynamic problems in astrophysics, such as colliding planets and stars, for which correct answers are unknown. The results presented in this thesis evaluate the adaptability or fitness of the method for typical hydrocode production problems. For finite hydrodynamic systems, boundary conditions are important. A reflective boundary condition with image particles is a good way to prevent a density anomaly at the boundary and to keep the fluxes continuous there. Boundary values of temperature and velocity can be separately controlled. The gradient algorithm, based on differentiating the smooth particle expression for (uρ) and (Tρ), does not show numerical instabilities for the stress tensor and heat flux vector quantities, which require second derivatives in space, when Fourier's heat-flow law and Newton's viscous force law are used. Smooth particle methods show an interesting parallel link to molecular dynamics. For the inviscid Euler equation, with an isentropic ideal gas equation of state, the smooth particle algorithm generates trajectories isomorphic to those generated by molecular dynamics. The shear moduli were evaluated based on molecular dynamics calculations for the three weighting functions: B-spline, Lucy, and Cusp functions. The accuracy and applicability of the methods were estimated by comparing a set of smooth particle Rayleigh-Benard problems, all in the laminar regime, to corresponding highly accurate grid-based numerical solutions of continuum equations. Both transient and stationary smooth particle solutions reproduce the grid-based data with velocity errors on the order of 5%. The smooth particle method still provides robust solutions at high Rayleigh number, where grid-based methods fail.
Fingerprint Representation Methods Based on B-Spline Functions
Institute of Scientific and Technical Information of China (English)
Ruan Ke; Xia De-lin; Yan Pu-liu
2004-01-01
The global characteristics of a fingerprint image, such as the ridge shape and ridge topology, are often ignored in most automatic fingerprint verification systems. In this paper, a new representation method based on B-spline curves is proposed to address this problem. The resultant B-spline curves can represent the global characteristics completely, and the curves are analyzable and precise. An algorithm is also proposed to extract the curves from the fingerprint image. While preserving most of the information in the fingerprint image, the algorithm reduces the number of knot points of the B-spline curve to a minimum. The influence of fingerprint image noise is also discussed. In the end, an example is given to demonstrate the effectiveness of the representation method.
Gelman, Andrew; Stern, Hal S; Dunson, David B; Vehtari, Aki; Rubin, Donald B
2013-01-01
FUNDAMENTALS OF BAYESIAN INFERENCE: Probability and Inference; Single-Parameter Models; Introduction to Multiparameter Models; Asymptotics and Connections to Non-Bayesian Approaches; Hierarchical Models. FUNDAMENTALS OF BAYESIAN DATA ANALYSIS: Model Checking; Evaluating, Comparing, and Expanding Models; Modeling Accounting for Data Collection; Decision Analysis. ADVANCED COMPUTATION: Introduction to Bayesian Computation; Basics of Markov Chain Simulation; Computationally Efficient Markov Chain Simulation; Modal and Distributional Approximations. REGRESSION MODELS: Introduction to Regression Models; Hierarchical Linear
Yuan, Ying; MacKinnon, David P.
2009-01-01
This article proposes Bayesian analysis of mediation effects. Compared to conventional frequentist mediation analysis, the Bayesian approach has several advantages. First, it allows researchers to incorporate prior information into the mediation analysis, thus potentially improving the efficiency of estimates. Second, under the Bayesian mediation analysis, inference is straightforward and exact, which makes it appealing for studies with small samples. Third, the Bayesian approach is conceptua...
Bayesian Games with Intentions
Bjorndahl, Adam; Halpern, Joseph Y.; Pass, Rafael
2016-01-01
We show that standard Bayesian games cannot represent the full spectrum of belief-dependent preferences. However, by introducing a fundamental distinction between intended and actual strategies, we remove this limitation. We define Bayesian games with intentions, generalizing both Bayesian games and psychological games, and prove that Nash equilibria in psychological games correspond to a special class of equilibria as defined in our setting.
Image Compression and Reconstruction using Cubic Spline Interpolation Technique
Directory of Open Access Journals (Sweden)
R. Muthaiah
2008-01-01
A new approach to image compression using random pixels from irregular sampling, with image reconstruction by cubic-spline interpolation, is proposed in this paper. It also covers the wide field of multimedia communication concerned with multimedia messaging (MMS) and image transfer through mobile phones, seeking a mechanism to transfer images with minimum bandwidth requirements. The method provides better efficiency in both pixel reconstruction and color reproduction. The discussion covers the techniques of random pixel selection, transfer, and implementation of efficient reconstruction with cubic spline interpolation.
Vibration Analysis of Beams by Spline Finite Element
Institute of Scientific and Technical Information of China (English)
YANG Hao; SUN Li
2011-01-01
In this paper, the spline finite element method is developed to investigate free vibration problems of beams. The cubic B-spline functions are used to construct the displacement field. The assembly of elements and the introduction of boundary conditions follow the standard finite element procedure. The results under various boundary conditions are compared with those obtained by the exact method and the finite difference method. The results are in excellent agreement with the analytical results and much more accurate than those obtained by the finite difference method, especially for higher-order modes.
Anacleto Junior, Osvaldo; Queen, Catriona; Albers, Casper
2013-01-01
Traffic flow data are routinely collected for many networks worldwide. These invariably large data sets can be used as part of a traffic management system, for which good traffic flow forecasting models are crucial. The linear multiregression dynamic model (LMDM) has been shown to be promising for forecasting flows, accommodating multivariate flow time series, while being a computationally simple model to use. While statistical flow forecasting models usually base their forecasts on flow da...
Approximating Spline filter: New Approach for Gaussian Filtering in Surface Metrology
Hao Zhang; Yibao Yuan
2009-01-01
This paper presents a new spline filter, named the approximating spline filter, for surface metrology. The purpose is to provide a new approach to the Gaussian filter and to evaluate the characteristics of an engineering surface more accurately and comprehensively. First, the configuration of the approximating spline filter is investigated, showing that this filter inherits all the merits of an ordinary spline filter, e.g., no phase distortion and no end distortion. Then, the approximating coefficients...
Meshing Force of Misaligned Spline Coupling and the Influence on Rotor System
Feng Chen; Zhansheng Liu; Guang Zhao
2008-01-01
The meshing force of a misaligned spline coupling is derived, the dynamic equation of the rotor-spline coupling system is established based on finite element analysis, and the influence of meshing force on the rotor-spline coupling system is simulated by a numerical integration method. According to the theoretical analysis, the meshing force of a spline coupling is related to coupling parameters, misalignment, transmitted torque, static misalignment, dynamic vibration displacement, and so on. The meshing force increases non...
DEFF Research Database (Denmark)
Jensen, Finn Verner; Nielsen, Thomas Dyhre
2016-01-01
Mathematically, a Bayesian graphical model is a compact representation of the joint probability distribution for a set of variables. The most frequently used type of Bayesian graphical models are Bayesian networks. The structural part of a Bayesian graphical model is a graph consisting of nodes and edges. The nodes represent variables, which may be either discrete or continuous. An edge between two nodes A and B indicates a direct influence between the state of A and the state of B, which in some domains can also be interpreted as a causal relation. The wide-spread use of Bayesian networks is largely due to the availability of efficient inference algorithms for answering probabilistic queries about the states of the variables in the network. Furthermore, to support the construction of Bayesian network models, learning algorithms are also available. We give an overview of the Bayesian network...
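The probabilistic queries mentioned above can be illustrated on a minimal two-node network A → B (all probabilities below are made up): the joint distribution factorizes along the edge as P(A, B) = P(A) P(B | A), and a query is answered by enumerating the joint table.

```python
# Minimal Bayesian network A -> B with illustrative probabilities.
p_a = 0.3                               # P(A = true)
p_b_given = {True: 0.9, False: 0.2}     # P(B = true | A)

# Joint distribution factorizes along the edge: P(A, B) = P(A) * P(B | A).
joint = {}
for a in (True, False):
    pa = p_a if a else 1.0 - p_a
    for b in (True, False):
        pb = p_b_given[a] if b else 1.0 - p_b_given[a]
        joint[(a, b)] = pa * pb

# Probabilistic query by enumeration: P(A = true | B = true).
p_b = sum(p for (a, b), p in joint.items() if b)
p_a_given_b = joint[(True, True)] / p_b
```

With these numbers the query evaluates to 0.27 / 0.41, about 0.659; real networks replace this brute-force enumeration with the efficient inference algorithms the abstract refers to.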
Kriging and thin plate splines for mapping climate variables
Boer, E.P.J.; Beurs, de K.M.; Hartkamp, A.D.
2001-01-01
Four forms of kriging and three forms of thin plate splines are discussed in this paper to predict monthly maximum temperature and monthly mean precipitation in Jalisco State, Mexico. Results show that techniques using elevation as additional information improve the prediction results considerably.
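A thin plate spline interpolator of the kind compared here can be sketched in a few lines of NumPy (radial kernel U(r) = r² log r plus an affine part); the "station" coordinates and values below are synthetic, not the Jalisco data.

```python
import numpy as np

def tps_fit(xy, z):
    """Solve for thin plate spline weights: kernel part w and affine part a."""
    n = len(xy)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    K = d**2 * np.log(np.where(d == 0.0, 1.0, d))   # U(0) = 0 by convention
    P = np.hstack([np.ones((n, 1)), xy])
    A = np.zeros((n + 3, n + 3))
    A[:n, :n], A[:n, n:], A[n:, :n] = K, P, P.T
    return np.linalg.solve(A, np.concatenate([z, np.zeros(3)]))

def tps_eval(xy_train, coef, xy_new):
    """Evaluate the fitted spline at new locations."""
    d = np.linalg.norm(xy_new[:, None, :] - xy_train[None, :, :], axis=-1)
    K = d**2 * np.log(np.where(d == 0.0, 1.0, d))
    w, a = coef[:len(xy_train)], coef[len(xy_train):]
    return K @ w + a[0] + xy_new @ a[1:]

# Synthetic "stations" and a climate-like field to interpolate.
rng = np.random.default_rng(0)
xy = rng.uniform(0.0, 1.0, size=(25, 2))
z = np.sin(3.0 * xy[:, 0]) + xy[:, 1] ** 2
coef = tps_fit(xy, z)
zhat = tps_eval(xy, coef, xy)   # a TPS interpolates the data exactly
```

The side conditions P.T @ w = 0 (last three rows of the system) make the bending-energy minimizer unique; adding elevation as a covariate, as the paper does, would extend the affine part P with an extra column.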
Sparse image representation by discrete cosine/spline based dictionaries
Bowley, James
2009-01-01
Mixed dictionaries generated by cosine and B-spline functions are considered. It is shown that, by highly nonlinear approaches such as Orthogonal Matching Pursuit, the discrete version of the proposed dictionaries yields a significant gain in the sparsity of an image representation.
A matrix method for degree-raising of B-spline curves
Institute of Scientific and Technical Information of China (English)
秦开怀
1997-01-01
A new identity is proved that represents the kth order B-splines as linear combinations of the (k + 1)th order B-splines. A new method for degree-raising of B-spline curves is presented based on this identity. The new method can be used for all kinds of B-spline curves, that is, both uniform and arbitrarily non-uniform B-spline curves. When used for degree-raising of a segment of a uniform B-spline curve of degree k − 1, it yields a segment of a curve of degree k that is still a uniform B-spline curve, without raising the multiplicity of any knot. The method for degree-raising of Bezier curves can be regarded as a special case of the new method presented. Moreover, a shortcoming of the conventional theory of degree-raising is identified and discussed.
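The Bezier special case mentioned at the end is easy to sketch: degree elevation replaces the control points by the convex combinations b'_i = (i/(k+1)) b_{i−1} + (1 − i/(k+1)) b_i, leaving the curve itself unchanged. The check below evaluates both polygons with de Casteljau's algorithm (control points are illustrative).

```python
import numpy as np

def de_casteljau(ctrl, t):
    """Evaluate a Bezier curve at parameter t by repeated linear interpolation."""
    pts = np.array(ctrl, float)
    while len(pts) > 1:
        pts = (1.0 - t) * pts[:-1] + t * pts[1:]
    return pts[0]

def elevate(ctrl):
    """Degree-raise a Bezier curve: degree k-1 control polygon -> degree k polygon."""
    ctrl = np.asarray(ctrl, float)
    k = len(ctrl) - 1                       # current degree
    out = np.zeros((k + 2, ctrl.shape[1]))
    out[0], out[-1] = ctrl[0], ctrl[-1]     # endpoints are preserved
    for i in range(1, k + 1):
        w = i / (k + 1.0)
        out[i] = w * ctrl[i - 1] + (1.0 - w) * ctrl[i]
    return out

ctrl = np.array([[0.0, 0.0], [1.0, 2.0], [3.0, 2.5], [4.0, 0.0]])  # cubic
raised = elevate(ctrl)                      # quartic polygon, same curve
```

Evaluating both representations at any parameter gives identical points, which is exactly the invariance the identity in the paper generalizes to B-splines.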
Understanding Computational Bayesian Statistics
Bolstad, William M
2011-01-01
A hands-on introduction to computational statistics from a Bayesian point of view Providing a solid grounding in statistics while uniquely covering the topics from a Bayesian perspective, Understanding Computational Bayesian Statistics successfully guides readers through this new, cutting-edge approach. With its hands-on treatment of the topic, the book shows how samples can be drawn from the posterior distribution when the formula giving its shape is all that is known, and how Bayesian inferences can be based on these samples from the posterior. These ideas are illustrated on common statistic
Bayesian statistics an introduction
Lee, Peter M
2012-01-01
Bayesian Statistics is the school of thought that combines prior beliefs with the likelihood of a hypothesis to arrive at posterior beliefs. The first edition of Peter Lee’s book appeared in 1989, but the subject has moved ever onwards, with increasing emphasis on Monte Carlo based techniques. This new fourth edition looks at recent techniques such as variational methods, Bayesian importance sampling, approximate Bayesian computation and Reversible Jump Markov Chain Monte Carlo (RJMCMC), providing a concise account of the way in which the Bayesian approach to statistics develops as wel
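The prior-to-posterior updating described here is easiest to see in the conjugate beta-binomial case (all numbers below are illustrative): a Beta(a, b) prior combined with s successes in n trials gives a Beta(a + s, b + n − s) posterior.

```python
import numpy as np

# Illustrative data: Beta(2, 2) prior, s = 7 successes in n = 10 trials.
a, b = 2.0, 2.0
s, n = 7, 10

# Conjugate update: the posterior is Beta(a + s, b + n - s).
a_post, b_post = a + s, b + (n - s)
post_mean = a_post / (a_post + b_post)   # analytic posterior mean = 9/14

# Numerical check that posterior is proportional to prior * likelihood on a grid.
theta = np.linspace(1e-6, 1.0 - 1e-6, 20001)
unnorm = theta**(a - 1) * (1 - theta)**(b - 1) * theta**s * (1 - theta)**(n - s)
dt = theta[1] - theta[0]
density = unnorm / (unnorm.sum() * dt)   # normalize on the grid
grid_mean = (theta * density).sum() * dt
```

The grid computation is the brute-force version of what Monte Carlo and variational methods mentioned in the abstract do in higher dimensions, where grids become infeasible.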
Bayesian Kernel Mixtures for Counts.
Canale, Antonio; Dunson, David B
2011-12-01
Although Bayesian nonparametric mixture models for continuous data are well developed, there is a limited literature on related approaches for count data. A common strategy is to use a mixture of Poissons, which unfortunately is quite restrictive in not accounting for distributions having variance less than the mean. Other approaches include mixing multinomials, which requires finite support, and using a Dirichlet process prior with a Poisson base measure, which does not allow smooth deviations from the Poisson. As a broad class of alternative models, we propose to use nonparametric mixtures of rounded continuous kernels. An efficient Gibbs sampler is developed for posterior computation, and a simulation study is performed to assess performance. Focusing on the rounded Gaussian case, we generalize the modeling framework to account for multivariate count data, joint modeling with continuous and categorical variables, and other complications. The methods are illustrated through applications to a developmental toxicity study and marketing data. This article has supplementary material online. PMID:22523437
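The rounding idea can be sketched for a single Gaussian kernel (a toy version of the rounded-kernel mixtures, with made-up parameters): a latent draw y* ~ N(μ, σ²) is mapped to count j whenever j ≤ y* < j + 1, with all negative mass assigned to zero.

```python
import math
import numpy as np

mu, sigma = 2.3, 1.1   # illustrative kernel parameters

def norm_cdf(x):
    """CDF of N(mu, sigma^2)."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def count_pmf(j):
    """P(count = j): latent Gaussian mass on [j, j+1), negatives lumped into 0."""
    upper = norm_cdf(j + 1.0)
    lower = 0.0 if j == 0 else norm_cdf(float(j))
    return upper - lower

# Sampling view: round latent Gaussian draws down, clipping at zero.
rng = np.random.default_rng(1)
latent = rng.normal(mu, sigma, size=1000)
counts = np.maximum(np.floor(latent), 0).astype(int)
```

Mixing several such rounded kernels, as the paper proposes, yields count distributions that can be under- or over-dispersed, which a Poisson mixture alone cannot achieve.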
Prediction Method for the Surface Damage in Splined Couplings
Cuffaro, Vincenzo
2013-01-01
The primary purpose of my PhD thesis was to develop design criteria and verification procedures for fretting wear that are applicable to crowned spline couplings of the power transmission system of aeroengines. Fretting is a very complex phenomenon influenced by many factors, the most important being the presence or absence of lubrication, the load distribution (contact pressure), and the sliding between the bodies. Therefore, the study of fretting requires a deep knowledge of these three...
Adaptive Surface Reconstruction Based on Tensor Product Algebraic Splines
Institute of Scientific and Technical Information of China (English)
Xinghua Song; Falai Chen
2009-01-01
Surface reconstruction from unorganized data points is a challenging problem in Computer Aided Design and Geometric Modeling. In this paper, we extend the mathematical model proposed by Jüttler and Felis (Adv. Comput. Math., 17 (2002), pp. 135-152) based on tensor product algebraic spline surfaces from fixed meshes to adaptive meshes. We start with a tensor product algebraic B-spline surface defined on an initial mesh to fit the given data based on an optimization approach. By measuring the fitting errors over each cell of the mesh, we recursively insert new knots in cells over which the errors are larger than some given threshold, and construct a new algebraic spline surface to better fit the given data locally. The algorithm terminates when the error over each cell is less than the threshold. We provide some examples to demonstrate our algorithm and compare it with Jüttler's method. Examples suggest that our method is effective and is able to produce reconstruction surfaces of high quality. AMS subject classifications: 65D17
Frühwirth-Schnatter, Sylvia
1990-01-01
In the paper at hand we apply fuzzy set theory to Bayesian statistics to obtain "Fuzzy Bayesian Inference". In the subsequent sections we discuss a fuzzy-valued likelihood function, Bayes' theorem for both fuzzy data and fuzzy priors, a fuzzy Bayes estimator, fuzzy predictive densities and distributions, and fuzzy H.P.D. regions. (author's abstract)
Yuan, Ying; MacKinnon, David P.
2009-01-01
In this article, we propose Bayesian analysis of mediation effects. Compared with conventional frequentist mediation analysis, the Bayesian approach has several advantages. First, it allows researchers to incorporate prior information into the mediation analysis, thus potentially improving the efficiency of estimates. Second, under the Bayesian…
USING SPLINE FUNCTIONS FOR THE SUBSTANTIATION OF TAX POLICIES BY LOCAL AUTHORITIES
Directory of Open Access Journals (Sweden)
Otgon Cristian
2011-07-01
The paper aims to explore innovative financial instruments for the management of public resources. Among these innovative tools are polynomial spline functions used for budgetary sizing in substantiating fiscal and budgetary policies. Using polynomial spline functions involves several steps: establishing the nodes, calculating the specific coefficients corresponding to the spline functions, and determining the errors of approximation. The paper also extrapolates a series of property-tax data using polynomial spline functions of order 1. For the spline implementation, two series of data were taken, one referring to property tax as the resultant variable and the other to building tax, yielding a correlation indicator R = 0.95. Moreover, the spline functions are easy to compute and, owing to small approximation errors, have great predictive power, much better than the ordinary least squares method. The research proceeded in several steps, namely observation, construction of the data series, and processing of the data with spline functions. The data form a daily series gathered from the budget account, referring to building tax and property tax. The added value of this paper lies in the possibility of avoiding deficits by using spline functions as innovative instruments in public finance; the original contribution is the average of splines resulting from the data series. The research results lead to the conclusion that polynomial spline functions are recommended for the elaboration of fiscal and budgetary policies, owing to the relatively small errors obtained in the extrapolation of economic processes and phenomena. Future research directions include the study of polynomial spline functions of second order, third
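An order-1 (piecewise-linear) spline of the kind used for the tax series is simple to reproduce; the node values below are made up, not the paper's data. Interior values come from linear interpolation, and extrapolation extends the end segments' slopes.

```python
import numpy as np

# Hypothetical daily cumulative building-tax readings (nodes and values).
days = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
tax = np.array([10.0, 14.0, 21.0, 25.0, 33.0])

def linear_spline(x, xs, ys):
    """Order-1 spline: interpolate inside [xs[0], xs[-1]], extrapolate end segments."""
    x = np.asarray(x, float)
    out = np.interp(x, xs, ys)              # np.interp is flat beyond the ends
    slope_hi = (ys[-1] - ys[-2]) / (xs[-1] - xs[-2])
    out = np.where(x > xs[-1], ys[-1] + slope_hi * (x - xs[-1]), out)
    slope_lo = (ys[1] - ys[0]) / (xs[1] - xs[0])
    out = np.where(x < xs[0], ys[0] + slope_lo * (x - xs[0]), out)
    return out

inside = linear_spline(2.5, days, tax)   # midway between 14 and 21 -> 17.5
ahead = linear_spline(7.0, days, tax)    # extrapolated: 33 + 8 * 2 -> 49.0
```

Extrapolating with the last segment's slope is the simplest defensible choice for an order-1 spline; higher-order splines, as the paper notes for future work, trade this simplicity for smoother forecasts.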
Smoothing methods in biometry: a historic review
Directory of Open Access Journals (Sweden)
Schimek, Michael G.
2005-06-01
In Germany around 25 years ago, nonparametric smoothing methods found their way into statistics and, with some delay, also into biometry. In the early 1980s there was what one might call a boom in theoretical and, soon after, also in computational statistics. The focus was on univariate nonparametric methods for density and curve estimation. For biometry, however, smoothing methods became really interesting in their multivariate version. This 'change of dimensionality' still raises open methodological questions. No wonder that the simplifying paradigm of additive regression, realized in generalized additive models (GAMs), initiated the success story of smoothing techniques starting in the early 1990s. In parallel there have been new algorithms and important software developments, primarily in the statistical programming languages S and R. Recent developments of smoothing techniques can be found in survival analysis, longitudinal analysis, mixed models, and functional data analysis, partly integrating Bayesian concepts. Entirely new are smoothing-related statistical methods in bioinformatics. In this article we aim not only at a general historical overview but also sketch activities in the German-speaking world. Moreover, the current situation is critically examined. Finally, a large number of relevant references are given.
Côrtes, A.M.A.
2016-10-01
The recently introduced divergence-conforming B-spline discretizations allow the construction of smooth discrete velocity-pressure pairs for viscous incompressible flows that are at the same time inf-sup stable and pointwise divergence-free. When applied to the discretized Stokes problem, these spaces generate a symmetric and indefinite saddle-point linear system. The iterative method of choice to solve such a system is the Generalized Minimal Residual Method. This method lacks robustness, and one remedy is to use preconditioners. For linear systems of saddle-point type, a large family of preconditioners can be obtained by using a block factorization of the system. In this paper, we show how the nesting of "black-box" solvers and preconditioners can be put together in a block triangular strategy to build a scalable block preconditioner for the Stokes system discretized by divergence-conforming B-splines. Besides the well-known cavity flow problem, we used as benchmarks flows defined on complex geometries: an eccentric annulus and a hollow torus with an eccentric annular cross-section.
Hodograph computation and bound estimation for rational B-spline curves
Institute of Scientific and Technical Information of China (English)
Anonymous
2007-01-01
It is necessary to compute the derivative and estimate the bound of rational B-spline curves in a design system, which has not been studied to date. To improve the function of computer-aided design (CAD) systems, and to enhance the efficiency of various algorithms for rational B-spline curves, the representation of the scaled hodograph and a bound on the derivative magnitude of uniform planar rational B-spline curves are derived by applying the Dir function, which indicates the direction of the Cartesian vector between homogeneous points, discrete B-spline theory, and the formula for translating a product into a summation of B-spline functions. As an application of the result above, an upper bound on the parametric distance between any two points of a uniform planar rational B-spline curve is also presented.
Second-Order Cone Programming for P-Spline Simulation Metamodeling
Xia, Yu; Alizadeh, Farid
2015-01-01
This paper approximates simulation models by B-splines with a penalty on high-order finite differences of the coefficients of adjacent B-splines. The penalty prevents overfitting. The simulation output is assumed to be nonnegative. The nonnegative spline simulation metamodel is cast as a second-order cone programming model, which can be solved efficiently by modern optimization techniques. The method is implemented in MATLAB/GNU Octave.
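The penalized B-spline construction (a P-spline in the sense of Eilers and Marx) can be sketched from scratch; the basis size, penalty order, and smoothing parameter below are illustrative choices, and the plain least-squares solve stands in for the paper's cone-programming formulation, which additionally enforces nonnegativity.

```python
import numpy as np

def bspline_basis(x, t, k):
    """Cox-de Boor recursion: B[i, j] = value of the j-th degree-k B-spline at x[i]."""
    x = np.asarray(x, float)
    B = ((t[:-1] <= x[:, None]) & (x[:, None] < t[1:])).astype(float)
    for d in range(1, k + 1):
        nb = B.shape[1] - 1
        Bn = np.zeros((len(x), nb))
        for j in range(nb):
            left, right = t[j + d] - t[j], t[j + d + 1] - t[j + 1]
            term = np.zeros(len(x))
            if left > 0:
                term += (x - t[j]) / left * B[:, j]
            if right > 0:
                term += (t[j + d + 1] - x) / right * B[:, j + 1]
            Bn[:, j] = term
        B = Bn
    return B

# Noisy simulation-style output on [0, 1).
rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 200, endpoint=False)
y = np.sin(2.0 * np.pi * x) + 0.1 * rng.standard_normal(len(x))

# Uniform cubic B-spline basis with knots extended past the domain.
k, nseg = 3, 10
h = 1.0 / nseg
t = np.linspace(-k * h, 1.0 + k * h, nseg + 2 * k + 1)
B = bspline_basis(x, t, k)                       # 200 x (nseg + k) design matrix

# Penalty on second-order finite differences of adjacent coefficients.
D = np.diff(np.eye(B.shape[1]), n=2, axis=0)
lam = 0.1
coef = np.linalg.solve(B.T @ B + lam * D.T @ D, B.T @ y)
fit = B @ coef
```

Raising `lam` pulls the coefficient sequence toward a straight line (the null space of the second-difference penalty), which is how the penalty prevents overfitting without reducing the number of basis functions.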
Trivariate Polynomial Natural Spline for 3D Scattered Data Hermite Interpolation
Institute of Scientific and Technical Information of China (English)
XU YING-XIANG; GUAN L(U)-TAI; XU WEI-ZHI
2012-01-01
Consider a kind of Hermite interpolation for scattered 3D data by a trivariate polynomial natural spline, such that the objective energy functional (with natural boundary conditions) is minimal. By the spline function methods in Hilbert space and the variational theory of splines, the characteristics of the interpolation solution and how to construct it are studied. One can easily find that the interpolation solution is a trivariate polynomial natural spline. Its expression is simple and the coefficients can be determined by a linear system. Some numerical examples are presented to demonstrate our methods.
Design Evaluation of Wind Turbine Spline Couplings Using an Analytical Model: Preprint
Energy Technology Data Exchange (ETDEWEB)
Guo, Y.; Keller, J.; Wallen, R.; Errichello, R.; Halse, C.; Lambert, S.
2015-02-01
Articulated splines are commonly used in the planetary stage of wind turbine gearboxes for transmitting the driving torque and improving load sharing. Direct measurement of spline loads and performance is extremely challenging because of limited accessibility. This paper presents an analytical model for the analysis of articulated spline coupling designs. For a given torque and shaft misalignment, this analytical model quickly yields insights into relationships between the spline design parameters and resulting loads; bending, contact, and shear stresses; and safety factors considering various heat treatment methods. Comparisons of this analytical model against previously published computational approaches are also presented.
A fast direct point-by-point generating algorithm for B Spline curves and surfaces
Institute of Scientific and Technical Information of China (English)
LI Zhong; HAN Dan-fu
2005-01-01
Traditional generating algorithms for B-spline curves and surfaces require either approximation methods, where how to increment the parameter to get the best approximation is problematic, or pixel-based methods that need a matrix transformation from the B-spline representation to Bezier form. Here, a fast, direct point-by-point generating algorithm for B-spline curves and surfaces is presented. The algorithm needs no matrix transformation, can be used for uniform or non-uniform B-spline curves and surfaces of any degree, and has high generating speed and good rendering accuracy.
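A classical direct point-by-point evaluator in this spirit is de Boor's algorithm, which computes a curve point from the control points and knots without any matrix transformation and works for uniform and non-uniform knots alike. This is a generic sketch, not the paper's specific algorithm; the clamped cubic example uses control points at the Greville abscissae so the curve reproduces the identity.

```python
import numpy as np

def de_boor(x, t, c, p):
    """Evaluate a degree-p B-spline curve with knots t and control points c at x."""
    # Knot span index k with t[k] <= x < t[k+1].
    k = np.searchsorted(t, x, side="right") - 1
    k = min(max(k, p), len(c) - 1)
    d = [np.asarray(c[j + k - p], float) for j in range(p + 1)]
    # Triangular scheme of repeated affine combinations.
    for r in range(1, p + 1):
        for j in range(p, r - 1, -1):
            alpha = (x - t[j + k - p]) / (t[j + 1 + k - r] - t[j + k - p])
            d[j] = (1.0 - alpha) * d[j - 1] + alpha * d[j]
    return d[p]

# Clamped cubic example: control points at the Greville abscissae
# reproduce the identity curve, so de_boor(u) == u for interior u.
t = np.array([0.0, 0.0, 0.0, 0.0, 1.0, 2.0, 3.0, 3.0, 3.0, 3.0])
greville = np.array([(t[i + 1] + t[i + 2] + t[i + 3]) / 3.0 for i in range(6)])
val = de_boor(1.5, t, greville, 3)
```

Each evaluation costs O(p²) arithmetic operations regardless of the number of control points, which is what makes point-by-point generation fast.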
International Nuclear Information System (INIS)
Spline functions have come into increasingly wide use recently in the solution of boundary-value problems of the theory of elasticity of plates and shells. This development stems from the advantages offered by spline approximations compared to other methods. Among the most important advantages are the following: (1) the behavior of the spline in the neighborhood of a point has no effect on the behavior of the spline as a whole; (2) spline interpolation converges well compared to polynomial interpolation; (3) algorithms for spline construction are simple and convenient to use. The use of spline functions to solve linear two-dimensional problems on the stress-strain state of shallow shells and plates that are rectangular in plan has proven their efficiency and made it possible to expand the range of problems that can be solved. The approach proposed in these investigations is based on reducing a linear two-dimensional problem to a unidimensional problem by spline approximation in one coordinate direction, the resulting problem being solved by the method of discrete orthogonalization in the other coordinate direction. Such an approach makes it possible to account for local and edge effects in the stress state of plates and shells and to obtain reliable solutions with complex boundary conditions. In the present study, we take the above approach, employing spline functions to solve linear problems, and use it to also solve geometrically nonlinear problems of the statics of shallow shells and plates with variable parameters.
Automatic Shape Control of Triangular B-Splines of Arbitrary Topology
Institute of Scientific and Technical Information of China (English)
Ying He; Xian-Feng Gu; Hong Qin
2006-01-01
Triangular B-splines are powerful and flexible in modeling a broader class of geometric objects defined over arbitrary, non-rectangular domains. Despite their great potential and advantages in theory, practical techniques and computational tools with triangular B-splines are less-developed. This is mainly because users have to handle a large number of irregularly distributed control points over arbitrary triangulation. In this paper, an automatic and efficient method is proposed to generate visually pleasing, high-quality triangular B-splines of arbitrary topology. The experimental results on several real datasets show that triangular B-splines are powerful and effective in both theory and practice.
Smoothing internal migration age profiles for comparative research
Directory of Open Access Journals (Sweden)
Aude Bernard
2015-05-01
Background: Age patterns are a key dimension for comparing migration between countries and over time. Comparative metrics can be reliably computed only if the data capture the underlying age distribution of migration. Model schedules, the prevailing smoothing method, fit a composite exponential function but are sensitive to function selection and initial parameter setting. Although nonparametric alternatives exist, their performance is yet to be established. Objective: We compare cubic splines and kernel regressions against model schedules by assessing which method provides an accurate representation of the age profile and performs best on metrics for comparing aggregate age patterns. Methods: We use full population microdata for Chile to perform 1,000 Monte Carlo simulations for nine sample sizes and two spatial scales. We use residual and graphical analysis to assess model performance on the age and intensity at which migration peaks and on the evolution of migration age patterns. Results: Model schedules generate a better fit when (1) the expected distribution of the age profile is known a priori, (2) the pre-determined shape of the model schedule adequately describes the true age distribution, and (3) the component curves and initial parameter values can be correctly set. When any of these conditions is not met, kernel regressions and cubic splines offer more reliable alternatives. Conclusions: Smoothing models should be selected according to research aims, age profile characteristics, and sample size. Kernel regressions and cubic splines enable a precise representation of aggregate migration age profiles for most sample sizes, without requiring parameter setting or imposing a pre-determined distribution, and therefore facilitate objective comparison.
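One of the nonparametric smoothers compared here, kernel regression, can be sketched in a few lines as a Nadaraya-Watson estimator; the toy age profile and bandwidth below are illustrative stand-ins for real migration data.

```python
import math

def kernel_smooth(xs, ys, grid, bandwidth):
    """Nadaraya-Watson estimator with a Gaussian kernel: a locally
    weighted mean of the observed rates, no parametric shape imposed."""
    smoothed = []
    for g in grid:
        weights = [math.exp(-0.5 * ((g - x) / bandwidth) ** 2) for x in xs]
        total = sum(weights)
        smoothed.append(sum(w * y for w, y in zip(weights, ys)) / total)
    return smoothed


# Toy migration intensity by age, with the usual young-adult peak.
ages = list(range(0, 80, 5))
rates = [0.02, 0.015, 0.02, 0.06, 0.09, 0.07, 0.05, 0.04,
         0.03, 0.025, 0.02, 0.018, 0.015, 0.013, 0.012, 0.011]
profile = kernel_smooth(ages, rates, ages, bandwidth=5.0)
```

The bandwidth plays the role that the smoothing parameter plays for splines: larger values trade peak sharpness for stability in small samples.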
Some extremal properties of multivariate polynomial splines in the metric Lp (Rd )
Institute of Scientific and Technical Information of China (English)
LIU Yongping
2001-01-01
[1] Li Chun, Infinite dimensional widths of function classes, J. Approx. Theory, 1992, 69(1): 15-34. [2] Luo Junbo, Liu Yongping, Average width and optimal recovery of some anisotropic classes of smooth functions defined on the Euclidean space R^d, Northeast Math. J., 1999, 15(4): 423-432. [3] Schoenberg, I. J., Cardinal interpolation and spline functions II. Interpolation of data of power growth, J. Approx. Theory, 1972, 6(4): 404-420. [4] Fang Gensun, Liu Yongping, Average width and optimal interpolation of the Sobolev-Wiener class W_p^d(B) in the metric L_q(Y), J. Approx. Theory, 1993, 74(3): 335-352. [5] Pinkus, A., n-Widths in Approximation Theory, New York: Springer-Verlag, 1985. [6] Fournier, J. J. F., Stewart, J., Amalgams of L^p and l^q, Bull. Amer. Math. Soc., 1985, 13(1): 1-12.
Adaptive Predistortion Using Cubic Spline Nonlinearity Based Hammerstein Modeling
Wu, Xiaofang; Shi, Jianghong
In this paper, a new Hammerstein predistorter model for power amplifier (PA) linearization is proposed. The key feature of the model is that cubic splines, instead of conventional high-order polynomials, are utilized as the static nonlinearities, because splines can represent hard nonlinearities accurately while circumventing the numerical instability problem. Furthermore, according to the amplifier's AM/AM and AM/PM characteristics, real-valued cubic spline functions are utilized to compensate the nonlinear distortion of the amplifier, and the following finite impulse response (FIR) filters are utilized to eliminate the memory effects of the amplifier. In addition, the identification algorithm of the Hammerstein predistorter is discussed. The predistorter is implemented on the indirect learning architecture, and the separable nonlinear least squares (SNLS) Levenberg-Marquardt algorithm is adopted because the separation method reduces the dimension of the nonlinear search space and thus greatly simplifies the identification procedure. However, the convergence performance of the iterative SNLS algorithm is sensitive to the initial estimate, so an effective normalization strategy is presented to solve this problem. Simulation experiments were carried out on a single-carrier WCDMA signal. Results show that, compared to conventional polynomial predistorters, the proposed Hammerstein predistorter has higher linearization performance when the PA is near saturation and comparable linearization performance when the PA is mildly nonlinear. Furthermore, the proposed predistorter is numerically more stable in all input back-off cases. The results also demonstrate the validity of the convergence scheme.
Granade, Christopher; Cory, D G
2015-01-01
In recent years, Bayesian methods have been proposed as a solution to a wide range of issues in quantum state and process tomography. State-of-the-art Bayesian tomography solutions suffer from three problems: numerical intractability, a lack of informative prior distributions, and an inability to track time-dependent processes. Here, we solve all three problems. First, we use modern statistical methods, as pioneered by Husz\'ar and Houlsby and by Ferrie, to make Bayesian tomography numerically tractable. Our approach allows for practical computation of Bayesian point and region estimators for quantum states and channels. Second, we propose the first informative priors on quantum states and channels. Finally, we develop a method that allows online tracking of time-dependent states and estimates the drift and diffusion processes affecting a state. We provide source code and animated visual examples for our methods.
Noncausal Bayesian Vector Autoregression
DEFF Research Database (Denmark)
Lanne, Markku; Luoto, Jani
We propose a Bayesian inferential procedure for the noncausal vector autoregressive (VAR) model that is capable of capturing nonlinearities and incorporating effects of missing variables. In particular, we devise a fast and reliable posterior simulator that yields the predictive distribution...
Bayesian Lensing Shear Measurement
Bernstein, Gary M
2013-01-01
We derive an estimator of weak gravitational lensing shear from background galaxy images that avoids noise-induced biases through a rigorous Bayesian treatment of the measurement. The Bayesian formalism requires a prior describing the (noiseless) distribution of the target galaxy population over some parameter space; this prior can be constructed from low-noise images of a subsample of the target population, attainable from long integrations of a fraction of the survey field. We find two ways to combine this exact treatment of noise with rigorous treatment of the effects of the instrumental point-spread function and sampling. The Bayesian model fitting (BMF) method assigns a likelihood of the pixel data to galaxy models (e.g. Sersic ellipses), and requires the unlensed distribution of galaxies over the model parameters as a prior. The Bayesian Fourier domain (BFD) method compresses galaxies to a small set of weighted moments calculated after PSF correction in Fourier space. It requires the unlensed distributi...
Gravity Aided Navigation Precise Algorithm with Gauss Spline Interpolation
Directory of Open Access Journals (Sweden)
WEN Chaobin
2015-01-01
The gravity compensation terms of the error equation must be treated thoroughly before gravity-aided navigation can be studied with high precision. A model-construction algorithm for gravity-aided navigation is proposed, based on approximating the local grid gravity anomaly field with two-dimensional Gauss spline interpolation. The gravity disturbance vector, the standard gravity value error, and the Eotvos effect are all compensated in this precision model. The experimental results show that positioning accuracy is roughly doubled, attitude and velocity accuracy improve by a factor of one to two, and the positional error is kept within 100-200 m.
Recovery of band limited functions via cardinal splines
Institute of Scientific and Technical Information of China (English)
Anonymous
2001-01-01
The main result of this paper asserts that if a function f is in the class B_{π,p}, 1 < p < ∞, that is, the p-integrable functions whose Fourier transforms are supported in the interval [-π, π], then f and its derivatives f^(j), j = 1, 2, …, can be recovered from the sampling sequence {f(k)} via the cardinal interpolating spline of degree m in the metric of L_q(R), 1 < p = q < ∞ or 1 < p < q ≤ ∞.
Recovery of shapes: hypermodels and Bayesian learning
International Nuclear Information System (INIS)
We discuss the problem of recovering an image from its blurred and noisy copy with the additional information that the image consists of simple shapes with sharp edges. An iterative algorithm is given, based on the idea of updating the Tikhonov type smoothness penalty on the basis of the previous estimate. This algorithm is discussed in the framework of Bayesian hypermodels and it is shown that the approach can be justified as a sequential iterative scheme for finding the mode of the posterior density. An effective numerical algorithm based on preconditioned Krylov subspace iterations is suggested and demonstrated with a computed example
Malicious Bayesian Congestion Games
Gairing, Martin
2008-01-01
In this paper, we introduce malicious Bayesian congestion games as an extension to congestion games where players might act in a malicious way. In such a game each player has two types. Either the player is a rational player seeking to minimize her own delay, or - with a certain probability - the player is malicious in which case her only goal is to disturb the other players as much as possible. We show that such games do in general not possess a Bayesian Nash equilibrium in pure strategies (i.e. a pure Bayesian Nash equilibrium). Moreover, given a game, we show that it is NP-complete to decide whether it admits a pure Bayesian Nash equilibrium. This result even holds when resource latency functions are linear, each player is malicious with the same probability, and all strategy sets consist of singleton sets. For a slightly more restricted class of malicious Bayesian congestion games, we provide easy checkable properties that are necessary and sufficient for the existence of a pure Bayesian Nash equilibrium....
PEMODELAN REGRESI SPLINE (Studi Kasus: Herpindo Jaya Cabang Ngaliyan
Directory of Open Access Journals (Sweden)
I MADE BUDIANTARA PUTRA
2015-06-01
Regression analysis is a method of data analysis for describing the relationship between response variables and predictor variables. There are two approaches to estimating the regression function: parametric and nonparametric. The parametric approach is used when the relationship between the predictor and response variables, or the shape of the regression curve, is known. The nonparametric approach is used when the form of the relationship between the response and predictor variables is unknown and there is no information about the form of the regression function. The aims of this study are to determine the best nonparametric spline regression model, with optimal knot points, on data relating product quality, price, and advertising to purchasing decisions for Yamaha motorcycles, and to compare it with multiple linear regression on the basis of the coefficient of determination (R²) and mean square error (MSE). The optimal knots are defined by two knot points. The analysis shows that for these data multiple linear regression performs better than spline regression.
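A spline regression of the kind compared here can be sketched, in one predictor for simplicity, with a truncated power basis and two knots; the synthetic data below are illustrative stand-ins for the quality/price/advertising dataset.

```python
import numpy as np

def spline_design(x, knots, degree=3):
    """Truncated power basis: 1, x, ..., x^p and (x - k)_+^p for each knot."""
    cols = [x ** d for d in range(degree + 1)]
    cols += [np.clip(x - k, 0.0, None) ** degree for k in knots]
    return np.column_stack(cols)


rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(200)  # nonlinear truth

# Spline fit with two interior knots, by ordinary least squares.
B = spline_design(x, knots=[1.0 / 3.0, 2.0 / 3.0])
beta, *_ = np.linalg.lstsq(B, y, rcond=None)
mse_spline = float(np.mean((y - B @ beta) ** 2))

# Straight-line fit for comparison (the MSE criterion from the abstract).
X = np.column_stack([np.ones_like(x), x])
b, *_ = np.linalg.lstsq(X, y, rcond=None)
mse_linear = float(np.mean((y - X @ b) ** 2))
```

On a genuinely nonlinear response the spline wins on MSE; as the abstract notes, when the true relationship is close to linear, the simpler linear model can come out ahead.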
SPLINE LINEAR REGRESSION USED FOR EVALUATING FINANCIAL ASSETS 1
Directory of Open Access Journals (Sweden)
Liviu GEAMBAŞU
2010-12-01
One of the most important preoccupations of financial market participants was, and still is, determining the trend of financial asset prices more precisely. Many scientific papers have been written and many mathematical and statistical models developed to better determine this trend. While simple linear models were until recently widely used because they are easy to apply, the financial crisis that hit the world economy starting in 2008 highlighted the need to adapt mathematical models to the variability of the economy. A model that is simple to use yet adapted to the realities of economic life is spline linear regression. This type of regression keeps the regression function continuous but splits the studied data into intervals with homogeneous characteristics. The characteristics of each interval are highlighted, as is the evolution of the market over all the intervals, resulting in reduced standard errors. The first objective of the article is a theoretical presentation of spline linear regression, with reference to national and international scientific papers on the subject. The second objective is to apply the theoretical model to data from the Bucharest Stock Exchange.
Jarrow, Robert A
2014-01-01
This article reviews the forward rate curve smoothing literature. The key contribution of this review is to link the static curve fitting exercise to the dynamic and arbitrage-free models of the term structure of interest rates. As such, this review introduces more economics to an almost exclusively mathematical exercise, and it identifies new areas for research related to forward rate curve smoothing.
Biswas, Kingshook
2009-01-01
We use techniques of tube-log Riemann surfaces due to R.Perez-Marco to construct a hedgehog containing smooth $C^{\\infty}$ combs. The hedgehog is a common hedgehog for a family of commuting non-linearisable holomorphic maps with a common indifferent fixed point. The comb is made up of smooth curves, and is transversally bi-H\\"older regular.
Vascular smooth muscle cells (SMCs) originate from multiple types of progenitor cells. In the embryo, the most well-studied SMC progenitor is the cardiac neural crest stem cell. Smooth muscle differentiation in the neural crest lineage is controlled by a combination of cell intrinsic factors, includ...
A class of compactly supported symmetric/antisymmetric B-spline wavelets
Institute of Scientific and Technical Information of China (English)
YANG Shouzhi; LOU Zengjian
2005-01-01
An algorithm for constructing a class of compactly supported symmetric/antisymmetric B-spline wavelets is presented. For any m-th order and k-th order cardinal B-splines N_m(x), N_k(x), if m + k is an even integer, the corresponding m-th order B-spline wavelets ψ_m^k(x) can be constructed, and they are compactly supported and symmetric/antisymmetric. In addition, if ψ_m^k(x), m > 1, is the m-th order B-spline wavelet associated with the two spline functions N_m(x) and N_k(x), then its derivative is the (m-1)-th order B-spline wavelet associated with N_{m-1}(x) and N_{k+1}(x), i.e., (ψ_m^k)'(x) = 2² ψ_{m-1}^{k+1}(x). Similarly, ∫_0^x ψ_m^k(t) dt, k > 1, is the (m+1)-th order B-spline wavelet associated with N_{m+1}(x) and N_{k-1}(x). Using this method, Chui and Wang's spline wavelets are recovered. Since this class of B-spline wavelets is symmetric/antisymmetric, their linear-phase property is assured. Several examples are also presented.
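The cardinal B-splines N_m(x) underlying this construction satisfy the Cox-de Boor recursion N_m(x) = [x N_{m-1}(x) + (m - x) N_{m-1}(x - 1)] / (m - 1), and the symmetry N_m(x) = N_m(m - x) is easy to verify numerically; this sketch checks only the B-splines themselves, not the wavelet construction.

```python
def cardinal_bspline(m, x):
    """Order-m (degree m-1) cardinal B-spline N_m, supported on [0, m],
    via the Cox-de Boor recursion."""
    if m == 1:
        return 1.0 if 0.0 <= x < 1.0 else 0.0
    return (x * cardinal_bspline(m - 1, x)
            + (m - x) * cardinal_bspline(m - 1, x - 1.0)) / (m - 1)


# N_4 is the cubic B-spline: peak value 2/3 at its centre x = 2,
# symmetric about that centre: N_4(x) = N_4(4 - x).
print(cardinal_bspline(4, 2.0))
```

The same symmetry of the N_m is what the paper exploits to obtain wavelets with linear phase.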
B-spline collocation methods for numerical solutions of the Burgers' equation
İdris Dağ; Dursun Irk; Ali Şahin
2005-01-01
Both the time-split and the space-split Burgers' equations are solved numerically. The cubic B-spline collocation method is applied to the time-split Burgers' equation, and the quadratic B-spline collocation method is used to obtain a numerical solution of the space-split Burgers' equation. The results of the two schemes are compared on some test problems.
Mitra, Jhimli; Marti, Robert; Oliver, Arnau; Llado, Xavier; Vilanova, Joan C.; Meriaudeau, Fabrice
2011-03-01
This paper provides a comparison of spline-based registration methods applied to register interventional Trans Rectal Ultrasound (TRUS) and pre-acquired Magnetic Resonance (MR) prostate images for needle guided prostate biopsy. B-splines and Thin-plate Splines (TPS) are the most prevalent spline-based approaches to achieve deformable registration. Pertaining to the strategic selection of correspondences for the TPS registration, we use an automatic method already proposed in our previous work to generate correspondences in the MR and US prostate images. The method exploits the prostate geometry with the principal components of the segmented prostate as the underlying framework and involves a triangulation approach. The correspondences are generated with successive refinements and Normalized Mutual Information (NMI) is employed to determine the optimal number of correspondences required to achieve TPS registration. B-spline registration with successive grid refinements are consecutively applied for a significant comparison of the impact of the strategically chosen correspondences on the TPS registration against the uniform B-spline control grids. The experimental results are validated on 4 patient datasets. Dice Similarity Coefficient (DSC) is used as a measure of the registration accuracy. Average DSC values of 0.97 ± 0.01 and 0.95 ± 0.03 are achieved for the TPS and B-spline registrations respectively. B-spline registration is observed to be more computationally expensive than the TPS registration with average execution times of 128.09 ± 21.7 seconds and 62.83 ± 32.77 seconds respectively for images with maximum width of 264 pixels and a maximum height of 211 pixels.
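TPS registration of the kind compared here reduces to solving the standard thin-plate spline linear system with kernel U(r) = r² log r at the landmark correspondences; a minimal 2D sketch with illustrative landmarks (not prostate data) looks like this.

```python
import numpy as np

def _tps_kernel(d2):
    """U(r) = r^2 log r, written in terms of squared distances d2."""
    U = np.zeros_like(d2)
    mask = d2 > 0
    U[mask] = 0.5 * d2[mask] * np.log(d2[mask])
    return U

def tps_fit(src, dst):
    """Solve the TPS system [[K P],[P^T 0]] [w; a] = [dst; 0] for the
    warp mapping src landmarks onto dst landmarks."""
    n = len(src)
    K = _tps_kernel(((src[:, None, :] - src[None, :, :]) ** 2).sum(-1))
    P = np.hstack([np.ones((n, 1)), src])          # affine part
    A = np.zeros((n + 3, n + 3))
    A[:n, :n], A[:n, n:], A[n:, :n] = K, P, P.T
    rhs = np.zeros((n + 3, 2))
    rhs[:n] = dst
    return np.linalg.solve(A, rhs)                 # [kernel weights; affine]

def tps_apply(src, params, pts):
    """Map query points through the fitted thin-plate spline."""
    U = _tps_kernel(((pts[:, None, :] - src[None, :, :]) ** 2).sum(-1))
    n = len(src)
    return U @ params[:n] + np.hstack([np.ones((len(pts), 1)), pts]) @ params[n:]


src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
dst = 2.0 * src + 1.0      # a purely affine deformation, easy to check
params = tps_fit(src, dst)
```

Because the TPS solve is one small dense linear system per registration, it is typically cheaper than optimizing a full B-spline control grid, consistent with the timing comparison above.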
Recommended practices for spline usage in CAD/CAM systems: CADCAM-007
Energy Technology Data Exchange (ETDEWEB)
Fletcher, S.K.
1984-04-01
Sandia National Laboratories has been assigned Lead Lab responsibility for integrating CAD/CAM activities throughout the DOE Nuclear Weapons Complex (NWC) and automating exchange of product definition. Transfer of splines between CAD/CAM systems presents a special problem due to the use of different spline interpolation schemes in these systems. Automated exchange via IGES (Initial Graphics Exchange Specification, ANSI Y14.26M-1981) shows promise but does not yet provide a usable data path for NWC spline needs. Data exchange today is primarily via hard copy drawings with manual data reentry and spline recomputation. In this environment, spline problems can be minimized by following the recommended practices set forth in this report.
Bayesian Mars for uncertainty quantification in stochastic transport problems
International Nuclear Information System (INIS)
We present a method for estimating solutions to partial differential equations with uncertain parameters using a modification of the Bayesian Multivariate Adaptive Regression Splines (BMARS) emulator. The BMARS algorithm uses Markov chain Monte Carlo (MCMC) to construct a basis function composed of polynomial spline functions, for which derivatives and integrals are straightforward to compute. We use these calculations and a modification of the curve-fitting BMARS algorithm to search for a basis function (response surface) which, in combination with its derivatives/integrals, satisfies a governing differential equation and specified boundary condition. We further show that this fit can be improved by enforcing a conservation or other physics-based constraint. Our results indicate that estimates to solutions of simple first order partial differential equations (without uncertainty) can be efficiently computed with very little regression error. We then extend the method to estimate uncertainties in the solution to a pure absorber transport problem in a medium with uncertain cross-section. We describe and compare two strategies for propagating the uncertain cross-section through the BMARS algorithm; the results from each method are in close comparison with analytic results. We discuss the scalability of the algorithm to parallel architectures and the applicability of the two strategies to larger problems with more degrees of uncertainty. (author)
Chen, T.; Gong, X.
2011-12-01
In inverting geodetic data for the distribution of fault slip, minimizing the first- or second-order derivatives of slip across the fault plane is generally employed to smooth the slip of neighboring patches. The smoothing parameter, which determines the relative weight placed on fitting the data versus smoothing the slip distribution, is usually selected subjectively. We use the fully Bayesian inversion method (Fukuda, 2008) to estimate the slip distribution and the smoothing parameter simultaneously and objectively in a Bayesian framework. The distributed slip, the posterior probability density function, and the smoothing parameter are formulated with Bayes' theorem and sampled with a Markov chain Monte Carlo method. We apply this method to coseismic and postseismic displacement data from the 2007 Solomon Islands earthquake and compare the results with those of the generally favored method.
Fitting Derivative Function Based on Penalized Regression Spline
Institute of Scientific and Technical Information of China (English)
关海洋; 唐燕武; 杨联强
2015-01-01
When the functional form is unknown but noisy discrete data points from the function are available, a penalized regression spline with a p-th degree truncated power basis is fitted to the data, and the first derivative of the function is obtained from the fitted curve. The method combines classical ordinary least squares with penalized spline smoothing, accounting for both goodness of fit and smoothness of the fitted curve. Simulations and a practical application show that the method performs well.
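A minimal version of this procedure, fitting a penalized regression spline on a truncated power basis and reading the first derivative off the fitted coefficients, can be sketched as follows; the knots, penalty weight, and test curve are illustrative.

```python
import numpy as np

def pspline_derivative(x, y, knots, degree=3, lam=1e-6):
    """Fit a penalized regression spline (truncated power basis) to (x, y)
    and return the fitted first derivative at the data points."""
    # Design matrix: 1, x, ..., x^p, then (x - k)_+^p for each knot.
    B = np.column_stack([x ** d for d in range(degree + 1)]
                        + [np.clip(x - k, 0.0, None) ** degree for k in knots])
    # Ridge penalty on the truncated-power (knot) coefficients only,
    # as in P-splines; the polynomial part stays unpenalized.
    pen = np.zeros(B.shape[1])
    pen[degree + 1:] = 1.0
    beta = np.linalg.solve(B.T @ B + lam * np.diag(pen), B.T @ y)
    # Differentiate the basis analytically: 0, 1, 2x, ..., p x^(p-1),
    # then p (x - k)_+^(p-1).
    B1 = np.column_stack([np.zeros_like(x), np.ones_like(x)]
                         + [d * x ** (d - 1) for d in range(2, degree + 1)]
                         + [degree * np.clip(x - k, 0.0, None) ** (degree - 1)
                            for k in knots])
    return B1 @ beta


x = np.linspace(0.0, 1.0, 50)
y = x ** 2                      # noiseless test curve, true derivative 2x
dy = pspline_derivative(x, y, knots=[0.25, 0.5, 0.75])
```

Differentiating the fitted spline analytically, rather than differencing the noisy data, is what makes the derivative estimate stable.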
Bayesian least squares deconvolution
Asensio Ramos, A.; Petit, P.
2015-11-01
Aims: We develop a fully Bayesian least squares deconvolution (LSD) that can be applied to the reliable detection of magnetic signals in noise-limited stellar spectropolarimetric observations using multiline techniques. Methods: We consider LSD under the Bayesian framework and we introduce a flexible Gaussian process (GP) prior for the LSD profile. This prior allows the result to automatically adapt to the presence of signal. We exploit several linear algebra identities to accelerate the calculations. The final algorithm can deal with thousands of spectral lines in a few seconds. Results: We demonstrate the reliability of the method with synthetic experiments and we apply it to real spectropolarimetric observations of magnetic stars. We are able to recover the magnetic signals using a small number of spectral lines, together with the uncertainty at each velocity bin. This allows the user to consider if the detected signal is reliable. The code to compute the Bayesian LSD profile is freely available.
Hybrid Batch Bayesian Optimization
Azimi, Javad; Fern, Xiaoli
2012-01-01
Bayesian Optimization aims at optimizing an unknown non-convex/concave function that is costly to evaluate. We are interested in application scenarios where concurrent function evaluations are possible. Under such a setting, BO could choose to either sequentially evaluate the function, one input at a time and wait for the output of the function before making the next selection, or evaluate the function at a batch of multiple inputs at once. These two different settings are commonly referred to as the sequential and batch settings of Bayesian Optimization. In general, the sequential setting leads to better optimization performance as each function evaluation is selected with more information, whereas the batch setting has an advantage in terms of the total experimental time (the number of iterations). In this work, our goal is to combine the strength of both settings. Specifically, we systematically analyze Bayesian optimization using Gaussian process as the posterior estimator and provide a hybrid algorithm t...
Loredo, T J
2004-01-01
I describe a framework for adaptive scientific exploration based on iterating an Observation--Inference--Design cycle that allows adjustment of hypotheses and observing protocols in response to the results of observation on-the-fly, as data are gathered. The framework uses a unified Bayesian methodology for the inference and design stages: Bayesian inference to quantify what we have learned from the available data and predict future data, and Bayesian decision theory to identify which new observations would teach us the most. When the goal of the experiment is simply to make inferences, the framework identifies a computationally efficient iterative ``maximum entropy sampling'' strategy as the optimal strategy in settings where the noise statistics are independent of signal properties. Results of applying the method to two ``toy'' problems with simulated data--measuring the orbit of an extrasolar planet, and locating a hidden one-dimensional object--show the approach can significantly improve observational eff...
Bayesian Exploratory Factor Analysis
DEFF Research Database (Denmark)
Conti, Gabriella; Frühwirth-Schnatter, Sylvia; Heckman, James J.;
2014-01-01
This paper develops and applies a Bayesian approach to Exploratory Factor Analysis that improves on ad hoc classical approaches. Our framework relies on dedicated factor models and simultaneously determines the number of factors, the allocation of each measurement to a unique factor, and the corresponding factor loadings. Classical identification criteria are applied and integrated into our Bayesian procedure to generate models that are stable and clearly interpretable. A Monte Carlo study confirms the validity of the approach. The method is used to produce interpretable low dimensional aggregates...
Bayesian multiple target tracking
Streit, Roy L
2013-01-01
This second edition has undergone substantial revision from the 1999 first edition, recognizing that a lot has changed in the multiple target tracking field. One of the most dramatic changes is in the widespread use of particle filters to implement nonlinear, non-Gaussian Bayesian trackers. This book views multiple target tracking as a Bayesian inference problem. Within this framework it develops the theory of single target tracking, multiple target tracking, and likelihood ratio detection and tracking. In addition to providing a detailed description of a basic particle filter that implements
Bayesian and frequentist inequality tests
David M. Kaplan; Zhuo, Longhao
2016-01-01
Bayesian and frequentist criteria are fundamentally different, but often posterior and sampling distributions are asymptotically equivalent (and normal). We compare Bayesian and frequentist hypothesis tests of inequality restrictions in such cases. For finite-dimensional parameters, if the null hypothesis is that the parameter vector lies in a certain half-space, then the Bayesian test has (frequentist) size $\\alpha$; if the null hypothesis is any other convex subspace, then the Bayesian test...
Fast Selection of Spectral Variables with B-Spline Compression
Rossi, Fabrice; Wertz, Vincent; Meurens, Marc; Verleysen, Michel
2007-01-01
The large number of spectral variables in most data sets encountered in spectral chemometrics often makes the prediction of a dependent variable difficult. The number of variables hopefully can be reduced, by using either projection techniques or selection methods; the latter allow for the interpretation of the selected variables. Since the optimal approach of testing all possible subsets of variables with the prediction model is intractable, an incremental selection approach using a nonparametric statistic is a good option, as it avoids the computationally intensive use of the model itself. It has two drawbacks however: the number of groups of variables to test is still huge, and collinearities can make the results unstable. To overcome these limitations, this paper presents a method to select groups of spectral variables. It consists in a forward-backward procedure applied to the coefficients of a B-Spline representation of the spectra. The criterion used in the forward-backward procedure is the mutual infor...
Prediction of longitudinal dispersion coefficient using multivariate adaptive regression splines
Indian Academy of Sciences (India)
Amir Hamzeh Haghiabi
2016-07-01
In this paper, multivariate adaptive regression splines (MARS) was developed as a novel soft-computing technique for predicting the longitudinal dispersion coefficient (DL) in rivers. As mentioned in the literature, an experimental dataset related to DL was collected and used for preparing the MARS model. Results of the MARS model were compared with a multi-layer neural network model and empirical formulas. To define the most effective parameters on DL, the Gamma test was used. Performance of the MARS model was assessed by calculation of standard error indices. Error indices showed that the MARS model has suitable performance and is more accurate compared to the multi-layer neural network model and empirical formulas. Results of the Gamma test and the MARS model showed that flow depth (H) and the ratio of the mean velocity to shear velocity (u/u^∗) were the most effective parameters on DL.
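MARS builds its response surface from reflected pairs of hinge functions max(0, x - t) and max(0, t - x); a one-dimensional sketch on a synthetic kinked target (not the dispersion dataset) shows why such a basis captures behavior a straight line cannot.

```python
import numpy as np

def hinge_basis(x, knot):
    """The reflected pair of hinge functions MARS places at a knot t."""
    return np.clip(x - knot, 0.0, None), np.clip(knot - x, 0.0, None)


x = np.linspace(0.0, 1.0, 101)
y = np.abs(x - 0.5)                       # piecewise-linear target with a kink

# Least-squares fit on [1, (x - 0.5)_+, (0.5 - x)_+]: exact for this target.
h_plus, h_minus = hinge_basis(x, 0.5)
B = np.column_stack([np.ones_like(x), h_plus, h_minus])
coef, *_ = np.linalg.lstsq(B, y, rcond=None)
mse_hinge = float(np.mean((y - B @ coef) ** 2))

# Plain linear fit for comparison: it cannot reproduce the kink.
X = np.column_stack([np.ones_like(x), x])
b, *_ = np.linalg.lstsq(X, y, rcond=None)
mse_linear = float(np.mean((y - X @ b) ** 2))
```

Full MARS additionally searches over knot locations and variable interactions in a forward-backward pass; this sketch fixes the knot at the known kink for clarity.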
Preference learning with evolutionary Multivariate Adaptive Regression Spline model
DEFF Research Database (Denmark)
Abou-Zleikha, Mohamed; Shaker, Noor; Christensen, Mads Græsbøll
2015-01-01
This paper introduces a novel approach for pairwise preference learning through combining an evolutionary method with Multivariate Adaptive Regression Splines (MARS). Collecting users' feedback through pairwise preferences is recommended over other ranking approaches as this method is more appealing for human decision making. Learning models from pairwise preference data is however an NP-hard problem. Therefore, constructing models that can effectively learn such data is a challenging task. Models are usually constructed with accuracy being the most important factor. Another vitally important aspect that is usually given less attention is expressiveness, i.e. how easy it is to explain the relationship between the model input and output. Most machine learning techniques are focused either on performance or on expressiveness. This paper employs MARS models, which have the advantage of being a powerful method...
Spline-based automatic path generation of welding robot
Institute of Scientific and Technical Information of China (English)
Niu Xuejuan; Li Liangyu
2007-01-01
This paper presents a flexible method for the representation of the weld seam based on spline interpolation. In this method, the tool path of a welding robot can be generated automatically from a 3D CAD model. This technique has been implemented and demonstrated in the FANUC Arc Welding Robot Workstation. Following the method, a software system was developed using VBA in SolidWorks 2006. It offers an interface between SolidWorks and ROBOGUIDE, the off-line programming software of the FANUC robot, combining the strong modeling function of the former with the simulating function of the latter. It also has the capability of communicating with an on-line robot. Experimental results have shown high accuracy and strong reliability. This method will improve the intelligence and flexibility of the welding robot workstation.
An Improved Method for Computing River Discharge Using Cubic Spline Interpolation
Directory of Open Access Journals (Sweden)
Budi I. Setiawan
2007-09-01
Full Text Available This paper presents an improved method for measuring river discharge using cubic spline interpolation. The spline function is used to describe, as a continuous curve, the river cross-section profile formed from measurements of distance and depth. With this new method, the area and wetted perimeter of the river are computed more easily, quickly and accurately. Likewise, the inverse function is obtained using the Newton-Raphson method, which simplifies computing the area and perimeter when the river stage is known. The new method can directly compute river discharge using the Manning formula, and produces a rating curve. This paper presents one example of discharge measurement on the Rudeng River in Aceh. The river is about 120 m wide and 7 m deep, had a discharge of 41.3 m3/s at the time of measurement, and its rating curve follows the formula Q = 0.1649 H^2.884, where Q is discharge (m3/s) and H is the water level above the river bed (m).
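The abstract reports the rating curve Q = 0.1649 H^2.884 and uses Newton-Raphson for the inverse of the spline profile. A small sketch applying the same Newton-Raphson idea to invert the rating curve itself (the constants come from the abstract; the starting point and tolerance below are arbitrary choices, not from the paper):

```python
# Rating curve Q = a * H**b and its Newton-Raphson inversion (stage from discharge).

def discharge(H, a=0.1649, b=2.884):
    return a * H ** b

def stage_from_discharge(Q, a=0.1649, b=2.884, H0=1.0, tol=1e-10):
    # Newton-Raphson on f(H) = a*H**b - Q; f is convex and increasing for H > 0,
    # so the iteration converges from a positive starting point.
    H = H0
    for _ in range(100):
        f = a * H ** b - Q
        df = a * b * H ** (b - 1)
        step = f / df
        H -= step
        if abs(step) < tol:
            break
    return H

Q = discharge(7.0)            # discharge at the reported 7 m depth
H = stage_from_discharge(Q)   # recovers ~7.0
```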
From cardinal spline wavelet bases to highly coherent dictionaries
Energy Technology Data Exchange (ETDEWEB)
Andrle, Miroslav; Rebollo-Neira, Laura [Aston University, Birmingham B4 7ET (United Kingdom)
2008-05-02
Wavelet families arise by scaling and translations of a prototype function, called the mother wavelet. The construction of wavelet bases for cardinal spline spaces is generally carried out within the multi-resolution analysis scheme. Thus, the usual way of increasing the dimension of the multi-resolution subspaces is by augmenting the scaling factor. We show here that, when working on a compact interval, the identical effect can be achieved without changing the wavelet scale but reducing the translation parameter. By such a procedure we generate a redundant frame, called a dictionary, spanning the same spaces as a wavelet basis but with wavelets of broader support. We characterize the correlation of the dictionary elements by measuring their 'coherence' and produce examples illustrating the relevance of highly coherent dictionaries to problems of sparse signal representation. (fast track communication)
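Coherence, as used here, is the largest absolute inner product between distinct normalised dictionary atoms. A small numeric sketch with hand-made vectors (hypothetical data, not the spline wavelet dictionary of the paper):

```python
# Mutual coherence of a dictionary: max |<d_i, d_j>| over distinct normalised atoms.
import math

def coherence(atoms):
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))
    normed = []
    for a in atoms:
        n = math.sqrt(dot(a, a))
        normed.append([x / n for x in a])
    mu = 0.0
    for i in range(len(normed)):
        for j in range(i + 1, len(normed)):
            mu = max(mu, abs(dot(normed[i], normed[j])))
    return mu

mu_ortho = coherence([[1.0, 0.0], [0.0, 1.0]])             # orthonormal pair: 0
mu_near = coherence([[1.0, 0.0], [0.0, 1.0], [1.0, 0.1]])  # nearly parallel atoms
```

Adding a nearly parallel atom drives the coherence towards 1, which is exactly the regime the paper's broad-support dictionaries explore.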
Sebastián MV; Navascués MA
2006-01-01
Fractal methodology provides a general frame for the understanding of real-world phenomena. In particular, the classical methods of real-data interpolation can be generalized by means of fractal techniques. In this paper, we describe a procedure for the construction of smooth fractal functions, with the help of Hermite osculatory polynomials. As a consequence of the process, we generalize any smooth interpolant by means of a family of fractal functions. In particular, the elements of the cla...
Meshing Force of Misaligned Spline Coupling and the Influence on Rotor System
Directory of Open Access Journals (Sweden)
Guang Zhao
2008-01-01
Full Text Available The meshing force of a misaligned spline coupling is derived, the dynamic equation of the rotor-spline coupling system is established based on finite element analysis, and the influence of the meshing force on the rotor-spline coupling system is simulated by a numerical integration method. According to the theoretical analysis, the meshing force of a spline coupling is related to coupling parameters, misalignment, transmitting torque, static misalignment, dynamic vibration displacement, and so on. The meshing force increases nonlinearly with increasing spline thickness and static misalignment or decreasing alignment meshing distance (AMD). The stiffness of the coupling is related to dynamic vibration displacement and static misalignment, and is not a constant. Dynamic behaviors of the rotor-spline coupling system reveal the following: 1X rotating speed is the main response frequency of the system when there is no misalignment, while 2X rotating speed appears when misalignment is present. Moreover, when misalignment increases, vibration of the system gets intricate; the shaft orbit departs from the origin, and the magnitudes of all frequencies increase. The results can provide important criteria for both the optimization design of spline couplings and the troubleshooting of rotor systems.
A. Korattikara; V. Rathod; K. Murphy; M. Welling
2015-01-01
We consider the problem of Bayesian parameter estimation for deep neural networks, which is important in problem settings where we may have little data, and/or where we need accurate posterior predictive densities p(y|x, D), e.g., for applications involving bandits or active learning. One simple ap
Bayesian logistic regression analysis
Van Erp, H.R.N.; Van Gelder, P.H.A.J.M.
2012-01-01
In this paper we present a Bayesian logistic regression analysis. It is found that if one wishes to derive the posterior distribution of the probability of some event, then, together with the traditional Bayes Theorem and the integrating out of nuisance parameters, the Jacobian transformation is an
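To make the Bayesian logistic setup concrete, here is a toy grid posterior for a single coefficient with a flat prior (synthetic data and grid bounds are my own; the paper's Jacobian-transformation derivation is not reproduced):

```python
# Grid-approximated posterior for one logistic coefficient beta under a flat prior:
# posterior weight at each grid point is proportional to the likelihood.
import math

x = [-2.0, -1.0, 0.0, 1.0, 2.0]
y = [0, 1, 0, 1, 1]            # hypothetical binary outcomes

def log_likelihood(beta):
    ll = 0.0
    for xi, yi in zip(x, y):
        p = 1.0 / (1.0 + math.exp(-beta * xi))
        ll += math.log(p if yi else 1.0 - p)
    return ll

grid = [i * 0.01 for i in range(1, 1001)]        # beta in (0, 10]
w = [math.exp(log_likelihood(b)) for b in grid]  # unnormalised posterior
Z = sum(w)
post_mean = sum(b * wi for b, wi in zip(grid, w)) / Z
```

With one "mislabeled" point at x = −1 the likelihood has an interior maximum near beta ≈ 1, so the posterior is proper on this grid.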
Loredo, Thomas J.
2004-04-01
I describe a framework for adaptive scientific exploration based on iterating an Observation-Inference-Design cycle that allows adjustment of hypotheses and observing protocols in response to the results of observation on the fly, as data are gathered. The framework uses a unified Bayesian methodology for the inference and design stages: Bayesian inference to quantify what we have learned from the available data and predict future data, and Bayesian decision theory to identify which new observations would teach us the most. When the goal of the experiment is simply to make inferences, the framework identifies a computationally efficient iterative "maximum entropy sampling" strategy as the optimal strategy in settings where the noise statistics are independent of signal properties. Results of applying the method to two "toy" problems with simulated data (measuring the orbit of an extrasolar planet, and locating a hidden one-dimensional object) show the approach can significantly improve observational efficiency in settings that have well-defined nonlinear models. I conclude with a list of open issues that must be addressed to make Bayesian adaptive exploration a practical and reliable tool for optimizing scientific exploration.
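The "maximum entropy sampling" rule can be sketched on a toy Bayesian linear model y = a·x + noise with known noise variance and a Gaussian prior on the slope: under these Gaussian assumptions the entropy of the predictive distribution is monotone in the predictive variance, so one simply picks the candidate design point with the largest predictive variance. (A hypothetical one-parameter model, not the paper's exoplanet-orbit setting.)

```python
def posterior_var(xs_observed, v0=1.0, s2=0.25):
    # Conjugate Gaussian update for the slope a in y = a*x + noise:
    # 1/v_post = 1/v0 + sum(x_i^2)/s2
    precision = 1.0 / v0 + sum(x * x for x in xs_observed) / s2
    return 1.0 / precision

def next_design_point(candidates, xs_observed, v0=1.0, s2=0.25):
    # Maximum entropy sampling: with signal-independent Gaussian noise, the
    # predictive entropy at x is monotone in the predictive variance
    # x^2 * v_post + s2, so maximise that.
    vpost = posterior_var(xs_observed, v0, s2)
    return max(candidates, key=lambda x: x * x * vpost + s2)

candidates = [0.1, 0.5, 1.0, 2.0]
best = next_design_point(candidates, xs_observed=[1.0])   # picks the largest |x|
```

For this one-parameter linear model the criterion trivially favours the largest |x|; in nonlinear settings such as orbit fitting, the maximiser moves around as the posterior evolves, which is what makes the strategy adaptive.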
DEFF Research Database (Denmark)
Antoniou, Constantinos; Harrison, Glenn W.; Lau, Morten I.;
2015-01-01
A large literature suggests that many individuals do not apply Bayes’ Rule when making decisions that depend on them correctly pooling prior information and sample data. We replicate and extend a classic experimental study of Bayesian updating from psychology, employing the methods of experimental...
Bayesian Independent Component Analysis
DEFF Research Database (Denmark)
Winther, Ole; Petersen, Kaare Brandt
2007-01-01
In this paper we present an empirical Bayesian framework for independent component analysis. The framework provides estimates of the sources, the mixing matrix and the noise parameters, and is flexible with respect to choice of source prior and the number of sources and sensors. Inside the engine...
DEFF Research Database (Denmark)
Hartelius, Karsten; Carstensen, Jens Michael
2003-01-01
A method for locating distorted grid structures in images is presented. The method is based on the theories of template matching and Bayesian image restoration. The grid is modeled as a deformable template. Prior knowledge of the grid is described through a Markov random field (MRF) model which...
Quintic nonpolynomial spline solutions for fourth order two-point boundary value problem
Ramadan, M. A.; Lashien, I. F.; Zahra, W. K.
2009-04-01
In this paper, we develop quintic nonpolynomial spline methods for the numerical solution of fourth order two-point boundary value problems. Using this spline function a few consistency relations are derived for computing approximations to the solution of the problem. The present approach gives better approximations and generalizes all the existing polynomial spline methods up to order four. This approach has less computational cost. Convergence analysis of these methods is discussed. Two numerical examples are included to illustrate the practical usefulness of our methods.
Energy Technology Data Exchange (ETDEWEB)
Daly, Don S.; Anderson, Kevin K.; White, Amanda M.; Gonzalez, Rachel M.; Varnum, Susan M.; Zangar, Richard C.
2008-07-14
Background: A microarray of enzyme-linked immunosorbent assays, or ELISA microarray, predicts simultaneously the concentrations of numerous proteins in a small sample. These predictions, however, are uncertain due to processing error and biological variability. Making sound biological inferences as well as improving the ELISA microarray process require both concentration predictions and credible estimates of their errors. Methods: We present a statistical method based on monotonic spline statistical models, penalized constrained least squares fitting (PCLS) and Monte Carlo simulation (MC) to predict concentrations and estimate prediction errors in ELISA microarrays. PCLS restrains the flexible spline to a fit of assay intensity that is a monotone function of protein concentration. With MC, both modeling and measurement errors are combined to estimate prediction error. The spline/PCLS/MC method is compared to a common method using simulated and real ELISA microarray data sets. Results: In contrast to the rigid logistic model, the flexible spline model gave credible fits in almost all test cases, including troublesome cases with left and/or right censoring, or other asymmetries. For the real data sets, 61% of the spline predictions were more accurate than their comparable logistic predictions, especially at the extremes of the prediction curve. The relative errors of 50% of comparable spline and logistic predictions differed by less than 20%. Monte Carlo simulation rendered acceptable asymmetric prediction intervals for both spline and logistic models, while propagation of error produced symmetric intervals that diverged unrealistically as the standard curves approached horizontal asymptotes. Conclusions: The spline/PCLS/MC method is a flexible, robust alternative to a logistic/NLS/propagation-of-error method to reliably predict protein concentrations and estimate their errors. The spline method simplifies model selection and fitting.
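The paper constrains a flexible spline to be monotone in concentration. As a much simpler stand-in for that idea, the pool-adjacent-violators algorithm (PAVA) computes the best monotone (isotonic) least-squares fit to a sequence; it is not the penalized spline of the paper, just an illustration of monotone-constrained fitting on hypothetical assay responses:

```python
# Isotonic (non-decreasing) least-squares fit via pool-adjacent-violators.

def pava(y):
    # Each block stores (sum, count); merge blocks while monotonicity is violated.
    blocks = []
    for v in y:
        blocks.append([v, 1])
        while len(blocks) > 1 and blocks[-2][0] / blocks[-2][1] > blocks[-1][0] / blocks[-1][1]:
            s, c = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += c
    fit = []
    for s, c in blocks:
        fit.extend([s / c] * c)   # each merged block becomes a flat segment
    return fit

noisy = [0.1, 0.5, 0.4, 0.9, 0.8, 1.2]   # roughly increasing assay response
fit = pava(noisy)                         # non-decreasing fitted sequence
```

Violating pairs (0.5, 0.4) and (0.9, 0.8) get averaged into flat segments, yielding a monotone curve, the same qualitative constraint PCLS imposes on the spline.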
B-splines on 3-D tetrahedron partition in four-directional mesh
Institute of Scientific and Technical Information of China (English)
Anonymous
2001-01-01
It is more difficult to construct 3-D splines than in the 2-D case. Some results for three-directional meshes in the bivariate case have been extended to the 3-D case, and a corresponding tetrahedron partition has been constructed. The supports of the related B-splines and their recurrence formulas for integration and differentiation-difference are obtained. The results of this paper can be extended to higher-dimensional spaces and can also be used in wavelet analysis, because of the relationship between splines and wavelets.
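The recurrence formulas referred to above are, in the 1-D case, the classical Cox-de Boor relations. A direct recursive evaluation of a univariate B-spline basis function (illustrative only; the paper's setting is the 3-D tetrahedral one):

```python
# Cox-de Boor recursion: B_{i,k}(t) of order k (degree k-1) over a knot vector.

def bspline_basis(i, k, t, knots):
    if k == 1:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = 0.0
    if knots[i + k - 1] != knots[i]:
        left = (t - knots[i]) / (knots[i + k - 1] - knots[i]) * bspline_basis(i, k - 1, t, knots)
    right = 0.0
    if knots[i + k] != knots[i + 1]:
        right = (knots[i + k] - t) / (knots[i + k] - knots[i + 1]) * bspline_basis(i + 1, k - 1, t, knots)
    return left + right

knots = [0, 1, 2, 3, 4]
# The quadratic (order-3) B-spline on uniform knots peaks at the midpoint
# of its support [0, 3], where it attains 0.75.
val = bspline_basis(0, 3, 1.5, knots)   # 0.75
```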
MECHANICS ANALYSIS ON PRECISE FORMING PROCESS OF EXTERNAL SPLINE COLD ROLLING
Institute of Scientific and Technical Information of China (English)
ZHANG Dawei; LI Yongtang; FU Jianhua; ZHENG Quangang
2007-01-01
Under suitable assumptions, the deformation process of external spline cold rolling is analyzed. By the graphing method, the slip-line field of the plastically deforming area in the process of external spline cold rolling is set up. Different friction conditions are used in different contact areas in order to realistically reflect the actual situation. The unit average pressure on the contact surface in the rolling process is solved according to the stress field theory of slip-lines, and formulae for the rolling force and rolling moment are established. The theoretical result is well consistent with finite element analysis. A theoretical basis is provided for the precise forming process of spline cold rolling and the production of externally splined shafts.
Back, Aurore; Sonnendrücker, Eric
2013-01-01
The notion of B-spline based discrete differential forms is recalled and along with a Finite Element Hodge operator, it is used to design new numerical methods for solving the Vlasov-Poisson equations.
Nonlinear Spline Kernel-based Partial Least Squares Regression Method and Its Application
Institute of Scientific and Technical Information of China (English)
JIA Jin-ming; WEN Xiang-jun
2008-01-01
Inspired by the traditional Wold's nonlinear PLS algorithm, which comprises the NIPALS approach and a spline inner-function model, a novel nonlinear partial least squares algorithm based on a spline kernel (named SK-PLS) is proposed for nonlinear modeling in the presence of multicollinearity. Based on the inner-product kernel spanned by the spline basis functions with an infinite number of nodes, this method first maps the input data into a high-dimensional feature space, then calculates a linear PLS model with a reformed NIPALS procedure in the feature space, and in consequence gives a unified framework for traditional PLS "kernel" algorithms. The linear PLS in the feature space corresponds to a nonlinear PLS in the original input (primal) space. The good approximating property of the spline kernel function enhances the generalization ability of the novel model, and two numerical experiments are given to illustrate the feasibility of the proposed method.
Divisibility, Smoothness and Cryptographic Applications
Naccache, David; Shparlinski, Igor E.
2008-01-01
This paper deals with products of moderate-size primes, familiarly known as smooth numbers. Smooth numbers play a crucial role in information theory, signal processing and cryptography. We present various properties of smooth numbers relating to their enumeration, distribution and occurrence in various integer sequences. We then turn our attention to cryptographic applications in which smooth numbers play a pivotal role.
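A number is y-smooth when all of its prime factors are at most y. A straightforward trial-division check (an illustration of the notion itself, not of the paper's analytic estimates):

```python
# Check y-smoothness by trial division: strip small prime factors and
# reject as soon as a factor larger than y appears.

def is_smooth(n, y):
    d = 2
    while d * d <= n:
        while n % d == 0:
            if d > y:
                return False
            n //= d
        d += 1
    return n <= y   # whatever remains is 1 or a single prime factor

smooth_7 = [n for n in range(2, 50) if is_smooth(n, 7)]   # 7-smooth numbers below 50
```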
Revealed smooth nontransitive preferences
DEFF Research Database (Denmark)
Keiding, Hans; Tvede, Mich
2013-01-01
consumption bundle, all strictly preferred bundles are more expensive than the observed bundle. Our main result is that data sets can be rationalized by a smooth nontransitive preference relation if and only if prices can be normalized such that the law of demand is satisfied. Market data sets consist of finitely many observations of price vectors, lists of individual incomes and aggregate demands. We apply our main result to characterize market data sets consistent with equilibrium behaviour of pure-exchange economies with smooth nontransitive consumers.
Probability and Bayesian statistics
1987-01-01
This book contains selected and refereed contributions to the "International Symposium on Probability and Bayesian Statistics", which was organized to celebrate the 80th birthday of Professor Bruno de Finetti at his birthplace Innsbruck in Austria. Since Professor de Finetti died in 1985, the symposium was dedicated to the memory of Bruno de Finetti and took place at Igls near Innsbruck from 23 to 26 September 1986. Some of the papers are published especially because of their relationship to Bruno de Finetti's scientific work. The evolution of stochastics shows the growing importance of probability as a coherent assessment of numerical values as degrees of belief in certain events. This is the basis for Bayesian inference in the sense of modern statistics. The contributions in this volume cover a broad spectrum ranging from foundations of probability across psychological aspects of formulating subjective probability statements, abstract measure-theoretical considerations, contributions to theoretical statistics an...
DEFF Research Database (Denmark)
Mørup, Morten; Schmidt, Mikkel N
2012-01-01
Many networks of scientific interest naturally decompose into clusters or communities with comparatively fewer external than internal links; however, current Bayesian models of network communities do not enforce this intuitive notion of communities. We formulate a nonparametric Bayesian model for community detection consistent with an intuitive definition of communities and present a Markov chain Monte Carlo procedure for inferring the community structure. A Matlab toolbox with the proposed inference procedure is available for download. On synthetic and real networks, our model detects communities consistent with ground truth, and on real networks, it outperforms existing approaches in predicting missing links. This suggests that community structure is an important structural property of networks that should be explicitly modeled.
Numerical Solutions for Convection-Diffusion Equation through Non-Polynomial Spline
Directory of Open Access Journals (Sweden)
Ravi Kanth A.S.V.
2016-01-01
Full Text Available In this paper, numerical solutions for the convection-diffusion equation via non-polynomial splines are studied. We propose an implicit method based on non-polynomial spline functions for solving the convection-diffusion equation. The method is proven to be unconditionally stable by using the Von Neumann technique. Numerical results are illustrated to demonstrate the efficiency and stability of the proposed method.
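The method above is an implicit non-polynomial-spline scheme. As a plainer stand-in illustrating why implicit time stepping is unconditionally stable, here is backward-Euler/central-difference for u_t + a·u_x = k·u_xx on [0, 1] with zero boundary values, solved with the Thomas algorithm (all parameters below are hypothetical, and this is not the spline discretization of the paper):

```python
# Backward Euler in time, central differences in space, for u_t + a u_x = k u_xx.

def thomas(sub, diag, sup, rhs):
    # Solve a tridiagonal system (sub/diag/sup are the three bands).
    n = len(diag)
    c = sup[:]
    d = rhs[:]
    c[0] /= diag[0]
    d[0] /= diag[0]
    for i in range(1, n):
        m = diag[i] - sub[i] * c[i - 1]
        if i < n - 1:
            c[i] = sup[i] / m
        d[i] = (d[i] - sub[i] * d[i - 1]) / m
    for i in range(n - 2, -1, -1):
        d[i] -= c[i] * d[i + 1]
    return d

def step(u, a, k, dx, dt):
    # One implicit time step; unknowns are the interior values, u[0] = u[-1] = 0.
    n = len(u) - 2
    r = k * dt / dx ** 2          # diffusion number
    p = a * dt / (2 * dx)         # convection number
    sub = [-(r + p)] * n
    diag = [1 + 2 * r] * n
    sup = [-(r - p)] * n
    inner = thomas(sub, diag, sup, u[1:-1])
    return [0.0] + inner + [0.0]

m = 21
u = [0.0] * m
u[m // 2] = 1.0                   # initial spike
for _ in range(50):               # large dt: an explicit scheme would blow up
    u = step(u, a=1.0, k=0.01, dx=1 / (m - 1), dt=0.05)
```

The convection part is skew-symmetric and the diffusion part positive definite, so the L2 norm of the solution never grows regardless of dt, which is the discrete counterpart of Von Neumann unconditional stability.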
B-SPLINE-BASED SVM MODEL AND ITS APPLICATIONS TO OIL WATER-FLOODED STATUS IDENTIFICATION
Institute of Scientific and Technical Information of China (English)
Shang Fuhua; Zhao Tiejun; Yi Xiongying
2007-01-01
A method of B-spline transform for signal feature extraction is developed. With the B-spline, the log-signal space is mapped into the vector space. An efficient algorithm based on the Support Vector Machine (SVM) to automatically identify the water-flooded status of an oil-saturated stratum is described. The experiments show that this algorithm can improve identification and generalization performance in the case of a limited set of samples.
Numerical Method Using Cubic Trigonometric B-Spline Technique for Nonclassical Diffusion Problems
Muhammad Abbas; Ahmad Abd. Majid; Ahmad Izani Md Ismail; Abdur Rashid
2014-01-01
A new two-time level implicit technique based on cubic trigonometric B-spline is proposed for the approximate solution of a nonclassical diffusion problem with nonlocal boundary constraints. The standard finite difference approach is applied to discretize the time derivative while cubic trigonometric B-spline is utilized as an interpolating function in the space dimension. The technique is shown to be unconditionally stable using the von Neumann method. Several numerical examples are discusse...
THE BLOSSOM APPROACH TO THE DIMENSION OF THE BIVARIATE SPLINE SPACE
Institute of Scientific and Technical Information of China (English)
Yu-yu Feng; Zhi-bin Chen
2000-01-01
The dimension of the bivariate spline space S^r_n(Δ) may depend on geometric properties of the triangulation Δ, in particular if n is not much bigger than r. In this paper, the blossom approach to the dimension count is outlined. It leads to a symbolic algorithm that determines whether or not a triangulation is singular. The approach is demonstrated on the case of the Morgan-Scott partition and twice-differentiable splines.
Energy Technology Data Exchange (ETDEWEB)
Li, Xin; Miller, Eric L.; Rappaport, Carey; Silevich, Michael
2000-04-11
A common problem in signal processing is to estimate the structure of an object from noisy measurements linearly related to the desired image. These problems are broadly known as inverse problems. A key feature which complicates the solution to such problems is their ill-posedness. That is, small perturbations in the data, arising e.g. from noise, can and do lead to severe, non-physical artifacts in the recovered image. The process of stabilizing these problems is known as regularization, of which Tikhonov regularization is one of the most common. While this approach leads to a simple linear least squares problem to solve for generating the reconstruction, it has the unfortunate side effect of producing smooth images, thereby obscuring important features such as edges. Therefore, over the past decade there has been much work in the development of edge-preserving regularizers. This technique leads to image estimates in which the important features are retained, but computationally they require the solution of a nonlinear least squares problem, a daunting task in many practical multi-dimensional applications. In this thesis we explore low-order models for reducing the complexity of the reconstruction process. Specifically, B-splines are used to approximate the object. If a 'proper' collection of B-splines is chosen, so that the object can be efficiently represented using a few basis functions, the dimensionality of the underlying problem will be significantly decreased. Consequently, an optimum distribution of splines needs to be determined. Here, an adaptive refining and pruning algorithm is developed to solve the problem. The refining part is based on curvature information, in which the intuition is that a relatively dense set of fine-scale basis elements should cluster near regions of high curvature, while a sparse collection of basis vectors is required to adequately represent the object over spatially smooth areas. The pruning part is a greedy
Brody, Samuel; Lapata, Mirella
2009-01-01
Sense induction seeks to automatically identify word senses directly from a corpus. A key assumption underlying previous work is that the context surrounding an ambiguous word is indicative of its meaning. Sense induction is thus typically viewed as an unsupervised clustering problem where the aim is to partition a word’s contexts into different classes, each representing a word sense. Our work places sense induction in a Bayesian context by modeling the contexts of the ambiguous word as samp...
Bayesian Generalized Rating Curves
Helgi Sigurðarson
2014-01-01
A rating curve is a curve or a model that describes the relationship between water elevation, or stage, and discharge in an observation site in a river. The rating curve is fit from paired observations of stage and discharge. The rating curve then predicts discharge given observations of stage and this methodology is applied as stage is substantially easier to directly observe than discharge. In this thesis a statistical rating curve model is proposed working within the framework of Bayesian...
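The usual parametric form is Q = a·H^b. A tiny grid approximation of a Bayesian posterior over (a, b) with a flat prior and Gaussian errors on log Q (the data, grids, and error scale below are synthetic; the thesis develops a much fuller hierarchical model):

```python
# Grid posterior for the rating-curve parameters (a, b) in Q = a * H**b.
import math

H = [0.5, 1.0, 2.0, 3.0, 4.0]
Q = [0.35 * h ** 2.0 for h in H]      # exact synthetic stage-discharge pairs

def log_post(a, b, sigma=0.1):
    # Flat prior, so log posterior = Gaussian log likelihood on log Q (up to a constant).
    s = 0.0
    for h, q in zip(H, Q):
        r = math.log(q) - (math.log(a) + b * math.log(h))
        s += -0.5 * (r / sigma) ** 2
    return s

grid_a = [0.05 * i for i in range(1, 21)]    # a in 0.05 .. 1.00
grid_b = [0.1 * j for j in range(10, 41)]    # b in 1.0 .. 4.0
best = max(((a, b) for a in grid_a for b in grid_b), key=lambda ab: log_post(*ab))
```

Because the synthetic data are noise-free and the true (a, b) = (0.35, 2.0) lies on the grid, the posterior mode recovers it exactly.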
Efficient Bayesian Phase Estimation
Wiebe, Nathan; Granade, Chris
2016-07-01
We introduce a new method called rejection filtering that we use to perform adaptive Bayesian phase estimation. Our approach has several advantages: it is classically efficient, easy to implement, achieves Heisenberg limited scaling, resists depolarizing noise, tracks time-dependent eigenstates, recovers from failures, and can be run on a field programmable gate array. It also outperforms existing iterative phase estimation algorithms such as Kitaev's method.
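One rejection-filtering update can be sketched as follows: measurement outcome 0 occurs with probability cos²(M(φ − θ)/2), prior samples are accepted with the probability of the observed outcome, and the survivors are summarised by a Gaussian. This mirrors the spirit of the method only; the experiment parameters below are hypothetical and no hardware model is included.

```python
# One rejection-filtering step for Bayesian phase estimation.
import math
import random
import statistics

random.seed(2)
true_phi = 1.0

def likelihood0(phi, theta, M):
    # Probability of outcome 0 given phase phi and experiment settings (theta, M).
    return math.cos(M * (phi - theta) / 2.0) ** 2

def rf_update(samples, theta, M, outcome):
    accepted = []
    for phi in samples:
        p = likelihood0(phi, theta, M)
        if outcome == 1:
            p = 1.0 - p
        if random.random() < p:       # rejection sampling against the likelihood
            accepted.append(phi)
    mu = statistics.fmean(accepted)   # Gaussian summary of the posterior cloud
    sd = statistics.pstdev(accepted)
    return mu, sd, accepted

prior = [random.uniform(0.0, 2.0) for _ in range(20000)]
theta, M = 0.2, 1
outcome = 0 if random.random() < likelihood0(true_phi, theta, M) else 1
mu, sd, post = rf_update(prior, theta, M, outcome)   # posterior narrower than prior
```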
Bayesian theory and applications
Dellaportas, Petros; Polson, Nicholas G; Stephens, David A
2013-01-01
The development of hierarchical models and Markov chain Monte Carlo (MCMC) techniques forms one of the most profound advances in Bayesian analysis since the 1970s and provides the basis for advances in virtually all areas of applied and theoretical Bayesian statistics. This volume guides the reader along a statistical journey that begins with the basic structure of Bayesian theory, and then provides details on most of the past and present advances in this field. The book has a unique format. There is an explanatory chapter devoted to each conceptual advance followed by journal-style chapters that provide applications or further advances on the concept. Thus, the volume is both a textbook and a compendium of papers covering a vast range of topics. It is appropriate for a well-informed novice interested in understanding the basic approach, methods and recent applications. Because of its advanced chapters and recent work, it is also appropriate for a more mature reader interested in recent applications and devel...
Wiegerinck, Wim; Schoenaker, Christiaan; Duane, Gregory
2016-04-01
Recently, methods for model fusion by dynamically combining model components in an interactive ensemble have been proposed. In these proposals, fusion parameters have to be learned from data. One can view these systems as parametrized dynamical systems. We address the question of learnability of dynamical systems with respect to both short-term (vector field) and long-term (attractor) behavior. In particular we are interested in learning in the imperfect model class setting, in which the ground truth has a higher complexity than the models, e.g. due to unresolved scales. We take a Bayesian point of view and we define a joint log-likelihood that consists of two terms: one is the vector field error and the other is the attractor error, for which we take the L1 distance between the stationary distributions of the model and the assumed ground truth. In the context of linear models (like so-called weighted supermodels), and assuming a Gaussian error model in the vector fields, vector field learning leads to a tractable Gaussian solution. This solution can then be used as a prior for the next step, Bayesian attractor learning, in which the attractor error is used as a log-likelihood term. Bayesian attractor learning is implemented by elliptical slice sampling, a sampling method for systems with a Gaussian prior and a non-Gaussian likelihood. Simulations with a partially observed driven Lorenz 63 system illustrate the approach.
Micropolar Fluids Using B-spline Divergence Conforming Spaces
Sarmiento, Adel
2014-06-06
We discretized the two-dimensional linear momentum, microrotation, energy and mass conservation equations from micropolar fluids theory with the finite element method, creating divergence-conforming spaces based on B-spline basis functions to obtain pointwise divergence-free solutions [8]. Weak boundary conditions were imposed using Nitsche's method for tangential conditions, while normal conditions were imposed strongly. Once exact mass conservation was provided by the divergence-free formulation, we focused on evaluating the differences between micropolar fluids and conventional fluids, to show the advantages of using the micropolar fluid model to capture the features of complex fluids. Square and arc heat-driven cavities were solved as test cases. The parameters of the model and the Rayleigh number were varied for a better understanding of the system. The divergence-free formulation was used to guarantee an accurate solution of the flow. This formulation was implemented using the framework PetIGA as a basis, using its parallel structures to achieve high scalability. The results of the square heat-driven cavity test case are in good agreement with those reported earlier.
Smoothed Particle Hydrodynamic Simulator
Energy Technology Data Exchange (ETDEWEB)
2016-10-05
This code is a highly modular framework for developing smoothed particle hydrodynamic (SPH) simulations running on parallel platforms. The compartmentalization of the code allows for rapid development of new SPH applications and modifications of existing algorithms. The compartmentalization also allows changes in one part of the code used by many applications to instantly be made available to all applications.
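The standard smoothing kernel in SPH codes is itself a cubic spline. A generic 1-D illustration of that kernel and an SPH density summation (not code from the framework described above; the particle spacing, masses, and smoothing length are hypothetical):

```python
# 1-D cubic spline SPH kernel (support radius 2h, normalisation 2/(3h))
# and the basic SPH density summation rho_i = sum_j m_j W(x_i - x_j, h).

def cubic_spline_kernel(r, h):
    q = abs(r) / h
    sigma = 2.0 / (3.0 * h)
    if q < 1.0:
        return sigma * (1.0 - 1.5 * q ** 2 + 0.75 * q ** 3)
    if q < 2.0:
        return sigma * 0.25 * (2.0 - q) ** 3
    return 0.0

def density(positions, masses, h):
    rho = []
    for xi in positions:
        rho.append(sum(m * cubic_spline_kernel(xi - xj, h)
                       for xj, m in zip(positions, masses)))
    return rho

xs = [i * 0.1 for i in range(11)]            # uniform spacing 0.1
rho = density(xs, [0.1] * len(xs), h=0.1)    # interior density is exactly 1
```

With mass = spacing and h = spacing, the kernel's partition-of-unity-like property makes the interior density come out at exactly 1, while edge particles see fewer neighbours and get a lower density.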
Bayesian optimization for materials design
Frazier, Peter I.; Wang, Jialei
2015-01-01
We introduce Bayesian optimization, a technique developed for optimizing time-consuming engineering simulations and for fitting machine learning models on large datasets. Bayesian optimization guides the choice of experiments during materials design and discovery to find good material designs in as few experiments as possible. We focus on the case when materials designs are parameterized by a low-dimensional vector. Bayesian optimization is built on a statistical technique called Gaussian pro...
Bayesian Posteriors Without Bayes' Theorem
Hill, Theodore P
2012-01-01
The classical Bayesian posterior arises naturally as the unique solution of several different optimization problems, without the necessity of interpreting data as conditional probabilities and then using Bayes' Theorem. For example, the classical Bayesian posterior is the unique posterior that minimizes the loss of Shannon information in combining the prior and the likelihood distributions. These results, direct corollaries of recent results about conflations of probability distributions, reinforce the use of Bayesian posteriors, and may help partially reconcile some of the differences between classical and Bayesian statistics.
Kiani, M A; Sim, K S; Nia, M E; Tso, C P
2015-05-01
A new technique based on cubic spline interpolation with Savitzky-Golay smoothing using weighted least squares error filter is enhanced for scanning electron microscope (SEM) images. A diversity of sample images is captured and the performance is found to be better when compared with the moving average and the standard median filters, with respect to eliminating noise. This technique can be implemented efficiently on real-time SEM images, with all mandatory data for processing obtained from a single image. Noise in images, and particularly in SEM images, are undesirable. A new noise reduction technique, based on cubic spline interpolation with Savitzky-Golay and weighted least squares error method, is developed. We apply the combined technique to single image signal-to-noise ratio estimation and noise reduction for SEM imaging system. This autocorrelation-based technique requires image details to be correlated over a few pixels, whereas the noise is assumed to be uncorrelated from pixel to pixel. The noise component is derived from the difference between the image autocorrelation at zero offset, and the estimation of the corresponding original autocorrelation. In the few test cases involving different images, the efficiency of the developed noise reduction filter is proved to be significantly better than those obtained from the other methods. Noise can be reduced efficiently with appropriate choice of scan rate from real-time SEM images, without generating corruption or increasing scanning time.
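Savitzky-Golay smoothing fits a low-order polynomial in a sliding window; for a quadratic fit over a 5-point window the classical convolution weights are (−3, 12, 17, 12, −3)/35. A minimal 1-D smoother in that spirit (the paper combines this with cubic spline interpolation and a weighted least squares error filter, which are not reproduced here):

```python
# 5-point quadratic Savitzky-Golay smoothing with the classical weights.

def savgol5(y):
    w = [-3, 12, 17, 12, -3]
    out = list(y)                    # edges are left unsmoothed for simplicity
    for i in range(2, len(y) - 2):
        out[i] = sum(wk * y[i + k - 2] for k, wk in enumerate(w)) / 35.0
    return out

# A quadratic-preserving filter reproduces a quadratic signal exactly,
# which is why Savitzky-Golay smooths noise without flattening peaks.
signal = [x * x for x in range(8)]
smoothed = savgol5(signal)
```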
Generalizing smooth transition autoregressions
DEFF Research Database (Denmark)
Chini, Emilio Zanetti
We introduce a variant of the smooth transition autoregression, the GSTAR model, capable of parametrizing the asymmetry in the tails of the transition equation by using a particular generalization of the logistic function. A general-to-specific modelling strategy is discussed in detail, with part... forecasting experiment to evaluate its point and density forecasting performances. In all the cases, the dynamic asymmetry in the cycle is efficiently captured by the new model. The GSTAR beats AR and STAR competitors in point forecasting, while this superiority becomes less evident in density forecasting...
Rolling Force and Rolling Moment in Spline Cold Rolling Using Slip-line Field Method
Institute of Scientific and Technical Information of China (English)
ZHANG Dawei; LI Yongtang; FU Jianhua; ZHENG Quangang
2009-01-01
Rolling force and rolling moment are prime process parameters of external spline cold rolling. However, precise theoretical formulae for rolling force and rolling moment are still scarce, and their determination depends on experience. In the present study, mathematical models of rolling force and rolling moment are established based on slip-line stress field theory, and isotropic hardening is used to improve the yield criterion. A calculation program is developed in the MATLAB programming environment according to the established mathematical models. The rolling force and rolling moment can be predicted quickly via the calculation program, and the reliability of the models is validated by FEM. Within the ranges of spline module m = 0.5-1.5 mm, reference-circle pressure angle α = 30.0°-45.0°, and number of spline teeth Z = 19-54, the rolling force and rolling moment in the rolling process (finishing rolling excluded) are studied by means of a virtual orthogonal experiment design. The results indicate that the influences of the module and the number of spline teeth on the maximum rolling force and rolling moment are remarkable; when the reference-circle pressure angle is small, the spline module is large, and the number of spline teeth is small, the peak rolling force may appear in the middle of the process; the peak rolling moment appears in the middle of the process and then oscillates and weakens to a stable value. These results may provide guidelines for determining the motor power and designing the hydraulic system of the special machine, and provide a basis for further research on the precise forming process of external spline cold rolling.
Congdon, Peter
2014-01-01
This book provides an accessible approach to Bayesian computing and data analysis, with an emphasis on the interpretation of real data sets. Following in the tradition of the successful first edition, this book aims to make a wide range of statistical modeling applications accessible using tested code that can be readily adapted to the reader's own applications. The second edition has been thoroughly reworked and updated to take account of advances in the field. A new set of worked examples is included. The novel aspect of the first edition was the coverage of statistical modeling using WinBU
Computationally efficient Bayesian tracking
Aughenbaugh, Jason; La Cour, Brian
2012-06-01
In this paper, we describe the progress we have achieved in developing a computationally efficient, grid-based Bayesian fusion tracking system. In our approach, the probability surface is represented by a collection of multidimensional polynomials, each computed adaptively on a grid of cells representing state space. Time evolution is performed using a hybrid particle/grid approach and knowledge of the grid structure, while sensor updates use a measurement-based sampling method with a Delaunay triangulation. We present an application of this system to the problem of tracking a submarine target using a field of active and passive sonar buoys.
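The predict/update cycle of such a grid-based Bayesian tracker can be sketched in one dimension as follows (the motion kernel and sensor likelihood are illustrative stand-ins, not the paper's adaptive polynomial representation):

```python
import numpy as np

# One predict/update cycle of a 1-D grid (histogram) Bayes filter; the
# motion kernel and sensor likelihood below are illustrative.
cells = 20
belief = np.full(cells, 1.0 / cells)            # uniform prior over cells

def predict(belief, kernel):
    """Diffuse the belief with a motion kernel (zero-padded edges)."""
    out = np.convolve(belief, kernel, mode="same")
    return out / out.sum()

def update(belief, likelihood):
    """Bayes update: multiply by the sensor likelihood, renormalize."""
    out = belief * likelihood
    return out / out.sum()

motion = np.array([0.1, 0.8, 0.1])              # small random-walk kernel
likelihood = np.exp(-0.5 * (np.arange(cells) - 12.0) ** 2)  # sensor near cell 12
belief = update(predict(belief, motion), likelihood)
```

The paper's system replaces the flat per-cell values with adaptive polynomials per cell, but the alternation of motion diffusion and multiplicative sensor updates is the same.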
Bayesian nonparametric data analysis
Müller, Peter; Jara, Alejandro; Hanson, Tim
2015-01-01
This book reviews nonparametric Bayesian methods and models that have proven useful in the context of data analysis. Rather than providing an encyclopedic review of probability models, the book’s structure follows a data analysis perspective. As such, the chapters are organized by traditional data analysis problems. In selecting specific nonparametric models, simpler and more traditional models are favored over specialized ones. The discussed methods are illustrated with a wealth of examples, including applications ranging from stylized examples to case studies from recent literature. The book also includes an extensive discussion of computational methods and details on their implementation. R code for many examples is included in on-line software pages.
Bayesian Geostatistical Design
DEFF Research Database (Denmark)
Diggle, Peter; Lophaven, Søren Nymand
2006-01-01
This paper describes the use of model-based geostatistics for choosing the set of sampling locations, collectively called the design, to be used in a geostatistical analysis. Two types of design situation are considered. These are retrospective design, which concerns the addition of sampling locations to, or deletion of locations from, an existing design, and prospective design, which consists of choosing positions for a new set of sampling locations. We propose a Bayesian design criterion which focuses on the goal of efficient spatial prediction whilst allowing for the fact that model...
Indian Academy of Sciences (India)
Benedictus Margaux
2015-05-01
Let $S$ be a scheme. Assume that we are given an action of the one-dimensional split torus $\\mathbb{G}_{m,S}$ on a smooth affine $S$-scheme $\mathfrak{X}$. We consider the limit (also called attractor) subfunctor $\mathfrak{X}_{}$ consisting of points whose orbit under the given action admits a limit at 0. We show that $\mathfrak{X}_{}$ is representable by a smooth closed subscheme of $\mathfrak{X}$. This result generalizes a theorem of Conrad et al. (Pseudo-reductive groups (2010) Cambridge Univ. Press) where the case when $\mathfrak{X}$ is a smooth affine group and $\\mathbb{G}_{m,S}$ acts by group automorphisms of $\mathfrak{X}$ is considered. It also occurs as a special case of a recent result by Drinfeld on the action of $\\mathbb{G}_{m,S}$ on algebraic spaces (Proposition 1.4.20 of Drinfeld V, On algebraic spaces with an action of $\\mathbb{G}_{m}$, preprint 2013) in case $S$ is of finite type over a field.
Inference in hybrid Bayesian networks
DEFF Research Database (Denmark)
Lanseth, Helge; Nielsen, Thomas Dyhre; Rumí, Rafael;
2009-01-01
Since the 1980s, Bayesian Networks (BNs) have become increasingly popular for building statistical models of complex systems. This is particularly true for boolean systems, where BNs often prove to be a more efficient modelling framework than traditional reliability techniques (like fault trees)... decade's research on inference in hybrid Bayesian networks. The discussions are linked to an example model for estimating human reliability.
Institute of Scientific and Technical Information of China (English)
吴宪祥; 郭宝龙; 王娟
2009-01-01
A novel algorithm based on particle swarm optimization (PSO) of cubic splines is proposed for mobile robot path planning. The path is described by a string of cubic splines, so the path planning problem is converted into a parameter optimization problem for the cubic spline curves. PSO is introduced to find the optimal path, exploiting its fast convergence and global search characteristics. Experimental results show that a collision-free path can be found quickly and effectively among obstacles by the proposed algorithm. The planned path is smooth, which is helpful for robot motion control.
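A minimal particle swarm optimizer of the kind the abstract relies on might look as follows; the sphere objective and all coefficients are generic textbook choices, not the paper's spline-parameter formulation:

```python
import random

# Minimal particle swarm optimizer: each particle is pulled toward its
# personal best and the global best, with inertia 0.7 and acceleration
# coefficients 1.5 (conventional, illustrative values).
def pso(objective, dim=2, swarm=20, iters=100, seed=1):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]                    # personal best positions
    gbest = min(pbest, key=objective)[:]           # global best position
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if objective(pos[i]) < objective(pbest[i]):
                pbest[i] = pos[i][:]
                if objective(pbest[i]) < objective(gbest):
                    gbest = pbest[i][:]
    return gbest

sphere = lambda p: sum(v * v for v in p)
best = pso(sphere)
```

In the paper's setting the decision vector would hold spline control parameters and the objective would score path length and obstacle clearance instead of the sphere function.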
Bayesian Inference on Gravitational Waves
Directory of Open Access Journals (Sweden)
Asad Ali
2015-12-01
The Bayesian approach is increasingly becoming popular among the astrophysics data analysis communities. However, the Pakistan statistics communities are unaware of this fertile interaction between the two disciplines. Bayesian methods have been in use to address astronomical problems since the very birth of Bayes probability in the eighteenth century. Today, Bayesian methods for the detection and parameter estimation of gravitational waves have solid theoretical grounds and strong promise for realistic applications. This article aims to introduce the Pakistan statistics communities to the applications of Bayesian Monte Carlo methods in the analysis of gravitational-wave data, with an overview of Bayesian signal detection and estimation methods and a demonstration with a couple of simplified examples.
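In the spirit of the simplified examples the article mentions, a toy Metropolis sampler for the posterior of a Gaussian mean (the data and proposal scale are made up; this is generic Bayesian MCMC, not a gravitational-wave model):

```python
import math
import random

# Toy Metropolis sampler: posterior of a Gaussian mean with known noise
# std and a flat prior.  Data and tuning constants are illustrative.
random.seed(0)
data = [1.2, 0.8, 1.1, 0.9, 1.0]
sigma = 0.5

def log_post(mu):
    # log posterior up to a constant: flat prior + Gaussian likelihood
    return -sum((y - mu) ** 2 for y in data) / (2.0 * sigma ** 2)

mu, chain = 0.0, []
for _ in range(20000):
    prop = mu + random.gauss(0.0, 0.5)          # symmetric random-walk proposal
    delta = log_post(prop) - log_post(mu)
    if delta >= 0 or random.random() < math.exp(delta):
        mu = prop                               # accept
    chain.append(mu)

post_mean = sum(chain[5000:]) / len(chain[5000:])   # discard burn-in
```

With a flat prior the posterior mean should settle near the sample mean of the data; realistic gravitational-wave analyses swap in a waveform-model likelihood and many more parameters.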
Côrtes, A.M.A.
2015-02-20
The recently introduced divergence-conforming B-spline discretizations allow the construction of smooth discrete velocity–pressure pairs for viscous incompressible flows that are at the same time inf-sup stable and pointwise divergence-free. When applied to the discretized Stokes equations, these spaces generate a symmetric and indefinite saddle-point linear system. Krylov subspace methods are usually the most efficient procedures to solve such systems, and one such method for symmetric systems is the minimum residual method (MINRES). However, the efficiency and robustness of Krylov subspace methods are closely tied to appropriate preconditioning strategies. For the discrete Stokes system in particular, block-diagonal strategies provide efficient preconditioners. In this article, we compare the performance of block-diagonal preconditioners for several block choices. We verify how the eigenvalue clustering promoted by the preconditioning strategies affects MINRES convergence, and we compare the number of iterations and wall-clock timings. We conclude that, among the building blocks we tested, the strategy with relaxed inner conjugate gradients preconditioned with incomplete Cholesky provided the best results.
Approximating Spline filter: New Approach for Gaussian Filtering in Surface Metrology
Directory of Open Access Journals (Sweden)
Hao Zhang
2009-10-01
This paper presents a new spline filter, named the approximating spline filter, for surface metrology. The purpose is to provide a new approach to the Gaussian filter and to evaluate the characteristics of an engineering surface more accurately and comprehensively. First, the configuration of the approximating spline filter is investigated, showing that this filter inherits all the merits of an ordinary spline filter, e.g., no phase distortion and no end distortion. Then, the selection of the approximating coefficient is discussed, which establishes an important property of this filter: its convergence to the Gaussian filter. The maximum approximation deviation between them can be controlled below 4.36%, and decreases to less than 1% when cascaded. When extended to a two-dimensional (2D) filter, the transmission deviation lies within -0.63% to +1.48%. It is proved that the approximating spline filter not only achieves the transmission characteristic of the Gaussian filter, but also alleviates the end effect on a data sequence. The whole computational procedure is illustrated and applied to a workpiece to acquire the mean line, and to a simulated surface to acquire the mean surface. These experimental results indicate that this filtering algorithm spends only 8 ms on 11200 profile points and 2.3 s on 2000 × 2000 form data.
Directory of Open Access Journals (Sweden)
Bush William S
2009-12-01
Background: Gene-centric analysis tools for genome-wide association study data are being developed both to annotate single-locus statistics and to prioritize or group single nucleotide polymorphisms (SNPs) prior to analysis. These approaches require knowledge about the relationships between SNPs on a genotyping platform and genes in the human genome. SNPs in the genome can represent broader genomic regions via linkage disequilibrium (LD), and population-specific patterns of LD can be exploited to generate a data-driven map of SNPs to genes. Methods: In this study, we implemented LD-Spline, a database routine that defines the genomic boundaries a particular SNP represents using linkage disequilibrium statistics from the International HapMap Project. We compared the LD-Spline haplotype block partitioning approach to the four-gamete rule and the Gabriel et al. approach using simulated data; in addition, we processed two commonly used genome-wide association study platforms. Results: We illustrate that LD-Spline performs comparably to the four-gamete rule and the Gabriel et al. approach; however, as a SNP-centric approach LD-Spline has the added benefit of systematically identifying a genomic boundary for each SNP, where the global block partitioning approaches may falter due to sampling variation in LD statistics. Conclusion: LD-Spline is an integrated database routine that quickly and effectively defines the genomic region marked by a SNP using linkage disequilibrium, with a SNP-centric block definition algorithm.
Gearbox Reliability Collaborative Analytic Formulation for the Evaluation of Spline Couplings
Energy Technology Data Exchange (ETDEWEB)
Guo, Y.; Keller, J.; Errichello, R.; Halse, C.
2013-12-01
Gearboxes in wind turbines have not been achieving their expected design life; however, they commonly meet and exceed the design criteria specified in current standards in the gear, bearing, and wind turbine industry as well as third-party certification criteria. The cost of gearbox replacements and rebuilds, as well as the down time associated with these failures, has elevated the cost of wind energy. The National Renewable Energy Laboratory (NREL) Gearbox Reliability Collaborative (GRC) was established by the U.S. Department of Energy in 2006; its key goal is to understand the root causes of premature gearbox failures and improve their reliability using a combined approach of dynamometer testing, field testing, and modeling. As part of the GRC program, this paper investigates the design of the spline coupling often used in modern wind turbine gearboxes to connect the planetary and helical gear stages. Aside from transmitting the driving torque, another common function of the spline coupling is to allow the sun to float between the planets. The amount the sun can float is determined by the spline design and the sun shaft flexibility subject to the operational loads. Current standards address spline coupling design requirements in varying detail. This report provides additional insight beyond these current standards to quickly evaluate spline coupling designs.
Directory of Open Access Journals (Sweden)
Neng Wan
2014-01-01
To address the poor geometric adaptability of the spline element method, a geometrically precise spline method, which uses rational Bezier patches to represent the solution domain, is proposed for the two-dimensional viscous incompressible Navier-Stokes equations. Besides fewer unknowns, higher accuracy, and better computational efficiency, it possesses such advantages of isogeometric analysis as exact representation of object boundaries and the unification of geometry and analysis modelling. Meanwhile, the selection of B-spline basis functions and the grid definition are studied, and a stable discretization format satisfying the inf-sup conditions is proposed. The degree of the spline functions approximating the velocity field is one order higher than that approximating the pressure field, and these functions are defined on a once-refined grid. The Dirichlet boundary conditions are imposed in weak form through the Nitsche variational principle, owing to the lack of interpolation properties of the B-spline basis functions. Finally, the validity of the proposed method is verified with some examples.
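B-spline basis functions like those selected above are conventionally evaluated with the Cox-de Boor recursion; a minimal sketch (the knot vector is illustrative):

```python
def bspline_basis(i, p, t, knots):
    """Cox-de Boor recursion: the i-th B-spline basis function of
    degree p evaluated at parameter t for the given knot vector."""
    if p == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = right = 0.0
    d1 = knots[i + p] - knots[i]
    if d1 > 0:
        left = (t - knots[i]) / d1 * bspline_basis(i, p - 1, t, knots)
    d2 = knots[i + p + 1] - knots[i + 1]
    if d2 > 0:
        right = (knots[i + p + 1] - t) / d2 * bspline_basis(i + 1, p - 1, t, knots)
    return left + right

# Quadratic (p = 2) basis on a clamped knot vector; knots are illustrative.
knots = [0, 0, 0, 1, 2, 3, 3, 3]
p = 2
n_basis = len(knots) - p - 1
vals = [bspline_basis(i, p, 1.5, knots) for i in range(n_basis)]
```

At any interior parameter the basis functions are non-negative and sum to one (partition of unity), the property that makes them suitable as finite element shape functions.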
Application of a Piecewise Cubic B-Spline Method Through Two Endpoints
Institute of Scientific and Technical Information of China (English)
王争争
2015-01-01
By introducing a constraint point P0 and a constant r, a piecewise cubic B-spline curve through two endpoints is constructed, and conditions for smooth joining at the connection points are derived. The method can be used to construct straight lines, triangles, quadrilaterals, and egg-shaped curves, with good results obtained after fairing. Translation, scaling, and rotation of the figures are implemented, with counterclockwise and clockwise rotations computed so as to eliminate deviation; the shape-preserving effect is good. Closed curves are generated in the clockwise direction and the positions of the trajectory points are recorded, which facilitates computing relations between closed-curve objects in the plane and obtaining Boolean operation results. The method can also be used to construct spatial figures, with a good colour-gradient effect.
Rediscovery of Good-Turing estimators via Bayesian nonparametrics.
Favaro, Stefano; Nipoti, Bernardo; Teh, Yee Whye
2016-03-01
The problem of estimating discovery probabilities originated in the context of statistical ecology, and in recent years it has become popular due to its frequent appearance in challenging applications arising in genetics, bioinformatics, linguistics, designs of experiments, machine learning, etc. A full range of statistical approaches, parametric and nonparametric as well as frequentist and Bayesian, has been proposed for estimating discovery probabilities. In this article, we investigate the relationships between the celebrated Good-Turing approach, which is a frequentist nonparametric approach developed in the 1940s, and a Bayesian nonparametric approach recently introduced in the literature. Specifically, under the assumption of a two parameter Poisson-Dirichlet prior, we show that Bayesian nonparametric estimators of discovery probabilities are asymptotically equivalent, for a large sample size, to suitably smoothed Good-Turing estimators. As a by-product of this result, we introduce and investigate a methodology for deriving exact and asymptotic credible intervals to be associated with the Bayesian nonparametric estimators of discovery probabilities. The proposed methodology is illustrated through a comprehensive simulation study and the analysis of Expressed Sequence Tags data generated by sequencing a benchmark complementary DNA library.
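The basic Good-Turing estimate discussed above takes the probability that the next observation is a previously unseen species to be N1/n, the fraction of singletons in the sample; a minimal sketch with made-up data:

```python
from collections import Counter

# Good-Turing estimate of the discovery (missing-mass) probability:
# P(next observation is a new species) is estimated by N1/n, where N1
# is the number of species observed exactly once.  Sample is illustrative.
sample = ["a", "a", "b", "c", "c", "c", "d", "e"]
counts = Counter(sample)
n = len(sample)
n1 = sum(1 for c in counts.values() if c == 1)   # singletons: b, d, e
p_unseen = n1 / n
```

The Bayesian nonparametric estimators studied in the article are, asymptotically, smoothed versions of exactly this kind of frequency-of-frequencies statistic.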
Incompressible smoothed particle hydrodynamics
International Nuclear Information System (INIS)
We present a smoothed particle hydrodynamic model for incompressible fluids. As opposed to solving a pressure Poisson equation in order to get a divergence-free velocity field, here incompressibility is achieved by requiring as a kinematic constraint that the volume of the fluid particles is constant. We use Lagrangian multipliers to enforce this restriction. These Lagrange multipliers play the role of non-thermodynamic pressures whose actual values are fixed through the kinematic restriction. We use the SHAKE methodology familiar in constrained molecular dynamics as an efficient method for finding the non-thermodynamic pressure satisfying the constraints. The model is tested for several flow configurations
B-spline active rays segmentation of microcalcifications in mammography
Energy Technology Data Exchange (ETDEWEB)
Arikidis, Nikolaos S.; Skiadopoulos, Spyros; Karahaliou, Anna; Likaki, Eleni; Panayiotakis, George; Costaridou, Lena [Department of Medical Physics, School of Medicine, University of Patras, 265 00 Patras (Greece); Department of Radiology, School of Medicine, University of Patras, 265 00 Patras (Greece); Department of Medical Physics, School of Medicine, University of Patras, 265 00 Patras (Greece)
2008-11-15
Accurate segmentation of microcalcifications in mammography is crucial for the quantification of morphologic properties by features incorporated in computer-aided diagnosis schemes. A novel segmentation method is proposed implementing active rays (polar-transformed active contours) on B-spline wavelet representation to identify microcalcification contour point estimates in a coarse-to-fine strategy at two levels of analysis. An iterative region growing method is used to delineate the final microcalcification contour curve, with pixel aggregation constrained by the microcalcification contour point estimates. A radial gradient-based method was also implemented for comparative purposes. The methods were tested on a dataset consisting of 149 mainly pleomorphic microcalcification clusters originating from 130 mammograms of the DDSM database. Segmentation accuracy of both methods was evaluated by three radiologists, based on a five-point rating scale. The radiologists' average accuracy ratings were 3.96±0.77, 3.97±0.80, and 3.83±0.89 for the proposed method, and 2.91±0.86, 2.10±0.94, and 2.56±0.76 for the radial gradient-based method, respectively, while the differences in accuracy ratings between the two segmentation methods were statistically significant (Wilcoxon signed-ranks test, p<0.05). The effect of the two segmentation methods in the classification of benign from malignant microcalcification clusters was also investigated. A least square minimum distance classifier was employed based on cluster features reflecting three morphological properties of individual microcalcifications (area, length, and relative contrast). Classification performance was evaluated by means of the area under the ROC curve (A_z). The area and length morphologic features demonstrated a statistically significant (Mann-Whitney U-test, p<0.05) higher patient-based classification performance when extracted from microcalcifications segmented by the proposed method (0
BSR: B-spline atomic R-matrix codes
Zatsarinny, Oleg
2006-02-01
BSR is a general program to calculate atomic continuum processes using the B-spline R-matrix method, including electron-atom and electron-ion scattering, and radiative processes such as bound-bound transitions, photoionization and polarizabilities. The calculations can be performed in LS-coupling or in an intermediate-coupling scheme by including terms of the Breit-Pauli Hamiltonian. New version program summary. Title of program: BSR. Catalogue identifier: ADWY. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADWY. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. Computers on which the program has been tested: Microway Beowulf cluster; Compaq Beowulf cluster; DEC Alpha workstation; DELL PC. Operating systems under which the new version has been tested: UNIX, Windows XP. Programming language used: FORTRAN 95. Memory required to execute with typical data: typically 256-512 Mwords; since all the principal dimensions are allocatable, the available memory defines the maximum complexity of the problem. No. of bits in a word: 8. No. of processors used: 1. Has the code been vectorized or parallelized?: no. No. of lines in distributed program, including test data, etc.: 69 943. No. of bytes in distributed program, including test data, etc.: 746 450. Peripherals used: scratch disk store; permanent disk store. Distribution format: tar.gz. Nature of physical problem: this program uses the R-matrix method to calculate electron-atom and electron-ion collision processes, with options to calculate radiative data, photoionization, etc. The calculations can be performed in LS-coupling or in an intermediate-coupling scheme, with options to include Breit-Pauli terms in the Hamiltonian. Method of solution: the R-matrix method is used [P.G. Burke, K.A. Berrington, Atomic and Molecular Processes: An R-Matrix Approach, IOP Publishing, Bristol, 1993; P.G. Burke, W.D. Robb, Adv. At. Mol. Phys. 11 (1975) 143; K.A. Berrington, W.B. Eissner, P.H. Norrington, Comput
Implementing Bayesian Vector Autoregressions
Directory of Open Access Journals (Sweden)
Richard M. Todd
1988-03-01
This paper discusses how the Bayesian approach can be used to construct a type of multivariate forecasting model known as a Bayesian vector autoregression (BVAR). In doing so, we mainly explain the Doan, Litterman, and Sims (1984) propositions on how to estimate a BVAR based on a certain family of prior probability distributions, indexed by a fairly small set of hyperparameters. There is also a discussion of how to specify a BVAR and set up a BVAR database. A 4-variable model is used to illustrate the BVAR approach.
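One way to see how a hyperparameter-indexed prior enters the estimation: a zero-mean Gaussian prior with a single precision hyperparameter reduces the VAR coefficient estimate to ridge regression. The sketch below is illustrative (simulated data, scalar shrinkage), not the Doan-Litterman-Sims specification:

```python
import numpy as np

# MAP estimate of VAR(1) coefficients under a zero-mean Gaussian prior
# with precision lam (i.e., ridge regression).  Model, lam, and data
# are illustrative.
rng = np.random.default_rng(0)
T, k = 500, 2
A_true = np.array([[0.5, 0.1],
                   [0.0, 0.4]])
Y = np.zeros((T, k))
for t in range(1, T):
    Y[t] = Y[t - 1] @ A_true.T + 0.1 * rng.standard_normal(k)

X, Z = Y[:-1], Y[1:]                       # lagged regressors and targets
lam = 0.1                                  # prior precision (hyperparameter)
A_map = np.linalg.solve(X.T @ X + lam * np.eye(k), X.T @ Z).T
```

Minnesota-style priors refine this by shrinking each coefficient differently (own lags toward a random walk, cross lags toward zero), still governed by a handful of hyperparameters.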
Smooth distributions are finitely generated
Drager, Lance D; Park, Efton; Richardson, Ken
2010-01-01
A subbundle of variable dimension inside the tangent bundle of a smooth manifold is called a smooth distribution if it is the pointwise span of a family of smooth vector fields. We prove that all such distributions are finitely generated, meaning that the family may be taken to be a finite collection. Further, we show that the space of smooth sections of such distributions need not be finitely generated as a module over the smooth functions. Our results are valid in greater generality, where the tangent bundle may be replaced by an arbitrary vector bundle.
Institute of Scientific and Technical Information of China (English)
孙孝前; 尤进红
2003-01-01
In this paper we consider the estimation problem of a semiparametric regression model when the data are longitudinal. An iterative weighted partial spline least squares estimator (IWPSLSE) for the parametric component is proposed, which is more efficient than the weighted partial spline least squares estimator (WPSLSE), with weights constructed by using the within-group partial spline least squares residuals, in the sense of asymptotic variance. The asymptotic normality of this IWPSLSE is established. An adaptive procedure is presented which ensures that the iterative process stops after a finite number of iterations and produces an estimator asymptotically equivalent to the best estimator that can be obtained by using the iterative procedure. These results are generalizations of those for the heteroscedastic linear model to the case of semiparametric regression.
Inverse Spline Interpolation for All-time Resistivity of Central-Loop TEM
Institute of Scientific and Technical Information of China (English)
无
2005-01-01
A convenient numerical calculation method (inverse spline interpolation) for all-time apparent resistivity in the transient electromagnetic method (TEM) is proposed in this paper. The characteristics of the early-time and late-time normalized inductive electromotive force were investigated. According to the turning point, the transient process is divided into the early phase, the turning point, and the late phase. Apparent resistivity is then obtained through inverse spline interpolation in the early and late phases, respectively. Finally, the early-time and late-time resistivities are connected at the turning point. The results show that the inverse spline method is feasible, and that it also lays a foundation for initial model construction in TEM automatic inversion.
Error Estimates Derived from the Data for Least-Squares Spline Fitting
Energy Technology Data Exchange (ETDEWEB)
Jerome Blair
2007-06-25
The use of least-squares fitting by cubic splines for the purpose of noise reduction in measured data is studied. Splines with variable mesh size are considered. The error, the difference between the input signal and its estimate, is divided into two sources: the R-error, which depends only on the noise and increases with decreasing mesh size, and the F-error, which depends only on the signal and decreases with decreasing mesh size. The estimation of both errors as a function of time is demonstrated. The R-error estimation requires knowledge of the statistics of the noise and uses well-known methods. The primary contribution of the paper is a method for estimating the F-error that requires no prior knowledge of the signal except that it has four derivatives. It is calculated from the difference between two different spline fits to the data and is illustrated with Monte Carlo simulations and with an example.
Evaluation of solid–liquid interface profile during continuous casting by a spline based formalism
Indian Academy of Sciences (India)
S K Das
2001-08-01
A numerical framework comprising a cubic-spline-based collocation method has been applied to determine the solid–liquid interface profile (solidification front) during the continuous casting process. The basis function chosen for the collocation algorithm employed in this formalism is a cubic spline interpolation function. An iterative solution methodology has been developed to track the interface profile for a copper strand of rectangular transverse section at different casting speeds. It is based on enthalpy conservation criteria at the solidification interface, and the trend is found to be in good agreement with the available information in the literature, although a point-to-point mapping of the profile is not practically realizable. The spline-based collocation algorithm is found to be a reasonably efficient tool for the solidification front tracking process, as a good spatial derivative approximation can be achieved with a simple modelling philosophy that is numerically robust and computationally cost-effective.
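The cubic-spline interpolation underlying such a collocation scheme can be sketched as follows (a generic natural cubic spline with a tridiagonal Thomas solve; the enthalpy-based interface tracking of the paper is omitted):

```python
from bisect import bisect_right

def natural_cubic_spline(xs, ys):
    """Natural cubic spline through (xs, ys); returns a callable.
    Second derivatives M solve a tridiagonal system (Thomas algorithm)."""
    n = len(xs) - 1
    h = [xs[i + 1] - xs[i] for i in range(n)]
    a = [0.0] * (n + 1)
    b = [1.0] * (n + 1)
    c = [0.0] * (n + 1)
    d = [0.0] * (n + 1)
    for i in range(1, n):
        a[i] = h[i - 1]
        b[i] = 2.0 * (h[i - 1] + h[i])
        c[i] = h[i]
        d[i] = 6.0 * ((ys[i + 1] - ys[i]) / h[i] - (ys[i] - ys[i - 1]) / h[i - 1])
    for i in range(1, n + 1):               # forward elimination
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    M = [0.0] * (n + 1)
    M[n] = d[n] / b[n]
    for i in range(n - 1, -1, -1):          # back substitution
        M[i] = (d[i] - c[i] * M[i + 1]) / b[i]

    def s(t):
        i = min(max(bisect_right(xs, t) - 1, 0), n - 1)
        A = (xs[i + 1] - t) / h[i]
        B = (t - xs[i]) / h[i]
        return (A * ys[i] + B * ys[i + 1]
                + ((A**3 - A) * M[i] + (B**3 - B) * M[i + 1]) * h[i]**2 / 6.0)
    return s

s = natural_cubic_spline([0.0, 1.0, 2.0, 3.0], [0.0, 1.0, 0.0, 1.0])
lin = natural_cubic_spline([0.0, 1.0, 2.0], [0.0, 2.0, 4.0])
```

A collocation method would use such splines as trial functions and enforce the governing equation (here, enthalpy conservation) at the collocation points.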
B-Spline Finite Elements and their Efficiency in Solving Relativistic Mean Field Equations
Pöschl, W
1997-01-01
A finite element method using B-splines is presented and compared with a conventional finite element method of Lagrangian type. The efficiency of both methods has been investigated using the example of a coupled non-linear system of Dirac eigenvalue equations and inhomogeneous Klein-Gordon equations, which describe a nuclear system in the framework of relativistic mean field theory. Although FEM has been applied with great success in nuclear RMF recently, a well-known problem is the appearance of spurious solutions in the spectra of the Dirac equation. The question of whether B-splines lead to a reduction of spurious solutions is analyzed. Numerical expense, precision, and convergence behavior are compared for both methods in view of their use in large-scale computations on higher-dimensional FEM grids. A B-spline version of the object-oriented C++ code for spherical nuclei has been used for this investigation.
Shape blending of artistic brushstroke represented by disk B-spline curves
Institute of Scientific and Technical Information of China (English)
无
2007-01-01
This paper presents an algorithm for automatically generating in-between frames of two artistic brushstrokes. The basic idea of the algorithm is to represent the two key frames of the artistic brushstrokes as disk B-spline curves and then blend their geometric intrinsic variables. Given two key frames of artistic brushstrokes, the skeleton curves can be obtained by certain skeleton-based techniques. After the disk B-spline representation of the key frames is generated, interpolation of the intrinsic variables of the initial and the target disk B-spline curves is carried out. Examples show that this method can efficiently create in-between frames of artistic brushstrokes.
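The blending step can be sketched by linearly interpolating two equally sampled disk sequences (centre and radius); note this directly interpolates disk coordinates, a simplified stand-in for blending the geometric intrinsic variables the paper uses:

```python
# Blend two key-frame brushstrokes given as equal-length sequences of
# disks (x, y, radius).  A simplified stand-in for intrinsic-variable
# blending of disk B-spline curves.
def blend_strokes(stroke_a, stroke_b, t):
    """Linear in-between of two disk sequences at parameter 0 <= t <= 1."""
    return [tuple((1.0 - t) * a + t * b for a, b in zip(da, db))
            for da, db in zip(stroke_a, stroke_b)]

key0 = [(0.0, 0.0, 1.0), (1.0, 0.5, 0.8), (2.0, 0.0, 0.5)]
key1 = [(0.0, 2.0, 0.5), (1.0, 2.5, 1.0), (2.0, 2.0, 0.9)]
mid = blend_strokes(key0, key1, 0.5)       # half-way in-between frame
```

Interpolating intrinsic variables (e.g., arc length and turning angle) instead of raw coordinates is what avoids the shrinkage artifacts of naive coordinate blending.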
Directory of Open Access Journals (Sweden)
Scott W. Keith
2014-09-01
This paper details the design, evaluation, and implementation of a framework for detecting and modeling nonlinearity between a binary outcome and a continuous predictor variable adjusted for covariates in complex samples. The framework provides familiar-looking parameterizations of output in terms of linear slope coefficients and odds ratios. Estimation methods focus on maximum likelihood optimization of piecewise linear free-knot splines formulated as B-splines. Correctly specifying the optimal number and positions of the knots improves the model, but is marked by computational intensity and numerical instability. Our inference methods utilize both parametric and nonparametric bootstrapping. Unlike other nonlinear modeling packages, this framework is designed to incorporate multistage survey sample designs common to nationally representative datasets. We illustrate the approach and evaluate its performance in specifying the correct number of knots under various conditions with an example using body mass index (BMI; kg/m2) and the complex multistage sampling design from the Third National Health and Nutrition Examination Survey to simulate binary mortality outcomes data having realistic nonlinear sample-weighted risk associations with BMI. BMI and mortality data provide a particularly apt example and area of application, since BMI is commonly recorded in large health surveys with complex designs, often categorized for modeling, and nonlinearly related to mortality. When complex sample design considerations were ignored, our method was generally similar to or more accurate than two common model selection procedures, Schwarz's Bayesian Information Criterion (BIC) and Akaike's Information Criterion (AIC), in terms of selecting the correct number of knots. Our approach provided accurate knot selections when complex sampling weights were incorporated, while AIC and BIC were not effective under these conditions.
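The information-criterion comparison can be illustrated on a simpler model-order problem, with polynomial degree standing in for the number of knots (data simulated; not the BMI application or the survey-weighted setting):

```python
import numpy as np

# Model-order selection with BIC; polynomial degree stands in for the
# number of spline knots, and the data are simulated from a quadratic.
rng = np.random.default_rng(42)
x = np.linspace(-1.0, 1.0, 200)
y = 1.0 + 2.0 * x - 1.5 * x**2 + 0.1 * rng.standard_normal(x.size)

def bic(deg):
    """n*log(MSE) + k*log(n) for a degree-`deg` least-squares fit."""
    resid = y - np.polyval(np.polyfit(x, y, deg), x)
    n, k = y.size, deg + 1
    return n * np.log(np.mean(resid**2)) + k * np.log(n)

best_deg = min(range(6), key=bic)          # true generating degree is 2
```

BIC's log(n) penalty punishes each extra parameter (here, each extra degree; in the paper, each extra knot) so that fit improvement must outweigh added complexity.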
Bayesian tomography and integrated data analysis in fusion diagnostics
Li, Dong; Dong, Y. B.; Deng, Wei; Shi, Z. B.; Fu, B. Z.; Gao, J. M.; Wang, T. B.; Zhou, Yan; Liu, Yi; Yang, Q. W.; Duan, X. R.
2016-11-01
In this article, a Bayesian tomography method with a non-stationary Gaussian process prior is introduced. The Bayesian formalism allows quantities that bear uncertainty to be expressed in probabilistic form, so that the uncertainty of a final solution can be fully resolved from the confidence interval of a posterior probability. Moreover, a consistency check of that solution can be performed by checking whether the misfits between predicted and measured data lie reasonably within an assumed data error. In particular, the accuracy of reconstructions is significantly improved by using the non-stationary Gaussian process, which can adapt to the varying smoothness of the emission distribution. The implementation of this method on a soft X-ray diagnostic on HL-2A has been used to explore relevant physics in equilibrium and MHD instability modes. This project is carried out within a large-scale inference framework, aiming at an integrated analysis of heterogeneous diagnostics.
Dynamic Bayesian diffusion estimation
Dedecius, K
2012-01-01
The rapidly increasing complexity of (mainly wireless) ad-hoc networks stresses the need of reliable distributed estimation of several variables of interest. The widely used centralized approach, in which the network nodes communicate their data with a single specialized point, suffers from high communication overheads and represents a potentially dangerous concept with a single point of failure needing special treatment. This paper's aim is to contribute to another quite recent method called diffusion estimation. By decentralizing the operating environment, the network nodes communicate just within a close neighbourhood. We adopt the Bayesian framework to modelling and estimation, which, unlike the traditional approaches, abstracts from a particular model case. This leads to a very scalable and universal method, applicable to a wide class of different models. A particularly interesting case - the Gaussian regressive model - is derived as an example.
Book review: Bayesian analysis for population ecology
Link, William A.
2011-01-01
Brian Dennis described the field of ecology as “fertile, uncolonized ground for Bayesian ideas.” He continued: “The Bayesian propagule has arrived at the shore. Ecologists need to think long and hard about the consequences of a Bayesian ecology. The Bayesian outlook is a successful competitor, but is it a weed? I think so.” (Dennis 2004)
A compressed primal-dual method for generating bivariate cubic L1 splines
Wang, Yong; Fang, Shu-Cherng; Lavery, John E.
2007-04-01
In this paper, we develop a compressed version of the primal-dual interior point method for generating bivariate cubic L1 splines. Discretization of the underlying optimization model, which is a nonsmooth convex programming problem, leads to an overdetermined linear system that can be handled by interior point methods. Taking advantage of the special matrix structure of the cubic L1 spline problem, we design a compressed primal-dual interior point algorithm. Computational experiments indicate that this compressed primal-dual method is robust and is much faster than the ordinary (uncompressed) primal-dual interior point algorithm.
Efectivity of Additive Spline for Partial Least Square Method in Regression Model Estimation
Directory of Open Access Journals (Sweden)
Ahmad Bilfarsah
2005-04-01
The Additive Spline Partial Least Squares (ASPLS) method is a generalization of the Partial Least Squares (PLS) method. The ASPLS method can accommodate nonlinearity and multicollinearity among predictor variables. In principle, the ASPLS approach is characterized by two ideas: the first is to use parametric transformations of the predictors by spline functions; the second is to make the ASPLS components mutually uncorrelated, to preserve the properties of the linear PLS components. The performance of ASPLS compared with other PLS methods is illustrated with a fishery economics application, especially tuna fish production.
Preconditioning cubic spline collocation method by FEM and FDM for elliptic equations
Energy Technology Data Exchange (ETDEWEB)
Kim, Sang Dong [KyungPook National Univ., Taegu (Korea, Republic of)
1996-12-31
In this talk we discuss finite element and finite difference techniques for the cubic spline collocation method. For this purpose, we consider the uniformly elliptic operator A defined by Au := -Δu + a_1 u_x + a_2 u_y + a_0 u in Ω (the unit square) with Dirichlet or Neumann boundary conditions, and its discretization based on Hermite cubic spline spaces and collocation at the Gauss points. Using an interpolatory basis with support on the Gauss points, one obtains the matrix A_N (h = 1/N).
Splines and the Galerkin method for solving the integral equations of scattering theory
Brannigan, M.; Eyre, D.
1983-06-01
This paper investigates the Galerkin method with cubic B-spline approximants to solve singular integral equations that arise in scattering theory. We stress the relationship between the Galerkin and collocation methods. The error bound for cubic spline approximants has a convergence rate of O(h^4), where h is the mesh spacing. We test the utility of the Galerkin method by solving both two- and three-body problems. We demonstrate, by solving the Amado-Lovelace equation for a system of three identical bosons, that our numerical treatment of the scattering problem is both efficient and accurate for small linear systems.
Longitudinal Cavity Mode Referenced Spline Tuning for Widely Tunable MG-Y Branch Semiconductor Laser
Directory of Open Access Journals (Sweden)
H. Heininger
2014-04-01
This paper presents a novel method for wavelength-continuous tuning of an MG-Y-branch laser that possesses an intrinsic self-calibration capability. The method uses the measured characteristic output power pattern caused by the internal longitudinal cavity modes of the laser device to calibrate a set of cubic spline curves. The spline curves are then used to generate the tuning currents for the two reflector sections and the phase section of the laser from an intermediate tuning control parameter. A calibration function maps the desired laser wavelength to the intermediate tuning parameter, thus enabling continuous tuning with high accuracy.
Study of Quintic Spline Interpolation and Generated Velocity Profile for High Speed Machining
Institute of Scientific and Technical Information of China (English)
ZHENG Jinxing; ZHANG Mingjun; MENG Qingxin
2006-01-01
Modern high speed machining (HSM) machine tools often operate at high speed and high feedrate with high accelerations in order to deliver rapid feed motion. This paper presents an interpolation algorithm to generate continuous quintic spline toolpaths with a constant travel increment at each step, while obtaining smoother second-order acceleration and jerk profiles. An approach for reducing the feedrate fluctuation in high speed spline interpolation is then presented. Simulations validate that the presented approach is fast, reliable and effective.
A weighted extended B-spline solver for bending and buckling of stiffened plates
Verschaeve, Joris C G
2015-01-01
The weighted extended B-spline method [Hoellig (2003)] is applied to bending and buckling problems of plates with different shapes and stiffener arrangements. The discrete equations are obtained from the energy contributions of the different components constituting the system by means of the Rayleigh-Ritz approach. The pre-buckling (plane) stress is computed by means of Airy's stress function. A boundary data extension algorithm for the weighted extended B-spline method is derived in order to handle inhomogeneous Dirichlet boundary conditions. A series of benchmark tests is performed, covering various aspects that influence the accuracy of the method.
Smooth quantum gravity: Exotic smoothness and Quantum gravity
Asselmeyer-Maluga, Torsten
2016-01-01
Over the last two decades, many unexpected relations between exotic smoothness, e.g. exotic $\\mathbb{R}^{4}$, and quantum field theory were found. Some of these relations are rooted in a relation to superstring theory and quantum gravity. Therefore one would expect that exotic smoothness is directly related to the quantization of general relativity. In this article we will support this conjecture and develop a new approach to quantum gravity called \\emph{smooth quantum gravity} by using smoot...
Current trends in Bayesian methodology with applications
Upadhyay, Satyanshu K; Dey, Dipak K; Loganathan, Appaia
2015-01-01
Collecting Bayesian material scattered throughout the literature, Current Trends in Bayesian Methodology with Applications examines the latest methodological and applied aspects of Bayesian statistics. The book covers biostatistics, econometrics, reliability and risk analysis, spatial statistics, image analysis, shape analysis, Bayesian computation, clustering, uncertainty assessment, high-energy astrophysics, neural networking, fuzzy information, objective Bayesian methodologies, empirical Bayes methods, small area estimation, and many more topics. Each chapter is self-contained and focuses on
Michel, Volker
2013-01-01
Lectures on Constructive Approximation: Fourier, Spline, and Wavelet Methods on the Real Line, the Sphere, and the Ball focuses on spherical problems as they occur in the geosciences and medical imaging. It comprises the author’s lectures on classical approximation methods based on orthogonal polynomials and selected modern tools such as splines and wavelets. Methods for approximating functions on the real line are treated first, as they provide the foundations for the methods on the sphere and the ball and are useful for the analysis of time-dependent (spherical) problems. The author then examines the transfer of these spherical methods to problems on the ball, such as the modeling of the Earth’s or the brain’s interior. Specific topics covered include: * the advantages and disadvantages of Fourier, spline, and wavelet methods * theory and numerics of orthogonal polynomials on intervals, spheres, and balls * cubic splines and splines based on reproducing kernels * multiresolution analysis using wavelet...
Irregular-Time Bayesian Networks
Ramati, Michael
2012-01-01
In many fields observations are performed irregularly along time, due to either measurement limitations or lack of a constant immanent rate. While discrete-time Markov models (as Dynamic Bayesian Networks) introduce either inefficient computation or an information loss to reasoning about such processes, continuous-time Markov models assume either a discrete state space (as Continuous-Time Bayesian Networks), or a flat continuous state space (as stochastic differential equations). To address these problems, we present a new modeling class called Irregular-Time Bayesian Networks (ITBNs), generalizing Dynamic Bayesian Networks, allowing substantially more compact representations, and increasing the expressivity of the temporal dynamics. In addition, a globally optimal solution is guaranteed when learning temporal systems, provided that they are fully observed at the same irregularly spaced time-points, and a semiparametric subclass of ITBNs is introduced to allow further adaptation to the irregular nature of t...
A Digital-Discrete Method For Smooth-Continuous Data Reconstruction
Chen, Li
2010-01-01
A systematic digital-discrete method for obtaining continuous functions with smoothness to a certain order (C^(n)) from sample data is designed. This method is based on gradually varied functions and the classical finite difference method. This new method has been applied to real groundwater data and the results have validated the method. This method is independent from existing popular methods such as the cubic spline method and the finite element method. The new digital-discrete method has considerable advantages for a large number of real data applications. This digital method also differs from other classical discrete methods that usually use triangulations. This method can potentially be used to obtain smooth functions such as polynomials through its derivatives f^(k) and the solution for partial differential equations such as harmonic and other important equations.
Neuronanatomy, neurology and Bayesian networks
Bielza Lozoya, Maria Concepcion
2014-01-01
Bayesian networks are data mining models with clear semantics and a sound theoretical foundation. In this keynote talk we will pinpoint a number of neuroscience problems that can be addressed using Bayesian networks. In neuroanatomy, we will show computer simulation models of dendritic trees and classification of neuron types, both based on morphological features. In neurology, we will present the search for genetic biomarkers in Alzheimer's disease and the prediction of health-related qualit...
Merging Digital Surface Models Implementing Bayesian Approaches
Sadeq, H.; Drummond, J.; Li, Z.
2016-06-01
In this research, DSMs from different sources have been merged. The merging is based on a probabilistic model using a Bayesian approach. The implemented data have been sourced from very high resolution satellite imagery sensors (e.g. WorldView-1 and Pleiades). It is deemed preferable to use a Bayesian approach when the data obtained from the sensors are limited and it is difficult or very costly to obtain many measurements; the problem of the lack of data can then be solved by introducing a priori estimations of data. To infer the prior data, it is assumed that the roofs of the buildings are smooth, and for that purpose local entropy has been implemented. In addition to the a priori estimations, GNSS RTK measurements have been collected in the field, which are used as check points to assess the quality of the DSMs and to validate the merging result. The model has been applied in the West End of Glasgow, containing different kinds of buildings, such as flat-roofed and hipped-roofed buildings. Both quantitative and qualitative methods have been employed to validate the merged DSM. The validation results have shown that the model was able to improve the quality of the DSMs and improve some characteristics such as the roof surfaces, which consequently led to better representations. In addition, the developed model has been compared with the well-established maximum likelihood model and showed similar quantitative statistical results and better qualitative results. Although the proposed model has been applied on DSMs that were derived from satellite imagery, it can be applied to any other sourced DSMs.
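The Bayesian merging idea can be sketched, under strong simplifying assumptions (independent per-cell Gaussian errors with known variances, no spatial prior or entropy term), as a precision-weighted average of the two height grids. The grids and variances below are toy values, not the paper's data.

```python
import numpy as np

def merge_dsm(z1, var1, z2, var2):
    """Per-cell precision-weighted fusion of two DSM height grids.

    Under independent Gaussian errors, the posterior mean of the true
    height is the inverse-variance weighted average of the observations.
    """
    w1, w2 = 1.0 / var1, 1.0 / var2
    z = (w1 * z1 + w2 * z2) / (w1 + w2)
    var = 1.0 / (w1 + w2)           # posterior variance of the merged cell
    return z, var

# toy 2x2 grids: the second DSM is noisier (larger variance), so it gets less weight
z1 = np.array([[10.0, 12.0], [11.0, 13.0]])
z2 = np.array([[10.4, 12.4], [11.4, 13.4]])
z, var = merge_dsm(z1, 1.0, z2, 4.0)
print(z)   # merged heights lie between the inputs, pulled toward the more precise DSM
```

The paper's contribution is essentially in how the per-cell prior and error variances are inferred (e.g. via local entropy on roof surfaces) rather than in this fusion formula itself.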
Application of Cubic Box Spline Wavelets in the Analysis of Signal Singularities
Directory of Open Access Journals (Sweden)
Rakowski Waldemar
2015-12-01
In the subject literature, wavelets such as the Mexican hat (the second derivative of a Gaussian) or the quadratic box spline are commonly used for the task of singularity detection. The disadvantage of the Mexican hat, however, is its unlimited support; the disadvantage of the quadratic box spline is a phase shift introduced by the wavelet, making it difficult to locate singular points. The paper deals with the construction and properties of wavelets in the form of cubic box splines, which have compact and short support and which do not introduce a phase shift. The digital filters associated with cubic box wavelets that are applied in implementing the discrete dyadic wavelet transform are defined. The filters and the algorithme à trous of the discrete dyadic wavelet transform are used in detecting signal singularities and in calculating the measures of signal singularities in the form of a Lipschitz exponent. The article presents examples illustrating the use of cubic box spline wavelets in the analysis of signal singularities.
Recovery of Graded Index Profile of Planar Waveguide by Cubic Spline Function
Institute of Scientific and Technical Information of China (English)
YANG Yong; CHEN Xian-Feng; LIAO Wei-Jun; XIA Yu-Xing
2007-01-01
A method is proposed to recover the refractive index profile of a graded waveguide from the effective indices by a cubic spline interpolation function. Numerical analysis of several typical index distributions shows that the refractive index profile can be reconstructed close to its exact profile by the presented interpolation model.
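As a minimal sketch of the interpolation step, a C2-continuous cubic spline can be fit through a handful of hypothetical effective-index samples and then evaluated on a fine depth grid. The depths and index values below are illustrative assumptions, not measured data.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# hypothetical effective-index samples at a few depths (illustrative values only)
depth = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0])      # micrometres
n_eff = np.array([1.52, 1.515, 1.508, 1.503, 1.500, 1.498])

profile = CubicSpline(depth, n_eff)     # C2-continuous cubic interpolant

# the reconstructed profile can now be evaluated anywhere in the sampled range
fine = np.linspace(0.0, 3.0, 301)
n_recovered = profile(fine)
```

By construction the spline passes exactly through the sampled effective indices, which is the property the reconstruction relies on.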
A Mixed Basis Density Functional Approach for Low Dimensional Systems with B-splines
Ren, Chung-Yuan; Chang, Yia-Chung
2014-01-01
A mixed basis approach based on density functional theory is employed for low dimensional systems. The basis functions are taken to be plane waves for the periodic direction multiplied by B-spline polynomials in the non-periodic direction. B-splines have the following advantages: (1) the associated matrix elements are sparse, (2) B-splines possess a superior treatment of derivatives, (3) B-splines are not associated with atomic positions when the geometry structure is optimized, making the geometry optimization easy to implement. With this mixed basis set we can directly calculate the total energy of the system instead of using the conventional supercell model with a slab sandwiched between vacuum regions. A generalized Lanczos-Krylov iterative method is implemented for the diagonalization of the Hamiltonian matrix. To demonstrate the present approach, we apply it to study the C(001)-(2x1) surface with the norm-conserving pseudopotential, the n-type delta-doped graphene, and graphene nanoribbon with Vanderbilt...
Calculations of Electron Structure of Endohedrally Confined Helium Atom with B-Spline Type Functions
Institute of Scientific and Technical Information of China (English)
QIAO HaoXue; SHI TingYun; LI BaiWen
2002-01-01
The B-spline basis set method is used to study the properties of helium confined endohedrally at the geometrical centre of a fullerene. The boundary conditions of the wavefunctions can be simply satisfied with this method. From our results, the phenomenon of "mirror collapse" is found in the case of confined helium. The interesting behaviors of confined helium are also discussed.
Least square fitting of low resolution gamma ray spectra with cubic B-spline basis functions
Institute of Scientific and Technical Information of China (English)
ZHU Meng-Hua; LIU Liang-Gang; QI Dong-Xu; YOU Zhong; XU Ao-Ao
2009-01-01
In this paper, the least squares fitting method with cubic B-spline basis functions is derived to reduce the influence of statistical fluctuations in gamma ray spectra. The derived procedure is simple and automatic. The results show that this method is better than the convolution method, with a sufficient reduction of statistical fluctuation.
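A least-squares cubic B-spline fit of this kind can be sketched with SciPy's `LSQUnivariateSpline` on a synthetic spectrum; the peak shape, channel count, noise model, and knot placement are assumptions for illustration, not the paper's procedure.

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

rng = np.random.default_rng(0)
channels = np.arange(512, dtype=float)
# synthetic low-resolution spectrum: a Gaussian peak on a decaying continuum
truth = 200 * np.exp(-0.5 * ((channels - 250) / 30) ** 2) + 50 * np.exp(-channels / 300)
counts = rng.poisson(truth).astype(float)   # Poisson counting fluctuations

# cubic (k=3) B-spline least-squares fit; the interior knots control the smoothness
knots = np.linspace(channels[0], channels[-1], 22)[1:-1]
smooth = LSQUnivariateSpline(channels, counts, knots, k=3)(channels)
```

With too few knots the fit would wash out real peaks and with too many it would chase the noise, which is why an automatic choice of knots matters in practice.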
Constructing iterative non-uniform B-spline curve and surface to fit data points
Institute of Scientific and Technical Information of China (English)
LIN Hongwei; WANG Guojin; DONG Chenshi
2004-01-01
In this paper, based on the idea of profit and loss modification, we present iterative non-uniform B-spline curves and surfaces to settle a key problem in computer aided geometric design and reverse engineering, that is, constructing the curve (surface) fitting (interpolating) a given ordered point set without solving a linear system. We start with an initial non-uniform B-spline curve (surface) which takes the given point set as its control point set. Then, by adjusting its control points gradually with an iterative formula, we can obtain a sequence of non-uniform B-spline curves (surfaces) with gradually higher precision. In this paper, using modern matrix theory, we strictly prove that the limit curve (surface) of the iteration interpolates the given point set. The non-uniform B-spline curves (surfaces) generated by the iteration have many advantages, such as satisfying the NURBS standard, having explicit expressions, locality, and convexity preserving, etc.
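The iteration described above, adjusting each control point by the residual at its associated parameter, can be sketched for a clamped cubic B-spline evaluated at Greville abscissae. The parameterization and knot choices here are illustrative assumptions; the paper treats the general non-uniform case and proves convergence.

```python
import numpy as np
from scipy.interpolate import BSpline

# data points to be interpolated (a sampled closed-ish curve in the plane)
t_data = np.linspace(0, 2 * np.pi, 12)
Q = np.column_stack([np.cos(t_data), np.sin(t_data)])
n, k = len(Q), 3

# clamped knot vector and Greville parameters for a cubic B-spline
interior = np.linspace(0, 1, n - k + 1)[1:-1]
knots = np.concatenate([[0.0] * (k + 1), interior, [1.0] * (k + 1)])
params = np.array([knots[i + 1:i + k + 1].mean() for i in range(n)])

P = Q.copy()                      # initial control points = data points
err0 = None
for it in range(60):
    curve = BSpline(knots, P, k)(params)
    delta = Q - curve             # "profit and loss" adjustment
    if err0 is None:
        err0 = np.abs(delta).max()
    P = P + delta                 # move each control point by its residual
err = np.abs(Q - BSpline(knots, P, k)(params)).max()
```

No linear system is solved at any step; the residuals shrink geometrically, so the limit curve interpolates the data, matching the paper's claim.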
Numerical solution of functional integral equations by using B-splines
Directory of Open Access Journals (Sweden)
Reza Firouzdor
2014-05-01
This paper describes an approximating solution, based on Lagrange interpolation and spline functions, to treat functional integral equations of Fredholm type and Volterra type. The method can be extended to functional differential and integro-differential equations. To show the efficiency of the method, we give some numerical examples.
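The paper's scheme rests on Lagrange interpolation and splines; as a minimal, deliberately simpler illustration of discretizing a Fredholm equation of the second kind, the sketch below uses a Nyström method with trapezoidal quadrature on a separable kernel whose exact solution is known.

```python
import numpy as np

# Fredholm equation of the second kind:  f(x) - ∫_0^1 K(x,t) f(t) dt = g(x),
# with K(x,t) = x*t and g chosen so the exact solution is f(x) = x
# (since ∫_0^1 x*t*t dt = x/3, we need g(x) = 2x/3).
K = lambda x, t: x * t
g = lambda x: 2.0 * x / 3.0

m = 201
x = np.linspace(0.0, 1.0, m)
w = np.full(m, 1.0 / (m - 1))        # trapezoidal quadrature weights
w[0] = w[-1] = 0.5 / (m - 1)

# Nyström discretization: (I - K_w) f = g at the quadrature nodes
A = np.eye(m) - K(x[:, None], x[None, :]) * w[None, :]
f = np.linalg.solve(A, g(x))
```

Replacing the quadrature/interpolation rule with spline or Lagrange interpolants, as the paper does, changes the accuracy but not this overall collocation structure.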
Application of Cubic Spline in the Implementation of Braces for the Case of a Child
Directory of Open Access Journals (Sweden)
Azmin Sham Rambely
2012-01-01
Problem statement: Orthodontic tooth movement is influenced by the characteristics of the applied force, including its magnitude and direction, which are normally based on ellipsoid, parabolic and U-shapes that are symmetric. However, this affects the movement of the whole set of teeth. Approach: This study compares the general form of teeth with another method, the cubic spline, to get a minimum error in representing the general form of teeth. The cubic spline method is applied to a mathematical model of a child's teeth, which is produced through resignation of orthodontic wires. It is also meant to create a clear view of the true nature of orthodontic wires. Results: Based on the mathematical characteristics of the spline and the real data of a tooth model, the cubic spline proves very useful in reflecting the shape of a curve, because the points chosen are not totally symmetric. Conclusion/Recommendation: Therefore, a symmetrical curve can be produced for the teeth's shape, which is basically asymmetric.
Dale Poirier
2008-01-01
This paper provides Bayesian rationalizations for White's heteroskedasticity-consistent (HC) covariance estimator and various modifications of it. An informed Bayesian bootstrap provides the statistical framework.
Bayesian modeling and significant features exploration in wavelet power spectra
Directory of Open Access Journals (Sweden)
D. V. Divine
2007-01-01
This study proposes and justifies a Bayesian approach to modeling wavelet coefficients and finding statistically significant features in wavelet power spectra. The approach utilizes ideas elaborated in scale-space smoothing methods and wavelet data analysis. We treat each scale of the discrete wavelet decomposition as a sequence of independent random variables and then apply Bayes' rule for constructing the posterior distribution of the smoothed wavelet coefficients. Samples drawn from the posterior are subsequently used for finding the estimate of the true wavelet spectrum at each scale. The method offers two different significance testing procedures for wavelet spectra. A traditional approach assesses the statistical significance against a red noise background. The second procedure tests for homoscedasticity of the wavelet power assessing whether the spectrum derivative significantly differs from zero at each particular point of the spectrum. Case studies with simulated data and climatic time-series prove the method to be a potentially useful tool in data analysis.
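The scale-wise posterior idea can be sketched with an orthonormal Haar transform and an empirical Gaussian prior per scale. This is a simplified stand-in for the paper's method (which smooths wavelet power spectra, not raw coefficients); the signal, noise level, and prior form below are all assumptions.

```python
import numpy as np

def haar_step(s):
    """One level of the orthonormal Haar transform (even-length input)."""
    a = (s[0::2] + s[1::2]) / np.sqrt(2)   # approximation coefficients
    d = (s[0::2] - s[1::2]) / np.sqrt(2)   # detail coefficients
    return a, d

def inv_haar_step(a, d):
    s = np.empty(2 * len(a))
    s[0::2] = (a + d) / np.sqrt(2)
    s[1::2] = (a - d) / np.sqrt(2)
    return s

def bayes_shrink(signal, sigma, levels=4):
    """Scale-wise Gaussian posterior-mean shrinkage of Haar detail coefficients."""
    a, details = signal.copy(), []
    for _ in range(levels):
        a, d = haar_step(a)
        # empirical signal variance at this scale; prior N(0, tau2), noise N(0, sigma2)
        tau2 = max(d.var() - sigma**2, 0.0)
        details.append(d * tau2 / (tau2 + sigma**2))   # posterior mean of each coefficient
    for d in reversed(details):
        a = inv_haar_step(a, d)
    return a

rng = np.random.default_rng(1)
n = 1024
t = np.linspace(0, 1, n)
clean = np.sin(4 * np.pi * t)
noisy = clean + 0.4 * rng.standard_normal(n)
denoised = bayes_shrink(noisy, sigma=0.4)
```

Because the Haar transform is orthonormal, the noise variance is the same at every scale, so a single known sigma suffices for the per-scale posterior.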
Dynamic Batch Bayesian Optimization
Azimi, Javad; Fern, Xiaoli
2011-01-01
Bayesian optimization (BO) algorithms try to optimize an unknown function that is expensive to evaluate using a minimum number of evaluations/experiments. Most of the proposed algorithms in BO are sequential, where only one experiment is selected at each iteration. This can be time inefficient when each experiment takes a long time and more than one experiment can be run concurrently. On the other hand, requesting a fixed-size batch of experiments at each iteration causes performance inefficiency in BO compared to sequential policies. In this paper, we present an algorithm that requests a batch of experiments at each time step t, where the batch size p_t is determined dynamically at each step. Our algorithm is based on the observation that the sequence of experiments selected by the sequential policy can sometimes be almost independent of each other. Our algorithm identifies such scenarios and requests those experiments at the same time without degrading the performance. We evaluate our proposed method us...
Nonparametric Bayesian Classification
Coram, M A
2002-01-01
A Bayesian approach to the classification problem is proposed in which random partitions play a central role. It is argued that the partitioning approach has the capacity to take advantage of a variety of large-scale spatial structures, if they are present in the unknown regression function $f_0$. An idealized one-dimensional problem is considered in detail. The proposed nonparametric prior uses random split points to partition the unit interval into a random number of pieces. This prior is found to provide a consistent estimate of the regression function in the $L^p$ topology, for any $1 \leq p < \infty$, and for arbitrary measurable $f_0:[0,1] \rightarrow [0,1]$. A Markov chain Monte Carlo (MCMC) implementation is outlined and analyzed. Simulation experiments are conducted to show that the proposed estimate compares favorably with a variety of conventional estimators. A striking resemblance between the posterior mean estimate and the bagged CART estimate is noted and discussed. For higher dimensions, a ...
Fatigue crack growth monitoring of idealized gearbox spline component using acoustic emission
Zhang, Lu; Ozevin, Didem; Hardman, William; Kessler, Seth; Timmons, Alan
2016-04-01
The spline component of a gearbox structure is a non-redundant element that requires early detection of flaws to prevent catastrophic failures. The acoustic emission (AE) method is a direct way of detecting active flaws; however, the method suffers from the influence of background noise and from location- and sensor-dependent pattern recognition. It is important to identify the source mechanism and adapt it to different test conditions and sensors. In this paper, the fatigue crack growth of a notched and flattened gearbox spline component is monitored using the AE method in a laboratory environment. The test sample has the major details of the spline component on a flattened geometry. The AE data is collected continuously, together with strain gauges strategically positioned on the structure. The fatigue test characteristics are a 4 Hz frequency and 0.1 as the ratio of minimum to maximum loading in the tensile regime. It is observed that a significant amount of continuous emission is released from the notch tip due to the formation of plastic deformation and slow crack growth. The frequency spectra of continuous emissions and burst emissions are compared to understand the difference between sudden crack growth and gradual crack growth. The predicted crack growth rate is compared with the AE data using the cumulative AE events at the notch tip. The source mechanism of sudden crack growth is obtained by solving the inverse mathematical problem from output signal to input signal.
Astrophysical Smooth Particle Hydrodynamics
Rosswog, Stephan
2009-01-01
In this review the basic principles of smooth particle hydrodynamics (SPH) are outlined in a pedagogical fashion. To start, a basic set of SPH equations that is used in many codes throughout the astrophysics community is derived explicitly. Much of SPH's success relies on its excellent conservation properties and therefore the numerical conservation of physical invariants receives much attention throughout this review. The self-consistent derivation of the SPH equations from the Lagrangian of an ideal fluid is the common theme of the remainder of the text. Such a variational approach is applied to derive a modern SPH version of Newtonian hydrodynamics. It accounts for gradients in the local resolution lengths which result in corrective, so-called "grad-h-terms". This strategy naturally carries over to the special-relativistic case for which we derive the corresponding grad-h set of equations. This approach is further generalized to the case of a fluid that evolves on a curved, but fixed background space-time.
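A building block common to the SPH codes discussed here is the cubic spline (M4) smoothing kernel; the sketch below evaluates its standard 1D form and uses it in a density summation over uniformly spaced particles. The particle layout and smoothing length are illustrative choices, not tied to any particular code.

```python
import numpy as np

def w_cubic_1d(r, h):
    """Standard 1D cubic spline (M4) SPH kernel with compact support 2h."""
    q = np.abs(r) / h
    sigma = 2.0 / (3.0 * h)                     # 1D normalization constant
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
        np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
    return sigma * w

# SPH density estimate: rho_i = sum_j m_j W(|x_i - x_j|, h)
x = np.linspace(0.0, 1.0, 101)          # uniformly spaced particles
m = np.full_like(x, 1.0 / 100)          # equal masses, total mass ~1 on a unit interval
h = 2.0 * (x[1] - x[0])                 # smoothing length of about two particle spacings
rho = np.array([np.sum(m * w_cubic_1d(xi - x, h)) for xi in x])
```

The kernel integrates to unity over its support, so the summation recovers a density near 1 in the interior of the particle distribution (it dips near the edges, the usual SPH boundary deficiency).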
Bayesian seismic AVO inversion
Energy Technology Data Exchange (ETDEWEB)
Buland, Arild
2002-07-01
A new linearized AVO inversion technique is developed in a Bayesian framework. The objective is to obtain posterior distributions for P-wave velocity, S-wave velocity and density. Distributions for other elastic parameters can also be assessed, for example acoustic impedance, shear impedance and P-wave to S-wave velocity ratio. The inversion algorithm is based on the convolutional model and a linearized weak contrast approximation of the Zoeppritz equation. The solution is represented by a Gaussian posterior distribution with explicit expressions for the posterior expectation and covariance, hence exact prediction intervals for the inverted parameters can be computed under the specified model. The explicit analytical form of the posterior distribution provides a computationally fast inversion method. Tests on synthetic data show that all inverted parameters were almost perfectly retrieved when the noise approached zero. With realistic noise levels, acoustic impedance was the best determined parameter, while the inversion provided practically no information about the density. The inversion algorithm has also been tested on a real 3-D dataset from the Sleipner Field. The results show good agreement with well logs but the uncertainty is high. The stochastic model includes uncertainties of both the elastic parameters, the wavelet and the seismic and well log data. The posterior distribution is explored by Markov chain Monte Carlo simulation using the Gibbs sampler algorithm. The inversion algorithm has been tested on a seismic line from the Heidrun Field with two wells located on the line. The uncertainty of the estimated wavelet is low. In the Heidrun examples the effect of including uncertainty of the wavelet and the noise level was marginal with respect to the AVO inversion results. We have developed a 3-D linearized AVO inversion method with spatially coupled model parameters where the objective is to obtain posterior distributions for P-wave velocity, S
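The explicit Gaussian posterior mentioned above can be sketched for a generic linear forward model d = Gm + e with Gaussian prior and noise. The dimensions, operator, and covariances below are arbitrary illustrations, not the linearized Zoeppritz/convolutional operator of the paper.

```python
import numpy as np

rng = np.random.default_rng(7)
n_m, n_d = 6, 10                             # model and data dimensions (illustrative)
G = rng.standard_normal((n_d, n_m))          # linearized forward operator
mu = np.zeros(n_m)                           # prior mean
Sigma = np.eye(n_m)                          # prior covariance
Sigma_e = 0.1 * np.eye(n_d)                  # data-error covariance

m_true = rng.standard_normal(n_m)
d = G @ m_true + 0.1 * rng.standard_normal(n_d)

# Gaussian posterior: explicit expressions for expectation and covariance
S = G @ Sigma @ G.T + Sigma_e
K = Sigma @ G.T @ np.linalg.inv(S)           # gain matrix
mu_post = mu + K @ (d - G @ mu)
Sigma_post = Sigma - K @ G @ Sigma
```

Because everything stays Gaussian, exact prediction intervals for the inverted parameters follow directly from the diagonal of `Sigma_post`, which is the computational advantage the abstract highlights.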
Attention in a bayesian framework
DEFF Research Database (Denmark)
Whiteley, Louise Emma; Sahani, Maneesh
2012-01-01
The behavioral phenomena of sensory attention are thought to reflect the allocation of a limited processing resource, but there is little consensus on the nature of the resource or why it should be limited. Here we argue that a fundamental bottleneck emerges naturally within Bayesian models of perception, and use this observation to frame a new computational account of the need for, and action of, attention, one that unifies diverse attentional phenomena in a way that goes beyond previous inferential, probabilistic and Bayesian models. Attentional effects are most evident in cluttered environments, and include both selective phenomena, where attention is invoked by cues that point to particular stimuli, and integrative phenomena, where attention is invoked dynamically by endogenous processing. However, most previous Bayesian accounts of attention have focused on describing relatively simple experimental...
Probability biases as Bayesian inference
Directory of Open Access Journals (Sweden)
Andre; C. R. Martins
2006-11-01
In this article, I show how several observed biases in human probabilistic reasoning can be partially explained as good heuristics for making inferences in an environment where probabilities have uncertainties associated with them. Previous results show that the weight functions and the observed violations of coalescing and stochastic dominance can be understood from a Bayesian point of view. We review those results and see that Bayesian methods should also be used as part of the explanation behind other known biases. That means that, although the observed errors are still errors, they can be understood as adaptations to the solution of real-life problems. Heuristics that allow fast evaluations and mimic Bayesian inference would be an evolutionary advantage, since they would give us an efficient way of making decisions. In that sense, it should be no surprise that humans reason with probability as has been observed.
Bayesian Methods and Universal Darwinism
Campbell, John
2010-01-01
Bayesian methods since the time of Laplace have been understood by their practitioners as closely aligned to the scientific method. Indeed a recent champion of Bayesian methods, E. T. Jaynes, titled his textbook on the subject Probability Theory: the Logic of Science. Many philosophers of science including Karl Popper and Donald Campbell have interpreted the evolution of Science as a Darwinian process consisting of a 'copy with selective retention' algorithm abstracted from Darwin's theory of Natural Selection. Arguments are presented for an isomorphism between Bayesian Methods and Darwinian processes. Universal Darwinism, as the term has been developed by Richard Dawkins, Daniel Dennett and Susan Blackmore, is the collection of scientific theories which explain the creation and evolution of their subject matter as due to the operation of Darwinian processes. These subject matters span the fields of atomic physics, chemistry, biology and the social sciences. The principle of Maximum Entropy states that system...
Smooth muscle strips for intestinal tissue engineering.
Directory of Open Access Journals (Sweden)
Christopher M Walthers
Functionally contracting smooth muscle is an essential part of the engineered intestine that has not been replicated in vitro. The purpose of this study is to produce contracting smooth muscle in culture by maintaining the native smooth muscle organization. We employed intact smooth muscle strips and compared them to dissociated smooth muscle cells in culture for 14 days. Cells isolated by enzymatic digestion quickly lost maturity markers for smooth muscle cells and contained few enteric neural and glial cells. Cultured smooth muscle strips exhibited periodic contraction and maintained neural and glial markers. Smooth muscle strips cultured for 14 days also exhibited regular fluctuation of intracellular calcium, whereas cultured smooth muscle cells did not. After implantation in omentum for 14 days on polycaprolactone scaffolds, smooth muscle strip constructs expressed high levels of smooth muscle maturity markers as well as enteric neural and glial cells. Intact smooth muscle strips may be a useful component for engineered intestinal smooth muscle.
Bayesian test and Kuhn's paradigm
Institute of Scientific and Technical Information of China (English)
Chen Xiaoping
2006-01-01
Kuhn's theory of paradigm reveals a pattern of scientific progress, in which normal science alternates with scientific revolution. But Kuhn greatly underrated the function of scientific testing in his pattern, because he focused all his attention on the hypothetico-deductive schema instead of the Bayesian schema. This paper employs the Bayesian schema to re-examine Kuhn's theory of paradigm, to uncover its logical and rational components, and to illustrate the tensional structure of logic and belief, rationality and irrationality, in the process of scientific revolution.
Perception, illusions and Bayesian inference.
Nour, Matthew M; Nour, Joseph M
2015-01-01
Descriptive psychopathology makes a distinction between veridical perception and illusory perception. In both cases a perception is tied to a sensory stimulus, but in illusions the perception is of a false object. This article re-examines this distinction in light of new work in theoretical and computational neurobiology, which views all perception as a form of Bayesian statistical inference that combines sensory signals with prior expectations. Bayesian perceptual inference can solve the 'inverse optics' problem of veridical perception and provides a biologically plausible account of a number of illusory phenomena, suggesting that veridical and illusory perceptions are generated by precisely the same inferential mechanisms.
3D Bayesian contextual classifiers
DEFF Research Database (Denmark)
Larsen, Rasmus
2000-01-01
We extend a series of multivariate Bayesian 2-D contextual classifiers to 3-D by specifying a simultaneous Gaussian distribution for the feature vectors as well as a prior distribution of the class variables of a pixel and its 6 nearest 3-D neighbours.
Bayesian methods for proteomic biomarker development
Directory of Open Access Journals (Sweden)
Belinda Hernández
2015-12-01
In this review we provide an introduction to Bayesian inference and demonstrate some of the advantages of using a Bayesian framework. We summarize how Bayesian methods have been used previously in proteomics and other areas of bioinformatics. Finally, we describe some popular and emerging Bayesian models from the statistical literature and provide a worked tutorial including code snippets to show how these methods may be applied for the evaluation of proteomic biomarkers.
Smooth analysis in Banach spaces
Hájek, Petr
2014-01-01
This book is about the subject of higher smoothness in separable real Banach spaces. It brings together several angles of view on polynomials, in both the finite and infinite setting. A rather thorough and systematic view of the more recent results, and of the authors' own work, is also given. The book revolves around two main broad questions: What is the best smoothness of a given Banach space, and what are its structural consequences? How large is the supply of smooth functions, in the sense of approximating continuous functions in the uniform topology, i.e., how does the Stone-Weierstrass theorem generalize into in...
Virtual Vector Machine for Bayesian Online Classification
Minka, Thomas P.; Qi, Yuan
2012-01-01
In a typical online learning scenario, a learner is required to process a large data stream using a small memory buffer. Such a requirement is usually in conflict with a learner's primary pursuit of prediction accuracy. To address this dilemma, we introduce a novel Bayesian online classification algorithm, called the Virtual Vector Machine. The virtual vector machine allows one to smoothly trade off prediction accuracy with memory size. The virtual vector machine summarizes the information contained in the preceding data stream by a Gaussian distribution over the classification weights plus a constant number of virtual data points. The virtual data points are designed to add extra non-Gaussian information about the classification weights. To maintain the constant number of virtual points, the virtual vector machine adds the current real data point into the virtual point set, merges the two most similar virtual points into a new virtual point, or deletes a virtual point that is far from the decision boundary. The info...
Bayesian exploration of recent Chilean earthquakes
Duputel, Zacharie; Jiang, Junle; Jolivet, Romain; Simons, Mark; Rivera, Luis; Ampuero, Jean-Paul; Liang, Cunren; Agram, Piyush; Owen, Susan; Ortega, Francisco; Minson, Sarah
2016-04-01
The South-American subduction zone is an exceptional natural laboratory for investigating the behavior of large faults over the earthquake cycle. It is also a playground to develop novel modeling techniques combining different datasets. Coastal Chile was impacted by two major earthquakes in the last two years: the 2015 M 8.3 Illapel earthquake in central Chile and the 2014 M 8.1 Iquique earthquake that ruptured the central portion of the 1877 seismic gap in northern Chile. To gain better understanding of the distribution of co-seismic slip for those two earthquakes, we derive joint kinematic finite fault models using a combination of static GPS offsets, radar interferograms, tsunami measurements, high-rate GPS waveforms and strong motion data. Our modeling approach follows a Bayesian formulation devoid of a priori smoothing thereby allowing us to maximize spatial resolution of the inferred family of models. The adopted approach also attempts to account for major sources of uncertainty in the Green's functions. The results reveal different rupture behaviors for the 2014 Iquique and 2015 Illapel earthquakes. The 2014 Iquique earthquake involved a sharp slip zone and did not rupture to the trench. The 2015 Illapel earthquake nucleated close to the coast and propagated toward the trench with significant slip apparently reaching the trench or at least very close to the trench. At the inherent resolution of our models, we also present the relationship of co-seismic models to the spatial distribution of foreshocks, aftershocks and fault coupling models.
Bayesian variable order Markov models: Towards Bayesian predictive state representations
C. Dimitrakakis
2009-01-01
We present a Bayesian variable order Markov model that shares many similarities with predictive state representations. The resulting models are compact and much easier to specify and learn than classical predictive state representations. Moreover, we show that they significantly outperform a more st
Bayesian networks and food security - An introduction
Stein, A.
2004-01-01
This paper gives an introduction to Bayesian networks. Networks are defined and put into a Bayesian context. Directed acyclical graphs play a crucial role here. Two simple examples from food security are addressed. Possible uses of Bayesian networks for implementation and further use in decision sup
Bayesian Model Averaging for Propensity Score Analysis
Kaplan, David; Chen, Jianshen
2013-01-01
The purpose of this study is to explore Bayesian model averaging in the propensity score context. Previous research on Bayesian propensity score analysis does not take into account model uncertainty. In this regard, an internally consistent Bayesian framework for model building and estimation must also account for model uncertainty. The…
A Bayesian Nonparametric Approach to Test Equating
Karabatsos, George; Walker, Stephen G.
2009-01-01
A Bayesian nonparametric model is introduced for score equating. It is applicable to all major equating designs, and has advantages over previous equating models. Unlike the previous models, the Bayesian model accounts for positive dependence between distributions of scores from two tests. The Bayesian model and the previous equating models are…
Institute of Scientific and Technical Information of China (English)
Elisabetta Santi; M.G. Cimoroni
2002-01-01
In this paper, product formulas based on projector-splines for the numerical evaluation of 2-D CPV integrals are proposed. Convergence results are proved, numerical examples and comparisons are given.
SMOOTHING BY CONVEX QUADRATIC PROGRAMMING
Institute of Scientific and Technical Information of China (English)
Bing-sheng He; Yu-mei Wang
2005-01-01
In this paper, we study the relaxed smoothing problems with general closed convex constraints. It is pointed out that such problems can be converted to a convex quadratic minimization problem for which there are good programs in software libraries.
Jarosch, H. S.
1982-01-01
A method based on the use of constrained spline fits is used to overcome the difficulties arising when body-wave data in the form of T-delta are reduced to the tau-p form in the presence of cusps. In comparison with unconstrained spline fits, the method proposed here tends to produce much smoother models which lie approximately in the middle of the bounds produced by the extremal method. The method is noniterative and, therefore, computationally efficient. The method is applied to the lunar seismic data, where at least one triplication is presumed to occur in the P-wave travel-time curve. It is shown, however, that because of an insufficient number of data points for events close to the antipode of the center of the lunar network, the present analysis is not accurate enough to resolve the problem of a possible lunar core.
Energy Technology Data Exchange (ETDEWEB)
Gu, Renliang, E-mail: Venliang@iastate.edu, E-mail: ald@iastate.edu; Dogandžić, Aleksandar, E-mail: Venliang@iastate.edu, E-mail: ald@iastate.edu [Iowa State University, Center for Nondestructive Evaluation, 1915 Scholl Road, Ames, IA 50011 (United States)
2015-03-31
We develop a sparse image reconstruction method for polychromatic computed tomography (CT) measurements under the blind scenario where the material of the inspected object and the incident energy spectrum are unknown. To obtain a parsimonious measurement model parameterization, we first rewrite the measurement equation using our mass-attenuation parameterization, which has the Laplace integral form. The unknown mass-attenuation spectrum is expanded into basis functions using a B-spline basis of order one. We develop a block coordinate-descent algorithm for constrained minimization of a penalized negative log-likelihood function, where constraints and penalty terms ensure nonnegativity of the spline coefficients and sparsity of the density map image in the wavelet domain. This algorithm alternates between a Nesterov’s proximal-gradient step for estimating the density map image and an active-set step for estimating the incident spectrum parameters. Numerical simulations demonstrate the performance of the proposed scheme.
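The abstract alternates a proximal-gradient step with an active-set step under nonnegativity constraints. The sketch below illustrates only the projected (proximal) gradient ingredient on a toy nonnegative least-squares problem; it is not the authors' block coordinate-descent algorithm, and the matrix and data are synthetic:

```python
import numpy as np

def nonneg_ls(A, y, n_iter=500):
    """Projected-gradient (proximal) iteration for min ||A x - y||^2, x >= 0."""
    L = np.linalg.norm(A, 2) ** 2                # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        # gradient step followed by projection onto the nonnegative orthant
        x = np.maximum(0.0, x - (A.T @ (A @ x - y)) / L)
    return x

rng = np.random.default_rng(4)
A = rng.normal(size=(40, 8))                     # hypothetical design matrix
x_true = np.abs(rng.normal(size=8))              # nonnegative "spline coefficients"
x_hat = nonneg_ls(A, A @ x_true)                 # noiseless recovery
```

With noiseless, overdetermined data the iteration converges to the true nonnegative coefficients; the projection step is the proximal operator of the nonnegativity constraint.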
Ship hull plate processing surface fairing with constraints based on B-spline
Institute of Scientific and Technical Information of China (English)
[Anonymous]
2005-01-01
The problem of ship hull plate processing surface fairing with constraints based on B-spline is solved in this paper. The algorithm for B-spline curve fairing with constraints is one of the most common methods in plane curve fairing. The algorithm can be applied to global and local curve fairing. It can constrain the perturbation range of the control points and the shape variation of the curve, and get a better fairing result in plane curves. In this paper, a new fairing algorithm with constraints for curves and surfaces in space is presented. Then this method is applied to the experiments of ship hull plate processing surface. Finally numerical results are obtained to show the efficiency of this method.
A new wavelet-based thin plate element using B-spline wavelet on the interval
Jiawei, Xiang; Xuefeng, Chen; Zhengjia, He; Yinghong, Zhang
2008-01-01
By combining wavelet theory with the variational principle of the finite element method, a class of wavelet-based plate elements is constructed. In the construction of the wavelet-based plate element, the element displacement field, represented by the coefficients of wavelet expansions in wavelet space, is transformed into physical degrees of freedom in finite element space via the corresponding two-dimensional C1 type transformation matrix. Then, based on the associated generalized function of potential energy of thin plate bending and vibration problems, the scaling functions of the B-spline wavelet on the interval (BSWI) at different scales are employed directly to form the multi-scale finite element approximation basis, so as to construct the BSWI plate element via the variational principle. The BSWI plate element combines the accuracy of B-spline function approximation with the advantages of various wavelet-based elements for structural analysis. Some static and dynamic numerical examples are studied to demonstrate the performance of the present element.
The Design and Characterization of Wideband Spline-profiled Feedhorns for Advanced ACTPol
Simon, Sara M.; Austermann, Jason; Beall, James A.; Choi, Steve K.; Coughlin, Kevin P.; Duff, Shannon M.; Gallardo, Patricio A.; Henderson, Shawn W.; Hills, Felicity B.; Ho, Shuay-Pwu Patty; Hubmayr, Johannes; Josaitis, Alec; Koopman, Brian J.; McMahon, Jeff J.; Nati, Federico; Newburgh, Laura; Niemack, Michael D.; Salatino, Maria; Schillaci, Alessandro; Wollack, Edward J.
2016-01-01
Advanced ACTPol (AdvACT) is an upgraded camera for the Atacama Cosmology Telescope (ACT) that will measure the cosmic microwave background in temperature and polarization over a wide range of angular scales and five frequency bands from 28-230 GHz. AdvACT will employ four arrays of feedhorn-coupled, polarization-sensitive multichroic detectors. To accommodate the higher pixel packing densities necessary to achieve AdvACT's sensitivity goals, we have developed and optimized wideband spline-profiled feedhorns for the AdvACT multichroic arrays that maximize coupling efficiency while carefully controlling polarization systematics. We present the design, fabrication, and testing of wideband spline-profiled feedhorns for the multichroic arrays of AdvACT.
Directory of Open Access Journals (Sweden)
Wang Ping
2016-01-01
A new cold rotary forging technology for the internal helical involute spline was presented based on an analysis of the structure of an automotive starter guide cylinder. A 3D rigid-plastic finite element model was employed. Billet deformation, billet equivalent stress, and forming load were investigated in the DEFORM 3D software environment; the forming process parameters were then applied in forming trials, and the simulation results conform with the experimental results. The validity of the 3D finite element simulation model has been verified. The research results show that the proposed cold rotary forging technology can efficiently handle the forming and manufacturing problems of the automotive starter guide cylinder with an internal helical involute spline.
River Flow Lane Detection and Kalman Filtering-Based B-Spline Lane Tracking
Directory of Open Access Journals (Sweden)
King Hann Lim
2012-01-01
A novel lane detection technique using adaptive line segments and a river flow method is proposed in this paper to estimate driving lane edges. A Kalman filtering-based B-spline tracking model is also presented to quickly predict lane boundaries in consecutive frames. First, the sky region and road shadows are removed by applying a regional dividing method and road region analysis, respectively. Next, the change of lane orientation is monitored in order to define an adaptive line segment separating the region into near and far fields. In the near field, a 1D Hough transform is used to approximate a pair of lane boundaries. Subsequently, the river flow method is applied to obtain lane curvature in the far field. Once the lane boundaries are detected, a B-spline mathematical model is updated using a Kalman filter to continuously track the road edges. Simulation results show that the proposed lane detection and tracking method has good performance with low complexity.
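The tracking step above — updating a B-spline lane model with a Kalman filter across frames — can be sketched minimally. The state here is a vector of control-point coordinates with identity (random-walk) dynamics and direct noisy measurements; the control points and noise levels are hypothetical, not taken from the paper:

```python
import numpy as np

def kalman_step(x, P, z, Q, R):
    """One predict+update cycle for a random-walk control-point model (F = H = I)."""
    x_pred, P_pred = x, P + Q                   # predict
    K = P_pred @ np.linalg.inv(P_pred + R)      # Kalman gain
    x_new = x_pred + K @ (z - x_pred)           # update with measured control points
    P_new = (np.eye(len(x)) - K) @ P_pred
    return x_new, P_new

rng = np.random.default_rng(1)
ctrl_true = np.array([0.0, 1.0, 2.5, 4.0])      # hypothetical lane control points
x, P = np.zeros(4), np.eye(4)
Q, R = 1e-4 * np.eye(4), 0.05 * np.eye(4)
for _ in range(200):                            # noisy per-frame lane detections
    z = ctrl_true + rng.normal(scale=0.2, size=4)
    x, P = kalman_step(x, P, z, Q, R)
```

After a few hundred frames the filtered control points settle near the true lane geometry, which is what lets the tracker predict boundaries in consecutive frames cheaply.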
A Numerical Method Based on Daubechies Wavelet Basis and B-Spline Patches for Elasticity Problems
Directory of Open Access Journals (Sweden)
Yanan Liu
2016-01-01
The Daubechies (DB) wavelets are used for solving 2D plane elasticity problems. In order to improve the accuracy and stability of the computation, the DB wavelet scaling functions on [0, +∞), comprising boundary scaling functions, are chosen as basis functions for approximation. The B-spline patches used in the isogeometric analysis method are constructed to describe the problem domain. Through the isoparametric analysis approach, the function approximation and the relevant computation based on DB wavelet functions are implemented on B-spline patches. This work attempts to break the limitation that problems can only be discretized on uniform grids in the traditional wavelet numerical method. Numerical examples of 2D elasticity problems illustrate that this kind of analysis method is effective and stable.
Intensity Conserving Spline Interpolation (ICSI): A New Tool for Spectroscopic Analysis
Klimchuk, James A; Tripathi, Durgesh
2015-01-01
The detailed shapes of spectral line profiles provide valuable information about the emitting plasma, especially when the plasma contains an unresolved mixture of velocities, temperatures, and densities. As a result of finite spectral resolution, the intensity measured by a spectrometer is the average intensity across a wavelength bin of non-zero size. It is assigned to the wavelength position at the center of the bin. However, the actual intensity at that discrete position will be different if the profile is curved, as it invariably is. Standard fitting routines (spline, Gaussian, etc.) do not account for this difference, and this can result in significant errors when making sensitive measurements. Detection of asymmetries in solar coronal emission lines is one example. Removal of line blends is another. We have developed an iterative procedure called Intensity Conserving Spline Interpolation (ICSI) that corrects for this effect. As its name implies, it conserves the observed intensity within each wavelength...
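The intensity-conserving idea can be sketched as a simple fixed-point iteration: adjust the node values until the interpolant's average over each wavelength bin reproduces the observed bin-averaged intensity. Linear interpolation stands in for the spline here, and the Gaussian line profile is synthetic; this is the conservation principle, not the paper's ICSI procedure in detail:

```python
import numpy as np

def icsi_like(centers, I_obs, width, n_iter=60, n_sub=51):
    """Adjust node values until the interpolant's bin averages match I_obs."""
    y = I_obs.copy()
    offs = np.linspace(-width / 2, width / 2, n_sub)
    for _ in range(n_iter):
        # bin average of the current interpolant at every bin center
        avg = np.array([np.interp(c + offs, centers, y).mean() for c in centers])
        y += I_obs - avg                     # push bin averages toward observations
    return y

centers = np.linspace(0.0, 1.0, 21)          # wavelength bin centers
width = centers[1] - centers[0]
offs = np.linspace(-width / 2, width / 2, 51)
# synthetic "observed" intensities: bin averages of a curved line profile
I_obs = np.array([np.exp(-((c + offs) - 0.5) ** 2 / 0.02).mean() for c in centers])
y = icsi_like(centers, I_obs, width)
# the corrected node values now conserve the observed intensity in every bin
check = np.array([np.interp(c + offs, centers, y).mean() for c in centers])
```

At convergence the interpolant through the corrected nodes reproduces every observed bin intensity, which is exactly the property a naive spline through the bin centers lacks when the profile is curved.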
A B-Spline-Based Colocation Method to Approximate the Solutions to the Equations of Fluid Dynamics
Energy Technology Data Exchange (ETDEWEB)
Johnson, Richard Wayne; Landon, Mark Dee
1999-07-01
The potential of a B-spline collocation method for numerically solving the equations of fluid dynamics is discussed. It is known that B-splines can resolve curves with drastically fewer data than can their standard shape function counterparts. This feature promises to allow much faster numerical simulations of fluid flow than standard finite volume/finite element methods without sacrificing accuracy. An example channel flow problem is solved using the method.
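The collocation idea can be shown in one dimension: expand the solution in a cubic B-spline basis and enforce the differential equation at collocation points (Greville abscissae here, as in isogeometric-style collocation). This is a minimal sketch of the technique on a model two-point boundary-value problem, not the channel-flow solver described above:

```python
import numpy as np
from scipy.interpolate import BSpline

# Solve u'' = -pi^2 sin(pi x) on [0, 1] with u(0) = u(1) = 0 (exact: sin(pi x)).
k = 3                                               # cubic B-splines
interior = np.linspace(0.0, 1.0, 18)[1:-1]          # 16 interior knots
t = np.r_[[0.0] * (k + 1), interior, [1.0] * (k + 1)]
n = len(t) - k - 1                                  # number of basis functions
xg = np.array([t[i + 1:i + k + 1].mean() for i in range(n)])  # Greville points
A, rhs = np.zeros((n, n)), np.zeros(n)
for j in range(n):
    c = np.zeros(n); c[j] = 1.0
    bj = BSpline(t, c, k)                           # j-th basis function
    A[0, j], A[-1, j] = bj(0.0), bj(1.0)            # Dirichlet boundary rows
    A[1:-1, j] = bj.derivative(2)(xg[1:-1])         # collocate u'' at interior points
rhs[1:-1] = -np.pi ** 2 * np.sin(np.pi * xg[1:-1])
u = BSpline(t, np.linalg.solve(A, rhs), k)          # spline solution
xs = np.linspace(0.0, 1.0, 101)
err = np.max(np.abs(u(xs) - np.sin(np.pi * xs)))
```

With only 20 coefficients the spline resolves the solution closely, illustrating the abstract's point that B-splines need far fewer unknowns than standard shape functions.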
Shazalina Mat Zin; Ahmad Abd. Majid; Ahmad Izani Md. Ismail; Muhammad Abbas
2014-01-01
The generalized nonlinear Klein-Gordon equation is important in quantum mechanics and related fields. In this paper, a semi-implicit approach based on a hybrid cubic B-spline is presented for the approximate solution of the nonlinear Klein-Gordon equation. The usual finite difference approach is used to discretize the time derivative, while the hybrid cubic B-spline is applied as an interpolating function in the space dimension. The results of applications to several test problems indicate good agre...
Mahapatra, Pravas R; Makkapati, Vishnu V
2005-01-01
Enhancements are carried out to a contour-based method for extreme compression of weather radar reflectivity data for efficient storage and transmission over low-bandwidth data links. In particular, a new method of systematically adjusting the control points to obtain better reconstruction of the contours using B-Spline interpolation is presented. Further, bit-level manipulations to achieve higher compression ratios are investigated. The efficacy of these enhancements is quantitatively eva...
Explicit Gaussian quadrature rules for C^1 cubic splines with symmetrically stretched knot sequence
Ait-Haddou, Rachid
2015-06-19
We provide explicit expressions for quadrature rules on the space of C^1 cubic splines with non-uniform, symmetrically stretched knot sequences. The quadrature nodes and weights are derived via an explicit recursion that avoids the intervention of any numerical solver, and the rule is optimal, that is, it requires a minimal number of nodes. Numerical experiments validating the theoretical results and the error estimates of the quadrature rules are also presented.
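The paper's rules are explicit and node-minimal; the sketch below only illustrates the underlying notion of exactness on a C^1 cubic spline space. It builds a (non-minimal) interpolatory rule by solving for weights that integrate every basis function exactly, using an arbitrary knot sequence with double interior knots (which is what enforces C^1 continuity):

```python
import numpy as np
from scipy.interpolate import BSpline

# Space of C^1 cubics on [0, 1]: interior knots with multiplicity 2 (hypothetical choice).
t = np.r_[[0.0] * 4, np.repeat(np.linspace(0.2, 0.8, 4), 2), [1.0] * 4]
k, n = 3, len(t) - 4                          # degree and dimension of the space
basis = [BSpline(t, np.eye(n)[j], k) for j in range(n)]
nodes = np.linspace(0.0, 1.0, n)              # one node per basis function (not minimal)
M = np.array([[b(x) for x in nodes] for b in basis])
m = np.array([b.integrate(0.0, 1.0) for b in basis])   # exact basis moments
w = np.linalg.solve(M, m)                     # interpolatory weights
# exact (by construction) on any spline in the space:
rng = np.random.default_rng(2)
s = BSpline(t, rng.normal(size=n), k)
err = abs(w @ s(nodes) - s.integrate(0.0, 1.0))
```

Because the weights integrate the basis exactly, linearity gives exactness on the whole spline space; the cited paper's contribution is achieving this with the minimal number of nodes via an explicit recursion.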
Regression spline bivariate probit models: a practical approach to testing for exogeneity
Marra, G.; Radice, Rosalba; Filippou, P
2015-01-01
Bivariate probit models can deal with a problem usually known as endogeneity. This issue is likely to arise in observational studies when confounders are unobserved. We are concerned with testing the hypothesis of exogeneity (or absence of endogeneity) when using regression spline recursive and sample selection bivariate probit models. Likelihood ratio and gradient tests are discussed in this context and their empirical properties investigated and compared with those of the Lagrange multiplie...
Analysis of myocardial motion using generalized spline models and tagged magnetic resonance images
Chen, Fang; Rose, Stephen E.; Wilson, Stephen J.; Veidt, Martin; Bennett, Cameron J.; Doddrell, David M.
2000-06-01
Heart wall motion abnormalities are the very sensitive indicators of common heart diseases, such as myocardial infarction and ischemia. Regional strain analysis is especially important in diagnosing local abnormalities and mechanical changes in the myocardium. In this work, we present a complete method for the analysis of cardiac motion and the evaluation of regional strain in the left ventricular wall. The method is based on the generalized spline models and tagged magnetic resonance images (MRI) of the left ventricle. The whole method combines dynamical tracking of tag deformation, simulating cardiac movement and accurately computing the regional strain distribution. More specifically, the analysis of cardiac motion is performed in three stages. Firstly, material points within the myocardium are tracked over time using a semi-automated snake-based tag tracking algorithm developed for this purpose. This procedure is repeated in three orthogonal axes so as to generate a set of one-dimensional sample measurements of the displacement field. The 3D-displacement field is then reconstructed from this sample set by using a generalized vector spline model. The spline reconstruction of the displacement field is explicitly expressed as a linear combination of a spline kernel function associated with each sample point and a polynomial term. Finally, the strain tensor (linear or nonlinear) with three direct components and three shear components is calculated by applying a differential operator directly to the displacement function. The proposed method is computationally effective and easy to perform on tagged MR images. The preliminary study has shown potential advantages of using this method for the analysis of myocardial motion and the quantification of regional strain.
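The reconstruction step described above — a displacement field written as a spline kernel term plus a polynomial term fitted to scattered sample measurements — can be sketched with SciPy's thin-plate-spline interpolator, whose `degree=1` option supplies exactly such a polynomial term. The displacement field and sample points below are synthetic stand-ins for tag-tracking data:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(3)
pts = rng.uniform(0.0, 1.0, size=(400, 2))          # tracked material points
disp = np.sin(np.pi * pts[:, 0]) * pts[:, 1]        # synthetic displacement component
# thin-plate spline = radial kernel combination + linear polynomial term
model = RBFInterpolator(pts, disp, kernel='thin_plate_spline', degree=1)
query = rng.uniform(0.1, 0.9, size=(50, 2))         # evaluate off the sample points
err = np.max(np.abs(model(query) - np.sin(np.pi * query[:, 0]) * query[:, 1]))
```

The fitted model gives a smooth displacement function everywhere in the myocardial region; strain components would then follow by applying a differential operator to this reconstruction, as the abstract describes.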
An efficient active B-spline/nurbs model for virtual sculpting
Moore, Patricia
2013-01-01
This thesis presents an Efficient Active B-Spline/Nurbs Model for Virtual Sculpting. In spite of the on-going rapid development of computer graphics and computer-aided design tools, 3D graphics designers still rely on non-intuitive modelling procedures for the creation and manipulation of freeform virtual content. The 'Virtual Sculpting' paradigm is a well-established mechanism for shielding designers from the complex mathematics that underpin freeform shape design. The premise is to emulate ...
The Numerical Approach to the Fisher's Equation via Trigonometric Cubic B-spline Collocation Method
Ersoy, Ozlem; Dag, Idris
2016-01-01
In this study, we set up a numerical technique to obtain approximate solutions of Fisher's equation, which is one of the most important model equations in population biology. We integrate the equation fully by using a combination of the trigonometric cubic B-spline functions for the space variable and Crank-Nicolson for the time integration. Numerical results have been presented to show the accuracy of the current algorithm. We have seen that the proposed technique is a good alternative to some existing...
A Spline-Based Lack-Of-Fit Test for Independent Variable Effect in Poisson Regression
Li, Chin-Shang; Tu, Wanzhu
2007-01-01
In regression analysis of count data, independent variables are often modeled by their linear effects under the assumption of log-linearity. In reality, the validity of such an assumption is rarely tested, and its use is at times unjustifiable. A lack-of-fit test is proposed for the adequacy of a postulated functional form of an independent variable within the framework of semiparametric Poisson regression models based on penalized splines. It offers added flexibility in accommodating the pot...
Quadratic spline collocation and parareal deferred correction method for parabolic PDEs
Liu, Jun; Wang, Yan; Li, Rongjian
2016-06-01
In this paper, we consider a linear parabolic PDE, use optimal quadratic spline collocation (QSC) methods for the space discretization, and apply the parareal technique on the time domain. Meanwhile, a deferred correction technique is used to improve the accuracy during the iterations. The error estimation is presented and the stability is analyzed. Numerical experiments, carried out on a parallel computer with 40 CPUs, are attached to exhibit the effectiveness of the hybrid algorithm.
Bayesian Classification of Image Structures
DEFF Research Database (Denmark)
Goswami, Dibyendu; Kalkan, Sinan; Krüger, Norbert
2009-01-01
In this paper, we describe work on Bayesian classifiers for distinguishing between homogeneous structures, textures, edges and junctions. We build semi-local classifiers from hand-labeled images to distinguish between these four different kinds of structures based on the concept of intrinsic dimensi...
Bayesian Agglomerative Clustering with Coalescents
Teh, Yee Whye; Daumé III, Hal; Roy, Daniel
2009-01-01
We introduce a new Bayesian model for hierarchical clustering based on a prior over trees called Kingman's coalescent. We develop novel greedy and sequential Monte Carlo inferences which operate in a bottom-up agglomerative fashion. We show experimentally the superiority of our algorithms over others, and demonstrate our approach in document clustering and phylolinguistics.
Bayesian NL interpretation and learning
H. Zeevat
2011-01-01
Everyday natural language communication is normally successful, even though contemporary computational linguistics has shown that NL is characterised by a very high degree of ambiguity and the results of stochastic methods are not good enough to explain the high success rate. Bayesian natural language
Differentiated Bayesian Conjoint Choice Designs
Z. Sándor (Zsolt); M. Wedel (Michel)
2003-01-01
Previous conjoint choice design construction procedures have produced a single design that is administered to all subjects. This paper proposes to construct a limited set of different designs. The designs are constructed in a Bayesian fashion, taking into account prior uncertainty about
Bayesian inference for Hawkes processes
DEFF Research Database (Denmark)
Rasmussen, Jakob Gulddahl
The Hawkes process is a practically and theoretically important class of point processes, but parameter-estimation for such a process can pose various problems. In this paper we explore and compare two approaches to Bayesian inference. The first approach is based on the so-called conditional...
Bayesian stable isotope mixing models
In this paper we review recent advances in Stable Isotope Mixing Models (SIMMs) and place them into an over-arching Bayesian statistical framework which allows for several useful extensions. SIMMs are used to quantify the proportional contributions of various sources to a mixtur...
Bayesian inference for Hawkes processes
DEFF Research Database (Denmark)
Rasmussen, Jakob Gulddahl
2013-01-01
The Hawkes process is a practically and theoretically important class of point processes, but parameter-estimation for such a process can pose various problems. In this paper we explore and compare two approaches to Bayesian inference. The first approach is based on the so-called conditional...
3-D contextual Bayesian classifiers
DEFF Research Database (Denmark)
Larsen, Rasmus
In this paper we will consider extensions of a series of Bayesian 2-D contextual classification procedures proposed by Owen (1984), Hjort & Mohn (1984), Welch & Salter (1971), and Haslett (1985) to 3 spatial dimensions. It is evident that, compared to classical pixelwise classification, further...
Bayesian image restoration, using configurations
DEFF Research Database (Denmark)
Thorarinsdottir, Thordis
configurations are expressed in terms of the mean normal measure of the random set. These probabilities are used as prior probabilities in a Bayesian image restoration approach. Estimation of the remaining parameters in the model is outlined for salt and pepper noise. The inference in the model is discussed...
Bayesian image restoration, using configurations
DEFF Research Database (Denmark)
Thorarinsdottir, Thordis Linda
2006-01-01
configurations are expressed in terms of the mean normal measure of the random set. These probabilities are used as prior probabilities in a Bayesian image restoration approach. Estimation of the remaining parameters in the model is outlined for the salt and pepper noise. The inference in the model is discussed...
Bayesian Analysis of Experimental Data
Directory of Open Access Journals (Sweden)
Lalmohan Bhar
2013-10-01
Analysis of experimental data from the Bayesian point of view is considered. Appropriate methodology has been developed for application to designed experiments. The normal-gamma distribution has been considered as the prior distribution. The developed methodology has been applied to real experimental data taken from long-term fertilizer experiments.
A mixed basis density functional approach for one-dimensional systems with B-splines
Ren, Chung-Yuan; Chang, Yia-Chung; Hsue, Chen-Shiung
2016-05-01
A mixed basis approach based on density functional theory is extended to one-dimensional (1D) systems. The basis functions here are taken to be the localized B-splines for the two finite non-periodic dimensions and the plane waves for the third periodic direction. This approach significantly reduces the number of basis functions and is therefore computationally efficient for the diagonalization of the Kohn-Sham Hamiltonian. For 1D systems, B-spline polynomials are particularly useful and efficient for the two-dimensional spatial integrations involved in the calculations because of their absolute localization. Moreover, B-splines are not associated with atomic positions when the geometry is optimized, making geometry optimization easy to implement. With such a basis set we can directly calculate the total energy of the isolated system instead of using the conventional supercell model with artificial vacuum regions among the replicas along the two non-periodic directions. The spurious Coulomb interaction between a charged defect and its repeated images in the supercell approach for charged systems can also be avoided. A rigorous formalism for the long-range Coulomb potential of both neutral and charged 1D systems under the mixed basis scheme is derived. To test the present method, we apply it to the infinite carbon-dimer chain, graphene nanoribbon, carbon nanotube, and positively charged carbon-dimer chain. The resulting electronic structures are presented and discussed in detail.
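The absolute localization of B-splines that the abstract relies on comes from their compact support, which follows directly from the Cox-de Boor recursion. A minimal sketch, assuming a uniform knot vector; the function name and setup are ours, not from the paper.

```python
import numpy as np

def bspline_basis(i, k, t, x):
    """Cox-de Boor recursion: value of the i-th B-spline of degree k
    on knot vector t, evaluated at x. Nonzero only on [t[i], t[i+k+1]]."""
    if k == 0:
        return 1.0 if t[i] <= x < t[i + 1] else 0.0
    left = 0.0
    if t[i + k] != t[i]:
        left = (x - t[i]) / (t[i + k] - t[i]) * bspline_basis(i, k - 1, t, x)
    right = 0.0
    if t[i + k + 1] != t[i + 1]:
        right = (t[i + k + 1] - x) / (t[i + k + 1] - t[i + 1]) \
            * bspline_basis(i + 1, k - 1, t, x)
    return left + right

# On a uniform knot vector, the degree-3 B-splines form a partition of
# unity on the interior of the knot span.
knots = np.arange(10.0)          # uniform knots 0..9
degree = 3
x = 4.5                          # a point well inside the valid domain
total = sum(bspline_basis(i, degree, knots, x)
            for i in range(len(knots) - degree - 1))
print(round(total, 10))          # → 1.0
```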
Motion characteristic between die and workpiece in spline rolling process with round dies
Directory of Open Access Journals (Sweden)
Da-Wei Zhang
2016-06-01
In the spline rolling process with round dies, additional kinematic compensation is an essential mechanism for improving the division of teeth and pitch accuracy as well as surface quality. The motion characteristic between the die and workpiece under varied center distance in the spline rolling process was investigated. Mathematical models of the instantaneous center of rotation, transmission ratio, and centrodes in the rolling process were established. The models were used to analyze the rolling process of an involute spline with circular dedendum, and the results indicated that (1) with the reduction in the center distance, the instantaneous center moves toward the workpiece, and the transmission ratio increases at first and then decreases; (2) the variations in the instantaneous center and transmission ratio are discontinuous, presenting an interruption when the involute flank begins to be formed; (3) the change in transmission ratio at the forming stage of the workpiece with the involute flank is negligible; and (4) the centrode of the workpiece is an Archimedean spiral whose polar radius decreases, and the centrode of the rolling die is close to an Archimedean spiral once the workpiece has an involute flank.
Directory of Open Access Journals (Sweden)
Geetha M
2012-03-01
Sign language is the most natural way of expression for the deaf community. The urge to support the integration of deaf people into the hearing society has made automatic sign language recognition an area of interest for researchers. Indian Sign Language (ISL) is a visual-spatial language which provides linguistic information using hands, arms, facial expressions, and head/body postures. In this paper we propose a novel vision-based recognition of Indian Sign Language alphabets and numerals using B-spline approximation. Gestures of ISL alphabets are complex since they involve the gestures of both hands together. Our algorithm approximates the boundary extracted from the region of interest to a B-spline curve by taking the maximum curvature points (MCPs) as the control points. The B-spline curve is then subjected to iterative smoothening, resulting in the extraction of key maximum curvature points (KMCPs), which are the key contributors to the gesture shape. Hence a translation- and scale-invariant feature vector is obtained from the spatial locations of the KMCPs in the 8 octant regions of the 2D space, which is given for classification.
Affine with B-Spline Registration based Retrieval using Distance Metrics
Directory of Open Access Journals (Sweden)
Swarnambiga AYYACHAMY
2014-06-01
This paper develops a two-stage framework that relies on affine transformation and B-spline registration of medical images in the first stage and retrieval of medical images using distance metrics in the second stage. Affine with B-spline registration based retrieval methods are dealt with in this paper. The framework is evaluated by applying images extracted from the affine with B-spline registration to the retrieval of medical images, performing registration-based retrieval. Quantitative analysis shows that the registration-based retrieval methods perform well with comparable results, and a summary of the results obtained is presented. This work brings three major advantages. First, medical images are conveniently retrieved from the database for effective clinical comparison, diagnosis, and verification, and also serve as a guidance tool. Second, registration techniques cope with monomodal medical images for a more detailed view of the images. Third, driving and tracking the entire lifecycle of this medical process is easier with this application, which permits immediate access to all patients' data stored in a medical repository. Conclusions drawn from the proposed schemes are listed and justified.
An optimal discrete operator for the two-dimensional spline filter
International Nuclear Information System (INIS)
Digital filtering techniques are indispensable tools for analyzing and evaluating surface topography data. Among the conventional digital filters, the Gaussian filter is the most commonly used filtering technique for both one-dimensional and two-dimensional data, because of its isotropic and zero-phase transmission characteristics. However, in the filtering process with the Gaussian filter, additional run-in and run-out regions are usually needed due to its large end-effects. To overcome the disadvantage that supplementary profile data are needed to reduce the end-effects, the one-dimensional spline filter was introduced. At present, it is widely accepted as a practical filtering technique and published as ISO 16610-22. In fact, a successive application of the one-dimensional spline filter to two-dimensional data in the orthogonal directions may lead to an anisotropic amplitude characteristic. In this paper, a purely two-dimensional discrete spline filter is proposed and its computational procedure is described, which is able to approximate the isotropic frequency response in an ideal manner through a least-squares optimization technique.
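A one-dimensional discrete analogue of such a spline filter can be written as a penalized least-squares problem, minimizing ||y - s||² + λ||D₂s||² with D₂ the second-difference operator. This sketch is our own simplification for illustration, not the paper's two-dimensional construction or the ISO filter itself.

```python
import numpy as np

def spline_filter_1d(y, lam):
    """Discrete smoothing-spline filter: minimize
    ||y - s||^2 + lam * ||D2 s||^2 by solving the normal equations
    (I + lam * D2^T D2) s = y, where D2 is the second-difference operator."""
    n = len(y)
    D2 = np.zeros((n - 2, n))
    for i in range(n - 2):
        D2[i, i:i + 3] = [1.0, -2.0, 1.0]
    A = np.eye(n) + lam * D2.T @ D2
    return np.linalg.solve(A, y)

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(50)
s = spline_filter_1d(y, lam=10.0)
# The filtered profile is smoother: its second differences shrink.
print(np.abs(np.diff(s, 2)).mean() < np.abs(np.diff(y, 2)).mean())  # → True
```

Note that no run-in/run-out padding is needed: the least-squares formulation handles the profile ends naturally, which is the end-effect advantage the abstract describes.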
Design of Low-Pass Digital Differentiators Based on B-splines
Directory of Open Access Journals (Sweden)
Zijun He
2014-07-01
This paper describes a new method for designing low-pass differentiators that is widely suitable for low-frequency signals with different sampling rates. The method is based on the differential property of convolution and the derivatives of B-spline basis functions. The first-order differentiator is constructed directly from the first derivative of the B-spline of degree 5 or 4. A higher-order (>2) low-pass differentiator is constructed by cascading two lower-order differentiators, whose coefficients are obtained from the nth derivative of a B-spline of degree n+2 expanded by a factor a. The properties of the proposed differentiators are presented, together with examples of designing the first- to sixth-order differentiators and several simulations, including the effects of the factor a on the results and the anti-noise capability of the proposed differentiators. The analysis and simulations indicate that the proposed differentiator can be applied to a wide range of low-frequency signals, and that the trade-off between noise reduction and signal preservation can be made by selecting the maximum allowable value of a.
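The basic construction (a differentiating FIR kernel sampled from the first derivative of a B-spline) can be sketched with a cubic B-spline; the lower degree and the absence of the expansion factor a are our simplifications, not the paper's exact design.

```python
import numpy as np

def bspline_derivative_kernel():
    """Samples of the first derivative of the centered cubic B-spline at
    integer offsets -1, 0, +1: B3'(-1) = 0.5, B3'(0) = 0, B3'(1) = -0.5.
    (The paper uses degree 4 or 5; cubic keeps the sketch short.)"""
    return np.array([0.5, 0.0, -0.5])

def differentiate(signal, dt=1.0):
    """Estimate the derivative by convolving with the kernel, keeping
    only the 'valid' region to avoid edge effects."""
    k = bspline_derivative_kernel()
    return np.convolve(signal, k, mode="valid") / dt

# A ramp of slope 3 is differentiated exactly by this kernel.
ramp = 3.0 * np.arange(10.0)
deriv = differentiate(ramp)
print(np.allclose(deriv, 3.0))  # → True; all valid samples equal 3.0
```

Because the kernel is a low-order polynomial smoother's derivative, it attenuates high-frequency noise while passing the low-frequency slope, which is the low-pass behavior the abstract emphasizes.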
Optimization of Bayesian Emission tomographic reconstruction for region of interest quantitation
Energy Technology Data Exchange (ETDEWEB)
Qi, Jinyi
2003-01-10
Region of interest (ROI) quantitation is an important task in emission tomography (e.g., positron emission tomography and single photon emission computed tomography). It is essential for exploring clinical factors such as tumor activity, growth rate, and the efficacy of therapeutic interventions. Bayesian methods based on the maximum a posteriori principle (also called penalized maximum likelihood methods) have been developed for emission image reconstructions to deal with the low signal to noise ratio of the emission data. Similar to the filter cut-off frequency in the filtered backprojection method, the smoothing parameter of the image prior in Bayesian reconstruction controls the resolution and noise trade-off and hence affects ROI quantitation. In this paper we present an approach for choosing the optimum smoothing parameter in Bayesian reconstruction for ROI quantitation. Bayesian reconstructions are difficult to analyze because the resolution and noise properties are nonlinear and object-dependent. Building on the recent progress on deriving the approximate expressions for the local impulse response function and the covariance matrix, we derived simplified theoretical expressions for the bias, the variance, and the ensemble mean squared error (EMSE) of the ROI quantitation. One problem in evaluating ROI quantitation is that the truth is often required for calculating the bias. This is overcome by using the ensemble distribution of the activity inside the ROI and computing the average EMSE. The resulting expressions allow fast evaluation of the image quality for different smoothing parameters. The optimum smoothing parameter of the image prior can then be selected to minimize the EMSE.
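The bias-variance trade-off that the smoothing parameter controls can be illustrated on a toy problem. This is a hypothetical stand-in: a penalized least-squares reconstruction with the EMSE estimated by brute-force averaging over noise realizations, rather than the paper's closed-form approximate expressions.

```python
import numpy as np

# Toy illustration: the ensemble MSE over noise realizations has an
# interior minimum in the smoothing parameter lam, because small lam
# leaves noise (variance) and large lam oversmooths (bias).
rng = np.random.default_rng(3)
n = 40
truth = np.sin(np.linspace(0, 2 * np.pi, n))
D2 = np.diff(np.eye(n), 2, axis=0)          # second-difference operator

def emse(lam, trials=200):
    """Brute-force ensemble MSE of the penalized reconstruction
    s = (I + lam * D2^T D2)^{-1} y, averaged over noisy realizations."""
    A = np.linalg.inv(np.eye(n) + lam * D2.T @ D2)
    errs = []
    for _ in range(trials):
        y = truth + 0.3 * rng.standard_normal(n)
        errs.append(np.mean((A @ y - truth) ** 2))
    return float(np.mean(errs))

scores = {lam: emse(lam) for lam in [0.0, 1.0, 10.0, 1000.0]}
best = min(scores, key=scores.get)
print(best)  # an interior value of lam minimizes the EMSE
```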
Topics in Bayesian statistics and maximum entropy
International Nuclear Information System (INIS)
Notions of Bayesian decision theory and maximum entropy methods are reviewed with particular emphasis on probabilistic inference and Bayesian modeling. The axiomatic approach is considered as the best justification of Bayesian analysis and maximum entropy principle applied in natural sciences. Particular emphasis is put on solving the inverse problem in digital image restoration and Bayesian modeling of neural networks. Further topics addressed briefly include language modeling, neutron scattering, multiuser detection and channel equalization in digital communications, genetic information, and Bayesian court decision-making. (author)
Smooth Wilson Loops in N=4 Non-Chiral Superspace
Beisert, Niklas; Plefka, Jan; Vergu, Cristian
2015-01-01
We consider a supersymmetric Wilson loop operator for 4d N=4 super Yang-Mills theory which is the natural object dual to the AdS_5 x S^5 superstring in the AdS/CFT correspondence. It generalizes the traditional bosonic 1/2 BPS Maldacena-Wilson loop operator and completes recent constructions in the literature to smooth (non-light-like) loops in the full N=4 non-chiral superspace. This Wilson loop operator enjoys global superconformal and local kappa-symmetry of which a detailed discussion is given. Moreover, the finiteness of its vacuum expectation value is proven at leading order in perturbation theory. We determine the leading vacuum expectation value for general paths both at the component field level up to quartic order in anti-commuting coordinates and in the full non-chiral superspace in suitable gauges. Finally, we discuss loops built from quadric splines joined in such a way that the path derivatives are continuous at the intersection.
Smooth Wilson loops in N=4 non-chiral superspace
Beisert, Niklas; Müller, Dennis; Plefka, Jan; Vergu, Cristian
2015-12-01
We consider a supersymmetric Wilson loop operator for 4d N=4 super Yang-Mills theory which is the natural object dual to the AdS_5 × S^5 superstring in the AdS/CFT correspondence. It generalizes the traditional bosonic 1/2 BPS Maldacena-Wilson loop operator and completes recent constructions in the literature to smooth (non-light-like) loops in the full N=4 non-chiral superspace. This Wilson loop operator enjoys global superconformal and local kappa-symmetry of which a detailed discussion is given. Moreover, the finiteness of its vacuum expectation value is proven at leading order in perturbation theory. We determine the leading vacuum expectation value for general paths both at the component field level up to quartic order in anti-commuting coordinates and in the full non-chiral superspace in suitable gauges. Finally, we discuss loops built from quadric splines joined in such a way that the path derivatives are continuous at the intersection.
Bayesian analysis of rare events
Straub, Daniel; Papaioannou, Iason; Betz, Wolfgang
2016-06-01
In many areas of engineering and science there is an interest in predicting the probability of rare events, in particular in applications related to safety and security. Increasingly, such predictions are made through computer models of physical systems in an uncertainty quantification framework. Additionally, with advances in IT, monitoring and sensor technology, an increasing amount of data on the performance of the systems is collected. This data can be used to reduce uncertainty, improve the probability estimates and consequently enhance the management of rare events and associated risks. Bayesian analysis is the ideal method to include the data into the probabilistic model. It ensures a consistent probabilistic treatment of uncertainty, which is central in the prediction of rare events, where extrapolation from the domain of observation is common. We present a framework for performing Bayesian updating of rare event probabilities, termed BUS. It is based on a reinterpretation of the classical rejection-sampling approach to Bayesian analysis, which enables the use of established methods for estimating probabilities of rare events. By drawing upon these methods, the framework makes use of their computational efficiency. These methods include the First-Order Reliability Method (FORM), tailored importance sampling (IS) methods and Subset Simulation (SuS). In this contribution, we briefly review these methods in the context of the BUS framework and investigate their applicability to Bayesian analysis of rare events in different settings. We find that, for some applications, FORM can be highly efficient and is surprisingly accurate, enabling Bayesian analysis of rare events with just a few model evaluations. In a general setting, BUS implemented through IS and SuS is more robust and flexible.
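The rejection-sampling reinterpretation at the heart of BUS can be shown on a toy conjugate problem where the exact posterior is known; the setup below is our own check, not the paper's reliability examples (FORM, IS, SuS are not reproduced here).

```python
import numpy as np

rng = np.random.default_rng(42)

# Rejection-sampling view of Bayesian updating (the idea behind BUS):
# draw from the prior, accept each draw x with probability L(x)/c, where
# L is the likelihood and c >= max L. Accepted draws follow the posterior.
# Toy conjugate check: prior N(0,1), one observation y = 1 with unit
# noise, so the exact posterior is N(0.5, 0.5).
y = 1.0
x = rng.standard_normal(200_000)            # prior samples
L = np.exp(-0.5 * (y - x) ** 2)             # likelihood; max value c = 1
accepted = x[rng.uniform(size=x.size) < L]  # rejection step
print(abs(accepted.mean() - 0.5) < 0.02)    # → True (matches N(0.5, 0.5))
```

The point of BUS is that the acceptance event above is itself a rare-event probability, so established structural reliability methods can estimate it efficiently when plain rejection would be hopeless.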
Smooth quantum gravity: Exotic smoothness and Quantum gravity
Asselmeyer-Maluga, Torsten
2016-01-01
Over the last two decades, many unexpected relations between exotic smoothness, e.g. exotic $\\mathbb{R}^{4}$, and quantum field theory were found. Some of these relations are rooted in a relation to superstring theory and quantum gravity. Therefore one would expect that exotic smoothness is directly related to the quantization of general relativity. In this article we will support this conjecture and develop a new approach to quantum gravity called \\emph{smooth quantum gravity} by using smooth 4-manifolds with an exotic smoothness structure. In particular we discuss the appearance of a wildly embedded 3-manifold which we identify with a quantum state. Furthermore, we analyze this quantum state by using foliation theory and relate it to an element in an operator algebra. Then we describe a set of geometric, non-commutative operators, the skein algebra, which can be used to determine the geometry of a 3-manifold. This operator algebra can be understood as a deformation quantization of the classical Poisson alge...
Bayesian methods for measures of agreement
Broemeling, Lyle D
2009-01-01
Using WinBUGS to implement Bayesian inferences of estimation and testing hypotheses, Bayesian Methods for Measures of Agreement presents useful methods for the design and analysis of agreement studies. It focuses on agreement among the various players in the diagnostic process.The author employs a Bayesian approach to provide statistical inferences based on various models of intra- and interrater agreement. He presents many examples that illustrate the Bayesian mode of reasoning and explains elements of a Bayesian application, including prior information, experimental information, the likelihood function, posterior distribution, and predictive distribution. The appendices provide the necessary theoretical foundation to understand Bayesian methods as well as introduce the fundamentals of programming and executing the WinBUGS software.Taking a Bayesian approach to inference, this hands-on book explores numerous measures of agreement, including the Kappa coefficient, the G coefficient, and intraclass correlation...
Plug & Play object oriented Bayesian networks
DEFF Research Database (Denmark)
Bangsø, Olav; Flores, J.; Jensen, Finn Verner
2003-01-01
Object oriented Bayesian networks have proven themselves useful in recent years. The idea of applying an object oriented approach to Bayesian networks has extended their scope to larger domains that can be divided into autonomous but interrelated entities. Object oriented Bayesian networks have been shown to be quite suitable for dynamic domains as well. However, processing object oriented Bayesian networks in practice does not take advantage of their modular structure. Normally the object oriented Bayesian network is transformed into a Bayesian network and inference is performed by constructing a junction tree from this network. In this paper we propose a method for translating directly from object oriented Bayesian networks to junction trees, avoiding the intermediate translation. We pursue two main purposes: firstly, to maintain the original structure organized in an instance tree...
Smoothing and projecting age-specific probabilities of death by TOPALS
Directory of Open Access Journals (Sweden)
Joop de Beer
2012-10-01
BACKGROUND TOPALS is a new relational model for smoothing and projecting age schedules. The model is operationally simple, flexible, and transparent. OBJECTIVE This article demonstrates how TOPALS can be used for both smoothing and projecting age-specific mortality for 26 European countries and compares the results of TOPALS with those of other smoothing and projection methods. METHODS TOPALS uses a linear spline to describe the ratios between the age-specific death probabilities of a given country and a standard age schedule. For smoothing purposes I use the average of death probabilities over 15 Western European countries as standard, whereas for projection purposes I use an age schedule of 'best practice' mortality. A partial adjustment model projects how quickly the death probabilities move in the direction of the best-practice level of mortality. RESULTS On average, TOPALS performs better than the Heligman-Pollard model and the Brass relational method in smoothing mortality age schedules. TOPALS can produce projections that are similar to those of the Lee-Carter method, but can easily be used to produce alternative scenarios as well. This article presents three projections of life expectancy at birth for the year 2060 for 26 European countries. The Baseline scenario assumes a continuation of the past trend in each country, the Convergence scenario assumes that there is a common trend across European countries, and the Acceleration scenario assumes that the future decline of death probabilities will exceed that in the past. The Baseline scenario projects that average European life expectancy at birth will increase to 80 years for men and 87 years for women in 2060, whereas the Acceleration scenario projects an increase to 90 and 93 years respectively. CONCLUSIONS TOPALS is a useful new tool for demographers for both smoothing age schedules and making scenarios.
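The core TOPALS idea (a linear spline in age for the ratio between an observed schedule and a standard) can be sketched as follows. Real TOPALS estimates the spline coefficients by regression; interpolating the knot ratios, as done here, is a simplification, and the function names and toy schedules are ours.

```python
import numpy as np

def topals_smooth(q_obs, q_std, ages, knots):
    """TOPALS-style smoothing sketch: model the ratio between observed
    and standard age-specific death probabilities with a linear spline
    anchored at a few knot ages, then rebuild the smoothed schedule as
    standard * spline(ratio)."""
    ratio_at_knots = np.interp(knots, ages, q_obs / q_std)
    ratio = np.interp(ages, knots, ratio_at_knots)  # linear spline in age
    return q_std * ratio

ages = np.arange(0, 101)
q_std = 1e-4 * np.exp(0.08 * ages)   # toy Gompertz-like standard schedule
q_obs = 1.5 * q_std                  # observed = constant ratio 1.5
smoothed = topals_smooth(q_obs, q_std, ages, knots=[0, 25, 50, 75, 100])
print(np.allclose(smoothed, q_obs))  # → True (a constant ratio is exact)
```

Because the spline acts on the ratio rather than the schedule itself, the smoothed result inherits the age pattern of the standard, which is what makes the model simple and transparent.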
Flexible Bayesian Nonparametric Priors and Bayesian Computational Methods
Zhu, Weixuan
2016-01-01
The definition of vectors of dependent random probability measures is a topic of interest in Bayesian nonparametrics. They represent dependent nonparametric prior distributions that are useful for modelling observables for which specific covariate values are known. Our first contribution is the introduction of novel multivariate vectors of two-parameter Poisson-Dirichlet processes. The dependence is induced by applying a Lévy copula to the marginal Lévy intensities. Our attenti...
Bayesian versus 'plain-vanilla Bayesian' multitarget statistics
Mahler, Ronald P. S.
2004-08-01
Finite-set statistics (FISST) is a direct generalization of single-sensor, single-target Bayes statistics to the multisensor-multitarget realm, based on random set theory. Various aspects of FISST are being investigated by several research teams around the world. In recent years, however, a few partisans have claimed that a "plain-vanilla Bayesian approach" suffices as down-to-earth, "straightforward," and general "first principles" for multitarget problems. Therefore, FISST is mere mathematical "obfuscation." In this and a companion paper I demonstrate the speciousness of these claims. In this paper I summarize general Bayes statistics, what is required to use it in multisensor-multitarget problems, and why FISST is necessary to make it practical. Then I demonstrate that the "plain-vanilla Bayesian approach" is so heedlessly formulated that it is erroneous and not even Bayesian; it denigrates FISST concepts while unwittingly assuming them, and has resulted in a succession of algorithms afflicted by inherent -- but less than candidly acknowledged -- computational "logjams."
Selective Smoothed Finite Element Method
Institute of Scientific and Technical Information of China (English)
(no author listed)
2007-01-01
The paper examines three selective schemes for the smoothed finite element method (SFEM) which was formulated by incorporating a cell-wise strain smoothing operation into the standard compatible finite element method (FEM). These selective SFEM schemes were formulated based on three selective integration FEM schemes with similar properties found between the number of smoothing cells in the SFEM and the number of Gaussian integration points in the FEM. Both scheme 1 and scheme 2 are free of nearly incompressible locking, but scheme 2 is more general and gives better results than scheme 1. In addition, scheme 2 can be applied to anisotropic and nonlinear situations, while scheme 1 can only be applied to isotropic and linear situations. Scheme 3 is free of shear locking. This scheme can be applied to plate and shell problems. Results of the numerical study show that the selective SFEM schemes give more accurate results than the FEM schemes.
Non-Parametric Bayesian Registration (NParBR) of Body Tumors in DCE-MRI Data.
Pilutti, David; Strumia, Maddalena; Buchert, Martin; Hadjidemetriou, Stathis
2016-04-01
The identification of tumors in the internal organs of chest, abdomen, and pelvis anatomic regions can be performed with the analysis of Dynamic Contrast Enhanced Magnetic Resonance Imaging (DCE-MRI) data. The contrast agent is accumulated differently by pathologic and healthy tissues and that results in a temporally varying contrast in an image series. The internal organs are also subject to potentially extensive movements mainly due to breathing, heart beat, and peristalsis. This contributes to making the analysis of DCE-MRI datasets challenging as well as time consuming. To address this problem we propose a novel pairwise non-rigid registration method with a Non-Parametric Bayesian Registration (NParBR) formulation. The NParBR method uses a Bayesian formulation that assumes a model for the effect of the distortion on the joint intensity statistics, a non-parametric prior for the restored statistics, and also applies a spatial regularization for the estimated registration with Gaussian filtering. A minimally biased intra-dataset atlas is computed for each dataset and used as reference for the registration of the time series. The time series registration method has been tested with 20 datasets of liver, lungs, intestines, and prostate. It has been compared to the B-Splines and to the SyN methods with results that demonstrate that the proposed method improves both accuracy and efficiency. PMID:26672032
Sinha, Samiran
2009-08-10
We propose a semiparametric Bayesian method for handling measurement error in nutritional epidemiological data. Our goal is to estimate nonparametrically the form of association between a disease and exposure variable while the true values of the exposure are never observed. Motivated by nutritional epidemiological data, we consider the setting where a surrogate covariate is recorded in the primary data, and a calibration data set contains information on the surrogate variable and repeated measurements of an unbiased instrumental variable of the true exposure. We develop a flexible Bayesian method where not only is the relationship between the disease and exposure variable treated semiparametrically, but also the relationship between the surrogate and the true exposure is modeled semiparametrically. The two nonparametric functions are modeled simultaneously via B-splines. In addition, we model the distribution of the exposure variable as a Dirichlet process mixture of normal distributions, thus making its modeling essentially nonparametric and placing this work into the context of functional measurement error modeling. We apply our method to the NIH-AARP Diet and Health Study and examine its performance in a simulation study.
Sarkar, Abhra
2014-10-02
We consider the problem of estimating the density of a random variable when precise measurements on the variable are not available, but replicated proxies contaminated with measurement error are available for sufficiently many subjects. Under the assumption of additive measurement errors this reduces to a problem of deconvolution of densities. Deconvolution methods often make restrictive and unrealistic assumptions about the density of interest and the distribution of measurement errors, e.g., normality and homoscedasticity and thus independence from the variable of interest. This article relaxes these assumptions and introduces novel Bayesian semiparametric methodology based on Dirichlet process mixture models for robust deconvolution of densities in the presence of conditionally heteroscedastic measurement errors. In particular, the models can adapt to asymmetry, heavy tails and multimodality. In simulation experiments, we show that our methods vastly outperform a recent Bayesian approach based on estimating the densities via mixtures of splines. We apply our methods to data from nutritional epidemiology. Even in the special case when the measurement errors are homoscedastic, our methodology is novel and dominates other methods that have been proposed previously. Additional simulation results, instructions on getting access to the data set and R programs implementing our methods are included as part of online supplemental materials.
Bayesian inference on proportional elections.
Brunello, Gabriel Hideki Vatanabe; Nakano, Eduardo Yoshio
2015-01-01
Polls for majoritarian voting systems usually show estimates of the percentage of votes for each candidate. However, proportional vote systems do not necessarily guarantee that the candidate with the highest percentage of votes will be elected. Thus, traditional methods used in majoritarian elections cannot be applied to proportional elections. In this context, the purpose of this paper was to perform a Bayesian inference on proportional elections considering the Brazilian system of seats distribution. More specifically, a methodology to answer the probability that a given party will have representation on the chamber of deputies was developed. Inferences were made on a Bayesian scenario using the Monte Carlo simulation technique, and the developed methodology was applied on data from the Brazilian elections for Members of the Legislative Assembly and Federal Chamber of Deputies in 2010. A performance rate was also presented to evaluate the efficiency of the methodology. Calculations and simulations were carried out using the free R statistical software. PMID:25786259
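The seat-allocation-plus-simulation recipe can be sketched with the D'Hondt highest-averages method (the divisor method used in Brazil's proportional system) and a Dirichlet posterior over vote shares; the function names, the flat prior, and the toy poll counts are our assumptions, not the paper's data.

```python
import numpy as np

def dhondt(votes, seats):
    """Allocate seats by the D'Hondt highest-averages method: repeatedly
    give the next seat to the party with the largest quotient
    votes / (seats_won + 1)."""
    won = [0] * len(votes)
    for _ in range(seats):
        quotients = [v / (s + 1) for v, s in zip(votes, won)]
        won[quotients.index(max(quotients))] += 1
    return won

def prob_representation(poll_counts, seats, party, n_sim=5_000, seed=1):
    """Monte Carlo probability that a party wins at least one seat:
    sample vote shares from a Dirichlet posterior (flat prior over the
    poll counts), allocate seats, and count the winning simulations."""
    rng = np.random.default_rng(seed)
    shares = rng.dirichlet(np.array(poll_counts) + 1.0, size=n_sim)
    hits = sum(dhondt(s, seats)[party] >= 1 for s in shares)
    return hits / n_sim

print(dhondt([100, 80, 30, 20], seats=8))  # → [4, 3, 1, 0]
p = prob_representation([100, 80, 30, 20], seats=8, party=3)
print(0.0 <= p <= 1.0)  # → True; a small probability for this weak party
```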
Bayesian approach to rough set
Marwala, Tshilidzi
2007-01-01
This paper proposes an approach to training rough set models in a Bayesian framework using the Markov chain Monte Carlo (MCMC) method. The prior probabilities are constructed from the prior knowledge that good rough set models have fewer rules. Markov chain Monte Carlo sampling is conducted in the rough set granule space, with the Metropolis algorithm used as the acceptance criterion. The proposed method is tested by estimating the risk of HIV given demographic data. The results show that the proposed approach achieves an average accuracy of 58%, with accuracy varying up to 66%. In addition, the Bayesian rough set gives the probabilities of the estimated HIV status as well as linguistic rules describing how the demographic parameters drive the risk of HIV.
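The Metropolis acceptance step at the core of such an MCMC scheme can be sketched on a toy one-dimensional target; a standard normal stands in here for the rough-set model posterior, which in the paper lives on the granule space:

```python
import math
import random

random.seed(1)

def log_target(x):
    # Unnormalized log-posterior: a standard normal, standing in for
    # the rough-set model posterior over the granule space.
    return -0.5 * x * x

x, samples = 0.0, []
for _ in range(20_000):
    prop = x + random.gauss(0.0, 1.0)   # symmetric random-walk proposal
    # Metropolis acceptance: accept with prob min(1, target(prop)/target(x))
    if math.log(random.random()) < log_target(prop) - log_target(x):
        x = prop
    samples.append(x)

burned = samples[5_000:]                # discard burn-in
mean = sum(burned) / len(burned)
```

Because the acceptance ratio only needs the target up to a constant, the same loop applies whenever the posterior can be evaluated pointwise but not normalized.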
Bayesian priors for transiting planets
Kipping, David M
2016-01-01
As astronomers push towards discovering ever-smaller transiting planets, it is increasingly common to deal with low signal-to-noise ratio (SNR) events, where the choice of priors plays an influential role in Bayesian inference. In the analysis of exoplanet data, the selection of priors is often treated as a nuisance, with observers typically defaulting to uninformative distributions. Such treatments miss a key strength of the Bayesian framework, especially in the low SNR regime, where even weak a priori information is valuable. When estimating the parameters of a low-SNR transit, two key pieces of information are known: (i) the planet has the correct geometric alignment to transit and (ii) the transit event exhibits sufficient signal-to-noise to have been detected. These represent two forms of observational bias. Accordingly, when fitting transits, the model parameter priors should not follow the intrinsic distributions of said terms, but rather those of both the intrinsic distributions and the observational ...
Bayesian Source Separation and Localization
Knuth, K H
1998-01-01
The problem of mixed signals occurs in many different contexts; one of the most familiar being acoustics. The forward problem in acoustics consists of finding the sound pressure levels at various detectors resulting from sound signals emanating from the active acoustic sources. The inverse problem consists of using the sound recorded by the detectors to separate the signals and recover the original source waveforms. In general, the inverse problem is unsolvable without additional information. This general problem is called source separation, and several techniques have been developed that utilize maximum entropy, minimum mutual information, and maximum likelihood. In previous work, it has been demonstrated that these techniques can be recast in a Bayesian framework. This paper demonstrates the power of the Bayesian approach, which provides a natural means for incorporating prior information into a source model. An algorithm is developed that utilizes information regarding both the statistics of the amplitudes...
Bayesian Inference for Radio Observations
Lochner, Michelle; Zwart, Jonathan T L; Smirnov, Oleg; Bassett, Bruce A; Oozeer, Nadeem; Kunz, Martin
2015-01-01
(Abridged) New telescopes like the Square Kilometre Array (SKA) will push into a new sensitivity regime and expose systematics, such as direction-dependent effects, that could previously be ignored. Current methods for handling such systematics rely on alternating best estimates of instrumental calibration and models of the underlying sky, which can lead to inaccurate uncertainty estimates and biased results because such methods ignore any correlations between parameters. These deconvolution algorithms produce a single image that is assumed to be a true representation of the sky, when in fact it is just one realisation of an infinite ensemble of images compatible with the noise in the data. In contrast, here we report a Bayesian formalism that simultaneously infers both systematics and science. Our technique, Bayesian Inference for Radio Observations (BIRO), determines all parameters directly from the raw data, bypassing image-making entirely, by sampling from the joint posterior probability distribution. Thi...
G1 Continuity Conditions of B-spline Surfaces
Institute of Scientific and Technical Information of China (English)
车翔玖; 梁学章
2002-01-01
According to B-spline theory and the Boehm algorithm, this paper presents several necessary and sufficient G1 continuity conditions between two adjacent B-spline surfaces. In order to meet the needs of applications, a kind of sufficient condition for G1 continuity is developed, and a kind of sufficient condition for G1 continuity among N (N > 2) patch B-spline surfaces meeting at a common corner is given at the end.
Institute of Scientific and Technical Information of China (English)
CHI Wen-xue; WANG Jin-feng; LI Xin-hu; ZHENG Xiao-ying; LIAO Yi-lan
2007-01-01
Objective: To estimate the prevalence rates of neural tube defects (NTDs) in Heshun County, Shanxi Province, China by a Bayesian smoothing technique. Methods: A total of 80 infants in the study area who were diagnosed with NTDs were analyzed. Two mapping techniques were then used. Firstly, the GIS software ArcGIS was used to map the crude prevalence rates. Secondly, the data were smoothed by the method of empirical Bayes estimation. Results: The classical statistical approach produced an extremely inhomogeneous map, while the Bayesian map was much smoother and more interpretable. The maps produced by the Bayesian technique indicate a tendency of villages in the southeastern region toward higher prevalence or risk values. Conclusions: The Bayesian smoothing technique addresses the issue of heterogeneity in the population at risk and is therefore recommended for exploratory mapping of birth defects. This approach provides procedures to identify spatial health risk levels and assists in generating hypotheses to be investigated in further detail.
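The empirical Bayes smoothing step can be sketched with a method-of-moments shrinkage estimator of the kind commonly used in disease mapping. The village counts below are hypothetical, not the Heshun County data, and the exact estimator used in the study is not specified in the abstract:

```python
import numpy as np

# Hypothetical NTD case counts and birth counts per village.
cases  = np.array([4, 0, 9, 2, 1, 12])
births = np.array([500, 300, 700, 400, 250, 900])

rates = cases / births
m = cases.sum() / births.sum()                  # global (pooled) rate

# Method-of-moments estimate of the between-village rate variance:
# weighted variance of crude rates minus the within-village (Poisson) part.
w = births / births.sum()
s2 = max((w * (rates - m) ** 2).sum() - m / births.mean(), 0.0)

# Shrink each crude rate toward the global rate; small villages
# (noisier rates) are shrunk more.
shrink = s2 / (s2 + m / births)
eb = shrink * rates + (1 - shrink) * m          # smoothed rates
```

The smoothed map replaces each village's crude rate with `eb`, so a zero-case village no longer shows an implausible zero risk.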
A Bayesian Nonparametric IRT Model
Karabatsos, George
2015-01-01
This paper introduces a flexible Bayesian nonparametric Item Response Theory (IRT) model, which applies to dichotomous or polytomous item responses, and which can apply to either unidimensional or multidimensional scaling. This is an infinite-mixture IRT model, with person ability and item difficulty parameters, and with a random intercept parameter that is assigned a mixing distribution, with mixing weights a probit function of other person and item parameters. As a result of its flexibility...
Elements of Bayesian experimental design
Energy Technology Data Exchange (ETDEWEB)
Sivia, D.S. [Rutherford Appleton Lab., Oxon (United Kingdom)
1997-09-01
We consider some elements of the Bayesian approach that are important for optimal experimental design. While the underlying principles used are very general, and are explained in detail in a recent tutorial text, they are applied here to the specific case of characterising the inferential value of different resolution peakshapes. This particular issue was considered earlier by Silver, Sivia and Pynn (1989, 1990a, 1990b), and the following presentation confirms and extends the conclusions of their analysis.
Bayesian kinematic earthquake source models
Minson, S. E.; Simons, M.; Beck, J. L.; Genrich, J. F.; Galetzka, J. E.; Chowdhury, F.; Owen, S. E.; Webb, F.; Comte, D.; Glass, B.; Leiva, C.; Ortega, F. H.
2009-12-01
Most coseismic, postseismic, and interseismic slip models are based on highly regularized optimizations which yield one solution which satisfies the data given a particular set of regularizing constraints. This regularization hampers our ability to answer basic questions such as whether seismic and aseismic slip overlap or instead rupture separate portions of the fault zone. We present a Bayesian methodology for generating kinematic earthquake source models with a focus on large subduction zone earthquakes. Unlike classical optimization approaches, Bayesian techniques sample the ensemble of all acceptable models presented as an a posteriori probability density function (PDF), and thus we can explore the entire solution space to determine, for example, which model parameters are well determined and which are not, or what is the likelihood that two slip distributions overlap in space. Bayesian sampling also has the advantage that all a priori knowledge of the source process can be used to mold the a posteriori ensemble of models. Although very powerful, Bayesian methods have up to now been of limited use in geophysical modeling because they are only computationally feasible for problems with a small number of free parameters due to what is called the "curse of dimensionality." However, our methodology can successfully sample solution spaces of many hundreds of parameters, which is sufficient to produce finite fault kinematic earthquake models. Our algorithm is a modification of the tempered Markov chain Monte Carlo (tempered MCMC or TMCMC) method. In our algorithm, we sample a "tempered" a posteriori PDF using many MCMC simulations running in parallel and evolutionary computation in which models which fit the data poorly are preferentially eliminated in favor of models which better predict the data. We present results for both synthetic test problems as well as for the 2007 Mw 7.8 Tocopilla, Chile earthquake, the latter of which is constrained by InSAR, local high
Bayesian Stable Isotope Mixing Models
Parnell, Andrew C.; Phillips, Donald L.; Bearhop, Stuart; Semmens, Brice X.; Ward, Eric J.; Moore, Jonathan W.; Andrew L Jackson; Inger, Richard
2012-01-01
In this paper we review recent advances in Stable Isotope Mixing Models (SIMMs) and place them into an over-arching Bayesian statistical framework which allows for several useful extensions. SIMMs are used to quantify the proportional contributions of various sources to a mixture. The most widely used application is quantifying the diet of organisms based on the food sources they have been observed to consume. At the centre of the multivariate statistical model we propose is a compositional m...
Bayesian Network--Response Regression
WANG, LU; Durante, Daniele; Dunson, David B.
2016-01-01
There is an increasing interest in learning how human brain networks vary with continuous traits (e.g., personality, cognitive abilities, neurological disorders), but flexible procedures to accomplish this goal are limited. We develop a Bayesian semiparametric model, which combines low-rank factorizations and Gaussian process priors to allow flexible shifts of the conditional expectation for a network-valued random variable across the feature space, while including subject-specific random eff...
Bayesian segmentation of hyperspectral images
Mohammadpour, Adel; Mohammad-Djafari, Ali
2007-01-01
In this paper we consider the problem of joint segmentation of hyperspectral images in the Bayesian framework. The proposed approach is based on a Hidden Markov Modeling (HMM) of the images with common segmentation, or equivalently with common hidden classification label variables which is modeled by a Potts Markov Random Field. We introduce an appropriate Markov Chain Monte Carlo (MCMC) algorithm to implement the method and show some simulation results.
Bayesian segmentation of hyperspectral images
Mohammadpour, Adel; Féron, Olivier; Mohammad-Djafari, Ali
2004-11-01
In this paper we consider the problem of joint segmentation of hyperspectral images in the Bayesian framework. The proposed approach is based on a Hidden Markov Modeling (HMM) of the images with common segmentation, or equivalently with common hidden classification label variables which is modeled by a Potts Markov Random Field. We introduce an appropriate Markov Chain Monte Carlo (MCMC) algorithm to implement the method and show some simulation results.
Bayesian analysis of contingency tables
Gómez Villegas, Miguel A.; González Pérez, Beatriz
2005-01-01
The display of the data by means of contingency tables is used in different approaches to statistical inference, for example, to broach the test of homogeneity of independent multinomial distributions. We develop a Bayesian procedure to test simple null hypotheses versus bilateral alternatives in contingency tables. Given independent samples of two binomial distributions and taking a mixed prior distribution, we calculate the posterior probability that the proportion of successes in the first...
Bayesian estimation of turbulent motion
Héas, P.; Herzet, C.; Mémin, E.; Heitz, D.; P. D. Mininni
2013-01-01
Based on physical laws describing the multi-scale structure of turbulent flows, this article proposes a regularizer for fluid motion estimation from an image sequence. Regularization is achieved by imposing a scale invariance property between histograms of motion increments computed at different scales. By reformulating this problem from a Bayesian perspective, an algorithm is proposed to jointly estimate motion, regularization hyper-parameters, and to select the ...
Space Shuttle RTOS Bayesian Network
Morris, A. Terry; Beling, Peter A.
2001-01-01
With shrinking budgets and the requirements to increase reliability and operational life of the existing orbiter fleet, NASA has proposed various upgrades for the Space Shuttle that are consistent with national space policy. The cockpit avionics upgrade (CAU), a high priority item, has been selected as the next major upgrade. The primary functions of cockpit avionics include flight control, guidance and navigation, communication, and orbiter landing support. Secondary functions include the provision of operational services for non-avionics systems such as data handling for the payloads and caution and warning alerts to the crew. Recently, a process to select the optimal commercial-off-the-shelf (COTS) real-time operating system (RTOS) for the CAU was conducted by United Space Alliance (USA) Corporation, which is a joint venture between Boeing and Lockheed Martin, the prime contractor for space shuttle operations. In order to independently assess the RTOS selection, NASA has used the Bayesian network-based scoring methodology described in this paper. Our two-stage methodology addresses the issue of RTOS acceptability by incorporating functional, performance and non-functional software measures related to reliability, interoperability, certifiability, efficiency, correctness, business, legal, product history, cost and life cycle. The first stage of the methodology involves obtaining scores for the various measures using a Bayesian network. The Bayesian network incorporates the causal relationships between the various and often competing measures of interest while also assisting the inherently complex decision analysis process with its ability to reason under uncertainty. The structure and prior probabilities of the network are extracted from experts in the field of real-time operating systems. Scores for the various measures are computed using Bayesian probability. In the second stage, multi-criteria trade-off analyses are performed between the scores
Bayesian second law of thermodynamics.
Bartolotta, Anthony; Carroll, Sean M; Leichenauer, Stefan; Pollack, Jason
2016-08-01
We derive a generalization of the second law of thermodynamics that uses Bayesian updates to explicitly incorporate the effects of a measurement of a system at some point in its evolution. By allowing an experimenter's knowledge to be updated by the measurement process, this formulation resolves a tension between the fact that the entropy of a statistical system can sometimes fluctuate downward and the information-theoretic idea that knowledge of a stochastically evolving system degrades over time. The Bayesian second law can be written as ΔH(ρ_{m},ρ)+〈Q〉_{F|m}≥0, where ΔH(ρ_{m},ρ) is the change in the cross entropy between the original phase-space probability distribution ρ and the measurement-updated distribution ρ_{m} and 〈Q〉_{F|m} is the expectation value of a generalized heat flow out of the system. We also derive refined versions of the second law that bound the entropy increase from below by a non-negative number, as well as Bayesian versions of integral fluctuation theorems. We demonstrate the formalism using simple analytical and numerical examples. PMID:27627241
Bayesian second law of thermodynamics
Bartolotta, Anthony; Carroll, Sean M.; Leichenauer, Stefan; Pollack, Jason
2016-08-01
We derive a generalization of the second law of thermodynamics that uses Bayesian updates to explicitly incorporate the effects of a measurement of a system at some point in its evolution. By allowing an experimenter's knowledge to be updated by the measurement process, this formulation resolves a tension between the fact that the entropy of a statistical system can sometimes fluctuate downward and the information-theoretic idea that knowledge of a stochastically evolving system degrades over time. The Bayesian second law can be written as ΔH(ρ_m, ρ) + 〈Q〉_{F|m} ≥ 0, where ΔH(ρ_m, ρ) is the change in the cross entropy between the original phase-space probability distribution ρ and the measurement-updated distribution ρ_m, and 〈Q〉_{F|m} is the expectation value of a generalized heat flow out of the system. We also derive refined versions of the second law that bound the entropy increase from below by a non-negative number, as well as Bayesian versions of integral fluctuation theorems. We demonstrate the formalism using simple analytical and numerical examples.
Numerical simulation of liquid jet breakup using smoothed particle hydrodynamics (SPH)
Pourabdian, Majid; Morad, Mohammad Reza
2016-01-01
In this paper, breakup of a liquid jet is simulated using smoothed particle hydrodynamics (SPH), which is a meshless Lagrangian numerical method. To this end, the flow governing equations are discretized based on the SPH method. The SPHysics open source code has been utilized for the numerical solutions and extended by adding surface tension effects. The proposed method is then validated using the dam-break-with-obstacle problem. Finally, simulation of two-dimensional liquid jet flow is carried out and its breakup behavior, considering one-phase flow, is investigated. The length of liquid breakup in the Rayleigh regime is calculated for various flow conditions, such as different Reynolds and Weber numbers, and the results are validated against an experimental correlation. All numerical solutions are carried out for both Wendland and cubic spline kernel functions, and the Wendland kernel function gave more accurate results. The results are compared to the MPS method for inviscid liquid as well. T...
Cervical cancer survival prediction using hybrid of SMOTE, CART and smooth support vector machine
Purnami, S. W.; Khasanah, P. M.; Sumartini, S. H.; Chosuvivatwong, V.; Sriplung, H.
2016-04-01
According to the WHO, every two minutes one patient dies from cervical cancer. The high mortality rate is due to the lack of awareness among women of the importance of early detection. Several factors supposedly influence the survival of cervical cancer patients, including age, anemia status, stage, type of treatment, complications and secondary disease. This study aims to classify and predict cervical cancer survival based on those factors. Various classification methods were used: classification and regression trees (CART), the smooth support vector machine (SSVM), and the third-order spline SSVM (TSSVM). Since the cervical cancer data are imbalanced, the synthetic minority oversampling technique (SMOTE) is used for handling the imbalanced dataset. Performances of these methods are evaluated using accuracy, sensitivity and specificity. Results of this study show that balancing the data with SMOTE as a preprocessing step improves classification performance. The SMOTE-SSVM method provided better results than SMOTE-TSSVM and SMOTE-CART.
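The SMOTE step can be sketched in a few lines: each synthetic minority sample is placed at a random point on the segment between a minority sample and one of its k nearest minority neighbours. The feature values below are simulated, not the cervical cancer data:

```python
import numpy as np

rng = np.random.default_rng(42)

def smote(X_min, n_new, k=3):
    """Minimal SMOTE sketch: interpolate between a minority point and
    one of its k nearest minority neighbours."""
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        nbrs = np.argsort(d)[1:k + 1]       # skip the point itself
        j = rng.choice(nbrs)
        lam = rng.random()                  # position along the segment
        out.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(out)

# Hypothetical 2-feature minority class (e.g. patients who died).
X_min = rng.normal(size=(10, 2))
X_new = smote(X_min, n_new=20)
```

The oversampled minority set `X_min` plus `X_new` would then be passed to the classifier (CART, SSVM, etc.) in place of the raw imbalanced data.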
Smooth halos in the cosmic web
Gaite, Jose
2014-01-01
Dark matter halos can be defined as smooth distributions of dark matter placed in a non-smooth cosmic web structure. This definition of halos demands a precise definition of smoothness and a characterization of the manner in which the transition from smooth halos to the cosmic web takes place. We introduce entropic measures of smoothness, related to measures of inequality previously used in economy and with the advantage of being connected with standard methods of multifractal analysis alread...
Smooth maps from clumpy data: Covariance analysis
Lombardi, Marco; Schneider, Peter
2002-01-01
Interpolation techniques play a central role in Astronomy, where one often needs to smooth irregularly sampled data into a smooth map. In a previous article (Lombardi & Schneider 2001), we have considered a widely used smoothing technique and we have evaluated the expectation value of the smoothed map under a number of natural hypotheses. Here we proceed further on this analysis and consider the variance of the smoothed map, represented by a two-point correlation function. We show that two ma...
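The class of smoothing estimators analysed here can be sketched as Gaussian-kernel weighted averaging of irregularly sampled data onto a grid. This is a minimal 1-D sketch with simulated data; the article's covariance analysis itself is not reproduced:

```python
import numpy as np

rng = np.random.default_rng(0)

# Irregularly ("clumpily") sampled noisy field f(x) = sin(2*pi*x).
x = rng.random(200)
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.2, size=200)

def smooth_map(x, y, grid, h=0.05):
    """Weighted (Gaussian-kernel) smoothing of irregular samples onto a
    grid: at each grid point, a kernel-weighted mean of the data."""
    w = np.exp(-0.5 * ((grid[:, None] - x[None, :]) / h) ** 2)
    return (w * y).sum(axis=1) / w.sum(axis=1)

grid = np.linspace(0.05, 0.95, 19)
fhat = smooth_map(x, y, grid)
```

The two-point correlation function studied in the article describes how the noise in `fhat` at two grid points is correlated because nearby grid points share the same data.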
Texture-preserving Bayesian image reconstruction for low-dose CT
Zhang, Hao; Han, Hao; Hu, Yifan; Liu, Yan; Ma, Jianhua; Li, Lihong; Moore, William; Liang, Zhengrong
2016-03-01
Markov random field (MRF) model has been widely used in Bayesian image reconstruction to reconstruct piecewise smooth images in the presence of noise, such as in low-dose X-ray computed tomography (LdCT). While it can preserve edge sharpness via edge-preserving potential function, its regional smoothing may sacrifice tissue image textures, which have been recognized as useful imaging biomarkers, and thus it compromises clinical tasks such as differentiating malignant vs. benign lesions, e.g., lung nodule or colon polyp. This study aims to shift the edge preserving regional noise smoothing paradigm to texture-preserving framework for LdCT image reconstruction while retaining the advantage of MRF's neighborhood system on edge preservation. Specifically, we adapted the MRF model to incorporate the image textures of lung, bone, fat, muscle, etc. from previous full-dose CT scan as a priori knowledge for texture-preserving Bayesian reconstruction of current LdCT images. To show the feasibility of proposed reconstruction framework, experiments using clinical patient scans (with lung nodule or colon polyp) were conducted. The experimental outcomes showed noticeable gain by the a priori knowledge for LdCT image reconstruction with the well-known Haralick texture measures. Thus, it is conjectured that texture-preserving LdCT reconstruction has advantages over edge-preserving regional smoothing paradigm for texture-specific clinical applications.
12th Brazilian Meeting on Bayesian Statistics
Louzada, Francisco; Rifo, Laura; Stern, Julio; Lauretto, Marcelo
2015-01-01
Through refereed papers, this volume focuses on the foundations of the Bayesian paradigm; their comparison to objectivistic or frequentist statistics counterparts; and the appropriate application of Bayesian foundations. This research in Bayesian statistics is applicable to data analysis in biostatistics, clinical trials, law, engineering, and the social sciences. EBEB, the Brazilian Meeting on Bayesian Statistics, is held every two years by ISBrA, the Brazilian chapter of the International Society for Bayesian Analysis (ISBA), one of the most active chapters of the ISBA. The 12th meeting took place March 10-14, 2014 in Atibaia. Interest in the foundations of inductive statistics has grown recently in accordance with the increasing availability of Bayesian methodological alternatives. Scientists need to deal with the ever more difficult choice of the optimal method to apply to their problem. This volume shows how Bayes can be the answer. The examination and discussion of the foundations work towards the goal of proper application of Bayesia...
3D Profile Filter Algorithm Based on Parallel Generalized B-spline Approximating Gaussian
Institute of Scientific and Technical Information of China (English)
REN Zhiying; GAO Chenghui; SHEN Ding
2015-01-01
Currently, approximation methods for the Gaussian filter by other spline filters have been developed. However, these methods are only suitable for one-dimensional filtering; when they are used for three-dimensional filtering, a rounding error and quantization error are passed on at every stage. In this paper, a new and high-precision implementation approach for the Gaussian filter is described, which is suitable for three-dimensional reference filtering. Based on the theory of generalized B-spline functions and the variational principle, the transmission characteristics of the digital filter can be changed through the sensitivity of the parameters (t1, t2), which also reduces the rounding and quantization errors by using the filter in a parallel form instead of the cascade form. Finally, an approximation filter for the Gaussian filter is obtained. In order to verify the feasibility of the new algorithm, reference extraction by the conventional methods is also carried out and compared. The experiments are conducted on a measured optical surface, and the results show that the total calculation by the new algorithm requires only 0.07 s for 480×480 data points; the amplitude deviation between the reference of the parallel-form filter and the Gaussian filter is smaller; and the new method is closer to the characteristics of the Gaussian filter through the analysis of three-dimensional roughness parameters, compared with the cascade generalized B-spline approximation of the Gaussian. So the new algorithm is also efficient and accurate for the implementation of the Gaussian filter in surface roughness measurement.
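The reason B-spline filters can approximate the Gaussian profile filter can be sketched numerically: an n-fold convolution of a box kernel is a degree-(n-1) B-spline kernel, which by the central limit theorem approaches a Gaussian of matching variance. The kernel sizes below are arbitrary, and the paper's parallel (t1, t2) filter itself is not reproduced:

```python
import numpy as np

# A 4-fold convolution of a box kernel gives a (discrete) cubic
# B-spline kernel; this is the mechanism behind spline approximations
# of the Gaussian profile filter.
box = np.ones(9) / 9.0
kernel = box.copy()
for _ in range(3):
    kernel = np.convolve(kernel, box)

# Compare with a sampled Gaussian of the same variance.
n = len(kernel)
x = np.arange(n) - (n - 1) / 2
var = 4 * (9 ** 2 - 1) / 12.0       # sum of the 4 box-kernel variances
gauss = np.exp(-0.5 * x ** 2 / var)
gauss /= gauss.sum()
max_dev = np.abs(kernel - gauss).max()
```

Higher-order splines (more convolutions) shrink `max_dev` further, at the cost of a wider support.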
Compiling Relational Bayesian Networks for Exact Inference
DEFF Research Database (Denmark)
Jaeger, Manfred; Chavira, Mark; Darwiche, Adnan
2004-01-01
We describe a system for exact inference with relational Bayesian networks as defined in the publicly available \primula\ tool. The system is based on compiling propositional instances of relational Bayesian networks into arithmetic circuits and then performing online inference by evaluating and differentiating these circuits in time linear in their size. We report on experimental results showing the successful compilation, and efficient inference, on relational Bayesian networks whose \primula\-generated propositional instances have thousands of variables, and whose jointrees have clusters...
Bayesian Posterior Distributions Without Markov Chains
Cole, Stephen R.; Chu, Haitao; Greenland, Sander; Hamra, Ghassan; Richardson, David B.
2012-01-01
Bayesian posterior parameter distributions are often simulated using Markov chain Monte Carlo (MCMC) methods. However, MCMC methods are not always necessary and do not help the uninitiated understand Bayesian inference. As a bridge to understanding Bayesian inference, the authors illustrate a transparent rejection sampling method. In example 1, they illustrate rejection sampling using 36 cases and 198 controls from a case-control study (1976–1983) assessing the relation between residential ex...
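The rejection-sampling idea can be sketched for a binomial proportion with a flat prior; the counts below are hypothetical, not the study's 36 cases and 198 controls:

```python
import random

random.seed(0)

# Posterior for a binomial proportion via rejection sampling:
# draw theta from the Uniform(0,1) prior, accept with probability
# L(theta)/L_max, where L is the binomial likelihood.
k, n = 7, 20                      # hypothetical: 7 "successes" of 20
mode = k / n                      # likelihood is maximized at k/n

def lik(t):
    return t ** k * (1 - t) ** (n - k)

l_max = lik(mode)

draws = []
while len(draws) < 5_000:
    t = random.random()           # draw from the prior
    if random.random() < lik(t) / l_max:
        draws.append(t)           # accepted draws follow the posterior

post_mean = sum(draws) / len(draws)
# With a flat prior the posterior is Beta(k+1, n-k+1), mean (k+1)/(n+2).
```

No Markov chain is involved: every accepted draw is an independent posterior sample, which is exactly the pedagogical point of the article.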
Steady-state solution of the PTC thermistor problem using a quadratic spline finite element method
Directory of Open Access Journals (Sweden)
Bahadir A. R.
2002-01-01
The problem of heat transfer in a Positive Temperature Coefficient (PTC) thermistor, which may form one element of an electric circuit, is solved numerically by a finite element method. The approach used is based on the Galerkin finite element method using quadratic splines as shape functions. The resulting system of ordinary differential equations is solved by the finite difference method. Comparison is made with numerical and analytical solutions, and the accuracy of the computed solutions indicates that the method is well suited to the PTC thermistor problem.
Numerical solution of the controlled Duffing oscillator by semi-orthogonal spline wavelets
Lakestani, M.; Razzaghi, M.; Dehghan, M.
2006-09-01
This paper presents a numerical method for solving the controlled Duffing oscillator. The method can be extended to nonlinear calculus of variations and optimal control problems. The method is based upon compactly supported linear semi-orthogonal B-spline wavelets. The differential and integral expressions which arise in the system dynamics, the performance index and the boundary conditions are converted into some algebraic equations which can be solved for the unknown coefficients. Illustrative examples are included to demonstrate the validity and applicability of the technique.
B-Spline Filtering for Automatic Detection of Calcification Lesions in Mammograms
Bueno, G.; Sánchez, S.; Ruiz, M.
2006-10-01
Breast cancer continues to be an important health problem among women. Early detection is the only way to improve breast cancer prognosis and significantly reduce mortality. By using CAD systems, radiologists can improve their ability to detect and classify lesions in mammograms. In this study, the usefulness of B-spline filtering based on a gradient scheme, compared to wavelet and adaptive filtering, has been investigated for calcification lesion detection as part of CAD systems. The technique has been applied to tissues of different density. A qualitative validation shows the success of the method.
A numerical solution of the Burgers' equation using septic B-splines
Energy Technology Data Exchange (ETDEWEB)
Ramadan, Mohamed A. [Department of Mathematics, Faculty of Science, Menoufia University, Shiben El-Koom (Egypt)] e-mail: mramadan@mailer.eun.eg; El-Danaf, Talaat S. [Department of Mathematics, Faculty of Science, Menoufia University, Shiben El-Koom (Egypt); Abd Alaal, Faisal E.I. [Department of Mathematics, Faculty of Science, Menoufia University, Shiben El-Koom (Egypt)
2005-11-01
In this paper, numerical solutions of the nonlinear Burgers' equation are obtained by a method based on collocation of septic B-splines over finite elements. Applying the Von-Neumann stability analysis, the proposed method is shown to be unconditionally stable. Numerical solutions of the modified Burgers' equation are also obtained by making a simple change of the suggested numerical scheme for the Burgers' equation. The accuracy of the presented method is demonstrated by two test problems. The numerical results are found to be in good agreement with the exact solutions.
Institute of Scientific and Technical Information of China (English)
秦开怀; 范刚; et al.
1994-01-01
New algorithms for finding intersections of B-spline or Bezier curves and surfaces using recursive subdivision techniques are presented, which use an extrapolating acceleration technique and have convergence precision of order 2. A matrix method is used to subdivide the curves or surfaces, which makes the subdivision more concise and intuitive. Dividing depths of Bezier curves and surfaces are used to subdivide the curves or surfaces adaptively. Therefore, the convergence precision and the computational efficiency of finding the intersections of curves and surfaces are improved by the methods proposed in this paper.
Simulating the focusing of light onto 1D nanostructures with a B-spline modal method
Bouchon, P.; Chevalier, P.; Héron, S.; Pardo, F.; Pelouard, J.-L.; Haïdar, R.
2015-03-01
Focusing light onto nanostructures with spherical lenses is a first step toward enhancing the field, and is widely used in applications, in particular for enhancing non-linear effects like second harmonic generation. Nonetheless, the electromagnetic response of such nanostructures, which have subwavelength patterns, to a focused beam cannot be described by the simple ray-tracing formalism. Here, we present a method to compute the response to a focused beam, based on the B-spline modal method. The simulation of a Gaussian focused beam is obtained thanks to a truncated decomposition into plane waves computed on a single period, which limits the computational burden.
A cubic B-spline Galerkin approach for the numerical simulation of the GEW equation
Directory of Open Access Journals (Sweden)
S. Battal Gazi Karakoç
2016-02-01
Full Text Available The generalized equal width (GEW) wave equation is solved numerically by using a lumped Galerkin approach with cubic B-spline functions. The proposed numerical scheme is tested by applying it to two test problems: a single solitary wave and the interaction of two solitary waves. In order to determine the performance of the algorithm, the error norms L2 and L∞ and the invariants I1, I2 and I3 are calculated. For the linear stability analysis of the numerical algorithm, the von Neumann approach is used. The obtained findings show that the presented numerical scheme is preferable to some recent numerical methods.
Energy Spectra of the Confined Atoms Obtained by Using B-Splines
Institute of Scientific and Technical Information of China (English)
SHI Ting-Yun; BAO Cheng-Guang; LI Bai-Wen
2001-01-01
We have calculated the energy spectra of one- and two-electron atoms (ions) centered in an impenetrable spherical box by variational method with B-splines as basis functions. Accurate results are obtained for both large and small radii of confinement. The critical box radius of confined hydrogen atom is also calculated to show the usefulness of our method. A partial energy degeneracy in confined hydrogen atom is found when the radius of spherical box is equal to the distance at which a node of single-node wavefunctions of free hydrogen atom is located.
Variational bayesian method of estimating variance components.
Arakawa, Aisaku; Taniguchi, Masaaki; Hayashi, Takeshi; Mikawa, Satoshi
2016-07-01
We developed a Bayesian analysis approach using a variational inference method, the so-called variational Bayesian method, to determine the posterior distributions of variance components. This variational Bayesian method and an alternative Bayesian method using Gibbs sampling were compared in estimating genetic and residual variance components from both simulated data and publicly available real pig data. In the simulated data set, we observed strong bias toward overestimation of genetic variance for the variational Bayesian method in the case of low heritability and small population size, and less bias was detected with larger population sizes in both methods examined. No differences in the estimates of variance components between the variational Bayesian method and Gibbs sampling were found in the real pig data. However, the posterior distributions of the variance components obtained with the variational Bayesian method had shorter tails than those obtained with Gibbs sampling. Consequently, the posterior standard deviations of the genetic and residual variances from the variational Bayesian method were lower than those from Gibbs sampling. The computing time required was much shorter with the variational Bayesian method than with Gibbs sampling.
SYNTHESIZED EXPECTED BAYESIAN METHOD OF PARAMETRIC ESTIMATE
Institute of Scientific and Technical Information of China (English)
Ming HAN; Yuanyao DING
2004-01-01
This paper develops a new method of parametric estimation, named the "synthesized expected Bayesian method". When samples of products are tested and no failure events occur, the definition of the expected Bayesian estimate is introduced and estimates of the failure probability and failure rate are provided. After some failure information is introduced by making an extra test, a synthesized expected Bayesian method is defined and used to estimate the failure probability, failure rate and some other parameters of exponential and Weibull distributions of populations. Finally, calculations are performed on practical problems, which show that the synthesized expected Bayesian method is feasible and easy to operate.
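The zero-failure setting described above can be sketched numerically. The sketch below assumes a Beta(1, b) prior on the failure probability and a uniform hyperprior b ~ U(1, c); these are common illustrative choices in the expected-Bayesian (E-Bayes) literature, not necessarily the paper's exact ones:

```python
import math

def bayes_estimate(n, b, a=1.0, failures=0):
    # Posterior mean of the failure probability under a Beta(a, b) prior
    # after observing `failures` failures in n tests.
    return (a + failures) / (a + b + n)

def expected_bayes_estimate(n, c, steps=10000):
    # Average the zero-failure Bayes estimate over a uniform
    # hyperprior b ~ U(1, c), using the midpoint rule.
    h = (c - 1.0) / steps
    total = sum(bayes_estimate(n, 1.0 + (i + 0.5) * h) for i in range(steps))
    return total * h / (c - 1.0)

def expected_bayes_closed(n, c):
    # Closed form of the same average: ln((1 + c + n) / (2 + n)) / (c - 1).
    return math.log((1.0 + c + n) / (2.0 + n)) / (c - 1.0)
```

Averaging the single-prior Bayes estimate over the hyperprior smooths out the arbitrary choice of b, which is the core of the expected-Bayesian idea.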
BayesLine: Bayesian Inference for Spectral Estimation of Gravitational Wave Detector Noise
Littenberg, Tyson B
2014-01-01
Gravitational wave data from ground-based detectors is dominated by instrument noise. Signals will be comparatively weak, and our understanding of the noise will influence detection confidence and signal characterization. Mis-modeled noise can produce large systematic biases in both model selection and parameter estimation. Here we introduce a multi-component, variable dimension, parameterized model to describe the Gaussian-noise power spectrum for data from ground-based gravitational wave interferometers. Called BayesLine, the algorithm models the noise power spectral density using cubic splines for smoothly varying broad-band noise and Lorentzians for narrow-band line features in the spectrum. We describe the algorithm and demonstrate its performance on data from the fifth and sixth LIGO science runs. Once fully integrated into LIGO/Virgo data analysis software, BayesLine will produce accurate spectral estimation and provide a means for marginalizing inferences drawn from the data over all plausible noise s...
Bayesian 2D Deconvolution: A Model for Diffuse Ultrasound Scattering
Directory of Open Access Journals (Sweden)
Oddvar Husby
2001-10-01
Full Text Available Observed medical ultrasound images are degraded representations of the true acoustic tissue reflectance. The degradation is due to blur and speckle, and significantly reduces the diagnostic value of the images. In order to remove both blur and speckle we have developed a new statistical model for diffuse scattering in 2D ultrasound radio-frequency images, incorporating both spatial smoothness constraints and a physical model for diffuse scattering. The modeling approach is Bayesian in nature, and we use Markov chain Monte Carlo methods to obtain the restorations. The results from restorations of some real and simulated radio-frequency ultrasound images are presented and compared with results produced by Wiener filtering.
Exclusive breastfeeding practice in Nigeria: a bayesian stepwise regression analysis.
Gayawan, Ezra; Adebayo, Samson B; Chitekwe, Stanley
2014-11-01
Despite the importance of breast milk, the prevalence of exclusive breastfeeding (EBF) in Nigeria is far lower than what has been recommended for developing countries. Worse still, the practice has been on a downward trend in the country recently. This study was aimed at investigating the determinants and geographical variations of EBF in Nigeria. Any intervention programme would require a good knowledge of the factors that enhance the practice. A pooled data set from the Nigeria Demographic and Health Surveys conducted in 1999, 2003, and 2008 was analyzed using a Bayesian stepwise approach that involves simultaneous selection of variables and smoothing parameters. Further, the approach allows geographical variations to be investigated at the highly disaggregated level of states. Within a Bayesian context, appropriate priors are assigned to all the parameters and functions. Findings reveal that education of women and their partners, place of delivery, mother's age at birth, and current age of child are associated with increasing prevalence of EBF. However, visits for antenatal care during pregnancy are not associated with EBF in Nigeria. Further, results reveal considerable geographical variations in the practice of EBF. The likelihood of exclusively breastfeeding children is significantly higher in Kwara, Kogi, Osun, and Oyo states but lower in Jigawa, Katsina, and Yobe. Intensive interventions that can lead to improved practice are required in all states in Nigeria. The importance of breastfeeding needs to be emphasized to women during antenatal visits, as this can encourage and enhance the practice after delivery. PMID:24619227
Analysis of a Gyroscope's Rotor Nonlinear Supported Magnetic Field Based on the B-Spline Wavelet-FEM
Institute of Scientific and Technical Information of China (English)
LIU Jian-feng; YUAN Gan-nan; HUANG Xu; YU Li
2005-01-01
A supporting framework for a gyroscope's rotor is designed, and a B-spline wavelet finite element model of the nonlinear supporting magnetic field is worked out. A new finite element space is studied in which the scaling function of the B-spline wavelet is taken as the shape function of a tetrahedron. The magnetic field is truncated by an artificial absorbing body that applies a radiation boundary condition, so the solution is unique. The resolution is improved via the varying gradient of the B-spline function without changing the grid, which offers advantages in dealing with concentrated flux and the high gradients that arise in a nonlinear magnetic field, making the result more practical. Plots of the flux in space are studied by simulating the supporting system model. The results of the study are useful in research on the magnetic supporting system for the gyroscope rotor.
Very Smooth Points of Spaces of Operators
Indian Academy of Sciences (India)
T S S R K Rao
2003-02-01
In this paper we study very smooth points of Banach spaces with special emphasis on spaces of operators. We show that when the space of compact operators is an -ideal in the space of bounded operators, a very smooth operator attains its norm at a unique vector (up to a constant multiple) and ( ) is a very smooth point of the range space. We show that if for every equivalent norm on a Banach space, the dual unit ball has a very smooth point then the space has the Radon–Nikodým property. We give an example of a smooth Banach space without any very smooth points.
Bayesian Methods and Universal Darwinism
Campbell, John
2009-12-01
Bayesian methods since the time of Laplace have been understood by their practitioners as closely aligned to the scientific method. Indeed, a recent champion of Bayesian methods, E. T. Jaynes, titled his textbook on the subject Probability Theory: The Logic of Science. Many philosophers of science, including Karl Popper and Donald Campbell, have interpreted the evolution of science as a Darwinian process consisting of a `copy with selective retention' algorithm abstracted from Darwin's theory of natural selection. Arguments are presented for an isomorphism between Bayesian methods and Darwinian processes. Universal Darwinism, as the term has been developed by Richard Dawkins, Daniel Dennett and Susan Blackmore, is the collection of scientific theories which explain the creation and evolution of their subject matter as due to the operation of Darwinian processes. These subject matters span the fields of atomic physics, chemistry, biology and the social sciences. The principle of maximum entropy states that systems will evolve to states of highest entropy subject to the constraints of scientific law. This principle may be inverted to provide illumination as to the nature of scientific law. Our best cosmological theories suggest the universe contained much less complexity during the period shortly after the Big Bang than it does at present. The scientific subject matter of atomic physics, chemistry, biology and the social sciences has been created since that time. An explanation is proposed for the existence of this subject matter as due to the evolution of constraints, in the form of adaptations, imposed on maximum entropy. It is argued these adaptations were discovered and instantiated through the operation of a succession of Darwinian processes.
Bayesian phylogeography finds its roots.
Directory of Open Access Journals (Sweden)
Philippe Lemey
2009-09-01
Full Text Available As a key factor in endemic and epidemic dynamics, the geographical distribution of viruses has been frequently interpreted in the light of their genetic histories. Unfortunately, inference of historical dispersal or migration patterns of viruses has mainly been restricted to model-free heuristic approaches that provide little insight into the temporal setting of the spatial dynamics. The introduction of probabilistic models of evolution, however, offers unique opportunities to engage in this statistical endeavor. Here we introduce a Bayesian framework for inference, visualization and hypothesis testing of phylogeographic history. By implementing character mapping in a Bayesian software that samples time-scaled phylogenies, we enable the reconstruction of timed viral dispersal patterns while accommodating phylogenetic uncertainty. Standard Markov model inference is extended with a stochastic search variable selection procedure that identifies the parsimonious descriptions of the diffusion process. In addition, we propose priors that can incorporate geographical sampling distributions or characterize alternative hypotheses about the spatial dynamics. To visualize the spatial and temporal information, we summarize inferences using virtual globe software. We describe how Bayesian phylogeography compares with previous parsimony analysis in the investigation of the influenza A H5N1 origin and H5N1 epidemiological linkage among sampling localities. Analysis of rabies in West African dog populations reveals how virus diffusion may enable endemic maintenance through continuous epidemic cycles. From these analyses, we conclude that our phylogeographic framework will be an important asset in molecular epidemiology that can be easily generalized to infer biogeography from genetic data for many organisms.
Numeracy, frequency, and Bayesian reasoning
Directory of Open Access Journals (Sweden)
Gretchen B. Chapman
2009-02-01
Full Text Available Previous research has demonstrated that Bayesian reasoning performance is improved if uncertainty information is presented as natural frequencies rather than single-event probabilities. A questionnaire study of 342 college students replicated this effect but also found that the performance-boosting benefits of the natural frequency presentation occurred primarily for participants who scored high in numeracy. This finding suggests that even comprehension and manipulation of natural frequencies requires a certain threshold of numeracy abilities, and that the beneficial effects of natural frequency presentation may not be as general as previously believed.
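The probability-format versus natural-frequency contrast is easy to make concrete. The sketch below uses illustrative screening numbers (1% prevalence, 80% sensitivity, 9.6% false-positive rate), a standard textbook example rather than figures from the study; both formats encode the same Bayes' rule computation:

```python
def posterior_from_probabilities(prior, sens, fpr):
    # Bayes' rule stated with single-event probabilities.
    return prior * sens / (prior * sens + (1.0 - prior) * fpr)

def posterior_from_frequencies(population, prior, sens, fpr):
    # The same computation phrased as natural frequencies:
    # counts of true and false positives in a concrete population.
    sick = population * prior
    true_pos = sick * sens
    false_pos = (population - sick) * fpr
    return true_pos / (true_pos + false_pos)
```

The frequency version makes the small posterior (under 8% here, despite an "80% accurate" test) transparent as a ratio of counts, which is why it tends to help reasoners.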
Bayesian Query-Focused Summarization
Daumé, Hal
2009-01-01
We present BayeSum (for ``Bayesian summarization''), a model for sentence extraction in query-focused summarization. BayeSum leverages the common case in which multiple documents are relevant to a single query. Using these documents as reinforcement for query terms, BayeSum is not afflicted by the paucity of information in short queries. We show that approximate inference in BayeSum is possible on large data sets and results in a state-of-the-art summarization system. Furthermore, we show how BayeSum can be understood as a justified query expansion technique in the language modeling for IR framework.
Bayesian Sampling using Condition Indicators
DEFF Research Database (Denmark)
Faber, Michael H.; Sørensen, John Dalsgaard
2002-01-01
The problem of quality control of components is considered for the special case where the acceptable failure rate is low, the test costs are high, and where it may be difficult or impossible to test the condition of interest directly. Based on classical control theory and the concept of condition indicators introduced by Benjamin and Cornell (1970), a Bayesian approach to quality control is formulated. The formulation is then extended to the case where the quality control is based on sampling of indirect information about the condition of the components, i.e. condition indicators...
Proteomics Improves the Prediction of Burns Mortality: Results from Regression Spline Modeling
Finnerty, Celeste C.; Ju, Hyunsu; Spratt, Heidi; Victor, Sundar; Jeschke, Marc G.; Hegde, Sachin; Bhavnani, Suresh K.; Luxon, Bruce A.; Brasier, Allan R.; Herndon, David N.
2012-01-01
Prediction of mortality in severely burned patients remains unreliable. Although clinical covariates and plasma protein abundance have been used with varying degrees of success, the triad of burn size, inhalation injury, and age remains the most reliable predictor. We investigated the effect of combining proteomics variables with these three clinical covariates on prediction of mortality in burned children. Serum samples were collected from 330 burned children (burns covering >25% of the total body surface area) between admission and the time of the first operation for clinical chemistry analyses and proteomic assays of cytokines. Principal component analysis revealed that serum protein abundance and the clinical covariates each provided independent information regarding patient survival. To determine whether combining proteomics with clinical variables improves prediction of patient mortality, we used multivariate adaptive regression splines, since the relationships between analytes and mortality were not linear. Combining these factors increased overall outcome prediction accuracy from 52% to 81% and area under the receiver operating characteristic curve from 0.82 to 0.95. Thus, the predictive accuracy of burns mortality is substantially improved by combining protein abundance information with clinical covariates in a multivariate adaptive regression splines classifier, a model currently being validated in a prospective study. PMID:22686201
Non-Stationary Hydrologic Frequency Analysis using B-Splines Quantile Regression
Nasri, B.; St-Hilaire, A.; Bouezmarni, T.; Ouarda, T.
2015-12-01
Hydrologic frequency analysis is commonly used by engineers and hydrologists to provide basic information for the planning, design and management of hydraulic structures and water resources systems under the assumption of stationarity. However, with increasing evidence of a changing climate, it is possible that the assumption of stationarity is no longer valid and the results of conventional analysis become questionable. In this study, we consider a framework for frequency analysis of extreme flows based on B-spline quantile regression, which allows modeling of non-stationary data that depend on covariates. Such covariates may have linear or nonlinear dependence. A Markov chain Monte Carlo (MCMC) algorithm is used to estimate quantiles and their posterior distributions. A coefficient of determination for quantile regression is proposed to evaluate the estimation of the proposed model at each quantile level. The method is applied to annual maximum and minimum streamflow records in Ontario, Canada. Climate indices are considered to describe the non-stationarity in these variables and to estimate the quantiles in this case. The results show large differences between the non-stationary quantiles and their stationary equivalents for annual maximum and minimum discharge with high annual non-exceedance probabilities. Keywords: Quantile regression, B-spline functions, MCMC, Streamflow, Climate indices, non-stationarity.
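The building block of any quantile regression is the check (pinball) loss. As a minimal illustration of the idea behind the B-spline quantile model above, the sketch below recovers an unconditional tau-quantile by minimizing this loss over a grid; the paper's covariate-dependent, MCMC-based estimation is far richer:

```python
def pinball_loss(theta, ys, tau):
    # Check (pinball) loss; its minimizer over theta is the tau-quantile.
    return sum((tau - (1.0 if y < theta else 0.0)) * (y - theta) for y in ys)

def grid_quantile(ys, tau, steps=2000):
    # Crude grid search standing in for the paper's regression machinery.
    lo, hi = min(ys), max(ys)
    grid = [lo + (hi - lo) * i / steps for i in range(steps + 1)]
    return min(grid, key=lambda t: pinball_loss(t, ys, tau))
```

Replacing the constant theta with a B-spline expansion in a covariate turns this loss minimization into the non-stationary quantile model the abstract describes.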
TOOTH CURVES AND ENTIRE CONTACT AREA IN PROCESS OF SPLINE COLD ROLLING
Institute of Scientific and Technical Information of China (English)
Anonymous
2008-01-01
In the spline rolling process, the contact area between roller and workpiece plays an important role in calculating the rolling force and rolling moment. For the purpose of studying the contact area, the contact state between roller and workpiece in the process of spline cold rolling based on cross rolling is analyzed. Under suitable hypotheses, a mathematical model of the roller tooth curve at an arbitrary position in the rolling process is established. Combining the theory of conjugate curves with the theory of envelope curves, the corresponding mathematical model of the workpiece tooth curve is established. Using the established mathematical models, an algorithm for the entire contact area in the rolling process is created. On the basis of the algorithm, a calculation program is written in the MATLAB programming environment. The calculation program performs quantitative analysis and calculation of contact areas. Using the calculation program, the influence of parameters on the contact area is analyzed, and the tendency is consistent with manufacturing experience. In consideration of rolling-force optimization, the primary process parameters may be selected according to the results of the calculation. The present study may provide a basis for research on rolling force and rolling moment.
DBSR_HF: A B-spline Dirac-Hartree-Fock program
Zatsarinny, Oleg; Froese Fischer, Charlotte
2016-05-01
A B-spline version of a general Dirac-Hartree-Fock program is described. The usual differential equations are replaced by a set of generalized eigenvalue problems of the form (Ha -εa B) Pa = 0, where Ha and B are the Hamiltonian and overlap matrices, respectively, and Pa is the two-component relativistic orbit in the B-spline basis. A default universal grid allows for flexible adjustment to different nuclear models. When two orthogonal orbitals are both varied, the energy must also be stationary with respect to orthonormal transformations. At such a stationary point the off-diagonal Lagrange multipliers may be eliminated through projection operators. The self-consistent field procedure exhibits excellent convergence. Several atomic states can be considered simultaneously, including some configuration-interaction calculations. The program provides several options for the treatment of Breit interaction and QED corrections. The information about atoms up to Z = 104 is stored by the program. Along with a simple interface through command-line arguments, this information allows the user to run the program with minimal initial preparations.
Marghany, Maged
2014-06-01
A critical challenge in urban areas is slums. In fact, they are considered a source of crime and disease due to poor-quality housing, unsanitary conditions, poor infrastructure and insecure occupancy. The poor in dense urban slums are the most vulnerable to infection due to (i) inadequate and restricted access to safe drinking water and sufficient quantities of water for personal hygiene; (ii) the lack of removal and treatment of excreta; and (iii) the lack of removal of solid waste. This study aims to investigate the capability of ENVISAT ASAR satellite and Google Earth data for three-dimensional (3-D) reconstruction of urban slums in developing countries such as Egypt. The main objective of this work is to apply a 3-D automatic detection algorithm to ENVISAT ASAR and Google Earth images acquired in Cairo, Egypt, using a fuzzy B-spline algorithm. The results show that the fuzzy algorithm is the best indicator of chaotic urban slums, as it can discriminate them from the surrounding environment. The combination of fuzzy and B-spline is then used to reconstruct the 3-D urban slums. The results show that urban slums, road networks, and infrastructure are perfectly discriminated. It can therefore be concluded that the fuzzy algorithm is an appropriate algorithm for automatic detection of chaotic urban slums in ENVISAT ASAR and Google Earth data.
Xu, ShengYong; Wu, JuanJuan; Zhu, Li; Li, WeiHao; Wang, YiTian; Wang, Na
2015-12-01
Visual navigation is a fundamental technique for an intelligent cotton-picking robot. There are many components and much cover in the cotton field, which complicate furrow recognition and trajectory extraction. In this paper, a new field navigation path extraction method is presented. Firstly, the color image in RGB color space is pre-processed by the OTSU threshold algorithm and noise filtering. Secondly, the binary image is divided into numerous horizontal spline areas. In each area, connected regions near the image's vertical center line are calculated by the Two-Pass algorithm. The center points of the connected regions are candidate points for the navigation path. Thirdly, a series of navigation points is determined iteratively on the principle of the nearest distance between two candidate points in neighboring splines. Finally, the navigation path equation is fitted to the navigation points using the least squares method. Experiments show that this method is accurate and effective, and suitable for visual navigation in the complex environment of the cotton field in different phases.
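The final least-squares fitting step can be sketched directly. The sketch below fits a straight navigation line y = a + b*x to candidate points by ordinary least squares; the earlier stages of the pipeline (OTSU thresholding, Two-Pass labeling, point selection) are assumed to have produced the points:

```python
def fit_line(points):
    # Ordinary least-squares fit of y = a + b*x to the navigation points,
    # via the closed-form normal equations for simple linear regression.
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return a, b
```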
B-splines as a Tool to Solve Constraints in Non-Hydrostatic Forecast Model
Subias, Alvaro
2016-01-01
Finite elements have been proven to be a useful tool to discretize the vertical coordinate in hydrostatic forecast models, allowing model variables to be defined on full levels so that no staggering is needed. In the non-hydrostatic case, a constraint on the vertical operators (called C1) appears that does not allow the set of semi-implicit linear equations to be reduced to a single equation in one variable, as in the analytic case. Recently, vertical finite elements based on B-splines have been used with an iterative method to relax the C1 constraint. In this paper we develop representations of vertical operators in terms of B-splines that keep the C1 constraint. An invertibility relation between the integral and derivative operators relating vertical velocity and vertical divergence is also presented. The final scope of this paper is to provide a theoretical framework for the development of finite element vertical operators to be implemented in the nh-Harmonie model.
Smoothing of mixed complementarity problems
Energy Technology Data Exchange (ETDEWEB)
Gabriel, S.A.; More, J.J. [Argonne National Lab., IL (United States). Mathematics and Computer Science Div.
1995-09-01
The authors introduce a smoothing approach to the mixed complementarity problem, and study the limiting behavior of a path defined by approximate minimizers of a nonlinear least squares problem. The main result guarantees that, under a mild regularity condition, limit points of the iterates are solutions to the mixed complementarity problem. The analysis is applicable to a wide variety of algorithms suitable for large-scale mixed complementarity problems.
Beam smoothing and temporal effects
International Nuclear Information System (INIS)
Until recently, and in spite of the introduction of smoothing methods, direct-drive laser fusion suffered many setbacks in experiments, owing to nonlinear and anomalous phenomena. This report deals with a method of analysis of self-generated von Laue gratings, which prevent the propagation of laser radiation through the outermost plasma corona and prevent energy deposition. (TEC). 36 refs., 5 figs.
Subsampling in Smoothed Range Spaces
Phillips, Jeff M.; Zheng, Yan
2015-01-01
We consider smoothed versions of geometric range spaces, so an element of the ground set (e.g. a point) can be contained in a range with a non-binary value in $[0,1]$. Similar notions have been considered for kernels; we extend them to more general types of ranges. We then consider approximations of these range spaces through $\varepsilon$-nets and $\varepsilon$-samples (aka $\varepsilon$-approximations). We characterize when size bounds for $\varepsilon$-samples on kernels can be extended...
Smooth Optimization with Approximate Gradient
d'Aspremont, Alexandre
2005-01-01
We show that the optimal complexity of Nesterov's smooth first-order optimization algorithm is preserved when the gradient is only computed up to a small, uniformly bounded error. In applications of this method to semidefinite programs, this means in some instances computing only a few leading eigenvalues of the current iterate instead of a full matrix exponential, which significantly reduces the method's computational cost. This also allows sparse problems to be solved efficiently using spar...
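The robustness to gradient error can be illustrated on a toy quadratic. The sketch below runs a Nesterov-style accelerated iteration on f(x) = 0.5*x^2 with a uniformly bounded (alternating-sign) gradient error; it is a simplified scheme for illustration, not the paper's algorithm or its semidefinite-programming application:

```python
def nesterov_approx(x0, steps=300, delta=1e-3, step=0.5):
    # Accelerated gradient descent on f(x) = 0.5 * x**2 (so grad f(x) = x),
    # where the gradient oracle is corrupted by a bounded error of size delta.
    x_prev = x = y = x0
    for k in range(steps):
        g = y + delta * (-1) ** k            # approximate gradient
        x_prev, x = x, y - step * g          # gradient step
        y = x + k / (k + 3.0) * (x - x_prev) # Nesterov momentum
    return x
```

With an exact gradient the iterate converges to the minimizer at 0; with the bounded error it settles in a small neighborhood of the optimum whose size is controlled by delta, mirroring the paper's claim that a uniformly bounded gradient error preserves the method's behavior.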
Mobile real-time EEG imaging Bayesian inference with sparse, temporally smooth source priors
DEFF Research Database (Denmark)
Hansen, Lars Kai; Hansen, Sofie Therese; Stahlhut, Carsten
2013-01-01
EEG based real-time imaging of human brain function has many potential applications including quality control, in-line experimental design, brain state decoding, and neuro-feedback. In mobile applications these possibilities are attractive as elements in systems for personal state monitoring...
Hybrid optimization and Bayesian inference techniques for a non-smooth radiation detection problem
Stefanescu, Razvan; Schmidt, Kathleen; Hite, Jason; Smith, Ralph; Mattingly, John
2016-01-01
In this investigation, we propose several algorithms to recover the location and intensity of a radiation source located in a simulated 250 m x 180 m block in an urban center based on synthetic measurements. Radioactive decay and detection are Poisson random processes, so we employ likelihood functions based on this distribution. Due to the domain geometry and the proposed response model, the negative logarithm of the likelihood is only piecewise continuous differentiable, and it has multiple...
Extended cubic B-spline method for solving a linear system of second-order boundary value problems.
Heilat, Ahmed Salem; Hamid, Nur Nadiah Abd; Ismail, Ahmad Izani Md
2016-01-01
A method based on extended cubic B-splines is proposed to solve a linear system of second-order boundary value problems. In this method, two free parameters, [Formula: see text] and [Formula: see text], play an important role in producing accurate results. Optimization of these parameters is carried out and the truncation error is calculated. The method is tested on three examples. The examples suggest that this method produces comparable or more accurate results than cubic B-spline and some other methods.
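Any B-spline collocation method rests on the basis functions themselves. As background, the sketch below evaluates plain (not extended) B-splines by the standard Cox-de Boor recursion; the extended basis in the abstract above adds the two free parameters to this construction:

```python
def bspline_basis(i, k, t, knots):
    # Cox-de Boor recursion: value of the i-th B-spline of order k
    # (degree k - 1) at t, over the given non-decreasing knot vector.
    if k == 1:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    value = 0.0
    d1 = knots[i + k - 1] - knots[i]
    if d1 > 0:
        value += (t - knots[i]) / d1 * bspline_basis(i, k - 1, t, knots)
    d2 = knots[i + k] - knots[i + 1]
    if d2 > 0:
        value += (knots[i + k] - t) / d2 * bspline_basis(i + 1, k - 1, t, knots)
    return value
```

On a uniform knot vector the cubic (order 4) basis functions sum to one inside the valid interval and take the familiar cardinal values 1/6, 2/3, 1/6 at the interior knots.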
Brouwer, Charlotte L.; Kierkels, Roel G. J.; van t Veld, Aart A.; Sijtsema, Nanna M.; Meertens, Harm
2014-01-01
Objectives: To explore the effects of computed tomography (CT) image characteristics and B-spline knot spacing (BKS) on the spatial accuracy of a B-spline deformable image registration (DIR) in the head-and-neck geometry. Methods: The effect of image feature content, image contrast, noise, and BKS o
Using Bayesian Networks to Improve Knowledge Assessment
Millan, Eva; Descalco, Luis; Castillo, Gladys; Oliveira, Paula; Diogo, Sandra
2013-01-01
In this paper, we describe the integration and evaluation of an existing generic Bayesian student model (GBSM) into an existing computerized testing system within the Mathematics Education Project (PmatE--Projecto Matematica Ensino) of the University of Aveiro. This generic Bayesian student model had been previously evaluated with simulated…
Bayesian analysis of exoplanet and binary orbits
Schulze-Hartung, Tim; Launhardt, Ralf; Henning, Thomas
2012-01-01
We introduce BASE (Bayesian astrometric and spectroscopic exoplanet detection and characterisation tool), a novel program for the combined or separate Bayesian analysis of astrometric and radial-velocity measurements of potential exoplanet hosts and binary stars. The capabilities of BASE are demonstrated using all publicly available data of the binary Mizar A.
Bayesian credible interval construction for Poisson statistics
Institute of Scientific and Technical Information of China (English)
ZHU Yong-Sheng
2008-01-01
The construction of the Bayesian credible (confidence) interval for a Poisson observable including both the signal and background, with and without systematic uncertainties, is presented. Introducing the conditional probability satisfying the requirement that the background be not larger than the observed events to construct the Bayesian credible interval is also discussed. A Fortran routine, BPOCI, has been developed to implement the calculation.
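The BPOCI routine itself is not available here, but the basic construction (flat prior on a signal s ≥ 0 with known background b, no systematic uncertainties) can be sketched numerically on a grid:

```python
import numpy as np

# Central credible interval for a Poisson signal s >= 0 with known background
# b and a flat prior on s: posterior p(s | n) ∝ (s + b)^n * exp(-(s + b)).
# A grid-based sketch, not the BPOCI routine referenced in the abstract.
def credible_interval(n, b, cl=0.90, smax=50.0, ngrid=200001):
    s = np.linspace(0.0, smax, ngrid)
    logp = n * np.log(s + b) - (s + b)          # log posterior up to a constant
    p = np.exp(logp - logp.max())
    ds = s[1] - s[0]
    p /= p.sum() * ds                           # normalize on the grid
    cdf = np.cumsum(p) * ds
    lo = s[np.searchsorted(cdf, (1.0 - cl) / 2)]
    hi = s[np.searchsorted(cdf, 1.0 - (1.0 - cl) / 2)]
    return lo, hi

lo, hi = credible_interval(n=10, b=3.0)         # central 90% interval for s
```

With b → 0 and n = 0 the posterior reduces to exp(-s), so the interval endpoints approach -ln(0.95) and -ln(0.05), a convenient sanity check.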
Modeling Diagnostic Assessments with Bayesian Networks
Almond, Russell G.; DiBello, Louis V.; Moulder, Brad; Zapata-Rivera, Juan-Diego
2007-01-01
This paper defines Bayesian network models and examines their applications to IRT-based cognitive diagnostic modeling. These models are especially suited to building inference engines designed to be synchronous with the finer grained student models that arise in skills diagnostic assessment. Aspects of the theory and use of Bayesian network models…
Advances in Bayesian Modeling in Educational Research
Levy, Roy
2016-01-01
In this article, I provide a conceptually oriented overview of Bayesian approaches to statistical inference and contrast them with frequentist approaches that currently dominate conventional practice in educational research. The features and advantages of Bayesian approaches are illustrated with examples spanning several statistical modeling…
Learning dynamic Bayesian networks with mixed variables
DEFF Research Database (Denmark)
Bøttcher, Susanne Gammelgaard
This paper considers dynamic Bayesian networks for discrete and continuous variables. We treat only the case where the distribution of the variables is conditional Gaussian. We show how to learn the parameters and structure of a dynamic Bayesian network and also how the Markov order can be learned...
The Bayesian Revolution Approaches Psychological Development
Shultz, Thomas R.
2007-01-01
This commentary reviews five articles that apply Bayesian ideas to psychological development, some with psychology experiments, some with computational modeling, and some with both experiments and modeling. The reviewed work extends the current Bayesian revolution into tasks often studied in children, such as causal learning and word learning, and…
Bayesian Network for multiple hypothesis tracking
W.P. Zajdel; B.J.A. Kröse
2002-01-01
For a flexible camera-to-camera tracking of multiple objects we model the objects' behavior with a Bayesian network and combine it with the multiple hypothesis framework that associates observations with objects. Bayesian networks offer a possibility to factor complex, joint distributions into a produ
Pharmacology of airway smooth muscle proliferation
Gosens, Reinoud; Roscioni, Sara S.; Dekkers, Bart G. J.; Pera, Tonio; Schmidt, Martina; Schaafsma, Dedmer; Zaagsma, Johan; Meurs, Herman
2008-01-01
Airway smooth muscle thickening is a pathological feature that contributes significantly to airflow limitation and airway hyperresponsiveness in asthma. Ongoing research efforts aimed at identifying the mechanisms responsible for the increased airway smooth muscle mass have indicated that hyperplasi
2nd Bayesian Young Statisticians Meeting
Bitto, Angela; Kastner, Gregor; Posekany, Alexandra
2015-01-01
The Second Bayesian Young Statisticians Meeting (BAYSM 2014) and the research presented here facilitate connections among researchers using Bayesian Statistics by providing a forum for the development and exchange of ideas. WU Vienna University of Business and Economics hosted BAYSM 2014 from September 18th to 19th. The guidance of renowned plenary lecturers and senior discussants is a critical part of the meeting and this volume, which follows publication of contributions from BAYSM 2013. The meeting's scientific program reflected the variety of fields in which Bayesian methods are currently employed or could be introduced in the future. Three brilliant keynote lectures by Chris Holmes (University of Oxford), Christian Robert (Université Paris-Dauphine), and Mike West (Duke University), were complemented by 24 plenary talks covering the major topics Dynamic Models, Applications, Bayesian Nonparametrics, Biostatistics, Bayesian Methods in Economics, and Models and Methods, as well as a lively poster session ...
BAYESIAN BICLUSTERING FOR PATIENT STRATIFICATION.
Khakabimamaghani, Sahand; Ester, Martin
2016-01-01
The move from Empirical Medicine towards Personalized Medicine has attracted attention to Stratified Medicine (SM). Some methods are provided in the literature for patient stratification, which is the central task of SM, however, there are still significant open issues. First, it is still unclear if integrating different datatypes will help in detecting disease subtypes more accurately, and, if not, which datatype(s) are most useful for this task. Second, it is not clear how we can compare different methods of patient stratification. Third, as most of the proposed stratification methods are deterministic, there is a need for investigating the potential benefits of applying probabilistic methods. To address these issues, we introduce a novel integrative Bayesian biclustering method, called B2PS, for patient stratification and propose methods for evaluating the results. Our experimental results demonstrate the superiority of B2PS over a popular state-of-the-art method and the benefits of Bayesian approaches. Our results agree with the intuition that transcriptomic data forms a better basis for patient stratification than genomic data.
Trivariate Local Lagrange Interpolation and Macro Elements of Arbitrary Smoothness
Matt, Michael Andreas
2012-01-01
Michael A. Matt constructs two trivariate local Lagrange interpolation methods which yield optimal approximation order and Cr macro-elements based on the Alfeld and the Worsey-Farin split of a tetrahedral partition. The first interpolation method is based on cubic C1 splines over type-4 cube partitions, for which numerical tests are given. The second is the first trivariate Lagrange interpolation method using C2 splines. It is based on arbitrary tetrahedral partitions using splines of degree nine. The author constructs trivariate macro-elements based on the Alfeld split, where each tetrahedron
Energy Technology Data Exchange (ETDEWEB)
Araujo, Carlos Eduardo S. [Universidade Federal de Campina Grande, PB (Brazil). Programa de Recursos Humanos 25 da ANP]. E-mail: carlos@dme.ufcg.edu.br; Silva, Rosana M. da [Universidade Federal de Campina Grande, PB (Brazil). Dept. de Matematica e Estatistica]. E-mail: rosana@dme.ufcg.edu.br
2004-07-01
This work presents an implementation of a synthetic model of a channel found in oil reservoirs. Generating such models is one of the steps in the characterization and simulation of equally probable three-dimensional geological scenarios. The implemented model was obtained by fitting geometric modeling techniques for curves and surfaces to the geological parameters (width, thickness, sinuosity and preferential direction) that define the form to be modeled. The sinuosity parameter is related to the wavelength and the local amplitude of the channel; the preferential-direction parameter indicates the direction of flow and the declivity of the channel. The modeling technique used to represent the surface of the channel is sweeping, which consists of translating a curve along a guide curve. The guide curve, in our implementation, was generated by interpolating points obtained from sampled or simulated values of the sinuosity parameter, using cubic Bezier splines. A semi-ellipse, determined by the width and thickness parameters and representing a transversal section of the channel, is the curve swept along the guide curve, generating the channel surface. (author)
A SAS IML Macro for Loglinear Smoothing
Moses, Tim; von Davier, Alina
2011-01-01
Polynomial loglinear models for one-, two-, and higher-way contingency tables have important applications to measurement and assessment. They are essentially regarded as a smoothing technique, which is commonly referred to as loglinear smoothing. A SAS IML (SAS Institute, 2002a) macro was created to implement loglinear smoothing according to…
Radiation dose reduction in computed tomography perfusion using spatial-temporal Bayesian methods
Fang, Ruogu; Raj, Ashish; Chen, Tsuhan; Sanelli, Pina C.
2012-03-01
In current computed tomography (CT) examinations, the associated X-ray radiation dose is of significant concern to patients and operators, especially in CT perfusion (CTP) imaging, which has a higher radiation dose due to its cine scanning technique. A simple and cost-effective means to perform the examinations is to lower the milliampere-seconds (mAs) parameter as low as reasonably achievable in data acquisition. However, lowering the mAs parameter will unavoidably increase data noise and degrade CT perfusion maps greatly if no adequate noise control is applied during image reconstruction. To capture the essential dynamics of CT perfusion, a simple spatial-temporal Bayesian method that uses a piecewise parametric model of the residual function is used, and then the model parameters are estimated from a Bayesian formulation of prior smoothness constraints on perfusion parameters. From the fitted residual function, reliable CTP parameter maps are obtained from low dose CT data. The merit of this scheme lies in the combination of an analytical piecewise residual function with a Bayesian framework using a simpler spatial prior constraint for the CT perfusion application. On a dataset of 22 patients, this dynamic spatial-temporal Bayesian model yielded an increase in signal-to-noise ratio (SNR) of 78% and a decrease in mean-square error (MSE) of 40% at a low radiation dose of 43 mA.
Nieto, Paulino José García; Antón, Juan Carlos Álvarez; Vilán, José Antonio Vilán; García-Gonzalo, Esperanza
2014-10-01
The aim of this research work is to build a regression model of the particulate matter up to 10 micrometers in size (PM10) by using the multivariate adaptive regression splines (MARS) technique in the Oviedo urban area (Northern Spain) at local scale. This research work explores the use of a nonparametric regression algorithm known as multivariate adaptive regression splines (MARS) which has the ability to approximate the relationship between the inputs and outputs, and express the relationship mathematically. In this sense, hazardous air pollutants or toxic air contaminants refer to any substance that may cause or contribute to an increase in mortality or serious illness, or that may pose a present or potential hazard to human health. To accomplish the objective of this study, the experimental data for nitrogen oxides (NOx), carbon monoxide (CO), sulfur dioxide (SO2), ozone (O3) and dust (PM10) were collected over 3 years (2006-2008) and used to create a highly nonlinear model of the PM10 in the Oviedo urban nucleus (Northern Spain) based on the MARS technique. One main objective of this model is to obtain a preliminary estimate of the dependence of the PM10 pollutant on the other main pollutants in the Oviedo urban area at local scale. A second aim is to determine the factors with the greatest bearing on air quality with a view to proposing health and lifestyle improvements. The United States National Ambient Air Quality Standards (NAAQS) establish the limit values of the main pollutants in the atmosphere in order to ensure the health of healthy people. Firstly, this MARS regression model captures the main perception of statistical learning theory in order to obtain a good prediction of the dependence among the main pollutants in the Oviedo urban area. Secondly, the main advantages of MARS are its capacity to produce simple, easy-to-interpret models, its ability to estimate the contributions of the input variables, and its computational efficiency. Finally, on the basis of
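Real MARS selects hinge-function knots adaptively through forward selection and backward pruning; a hand-rolled one-variable sketch with fixed, invented knots still shows the kind of basis it builds:

```python
import numpy as np

# Hinge-function regression in the spirit of MARS. Real MARS chooses knots
# adaptively (forward pass + backward pruning); here the knots are fixed by
# hand purely for illustration.
def hinge_basis(x, knots):
    cols = [np.ones_like(x)]                     # intercept
    for c in knots:
        cols.append(np.maximum(0.0, x - c))      # right hinge max(0, x - c)
        cols.append(np.maximum(0.0, c - x))      # left hinge  max(0, c - x)
    return np.column_stack(cols)

rng = np.random.default_rng(0)
x = np.linspace(-2.0, 2.0, 200)
y = np.abs(x) + 0.05 * rng.normal(size=x.size)   # kinked target + noise
B = hinge_basis(x, knots=[-1.0, 0.0, 1.0])
coef, *_ = np.linalg.lstsq(B, y, rcond=None)     # least-squares coefficients
fit = B @ coef
```

Because |x| = max(0, x) + max(0, -x), the basis with a knot at 0 can represent the kink exactly, so the fit error is driven only by the noise.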
International Nuclear Information System (INIS)
We report a fully ab initio implementation of exterior complex scaling in B-splines to evaluate total, singly and triply differential cross sections in double photoionization problems. Results for He and H2 double photoionization are presented and compared with experiment
Comparison of some nonlinear smoothing methods
International Nuclear Information System (INIS)
Due to the poor quality of many nuclear medicine images, computer-driven smoothing procedures are frequently employed to enhance the diagnostic utility of these images. While linear methods were first tried, it was discovered that nonlinear techniques produced superior smoothing with little detail suppression. We have compared four methods: Gaussian smoothing (linear), two-dimensional least-squares smoothing (linear), two-dimensional least-squares bounding (nonlinear), and two-dimensional median smoothing (nonlinear). The two dimensional least-squares procedures have yielded the most satisfactorily enhanced images, with the median smoothers providing quite good images, even in the presence of widely aberrant points
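The linear-versus-nonlinear contrast is easy to reproduce on synthetic data; the toy image below (a bright square standing in for an organ, plus widely aberrant points) and its sizes are assumptions for illustration:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, median_filter

# Linear (Gaussian) vs nonlinear (median) smoothing on a toy image with
# widely aberrant points, illustrating the comparison in the abstract.
rng = np.random.default_rng(0)
img = np.zeros((64, 64))
img[20:40, 20:40] = 100.0                        # bright "organ"
noisy = img.copy()
spots = rng.choice(img.size, size=50, replace=False)
noisy.ravel()[spots] += 500.0                    # aberrant points

lin = gaussian_filter(noisy, sigma=2.0)          # smears the outliers
nonlin = median_filter(noisy, size=5)            # rejects isolated outliers
mse_lin = float(np.mean((lin - img) ** 2))
mse_nonlin = float(np.mean((nonlin - img) ** 2))
```

The median filter suppresses the isolated spikes while leaving straight edges largely intact, whereas the Gaussian filter spreads both the spikes and the edges, so its error is larger on this kind of image.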
Cheng, J; 10.1613/jair.764
2011-01-01
Stochastic sampling algorithms, while an attractive alternative to exact algorithms in very large Bayesian network models, have been observed to perform poorly in evidential reasoning with extremely unlikely evidence. To address this problem, we propose an adaptive importance sampling algorithm, AIS-BN, that shows promising convergence rates even under extreme conditions and seems to outperform the existing sampling algorithms consistently. Three sources of this performance improvement are (1) two heuristics for initialization of the importance function that are based on the theoretical properties of importance sampling in finite-dimensional integrals and the structural advantages of Bayesian networks, (2) a smooth learning method for the importance function, and (3) a dynamic weighting function for combining samples from different stages of the algorithm. We tested the performance of the AIS-BN algorithm along with two state-of-the-art, general-purpose sampling algorithms, likelihood weighting (Fung and Chang...
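AIS-BN itself is not reproduced here, but the baseline it is compared against, likelihood weighting, fits in a few lines on a toy two-node network; the network and its probabilities are invented for illustration:

```python
import random
random.seed(1)

# Likelihood weighting on a toy network Rain -> WetGrass with evidence
# WetGrass = True. CPTs are invented for illustration:
#   P(Rain) = 0.2,  P(Wet | Rain) = 0.9,  P(Wet | ~Rain) = 0.2.
def likelihood_weighting(nsamples=200000):
    num = den = 0.0
    for _ in range(nsamples):
        rain = random.random() < 0.2           # sample the non-evidence node
        w = 0.9 if rain else 0.2               # weight = P(evidence | sample)
        num += w * rain
        den += w
    return num / den                           # estimate of P(Rain | Wet)

est = likelihood_weighting()
exact = 0.2 * 0.9 / (0.2 * 0.9 + 0.8 * 0.2)    # Bayes' rule, ≈ 0.529
```

With extremely unlikely evidence most samples would receive near-zero weight, which is exactly the failure mode that adaptive importance sampling such as AIS-BN is designed to fix.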
Bayesian approaches to spatial inference: Modelling and computational challenges and solutions
Moores, Matthew; Mengersen, Kerrie
2014-12-01
We discuss a range of Bayesian modelling approaches for spatial data and investigate some of the associated computational challenges. This paper commences with a brief review of Bayesian mixture models and Markov random fields, with enabling computational algorithms including Markov chain Monte Carlo (MCMC) and integrated nested Laplace approximation (INLA). Following this, we focus on the Potts model as a canonical approach, and discuss the challenge of estimating the inverse temperature parameter that controls the degree of spatial smoothing. We compare three approaches to addressing the doubly intractable nature of the likelihood, namely pseudo-likelihood, path sampling and the exchange algorithm. These techniques are applied to satellite data used to analyse water quality in the Great Barrier Reef.
A Bayesian Reflection on Surfaces
Directory of Open Access Journals (Sweden)
David R. Wolf
1999-10-01
Abstract: The topic of this paper is a novel Bayesian continuous-basis field representation and inference framework. Within this paper several problems are solved: the maximally informative inference of continuous-basis fields, that is, where the basis for the field is itself a continuous object and not representable in a finite manner; the tradeoff between accuracy of representation in terms of information learned, and memory or storage capacity in bits; the approximation of probability distributions so that a maximal amount of information about the object being inferred is preserved; and an information-theoretic justification for multigrid methodology. The maximally informative field inference framework is described in full generality and denoted the Generalized Kalman Filter. The Generalized Kalman Filter allows the update of field knowledge from previous knowledge at any scale, and new data, to new knowledge at any other scale. An application example instance, the inference of continuous surfaces from measurements (for example, camera image data), is presented.
Quantum Bayesianism at the Perimeter
Fuchs, Christopher A
2010-01-01
The author summarizes the Quantum Bayesian viewpoint of quantum mechanics, developed originally by C. M. Caves, R. Schack, and himself. It is a view crucially dependent upon the tools of quantum information theory. Work at the Perimeter Institute for Theoretical Physics continues the development and is focused on the hard technical problem of finding a good representation of quantum mechanics purely in terms of probabilities, without amplitudes or Hilbert-space operators. The best candidate representation involves a mysterious entity called a symmetric informationally complete quantum measurement. Contemplation of it gives a way of thinking of the Born Rule as an addition to the rules of probability theory, applicable when one gambles on the consequences of interactions with physical systems. The article ends by outlining some directions for future work.
Hedging Strategies for Bayesian Optimization
Brochu, Eric; de Freitas, Nando
2010-01-01
Bayesian optimization with Gaussian processes has become an increasingly popular tool in the machine learning community. It is efficient and can be used when very little is known about the objective function, making it popular in expensive black-box optimization scenarios. It is able to do this by sampling the objective using an acquisition function which incorporates the model's estimate of the objective and the uncertainty at any given point. However, there are several different parameterized acquisition functions in the literature, and it is often unclear which one to use. Instead of using a single acquisition function, we adopt a portfolio of acquisition functions governed by an online multi-armed bandit strategy. We describe the method, which we call GP-Hedge, and show that this method almost always outperforms the best individual acquisition function.
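GP-Hedge's bandit layer can be caricatured without the Gaussian process: a softmax over observed average rewards choosing among a portfolio of strategies. The three strategies and their reward values below are invented stand-ins for acquisition functions:

```python
import math, random
random.seed(0)

# Softmax ("hedge") portfolio selection: a simplified stand-in for how
# GP-Hedge weights a portfolio of acquisition functions by observed reward.
def select(avg_reward, eta=3.0):
    w = [math.exp(eta * a) for a in avg_reward]
    r = random.uniform(0.0, sum(w))
    for i, wi in enumerate(w):
        r -= wi
        if r <= 0.0:
            return i
    return len(w) - 1

means = [0.8, 0.5, 0.2]            # hypothetical mean reward of each strategy
totals = [0.0, 0.0, 0.0]
counts = [0, 0, 0]
for _ in range(2000):
    avg = [t / c if c else 0.0 for t, c in zip(totals, counts)]
    i = select(avg)
    counts[i] += 1
    totals[i] += means[i] + random.gauss(0.0, 0.1)   # noisy observed reward

best = counts.index(max(counts))   # the highest-reward strategy dominates
```

The softmax never fully abandons the weaker strategies, mirroring the bandit idea of hedging across acquisition functions instead of committing to one.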
Bayesian Networks and Influence Diagrams
DEFF Research Database (Denmark)
Kjærulff, Uffe Bro; Madsen, Anders Læsø
Bayesian Networks and Influence Diagrams: A Guide to Construction and Analysis provides a comprehensive guide for practitioners who wish to understand, construct, and analyze intelligent systems for decision support based on probabilistic networks. Intended primarily for practitioners, this book does not require sophisticated mathematical skills or deep understanding of the underlying theory and methods, nor does it discuss alternative technologies for reasoning under uncertainty. The theory and methods presented are illustrated through more than 140 examples, and exercises are included for the reader to check his/her level of understanding. The techniques and methods presented for knowledge elicitation, model construction and verification, modeling techniques and tricks, learning models from data, and analyses of models have all been developed and refined ... troubleshooting, and data mining under uncertainty.
Bayesian Networks and Influence Diagrams
DEFF Research Database (Denmark)
Kjærulff, Uffe Bro; Madsen, Anders Læsø
Bayesian Networks and Influence Diagrams: A Guide to Construction and Analysis, Second Edition, provides a comprehensive guide for practitioners who wish to understand, construct, and analyze intelligent systems for decision support based on probabilistic networks. This new edition contains six new sections, in addition to fully-updated examples, tables, figures, and a revised appendix. Intended primarily for practitioners, this book does not require sophisticated mathematical skills or deep understanding of the underlying theory and methods, nor does it discuss alternative technologies for reasoning under uncertainty. The theory and methods presented are illustrated through more than 140 examples, and exercises are included for the reader to check his or her level of understanding. The techniques and methods presented on model construction and verification, modeling techniques and tricks, learning...
State Information in Bayesian Games
Cuff, Paul
2009-01-01
Two-player zero-sum repeated games are well understood. Computing the value of such a game is straightforward. Additionally, if the payoffs are dependent on a random state of the game known to one, both, or neither of the players, the resulting value of the game has been analyzed under the framework of Bayesian games. This investigation considers the optimal performance in a game when a helper is transmitting state information to one of the players. Encoding information for an adversarial setting (game) requires a different result than rate-distortion theory provides. Game theory has accentuated the importance of randomization (mixed strategy), which does not find a significant role in most communication modems and source coding codecs. Higher rates of communication, used in the right way, allow the message to include the necessary random component useful in games.
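Computing the value of a zero-sum matrix game is indeed straightforward as a linear program; a sketch using the classic shift-and-normalize formulation (rock-paper-scissors is just a standard example, not data from the paper):

```python
import numpy as np
from scipy.optimize import linprog

# Value of a two-player zero-sum matrix game via the classic LP formulation:
# shift payoffs positive, then minimize sum(x) s.t. A^T x >= 1, x >= 0;
# the game value is 1/sum(x), with the shift undone afterwards.
def game_value(payoffs):
    A = np.asarray(payoffs, float)
    shift = 1.0 - A.min()                  # make all entries >= 1
    A = A + shift
    m, n = A.shape
    res = linprog(c=np.ones(m), A_ub=-A.T, b_ub=-np.ones(n),
                  bounds=[(0.0, None)] * m)
    return 1.0 / res.x.sum() - shift

value = game_value([[0, 1, -1], [-1, 0, 1], [1, -1, 0]])  # rock-paper-scissors
```

The optimal mixed strategy falls out for free: rescaling `res.x` by the value of the shifted game gives the row player's probability vector, which for rock-paper-scissors is uniform.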
Multiview Bayesian Correlated Component Analysis
DEFF Research Database (Denmark)
Kamronn, Simon Due; Poulsen, Andreas Trier; Hansen, Lars Kai
2015-01-01
Correlated component analysis as proposed by Dmochowski, Sajda, Dias, and Parra (2012) is a tool for investigating brain process similarity in the responses to multiple views of a given stimulus. Correlated components are identified under the assumption that the involved spatial networks are identical. Here we propose a hierarchical probabilistic model that can infer the level of universality in such multiview data, from completely unrelated representations, corresponding to canonical correlation analysis, to identical representations as in correlated component analysis. This new model, which we denote Bayesian correlated component analysis, evaluates favorably against three relevant algorithms in simulated data. A well-established benchmark EEG data set is used to further validate the new model and infer the variability of spatial representations across multiple subjects.
Elvira, Clément; Dobigeon, Nicolas
2015-01-01
Sparse representations have proven their efficiency in solving a wide class of inverse problems encountered in signal and image processing. Conversely, enforcing the information to be spread uniformly over representation coefficients exhibits relevant properties in various applications such as digital communications. Anti-sparse regularization can be naturally expressed through an $\\ell_{\\infty}$-norm penalty. This paper derives a probabilistic formulation of such representations. A new probability distribution, referred to as the democratic prior, is first introduced. Its main properties as well as three random variate generators for this distribution are derived. Then this probability distribution is used as a prior to promote anti-sparsity in a Gaussian linear inverse problem, yielding a fully Bayesian formulation of anti-sparse coding. Two Markov chain Monte Carlo (MCMC) algorithms are proposed to generate samples according to the posterior distribution. The first one is a standard Gibbs sampler. The seco...
Nonparametric Bayesian inference in biostatistics
Müller, Peter
2015-01-01
As chapters in this book demonstrate, BNP has important uses in clinical sciences and inference for issues like unknown partitions in genomics. Nonparametric Bayesian approaches (BNP) play an ever expanding role in biostatistical inference from use in proteomics to clinical trials. Many research problems involve an abundance of data and require flexible and complex probability models beyond the traditional parametric approaches. As this book's expert contributors show, BNP approaches can be the answer. Survival Analysis, in particular survival regression, has traditionally used BNP, but BNP's potential is now very broad. This applies to important tasks like arrangement of patients into clinically meaningful subpopulations and segmenting the genome into functionally distinct regions. This book is designed to both review and introduce application areas for BNP. While existing books provide theoretical foundations, this book connects theory to practice through engaging examples and research questions. Chapters c...
Bayesian networks in educational assessment
Almond, Russell G; Steinberg, Linda S; Yan, Duanli; Williamson, David M
2015-01-01
Bayesian inference networks, a synthesis of statistics and expert systems, have advanced reasoning under uncertainty in medicine, business, and social sciences. This innovative volume is the first comprehensive treatment exploring how they can be applied to design and analyze innovative educational assessments. Part I develops Bayes nets’ foundations in assessment, statistics, and graph theory, and works through the real-time updating algorithm. Part II addresses parametric forms for use with assessment, model-checking techniques, and estimation with the EM algorithm and Markov chain Monte Carlo (MCMC). A unique feature is the volume’s grounding in Evidence-Centered Design (ECD) framework for assessment design. This “design forward” approach enables designers to take full advantage of Bayes nets’ modularity and ability to model complex evidentiary relationships that arise from performance in interactive, technology-rich assessments such as simulations. Part III describes ECD, situates Bayes nets as ...
Income and Consumption Smoothing among US States
DEFF Research Database (Denmark)
Sørensen, Bent; Yosha, Oved
We quantify the amount of cross-sectional income and consumption smoothing achieved within subgroups of states, such as regions or clubs, e.g. the club of rich states. We find that there is much income smoothing between as well as within regions. By contrast, consumption smoothing occurs mainly ... states. The fraction of a shock to gross state products smoothed by the federal tax-transfer system is the same for various regions and other clubs of states. We calculate the scope for consumption smoothing within various regions and clubs, finding that most gains from risk sharing can be achieved within US regions. Since a considerable fraction of shocks to gross state product are smoothed within regions, we conclude that existing markets achieve a substantial fraction of the potential welfare gains from interstate income and consumption smoothing. Nonetheless, non-negligible welfare gains may...
Institute of Scientific and Technical Information of China (English)
侯朝胜; 李婧; 龙泉
2003-01-01
With cubic B-splines taken as trial functions, the large deflection of a circular plate with arbitrarily variable thickness, as well as the buckling load, has been calculated by the method of point collocation. The support can be elastic. The imposed loads can be polynomially distributed loads, uniformly distributed radial forces, or moments along the edge, or their combinations. Convergent solutions can still be obtained by this method under loads greatly exceeding normal values. Under the action of uniformly distributed loads, linear solutions for circular plates with linearly or quadratically variable thickness are compared with those obtained by the parameter method. Buckling of a circular plate of uniform thickness beyond the critical thrust is compared with results obtained by the power-series method.
Central-force decomposition of spline-based modified embedded atom method potential
Winczewski, S.; Dziedzic, J.; Rybicki, J.
2016-10-01
Central-force decompositions are fundamental to the calculation of stress fields in atomic systems by means of Hardy stress. We derive expressions for a central-force decomposition of the spline-based modified embedded atom method (s-MEAM) potential. The expressions are subsequently simplified to a form that can be readily used in molecular-dynamics simulations, enabling the calculation of the spatial distribution of stress in systems treated with this novel class of empirical potentials. We briefly discuss the properties of the obtained decomposition and highlight further computational techniques that can be expected to benefit from the results of this work. To demonstrate the practicability of the derived expressions, we apply them to calculate stress fields due to an edge dislocation in bcc Mo, comparing their predictions to those of linear elasticity theory.
History matching by spline approximation and regularization in single-phase areal reservoirs
Lee, T. Y.; Kravaris, C.; Seinfeld, J.
1986-01-01
An automatic history matching algorithm is developed based on bi-cubic spline approximations of permeability and porosity distributions and on the theory of regularization to estimate permeability or porosity in a single-phase, two-dimensional areal reservoir from well pressure data. The regularization feature of the algorithm is used to convert the ill-posed history matching problem into a well-posed problem. The algorithm employs the conjugate gradient method as its core minimization method. A number of numerical experiments are carried out to evaluate the performance of the algorithm. Comparisons with conventional (non-regularized) automatic history matching algorithms indicate the superiority of the new algorithm with respect to the parameter estimates obtained. A quasioptimal regularization parameter is determined without requiring a priori information on the statistical properties of the observations.
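The reservoir simulator and spline parametrization are out of scope here, but the regularization idea, converting an ill-posed fit into a well-posed one, can be sketched on a generic ill-conditioned linear inverse problem; the matrix and noise level below are invented:

```python
import numpy as np

# Tikhonov regularization: minimize ||G m - d||^2 + lam ||m||^2, which turns
# an ill-conditioned least-squares problem into a well-posed one.
def tikhonov(G, d, lam):
    n = G.shape[1]
    return np.linalg.solve(G.T @ G + lam * np.eye(n), G.T @ d)

rng = np.random.default_rng(0)
G = np.vander(np.linspace(0.0, 1.0, 20), 12, increasing=True)  # ill-conditioned
m_true = rng.normal(size=12)
d = G @ m_true + 0.01 * rng.normal(size=20)                    # noisy "pressure data"

m_reg = tikhonov(G, d, lam=1e-6)
residual = float(np.linalg.norm(G @ m_reg - d))
```

The penalty damps the noise-amplifying small singular values of G; choosing `lam` well is exactly the quasi-optimal regularization-parameter question the abstract addresses.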
A Note on Penalized Regression Spline Estimation in the Secondary Analysis of Case-Control Data
Gazioglu, Suzan
2013-05-25
Primary analysis of case-control studies focuses on the relationship between disease (D) and a set of covariates of interest (Y, X). A secondary application of the case-control study, often invoked in modern genetic epidemiologic association studies, is to investigate the interrelationship between the covariates themselves. The task is complicated due to the case-control sampling, and to avoid the biased sampling that arises from the design, it is typical to use the control data only. In this paper, we develop penalized regression spline methodology that uses all the data, and improves precision of estimation compared to using only the controls. A simulation study and an empirical example are used to illustrate the methodology.
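The case-control estimator itself is not reproduced here; the penalized-spline building block can be illustrated with SciPy's smoothing spline, where the smoothing factor `s` plays the role of the penalty budget (the data are simulated, and the `s = n * sigma**2` rule of thumb is an assumption):

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Penalized/smoothing spline fit on simulated data. The smoothing factor s is
# set near n * sigma^2, a common rule of thumb for UnivariateSpline.
rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 100)
sigma = 0.1
y = np.sin(2 * np.pi * x) + sigma * rng.normal(size=x.size)

spl = UnivariateSpline(x, y, s=x.size * sigma**2)   # penalized fit
max_err = float(np.max(np.abs(spl(x) - np.sin(2 * np.pi * x))))
```

Larger `s` gives a smoother, more biased curve; smaller `s` chases the noise. The paper's contribution lies in how the penalized fit is weighted so that both cases and controls can be used without design-induced bias.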
ESQUEMA FLUX-SPLINE APLICADO A PROBLEMAS DIFUSIVOS TRIDIMENSIONAIS EM REGIME PERMANENTE
Directory of Open Access Journals (Sweden)
Paulo César Oliveira
1999-12-01
This work presents a discretization scheme, called Flux-Spline, that is more efficient than the traditional central difference scheme for numerically simulating three-dimensional problems governed by diffusion. The efficiency of the proposed scheme was evaluated by solving test problems with known analytical solutions. The scheme was found to have adequate accuracy and is therefore a recommended option for the solution of three-dimensional diffusive problems.
One Fairing Method of Cubic B-spline Curves Based on Weighted Progressive Iterative Approximation
Institute of Scientific and Technical Information of China (English)
ZHANG Li; YANG Yan; LI Yuan-yuan; TAN Jie-qing
2014-01-01
A new method for fairing planar cubic B-spline curves is introduced in this paper. The method is based on weighted progressive iterative approximation (WPIA for short) and consists of the following steps: finding the bad point that needs fairing, deleting it, re-inserting a new data point to preserve the structure of the curve, and applying the WPIA method to the new set of data points to obtain the faired curve. The new set of data points consists of the remaining original data points together with the newly inserted point. The method can be used for shape design and data processing. Numerical examples are provided to demonstrate its effectiveness.
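The WPIA update at the heart of the abstract above can be sketched for a clamped uniform cubic B-spline: the control points P receive the weighted residual correction P ← P + ω(Q − BP) until the curve passes through the data points Q. A minimal sketch assuming a uniform parameterization; the collocation matrix B uses the standard uniform cubic weights (1/6, 2/3, 1/6), and `wpia_fit` and the choice ω = 1.2 are illustrative.

```python
import numpy as np

def wpia_fit(Q, omega=1.0, iters=200):
    """Weighted progressive iterative approximation for a clamped uniform
    cubic B-spline. B evaluates the curve at the parameter of each data
    point; the control points are nudged by the weighted residual."""
    n = len(Q)
    B = np.zeros((n, n))
    B[0, 0] = B[-1, -1] = 1.0        # clamped ends interpolate the data
    for i in range(1, n - 1):        # interior: C(t_i) = (P_{i-1} + 4 P_i + P_{i+1}) / 6
        B[i, i - 1:i + 2] = [1 / 6, 2 / 3, 1 / 6]
    P = Q.copy()
    for _ in range(iters):
        P += omega * (Q - B @ P)     # weighted PIA step
    return P, B

# Toy planar data; at convergence the curve interpolates the data points.
Q = np.column_stack([np.linspace(0, 1, 9), np.sin(np.linspace(0, np.pi, 9))])
P, B = wpia_fit(Q, omega=1.2)
```

Because the eigenvalues of B lie in (0, 1], any weight ω in (0, 2) gives a convergent iteration, and a weight above 1 accelerates it, which is the point of the "weighted" variant.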
Ionospheric scintillation modeling for high- and mid-latitude using B-spline technique
Priyadarshi, S.
2015-09-01
Ionospheric scintillation is a significant component of space-weather studies and provides an estimate of the level of perturbation in satellite radio signals caused by small-scale ionospheric irregularities. B-spline functions are applied to ground-based GPS data collected during 2007-2012 to model high- and mid-latitude ionospheric scintillation. The proposed model covers Hornsund, Svalbard and Warsaw, Poland; the input data were recorded by GSV 4004b receivers. For validation, the model results are compared with observations and with other existing models. The physical behavior of ionospheric scintillation during different seasons and geomagnetic conditions is discussed. The model is found to be in good agreement with ionospheric scintillation theory and with the accepted scintillation mechanisms for high and mid latitudes.
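The B-spline technique mentioned above can be illustrated with an ordinary least-squares fit in a clamped cubic B-spline basis. This is a generic sketch, not the paper's scintillation model; the synthetic diurnal S4 curve, the knot placement, and the name `bspline_basis` are assumptions.

```python
import numpy as np

def bspline_basis(x, knots, degree):
    """Cox-de Boor recursion: design matrix of B-spline basis values,
    shape (len(x), len(knots) - degree - 1)."""
    x = np.asarray(x, float)
    B = np.zeros((len(x), len(knots) - 1))
    for j in range(len(knots) - 1):
        B[:, j] = (knots[j] <= x) & (x < knots[j + 1])
    # Close the basis at the right endpoint of the clamped knot vector.
    B[x == knots[-1], len(knots) - degree - 2] = 1.0
    for d in range(1, degree + 1):
        Bn = np.zeros((len(x), len(knots) - d - 1))
        for j in range(len(knots) - d - 1):
            left = knots[j + d] - knots[j]
            right = knots[j + d + 1] - knots[j + 1]
            if left > 0:
                Bn[:, j] += (x - knots[j]) / left * B[:, j]
            if right > 0:
                Bn[:, j] += (knots[j + d + 1] - x) / right * B[:, j + 1]
        B = Bn
    return B

# Synthetic diurnal S4 index fitted by least squares in a clamped cubic basis.
rng = np.random.default_rng(2)
t = np.linspace(0.0, 24.0, 300)                      # local time, hours
s4 = 0.3 + 0.2 * np.sin(2 * np.pi * t / 24.0) + 0.02 * rng.standard_normal(300)
knots = np.concatenate([[0.0] * 4, np.linspace(0, 24, 9)[1:-1], [24.0] * 4])
X = bspline_basis(t, knots, degree=3)
coef, *_ = np.linalg.lstsq(X, s4, rcond=None)
fit = X @ coef
```

The clamped (repeated) end knots make the fitted curve free at the boundaries, and the basis sums to one everywhere on the fitting interval.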
Vibration analysis of composite pipes using the finite element method with B-spline wavelets
Energy Technology Data Exchange (ETDEWEB)
Oke, Wasiu A.; Khulief, Yehia A. [King Fahd University of Petroleum and Minerals, Dhahran (Saudi Arabia)
2016-02-15
A finite element formulation using the B-spline wavelets on the interval is developed for modeling the free vibrations of composite pipes. The composite FRP pipe element is treated as a beam element. The finite pipe element is constructed in the wavelet space and then transformed to the physical space. Detailed expressions of the mass and stiffness matrices are derived for the composite pipe using the B-spline scaling and wavelet functions. Both Euler-Bernoulli and Timoshenko beam theories are considered. The generalized eigenvalue problem is formulated and solved to obtain the modal characteristics of the composite pipe. The developed wavelet-based finite element discretization scheme requires significantly fewer elements than the conventional finite element method for modeling composite pipes. Numerical solutions are obtained to demonstrate the accuracy of the developed element, which is verified by comparisons with available results in the literature.
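The generalized eigenvalue problem described above can be sketched with a conventional Hermite-cubic Euler-Bernoulli beam element standing in for the paper's wavelet-based element. This is a textbook finite element sketch, not the authors' formulation; `beam_modes` and its simply supported boundary conditions are illustrative. For a simply supported uniform beam the computed frequencies approach the analytic values (nπ/L)²√(EI/ρA).

```python
import numpy as np
from scipy.linalg import eigh

def beam_modes(EI, rhoA, L, n_el):
    """Euler-Bernoulli beam FE (Hermite cubic elements): assemble the global
    stiffness K and consistent mass M, then solve K phi = omega^2 M phi."""
    le = L / n_el
    k = EI / le**3 * np.array([[12, 6*le, -12, 6*le],
                               [6*le, 4*le**2, -6*le, 2*le**2],
                               [-12, -6*le, 12, -6*le],
                               [6*le, 2*le**2, -6*le, 4*le**2]])
    m = rhoA * le / 420 * np.array([[156, 22*le, 54, -13*le],
                                    [22*le, 4*le**2, 13*le, -3*le**2],
                                    [54, 13*le, 156, -22*le],
                                    [-13*le, -3*le**2, -22*le, 4*le**2]])
    ndof = 2 * (n_el + 1)            # (deflection, rotation) per node
    K = np.zeros((ndof, ndof)); M = np.zeros((ndof, ndof))
    for e in range(n_el):
        idx = slice(2 * e, 2 * e + 4)
        K[idx, idx] += k
        M[idx, idx] += m
    # Simply supported: fix the transverse deflection at both ends.
    keep = [i for i in range(ndof) if i not in (0, ndof - 2)]
    K = K[np.ix_(keep, keep)]; M = M[np.ix_(keep, keep)]
    w2 = eigh(K, M, eigvals_only=True)   # generalized eigenvalue problem
    return np.sqrt(w2[:5])               # lowest natural frequencies (rad/s)

omegas = beam_modes(EI=1.0, rhoA=1.0, L=1.0, n_el=20)
```

The same assembly-then-eigensolve pattern applies whatever basis generates the element matrices; a wavelet basis changes only how k and m are derived.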