Efficient computation of smoothing splines via adaptive basis sampling
Ma, Ping
2015-06-24
© 2015 Biometrika Trust. Smoothing splines provide flexible nonparametric regression estimators. However, the high computational cost of smoothing splines for large datasets has hindered their wide application. In this article, we develop a new method, named adaptive basis sampling, for efficient computation of smoothing splines in super-large samples. Except for the univariate case where the Reinsch algorithm is applicable, a smoothing spline for a regression problem with sample size n can be expressed as a linear combination of n basis functions and its computational complexity is generally O(n^{3}). We achieve a more scalable computation in the multivariate case by evaluating the smoothing spline using a smaller set of basis functions, obtained by an adaptive sampling scheme that uses values of the response variable. Our asymptotic analysis shows that smoothing splines computed via adaptive basis sampling converge to the true function at the same rate as full basis smoothing splines. Using simulation studies and a large-scale deep earth core-mantle boundary imaging study, we show that the proposed method outperforms a sampling method that does not use the values of response variables.
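The idea above of solving the penalized least-squares problem over a reduced, response-adaptive basis can be sketched in a toy form. This is an illustrative sketch, not the paper's algorithm: the stratified knot draw, the radial-cubic basis with an affine part, the ridge-style penalty, and all parameter values are simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
x = rng.uniform(0.0, 1.0, n)
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.3, n)

# Response-adaptive knot selection (illustrative): stratify the points by
# response value and draw a few knots from each slice, so basis functions
# land where the response actually varies.
q = 20                                   # reduced basis size, q << n
slices = np.array_split(np.argsort(y), 5)
idx = np.sort(np.concatenate([rng.choice(s, q // 5, replace=False) for s in slices]))
knots = x[idx]

def basis(t):
    # cubic radial terms |t - knot|^3 plus an affine part
    return np.column_stack([np.ones_like(t), t,
                            np.abs(t[:, None] - knots[None, :]) ** 3])

lam = 1e-4                               # roughness-style ridge penalty
B = basis(x)                             # n x (q + 2) design, instead of n x n
coef = np.linalg.solve(B.T @ B + lam * np.eye(B.shape[1]), B.T @ y)
rmse = np.sqrt(np.mean((B @ coef - y) ** 2))
```

The computational point survives even in this sketch: the linear solve is against a (q+2)×(q+2) system rather than n×n, so cost grows with the sampled basis size, not the sample size.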
An L1 smoothing spline algorithm with cross validation
Bosworth, Ken W.; Lall, Upmanu
1993-08-01
We propose an algorithm for the computation of L1 (LAD) smoothing splines in the spaces W^M(D). We assume one is given data of the form y_i = f(t_i) + ε_i, i = 1, ..., N, with {t_i}_{i=1}^N ⊂ D, where the ε_i are errors with E(ε_i) = 0 and f is assumed to be in W^M. The LAD smoothing spline, for fixed smoothing parameter λ ≥ 0, is defined as the solution, s_λ, of the optimization problem min_g (1/N) ∑_{i=1}^N |y_i − g(t_i)| + λ J_M(g), where J_M(g) is the seminorm consisting of the sum of the squared L2 norms of the Mth partial derivatives of g. Such an LAD smoothing spline, s_λ, would be expected to give robust smoothed estimates of f in situations where the ε_i are from a distribution with heavy tails. The solution to such a problem is a "thin plate spline" of known form. An algorithm for computing s_λ is given which is based on considering a sequence of quadratic programming problems whose structure is guided by the optimality conditions for the above convex minimization problem, and which are solved readily if a good initial point is available. The "data driven" selection of the smoothing parameter is achieved by minimizing a CV(λ) score. The combined LAD-CV smoothing spline algorithm is a continuation scheme in λ ↘ 0 applied to the above SQPs parametrized in λ, with the optimal smoothing parameter taken to be that value of λ at which the CV(λ) score first begins to increase. The feasibility of constructing the LAD-CV smoothing spline is illustrated by an application to a problem in environmental data interpretation.
Matuschek, Hannes; Kliegl, Reinhold; Holschneider, Matthias
2015-01-01
The Smoothing Spline ANOVA (SS-ANOVA) approach requires a specialized construction of basis and penalty terms in order to incorporate prior knowledge about the data to be fitted. Typically, one resorts to the most general approach using tensor product splines. This implies severe constraints on the correlation structure; i.e., the assumption of isotropy of smoothness cannot be incorporated in general. This may increase the variance of the spline fit, especially if only a relatively small set of observations is given. In this article, we propose an alternative method that allows prior knowledge to be incorporated without the need to construct specialized bases and penalties, allowing the researcher to choose the spline basis and penalty according to the prior knowledge of the observations rather than according to the analysis to be done. The two approaches are compared on an artificial example and on analyses of fixation durations during reading.
Spline smoothing of histograms by linear programming
Bennett, J. O.
1972-01-01
An algorithm is presented for obtaining an approximating function to a frequency distribution from a sample of size n. To obtain the approximating function, a histogram is first made from the data. Next, approximations to the graph of the histogram in the Euclidean norm, using central B-splines as basis elements, are obtained by linear programming. The resulting approximating function is nonnegative and has area one.
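A hedged sketch of histogram smoothing by linear programming in the spirit of this abstract, assuming `scipy` is available. Hat functions (linear B-splines) stand in for the central B-splines of the paper, and the maximum deviation from the histogram is minimized subject to nonnegative coefficients and unit area; all grid and sample choices are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
sample = rng.normal(0.0, 1.0, 1000)
heights, edges = np.histogram(sample, bins=30, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

# Hat-function basis on a coarse knot grid (stand-in for central B-splines)
knots = np.linspace(edges[0], edges[-1], 12)
w = knots[1] - knots[0]
Phi = np.maximum(0.0, 1.0 - np.abs(centers[:, None] - knots[None, :]) / w)

# LP: minimize the max deviation e subject to |Phi c - h| <= e, c >= 0,
# and unit area (interior hats integrate to w, the two end hats to w/2).
m, k = Phi.shape
area = np.full(k, w); area[0] = area[-1] = w / 2
c_obj = np.zeros(k + 1); c_obj[-1] = 1.0          # minimize e
A_ub = np.block([[Phi, -np.ones((m, 1))],
                 [-Phi, -np.ones((m, 1))]])
b_ub = np.concatenate([heights, -heights])
A_eq = np.append(area, 0.0)[None, :]
res = linprog(c_obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
              bounds=[(0, None)] * (k + 1))
density = Phi @ res.x[:k]                          # nonnegative by construction
```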
Quintic spline smooth semi-supervised support vector classification machine
Institute of Scientific and Technical Information of China (English)
Xiaodan Zhang; Jinggai Ma; Aihua Li; Ang Li
2015-01-01
A semi-supervised support vector machine is a relatively new learning method using both labeled and unlabeled data in classification. Since the objective function of the model for an unconstrained semi-supervised support vector machine is not smooth, many fast optimization algorithms cannot be applied to solve the model. In order to overcome the difficulty of dealing with non-smooth objective functions, new methods that can solve the semi-supervised support vector machine with the desired classification accuracy are in great demand. A quintic spline function that is three times differentiable at the origin is constructed by a general three-moment method and can be used to approximate the symmetric hinge loss function. The approximation accuracy of the quintic spline function is estimated. Moreover, a quintic spline smooth semi-supervised support vector machine is obtained, and the convergence accuracy of the smooth model to the non-smooth one is analyzed. Three experiments are performed to test the efficiency of the model. The experimental results show that the new model outperforms other smooth models in terms of classification performance. Furthermore, the new model is not sensitive to an increasing number of labeled samples, which means that the new model is more efficient.
B-splines smoothed rejection sampling method and its applications in quasi-Monte Carlo integration
Institute of Scientific and Technical Information of China (English)
雷桂媛
2002-01-01
The rejection sampling method is one of the most popular methods used in Monte Carlo methods. It turns out that the standard rejection method is closely related to the problem of quasi-Monte Carlo integration of characteristic functions, whose accuracy may be lost due to the discontinuity of the characteristic functions. We propose a B-spline smoothed rejection sampling method, which smooths the characteristic function by a B-spline smoothing technique without changing the integral quantity. Numerical experiments show that a convergence rate of nearly O(N^{-1}) is regained by using the B-spline smoothed rejection method in importance sampling.
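A toy illustration of the idea, under stated simplifications: a piecewise-linear ramp (rather than an actual B-spline) replaces the discontinuous acceptance indicator while preserving its integral in u, and a van der Corput/Halton pair supplies the quasi-random points. The target integrand, envelope, and smoothing width are all made up for the demo.

```python
import numpy as np

def vdc(n, base=2):
    # van der Corput low-discrepancy sequence in [0, 1)
    seq = np.zeros(n)
    for i in range(n):
        f, x, k = 1.0, 0.0, i + 1
        while k > 0:
            f /= base
            x += f * (k % base)
            k //= base
        seq[i] = x
    return seq

# Target: integral of f over [0, 1] via rejection against a uniform envelope
f = lambda x: 1.5 * x ** 2 + 0.25   # exact integral = 0.75
M = 1.75                            # envelope height, f(x) <= M on [0, 1]

n = 4096
x, u = vdc(n, 2), vdc(n, 3)         # 2-d Halton-style point set
a = f(x) / M                        # acceptance probability

hard = (u < a).astype(float)        # discontinuous acceptance indicator
delta = 0.05                        # smoothing half-width
# Linear ramp from 1 at u = a - delta to 0 at u = a + delta: same integral in u
smooth = np.clip((a + delta - u) / (2 * delta), 0.0, 1.0)

est_hard = M * hard.mean()
est_smooth = M * smooth.mean()
```

Both estimators target the same integral; the smoothed weight removes the discontinuity that degrades quasi-Monte Carlo accuracy.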
Smoothing spline ANOVA frailty model for recurrent event data.
Du, Pang; Jiang, Yihua; Wang, Yuedong
2011-12-01
Gap time hazard estimation is of particular interest in recurrent event data. This article proposes a fully nonparametric approach for estimating the gap time hazard. Smoothing spline analysis of variance (ANOVA) decompositions are used to model the log gap time hazard as a joint function of gap time and covariates, and general frailty is introduced to account for between-subject heterogeneity and within-subject correlation. We estimate the nonparametric gap time hazard function and the parameters of the frailty distribution using a combination of the Newton-Raphson procedure, the stochastic approximation algorithm (SAA), and the Markov chain Monte Carlo (MCMC) method. The convergence of the algorithm is guaranteed by decreasing the step size of the parameter update and/or increasing the MCMC sample size along iterations. A model selection procedure is also developed to identify negligible components in a functional ANOVA decomposition of the log gap time hazard. We evaluate the proposed methods with simulation studies and illustrate their use through the analysis of bladder tumor data.
Age-period-cohort models using smoothing splines: a generalized additive model approach.
Jiang, Bei; Carriere, Keumhee C
2014-02-20
Age-period-cohort (APC) models are used to analyze temporal trends in disease or mortality rates, dealing with linear dependency among associated effects of age, period, and cohort. However, the nature of sparseness in such data has severely limited the use of APC models. To deal with these practical limitations and issues, we advocate cubic smoothing splines. We show that the methods of estimable functions proposed in the framework of generalized linear models can still be considered to solve the non-identifiability problem when the model fitting is within the framework of generalized additive models with cubic smoothing splines. Through simulation studies, we evaluate the performance of the cubic smoothing splines in terms of the mean squared errors of estimable functions. Our results support the use of cubic smoothing splines for APC modeling with sparse but unaggregated data from a Lexis diagram.
MortalitySmooth: An R Package for Smoothing Poisson Counts with P-Splines
Directory of Open Access Journals (Sweden)
Carlo G. Camarda
2012-07-01
The MortalitySmooth package provides a framework for smoothing count data in both one- and two-dimensional settings. Although general in its purposes, the package is specifically tailored to demographers, actuaries, epidemiologists, and geneticists who may be interested in a practical tool for smoothing mortality data over ages and/or years. The total number of deaths over a specified age- and year-interval is assumed to be Poisson-distributed, and P-splines and generalized linear array models are employed as a suitable regression methodology. Extra-Poisson variation can also be accommodated. Structured in an S3 object orientation system, MortalitySmooth has two main functions which fit the data and define two classes of objects: Mort1Dsmooth and Mort2Dsmooth. The methods for these classes (print, summary, plot, predict, and residuals) are also included. These features make it easy for users to extract and manipulate the outputs. In addition, a collection of mortality data is provided. This paper provides an overview of the design, aims, and principles of MortalitySmooth, as well as strategies for applying it and extending its use.
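The P-spline Poisson regression that underlies such packages can be sketched as penalized iteratively reweighted least squares. This is an illustrative stand-in, not the package's code: a hat-function basis replaces cubic B-splines, and the mortality data, exposure, and penalty weight are invented for the demo.

```python
import numpy as np

rng = np.random.default_rng(2)
ages = np.arange(0, 100)
true_rate = 0.3 * np.exp(-((ages - 40) / 25.0) ** 2) + 0.02
deaths = rng.poisson(1000 * true_rate)          # exposure 1000 at each age

# Hat-function basis (stand-in for the cubic B-splines of a real P-spline)
knots = np.linspace(0, 99, 23)
w = knots[1] - knots[0]
B = np.maximum(0.0, 1.0 - np.abs(ages[:, None] - knots[None, :]) / w)

D = np.diff(np.eye(B.shape[1]), n=2, axis=0)    # second-order difference penalty
lam = 10.0
offset = np.log(1000.0)                         # log exposure

theta = np.zeros(B.shape[1])
for _ in range(50):                             # penalized Poisson IRLS
    eta = B @ theta + offset
    mu = np.exp(eta)
    z = eta - offset + (deaths - mu) / mu       # working response (offset removed)
    W = mu                                      # Poisson working weights
    theta_new = np.linalg.solve(B.T @ (W[:, None] * B) + lam * D.T @ D,
                                B.T @ (W * z))
    if np.max(np.abs(theta_new - theta)) < 1e-8:
        theta = theta_new
        break
    theta = theta_new

rate_hat = np.exp(B @ theta)                    # smoothed death rate by age
```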
CLOSED SMOOTH SURFACE DEFINED FROM CUBIC TRIANGULAR SPLINES
Institute of Scientific and Technical Information of China (English)
Ren-zhong Feng; Ren-hong Wang
2005-01-01
In order to construct closed surfaces with continuous unit normal, we introduce a new spline space on an arbitrary closed mesh of three-sided faces. Our approach generalizes an idea of Goodman and is based on the concept of 'Geometric continuity' for piecewise polynomial parametrizations. The functions in the spline space restricted to the faces are cubic triangular polynomials. A basis of the spline space is constructed of positive functions which sum to 1. It is also shown that the space is suitable for interpolating data at the midpoints of the faces.
NEW PROOF OF DIMENSION FORMULA OF SPLINE SPACES OVER T-MESHES VIA SMOOTHING COFACTORS
Institute of Scientific and Technical Information of China (English)
Zhang-jin Huang; Jian-song Deng; Yu-yu Feng; Fa-lai Chen
2006-01-01
A T-mesh is basically a rectangular grid that allows T-junctions. Recently, Deng et al. introduced splines over T-meshes, which are generalizations of T-splines invented by Sederberg et al., and proposed a dimension formula based on the B-net method. In this paper, we derive an equivalent dimension formula in a different form with the smoothing cofactor method.
Bhadra, Anindya; Carroll, Raymond J
2016-07-01
In truncated polynomial spline or B-spline models where the covariates are measured with error, a fully Bayesian approach to model fitting requires the covariates and model parameters to be sampled at every Markov chain Monte Carlo iteration. Sampling the unobserved covariates poses a major computational problem, and usually Gibbs sampling is not possible. This forces the practitioner to use a Metropolis-Hastings step, which might suffer from unacceptable performance due to poor mixing and might require careful tuning. In this article we show that, for truncated polynomial spline or B-spline models of degree equal to one, the complete conditional distribution of the covariates measured with error is available explicitly as a mixture of double-truncated normals, thereby enabling a Gibbs sampling scheme. We demonstrate via a simulation study that our technique performs favorably in terms of computational efficiency and statistical performance. Our results indicate up to 62% and 54% increases in mean integrated squared error efficiency when compared to existing alternatives while using truncated polynomial splines and B-splines, respectively. Furthermore, there is evidence that the gain in efficiency increases with the measurement error variance, indicating that the proposed method is a particularly valuable tool for challenging applications that present high measurement error. We conclude with a demonstration on a nutritional epidemiology data set from the NIH-AARP study and by pointing out some possible extensions of the current work.
Zhang, X.; Liang, S.; Wang, G.
2015-12-01
Incident solar radiation (ISR) over the Earth's surface plays an important role in determining the Earth's climate and environment. Generally, ISR can be obtained from direct measurements, remotely sensed data, or reanalysis and general circulation model (GCM) data. Each type of product has advantages and limitations: surface direct measurements provide accurate but spatially sparse coverage, whereas the other global products may have large uncertainties. Ground measurements have normally been used for validation and occasionally calibration, but transferring their "true values" spatially to improve the satellite products is still a new and challenging topic. In this study, an improved thin-plate smoothing spline approach is presented to locally "calibrate" the Global LAnd Surface Satellite (GLASS) ISR product using ISR data reconstructed from surface meteorological measurements. The influence of surface elevation on ISR estimation was also considered in the proposed method. The point-based reconstructed surface ISR was used as the response variable, and the GLASS ISR product and the surface elevation data at the corresponding locations were used as explanatory variables to train the thin-plate spline model. We evaluated the performance of the approach using cross-validation at both daily and monthly time scales over China. We also evaluated the estimated ISR based on the thin-plate spline method using independent ground measurements at 10 sites from the Coordinated Enhanced Observation Network (CEON). These validation results indicate that the thin-plate smoothing spline method can be effectively used to calibrate satellite-derived ISR products with ground measurements to achieve better accuracy.
Polynomial estimation of the smoothing splines for the new Finnish reference values for spirometry.
Kainu, Annette; Timonen, Kirsi
2016-07-01
Background: Discontinuity of spirometry reference values from childhood into adulthood has been a problem with traditional reference values; thus, modern modelling approaches using smoothing spline functions to better depict the transition during growth and ageing have recently been introduced. Following the publication of the new international Global Lung Initiative (GLI2012) reference values, new national Finnish reference values have also been calculated using similar GAMLSS modelling, with spline estimates for the mean (Mspline) and standard deviation (Sspline) provided in tables. The aim of this study was to produce polynomial estimates of these spline functions for use in lieu of lookup tables and to assess their validity in the reference population of healthy non-smokers. Methods: Linear regression modelling was used to approximate the estimated values of Mspline and Sspline using polynomial functions similar to those in the international GLI2012 reference values. Estimated values were compared to the original calculations in absolute values, in the derived predicted mean, and in individually calculated z-scores using both values. Results: Polynomial functions were estimated for all 10 spirometry variables. The agreement between the original lookup-table values and the polynomial estimates was very good, with no significant differences found. The variation increased slightly at larger predicted volumes, but the range of -0.018 to +0.022 litres for FEV1 represents at most ±0.4% difference in the predicted mean. Conclusions: The polynomial approximations were very close to the original lookup tables and are recommended for use in clinical practice to facilitate the use of the new reference values.
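The core idea, replacing a tabulated spline by a fitted polynomial and checking the worst-case deviation, can be sketched as follows. The tabulated "Mspline" shape, the age grid, and the polynomial degree here are hypothetical stand-ins, not the published Finnish reference functions.

```python
import numpy as np

# Hypothetical "lookup table": Mspline values tabulated on an age grid
age = np.linspace(18, 85, 200)
mspline = 0.12 * np.sin(age / 15.0) - 0.002 * (age - 50)   # made-up smooth shape

# Approximate the table with a low-order polynomial, as in the paper's approach
coeffs = np.polyfit(age, mspline, deg=5)
approx = np.polyval(coeffs, age)

# Validation: worst-case deviation between polynomial and lookup table
max_abs_err = np.max(np.abs(approx - mspline))
```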
Heyne, Matthias; Derrick, Donald
2015-12-01
Tongue surface measurements from midsagittal ultrasound scans are effectively arcs with deviations representing tongue shape, but smoothing spline analysis of variance (SSANOVA) assumes variance around a horizontal line. Therefore, calculating SSANOVA average curves of tongue traces in Cartesian coordinates [Davidson, J. Acoust. Soc. Am. 120(1), 407-415 (2006)] creates errors that are compounded at the tongue tip and root, where average tongue shape deviates most from a horizontal line. This paper introduces a method for transforming data into polar coordinates, similar to the technique of Mielke [J. Acoust. Soc. Am. 137(5), 2858-2869 (2015)], but using the virtual origin of a radial ultrasound transducer as the polar origin, allowing data conversion in a manner that is robust against between-subject and between-session variability.
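The coordinate transformation itself is straightforward to sketch. The contour below and the virtual-origin location under the tongue surface are assumed values for illustration, not measured ultrasound data.

```python
import numpy as np

# Hypothetical tongue-trace points (x, y) in mm, probe below the tongue
x = np.linspace(-30, 30, 61)
y = 55.0 + 8.0 * np.cos(x / 18.0)      # arc-like contour

origin = np.array([0.0, -10.0])        # assumed virtual transducer origin

dx, dy = x - origin[0], y - origin[1]
theta = np.arctan2(dy, dx)             # angle about the virtual origin
r = np.hypot(dx, dy)                   # radial distance from the origin

# An SSANOVA can now model r as a function of theta, so variance is measured
# along the probe's scan lines rather than vertically in Cartesian space.
```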
An Implementation of Bayesian Adaptive Regression Splines (BARS) in C with S and R Wrappers
Directory of Open Access Journals (Sweden)
Garrick Wallstrom
2007-02-01
BARS (DiMatteo, Genovese, and Kass 2001) uses the powerful reversible-jump MCMC engine to perform spline-based generalized nonparametric regression. It has been shown to work well in terms of having small mean-squared error in many examples (smaller than known competitors), as well as producing visually appealing fits that are smooth (filtering out high-frequency noise) while adapting to sudden changes (retaining high-frequency signal). However, BARS is computationally intensive. The original implementation in S was too slow to be practical in certain situations, and was found to handle some data sets incorrectly. We have implemented BARS in C for the normal and Poisson cases, the latter being important in neurophysiological and other point-process applications. The C implementation includes all needed subroutines for fitting Poisson regression, manipulating B-splines (using code created by Bates and Venables), and finding starting values for Poisson regression (using code for density estimation created by Kooperberg). The code utilizes only freely available external libraries (LAPACK and BLAS) and is otherwise self-contained. We have also provided wrappers so that BARS can be used easily within S or R.
Brown, Charles G., Jr.; Adcock, Aaron B.; Azevedo, Stephen G.; Liebman, Judith A.; Bond, Essex J.
2011-03-01
Some diagnostics at the National Ignition Facility (NIF), including the Gamma Reaction History (GRH) diagnostic, require multiple channels of data to achieve the required dynamic range. These channels need to be stitched together into a single time series, and they may have non-uniform and redundant time samples. We chose to apply the popular cubic smoothing spline technique to our stitching problem because we needed a general non-parametric method. We adapted one of the algorithms in the literature, by Hutchinson and deHoog, to our needs. The modified algorithm and the resulting code perform a cubic smoothing spline fit to multiple data channels with redundant time samples and missing data points. The data channels can have different, time-varying, zero-mean white noise characteristics. The method we employ automatically determines an optimal smoothing level by minimizing the Generalized Cross Validation (GCV) score. In order to automatically validate the smoothing level selection, the Weighted Sum-Squared Residual (WSSR) and zero-mean tests are performed on the residuals. Further, confidence intervals, both analytical and Monte Carlo, are also calculated. In this paper, we describe the derivation of our cubic smoothing spline algorithm. We outline the algorithm and test it with simulated and experimental data.
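The GCV-based choice of smoothing level can be sketched as follows. This toy version uses a hat-function basis with a second-difference penalty as a stand-in for the authors' cubic smoothing spline, and a simple grid search over the smoothing parameter; the signal, noise level, and grid are invented for the demo.

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0.0, 1.0, 120)
y = np.sin(4 * np.pi * t) + rng.normal(0.0, 0.2, 120)

# Linear smoother: penalized regression on a hat-function basis
knots = np.linspace(0.0, 1.0, 25)
w = knots[1] - knots[0]
B = np.maximum(0.0, 1.0 - np.abs(t[:, None] - knots[None, :]) / w)
D = np.diff(np.eye(B.shape[1]), n=2, axis=0)
P = D.T @ D

def gcv(lam):
    # Hat matrix H(lam) maps y to fitted values; GCV = n*RSS / (n - tr H)^2
    A = np.linalg.solve(B.T @ B + lam * P, B.T)
    H = B @ A
    resid = y - H @ y
    n = len(y)
    return n * float(resid @ resid) / (n - np.trace(H)) ** 2

lams = 10.0 ** np.arange(-8, 3)
scores = np.array([gcv(l) for l in lams])
lam_opt = lams[scores.argmin()]          # smoothing level minimizing GCV
```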
Directory of Open Access Journals (Sweden)
Saad Bakkali
2010-04-01
This paper presents a method that is able to filter out noise and suppress outliers of sampled real functions under fairly general conditions. The automatic optimal spline-smoothing approach (AOSSA) automatically determines how a cubic spline should be adjusted in a least-squares optimal sense from an a priori selection of the number of points defining the adjusting spline, but not their location on that curve. The method is fast and easily allows for selecting several knots, thereby adding desirable flexibility to the procedure. As an illustration, we apply the AOSSA method to a map of "disturbances" in Moroccan phosphate deposit resistivity data. The AOSSA smoothing method is an efficient tool for interpreting geophysical potential field data and is particularly suitable for denoising, filtering, and analysing singularities in resistivity data. The AOSSA smoothing and filtering approach was found to be consistently useful when applied to modeling surface phosphate "disturbances."
Morrissey, Edward R; Juárez, Miguel A; Denby, Katherine J; Burroughs, Nigel J
2011-10-01
We propose a semiparametric Bayesian model, based on penalized splines, for the recovery of the time-invariant topology of a causal interaction network from longitudinal data. Our motivation is inference of gene regulatory networks from low-resolution microarray time series, where existence of nonlinear interactions is well known. Parenthood relations are mapped by augmenting the model with kinship indicators and providing these with either an overall or gene-wise hierarchical structure. Appropriate specification of the prior is crucial to control the flexibility of the splines, especially under circumstances of scarce data; thus, we provide an informative, proper prior. Substantive improvement in network inference over a linear model is demonstrated using synthetic data drawn from ordinary differential equation models and gene expression from an experimental data set of the Arabidopsis thaliana circadian rhythm.
Penalized Splines for Smooth Representation of High-dimensional Monte Carlo Datasets
Whitehorn, Nathan; Lafebre, Sven
2013-01-01
Detector response to a high-energy physics process is often estimated by Monte Carlo simulation. For purposes of data analysis, the results of this simulation are typically stored in large multi-dimensional histograms, which can quickly become both too large to easily store and manipulate and numerically problematic due to unfilled bins or interpolation artifacts. We describe here an application of the penalized spline technique to efficiently compute B-spline representations of such tables and discuss aspects of the resulting B-spline fits that simplify many common tasks in handling tabulated Monte Carlo data in high-energy physics analysis, in particular their use in maximum-likelihood fitting.
Shen, Xiang; Liu, Bin; Li, Qing-Quan
2017-03-01
The Rational Function Model (RFM) has proven to be a viable alternative to the rigorous sensor models used for geo-processing of high-resolution satellite imagery. Because of various errors in the satellite ephemeris and instrument calibration, the Rational Polynomial Coefficients (RPCs) supplied by image vendors are often not sufficiently accurate, and there is therefore a clear need to correct the systematic biases in order to meet the requirements of high-precision topographic mapping. In this paper, we propose a new RPC bias-correction method using the thin-plate spline modeling technique. Benefiting from its excellent performance and high flexibility in data fitting, the thin-plate spline model has the potential to remove complex distortions in vendor-provided RPCs, such as the errors caused by short-period orbital perturbations. The performance of the new method was evaluated by using Ziyuan-3 satellite images and was compared against the recently developed least-squares collocation approach, as well as the classical affine-transformation and quadratic-polynomial based methods. The results show that the accuracies of the thin-plate spline and the least-squares collocation approaches were better than the other two methods, which indicates that strong non-rigid deformations exist in the test data because they cannot be adequately modeled by simple polynomial-based methods. The performance of the thin-plate spline method was close to that of the least-squares collocation approach when only a few Ground Control Points (GCPs) were used, and it improved more rapidly with an increase in the number of redundant observations. In the test scenario using 21 GCPs (some of them located at the four corners of the scene), the correction residuals of the thin-plate spline method were about 36%, 37%, and 19% smaller than those of the affine transformation method, the quadratic polynomial method, and the least-squares collocation algorithm, respectively, which demonstrates
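The thin-plate spline fit over ground control points can be sketched directly from its standard form. This is a plain interpolating TPS (kernel U(r) = r² log r plus an affine part, no regularization), which is a simplification of the paper's bias-correction model; the bias surface and point layout are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(4)
pts = rng.uniform(0.0, 1.0, (21, 2))     # e.g. normalized GCP image locations
# Hypothetical smooth RPC bias observed at the GCPs
bias = lambda p: 0.5 + 0.3 * p[:, 0] - 0.2 * np.sin(3 * p[:, 1])
v = bias(pts)

def tps_kernel(r2):
    # U(r) = r^2 log r, computed from r^2 to avoid log(0) at coincident points
    with np.errstate(divide="ignore", invalid="ignore"):
        u = 0.5 * r2 * np.log(r2)
    return np.where(r2 > 0, u, 0.0)

# Standard TPS linear system: [[K, P], [P^T, 0]] [w; a] = [v; 0]
d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
K = tps_kernel(d2)
P = np.column_stack([np.ones(len(pts)), pts])
A = np.block([[K, P], [P.T, np.zeros((3, 3))]])
b = np.concatenate([v, np.zeros(3)])
sol = np.linalg.solve(A, b)
w, a = sol[:len(pts)], sol[len(pts):]

def tps_eval(q):
    # Evaluate the fitted bias surface at query points q (m x 2)
    r2 = ((q[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
    return tps_kernel(r2) @ w + a[0] + q @ a[1:]

fitted = tps_eval(pts)                   # reproduces v at the GCPs
```

In a bias-correction setting one would evaluate `tps_eval` on a dense grid of image coordinates and subtract the surface from the vendor RPC projection.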
Data assimilation using Bayesian filters and B-spline geological models
Duan, Lian
2011-04-01
This paper proposes a new approach to problems of data assimilation, also known as history matching, of oilfield production data by adjustment of the location and sharpness of patterns of geological facies. Traditionally, this problem has been addressed using gradient based approaches with a level set parameterization of the geology. Gradient-based methods are robust, but computationally demanding with real-world reservoir problems and insufficient for reservoir management uncertainty assessment. Recently, the ensemble filter approach has been used to tackle this problem because of its high efficiency from the standpoint of implementation, computational cost, and performance. Incorporation of level set parameterization in this approach could further deal with the lack of differentiability with respect to facies type, but its practical implementation is based on some assumptions that are not easily satisfied in real problems. In this work, we propose to describe the geometry of the permeability field using B-spline curves. This transforms history matching of the discrete facies type to the estimation of continuous B-spline control points. As filtering scheme, we use the ensemble square-root filter (EnSRF). The efficacy of the EnSRF with the B-spline parameterization is investigated through three numerical experiments, in which the reservoir contains a curved channel, a disconnected channel or a 2-dimensional closed feature. It is found that the application of the proposed method to the problem of adjusting facies edges to match production data is relatively straightforward and provides statistical estimates of the distribution of geological facies and of the state of the reservoir.
Rate-optimal Bayesian intensity smoothing for inhomogeneous Poisson processes
E. Belitser; P. Serra; H. van Zanten
2015-01-01
We apply nonparametric Bayesian methods to study the problem of estimating the intensity function of an inhomogeneous Poisson process. To motivate our results we start by analyzing count data coming from a call center which we model as a Poisson process. This analysis is carried out using a certain
Estimating kinetic parameters in TGA using B-spline smoothing and the Friedman method
Energy Technology Data Exchange (ETDEWEB)
Zhang, Xiaojie; Preto, Fernando [CANMET Energy Technology Centre (CETC), Natural Resources (Canada); de Jong, Wiebren [Faculty 3mE, Department of Process and Energy, ET Section, Delft University of Technology, Leeghwaterstraat 44, 2628 CD Delft (Netherlands)
2009-10-15
The pyrolysis of biomass occurs via several parallel/serial decomposition reactions. The kinetic parameters, namely the activation energy (E) and the pre-exponential factor (k_0), do not remain constant during the pyrolysis process. A modified empirical method is introduced for calculating the activation energy (E) and the pre-exponential factor (k_0) based on the Friedman analysis [Friedman HL. Kinetics of thermal degradation of char-forming plastics from thermogravimetry - application to a phenolic plastic. J Polym Sci C 1963;6:183-95]. The kinetic parameters are expressed as a function of the conversion (x) during the biomass pyrolysis process. The reactions are assumed to be of first order. At least three data sets obtained at different dynamic heating rates are required. From the Friedman analysis, the conversion-related functions E = E(x) and k_0 = k_0(x) can be obtained by a B-spline regression method. The pyrolysis can hence be described as: dx/dt = k(1-x) = k_0(x)·exp(-E(x)/RT)·(1-x). In this paper, the adapted method is applied to the pyrolysis of cellulose and two biomass fuels (meat and bone meal, chicken litter). Experiments were carried out at 2, 10 and 50 K min^{-1} by thermogravimetric analysis. A good fit of the calculated conversion with the experimental data was found.
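The Friedman analysis can be sketched end-to-end on synthetic TGA curves with known (hypothetical) first-order kinetics: at a fixed conversion x*, the slope of ln(dx/dt) versus 1/T across the heating rates recovers E. The B-spline smoothing step described in the abstract is omitted here for brevity.

```python
import numpy as np

R, E_true, k0 = 8.314, 120e3, 1e9           # J/mol, 1/s -- hypothetical kinetics
betas = np.array([2.0, 10.0, 50.0]) / 60.0  # heating rates in K/s (2, 10, 50 K/min)

dT = 0.1
T_grid = np.arange(400.0, 900.0, dT)
curves = []
for beta in betas:
    x = np.zeros_like(T_grid)
    for i in range(1, len(T_grid)):          # explicit Euler in temperature
        dxdT = k0 / beta * np.exp(-E_true / (R * T_grid[i - 1])) * (1.0 - x[i - 1])
        x[i] = min(x[i - 1] + dxdT * dT, 1.0 - 1e-12)
    curves.append(x)

# Friedman analysis: at fixed conversion x*, ln(dx/dt) vs 1/T is a line
# with slope -E/R across the heating rates.
E_est = []
for x_star in (0.2, 0.5, 0.8):
    inv_T, ln_rate = [], []
    for beta, x in zip(betas, curves):
        i = int(np.searchsorted(x, x_star))
        T = T_grid[i]
        rate = k0 * np.exp(-E_true / (R * T)) * (1.0 - x[i])  # dx/dt at x*
        inv_T.append(1.0 / T)
        ln_rate.append(np.log(rate))
    slope = np.polyfit(inv_T, ln_rate, 1)[0]
    E_est.append(-slope * R)
E_est = np.array(E_est)                      # should recover E_true at each x*
```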
Knot Removing and Smoothing Method of Generalized B-Spline Curves
Institute of Scientific and Technical Information of China (English)
张莉; 葛先玉; 檀结庆
2016-01-01
Generalized B-splines are compatible with classical B-splines and, thanks to their flexible core functions, offer richer shapes for geometric modeling systems. This paper presents a knot-removal and fairing algorithm for generalized B-spline curves. First, a new construction of the dual bases of generalized B-splines is given, with well-controlled time complexity. Next, the designated knots are removed and, using the best-approximation property of the dual bases, the control points of the new generalized B-spline curve are computed. For fairing, the concept of a jump value is introduced: if the jump value near a knot is large, the corresponding knot is removed, smoothing the curve there. Numerical examples illustrate the effectiveness of the algorithm.
Knott, Gary D
2000-01-01
A spline is a thin flexible strip composed of a material such as bamboo or steel that can be bent to pass through or near given points in the plane, or in 3-space, in a smooth manner. Mechanical engineers and drafting specialists find such (physical) splines useful in designing and in drawing plans for a wide variety of objects, such as hulls of boats or bodies of automobiles, where smooth curves need to be specified. These days, physical splines are largely replaced by computer software that can compute the desired curves (with appropriate encouragement). The same mathematical ideas used for computing "spline" curves can be extended to allow us to compute "spline" surfaces. The application of these mathematical ideas is rather widespread. Spline functions are central to computer graphics disciplines. Spline curves and surfaces are used in computer graphics renderings for both real and imaginary objects. Computer-aided-design (CAD) systems depend on algorithms for computing spline func...
Functional Spline Curves and Surfaces with Different Degrees of Smoothness
Institute of Scientific and Technical Information of China (English)
朱春钢; 李彩云; 王仁宏
2009-01-01
Implicit curves and surfaces are extensively used in interpolation, approximation and blending. By adding auxiliary curves and surfaces, functional spline curves and surfaces with different degrees of smoothness are presented, and their interpolation properties, convexity and regularity are analyzed. Examples show that functional spline curves and surfaces provide low-degree, simply constructed, and flexible means of curve and surface interpolation and blending.
Energy Technology Data Exchange (ETDEWEB)
M Ali, M. K., E-mail: majidkhankhan@ymail.com, E-mail: eutoco@gmail.com; Ruslan, M. H., E-mail: majidkhankhan@ymail.com, E-mail: eutoco@gmail.com [Solar Energy Research Institute (SERI), Universiti Kebangsaan Malaysia, 43600 UKM Bangi, Selangor (Malaysia); Muthuvalu, M. S., E-mail: sudaram-@yahoo.com, E-mail: jumat@ums.edu.my; Wong, J., E-mail: sudaram-@yahoo.com, E-mail: jumat@ums.edu.my [Unit Penyelidikan Rumpai Laut (UPRL), Sekolah Sains dan Teknologi, Universiti Malaysia Sabah, 88400 Kota Kinabalu, Sabah (Malaysia); Sulaiman, J., E-mail: ysuhaimi@ums.edu.my, E-mail: hafidzruslan@eng.ukm.my; Yasir, S. Md., E-mail: ysuhaimi@ums.edu.my, E-mail: hafidzruslan@eng.ukm.my [Program Matematik dengan Ekonomi, Sekolah Sains dan Teknologi, Universiti Malaysia Sabah, 88400 Kota Kinabalu, Sabah (Malaysia)
2014-06-19
The solar drying experiment of seaweed using the Green V-Roof Hybrid Solar Drier (GVRHSD) was conducted in Semporna, Sabah, under the meteorological conditions of Malaysia. Drying in the GVRHSD reduced the moisture content of the seaweed samples from about 93.4% to 8.2% in 4 days at an average solar radiation of about 600 W/m{sup 2} and a mass flow rate of about 0.5 kg/s. The drying-rate curves generally need more smoothing than the moisture-content data, and special care is needed at low drying rates and moisture contents. The cubic spline (CS) is shown to be effective for moisture-time curves: the data are approximated by a CS regression with continuous first and second derivatives, and analytical differentiation of the spline regression yields the instantaneous drying rate directly from the experimental data. The method of minimizing the functional of average risk was used successfully to solve this problem. The drying kinetics were then fitted with six published exponential thin-layer drying models, compared using the coefficient of determination (R{sup 2}) and root mean square error (RMSE). The Two Term model was found to describe the drying behaviour best. The CS-smoothed drying rate also proved an effective estimator for the moisture-time curves, including missing moisture-content data, for the seaweed Kappaphycus striatum variety Durian in the solar dryer under the conditions tested.
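The cubic-spline smoothing and analytical differentiation described above can be sketched with SciPy's `UnivariateSpline`, a stand-in for the authors' regression; the moisture readings below are synthetic:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Synthetic moisture-content readings (fraction) over drying time (hours).
t = np.linspace(0.0, 96.0, 25)
rng = np.random.default_rng(0)
mc_true = 0.93 * np.exp(-0.04 * t)               # assumed drying curve
mc_obs = mc_true + rng.normal(0.0, 0.005, t.size)

# Smoothing cubic spline; s sets the allowed sum of squared residuals.
spl = UnivariateSpline(t, mc_obs, k=3, s=t.size * 0.005**2)

# Instantaneous drying rate from analytical differentiation of the spline.
rate = -spl.derivative()(t)
```

The smoothed curve, rather than the noisy finite differences of the raw readings, supplies the drying-rate estimates the abstract refers to.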
Wu, Hulin; Xue, Hongqi; Kumar, Arun
2012-06-01
Differential equations are extensively used for modeling the dynamics of physical processes in many scientific fields such as engineering, physics, and the biomedical sciences. Parameter estimation for differential equation models is challenging because of the high computational cost and high-dimensional parameter space. In this article, we propose a novel class of methods for estimating parameters in ordinary differential equation (ODE) models, motivated by HIV dynamics modeling. The new methods exploit the form of numerical discretization algorithms for an ODE solver to formulate estimating equations. First, a penalized-spline approach is employed to estimate the state variables; the estimated state variables are then plugged into a discretization formula of an ODE solver to obtain the ODE parameter estimates via a regression approach. We consider three discretization methods of different orders: Euler's method, the trapezoidal rule, and a Runge-Kutta method. A higher-order numerical algorithm reduces the numerical error in approximating the derivative, which produces a more accurate estimate, but at higher computational cost. To balance computational cost and estimation accuracy, we demonstrate via simulation studies that the trapezoidal discretization-based estimate is the best and is recommended for practical use. The asymptotic properties of the proposed numerical discretization-based estimators are established. Comparisons between the proposed and existing methods show a clear benefit of the proposed methods with regard to the trade-off between computational cost and estimation accuracy. We apply the proposed methods to an HIV study to further illustrate their usefulness.
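A minimal sketch of the two-step estimator the abstract describes, for the toy ODE dx/dt = -θx (θ, the noise level and the grid are assumptions here, and a smoothing spline stands in for the penalized spline):

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Toy ODE dx/dt = -theta * x with unknown theta; noisy state observations.
theta_true = 0.7
t = np.linspace(0.0, 5.0, 51)
rng = np.random.default_rng(1)
obs = np.exp(-theta_true * t) + rng.normal(0.0, 0.01, t.size)

# Step 1: spline smoothing of the state trajectory (penalized-spline stand-in).
xhat = UnivariateSpline(t, obs, k=3, s=t.size * 0.01**2)(t)

# Step 2: the trapezoidal rule x_{i+1} - x_i = (h/2)(f_i + f_{i+1}),
# with f = -theta * x, is linear in theta, so least squares recovers it.
h = t[1] - t[0]
y = xhat[1:] - xhat[:-1]
z = -(h / 2.0) * (xhat[1:] + xhat[:-1])
theta_est = float(np.dot(z, y) / np.dot(z, z))
```

The Euler variant would use z_i = -h·x_i instead; averaging the interval endpoints is exactly the trapezoidal refinement the authors recommend.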
Directory of Open Access Journals (Sweden)
Wei Zeng; Muhammad Razib; Abdur Bin Shahid
2015-04-01
Full Text Available Conventional splines offer powerful means for modeling surfaces and volumes in three-dimensional Euclidean space. A one-dimensional quaternion spline has been applied for animation purposes, where the splines are defined to model a one-dimensional submanifold in the three-dimensional Lie group. Given two surfaces, all of the diffeomorphisms between them form an infinite-dimensional manifold, the so-called diffeomorphism space. In this work, we propose a novel scheme to model finite-dimensional submanifolds in the diffeomorphism space by generalizing conventional splines. By a theorem of quasiconformal geometry, each diffeomorphism determines a Beltrami differential on the source surface. Conversely, the diffeomorphism is determined by its Beltrami differential together with normalization conditions. Therefore, the diffeomorphism space has a one-to-one correspondence with the space of a special differential form. A convex combination of Beltrami differentials is still a Beltrami differential, so the conventional spline scheme can be generalized to the Beltrami differential space and, consequently, to the diffeomorphism space. Our experiments demonstrate the efficiency and efficacy of diffeomorphism splines. The diffeomorphism spline has many potential applications, such as surface registration, tracking and animation.
Inference in Varying-Coefficient Mixed Models by Using Smoothing Splines
Institute of Scientific and Technical Information of China (English)
卢一强; 许王莉
2009-01-01
The varying-coefficient mixed model (VCMM) is proposed for longitudinal data and other correlated data. This model allows flexible functional dependence of the response variable on the covariates through a varying-coefficient linear part, while accounting for within-subject correlation through random effects. In this article, the coefficient functions are estimated by smoothing splines, and restricted maximum likelihood is used to estimate the smoothing parameters and the variance components simultaneously. The performance of the proposed method is evaluated through simulation studies, which show that both the coefficient functions and the variance components can be estimated well for VCMMs with various covariance structures.
THE STRUCTURAL CHARACTERIZATION AND LOCALLY SUPPORTED BASES FOR BIVARIATE SUPER SPLINES
Institute of Scientific and Technical Information of China (English)
Zhi-qiang Xu; Ren-hong Wang
2004-01-01
Super splines are bivariate splines defined on triangulations, where the smoothness enforced at the vertices is larger than the smoothness enforced across the edges. In this paper, the smoothness conditions and conformality conditions for super splines are presented. Three locally supported super splines on type-1 triangulation are presented, and the criteria for selecting local bases are given. Using locally supported super spline functions, a variation-diminishing operator is built, and its approximation properties are presented.
Zhang, Zhimin; Tomlinson, John; Martin, Clyde
1994-01-01
In this work, the relationship between splines and control theory is analyzed. We show that spline functions can be constructed naturally from control theory. By establishing a framework based on control theory, we provide a simple and systematic way to construct splines. We construct the traditional spline functions, including the polynomial splines and the classical exponential spline, and also derive some new spline functions, such as trigonometric splines and combinations of polynomial, exponential and trigonometric splines. The method proposed in this paper is easy to implement. Numerical experiments are performed to investigate the properties of the different spline approximations.
Institute of Scientific and Technical Information of China (English)
蔡利栋
2001-01-01
The profit-and-loss revision technique can improve the accuracy with which raw image data are approximated after cubic B-spline smoothing. This paper comments on the technique from the viewpoint of image smoothing and restoration: it highlights the equivalence, under discretization, between spline smoothing and diffusion smoothing, and between profit-and-loss revision and inverse diffusion restoration; it formulates the revision operators as a series of renewal recursions, together with an estimate of the order of their deviations from the raw data; and it exposes the numerical instability of both the simple and the renewal recursions of the profit-and-loss revision. Finally, the feasibility of applying profit-and-loss revision to edge detection in images in the presence of noise is discussed.
Schwarz and multilevel methods for quadratic spline collocation
Energy Technology Data Exchange (ETDEWEB)
Christara, C.C. [Univ. of Toronto, Ontario (Canada); Smith, B. [Univ. of California, Los Angeles, CA (United States)
1994-12-31
Smooth spline collocation methods offer an alternative to Galerkin finite element methods, as well as to Hermite spline collocation methods, for the solution of linear elliptic Partial Differential Equations (PDEs). Recently, spline collocation methods with optimal order of convergence have been developed for splines of certain degrees. Convergence proofs for smooth spline collocation methods are generally more difficult than for Galerkin finite elements or Hermite spline collocation, and they require stronger assumptions and more restrictions. However, numerical tests indicate that spline collocation methods are applicable to a wider class of problems than the analysis requires, and are very competitive with finite element methods with respect to efficiency. The authors will discuss Schwarz and multilevel methods for the solution of elliptic PDEs using quadratic spline collocation, and compare these with domain decomposition methods using substructuring. Numerical tests on a variety of parallel machines will also be presented. In addition, preliminary convergence analysis using Schwarz and/or maximum principle techniques will be presented.
Owusu-Edusei, Kwame; Owens, Chantelle J
2009-01-01
Background Chlamydia continues to be the most prevalent disease in the United States. Effective spatial monitoring of chlamydia incidence is important for successful implementation of control and prevention programs. The objective of this study is to apply Bayesian smoothing and exploratory spatial data analysis (ESDA) methods to monitor Texas county-level chlamydia incidence rates by examining spatiotemporal patterns. We used county-level data on chlamydia incidence (for all ages, genders and races) from the National Electronic Telecommunications System for Surveillance (NETSS) for 2004 and 2005. Results Bayesian-smoothed chlamydia incidence rates were spatially dependent both in levels and in relative changes. Erath county had significantly (p < 0.05) higher smoothed rates (more than 300 cases per 100,000 residents) than its contiguous neighbors (195 or less) in both years. Gaines county experienced the highest relative increase in smoothed rates (173%, from 139 to 379). The relative change in smoothed chlamydia rates in Newton county was significantly (p < 0.05) higher than in its contiguous neighbors. Conclusion Bayesian smoothing and ESDA methods can assist programs in using chlamydia surveillance data to identify outliers, as well as relevant changes in chlamydia incidence in specific geographic units. Secondly, they may also indirectly help in assessing existing differences and changes in chlamydia surveillance systems over time. PMID:19245686
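The simplest non-spatial version of such rate smoothing is empirical-Bayes gamma-Poisson shrinkage; the counts, populations and prior shape below are hypothetical, and the study itself would use spatial priors rather than this global shrinkage:

```python
import numpy as np

# Hypothetical county case counts and populations.
cases = np.array([12.0, 0.0, 45.0, 3.0, 150.0])
pop = np.array([4000.0, 1500.0, 20000.0, 900.0, 60000.0])
raw = cases / pop * 1e5                    # raw rates per 100,000

# Gamma(a, b) prior on the Poisson rate; the posterior-mean rate shrinks
# small counties toward the statewide rate (prior mean set to it).
state_rate = cases.sum() / pop.sum()
a = 2.0                                    # assumed prior shape
b = a / state_rate                         # prior mean = statewide rate
smoothed = (cases + a) / (pop + b) * 1e5   # posterior-mean rate per 100,000
```

Small counties (including zero-count ones) move strongly toward the statewide rate, while large counties are essentially unchanged, which is the stabilizing effect the abstract relies on.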
Bayesian Smoothing with Gaussian Processes Using Fourier Basis Functions in the spectralGP Package
Directory of Open Access Journals (Sweden)
Christopher J. Paciorek
2007-04-01
Full Text Available The spectral representation of stationary Gaussian processes via the Fourier basis provides a computationally efficient specification of spatial surfaces and nonparametric regression functions for use in various statistical models. I describe the representation in detail and introduce the spectralGP package in R for computations. Because of the large number of basis coefficients, some form of shrinkage is necessary; I focus on a natural Bayesian approach via a particular parameterized prior structure that approximates stationary Gaussian processes on a regular grid. I review several models from the literature for data that do not lie on a grid, suggest a simple model modification, and provide example code demonstrating MCMC sampling using the spectralGP package. I describe reasons that mixing can be slow in certain situations and provide some suggestions for MCMC techniques to improve mixing, also with example code, and some general recommendations grounded in experience.
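A one-dimensional analogue of the idea (not the spectralGP API): expand the function in a trigonometric basis and shrink the coefficients with a penalty that grows with frequency, which mimics a stationary Gaussian-process prior; the test function and penalty constants are assumptions:

```python
import numpy as np

# 1D analogue of the spectral idea: a cosine basis with frequency-dependent
# shrinkage approximates a stationary Gaussian-process prior on a grid.
rng = np.random.default_rng(2)
n = 200
x = np.linspace(0.0, 1.0, n)
y = np.sin(2 * np.pi * x) + 0.5 * np.cos(6 * np.pi * x) + rng.normal(0.0, 0.2, n)

K = 30                                           # number of basis functions
B = np.cos(np.pi * np.outer(x, np.arange(K)))    # cosine design matrix
penalty = 1e-3 * (1.0 + np.arange(K)) ** 2       # grows with frequency
coef = np.linalg.solve(B.T @ B + n * np.diag(penalty), B.T @ y)
fit = B @ coef
```

The penalized solve is the posterior mean under independent Gaussian priors on the coefficients whose variances decay with frequency, the shrinkage the abstract says is necessary given the large number of basis coefficients.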
Institute of Scientific and Technical Information of China (English)
李小霞; 汪木兰; 刘坤; 蒋荣
2012-01-01
An interpolation method based on fifth-degree (quintic) B-splines is proposed for manipulator trajectory planning in joint space, to achieve smooth trajectory adjustment. The B-spline interpolation algorithm ensures continuity of velocity, acceleration and jerk throughout the joint motion, and allows the start and stop velocity, acceleration and jerk to be configured arbitrarily. The manipulator trajectory was simulated, and joint position-time, velocity-time and acceleration-time curves were drawn, on the VC++ 6.0 development platform using the MFC framework classes and the OpenGL graphics library. Simulation results show that the proposed method yields smooth trajectory adjustment and stable joint motion, with performance clearly superior to traditional cubic-spline trajectory planning.
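The continuity and boundary-configuration properties described above can be sketched with SciPy's quintic interpolation, a stand-in for the paper's own algorithm; the waypoints are made up:

```python
import numpy as np
from scipy.interpolate import make_interp_spline

# Waypoints for one joint (rad) at given times; a quintic (k=5) spline keeps
# velocity, acceleration and jerk continuous, and bc_type pins the start and
# stop velocity and acceleration to zero.
t = np.array([0.0, 1.0, 2.5, 4.0, 5.0])
q = np.array([0.0, 0.6, 1.2, 0.9, 1.5])
spl = make_interp_spline(
    t, q, k=5,
    bc_type=([(1, 0.0), (2, 0.0)], [(1, 0.0), (2, 0.0)]),  # v=a=0 at both ends
)
vel = spl.derivative(1)   # joint velocity profile
acc = spl.derivative(2)   # joint acceleration profile
```

The `(order, value)` pairs in `bc_type` supply the four extra conditions a quintic interpolant needs, which is how arbitrary start/stop velocity and acceleration can be configured.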
Adaptive Parametrization of Multivariate B-splines for Image Registration
DEFF Research Database (Denmark)
Hansen, Michael Sass; Glocker, Benjamin; Navab, Nassir;
2008-01-01
cost function. In the current work we introduce multivariate B-splines as a novel alternative to the widely used tensor B-splines, enabling us to make efficient use of the derived measure. The multivariate B-splines of order n are C^(n-1)-smooth and are based on Delaunay configurations of arbitrary 2D or 3D control point sets. Efficient algorithms for finding the configurations are presented, and the B-splines are shown, through their flexibility, to feature several advantages over tensor B-splines. In spite of efforts to make tensor-product B-splines more flexible, their knots are still bound to reside on a regular grid. In contrast, by efficient non-constrained placement of the knots, the multivariate B-splines are shown to give a good representation of inhomogeneous objects in natural settings. The wide applicability of the method is illustrated through its application on medical data...
Average of Distribution and Remarks on Box-Splines
Institute of Scientific and Technical Information of China (English)
LI Yue-sheng
2001-01-01
A class of generalized moving average operators is introduced, and the integral representations of an average function are provided. It has been shown that the average of Dirac δ-distribution is just the well known box-spline. Some remarks on box-splines, such as their smoothness and the corresponding partition of unity, are made. The factorization of average operators is derived. Then, the subdivision algorithm for efficient computing of box-splines and their linear combinations follows.
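A univariate sketch of the averaging idea (box-splines arise by repeatedly averaging the Dirac distribution; here the continuous averaging is approximated by discrete convolution on a fine grid, whose step size is an assumption):

```python
import numpy as np

# Start from the indicator of [0, 1) (the degree-0 B-spline, i.e. the unit
# average of the Dirac distribution) and apply the unit moving-average
# operator twice; the result is the quadratic B-spline on [0, 3].
h = 1e-3
grid = np.arange(0.0, 4.0, h)
b3 = np.where(grid < 1.0, 1.0, 0.0)       # indicator of [0, 1)
box = np.full(int(1.0 / h), h)            # discrete averaging kernel, mass 1
for _ in range(2):                        # two averagings -> quadratic B-spline
    b3 = np.convolve(b3, box)[: grid.size]
```

The resulting curve peaks at 0.75 at x = 1.5 and integrates to one, matching the quadratic B-spline; each extra convolution raises the degree and the smoothness by one, which is the factorization of average operators the abstract mentions.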
Transfinite thin plate spline interpolation
Bejancu, Aurelian
2009-01-01
Duchon's method of thin plate splines defines a polyharmonic interpolant to scattered data values as the minimizer of a certain integral functional. For transfinite interpolation, i.e. interpolation of continuous data prescribed on curves or hypersurfaces, Kounchev has developed the method of polysplines, which are piecewise polyharmonic functions of fixed smoothness across the given hypersurfaces and satisfy some boundary conditions. Recently, Bejancu has introduced boundary conditions of Beppo Levi type to construct a semi-cardinal model for polyspline interpolation to data on an infinite set of parallel hyperplanes. The present paper proves that, for periodic data on a finite set of parallel hyperplanes, the polyspline interpolant satisfying Beppo Levi boundary conditions is in fact a thin plate spline, i.e. it minimizes a Duchon type functional.
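For contrast with the transfinite setting, ordinary scattered-data thin plate spline interpolation (the Duchon minimizer the paper starts from) is available in SciPy; the sample points and test function below are arbitrary:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(3)
pts = rng.uniform(0.0, 1.0, (30, 2))                  # scattered sites
vals = np.sin(2 * np.pi * pts[:, 0]) * pts[:, 1]      # values to interpolate

# Thin plate spline kernel plus a linear polynomial tail: with zero
# smoothing this reproduces the data exactly at the sites.
tps = RBFInterpolator(pts, vals, kernel="thin_plate_spline", degree=1)
```

The transfinite problem in the paper replaces the finite site set with data prescribed on whole hypersurfaces, which is where the polyspline machinery comes in.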
The Design and Analysis of a Smooth-Walled Spline-Profile Horn
Institute of Scientific and Technical Information of China (English)
刘广; 王宏建; 张升伟; 程岳云; 易敏; 陈雪; 刘世华
2011-01-01
At millimeter-wave frequencies, corrugated horns are difficult and expensive to manufacture. As an alternative, a smooth-walled spline-profile horn whose profile is composed of a sine curve and a Gaussian curve is presented. Its electrical properties were calculated using commercially available finite element software, the ANSOFT high-frequency structure simulator (HFSS), and its Gaussian coupling efficiency was calculated using quasi-optical theory. The designed horn was measured with a high-frequency planar near-field antenna testing system. Compared with a corrugated horn, the measured sidelobe level of the smooth-walled spline-profile horn is 6.59 dB lower, its gain 0.5 dB higher, its Gaussian coupling efficiency 0.9% higher, and its total length 15.8 mm shorter; it is also much easier to manufacture. The presented horn thus offers high performance and can replace a corrugated horn as an antenna feed.
LOCALLY REFINED SPLINES REPRESENTATION FOR GEOSPATIAL BIG DATA
Directory of Open Access Journals (Sweden)
T. Dokken
2015-08-01
Full Text Available When viewed from a distance, large parts of the topography of landmasses and the bathymetry of the sea and ocean floor can be regarded as a smooth background with local features. Consequently, a digital elevation model combining a compact smooth representation of the background with locally added features has the potential of providing a compact and accurate representation of topography and bathymetry. The recent introduction of Locally Refined B-splines (LR B-splines) allows the granularity of spline representations to be locally adapted to the complexity of the smooth shape approximated. This allows few degrees of freedom to be used in areas with little variation, while adding extra degrees of freedom in areas in need of more modelling flexibility. In the EU FP7 Integrating Project IQmulus we exploit LR B-splines for approximating large point clouds representing the bathymetry of the smooth sea and ocean floor. A drastic reduction is demonstrated in the bulk of the data representation compared to the size of the input point clouds. The representation is very well suited for exploiting the power of GPUs for visualization, as the spline format is transferred to the GPU and the triangulation needed for the visualization is generated on the GPU according to the viewing parameters. The LR B-splines are interoperable with other elevation model representations such as LIDAR data, raster representations and triangulated irregular networks, as these can be used as input to the LR B-spline approximation algorithms. Output to these formats can be generated from the LR B-spline applications according to the resolution criteria required. The spline models are well suited for change detection, as new sensor data can efficiently be compared to the compact LR B-spline representation.
Energy Technology Data Exchange (ETDEWEB)
2013-08-29
An analytical model is developed to evaluate the design of a spline coupling. For a given torque and shaft misalignment, the model calculates the number of teeth in contact, tooth loads, stiffnesses, stresses, and safety factors. The analytic model provides essential spline coupling design and modeling information and could be easily integrated into gearbox design and simulation tools.
Institute of Scientific and Technical Information of China (English)
2009-01-01
In this paper, we study the local asymptotic behavior of the regression spline estimator in the framework of marginal semiparametric models. Similarly to Zhu, Fung and He (2008), we give an explicit expression for the asymptotic bias of the regression spline estimator of the nonparametric function f. Our results also show that the asymptotic bias of the regression spline estimator does not depend on the working covariance matrix, which distinguishes regression splines from smoothing splines and the seemingly unrelated kernel estimator. To understand this local bias result, we show that the regression spline estimator can be obtained iteratively by applying the standard weighted least squares regression spline estimator to pseudo-observations. At each iteration, the bias of the estimator is unchanged and only the variance is updated.
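A minimal regression spline fit of the kind analyzed above, using SciPy's least-squares spline over a fixed knot vector (the knot placement and test function are assumptions; unlike a smoothing spline, no roughness penalty is involved, only the choice of knots):

```python
import numpy as np
from scipy.interpolate import make_lsq_spline

rng = np.random.default_rng(4)
x = np.sort(rng.uniform(0.0, 1.0, 150))
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.1, x.size)

# Cubic B-spline basis with 5 interior knots; coefficients by least squares.
k = 3
interior = np.linspace(0.0, 1.0, 7)[1:-1]
knots = np.r_[[0.0] * (k + 1), interior, [1.0] * (k + 1)]
spl = make_lsq_spline(x, y, knots, k)
fhat = spl(x)
```

With a working covariance matrix this least-squares step would become weighted; the abstract's point is that iterating that weighted fit on pseudo-observations leaves the bias unchanged.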
Smooth GERBS, orthogonal systems and energy minimization
Dechevsky, Lubomir T.; Zanaty, Peter
2013-12-01
New results are obtained in three mutually related directions of the rapidly developing theory of generalized expo-rational B-splines (GERBS) [7, 6]: closed-form computability of C∞-smooth GERBS in terms of elementary and special functions, Hermite interpolation and least-squares best approximation via smooth GERBS, energy minimizing properties of smooth GERBS similar to those of the classical cubic polynomial B-splines.
THE INSTABILITY DEGREE IN THE DIMENSION OF SPACES OF BIVARIATE SPLINES
Institute of Scientific and Technical Information of China (English)
Zhiqiang Xu; Renhong Wang
2002-01-01
In this paper, the dimension of the spaces of bivariate splines with degree less than 2r and smoothness order r on the Morgan-Scott triangulation is considered. The concept of the instability degree in the dimension of spaces of bivariate splines is presented. The results in the paper lead us to conjecture that the instability degree in the dimension of spaces of bivariate splines is infinite.
Single authentication: exposing weighted splining artifacts
Ciptasari, Rimba W.
2016-05-01
A common form of manipulation is to combine parts of one image fragment with another image, either to remove or to blend the objects. Inspired by this situation, we propose a single authentication technique for detecting traces of the weighted-average splining technique. In this paper, we assume that an image composite could be created by joining two images so that the edge between them is imperceptible. The weighted-average technique is constructed from overlapped images, so it is possible to compute the gray-level value of points within a transition zone. This approach works on the assumption that, although the splining process leaves the transition zone smooth, it may nevertheless alter the underlying statistics of an image; in other words, it introduces specific correlations into the image. The proposed idea for identifying these correlations is to generate an original model of both weighting functions, left and right, as references for their synthetic models. The overall authentication process is divided into two main stages: pixel predictive coding and weighting-function estimation. In the former stage, the set of intensity pairs {Il, Ir} is computed by exploiting a pixel extrapolation technique. The least-squares estimation method is then employed to yield the weighting coefficients. We show the efficacy of the proposed scheme in revealing splining artifacts. We believe that this is the first work that exposes the image splining artifact as evidence of digital tampering.
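The final least-squares step of the scheme can be sketched as a one-parameter fit of the blend weight in the transition-zone pixel model I = w·Il + (1-w)·Ir; all intensities below are synthetic, whereas the paper obtains the {Il, Ir} pairs by pixel extrapolation:

```python
import numpy as np

# Synthetic transition-zone intensities from two overlapped source images.
rng = np.random.default_rng(5)
Il = rng.uniform(50.0, 200.0, 500)        # left-image intensities
Ir = rng.uniform(50.0, 200.0, 500)        # right-image intensities
w_true = 0.35                             # assumed blend weight
I = w_true * Il + (1.0 - w_true) * Ir + rng.normal(0.0, 0.5, 500)

# I - Ir = w * (Il - Ir): one-parameter least squares for the weight.
d = Il - Ir
w_est = float(np.dot(d, I - Ir) / np.dot(d, d))
```

In the actual scheme the weight varies across the transition zone, so this fit would be repeated per position to recover the left and right weighting functions.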
DEFF Research Database (Denmark)
Engell-Nørregård, Morten Pol; Erleben, Kenny
dimensional 2D/3D deformable model. Our activation splines are easy to set up and can be used for physics based animation of deformable models such as snake motion and locomotion of characters. Our approach generalises easily to both 2D and 3D simulations and is applicable in physics based games or animations...
Interchangeable spline reference guide
Energy Technology Data Exchange (ETDEWEB)
Dolin, R.M.
1994-05-01
The WX-Division Integrated Software Tools (WIST) Team evolved from two previous committees. The first was the W78 Solid Modeling Pilot Project's Spline Subcommittee, which later evolved into the WX-Division Spline Committee. The mission of the WIST team is to investigate current CAE engineering processes relating to complex geometry and to develop methods for improving those processes. Specifically, the WIST team is developing technology that allows the Division to use multiple spline representations. We are also updating the contour system (CONSYS) database to take full advantage of the Division's expanding electronic engineering process. Both of these efforts involve developing interfaces to commercial CAE systems and writing new software. The WIST team is comprised of members from WX-11, -12, and -13. This "cross-functional" approach to software development is somewhat new in the Division, so an effort is being made to formalize our processes and assure quality at each phase of development. Chapter one represents a theory manual and is one phase of the formal process. The theory manual is followed by a software requirements document, a specification document, and software verification and validation documents. The purpose of this guide is to present the theory underlying the interchangeable spline technology and its application. Verification and validation test results are also presented as proof of principle.
1981-05-01
try to define a complex planar spline by holomorphic elements such as polynomials, then by the well-known identity theorem (e.g. Diederich-Remmert [9, p...
9. R. Remmert: Funktionentheorie I, Springer, Berlin, Heidelberg, New York, 1972, 246 p.
10. O. Lehto, K.I. Virtanen: Quasikonforme Abbildungen, Springer
Norton, Andrew H.
1991-01-01
Local spline approximants offer a means of constructing finite difference formulae (FDFs) for the numerical solution of PDEs. These formulae seem particularly well suited to situations in which the use of conventional formulae leads to non-linear computational instability of the time integration. This is explained in terms of the frequency responses of the FDFs.
Some extremal properties of multivariate polynomial splines in the metric Lp (Rd )
Institute of Scientific and Technical Information of China (English)
刘永平; 许贵桥
2001-01-01
We constructed a kind of continuous multivariate spline operator as an approximation tool for multivariate functions on Bd, instead of the usual multivariate cardinal interpolation operators of splines, and obtained the approximation error of this kind of spline operator. Meanwhile, from these results, we also obtained that the spaces of multivariate polynomial splines are weakly asymptotically optimal for the Kolmogorov widths and the linear widths of some anisotropic Sobolev classes of smooth functions on Bd in the metric Lp(Bd).
Connecting the Dots Parametrically: An Alternative to Cubic Splines.
Hildebrand, Wilbur J.
1990-01-01
Discusses a method of cubic splines to determine a curve through a series of points and a second method for obtaining parametric equations for a smooth curve that passes through a sequence of points. Procedures for determining the curves and results of each of the methods are compared. (YP)
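The parametric construction described above can be sketched in a few lines of SciPy; the points and the chord-length parametrization below are illustrative, not taken from the article.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical sequence of points the curve must pass through.
points = np.array([[0.0, 0.0], [1.0, 2.0], [3.0, 3.0], [4.0, 1.0]])

# Chord-length parametrization: parameter increments follow the
# distances between consecutive points.
chords = np.linalg.norm(np.diff(points, axis=0), axis=1)
t = np.concatenate([[0.0], np.cumsum(chords)])

# One cubic spline per coordinate gives parametric equations x(t), y(t).
sx = CubicSpline(t, points[:, 0])
sy = CubicSpline(t, points[:, 1])

# Sample the smooth curve densely for plotting or further processing.
ts = np.linspace(t[0], t[-1], 200)
curve = np.column_stack([sx(ts), sy(ts)])
```

Because each coordinate spline interpolates its data, the parametric curve passes exactly through every input point at its parameter value.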
Institute of Scientific and Technical Information of China (English)
吴泽福
2012-01-01
Based on a comparison of basic static estimation methods for the term structure of interest rates (TSIR), we improve the B-spline function estimation method with respect to the estimation programme, the choice of the number of nodes, and node placement. To overcome the subjective effect of B-spline node distribution and the C2 smoothness condition on the discount function, we introduce a negative exponential smoothing cubic L1-spline optimization technique, changing the minimization of the estimation error from a quadratic sum to an absolute-value criterion and minimizing the volatility of the discount function. This increases the reliability of the estimates and the ability to predict structural changes in short-term interest-rate volatility, preserves the B-spline's advantage in depicting long-term interest-rate volatility trends, and reduces excessive volatility of the discount function.
Generalized fairing algorithm of parametric cubic splines
Institute of Scientific and Technical Information of China (English)
WANG Yuan-jun; CAO Yuan
2006-01-01
Kjellander has reported an algorithm for fairing uniform parametric cubic splines. Poliakoff extended Kjellander's algorithm to the non-uniform case. However, they merely changed the bad point's position and neglected the smoothing of the tangent at the bad point. In this paper, we present a fairing algorithm that changes both the point's position and its corresponding tangent vector. The new algorithm possesses the minimum-energy property. We also prove that Poliakoff's fairing algorithm can be deduced from our fairing algorithm. Several fairing examples are given in this paper.
Directory of Open Access Journals (Sweden)
S. Abhishek
2016-07-01
It is well understood that in any data acquisition system, reducing the amount of data reduces time and energy, but the major trade-off is the quality of the outcome: normally, the less data sensed, the lower the quality. Compressed Sensing (CS) allows a solution for sampling below the Nyquist rate. The challenging problem of increasing the reconstruction quality with fewer samples from an unprocessed data set is addressed here by the use of representative coordinates selected from different orders of splines. We have made a detailed comparison with 10 orthogonal and 6 biorthogonal wavelets on two sets of data from the MIT Arrhythmia database, and our results show that the spline coordinates work better than the wavelets. The generation of two new types of splines, exponential and double exponential, is also described. We believe that this is one of the first attempts at Compressed Sensing-based ECG reconstruction using raw data.
Hilbertian kernels and spline functions
Atteia, M
1992-01-01
In this monograph, which is an extensive study of Hilbertian approximation, the emphasis is placed on spline functions theory. The origin of the book was an effort to show that spline theory parallels Hilbertian Kernel theory, not only for splines derived from minimization of a quadratic functional but more generally for splines considered as piecewise functions type. Being as far as possible self-contained, the book may be used as a reference, with information about developments in linear approximation, convex optimization, mechanics and partial differential equations.
Splines and variational methods
Prenter, P M
2008-01-01
One of the clearest available introductions to variational methods, this text requires only a minimal background in calculus and linear algebra. Its self-contained treatment explains the application of theoretic notions to the kinds of physical problems that engineers regularly encounter. The text's first half concerns approximation theoretic notions, exploring the theory and computation of one- and two-dimensional polynomial and other spline functions. Later chapters examine variational methods in the solution of operator equations, focusing on boundary value problems in one and two dimension
Straight-sided Spline Optimization
DEFF Research Database (Denmark)
Pedersen, Niels Leergaard
2011-01-01
and the subject of improving the design. The present paper concentrates on the optimization of splines and the predictions of stress concentrations, which are determined by finite element analysis (FEA). Using design modifications, that do not change the spline load carrying capacity, it is shown that large...
Mathematical research on spline functions
Horner, J. M.
1973-01-01
One approach to spline functions is to approximate the integrand in J coarsely and solve the resulting problem exactly. If the integrand in J is approximated by (y'')², the resulting problem lends itself to exact solution: the familiar cubic spline. Another approach is to investigate various approximations to the integrand in J and attempt to solve the resulting problems. The results are described.
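The minimizing property behind the cubic spline can be checked numerically: among all C² interpolants of the data, the natural cubic spline minimizes J(g) = ∫ g''(x)² dx, so its roughness cannot exceed that of, say, the interpolating polynomial. A small sketch with illustrative data:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Illustrative interpolation data.
x = np.arange(7.0)
y = np.array([0.0, 2.0, 1.0, 3.0, 0.0, 2.0, 1.0])

# Natural cubic spline: the C^2 interpolant minimizing J(g) = integral of g''^2.
spline = CubicSpline(x, y, bc_type='natural')
# Degree-6 interpolating polynomial: another C^2 interpolant of the same data.
poly = np.polynomial.Polynomial.fit(x, y, deg=len(x) - 1)

grid = np.linspace(x[0], x[-1], 20001)
dx = grid[1] - grid[0]

j_spline = dx * np.sum(spline(grid, 2) ** 2)    # roughness of the spline
j_poly = dx * np.sum(poly.deriv(2)(grid) ** 2)  # roughness of the polynomial
```

On wiggly data like this, the polynomial oscillates and its roughness is strictly larger.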
On Characterization of Quadratic Splines
DEFF Research Database (Denmark)
Chen, B. T.; Madsen, Kaj; Zhang, Shuzhong
2005-01-01
that the representation can be refined in a neighborhood of a non-degenerate point and a set of non-degenerate minimizers. Based on these characterizations, many existing algorithms for specific convex quadratic splines are also finite convergent for a general convex quadratic spline. Finally, we study the relationship...
Free-Form Deformation with Rational DMS-Spline Volumes
Institute of Scientific and Technical Information of China (English)
Gang Xu; Guo-Zhao Wang; Xiao-Diao Chen
2008-01-01
In this paper, we propose a novel free-form deformation (FFD) technique, RDMS-FFD (Rational DMS-FFD), based on rational DMS-spline volumes. RDMS-FFD inherits some good properties of rational DMS-spline volumes and combines more deformation techniques than previous FFD methods in a consistent framework, such as local deformation, control lattices of arbitrary topology, smooth deformation, multiresolution deformation, and direct manipulation of deformation. We first introduce the rational DMS-spline volume by directly generalizing previous results related to DMS-splines. How to generate a tetrahedral domain that approximates the shape of the object to be deformed is also introduced in this paper. Unlike traditional FFD techniques, we manipulate the vertices of the tetrahedral domain to achieve deformation results. Our system demonstrates that RDMS-FFD is powerful and intuitive in geometric modeling.
Fast space-variant elliptical filtering using box splines
Chaudhury, Kunal Narayan; Unser, Michael
2010-01-01
The efficient realization of linear space-variant (non-convolution) filters is a challenging computational problem in image processing. In this paper, we demonstrate that it is possible to filter an image with a Gaussian-like elliptic window of varying size, elongation and orientation using a fixed number of computations per pixel. The associated algorithm, which is based on a family of smooth compactly supported piecewise polynomials, the radially-uniform box splines, is realized using pre-integration and local finite-differences. The radially-uniform box splines are constructed through the repeated convolution of a fixed number of box distributions, which have been suitably scaled and distributed radially in a uniform fashion. The attractive features of these box splines are their asymptotic behavior, their simple covariance structure, and their quasi-separability. They converge to Gaussians with the increase of their order, and are used to approximate anisotropic Gaussians of varying covariance simply by ...
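The convergence to Gaussians mentioned above can be illustrated directly (a one-dimensional sketch, not the paper's filtering algorithm): repeated convolution of a box kernel with itself approaches a Gaussian of matched variance.

```python
import numpy as np

# Repeatedly convolving a box (uniform) kernel with itself raises the
# spline order by one per pass; the result approaches a Gaussian.
m, order = 5, 12              # taps per box factor, number of factors
box = np.ones(m) / m

kernel = box.copy()
for _ in range(order - 1):
    kernel = np.convolve(kernel, box)

# Matching Gaussian: each box factor contributes variance (m^2 - 1) / 12.
var = order * (m ** 2 - 1) / 12.0
half = (len(kernel) - 1) // 2
x = np.arange(-half, half + 1)
gauss = np.exp(-x ** 2 / (2.0 * var))
gauss /= gauss.sum()

max_diff = np.max(np.abs(kernel - gauss))
```

By the central limit theorem the pointwise gap shrinks as the order grows, which is the asymptotic behavior the box-spline construction exploits.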
Theory, computation, and application of exponential splines
Mccartin, B. J.
1981-01-01
A generalization of the semiclassical cubic spline known in the literature as the exponential spline is discussed. In actuality, the exponential spline represents a continuum of interpolants ranging from the cubic spline to the linear spline. A particular member of this family is uniquely specified by the choice of certain tension parameters. The theoretical underpinnings of the exponential spline are outlined. This development roughly parallels the existing theory for cubic splines. The primary extension lies in the ability of the exponential spline to preserve convexity and monotonicity present in the data. Next, the numerical computation of the exponential spline is discussed. A variety of numerical devices are employed to produce a stable and robust algorithm. An algorithm for the selection of tension parameters that will produce a shape preserving approximant is developed. A sequence of selected curve-fitting examples are presented which clearly demonstrate the advantages of exponential splines over cubic splines.
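A single segment of an interpolant from this family can be sketched directly. On [0, 1] the exponential spline lies in span{1, x, sinh(px), cosh(px)}, so matching end values and end slopes reduces to a 4×4 linear system; raising the tension p pulls the segment toward the chord through its endpoints. The helper below is an illustrative sketch, not the algorithm of the paper:

```python
import numpy as np

def tension_segment(y0, y1, m0, m1, p):
    """Exponential-spline segment s(x) = a + b*x + c*sinh(p*x) + d*cosh(p*x)
    on [0, 1], matching end values y0, y1 and end slopes m0, m1.
    Illustrative sketch only; p is the tension parameter."""
    A = np.array([
        [1.0, 0.0, 0.0, 1.0],                        # s(0)  = y0
        [1.0, 1.0, np.sinh(p), np.cosh(p)],          # s(1)  = y1
        [0.0, 1.0, p, 0.0],                          # s'(0) = m0
        [0.0, 1.0, p * np.cosh(p), p * np.sinh(p)],  # s'(1) = m1
    ])
    a, b, c, d = np.linalg.solve(A, [y0, y1, m0, m1])
    return lambda x: a + b * x + c * np.sinh(p * x) + d * np.cosh(p * x)

# Same Hermite data, low and high tension: the high-tension segment
# stays much closer to the straight line joining the endpoints.
low = tension_segment(0.0, 0.0, 2.0, 0.0, p=1.0)
high = tension_segment(0.0, 0.0, 2.0, 0.0, p=20.0)
```

For very large p the system becomes ill-conditioned, which is one reason practical implementations use specialized evaluation schemes rather than a naive solve.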
Penalized Spline: a General Robust Trajectory Model for ZIYUAN-3 Satellite
Pan, H.; Zou, Z.
2016-06-01
Owing to the dynamic imaging system, the trajectory model plays a very important role in the geometric processing of high-resolution satellite imagery. However, establishing a trajectory model is difficult when only discrete and noisy data are available. In this manuscript, we propose a general robust trajectory model, the penalized spline model, which fits trajectory data well and smooths noise. The penalty parameter λ, which controls the trade-off between smoothness and fitting accuracy, can be estimated by generalized cross-validation. Five other trajectory models, including third-order polynomials, Chebyshev polynomials, linear interpolation, Lagrange interpolation and cubic splines, are compared with the penalized spline model. Both the sophisticated ephemeris and the on-board ephemeris are used to compare the orbit models. The penalized spline model smooths part of the noise, and accuracy decreases as the orbit length increases. The band-to-band misregistration of ZiYuan-3 Dengfeng and Faizabad multispectral images is used to evaluate the proposed method. On the Dengfeng dataset, the third-order polynomial and Chebyshev approximations could not model the oscillation, and introduced misregistration of 0.57 pixels in the across-track direction and 0.33 pixels in the along-track direction. On the Faizabad dataset, the linear interpolation, Lagrange interpolation and cubic spline models suffer from noise, introducing larger misregistration than the approximation models. Experimental results suggest the penalized spline model can model the oscillation and smooth noise.
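The penalized-spline fit itself can be sketched as a P-spline: a B-spline basis with a second-order difference penalty, with λ chosen by minimizing a generalized cross-validation score. This is a generic sketch on synthetic data, not the authors' trajectory code; the basis size, penalty order, and λ grid are arbitrary choices.

```python
import numpy as np
from scipy.interpolate import BSpline

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
truth = np.sin(2 * np.pi * x)          # synthetic smooth "trajectory"
y = truth + 0.3 * rng.standard_normal(x.size)

# Cubic B-spline basis on an open uniform knot vector.
k = 3
t = np.r_[np.zeros(k), np.linspace(0.0, 1.0, 18), np.ones(k)]
n_basis = len(t) - k - 1
B = np.column_stack([BSpline(t, np.eye(n_basis)[i], k)(x) for i in range(n_basis)])
D = np.diff(np.eye(n_basis), n=2, axis=0)   # second-order difference penalty

def penalized_fit(lam):
    A = B.T @ B + lam * (D.T @ D)
    coef = np.linalg.solve(A, B.T @ y)
    edf = np.trace(np.linalg.solve(A, B.T @ B))   # effective degrees of freedom
    rss = np.sum((y - B @ coef) ** 2)
    gcv = x.size * rss / (x.size - edf) ** 2      # generalized cross-validation
    return coef, gcv

# Pick the lambda that minimizes the GCV score over a grid.
fits = [penalized_fit(lam) for lam in np.logspace(-6, 2, 30)]
best_coef, _ = min(fits, key=lambda cs: cs[1])
smooth = B @ best_coef
```

The GCV-selected fit sits much closer to the underlying smooth signal than the noisy observations do.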
Spline Histogram Method for Reconstruction of Probability Density Functions of Clusters of Galaxies
Docenko, Dmitrijs; Berzins, Karlis
We describe the spline histogram algorithm which is useful for visualization of the probability density function setting up a statistical hypothesis for a test. The spline histogram is constructed from discrete data measurements using tensioned cubic spline interpolation of the cumulative distribution function which is then differentiated and smoothed using the Savitzky-Golay filter. The optimal width of the filter is determined by minimization of the Integrated Square Error function. The current distribution of the TCSplin algorithm written in f77 with IDL and Gnuplot visualization scripts is available from www.virac.lv/en/soft.html.
Spline histogram method for reconstruction of probability density function of clusters of galaxies
Docenko, Dmitrijs; Berzins, Karlis
2003-01-01
We describe the spline histogram algorithm which is useful for visualization of the probability density function setting up a statistical hypothesis for a test. The spline histogram is constructed from discrete data measurements using tensioned cubic spline interpolation of the cumulative distribution function which is then differentiated and smoothed using the Savitzky-Golay filter. The optimal width of the filter is determined by minimization of the Integrated Square Error function. The current distribution of the TCSplin algorithm written in f77 with IDL and Gnuplot visualization scripts is available from http://www.virac.lv/en/soft.html
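The pipeline can be sketched with SciPy on synthetic data. SciPy has no tensioned cubic spline, so the shape-preserving PCHIP interpolant stands in for it here; the node thinning, grid size, and filter width are illustrative choices rather than the TCSplin defaults.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator
from scipy.signal import savgol_filter

rng = np.random.default_rng(1)
sample = np.sort(rng.standard_normal(2000))
ecdf = np.arange(1, sample.size + 1) / sample.size

# Interpolate the empirical CDF at a thinned set of nodes (a monotone
# stand-in for the tensioned cubic spline), then differentiate it to
# obtain a raw density estimate.
nodes, vals = sample[::25], ecdf[::25]
cdf_spline = PchipInterpolator(nodes, vals)
grid = np.linspace(nodes[0], nodes[-1], 801)
density = cdf_spline.derivative()(grid)

# Smooth the derivative with a Savitzky-Golay filter.
density = savgol_filter(density, window_length=51, polyorder=3)

dx = grid[1] - grid[0]
mass = density.sum() * dx            # should be close to 1
peak = grid[np.argmax(density)]      # should sit near the true mode, 0
```

The result is a smooth density estimate whose total mass stays near one and whose mode lands near that of the underlying standard normal.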
von der Linden, Wolfgang; Dose, Volker; von Toussaint, Udo
2014-06-01
Preface; Part I. Introduction: 1. The meaning of probability; 2. Basic definitions; 3. Bayesian inference; 4. Combinatorics; 5. Random walks; 6. Limit theorems; 7. Continuous distributions; 8. The central limit theorem; 9. Poisson processes and waiting times; Part II. Assigning Probabilities: 10. Transformation invariance; 11. Maximum entropy; 12. Qualified maximum entropy; 13. Global smoothness; Part III. Parameter Estimation: 14. Bayesian parameter estimation; 15. Frequentist parameter estimation; 16. The Cramer-Rao inequality; Part IV. Testing Hypotheses: 17. The Bayesian way; 18. The frequentist way; 19. Sampling distributions; 20. Bayesian vs frequentist hypothesis tests; Part V. Real World Applications: 21. Regression; 22. Inconsistent data; 23. Unrecognized signal contributions; 24. Change point problems; 25. Function estimation; 26. Integral equations; 27. Model selection; 28. Bayesian experimental design; Part VI. Probabilistic Numerical Techniques: 29. Numerical integration; 30. Monte Carlo methods; 31. Nested sampling; Appendixes; References; Index.
Complexity of Approximation by Conic Splines (Extended Abstract)
Petitjean, Sylvain; Ghosh, Sunayana; Vegter, Gert
2007-01-01
In this paper we show that the complexity, i.e., the number of elements, of a parabolic or conic spline approximating a sufficiently smooth curve with non-vanishing curvature to within Hausdorff distance ε is c1ε^−1/4 + O(1), or c2ε^−1/5 + O(1), respectively. The constants c1 and c2 are expressed in
Institute of Scientific and Technical Information of China (English)
朱慧明; 周峰; 曾昭法; 李荣; 游万海
2015-01-01
In testing for smooth transition cointegration, parameter estimation is uncertain and the cointegration test is complex. This paper proposes a smooth transition regression model and conducts a Bayesian nonlinear cointegration analysis. Based on the choice of parameter priors for the model and the characteristics of the posterior conditional distributions of the parameters, a Metropolis-Hastings-within-Gibbs sampling algorithm is designed to estimate the parameters, and a Bayesian unit-root test is used to test the stationarity of the regression residual, addressing the uncertainty of parameter estimation and the complexity of the cointegration test. The research applies the RMB/U.S. dollar exchange rate and the interest-rate differential between China and the U.S. in an empirical analysis. The results indicate that the MH-Gibbs scheme can effectively estimate the parameters of the smooth transition model, and we find a smooth transition cointegration relationship between exchange-rate fluctuation and the interest-rate differential.
Bayesian geostatistical modeling of Malaria Indicator Survey data in Angola.
Directory of Open Access Journals (Sweden)
Laura Gosoniu
The 2006-2007 Angola Malaria Indicator Survey (AMIS) is the first nationally representative household survey in the country assessing coverage of the key malaria control interventions and measuring malaria-related burden among children under 5 years of age. In this paper, the Angolan MIS data were analyzed to produce the first smooth map of parasitaemia prevalence based on contemporary nationwide empirical data in the country. Bayesian geostatistical models were fitted to assess the effect of interventions after adjusting for environmental, climatic and socio-economic factors. Non-linear relationships between parasitaemia risk and environmental predictors were modeled by categorizing the covariates and by employing two non-parametric approaches, B-splines and P-splines. The results of the model validation showed that the categorical model better captured the relationship between parasitaemia prevalence and the environmental factors. Model fit and prediction were handled within a Bayesian framework using Markov chain Monte Carlo (MCMC) simulations. Combining estimates of parasitaemia prevalence with the number of children under 5, we obtained estimates of the number of infected children in the country. The population-adjusted prevalence ranges from in Namibe province to in Malanje province. The odds of parasitaemia in children living in a household with at least ITNs per person were 41% lower (CI: 14%, 60%) than in those with fewer ITNs. The estimates of the number of parasitaemic children produced in this paper are important for planning and implementing malaria control interventions and for monitoring the impact of prevention and control activities.
Examination of the Circle Spline Routine
Dolin, R. M.; Jaeger, D. L.
1985-01-01
The Circle Spline routine is currently being used for generating both two- and three-dimensional spline curves. It was developed for use in ESCHER, a mesh generation routine written to provide a computationally simple and efficient method for building meshes along curved surfaces. Circle Spline is a parametric linear blending spline. Because many computerized machining operations involve circular shapes, the Circle Spline is well suited to both the design and manufacturing processes and shows promise as an alternative to the spline methods currently supported by the Initial Graphics Exchange Specification (IGES).
Comparison of CSC method and the B-net method for deducing smoothness condition
Institute of Scientific and Technical Information of China (English)
Renhong Wang; Kai Qu
2009-01-01
The first author of this paper established an approach to studying multivariate splines over arbitrary partitions, and presented the so-called conformality method of smoothing cofactors (the CSC method). Farin introduced the B-net method, which is suitable for studying multivariate splines over simplex partitions. This paper shows that the smoothness conditions obtained by the B-net method can be derived by the CSC method for spline spaces over simplex partitions, and that the CSC method is in some sense more capable than the B-net method for studying multivariate splines.
Isogeometric analysis using T-splines
Bazilevs, Yuri
2010-01-01
We explore T-splines, a generalization of NURBS enabling local refinement, as a basis for isogeometric analysis. We review T-splines as a surface design methodology and then develop it for engineering analysis applications. We test T-splines on some elementary two-dimensional and three-dimensional fluid and structural analysis problems and attain good results in all cases. We summarize the current status of T-splines, their limitations, and future possibilities. © 2009 Elsevier B.V.
Numerical Methods Using B-Splines
Shariff, Karim; Merriam, Marshal (Technical Monitor)
1997-01-01
The seminar will discuss (1) The current range of applications for which B-spline schemes may be appropriate (2) The property of high-resolution and the relationship between B-spline and compact schemes (3) Comparison between finite-element, Hermite finite element and B-spline schemes (4) Mesh embedding using B-splines (5) A method for the incompressible Navier-Stokes equations in curvilinear coordinates using divergence-free expansions.
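Point (2), the high-resolution property of B-spline schemes, can be glimpsed in a few lines: on the same coarse grid, the derivative of a quintic B-spline interpolant is far more accurate than a second-order finite difference (an illustrative sketch, not part of the seminar material).

```python
import numpy as np
from scipy.interpolate import make_interp_spline

# Compare derivative approximations of sin(x) on a coarse grid:
# quintic B-spline interpolant vs. second-order finite differences.
x = np.linspace(0.0, 2.0 * np.pi, 41)
f = np.sin(x)

spl = make_interp_spline(x, f, k=5)      # quintic B-spline interpolant
spline_err = np.max(np.abs(spl.derivative()(x) - np.cos(x)))
fd_err = np.max(np.abs(np.gradient(f, x) - np.cos(x)))
```

The spline derivative error scales like h⁵ versus h² for the finite difference, which is the resolution advantage exploited by B-spline discretizations.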
Conformal interpolating algorithm based on B-spline for aspheric ultra-precision machining
Li, Chenggui; Sun, Dan; Wang, Min
2006-02-01
Numerically controlled machining and on-line compensation of aspheric surfaces are key techniques in ultra-precision machining. In this paper, a conformal cubic B-spline interpolating curve is first applied to fit the characteristic curve of an aspheric surface. The algorithm and process are also proposed and simulated in Matlab 7.0. To evaluate the performance of the conformal B-spline interpolation, a comparison was made with linear and circular interpolation. The results verify that this method ensures smoothness of the interpolating spline curve and preserves the original shape characteristics. The surface quality interpolated by the B-spline is higher than that by line or by circular arc. The algorithm helps increase the surface form precision of the workpiece during ultra-precision machining.
Rounaghi, Mohammad Mahdi; Abbaszadeh, Mohammad Reza; Arashi, Mohammad
2015-11-01
One of the topics of greatest interest to investors is stock price changes. Investors with long-term goals are sensitive to the stock price and its changes and react to them. In this regard, we used the multivariate adaptive regression splines (MARS) model and a semi-parametric splines technique for predicting stock prices in this study. The MARS model, a nonparametric method, is an adaptive method for regression that suits problems with high dimensions and several variables. The semi-parametric technique used here is based on smoothing splines, a nonparametric regression method. We used 40 variables (30 accounting variables and 10 economic variables) for predicting stock prices with the MARS model and with the semi-parametric splines technique. After investigating the models, we selected 4 accounting variables (book value per share, predicted earnings per share, P/E ratio, and risk) as influential variables for predicting stock prices using the MARS model. After fitting the semi-parametric splines technique, only 4 accounting variables (dividends, net EPS, EPS forecast, and P/E ratio) were selected as effective variables in forecasting stock prices.
MacNab, Ying C
2007-11-20
This paper presents a Bayesian disability-adjusted life year (DALY) methodology for spatial and spatiotemporal analyses of disease and/or injury burden. A Bayesian disease mapping model framework, which blends together spatial modelling, shared-component modelling (SCM), temporal modelling, ecological modelling, and non-linear modelling, is developed for small-area DALY estimation and inference. In particular, we develop a model framework that enables SCM as well as multivariate CAR modelling of non-fatal and fatal disease or injury rates and facilitates spline smoothing for non-linear modelling of temporal rate and risk trends. Using British Columbia (Canada) hospital admission-separation data and vital statistics mortality data on non-fatal and fatal road traffic injuries for the male population aged 20-39, for the years 1991-2000, and for 84 local health areas and 16 health service delivery areas, spatial and spatiotemporal estimation and inference on years of life lost due to premature death, years lived with disability, and DALYs are presented. Fully Bayesian estimation and inference, with Markov chain Monte Carlo implementation, are illustrated. We present a methodological framework within which the DALY and Bayesian disease mapping methodologies interface and intersect. Its development brings the relative importance of premature mortality and disability into the assessment of community health and health needs, in order to provide reliable information and evidence for community-based public health surveillance and evaluation, disease and injury prevention, and resource provision.
HILBERTIAN APPROACH FOR UNIVARIATE SPLINE WITH TENSION
Institute of Scientific and Technical Information of China (English)
A.Bouhamidi
2001-01-01
In this work, a new approach is proposed for constructing splines with tension. The basic idea is the use of distribution theory, which allows us to define suitable Hilbert spaces in which the tension spline minimizes some energy functional. Classical orthogonality conditions and characterizations of the spline in terms of a fundamental solution of a differential operator are provided. An explicit representation of the tension spline is given. The tension spline can be computed by solving a linear system. Some numerical examples are given to illustrate this approach.
Optimization of straight-sided spline design
DEFF Research Database (Denmark)
Pedersen, Niels Leergaard
2011-01-01
Spline connection of shaft and hub is commonly applied when large torque capacity is needed together with the possibility of disassembly. The designs of these splines are generally controlled by different standards. In view of the common use of splines, it seems that few papers deal with splines and the subject of improving the design. The present paper concentrates on the optimization of splines and the prediction of stress concentrations, which are determined by finite element analysis (FEA). Using different design modifications that do not change the spline load carrying capacity, it is shown that large reductions in the maximum stress are possible. The fatigue life of a spline can be greatly improved, with up to a 25% reduction in the maximum stress level. Design modifications are given as simple analytical functions (a modified super-elliptical shape) with only two active design parameters...
Bayesian inference of local geomagnetic secular variation curves: application to archaeomagnetism
Lanos, Philippe
2014-05-01
The errors that occur at different stages of the archaeomagnetic calibration process are combined using Bayesian hierarchical modelling. The archaeomagnetic data obtained from archaeological structures such as hearths, kilns or sets of bricks and tiles exhibit considerable experimental errors and are generally more or less well dated by archaeological context, history or chronometric methods (14C, TL, dendrochronology, etc.). They can also be associated with stratigraphic observations, which provide prior relative chronological information. The modelling we propose allows all these observations and errors to be linked together through appropriate prior probability densities. The model also includes penalized cubic splines for estimating the univariate, spherical or three-dimensional curves of the secular variation of the geomagnetic field (inclination, declination, intensity) over time at a given locality. The mean smooth curve we obtain, with its posterior Bayesian envelope, adapts to the effects of variability in the density of reference points over time. Moreover, the hierarchical modelling also provides an efficient way to penalize outliers automatically. With this new posterior estimate of the curve, the Bayesian statistical framework then allows the calendar dates of undated archaeological features (such as kilns) to be estimated from one, two or three geomagnetic parameters (inclination, declination and/or intensity). Date estimates are presented in the same way as those arising from radiocarbon dating. To illustrate the model and the inference method, we present results based on recently published French, Bulgarian and Austrian datasets.
Spline and spline wavelet methods with applications to signal and image processing
Averbuch, Amir Z; Zheludev, Valery A
This volume provides universal methodologies, accompanied by Matlab software, for manipulating numerous signal and image processing applications. It is done with discrete and polynomial periodic splines. Various contributions of splines to signal and image processing are presented from a unified perspective. The presentation is based on the Zak transform and on the Spline Harmonic Analysis (SHA) methodology. SHA combines the approximation capabilities of splines with the computational efficiency of the Fast Fourier transform. SHA reduces the design of different spline types, such as splines, spline wavelets (SW), wavelet frames (SWF) and wavelet packets (SWP), and their manipulation to simple operations. Digital filters produced by the wavelet design process give rise to subdivision schemes. Subdivision schemes enable fast explicit computation of spline values at dyadic and triadic rational points, which is used for upsampling signals and images. In addition to the design of a diverse library of splines, SW, SWP a...
Directory of Open Access Journals (Sweden)
Ilyasov R. H.
2014-10-01
Full Text Available The energy market shows strong seasonal fluctuations. A striking example of the impact of seasonality is the dynamics of natural and associated gas production in Russia. We use two approaches to the identification and analysis of seasonality: a classical econometric approach based on various smoothing procedures, and a spline method that approximates the economic dynamics with cubic splines and applies phase analysis. The comparison of the two methods highlights the benefits of using spline functions and phase analysis when modelling the seasonality of economic dynamics.
Triangular bubble spline surfaces.
Kapl, Mario; Byrtus, Marek; Jüttler, Bert
2011-11-01
We present a new method for generating a [Formula: see text]-surface from a triangular network of compatible surface strips. The compatible surface strips are given by a network of polynomial curves with an associated implicitly defined surface, which fulfill certain compatibility conditions. Our construction is based on a new concept, called bubble patches, to represent the single surface patches. The compatible surface strips provide a simple [Formula: see text]-condition between two neighboring bubble patches, which are used to construct surface patches, connected with [Formula: see text]-continuity. For [Formula: see text], we describe the obtained [Formula: see text]-condition in detail. It can be generalized to any [Formula: see text]. The construction of a single surface patch is based on Gordon-Coons interpolation for triangles. Our method is a simple local construction scheme, which works uniformly for vertices of arbitrary valency. The resulting surface is a piecewise rational surface, which interpolates the given network of polynomial curves. Several examples of [Formula: see text], [Formula: see text] and [Formula: see text]-surfaces are presented, which have been generated by using our method. The obtained surfaces are visualized with reflection lines to demonstrate the order of smoothness.
PENALIZED SPLINE: A GENERAL ROBUST TRAJECTORY MODEL FOR ZIYUAN-3 SATELLITE
Pan, H; Zou, Z
2016-01-01
Owing to the dynamic imaging system, the trajectory model plays a very important role in the geometric processing of high resolution satellite imagery. However, establishing a trajectory model is difficult when only discrete and noisy data are available. In this manuscript, we propose a general robust trajectory model, the penalized spline model, which fits trajectory data well and smooths noise. The penalty parameter λ, which controls the trade-off between smoothness and fitting accuracy, can be estimated by ...
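As a hedged illustration of the penalized-spline idea (not the authors' ZiYuan-3 implementation), the following numpy sketch fits noisy data with a cubic B-spline basis and an Eilers-Marx style second-order difference penalty on the coefficients; the data, knot grid, and value of λ are invented for illustration.

```python
import numpy as np

def _coxdeboor(i, p, x, t):
    """Cox-de Boor recursion for a single B-spline basis function."""
    if p == 0:
        return 1.0 if t[i] <= x < t[i + 1] else 0.0
    left = 0.0 if t[i + p] == t[i] else \
        (x - t[i]) / (t[i + p] - t[i]) * _coxdeboor(i, p - 1, x, t)
    right = 0.0 if t[i + p + 1] == t[i + 1] else \
        (t[i + p + 1] - x) / (t[i + p + 1] - t[i + 1]) * _coxdeboor(i + 1, p - 1, x, t)
    return left + right

def bspline_basis(x, knots, degree):
    """Design matrix of all B-spline basis functions evaluated at x."""
    n = len(knots) - degree - 1
    B = np.zeros((len(x), n))
    for j, xv in enumerate(x):
        for i in range(n):
            B[j, i] = _coxdeboor(i, degree, xv, knots)
    B[np.isclose(x, knots[-1]), -1] = 1.0   # close the half-open last interval
    return B

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(x.size)

# cubic B-spline basis on equally spaced knots, clamped at both ends
degree, n_interior = 3, 20
interior = np.linspace(0.0, 1.0, n_interior)
knots = np.concatenate([[0.0] * degree, interior, [1.0] * degree])
B = bspline_basis(x, knots, degree)

# second-order difference penalty; lam controls smoothness vs. fit
n = B.shape[1]
D = np.diff(np.eye(n), n=2, axis=0)
lam = 1.0
alpha = np.linalg.solve(B.T @ B + lam * D.T @ D, B.T @ y)
fit = B @ alpha
print(float(np.sqrt(np.mean((fit - np.sin(2 * np.pi * x)) ** 2))))
```

Increasing λ pulls the coefficient sequence toward a straight line (a smoother curve); λ → 0 recovers the unpenalized least-squares spline fit.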
Ng-Thow-Hing, Victor; Agur, Anne; Ball, Kevin A.; Fiume, Eugene; McKee, Nancy
1998-05-01
We introduce a mathematical primitive called the B-spline solid that can be used to create deformable models of muscle shape. B-spline solids can be used to model skeletal muscle for the purpose of building a data library of reusable, deformable muscles that are reconstructed from actual muscle data. Algorithms are provided for minimizing shape distortions that may be caused when fitting discrete sampled data to a continuous B-spline solid model. Visible Human image data provides a good indication of the perimeter of a muscle, but is not suitable for providing internal muscle fiber bundle arrangements which are important for physical simulation of muscle function. To obtain these fiber bundle orientations, we obtain 3-D muscle fiber bundle coordinates by triangulating optical images taken from three different camera views of serially dissected human soleus specimens. B-spline solids are represented as mathematical three-dimensional vector functions which can parameterize an enclosed volume as well as its boundary surface. They are based on B-spline basis functions, allowing local deformations via adjustable control points and smooth continuity of shape. After the B-spline solid muscle model is fitted with its external surface and internal volume arrangements, we can subsequently deform its shape to allow simulation of animated muscle tissue.
Quadrotor system identification using the multivariate multiplex b-spline
Visser, T.; De Visser, C.C.; Van Kampen, E.J.
2015-01-01
A novel method for aircraft system identification is presented that is based on a new multivariate spline type; the multivariate multiplex B-spline. The multivariate multiplex B-spline is a generalization of the recently introduced tensor-simplex B-spline. Multivariate multiplex splines obtain simil
Bayesian modeling using WinBUGS
Ntzoufras, Ioannis
2009-01-01
A hands-on introduction to the principles of Bayesian modeling using WinBUGS. Bayesian Modeling Using WinBUGS provides an easily accessible introduction to the use of WinBUGS programming techniques in a variety of Bayesian modeling settings. The author provides an accessible treatment of the topic, offering readers a smooth introduction to the principles of Bayesian modeling with detailed guidance on the practical implementation of key principles. The book begins with a basic introduction to Bayesian inference and the WinBUGS software and goes on to cover key topics, including: Markov chain Monte Carlo algorithms in Bayesian inference; generalized linear models; Bayesian hierarchical models; predictive distribution and model checking; Bayesian model and variable evaluation. Computational notes and screen captures illustrate the use of both WinBUGS and R software to apply the discussed techniques. Exercises at the end of each chapter allow readers to test their understanding of the presented concepts and all ...
Ryu, Duchwan; Li, Erning; Mallick, Bani K
2011-06-01
We consider nonparametric regression analysis in a generalized linear model (GLM) framework for data with covariates that are the subject-specific random effects of longitudinal measurements. The usual assumption that the effects of the longitudinal covariate processes are linear in the GLM may be unrealistic and if this happens it can cast doubt on the inference of observed covariate effects. Allowing the regression functions to be unknown, we propose to apply Bayesian nonparametric methods including cubic smoothing splines or P-splines for the possible nonlinearity and use an additive model in this complex setting. To improve computational efficiency, we propose the use of data-augmentation schemes. The approach allows flexible covariance structures for the random effects and within-subject measurement errors of the longitudinal processes. The posterior model space is explored through a Markov chain Monte Carlo (MCMC) sampler. The proposed methods are illustrated and compared to other approaches, the "naive" approach and the regression calibration, via simulations and by an application that investigates the relationship between obesity in adulthood and childhood growth curves.
Ryu, Duchwan
2010-09-28
We consider nonparametric regression analysis in a generalized linear model (GLM) framework for data with covariates that are the subject-specific random effects of longitudinal measurements. The usual assumption that the effects of the longitudinal covariate processes are linear in the GLM may be unrealistic and if this happens it can cast doubt on the inference of observed covariate effects. Allowing the regression functions to be unknown, we propose to apply Bayesian nonparametric methods including cubic smoothing splines or P-splines for the possible nonlinearity and use an additive model in this complex setting. To improve computational efficiency, we propose the use of data-augmentation schemes. The approach allows flexible covariance structures for the random effects and within-subject measurement errors of the longitudinal processes. The posterior model space is explored through a Markov chain Monte Carlo (MCMC) sampler. The proposed methods are illustrated and compared to other approaches, the "naive" approach and the regression calibration, via simulations and by an application that investigates the relationship between obesity in adulthood and childhood growth curves. © 2010, The International Biometric Society.
Directory of Open Access Journals (Sweden)
Are Losnegård
2013-07-01
Full Text Available Image-based tractography of white matter (WM) fiber bundles in the brain using diffusion-weighted MRI (DW-MRI) has become a useful tool in basic and clinical neuroscience. However, proper tracking is challenging due to the anatomical complexity of fiber pathways, the coarse resolution of clinically applicable whole-brain in vivo imaging techniques, and the difficulties associated with verification. In this study we introduce a new tractography algorithm using splines (denoted Spline). Spline reconstructs smooth fiber trajectories iteratively, in contrast to most other tractography algorithms, which create piecewise linear fiber tract segments followed by spline fitting. Using DW-MRI recordings from eight healthy elderly people participating in a longitudinal study of cognitive aging, we compare our Spline algorithm to two state-of-the-art tracking methods from the TrackVis software suite. The comparison is done quantitatively using diffusion metrics (fractional anisotropy, FA), with (i) tract averaging, (ii) longitudinal linear mixed-effects model fitting, and (iii) detailed along-tract analysis. Further validation is done on recordings from a diffusion hardware phantom, mimicking a coronal brain slice, with a known ground truth. Results from the longitudinal aging study showed high sensitivity of Spline tracking to individual aging patterns of mean FA when combined with linear mixed-effects modelling, and moderately strong differences in the along-tract analysis of specific tracts, whereas the tract-averaged comparison using simple linear OLS regression revealed fewer differences between Spline and the two other tractography algorithms. In the brain phantom experiments with a ground truth, we demonstrated improved tracking ability of Spline compared to the two reference tractography algorithms.
C2-rational Cubic Spline Involving Tension Parameters
Indian Academy of Sciences (India)
M Shrivastava; J Joseph
2000-08-01
In the present paper, a C1 piecewise rational cubic spline function involving tension parameters is considered which produces a monotonic interpolant to a given monotonic data set. It is observed that under certain conditions the interpolant preserves the convexity property of the data set. The existence and uniqueness of a C2-rational cubic spline interpolant are established. The error analysis of the spline interpolant is also given.
Conformal Solid T-spline Construction from Boundary T-spline Representations
2012-07-01
idea of isogeometric analysis [6, 2], one challenge is to automatically create a conformal solid NURBS/T-spline model with the given spline... solid NURBS construction method for patient-specific vascular geometric models was presented. In [1], a swept volume parameterization was built for... representations. A general methodology for constructing a conformal solid T-spline from boundary T-spline/NURBS representations is presented.
Inference in dynamic systems using B-splines and quasilinearized ODE penalties.
Frasso, Gianluca; Jaeger, Jonathan; Lambert, Philippe
2016-05-01
Nonlinear (systems of) ordinary differential equations (ODEs) are common tools in the analysis of complex one-dimensional dynamic systems. We propose a smoothing approach regularized by a quasilinearized ODE-based penalty. Within the quasilinearized spline-based framework, the estimation reduces to a conditionally linear problem for the optimization of the spline coefficients. Furthermore, standard ODE compliance parameter(s) selection criteria are applicable. We evaluate the performance of the proposed strategy through simulated and real data examples. Simulation studies suggest that the proposed procedure ensures more accurate estimates than standard nonlinear least squares approaches when the state (initial and/or boundary) conditions are not known.
Noorkojuri, Hoda; Hajizadeh, Ebrahim; Baghestani, Ahmadreza; Pourhoseingholi, Mohamadamin
2013-01-01
Background: Smoothing methods are widely used to analyze epidemiologic data, particularly in the area of environmental health, where non-linear relationships are not uncommon. This study focused on three different smoothing methods in Cox models: penalized splines, restricted cubic splines and fractional polynomials. Objectives: The aim of this study was to assess the effects of prognostic factors on survival of patients with gastric cancer using the smoothing methods in the Cox model and the Cox propor...
Algorithms for spline and other approximations to functions and data
Phillips, G. M.; Taylor, P. J.
1992-12-01
A succinct introduction to splines, explaining how and why B-splines are used as a basis and how cubic and quadratic splines may be constructed, is followed by a brief account of Hermite interpolation and Padé approximations.
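The cubic-spline construction the abstract alludes to can be sketched concretely. Below is a minimal, self-contained numpy illustration of a natural cubic interpolating spline, built by solving the standard tridiagonal system for the second derivatives at the knots; the sample data (sin sampled at 8 knots) are invented for illustration.

```python
import numpy as np

def natural_cubic_spline(xk, yk):
    """Return an evaluator for the natural cubic spline through (xk, yk)."""
    n = len(xk)
    h = np.diff(xk)
    # tridiagonal system for the interior second derivatives M_1..M_{n-2};
    # natural boundary conditions set M_0 = M_{n-1} = 0
    A = np.zeros((n - 2, n - 2))
    rhs = np.zeros(n - 2)
    for i in range(1, n - 1):
        r = i - 1
        A[r, r] = 2.0 * (h[i - 1] + h[i])
        if r > 0:
            A[r, r - 1] = h[i - 1]
        if r < n - 3:
            A[r, r + 1] = h[i]
        rhs[r] = 6.0 * ((yk[i + 1] - yk[i]) / h[i] - (yk[i] - yk[i - 1]) / h[i - 1])
    M = np.zeros(n)
    M[1:-1] = np.linalg.solve(A, rhs)

    def s(x):
        # locate each query point's interval, then evaluate the cubic piece
        x = np.asarray(x, dtype=float)
        i = np.clip(np.searchsorted(xk, x) - 1, 0, n - 2)
        hi, a, b = h[i], xk[i], xk[i + 1]
        return (M[i] * (b - x) ** 3 + M[i + 1] * (x - a) ** 3) / (6 * hi) \
             + (yk[i] - M[i] * hi ** 2 / 6) * (b - x) / hi \
             + (yk[i + 1] - M[i + 1] * hi ** 2 / 6) * (x - a) / hi
    return s

xk = np.linspace(0.0, np.pi, 8)
yk = np.sin(xk)
s = natural_cubic_spline(xk, yk)
xs = np.linspace(0.0, np.pi, 200)
print(float(np.max(np.abs(s(xs) - np.sin(xs)))))  # small interpolation error
```

The same fitted second derivatives also give the spline's curvature at the knots, which is why this construction generalizes naturally to smoothing and tension variants.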
Harmening, Corinna; Neuner, Hans
2016-09-01
Due to the establishment of terrestrial laser scanners, the analysis strategies in engineering geodesy are changing from pointwise approaches to areal ones. These areal analysis strategies are commonly built on the modelling of the acquired point clouds. Freeform curves and surfaces such as B-spline curves/surfaces are one possible approach to obtain space-continuous information. A variety of parameters determines the B-spline's appearance; the B-spline's complexity is mostly determined by the number of control points. Usually, this number of control points is chosen quite arbitrarily by intuitive trial-and-error procedures. In this paper, the Akaike Information Criterion and the Bayesian Information Criterion are investigated with regard to a justified and reproducible choice of the optimal number of control points of B-spline curves. Additionally, we develop a method based on the structural risk minimization of statistical learning theory. Unlike the Akaike and the Bayesian Information Criteria, this method does not use the number of parameters as the complexity measure of the approximating functions but their Vapnik-Chervonenkis dimension. Furthermore, it is also valid for non-linear models. Thus, the three methods differ in the target function to be minimized and consequently in their definition of optimality. The present paper will be continued by a second paper dealing with the choice of the optimal number of control points of B-spline surfaces.
Trajectory control of an articulated robot with a parallel drive arm based on splines under tension
Yi, Seung-Jong
Today's industrial robots controlled by mini/micro computers are basically simple positioning devices. The positioning accuracy depends on the mathematical description of the robot configuration to place the end-effector at the desired position and orientation within the workspace, and on following the specified path, which requires the trajectory planner. In addition, consideration of joint velocity, acceleration, and jerk trajectories is essential for trajectory planning of industrial robots to obtain smooth operation. The newly designed 6 DOF articulated robot with a parallel drive arm mechanism, which permits the joint actuators to be placed in the same horizontal line to reduce the arm inertia and to increase load capacity and stiffness, is selected. First, the forward kinematic and inverse kinematic problems are examined. The forward kinematic equations are successfully derived based on Denavit-Hartenberg notation with independent joint angle constraints. The inverse kinematic problems are solved using the arm-wrist partitioned approach with independent joint angle constraints. Three types of curve fitting methods used in trajectory planning, i.e., polynomial functions of a given degree, cubic spline functions, and cubic spline functions under tension, are compared to select the best possible method to satisfy both smooth joint trajectories and positioning accuracy for a robot trajectory planner. Cubic spline functions under tension are selected for the new trajectory planner. This method is implemented for a 6 DOF articulated robot with a parallel drive arm mechanism to improve the smoothness of the joint trajectories and the positioning accuracy of the manipulator. Also, this approach is compared with existing trajectory planners, 4-3-4 polynomials and cubic spline functions, via circular arc motion simulations. The new trajectory planner using cubic spline functions under tension is implemented in the microprocessor-based robot controller and
Uniform trigonometric polynomial B-spline curves
Institute of Scientific and Technical Information of China (English)
吕勇刚; 汪国昭; 杨勋年
2002-01-01
This paper presents a new kind of uniform spline curve, named trigonometric polynomial B-splines, over the space Ω = span{sin t, cos t, t^(k-3), t^(k-4), …, t, 1}, where k is an arbitrary integer larger than or equal to 3. We show that trigonometric polynomial B-spline curves have many properties similar to traditional B-splines. Based on the explicit representation of the curve we have also presented the subdivision formulae for this new kind of curve. Since the new spline can include both polynomial curves and trigonometric curves as special cases without rational form, it can be used as an efficient new model for geometric design in the fields of CAD/CAM.
C2 quartic spline surface interpolation
Institute of Scientific and Technical Information of China (English)
张彩明; 汪嘉业
2002-01-01
This paper discusses the problem of constructing C2 quartic spline surface interpolation. Decreasing the continuity of the quartic spline to C2 offers additional degrees of freedom that can be used to adjust the precision and the shape of the interpolation surface. An approach to determining these degrees of freedom is given, the continuity equations for constructing a C2 quartic spline curve are discussed, and a new method for constructing a C2 quartic spline surface is presented. The advantages of the new method are that the equations the surface has to satisfy are strictly row diagonally dominant, and the discontinuous points of the surface are at the given data points. The constructed surface has the precision of a quartic polynomial. A comparison of the interpolation precision of the new method with cubic and quartic spline methods is included.
B-splines on 3-D tetrahedron partition in four-directional mesh
Institute of Scientific and Technical Information of China (English)
SUN; Jiachang
2001-01-01
[1] de Boor, C., Höllig, K., Riemenschneider, S. D., Box Splines, New York: Springer-Verlag, 1993. [2] Dahmen, W., Micchelli, C. A., Recent progress in multivariate splines, Interpolating Cardinal Splines as Their Degree Tends to Infinity (ed. Ward, J.), New York: Academic Press, 1983, 27. [3] de Boor, C., Topics in multivariate approximation theory, in Topics in Numerical Analysis, Lecture Notes in Mathematics (ed. Turner, P. R.), Vol. 965, New York: Springer-Verlag, 1982, 39. [4] de Boor, C., B-form basics, in Geometric Modelling (ed. Farin, G.), Philadelphia: SIAM, 1987, 131. [5] Chui, C. K., Wang, R. H., Spaces of bivariate cubic and quartic splines on type-1 triangulations, J. Math. Anal. Appl., 1984, 101: 540. [6] Jia, R. Q., Approximation order from certain spaces of smooth bivariate splines on a three-direction mesh, Trans. AMS, 1986, 295: 199. [7] Dahmen, W., On multivariate B-splines, SIAM J. Numer. Anal., 1980, 17: 179. [8] Sun Jiachang, The B-net structure and recurrence algorithms for B-splines on a three direction mesh, Mathematica Numerica Sinica, 1990, 12: 365. [9] Sun Jiachang, Some results on the field of spline theory and its applications, Contemporary Mathematics, 1994, 163: 127. [10] Sun Jiachang, Dual bases and quasi-interpolation of B-splines on S13 with three direction meshes, Acta Mathematicae Applicatae Sinica, 1991, 14: 170. [11] Wang, R. H., He, T. X., Liu, X. Y. et al., An integral method for constructing bivariate spline functions, J. Comp. Math., 1989, 7: 244. [12] Wang, R. H., Shi, X. Q., A kind of C interpolation in the n-dimensional finite element method, J. Math. Res. and Exp., 1989, 9: 173. [13] Shi, X. Q., Wang, R. H., The existence conditions of space S12(Δn), Chinese Science Bulletin, 1989, 34: 2015.
Space cutter compensation method for five-axis nonuniform rational basis spline machining
Directory of Open Access Journals (Sweden)
Yanyu Ding
2015-07-01
Full Text Available In view of the good machining performance of traditional three-axis nonuniform rational basis spline interpolation and the space cutter compensation issue in multi-axis machining, this article presents a triple nonuniform rational basis spline five-axis interpolation method, which uses three nonuniform rational basis spline curves to describe the cutter center location, cutter axis vector, and cutter contact point trajectory, respectively. The relative position of the cutter and workpiece is calculated in the workpiece coordinate system, and the cutter machining trajectory can be described precisely and smoothly using this method. The three nonuniform rational basis spline curves are transformed into a 12-dimensional Bézier curve for discretization. With the cutter contact point trajectory as the precision control condition, the discretization is fast. For different cutters and corners, a complete description method of the space cutter compensation vector is presented. Finally, the five-axis nonuniform rational basis spline machining method is further verified on a two-turntable five-axis machine.
Data Visualization using Spline Functions
Directory of Open Access Journals (Sweden)
Maria Hussain
2013-10-01
Full Text Available A two-parameter family of C1 rational cubic spline functions is presented for the graphical representation of shape-preserving curve interpolation for shaped data. These parameters have a direct impact on the shape of the curve. Constraints are developed on one family of the parameters to visualize positive, monotone and convex data, while the other family of parameters can assume any positive values. The problem of visualization of constrained data is also addressed, where the data lie above a straight line and the curve is required to lie on the same side of the line. The approximation order of the proposed rational cubic function is also investigated and found to be O(h3).
Positivity Preserving Interpolation Using Rational Bicubic Spline
Directory of Open Access Journals (Sweden)
Samsul Ariffin Abdul Karim
2015-01-01
Full Text Available This paper discusses positivity-preserving interpolation for positive surface data by extending the C1 rational cubic spline interpolant of Karim and Kong to the bivariate case. The partially blended rational bicubic spline has 12 parameters, 8 of which are free parameters. Sufficient conditions for positivity are derived on each of the four boundary curves of the network on the rectangular patch. A detailed numerical comparison with existing schemes is also given. Based on root mean square error (RMSE), our partially blended rational bicubic spline is on a par with the established methods.
Institute of Scientific and Technical Information of China (English)
Juan Chen; Chong-Jun Li; Wan-Ji Chen
2011-01-01
In this paper, a 13-node pyramid spline element is derived by using tetrahedron volume coordinates and the B-net method, which achieves second-order completeness in Cartesian coordinates. Some appropriate examples were employed to evaluate the performance of the proposed element. The numerical results show that the spline element has much better performance than the isoparametric serendipity element Q20 and its degenerate pyramid element P13, especially when the mesh is distorted, and it is comparable to the Lagrange element Q27. It has been demonstrated that the spline finite element method is an efficient tool for developing high-accuracy elements.
Non-Rigid Image Registration Algorithm Based on B-Splines Approximation
Institute of Scientific and Technical Information of China (English)
ZHANG Hongying; ZHANG Jiawan; SUN Jizhou; SUN Yigang
2007-01-01
An intensity-based non-rigid registration algorithm is discussed, which uses Gaussian smoothing to constrain the transformation to be smooth, and thus preserves the topology of images. In view of the insufficiency of the uniform Gaussian filtering of the deformation field, an automatic and accurate non-rigid image registration method based on B-splines approximation is proposed. The regularization strategy is adopted by using multi-level B-splines approximation to regularize the displacement fields in a coarse-to-fine manner. Moreover, it assigns different weights to the estimated displacements according to their reliabilities. In this way, the level of regularity can be adapted locally. Experiments were performed on both synthetic and real medical images of the brain, and the results show that the proposed method improves the registration accuracy and robustness.
A Bayesian hierarchical model for accident and injury surveillance.
MacNab, Ying C
2003-01-01
This article presents a recent study which applies Bayesian hierarchical methodology to model and analyse accident and injury surveillance data. A hierarchical Poisson random effects spatio-temporal model is introduced and an analysis of inter-regional variations and regional trends in hospitalisations due to motor vehicle accident injuries to boys aged 0-24 in the province of British Columbia, Canada, is presented. The objective of this article is to illustrate how the modelling technique can be implemented as part of an accident and injury surveillance and prevention system where transportation and/or health authorities may routinely examine accidents, injuries, and hospitalisations to target high-risk regions for prevention programs, to evaluate prevention strategies, and to assist in health planning and resource allocation. The innovation of the methodology is its ability to uncover and highlight important underlying structure of the data. Between 1987 and 1996, the British Columbia hospital separation registry registered 10,599 motor vehicle traffic injury related hospitalisations among boys aged 0-24 who resided in British Columbia, of which the majority (89%) of the injuries occurred to boys aged 15-24. The injuries were aggregated by three age groups (0-4, 5-14, and 15-24), 20 health regions (based on place of residence), and 10 calendar years (1987 to 1996), and the corresponding mid-year population estimates were used as the 'at risk' population. An empirical Bayes inference technique using penalised quasi-likelihood estimation was implemented to model both rates and counts, with spline smoothing accommodating non-linear temporal effects. The results show that (a) crude rates and ratios at the health region level are unstable, (b) the models with spline smoothing enable us to explore possible shapes of injury trends at both the provincial level and the regional level, and (c) the fitted models provide a wealth of information about the patterns (both over space and time
Lesaffre, Emmanuel
2012-01-01
The growth of biostatistics has been phenomenal in recent years and has been marked by considerable technical innovation in both methodology and computational practicality. One area that has experienced significant growth is Bayesian methods. The growing use of Bayesian methodology has taken place partly due to an increasing number of practitioners valuing the Bayesian paradigm as matching that of scientific discovery. In addition, computational advances have allowed for more complex models to be fitted routinely to realistic data sets. Through examples, exercises and a combination of introd
Optimal Knot Selection for Least-squares Fitting of Noisy Data with Spline Functions
Energy Technology Data Exchange (ETDEWEB)
Jerome Blair
2008-05-15
An automatic data-smoothing algorithm for data from digital oscilloscopes is described. The algorithm adjusts the bandwidth of the filtering as a function of time to provide minimum mean squared error at each time. It produces an estimate of the root-mean-square error as a function of time and does so without any statistical assumptions about the unknown signal. The algorithm is based on least-squares fitting to the data of cubic spline functions.
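The paper's adaptive-bandwidth algorithm is not reproduced here, but the underlying operation it builds on, least-squares fitting of cubic spline functions to noisy data, with a held-out check for choosing the number of knots, can be sketched as follows. The signal, noise level, and candidate knot grids are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 400)
signal = np.exp(-30 * (t - 0.3) ** 2) + 0.5 * np.exp(-90 * (t - 0.7) ** 2)
y = signal + 0.05 * rng.standard_normal(t.size)   # noisy "oscilloscope" record

def cubic_spline_design(t, knots):
    """Truncated-power cubic spline basis: 1, t, t^2, t^3, (t - k)_+^3."""
    cols = [np.ones_like(t), t, t ** 2, t ** 3]
    cols += [np.clip(t - k, 0.0, None) ** 3 for k in knots]
    return np.column_stack(cols)

def fit_rms(n_knots):
    # least-squares cubic spline fit with equally spaced interior knots;
    # the odd-indexed held-out points give an honest RMS error estimate
    knots = np.linspace(0.05, 0.95, n_knots)
    idx = np.arange(t.size)
    train, test = idx % 2 == 0, idx % 2 == 1
    X = cubic_spline_design(t, knots)
    coef, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
    resid = y[test] - X[test] @ coef
    return float(np.sqrt(np.mean(resid ** 2)))

scores = {m: fit_rms(m) for m in (2, 5, 10, 20)}
best = min(scores, key=scores.get)
print(best, scores[best])
```

Too few knots underfit the narrow features (large held-out RMS); beyond some point extra knots only chase noise, so the held-out score flattens or worsens, which is the essence of data-driven knot selection.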
Twelfth degree spline with application to quadrature.
Mohammed, P O; Hamasalh, F K
2016-01-01
In this paper, the existence and uniqueness of a twelfth-degree spline are proved, with an application to quadrature. This formula is in the class of splines of degree 12 and continuity order [Formula: see text] that matches the derivatives up to order 6 at the knots of a uniform partition. Some mistakes in the literature are pointed out and corrected. Numerical examples are given to illustrate the applicability and efficiency of the new method.
P-Splines Using Derivative Information
Calderon, Christopher P.
2010-01-01
Time series associated with single-molecule experiments and/or simulations contain a wealth of multiscale information about complex biomolecular systems. We demonstrate how a collection of Penalized-splines (P-splines) can be useful in quantitatively summarizing such data. In this work, functions estimated using P-splines are associated with stochastic differential equations (SDEs). It is shown how quantities estimated in a single SDE summarize fast-scale phenomena, whereas variation between curves associated with different SDEs partially reflects noise induced by motion evolving on a slower time scale. P-splines assist in "semiparametrically" estimating nonlinear SDEs in situations where a time-dependent external force is applied to a single-molecule system. The P-splines introduced simultaneously use function and derivative scatterplot information to refine curve estimates. We refer to the approach as the PuDI (P-splines using Derivative Information) method. It is shown how generalized least squares ideas fit seamlessly into the PuDI method. Applications demonstrating how utilizing uncertainty information/approximations along with generalized least squares techniques improve PuDI fits are presented. Although the primary application here is in estimating nonlinear SDEs, the PuDI method is applicable to situations where both unbiased function and derivative estimates are available.
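A minimal sketch of the PuDI idea of combining function and derivative scatterplot information in a single least-squares spline fit. It uses a plain truncated-power cubic spline basis rather than the authors' penalized formulation, and the data, knots, and row weighting (a crude stand-in for generalized least squares) are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 60)
f = np.sin(2 * np.pi * x)                      # true function
df = 2 * np.pi * np.cos(2 * np.pi * x)        # true derivative
y = f + 0.05 * rng.standard_normal(x.size)     # noisy function samples
dy = df + 0.5 * rng.standard_normal(x.size)    # noisier derivative samples

knots = np.linspace(0.1, 0.9, 8)

def design(x):
    """Truncated-power cubic spline basis evaluated at x."""
    cols = [np.ones_like(x), x, x ** 2, x ** 3]
    cols += [np.clip(x - k, 0.0, None) ** 3 for k in knots]
    return np.column_stack(cols)

def d_design(x):
    """Exact derivative of the same basis, so derivative data enter linearly."""
    cols = [np.zeros_like(x), np.ones_like(x), 2 * x, 3 * x ** 2]
    cols += [3 * np.clip(x - k, 0.0, None) ** 2 for k in knots]
    return np.column_stack(cols)

# stack function and derivative equations, downweighting the noisier
# derivative rows by the ratio of the assumed noise levels
w = 0.05 / 0.5
A = np.vstack([design(x), w * d_design(x)])
b = np.concatenate([y, w * dy])
coef, *_ = np.linalg.lstsq(A, b, rcond=None)

fit = design(x) @ coef
print(float(np.sqrt(np.mean((fit - f) ** 2))))
```

Because the basis derivative is available in closed form, the derivative observations simply add rows to the same linear system, which is what lets generalized least squares weighting fit in seamlessly.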
新家, 健精
2013-01-01
© 2012 Springer Science+Business Media, LLC. All rights reserved. Article Outline: Glossary; Definition of the Subject and Introduction; The Bayesian Statistical Paradigm; Three Examples; Comparison with the Frequentist Statistical Paradigm; Future Directions; Bibliography.
Bayesian spatial semi-parametric modeling of HIV variation in Kenya.
Directory of Open Access Journals (Sweden)
Oscar Ngesa
Full Text Available Spatial statistics has seen rapid application in many fields, especially epidemiology and public health. Many studies, nonetheless, make limited use of the geographical location information and also usually assume that the covariates, which are related to the response variable, have linear effects. We develop a Bayesian semi-parametric regression model for HIV prevalence data. Model estimation and inference is based on a fully Bayesian approach via Markov chain Monte Carlo (MCMC). The model is applied to HIV prevalence data among men in Kenya, derived from the Kenya AIDS indicator survey, with n = 3,662. Past studies have concluded that HIV infection has a nonlinear association with age. In this study a smooth function based on penalized regression splines is used to estimate this nonlinear effect. Other covariates were assumed to have linear effects. Spatial references to the counties were modeled as both structured and unstructured spatial effects. We observe that circumcision reduces the risk of HIV infection. The results also indicate that men in urban areas were more likely to be infected by HIV than their rural counterparts. Men with higher education had the lowest risk of HIV infection. A nonlinear relationship between HIV infection and age was established: risk of HIV infection increases with age up to the age of 40 and then declines. Men who had an STI in the last 12 months were more likely to be infected with HIV. Men who had ever used a condom were also found to have a higher likelihood of being infected with HIV. A significant spatial variation of HIV infection in Kenya was also established. The study shows the practicality and flexibility of the Bayesian semi-parametric regression model in analyzing epidemiological data.
The structure of uniform B-spline curves with parameters
Institute of Scientific and Technical Information of China (English)
Juan Cao; Guozhao Wang
2008-01-01
The shape-adjustable curve constructed from uniform B-spline basis functions with a parameter is an extension of the uniform B-spline curve. In this paper, we study the relation between the uniform B-spline basis functions with a parameter and the ordinary B-spline basis functions. Based on the degree elevation of B-splines, we extend the uniform B-spline basis functions with a single parameter to ones with multiple parameters. Examples show that the proposed basis functions provide more flexibility for curve design.
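For orientation, the ordinary B-spline basis functions that these parameterized bases extend can be evaluated with the Cox-de Boor recursion; the uniform knot vector and evaluation point below are arbitrary choices for the sketch.

```python
def bspline_basis(i, k, t, knots):
    """Cox-de Boor recursion for the degree-k B-spline basis N_{i,k}(t)."""
    if k == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = right = 0.0
    d1 = knots[i + k] - knots[i]
    if d1 > 0:
        left = (t - knots[i]) / d1 * bspline_basis(i, k - 1, t, knots)
    d2 = knots[i + k + 1] - knots[i + 1]
    if d2 > 0:
        right = (knots[i + k + 1] - t) / d2 * bspline_basis(i + 1, k - 1, t, knots)
    return left + right

# Uniform knot vector 0..7 gives four cubic basis functions active on [3, 4].
knots = list(range(8))
vals = [bspline_basis(i, 3, 3.5, knots) for i in range(4)]
print(vals, sum(vals))  # the overlapping cubics form a partition of unity here
```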
Optimization and dynamics of protein-protein complexes using B-splines.
Gillilan, Richard E; Lilien, Ryan H
2004-10-01
A moving-grid approach for optimization and dynamics of protein-protein complexes is introduced, which utilizes cubic B-spline interpolation for rapid energy and force evaluation. The method allows for the efficient use of full electrostatic potentials joined smoothly to multipoles at long distance so that multiprotein simulation is possible. Using a recently published benchmark of 58 protein complexes, we examine the performance and quality of the grid approximation, refining cocrystallized complexes to within 0.68 Å RMSD of interface atoms, close to the optimum 0.63 Å produced by the underlying MMFF94 force field. We quantify the theoretical statistical advantage of using minimization in a stochastic search in the case of two rigid bodies, and contrast it with the underlying cost of conjugate gradient minimization using B-splines. The volumes of conjugate gradient minimization basins of attraction in cocrystallized systems are generally orders of magnitude larger than well volumes based on energy thresholds needed to discriminate native from nonnative states; nonetheless, computational cost is significant. Molecular dynamics using B-splines is doubly efficient due to the combined advantages of rapid force evaluation and large simulation step sizes. Large basins localized around the native state and other possible binding sites are identifiable during simulations of protein-protein motion. In addition to providing increased modeling detail, B-splines offer new algorithmic possibilities that should be valuable in refining docking candidates and studying global complex behavior.
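A minimal sketch of grid-based energy and force evaluation with cubic splines, here via SciPy's tensor-product routine rather than the authors' B-spline machinery; the analytic test potential is an assumption.

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

# Tabulate a smooth test "potential" on a grid, then evaluate energy and
# force (negative gradient) anywhere via cubic-spline interpolation.
x = np.linspace(0.0, 2.0 * np.pi, 40)
y = np.linspace(0.0, 2.0 * np.pi, 40)
V = np.sin(x)[:, None] * np.cos(y)[None, :]

spl = RectBivariateSpline(x, y, V, kx=3, ky=3)  # cubic in both directions

p = (1.3, 2.1)
energy = spl(*p)[0, 0]
fx = -spl(*p, dx=1)[0, 0]   # force components from spline derivatives
fy = -spl(*p, dy=1)[0, 0]
print(energy, np.sin(p[0]) * np.cos(p[1]))  # spline tracks the analytic value
```

Once the spline is built, every off-grid evaluation is a fixed, small cost, which is the point of the moving-grid idea.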
Bernardo, Jose M
2000-01-01
This highly acclaimed text, now available in paperback, provides a thorough account of key concepts and theoretical results, with particular emphasis on viewing statistical inference as a special case of decision theory. Information-theoretic concepts play a central role in the development of the theory, which provides, in particular, a detailed discussion of the problem of specification of so-called prior ignorance. The work is written from the authors' committed Bayesian perspective, but an overview of non-Bayesian theories is also provided, and each chapter contains a wide-ranging critica...
Fast adaptive elliptical filtering using box splines
Chaudhury, Kunal Narayan; Unser, Michael
2009-01-01
We demonstrate that it is possible to filter an image with an elliptic window of varying size, elongation and orientation with a fixed computational cost per pixel. Our method involves the application of a suitable global pre-integrator followed by a pointwise-adaptive localization mesh. We present the basic theory for the 1D case using a B-spline formalism and then appropriately extend it to 2D using radially-uniform box splines. The size and ellipticity of these radially-uniform box splines are adaptively controlled. Moreover, they converge to Gaussians as the order increases. Finally, we present a fast and practical directional filtering algorithm that has the capability of adapting to local image features.
Multiple products of B-splines used in CAD system
Institute of Scientific and Technical Information of China (English)
(no author listed)
2006-01-01
Upgrading the functionality of a computer aided design (CAD) system requires that multiple products of B-spline functions be represented as a linear combination of suitable (usually higher-degree) B-splines. In this paper, we apply the theory of spline spaces and discrete B-splines to derive the representation of the coefficients of all terms of the linear combination, which can be applied directly to software coding in system development.
On the Nesting Behavior of T-splines
2011-05-01
T-splines were originally introduced as a superior alternative to NURBS [1] and have emerged as an important technology across several disciplines... watertight geometry and can be locally refined [2, 3]. These basic properties make it possible to merge multiple NURBS patches into a single T-spline [4, 1], and any trimmed NURBS model can be represented as a watertight T-spline [5]. T-splines are an ideal discretization technology for...
Scripted Bodies and Spline Driven Animation
DEFF Research Database (Denmark)
Erleben, Kenny; Henriksen, Knud
2002-01-01
In this paper we take a close look at the details and technicalities of applying spline driven animation to scripted bodies in the context of dynamic simulation. The main contributions presented in this paper are methods for computing velocities and accelerations in the time domain of the spline...
Villalba, Jesús
2015-01-01
In this document we are going to derive the equations needed to implement a Variational Bayes estimation of the parameters of the simplified probabilistic linear discriminant analysis (SPLDA) model. This can be used to adapt SPLDA from one database to another with few development data or to implement the fully Bayesian recipe. Our approach is similar to Bishop's VB PPCA.
Campanelli, L
2016-01-01
In the Ratra scenario of inflationary magnetogenesis, the kinematic coupling between the photon and the inflaton undergoes a nonanalytical jump at the end of inflation. Using smooth interpolating analytical forms of the coupling function, we show that such an unphysical jump does not invalidate the main prediction of the model, which still represents a viable mechanism for explaining cosmic magnetization. Nevertheless, there is a spurious result associated with the nonanalyticity of the coupling, to wit, the prediction that the spectrum of created photons has a power-law decay in the ultraviolet regime. This issue is discussed using both the semiclassical approximation and smooth coupling functions.
B-spline parameterization of spatial response in a monolithic scintillation camera
Solovov, V; Chepel, V; Domingos, V; Martins, R
2016-01-01
A framework for parameterization of the light response functions (LRFs) in a scintillation camera was developed. It is based on approximation of the measured or simulated photosensor response with weighted sums of uniform cubic B-splines or their tensor products. The LRFs represented in this way are smooth, computationally inexpensive to evaluate and require much less memory than non-parametric alternatives. The parameters are found in a straightforward way by the linear least squares method. The use of a linear fit makes the fitting process stable and predictable enough to be used in non-supervised mode. Several techniques that reduce the storage and processing power requirements were developed. A software library for fitting simulated and measured light response with spline functions was developed and integrated into an open source software package ANTS2 designed for simulation and data processing for Anger camera-type detectors.
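The linear least-squares spline fit at the core of the method can be mimicked with SciPy's generic 1D routine (not the ANTS2 library itself); the Gaussian "response", knot placement and noise level are assumptions.

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 1.0, 500))
y = np.exp(-(x - 0.5) ** 2 / 0.02) + rng.normal(0.0, 0.02, x.size)  # noisy response

# Interior knots fixed a priori; the coefficients come from one linear LSQ solve.
knots = np.linspace(0.1, 0.9, 9)
spl = LSQUnivariateSpline(x, y, knots, k=3)

xf = np.linspace(0.05, 0.95, 200)
resid = np.exp(-(xf - 0.5) ** 2 / 0.02) - spl(xf)
print(np.max(np.abs(resid)))  # smooth, compact representation of the response
```

Because the knots are fixed, the fit is a plain linear problem, which is what makes it stable enough for unsupervised use.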
B-Spline Active Contour with Handling of Topology Changes for Fast Video Segmentation
Directory of Open Access Journals (Sweden)
Frederic Precioso
2002-06-01
Full Text Available This paper deals with video segmentation for MPEG-4 and MPEG-7 applications. Region-based active contours are a powerful technique for segmentation. However, most of these methods are implemented using level sets. Although level-set methods provide accurate segmentation, they suffer from a large computational cost. We propose to use a regular B-spline parametric method to provide fast and accurate segmentation. Our B-spline interpolation is based on a fixed number of points 2^j, depending on the desired level of detail. Through this spatial multiresolution approach, the computational cost of the segmentation is reduced. We also introduce a length penalty, which improves both smoothness and accuracy. Finally, we show some experiments on real video sequences.
Left ventricular motion reconstruction with a prolate spheroidal B-spline model
Energy Technology Data Exchange (ETDEWEB)
Li Jin; Denney, Thomas S Jr [Electrical and Computer Engineering Department, 200 Broun Hall, Auburn University, AL 36849-5201 (United States)
2006-02-07
Tagged cardiac magnetic resonance (MR) imaging can non-invasively image deformation of the left ventricular (LV) wall. Three-dimensional (3D) analysis of tag data requires fitting a deformation model to tag lines in the image data. In this paper, we present a 3D myocardial displacement and strain reconstruction method based on a B-spline deformation model defined in prolate spheroidal coordinates, which more closely matches the shape of the LV wall than existing Cartesian or cylindrical coordinate models. The prolate spheroidal B-spline (PSB) deformation model also enforces smoothness across, and can compute strain at, the apex. The PSB reconstruction algorithm was evaluated on a previously published data set to allow head-to-head comparison of the PSB model with existing LV deformation reconstruction methods. We conclude that the PSB method can accurately reconstruct deformation and strain in the LV wall from tagged MR images and has several advantages relative to existing techniques.
Left ventricular motion reconstruction with a prolate spheroidal B-spline model
Li, Jin; Denney, Thomas S., Jr.
2006-02-01
Tagged cardiac magnetic resonance (MR) imaging can non-invasively image deformation of the left ventricular (LV) wall. Three-dimensional (3D) analysis of tag data requires fitting a deformation model to tag lines in the image data. In this paper, we present a 3D myocardial displacement and strain reconstruction method based on a B-spline deformation model defined in prolate spheroidal coordinates, which more closely matches the shape of the LV wall than existing Cartesian or cylindrical coordinate models. The prolate spheroidal B-spline (PSB) deformation model also enforces smoothness across, and can compute strain at, the apex. The PSB reconstruction algorithm was evaluated on a previously published data set to allow head-to-head comparison of the PSB model with existing LV deformation reconstruction methods. We conclude that the PSB method can accurately reconstruct deformation and strain in the LV wall from tagged MR images and has several advantages relative to existing techniques.
Geometry Modeling of Ship Hull Based on Non-uniform B-spline
Institute of Scientific and Technical Information of China (English)
WANG Hu; ZOU Zao-jian
2008-01-01
In order to generate the three-dimensional (3D) hull surface accurately and smoothly, a mixed method combining non-uniform B-splines with an iterative procedure was developed. Using the iterative method, the data points on each section curve are calculated and the generalized waterlines and transverse section curves are determined. Then, using the non-uniform B-spline expression, the control vertex net of the hull is calculated from the generalized waterlines and section curves. A ship with a tunnel stern was taken as the test case. The numerical results show that the proposed approach for geometric modeling of 3D ship hull surfaces is accurate and effective.
A multiresolution analysis for tensor-product splines using weighted spline wavelets
Kapl, Mario; Jüttler, Bert
2009-09-01
We construct biorthogonal spline wavelets for periodic splines which extend the notion of "lazy" wavelets for linear functions (where the wavelets are simply a subset of the scaling functions) to splines of higher degree. We then use the lifting scheme to improve the approximation properties with respect to a norm induced by a weighted inner product with a piecewise constant weight function. Using the lifted wavelets we define a multiresolution analysis of tensor-product spline functions and apply it to the compression of black-and-white images. By performing image compression as a model problem, we demonstrate that the use of a weight function allows the norm to be adapted to the specific problem.
Background: Simulation studies have previously demonstrated that time-series analyses using smoothing splines correctly model null health-air pollution associations. Methods: We repeatedly simulated season, meteorology and air quality for the metropolitan area of Atlanta from cyc...
Rogers, David
1991-01-01
G/SPLINES is a hybrid of Friedman's Multivariate Adaptive Regression Splines (MARS) algorithm and Holland's genetic algorithm. In this hybrid, the incremental search is replaced by a genetic search. The G/SPLINES algorithm exhibits performance comparable to that of the MARS algorithm, requires fewer least-squares computations, and allows significantly larger problems to be considered.
Dye, H A
2011-01-01
We construct two knot invariants. The first knot invariant is a sum constructed using linking numbers. The second is an invariant of flat knots and is a formal sum of flat knots obtained by smoothing pairs of crossings. This invariant can be used in conjunction with other flat invariants, forming a family of invariants. Both invariants are constructed using the parity of a crossing.
A Bayesian approach to image expansion for improved definition.
Schultz, R R; Stevenson, R L
1994-01-01
Accurate image expansion is important in many areas of image analysis. Common methods of expansion, such as linear and spline techniques, tend to smooth the image data at edge regions. This paper introduces a method for nonlinear image expansion which preserves the discontinuities of the original image, producing an expanded image with improved definition. The maximum a posteriori (MAP) estimation techniques that are proposed for noise-free and noisy images result in the optimization of convex functionals. The expanded images produced from these methods will be shown to be aesthetically and quantitatively superior to images expanded by the standard methods of replication, linear interpolation, and cubic B-spline expansion.
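The standard expansion methods the paper compares against (replication, linear, cubic B-spline) can be reproduced with scipy.ndimage; the MAP expansion itself is not implemented here, and the tiny ramp image is an assumption.

```python
import numpy as np
from scipy import ndimage

img = np.add.outer(np.arange(8.0), np.arange(8.0))  # small test "image"

# The three baselines from the abstract: replication, linear, cubic B-spline.
nearest = ndimage.zoom(img, 2, order=0)  # replication
linear  = ndimage.zoom(img, 2, order=1)  # linear interpolation
cubic   = ndimage.zoom(img, 2, order=3)  # cubic B-spline interpolation

print(img.shape, cubic.shape)  # (8, 8) -> (16, 16)
```

All three smooth or replicate across edges, which is exactly the behavior the MAP approach is designed to avoid.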
FORMATION OF SHAFT SPLINES USING ROLLING METHOD
Directory of Open Access Journals (Sweden)
M. Sidorenko
2012-01-01
Full Text Available The paper describes the design of rolling heads used for cold rolling of straight-sided splines on shafts and presents the theoretical principles of this process. These principles make it possible to calculate the force required to push the billet through the rolls, taking into account metal hardening during deformation.
DEFICIENT CUBIC SPLINES WITH AVERAGE SLOPE MATCHING
Institute of Scientific and Technical Information of China (English)
V. B. Das; A. Kumar
2005-01-01
We obtain a deficient cubic spline function which matches the function via area matching over larger mesh intervals, and which also provides greater flexibility by using area matching in place of interpolation. We also study its convergence to the interpolated functions.
REAL ROOT ISOLATION OF SPLINE FUNCTIONS
Institute of Scientific and Technical Information of China (English)
Renhong Wang; Jinming Wu
2008-01-01
In this paper, we propose an algorithm for isolating the real roots of a given univariate spline function, based on Descartes' rule of signs and the de Casteljau algorithm. Numerical examples illustrate the flexibility and effectiveness of the algorithm.
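A minimal sketch of the underlying idea for a single polynomial piece in Bernstein-Bézier form (the paper handles full spline functions): Descartes' rule of signs applied to the Bernstein coefficients bounds the number of roots, and de Casteljau subdivision bisects until each interval carries exactly one sign variation. The example polynomial is an assumption.

```python
def de_casteljau_split(c, t=0.5):
    """Split Bernstein coefficients c at parameter t into left/right halves."""
    left, right = [c[0]], [c[-1]]
    work = list(c)
    while len(work) > 1:
        work = [(1 - t) * a + t * b for a, b in zip(work, work[1:])]
        left.append(work[0])
        right.append(work[-1])
    return left, right[::-1]

def sign_variations(c):
    s = [v for v in c if v != 0]
    return sum(a * b < 0 for a, b in zip(s, s[1:]))

def isolate(c, lo=0.0, hi=1.0, tol=1e-6, out=None):
    """Recursively bisect [lo, hi]; keep intervals where Descartes' bound is 1."""
    if out is None:
        out = []
    v = sign_variations(c)
    if v == 0:
        return out                      # no roots possible here
    if v == 1 or hi - lo < tol:
        out.append((lo, hi))            # exactly one sign variation: isolated
        return out
    l, r = de_casteljau_split(c)
    mid = 0.5 * (lo + hi)
    isolate(l, lo, mid, tol, out)
    isolate(r, mid, hi, tol, out)
    return out

# p(x) = (x - 0.25)(x - 0.75) on [0, 1] in degree-2 Bernstein form:
# power form x^2 - x + 3/16 gives b0 = 3/16, b1 = -5/16, b2 = 3/16.
print(isolate([3 / 16, -5 / 16, 3 / 16]))  # -> [(0.0, 0.5), (0.5, 1.0)]
```

One bisection separates the two roots, each into its own interval.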
MOTION VELOCITY SMOOTH LINK IN HIGH SPEED MACHINING
Institute of Scientific and Technical Information of China (English)
REN Kun; FU Jianzhong; CHEN Zichen
2007-01-01
To deal with overshooting and gouging in high speed machining, a novel approach for velocity smooth linking is proposed. Considering a discrete tool path, cubic spline curve fitting is used to find dangerous points, and from the spatial geometric properties of the tool path and kinematics theory, maximum optimal velocities at the dangerous points are obtained. Based on the velocity control characteristics stored in the control system, a fast algorithm for velocity smooth linking is analyzed and formulated. On-line implementation results show that the proposed approach makes velocity changes smoother than traditional velocity control methods and greatly improves productivity.
Hedlund, Jonas
2014-01-01
This paper introduces private sender information into a sender-receiver game of Bayesian persuasion with monotonic sender preferences. I derive properties of increasing differences related to the precision of signals and use these to fully characterize the set of equilibria robust to the intuitive criterion. In particular, all such equilibria are either separating, i.e., the sender's choice of signal reveals his private information to the receiver, or fully disclosing, i.e., the outcome of th...
Kirstein, Roland
2005-01-01
This paper presents a modification of the inspection game: the "Bayesian Monitoring" model rests on the assumption that judges are interested in enforcing compliant behavior and making correct decisions. They may base their judgements on an informative but imperfect signal which can be generated costlessly. In the original inspection game, monitoring is costly and generates a perfectly informative signal. While the inspection game has only one mixed strategy equilibrium, three Perfect Bayesia...
A Geometric Approach for Multi-Degree Spline
Institute of Scientific and Technical Information of China (English)
Xin Li; Zhang-Jin Huang; Zhao Liu
2012-01-01
Multi-degree spline (MD-spline for short) is a generalization of B-spline which comprises polynomial segments of various degrees. The present paper provides a new definition for MD-spline curves in a geometrically intuitive way, based on an efficient and simple evaluation algorithm. MD-spline curves maintain various desirable properties of B-spline curves, such as convex hull, local support and variation diminishing properties. They can also be refined exactly with knot insertion. The continuity between two adjacent segments with different degrees is at least C^1, and that between two adjacent segments of the same degree d is C^{d-1}. Benefiting from the exact refinement algorithm, we also provide several operators for MD-spline curves, such as conversion of each curve segment into Bézier form, an efficient merging algorithm and a new curve subdivision scheme which allows different degrees for each segment.
Bessiere, Pierre; Ahuactzin, Juan Manuel; Mekhnacha, Kamel
2013-01-01
Probability as an Alternative to Boolean Logic. While logic is the mathematical foundation of rational reasoning and the fundamental principle of computing, it is restricted to problems where information is both complete and certain. However, many real-world problems, from financial investments to email filtering, are incomplete or uncertain in nature. Probability theory and Bayesian computing together provide an alternative framework to deal with incomplete and uncertain data. Decision-Making Tools and Methods for Incomplete and Uncertain Data. Emphasizing probability as an alternative to Boolean...
COMPACT SUPPORT THIN PLATE SPLINE ALGORITHM
Institute of Scientific and Technical Information of China (English)
Li Jing; Yang Xuan; Yu Jianping
2007-01-01
Common landmark-based tools in medical image elastic registration are the Thin Plate Spline (TPS) and Compact Support Radial Basis Functions (CSRBF). TPS forces the corresponding landmarks to match each other exactly and minimizes the bending energy of the whole image. However, in real applications, such a scheme deforms the image globally even when the deformation is only local. CSRBF keeps the deformation local, but the support size must be determined manually. Therefore, to limit the effect of the deformation, a new Compact Support Thin Plate Spline (CSTPS) algorithm is proposed, analyzed and applied. The new approach attains optimal mutual information, showing satisfactory registration results. The experiments also show that it can be applied to both local and global elastic registration.
Marginal longitudinal semiparametric regression via penalized splines
Al Kadiri, M.
2010-08-01
We study the marginal longitudinal nonparametric regression problem and some of its semiparametric extensions. We point out that, while several elaborate proposals for efficient estimation have been made, a relatively simple and straightforward one, based on penalized splines, has not. After describing our approach, we explain how Gibbs sampling and the BUGS software can be used to achieve quick and effective implementation. Illustrations are provided for nonparametric regression and additive models.
Marginal longitudinal semiparametric regression via penalized splines.
Kadiri, M Al; Carroll, R J; Wand, M P
2010-08-01
We study the marginal longitudinal nonparametric regression problem and some of its semiparametric extensions. We point out that, while several elaborate proposals for efficient estimation have been made, a relatively simple and straightforward one, based on penalized splines, has not. After describing our approach, we explain how Gibbs sampling and the BUGS software can be used to achieve quick and effective implementation. Illustrations are provided for nonparametric regression and additive models.
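A minimal penalized-spline sketch in the spirit of the proposal (ordinary ridge algebra rather than the Gibbs/BUGS implementation): a truncated-line basis with a penalty on the knot coefficients, fit by one linear solve. The basis, knot grid and the value of lambda are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0.0, 1.0, 200))
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.2, x.size)

# Truncated-line basis: intercept, slope, and one hinge per knot.
knots = np.linspace(0.05, 0.95, 20)
X = np.column_stack([np.ones_like(x), x] + [np.maximum(x - k, 0.0) for k in knots])

# Ridge penalty on the hinge coefficients only; lambda is an arbitrary choice.
D = np.diag([0.0, 0.0] + [1.0] * knots.size)
lam = 1e-4
beta = np.linalg.solve(X.T @ X + lam * D, X.T @ y)

fit = X @ beta
rmse = np.sqrt(np.mean((fit - np.sin(2 * np.pi * x)) ** 2))
print(rmse)  # the penalized fit tracks the underlying sine closely
```

The same design matrix and penalty translate directly into the mixed-model or Bayesian formulations the paper exploits.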
Nielsen, J D; Dean, C B
2008-09-01
A flexible semiparametric model for analyzing longitudinal panel count data arising from mixtures is presented. Panel count data refers here to count data on recurrent events collected as the number of events that have occurred within specific follow-up periods. The model assumes that the counts for each subject are generated by mixtures of nonhomogeneous Poisson processes with smooth intensity functions modeled with penalized splines. Time-dependent covariate effects are also incorporated into the process intensity using splines. Discrete mixtures of these nonhomogeneous Poisson process spline models extract functional information from underlying clusters representing hidden subpopulations. The motivating application is an experiment to test the effectiveness of pheromones in disrupting the mating pattern of the cherry bark tortrix moth. Mature moths arise from hidden, but distinct, subpopulations and monitoring the subpopulation responses was of interest. Within-cluster random effects are used to account for correlation structures and heterogeneity common to this type of data. An estimating equation approach to inference requiring only low moment assumptions is developed and the finite sample properties of the proposed estimating functions are investigated empirically by simulation.
Variational Splines and Paley--Wiener Spaces on Combinatorial Graphs
Pesenson, Isaac
2011-01-01
Notions of interpolating variational splines and Paley-Wiener spaces are introduced on a combinatorial graph G. Both of these definitions exploit the existence of a combinatorial Laplace operator on G. The existence and uniqueness of interpolating variational splines on a graph is shown. As an application of variational splines, the paper presents a reconstruction algorithm for Paley-Wiener functions on graphs from their uniqueness sets.
Application of spline wavelet transform in differential of electroanalytical signal
Institute of Scientific and Technical Information of China (English)
(no author listed)
2001-01-01
Investigating the characteristics of spline wavelets, we found that if the second-order spline function, the derivative of the third-order B-spline function, is used as the wavelet base function, the spline wavelet transform has both a denoising and a differentiation property. The relation between the spline wavelet transform and differentiation was therefore studied theoretically. Experimental results show that the spline wavelet transform is well suited to differentiating electroanalytical signals. Compared with other kinds of wavelet transform, the spline wavelet transform has an inherent differentiation characteristic. Compared with digital differentiation and analog differentiation using electronic circuits, the spline wavelet transform not only denoises and differentiates a signal but also has the advantages of simple operation and a small amount of calculation, because the step length, RC constant and other parameters need not be selected. Compared with Alexander Kai-man Leung's differential method, the differential method based on the spline wavelet transform has the advantage that the differentiation order does not depend on the number of data points in the original signal.
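The combined denoise-and-differentiate behaviour described here can be approximated with an ordinary smoothing spline as a stand-in (this is not the spline wavelet transform itself); the signal, noise level and smoothing factor are assumptions.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(2)
x = np.linspace(0.0, 2.0 * np.pi, 400)
y = np.sin(x) + rng.normal(0.0, 0.05, x.size)   # noisy test signal

# Smoothing factor s ~ n * sigma^2 is the usual heuristic for UnivariateSpline.
spl = UnivariateSpline(x, y, k=3, s=x.size * 0.05 ** 2)
dy = spl.derivative()(x)                         # denoised derivative in one pass

err = np.max(np.abs(dy[40:-40] - np.cos(x[40:-40])))  # interior error vs cos(x)
print(err)
```

As with the spline wavelet approach, denoising and differentiation happen in a single operation with no step length or RC constant to tune.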
Direct Numerical Simulation of Incompressible Pipe Flow Using a B-Spline Spectral Method
Loulou, Patrick; Moser, Robert D.; Mansour, Nagi N.; Cantwell, Brian J.
1997-01-01
A numerical method based on B-spline polynomials was developed to study incompressible flows in cylindrical geometries. A B-spline method has the advantages of possessing spectral accuracy and the flexibility of standard finite element methods. Using this method it was possible to ensure regularity of the solution near the origin, i.e. smoothness and boundedness. Because B-splines have compact support, it is also possible to remove B-splines near the center to alleviate the constraint placed on the time step by an overly fine grid. Using the natural periodicity in the azimuthal direction and approximating the streamwise direction as periodic, so-called time-evolving flow, greatly reduced the cost and complexity of the computations. A direct numerical simulation of pipe flow was carried out using the method described above at a Reynolds number of 5600 based on diameter and bulk velocity. General knowledge of pipe flow and the availability of experimental measurements make pipe flow the ideal test case with which to validate the numerical method. Results indicated that high flatness levels of the radial component of velocity in the near-wall region are physical; regions of high radial velocity were detected and appear to be related to high-speed streaks in the boundary layer. Budgets of Reynolds stress transport equations showed close similarity with those of channel flow. However, contrary to channel flow, the log layer of pipe flow is not homogeneous for the present Reynolds number. A topological method based on a classification of the invariants of the velocity gradient tensor was used. Plotting iso-surfaces of the discriminant of the invariants proved to be a good method for identifying vortical eddies in the flow field.
Gaussian quadrature for splines via homotopy continuation: Rules for C2 cubic splines
Bartoň, Michael
2015-10-24
We introduce a new concept for generating optimal quadrature rules for splines. To generate an optimal quadrature rule in a given (target) spline space, we build an associated source space with a known optimal quadrature rule and transfer the rule from the source space to the target one, while preserving the number of quadrature points and therefore optimality. The quadrature nodes and weights, considered as a higher-dimensional point, form a zero of a particular system of polynomial equations. As the space is continuously deformed by changing the source knot vector, the quadrature rule is updated using polynomial homotopy continuation. For example, starting with C^1 cubic splines with uniform knot sequences, we demonstrate the methodology by deriving the optimal rules for uniform C^2 cubic spline spaces, where the rule had only been conjectured to date. We validate our algorithm by showing that the resulting quadrature rule is independent of the path chosen between the target and the source knot vectors as well as of the source rule chosen.
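The transfer strategy needs a source space with a known rule. As a simple baseline (not the paper's optimal rules, which use far fewer points), 2-point Gauss-Legendre is exact for every cubic polynomial, so applying it on each knot span integrates any cubic spline exactly; the test cubic below is arbitrary.

```python
import numpy as np

# Two-point Gauss-Legendre nodes/weights mapped to [0, 1].
nodes = 0.5 + np.array([-1.0, 1.0]) / (2.0 * np.sqrt(3.0))
weights = np.array([0.5, 0.5])

def quad(f):
    """Apply the 2-point rule on [0, 1]; exact for polynomials of degree <= 3."""
    return float(np.dot(weights, f(nodes)))

f = lambda t: 4.0 * t ** 3 - 3.0 * t ** 2 + 2.0 * t - 1.0   # arbitrary cubic
exact = 1.0 - 1.0 + 1.0 - 1.0                               # integral over [0, 1]
print(quad(f), exact)
```

Applying such a rule per knot span costs two points per element; the paper's contribution is exploiting inter-element smoothness to get exactness with far fewer points.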
Introduction to Bayesian statistics
Bolstad, William M
2017-01-01
There is a strong upsurge in the use of Bayesian methods in applied statistical analysis, yet most introductory statistics texts present only frequentist methods. Bayesian statistics has many important advantages that students should learn about if they are going into fields where statistics will be used. In this Third Edition, four newly added chapters address topics that reflect the rapid advances in the field of Bayesian statistics. The author continues to provide a Bayesian treatment of introductory statistical topics, such as scientific data gathering, discrete random variables, robust Bayesian methods, and Bayesian approaches to inference for discrete random variables, the binomial proportion, Poisson, the normal mean, and simple linear regression. In addition, newly developing topics in the field are presented in four new chapters: Bayesian inference with unknown mean and variance; Bayesian inference for a Multivariate Normal mean vector; Bayesian inference for the Multiple Linear Regression Model; and Computati...
Trigonometric polynomial B-spline with shape parameter
Institute of Scientific and Technical Information of China (English)
WANG Wentao; WANG Guozhao
2004-01-01
The basis functions of nth-order trigonometric polynomial B-splines with a shape parameter are constructed by an integral approach. The shape of the constructed curve can be adjusted by changing the shape parameter, and the curve has most of the properties of B-splines. Ellipses and circles can be represented exactly with these basis functions.
Exponential B-splines and the partition of unity property
DEFF Research Database (Denmark)
Christensen, Ole; Massopust, Peter
2012-01-01
We provide an explicit formula for a large class of exponential B-splines. Also, we characterize the cases where the integer-translates of an exponential B-spline form a partition of unity up to a multiplicative constant. As an application of this result we construct explicitly given pairs of dual...
Nonlinear and fault-tolerant flight control using multivariate splines
Tol, H.J.; De Visser, C.C.; Van Kampen, E.J.; Chu, Q.P.
2015-01-01
This paper presents a study on fault tolerant flight control of a high performance aircraft using multivariate splines. The controller is implemented by making use of spline model based adaptive nonlinear dynamic inversion (NDI). This method, indicated as SANDI, combines NDI control with nonlinear c
A family of quasi-cubic blended splines and applications
Institute of Scientific and Technical Information of China (English)
SU Ben-yue; TAN Jie-qing
2006-01-01
A class of quasi-cubic B-spline base functions based on trigonometric polynomials is established which inherits properties similar to those of cubic B-spline bases. The corresponding curves with a shape parameter α, defined by the introduced base functions, include the B-spline curves and can approximate the B-spline curves from both sides. The curves can be adjusted easily by using the shape parameter α, where dp_i(α, t) is linear with respect to dα for fixed t. With the shape parameter chosen properly, the defined curves can precisely represent straight line segments, parabola segments, circular arcs and some transcendental curves, and the corresponding tensor product surfaces can also exactly represent spherical surfaces, cylindrical surfaces and some transcendental surfaces. By abandoning the positivity property, this paper proposes a new C^2 continuous blended interpolation spline based on piecewise trigonometric polynomials associated with a sequence of local parameters. Illustrations show that the curves and surfaces constructed by the blended spline can be adjusted easily and freely. The blended interpolation spline curves can be shape-preserving with proper local parameters, since these local parameters can be considered the magnification ratio of the length of tangent vectors at the interpolating points. The idea is extended to produce blended spline surfaces.
Positivity and Monotonicity Preserving Biquartic Rational Interpolation Spline Surface
Directory of Open Access Journals (Sweden)
Xinru Liu
2014-01-01
A biquartic rational interpolation spline surface over a rectangular domain is constructed in this paper, which includes the classical bicubic Coons surface as a special case. Sufficient conditions for generating shape-preserving interpolation splines for positive or monotonic surface data are deduced. The given numerical experiments show that our method can deal with surface construction from positive or monotonic data effectively.
Institute of Scientific and Technical Information of China (English)
Joong-Hyun Rhim; Doo-Yeoun Cho; Kyu-Yeul Lee; Tae-Wan Kim
2006-01-01
We propose a method that automatically generates discrete bicubic G1 continuous B-spline surfaces that interpolate the curve network of a ship hullform. First, the curves in the network are classified into two types: boundary curves and "reference curves". The boundary curves correspond to a set of rectangular (or triangular) topological type that can be represented with tensor-product (or degenerate) B-spline surface patches. Next, in the interior of the patches, surface fitting points and cross boundary derivatives are estimated from the reference curves by constructing "virtual" isoparametric curves. Finally, a discrete G1 continuous B-spline surface is generated by a surface fitting algorithm. Several smooth ship hullform surfaces generated from curve networks corresponding to actual ship hullforms demonstrate the quality of the method.
Cylindrical Helix Spline Approximation of Spatial Curves
Institute of Scientific and Technical Information of China (English)
Anonymous
2007-01-01
In this paper, we present a new method for approximating spatial curves with a G1 cylindrical helix spline within a prescribed tolerance. We deduce the general formulation of a cylindrical helix, which has 11 degrees of freedom. This means that 11 restrictions are needed to determine a cylindrical helix. Given a spatial parametric curve segment, including the start point and the end point of this segment and the tangent and the principal normal of the start point, we can always find a cylindrical helix segment to interpolate the given direction and position vectors. In order to approximate the known parametric curve within the prescribed tolerance, we adopt a stepwise trial method. First, we ensure that the helix segment interpolates the given two end points and matches the principal normal and tangent of the start point; then, we keep the deviation between the cylindrical helix segment and the known curve segment within the prescribed tolerance everywhere. After the first segment has been formed, we construct the next segment. Proceeding segment by segment, we construct a G1 cylindrical helix spline that approximates the whole spatial parametric curve within the prescribed tolerance. Several examples are also given to show the efficiency of this method.
Bayesian artificial intelligence
Korb, Kevin B
2003-01-01
As the power of Bayesian techniques has become more fully realized, the field of artificial intelligence has embraced Bayesian methodology and integrated it to the point where an introduction to Bayesian techniques is now a core course in many computer science programs. Unlike other books on the subject, Bayesian Artificial Intelligence keeps mathematical detail to a minimum and covers a broad range of topics. The authors integrate Bayesian network technology and Bayesian network learning, and apply both to knowledge engineering. They emphasize understanding and intuition but also provide the algorithms and technical background needed for applications. Software, exercises, and solutions are available on the authors' website.
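As a minimal illustration of the Bayesian-network machinery the book introduces, here is exact inference by enumeration on a two-node network; the variables and probabilities are invented for this sketch, not taken from the text.

```python
# Two-node Bayesian network (Cloudy -> Rain), exact inference by enumeration.
# All probabilities below are illustrative assumptions.

p_cloudy = 0.5                              # P(Cloudy)
p_rain_given = {True: 0.8, False: 0.1}      # P(Rain | Cloudy)

def posterior_cloudy_given_rain():
    """P(Cloudy | Rain) via Bayes' rule: enumerate the joint, then normalize."""
    joint = {c: (p_cloudy if c else 1 - p_cloudy) * p_rain_given[c]
             for c in (True, False)}
    evidence = sum(joint.values())          # P(Rain)
    return joint[True] / evidence

print(round(posterior_cloudy_given_rain(), 4))  # → 0.8889
```

The same enumeration pattern extends to larger networks, at exponential cost, which is why the book also covers approximate inference.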
Bayesian artificial intelligence
Korb, Kevin B
2010-01-01
Updated and expanded, Bayesian Artificial Intelligence, Second Edition provides a practical and accessible introduction to the main concepts, foundation, and applications of Bayesian networks. It focuses on both the causal discovery of networks and Bayesian inference procedures. Adopting a causal interpretation of Bayesian networks, the authors discuss the use of Bayesian networks for causal modeling. They also draw on their own applied research to illustrate various applications of the technology. New to the Second Edition: new chapter on Bayesian network classifiers; new section on object-oriente
Nonequilibrium Flows with Smooth Particle Applied Mechanics.
Kum, Oyeon
Smooth particle methods are relatively new methods for simulating solid and fluid flows, though they have a 20-year history of solving complex hydrodynamic problems in astrophysics, such as colliding planets and stars, for which correct answers are unknown. The results presented in this thesis evaluate the adaptability or fitness of the method for typical hydrocode production problems. For finite hydrodynamic systems, boundary conditions are important. A reflective boundary condition with image particles is a good way to prevent a density anomaly at the boundary and to keep the fluxes continuous there. Boundary values of temperature and velocity can be separately controlled. The gradient algorithm, based on differentiating the smooth particle expressions for (uρ) and (Tρ), does not show numerical instabilities for the stress tensor and heat flux vector quantities, which require second derivatives in space, when Fourier's heat-flow law and Newton's viscous force law are used. Smooth particle methods show an interesting parallel linking them to molecular dynamics. For the inviscid Euler equation, with an isentropic ideal gas equation of state, the smooth particle algorithm generates trajectories isomorphic to those generated by molecular dynamics. The shear moduli were evaluated based on molecular dynamics calculations for the three weighting functions: B-spline, Lucy, and cusp functions. The accuracy and applicability of the methods were estimated by comparing a set of smooth particle Rayleigh-Benard problems, all in the laminar regime, to corresponding highly accurate grid-based numerical solutions of continuum equations. Both transient and stationary smooth particle solutions reproduce the grid-based data with velocity errors on the order of 5%. The smooth particle method still provides robust solutions at high Rayleigh number where grid-based methods fail. Considerably fewer smooth particles are required than atoms in a corresponding molecular dynamics simulation.
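Lucy's weighting function, one of the three kernels compared in the thesis, has a standard 3D form in the smooth particle literature; the sketch below uses that standard Lucy (1977) kernel as an assumption (it is not quoted from the thesis) and checks its normalization numerically.

```python
import math

def lucy_weight(r, h):
    """Lucy's 3D smoothing kernel with support radius h, normalized so that
    its integral over the support sphere is 1 (standard form, assumed here)."""
    if r >= h:
        return 0.0
    q = r / h
    return 105.0 / (16.0 * math.pi * h**3) * (1.0 + 3.0 * q) * (1.0 - q) ** 3

# Crude midpoint-rule check of normalization: integrate W(r) * 4*pi*r^2 on [0, h].
h, n = 1.0, 20000
total = sum(lucy_weight((i + 0.5) * h / n, h) * 4 * math.pi * ((i + 0.5) * h / n) ** 2
            for i in range(n)) * (h / n)
print(round(total, 4))  # → 1.0
```

The B-spline and cusp weighting functions mentioned in the abstract would slot into the same role, differing only in smoothness and effective support.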
Robust Filtering and Smoothing with Gaussian Processes
Deisenroth, Marc Peter; Turner, Ryan; Huber, Marco F.; Hanebeck, Uwe D.; Rasmussen, Carl Edward
2012-01-01
We propose a principled algorithm for robust Bayesian filtering and smoothing in nonlinear stochastic dynamic systems when both the transition function and the measurement function are described by non-parametric Gaussian process (GP) models. GPs are gaining increasing importance in signal processing, machine learning, robotics, and control for representing unknown system functions by posterior probability distributions. This modern way of "system identification" is more robust than finding p...
Nonequilibrium flows with smooth particle applied mechanics
Energy Technology Data Exchange (ETDEWEB)
Kum, O.
1995-07-01
Smooth particle methods are relatively new methods for simulating solid and fluid flows, though they have a 20-year history of solving complex hydrodynamic problems in astrophysics, such as colliding planets and stars, for which correct answers are unknown. The results presented in this thesis evaluate the adaptability or fitness of the method for typical hydrocode production problems. For finite hydrodynamic systems, boundary conditions are important. A reflective boundary condition with image particles is a good way to prevent a density anomaly at the boundary and to keep the fluxes continuous there. Boundary values of temperature and velocity can be separately controlled. The gradient algorithm, based on differentiating the smooth particle expressions for (uρ) and (Tρ), does not show numerical instabilities for the stress tensor and heat flux vector quantities, which require second derivatives in space, when Fourier's heat-flow law and Newton's viscous force law are used. Smooth particle methods show an interesting parallel linking them to molecular dynamics. For the inviscid Euler equation, with an isentropic ideal gas equation of state, the smooth particle algorithm generates trajectories isomorphic to those generated by molecular dynamics. The shear moduli were evaluated based on molecular dynamics calculations for the three weighting functions: B-spline, Lucy, and cusp functions. The accuracy and applicability of the methods were estimated by comparing a set of smooth particle Rayleigh-Benard problems, all in the laminar regime, to corresponding highly accurate grid-based numerical solutions of continuum equations. Both transient and stationary smooth particle solutions reproduce the grid-based data with velocity errors on the order of 5%. The smooth particle method still provides robust solutions at high Rayleigh number where grid-based methods fail.
Applied Bayesian Hierarchical Methods
Congdon, Peter D
2010-01-01
Bayesian methods facilitate the analysis of complex models and data structures. Emphasizing data applications, alternative modeling specifications, and computer implementation, this book provides a practical overview of methods for Bayesian analysis of hierarchical models.
Relative Smooth Topological Spaces
Directory of Open Access Journals (Sweden)
B. Ghazanfari
2009-01-01
In 1992, Ramadan introduced the concept of a smooth topological space and the relation between smooth topological spaces and fuzzy topological spaces in Chang's (1968) sense. In this paper we give a new definition of a smooth topological space. This definition can be considered a generalization of the smooth topological space given by Ramadan. Some general properties such as relative smooth continuity and relative smooth compactness are studied.
Gelman, Andrew; Stern, Hal S; Dunson, David B; Vehtari, Aki; Rubin, Donald B
2013-01-01
FUNDAMENTALS OF BAYESIAN INFERENCE: Probability and Inference; Single-Parameter Models; Introduction to Multiparameter Models; Asymptotics and Connections to Non-Bayesian Approaches; Hierarchical Models. FUNDAMENTALS OF BAYESIAN DATA ANALYSIS: Model Checking; Evaluating, Comparing, and Expanding Models; Modeling Accounting for Data Collection; Decision Analysis. ADVANCED COMPUTATION: Introduction to Bayesian Computation; Basics of Markov Chain Simulation; Computationally Efficient Markov Chain Simulation; Modal and Distributional Approximations. REGRESSION MODELS: Introduction to Regression Models; Hierarchical Linear
Shape Designing of Engineering Images Using Rational Spline Interpolation
Directory of Open Access Journals (Sweden)
Muhammad Sarfraz
2015-01-01
In modern days, engineers encounter a remarkable range of different engineering problems, such as the study of structures and structural properties and the design of different engineering images, for example, automotive images, aerospace industrial images, architectural designs, shipbuilding, and so forth. This paper proposes an interactive curve scheme for designing engineering images. The proposed scheme supports object design not just in the area of engineering but is equally useful for other areas including image processing (IP), Computer Graphics (CG), Computer-Aided Engineering (CAE), Computer-Aided Manufacturing (CAM), and Computer-Aided Design (CAD). As a method, a piecewise rational cubic spline interpolant with four shape parameters has been proposed. The method provides effective results together with the effects of derivatives and shape parameters on the shape of the curves in a local and global manner. The spline method, due to its most generalized description, recovers various existing rational spline methods and serves as an alternative to various other methods including v-splines, gamma splines, weighted splines, and beta splines.
Use of B-Spline in the Finite Element Analysis: Comparison with ANCF Geometry
2011-02-04
Keywords: geometric discontinuities; finite element; multibody systems; B-spline; NURBS. Approved for public release. ...developed by computational geometry methods such as B-spline and NURBS (Non-Uniform Rational B-Splines) representations. This fact has motivated the formulations developed in this paper.
Testing for additivity with B-splines
Institute of Scientific and Technical Information of China (English)
Heng-jian CUI; Xu-ming HE; Li LIU
2007-01-01
Regression splines are often used for fitting nonparametric functions, and they work especially well for additive models. In this paper, we consider two simple tests of additivity: an adaptation of Tukey's one degree of freedom test and a nonparametric version of Rao's score test. While the Tukey-type test can detect most forms of local non-additivity at the parametric rate of O(n^{-1/2}), the score test is consistent for all alternatives at a nonparametric rate. The asymptotic distribution of these test statistics is derived under both the null and local alternative hypotheses. A simulation study is conducted to compare their finite-sample performances with some existing kernel-based tests. The score test is found to have a good overall performance.
Numerical simulation of involute spline shaft in cold rolling forming
Institute of Scientific and Technical Information of China (English)
王志奎; 张庆
2008-01-01
The design of forming dies and the whole simulation of cold rolling of involute splines can be realized using the CAD software PRO-E and the CAE software DEFORM-3D. DEFORM-3D provides an automatic and optimized remeshing function, especially for large deformations. In order to use this function sufficiently, the simulation of cold rolling of involute splines can be implemented indirectly. The relationship between die and workpiece, the forming force, and the characteristics of deformation in the cold rolling of involute splines are analyzed. Meanwhile, a reliable basis for the design of dies and deforming equipment is provided.
A Simple and Fast Spline Filtering Algorithm for Surface Metrology.
Zhang, Hao; Ott, Daniel; Song, John; Tong, Mingsi; Chu, Wei
2015-01-01
Spline filters and their corresponding robust filters are commonly used filters recommended in ISO (the International Organization for Standardization) standards for surface evaluation. Generally, these linear and non-linear spline filters, composed of symmetric, positive-definite matrices, are solved in an iterative fashion based on a Cholesky decomposition. They have been demonstrated to be relatively efficient, but complicated and inconvenient to implement. A new spline-filter algorithm is proposed by means of the discrete cosine transform or the discrete Fourier transform. The algorithm is conceptually simple and very convenient to implement.
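The DCT-based route described above can be sketched as follows. This is a hedged illustration of the general idea, not the paper's exact algorithm: under reflective (Neumann) boundary conditions the second-difference penalty matrix is diagonalized by the DCT-II basis with eigenvalues 2 - 2*cos(pi*k/N), so the penalized least-squares smoother (I + lam*D'D)^-1 reduces to per-coefficient scaling in the DCT domain.

```python
import math

def dct2(x):
    """Unnormalized DCT-II, computed directly (O(N^2), fine for small N)."""
    N = len(x)
    return [sum(x[n] * math.cos(math.pi * k * (2 * n + 1) / (2 * N))
                for n in range(N)) for k in range(N)]

def idct2(X):
    """Inverse of dct2 above."""
    N = len(X)
    return [(X[0] / 2 + sum(X[k] * math.cos(math.pi * k * (2 * n + 1) / (2 * N))
                            for k in range(1, N))) * 2 / N for n in range(N)]

def spline_smooth(y, lam):
    """Discrete spline-type smoother: attenuate DCT coefficient k by the
    eigenvalue of (I + lam * D'D) under reflective boundary conditions."""
    N = len(y)
    Y = dct2(y)
    Y = [Yk / (1 + lam * (2 - 2 * math.cos(math.pi * k / N)) ** 2)
         for k, Yk in enumerate(Y)]
    return idct2(Y)

noisy = [math.sin(0.2 * i) + (0.2 if i % 2 else -0.2) for i in range(50)]
smoothed = spline_smooth(noisy, lam=10.0)
```

The eigenvalue scaling is what makes the DCT formulation simple to implement compared with the iterative Cholesky approach the abstract mentions.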
Cubic B-spline curve approximation by curve unclamping
Chen, Xiao-Diao; Ma, Weiyin; Paul, Jean-Claude
2010-01-01
A new approach for cubic B-spline curve approximation is presented. The method produces an approximation cubic B-spline curve tangent to a given curve at a set of selected positions, called tangent points, in a piecewise manner starting from a seed segment. A heuristic method is provided to select the tangent points. The first segment of the approximation cubic B-spline curve can be obtained using an inner point interpolation method, least-squares method or geometric H...
Quartic Box-Spline Reconstruction on the BCC Lattice.
Kim, Minho
2013-02-01
This paper presents an alternative box-spline filter for the body-centered cubic (BCC) lattice, the seven-direction quartic box-spline M7 that has the same approximation order as the eight-direction quintic box-spline M8 but a lower polynomial degree, smaller support, and is computationally more efficient. When applied to reconstruction with quasi-interpolation prefilters, M7 shows less aliasing, which is verified quantitatively by integral filter metrics and frequency error kernels. To visualize and analyze distributional aliasing characteristics, each spectrum is evaluated on the planes and lines with various orientations.
C1 Hermite shape preserving polynomial splines in R3
Gabrielides, Nikolaos C.
2012-06-01
The C2 variable degree splines [1-3] have been proven to be an efficient tool for solving the curve shape-preserving interpolation problem in two and three dimensions. Based on this representation, the current paper proposes a Hermite interpolation scheme to construct C1 shape-preserving splines of variable degree. After this, a slight modification of the method leads to a C1 shape-preserving Hermite cubic spline. Both methods can easily be developed within a CAD system, since they compute directly (without iterations) the B-spline control polygon. They have been implemented and tested within the DNV Software CAD/CAE system GeniE.
Segmented Regression Based on B-Splines with Solved Examples
Directory of Open Access Journals (Sweden)
Miloš Kaňka
2015-12-01
The subject of the paper is segmented linear, quadratic, and cubic regression based on B-spline basis functions. In this article we expose the formulas for the computation of B-splines of order one, two, and three that are needed to construct linear, quadratic, and cubic regression. We list some interesting properties of these functions. For a clearer understanding we give the solutions of a couple of elementary exercises regarding these functions.
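The order-one-to-three B-splines discussed above can be computed with the standard Cox-de Boor recursion; a minimal sketch, with an illustrative uniform knot vector and evaluation point (not taken from the article):

```python
def bspline(i, order, t, knots):
    """Value of the i-th B-spline of the given order at t (Cox-de Boor).
    Order 1 is the piecewise-constant indicator on [knots[i], knots[i+1])."""
    if order == 1:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = right = 0.0
    d1 = knots[i + order - 1] - knots[i]
    if d1 > 0:
        left = (t - knots[i]) / d1 * bspline(i, order - 1, t, knots)
    d2 = knots[i + order] - knots[i + 1]
    if d2 > 0:
        right = (knots[i + order] - t) / d2 * bspline(i + 1, order - 1, t, knots)
    return left + right

knots = [0, 1, 2, 3, 4, 5]
# On [2, 3) all three order-3 (quadratic) splines below are active,
# so they sum to 1 (partition of unity).
s = sum(bspline(i, 3, 2.5, knots) for i in range(3))
print(round(s, 6))  # → 1.0
```

The same routine evaluates the order-1 and order-2 bases used for the segmented linear and quadratic regressions simply by changing the `order` argument.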
Many-knot spline technique for approximation of data
Institute of Scientific and Technical Information of China (English)
齐东旭; 李华山
1999-01-01
A class of new fundamental functions with compact support called many-knot spline is introduced. The two-scale relation for the fundamental functions is investigated, and the higher order accuracy spline approximation scheme is constructed by using the available degrees of freedom which come from additional knots. The technique has been efficiently applied to the problems such as time-frequency analysis, computer aided geometric design, and digital signal processing.
G1 Continuity Conditions of B-spline Surfaces
Institute of Scientific and Technical Information of China (English)
车翔玖; 梁学章
2002-01-01
According to B-spline theory and the Boehm algorithm, this paper presents several necessary and sufficient conditions for G1 continuity between two adjacent B-spline surfaces. In order to meet the needs of applications, a set of sufficient conditions for G1 continuity is developed, and sufficient conditions for G1 continuity among N (N>2) B-spline surface patches meeting at a common corner are given at the end.
Extended Cubic Uniform B-spline and α-B-spline
Institute of Scientific and Technical Information of China (English)
徐岗; 汪国昭
2008-01-01
Spline curves and surfaces play an important role in CAD and computer graphics. In this paper, we propose several extensions of the cubic uniform B-spline. Then, we present extensions of the interpolating α-B-spline based on the new B-splines and the singular blending technique. The advantage of the extensions is that they have global and local shape parameters. Furthermore, we also investigate their applications in data interpolation and polygonal shape deformation.
Generalized B-spline subdivision-surface wavelets for geometry compression.
Bertram, Martin; Duchaineau, Mark A; Hamann, Bernd; Joy, Kenneth I
2004-01-01
We present a new construction of lifted biorthogonal wavelets on surfaces of arbitrary two-manifold topology for compression and multiresolution representation. Our method combines three approaches: subdivision surfaces of arbitrary topology, B-spline wavelets, and the lifting scheme for biorthogonal wavelet construction. The simple building blocks of our wavelet transform are local lifting operations performed on polygonal meshes with subdivision hierarchy. Starting with a coarse, irregular polyhedral base mesh, our transform creates a subdivision hierarchy of meshes converging to a smooth limit surface. At every subdivision level, geometric detail can be expanded from wavelet coefficients and added to the surface. We present wavelet constructions for bilinear, bicubic, and biquintic B-Spline subdivision. While the bilinear and bicubic constructions perform well in numerical experiments, the biquintic construction turns out to be unstable. For lossless compression, our transform can be computed in integer arithmetic, mapping integer coordinates of control points to integer wavelet coefficients. Our approach provides a highly efficient and progressive representation for complex geometries of arbitrary topology.
Calculation of Press Fitting Force for Involute Spline Fit
Institute of Scientific and Technical Information of China (English)
王宋军; 陈启云; 李慧军; 由毅; 冯擎峰
2013-01-01
The involute spline coupling is widely applied in the automobile industry due to its large transmission torque and high centering accuracy. To ensure the concentricity of internal and external splines, the coupling method of major-diameter-centering spline interference fit is mostly adopted between gears and shafts in a vehicle transmission. However, there are no fit categories or calculation methods for the press fitting force in China's involute spline fit standard. In this paper, the press fitting force of a major-diameter-centering involute spline is calculated using the press coupling calculation method for a smooth cylindrical surface; thus a basis for establishing the press fitting process of involute splines is provided.
Barmpoutis, Angelos; Vemuri, Baba C; Shepherd, Timothy M; Forder, John R
2007-11-01
In this paper, we present novel algorithms for statistically robust interpolation and approximation of diffusion tensors-which are symmetric positive definite (SPD) matrices-and use them in developing a significant extension to an existing probabilistic algorithm for scalar field segmentation, in order to segment diffusion tensor magnetic resonance imaging (DT-MRI) datasets. Using the Riemannian metric on the space of SPD matrices, we present a novel and robust higher order (cubic) continuous tensor product of B-splines algorithm to approximate the SPD diffusion tensor fields. The resulting approximations are appropriately dubbed tensor splines. Next, we segment the diffusion tensor field by jointly estimating the label (assigned to each voxel) field, which is modeled by a Gauss Markov measure field (GMMF) and the parameters of each smooth tensor spline model representing the labeled regions. Results of interpolation, approximation, and segmentation are presented for synthetic data and real diffusion tensor fields from an isolated rat hippocampus, along with validation. We also present comparisons of our algorithms with existing methods and show significantly improved results in the presence of noise as well as outliers.
Bayesian microsaccade detection
Mihali, Andra; van Opheusden, Bas; Ma, Wei Ji
2017-01-01
Microsaccades are high-velocity fixational eye movements, with special roles in perception and cognition. The default microsaccade detection method is to determine when the smoothed eye velocity exceeds a threshold. We have developed a new method, Bayesian microsaccade detection (BMD), which performs inference based on a simple statistical model of eye positions. In this model, a hidden state variable changes between drift and microsaccade states at random times. The eye position is a biased random walk with different velocity distributions for each state. BMD generates samples from the posterior probability distribution over the eye state time series given the eye position time series. Applied to simulated data, BMD recovers the “true” microsaccades with fewer errors than alternative algorithms, especially at high noise. Applied to EyeLink eye tracker data, BMD detects almost all the microsaccades detected by the default method, but also apparent microsaccades embedded in high noise—although these can also be interpreted as false positives. Next we apply the algorithms to data collected with a Dual Purkinje Image eye tracker, whose higher precision justifies defining the inferred microsaccades as ground truth. When we add artificial measurement noise, the inferences of all algorithms degrade; however, at noise levels comparable to EyeLink data, BMD recovers the “true” microsaccades with 54% fewer errors than the default algorithm. Though unsuitable for online detection, BMD has other advantages: It returns probabilities rather than binary judgments, and it can be straightforwardly adapted as the generative model is refined. We make our algorithm available as a software package. PMID:28114483
Bayesian Games with Intentions
Directory of Open Access Journals (Sweden)
Adam Bjorndahl
2016-06-01
Full Text Available We show that standard Bayesian games cannot represent the full spectrum of belief-dependent preferences. However, by introducing a fundamental distinction between intended and actual strategies, we remove this limitation. We define Bayesian games with intentions, generalizing both Bayesian games and psychological games, and prove that Nash equilibria in psychological games correspond to a special class of equilibria as defined in our setting.
Bayesian statistics an introduction
Lee, Peter M
2012-01-01
Bayesian Statistics is the school of thought that combines prior beliefs with the likelihood of a hypothesis to arrive at posterior beliefs. The first edition of Peter Lee’s book appeared in 1989, but the subject has moved ever onwards, with increasing emphasis on Monte Carlo based techniques. This new fourth edition looks at recent techniques such as variational methods, Bayesian importance sampling, approximate Bayesian computation and Reversible Jump Markov Chain Monte Carlo (RJMCMC), providing a concise account of the way in which the Bayesian approach to statistics develops as wel
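The prior-plus-likelihood-to-posterior update at the heart of the book can be made concrete with the conjugate beta-binomial model; the numbers below are illustrative, not taken from the text.

```python
# Conjugate beta-binomial update: Beta(a, b) prior combined with binomial
# data yields a Beta(a + successes, b + failures) posterior.

def beta_binomial_update(a, b, successes, failures):
    """Return the posterior Beta parameters."""
    return a + successes, b + failures

# Uniform Beta(1, 1) prior, then observe 7 successes and 3 failures.
a, b = beta_binomial_update(1, 1, 7, 3)
posterior_mean = a / (a + b)
print(a, b, round(posterior_mean, 3))  # → 8 4 0.667
```

When no such conjugate form exists, the Monte Carlo techniques emphasized in this edition (importance sampling, RJMCMC, and so on) take over.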
Understanding Computational Bayesian Statistics
Bolstad, William M
2011-01-01
A hands-on introduction to computational statistics from a Bayesian point of view Providing a solid grounding in statistics while uniquely covering the topics from a Bayesian perspective, Understanding Computational Bayesian Statistics successfully guides readers through this new, cutting-edge approach. With its hands-on treatment of the topic, the book shows how samples can be drawn from the posterior distribution when the formula giving its shape is all that is known, and how Bayesian inferences can be based on these samples from the posterior. These ideas are illustrated on common statistic
Smooth Neutrosophic Topological Spaces
Directory of Open Access Journals (Sweden)
M. K. EL Gayyar
2016-08-01
As a new branch of philosophy, the neutrosophy was presented by Smarandache in 1980. It was presented as the study of origin, nature, and scope of neutralities; as well as their interactions with different ideational spectra. The aim in this paper is to introduce the concepts of smooth neutrosophic topological space, smooth neutrosophic cotopological space, smooth neutrosophic closure, and smooth neutrosophic interior. Furthermore, some properties of these concepts will be investigated.
Anacleto, Osvaldo; Queen, Catriona; Albers, Casper J.
2013-01-01
Traffic flow data are routinely collected for many networks worldwide. These invariably large data sets can be used as part of a traffic management system, for which good traffic flow forecasting models are crucial. The linear multiregression dynamic model (LMDM) has been shown to be promising for f
An Areal Isotropic Spline Filter for Surface Metrology.
Zhang, Hao; Tong, Mingsi; Chu, Wei
2015-01-01
This paper deals with the application of the spline filter as an areal filter for surface metrology. A profile (2D) filter is often applied in orthogonal directions to yield an areal filter for a three-dimensional (3D) measurement. Unlike the Gaussian filter, the spline filter presents an anisotropic characteristic when used as an areal filter. This disadvantage hampers the wide application of spline filters for evaluation and analysis of areal surface topography. An approximation method is proposed in this paper to overcome the problem. In this method, a profile high-order spline filter serial is constructed to approximate the filtering characteristic of the Gaussian filter. Then an areal filter with isotropic characteristic is composed by implementing the profile spline filter in the orthogonal directions. It is demonstrated that the constructed areal filter has two important features for surface metrology: an isotropic amplitude characteristic and no end effects. Some examples of applying this method on simulated and practical surfaces are analyzed.
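The row-then-column construction described above can be sketched as follows; the 3-tap weighted average here is a hypothetical stand-in for the profile spline filter, used only to show the separable structure.

```python
# Separable areal filtering: apply a profile (1D) filter along rows,
# then along columns, of a height map. The 3-tap kernel [0.25, 0.5, 0.25]
# with mirrored ends is an illustrative assumption, not the spline filter.

def filter_profile(row):
    """1D weighted moving average with mirrored end handling."""
    padded = [row[0]] + list(row) + [row[-1]]
    return [0.25 * padded[i] + 0.5 * padded[i + 1] + 0.25 * padded[i + 2]
            for i in range(len(row))]

def filter_areal(z):
    rows = [filter_profile(r) for r in z]          # filter along x
    cols = list(map(list, zip(*rows)))             # transpose
    cols = [filter_profile(c) for c in cols]       # filter along y
    return list(map(list, zip(*cols)))             # transpose back

surface = [[1.0, 5.0, 1.0],
           [5.0, 1.0, 5.0],
           [1.0, 5.0, 1.0]]
print(filter_areal(surface))  # → [[2.5, 3.0, 2.5], [3.0, 3.0, 3.0], [2.5, 3.0, 2.5]]
```

With a Gaussian profile kernel this separable product is isotropic; the point of the paper is that a spline profile filter is not, which is what the proposed approximation corrects.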
A kernel representation for exponential splines with global tension
Barendt, Sven; Fischer, Bernd; Modersitzki, Jan
2009-02-01
Interpolation is a key ingredient in many imaging routines. In this note, we present a thorough evaluation of an interpolation method based on exponential splines in tension. They are based on so-called tension parameters, which allow for a tuning of their properties. As it turns out, these interpolants have very many nice features, which are, however, not born out in the literature. We intend to close this gap. We present for the first time an analytic representation of their kernel which enables one to come up with a space and frequency domain analysis. It is shown that the exponential splines in tension, as a function of the tension parameter, bridging the gap between linear and cubic B-Spline interpolation. For example, with a certain tension parameter, one is able to suppress ringing artefacts in the interpolant. On the other hand, the analysis in the frequency domain shows that one derives a superior signal reconstruction quality as known from the cubic B-Spline interpolation, which, however, suffers from ringing artifacts. With the ability to offer a trade-off between opposing features of interpolation methods we advocate the use of the exponential spline in tension from a practical point of view and use the new kernel representation to qualify the trade-off.
Yuan, Ying; MacKinnon, David P.
2009-01-01
In this article, we propose Bayesian analysis of mediation effects. Compared with conventional frequentist mediation analysis, the Bayesian approach has several advantages. First, it allows researchers to incorporate prior information into the mediation analysis, thus potentially improving the efficiency of estimates. Second, under the Bayesian…
Metwalli, Nader S; Hu, Xiaoping P; Carew, John D
2010-09-01
Q-ball imaging (QBI) is a high angular resolution diffusion-weighted imaging (HARDI) technique for reconstructing the orientation distribution function (ODF). Some form of smoothing or regularization is typically required in the ODF reconstruction from low signal-to-noise ratio HARDI data. The amount of smoothing or regularization is usually set a priori at the discretion of the investigator. In this article, we apply an adaptive and objective means of smoothing the raw HARDI data using the smoothing splines on the sphere method with generalized cross-validation (GCV) to estimate the diffusivity profile in each voxel. Subsequently, we reconstruct the ODF, from the smoothed data, based on the Funk-Radon transform (FRT) used in QBI. The spline method was applied to both simulated data and in vivo human brain data. Simulated data show that the smoothing splines on the sphere method with GCV smoothing reduces the mean squared error in estimates of the ODF as compared with the standard analytical QBI approach. The human data demonstrate the utility of the method for estimating smooth ODFs.
Fingerprint Representation Methods Based on B-Spline Functions
Institute of Scientific and Technical Information of China (English)
Ruan Ke; Xia De-lin; Yan Pu-liu
2004-01-01
The global characteristics of a fingerprint image such as the ridge shape and ridge topology are often ignored in most automatic fingerprint verification system. In this paper, a new representative method based on B-Spline curve is proposed to address this problem. The resultant B-Spline curves can represent the global characteristics completely and the curves are analyzable and precise. An algorithm is also proposed to extract the curves from the fingerprint image. In addition to preserve the most information of the fingerprint image, the knot-points number of the B-Spline curve is reduced to minimum in this algorithm. At the same time, the influence of the fingerprint image noise is discussed. In the end, an example is given to demonstrate the effectiveness of the representation method.
RECONSTRUCTION OF SYMMETRIC B-SPLINE CURVES AND SURFACES
Institute of Scientific and Technical Information of China (English)
ZHU Weidong; KE Yinglin
2007-01-01
A method to reconstruct Symmetric B-spline curves and surfaces is presented. The symmetry property is realized by using Symmetric knot vector and Symmetric control points. Firstly, data points are divided into two parts based on the symmetry axis or symmetry plane extracted from data points. Then the divided data points are parameterized and a Symmetric knot vector is selected in order to get Symmetric B-spline basis functions. Constraint equations regarding the control points are deduced to keep the control points of the B-spline curve or surface to be Symmetric with respect to the extracted symmetry axis or symmetry plane. Lastly, the constrained least squares fitting problem is solved with the Lagrange multiplier method. Two examples from industry are given to show that the proposed method is efficient, robust and able to meet the general engineering requirements.
Konstruksi Bayesian Network Dengan Algoritma Bayesian Association Rule Mining Network
Octavian
2015-01-01
Beberapa tahun terakhir, Bayesian Network telah menjadi konsep yang populer digunakan dalam berbagai bidang kehidupan seperti dalam pengambilan sebuah keputusan dan menentukan peluang suatu kejadian dapat terjadi. Sayangnya, pengkonstruksian struktur dari Bayesian Network itu sendiri bukanlah hal yang sederhana. Oleh sebab itu, penelitian ini mencoba memperkenalkan algoritma Bayesian Association Rule Mining Network untuk memudahkan kita dalam mengkonstruksi Bayesian Network berdasarkan data ...
Bayesian recovery of the initial condition for the heat equation
Knapik, B T; van Zanten, J H
2011-01-01
We study a Bayesian approach to recovering the initial condition for the heat equation from noisy observations of the solution at a later time. We consider a class of prior distributions indexed by a parameter quantifying "smoothness" and show that the corresponding posterior distributions contract around the true parameter at a rate that depends on the smoothness of the true initial condition and the smoothness and scale of the prior. Correct combinations of these characteristics lead to the optimal minimax rate. One type of priors leads to a rate-adaptive Bayesian procedure. The frequentist coverage of credible sets is shown to depend on the combination of the prior and true parameter as well, with smoother priors leading to zero coverage and rougher priors to (extremely) conservative results. In the latter case credible sets are much larger than frequentist confidence sets, in that the ratio of diameters diverges to infinity. The results are numerically illustrated by a simulated data example.
Isogeometric Divergence-conforming B-splines for the Darcy-Stokes-Brinkman Equations
2012-01-01
depicted in Figure 2. A geometrical mapping meeting our criteria could be defined utilizing B-splines or Non-Uniform Rational B- Splines ( NURBS ) on...the coarsest mesh Mh0 . For examples of such mappings, see Chapter 2 of [20]. NURBS mappings are especially useful as they can represent many...Compatible B-splines Two-dimensional Compatible B-splines: NURBS Mapped Domains On NURBS mapped domains, the Piola transform is utilized to map flow velocities
样条型矩阵有理插值%SPLINE-TYPE MATRIX VALUED RATIONAL INTERPOLATION
Institute of Scientific and Technical Information of China (English)
杨松林
2005-01-01
The matrix valued rational interpolation is very useful in the partial realization problem and model reduction for all the linear system theory. Lagrange basic functions have been used in matrix valued rational interpolation. In this paper, according to the property of cardinal spline interpolation, we constructed a kind of spline type matrix valued rational interpolation, which based on cardinal spline. This spline type interpolation can avoid instability of high order polynomial interpolation and we obtained a useful formula.
Shape preserving rational cubic spline for positive and convex data
Directory of Open Access Journals (Sweden)
Malik Zawwar Hussain
2011-11-01
Full Text Available In this paper, the problem of shape preserving C2 rational cubic spline has been proposed. The shapes of the positive and convex data are under discussion of the proposed spline solutions. A C2 rational cubic function with two families of free parameters has been introduced to attain the C2 positive curves from positive data and C2 convex curves from convex data. Simple data dependent constraints are derived on free parameters in the description of rational cubic function to obtain the desired shape of the data. The rational cubic schemes have unique representations.
Construction of generalized magnetic coordinates by B-spline expansion
Energy Technology Data Exchange (ETDEWEB)
Kurata, Michinari [Dept. of Energy Engineering and Science, Graduate School of Engineering, Nagoya Univ., Nagoya, Aichi (Japan); Todoroki, Jiro [National Inst. for Fusion Science, Toki, Gifu (Japan)
2000-06-01
Generalized Magnetic Coordinates (GMC) are curvilinear coordinates ({xi},{eta},{zeta}) in which the magnetic field is expressed in the form B={nabla}{psi}({xi},{eta},{zeta}) x {nabla}{zeta} + H{sup {zeta}}({xi},{eta}){nabla}{xi} x {nabla}{eta}. The coordinates are expanded in Fourier series in the toroidal direction and the B-spline function in other two dimensions to treat the aperiodic model magnetic field. The coordinates are well constructed, but are influenced by the boundary condition in the B-spline expansion. (author)
Vibration Analysis of Beams by Spline Finite Element
Institute of Scientific and Technical Information of China (English)
YANG Hao; SUN Li
2011-01-01
In this paper,the spline finite element method is developed to investigate free vibration problems of beams.The cubic B-spline functions are used to construct the displacement field.The assembly of elements and the introduction of boundary conditions follow the standard finite element procedure.The results under various boundary conditions are compared with those obtained by the exact method and the finite difference method.It shows that the results are in excellent agreement with the analytical results and much more accurate than the results obtained by the finite difference method,especially for higher order modes.
A Fast Approach for Time Optimal and Smooth Trajectory Planning of Robot Manipulators
Institute of Scientific and Technical Information of China (English)
Gang Liu∗; Chao Yun
2016-01-01
In this paper, a fast approach to generate time optimal and smooth trajectory has been developed and tested. Minimum time is critical for the productivity in industrial applications. Meanwhile, smooth trajectories based on cubic splines are desirable for their ability to limit vibrations and ensure the continuity of position, velocity and acceleration during the robot movement. The main feature of the approach is a satisfactory solution that can be obtained by a local modification process among each interval between two consecutive via⁃points. An analytical formulation simplifies the approach to smooth trajectory and few iterations are enough to determine the correct values. The approach can be applied in many robot manipulators which require high performance on time and smooth. The simulation and application of the approach on a palletizer robot are performed, and the experimental results provide evidence that the approach can realize the robot manipulators more efficiency and high smooth performance.
Model Diagnostics for Bayesian Networks
Sinharay, Sandip
2006-01-01
Bayesian networks are frequently used in educational assessments primarily for learning about students' knowledge and skills. There is a lack of works on assessing fit of Bayesian networks. This article employs the posterior predictive model checking method, a popular Bayesian model checking tool, to assess fit of simple Bayesian networks. A…
Locally controlled globally smooth ground surface reconstruction from terrestrial point clouds
Rychkov, Igor
2012-01-01
Approaches to ground surface reconstruction from massive terrestrial point clouds are presented. Using a set of local least squares (LSQR) planes, the "holes" are filled either from the ground model of the next coarser level or by Hermite Radial Basis Functions (HRBF). Global curvature continuous as well as infinitely smooth ground surface models are obtained with Partition of Unity (PU) using either tensor product B-Splines or compactly supported exponential function. The resulting surface function has local control enabling fast evaluation.
Smoothing internal migration age profiles for comparative research
Directory of Open Access Journals (Sweden)
Aude Bernard
2015-05-01
Full Text Available Background: Age patterns are a key dimension to compare migration between countries and over time. Comparative metrics can be reliably computed only if data capture the underlying age distribution of migration. Model schedules, the prevailing smoothing method, fit a composite exponential function, but are sensitive to function selection and initial parameter setting. Although non-parametric alternatives exist, their performance is yet to be established. Objective: We compare cubic splines and kernel regressions against model schedules by assessingwhich method provides an accurate representation of the age profile and best performs on metrics for comparing aggregate age patterns. Methods: We use full population microdata for Chile to perform 1,000 Monte-Carlo simulations for nine sample sizes and two spatial scales. We use residual and graphic analysis to assess model performance on the age and intensity at which migration peaks and the evolution of migration age patterns. Results: Model schedules generate a better fit when (1 the expected distribution of the age profile is known a priori, (2 the pre-determined shape of the model schedule adequately describes the true age distribution, and (3 the component curves and initial parameter values can be correctly set. When any of these conditions is not met, kernel regressions and cubic splines offer more reliable alternatives. Conclusions: Smoothing models should be selected according to research aims, age profile characteristics, and sample size. Kernel regressions and cubic splines enable a precise representation of aggregate migration age profiles for most sample sizes, without requiring parameter setting or imposing a pre-determined distribution, and therefore facilitate objective comparison.
Bayesian Lensing Shear Measurement
Bernstein, Gary M
2013-01-01
We derive an estimator of weak gravitational lensing shear from background galaxy images that avoids noise-induced biases through a rigorous Bayesian treatment of the measurement. The Bayesian formalism requires a prior describing the (noiseless) distribution of the target galaxy population over some parameter space; this prior can be constructed from low-noise images of a subsample of the target population, attainable from long integrations of a fraction of the survey field. We find two ways to combine this exact treatment of noise with rigorous treatment of the effects of the instrumental point-spread function and sampling. The Bayesian model fitting (BMF) method assigns a likelihood of the pixel data to galaxy models (e.g. Sersic ellipses), and requires the unlensed distribution of galaxies over the model parameters as a prior. The Bayesian Fourier domain (BFD) method compresses galaxies to a small set of weighted moments calculated after PSF correction in Fourier space. It requires the unlensed distributi...
Fox, G.J.A.; Berg, van den S.M.; Veldkamp, B.P.; Irwing, P.; Booth, T.; Hughes, D.
2015-01-01
In educational and psychological studies, psychometric methods are involved in the measurement of constructs, and in constructing and validating measurement instruments. Assessment results are typically used to measure student proficiency levels and test characteristics. Recently, Bayesian item resp
Noncausal Bayesian Vector Autoregression
DEFF Research Database (Denmark)
Lanne, Markku; Luoto, Jani
We propose a Bayesian inferential procedure for the noncausal vector autoregressive (VAR) model that is capable of capturing nonlinearities and incorporating effects of missing variables. In particular, we devise a fast and reliable posterior simulator that yields the predictive distribution...
Granade, Christopher; Cory, D G
2015-01-01
In recent years, Bayesian methods have been proposed as a solution to a wide range of issues in quantum state and process tomography. State-of- the-art Bayesian tomography solutions suffer from three problems: numerical intractability, a lack of informative prior distributions, and an inability to track time-dependent processes. Here, we solve all three problems. First, we use modern statistical methods, as pioneered by Husz\\'ar and Houlsby and by Ferrie, to make Bayesian tomography numerically tractable. Our approach allows for practical computation of Bayesian point and region estimators for quantum states and channels. Second, we propose the first informative priors on quantum states and channels. Finally, we develop a method that allows online tracking of time-dependent states and estimates the drift and diffusion processes affecting a state. We provide source code and animated visual examples for our methods.
Cubic spline approximation techniques for parameter estimation in distributed systems
Banks, H. T.; Crowley, J. M.; Kunisch, K.
1983-01-01
Approximation schemes employing cubic splines in the context of a linear semigroup framework are developed for both parabolic and hyperbolic second-order partial differential equation parameter estimation problems. Convergence results are established for problems with linear and nonlinear systems, and a summary of numerical experiments with the techniques proposed is given.
The Shape Parameter in the Shifted Surface Spline
Luh, Lin-Tian
2010-01-01
There is a constant c contained in the famous radial basis function shifted surface spline. It's called shape parameter. RBF people only know that this constant is very influential, while its optimal choice is unknown. This paper presents criteria of its optimal choice.
Differential constraints for bounded recursive identification with multivariate splines
De Visser, C.C.; Chu, Q.P.; Mulder, J.A.
2011-01-01
The ability to perform online model identification for nonlinear systems with unknown dynamics is essential to any adaptive model-based control system. In this paper, a new differential equality constrained recursive least squares estimator for multivariate simplex splines is presented that is able
Kriging and thin plate splines for mapping climate variables
Boer, E.P.J.; Beurs, de K.M.; Hartkamp, A.D.
2001-01-01
Four forms of kriging and three forms of thin plate splines are discussed in this paper to predict monthly maximum temperature and monthly mean precipitation in Jalisco State of Mexico. Results show that techniques using elevation as additional information improve the prediction results considerably
Cubic generalized B-splines for interpolation and nonlinear filtering of images
Tshughuryan, Heghine
1997-04-01
This paper presents the introduction and using of the generalized or parametric B-splines, namely the cubic generalized B-splines, in various signal processing applications. The theory of generalized B-splines is briefly reviewed and also some important properties of generalized B-splines are investigated. In this paper it is shown the use of generalized B-splines as a tool to solve the quasioptimal algorithm problem for nonlinear filtering. Finally, the experimental results are presented for oscillatory and other signals and images.
A matrix method for degree-raising of B-spline curves
Institute of Scientific and Technical Information of China (English)
秦开怀
1997-01-01
A new identity is proved that represents the kth order B-splines as linear combinations of the (k + 1) th order B-splines A new method for degree-raising of B-spline curves is presented based on the identity. The new method can be used for all kinds of B-spline curves, that is, both uniform and arbitrarily nonuniform B-spline curves. When used for degree-raising of a segment of a uniform B-spline curve of degree k - 1, it can help obtain a segment of curve of degree k that is still a uniform B-spline curve without raising the multiplicity of any knot. The method for degree-raising of Bezier curves can be regarded as the special case of the new method presented. Moreover, the conventional theory for degree-raising, whose shortcoming has been found, is discussed.
Bayesian Face Sketch Synthesis.
Wang, Nannan; Gao, Xinbo; Sun, Leiyu; Li, Jie
2017-03-01
Exemplar-based face sketch synthesis has been widely applied to both digital entertainment and law enforcement. In this paper, we propose a Bayesian framework for face sketch synthesis, which provides a systematic interpretation for understanding the common properties and intrinsic difference in different methods from the perspective of probabilistic graphical models. The proposed Bayesian framework consists of two parts: the neighbor selection model and the weight computation model. Within the proposed framework, we further propose a Bayesian face sketch synthesis method. The essential rationale behind the proposed Bayesian method is that we take the spatial neighboring constraint between adjacent image patches into consideration for both aforementioned models, while the state-of-the-art methods neglect the constraint either in the neighbor selection model or in the weight computation model. Extensive experiments on the Chinese University of Hong Kong face sketch database demonstrate that the proposed Bayesian method could achieve superior performance compared with the state-of-the-art methods in terms of both subjective perceptions and objective evaluations.
Smooth sandwich gravitational waves
Podolsky, J
1999-01-01
Gravitational waves which are smooth and contain two asymptotically flat regions are constructed from the homogeneous pp-waves vacuum solution. Motion of free test particles is calculated explicitly and the limit to an impulsive wave is also considered.
Bayesian least squares deconvolution
Asensio Ramos, A.; Petit, P.
2015-11-01
Aims: We develop a fully Bayesian least squares deconvolution (LSD) that can be applied to the reliable detection of magnetic signals in noise-limited stellar spectropolarimetric observations using multiline techniques. Methods: We consider LSD under the Bayesian framework and we introduce a flexible Gaussian process (GP) prior for the LSD profile. This prior allows the result to automatically adapt to the presence of signal. We exploit several linear algebra identities to accelerate the calculations. The final algorithm can deal with thousands of spectral lines in a few seconds. Results: We demonstrate the reliability of the method with synthetic experiments and we apply it to real spectropolarimetric observations of magnetic stars. We are able to recover the magnetic signals using a small number of spectral lines, together with the uncertainty at each velocity bin. This allows the user to consider if the detected signal is reliable. The code to compute the Bayesian LSD profile is freely available.
Hybrid Batch Bayesian Optimization
Azimi, Javad; Fern, Xiaoli
2012-01-01
Bayesian Optimization aims at optimizing an unknown non-convex/concave function that is costly to evaluate. We are interested in application scenarios where concurrent function evaluations are possible. Under such a setting, BO could choose to either sequentially evaluate the function, one input at a time and wait for the output of the function before making the next selection, or evaluate the function at a batch of multiple inputs at once. These two different settings are commonly referred to as the sequential and batch settings of Bayesian Optimization. In general, the sequential setting leads to better optimization performance as each function evaluation is selected with more information, whereas the batch setting has an advantage in terms of the total experimental time (the number of iterations). In this work, our goal is to combine the strength of both settings. Specifically, we systematically analyze Bayesian optimization using Gaussian process as the posterior estimator and provide a hybrid algorithm t...
Bayesian least squares deconvolution
Ramos, A Asensio
2015-01-01
Aims. To develop a fully Bayesian least squares deconvolution (LSD) that can be applied to the reliable detection of magnetic signals in noise-limited stellar spectropolarimetric observations using multiline techniques. Methods. We consider LSD under the Bayesian framework and we introduce a flexible Gaussian Process (GP) prior for the LSD profile. This prior allows the result to automatically adapt to the presence of signal. We exploit several linear algebra identities to accelerate the calculations. The final algorithm can deal with thousands of spectral lines in a few seconds. Results. We demonstrate the reliability of the method with synthetic experiments and we apply it to real spectropolarimetric observations of magnetic stars. We are able to recover the magnetic signals using a small number of spectral lines, together with the uncertainty at each velocity bin. This allows the user to consider if the detected signal is reliable. The code to compute the Bayesian LSD profile is freely available.
Bayesian Exploratory Factor Analysis
DEFF Research Database (Denmark)
Conti, Gabriella; Frühwirth-Schnatter, Sylvia; Heckman, James J.;
2014-01-01
This paper develops and applies a Bayesian approach to Exploratory Factor Analysis that improves on ad hoc classical approaches. Our framework relies on dedicated factor models and simultaneously determines the number of factors, the allocation of each measurement to a unique factor, and the corr......This paper develops and applies a Bayesian approach to Exploratory Factor Analysis that improves on ad hoc classical approaches. Our framework relies on dedicated factor models and simultaneously determines the number of factors, the allocation of each measurement to a unique factor......, and the corresponding factor loadings. Classical identification criteria are applied and integrated into our Bayesian procedure to generate models that are stable and clearly interpretable. A Monte Carlo study confirms the validity of the approach. The method is used to produce interpretable low dimensional aggregates...
Center, Julian L.; Knuth, Kevin H.
2011-03-01
Visual odometry refers to tracking the motion of a body using an onboard vision system. Practical visual odometry systems combine the complementary accuracy characteristics of vision and inertial measurement units. The Mars Exploration Rovers, Spirit and Opportunity, used this type of visual odometry. The visual odometry algorithms in Spirit and Opportunity were based on Bayesian methods, but a number of simplifying approximations were needed to deal with onboard computer limitations. Furthermore, the allowable motion of the rover had to be severely limited so that computations could keep up. Recent advances in computer technology make it feasible to implement a fully Bayesian approach to visual odometry. This approach combines dense stereo vision, dense optical flow, and inertial measurements. As with all true Bayesian methods, it also determines error bars for all estimates. This approach also offers the possibility of using Micro-Electro Mechanical Systems (MEMS) inertial components, which are more economical, weigh less, and consume less power than conventional inertial components.
USING SPLINE FUNCTIONS FOR THE SUBSTANTIATION OF TAX POLICIES BY LOCAL AUTHORITIES
Directory of Open Access Journals (Sweden)
Otgon Cristian
2011-07-01
Full Text Available The paper aims to approach innovative financial instruments for the management of public resources. In the category of these innovative tools have been included polynomial spline functions used for budgetary sizing in the substantiating of fiscal and budgetary policies. In order to use polynomial spline functions there have been made a number of steps consisted in the establishment of nodes, the calculation of specific coefficients corresponding to the spline functions, development and determination of errors of approximation. Also in this paper was done extrapolation of series of property tax data using polynomial spline functions of order I. For spline impelementation were taken two series of data, one reffering to property tax as a resultative variable and the second one reffering to building tax, resulting a correlation indicator R=0,95. Moreover the calculation of spline functions are easy to solve and due to small errors of approximation have a great power of predictibility, much better than using ordinary least squares method. In order to realise the research there have been used as methods of research several steps, namely observation, series of data construction and processing the data with spline functions. The data construction is a daily series gathered from the budget account, reffering to building tax and property tax. The added value of this paper is given by the possibility of avoiding deficits by using spline functions as innovative instruments in the publlic finance, the original contribution is made by the average of splines resulted from the series of data. The research results lead to conclusion that the polynomial spline functions are recommended to form the elaboration of fiscal and budgetary policies, due to relatively small errors obtained in the extrapolation of economic processes and phenomena. Future research directions are taking in consideration to study the polynomial spline functions of second-order, third
On the role of exponential splines in image interpolation.
Kirshner, Hagai; Porat, Moshe
2009-10-01
A Sobolev reproducing-kernel Hilbert space approach to image interpolation is introduced. The underlying kernels are exponential functions and are related to stochastic autoregressive image modeling. The corresponding image interpolants can be implemented effectively using compactly-supported exponential B-splines. A tight l(2) upper-bound on the interpolation error is then derived, suggesting that the proposed exponential functions are optimal in this regard. Experimental results indicate that the proposed interpolation approach with properly-tuned, signal-dependent weights outperforms currently available polynomial B-spline models of comparable order. Furthermore, a unified approach to image interpolation by ideal and nonideal sampling procedures is derived, suggesting that the proposed exponential kernels may have a significant role in image modeling as well. Our conclusion is that the proposed Sobolev-based approach could be instrumental and a preferred alternative in many interpolation tasks.
Regional Ionosphere Mapping with Kriging and B-spline Methods
Grynyshyna-Poliuga, O.; Stanislawska, I. M.
2013-12-01
This work demonstrates the concept and practical examples of mapping of regional ionosphere, based on GPS observations from the EGNOS Ranging and Integrity Monitoring Stations (RIMS) network and permanent stations near to them. Interpolation/prediction techniques, such as kriging (KR) and the cubic B-spline, which are suitable for handling multi-scale phenomena and unevenly distributed data, were used to create total electron content (TEC) maps. Their computational efficiency (especially the B-spline) and the ability to handle undersampled data (especially kriging) are particularly attractive. The data sets have been collect into seasonal bins representing June, December solstices and equinox (March, September). TEC maps have a spatial resolution of 2.50 and 2.50 in latitude and longitude, respectively, and a 15-minutes temporal resolution. The time series of the TEC maps can be used to derive average monthly maps describing major ionospheric trends as a function of time, season, and spatial location.
Probabilistic Inferences in Bayesian Networks
Ding, Jianguo
2010-01-01
This chapter summarizes the popular inference methods in Bayesian networks. The results demonstrate that evidence can be propagated across a Bayesian network along any link, whether in a forward, backward, or intercausal style. The belief updating of Bayesian networks can be obtained by various available inference techniques. Theoretically, exact inference in Bayesian networks is feasible and manageable. However, the computation and inference are NP-hard. That means, in applications, in ...
Almeida, Nuno; Friboulet, Denis; Sarvari, Sebastian Imre; Bernard, Olivier; Barbosa, Daniel; Samset, Eigil; Dhooge, Jan
2016-02-01
Segmentation of the left atrium (LA) of the heart allows quantification of LA volume dynamics which can give insight into cardiac function. However, very little attention has been given to LA segmentation from three-dimensional (3-D) ultrasound (US), most efforts being focused on the segmentation of the left ventricle (LV). The B-spline explicit active surfaces (BEAS) framework has been shown to be a very robust and efficient methodology to perform LV segmentation. In this study, we propose an extension of the BEAS framework, introducing B-splines with uncoupled scaling. This formulation improves the shape support for less regular and more variable structures, by giving independent control over smoothness and number of control points. Semiautomatic segmentation of the LA endocardium using this framework was tested in a setup requiring little user input, on 20 volumetric sequences of echocardiographic data from healthy subjects. The segmentation results were evaluated against manual reference delineations of the LA. Relevant LA morphological and functional parameters were derived from the segmented surfaces, in order to assess the performance of the proposed method on its clinical usage. The results showed that the modified BEAS framework is capable of accurate semiautomatic LA segmentation in 3-D transthoracic US, providing reliable quantification of the LA morphology and function.
Energy Technology Data Exchange (ETDEWEB)
Vasconcelos, Geovane Vitor; Dantas, Carlos Costa, E-mail: geovitor@bol.com.b, E-mail: ccd@ufpe.b [Universidade Federal de Pernambuco (UFPE), Recife, PE (Brazil). Dept. de Energia Nuclear. Grupo de Radioquimica; Melo, Silvio de Barros; Pires, Renan Ferraz, E-mail: sbm@cin.ufpe.b, E-mail: rfp@cin.ufpe.b [Universidade Federal de Pernambuco (UFPE), Recife, PE (Brazil). Centro de Informatica
2009-07-01
The 3D tomography reconstruction has been a profitable alternative in the analysis of the FCC-type riser (Fluid Catalytic Cracking), for appropriately keeping track of the sectional catalyst concentration distribution in the process of oil refining. The method of tomography reconstruction proposed by M. Azzi and colleagues (1991) uses a relatively small number of trajectories (from 3 to 5) and projections (from 5 to 7) of gamma rays, a desirable feature in industrial process tomography. Compared to more popular methods, such as FBP (Filtered Back Projection), which demands a much higher number of gamma-ray projections, the method by Azzi et al. is more appropriate for the industrial process, where the physical limitations and the cost of the process require more economical arrangements. The use of few projections and trajectories facilitates diagnosis of the dynamical flow process. This article proposes an improvement in the basis functions introduced by Azzi et al. through the use of quadratic B-spline functions. The use of B-spline functions makes possible a smoother surface reconstruction of the density distribution, since the functions are continuous and smooth. This work describes how the modeling can be done. (author)
Côrtes, A.M.A.
2016-10-01
The recently introduced divergence-conforming B-spline discretizations allow the construction of smooth discrete velocity-pressure pairs for viscous incompressible flows that are at the same time inf-sup stable and pointwise divergence-free. When applied to the discretized Stokes problem, these spaces generate a symmetric and indefinite saddle-point linear system. The iterative method of choice to solve such a system is the Generalized Minimum Residual Method. This method lacks robustness, and one remedy is to use preconditioners. For linear systems of saddle-point type, a large family of preconditioners can be obtained by using a block factorization of the system. In this paper, we show how the nesting of "black-box" solvers and preconditioners can be put together in a block triangular strategy to build a scalable block preconditioner for the Stokes system discretized by divergence-conforming B-splines. Besides the well-known cavity flow problem, we used as benchmarks flows defined on complex geometries: an eccentric annulus and a hollow torus with an eccentric annular cross-section.
Bayesian multiple target tracking
Streit, Roy L
2013-01-01
This second edition has undergone substantial revision from the 1999 first edition, recognizing that a lot has changed in the multiple target tracking field. One of the most dramatic changes is in the widespread use of particle filters to implement nonlinear, non-Gaussian Bayesian trackers. This book views multiple target tracking as a Bayesian inference problem. Within this framework it develops the theory of single target tracking, multiple target tracking, and likelihood ratio detection and tracking. In addition to providing a detailed description of a basic particle filter that implements
A B-spline Galerkin method for the Dirac equation
Froese Fischer, Charlotte; Zatsarinny, Oleg
2009-06-01
The B-spline Galerkin method is first investigated for the simple eigenvalue problem, y'' = -λ²y, which can also be written as a pair of first-order equations y' = λz, z' = -λy. Expanding both y(r) and z(r) in the B basis results in many spurious solutions, such as those observed for the Dirac equation. However, when y(r) is expanded in the B basis and z(r) in the dB/dr basis, solutions of the well-behaved second-order differential equation are obtained. From this analysis, we propose a stable method, the (B, B') basis, for the Dirac equation and evaluate its accuracy by comparing the computed and exact R-matrix for a wide range of nuclear charges Z and angular quantum numbers κ. When splines of the same order are used, many spurious solutions are found, whereas none are found for splines of different order. Excellent agreement is obtained for the R-matrix and energies of bound states for low values of Z. For high Z, accuracy requires the use of a grid with many points near the nucleus. We demonstrate the accuracy of the bound-state wavefunctions by comparing integrals arising in hyperfine interaction matrix elements with exact analytic expressions. We also show that the Thomas-Reiche-Kuhn sum rule is not a good measure of the quality of the solutions obtained by the B-spline Galerkin method, whereas the R-matrix is very sensitive to the appearance of pseudo-states.
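The flavor of the B-spline Galerkin approach for the well-behaved second-order problem can be sketched on y'' = -λ²y over [0, π] with Dirichlet boundary conditions, where the exact eigenvalues are λ = 1, 2, 3, ... This is a generic Galerkin sketch under those assumptions, not the authors' Dirac-equation code.

```python
import numpy as np
from scipy.interpolate import BSpline
from scipy.linalg import eigh

k, L, nseg = 3, np.pi, 20
breaks = np.linspace(0.0, L, nseg + 1)
t = np.concatenate(([0.0] * k, breaks, [L] * k))   # clamped cubic knot vector
nb = len(t) - k - 1

# each basis function B_i as a BSpline object, plus its derivative
basis = [BSpline(t, np.eye(nb)[i], k) for i in range(nb)]
dbasis = [b.derivative() for b in basis]

# assemble stiffness S_ij = int B_i' B_j' dx and mass M_ij = int B_i B_j dx
xg, wg = np.polynomial.legendre.leggauss(6)
S = np.zeros((nb, nb))
M = np.zeros((nb, nb))
for a, b in zip(breaks[:-1], breaks[1:]):
    x = 0.5 * (b - a) * xg + 0.5 * (a + b)         # Gauss points on [a, b]
    w = 0.5 * (b - a) * wg
    B = np.array([f(x) for f in basis])
    dB = np.array([f(x) for f in dbasis])
    M += (B * w) @ B.T
    S += (dB * w) @ dB.T

# Dirichlet conditions: drop the first and last basis functions
lam2 = eigh(S[1:-1, 1:-1], M[1:-1, 1:-1], eigvals_only=True)
print(np.sqrt(lam2[:3]))   # close to [1, 2, 3]
```

The weak form ∫y'v' = λ²∫yv turns the differential eigenproblem into the generalized matrix eigenproblem Sc = λ²Mc, which is what the abstract's pairing of expansion bases is designed to keep free of spurious solutions.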
Control theory and splines, applied to signature storage
Enqvist, Per
1994-01-01
In this report the problem we study is the interpolation of a set of points in the plane using control theory. We discover how different systems generate different kinds of splines, cubic and exponential, and investigate the effect that the different systems have on the tracking problem. In fact, we will see that the important parameters are the two eigenvalues of the control matrix.
Adaptive Surface Reconstruction Based on Tensor Product Algebraic Splines
Institute of Scientific and Technical Information of China (English)
Xinghua Song; Falai Chen
2009-01-01
Surface reconstruction from unorganized data points is a challenging problem in Computer Aided Design and Geometric Modeling. In this paper, we extend the mathematical model proposed by Jüttler and Felis (Adv. Comput. Math., 17 (2002), pp. 135-152) based on tensor product algebraic spline surfaces from fixed meshes to adaptive meshes. We start with a tensor product algebraic B-spline surface defined on an initial mesh to fit the given data based on an optimization approach. By measuring the fitting errors over each cell of the mesh, we recursively insert new knots in cells over which the errors are larger than some given threshold, and construct a new algebraic spline surface to better fit the given data locally. The algorithm terminates when the error over each cell is less than the threshold. We provide some examples to demonstrate our algorithm and compare it with Jüttler's method. Examples suggest that our method is effective and is able to produce reconstruction surfaces of high quality.
AMS subject classifications: 65D17
von Clarmann, T.
2014-09-01
The difference due to the content of a priori information between a constrained retrieval and the true atmospheric state is usually represented by a diagnostic quantity called smoothing error. In this paper it is shown that, regardless of the usefulness of the smoothing error as a diagnostic tool in its own right, the concept of the smoothing error as a component of the retrieval error budget is questionable because it is not compliant with Gaussian error propagation. The reason for this is that the smoothing error does not represent the expected deviation of the retrieval from the true state but the expected deviation of the retrieval from the atmospheric state sampled on an arbitrary grid, which is itself a smoothed representation of the true state; in other words, to characterize the full loss of information with respect to the true atmosphere, the effect of the representation of the atmospheric state on a finite grid also needs to be considered. The idea of a sufficiently fine sampling of this reference atmospheric state is problematic because atmospheric variability occurs on all scales, implying that there is no limit beyond which the sampling is fine enough. Even the idealization of infinitesimally fine sampling of the reference state does not help, because the smoothing error is applied to quantities which are only defined in a statistical sense, which implies that a finite volume of sufficient spatial extent is needed to meaningfully discuss temperature or concentration. Smoothing differences, however, which play a role when measurements are compared, are still a useful quantity if the covariance matrix involved has been evaluated on the comparison grid rather than resulting from interpolation and if the averaging kernel matrices have been evaluated on a grid fine enough to capture all atmospheric variations that the instruments are sensitive to. This is, under the assumptions stated, because the undefined component of the smoothing error, which is the
Bayesian methods for hackers probabilistic programming and Bayesian inference
Davidson-Pilon, Cameron
2016-01-01
Bayesian methods of inference are deeply natural and extremely powerful. However, most discussions of Bayesian inference rely on intensely complex mathematical analyses and artificial examples, making it inaccessible to anyone without a strong mathematical background. Now, though, Cameron Davidson-Pilon introduces Bayesian inference from a computational perspective, bridging theory to practice–freeing you to get results using computing power. Bayesian Methods for Hackers illuminates Bayesian inference through probabilistic programming with the powerful PyMC language and the closely related Python tools NumPy, SciPy, and Matplotlib. Using this approach, you can reach effective solutions in small increments, without extensive mathematical intervention. Davidson-Pilon begins by introducing the concepts underlying Bayesian inference, comparing it with other techniques and guiding you through building and training your first Bayesian model. Next, he introduces PyMC through a series of detailed examples a...
Hodograph computation and bound estimation for rational B-spline curves
Institute of Scientific and Technical Information of China (English)
无
2007-01-01
It is necessary to compute the derivative and to estimate the bound of rational B-spline curves in design systems, which has not been studied to date. To improve the functionality of computer aided design (CAD) systems, and to enhance the efficiency of various algorithms for rational B-spline curves, the representation of the scaled hodograph and a bound on the derivative magnitude of uniform planar rational B-spline curves are derived by applying the Dir function, which indicates the direction of the Cartesian vector between homogeneous points, discrete B-spline theory, and the formula for translating a product of B-spline functions into a summation. As an application of the above result, an upper bound on the parametric distance between any two points on a uniform planar rational B-spline curve is further presented.
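For the simpler non-rational (polynomial) case, the hodograph and a derivative-magnitude bound can be sketched with SciPy. The knots and control points below are made up, and the bound used is the classical convex-hull bound k·max‖P_{i+1}−P_i‖/(t_{i+k+1}−t_{i+1}), not the paper's rational-curve result.

```python
import numpy as np
from scipy.interpolate import BSpline

k = 3                                                   # cubic curve
t = np.array([0, 0, 0, 0, 1, 2, 3, 3, 3, 3], float)     # clamped knots
P = np.array([[0, 0], [1, 2], [2, -1], [3, 3], [4, 0], [5, 1]], float)

curve = BSpline(t, P, k)
hodo = curve.derivative()                # the hodograph C'(u)

# classical bound: ||C'(u)|| <= k * max_i ||P_{i+1}-P_i|| / (t_{i+k+1}-t_{i+1})
d = np.linalg.norm(np.diff(P, axis=0), axis=1)
den = t[k + 1:k + 1 + len(d)] - t[1:1 + len(d)]
bound = k * np.max(d / den)

u = np.linspace(0.0, 3.0, 400)
speed = np.linalg.norm(hodo(u), axis=1)
print(speed.max(), bound)                # the sampled speed never exceeds the bound
```

The bound follows because the hodograph is itself a B-spline whose control points are k(P_{i+1}−P_i)/(t_{i+k+1}−t_{i+1}), and a B-spline stays inside the convex hull of its control points.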
DEFF Research Database (Denmark)
Jensen, Finn Verner; Nielsen, Thomas Dyhre
2016-01-01
and edges. The nodes represent variables, which may be either discrete or continuous. An edge between two nodes A and B indicates a direct influence between the state of A and the state of B, which in some domains can also be interpreted as a causal relation. The widespread use of Bayesian networks...
DEFF Research Database (Denmark)
Antoniou, Constantinos; Harrison, Glenn W.; Lau, Morten I.;
2015-01-01
A large literature suggests that many individuals do not apply Bayes’ Rule when making decisions that depend on them correctly pooling prior information and sample data. We replicate and extend a classic experimental study of Bayesian updating from psychology, employing the methods of experimental...
Triangular and quadrilateral surface construction using C-B spline
Institute of Scientific and Technical Information of China (English)
李薇; 吴卓奇; 荻原一郎
2012-01-01
This paper provides a different solution for representing basic smooth elements, such as triangular and quadrilateral surface patches, from a mesh using C-B spline curves. C-B spline curves are built on the basis {sin t, cos t, t, 1}, and they overcome several shortcomings of the B-spline and non-uniform rational B-spline (NURBS) models: those models must add unnecessary control points to satisfy the data-grid topology, their derivatives and integrals are complex and tedious, their degrees can be too high, and their continuity conditions are difficult to analyze. How to extend C-B spline curves to surfaces is therefore an important problem. In this paper, interpolation operators are constructed by the side-vertex method and a convex combination of these operators is formed. The C-B spline curves are thus extended to triangular and quadrilateral surface patches, which can be used for surface reconstruction from scattered data in the reverse engineering of CAD.
Souto, Nelson; Thuillier, Sandrine; Andrade-Campos, A.
2016-10-01
Nowadays, full-field measurement methods are largely used to acquire the strain field developed by heterogeneous mechanical tests. Recent material parameters identification strategies based on a single heterogeneous test have been proposed considering that an inhomogeneous strain field can lead to a more complete mechanical characterization of the sheet metals. The purpose of this work is the design of a heterogeneous test promoting an enhanced mechanical behavior characterization of thin metallic sheets, under several strain paths and strain amplitudes. To achieve this goal, a design optimization strategy finding the appropriate specimen shape of the heterogeneous test by using either B-Splines or cubic splines was developed. The influence of using approximation or interpolation curves, respectively, was investigated in order to determine the most effective approach for achieving a better shape design. The optimization process is guided by an indicator criterion which evaluates, quantitatively, the strain field information provided by the mechanical test. Moreover, the design of the heterogeneous test is based on the resemblance with the experimental reality, since a rigid tool leading to uniaxial loading path is used for applying the displacement in a similar way as universal standard testing machines. The results obtained reveal that the optimization strategy using B-Splines curve approximation led to a heterogeneous test providing larger strain field information for characterizing the mechanical behavior of sheet metals.
Design Evaluation of Wind Turbine Spline Couplings Using an Analytical Model: Preprint
Energy Technology Data Exchange (ETDEWEB)
Guo, Y.; Keller, J.; Wallen, R.; Errichello, R.; Halse, C.; Lambert, S.
2015-02-01
Articulated splines are commonly used in the planetary stage of wind turbine gearboxes for transmitting the driving torque and improving load sharing. Direct measurement of spline loads and performance is extremely challenging because of limited accessibility. This paper presents an analytical model for the analysis of articulated spline coupling designs. For a given torque and shaft misalignment, this analytical model quickly yields insights into relationships between the spline design parameters and resulting loads; bending, contact, and shear stresses; and safety factors considering various heat treatment methods. Comparisons of this analytical model against previously published computational approaches are also presented.
A fast direct point-by-point generating algorithm for B Spline curves and surfaces
Institute of Scientific and Technical Information of China (English)
LI Zhong; HAN Dan-fu
2005-01-01
Traditional generating algorithms for B-spline curves and surfaces require approximation methods in which choosing the parameter increment that gives the best approximation is problematic, or they take a pixel-based approach that needs a matrix transformation from the B-spline representation to Bézier form. Here, a fast, direct point-by-point generating algorithm for B-spline curves and surfaces is presented. The algorithm needs no matrix transformation, can be used for uniform or non-uniform B-spline curves and surfaces of any degree, and has high generating speed and good rendering accuracy.
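A well-known direct point-by-point evaluator in this spirit is de Boor's algorithm, which computes a curve point straight from the control points and knots with no matrix transformation. The sketch below is that standard algorithm, not the paper's specific method, and it checks itself against SciPy.

```python
import numpy as np
from scipy.interpolate import BSpline

def de_boor(u, t, c, k):
    """Evaluate a degree-k B-spline with knots t and coefficients c at u."""
    # knot span index i with t[i] <= u < t[i+1], clamped to the valid range
    i = int(np.clip(np.searchsorted(t, u, side='right') - 1, k, len(t) - k - 2))
    d = [float(c[j]) for j in range(i - k, i + 1)]       # local coefficients
    for r in range(1, k + 1):                            # triangular scheme
        for j in range(k, r - 1, -1):
            a = (u - t[j + i - k]) / (t[j + 1 + i - r] - t[j + i - k])
            d[j] = (1.0 - a) * d[j - 1] + a * d[j]
    return d[k]

t = np.array([0, 0, 0, 0, 1, 2, 3, 3, 3, 3], float)      # clamped cubic knots
c = np.array([1.0, -2.0, 3.0, 0.5, 2.0, -1.0])
ref = BSpline(t, c, 3)
for u in (0.25, 1.0, 1.7, 2.9):
    assert abs(de_boor(u, t, c, 3) - ref(u)) < 1e-12
```

Each evaluation touches only the k+1 control points active on the current knot span, which is what makes point-by-point generation cheap.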
Trivariate Polynomial Natural Spline for 3D Scattered Data Hermite Interpolation
Institute of Scientific and Technical Information of China (English)
XU YING-XIANG; GUAN L(U)-TAI; XU WEI-ZHI
2012-01-01
Consider a kind of Hermite interpolation for scattered 3D data by a trivariate polynomial natural spline, such that the objective energy functional (with natural boundary conditions) is minimal. By spline function methods in Hilbert space and the variational theory of splines, the character of the interpolation solution and how to construct it are studied. One easily finds that the interpolation solution is a trivariate polynomial natural spline; its expression is simple and the coefficients can be determined by a linear system. Some numerical examples are presented to demonstrate the methods.
Automatic Shape Control of Triangular B-Splines of Arbitrary Topology
Institute of Scientific and Technical Information of China (English)
Ying He; Xian-Feng Gu; Hong Qin
2006-01-01
Triangular B-splines are powerful and flexible in modeling a broader class of geometric objects defined over arbitrary, non-rectangular domains. Despite their great potential and advantages in theory, practical techniques and computational tools with triangular B-splines are less-developed. This is mainly because users have to handle a large number of irregularly distributed control points over arbitrary triangulation. In this paper, an automatic and efficient method is proposed to generate visually pleasing, high-quality triangular B-splines of arbitrary topology. The experimental results on several real datasets show that triangular B-splines are powerful and effective in both theory and practice.
Stacy, J. E.
1984-01-01
Asymmetric spline surfaces appear useful for the design of high-quality general optical systems (systems without symmetries). A spline influence function defined as the actual surface resulting from a simple perturbation in the spline definition array shows that a subarea is independent of others four or more points away. Optimization methods presented in this paper are used to vary a reflective spline surface near the focal plane of a decentered Schmidt-Cassegrain to reduce rms spot radii by a factor of 3 across the field.
Hardy, David J; Wolff, Matthew A; Xia, Jianlin; Schulten, Klaus; Skeel, Robert D
2016-03-21
The multilevel summation method for calculating electrostatic interactions in molecular dynamics simulations constructs an approximation to a pairwise interaction kernel and its gradient, which can be evaluated at a cost that scales linearly with the number of atoms. The method smoothly splits the kernel into a sum of partial kernels of increasing range and decreasing variability with the longer-range parts interpolated from grids of increasing coarseness. Multilevel summation is especially appropriate in the context of dynamics and minimization, because it can produce continuous gradients. This article explores the use of B-splines to increase the accuracy of the multilevel summation method (for nonperiodic boundaries) without incurring additional computation other than a preprocessing step (whose cost also scales linearly). To obtain accurate results efficiently involves technical difficulties, which are overcome by a novel preprocessing algorithm. Numerical experiments demonstrate that the resulting method offers substantial improvements in accuracy and that its performance is competitive with an implementation of the fast multipole method in general and markedly better for Hamiltonian formulations of molecular dynamics. The improvement is great enough to establish multilevel summation as a serious contender for calculating pairwise interactions in molecular dynamics simulations. In particular, the method appears to be uniquely capable for molecular dynamics in two situations, nonperiodic boundary conditions and massively parallel computation, where the fast Fourier transform employed in the particle-mesh Ewald method falls short.
A new method for automatically constructing convexity-preserving interpolatory splines
Institute of Scientific and Technical Information of China (English)
PAN Yongjuan; WANG Guojin
2004-01-01
Constructing a convexity-preserving interpolating curve according to given planar data points is a problem to be solved in computer aided geometric design (CAGD). So far, almost all methods must solve a system of equations or resort to a complicated iterative process, and most of them can only generate function-form convexity-preserving interpolating curves, which are incompatible with the parametric curves commonly used in CAGD systems. In order to overcome these drawbacks, this paper proposes a new method that can automatically generate parametric convexity-preserving polynomial interpolating curves without solving any system of equations or performing any iterative computation. The main idea is first to construct a family of interpolating spline curves with the shape parameter a as its family parameter; then, using the positivity conditions of Bernstein polynomials, to find a range of values of the shape parameter a, for the two cases of globally convex data points and piecewise convex data points, that makes the corresponding interpolating curves convexity-preserving and C2 (or G1) continuous. The method is simple and convenient, and the resulting interpolating curves possess a smooth distribution of curvature. Numerical examples illustrate the correctness and validity of the theoretical reasoning.
Online estimation of B-spline mixture models from TOF-PET list-mode data
Energy Technology Data Exchange (ETDEWEB)
Schretter, Colas; Kobbelt, Leif [RWTH Aachen Univ. (Germany). Computer Graphics Group; Sun, Jianyong [Nottingham Univ. (United Kingdom). Intelligent Modelling and Analysis Research Group
2011-07-01
In emission tomography, images are usually represented by regular grids of voxels or overlapping smooth image elements (blobs). A few other image models have been proposed, such as tetrahedral meshes or point clouds adapted to an anatomical image. This work proposes a practical sparse and continuous image model inspired by the field of parametric density estimation for Gaussian mixture models. The position, size, aspect ratio, and orientation of each image element are optimized, as well as its weight, with a very fast online estimation method. Furthermore, the number of mixture components, and hence the image resolution, is locally adapted according to the available data. The system model is represented in the same basis as the image elements and captures time-of-flight and positron range effects in an exact way. Computations use apodized B-spline approximations of Gaussians and simple closed-form analytical expressions without any sampling or interpolation. In consequence, the reconstructed image never suffers from spurious aliasing artifacts. Noiseless images of the XCAT brain phantom were reconstructed from simulated data. (orig.)
Some extremal properties of multivariate polynomial splines in the metric Lp (Rd )
Institute of Scientific and Technical Information of China (English)
LIU, Yongping
2001-01-01
[1] Li Chun, Infinite dimensional widths of function classes, J. Approx. Theory, 1992, 69(1): 15-34.
[2] Luo Junbo, Liu Yongping, Average width and optimal recovery of some anisotropic classes of smooth functions defined on the Euclidean space Rd, Northeast Math. J., 1999, 15(4): 423-432.
[3] Schoenberg, I. J., Cardinal interpolation and spline functions II. Interpolation of data of power growth, J. Approx. Theory, 1972, 6(4): 404-420.
[4] Fang Gensun, Liu Yongping, Average width and optimal interpolation of the Sobolev-Wiener class Wpd (B) in the metric Lq(Y), J. Approx. Theory, 1993, 74(3): 335-352.
[5] Pinkus, A., n-Widths in Approximation Theory, New York: Springer-Verlag, 1985.
[6] Fournier, J. J. F., Stewart, J., Amalgams of Lp and lq, Bull. Amer. Math. Soc., 1985, 13(1): 1-12.
Spline-based high-accuracy piecewise-polynomial phase-to-sinusoid amplitude converters.
Petrinović, Davor; Brezović, Marko
2011-04-01
We propose a method for direct digital frequency synthesis (DDS) using a cubic spline piecewise-polynomial model for a phase-to-sinusoid amplitude converter (PSAC). This method offers maximum smoothness of the output signal. Closed-form expressions for the cubic polynomial coefficients are derived in the spectral domain and the performance analysis of the model is given in the time and frequency domains. We derive the closed-form performance bounds of such DDS using conventional metrics: rms and maximum absolute errors (MAE) and maximum spurious free dynamic range (SFDR) measured in the discrete time domain. The main advantages of the proposed PSAC are its simplicity, analytical tractability, and inherent numerical stability for high table resolutions. Detailed guidelines for a fixed-point implementation are given, based on the algebraic analysis of all quantization effects. The results are verified on 81 PSAC configurations with the output resolutions from 5 to 41 bits by using a bit-exact simulation. The VHDL implementation of a high-accuracy DDS based on the proposed PSAC with 28-bit input phase word and 32-bit output value achieves SFDR of its digital output signal between 180 and 207 dB, with a signal-to-noise ratio of 192 dB. Its implementation requires only one 18 kB block RAM and three 18-bit embedded multipliers in a typical field-programmable gate array (FPGA) device.
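The accuracy scaling such a spline PSAC exploits can be illustrated with a plain interpolating periodic cubic spline through a 2^8-entry sine table. This is an illustrative stand-in only: the paper derives its polynomial coefficients in the spectral domain rather than by interpolation.

```python
import numpy as np
from scipy.interpolate import CubicSpline

bits = 8                                       # 2**bits segments per period
x = np.linspace(0.0, 2.0 * np.pi, 2**bits + 1)
y = np.sin(x)
y[-1] = y[0]                                   # enforce exact periodicity
psac = CubicSpline(x, y, bc_type='periodic')   # piecewise-cubic sine table

phase = np.linspace(0.0, 2.0 * np.pi, 100001)
mae = np.max(np.abs(psac(phase) - np.sin(phase)))
print(mae)        # maximum absolute error, roughly O(h**4) in the step h
```

Doubling the table size cuts the maximum error by about a factor of 16, which is why a modest table resolution already supports very high spurious-free dynamic range.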
Revealed smooth nontransitive preferences
DEFF Research Database (Denmark)
Keiding, Hans; Tvede, Mich
2013-01-01
consumption bundle, all strictly preferred bundles are more expensive than the observed bundle. Our main result is that data sets can be rationalized by a smooth nontransitive preference relation if and only if prices can be normalized such that the law of demand is satisfied. Market data sets consist of finitely many observations of price vectors, lists of individual incomes and aggregate demands. We apply our main result to characterize market data sets consistent with equilibrium behaviour of pure-exchange economies with smooth nontransitive consumers.
Bayesian cloud detection for MERIS, AATSR, and their combination
Directory of Open Access Journals (Sweden)
A. Hollstein
2014-11-01
A broad range of Bayesian cloud detection schemes is applied to measurements from the Medium Resolution Imaging Spectrometer (MERIS), the Advanced Along-Track Scanning Radiometer (AATSR), and their combination. The cloud masks were designed to be numerically efficient and suited to the processing of large amounts of data. Results from the classical and naive approaches to Bayesian cloud masking are discussed for MERIS and AATSR as well as for their combination. A sensitivity study on the resolution of multidimensional histograms, which were post-processed by Gaussian smoothing, shows how theoretically insufficient amounts of truth data can be used to set up accurate classical Bayesian cloud masks. Sets of exploited features from single and derived channels are numerically optimized, and results for naive and classical Bayesian cloud masks are presented. The application of the Bayesian approach is discussed in terms of reproducing existing algorithms, enhancing existing algorithms, increasing the robustness of existing algorithms, and setting up new classification schemes based on manually classified scenes.
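Per feature, the naive Bayesian scheme reduces to class-conditional histograms combined through Bayes' rule. A one-feature toy version with invented synthetic "truth" data, including the Gaussian histogram smoothing mentioned in the abstract, might look like:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

rng = np.random.default_rng(0)
# synthetic truth data: a single reflectance-like feature per class (invented)
clear = rng.normal(0.1, 0.05, 5000)
cloud = rng.normal(0.6, 0.15, 5000)

bins = np.linspace(-0.5, 1.5, 41)
h_clear, _ = np.histogram(clear, bins, density=True)
h_cloud, _ = np.histogram(cloud, bins, density=True)

# post-process the (here one-dimensional) histograms by Gaussian smoothing
h_clear = gaussian_filter1d(h_clear, sigma=1.0)
h_cloud = gaussian_filter1d(h_cloud, sigma=1.0)

def p_cloud(x, prior=0.5):
    """Posterior cloud probability for feature value x via Bayes' rule."""
    i = np.clip(np.digitize(x, bins) - 1, 0, len(h_cloud) - 1)
    num = h_cloud[i] * prior
    den = num + h_clear[i] * (1.0 - prior)
    return num / (den + 1e-12)

print(p_cloud(0.65), p_cloud(0.05))   # near 1 for cloudy, near 0 for clear
```

Smoothing the histograms is what lets comparatively small amounts of truth data produce stable likelihoods, as the sensitivity study in the abstract explores.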
Directory of Open Access Journals (Sweden)
Yue Fan
2017-01-01
Gene regulatory networks (GRNs) play an important role in cellular systems and are important for understanding biological processes. Many algorithms have been developed to infer GRNs. However, most algorithms only pay attention to the gene expression data and do not consider the topology information in their inference process, while incorporating this information can partially compensate for the lack of reliable expression data. Here we develop a Bayesian group lasso with spike and slab priors to perform gene selection and estimation for nonparametric models. B-spline basis functions are used to capture the nonlinear relationships flexibly, and penalties are used to avoid overfitting. Further, we incorporate the topology information into the Bayesian method as a prior. We present the application of our method on the DREAM3 and DREAM4 datasets and two real biological datasets. The results show that our method performs better than existing methods and that the topology information prior can improve the result.
The Research of Obstacle-avoiding Problem based on Minimum Variation B-spline%基于最小变量的B-样条避障问题研究
Institute of Scientific and Technical Information of China (English)
彭辉; 曾碧
2011-01-01
To solve the curve-optimization problem that mobile robots face during obstacle avoidance, an obstacle-avoidance method based on minimum-variation B-splines is proposed. The method is derived from a mathematical model, its advantages over other B-spline methods are pointed out, and the method is optimized, with a corresponding iterative optimization algorithm given. The study shows that the minimum-variation B-spline problem is a linearly constrained optimization problem over curves defined by B-spline functions only, and that a B-spline curve with minimum variation has better smoothness than one defined by B-spline functions alone.
Bejancu, Aurelian
2006-12-01
This paper considers the problem of interpolation on a semi-plane grid from a space of box-splines on the three-direction mesh. Building on a new treatment of univariate semi-cardinal interpolation for natural cubic splines, the solution is obtained as a Lagrange series with suitable localization and polynomial reproduction properties. It is proved that the extension of the natural boundary conditions to box-spline semi-cardinal interpolation attains half of the approximation order of the cardinal case.
Smoothed Particle Hydrodynamic Simulator
Energy Technology Data Exchange (ETDEWEB)
2016-10-05
This code is a highly modular framework for developing smoothed particle hydrodynamic (SPH) simulations running on parallel platforms. The compartmentalization of the code allows for rapid development of new SPH applications and modifications of existing algorithms. The compartmentalization also allows changes in one part of the code used by many applications to instantly be made available to all applications.
NUAH T-splines of Odd Bi-degree%双奇次NUAH T样条
Institute of Scientific and Technical Information of China (English)
段小娟; 汪国昭
2015-01-01
Since T-splines cannot represent hyperbolic surfaces exactly, this paper presents a new kind of spline surface, the non-uniform algebraic hyperbolic T-spline surface (NUAH T-spline) of odd bi-degree, and discusses its refinement algorithm and the linear independence of its blending functions. NUAH T-splines of odd bi-degree are defined by applying the T-spline framework to non-uniform algebraic hyperbolic B-spline (NUAH B-spline) surfaces. Based on the knot insertion formula for NUAH B-splines, a local refinement algorithm for NUAH T-splines of odd bi-degree is presented. The paper proves a necessary and sufficient condition for the linear independence of the blending functions of a NUAH T-spline: the NUAH T-spline-to-NUAH B-spline transformation matrix has full rank. Finally, examples verify the effectiveness of the surface construction and the local refinement algorithm.
Probability and Bayesian statistics
1987-01-01
This book contains selected and refereed contributions to the "International Symposium on Probability and Bayesian Statistics", which was organized to celebrate the 80th birthday of Professor Bruno de Finetti at his birthplace Innsbruck in Austria. Since Professor de Finetti died in 1985, the symposium was dedicated to the memory of Bruno de Finetti and took place at Igls near Innsbruck from 23 to 26 September 1986. Some of the papers are published especially because of their relationship to Bruno de Finetti's scientific work. The evolution of stochastics shows the growing importance of probability as a coherent assessment of numerical values as degrees of belief in certain events. This is the basis for Bayesian inference in the sense of modern statistics. The contributions in this volume cover a broad spectrum ranging from foundations of probability, across psychological aspects of formulating subjective probability statements and abstract measure-theoretical considerations, to contributions to theoretical statistics an...
DEFF Research Database (Denmark)
Mørup, Morten; Schmidt, Mikkel N
2012-01-01
Many networks of scientific interest naturally decompose into clusters or communities with comparatively fewer external than internal links; however, current Bayesian models of network communities do not exert this intuitive notion of communities. We formulate a nonparametric Bayesian model for community detection consistent with an intuitive definition of communities and present a Markov chain Monte Carlo procedure for inferring the community structure. A Matlab toolbox with the proposed inference procedure is available for download. On synthetic and real networks, our model detects communities consistent with ground truth, and on real networks, it outperforms existing approaches in predicting missing links. This suggests that community structure is an important structural property of networks that should be explicitly modeled.
Adaptive Predistortion Using Cubic Spline Nonlinearity Based Hammerstein Modeling
Wu, Xiaofang; Shi, Jianghong
In this paper, a new Hammerstein predistorter model for power amplifier (PA) linearization is proposed. The key feature of the model is that cubic splines, instead of conventional high-order polynomials, are utilized as the static nonlinearities, because splines can represent hard nonlinearities accurately while circumventing the numerical instability problem. Furthermore, according to the amplifier's AM/AM and AM/PM characteristics, real-valued cubic spline functions are utilized to compensate the nonlinear distortion of the amplifier, and the following finite impulse response (FIR) filters are utilized to eliminate the memory effects of the amplifier. In addition, the identification algorithm of the Hammerstein predistorter is discussed. The predistorter is implemented on the indirect learning architecture, and the separable nonlinear least squares (SNLS) Levenberg-Marquardt algorithm is adopted because the separation method reduces the dimension of the nonlinear search space and thus greatly simplifies the identification procedure. However, the convergence performance of the iterative SNLS algorithm is sensitive to the initial estimate, so an effective normalization strategy is presented to solve this problem. Simulation experiments were carried out on a single-carrier WCDMA signal. Results show that, compared to conventional polynomial predistorters, the proposed Hammerstein predistorter has higher linearization performance when the PA is near saturation and comparable linearization performance when the PA is mildly nonlinear. Furthermore, the proposed predistorter is numerically more stable in all input back-off cases. The results also demonstrate the validity of the convergence scheme.
Bayesian Independent Component Analysis
DEFF Research Database (Denmark)
Winther, Ole; Petersen, Kaare Brandt
2007-01-01
In this paper we present an empirical Bayesian framework for independent component analysis. The framework provides estimates of the sources, the mixing matrix and the noise parameters, and is flexible with respect to choice of source prior and the number of sources and sensors. Inside the engine ... in a Matlab toolbox, is demonstrated for non-negative decompositions and compared with non-negative matrix factorization.
Local Refinement of Analysis-Suitable T-splines
2011-03-01
analysis focused on establishing the behavior of the smooth NURBS basis in analysis. It was demonstrated that smoothness offers important ... computational advantages over standard finite elements [3, 4]. Areas of application of NURBS-based isogeometric analysis include turbulence [5, 6, 7, 8], fluid ... technologies and isogeometric analysis [30, 31, 32, 33, 34, 35, 36, 37]. While smoothness is an important consideration, NURBS are severely limited by their
The Adaptive LASSO Spline Estimation of Single-Index Model
Institute of Scientific and Technical Information of China (English)
LU Yiqiang; ZHANG Riquan; HU Bin
2016-01-01
In this paper, based on spline approximation, the authors propose a unified variable selection approach for the single-index model via an adaptive L1 penalty. The calculation methods of the proposed estimators are given on the basis of the known LARS algorithm. Under some regularity conditions, the authors demonstrate the asymptotic properties of the proposed estimators and the oracle properties of adaptive LASSO (aLASSO) variable selection. Simulations are used to investigate the performance of the proposed estimator and illustrate that it is effective for simultaneous variable selection as well as estimation of single-index models.
Gravity Aided Navigation Precise Algorithm with Gauss Spline Interpolation
Directory of Open Access Journals (Sweden)
WEN Chaobin
2015-01-01
Full Text Available The gravity compensation error equation should be solved thoroughly before studying high-precision gravity-aided navigation. A gravity-aided navigation model-construction algorithm is proposed, based on approximating the local grid gravity-anomaly field with 2D Gauss spline interpolation. The gravity disturbance vector, the standard gravity value error, and the Eotvos effect are all compensated in this precision model. The experiment shows that positioning accuracy is roughly doubled, attitude and velocity accuracy is improved by a factor of two to three, and the positional error is kept within 100-200 m.
Bayesian theory and applications
Dellaportas, Petros; Polson, Nicholas G; Stephens, David A
2013-01-01
The development of hierarchical models and Markov chain Monte Carlo (MCMC) techniques forms one of the most profound advances in Bayesian analysis since the 1970s and provides the basis for advances in virtually all areas of applied and theoretical Bayesian statistics. This volume guides the reader along a statistical journey that begins with the basic structure of Bayesian theory, and then provides details on most of the past and present advances in this field. The book has a unique format. There is an explanatory chapter devoted to each conceptual advance followed by journal-style chapters that provide applications or further advances on the concept. Thus, the volume is both a textbook and a compendium of papers covering a vast range of topics. It is appropriate for a well-informed novice interested in understanding the basic approach, methods and recent applications. Because of its advanced chapters and recent work, it is also appropriate for a more mature reader interested in recent applications and devel...
Fitting Derivative Function Based on Penalized Regression Spline%基于惩罚回归样条的函数导数拟合
Institute of Scientific and Technical Information of China (English)
关海洋; 唐燕武; 杨联强
2015-01-01
When the functional form is unknown but discrete data points with errors are given, a penalized regression spline with a pth-degree truncated power basis is fitted to the data points, and the first derivative of the function is computed from the fitted curve. The method combines classical ordinary least squares with penalized spline smoothing, so both goodness of fit and the smoothness of the fitted curve are taken into account. Simulations and a practical application show that the method performs well.
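The construction described above (ordinary least squares on a truncated power basis, with a ridge penalty on the knot coefficients only, then an analytic first derivative) can be sketched as follows. This is an illustrative sketch, not the authors' code: the knot positions, penalty weight `lam`, and test function are assumptions.

```python
import numpy as np

def truncated_power_basis(x, knots, p=2):
    """Design matrix [1, x, ..., x^p, (x-k1)_+^p, (x-k2)_+^p, ...]."""
    cols = [x**j for j in range(p + 1)]
    cols += [np.maximum(0.0, x - k)**p for k in knots]
    return np.column_stack(cols)

def fit_penalized_spline(x, y, knots, p=2, lam=1e-3):
    """Least squares with a ridge penalty on the knot coefficients only."""
    B = truncated_power_basis(x, knots, p)
    D = np.zeros(B.shape[1])
    D[p + 1:] = 1.0                      # penalty hits the knot terms only
    return np.linalg.solve(B.T @ B + lam * np.diag(D), B.T @ y)

def spline_derivative(x, coef, knots, p=2):
    """Analytic first derivative of the fitted spline."""
    d = np.zeros_like(x, dtype=float)
    for j in range(1, p + 1):
        d += j * coef[j] * x**(j - 1)
    for i, k in enumerate(knots):
        d += p * coef[p + 1 + i] * np.maximum(0.0, x - k)**(p - 1)
    return d

x = np.linspace(0.0, 2.0, 50)
y = x**2                                  # noise-free test function
knots = [0.5, 1.0, 1.5]
coef = fit_penalized_spline(x, y, knots)
d1 = spline_derivative(np.array([1.0]), coef, knots)[0]  # true value: 2
```

Because the test function lies in the unpenalized polynomial part of the basis, the fitted derivative at x = 1 recovers the true value 2 almost exactly.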
A class of compactly supported symmetric/antisymmetric B-spline wavelets
Institute of Scientific and Technical Information of China (English)
YANG Shouzhi; LOU Zengjian
2005-01-01
An algorithm for constructing a class of compactly supported symmetric/antisymmetric B-spline wavelets is presented. For any mth-order and kth-order cardinal B-splines N_m(x), N_k(x), if m + k is an even integer, the corresponding mth-order B-spline wavelets ψ_m^k(x) can be constructed, which are compactly supported and symmetric/antisymmetric. In addition, if ψ_m^k(x), m > 1, is the mth-order B-spline wavelet associated with the two spline functions N_m(x) and N_k(x), then its derivative (ψ_m^k)'(x) is the (m-1)th-order B-spline wavelet associated with N_{m-1}(x) and N_{k+1}(x), i.e., (ψ_m^k)'(x) = 2^2 ψ_{m-1}^{k+1}(x). Similarly, ∫_0^x ψ_m^k(t) dt, k > 1, is the (m+1)th-order B-spline wavelet associated with N_{m+1}(x) and N_{k-1}(x). Using this method, Chui and Wang's spline wavelets are recovered. Since these B-spline wavelets are symmetric/antisymmetric, their linear-phase property is assured. Several examples are also presented.
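As background for the cardinal B-splines N_m(x) that the construction above builds on, here is a minimal sketch (illustrative only, not the wavelet construction itself) of evaluating N_m on knots 0, 1, ..., m via the Cox-de Boor recursion, together with the partition-of-unity property of its integer shifts:

```python
def cardinal_bspline(m, x):
    """Evaluate the order-m cardinal B-spline N_m on knots 0,1,...,m
    via the Cox-de Boor recursion."""
    if m == 1:
        return 1.0 if 0.0 <= x < 1.0 else 0.0
    return (x * cardinal_bspline(m - 1, x)
            + (m - x) * cardinal_bspline(m - 1, x - 1.0)) / (m - 1)

# Partition of unity: integer shifts of N_m sum to 1 at any point.
total = sum(cardinal_bspline(4, 2.3 - k) for k in range(-1, 3))
```

`total` comes out as 1 to machine precision, which is a standard sanity check for a B-spline evaluator.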
On O(1) Gaussian filtering using box splines
Chaudhury, Kunal N.
2011-01-01
It is well-known that box filters can be efficiently computed using pre-integrations and local finite-differences [Crow1984,Heckbert1986,Viola2001]. Several image processing algorithms based on this idea have been proposed in the literature. By generalizing this idea and by combining it with a non-standard variant of the Central Limit Theorem, a constant-time or O(1) algorithm was proposed in [Chaudhury2010] that allowed one to perform space-variant filtering using Gaussian-like kernels. The algorithm was based on the observation that both isotropic and anisotropic Gaussians could be approximated using certain bivariate splines called box splines. The attractive feature of the algorithm was that it allowed one to continuously control the shape and size of the filter, and that it had a fixed computational cost per pixel, irrespective of the size of the filter. The algorithm, however, had the drawback that it offered only a limited control on the covariance and accuracy of the Gaussian approximation. In this w...
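The pre-integration idea behind constant-time box filtering, and the central-limit observation that iterated box filters approximate a Gaussian, can be sketched in 1D as follows. This is a simplified illustration, not the paper's space-variant box-spline algorithm; the function names and the edge-replication choice are assumptions.

```python
import numpy as np

def box_filter(x, r):
    """Running mean of window 2r+1, O(1) per sample via a prefix sum,
    with edge replication so the output has the same length as x."""
    xp = np.concatenate([np.full(r, x[0]), x, np.full(r, x[-1])])
    c = np.concatenate([[0.0], np.cumsum(xp)])
    n = len(x)
    return (c[2 * r + 1: 2 * r + 1 + n] - c[:n]) / (2 * r + 1)

def gaussian_like(x, r, passes=3):
    """Iterated box filters approach a Gaussian (central limit theorem),
    at a fixed cost per sample regardless of the effective kernel width."""
    for _ in range(passes):
        x = box_filter(x, r)
    return x

# Impulse response: after three passes the kernel is bell-shaped.
impulse = np.zeros(31)
impulse[15] = 1.0
out = gaussian_like(impulse, r=2)
```

Each pass costs one cumulative sum and one subtraction per sample, so widening the filter (larger `r`) does not change the per-sample cost.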
Indian Academy of Sciences (India)
Benedictus Margaux
2015-05-01
Let $S$ be a scheme. Assume that we are given an action of the one-dimensional split torus $\mathbb{G}_{m,S}$ on a smooth affine $S$-scheme $\mathfrak{X}$. We consider the limit (also called attractor) subfunctor $\mathfrak{X}_{}$ consisting of points whose orbit under the given action admits a limit at 0. We show that $\mathfrak{X}_{}$ is representable by a smooth closed subscheme of $\mathfrak{X}$. This result generalizes a theorem of Conrad et al. (Pseudo-reductive groups (2010) Cambridge Univ. Press), where the case when $\mathfrak{X}$ is a smooth affine group and $\mathbb{G}_{m,S}$ acts by group automorphisms of $\mathfrak{X}$ is considered. It also occurs as a special case of a recent result by Drinfeld on the action of $\mathbb{G}_{m,S}$ on algebraic spaces (Proposition 1.4.20 of Drinfeld V, On algebraic spaces with an action of $\mathbb{G}_{m}$, preprint 2013) in the case when $S$ is of finite type over a field.
Smooth Neighborhood Structures in a Smooth Topological Spaces
Directory of Open Access Journals (Sweden)
A. A. Ramadan
2010-01-01
Full Text Available Various concepts related to smooth topological spaces have been introduced, and the relations among them have been studied by several authors (Chattopadhyay, Ramadan, etc.). In this study, we present the notions of three sorts of neighborhood structures of a smooth topological space and give some of their properties, extending results by Ying to smooth topological spaces.
A Fast Variational Method for the Construction of Resolution Adaptive C-Smooth Molecular Surfaces.
Bajaj, Chandrajit L; Xu, Guoliang; Zhang, Qin
2009-05-01
We present a variational approach to smooth molecular (protein, nucleic acid) surface construction, starting from atomic coordinates as available from the protein and nucleic-acid data banks. Molecular dynamics (MD) simulations, traditionally used in understanding protein and nucleic-acid folding processes, are based on molecular force fields and require smooth models of these molecular surfaces. To accelerate MD simulations, a popular methodology is to employ coarse-grained molecular models, which represent clusters of atoms with similar physical properties by pseudo-atoms, resulting in coarser-resolution molecular surfaces. We consider generation of these mixed-resolution or adaptive molecular surfaces. Our approach starts from deriving a general-form second-order geometric partial differential equation in the level-set formulation, by minimizing a first-order energy functional which additionally includes a regularization term to minimize the occurrence of chemically infeasible molecular surface pockets or tunnel-like artifacts. To achieve even higher computational efficiency, a fast cubic B-spline C(2) interpolation algorithm is also utilized. A narrow-band, tri-cubic B-spline level-set method is then used to provide C(2)-smooth and resolution-adaptive molecular surfaces.
... gov/ency/article/003531.htm Anti-smooth muscle antibody. Anti-smooth muscle antibody is a blood test that detects the presence ...
DEFF Research Database (Denmark)
Hartelius, Karsten; Carstensen, Jens Michael
2003-01-01
A method for locating distorted grid structures in images is presented. The method is based on the theories of template matching and Bayesian image restoration. The grid is modeled as a deformable template. Prior knowledge of the grid is described through a Markov random field (MRF) model which ... nodes, and the arc prior models variations in row and column spacing across the grid. Grid matching is done by placing an initial rough grid over the image and applying an ensemble annealing scheme to maximize the posterior distribution of the grid. The method can be applied to noisy images with missing ...
Congdon, Peter
2014-01-01
This book provides an accessible approach to Bayesian computing and data analysis, with an emphasis on the interpretation of real data sets. Following in the tradition of the successful first edition, this book aims to make a wide range of statistical modeling applications accessible using tested code that can be readily adapted to the reader's own applications. The second edition has been thoroughly reworked and updated to take account of advances in the field. A new set of worked examples is included. The novel aspect of the first edition was the coverage of statistical modeling using WinBU
Bayesian nonparametric data analysis
Müller, Peter; Jara, Alejandro; Hanson, Tim
2015-01-01
This book reviews nonparametric Bayesian methods and models that have proven useful in the context of data analysis. Rather than providing an encyclopedic review of probability models, the book’s structure follows a data analysis perspective. As such, the chapters are organized by traditional data analysis problems. In selecting specific nonparametric models, simpler and more traditional models are favored over specialized ones. The discussed methods are illustrated with a wealth of examples, including applications ranging from stylized examples to case studies from recent literature. The book also includes an extensive discussion of computational methods and details on their implementation. R code for many examples is included in on-line software pages.
Application of the Bayesian dynamic survival model in medicine.
He, Jianghua; McGee, Daniel L; Niu, Xufeng
2010-02-10
The Bayesian dynamic survival model (BDSM), a time-varying coefficient survival model from the Bayesian perspective, was proposed in the early 1990s but has not been widely used or discussed. In this paper, we describe the model structure of the BDSM and introduce two estimation approaches for BDSMs: the Markov Chain Monte Carlo (MCMC) approach and the linear Bayesian (LB) method. The MCMC approach estimates model parameters through sampling and is computationally intensive. With the newly developed geoadditive survival models and software BayesX, the BDSM is available for general applications. The LB approach is easier in terms of computation, but it requires the prespecification of some unknown smoothing parameters. In a simulation study, we use the LB approach to show the effects of smoothing parameters on the performance of the BDSM and propose an ad hoc method for identifying appropriate values for those parameters. We also demonstrate the performance of the MCMC approach compared with the LB approach and a penalized partial likelihood method available in software R packages. A gastric cancer trial is utilized to illustrate the application of the BDSM.
EMBEDDING FLOWS AND SMOOTH CONJUGACY
Institute of Scientific and Technical Information of China (English)
ZHANG Meirong; LI Weigu
1997-01-01
The authors use the functional equation for embedding vector fields to study smooth embedding flows of one-dimensional diffeomorphisms. The existence and uniqueness of smooth embedding flows and vector fields are proved. As an application of embedding flows, some classification results about local and global diffeomorphisms under smooth conjugacy are given.
Classification using Bayesian neural nets
J.C. Bioch (Cor); O. van der Meer; R. Potharst (Rob)
1995-01-01
Recently, Bayesian methods have been proposed for neural networks to solve regression and classification problems. These methods claim to overcome some difficulties encountered in the standard approach such as overfitting. However, an implementation of the full Bayesian approach to neura
Bayesian Intersubjectivity and Quantum Theory
Pérez-Suárez, Marcos; Santos, David J.
2005-02-01
Two of the major approaches to probability, namely, frequentism and (subjectivistic) Bayesian theory, are discussed, together with the replacement of frequentist objectivity for Bayesian intersubjectivity. This discussion is then expanded to Quantum Theory, as quantum states and operations can be seen as structural elements of a subjective nature.
Bayesian Approach for Inconsistent Information.
Stein, M; Beer, M; Kreinovich, V
2013-10-01
In engineering situations, we usually have a large amount of prior knowledge that needs to be taken into account when processing data. Traditionally, the Bayesian approach is used to process data in the presence of prior knowledge. Sometimes, when we apply the traditional Bayesian techniques to engineering data, we get inconsistencies between the data and prior knowledge. These inconsistencies are usually caused by the fact that in the traditional approach, we assume that we know the exact sample values, that the prior distribution is exactly known, etc. In reality, the data is imprecise due to measurement errors, the prior knowledge is only approximately known, etc. So, a natural way to deal with the seemingly inconsistent information is to take this imprecision into account in the Bayesian approach - e.g., by using fuzzy techniques. In this paper, we describe several possible scenarios for fuzzifying the Bayesian approach. Particular attention is paid to the interaction between the estimated imprecise parameters. In this paper, to implement the corresponding fuzzy versions of the Bayesian formulas, we use straightforward computations of the related expression - which makes our computations reasonably time-consuming. Computations in the traditional (non-fuzzy) Bayesian approach are much faster - because they use algorithmically efficient reformulations of the Bayesian formulas. We expect that similar reformulations of the fuzzy Bayesian formulas will also drastically decrease the computation time and thus, enhance the practical use of the proposed methods.
Inference in hybrid Bayesian networks
DEFF Research Database (Denmark)
Lanseth, Helge; Nielsen, Thomas Dyhre; Rumí, Rafael
2009-01-01
Since the 1980s, Bayesian Networks (BNs) have become increasingly popular for building statistical models of complex systems. This is particularly true for boolean systems, where BNs often prove to be a more efficient modelling framework than traditional reliability techniques (like fault trees) ... decade's research on inference in hybrid Bayesian networks. The discussions are linked to an example model for estimating human reliability.
Bayesian Inference on Gravitational Waves
Directory of Open Access Journals (Sweden)
Asad Ali
2015-12-01
Full Text Available The Bayesian approach is becoming increasingly popular among the astrophysics data analysis communities. However, the Pakistan statistics communities are unaware of this fertile interaction between the two disciplines. Bayesian methods have been used to address astronomical problems since the very birth of Bayes probability in the eighteenth century. Today the Bayesian methods for the detection and parameter estimation of gravitational waves have solid theoretical grounds with strong promise for realistic applications. This article aims to introduce the Pakistan statistics communities to the applications of Bayesian Monte Carlo methods in the analysis of gravitational wave data, with an overview of Bayesian signal detection and estimation methods and a demonstration by a couple of simplified examples.
Approximate Bayesian computation.
Directory of Open Access Journals (Sweden)
Mikael Sunnåker
Full Text Available Approximate Bayesian computation (ABC) constitutes a class of computational methods rooted in Bayesian statistics. In all model-based statistical inference, the likelihood function is of central importance, since it expresses the probability of the observed data under a particular statistical model, and thus quantifies the support data lend to particular values of parameters and to choices among different models. For simple models, an analytical formula for the likelihood function can typically be derived. However, for more complex models, an analytical formula might be elusive or the likelihood function might be computationally very costly to evaluate. ABC methods bypass the evaluation of the likelihood function. In this way, ABC methods widen the realm of models for which statistical inference can be considered. ABC methods are mathematically well-founded, but they inevitably make assumptions and approximations whose impact needs to be carefully assessed. Furthermore, the wider application domain of ABC exacerbates the challenges of parameter estimation and model selection. ABC has rapidly gained popularity over recent years, in particular for the analysis of complex problems arising in the biological sciences (e.g., in population genetics, ecology, epidemiology, and systems biology).
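The likelihood-free idea described above can be sketched with the simplest ABC variant, rejection sampling. This is a minimal illustration under assumed choices (a normal model with known unit variance, a uniform prior, the sample mean as summary statistic, and an arbitrary tolerance `eps`), not code from any of the works cited.

```python
import numpy as np

rng = np.random.default_rng(0)
obs = rng.normal(2.0, 1.0, size=200)      # "observed" data, true mean = 2
s_obs = obs.mean()                        # summary statistic of the data

accepted = []
eps = 0.1                                 # tolerance on the summary distance
while len(accepted) < 300:
    theta = rng.uniform(-5.0, 5.0)        # draw a parameter from the prior
    sim = rng.normal(theta, 1.0, size=200)  # simulate data; no likelihood used
    if abs(sim.mean() - s_obs) < eps:     # keep theta if summaries are close
        accepted.append(theta)

posterior_mean = float(np.mean(accepted))  # approximate posterior mean
```

The accepted draws approximate the posterior; shrinking `eps` trades acceptance rate for accuracy, which is the core tension in ABC.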
Rediscovery of Good-Turing estimators via Bayesian nonparametrics.
Favaro, Stefano; Nipoti, Bernardo; Teh, Yee Whye
2016-03-01
The problem of estimating discovery probabilities originated in the context of statistical ecology, and in recent years it has become popular due to its frequent appearance in challenging applications arising in genetics, bioinformatics, linguistics, designs of experiments, machine learning, etc. A full range of statistical approaches, parametric and nonparametric as well as frequentist and Bayesian, has been proposed for estimating discovery probabilities. In this article, we investigate the relationships between the celebrated Good-Turing approach, which is a frequentist nonparametric approach developed in the 1940s, and a Bayesian nonparametric approach recently introduced in the literature. Specifically, under the assumption of a two parameter Poisson-Dirichlet prior, we show that Bayesian nonparametric estimators of discovery probabilities are asymptotically equivalent, for a large sample size, to suitably smoothed Good-Turing estimators. As a by-product of this result, we introduce and investigate a methodology for deriving exact and asymptotic credible intervals to be associated with the Bayesian nonparametric estimators of discovery probabilities. The proposed methodology is illustrated through a comprehensive simulation study and the analysis of Expressed Sequence Tags data generated by sequencing a benchmark complementary DNA library.
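The classical Good-Turing approach discussed above has a one-line core: the probability that the next observation is a previously unseen species is estimated by N1/N, the fraction of the sample made up of species seen exactly once. A minimal sketch (illustrative, with a made-up sample):

```python
from collections import Counter

def missing_mass(sample):
    """Good-Turing estimate of the probability that the next draw
    is a species not yet seen: N1/N, where N1 counts the species
    observed exactly once and N is the sample size."""
    counts = Counter(sample)
    n1 = sum(1 for c in counts.values() if c == 1)
    return n1 / len(sample)

# "a" seen three times; "b", "c", "d" each seen once: N1 = 3, N = 6.
p0 = missing_mass(["a", "a", "a", "b", "c", "d"])
```

Here the estimated missing mass is 3/6 = 0.5; the smoothed variants discussed in the article refine this raw estimator.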
Meshing Force of Misaligned Spline Coupling and the Influence on Rotor System
Directory of Open Access Journals (Sweden)
Guang Zhao
2008-01-01
Full Text Available The meshing force of a misaligned spline coupling is derived, the dynamic equation of the rotor-spline coupling system is established based on finite element analysis, and the influence of the meshing force on the rotor-spline coupling system is simulated by a numerical integration method. According to the theoretical analysis, the meshing force of a spline coupling is related to the coupling parameters, misalignment, transmitted torque, dynamic vibration displacement, and so on. The meshing force increases nonlinearly with increasing spline thickness and static misalignment, or with decreasing alignment meshing distance (AMD). The stiffness of the coupling depends on the dynamic vibration displacement and static misalignment, and is therefore not a constant. The dynamic behavior of the rotor-spline coupling system reveals the following: the 1X rotating-speed component is the main response frequency of the system when there is no misalignment, while a 2X rotating-speed component appears when misalignment is present. Moreover, as misalignment increases, the vibration of the system becomes intricate, the shaft orbit departs from the origin, and the magnitudes of all frequency components increase. These results can provide important criteria for both the optimization design of spline couplings and the troubleshooting of rotor systems.
Borsboom, D.; Haig, B.D.
2013-01-01
Unlike most other statistical frameworks, Bayesian statistical inference is wedded to a particular approach in the philosophy of science (see Howson & Urbach, 2006); this approach is called Bayesianism. Rather than being concerned with model fitting, this position in the philosophy of science primar
Cubic Spline Interpolation Reveals Different Evolutionary Trends of Various Species
Directory of Open Access Journals (Sweden)
Li Zhiqiang
2016-01-01
Full Text Available Instead of being uniform in each branch of the biological evolutionary tree, the speed of evolution, measured in the number of mutations over a fixed number of years, seems to be much faster or much slower than average in some branches of the evolutionary tree. This paper describes an evolutionary trend discovery algorithm that uses cubic spline interpolation for various branches of the evolutionary tree. As shown in an example, within the vertebrate evolutionary tree, human evolution seems to be currently speeding up while the evolution of chickens is slowing down. The new algorithm can automatically identify those branches and times when something unusual has taken place, aiding data analytics of evolutionary data.
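The core of the trend-discovery idea above, fitting a cubic spline to (time, cumulative mutations) points and reading the instantaneous rate off its derivative, can be sketched as follows. The data points are hypothetical, purely for illustration; this is not the article's algorithm or data.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical (time, cumulative mutation count) points for one lineage;
# the quadratic shape stands in for an accelerating rate of evolution.
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
m = t**2

cs = CubicSpline(t, m)        # default 'not-a-knot' boundary conditions
rate = cs.derivative()        # instantaneous evolutionary "speed"

speed_mid = float(rate(1.5))  # slope of the fitted curve at t = 1.5
```

An increasing `rate(t)` flags a branch whose evolution is speeding up; a decreasing one flags slowdown, which is the signal the algorithm looks for across branches.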
Prediction of longitudinal dispersion coefficient using multivariate adaptive regression splines
Indian Academy of Sciences (India)
Amir Hamzeh Haghiabi
2016-07-01
In this paper, multivariate adaptive regression splines (MARS) was developed as a novel soft-computing technique for predicting the longitudinal dispersion coefficient (DL) in rivers. As mentioned in the literature, an experimental dataset related to DL was collected and used for preparing the MARS model. Results of the MARS model were compared with a multi-layer neural network model and empirical formulas. To define the most effective parameters on DL, the Gamma test was used. Performance of the MARS model was assessed by calculation of standard error indices. Error indices showed that the MARS model has suitable performance and is more accurate compared to the multi-layer neural network model and empirical formulas. Results of the Gamma test and the MARS model showed that flow depth (H) and the ratio of mean velocity to shear velocity (u/u*) were the most effective parameters on DL.
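MARS builds its fit from mirrored hinge functions max(0, x-t) and max(0, t-x) placed at data-driven knots. A minimal sketch of that basis (illustrative only: a fixed knot and ordinary least squares stand in for MARS's adaptive forward/backward knot search):

```python
import numpy as np

def hinge_basis(x, knots):
    """MARS-style basis: intercept plus a mirrored hinge pair
    max(0, x - t) and max(0, t - x) at each candidate knot t."""
    cols = [np.ones_like(x)]
    for t in knots:
        cols.append(np.maximum(0.0, x - t))
        cols.append(np.maximum(0.0, t - x))
    return np.column_stack(cols)

x = np.linspace(-2.0, 2.0, 81)
y = np.abs(x)                          # kink at 0: exactly two hinges suffice
B = hinge_basis(x, knots=[0.0])
coef, *_ = np.linalg.lstsq(B, y, rcond=None)
resid = float(np.max(np.abs(B @ coef - y)))
```

Because |x| = max(0, x) + max(0, -x), the least-squares fit with a single knot at 0 reproduces the target essentially exactly; real MARS chooses the knots and prunes terms automatically.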
Fast Selection of Spectral Variables with B-Spline Compression
Rossi, Fabrice; Wertz, Vincent; Meurens, Marc; Verleysen, Michel
2007-01-01
The large number of spectral variables in most data sets encountered in spectral chemometrics often renders the prediction of a dependent variable uneasy. The number of variables hopefully can be reduced, by using either projection techniques or selection methods; the latter allow for the interpretation of the selected variables. Since the optimal approach of testing all possible subsets of variables with the prediction model is intractable, an incremental selection approach using a nonparametric statistic is a good option, as it avoids the computationally intensive use of the model itself. It has two drawbacks however: the number of groups of variables to test is still huge, and collinearities can make the results unstable. To overcome these limitations, this paper presents a method to select groups of spectral variables. It consists in a forward-backward procedure applied to the coefficients of a B-Spline representation of the spectra. The criterion used in the forward-backward procedure is the mutual infor...
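The first step of the approach above, replacing many spectral variables by a handful of B-spline coefficients, can be sketched with a least-squares spline fit. This is an illustrative sketch, not the paper's procedure: the knot placement, degree, and stand-in "spectrum" are assumptions.

```python
import numpy as np
from scipy.interpolate import make_lsq_spline

x = np.linspace(0.0, 1.0, 200)        # stand-in for 200 wavelength channels
y = x**3                              # stand-in "spectrum" (a smooth cubic)

k = 3                                 # cubic B-splines
interior = np.array([0.25, 0.5, 0.75])
# Full knot vector: k+1 repeated boundary knots on each side.
t = np.r_[[x[0]] * (k + 1), interior, [x[-1]] * (k + 1)]
spl = make_lsq_spline(x, y, t, k)     # 7 coefficients replace 200 values
err = float(np.max(np.abs(spl(x) - y)))
```

The 200-channel signal is compressed to 7 coefficients; a forward-backward selection (as in the paper) would then operate on those coefficients rather than on raw channels.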
Spline-based automatic path generation of welding robot
Institute of Scientific and Technical Information of China (English)
Niu Xuejuan; Li Liangyu
2007-01-01
This paper presents a flexible method for the representation of a welded seam based on spline interpolation. With this method, the tool path of a welding robot can be generated automatically from a 3D CAD model. The technique has been implemented and demonstrated in the FANUC Arc Welding Robot Workstation. Based on the method, a software system was developed using the VBA of SolidWorks 2006. It offers an interface between SolidWorks and ROBOGUIDE, the off-line programming software of the FANUC robot, combining the strong modeling function of the former with the simulation function of the latter. It also has the capability of communicating with an on-line robot. Experimental results have shown high accuracy and strong reliability. This method will improve the intelligence and flexibility of the welding robot workstation.
Perbaikan Metode Penghitungan Debit Sungai Menggunakan Cubic Spline Interpolation
Directory of Open Access Journals (Sweden)
Budi I. Setiawan
2007-09-01
Full Text Available This paper presents an improved method for measuring river discharge using cubic spline interpolation. The function is used to describe the river profile continuously, built from measurements of distance across the river and depth. With this new method, the cross-sectional area and perimeter of the river are computed more easily, quickly, and accurately. Likewise, the inverse function is made available via the Newton-Raphson method, which facilitates computing area and perimeter when the river water level is known. The new method can directly compute river discharge using the Manning formula and produce a rating curve. This paper presents an example of discharge measurement for the Rudeng River in Aceh. The river is about 120 m wide and 7 m deep; at the time of measurement its discharge was 41.3 m³/s, and its rating curve follows the formula Q = 0.1649 H^2.884, where Q is the discharge (m³/s) and H is the water level above the river bed (m).
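The workflow in this abstract (a spline through depth soundings, area and wetted perimeter from the continuous profile, then Manning's formula) can be sketched as follows. The cross-section data, roughness n, and bed slope S below are illustrative assumptions, not the Rudeng River survey values:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical depth soundings across a river section: distance from the
# left bank (m) vs. depth (m). Illustrative values only.
x = np.array([0, 15, 30, 45, 60, 75, 90, 105, 120], dtype=float)
depth = np.array([0.0, 2.1, 4.0, 6.2, 7.0, 6.5, 4.8, 2.5, 0.0])

profile = CubicSpline(x, depth)          # continuous river-bed profile d(x)

# Cross-sectional flow area: integral of depth across the channel.
area = float(profile.integrate(x[0], x[-1]))

# Wetted perimeter: arc length of the bed profile, integrated by the
# trapezoidal rule on a fine grid.
xs = np.linspace(x[0], x[-1], 2001)
slope = profile(xs, 1)                   # first derivative of the spline
ds = np.sqrt(1.0 + slope ** 2)
perimeter = float(np.sum(0.5 * (ds[:-1] + ds[1:]) * np.diff(xs)))

# Manning formula: Q = (1/n) * A * R^(2/3) * sqrt(S), with hydraulic
# radius R = A / P. Roughness n and bed slope S are assumed values.
n, S = 0.030, 1e-4
R = area / perimeter
Q = (1.0 / n) * area * R ** (2.0 / 3.0) * np.sqrt(S)
print(f"area = {area:.1f} m^2, perimeter = {perimeter:.1f} m, Q = {Q:.1f} m^3/s")
```

Repeating the computation for several water levels H yields the (H, Q) pairs from which a rating curve such as Q = 0.1649 H^2.884 can be fitted.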
Application of thin plate splines for accurate regional ionosphere modeling with multi-GNSS data
Krypiak-Gregorczyk, Anna; Wielgosz, Pawel; Borkowski, Andrzej
2016-04-01
GNSS-derived regional ionosphere models are widely used in precise positioning as well as in ionosphere and space weather studies. However, their accuracy is often not sufficient to support precise positioning, RTK in particular. In this paper, we present a new approach that uses solely carrier-phase multi-GNSS observables and thin plate splines (TPS) for accurate ionospheric TEC modeling. The TPS is the closed-form solution of a variational problem minimizing both the sum of squared second derivatives of a smoothing function and the deviation between the data points and this function. This approach is used in the UWM-rt1 regional ionosphere model developed at UWM in Olsztyn. The model provides ionospheric TEC maps with high spatial and temporal resolution: 0.2 x 0.2 degrees and 2.5 minutes, respectively. For TEC estimation, EPN and EUPOS reference station data are used. The maps are available with a delay of 15-60 minutes. In this paper we compare the performance of the UWM-rt1 model with IGS global and CODE regional ionosphere maps during the ionospheric storm that took place on March 17th, 2015. During this storm, the TEC level over Europe doubled compared to the preceding quiet days. The performance of the UWM-rt1 model was validated by (a) comparison to reference double-differenced ionospheric corrections over selected baselines, and (b) analysis of post-fit residuals of calibrated carrier-phase geometry-free observational arcs at selected test stations. The results show a very good performance of the UWM-rt1 model. The post-fit residuals obtained with the UWM maps are lower by one order of magnitude compared to the IGS maps. The accuracy of the UWM-rt1-derived TEC maps is estimated at 0.5 TECU. This may be translated directly to the user positioning domain.
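A minimal sketch of the thin-plate-spline smoothing idea, using SciPy's RBFInterpolator on synthetic scattered data. The UWM-rt1 model itself works on carrier-phase GNSS observables; the surface, noise level, and smoothing parameter here are assumptions for illustration only:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Synthetic stand-in for scattered TEC observations at lon/lat-like points.
rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 10.0, size=(200, 2))
truth = np.sin(pts[:, 0] / 3.0) + 0.5 * np.cos(pts[:, 1] / 2.0)
obs = truth + rng.normal(0.0, 0.05, size=200)        # noisy observations

# kernel='thin_plate_spline' minimizes a bending-energy penalty plus the
# data misfit; smoothing > 0 trades fidelity against smoothness.
tps = RBFInterpolator(pts, obs, kernel="thin_plate_spline", smoothing=1e-3)

# Evaluate the fitted surface on a regular grid, as a TEC map would be.
grid = np.stack(np.meshgrid(np.linspace(1, 9, 20),
                            np.linspace(1, 9, 20)), axis=-1).reshape(-1, 2)
tec_map = tps(grid)
```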
Implementing Bayesian Vector Autoregressions
Directory of Open Access Journals (Sweden)
Richard M. Todd
1988-03-01
Full Text Available This paper discusses how the Bayesian approach can be used to construct a type of multivariate forecasting model known as a Bayesian vector autoregression (BVAR). In doing so, we mainly explain the propositions of Doan, Litterman, and Sims (1984) on how to estimate a BVAR based on a certain family of prior probability distributions, indexed by a fairly small set of hyperparameters. There is also a discussion of how to specify a BVAR and set up a BVAR database. A 4-variable model is used to illustrate the BVAR approach.
Bayesian tomography and integrated data analysis in fusion diagnostics
Li, Dong; Dong, Y. B.; Deng, Wei; Shi, Z. B.; Fu, B. Z.; Gao, J. M.; Wang, T. B.; Zhou, Yan; Liu, Yi; Yang, Q. W.; Duan, X. R.
2016-11-01
In this article, a Bayesian tomography method using a non-stationary Gaussian process prior is introduced. The Bayesian formalism allows quantities which bear uncertainty to be expressed in probabilistic form, so that the uncertainty of a final solution can be fully resolved from the confidence interval of the posterior probability. Moreover, a consistency check of that solution can be performed by checking whether the misfits between predicted and measured data lie reasonably within the assumed data error. In particular, the accuracy of reconstructions is significantly improved by using the non-stationary Gaussian process, which can adapt to the varying smoothness of the emission distribution. The implementation of this method for a soft X-ray diagnostic on HL-2A has been used to explore relevant physics in equilibrium and MHD instability modes. This project is carried out within a large-scale inference framework, aiming at an integrated analysis of heterogeneous diagnostics.
Symmetric alteration of four knots of B-spline and NURBS surfaces
Institute of Scientific and Technical Information of China (English)
LI Ya-juan; WANG Guo-zhao
2006-01-01
Modifying the knots of a B-spline curve changes the shape of the curve. In this paper, we present the effect of the symmetric alteration of four knots of B-spline and NURBS surfaces: when the knots of a surface are altered symmetrically, the extended paths of points of the surface converge to a point that can be expressed in terms of several control points. This theory can be used in the constrained shape modification of B-spline and NURBS surfaces.
Energy Technology Data Exchange (ETDEWEB)
Daly, Don S.; Anderson, Kevin K.; White, Amanda M.; Gonzalez, Rachel M.; Varnum, Susan M.; Zangar, Richard C.
2008-07-14
Background: A microarray of enzyme-linked immunosorbent assays, or ELISA microarray, predicts simultaneously the concentrations of numerous proteins in a small sample. These predictions, however, are uncertain due to processing error and biological variability. Making sound biological inferences as well as improving the ELISA microarray process require both concentration predictions and credible estimates of their errors. Methods: We present a statistical method based on monotonic spline statistical models, penalized constrained least squares fitting (PCLS) and Monte Carlo simulation (MC) to predict concentrations and estimate prediction errors in ELISA microarrays. PCLS restrains the flexible spline to a fit of assay intensity that is a monotone function of protein concentration. With MC, both modeling and measurement errors are combined to estimate prediction error. The spline/PCLS/MC method is compared to a common method using simulated and real ELISA microarray data sets. Results: In contrast to the rigid logistic model, the flexible spline model gave credible fits in almost all test cases, including troublesome cases with left and/or right censoring or other asymmetries. For the real data sets, 61% of the spline predictions were more accurate than their comparable logistic predictions, especially at the extremes of the prediction curve. The relative errors of 50% of comparable spline and logistic predictions differed by less than 20%. Monte Carlo simulation rendered acceptable asymmetric prediction intervals for both spline and logistic models, while propagation of error produced symmetric intervals that diverged unrealistically as the standard curves approached horizontal asymptotes. Conclusions: The spline/PCLS/MC method is a flexible, robust alternative to a logistic/NLS/propagation-of-error method to reliably predict protein concentrations and estimate their errors. The spline method simplifies model selection and fitting.
PH-spline approximation for Bézier curve and rendering offset
Institute of Scientific and Technical Information of China (English)
郑志浩; 汪国昭
2004-01-01
In this paper, a G1, C1, C2 PH-spline is employed as an approximation to a given Bézier curve within an error bound, and is further used to render an offset which can be regarded as an approximate offset to the Bézier curve. The errors between the PH-spline and the Bézier curve, and between the offset to the PH-spline and the offset to the given Bézier curve, are also estimated. A new algorithm for constructing an offset to the Bézier curve is proposed.
B-splines on 3-D tetrahedron partition in four-directional mesh
Institute of Scientific and Technical Information of China (English)
(no author listed)
2001-01-01
It is more difficult to construct 3-D splines than in the 2-D case. Some results for the three-directional meshes of the bivariate case have been extended to the 3-D case, and the corresponding tetrahedron partition has been constructed. The supports of the related B-splines and their recurrence formulas for integration and differentiation-difference are obtained. The results of this paper can be extended to higher-dimensional spaces, and can also be used in wavelet analysis because of the relationship between splines and wavelets.
Energy Technology Data Exchange (ETDEWEB)
Li, Xin; Miller, Eric L.; Rappaport, Carey; Silevich, Michael
2000-04-11
A common problem in signal processing is to estimate the structure of an object from noisy measurements linearly related to the desired image. These problems are broadly known as inverse problems. A key feature which complicates the solution to such problems is their ill-posedness. That is, small perturbations in the data, arising e.g. from noise, can and do lead to severe, non-physical artifacts in the recovered image. The process of stabilizing these problems is known as regularization, of which Tikhonov regularization is one of the most common. While this approach leads to a simple linear least squares problem to solve for generating the reconstruction, it has the unfortunate side effect of producing smooth images, thereby obscuring important features such as edges. Therefore, over the past decade there has been much work on the development of edge-preserving regularizers. This technique leads to image estimates in which the important features are retained, but computationally they require the solution of a nonlinear least squares problem, a daunting task in many practical multi-dimensional applications. In this thesis we explore low-order models for reducing the complexity of the reconstruction process. Specifically, B-splines are used to approximate the object. If a 'proper' collection of B-splines is chosen, such that the object can be efficiently represented using a few basis functions, the dimensionality of the underlying problem will be significantly decreased. Consequently, an optimum distribution of splines needs to be determined. Here, an adaptive refining and pruning algorithm is developed to solve the problem. The refining part is based on curvature information, the intuition being that a relatively dense set of fine-scale basis elements should cluster near regions of high curvature, while a sparse collection of basis vectors is adequate to represent the object over spatially smooth areas. The pruning part is a greedy...
Book review: Bayesian analysis for population ecology
Link, William A.
2011-01-01
Brian Dennis described the field of ecology as “fertile, uncolonized ground for Bayesian ideas.” He continued: “The Bayesian propagule has arrived at the shore. Ecologists need to think long and hard about the consequences of a Bayesian ecology. The Bayesian outlook is a successful competitor, but is it a weed? I think so.” (Dennis 2004)
Ortega, Pedro A
2011-01-01
Discovering causal relationships is a hard task, often hindered by the need for intervention, and often requiring large amounts of data to resolve statistical uncertainty. However, humans quickly arrive at useful causal relationships. One possible reason is that humans use strong prior knowledge; and rather than encoding hard causal relationships, they encode beliefs over causal structures, allowing for sound generalization from the observations they obtain from directly acting in the world. In this work we propose a Bayesian approach to causal induction which allows modeling beliefs over multiple causal hypotheses and predicting the behavior of the world under causal interventions. We then illustrate how this method extracts causal information from data containing interventions and observations.
Blundell, Charles; Heller, Katherine A
2012-01-01
Hierarchical structure is ubiquitous in data across many domains. There are many hierarchical clustering methods, frequently used by domain experts, which strive to discover this structure. However, most of these methods limit discoverable hierarchies to those with binary branching structure. This limitation, while computationally convenient, is often undesirable. In this paper we explore a Bayesian hierarchical clustering algorithm that can produce trees with arbitrary branching structure at each node, known as rose trees. We interpret these trees as mixtures over partitions of a data set, and use a computationally efficient, greedy agglomerative algorithm to find the rose trees which have high marginal likelihood given the data. Lastly, we perform experiments which demonstrate that rose trees are better models of data than the typical binary trees returned by other hierarchical clustering algorithms.
Bayesian inference in geomagnetism
Backus, George E.
1988-01-01
The inverse problem in empirical geomagnetic modeling is investigated, with critical examination of recently published studies. Particular attention is given to the use of Bayesian inference (BI) to select the damping parameter lambda in the uniqueness portion of the inverse problem. The mathematical bases of BI and stochastic inversion are explored, with consideration of bound-softening problems and resolution in linear Gaussian BI. The problem of estimating the radial magnetic field B(r) at the earth core-mantle boundary from surface and satellite measurements is then analyzed in detail, with specific attention to the selection of lambda in the studies of Gubbins (1983) and Gubbins and Bloxham (1985). It is argued that the selection method is inappropriate and leads to lambda values much larger than those that would result if a reasonable bound on the heat flow at the CMB were assumed.
Classification of smooth Fano polytopes
DEFF Research Database (Denmark)
Øbro, Mikkel
A simplicial lattice polytope containing the origin in its interior is called a smooth Fano polytope if the vertices of every facet form a basis of the lattice. The study of smooth Fano polytopes is motivated by their connection to toric varieties. The thesis concerns the classification of smooth Fano polytopes up to isomorphism. A smooth Fano d-polytope can have at most 3d vertices; in the case of 3d vertices an explicit classification is known. The thesis contains the classification in the case of 3d-1 vertices. Classifications of smooth Fano d-polytopes for fixed d exist only for small d. In the thesis an algorithm for the classification of smooth Fano d-polytopes for any given d is presented. The algorithm has been implemented and used to obtain the complete classification.
Current trends in Bayesian methodology with applications
Upadhyay, Satyanshu K; Dey, Dipak K; Loganathan, Appaia
2015-01-01
Collecting Bayesian material scattered throughout the literature, Current Trends in Bayesian Methodology with Applications examines the latest methodological and applied aspects of Bayesian statistics. The book covers biostatistics, econometrics, reliability and risk analysis, spatial statistics, image analysis, shape analysis, Bayesian computation, clustering, uncertainty assessment, high-energy astrophysics, neural networking, fuzzy information, objective Bayesian methodologies, empirical Bayes methods, small area estimation, and many more topics. Each chapter is self-contained and focuses on...
Introduction of b-splines to trajectory planning for robot manipulators
Directory of Open Access Journals (Sweden)
Per E. Koch
1988-04-01
Full Text Available This paper describes how B-splines can be used to construct joint trajectories for robot manipulators. The motion is specified by a sequence of Cartesian knots, i.e., positions and orientations of the end effector of a robot manipulator. For a six-joint robot manipulator, these Cartesian knots are transformed into six sets of joint variables, with each set corresponding to a joint. Splines, represented as linear combinations of B-splines, are used to fit the sequence of joint variables for each of the six joints. A computationally very simple recurrence formula is used to generate the B-splines. This approach is used for the first time to establish the mathematical model of trajectory generation for robot manipulators, and offers flexibility, computational efficiency, and a compact representation.
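The simple recurrence formula mentioned in the abstract is the Cox-de Boor recurrence; a minimal sketch follows, where the knot vector and degree are our own illustrative choices:

```python
import numpy as np

def bspline_basis(i, k, t, x):
    """Evaluate the i-th B-spline basis function of degree k over knot
    vector t at points x, via the Cox-de Boor recurrence."""
    x = np.asarray(x, dtype=float)
    if k == 0:
        # Degree 0: indicator of the half-open knot interval [t_i, t_{i+1}).
        return np.where((t[i] <= x) & (x < t[i + 1]), 1.0, 0.0)
    left = np.zeros_like(x)
    if t[i + k] > t[i]:          # guard against zero-width (repeated) knots
        left = (x - t[i]) / (t[i + k] - t[i]) * bspline_basis(i, k - 1, t, x)
    right = np.zeros_like(x)
    if t[i + k + 1] > t[i + 1]:
        right = ((t[i + k + 1] - x) / (t[i + k + 1] - t[i + 1])
                 * bspline_basis(i + 1, k - 1, t, x))
    return left + right

# Cubic B-splines on a clamped knot vector; a joint trajectory would be a
# linear combination sum_i c_i * B_i(x) fitted to the joint-variable knots.
degree = 3
knots = np.concatenate([[0.0] * degree, np.linspace(0.0, 1.0, 6), [1.0] * degree])
n_basis = len(knots) - degree - 1
x = np.linspace(0.0, 1.0, 101, endpoint=False)
B = np.array([bspline_basis(i, degree, knots, x) for i in range(n_basis)])
# On [0, 1) the basis functions are nonnegative and sum to one.
```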
MECHANICS ANALYSIS ON PRECISE FORMING PROCESS OF EXTERNAL SPLINE COLD ROLLING
Institute of Scientific and Technical Information of China (English)
ZHANG Dawei; LI Yongtang; FU Jianhua; ZHENG Quangang
2007-01-01
According to suitable assumptions, the deformation process of external spline cold rolling is analyzed. By the graphing method, the slip-line field of the plastically deforming area in the process of external spline cold rolling is set up. Different friction conditions are used in different contact areas in order to realistically reflect the actual situation. The unit average pressure on the contact surface during rolling is solved according to the stress field theory of slip lines, and formulae for the rolling force and rolling moment are established. The theoretical result agrees well with finite element analysis. A theoretical basis is provided for the precise forming process of spline cold rolling and the production of externally splined shafts.
Nonlinear Spline Kernel-based Partial Least Squares Regression Method and Its Application
Institute of Scientific and Technical Information of China (English)
JIA Jin-ming; WEN Xiang-jun
2008-01-01
Inspired by the traditional Wold nonlinear PLS algorithm, which comprises the NIPALS approach and a spline inner-function model, a novel nonlinear partial least squares algorithm based on a spline kernel (named SK-PLS) is proposed for nonlinear modeling in the presence of multicollinearity. Based on the inner-product kernel spanned by spline basis functions with an infinite number of nodes, the method first maps the input data into a high-dimensional feature space, then calculates a linear PLS model with a reformed NIPALS procedure in the feature space, and in consequence gives a unified framework for traditional PLS "kernel" algorithms. The linear PLS in the feature space corresponds to a nonlinear PLS in the original input (primal) space. The good approximating property of the spline kernel function enhances the generalization ability of the novel model, and two numerical experiments are given to illustrate the feasibility of the proposed method.
A B-spline method used to calculate added resistance in waves
Zangeneh, Razieh; Ghiasi, Mahmood
2017-01-01
An exact computation of added resistance in sea waves is of high interest due to its economic effects on ship design and operation. In this paper, a B-spline based method is developed for the computation of added resistance. Based on the potential flow assumption, the velocity potential is computed using Green's formula. The Kochin function is applied to compute added resistance using Maruo's far-field method; the body surface is described by a B-spline curve, and the potentials and normal derivatives of the potentials are also described by B-spline basis functions and their derivatives. A collocation approach is applied for the numerical computation, and the integral equations are then evaluated by applying Gauss-Legendre quadrature. Computations are performed for a spheroid and different hull forms; the results are validated by comparison with experimental results. All results obtained with the present method show good agreement with the experiments.
Bayesian analysis of the dynamic structure in China's economic growth
Kyo, Koki; Noda, Hideo
2008-11-01
To analyze the dynamic structure in China's economic growth during the period 1952-1998, we introduce a model of the aggregate production function for the Chinese economy that considers total factor productivity (TFP) and output elasticities as time-varying parameters. Specifically, this paper is concerned with the relationship between the rate of economic growth in China and the trend in TFP. Here, we consider the time-varying parameters as random variables and introduce smoothness priors to construct a set of Bayesian linear models for parameter estimation. The results of the estimation are in agreement with the movements in China's social economy, thus illustrating the validity of the proposed methods.
Bayesian Age-Period-Cohort Modeling and Prediction - BAMP
Directory of Open Access Journals (Sweden)
Volker J. Schmid
2007-10-01
Full Text Available The software package BAMP provides a method of analyzing incidence or mortality data on the Lexis diagram, using a Bayesian version of an age-period-cohort model. A hierarchical model is assumed, with a binomial model at the first stage. As smoothing priors for the age, period and cohort parameters, random walks of first and second order, with and without an additional unstructured component, are available. Unstructured heterogeneity can also be included in the model. In order to evaluate the model fit, the posterior deviance, DIC and predictive deviances are computed. By projecting the random walk prior into the future, future death rates can be predicted.
Mautz, R.; Ping, J.; Heki, K.; Schaffrin, B.; Shum, C.; Potts, L.
2005-05-01
Wavelet expansion has been demonstrated to be suitable for the representation of spatial functions. Here we propose so-called B-spline wavelets to represent spatial time-series of GPS-derived global ionosphere maps (GIMs) of the vertical total electron content (TEC) from the Earth's surface to the mean altitude of the GPS satellites, over Japan. The scalar-valued B-spline wavelets can be defined in a two-dimensional, but not necessarily planar, domain. Generated by a sequence of knots, B-splines of different orders can be implemented: order 1 gives the Haar wavelet; order 2, the linear B-spline wavelet; and order 4, the cubic B-spline wavelet. A non-uniform version of these wavelets allows us to handle data on a bounded domain without any edge effects. B-splines are easily extended, with great computational efficiency, to domains of arbitrary dimension while preserving their properties. This generalization employs tensor products of B-splines, defined as linear superpositions of products of univariate B-splines in different directions. The data and model may be identical at the locations of the data points if the number of wavelet coefficients is equal to the number of grid points. In addition, data compression is made efficient by eliminating the wavelet coefficients with negligible magnitudes, thereby reducing the observational noise. We applied the developed methodology to the representation of the spatial and temporal variations of the GIM from an extremely dense GPS network, the GPS Earth Observation Network (GEONET) in Japan. Since the TEC is sampled regularly in time, we use a two-dimensional B-spline wavelet representation in space and a one-dimensional spline interpolation in time. Over the Japan region, the B-spline wavelet method can overcome the problem of bias for the spherical harmonic model at the boundary, caused by the non-compact support. The hierarchical decomposition not only allows an inexpensive calculation, but also...
Calibration using constrained smoothing with applications to mass spectrometry data.
Feng, Xingdong; Sedransk, Nell; Xia, Jessie Q
2014-06-01
Linear regressions are commonly used to calibrate signal measurements in proteomic analysis by mass spectrometry. However, with or without a monotone (e.g., log) transformation, data from such functional proteomic experiments are not necessarily linear, or even monotone, functions of protein (or peptide) concentration except over a very restricted range. A computationally efficient spline procedure improves upon linear regression. However, mass spectrometry data are not necessarily homoscedastic; more often the variation of measured concentrations increases disproportionately near the boundaries of the instrument's measurement capability (dynamic range), that is, the upper and lower limits of quantitation. These calibration difficulties exist with other applications of mass spectrometry as well as with other broad-scale calibrations. Therefore the method proposed here uses a functional data approach to define the calibration curve and also the limits of quantitation, under two assumptions: (i) that the variance is a bounded, convex function of concentration; and (ii) that the calibration curve itself is monotone at least between the limits of quantitation, but not necessarily outside these limits. Within this paradigm, the limit of detection, where the signal is definitely present but not measurable with any accuracy, is also defined. An iterative approach draws on existing smoothing methods to account simultaneously for both restrictions and is shown to achieve the global optimal convergence rate under weak conditions. This approach can also be implemented when convexity is replaced by other (bounded) restrictions. Examples from Addona et al. (2009, Nature Biotechnology 27, 633-641) both motivate and illustrate the effectiveness of this functional data methodology when compared with the simpler linear regressions and spline techniques.
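The role of the monotonicity restriction can be illustrated with a much simpler shape-preserving spline than the paper's penalized constrained estimator. The sketch below uses SciPy's PCHIP interpolator on synthetic calibration data; all values and the interpolatory setting are assumptions, not the paper's method:

```python
import numpy as np
from scipy.interpolate import PchipInterpolator
from scipy.optimize import brentq

# Synthetic S-shaped response: signal vs. log-concentration. PCHIP fits a
# monotone cubic spline through monotone data, so the fitted calibration
# curve is itself monotone.
log_conc = np.linspace(-2.0, 3.0, 8)
signal = 100.0 / (1.0 + np.exp(-(log_conc - 0.5)))
cal = PchipInterpolator(log_conc, signal)

# Calibration inverts the curve: given a measured signal, solve for the
# concentration. Monotonicity guarantees the inverse is well defined
# between the limits of quantitation.
measured = 60.0
est_log_conc = brentq(lambda z: float(cal(z)) - measured,
                      log_conc[0], log_conc[-1])
```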
Irregular-Time Bayesian Networks
Ramati, Michael
2012-01-01
In many fields observations are performed irregularly along time, due to either measurement limitations or lack of a constant immanent rate. While discrete-time Markov models (as Dynamic Bayesian Networks) introduce either inefficient computation or an information loss to reasoning about such processes, continuous-time Markov models assume either a discrete state space (as Continuous-Time Bayesian Networks), or a flat continuous state space (as stochastic differential equations). To address these problems, we present a new modeling class called Irregular-Time Bayesian Networks (ITBNs), generalizing Dynamic Bayesian Networks, allowing substantially more compact representations, and increasing the expressivity of the temporal dynamics. In addition, a globally optimal solution is guaranteed when learning temporal systems, provided that they are fully observed at the same irregularly spaced time-points, and a semiparametric subclass of ITBNs is introduced to allow further adaptation to the irregular nature of t...
Isogeometric Divergence-conforming B-splines for the Steady Navier-Stokes Equations
2012-04-01
geometrical mapping meeting our criteria could be defined utilizing B-splines or Non-Uniform Rational B-Splines (NURBS) on the coarsest mesh Mh0. For ... examples of such mappings, see Chapter 2 of [13]. NURBS mappings are especially useful as they can represent many geometries of scientific and ... complications that are beyond the scope of this work. We would like to note that all four assumptions hold if we employ a conforming NURBS multi-patch ...
Final report: PITA-18 use of nonpoisonous splines for longitudinal flux traversing
Energy Technology Data Exchange (ETDEWEB)
Albertson, D.G.; Bowers, C.E.
1963-05-01
Optimization of the reactor process requires knowledge of the longitudinal flux distribution on a semicontinuous, routine basis. The nonpoisonous spline was proposed as a way of obtaining flux traverses at any time during reactor operation, in virtually any location in the core. This report summarizes the findings of a feasibility study conducted in conjunction with PITA-18 and thus serves as a termination of the test phase of spline traversing.
THE BLOSSOM APPROACH TO THE DIMENSION OF THE BIVARIATE SPLINE SPACE
Institute of Scientific and Technical Information of China (English)
Yu-yu Feng; Zhi-bin Chen
2000-01-01
The dimension of the bivariate spline space S^r_n(Δ) may depend on geometric properties of the triangulation Δ, in particular if n is not much bigger than r. In this paper, the blossom approach to the dimension count is outlined. It leads to a symbolic algorithm that determines whether a triangulation is singular or not. The approach is demonstrated on the case of the Morgan-Scott partition and twice-differentiable splines.
Removal of Baseline Wander Noise from Electrocardiogram (ECG) using Fifth-order Spline Interpolation
John A. OJO; Temilade B. ADETOYI; Solomon A. Adeniran
2016-01-01
Baseline wander can mask some important features of the electrocardiogram (ECG) signal, hence it is desirable to remove this noise for proper analysis and display of the ECG signal. This paper presents the implementation and evaluation of spline interpolation and linear-phase FIR filtering methods to remove this noise. The spline interpolation method requires the QRS waves to be detected first, after which a fifth-order (quintic) interpolation technique is applied to determine the smo...
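The spline-interpolation approach to baseline-wander removal can be sketched as follows. The sampling rate, synthetic "ECG", and fiducial-point choice are illustrative assumptions; a real implementation would first run a QRS detector and pick isoelectric samples near each beat:

```python
import numpy as np
from scipy.interpolate import make_interp_spline

fs = 250.0                                   # assumed sampling rate, Hz
t = np.arange(0, 10.0, 1.0 / fs)
ecg = 0.1 * np.sin(2 * np.pi * 7.0 * t)      # stand-in for ECG content
wander = 0.5 * np.sin(2 * np.pi * 0.2 * t)   # slow baseline drift
signal = ecg + wander

# One fiducial point per simulated beat (here: every second). In practice
# these would be isoelectric points located relative to detected QRS waves.
fid_t = np.arange(0.0, 11.0, 1.0)
fid_v = np.interp(fid_t, t, signal)

# Fifth-order (quintic) spline through the fiducial points estimates the
# baseline; subtracting it leaves the baseline-corrected ECG.
baseline = make_interp_spline(fid_t, fid_v, k=5)(t)
corrected = signal - baseline
```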
Xiaolong Wang; Yi Wang; Zhizhu Cao; Weizhong Zou; Liping Wang; Guojun Yu; Bo Yu; Jinjun Zhang
2013-01-01
In general, the proper orthogonal decomposition (POD) method is used to deal with single-parameter problems in engineering practice, and linear interpolation is employed to establish the reduced model. Recently, this method has been extended to solve double-parameter problems, with the amplitudes obtained by cubic B-spline interpolation. In this paper, the accuracy of reduced models established with linear interpolation and cubic B-spline interpolation, respectively, is verified...
Cubic B-Spline Collocation Method for One-Dimensional Heat and Advection-Diffusion Equations
Joan Goh; Ahmad Abd. Majid; Ahmad Izani Md. Ismail
2012-01-01
Numerical solutions of one-dimensional heat and advection-diffusion equations are obtained by a collocation method based on cubic B-splines. The usual finite difference scheme is used for the time and space integrations, and the cubic B-spline is applied as the interpolation function. The stability of the scheme is examined by the Von Neumann approach. The efficiency of the method is illustrated by some test problems. The numerical results are found to be in good agreement with the exact solution.
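A hedged sketch of cubic B-spline collocation for the 1-D heat equation with backward-Euler time stepping; the knot placement, collocation points, and time scheme are our own choices, not necessarily those of the paper:

```python
import numpy as np
from scipy.interpolate import BSpline

# Solve u_t = alpha * u_xx on [0, 1] with u(0) = u(1) = 0.
alpha, dt, nsteps = 1.0, 1e-4, 200
k = 3                                           # cubic B-splines
t_int = np.linspace(0.0, 1.0, 21)
knots = np.concatenate([[0.0] * k, t_int, [1.0] * k])
n = len(knots) - k - 1                          # number of basis functions

# Collocate at the Greville abscissae (averages of k consecutive knots).
x = np.array([knots[i + 1:i + k + 1].mean() for i in range(n)])

# Basis matrix A and second-derivative matrix D at the collocation points.
A = np.zeros((n, n))
D = np.zeros((n, n))
for j in range(n):
    e = np.zeros(n)
    e[j] = 1.0
    spl = BSpline(knots, e, k)
    A[:, j] = spl(x)
    D[:, j] = spl.derivative(2)(x)

# Initial condition u(x, 0) = sin(pi x): solve A c = u0 for coefficients.
c = np.linalg.solve(A, np.sin(np.pi * x))

# Backward Euler: (A - dt*alpha*D) c_new = A c, with the first and last
# rows replaced to enforce the boundary values u(0) = u(1) = 0.
M = A - dt * alpha * D
M[0, :], M[-1, :] = A[0, :], A[-1, :]
for _ in range(nsteps):
    rhs = A @ c
    rhs[0] = rhs[-1] = 0.0
    c = np.linalg.solve(M, rhs)

u = BSpline(knots, c, k)   # numerical solution at time T = nsteps * dt
```

The exact solution sin(πx)·exp(-π²·alpha·T) provides a convenient accuracy check.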
GA Based Rational cubic B-Spline Representation for Still Image Interpolation
Samreen Abbas; Malik Zawwar Hussain; Misbah Irshad
2016-01-01
In this paper, an image interpolation scheme is designed for 2D natural images. A local-support rational cubic spline with control parameters, as the interpolatory function, is optimized using a Genetic Algorithm (GA). The GA is applied to determine appropriate values of the control parameters used in the description of the rational cubic spline. Three state-of-the-art Image Quality Assessment (IQA) models, along with a traditional one, are employed for comparison with existing image interpolation schemes and perc...
Numerical Solutions for Convection-Diffusion Equation through Non-Polynomial Spline
Directory of Open Access Journals (Sweden)
Ravi Kanth A.S.V.
2016-01-01
In this paper, numerical solutions for the convection-diffusion equation via non-polynomial splines are studied. We propose an implicit method based on non-polynomial spline functions for solving the convection-diffusion equation. The method is proven to be unconditionally stable using the Von Neumann technique. Numerical results are presented to demonstrate the efficiency and stability of the proposed method.
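A Von Neumann stability check of the kind used above amounts to bounding the amplification factor of a Fourier mode over all wavenumbers. The sketch below does this for a standard implicit (backward-Euler, central-difference) convection-diffusion scheme, not the paper's non-polynomial spline scheme; the scheme and parameter names are assumptions for illustration:

```python
import math

def amplification(theta, c, d):
    """Von Neumann amplification factor G(theta) for the backward-Euler,
    central-difference convection-diffusion scheme.
    c = a*dt/dx (Courant number), d = nu*dt/dx**2 (diffusion number)."""
    return 1.0 / (1.0 + 2.0 * d * (1.0 - math.cos(theta)) + 1j * c * math.sin(theta))

def max_abs_G(c, d, n=1000):
    """Maximum |G| over sampled wavenumbers; <= 1 means unconditional stability."""
    return max(abs(amplification(2 * math.pi * k / n, c, d)) for k in range(n))
```

Since the real part of the denominator is at least 1 for any theta, |G| never exceeds 1 for any c and d, which is exactly the unconditional-stability conclusion such analyses reach.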
B-SPLINE-BASED SVM MODEL AND ITS APPLICATIONS TO OIL WATER-FLOODED STATUS IDENTIFICATION
Institute of Scientific and Technical Information of China (English)
Shang Fuhua; Zhao Tiejun; Yi Xiongying
2007-01-01
A method of B-spline transform for signal feature extraction is developed. With the B-spline, the log-signal space is mapped into a vector space. An efficient algorithm based on the Support Vector Machine (SVM) is described that automatically identifies the water-flooded status of oil-saturated strata. Experiments show that this algorithm improves identification and generalization performance when only a limited set of samples is available.
Quadrature-free spline method for two-dimensional Navier-Stokes equation
Institute of Scientific and Technical Information of China (English)
HU Xian-liang; HAN Dan-fu
2008-01-01
In this paper, a quadrature-free scheme of the spline method for the two-dimensional Navier-Stokes equation is derived, which can dramatically improve the efficiency of the spline method for fluid problems proposed by Lai and Wenston (2004). Additionally, an explicit formulation for boundary conditions with up to second-order derivatives is presented. Numerical simulations on several benchmark problems show that the scheme is very efficient.
A Digital-Discrete Method For Smooth-Continuous Data Reconstruction
Chen, Li
2010-01-01
A systematic digital-discrete method for obtaining continuous functions with smoothness to a certain order (C^(n)) from sample data is designed. The method is based on gradually varied functions and the classical finite difference method. It has been applied to real groundwater data, and the results have validated the method. The method is independent of popular existing methods such as the cubic spline method and the finite element method, and has considerable advantages for a large number of real data applications. It also differs from classical discrete methods, which usually use triangulations. The method can potentially be used to obtain smooth functions such as polynomials through their derivatives f^(k), and to solve partial differential equations such as the harmonic equation and other important equations.
Bayesian Inference: with ecological applications
Link, William A.; Barker, Richard J.
2010-01-01
This text provides a mathematically rigorous yet accessible and engaging introduction to Bayesian inference, with relevant examples that will be of interest to biologists working in the fields of ecology, wildlife management and environmental studies, as well as students in advanced undergraduate statistics. The text opens the door to Bayesian inference, taking advantage of modern computational efficiencies and easily accessible software to evaluate complex hierarchical models.
Bayesian Methods for Statistical Analysis
Puza, Borek
2015-01-01
Bayesian methods for statistical analysis is a book on statistical methods for analysing a wide variety of data. The book consists of 12 chapters, starting with basic concepts and covering numerous topics, including Bayesian estimation, decision theory, prediction, hypothesis testing, hierarchical models, Markov chain Monte Carlo methods, finite population inference, biased sampling and nonignorable nonresponse. The book contains many exercises, all with worked solutions, including complete c...
Bayesian Networks and Influence Diagrams
DEFF Research Database (Denmark)
Kjærulff, Uffe Bro; Madsen, Anders Læsø
Probabilistic networks, also known as Bayesian networks and influence diagrams, have become one of the most promising technologies in the area of applied artificial intelligence, offering intuitive, efficient, and reliable methods for diagnosis, prediction, decision making, classification, troubleshooting, and data mining under uncertainty. Bayesian Networks and Influence Diagrams: A Guide to Construction and Analysis provides a comprehensive guide for practitioners who wish to understand, construct, and analyze intelligent systems for decision support based on probabilistic networks. Intended...
On B-spline framelets derived from the unitary extension principle
Shen, Zuowei
2011-01-01
Spline wavelet tight frames of Ron-Shen have been used widely in frame-based image analysis and restoration. However, apart from the tight frame property and the approximation order of the truncated series, few other properties of this family of spline wavelet tight frames are known. This paper presents a few new properties of this family that provide further understanding of it and, hopefully, give some indication of why it is efficient in image analysis and restoration. In particular, we present a recurrence formula for computing generators of higher-order spline wavelet tight frames from lower-order ones. We also represent each generator of spline wavelet tight frames as a certain order of derivative of some univariate box spline. With this, we further show that each generator of sufficiently high-order spline wavelet tight frames is close to the right order of derivative of a properly scaled Gaussian function. This leads to the result that the wavelet system generated by a finitely ma...
A Parallel Nonrigid Registration Algorithm Based on B-Spline for Medical Images
Directory of Open Access Journals (Sweden)
Xiaogang Du
2016-01-01
The nonrigid registration algorithm based on B-spline Free-Form Deformation (FFD) plays a key role and is widely applied in medical image processing owing to its flexibility and robustness. However, it requires a tremendous amount of computing time to obtain accurate registration results, especially for large amounts of medical image data. To address this issue, a parallel nonrigid registration algorithm based on B-splines is proposed in this paper. First, the Logarithm Squared Difference (LSD) is adopted as the similarity metric in the B-spline registration algorithm to improve registration precision. A parallel computing strategy and lookup tables (LUTs) are then created to reduce the complexity of the B-spline registration algorithm. As a result, the computing time of the three most time-consuming steps, B-spline interpolation, LSD computation, and the analytic gradient computation of the LSD, is efficiently reduced, since the B-spline registration algorithm employs the Nonlinear Conjugate Gradient (NCG) optimization method. Experimental results on registration quality and execution efficiency over large amounts of medical images show that our algorithm achieves better registration accuracy, in terms of the differences between the best deformation fields and ground truth, and a speedup of 17 times over the single-threaded CPU implementation, owing to the powerful parallel computing ability of the Graphics Processing Unit (GPU).
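The lookup-table idea can be illustrated with the four cubic B-spline weights that FFD evaluates at every voxel: for each quantized fractional offset the weights are precomputed once and reused. This is a minimal sketch under assumed names (the function, table size, and quantization are illustrative, not taken from the paper):

```python
import numpy as np

def cubic_bspline_weights(u):
    """The four cubic B-spline FFD weights for a fractional offset u in [0, 1)."""
    return np.array([
        (1 - u) ** 3 / 6.0,
        (3 * u ** 3 - 6 * u ** 2 + 4) / 6.0,
        (-3 * u ** 3 + 3 * u ** 2 + 3 * u + 1) / 6.0,
        u ** 3 / 6.0,
    ])

# LUT over 256 quantized offsets, computed once and reused for all voxels
LUT = {k: cubic_bspline_weights(k / 256.0) for k in range(256)}
```

The weights sum to one for any offset (partition of unity), which is the usual correctness check for such tables.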
Kiani, M A; Sim, K S; Nia, M E; Tso, C P
2015-05-01
A new technique based on cubic spline interpolation with Savitzky-Golay smoothing and a weighted least-squares error filter is developed for scanning electron microscope (SEM) images. A diversity of sample images is captured, and the performance is found to be better than that of the moving-average and standard median filters with respect to eliminating noise. The technique can be implemented efficiently on real-time SEM images, with all data needed for processing obtained from a single image. Noise in images, and particularly in SEM images, is undesirable. The combined technique is applied to single-image signal-to-noise ratio estimation and noise reduction for an SEM imaging system. This autocorrelation-based technique requires image details to be correlated over a few pixels, whereas the noise is assumed to be uncorrelated from pixel to pixel. The noise component is derived from the difference between the image autocorrelation at zero offset and the estimate of the corresponding original autocorrelation. In test cases involving different images, the efficiency of the developed noise reduction filter proved significantly better than that of the other methods. Noise can be reduced efficiently with an appropriate choice of scan rate from real-time SEM images, without introducing corruption or increasing scanning time.
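The Savitzky-Golay component of such a filter is a sliding least-squares polynomial fit; its weights can be derived directly from the pseudo-inverse of a small Vandermonde matrix. A minimal sketch (generic, not the authors' implementation):

```python
import numpy as np

def savgol_coeffs(window, order):
    """Least-squares polynomial smoothing weights for the window centre."""
    half = window // 2
    x = np.arange(-half, half + 1)
    A = np.vander(x, order + 1, increasing=True)  # columns 1, x, x^2, ...
    # row 0 of the pseudo-inverse gives the fitted polynomial value at x = 0
    return np.linalg.pinv(A)[0]

def savgol_smooth(y, window=5, order=2):
    """Smooth interior samples of y; the ends are left unfiltered."""
    c = savgol_coeffs(window, order)
    half = window // 2
    y = np.asarray(y, float)
    out = y.copy()
    for i in range(half, len(y) - half):
        out[i] = c @ y[i - half:i + half + 1]
    return out
```

For window 5 and quadratic order this reproduces the classic weights (-3, 12, 17, 12, -3)/35, and by construction it leaves any quadratic signal unchanged, which is what makes the filter feature-preserving.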
Smooth reference equations for slow vital capacity and flow-volume curve indexes.
Pistelli, F; Bottai, M; Viegi, G; Di Pede, F; Carrozzi, L; Baldacci, S; Pedreschi, M; Giuntini, C
2000-03-01
We derived reference values for slow vital capacity (VC) and flow-volume curve indexes (FVC, FEV(1), and flows) from the 1,185 tracings provided by 1,039 "normal" subjects who participated in one or both cross-sectional surveys of the Po River Delta study in 1980-1982 and in 1988-1991. Definition of "normal" was based on negative answers to questions on respiratory symptoms/diseases or recent infections, current/past tobacco smoking, and work exposure to noxious agents. Reference equations were derived separately by sex as linear regressions on body mass index (BMI = weight/height(2)), BMI-squared, height, height-squared, and age. Age entered all the models via natural cubic splines with two break points, except for the ratios FEV(1)/VC and FEV(1)/FVC. Random-effects models were applied to adjust for potential intrasubject correlation. BMI, along with height and age, appeared to be an important predictor, significantly associated with VC, FEV(1), FVC, FEV(1)/FVC, and PEF in both sexes, and with FEV(1)/VC and FEF(25-75) in females. Natural cubic splines provided smooth reference-equation curves (no "jumps" or "angled points") over the entire age span, unlike conventional reference equations. Thus, we recommend the use of smooth continuous equations for predicting lung function indexes, along with the inclusion of BMI in the equations.
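The natural cubic splines used for the age terms can be sketched in interpolating form: solve a tridiagonal system for the second derivatives with natural end conditions, then evaluate piecewise cubics. This is an illustrative interpolating version, not the regression-spline fit actually used in the study:

```python
import numpy as np

def natural_cubic_spline(x, y):
    """Return a callable natural cubic spline through the points (x, y)."""
    x = np.asarray(x, float); y = np.asarray(y, float)
    n = len(x) - 1
    h = np.diff(x)
    # linear system for the second derivatives M, with natural ends M0 = Mn = 0
    A = np.zeros((n + 1, n + 1)); b = np.zeros(n + 1)
    A[0, 0] = A[n, n] = 1.0
    for i in range(1, n):
        A[i, i - 1] = h[i - 1]
        A[i, i] = 2.0 * (h[i - 1] + h[i])
        A[i, i + 1] = h[i]
        b[i] = 6.0 * ((y[i + 1] - y[i]) / h[i] - (y[i] - y[i - 1]) / h[i - 1])
    M = np.linalg.solve(A, b)

    def s(t):
        i = int(np.clip(np.searchsorted(x, t) - 1, 0, n - 1))
        dx = t - x[i]
        c1 = (y[i + 1] - y[i]) / h[i] - h[i] * (2 * M[i] + M[i + 1]) / 6.0
        return (y[i] + c1 * dx + M[i] * dx ** 2 / 2.0
                + (M[i + 1] - M[i]) * dx ** 3 / (6.0 * h[i]))
    return s
```

Because the second derivatives vanish at the ends, the curve stays smooth across the whole range, the property the abstract highlights (no "jumps" or "angled points"); on linear data the spline reduces exactly to the line.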
MERGING DIGITAL SURFACE MODELS IMPLEMENTING BAYESIAN APPROACHES
Directory of Open Access Journals (Sweden)
H. Sadeq
2016-06-01
In this research, DSMs from different sources have been merged. The merging is based on a probabilistic model using a Bayesian approach. The data have been sourced from very high resolution satellite imagery sensors (e.g. WorldView-1 and Pleiades). A Bayesian approach is deemed preferable when the data obtained from the sensors are limited and many measurements are difficult or very costly to obtain; the lack of data can then be addressed by introducing a priori estimates. To infer the prior data, the roofs of the buildings are assumed to be smooth, and for that purpose local entropy has been implemented. In addition to the a priori estimates, GNSS RTK measurements have been collected in the field and are used as check points to assess the quality of the DSMs and to validate the merging result. The model has been applied in the West End of Glasgow, which contains different kinds of buildings, such as flat-roofed and hipped-roofed buildings. Both quantitative and qualitative methods have been employed to validate the merged DSM. The validation results show that the model successfully improved the quality of the DSMs, improving characteristics such as the roof surfaces and consequently leading to better representations. In addition, the developed model has been compared with the well-established maximum likelihood model, showing similar quantitative statistical results and better qualitative results. Although the proposed model has been applied to DSMs derived from satellite imagery, it can be applied to DSMs from any other source.
Merging Digital Surface Models Implementing Bayesian Approaches
Sadeq, H.; Drummond, J.; Li, Z.
2016-06-01
In this research, DSMs from different sources have been merged. The merging is based on a probabilistic model using a Bayesian approach. The data have been sourced from very high resolution satellite imagery sensors (e.g. WorldView-1 and Pleiades). A Bayesian approach is deemed preferable when the data obtained from the sensors are limited and many measurements are difficult or very costly to obtain; the lack of data can then be addressed by introducing a priori estimates. To infer the prior data, the roofs of the buildings are assumed to be smooth, and for that purpose local entropy has been implemented. In addition to the a priori estimates, GNSS RTK measurements have been collected in the field and are used as check points to assess the quality of the DSMs and to validate the merging result. The model has been applied in the West End of Glasgow, which contains different kinds of buildings, such as flat-roofed and hipped-roofed buildings. Both quantitative and qualitative methods have been employed to validate the merged DSM. The validation results show that the model successfully improved the quality of the DSMs, improving characteristics such as the roof surfaces and consequently leading to better representations. In addition, the developed model has been compared with the well-established maximum likelihood model, showing similar quantitative statistical results and better qualitative results. Although the proposed model has been applied to DSMs derived from satellite imagery, it can be applied to DSMs from any other source.
Institute of Scientific and Technical Information of China (English)
吴宪祥; 郭宝龙; 王娟
2009-01-01
A novel algorithm based on particle swarm optimization (PSO) of cubic splines is proposed for mobile robot path planning. The path is described by a string of cubic splines, so path planning is converted into parameter optimization of the cubic splines. PSO is introduced to find the optimal path owing to its fast convergence and global search character. Experimental results show that the proposed algorithm can quickly and effectively find a collision-free path among obstacles, and that the planned path is smooth, which is useful for robot motion control.
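The PSO component of such a planner can be sketched generically; the version below minimizes an arbitrary objective (here a simple test function rather than a spline-path cost, and all parameter values are illustrative assumptions):

```python
import random

def pso(f, dim, n_particles=20, iters=200, seed=1):
    """Minimize f over R^dim with a basic inertia-weight particle swarm."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]               # personal bests
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = f(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < gbest_val:
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val
```

In the path-planning setting, the decision vector would hold the spline control parameters and f would be a path cost combining length and obstacle penalties.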
Dynamic Batch Bayesian Optimization
Azimi, Javad; Fern, Xiaoli
2011-01-01
Bayesian optimization (BO) algorithms try to optimize an unknown function that is expensive to evaluate using a minimum number of evaluations/experiments. Most proposed BO algorithms are sequential, where only one experiment is selected at each iteration. This can be time-inefficient when each experiment takes a long time and more than one experiment can be run concurrently. On the other hand, requesting a fixed-size batch of experiments at each iteration causes performance inefficiency in BO compared to sequential policies. In this paper, we present an algorithm that requests a batch of experiments at each time step t, where the batch size p_t is determined dynamically at each step. Our algorithm is based on the observation that the experiments selected by the sequential policy can sometimes be almost independent of each other. Our algorithm identifies such scenarios and requests those experiments at the same time without degrading performance. We evaluate our proposed method us...
The Topological Effects of Smoothing.
Shafii, S; Dillard, S E; Hlawitschka, M; Hamann, B
2012-01-01
Scientific data sets generated by numerical simulations or experimental measurements often contain a substantial amount of noise. Smoothing the data removes noise but can have potentially drastic effects on the qualitative nature of the data, thereby influencing its characterization and visualization via topological analysis, for example. We propose a method to track topological changes throughout the smoothing process. As a preprocessing step, we oversmooth the data and collect a list of topological events, specifically the creation and destruction of extremal points. During rendering, the number of topological events can be selected by interactively manipulating a merging parameter. The effect that a specific amount of smoothing has on the topology of the data is illustrated using a topology-derived transfer function that relates region connectivity of the smoothed data to the original regions of the unsmoothed data. This approach enables visual as well as quantitative analysis of the topological effects of smoothing.
Smoothness in Binomial Edge Ideals
Directory of Open Access Journals (Sweden)
Hamid Damadi
2016-06-01
In this paper we study some geometric properties of the algebraic set associated to the binomial edge ideal of a graph, in particular its singularity and smoothness. Some of these algebraic sets are irreducible and some are reducible. If every irreducible component of the algebraic set is smooth, we call the graph edge smooth; otherwise it is called edge singular. We show that complete graphs are edge smooth and introduce two conditions such that a graph G is edge singular if and only if it satisfies them. It is then shown that cycles and most trees are edge singular. In addition, it is proved that complete bipartite graphs are edge smooth.
[Calculation of radioimmunochemical determinations by "spline approximation" (author's transl)].
Nolte, H; Mühlen, A; Hesch, R D; Pape, J; Warnecke, U; Jüppner, H
1976-06-01
A simplified method, based on "spline approximation", is reported for the calculation of the standard curves of radioimmunochemical determinations. The mathematical function can be manipulated with a pocket calculator, making the method available to a large number of users. It was shown that, in contrast to the usual procedures, optimal quality control can be achieved in the preparation of the standard curves and in the interpolation of unknown plasma samples. Recalculation of interpolated values from their own standard curve revealed an error of 4.9%, which would normally be an error of interpolation. The new method was compared with two established methods for 8 different radioimmunochemical determinations. The measured values of the standard curve were weighted, with a resulting quality control of these values which, according to their statistical evaluation, were more accurate than those of the other models (Ekins et al., Yalow et al. (1968), in: Radioisotopes in Medicine: in vitro studies (Hayes, R. L., Goswitz, F. A. & Murphy, B. E. P., eds.), USAEC, Oak Ridge; and Rodbard et al. (1971), in: Competitive Protein Binding Assays (Odell, W. D. & Daughaday, W. H., eds.), Lippincott, Philadelphia and Toronto). In contrast to these models, the described method makes no mathematical or kinetic preconditions with respect to the dose-response relationship. To achieve optimal reaction conditions, experimentally determined reaction data are preferable to model theories.
Micropolar Fluids Using B-spline Divergence Conforming Spaces
Sarmiento, Adel
2014-06-06
We discretized the two-dimensional linear momentum, microrotation, energy and mass conservation equations from micropolar fluid theory with the finite element method, creating divergence-conforming spaces based on B-spline basis functions to obtain pointwise divergence-free solutions [8]. Weak boundary conditions were imposed using Nitsche's method for tangential conditions, while normal conditions were imposed strongly. With exact mass conservation provided by the divergence-free formulation, we focused on evaluating the differences between micropolar fluids and conventional fluids, to show the advantages of using the micropolar fluid model to capture the features of complex fluids. Square and arc heat-driven cavities were solved as test cases. The parameters of the model and the Rayleigh number were varied for a better understanding of the system. The divergence-free formulation was used to guarantee an accurate solution of the flow. The formulation was implemented using the PetIGA framework as a basis, exploiting its parallel structures to achieve high scalability. The results for the square heat-driven cavity test case are in good agreement with those reported earlier.
Numerical Solution of the Poisson Equation by Quintic B-spline Interpolation (Bivariate Quintic B-spline Basis Functions on the Uniform Type-2 Partition and Their Applications)
Institute of Scientific and Technical Information of China (English)
张胜刚; 宋明威; 王仁宏; 李国荣; 唐晓; 刘启贵
2012-01-01
Multivariate splines have wide applications in approximation theory, computer aided geometric design (CAGD) and the finite element method. In 1975, Ren-Hong Wang established the basic theoretical framework for multivariate spline functions on arbitrary partitions, the so-called smoothing cofactor-conformality method. Owing to its broad applications in CAGD and elsewhere, Ren-Hong Wang in 1984 derived the dimension and a B-spline basis of the C^1 cubic spline space S_3^1(△_mn^(2)) on the uniform type-2 triangulation, which has been widely applied in CAGD and the numerical solution of differential equations. Building on the smoothing cofactor method, this paper analyzes the C^3 quintic spline space S_5^3(△_mn^(2)) on the uniform type-2 triangulation and gives its dimension and a group of B-spline basis functions, meeting the higher-order smoothness requirements arising in surface fitting and the numerical solution of differential equations. Based on this basis, a numerical scheme for the Poisson equation is proposed, and numerical examples verify the accuracy of the scheme.
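The kind of Poisson test problem such schemes are validated on can be illustrated with a standard second-order finite-difference solve in one dimension (deliberately not the paper's bivariate quintic B-spline scheme; this sketch only shows the validation pattern of solving and comparing against an exact solution):

```python
import numpy as np

def poisson_1d(f, n):
    """Solve u'' = f on (0, 1) with u(0) = u(1) = 0 by central differences."""
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    A = np.zeros((n - 1, n - 1))
    np.fill_diagonal(A, -2.0)        # main diagonal
    np.fill_diagonal(A[1:], 1.0)     # sub-diagonal
    np.fill_diagonal(A[:, 1:], 1.0)  # super-diagonal
    u = np.zeros(n + 1)
    u[1:n] = np.linalg.solve(A, h * h * f(x[1:n]))
    return x, u
```

With the manufactured solution u(x) = sin(pi x), so that f(x) = -pi^2 sin(pi x), the discrete solution converges at second order, and the numerical-example check in the abstract follows the same pattern with higher-order accuracy.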
Bayesian estimation of HIV-1 dynamics in vivo.
Ushakova, Anastasia; Pettersen, Frank Olav; Mæland, Arild; Lindqvist, Bo Henry; Kvale, Dag
2015-03-01
Statistical analysis of viral dynamics in HIV-1 infected patients undergoing structured treatment interruptions was performed using a novel model that accounts for treatment efficiency as well as total CD8+ T cell counts. A brief review of parameter estimates obtained in other studies is given, pointing to considerable variation in the estimated values. A Bayesian approach to parameter estimation was used with longitudinal measurements of CD4+ and CD8+ T cell counts and HIV RNA. We describe an estimation procedure that uses spline approximations of CD8+ T cell dynamics. This approach reduces the number of parameters that must be estimated and is especially helpful when the CD8+ T cell growth function has a delayed dependence on the past. Seven important parameters related to HIV-1 in-host dynamics were estimated, most of them treated as global parameters across the group of patients. The estimated values were mainly in keeping with estimates obtained in other reports, but our paper also introduces estimates of some new parameters which supplement current knowledge. The method was also tested on a simulated data set.
Application of the Piecewise Cubic B-spline Method Through Two Endpoints
Institute of Scientific and Technical Information of China (English)
王争争
2015-01-01
By introducing a constraint point P0 and a constant r, a piecewise cubic B-spline curve through two endpoints is constructed, and smooth-joining conditions at the junction points are derived. The method can be used to construct straight lines, triangles, quadrilaterals and egg-shaped curves, with jagged edges removed by fairing to obtain a visually pleasing result. Translation, scaling and rotation of the resulting figures are supported, with counterclockwise and clockwise rotations computed so as to cancel deviations; the shape-preserving effect is good. Closed curves are generated clockwise and their trajectory-point positions recorded, which simplifies computing relations between closed-curve objects in the plane and yields Boolean operation results. The method can also be used to construct spatial figures and achieves good colour-gradient effects.
Rolling Force and Rolling Moment in Spline Cold Rolling Using Slip-line Field Method
Institute of Scientific and Technical Information of China (English)
ZHANG Dawei; LI Yongtang; FU Jianhua; ZHENG Quangang
2009-01-01
Rolling force and rolling moment are prime process parameters of external spline cold rolling. However, precise theoretical formulae for the rolling force and rolling moment are still very few, and their determination depends on experience. In the present study, mathematical models of rolling force and rolling moment are established based on the stress field theory of slip-lines, and isotropic hardening is used to improve the yield criterion. A calculation program is developed in the MATLAB programming environment according to the established mathematical models. The rolling force and rolling moment can be predicted quickly via the calculation program, and the reliability of the models is validated by FEM. Within the ranges of spline module m = 0.5-1.5 mm, reference-circle pressure angle α = 30.0°-45.0°, and number of spline teeth Z = 19-54, the rolling force and rolling moment in the rolling process (excluding finishing rolling) are studied by means of a virtual orthogonal experiment design. The results indicate that the influences of module and number of spline teeth on the maximum rolling force and rolling moment are remarkable; when the reference-circle pressure angle is small, the spline module is large, and the number of spline teeth is small, the peak value of the rolling force may appear in the middle of the process; the peak value of the rolling moment appears in the middle of the process and then oscillates and weakens to a stable value. These results may provide guidelines for determining the motor power and designing the hydraulic system of the special machine, and provide a basis for further research on the precise forming process of external spline cold rolling.
Csébfalvi, Balázs
2010-01-01
In this paper, we demonstrate that quasi-interpolation of orders two and four can be efficiently implemented on the Body-Centered Cubic (BCC) lattice by using tensor-product B-splines combined with appropriate discrete prefilters. Unlike the nonseparable box-spline reconstruction previously proposed for the BCC lattice, the prefiltered B-spline reconstruction can utilize the fast trilinear texture-fetching capability of the recent graphics cards. Therefore, it can be applied for rendering BCC-sampled volumetric data interactively. Furthermore, we show that a separable B-spline filter can suppress the postaliasing effect much more isotropically than a nonseparable box-spline filter of the same approximation power. Although prefilters that make the B-splines interpolating on the BCC lattice do not exist, we demonstrate that quasi-interpolating prefiltered linear and cubic B-spline reconstructions can still provide similar or higher image quality than the interpolating linear box-spline and prefiltered quintic box-spline reconstructions, respectively.
Bayesian seismic AVO inversion
Energy Technology Data Exchange (ETDEWEB)
Buland, Arild
2002-07-01
A new linearized AVO inversion technique is developed in a Bayesian framework. The objective is to obtain posterior distributions for P-wave velocity, S-wave velocity and density. Distributions for other elastic parameters can also be assessed, for example acoustic impedance, shear impedance and P-wave to S-wave velocity ratio. The inversion algorithm is based on the convolutional model and a linearized weak contrast approximation of the Zoeppritz equation. The solution is represented by a Gaussian posterior distribution with explicit expressions for the posterior expectation and covariance, hence exact prediction intervals for the inverted parameters can be computed under the specified model. The explicit analytical form of the posterior distribution provides a computationally fast inversion method. Tests on synthetic data show that all inverted parameters were almost perfectly retrieved when the noise approached zero. With realistic noise levels, acoustic impedance was the best determined parameter, while the inversion provided practically no information about the density. The inversion algorithm has also been tested on a real 3-D dataset from the Sleipner Field. The results show good agreement with well logs but the uncertainty is high. The stochastic model includes uncertainties of both the elastic parameters, the wavelet and the seismic and well log data. The posterior distribution is explored by Markov chain Monte Carlo simulation using the Gibbs sampler algorithm. The inversion algorithm has been tested on a seismic line from the Heidrun Field with two wells located on the line. The uncertainty of the estimated wavelet is low. In the Heidrun examples the effect of including uncertainty of the wavelet and the noise level was marginal with respect to the AVO inversion results. We have developed a 3-D linearized AVO inversion method with spatially coupled model parameters where the objective is to obtain posterior distributions for P-wave velocity, S
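The "explicit expressions for the posterior expectation and covariance" of a linear Gaussian model d = Gm + e can be written down directly; the following generic sketch (names and shapes are illustrative, not Buland's implementation) computes them:

```python
import numpy as np

def gaussian_linear_posterior(G, d, mu, Sigma, Sigma_e):
    """Posterior of m ~ N(mu, Sigma) given data d = G m + e, e ~ N(0, Sigma_e).
    Returns the posterior mean and covariance, both available in closed form."""
    S = G @ Sigma @ G.T + Sigma_e          # data covariance
    K = Sigma @ G.T @ np.linalg.inv(S)     # gain matrix
    mu_post = mu + K @ (d - G @ mu)
    Sigma_post = Sigma - K @ G @ Sigma
    return mu_post, Sigma_post
```

Because the posterior is Gaussian with these closed-form moments, no sampling is needed at this stage, which is the source of the computational speed the abstract mentions; exact prediction intervals follow from the diagonal of the posterior covariance.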
Rectilinear Grid GPU Raycasting with B-Spline Hybrid Filtering
Institute of Scientific and Technical Information of China (English)
袁斌
2013-01-01
To render rectilinear grids quickly with high image quality, this paper presents a B-spline hybrid filtering method and a speed grid, both applied to GPU raycasting of rectilinear grids. A proposition on the sign of the derivative of the B-spline basis is proved, and it is further shown that the fast cubic filtering (S&H) method can incur significant error in some cases when computing derivatives of nonuniform B-splines. Accordingly, during ray integration the S&H method is used when its validity condition is satisfied and the B-spline basic equation otherwise. A proposition bounding the range of the B-spline derivatives is also proved, enabling gradient-magnitude modulation and the speed grid to be implemented on the GPU: during ray integration, gradient-magnitude modulation reveals interfaces between materials, while the speed grid skips empty space to accelerate rendering. Experimental results show that GPU raycasting with the proposed hybrid filtering is free of the artifacts caused by the generalized S&H algorithm, is faster than fixed-step GPU raycasting based on the B-spline basic equation, and can represent the true features of measured or simulated data when the data are smooth.
Côrtes, A.M.A.
2015-02-20
The recently introduced divergence-conforming B-spline discretizations allow the construction of smooth discrete velocity–pressure pairs for viscous incompressible flows that are at the same time inf-sup stable and pointwise divergence-free. When applied to discretized Stokes equations, these spaces generate a symmetric and indefinite saddle-point linear system. Krylov subspace methods are usually the most efficient procedures to solve such systems. One of such methods, for symmetric systems, is the Minimum Residual Method (MINRES). However, the efficiency and robustness of Krylov subspace methods is closely tied to appropriate preconditioning strategies. For the discrete Stokes system, in particular, block-diagonal strategies provide efficient preconditioners. In this article, we compare the performance of block-diagonal preconditioners for several block choices. We verify how the eigenvalue clustering promoted by the preconditioning strategies affects MINRES convergence. We also compare the number of iterations and wall-clock timings. We conclude that among the building blocks we tested, the strategy with relaxed inner conjugate gradients preconditioned with incomplete Cholesky provided the best results.
Maximum margin Bayesian network classifiers.
Pernkopf, Franz; Wohlmayr, Michael; Tschiatschek, Sebastian
2012-03-01
We present a maximum margin parameter learning algorithm for Bayesian network classifiers using a conjugate gradient (CG) method for optimization. In contrast to previous approaches, we maintain the normalization constraints on the parameters of the Bayesian network during optimization, i.e., the probabilistic interpretation of the model is not lost. This enables us to handle missing features in discriminatively optimized Bayesian networks. In experiments, we compare the classification performance of maximum margin parameter learning to conditional likelihood and maximum likelihood learning approaches. Discriminative parameter learning significantly outperforms generative maximum likelihood estimation for naive Bayes and tree augmented naive Bayes structures on all considered data sets. Furthermore, maximizing the margin dominates the conditional likelihood approach in terms of classification performance in most cases. We provide results for a recently proposed maximum margin optimization approach based on convex relaxation. While the classification results are highly similar, our CG-based optimization is computationally up to orders of magnitude faster. Margin-optimized Bayesian network classifiers achieve classification performance comparable to support vector machines (SVMs) using fewer parameters. Moreover, we show that unanticipated missing feature values during classification can be easily processed by discriminatively optimized Bayesian network classifiers, a case where discriminative classifiers usually require mechanisms to complete unknown feature values in the data first.
Bayesian Parallel Imaging With Edge-Preserving Priors
Raj, Ashish; Singh, Gurmeet; Zabih, Ramin; Kressler, Bryan; Wang, Yi; Schuff, Norbert; Weiner, Michael
2007-01-01
Existing parallel MRI methods are limited by a fundamental trade-off in that suppressing noise introduces aliasing artifacts. Bayesian methods with an appropriately chosen image prior offer a promising alternative; however, previous methods with spatial priors assume that intensities vary smoothly over the entire image, resulting in blurred edges. Here we introduce an edge-preserving prior (EPP) that instead assumes that intensities are piecewise smooth, and propose a new approach to efficiently compute its Bayesian estimate. The estimation task is formulated as an optimization problem that requires a non-convex objective function to be minimized in a space with thousands of dimensions. As a result, traditional continuous minimization methods cannot be applied. This optimization task is closely related to some problems in the field of computer vision for which discrete optimization methods have been developed in the last few years. We adapt these algorithms, which are based on graph cuts, to address our optimization problem. The results of several parallel imaging experiments on brain and torso regions performed under challenging conditions with high acceleration factors are shown and compared with the results of conventional sensitivity encoding (SENSE) methods. An empirical analysis indicates that the proposed method visually improves overall quality compared to conventional methods. PMID:17195165
Smooth analysis in Banach spaces
Hájek, Petr
2014-01-01
This book is about the subject of higher smoothness in separable real Banach spaces. It brings together several angles of view on polynomials, in both the finite- and infinite-dimensional settings. A rather thorough and systematic account of the more recent results, including the authors' own work, is also given. The book revolves around two broad questions: What is the best smoothness of a given Banach space, and what are its structural consequences? How large is the supply of smooth functions, in the sense of approximating continuous functions in the uniform topology, i.e., how does the Stone-Weierstrass theorem generalize?
Directory of Open Access Journals (Sweden)
Scott W. Keith
2014-09-01
This paper details the design, evaluation, and implementation of a framework for detecting and modeling nonlinearity between a binary outcome and a continuous predictor variable, adjusted for covariates, in complex samples. The framework provides familiar-looking parameterizations of output in terms of linear slope coefficients and odds ratios. Estimation methods focus on maximum likelihood optimization of piecewise linear free-knot splines formulated as B-splines. Correctly specifying the optimal number and positions of the knots improves the model, but is marked by computational intensity and numerical instability. Our inference methods utilize both parametric and nonparametric bootstrapping. Unlike other nonlinear modeling packages, this framework is designed to incorporate the multistage survey sample designs common to nationally representative datasets. We illustrate the approach and evaluate its performance in specifying the correct number of knots under various conditions with an example using body mass index (BMI; kg/m2) and the complex multistage sampling design from the Third National Health and Nutrition Examination Survey to simulate binary mortality outcomes having realistic nonlinear sample-weighted risk associations with BMI. BMI and mortality data provide a particularly apt example and area of application, since BMI is commonly recorded in large health surveys with complex designs, often categorized for modeling, and nonlinearly related to mortality. When complex sample design considerations were ignored, our method was generally similar to or more accurate than two common model selection procedures, Schwarz's Bayesian information criterion (BIC) and Akaike's information criterion (AIC), in terms of selecting the correct number of knots. Our approach provided accurate knot selections when complex sampling weights were incorporated, while AIC and BIC were not effective under these conditions.
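The knot-count selection step can be sketched, without the survey-weighting that is central to the paper, by fitting cubic B-splines with increasing numbers of interior knots (placed at sample quantiles, a common default) and choosing the count that minimizes BIC. The data and knot range here are illustrative assumptions:

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0.0, 10.0, 300))
y = np.sin(x) + rng.normal(0.0, 0.3, x.size)

def bic(n_knots):
    # interior knots at sample quantiles; k=3 gives a cubic B-spline
    t = np.quantile(x, np.linspace(0, 1, n_knots + 2)[1:-1])
    s = LSQUnivariateSpline(x, y, t, k=3)
    rss = float(np.sum((s(x) - y) ** 2))
    p = n_knots + 4                      # number of B-spline coefficients
    return x.size * np.log(rss / x.size) + p * np.log(x.size)

best = min(range(1, 11), key=bic)
print(best)   # knot count with the lowest BIC
```

Replacing the `log(n)` penalty factor with 2 turns the criterion into AIC, the other selection procedure the paper compares against.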
Bayesian Methods and Universal Darwinism
Campbell, John
2010-01-01
Bayesian methods since the time of Laplace have been understood by their practitioners as closely aligned to the scientific method. Indeed a recent champion of Bayesian methods, E. T. Jaynes, titled his textbook on the subject Probability Theory: the Logic of Science. Many philosophers of science including Karl Popper and Donald Campbell have interpreted the evolution of Science as a Darwinian process consisting of a 'copy with selective retention' algorithm abstracted from Darwin's theory of Natural Selection. Arguments are presented for an isomorphism between Bayesian Methods and Darwinian processes. Universal Darwinism, as the term has been developed by Richard Dawkins, Daniel Dennett and Susan Blackmore, is the collection of scientific theories which explain the creation and evolution of their subject matter as due to the operation of Darwinian processes. These subject matters span the fields of atomic physics, chemistry, biology and the social sciences. The principle of Maximum Entropy states that system...
Attention in a bayesian framework
DEFF Research Database (Denmark)
Whiteley, Louise Emma; Sahani, Maneesh
2012-01-01
The behavioral phenomena of sensory attention are thought to reflect the allocation of a limited processing resource, but there is little consensus on the nature of the resource or why it should be limited. Here we argue that a fundamental bottleneck emerges naturally within Bayesian models of perception, and use this observation to frame a new computational account of the need for, and action of, attention, unifying diverse attentional phenomena in a way that goes beyond previous inferential, probabilistic and Bayesian models. Attentional effects are most evident in cluttered environments, and include both selective phenomena, where attention is invoked by cues that point to particular stimuli, and integrative phenomena, where attention is invoked dynamically by endogenous processing. However, most previous Bayesian accounts of attention have focused on describing relatively simple experimental...
Probability biases as Bayesian inference
Directory of Open Access Journals (Sweden)
Andre; C. R. Martins
2006-11-01
In this article, I show how several observed biases in human probabilistic reasoning can be partially explained as good heuristics for making inferences in an environment where probabilities have uncertainties associated with them. Previous results show that the weight functions and the observed violations of coalescing and stochastic dominance can be understood from a Bayesian point of view. We review those results and argue that Bayesian methods should also be part of the explanation behind other known biases. That means that, although the observed behaviors are still errors, they can be understood as adaptations to the solution of real-life problems. Heuristics that allow fast evaluations and mimic a Bayesian inference would be an evolutionary advantage, since they would give us an efficient way of making decisions. In that sense, it should be no surprise that humans reason with probability in the way that has been observed.
Virtual Vector Machine for Bayesian Online Classification
Minka, Thomas P; Yuan,; Qi,
2012-01-01
In a typical online learning scenario, a learner is required to process a large data stream using a small memory buffer. Such a requirement is usually in conflict with a learner's primary pursuit of prediction accuracy. To address this dilemma, we introduce a novel Bayesian online classification algorithm, called the virtual vector machine, which allows prediction accuracy to be traded off smoothly against memory size. The virtual vector machine summarizes the information contained in the preceding data stream by a Gaussian distribution over the classification weights plus a constant number of virtual data points. The virtual data points are designed to add extra non-Gaussian information about the classification weights. To maintain the constant number of virtual points, the virtual vector machine adds the current real data point into the virtual point set, merges the two most similar virtual points into a new virtual point, or deletes a virtual point that is far from the decision boundary. The info...
Dominant point detecting based non-uniform B-spline approximation for grain contour
Institute of Scientific and Technical Information of China (English)
ZHAO XiuYang; YIN YanSheng; YANG Bo
2007-01-01
Three-dimensional reconstruction from serial sections has been used in the last decade to obtain information concerning three-dimensional microstructural geometry. One of the crucial steps of three-dimensional reconstruction is obtaining compact and fair grain contours. Starting from closed raw contours of ceramic composite grains obtained using wavelets and level sets, an adaptive method is adopted for the polygonal approximation of the digitized raw contours. Instead of setting a fixed length of support region in advance, the novel method computes a suitable length of support region for each point to find the best estimated curvature. The dominant points are identified as the points with locally maximal estimated curvature. Periodic closed B-spline approximation is used to find the most compact B-spline grain-boundary contours within the given tolerance. A flexible distance selection approach is adopted to obtain a common knot vector for the serial contours consisting of the fewest knots that still contain enough degrees of freedom to guarantee the existence of a B-spline curve interpolating each contour. Finally, a B-spline surface interpolating the serial contours is generated via B-spline surface skinning.
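The periodic closed B-spline approximation step can be sketched with SciPy's `splprep`, where `per=1` requests a closed (periodic) spline and the smoothing factor `s` trades knot count against tolerance, as in the compactness criterion above. The contour here is synthetic, standing in for a grain boundary:

```python
import numpy as np
from scipy.interpolate import splev, splprep

# synthetic closed contour: a circle with five bumps
theta = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
r = 1.0 + 0.1 * np.cos(5.0 * theta)
x = np.r_[r * np.cos(theta), r[0] * np.cos(theta[0])]   # repeat first point
y = np.r_[r * np.sin(theta), r[0] * np.sin(theta[0])]   # to close the curve

# periodic B-spline approximation; larger s -> fewer knots (more compact)
tck, u = splprep([x, y], per=1, s=1e-4)
x0, y0 = splev(0.0, tck)
x1, y1 = splev(1.0, tck)
print(len(tck[0]))   # number of knots actually used; curve closes at u=0 and u=1
```

Skinning a B-spline surface through serial contours then requires the contours to share a common knot vector, which is why the paper merges the per-contour knot vectors first.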
Approximating Spline filter: New Approach for Gaussian Filtering in Surface Metrology
Directory of Open Access Journals (Sweden)
Hao Zhang
2009-10-01
This paper presents a new spline filter, named the approximating spline filter, for surface metrology. The purpose is to provide a new approach to the Gaussian filter and to evaluate the characteristics of an engineering surface more accurately and comprehensively. First, the configuration of the approximating spline filter is investigated, showing that this filter inherits all the merits of an ordinary spline filter, e.g., no phase distortion and no end distortion. Then the selection of the approximating coefficient is discussed, which establishes an important property of this filter: convergence to the Gaussian filter. The maximum approximation deviation between them can be controlled below 4.36%, and decreases to less than 1% when the filter is cascaded. When extended to a two-dimensional (2D) filter, the transmission deviation lies within -0.63% to +1.48%. It is shown that the approximating spline filter not only achieves the transmission characteristic of the Gaussian filter but also alleviates the end effect on a data sequence. The whole computational procedure is illustrated and applied to a workpiece to acquire the mean line, and to a simulated surface to acquire the mean surface. These experiments indicate that the filtering algorithm takes only 8 ms for 11200 profile points and 2.3 s for 2000 × 2000 form data.
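As a rough illustration of the spline-filter family this paper belongs to (not the approximating spline filter itself), a profile mean line can be computed by solving a penalized least-squares system with a second-difference penalty; the cutoff-to-penalty mapping below is an assumption for illustration only:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500
t = np.linspace(0.0, 10.0, n)
y = np.sin(t) + 0.05 * rng.standard_normal(n)   # profile = form + roughness

dx = t[1] - t[0]
D2 = np.diff(np.eye(n), n=2, axis=0) / dx**2    # second-difference operator
cutoff = 2.5                                    # cutoff wavelength (units of t)
alpha = cutoff / (2.0 * np.pi)                  # assumed penalty scale
# mean line m solves (I + alpha^4 * D2^T D2) m = y
mean_line = np.linalg.solve(np.eye(n) + alpha**4 * (D2.T @ D2), y)
print(np.std(y - mean_line))                    # extracted roughness component
```

Wavelengths much longer than the cutoff pass nearly unchanged, shorter ones are suppressed, mimicking the low-pass transmission characteristic the paper compares against the Gaussian filter.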
Directory of Open Access Journals (Sweden)
Neng Wan
2014-01-01
In view of the poor geometric adaptability of the spline element method, a geometrically precise spline method, which uses rational Bézier patches to represent the solution domain, is proposed for the two-dimensional viscous incompressible Navier-Stokes equations. Besides fewer unknowns, higher accuracy, and better computational efficiency, it offers the advantages of isogeometric analysis: exact representation of the object boundary and the unification of the geometric and analysis models. The selection of the B-spline basis functions and the grid definition are studied, and a stable discretization scheme satisfying the inf-sup condition is proposed: the degree of the spline functions approximating the velocity field is one order higher than that approximating the pressure field, and these functions are defined on a once-refined grid. The Dirichlet boundary conditions are imposed weakly through Nitsche's variational principle, owing to the lack of interpolation properties of the B-spline basis functions. Finally, the validity of the proposed method is verified with several examples.
Gearbox Reliability Collaborative Analytic Formulation for the Evaluation of Spline Couplings
Energy Technology Data Exchange (ETDEWEB)
Guo, Y.; Keller, J.; Errichello, R.; Halse, C.
2013-12-01
Gearboxes in wind turbines have not been achieving their expected design life; however, they commonly meet and exceed the design criteria specified in current standards in the gear, bearing, and wind turbine industry as well as third-party certification criteria. The cost of gearbox replacements and rebuilds, as well as the down time associated with these failures, has elevated the cost of wind energy. The National Renewable Energy Laboratory (NREL) Gearbox Reliability Collaborative (GRC) was established by the U.S. Department of Energy in 2006; its key goal is to understand the root causes of premature gearbox failures and improve their reliability using a combined approach of dynamometer testing, field testing, and modeling. As part of the GRC program, this paper investigates the design of the spline coupling often used in modern wind turbine gearboxes to connect the planetary and helical gear stages. Aside from transmitting the driving torque, another common function of the spline coupling is to allow the sun to float between the planets. The amount the sun can float is determined by the spline design and the sun shaft flexibility subject to the operational loads. Current standards address spline coupling design requirements in varying detail. This report provides additional insight beyond these current standards to quickly evaluate spline coupling designs.
Directory of Open Access Journals (Sweden)
Bush William S
2009-12-01
Background: Gene-centric analysis tools for genome-wide association study data are being developed both to annotate single-locus statistics and to prioritize or group single nucleotide polymorphisms (SNPs) prior to analysis. These approaches require knowledge about the relationships between SNPs on a genotyping platform and genes in the human genome. SNPs in the genome can represent broader genomic regions via linkage disequilibrium (LD), and population-specific patterns of LD can be exploited to generate a data-driven map of SNPs to genes. Methods: In this study, we implemented LD-Spline, a database routine that defines the genomic boundaries a particular SNP represents using linkage disequilibrium statistics from the International HapMap Project. We compared the LD-Spline haplotype-block partitioning approach to the four-gamete rule and the approach of Gabriel et al. using simulated data; in addition, we processed two commonly used genome-wide association study platforms. Results: We illustrate that LD-Spline performs comparably to the four-gamete rule and the approach of Gabriel et al.; however, as a SNP-centric approach, LD-Spline has the added benefit of systematically identifying a genomic boundary for each SNP, where the global block-partitioning approaches may falter due to sampling variation in LD statistics. Conclusion: LD-Spline is an integrated database routine that quickly and effectively defines the genomic region marked by a SNP using linkage disequilibrium, with a SNP-centric block definition algorithm.
Bayesian Missile System Reliability from Point Estimates
2014-10-28
We use the Maximum Entropy Principle (MEP) to convert point estimates into probability distributions to be used as priors for Bayesian reliability analysis of missile data, and illustrate this approach by applying the priors to a Bayesian reliability model of a missile system. Subject terms: priors, Bayesian, missile.
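One common MEP construction, sketched here under the assumption that only a mean reliability estimate on [0, 1] is available: the maximum-entropy density on [0, 1] with a fixed mean is a truncated exponential p(x) ∝ exp(λx), and λ is found by root-finding. This is purely illustrative and does not reproduce the report's models:

```python
import numpy as np
from scipy.optimize import brentq

def mep_rate(m):
    """Rate lambda of the max-entropy density p(x) ~ exp(lambda*x) on [0, 1]
    whose mean equals the point estimate m (valid here for 0.5 < m < 1)."""
    mean = lambda lam: np.exp(lam) / (np.exp(lam) - 1.0) - 1.0 / lam
    return brentq(lambda lam: mean(lam) - m, 1e-3, 200.0)

lam = mep_rate(0.9)                      # point estimate 0.9 -> prior shape
x = np.linspace(0.0, 1.0, 200001)
p = np.exp(lam * x)
p /= p.sum() * (x[1] - x[0])             # normalize on the grid
prior_mean = (x * p).sum() * (x[1] - x[0])
print(lam, prior_mean)                   # prior mean recovers 0.9
```

The resulting density can then serve as the prior over a component's success probability in a Bayesian reliability model.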
SMOOTHING BY CONVEX QUADRATIC PROGRAMMING
Institute of Scientific and Technical Information of China (English)
Bing-sheng He; Yu-mei Wang
2005-01-01
In this paper, we study the relaxed smoothing problems with general closed convex constraints. It is pointed out that such problems can be converted to a convex quadratic minimization problem for which there are good programs in software libraries.
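A minimal instance of the conversion described above: smoothing a noisy signal under box constraints (a simple closed convex set), written as a bound-constrained least-squares problem, i.e., a convex QP, and handed to a library solver. The data and constants are illustrative:

```python
import numpy as np
from scipy.optimize import lsq_linear

# Smoothing: min ||x - y||^2 + lam * ||D x||^2  subject to  -0.8 <= x <= 0.8,
# rewritten as min ||A x - b||^2 with A = [I; sqrt(lam) D], b = [y; 0].
rng = np.random.default_rng(2)
n, lam = 100, 5.0
t = np.linspace(0.0, 1.0, n)
y = np.sin(2 * np.pi * t) + rng.normal(0.0, 0.2, n)

D = np.diff(np.eye(n), n=2, axis=0)            # second-difference operator
A = np.vstack([np.eye(n), np.sqrt(lam) * D])
b = np.concatenate([y, np.zeros(n - 2)])
res = lsq_linear(A, b, bounds=(-0.8, 0.8))     # convex QP with box constraints
x = res.x
print(res.success, x.min(), x.max())
```

General closed convex constraints beyond boxes need a generic QP solver, but the objective keeps exactly this stacked least-squares structure.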
Perception, illusions and Bayesian inference.
Nour, Matthew M; Nour, Joseph M
2015-01-01
Descriptive psychopathology makes a distinction between veridical perception and illusory perception. In both cases a perception is tied to a sensory stimulus, but in illusions the perception is of a false object. This article re-examines this distinction in light of new work in theoretical and computational neurobiology, which views all perception as a form of Bayesian statistical inference that combines sensory signals with prior expectations. Bayesian perceptual inference can solve the 'inverse optics' problem of veridical perception and provides a biologically plausible account of a number of illusory phenomena, suggesting that veridical and illusory perceptions are generated by precisely the same inferential mechanisms.
Bayesian test and Kuhn's paradigm
Institute of Scientific and Technical Information of China (English)
Chen Xiaoping
2006-01-01
Kuhn's theory of paradigm reveals a pattern of scientific progress in which normal science alternates with scientific revolution. But Kuhn greatly underrated the function of scientific testing in his pattern, because he focused all his attention on the hypothetico-deductive schema instead of the Bayesian schema. This paper employs the Bayesian schema to re-examine Kuhn's theory of paradigm, to uncover its logical and rational components, and to illustrate the tensional structure of logic and belief, rationality and irrationality, in the process of scientific revolution.
3D Bayesian contextual classifiers
DEFF Research Database (Denmark)
Larsen, Rasmus
2000-01-01
We extend a series of multivariate Bayesian 2-D contextual classifiers to 3-D by specifying a simultaneous Gaussian distribution for the feature vectors as well as a prior distribution of the class variables of a pixel and its 6 nearest 3-D neighbours.
Wetting on smooth micropatterned defects
Debuisson, Damien; Dufour, Renaud; Senez, Vincent; Arscott, Steve
2011-01-01
We develop a model which predicts the contact angle hysteresis introduced by smooth micropatterned defects. The defects are modeled by a smooth function and the contact angle hysteresis is explained using a tangent line solution. When the liquid micro-meniscus touches both sides of the defect simultaneously, depinning of the contact line occurs. The defects are fabricated using a photoresist and experimental results confirm the model. An important point is that the model is scale-independent,...
A Bayesian Nonparametric Approach to Test Equating
Karabatsos, George; Walker, Stephen G.
2009-01-01
A Bayesian nonparametric model is introduced for score equating. It is applicable to all major equating designs, and has advantages over previous equating models. Unlike the previous models, the Bayesian model accounts for positive dependence between distributions of scores from two tests. The Bayesian model and the previous equating models are…
Bayesian Model Averaging for Propensity Score Analysis
Kaplan, David; Chen, Jianshen
2013-01-01
The purpose of this study is to explore Bayesian model averaging in the propensity score context. Previous research on Bayesian propensity score analysis does not take into account model uncertainty. In this regard, an internally consistent Bayesian framework for model building and estimation must also account for model uncertainty. The…
Bayesian networks and food security - An introduction
Stein, A.
2004-01-01
This paper gives an introduction to Bayesian networks. Networks are defined and put into a Bayesian context. Directed acyclic graphs play a crucial role here. Two simple examples from food security are addressed. Possible uses of Bayesian networks for implementation and further use in decision support are discussed.
Plug & Play object oriented Bayesian networks
DEFF Research Database (Denmark)
Bangsø, Olav; Flores, J.; Jensen, Finn Verner
2003-01-01
Object oriented Bayesian networks have proven themselves useful in recent years. The idea of applying an object oriented approach to Bayesian networks has extended their scope to larger domains that can be divided into autonomous but interrelated entities. Object oriented Bayesian networks have b...
Exotic smoothness and quantum gravity
Energy Technology Data Exchange (ETDEWEB)
Asselmeyer-Maluga, T, E-mail: torsten.asselmeyer-maluga@dlr.d [German Aerospace Center, Berlin, Germany and Loyola University, New Orleans, LA (United States)
2010-08-21
Since the first work on exotic smoothness in physics, it was folklore to assume a direct influence of exotic smoothness on quantum gravity. Thus, the negative result of Duston (2009 arXiv:0911.4068) was a surprise. A closer look into the semi-classical approach uncovered the implicit assumption of a close connection between geometry and smoothness structure. But both structures, geometry and smoothness, are independent of each other. In this paper we calculate the 'smoothness structure' part of the path integral in quantum gravity, assuming that the 'sum over geometries' is already given. For that purpose we use the knot surgery of Fintushel and Stern applied to the class E(n) of elliptic surfaces. We mainly focus our attention on the K3 surface E(2). We then assume that every exotic smoothness structure of the K3 surface can be generated by knot or link surgery in the manner of Fintushel and Stern. The results are applied to the calculation of expectation values. Here we discuss two observables, volume and Wilson loop, for the construction of an exotic 4-manifold using the knot 5_2 and the Whitehead link Wh. By using Mostow rigidity, we obtain a topological contribution to the expectation value of the volume. Furthermore, we obtain a justification of area quantization.
Poor-data and data-poor species stock assessment using a Bayesian hierarchical approach.
Jiao, Yan; Cortés, Enric; Andrews, Kate; Guo, Feng
2011-10-01
Appropriate inference for stocks or species with low-quality data (poor data) or limited data (data poor) is extremely important. Hierarchical Bayesian methods are especially applicable to small-area, small-sample-size estimation problems because they allow poor-data species to borrow strength from species with good-quality data. We used a hammerhead shark complex as an example to investigate the advantages of using hierarchical Bayesian models in assessing the status of poor-data and data-poor exploited species. The hammerhead shark complex (Sphyrna spp.) along the Atlantic and Gulf of Mexico coasts of the United States is composed of three species: the scalloped hammerhead (S. lewini), the great hammerhead (S. mokarran), and the smooth hammerhead (S. zygaena) sharks. The scalloped hammerhead comprises 70-80% of the catch and has catch and relative abundance data of good quality, whereas great and smooth hammerheads have relative abundance indices that are both limited and of low quality presumably because of low stock density and limited sampling. Four hierarchical Bayesian state-space surplus production models were developed to simulate variability in population growth rates, carrying capacity, and catchability of the species. The results from the hierarchical Bayesian models were considerably more robust than those of the nonhierarchical models. The hierarchical Bayesian approach represents an intermediate strategy between traditional models that assume different population parameters for each species and those that assume all species share identical parameters. Use of the hierarchical Bayesian approach is suggested for future hammerhead shark stock assessments and for modeling fish complexes with species-specific data, because the poor-data species can borrow strength from the species with good data, making the estimation more stable and robust.
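The "borrowing strength" idea can be illustrated with a toy empirical-Bayes shrinkage of per-species trend estimates toward a complex-wide mean. All numbers below are hypothetical, and this simple normal-normal shrinkage is a stand-in for the paper's hierarchical state-space surplus production models, not a reimplementation:

```python
import numpy as np

obs = {                            # species: (raw estimate, estimate variance)
    "scalloped": (0.12, 0.0004),   # good data -> small variance
    "great":     (0.30, 0.01),     # limited data
    "smooth":    (0.02, 0.02),     # poorest data
}
est = np.array([v[0] for v in obs.values()])
var = np.array([v[1] for v in obs.values()])

mu = np.average(est, weights=1.0 / var)        # shared (complex-level) mean
tau2 = max(np.var(est) - var.mean(), 1e-6)     # crude between-species variance
# posterior-mean style shrinkage: poor-data species move toward mu
shrunk = (tau2 * est + var * mu) / (tau2 + var)
for name, s in zip(obs, shrunk):
    print(name, round(float(s), 3))
```

The good-data species barely moves, while the poor-data species are pulled toward the shared mean, which is exactly the stabilizing effect the hierarchical models exploit.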
Smooth quantum gravity: Exotic smoothness and Quantum gravity
Asselmeyer-Maluga, Torsten
2016-01-01
Over the last two decades, many unexpected relations between exotic smoothness, e.g. exotic $\\mathbb{R}^{4}$, and quantum field theory were found. Some of these relations are rooted in a relation to superstring theory and quantum gravity. Therefore one would expect that exotic smoothness is directly related to the quantization of general relativity. In this article we will support this conjecture and develop a new approach to quantum gravity called \\emph{smooth quantum gravity} by using smooth 4-manifolds with an exotic smoothness structure. In particular we discuss the appearance of a wildly embedded 3-manifold which we identify with a quantum state. Furthermore, we analyze this quantum state by using foliation theory and relate it to an element in an operator algebra. Then we describe a set of geometric, non-commutative operators, the skein algebra, which can be used to determine the geometry of a 3-manifold. This operator algebra can be understood as a deformation quantization of the classical Poisson alge...
Schmidt, Paul; Schmid, Volker J; Gaser, Christian; Buck, Dorothea; Bührlen, Susanne; Förschler, Annette; Mühlau, Mark
2013-01-01
Aiming at iron-related T2-hypointensity, which is related to normal aging and neurodegenerative processes, we here present two practicable approaches, based on Bayesian inference, for preprocessing and statistical analysis of a complex set of structural MRI data. In particular, Markov Chain Monte Carlo methods were used to simulate posterior distributions. First, we rendered a segmentation algorithm that uses outlier detection based on model checking techniques within a Bayesian mixture model. Second, we rendered an analytical tool comprising a Bayesian regression model with smoothness priors (in the form of Gaussian Markov random fields) mitigating the necessity to smooth data prior to statistical analysis. For validation, we used simulated data and MRI data of 27 healthy controls (age: [Formula: see text]; range, [Formula: see text]). We first observed robust segmentation of both simulated T2-hypointensities and gray-matter regions known to be T2-hypointense. Second, simulated data and images of segmented T2-hypointensity were analyzed. We found not only robust identification of simulated effects but also a biologically plausible age-related increase of T2-hypointensity primarily within the dentate nucleus but also within the globus pallidus, substantia nigra, and red nucleus. Our results indicate that fully Bayesian inference can successfully be applied for preprocessing and statistical analysis of structural MRI data.
Research on Quadratic Spline Interpolation
Institute of Scientific and Technical Information of China (English)
刘为; 高毅; 高尚
2011-01-01
Spline techniques are widely used in CAD, CAM, and computer graphics systems. The conditions for quadratic spline interpolation are discussed first. Solutions of the quadratic spline interpolation problem are then given for five types of boundary conditions. Finally, the computation methods are illustrated by examples.
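For one of the simplest cases, quadratic spline interpolation with SciPy's default knot placement (not one of the five boundary conditions above, which are specific to the paper), a degree-2 spline reproduces a quadratic polynomial exactly:

```python
import numpy as np
from scipy.interpolate import make_interp_spline

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = x ** 2                         # a quadratic is in every quadratic spline space
s = make_interp_spline(x, y, k=2)  # k=2 -> quadratic B-spline interpolant
print(float(s(2.5)))               # 6.25, i.e. 2.5**2
```

With even degrees SciPy places the interior knots at data midpoints, which absorbs the extra condition that the explicit boundary-condition formulations handle directly.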
Susanti, D.; Hartini, E.; Permana, A.
2017-01-01
Growing competition between companies in Indonesia means that every company needs proper planning in order to win the competition with other companies. One way to support such planning is to forecast car sales for the next few periods, so that the inventory of cars to be sold is proportional to the number of cars needed. To obtain accurate forecasts, one applicable method is Adaptive Spline Threshold Autoregression (ASTAR). This discussion therefore focuses on using the ASTAR method to forecast the volume of car sales at PT. Srikandi Diamond Motors from time series data. In this research, the forecasts produced with the ASTAR method are approximately correct.
Short-Term Wind Speed Forecast Based on B-Spline Neural Network Optimized by PSO
Directory of Open Access Journals (Sweden)
Zhongqiang Wu
2015-01-01
Considering the randomness and volatility of wind, a method based on a B-spline neural network optimized by particle swarm optimization is proposed to predict short-term wind speed. The B-spline neural network can flexibly change the division of the input space and the definition of the basis functions. For any input, only a few outputs of the hidden layer are nonzero, the outputs are simple, and the convergence speed is fast, but the network easily falls into local minima. The traditional way of dividing the input space is arbitrary and influences the final prediction accuracy. Particle swarm optimization is adopted to solve this problem by optimizing the nodes. Simulation results show that the method has higher prediction accuracy than a traditional B-spline neural network or a BP neural network.
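The particle swarm component can be illustrated in isolation. The sketch below is a generic global-best PSO minimizing a sphere function, not the paper's B-spline network training; all parameter values (inertia, acceleration weights, bounds) are illustrative assumptions:

```python
import numpy as np

def pso(f, dim, n_particles=30, iters=200, seed=0):
    """Minimal global-best particle swarm optimizer."""
    rng = np.random.default_rng(seed)
    w, c1, c2 = 0.7, 1.5, 1.5                     # inertia, cognitive, social weights
    pos = rng.uniform(-5, 5, (n_particles, dim))
    vel = np.zeros((n_particles, dim))
    pbest = pos.copy()                             # personal bests
    pbest_val = np.array([f(p) for p in pos])
    g = pbest[pbest_val.argmin()].copy()           # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
        pos = pos + vel
        vals = np.array([f(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        g = pbest[pbest_val.argmin()].copy()
    return g, float(pbest_val.min())

best, val = pso(lambda x: float(np.sum(x**2)), dim=3)
```

In the paper's setting, `f` would instead score a candidate placement of the B-spline network's input-space nodes by prediction error.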
Evaluation of solid–liquid interface profile during continuous casting by a spline based formalism
Indian Academy of Sciences (India)
S K Das
2001-08-01
A numerical framework comprising a cubic-spline-based collocation method has been applied to determine the solid–liquid interface profile (solidification front) during the continuous casting process. The basis function chosen for the collocation algorithm is a cubic spline interpolation function. An iterative solution methodology has been developed to track the interface profile for a copper strand of rectangular transverse section at different casting speeds. It is based on an enthalpy conservation criterion at the solidification interface, and the trend is found to be in good agreement with the available information in the literature, although a point-to-point mapping of the profile is not practically realizable. The spline-based collocation algorithm is found to be a reasonably efficient tool for tracking the solidification front, as a good spatial derivative approximation can be achieved with a simple modelling philosophy that is numerically robust and computationally cost-effective.
Error Estimates Derived from the Data for Least-Squares Spline Fitting
Energy Technology Data Exchange (ETDEWEB)
Jerome Blair
2007-06-25
The use of least-squares fitting by cubic splines for the purpose of noise reduction in measured data is studied. Splines with variable mesh size are considered. The error, the difference between the input signal and its estimate, is divided into two sources: the R-error, which depends only on the noise and increases with decreasing mesh size, and the F-error, which depends only on the signal and decreases with decreasing mesh size. The estimation of both errors as a function of time is demonstrated. The R-error estimation requires knowledge of the statistics of the noise and uses well-known methods. The primary contribution of the paper is a method for estimating the F-error that requires no prior knowledge of the signal except that it has four derivatives. It is calculated from the difference between two different spline fits to the data and is illustrated with Monte Carlo simulations and with an example.
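The key idea, estimating the signal-dependent error from the difference of two spline fits with different mesh sizes, can be sketched as follows. The signal, noise level, and knot counts below are hypothetical stand-ins, using SciPy's least-squares spline fitter:

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 400)
signal = np.sin(2 * np.pi * t)
data = signal + 0.05 * rng.standard_normal(t.size)

def ls_spline(x, y, n_knots):
    # Least-squares cubic spline with n_knots uniform interior knots
    knots = np.linspace(x[0], x[-1], n_knots + 2)[1:-1]
    return LSQUnivariateSpline(x, y, knots, k=3)

fine = ls_spline(t, data, 16)(t)     # small mesh: low F-error, higher R-error
coarse = ls_spline(t, data, 4)(t)    # large mesh: higher F-error, low R-error

# The difference of the two fits indicates the signal-dependent (F) error
# of the coarser fit, with no prior knowledge of the signal itself.
f_error_est = np.abs(fine - coarse)
```

The paper's actual estimator is more careful about separating the two error sources; this only shows the mechanics of comparing two mesh sizes.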
Institute of Scientific and Technical Information of China (English)
孙孝前; 尤进红
2003-01-01
In this paper we consider the estimation problem for a semiparametric regression model when the data are longitudinal. An iterative weighted partial spline least squares estimator (IWPSLSE) for the parametric component is proposed which is more efficient, in the sense of asymptotic variance, than the weighted partial spline least squares estimator (WPSLSE) with weights constructed from the within-group partial spline least squares residuals. The asymptotic normality of the IWPSLSE is established. An adaptive procedure is presented which ensures that the iterative process stops after a finite number of iterations and produces an estimator asymptotically equivalent to the best estimator obtainable by the iterative procedure. These results generalize those for the heteroscedastic linear model to the case of semiparametric regression.
Directory of Open Access Journals (Sweden)
Xiaolong Wang
2013-01-01
In general, the proper orthogonal decomposition (POD) method is used to deal with single-parameter problems in engineering practice, and linear interpolation is employed to establish the reduced model. Recently, this method has been extended to double-parameter problems, with the amplitudes obtained by cubic B-spline interpolation. In this paper, the accuracy of reduced models established with linear interpolation and with cubic B-spline interpolation is verified via two typical examples. Both methods give satisfactory results, and the results of cubic B-spline interpolation are more accurate than those of linear interpolation. These results are meaningful for guiding the application of POD interpolation to complex multiparameter problems.
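The linear-versus-cubic accuracy comparison can be reproduced in miniature on a synthetic amplitude curve (an assumed smooth test function, not the paper's POD data):

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Coarse samples of a smooth "amplitude" curve
x = np.linspace(0, 1, 9)
y = np.sin(2 * np.pi * x)

# Dense evaluation grid and ground truth
xf = np.linspace(0, 1, 501)
truth = np.sin(2 * np.pi * xf)

# Maximum interpolation error of each scheme
lin_err = float(np.max(np.abs(np.interp(xf, x, y) - truth)))
cub_err = float(np.max(np.abs(CubicSpline(x, y)(xf) - truth)))
```

For smooth data, the cubic spline error decays like h^4 against h^2 for linear interpolation, which is why the cubic variant wins here.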
B-Spline Finite Elements and their Efficiency in Solving Relativistic Mean Field Equations
Pöschl, W
1997-01-01
A finite element method using B-splines is presented and compared with a conventional finite element method of Lagrangian type. The efficiency of both methods is investigated on the example of a coupled nonlinear system of Dirac eigenvalue equations and inhomogeneous Klein-Gordon equations, which describes a nuclear system in the framework of relativistic mean field theory. Although the FEM has recently been applied with great success in nuclear RMF calculations, a well-known problem is the appearance of spurious solutions in the spectra of the Dirac equation. The question of whether B-splines reduce the number of spurious solutions is analyzed. Numerical expense, precision, and convergence behavior are compared for both methods in view of their use in large-scale computations on FEM grids in higher dimensions. A B-spline version of the object-oriented C++ code for spherical nuclei has been used for this investigation.
Bayesian stable isotope mixing models
In this paper we review recent advances in Stable Isotope Mixing Models (SIMMs) and place them into an over-arching Bayesian statistical framework which allows for several useful extensions. SIMMs are used to quantify the proportional contributions of various sources to a mixtur...
Naive Bayesian for Email Filtering
Institute of Scientific and Technical Information of China (English)
[Anonymous]
2002-01-01
The paper presents an email filtering method based on naive Bayesian theory that can effectively filter junk and illegal mail. The keys to the implementation are discussed in detail. The filtering model is obtained from a training set of email, and filtering can be done without the user specifying filtering rules.
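A minimal sketch of such a naive Bayesian filter, with Laplace smoothing so unseen words do not zero out a class; the vocabulary and training emails are toy hypothetical data, not the paper's corpus:

```python
import math
from collections import Counter

def train(emails):
    """emails: list of (tokens, label) with label in {'spam', 'ham'}."""
    counts = {'spam': Counter(), 'ham': Counter()}  # word counts per class
    n_docs = Counter()                              # document counts per class
    for tokens, label in emails:
        counts[label].update(tokens)
        n_docs[label] += 1
    vocab = set(counts['spam']) | set(counts['ham'])
    return counts, n_docs, vocab

def classify(tokens, counts, n_docs, vocab):
    total = sum(n_docs.values())
    best, best_lp = None, -math.inf
    for label in ('spam', 'ham'):
        lp = math.log(n_docs[label] / total)        # class prior
        denom = sum(counts[label].values()) + len(vocab)
        for tok in tokens:
            # Laplace smoothing: add-one count avoids zero probabilities
            lp += math.log((counts[label][tok] + 1) / denom)
        if lp > best_lp:
            best, best_lp = label, lp
    return best

training = [
    (['win', 'money', 'now'], 'spam'),
    (['cheap', 'money', 'offer'], 'spam'),
    (['meeting', 'agenda', 'monday'], 'ham'),
    (['lunch', 'monday', 'agenda'], 'ham'),
]
model = train(training)
label = classify(['money', 'offer'], *model)
```

The "filtering without user-specified rules" property follows directly: all class evidence comes from the training counts.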
Bayesian analysis of binary sequences
Torney, David C.
2005-03-01
This manuscript details Bayesian methodology for "learning by example", with binary n-sequences encoding the objects under consideration. Priors prove influential; conformable priors are described. Laplace approximation of Bayes integrals yields posterior likelihoods for all n-sequences. This involves the optimization of a definite function over a convex domain, efficiently effectuated by sequential application of quadratic programming.
Bayesian NL interpretation and learning
Zeevat, H.
2011-01-01
Everyday natural language communication is normally successful, even though contemporary computational linguistics has shown that NL is characterised by very high degree of ambiguity and the results of stochastic methods are not good enough to explain the high success rate. Bayesian natural language
ANALYSIS OF BAYESIAN CLASSIFIER ACCURACY
Directory of Open Access Journals (Sweden)
Felipe Schneider Costa
2013-01-01
The naïve Bayes classifier is considered one of the most effective classification algorithms today, competing with more modern and sophisticated classifiers. Despite being based on the unrealistic (naïve) assumption that all variables are independent given the output class, the classifier provides proper results. However, depending on the scenario (network structure, number of samples or training cases, number of variables), the network may not provide appropriate results. This study uses a variable-selection process, based on the chi-squared test, to verify the existence of dependence between variables in the data model, in order to identify the reasons that prevent a Bayesian network from performing well. A detailed analysis of the data is also proposed, unlike in other existing work, as well as adjustments for limit values between two adjacent classes. Furthermore, variable weights, calculated with the mutual information function, are used in the computation of the a posteriori probabilities. Tests were applied to both a naïve Bayesian network and a hierarchical Bayesian network. After testing, a significant reduction in error rate was observed: the naïve Bayesian network's error rate dropped from twenty-five percent to five percent relative to the initial classification results, while in the hierarchical network the fifteen percent error rate not only dropped but ultimately reached zero.
Bayesian inference for Hawkes processes
DEFF Research Database (Denmark)
Rasmussen, Jakob Gulddahl
The Hawkes process is a practically and theoretically important class of point processes, but parameter-estimation for such a process can pose various problems. In this paper we explore and compare two approaches to Bayesian inference. The first approach is based on the so-called conditional...
Bayesian Classification of Image Structures
DEFF Research Database (Denmark)
Goswami, Dibyendu; Kalkan, Sinan; Krüger, Norbert
2009-01-01
In this paper, we describe work on Bayesian classifiers for distinguishing between homogeneous structures, textures, edges and junctions. We build semi-local classifiers from hand-labeled images to distinguish between these four different kinds of structures based on the concept of intrinsic dimensi...
3-D contextual Bayesian classifiers
DEFF Research Database (Denmark)
Larsen, Rasmus
In this paper we will consider extensions of a series of Bayesian 2-D contextual classification procedures proposed by Owen (1984), Hjort & Mohn (1984), Welch & Salter (1971) and Haslett (1985) to 3 spatial dimensions. It is evident that compared to classical pixelwise classification further...
Bayesian Networks and Influence Diagrams
DEFF Research Database (Denmark)
Kjærulff, Uffe Bro; Madsen, Anders Læsø
Bayesian Networks and Influence Diagrams: A Guide to Construction and Analysis, Second Edition, provides a comprehensive guide for practitioners who wish to understand, construct, and analyze intelligent systems for decision support based on probabilistic networks. This new edition contains six new...
Bayesian image restoration, using configurations
DEFF Research Database (Denmark)
Thorarinsdottir, Thordis
configurations are expressed in terms of the mean normal measure of the random set. These probabilities are used as prior probabilities in a Bayesian image restoration approach. Estimation of the remaining parameters in the model is outlined for salt and pepper noise. The inference in the model is discussed...
Bayesian image restoration, using configurations
DEFF Research Database (Denmark)
Thorarinsdottir, Thordis Linda
2006-01-01
configurations are expressed in terms of the mean normal measure of the random set. These probabilities are used as prior probabilities in a Bayesian image restoration approach. Estimation of the remaining parameters in the model is outlined for the salt and pepper noise. The inference in the model is discussed...
Bayesian Evidence and Model Selection
Knuth, Kevin H; Malakar, Nabin K; Mubeen, Asim M; Placek, Ben
2014-01-01
In this paper we review the concept of the Bayesian evidence and its application to model selection. The theory is presented along with a discussion of analytic, approximate and numerical techniques. Application to several practical examples within the context of signal processing are discussed.
Differentiated Bayesian Conjoint Choice Designs
Z. Sándor (Zsolt); M. Wedel (Michel)
2003-01-01
textabstractPrevious conjoint choice design construction procedures have produced a single design that is administered to all subjects. This paper proposes to construct a limited set of different designs. The designs are constructed in a Bayesian fashion, taking into account prior uncertainty about
B-Spline with Symplectic Algorithm Method for Solution of Time-Dependent Schrödinger Equations
Institute of Scientific and Technical Information of China (English)
BIAN Xue-Bin; QIAO Hao-Xue; SHI Ting-Yun
2006-01-01
A B-spline with symplectic algorithm method for the solution of time-dependent Schrödinger equations (TDSEs) is introduced. The spatial part of the wavefunction is expanded in B-splines and the time evolution is given by a symplectic scheme.
Bayesian Alternation During Tactile Augmentation
Directory of Open Access Journals (Sweden)
Caspar Mathias Goeke
2016-10-01
A large number of studies suggest that the integration of multisensory signals by humans is well described by Bayesian principles. However, there are very few reports about cue combination between a native and an augmented sense. In particular, we asked whether adult participants are able to integrate an augmented sensory cue with existing native sensory information. For the purpose of this study we built a tactile augmentation device and compared different hypotheses of how untrained adult participants combine information from a native and an augmented sense. In a two-interval forced choice (2IFC) task, while subjects were blindfolded and seated on a rotating platform, our sensory augmentation device translated information on whole-body yaw rotation into tactile stimulation. Three conditions were realized: tactile stimulation only (augmented condition), rotation only (native condition), and both augmented and native information (bimodal condition). Participants had to choose the one of two consecutive rotations with the higher angular rotation. For the analysis, we fitted the participants' responses with a probit model and calculated the just noticeable difference (JND). We then compared several models for predicting bimodal from unimodal responses. An objective Bayesian alternation model yielded a better prediction (χ²_red = 1.67) than the Bayesian integration model (χ²_red = 4.34). Slightly higher accuracy was shown by a non-Bayesian winner-takes-all model (χ²_red = 1.64), which used either only native or only augmented values per subject for prediction. However, the performance of the Bayesian alternation model could be substantially improved (χ²_red = 1.09) by utilizing subjective weights obtained from a questionnaire. As a result, the subjective Bayesian alternation model predicted bimodal performance most accurately among all tested models. These results suggest that information from augmented and existing sensory modalities in
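The competing model predictions can be sketched numerically. The noise values below are hypothetical, not the study's measurements; the point is that optimal integration predicts a bimodal JND below the best single cue, while alternation never beats it:

```python
import math

# Hypothetical unimodal noise estimates (standard deviations)
sigma_native = 4.0      # rotation (vestibular) cue
sigma_augmented = 6.0   # tactile augmentation cue

# Optimal (Bayesian) integration: precisions add, so the bimodal
# estimate is always more precise than either single cue.
sigma_integration = math.sqrt(
    (sigma_native**2 * sigma_augmented**2)
    / (sigma_native**2 + sigma_augmented**2)
)

# Alternation: each trial uses one cue; with equal-mean cues the
# predicted bimodal variance is a mixture of the unimodal variances.
p_native = 0.5
sigma_alternation = math.sqrt(
    p_native * sigma_native**2 + (1 - p_native) * sigma_augmented**2
)
```

Comparing measured bimodal JNDs against these two predictions is the essence of the model comparison reported above.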
Smooth Wilson loops in N=4 non-chiral superspace
Beisert, Niklas; Müller, Dennis; Plefka, Jan; Vergu, Cristian
2015-12-01
We consider a supersymmetric Wilson loop operator for 4d N = 4 super Yang-Mills theory which is the natural object dual to the AdS 5 × S 5 superstring in the AdS/CFT correspondence. It generalizes the traditional bosonic 1 /2 BPS Maldacena-Wilson loop operator and completes recent constructions in the literature to smooth (non-light-like) loops in the full N=4 non-chiral superspace. This Wilson loop operator enjoys global super-conformal and local kappa-symmetry of which a detailed discussion is given. Moreover, the finiteness of its vacuum expectation value is proven at leading order in perturbation theory. We determine the leading vacuum expectation value for general paths both at the component field level up to quartic order in anti-commuting coordinates and in the full non-chiral superspace in suitable gauges. Finally, we discuss loops built from quadric splines joined in such a way that the path derivatives are continuous at the intersection.
Smooth Wilson Loops in N=4 Non-Chiral Superspace
Beisert, Niklas; Plefka, Jan; Vergu, Cristian
2015-01-01
We consider a supersymmetric Wilson loop operator for 4d N=4 super Yang-Mills theory which is the natural object dual to the AdS_5 x S^5 superstring in the AdS/CFT correspondence. It generalizes the traditional bosonic 1/2 BPS Maldacena-Wilson loop operator and completes recent constructions in the literature to smooth (non-light-like) loops in the full N=4 non-chiral superspace. This Wilson loop operator enjoys global superconformal and local kappa-symmetry of which a detailed discussion is given. Moreover, the finiteness of its vacuum expectation value is proven at leading order in perturbation theory. We determine the leading vacuum expectation value for general paths both at the component field level up to quartic order in anti-commuting coordinates and in the full non-chiral superspace in suitable gauges. Finally, we discuss loops built from quadric splines joined in such a way that the path derivatives are continuous at the intersection.
BSR: B-spline atomic R-matrix codes
Zatsarinny, Oleg
2006-02-01
BSR is a general program to calculate atomic continuum processes using the B-spline R-matrix method, including electron-atom and electron-ion scattering, and radiative processes such as bound-bound transitions, photoionization and polarizabilities. The calculations can be performed in LS-coupling or in an intermediate-coupling scheme by including terms of the Breit-Pauli Hamiltonian. New version program summary. Title of program: BSR. Catalogue identifier: ADWY. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADWY. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. Computers on which the program has been tested: Microway Beowulf cluster; Compaq Beowulf cluster; DEC Alpha workstation; DELL PC. Operating systems under which the new version has been tested: UNIX, Windows XP. Programming language used: FORTRAN 95. Memory required to execute with typical data: typically 256-512 Mwords; since all the principal dimensions are allocatable, the available memory defines the maximum complexity of the problem. No. of bits in a word: 8. No. of processors used: 1. Has the code been vectorized or parallelized?: no. No. of lines in distributed program, including test data, etc.: 69 943. No. of bytes in distributed program, including test data, etc.: 746 450. Peripherals used: scratch disk store; permanent disk store. Distribution format: tar.gz. Nature of physical problem: this program uses the R-matrix method to calculate electron-atom and electron-ion collision processes, with options to calculate radiative data, photoionization, etc. The calculations can be performed in LS-coupling or in an intermediate-coupling scheme, with options to include Breit-Pauli terms in the Hamiltonian. Method of solution: the R-matrix method is used [P.G. Burke, K.A. Berrington, Atomic and Molecular Processes: An R-Matrix Approach, IOP Publishing, Bristol, 1993; P.G. Burke, W.D. Robb, Adv. At. Mol. Phys. 11 (1975) 143; K.A. Berrington, W.B. Eissner, P.H. Norrington, Comput
Splines and the Galerkin method for solving the integral equations of scattering theory
Brannigan, M.; Eyre, D.
1983-06-01
This paper investigates the Galerkin method with cubic B-spline approximants for solving singular integral equations that arise in scattering theory. We stress the relationship between the Galerkin and collocation methods. The error bound for cubic spline approximants has a convergence rate of O(h^4), where h is the mesh spacing. We test the utility of the Galerkin method by solving both two- and three-body problems. We demonstrate, by solving the Amado-Lovelace equation for a system of three identical bosons, that our numerical treatment of the scattering problem is both efficient and accurate for small linear systems.
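The O(h^4) rate for cubic spline approximation can be checked empirically. The sketch below uses cubic spline interpolation of sin on a uniform mesh as a stand-in for the paper's setting: halving h should reduce the maximum error by roughly 2^4 = 16:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def max_err(n):
    # Maximum error of a cubic spline interpolant of sin on n mesh points
    x = np.linspace(0, np.pi, n)
    s = CubicSpline(x, np.sin(x))
    xf = np.linspace(0, np.pi, 2000)
    return float(np.max(np.abs(s(xf) - np.sin(xf))))

# Halving the mesh spacing: observed order = log2(e1 / e2), expected near 4
e1, e2 = max_err(17), max_err(33)
rate = float(np.log2(e1 / e2))
```

The observed order hovers around 4, consistent with the O(h^4) bound quoted in the abstract.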
Efectivity of Additive Spline for Partial Least Square Method in Regression Model Estimation
Directory of Open Access Journals (Sweden)
Ahmad Bilfarsah
2005-04-01
The Additive Spline Partial Least Squares (ASPLS) method is a generalization of the Partial Least Squares (PLS) method. The ASPLS method can accommodate nonlinearity and multicollinearity among the predictor variables. In principle, the ASPLS approach is characterized by two ideas: the first is to use parametric transformations of the predictors via spline functions; the second is to make the ASPLS components mutually uncorrelated, preserving the properties of the linear PLS components. The performance of ASPLS compared with other PLS methods is illustrated with a fisheries economics application, specifically tuna production.
Preconditioning cubic spline collocation method by FEM and FDM for elliptic equations
Energy Technology Data Exchange (ETDEWEB)
Kim, Sang Dong [KyungPook National Univ., Taegu (Korea, Republic of)
1996-12-31
In this talk we discuss finite element and finite difference techniques for the cubic spline collocation method. For this purpose, we consider the uniformly elliptic operator A defined by Au := -Δu + a_1 u_x + a_2 u_y + a_0 u in Ω (the unit square) with Dirichlet or Neumann boundary conditions, and its discretization based on Hermite cubic spline spaces and collocation at the Gauss points. Using an interpolatory basis with support on the Gauss points, one obtains the matrix A_N (h = 1/N).
B3 Spline Function Method Used in Simulating Flatness and Profile of Cold Rolled Strip
Institute of Scientific and Technical Information of China (English)
LI Jun-hong; QI Xiang-dong; LIAN Jia-chuang
2004-01-01
Flatness and profile are important quality indexes of strip. Combining the influence function method for solving the elastic deformation of the roll system with the variational method for solving the lateral flow of metal, the flatness and profile of the strip during cold continuous rolling were simulated. The B3 spline function was used to approximate the lateral distribution of strip thickness. The transverse distributions of the exit thickness and the front tension stress for each pass were obtained. Comparison with measured results shows that using the spline function to approximate the lateral distribution of strip thickness considerably improves the calculation accuracy of flatness and profile.
Rational trigonometric cubic spline to conserve convexity of 2D data
Directory of Open Access Journals (Sweden)
Farheen Ibraheem
2013-11-01
Researchers in different fields of study are always in need of spline interpolating functions that conserve the intrinsic trend of the data. In this paper, a rational trigonometric cubic spline with four free parameters is used to retain the convexity of 2D data. For this purpose, constraints on two of the free parameters, βi and γi, in the description of the rational trigonometric function are derived, while the remaining two, αi and δi, are left free. Numerical examples demonstrate that the resulting curves are C1.
About a family of C2 splines with one free generating function
Directory of Open Access Journals (Sweden)
Igor Verlan
2005-01-01
The problem of interpolating a discrete set of data on the interval [a, b] representing a function f is investigated. A family of C² splines with one free generating function is introduced in order to solve this problem; cubic C² splines belong to this family. The conditions the generating function must satisfy in order to obtain explicit interpolants are presented, and examples of generating functions are given. Mathematics Subject Classification 2000: 65D05, 65D07, 41A05, 41A15.
Cubic B-Spline Collocation Method for One-Dimensional Heat and Advection-Diffusion Equations
Directory of Open Access Journals (Sweden)
Joan Goh
2012-01-01
Numerical solutions of one-dimensional heat and advection-diffusion equations are obtained by a collocation method based on cubic B-splines. The usual finite difference scheme is used for the time integration, and the cubic B-spline is applied as the interpolation function in space. The stability of the scheme is examined by the von Neumann approach. The efficiency of the method is illustrated by some test problems, and the numerical results are found to be in good agreement with the exact solutions.
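The spatial ingredient, collocation with a cubic B-spline basis, can be sketched on a steady model problem -u'' = π² sin(πx), u(0) = u(1) = 0, whose exact solution is sin(πx). The time discretization of the paper is omitted, and collocation at Greville points is an assumption of this sketch, not necessarily the paper's choice:

```python
import numpy as np
from scipy.interpolate import BSpline

k, n = 3, 20                                   # cubic degree, number of basis functions
# Open (clamped) knot vector on [0, 1]: length n + k + 1
t = np.concatenate((np.zeros(k), np.linspace(0.0, 1.0, n - k + 1), np.ones(k)))
# Greville abscissae: averages of k consecutive interior knots
grev = np.array([t[i + 1:i + k + 1].mean() for i in range(n)])

def basis(j, x, der=0):
    # j-th B-spline (or its derivative) via a unit coefficient vector
    c = np.zeros(n)
    c[j] = 1.0
    b = BSpline(t, c, k)
    return b.derivative(der)(x) if der else b(x)

A = np.zeros((n, n))
rhs = np.zeros(n)
for j in range(n):
    A[0, j] = basis(j, 0.0)                     # boundary row: u(0) = 0
    A[-1, j] = basis(j, 1.0)                    # boundary row: u(1) = 0
    A[1:-1, j] = -basis(j, grev[1:-1], der=2)   # collocate -u'' at interior points
rhs[1:-1] = np.pi**2 * np.sin(np.pi * grev[1:-1])

u = BSpline(t, np.linalg.solve(A, rhs), k)
xs = np.linspace(0.0, 1.0, 400)
err = float(np.max(np.abs(u(xs) - np.sin(np.pi * xs))))
```

For the time-dependent equations of the paper, the same spatial system would be advanced with a finite difference scheme in time at each step.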
GA Based Rational cubic B-Spline Representation for Still Image Interpolation
Directory of Open Access Journals (Sweden)
Samreen Abbas
2016-12-01
In this paper, an image interpolation scheme is designed for 2D natural images. A local-support rational cubic spline with control parameters is used as the interpolatory function and optimized using a Genetic Algorithm (GA). The GA determines appropriate values of the control parameters used in the description of the rational cubic spline. Three state-of-the-art Image Quality Assessment (IQA) models, along with a traditional one, are employed for comparison with existing image interpolation schemes and for a perceptual quality check of the resulting images. The results show that the proposed scheme outperforms the existing ones.
Study of Quintic Spline Interpolation and Generated Velocity Profile for High Speed Machining
Institute of Scientific and Technical Information of China (English)
ZHENG Jinxing; ZHANG Mingjun; MENG Qingxin
2006-01-01
Modern high-speed machining (HSM) machine tools often operate at high speed and high feedrate with high accelerations in order to deliver rapid feed motion. This paper presents an interpolation algorithm that generates continuous quintic spline toolpaths with a constant travel increment at each step, while obtaining smoother second-order acceleration and jerk profiles. An approach for reducing feedrate fluctuation in high-speed spline interpolation is then presented. Simulation validates that the presented approach is fast, reliable, and effective.
Dynamic coefficients of axial spline couplings in high-speed rotating machinery
Energy Technology Data Exchange (ETDEWEB)
Ku, C.P.R.; Walton, J.F. Jr. (Mechanical Technology Inc., Latham, NY (United States)); Lund, J.W. (Technical Univ. of Denmark, Lyngby (Denmark). Dept. of Machine Elements)
1994-07-01
This paper provided the first opportunity to quantify the angular stiffness and equivalent viscous damping coefficients of an axial spline coupling used in high-speed turbomachinery. The bending moments and angular deflections transmitted across an axial spline coupling were measured while a nonrotating shaft was excited by an external shaker. A rotordynamics computer program was used to simulate the test conditions and to correlate the angular stiffness and damping coefficients. The effects of external force and frequency were also investigated. The angular stiffness and damping coefficients were used to perform a linear steady-state rotordynamics stability analysis, and the unstable natural frequency was calculated and compared to the experimental measurements.
Bayesian analysis of rare events
Energy Technology Data Exchange (ETDEWEB)
Straub, Daniel, E-mail: straub@tum.de; Papaioannou, Iason; Betz, Wolfgang
2016-06-01
In many areas of engineering and science there is an interest in predicting the probability of rare events, in particular in applications related to safety and security. Increasingly, such predictions are made through computer models of physical systems in an uncertainty quantification framework. Additionally, with advances in IT, monitoring and sensor technology, an increasing amount of data on the performance of the systems is collected. This data can be used to reduce uncertainty, improve the probability estimates and consequently enhance the management of rare events and associated risks. Bayesian analysis is the ideal method to include the data into the probabilistic model. It ensures a consistent probabilistic treatment of uncertainty, which is central in the prediction of rare events, where extrapolation from the domain of observation is common. We present a framework for performing Bayesian updating of rare event probabilities, termed BUS. It is based on a reinterpretation of the classical rejection-sampling approach to Bayesian analysis, which enables the use of established methods for estimating probabilities of rare events. By drawing upon these methods, the framework makes use of their computational efficiency. These methods include the First-Order Reliability Method (FORM), tailored importance sampling (IS) methods and Subset Simulation (SuS). In this contribution, we briefly review these methods in the context of the BUS framework and investigate their applicability to Bayesian analysis of rare events in different settings. We find that, for some applications, FORM can be highly efficient and is surprisingly accurate, enabling Bayesian analysis of rare events with just a few model evaluations. In a general setting, BUS implemented through IS and SuS is more robust and flexible.
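The core reinterpretation behind BUS, viewing Bayesian updating as rejection sampling so that rare-event estimators such as FORM, IS, or SuS can be applied to the acceptance event, can be sketched in its plain rejection-sampling form. The conjugate normal toy problem below (prior N(0, 1), one observation y = 1 with noise sd 0.5) has the known analytic posterior mean 0.8:

```python
import numpy as np

rng = np.random.default_rng(0)

# Prior: theta ~ N(0, 1). One noisy observation y with noise sd.
y, sd = 1.0, 0.5
def likelihood(theta):
    return np.exp(-0.5 * ((y - theta) / sd) ** 2)

# Rejection-sampling view of Bayesian updating: draw from the prior and
# accept each sample with probability L(theta) / max L (here max L = 1).
theta = rng.standard_normal(200_000)
accept = rng.random(theta.size) < likelihood(theta)
posterior = theta[accept]

post_mean = float(posterior.mean())   # analytic posterior mean is 0.8
```

BUS replaces the brute-force acceptance step with structural-reliability machinery, which is what makes the approach viable when the acceptance probability is tiny.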
Michel, Volker
2013-01-01
Lectures on Constructive Approximation: Fourier, Spline, and Wavelet Methods on the Real Line, the Sphere, and the Ball focuses on spherical problems as they occur in the geosciences and medical imaging. It comprises the author’s lectures on classical approximation methods based on orthogonal polynomials and selected modern tools such as splines and wavelets. Methods for approximating functions on the real line are treated first, as they provide the foundations for the methods on the sphere and the ball and are useful for the analysis of time-dependent (spherical) problems. The author then examines the transfer of these spherical methods to problems on the ball, such as the modeling of the Earth’s or the brain’s interior. Specific topics covered include: * the advantages and disadvantages of Fourier, spline, and wavelet methods * theory and numerics of orthogonal polynomials on intervals, spheres, and balls * cubic splines and splines based on reproducing kernels * multiresolution analysis using wavelet...
STATISTICAL BAYESIAN ANALYSIS OF EXPERIMENTAL DATA.
Directory of Open Access Journals (Sweden)
AHLAM LABDAOUI
2012-12-01
The Bayesian researcher should know the basic ideas underlying Bayesian methodology and the computational tools used in modern Bayesian econometrics. Some of the most important methods of posterior simulation are Monte Carlo integration, importance sampling, Gibbs sampling and the Metropolis-Hastings algorithm. The Bayesian should also be able to put the theory and computational tools together in the context of substantive empirical problems. We focus primarily on recent developments in Bayesian computation, and then on particular models, inevitably combining theory and computation in their context. Although we have tried to be reasonably complete in covering the basic ideas of Bayesian theory and the computational tools most commonly used by Bayesians, there is no way we can cover all the classes of models used in econometrics. We focus on analysis of variance and the linear regression model.
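Of the posterior-simulation methods listed above, Metropolis-Hastings is the most general. A minimal random-walk sketch targeting a standard normal (a toy target chosen here so the result is checkable, not an econometric model):

```python
import numpy as np

def metropolis_hastings(log_post, x0, n, step=1.0, seed=0):
    """Random-walk Metropolis sampler for a one-dimensional posterior."""
    rng = np.random.default_rng(seed)
    x, lp = x0, log_post(x0)
    out = np.empty(n)
    for i in range(n):
        prop = x + step * rng.standard_normal()   # symmetric proposal
        lp_prop = log_post(prop)
        # Accept with probability min(1, posterior ratio)
        if np.log(rng.random()) < lp_prop - lp:
            x, lp = prop, lp_prop
        out[i] = x                                # keep current state either way
    return out

# Target: standard normal log-density (up to an additive constant)
draws = metropolis_hastings(lambda x: -0.5 * x * x, x0=0.0, n=50_000)
```

Because the acceptance ratio only needs the posterior up to a constant, the same sampler applies to the regression and ANOVA posteriors the abstract mentions, with `log_post` swapped accordingly.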
Bayesian methods for measures of agreement
Broemeling, Lyle D
2009-01-01
Using WinBUGS to implement Bayesian inferences of estimation and testing hypotheses, Bayesian Methods for Measures of Agreement presents useful methods for the design and analysis of agreement studies. It focuses on agreement among the various players in the diagnostic process.The author employs a Bayesian approach to provide statistical inferences based on various models of intra- and interrater agreement. He presents many examples that illustrate the Bayesian mode of reasoning and explains elements of a Bayesian application, including prior information, experimental information, the likelihood function, posterior distribution, and predictive distribution. The appendices provide the necessary theoretical foundation to understand Bayesian methods as well as introduce the fundamentals of programming and executing the WinBUGS software.Taking a Bayesian approach to inference, this hands-on book explores numerous measures of agreement, including the Kappa coefficient, the G coefficient, and intraclass correlation...
Smoothing and projecting age-specific probabilities of death by TOPALS
Directory of Open Access Journals (Sweden)
Joop de Beer
2012-10-01
Full Text Available BACKGROUND TOPALS is a new relational model for smoothing and projecting age schedules. The model is operationally simple, flexible, and transparent. OBJECTIVE This article demonstrates how TOPALS can be used for both smoothing and projecting age-specific mortality for 26 European countries and compares the results of TOPALS with those of other smoothing and projection methods. METHODS TOPALS uses a linear spline to describe the ratios between the age-specific death probabilities of a given country and a standard age schedule. For smoothing purposes I use the average of death probabilities over 15 Western European countries as standard, whereas for projection purposes I use an age schedule of 'best practice' mortality. A partial adjustment model projects how quickly the death probabilities move in the direction of the best-practice level of mortality. RESULTS On average, TOPALS performs better than the Heligman-Pollard model and the Brass relational method in smoothing mortality age schedules. TOPALS can produce projections that are similar to those of the Lee-Carter method, but can easily be used to produce alternative scenarios as well. This article presents three projections of life expectancy at birth for the year 2060 for 26 European countries. The Baseline scenario assumes a continuation of the past trend in each country, the Convergence scenario assumes that there is a common trend across European countries, and the Acceleration scenario assumes that the future decline of death probabilities will exceed that in the past. The Baseline scenario projects that average European life expectancy at birth will increase to 80 years for men and 87 years for women in 2060, whereas the Acceleration scenario projects an increase to 90 and 93 years respectively. CONCLUSIONS TOPALS is a useful new tool for demographers for both smoothing age schedules and making scenarios.
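The core TOPALS computation can be sketched in a few lines: fit a linear spline, by least squares, to the log-ratios between an observed mortality schedule and a standard schedule, then multiply the standard back in. The schedule, noise level, and knot grid below are invented for this sketch, not taken from the article.

```python
import numpy as np

# Illustrative TOPALS-style smoothing: linear spline fitted to log-ratios
# between an observed schedule and a standard. All data here are synthetic.
ages = np.arange(100)
standard = 1e-4 * np.exp(0.085 * ages)                 # stand-in standard schedule
rng = np.random.default_rng(0)
observed = standard * np.exp(rng.normal(0.0, 0.15, ages.size))

knots = np.array([0.0, 1.0, 10.0, 20.0, 40.0, 70.0, 99.0])  # assumed knot grid

def hat_basis(x, knots):
    """Linear-spline (hat function) basis; boundary hats clamp at the ends."""
    B = np.empty((x.size, knots.size))
    for j in range(knots.size):
        if j == 0:
            B[:, j] = np.interp(x, knots[:2], [1.0, 0.0])
        elif j == knots.size - 1:
            B[:, j] = np.interp(x, knots[-2:], [0.0, 1.0])
        else:
            B[:, j] = np.interp(x, knots[j - 1:j + 2], [0.0, 1.0, 0.0])
    return B

B = hat_basis(ages.astype(float), knots)
coef, *_ = np.linalg.lstsq(B, np.log(observed / standard), rcond=None)
smoothed = standard * np.exp(B @ coef)                 # smoothed schedule
```

Because the spline acts on log-ratios rather than on the schedule itself, the fitted curve inherits the overall age shape from the standard and needs only a handful of knots.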
Selective Smoothed Finite Element Method
Institute of Scientific and Technical Information of China (English)
(Anonymous)
2007-01-01
The paper examines three selective schemes for the smoothed finite element method (SFEM) which was formulated by incorporating a cell-wise strain smoothing operation into the standard compatible finite element method (FEM). These selective SFEM schemes were formulated based on three selective integration FEM schemes with similar properties found between the number of smoothing cells in the SFEM and the number of Gaussian integration points in the FEM. Both scheme 1 and scheme 2 are free of nearly incompressible locking, but scheme 2 is more general and gives better results than scheme 1. In addition, scheme 2 can be applied to anisotropic and nonlinear situations, while scheme 1 can only be applied to isotropic and linear situations. Scheme 3 is free of shear locking. This scheme can be applied to plate and shell problems. Results of the numerical study show that the selective SFEM schemes give more accurate results than the FEM schemes.
α-compactness in smooth topological spaces
Directory of Open Access Journals (Sweden)
Chun-Kee Park
2003-01-01
Full Text Available We introduce the concepts of smooth α-closure and smooth α-interior of a fuzzy set, which are generalizations of the smooth closure and smooth interior of a fuzzy set defined by Demirci (1997), and obtain some of their structural properties.
Wetting on smooth micropatterned defects
Debuisson, Damien; Senez, Vincent; Arscott, Steve
2011-01-01
We develop a model which predicts the contact angle hysteresis introduced by smooth micropatterned defects. The defects are modeled by a smooth function and the contact angle hysteresis is explained using a tangent line solution. When the liquid micro-meniscus touches both sides of the defect simultaneously, depinning of the contact line occurs. The defects are fabricated using a photoresist and experimental results confirm the model. An important point is that the model is scale-independent, i.e. the contact angle hysteresis is dependent on the aspect ratio of the function, not on its absolute size; this could have implications for natural surface defects.
Bayesian versus 'plain-vanilla Bayesian' multitarget statistics
Mahler, Ronald P. S.
2004-08-01
Finite-set statistics (FISST) is a direct generalization of single-sensor, single-target Bayes statistics to the multisensor-multitarget realm, based on random set theory. Various aspects of FISST are being investigated by several research teams around the world. In recent years, however, a few partisans have claimed that a "plain-vanilla Bayesian approach" suffices as down-to-earth, "straightforward," and general "first principles" for multitarget problems, and that FISST is therefore mere mathematical "obfuscation." In this and a companion paper I demonstrate the speciousness of these claims. In this paper I summarize general Bayes statistics, what is required to use it in multisensor-multitarget problems, and why FISST is necessary to make it practical. Then I demonstrate that the "plain-vanilla Bayesian approach" is so heedlessly formulated that it is erroneous and not even Bayesian, that it denigrates FISST concepts while unwittingly assuming them, and that it has resulted in a succession of algorithms afflicted by inherent -- but less than candidly acknowledged -- computational "logjams."
Institute of Scientific and Technical Information of China (English)
CHI Wen-xue; WANG Jin-feng; LI Xin-hu; ZHENG Xiao-ying; LIAO Yi-lan
2007-01-01
Objective: To estimate the prevalence rates of neural tube defects (NTDs) in Heshun County, Shanxi Province, China by the Bayesian smoothing technique. Methods: A total of 80 infants in the study area who were diagnosed with NTDs were analyzed. Two mapping techniques were then used. Firstly, the GIS software ArcGIS was used to map the crude prevalence rates. Secondly, the data were smoothed by the method of empirical Bayes estimation. Results: The classical statistical approach produced an extremely heterogeneous map, while the Bayesian map was much smoother and more interpretable. The maps produced by the Bayesian technique indicate the tendency of villages in the southeastern region to produce higher prevalence or risk values. Conclusions: The Bayesian smoothing technique addresses the issue of heterogeneity in the population at risk and is therefore recommended for use in explorative mapping of birth defects. This approach provides procedures to identify spatial health risk levels and assists in generating hypotheses that will be investigated in further detail.
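The empirical Bayes smoothing of area-level rates can be illustrated with a global shrinkage estimator of Marshall's moment-matching type, used here as a generic stand-in for the authors' exact implementation; the village counts below are invented.

```python
import numpy as np

# Sketch of global empirical Bayes smoothing of area-level prevalence rates
# (Marshall-style moment estimator). All counts are synthetic, not the
# Heshun County data.
cases  = np.array([0, 2, 12, 1, 15, 3], dtype=float)        # cases per village (assumed)
births = np.array([40, 90, 120, 60, 150, 80], dtype=float)  # births at risk (assumed)

raw = cases / births                                  # crude prevalence rates
m = cases.sum() / births.sum()                        # pooled mean rate
s2 = np.average((raw - m) ** 2, weights=births)       # between-area variability
A = max(s2 - m / births.mean(), 0.0)                  # estimated prior variance

# Shrinkage weights: villages with few births are pulled toward the mean.
w = A / (A + m / births)
smoothed = w * raw + (1.0 - w) * m
```

The smoothed map is less noisy precisely because small-denominator areas, whose crude rates are least reliable, receive the smallest weights.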
Quadratic vs cubic spline-wavelets for image representations and compression
Marais, P.C.; Blake, E.H.; Kuijk, A.A.M.
1997-01-01
The Wavelet Transform generates a sparse multi-scale signal representation which may be readily compressed. To implement such a scheme in hardware, one must have a computationally cheap method of computing the necessary transform data. The use of semi-orthogonal quadratic spline wavelets allows one
Quadratic vs cubic spline-wavelets for image representation and compression
Marais, P.C.; Blake, E.H.; Kuijk, A.A.M.
1994-01-01
The Wavelet Transform generates a sparse multi-scale signal representation which may be readily compressed. To implement such a scheme in hardware, one must have a computationally cheap method of computing the necessary transform data. The use of semi-orthogonal quadratic spline wavelets allows one
Least square fitting of low resolution gamma ray spectra with cubic B-spline basis functions
Institute of Scientific and Technical Information of China (English)
ZHU Meng-Hua; LIU Liang-Gang; QI Dong-Xu; YOU Zhong; XU Ao-Ao
2009-01-01
In this paper, the least squares fitting method with cubic B-spline basis functions is derived to reduce the influence of statistical fluctuations in gamma ray spectra. The derived procedure is simple and automatic. The results show that this method is better than the convolution method, with a sufficient reduction of statistical fluctuation.
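Least-squares smoothing with a cubic B-spline basis can be sketched with SciPy's `LSQUnivariateSpline` on a synthetic spectrum (one Gaussian "line" on a flat background with Poisson noise); the data and the knot spacing are invented, not from the paper.

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

# Synthetic 512-channel spectrum: flat background plus one Gaussian peak,
# with Poisson counting noise.
rng = np.random.default_rng(1)
channels = np.arange(512, dtype=float)
peak = 800.0 * np.exp(-0.5 * ((channels - 256.0) / 12.0) ** 2)
counts = rng.poisson(50.0 + peak).astype(float)

# Least-squares cubic B-spline fit on an assumed evenly spaced knot grid.
interior_knots = channels[16:-16:16]
spl = LSQUnivariateSpline(channels, counts, interior_knots, k=3)
smoothed = spl(channels)
```

The knot spacing controls the trade-off: wider spacing suppresses more statistical fluctuation but risks flattening narrow peaks.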
Study of Microwave Multiphoton Transition of Rydberg Potassium Atom by Using B-Spline
Institute of Scientific and Technical Information of China (English)
JIN Cheng; ZHOU Xiao-Xin; ZHAO Song-Feng
2005-01-01
The B-spline expansion technique and time-dependent two-level approach are applied to study the interaction between the microwave field and potassium atoms in a static electric field. We obtain theoretical multiphoton resonance spectra that can be compared with the experimental data. We also obtain the time evolution of the final state in different microwave fields.
Constructing iterative non-uniform B-spline curve and surface to fit data points
Institute of Scientific and Technical Information of China (English)
LIN Hongwei; WANG Guojin; DONG Chenshi
2004-01-01
In this paper, based on the idea of profit and loss modification, we present iterative non-uniform B-spline curves and surfaces to settle a key problem in computer aided geometric design and reverse engineering, that is, constructing a curve (surface) fitting (interpolating) a given ordered point set without solving a linear system. We start with a piece of initial non-uniform B-spline curve (surface) which takes the given point set as its control point set. Then, by adjusting its control points gradually with an iterative formula, we can get a group of non-uniform B-spline curves (surfaces) with gradually higher precision. In this paper, using modern matrix theory, we strictly prove that the limit curve (surface) of the iteration interpolates the given point set. The non-uniform B-spline curves (surfaces) generated with the iteration have many advantages, such as satisfying the NURBS standard, having explicit expression, gaining locality, and convexity preserving, etc.
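The iteration described above can be sketched concretely: start with a cubic B-spline whose control points are the data points themselves, then repeatedly add each point's residual ("loss") back onto its control point until the curve interpolates the data. The parameterization and knot vector below are a simple uniform choice made up for the sketch, not the paper's construction.

```python
import numpy as np
from scipy.interpolate import BSpline

# Profit-and-loss style fixed-point iteration for B-spline interpolation.
# Data points, parameters, and knots are all invented for illustration.
pts = np.array([[0, 0], [1, 2], [2, 3], [3, 1], [4, 0], [5, 2]], dtype=float)
n, k = len(pts), 3
t = np.linspace(0.0, 1.0, n)                     # one parameter per data point
knots = np.concatenate(([0.0] * k, np.linspace(0.0, 1.0, n - k + 1), [1.0] * k))

ctrl = pts.copy()                                # initial control polygon = data
for _ in range(500):
    residual = pts - BSpline(knots, ctrl, k)(t)  # pointwise error ("loss")
    ctrl += residual                             # move each control point by it
    if np.abs(residual).max() < 1e-9:
        break
fitted = BSpline(knots, ctrl, k)(t)              # now interpolates pts
```

No linear system is solved at any step; convergence follows because the collocation matrix of the B-spline basis has eigenvalues in (0, 1], which is the property the paper's matrix-theoretic proof exploits.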
Shape Parameterization in Aircraft Design: A Novel Method, Based on B-Splines
Straathof, M.H.
2012-01-01
This thesis introduces a new parameterization technique based on the Class-Shape-Transformation (CST) method. The new technique consists of an extension to the CST method in the form of a refinement function based on B-splines. This Class-Shape-Refinement-Transformation (CSRT) method has the same ad
Fingerprint matching by thin-plate spline modelling of elastic deformations
Bazen, Asker M.; Gerez, Sabih H.
2003-01-01
This paper presents a novel minutiae matching method that describes elastic distortions in fingerprints by means of a thin-plate spline model, which is estimated using a local and a global matching stage. After registration of the fingerprints according to the estimated model, the number of matching
The use of B-splines in the assessment of strain levels associated with plain dents
Energy Technology Data Exchange (ETDEWEB)
Noronha Junior, Dauro B.; Martins, Ricardo R. [PETROBRAS, Rio de Janeiro, RJ (Brazil). Centro de Pesquisas (CENPES); Jacob, Breno P.; Souza, Eduardo [Coordenacao dos Programas de Pos-graduacao de Engenharia (COPPE/UFRJ), Rio de Janeiro, RJ (Brazil). Programa de Engenharia Civil. Lab. de Metodos Computacionais e Sistemas Offshore (LAMCSO)
2005-07-01
Most international pipeline codes consider plain dents injurious if they exceed a depth of 6% of the nominal pipe diameter. ASME B31.8 - Gas Transmission and Distribution Piping Systems - 2003 Edition gives an alternative to the above mentioned limit. According to this edition of the code, plain dents of any depth are acceptable provided strain levels associated with the deformation do not exceed 6% strain. In order to use the method for estimating strain in dents proposed in Appendix R of the B31.8 Code, interpolation or another mathematical technique is usually necessary to develop surface contour information from in-line inspection (ILI) tools or direct measurement data. This paper describes the application of a piecewise interpolation technique that makes use of fourth-order B-spline curves to approximate the dent profile in both the longitudinal and circumferential directions. The results obtained using B-splines were tested against nonlinear finite element analyses of dented pipelines and a distinct methodology proposed by Rosenfeld et al. (1998). The results obtained with the use of B-splines compared well with both techniques. Furthermore, the extension of the proposed methodology to the description of the topology of dents with more general shapes using B-spline surfaces is very promising. (author)
Recovery of Graded Index Profile of Planar Waveguide by Cubic Spline Function
Institute of Scientific and Technical Information of China (English)
YANG Yong; CHEN Xian-Feng; LIAO Wei-Jun; XIA Yu-Xing
2007-01-01
A method is proposed to recover the refractive index profile of a graded waveguide from the effective indices by a cubic spline interpolation function. Numerical analysis of several typical index distributions shows that the refractive index profile can be reconstructed close to its exact profile by the presented interpolation model.
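The interpolation step can be illustrated with SciPy's `CubicSpline`: a smooth, C2-continuous profile is reconstructed from a handful of sampled index values. The depths and index values below are invented, not measurements from the paper.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Illustration only: cubic-spline reconstruction of a graded-index profile
# from a few samples. Depths (um) and indices are assumed values.
depth = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 6.0])
n_eff = np.array([2.220, 2.190, 2.170, 2.160, 2.155, 2.150])

profile = CubicSpline(depth, n_eff)    # C2-continuous interpolant n(x)
x = np.linspace(0.0, 6.0, 121)
n_x = profile(x)                       # recovered profile on a fine grid
```

Unlike a parametric fit (e.g. Gaussian or exponential profile), the spline imposes no assumed functional form, which is what lets it track arbitrary graded profiles.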
Cubic Trigonometric B-spline Galerkin Methods for the Regularized Long Wave Equation
Irk, Dursun; Keskin, Pinar
2016-10-01
A numerical solution of the Regularized Long Wave (RLW) equation is obtained using the Galerkin finite element method, based on the Crank-Nicolson method for the time integration and cubic trigonometric B-spline functions for the space integration. After two different linearization techniques are applied, the proposed algorithms are tested on the problems of propagation of a solitary wave and interaction of two solitary waves.
Calculations of Electron Structure of Endohedrally Confined Helium Atom with B-Spline Type Functions
Institute of Scientific and Technical Information of China (English)
QIAO HaoXue; SHI TingYun; LI BaiWen
2002-01-01
The B-spline basis set method is used to study the properties of helium confined endohedrally at the geometrical centre of a fullerene. The boundary conditions of the wavefunctions can be simply satisfied with this method. From our results, the phenomenon of "mirror collapse" is found in the case of confined helium. The interesting behaviors of confined helium are also discussed.
Spline energy method and its application in the structural analysis of antenna reflector
Wang, Deman; Wu, Qinbao
A method is proposed for analyzing combined structures consisting of shell and beam (rib) members. The cubic B spline function is used to interpolate the displacements and the total potential energy of the shell and the ribs. The equilibrium simultaneous equations can be obtained according to the principle of minimum potential energy.
Removal of Baseline Wander Noise from Electrocardiogram (ECG) using Fifth-order Spline Interpolation
Directory of Open Access Journals (Sweden)
John A. OJO
2016-10-01
Full Text Available Baseline wandering can mask some important features of the Electrocardiogram (ECG) signal, hence it is desirable to remove this noise for proper analysis and display of the ECG signal. This paper presents the implementation and evaluation of spline interpolation and linear phase FIR filtering methods to remove this noise. The spline interpolation method requires the QRS waves to be first detected and a fifth-order (quintic) interpolation technique applied to determine the smoothest curve joining several QRS points. Filtering of the ECG baseline wander was performed by using the difference between the estimated baseline wander and the noisy ECG signal. ECG signals from the MIT-BIH arrhythmia database were used to test the system, while the technique was implemented in MATLAB. The performance of the system was evaluated using Average Power (AP) after filtering, Mean Square Error (MSE) and the Signal to Noise Ratio (SNR). The quintic spline interpolation gave the best performance in terms of AP, MSE and SNR when compared with linear phase filtering and cubic (3rd-order) spline interpolation methods.
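The spline approach can be sketched as follows: estimate the wander with a fifth-order (quintic) spline through per-beat fiducial points, then subtract it from the signal. The synthetic "ECG", drift, and beat locations below are invented; a real implementation would use detected QRS fiducial points.

```python
import numpy as np
from scipy.interpolate import make_interp_spline

# Synthetic signal: slow sinusoidal baseline drift plus a stand-in for the
# ECG content. Fiducial (per-beat) times are assumed, not detected.
fs = 360.0                                        # sampling rate, Hz
t = np.arange(0.0, 10.0, 1.0 / fs)
wander = 0.3 * np.sin(2 * np.pi * 0.25 * t)       # baseline drift
ecg = wander + 0.05 * np.sin(2 * np.pi * 15.0 * t)

beats = np.arange(0.5, 9.6, 0.8)                  # fiducial times (assumed)
idx = np.round(beats * fs).astype(int)

# Quintic (k=5) spline through the fiducial samples estimates the wander.
baseline = make_interp_spline(t[idx], ecg[idx], k=5)(t)
clean = ecg - baseline                            # baseline-corrected signal
```

Subtracting the estimated baseline rather than high-pass filtering avoids distorting low-frequency ECG features such as the ST segment.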
A Unified Representation Scheme for Solid Geometric Objects Using B-splines (extended Abstract)
Bahler, D.
1985-01-01
A geometric representation scheme called the B-spline cylinder, which consists of interpolation between pairs of uniform periodic cubic B-spline curves, is discussed. This approach carries a number of interesting implications. For one, a single relatively simple database schema can be used to represent a reasonably large class of objects, since the spline representation is flexible enough to allow a large domain of representable objects at very little cost in data complexity. The model is thus very storage-efficient. A second feature of such a system is that it reduces to one the number of routines which the system must support to perform a given operation on objects. Third, the scheme enables easy conversion to and from other representations. The formal definition of the cylinder entity is given, its geometric properties are explored, and several operations on such objects are defined. Some general purpose criteria for evaluating any geometric representation scheme are introduced, and the B-spline cylinder scheme is evaluated against these criteria.
A new extension algorithm for cubic B-splines based on minimal strain energy
Institute of Scientific and Technical Information of China (English)
MO Guo-liang; ZHAO Ya-nan
2006-01-01
Extension of a B-spline curve or surface is a useful function in a CAD system. This paper presents an algorithm for extending cubic B-spline curves or surfaces to one or more target points. To keep the extension curve segment GC2-continuous with the original one, a family of cubic polynomial interpolation curves can be constructed. One curve is chosen as the solution from a sub-class of such a family by setting one GC2 parameter to be zero and determining the second GC2 parameter by minimizing the strain energy. To simplify the final curve representation, the extension segment is reparameterized to achieve C2-continuity with the given B-spline curve, and then knot removal from the curve is performed. As a result, a sub-optimized solution subject to the given constraints and criteria is obtained. Additionally, the new control points of the extension B-spline segment can be determined by solving lower triangular linear equations. Some computed examples comparing our method with other methods are given.
Application of Cubic Spline in the Implementation of Braces for the Case of a Child
Directory of Open Access Journals (Sweden)
Azmin Sham Rambely
2012-01-01
Full Text Available Problem statement: Orthodontic teeth movement is influenced by the characteristics of the applied force, including its magnitude and direction, which are normally based on shapes such as the ellipsoid, parabola and U-shape that are symmetric. However, this will affect the movement of the whole set of teeth. Approach: This study intends to compare the general form of teeth with another method, called the cubic spline, to get a minimum error in representing the general form of teeth. The cubic spline method is applied to a mathematical model of a child's teeth, which is produced through resignation of orthodontic wires. It is also meant to create a clear view of the true nature of orthodontic wires. Results: Based on the mathematical characteristics of the spline and the real data of a teeth model, the cubic spline proves very useful in reflecting the shape of a curve because the chosen points are not totally symmetric. Conclusion/Recommendation: Therefore, a symmetrical curve can be produced for the teeth's shape, which is basically asymmetric.
Numerical solution of functional integral equations by using B-splines
Directory of Open Access Journals (Sweden)
Reza Firouzdor
2014-05-01
Full Text Available This paper describes an approximating solution, based on Lagrange interpolation and spline functions, to treat functional integral equations of Fredholm type and Volterra type. This method can be extended to functional differential and integro-differential equations. To show the efficiency of the method we give some numerical examples.
Bayesian priors for transiting planets
Kipping, David M
2016-01-01
As astronomers push towards discovering ever-smaller transiting planets, it is increasingly common to deal with low signal-to-noise ratio (SNR) events, where the choice of priors plays an influential role in Bayesian inference. In the analysis of exoplanet data, the selection of priors is often treated as a nuisance, with observers typically defaulting to uninformative distributions. Such treatments miss a key strength of the Bayesian framework, especially in the low SNR regime, where even weak a priori information is valuable. When estimating the parameters of a low-SNR transit, two key pieces of information are known: (i) the planet has the correct geometric alignment to transit and (ii) the transit event exhibits sufficient signal-to-noise to have been detected. These represent two forms of observational bias. Accordingly, when fitting transits, the model parameter priors should not follow the intrinsic distributions of said terms, but rather those of both the intrinsic distributions and the observational ...
Bayesian approach to rough set
Marwala, Tshilidzi
2007-01-01
This paper proposes an approach to training rough set models within a Bayesian framework, using the Markov chain Monte Carlo (MCMC) method. The prior probabilities are constructed from the prior knowledge that good rough set models have fewer rules. Markov chain Monte Carlo sampling is conducted by sampling in the rough set granule space, and the Metropolis algorithm is used as the acceptance criterion. The proposed method is tested on estimating the risk of HIV given demographic data. The results show that the proposed approach is able to achieve an average accuracy of 58%, with the accuracy varying up to 66%. In addition, the Bayesian rough set gives the probabilities of the estimated HIV status as well as the linguistic rules describing how the demographic parameters drive the risk of HIV.
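The Metropolis acceptance step at the heart of such MCMC training can be shown with a minimal random-walk sampler. The target here is a toy Beta posterior for a risk probability (uniform prior, binomial likelihood); the counts are invented and this is not the paper's rough-set granule space.

```python
import numpy as np

# Minimal random-walk Metropolis sampler on a toy Beta-binomial posterior.
rng = np.random.default_rng(42)
k, n = 29, 50                            # "positive" outcomes out of n (assumed)

def log_post(p):
    if not 0.0 < p < 1.0:
        return -np.inf                   # outside the support
    return k * np.log(p) + (n - k) * np.log(1.0 - p)

p, chain = 0.5, []
for _ in range(20000):
    prop = p + rng.normal(0.0, 0.1)      # symmetric random-walk proposal
    if np.log(rng.random()) < log_post(prop) - log_post(p):
        p = prop                         # Metropolis accept; otherwise keep p
    chain.append(p)
posterior_mean = float(np.mean(chain[5000:]))   # discard burn-in
```

Because the proposal is symmetric, the acceptance ratio needs only the unnormalized posterior, which is exactly what makes Metropolis usable on spaces (like rule granules) where the normalizing constant is intractable.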
Deep Learning and Bayesian Methods
Directory of Open Access Journals (Sweden)
Prosper Harrison B.
2017-01-01
Full Text Available A revolution is underway in which deep neural networks are routinely used to solve difficult problems such as face recognition and natural language understanding. Particle physicists have taken notice and have started to deploy these methods, achieving results that suggest a potentially significant shift in how data might be analyzed in the not too distant future. We discuss a few recent developments in the application of deep neural networks and then indulge in speculation about how such methods might be used to automate certain aspects of data analysis in particle physics. Next, the connection to Bayesian methods is discussed and the paper ends with thoughts on a significant practical issue, namely, how, from a Bayesian perspective, one might optimize the construction of deep neural networks.
Deep Learning and Bayesian Methods
Prosper, Harrison B.
2017-03-01
A revolution is underway in which deep neural networks are routinely used to solve difficult problems such as face recognition and natural language understanding. Particle physicists have taken notice and have started to deploy these methods, achieving results that suggest a potentially significant shift in how data might be analyzed in the not too distant future. We discuss a few recent developments in the application of deep neural networks and then indulge in speculation about how such methods might be used to automate certain aspects of data analysis in particle physics. Next, the connection to Bayesian methods is discussed and the paper ends with thoughts on a significant practical issue, namely, how, from a Bayesian perspective, one might optimize the construction of deep neural networks.
Bayesian Source Separation and Localization
Knuth, K H
1998-01-01
The problem of mixed signals occurs in many different contexts; one of the most familiar being acoustics. The forward problem in acoustics consists of finding the sound pressure levels at various detectors resulting from sound signals emanating from the active acoustic sources. The inverse problem consists of using the sound recorded by the detectors to separate the signals and recover the original source waveforms. In general, the inverse problem is unsolvable without additional information. This general problem is called source separation, and several techniques have been developed that utilize maximum entropy, minimum mutual information, and maximum likelihood. In previous work, it has been demonstrated that these techniques can be recast in a Bayesian framework. This paper demonstrates the power of the Bayesian approach, which provides a natural means for incorporating prior information into a source model. An algorithm is developed that utilizes information regarding both the statistics of the amplitudes...
Bayesian Inference for Radio Observations
Lochner, Michelle; Zwart, Jonathan T L; Smirnov, Oleg; Bassett, Bruce A; Oozeer, Nadeem; Kunz, Martin
2015-01-01
(Abridged) New telescopes like the Square Kilometre Array (SKA) will push into a new sensitivity regime and expose systematics, such as direction-dependent effects, that could previously be ignored. Current methods for handling such systematics rely on alternating best estimates of instrumental calibration and models of the underlying sky, which can lead to inaccurate uncertainty estimates and biased results because such methods ignore any correlations between parameters. These deconvolution algorithms produce a single image that is assumed to be a true representation of the sky, when in fact it is just one realisation of an infinite ensemble of images compatible with the noise in the data. In contrast, here we report a Bayesian formalism that simultaneously infers both systematics and science. Our technique, Bayesian Inference for Radio Observations (BIRO), determines all parameters directly from the raw data, bypassing image-making entirely, by sampling from the joint posterior probability distribution. Thi...
Bayesian inference on proportional elections.
Directory of Open Access Journals (Sweden)
Gabriel Hideki Vatanabe Brunello
Full Text Available Polls for majoritarian voting systems usually show estimates of the percentage of votes for each candidate. However, proportional vote systems do not necessarily guarantee the candidate with the most percentage of votes will be elected. Thus, traditional methods used in majoritarian elections cannot be applied on proportional elections. In this context, the purpose of this paper was to perform a Bayesian inference on proportional elections considering the Brazilian system of seats distribution. More specifically, a methodology to answer the probability that a given party will have representation on the chamber of deputies was developed. Inferences were made on a Bayesian scenario using the Monte Carlo simulation technique, and the developed methodology was applied on data from the Brazilian elections for Members of the Legislative Assembly and Federal Chamber of Deputies in 2010. A performance rate was also presented to evaluate the efficiency of the methodology. Calculations and simulations were carried out using the free R statistical software.
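The simulation idea can be sketched directly: draw vote shares from a Dirichlet posterior (uniform prior plus multinomial poll counts) and, for each draw, run a seat-allocation rule to estimate each party's probability of winning representation. The poll counts and seat total are invented, and the D'Hondt rule is used here as a simple stand-in for the Brazilian seat-distribution system the paper models.

```python
import numpy as np

# Monte Carlo estimate of P(party wins at least one seat) under posterior
# uncertainty in vote shares. All numbers are synthetic.
rng = np.random.default_rng(7)
poll = np.array([420, 310, 150, 80, 40])     # poll counts per party (assumed)
n_seats = 10

def dhondt(shares, n_seats):
    """Allocate seats by repeatedly awarding the largest quotient."""
    alloc = np.zeros(len(shares), dtype=int)
    for _ in range(n_seats):
        alloc[np.argmax(shares / (alloc + 1))] += 1
    return alloc

draws = rng.dirichlet(poll + 1, size=5000)   # posterior draws of vote shares
has_seat = np.array([dhondt(p, n_seats) > 0 for p in draws])
prob_representation = has_seat.mean(axis=0)  # per-party P(at least one seat)
```

This is the sense in which percentage-of-votes estimates alone are insufficient for proportional systems: the quantity of interest is a probability over discrete seat outcomes, which the simulation integrates over posterior vote-share uncertainty.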
Sinha, Samiran
2009-08-10
We propose a semiparametric Bayesian method for handling measurement error in nutritional epidemiological data. Our goal is to estimate nonparametrically the form of association between a disease and exposure variable while the true values of the exposure are never observed. Motivated by nutritional epidemiological data, we consider the setting where a surrogate covariate is recorded in the primary data, and a calibration data set contains information on the surrogate variable and repeated measurements of an unbiased instrumental variable of the true exposure. We develop a flexible Bayesian method where not only is the relationship between the disease and exposure variable treated semiparametrically, but also the relationship between the surrogate and the true exposure is modeled semiparametrically. The two nonparametric functions are modeled simultaneously via B-splines. In addition, we model the distribution of the exposure variable as a Dirichlet process mixture of normal distributions, thus making its modeling essentially nonparametric and placing this work into the context of functional measurement error modeling. We apply our method to the NIH-AARP Diet and Health Study and examine its performance in a simulation study.
Sarkar, Abhra
2014-10-02
We consider the problem of estimating the density of a random variable when precise measurements on the variable are not available, but replicated proxies contaminated with measurement error are available for sufficiently many subjects. Under the assumption of additive measurement errors this reduces to a problem of deconvolution of densities. Deconvolution methods often make restrictive and unrealistic assumptions about the density of interest and the distribution of measurement errors, e.g., normality and homoscedasticity and thus independence from the variable of interest. This article relaxes these assumptions and introduces novel Bayesian semiparametric methodology based on Dirichlet process mixture models for robust deconvolution of densities in the presence of conditionally heteroscedastic measurement errors. In particular, the models can adapt to asymmetry, heavy tails and multimodality. In simulation experiments, we show that our methods vastly outperform a recent Bayesian approach based on estimating the densities via mixtures of splines. We apply our methods to data from nutritional epidemiology. Even in the special case when the measurement errors are homoscedastic, our methodology is novel and dominates other methods that have been proposed previously. Additional simulation results, instructions on getting access to the data set and R programs implementing our methods are included as part of online supplemental materials.
Dynamic smoothing of nanocomposite films
Pei, Y.T.; Turkin, A; Chen, C.Q.; Shaha, K.P.; Vainshtein, D.; Hosson, J.Th.M. De
2010-01-01
In contrast to the commonly observed dynamic roughening in film growth we have observed dynamic smoothing in the growth of diamondlike-carbon nanocomposite (TiC/a-C) films up to 1.5 mu m thickness. Analytical and numerical simulations, based on the Edwards-Wilkinson model and the Mullins model, visu
Nonlinear smoothing for random fields
Aihara, Shin Ichi; Bagchi, Arunabha
1995-01-01
Stochastic nonlinear elliptic partial differential equations with white noise disturbances are studied in the countably additive measure set up. Introducing the Onsager-Machlup function to the system model, the smoothing problem for maximizing the modified likelihood functional is solved and the exp
Diakogiannis, Foivos I; Ibata, Rodrigo A
2014-01-01
The spherical Jeans equation is widely used to estimate the mass content of stellar systems with apparent spherical symmetry. However, this method suffers from a degeneracy between the assumed mass density and the kinematic anisotropy profile, $\beta(r)$. In a previous work, we laid the theoretical foundations for an algorithm that combines smoothing B-splines with equations from dynamics to remove this degeneracy. Specifically, our method reconstructs a unique kinematic profile of $\sigma_{rr}^2$ and $\sigma_{tt}^2$ for an assumed free functional form of the potential and mass density $(\Phi,\rho)$ and given a set of observed line-of-sight velocity dispersion measurements, $\sigma_{los}^2$. In Paper I (submitted to MNRAS: MN-14-0101-MJ) we demonstrated the efficiency of our algorithm with a very simple example and commented on the need for optimum smoothing of the B-spline representation, in order to avoid unphysical variational behaviour when we have large uncertainty in our data. In the curren...
Bayesian analysis for kaon photoproduction
Energy Technology Data Exchange (ETDEWEB)
Marsainy, T., E-mail: tmart@fisika.ui.ac.id; Mart, T., E-mail: tmart@fisika.ui.ac.id [Department Fisika, FMIPA, Universitas Indonesia, Depok 16424 (Indonesia)
2014-09-25
We have investigated the contribution of the nucleon resonances in the kaon photoproduction process by using an established statistical decision-making method, i.e., the Bayesian method. This method not only evaluates the model over its entire parameter space, but also takes the prior information and experimental data into account. The result indicates that certain resonances have larger probabilities to contribute to the process.
Bayesian priors and nuisance parameters
Gupta, Sourendu
2016-01-01
Bayesian techniques are widely used to obtain spectral functions from correlators. We suggest a technique to rid the results of nuisance parameters, i.e., parameters which are needed for the regularization but cannot be determined from data. We give examples where the method works, including a pion mass extraction with two flavours of staggered quarks at a lattice spacing of about 0.07 fm. We also give an example where the method does not work.
Space Shuttle RTOS Bayesian Network
Morris, A. Terry; Beling, Peter A.
2001-01-01
With shrinking budgets and the requirements to increase reliability and operational life of the existing orbiter fleet, NASA has proposed various upgrades for the Space Shuttle that are consistent with national space policy. The cockpit avionics upgrade (CAU), a high priority item, has been selected as the next major upgrade. The primary functions of cockpit avionics include flight control, guidance and navigation, communication, and orbiter landing support. Secondary functions include the provision of operational services for non-avionics systems such as data handling for the payloads and caution and warning alerts to the crew. Recently, a process to select the optimal commercial-off-the-shelf (COTS) real-time operating system (RTOS) for the CAU was conducted by United Space Alliance (USA) Corporation, which is a joint venture between Boeing and Lockheed Martin, the prime contractor for space shuttle operations. In order to independently assess the RTOS selection, NASA has used the Bayesian network-based scoring methodology described in this paper. Our two-stage methodology addresses the issue of RTOS acceptability by incorporating functional, performance and non-functional software measures related to reliability, interoperability, certifiability, efficiency, correctness, business, legal, product history, cost and life cycle. The first stage of the methodology involves obtaining scores for the various measures using a Bayesian network. The Bayesian network incorporates the causal relationships between the various and often competing measures of interest while also assisting the inherently complex decision analysis process with its ability to reason under uncertainty. The structure and selection of prior probabilities for the network is extracted from experts in the field of real-time operating systems. Scores for the various measures are computed using Bayesian probability. In the second stage, multi-criteria trade-off analyses are performed between the scores
Elements of Bayesian experimental design
Energy Technology Data Exchange (ETDEWEB)
Sivia, D.S. [Rutherford Appleton Lab., Oxon (United Kingdom)
1997-09-01
We consider some elements of the Bayesian approach that are important for optimal experimental design. While the underlying principles used are very general, and are explained in detail in a recent tutorial text, they are applied here to the specific case of characterising the inferential value of different resolution peakshapes. This particular issue was considered earlier by Silver, Sivia and Pynn (1989, 1990a, 1990b), and the following presentation confirms and extends the conclusions of their analysis.
Bayesian Sampling using Condition Indicators
DEFF Research Database (Denmark)
Faber, Michael H.; Sørensen, John Dalsgaard
2002-01-01
… This allows for a Bayesian formulation of the indicators whereby the experience and expertise of the inspection personnel may be fully utilized and consistently updated as frequentistic information is collected. The approach is illustrated on an example considering a concrete structure subject to corrosion. … It is shown how half-cell potential measurements may be utilized to update the probability of excessive repair after 50 years. …
Cubic Shape Preserving Rational Spline and Its Offset Curve
Institute of Scientific and Technical Information of China (English)
刘文艳; 王强; 张养聪
2012-01-01
In order to make rational interpolating splines and their offset curves more flexible and more widely applicable in industrial design, manufacturing, computer graphics, and CAGD, a cubic rational interpolation spline model with parameters is constructed to generate a smooth interpolating curve, and its offset curve, through a finite number of discrete data points. By selecting the shape parameters in the model, the curve can be made shape-preserving with first-order continuity. The curve and its offset can be modified interactively by adjusting the parameters in the interpolation function until a satisfactory result is obtained, and a subdivision algorithm can be applied to reach the required approximation precision.
Institute of Scientific and Technical Information of China (English)
李军成; 刘成志; 易叶青
2016-01-01
In view of the deficiencies of the cubic Cardinal spline and Catmull-Rom spline, C2-continuous quintic Cardinal and Catmull-Rom splines with shape factors are presented in this paper. First, a class of quintic Cardinal spline basis functions with two shape factors is constructed. Then, quintic Cardinal spline curves and surfaces with shape factors are defined on the basis of the proposed basis functions, and monotonicity-preserving interpolation with the quintic Cardinal spline function is discussed. Finally, the corresponding univariate and bivariate quintic Catmull-Rom spline interpolation functions are studied, and methods for determining the optimal univariate and bivariate quintic Catmull-Rom spline interpolation functions are given. Example results show that the quintic Cardinal and Catmull-Rom splines are C2 continuous without any conditions, that their shape can be adjusted flexibly through the built-in shape factors, and that satisfactory interpolation results are obtained with the optimal quintic Catmull-Rom spline interpolation functions.
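For reference, the classical cubic Catmull-Rom segment that these quintic constructions generalize can be written in a few lines (a textbook sketch of the standard cubic scheme, not the paper's quintic basis with shape factors):

```python
def catmull_rom(p0, p1, p2, p3, t):
    """Classical cubic Catmull-Rom segment: interpolates p1 at t=0 and
    p2 at t=1, with p0 and p3 acting as tangent-shaping neighbours."""
    return 0.5 * (
        2.0 * p1
        + (-p0 + p2) * t
        + (2.0 * p0 - 5.0 * p1 + 4.0 * p2 - p3) * t**2
        + (-p0 + 3.0 * p1 - 3.0 * p2 + p3) * t**3
    )
```

The cubic scheme is only C1 at segment joins, which is the limitation the quintic construction above removes.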
Geometric Construction of Algebraic Hyperbolic B-Spline
Institute of Scientific and Technical Information of China (English)
朱平; 汪国昭
2009-01-01
Degree elevation of spline curves is an essential technique for communication between CAD systems. Since the degree-elevation algorithm for bi-order splines can be interpreted as a corner-cutting process, degree elevation of spline curves has a clear geometric meaning. Taking the algebraic hyperbolic B-spline curve as an example, it is proved that the curve's control-polygon sequence converges, under repeated degree elevation, to the initial algebraic hyperbolic B-spline curve, just as for Bézier curves. By this conclusion, common curves including B-splines, hyperbolas, and catenaries can be generated by geometric corner cutting in the same way as Bézier curves.
Fatigue crack growth monitoring of idealized gearbox spline component using acoustic emission
Zhang, Lu; Ozevin, Didem; Hardman, William; Kessler, Seth; Timmons, Alan
2016-04-01
The spline component of a gearbox structure is a non-redundant element that requires early detection of flaws to prevent catastrophic failures. The acoustic emission (AE) method is a direct way of detecting active flaws; however, it suffers from the influence of background noise and from location- and sensor-dependent pattern recognition. It is important to identify the source mechanism and adapt it to different test conditions and sensors. In this paper, the fatigue crack growth of a notched and flattened gearbox spline component is monitored using the AE method in a laboratory environment. The test sample has the major details of the spline component on a flattened geometry. The AE data are collected continuously, together with strain gauges strategically positioned on the structure. The fatigue test is run at 4 Hz with a ratio of minimum to maximum loading of 0.1 in the tensile regime. A significant amount of continuous emission is released from the notch tip due to the formation of plastic deformation and slow crack growth. The frequency spectra of continuous and burst emissions are compared to understand the difference between sudden and gradual crack growth. The predicted crack growth rate is compared with the AE data using the cumulative AE events at the notch tip. The source mechanism of sudden crack growth is obtained by solving the inverse problem from output signal to input signal.
Cervical cancer survival prediction using hybrid of SMOTE, CART and smooth support vector machine
Purnami, S. W.; Khasanah, P. M.; Sumartini, S. H.; Chosuvivatwong, V.; Sriplung, H.
2016-04-01
According to the WHO, every two minutes one patient dies from cervical cancer. The high mortality rate is due to the lack of awareness among women of early detection. Several factors supposedly influence the survival of cervical cancer patients, including age, anemia status, stage, type of treatment, complications and secondary disease. This study aims to classify and predict cervical cancer survival based on those factors. Various classification methods were used: classification and regression trees (CART), the smooth support vector machine (SSVM), and the third-order spline SSVM (TSSVM). Since the cervical cancer data are imbalanced, the synthetic minority oversampling technique (SMOTE) is used to handle the imbalanced dataset. The performance of these methods is evaluated using accuracy, sensitivity and specificity. Results of this study show that balancing the data with SMOTE as preprocessing can improve classification performance. The SMOTE-SSVM method provided better results than SMOTE-TSSVM and SMOTE-CART.
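The core of SMOTE — interpolating a minority-class sample toward one of its k nearest minority neighbours — can be sketched in a few lines (a minimal NumPy illustration, not the implementation used in the study):

```python
import numpy as np

def smote(X_min, n_new, k=3, rng=None):
    """Minimal SMOTE sketch: synthesize n_new minority samples by
    interpolating a random minority sample toward one of its k nearest
    minority-class neighbours by a random fraction in [0, 1)."""
    rng = np.random.default_rng(rng)
    n = len(X_min)
    # pairwise distances within the minority class
    d = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)               # exclude self-matches
    nbrs = np.argsort(d, axis=1)[:, :k]       # k nearest neighbours per sample
    new = []
    for _ in range(n_new):
        i = rng.integers(n)
        j = nbrs[i, rng.integers(min(k, n - 1))]
        gap = rng.random()
        new.append(X_min[i] + gap * (X_min[j] - X_min[i]))
    return np.array(new)
```

Because synthetic points are convex combinations of existing minority samples, they stay inside the minority class's local geometry rather than being duplicated verbatim.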
Numerical simulation of liquid jet breakup using smoothed particle hydrodynamics (SPH)
Pourabdian, Majid; Morad, Mohammad Reza
2016-01-01
In this paper, breakup of a liquid jet is simulated using smoothed particle hydrodynamics (SPH), a meshless Lagrangian numerical method. To this end, the governing flow equations are discretized with the SPH method. The open-source SPHysics code has been utilized for the numerical solutions and extended by adding surface-tension effects. The proposed method is validated on the dam-break-with-obstacle problem. Finally, a simulation of two-dimensional liquid jet flow is carried out and its breakup behavior, considering one-phase flow, is investigated. The liquid breakup length in the Rayleigh regime is calculated for various flow conditions, such as different Reynolds and Weber numbers, and the results are validated against an experimental correlation. All numerical solutions are carried out for both the Wendland and cubic spline kernel functions, with the Wendland kernel giving more accurate results. The results are compared to the MPS method for inviscid liquid as well. T...
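The two kernels compared in the paper are standard in the SPH literature; in two dimensions, with q = r/h and support radius 2h, they can be sketched as follows (the normalization constants are the usual 2-D textbook values, assumed here rather than taken from the paper):

```python
import numpy as np

def cubic_spline_kernel(r, h):
    """Monaghan cubic spline SPH kernel, 2-D normalization 10/(7*pi*h^2)."""
    q = r / h
    sigma = 10.0 / (7.0 * np.pi * h**2)
    w = np.where(q <= 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
         np.where(q <= 2.0, 0.25 * (2.0 - q)**3, 0.0))
    return sigma * w

def wendland_kernel(r, h):
    """Wendland C2 SPH kernel, 2-D normalization 7/(4*pi*h^2)."""
    q = r / h
    alpha = 7.0 / (4.0 * np.pi * h**2)
    return np.where(q <= 2.0, alpha * (1.0 - q / 2.0)**4 * (1.0 + 2.0 * q), 0.0)
```

Both kernels integrate to one over the plane, which is the consistency requirement for SPH density estimates; the Wendland kernel's smoother shape is one reason it often behaves better against particle clumping.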
12th Brazilian Meeting on Bayesian Statistics
Louzada, Francisco; Rifo, Laura; Stern, Julio; Lauretto, Marcelo
2015-01-01
Through refereed papers, this volume focuses on the foundations of the Bayesian paradigm; their comparison to objectivistic or frequentist Statistics counterparts; and the appropriate application of Bayesian foundations. This research in Bayesian Statistics is applicable to data analysis in biostatistics, clinical trials, law, engineering, and the social sciences. EBEB, the Brazilian Meeting on Bayesian Statistics, is held every two years by the ISBrA, the International Society for Bayesian Analysis, one of the most active chapters of the ISBA. The 12th meeting took place March 10-14, 2014 in Atibaia. Interest in foundations of inductive Statistics has grown recently in accordance with the increasing availability of Bayesian methodological alternatives. Scientists need to deal with the ever more difficult choice of the optimal method to apply to their problem. This volume shows how Bayes can be the answer. The examination and discussion on the foundations work towards the goal of proper application of Bayesia...
Bayesian Inversion of Seabed Scattering Data
2014-09-30
Bayesian Inversion of Seabed Scattering Data (Special Research Award in Ocean Acoustics). Gavin A.M.W. Steininger, School of Earth & Ocean … The objectives of this project are to carry out joint Bayesian inversion of scattering and reflection data to estimate the in-situ seabed scattering and geoacoustic parameters …
Anomaly Detection and Attribution Using Bayesian Networks
2014-06-01
Anomaly Detection and Attribution Using Bayesian Networks. Andrew Kirk, Jonathan Legg and Edwin El-Mahassni, National Security and … detection in Bayesian networks, enabling both the detection and explanation of anomalous cases in a dataset. By exploiting the structure of a Bayesian network, our algorithm is able to efficiently search for local maxima of data conflict between closely related variables. Benchmark tests using …
Compiling Relational Bayesian Networks for Exact Inference
DEFF Research Database (Denmark)
Jaeger, Manfred; Chavira, Mark; Darwiche, Adnan
2004-01-01
We describe a system for exact inference with relational Bayesian networks as defined in the publicly available \primula\ tool. The system is based on compiling propositional instances of relational Bayesian networks into arithmetic circuits and then performing online inference by evaluating and differentiating these circuits in time linear in their size. We report on experimental results showing the successful compilation, and efficient inference, on relational Bayesian networks whose {\primula}-generated propositional instances have thousands of variables, and whose jointrees have clusters …
Institute of Scientific and Technical Information of China (English)
XIONG Lei; LI haijiao; ZHANG Lewen
2008-01-01
The fourth-order B-spline wavelet scaling functions are used to solve the two-dimensional unsteady diffusion equation. The calculations from a case history indicate that the method provides high accuracy, and the computational efficiency is enhanced due to the small matrix derived from this method. The respective features of 3-spline wavelet scaling functions, 4-spline wavelet scaling functions and quasi-wavelets used to solve the two-dimensional unsteady diffusion equation are compared. The proposed method has potential applications in many fields including marine science.
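For context, B-spline basis (scaling) functions of any order can be evaluated with the standard Cox-de Boor recursion; the sketch below is the generic textbook recursion, not the paper's wavelet construction:

```python
def bspline_basis(i, k, t, knots):
    """Cox-de Boor recursion: value at t of the i-th B-spline basis of
    order k (degree k-1) on the given non-decreasing knot vector."""
    if k == 1:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = 0.0
    if knots[i + k - 1] != knots[i]:
        left = ((t - knots[i]) / (knots[i + k - 1] - knots[i])
                * bspline_basis(i, k - 1, t, knots))
    right = 0.0
    if knots[i + k] != knots[i + 1]:
        right = ((knots[i + k] - t) / (knots[i + k] - knots[i + 1])
                 * bspline_basis(i + 1, k - 1, t, knots))
    return left + right
```

On a uniform knot vector the order-4 bases are the cubic B-spline scaling functions, and they sum to one on the interior of the knot span (partition of unity).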
SYNTHESIZED EXPECTED BAYESIAN METHOD OF PARAMETRIC ESTIMATE
Institute of Scientific and Technical Information of China (English)
Ming HAN; Yuanyao DING
2004-01-01
This paper develops a new method of parametric estimation, named the "synthesized expected Bayesian method". When samples of products are tested and no failure events occur, the definition of the expected Bayesian estimate is introduced and estimates of the failure probability and failure rate are provided. After some failure information is introduced by making an extra test, a synthesized expected Bayesian method is defined and used to estimate the failure probability, failure rate and some other parameters of exponential and Weibull distributions of populations. Finally, calculations are performed on practical problems, which show that the synthesized expected Bayesian method is feasible and easy to operate.
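The flavour of a zero-failure Bayesian estimate can be illustrated with a conjugate Beta-binomial sketch (our own illustration with assumed hyperparameters, not the paper's synthesized estimator): with n tests and zero failures, a Beta(a, b) prior on the failure probability updates to Beta(a, b + n), and an "expected Bayesian" style estimate averages the posterior mean over a hyperprior on b.

```python
from statistics import mean

def posterior_mean_zero_failures(n, a=1.0, b=1.0):
    """Posterior mean of the failure probability p under a Beta(a, b)
    prior after n tests with zero observed failures."""
    return a / (a + b + n)

def e_bayes_zero_failures(n, c=5.0, steps=1000):
    """Sketch of an expected-Bayesian estimate: average the posterior
    mean over a uniform hyperprior b ~ U(1, c), with a = 1 (midpoint rule)."""
    bs = [1.0 + (c - 1.0) * (i + 0.5) / steps for i in range(steps)]
    return mean(posterior_mean_zero_failures(n, 1.0, b) for b in bs)
```

The estimate stays strictly positive even with zero observed failures and shrinks as the number of failure-free tests grows, which is the behaviour the zero-failure setting requires.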
Learning dynamic Bayesian networks with mixed variables
DEFF Research Database (Denmark)
Bøttcher, Susanne Gammelgaard
This paper considers dynamic Bayesian networks for discrete and continuous variables. We treat only the case where the distribution of the variables is conditional Gaussian. We show how to learn the parameters and structure of a dynamic Bayesian network and also how the Markov order can be learned. An automated procedure for specifying prior distributions for the parameters in a dynamic Bayesian network is presented. It is a simple extension of the procedure for ordinary Bayesian networks. Finally, Wölfer's sunspot numbers are analyzed.
Variational bayesian method of estimating variance components.
Arakawa, Aisaku; Taniguchi, Masaaki; Hayashi, Takeshi; Mikawa, Satoshi
2016-07-01
We developed a Bayesian analysis approach by using a variational inference method, a so-called variational Bayesian method, to determine the posterior distributions of variance components. This variational Bayesian method and an alternative Bayesian method using Gibbs sampling were compared in estimating genetic and residual variance components from both simulated data and publicly available real pig data. In the simulated data set, we observed strong bias toward overestimation of genetic variance for the variational Bayesian method in the case of low heritability and low population size, and less bias was detected with larger population sizes in both methods examined. The differences in the estimates of variance components between the variational Bayesian and the Gibbs sampling were not found in the real pig data. However, the posterior distributions of the variance components obtained with the variational Bayesian method had shorter tails than those obtained with the Gibbs sampling. Consequently, the posterior standard deviations of the genetic and residual variances of the variational Bayesian method were lower than those of the method using Gibbs sampling. The computing time required was much shorter with the variational Bayesian method than with the method using Gibbs sampling.
Very Smooth Points of Spaces of Operators
Indian Academy of Sciences (India)
T S S R K Rao
2003-02-01
In this paper we study very smooth points of Banach spaces with special emphasis on spaces of operators. We show that when the space of compact operators is an M-ideal in the space of bounded operators, a very smooth operator T attains its norm at a unique vector x (up to a constant multiple) and T(x) is a very smooth point of the range space. We show that if, for every equivalent norm on a Banach space, the dual unit ball has a very smooth point, then the space has the Radon-Nikodým property. We give an example of a smooth Banach space without any very smooth points.
A Comparison of Two Smoothing Methods for Word Bigram Models
Petö, L B
1994-01-01
Linda Bauman Peto, Department of Computer Science, University of Toronto. Word bigram models estimated from text corpora require smoothing methods to estimate the probabilities of unseen bigrams. The deleted estimation method uses the formula: Pr(i|j) = lambda f_i + (1 - lambda) f_{i|j}, where f_i and f_{i|j} are the relative frequency of i and the conditional relative frequency of i given j, respectively, and lambda is an optimized parameter. MacKay (1994) proposes a Bayesian approach using Dirichlet priors, which yields a different formula: Pr(i|j) = (alpha/(F_j + alpha)) m_i + (1 - alpha/(F_j + alpha)) f_{i|j}, where F_j is the count of j and alpha and m_i are optimized parameters. This thesis describes an experiment in which the two methods were trained on a two-million-word corpus taken from the Canadian _Hansard_ and compared on the basis of the experimental perplexity that they assigned to a shared test corpus. The methods proved to be about equally ...
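The two smoothing formulas can be exercised directly; the following NumPy sketch (our illustration, not the thesis's code) implements both, with `bigram[j, i]` holding the count of word i following word j:

```python
import numpy as np

def deleted_interpolation(bigram, lam):
    """Pr(i|j) = lam * f_i + (1 - lam) * f_{i|j}: fixed-weight mixture of
    unigram and conditional bigram relative frequencies."""
    f_i = bigram.sum(axis=0) / bigram.sum()          # unigram relative frequency
    F_j = bigram.sum(axis=1, keepdims=True)          # count of context j
    f_ij = np.divide(bigram, F_j, out=np.zeros_like(bigram, dtype=float),
                     where=F_j > 0)                  # conditional rel. frequency
    return lam * f_i + (1.0 - lam) * f_ij

def dirichlet_smoothing(bigram, alpha, m=None):
    """MacKay-style Dirichlet smoothing: Pr(i|j) = (alpha/(F_j+alpha)) m_i
    + (1 - alpha/(F_j+alpha)) f_{i|j}; the mixing weight depends on F_j."""
    f_i = bigram.sum(axis=0) / bigram.sum()
    m = f_i if m is None else m                      # fall back to unigram for m_i
    F_j = bigram.sum(axis=1, keepdims=True)
    f_ij = np.divide(bigram, F_j, out=np.zeros_like(bigram, dtype=float),
                     where=F_j > 0)
    w = alpha / (F_j + alpha)
    return w * m + (1.0 - w) * f_ij
```

The structural difference the thesis compares is visible here: deleted interpolation uses one global lambda, while the Dirichlet weight alpha/(F_j + alpha) shrinks toward the bigram estimate as the context count F_j grows.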
Smoothing of mixed complementarity problems
Energy Technology Data Exchange (ETDEWEB)
Gabriel, S.A.; More, J.J. [Argonne National Lab., IL (United States). Mathematics and Computer Science Div.
1995-09-01
The authors introduce a smoothing approach to the mixed complementarity problem, and study the limiting behavior of a path defined by approximate minimizers of a nonlinear least squares problem. The main result guarantees that, under a mild regularity condition, limit points of the iterates are solutions to the mixed complementarity problem. The analysis is applicable to a wide variety of algorithms suitable for large-scale mixed complementarity problems.
BayesLine: Bayesian Inference for Spectral Estimation of Gravitational Wave Detector Noise
Littenberg, Tyson B
2014-01-01
Gravitational wave data from ground-based detectors is dominated by instrument noise. Signals will be comparatively weak, and our understanding of the noise will influence detection confidence and signal characterization. Mis-modeled noise can produce large systematic biases in both model selection and parameter estimation. Here we introduce a multi-component, variable dimension, parameterized model to describe the Gaussian-noise power spectrum for data from ground-based gravitational wave interferometers. Called BayesLine, the algorithm models the noise power spectral density using cubic splines for smoothly varying broad-band noise and Lorentzians for narrow-band line features in the spectrum. We describe the algorithm and demonstrate its performance on data from the fifth and sixth LIGO science runs. Once fully integrated into LIGO/Virgo data analysis software, BayesLine will produce accurate spectral estimation and provide a means for marginalizing inferences drawn from the data over all plausible noise s...
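The two-component noise model — a smoothly varying broadband spectrum plus Lorentzian lines — can be sketched as follows (an illustrative sketch with made-up control points, not the BayesLine parameterization; for brevity the broadband part is interpolated in log-space here, standing in for BayesLine's cubic splines):

```python
import numpy as np

def lorentzian_line(f, f0, amp, width):
    """Narrow-band spectral line modeled as a Lorentzian peak at f0."""
    return amp * width**2 / ((f - f0)**2 + width**2)

def psd_model(f, ctrl_f, ctrl_logS, lines):
    """Broadband PSD from log-space control points (a linear-interpolation
    stand-in for a cubic spline) plus a sum of Lorentzian line features."""
    broadband = np.exp(np.interp(f, ctrl_f, ctrl_logS))
    return broadband + sum(lorentzian_line(f, f0, a, w) for f0, a, w in lines)
```

Working in log-PSD keeps the broadband component positive by construction, while each (f0, amp, width) triple adds one narrow line, mirroring the variable-dimension structure the abstract describes.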
Maximal right smooth extension chains
Huang, Yun Bao
2010-01-01
If $w=u\alpha$ for $\alpha\in \Sigma=\{1,2\}$ and $u\in \Sigma^*$, then $w$ is said to be a \textit{simple right extension} of $u$, denoted by $u\prec w$. Let $k$ be a positive integer and let $P^k(\epsilon)$ denote the set of all $C^\infty$-words of height $k$. Let $u_{1},\,u_{2},..., u_{m}\in P^{k}(\epsilon)$; if $u_{1}\prec u_{2}\prec ...\prec u_{m}$ and there is no element $v$ of $P^{k}(\epsilon)$ such that $v\prec u_{1}$ or $u_{m}\prec v$, then $u_{1}\prec u_{2}\prec...\prec u_{m}$ is said to be a \textit{maximal right smooth extension (MRSE) chain} of height $k$. In this paper, we show that \textit{MRSE} chains of height $k$ constitute a partition of smooth words of height $k$ and give a formula for the number of \textit{MRSE} chains of height $k$ for each positive integer $k$. Moreover, since there exist a minimal height $h_1$ and a maximal height $h_2$ of smooth words of length $n$ for each positive integer $n$, we find that \textit{MRSE} chains of heights $h_1-1$ and $h_2+1$ are good candidates t...
Bayesian Methods and Universal Darwinism
Campbell, John
2009-12-01
Bayesian methods since the time of Laplace have been understood by their practitioners as closely aligned to the scientific method. Indeed, a recent champion of Bayesian methods, E. T. Jaynes, titled his textbook on the subject Probability Theory: The Logic of Science. Many philosophers of science, including Karl Popper and Donald Campbell, have interpreted the evolution of science as a Darwinian process consisting of a `copy with selective retention' algorithm abstracted from Darwin's theory of natural selection. Arguments are presented for an isomorphism between Bayesian methods and Darwinian processes. Universal Darwinism, as the term has been developed by Richard Dawkins, Daniel Dennett and Susan Blackmore, is the collection of scientific theories which explain the creation and evolution of their subject matter as due to the operation of Darwinian processes. These subject matters span the fields of atomic physics, chemistry, biology and the social sciences. The principle of maximum entropy states that systems will evolve to states of highest entropy subject to the constraints of scientific law. This principle may be inverted to provide illumination as to the nature of scientific law. Our best cosmological theories suggest the universe contained much less complexity during the period shortly after the Big Bang than it does at present. The scientific subject matter of atomic physics, chemistry, biology and the social sciences has been created since that time. An explanation is proposed for the existence of this subject matter as due to the evolution of constraints, in the form of adaptations, imposed on maximum entropy. It is argued these adaptations were discovered and instantiated through the operation of a succession of Darwinian processes.
Bayesian phylogeography finds its roots.
Directory of Open Access Journals (Sweden)
Philippe Lemey
2009-09-01
As a key factor in endemic and epidemic dynamics, the geographical distribution of viruses has been frequently interpreted in the light of their genetic histories. Unfortunately, inference of historical dispersal or migration patterns of viruses has mainly been restricted to model-free heuristic approaches that provide little insight into the temporal setting of the spatial dynamics. The introduction of probabilistic models of evolution, however, offers unique opportunities to engage in this statistical endeavor. Here we introduce a Bayesian framework for inference, visualization and hypothesis testing of phylogeographic history. By implementing character mapping in Bayesian software that samples time-scaled phylogenies, we enable the reconstruction of timed viral dispersal patterns while accommodating phylogenetic uncertainty. Standard Markov model inference is extended with a stochastic search variable selection procedure that identifies the parsimonious descriptions of the diffusion process. In addition, we propose priors that can incorporate geographical sampling distributions or characterize alternative hypotheses about the spatial dynamics. To visualize the spatial and temporal information, we summarize inferences using virtual globe software. We describe how Bayesian phylogeography compares with previous parsimony analysis in the investigation of the influenza A H5N1 origin and H5N1 epidemiological linkage among sampling localities. Analysis of rabies in West African dog populations reveals how virus diffusion may enable endemic maintenance through continuous epidemic cycles. From these analyses, we conclude that our phylogeographic framework will be an important asset in molecular epidemiology that can easily be generalized to infer biogeography from genetic data for many organisms.
Pharmacology of airway smooth muscle proliferation
Gosens, Reinoud; Roscioni, Sara S.; Dekkers, Bart G. J.; Pera, Tonio; Schmidt, Martina; Schaafsma, Dedmer; Zaagsma, Johan; Meurs, Herman
2008-01-01
Airway smooth muscle thickening is a pathological feature that contributes significantly to airflow limitation and airway hyperresponsiveness in asthma. Ongoing research efforts aimed at identifying the mechanisms responsible for the increased airway smooth muscle mass have indicated that hyperplasi
Mobile real-time EEG imaging Bayesian inference with sparse, temporally smooth source priors
DEFF Research Database (Denmark)
Hansen, Lars Kai; Hansen, Sofie Therese; Stahlhut, Carsten
2013-01-01
EEG based real-time imaging of human brain function has many potential applications including quality control, in-line experimental design, brain state decoding, and neuro-feedback. In mobile applications these possibilities are attractive as elements in systems for personal state monitoring...
Bayesian Query-Focused Summarization
Daumé, Hal
2009-01-01
We present BayeSum (for ``Bayesian summarization''), a model for sentence extraction in query-focused summarization. BayeSum leverages the common case in which multiple documents are relevant to a single query. Using these documents as reinforcement for query terms, BayeSum is not afflicted by the paucity of information in short queries. We show that approximate inference in BayeSum is possible on large data sets and results in a state-of-the-art summarization system. Furthermore, we show how BayeSum can be understood as a justified query expansion technique in the language modeling for IR framework.
Numeracy, frequency, and Bayesian reasoning
Directory of Open Access Journals (Sweden)
Gretchen B. Chapman
2009-02-01
Previous research has demonstrated that Bayesian reasoning performance is improved if uncertainty information is presented as natural frequencies rather than single-event probabilities. A questionnaire study of 342 college students replicated this effect but also found that the performance-boosting benefits of the natural frequency presentation occurred primarily for participants who scored high in numeracy. This finding suggests that even comprehension and manipulation of natural frequencies requires a certain threshold of numeracy abilities, and that the beneficial effects of natural frequency presentation may not be as general as previously believed.
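The two presentation formats describe the same computation; a small sketch makes the equivalence explicit (the screening numbers below are illustrative, not from the study):

```python
def bayes_posterior(prior, sensitivity, false_pos):
    """P(condition | positive test) from single-event probabilities."""
    tp = prior * sensitivity          # P(condition and positive)
    fp = (1.0 - prior) * false_pos    # P(no condition and positive)
    return tp / (tp + fp)

def natural_frequency_posterior(n, prior, sensitivity, false_pos):
    """Same computation phrased as natural frequencies out of n people."""
    sick = prior * n                  # of n people, how many have the condition
    tp = sick * sensitivity           # ... and test positive
    fp = (n - sick) * false_pos       # healthy people who still test positive
    return tp / (tp + fp)
```

Arithmetically the two are identical (the population size n cancels); the study's point is that the frequency phrasing is easier for people to manipulate, not that it computes anything different.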
Bayesian inference for Hawkes processes
DEFF Research Database (Denmark)
Rasmussen, Jakob Gulddahl
2013-01-01
The Hawkes process is a practically and theoretically important class of point processes, but parameter-estimation for such a process can pose various problems. In this paper we explore and compare two approaches to Bayesian inference. The first approach is based on the so-called conditional...... intensity function, while the second approach is based on an underlying clustering and branching structure in the Hawkes process. For practical use, MCMC (Markov chain Monte Carlo) methods are employed. The two approaches are compared numerically using three examples of the Hawkes process....
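For context, a Hawkes process with exponential excitation kernel, lambda(t) = mu + sum over past events t_i of alpha * exp(-beta * (t - t_i)), can be simulated by Ogata's thinning algorithm (a standard textbook sketch, independent of the paper's two MCMC approaches):

```python
import math
import random

def simulate_hawkes(mu, alpha, beta, T, seed=0):
    """Simulate a self-exciting Hawkes process on [0, T] by Ogata's
    thinning: propose candidates at the current intensity bound and
    accept with probability lambda(t) / bound."""
    rng = random.Random(seed)
    t, events = 0.0, []
    while t < T:
        # intensity just after t is an upper bound until the next event
        lam_bar = mu + sum(alpha * math.exp(-beta * (t - ti)) for ti in events)
        t += rng.expovariate(lam_bar)
        if t >= T:
            break
        lam_t = mu + sum(alpha * math.exp(-beta * (t - ti)) for ti in events)
        if rng.random() <= lam_t / lam_bar:
            events.append(t)           # accepted: this event raises intensity
    return events
```

The bound is valid because the intensity only decays between events; stationarity requires the branching ratio alpha/beta < 1, in which case the clustering structure the second inference approach exploits is exactly the tree of events triggered by earlier events.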
Bayesian homeopathy: talking normal again.
Rutten, A L B
2007-04-01
Homeopathy has a communication problem: important homeopathic concepts are not understood by conventional colleagues. Homeopathic terminology seems to be comprehensible only after practical experience of homeopathy. The main problem lies in different handling of diagnosis. In conventional medicine, diagnosis is the starting point for randomised controlled trials to determine the effect of treatment. In homeopathy, diagnosis is combined with other symptoms and personal traits of the patient to guide treatment and predict response. Broadening our scope to include diagnostic as well as treatment research opens the possibility of multifactorial reasoning. Adopting Bayesian methodology opens the possibility of investigating homeopathy in everyday practice and of describing some aspects of homeopathy in conventional terms.
Institute of Scientific and Technical Information of China (English)
Elisabetta Santi; M.G. Cimoroni
2002-01-01
In this paper, product formulas based on projector-splines for the numerical evaluation of two-dimensional Cauchy principal value (CPV) integrals are proposed. Convergence results are proved, and numerical examples and comparisons are given.
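The paper's projector-spline product formulas for 2-D CPV integrals are not reproduced here; as a minimal one-dimensional illustration of how a CPV singularity is handled numerically, the standard subtraction (regularization) trick can be sketched as follows (function name and quadrature choice are assumptions):

```python
import math

def cpv_integral(f, lam, n=2000):
    """Cauchy principal value of  integral_{-1}^{1} f(x) / (x - lam) dx,
    for lam in (-1, 1), via the subtraction trick:
      p.v. I = int (f(x) - f(lam)) / (x - lam) dx
               + f(lam) * log((1 - lam) / (1 + lam)).
    The regularized integrand is smooth when f is, so a composite
    midpoint rule is adequate."""
    h = 2.0 / n
    total = 0.0
    for i in range(n):
        x = -1.0 + (i + 0.5) * h
        if abs(x - lam) < 1e-12:
            continue  # limiting value is f'(lam); midpoints rarely hit lam exactly
        total += (f(x) - f(lam)) / (x - lam)
    return total * h + f(lam) * math.log((1.0 - lam) / (1.0 + lam))
```

For example, with f(x) = x and lam = 0 the principal value is exactly 2, and with f constant the regularized integral vanishes, leaving only the logarithmic term.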
Semi-Huber Quadratic Function and Comparative Study of Some MRFs for Bayesian Image Restoration
De la Rosa, J. I.; Villa-Hernández, J.; González-Ramírez, E.; De la Rosa, M. E.; Gutiérrez, O.; Olvera-Olvera, C.; Castañeda-Miranda, R.; Fleury, G.
2013-10-01
The present work introduces an alternative method for digital image restoration within a Bayesian framework; in particular, a new half-quadratic function is proposed whose performance is satisfactory compared with other functions in the existing literature. The Bayesian methodology is based on prior knowledge that allows efficient modelling of the image acquisition process; an adequate model must preserve the edges of objects in the image while smoothing noise. Thus, we use a convexity criterion given by a semi-Huber function to obtain adequate weighting of the (half-quadratic) cost functions to be minimized. The principal objective of using Bayesian methods based on Markov random fields (MRFs) in image processing is to eliminate the effects of excessive smoothing in the reconstruction of images rich in contours or edges. A comparison between the newly introduced scheme and three existing schemes is presented for noise filtering and image deblurring. The implemented methods use MRFs with the semi-Huber, generalized Gaussian, Welch, and Tukey potential functions with granularity control. The results show satisfactory performance and the effectiveness of the proposed estimator with respect to the other three estimators.
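The paper's semi-Huber potential is not specified in this abstract, so the half-quadratic idea is sketched below with the classical Huber potential instead: the nonquadratic edge penalty is replaced by a weighted quadratic, and the weights and signal are updated alternately. This is a hedged 1-D illustration; the function names, the delta/lam parameters, and the Gauss-Seidel solver are all assumptions:

```python
def huber_weight(t, delta):
    """Half-quadratic weight w = rho'(t) / (2 t) for the Huber potential
    rho(t) = t^2 for |t| <= delta, and 2*delta*|t| - delta^2 otherwise.
    Large jumps get small weights, so edges are smoothed only weakly."""
    a = abs(t)
    return 1.0 if a <= delta else delta / a

def hq_denoise(y, lam=1.0, delta=0.2, sweeps=50):
    """Edge-preserving 1-D denoising: minimize
        sum_i (x_i - y_i)^2 + lam * sum_i rho(x_{i+1} - x_i)
    by alternating half-quadratic weight updates with Gauss-Seidel
    sweeps on the resulting weighted quadratic surrogate."""
    x = list(y)
    n = len(x)
    for _ in range(sweeps):
        w = [huber_weight(x[i + 1] - x[i], delta) for i in range(n - 1)]
        for i in range(n):
            num, den = y[i], 1.0
            if i > 0:
                num += lam * w[i - 1] * x[i - 1]; den += lam * w[i - 1]
            if i < n - 1:
                num += lam * w[i] * x[i + 1]; den += lam * w[i]
            x[i] = num / den
    return x
```

On a noisy step signal, the small-difference terms stay quadratic (weight 1) and are smoothed, while the step itself receives a small weight and survives, which is exactly the edge-preservation behaviour the abstract describes.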
Smooth Optimization Approach for Sparse Covariance Selection
Lu, Zhaosong
2009-01-01
In this paper we first study a smooth optimization approach for solving a class of nonsmooth strictly concave maximization problems whose objective functions admit smooth convex minimization reformulations. In particular, we apply Nesterov's smooth optimization technique [Y.E. Nesterov, Dokl. Akad. Nauk SSSR, 269 (1983), pp. 543--547; Y. E. Nesterov, Math. Programming, 103 (2005), pp. 127--152] to their dual counterparts that are smooth convex problems. It is shown that the resulting approach...
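Nesterov's smooth optimization technique cited here builds on his accelerated gradient method for L-smooth convex functions. A minimal sketch of that accelerated scheme (not the paper's dual smoothing algorithm; the function name and the FISTA-style momentum sequence are assumptions) is:

```python
import math

def nesterov(grad, x0, L, iters=200):
    """Nesterov's accelerated gradient method for an L-smooth convex
    function f: returns x_k with f(x_k) - f* = O(L / k^2), versus
    O(1/k) for plain gradient descent."""
    x = list(x0)
    y = list(x0)
    t = 1.0
    for _ in range(iters):
        g = grad(y)
        x_new = [yi - gi / L for yi, gi in zip(y, g)]   # gradient step at y
        t_new = (1.0 + math.sqrt(1.0 + 4.0 * t * t)) / 2.0
        # extrapolation (momentum) step
        y = [xn + ((t - 1.0) / t_new) * (xn - xo) for xn, xo in zip(x_new, x)]
        x, t = x_new, t_new
    return x
```

For a badly conditioned quadratic such as f(x) = (x1^2 + 100 x2^2)/2 with L = 100, the O(L/k^2) guarantee already forces the objective below 0.01 after 300 iterations, which plain gradient descent does not achieve from the same start.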
Bayesian credible interval construction for Poisson statistics
Institute of Scientific and Technical Information of China (English)
ZHU Yong-Sheng
2008-01-01
The construction of the Bayesian credible (confidence) interval for a Poisson observable including both signal and background, with and without systematic uncertainties, is presented. Introducing a conditional probability that satisfies the requirement that the background be no larger than the observed number of events in order to construct the Bayesian credible interval is also discussed. A Fortran routine, BPOCI, has been developed to implement the calculation.
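The basic construction (without the paper's conditioning on the background, systematic uncertainties, or the BPOCI routine itself) can be sketched for a flat prior on the signal: with observed count n and known background b, the posterior is proportional to exp(-(s+b)) * (s+b)^n on s >= 0, and an equal-tailed interval can be read off a discretized posterior CDF. Function name, grid, and interval convention are assumptions:

```python
import math

def poisson_credible_interval(n, b=0.0, cl=0.68, smax=None, grid=20000):
    """Equal-tailed Bayesian credible interval for a Poisson signal s
    with known background b, observed count n, and a flat prior on
    s >= 0:  posterior(s) proportional to exp(-(s + b)) * (s + b)^n."""
    if smax is None:
        smax = n + b + 10.0 * math.sqrt(n + b + 1.0)   # generous upper cutoff
    h = smax / grid
    s = [i * h for i in range(grid + 1)]
    logpost = []
    for si in s:
        x = si + b
        if x > 0.0:
            logpost.append(n * math.log(x) - x)
        else:  # s + b = 0: density is exp(0) = 1 if n = 0, else 0
            logpost.append(0.0 if n == 0 else float("-inf"))
    m = max(logpost)
    w = [math.exp(lp - m) for lp in logpost]   # unnormalized posterior weights
    total = sum(w)
    tail = (1.0 - cl) / 2.0
    lo = hi = None
    cdf = 0.0
    for si, wi in zip(s, w):
        cdf += wi / total
        if lo is None and cdf >= tail:
            lo = si
        if hi is None and cdf >= 1.0 - tail:
            hi = si
    return lo, hi
```

With b = 0 this reduces to a Gamma(n + 1, 1) posterior, so for n = 0 the 68% interval is approximately [0.17, 1.83], and for large n the interval width approaches twice the Gaussian standard deviation.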
Advances in Bayesian Modeling in Educational Research
Levy, Roy
2016-01-01
In this article, I provide a conceptually oriented overview of Bayesian approaches to statistical inference and contrast them with frequentist approaches that currently dominate conventional practice in educational research. The features and advantages of Bayesian approaches are illustrated with examples spanning several statistical modeling…
Nonparametric Bayesian Modeling of Complex Networks
DEFF Research Database (Denmark)
Schmidt, Mikkel Nørgaard; Mørup, Morten
2013-01-01
Modeling structure in complex networks using Bayesian nonparametrics makes it possible to specify flexible model structures and infer the adequate model complexity from the observed data. This article provides a gentle introduction to nonparametric Bayesian modeling of complex networks, showing how such models for complex networks can be derived and pointing out relevant literature.