L1 Control Theoretic Smoothing Splines
Nagahara, Masaaki; Martin, Clyde F.
2014-01-01
In this paper, we propose control theoretic smoothing splines with L1 optimality for reducing the number of parameters that describe the fitted curve and for removing outlier data. A control theoretic spline is a smoothing spline generated as the output of a given linear dynamical system. Conventional design requires exactly as many basis functions as there are data points, and the result is not robust against outliers. To solve these problems, we propose to use L1 optimality, that ...
Efficient computation of smoothing splines via adaptive basis sampling
Ma, Ping
2015-06-24
Smoothing splines provide flexible nonparametric regression estimators. However, the high computational cost of smoothing splines for large datasets has hindered their wide application. In this article, we develop a new method, named adaptive basis sampling, for efficient computation of smoothing splines in super-large samples. Except for the univariate case where the Reinsch algorithm is applicable, a smoothing spline for a regression problem with sample size n can be expressed as a linear combination of n basis functions and its computational complexity is generally O(n^{3}). We achieve a more scalable computation in the multivariate case by evaluating the smoothing spline using a smaller set of basis functions, obtained by an adaptive sampling scheme that uses values of the response variable. Our asymptotic analysis shows that smoothing splines computed via adaptive basis sampling converge to the true function at the same rate as full basis smoothing splines. Using simulation studies and a large-scale deep earth core-mantle boundary imaging study, we show that the proposed method outperforms a sampling method that does not use the values of response variables.
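The core idea above, evaluating the smoother on a much smaller, adaptively sampled set of basis functions, can be sketched as follows. This is a simplified illustration: a Gaussian radial basis with a ridge penalty stands in for the spline basis and roughness penalty, and a crude |y|-weighted sampling rule stands in for the paper's response-guided scheme.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
x = np.sort(rng.uniform(0, 1, n))
truth = np.sin(6 * np.pi * x)
y = truth + rng.normal(0, 0.3, n)

# Sample q << n basis centres, weighting by |y| as a crude stand-in for
# the paper's response-guided adaptive sampling.
q = 40
probs = np.abs(y) + 0.1
probs /= probs.sum()
centers = np.sort(rng.choice(x, size=q, replace=False, p=probs))

# Gaussian radial basis with a ridge penalty (a generic stand-in for the
# spline basis); solving a q x q system instead of an n x n one is what
# buys the scalability.
h = 0.05
B = np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * h * h))
lam = 1e-3
coef = np.linalg.solve(B.T @ B + lam * np.eye(q), B.T @ y)
fit = B @ coef
```

The linear solve costs O(nq^2 + q^3) rather than O(n^3), which is the source of the speed-up the abstract describes.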
Spline smoothing of histograms by linear programming
Bennett, J. O.
1972-01-01
An algorithm is given for constructing a function that approximates the frequency distribution underlying a sample of size n. To obtain the approximating function, a histogram is first made from the data. Next, approximations to the graph of the histogram, using central B-splines as basis elements, are obtained by linear programming. The approximating function has unit area and is nonnegative.
Spline-Based Smoothing of Airfoil Curvatures
Li, W.; Krist, S.
2008-01-01
Constrained fitting for airfoil curvature smoothing (CFACS) is a spline-based method of interpolating airfoil surface coordinates (and, concomitantly, airfoil thicknesses) between specified discrete design points so as to obtain smoothing of surface-curvature profiles in addition to basic smoothing of surfaces. CFACS was developed in recognition of the fact that the performance of a transonic airfoil is directly related to both the curvature profile and the smoothness of the airfoil surface. Older methods of interpolation of airfoil surfaces involve various compromises between smoothing of surfaces and exact fitting of surfaces to specified discrete design points. While some of the older methods take curvature profiles into account, they nevertheless sometimes yield unfavorable results, including curvature oscillations near end points and substantial deviations from desired leading-edge shapes. In CFACS, as in most of the older methods, one seeks a compromise between smoothing and exact fitting. Unlike in the older methods, the airfoil surface is modified as little as possible from its original specified form and, instead, is smoothed in such a way that the curvature profile becomes a smooth fit of the curvature profile of the original airfoil specification. CFACS involves a combination of rigorous mathematical modeling and knowledge-based heuristics. Rigorous mathematical formulation provides assurance of removal of undesirable curvature oscillations with minimum modification of the airfoil geometry. Knowledge-based heuristics bridge the gap between theory and designers' best practices. In CFACS, one of the measures of the deviation of an airfoil surface from smoothness is the sum of squares of the jumps in the third derivatives of a cubic-spline interpolation of the airfoil data. This measure is incorporated into a formulation for minimizing an overall deviation-from-smoothness measure of the airfoil data within a specified fitting error tolerance. CFACS has been
Upsilon-quaternion splines for the smooth interpolation of orientations.
Nielson, Gregory M
2004-01-01
We present a new method for smoothly interpolating orientation matrices. It is based upon quaternions and a particular construction of upsilon-spline curves. The new method has tension parameters and variable knot (time) spacing which both prove to be effective in designing and controlling key frame animations. PMID:15384647
Comparative Analysis for Robust Penalized Spline Smoothing Methods
Directory of Open Access Journals (Sweden)
Bin Wang
2014-01-01
Smoothing noisy data is commonly encountered in the engineering domain, and robust penalized regression spline models are currently perceived to be among the most promising methods for coping with this issue, owing to their flexibility in capturing nonlinear trends in the data and in effectively alleviating disturbance from outliers. Against this background, this paper conducts a thorough comparative analysis of two popular robust smoothing techniques, M-type estimation and S-estimation for penalized regression splines. Both are re-elaborated from their origins, with their derivations reformulated and the corresponding algorithms reorganized under a unified framework. The performance of the two estimators is evaluated in terms of fitting accuracy, robustness, and execution time on the MATLAB platform. Comparative experiments demonstrate that robust penalized spline smoothing methods resist noise effects better than the non-robust penalized LS spline regression method. Furthermore, the M-estimator performs stably only for observations with moderate perturbation error, whereas the S-estimator behaves well even for heavily contaminated observations, at the cost of more execution time. These findings can serve as guidance for selecting an appropriate approach for smoothing noisy data.
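An M-type penalized spline of the kind compared above can be sketched as iteratively reweighted least squares (IRLS) with Huber weights. The truncated-power basis, ridge-style penalty, and tuning constants below are simplified illustrative choices, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(5)
x = np.linspace(0, 1, 150)
truth = np.sin(2 * np.pi * x)
y = truth + rng.normal(0, 0.1, x.size)
y[::25] += 2.5                                  # inject heavy outliers

# Truncated cubic power basis (a simple stand-in for B-splines).
knots = np.linspace(0.05, 0.95, 12)
B = np.c_[np.ones_like(x), x, x ** 2, x ** 3,
          np.clip(x[:, None] - knots[None, :], 0, None) ** 3]

D = np.eye(B.shape[1])
D[:4, :4] = 0                # penalize only the truncated (knot) terms
lam, c = 1e-4, 0.3           # penalty weight and Huber constant (assumed)
w = np.ones(x.size)
for _ in range(20):          # IRLS: refit, then downweight large residuals
    BtW = B.T * w[None, :]
    coef = np.linalg.solve(BtW @ B + lam * D, BtW @ y)
    res = np.abs(y - B @ coef)
    w = np.where(res <= c, 1.0, c / np.maximum(res, 1e-12))
fit = B @ coef
```

Each outlier ends up with a small weight, so the fitted curve stays close to the underlying trend despite the contamination.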
Clifford, Sam; Choy, Sama Low; Corander, Jukka; Hämeri, Kaarle; Mengersen, Kerrie; Hussein, Tareq
2012-01-01
In this paper we develop a semi-parametric Bayesian regression model for forecasting from a model of temporal trends, covariates and autocorrelated residuals. Non-linear covariate effects and their interactions are included in the model via penalised B-splines with an informative smoothing prior. Forecasting is consistent with the estimates of residual autocorrelation and spline coefficients are conditioned on the smoothing and autoregression parameters. The developed model is applied to the problem of forecasting ultrafine particle number concentration (PNC) in Helsinki, Finland. We obtain an estimate of the joint annual and daily trends, describing the changes in hourly PNC concentration, as well as weekly trends and the effect of traffic and local meteorological conditions.
P-splines with derivative based penalties and tensor product smoothing of unevenly distributed data
Wood, Simon N
2016-01-01
The P-splines of Eilers and Marx (Stat Sci 11:89–121, 1996) combine a B-spline basis with a discrete quadratic penalty on the basis coefficients to produce a reduced-rank, spline-like smoother. P-splines have three properties that make them very popular as reduced-rank smoothers: (i) the basis and the penalty are sparse, enabling efficient computation, especially for Bayesian stochastic simulation; (ii) it is possible to flexibly 'mix-and-match' the order of the B-spline basis and penalty, rather...
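The P-spline recipe described above, a rich equally spaced B-spline basis plus a discrete difference penalty on the coefficients, can be sketched in a few lines. The Cox-de Boor recursion builds the basis (knots extended beyond the data range, as in Eilers and Marx); the test function, noise level, and smoothing parameter are illustrative.

```python
import numpy as np

def bspline_basis(x, xmin, xmax, nseg, deg):
    """Equally spaced B-spline basis via the Cox-de Boor recursion,
    with knots extended deg segments beyond [xmin, xmax]."""
    dx = (xmax - xmin) / nseg
    t = xmin + dx * np.arange(-deg, nseg + deg + 1)
    # Degree-0 basis: indicator of each knot interval.
    B = ((x[:, None] >= t[None, :-1]) & (x[:, None] < t[None, 1:])).astype(float)
    for d in range(1, deg + 1):
        # For equal spacing, every denominator t[j+d]-t[j] equals d*dx.
        left = (x[:, None] - t[None, :-d - 1]) / (d * dx) * B[:, :-1]
        right = (t[None, d + 1:] - x[:, None]) / (d * dx) * B[:, 1:]
        B = left + right
    return B

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 200)
truth = np.sin(2 * np.pi * x) * np.exp(-x)
y = truth + rng.normal(0, 0.05, x.size)

B = bspline_basis(x, 0.0, 1.0, nseg=20, deg=3)   # 23 basis functions
D = np.diff(np.eye(B.shape[1]), n=2, axis=0)     # 2nd-order difference penalty
lam = 0.1
coef = np.linalg.solve(B.T @ B + lam * D.T @ D, B.T @ y)
fit = B @ coef
```

Both B and the banded penalty D are sparse in structure, which is exactly property (i) cited in the abstract.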
International Nuclear Information System (INIS)
The scientific and application-oriented interest in the Laplace transform and its inversion is attested by more than 1000 publications in the last century. Most inversion algorithms available in the literature assume that the Laplace transform function is available everywhere. Unfortunately, this assumption is not fulfilled in applications of the Laplace transform. Very often one has only a finite set of data and wants to recover an estimate of the inverse Laplace function from it. We propose a model that fits these data. More precisely, given a finite set of measurements on the real axis arising from an unknown Laplace transform function, we construct a dth-degree generalized polynomial smoothing spline, where d = 2m − 1, such that inside the data interval it is a dth-degree polynomial complete smoothing spline minimizing a regularization functional; outside the data interval it mimics the asymptotic behaviour of the Laplace transform, i.e. it is a rational or an exponential function (the end-behaviour model); and at the boundaries of the data set it joins the end-behaviour model with regularity up to order m − 1. We analyze in detail the generalized polynomial smoothing spline of degree d = 3. This choice was motivated by the (ill-)conditioning of the numerical computation, which depends strongly on the degree of the complete spline. We prove existence and uniqueness of this spline. We derive the approximation error and give a priori and computable bounds for it on the whole real axis. In this way, the generalized polynomial smoothing spline may be used in any real inversion algorithm to compute an approximation of the inverse Laplace function. Experimental results concerning Laplace transform approximation, numerical inversion of the generalized polynomial smoothing spline and comparisons with the exponential smoothing spline conclude the work. (paper)
Material approximation of data smoothing and spline curves inspired by slime mould
International Nuclear Information System (INIS)
The giant single-celled slime mould Physarum polycephalum is known to approximate a number of network problems via growth and adaptation of its protoplasmic transport network and can serve as an inspiration towards unconventional, material-based computation. In Physarum, predictable morphological adaptation is prevented by its adhesion to the underlying substrate. We investigate what possible computations could be achieved if these limitations were removed and the organism was free to completely adapt its morphology in response to changing stimuli. Using a particle model of Physarum displaying emergent morphological adaptation behaviour, we demonstrate how a minimal approach to collective material computation may be used to transform and summarise properties of spatially represented datasets. We find that the virtual material relaxes more strongly to high-frequency changes in data, which can be used for the smoothing (or filtering) of data by approximating moving average and low-pass filters in 1D datasets. The relaxation and minimisation properties of the model enable the spatial computation of B-spline curves (approximating splines) in 2D datasets. Both clamped and unclamped spline curves of open and closed shapes can be represented, and the degree of spline curvature corresponds to the relaxation time of the material. The material computation of spline curves also includes novel quasi-mechanical properties, including unwinding of the shape between control points and a preferential adhesion to longer, straighter paths. Interpolating splines could not directly be approximated due to the formation and evolution of Steiner points at narrow vertices, but were approximated after rectilinear pre-processing of the source data. This pre-processing was further simplified by transforming the original data to contain the material inside the polyline. These exemplary results expand the repertoire of spatially represented unconventional computing devices by demonstrating a
Zhang, X.; Liang, S.; Wang, G.
2015-12-01
Incident solar radiation (ISR) over the Earth's surface plays an important role in determining the Earth's climate and environment. Generally, ISR can be obtained from direct measurements, remotely sensed data, or reanalysis and general circulation model (GCM) data. Each type of product has advantages and limitations: surface direct measurements provide accurate values but sparse spatial coverage, whereas the other global products may have large uncertainties. Ground measurements have normally been used for validation and occasionally for calibration, but spatially transferring their "true values" to improve satellite products is still a new and challenging topic. In this study, an improved thin-plate smoothing spline approach is presented to locally "calibrate" the Global LAnd Surface Satellite (GLASS) ISR product using ISR data reconstructed from surface meteorological measurements. The influence of surface elevation on ISR estimation was also considered in the proposed method. The point-based reconstructed surface ISR was used as the response variable, with the GLASS ISR product and the surface elevation data at the corresponding locations as explanatory variables, to train the thin-plate spline model. We evaluated the performance of the approach using cross-validation at both daily and monthly time scales over China. We also evaluated the estimated ISR using independent ground measurements at 10 sites from the Coordinated Enhanced Observation Network (CEON). These validation results indicate that the thin-plate smoothing spline method can be used effectively to calibrate satellite-derived ISR products against ground measurements to achieve better accuracy.
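A thin-plate spline regression with the satellite product and elevation as predictors, as in the setup above, can be sketched with scipy's RBFInterpolator. Everything here is synthetic and illustrative: the data, the assumed linear "truth" relation, and the smoothing value; this is generic thin-plate-spline smoothing, not the authors' procedure.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(7)
n = 300
glass = rng.uniform(100, 300, n)           # satellite ISR estimate (W/m^2), synthetic
elev = rng.uniform(0, 4000, n)             # station elevation (m), synthetic
truth = 0.9 * glass + 0.01 * elev + 5.0    # assumed true surface ISR
ground = truth + rng.normal(0, 3.0, n)     # noisy ground-station measurements

# Standardize predictors so neither coordinate dominates the kernel distances.
X = np.c_[(glass - glass.mean()) / glass.std(),
          (elev - elev.mean()) / elev.std()]

# Thin-plate spline with a smoothing (ridge) term instead of exact interpolation.
model = RBFInterpolator(X, ground, kernel="thin_plate_spline", smoothing=50.0)
calibrated = model(X)
```

The calibrated values track the ground "truth" far more closely than the raw satellite product does, which is the point of the local calibration step.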
Smoothing spline ANOVA for super-large samples: Scalable computation via rounding parameters
Helwig, Nathaniel E.; Ma, Ping
2016-01-01
In the current era of big data, researchers routinely collect and analyze data of super-large sample sizes. Data-oriented statistical methods have been developed to extract information from super-large data. Smoothing spline ANOVA (SSANOVA) is a promising approach for extracting information from noisy data; however, the heavy computational cost of SSANOVA hinders its wide application. In this paper, we propose a new algorithm for fitting SSANOVA models to super-large sample data. In this algo...
Polynomial estimation of the smoothing splines for the new Finnish reference values for spirometry.
Kainu, Annette; Timonen, Kirsi
2016-07-01
Background: Discontinuity of spirometry reference values from childhood into adulthood has been a problem with traditional reference values; modern modelling approaches using smoothing spline functions, which better depict the transition during growth and ageing, have therefore recently been introduced. Following the publication of the new international Global Lung Initiative (GLI2012) reference values, new national Finnish reference values have been calculated using similar GAMLSS modelling, with spline estimates for the mean (Mspline) and standard deviation (Sspline) provided in lookup tables. The aim of this study was to produce polynomial estimates for these spline functions, for use in lieu of the lookup tables, and to assess their validity in the reference population of healthy non-smokers. Methods: Linear regression modelling was used to approximate the estimated values of Mspline and Sspline, using polynomial functions similar to those in the international GLI2012 reference values. Estimated values were compared with the original calculations in terms of absolute values, the derived predicted mean, and individually calculated z-scores. Results: Polynomial functions were estimated for all 10 spirometry variables. Agreement between the original lookup-table values and the polynomial estimates was very good, with no significant differences found. The variation increased slightly at larger predicted volumes, but the maximum difference for FEV1 ranged only from −0.018 to +0.022 litres, about ±0.4% of the predicted mean. Conclusions: The polynomial approximations were very close to the original lookup tables and are recommended for use in clinical practice to facilitate the adoption of the new reference values. PMID:27071737
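Replacing a tabulated spline by a closed-form polynomial, as done above, can be sketched with a least-squares polynomial fit. The lookup table below is a synthetic stand-in, NOT the published Finnish/GLI2012 spline values.

```python
import numpy as np

# Hypothetical lookup table: Mspline values on an age grid (synthetic curve).
age = np.linspace(18, 80, 63)
mspline = 0.5 * np.log(age) - 0.004 * age

# Fit a low-order polynomial as a closed-form substitute for the table.
coeffs = np.polyfit(age, mspline, deg=4)
approx = np.polyval(coeffs, age)
max_abs_err = np.max(np.abs(approx - mspline))
```

The five coefficients can then be embedded directly in clinical software, with the approximation error checked against the original table as in the study.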
Smoothing spline analysis of variance approach for global sensitivity analysis of computer codes
International Nuclear Information System (INIS)
The paper investigates a nonparametric regression method based on smoothing spline analysis of variance (ANOVA) approach to address the problem of global sensitivity analysis (GSA) of complex and computationally demanding computer codes. The two steps algorithm of this method involves an estimation procedure and a variable selection. The latter can become computationally demanding when dealing with high dimensional problems. Thus, we proposed a new algorithm based on Landweber iterations. Using the fact that the considered regression method is based on ANOVA decomposition, we introduced a new direct method for computing sensitivity indices. Numerical tests performed on several analytical examples and on an application from petroleum reservoir engineering showed that the method gives competitive results compared to a more standard Gaussian process approach
An Implementation of Bayesian Adaptive Regression Splines (BARS) in C with S and R Wrappers
Directory of Open Access Journals (Sweden)
Garrick Wallstrom
2007-02-01
BARS (DiMatteo, Genovese, and Kass 2001) uses the powerful reversible-jump MCMC engine to perform spline-based generalized nonparametric regression. It has been shown to work well in terms of having small mean-squared error in many examples (smaller than known competitors), as well as producing visually appealing fits that are smooth (filtering out high-frequency noise) while adapting to sudden changes (retaining high-frequency signal). However, BARS is computationally intensive. The original implementation in S was too slow to be practical in certain situations and was found to handle some data sets incorrectly. We have implemented BARS in C for the normal and Poisson cases, the latter being important in neurophysiological and other point-process applications. The C implementation includes all needed subroutines for fitting Poisson regression, manipulating B-splines (using code created by Bates and Venables), and finding starting values for Poisson regression (using code for density estimation created by Kooperberg). The code utilizes only freely available external libraries (LAPACK and BLAS) and is otherwise self-contained. We have also provided wrappers so that BARS can be used easily within S or R.
Directory of Open Access Journals (Sweden)
П.О. Приставка
2008-03-01
This article presents results of practical research on polynomial splines of one variable, based on B-splines that are, on average, close to interpolation. These splines lead to simple computational schemes that are convenient in practical applications with non-binary subdivisions.
Directory of Open Access Journals (Sweden)
D. Reclik
2008-08-01
Purpose: The main aim of this paper was to prepare a system that tests the use of the elastic band method for smoothing collision-free trajectories. The aided robot off-line programming system is based on NURBS and B-spline curves. Because the literature contains much information on the elastic band algorithm, the authors decided to compare these two methods. The most important criterion in robotics is obtaining the smoothest possible robot trajectory, so NURBS curves of smoothness class C2 were used as the standard. Design/methodology/approach: A Pascal compiler was used for the research. All algorithms were coded in this language and compiled, and the results were collected in a Microsoft Excel worksheet. Findings: The results show that calculations made with the B-spline method took less time than those based on elastic band curves. Moreover, the elastic band method gave the smoothest curves only in the geometrical sense, which is less important here; its first and second derivatives are not continuous, which is the most important issue in the presented case. The B-spline algorithm was therefore found to be the better solution, as it takes less time and gives better-quality results. Research limitations/implications: An MS Windows application was created that generates geometrically smooth curves through interpolation base points calculated by the collision-free movement planner. The application generates curves using both presented methods, B-spline and elastic band, and the two were compared with regard to standard deviation and variance. Practical implications: Because the elastic band algorithm takes about three times longer than the B-spline, it is not used in the final application; the authors used the B-spline method to produce smoother, optimized trajectories in the application for off-line collision-free robot programming. Originality/value: This is a new
An ultrasound study of Canadian French rhotic vowels with polar smoothing spline comparisons.
Mielke, Jeff
2015-05-01
This is an acoustic and articulatory study of Canadian French rhotic vowels, i.e., mid front rounded vowels /ø œ̃ œ/ produced with a rhotic perceptual quality, much like English [ɚ] or [ɹ], leading heureux, commun, and docteur to sound like [ɚʁɚ], [kɔmɚ̃], and [dɔktaɹʁ]. Ultrasound, video, and acoustic data from 23 Canadian French speakers are analyzed using several measures of mid-sagittal tongue contours, showing that the low F3 of rhotic vowels is achieved using bunched and retroflex tongue postures and that the articulatory-acoustic mapping of F1 and F2 are rearranged in systems with rhotic vowels. A subset of speakers' French vowels are compared with their English [ɹ]/[ɚ], revealing that the French vowels are consistently less extreme in low F3 and its articulatory correlates, even for the most rhotic speakers. Polar coordinates are proposed as a replacement for Cartesian coordinates in calculating smoothing spline comparisons of mid-sagittal tongue shapes, because they enable comparisons to be roughly perpendicular to the tongue surface, which is critical for comparisons involving tongue root position but appropriate for all comparisons involving mid-sagittal tongue contours. PMID:25994713
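The polar-coordinate idea proposed above can be sketched as a simple change of coordinates: once each contour is expressed as r(theta) about a common pole, two contours are compared along rays, roughly perpendicular to the tongue surface. The contour shape and pole location below are illustrative, not measured ultrasound data.

```python
import numpy as np

# Synthetic mid-sagittal tongue contour in Cartesian coordinates (mm),
# generated from a known polar form so the round trip can be checked.
theta_true = np.linspace(0.3, 2.8, 50)          # angle from the pole (rad)
r_true = 40.0 + 5.0 * np.sin(3.0 * theta_true)  # distance from the pole (mm)
xy = np.c_[r_true * np.cos(theta_true), r_true * np.sin(theta_true)]

# Convert the Cartesian points back to polar coordinates about the pole.
dx, dy = xy[:, 0], xy[:, 1]
r = np.hypot(dx, dy)
theta = np.arctan2(dy, dx)

# Two contours expressed as r1(theta) and r2(theta) can now be compared as
# r1 - r2 at matched angles, i.e. roughly normal to the tongue surface.
```

Smoothing-spline comparison then proceeds on (theta, r) pairs exactly as it would on (x, y) pairs, but with a more appropriate comparison direction.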
Wahba, Grace
2004-01-01
Smoothing Spline ANOVA (SS-ANOVA) models in reproducing kernel Hilbert spaces (RKHS) provide a very general framework for data analysis, modeling and learning in a variety of fields. Discrete, noisy scattered, direct and indirect observations can be accommodated with multiple inputs and multiple possibly correlated outputs and a variety of meaningful structures. The purpose of this paper is to give a brief overview of the approach and describe and contrast a series of applications, while noti...
Directory of Open Access Journals (Sweden)
Saad Bakkali
2010-04-01
This paper presents a method able to filter out noise and suppress outliers of sampled real functions under fairly general conditions. The automatic optimal spline-smoothing approach (AOSSA) automatically determines, in a least-squares optimal sense, how a cubic spline should be adjusted from an a priori selection of the number of points defining the adjusting spline, but not their location on that curve. The method is fast and easily allows several knots to be selected, adding desirable flexibility to the procedure. As an illustration, we apply the AOSSA method to a map of "disturbances" in resistivity data from a Moroccan phosphate deposit. The AOSSA smoothing method is an efficient tool for interpreting geophysical potential-field data, particularly suitable for denoising, filtering and analysing singularities in resistivity data. The smoothing and filtering approach was found to be consistently useful when applied to modelling surface phosphate "disturbances."
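Generic cubic spline smoothing of a noisy sampled function with a gross outlier can be sketched with scipy; this is a minimal stand-in, not the AOSSA implementation, and the signal, noise level, and smoothing budget s are illustrative choices.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(2)
x = np.linspace(0, 10, 300)
clean = np.sin(x)
y = clean + rng.normal(0, 0.2, x.size)
y[50] += 3.0                                  # inject a gross outlier

# s is the allowed residual sum of squares: chosen large enough that the
# spline smooths through the outlier rather than chasing it.
spl = UnivariateSpline(x, y, k=3, s=25.0)
denoised = spl(x)
```

FITPACK places as few knots as possible subject to the residual budget, so the fitted curve stays smooth and the outlier's influence remains local and bounded.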
Bayesian Smoothing Algorithms in Partially Observed Markov Chains
Ait-el-Fquih, Boujemaa; Desbouvries, François
2006-11-01
Let x = {x_n}_{n∈N} be a hidden process, y = {y_n}_{n∈N} an observed process and r = {r_n}_{n∈N} an auxiliary process. We assume that t = {t_n}_{n∈N}, with t_n = (x_n, r_n, y_{n−1}), is a (Triplet) Markov Chain (TMC). TMCs are more general than Hidden Markov Chains (HMCs) and yet enable the development of efficient restoration and parameter-estimation algorithms. This paper is devoted to Bayesian smoothing algorithms for TMCs. We first propose twelve algorithms for general TMCs. In the Gaussian case, these smoothers reduce to a set of algorithms which include, among other solutions, extensions to TMCs of classical Kalman-like smoothing algorithms (originally designed for HMCs), such as the RTS algorithms, the two-filter algorithms and the Bryson–Frazier algorithm.
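The simplest member of the Kalman-like smoother family mentioned above is the Rauch–Tung–Striebel (RTS) smoother, sketched here for a scalar linear-Gaussian model x_n = x_{n−1} + w_n (variance q), y_n = x_n + v_n (variance r); all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
T, q, r = 100, 0.01, 0.25
x = np.cumsum(rng.normal(0, np.sqrt(q), T))   # hidden random walk
y = x + rng.normal(0, np.sqrt(r), T)          # noisy observations

# Forward Kalman filter.
mf = np.zeros(T)
Pf = np.zeros(T)
m, P = 0.0, 1.0
for n in range(T):
    P = P + q                  # predict (state transition is identity)
    K = P / (P + r)            # Kalman gain
    m = m + K * (y[n] - m)     # measurement update
    P = (1 - K) * P
    mf[n], Pf[n] = m, P

# Backward RTS smoothing pass, combining each filtered estimate with
# the smoothed estimate one step ahead.
ms = mf.copy()
for n in range(T - 2, -1, -1):
    G = Pf[n] / (Pf[n] + q)    # smoother gain
    ms[n] = mf[n] + G * (ms[n + 1] - mf[n])
```

The smoothed trajectory uses all observations, not just past ones, and so has lower error than the filtered one; the TMC framework in the paper generalizes exactly this kind of two-pass recursion.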
Data assimilation using Bayesian filters and B-spline geological models
Duan, Lian
2011-04-01
This paper proposes a new approach to problems of data assimilation, also known as history matching, of oilfield production data by adjustment of the location and sharpness of patterns of geological facies. Traditionally, this problem has been addressed using gradient based approaches with a level set parameterization of the geology. Gradient-based methods are robust, but computationally demanding with real-world reservoir problems and insufficient for reservoir management uncertainty assessment. Recently, the ensemble filter approach has been used to tackle this problem because of its high efficiency from the standpoint of implementation, computational cost, and performance. Incorporation of level set parameterization in this approach could further deal with the lack of differentiability with respect to facies type, but its practical implementation is based on some assumptions that are not easily satisfied in real problems. In this work, we propose to describe the geometry of the permeability field using B-spline curves. This transforms history matching of the discrete facies type to the estimation of continuous B-spline control points. As filtering scheme, we use the ensemble square-root filter (EnSRF). The efficacy of the EnSRF with the B-spline parameterization is investigated through three numerical experiments, in which the reservoir contains a curved channel, a disconnected channel or a 2-dimensional closed feature. It is found that the application of the proposed method to the problem of adjusting facies edges to match production data is relatively straightforward and provides statistical estimates of the distribution of geological facies and of the state of the reservoir.
Bayesian Penalized Spline Models for the Analysis of Spatio-Temporal Count Data
Bauer, Cici; Wakefield, Jon; Rue, Håvard; Self, Steve; Feng, Zijian; Wang, Yu
2016-01-01
In recent years, the availability of infectious disease counts in time and space has increased, and consequently there has been renewed interest in model formulation for such data. In this paper, we describe a model that was motivated by the need to analyze hand, foot and mouth disease (HFMD) surveillance data in China. The data are aggregated by geographical areas and by week, with the aims of the analysis being to gain insight into the space-time dynamics and to make short-term prediction to implement public health campaigns in those areas with a large predicted disease burden. The model we develop decomposes disease risk into marginal spatial and temporal components, and a space-time interaction piece. The latter is the crucial element, and we use a tensor product spline model with a Markov random field prior on the coefficients of the basis functions. The model can be formulated as a Gaussian Markov random field and so fast computation can be carried out using the integrated nested Laplace approximation (INLA) approach. A simulation study shows that the model can pick up complex space-time structure and our analysis of HFMD data in the central north region of China provides new insights into the dynamics of the disease. PMID:26530705
International Nuclear Information System (INIS)
With the development of an implantable radio transmitter system, direct measurement of cardiac autonomic nervous activities (CANAs) became possible in ambulatory animals for periods of a couple of months. However, the measured signals include not only CANA but also cardiac electric activity (CEA), which can affect the quantification of CANAs. In this study, we propose a novel CEA-removal method using the moving standard deviation and a cubic smoothing spline. The method consists of two steps: detecting CEA segments and eliminating the CEAs in the detected segments. Using implanted devices, we recorded stellate ganglion nerve activity (SGNA), vagal nerve activity (VNA) and superior left ganglionated plexi nerve activity (SLGPNA) directly from four ambulatory dogs. The CEA-removal performance of the proposed method was evaluated and compared with commonly used high-pass filtration (HPF) for various heart rates and CANA amplitudes. Tests with simulated CEA and simulated true CANA revealed stable and excellent performance of the proposed method compared with the HPF method. The averaged relative error percentages of the proposed method were less than 0.67%, 0.65% and 1.76% for SGNA, VNA and SLGPNA, respectively. (paper)
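The two-step scheme above can be sketched as follows: flag segments where a moving standard deviation is large, then fill the flagged samples from their clean neighbours. The signal, spike shape, window length and threshold are all invented, and plain linear interpolation stands in for the paper's cubic-smoothing-spline fill.

```python
import numpy as np

rng = np.random.default_rng(4)
fs = 1000                                    # assumed sampling rate (Hz)
t = np.arange(0, 2.0, 1 / fs)
sig = rng.normal(0, 0.05, t.size)            # background nerve activity
for beat in (0.2, 1.0, 1.8):                 # large deflections as stand-in CEA
    i = int(beat * fs)
    sig[i:i + 10] += 1.0

# Step 1: moving standard deviation, thresholded against its own median.
win = 25
pad = np.pad(sig, win // 2, mode="edge")
mov_std = np.array([pad[i:i + win].std() for i in range(t.size)])
mask = mov_std > 5 * np.median(mov_std)
mask = np.convolve(mask.astype(float), np.ones(5), "same") > 0  # widen slightly

# Step 2: replace flagged samples by interpolating over clean neighbours.
clean_idx = np.where(~mask)[0]
filled = sig.copy()
filled[mask] = np.interp(np.where(mask)[0], clean_idx, sig[clean_idx])
```

The moving standard deviation reacts to the sharp artifact edges, so short CEA-like deflections are flagged even when the background activity level varies.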
A unified framework for spline estimators
Schwarz, Katsiaryna; Krivobokova, Tatyana
2012-01-01
This article develops a unified framework to study the asymptotic properties of all periodic spline-based estimators, that is, of regression, penalized and smoothing splines. The explicit form of the periodic Demmler-Reinsch basis in terms of exponential splines allows the derivation of an expression for the asymptotic equivalent kernel on the real line for all spline estimators simultaneously. The corresponding bandwidth, which drives the asymptotic behavior of spline estimators, is shown to...
Energy Technology Data Exchange (ETDEWEB)
M Ali, M. K.; Ruslan, M. H. (Solar Energy Research Institute (SERI), Universiti Kebangsaan Malaysia, 43600 UKM Bangi, Selangor, Malaysia); Muthuvalu, M. S.; Wong, J. (Unit Penyelidikan Rumpai Laut (UPRL), Sekolah Sains dan Teknologi, Universiti Malaysia Sabah, 88400 Kota Kinabalu, Sabah, Malaysia); Sulaiman, J.; Yasir, S. Md. (Program Matematik dengan Ekonomi, Sekolah Sains dan Teknologi, Universiti Malaysia Sabah, 88400 Kota Kinabalu, Sabah, Malaysia)
2014-06-19
The solar drying experiment of seaweed using the Green V-Roof Hybrid Solar Drier (GVRHSD) was conducted in Semporna, Sabah, under the meteorological conditions of Malaysia. Drying of sample seaweed in the GVRHSD reduced the moisture content from about 93.4% to 8.2% in 4 days at an average solar radiation of about 600 W/m{sup 2} and a mass flow rate of about 0.5 kg/s. In general, the plots of drying rate require more smoothing than the moisture content data, and special care is needed at low drying rates and moisture contents. The cubic spline (CS) is shown to be effective for moisture-time curves. The idea of the method is to approximate the data by a CS regression with continuous first and second derivatives; analytical differentiation of the spline regression then permits determination of the instantaneous drying rate directly from the experimental data. The method of minimization of the functional of average risk was used successfully to solve the problem. The drying kinetics was fitted with six published exponential thin-layer drying models, and the fits were assessed using the coefficient of determination (R{sup 2}) and root mean square error (RMSE). The Two Term model was found to describe the drying behavior best. In addition, the CS-smoothed drying rate proved an effective estimator for moisture-time curves as well as for missing moisture content data of the seaweed Kappaphycus striatum variety Durian in the solar dryer under the conditions tested.
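The core numerical idea above, fitting a cubic smoothing spline to the moisture-time curve and differentiating it analytically to get the instantaneous drying rate, can be sketched as follows. The exponential decay, sampling interval, and 0.5% noise level are assumptions for illustration, not the paper's data.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Hypothetical moisture readings (%) at 4 h intervals over 4 days
t = np.arange(0.0, 96.0, 4.0)
rng = np.random.default_rng(1)
M = 93.4 * np.exp(-0.025 * t) + rng.normal(0.0, 0.5, t.size)

# Cubic smoothing spline of the moisture-time curve; `s` budgets the
# residual sum of squares to match the assumed measurement noise
spl = UnivariateSpline(t, M, k=3, s=t.size * 0.5 ** 2)

# Analytical differentiation of the spline gives the instantaneous
# drying rate directly, with no finite differencing of noisy data
rate = -spl.derivative()(t)          # %/h
```

Because the derivative is taken on the smooth spline rather than the raw readings, the rate curve inherits the spline's smoothness, which is exactly why the drying-rate plots benefit more from smoothing than the moisture data themselves.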
The impact of spatial scales and spatial smoothing on the outcome of Bayesian spatial model.
Directory of Open Access Journals (Sweden)
Su Yun Kang
Discretization of a geographical region is quite common in spatial analysis, but there have been few studies of the impact of different geographical scales on the outcome of spatial models for different spatial patterns. This study aims to investigate the impact of spatial scales and spatial smoothing on the outcomes of modelling spatial point-based data. Given a spatial point-based dataset (such as the occurrence of a disease), we study the geographical variation of residual disease risk using regular grid cells. The individual disease risk is modelled using a logistic model with the inclusion of spatially unstructured and/or spatially structured random effects. Three spatial smoothness priors for the spatially structured component are employed in modelling, namely an intrinsic Gaussian Markov random field, a second-order random walk on a lattice, and a Gaussian field with Matérn correlation function. We investigate how changes in grid cell size affect model outcomes under different spatial structures and different smoothness priors for the spatial component. A realistic example (the Humberside data) is analyzed, and a simulation study is described. Bayesian computation is carried out using an integrated nested Laplace approximation. The results suggest that the performance and predictive capacity of the spatial models improve as the grid cell size decreases for certain spatial structures. It also appears that different spatial smoothness priors should be applied for different patterns of point data.
Directory of Open Access Journals (Sweden)
Wei Zeng
2015-04-01
Conventional splines offer powerful means for modeling surfaces and volumes in three-dimensional Euclidean space. One-dimensional quaternion splines have been applied for animation purposes, where the splines model a one-dimensional submanifold in the three-dimensional Lie group. Given two surfaces, all of the diffeomorphisms between them form an infinite-dimensional manifold, the so-called diffeomorphism space. In this work, we propose a novel scheme to model finite-dimensional submanifolds in the diffeomorphism space by generalizing conventional splines. By quasiconformal geometry theory, each diffeomorphism determines a Beltrami differential on the source surface; conversely, the diffeomorphism is determined by its Beltrami differential with normalization conditions. Therefore, the diffeomorphism space is in one-to-one correspondence with the space of a special differential form. A convex combination of Beltrami differentials is still a Beltrami differential, so the conventional spline scheme can be generalized to the Beltrami differential space and, consequently, to the diffeomorphism space. Our experiments demonstrate the efficiency and efficacy of diffeomorphism splines. The diffeomorphism spline has many potential applications, such as surface registration, tracking and animation.
Comparing measures of model selection for penalized splines in Cox models
Malloy, Elizabeth J.; Spiegelman, Donna; Eisen, Ellen A
2009-01-01
This article presents an application and a simulation study of model fit criteria for selecting the optimal degree of smoothness for penalized splines in Cox models. The criteria considered were the Akaike information criterion, the corrected AIC, two formulations of the Bayesian information criterion, and a generalized cross-validation method. The estimated curves selected by the five methods were compared to each other in a study of rectal cancer mortality in autoworkers. In the simulation...
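All of the criteria above trade goodness of fit against the effective degrees of freedom of the penalized spline. A minimal sketch of that machinery, using a plain Gaussian-response P-spline rather than the Cox partial likelihood of the paper, is:

```python
import numpy as np
from scipy.interpolate import BSpline

def penalized_fit(x, y, lam, a=0.0, b=1.0, n_knots=20, k=3):
    """Penalized (P-)spline fit: cubic B-spline basis with a second-order
    difference penalty; returns fitted values and the effective degrees
    of freedom (trace of the hat matrix), the quantity that AIC, BIC and
    GCV all penalize."""
    t = np.r_[[a] * k, np.linspace(a, b, n_knots), [b] * k]
    B = BSpline.design_matrix(x, t, k).toarray()
    D = np.diff(np.eye(B.shape[1]), n=2, axis=0)   # 2nd-order differences
    H = B @ np.linalg.solve(B.T @ B + lam * (D.T @ D), B.T)
    return H @ y, np.trace(H)

rng = np.random.default_rng(2)
x = np.sort(rng.uniform(0.0, 1.0, 200))
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.3, x.size)

# Generalized cross-validation over a grid of smoothing parameters
scores = {}
for lam in 10.0 ** np.arange(-4.0, 4.0):
    yhat, df = penalized_fit(x, y, lam)
    scores[lam] = x.size * np.sum((y - yhat) ** 2) / (x.size - df) ** 2
best = min(scores, key=scores.get)
```

Swapping the GCV score for AIC (`rss`-based deviance plus `2 * df`) or BIC (`log(n) * df`) reuses the same fitted values and degrees of freedom, which is why the five criteria in the article can be compared on a common footing.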
Zhang, Zhimin; Tomlinson, John; Martin, Clyde
1994-01-01
In this work, the relationship between splines and the control theory has been analyzed. We show that spline functions can be constructed naturally from the control theory. By establishing a framework based on control theory, we provide a simple and systematic way to construct splines. We have constructed the traditional spline functions including the polynomial splines and the classical exponential spline. We have also discovered some new spline functions such as trigonometric splines and the combination of polynomial, exponential and trigonometric splines. The method proposed in this paper is easy to implement. Some numerical experiments are performed to investigate properties of different spline approximations.
Bivariate Interpolation by Splines and Approximation Order
Nürnberger, Günther
1996-01-01
We construct Hermite interpolation sets for bivariate spline spaces of arbitrary degree and smoothness one on non-rectangular domains with uniform type triangulations. This is done by applying a general method for constructing Lagrange interpolation sets for bivariate spline spaces of arbitrary degree and smoothness. It is shown that Hermite interpolation yields (nearly) optimal approximation order. Applications to data fitting problems and numerical examples are given.
Schwarz and multilevel methods for quadratic spline collocation
Energy Technology Data Exchange (ETDEWEB)
Christara, C.C. [Univ. of Toronto, Ontario (Canada); Smith, B. [Univ. of California, Los Angeles, CA (United States)
1994-12-31
Smooth spline collocation methods offer an alternative to Galerkin finite element methods, as well as to Hermite spline collocation methods, for the solution of linear elliptic Partial Differential Equations (PDEs). Recently, spline collocation methods with optimal order of convergence have been developed for splines of certain degrees. Convergence proofs for smooth spline collocation methods are generally more difficult than for Galerkin finite elements or Hermite spline collocation, and they require stronger assumptions and more restrictions. However, numerical tests indicate that spline collocation methods are applicable to a wider class of problems than the analysis requires, and are very competitive with finite element methods with respect to efficiency. The authors will discuss Schwarz and multilevel methods for the solution of elliptic PDEs using quadratic spline collocation, and compare these with domain decomposition methods using substructuring. Numerical tests on a variety of parallel machines will also be presented. In addition, preliminary convergence analysis using Schwarz and/or maximum principle techniques will be presented.
Vranish, John M. (Inventor)
1993-01-01
A split spline screw type payload fastener assembly, including three identical male and female type split spline sections, is discussed. The male spline sections are formed on the head of a male type spline driver. Each of the split male type spline sections has an outwardly projecting load bearing segment including a convex upper surface which is adapted to engage a complementary concave surface of a female spline receptor in the form of a hollow bolt head. Additionally, the male spline section also includes a horizontal spline releasing segment and a spline tightening segment below each load bearing segment. The spline tightening segment consists of a vertical web of constant thickness. The web has at least one flat vertical wall surface which is designed to contact a generally flat vertically extending wall surface tab of the bolt head. Mutual interlocking and unlocking of the male and female splines results upon clockwise and counterclockwise turning of the driver element.
Number systems, α-splines and refinement
Zube, Severinas
2004-12-01
This paper is concerned with smooth refinable functions on a plane relative to a complex scaling factor. Characteristic functions of certain self-affine tiles related to a given scaling factor are the simplest examples of such refinable functions. We study the smooth refinable functions obtained as convolution powers of such characteristic functions. Dahlke, Dahmen, and Latour obtained some explicit estimates for the smoothness of the resulting convolution products. In the case α=1+i, we prove better results. We introduce α-splines in two variables, which are linear combinations of shifted basic functions. We derive basic properties of α-splines and proceed with a detailed presentation of refinement methods. We illustrate the application of α-splines to subdivision with several examples. It turns out that α-splines produce well-known subdivision algorithms which are based on box splines: Doo-Sabin, Catmull-Clark, Loop, Midedge, and certain other subdivision schemes with good continuity. The main geometric ingredient in the definition of α-splines is the fundamental domain (a fractal set or a self-affine tile). The properties of the fractal obtained in number theory are important and necessary in order to determine two basic properties of α-splines: partition of unity and the refinement equation.
Directory of Open Access Journals (Sweden)
Owens Chantelle J
2009-02-01
Abstract Background Chlamydia continues to be the most prevalent sexually transmitted disease in the United States. Effective spatial monitoring of chlamydia incidence is important for successful implementation of control and prevention programs. The objective of this study is to apply Bayesian smoothing and exploratory spatial data analysis (ESDA) methods to monitor Texas county-level chlamydia incidence rates by examining spatiotemporal patterns. We used county-level data on chlamydia incidence (for all ages, genders and races) from the National Electronic Telecommunications System for Surveillance (NETSS) for 2004 and 2005. Results Bayesian-smoothed chlamydia incidence rates were spatially dependent both in levels and in relative changes. Erath county had significantly higher smoothed rates (over 300 cases per 100,000 residents) than its contiguous neighbors (195 or less) in both years. Gaines county experienced the highest relative increase in smoothed rates (173%, from 139 to 379). The relative change in smoothed chlamydia rates in Newton county was also statistically significant. Conclusion Bayesian smoothing and ESDA methods can assist programs in using chlamydia surveillance data to identify outliers, as well as relevant changes in chlamydia incidence, in specific geographic units. Secondly, they may also indirectly help in assessing existing differences and changes in chlamydia surveillance systems over time.
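The essence of rate smoothing for small areas is shrinkage: raw incidence rates from small populations are pulled toward a global mean because they are statistically unstable. As a simple stand-in for the fully Bayesian spatial smoothing used in the study, a global empirical-Bayes (Marshall-type) smoother can be sketched as follows; the county counts and populations are invented for illustration.

```python
import numpy as np

def eb_smooth(cases, pop):
    """Global empirical-Bayes (Marshall) smoother for area incidence
    rates: raw rates are shrunk toward the overall mean, most strongly
    where small populations make the raw rate unstable."""
    raw = cases / pop
    m = cases.sum() / pop.sum()                    # overall rate
    s2 = np.average((raw - m) ** 2, weights=pop)   # weighted dispersion
    a = max(s2 - m / pop.mean(), 0.0)              # between-area variance
    w = a / (a + m / pop)                          # shrinkage weights in [0, 1]
    return m + w * (raw - m)

# Hypothetical county counts and populations (illustrative only)
cases = np.array([30.0, 2.0, 300.0, 8.0])
pop = np.array([10_000.0, 1_000.0, 100_000.0, 50_000.0])
smoothed = eb_smooth(cases, pop)
```

A spatial Bayesian model such as the one in the study replaces the single global mean with locally varying neighborhood means, but the shrinkage logic is the same.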
Bayesian Smoothing with Gaussian Processes Using Fourier Basis Functions in the spectralGP Package
Directory of Open Access Journals (Sweden)
Christopher J. Paciorek
2007-04-01
The spectral representation of stationary Gaussian processes via the Fourier basis provides a computationally efficient specification of spatial surfaces and nonparametric regression functions for use in various statistical models. I describe the representation in detail and introduce the spectralGP package in R for computations. Because of the large number of basis coefficients, some form of shrinkage is necessary; I focus on a natural Bayesian approach via a particular parameterized prior structure that approximates stationary Gaussian processes on a regular grid. I review several models from the literature for data that do not lie on a grid, suggest a simple model modification, and provide example code demonstrating MCMC sampling using the spectralGP package. I describe reasons that mixing can be slow in certain situations and provide some suggestions for MCMC techniques to improve mixing, also with example code, and some general recommendations grounded in experience.
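The computational appeal of the spectral representation is that on a regular grid a stationary Gaussian process reduces to independent Fourier coefficients with variances set by the spectral density, so sampling costs one FFT. The sketch below illustrates this in Python (the spectralGP package itself is in R); the squared-exponential-style spectral density is an assumed form for illustration.

```python
import numpy as np

def sample_spectral_gp(n=256, rho=0.1, seed=3):
    """Draw an (approximately) stationary Gaussian process on a regular
    grid via its spectral representation: independent complex Fourier
    coefficients with variances from an assumed spectral density, mapped
    back to the grid with one inverse FFT."""
    rng = np.random.default_rng(seed)
    freq = np.fft.rfftfreq(n, d=1.0 / n)      # integer frequencies 0..n/2
    sd = np.exp(-(rho * freq) ** 2)           # assumed spectral density
    coef = rng.normal(0.0, np.sqrt(sd)) + 1j * rng.normal(0.0, np.sqrt(sd))
    coef[0] = coef[0].real                    # keep the mean term real
    return np.fft.irfft(coef, n)

field = sample_spectral_gp()
```

Because the density decays quickly in frequency, the sampled field is dominated by low frequencies and is smooth; shrinking the high-frequency coefficients, as the package's Bayesian prior does, has the same effect.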
Geostatistical analysis using K-splines in the geoadditive model
Vandendijck, Yannick; Faes, Christel; Hens, Niel
2015-01-01
In geostatistics, both kriging and smoothing splines are commonly used to predict a quantity of interest. The geoadditive model proposed by Kammann and Wand (2003) represents a fusion of kriging and penalized spline additive models. The fact that the underlying spatial covariance structure is poorly estimated using geoadditive models is a drawback. We describe K-splines, an extension of geoadditive models such that estimation of the underlying spatial process parameters and predictions ...
Image edges detection through B-Spline filters
International Nuclear Information System (INIS)
B-spline signal processing was used to detect the edges of a digital image. This technique is based upon processing the image in the spline transform domain rather than in the space domain (classical processing). Transformation to the spline transform domain means finding the real coefficients that make it possible to interpolate the grey levels of the original image with a B-spline polynomial. There are basically two ways of carrying out this interpolation, giving rise to two different spline transforms: an exact interpolation of the grey values (direct spline transform), and an approximate interpolation (smoothing spline transform). The latter results in a smoother grey distribution function defined by the spline transform coefficients, and is carried out with the aim of obtaining an edge detection algorithm with higher immunity to noise. Finally, the transformed image was processed to detect the edges of the original image (the gradient method was used), and the results of the three methods (classical, direct spline transform and smoothing spline transform) were compared. As expected, the smoothing spline transform technique produced a detection algorithm more immune to external noise. On the other hand, the direct spline transform technique emphasizes the edges even more than the classical method. As far as computing time is concerned, the classical method is clearly the fastest and may be applied whenever the presence of noise is not important and whenever edges with high detail are not required in the final image. (author). 9 refs., 17 figs., 1 tab
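The smoothing-spline variant of the pipeline above can be sketched as: fit a smoothing bivariate spline to the grey levels, then apply the gradient method to the smoothed surface. This is a minimal illustration, not the paper's algorithm; the test image, noise level, and residual budget are assumptions.

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

# Synthetic image: bright square on a dark background plus noise
rng = np.random.default_rng(4)
img = np.zeros((64, 64))
img[16:48, 16:48] = 1.0
noisy = img + rng.normal(0.0, 0.15, img.shape)

# "Smoothing spline transform": approximate the grey levels with a
# smoothing bivariate cubic spline, residual budget matched to the
# assumed noise, then use the gradient method on the smooth surface
rows, cols = np.arange(64.0), np.arange(64.0)
spl = RectBivariateSpline(rows, cols, noisy, s=noisy.size * 0.15 ** 2)
smooth = spl(rows, cols)
gy, gx = np.gradient(smooth)
grad_mag = np.hypot(gx, gy)
```

Thresholding `grad_mag` yields the edge map; because the spline absorbs the noise, the gradient is large only along the true grey-level discontinuities, which is the noise-immunity advantage the abstract describes.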
International Nuclear Information System (INIS)
We present, in this paper, a new unsupervised method for joint image super-resolution and separation between smooth and point sources. For this purpose, we propose a Bayesian approach with a Markovian model for the smooth part and Student’s t-distribution for point sources. All model and noise parameters are considered unknown and should be estimated jointly with images. However, joint estimators (joint MAP or posterior mean) are intractable and an approximation is needed. Therefore, a new gradient-like variational Bayesian method is applied to approximate the true posterior by a free-form separable distribution. A parametric form is obtained by approximating marginals but with form parameters that are mutually dependent. Their optimal values are achieved by iterating them till convergence. The method was tested by the model-generated data and a real dataset from the Herschel space observatory. (paper)
Directory of Open Access Journals (Sweden)
Hannu Olkkonen
2013-01-01
In this work we introduce a new family of splines, termed gamma splines, for continuous signal approximation and multiresolution analysis. The gamma splines are generated by repeated convolution of the exponential with itself. We study the properties of the discrete gamma splines in signal interpolation and approximation. We prove that the gamma splines obey a two-scale equation based on the polyphase decomposition, which allows us to introduce the shift-invariant gamma spline wavelet transform for tree-structured subscale analysis of asymmetric signal waveforms and for systems with asymmetric impulse response. In particular, we consider applications in biomedical signal analysis (EEG, ECG, and EMG). Finally, we discuss the suitability of gamma spline signal processing in embedded VLSI environments.
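The construction by repeated convolution of the exponential is straightforward to reproduce numerically; each convolution adds one order of smoothness, and the limit shapes are the asymmetric gamma-density kernels that motivate the name. A sketch, with grid spacing and decay rate chosen for illustration:

```python
import numpy as np

def gamma_spline(k, lam=1.0, dt=0.01, T=20.0):
    """Discrete gamma spline of order k: the k-fold convolution of a
    sampled one-sided exponential with itself, normalized to unit mass.
    The continuous limit is the gamma density with mode (k - 1)/lam."""
    t = np.arange(0.0, T, dt)
    e = np.exp(-lam * t) * dt                  # sampled exponential kernel
    g = e.copy()
    for _ in range(k - 1):
        g = np.convolve(g, e)[: t.size]        # truncate back to the grid
    return t, g / g.sum()

t, g4 = gamma_spline(4)
```

The one-sided support and asymmetric peak are what make these kernels natural for causal systems and asymmetric waveforms such as ECG complexes.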
SOME RECURRENCE FORMULAS FOR BOX SPLINES AND CONE SPLINES
Institute of Scientific and Technical Information of China (English)
Patrick J. Van Fleet
2004-01-01
A degree elevation formula for multivariate simplex splines was given by Micchelli[6] and extended to hold for multivariate Dirichlet splines in [8]. We report similar formulae for multivariate cone splines and box splines. To this end, we utilize a relation due to Dahmen and Micchelli[4] that connects box splines and cone splines and a degree reduction formula given by Cohen, Lyche, and Riesenfeld in [2].
LOCALLY REFINED SPLINES REPRESENTATION FOR GEOSPATIAL BIG DATA
Directory of Open Access Journals (Sweden)
T. Dokken
2015-08-01
When viewed from a distance, large parts of the topography of landmasses and the bathymetry of the sea and ocean floor can be regarded as a smooth background with local features. Consequently, a digital elevation model combining a compact smooth representation of the background with locally added features has the potential of providing a compact and accurate representation of topography and bathymetry. The recent introduction of Locally Refined B-splines (LR B-splines) allows the granularity of spline representations to be locally adapted to the complexity of the smooth shape approximated. This allows few degrees of freedom to be used in areas with little variation, while adding extra degrees of freedom in areas in need of more modelling flexibility. In the EU FP7 Integrating Project IQmulus we exploit LR B-splines for approximating large point clouds representing the bathymetry of the smooth sea and ocean floor. A drastic reduction in the bulk of the data representation compared to the size of the input point clouds is demonstrated. The representation is very well suited for exploiting the power of GPUs for visualization, as the spline format is transferred to the GPU and the triangulation needed for visualization is generated on the GPU according to the viewing parameters. The LR B-splines are interoperable with other elevation model representations such as LIDAR data, raster representations and triangulated irregular networks, as these can be used as input to the LR B-spline approximation algorithms. Output to these formats can be generated from the LR B-spline applications according to the resolution criteria required. The spline models are well suited for change detection, as new sensor data can efficiently be compared to the compact LR B-spline representation.
Color management with a hammer: the B-spline fitter
Bell, Ian E.; Liu, Bonny H. P.
2003-01-01
To paraphrase Abraham Maslow: If the only tool you have is a hammer, every problem looks like a nail. We have a B-spline fitter customized for 3D color data, and many problems in color management can be solved with this tool. Whereas color devices were once modeled with extensive measurement, look-up tables and trilinear interpolation, recent improvements in hardware have made B-spline models an affordable alternative. Such device characterizations require fewer color measurements than piecewise linear models, and have uses beyond simple interpolation. A B-spline fitter, for example, can act as a filter to remove noise from measurements, leaving a model with guaranteed smoothness. Inversion of the device model can then be carried out consistently and efficiently, as the spline model is well behaved and its derivatives easily computed. Spline-based algorithms also exist for gamut mapping, the composition of maps, and the extrapolation of a gamut. Trilinear interpolation---a degree-one spline---can still be used after nonlinear spline smoothing for high-speed evaluation with robust convergence. Using data from several color devices, this paper examines the use of B-splines as a generic tool for modeling devices and mapping one gamut to another, and concludes with applications to high-dimensional and spectral data.
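Two of the uses named above, the fitter as a noise filter and the well-behaved inversion of the device model, can be illustrated in one dimension. This is a hypothetical stand-in for a 3D color characterization: a smooth monotone tone curve measured with noise, fitted by a smoothing spline and inverted by bisection.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Hypothetical 1-D analogue of a device characterization: a smooth
# monotone tone curve measured with noise; the smoothing spline acts
# as a filter, leaving a model with guaranteed smoothness.
rng = np.random.default_rng(5)
u = np.linspace(0.0, 1.0, 40)
v = u ** 2.2 + rng.normal(0.0, 0.01, u.size)   # gamma-like response
spl = UnivariateSpline(u, v, k=3, s=u.size * 0.01 ** 2)

def invert(target, lo=0.0, hi=1.0, tol=1e-10):
    """Invert the fitted model by bisection; well behaved here because
    the smoothed spline is monotone and cheap to evaluate."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if spl(mid) < target else (lo, mid)
    return 0.5 * (lo + hi)

x25 = invert(0.25)
```

Inverting the raw noisy measurements directly would be ill posed wherever noise breaks monotonicity; inverting the smoothed spline model is consistent and efficient, which is the point the abstract makes for gamut mapping and map composition.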
Institute of Scientific and Technical Information of China (English)
2009-01-01
In this paper, we study the local asymptotic behavior of the regression spline estimator in the framework of marginal semiparametric model. Similarly to Zhu, Fung and He (2008), we give explicit expression for the asymptotic bias of regression spline estimator for nonparametric function f. Our results also show that the asymptotic bias of the regression spline estimator does not depend on the working covariance matrix, which distinguishes the regression splines from the smoothing splines and the seemingly unrelated kernel. To understand the local bias result of the regression spline estimator, we show that the regression spline estimator can be obtained iteratively by applying the standard weighted least squares regression spline estimator to pseudo-observations. At each iteration, the bias of the estimator is unchanged and only the variance is updated.
Algebraic Geometry for Splines
2012-01-01
List of papers. Papers 1 - 4 are removed from the thesis due to publisher restrictions. These papers are chapters 2 - 5 in the thesis. Paper 1 / Chapter 2: Bernard Mourrain, Nelly Villamizar. Homological techniques for the analysis of the dimension of triangular spline spaces. Journal of Symbolic Computation. Volume 50, March 2013, Pages 564–577. doi:10.1016/j.jsc.2012.10.002 Paper 2 / Chapter 3: Bernard Mourrain, Nelly Villamizar. On the dimension of splines on tetrahedral decomposit...
Multivariate Spline Algorithms for CAGD
Boehm, W.
1985-01-01
Two special polyhedra present themselves for the definition of B-splines: a simplex S and a box or parallelepiped B, where the edges of S project into an irregular grid, while the edges of B project into the edges of a regular grid. More general splines may be found by forming linear combinations of these B-splines, where the three-dimensional coefficients are called the spline control points. Univariate splines are simplex splines, where s = 1, whereas splines over a regular triangular grid are box splines, where s = 2. Two simple facts underlie the construction of B-splines: (1) any face of a simplex or a box is again a simplex or box, but of lower dimension; and (2) any simplex or box can easily be subdivided into smaller simplices or boxes. The first fact gives a geometric approach to Mansfield-like recursion formulas that express a B-spline in terms of B-splines of lower order, where the coefficients depend on x. By repeated recursion, the B-spline is expressed in terms of B-splines of order 1, i.e., piecewise constants. In the case of a simplex spline, the second fact gives a so-called insertion algorithm that constructs the new control points if an additional knot is inserted.
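In the univariate case the Mansfield-like recursion is the familiar de Boor-Cox formula, and it terminates, exactly as described above, at order-1 piecewise constants. A minimal sketch:

```python
def bspline_basis(i, k, t, x):
    """Evaluate the i-th B-spline of order k (degree k - 1) over knots t
    at x by the Mansfield/de Boor/Cox recursion: an order-k B-spline is
    an x-dependent combination of two order-(k - 1) B-splines, bottoming
    out at order 1, i.e. piecewise constants."""
    if k == 1:
        return 1.0 if t[i] <= x < t[i + 1] else 0.0
    left = right = 0.0
    if t[i + k - 1] > t[i]:
        left = (x - t[i]) / (t[i + k - 1] - t[i]) * bspline_basis(i, k - 1, t, x)
    if t[i + k] > t[i + 1]:
        right = (t[i + k] - x) / (t[i + k] - t[i + 1]) * bspline_basis(i + 1, k - 1, t, x)
    return left + right

knots = list(range(10))            # a uniform knot grid
vals = [bspline_basis(i, 4, knots, 4.5) for i in range(6)]
```

On this uniform grid the order-4 (cubic) basis functions are nonnegative and sum to one at any interior point, the partition-of-unity property that makes the control-point combinations in the text affine-invariant.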
Optimal designs for multivariable spline models
Biedermann, Stefanie; Dette, Holger; Woods, David C.
2009-01-01
In this paper, we investigate optimal designs for multivariate additive spline regression models. We assume that the knot locations are unknown, so must be estimated from the data. In this situation, the Fisher information for the full parameter vector depends on the unknown knot locations, resulting in a non-linear design problem. We show that locally, Bayesian and maximin D-optimal designs can be found as the products of the optimal designs in one dimension. A similar result is proven for Q...
Clinical Trials: Spline Modeling is Wonderful for Nonlinear Effects.
Cleophas, Ton J
2016-01-01
Traditionally, nonlinear relationships like the smooth shapes of airplanes, boats, and motor cars were constructed from scale models using stretched thin wooden strips, otherwise called splines. In the past decades, mechanical spline methods have been replaced with their mathematical counterparts. The objective of the study was to examine whether spline modeling can adequately assess the relationships between exposure and outcome variables in a clinical trial, and whether it can detect patterns in a trial that are relevant but go unobserved with simpler regression models. A clinical trial assessing the effect of quantity of care on quality of care was used as an example. Spline curves consisting of 4 or 5 cubic functions were applied. SPSS statistical software was used for analysis. The spline curves of our data outperformed the traditional curves because (1) unlike the traditional curves, they did not miss the top quality of care given in either subgroup, (2) unlike the traditional curves, they, rightly, did not produce sinusoidal patterns, and (3) unlike the traditional curves, they provided a virtually 100% match of the original values. We conclude that (1) spline modeling can adequately assess the relationships between exposure and outcome variables in a clinical trial; (2) spline modeling can detect patterns in a trial that are relevant but may go unobserved with simpler regression models; (3) in clinical research, spline modeling has great potential, given the presence of many nonlinear effects in this field of research and given its sophisticated mathematical refinement to fit any nonlinear effect in the most accurate way; and (4) spline modeling should make it possible to improve predictions from clinical research for the benefit of health decisions and health care. We hope that this brief introduction to spline modeling will stimulate clinical investigators to start using this wonderful method. PMID:23689089
THE INSTABILITY DEGREE IN THE DIMENSION OF SPACES OF BIVARIATE SPLINES
Institute of Scientific and Technical Information of China (English)
Zhiqiang Xu; Renhong Wang
2002-01-01
In this paper, the dimension of the spaces of bivariate splines with degree less than 2r and smoothness order r on the Morgan-Scott triangulation is considered. The concept of the instability degree in the dimension of spaces of bivariate splines is presented. The results in the paper lead us to conjecture that the instability degree in the dimension of spaces of bivariate splines is infinite.
How to fly an aircraft with control theory and splines
Karlsson, Anders
1994-01-01
When trying to fly an aircraft as smoothly as possible, it is a good idea to use the derivatives of the pilot command instead of the actual control. This idea was implemented with splines and control theory in a system that models an aircraft. Computer calculations in Matlab show that it is impossible to obtain sufficiently smooth control signals in this way. This is because the splines approximate not only the test function but also its derivatives. Perfect tracking is achieved, but at the price of very peaky control signals and accelerations.
Adaptive Parametrization of Multivariate B-splines for Image Registration
DEFF Research Database (Denmark)
Hansen, Michael Sass; Glocker, Benjamin; Navab, Nassir;
2008-01-01
We present an adaptive parametrization scheme for dynamic mesh refinement in the application of parametric image registration. The scheme is based on a refinement measure ensuring that the control points give an efficient representation of the warp fields, in terms of minimizing the registration cost function. In the current work we introduce multivariate B-splines as a novel alternative to the widely used tensor B-splines, enabling us to make efficient use of the derived measure. The multivariate B-splines of order n are C^(n-1) smooth and are based on Delaunay configurations of arbitrary 2D or 3D control points, whereas the knots of tensor B-splines must reside on a regular grid. In contrast, by efficient non-constrained placement of the knots, the multivariate B-splines are shown to give a good representation of inhomogeneous objects in natural settings. The wide applicability of the method is illustrated through its application on medical data and...
International Nuclear Information System (INIS)
1 - Description of program or function: The three programs SPLPKG, WFCMPR, and WFAPPX provide the capability for interactively generating, comparing and approximating Wilson-Fowler splines. The Wilson-Fowler spline is widely used in Computer Aided Design and Manufacturing (CAD/CAM) systems. It is favored for many applications because it produces a smooth, low curvature fit to planar data points. Program SPLPKG generates a Wilson-Fowler spline passing through given nodes (with given end conditions) and also generates a piecewise linear approximation to that spline within a user-defined tolerance. The program may be used to generate a 'desired' spline against which to compare other splines generated by CAD/CAM systems. It may also be used to generate an acceptable approximation to a desired spline in the event that an acceptable spline cannot be generated by the receiving CAD/CAM system. SPLPKG writes an IGES file of points evaluated on the spline and/or a file containing the spline description. Program WFCMPR computes the maximum difference between two Wilson-Fowler splines and may be used to verify the spline recomputed by a receiving system. It compares two Wilson-Fowler splines with common nodes and reports the maximum distance between curves (measured perpendicular to segments) and the maximum difference of their tangents (or normals), both computed along the entire length of the splines. Program WFAPPX computes the maximum difference between a Wilson-Fowler spline and a piecewise linear curve. It may be used to accept or reject a proposed approximation to a desired Wilson-Fowler spline, even if the origin of the approximation is unknown. The maximum deviation between these two curves, and the parameter value on the spline where it occurs, are reported. 2 - Restrictions on the complexity of the problem - Maxima of: 1600 evaluation points (SPLPKG), 1000 evaluation points (WFAPPX), 1000 linear curve breakpoints (WFAPPX), 100 spline nodes
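Generating a piecewise-linear approximation to a spline within a user-defined tolerance, as SPLPKG does, is typically done by adaptive subdivision. The sketch below shows the idea on an ordinary interpolating cubic spline (not a Wilson-Fowler spline, for which no standard library routine is assumed here); intervals are bisected until the chord matches the curve at three interior check points.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def polyline_breakpoints(f, a, b, tol):
    """Adaptively bisect [a, b] until each chord matches the curve f to
    within tol at three interior check points, in the spirit of SPLPKG's
    piecewise-linear approximation within a user-defined tolerance."""
    pts = [a]
    def recurse(lo, hi):
        h = hi - lo
        chord = lambda u: f(lo) + (u - lo) / h * (f(hi) - f(lo))
        if max(abs(f(lo + c * h) - chord(lo + c * h))
               for c in (0.25, 0.5, 0.75)) > tol:
            recurse(lo, lo + 0.5 * h)
            recurse(lo + 0.5 * h, hi)
        else:
            pts.append(hi)
    recurse(a, b)
    return np.array(pts)

# An illustrative interpolating cubic spline through five nodes
spl = CubicSpline([0.0, 1.0, 2.0, 3.0, 4.0], [0.0, 1.0, 0.0, -1.0, 0.0])
bp = polyline_breakpoints(spl, 0.0, 4.0, tol=1e-3)
```

Checking several interior points per interval (rather than only the midpoint) avoids false acceptance of intervals where the deviation happens to vanish at the midpoint, a real risk for the odd-symmetric parts of a cubic.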
International Nuclear Information System (INIS)
To study the impact of the Deepwater Horizon oil spill on photosynthesis of coastal salt marsh plants in Mississippi, we developed a hierarchical Bayesian (HB) model based on field measurements collected from July 2010 to November 2011. We sampled three locations in Davis Bayou, Mississippi (30.375°N, 88.790°W) representative of a range of oil spill impacts. Measured photosynthesis was negative (respiration only) at the heavily oiled location in July 2010 only, and rates started to increase by August 2010. Photosynthesis at the medium oiling location was lower than at the control location in July 2010 and it continued to decrease in September 2010. During winter 2010–2011, the contrast between the control and the two impacted locations was not as obvious as in the growing season of 2010. Photosynthesis increased through spring 2011 at the three locations and decreased starting with October at the control location and a month earlier (September) at the impacted locations. Using the field data, we developed an HB model. The model simulations agreed well with the measured photosynthesis, capturing most of the variability of the measured data. On the basis of the posteriors of the parameters, we found that air temperature and photosynthetic active radiation positively influenced photosynthesis whereas the leaf stress level negatively affected photosynthesis. The photosynthesis rates at the heavily impacted location had recovered to the status of the control location about 140 days after the initial impact, while the impact at the medium impact location was never severe enough to make photosynthesis significantly lower than that at the control location over the study period. The uncertainty in modeling photosynthesis rates mainly came from the individual and micro-site scales, and to a lesser extent from the leaf scale. (letter)
Xuqiong Luo; Qikui Du
2013-01-01
A local Lagrange interpolation scheme using bivariate C2 splines of degree seven over a checkerboard triangulated quadrangulation is constructed. The method provides optimal order approximation of smooth functions.
Spline screw payload fastening system
Vranish, John M. (Inventor)
1993-01-01
A system for coupling an orbital replacement unit (ORU) to a space station structure via the actions of a robot and/or astronaut is described. This system provides mechanical and electrical connections both between the ORU and the space station structure and between the ORU and the robot/astronaut hand tool. Alignment and timing features ensure safe, sure handling and precision coupling. This includes a first female type spline connector selectively located on the space station structure, a male type spline connector positioned on the orbital replacement unit so as to mate with and connect to the first female type spline connector, and a second female type spline connector located on the orbital replacement unit. A compliant drive rod interconnects the second female type spline connector and the male type spline connector. A robotic special end effector is used for mating with and driving the second female type spline connector. Also included are alignment tabs exteriorly located on the orbital replacement unit for berthing with the space station structure. The first and second female type spline connectors each include a threaded bolt member having a captured nut member located thereon which can translate up and down the bolt but is constrained from rotation thereabout, the nut member having a mounting surface with at least one first type electrical connector located on the mounting surface for translating with the nut member. At least one complementary second type electrical connector on the orbital replacement unit mates with at least one first type electrical connector on the mounting surface of the nut member. When the driver on the robotic end effector mates with the second female type spline connector and rotates, the male type spline connector and the first female type spline connector lock together, the driver and the second female type spline connector lock together, and the nut members translate up the threaded bolt members carrying the
Pi, Archimedes and circular splines
Sablonnière, Paul
2013-01-01
In the present paper, we give approximate values of π deduced from the areas of inscribed and circumscribed quadratic and cubic circular splines. Similar results on circular splines of higher degrees and higher approximation orders can be obtained in the same way. We compare these values to those obtained by computing the perimeters of those circular splines. We observe that the former are much easier to compute than the latter and give results of the same order. It also appears tha...
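The area-based idea can be sketched numerically. The following is an illustrative stand-in, not the paper's quadratic/cubic circular spline construction: it interpolates n points of the unit circle with a periodic parametric cubic spline (NumPy/SciPy are assumptions of this sketch) and recovers an approximation of π from the enclosed area via Green's theorem, A = ½ ∮ (x dy − y dx).

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Approximate pi from the area enclosed by a periodic parametric cubic
# spline interpolating n points on the unit circle.  Illustrative sketch
# only; the paper uses quadratic/cubic *circular* splines instead.
def pi_from_spline(n, m=20000):
    theta = np.linspace(0.0, 2.0 * np.pi, n + 1)
    pts = np.c_[np.cos(theta), np.sin(theta)]
    pts[-1] = pts[0]                      # close the curve exactly
    spl = CubicSpline(theta, pts, bc_type='periodic')
    s = np.linspace(0.0, 2.0 * np.pi, m)
    x, y = spl(s).T
    dx, dy = spl(s, 1).T                  # first derivatives w.r.t. parameter
    g = x * dy - y * dx
    # trapezoidal rule for (1/2) * integral of g over [0, 2*pi]
    return 0.5 * np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(s))

approx = pi_from_spline(8)
```

As the abstract suggests for its splines, the area-based estimate converges quickly as the number of interpolation points grows.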
Generalized fairing algorithm of parametric cubic splines
Institute of Scientific and Technical Information of China (English)
WANG Yuan-jun; CAO Yuan
2006-01-01
Kjellander has reported an algorithm for fairing uniform parametric cubic splines, which Poliakoff extended to the non-uniform case. However, these algorithms merely change the bad point's position and neglect the smoothing of the tangent at the bad point. In this paper, we present a fairing algorithm that changes both the point's position and its corresponding tangent vector. The new algorithm possesses the minimum-energy property. We also prove that Poliakoff's fairing algorithm is a special case of ours. Several fairing examples are given in this paper.
Spline techniques for magnetic fields
International Nuclear Information System (INIS)
This report is an overview of B-spline techniques, oriented toward magnetic field computation. These techniques form a powerful mathematical approximating method for many physics and engineering calculations. In section 1, the concept of a polynomial spline is introduced. Section 2 shows how a particular spline with well chosen properties, the B-spline, can be used to build any spline. In section 3, the description of how to solve a simple spline approximation problem is completed, and some practical examples of using splines are shown. All these sections deal exclusively with scalar functions of one variable for simplicity. Section 4 is partly a digression. Techniques that are not B-spline techniques, but are closely related, are covered. These methods are not needed for what follows, until the last section on errors. Sections 5, 6, and 7 form a second group which works toward the final goal of using B-splines to approximate a magnetic field. Section 5 demonstrates how to approximate a scalar function of many variables. The necessary mathematics is completed in section 6, where the problems of approximating a vector function in general, and a magnetic field in particular, are examined. Finally some algorithms and data organization are shown in section 7. Section 8 deals with error analysis.
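The basic facts the report builds on, that any spline on a given knot vector is a linear combination of B-splines and that the B-spline basis forms a partition of unity, can be checked in a few lines. This sketch uses SciPy's BSpline (an assumption, not the report's own code); the knot vector and coefficients are arbitrary illustrations.

```python
import numpy as np
from scipy.interpolate import BSpline

# Cubic (k=3) B-spline basis on a clamped knot vector over [0, 1].
k = 3
interior = np.linspace(0.0, 1.0, 5)    # breakpoints, endpoints included
t = np.r_[[0.0] * k, interior, [1.0] * k]   # clamp: repeat endpoints k extra times
n_basis = len(t) - k - 1                    # number of B-spline basis functions

# Any spline on these knots is a linear combination of the basis;
# the coefficients here are arbitrary, for illustration only.
c = np.arange(n_basis, dtype=float)
spl = BSpline(t, c, k)

x = np.linspace(0.0, 1.0, 201)
y = spl(x)

# Partition of unity: coefficients all 1 reproduce the constant function 1.
ones = BSpline(t, np.ones(n_basis), k)(x)
```

With a clamped knot vector, the spline also interpolates its first and last coefficients at the interval endpoints, which is why clamped knots are convenient for approximation problems like those in sections 3 and 5.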
DEFF Research Database (Denmark)
Engell-Nørregård, Morten Pol; Erleben, Kenny
We present a method for simulating the active contraction of deformable models, usable for interactive animation of soft deformable objects. We present a novel physical principle as the governing equation for the coupling between the low dimensional 1D activation force model and the higher dimensional 2D/3D deformable model. Our activation splines are easy to set up and can be used for physics based animation of deformable models such as snake motion and locomotion of characters. Our approach generalises easily to both 2D and 3D simulations and is applicable in physics based games or animations due to its simplicity and very low computational cost.
On Characterization of Quadratic Splines
DEFF Research Database (Denmark)
Chen, B. T.; Madsen, Kaj; Zhang, Shuzhong
2005-01-01
representation can be refined in a neighborhood of a non-degenerate point and a set of non-degenerate minimizers. Based on these characterizations, many existing algorithms for specific convex quadratic splines are also finite convergent for a general convex quadratic spline. Finally, we study the relationship...
Straight-sided Spline Optimization
DEFF Research Database (Denmark)
Pedersen, Niels Leergaard
2011-01-01
and the subject of improving the design. The present paper concentrates on the optimization of splines and the predictions of stress concentrations, which are determined by finite element analysis (FEA). Using design modifications, that do not change the spline load carrying capacity, it is shown that...
Mathematical research on spline functions
Horner, J. M.
1973-01-01
One approach in spline functions is to grossly estimate the integrand in J and exactly solve the resulting problem. If the integrand in J is approximated by (y'')^2, the resulting problem lends itself to exact solution: the familiar cubic spline. Another approach is to investigate various approximations to the integrand in J and attempt to solve the resulting problems. The results are described.
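The exact solution mentioned here, the natural cubic spline, minimizes the integral of the squared second derivative among all interpolants of the data. A minimal sketch using SciPy's CubicSpline (the library and the sample data are assumptions, not part of the original report):

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Natural cubic spline: the exact minimizer of the integral of (y'')^2
# among all functions interpolating the data points.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.sin(x)

cs = CubicSpline(x, y, bc_type='natural')

# The spline passes through the data, and the "natural" boundary
# conditions force the second derivative to vanish at both ends.
```

The vanishing end curvature is exactly what the variational problem demands at a free boundary.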
Control theoretic splines optimal control, statistical, and path planning
Egerstedt, Magnus
2010-01-01
Splines, both interpolatory and smoothing, have a long and rich history that has largely been application driven. This book unifies these constructions in a comprehensive and accessible way, drawing from the latest methods and applications to show how they arise naturally in the theory of linear control systems. Magnus Egerstedt and Clyde Martin are leading innovators in the use of control theoretic splines to bring together many diverse applications within a common framework. In this book, they begin with a series of problems ranging from path planning to statistics to approximation.
Piecewise linear regression splines with hyperbolic covariates
International Nuclear Information System (INIS)
Consider the problem of fitting a curve to data that exhibit a multiphase linear response with smooth transitions between phases. We propose substituting hyperbolas as covariates in piecewise linear regression splines to obtain curves that are smoothly joined. The method provides an intuitive and easy way to extend the two-phase linear hyperbolic response models of Griffiths and Miller, and of Watts and Bacon, to accommodate more than two linear segments. The resulting regression spline with hyperbolic covariates may be fit by nonlinear regression methods to estimate the degree of curvature between adjoining linear segments. The added complexity of fitting nonlinear, as opposed to linear, regression models is not great. The extra effort is particularly worthwhile when investigators are unwilling to assume that the slope of the response changes abruptly at the join points. We can also estimate the join points (the values of the abscissas where the linear segments would intersect if extrapolated) if their number and approximate locations may be presumed known. An example using data on changing age at menarche in a cohort of Japanese women illustrates the use of the method for exploratory data analysis. (author)
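The two-phase case can be sketched with the Griffiths-Miller hyperbolic form, y = b0 + b1(x − c) + b2·sqrt((x − c)² + γ²), where γ controls the curvature of the transition and c is the join point. This is an illustrative reconstruction on synthetic data (NumPy/SciPy, the seed, and the parameter values are all assumptions of the sketch), not the paper's menarche analysis:

```python
import numpy as np
from scipy.optimize import curve_fit

# Two linear phases joined smoothly by a hyperbola (Griffiths-Miller form).
# gamma sets the transition curvature; gamma -> 0 recovers a sharp bend at c.
def hyperbolic(x, b0, b1, b2, c, gamma):
    return b0 + b1 * (x - c) + b2 * np.sqrt((x - c) ** 2 + gamma ** 2)

rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 200)
true = hyperbolic(x, 1.0, 0.5, 0.4, 5.0, 0.8)
yobs = true + rng.normal(0.0, 0.05, x.size)

# Nonlinear least squares recovers the join point c and the curvature gamma.
popt, _ = curve_fit(hyperbolic, x, yobs, p0=[0.0, 0.0, 1.0, 4.0, 1.0])
```

Extending to more than two segments, as the abstract describes, amounts to adding one hyperbolic covariate per additional join point.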
Vranish, John M.
1993-06-01
A captured nut member is located within a tool interface assembly and is actuated by a spline screw member driven by a robot end effector. The nut member lowers and rises depending upon the directional rotation of the coupling assembly. The captured nut member further includes two winged segments which project outwardly in diametrically opposite directions so as to engage and disengage a clamping surface in the form of a chamfered notch respectively provided on the upper surface of a pair of parallel forwardly extending arm members of a bifurcated tool stowage holster which is adapted to hold and store a robotic tool, including its end effector interface, when not in use. A forward and backward motion of the robot end effector operates to insert and remove the tool from the holster.
Free-Form Deformation with Rational DMS-Spline Volumes
Institute of Scientific and Technical Information of China (English)
Gang Xu; Guo-Zhao Wang; Xiao-Diao Chen
2008-01-01
In this paper, we propose a novel free-form deformation (FFD) technique, RDMS-FFD (Rational DMS-FFD), based on rational DMS-spline volumes. RDMS-FFD inherits some good properties of rational DMS-spline volumes and combines more deformation techniques than previous FFD methods in a consistent framework, such as local deformation, control lattice of arbitrary topology, smooth deformation, multiresolution deformation and direct manipulation of deformation. We first introduce the rational DMS-spline volume by directly generalizing previous results related to DMS-splines. How to generate a tetrahedral domain that approximates the shape of the object to be deformed is also introduced in this paper. Unlike traditional FFD techniques, we manipulate the vertices of the tetrahedral domain to achieve deformation results. Our system demonstrates that RDMS-FFD is powerful and intuitive in geometric modeling.
Penalized Spline: a General Robust Trajectory Model for ZIYUAN-3 Satellite
Pan, H.; Zou, Z.
2016-06-01
Owing to the dynamic imaging system, the trajectory model plays a very important role in the geometric processing of high resolution satellite imagery. However, establishing a trajectory model is difficult when only discrete and noisy data are available. In this manuscript, we propose a general robust trajectory model, the penalized spline model, which fits trajectory data well and smooths noise. The penalty parameter λ, which controls the trade-off between smoothness and fitting accuracy, can be estimated by generalized cross-validation. Five other trajectory models, including third-order polynomials, Chebyshev polynomials, linear interpolation, Lagrange interpolation and cubic splines, are compared with the penalized spline model. Both the sophisticated ephemeris and the on-board ephemeris are used to compare the orbit models. The penalized spline model smooths part of the noise, and accuracy decreases as the orbit length increases. The band-to-band misregistration of ZiYuan-3 Dengfeng and Faizabad multispectral images is used to evaluate the proposed method. With the Dengfeng dataset, the third-order polynomials and Chebyshev approximation could not model the oscillation, introducing misregistration of 0.57 pixels in the across-track direction and 0.33 pixels in the along-track direction. With the Faizabad dataset, the linear interpolation, Lagrange interpolation and cubic spline models suffer from noise, introducing larger misregistration than the approximation models. Experimental results suggest the penalized spline model can model the oscillation and smooth noise.
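The core mechanism, a smoothing spline whose penalty λ is chosen by generalized cross-validation, can be sketched with SciPy's make_smoothing_spline (available in SciPy 1.10+; the library choice and the synthetic "trajectory" are assumptions of this sketch, not the paper's ZiYuan-3 data):

```python
import numpy as np
from scipy.interpolate import make_smoothing_spline  # SciPy >= 1.10

# Penalized (smoothing) spline fit to noisy samples of a smooth trajectory.
# With lam=None, SciPy selects the penalty parameter by generalized
# cross-validation, mirroring the paper's GCV-based choice of lambda.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 100)
truth = np.sin(2.0 * np.pi * t)          # stand-in for one orbit component
noisy = truth + rng.normal(0.0, 0.1, t.size)

spl = make_smoothing_spline(t, noisy)    # lam chosen automatically by GCV
fit = spl(t)
```

Unlike global polynomial or pure interpolation models, the penalized spline both follows the oscillation and suppresses the noise, which is the behavior the experiments above report.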
Splines, contours and SVD subroutines
International Nuclear Information System (INIS)
Portability of Fortran code is a major concern these days, since hardware and commercial software change faster than the codes themselves. Hence, using public domain, portable, mathematical subroutines is imperative. Here we present a collection of subroutines we have used in the past and found to be particularly useful. They are: 2-dimensional splines, contour tracing of flux surfaces (based on the 2-D spline), and Singular Value Decomposition (for chi-square minimization).
A generalised linear and nonlinear spline filter
Zeng, W.; Jiang, X.; Scott, P.
2011-01-01
In this paper, a generalised spline filter that has a unified description for both the linear spline filter and the nonlinear robust spline filter is proposed. Based on M-estimation theory, the general spline filter model can be solved using an Iteratively Reweighted Least Squares (IRLS) method which is likewise general for both the linear and nonlinear spline filters. The algorithm has been verified to be effective, efficient and fast.
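The M-estimation/IRLS idea can be sketched as follows: repeatedly fit a weighted smoothing spline, downweighting points with large residuals. This is an illustrative sketch under stated assumptions (SciPy's make_smoothing_spline with GCV, Huber weights, a MAD scale estimate), not the paper's filter, whose spline model and weight function may differ:

```python
import numpy as np
from scipy.interpolate import make_smoothing_spline  # SciPy >= 1.10

# Robust spline filtering by Iteratively Reweighted Least Squares (IRLS):
# each pass refits a weighted smoothing spline, with Huber weights that
# shrink the influence of large residuals (outliers).
def robust_spline(x, y, k_huber=1.345, iters=8):
    w = np.ones_like(y)
    spl = None
    for _ in range(iters):
        spl = make_smoothing_spline(x, y, w=w)           # penalty via GCV
        r = y - spl(x)
        scale = np.median(np.abs(r)) / 0.6745 + 1e-12    # robust scale (MAD)
        u = np.abs(r) / (k_huber * scale)
        w = 1.0 / np.maximum(u, 1.0)                     # Huber weights: 1 or 1/u
    return spl

rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 120)
y = np.cos(3.0 * x) + rng.normal(0.0, 0.02, x.size)
y[::17] += 1.0                                           # inject gross outliers

spl = robust_spline(x, y)
```

A plain (unweighted) smoothing spline would be pulled toward the outliers; the reweighting suppresses them while leaving the linear-filter behavior unchanged on clean data.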
A Large Sample Study of the Bayesian Bootstrap
Lo, Albert Y.
1987-01-01
An asymptotic justification of the Bayesian bootstrap is given. Large-sample Bayesian bootstrap probability intervals for the mean, the variance and bands for the distribution, the smoothed density and smoothed rate function are also provided.
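The Bayesian bootstrap replaces resampling with replacement by drawing Dirichlet(1, …, 1) weights over the observations. A minimal sketch for the mean (NumPy and the synthetic data are assumptions of this sketch):

```python
import numpy as np

# Bayesian bootstrap for the mean: draw uniform-Dirichlet weights over the
# observations and form the weighted mean; the draws approximate the
# posterior of the mean, from which probability intervals are read off.
rng = np.random.default_rng(3)
data = rng.normal(10.0, 2.0, size=200)

B = 2000
weights = rng.dirichlet(np.ones(data.size), size=B)    # B x n simplex draws
posterior_means = weights @ data                       # one posterior draw per row
lo, hi = np.percentile(posterior_means, [2.5, 97.5])   # 95% probability interval
```

The same weighting scheme extends to the variance, distribution bands, and the smoothed functionals the abstract mentions, by applying each functional to the weighted empirical distribution.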
RECONSTRUCTION OF LAYER DATA WITH DEFORMABLE B-SPLINES
Institute of Scientific and Technical Information of China (English)
Cheng Siyuan; Zhang Xiangwei; Xiong Hanwei
2005-01-01
A new B-spline surface reconstruction method from layer data based on a deformable model is presented. An initial deformable surface, represented as a closed cylinder, is first given. The surface is subject to internal forces describing its implicit smoothness property and external forces attracting it toward the layer data points. The finite element method is then adopted to solve its energy minimization problem, which results in a bicubic closed B-spline surface with C2 continuity. The proposed method can provide a smooth and accurate surface model directly from the layer data, without the need to fit cross-sectional curves and make them compatible. The feasibility of the proposed method is verified by the experimental results.
Isogeometric analysis using T-splines
Bazilevs, Yuri
2010-01-01
We explore T-splines, a generalization of NURBS enabling local refinement, as a basis for isogeometric analysis. We review T-splines as a surface design methodology and then develop it for engineering analysis applications. We test T-splines on some elementary two-dimensional and three-dimensional fluid and structural analysis problems and attain good results in all cases. We summarize the current status of T-splines, their limitations, and future possibilities. © 2009 Elsevier B.V.
Symmetric, discrete fractional splines and Gabor systems
DEFF Research Database (Denmark)
Søndergaard, Peter Lempel
2006-01-01
In this paper we consider fractional splines as windows for Gabor frames. We introduce two new types of symmetric, fractional splines in addition to one found by Unser and Blu. For the finite, discrete case we present two families of splines: One is created by sampling and periodizing the...
Adaptive B-spline volume representation of measured BRDF data for photorealistic rendering
Directory of Open Access Journals (Sweden)
Hyungjun Park
2015-01-01
Full Text Available Measured bidirectional reflectance distribution function (BRDF) data have been used to represent the complex interaction between light and surface materials for photorealistic rendering. However, their massive size makes it hard to adopt them in practical rendering applications. In this paper, we propose an adaptive method for B-spline volume representation of measured BRDF data. It basically performs approximate B-spline volume lofting, which decomposes the problem into three sub-problems of multiple B-spline curve fitting along the u-, v-, and w-parametric directions. In particular, it makes efficient use of knots in the multiple B-spline curve fitting and thereby accomplishes adaptive knot placement along each parametric direction of the resulting B-spline volume. The proposed method is quite useful for realizing efficient data reduction while smoothing out noise and keeping the overall features of the BRDF data well. By applying the B-spline volume models of real materials for rendering, we show that the B-spline volume models are effective in preserving the features of material appearance and are suitable for representing BRDF data.
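One building block of the lofting step, least-squares B-spline curve fitting along a single parametric direction, can be sketched with SciPy's make_lsq_spline. The knot vector here is fixed and uniform; the paper's adaptive knot placement would instead choose the knots from the data (the library, the stand-in signal, and the knot count are assumptions of this sketch):

```python
import numpy as np
from scipy.interpolate import make_lsq_spline

# Least-squares cubic B-spline curve fitting along one parametric direction,
# the sub-problem that B-spline volume lofting solves repeatedly.
u = np.linspace(0.0, 1.0, 400)            # parameter values of the samples
samples = np.exp(-u) * np.sin(8.0 * u)    # stand-in for one row of BRDF data

k = 3
t_int = np.linspace(0.0, 1.0, 12)[1:-1]   # interior knots (uniform here)
t = np.r_[[0.0] * (k + 1), t_int, [1.0] * (k + 1)]   # clamped knot vector

spl = make_lsq_spline(u, samples, t, k=k)
```

Fitting many such curves along u, v, and w, and merging their knot vectors adaptively, yields the compact B-spline volume the abstract describes.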
Rounaghi, Mohammad Mahdi; Abbaszadeh, Mohammad Reza; Arashi, Mohammad
2015-11-01
One of the most important topics of interest to investors is stock price changes. Investors whose goals are long term are sensitive to stock prices and their changes and react to them. In this regard, we used the multivariate adaptive regression splines (MARS) model and a semi-parametric smoothing splines technique for predicting stock prices in this study. The MARS model is a nonparametric, adaptive regression method well suited to high-dimensional problems with several variables; smoothing splines likewise provide a nonparametric regression method. We used 40 variables (30 accounting variables and 10 economic variables) for predicting stock prices with both approaches. After investigating the models, we selected 4 accounting variables (book value per share, predicted earnings per share, P/E ratio and risk) as influential variables for predicting stock prices using the MARS model. After fitting the semi-parametric smoothing splines technique, only 4 accounting variables (dividends, net EPS, EPS forecast and P/E ratio) were selected as variables effective in forecasting stock prices.
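The building block of MARS is the mirrored hinge-function pair max(0, x − t) and max(0, t − x). The sketch below performs one forward-pass step of MARS, selecting a single knot by least squares over a candidate grid, on synthetic data with a kink (NumPy, the seed, and the data are assumptions; the full MARS algorithm adds terms greedily and prunes them):

```python
import numpy as np

# One MARS forward-pass step: fit intercept + hinge pair for each candidate
# knot t, keeping the knot with the smallest sum of squared errors.
def hinge_fit(x, y, candidates):
    best = None
    for t in candidates:
        X = np.column_stack([np.ones_like(x),
                             np.maximum(0.0, x - t),    # right hinge
                             np.maximum(0.0, t - x)])   # left (mirrored) hinge
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        sse = np.sum((y - X @ beta) ** 2)
        if best is None or sse < best[0]:
            best = (sse, t, beta)
    return best

rng = np.random.default_rng(4)
x = np.linspace(0.0, 10.0, 300)
y = np.where(x < 4.0, 2.0 - 0.5 * x, -0.2 + 0.05 * x)  # continuous kink at x = 4
y += rng.normal(0.0, 0.05, x.size)

sse, knot, beta = hinge_fit(x, y, np.linspace(1.0, 9.0, 81))
```

In the multivariate setting of the study, the same search runs over every candidate variable, which is how MARS ends up selecting a small subset of the 40 predictors.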
Data approximation using a blending type spline construction
International Nuclear Information System (INIS)
Generalized expo-rational B-splines (GERBS) form a blending-type spline construction where local functions at each knot are blended together by C^k-smooth basis functions. One way of approximating discrete regular data using GERBS is by partitioning the data set into subsets and fitting a local function to each subset. Partitioning and fitting strategies can be devised such that important or interesting data points are interpolated in order to preserve certain features. We present a method for fitting discrete data using a tensor product GERBS construction. The method is based on detection of feature points using differential geometry. Derivatives, which are necessary for feature point detection and used to construct local surface patches, are approximated from the discrete data using finite differences.
Data approximation using a blending type spline construction
Energy Technology Data Exchange (ETDEWEB)
Dalmo, Rune; Bratlie, Jostein [Narvik University College, P.O. Box 385, N-8505 Narvik (Norway)
2014-11-18
Generalized expo-rational B-splines (GERBS) form a blending-type spline construction where local functions at each knot are blended together by C^k-smooth basis functions. One way of approximating discrete regular data using GERBS is by partitioning the data set into subsets and fitting a local function to each subset. Partitioning and fitting strategies can be devised such that important or interesting data points are interpolated in order to preserve certain features. We present a method for fitting discrete data using a tensor product GERBS construction. The method is based on detection of feature points using differential geometry. Derivatives, which are necessary for feature point detection and used to construct local surface patches, are approximated from the discrete data using finite differences.
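The derivative estimation step, central finite differences on regularly sampled data, is straightforward to sketch with NumPy (an assumption of this sketch; the paper's feature detection then applies differential-geometric criteria to these estimates):

```python
import numpy as np

# Estimate first and second derivatives of regularly sampled data by
# central finite differences, as needed for feature-point detection.
h = 0.01
x = np.arange(0.0, 2.0 * np.pi, h)
f = np.sin(x)                  # stand-in for one row of the discrete data

df = np.gradient(f, h)         # second-order central differences (interior)
d2f = np.gradient(df, h)       # second derivative via repeated differencing
```

With step h the interior error is O(h²), accurate enough to flag curvature extrema and other feature points reliably.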
Spline and spline wavelet methods with applications to signal and image processing
Averbuch, Amir Z; Zheludev, Valery A
This volume provides universal methodologies accompanied by Matlab software to manipulate numerous signal and image processing applications. It is done with discrete and polynomial periodic splines. Various contributions of splines to signal and image processing from a unified perspective are presented. This presentation is based on the Zak transform and on the Spline Harmonic Analysis (SHA) methodology. SHA combines the approximation capabilities of splines with the computational efficiency of the Fast Fourier Transform. SHA reduces the design of different spline types, such as splines, spline wavelets (SW), wavelet frames (SWF) and wavelet packets (SWP), and their manipulation to simple operations. Digital filters produced by the wavelet design process give birth to subdivision schemes. Subdivision schemes enable fast explicit computation of spline values at dyadic and triadic rational points. This is used for upsampling signals and images. In addition to the design of a diverse library of splines, SW, SWP a...
Spline screw multiple rotations mechanism
Vranish, John M.
1993-12-01
A system for coupling two bodies together and for transmitting torque from one body to another with mechanical timing and sequencing is reported. The mechanical timing and sequencing is handled so that the following criteria are met: (1) the bodies are handled in a safe manner and nothing floats loose in space, (2) electrical connectors are engaged as long as possible so that the internal processes can be monitored throughout by sensors, and (3) electrical and mechanical power and signals are coupled. The first body has a splined driver for providing the input torque. The second body has a threaded drive member capable of rotation and limited translation. The embedded drive member will mate with and fasten to the splined driver. The second body has an embedded bevel gear member capable of rotation and limited translation. This bevel gear member is coaxial with the threaded drive member. A compression spring provides a preload on the rotating threaded member, and a thrust bearing is used for limiting the translation of the bevel gear member so that when the bevel gear member reaches the upward limit of its translation the two bodies are fully coupled and the bevel gear member then rotates due to the input torque transmitted from the splined driver through the threaded drive member to the bevel gear member. An output bevel gear with an attached output drive shaft is embedded in the second body and meshes with the threaded rotating bevel gear member to transmit the input torque to the output drive shaft.
Splines smoothing assisted least-squares identification of robotic manipulators
Czech Academy of Sciences Publication Activity Database
Dolinský, Kamil; Čelikovský, Sergej
Cancún, Quintana Roo, México: AMCA, 2014, s. 702-707. [Memorias del XVI Congreso Latinoamericano de Control Automático, CLCA 2014 Cancún, Quintana Roo, México. Cancún, Quintana Roo (MX), 14.10.2014-17.10.2014] R&D Projects: GA ČR(CZ) GAP103/12/1794 Institutional support: RVO:67985556 Keywords: Parameter identification * Mechanical systems * Robotic manipulators Subject RIV: BC - Control Systems Theory
Comparison of CSC method and the B-net method for deducing smoothness condition
Institute of Scientific and Technical Information of China (English)
Renhong Wang; Kai Qu
2009-01-01
The first author of this paper established an approach to study the multivariate spline over arbitrary partitions, and presented the so-called conformality method of smoothing cofactor (the CSC method). Farin introduced the B-net method, which is suitable for studying the multivariate spline over simplex partitions. This paper indicates that the smoothness conditions obtained in terms of the B-net method can be derived by the CSC method for the spline spaces over simplex partitions, and that the CSC method is in some sense more capable than the B-net method in studying the multivariate spline.
C2-rational Cubic Spline Involving Tension Parameters
Indian Academy of Sciences (India)
M Shrivastava; J Joseph
2000-08-01
In the present paper, a C1 piecewise rational cubic spline function involving tension parameters is considered which produces a monotonic interpolant to a given monotonic data set. It is observed that under certain conditions the interpolant preserves the convexity property of the data set. The existence and uniqueness of a C2-rational cubic spline interpolant are established. The error analysis of the spline interpolant is also given.
B-spline techniques for volatility modeling
Corlay, Sylvain
2013-01-01
This paper is devoted to the application of B-splines to volatility modeling, specifically the calibration of the leverage function in stochastic local volatility models and the parameterization of an arbitrage-free implied volatility surface calibrated to sparse option data. We use an extension of classical B-splines obtained by including basis functions with infinite support. We first come back to the application of shape-constrained B-splines to the estimation of conditional expectations, ...
Quadrotor system identification using the multivariate multiplex b-spline
Visser, T.; De Visser, C.C.; Van Kampen, E.J.
2015-01-01
A novel method for aircraft system identification is presented that is based on a new multivariate spline type: the multivariate multiplex B-spline. The multivariate multiplex B-spline is a generalization of the recently introduced tensor-simplex B-spline. Multivariate multiplex splines obtain simil
Construction of local integro quintic splines
Directory of Open Access Journals (Sweden)
T. Zhanlav
2016-06-01
In this paper, we show that integro quintic splines can be constructed locally without solving any systems of equations. The new construction does not require any additional end conditions. By virtue of these advantages the proposed algorithm is easy to implement and effective. At the same time, the local integro quintic splines possess approximation properties as good as those of the integro quintic splines. We prove that our local integro quintic spline has superconvergence properties at the knots for the first and third derivatives. The orders of convergence at the knots are six (not five) for the first derivative and four (not three) for the third derivative.
Inference in dynamic systems using B-splines and quasilinearized ODE penalties.
Frasso, Gianluca; Jaeger, Jonathan; Lambert, Philippe
2016-05-01
Nonlinear (systems of) ordinary differential equations (ODEs) are common tools in the analysis of complex one-dimensional dynamic systems. We propose a smoothing approach regularized by a quasilinearized ODE-based penalty. Within the quasilinearized spline-based framework, the estimation reduces to a conditionally linear problem for the optimization of the spline coefficients. Furthermore, standard ODE compliance parameter(s) selection criteria are applicable. We evaluate the performances of the proposed strategy through simulated and real data examples. Simulation studies suggest that the proposed procedure ensures more accurate estimates than standard nonlinear least squares approaches when the state (initial and/or boundary) conditions are not known. PMID:26602190
Comparing Smoothing Techniques for Fitting the Nonlinear Effect of Covariate in Cox Models
Roshani, Daem; Ghaderi, Ebrahim
2016-01-01
Background and Objective: The Cox model is a popular model in survival analysis which assumes that a covariate acts linearly on the log hazard function. Continuous covariates, however, can affect the hazard through more complicated nonlinear functional forms, so Cox models with continuous covariates are prone to misspecification when the correct functional form is not fitted. In this study, a smooth nonlinear covariate effect was approximated by different spline functions. Material and Methods: We applied three flexible nonparametric smoothing techniques for nonlinear covariate effects in Cox models: penalized splines, restricted cubic splines and natural splines. The Akaike information criterion (AIC) and degrees of freedom were used for smoothing parameter selection in the penalized spline model. The ability of the nonparametric methods to recover the true functional form of linear, quadratic and nonlinear functions was evaluated using different simulated sample sizes. Data analysis was carried out using R 2.11.0, with the significance level set at 0.05. Results: With AIC-based selection of the smoothing parameter, the penalized spline method had consistently lower mean square error than the other methods. The same result was obtained with real data. Conclusion: Penalized spline smoothing, with AIC used to select the smoothing parameter, evaluated the relation between a covariate and the log hazard function more accurately than the other methods. PMID:27041809
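The penalized-spline fit with AIC-based choice of the smoothing parameter described in the abstract above can be sketched compactly. The following Python sketch is an illustrative stand-in for the authors' R analysis (the test function, knot count and lambda grid are arbitrary choices): it builds a cubic B-spline basis with a second-order difference penalty and picks the smoothing parameter by AIC computed from the effective degrees of freedom.

```python
import numpy as np
from scipy.interpolate import BSpline

rng = np.random.default_rng(0)
n = 200
x = np.sort(rng.uniform(0, 1, n))
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, n)

# Cubic B-spline basis on equally spaced interior knots (P-spline setup).
k, n_int = 3, 20
t = np.r_[[0.0] * (k + 1), np.linspace(0, 1, n_int)[1:-1], [1.0] * (k + 1)]
m = len(t) - k - 1
B = np.column_stack([BSpline(t, np.eye(m)[j], k)(x) for j in range(m)])

D = np.diff(np.eye(m), n=2, axis=0)     # second-order difference penalty
P = D.T @ D

def fit(lam):
    H = np.linalg.solve(B.T @ B + lam * P, B.T)   # maps y to coefficients
    beta = H @ y
    edf = np.trace(B @ H)                         # effective degrees of freedom
    rss = np.sum((y - B @ beta) ** 2)
    aic = n * np.log(rss / n) + 2 * edf
    return beta, aic

lams = 10.0 ** np.arange(-4, 4)
aics = [fit(lam)[1] for lam in lams]
best = lams[int(np.argmin(aics))]
beta, _ = fit(best)
yhat = B @ beta
```

The effective degrees of freedom are the trace of the hat matrix, which is what makes the AIC comparison across lambda values meaningful.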
Fuzzy B-Spline Surface Modeling
Directory of Open Access Journals (Sweden)
Rozaimi Zakaria
2014-01-01
Full Text Available This paper discusses the construction of a fuzzy B-spline surface model. The construction is based on fuzzy set theory, in particular the concepts of fuzzy numbers and fuzzy relations. The proposed theories and concepts define uncertainty data sets that represent fuzzy data/control points, allowing uncertain data points to be modeled, visualized and analyzed. The fuzzification and defuzzification processes are also defined in detail in order to obtain the crisp model of the fuzzy B-spline surface. The final section shows an application of fuzzy B-spline surface modeling to terrain modeling, demonstrating its usability in handling uncertain data.
Ryu, Duchwan
2010-09-28
We consider nonparametric regression analysis in a generalized linear model (GLM) framework for data with covariates that are the subject-specific random effects of longitudinal measurements. The usual assumption that the effects of the longitudinal covariate processes are linear in the GLM may be unrealistic and if this happens it can cast doubt on the inference of observed covariate effects. Allowing the regression functions to be unknown, we propose to apply Bayesian nonparametric methods including cubic smoothing splines or P-splines for the possible nonlinearity and use an additive model in this complex setting. To improve computational efficiency, we propose the use of data-augmentation schemes. The approach allows flexible covariance structures for the random effects and within-subject measurement errors of the longitudinal processes. The posterior model space is explored through a Markov chain Monte Carlo (MCMC) sampler. The proposed methods are illustrated and compared to other approaches, the "naive" approach and the regression calibration, via simulations and by an application that investigates the relationship between obesity in adulthood and childhood growth curves. © 2010, The International Biometric Society.
Bayesian modeling using WinBUGS
Ntzoufras, Ioannis
2009-01-01
A hands-on introduction to the principles of Bayesian modeling using WinBUGS Bayesian Modeling Using WinBUGS provides an easily accessible introduction to the use of WinBUGS programming techniques in a variety of Bayesian modeling settings. The author provides an accessible treatment of the topic, offering readers a smooth introduction to the principles of Bayesian modeling with detailed guidance on the practical implementation of key principles. The book begins with a basic introduction to Bayesian inference and the WinBUGS software and goes on to cover key topics, including: Markov Chain Monte Carlo algorithms in Bayesian inference Generalized linear models Bayesian hierarchical models Predictive distribution and model checking Bayesian model and variable evaluation Computational notes and screen captures illustrate the use of both WinBUGS and R software to apply the discussed techniques. Exercises at the end of each chapter allow readers to test their understanding of the presented concepts and all ...
Fretting damage parameters in splined couplings
Cuffaro, Vincenzo; Mura, Andrea; Cura', Francesca Maria
2013-01-01
This work focuses on the analysis of the debris found in the lubrication oil, produced by wear abrasion during wear tests conducted on crowned splined couplings. During each test the presence and the dimensions of the debris in the oil have been monitored. Tests have been performed on a dedicated splined-coupling test rig, imposing an angular misalignment on the axes of the components. Results show that when these components work in misaligned cond...
P-Splines Using Derivative Information
Calderon, Christopher P.
2010-01-01
Time series associated with single-molecule experiments and/or simulations contain a wealth of multiscale information about complex biomolecular systems. We demonstrate how a collection of Penalized-splines (P-splines) can be useful in quantitatively summarizing such data. In this work, functions estimated using P-splines are associated with stochastic differential equations (SDEs). It is shown how quantities estimated in a single SDE summarize fast-scale phenomena, whereas variation between curves associated with different SDEs partially reflects noise induced by motion evolving on a slower time scale. P-splines assist in "semiparametrically" estimating nonlinear SDEs in situations where a time-dependent external force is applied to a single-molecule system. The P-splines introduced simultaneously use function and derivative scatterplot information to refine curve estimates. We refer to the approach as the PuDI (P-splines using Derivative Information) method. It is shown how generalized least squares ideas fit seamlessly into the PuDI method. Applications demonstrating how utilizing uncertainty information/approximations along with generalized least squares techniques improve PuDI fits are presented. Although the primary application here is in estimating nonlinear SDEs, the PuDI method is applicable to situations where both unbiased function and derivative estimates are available.
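The core of the PuDI idea described above, using function and derivative scatterplot observations jointly to refine a spline estimate, can be illustrated in one dimension. This Python sketch is a simplified stand-in (a toy curve and diagonal weights instead of the paper's full generalized least squares covariance): it stacks the B-spline design matrix and its derivative counterpart into one weighted least squares problem.

```python
import numpy as np
from scipy.interpolate import BSpline

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 60)
f_true = np.exp(-3 * x)                # toy curve standing in for an SDE drift
df_true = -3 * np.exp(-3 * x)          # its exact derivative
yf = f_true + rng.normal(0, 0.02, x.size)   # noisy function observations
yd = df_true + rng.normal(0, 0.10, x.size)  # noisy derivative observations

k = 3
t = np.r_[[0.0] * (k + 1), np.linspace(0, 1, 12)[1:-1], [1.0] * (k + 1)]
m = len(t) - k - 1
basis = [BSpline(t, np.eye(m)[j], k) for j in range(m)]
B = np.column_stack([b(x) for b in basis])              # function design matrix
Bd = np.column_stack([b.derivative()(x) for b in basis])  # derivative design matrix

# Weight each block by its (assumed known) noise level, a diagonal stand-in
# for the generalized least squares weighting used in the paper.
w_f, w_d = 1.0 / 0.02, 1.0 / 0.10
A = np.vstack([w_f * B, w_d * Bd])
rhs = np.concatenate([w_f * yf, w_d * yd])
coef, *_ = np.linalg.lstsq(A, rhs, rcond=None)
curve = BSpline(t, coef, k)
```

Because both blocks share the same coefficient vector, the derivative data directly constrain the slope of the fitted curve.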
MATLAB programs for smoothing X-ray spectra
International Nuclear Information System (INIS)
Two MATLAB 4.0 programs for smoothing X-ray spectra are presented: wekskl.m, using polynomial regression splines, and wekfft.m, using the fast Fourier transform. The wekskl.m program accomplishes smoothing for optimal distances between the knots, whereas wekfft.m uses an optimal spectral window width. The smoothed spectra are available in the form of vectors and are presented in graphical form as well. (author)
Timo Teuber
2013-01-01
The two-dimensional circular structure model by Kauermann, Teuber, and Flaschel (2011) will be extended to estimate more than two time series simultaneously. It will be assumed that the multivariate time series follow a cycle over time. However, the radius and the angle are allowed to change smoothly over time and will be estimated using a penalized spline regression technique. The model will be put to work using the Leading, Coincident and Lagging Indicators provided by the Conferenc...
Real-Time Spline Trajectory Creation and Optimization for Mobile Robots
Koceski, Saso; Koceska, Natasa; Zobel, Pierluigi Beomonte; Durante, Francesco
2009-01-01
In the field of mobile robotics, calculating suitable paths for point-to-point navigation is computationally difficult. Maneuvering the vehicle safely around obstacles is essential, and the ability to generate safe paths in a real-time environment is crucial for vehicle viability. A method for developing feasible paths through complicated environments using a baseline smooth path based on Hermite cubic splines is presented. A method able to iteratively optimize the path is also ...
Knot Insertion Algorithms for ECT B-spline Curves
Institute of Scientific and Technical Information of China (English)
SONG Huan-huan; TANG Yue-hong; LI Yu-juan
2013-01-01
Knot insertion is one of the most important techniques of the B-spline method. By inserting a knot, the local properties of a B-spline curve and the control flexibility of its shape can be further improved, and segmentation of the curve can be realized. An ECT spline curve is drawn by the multi-knot spline curve with its associated matrix in an ECT spline space; Mühlbach, Tang and others have established the existence and uniqueness of the ECT spline function and developed many of its important properties. This paper focuses on the knot insertion algorithm for ECT B-spline curves, the widest generalization of Boehm's B-spline algorithm and theory. Inspired by Boehm's algorithm, generalized Pólya polynomials and generalized de Boor-Fix dual functionals are constructed in the ECT spline space, and the new control points inserted after the knot are expressed as linear combinations of the original control vertices; two cases are treated, the single knot and the double knot. This yields the knot insertion algorithm for ECT spline curves. As applications of the knot insertion algorithm, the paper also gives knot insertion algorithms for fourth-order geometrically continuous piecewise polynomial B-splines and for algebraic-trigonometric B-splines, consistent with previous results.
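For ordinary B-splines, the Boehm-style knot insertion that this paper generalizes is available directly in SciPy. A minimal Python check, inserting the same knot twice (first a single, then a double knot) and verifying that the refined representation describes exactly the same curve:

```python
import numpy as np
from scipy.interpolate import splrep, splev, insert

x = np.linspace(0, 2 * np.pi, 30)
tck = splrep(x, np.sin(x), s=0)      # interpolating cubic B-spline (t, c, k)

tck1 = insert(2.0, tck)              # single knot inserted at u = 2.0
tck2 = insert(2.0, tck1)             # inserted again: double knot at u = 2.0

u = np.linspace(0, 2 * np.pi, 200)
y_before = splev(u, tck)             # curve before insertion
y_after = splev(u, tck2)             # curve after double insertion
```

Knot insertion refines the knot vector and control points without changing the curve itself, which is also the invariant the ECT generalization must preserve.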
Determinants of Low Birth Weight in Malawi: Bayesian Geo-Additive Modelling.
Directory of Open Access Journals (Sweden)
Alfred Ngwira
Full Text Available Studies on factors of low birth weight in Malawi have neglected the flexible approach of using smooth functions for some covariates in models. Such a flexible approach reveals the detailed relationship of covariates with the response. This study aimed to investigate risk factors of low birth weight in Malawi by assuming a flexible approach for continuous covariates and a geographical random effect. A Bayesian geo-additive model for birth weight in kilograms and size of the child at birth (less than average, or average and higher), with district as a spatial effect, was adopted, using the 2010 Malawi demographic and health survey data. A Gaussian model for birth weight in kilograms and a binary logistic model for the binary outcome (size of child at birth) were fitted. Continuous covariates were modelled by penalized (P-) splines, and spatial effects were smoothed by the two-dimensional P-spline. The study found that child birth order and the mother's weight and height are significant predictors of birth weight. Secondary education of the mother, birth order categories 2-3 and 4-5, a wealth index of richer family, and the mother's height were significant predictors of child size at birth. The area associated with low birth weight was Chitipa, and the areas with increased risk of less-than-average size at birth were Chitipa and Mchinji. The study found support for the flexible modelling of covariates that clearly have nonlinear influences. Nevertheless, there is no strong support for inclusion of a geographical spatial analysis. The spatial patterns, though, point to the influence of omitted variables with some spatial structure, or possibly epidemiological processes that account for this spatial structure, and the maps generated could be used for targeting development efforts at a glance.
Splines and compartment models an introduction
Biebler, Karl-Ernst
2013-01-01
This book presents methods of mathematical modeling from two points of view. Splines provide a general approach, while compartment models serve as examples of context-related modeling. The preconditions and characteristics of the developed mathematical models, as well as the conditions surrounding data collection and model fit, are taken into account. The substantial statements of this book are mathematically proven. The results are ready for application, with examples and related program code given. In this book, splines are developed algebraically such that the reader or user can easily u...
International Nuclear Information System (INIS)
A new method is presented to subtract the background from the energy dispersive X-ray fluorescence (EDXRF) spectrum using a cubic spline interpolation. To accurately obtain interpolation nodes, a smooth fitting and a set of discriminant formulations were adopted. From these interpolation nodes, the background is estimated by a calculated cubic spline function. The method has been tested on spectra measured from a coin and an oil painting using a confocal MXRF setup. In addition, the method has been tested on an existing sample spectrum. The result confirms that the method can properly subtract the background.
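The background-subtraction scheme can be sketched as follows. In this Python toy example the interpolation nodes are picked by hand in peak-free regions (the paper selects them automatically via smoothing and discriminant formulas), and the spectrum is synthetic: a smooth decaying background plus one Gaussian fluorescence line.

```python
import numpy as np
from scipy.interpolate import CubicSpline

E = np.linspace(0, 10, 500)                      # energy axis (arbitrary units)
background = 50 * np.exp(-0.2 * E) + 5           # smooth instrumental background
peak = 40 * np.exp(-((E - 6.0) / 0.15) ** 2)     # one fluorescence line
spectrum = background + peak

# Interpolation nodes chosen by hand in peak-free regions; the paper's
# method selects such nodes automatically.
nodes = np.array([0.2, 1.5, 3.0, 4.5, 5.2, 7.0, 8.5, 9.8])
node_vals = np.interp(nodes, E, spectrum)

bg_est = CubicSpline(nodes, node_vals)(E)        # spline background estimate
net = spectrum - bg_est                          # background-subtracted spectrum
```

With well-placed nodes the cubic spline tracks the smooth background closely, so the net spectrum retains only the peak.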
A spline-regularized minimal residual algorithm for iterative attenuation correction in SPECT
International Nuclear Information System (INIS)
In SPECT, regularization is necessary to avoid divergence of the iterative algorithms used for non-uniform attenuation compensation. In this paper, we propose a spline-based regularization method for the minimal residual algorithm. First, the acquisition noise is filtered using a statistical model involving spline smoothing so that the filtered projections belong to a Sobolev space with specific continuity and derivability properties. Then, during the iterative reconstruction procedure, the continuity of the inverse Radon transform between Sobolev spaces is used to design a spline-regularized filtered backprojection method, by which the known regularity properties of the projections determine those of the corresponding reconstructed slices. This ensures that the activity distributions estimated at each iteration present regularity properties, which avoids computational noise amplification, thus stabilizing the iterative process. Analytical and Monte Carlo simulations are used to show that the proposed spline-regularized minimal residual algorithm converges to a satisfactory stable solution in terms of restored activity and homogeneity, using at most 25 iterations, whereas the non regularized version of the algorithm diverges. Choosing the number of iterations is therefore no longer a critical issue for this reconstruction procedure. (author)
REAL ROOT ISOLATION OF SPLINE FUNCTIONS
Institute of Scientific and Technical Information of China (English)
Renhong Wang; Jinming Wu
2008-01-01
In this paper, we propose an algorithm for isolating real roots of a given univariate spline function, which is based on the use of Descartes' rule of signs and the de Casteljau algorithm. Numerical examples illustrate the flexibility and effectiveness of the algorithm.
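A root-isolation loop in the same spirit can be sketched with SciPy. This Python stand-in brackets roots by sign changes over the knot intervals and refines them with Brent's method, rather than using Descartes' rule and de Casteljau subdivision as the paper does:

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.optimize import brentq

x = np.linspace(0, 10, 40)
s = CubicSpline(x, np.sin(x))        # spline whose roots approximate k*pi

# Isolate: scan each knot interval for a sign change, then refine the
# bracketed root (a simple stand-in for Descartes-rule isolation).
roots = []
for a, b in zip(x[:-1], x[1:]):
    fa, fb = float(s(a)), float(s(b))
    if abs(fa) < 1e-12:
        roots.append(a)              # root exactly at a knot
    elif fa * fb < 0:
        roots.append(brentq(lambda z: float(s(z)), a, b))
```

Scanning knot intervals guarantees each bracket contains an odd number of roots of the piecewise cubic; subdivision-based isolation refines this further when a piece has several roots.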
FORMATION OF SHAFT SPLINES USING ROLLING METHOD
Directory of Open Access Journals (Sweden)
M. Sidorenko
2014-10-01
Full Text Available The paper describes the design of rolling heads used for cold rolling of straight-sided splines on shafts and presents the theoretical principles of this process. These principles make it possible to calculate the force required to push the billet through the rolls, with due account of metal hardening during deformation.
Lesaffre, Emmanuel
2012-01-01
The growth of biostatistics has been phenomenal in recent years and has been marked by considerable technical innovation in both methodology and computational practicality. One area that has experienced significant growth is Bayesian methods. The growing use of Bayesian methodology has taken place partly due to an increasing number of practitioners valuing the Bayesian paradigm as matching that of scientific discovery. In addition, computational advances have allowed for more complex models to be fitted routinely to realistic data sets. Through examples, exercises and a combination of introd...
Rogers, David
1991-01-01
G/SPLINES are a hybrid of Friedman's Multivariate Adaptive Regression Splines (MARS) algorithm with Holland's Genetic Algorithm. In this hybrid, the incremental search is replaced by a genetic search. The G/SPLINES algorithm exhibits performance comparable to that of the MARS algorithm, requires fewer least squares computations, and allows significantly larger problems to be considered.
Draper, D.
2001-01-01
© 2012 Springer Science+Business Media, LLC. All rights reserved. Article Outline: Glossary Definition of the Subject and Introduction The Bayesian Statistical Paradigm Three Examples Comparison with the Frequentist Statistical Paradigm Future Directions Bibliography
A Geometric Approach for Multi-Degree Spline
Institute of Scientific and Technical Information of China (English)
Xin Li; Zhang-Jin Huang; Zhao Liu
2012-01-01
A multi-degree spline (MD-spline for short) is a generalization of a B-spline which comprises polynomial segments of various degrees. The present paper provides a new definition for MD-spline curves in a geometrically intuitive way, based on an efficient and simple evaluation algorithm. MD-spline curves maintain various desirable properties of B-spline curves, such as the convex hull, local support and variation diminishing properties. They can also be refined exactly with knot insertion. The continuity between two adjacent segments with different degrees is at least C^1, and that between two adjacent segments of the same degree d is C^{d-1}. Benefiting from the exact refinement algorithm, we also provide several operators for MD-spline curves, such as conversion of each curve segment into Bézier form, an efficient merging algorithm, and a new curve subdivision scheme which allows different degrees for each segment.
Geometry Modeling of Ship Hull Based on Non-uniform B-spline
Institute of Scientific and Technical Information of China (English)
WANG Hu; ZOU Zao-jian
2008-01-01
In order to generate the three-dimensional (3-D) hull surface accurately and smoothly, a mixed method combining non-uniform B-splines with an iterative procedure was developed. Using the iterative method, the data points on each section curve are calculated and the generalized waterlines and transverse section curves are determined. Then, using the non-uniform B-spline expression, the control vertex net of the hull is calculated based on the generalized waterlines and section curves. A ship with a tunnel stern was taken as a test case. The numerical results prove that the proposed approach for geometry modeling of a 3-D ship hull surface is accurate and effective.
B-Spline Active Contour with Handling of Topology Changes for Fast Video Segmentation
Directory of Open Access Journals (Sweden)
Frederic Precioso
2002-06-01
Full Text Available This paper deals with video segmentation for MPEG-4 and MPEG-7 applications. Region-based active contours are a powerful technique for segmentation. However, most of these methods are implemented using level sets. Although level-set methods provide accurate segmentation, they suffer from a large computational cost. We propose to use a regular B-spline parametric method to provide fast and accurate segmentation. Our B-spline interpolation is based on a fixed number of points, 2^j, depending on the level of detail desired. Through this spatial multiresolution approach, the computational cost of the segmentation is reduced. We also introduce a length penalty, which improves both smoothness and accuracy. Finally, we show some experiments on real video sequences.
Left ventricular motion reconstruction with a prolate spheroidal B-spline model
International Nuclear Information System (INIS)
Tagged cardiac magnetic resonance (MR) imaging can non-invasively image deformation of the left ventricular (LV) wall. Three-dimensional (3D) analysis of tag data requires fitting a deformation model to tag lines in the image data. In this paper, we present a 3D myocardial displacement and strain reconstruction method based on a B-spline deformation model defined in prolate spheroidal coordinates, which more closely matches the shape of the LV wall than existing Cartesian or cylindrical coordinate models. The prolate spheroidal B-spline (PSB) deformation model also enforces smoothness across the apex and can compute strain there. The PSB reconstruction algorithm was evaluated on a previously published data set to allow head-to-head comparison of the PSB model with existing LV deformation reconstruction methods. We conclude that the PSB method can accurately reconstruct deformation and strain in the LV wall from tagged MR images and has several advantages relative to existing techniques.
B-spline parameterization of spatial response in a monolithic scintillation camera
Solovov, V; Chepel, V; Domingos, V; Martins, R
2016-01-01
A framework for parameterization of the light response functions (LRFs) of a scintillation camera was developed. It is based on approximation of the measured or simulated photosensor response with weighted sums of uniform cubic B-splines or their tensor products. The LRFs represented in this way are smooth, computationally inexpensive to evaluate and require much less memory than non-parametric alternatives. The parameters are found in a straightforward way by the linear least squares method. The use of a linear fit makes the fitting process stable and predictable enough to be used in non-supervised mode. Several techniques were developed that reduce the storage and processing power requirements. A software library for fitting simulated and measured light responses with spline functions was developed and integrated into ANTS2, an open source software package designed for simulation and data processing for Anger camera-type detectors.
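The linear-least-squares character of this fit is easy to demonstrate in one dimension (the paper uses tensor products of uniform cubic B-splines for spatial responses). A Python sketch with a toy light-response curve and hypothetical knot choices, using SciPy's least-squares spline fitter:

```python
import numpy as np
from scipy.interpolate import make_lsq_spline

rng = np.random.default_rng(2)
pos = np.sort(rng.uniform(-20.0, 20.0, 500))     # event coordinate (mm)
true_lrf = 1.0 / (1.0 + (pos / 8.0) ** 2)        # toy light response shape
signal = true_lrf + rng.normal(0.0, 0.02, pos.size)  # noisy photosensor signal

# Uniform cubic B-spline knots; the fit reduces to plain linear least
# squares in the spline coefficients, hence stable and non-supervised.
k = 3
interior = np.linspace(-20.0, 20.0, 14)[1:-1]
t = np.r_[[-20.0] * (k + 1), interior, [20.0] * (k + 1)]
lrf_fit = make_lsq_spline(pos, signal, t, k=k)
```

The fitted object is a smooth `BSpline` that is cheap to evaluate at reconstruction time, which is the property the abstract emphasizes.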
Bayesian spatial semi-parametric modeling of HIV variation in Kenya.
Directory of Open Access Journals (Sweden)
Oscar Ngesa
Full Text Available Spatial statistics has seen rapid application in many fields, especially epidemiology and public health. Many studies, nonetheless, make limited use of the geographical location information and also usually assume that the covariates related to the response variable have linear effects. We develop a Bayesian semi-parametric regression model for HIV prevalence data. Model estimation and inference are based on a fully Bayesian approach via Markov chain Monte Carlo (MCMC). The model is applied to HIV prevalence data among men in Kenya, derived from the Kenya AIDS indicator survey, with n = 3,662. Past studies have concluded that HIV infection has a nonlinear association with age. In this study a smooth function based on penalized regression splines is used to estimate this nonlinear effect. Other covariates were assumed to have linear effects. Spatial references to the counties were modeled as both structured and unstructured spatial effects. We observe that circumcision reduces the risk of HIV infection. The results also indicate that men in urban areas were more likely to be infected by HIV than their rural counterparts. Men with higher education had the lowest risk of HIV infection. A nonlinear relationship between HIV infection and age was established: the risk of HIV infection increases with age up to the age of 40 and then declines. Men who had an STI in the last 12 months were more likely to be infected with HIV, and men who had ever used a condom were found to have a higher likelihood of being infected by HIV. A significant spatial variation of HIV infection in Kenya was also established. The study shows the practicality and flexibility of the Bayesian semi-parametric regression model in analyzing epidemiological data.
DIRECT MANIPULATION OF B-SPLINE SURFACES
Institute of Scientific and Technical Information of China (English)
Wang Zhiguo; Zhou Laishui; Wang Xiaoping
2005-01-01
Engineering design and geometric modeling often require the ability to modify the shape of parametric curves and surfaces so that their shapes satisfy given geometric constraints, including point, normal vector, curve and surface constraints. Two approaches are presented to directly manipulate the shape of a B-spline surface. The former is based on least squares, whereas the latter is based on minimizing the bending energy of the surface. For each method, unified and explicit formulae are derived to compute the new control points of the modified surface, so these methods are simple, fast and applicable in CAD systems. Algebraic techniques are used to simplify the computation of B-spline composition and multiplication. Comparisons and examples are also given.
The basis spline method and associated techniques
International Nuclear Information System (INIS)
We outline the Basis Spline and Collocation methods for the solution of Partial Differential Equations. Particular attention is paid to the theory of errors, and the handling of non-self-adjoint problems which are generated by the collocation method. We discuss applications to Poisson's equation, the Dirac equation, and the calculation of bound and continuum states of atomic and nuclear systems. 12 refs., 6 figs
Interaction Spline Models and Their Convergence Rates
Chen, Zehua
1991-01-01
We consider interaction splines which model a multivariate regression function $f$ as a constant plus the sum of functions of one variable (main effects), plus the sum of functions of two variables (two-factor interactions), and so on. The estimation of $f$ by the penalized least squares method and the asymptotic properties of the models are studied in this article. It is shown that, under some regularity conditions on the data points, the expected squared error averaged over the data points ...
Marginal longitudinal semiparametric regression via penalized splines
Al Kadiri, M.
2010-08-01
We study the marginal longitudinal nonparametric regression problem and some of its semiparametric extensions. We point out that, while several elaborate procedures for efficient estimation have been proposed, a relatively simple and straightforward one, based on penalized splines, has not. After describing our approach, we explain how Gibbs sampling and the BUGS software can be used to achieve quick and effective implementation. Illustrations are provided for nonparametric regression and additive models.
Bernuau spline wavelets and Sturmian sequences
Andrle, Miroslav; Burdik, Cestmir; Gazeau, Jean-Pierre
2003-01-01
A construction of spline wavelets of class C^n(R) supported on sequences of aperiodic discretizations of R is presented. The construction is based on a multiresolution analysis recently elaborated by G. Bernuau. At a given scale, we consider discretizations that are sets of left-hand ends of tiles in a self-similar tiling of the real line with finite local complexity. The corresponding tilings are determined by two-letter Sturmian substitution sequences. We illustrate the construction with examples ha...
Scalable low-complexity B-spline discretewavelet transform architecture
Martina, Maurizio; Masera, Guido; Piccinini, Gianluca
2010-01-01
A scalable discrete wavelet transform architecture based on the B-spline factorisation is presented. In particular, it is shown that several wavelet filters of practical interest have a common structure in the distributed part of their B-spline factorisation. This common structure is effectively exploited to achieve scalability and to save multipliers compared with a direct polyphase B-spline implementation. Since the proposed solution is more robust to coefficient quantisation than direct po...
An Areal Isotropic Spline Filter for Surface Metrology
Zhang, Hao; Tong, Mingsi; Chu, Wei
2015-01-01
This paper deals with the application of the spline filter as an areal filter for surface metrology. A profile (2D) filter is often applied in orthogonal directions to yield an areal filter for a three-dimensional (3D) measurement. Unlike the Gaussian filter, the spline filter presents an anisotropic characteristic when used as an areal filter. This disadvantage hampers the wide application of spline filters for evaluation and analysis of areal surface topography. An approximation method is p...
Bayesian Kernel Mixtures for Counts
Canale, Antonio; David B Dunson
2011-01-01
Although Bayesian nonparametric mixture models for continuous data are well developed, there is a limited literature on related approaches for count data. A common strategy is to use a mixture of Poissons, which unfortunately is quite restrictive in not accounting for distributions having variance less than the mean. Other approaches include mixing multinomials, which requires finite support, and using a Dirichlet process prior with a Poisson base measure, which does not allow smooth deviatio...
Pseudo-cubic thin-plate type Spline method for analyzing experimental data
International Nuclear Information System (INIS)
A mathematical tool, using pseudo-cubic thin-plate type splines, has been developed for the analysis of experimental data points. The main purpose is to obtain, without any a priori given model, a mathematical predictor with related uncertainties, usable at any point in the multidimensional parameter space. The smoothing parameter is determined by a generalized cross-validation method. The residual standard deviation obtained is significantly smaller than that of a least squares regression. An example of use is given with critical heat flux data, showing a significant decrease of the design criterion (minimum allowable value of the DNB ratio). (author) 4 figs., 1 tab., 7 refs
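A one-dimensional analogue of such a predictor, a smoothing spline whose smoothing parameter is chosen by generalized cross-validation, is available directly in SciPy. This Python sketch only mirrors the GCV selection step (the paper's pseudo-cubic thin-plate splines are multidimensional, and the data here are synthetic):

```python
import numpy as np
from scipy.interpolate import make_smoothing_spline

rng = np.random.default_rng(3)
x = np.sort(rng.uniform(0.0, 1.0, 300))
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.2, x.size)

# With lam=None the smoothing parameter is selected by generalized
# cross-validation, the same criterion used in the paper.
predictor = make_smoothing_spline(x, y)
resid_sd = float(np.std(y - predictor(x)))       # residual standard deviation
```

The resulting object can be evaluated at any point of the parameter range, which is the "predictor usable at any point" role the abstract describes.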
Application of spline wavelet transform in differential of electroanalytical signal
Institute of Scientific and Technical Information of China (English)
[Anonymous]
2001-01-01
Investigating the characteristics of spline wavelets, we found that if the second-order spline function, the derivative of the third-order B-spline function, is used as the wavelet basis function, the spline wavelet transform has both a denoising property and a differentiation property. Accordingly, the relation between the spline wavelet transform and differentiation was studied theoretically. Experimental results show that the spline wavelet transform can be applied successfully to the differentiation of electroanalytical signals. Compared with other kinds of wavelet transform, the spline wavelet transform has this differentiation characteristic. Compared with digital differentiation and analogue differentiation with an electronic circuit, the spline wavelet transform not only carries out denoising and differentiation of a signal, but also has the advantages of simple operation and a small amount of calculation, because the step length, the RC constant and other parameters need not be selected. Compared with Alexander Kai-man Leung's differential method, the spline wavelet method has the characteristic that the differentiation order does not depend on the number of data points in the original signal.
Faired MISO B-Spline Fuzzy Systems and Its Applications
Directory of Open Access Journals (Sweden)
Tan Yanhua
2013-01-01
Full Text Available We construct two classes of faired MISO B-spline fuzzy systems using the fairing method of computer-aided geometric design (CAGD) to reduce the adverse effects of inexact data. Towards this goal, we generalize the fairing method to high-dimensional cases, so that the method, previously available only for SISO and DISO B-spline fuzzy systems, is extended to fair MISO ones. The problem of constructing a faired MISO B-spline fuzzy system is then transformed into an optimization problem with a strictly convex quadratic objective function, and the unique optimal solution vector is taken as the linear combination coefficients of the basis functions of a certain B-spline fuzzy system, yielding a faired MISO B-spline fuzzy system. Furthermore, we design variable universe adaptive fuzzy controllers based on B-spline fuzzy systems and faired B-spline fuzzy systems to stabilize the double inverted pendulum. The simulation results show that the controllers based on faired B-spline fuzzy systems perform better than those based on B-spline fuzzy systems, especially when the data for the fuzzy systems are inexact.
Campanelli, L
2016-01-01
In the Ratra scenario of inflationary magnetogenesis, the kinematic coupling between the photon and the inflaton undergoes a nonanalytical jump at the end of inflation. Using smooth interpolating analytical forms of the coupling function, we show that such an unphysical jump does not invalidate the main prediction of the model, which still represents a viable mechanism for explaining cosmic magnetization. Nevertheless, there is a spurious result associated with the nonanalyticity of the coupling, to wit, the prediction that the spectrum of created photons has a power-law decay in the ultraviolet regime. This issue is discussed using both the semiclassical approximation and smooth coupling functions.
Gaussian quadrature for splines via homotopy continuation: Rules for C2 cubic splines
Bartoň, Michael
2015-10-24
We introduce a new concept for generating optimal quadrature rules for splines. To generate an optimal quadrature rule in a given (target) spline space, we build an associated source space with a known optimal quadrature rule and transfer the rule from the source space to the target one, preserving the number of quadrature points and therefore optimality. The quadrature nodes and weights, considered as a point in a higher-dimensional space, form a zero of a particular system of polynomial equations. As the space is continuously deformed by changing the source knot vector, the quadrature rule is updated using polynomial homotopy continuation. For example, starting with C1 cubic splines with uniform knot sequences, we demonstrate the methodology by deriving the optimal rules for uniform C2 cubic spline spaces, where the rule had only been conjectured to date. We validate our algorithm by showing that the resulting quadrature rule is independent of the path chosen between the target and the source knot vectors, as well as of the source rule chosen.
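The transfer step can be illustrated — in one dimension and for a single root, rather than the paper's full multivariate quadrature system — by a hypothetical predictor-corrector sketch; all names and the finite-difference Newton corrector are our own assumptions:

```python
import numpy as np

def track_root(f0, f1, x0, steps=100, newton_iters=5):
    # Follow a zero of the homotopy H(x, s) = (1 - s) f0(x) + s f1(x)
    # from a known root x0 of f0 (at s = 0) to a root of f1 (at s = 1).
    # Each small step in s perturbs the root slightly; Newton's method
    # (with a finite-difference derivative) corrects it back onto the path.
    x = float(x0)
    for s in np.linspace(0.0, 1.0, steps + 1)[1:]:
        H = lambda z, s=s: (1.0 - s) * f0(z) + s * f1(z)
        for _ in range(newton_iters):
            eps = 1e-7
            dH = (H(x + eps) - H(x - eps)) / (2.0 * eps)
            x -= H(x) / dH
    return x
```

For example, tracking the positive root of x^2 - 1 into x^2 - 4 follows the path sqrt(1 + 3s) continuously from 1 to 2.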
Survival Analysis with Multivariate adaptive Regression Splines
Kriner, Monika
2007-01-01
Multivariate adaptive regression splines (MARS) are a useful tool to identify linear and nonlinear effects and interactions between two covariates. In this dissertation a new proposal to model survival-type data with MARS is introduced. Martingale and deviance residuals of a Cox PH model are used as responses in a common MARS approach to model functional forms of covariate effects as well as possible interactions in a data-driven way. Simulation studies prove that the new method yields a bett...
Cylindrical Helix Spline Approximation of Spatial Curves
Institute of Scientific and Technical Information of China (English)
(no author listed)
2007-01-01
In this paper, we present a new method for approximating spatial curves with a G1 cylindrical helix spline within a prescribed tolerance. We deduce the general formulation of a cylindrical helix, which has 11 degrees of freedom; this means that 11 restrictions are needed to determine a cylindrical helix. Given a spatial parametric curve segment, including the start point and the end point of the segment and the tangent and principal normal at the start point, we can always find a helix segment that interpolates the given direction and position vectors. In order to approximate the known parametric curve within the prescribed tolerance, we adopt a step-by-step trial method. First, we ensure that the helix segment interpolates the two given end points and matches the principal normal and tangent at the start point; then, we keep the deviation between the cylindrical helix segment and the known curve segment within the prescribed tolerance everywhere. After the first segment has been formed, we construct the next segment. Proceeding in this way, we construct a G1 cylindrical helix spline that approximates the whole spatial parametric curve within the prescribed tolerance. Several examples are given to show the efficiency of this method.
On spline approximation of sliced inverse regression
Institute of Scientific and Technical Information of China (English)
2007-01-01
Dimension reduction is helpful, and often necessary, in exploring nonparametric regression structure. In this area, sliced inverse regression (SIR) is a promising tool for estimating the central dimension reduction (CDR) space. To estimate the kernel matrix of SIR, we suggest a spline approximation using least squares regression. Heteroscedasticity can be incorporated well by introducing an appropriate weight function. Root-n asymptotic normality can be achieved for a wide range of knot choices, essentially analogous to kernel estimation. Moreover, we also propose a modified Bayes information criterion (BIC) based on the eigenvalues of the SIR matrix. This modified BIC can be applied to any form of SIR and other related methods. The methodology and some practical issues are illustrated through the horse mussel data. Empirical studies demonstrate the performance of the proposed spline approximation in comparison with existing estimators.
Use of Splines in Handwritten Character Recognition
Directory of Open Access Journals (Sweden)
Sunil Kumar
2010-10-01
Full Text Available Handwritten character recognition software identifies handwritten characters, receiving and interpreting intelligible handwritten input from sources such as manuscript documents. Recent years have seen the development of many systems able to simulate human brain actions. Among these, neural networks and artificial intelligence are the two most important paradigms used. In this paper we propose a new algorithm for the recognition of handwritten text based on spline functions and a neural network. In this approach the converse order of the handwritten character structure task is used to recognize the character. The spline function and the steepest descent method are applied to the optimal knots to interpolate and approximate the character shape. The sampled data of the handwritten text are used to obtain these optimal knots. Each character model is constructed by training the sequence of optimal knots using the neural network. Lastly, the unknown input character is compared against all character models to obtain similarity scores.
Kirstein, Roland
2005-01-01
This paper presents a modification of the inspection game: the "Bayesian Monitoring" model rests on the assumption that judges are interested in enforcing compliant behavior and making correct decisions. They may base their judgements on an informative but imperfect signal which can be generated costlessly. In the original inspection game, monitoring is costly and generates a perfectly informative signal. While the inspection game has only one mixed-strategy equilibrium, three Perfect Bayesia...
Exponential B-splines and the partition of unity property
DEFF Research Database (Denmark)
Christensen, Ole; Massopust, Peter
2012-01-01
We provide an explicit formula for a large class of exponential B-splines. Also, we characterize the cases where the integer-translates of an exponential B-spline form a partition of unity up to a multiplicative constant. As an application of this result we construct explicitly given pairs of dual...
Nonlinear and fault-tolerant flight control using multivariate splines
Tol, H.J.; De Visser, C.C.; Van Kampen, E.J.; Chu, Q.P.
2015-01-01
This paper presents a study on fault tolerant flight control of a high performance aircraft using multivariate splines. The controller is implemented by making use of spline model based adaptive nonlinear dynamic inversion (NDI). This method, indicated as SANDI, combines NDI control with nonlinear c
A family of quasi-cubic blended splines and applications
Institute of Scientific and Technical Information of China (English)
SU Ben-yue; TAN Jie-qing
2006-01-01
A class of quasi-cubic B-spline basis functions built from trigonometric polynomials is established, inheriting properties similar to those of the cubic B-spline basis. The corresponding curves with a shape parameter α, defined by the introduced basis functions, include the B-spline curves and can approximate the B-spline curves from both sides. The curves can be adjusted easily by using the shape parameter α, where dp_i(α,t) is linear with respect to dα for fixed t. With the shape parameter chosen properly, the defined curves can represent straight line segments, parabola segments, circular arcs and some transcendental curves precisely, and the corresponding tensor product surfaces can also represent spherical surfaces, cylindrical surfaces and some transcendental surfaces exactly. By abandoning the positivity property, this paper proposes a new C2 continuous blended interpolation spline based on piecewise trigonometric polynomials associated with a sequence of local parameters. Examples show that the curves and surfaces constructed by the blended spline can be adjusted easily and freely. The blended interpolation spline curves can be shape-preserving with proper local parameters, since these local parameters can be considered as the magnification ratio of the length of the tangent vectors at the interpolation points. The idea is extended to produce blended spline surfaces.
A Bayesian semiparametric approach with change points for spatial ordinal data.
Cai, Bo; Lawson, Andrew B; McDermott, Suzanne; Aelion, C Marjorie
2016-04-01
The change-point model has drawn much attention over the past few decades. It can accommodate the jump process, which allows for changes of the effects before and after the change point. Intellectual disability is a long-term disability that impacts performance in cognitive aspects of life and usually has its onset prior to birth. Among many potential causes, soil chemical exposures are associated with the risk of intellectual disability in children. Motivated by a study for soil metal effects on intellectual disability, we propose a Bayesian hierarchical spatial model with change points for spatial ordinal data to detect the unknown threshold effects. The spatial continuous latent variable underlying the spatial ordinal outcome is modeled by the multivariate Gaussian process, which captures spatial variation and is centered at the nonlinear mean. The mean function is modeled by using the penalized smoothing splines for some covariates with unknown change points and the linear regression for the others. Some identifiability constraints are used to define the latent variable. A simulation example is presented to evaluate the performance of the proposed approach with the competing models. A retrospective cohort study for intellectual disability in South Carolina is used as an illustration. PMID:23070600
Testing for additivity with B-splines
Institute of Scientific and Technical Information of China (English)
Heng-jian CUI; Xu-ming HE; Li LIU
2007-01-01
Regression splines are often used for fitting nonparametric functions, and they work especially well for additivity models. In this paper, we consider two simple tests of additivity: an adaptation of Tukey's one-degree-of-freedom test and a nonparametric version of Rao's score test. While the Tukey-type test can detect most forms of local non-additivity at the parametric rate of O(n^{-1/2}), the score test is consistent for all alternatives at a nonparametric rate. The asymptotic distributions of these test statistics are derived under both the null and local alternative hypotheses. A simulation study is conducted to compare their finite-sample performances with some existing kernel-based tests. The score test is found to have a good overall performance.
Bessiere, Pierre; Ahuactzin, Juan Manuel; Mekhnacha, Kamel
2013-01-01
Probability as an Alternative to Boolean Logic: While logic is the mathematical foundation of rational reasoning and the fundamental principle of computing, it is restricted to problems where information is both complete and certain. However, many real-world problems, from financial investments to email filtering, are incomplete or uncertain in nature. Probability theory and Bayesian computing together provide an alternative framework to deal with incomplete and uncertain data. Decision-Making Tools and Methods for Incomplete and Uncertain Data: Emphasizing probability as an alternative to Boolean
Institute of Scientific and Technical Information of China (English)
Joong-Hyun Rhim; Doo-Yeoun Cho; Kyu-Yeul Lee; Tae-Wan Kim
2006-01-01
We propose a method that automatically generates discrete bicubic G1 continuous B-spline surfaces that interpolate the curve network of a ship hullform. First, the curves in the network are classified into two types: boundary curves and "reference curves". The boundary curves correspond to a set of rectangular (or triangular) topological regions that can be represented with tensor-product (or degenerate) B-spline surface patches. Next, in the interior of the patches, surface fitting points and cross-boundary derivatives are estimated from the reference curves by constructing "virtual" isoparametric curves. Finally, a discrete G1 continuous B-spline surface is generated by a surface fitting algorithm. Several smooth ship hullform surfaces generated from curve networks corresponding to actual ship hullforms demonstrate the quality of the method.
B-spline Collocation with Domain Decomposition Method
International Nuclear Information System (INIS)
A global B-spline collocation method has previously been developed and successfully implemented by the present authors for solving elliptic partial differential equations in arbitrary complex domains. However, the global B-spline approximation, which simply reduces to a Bezier approximation of any degree p with C0 continuity, has led to the use of B-spline bases of high order in order to achieve high accuracy. The need for high-order B-spline bases in the global method is more prominent in domains of large dimension, and the increased number of collocation points may also lead to ill-conditioning. In this study, overlapping domain decomposition with the multiplicative Schwarz algorithm is combined with the global method. Our objective is two-fold: to improve the accuracy through the combination technique, and to investigate how the combination technique affects the B-spline basis order required for a given accuracy. The combined method was shown to produce higher accuracy with B-spline bases of much lower order than needed in the original method, and the approximation stability of the B-spline collocation method was also increased.
Numerical simulation of involutes spline shaft in cold rolling forming
Institute of Scientific and Technical Information of China (English)
王志奎; 张庆
2008-01-01
The design of the forming dies and the whole simulation of cold rolling of involute splines can be realized using the CAD software Pro/E and the CAE software DEFORM-3D. DEFORM-3D provides an automatic, optimized remeshing function, especially for large deformations. In order to use this function fully, the simulation of cold rolling of involute splines can be implemented indirectly. The relationship between die and workpiece, the forming force, and the characteristics of deformation in the cold rolling process are analyzed and investigated. Meanwhile, reliable support for the design of dies and forming equipment is provided.
A New Local Control Spline with Shape Parameters for CAD/CAM
Institute of Scientific and Technical Information of China (English)
秦开怀; 孙家广
1993-01-01
A new local control spline with shape parameters and G3 continuity, called the BLC-spline, is proposed. Not only is the BLC-spline very smooth, but the spline curve's characteristic polygon has only three control vertices, and the characteristic polyhedron has only nine control vertices. The local control behavior of the BLC-spline is better than that of other splines such as the cubic Bezier, B-spline and Beta-spline. The three shape parameters β0, β1 and β2 of the BLC-spline, which are independent of the control vertices, may be altered to change the shape of the curve or surface. It is shown that the BLC-spline may be used to construct a space arc spline for DNC machining directly. This makes it a powerful tool for the design and manufacture of curves and surfaces in integrated CAD/CAM systems.
Modeling terminal ballistics using blending-type spline surfaces
Pedersen, Aleksander; Bratlie, Jostein; Dalmo, Rune
2014-12-01
We explore using GERBS, a blending-type spline construction, to represent deformable thin plates and to model terminal ballistics. Strategies for constructing geometry for different terminal-ballistics scenarios are proposed.
Preference learning with evolutionary Multivariate Adaptive Regression Spline model
DEFF Research Database (Denmark)
Abou-Zleikha, Mohamed; Shaker, Noor; Christensen, Mads Græsbøll
2015-01-01
This paper introduces a novel approach for pairwise preference learning through combining an evolutionary method with Multivariate Adaptive Regression Spline (MARS). Collecting users' feedback through pairwise preferences is recommended over other ranking approaches as this method is more appealing...
Estimating Financial Trends by Spline Fitting via Fisher Algorithm
BARAN, Mehmet; SÖNMEZER, Sıtkı; UÇAR, Abdulvahid
2015-01-01
Trends play a crucial role in finance, for example in setting investment strategies and in technical analysis. Determining trend changes in an optimal way is the main aim of this study. The model of this study improves optimality by spline fitting of the equations to reduce the error terms. The results show that spline fitting is more efficient than line fitting by % and than the Fisher method by %. This method may be used to determine regime switches as well.
Spline discrete differential forms. Application to Maxwell's equations.
Back, Aurore; Sonnendrücker, Eric
2011-01-01
We construct a new set of discrete differential forms based on B-splines of arbitrary degree as well as an associated Hodge operator. The theory is first developed in 1D and then extended to multi-dimension using tensor products. We link our discrete differential forms with the theory of chains and cochains. The spline discrete differential forms are then applied to the numerical solution of Maxwell's equations.
Representative fretting fatigue testing and prediction for splined couplings
Houghton, Dean
2009-01-01
Spline couplings are a compact and efficient means for transferring torque between shafts in gas turbine aeroengines. With competition in the aerospace market and the need to reduce fuel burn from the flight carriers, there is an ever-present requirement for enhanced performance. Spline couplings are complex components that can fail from a variety of mechanisms, and are susceptible to fretting wear and fretting fatigue (FF). Due to the expensive nature of full-scale testing, this thesis inves...
Spline trigonometric bases and their properties
International Nuclear Information System (INIS)
A family of pairs of biorthonormal systems is constructed such that for each p element of (1,∞) one of these systems is a basis in the space Lp(a,b), while the other is the dual basis in Lq(a,b) (here 1/p+1/q=1). The functions in the first system are products of trigonometric and algebraic polynomials; the functions in the second are products of trigonometric polynomials and the derivatives of B-splines. The asymptotic behaviour of the Lebesgue functions of the constructed systems is investigated. In particular, it is shown that the dominant terms of pointwise asymptotic expansions for the Lebesgue functions have everywhere (except at certain singular points) the form 4/π2 ln n (that is, the same as in the case of an orthonormal trigonometric system). Interpolation representations with multiple nodes for entire functions of exponential type σ are obtained. These formulae involve a uniform grid; however, by contrast with Kotel'nikov's theorem, where the mesh of the grid is π/σ and decreases as the type of the entire function increases, in the representations obtained the nodes of interpolation can be kept independent of σ, and their multiplicity increases as the type of the interpolated function increases. One possible application of such representations (particularly, their multidimensional analogues) is an effective construction of asymptotically optimal approximation methods by means of scaling and argument shifts of a fixed function (wavelets, grid projection methods, and so on)
Railroad inspection based on ACFM employing a non-uniform B-spline approach
Chacón Muñoz, J. M.; García Márquez, F. P.; Papaelias, M.
2013-11-01
The stresses sustained by rails have increased in recent years due to the use of higher train speeds and heavier axle loads. For this reason, surface and near-surface defects generated by Rolling Contact Fatigue (RCF) have become particularly significant, as they can cause unexpected structural failure of the rail, resulting in severe derailments. The accident that took place in Hatfield, UK (2000) is an example of a derailment caused by the structural failure of a rail section due to RCF. Early detection of RCF rail defects is therefore of paramount importance to the rail industry. The performance of existing ultrasonic and magnetic flux leakage techniques in detecting rail surface-breaking defects, such as head checks and gauge corner cracking, is inadequate during high-speed inspection, while eddy current sensors suffer from lift-off effects. The results obtained through rail inspection experiments under simulated conditions using Alternating Current Field Measurement (ACFM) probes suggest that this technique can be applied for the accurate and reliable detection of surface-breaking defects at high inspection speeds. This paper presents the B-spline approach used for accurately filtering the noise of the raw ACFM signal obtained during high-speed tests to improve the reliability of the measurements. A non-uniform B-spline approximation is employed to calculate the exact positions and dimensions of the defects. This method generates a smooth approximation of the ACFM data points associated with the rail surface-breaking defect.
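A hedged sketch of the smoothing idea: the paper uses non-uniform B-splines, but for brevity this example uses an equivalent truncated-power cubic spline basis with far fewer knots than samples; all names are our own and nothing here reproduces the paper's ACFM processing chain.

```python
import numpy as np

def truncated_power_basis(x, knots):
    # Cubic truncated-power spline basis: 1, x, x^2, x^3 and one
    # piece (x - k)_+^3 per interior knot.
    cols = [np.ones_like(x), x, x ** 2, x ** 3]
    cols += [np.clip(x - k, 0.0, None) ** 3 for k in knots]
    return np.column_stack(cols)

def spline_denoise(x, y, knots):
    # Least-squares cubic spline fit: because the basis has far fewer
    # degrees of freedom than there are samples, the fit smooths sensor
    # noise while keeping localized features (e.g. a defect signature)
    # resolvable.
    B = truncated_power_basis(x, knots)
    coef, *_ = np.linalg.lstsq(B, y, rcond=None)
    return B @ coef
```

The position and extent of a defect could then be read off from the extrema of the fitted curve rather than from the noisy raw samples.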
Bayesian artificial intelligence
Korb, Kevin B
2010-01-01
Updated and expanded, Bayesian Artificial Intelligence, Second Edition provides a practical and accessible introduction to the main concepts, foundations, and applications of Bayesian networks. It focuses on both the causal discovery of networks and Bayesian inference procedures. Adopting a causal interpretation of Bayesian networks, the authors discuss the use of Bayesian networks for causal modeling. They also draw on their own applied research to illustrate various applications of the technology. New to the Second Edition: a new chapter on Bayesian network classifiers; a new section on object-oriente
DEFF Research Database (Denmark)
Jensen, Finn Verner; Nielsen, Thomas Dyhre
2016-01-01
Mathematically, a Bayesian graphical model is a compact representation of the joint probability distribution for a set of variables. The most frequently used type of Bayesian graphical models are Bayesian networks. The structural part of a Bayesian graphical model is a graph consisting of nodes and...... largely due to the availability of efficient inference algorithms for answering probabilistic queries about the states of the variables in the network. Furthermore, to support the construction of Bayesian network models, learning algorithms are also available. We give an overview of the Bayesian network...
Robust Filtering and Smoothing with Gaussian Processes
Deisenroth, Marc Peter; Turner, Ryan; Huber, Marco F.; Uwe D. Hanebeck; Rasmussen, Carl Edward
2012-01-01
We propose a principled algorithm for robust Bayesian filtering and smoothing in nonlinear stochastic dynamic systems when both the transition function and the measurement function are described by non-parametric Gaussian process (GP) models. GPs are gaining increasing importance in signal processing, machine learning, robotics, and control for representing unknown system functions by posterior probability distributions. This modern way of "system identification" is more robust than finding p...
Nonequilibrium flows with smooth particle applied mechanics
Energy Technology Data Exchange (ETDEWEB)
Kum, O.
1995-07-01
Smooth particle methods are relatively new methods for simulating solid and fluid flows, though they have a 20-year history of solving complex hydrodynamic problems in astrophysics, such as colliding planets and stars, for which correct answers are unknown. The results presented in this thesis evaluate the adaptability or fitness of the method for typical hydrocode production problems. For finite hydrodynamic systems, boundary conditions are important. A reflective boundary condition with image particles is a good way to prevent a density anomaly at the boundary and to keep the fluxes continuous there. Boundary values of temperature and velocity can be controlled separately. The gradient algorithm, based on differentiating the smooth particle expressions for (uρ) and (Tρ), does not show numerical instabilities for the stress tensor and heat flux vector quantities, which require second derivatives in space when Fourier's heat-flow law and Newton's viscous force law are used. Smooth particle methods show an interesting parallel with molecular dynamics. For the inviscid Euler equation with an isentropic ideal gas equation of state, the smooth particle algorithm generates trajectories isomorphic to those generated by molecular dynamics. The shear moduli were evaluated based on molecular dynamics calculations for the three weighting functions: the B-spline, Lucy, and cusp functions. The accuracy and applicability of the methods were estimated by comparing a set of smooth particle Rayleigh-Benard problems, all in the laminar regime, to corresponding highly accurate grid-based numerical solutions of continuum equations. Both transient and stationary smooth particle solutions reproduce the grid-based data with velocity errors on the order of 5%. The smooth particle method still provides robust solutions at high Rayleigh number, where grid-based methods fail.
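One of the weighting functions named in the abstract, Lucy's kernel, can be illustrated with a minimal 1-D smooth-particle density estimate. This is a sketch under our own normalization and naming, not the thesis code:

```python
import numpy as np

def lucy_kernel(r, h):
    # Lucy's 1-D weighting function, normalized so it integrates to 1:
    # W(r, h) = 5/(4h) (1 + 3q)(1 - q)^3 for q = |r|/h < 1, else 0.
    q = np.abs(r) / h
    w = 5.0 / (4.0 * h) * (1.0 + 3.0 * q) * (1.0 - q) ** 3
    return np.where(q < 1.0, w, 0.0)

def sph_density(x, mass, h):
    # Smooth-particle density estimate: rho_i = sum_j m_j W(x_i - x_j, h).
    r = x[:, None] - x[None, :]
    return lucy_kernel(r, h) @ np.full(len(x), mass)
```

Equally spaced particles of mass rho0 * dx recover the density rho0 in the interior; near the ends the estimate drops, which is exactly the boundary density anomaly that the image-particle treatment in the abstract is designed to prevent.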
An accurate spline polynomial cubature formula for double integration with logarithmic singularity
Bichi, Sirajo Lawan; Eshkuvatov, Z. K.; Long, N. M. A. Nik; Bello, M. Y.
2016-06-01
The paper studies the integration of the logarithmic singularity problem J(ȳ) = ∬_∇ ζ(ȳ) log|ȳ − ȳ₀| dA, where ȳ = (α, β) and ȳ₀ = (α₀, β₀); the domain ∇ is the rectangle ∇ = [r1, r2] × [r3, r4], with ȳ ∈ ∇ an arbitrary point and ȳ₀ ∈ ∇ a fixed point. The given density function ζ(ȳ) is smooth on the rectangular domain ∇ and belongs to the function class C^{2,τ}(∇). A cubature formula (CF) for double integration with logarithmic singularities (LS) on the rectangle ∇ is constructed by applying the type (0, 2) modified spline function D_Γ(P). The results obtained by testing density functions ζ(ȳ) that are linear and absolute-value functions show that the constructed CF is highly accurate.
Adaptive non-uniform B-spline dictionaries on a compact interval
Rebollo-Neira, Laura
2009-01-01
Non-uniform B-spline dictionaries on a compact interval are discussed. For each given partition, dictionaries of B-spline functions for the corresponding spline space are constructed. It is shown that, by dividing the given partition into subpartitions and joining together the bases for the concomitant subspaces, slightly redundant dictionaries of B-spline functions are obtained. Such dictionaries are proved to span the spline space associated with the given partition. The proposed construction is shown to be potentially useful for the purpose of sparse signal representation. With that goal in mind, spline spaces specially adapted to produce a sparse representation of a given signal are considered.
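A minimal sketch of how the basis underlying such a dictionary can be assembled, using the standard Cox-de Boor recursion on a possibly non-uniform knot vector; the naming is ours, and the paper's subpartition-joining (redundancy) step is not reproduced:

```python
import numpy as np

def bspline_basis(i, p, knots, x):
    # Cox-de Boor recursion for the i-th B-spline of degree p on `knots`
    # (half-open intervals; zero-length knot spans are skipped).
    if p == 0:
        return np.where((knots[i] <= x) & (x < knots[i + 1]), 1.0, 0.0)
    out = np.zeros_like(x, dtype=float)
    left = knots[i + p] - knots[i]
    if left > 0:
        out += (x - knots[i]) / left * bspline_basis(i, p - 1, knots, x)
    right = knots[i + p + 1] - knots[i + 1]
    if right > 0:
        out += (knots[i + p + 1] - x) / right * bspline_basis(i + 1, p - 1, knots, x)
    return out

def dictionary(knots, p, x):
    # Stack every degree-p B-spline living on the partition into the
    # columns of a design matrix.
    n = len(knots) - p - 1
    return np.column_stack([bspline_basis(i, p, knots, x) for i in range(n)])
```

On a clamped knot vector the columns sum to one across the interval (partition of unity), which is the baseline property against which the slightly redundant dictionaries in the abstract are defined.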
Gelman, Andrew; Stern, Hal S; Dunson, David B; Vehtari, Aki; Rubin, Donald B
2013-01-01
FUNDAMENTALS OF BAYESIAN INFERENCE: Probability and Inference; Single-Parameter Models; Introduction to Multiparameter Models; Asymptotics and Connections to Non-Bayesian Approaches; Hierarchical Models. FUNDAMENTALS OF BAYESIAN DATA ANALYSIS: Model Checking; Evaluating, Comparing, and Expanding Models; Modeling Accounting for Data Collection; Decision Analysis. ADVANCED COMPUTATION: Introduction to Bayesian Computation; Basics of Markov Chain Simulation; Computationally Efficient Markov Chain Simulation; Modal and Distributional Approximations. REGRESSION MODELS: Introduction to Regression Models; Hierarchical Linear
Yuan, Ying; MacKinnon, David P.
2009-01-01
This article proposes Bayesian analysis of mediation effects. Compared to conventional frequentist mediation analysis, the Bayesian approach has several advantages. First, it allows researchers to incorporate prior information into the mediation analysis, thus potentially improving the efficiency of estimates. Second, under the Bayesian mediation analysis, inference is straightforward and exact, which makes it appealing for studies with small samples. Third, the Bayesian approach is conceptua...
Introducing two hyperparameters in Bayesian estimation of wave spectra
Nielsen, Ulrik Dam
2008-01-01
An estimate of the on-site wave spectrum can be obtained from measured ship responses by use of Bayesian modelling, which means that the wave spectrum is found as the optimum solution from a probabilistic viewpoint. The paper describes the introduction of two hyperparameters into Bayesian modelling so that the prior information included in the modelling is based on two constraints: the wave spectrum must be smooth directional-wise as well as frequency-wise. Traditionally, only one hyperparame...
Bayesian Games with Intentions
Bjorndahl, Adam; Halpern, Joseph Y.; Pass, Rafael
2016-01-01
We show that standard Bayesian games cannot represent the full spectrum of belief-dependent preferences. However, by introducing a fundamental distinction between intended and actual strategies, we remove this limitation. We define Bayesian games with intentions, generalizing both Bayesian games and psychological games, and prove that Nash equilibria in psychological games correspond to a special class of equilibria as defined in our setting.
Fingerprint Representation Methods Based on B-Spline Functions
Institute of Scientific and Technical Information of China (English)
Ruan Ke; Xia De-lin; Yan Pu-liu
2004-01-01
The global characteristics of a fingerprint image, such as ridge shape and ridge topology, are often ignored in most automatic fingerprint verification systems. In this paper, a new representation method based on B-spline curves is proposed to address this problem. The resultant B-spline curves can represent the global characteristics completely, and the curves are analyzable and precise. An algorithm is also proposed to extract the curves from the fingerprint image. While preserving most of the information in the fingerprint image, the algorithm reduces the number of knot points of the B-spline curve to a minimum. The influence of fingerprint image noise is also discussed. Finally, an example is given to demonstrate the effectiveness of the representation method.
Image Compression and Reconstruction using Cubic Spline Interpolation Technique
Directory of Open Access Journals (Sweden)
R. Muthaiah
2008-01-01
A new dimension of image compression using random pixels from irregular sampling, with image reconstruction by cubic-spline interpolation, is proposed in this paper. It also covers the wide field of multimedia communication concerned with multimedia messaging (MMS) and image transfer through mobile phones, and seeks a mechanism to transfer images with minimum bandwidth requirements. This method provides better efficiency in both pixel reconstruction and color reproduction. The discussion covers the theoretical techniques of random pixel selection and transfer, and the implementation of efficient reconstruction with cubic-spline interpolation.
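A minimal sketch of the reconstruction step, assuming a single scan line and randomly kept pixel positions (the sampling and transfer machinery of the paper is omitted, and the intensities are synthetic):

```python
import numpy as np
from scipy.interpolate import CubicSpline

# One scan line of a synthetic "image", kept only at random (irregular)
# pixel positions and reconstructed by cubic-spline interpolation.
rng = np.random.default_rng(0)
x_full = np.arange(256)
line = np.sin(2 * np.pi * x_full / 64.0)        # stand-in for pixel intensities

keep = np.sort(rng.choice(x_full[1:-1], size=60, replace=False))
keep = np.concatenate(([0], keep, [255]))       # always keep the endpoints
cs = CubicSpline(keep, line[keep])

recon = cs(x_full)
rmse = float(np.sqrt(np.mean((recon - line) ** 2)))
print(f"kept {keep.size} of 256 pixels, reconstruction RMSE = {rmse:.4f}")
```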
Anacleto Junior, Osvaldo; Queen, Catriona; Albers, Casper
2013-01-01
Traffic flow data are routinely collected for many networks worldwide. These invariably large data sets can be used as part of a traffic management system, for which good traffic flow forecasting models are crucial. The linear multiregression dynamic model (LMDM) has been shown to be promising for forecasting flows, accommodating multivariate flow time series, while being a computationally simple model to use. While statistical flow forecasting models usually base their forecasts on flow da...
ibr: Iterative bias reduction multivariate smoothing
Energy Technology Data Exchange (ETDEWEB)
Hengartner, Nicholas W [Los Alamos National Laboratory; Cornillon, Pierre-andre [AGRO-SUP, FRANCE; Matzner - Lober, Eric [RENNES 2, FRANCE
2009-01-01
Regression is a fundamental data analysis tool for relating a univariate response variable Y to a multivariate predictor X ∈ R^d from the observations (X_i, Y_i), i = 1, ..., n. Traditional nonparametric regression uses the assumption that the regression function varies smoothly in the independent variable x to locally estimate the conditional expectation m(x) = E[Y|X = x]. The resulting vector of predicted values Ŷ_i at the observed covariates X_i is called a regression smoother, or simply a smoother, because the predicted values Ŷ_i are less variable than the original observations Y_i. Linear smoothers are linear in the response variable Y and are operationally written as m̂ = S_λ Y, where S_λ is an n x n smoothing matrix. The smoothing matrix S_λ typically depends on a tuning parameter, denoted λ, that governs the tradeoff between the smoothness of the estimate and the goodness of fit of the smoother to the data by controlling the effective size of the local neighborhood over which the responses are averaged. We parameterize the smoothing matrix such that large values of λ correspond to smoothers that average over larger neighborhoods and produce very smooth curves, while small values of λ correspond to smoothers that average over smaller neighborhoods and produce more wiggly curves that tend to interpolate the data. The parameter λ is the bandwidth for a kernel smoother, the span size for running-mean and bin smoothers, and the penalty factor for a spline smoother.
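The smoother-matrix notation can be made concrete with a Gaussian kernel smoother; this sketch (not the ibr iterative bias-reduction algorithm itself, and with invented data) shows m̂ = S_λ y, rows of S_λ summing to one, and the effective degrees of freedom trace(S_λ) shrinking as λ grows:

```python
import numpy as np

# A minimal linear smoother in matrix form, m_hat = S_lambda @ y, using a
# Gaussian kernel. The bandwidth lambda controls the neighbourhood size.
def smoother_matrix(x, lam):
    d = x[:, None] - x[None, :]
    w = np.exp(-0.5 * (d / lam) ** 2)
    return w / w.sum(axis=1, keepdims=True)    # each row sums to one

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0.0, 1.0, 100))
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.2, 100)

for lam in (0.01, 0.1):
    S = smoother_matrix(x, lam)
    y_hat = S @ y
    # trace(S) is the effective number of parameters; it shrinks as lambda
    # grows and responses are averaged over wider neighbourhoods.
    print(f"lambda={lam}: effective df = {np.trace(S):.1f}, "
          f"smoothed variance = {y_hat.var():.3f} (raw {y.var():.3f})")
```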
Approximating Spline filter: New Approach for Gaussian Filtering in Surface Metrology
Hao Zhang; Yibao Yuan
2009-01-01
This paper presents a new spline filter, named the approximating spline filter, for surface metrology. The purpose is to provide a new approach to the Gaussian filter and to evaluate the characteristics of an engineering surface more accurately and comprehensively. First, the configuration of the approximating spline filter is investigated, showing that this filter inherits all the merits of an ordinary spline filter, e.g. no phase distortion and no end distortion. Then, the approximating coefficient s...
Meshing Force of Misaligned Spline Coupling and the Influence on Rotor System
Feng Chen; Zhansheng Liu; Guang Zhao
2008-01-01
The meshing force of a misaligned spline coupling is derived, the dynamic equation of a rotor-spline coupling system is established based on finite element analysis, and the influence of the meshing force on the rotor-spline coupling system is simulated by a numerical integration method. According to the theoretical analysis, the meshing force of a spline coupling is related to the coupling parameters, misalignment, transmitted torque, static misalignment, dynamic vibration displacement, and so on. The meshing force increases non...
Exploiting correlation and budget constraints in Bayesian multi-armed bandit optimization
Hoffman, Matthew W.; Shahriari, Bobak; De Freitas, Nando
2013-01-01
We address the problem of finding the maximizer of a nonlinear smooth function, that can only be evaluated point-wise, subject to constraints on the number of permitted function evaluations. This problem is also known as fixed-budget best arm identification in the multi-armed bandit literature. We introduce a Bayesian approach for this problem and show that it empirically outperforms both the existing frequentist counterpart and other Bayesian optimization methods. The Bayesian approach place...
Computation Of An Optimal Laser Cavity Using Splines
Pantelic, Dejan V.; Janevski, Zoran D.
1989-03-01
As an attempt to improve the efficiency of a solid state laser cavity, a non-elliptical cavity is proposed. Efficiency was calculated by the ray trace method and the cavity was simulated using a novel approach with splines. Computation shows that substantial gain in efficiency can be achieved for a close coupled configuration.
Sparse image representation by discrete cosine/spline based dictionaries
Bowley, James
2009-01-01
Mixed dictionaries generated by cosine and B-spline functions are considered. It is shown that, by highly nonlinear approaches such as Orthogonal Matching Pursuit, the discrete version of the proposed dictionaries yields a significant gain in the sparsity of an image representation.
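A toy Orthogonal Matching Pursuit over a random unit-norm dictionary (a stand-in for the cosine/B-spline dictionaries of the paper; the signal and dimensions are invented) illustrates the greedy sparse-coding step:

```python
import numpy as np

# Orthogonal Matching Pursuit: greedily pick the atom most correlated with
# the residual, then re-project the signal onto all atoms selected so far.
def omp(D, y, n_atoms):
    sel, coefs, residual = [], None, y.copy()
    for _ in range(n_atoms):
        sel.append(int(np.argmax(np.abs(D.T @ residual))))
        coefs, *_ = np.linalg.lstsq(D[:, sel], y, rcond=None)
        residual = y - D[:, sel] @ coefs
    return sel, coefs

rng = np.random.default_rng(7)
D = rng.normal(size=(64, 96))
D /= np.linalg.norm(D, axis=0)          # unit-norm atoms
y = 3.0 * D[:, 5] - 2.0 * D[:, 90]      # a truly 2-sparse signal
sel, coefs = omp(D, y, 2)
print(sorted(sel), np.round(coefs, 3))
```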
A new kind of splines and their use for fast ray-tracing in reflective cavities
Pantelic, Dejan V.; Janevski, Zoran D.
1989-08-01
In this paper we present a new kind of spline that is very effective in ray-tracing applications. These splines are designed in such a way as to enable fast and efficient computation of line-spline intersections (the line representing the light ray, and the spline representing the reflective cavity). They are piecewise parabolic polynomials, but with additional degrees of freedom: polynomial sections of the spline can be rotated to a certain angle (each section has its own angle of rotation), thus enabling continuity of the first derivative.
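The key computational point, that a ray-section intersection reduces to one quadratic, can be sketched as follows (the section rotation described above is omitted; the parabola and ray are hypothetical values):

```python
import numpy as np

# One parabolic section of the spline, y = a*x**2 + b*x + c on [x0, x1],
# intersected with a ray y = m*x + q: substituting reduces the problem to
# a single quadratic in x, which is what makes the ray trace fast.
def ray_parabola_hits(a, b, c, x0, x1, m, q):
    roots = np.roots([a, b - m, c - q])
    return [r.real for r in roots if np.isreal(r) and x0 <= r.real <= x1]

# Hypothetical example: ray y = 1 against y = x**2 on [-2, 2].
hits = sorted(ray_parabola_hits(1.0, 0.0, 0.0, -2.0, 2.0, 0.0, 1.0))
print(hits)
```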
Rubin, Donald B.
1981-01-01
The Bayesian bootstrap is the Bayesian analogue of the bootstrap. Instead of simulating the sampling distribution of a statistic estimating a parameter, the Bayesian bootstrap simulates the posterior distribution of the parameter; operationally and inferentially the methods are quite similar. Because both methods of drawing inferences are based on somewhat peculiar model assumptions and the resulting inferences are generally sensitive to these assumptions, neither method should be applied wit...
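A minimal sketch of the Bayesian bootstrap for the mean, with invented data: posterior draws are obtained by reweighting the observations with flat Dirichlet weights rather than by resampling them, as in the ordinary bootstrap:

```python
import numpy as np

# Bayesian bootstrap for the mean: each posterior draw reweights the n
# observations with flat Dirichlet(1, ..., 1) weights.
rng = np.random.default_rng(42)
data = rng.normal(10.0, 2.0, size=50)

draws = np.array([
    np.dot(rng.dirichlet(np.ones(data.size)), data)
    for _ in range(4000)
])
lo, hi = np.percentile(draws, [2.5, 97.5])
print(f"posterior mean of the mean: {draws.mean():.2f}, "
      f"95% interval ({lo:.2f}, {hi:.2f})")
```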
Diwakar, S. V.; Das, Sarit K.; Sundararajan, T.
2009-12-01
A new Quadratic Spline based Interface (QUASI) reconstruction algorithm is presented which provides an accurate and continuous representation of the interface in a multiphase domain and facilitates the direct estimation of local interfacial curvature. The fluid interface in each of the mixed cells is represented by piecewise parabolic curves, and an initial discontinuous PLIC approximation of the interface is progressively converted into a smooth quadratic spline made of these parabolic curves. The conversion is achieved by a sequence of predictor-corrector operations enforcing function (C0) and derivative (C1) continuity at the cell boundaries, using simple analytical expressions for the continuity requirements. The efficacy and accuracy of the current algorithm have been demonstrated using standard test cases involving reconstruction of known static interface shapes and dynamically evolving interfaces in prescribed flow situations. These benchmark studies illustrate that the present algorithm performs excellently compared with the other interface reconstruction methods available in the literature. A quadratic rate of error reduction with respect to grid size has been observed in all cases with curved interface shapes; only in situations where the interface geometry is primarily flat does the rate of convergence become linear in the mesh size. The flow algorithm implemented in the current work is designed to accurately balance the pressure gradients with the surface tension force at any location. As a consequence, it is able to minimize spurious flow currents arising from imperfect normal stress balance at the interface. This has been demonstrated through the standard test problem of an inviscid droplet placed in a quiescent medium. Finally, the direct curvature estimation ability of the current algorithm is illustrated through the coupled multiphase flow problem of a deformable air bubble rising through a column of water.
Bayesian statistics an introduction
Lee, Peter M
2012-01-01
Bayesian Statistics is the school of thought that combines prior beliefs with the likelihood of a hypothesis to arrive at posterior beliefs. The first edition of Peter Lee’s book appeared in 1989, but the subject has moved ever onwards, with increasing emphasis on Monte Carlo based techniques. This new fourth edition looks at recent techniques such as variational methods, Bayesian importance sampling, approximate Bayesian computation and Reversible Jump Markov Chain Monte Carlo (RJMCMC), providing a concise account of the way in which the Bayesian approach to statistics develops as wel
Understanding Computational Bayesian Statistics
Bolstad, William M
2011-01-01
A hands-on introduction to computational statistics from a Bayesian point of view Providing a solid grounding in statistics while uniquely covering the topics from a Bayesian perspective, Understanding Computational Bayesian Statistics successfully guides readers through this new, cutting-edge approach. With its hands-on treatment of the topic, the book shows how samples can be drawn from the posterior distribution when the formula giving its shape is all that is known, and how Bayesian inferences can be based on these samples from the posterior. These ideas are illustrated on common statistic
Bayesian Kernel Mixtures for Counts.
Canale, Antonio; Dunson, David B
2011-12-01
Although Bayesian nonparametric mixture models for continuous data are well developed, there is a limited literature on related approaches for count data. A common strategy is to use a mixture of Poissons, which unfortunately is quite restrictive in not accounting for distributions having variance less than the mean. Other approaches include mixing multinomials, which requires finite support, and using a Dirichlet process prior with a Poisson base measure, which does not allow smooth deviations from the Poisson. As a broad class of alternative models, we propose to use nonparametric mixtures of rounded continuous kernels. An efficient Gibbs sampler is developed for posterior computation, and a simulation study is performed to assess performance. Focusing on the rounded Gaussian case, we generalize the modeling framework to account for multivariate count data, joint modeling with continuous and categorical variables, and other complications. The methods are illustrated through applications to a developmental toxicity study and marketing data. This article has supplementary material online. PMID:22523437
Introducing two hyperparameters in Bayesian estimation of wave spectra
DEFF Research Database (Denmark)
Nielsen, Ulrik Dam
2008-01-01
An estimate of the on-site wave spectrum can be obtained from measured ship responses by use of Bayesian modelling, which means that the wave spectrum is found as the optimum solution from a probabilistic viewpoint. The paper describes the introduction of two hyperparameters into Bayesian modelling, so that the prior information included in the modelling is based on two constraints: the wave spectrum must be smooth directional-wise as well as frequency-wise. Traditionally, only one hyperparameter has been used to control the amount of smoothing applied in both the frequency and directional ranges. From numerical simulations of stochastic response measurements, it is shown that the optimal hyperparameters, determined by use of ABIC (a Bayesian Information Criterion), correspond to the best estimate of the wave spectrum, which is not always the case when only one hyperparameter is included.
On the role of exponential splines in image interpolation.
Kirshner, Hagai; Porat, Moshe
2009-10-01
A Sobolev reproducing-kernel Hilbert space approach to image interpolation is introduced. The underlying kernels are exponential functions and are related to stochastic autoregressive image modeling. The corresponding image interpolants can be implemented effectively using compactly supported exponential B-splines. A tight ℓ2 upper bound on the interpolation error is then derived, suggesting that the proposed exponential functions are optimal in this regard. Experimental results indicate that the proposed interpolation approach with properly tuned, signal-dependent weights outperforms currently available polynomial B-spline models of comparable order. Furthermore, a unified approach to image interpolation by ideal and nonideal sampling procedures is derived, suggesting that the proposed exponential kernels may have a significant role in image modeling as well. Our conclusion is that the proposed Sobolev-based approach could be instrumental and a preferred alternative in many interpolation tasks. PMID:19520639
MHD stability analysis using higher order spline functions
International Nuclear Information System (INIS); Energy Technology Data Exchange (ETDEWEB)
Ida, Akihiro [Department of Energy Engineering and Science, Graduate School of Engineering, Nagoya University, Nagoya, Aichi (Japan)]; Todoroki, Jiro; Sanuki, Heiji
1999-04-01
The eigenvalue problem of the linearized magnetohydrodynamic (MHD) equation is formulated by using higher order spline functions as the base functions of a Ritz-Galerkin approximation. When the displacement vector normal to the magnetic surface (in the magnetic surface) is interpolated by B-spline functions of degree p1 (degree p2), which are continuously c1-times (c2-times) differentiable on neighboring finite elements, the sufficient conditions for a good approximation are given by p1 ≥ p2+1, c1 ≤ c2+1 (c1 ≥ 1, p2 ≥ c2 ≥ 0). The influence of the numerical integration upon the convergence of the calculated eigenvalues is discussed. (author)
Smoothing methods in biometry: a historic review
Directory of Open Access Journals (Sweden)
Schimek, Michael G.
2005-06-01
In Germany, nonparametric smoothing methods found their way into statistics around 25 years ago, and with some delay also into biometry. In the early 1980s there was what one might call a boom in theoretical and, soon after, also in computational statistics. The focus was on univariate nonparametric methods for density and curve estimation. For biometry, however, smoothing methods became really interesting in their multivariate version. This 'change of dimensionality' is still raising open methodological questions. No wonder that the simplifying paradigm of additive regression, realized in generalized additive models (GAM), initiated the success story of smoothing techniques starting in the early 1990s. In parallel there have been new algorithms and important software developments, primarily in the statistical programming languages S and R. Recent developments of smoothing techniques can be found in survival analysis, longitudinal analysis, mixed models and functional data analysis, partly integrating Bayesian concepts. Entirely new are smoothing-related statistical methods in bioinformatics. In this article we aim not only at a general historical overview but also try to sketch activities in the German-speaking world. Moreover, the current situation is critically examined. Finally, a large number of relevant references is given.
Frühwirth-Schnatter, Sylvia
1990-01-01
In the paper at hand we apply fuzzy set theory to Bayesian statistics to obtain "Fuzzy Bayesian Inference". In the subsequent sections we will discuss a fuzzy-valued likelihood function, Bayes' theorem for both fuzzy data and fuzzy priors, a fuzzy Bayes estimator, fuzzy predictive densities and distributions, and fuzzy H.P.D. regions. (author's abstract)
Yuan, Ying; MacKinnon, David P.
2009-01-01
In this article, we propose Bayesian analysis of mediation effects. Compared with conventional frequentist mediation analysis, the Bayesian approach has several advantages. First, it allows researchers to incorporate prior information into the mediation analysis, thus potentially improving the efficiency of estimates. Second, under the Bayesian…
Stiffness calculation and application of spline-ball bearing
Gu, Bo-Zhong; Zhou, Yu-Ming; Yang, De-Hua
2006-12-01
Spline-ball bearings are widely adopted in large precision instruments because of their distinctive performance. To support detailed investigation of a full instrument system, practical stiffness formulae for such bearings are introduced based on elastic contact mechanics, and successfully applied to calculate the stiffness of the bearing used in an astronomical telescope. Appropriate treatment of the stiffness of such bearings in finite element analysis is also discussed and illustrated.
Application of integrodifferential splines to solving an interpolation problem
Burova, I. G.; Rodnikova, O. V.
2014-12-01
This paper deals with cases when the values of derivatives of a function are given at grid nodes or the values of integrals of a function over grid intervals are known. Polynomial and trigonometric integrodifferential splines for computing the value of a function from given values of its nodal derivatives and/or from its integrals over grid intervals are constructed. Error estimates are obtained, and numerical results are presented.
Prediction Method for the Surface Damage in Splined Couplings
Cuffaro, Vincenzo
2013-01-01
The primary purpose of my PhD thesis was to develop design criteria and verification procedures for fretting wear that are applicable to the crowned spline couplings of aeroengine power transmission systems. Fretting is a very complex phenomenon influenced by many factors, the most important being the presence or absence of lubrication, the load distribution (contact pressure), and the sliding between the bodies. Therefore, the study of fretting needs a deep knowledge of these three...
Image super-resolution with B-Spline kernels
Baboulaz, Loïc; Dragotti, Pier Luigi
2006-01-01
A novel approach to image super-resolution is described in this paper. By modeling our image acquisition system with a spline sampling kernel, we are able to retrieve from the samples some statistical information about the observed continuous scene before its acquisition (irradiance light-field). This information, called continuous moments, allows one to register exactly a set of low-resolution images and ultimately to generate a super-resolved image. The novelty of the proposed algorithm resides i...
Control theory and splines, applied to signature storage
Enqvist, Per
1994-01-01
In this report the problem we are going to study is the interpolation of a set of points in the plane with the use of control theory. We will discover how different systems generate different kinds of splines, cubic and exponential, and investigate the effect that the different systems have on the tracking problems. Actually we will see that the important parameters will be the two eigenvalues of the control matrix.
USING SPLINE FUNCTIONS FOR THE SUBSTANTIATION OF TAX POLICIES BY LOCAL AUTHORITIES
Directory of Open Access Journals (Sweden)
Otgon Cristian
2011-07-01
The paper aims to approach innovative financial instruments for the management of public resources. Among these innovative tools, polynomial spline functions have been used for budgetary sizing in the substantiation of fiscal and budgetary policies. Using polynomial spline functions involves several steps: establishing the nodes, calculating the specific coefficients of the spline functions, and developing and determining the approximation errors. The paper also extrapolates a series of property tax data using first-order polynomial spline functions. For the spline implementation, two series of data were taken, one referring to property tax as the resultative variable and the other to building tax, yielding a correlation indicator R = 0.95. Moreover, the spline functions are easy to compute and, owing to their small approximation errors, have great predictive power, much better than the ordinary least squares method. The research proceeded in several steps, namely observation, construction of the data series, and processing of the data with spline functions. The data form a daily series gathered from the budget account, referring to building tax and property tax. The added value of this paper lies in the possibility of avoiding deficits by using spline functions as innovative instruments in public finance; the original contribution is the averaging of the splines resulting from the data series. The results lead to the conclusion that polynomial spline functions are recommended for the elaboration of fiscal and budgetary policies, owing to the relatively small errors obtained in the extrapolation of economic processes and phenomena. Future research directions include the study of polynomial spline functions of second and third order.
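A sketch of first-order spline extrapolation in the spirit described above, with invented monthly figures rather than the paper's tax series (SciPy's `UnivariateSpline` with `k=1` extends the last linear piece beyond the data):

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# First-order (k=1) smoothing spline fitted to an invented monthly tax
# series, then evaluated two months past the data; with the default ext=0
# the last linear piece is extended, giving the extrapolated values.
months = np.arange(1.0, 13.0)
building_tax = 100.0 + 4.0 * months + np.array(
    [1.2, -0.8, 0.5, 1.9, -1.1, 0.3, -0.6, 1.4, -1.7, 0.9, 0.2, -0.4])

spl = UnivariateSpline(months, building_tax, k=1, s=20.0)
forecast = spl(np.array([13.0, 14.0]))
print(np.round(forecast, 1))
```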
International Nuclear Information System (INIS)
3D tomography reconstruction has been a profitable alternative in the analysis of the FCC-type riser (Fluid Catalytic Cracking), for appropriately keeping track of the sectional catalyst concentration distribution in the process of oil refining. The method of tomography reconstruction proposed by M. Azzi and colleagues (1991) uses a relatively small number of trajectories (from 3 to 5) and projections (from 5 to 7) of gamma rays, a desirable feature in industrial process tomography. Compared to more popular methods, such as FBP (Filtered Back Projection), which demands a much larger number of gamma-ray projections, the method by Azzi et al. is more appropriate for the industrial process, where physical limitations and cost require more economical arrangements. The use of few projections and trajectories facilitates diagnosis of the dynamical flow process. This article proposes an improvement in the basis functions introduced by Azzi et al., through the use of quadratic B-spline functions. The use of B-spline functions makes possible a smoother surface reconstruction of the density distribution, since the functions are continuous and smooth. This work describes how the modeling can be done. (author)
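The smoothness argument can be seen directly from a quadratic B-spline basis element, which is compactly supported with a continuous first derivative; uniformly shifted copies also sum to one in the interior, which is convenient for density reconstruction:

```python
import numpy as np
from scipy.interpolate import BSpline

# A quadratic (degree-2) B-spline basis element on uniform knots: a smooth,
# compactly supported bump with continuous first derivative.
b2 = BSpline.basis_element(np.array([0.0, 1.0, 2.0, 3.0]))
print("value at the centre of support:", float(b2(1.5)))   # 3/4

# Uniformly shifted copies form a partition of unity in the interior.
xs = np.linspace(2.0, 2.9, 5)
total = sum(BSpline.basis_element(np.arange(i, i + 4.0))(xs) for i in range(3))
print(np.round(total, 6))
```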
Hodograph computation and bound estimation for rational B-spline curves
Institute of Scientific and Technical Information of China (English)
Anonymous
2007-01-01
It is necessary to compute the derivative and estimate the bound of rational B-spline curves in a design system, which has not been studied to date. To improve the functionality of computer aided design (CAD) systems and to enhance the efficiency of various algorithms for rational B-spline curves, the representation of the scaled hodograph and a bound on the derivative magnitude of uniform planar rational B-spline curves are derived by applying the Dir function (which indicates the direction of the Cartesian vector between homogeneous points), discrete B-spline theory, and the formula for translating a product into a summation of B-spline functions. As an application of the above result, an upper bound on the parametric distance between any two points on a uniform planar rational B-spline curve is further presented.
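For the non-rational case, the hodograph is itself a B-spline of one lower degree, which SciPy exposes directly; the rational case treated in the paper adds the quotient-rule terms for the weights. A sketch with an arbitrary cubic:

```python
import numpy as np
from scipy.interpolate import BSpline

# An arbitrary cubic B-spline and its hodograph (derivative), which is a
# quadratic B-spline; checked against a central finite difference.
t = np.array([0, 0, 0, 0, 1, 2, 3, 3, 3, 3], dtype=float)   # clamped knots
c = np.array([0.0, 1.0, 3.0, 2.0, 4.0, 5.0])                # coefficients
spl = BSpline(t, c, 3)
hodo = spl.derivative()

x, h = 1.5, 1e-6
fd = (spl(x + h) - spl(x - h)) / (2.0 * h)
print(float(hodo(x)), float(fd))
```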
Smoothing internal migration age profiles for comparative research
Directory of Open Access Journals (Sweden)
Aude Bernard
2015-05-01
Background: Age patterns are a key dimension for comparing migration between countries and over time. Comparative metrics can be reliably computed only if the data capture the underlying age distribution of migration. Model schedules, the prevailing smoothing method, fit a composite exponential function, but are sensitive to function selection and initial parameter setting. Although non-parametric alternatives exist, their performance is yet to be established. Objective: We compare cubic splines and kernel regressions against model schedules by assessing which method provides an accurate representation of the age profile and performs best on metrics for comparing aggregate age patterns. Methods: We use full population microdata for Chile to perform 1,000 Monte-Carlo simulations for nine sample sizes and two spatial scales. We use residual and graphic analysis to assess model performance on the age and intensity at which migration peaks and the evolution of migration age patterns. Results: Model schedules generate a better fit when (1) the expected distribution of the age profile is known a priori, (2) the pre-determined shape of the model schedule adequately describes the true age distribution, and (3) the component curves and initial parameter values can be correctly set. When any of these conditions is not met, kernel regressions and cubic splines offer more reliable alternatives. Conclusions: Smoothing models should be selected according to research aims, age profile characteristics, and sample size. Kernel regressions and cubic splines enable a precise representation of aggregate migration age profiles for most sample sizes, without requiring parameter setting or imposing a pre-determined distribution, and therefore facilitate objective comparison.
Second-Order Cone Programming for P-Spline Simulation Metamodeling
Xia, Yu; Alizadeh, Farid
2015-01-01
This paper approximates simulation models by B-splines with a penalty on high-order finite differences of the coefficients of adjacent B-splines. The penalty prevents overfitting. The simulation output is assumed to be nonnegative. The nonnegative spline simulation metamodel is cast as a second-order cone programming model, which can be solved efficiently by modern optimization techniques. The method is implemented in MATLAB/GNU Octave.
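An unconstrained least-squares analogue of the P-spline idea can be sketched in a few lines: a cubic B-spline basis plus a penalty on second-order differences of adjacent coefficients (the second-order cone step that enforces nonnegativity is omitted, and the data are invented):

```python
import numpy as np
from scipy.interpolate import BSpline

# P-spline sketch: cubic B-spline basis B and second-difference penalty D;
# solve (B'B + lam * D'D) beta = B'y.
rng = np.random.default_rng(3)
x = np.sort(rng.uniform(0.0, 1.0, 200))
y = np.exp(-((x - 0.5) ** 2) / 0.02) + rng.normal(0.0, 0.05, x.size)

k = 3
t = np.r_[[0.0] * k, np.linspace(0.0, 1.0, 20), [1.0] * k]   # clamped knots
n_coef = len(t) - k - 1
B = np.column_stack([BSpline(t, np.eye(n_coef)[j], k)(x) for j in range(n_coef)])
D = np.diff(np.eye(n_coef), n=2, axis=0)                     # 2nd differences

lam = 1.0                                                    # penalty weight
beta = np.linalg.solve(B.T @ B + lam * D.T @ D, B.T @ y)
fit = B @ beta
print("residual sd:", round(float((y - fit).std()), 3))
```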
Trivariate Polynomial Natural Spline for 3D Scattered Data Hermite Interpolation
Institute of Scientific and Technical Information of China (English)
XU YING-XIANG; GUAN L(U)-TAI; XU WEI-ZHI
2012-01-01
Consider a kind of Hermite interpolation for scattered 3D data by a trivariate polynomial natural spline, such that the objective energy functional (with natural boundary conditions) is minimal. By spline function methods in Hilbert space and the variational theory of splines, the characters of the interpolation solution and how to construct it are studied. One easily finds that the interpolation solution is a trivariate polynomial natural spline. Its expression is simple and the coefficients can be determined by a linear system. Some numerical examples are presented to demonstrate our methods.
Design Evaluation of Wind Turbine Spline Couplings Using an Analytical Model: Preprint
Energy Technology Data Exchange (ETDEWEB)
Guo, Y.; Keller, J.; Wallen, R.; Errichello, R.; Halse, C.; Lambert, S.
2015-02-01
Articulated splines are commonly used in the planetary stage of wind turbine gearboxes for transmitting the driving torque and improving load sharing. Direct measurement of spline loads and performance is extremely challenging because of limited accessibility. This paper presents an analytical model for the analysis of articulated spline coupling designs. For a given torque and shaft misalignment, this analytical model quickly yields insights into relationships between the spline design parameters and resulting loads; bending, contact, and shear stresses; and safety factors considering various heat treatment methods. Comparisons of this analytical model against previously published computational approaches are also presented.
International Nuclear Information System (INIS)
Spline functions have recently come into increasingly wide use in the solution of boundary-value problems of the theory of elasticity of plates and shells. This development stems from the advantages offered by spline approximations compared to other methods. Among the most important advantages are the following: (1) the behavior of the spline in the neighborhood of a point has no effect on the behavior of the spline as a whole; (2) spline interpolation converges well compared to polynomial interpolation; (3) algorithms for spline construction are simple and convenient to use. The use of spline functions to solve linear two-dimensional problems on the stress-strain state of shallow shells and plates that are rectangular in plan has proven their efficiency and made it possible to expand the range of problems that can be solved. The approach proposed in these investigations is based on reducing a linear two-dimensional problem to a unidimensional one by spline approximation in one coordinate direction, and solving the resulting unidimensional problem by the method of discrete orthogonalization in the other coordinate direction. Such an approach makes it possible to account for local and edge effects in the stress state of plates and shells and to obtain reliable solutions with complex boundary conditions. In the present study, we take the above approach, employing spline functions to solve linear problems, and use it to also solve geometrically nonlinear problems of the statics of shallow shells and plates with variable parameters.
TRANSLATION INVARIANT RI-SPLINE WAVELET AND ITS APPLICATION ON DE-NOISING
ZHONG ZHANG; HIROSHI TODA; HISANAGA FUJIWARA; FUJI REN
2006-01-01
Wavelet shrinkage using the DWT has been widely used in de-noising, although the DWT suffers from a translation-variance problem. In this study, we solve this problem by using a translation-invariant DWT. For this purpose, we propose a new complex wavelet, the Real-Imaginary Spline Wavelet (RI-Spline wavelet). We also propose the Coherent Dual-Tree algorithm for the RI-Spline wavelet and extend it to two dimensions. We then apply this translation-invariant RI-Spline wavelet to translation-invariant de-no...
Recovery of shapes: hypermodels and Bayesian learning
International Nuclear Information System (INIS)
We discuss the problem of recovering an image from its blurred and noisy copy with the additional information that the image consists of simple shapes with sharp edges. An iterative algorithm is given, based on the idea of updating the Tikhonov type smoothness penalty on the basis of the previous estimate. This algorithm is discussed in the framework of Bayesian hypermodels and it is shown that the approach can be justified as a sequential iterative scheme for finding the mode of the posterior density. An effective numerical algorithm based on preconditioned Krylov subspace iterations is suggested and demonstrated with a computed example
Granade, Christopher; Cory, D G
2015-01-01
In recent years, Bayesian methods have been proposed as a solution to a wide range of issues in quantum state and process tomography. State-of-the-art Bayesian tomography solutions suffer from three problems: numerical intractability, a lack of informative prior distributions, and an inability to track time-dependent processes. Here, we solve all three problems. First, we use modern statistical methods, as pioneered by Huszár and Houlsby and by Ferrie, to make Bayesian tomography numerically tractable. Our approach allows for practical computation of Bayesian point and region estimators for quantum states and channels. Second, we propose the first informative priors on quantum states and channels. Finally, we develop a method that allows online tracking of time-dependent states and estimates the drift and diffusion processes affecting a state. We provide source code and animated visual examples for our methods.
Bayesian exploratory factor analysis
Gabriella Conti; Sylvia Frühwirth-Schnatter; James Heckman; Rémi Piatek
2014-01-01
This paper develops and applies a Bayesian approach to Exploratory Factor Analysis that improves on ad hoc classical approaches. Our framework relies on dedicated factor models and simultaneously determines the number of factors, the allocation of each measurement to a unique factor, and the corresponding factor loadings. Classical identifi cation criteria are applied and integrated into our Bayesian procedure to generate models that are stable and clearly interpretable. A Monte Carlo study c...
Carbonetto, Peter; Kisynski, Jacek; De Freitas, Nando; Poole, David L
2012-01-01
The Bayesian Logic (BLOG) language was recently developed for defining first-order probability models over worlds with unknown numbers of objects. It handles important problems in AI, including data association and population estimation. This paper extends BLOG by adopting generative processes over function spaces - known as nonparametrics in the Bayesian literature. We introduce syntax for reasoning about arbitrary collections of objects, and their properties, in an intuitive manner. By expl...
Bayesian default probability models
Andrlíková, Petra
2014-01-01
This paper proposes a methodology for default probability estimation for low default portfolios, where the statistical inference may become troublesome. The author suggests using logistic regression models with the Bayesian estimation of parameters. The piecewise logistic regression model and Box-Cox transformation of credit risk score is used to derive the estimates of probability of default, which extends the work by Neagu et al. (2009). The paper shows that the Bayesian models are more acc...
Hardy, David J; Wolff, Matthew A; Xia, Jianlin; Schulten, Klaus; Skeel, Robert D
2016-03-21
The multilevel summation method for calculating electrostatic interactions in molecular dynamics simulations constructs an approximation to a pairwise interaction kernel and its gradient, which can be evaluated at a cost that scales linearly with the number of atoms. The method smoothly splits the kernel into a sum of partial kernels of increasing range and decreasing variability with the longer-range parts interpolated from grids of increasing coarseness. Multilevel summation is especially appropriate in the context of dynamics and minimization, because it can produce continuous gradients. This article explores the use of B-splines to increase the accuracy of the multilevel summation method (for nonperiodic boundaries) without incurring additional computation other than a preprocessing step (whose cost also scales linearly). To obtain accurate results efficiently involves technical difficulties, which are overcome by a novel preprocessing algorithm. Numerical experiments demonstrate that the resulting method offers substantial improvements in accuracy and that its performance is competitive with an implementation of the fast multipole method in general and markedly better for Hamiltonian formulations of molecular dynamics. The improvement is great enough to establish multilevel summation as a serious contender for calculating pairwise interactions in molecular dynamics simulations. In particular, the method appears to be uniquely capable for molecular dynamics in two situations, nonperiodic boundary conditions and massively parallel computation, where the fast Fourier transform employed in the particle-mesh Ewald method falls short. PMID:27004867
A baseline correction algorithm for Raman spectroscopy by adaptive knots B-spline
Wang, Xin; Fan, Xian-guang; Xu, Ying-jie; Wang, Xiu-fen; He, Hao; Zuo, Yong
2015-11-01
The Raman spectroscopy technique is a powerful and non-invasive technique for molecular fingerprint detection which has been widely used in many areas, such as food safety, drug safety, and environmental testing. However, Raman signals are easily corrupted by a fluorescent background, so we present a baseline correction algorithm to suppress that background. In this algorithm, the background of the Raman signal is suppressed by fitting a curve, called the baseline, using a cyclic approximation method. Instead of traditional polynomial fitting, we use the B-spline as the fitting function; its low order and smoothness effectively avoid both under-fitting and over-fitting. In addition, we present an automatic adaptive knot-generation method to replace the traditional uniform knots. The algorithm achieves the desired performance for most Raman spectra with varying baselines without any user input or preprocessing step. In the simulation, three kinds of fluorescent background lines were introduced to test the effectiveness of the proposed method. We show that two real Raman spectra (parathion-methyl and colza oil) can be detected and their baselines corrected by the proposed method.
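The cyclic-approximation idea can be sketched in a few lines. This is a simplified sketch, not the paper's algorithm: it uses SciPy's smoothing spline with a fixed smoothing parameter and the default knot placement rather than the paper's adaptive knots.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def spline_baseline(x, y, n_iter=20, s=1.0):
    """Estimate a slowly varying baseline by repeatedly fitting a
    smoothing spline and clipping the spectrum down to the fit, so
    that Raman peaks stop pulling the baseline upward."""
    z = y.copy()
    base = z
    for _ in range(n_iter):
        base = UnivariateSpline(x, z, s=s)(x)
        z = np.minimum(z, base)       # clip peaks to the current baseline
    return base

# synthetic spectrum: linear fluorescence background plus two narrow peaks
x = np.linspace(0.0, 10.0, 400)
baseline_true = 2.0 + 0.3 * x
peaks = (np.exp(-0.5 * ((x - 3.0) / 0.05) ** 2)
         + np.exp(-0.5 * ((x - 7.0) / 0.05) ** 2))
y = baseline_true + peaks
base = spline_baseline(x, y)
corrected = y - base
```

Because the clipping only ever lowers points, the peaks are progressively excluded from the fit while the smooth background regions anchor the spline.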
International Nuclear Information System (INIS)
To explore the effects of computed tomography (CT) image characteristics and B-spline knot spacing (BKS) on the spatial accuracy of a B-spline deformable image registration (DIR) in the head-and-neck geometry. The effects of image feature content, image contrast, noise, and BKS on the spatial accuracy of a B-spline DIR were studied. Phantom images were created with varying feature content and varying contrast-to-noise ratio (CNR), and deformed using a known smooth B-spline deformation. Subsequently, the deformed images were repeatedly registered with the original images using different BKSs. The quality of the DIR was expressed as the mean residual displacement (MRD) between the known imposed deformation and the result of the B-spline DIR. Finally, for three patients, head-and-neck planning CT scans were deformed with a realistic deformation field derived from a rescan CT of the same patient, resulting in a simulated deformed image and an a-priori known deformation field. A B-spline DIR was then performed between the simulated image and the planning CT at different BKSs. As in the phantom cases, the DIR accuracy was evaluated by means of MRD. In total, 162 phantom registrations were performed with varying CNR and BKSs. MRD values < 1.0 mm were observed with a BKS between 10–20 mm for image contrast ≥ ± 250 HU and noise < ± 200 HU. Decreasing the image feature content resulted in increased MRD values at all BKSs. Using BKS = 15 mm for the three clinical cases resulted in an average MRD < 1.0 mm. For synthetically generated phantoms and three real CT cases, the highest DIR accuracy was obtained for a BKS between 10–20 mm. The accuracy decreased with decreasing image feature content, decreasing image contrast, and higher noise levels. Our results indicate that DIR accuracy in clinical CT images (typical noise levels < ± 100 HU) will not be affected by the amount of image noise.
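The evaluation metric used above, mean residual displacement, is simple to state: the mean Euclidean norm of the per-voxel difference between the known and the recovered deformation fields. A sketch (the array names and toy shapes are assumptions, not from the paper):

```python
import numpy as np

def mean_residual_displacement(dvf_true, dvf_est):
    """Mean Euclidean norm of the difference between a known
    deformation vector field and the one recovered by DIR.
    Both arrays have shape (..., 3): one 3-vector per voxel."""
    residual = dvf_true - dvf_est
    return np.mean(np.linalg.norm(residual, axis=-1))

# toy 4x4x4 voxel grid with 3-component displacement vectors (in mm)
rng = np.random.default_rng(1)
dvf_true = rng.normal(size=(4, 4, 4, 3))
dvf_est = dvf_true + 0.1     # uniform 0.1 mm error on each axis
mrd = mean_residual_displacement(dvf_true, dvf_est)
```

With a constant 0.1 mm error per axis, the MRD is 0.1·√3 ≈ 0.173 mm, which is how the sub-millimetre thresholds in the abstract should be read.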
Influence of smoothing of X-ray spectra on parameters of calibration model
International Nuclear Information System (INIS)
Parameters of the calibration model before and after smoothing of X-ray spectra have been investigated. The calibration model was calculated using multivariate procedure - namely the partial least square regression (PLS). Investigations have been performed on an example of six sets of various standards used for calibration of some instruments based on X-ray fluorescence principle. The smoothing methods were compared: regression splines, Savitzky-Golay and Discrete Fourier Transform. The calculations were performed using a software package MATLAB and some home-made programs. (author)
A Finite Element Analysis of The Rectangle Spline Broach
Directory of Open Access Journals (Sweden)
Feng Wang
2012-12-01
In traditional design, broaching tools may be unable to achieve the required machining precision, or may even suffer partial fracture due to excessive local deformation. In this article, using Pro/E we complete the 3D solid modeling and dimensional parameterization of a rectangular spline broach, then import the 3D model into ANSYS to analyze the whole process of loading and deformation at work. This can effectively improve the machining accuracy and reliability of the broach, shorten the design cycle, and reduce cost.
Biswas, Kingshook
2009-01-01
We use techniques of tube-log Riemann surfaces due to R. Perez-Marco to construct a hedgehog containing smooth $C^{\infty}$ combs. The hedgehog is a common hedgehog for a family of commuting non-linearisable holomorphic maps with a common indifferent fixed point. The comb is made up of smooth curves, and is transversally bi-Hölder regular.
Directory of Open Access Journals (Sweden)
T. von Clarmann
2014-04-01
The difference due to the content of a priori information between a constrained retrieval and the true atmospheric state is usually represented by the so-called smoothing error. In this paper it is shown that the concept of the smoothing error is questionable because it is not compliant with Gaussian error propagation. The reason for this is that the smoothing error does not represent the expected deviation of the retrieval from the true state but the expected deviation of the retrieval from the atmospheric state sampled on an arbitrary grid, which is itself a smoothed representation of the true state. The idea of a sufficiently fine sampling of this reference atmospheric state is untenable because atmospheric variability occurs on all scales, implying that there is no limit beyond which the sampling is fine enough. Even the idealization of infinitesimally fine sampling of the reference state does not help because the smoothing error is applied to quantities which are only defined in a statistical sense, which implies that a finite volume of sufficient spatial extent is needed to meaningfully talk about temperature or concentration. Smoothing differences, however, which play a role when measurements are compared, are still a useful quantity if the involved a priori covariance matrix has been evaluated on the comparison grid rather than resulting from interpolation. This is, because the undefined component of the smoothing error, which is the effect of smoothing implied by the finite grid on which the measurements are compared, cancels out when the difference is calculated.
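For orientation, the conventional definition being criticized can be written down in standard retrieval-theory notation, with averaging kernel $\mathbf{A}$ and a priori covariance $\mathbf{S}_a$ (these are the textbook symbols, not necessarily the paper's):

```latex
% retrieval error split: smoothing term plus noise terms
\hat{x} - x \approx (\mathbf{A} - \mathbf{I})(x - x_a) + \epsilon_{\mathrm{noise}}
% conventional "smoothing error" covariance
\mathbf{S}_s = (\mathbf{A} - \mathbf{I})\,\mathbf{S}_a\,(\mathbf{A} - \mathbf{I})^{\mathsf{T}}
```

The paper's argument is that $\mathbf{S}_a$ in this expression only has meaning on a stated finite grid, which is why smoothing differences evaluated on a comparison grid remain well defined while the smoothing error itself does not.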
Jarrow, Robert A
2014-01-01
This article reviews the forward rate curve smoothing literature. The key contribution of this review is to link the static curve fitting exercise to the dynamic and arbitrage-free models of the term structure of interest rates. As such, this review introduces more economics to an almost exclusively mathematical exercise, and it identifies new areas for research related to forward rate curve smoothing.
Vascular smooth muscle cells (SMCs) originate from multiple types of progenitor cells. In the embryo, the most well-studied SMC progenitor is the cardiac neural crest stem cell. Smooth muscle differentiation in the neural crest lineage is controlled by a combination of cell intrinsic factors, includ...
Spline Linear Regression Used for Evaluating Financial Assets
Directory of Open Access Journals (Sweden)
Liviu GEAMBAŞU
2010-12-01
One of the most important preoccupations of financial market participants was, and still is, determining more precisely the trend of financial asset prices. Many scientific papers have been written and many mathematical and statistical models developed to better determine financial asset price trends. While until recently simple linear models were widely used because they are easy to apply, the financial crisis that hit the world economy starting in 2008 highlighted the need to adapt mathematical models to the variation of the economy. A model that is simple to use yet adapted to the realities of economic life is spline linear regression. This type of regression preserves the continuity of the regression function while splitting the studied data into intervals with homogeneous characteristics. The characteristics of each interval are highlighted, as is the evolution of the market over all intervals, resulting in reduced standard errors. The first objective of the article is a theoretical presentation of spline linear regression, with reference to national and international scientific papers on the subject. The second objective is to apply the theoretical model to data from the Bucharest Stock Exchange.
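A continuous piecewise-linear ("spline linear") regression can be fit by augmenting an ordinary design matrix with hinge terms at the chosen break points. A minimal sketch with one illustrative knot (the data and knot location are made up, not from the article):

```python
import numpy as np

def fit_spline_linear(x, y, knots):
    """Least-squares fit of a continuous piecewise-linear function:
    y ~ b0 + b1*x + sum_k c_k * max(0, x - knot_k).
    Continuity at each knot is automatic with this basis."""
    X = np.column_stack([np.ones_like(x), x] +
                        [np.maximum(0.0, x - t) for t in knots])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef, X

# piecewise-linear "price trend" with a slope change at x = 5
x = np.linspace(0.0, 10.0, 200)
y = np.where(x < 5.0, 1.0 + 2.0 * x, 11.0 - 1.0 * (x - 5.0))
coef, X = fit_spline_linear(x, y, knots=[5.0])
y_hat = X @ coef
```

Each hinge coefficient c_k is exactly the change in slope at knot k, so the fitted trend of each interval can be read off directly.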
Spline Regression Modeling (Case Study: Herpindo Jaya, Ngaliyan Branch)
Directory of Open Access Journals (Sweden)
I MADE BUDIANTARA PUTRA
2015-06-01
Regression analysis is a method of data analysis that describes the relationship between response variables and predictor variables. There are two approaches to estimating the regression function: parametric and nonparametric. The parametric approach is used when the relationship between the predictor and response variables, or the shape of the regression curve, is known. The nonparametric approach is used when the form of the relationship between the response and predictor variables is unknown, or when there is no information about the form of the regression function. The aims of this study are to determine the best spline nonparametric regression model, with optimal knot points, on data relating product quality, price, and advertising to purchasing decisions for Yamaha motorcycles, and to compare it with multiple linear regression based on the coefficient of determination (R2) and mean squared error (MSE). The optimal knots are defined by two knot points. The analysis shows that, for these data, multiple linear regression performs better than spline regression.
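The model comparison described above, R2 and MSE for a spline fit versus a linear fit, can be sketched on synthetic data (one predictor for brevity; the data, the two interior knots, and the spline degree are illustrative assumptions, not the study's):

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

def r2_mse(y, y_hat):
    """Coefficient of determination and mean squared error of a fit."""
    mse = np.mean((y - y_hat) ** 2)
    r2 = 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - np.mean(y)) ** 2)
    return r2, mse

rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 100)
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(100)

# linear fit
b1, b0 = np.polyfit(x, y, 1)
r2_lin, mse_lin = r2_mse(y, b0 + b1 * x)

# spline fit with two interior knots, mirroring the two-knot setup above
spl = LSQUnivariateSpline(x, y, t=[1.0 / 3.0, 2.0 / 3.0], k=3)
r2_spl, mse_spl = r2_mse(y, spl(x))
```

On this deliberately nonlinear data the spline wins on both criteria; the study's point is that on their data the same comparison went the other way, which is exactly what these two numbers are for.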
Woods, Carol M.; Thissen, David
2006-01-01
The purpose of this paper is to introduce a new method for fitting item response theory models with the latent population distribution estimated from the data using splines. A spline-based density estimation system provides a flexible alternative to existing procedures that use a normal distribution, or a different functional form, for the…
B-spline collocation methods for numerical solutions of the Burgers' equation
İdris Dağ; Dursun Irk; Ali Şahin
2005-01-01
Both the time-split and space-split Burgers' equations are solved numerically. The cubic B-spline collocation method is applied to the time-split Burgers' equation, and the quadratic B-spline collocation method is used to obtain a numerical solution of the space-split Burgers' equation. The results of both schemes are compared for some test problems.
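The building block of both schemes is the B-spline basis on a knot sequence. A quick numerical check (using SciPy's splev, not the paper's code) that clamped cubic B-splines form a partition of unity on the interval, which is what makes them convenient collocation trial functions:

```python
import numpy as np
from scipy.interpolate import splev

k = 3                                    # cubic B-splines
interior = np.linspace(0.0, 1.0, 11)     # uniform knots on [0, 1]
# clamp the ends by repeating the boundary knots k extra times
t = np.concatenate([[0.0] * k, interior, [1.0] * k])
n = len(t) - k - 1                       # number of basis functions

x = np.linspace(0.0, 1.0, 200)
# evaluate each basis function B_j as a spline with a single unit coefficient
basis = np.array([splev(x, (t, np.eye(n)[j], k)) for j in range(n)])
total = basis.sum(axis=0)                # partition of unity: equals 1 on [0, 1]
```

A collocation scheme expands the unknown as a combination of these functions and enforces the PDE at the knots, reducing each time step to a banded linear solve.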
Chen, T.; Gong, X.
2011-12-01
In inversion of geodetic data for the distribution of fault slip, minimizing the first- or second-order derivatives of slip across the fault plane is generally employed to smooth the slip of neighboring patches. A smoothing parameter is subjectively selected to determine the relative weight placed on fitting the data versus smoothing the slip distribution. We use the Fully Bayesian Inversion method (Fukuda, 2008) to estimate the slip distribution and the smoothing parameter simultaneously and objectively in a Bayesian framework. The distributed slip, its posterior probability density function, and the smoothing parameter are formulated with Bayes' theorem and sampled with a Markov chain Monte Carlo method. Here we apply this method to coseismic and postseismic displacement data from the 2007 Solomon Islands earthquake and compare its results with those of the generally favored method.
Bayesian MARS for uncertainty quantification in stochastic transport problems
International Nuclear Information System (INIS)
We present a method for estimating solutions to partial differential equations with uncertain parameters using a modification of the Bayesian Multivariate Adaptive Regression Splines (BMARS) emulator. The BMARS algorithm uses Markov chain Monte Carlo (MCMC) to construct a basis function composed of polynomial spline functions, for which derivatives and integrals are straightforward to compute. We use these calculations and a modification of the curve-fitting BMARS algorithm to search for a basis function (response surface) which, in combination with its derivatives/integrals, satisfies a governing differential equation and specified boundary condition. We further show that this fit can be improved by enforcing a conservation or other physics-based constraint. Our results indicate that estimates to solutions of simple first order partial differential equations (without uncertainty) can be efficiently computed with very little regression error. We then extend the method to estimate uncertainties in the solution to a pure absorber transport problem in a medium with uncertain cross-section. We describe and compare two strategies for propagating the uncertain cross-section through the BMARS algorithm; the results from each method are in close comparison with analytic results. We discuss the scalability of the algorithm to parallel architectures and the applicability of the two strategies to larger problems with more degrees of uncertainty. (author)
Recommended practices for spline usage in CAD/CAM systems: CADCAM-007
Energy Technology Data Exchange (ETDEWEB)
Fletcher, S.K.
1984-04-01
Sandia National Laboratories has been assigned Lead Lab responsibility for integrating CAD/CAM activities throughout the DOE Nuclear Weapons Complex (NWC) and automating exchange of product definition. Transfer of splines between CAD/CAM systems presents a special problem due to the use of different spline interpolation schemes in these systems. Automated exchange via IGES (Initial Graphics Exchange Specification, ANSI Y14.26M-1981) shows promise but does not yet provide a usable data path for NWC spline needs. Data exchange today is primarily via hard copy drawings with manual data reentry and spline recomputation. In this environment, spline problems can be minimized by following the recommended practices set forth in this report.
Bayesian least squares deconvolution
Asensio Ramos, A.; Petit, P.
2015-11-01
Aims: We develop a fully Bayesian least squares deconvolution (LSD) that can be applied to the reliable detection of magnetic signals in noise-limited stellar spectropolarimetric observations using multiline techniques. Methods: We consider LSD under the Bayesian framework and we introduce a flexible Gaussian process (GP) prior for the LSD profile. This prior allows the result to automatically adapt to the presence of signal. We exploit several linear algebra identities to accelerate the calculations. The final algorithm can deal with thousands of spectral lines in a few seconds. Results: We demonstrate the reliability of the method with synthetic experiments and we apply it to real spectropolarimetric observations of magnetic stars. We are able to recover the magnetic signals using a small number of spectral lines, together with the uncertainty at each velocity bin. This allows the user to consider if the detected signal is reliable. The code to compute the Bayesian LSD profile is freely available.
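In spirit, the method replaces the maximum-likelihood LSD profile with the posterior mean under a Gaussian-process prior. A minimal numerical sketch with a toy line-mask matrix; the matrix construction, the squared-exponential kernel, and all parameter values are assumptions for illustration, not the paper's model:

```python
import numpy as np

def gp_lsd(M, V, noise_var, length_scale, amp):
    """Posterior-mean LSD profile Z under a Gaussian-process prior:
    V ~ N(M Z, noise_var I),  Z ~ N(0, K)  with squared-exponential K.
    Posterior mean = K M^T (M K M^T + noise_var I)^(-1) V."""
    n_vel = M.shape[1]
    v = np.arange(n_vel, dtype=float)
    K = amp * np.exp(-0.5 * ((v[:, None] - v[None, :]) / length_scale) ** 2)
    C = M @ K @ M.T + noise_var * np.eye(M.shape[0])
    return K @ M.T @ np.linalg.solve(C, V)

# toy multiline setup: many shifted, depth-scaled copies of a common profile
rng = np.random.default_rng(3)
n_lines, n_vel, n_pix = 200, 30, 240
z_true = np.exp(-0.5 * ((np.arange(n_vel) - 15.0) / 3.0) ** 2)  # common profile
M = np.zeros((n_pix, n_vel))          # "line mask" matrix
for _ in range(n_lines):
    pix = rng.integers(0, n_pix - n_vel)
    M[pix:pix + n_vel] += rng.uniform(0.2, 1.0) * np.eye(n_vel)
V = M @ z_true + 0.5 * rng.standard_normal(n_pix)
z_hat = gp_lsd(M, V, noise_var=0.25, length_scale=3.0, amp=1.0)
```

The GP prior is what lets the result "adapt to the presence of signal": where the stacked lines constrain the profile well the data dominate, and elsewhere the prior smooths toward zero.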
Loredo, T J
2004-01-01
I describe a framework for adaptive scientific exploration based on iterating an Observation--Inference--Design cycle that allows adjustment of hypotheses and observing protocols in response to the results of observation on-the-fly, as data are gathered. The framework uses a unified Bayesian methodology for the inference and design stages: Bayesian inference to quantify what we have learned from the available data and predict future data, and Bayesian decision theory to identify which new observations would teach us the most. When the goal of the experiment is simply to make inferences, the framework identifies a computationally efficient iterative ``maximum entropy sampling'' strategy as the optimal strategy in settings where the noise statistics are independent of signal properties. Results of applying the method to two ``toy'' problems with simulated data--measuring the orbit of an extrasolar planet, and locating a hidden one-dimensional object--show the approach can significantly improve observational eff...
Smooth sandwich gravitational waves
Podolsky, J.
1998-01-01
Gravitational waves which are smooth and contain two asymptotically flat regions are constructed from the homogeneous pp-waves vacuum solution. Motion of free test particles is calculated explicitly and the limit to an impulsive wave is also considered.
Pachon, Ricardo; Platte, Rodrigo B.; Trefethen, Lloyd N.
2008-01-01
Algorithms are described that make it possible to manipulate piecewise-smooth functions on real intervals numerically with close to machine precision. Breakpoints are introduced in some such calculations at points determined by numerical rootfinding, and in others by recursive subdivision or automatic edge detection. Functions are represented on each smooth subinterval by Chebyshev series or interpolants. The algorithms are implemented in object-oriented MATLAB in an extension of the chebfun ...
Sebastián MV; Navascués MA
2006-01-01
Fractal methodology provides a general frame for the understanding of real-world phenomena. In particular, the classical methods of real-data interpolation can be generalized by means of fractal techniques. In this paper, we describe a procedure for the construction of smooth fractal functions, with the help of Hermite osculatory polynomials. As a consequence of the process, we generalize any smooth interpolant by means of a family of fractal functions. In particular, the elements of the cla...
Bayesian and frequentist inequality tests
David M. Kaplan; Zhuo, Longhao
2016-01-01
Bayesian and frequentist criteria are fundamentally different, but often posterior and sampling distributions are asymptotically equivalent (and normal). We compare Bayesian and frequentist hypothesis tests of inequality restrictions in such cases. For finite-dimensional parameters, if the null hypothesis is that the parameter vector lies in a certain half-space, then the Bayesian test has (frequentist) size $\alpha$; if the null hypothesis is any other convex subspace, then the Bayesian test...
Bayesian multiple target tracking
Streit, Roy L
2013-01-01
This second edition has undergone substantial revision from the 1999 first edition, recognizing that a lot has changed in the multiple target tracking field. One of the most dramatic changes is in the widespread use of particle filters to implement nonlinear, non-Gaussian Bayesian trackers. This book views multiple target tracking as a Bayesian inference problem. Within this framework it develops the theory of single target tracking, multiple target tracking, and likelihood ratio detection and tracking. In addition to providing a detailed description of a basic particle filter that implements
Prediction of longitudinal dispersion coefficient using multivariate adaptive regression splines
Indian Academy of Sciences (India)
Amir Hamzeh Haghiabi
2016-07-01
In this paper, multivariate adaptive regression splines (MARS) was developed as a novel soft-computing technique for predicting the longitudinal dispersion coefficient (DL) in rivers. An experimental dataset related to DL, as reported in the literature, was collected and used for preparing the MARS model. Results of the MARS model were compared with a multi-layer neural network model and empirical formulas. To identify the most effective parameters on DL, the Gamma test was used. Performance of the MARS model was assessed by calculation of standard error indices. The error indices showed that the MARS model has suitable performance and is more accurate than the multi-layer neural network model and the empirical formulas. Results of the Gamma test and the MARS model showed that flow depth (H) and the ratio of mean velocity to shear velocity (u/u^∗) were the most effective parameters on DL.
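MARS builds its fit from reflected pairs of hinge functions max(0, x−t) and max(0, t−x). A minimal illustration of that basis on a made-up target (this shows the building block only, not the study's fitted dispersion model):

```python
import numpy as np

def hinge_pair(x, t):
    """The reflected pair of MARS basis functions with knot t."""
    return np.maximum(0.0, x - t), np.maximum(0.0, t - x)

# least-squares fit of a one-knot MARS-style model: y ~ b0 + b1*h+ + b2*h-
x = np.linspace(-2.0, 2.0, 100)
y = np.abs(x)                 # |x| is exactly representable with a knot at 0
hp, hm = hinge_pair(x, 0.0)
X = np.column_stack([np.ones_like(x), hp, hm])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
y_hat = X @ coef
```

The full MARS algorithm chooses the knots and variable interactions greedily and then prunes terms; each surviving term is just such a hinge (or a product of hinges).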
From cardinal spline wavelet bases to highly coherent dictionaries
Energy Technology Data Exchange (ETDEWEB)
Andrle, Miroslav; Rebollo-Neira, Laura [Aston University, Birmingham B4 7ET (United Kingdom)
2008-05-02
Wavelet families arise by scaling and translations of a prototype function, called the mother wavelet. The construction of wavelet bases for cardinal spline spaces is generally carried out within the multi-resolution analysis scheme. Thus, the usual way of increasing the dimension of the multi-resolution subspaces is by augmenting the scaling factor. We show here that, when working on a compact interval, the identical effect can be achieved without changing the wavelet scale but reducing the translation parameter. By such a procedure we generate a redundant frame, called a dictionary, spanning the same spaces as a wavelet basis but with wavelets of broader support. We characterize the correlation of the dictionary elements by measuring their 'coherence' and produce examples illustrating the relevance of highly coherent dictionaries to problems of sparse signal representation. (fast track communication)
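The "coherence" measured in the abstract is, in the usual sparse-representation sense, the largest absolute inner product between distinct normalized dictionary atoms. A minimal sketch (the toy dictionary is invented for illustration; it is not the cardinal spline wavelet dictionary of the paper):

```python
import numpy as np

def mutual_coherence(D):
    """Largest absolute inner product between distinct, normalized columns of D."""
    Dn = D / np.linalg.norm(D, axis=0)      # normalize each atom (column)
    G = np.abs(Dn.T @ Dn)                   # Gram matrix of the dictionary
    np.fill_diagonal(G, 0.0)                # ignore self-correlations
    return G.max()

# An orthonormal basis has coherence 0; adding redundant atoms of broader
# support (each overlapping two basis atoms) makes the dictionary coherent.
basis = np.eye(4)
extra = np.array([[1, 0], [1, 1], [0, 1], [0, 0]], dtype=float)
dictionary = np.hstack([basis, extra])

print(mutual_coherence(basis))       # 0.0
print(mutual_coherence(dictionary))  # 1/sqrt(2), from the overlapping atoms
```

High coherence is what the paper exploits: redundant frames of broad-support atoms can represent some signals far more sparsely than an orthonormal basis.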
Adaptive image coding based on cubic-spline interpolation
Jiang, Jian-Xing; Hong, Shao-Hua; Lin, Tsung-Ching; Wang, Lin; Truong, Trieu-Kien
2014-09-01
It has been investigated that at low bit rates, downsampling prior to coding and upsampling after decoding can achieve better compression performance than standard coding algorithms, e.g., JPEG and H.264/AVC. However, at high bit rates, the sampling-based schemes generate more distortion. Additionally, the maximum bit rate for the sampling-based scheme to outperform the standard algorithm is image-dependent. In this paper, a practical adaptive image coding algorithm based on cubic-spline interpolation (CSI) is proposed. The proposed algorithm adaptively selects the image coding method, from CSI-based modified JPEG or standard JPEG, under a given target bit rate, utilizing the so-called ρ-domain analysis. The experimental results indicate that, compared with standard JPEG, the proposed algorithm shows better performance at low bit rates and maintains the same performance at high bit rates.
Perbaikan Metode Penghitungan Debit Sungai Menggunakan Cubic Spline Interpolation
Directory of Open Access Journals (Sweden)
Budi I. Setiawan
2007-09-01
Full Text Available This paper presents an improved method for measuring river discharge using cubic spline interpolation. The spline function is used to describe, as a continuous curve, the river cross-section profile built from measurements of distance and depth. With this new method, the cross-sectional area and wetted perimeter of the river are computed more easily, quickly and accurately. Likewise, the inverse function is obtained using the Newton-Raphson method, which simplifies computing the area and perimeter when the river stage is known. The new method can compute river discharge directly using the Manning formula, and produces a rating curve. The paper presents an example of discharge measurement on the Rudeng river in Aceh. The river is about 120 m wide and 7 m deep; at the time of measurement its discharge was 41.3 m³/s, and its rating curve follows the formula Q = 0.1649 H^2.884, where Q is discharge (m³/s) and H is the water stage above the river bed (m).
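The fitted rating curve and the Newton-Raphson inversion mentioned in the abstract can be sketched as follows. Only the published constants Q = 0.1649 H^2.884 and the measured discharge 41.3 m³/s come from the abstract; the solver details (starting point, tolerance) are assumptions:

```python
# Rating curve for the Rudeng river, as quoted in the abstract:
# Q = a * H^b with a = 0.1649, b = 2.884.
A, B = 0.1649, 2.884

def discharge(h):
    """Rating curve: discharge (m^3/s) for stage h (m above the river bed)."""
    return A * h ** B

def stage(q, h0=1.0, tol=1e-10, max_iter=100):
    """Invert the rating curve with Newton-Raphson: find h with discharge(h) = q."""
    h = h0
    for _ in range(max_iter):
        f = discharge(h) - q
        if abs(f) < tol:
            break
        dfdh = A * B * h ** (B - 1.0)   # derivative of the rating curve
        h -= f / dfdh
    return h

h = stage(41.3)          # stage for the measured discharge in the abstract
print(round(h, 3))       # roughly 6.8 m, consistent with the ~7 m deep channel
print(discharge(h))      # round-trip check: recovers ~41.3
```

Since the power-law curve is convex and strictly increasing for H > 0, Newton's method converges reliably here, which is why the inverse is practical for stage-to-discharge lookups.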
Bayesian Geostatistical Design
DEFF Research Database (Denmark)
Diggle, Peter; Lophaven, Søren Nymand
2006-01-01
locations to, or deletion of locations from, an existing design, and prospective design, which consists of choosing positions for a new set of sampling locations. We propose a Bayesian design criterion which focuses on the goal of efficient spatial prediction whilst allowing for the fact that model...
Czech Academy of Sciences Publication Activity Database
Krejsa, Jiří; Věchet, S.
Bratislava: Slovak University of Technology in Bratislava, 2010, s. 217-222. ISBN 978-80-227-3353-3. [Robotics in Education . Bratislava (SK), 16.09.2010-17.09.2010] Institutional research plan: CEZ:AV0Z20760514 Keywords : mobile robot localization * bearing only beacons * Bayesian filters Subject RIV: JD - Computer Applications, Robotics
DEFF Research Database (Denmark)
Antoniou, Constantinos; Harrison, Glenn W.; Lau, Morten I.;
2015-01-01
A large literature suggests that many individuals do not apply Bayes’ Rule when making decisions that depend on them correctly pooling prior information and sample data. We replicate and extend a classic experimental study of Bayesian updating from psychology, employing the methods of experimenta...
Bayesian Independent Component Analysis
DEFF Research Database (Denmark)
Winther, Ole; Petersen, Kaare Brandt
2007-01-01
In this paper we present an empirical Bayesian framework for independent component analysis. The framework provides estimates of the sources, the mixing matrix and the noise parameters, and is flexible with respect to choice of source prior and the number of sources and sensors. Inside the engine...
Noncausal Bayesian Vector Autoregression
DEFF Research Database (Denmark)
Lanne, Markku; Luoto, Jani
We propose a Bayesian inferential procedure for the noncausal vector autoregressive (VAR) model that is capable of capturing nonlinearities and incorporating effects of missing variables. In particular, we devise a fast and reliable posterior simulator that yields the predictive distribution as a...
Loredo, Thomas J.
2004-04-01
I describe a framework for adaptive scientific exploration based on iterating an Observation-Inference-Design cycle that allows adjustment of hypotheses and observing protocols in response to the results of observation on-the-fly, as data are gathered. The framework uses a unified Bayesian methodology for the inference and design stages: Bayesian inference to quantify what we have learned from the available data and predict future data, and Bayesian decision theory to identify which new observations would teach us the most. When the goal of the experiment is simply to make inferences, the framework identifies a computationally efficient iterative "maximum entropy sampling" strategy as the optimal strategy in settings where the noise statistics are independent of signal properties. Results of applying the method to two "toy" problems with simulated data, measuring the orbit of an extrasolar planet and locating a hidden one-dimensional object, show the approach can significantly improve observational efficiency in settings that have well-defined nonlinear models. I conclude with a list of open issues that must be addressed to make Bayesian adaptive exploration a practical and reliable tool for optimizing scientific exploration.
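For a Gaussian linear model with signal-independent noise, maximum entropy sampling reduces to picking the candidate observation with the largest predictive variance (hence largest predictive entropy). A hedged sketch on an invented one-dimensional design problem, not the paper's exoplanet or object-location models:

```python
import numpy as np

def next_design_point(X, candidates, noise_var=0.1, prior_var=10.0):
    """Pick the candidate x maximizing the predictive variance of a Bayesian
    linear model y = w . phi(x) + noise, with features phi(x) = (1, x)."""
    def phi(x):
        return np.array([1.0, x])
    # Posterior precision of w under a zero-mean Gaussian prior.
    A = np.eye(2) / prior_var
    for x in X:
        f = phi(x)
        A += np.outer(f, f) / noise_var
    S = np.linalg.inv(A)                   # posterior covariance of w
    # Predictive variance at each candidate; entropy is monotone in it.
    variances = [phi(x) @ S @ phi(x) + noise_var for x in candidates]
    return candidates[int(np.argmax(variances))]

# Having observed only points near x = 0, the most informative next
# observation is the candidate farthest from the existing design.
observed = [0.0, 0.1, -0.1]
print(next_design_point(observed, candidates=[0.0, 0.5, 1.0, 2.0]))  # 2.0
```

This is the linear-Gaussian special case; the paper's point is that the same variance-seeking rule falls out of full Bayesian decision theory whenever noise does not depend on the signal.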
Bayesian logistic regression analysis
Van Erp, H.R.N.; Van Gelder, P.H.A.J.M.
2012-01-01
In this paper we present a Bayesian logistic regression analysis. It is found that if one wishes to derive the posterior distribution of the probability of some event, then, together with the traditional Bayes Theorem and the integrating out of nuisance parameters, the Jacobian transformation is an
Meshing Force of Misaligned Spline Coupling and the Influence on Rotor System
Directory of Open Access Journals (Sweden)
Guang Zhao
2008-01-01
Full Text Available The meshing force of a misaligned spline coupling is derived, the dynamic equation of the rotor-spline coupling system is established based on finite element analysis, and the influence of the meshing force on the rotor-spline coupling system is simulated by a numerical integration method. According to the theoretical analysis, the meshing force of a spline coupling is related to the coupling parameters, misalignment, transmitted torque, static misalignment, dynamic vibration displacement, and so on. The meshing force increases nonlinearly with increasing spline thickness and static misalignment, or with decreasing alignment meshing distance (AMD). The stiffness of the coupling depends on the dynamic vibration displacement and static misalignment, and is not a constant. The dynamic behaviour of the rotor-spline coupling system reveals the following: the 1X rotating-speed component is the main response frequency of the system when there is no misalignment, while a 2X rotating-speed component appears when misalignment is present. Moreover, as misalignment increases, the vibration of the system becomes intricate: the shaft orbit departs from the origin, and the magnitudes of all frequency components increase. The research results can provide important criteria for both the optimization design of spline couplings and the troubleshooting of rotor systems.
Investigation of spectra unfolded for a filtered x-ray diode array with improved smoothness
International Nuclear Information System (INIS)
An unfolding algorithm using parabolic B-splines to smoothly reconstruct the soft x-ray spectra from the measurements of a filtered x-ray diode array is proposed. This array has been fabricated for the study of the soft x ray emitted by Z-pinch plasma. Unfolding results show that for the simulated noise-free blackbody spectra with temperature ranging from 20 to 250 eV, both the spectra and the total power are accurately recovered. Typical experimental waveforms along with the unfolded spectra and total power of x rays are presented. Possible defects due to the adoption of parabolic B-splines instead of conventionally used histograms are discussed.
Flutter Instability Speeds of Guided Splined Disks: An Experimental and Analytical Investigation
Ahmad Mohammadpanah; Hutton, Stanley G.
2015-01-01
“Guided splined disks” are defined as flat thin disks in which the inner radius of the disk is splined and matches a splined arbor that provides the driving torque for rotating the disk. Lateral constraint for the disk is provided by space fixed guide pads. Experimental lateral displacement of run-up tests of such a system is presented, and the flutter instability zones are identified. The results indicate that flutter instability occurs at speeds when a backward travelling wave of a mode mee...
Quintic nonpolynomial spline solutions for fourth order two-point boundary value problem
Ramadan, M. A.; Lashien, I. F.; Zahra, W. K.
2009-04-01
In this paper, we develop quintic nonpolynomial spline methods for the numerical solution of fourth order two-point boundary value problems. Using this spline function a few consistency relations are derived for computing approximations to the solution of the problem. The present approach gives better approximations and generalizes all the existing polynomial spline methods up to order four. This approach has less computational cost. Convergence analysis of these methods is discussed. Two numerical examples are included to illustrate the practical usefulness of our methods.
Using spline test-day model for estimating the genetic parameters for cows milk yield
Rodica Stefania Pelmus; Mircea Catalin Rotar; Horia Grosu; Elena Ghita; Cristina Lazar
2016-01-01
Genetic parameters of Montbeliarde cows were estimated for test-day milk yield with a random regression spline model. The spline model has been considered a good alternative to Legendre polynomials due to the direct interpretation of its parameters. With this model the lactation curve is divided into sections by knots. The milk yield between any two knots is assumed to change linearly. The random regression was fitted with linear splines with five knots: 7, 54, 111, 246, 302. The herd-test-day is...
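A linear spline with fixed knots corresponds to a "hat function" basis: a record at day t is a weighted average of the effects at the two knots bracketing t, which is what makes the parameters directly interpretable. A small sketch using the five knots quoted in the abstract (the code is illustrative, not the genetic evaluation model itself):

```python
def hat_basis(t, knots):
    """Linear-spline (hat function) basis values at day t for the given knots.
    Between any two knots the fitted curve changes linearly, as in the model."""
    vals = [0.0] * len(knots)
    if t <= knots[0]:
        vals[0] = 1.0
    elif t >= knots[-1]:
        vals[-1] = 1.0
    else:
        for i in range(len(knots) - 1):
            a, b = knots[i], knots[i + 1]
            if a <= t <= b:
                w = (t - a) / (b - a)   # linear interpolation weight
                vals[i] = 1.0 - w
                vals[i + 1] = w
                break
    return vals

knots = [7, 54, 111, 246, 302]   # the five knots from the abstract
print(hat_basis(54, knots))      # exactly at a knot: [0, 1, 0, 0, 0]
print(hat_basis(100, knots))     # a blend of the knot-54 and knot-111 effects
```

Inside the knot range the weights always sum to one, so each knot coefficient is read directly as the predicted yield at that day of lactation.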
Divisibility, Smoothness and Cryptographic Applications
Naccache, David; Shparlinski, Igor E.
2008-01-01
This paper deals with products of moderate-size primes, familiarly known as smooth numbers. Smooth numbers play a crucial role in information theory, signal processing and cryptography. We present various properties of smooth numbers relating to their enumeration, distribution and occurrence in various integer sequences. We then turn our attention to cryptographic applications in which smooth numbers play a pivotal role.
Probability and Bayesian statistics
1987-01-01
This book contains selected and refereed contributions to the "International Symposium on Probability and Bayesian Statistics" which was organized to celebrate the 80th birthday of Professor Bruno de Finetti at his birthplace Innsbruck in Austria. Since Professor de Finetti died in 1985 the symposium was dedicated to the memory of Bruno de Finetti and took place at Igls near Innsbruck from 23 to 26 September 1986. Some of the papers are published especially because of their relationship to Bruno de Finetti's scientific work. The evolution of stochastics shows the growing importance of probability as a coherent assessment of numerical values as degrees of belief in certain events. This is the basis for Bayesian inference in the sense of modern statistics. The contributions in this volume cover a broad spectrum ranging from foundations of probability across psychological aspects of formulating subjective probability statements, abstract measure theoretical considerations, contributions to theoretical statistics an...
Bayesian Magic in Asteroseismology
Kallinger, T.
2015-09-01
Only a few years ago asteroseismic observations were so rare that scientists had plenty of time to work on individual data sets. They could tune their algorithms in any possible way to squeeze out the last bit of information. Nowadays this is impossible. With missions like MOST, CoRoT, and Kepler we basically drown in new data every day. To handle this in a sufficient way statistical methods become more and more important. This is why Bayesian techniques started their triumph march across asteroseismology. I will go with you on a journey through Bayesian Magic Land, that brings us to the sea of granulation background, the forest of peakbagging, and the stony alley of model comparison.
Bayesian Nonparametric Graph Clustering
Banerjee, Sayantan; Akbani, Rehan; Baladandayuthapani, Veerabhadran
2015-01-01
We present clustering methods for multivariate data exploiting the underlying geometry of the graphical structure between variables. As opposed to standard approaches that assume known graph structures, we first estimate the edge structure of the unknown graph using Bayesian neighborhood selection approaches, wherein we account for the uncertainty of graphical structure learning through model-averaged estimates of the suitable parameters. Subsequently, we develop a nonparametric graph cluster...
Approximate Bayesian recursive estimation
Czech Academy of Sciences Publication Activity Database
Kárný, Miroslav
2014-01-01
Roč. 285, č. 1 (2014), s. 100-111. ISSN 0020-0255 R&D Projects: GA ČR GA13-13502S Institutional support: RVO:67985556 Keywords : Approximate parameter estimation * Bayesian recursive estimation * Kullback–Leibler divergence * Forgetting Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 4.038, year: 2014 http://library.utia.cas.cz/separaty/2014/AS/karny-0425539.pdf
Bayesian Benchmark Dose Analysis
Fang, Qijun; Piegorsch, Walter W.; Barnes, Katherine Y.
2014-01-01
An important objective in environmental risk assessment is estimation of minimum exposure levels, called Benchmark Doses (BMDs) that induce a pre-specified Benchmark Response (BMR) in a target population. Established inferential approaches for BMD analysis typically involve one-sided, frequentist confidence limits, leading in practice to what are called Benchmark Dose Lower Limits (BMDLs). Appeal to Bayesian modeling and credible limits for building BMDLs is far less developed, however. Indee...
Bayesian Generalized Rating Curves
Helgi Sigurðarson 1985
2014-01-01
A rating curve is a curve or a model that describes the relationship between water elevation, or stage, and discharge in an observation site in a river. The rating curve is fit from paired observations of stage and discharge. The rating curve then predicts discharge given observations of stage and this methodology is applied as stage is substantially easier to directly observe than discharge. In this thesis a statistical rating curve model is proposed working within the framework of Bayesian...
Heteroscedastic Treed Bayesian Optimisation
Assael, John-Alexander M.; Wang, Ziyu; Shahriari, Bobak; De Freitas, Nando
2014-01-01
Optimising black-box functions is important in many disciplines, such as tuning machine learning models, robotics, finance and mining exploration. Bayesian optimisation is a state-of-the-art technique for the global optimisation of black-box functions which are expensive to evaluate. At the core of this approach is a Gaussian process prior that captures our belief about the distribution over functions. However, in many cases a single Gaussian process is not flexible enough to capture non-stat...
Efficient Bayesian Phase Estimation
Wiebe, Nathan; Granade, Chris
2016-07-01
We introduce a new method called rejection filtering that we use to perform adaptive Bayesian phase estimation. Our approach has several advantages: it is classically efficient, easy to implement, achieves Heisenberg limited scaling, resists depolarizing noise, tracks time-dependent eigenstates, recovers from failures, and can be run on a field programmable gate array. It also outperforms existing iterative phase estimation algorithms such as Kitaev's method.
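Rejection filtering itself is not reproduced here; as a hedged stand-in, the sketch below performs the same inference task — adaptive Bayesian phase estimation with the standard likelihood P(0 | φ; θ, M) = cos²(M(φ − θ)/2) — by brute-force Bayes updates on a grid of candidate phases (the experiment settings, grid size, and constants are invented for illustration):

```python
import math
import random

random.seed(1)
TRUE_PHASE = 1.234  # unknown eigenphase to be estimated (toy value)

def likelihood(outcome, phi, theta, m):
    """Phase-estimation likelihood: P(0 | phi; theta, M) = cos^2(M(phi-theta)/2)."""
    p0 = math.cos(m * (phi - theta) / 2.0) ** 2
    return p0 if outcome == 0 else 1.0 - p0

# Uniform prior over a grid of candidate phases in [0, 2*pi).
grid = [2.0 * math.pi * k / 2000 for k in range(2000)]
post = [1.0 / len(grid)] * len(grid)

for _ in range(200):
    theta = random.uniform(0.0, 2.0 * math.pi)   # experiment settings
    m = random.choice([1, 2, 4, 8])
    p0 = likelihood(0, TRUE_PHASE, theta, m)
    outcome = 0 if random.random() < p0 else 1   # simulate one measurement
    # Bayes update: multiply by the likelihood, then renormalize.
    post = [w * likelihood(outcome, phi, theta, m) for w, phi in zip(post, grid)]
    s = sum(post)
    post = [w / s for w in post]

estimate = max(zip(post, grid))[1]               # MAP estimate on the grid
print(estimate)                                  # close to TRUE_PHASE
```

The grid update costs O(grid size) per measurement; the paper's contribution is replacing this with a constant-memory sampling filter that still achieves Heisenberg-limited scaling.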
Brody, Samuel; Lapata, Mirella
2009-01-01
Sense induction seeks to automatically identify word senses directly from a corpus. A key assumption underlying previous work is that the context surrounding an ambiguous word is indicative of its meaning. Sense induction is thus typically viewed as an unsupervised clustering problem where the aim is to partition a word’s contexts into different classes, each representing a word sense. Our work places sense induction in a Bayesian context by modeling the contexts of the ambiguous word as samp...
Bayesian Neural Word Embedding
Barkan, Oren
2016-01-01
Recently, several works in the domain of natural language processing presented successful methods for word embedding. Among them, the Skip-gram (SG) with negative sampling, known also as Word2Vec, advanced the state-of-the-art of various linguistics tasks. In this paper, we propose a scalable Bayesian neural word embedding algorithm that can be beneficial to general item similarity tasks as well. The algorithm relies on a Variational Bayes solution for the SG objective and a detailed step by ...
Back, Aurore; Sonnendrücker, Eric
2013-01-01
The notion of B-spline based discrete differential forms is recalled and along with a Finite Element Hodge operator, it is used to design new numerical methods for solving the Vlasov-Poisson equations.
Nonlinear Spline Kernel-based Partial Least Squares Regression Method and Its Application
Institute of Scientific and Technical Information of China (English)
JIA Jin-ming; WEN Xiang-jun
2008-01-01
Inspired by the traditional Wold's nonlinear PLS algorithm, which comprises the NIPALS approach and a spline inner function model, a novel nonlinear partial least squares algorithm based on a spline kernel (named SK-PLS) is proposed for nonlinear modeling in the presence of multicollinearity. Based on the inner-product kernel spanned by the spline basis functions with an infinite number of nodes, this method first maps the input data into a high-dimensional feature space, then calculates a linear PLS model with a reformed NIPALS procedure in the feature space, and consequently gives a unified framework for traditional "kernel" PLS algorithms. The linear PLS in the feature space corresponds to a nonlinear PLS in the original input (primal) space. The good approximating property of the spline kernel function enhances the generalization ability of the novel model, and two numerical experiments are given to illustrate the feasibility of the proposed method.
Wiegerinck, Wim; Schoenaker, Christiaan; Duane, Gregory
2016-04-01
Recently, methods for model fusion by dynamically combining model components in an interactive ensemble have been proposed. In these proposals, fusion parameters have to be learned from data. One can view these systems as parametrized dynamical systems. We address the question of learnability of dynamical systems with respect to both short term (vector field) and long term (attractor) behavior. In particular we are interested in learning in the imperfect model class setting, in which the ground truth has a higher complexity than the models, e.g. due to unresolved scales. We take a Bayesian point of view and we define a joint log-likelihood that consists of two terms, one is the vector field error and the other is the attractor error, for which we take the L1 distance between the stationary distributions of the model and the assumed ground truth. In the context of linear models (like so-called weighted supermodels), and assuming a Gaussian error model in the vector fields, vector field learning leads to a tractable Gaussian solution. This solution can then be used as a prior for the next step, Bayesian attractor learning, in which the attractor error is used as a log-likelihood term. Bayesian attractor learning is implemented by elliptical slice sampling, a sampling method for systems with a Gaussian prior and a non Gaussian likelihood. Simulations with a partially observed driven Lorenz 63 system illustrate the approach.
Bayesian theory and applications
Dellaportas, Petros; Polson, Nicholas G; Stephens, David A
2013-01-01
The development of hierarchical models and Markov chain Monte Carlo (MCMC) techniques forms one of the most profound advances in Bayesian analysis since the 1970s and provides the basis for advances in virtually all areas of applied and theoretical Bayesian statistics. This volume guides the reader along a statistical journey that begins with the basic structure of Bayesian theory, and then provides details on most of the past and present advances in this field. The book has a unique format. There is an explanatory chapter devoted to each conceptual advance followed by journal-style chapters that provide applications or further advances on the concept. Thus, the volume is both a textbook and a compendium of papers covering a vast range of topics. It is appropriate for a well-informed novice interested in understanding the basic approach, methods and recent applications. Because of its advanced chapters and recent work, it is also appropriate for a more mature reader interested in recent applications and devel...
Numerical Method Using Cubic Trigonometric B-Spline Technique for Nonclassical Diffusion Problems
Muhammad Abbas; Ahmad Abd. Majid; Ahmad Izani Md Ismail; Abdur Rashid
2014-01-01
A new two-time level implicit technique based on cubic trigonometric B-spline is proposed for the approximate solution of a nonclassical diffusion problem with nonlocal boundary constraints. The standard finite difference approach is applied to discretize the time derivative while cubic trigonometric B-spline is utilized as an interpolating function in the space dimension. The technique is shown to be unconditionally stable using the von Neumann method. Several numerical examples are discusse...
Adaptive B-spline volume representation of measured BRDF data for photorealistic rendering
Hyungjun Park; Joo-Haeng Lee
2015-01-01
Measured bidirectional reflectance distribution function (BRDF) data have been used to represent complex interaction between lights and surface materials for photorealistic rendering. However, their massive size makes it hard to adopt them in practical rendering applications. In this paper, we propose an adaptive method for B-spline volume representation of measured BRDF data. It basically performs approximate B-spline volume lofting, which decomposes the problem into three sub-problems of mu...
B-SPLINE-BASED SVM MODEL AND ITS APPLICATIONS TO OIL WATER-FLOODED STATUS IDENTIFICATION
Institute of Scientific and Technical Information of China (English)
Shang Fuhua; Zhao Tiejun; Yi Xiongying
2007-01-01
A method of B-spline transform for signal feature extraction is developed. With the B-spline, the log-signal space is mapped into a vector space. An efficient algorithm based on a Support Vector Machine (SVM) to automatically identify the water-flooded status of an oil-saturated stratum is described. The experiments show that this algorithm can improve identification and generalization performance in the case of a limited set of samples.
Unbounded Bayesian Optimization via Regularization
Shahriari, Bobak; Bouchard-Côté, Alexandre; De Freitas, Nando
2015-01-01
Bayesian optimization has recently emerged as a popular and efficient tool for global optimization and hyperparameter tuning. Currently, the established Bayesian optimization practice requires a user-defined bounding box which is assumed to contain the optimizer. However, when little is known about the probed objective function, it can be difficult to prescribe such bounds. In this work we modify the standard Bayesian optimization framework in a principled way to allow automatic resizing of t...
Bayesian optimization for materials design
Frazier, Peter I.; Wang, Jialei
2015-01-01
We introduce Bayesian optimization, a technique developed for optimizing time-consuming engineering simulations and for fitting machine learning models on large datasets. Bayesian optimization guides the choice of experiments during materials design and discovery to find good material designs in as few experiments as possible. We focus on the case when materials designs are parameterized by a low-dimensional vector. Bayesian optimization is built on a statistical technique called Gaussian pro...
Micropolar Fluids Using B-spline Divergence Conforming Spaces
Sarmiento, Adel
2014-06-06
We discretized the two-dimensional linear momentum, microrotation, energy and mass conservation equations from micropolar fluids theory with the finite element method, creating divergence-conforming spaces based on B-spline basis functions to obtain pointwise divergence-free solutions [8]. Weak boundary conditions were imposed using Nitsche's method for tangential conditions, while normal conditions were imposed strongly. Once exact mass conservation was provided by the divergence-free formulation, we focused on evaluating the differences between micropolar fluids and conventional fluids, to show the advantages of using the micropolar fluid model to capture the features of complex fluids. Square and arc heat-driven cavities were solved as test cases. A variation of the parameters of the model, along with the variation of the Rayleigh number, was performed for a better understanding of the system. The divergence-free formulation was used to guarantee an accurate solution of the flow. This formulation was implemented using the framework PetIGA as a basis, using its parallel structures to achieve high scalability. The results of the square heat-driven cavity test case are in good agreement with those reported earlier.
Generalizing smooth transition autoregressions
DEFF Research Database (Denmark)
Chini, Emilio Zanetti
We introduce a variant of the smooth transition autoregression, the GSTAR model, capable of parametrizing the asymmetry in the tails of the transition equation by using a particular generalization of the logistic function. A General-to-Specific modelling strategy is discussed in detail... forecasting experiment to evaluate its point and density forecasting performances. In all the cases, the dynamic asymmetry in the cycle is efficiently captured by the new model. The GSTAR beats AR and STAR competitors in point forecasting, while this superiority becomes less evident in density forecasting...
Seasonal smooth transition autoregression
Franses, Philip Hans; Bruin, Paul; Dijk, Dick van
2000-01-01
textabstractIn this paper we put forward a new time series model, which describes nonlinearity and seasonality simultaneously. We discuss its representation, estimation of the parameters and inference. This seasonal STAR (SEASTAR) model is examined for its practical usefulness by applying it to 18 quarterly industrial production series. The data are tested for smooth-transition nonlinearity and for time-varying seasonality. We find that the model fits the data well for 14 of the 18 series. We...
Revealed smooth nontransitive preferences
DEFF Research Database (Denmark)
Keiding, Hans; Tvede, Mich
2013-01-01
In the present paper, we are concerned with the behavioural consequences of consumers having nontransitive preference relations. Data sets consist of finitely many observations of price vectors and consumption bundles. A preference relation rationalizes a data set provided that for every observed ... many observations of price vectors, lists of individual incomes and aggregate demands. We apply our main result to characterize market data sets consistent with equilibrium behaviour of pure-exchange economies with smooth nontransitive consumers.
Indian Academy of Sciences (India)
Benedictus Margaux
2015-05-01
Let be a scheme. Assume that we are given an action of the one dimensional split torus $\\mathbb{G}_{m,S}$ on a smooth affine -scheme $\\mathfrak{X}$. We consider the limit (also called attractor) subfunctor $\\mathfrak{X}_{}$ consisting of points whose orbit under the given action `admits a limit at 0’. We show that $\\mathfrak{X}_{}$ is representable by a smooth closed subscheme of $\\mathfrak{X}$. This result generalizes a theorem of Conrad et al. (Pseudo-reductive groups (2010) Cambridge Univ. Press) where the case when $\\mathfrak{X}$ is an affine smooth group and $\\mathbb{G}_{m,S}$ acts as a group automorphisms of $\\mathfrak{X}$ is considered. It also occurs as a special case of a recent result by Drinfeld on the action of $\\mathbb{G}_{m,S}$ on algebraic spaces (Proposition 1.4.20 of Drinfeld V, On algebraic spaces with an action of $\\mathfrak{G}_{m}$, preprint 2013) in case is of finite type over a field.
Bayesian nonparametric data analysis
Müller, Peter; Jara, Alejandro; Hanson, Tim
2015-01-01
This book reviews nonparametric Bayesian methods and models that have proven useful in the context of data analysis. Rather than providing an encyclopedic review of probability models, the book’s structure follows a data analysis perspective. As such, the chapters are organized by traditional data analysis problems. In selecting specific nonparametric models, simpler and more traditional models are favored over specialized ones. The discussed methods are illustrated with a wealth of examples, including applications ranging from stylized examples to case studies from recent literature. The book also includes an extensive discussion of computational methods and details on their implementation. R code for many examples is included in on-line software pages.
Decentralized Distributed Bayesian Estimation
Czech Academy of Sciences Publication Activity Database
Dedecius, Kamil; Sečkárová, Vladimíra
Praha: ÚTIA AVČR, v.v.i, 2011 - (Janžura, M.; Ivánek, J.). s. 16-16 [7th International Workshop on Data–Algorithms–Decision Making. 27.11.2011-29.11.2011, Mariánská] R&D Projects: GA ČR 102/08/0567; GA ČR GA102/08/0567 Institutional research plan: CEZ:AV0Z10750506 Keywords : estimation * distributed estimation * model Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2011/AS/dedecius-decentralized distributed bayesian estimation.pdf
Congdon, Peter
2014-01-01
This book provides an accessible approach to Bayesian computing and data analysis, with an emphasis on the interpretation of real data sets. Following in the tradition of the successful first edition, this book aims to make a wide range of statistical modeling applications accessible using tested code that can be readily adapted to the reader's own applications. The second edition has been thoroughly reworked and updated to take account of advances in the field. A new set of worked examples is included. The novel aspect of the first edition was the coverage of statistical modeling using WinBU
Computationally efficient Bayesian tracking
Aughenbaugh, Jason; La Cour, Brian
2012-06-01
In this paper, we describe the progress we have achieved in developing a computationally efficient, grid-based Bayesian fusion tracking system. In our approach, the probability surface is represented by a collection of multidimensional polynomials, each computed adaptively on a grid of cells representing state space. Time evolution is performed using a hybrid particle/grid approach and knowledge of the grid structure, while sensor updates use a measurement-based sampling method with a Delaunay triangulation. We present an application of this system to the problem of tracking a submarine target using a field of active and passive sonar buoys.
Improved iterative Bayesian unfolding
D'Agostini, G
2010-01-01
This paper reviews the basic ideas behind a Bayesian unfolding published some years ago and improves their implementation. In particular, uncertainties are now treated at all levels by probability density functions and their propagation is performed by Monte Carlo integration. Thus, small numbers are better handled and the final uncertainty does not rely on the assumption of normality. Theoretical and practical issues concerning the iterative use of the algorithm are also discussed. The new program, implemented in the R language, is freely available, together with sample scripts to play with toy models.
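The iterative scheme the abstract above describes can be illustrated with a short sketch. The following is a minimal numpy rendition of the general idea of iterative Bayesian unfolding (the published program is in R; the function name, toy response matrix, and counts here are illustrative, not taken from the paper):

```python
import numpy as np

def bayesian_unfold(response, observed, n_iter=50):
    """Iterative Bayesian unfolding (D'Agostini-style sketch).

    response[j, i] = P(observe effect bin j | true cause bin i)
    observed[j]    = measured counts in effect bin j
    Returns estimated counts in the cause bins.
    """
    n_causes = response.shape[1]
    efficiency = response.sum(axis=0)          # P(observed at all | cause i)
    prior = np.full(n_causes, 1.0 / n_causes)  # start from a flat prior
    for _ in range(n_iter):
        # Bayes' theorem: P(cause i | effect j) from the response and prior
        joint = response * prior               # shape (n_effects, n_causes)
        norm = joint.sum(axis=1, keepdims=True)
        posterior = joint / np.where(norm > 0, norm, 1.0)
        # Redistribute the observed counts back onto the cause bins
        unfolded = (posterior * observed[:, None]).sum(axis=0) / efficiency
        prior = unfolded / unfolded.sum()      # updated prior for next pass
    return unfolded

# Toy model: two true bins smeared by a known response matrix
response = np.array([[0.8, 0.2],
                     [0.2, 0.8]])
true_counts = np.array([1000.0, 500.0])
observed = response @ true_counts              # exactly folded, no noise
estimate = bayesian_unfold(response, observed)
```

With noise-free input the iteration converges to the true spectrum; in realistic use the number of iterations acts as a regularization knob, which is one of the practical issues the paper discusses.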
Rolling Force and Rolling Moment in Spline Cold Rolling Using Slip-line Field Method
Institute of Scientific and Technical Information of China (English)
ZHANG Dawei; LI Yongtang; FU Jianhua; ZHENG Quangang
2009-01-01
Rolling force and rolling moment are the primary process parameters of external spline cold rolling. However, precise theoretical formulae for rolling force and rolling moment remain scarce, and in practice their determination relies on experience. In the present study, mathematical models of rolling force and rolling moment are established based on the stress field theory of slip-lines, and an isotropic hardening rule is used to improve the yield criterion. A calculation program is developed in the MATLAB language according to the established models, so that the rolling force and rolling moment can be predicted quickly; the reliability of the models is then validated by FEM. Within the ranges of spline module m=0.5-1.5 mm, reference-circle pressure angle α=30.0°-45.0°, and number of spline teeth Z=19-54, the rolling force and rolling moment in the rolling process (finishing rolling excluded) are investigated by means of a virtual orthogonal experiment design. The results indicate that the influences of module and number of spline teeth on the maximum rolling force and rolling moment are remarkable; when the reference-circle pressure angle is small, the module is large, and the number of spline teeth is small, the peak rolling force may appear in the middle of the process; the peak rolling moment appears in the middle of the process and then oscillates and weakens to a stable value. These results may guide the determination of motor power and the design of the hydraulic system of the special-purpose machine, and provide a basis for further research on the precision forming process of external spline cold rolling.
Bayesian Inference on Gravitational Waves
Directory of Open Access Journals (Sweden)
Asad Ali
2015-12-01
Full Text Available The Bayesian approach is becoming increasingly popular in the astrophysics data analysis community. However, the statistics community in Pakistan is largely unaware of this fertile interaction between the two disciplines. Bayesian methods have been used to address astronomical problems since the very birth of Bayesian probability in the eighteenth century. Today, Bayesian methods for the detection and parameter estimation of gravitational waves stand on solid theoretical ground and hold strong promise for realistic applications. This article aims to introduce the statistics community in Pakistan to the applications of Bayesian Monte Carlo methods in the analysis of gravitational wave data, with an overview of Bayesian signal detection and estimation methods and a demonstration through a couple of simplified examples.
Aggregation of smooth preferences
Norman Schofield
1998-01-01
Suppose p is a smooth preference profile (for a society, N) belonging to a domain P^N. Let σ be a voting rule, and σ(p)(x) be the set of alternatives in the space, W, which are preferred to x. The equilibrium E(σ(p)) is the set {x ∈ W : σ(p)(x) is empty}. A sufficient condition for existence of E(σ(p)) when p is convex is that a "dual", or generalized gradient, dσ(p)(x), is non-empty at all x. Under certain conditions the dual "field", dσ(p), admits a "social gradient field" ∇σ(p). This field is called an "aggregator" ...
Incompressible smoothed particle hydrodynamics
International Nuclear Information System (INIS)
We present a smoothed particle hydrodynamic model for incompressible fluids. As opposed to solving a pressure Poisson equation in order to get a divergence-free velocity field, here incompressibility is achieved by requiring as a kinematic constraint that the volume of the fluid particles is constant. We use Lagrangian multipliers to enforce this restriction. These Lagrange multipliers play the role of non-thermodynamic pressures whose actual values are fixed through the kinematic restriction. We use the SHAKE methodology familiar in constrained molecular dynamics as an efficient method for finding the non-thermodynamic pressure satisfying the constraints. The model is tested for several flow configurations
Adaptive Dynamic Bayesian Networks
Energy Technology Data Exchange (ETDEWEB)
Ng, B M
2007-10-26
A discrete-time Markov process can be compactly modeled as a dynamic Bayesian network (DBN)--a graphical model with nodes representing random variables and directed edges indicating causality between variables. Each node has a probability distribution, conditional on the variables represented by the parent nodes. A DBN's graphical structure encodes fixed conditional dependencies between variables. But in real-world systems, conditional dependencies between variables may be unknown a priori or may vary over time. Model errors can result if the DBN fails to capture all possible interactions between variables. Thus, we explore the representational framework of adaptive DBNs, whose structure and parameters can change from one time step to the next: a distribution's parameters and its set of conditional variables are dynamic. This work builds on recent work in nonparametric Bayesian modeling, such as hierarchical Dirichlet processes, infinite-state hidden Markov networks and structured priors for Bayes net learning. In this paper, we will explain the motivation for our interest in adaptive DBNs, show how popular nonparametric methods are combined to formulate the foundations for adaptive DBNs, and present preliminary results.
Bayesian analysis toolkit - BAT
International Nuclear Information System (INIS)
Statistical treatment of data is an essential part of any data analysis and interpretation. Different statistical methods and approaches can be used; however, the implementation of these approaches is complicated and at times inefficient. The Bayesian analysis toolkit (BAT) is a software package developed in a C++ framework that facilitates the statistical analysis of data using Bayes' theorem. The tool evaluates the posterior probability distributions for models and their parameters using Markov chain Monte Carlo, which in turn provides straightforward parameter estimation, limit setting and uncertainty propagation. Additional algorithms, such as simulated annealing, allow extraction of the global mode of the posterior. BAT provides a well-tested environment for flexible model definition and also includes a set of predefined models for standard statistical problems. The package is interfaced to other software packages commonly used in high energy physics, such as ROOT, Minuit, RooStats and CUBA. We present a general overview of BAT and its algorithms. A few physics examples are shown to introduce the spectrum of its applications. In addition, new developments and features are summarized.
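BAT itself is a C++ package; the core idea it implements — evaluating a posterior with Markov chain Monte Carlo for parameter estimation — can be sketched independently. Below is a minimal random-walk Metropolis example in Python (all names, toy data, and tuning constants are illustrative, not BAT's API):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: Gaussian measurements with known sigma, unknown mean mu
sigma = 1.0
data = rng.normal(3.0, sigma, size=200)

def log_posterior(mu):
    # Flat prior on mu; Gaussian log-likelihood (up to an additive constant)
    return -0.5 * np.sum((data - mu) ** 2) / sigma**2

# Random-walk Metropolis sampling of the posterior of mu
samples = []
mu, lp = 0.0, log_posterior(0.0)
for _ in range(20000):
    prop = mu + rng.normal(0.0, 0.1)          # symmetric proposal
    lp_prop = log_posterior(prop)
    if np.log(rng.uniform()) < lp_prop - lp:  # Metropolis accept/reject
        mu, lp = prop, lp_prop
    samples.append(mu)
chain = np.array(samples[5000:])              # discard burn-in
post_mean, post_sd = chain.mean(), chain.std()
```

The chain mean approximates the posterior mean (here, close to the sample mean of the data) and the chain spread approximates the posterior uncertainty, which is the kind of parameter estimation and uncertainty propagation the abstract refers to.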
International Nuclear Information System (INIS)
PySpline is a modern computer program for processing raw averaged XAS and EXAFS data using an intuitive approach which allows the user to see the immediate effect of various processing parameters on the resulting k- and R-space data. The Python scripting language and Qt and Qwt widget libraries were chosen to meet the design requirement that it be cross-platform (i.e. versions for Windows, Mac OS X, and Linux). PySpline supports polynomial pre- and post-edge background subtraction, splining of the EXAFS region with a multi-segment polynomial spline, and Fast Fourier Transform (FFT) of the resulting k^3-weighted EXAFS data.
Institute of Scientific and Technical Information of China (English)
吴宪祥; 郭宝龙; 王娟
2009-01-01
A novel algorithm based on particle swarm optimization (PSO) of cubic splines is proposed for mobile robot path planning. The path is described by a string of cubic splines, so that path planning is reduced to parameter optimization of the cubic spline curves. PSO is introduced to obtain the optimal path, exploiting its fast convergence and global search capability. Experimental results show that the proposed algorithm can quickly and effectively find a collision-free path among obstacles, and that the planned path is smooth, which benefits robot motion control.
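The approach described above — encoding a path by a few spline parameters and letting PSO search that parameter space — can be sketched as follows. This is a simplified stand-in, not the authors' algorithm: a single cubic curve through two PSO-controlled waypoints replaces the spline chain, the obstacle is one circle, and the cost is path length plus a penalty for penetrating the obstacle.

```python
import numpy as np

rng = np.random.default_rng(1)

START, GOAL = np.array([0.0, 0.0]), np.array([1.0, 0.0])
OBSTACLE, RADIUS = np.array([0.5, 0.0]), 0.2   # circular obstacle on the straight line

def path_cost(heights):
    """Path = cubic polynomial y(x) through start, two movable
    interior waypoints at x = 1/3 and 2/3, and the goal."""
    xs = np.array([0.0, 1 / 3, 2 / 3, 1.0])
    ys = np.array([0.0, heights[0], heights[1], 0.0])
    coeffs = np.polyfit(xs, ys, 3)             # interpolating cubic
    x = np.linspace(0.0, 1.0, 200)
    pts = np.column_stack([x, np.polyval(coeffs, x)])
    length = np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1))
    dist = np.linalg.norm(pts - OBSTACLE, axis=1)
    penalty = np.sum(np.maximum(RADIUS - dist, 0.0)) * 100.0
    return length + penalty

# Standard global-best PSO over the two waypoint heights
n_particles, n_iter = 30, 200
pos = rng.uniform(-1.0, 1.0, (n_particles, 2))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_cost = np.array([path_cost(p) for p in pos])
gbest = pbest[pbest_cost.argmin()].copy()
for _ in range(n_iter):
    r1, r2 = rng.uniform(size=(2, n_particles, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    cost = np.array([path_cost(p) for p in pos])
    improved = cost < pbest_cost
    pbest[improved], pbest_cost[improved] = pos[improved], cost[improved]
    gbest = pbest[pbest_cost.argmin()].copy()
best_cost = pbest_cost.min()
```

The swarm settles on a smooth arc over the obstacle: the best cost is slightly above 1.0 (the straight-line distance) but well below a penalized straight path.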
Côrtes, A.M.A.
2015-02-20
The recently introduced divergence-conforming B-spline discretizations allow the construction of smooth discrete velocity–pressure pairs for viscous incompressible flows that are at the same time inf-sup stable and pointwise divergence-free. When applied to discretized Stokes equations, these spaces generate a symmetric and indefinite saddle-point linear system. Krylov subspace methods are usually the most efficient procedures to solve such systems. One of such methods, for symmetric systems, is the Minimum Residual Method (MINRES). However, the efficiency and robustness of Krylov subspace methods is closely tied to appropriate preconditioning strategies. For the discrete Stokes system, in particular, block-diagonal strategies provide efficient preconditioners. In this article, we compare the performance of block-diagonal preconditioners for several block choices. We verify how the eigenvalue clustering promoted by the preconditioning strategies affects MINRES convergence. We also compare the number of iterations and wall-clock timings. We conclude that among the building blocks we tested, the strategy with relaxed inner conjugate gradients preconditioned with incomplete Cholesky provided the best results.
Gearbox Reliability Collaborative Analytic Formulation for the Evaluation of Spline Couplings
Energy Technology Data Exchange (ETDEWEB)
Guo, Y.; Keller, J.; Errichello, R.; Halse, C.
2013-12-01
Gearboxes in wind turbines have not been achieving their expected design life; however, they commonly meet and exceed the design criteria specified in current standards in the gear, bearing, and wind turbine industry as well as third-party certification criteria. The cost of gearbox replacements and rebuilds, as well as the down time associated with these failures, has elevated the cost of wind energy. The National Renewable Energy Laboratory (NREL) Gearbox Reliability Collaborative (GRC) was established by the U.S. Department of Energy in 2006; its key goal is to understand the root causes of premature gearbox failures and improve their reliability using a combined approach of dynamometer testing, field testing, and modeling. As part of the GRC program, this paper investigates the design of the spline coupling often used in modern wind turbine gearboxes to connect the planetary and helical gear stages. Aside from transmitting the driving torque, another common function of the spline coupling is to allow the sun to float between the planets. The amount the sun can float is determined by the spline design and the sun shaft flexibility subject to the operational loads. Current standards address spline coupling design requirements in varying detail. This report provides additional insight beyond these current standards to quickly evaluate spline coupling designs.
Directory of Open Access Journals (Sweden)
Bush William S
2009-12-01
Background: Gene-centric analysis tools for genome-wide association study data are being developed both to annotate single-locus statistics and to prioritize or group single nucleotide polymorphisms (SNPs) prior to analysis. These approaches require knowledge about the relationships between SNPs on a genotyping platform and genes in the human genome. SNPs in the genome can represent broader genomic regions via linkage disequilibrium (LD), and population-specific patterns of LD can be exploited to generate a data-driven map of SNPs to genes. Methods: In this study, we implemented LD-Spline, a database routine that defines the genomic boundaries a particular SNP represents using linkage disequilibrium statistics from the International HapMap Project. We compared the LD-Spline haplotype block partitioning approach to that of the four-gamete rule and the Gabriel et al. approach using simulated data; in addition, we processed two commonly used genome-wide association study platforms. Results: We illustrate that LD-Spline performs comparably to the four-gamete rule and the Gabriel et al. approach; however, as a SNP-centric approach LD-Spline has the added benefit of systematically identifying a genomic boundary for each SNP, where the global block partitioning approaches may falter due to sampling variation in LD statistics. Conclusion: LD-Spline is an integrated database routine that quickly and effectively defines the genomic region marked by a SNP using linkage disequilibrium, with a SNP-centric block definition algorithm.
Directory of Open Access Journals (Sweden)
Neng Wan
2014-01-01
To address the poor geometric adaptability of the spline element method, a geometrically precise spline method, which uses rational Bézier patches to represent the solution domain, is proposed for the two-dimensional viscous incompressible Navier-Stokes equations. Besides fewer unknowns, higher accuracy, and better computational efficiency, it offers the advantages of isogeometric analysis: exact representation of the object boundary and the unification of geometric and analysis models. The selection of B-spline basis functions and the grid definition are studied, and a stable discretization scheme satisfying the inf-sup condition is proposed: the degree of the spline functions approximating the velocity field is one order higher than that approximating the pressure field, and these functions are defined on a once-refined grid. The Dirichlet boundary conditions are imposed in weak form through the Nitsche variational principle, owing to the lack of interpolation properties of the B-spline basis functions. Finally, the validity of the proposed method is verified with several examples.
Book review: Bayesian analysis for population ecology
Link, William A.
2011-01-01
Brian Dennis described the field of ecology as “fertile, uncolonized ground for Bayesian ideas.” He continued: “The Bayesian propagule has arrived at the shore. Ecologists need to think long and hard about the consequences of a Bayesian ecology. The Bayesian outlook is a successful competitor, but is it a weed? I think so.” (Dennis 2004)
BSR: B-spline atomic R-matrix codes
Zatsarinny, Oleg
2006-02-01
BSR is a general program to calculate atomic continuum processes using the B-spline R-matrix method, including electron-atom and electron-ion scattering, and radiative processes such as bound-bound transitions, photoionization and polarizabilities. The calculations can be performed in LS-coupling or in an intermediate-coupling scheme by including terms of the Breit-Pauli Hamiltonian. New version program summary. Title of program: BSR. Catalogue identifier: ADWY. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADWY. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. Computers on which the program has been tested: Microway Beowulf cluster; Compaq Beowulf cluster; DEC Alpha workstation; DELL PC. Operating systems under which the new version has been tested: UNIX, Windows XP. Programming language used: FORTRAN 95. Memory required to execute with typical data: typically 256-512 Mwords; since all the principal dimensions are allocatable, the available memory defines the maximum complexity of the problem. No. of bits in a word: 8. No. of processors used: 1. Has the code been vectorized or parallelized?: no. No. of lines in distributed program, including test data, etc.: 69 943. No. of bytes in distributed program, including test data, etc.: 746 450. Peripherals used: scratch disk store; permanent disk store. Distribution format: tar.gz. Nature of physical problem: this program uses the R-matrix method to calculate electron-atom and electron-ion collision processes, with options to calculate radiative data, photoionization, etc. The calculations can be performed in LS-coupling or in an intermediate-coupling scheme, with options to include Breit-Pauli terms in the Hamiltonian. Method of solution: the R-matrix method is used [P.G. Burke, K.A. Berrington, Atomic and Molecular Processes: An R-Matrix Approach, IOP Publishing, Bristol, 1993; P.G. Burke, W.D. Robb, Adv. At. Mol. Phys. 11 (1975) 143; K.A. Berrington, W.B. Eissner, P.H. Norrington, Comput
Directory of Open Access Journals (Sweden)
Scott W. Keith
2014-09-01
This paper details the design, evaluation, and implementation of a framework for detecting and modeling nonlinearity between a binary outcome and a continuous predictor variable adjusted for covariates in complex samples. The framework provides familiar-looking parameterizations of output in terms of linear slope coefficients and odds ratios. Estimation methods focus on maximum likelihood optimization of piecewise linear free-knot splines formulated as B-splines. Correctly specifying the optimal number and positions of the knots improves the model, but is marked by computational intensity and numerical instability. Our inference methods utilize both parametric and nonparametric bootstrapping. Unlike other nonlinear modeling packages, this framework is designed to incorporate multistage survey sample designs common to nationally representative datasets. We illustrate the approach and evaluate its performance in specifying the correct number of knots under various conditions with an example using body mass index (BMI, kg/m2) and the complex multistage sampling design from the Third National Health and Nutrition Examination Survey to simulate binary mortality outcomes data having realistic nonlinear sample-weighted risk associations with BMI. BMI and mortality data provide a particularly apt example and area of application, since BMI is commonly recorded in large health surveys with complex designs, often categorized for modeling, and nonlinearly related to mortality. When complex sample design considerations were ignored, our method was generally similar to or more accurate than two common model selection procedures, Schwarz's Bayesian Information Criterion (BIC) and Akaike's Information Criterion (AIC), in terms of selecting the correct number of knots. Our approach provided accurate knot selections when complex sampling weights were incorporated, while AIC and BIC were not effective under these conditions.
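The knot-count selection via an information criterion mentioned in the abstract can be illustrated on a toy problem. The sketch below is a drastic simplification (plain least squares on a truncated-linear basis, ignoring survey weights, binary outcomes, and bootstrapping; all names are illustrative): it fits piecewise-linear splines with 0-4 equally spaced interior knots and picks the count minimizing BIC.

```python
import numpy as np

rng = np.random.default_rng(0)

# Piecewise-linear truth with one interior knot at x = 0.5
n = 200
x = np.sort(rng.uniform(0.0, 1.0, n))
y_true = np.where(x < 0.5, x, 1.0 - x)
y = y_true + rng.normal(0.0, 0.05, n)

def design_matrix(x, knots):
    """Truncated-linear basis: piecewise-linear spline with the given knots."""
    cols = [np.ones_like(x), x]
    cols += [np.maximum(x - t, 0.0) for t in knots]
    return np.column_stack(cols)

bic = {}
for k in range(5):                              # candidate number of interior knots
    knots = np.linspace(0.0, 1.0, k + 2)[1:-1]  # equally spaced interior knots
    X = design_matrix(x, knots)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    bic[k] = n * np.log(rss / n) + (k + 2) * np.log(n)  # params = k + 2
best_k = min(bic, key=bic.get)
```

BIC's log(n) penalty per parameter rejects the knot-free straight line (poor fit) and the over-knotted fits (no real improvement in residual sum of squares), recovering the single true kink.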
Smooth quantum gravity: Exotic smoothness and Quantum gravity
Asselmeyer-Maluga, Torsten
2016-01-01
Over the last two decades, many unexpected relations between exotic smoothness, e.g. exotic $\\mathbb{R}^{4}$, and quantum field theory were found. Some of these relations are rooted in a relation to superstring theory and quantum gravity. Therefore one would expect that exotic smoothness is directly related to the quantization of general relativity. In this article we will support this conjecture and develop a new approach to quantum gravity called \\emph{smooth quantum gravity} by using smoot...
DEFF Research Database (Denmark)
Hartelius, Karsten; Carstensen, Jens Michael
2003-01-01
A method for locating distorted grid structures in images is presented. The method is based on the theories of template matching and Bayesian image restoration. The grid is modeled as a deformable template. Prior knowledge of the grid is described through a Markov random field (MRF) model which represents the spatial coordinates of the grid nodes. Knowledge of how grid nodes are depicted in the observed image is described through the observation model. The prior consists of a node prior and an arc (edge) prior, both modeled as Gaussian MRFs. The node prior models variations in the positions of grid nodes and the arc prior models variations in row and column spacing across the grid. Grid matching is done by placing an initial rough grid over the image and applying an ensemble annealing scheme to maximize the posterior distribution of the grid. The method can be applied to noisy images with missing ...
Evaluation of solid–liquid interface profile during continuous casting by a spline based formalism
Indian Academy of Sciences (India)
S K Das
2001-08-01
A numerical framework has been applied which comprises a cubic-spline-based collocation method to determine the solid–liquid interface profile (solidification front) during the continuous casting process. The basis function chosen for the collocation algorithm employed in this formalism is a cubic spline interpolation function. An iterative solution methodology has been developed to track the interface profile for a copper strand of rectangular transverse section at different casting speeds. It is based on an enthalpy conservation criterion at the solidification interface, and the trend is found to be in good agreement with the available information in the literature, although a point-to-point mapping of the profile is not practically realizable. The spline-based collocation algorithm is found to be a reasonably efficient tool for the solidification front tracking process, as a good spatial derivative approximation can be achieved with a simple modelling philosophy which is numerically robust and computationally cost-effective.
Current trends in Bayesian methodology with applications
Upadhyay, Satyanshu K; Dey, Dipak K; Loganathan, Appaia
2015-01-01
Collecting Bayesian material scattered throughout the literature, Current Trends in Bayesian Methodology with Applications examines the latest methodological and applied aspects of Bayesian statistics. The book covers biostatistics, econometrics, reliability and risk analysis, spatial statistics, image analysis, shape analysis, Bayesian computation, clustering, uncertainty assessment, high-energy astrophysics, neural networking, fuzzy information, objective Bayesian methodologies, empirical Bayes methods, small area estimation, and many more topics.Each chapter is self-contained and focuses on
Classification of smooth Fano polytopes
DEFF Research Database (Denmark)
Øbro, Mikkel
A simplicial lattice polytope containing the origin in the interior is called a smooth Fano polytope if the vertices of every facet form a basis of the lattice. The study of smooth Fano polytopes is motivated by their connection to toric varieties. The thesis concerns the classification of smooth Fano polytopes up to isomorphism. A smooth Fano d-polytope can have at most 3d vertices; in the case of 3d vertices an explicit classification is known. The thesis contains the classification in the case of 3d-1 vertices. Complete classifications of smooth Fano d-polytopes for fixed d previously existed only for d ≤ 4. In the thesis an algorithm for the classification of smooth Fano d-polytopes for any given d is presented. The algorithm has been implemented and used to obtain the complete classification for d ≤ 8.
Longitudinal Cavity Mode Referenced Spline Tuning for Widely Tunable MG-Y Branch Semiconductor Laser
Directory of Open Access Journals (Sweden)
H. Heininger
2014-04-01
This paper presents a novel method for wavelength-continuous tuning of an MG-Y-branch laser that possesses an intrinsic self-calibration capability. The method utilizes the measured characteristic output power pattern caused by the internal longitudinal cavity modes of the laser device to calibrate a set of cubic spline curves. The spline curves are then used to generate the tuning currents for the two reflector sections and the phase section of the laser from an intermediate tuning control parameter. A calibration function maps the desired laser wavelength to the intermediate tuning parameter, thus enabling continuous tuning with high accuracy.
Preconditioning cubic spline collocation method by FEM and FDM for elliptic equations
Energy Technology Data Exchange (ETDEWEB)
Kim, Sang Dong [KyungPook National Univ., Taegu (Korea, Republic of)
1996-12-31
In this talk we discuss the finite element and finite difference techniques for the cubic spline collocation method. For this purpose, we consider the uniformly elliptic operator A defined by $Au := -\Delta u + a_1 u_x + a_2 u_y + a_0 u$ in $\Omega$ (the unit square) with Dirichlet or Neumann boundary conditions, and its discretization based on Hermite cubic spline spaces and collocation at the Gauss points. Using an interpolatory basis with support on the Gauss points one obtains the matrix $A_N$ ($h = 1/N$).
Calculation of energy levels and wavefunctions of hydrogen molecular ion using B-splines function
International Nuclear Information System (INIS)
Energy levels and wavefunctions of the ground state and the first excited state of the hydrogen molecular ion are calculated by solving the stationary Schrodinger equation with B-spline functions. By adopting the nuclear positions as knots of the B-spline basis, high accuracy in the energy levels of the ground state and the first excited state of the hydrogen molecular ion can be reached even for large internuclear separations, and our ℓ-dependent radial wavefunctions of the ground state are in good agreement with those computed by the GAUSSIAN chemistry software. (authors)
A weighted extended B-spline solver for bending and buckling of stiffened plates
Verschaeve, Joris C G
2015-01-01
The weighted extended B-spline method [Hoellig (2003)] is applied to bending and buckling problems of plates with different shapes and stiffener arrangements. The discrete equations are obtained from the energy contributions of the different components constituting the system by means of the Rayleigh-Ritz approach. The pre-buckling or plane stress is computed by means of Airy's stress function. A boundary data extension algorithm for the weighted extended B-spline method is derived in order to solve for inhomogeneous Dirichlet boundary conditions. A series of benchmark tests is performed touching various aspects influencing the accuracy of the method.
Bayesian Age-Period-Cohort Modeling and Prediction - BAMP
Directory of Open Access Journals (Sweden)
Volker J. Schmid
2007-10-01
The software package BAMP provides a method of analyzing incidence or mortality data on the Lexis diagram, using a Bayesian version of an age-period-cohort model. A hierarchical model is assumed, with a binomial model in the first stage. As smoothing priors for the age, period and cohort parameters, random walks of first and second order, with and without an additional unstructured component, are available. Unstructured heterogeneity can also be included in the model. In order to evaluate the model fit, posterior deviance, DIC and predictive deviances are computed. By projecting the random walk prior into the future, future death rates can be predicted.
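The prediction step — projecting a random walk prior into the future — can be sketched for a second-order random walk: each future effect continues the local linear trend plus Gaussian noise. The posterior-mean period effects and the innovation scale below are made-up illustrative numbers, not BAMP's actual interface or output.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical posterior-mean period effects from an age-period-cohort fit
period_effects = np.array([0.00, 0.05, 0.12, 0.16, 0.22])
tau = 0.02     # innovation s.d. of the second-order random walk prior

# Project the RW2 prior into the future: x_t = 2 x_{t-1} - x_{t-2} + eps_t
n_sim, horizon = 5000, 3
sims = np.empty((n_sim, horizon))
for s in range(n_sim):
    x1, x2 = period_effects[-2], period_effects[-1]
    for h in range(horizon):
        x1, x2 = x2, 2 * x2 - x1 + rng.normal(0.0, tau)
        sims[s, h] = x2
pred_mean = sims.mean(axis=0)
pred_low, pred_high = np.percentile(sims, [2.5, 97.5], axis=0)
```

The point forecast extrapolates the last linear trend (here 0.28, 0.34, 0.40), while the simulated predictive interval widens with the horizon, reflecting growing uncertainty.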
A Digital-Discrete Method For Smooth-Continuous Data Reconstruction
Chen, Li
2010-01-01
A systematic digital-discrete method for obtaining continuous functions with smoothness to a certain order (C^(n)) from sample data is designed. This method is based on gradually varied functions and the classical finite difference method. This new method has been applied to real groundwater data and the results have validated the method. This method is independent from existing popular methods such as the cubic spline method and the finite element method. The new digital-discrete method has considerable advantages for a large number of real data applications. This digital method also differs from other classical discrete methods that usually use triangulations. This method can potentially be used to obtain smooth functions such as polynomials through its derivatives f^(k) and the solution for partial differential equations such as harmonic and other important equations.
Bayesian Methods and Universal Darwinism
Campbell, John
2010-01-01
Bayesian methods since the time of Laplace have been understood by their practitioners as closely aligned to the scientific method. Indeed a recent champion of Bayesian methods, E. T. Jaynes, titled his textbook on the subject Probability Theory: the Logic of Science. Many philosophers of science including Karl Popper and Donald Campbell have interpreted the evolution of Science as a Darwinian process consisting of a 'copy with selective retention' algorithm abstracted from Darwin's theory of...
Portfolio Allocation for Bayesian Optimization
Brochu, Eric; Hoffman, Matthew W.; De Freitas, Nando
2010-01-01
Bayesian optimization with Gaussian processes has become an increasingly popular tool in the machine learning community. It is efficient and can be used when very little is known about the objective function, making it popular in expensive black-box optimization scenarios. It uses Bayesian methods to sample the objective efficiently using an acquisition function which incorporates the model's estimate of the objective and the uncertainty at any given point. However, there are several differen...
Neuroanatomy, neurology and Bayesian networks
Bielza Lozoya, Maria Concepcion
2014-01-01
Bayesian networks are data mining models with clear semantics and a sound theoretical foundation. In this keynote talk we will pinpoint a number of neuroscience problems that can be addressed using Bayesian networks. In neuroanatomy, we will show computer simulation models of dendritic trees and classification of neuron types, both based on morphological features. In neurology, we will present the search for genetic biomarkers in Alzheimer's disease and the prediction of health-related qualit...
Bayesian Networks and Influence Diagrams
DEFF Research Database (Denmark)
Kjærulff, Uffe Bro; Madsen, Anders Læsø
Probabilistic networks, also known as Bayesian networks and influence diagrams, have become one of the most promising technologies in the area of applied artificial intelligence, offering intuitive, efficient, and reliable methods for diagnosis, prediction, decision making, classification, troubleshooting, and data mining under uncertainty. Bayesian Networks and Influence Diagrams: A Guide to Construction and Analysis provides a comprehensive guide for practitioners who wish to understand, construct, and analyze intelligent systems for decision support based on probabilistic networks. Intended ...
Michel, Volker
2013-01-01
Lectures on Constructive Approximation: Fourier, Spline, and Wavelet Methods on the Real Line, the Sphere, and the Ball focuses on spherical problems as they occur in the geosciences and medical imaging. It comprises the author’s lectures on classical approximation methods based on orthogonal polynomials and selected modern tools such as splines and wavelets. Methods for approximating functions on the real line are treated first, as they provide the foundations for the methods on the sphere and the ball and are useful for the analysis of time-dependent (spherical) problems. The author then examines the transfer of these spherical methods to problems on the ball, such as the modeling of the Earth’s or the brain’s interior. Specific topics covered include: * the advantages and disadvantages of Fourier, spline, and wavelet methods * theory and numerics of orthogonal polynomials on intervals, spheres, and balls * cubic splines and splines based on reproducing kernels * multiresolution analysis using wavelet...
Dale Poirier
2008-01-01
This paper provides Bayesian rationalizations for White’s heteroskedastic consistent (HC) covariance estimator and various modifications of it. An informed Bayesian bootstrap provides the statistical framework.
Astrophysical Smooth Particle Hydrodynamics
Rosswog, Stephan
2009-01-01
In this review the basic principles of smooth particle hydrodynamics (SPH) are outlined in a pedagogical fashion. To start, a basic set of SPH equations that is used in many codes throughout the astrophysics community is derived explicitly. Much of SPH's success relies on its excellent conservation properties and therefore the numerical conservation of physical invariants receives much attention throughout this review. The self-consistent derivation of the SPH equations from the Lagrangian of an ideal fluid is the common theme of the remainder of the text. Such a variational approach is applied to derive a modern SPH version of Newtonian hydrodynamics. It accounts for gradients in the local resolution lengths which result in corrective, so-called "grad-h-terms". This strategy naturally carries over to the special-relativistic case for which we derive the corresponding grad-h set of equations. This approach is further generalized to the case of a fluid that evolves on a curved, but fixed background space-time.
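The kernel at the heart of most SPH codes is itself a cubic spline. A minimal sketch, assuming the standard M4 cubic spline kernel with 3D normalization 1/(pi h^3); the kernel choice and the all-pairs density sum below are illustrative, not the review's specific formulation:

```python
import numpy as np

def w_cubic_spline(r, h):
    """Standard M4 cubic spline SPH kernel in 3D (sigma = 1/pi), with q = r/h."""
    q = np.asarray(r, dtype=float) / h
    sigma = 1.0 / (np.pi * h**3)
    w = np.where(q < 1.0, 1.0 - 1.5*q**2 + 0.75*q**3,
        np.where(q < 2.0, 0.25*(2.0 - q)**3, 0.0))
    return sigma * w

def density(positions, masses, h):
    """SPH density estimate rho_a = sum_b m_b W(|r_a - r_b|, h) (naive all-pairs)."""
    d = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
    return (masses[None, :] * w_cubic_spline(d, h)).sum(axis=1)
```

The kernel has compact support (it vanishes for r >= 2h) and integrates to unity over 3D space, which is what makes the summation above a consistent density estimate.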
Merging Digital Surface Models Implementing Bayesian Approaches
Sadeq, H.; Drummond, J.; Li, Z.
2016-06-01
In this research different DSMs from different sources have been merged. The merging is based on a probabilistic model using a Bayesian Approach. The implemented data have been sourced from very high resolution satellite imagery sensors (e.g. WorldView-1 and Pleiades). It is deemed preferable to use a Bayesian Approach when the data obtained from the sensors are limited and it is difficult or very costly to obtain many measurements; thus, the problem of the lack of data can be solved by introducing a priori estimations of data. To infer the prior data, it is assumed that the roofs of the buildings are smooth, and for that purpose local entropy has been implemented. In addition to the a priori estimations, GNSS RTK measurements have been collected in the field, which are used as check points to assess the quality of the DSMs and to validate the merging result. The model has been applied in the West-End of Glasgow containing different kinds of buildings, such as flat roofed and hipped roofed buildings. Both quantitative and qualitative methods have been employed to validate the merged DSM. The validation results have shown that the model was successfully able to improve the quality of the DSMs and improve some characteristics such as the roof surfaces, which consequently led to better representations. In addition, the developed model has been compared with the well-established Maximum Likelihood model and showed similar quantitative statistical results and better qualitative results. Although the proposed model has been applied on DSMs that were derived from satellite imagery, it can be applied to any other sourced DSMs.
Bayesian modeling and significant features exploration in wavelet power spectra
Directory of Open Access Journals (Sweden)
D. V. Divine
2007-01-01
This study proposes and justifies a Bayesian approach to modeling wavelet coefficients and finding statistically significant features in wavelet power spectra. The approach utilizes ideas elaborated in scale-space smoothing methods and wavelet data analysis. We treat each scale of the discrete wavelet decomposition as a sequence of independent random variables and then apply Bayes' rule for constructing the posterior distribution of the smoothed wavelet coefficients. Samples drawn from the posterior are subsequently used for finding the estimate of the true wavelet spectrum at each scale. The method offers two different significance testing procedures for wavelet spectra. A traditional approach assesses the statistical significance against a red noise background. The second procedure tests for homoscedasticity of the wavelet power assessing whether the spectrum derivative significantly differs from zero at each particular point of the spectrum. Case studies with simulated data and climatic time-series prove the method to be a potentially useful tool in data analysis.
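The per-coefficient smoothing step can be illustrated with a deliberately simplified conjugate Gaussian model; the zero-mean Gaussian prior per wavelet coefficient and known noise variance are assumptions for this sketch, not the paper's full posterior construction:

```python
import numpy as np

def posterior_mean_shrink(coeffs, noise_var, prior_var):
    """Conjugate Gaussian posterior mean per wavelet coefficient:
    theta ~ N(0, prior_var), observed d | theta ~ N(theta, noise_var).
    The posterior mean shrinks each coefficient toward zero."""
    k = prior_var / (prior_var + noise_var)  # shrinkage factor in (0, 1)
    return k * np.asarray(coeffs, dtype=float)

d = np.array([4.0, -0.5, 0.1])   # hypothetical coefficients at one scale
smoothed = posterior_mean_shrink(d, noise_var=1.0, prior_var=3.0)
```

Large coefficients survive nearly intact while small, noise-dominated ones are pulled toward zero, which is the qualitative behaviour one wants from posterior smoothing of a wavelet spectrum.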
Nonparametric Bayesian Classification
Coram, M A
2002-01-01
A Bayesian approach to the classification problem is proposed in which random partitions play a central role. It is argued that the partitioning approach has the capacity to take advantage of a variety of large-scale spatial structures, if they are present in the unknown regression function $f_0$. An idealized one-dimensional problem is considered in detail. The proposed nonparametric prior uses random split points to partition the unit interval into a random number of pieces. This prior is found to provide a consistent estimate of the regression function in the $L^p$ topology, for any $1 \leq p < \infty$, and for arbitrary measurable $f_0:[0,1] \rightarrow [0,1]$. A Markov chain Monte Carlo (MCMC) implementation is outlined and analyzed. Simulation experiments are conducted to show that the proposed estimate compares favorably with a variety of conventional estimators. A striking resemblance between the posterior mean estimate and the bagged CART estimate is noted and discussed. For higher dimensions, a ...
BAT - Bayesian Analysis Toolkit
International Nuclear Information System (INIS)
One of the most vital steps in any data analysis is the statistical analysis and comparison with the prediction of a theoretical model. The many uncertainties associated with the theoretical model and the observed data require a robust statistical analysis tool. The Bayesian Analysis Toolkit (BAT) is a powerful statistical analysis software package based on Bayes' Theorem, developed to evaluate the posterior probability distribution for models and their parameters. It implements Markov Chain Monte Carlo to get the full posterior probability distribution, which in turn provides straightforward parameter estimation, limit setting and uncertainty propagation. Additional algorithms, such as Simulated Annealing, allow evaluation of the global mode of the posterior. BAT is developed in C++ and allows for a flexible definition of models. A set of predefined models covering standard statistical cases is also included in BAT. It has been interfaced to other commonly used software packages such as ROOT, Minuit, RooStats and CUBA. An overview of the software and its algorithms is provided along with several physics examples to cover a range of applications of this statistical tool. Future plans, new features and recent developments are briefly discussed.
Application of Cubic Box Spline Wavelets in the Analysis of Signal Singularities
Directory of Open Access Journals (Sweden)
Rakowski Waldemar
2015-12-01
In the subject literature, wavelets such as the Mexican hat (the second derivative of a Gaussian or the quadratic box spline are commonly used for the task of singularity detection. The disadvantage of the Mexican hat, however, is its unlimited support; the disadvantage of the quadratic box spline is a phase shift introduced by the wavelet, making it difficult to locate singular points. The paper deals with the construction and properties of wavelets in the form of cubic box splines which have compact and short support and which do not introduce a phase shift. The digital filters associated with cubic box wavelets that are applied in implementing the discrete dyadic wavelet transform are defined. The filters and the algorithme à trous of the discrete dyadic wavelet transform are used in detecting signal singularities and in calculating the measures of signal singularities in the form of a Lipschitz exponent. The article presents examples illustrating the use of cubic box spline wavelets in the analysis of signal singularities.
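The algorithme à trous mentioned above can be sketched in a few lines: at scale j the smoothing filter is applied with 2^j - 1 zeros ("holes") inserted between its taps, so no decimation is needed. The B3-spline filter used here is a common illustrative choice, not the paper's cubic box spline filters:

```python
import numpy as np

def atrous(signal, levels):
    """Starlet-style undecimated (a trous) transform with the B3-spline filter.
    Returns (details, residual): details[j] = c_j - c_{j+1}, so the sum of all
    details plus the residual reconstructs the input exactly."""
    h = np.array([1, 4, 6, 4, 1], dtype=float) / 16.0
    c = np.asarray(signal, dtype=float)
    details = []
    for j in range(levels):
        # insert 2^j - 1 zeros between the filter taps
        hj = np.zeros((len(h) - 1) * 2**j + 1)
        hj[::2**j] = h
        pad = len(hj) // 2
        smooth = np.convolve(np.pad(c, pad, mode='reflect'), hj, mode='valid')
        details.append(c - smooth)
        c = smooth
    return details, c
```

Singularities show up as detail coefficients whose magnitude persists (or decays slowly) across scales; the decay rate across scales is what a Lipschitz-exponent estimate is built from.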
Recovery of Graded Index Profile of Planar Waveguide by Cubic Spline Function
Institute of Scientific and Technical Information of China (English)
YANG Yong; CHEN Xian-Feng; LIAO Wei-Jun; XIA Yu-Xing
2007-01-01
A method is proposed to recover the refractive index profile of a graded waveguide from the effective indices by a cubic spline interpolation function. Numerical analysis of several typical index distributions shows that the refractive index profile can be reconstructed closely to its exact profile by the presented interpolation model.
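A minimal sketch of such a reconstruction, with scipy's `CubicSpline` standing in for the paper's interpolation model; the depth and effective-index samples below are hypothetical:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical depths (um) and index samples recovered from the guided modes
depth = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
n_eff = np.array([2.20, 2.17, 2.12, 2.07, 2.03, 2.01])

profile = CubicSpline(depth, n_eff)   # C2-continuous interpolant through the samples
fine = np.linspace(0.0, 5.0, 101)
n_recovered = profile(fine)           # smooth index profile on a dense grid
dn_dz = profile(fine, 1)              # first derivative, available analytically
```

The spline passes exactly through the sampled indices while remaining twice continuously differentiable, which is why it reproduces smooth graded profiles well.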
Numerical solution of functional integral equations by using B-splines
Directory of Open Access Journals (Sweden)
Reza Firouzdor
2014-05-01
This paper describes an approximating solution, based on Lagrange interpolation and spline functions, to treat functional integral equations of Fredholm type and Volterra type. This method can be extended to functional differential and integro-differential equations. To show the efficiency of the method, we give some numerical examples.
Cardiac motion tracking with multilevel B-splines and SinMod from tagged MRI
Wang, Hui; Amini, Amir A.
2011-03-01
Cardiac motion analysis can play an important role in cardiac disease diagnosis. Tagged magnetic resonance imaging (MRI) has the ability to directly and non-invasively alter tissue magnetization and produce tags on the deforming tissue. This paper proposes an approach to analysis of tagged MR images using a multilevel B-spline fitting model incorporating phase information. The novelty of the proposed technique is that phase information is extracted from SinMod. By using real tag intersections extracted directly from tagged MR image data and virtual tag intersections extracted from phase information, both considered to be scattered data, multilevel B-spline fitting can result in accurate displacement motion fields. The B-spline approximation, which also serves to remove noise in the displacement measurements, is performed without specifying control point locations explicitly and is very fast. Dense virtual tag intersections based on SinMod were created and incorporated into the multilevel B-spline fitting process. Experimental results on simulated data from the 13-parameter kinematic model of Arts et al. and in vivo canine data demonstrate further improvement in accuracy and effectiveness of the proposed method.
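The core operation, fitting a smooth displacement field to noisy scattered intersection points, can be sketched with scipy's `SmoothBivariateSpline` as a stand-in for the multilevel B-spline algorithm; the point cloud and displacement field below are synthetic:

```python
import numpy as np
from scipy.interpolate import SmoothBivariateSpline

rng = np.random.default_rng(0)
# Hypothetical scattered tag-intersection coordinates with noisy x-displacements
x = rng.uniform(0.0, 10.0, 200)
y = rng.uniform(0.0, 10.0, 200)
u = 0.5 * np.sin(0.5 * x) + 0.05 * rng.standard_normal(200)  # displacement samples

# Smoothing bicubic spline; s trades off noise removal against fidelity,
# playing the role the hierarchy of control lattices plays in multilevel fitting
fit = SmoothBivariateSpline(x, y, u, kx=3, ky=3, s=200 * 0.05**2)
dense_u = fit(np.linspace(0, 10, 50), np.linspace(0, 10, 50))  # dense motion field
```

Because the spline smooths rather than interpolates, measurement noise in the scattered displacements is suppressed while the underlying field is recovered on a dense grid.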
International Nuclear Information System (INIS)
The space expansion of magnetic field with median plane symmetry in Taylor series is derived in cylindrical coordinate system. The expansion is expressed with the field distribution in the median plane, which may be an analytic expression or the field values at the discrete nodes. The discrete values are fitted with bicubic spline functions and the corresponding computer program is also given
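A minimal sketch of the bicubic spline fit of median-plane field values at discrete nodes, with scipy's `RectBivariateSpline` standing in for the report's program; the grid and toy field map are hypothetical:

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

# Hypothetical median-plane field values B_z(r, phi) on a discrete node grid
r = np.linspace(0.5, 1.5, 21)           # radius (m)
phi = np.linspace(0.0, 2 * np.pi, 73)   # azimuth (rad)
R, PHI = np.meshgrid(r, phi, indexing='ij')
Bz = 1.2 * (1 + 0.05 * np.cos(4 * PHI)) * R**2   # toy field map (T)

spline = RectBivariateSpline(r, phi, Bz, kx=3, ky=3)  # bicubic interpolant
b_at_node = float(spline(1.0, 0.0))        # reproduces the nodal value
dBz_dr = float(spline(1.0, 0.0, dx=1))     # radial derivative for the expansion
```

The spline supplies the median-plane derivatives needed by the Taylor expansion of the field off the symmetry plane, whether the input is an analytic expression sampled on the grid or measured nodal values.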
Application of Cubic Spline in the Implementation of Braces for the Case of a Child
Directory of Open Access Journals (Sweden)
Azmin Sham Rambely
2012-01-01
Problem statement: Orthodontic teeth movement is influenced by the characteristics of the applied force, including its magnitude and direction, which are normally based on symmetric ellipsoid, parabolic and U-shapes. However, this will affect the movement of the whole set of teeth. Approach: This study compares the general form of teeth with another method, called the cubic spline, to get a minimum error in representing the general form of teeth. The cubic spline method is applied to a mathematical model of a child's teeth, which is produced through resignation of orthodontic wires. It is also meant to create a clear view of the true nature of orthodontic wires. Results: Based on the mathematical characteristics of the spline and the real data of a teeth model, the cubic spline proves very useful in reflecting the shape of a curve, because the chosen points are not totally symmetric. Conclusion/Recommendation: Therefore, a symmetrical curve can be produced for the teeth's shape, which is basically asymmetric.
Calculations of Electron Structure of Endohedrally Confined Helium Atom with B-Spline Type Functions
Institute of Scientific and Technical Information of China (English)
QIAO HaoXue; SHI TingYun; LI BaiWen
2002-01-01
The B-spline basis set method is used to study the properties of helium confined endohedrally at the geometrical centre of a fullerene. The boundary conditions of the wavefunctions can be simply satisfied with this method. From our results, the phenomenon of "mirror collapse" is found in the case of confining helium. The interesting behaviors of confining helium are also discussed.
A Mixed Basis Density Functional Approach for Low Dimensional Systems with B-splines
Ren, Chung-Yuan; Chang, Yia-Chung
2014-01-01
A mixed basis approach based on density functional theory is employed for low dimensional systems. The basis functions are taken to be plane waves for the periodic direction multiplied by B-spline polynomials in the non-periodic direction. B-splines have the following advantages: (1) the associated matrix elements are sparse, (2) B-splines possess a superior treatment of derivatives, (3) B-splines are not associated with atomic positions when the geometry structure is optimized, making the geometry optimization easy to implement. With this mixed basis set we can directly calculate the total energy of the system instead of using the conventional supercell model with a slab sandwiched between vacuum regions. A generalized Lanczos-Krylov iterative method is implemented for the diagonalization of the Hamiltonian matrix. To demonstrate the present approach, we apply it to study the C(001)-(2x1) surface with the norm-conserving pseudopotential, the n-type delta-doped graphene, and graphene nanoribbon with Vanderbilt...
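Two of the listed advantages, sparsity from compact support and exact, cheap derivatives, can be seen directly on a single cubic B-spline basis function (illustrative only; scipy's `BSpline` is used here, not the paper's basis machinery):

```python
import numpy as np
from scipy.interpolate import BSpline

# Cubic B-spline basis function on knots [0, 1, 2, 3, 4]; it is nonzero only
# on (0, 4), which is why B-spline overlap/Hamiltonian matrices are banded.
b = BSpline.basis_element([0, 1, 2, 3, 4], extrapolate=False)
values = b([1.0, 2.0, 3.0])   # the classic 1/6, 2/3, 1/6 values at the knots

# The derivative of a B-spline is again a (lower-degree) spline, so matrix
# elements involving derivatives are evaluated exactly rather than numerically.
db = b.derivative()
slope_at_center = float(db(2.0))   # vanishes by symmetry
```

Compact support means each basis function overlaps only a handful of neighbours, so the resulting generalized eigenvalue problem stays sparse regardless of system size.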
Fatigue crack growth monitoring of idealized gearbox spline component using acoustic emission
Zhang, Lu; Ozevin, Didem; Hardman, William; Kessler, Seth; Timmons, Alan
2016-04-01
The spline component of gearbox structure is a non-redundant element that requires early detection of flaws for preventing catastrophic failures. The acoustic emission (AE) method is a direct way of detecting active flaws; however, the method suffers from the influence of background noise and from location- and sensor-dependent pattern recognition. It is important to identify the source mechanism and adapt it to different test conditions and sensors. In this paper, the fatigue crack growth of a notched and flattened gearbox spline component is monitored using the AE method in a laboratory environment. The test sample has the major details of the spline component on a flattened geometry. The AE data is continuously collected together with strain gauges strategically positioned on the structure. The fatigue test characteristics are 4 Hz frequency and 0.1 as the ratio of minimum to maximum loading in the tensile regime. It is observed that a significant amount of continuous emissions is released from the notch tip due to the formation of plastic deformation and slow crack growth. The frequency spectra of continuous emissions and burst emissions are compared to understand the difference between sudden crack growth and gradual crack growth. The predicted crack growth rate is compared with the AE data using the cumulative AE events at the notch tip. The source mechanism of sudden crack growth is obtained by solving the inverse mathematical problem from output signal to input signal.
Bayesian seismic AVO inversion
Energy Technology Data Exchange (ETDEWEB)
Buland, Arild
2002-07-01
A new linearized AVO inversion technique is developed in a Bayesian framework. The objective is to obtain posterior distributions for P-wave velocity, S-wave velocity and density. Distributions for other elastic parameters can also be assessed, for example acoustic impedance, shear impedance and P-wave to S-wave velocity ratio. The inversion algorithm is based on the convolutional model and a linearized weak contrast approximation of the Zoeppritz equation. The solution is represented by a Gaussian posterior distribution with explicit expressions for the posterior expectation and covariance, hence exact prediction intervals for the inverted parameters can be computed under the specified model. The explicit analytical form of the posterior distribution provides a computationally fast inversion method. Tests on synthetic data show that all inverted parameters were almost perfectly retrieved when the noise approached zero. With realistic noise levels, acoustic impedance was the best determined parameter, while the inversion provided practically no information about the density. The inversion algorithm has also been tested on a real 3-D dataset from the Sleipner Field. The results show good agreement with well logs but the uncertainty is high. The stochastic model includes uncertainties of both the elastic parameters, the wavelet and the seismic and well log data. The posterior distribution is explored by Markov chain Monte Carlo simulation using the Gibbs sampler algorithm. The inversion algorithm has been tested on a seismic line from the Heidrun Field with two wells located on the line. The uncertainty of the estimated wavelet is low. In the Heidrun examples the effect of including uncertainty of the wavelet and the noise level was marginal with respect to the AVO inversion results. We have developed a 3-D linearized AVO inversion method with spatially coupled model parameters where the objective is to obtain posterior distributions for P-wave velocity, S-wave velocity and density.
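The closed-form Gaussian posterior underlying such a linearized inversion can be sketched for a generic linear forward model d = Gm + e; the toy G, prior, and noise levels below are assumptions, while in the paper G encodes the convolutional weak-contrast AVO forward model:

```python
import numpy as np

def gaussian_posterior(G, d, mu_m, C_m, C_e):
    """Posterior mean and covariance for d = G m + e with Gaussian prior
    m ~ N(mu_m, C_m) and Gaussian noise e ~ N(0, C_e).
    Illustrative linear-Gaussian update, explicit as in the paper's solution."""
    C_e_inv = np.linalg.inv(C_e)
    C_m_inv = np.linalg.inv(C_m)
    C_post = np.linalg.inv(G.T @ C_e_inv @ G + C_m_inv)
    m_post = C_post @ (G.T @ C_e_inv @ d + C_m_inv @ mu_m)
    return m_post, C_post

# Toy example: 3 observations of 2 parameters, small noise, broad prior
G = np.array([[1.0, 0.5], [0.2, 1.0], [1.0, 1.0]])
d = G @ np.array([2.0, -1.0]) + np.array([0.05, -0.02, 0.01])
m_post, C_post = gaussian_posterior(G, d, np.zeros(2), 10.0 * np.eye(2), 0.01 * np.eye(3))
```

Because both expectation and covariance come out in closed form, prediction intervals follow directly from the diagonal of `C_post`, with no sampling required.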
Exact quantum Bayesian rule for qubit measurements in circuit QED
Feng, Wei; Liang, Pengfei; Qin, Lupei; Li, Xin-Qi
2016-02-01
Developing an efficient framework for quantum measurements is of essential importance to quantum science and technology. In this work, for the important superconducting circuit-QED setup, we present a rigorous and analytic solution for the effective quantum trajectory equation (QTE) after polaron transformation, converted to the form of Stratonovich calculus. We find that the solution is a generalization of the elegant quantum Bayesian approach developed in arXiv:1111.4016 by Korotkov and currently applied to circuit-QED measurements. The new result improves both the diagonal and off-diagonal elements of the qubit density matrix, via amending the distribution probabilities of the output currents and several important phase factors. Compared to numerical integration of the QTE, the resultant quantum Bayesian rule promises higher efficiency to update the measured state, and allows more efficient and analytical studies for some interesting problems such as quantum weak values, past quantum state, and quantum state smoothing. The method of this work also opens a new way to obtain quantum Bayesian formulas for other systems and in more complicated cases.
Smooth muscle strips for intestinal tissue engineering.
Directory of Open Access Journals (Sweden)
Christopher M Walthers
Functionally contracting smooth muscle is an essential part of the engineered intestine that has not been replicated in vitro. The purpose of this study is to produce contracting smooth muscle in culture by maintaining the native smooth muscle organization. We employed intact smooth muscle strips and compared them to dissociated smooth muscle cells in culture for 14 days. Cells isolated by enzymatic digestion quickly lost maturity markers for smooth muscle cells and contained few enteric neural and glial cells. Cultured smooth muscle strips exhibited periodic contraction and maintained neural and glial markers. Smooth muscle strips cultured for 14 days also exhibited regular fluctuation of intracellular calcium, whereas cultured smooth muscle cells did not. After implantation in omentum for 14 days on polycaprolactone scaffolds, smooth muscle strip constructs expressed high levels of smooth muscle maturity markers as well as enteric neural and glial cells. Intact smooth muscle strips may be a useful component for engineered intestinal smooth muscle.
Probability biases as Bayesian inference
Directory of Open Access Journals (Sweden)
Andre; C. R. Martins
2006-11-01
In this article, I will show how several observed biases in human probabilistic reasoning can be partially explained as good heuristics for making inferences in an environment where probabilities have uncertainties associated to them. Previous results show that the weight functions and the observed violations of coalescing and stochastic dominance can be understood from a Bayesian point of view. We will review those results and see that Bayesian methods should also be used as part of the explanation behind other known biases. That means that, although the observed errors are still errors, they can be understood as adaptations to the solution of real-life problems. Heuristics that allow fast evaluations and mimic a Bayesian inference would be an evolutionary advantage, since they would give us an efficient way of making decisions. In that sense, it should be no surprise that humans reason with probability as has been observed.
Bayesian Methods and Universal Darwinism
Campbell, John
2010-01-01
Bayesian methods since the time of Laplace have been understood by their practitioners as closely aligned to the scientific method. Indeed a recent champion of Bayesian methods, E. T. Jaynes, titled his textbook on the subject Probability Theory: the Logic of Science. Many philosophers of science including Karl Popper and Donald Campbell have interpreted the evolution of Science as a Darwinian process consisting of a 'copy with selective retention' algorithm abstracted from Darwin's theory of Natural Selection. Arguments are presented for an isomorphism between Bayesian Methods and Darwinian processes. Universal Darwinism, as the term has been developed by Richard Dawkins, Daniel Dennett and Susan Blackmore, is the collection of scientific theories which explain the creation and evolution of their subject matter as due to the operation of Darwinian processes. These subject matters span the fields of atomic physics, chemistry, biology and the social sciences. The principle of Maximum Entropy states that system...
Bayesian methods for proteomic biomarker development
Directory of Open Access Journals (Sweden)
Belinda Hernández
2015-12-01
In this review we provide an introduction to Bayesian inference and demonstrate some of the advantages of using a Bayesian framework. We summarize how Bayesian methods have been used previously in proteomics and other areas of bioinformatics. Finally, we describe some popular and emerging Bayesian models from the statistical literature and provide a worked tutorial including code snippets to show how these methods may be applied for the evaluation of proteomic biomarkers.
Bayesian test and Kuhn's paradigm
Institute of Scientific and Technical Information of China (English)
Chen Xiaoping
2006-01-01
Kuhn's theory of paradigm reveals a pattern of scientific progress, in which normal science alternates with scientific revolution. But Kuhn greatly underrated the role of scientific testing in his pattern, because he focused all his attention on the hypothetico-deductive schema instead of the Bayesian schema. This paper employs the Bayesian schema to re-examine Kuhn's theory of paradigm, to uncover its logical and rational components, and to illustrate the tensional structure of logic and belief, rationality and irrationality, in the process of scientific revolution.
3D Bayesian contextual classifiers
DEFF Research Database (Denmark)
Larsen, Rasmus
2000-01-01
We extend a series of multivariate Bayesian 2-D contextual classifiers to 3-D by specifying a simultaneous Gaussian distribution for the feature vectors as well as a prior distribution of the class variables of a pixel and its 6 nearest 3-D neighbours.
Bayesian Model Averaging for Propensity Score Analysis
Kaplan, David; Chen, Jianshen
2013-01-01
The purpose of this study is to explore Bayesian model averaging in the propensity score context. Previous research on Bayesian propensity score analysis does not take into account model uncertainty. In this regard, an internally consistent Bayesian framework for model building and estimation must also account for model uncertainty. The…
Bayesian networks and food security - An introduction
Stein, A.
2004-01-01
This paper gives an introduction to Bayesian networks. Networks are defined and put into a Bayesian context. Directed acyclical graphs play a crucial role here. Two simple examples from food security are addressed. Possible uses of Bayesian networks for implementation and further use in decision support...
Bayesian variable order Markov models: Towards Bayesian predictive state representations
C. Dimitrakakis
2009-01-01
We present a Bayesian variable order Markov model that shares many similarities with predictive state representations. The resulting models are compact and much easier to specify and learn than classical predictive state representations. Moreover, we show that they significantly outperform a more st
Bayesian estimation of dynamic matching function for U-V analysis in Japan
Kyo, Koki; Noda, Hideo; Kitagawa, Genshiro
2012-05-01
In this paper we propose a Bayesian method for analyzing unemployment dynamics. We derive a Beveridge curve for unemployment and vacancy (U-V) analysis from a Bayesian model based on a labor market matching function. In our framework, the efficiency of matching and the elasticities of new hiring with respect to unemployment and vacancy are regarded as time-varying parameters. To construct a flexible model and obtain reasonable estimates in an underdetermined estimation problem, we treat the time-varying parameters as random variables and introduce smoothness priors. The model is then described in a state space representation, enabling the parameter estimation to be carried out using the Kalman filter and fixed-interval smoothing. In such a representation, dynamic features of the cyclic unemployment rate and the structural-frictional unemployment rate can be accurately captured.
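The filtering and fixed-interval smoothing machinery can be sketched for a scalar random-walk state, the simplest smoothness prior; this is an illustration only, as the paper's state space model for the matching efficiency and elasticities is multivariate:

```python
import numpy as np

def kalman_rts(y, q, r, x0=0.0, p0=10.0):
    """Scalar random-walk state with fixed-interval (RTS) smoothing.
    Model: x_t = x_{t-1} + w_t, Var(w_t) = q  (smoothness prior)
           y_t = x_t + v_t,     Var(v_t) = r  (observation noise)."""
    n = len(y)
    xf = np.zeros(n); pf = np.zeros(n)   # filtered mean / variance
    xp = np.zeros(n); pp = np.zeros(n)   # one-step predicted mean / variance
    x, p = x0, p0
    for t in range(n):
        xp[t], pp[t] = x, p + q                  # predict
        k = pp[t] / (pp[t] + r)                  # Kalman gain
        x = xp[t] + k * (y[t] - xp[t])           # measurement update
        p = (1.0 - k) * pp[t]
        xf[t], pf[t] = x, p
    xs = xf.copy()                               # backward (smoothing) pass
    for t in range(n - 2, -1, -1):
        a = pf[t] / pp[t + 1]
        xs[t] = xf[t] + a * (xs[t + 1] - xp[t + 1])
    return xs
```

The variance ratio q/r plays the role of the smoothness-prior hyperparameter: small q forces a slowly varying parameter path, large q lets the parameter track the data closely.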
Smooth quantum gravity: Exotic smoothness and Quantum gravity
Asselmeyer-Maluga, Torsten
2016-01-01
Over the last two decades, many unexpected relations between exotic smoothness, e.g. exotic $\\mathbb{R}^{4}$, and quantum field theory were found. Some of these relations are rooted in a relation to superstring theory and quantum gravity. Therefore one would expect that exotic smoothness is directly related to the quantization of general relativity. In this article we will support this conjecture and develop a new approach to quantum gravity called \\emph{smooth quantum gravity} by using smooth 4-manifolds with an exotic smoothness structure. In particular we discuss the appearance of a wildly embedded 3-manifold which we identify with a quantum state. Furthermore, we analyze this quantum state by using foliation theory and relate it to an element in an operator algebra. Then we describe a set of geometric, non-commutative operators, the skein algebra, which can be used to determine the geometry of a 3-manifold. This operator algebra can be understood as a deformation quantization of the classical Poisson alge...
Optimization of Bayesian Emission tomographic reconstruction for region of interest quantitation
Energy Technology Data Exchange (ETDEWEB)
Qi, Jinyi
2003-01-10
Region of interest (ROI) quantitation is an important task in emission tomography (e.g., positron emission tomography and single photon emission computed tomography). It is essential for exploring clinical factors such as tumor activity, growth rate, and the efficacy of therapeutic interventions. Bayesian methods based on the maximum a posteriori principle (also called penalized maximum likelihood methods) have been developed for emission image reconstructions to deal with the low signal-to-noise ratio of the emission data. Similar to the filter cut-off frequency in the filtered backprojection method, the smoothing parameter of the image prior in Bayesian reconstruction controls the resolution and noise trade-off and hence affects ROI quantitation. In this paper we present an approach for choosing the optimum smoothing parameter in Bayesian reconstruction for ROI quantitation. Bayesian reconstructions are difficult to analyze because the resolution and noise properties are nonlinear and object-dependent. Building on the recent progress on deriving the approximate expressions for the local impulse response function and the covariance matrix, we derived simplified theoretical expressions for the bias, the variance, and the ensemble mean squared error (EMSE) of the ROI quantitation. One problem in evaluating ROI quantitation is that the truth is often required for calculating the bias. This is overcome by using the ensemble distribution of the activity inside the ROI and computing the average EMSE. The resulting expressions allow fast evaluation of the image quality for different smoothing parameters. The optimum smoothing parameter of the image prior can then be selected to minimize the EMSE.
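The parameter-selection idea can be caricatured with a toy penalized linear reconstruction: scan the smoothing parameter and keep the value minimizing the ensemble MSE of an ROI mean. The system matrix, activity ensemble, and quadratic penalty below are all hypothetical stand-ins for the paper's emission model and its theoretical EMSE expressions:

```python
import numpy as np

rng = np.random.default_rng(1)
G = rng.uniform(0.5, 1.5, size=(40, 20))        # toy system matrix
truth = rng.uniform(1.0, 2.0, size=(100, 20))   # ensemble of true activities
noise = 0.5 * rng.standard_normal((100, 40))
data = truth @ G.T + noise                      # simulated noisy projections
roi = slice(5, 10)                              # region of interest

def emse(beta):
    """Ensemble MSE of the ROI mean for a penalized (MAP-like) reconstruction
    (G'G + beta I)^-1 G' y; beta is the smoothing parameter being scanned."""
    H = np.linalg.inv(G.T @ G + beta * np.eye(20)) @ G.T
    est = data @ H.T
    err = est[:, roi].mean(axis=1) - truth[:, roi].mean(axis=1)
    return float(np.mean(err ** 2))

betas = [0.01, 0.1, 1.0, 10.0]
best = min(betas, key=emse)   # EMSE-optimal smoothing parameter on this grid
```

Small beta gives low bias but high variance, large beta the reverse; minimizing the ensemble criterion picks the trade-off point, which is exactly the role the paper's closed-form EMSE expressions play without requiring simulation.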
Determinants of Low Birth Weight in Malawi: Bayesian Geo-Additive Modelling
Ngwira, Alfred; Stanley, Christopher C.
2015-01-01
Studies on factors of low birth weight in Malawi have neglected the flexible approach of using smooth functions for some covariates in models. Such flexible approach reveals detailed relationship of covariates with the response. The study aimed at investigating risk factors of low birth weight in Malawi by assuming a flexible approach for continuous covariates and geographical random effect. A Bayesian geo-additive model for birth weight in kilograms and size of the child at birth (less than ...
Smooth Wilson loops in N=4 non-chiral superspace
Beisert, Niklas; Müller, Dennis; Plefka, Jan; Vergu, Cristian
2015-12-01
We consider a supersymmetric Wilson loop operator for 4d N = 4 super Yang-Mills theory which is the natural object dual to the AdS 5 × S 5 superstring in the AdS/CFT correspondence. It generalizes the traditional bosonic 1 /2 BPS Maldacena-Wilson loop operator and completes recent constructions in the literature to smooth (non-light-like) loops in the full N=4 non-chiral superspace. This Wilson loop operator enjoys global super-conformal and local kappa-symmetry of which a detailed discussion is given. Moreover, the finiteness of its vacuum expectation value is proven at leading order in perturbation theory. We determine the leading vacuum expectation value for general paths both at the component field level up to quartic order in anti-commuting coordinates and in the full non-chiral superspace in suitable gauges. Finally, we discuss loops built from quadric splines joined in such a way that the path derivatives are continuous at the intersection.
Smooth Wilson Loops in N=4 Non-Chiral Superspace
Beisert, Niklas; Plefka, Jan; Vergu, Cristian
2015-01-01
We consider a supersymmetric Wilson loop operator for 4d N=4 super Yang-Mills theory which is the natural object dual to the AdS_5 x S^5 superstring in the AdS/CFT correspondence. It generalizes the traditional bosonic 1/2 BPS Maldacena-Wilson loop operator and completes recent constructions in the literature to smooth (non-light-like) loops in the full N=4 non-chiral superspace. This Wilson loop operator enjoys global superconformal and local kappa-symmetry of which a detailed discussion is given. Moreover, the finiteness of its vacuum expectation value is proven at leading order in perturbation theory. We determine the leading vacuum expectation value for general paths both at the component field level up to quartic order in anti-commuting coordinates and in the full non-chiral superspace in suitable gauges. Finally, we discuss loops built from quadric splines joined in such a way that the path derivatives are continuous at the intersection.
Bayesian Analysis of Experimental Data
Directory of Open Access Journals (Sweden)
Lalmohan Bhar
2013-10-01
Analysis of experimental data from a Bayesian point of view has been considered. Appropriate methodology has been developed for application to designed experiments. A Normal-Gamma distribution has been used as the prior distribution. The developed methodology has been applied to real experimental data taken from long-term fertilizer experiments.
Bayesian image restoration, using configurations
DEFF Research Database (Denmark)
Thorarinsdottir, Thordis Linda
2006-01-01
configurations are expressed in terms of the mean normal measure of the random set. These probabilities are used as prior probabilities in a Bayesian image restoration approach. Estimation of the remaining parameters in the model is outlined for the salt and pepper noise. The inference in the model is discussed...
ANALYSIS OF BAYESIAN CLASSIFIER ACCURACY
Directory of Open Access Journals (Sweden)
Felipe Schneider Costa
2013-01-01
The naïve Bayes classifier is considered one of the most effective classification algorithms today, competing with more modern and sophisticated classifiers. Despite being based on the unrealistic (naïve) assumption that all variables are independent given the output class, the classifier provides proper results. However, depending on the scenario (network structure, number of samples or training cases, number of variables), the network may not provide appropriate results. This study uses a variable selection process, based on the chi-squared test, to verify the existence of dependence between variables in the data model in order to identify the reasons that prevent a Bayesian network from providing good performance. A detailed analysis of the data is also proposed, unlike in other existing work, as well as adjustments in the case of limit values between two adjacent classes. Furthermore, variable weights, calculated with a mutual information function, are used in the computation of the a posteriori probabilities. Tests were applied to both a naïve Bayesian network and a hierarchical Bayesian network. After testing, a significant reduction in the error rate was observed. The naïve Bayesian network showed a drop in the error rate from twenty-five percent to five percent relative to the initial results of the classification process. In the hierarchical network, the fifteen percent error rate not only dropped but reached zero.
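The core naïve Bayes computation described above can be sketched in a few lines. This is a minimal Bernoulli naïve Bayes with Laplace smoothing on invented toy data; the study's chi-squared variable selection and mutual-information weighting are not reproduced here:

```python
import numpy as np

# Toy binary dataset: rows are samples, columns are binary features.
# Data and labels are invented purely for illustration.
X = np.array([[1, 1, 0],   # class 1
              [1, 0, 0],   # class 1
              [0, 1, 1],   # class 0
              [0, 0, 1]])  # class 0
y = np.array([1, 1, 0, 0])

def fit_predict(X, y, x_new):
    """Bernoulli naive Bayes: pick the class maximizing log prior + log likelihood."""
    classes = np.unique(y)
    log_posts = []
    for c in classes:
        Xc = X[y == c]
        # Laplace-smoothed per-feature Bernoulli parameters P(x_j = 1 | c)
        theta = (Xc.sum(axis=0) + 1.0) / (len(Xc) + 2.0)
        log_prior = np.log(len(Xc) / len(y))
        # Independence assumption: likelihood factorizes over features.
        log_lik = np.sum(x_new * np.log(theta) + (1 - x_new) * np.log(1 - theta))
        log_posts.append(log_prior + log_lik)
    return classes[int(np.argmax(log_posts))]

pred = fit_predict(X, y, np.array([1, 1, 0]))
```

Working in log space avoids numerical underflow when the number of features grows, which matters in the larger networks the study analyzes.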
Bayesian Agglomerative Clustering with Coalescents
Teh, Yee Whye; Daumé III, Hal; Roy, Daniel
2009-01-01
We introduce a new Bayesian model for hierarchical clustering based on a prior over trees called Kingman's coalescent. We develop novel greedy and sequential Monte Carlo inferences which operate in a bottom-up agglomerative fashion. We show experimentally the superiority of our algorithms over others, and demonstrate our approach in document clustering and phylolinguistics.
Bayesian Networks and Influence Diagrams
DEFF Research Database (Denmark)
Kjærulff, Uffe Bro; Madsen, Anders Læsø
Bayesian Networks and Influence Diagrams: A Guide to Construction and Analysis, Second Edition, provides a comprehensive guide for practitioners who wish to understand, construct, and analyze intelligent systems for decision support based on probabilistic networks. This new edition contains six new...
Topics in Bayesian statistics and maximum entropy
International Nuclear Information System (INIS)
Notions of Bayesian decision theory and maximum entropy methods are reviewed with particular emphasis on probabilistic inference and Bayesian modeling. The axiomatic approach is considered as the best justification of Bayesian analysis and maximum entropy principle applied in natural sciences. Particular emphasis is put on solving the inverse problem in digital image restoration and Bayesian modeling of neural networks. Further topics addressed briefly include language modeling, neutron scattering, multiuser detection and channel equalization in digital communications, genetic information, and Bayesian court decision-making. (author)
Intensity Conserving Spline Interpolation (ICSI): A New Tool for Spectroscopic Analysis
Klimchuk, James A; Tripathi, Durgesh
2015-01-01
The detailed shapes of spectral line profiles provide valuable information about the emitting plasma, especially when the plasma contains an unresolved mixture of velocities, temperatures, and densities. As a result of finite spectral resolution, the intensity measured by a spectrometer is the average intensity across a wavelength bin of non-zero size. It is assigned to the wavelength position at the center of the bin. However, the actual intensity at that discrete position will be different if the profile is curved, as it invariably is. Standard fitting routines (spline, Gaussian, etc.) do not account for this difference, and this can result in significant errors when making sensitive measurements. Detection of asymmetries in solar coronal emission lines is one example. Removal of line blends is another. We have developed an iterative procedure called Intensity Conserving Spline Interpolation (ICSI) that corrects for this effect. As its name implies, it conserves the observed intensity within each wavelength...
Energy Technology Data Exchange (ETDEWEB)
Gu, Renliang, E-mail: Venliang@iastate.edu, E-mail: ald@iastate.edu; Dogandžić, Aleksandar, E-mail: Venliang@iastate.edu, E-mail: ald@iastate.edu [Iowa State University, Center for Nondestructive Evaluation, 1915 Scholl Road, Ames, IA 50011 (United States)
2015-03-31
We develop a sparse image reconstruction method for polychromatic computed tomography (CT) measurements under the blind scenario where the material of the inspected object and the incident energy spectrum are unknown. To obtain a parsimonious measurement model parameterization, we first rewrite the measurement equation using our mass-attenuation parameterization, which has the Laplace integral form. The unknown mass-attenuation spectrum is expanded into basis functions using a B-spline basis of order one. We develop a block coordinate-descent algorithm for constrained minimization of a penalized negative log-likelihood function, where constraints and penalty terms ensure nonnegativity of the spline coefficients and sparsity of the density map image in the wavelet domain. This algorithm alternates between a Nesterov’s proximal-gradient step for estimating the density map image and an active-set step for estimating the incident spectrum parameters. Numerical simulations demonstrate the performance of the proposed scheme.
Directory of Open Access Journals (Sweden)
Wang Ping
2016-01-01
A new cold rotary forging technology for the internal helical involute spline was presented based on an analysis of the structure of the automotive starter guide cylinder. A 3D rigid-plastic finite element model was employed. Billet deformation, billet equivalent stress and forming load were investigated in the DEFORM 3D software environment; the forming process parameters were then applied in forming trials, and the simulation results conform with the experimental results. The validity of the 3D finite element simulation model has been verified. The research results show that the proposed cold rotary forging technology can efficiently handle the forming and manufacturing problems of the automobile starter guide cylinder with an internal helical involute spline.
Convergence of a Fourier-spline representation for the full-turn map generator
International Nuclear Information System (INIS)
Single-turn data from a symplectic tracking code can be used to construct a canonical generator for a full-turn symplectic map. This construction has been carried out numerically in canonical polar coordinates, the generator being obtained as a Fourier series in angle coordinates with coefficients that are spline functions of action coordinates. Here the authors provide a mathematical basis for the procedure, finding sufficient conditions for the existence of the generator and convergence of the Fourier-spline expansion. The analysis gives insight concerning analytic properties of the generator, showing that in general there are branch points as a function of angle and inverse square root singularities at the origin as a function of action
Ship hull plate processing surface fairing with constraints based on B-spline
Institute of Scientific and Technical Information of China (English)
Anonymous
2005-01-01
The problem of ship hull plate processing surface fairing with constraints based on B-splines is solved in this paper. The algorithm for B-spline curve fairing with constraints is one of the most common methods in plane curve fairing. The algorithm can be applied to both global and local curve fairing. It can constrain the perturbation range of the control points and the shape variation of the curve, and achieve a better fairing result for plane curves. In this paper, a new fairing algorithm with constraints for curves and surfaces in space is presented. This method is then applied to experiments on ship hull plate processing surfaces. Finally, numerical results are given to show the efficiency of the method.
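The machinery underlying all such B-spline fairing methods is basis-function evaluation via the Cox-de Boor recursion. The sketch below is a generic illustration of that machinery (the knot vector and parameter value are arbitrary examples, not the paper's constrained fairing algorithm):

```python
import numpy as np

def bspline_basis(i, k, t, knots):
    """Value of the i-th B-spline basis function of degree k at parameter t,
    computed by the Cox-de Boor recursion."""
    if k == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + k] > knots[i]:
        left = ((t - knots[i]) / (knots[i + k] - knots[i])
                * bspline_basis(i, k - 1, t, knots))
    if knots[i + k + 1] > knots[i + 1]:
        right = ((knots[i + k + 1] - t) / (knots[i + k + 1] - knots[i + 1])
                 * bspline_basis(i + 1, k - 1, t, knots))
    return left + right

# Cubic basis on a clamped knot vector. Inside the domain the basis functions
# are nonnegative and form a partition of unity, which is what keeps
# control-point perturbations local during fairing.
degree = 3
knots = [0, 0, 0, 0, 1, 2, 3, 4, 4, 4, 4]
n_basis = len(knots) - degree - 1  # number of basis functions
t = 1.7
values = [bspline_basis(i, degree, t, knots) for i in range(n_basis)]
total = sum(values)
```

Because each cubic basis function is nonzero over at most four knot spans, moving one control point only deforms the curve locally, which is exactly the property the constrained fairing algorithm exploits.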
Jarosch, H. S.
1982-01-01
A method based on the use of constrained spline fits is used to overcome the difficulties arising when body-wave data in the form of T-delta are reduced to the tau-p form in the presence of cusps. In comparison with unconstrained spline fits, the method proposed here tends to produce much smoother models which lie approximately in the middle of the bounds produced by the extremal method. The method is noniterative and, therefore, computationally efficient. The method is applied to the lunar seismic data, where at least one triplication is presumed to occur in the P-wave travel-time curve. It is shown, however, that because of an insufficient number of data points for events close to the antipode of the center of the lunar network, the present analysis is not accurate enough to resolve the problem of a possible lunar core.
Investigation of confined hydrogen atom in spherical cavity, using B-splines basis set
Directory of Open Access Journals (Sweden)
M Barezi
2011-03-01
Studying confined quantum systems (CQS) is very important in nanotechnology. One of the basic CQS is a hydrogen atom confined in a spherical cavity. In this article, the eigenenergies and eigenfunctions of a hydrogen atom in a spherical cavity are calculated using the linear variational method. B-splines are used as basis functions, from which trial wave functions with appropriate boundary conditions can easily be constructed. The main characteristics of B-splines are their high localization and flexibility. Besides, these functions are numerically stable and can handle a high volume of calculation with good accuracy. The energy levels as a function of cavity radius are analyzed. To check the validity and efficiency of the proposed method, extensive convergence tests of the eigenenergies for different cavity sizes have been carried out.
A B-Spline-Based Collocation Method to Approximate the Solutions to the Equations of Fluid Dynamics
Energy Technology Data Exchange (ETDEWEB)
Johnson, Richard Wayne; Landon, Mark Dee
1999-07-01
The potential of a B-spline collocation method for numerically solving the equations of fluid dynamics is discussed. It is known that B-splines can resolve curves with drastically fewer data than can their standard shape function counterparts. This feature promises to allow much faster numerical simulations of fluid flow than standard finite volume/finite element methods without sacrificing accuracy. An example channel flow problem is solved using the method.
Shazalina Mat Zin; Ahmad Abd. Majid; Ahmad Izani Md. Ismail; Muhammad Abbas
2014-01-01
The generalized nonlinear Klein-Gordon equation is important in quantum mechanics and related fields. In this paper, a semi-implicit approach based on hybrid cubic B-splines is presented for the approximate solution of the nonlinear Klein-Gordon equation. The usual finite difference approach is used to discretize the time derivative, while the hybrid cubic B-spline is applied as an interpolating function in the space dimension. The results of applications to several test problems indicate good agre...
Regression spline bivariate probit models: a practical approach to testing for exogeneity
Marra, G.; Radice, Rosalba; Filippou, P
2015-01-01
Bivariate probit models can deal with a problem usually known as endogeneity. This issue is likely to arise in observational studies when confounders are unobserved. We are concerned with testing the hypothesis of exogeneity (or absence of endogeneity) when using regression spline recursive and sample selection bivariate probit models. Likelihood ratio and gradient tests are discussed in this context, and their empirical properties are investigated and compared with those of the Lagrange multiplie...
The application of cubic B-spline collocation method in impact force identification
Qiao, Baijie; Chen, Xuefeng; Xue, Xiaofeng; Luo, Xinjie; Liu, Ruonan
2015-12-01
The accurate real-time characterization of an impact event is vital during the lifetime of a mechanical product. However, the identified impact force may seriously diverge from the real one due to unknown noise contaminating the measured data, as well as the ill-conditioned system matrix. In this paper, a regularized cubic B-spline collocation (CBSC) method is developed for identifying the impact force time history, which overcomes the deficiency of the ill-posed problem. By controlling the mesh size of the collocation points, the cubic B-spline function can match the profile of a typical impact event. The unknown impact force is approximated by a set of translated cubic B-spline functions, and the original governing equation of force identification is reduced to finding the coefficient of the basis function at each collocation point. Moreover, a modified regularization parameter selection criterion, derived from the generalized cross validation (GCV) criterion for the truncated singular value decomposition (TSVD), is introduced for the CBSC method to determine the optimal number of cubic B-spline functions, which acts as the regularization parameter. In a numerical simulation of a two degrees-of-freedom (DOF) system, the regularized CBSC method is validated under different noise levels and frequency bands of exciting forces. Finally, an impact experiment is performed on a clamped-free shell structure to confirm the performance of the regularized CBSC method. Experimental results demonstrate that the peak relative errors of impact forces identified by the regularized CBSC method are below 8%, while those based on the TSVD method are approximately 30%.
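Why regularization is needed for such identification problems can be shown with a generic Tikhonov sketch. This is a simplified stand-in for the paper's method (the paper expands the force in cubic B-splines and selects the regularization level with a GCV-type criterion; here the system matrix, noise level and regularization weight are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
t = np.linspace(0.0, 1.0, n)

# Convolution-type (smoothing) system matrix: such matrices are badly
# conditioned, which is what makes naive inversion of force-identification
# problems blow up the noise.
A = np.exp(-((t[:, None] - t[None, :]) ** 2) / 0.005)
x_true = np.exp(-((t - 0.5) ** 2) / 0.01)        # impact-like pulse
b = A @ x_true + 1e-3 * rng.standard_normal(n)    # noisy measurement

# Tikhonov-regularized normal equations: (A^T A + lam I) x = A^T b.
lam = 1e-3
lhs = A.T @ A + lam * np.eye(n)
x_reg = np.linalg.solve(lhs, A.T @ b)
residual = np.linalg.norm(lhs @ x_reg - A.T @ b)
```

The regularization term `lam * np.eye(n)` bounds the inverse's amplification of noise at the cost of a small bias, the same trade-off the paper's GCV-based criterion tunes automatically.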
An efficient active B-spline/nurbs model for virtual sculpting
Moore, Patricia
2013-01-01
This thesis presents an Efficient Active B-Spline/Nurbs Model for Virtual Sculpting. In spite of the ongoing rapid development of computer graphics and computer-aided design tools, 3D graphics designers still rely on non-intuitive modelling procedures for the creation and manipulation of freeform virtual content. The 'Virtual Sculpting' paradigm is a well-established mechanism for shielding designers from the complex mathematics that underpin freeform shape design. The premise is to emulate ...
A Hybrid Spline Metamodel for Photovoltaic/Wind/Battery Energy Systems
Zaibi, Malek; Layadi, Toufik Madani; Champenois, Gérard; Roboam, Xavier; Sareni, Bruno; Belhadj, Jamel
2015-01-01
This paper proposes a metamodel design for a Photovoltaic/Wind/Battery Energy System. The modeling of a hybrid PV/wind generator coupled with two kinds of storage, i.e. electric (battery) and hydraulic (tank) devices, is investigated. A metamodel is built by hybrid spline interpolation to capture the relationships between several design variables, i.e. the design parameters of the different subsystems, and their associated response variables, i.e. system performance indicators. The developed model...
Grajeda, LM; Ivanescu, A; Saito, M; Crainiceanu, C; Jaganath, D; Gilman, RH; Crabtree, JE; Kelleher, D; Cabrera, L.; Cama, V; Checkley, W
2016-01-01
Background Childhood growth is a cornerstone of pediatric research. Statistical models need to consider individual trajectories to adequately describe growth outcomes. Specifically, well-defined longitudinal models are essential to characterize both population and subject-specific growth. Linear mixed-effect models with cubic regression splines can account for the nonlinearity of growth curves and provide reasonable estimators of population and subject-specific growth, velocity and accelerati...
Electron scattering from krypton: High-resolution experiments and B-spline R-matrix calculations
International Nuclear Information System (INIS)
In a joint experimental and theoretical effort, we carried out a detailed study of e-Kr collisions. For elastic scattering and excitation of the 4p55s states, we present total and angle-differential cross sections over the entire angular range (0°-180°) for a number of energies, as well as energy scans for selected angles. The experimental results are in very satisfactory agreement with predictions from fully relativistic Dirac B-spline R-matrix models.
The Numerical Approach to the Fisher's Equation via Trigonometric Cubic B-spline Collocation Method
Ersoy, Ozlem; Dag, Idris
2016-01-01
In this study, we set up a numerical technique to obtain approximate solutions of Fisher's equation, which is one of the most important model equations in population biology. We integrate the equation fully by using a combination of trigonometric cubic B-spline functions for the space variable and Crank-Nicolson for the time integration. Numerical results have been presented to show the accuracy of the current algorithm. We have seen that the proposed technique is a good alternative to some existing...
A Spline-Based Lack-Of-Fit Test for Independent Variable Effect in Poisson Regression
Li, Chin-Shang; Tu, Wanzhu
2007-01-01
In regression analysis of count data, independent variables are often modeled by their linear effects under the assumption of log-linearity. In reality, the validity of such an assumption is rarely tested, and its use is at times unjustifiable. A lack-of-fit test is proposed for the adequacy of a postulated functional form of an independent variable within the framework of semiparametric Poisson regression models based on penalized splines. It offers added flexibility in accommodating the pot...
Fast Visualization by Shear-Warp using Spline Models for Data Reconstruction
Schlosser, Gregor
2009-01-01
This work concerns itself with the rendering of huge three-dimensional data sets. The aim is the development of fast algorithms that also apply recent, accurate volume reconstruction models to obtain visualizations that are as artifact-free as possible. In Part I, a comprehensive overview of the state of the art in volume rendering is given. Part II is devoted to the recently developed trivariate linear, quadratic and cubic spline models defined on symmetric tetrahedral partitions direc...
Radar track segmentation with cubic splines for collision risk models in high density terminal areas
Cózar, J.; Saez Nieto, Francisco Javier; Ricaud Álvarez, Enrique
2014-01-01
This paper presents a method to segment airplane radar tracks in high-density terminal areas, where the air traffic follows trajectories with several changes in heading, speed and altitude. The radar tracks are modelled with different types of segments: straight lines, cubic spline functions and shape-preserving cubic functions. The longitudinal, lateral and vertical deviations are calculated for terminal manoeuvring area scenarios. The most promising model of the radar tracks resulted from a mi...
Quadratic spline collocation and parareal deferred correction method for parabolic PDEs
Liu, Jun; Wang, Yan; Li, Rongjian
2016-06-01
In this paper, we consider a linear parabolic PDE, using optimal quadratic spline collocation (QSC) methods for the space discretization and the parareal technique on the time domain. Meanwhile, a deferred correction technique is used to improve the accuracy during the iterations. The error estimation is presented and the stability is analyzed. Numerical experiments, carried out on a parallel computer with 40 CPUs, are included to exhibit the effectiveness of the hybrid algorithm.
A fourth order spline collocation approach for a business cycle model
Sayfy, A.; Khoury, S.; Ibdah, H.
2013-10-01
A collocation approach based on fourth-order cubic B-splines is presented for the numerical solution of a Kaleckian business cycle model formulated as a nonlinear delay differential equation. The equation is approximated and the nonlinearity is handled by employing an iterative scheme arising from Newton's method. It is shown that the model exhibits a conditionally stable dynamical cycle. The fourth-order rate of convergence of the scheme is verified numerically for different special cases.
Directory of Open Access Journals (Sweden)
Lakestani M.
2005-01-01
Compactly supported linear semiorthogonal B-spline wavelets together with their dual wavelets are developed to approximate the solutions of nonlinear Fredholm-Hammerstein integral equations. Properties of these wavelets are first presented; these properties are then utilized to reduce the computation of integral equations to some algebraic equations. The method is computationally attractive, and applications are demonstrated through an illustrative example.
Explicit Gaussian quadrature rules for C^1 cubic splines with symmetrically stretched knot sequence
Ait-Haddou, Rachid
2015-06-19
We provide explicit expressions for quadrature rules on the space of C^1 cubic splines with non-uniform, symmetrically stretched knot sequences. The quadrature nodes and weights are derived via an explicit recursion that avoids the intervention of any numerical solver, and the rule is optimal, that is, it requires a minimal number of nodes. Numerical experiments validating the theoretical results and the error estimates of the quadrature rules are also presented.
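The "optimal, minimal number of nodes" property has a classical analogue that is easy to check numerically: the 2-node Gauss-Legendre rule integrates every cubic polynomial on [-1, 1] exactly. The sketch below verifies that exactness (it illustrates the general notion of optimal quadrature, not the paper's spline-specific rule):

```python
import numpy as np

# 2-node Gauss-Legendre rule on [-1, 1]: nodes at +/- 1/sqrt(3), weights 1.
nodes = np.array([-1.0, 1.0]) / np.sqrt(3.0)
weights = np.array([1.0, 1.0])

def gauss2(coeffs):
    """Approximate the integral over [-1, 1] of the polynomial with the
    given coefficients (highest degree first)."""
    return float(np.sum(weights * np.polyval(coeffs, nodes)))

# Exact integrals over [-1, 1]: x^3 -> 0, x^2 -> 2/3, x -> 0, 1 -> 2.
approx_cubic = gauss2([1.0, 1.0, 1.0, 1.0])  # integrand x^3 + x^2 + x + 1
exact_cubic = 0.0 + 2.0 / 3.0 + 0.0 + 2.0
```

With only two nodes the rule is exact for all polynomials up to degree 3 but not for degree 4, mirroring the optimality statement: fewer nodes could not achieve the same exactness on the spline space either.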
Mahapatra, Pravas R; Makkapati, Vishnu V
2005-01-01
Enhancements are carried out to a contour-based method for extreme compression of weather radar reflectivity data for efficient storage and transmission over low-bandwidth data links. In particular, a new method of systematically adjusting the control points to obtain better reconstruction of the contours using B-Spline interpolation is presented. Further, bit-level manipulations to achieve higher compression ratios are investigated. The efficacy of these enhancements is quantitatively eva...
Bayesian analysis of rare events
Straub, Daniel; Papaioannou, Iason; Betz, Wolfgang
2016-06-01
In many areas of engineering and science there is an interest in predicting the probability of rare events, in particular in applications related to safety and security. Increasingly, such predictions are made through computer models of physical systems in an uncertainty quantification framework. Additionally, with advances in IT, monitoring and sensor technology, an increasing amount of data on the performance of the systems is collected. This data can be used to reduce uncertainty, improve the probability estimates and consequently enhance the management of rare events and associated risks. Bayesian analysis is the ideal method to include the data into the probabilistic model. It ensures a consistent probabilistic treatment of uncertainty, which is central in the prediction of rare events, where extrapolation from the domain of observation is common. We present a framework for performing Bayesian updating of rare event probabilities, termed BUS. It is based on a reinterpretation of the classical rejection-sampling approach to Bayesian analysis, which enables the use of established methods for estimating probabilities of rare events. By drawing upon these methods, the framework makes use of their computational efficiency. These methods include the First-Order Reliability Method (FORM), tailored importance sampling (IS) methods and Subset Simulation (SuS). In this contribution, we briefly review these methods in the context of the BUS framework and investigate their applicability to Bayesian analysis of rare events in different settings. We find that, for some applications, FORM can be highly efficient and is surprisingly accurate, enabling Bayesian analysis of rare events with just a few model evaluations. In a general setting, BUS implemented through IS and SuS is more robust and flexible.
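The rejection-sampling view of Bayesian updating that BUS reinterprets can be sketched on a toy conjugate-normal example (the model, sample size and numbers below are illustrative assumptions, not from the paper):

```python
import numpy as np

# Rejection-sampling Bayesian updating: draw from the prior, accept each
# sample with probability L(theta)/c where c bounds the likelihood; the
# accepted samples then follow the posterior.
rng = np.random.default_rng(1)
obs, sigma = 1.0, 1.0                  # one observation with known noise sd
theta = rng.standard_normal(50_000)    # prior: N(0, 1)

likelihood = np.exp(-0.5 * ((obs - theta) / sigma) ** 2)
c = 1.0                                # valid upper bound on this likelihood
accept = rng.uniform(size=theta.size) < likelihood / c
posterior = theta[accept]
post_mean = posterior.mean()
# Conjugate-normal analytic posterior is N(0.5, 1/2), so post_mean ~ 0.5.
```

The inefficiency of plain rejection sampling when the likelihood concentrates (low acceptance rates) is precisely why BUS couples this reinterpretation with rare-event estimators such as FORM, importance sampling and Subset Simulation.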
Bayesian methods for measures of agreement
Broemeling, Lyle D
2009-01-01
Using WinBUGS to implement Bayesian inferences of estimation and testing hypotheses, Bayesian Methods for Measures of Agreement presents useful methods for the design and analysis of agreement studies. It focuses on agreement among the various players in the diagnostic process.The author employs a Bayesian approach to provide statistical inferences based on various models of intra- and interrater agreement. He presents many examples that illustrate the Bayesian mode of reasoning and explains elements of a Bayesian application, including prior information, experimental information, the likelihood function, posterior distribution, and predictive distribution. The appendices provide the necessary theoretical foundation to understand Bayesian methods as well as introduce the fundamentals of programming and executing the WinBUGS software.Taking a Bayesian approach to inference, this hands-on book explores numerous measures of agreement, including the Kappa coefficient, the G coefficient, and intraclass correlation...
Plug & Play object oriented Bayesian networks
DEFF Research Database (Denmark)
Bangsø, Olav; Flores, J.; Jensen, Finn Verner
2003-01-01
Object oriented Bayesian networks have proven themselves useful in recent years. The idea of applying an object oriented approach to Bayesian networks has extended their scope to larger domains that can be divided into autonomous but interrelated entities. Object oriented Bayesian networks have… been shown to be quite suitable for dynamic domains as well. However, processing object oriented Bayesian networks in practice does not take advantage of their modular structure. Normally the object oriented Bayesian network is transformed into a Bayesian network and inference is performed… by constructing a junction tree from this network. In this paper we propose a method for translating directly from object oriented Bayesian networks to junction trees, avoiding the intermediate translation. We pursue two main purposes: firstly, to maintain the original structure organized in an instance tree…
Kernel smoothing in Matlab theory and practice of kernel smoothing
Horova, Ivanka; Zelinka, Jiri
2012-01-01
Kernel estimation methods represent one of the most effective nonparametric smoothing techniques. These methods are simple to understand and possess very good statistical properties. This book provides a concise and comprehensive overview of the statistical theory and, in addition, emphasis is given to the implementation of the presented methods in Matlab. All created programs are included in a special toolbox which is an integral part of the book. This toolbox contains many Matlab scripts useful for kernel smoothing of density, cumulative distribution function, regression function, hazard funct
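The regression-function smoother mentioned above is typically a Nadaraya-Watson estimator. A minimal Python sketch of that idea follows (the book's toolbox is Matlab; this is an independent illustration with invented test data, not code from the book):

```python
import numpy as np

def nw_smooth(x_query, x, y, h):
    """Nadaraya-Watson kernel regression: at each query point, return the
    Gaussian-kernel weighted average of the observed responses y."""
    w = np.exp(-0.5 * ((x_query[:, None] - x[None, :]) / h) ** 2)
    return (w @ y) / w.sum(axis=1)

# Noisy observations of a smooth function (illustrative data).
rng = np.random.default_rng(2)
x = np.linspace(0.0, 2.0 * np.pi, 200)
y = np.sin(x) + 0.3 * rng.standard_normal(x.size)

x_query = np.array([np.pi / 2, np.pi, 3 * np.pi / 2])
y_hat = nw_smooth(x_query, x, y, h=0.3)   # h is the smoothing bandwidth
```

Because the estimate at each point is a convex combination of the observed responses, it always stays within the range of the data; the bandwidth h plays the same resolution-versus-noise role as the smoothing parameters discussed elsewhere in this listing.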
A mixed basis density functional approach for one-dimensional systems with B-splines
Ren, Chung-Yuan; Chang, Yia-Chung; Hsue, Chen-Shiung
2016-05-01
A mixed basis approach based on density functional theory is extended to one-dimensional (1D) systems. The basis functions here are taken to be the localized B-splines for the two finite non-periodic dimensions and the plane waves for the third periodic direction. This approach significantly reduces the number of basis functions and is therefore computationally efficient for the diagonalization of the Kohn-Sham Hamiltonian. For 1D systems, B-spline polynomials are particularly useful and efficient in two-dimensional spatial integrations involved in the calculations because of their absolute localization. Moreover, B-splines are not associated with atomic positions when the geometry is optimized, making geometry optimization easy to implement. With such a basis set we can directly calculate the total energy of the isolated system instead of using the conventional supercell model with artificial vacuum regions among the replicas along the two non-periodic directions. The spurious Coulomb interaction between the charged defect and its repeated images in the supercell approach for charged systems can also be avoided. A rigorous formalism for the long-range Coulomb potential of both neutral and charged 1D systems under the mixed basis scheme is derived. To test the present method, we apply it to study the infinite carbon-dimer chain, graphene nanoribbon, carbon nanotube and positively-charged carbon-dimer chain. The resulting electronic structures are presented and discussed in detail.
Flutter Instability Speeds of Guided Splined Disks: An Experimental and Analytical Investigation
Directory of Open Access Journals (Sweden)
Ahmad Mohammadpanah
2015-01-01
“Guided splined disks” are defined as flat thin disks in which the inner radius of the disk is splined and matches a splined arbor that provides the driving torque for rotating the disk. Lateral constraint for the disk is provided by space-fixed guide pads. Experimental lateral-displacement data from run-up tests of such a system are presented, and the flutter instability zones are identified. The results indicate that flutter instability occurs at speeds where a backward travelling wave of one mode meets a reflected wave of a different mode. Sometimes the system cannot pass a flutter zone, and transverse vibrations of the disk lock into that flutter instability zone. The governing linear equations of transverse motion of such a spinning disk, with assumed free inner and outer boundary conditions, are derived. A lateral constraint is introduced and modeled as a linear spring. Rigid-body translational and tilting degrees of freedom are included in the analysis of the total motion of the spinning disk. The eigenvalues of the system are computed numerically, and the flutter instability zones are defined. The results show that the mathematical model can accurately predict the flutter instability zones measured in the experimental tests.
An optimal discrete operator for the two-dimensional spline filter
International Nuclear Information System (INIS)
Digital filtering techniques are indispensable tools for analyzing and evaluating surface topography data. Among conventional digital filters, the Gaussian filter is the most commonly used technique for both one-dimensional and two-dimensional data because of its isotropic and zero-phase transmission characteristics. However, filtering with the Gaussian filter usually requires additional run-in and run-out regions due to its large end-effects. To overcome this disadvantage, namely that supplementary profile data are needed to reduce the end-effects, the one-dimensional spline filter was introduced. It is now widely accepted as a practical filtering technique and is published as ISO 16610-22. However, a successive application of the one-dimensional spline filter to two-dimensional data in the orthogonal directions may lead to an anisotropic amplitude characteristic. In this paper, a purely two-dimensional discrete spline filter is proposed and its computational procedure is described; the filter approximates the ideal isotropic frequency response through a least-squares optimization technique.
Converting an unstructured quadrilateral mesh to a standard T-spline surface
Wang, Wenyan; Zhang, Yongjie; Scott, Michael A.; Hughes, Thomas J. R.
2011-10-01
This paper presents a novel method for converting any unstructured quadrilateral mesh to a standard T-spline surface, which is C^2-continuous except for the local region around each extraordinary node. There are two stages in the algorithm: the topology stage and the geometry stage. In the topology stage, we take the input quadrilateral mesh as the initial T-mesh, design templates for each quadrilateral element type, and then standardize the T-mesh by inserting nodes. One of two sufficient conditions is derived to guarantee the generated T-mesh is gap-free around extraordinary nodes. To obtain a standard T-mesh, a second sufficient condition is provided to decide what T-mesh configuration yields a standard T-spline. These two sufficient conditions serve as a theoretical basis for our template development and T-mesh standardization. In the geometry stage, an efficient surface fitting technique is developed to improve the geometric accuracy. In addition, the surface continuity around extraordinary nodes can be improved by adjusting surrounding control nodes. The algorithm can also preserve sharp features in the input mesh, which are common in CAD (Computer Aided Design) models. Finally, a Bézier extraction technique is used to facilitate T-spline based isogeometric analysis. Several examples are tested to show the robustness of the algorithm.
Design of Low-Pass Digital Differentiators Based on B-splines
Directory of Open Access Journals (Sweden)
Zijun He
2014-07-01
This paper describes a new method for designing low-pass differentiators that are widely suitable for low-frequency signals with different sampling rates. The method is based on the differential property of convolution and the derivatives of B-spline basis functions. The first-order differentiator is constructed directly from the first derivative of the B-spline of degree 5 or 4. A higher-order (>2) low-pass differentiator is constructed by cascading two lower-order differentiators, whose coefficients are obtained from the nth derivative of a B-spline of degree n+2 expanded by a factor a. The paper presents the properties of the proposed differentiators, gives examples of designing the first- to sixth-order differentiators, and reports several simulations, including the effects of the factor a on the results and the noise robustness of the proposed differentiators. The analysis and simulations indicate that the proposed differentiator can be applied to a wide range of low-frequency signals, and that the trade-off between noise reduction and signal preservation can be made by selecting the maximum allowable value of a.
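The first-order construction described here, taking an FIR differentiator kernel from the first derivative of a B-spline, can be sketched with SciPy. This is an illustrative reconstruction of the degree-5 case only, not the authors' full cascaded design:

```python
import numpy as np
from scipy.interpolate import BSpline

# The derivative of the degree-5 cardinal B-spline, sampled at the integer
# points of its support, gives an antisymmetric low-pass FIR differentiator
# kernel (the first-order case; higher orders cascade such filters).
b5 = BSpline.basis_element(np.arange(7))      # knots 0..6, degree 5
h = b5.derivative(1)(np.arange(1, 6))         # equals [1, 10, 0, -10, -1] / 24

# Differentiate a slow sinusoid and compare with the analytic derivative.
t = np.arange(200)
w = 2 * np.pi * 0.01                          # low normalized frequency
x = np.sin(w * t)
dx = np.convolve(x, h, mode="same")
err = np.max(np.abs(dx[10:-10] - (w * np.cos(w * t))[10:-10]))
print(err < 1e-3)  # True: accurate in the low-frequency passband
```

Because the kernel is antisymmetric with zero sum, it exactly differentiates linear trends while attenuating high-frequency noise.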
Directory of Open Access Journals (Sweden)
Geetha M
2012-03-01
Sign language is the most natural way of expression for the deaf community. The urge to support the integration of deaf people into the hearing society has made automatic sign language recognition an area of interest for researchers. Indian Sign Language (ISL) is a visual-spatial language which provides linguistic information using hands, arms, facial expressions, and head/body postures. In this paper we propose a novel vision-based recognition of Indian Sign Language alphabets and numerals using B-spline approximation. Gestures of ISL alphabets are complex since they involve gestures of both hands together. Our algorithm approximates the boundary extracted from the region of interest to a B-spline curve by taking the Maximum Curvature Points (MCPs) as the control points. The B-spline curve is then subjected to iterative smoothing, resulting in the extraction of Key Maximum Curvature Points (KMCPs), which are the key contributors to the gesture shape. Hence a translation- and scale-invariant feature vector is obtained from the spatial locations of the KMCPs in the 8 octant regions of the 2D space, which is given for classification.
Flexible Bayesian Nonparametric Priors and Bayesian Computational Methods
Zhu, Weixuan
2016-01-01
The definition of vectors of dependent random probability measures is a topic of interest in Bayesian nonparametrics. They represent dependent nonparametric prior distributions that are useful for modelling observables for which specific covariate values are known. Our first contribution is the introduction of novel multivariate vectors of the two-parameter Poisson-Dirichlet process. The dependence is induced by applying a Lévy copula to the marginal Lévy intensities. Our attenti...
7 CFR 51.636 - Smooth texture.
2010-01-01
§ 51.636 Smooth texture (Agricultural Marketing Service Regulations, Department of Agriculture). Smooth texture means that the skin is thin and smooth for the variety...
7 CFR 51.698 - Smooth texture.
2010-01-01
§ 51.698 Smooth texture (Agricultural Marketing Service Regulations, Department of Agriculture). Smooth texture means that the skin is thin and smooth for the variety and...
Sarkar, Abhra
2014-10-02
We consider the problem of estimating the density of a random variable when precise measurements on the variable are not available, but replicated proxies contaminated with measurement error are available for sufficiently many subjects. Under the assumption of additive measurement errors this reduces to a problem of deconvolution of densities. Deconvolution methods often make restrictive and unrealistic assumptions about the density of interest and the distribution of measurement errors, e.g., normality and homoscedasticity and thus independence from the variable of interest. This article relaxes these assumptions and introduces novel Bayesian semiparametric methodology based on Dirichlet process mixture models for robust deconvolution of densities in the presence of conditionally heteroscedastic measurement errors. In particular, the models can adapt to asymmetry, heavy tails and multimodality. In simulation experiments, we show that our methods vastly outperform a recent Bayesian approach based on estimating the densities via mixtures of splines. We apply our methods to data from nutritional epidemiology. Even in the special case when the measurement errors are homoscedastic, our methodology is novel and dominates other methods that have been proposed previously. Additional simulation results, instructions on getting access to the data set and R programs implementing our methods are included as part of online supplemental materials.
Non-Parametric Bayesian Registration (NParBR) of Body Tumors in DCE-MRI Data.
Pilutti, David; Strumia, Maddalena; Buchert, Martin; Hadjidemetriou, Stathis
2016-04-01
The identification of tumors in the internal organs of chest, abdomen, and pelvis anatomic regions can be performed with the analysis of Dynamic Contrast Enhanced Magnetic Resonance Imaging (DCE-MRI) data. The contrast agent is accumulated differently by pathologic and healthy tissues and that results in a temporally varying contrast in an image series. The internal organs are also subject to potentially extensive movements mainly due to breathing, heart beat, and peristalsis. This contributes to making the analysis of DCE-MRI datasets challenging as well as time consuming. To address this problem we propose a novel pairwise non-rigid registration method with a Non-Parametric Bayesian Registration (NParBR) formulation. The NParBR method uses a Bayesian formulation that assumes a model for the effect of the distortion on the joint intensity statistics, a non-parametric prior for the restored statistics, and also applies a spatial regularization for the estimated registration with Gaussian filtering. A minimally biased intra-dataset atlas is computed for each dataset and used as reference for the registration of the time series. The time series registration method has been tested with 20 datasets of liver, lungs, intestines, and prostate. It has been compared to the B-Splines and to the SyN methods with results that demonstrate that the proposed method improves both accuracy and efficiency. PMID:26672032
Sinha, Samiran
2009-08-10
We propose a semiparametric Bayesian method for handling measurement error in nutritional epidemiological data. Our goal is to estimate nonparametrically the form of association between a disease and exposure variable while the true values of the exposure are never observed. Motivated by nutritional epidemiological data, we consider the setting where a surrogate covariate is recorded in the primary data, and a calibration data set contains information on the surrogate variable and repeated measurements of an unbiased instrumental variable of the true exposure. We develop a flexible Bayesian method where not only is the relationship between the disease and exposure variable treated semiparametrically, but also the relationship between the surrogate and the true exposure is modeled semiparametrically. The two nonparametric functions are modeled simultaneously via B-splines. In addition, we model the distribution of the exposure variable as a Dirichlet process mixture of normal distributions, thus making its modeling essentially nonparametric and placing this work into the context of functional measurement error modeling. We apply our method to the NIH-AARP Diet and Health Study and examine its performance in a simulation study.
Smooth maps from clumpy data: Covariance analysis
Lombardi, Marco; Schneider, Peter
2002-01-01
Interpolation techniques play a central role in Astronomy, where one often needs to smooth irregularly sampled data into a smooth map. In a previous article (Lombardi & Schneider 2001), we have considered a widely used smoothing technique and we have evaluated the expectation value of the smoothed map under a number of natural hypotheses. Here we proceed further on this analysis and consider the variance of the smoothed map, represented by a two-point correlation function. We show that two ma...
Speaking Stata: Smoothing in various directions
Nicholas J. Cox
2005-01-01
Identifying patterns in bivariate data on a scatterplot remains a basic statistical problem, with special flavor when both variables are on the same footing. Ideas of double, diagonal, and polar smoothing inspired by Cleveland and McGill’s 1984 paper in the Journal of the American Statistical Association are revisited with various examples from environmental datasets. Double smoothing means smoothing both y given x and x given y. Diagonal smoothing means smoothing based on the sum and dif...
Smooth halos in the cosmic web
Gaite, Jose
2014-01-01
Dark matter halos can be defined as smooth distributions of dark matter placed in a non-smooth cosmic web structure. This definition of halos demands a precise definition of smoothness and a characterization of the manner in which the transition from smooth halos to the cosmic web takes place. We introduce entropic measures of smoothness, related to measures of inequality previously used in economy and with the advantage of being connected with standard methods of multifractal analysis alread...
On the analysis of movement smoothness
Balasubramanian, Sivakumar; Melendez-Calderon, Alejandro; Roby-Brami, Agnes; Burdet, Etienne
2015-01-01
Quantitative measures of smoothness play an important role in the assessment of sensorimotor impairment and motor learning. Traditionally, movement smoothness has been computed mainly for discrete movements, in particular arm reaching and circle drawing, using kinematic data. There are currently very few studies investigating the smoothness of rhythmic movements, and there is no systematic way of analysing the smoothness of such movements. There is also very little work on the smoothness of othe...
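As a concrete illustration of the kind of kinematic smoothness measure this literature discusses, the sketch below computes a velocity-based log dimensionless jerk. The exact normalization varies across papers, so treat this as one assumed variant, applied to synthetic velocity profiles:

```python
import numpy as np

# Velocity-based log dimensionless jerk: integrate squared jerk over the
# movement, normalize by duration and peak speed, take -log so that
# smoother profiles score higher (less negative).
def log_dimensionless_jerk(vel, dt):
    T = dt * (len(vel) - 1)
    jerk = np.gradient(np.gradient(vel, dt), dt)        # d^2 v / dt^2
    dj = (T**3 / np.max(vel)**2) * np.sum(jerk**2) * dt
    return -np.log(dj)

dt = 0.01
t = np.arange(0.0, 1.0, dt)
smooth = np.sin(np.pi * t)                              # bell-shaped speed profile
rng = np.random.default_rng(1)
jittery = smooth + 0.05 * rng.standard_normal(len(t))   # same movement with tremor
print(log_dimensionless_jerk(smooth, dt) > log_dimensionless_jerk(jittery, dt))  # True
```

The double differentiation makes the measure very sensitive to high-frequency content, which is exactly why jittery movements score poorly.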
Bayesian approach to rough set
Marwala, Tshilidzi
2007-01-01
This paper proposes an approach to training rough set models within a Bayesian framework, using the Markov Chain Monte Carlo (MCMC) method. The prior probabilities are constructed from the prior knowledge that good rough set models have fewer rules. Markov Chain Monte Carlo sampling is conducted by sampling in the rough set granule space, and the Metropolis algorithm is used as the acceptance criterion. The proposed method is tested by estimating the risk of HIV given demographic data. The results obtained show that the proposed approach is able to achieve an average accuracy of 58%, with the accuracy varying up to 66%. In addition, the Bayesian rough set gives the probabilities of the estimated HIV status as well as the linguistic rules describing how the demographic parameters drive the risk of HIV.
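The Metropolis acceptance step mentioned here is generic and easy to illustrate. The sketch below is a plain random-walk Metropolis sampler on a toy one-dimensional target, not the paper's rough-set granule-space sampler:

```python
import numpy as np

# Random-walk Metropolis: propose a local move, accept with probability
# min(1, p(x') / p(x)), otherwise stay at the current state.
def metropolis(log_post, x0, n_steps, step=1.0, seed=0):
    rng = np.random.default_rng(seed)
    x, lp = x0, log_post(x0)
    out = np.empty(n_steps)
    for i in range(n_steps):
        xp = x + step * rng.standard_normal()
        lpp = log_post(xp)
        if np.log(rng.uniform()) < lpp - lp:   # the Metropolis acceptance criterion
            x, lp = xp, lpp
        out[i] = x
    return out

# Toy target: a standard normal log-posterior (hypothetical, for illustration).
s = metropolis(lambda x: -0.5 * x * x, 0.0, 20000)[5000:]  # drop burn-in
print(f"posterior mean ~ {s.mean():.2f}, sd ~ {s.std():.2f}")
```

The chain's empirical mean and standard deviation converge to those of the target; in the paper's setting the state is a rough-set model rather than a scalar, but the acceptance rule is the same.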
Attention in a bayesian framework
DEFF Research Database (Denmark)
Whiteley, Louise Emma; Sahani, Maneesh
2012-01-01
The behavioral phenomena of sensory attention are thought to reflect the allocation of a limited processing resource, but there is little consensus on the nature of the resource or why it should be limited. Here we argue that a fundamental bottleneck emerges naturally within Bayesian models of ... include both selective phenomena, where attention is invoked by cues that point to particular stimuli, and integrative phenomena, where attention is invoked dynamically by endogenous processing. However, most previous Bayesian accounts of attention have focused on describing relatively simple experimental settings, where cues shape expectations about a small number of upcoming stimuli and thus convey "prior" information about clearly defined objects. While operationally consistent with the experiments it seeks to describe, this view of attention as prior seems to miss many essential elements of both its...
Bayesian Sampling using Condition Indicators
DEFF Research Database (Denmark)
Faber, Michael H.; Sørensen, John Dalsgaard
2002-01-01
The problem of control quality of components is considered for the special case where the acceptable failure rate is low, the test costs are high and where it may be difficult or impossible to test the condition of interest directly. Based on the classical control theory and the concept of condition indicators introduced by Benjamin and Cornell (1970), a Bayesian approach to quality control is formulated. The formulation is then extended to the case where the quality control is based on sampling of indirect information about the condition of the components, i.e. condition indicators. This allows for a Bayesian formulation of the indicators whereby the experience and expertise of the inspection personnel may be fully utilized and consistently updated as frequentistic information is collected. The approach is illustrated on an example considering a concrete structure subject to corrosion...
BAYESIAN IMAGE RESTORATION, USING CONFIGURATIONS
Directory of Open Access Journals (Sweden)
Thordis Linda Thorarinsdottir
2011-05-01
In this paper, we develop a Bayesian procedure for removing noise from images that can be viewed as noisy realisations of random sets in the plane. The procedure utilises recent advances in configuration theory for noise-free random sets, where the probabilities of observing the different boundary configurations are expressed in terms of the mean normal measure of the random set. These probabilities are used as prior probabilities in a Bayesian image restoration approach. Estimation of the remaining parameters in the model is outlined for salt-and-pepper noise. The inference in the model is discussed in detail for 3 × 3 and 5 × 5 configurations, and examples of the performance of the procedure are given.
Bayesian Seismology of the Sun
Gruberbauer, Michael
2013-01-01
We perform a Bayesian grid-based analysis of the solar l=0,1,2 and 3 p modes obtained via BiSON in order to deliver the first Bayesian asteroseismic analysis of the solar composition problem. We do not find decisive evidence to prefer either of the contending chemical compositions, although the revised solar abundances (AGSS09) are more probable in general. We do find indications for systematic problems in standard stellar evolution models, unrelated to the consequences of inadequate modelling of the outer layers on the higher-order modes. The seismic observables are best fit by solar models that are several hundred million years older than the meteoritic age of the Sun. Similarly, meteoritic age calibrated models do not adequately reproduce the observed seismic observables. Our results suggest that these problems will affect any asteroseismic inference that relies on a calibration to the Sun.
Bayesian priors for transiting planets
Kipping, David M
2016-01-01
As astronomers push towards discovering ever-smaller transiting planets, it is increasingly common to deal with low signal-to-noise ratio (SNR) events, where the choice of priors plays an influential role in Bayesian inference. In the analysis of exoplanet data, the selection of priors is often treated as a nuisance, with observers typically defaulting to uninformative distributions. Such treatments miss a key strength of the Bayesian framework, especially in the low SNR regime, where even weak a priori information is valuable. When estimating the parameters of a low-SNR transit, two key pieces of information are known: (i) the planet has the correct geometric alignment to transit and (ii) the transit event exhibits sufficient signal-to-noise to have been detected. These represent two forms of observational bias. Accordingly, when fitting transits, the model parameter priors should not follow the intrinsic distributions of said terms, but rather those of both the intrinsic distributions and the observational ...
Bayesian Inference for Radio Observations
Lochner, Michelle; Zwart, Jonathan T L; Smirnov, Oleg; Bassett, Bruce A; Oozeer, Nadeem; Kunz, Martin
2015-01-01
(Abridged) New telescopes like the Square Kilometre Array (SKA) will push into a new sensitivity regime and expose systematics, such as direction-dependent effects, that could previously be ignored. Current methods for handling such systematics rely on alternating best estimates of instrumental calibration and models of the underlying sky, which can lead to inaccurate uncertainty estimates and biased results because such methods ignore any correlations between parameters. These deconvolution algorithms produce a single image that is assumed to be a true representation of the sky, when in fact it is just one realisation of an infinite ensemble of images compatible with the noise in the data. In contrast, here we report a Bayesian formalism that simultaneously infers both systematics and science. Our technique, Bayesian Inference for Radio Observations (BIRO), determines all parameters directly from the raw data, bypassing image-making entirely, by sampling from the joint posterior probability distribution. Thi...
Bayesian inference on proportional elections.
Brunello, Gabriel Hideki Vatanabe; Nakano, Eduardo Yoshio
2015-01-01
Polls for majoritarian voting systems usually show estimates of the percentage of votes for each candidate. However, proportional vote systems do not necessarily guarantee that the candidate with the highest percentage of votes will be elected. Thus, traditional methods used in majoritarian elections cannot be applied to proportional elections. In this context, the purpose of this paper was to perform Bayesian inference on proportional elections considering the Brazilian system of seats distribution. More specifically, a methodology was developed to estimate the probability that a given party will have representation in the chamber of deputies. Inferences were made in a Bayesian scenario using the Monte Carlo simulation technique, and the developed methodology was applied to data from the Brazilian elections for Members of the Legislative Assembly and Federal Chamber of Deputies in 2010. A performance rate was also presented to evaluate the efficiency of the methodology. Calculations and simulations were carried out using the free R statistical software. PMID:25786259
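One deterministic ingredient of proportional seats distribution is a largest-averages (D'Hondt-style) allocation from vote totals, of the kind used in the Brazilian system. A minimal sketch with hypothetical vote counts (this is the allocation step only, not the paper's Bayesian simulation):

```python
# Largest-averages (D'Hondt) seat allocation: seats are awarded one at a
# time to the party with the largest quotient votes / (seats won + 1).
def dhondt(votes, seats):
    alloc = {party: 0 for party in votes}
    for _ in range(seats):
        best = max(votes, key=lambda p: votes[p] / (alloc[p] + 1))
        alloc[best] += 1
    return alloc

# Hypothetical vote totals for three parties contesting 8 seats.
print(dhondt({"A": 100000, "B": 80000, "C": 30000}, 8))  # {'A': 4, 'B': 3, 'C': 1}
```

A Monte Carlo study like the paper's would draw vote totals from a posterior distribution and run this allocation on each draw to estimate the probability that a party wins at least one seat.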
Smooth deformations and cosmic statefinders
Capistrano, A J S
2014-01-01
We study the possibility that the universe is subjected to a deformation, besides the expansion described by Friedmann's equations. The concept of smooth deformation of Riemannian manifolds associated with the extrinsic curvature is applied to the standard FLRW cosmology. The solution of the resulting modified Friedmann equation is compared with the known phenomenological data.
A Bayesian Nonparametric IRT Model
Karabatsos, George
2015-01-01
This paper introduces a flexible Bayesian nonparametric Item Response Theory (IRT) model, which applies to dichotomous or polytomous item responses, and which can apply to either unidimensional or multidimensional scaling. This is an infinite-mixture IRT model, with person ability and item difficulty parameters, and with a random intercept parameter that is assigned a mixing distribution, with mixing weights a probit function of other person and item parameters. As a result of its flexibility...
Bayesian segmentation of hyperspectral images
Mohammadpour, Adel; Mohammad-Djafari, Ali
2007-01-01
In this paper we consider the problem of joint segmentation of hyperspectral images in the Bayesian framework. The proposed approach is based on a Hidden Markov Modeling (HMM) of the images with common segmentation, or equivalently with common hidden classification label variables which is modeled by a Potts Markov Random Field. We introduce an appropriate Markov Chain Monte Carlo (MCMC) algorithm to implement the method and show some simulation results.
Bayesian segmentation of hyperspectral images
Mohammadpour, Adel; Féron, Olivier; Mohammad-Djafari, Ali
2004-11-01
In this paper we consider the problem of joint segmentation of hyperspectral images in the Bayesian framework. The proposed approach is based on a Hidden Markov Modeling (HMM) of the images with common segmentation, or equivalently with common hidden classification label variables which is modeled by a Potts Markov Random Field. We introduce an appropriate Markov Chain Monte Carlo (MCMC) algorithm to implement the method and show some simulation results.
Bayesian Stable Isotope Mixing Models
Parnell, Andrew C.; Phillips, Donald L.; Bearhop, Stuart; Semmens, Brice X.; Ward, Eric J.; Moore, Jonathan W.; Andrew L Jackson; Inger, Richard
2012-01-01
In this paper we review recent advances in Stable Isotope Mixing Models (SIMMs) and place them into an over-arching Bayesian statistical framework which allows for several useful extensions. SIMMs are used to quantify the proportional contributions of various sources to a mixture. The most widely used application is quantifying the diet of organisms based on the food sources they have been observed to consume. At the centre of the multivariate statistical model we propose is a compositional m...
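The basic mixing identity behind SIMMs can be illustrated in the simplest case of two sources and one isotope, propagating source uncertainty by Monte Carlo. This is a crude stand-in for the full Bayesian models reviewed in the paper, with hypothetical isotope values:

```python
import numpy as np

# Two-source, single-isotope mixing: delta_mix = p*delta_A + (1-p)*delta_B,
# so p = (delta_mix - delta_B) / (delta_A - delta_B). Source signatures are
# uncertain, so we propagate that uncertainty by sampling.
rng = np.random.default_rng(0)
delta_A = rng.normal(-12.0, 0.5, 100_000)   # source A signature (hypothetical)
delta_B = rng.normal(-26.0, 0.5, 100_000)   # source B signature (hypothetical)
delta_mix = -20.0                           # observed mixture value

p = (delta_mix - delta_B) / (delta_A - delta_B)   # proportion from source A
print(f"p(A) ~ {p.mean():.2f} +/- {p.std():.2f}")
```

With more sources than isotopes the system is underdetermined, which is what motivates the fully Bayesian treatment the paper reviews.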
Bayesian Network--Response Regression
WANG, LU; Durante, Daniele; Dunson, David B.
2016-01-01
There is an increasing interest in learning how human brain networks vary with continuous traits (e.g., personality, cognitive abilities, neurological disorders), but flexible procedures to accomplish this goal are limited. We develop a Bayesian semiparametric model, which combines low-rank factorizations and Gaussian process priors to allow flexible shifts of the conditional expectation for a network-valued random variable across the feature space, while including subject-specific random eff...
Bayesian estimation of turbulent motion
Héas, P.; Herzet, C.; Mémin, E.; Heitz, D.; P. D. Mininni
2013-01-01
Based on physical laws describing the multi-scale structure of turbulent flows, this article proposes a regularizer for fluid motion estimation from an image sequence. Regularization is achieved by imposing some scale invariance property between histograms of motion increments computed at different scales. By reformulating this problem from a Bayesian perspective, an algorithm is proposed to jointly estimate motion, regularization hyper-parameters, and to select the ...
Elements of Bayesian experimental design
Energy Technology Data Exchange (ETDEWEB)
Sivia, D.S. [Rutherford Appleton Lab., Oxon (United Kingdom)
1997-09-01
We consider some elements of the Bayesian approach that are important for optimal experimental design. While the underlying principles used are very general, and are explained in detail in a recent tutorial text, they are applied here to the specific case of characterising the inferential value of different resolution peakshapes. This particular issue was considered earlier by Silver, Sivia and Pynn (1989, 1990a, 1990b), and the following presentation confirms and extends the conclusions of their analysis.
Skill Rating by Bayesian Inference
Di Fatta, Giuseppe; Haworth, Guy McCrossan; Regan, Kenneth W.
2009-01-01
Systems Engineering often involves computer modelling the behaviour of proposed systems and their components. Where a component is human, fallibility must be modelled by a stochastic agent. The identification of a model of decision-making over quantifiable options is investigated using the game-domain of Chess. Bayesian methods are used to infer the distribution of players’ skill levels from the moves they play rather than from their competitive results. The approach is used on large sets of ...
Topics in Nonparametric Bayesian Statistics
2003-01-01
The intersection set of Bayesian and nonparametric statistics was almost empty until about 1973, but now seems to be growing at a healthy rate. This chapter gives an overview of various theoretical and applied research themes inside this field, partly complementing and extending recent reviews of Dey, Müller and Sinha (1998) and Walker, Damien, Laud and Smith (1999). The intention is not to be complete or exhaustive, but rather to touch on research areas of interest, partly by example.
Cover Tree Bayesian Reinforcement Learning
Tziortziotis, Nikolaos; Dimitrakakis, Christos; Blekas, Konstantinos
2013-01-01
This paper proposes an online tree-based Bayesian approach for reinforcement learning. For inference, we employ a generalised context tree model. This defines a distribution on multivariate Gaussian piecewise-linear models, which can be updated in closed form. The tree structure itself is constructed using the cover tree method, which remains efficient in high dimensional spaces. We combine the model with Thompson sampling and approximate dynamic programming to obtain effective exploration po...
Bayesian kinematic earthquake source models
Minson, S. E.; Simons, M.; Beck, J. L.; Genrich, J. F.; Galetzka, J. E.; Chowdhury, F.; Owen, S. E.; Webb, F.; Comte, D.; Glass, B.; Leiva, C.; Ortega, F. H.
2009-12-01
Most coseismic, postseismic, and interseismic slip models are based on highly regularized optimizations which yield one solution which satisfies the data given a particular set of regularizing constraints. This regularization hampers our ability to answer basic questions such as whether seismic and aseismic slip overlap or instead rupture separate portions of the fault zone. We present a Bayesian methodology for generating kinematic earthquake source models with a focus on large subduction zone earthquakes. Unlike classical optimization approaches, Bayesian techniques sample the ensemble of all acceptable models presented as an a posteriori probability density function (PDF), and thus we can explore the entire solution space to determine, for example, which model parameters are well determined and which are not, or what is the likelihood that two slip distributions overlap in space. Bayesian sampling also has the advantage that all a priori knowledge of the source process can be used to mold the a posteriori ensemble of models. Although very powerful, Bayesian methods have up to now been of limited use in geophysical modeling because they are only computationally feasible for problems with a small number of free parameters due to what is called the "curse of dimensionality." However, our methodology can successfully sample solution spaces of many hundreds of parameters, which is sufficient to produce finite fault kinematic earthquake models. Our algorithm is a modification of the tempered Markov chain Monte Carlo (tempered MCMC or TMCMC) method. In our algorithm, we sample a "tempered" a posteriori PDF using many MCMC simulations running in parallel and evolutionary computation in which models which fit the data poorly are preferentially eliminated in favor of models which better predict the data. We present results for both synthetic test problems as well as for the 2007 Mw 7.8 Tocopilla, Chile earthquake, the latter of which is constrained by InSAR, local high
Bayesian Optimization for Adaptive MCMC
Mahendran, Nimalan; Wang, Ziyu; Hamze, Firas; De Freitas, Nando
2011-01-01
This paper proposes a new randomized strategy for adaptive MCMC using Bayesian optimization. This approach applies to non-differentiable objective functions and trades off exploration and exploitation to reduce the number of potentially costly objective function evaluations. We demonstrate the strategy in the complex setting of sampling from constrained, discrete and densely connected probabilistic graphical models where, for each variation of the problem, one needs to adjust the parameters o...
Inference in hybrid Bayesian networks
DEFF Research Database (Denmark)
Langseth, Helge; Nielsen, Thomas Dyhre; Rumí, Rafael
2009-01-01
and reliability block diagrams). However, limitations in the BNs' calculation engine have prevented BNs from becoming equally popular for domains containing mixtures of both discrete and continuous variables (so-called hybrid domains). In this paper we focus on these difficulties, and summarize some of the last decade's research on inference in hybrid Bayesian networks. The discussion is linked to an example model for estimating human reliability.
Quantile pyramids for Bayesian nonparametrics
2009-01-01
Pólya trees fix partitions and use random probabilities in order to construct random probability measures. With quantile pyramids we instead fix probabilities and use random partitions. For nonparametric Bayesian inference we use a prior which supports piecewise linear quantile functions, based on the need to work with a finite set of partitions, yet we show that the limiting version of the prior exists. We also discuss and investigate an alternative model based on the so-called substitut...
Space Shuttle RTOS Bayesian Network
Morris, A. Terry; Beling, Peter A.
2001-01-01
With shrinking budgets and the requirements to increase reliability and operational life of the existing orbiter fleet, NASA has proposed various upgrades for the Space Shuttle that are consistent with national space policy. The cockpit avionics upgrade (CAU), a high priority item, has been selected as the next major upgrade. The primary functions of cockpit avionics include flight control, guidance and navigation, communication, and orbiter landing support. Secondary functions include the provision of operational services for non-avionics systems such as data handling for the payloads and caution and warning alerts to the crew. Recently, a process to select the optimal commercial-off-the-shelf (COTS) real-time operating system (RTOS) for the CAU was conducted by United Space Alliance (USA) Corporation, a joint venture between Boeing and Lockheed Martin and the prime contractor for space shuttle operations. In order to independently assess the RTOS selection, NASA has used the Bayesian network-based scoring methodology described in this paper. Our two-stage methodology addresses the issue of RTOS acceptability by incorporating functional, performance and non-functional software measures related to reliability, interoperability, certifiability, efficiency, correctness, business, legal, product history, cost and life cycle. The first stage of the methodology involves obtaining scores for the various measures using a Bayesian network. The Bayesian network incorporates the causal relationships between the various and often competing measures of interest while also assisting the inherently complex decision analysis process with its ability to reason under uncertainty. The structure of the network and its prior probabilities are elicited from experts in the field of real-time operating systems. Scores for the various measures are computed using Bayesian probability. In the second stage, multi-criteria trade-off analyses are performed between the scores
Bayesian analysis of contingency tables
Gómez Villegas, Miguel A.; González Pérez, Beatriz
2005-01-01
The display of data by means of contingency tables is used in different approaches to statistical inference, for example, to address the test of homogeneity of independent multinomial distributions. We develop a Bayesian procedure to test simple null hypotheses versus bilateral alternatives in contingency tables. Given independent samples of two binomial distributions and taking a mixed prior distribution, we calculate the posterior probability that the proportion of successes in the first...
Bayesian Credit Ratings (new version)
Paola Cerchiello; Paolo Giudici
2013-01-01
In this contribution we aim at improving ordinal variable selection in the context of causal models. In this regard, we propose an approach that provides a formal inferential tool to compare the explanatory power of each covariate, and, therefore, to select an effective model for classification purposes. Our proposed model is Bayesian nonparametric, and, thus, keeps the amount of model specification to a minimum. We consider the case in which information from the covariates is at the ordinal ...
Numerical simulation of liquid jet breakup using smoothed particle hydrodynamics (SPH)
Pourabdian, Majid; Morad, Mohammad Reza
2016-01-01
In this paper, the breakup of a liquid jet is simulated using smoothed particle hydrodynamics (SPH), a meshless Lagrangian numerical method. To this end, the flow governing equations are discretized with the SPH method. The SPHysics open source code has been utilized for the numerical solutions and extended by adding surface tension effects. The proposed method is then validated on the dam-break-with-obstacle problem. Finally, simulation of two-dimensional liquid jet flow is carried out and its breakup behavior, considering one-phase flow, is investigated. The breakup length of the liquid in the Rayleigh regime is calculated for various flow conditions, such as different Reynolds and Weber numbers, and the results are validated against an experimental correlation. All numerical solutions are carried out for both the Wendland and cubic spline kernel functions, with the Wendland kernel function giving more accurate results. The results are also compared to the MPS method for inviscid liquid. T...
Cervical cancer survival prediction using hybrid of SMOTE, CART and smooth support vector machine
Purnami, S. W.; Khasanah, P. M.; Sumartini, S. H.; Chosuvivatwong, V.; Sriplung, H.
2016-04-01
According to the WHO, every two minutes one patient dies from cervical cancer. The high mortality rate is due to the lack of awareness among women of early detection. Several factors supposedly influence the survival of cervical cancer patients, including age, anemia status, stage, type of treatment, complications and secondary disease. This study aims to classify/predict cervical cancer survival based on those factors. Various classification methods were used: classification and regression tree (CART), smooth support vector machine (SSVM), and third-order spline SSVM (TSSVM). Since the cervical cancer data are imbalanced, the synthetic minority oversampling technique (SMOTE) is used to handle the imbalanced dataset. The performance of these methods is evaluated using accuracy, sensitivity and specificity. Results of this study show that balancing the data with SMOTE as a preprocessing step can improve classification performance. The SMOTE-SSVM method provided better results than SMOTE-TSSVM and SMOTE-CART.
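SMOTE, used above for balancing, synthesizes minority-class samples by interpolating between a minority point and one of its k nearest minority neighbours. Below is a minimal pure-Python sketch with invented 2-D data; real use would go through a library such as imbalanced-learn.

```python
import random

def smote(minority, n_new, k=3, rng=random.Random(0)):
    # Generate n_new synthetic samples from the minority class.
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    synthetic = []
    for _ in range(n_new):
        p = rng.choice(minority)
        # k nearest minority neighbours of p (excluding p itself)
        neighbours = sorted((q for q in minority if q is not p),
                            key=lambda q: dist2(p, q))[:k]
        q = rng.choice(neighbours)
        lam = rng.random()  # random point on the segment from p to q
        synthetic.append(tuple(x + lam * (y - x) for x, y in zip(p, q)))
    return synthetic

# Hypothetical minority-class feature vectors (e.g. scaled age and stage)
minority = [(1.0, 1.0), (1.2, 0.9), (0.9, 1.3), (1.1, 1.1)]
new_points = smote(minority, n_new=8)
```

Each synthetic point lies on a segment between two real minority points, so the oversampled class stays inside its own convex hull.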
Bayesian second law of thermodynamics
Bartolotta, Anthony; Carroll, Sean M.; Leichenauer, Stefan; Pollack, Jason
2016-08-01
We derive a generalization of the second law of thermodynamics that uses Bayesian updates to explicitly incorporate the effects of a measurement of a system at some point in its evolution. By allowing an experimenter's knowledge to be updated by the measurement process, this formulation resolves a tension between the fact that the entropy of a statistical system can sometimes fluctuate downward and the information-theoretic idea that knowledge of a stochastically evolving system degrades over time. The Bayesian second law can be written as $\Delta H(\rho_m, \rho) + \langle Q\rangle_{F|m} \ge 0$, where $\Delta H(\rho_m, \rho)$ is the change in the cross entropy between the original phase-space probability distribution $\rho$ and the measurement-updated distribution $\rho_m$, and $\langle Q\rangle_{F|m}$ is the expectation value of a generalized heat flow out of the system. We also derive refined versions of the second law that bound the entropy increase from below by a non-negative number, as well as Bayesian versions of integral fluctuation theorems. We demonstrate the formalism using simple analytical and numerical examples.
Quantum Inference on Bayesian Networks
Yoder, Theodore; Low, Guang Hao; Chuang, Isaac
2014-03-01
Because quantum physics is naturally probabilistic, it seems reasonable to expect physical systems to describe probabilities and their evolution in a natural fashion. Here, we use quantum computation to speed up sampling from a graphical probability model, the Bayesian network. A specialization of this sampling problem is approximate Bayesian inference, where the distribution on query variables is sampled given the values $e$ of evidence variables. Inference is a key part of modern machine learning and artificial intelligence tasks, but is known to be NP-hard. Classically, a single unbiased sample is obtained from a Bayesian network on $n$ variables with at most $m$ parents per node in time $O(nmP(e)^{-1})$, depending critically on $P(e)$, the probability that the evidence occurs in the first place. However, by implementing a quantum version of rejection sampling, we obtain a square-root speedup, taking $O(n2^m P(e)^{-1/2})$ time per sample. The speedup is the result of amplitude amplification, which is proving to be broadly applicable in sampling and machine learning tasks. In particular, we provide an explicit and efficient circuit construction that implements the algorithm without the need for oracle access.
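The classical baseline this paper speeds up, rejection sampling with evidence, can be sketched on a hypothetical two-node network (rain → wet grass, with made-up probabilities). Samples inconsistent with the evidence are discarded, which is why the cost scales inversely with the probability of the evidence.

```python
import random

rng = random.Random(1)

# Hypothetical two-node network: P(Rain) = 0.2,
# P(Wet | Rain) = 0.9, P(Wet | no Rain) = 0.3. Evidence: Wet = True.
def sample_joint():
    rain = rng.random() < 0.2
    wet = rng.random() < (0.9 if rain else 0.3)
    return rain, wet

def rejection_sample(n):
    kept, rain_kept = 0, 0
    for _ in range(n):
        rain, wet = sample_joint()
        if wet:              # keep only samples consistent with the evidence
            kept += 1
            rain_kept += rain
    return rain_kept / kept, kept / n

p_rain_given_wet, p_evidence = rejection_sample(200_000)
# Exact values: P(Wet) = 0.2*0.9 + 0.8*0.3 = 0.42,
# P(Rain | Wet) = 0.18 / 0.42 ≈ 0.429
```

Only about 42% of samples survive here; when the evidence is rare, almost all work is wasted, which is the inefficiency that amplitude amplification attacks.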
Texture-preserving Bayesian image reconstruction for low-dose CT
Zhang, Hao; Han, Hao; Hu, Yifan; Liu, Yan; Ma, Jianhua; Li, Lihong; Moore, William; Liang, Zhengrong
2016-03-01
Markov random field (MRF) model has been widely used in Bayesian image reconstruction to reconstruct piecewise smooth images in the presence of noise, such as in low-dose X-ray computed tomography (LdCT). While it can preserve edge sharpness via an edge-preserving potential function, its regional smoothing may sacrifice tissue image textures, which have been recognized as useful imaging biomarkers, and thus it compromises clinical tasks such as differentiating malignant vs. benign lesions, e.g., lung nodules or colon polyps. This study aims to shift the edge-preserving regional noise smoothing paradigm to a texture-preserving framework for LdCT image reconstruction while retaining the advantage of the MRF's neighborhood system for edge preservation. Specifically, we adapted the MRF model to incorporate the image textures of lung, bone, fat, muscle, etc. from a previous full-dose CT scan as a priori knowledge for texture-preserving Bayesian reconstruction of current LdCT images. To show the feasibility of the proposed reconstruction framework, experiments using clinical patient scans (with lung nodule or colon polyp) were conducted. The experimental outcomes showed a noticeable gain from the a priori knowledge for LdCT image reconstruction with the well-known Haralick texture measures. Thus, it is conjectured that texture-preserving LdCT reconstruction has advantages over the edge-preserving regional smoothing paradigm for texture-specific clinical applications.
12th Brazilian Meeting on Bayesian Statistics
Louzada, Francisco; Rifo, Laura; Stern, Julio; Lauretto, Marcelo
2015-01-01
Through refereed papers, this volume focuses on the foundations of the Bayesian paradigm; their comparison to objectivistic or frequentist Statistics counterparts; and the appropriate application of Bayesian foundations. This research in Bayesian Statistics is applicable to data analysis in biostatistics, clinical trials, law, engineering, and the social sciences. EBEB, the Brazilian Meeting on Bayesian Statistics, is held every two years by the ISBrA, the International Society for Bayesian Analysis, one of the most active chapters of the ISBA. The 12th meeting took place March 10-14, 2014 in Atibaia. Interest in foundations of inductive Statistics has grown recently in accordance with the increasing availability of Bayesian methodological alternatives. Scientists need to deal with the ever more difficult choice of the optimal method to apply to their problem. This volume shows how Bayes can be the answer. The examination and discussion on the foundations work towards the goal of proper application of Bayesia...
Bayesian Posterior Distributions Without Markov Chains
Cole, Stephen R.; Chu, Haitao; Greenland, Sander; Hamra, Ghassan; Richardson, David B.
2012-01-01
Bayesian posterior parameter distributions are often simulated using Markov chain Monte Carlo (MCMC) methods. However, MCMC methods are not always necessary and do not help the uninitiated understand Bayesian inference. As a bridge to understanding Bayesian inference, the authors illustrate a transparent rejection sampling method. In example 1, they illustrate rejection sampling using 36 cases and 198 controls from a case-control study (1976–1983) assessing the relation between residential ex...
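The transparent rejection-sampling idea can be sketched for the simplest case, the posterior of a binomial proportion under a flat prior. The data below (7 successes in 20 trials) are hypothetical, not the case-control counts from the paper: draw candidates from the prior, then accept each with probability equal to its likelihood divided by the maximum likelihood.

```python
import math
import random

rng = random.Random(2)

y, n = 7, 20            # hypothetical data: 7 successes in 20 trials
p_hat = y / n           # the likelihood peaks at the sample proportion

def log_lik(p):
    return y * math.log(p) + (n - y) * math.log(1 - p)

log_lik_max = log_lik(p_hat)

def posterior_draws(n_draws):
    out = []
    while len(out) < n_draws:
        p = rng.random()  # candidate from the flat Uniform(0, 1) prior
        # Accept with probability L(p) / L(p_hat), which is at most 1
        if math.log(rng.random()) < log_lik(p) - log_lik_max:
            out.append(p)
    return out

draws = posterior_draws(5000)
post_mean = sum(draws) / len(draws)
# With a flat prior the posterior is Beta(8, 14), whose mean is 8/22 ≈ 0.364
```

Unlike MCMC, every accepted draw is an independent sample from the posterior, so no convergence diagnostics are needed.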
Bayesian networks with applications in reliability analysis
Langseth, Helge
2002-01-01
A common goal of the papers in this thesis is to propose, formalize and exemplify the use of Bayesian networks as a modelling tool in reliability analysis. The papers span work in which Bayesian networks are merely used as a modelling tool (Paper I), work where models are specially designed to utilize the inference algorithms of Bayesian networks (Paper II and Paper III), and work where the focus has been on extending the applicability of Bayesian networks to very large domains (Paper IV and ...
3D Profile Filter Algorithm Based on Parallel Generalized B-spline Approximating Gaussian
Institute of Scientific and Technical Information of China (English)
REN Zhiying; GAO Chenghui; SHEN Ding
2015-01-01
Currently, approximation methods for the Gaussian filter by other spline filters have been developed. However, these methods are only suitable for one-dimensional filtering; when they are used for three-dimensional filtering, rounding and quantization errors are passed on at every stage. In this paper, a new and high-precision implementation approach for the Gaussian filter is described, which is suitable for three-dimensional reference filtering. Based on the theory of generalized B-spline functions and the variational principle, the transmission characteristics of a digital filter can be changed through the sensitivity of the parameters (t1, t2), and rounding and quantization errors can be reduced by implementing the filter in a parallel form instead of a cascade form. Finally, the approximation filter for the Gaussian filter is obtained. In order to verify the feasibility of the new algorithm, reference extraction by conventional methods is also performed for comparison. The experiments are conducted on a measured optical surface, and the results show that the total calculation by the new algorithm requires only 0.07 s for 480×480 data points; the amplitude deviation between the reference of the parallel-form filter and the Gaussian filter is smaller; and the new method is closer to the characteristics of the Gaussian filter, as shown by the analysis of three-dimensional roughness parameters, compared with the cascade generalized B-spline approximation of the Gaussian. So the new algorithm is also efficient and accurate for the implementation of the Gaussian filter in surface roughness measurement applications.
Very Smooth Points of Spaces of Operators
Indian Academy of Sciences (India)
T S S R K Rao
2003-02-01
In this paper we study very smooth points of Banach spaces with special emphasis on spaces of operators. We show that when the space of compact operators is an M-ideal in the space of bounded operators, a very smooth operator attains its norm at a unique vector (up to a constant multiple) and its image under the operator is a very smooth point of the range space. We show that if for every equivalent norm on a Banach space the dual unit ball has a very smooth point, then the space has the Radon–Nikodým property. We give an example of a smooth Banach space without any very smooth points.
Exclusive breastfeeding practice in Nigeria: a bayesian stepwise regression analysis.
Gayawan, Ezra; Adebayo, Samson B; Chitekwe, Stanley
2014-11-01
Despite the importance of breast milk, the prevalence of exclusive breastfeeding (EBF) in Nigeria is far lower than what has been recommended for developing countries. Worse still, the practice has recently been on a downward trend in the country. This study was aimed at investigating the determinants and geographical variations of EBF in Nigeria. Any intervention programme would require a good knowledge of factors that enhance the practice. A pooled data set from the Nigeria Demographic and Health Surveys conducted in 1999, 2003, and 2008 was analyzed using a Bayesian stepwise approach that involves simultaneous selection of variables and smoothing parameters. Further, the approach allows geographical variations to be investigated at the highly disaggregated level of states. Within a Bayesian context, appropriate priors are assigned to all the parameters and functions. Findings reveal that education of women and their partners, place of delivery, mother's age at birth, and current age of child are associated with increasing prevalence of EBF. However, visits for antenatal care during pregnancy are not associated with EBF in Nigeria. Further, results reveal considerable geographical variations in the practice of EBF. The likelihood of exclusively breastfeeding children is significantly higher in Kwara, Kogi, Osun, and Oyo states but lower in Jigawa, Katsina, and Yobe. Intensive interventions that can lead to improved practice are required in all states in Nigeria. The importance of breastfeeding needs to be emphasized to women during antenatal visits, as this can encourage and enhance the practice after delivery. PMID:24619227
A Novel Approach of Cardiac Segmentation In CT Image Based On Spline Interpolation
International Nuclear Information System (INIS)
Organ segmentation in CT images is the basis of organ model reconstruction; thus precisely detecting and extracting the organ boundary is key for reconstruction. In CT images the heart is often adjacent to the surrounding tissues, and the gray-level gradient between them is slight, which makes it difficult to apply classical segmentation methods. In this paper we propose a novel algorithm for cardiac segmentation in CT images which combines gray-gradient methods with B-spline interpolation. The algorithm detects the cardiac boundaries accurately while remaining fast, because the processing is automatic.
Numerical solution of the controlled Duffing oscillator by semi-orthogonal spline wavelets
Lakestani, M.; Razzaghi, M.; Dehghan, M.
2006-09-01
This paper presents a numerical method for solving the controlled Duffing oscillator. The method can be extended to nonlinear calculus of variations and optimal control problems. The method is based upon compactly supported linear semi-orthogonal B-spline wavelets. The differential and integral expressions which arise in the system dynamics, the performance index and the boundary conditions are converted into some algebraic equations which can be solved for the unknown coefficients. Illustrative examples are included to demonstrate the validity and applicability of the technique.
A cubic B-spline Galerkin approach for the numerical simulation of the GEW equation
Directory of Open Access Journals (Sweden)
S. Battal Gazi Karakoç
2016-02-01
The generalized equal width (GEW) wave equation is solved numerically by using a lumped Galerkin approach with cubic B-spline functions. The proposed numerical scheme is tested by applying two test problems: a single solitary wave and the interaction of two solitary waves. In order to determine the performance of the algorithm, the error norms L2 and L∞ and the invariants I1, I2 and I3 are calculated. For the linear stability analysis of the numerical algorithm, the von Neumann approach is used. The obtained findings show that the presented numerical scheme is preferable to some recent numerical methods.
B-Spline Filtering for Automatic Detection of Calcification Lesions in Mammograms
Bueno, G.; Sánchez, S.; Ruiz, M.
2006-10-01
Breast cancer continues to be an important health problem among women. Early detection is the only way to improve breast cancer prognosis and significantly reduce mortality. By using CAD systems, radiologists can improve their ability to detect and classify lesions in mammograms. In this study, the usefulness of a B-spline approach based on a gradient scheme, compared to wavelet and adaptive filtering, has been investigated for calcification lesion detection as part of CAD systems. The technique has been applied to tissues of different density. A qualitative validation shows the success of the method.
A numerical solution of the Burgers' equation using septic B-splines
Energy Technology Data Exchange (ETDEWEB)
Ramadan, Mohamed A. [Department of Mathematics, Faculty of Science, Menoufia University, Shiben El-Koom (Egypt)] e-mail: mramadan@mailer.eun.eg; El-Danaf, Talaat S. [Department of Mathematics, Faculty of Science, Menoufia University, Shiben El-Koom (Egypt); Abd Alaal, Faisal E.I. [Department of Mathematics, Faculty of Science, Menoufia University, Shiben El-Koom (Egypt)
2005-11-01
In this paper, numerical solutions of the nonlinear Burgers' equation are obtained by a method based on collocation of septic B-splines over finite elements. Applying the Von-Neumann stability analysis, the proposed method is shown to be unconditionally stable. Numerical solutions of the modified Burgers' equation are also obtained by making a simple change of the suggested numerical scheme for the Burgers' equation. The accuracy of the presented method is demonstrated by two test problems. The numerical results are found to be in good agreement with the exact solutions.
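Collocation schemes like the one above are built on B-spline basis functions. The following is a minimal sketch of their evaluation via the Cox–de Boor recursion; it uses a cubic basis on an invented uniform knot vector for brevity (the paper uses septic splines, degree 7).

```python
def bspline_basis(i, p, t, knots):
    # Cox-de Boor recursion for the i-th B-spline basis function of degree p.
    if p == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = right = 0.0
    d1 = knots[i + p] - knots[i]
    if d1 > 0:
        left = (t - knots[i]) / d1 * bspline_basis(i, p - 1, t, knots)
    d2 = knots[i + p + 1] - knots[i + 1]
    if d2 > 0:
        right = ((knots[i + p + 1] - t) / d2
                 * bspline_basis(i + 1, p - 1, t, knots))
    return left + right

knots = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]  # uniform knot vector
degree = 3                                   # cubic here; septic would be 7
t = 4.5
vals = [bspline_basis(i, degree, t, knots)
        for i in range(len(knots) - degree - 1)]
```

On the interior interval [knots[degree], knots[-degree-1]] the basis functions are non-negative and sum to one (partition of unity), which is what makes them stable shape functions for collocation.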
Energy Spectra of the Confined Atoms Obtained by Using B-Splines
Institute of Scientific and Technical Information of China (English)
SHI Ting-Yun; BAO Cheng-Guang; LI Bai-Wen
2001-01-01
We have calculated the energy spectra of one- and two-electron atoms (ions) centered in an impenetrable spherical box by variational method with B-splines as basis functions. Accurate results are obtained for both large and small radii of confinement. The critical box radius of confined hydrogen atom is also calculated to show the usefulness of our method. A partial energy degeneracy in confined hydrogen atom is found when the radius of spherical box is equal to the distance at which a node of single-node wavefunctions of free hydrogen atom is located.
Simulating the focusing of light onto 1D nanostructures with a B-spline modal method
Bouchon, P.; Chevalier, P.; Héron, S.; Pardo, F.; Pelouard, J.-L.; Haïdar, R.
2015-03-01
Focusing light onto nanostructures with spherical lenses is a first step toward enhancing the field, and is widely used in applications, in particular for enhancing non-linear effects like second harmonic generation. Nonetheless, the electromagnetic response of such nanostructures, which have subwavelength patterns, to a focused beam cannot be described by the simple ray-tracing formalism. Here, we present a method to compute the response to a focused beam, based on the B-spline modal method. The simulation of a Gaussian focused beam is obtained thanks to a truncated decomposition into plane waves computed on a single period, which limits the computational burden.
Complex wavenumber Fourier analysis of the B-spline based finite element method
Czech Academy of Sciences Publication Activity Database
Kolman, Radek; Plešek, Jiří; Okrouhlík, Miloslav
2014-01-01
Roč. 51, č. 2 (2014), s. 348-359. ISSN 0165-2125 R&D Projects: GA ČR(CZ) GAP101/11/0288; GA ČR(CZ) GAP101/12/2315; GA ČR GPP101/10/P376; GA ČR GA101/09/1630 Institutional support: RVO:61388998 Keywords : elastic wave propagation * dispersion errors * B-spline * finite element method * isogeometric analysis Subject RIV: JR - Other Machinery Impact factor: 1.513, year: 2014 http://www.sciencedirect.com/science/article/pii/S0165212513001479
Numerical solution of elastic wave propagation problems by B-spline finite element method
Czech Academy of Sciences Publication Activity Database
Kolman, Radek; Plešek, Jiří; Okrouhlík, Miloslav; Gabriel, Dušan
Aveiro: Universidade de Aveiro, 2012, s. 1-10. ISBN 978-972-99784-2-5. [ECCOMAS Young Investigators Conference. Aveiro (PT), 24.04.2012-27.04.2012] R&D Projects: GA ČR GPP101/10/P376; GA ČR GA101/09/1630; GA ČR(CZ) GAP101/11/0288; GA ČR(CZ) GAP101/12/2315 Institutional research plan: CEZ:AV0Z20760514 Keywords: elastic wave propagation * B-spline finite element method * spurious oscillations Subject RIV: BI - Acoustics
Steady-state solution of the PTC thermistor problem using a quadratic spline finite element method
Directory of Open Access Journals (Sweden)
Bahadir A. R.
2002-01-01
The problem of heat transfer in a Positive Temperature Coefficient (PTC) thermistor, which may form one element of an electric circuit, is solved numerically by a finite element method. The approach used is based on the Galerkin finite element method using quadratic splines as shape functions. The resulting system of ordinary differential equations is solved by the finite difference method. Comparison is made with numerical and analytical solutions, and the accuracy of the computed solutions indicates that the method is well suited for the solution of the PTC thermistor problem.
Beam smoothing and temporal effects
International Nuclear Information System (INIS)
Until recently, and in spite of the introduction of smoothing methods, direct drive laser fusion suffered numerous setbacks in experiments, owing to nonlinear and anomalous phenomena. This report deals with a mechanism whereby self-generated von-Laue gratings prevent the propagation of laser radiation through the outermost plasma corona and thereby prevent energy deposition. (TEC). 36 refs., 5 figs
Subsampling in Smoothed Range Spaces
Phillips, Jeff M.; Zheng, Yan
2015-01-01
We consider smoothed versions of geometric range spaces, so an element of the ground set (e.g. a point) can be contained in a range with a non-binary value in $[0,1]$. Similar notions have been considered for kernels; we extend them to more general types of ranges. We then consider approximations of these range spaces through $\varepsilon$-nets and $\varepsilon$-samples (aka $\varepsilon$-approximations). We characterize when size bounds for $\varepsilon$-samples on kernels can be extended...
Smooth Optimization with Approximate Gradient
d'Aspremont, Alexandre
2005-01-01
We show that the optimal complexity of Nesterov's smooth first-order optimization algorithm is preserved when the gradient is only computed up to a small, uniformly bounded error. In applications of this method to semidefinite programs, this means in some instances computing only a few leading eigenvalues of the current iterate instead of a full matrix exponential, which significantly reduces the method's computational cost. This also allows sparse problems to be solved efficiently using spar...
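The claim above, that acceleration survives a uniformly bounded gradient error, can be illustrated on a toy 1-D quadratic. This is an invented example, not the semidefinite-programming setting of the paper: the "approximate gradient" is the exact gradient plus bounded noise.

```python
import random

rng = random.Random(3)

A = 0.2  # curvature of the toy objective f(x) = 0.5 * A * x**2

def approx_grad(x):
    # Exact gradient A*x plus a small bounded error, mimicking an
    # approximate oracle (e.g. a truncated eigen-decomposition).
    return A * x + rng.uniform(-1e-3, 1e-3)

def nesterov(x0, L=1.0, iters=1000):
    # Nesterov's accelerated scheme for an L-smooth convex objective.
    x, y = x0, x0
    for k in range(iters):
        x_next = y - approx_grad(y) / L            # gradient step at lookahead
        y = x_next + k / (k + 3) * (x_next - x)    # momentum extrapolation
        x = x_next
    return x

x_star = nesterov(5.0)
# x_star ends up near the minimizer 0, up to the gradient-error level
```

The iterate settles at a distance from the optimum governed by the error bound rather than diverging, which is the qualitative content of the complexity result.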
Bayesian phylogeography finds its roots.
Directory of Open Access Journals (Sweden)
Philippe Lemey
2009-09-01
As a key factor in endemic and epidemic dynamics, the geographical distribution of viruses has been frequently interpreted in the light of their genetic histories. Unfortunately, inference of historical dispersal or migration patterns of viruses has mainly been restricted to model-free heuristic approaches that provide little insight into the temporal setting of the spatial dynamics. The introduction of probabilistic models of evolution, however, offers unique opportunities to engage in this statistical endeavor. Here we introduce a Bayesian framework for inference, visualization and hypothesis testing of phylogeographic history. By implementing character mapping in a Bayesian software package that samples time-scaled phylogenies, we enable the reconstruction of timed viral dispersal patterns while accommodating phylogenetic uncertainty. Standard Markov model inference is extended with a stochastic search variable selection procedure that identifies the parsimonious descriptions of the diffusion process. In addition, we propose priors that can incorporate geographical sampling distributions or characterize alternative hypotheses about the spatial dynamics. To visualize the spatial and temporal information, we summarize inferences using virtual globe software. We describe how Bayesian phylogeography compares with previous parsimony analysis in the investigation of the influenza A H5N1 origin and H5N1 epidemiological linkage among sampling localities. Analysis of rabies in West African dog populations reveals how virus diffusion may enable endemic maintenance through continuous epidemic cycles. From these analyses, we conclude that our phylogeographic framework will be an important asset in molecular epidemiology that can be easily generalized to infer biogeography from genetic data for many organisms.
Bayesian Methods and Universal Darwinism
Campbell, John
2009-12-01
Bayesian methods since the time of Laplace have been understood by their practitioners as closely aligned to the scientific method. Indeed a recent champion of Bayesian methods, E. T. Jaynes, titled his textbook on the subject Probability Theory: the Logic of Science. Many philosophers of science, including Karl Popper and Donald Campbell, have interpreted the evolution of science as a Darwinian process consisting of a 'copy with selective retention' algorithm abstracted from Darwin's theory of Natural Selection. Arguments are presented for an isomorphism between Bayesian methods and Darwinian processes. Universal Darwinism, as the term has been developed by Richard Dawkins, Daniel Dennett and Susan Blackmore, is the collection of scientific theories which explain the creation and evolution of their subject matter as due to the operation of Darwinian processes. These subject matters span the fields of atomic physics, chemistry, biology and the social sciences. The principle of Maximum Entropy states that systems will evolve to states of highest entropy subject to the constraints of scientific law. This principle may be inverted to provide illumination as to the nature of scientific law. Our best cosmological theories suggest the universe contained much less complexity during the period shortly after the Big Bang than it does at present. The scientific subject matter of atomic physics, chemistry, biology and the social sciences has been created since that time. An explanation is proposed for the existence of this subject matter as due to the evolution of constraints, in the form of adaptations, imposed on Maximum Entropy. It is argued these adaptations were discovered and instantiated through the operation of a succession of Darwinian processes.
A S Mugglin; Carlin, B. P.; Zhu, L.; E Conlo
1999-01-01
Geographic information systems (GISs) offer a powerful tool to geographers, foresters, statisticians, public health officials, and other users of spatially referenced regional data sets. However, as useful as they are for data display and trend detection, they typically feature little ability for statistical inference, leaving the user in doubt as to the significance of the various patterns and 'hot spots' identified. Unfortunately, classical statistical methods are often ill suited for this ...
Mobile real-time EEG imaging: Bayesian inference with sparse, temporally smooth source priors
DEFF Research Database (Denmark)
Hansen, Lars Kai; Hansen, Sofie Therese; Stahlhut, Carsten
2013-01-01
EEG based real-time imaging of human brain function has many potential applications including quality control, in-line experimental design, brain state decoding, and neuro-feedback. In mobile applications these possibilities are attractive as elements in systems for personal state monitoring and...
Hybrid optimization and Bayesian inference techniques for a non-smooth radiation detection problem
Stefanescu, Razvan; Schmidt, Kathleen; Hite, Jason; Smith, Ralph; Mattingly, John
2016-01-01
In this investigation, we propose several algorithms to recover the location and intensity of a radiation source located in a simulated 250 m x 180 m block in an urban center based on synthetic measurements. Radioactive decay and detection are Poisson random processes, so we employ likelihood functions based on this distribution. Due to the domain geometry and the proposed response model, the negative logarithm of the likelihood is only piecewise continuous differentiable, and it has multiple...
Bayesian Query-Focused Summarization
Daumé, Hal
2009-01-01
We present BayeSum (for ``Bayesian summarization''), a model for sentence extraction in query-focused summarization. BayeSum leverages the common case in which multiple documents are relevant to a single query. Using these documents as reinforcement for query terms, BayeSum is not afflicted by the paucity of information in short queries. We show that approximate inference in BayeSum is possible on large data sets and results in a state-of-the-art summarization system. Furthermore, we show how BayeSum can be understood as a justified query expansion technique in the language modeling for IR framework.
Numeracy, frequency, and Bayesian reasoning
Directory of Open Access Journals (Sweden)
Gretchen B. Chapman
2009-02-01
Full Text Available Previous research has demonstrated that Bayesian reasoning performance is improved if uncertainty information is presented as natural frequencies rather than single-event probabilities. A questionnaire study of 342 college students replicated this effect but also found that the performance-boosting benefits of the natural frequency presentation occurred primarily for participants who scored high in numeracy. This finding suggests that even comprehension and manipulation of natural frequencies requires a certain threshold of numeracy abilities, and that the beneficial effects of natural frequency presentation may not be as general as previously believed.
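The probability-format versus natural-frequency contrast studied in this abstract can be made concrete with a short sketch. The numbers below are illustrative assumptions (a textbook-style diagnostic example), not values from the questionnaire study:

```python
# Probability format: Bayes' rule on single-event probabilities.
# P(D|+) = P(+|D)P(D) / [P(+|D)P(D) + P(+|not D)P(not D)]
p_d, sens, fpr = 0.01, 0.80, 0.096          # assumed prevalence, sensitivity, false-positive rate
posterior = (sens * p_d) / (sens * p_d + fpr * (1 - p_d))

# Natural-frequency format: the same numbers re-expressed as counts of people.
n = 1000
sick = n * p_d                   # 10 people have the condition
sick_pos = sick * sens           # 8 of them test positive
healthy_pos = (n - sick) * fpr   # about 95 healthy people also test positive
posterior_freq = sick_pos / (sick_pos + healthy_pos)

print(round(posterior, 3), round(posterior_freq, 3))  # the two formats agree
```

The frequency version involves only counts and one division, which is the usual explanation for why it is easier to reason with.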
Bayesian inference for Hawkes processes
DEFF Research Database (Denmark)
Rasmussen, Jakob Gulddahl
2013-01-01
The Hawkes process is a practically and theoretically important class of point processes, but parameter-estimation for such a process can pose various problems. In this paper we explore and compare two approaches to Bayesian inference. The first approach is based on the so-called conditional intensity function, while the second approach is based on an underlying clustering and branching structure in the Hawkes process. For practical use, MCMC (Markov chain Monte Carlo) methods are employed. The two approaches are compared numerically using three examples of the Hawkes process.
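The conditional intensity function central to the first approach above can be illustrated by simulating a self-exciting Hawkes process. This is a minimal sketch with an exponential excitation kernel and assumed parameter values, using Ogata's thinning algorithm (the paper's MCMC inference is not reproduced here):

```python
import math, random

def hawkes_intensity(t, events, mu=0.5, alpha=0.8, beta=1.2):
    """Conditional intensity lambda(t) = mu + sum of alpha*beta*exp(-beta*(t - ti))."""
    return mu + sum(alpha * beta * math.exp(-beta * (t - ti)) for ti in events if ti < t)

def simulate_hawkes(T=50.0, mu=0.5, alpha=0.8, beta=1.2, seed=0):
    """Ogata's thinning: propose from an upper bound on the intensity, accept with ratio."""
    rng = random.Random(seed)
    events, t = [], 0.0
    while t < T:
        # Intensity just after time t bounds the (decaying) intensity until the next event.
        lam_bar = hawkes_intensity(t, events, mu, alpha, beta) + alpha * beta
        t += rng.expovariate(lam_bar)    # candidate inter-arrival time
        if t < T and rng.random() * lam_bar <= hawkes_intensity(t, events, mu, alpha, beta):
            events.append(t)             # accepted: intensity jumps by alpha*beta
    return events

events = simulate_hawkes()
print(f"{len(events)} events on [0, 50]")
```

With branching ratio alpha = 0.8 < 1 the process is stable, and the expected count is roughly mu*T/(1 - alpha).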
Collaborative Kalman Filtration: Bayesian Perspective
Czech Academy of Sciences Publication Activity Database
Dedecius, Kamil
Lisbon, Portugal: Institute for Systems and Technologies of Information, Control and Communication (INSTICC), 2014, s. 468-474. ISBN 978-989-758-039-0. [11th International Conference on Informatics in Control, Automation and Robotics - ICINCO 2014. Vienna (AT), 01.09.2014-03.09.2014] R&D Projects: GA ČR(CZ) GP14-06678P Institutional support: RVO:67985556 Keywords : Bayesian analysis * Kalman filter * distributed estimation Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2014/AS/dedecius-0431324.pdf
Smooth approximation of data with applications to interpolating and smoothing
Czech Academy of Sciences Publication Activity Database
Segeth, Karel
Prague: Institute of Mathematics, Academy of Sciences of the Czech Republic, 2013 - (Chleboun, J.; Segeth, K.; Šístek, J.; Vejchodský, T.), s. 181-186 ISBN 978-80-85823-62-2. [Programy a algoritmy numerické matematiky /16./. Dolní Maxov (CZ), 03.06.2012-08.06.2012] Institutional support: RVO:67985840 Keywords : smooth approximation Subject RIV: BA - General Mathematics http://users.math.cas.cz/~panm/Panm16/proceedings_final/181_segeth.pdf
Bayesian credible interval construction for Poisson statistics
Institute of Scientific and Technical Information of China (English)
ZHU Yong-Sheng
2008-01-01
The construction of the Bayesian credible (confidence) interval for a Poisson observable including both signal and background, with and without systematic uncertainties, is presented. Introducing the conditional probability satisfying the requirement that the background not be larger than the observed events in order to construct the Bayesian credible interval is also discussed. A Fortran routine, BPOCI, has been developed to implement the calculation.
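The simplest case described above can be sketched directly. With a flat prior and no background, the posterior for a Poisson mean given n observed counts is Gamma(n + 1), so a central credible interval comes from the Gamma quantile function (this is a sketch of the background-free case only; the cited BPOCI routine also handles backgrounds and systematics):

```python
from scipy.stats import gamma

def poisson_credible_interval(n_obs, cl=0.68):
    """Central Bayesian credible interval for a Poisson mean, flat prior, no background.

    Posterior: mu | n_obs ~ Gamma(n_obs + 1, scale=1).
    """
    tail = (1.0 - cl) / 2.0
    post = gamma(n_obs + 1)
    return post.ppf(tail), post.ppf(1.0 - tail)

lo, hi = poisson_credible_interval(10)
print(f"68% credible interval for n=10: [{lo:.2f}, {hi:.2f}]")
```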
Bayesian Decision Theoretical Framework for Clustering
Chen, Mo
2011-01-01
In this thesis, we establish a novel probabilistic framework for the data clustering problem from the perspective of Bayesian decision theory. The Bayesian decision theory view justifies the important questions: what is a cluster and what a clustering algorithm should optimize. We prove that the spectral clustering (to be specific, the…
Bayesian Statistics for Biological Data: Pedigree Analysis
Stanfield, William D.; Carlton, Matthew A.
2004-01-01
The use of Bayes' formula is applied to the biological problem of pedigree analysis to show that the Bayes' formula and non-Bayesian or "classical" methods of probability calculation give different answers. First year college students of biology can be introduced to the Bayesian statistics.
Using Bayesian Networks to Improve Knowledge Assessment
Millan, Eva; Descalco, Luis; Castillo, Gladys; Oliveira, Paula; Diogo, Sandra
2013-01-01
In this paper, we describe the integration and evaluation of an existing generic Bayesian student model (GBSM) into an existing computerized testing system within the Mathematics Education Project (PmatE--Projecto Matematica Ensino) of the University of Aveiro. This generic Bayesian student model had been previously evaluated with simulated…
Nonparametric Bayesian Modeling of Complex Networks
DEFF Research Database (Denmark)
Schmidt, Mikkel Nørgaard; Mørup, Morten
2013-01-01
Modeling structure in complex networks using Bayesian nonparametrics makes it possible to specify flexible model structures and infer the adequate model complexity from the observed data. This article provides a gentle introduction to nonparametric Bayesian modeling of complex networks: Using ... for complex networks can be derived and point out relevant literature ...
Compiling Relational Bayesian Networks for Exact Inference
DEFF Research Database (Denmark)
Jaeger, Manfred; Chavira, Mark; Darwiche, Adnan
2004-01-01
We describe a system for exact inference with relational Bayesian networks as defined in the publicly available \\primula\\ tool. The system is based on compiling propositional instances of relational Bayesian networks into arithmetic circuits and then performing online inference by evaluating and ...
Bayesian analysis of exoplanet and binary orbits
Schulze-Hartung, Tim; Henning, Thomas
2012-01-01
We introduce BASE (Bayesian astrometric and spectroscopic exoplanet detection and characterisation tool), a novel program for the combined or separate Bayesian analysis of astrometric and radial-velocity measurements of potential exoplanet hosts and binary stars. The capabilities of BASE are demonstrated using all publicly available data of the binary Mizar A.
Computational methods for Bayesian model choice
Robert, Christian P.; Wraith, Darren
2009-01-01
In this note, we shortly survey some recent approaches on the approximation of the Bayes factor used in Bayesian hypothesis testing and in Bayesian model choice. In particular, we reassess importance sampling, harmonic mean sampling, and nested sampling from a unified perspective.
Proteomics Improves the Prediction of Burns Mortality: Results from Regression Spline Modeling
Finnerty, Celeste C.; Ju, Hyunsu; Spratt, Heidi; Victor, Sundar; Jeschke, Marc G.; Hegde, Sachin; Bhavnani, Suresh K.; Luxon, Bruce A.; Brasier, Allan R.; Herndon, David N.
2012-01-01
Prediction of mortality in severely burned patients remains unreliable. Although clinical covariates and plasma protein abundance have been used with varying degrees of success, the triad of burn size, inhalation injury, and age remains the most reliable predictor. We investigated the effect of combining proteomics variables with these three clinical covariates on prediction of mortality in burned children. Serum samples were collected from 330 burned children (burns covering >25% of the total body surface area) between admission and the time of the first operation for clinical chemistry analyses and proteomic assays of cytokines. Principal component analysis revealed that serum protein abundance and the clinical covariates each provided independent information regarding patient survival. To determine whether combining proteomics with clinical variables improves prediction of patient mortality, we used multivariate adaptive regression splines, since the relationships between analytes and mortality were not linear. Combining these factors increased overall outcome prediction accuracy from 52% to 81% and area under the receiver operating characteristic curve from 0.82 to 0.95. Thus, the predictive accuracy of burns mortality is substantially improved by combining protein abundance information with clinical covariates in a multivariate adaptive regression splines classifier, a model currently being validated in a prospective study. PMID:22686201
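The multivariate adaptive regression splines (MARS) model used above builds its fits from "hinge" basis functions max(0, x - t) and max(0, t - x). A minimal sketch of the core move of the MARS forward pass, fitting a single knot by exhaustive search on assumed toy data (not the burns dataset):

```python
import numpy as np

# Toy data: a response whose slope changes at x = 4, plus noise.
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 200)
y = np.where(x < 4, 1.0, 1.0 + 2.0 * (x - 4)) + rng.normal(0, 0.2, x.size)

def fit_with_knot(t):
    """Least-squares fit with intercept plus the two hinge functions at knot t."""
    basis = np.column_stack([np.ones_like(x),
                             np.maximum(0, x - t),
                             np.maximum(0, t - x)])
    coef, *_ = np.linalg.lstsq(basis, y, rcond=None)
    return np.sum((y - basis @ coef) ** 2)

# MARS-style knot selection: pick the knot minimizing the residual sum of squares.
best_t = min(x[5:-5], key=fit_with_knot)
print(f"selected knot ~ {best_t:.2f} (true change point at 4.0)")
```

Full MARS repeats this greedy search over variables and interactions and then prunes, which is what makes it suitable for the nonlinear analyte-mortality relationships described above.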
Xu, ShengYong; Wu, JuanJuan; Zhu, Li; Li, WeiHao; Wang, YiTian; Wang, Na
2015-12-01
Visual navigation is a fundamental technique for an intelligent cotton-picking robot. The many plant components and ground cover in a cotton field make furrow recognition and trajectory extraction difficult. In this paper, a new field navigation path extraction method is presented. Firstly, the color image in RGB color space is pre-processed by the OTSU threshold algorithm and noise filtering. Secondly, the binary image is divided into numerous horizontal spline areas. In each area, connected regions near the image's vertical center line are calculated by the Two-Pass algorithm. The center points of the connected regions are candidate points for the navigation path. Thirdly, a series of navigation points is determined iteratively on the principle of the nearest distance between two candidate points in neighboring splines. Finally, the navigation path equation is fitted to the navigation points using the least squares method. Experiments show that this method is accurate and effective, and that it is suitable for visual navigation in the complex environment of a cotton field in different growth phases.
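The final step of the pipeline above, fitting the path equation through the selected center points by least squares, can be sketched in a few lines. The row/column values are assumed toy data standing in for the per-spline-area center points:

```python
import numpy as np

# One center point per horizontal spline area: (image row, furrow center column).
rows = np.array([10, 20, 30, 40, 50, 60], dtype=float)
cols = np.array([101, 99, 102, 98, 100, 101], dtype=float)

# Model the path as col = a*row + b; np.polyfit solves the least-squares problem.
a, b = np.polyfit(rows, cols, deg=1)
path = a * rows + b
print(f"path: col = {a:.4f}*row + {b:.1f}")
```

A nearly vertical furrow gives a small slope a, as here; a curved path would call for a higher polynomial degree in the same fit.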
Non-Stationary Hydrologic Frequency Analysis using B-Splines Quantile Regression
Nasri, B.; St-Hilaire, A.; Bouezmarni, T.; Ouarda, T.
2015-12-01
Hydrologic frequency analysis is commonly used by engineers and hydrologists to provide the basic information for planning, design and management of hydraulic structures and water resources systems under the assumption of stationarity. However, with increasing evidence of a changing climate, it is possible that the assumption of stationarity is no longer valid and the results of conventional analysis become questionable. In this study, we consider a framework for frequency analysis of extreme flows based on B-splines quantile regression, which makes it possible to model non-stationary data that depend on covariates. Such covariates may enter linearly or nonlinearly. A Markov chain Monte Carlo (MCMC) algorithm is used to estimate quantiles and their posterior distributions. A coefficient of determination for quantile regression is proposed to evaluate the estimated model at each quantile level. The method is applied to annual maximum and minimum streamflow records in Ontario, Canada. Climate indices are considered to describe the non-stationarity in these variables and to estimate the quantiles. The results show large differences between the non-stationary quantiles and their stationary equivalents for annual maximum and minimum discharge at high annual non-exceedance probabilities. Keywords: Quantile regression, B-splines functions, MCMC, Streamflow, Climate indices, non-stationarity.
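The B-spline quantile regression idea can be sketched without the Bayesian machinery: represent the conditional quantile as a B-spline in the covariate and minimize the pinball (check) loss. The data are assumed toy values, and a generic optimizer stands in for the paper's MCMC:

```python
import numpy as np
from scipy.interpolate import BSpline
from scipy.optimize import minimize

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 1, 300))
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, x.size)

degree, tau = 3, 0.9                      # cubic B-splines, 90th percentile
knots = np.concatenate([[0.0] * degree, np.linspace(0, 1, 8), [1.0] * degree])
n_coef = len(knots) - degree - 1
design = BSpline.design_matrix(x, knots, degree).toarray()

def pinball(coef):
    """Check loss: residuals above the curve cost tau, below cost (1 - tau)."""
    r = y - design @ coef
    return np.sum(np.maximum(tau * r, (tau - 1) * r))

coef = minimize(pinball, np.zeros(n_coef), method="Powell").x
q90 = design @ coef
frac_below = float(np.mean(y <= q90))
print(f"fraction of points below fitted 0.9-quantile curve: {frac_below:.2f}")
```

Replacing the fixed tau by several levels, and the optimizer by a posterior sampler over coef, recovers the structure of the method described above.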
DBSR_HF: A B-spline Dirac-Hartree-Fock program
Zatsarinny, Oleg; Froese Fischer, Charlotte
2016-05-01
A B-spline version of a general Dirac-Hartree-Fock program is described. The usual differential equations are replaced by a set of generalized eigenvalue problems of the form (Ha -εa B) Pa = 0, where Ha and B are the Hamiltonian and overlap matrices, respectively, and Pa is the two-component relativistic orbit in the B-spline basis. A default universal grid allows for flexible adjustment to different nuclear models. When two orthogonal orbitals are both varied, the energy must also be stationary with respect to orthonormal transformations. At such a stationary point the off-diagonal Lagrange multipliers may be eliminated through projection operators. The self-consistent field procedure exhibits excellent convergence. Several atomic states can be considered simultaneously, including some configuration-interaction calculations. The program provides several options for the treatment of Breit interaction and QED corrections. The information about atoms up to Z = 104 is stored by the program. Along with a simple interface through command-line arguments, this information allows the user to run the program with minimal initial preparations.
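The core numerical step described above, the generalized eigenvalue problem (H_a - e_a B) P_a = 0, can be sketched with small random stand-ins for the Hamiltonian and B-spline overlap matrices (in DBSR_HF both are symmetric banded matrices; here they are just dense symmetric/SPD):

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
n = 8
H = rng.normal(size=(n, n)); H = (H + H.T) / 2             # symmetric "Hamiltonian"
L = rng.normal(size=(n, n)); B = L @ L.T + n * np.eye(n)   # SPD "overlap" matrix

eps, P = eigh(H, B)            # solves H @ P = B @ P @ diag(eps)
residual = H @ P[:, 0] - eps[0] * (B @ P[:, 0])
print(f"lowest eigenvalue {eps[0]:.4f}, residual norm {np.linalg.norm(residual):.2e}")
```

Because B is the overlap matrix of a non-orthogonal B-spline basis, the generalized (rather than standard) form is unavoidable, which is why the spline version replaces the usual differential equations.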
A spectral/B-spline method for the Navier-Stokes equations in unbounded domains
International Nuclear Information System (INIS)
The numerical method presented in this paper aims at solving the incompressible Navier-Stokes equations in unbounded domains. The problem is formulated in cylindrical coordinates and the method is based on a Galerkin approximation scheme that makes use of vector expansions that exactly satisfy the continuity constraint. More specifically, the divergence-free basis vector functions are constructed with Fourier expansions in the θ and z directions while mapped B-splines are used in the semi-infinite radial direction. Special care has been taken to account for the particular analytical behaviors at both end points r=0 and r→∞. A modal reduction algorithm has also been implemented in the azimuthal direction, allowing for a relaxation of the CFL constraint on the timestep size and a possibly significant reduction of the number of DOF. The time marching is carried out using a mixed quasi-third order scheme. Besides the advantages of a divergence-free formulation and a quasi-spectral convergence, the local character of the B-splines allows for a great flexibility in node positioning while keeping narrow bandwidth matrices. Numerical tests show that the present method compares advantageously with other similar methodologies using purely global expansions
RGB color calibration for quantitative image analysis: the "3D thin-plate spline" warping approach.
Menesatti, Paolo; Angelini, Claudio; Pallottino, Federico; Antonucci, Francesca; Aguzzi, Jacopo; Costa, Corrado
2012-01-01
In recent years the need to define color numerically by its coordinates in n-dimensional space has increased strongly. Colorimetric calibration is fundamental in food processing and other biological disciplines to quantitatively compare samples' color during workflows involving many devices. Several software programmes are available to perform standardized colorimetric procedures, but they are often too imprecise for scientific purposes. In this study, we applied the Thin-Plate Spline interpolation algorithm to calibrate colours in sRGB space (the corresponding Matlab code is reported in the Appendix). This was compared with two other approaches: the first based on a commercial calibration system (ProfileMaker) and the second on a Partial Least Squares analysis. Moreover, to explore device variability and resolution, two different cameras were adopted, and for each sensor three consecutive pictures were acquired under four different light conditions. According to our results, the Thin-Plate Spline approach achieved very high calibration accuracy, opening the way to routine in-field colour quantification not only in food science but also in other biological disciplines. These results are of great importance for scientific color evaluation when lighting conditions are not controlled. Moreover, the approach allows the use of low-cost instruments while still returning scientifically sound quantitative data. PMID:22969337
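The thin-plate-spline warp described above maps measured device RGB onto reference RGB from calibration-chart patches. A minimal sketch (the patch values are assumed toy data, and SciPy's thin-plate-spline RBF interpolator stands in for the paper's Matlab implementation):

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Reference chart patches (true sRGB) and simulated camera measurements of them.
reference = np.array([[0, 0, 0], [255, 0, 0], [0, 255, 0], [0, 0, 255],
                      [255, 255, 0], [255, 0, 255], [0, 255, 255], [255, 255, 255],
                      [128, 128, 128], [64, 192, 32]], dtype=float)
measured = reference * 0.9 + 10 + np.random.default_rng(0).normal(0, 2, reference.shape)

# One 3-D -> 3-D warp: thin-plate-spline RBF interpolation, fit per output channel.
warp = RBFInterpolator(measured, reference, kernel="thin_plate_spline")
corrected = warp(measured)
err = float(np.abs(corrected - reference).mean())
print(f"mean abs calibration error on chart patches: {err:.2e}")
```

With the default zero smoothing the warp interpolates the chart patches exactly; new pixels are corrected by evaluating `warp` on their measured RGB values.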
Marghany, Maged
2014-06-01
A critical challenge in urban areas is slums. They are considered a source of crime and disease due to poor-quality housing, unsanitary conditions, poor infrastructure and insecure occupancy. The poor in dense urban slums are the most vulnerable to infection due to (i) inadequate and restricted access to safe drinking water and sufficient quantities of water for personal hygiene; (ii) the lack of removal and treatment of excreta; and (iii) the lack of removal of solid waste. This study investigates the capability of ENVISAT ASAR satellite and Google Earth data for three-dimensional (3-D) reconstruction of urban slums in developing countries such as Egypt. The main objective of this work is to apply a 3-D automatic detection algorithm to urban slums in ENVISAT ASAR and Google Earth images acquired in Cairo, Egypt, using a Fuzzy B-spline algorithm. The results show that the fuzzy algorithm is the best indicator for chaotic urban slums, as it can discriminate them from the surrounding environment. The combination of Fuzzy and B-spline was then used to reconstruct urban slums in 3-D. The results show that urban slums, road networks, and infrastructure are clearly discriminated. It can therefore be concluded that the fuzzy algorithm is an appropriate algorithm for automatic detection of chaotic urban slums in ENVISAT ASAR and Google Earth data.
B-splines as a Tool to Solve Constraints in Non-Hydrostatic Forecast Model
Subias, Alvaro
2016-01-01
Finite elements have proven to be a useful tool to discretize the vertical coordinate in hydrostatic forecast models, allowing model variables to be defined on full levels so that no staggering is needed. In the non-hydrostatic case a constraint on the vertical operators appears (called C1) that does not allow the set of semi-implicit linear equations to be reduced to a single equation in one variable, as in the analytic case. Recently, vertical finite elements based on B-splines have been used with an iterative method to relax the C1 constraint. In this paper we develop representations of vertical operators in terms of B-splines that keep the C1 constraint. An invertibility relation between the integral and derivative operators relating vertical velocity and vertical divergence is also presented. The final scope of this paper is to provide a theoretical framework for the development of finite element vertical operators to be implemented in the nh-Harmonie model.
7 CFR 51.1159 - Smooth texture.
2010-01-01
7 Agriculture 2 2010-01-01 false Smooth texture. 51.1159 Section 51.1159 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing... Standards for Grades of Florida Oranges and Tangelos Definitions § 51.1159 Smooth texture. Smooth...
7 CFR 51.768 - Smooth texture.
2010-01-01
7 Agriculture 2 2010-01-01 false Smooth texture. 51.768 Section 51.768 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing... Standards for Grades of Florida Grapefruit Definitions § 51.768 Smooth texture. Smooth texture means...
A SAS IML Macro for Loglinear Smoothing
Moses, Tim; von Davier, Alina
2011-01-01
Polynomial loglinear models for one-, two-, and higher-way contingency tables have important applications to measurement and assessment. They are essentially regarded as a smoothing technique, which is commonly referred to as loglinear smoothing. A SAS IML (SAS Institute, 2002a) macro was created to implement loglinear smoothing according to…
Calcium dynamics in vascular smooth muscle
Amberg, Gregory C.; Navedo, Manuel F.
2013-01-01
Smooth muscle cells are ultimately responsible for determining vascular luminal diameter and blood flow. Dynamic changes in intracellular calcium are a critical mechanism regulating vascular smooth muscle contractility. Processes influencing intracellular calcium are therefore important regulators of vascular function with physiological and pathophysiological consequences. In this review we discuss the major dynamic calcium signals identified and characterized in vascular smooth muscle cells....
2nd Bayesian Young Statisticians Meeting
Bitto, Angela; Kastner, Gregor; Posekany, Alexandra
2015-01-01
The Second Bayesian Young Statisticians Meeting (BAYSM 2014) and the research presented here facilitate connections among researchers using Bayesian Statistics by providing a forum for the development and exchange of ideas. WU Vienna University of Business and Economics hosted BAYSM 2014 from September 18th to 19th. The guidance of renowned plenary lecturers and senior discussants is a critical part of the meeting and this volume, which follows publication of contributions from BAYSM 2013. The meeting's scientific program reflected the variety of fields in which Bayesian methods are currently employed or could be introduced in the future. Three brilliant keynote lectures by Chris Holmes (University of Oxford), Christian Robert (Université Paris-Dauphine), and Mike West (Duke University), were complemented by 24 plenary talks covering the major topics Dynamic Models, Applications, Bayesian Nonparametrics, Biostatistics, Bayesian Methods in Economics, and Models and Methods, as well as a lively poster session ...
Trivariate Local Lagrange Interpolation and Macro Elements of Arbitrary Smoothness
Matt, Michael Andreas
2012-01-01
Michael A. Matt constructs two trivariate local Lagrange interpolation methods which yield optimal approximation order and Cr macro-elements based on the Alfeld and the Worsey-Farin split of a tetrahedral partition. The first interpolation method is based on cubic C1 splines over type-4 cube partitions, for which numerical tests are given. The second is the first trivariate Lagrange interpolation method using C2 splines. It is based on arbitrary tetrahedral partitions using splines of degree nine. The author constructs trivariate macro-elements based on the Alfeld split, where each tetrahedron
BAYESIAN BICLUSTERING FOR PATIENT STRATIFICATION.
Khakabimamaghani, Sahand; Ester, Martin
2016-01-01
The move from Empirical Medicine towards Personalized Medicine has attracted attention to Stratified Medicine (SM). Some methods are provided in the literature for patient stratification, which is the central task of SM, however, there are still significant open issues. First, it is still unclear if integrating different datatypes will help in detecting disease subtypes more accurately, and, if not, which datatype(s) are most useful for this task. Second, it is not clear how we can compare different methods of patient stratification. Third, as most of the proposed stratification methods are deterministic, there is a need for investigating the potential benefits of applying probabilistic methods. To address these issues, we introduce a novel integrative Bayesian biclustering method, called B2PS, for patient stratification and propose methods for evaluating the results. Our experimental results demonstrate the superiority of B2PS over a popular state-of-the-art method and the benefits of Bayesian approaches. Our results agree with the intuition that transcriptomic data forms a better basis for patient stratification than genomic data. PMID:26776199
Mortezapouraghdam, Zeinab; Wilson, Robert C; Schwabe, Lars; Strauss, Daniel J
2016-01-01
We study the effect of long-term habituation signatures of auditory selective attention reflected in the instantaneous phase information of the auditory event-related potentials (ERPs) at four distinct stimuli levels of 60, 70, 80, and 90 dB SPL. The analysis is based on the single-trial level. The effect of habituation can be observed in terms of the changes (jitter) in the instantaneous phase information of ERPs. In particular, the absence of habituation is correlated with a consistently high phase synchronization over ERP trials. We estimate the changes in phase concentration over trials using a Bayesian approach, in which the phase is modeled as being drawn from a von Mises distribution with a concentration parameter which varies smoothly over trials. The smoothness assumption reflects the fact that habituation is a gradual process. We differentiate between different stimuli based on the relative changes and absolute values of the estimated concentration parameter using the proposed Bayesian model. PMID:26858631
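The von Mises concentration parameter at the heart of the model above measures phase-locking across trials. A minimal non-Bayesian sketch, estimating the concentration kappa per block of trials from the mean resultant length R via the standard one-line approximation kappa ≈ R(2 - R^2)/(1 - R^2) (toy phases assumed; the cited work instead smooths kappa across trials with a Bayesian model):

```python
import math, random

def estimate_kappa(phases):
    """Moment estimator of the von Mises concentration from a list of phases (radians)."""
    C = sum(math.cos(p) for p in phases) / len(phases)
    S = sum(math.sin(p) for p in phases) / len(phases)
    R = math.hypot(C, S)                   # mean resultant length in [0, 1]
    return R * (2 - R ** 2) / (1 - R ** 2)

rng = random.Random(0)
tight = [rng.gauss(0.0, 0.2) for _ in range(500)]              # strongly phase-locked trials
loose = [rng.uniform(-math.pi, math.pi) for _ in range(500)]   # habituated: near-uniform phase
print(f"kappa (phase-locked) ~ {estimate_kappa(tight):.1f}, kappa (habituated) ~ {estimate_kappa(loose):.2f}")
```

High kappa corresponds to consistent phase over trials (no habituation); kappa falling toward zero over trials is the gradual habituation signature the smoothness prior is designed to track.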
Comparison of some nonlinear smoothing methods
International Nuclear Information System (INIS)
Due to the poor quality of many nuclear medicine images, computer-driven smoothing procedures are frequently employed to enhance the diagnostic utility of these images. While linear methods were first tried, it was discovered that nonlinear techniques produced superior smoothing with little detail suppression. We have compared four methods: Gaussian smoothing (linear), two-dimensional least-squares smoothing (linear), two-dimensional least-squares bounding (nonlinear), and two-dimensional median smoothing (nonlinear). The two dimensional least-squares procedures have yielded the most satisfactorily enhanced images, with the median smoothers providing quite good images, even in the presence of widely aberrant points
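The linear-versus-nonlinear contrast described above can be seen in one dimension with a step edge plus a single widely aberrant point (a toy signal, not nuclear-medicine data):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d, median_filter

signal = np.concatenate([np.zeros(20), np.ones(20)])
signal[10] = 8.0                                  # widely aberrant point

gauss = gaussian_filter1d(signal, sigma=2)        # linear: smears the outlier into neighbors
median = median_filter(signal, size=5)            # nonlinear: rejects the outlier outright

print(f"at the outlier: gaussian={gauss[10]:.2f}, median={median[10]:.2f}")
```

The median filter removes the aberrant point entirely while leaving the step edge sharp, which is the "superior smoothing with little detail suppression" behavior noted for the nonlinear methods.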
Income and Consumption Smoothing among US States
DEFF Research Database (Denmark)
Sørensen, Bent; Yosha, Oved
We quantify the amount of cross-sectional income and consumption smoothing achieved within subgroups of states, such as regions or clubs, e.g. the club of rich states. We find that there is much income smoothing between as well as within regions. By contrast, consumption smoothing occurs mainly within US regions. Since a considerable fraction of shocks to gross state product are smoothed within regions, we conclude that existing markets achieve a substantial fraction of the potential welfare gains from interstate income and consumption smoothing. Nonetheless, non-negligible welfare gains may be ...
Smooth Adaptation by Sigmoid Shrinkage
Directory of Open Access Journals (Sweden)
Atto, Abdourrahmane M.
2009-01-01
Full Text Available This paper addresses the properties of a subclass of sigmoid-based shrinkage functions: the non zero-forcing smooth sigmoid-based shrinkage functions, or SigShrink functions. It provides a SURE optimization for the parameters of the SigShrink functions. The optimization is performed on an unbiased estimate of the risk obtained by using the functions of this subclass. The SURE SigShrink performance measurements are compared to those of the SURELET (SURE linear expansion of thresholds) parameterization. It is shown that SURE SigShrink performs well in comparison to the SURELET parameterization. The relevance of SigShrink lies in the physical meaning and the flexibility of its parameters. The SigShrink functions perform weak attenuation of data with large amplitudes and stronger attenuation of data with small amplitudes, the shrinkage process introducing little variability among data with close amplitudes. In the wavelet domain, SigShrink is particularly suitable for reducing noise without significantly impacting the signal to be recovered. A remarkable property of this class of sigmoid-based functions is the invertibility of its elements, which makes it possible to smoothly tune contrast (enhancement, reduction).
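The qualitative behavior described above can be sketched with a sigmoid-based shrinkage rule in the spirit of SigShrink (the exact parameterization here is an assumption, not the paper's): coefficients far above the threshold pass almost unchanged, small ones are strongly attenuated, and no coefficient is mapped exactly to zero (non zero-forcing):

```python
import math

def sigshrink(x, threshold=1.0, slope=5.0):
    """Smooth sigmoid shrinkage: x scaled by a sigmoid of (|x| - threshold)."""
    return x / (1.0 + math.exp(-slope * (abs(x) - threshold)))

for coeff in (0.1, 0.5, 1.0, 3.0):
    print(f"{coeff:4.1f} -> {sigshrink(coeff):.4f}")
```

Unlike hard or soft thresholding, the map is smooth everywhere, which is what allows the gradual, low-variability attenuation of coefficients with close amplitudes.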
Cheng, J; 10.1613/jair.764
2011-01-01
Stochastic sampling algorithms, while an attractive alternative to exact algorithms in very large Bayesian network models, have been observed to perform poorly in evidential reasoning with extremely unlikely evidence. To address this problem, we propose an adaptive importance sampling algorithm, AIS-BN, that shows promising convergence rates even under extreme conditions and seems to outperform the existing sampling algorithms consistently. Three sources of this performance improvement are (1) two heuristics for initialization of the importance function that are based on the theoretical properties of importance sampling in finite-dimensional integrals and the structural advantages of Bayesian networks, (2) a smooth learning method for the importance function, and (3) a dynamic weighting function for combining samples from different stages of the algorithm. We tested the performance of the AIS-BN algorithm along with two state of the art general purpose sampling algorithms, likelihood weighting (Fung and Chang...
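The adaptive importance sampling idea above can be sketched on a deliberately tiny network, a single cause A with evidence B = 1 (this is an illustrative two-node example, not one of the AIS-BN benchmark networks): sample A from an importance function, weight by prior and evidence likelihood, then learn a better importance function from the weighted samples, loosely mirroring how AIS-BN updates its importance conditional probability tables:

```python
import random

P_A = 0.01                       # prior P(A=1): the "unlikely evidence" regime
P_B_given_A = {1: 0.9, 0: 0.05}  # P(B=1 | A)
exact = P_A * 0.9 / (P_A * 0.9 + (1 - P_A) * 0.05)   # exact posterior P(A=1 | B=1)

rng = random.Random(0)
q = 0.5                          # initial importance probability for A=1
for stage in range(3):           # a few adaptation stages
    wsum = {0: 0.0, 1: 0.0}
    for _ in range(20000):
        a = 1 if rng.random() < q else 0
        prior = P_A if a == 1 else 1 - P_A
        proposal = q if a == 1 else 1 - q
        wsum[a] += (prior / proposal) * P_B_given_A[a]   # importance weight * likelihood
    estimate = wsum[1] / (wsum[0] + wsum[1])             # self-normalized estimate
    q = estimate                 # adapt the importance function toward the posterior

print(f"exact {exact:.4f}  adaptive-IS estimate {estimate:.4f}")
```

After adaptation the importance function approaches the posterior itself, the optimal proposal, which is the mechanism behind the convergence improvement reported for AIS-BN.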
Nieto, Paulino José García; Antón, Juan Carlos Álvarez; Vilán, José Antonio Vilán; García-Gonzalo, Esperanza
2014-10-01
The aim of this research work is to build a regression model of particulate matter up to 10 micrometers in size (PM10) by using the multivariate adaptive regression splines (MARS) technique in the Oviedo urban area (Northern Spain) at local scale. This nonparametric regression algorithm has the ability to approximate the relationship between the inputs and outputs and to express that relationship mathematically. In this sense, hazardous air pollutants or toxic air contaminants refer to any substance that may cause or contribute to an increase in mortality or serious illness, or that may pose a present or potential hazard to human health. To accomplish the objective of this study, experimental datasets of nitrogen oxides (NOx), carbon monoxide (CO), sulfur dioxide (SO2), ozone (O3) and dust (PM10) were collected over 3 years (2006-2008) and used to create a highly nonlinear model of the PM10 in the Oviedo urban nucleus (Northern Spain) based on the MARS technique. One main objective of this model is to obtain a preliminary estimate of the dependence between the PM10 pollutant and the other pollutants in the Oviedo urban area at local scale. A second aim is to determine the factors with the greatest bearing on air quality with a view to proposing health and lifestyle improvements. The United States National Ambient Air Quality Standards (NAAQS) establish the limit values of the main pollutants in the atmosphere in order to protect human health. Firstly, this MARS regression model follows the main ideas of statistical learning theory in order to obtain a good prediction of the dependence among the main pollutants in the Oviedo urban area. Secondly, the main advantages of MARS are its capacity to produce simple, easy-to-interpret models, its ability to estimate the contributions of the input variables, and its computational efficiency. Finally, on the basis of
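MARS builds its models from hinge (truncated linear) basis functions of the form max(0, ±(x − knot)), with knots chosen adaptively. The toy sketch below (hypothetical, not the paper's PM10 model) shows how a mirrored pair of hinges at a single knot captures a kink that no single straight line can:

```python
import numpy as np

def hinge(x, knot, sign=1):
    """MARS hinge basis function: max(0, sign * (x - knot))."""
    return np.maximum(0.0, sign * (x - knot))

# Toy target with a kink at 0: y = |x|.
x = np.linspace(-2.0, 2.0, 41)
y = np.abs(x)

# Design matrix: intercept plus a mirrored hinge pair at knot 0.
B = np.column_stack([np.ones_like(x), hinge(x, 0.0, 1), hinge(x, 0.0, -1)])
coef, *_ = np.linalg.lstsq(B, y, rcond=None)
# coef is [0, 1, 1]: max(0, x) + max(0, -x) reproduces |x| exactly.
```

A full MARS fit additionally performs a forward search over candidate knots and variables, allows products of hinges for interactions, and prunes terms by generalized cross-validation; only the basis construction is shown here.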
Bayesian networks in educational assessment
Almond, Russell G; Steinberg, Linda S; Yan, Duanli; Williamson, David M
2015-01-01
Bayesian inference networks, a synthesis of statistics and expert systems, have advanced reasoning under uncertainty in medicine, business, and social sciences. This innovative volume is the first comprehensive treatment exploring how they can be applied to design and analyze educational assessments. Part I develops Bayes nets’ foundations in assessment, statistics, and graph theory, and works through the real-time updating algorithm. Part II addresses parametric forms for use with assessment, model-checking techniques, and estimation with the EM algorithm and Markov chain Monte Carlo (MCMC). A unique feature is the volume’s grounding in the Evidence-Centered Design (ECD) framework for assessment design. This “design forward” approach enables designers to take full advantage of Bayes nets’ modularity and ability to model complex evidentiary relationships that arise from performance in interactive, technology-rich assessments such as simulations. Part III describes ECD, situates Bayes nets as ...
Quantum Bayesianism at the Perimeter
Fuchs, Christopher A
2010-01-01
The author summarizes the Quantum Bayesian viewpoint of quantum mechanics, developed originally by C. M. Caves, R. Schack, and himself. It is a view crucially dependent upon the tools of quantum information theory. Work at the Perimeter Institute for Theoretical Physics continues the development and is focused on the hard technical problem of finding a good representation of quantum mechanics purely in terms of probabilities, without amplitudes or Hilbert-space operators. The best candidate representation involves a mysterious entity called a symmetric informationally complete quantum measurement. Contemplation of it gives a way of thinking of the Born Rule as an addition to the rules of probability theory, applicable when one gambles on the consequences of interactions with physical systems. The article ends by outlining some directions for future work.
Hedging Strategies for Bayesian Optimization
Brochu, Eric; de Freitas, Nando
2010-01-01
Bayesian optimization with Gaussian processes has become an increasingly popular tool in the machine learning community. It is efficient and can be used when very little is known about the objective function, making it popular in expensive black-box optimization scenarios. It is able to do this by sampling the objective using an acquisition function which incorporates the model's estimate of the objective and the uncertainty at any given point. However, there are several different parameterized acquisition functions in the literature, and it is often unclear which one to use. Instead of using a single acquisition function, we adopt a portfolio of acquisition functions governed by an online multi-armed bandit strategy. We describe the method, which we call GP-Hedge, and show that this method almost always outperforms the best individual acquisition function.
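The portfolio idea behind GP-Hedge can be sketched with the classic Hedge (exponential-weights) update over the candidate acquisition functions. The snippet below is a stand-in: the fixed reward vector replaces the GP-posterior-based gains that GP-Hedge actually computes for each acquisition function's proposed query point:

```python
import math
import random

def hedge_choose(weights):
    """Sample an arm index with probability proportional to its weight."""
    total = sum(weights)
    r, acc = random.random() * total, 0.0
    for i, w in enumerate(weights):
        acc += w
        if r <= acc:
            return i
    return len(weights) - 1

def hedge_update(weights, rewards, eta=0.1):
    """Exponentially reweight every arm by its observed reward."""
    return [w * math.exp(eta * r) for w, r in zip(weights, rewards)]

random.seed(0)
# Three hypothetical acquisition functions (e.g. PI, EI, UCB); here
# arm 2 is made consistently better by the stand-in rewards.
weights = [1.0, 1.0, 1.0]
for _ in range(50):
    chosen = hedge_choose(weights)      # the acquisition actually queried
    rewards = [0.1, 0.2, 0.9]           # stand-in for GP-based gains
    weights = hedge_update(weights, rewards)
probs = [w / sum(weights) for w in weights]
# probs concentrates on arm 2 as evidence accumulates
```

The bandit layer thus learns online which acquisition function to trust, rather than committing to one in advance.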
Nonparametric Bayesian inference in biostatistics
Müller, Peter
2015-01-01
Nonparametric Bayesian approaches (BNP) play an ever-expanding role in biostatistical inference, from use in proteomics to clinical trials. As chapters in this book demonstrate, BNP has important uses in clinical sciences and in inference for issues like unknown partitions in genomics. Many research problems involve an abundance of data and require flexible and complex probability models beyond the traditional parametric approaches. As this book's expert contributors show, BNP approaches can be the answer. Survival analysis, in particular survival regression, has traditionally used BNP, but BNP's potential is now very broad. This applies to important tasks like the arrangement of patients into clinically meaningful subpopulations and the segmentation of the genome into functionally distinct regions. This book is designed both to review and to introduce application areas for BNP. While existing books provide theoretical foundations, this book connects theory to practice through engaging examples and research questions. Chapters c...
On Bayesian System Reliability Analysis
International Nuclear Information System (INIS)
The view taken in this thesis is that reliability, the probability that a system will perform a required function for a stated period of time, depends on a person's state of knowledge. Reliability changes as this state of knowledge changes, i.e. when new relevant information becomes available. Most existing models for system reliability prediction are developed in a classical framework of probability theory and they overlook some information that is always present. Probability is just an analytical tool to handle uncertainty, based on judgement and subjective opinions. It is argued that the Bayesian approach gives a much more comprehensive understanding of the foundations of probability than the so-called frequentist school. A new model for system reliability prediction is given in two papers. The model incorporates the fact that component failures are dependent because of a shared operational environment. The suggested model also naturally permits learning from failure data of similar components in non-identical environments. 85 refs
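How a reliability estimate changes with a person's state of knowledge can be illustrated with a conjugate Beta-Binomial sketch (illustrative only, not the thesis's dependent-failure model):

```python
# Prior Beta(a, b) over a component's per-demand failure probability p;
# observing k failures in n demands gives the posterior Beta(a+k, b+n-k).
a, b = 1.0, 9.0            # prior belief: failures are fairly rare
k, n = 2, 50               # new field data: 2 failures in 50 demands
a_post, b_post = a + k, b + (n - k)
mean_post = a_post / (a_post + b_post)
# posterior mean failure probability = 3 / 60 = 0.05
```

In this framing, failure data from similar components operating in other environments can enter through the choice of the prior hyperparameters, echoing the learning-across-environments feature the abstract describes.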
Elvira, Clément; Dobigeon, Nicolas
2015-01-01
Sparse representations have proven their efficiency in solving a wide class of inverse problems encountered in signal and image processing. Conversely, enforcing the information to be spread uniformly over representation coefficients exhibits relevant properties in various applications such as digital communications. Anti-sparse regularization can be naturally expressed through an $\ell_{\infty}$-norm penalty. This paper derives a probabilistic formulation of such representations. A new probability distribution, referred to as the democratic prior, is first introduced. Its main properties as well as three random variate generators for this distribution are derived. Then this probability distribution is used as a prior to promote anti-sparsity in a Gaussian linear inverse problem, yielding a fully Bayesian formulation of anti-sparse coding. Two Markov chain Monte Carlo (MCMC) algorithms are proposed to generate samples according to the posterior distribution. The first one is a standard Gibbs sampler. The seco...
State Information in Bayesian Games
Cuff, Paul
2009-01-01
Two-player zero-sum repeated games are well understood. Computing the value of such a game is straightforward. Additionally, if the payoffs are dependent on a random state of the game known to one, both, or neither of the players, the resulting value of the game has been analyzed under the framework of Bayesian games. This investigation considers the optimal performance in a game when a helper is transmitting state information to one of the players. Encoding information for an adversarial setting (game) requires a different result than rate-distortion theory provides. Game theory has accentuated the importance of randomization (mixed strategy), which does not find a significant role in most communication modems and source coding codecs. Higher rates of communication, used in the right way, allow the message to include the necessary random component useful in games.
International Nuclear Information System (INIS)
We report a fully ab initio implementation of exterior complex scaling in B-splines to evaluate total, singly and triply differential cross sections in double photoionization problems. Results for He and H2 double photoionization are presented and compared with experiment
International Nuclear Information System (INIS)
For elastic scattering and electron impact excitation of the 4p5s states in Kr, we present independently normalized, absolute angle-differential cross sections over the entire angular range (0°–180°). Excellent agreement is obtained between the present experimental data and theoretical predictions from a fully relativistic B-spline R-matrix (close-coupling) model.
Genetic parameters were estimated with REML for individual test-day milk, fat, and protein yields and SCS with a random regression cubic spline model. Test-day records of Holstein cows that calved from 1994 through early 1999 were obtained from Dairy Records Management Systems in Raleigh, North Car...
Cooperative extensions of the Bayesian game
Ichiishi, Tatsuro
2006-01-01
This is the first comprehensive monograph in a burgeoning new research area: the theory of cooperative games with incomplete information, with emphasis on the solution concept of Bayesian incentive-compatible strong equilibrium, which encompasses the concept of the Bayesian incentive-compatible core. Built upon the concepts and techniques of classical static cooperative game theory and of non-cooperative Bayesian game theory, the theory constructs and analyzes in part the powerful n-person game-theoretical model characterized by coordinated strategy choice with individualistic ince